Informed Source Separation: A Bayesian Tutorial
NASA Technical Reports Server (NTRS)
Knuth, Kevin H.
2005-01-01
Source separation problems are ubiquitous in the physical sciences; any situation where signals are superimposed calls for source separation to estimate the original signals. In h s tutorial I will discuss the Bayesian approach to the source separation problem. This approach has a specific advantage in that it requires the designer to explicitly describe the signal model in addition to any other information or assumptions that go into the problem description. This leads naturally to the idea of informed source separation, where the algorithm design incorporates relevant information about the specific problem. This approach promises to enable researchers to design their own high-quality algorithms that are specifically tailored to the problem at hand.
Single-channel mixed signal blind source separation algorithm based on multiple ICA processing
NASA Astrophysics Data System (ADS)
Cheng, Xiefeng; Li, Ji
2017-01-01
Take separating the fetal heart sound signal from the mixed signal that get from the electronic stethoscope as the research background, the paper puts forward a single-channel mixed signal blind source separation algorithm based on multiple ICA processing. Firstly, according to the empirical mode decomposition (EMD), the single-channel mixed signal get multiple orthogonal signal components which are processed by ICA. The multiple independent signal components are called independent sub component of the mixed signal. Then by combining with the multiple independent sub component into single-channel mixed signal, the single-channel signal is expanded to multipath signals, which turns the under-determined blind source separation problem into a well-posed blind source separation problem. Further, the estimate signal of source signal is get by doing the ICA processing. Finally, if the separation effect is not very ideal, combined with the last time's separation effect to the single-channel mixed signal, and keep doing the ICA processing for more times until the desired estimated signal of source signal is get. The simulation results show that the algorithm has good separation effect for the single-channel mixed physiological signals.
Constrained Null Space Component Analysis for Semiblind Source Separation Problem.
Hwang, Wen-Liang; Lu, Keng-Shih; Ho, Jinn
2018-02-01
The blind source separation (BSS) problem extracts unknown sources from observations of their unknown mixtures. A current trend in BSS is the semiblind approach, which incorporates prior information on sources or how the sources are mixed. The constrained independent component analysis (ICA) approach has been studied to impose constraints on the famous ICA framework. We introduced an alternative approach based on the null space component (NCA) framework and referred to the approach as the c-NCA approach. We also presented the c-NCA algorithm that uses signal-dependent semidefinite operators, which is a bilinear mapping, as signatures for operator design in the c-NCA approach. Theoretically, we showed that the source estimation of the c-NCA algorithm converges with a convergence rate dependent on the decay of the sequence, obtained by applying the estimated operators on corresponding sources. The c-NCA can be formulated as a deterministic constrained optimization method, and thus, it can take advantage of solvers developed in optimization society for solving the BSS problem. As examples, we demonstrated electroencephalogram interference rejection problems can be solved by the c-NCA with proximal splitting algorithms by incorporating a sparsity-enforcing separation model and considering the case when reference signals are available.
NASA Astrophysics Data System (ADS)
Lee, Dong-Sup; Cho, Dae-Seung; Kim, Kookhyun; Jeon, Jae-Jin; Jung, Woo-Jin; Kang, Myeng-Hwan; Kim, Jae-Ho
2015-01-01
Independent Component Analysis (ICA), one of the blind source separation methods, can be applied for extracting unknown source signals only from received signals. This is accomplished by finding statistical independence of signal mixtures and has been successfully applied to myriad fields such as medical science, image processing, and numerous others. Nevertheless, there are inherent problems that have been reported when using this technique: instability and invalid ordering of separated signals, particularly when using a conventional ICA technique in vibratory source signal identification of complex structures. In this study, a simple iterative algorithm of the conventional ICA has been proposed to mitigate these problems. The proposed method to extract more stable source signals having valid order includes an iterative and reordering process of extracted mixing matrix to reconstruct finally converged source signals, referring to the magnitudes of correlation coefficients between the intermediately separated signals and the signals measured on or nearby sources. In order to review the problems of the conventional ICA technique and to validate the proposed method, numerical analyses have been carried out for a virtual response model and a 30 m class submarine model. Moreover, in order to investigate applicability of the proposed method to real problem of complex structure, an experiment has been carried out for a scaled submarine mockup. The results show that the proposed method could resolve the inherent problems of a conventional ICA technique.
Independent component analysis algorithm FPGA design to perform real-time blind source separation
NASA Astrophysics Data System (ADS)
Meyer-Baese, Uwe; Odom, Crispin; Botella, Guillermo; Meyer-Baese, Anke
2015-05-01
The conditions that arise in the Cocktail Party Problem prevail across many fields creating a need for of Blind Source Separation. The need for BSS has become prevalent in several fields of work. These fields include array processing, communications, medical signal processing, and speech processing, wireless communication, audio, acoustics and biomedical engineering. The concept of the cocktail party problem and BSS led to the development of Independent Component Analysis (ICA) algorithms. ICA proves useful for applications needing real time signal processing. The goal of this research was to perform an extensive study on ability and efficiency of Independent Component Analysis algorithms to perform blind source separation on mixed signals in software and implementation in hardware with a Field Programmable Gate Array (FPGA). The Algebraic ICA (A-ICA), Fast ICA, and Equivariant Adaptive Separation via Independence (EASI) ICA were examined and compared. The best algorithm required the least complexity and fewest resources while effectively separating mixed sources. The best algorithm was the EASI algorithm. The EASI ICA was implemented on hardware with Field Programmable Gate Arrays (FPGA) to perform and analyze its performance in real time.
A Markov model for blind image separation by a mean-field EM algorithm.
Tonazzini, Anna; Bedini, Luigi; Salerno, Emanuele
2006-02-01
This paper deals with blind separation of images from noisy linear mixtures with unknown coefficients, formulated as a Bayesian estimation problem. This is a flexible framework, where any kind of prior knowledge about the source images and the mixing matrix can be accounted for. In particular, we describe local correlation within the individual images through the use of Markov random field (MRF) image models. These are naturally suited to express the joint pdf of the sources in a factorized form, so that the statistical independence requirements of most independent component analysis approaches to blind source separation are retained. Our model also includes edge variables to preserve intensity discontinuities. MRF models have been proved to be very efficient in many visual reconstruction problems, such as blind image restoration, and allow separation and edge detection to be performed simultaneously. We propose an expectation-maximization algorithm with the mean field approximation to derive a procedure for estimating the mixing matrix, the sources, and their edge maps. We tested this procedure on both synthetic and real images, in the fully blind case (i.e., no prior information on mixing is exploited) and found that a source model accounting for local autocorrelation is able to increase robustness against noise, even space variant. Furthermore, when the model closely fits the source characteristics, independence is no longer a strict requirement, and cross-correlated sources can be separated, as well.
Sukholthaman, Pitchayanin; Sharp, Alice
2016-06-01
Municipal solid waste has been considered as one of the most immediate and serious problems confronting urban government in most developing and transitional economies. Providing solid waste performance highly depends on the effectiveness of waste collection and transportation process. Generally, this process involves a large amount of expenditures and has very complex and dynamic operational problems. Source separation has a major impact on effectiveness of waste management system as it causes significant changes in quantity and quality of waste reaching final disposal. To evaluate the impact of effective source separation on waste collection and transportation, this study adopts a decision support tool to comprehend cause-and-effect interactions of different variables in waste management system. A system dynamics model that envisages the relationships of source separation and effectiveness of waste management in Bangkok, Thailand is presented. Influential factors that affect waste separation attitudes are addressed; and the result of change in perception on waste separation is explained. The impacts of different separation rates on effectiveness of provided collection service are compared in six scenarios. 'Scenario 5' gives the most promising opportunities as 40% of residents are willing to conduct organic and recyclable waste separation. The results show that better service of waste collection and transportation, less monthly expense, extended landfill life, and satisfactory efficiency of the provided service at 60.48% will be achieved at the end of the simulation period. Implications of how to get public involved and conducted source separation are proposed. Copyright © 2016 Elsevier Ltd. All rights reserved.
A blind source separation approach for humpback whale song separation.
Zhang, Zhenbin; White, Paul R
2017-04-01
Many marine mammal species are highly social and are frequently encountered in groups or aggregations. When conducting passive acoustic monitoring in such circumstances, recordings commonly contain vocalizations of multiple individuals which overlap in time and frequency. This paper considers the use of blind source separation as a method for processing these recordings to separate the calls of individuals. The example problem considered here is that of the songs of humpback whales. The high levels of noise and long impulse responses can make source separation in underwater contexts a challenging proposition. The approach present here is based on time-frequency masking, allied to a noise reduction process. The technique is assessed using simulated and measured data sets, and the results demonstrate the effectiveness of the method for separating humpback whale songs.
Zhou, Guoxu; Yang, Zuyuan; Xie, Shengli; Yang, Jun-Mei
2011-04-01
Online blind source separation (BSS) is proposed to overcome the high computational cost problem, which limits the practical applications of traditional batch BSS algorithms. However, the existing online BSS methods are mainly used to separate independent or uncorrelated sources. Recently, nonnegative matrix factorization (NMF) shows great potential to separate the correlative sources, where some constraints are often imposed to overcome the non-uniqueness of the factorization. In this paper, an incremental NMF with volume constraint is derived and utilized for solving online BSS. The volume constraint to the mixing matrix enhances the identifiability of the sources, while the incremental learning mode reduces the computational cost. The proposed method takes advantage of the natural gradient based multiplication updating rule, and it performs especially well in the recovery of dependent sources. Simulations in BSS for dual-energy X-ray images, online encrypted speech signals, and high correlative face images show the validity of the proposed method.
Blind source separation by sparse decomposition
NASA Astrophysics Data System (ADS)
Zibulevsky, Michael; Pearlmutter, Barak A.
2000-04-01
The blind source separation problem is to extract the underlying source signals from a set of their linear mixtures, where the mixing matrix is unknown. This situation is common, eg in acoustics, radio, and medical signal processing. We exploit the property of the sources to have a sparse representation in a corresponding signal dictionary. Such a dictionary may consist of wavelets, wavelet packets, etc., or be obtained by learning from a given family of signals. Starting from the maximum a posteriori framework, which is applicable to the case of more sources than mixtures, we derive a few other categories of objective functions, which provide faster and more robust computations, when there are an equal number of sources and mixtures. Our experiments with artificial signals and with musical sounds demonstrate significantly better separation than other known techniques.
An EEG blind source separation algorithm based on a weak exclusion principle.
Lan Ma; Blu, Thierry; Wang, William S-Y
2016-08-01
The question of how to separate individual brain and non-brain signals, mixed by volume conduction in electroencephalographic (EEG) and other electrophysiological recordings, is a significant problem in contemporary neuroscience. This study proposes and evaluates a novel EEG Blind Source Separation (BSS) algorithm based on a weak exclusion principle (WEP). The chief point in which it differs from most previous EEG BSS algorithms is that the proposed algorithm is not based upon the hypothesis that the sources are statistically independent. Our first step was to investigate algorithm performance on simulated signals which have ground truth. The purpose of this simulation is to illustrate the proposed algorithm's efficacy. The results show that the proposed algorithm has good separation performance. Then, we used the proposed algorithm to separate real EEG signals from a memory study using a revised version of Sternberg Task. The results show that the proposed algorithm can effectively separate the non-brain and brain sources.
Time-dependent wave splitting and source separation
NASA Astrophysics Data System (ADS)
Grote, Marcus J.; Kray, Marie; Nataf, Frédéric; Assous, Franck
2017-02-01
Starting from classical absorbing boundary conditions, we propose a method for the separation of time-dependent scattered wave fields due to multiple sources or obstacles. In contrast to previous techniques, our method is local in space and time, deterministic, and avoids a priori assumptions on the frequency spectrum of the signal. Numerical examples in two space dimensions illustrate the usefulness of wave splitting for time-dependent scattering problems.
NATIONAL MANAGEMENT MEASURES TO CONTROL NONPOINT SOURCE POLLUTION FROM HYDROMODIFICATION
Hydromodification What Are the Nonpoint Source-Related Problems Associated with Hydromodification? Hydromodification activities have been separated into the categories of channelization and channel modification, dams, and streambank and shoreline erosion. A frequent result of c...
NASA Astrophysics Data System (ADS)
Vesselinov, V. V.; Alexandrov, B.
2014-12-01
The identification of the physical sources causing spatial and temporal fluctuations of state variables such as river stage levels and aquifer hydraulic heads is challenging. The fluctuations can be caused by variations in natural and anthropogenic sources such as precipitation events, infiltration, groundwater pumping, barometric pressures, etc. The source identification and separation can be crucial for conceptualization of the hydrological conditions and characterization of system properties. If the original signals that cause the observed state-variable transients can be successfully "unmixed", decoupled physics models may then be applied to analyze the propagation of each signal independently. We propose a new model-free inverse analysis of transient data based on Non-negative Matrix Factorization (NMF) method for Blind Source Separation (BSS) coupled with k-means clustering algorithm, which we call NMFk. NMFk is capable of identifying a set of unique sources from a set of experimentally measured mixed signals, without any information about the sources, their transients, and the physical mechanisms and properties controlling the signal propagation through the system. A classical BSS conundrum is the so-called "cocktail-party" problem where several microphones are recording the sounds in a ballroom (music, conversations, noise, etc.). Each of the microphones is recording a mixture of the sounds. The goal of BSS is to "unmix'" and reconstruct the original sounds from the microphone records. Similarly to the "cocktail-party" problem, our model-freee analysis only requires information about state-variable transients at a number of observation points, m, where m > r, and r is the number of unknown unique sources causing the observed fluctuations. We apply the analysis on a dataset from the Los Alamos National Laboratory (LANL) site. We identify and estimate the impact and sources are barometric pressure and water-supply pumping effects. We also estimate the location of the water-supply pumping wells based on the available data. The possible applications of the NMFk algorithm are not limited to hydrology problems; NMFk can be applied to any problem where temporal system behavior is observed at multiple locations and an unknown number of physical sources are causing these fluctuations.
Variational Bayesian Learning for Wavelet Independent Component Analysis
NASA Astrophysics Data System (ADS)
Roussos, E.; Roberts, S.; Daubechies, I.
2005-11-01
In an exploratory approach to data analysis, it is often useful to consider the observations as generated from a set of latent generators or "sources" via a generally unknown mapping. For the noisy overcomplete case, where we have more sources than observations, the problem becomes extremely ill-posed. Solutions to such inverse problems can, in many cases, be achieved by incorporating prior knowledge about the problem, captured in the form of constraints. This setting is a natural candidate for the application of the Bayesian methodology, allowing us to incorporate "soft" constraints in a natural manner. The work described in this paper is mainly driven by problems in functional magnetic resonance imaging of the brain, for the neuro-scientific goal of extracting relevant "maps" from the data. This can be stated as a `blind' source separation problem. Recent experiments in the field of neuroscience show that these maps are sparse, in some appropriate sense. The separation problem can be solved by independent component analysis (ICA), viewed as a technique for seeking sparse components, assuming appropriate distributions for the sources. We derive a hybrid wavelet-ICA model, transforming the signals into a domain where the modeling assumption of sparsity of the coefficients with respect to a dictionary is natural. We follow a graphical modeling formalism, viewing ICA as a probabilistic generative model. We use hierarchical source and mixing models and apply Bayesian inference to the problem. This allows us to perform model selection in order to infer the complexity of the representation, as well as automatic denoising. Since exact inference and learning in such a model is intractable, we follow a variational Bayesian mean-field approach in the conjugate-exponential family of distributions, for efficient unsupervised learning in multi-dimensional settings. The performance of the proposed algorithm is demonstrated on some representative experiments.
The resolution of point sources of light as analyzed by quantum detection theory
NASA Technical Reports Server (NTRS)
Helstrom, C. W.
1972-01-01
The resolvability of point sources of incoherent light is analyzed by quantum detection theory in terms of two hypothesis-testing problems. In the first, the observer must decide whether there are two sources of equal radiant power at given locations, or whether there is only one source of twice the power located midway between them. In the second problem, either one, but not both, of two point sources is radiating, and the observer must decide which it is. The decisions are based on optimum processing of the electromagnetic field at the aperture of an optical instrument. In both problems the density operators of the field under the two hypotheses do not commute. The error probabilities, determined as functions of the separation of the points and the mean number of received photons, characterize the ultimate resolvability of the sources.
Kawai, Kosuke; Huong, Luong Thi Mai
2017-03-01
Proper management of food waste, a major component of municipal solid waste (MSW), is needed, especially in developing Asian countries where most MSW is disposed of in landfill sites without any pretreatment. Source separation can contribute to solving problems derived from the disposal of food waste. An organic waste source separation and collection programme has been operated in model areas in Hanoi, Vietnam, since 2007. This study proposed three key parameters (participation rate, proper separation rate and proper discharge rate) for behaviour related to source separation of household organic waste, and monitored the progress of the programme based on the physical composition of household waste sampled from 558 households in model programme areas of Hanoi. The results showed that 13.8% of 558 households separated organic waste, and 33.0% discharged mixed (unseparated) waste improperly. About 41.5% (by weight) of the waste collected as organic waste was contaminated by inorganic waste, and one-third of the waste disposed of as organic waste by separators was inorganic waste. We proposed six hypothetical future household behaviour scenarios to help local officials identify a final or midterm goal for the programme. We also suggested that the city government take further actions to increase the number of people participating in separating organic waste, improve the accuracy of separation and prevent non-separators from discharging mixed waste improperly.
Blind Source Separation of Seismic Events with Independent Component Analysis: CTBT related exercise
NASA Astrophysics Data System (ADS)
Rozhkov, Mikhail; Kitov, Ivan
2015-04-01
Blind Source Separation (BSS) methods used in signal recovery applications are attractive for they use minimal a priori information about the signals they are dealing with. Homomorphic deconvolution and cepstrum estimation are probably the only methods used in certain extent in CTBT applications that can be attributed to the given branch of technology. However Expert Technical Analysis (ETA) conducted in CTBTO to improve the estimated values for the standard signal and event parameters according to the Protocol to the CTBT may face problems which cannot be resolved with certified CTBTO applications and may demand specific techniques not presently used. The problem to be considered within the ETA framework is the unambiguous separation of signals with close arrival times. Here, we examine two scenarios of interest: (1) separation of two almost co-located explosions conducted within fractions of seconds, and (2) extraction of explosion signals merged with wavetrains from strong earthquake. The importance of resolving the problem related to case 1 is connected with the correct explosion yield estimation. Case 2 is a well-known scenario of conducting clandestine nuclear tests. While the first case can be approached somehow with the means of cepstral methods, the second case can hardly be resolved with the conventional methods implemented at the International Data Centre, especially if the signals have close slowness and azimuth. Independent Component Analysis (in its FastICA implementation) implying non-Gaussianity of the underlying processes signal's mixture is a blind source separation method that we apply to resolve the mentioned above problems. We have tested this technique with synthetic waveforms, seismic data from DPRK explosions and mining blasts conducted within East-European platform as well as with signals from strong teleseismic events (Sumatra, April 2012 Mw=8.6, and Tohoku, March 2011 Mw=9.0 earthquakes). The data was recorded by seismic arrays of the International Monitoring System of CTBTO and by small-aperture seismic array Mikhnevo (MHVAR) operated by the Institute of Geosphere Dynamics, Russian Academy of Sciences. Our approach demonstrated a good ability of separation of seismic sources with very close origin times and locations (hundreds of meters), and/or having close arrival times (fractions of seconds), and recovering their waveforms from the mixture. Perspectives and limitations of the method are discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gencaga, Deniz; Knuth, Kevin H.; Carbon, Duane F.
Understanding the origins of life has been one of the greatest dreams throughout history. It is now known that star-forming regions contain complex organic molecules, known as Polycyclic Aromatic Hydrocarbons (PAHs), each of which has particular infrared spectral characteristics. By understanding which PAH species are found in specific star-forming regions, we can better understand the biochemistry that takes place in interstellar clouds. Identifying and classifying PAHs is not an easy task: we can only observe a single superposition of PAH spectra at any given astrophysical site, with the PAH species perhaps numbering in the hundreds or even thousands. This ismore » a challenging source separation problem since we have only one observation composed of numerous mixed sources. However, it is made easier with the help of a library of hundreds of PAH spectra. In order to separate PAH molecules from their mixture, we need to identify the specific species and their unique concentrations that would provide the given mixture. We develop a Bayesian approach for this problem where sources are separated from their mixture by Metropolis Hastings algorithm. Separated PAH concentrations are provided with their error bars, illustrating the uncertainties involved in the estimation process. The approach is demonstrated on synthetic spectral mixtures using spectral resolutions from the Infrared Space Observatory (ISO). Performance of the method is tested for different noise levels.« less
Education and Family in Conflict
ERIC Educational Resources Information Center
Lee, Jihye
2011-01-01
In recent years, the demands of high-quality education have become a source of problems in South Korea, forming a new type of separated family. When the parents send their children to foreign countries for advanced education, the fathers, the wives, and children are separated for a significantly long period of time. Usually, the fathers, called…
An American Vital Interest: Preserving the Nuclear Enterprise Supplier Base
2012-02-15
not present significant problems since multiple sources are available and the suppliers are mostly small 7 companies that rely on Honeywell for...business. However, these vendors do present occasional problems in delivering incorrect parts, quantities or documentation. Honeywell continually...rectify vendor issues that require multi-agency involvement.8 Pantex also experiences similar problems with parts they procure separately from their
PWC-ICA: A Method for Stationary Ordered Blind Source Separation with Application to EEG.
Ball, Kenneth; Bigdely-Shamlo, Nima; Mullen, Tim; Robbins, Kay
2016-01-01
Independent component analysis (ICA) is a class of algorithms widely applied to separate sources in EEG data. Most ICA approaches use optimization criteria derived from temporal statistical independence and are invariant with respect to the actual ordering of individual observations. We propose a method of mapping real signals into a complex vector space that takes into account the temporal order of signals and enforces certain mixing stationarity constraints. The resulting procedure, which we call Pairwise Complex Independent Component Analysis (PWC-ICA), performs the ICA in a complex setting and then reinterprets the results in the original observation space. We examine the performance of our candidate approach relative to several existing ICA algorithms for the blind source separation (BSS) problem on both real and simulated EEG data. On simulated data, PWC-ICA is often capable of achieving a better solution to the BSS problem than AMICA, Extended Infomax, or FastICA. On real data, the dipole interpretations of the BSS solutions discovered by PWC-ICA are physically plausible, are competitive with existing ICA approaches, and may represent sources undiscovered by other ICA methods. In conjunction with this paper, the authors have released a MATLAB toolbox that performs PWC-ICA on real, vector-valued signals.
PWC-ICA: A Method for Stationary Ordered Blind Source Separation with Application to EEG
Bigdely-Shamlo, Nima; Mullen, Tim; Robbins, Kay
2016-01-01
Independent component analysis (ICA) is a class of algorithms widely applied to separate sources in EEG data. Most ICA approaches use optimization criteria derived from temporal statistical independence and are invariant with respect to the actual ordering of individual observations. We propose a method of mapping real signals into a complex vector space that takes into account the temporal order of signals and enforces certain mixing stationarity constraints. The resulting procedure, which we call Pairwise Complex Independent Component Analysis (PWC-ICA), performs the ICA in a complex setting and then reinterprets the results in the original observation space. We examine the performance of our candidate approach relative to several existing ICA algorithms for the blind source separation (BSS) problem on both real and simulated EEG data. On simulated data, PWC-ICA is often capable of achieving a better solution to the BSS problem than AMICA, Extended Infomax, or FastICA. On real data, the dipole interpretations of the BSS solutions discovered by PWC-ICA are physically plausible, are competitive with existing ICA approaches, and may represent sources undiscovered by other ICA methods. In conjunction with this paper, the authors have released a MATLAB toolbox that performs PWC-ICA on real, vector-valued signals. PMID:27340397
Blind source separation problem in GPS time series
NASA Astrophysics Data System (ADS)
Gualandi, A.; Serpelloni, E.; Belardinelli, M. E.
2016-04-01
A critical point in the analysis of ground displacement time series, as those recorded by space geodetic techniques, is the development of data-driven methods that allow the different sources of deformation to be discerned and characterized in the space and time domains. Multivariate statistic includes several approaches that can be considered as a part of data-driven methods. A widely used technique is the principal component analysis (PCA), which allows us to reduce the dimensionality of the data space while maintaining most of the variance of the dataset explained. However, PCA does not perform well in finding the solution to the so-called blind source separation (BSS) problem, i.e., in recovering and separating the original sources that generate the observed data. This is mainly due to the fact that PCA minimizes the misfit calculated using an L2 norm (χ 2), looking for a new Euclidean space where the projected data are uncorrelated. The independent component analysis (ICA) is a popular technique adopted to approach the BSS problem. However, the independence condition is not easy to impose, and it is often necessary to introduce some approximations. To work around this problem, we test the use of a modified variational Bayesian ICA (vbICA) method to recover the multiple sources of ground deformation even in the presence of missing data. The vbICA method models the probability density function (pdf) of each source signal using a mix of Gaussian distributions, allowing for more flexibility in the description of the pdf of the sources with respect to standard ICA, and giving a more reliable estimate of them. Here we present its application to synthetic global positioning system (GPS) position time series, generated by simulating deformation near an active fault, including inter-seismic, co-seismic, and post-seismic signals, plus seasonal signals and noise, and an additional time-dependent volcanic source. We evaluate the ability of the PCA and ICA decomposition techniques in explaining the data and in recovering the original (known) sources. Using the same number of components, we find that the vbICA method fits the data almost as well as a PCA method, since the χ 2 increase is less than 10 % the value calculated using a PCA decomposition. Unlike PCA, the vbICA algorithm is found to correctly separate the sources if the correlation of the dataset is low (<0.67) and the geodetic network is sufficiently dense (ten continuous GPS stations within a box of side equal to two times the locking depth of a fault where an earthquake of Mw >6 occurred). We also provide a cookbook for the use of the vbICA algorithm in analyses of position time series for tectonic and non-tectonic applications.
40 CFR 246.200-5 - Recommended procedures: Methods of separation and collection.
Code of Federal Regulations, 2012 CFR
2012-07-01
... designed to recover high grades of office paper at the source of generation, i.e., the desk, are the... recommended system is the desk-top system because it is designed to maximize recovery of high value material... desk-top system has been designed to minimize these problems. (d) The precise method of separation and...
40 CFR 246.200-5 - Recommended procedures: Methods of separation and collection.
Code of Federal Regulations, 2013 CFR
2013-07-01
... designed to recover high grades of office paper at the source of generation, i.e., the desk, are the... recommended system is the desk-top system because it is designed to maximize recovery of high value material... desk-top system has been designed to minimize these problems. (d) The precise method of separation and...
40 CFR 246.200-5 - Recommended procedures: Methods of separation and collection.
Code of Federal Regulations, 2011 CFR
2011-07-01
... designed to recover high grades of office paper at the source of generation, i.e., the desk, are the... recommended system is the desk-top system because it is designed to maximize recovery of high value material... desk-top system has been designed to minimize these problems. (d) The precise method of separation and...
40 CFR 246.200-5 - Recommended procedures: Methods of separation and collection.
Code of Federal Regulations, 2010 CFR
2010-07-01
... designed to recover high grades of office paper at the source of generation, i.e., the desk, are the... recommended system is the desk-top system because it is designed to maximize recovery of high value material... desk-top system has been designed to minimize these problems. (d) The precise method of separation and...
40 CFR 246.200-5 - Recommended procedures: Methods of separation and collection.
Code of Federal Regulations, 2014 CFR
2014-07-01
... designed to recover high grades of office paper at the source of generation, i.e., the desk, are the... recommended system is the desk-top system because it is designed to maximize recovery of high value material... desk-top system has been designed to minimize these problems. (d) The precise method of separation and...
Some remarks on the design of transonic tunnels with low levels of flow unsteadiness
NASA Technical Reports Server (NTRS)
Mabey, D. G.
1976-01-01
The principal sources of flow unsteadiness in the circuit of a transonic wind tunnel are presented. Care must be taken to avoid flow separations, acoustic resonances and large scale turbulence. Some problems discussed are the elimination of diffuser separations, the aerodynamic design of coolers and the unsteadiness generated in ventilated working sections.
Simulating variable source problems via post processing of individual particle tallies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bleuel, D.L.; Donahue, R.J.; Ludewigt, B.A.
2000-10-20
Monte Carlo is an extremely powerful method of simulating complex, three dimensional environments without excessive problem simplification. However, it is often time consuming to simulate models in which the source can be highly varied. Similarly difficult are optimization studies involving sources in which many input parameters are variable, such as particle energy, angle, and spatial distribution. Such studies are often approached using brute force methods or intelligent guesswork. One field in which these problems are often encountered is accelerator-driven Boron Neutron Capture Therapy (BNCT) for the treatment of cancers. Solving the reverse problem of determining the best neutron source formore » optimal BNCT treatment can be accomplished by separating the time-consuming particle-tracking process of a full Monte Carlo simulation from the calculation of the source weighting factors which is typically performed at the beginning of a Monte Carlo simulation. By post-processing these weighting factors on a recorded file of individual particle tally information, the effect of changing source variables can be realized in a matter of seconds, instead of requiring hours or days for additional complete simulations. By intelligent source biasing, any number of different source distributions can be calculated quickly from a single Monte Carlo simulation. The source description can be treated as variable and the effect of changing multiple interdependent source variables on the problem's solution can be determined. Though the focus of this study is on BNCT applications, this procedure may be applicable to any problem that involves a variable source.« less
NASA Technical Reports Server (NTRS)
Cappelli, Daniele; Mansour, Nagi N.
2012-01-01
Separation can be seen in most aerodynamic flows, but accurate prediction of separated flows is still a challenging problem for computational fluid dynamics (CFD) tools. The behavior of several Reynolds Averaged Navier-Stokes (RANS) models in predicting the separated ow over a wall-mounted hump is studied. The strengths and weaknesses of the most popular RANS models (Spalart-Allmaras, k-epsilon, k-omega, k-omega-SST) are evaluated using the open source software OpenFOAM. The hump ow modeled in this work has been documented in the 2004 CFD Validation Workshop on Synthetic Jets and Turbulent Separation Control. Only the baseline case is treated; the slot flow control cases are not considered in this paper. Particular attention is given to predicting the size of the recirculation bubble, the position of the reattachment point, and the velocity profiles downstream of the hump.
NASA Astrophysics Data System (ADS)
Bliss, Donald; Franzoni, Linda; Rouse, Jerry; Manning, Ben
2005-09-01
An analysis method for time-dependent broadband diffuse sound fields in enclosures is described. Beginning with a formulation utilizing time-dependent broadband intensity boundary sources, the strength of these wall sources is expanded in a series in powers of an absorption parameter, thereby giving a separate boundary integral problem for each power. The temporal behavior is characterized by a Taylor expansion in the delay time for a source to influence an evaluation point. The lowest-order problem has a uniform interior field proportional to the reciprocal of the absorption parameter, as expected, and exhibits relatively slow exponential decay. The next-order problem gives a mean-square pressure distribution that is independent of the absorption parameter and is primarily responsible for the spatial variation of the reverberant field. This problem, which is driven by input sources and the lowest-order reverberant field, depends on source location and the spatial distribution of absorption. Additional problems proceed at integer powers of the absorption parameter, but are essentially higher-order corrections to the spatial variation. Temporal behavior is expressed in terms of an eigenvalue problem, with boundary source strength distributions expressed as eigenmodes. Solutions exhibit rapid short-time spatial redistribution followed by long-time decay of a predominant spatial mode.
Localization from near-source quasi-static electromagnetic fields
NASA Astrophysics Data System (ADS)
Mosher, J. C.
1993-09-01
A wide range of research has been published on the problem of estimating the parameters of electromagnetic and acoustical sources from measurements of signals measured at an array of sensors. In the quasi-static electromagnetic cases examined here, the signal variation from a point source is relatively slow with respect to the signal propagation and the spacing of the array of sensors. As such, the location of the point sources can only be determined from the spatial diversity of the received signal across the array. The inverse source localization problem is complicated by unknown model order and strong local minima. The nonlinear optimization problem is posed for solving for the parameters of the quasi-static source model. The transient nature of the sources can be exploited to allow subspace approaches to separate out the signal portion of the spatial correlation matrix. Decomposition techniques are examined for improved processing, and an adaptation of MUltiple SIgnal Characterization (MUSIC) is presented for solving the source localization problem. Recent results on calculating the Cramer-Rao error lower bounds are extended to the multidimensional problem here. This thesis focuses on the problem of source localization in magnetoencephalography (MEG), with a secondary application to thunderstorm source localization. Comparisons are also made between MEG and its electrical equivalent, electroencephalography (EEG). The error lower bounds are examined in detail for several MEG and EEG configurations, as well as localizing thunderstorm cells over Cape Canaveral and Kennedy Space Center. Time-eigenspectrum is introduced as a parsing technique for improving the performance of the optimization problem.
Localization from near-source quasi-static electromagnetic fields
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mosher, John Compton
1993-09-01
A wide range of research has been published on the problem of estimating the parameters of electromagnetic and acoustical sources from measurements of signals measured at an array of sensors. In the quasi-static electromagnetic cases examined here, the signal variation from a point source is relatively slow with respect to the signal propagation and the spacing of the array of sensors. As such, the location of the point sources can only be determined from the spatial diversity of the received signal across the array. The inverse source localization problem is complicated by unknown model order and strong local minima. Themore » nonlinear optimization problem is posed for solving for the parameters of the quasi-static source model. The transient nature of the sources can be exploited to allow subspace approaches to separate out the signal portion of the spatial correlation matrix. Decomposition techniques are examined for improved processing, and an adaptation of MUtiple SIgnal Characterization (MUSIC) is presented for solving the source localization problem. Recent results on calculating the Cramer-Rao error lower bounds are extended to the multidimensional problem here. This thesis focuses on the problem of source localization in magnetoencephalography (MEG), with a secondary application to thunderstorm source localization. Comparisons are also made between MEG and its electrical equivalent, electroencephalography (EEG). The error lower bounds are examined in detail for several MEG and EEG configurations, as well as localizing thunderstorm cells over Cape Canaveral and Kennedy Space Center. Time-eigenspectrum is introduced as a parsing technique for improving the performance of the optimization problem.« less
Zeremdini, Jihen; Ben Messaoud, Mohamed Anouar; Bouzid, Aicha
2015-09-01
Humans have the ability to easily separate a composed speech and to form perceptual representations of the constituent sources in an acoustic mixture thanks to their ears. Until recently, researchers attempt to build computer models of high-level functions of the auditory system. The problem of the composed speech segregation is still a very challenging problem for these researchers. In our case, we are interested in approaches that are addressed to the monaural speech segregation. For this purpose, we study in this paper the computational auditory scene analysis (CASA) to segregate speech from monaural mixtures. CASA is the reproduction of the source organization achieved by listeners. It is based on two main stages: segmentation and grouping. In this work, we have presented, and compared several studies that have used CASA for speech separation and recognition.
Blind speech separation system for humanoid robot with FastICA for audio filtering and separation
NASA Astrophysics Data System (ADS)
Budiharto, Widodo; Santoso Gunawan, Alexander Agung
2016-07-01
Nowadays, there are many developments in building intelligent humanoid robot, mainly in order to handle voice and image. In this research, we propose blind speech separation system using FastICA for audio filtering and separation that can be used in education or entertainment. Our main problem is to separate the multi speech sources and also to filter irrelevant noises. After speech separation step, the results will be integrated with our previous speech and face recognition system which is based on Bioloid GP robot and Raspberry Pi 2 as controller. The experimental results show the accuracy of our blind speech separation system is about 88% in command and query recognition cases.
Extending compile-time reverse mode and exploiting partial separability in ADIFOR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bischof, C.H.; El-Khadiri, M.
1992-10-01
The numerical methods employed in the solution of many scientific computing problems require the computation of the gradient of a function f: R[sup n] [yields] R. ADIFOR is a source translator that, given a collection of subroutines to compute f, generates Fortran 77 code for computing the derivative of this function. Using the so-called torsion problem from the MINPACK-2 test collection as an example, this paper explores two issues in automatic differentiation: the efficient computation of derivatives for partial separable functions and the use of the compile-time reverse mode for the generation of derivatives. We show that orders of magnitudesmore » of improvement are possible when exploiting partial separability and maximizing use of the reverse mode.« less
The Other Side of Method Bias: The Perils of Distinct Source Research Designs
ERIC Educational Resources Information Center
Kammeyer-Mueller, John; Steel, Piers D. G.; Rubenstein, Alex
2010-01-01
Common source bias has been the focus of much attention. To minimize the problem, researchers have sometimes been advised to take measurements of predictors from one observer and measurements of outcomes from another observer or to use separate occasions of measurement. We propose that these efforts to eliminate biases due to common source…
Warmerdam, G; Vullings, R; Van Pul, C; Andriessen, P; Oei, S G; Wijn, P
2013-01-01
Non-invasive fetal electrocardiography (ECG) can be used for prolonged monitoring of the fetal heart rate (FHR). However, the signal-to-noise-ratio (SNR) of non-invasive ECG recordings is often insufficient for reliable detection of the FHR. To overcome this problem, source separation techniques can be used to enhance the fetal ECG. This study uses a physiology-based source separation (PBSS) technique that has already been demonstrated to outperform widely used blind source separation techniques. Despite the relatively good performance of PBSS in enhancing the fetal ECG, PBSS is still susceptible to artifacts. In this study an augmented PBSS technique is developed to reduce the influence of artifacts. The performance of the developed method is compared to PBSS on multi-channel non-invasive fetal ECG recordings. Based on this comparison, the developed method is shown to outperform PBSS for the enhancement of the fetal ECG.
Classical-processing and quantum-processing signal separation methods for qubit uncoupling
NASA Astrophysics Data System (ADS)
Deville, Yannick; Deville, Alain
2012-12-01
The Blind Source Separation problem consists in estimating a set of unknown source signals from their measured combinations. It was only investigated in a non-quantum framework up to now. We propose its first quantum extensions. We thus introduce the Quantum Source Separation field, investigating both its blind and non-blind configurations. More precisely, we show how to retrieve individual quantum bits (qubits) only from the global state resulting from their undesired coupling. We consider cylindrical-symmetry Heisenberg coupling, which e.g. occurs when two electron spins interact through exchange. We first propose several qubit uncoupling methods which typically measure repeatedly the coupled quantum states resulting from individual qubits preparations, and which then statistically process the classical data provided by these measurements. Numerical tests prove the effectiveness of these methods. We then derive a combination of quantum gates for performing qubit uncoupling, thus avoiding repeated qubit preparations and irreversible measurements.
Design of FPGA ICA for hyperspectral imaging processing
NASA Astrophysics Data System (ADS)
Nordin, Anis; Hsu, Charles C.; Szu, Harold H.
2001-03-01
The remote sensing problem which uses hyperspectral imaging can be transformed into a blind source separation problem. Using this model, hyperspectral imagery can be de-mixed into sub-pixel spectra which indicate the different material present in the pixel. This can be further used to deduce areas which contain forest, water or biomass, without even knowing the sources which constitute the image. This form of remote sensing allows previously blurred images to show the specific terrain involved in that region. The blind source separation problem can be implemented using an Independent Component Analysis algorithm. The ICA Algorithm has previously been successfully implemented using software packages such as MATLAB, which has a downloadable version of FastICA. The challenge now lies in implementing it in a form of hardware, or firmware in order to improve its computational speed. Hardware implementation also solves insufficient memory problem encountered by software packages like MATLAB when employing ICA for high resolution images and a large number of channels. Here, a pipelined solution of the firmware, realized using FPGAs are drawn out and simulated using C. Since C code can be translated into HDLs or be used directly on the FPGAs, it can be used to simulate its actual implementation in hardware. The simulated results of the program is presented here, where seven channels are used to model the 200 different channels involved in hyperspectral imaging.
Computational methods for inverse problems in geophysics: inversion of travel time observations
Pereyra, V.; Keller, H.B.; Lee, W.H.K.
1980-01-01
General ways of solving various inverse problems are studied for given travel time observations between sources and receivers. These problems are separated into three components: (a) the representation of the unknown quantities appearing in the model; (b) the nonlinear least-squares problem; (c) the direct, two-point ray-tracing problem used to compute travel time once the model parameters are given. Novel software is described for (b) and (c), and some ideas given on (a). Numerical results obtained with artificial data and an implementation of the algorithm are also presented. ?? 1980.
Dong, Junzi; Colburn, H. Steven
2016-01-01
In multisource, “cocktail party” sound environments, human and animal auditory systems can use spatial cues to effectively separate and follow one source of sound over competing sources. While mechanisms to extract spatial cues such as interaural time differences (ITDs) are well understood in precortical areas, how such information is reused and transformed in higher cortical regions to represent segregated sound sources is not clear. We present a computational model describing a hypothesized neural network that spans spatial cue detection areas and the cortex. This network is based on recent physiological findings that cortical neurons selectively encode target stimuli in the presence of competing maskers based on source locations (Maddox et al., 2012). We demonstrate that key features of cortical responses can be generated by the model network, which exploits spatial interactions between inputs via lateral inhibition, enabling the spatial separation of target and interfering sources while allowing monitoring of a broader acoustic space when there is no competition. We present the model network along with testable experimental paradigms as a starting point for understanding the transformation and organization of spatial information from midbrain to cortex. This network is then extended to suggest engineering solutions that may be useful for hearing-assistive devices in solving the cocktail party problem. PMID:26866056
Dong, Junzi; Colburn, H Steven; Sen, Kamal
2016-01-01
In multisource, "cocktail party" sound environments, human and animal auditory systems can use spatial cues to effectively separate and follow one source of sound over competing sources. While mechanisms to extract spatial cues such as interaural time differences (ITDs) are well understood in precortical areas, how such information is reused and transformed in higher cortical regions to represent segregated sound sources is not clear. We present a computational model describing a hypothesized neural network that spans spatial cue detection areas and the cortex. This network is based on recent physiological findings that cortical neurons selectively encode target stimuli in the presence of competing maskers based on source locations (Maddox et al., 2012). We demonstrate that key features of cortical responses can be generated by the model network, which exploits spatial interactions between inputs via lateral inhibition, enabling the spatial separation of target and interfering sources while allowing monitoring of a broader acoustic space when there is no competition. We present the model network along with testable experimental paradigms as a starting point for understanding the transformation and organization of spatial information from midbrain to cortex. This network is then extended to suggest engineering solutions that may be useful for hearing-assistive devices in solving the cocktail party problem.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bischof, C.H.; El-Khadiri, M.
1992-10-01
The numerical methods employed in the solution of many scientific computing problems require the computation of the gradient of a function f: R{sup n} {yields} R. ADIFOR is a source translator that, given a collection of subroutines to compute f, generates Fortran 77 code for computing the derivative of this function. Using the so-called torsion problem from the MINPACK-2 test collection as an example, this paper explores two issues in automatic differentiation: the efficient computation of derivatives for partial separable functions and the use of the compile-time reverse mode for the generation of derivatives. We show that orders of magnitudesmore » of improvement are possible when exploiting partial separability and maximizing use of the reverse mode.« less
DEEP ATTRACTOR NETWORK FOR SINGLE-MICROPHONE SPEAKER SEPARATION.
Chen, Zhuo; Luo, Yi; Mesgarani, Nima
2017-03-01
Despite the overwhelming success of deep learning in various speech processing tasks, the problem of separating simultaneous speakers in a mixture remains challenging. Two major difficulties in such systems are the arbitrary source permutation and unknown number of sources in the mixture. We propose a novel deep learning framework for single channel speech separation by creating attractor points in high dimensional embedding space of the acoustic signals which pull together the time-frequency bins corresponding to each source. Attractor points in this study are created by finding the centroids of the sources in the embedding space, which are subsequently used to determine the similarity of each bin in the mixture to each source. The network is then trained to minimize the reconstruction error of each source by optimizing the embeddings. The proposed model is different from prior works in that it implements an end-to-end training, and it does not depend on the number of sources in the mixture. Two strategies are explored in the test time, K-means and fixed attractor points, where the latter requires no post-processing and can be implemented in real-time. We evaluated our system on Wall Street Journal dataset and show 5.49% improvement over the previous state-of-the-art methods.
Prescription Drug Abuse: A Fast-Growing Problem | NIH MedlinePlus the Magazine
... graph is available on a separate page. Source: Substance Abuse and Mental Health Services Administration, 2005 National Survey ... you suggest I take for my addiction or substance abuse? Do I need to see a mental health ...
J Padilla, Alcides; Trujillo, Juan C
2018-04-01
Solid waste management in many cities of developing countries is not environmentally sustainable. People traditionally dispose of their solid waste in unsuitable urban areas like sidewalks and satellite dumpsites. This situation nowadays has become a serious public health problem in big Latin American conurbations. Among these densely-populated urban spaces, the Colombia's capital and main city stands out as a special case. In this study, we aim to identify the factors that shape the attitudes towards source-separated recycling among households in Bogotá. Using data from the Colombian Department of Statistics and Bogotá's multi-purpose survey, we estimated a multivariate Probit model. In general, our results show that the higher the household's socioeconomic class, the greater its effort for separating solid wastes. Likewise, our findings also allowed us to characterize household profiles regarding solid waste separation and considering each socioeconomic class. Among these profiles, we found that at lower socioeconomic classes, the attitudes towards solid waste separation are influenced by the use of Internet, the membership to an environmentalist organization, the level of education of the head of household and the homeownership. Hence, increasing the education levels within the poorest segment of the population, promoting affordable housing policies and facilitating Internet access for the vulnerable population could reinforce households' attitudes towards a greater source-separated recycling effort. Copyright © 2017 Elsevier Ltd. All rights reserved.
Characterisation of source-separated household waste intended for composting
Sundberg, Cecilia; Franke-Whittle, Ingrid H.; Kauppi, Sari; Yu, Dan; Romantschuk, Martin; Insam, Heribert; Jönsson, Håkan
2011-01-01
Large-scale composting of source-separated household waste has expanded in recent years in the Nordic countries. One problem can be low pH at the start of the process. Incoming biowaste at four composting plants was characterised chemically, physically and microbiologically. The pH of food waste ranged from 4.7 to 6.1 and organic acid concentration from 24 to 81 mmol kg−1. The bacterial diversity in the waste samples was high, with all samples dominated by Gammaproteobacteria, particularly Pseudomonas and Enterobacteria (Escherichia coli, Klebsiella, Enterobacter). Lactic acid bacteria were also numerically important and are known to negatively affect the composting process because the lactic acid they produce lowers the pH, inhibiting other bacteria. The bacterial groups needed for efficient composting, i.e. Bacillales and Actinobacteria, were present in appreciable amounts. The results indicated that start-up problems in the composting process can be prevented by recycling bulk material and compost. PMID:21075618
Characterisation of source-separated household waste intended for composting.
Sundberg, Cecilia; Franke-Whittle, Ingrid H; Kauppi, Sari; Yu, Dan; Romantschuk, Martin; Insam, Heribert; Jönsson, Håkan
2011-02-01
Large-scale composting of source-separated household waste has expanded in recent years in the Nordic countries. One problem can be low pH at the start of the process. Incoming biowaste at four composting plants was characterised chemically, physically and microbiologically. The pH of food waste ranged from 4.7 to 6.1 and organic acid concentration from 24 to 81 mmol kg(-1). The bacterial diversity in the waste samples was high, with all samples dominated by Gammaproteobacteria, particularly Pseudomonas and Enterobacteria (Escherichia coli, Klebsiella, Enterobacter). Lactic acid bacteria were also numerically important and are known to negatively affect the composting process because the lactic acid they produce lowers the pH, inhibiting other bacteria. The bacterial groups needed for efficient composting, i.e. Bacillales and Actinobacteria, were present in appreciable amounts. The results indicated that start-up problems in the composting process can be prevented by recycling bulk material and compost. Copyright © 2010 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Yang, Yang; Li, Xiukun
2016-06-01
Separation of the components of rigid acoustic scattering by underwater objects is essential in obtaining the structural characteristics of such objects. To overcome the problem of rigid structures appearing to have the same spectral structure in the time domain, time-frequency Blind Source Separation (BSS) can be used in combination with image morphology to separate the rigid scattering components of different objects. Based on a highlight model, the separation of the rigid scattering structure of objects with time-frequency distribution is deduced. Using a morphological filter, different characteristics in a Wigner-Ville Distribution (WVD) observed for single auto term and cross terms can be simplified to remove any cross-term interference. By selecting time and frequency points of the auto terms signal, the accuracy of BSS can be improved. An experimental simulation has been used, with changes in the pulse width of the transmitted signal, the relative amplitude and the time delay parameter, in order to analyzing the feasibility of this new method. Simulation results show that the new method is not only able to separate rigid scattering components, but can also separate the components when elastic scattering and rigid scattering exist at the same time. Experimental results confirm that the new method can be used in separating the rigid scattering structure of underwater objects.
Blind source separation and localization using microphone arrays
NASA Astrophysics Data System (ADS)
Sun, Longji
The blind source separation and localization problem for audio signals is studied using microphone arrays. Pure delay mixtures of source signals typically encountered in outdoor environments are considered. Our proposed approach utilizes the subspace methods, including multiple signal classification (MUSIC) and estimation of signal parameters via rotational invariance techniques (ESPRIT) algorithms, to estimate the directions of arrival (DOAs) of the sources from the collected mixtures. Since audio signals are generally considered broadband, the DOA estimates at frequencies with the large sum of squared amplitude values are combined to obtain the final DOA estimates. Using the estimated DOAs, the corresponding mixing and demixing matrices are computed, and the source signals are recovered using the inverse short time Fourier transform. Subspace methods take advantage of the spatial covariance matrix of the collected mixtures to achieve robustness to noise. While the subspace methods have been studied for localizing radio frequency signals, audio signals have their special properties. For instance, they are nonstationary, naturally broadband and analog. All of these make the separation and localization for the audio signals more challenging. Moreover, our algorithm is essentially equivalent to the beamforming technique, which suppresses the signals in unwanted directions and only recovers the signals in the estimated DOAs. Several crucial issues related to our algorithm and their solutions have been discussed, including source number estimation, spatial aliasing, artifact filtering, different ways of mixture generation, and source coordinate estimation using multiple arrays. Additionally, comprehensive simulations and experiments have been conducted to examine various aspects of the algorithm. Unlike the existing blind source separation and localization methods, which are generally time consuming, our algorithm needs signal mixtures of only a short duration and therefore supports real-time implementation.
NASA Astrophysics Data System (ADS)
Xu, Kangning; Wang, Chengwen; Zheng, Min; Yuan, Xin
2010-11-01
This study aimed to construct an on-site eco-sewerage system for modern office buildings in urban area based on combined innovative technologies of vacuum and source-separation. Results showed that source-separated grey water had low concentrations of pollutants, which helped the reuse of grey water. However, the system had a low separation efficiency between the yellow water and the brown water, which was caused by the plug problem in the urine collection from the urine-diverting toilets. During the storage of yellow water for liquid fertilizer production, nearly all urea nitrogen transferred to ammonium nitrogen and about 2/3 phosphorus was lost because of the struvite precipitation. Total bacteria and coliforms increased first in the storage, but then decreased to low concentrations. The anaerobic/anoxic/aerobic MBR had high elimination rates of COD, ammonium nitrogen and total nitrogen of the brown water, which were 94.2%, 98.1% and 95.1%, respectively. However, the effluent still had high contents of colority, nitrate and phosphorus, which affected the application of the effluent for flushing water. Even though, the effluent might be used as dilution water for the yellow water fertilizer. Based on the results and the assumption of an ideal operation of the vacuum source-separation system, a future plan for on-site eco-sewerage system of modern office buildings was constructed. Its sustainability was validated by the analysis of the substances flow of water and nutrients.
Deep Learning Based Binaural Speech Separation in Reverberant Environments.
Zhang, Xueliang; Wang, DeLiang
2017-05-01
Speech signal is usually degraded by room reverberation and additive noises in real environments. This paper focuses on separating target speech signal in reverberant conditions from binaural inputs. Binaural separation is formulated as a supervised learning problem, and we employ deep learning to map from both spatial and spectral features to a training target. With binaural inputs, we first apply a fixed beamformer and then extract several spectral features. A new spatial feature is proposed and extracted to complement the spectral features. The training target is the recently suggested ideal ratio mask. Systematic evaluations and comparisons show that the proposed system achieves very good separation performance and substantially outperforms related algorithms under challenging multi-source and reverberant environments.
Becker, H; Albera, L; Comon, P; Nunes, J-C; Gribonval, R; Fleureau, J; Guillotel, P; Merlet, I
2017-08-15
Over the past decades, a multitude of different brain source imaging algorithms have been developed to identify the neural generators underlying the surface electroencephalography measurements. While most of these techniques focus on determining the source positions, only a small number of recently developed algorithms provides an indication of the spatial extent of the distributed sources. In a recent comparison of brain source imaging approaches, the VB-SCCD algorithm has been shown to be one of the most promising algorithms among these methods. However, this technique suffers from several problems: it leads to amplitude-biased source estimates, it has difficulties in separating close sources, and it has a high computational complexity due to its implementation using second order cone programming. To overcome these problems, we propose to include an additional regularization term that imposes sparsity in the original source domain and to solve the resulting optimization problem using the alternating direction method of multipliers. Furthermore, we show that the algorithm yields more robust solutions by taking into account the temporal structure of the data. We also propose a new method to automatically threshold the estimated source distribution, which permits to delineate the active brain regions. The new algorithm, called Source Imaging based on Structured Sparsity (SISSY), is analyzed by means of realistic computer simulations and is validated on the clinical data of four patients. Copyright © 2017 Elsevier Inc. All rights reserved.
Uncertainty principles for inverse source problems for electromagnetic and elastic waves
NASA Astrophysics Data System (ADS)
Griesmaier, Roland; Sylvester, John
2018-06-01
In isotropic homogeneous media, far fields of time-harmonic electromagnetic waves radiated by compactly supported volume currents, and elastic waves radiated by compactly supported body force densities can be modelled in very similar fashions. Both are projected restricted Fourier transforms of vector-valued source terms. In this work we generalize two types of uncertainty principles recently developed for far fields of scalar-valued time-harmonic waves in Griesmaier and Sylvester (2017 SIAM J. Appl. Math. 77 154–80) to this vector-valued setting. These uncertainty principles yield stability criteria and algorithms for splitting far fields radiated by collections of well-separated sources into the far fields radiated by individual source components, and for the restoration of missing data segments. We discuss proper regularization strategies for these inverse problems, provide stability estimates based on the new uncertainty principles, and comment on reconstruction schemes. A numerical example illustrates our theoretical findings.
Quantum Theory of Superresolution for Incoherent Optical Imaging
NASA Astrophysics Data System (ADS)
Tsang, Mankei
Rayleigh's criterion for resolving two incoherent point sources has been the most influential measure of optical imaging resolution for over a century. In the context of statistical image processing, violation of the criterion is especially detrimental to the estimation of the separation between the sources, and modern far-field superresolution techniques rely on suppressing the emission of close sources to enhance the localization precision. Using quantum optics, quantum metrology, and statistical analysis, here we show that, even if two close incoherent sources emit simultaneously, measurements with linear optics and photon counting can estimate their separation from the far field almost as precisely as conventional methods do for isolated sources, rendering Rayleigh's criterion irrelevant to the problem. Our results demonstrate that superresolution can be achieved not only for fluorophores but also for stars. Recent progress in generalizing our theory for multiple sources and spectroscopy will also be discussed. This work is supported by the Singapore National Research Foundation under NRF Grant No. NRF-NRFF2011-07 and the Singapore Ministry of Education Academic Research Fund Tier 1 Project R-263-000-C06-112.
Source splitting via the point source method
NASA Astrophysics Data System (ADS)
Potthast, Roland; Fazi, Filippo M.; Nelson, Philip A.
2010-04-01
We introduce a new algorithm for source identification and field splitting based on the point source method (Potthast 1998 A point-source method for inverse acoustic and electromagnetic obstacle scattering problems IMA J. Appl. Math. 61 119-40, Potthast R 1996 A fast new method to solve inverse scattering problems Inverse Problems 12 731-42). The task is to separate the sound fields uj, j = 1, ..., n of n \\in \\mathbb {N} sound sources supported in different bounded domains G1, ..., Gn in \\mathbb {R}^3 from measurements of the field on some microphone array—mathematically speaking from the knowledge of the sum of the fields u = u1 + sdotsdotsdot + un on some open subset Λ of a plane. The main idea of the scheme is to calculate filter functions g_1, \\ldots, g_n, n\\in \\mathbb {N} , to construct uell for ell = 1, ..., n from u|Λ in the form u_{\\ell }(x) = \\int _{\\Lambda } g_{\\ell,x}(y) u(y) {\\,\\rm d}s(y), \\qquad \\ell =1,\\ldots, n. We will provide the complete mathematical theory for the field splitting via the point source method. In particular, we describe uniqueness, solvability of the problem and convergence and stability of the algorithm. In the second part we describe the practical realization of the splitting for real data measurements carried out at the Institute for Sound and Vibration Research at Southampton, UK. A practical demonstration of the original recording and the splitting results for real data is available online.
Information Theoretic Studies and Assessment of Space Object Identification
2014-03-24
localization are contained in Ref. [5]. 1.7.1 A Bayesian MPE Based Analysis of 2D Point-Source-Pair Superresolution In a second recently submitted paper [6], a...related problem of the optical superresolution (OSR) of a pair of equal-brightness point sources separated spatially by a distance (or angle) smaller...1403.4897 [physics.optics] (19 March 2014). 6. S. Prasad, “Asymptotics of Bayesian error probability and 2D pair superresolution ,” submitted to Opt. Express
NASA Astrophysics Data System (ADS)
Chen, X.; Abercrombie, R. E.; Pennington, C.
2017-12-01
Recorded seismic waveforms include contributions from earthquake source properties and propagation effects, leading to long-standing trade-off problems between site/path effects and source effects. With near-field recordings, the path effect is relatively small, so the trade-off problem can be simplified to between source and site effects (commonly referred as "kappa value"). This problem is especially significant for small earthquakes where the corner frequencies are within similar ranges of kappa values, so direct spectrum fitting often leads to systematic biases due to corner frequency and magnitude. In response to the significantly increased seismicity rate in Oklahoma, several local networks have been deployed following major earthquakes: the Prague, Pawnee and Fairview earthquakes. Each network provides dense observations within 20 km surrounding the fault zone, recording tens of thousands of aftershocks between M1 to M3. Using near-field recordings in the Prague area, we apply a stacking approach to separate path/site and source effects. The resulting source parameters are consistent with parameters derived from ground motion and spectral ratio methods from other studies; they exhibit spatial coherence within the fault zone for different fault patches. We apply these source parameter constraints in an analysis of kappa values for stations within 20 km of the fault zone. The resulting kappa values show significantly reduced variability compared to those from direct spectral fitting without constraints on the source spectrum; they are not biased by earthquake magnitudes. With these improvements, we plan to apply the stacking analysis to other local arrays to analyze source properties and site characteristics. For selected individual earthquakes, we will also use individual-pair empirical Green's function (EGF) analysis to validate the source parameter estimations.
Review of chemical separation techniques applicable to alpha spectrometric measurements
NASA Astrophysics Data System (ADS)
de Regge, P.; Boden, R.
1984-06-01
Prior to alpha-spectrometric measurements several chemical manipulations are usually required to obtain alpha-radiating sources with the desired radiochemical and chemical purity. These include sampling, dissolution or leaching of the elements of interest, conditioning of the solution, chemical separation and preparation of the alpha-emitting source. The choice of a particular method is dependent on different criteria but always involves aspects of the selectivity or the quantitative nature of the separations. The availability of suitable tracers or spikes and modern high resolution instruments resulted in the wide-spread application of isotopic dilution techniques to the problems associated with quantitative chemical separations. This enhanced the development of highly elective methods and reagents which led to important simplifications in the separation schemes. The chemical separation methods commonly used in connection with alpha-spectrometric measurements involve precipitation with selected scavenger elements, solvent extraction, ion exchange and electrodeposition techniques or any combination of them. Depending on the purpose of the final measurement and the type of sample available the chemical separation methods have to be adapted to the particular needs of environment monitoring, nuclear chemistry and metrology, safeguards and safety, waste management and requirements in the nuclear fuel cycle. Against the background of separation methods available in the literature the present paper highlights the current developments and trends in the chemical techniques applicable to alpha spectrometry.
Meng, Qing-Hao; Yang, Wei-Xing; Wang, Yang; Zeng, Ming
2011-01-01
This paper addresses the collective odor source localization (OSL) problem in a time-varying airflow environment using mobile robots. A novel OSL methodology which combines odor-source probability estimation and multiple robots' search is proposed. The estimation phase consists of two steps: firstly, the separate probability-distribution map of odor source is estimated via Bayesian rules and fuzzy inference based on a single robot's detection events; secondly, the separate maps estimated by different robots at different times are fused into a combined map by way of distance based superposition. The multi-robot search behaviors are coordinated via a particle swarm optimization algorithm, where the estimated odor-source probability distribution is used to express the fitness functions. In the process of OSL, the estimation phase provides the prior knowledge for the searching while the searching verifies the estimation results, and both phases are implemented iteratively. The results of simulations for large-scale advection-diffusion plume environments and experiments using real robots in an indoor airflow environment validate the feasibility and robustness of the proposed OSL method.
Meng, Qing-Hao; Yang, Wei-Xing; Wang, Yang; Zeng, Ming
2011-01-01
This paper addresses the collective odor source localization (OSL) problem in a time-varying airflow environment using mobile robots. A novel OSL methodology which combines odor-source probability estimation and multiple robots’ search is proposed. The estimation phase consists of two steps: firstly, the separate probability-distribution map of odor source is estimated via Bayesian rules and fuzzy inference based on a single robot’s detection events; secondly, the separate maps estimated by different robots at different times are fused into a combined map by way of distance based superposition. The multi-robot search behaviors are coordinated via a particle swarm optimization algorithm, where the estimated odor-source probability distribution is used to express the fitness functions. In the process of OSL, the estimation phase provides the prior knowledge for the searching while the searching verifies the estimation results, and both phases are implemented iteratively. The results of simulations for large-scale advection–diffusion plume environments and experiments using real robots in an indoor airflow environment validate the feasibility and robustness of the proposed OSL method. PMID:22346650
Magnetic anomalies in east Pacific using MAGSAT data
NASA Technical Reports Server (NTRS)
Harrison, C. G. A. (Principal Investigator)
1983-01-01
Methods for solving problems encountered in separating the core field from the crustal field are summarized as well as those methods developed for inverting total magnetic field data to obtain source functions for oceanic areas. Accounting for magnetization contrasts and the magnetization values measured in rocks of marine origin are also discussed.
NASA Technical Reports Server (NTRS)
Raymond, C.; Hajj, G.
1994-01-01
We review the problem of separating components of the magnetic field arising from sources in the Earth's core and lithosphere, from those contributions arising external to the Earth, namely ionospheric and magnetospheric fields, in spacecraft measurements of the Earth's magnetic field.
NASA Technical Reports Server (NTRS)
Pelevin, V. N.; Kozlyaninov, M. V.
1981-01-01
The problem of light fields in the ocean is in basic ocean optics. Twenty-six separate studies discuss: (1) the field of solar radiation in the ocean; (2) stationary and nonstationary light fields created in the sea by artificial sources; and (3) the use of optical methods to study biological and hydrodynamic characteristics of the sea.
Liao, Yu-Kai; Tseng, Sheng-Hao
2014-01-01
Accurately determining the optical properties of multi-layer turbid media using a layered diffusion model is often a difficult task and could be an ill-posed problem. In this study, an iterative algorithm was proposed for solving such problems. This algorithm employed a layered diffusion model to calculate the optical properties of a layered sample at several source-detector separations (SDSs). The optical properties determined at various SDSs were mutually referenced to complete one round of iteration and the optical properties were gradually revised in further iterations until a set of stable optical properties was obtained. We evaluated the performance of the proposed method using frequency domain Monte Carlo simulations and found that the method could robustly recover the layered sample properties with various layer thickness and optical property settings. It is expected that this algorithm can work with photon transport models in frequency and time domain for various applications, such as determination of subcutaneous fat or muscle optical properties and monitoring the hemodynamics of muscle. PMID:24688828
NASA Astrophysics Data System (ADS)
Gualandi, A.; Serpelloni, E.; Belardinelli, M. E.
2014-12-01
A critical point in the analysis of ground displacements time series is the development of data driven methods that allow to discern and characterize the different sources that generate the observed displacements. A widely used multivariate statistical technique is the Principal Component Analysis (PCA), which allows to reduce the dimensionality of the data space maintaining most of the variance of the dataset explained. It reproduces the original data using a limited number of Principal Components, but it also shows some deficiencies. Indeed, PCA does not perform well in finding the solution to the so-called Blind Source Separation (BSS) problem, i.e. in recovering and separating the original sources that generated the observed data. This is mainly due to the assumptions on which PCA relies: it looks for a new Euclidean space where the projected data are uncorrelated. Usually, the uncorrelation condition is not strong enough and it has been proven that the BSS problem can be tackled imposing on the components to be independent. The Independent Component Analysis (ICA) is, in fact, another popular technique adopted to approach this problem, and it can be used in all those fields where PCA is also applied. An ICA approach enables us to explain the time series imposing a fewer number of constraints on the model, and to reveal anomalies in the data such as transient signals. However, the independence condition is not easy to impose, and it is often necessary to introduce some approximations. To work around this problem, we use a variational bayesian ICA (vbICA) method, which models the probability density function (pdf) of each source signal using a mix of Gaussian distributions. This technique allows for more flexibility in the description of the pdf of the sources, giving a more reliable estimate of them. Here we present the application of the vbICA technique to GPS position time series. First, we use vbICA on synthetic data that simulate a seismic cycle (interseismic + coseismic + postseismic + seasonal + noise), and study the ability of the algorithm to recover the original (known) sources of deformation. Secondly, we apply vbICA to different tectonically active scenarios, such as earthquakes in central and northern Italy, as well as the study of slow slip events in Cascadia.
Exemplar-Based Image Inpainting Using a Modified Priority Definition.
Deng, Liang-Jian; Huang, Ting-Zhu; Zhao, Xi-Le
2015-01-01
Exemplar-based algorithms are a popular technique for image inpainting. They mainly have two important phases: deciding the filling-in order and selecting good exemplars. Traditional exemplar-based algorithms are to search suitable patches from source regions to fill in the missing parts, but they have to face a problem: improper selection of exemplars. To improve the problem, we introduce an independent strategy through investigating the process of patches propagation in this paper. We first define a new separated priority definition to propagate geometry and then synthesize image textures, aiming to well recover image geometry and textures. In addition, an automatic algorithm is designed to estimate steps for the new separated priority definition. Comparing with some competitive approaches, the new priority definition can recover image geometry and textures well.
Exemplar-Based Image Inpainting Using a Modified Priority Definition
Deng, Liang-Jian; Huang, Ting-Zhu; Zhao, Xi-Le
2015-01-01
Exemplar-based algorithms are a popular technique for image inpainting. They mainly have two important phases: deciding the filling-in order and selecting good exemplars. Traditional exemplar-based algorithms are to search suitable patches from source regions to fill in the missing parts, but they have to face a problem: improper selection of exemplars. To improve the problem, we introduce an independent strategy through investigating the process of patches propagation in this paper. We first define a new separated priority definition to propagate geometry and then synthesize image textures, aiming to well recover image geometry and textures. In addition, an automatic algorithm is designed to estimate steps for the new separated priority definition. Comparing with some competitive approaches, the new priority definition can recover image geometry and textures well. PMID:26492491
Saleh, M; Karfoul, A; Kachenoura, A; Senhadji, L; Albera, L
2016-08-01
Improving the execution time and the numerical complexity of the well-known kurtosis-based maximization method, the RobustICA, is investigated in this paper. A Newton-based scheme is proposed and compared to the conventional RobustICA method. A new implementation using the nonlinear Conjugate Gradient one is investigated also. Regarding the Newton approach, an exact computation of the Hessian of the considered cost function is provided. The proposed approaches and the considered implementations inherit the global plane search of the initial RobustICA method for which a better convergence speed for a given direction is still guaranteed. Numerical results on Magnetic Resonance Spectroscopy (MRS) source separation show the efficiency of the proposed approaches notably the quasi-Newton one using the BFGS method.
Blind separation of positive sources by globally convergent gradient search.
Oja, Erkki; Plumbley, Mark
2004-09-01
The instantaneous noise-free linear mixing model in independent component analysis is largely a solved problem under the usual assumption of independent nongaussian sources and full column rank mixing matrix. However, with some prior information on the sources, like positivity, new analysis and perhaps simplified solution methods may yet become possible. In this letter, we consider the task of independent component analysis when the independent sources are known to be nonnegative and well grounded, which means that they have a nonzero pdf in the region of zero. It can be shown that in this case, the solution method is basically very simple: an orthogonal rotation of the whitened observation vector into nonnegative outputs will give a positive permutation of the original sources. We propose a cost function whose minimum coincides with nonnegativity and derive the gradient algorithm under the whitening constraint, under which the separating matrix is orthogonal. We further prove that in the Stiefel manifold of orthogonal matrices, the cost function is a Lyapunov function for the matrix gradient flow, implying global convergence. Thus, this algorithm is guaranteed to find the nonnegative well-grounded independent sources. The analysis is complemented by a numerical simulation, which illustrates the algorithm.
NASA Astrophysics Data System (ADS)
Quednau, Philipp; Trommer, Ralph; Schmidt, Lorenz-Peter
2016-03-01
Wireless transmission systems in smart metering networks share the advantage of lower installation costs due to the expandability of separate infrastructure but suffer from transmission problems. In this paper the issue of interference of wireless transmitted smart meter data with third party systems and data from other meters is investigated and an approach for solving the problem is presented. A multi-channel wireless m-bus receiver was developed to separate the desired data from unwanted interferers by spatial filtering. The according algorithms are presented and the influence of different antenna types on the spatial filtering is investigated. The performance of the spatial filtering is evaluated by extensive measurements in a realistic surrounding with several hundreds of active wireless m-bus transponders. These measurements correspond to the future environment for data-collectors as they took place in rural and urban areas with smart gas meters equipped with wireless m-bus transponders installed in almost all surrounding buildings.
NASA Technical Reports Server (NTRS)
Anderson, Bernhard H.; Miller, Daniel N.
1999-01-01
Turbofan engine-face flow distortion is one of the most troublesome and least understood problems for designers of modern engine inlet systems. One concern is that there are numerous sources of flow-field distortion that are ingested by the inlet or generated within the inlet duct itself. Among these are: (1) flow separation at the cowl lip during in-flight maneuvering, (2) flow separation on the compression surfaces due to shock-wave/boundary layer interactions, (3) spillage of the fuselage boundary layer into the inlet duct, (4) ingestion of aircraft vortices and wakes emanating from upstream disturbances, and (5) strong secondary flow gradients and flow separation induced by wall curvature within the inlet duct itself. Most developing aircraft (including the B70, F-111, F-14, Mig-25, Tornado, and Airbus A300) have experienced one or more of these types of problems, particularly at high Mach numbers and/or extreme maneuver conditions when flow distortion at the engine face exceeded the allowable limits of the engine.
On a two-phase Hele-Shaw problem with a time-dependent gap and distributions of sinks and sources
NASA Astrophysics Data System (ADS)
Savina, Tatiana; Akinyemi, Lanre; Savin, Avital
2018-01-01
A two-phase Hele-Shaw problem with a time-dependent gap describes the evolution of the interface, which separates two fluids sandwiched between two plates. The fluids have different viscosities. In addition to the change in the gap width of the Hele-Shaw cell, the interface is driven by the presence of some special distributions of sinks and sources located in both the interior and exterior domains. The effect of surface tension is neglected. Using the Schwarz function approach, we give examples of exact solutions when the interface belongs to a certain family of algebraic curves and the curves do not form cusps. The family of curves are defined by the initial shape of the free boundary.
Ghost interactions in MEG/EEG source space: A note of caution on inter-areal coupling measures.
Palva, J Matias; Wang, Sheng H; Palva, Satu; Zhigalov, Alexander; Monto, Simo; Brookes, Matthew J; Schoffelen, Jan-Mathijs; Jerbi, Karim
2018-06-01
When combined with source modeling, magneto- (MEG) and electroencephalography (EEG) can be used to study long-range interactions among cortical processes non-invasively. Estimation of such inter-areal connectivity is nevertheless hindered by instantaneous field spread and volume conduction, which artificially introduce linear correlations and impair source separability in cortical current estimates. To overcome the inflating effects of linear source mixing inherent to standard interaction measures, alternative phase- and amplitude-correlation based connectivity measures, such as imaginary coherence and orthogonalized amplitude correlation have been proposed. Being by definition insensitive to zero-lag correlations, these techniques have become increasingly popular in the identification of correlations that cannot be attributed to field spread or volume conduction. We show here, however, that while these measures are immune to the direct effects of linear mixing, they may still reveal large numbers of spurious false positive connections through field spread in the vicinity of true interactions. This fundamental problem affects both region-of-interest-based analyses and all-to-all connectome mappings. Most importantly, beyond defining and illustrating the problem of spurious, or "ghost" interactions, we provide a rigorous quantification of this effect through extensive simulations. Additionally, we further show that signal mixing also significantly limits the separability of neuronal phase and amplitude correlations. We conclude that spurious correlations must be carefully considered in connectivity analyses in MEG/EEG source space even when using measures that are immune to zero-lag correlations. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
Interior sound field control using generalized singular value decomposition in the frequency domain.
Pasco, Yann; Gauthier, Philippe-Aubert; Berry, Alain; Moreau, Stéphane
2017-01-01
The problem of controlling a sound field inside a region surrounded by acoustic control sources is considered. Inspired by the Kirchhoff-Helmholtz integral, the use of double-layer source arrays allows such a control and avoids the modification of the external sound field by the control sources by the approximation of the sources as monopole and radial dipole transducers. However, the practical implementation of the Kirchhoff-Helmholtz integral in physical space leads to large numbers of control sources and error sensors along with excessive controller complexity in three dimensions. The present study investigates the potential of the Generalized Singular Value Decomposition (GSVD) to reduce the controller complexity and separate the effect of control sources on the interior and exterior sound fields, respectively. A proper truncation of the singular basis provided by the GSVD factorization is shown to lead to effective cancellation of the interior sound field at frequencies below the spatial Nyquist frequency of the control sources array while leaving the exterior sound field almost unchanged. Proofs of concept are provided through simulations achieved for interior problems by simulations in a free field scenario with circular arrays and in a reflective environment with square arrays.
NASA Astrophysics Data System (ADS)
Geddes, Earl Russell
The details of the low frequency sound field for a rectangular room can be studied by the use of an established analytic technique--separation of variables. The solution is straightforward and the results are well-known. A non -rectangular room has boundary conditions which are not separable and therefore other solution techniques must be used. This study shows that the finite element method can be adapted for use in the study of sound fields in arbitrary shaped enclosures. The finite element acoustics problem is formulated and the modification of a standard program, which is necessary for solving acoustic field problems, is examined. The solution of the semi-non-rectangular room problem (one where the floor and ceiling remain parallel) is carried out by a combined finite element/separation of variables approach. The solution results are used to construct the Green's function for the low frequency sound field in five rooms (or data cases): (1) a rectangular (Louden) room; (2) The smallest wall of the Louden room canted 20 degrees from normal; (3) The largest wall of the Louden room canted 20 degrees from normal; (4) both the largest and the smallest walls are canted 20 degrees; and (5) a five-sided room variation of Case 4. Case 1, the rectangular room was calculated using both the finite element method and the separation of variables technique. The results for the two methods are compared in order to access the accuracy of the finite element method models. The modal damping coefficient are calculated and the results examined. The statistics of the source and receiver average normalized RMS P('2) responses in the 80 Hz, 100 Hz, and 125 Hz one-third octave bands are developed. The receiver averaged pressure response is developed to determine the effect of the source locations on the response. Twelve source locations are examined and the results tabulated for comparison. The effect of a finite sized source is looked at briefly. Finally, the standard deviation of the spatial pressure response is studied. The results for this characteristic show that it not significantly different in any of the rooms. The conclusions of the study are that only the frequency variations of the pressure response are affected by a room's shape. Further, in general, the simplest modification of a rectangular room (i.e., changing the angle of only one of the smallest walls), produces the most pronounced decrease of the pressure response variations in the low frequency region.
Joint Source-Channel Decoding of Variable-Length Codes with Soft Information: A Survey
NASA Astrophysics Data System (ADS)
Guillemot, Christine; Siohan, Pierre
2005-12-01
Multimedia transmission over time-varying wireless channels presents a number of challenges beyond existing capabilities conceived so far for third-generation networks. Efficient quality-of-service (QoS) provisioning for multimedia on these channels may in particular require a loosening and a rethinking of the layer separation principle. In that context, joint source-channel decoding (JSCD) strategies have gained attention as viable alternatives to separate decoding of source and channel codes. A statistical framework based on hidden Markov models (HMM) capturing dependencies between the source and channel coding components sets the foundation for optimal design of techniques of joint decoding of source and channel codes. The problem has been largely addressed in the research community, by considering both fixed-length codes (FLC) and variable-length source codes (VLC) widely used in compression standards. Joint source-channel decoding of VLC raises specific difficulties due to the fact that the segmentation of the received bitstream into source symbols is random. This paper makes a survey of recent theoretical and practical advances in the area of JSCD with soft information of VLC-encoded sources. It first describes the main paths followed for designing efficient estimators for VLC-encoded sources, the key component of the JSCD iterative structure. It then presents the main issues involved in the application of the turbo principle to JSCD of VLC-encoded sources as well as the main approaches to source-controlled channel decoding. This survey terminates by performance illustrations with real image and video decoding systems.
NASA Astrophysics Data System (ADS)
Sadhu, A.; Narasimhan, S.; Antoni, J.
2017-09-01
Output-only modal identification has seen significant activity in recent years, especially in large-scale structures where controlled input force generation is often difficult to achieve. This has led to the development of new system identification methods which do not require controlled input. They often work satisfactorily if they satisfy some general assumptions - not overly restrictive - regarding the stochasticity of the input. Hundreds of papers covering a wide range of applications appear every year related to the extraction of modal properties from output measurement data in more than two dozen mechanical, aerospace and civil engineering journals. In little more than a decade, concepts of blind source separation (BSS) from the field of acoustic signal processing have been adopted by several researchers and shown that they can be attractive tools to undertake output-only modal identification. Originally intended to separate distinct audio sources from a mixture of recordings, mathematical equivalence to problems in linear structural dynamics have since been firmly established. This has enabled many of the developments in the field of BSS to be modified and applied to output-only modal identification problems. This paper reviews over hundred articles related to the application of BSS and their variants to output-only modal identification. The main contribution of the paper is to present a literature review of the papers which have appeared on the subject. While a brief treatment of the basic ideas are presented where relevant, a comprehensive and critical explanation of their contents is not attempted. Specific issues related to output-only modal identification and the relative advantages and limitations of BSS methods both from theoretical and application standpoints are discussed. Gap areas requiring additional work are also summarized and the paper concludes with possible future trends in this area.
A new DOD and DOA estimation method for MIMO radar
NASA Astrophysics Data System (ADS)
Gong, Jian; Lou, Shuntian; Guo, Yiduo
2018-04-01
The battlefield electromagnetic environment is becoming more and more complex, and MIMO radar will inevitably be affected by coherent and non-stationary noise. To solve this problem, an angle estimation method based on oblique projection operator and Teoplitz matrix reconstruction is proposed. Through the reconstruction of Toeplitz, nonstationary noise is transformed into Gauss white noise, and then the oblique projection operator is used to separate independent and correlated sources. Finally, simulations are carried out to verify the performance of the proposed algorithm in terms of angle estimation performance and source overload.
NASA Astrophysics Data System (ADS)
Gualandi, Adriano; Serpelloni, Enrico; Elina Belardinelli, Maria; Bonafede, Maurizio; Pezzo, Giuseppe; Tolomei, Cristiano
2015-04-01
A critical point in the analysis of ground displacement time series, as those measured by modern space geodetic techniques (primarly continuous GPS/GNSS and InSAR) is the development of data driven methods that allow to discern and characterize the different sources that generate the observed displacements. A widely used multivariate statistical technique is the Principal Component Analysis (PCA), which allows to reduce the dimensionality of the data space maintaining most of the variance of the dataset explained. It reproduces the original data using a limited number of Principal Components, but it also shows some deficiencies, since PCA does not perform well in finding the solution to the so-called Blind Source Separation (BSS) problem. The recovering and separation of the different sources that generate the observed ground deformation is a fundamental task in order to provide a physical meaning to the possible different sources. PCA fails in the BSS problem since it looks for a new Euclidean space where the projected data are uncorrelated. Usually, the uncorrelation condition is not strong enough and it has been proven that the BSS problem can be tackled imposing on the components to be independent. The Independent Component Analysis (ICA) is, in fact, another popular technique adopted to approach this problem, and it can be used in all those fields where PCA is also applied. An ICA approach enables us to explain the displacement time series imposing a fewer number of constraints on the model, and to reveal anomalies in the data such as transient deformation signals. However, the independence condition is not easy to impose, and it is often necessary to introduce some approximations. To work around this problem, we use a variational bayesian ICA (vbICA) method, which models the probability density function (pdf) of each source signal using a mix of Gaussian distributions. This technique allows for more flexibility in the description of the pdf of the sources, giving a more reliable estimate of them. Here we introduce the vbICA technique and present its application on synthetic data that simulate a GPS network recording ground deformation in a tectonically active region, with synthetic time-series containing interseismic, coseismic, and postseismic deformation, plus seasonal deformation, and white and coloured noise. We study the ability of the algorithm to recover the original (known) sources of deformation, and then apply it to a real scenario: the Emilia seismic sequence (2012, northern Italy), which is an example of seismic sequence occurred in a slowly converging tectonic setting, characterized by several local to regional anthropogenic or natural sources of deformation, mainly subsidence due to fluid withdrawal and sediments compaction. We apply both PCA and vbICA to displacement time-series recorded by continuous GPS and InSAR (Pezzo et al., EGU2015-8950).
Three-Dimensional Passive-Source Reverse-Time Migration of Converted Waves: The Method
NASA Astrophysics Data System (ADS)
Li, Jiahang; Shen, Yang; Zhang, Wei
2018-02-01
At seismic discontinuities in the crust and mantle, part of the compressional wave energy converts to shear wave, and vice versa. These converted waves have been widely used in receiver function (RF) studies to image discontinuity structures in the Earth. While generally successful, the conventional RF method has its limitations and is suited mostly to flat or gently dipping structures. Among the efforts to overcome the limitations of the conventional RF method is the development of the wave-theory-based, passive-source reverse-time migration (PS-RTM) for imaging complex seismic discontinuities and scatters. To date, PS-RTM has been implemented only in 2D in the Cartesian coordinate for local problems and thus has limited applicability. In this paper, we introduce a 3D PS-RTM approach in the spherical coordinate, which is better suited for regional and global problems. New computational procedures are developed to reduce artifacts and enhance migrated images, including back-propagating the main arrival and the coda containing the converted waves separately, using a modified Helmholtz decomposition operator to separate the P and S modes in the back-propagated wavefields, and applying an imaging condition that maintains a consistent polarity for a given velocity contrast. Our new approach allows us to use migration velocity models with realistic velocity discontinuities, improving accuracy of the migrated images. We present several synthetic experiments to demonstrate the method, using regional and teleseismic sources. The results show that both regional and teleseismic sources can illuminate complex structures and this method is well suited for imaging dipping interfaces and sharp lateral changes in discontinuity structures.
History, problems, and prospects of Islamic insurance (Takaful) in Bangladesh.
Khan, Issa; Rahman, Noor Naemah Binti Abdul; Yusoff, Mohd Yakub Zulkifli Bin Mohd; Nor, Mohd Roslan Bin Mohd
2016-01-01
This study explains the history, current problems, and future possibilities of Islamic insurance (takaful) in Bangladesh. To articulate these issues, the researcher has adopted the qualitative method, and data has been collected through secondary sources i.e. articles, books, and online resources. The study reveals that Islamic insurance in Bangladesh is regulated by the Insurance Act 2010 which is contradictory with Islamic insurance causing numerous problems for Islamic insurance. This study also points out that Islamic insurance is a fast growing industry with huge prospects in Bangladesh. The government should introduce separate regulations for both Islamic and conventional insurance. The research concludes with suggestions for the further development of Islamic insurance in Bangladesh.
Characterization and identification of Na-Cl sources in ground water
Panno, S.V.; Hackley, Keith C.; Hwang, H.-H.; Greenberg, S.E.; Krapac, I.G.; Landsberger, S.; O'Kelly, D. J.
2006-01-01
Elevated concentrations of sodium (Na+) and chloride (Cl -) in surface and ground water are common in the United States and other countries, and can serve as indicators of, or may constitute, a water quality problem. We have characterized the most prevalent natural and anthropogenic sources of Na+ and Cl- in ground water, primarily in Illinois, and explored techniques that could be used to identify their source. We considered seven potential sources that included agricultural chemicals, septic effluent, animal waste, municipal landfill leachate, sea water, basin brines, and road deicers. The halides Cl-, bromide (Br-), and iodide (I-) were useful indicators of the sources of Na+-Cl- contamination. Iodide enrichment (relative to Cl-) was greatest in precipitation, followed by uncontaminated soil water and ground water, and landfill leachate. The mass ratios of the halides among themselves, with total nitrogen (N), and with Na+ provided diagnostic methods for graphically distinguishing among sources of Na+ and Cl- in contaminated water. Cl/Br ratios relative to Cl- revealed a clear, although overlapping, separation of sample groups. Samples of landfill leachate and ground water known to be contaminated by leachate were enriched in I- and Br-; this provided an excellent fingerprint for identifying leachate contamination. In addition, total N, when plotted against Cl/Br ratios, successfully separated water contaminated by road salt from water contaminated by other sources. Copyright ?? 2005 National Ground Water Association.
A resolution measure for three-dimensional microscopy
Chao, Jerry; Ram, Sripad; Abraham, Anish V.; Ward, E. Sally; Ober, Raimund J.
2009-01-01
A three-dimensional (3D) resolution measure for the conventional optical microscope is introduced which overcomes the drawbacks of the classical 3D (axial) resolution limit. Formulated within the context of a parameter estimation problem and based on the Cramer-Rao lower bound, this 3D resolution measure indicates the accuracy with which a given distance between two objects in 3D space can be determined from the acquired image. It predicts that, given enough photons from the objects of interest, arbitrarily small distances of separation can be estimated with prespecified accuracy. Using simulated images of point source pairs, we show that the maximum likelihood estimator is capable of attaining the accuracy predicted by the resolution measure. We also demonstrate how different factors, such as extraneous noise sources and the spatial orientation of the imaged object pair, can affect the accuracy with which a given distance of separation can be determined. PMID:20161040
NASA Astrophysics Data System (ADS)
Elliott, Stephen J.; Cheer, Jordan; Bhan, Lam; Shi, Chuang; Gan, Woon-Seng
2018-04-01
The active control of an incident sound field with an array of secondary sources is a fundamental problem in active control. In this paper the optimal performance of an infinite array of secondary sources in controlling a plane incident sound wave is first considered in free space. An analytic solution for normal incidence plane waves is presented, indicating a clear cut-off frequency for good performance, when the separation distance between the uniformly-spaced sources is equal to a wavelength. The extent of the near field pressure close to the source array is also quantified, since this determines the positions of the error microphones in a practical arrangement. The theory is also extended to oblique incident waves. This result is then compared with numerical simulations of controlling the sound power radiated through an open aperture in a rigid wall, subject to an incident plane wave, using an array of secondary sources in the aperture. In this case the diffraction through the aperture becomes important when its size is compatible with the acoustic wavelength, in which case only a few sources are necessary for good control. When the size of the aperture is large compared to the wavelength, and diffraction is less important but more secondary sources need to be used for good control, the results then become similar to those for the free field problem with an infinite source array.
Lineage mapper: A versatile cell and particle tracker
NASA Astrophysics Data System (ADS)
Chalfoun, Joe; Majurski, Michael; Dima, Alden; Halter, Michael; Bhadriraju, Kiran; Brady, Mary
2016-11-01
The ability to accurately track cells and particles from images is critical to many biomedical problems. To address this, we developed Lineage Mapper, an open-source tracker for time-lapse images of biological cells, colonies, and particles. Lineage Mapper tracks objects independently of the segmentation method, detects mitosis in confluence, separates cell clumps mistakenly segmented as a single cell, provides accuracy and scalability even on terabyte-sized datasets, and creates division and/or fusion lineages. Lineage Mapper has been tested and validated on multiple biological and simulated problems. The software is available in ImageJ and Matlab at isg.nist.gov.
Jabbar, Ahmed Najah
2018-04-13
This letter suggests two new types of asymmetrical higher-order kernels (HOK) that are generated using the orthogonal polynomials Laguerre (positive or right skew) and Bessel (negative or left skew). These skewed HOK are implemented in the blind source separation/independent component analysis (BSS/ICA) algorithm. The tests for these proposed HOK are accomplished using three scenarios to simulate a real environment using actual sound sources, an environment of mixtures of multimodal fast-changing probability density function (pdf) sources that represent a challenge to the symmetrical HOK, and an environment of an adverse case (near gaussian). The separation is performed by minimizing the mutual information (MI) among the mixed sources. The performance of the skewed kernels is compared to the performance of the standard kernels such as Epanechnikov, bisquare, trisquare, and gaussian and the performance of the symmetrical HOK generated using the polynomials Chebyshev1, Chebyshev2, Gegenbauer, Jacobi, and Legendre to the tenth order. The gaussian HOK are generated using the Hermite polynomial and the Wand and Schucany procedure. The comparison among the 96 kernels is based on the average intersymbol interference ratio (AISIR) and the time needed to complete the separation. In terms of AISIR, the skewed kernels' performance is better than that of the standard kernels and rivals most of the symmetrical kernels' performance. The importance of these new skewed HOK is manifested in the environment of the multimodal pdf mixtures. In such an environment, the skewed HOK come in first place compared with the symmetrical HOK. These new families can substitute for symmetrical HOKs in such applications.
Problems, pitfalls and probes: Welcome to the jungle of electrochemical noise technology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Edgemon, G.L.
1998-02-19
The rise in electrochemical noise (EN) as a corrosion monitoring technique has resulted in unique problems associated with the field application of this method. Many issues relate to the design of the EN probe electrodes. The ability of an electrochemical noise monitoring system to identify and discriminate between localized corrosion mechanisms is related primarily to the capability of the probe to separate the corrosion cell anode from the corresponding cathode. Effectiveness of this separation is largely determined by the details of and the proper design of the probe that is in the environment of interest. No single probe design ormore » geometry can be effectively use in every situation to monitor all types of corrosion. In this paper the authors focus on a case study and probe development history related to monitoring corrosion in an extremely hostile environment using EN. While the ultimate application of EN was and continues to be successful, the case study shows that patience and persistence was necessary to meet and properly implement the monitoring program. Other possible source of problems and frustration with implementing EN are also discussed.« less
Lunar occultations for gamma-ray source measurements
NASA Technical Reports Server (NTRS)
Koch, David G.; Hughes, E. B.; Nolan, Patrick L.
1990-01-01
The unambiguous association of discrete gamma-ray sources with objects radiating at other wavelengths, the separation of discrete sources from the extended emission within the Galaxy, the mapping of gamma-ray emission from nearby galaxies and the measurement of structure within a discrete source cannot presently be accomplished at gamma-ray energies. In the past, the detection processes used in high-energy gamma-ray astronomy have not allowed for good angular resolution. This problem can be overcome by placing gamma-ray detectors on the moon and using the horizon as an occulting edge to achieve arcsec resolution. For purposes of discussion, this concept is examined for gamma rays above 100 MeV for which pair production dominates the detection process and locally-generated nuclear gamma rays do not contribute to the background.
Proceedings of the Numerical Modeling for Underground Nuclear Test Monitoring Symposium
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taylor, S.R.; Kamm, J.R.
1993-11-01
The purpose of the meeting was to discuss the state-of-the-art in numerical simulations of nuclear explosion phenomenology with applications to test ban monitoring. We focused on the uniqueness of model fits to data, the measurement and characterization of material response models, advanced modeling techniques, and applications of modeling to monitoring problems. The second goal of the symposium was to establish a dialogue between seismologists and explosion-source code calculators. The meeting was divided into five main sessions: explosion source phenomenology, material response modeling, numerical simulations, the seismic source, and phenomenology from near source to far field. We feel the symposium reachedmore » many of its goals. Individual papers submitted at the conference are indexed separately on the data base.« less
Dreger, Stefanie; Meyer, Nicole; Fromme, Hermann; Bolte, Gabriele
2015-11-01
Environmental noise is considered a threat to public health as 20% of the EU population is exposed to health influencing noise levels. An association of noise and mental health problems in children has been suggested by some studies, but results are not consistent and there are no longitudinal studies of this association. Our aim was to investigate the influence of different environmental noise sources at children's homes on incident mental health problems in school-aged children. A cohort study of children from first (t0) to fourth grade (t1) of primary school was conducted. Different environmental noise sources (day/night separately) at children's home were assessed via parental annoyance reports. Increased noise exposure between t0 and t1 was the exposure variable. Incident mental health problems were assessed with the parental version of the Strengths and Difficulties Questionnaire (SDQ). RRs and 95% CIs were analysed to investigate the association between different noise sources and incident mental health problems. The study population consisted of 583 boys and 602 girls. The most common increase in noise exposure between t0 and t1 was road traffic noise day (26.38%). After adjusting for covariates exposure to road traffic night was significantly associated with the total difficulties score (RR=2.06; 95% CI=1.25-3.40), emotional symptoms (RR=1.69, 95% CI=1.04-2.72), and conduct problems (RR=1.57, 95% CI=1.04-2.38). Noise by neighbours during the day was associated with conduct problems (RR=1.62, 95% CI=1.11-2.40) and hyperactivity (RR=1.69, 95% CI=1.08-2.65). Aircraft noise day and construction work day were not associated with any of the SDQ categories at a significant level. Environmental noise is an important public health problem. This is the first study to investigate the association of a broad range of noise sources and incident mental health problems in children in a cohort study. Our results suggest that exposure to noise at children's home is associated with mental health problems such as emotional symptoms, conduct problems and hyperactivity. Copyright © 2015 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Bi, ChuanXing; Jing, WenQian; Zhang, YongBin; Xu, Liang
2015-02-01
The conventional nearfield acoustic holography (NAH) is usually based on the assumption of free-field conditions, and it also requires that the measurement aperture should be larger than the actual source. This paper is to focus on the problem that neither of the above-mentioned requirements can be met, and to examine the feasibility of reconstructing the sound field radiated by partial source, based on double-layer pressure measurements made in a non-free field by using patch NAH combined with sound field separation technique. And also, the sensitivity of the reconstructed result to the measurement error is analyzed in detail. Two experiments involving two speakers in an exterior space and one speaker inside a car cabin are presented. The experimental results demonstrate that the patch NAH based on single-layer pressure measurement cannot obtain a satisfied result due to the influences of disturbing sources and reflections, while the patch NAH based on double-layer pressure measurements can successfully remove these influences and reconstruct the patch sound field effectively.
NASA Astrophysics Data System (ADS)
Yao, Jiachi; Xiang, Yang; Qian, Sichong; Li, Shengyang; Wu, Shaowei
2017-11-01
In order to separate and identify the combustion noise and the piston slap noise of a diesel engine, a noise source separation and identification method that combines a binaural sound localization method and blind source separation method is proposed. During a diesel engine noise and vibration test, because a diesel engine has many complex noise sources, a lead covering method was carried out on a diesel engine to isolate other interference noise from the No. 1-5 cylinders. Only the No. 6 cylinder parts were left bare. Two microphones that simulated the human ears were utilized to measure the radiated noise signals 1 m away from the diesel engine. First, a binaural sound localization method was adopted to separate the noise sources that are in different places. Then, for noise sources that are in the same place, a blind source separation method is utilized to further separate and identify the noise sources. Finally, a coherence function method, continuous wavelet time-frequency analysis method, and prior knowledge of the diesel engine are combined to further identify the separation results. The results show that the proposed method can effectively separate and identify the combustion noise and the piston slap noise of a diesel engine. The frequency of the combustion noise and the piston slap noise are respectively concentrated at 4350 Hz and 1988 Hz. Compared with the blind source separation method, the proposed method has superior separation and identification effects, and the separation results have fewer interference components from other noise.
1949-09-08
error from this source can be substantially reduced by the use of polystyrene insulating materials in the plugboard system of problem patching (Section...present at some point in the machine (see Section 5b). -10- ( 1 d ~ PLUGBOARD Our experience ·with the operation of the REAC indicates that...utilization of the machine could be very significantly increased by a drastic revision of the patch bay. We propose to install a separable plugboard which
Reachability Analysis in Probabilistic Biological Networks.
Gabr, Haitham; Todor, Andrei; Dobra, Alin; Kahveci, Tamer
2015-01-01
Extra-cellular molecules trigger a response inside the cell by initiating a signal at special membrane receptors (i.e., sources), which is then transmitted to reporters (i.e., targets) through various chains of interactions among proteins. Understanding whether such a signal can reach from membrane receptors to reporters is essential in studying the cell response to extra-cellular events. This problem is drastically complicated due to the unreliability of the interaction data. In this paper, we develop a novel method, called PReach (Probabilistic Reachability), that precisely computes the probability that a signal can reach from a given collection of receptors to a given collection of reporters when the underlying signaling network is uncertain. This is a very difficult computational problem with no known polynomial-time solution. PReach represents each uncertain interaction as a bi-variate polynomial. It transforms the reachability problem to a polynomial multiplication problem. We introduce novel polynomial collapsing operators that associate polynomial terms with possible paths between sources and targets as well as the cuts that separate sources from targets. These operators significantly shrink the number of polynomial terms and thus the running time. PReach has much better time complexity than the recent solutions for this problem. Our experimental results on real data sets demonstrate that this improvement leads to orders of magnitude of reduction in the running time over the most recent methods. Availability: All the data sets used, the software implemented and the alignments found in this paper are available at http://bioinformatics.cise.ufl.edu/PReach/.
NASA Astrophysics Data System (ADS)
Harmsen, Eric W.; Converse, James C.; Anderson, Mary P.; Hoopes, John A.
1991-09-01
Effluent from septic tank-drainfields can degrade groundwater quality and contaminate nearby water-supply wells. Such groundwater contamination is a problem in the unsewered subdivisions of the sand plain of central Wisconsin, for example. To help planners minimize the risk of direct contamination of a water-supply well by a septic system, a model was developed to estimate the location of the critical dividing pathline between a rectangular contaminant source (the septic tank drainfield) and a partially penetrating pumping well. The model is capable of handling three-dimensional, transient flow in an unconfined, homogeneous, anisotropic aquifer of infinite areal extent, under a regional horizontal hydraulic gradient. Model results are in very good agreement with several other numerical and analytical models. Examples are given for which the safe, horizontal and vertical separation distances to avoid well water contamination are determined for typical central Wisconsin sand plain conditions. A companion paper (Harmsen et al., 1991) describes the application of this model, using a Monte-Carlo analysis, to study the variation of these separation distances in the Wisconsin sand plain. The model can also be applied to larger scale problems and, therefore, could be useful in implementing the U.S. Environmental Protection Agency's new well head protection program.
Solans, Xavier; Alonso, Rosa María; Constans, Angelina; Mansilla, Alfonso
2007-06-01
Several studies have showed an association between the work in waste treatment plants and occupational health problems such as irritation of skin, eyes and mucous membranes, pulmonary diseases, gastrointestinal problems and symptoms of organic dust toxic syndrome (ODTS). These symptoms have been related to bioaerosol exposure. The aim of this study was to investigate the occupational exposure to biological agents in a plant sorting source-separated packages (plastics materials, ferric and non-ferric metals) household waste. Airborne samples were collected with M Air T Millipore sampler. The concentration of total fungi and bacteria and gram-negative bacteria were determined and the most abundant genera were identified. The results shown that the predominant airborne microorganisms were fungi, with counts greater than 12,000 cfu/m(3) and gram-negative bacteria, with a environmental concentration between 1,395 and 5,280 cfu/m(3). In both cases, these concentrations were higher than levels obtained outside of the sorting plant. Among the fungi, the predominant genera were Penicillium and Cladosporium, whereas the predominant genera of gram-negative bacteria were Escherichia, Enterobacter, Klebsiella and Serratia. The present study shows that the workers at sorting source-separated packages (plastics materials, ferric and non-ferric metals) domestic waste plant may be exposed to airborne biological agents, especially fungi and gram-negative bacteria.
Zou, Yonghong; Wang, Lixia; Christensen, Erik R
2015-10-01
This work intended to explain the challenges of the fingerprints based source apportionment method for polycyclic aromatic hydrocarbons (PAH) in the aquatic environment, and to illustrate a practical and robust solution. The PAH data detected in the sediment cores from the Illinois River provide the basis of this study. Principal component analysis (PCA) separates PAH compounds into two groups reflecting their possible airborne transport patterns; but it is not able to suggest specific sources. Not all positive matrix factorization (PMF) determined sources are distinguishable due to the variability of source fingerprints. However, they constitute useful suggestions for inputs for a Bayesian chemical mass balance (CMB) analysis. The Bayesian CMB analysis takes into account the measurement errors as well as the variations of source fingerprints, and provides a credible source apportionment. Major PAH sources for Illinois River sediments are traffic (35%), coke oven (24%), coal combustion (18%), and wood combustion (14%). Copyright © 2015. Published by Elsevier Ltd.
The Iterative Reweighted Mixed-Norm Estimate for Spatio-Temporal MEG/EEG Source Reconstruction.
Strohmeier, Daniel; Bekhti, Yousra; Haueisen, Jens; Gramfort, Alexandre
2016-10-01
Source imaging based on magnetoencephalography (MEG) and electroencephalography (EEG) allows for the non-invasive analysis of brain activity with high temporal and good spatial resolution. As the bioelectromagnetic inverse problem is ill-posed, constraints are required. For the analysis of evoked brain activity, spatial sparsity of the neuronal activation is a common assumption. It is often taken into account using convex constraints based on the l 1 -norm. The resulting source estimates are however biased in amplitude and often suboptimal in terms of source selection due to high correlations in the forward model. In this work, we demonstrate that an inverse solver based on a block-separable penalty with a Frobenius norm per block and a l 0.5 -quasinorm over blocks addresses both of these issues. For solving the resulting non-convex optimization problem, we propose the iterative reweighted Mixed Norm Estimate (irMxNE), an optimization scheme based on iterative reweighted convex surrogate optimization problems, which are solved efficiently using a block coordinate descent scheme and an active set strategy. We compare the proposed sparse imaging method to the dSPM and the RAP-MUSIC approach based on two MEG data sets. We provide empirical evidence based on simulations and analysis of MEG data that the proposed method improves on the standard Mixed Norm Estimate (MxNE) in terms of amplitude bias, support recovery, and stability.
Vehicle routing for the eco-efficient collection of household plastic waste.
Bing, Xiaoyun; de Keizer, Marlies; Bloemhof-Ruwaard, Jacqueline M; van der Vorst, Jack G A J
2014-04-01
Plastic waste is a special category of municipal solid waste. Plastic waste collection is featured with various alternatives of collection methods (curbside/drop-off) and separation methods (source-/post-separation). In the Netherlands, the collection routes of plastic waste are the same as those of other waste, although plastic is different than other waste in terms of volume to weight ratio. This paper aims for redesigning the collection routes and compares the collection options of plastic waste using eco-efficiency as performance indicator. Eco-efficiency concerns the trade-off between environmental impacts, social issues and costs. The collection problem is modeled as a vehicle routing problem. A tabu search heuristic is used to improve the routes. Collection alternatives are compared by a scenario study approach. Real distances between locations are calculated with MapPoint. The scenario study is conducted based on real case data of the Dutch municipality Wageningen. Scenarios are designed according to the collection alternatives with different assumptions in collection method, vehicle type, collection frequency and collection points, etc. Results show that the current collection routes can be improved in terms of eco-efficiency performance by using our method. The source-separation drop-off collection scenario has the best performance for plastic collection assuming householders take the waste to the drop-off points in a sustainable manner. The model also shows to be an efficient decision support tool to investigate the impacts of future changes such as alternative vehicle type and different response rates. Crown Copyright © 2014. Published by Elsevier Ltd. All rights reserved.
Parameter estimation for slit-type scanning sensors
NASA Technical Reports Server (NTRS)
Fowler, J. W.; Rolfe, E. G.
1981-01-01
The Infrared Astronomical Satellite, scheduled for launch into a 900 km near-polar orbit in August 1982, will perform an infrared point source survey by scanning the sky with slit-type sensors. The description of position information is shown to require the use of a non-Gaussian random variable. Methods are described for deciding whether separate detections stem from a single common source, and a formulism is developed for the scan-to-scan problems of identifying multiple sightings of inertially fixed point sources for combining their individual measurements into a refined estimate. Several cases are given where the general theory yields results which are quite different from the corresponding Gaussian applications, showing that argument by Gaussian analogy would lead to error.
Dual Key Speech Encryption Algorithm Based Underdetermined BSS
Zhao, Huan; Chen, Zuo; Zhang, Xixiang
2014-01-01
When the number of the mixed signals is less than that of the source signals, the underdetermined blind source separation (BSS) is a significant difficult problem. Due to the fact that the great amount data of speech communications and real-time communication has been required, we utilize the intractability of the underdetermined BSS problem to present a dual key speech encryption method. The original speech is mixed with dual key signals which consist of random key signals (one-time pad) generated by secret seed and chaotic signals generated from chaotic system. In the decryption process, approximate calculation is used to recover the original speech signals. The proposed algorithm for speech signals encryption can resist traditional attacks against the encryption system, and owing to approximate calculation, decryption becomes faster and more accurate. It is demonstrated that the proposed method has high level of security and can recover the original signals quickly and efficiently yet maintaining excellent audio quality. PMID:24955430
Restrictive loads powered by separate or by common electrical sources
NASA Technical Reports Server (NTRS)
Appelbaum, J.
1989-01-01
In designing a multiple load electrical system, the designer may wish to compare the performance of two setups: a common electrical source powering all loads, or separate electrical sources powering individual loads. Three types of electrical sources: an ideal voltage source, an ideal current source, and solar cell source powering resistive loads were analyzed for their performances in separate and common source systems. A mathematical proof is given, for each case, indicating the merit of the separate or common source system. The main conclusions are: (1) identical resistive loads powered by ideal voltage sources perform the same in both system setups, (2) nonidentical resistive loads powered by ideal voltage sources perform the same in both system setups, (3) nonidentical resistive loads powered by ideal current sources have higher performance in separate source systems, and (4) nonidentical resistive loads powered by solar cells have higher performance in a common source system for a wide range of load resistances.
NASA Astrophysics Data System (ADS)
Trindade, B. C.; Reed, P. M.
2017-12-01
The growing access and reduced cost for computing power in recent years has promoted rapid development and application of multi-objective water supply portfolio planning. As this trend continues there is a pressing need for flexible risk-based simulation frameworks and improved algorithm benchmarking for emerging classes of water supply planning and management problems. This work contributes the Water Utilities Management and Planning (WUMP) model: a generalizable and open source simulation framework designed to capture how water utilities can minimize operational and financial risks by regionally coordinating planning and management choices, i.e. making more efficient and coordinated use of restrictions, water transfers and financial hedging combined with possible construction of new infrastructure. We introduce the WUMP simulation framework as part of a new multi-objective benchmark problem for planning and management of regionally integrated water utility companies. In this problem, a group of fictitious water utilities seek to balance the use of the mentioned reliability driven actions (e.g., restrictions, water transfers and infrastructure pathways) and their inherent financial risks. Several traits of this problem make it ideal for a benchmark problem, namely the presence of (1) strong non-linearities and discontinuities in the Pareto front caused by the step-wise nature of the decision making formulation and by the abrupt addition of storage through infrastructure construction, (2) noise due to the stochastic nature of the streamflows and water demands, and (3) non-separability resulting from the cooperative formulation of the problem, in which decisions made by stakeholder may substantially impact others. Both the open source WUMP simulation framework and its demonstration in a challenging benchmarking example hold value for promoting broader advances in urban water supply portfolio planning for regions confronting change.
Mechanical-biological waste treatment and the associated occupational hygiene in Finland
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tolvanen, Outi K.; Haenninen, Kari I.
2006-07-01
A special feature of waste management in Finland has been the emphasis on the source separation of kitchen biowaste (catering waste); more than two-thirds of the Finnish population participates in this separation. Source-separated biowaste is usually treated by composting. The biowaste of about 5% of the population is handled by mechanical-biological treatment. A waste treatment plant at Mustasaari is the only plant in Finland using digestion for kitchen biowaste. For the protection of their employees, the plant owners commissioned a study on environmental factors and occupational hygiene in the plant area. During 1998-2000 the concentrations of dust, microbes and endotoxinsmore » and noise levels were investigated to identify possible problems at the plant. Three different work areas were investigated: the pre-processing and crushing hall, the bioreactor hall and the drying hall. Employees were asked about work-related health problems. Some problems with occupational hygiene were identified: concentrations of microbes and endotoxins may increase to levels harmful to health during waste crushing and in the bioreactor hall. Because employees complained of symptoms such as dry cough and rash or itching appearing once or twice a month, it is advisable to use respirator masks (class P3) during dusty working phases. The noise level in the drying hall exceeded the Finnish threshold value of 85 dBA. Qualitatively harmful factors for the health of employees are similar in all closed waste treatment plants in Finland. Quantitatively, however, the situation at the Mustasaari treatment plant is better than at some Finnish dry waste treatment plants. Therefore is reasonable to conclude that mechanical sorting, which produces a dry waste fraction for combustion and a biowaste fraction for anaerobic treatment, is in terms of occupational hygiene better for employees than combined aerobic treatment and dry waste treatment.« less
On-road and wind-tunnel measurement of motorcycle helmet noise.
Kennedy, J; Carley, M; Walker, I; Holt, N
2013-09-01
The noise source mechanisms involved in motorcycling include various aerodynamic sources and engine noise. The problem of noise source identification requires extensive data acquisition of a type and level that have not previously been applied. Data acquisition on track and on road are problematic due to rider safety constraints and the portability of appropriate instrumentation. One way to address this problem is the use of data from wind tunnel tests. The validity of these measurements for noise source identification must first be demonstrated. In order to achieve this extensive wind tunnel tests have been conducted and compared with the results from on-track measurements. Sound pressure levels as a function of speed were compared between on track and wind tunnel tests and were found to be comparable. Spectral conditioning techniques were applied to separate engine and wind tunnel noise from aerodynamic noise and showed that the aerodynamic components were equivalent in both cases. The spectral conditioning of on-track data showed that the contribution of engine noise to the overall noise is a function of speed and is more significant than had previously been thought. These procedures form a basis for accurate experimental measurements of motorcycle noise.
ERP denoising in multichannel EEG data using contrasts between signal and noise subspaces.
Ivannikov, Andriy; Kalyakin, Igor; Hämäläinen, Jarmo; Leppänen, Paavo H T; Ristaniemi, Tapani; Lyytinen, Heikki; Kärkkäinen, Tommi
2009-06-15
In this paper, a new method intended for ERP denoising in multichannel EEG data is discussed. The denoising is done by separating ERP/noise subspaces in multidimensional EEG data by a linear transformation and the following dimension reduction by ignoring noise components during inverse transformation. The separation matrix is found based on the assumption that ERP sources are deterministic for all repetitions of the same type of stimulus within the experiment, while the other noise sources do not obey the determinancy property. A detailed derivation of the technique is given together with the analysis of the results of its application to a real high-density EEG data set. The interpretation of the results and the performance of the proposed method under conditions, when the basic assumptions are violated - e.g. the problem is underdetermined - are also discussed. Moreover, we study how the factors of the number of channels and trials used by the method influence the effectiveness of ERP/noise subspaces separation. In addition, we explore also the impact of different data resampling strategies on the performance of the considered algorithm. The results can help in determining the optimal parameters of the equipment/methods used to elicit and reliably estimate ERPs.
Need for improvements in physical pretreatment of source-separated household food waste.
Bernstad, A; Malmquist, L; Truedsson, C; la Cour Jansen, J
2013-03-01
The aim of the present study was to investigate the efficiency in physical pretreatment processes of source-separated solid organic household waste. The investigation of seventeen Swedish full-scale pretreatment facilities, currently receiving separately collected food waste from household for subsequent anaerobic digestion, shows that problems with the quality of produced biomass and high maintenance costs are common. Four full-scale physical pretreatment plants, three using screwpress technology and one using dispergation technology, were compared in relation to resource efficiency, losses of nitrogen and potential methane production from biodegradable matter as well as the ratio of unwanted materials in produced biomass intended for wet anaerobic digestion. Refuse generated in the processes represent 13-39% of TS in incoming wet waste. The methane yield from these fractions corresponds to 14-36Nm(3)/ton separately collected solid organic household waste. Also, 13-32% of N-tot in incoming food waste is found in refuse. Losses of both biodegradable material and nutrients were larger in the three facilities using screwpress technology compared to the facility using dispersion technology.(1) Thus, there are large potentials for increase of both the methane yield and nutrient recovery from separately collected solid organic household waste through increased efficiency in facilities for physical pretreatment. Improved pretreatment processes could thereby increase the overall environmental benefits from anaerobic digestion as a treatment alternative for solid organic household waste. Copyright © 2012 Elsevier Ltd. All rights reserved.
Improved definition of crustal magnetic anomalies for MAGSAT data
NASA Technical Reports Server (NTRS)
Brown, R. D.; Frawley, J. F.; Davis, W. M.; Ray, R. D.; Didwall, E.; Regan, R. D. (Principal Investigator)
1982-01-01
The routine correction of MAGSAT vector magnetometer data for external field effects such as the ring current and the daily variation by filtering long wavelength harmonics from the data is described. Separation of fields due to low altitude sources from those caused by high altitude sources is affected by means of dual harmonic expansions in the solution of Dirichlet's problem. This regression/harmonic filter procedure is applied on an orbit by orbit basis, and initial tests on MAGSAT data from orbit 1176 show reduction in external field residuals by 24.33 nT RMS in the horizontal component, and 10.95 nT RMS in the radial component.
Classical field configurations and infrared slavery
NASA Astrophysics Data System (ADS)
Swanson, Mark S.
1987-09-01
The problem of determining the energy of two spinor particles interacting through massless-particle exchange is analyzed using the path-integral method. A form for the long-range interaction energy is obtained by analyzing an abridged vertex derived from the parent theory. This abridged vertex describes the radiation of zero-momentum particles by pointlike sources. A path-integral formalism for calculating the energy of the radiation field associated with this abridged vertex is developed and applications are made to determine the energy necessary for adiabatic separation of two sources in quantum electrodynamics and for an SU(2) Yang-Mills theory. The latter theory is shown to be consistent with confinement via infrared slavery.
Minnehaha Creek Watershed SWMM5 Model Data Analysis and Future Recommendations
2013-07-01
comprehensive inventory of data inconsistencies without a source data inventory. To solve this problem, MCWD needs to develop a detailed, georeferenced, GIS...LMCW models, USACE recommends that MCWD keep the SWMM5 models separated instead of combining them into one comprehensive SWMM5 model for the entire...SWMM5 geometry. SWMM5 offers three routing methods: steady flow, kinematic wave, and dynamic wave. Each method offers advantages and disadvantages and
Restoration of recto-verso colour documents using correlated component analysis
NASA Astrophysics Data System (ADS)
Tonazzini, Anna; Bedini, Luigi
2013-12-01
In this article, we consider the problem of removing see-through interferences from pairs of recto-verso documents acquired either in grayscale or RGB modality. The see-through effect is a typical degradation of historical and archival documents or manuscripts, and is caused by transparency or seeping of ink from the reverse side of the page. We formulate the problem as one of separating two individual texts, overlapped in the recto and verso maps of the colour channels through a linear convolutional mixing operator, where the mixing coefficients are unknown, while the blur kernels are assumed known a priori or estimated off-line. We exploit statistical techniques of blind source separation to estimate both the unknown model parameters and the ideal, uncorrupted images of the two document sides. We show that recently proposed correlated component analysis techniques overcome the already satisfactory performance of independent component analysis techniques and colour decorrelation, when the two texts are even sensibly correlated.
Broadband Processing in a Noisy Shallow Ocean Environment: A Particle Filtering Approach
Candy, J. V.
2016-04-14
Here we report that when a broadband source propagates sound in a shallow ocean the received data can become quite complicated due to temperature-related sound-speed variations and therefore a highly dispersive environment. Noise and uncertainties disrupt this already chaotic environment even further because disturbances propagate through the same inherent acoustic channel. The broadband (signal) estimation/detection problem can be decomposed into a set of narrowband solutions that are processed separately and then combined to achieve more enhancement of signal levels than that available from a single frequency, thereby allowing more information to be extracted leading to a more reliable source detection.more » A Bayesian solution to the broadband modal function tracking, pressure-field enhancement, and source detection problem is developed that leads to nonparametric estimates of desired posterior distributions enabling the estimation of useful statistics and an improved processor/detector. In conclusion, to investigate the processor capabilities, we synthesize an ensemble of noisy, broadband, shallow-ocean measurements to evaluate its overall performance using an information theoretical metric for the preprocessor and the receiver operating characteristic curve for the detector.« less
Hybrid Weighted Minimum Norm Method A new method based LORETA to solve EEG inverse problem.
Song, C; Zhuang, T; Wu, Q
2005-01-01
This Paper brings forward a new method to solve EEG inverse problem. Based on following physiological characteristic of neural electrical activity source: first, the neighboring neurons are prone to active synchronously; second, the distribution of source space is sparse; third, the active intensity of the sources are high centralized, we take these prior knowledge as prerequisite condition to develop the inverse solution of EEG, and not assume other characteristic of inverse solution to realize the most commonly 3D EEG reconstruction map. The proposed algorithm takes advantage of LORETA's low resolution method which emphasizes particularly on 'localization' and FOCUSS's high resolution method which emphasizes particularly on 'separability'. The method is still under the frame of the weighted minimum norm method. The keystone is to construct a weighted matrix which takes reference from the existing smoothness operator, competition mechanism and study algorithm. The basic processing is to obtain an initial solution's estimation firstly, then construct a new estimation using the initial solution's information, repeat this process until the solutions under last two estimate processing is keeping unchanged.
Septation and separation within the outflow tract of the developing heart
Webb, Sandra; Qayyum, Sonia R; Anderson, Robert H; Lamers, Wouter H; Richardson, Michael K
2003-01-01
The developmental anatomy of the ventricular outlets and intrapericardial arterial trunks is a source of considerable confusion. First, major problems exist because of the multiple names and definitions used to describe this region of the heart as it develops. Second, there is no agreement on the boundaries of the described components, nor on the number of ridges or cushions to be found dividing the outflow tract, and the pattern of their fusion. Evidence is also lacking concerning the role of the fused cushions relative to that of the so-called aortopulmonary septum in separating the intrapericardial components of the great arterial trunks. In this review, we discuss the existing problems, as we see them, in the context of developmental and postnatal morphology. We concentrate, in particular, on the changes in the nature of the wall of the outflow tract, which is initially myocardial throughout its length. Key features that, thus far, do not seem to have received appropriate attention are the origin, and mode of separation, of the intrapericardial portions of the arterial trunks, and the formation of the walls of the aortic and pulmonary valvar sinuses. Also as yet undetermined is the formation of the free-standing muscular subpulmonary infundibulum, the mechanism of its separation from the aortic valvar sinuses, and its differentiation, if any, from the muscular ventricular outlet septum. PMID:12739611
Separators used in microbial electrochemical technologies: Current status and future prospects.
Daud, Siti Mariam; Kim, Byung Hong; Ghasemi, Mostafa; Daud, Wan Ramli Wan
2015-11-01
Microbial electrochemical technologies (METs) are emerging green processes producing useful products from renewable sources without causing environmental pollution and treating wastes. The separator, an important part of METs that greatly affects the latter's performance, is commonly made of Nafion proton exchange membrane (PEM). However, many problems have been identified associated with the Nafion PEM such as high cost of membrane, significant oxygen and substrate crossovers, and transport of cations other than protons protons and biofouling. A variety of materials have been offered as alternative separators such as ion-exchange membranes, salt bridges, glass fibers, composite membranes and porous materials. It has been claimed that low cost porous materials perform better than PEM. These include J-cloth, nylon filter, glass fiber mat, non-woven cloth, earthen pot and ceramics that enable non-ion selective charge transfer. This paper provides an up-to-date review on porous separators and plots directions for future studies. Copyright © 2015 Elsevier Ltd. All rights reserved.
Chen, Haibin; Yang, Yan; Jiang, Wei; Song, Mengjie; Wang, Ying; Xiang, Tiantian
2017-02-01
A case study on the source separation of municipal solid waste (MSW) was performed in Changsha, the capital city of Hunan Province, China. The objective of this study is to analyze the effects of different separation methods and compare their effects with citizens' attitudes and inclination. An effect evaluation method based on accuracy rate and miscellany rate was proposed to study the performance of different separation methods. A large-scale questionnaire survey was conducted to determine citizens' attitudes and inclination toward source separation. Survey result shows that the vast majority of respondents hold consciously positive attitudes toward participation in source separation. Moreover, the respondents ignore the operability of separation methods and would rather choose the complex separation method involving four or more subclassed categories. For the effects of separation methods, the site experiment result demonstrates that the relatively simple separation method involving two categories (food waste and other waste) achieves the best effect with the highest accuracy rate (83.1%) and the lowest miscellany rate (16.9%) among the proposed experimental alternatives. The outcome reflects the inconsistency between people's environmental awareness and behavior. Such inconsistency and conflict may be attributed to the lack of environmental knowledge. Environmental education is assumed to be a fundamental solution to improve the effect of source separation of MSW in Changsha. Important management tips on source separation, including the reformation of the current pay-as-you-throw (PAYT) system, are presented in this work. A case study on the source separation of municipal solid waste was performed in Changsha. An effect evaluation method based on accuracy rate and miscellany rate was proposed to study the performance of different separation methods. The site experiment result demonstrates that the two-category (food waste and other waste) method achieves the best effect. The inconsistency between people's inclination and the effect of source separation exists. The proposed method can be expanded to other cities to determine the most effective separation method during planning stages or to evaluate the performance of running source separation systems.
NASA Astrophysics Data System (ADS)
Dong, Shaojiang; Sun, Dihua; Xu, Xiangyang; Tang, Baoping
2017-06-01
Aiming at the problem that it is difficult to extract the feature information from the space bearing vibration signal because of different noise, for example the running trend information, high-frequency noise and especially the existence of lot of power line interference (50Hz) and its octave ingredients of the running space simulated equipment in the ground. This article proposed a combination method to eliminate them. Firstly, the EMD is used to remove the running trend item information of the signal, the running trend that affect the signal processing accuracy is eliminated. Then the morphological filter is used to eliminate high-frequency noise. Finally, the components and characteristics of the power line interference are researched, based on the characteristics of the interference, the revised blind source separation model is used to remove the power line interferences. Through analysis of simulation and practical application, results suggest that the proposed method can effectively eliminate those noise.
Factors affecting the sustainability of solid waste management system-the case of Palestine.
Al-Khateeb, Ammar J; Al-Sari, Majed I; Al-Khatib, Issam A; Anayah, Fathi
2017-02-01
Understanding the predictors of sustainability in solid waste management (SWM) systems can significantly contribute to eliminate many waste management problems. In this paper, the sustainability elements of SWM systems of interest are (1) attitudes toward separation at the source, (2) behaviour regarding reuse and/or recycling and (3) willingness to pay for an improved service of SWM. The predictors affecting these three elements were studied in two Palestinian cities: Ramallah and Jericho. The data were collected via structured questionnaires and direct interviews with the respondents, and the analysis utilized a logistic regression model. The results showed that the place of residence and dwelling premises are the significant factors influencing attitudes toward separation at the source; the place of residence and age are the significant factors explaining behaviour regarding reuse and/or recycling; while the dwelling premises, gender, level of education and being received education on waste management are the significant factors affecting willingness to pay for an improved service of SWM.
NASA Astrophysics Data System (ADS)
Stark, Dominic; Launet, Barthelemy; Schawinski, Kevin; Zhang, Ce; Koss, Michael; Turp, M. Dennis; Sartori, Lia F.; Zhang, Hantian; Chen, Yiru; Weigel, Anna K.
2018-06-01
The study of unobscured active galactic nuclei (AGN) and quasars depends on the reliable decomposition of the light from the AGN point source and the extended host galaxy light. The problem is typically approached using parametric fitting routines using separate models for the host galaxy and the point spread function (PSF). We present a new approach using a Generative Adversarial Network (GAN) trained on galaxy images. We test the method using Sloan Digital Sky Survey r-band images with artificial AGN point sources added that are then removed using the GAN and with parametric methods using GALFIT. When the AGN point source is more than twice as bright as the host galaxy, we find that our method, PSFGAN, can recover point source and host galaxy magnitudes with smaller systematic error and a lower average scatter (49 per cent). PSFGAN is more tolerant to poor knowledge of the PSF than parametric methods. Our tests show that PSFGAN is robust against a broadening in the PSF width of ± 50 per cent if it is trained on multiple PSFs. We demonstrate that while a matched training set does improve performance, we can still subtract point sources using a PSFGAN trained on non-astronomical images. While initial training is computationally expensive, evaluating PSFGAN on data is more than 40 times faster than GALFIT fitting two components. Finally, PSFGAN is more robust and easy to use than parametric methods as it requires no input parameters.
Optimal Integration of Departures and Arrivals in Terminal Airspace
NASA Technical Reports Server (NTRS)
Xue, Min; Zelinski, Shannon Jean
2013-01-01
Coordination of operations with spatially and temporally shared resources, such as route segments, fixes, and runways, improves the efficiency of terminal airspace management. Problems in this category are, in general, computationally difficult compared to conventional scheduling problems. This paper presents a fast time algorithm formulation using a non-dominated sorting genetic algorithm (NSGA). It was first applied to a test problem introduced in existing literature. An experiment with a test problem showed that new methods can solve the 20 aircraft problem in fast time with a 65% or 440 second delay reduction using shared departure fixes. In order to test its application in a more realistic and complicated problem, the NSGA algorithm was applied to a problem in LAX terminal airspace, where interactions between 28% of LAX arrivals and 10% of LAX departures are resolved by spatial separation in current operations, which may introduce unnecessary delays. In this work, three types of separations - spatial, temporal, and hybrid separations - were formulated using the new algorithm. The hybrid separation combines both temporal and spatial separations. Results showed that although temporal separation achieved less delay than spatial separation with a small uncertainty buffer, spatial separation outperformed temporal separation when the uncertainty buffer was increased. Hybrid separation introduced much less delay than both spatial and temporal approaches. For a total of 15 interacting departures and arrivals, when compared to spatial separation, the delay reduction of hybrid separation varied between 11% or 3.1 minutes and 64% or 10.7 minutes corresponding to an uncertainty buffer from 0 to 60 seconds. Furthermore, as a comparison with the NSGA algorithm, a First-Come-First-Serve based heuristic method was implemented for the hybrid separation. Experiments showed that the results from the NSGA algorithm have 9% to 42% less delay than the heuristic method with varied uncertainty buffer sizes.
Oppenheimer, F.; Bell, J.W.
1959-02-17
An improvement in the mounting arrangement for the ion source within the vacuum tank of a calutron is presented. The entire source is supported by the vacuum envelope through the medium of a bracket secured to a removable face plate. The bracket forms a supporting platform that is generally transverse to the magnetic field. The ion generator is mounted on a pedestal-type insulator supported on the bracket, and the hot leads are brought into the vacuum envelope through a tubular elbow connected to the vacuum envelope, having the axis of its outer opening aligned with the magnetic field at which point a bushing-type insulator is employed. With this arrangement thc ion source is maintained at a positive potential with respect to the vacuum tank and the problem of electron bombardment of the insulator is considerably reduced.
A review of multivariate methods in brain imaging data fusion
NASA Astrophysics Data System (ADS)
Sui, Jing; Adali, Tülay; Li, Yi-Ou; Yang, Honghui; Calhoun, Vince D.
2010-03-01
On joint analysis of multi-task brain imaging data sets, a variety of multivariate methods have shown their strengths and been applied to achieve different purposes based on their respective assumptions. In this paper, we provide a comprehensive review on optimization assumptions of six data fusion models, including 1) four blind methods: joint independent component analysis (jICA), multimodal canonical correlation analysis (mCCA), CCA on blind source separation (sCCA) and partial least squares (PLS); 2) two semi-blind methods: parallel ICA and coefficient-constrained ICA (CC-ICA). We also propose a novel model for joint blind source separation (BSS) of two datasets using a combination of sCCA and jICA, i.e., 'CCA+ICA', which, compared with other joint BSS methods, can achieve higher decomposition accuracy as well as the correct automatic source link. Applications of the proposed model to real multitask fMRI data are compared to joint ICA and mCCA; CCA+ICA further shows its advantages in capturing both shared and distinct information, differentiating groups, and interpreting duration of illness in schizophrenia patients, hence promising applicability to a wide variety of medical imaging problems.
Phytochemical and Biological Activities of Four Wild Medicinal Plants
Ahmad, Shabir; AbdEl-Salam, Naser M.; Fouad, H.; Rehman, Najeeb Ur; Hussain, Hidayat; Saeed, Wajid
2014-01-01
The fruits of four wild plants, namely, Capparis decidua, Ficus carica, Syzygium cumini, and Ziziphus jujuba, are separately used as traditional dietary and remedial agents in remote areas of Khyber Pakhtunkhwa, Pakistan. The results of our study on these four plants revealed that the examined fruits were a valuable source of nutraceuticals and exhibited good level of antimicrobial activity. The fruits of these four investigated plants are promising source of polyphenols, flavonoids, alkaloids, terpenoids, and saponins. These four plants' fruits are good sources of iron, zinc, copper, manganese, selenium, and chromium. It was also observed that these fruits are potential source of antioxidant agent and the possible reason could be that these samples had good amount of phytochemicals. Hence, the proper propagation, conservation, and chemical investigation are recommended so that these fruits should be incorporated for the eradication of food and health related problems. PMID:25374941
Calculation of periodic flows in a continuously stratified fluid
NASA Astrophysics Data System (ADS)
Vasiliev, A.
2012-04-01
Analytic theory of disturbances generated by an oscillating compact source in a viscous continuously stratified fluid was constructed. Exact solution of the internal waves generation problem was constructed taking into account diffusivity effects. This analysis is based on set of fundamental equations of incompressible flows. The linearized problem of periodic flows in a continuously stratified fluid, generated by an oscillating part of the inclined plane was solved by methods of singular perturbation theory. A rectangular or disc placed on a sloping plane and oscillating linearly in an arbitrary direction was selected as a source of disturbances. The solutions include regularly perturbed on dissipative component functions describing internal waves and a family of singularly perturbed functions. One of the functions from the singular components family has an analogue in a homogeneous fluid that is a periodic or Stokes' flow. Its thickness is defined by a universal micro scale depending on kinematics viscosity coefficient and a buoyancy frequency with a factor depending on the wave slope. Other singular perturbed functions are specific for stratified flows. Their thickness are defined the diffusion coefficient, kinematic viscosity and additional factor depending on geometry of the problem. Fields of fluid density, velocity, vorticity, pressure, energy density and flux as well as forces acting on the source are calculated for different types of the sources. It is shown that most effective source of waves is the bi-piston. Complete 3D problem is transformed in various limiting cases that are into 2D problem for source in stratified or homogeneous fluid and the Stokes problem for an oscillating infinite plane. The case of the "critical" angle that is equality of the emitting surface and the wave cone slope angles needs in separate investigations. In this case, the number of singular component is saved. Patterns of velocity and density fields were constructed and analyzed by methods of computational mathematics. Singular components of the solution affect the flow pattern of the inhomogeneous stratified fluid, not only near the source of the waves, but at a large distance. Analytical calculations of the structure of wave beams are matched with laboratory experiments. Some deviations at large distances from the source are formed due to the contribution of background wave field associated with seiches in the laboratory tank. In number of the experiments vortices with closed contours were observed on some distances from the disk. The work was supported by Ministry of Education and Science RF (Goscontract No. 16.518.11.7059), experiments were performed on set up USU "HPC IPMec RAS".
Thibodeau, C; Monette, F; Glaus, M; Laflamme, C B
2011-01-01
The black water and grey water source-separation sanitation system aims at efficient use of energy (biogas), water and nutrients but currently lacks evidence of economic viability to be considered a credible alternative to the conventional system. This study intends to demonstrate economic viability, identify main cost contributors and assess critical influencing factors. A technico-economic model was built based on a new neighbourhood in a Canadian context. Three implementation scales of source-separation system are defined: 500, 5,000 and 50,000 inhabitants. The results show that the source-separation system is 33% to 118% more costly than the conventional system, with the larger cost differential obtained by lower source-separation system implementation scales. A sensitivity analysis demonstrates that vacuum toilet flow reduction from 1.0 to 0.25 L/flush decreases source-separation system cost between 23 and 27%. It also shows that high resource costs can be beneficial or unfavourable to the source-separation system depending on whether the vacuum toilet flow is low or normal. Therefore, the future of this configuration of the source-separation system lies mainly in vacuum toilet flow reduction or the introduction of new efficient effluent volume reduction processes (e.g. reverse osmosis).
Solutions to problems of weathering in Antarctic eucrites
NASA Technical Reports Server (NTRS)
Strait, Melissa M.
1990-01-01
Neutron activation analysis was performed for major and trace elements on a suite of eucrites from both Antarctic and non-Antarctic sources. The chemistry was examined to see if there was an easy way to distinguish Antarctic eucrites that had been disturbed in their trace elements systematics from those that had normal abundances relative to non-Antarctic eucrites. There was no simple correlation found, and identifying the disturbed meteorites still remains a problem. In addition, a set of mineral separates from an eucrite were analyzed. The results showed no abnormalities in the chemistry and provides a possible way to use Antarctic eucrites that were disturbed in modelling of the eucrite parent body.
Prediction of vortex shedding from circular and noncircular bodies in subsonic flow
NASA Technical Reports Server (NTRS)
Mendenhall, Michael R.; Lesieutre, Daniel J.
1987-01-01
An engineering prediction method and associated computer code VTXCLD are presented which predict nose vortex shedding from circular and noncircular bodies in subsonic flow at angles of attack and roll. The axisymmetric body is represented by point sources and doublets, and noncircular cross sections are transformed to a circle by either analytical or numerical conformal transformations. The leeward vortices are modeled by discrete vortices in crossflow planes along the body; thus, the three-dimensional steady flow problem is reduced to a two-dimensional, unsteady, separated flow problem for solution. Comparison of measured and predicted surface pressure distributions, flowfield surveys, and aerodynamic characteristics are presented for bodies with circular and noncircular cross sectional shapes.
Prediction of vortex shedding from circular and noncircular bodies in supersonic flow
NASA Technical Reports Server (NTRS)
Mendenhall, M. R.; Perkins, S. C., Jr.
1984-01-01
An engineering prediction method and associated computer code NOZVTX to predict nose vortex shedding from circular and noncircular bodies in supersonic flow at angles of attack and roll are presented. The body is represented by either a supersonic panel method for noncircular cross sections or line sources and doublets for circular cross sections, and the lee side vortex wake is modeled by discrete vortices in crossflow planes. The three-dimensional steady flow problem is reduced to a two-dimensional, unsteady, separated flow problem for solution. Comparison of measured and predicted surface pressure distributions, flow field surveys, and aerodynamic characteristics is presented for bodies with circular and noncircular cross-sectional shapes.
Common source-multiple load vs. separate source-individual load photovoltaic system
NASA Technical Reports Server (NTRS)
Appelbaum, Joseph
1989-01-01
A comparison of system performance is made for two possible system setups: (1) individual loads powered by separate solar cell sources; and (2) multiple loads powered by a common solar cell source. A proof for resistive loads is given that shows the advantage of a common source over a separate source photovoltaic system for a large range of loads. For identical loads, both systems perform the same.
Rifai Chai; Naik, Ganesh R; Tran, Yvonne; Sai Ho Ling; Craig, Ashley; Nguyen, Hung T
2015-08-01
An electroencephalography (EEG)-based counter measure device could be used for fatigue detection during driving. This paper explores the classification of fatigue and alert states using power spectral density (PSD) as a feature extractor and fuzzy swarm based-artificial neural network (ANN) as a classifier. An independent component analysis of entropy rate bound minimization (ICA-ERBM) is investigated as a novel source separation technique for fatigue classification using EEG analysis. A comparison of the classification accuracy of source separator versus no source separator is presented. Classification performance based on 43 participants without the inclusion of the source separator resulted in an overall sensitivity of 71.67%, a specificity of 75.63% and an accuracy of 73.65%. However, these results were improved after the inclusion of a source separator module, resulting in an overall sensitivity of 78.16%, a specificity of 79.60% and an accuracy of 78.88% (p <; 0.05).
Sentse, Miranda; Ormel, Johan; Veenstra, René; Verhulst, Frank C; Oldehinkel, Albertine J
2011-02-01
The potential effect of parental separation during early adolescence on adolescent externalizing and internalizing problems was investigated in a longitudinal sample of adolescents (n = 1274; mean age = 16.27; 52.3% girls). Pre-separation mental health problems were controlled for. Building on a large number of studies that overall showed a small effect of parental separation, it was argued that separation may only or especially have an effect under certain conditions. It was examined whether child temperament (effortful control and fearfulness) moderates the impact of parental separation on specific mental health domains. Hypotheses were derived from a goal-framing theory, with a focus on goals related to satisfying the need for autonomy and the need to belong. Controlling for the overlap between the outcome domains, we found that parental separation led to an increase in externalizing problems but not internalizing problems when interactions with child temperament were ignored. Moreover, child temperament moderated the impact of parental separation, in that it was only related to increased externalizing problems for children low on effortful control, whereas it was only related to increased internalizing problems for children high on fearfulness. The results indicate that person-environment interactions are important for understanding the development of mental health problems and that these interactions can be domain-specific. PsycINFO Database Record (c) 2011 APA, all rights reserved.
Tully, Lucy A; Moffitt, Terrie E; Caspi, Avshalom; Taylor, Alan; Kiernan, Helena; Andreou, Penny
2004-04-01
We investigated the effects of classroom separation on twins' behavior, progress at school, and reading abilities. This investigation was part of a longitudinal study of a nationally-representative sample of twins (the E-risk Study) who were assessed at the start of school (age 5) and followed up (age 7). We examined three groups of twins: pairs who were in the same class at both ages; pairs who were in separate classes at both ages; and pairs who were in the same class at age 5, but separated by age 7. When compared to those not separated, those separated early had significantly more teacher-rated internalizing problems and those separated later showed more internalizing problems and lower reading scores. Monozygotic (MZ) twins showed more problems as a result of separation than dizygotic (DZ) twins. No group differences emerged for externalizing problems, ADHD or prosocial behaviors. The implications of the findings for parents and teachers of twins, and for school practices about separating twins, are discussed.
NASA Astrophysics Data System (ADS)
Zhou, Yatong; Han, Chunying; Chi, Yue
2018-06-01
In a simultaneous source survey, no limitation is required for the shot scheduling of nearby sources and thus a huge acquisition efficiency can be obtained but at the same time making the recorded seismic data contaminated by strong blending interference. In this paper, we propose a multi-dip seislet frame based sparse inversion algorithm to iteratively separate simultaneous sources. We overcome two inherent drawbacks of traditional seislet transform. For the multi-dip problem, we propose to apply a multi-dip seislet frame thresholding strategy instead of the traditional seislet transform for deblending simultaneous-source data that contains multiple dips, e.g., containing multiple reflections. The multi-dip seislet frame strategy solves the conflicting dip problem that degrades the performance of the traditional seislet transform. For the noise issue, we propose to use a robust dip estimation algorithm that is based on velocity-slope transformation. Instead of calculating the local slope directly using the plane-wave destruction (PWD) based method, we first apply NMO-based velocity analysis and obtain NMO velocities for multi-dip components that correspond to multiples of different orders, then a fairly accurate slope estimation can be obtained using the velocity-slope conversion equation. An iterative deblending framework is given and validated through a comprehensive analysis over both numerical synthetic and field data examples.
Soil recycling paves the way for treating brownfields
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gladdys, R.
A soil recycling and stabilization process allows once-contaminated soil to be incorporated into paving materials. Contaminated soils is more widespread than often realized, with one of the more common sources being petroleum products such as fuel oil and gasoline. Until recently, the conventional solution was to have the material excavated, separated from remining soil and trucked to a hazardous waste landfill. This article describes an alternative approach under the following topics: move the solution, not the problem; on site recycling; heavy metals stabilization; economics.
Treefrogs as Animal Models for Research on Auditory Scene Analysis and the Cocktail Party Problem
Bee, Mark A.
2014-01-01
The perceptual analysis of acoustic scenes involves binding together sounds from the same source and separating them from other sounds in the environment. In large social groups, listeners experience increased difficulty performing these tasks due to high noise levels and interference from the concurrent signals of multiple individuals. While a substantial body of literature on these issues pertains to human hearing and speech communication, few studies have investigated how nonhuman animals may be evolutionarily adapted to solve biologically analogous communication problems. Here, I review recent and ongoing work aimed at testing hypotheses about perceptual mechanisms that enable treefrogs in the genus Hyla to communicate vocally in noisy, multi-source social environments. After briefly introducing the genus and the methods used to study hearing in frogs, I outline several functional constraints on communication posed by the acoustic environment of breeding “choruses”. Then, I review studies of sound source perception aimed at uncovering how treefrog listeners may be adapted to cope with these constraints. Specifically, this review covers research on the acoustic cues used in sequential and simultaneous auditory grouping, spatial release from masking, and dip listening. Throughout the paper, I attempt to illustrate how broad-scale, comparative studies of carefully considered animal models may ultimately reveal an evolutionary diversity of underlying mechanisms for solving cocktail-party-like problems in communication. PMID:24424243
Thompson, Ronald G; Lizardi, Dana; Keyes, Katherine M; Hasin, Deborah S
2008-12-01
This study examined whether the experiences of childhood or adolescent parental divorce/separation and parental alcohol problems affected the likelihood of offspring DSM-IV lifetime alcohol dependence, controlling for parental history of drug, depression, and antisocial behavior problems. Data were drawn from the 2001-2002 National Epidemiological Survey on Alcohol and Related Conditions (NESARC), a nationally representative United States survey of 43,093 civilian non-institutionalized participants aged 18 and older, interviewed in person. Logistic regression models were used to calculate the main and interaction effects of childhood or adolescent parental divorce/separation and parental history of alcohol problems on offspring lifetime alcohol dependence, after adjusting for parental history of drug, depression, and antisocial behavior problems. Childhood or adolescent parental divorce/separation and parental history of alcohol problems were significantly related to offspring lifetime alcohol dependence, after adjusting for parental history of drug, depression, and antisocial behavior problems. Experiencing parental divorce/separation during childhood, even in the absence of parental history of alcohol problems, remained a significant predictor of lifetime alcohol dependence. Experiencing both childhood or adolescent parental divorce/separation and parental alcohol problems had a significantly stronger impact on the risk for DSM-IV alcohol dependence than the risk incurred by either parental risk factor alone. Further research is needed to better identify the factors that increase the risk for lifetime alcohol dependence among those who experience childhood or adolescent parental divorce/separation.
Thompson, Ronald G.; Lizardi, Dana; Keyes, Katherine M.; Hasin, Deborah S.
2013-01-01
Background This study examined whether the experiences of childhood or adolescent parental divorce/separation and parental alcohol problems affected the likelihood of offspring DSM-IV lifetime alcohol dependence, controlling for parental history of drug, depression, and antisocial behavior problems. Method Data were drawn from the 2001–2002 National Epidemiological Survey on Alcohol and Related Conditions (NESARC), a nationally representative United States survey of 43,093 civilian non-institutionalized participants aged 18 and older, interviewed in person. Logistic regression models were used to calculate the main and interaction effects of childhood or adolescent parental divorce/separation and parental history of alcohol problems on offspring lifetime alcohol dependence, after adjusting for parental history of drug, depression, and antisocial behavior problems. Results Childhood or adolescent parental divorce/separation and parental history of alcohol problems were significantly related to offspring lifetime alcohol dependence, after adjusting for parental history of drug, depression, and antisocial behavior problems. Experiencing parental divorce/separation during childhood, even in the absence of parental history of alcohol problems, remained a significant predictor of lifetime alcohol dependence. Experiencing both childhood or adolescent parental divorce/separation and parental alcohol problems had a significantly stronger impact on the risk for DSM-IV alcohol dependence than the risk incurred by either parental risk factor alone. Conclusions Further research is needed to better identify the factors that increase the risk for lifetime alcohol dependence among those who experience childhood or adolescent parental divorce/separation. PMID:18757141
Optimization of municipal solid waste collection and transportation routes.
Das, Swapan; Bhattacharyya, Bidyut Kr
2015-09-01
Optimization of municipal solid waste (MSW) collection and transportation through source separation becomes one of the major concerns in the MSW management system design, due to the fact that the existing MSW management systems suffer by the high collection and transportation cost. Generally, in a city different waste sources scatter throughout the city in heterogeneous way that increase waste collection and transportation cost in the waste management system. Therefore, a shortest waste collection and transportation strategy can effectively reduce waste collection and transportation cost. In this paper, we propose an optimal MSW collection and transportation scheme that focus on the problem of minimizing the length of each waste collection and transportation route. We first formulize the MSW collection and transportation problem into a mixed integer program. Moreover, we propose a heuristic solution for the waste collection and transportation problem that can provide an optimal way for waste collection and transportation. Extensive simulations and real testbed results show that the proposed solution can significantly improve the MSW performance. Results show that the proposed scheme is able to reduce more than 30% of the total waste collection path length. Copyright © 2015 Elsevier Ltd. All rights reserved.
The inverse electroencephalography pipeline
NASA Astrophysics Data System (ADS)
Weinstein, David Michael
The inverse electroencephalography (EEG) problem is defined as determining which regions of the brain are active based on remote measurements recorded with scalp EEG electrodes. An accurate solution to this problem would benefit both fundamental neuroscience research and clinical neuroscience applications. However, constructing accurate patient-specific inverse EEG solutions requires complex modeling, simulation, and visualization algorithms, and to date only a few systems have been developed that provide such capabilities. In this dissertation, a computational system for generating and investigating patient-specific inverse EEG solutions is introduced, and the requirements for each stage of this Inverse EEG Pipeline are defined and discussed. While the requirements of many of the stages are satisfied with existing algorithms, others have motivated research into novel modeling and simulation methods. The principal technical results of this work include novel surface-based volume modeling techniques, an efficient construction for the EEG lead field, and the Open Source release of the Inverse EEG Pipeline software for use by the bioelectric field research community. In this work, the Inverse EEG Pipeline is applied to three research problems in neurology: comparing focal and distributed source imaging algorithms; separating measurements into independent activation components for multifocal epilepsy; and localizing the cortical activity that produces the P300 effect in schizophrenia.
Hou, Shibing; Wu, Jiang; Qin, Yufei; Xu, Zhenming
2010-07-01
Electrostatic separation is an effective and environmentally friendly method for recycling waste printed circuit board (PCB) by several kinds of electrostatic separators. However, some notable problems have been detected in its applications and cannot be efficiently resolved by optimizing the separation process. Instead of the separator itself, these problems are mainly caused by some external factors such as the nonconductive powder (NP) and the superficial moisture of feeding granule mixture. These problems finally lead to an inefficient separation. In the present research, the impacts of these external factors were investigated and a robust design was built to optimize the process and to weaken the adverse impact. A most robust parameter setting (25 kv, 80 rpm) was concluded from the experimental design. In addition, some theoretical methods, including cyclone separation, were presented to eliminate these problems substantially. This will contribute to efficient electrostatic separation of waste PCB and make remarkable progress for industrial applications.
Optimization of light source parameters in the photodynamic therapy of heterogeneous prostate
NASA Astrophysics Data System (ADS)
Li, Jun; Altschuler, Martin D.; Hahn, Stephen M.; Zhu, Timothy C.
2008-08-01
The three-dimensional (3D) heterogeneous distributions of optical properties in a patient prostate can now be measured in vivo. Such data can be used to obtain a more accurate light-fluence kernel. (For specified sources and points, the kernel gives the fluence delivered to a point by a source of unit strength.) In turn, the kernel can be used to solve the inverse problem that determines the source strengths needed to deliver a prescribed photodynamic therapy (PDT) dose (or light-fluence) distribution within the prostate (assuming uniform drug concentration). We have developed and tested computational procedures to use the new heterogeneous data to optimize delivered light-fluence. New problems arise, however, in quickly obtaining an accurate kernel following the insertion of interstitial light sources and data acquisition. (1) The light-fluence kernel must be calculated in 3D and separately for each light source, which increases kernel size. (2) An accurate kernel for light scattering in a heterogeneous medium requires ray tracing and volume partitioning, thus significant calculation time. To address these problems, two different kernels were examined and compared for speed of creation and accuracy of dose. Kernels derived more quickly involve simpler algorithms. Our goal is to achieve optimal dose planning with patient-specific heterogeneous optical data applied through accurate kernels, all within clinical times. The optimization process is restricted to accepting the given (interstitially inserted) sources, and determining the best source strengths with which to obtain a prescribed dose. The Cimmino feasibility algorithm is used for this purpose. The dose distribution and source weights obtained for each kernel are analyzed. In clinical use, optimization will also be performed prior to source insertion to obtain initial source positions, source lengths and source weights, but with the assumption of homogeneous optical properties. For this reason, we compare the results from heterogeneous optical data with those obtained from average homogeneous optical properties. The optimized treatment plans are also compared with the reference clinical plan, defined as the plan with sources of equal strength, distributed regularly in space, which delivers a mean value of prescribed fluence at detector locations within the treatment region. The study suggests that comprehensive optimization of source parameters (i.e. strengths, lengths and locations) is feasible, thus allowing acceptable dose coverage in a heterogeneous prostate PDT within the time constraints of the PDT procedure.
Advances in audio source seperation and multisource audio content retrieval
NASA Astrophysics Data System (ADS)
Vincent, Emmanuel
2012-06-01
Audio source separation aims to extract the signals of individual sound sources from a given recording. In this paper, we review three recent advances which improve the robustness of source separation in real-world challenging scenarios and enable its use for multisource content retrieval tasks, such as automatic speech recognition (ASR) or acoustic event detection (AED) in noisy environments. We present a Flexible Audio Source Separation Toolkit (FASST) and discuss its advantages compared to earlier approaches such as independent component analysis (ICA) and sparse component analysis (SCA). We explain how cues as diverse as harmonicity, spectral envelope, temporal fine structure or spatial location can be jointly exploited by this toolkit. We subsequently present the uncertainty decoding (UD) framework for the integration of audio source separation and audio content retrieval. We show how the uncertainty about the separated source signals can be accurately estimated and propagated to the features. Finally, we explain how this uncertainty can be efficiently exploited by a classifier, both at the training and the decoding stage. We illustrate the resulting performance improvements in terms of speech separation quality and speaker recognition accuracy.
Model reduction method using variable-separation for stochastic saddle point problems
NASA Astrophysics Data System (ADS)
Jiang, Lijian; Li, Qiuqi
2018-02-01
In this paper, we consider a variable-separation (VS) method to solve the stochastic saddle point (SSP) problems. The VS method is applied to obtain the solution in tensor product structure for stochastic partial differential equations (SPDEs) in a mixed formulation. The aim of such a technique is to construct a reduced basis approximation of the solution of the SSP problems. The VS method attempts to get a low rank separated representation of the solution for SSP in a systematic enrichment manner. No iteration is performed at each enrichment step. In order to satisfy the inf-sup condition in the mixed formulation, we enrich the separated terms for the primal system variable at each enrichment step. For the SSP problems by regularization or penalty, we propose a more efficient variable-separation (VS) method, i.e., the variable-separation by penalty method. This can avoid further enrichment of the separated terms in the original mixed formulation. The computation of the variable-separation method decomposes into offline phase and online phase. Sparse low rank tensor approximation method is used to significantly improve the online computation efficiency when the number of separated terms is large. For the applications of SSP problems, we present three numerical examples to illustrate the performance of the proposed methods.
Household food waste separation behavior and the importance of convenience.
Bernstad, Anna
2014-07-01
Two different strategies aiming at increasing household source-separation of food waste were assessed through a case-study in a Swedish residential area (a) use of written information, distributed as leaflets amongst households and (b) installation of equipment for source-segregation of waste with the aim of increasing convenience food waste sorting in kitchens. Weightings of separately collected food waste before and after distribution of written information suggest that this resulted in neither a significant increased amount of separately collected food waste, nor an increased source-separation ratio. After installation of sorting equipment in households, both the amount of separately collected food waste as well as the source-separation ratio increased vastly. Long-term monitoring shows that results where longstanding. Results emphasize the importance of convenience and existence of infrastructure necessary for source-segregation of waste as important factors for household waste recycling, but also highlight the need of addressing these aspects where waste is generated, i.e. already inside the household. Copyright © 2014 Elsevier Ltd. All rights reserved.
Xu, Wanying; Zhou, Chuanbin; Lan, Yajun; Jin, Jiasheng; Cao, Aixin
2015-05-01
Municipal solid waste (MSW) management (MSWM) is most important and challenging in large urban communities. Sound community-based waste management systems normally include waste reduction and material recycling elements, often entailing the separation of recyclable materials by the residents. To increase the efficiency of source separation and recycling, an incentive-based source separation model was designed and this model was tested in 76 households in Guiyang, a city of almost three million people in southwest China. This model embraced the concepts of rewarding households for sorting organic waste, government funds for waste reduction, and introducing small recycling enterprises for promoting source separation. Results show that after one year of operation, the waste reduction rate was 87.3%, and the comprehensive net benefit under the incentive-based source separation model increased by 18.3 CNY tonne(-1) (2.4 Euros tonne(-1)), compared to that under the normal model. The stakeholder analysis (SA) shows that the centralized MSW disposal enterprises had minimum interest and may oppose the start-up of a new recycling system, while small recycling enterprises had a primary interest in promoting the incentive-based source separation model, but they had the least ability to make any change to the current recycling system. The strategies for promoting this incentive-based source separation model are also discussed in this study. © The Author(s) 2015.
Prediction of subsonic vortex shedding from forebodies with chines
NASA Technical Reports Server (NTRS)
Mendenhall, Michael R.; Lesieutre, Daniel J.
1990-01-01
An engineering prediction method and associated computer code VTXCHN to predict nose vortex shedding from circular and noncircular forebodies with sharp chine edges in subsonic flow at angles of attack and roll are presented. Axisymmetric bodies are represented by point sources and doublets, and noncircular cross sections are transformed to a circle by either analytical or numerical conformal transformations. The lee side vortex wake is modeled by discrete vortices in crossflow planes along the body; thus the three-dimensional steady flow problem is reduced to a two-dimensional, unsteady, separated flow problem for solution. Comparison of measured and predicted surface pressure distributions, flow field surveys, and aerodynamic characteristics are presented for noncircular bodies alone and forebodies with sharp chines.
Two denominators for one numerator: the example of neonatal mortality.
Harmon, Quaker E; Basso, Olga; Weinberg, Clarice R; Wilcox, Allen J
2018-06-01
Preterm delivery is one of the strongest predictors of neonatal mortality. A given exposure may increase neonatal mortality directly, or indirectly by increasing the risk of preterm birth. Efforts to assess these direct and indirect effects are complicated by the fact that neonatal mortality arises from two distinct denominators (i.e. two risk sets). One risk set comprises fetuses, susceptible to intrauterine pathologies (such as malformations or infection), which can result in neonatal death. The other risk set comprises live births, who (unlike fetuses) are susceptible to problems of immaturity and complications of delivery. In practice, fetal and neonatal sources of neonatal mortality cannot be separated-not only because of incomplete information, but because risks from both sources can act on the same newborn. We use simulations to assess the repercussions of this structural problem. We first construct a scenario in which fetal and neonatal factors contribute separately to neonatal mortality. We introduce an exposure that increases risk of preterm birth (and thus neonatal mortality) without affecting the two baseline sets of neonatal mortality risk. We then calculate the apparent gestational-age-specific mortality for exposed and unexposed newborns, using as the denominator either fetuses or live births at a given gestational age. If conditioning on gestational age successfully blocked the mediating effect of preterm delivery, then exposure would have no effect on gestational-age-specific risk. Instead, we find apparent exposure effects with either denominator. Except for prediction, neither denominator provides a meaningful way to define gestational-age-specific neonatal mortality.
Peng, Hai-Qin; Liu, Yan; Gao, Xue-Long; Wang, Hong-Wu; Chen, Yi; Cai, Hui-Yi
2017-11-01
While point source pollutions have gradually been controlled in recent years, the non-point source pollution problem has become increasingly prominent. The receiving waters are frequently polluted by the initial stormwater from the separate stormwater system and the wastewater from sewage pipes through stormwater pipes. Consequently, calculating the intercepted runoff depth has become a problem that must be resolved immediately for initial stormwater pollution management. The accurate calculation of intercepted runoff depth provides a solid foundation for selecting the appropriate size of intercepting facilities in drainage and interception projects. This study establishes a separate stormwater system for the Yishan Building watershed of Fuzhou City using the InfoWorks Integrated Catchment Management (InfoWorks ICM), which can predict the stormwater flow velocity and the flow of discharge outlet after each rainfall. The intercepted runoff depth is calculated from the stormwater quality and environmental capacity of the receiving waters. The average intercepted runoff depth from six rainfall events is calculated as 4.1 mm based on stormwater quality. The average intercepted runoff depth from six rainfall events is calculated as 4.4 mm based on the environmental capacity of the receiving waters. The intercepted runoff depth differs when calculated from various aspects. The selection of the intercepted runoff depth depends on the goal of water quality control, the self-purification capacity of the water bodies, and other factors of the region.
Disturbance Source Separation in Shear Flows Using Blind Source Separation Methods
NASA Astrophysics Data System (ADS)
Gluzman, Igal; Cohen, Jacob; Oshman, Yaakov
2017-11-01
A novel approach is presented for identifying disturbance sources in wall-bounded shear flows. The method can prove useful for active control of boundary layer transition from laminar to turbulent flow. The underlying idea is to consider the flow state, as measured in sensors, to be a mixture of sources, and to use Blind Source Separation (BSS) techniques to recover the separate sources and their unknown mixing process. We present a BSS method based on the Degenerate Unmixing Estimation Technique. This method can be used to identify any (a priori unknown) number of sources by using the data acquired by only two sensors. The power of the new method is demonstrated via numerical and experimental proofs of concept. Wind tunnel experiments involving boundary layer flow over a flat plate were carried out, in which two hot-wire anemometers were used to separate disturbances generated by disturbance generators such as a single dielectric barrier discharge plasma actuator and a loudspeaker.
Blind separation of incoherent and spatially disjoint sound sources
NASA Astrophysics Data System (ADS)
Dong, Bin; Antoni, Jérôme; Pereira, Antonio; Kellermann, Walter
2016-11-01
Blind separation of sound sources aims at reconstructing the individual sources which contribute to the overall radiation of an acoustical field. The challenge is to reach this goal using distant measurements when all sources are operating concurrently. The working assumption is usually that the sources of interest are incoherent - i.e. statistically orthogonal - so that their separation can be approached by decorrelating a set of simultaneous measurements, which amounts to diagonalizing the cross-spectral matrix. Principal Component Analysis (PCA) is traditionally used to this end. This paper reports two new findings in this context. First, a sufficient condition is established under which "virtual" sources returned by PCA coincide with true sources; it stipulates that the sources of interest should be not only incoherent but also spatially orthogonal. A particular case of this instance is met by spatially disjoint sources - i.e. with non-overlapping support sets. Second, based on this finding, a criterion that enforces both statistical and spatial orthogonality is proposed to blindly separate incoherent sound sources which radiate from disjoint domains. This criterion can be easily incorporated into acoustic imaging algorithms such as beamforming or acoustical holography to identify sound sources of different origins. The proposed methodology is validated on laboratory experiments. In particular, the separation of aeroacoustic sources is demonstrated in a wind tunnel.
Optimal partial mass transportation and obstacle Monge-Kantorovich equation
NASA Astrophysics Data System (ADS)
Igbida, Noureddine; Nguyen, Van Thanh
2018-05-01
Optimal partial mass transport, which is a variant of the optimal transport problem, consists in transporting effectively a prescribed amount of mass from a source to a target. The problem was first studied by Caffarelli and McCann (2010) [6] and Figalli (2010) [12] with a particular attention to the quadratic cost. Our aim here is to study the optimal partial mass transport problem with Finsler distance costs including the Monge cost given by the Euclidian distance. Our approach is different and our results do not follow from previous works. Among our results, we introduce a PDE of Monge-Kantorovich type with a double obstacle to characterize active submeasures, Kantorovich potential and optimal flow for the optimal partial transport problem. This new PDE enables us to study the uniqueness and monotonicity results for the active submeasures. Another interesting issue of our approach is its convenience for numerical analysis and computations that we develop in a separate paper [14] (Igbida and Nguyen, 2018).
Experimental analysis of precursors to severe problem behavior.
Fritz, Jennifer N; Iwata, Brian A; Hammond, Jennifer L; Bloom, Sarah E
2013-01-01
Some individuals engage in both mild and severe forms of problem behavior. Research has shown that when mild behaviors precede severe behaviors (i.e., the mild behaviors serve as precursors), they can (a) be maintained by the same source of reinforcement as severe behavior and (b) reduce rates of severe behavior observed during assessment. In Study 1, we developed an objective checklist to identify precursors via videotaped trials for 16 subjects who engaged in problem behavior and identified at least 1 precursor for every subject. In Study 2, we conducted separate functional analyses of precursor and severe problem behaviors for 8 subjects, and obtained correspondence between outcomes in 7 cases. In Study 3, we evaluated noncontingent reinforcement schedule thinning plus differential reinforcement of alternative behavior to reduce precursors, increase appropriate behavior, and maintain low rates of severe behavior during 3 treatment analyses for 2 subjects. Results showed that this treatment strategy was effective for behaviors maintained by positive and negative reinforcement. © Society for the Experimental Analysis of Behavior.
SYNTHESIS OF NOVEL ALL-DIELECTRIC GRATING FILTERS USING GENETIC ALGORITHMS
NASA Technical Reports Server (NTRS)
Zuffada, Cinzia; Cwik, Tom; Ditchman, Christopher
1997-01-01
We are concerned with the design of inhomogeneous, all dielectric (lossless) periodic structures which act as filters. Dielectric filters made as stacks of inhomogeneous gratings and layers of materials are being used in optical technology, but are not common at microwave frequencies. The problem is then finding the periodic cell's geometric configuration and permittivity values which correspond to a specified reflectivity/transmittivity response as a function of frequency/illumination angle. This type of design can be thought of as an inverse-source problem, since it entails finding a distribution of sources which produce fields (or quantities derived from them) of given characteristics. Electromagnetic sources (electric and magnetic current densities) in a volume are related to the outside fields by a well known linear integral equation. Additionally, the sources are related to the fields inside the volume by a constitutive equation, involving the material properties. Then, the relationship linking the fields outside the source region to those inside is non-linear, in terms of material properties such as permittivity, permeability and conductivity. The solution of the non-linear inverse problem is cast here as a combination of two linear steps, by explicitly introducing the electromagnetic sources in the computational volume as a set of unknowns in addition to the material unknowns. This allows to solve for material parameters and related electric fields in the source volume which are consistent with Maxwell's equations. Solutions are obtained iteratively by decoupling the two steps. First, we invert for the permittivity only in the minimization of a cost function and second, given the materials, we find the corresponding electric fields through direct solution of the integral equation in the source volume. The sources thus computed are used to generate the far fields and the synthesized triter response. The cost function is obtained by calculating the deviation between the synthesized value of reflectivity/transmittivity and the desired one. Solution geometries for the periodic cell are sought as gratings (ensembles of columns of different heights and widths), or combinations of homogeneous layers of different dielectric materials and gratings. Hence the explicit unknowns of the inversion step are the material permittivities and the relative boundaries separating homogeneous parcels of the periodic cell.
Perceptually controlled doping for audio source separation
NASA Astrophysics Data System (ADS)
Mahé, Gaël; Nadalin, Everton Z.; Suyama, Ricardo; Romano, João MT
2014-12-01
The separation of an underdetermined audio mixture can be performed through sparse component analysis (SCA) that relies however on the strong hypothesis that source signals are sparse in some domain. To overcome this difficulty in the case where the original sources are available before the mixing process, the informed source separation (ISS) embeds in the mixture a watermark, which information can help a further separation. Though powerful, this technique is generally specific to a particular mixing setup and may be compromised by an additional bitrate compression stage. Thus, instead of watermarking, we propose a `doping' method that makes the time-frequency representation of each source more sparse, while preserving its audio quality. This method is based on an iterative decrease of the distance between the distribution of the signal and a target sparse distribution, under a perceptual constraint. We aim to show that the proposed approach is robust to audio coding and that the use of the sparsified signals improves the source separation, in comparison with the original sources. In this work, the analysis is made only in instantaneous mixtures and focused on voice sources.
A study of numerical methods for hyperbolic conservation laws with stiff source terms
NASA Technical Reports Server (NTRS)
Leveque, R. J.; Yee, H. C.
1988-01-01
The proper modeling of nonequilibrium gas dynamics is required in certain regimes of hypersonic flow. For inviscid flow this gives a system of conservation laws coupled with source terms representing the chemistry. Often a wide range of time scales is present in the problem, leading to numerical difficulties as in stiff systems of ordinary differential equations. Stability can be achieved by using implicit methods, but other numerical difficulties are observed. The behavior of typical numerical methods on a simple advection equation with a parameter-dependent source term was studied. Two approaches to incorporate the source term were utilized: MacCormack type predictor-corrector methods with flux limiters, and splitting methods in which the fluid dynamics and chemistry are handled in separate steps. Various comparisons over a wide range of parameter values were made. In the stiff case where the solution contains discontinuities, incorrect numerical propagation speeds are observed with all of the methods considered. This phenomenon is studied and explained.
NASA Astrophysics Data System (ADS)
Bi, Chuan-Xing; Geng, Lin; Zhang, Xiao-Zheng
2016-05-01
In the sound field with multiple non-stationary sources, the measured pressure is the sum of the pressures generated by all sources, and thus cannot be used directly for studying the vibration and sound radiation characteristics of every source alone. This paper proposes a separation model based on the interpolated time-domain equivalent source method (ITDESM) to separate the pressure field belonging to every source from the non-stationary multi-source sound field. In the proposed method, ITDESM is first extended to establish the relationship between the mixed time-dependent pressure and all the equivalent sources distributed on every source with known location and geometry information, and all the equivalent source strengths at each time step are solved by an iterative solving process; then, the corresponding equivalent source strengths of one interested source are used to calculate the pressure field generated by that source alone. Numerical simulation of two baffled circular pistons demonstrates that the proposed method can be effective in separating the non-stationary pressure generated by every source alone in both time and space domains. An experiment with two speakers in a semi-anechoic chamber further evidences the effectiveness of the proposed method.
Wu, Jiang; Li, Jia; Xu, Zhenming
2008-07-15
Electrostatic separation is an effective and environmentally friendly method for recycling comminuted waste printed circuit boards (PCB). As a classical separator, the roll-type corona-electrostatic separator (RTS) has some advantages in this field. However, there are still some notable problems, such as the middling products and their further treatment, impurity of nonconductive products because of the aggregation of fine particles, and stability of the separation process and balance between the production capacity and the separation quality. To overcome these problems, a conception of two-step separation is presented, and a new two-roll type corona-electrostatic separator (T-RTS) was built As compared to RTS, the conductive products increase by 8.9%, the middling products decrease by 45%, and the production capacity increases by 50% in treating comminuted PCB wastes by T-RTS. In addition, the separation process in T-RTS is more stable. Therefore, T-RTS is a promising separator for recycling comminuted PCB.
Imaging of neural oscillations with embedded inferential and group prevalence statistics.
Donhauser, Peter W; Florin, Esther; Baillet, Sylvain
2018-02-01
Magnetoencephalography and electroencephalography (MEG, EEG) are essential techniques for studying distributed signal dynamics in the human brain. In particular, the functional role of neural oscillations remains to be clarified. For that reason, imaging methods need to identify distinct brain regions that concurrently generate oscillatory activity, with adequate separation in space and time. Yet, spatial smearing and inhomogeneous signal-to-noise are challenging factors to source reconstruction from external sensor data. The detection of weak sources in the presence of stronger regional activity nearby is a typical complication of MEG/EEG source imaging. We propose a novel, hypothesis-driven source reconstruction approach to address these methodological challenges. The imaging with embedded statistics (iES) method is a subspace scanning technique that constrains the mapping problem to the actual experimental design. A major benefit is that, regardless of signal strength, the contributions from all oscillatory sources, which activity is consistent with the tested hypothesis, are equalized in the statistical maps produced. We present extensive evaluations of iES on group MEG data, for mapping 1) induced oscillations using experimental contrasts, 2) ongoing narrow-band oscillations in the resting-state, 3) co-modulation of brain-wide oscillatory power with a seed region, and 4) co-modulation of oscillatory power with peripheral signals (pupil dilation). Along the way, we demonstrate several advantages of iES over standard source imaging approaches. These include the detection of oscillatory coupling without rejection of zero-phase coupling, and detection of ongoing oscillations in deeper brain regions, where signal-to-noise conditions are unfavorable. We also show that iES provides a separate evaluation of oscillatory synchronization and desynchronization in experimental contrasts, which has important statistical advantages. The flexibility of iES allows it to be adjusted to many experimental questions in systems neuroscience.
Imaging of neural oscillations with embedded inferential and group prevalence statistics
2018-01-01
Magnetoencephalography and electroencephalography (MEG, EEG) are essential techniques for studying distributed signal dynamics in the human brain. In particular, the functional role of neural oscillations remains to be clarified. For that reason, imaging methods need to identify distinct brain regions that concurrently generate oscillatory activity, with adequate separation in space and time. Yet, spatial smearing and inhomogeneous signal-to-noise are challenging factors to source reconstruction from external sensor data. The detection of weak sources in the presence of stronger regional activity nearby is a typical complication of MEG/EEG source imaging. We propose a novel, hypothesis-driven source reconstruction approach to address these methodological challenges. The imaging with embedded statistics (iES) method is a subspace scanning technique that constrains the mapping problem to the actual experimental design. A major benefit is that, regardless of signal strength, the contributions from all oscillatory sources, which activity is consistent with the tested hypothesis, are equalized in the statistical maps produced. We present extensive evaluations of iES on group MEG data, for mapping 1) induced oscillations using experimental contrasts, 2) ongoing narrow-band oscillations in the resting-state, 3) co-modulation of brain-wide oscillatory power with a seed region, and 4) co-modulation of oscillatory power with peripheral signals (pupil dilation). Along the way, we demonstrate several advantages of iES over standard source imaging approaches. These include the detection of oscillatory coupling without rejection of zero-phase coupling, and detection of ongoing oscillations in deeper brain regions, where signal-to-noise conditions are unfavorable. We also show that iES provides a separate evaluation of oscillatory synchronization and desynchronization in experimental contrasts, which has important statistical advantages. The flexibility of iES allows it to be adjusted to many experimental questions in systems neuroscience. PMID:29408902
Optimization of municipal solid waste collection and transportation routes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Das, Swapan, E-mail: swapan2009sajal@gmail.com; Bhattacharyya, Bidyut Kr., E-mail: bidyut53@yahoo.co.in
2015-09-15
Graphical abstract: Display Omitted - Highlights: • Profitable integrated solid waste management system. • Optimal municipal waste collection scheme between the sources and waste collection centres. • Optimal path calculation between waste collection centres and transfer stations. • Optimal waste routing between the transfer stations and processing plants. - Abstract: Optimization of municipal solid waste (MSW) collection and transportation through source separation becomes one of the major concerns in the MSW management system design, due to the fact that the existing MSW management systems suffer by the high collection and transportation cost. Generally, in a city different waste sources scattermore » throughout the city in heterogeneous way that increase waste collection and transportation cost in the waste management system. Therefore, a shortest waste collection and transportation strategy can effectively reduce waste collection and transportation cost. In this paper, we propose an optimal MSW collection and transportation scheme that focus on the problem of minimizing the length of each waste collection and transportation route. We first formulize the MSW collection and transportation problem into a mixed integer program. Moreover, we propose a heuristic solution for the waste collection and transportation problem that can provide an optimal way for waste collection and transportation. Extensive simulations and real testbed results show that the proposed solution can significantly improve the MSW performance. Results show that the proposed scheme is able to reduce more than 30% of the total waste collection path length.« less
Joint Blind Source Separation by Multi-set Canonical Correlation Analysis
Li, Yi-Ou; Adalı, Tülay; Wang, Wei; Calhoun, Vince D
2009-01-01
In this work, we introduce a simple and effective scheme to achieve joint blind source separation (BSS) of multiple datasets using multi-set canonical correlation analysis (M-CCA) [1]. We first propose a generative model of joint BSS based on the correlation of latent sources within and between datasets. We specify source separability conditions, and show that, when the conditions are satisfied, the group of corresponding sources from each dataset can be jointly extracted by M-CCA through maximization of correlation among the extracted sources. We compare source separation performance of the M-CCA scheme with other joint BSS methods and demonstrate the superior performance of the M-CCA scheme in achieving joint BSS for a large number of datasets, group of corresponding sources with heterogeneous correlation values, and complex-valued sources with circular and non-circular distributions. We apply M-CCA to analysis of functional magnetic resonance imaging (fMRI) data from multiple subjects and show its utility in estimating meaningful brain activations from a visuomotor task. PMID:20221319
Ultra-thin layer chromatography with integrated silver colloid-based SERS detection.
Wallace, Ryan A; Lavrik, Nickolay V; Sepaniak, Michael J
2017-01-01
Simplified lab-on-a-chip techniques are desirable for quick and efficient detection of analytes of interest in the field. The following work involves the use of deterministic pillar arrays on the micro-scale as a platform to separate compounds, and the use of Ag colloid within the arrays as a source of increased signal via surface enhanced Raman spectroscopy (SERS). One problem traditionally seen with SERS surfaces containing Ag colloid is oxidation; however, our platforms are superhydrophobic, reducing the amount of oxidation taking place on the surface of the Ag colloid. This work includes the successful separation and SERS detection of a fluorescent dye compounds (resorufin and sulforhodamine 640), fluorescent anti-tumor drugs (Adriamycin and Daunomycin), and purine and pyrimidine bases (adenine, cytosine, guanine, hypoxanthine, and thymine). © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Instantaneous and Frequency-Warped Signal Processing Techniques for Auditory Source Separation.
NASA Astrophysics Data System (ADS)
Wang, Avery Li-Chun
This thesis summarizes several contributions to the areas of signal processing and auditory source separation. The philosophy of Frequency-Warped Signal Processing is introduced as a means for separating the AM and FM contributions to the bandwidth of a complex-valued, frequency-varying sinusoid p (n), transforming it into a signal with slowly-varying parameters. This transformation facilitates the removal of p (n) from an additive mixture while minimizing the amount of damage done to other signal components. The average winding rate of a complex-valued phasor is explored as an estimate of the instantaneous frequency. Theorems are provided showing the robustness of this measure. To implement frequency tracking, a Frequency-Locked Loop algorithm is introduced which uses the complex winding error to update its frequency estimate. The input signal is dynamically demodulated and filtered to extract the envelope. This envelope may then be remodulated to reconstruct the target partial, which may be subtracted from the original signal mixture to yield a new, quickly-adapting form of notch filtering. Enhancements to the basic tracker are made which, under certain conditions, attain the Cramer -Rao bound for the instantaneous frequency estimate. To improve tracking, the novel idea of Harmonic -Locked Loop tracking, using N harmonically constrained trackers, is introduced for tracking signals, such as voices and certain musical instruments. The estimated fundamental frequency is computed from a maximum-likelihood weighting of the N tracking estimates, making it highly robust. The result is that harmonic signals, such as voices, can be isolated from complex mixtures in the presence of other spectrally overlapping signals. Additionally, since phase information is preserved, the resynthesized harmonic signals may be removed from the original mixtures with relatively little damage to the residual signal. Finally, a new methodology is given for designing linear-phase FIR filters which require a small fraction of the computational power of conventional FIR implementations. This design strategy is based on truncated and stabilized IIR filters. These signal-processing methods have been applied to the problem of auditory source separation, resulting in voice separation from complex music that is significantly better than previous results at far lower computational cost.
Source separation of household waste: a case study in China.
Zhuang, Ying; Wu, Song-Wei; Wang, Yun-Long; Wu, Wei-Xiang; Chen, Ying-Xu
2008-01-01
A pilot program concerning source separation of household waste was launched in Hangzhou, capital city of Zhejiang province, China. Detailed investigations on the composition and properties of household waste in the experimental communities revealed that high water content and high percentage of food waste are the main limiting factors in the recovery of recyclables, especially paper from household waste, and the main contributors to the high cost and low efficiency of waste disposal. On the basis of the investigation, a novel source separation method, according to which household waste was classified as food waste, dry waste and harmful waste, was proposed and performed in four selected communities. In addition, a corresponding household waste management system that involves all stakeholders, a recovery system and a mechanical dehydration system for food waste were constituted to promote source separation activity. Performances and the questionnaire survey results showed that the active support and investment of a real estate company and a community residential committee play important roles in enhancing public participation and awareness of the importance of waste source separation. In comparison with the conventional mixed collection and transportation system of household waste, the established source separation and management system is cost-effective. It could be extended to the entire city and used by other cities in China as a source of reference.
STS-32 OV-102 air revitalization system (ARS) humidity separator problem
NASA Technical Reports Server (NTRS)
1990-01-01
During STS-32, onboard Columbia, Orbiter Vehicle (OV) 102, a leakage problem at environmental control and life support system (ECLSS) air revitalization system (ARS) humidity separator A below the middeck is documented in this closeup view. Note the many bubbles around the separator. The crew cleared out stowage bags, lithium hydroxide (LiOH) cannisters and other materials to get at the problem. It was eventually repaired.
Stadelmann, Stephanie; Perren, Sonja; Groeben, Maureen; von Klitzing, Kai
2010-03-01
In this longitudinal study, we examine whether the effect of parental separation on kindergarten children's behavioral/emotional problems varies according to the level of family conflict, and children's parental representations. One hundred and eighty seven children were assessed at ages 5 and 6. Family conflict was assessed using parents' ratings. Children's parental representations were assessed using a story-stem task. A multiinformant approach (parent, teacher, child) was employed to assess children's behavioral/emotional problems. Bivariate results showed that separation, family conflict, and negative parental representations were associated with children's behavioral/emotional problems. However, in multivariate analyses, when controlling for gender and symptoms at age 5, we found that children of separated parents who showed negative parental representations had a significantly greater increase in conduct problems between 5 and 6 than all other children. In terms of emotional symptoms and hyperactivity, symptoms at 5 and (for hyperactivity only) gender were the only predictors for symptoms 1 year later. Our results suggest that kindergarten children's representations of parent-child relationships moderate the impact of parental separation on the development of conduct problems, and underline play and narration as a possible route to access the thoughts and feelings of young children faced with parental separation.
Separation of Biologically Active Compounds by Membrane Operations.
Zhu, Xiaoying; Bai, Renbi
2017-01-01
Bioactive compounds from various natural sources have been attracting more and more attention, owing to their broad diversity of functionalities and availabilities. However, many of the bioactive compounds often exist at an extremely low concentration in a mixture so that massive harvesting is needed to obtain sufficient amounts for their practical usage. Thus, effective fractionation or separation technologies are essential for the screening and production of the bioactive compound products. The applicatons of conventional processes such as extraction, distillation and lyophilisation, etc. may be tedious, have high energy consumption or cause denature or degradation of the bioactive compounds. Membrane separation processes operate at ambient temperature, without the need for heating and therefore with less energy consumption. The "cold" separation technology also prevents the possible degradation of the bioactive compounds. The separation process is mainly physical and both fractions (permeate and retentate) of the membrane processes may be recovered. Thus, using membrane separation technology is a promising approach to concentrate and separate bioactive compounds. A comprehensive survey of membrane operations used for the separation of bioactive compounds is conducted. The available and established membrane separation processes are introduced and reviewed. The most frequently used membrane processes are the pressure driven ones, including microfiltration (MF), ultrafiltration (UF) and nanofiltration (NF). They are applied either individually as a single sieve or in combination as an integrated membrane array to meet the different requirements in the separation of bioactive compounds. Other new membrane processes with multiple functions have also been developed and employed for the separation or fractionation of bioactive compounds. The hybrid electrodialysis (ED)-UF membrane process, for example has been used to provide a solution for the separation of biomolecules with similar molecular weights but different surface electrical properties. In contrast, the affinity membrane technology is shown to have the advantages of increasing the separation efficiency at low operational pressures through selectively adsorbing bioactive compounds during the filtration process. Individual membranes or membrane arrays are effectively used to separate bioactive compounds or achieve multiple fractionation of them with different molecule weights or sizes. Pressure driven membrane processes are highly efficient and widely used. Membrane fouling, especially irreversible organic and biological fouling, is the inevitable problem. Multifunctional membranes and affinity membranes provide the possibility of effectively separating bioactive compounds that are similar in sizes but different in other physical and chemical properties. Surface modification methods are of great potential to increase membrane separation efficiency as well as reduce the problem of membrane fouling. Developing membranes and optimizing the operational parameters specifically for the applications of separation of various bioactive compounds should be taken as an important part of ongoing or future membrane research in this field. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
Noisy oscillator: Random mass and random damping.
Burov, Stanislav; Gitterman, Moshe
2016-11-01
The problem of a linear damped noisy oscillator is treated in the presence of two multiplicative sources of noise which imply a random mass and random damping. The additive noise and the noise in the damping are responsible for an influx of energy to the oscillator and its dissipation to the surrounding environment. A random mass implies that the surrounding molecules not only collide with the oscillator but may also adhere to it, thereby changing its mass. We present general formulas for the first two moments and address the question of mean and energetic stabilities. The phenomenon of stochastic resonance, i.e., the expansion due to the noise of a system response to an external periodic signal, is considered for separate and joint action of two sources of noise and their characteristics.
Back-trajectory modeling of high time-resolution air measurement data to separate nearby sources
Strategies to isolate air pollution contributions from sources is of interest as voluntary or regulatory measures are undertaken to reduce air pollution. When different sources are located in close proximity to one another and have similar emissions, separating source emissions ...
NASA Astrophysics Data System (ADS)
Gao, Lingli; Pan, Yudi
2018-05-01
The correct estimation of the seismic source signature is crucial to exploration geophysics. Based on seismic interferometry, the virtual real source (VRS) method provides a model-independent way for source signature estimation. However, when encountering multimode surface waves, which are commonly seen in the shallow seismic survey, strong spurious events appear in seismic interferometric results. These spurious events introduce errors in the virtual-source recordings and reduce the accuracy of the source signature estimated by the VRS method. In order to estimate a correct source signature from multimode surface waves, we propose a mode-separated VRS method. In this method, multimode surface waves are mode separated before seismic interferometry. Virtual-source recordings are then obtained by applying seismic interferometry to each mode individually. Therefore, artefacts caused by cross-mode correlation are excluded in the virtual-source recordings and the estimated source signatures. A synthetic example showed that a correct source signature can be estimated with the proposed method, while strong spurious oscillation occurs in the estimated source signature if we do not apply mode separation first. We also applied the proposed method to a field example, which verified its validity and effectiveness in estimating seismic source signature from shallow seismic shot gathers containing multimode surface waves.
A physical classification scheme for blazars
NASA Astrophysics Data System (ADS)
Landt, Hermine; Padovani, Paolo; Perlman, Eric S.; Giommi, Paolo
2004-06-01
Blazars are currently separated into BL Lacertae objects (BL Lacs) and flat spectrum radio quasars based on the strength of their emission lines. This is performed rather arbitrarily by defining a diagonal line in the Ca H&K break value-equivalent width plane, following Marchã et al. We readdress this problem and put the classification scheme for blazars on firm physical grounds. We study ~100 blazars and radio galaxies from the Deep X-ray Radio Blazar Survey (DXRBS) and 2-Jy radio survey and find a significant bimodality for the narrow emission line [OIII]λ5007. This suggests the presence of two physically distinct classes of radio-loud active galactic nuclei (AGN). We show that all radio-loud AGN, blazars and radio galaxies, can be effectively separated into weak- and strong-lined sources using the [OIII]λ5007-[OII]λ3727 equivalent width plane. This plane allows one to disentangle orientation effects from intrinsic variations in radio-loud AGN. Based on DXRBS, the strongly beamed sources of the new class of weak-lined radio-loud AGN are made up of BL Lacs at the ~75 per cent level, whereas those of the strong-lined radio-loud AGN include mostly (~97 per cent) quasars.
NASA Astrophysics Data System (ADS)
Gao, M.; Song, S.; Beig, G.; Zhang, H.; Hu, J.; Ying, Q.; McElroy, M. B.
2017-12-01
Fast urbanization and industrialization in China and India have led to severe ozone pollution, threatening public health in these densely populated countries. We show the spatial and seasonal characteristics of ozone concentrations using nation-wide observations for these two countries in 2013. We used the Weather Research and Forecasting model coupled to chemistry (WRF-Chem) to conduct one-year simulations and to evaluate how current models capture the important photochemical processes using the exhaustive available datasets in China and India, including surface measurements, ozonesonde data and satellite retrievals. We also employed the factor separation approach to distinguish the contributions of different sectors to ozone during different seasons. The back trajectory model FLEXPART was applied to investigate the role of transport in highly polluted regions (e.g., North China Plain, Yangtze River delta, and Pearl River Delta) during different seasons. Preliminary results indicate that the WRF-Chem model provides a satisfactory representation of the temporal and spatial variations of ozone for both China and India. The factor separation approach offers valuable insights into relevant sources of ozone for both countries providing valuable guidance for policy options designed to mitigate the related problem.
NASA Astrophysics Data System (ADS)
Lee, Sang-Young
2017-05-01
Forthcoming wearable/flexible electronics with compelling shape diversity and mobile usability have garnered significant attention as a kind of disruptive technology to drastically change our daily lives. From a power source point of view, conventional rechargeable batteries (represented by lithium-ion batteries) with fixed shapes and dimensions are generally fabricated by winding (or stacking) cell components (such as anodes, cathodes and separator membranes) and then packaging them with (cylindrical-/rectangular-shaped) metallic canisters or pouch films, finally followed by injection of liquid electrolytes. In particular, the use of liquid electrolytes gives rise to serious concerns in cell assembly, because they require strict packaging materials to avoid leakage problems and also separator membranes to prevent electrical contact between electrodes. For these reasons, the conventional cell assembly and materials have pushed the batteries to lack of variety in form factors, thus imposing formidable challenges on their integration into versatile-shaped electronic devices. Here, as a facile and efficient strategy to address the aforementioned longstanding challenge, we demonstrate a new class of printed solid-state Li-ion batteries and also all-inkjet-printed solid-state supercapacitors with exceptional shape conformability and aesthetic versatility which lie far beyond those achievable with conventional battery technologies.
Shashilov, Victor A; Sikirzhytski, Vitali; Popova, Ludmila A; Lednev, Igor K
2010-09-01
Here we report on novel quantitative approaches for protein structural characterization using deep UV resonance Raman (DUVRR) spectroscopy. Specifically, we propose a new method combining hydrogen-deuterium (HD) exchange and Bayesian source separation for extracting the DUVRR signatures of various structural elements of aggregated proteins including the cross-beta core and unordered parts of amyloid fibrils. The proposed method is demonstrated using the set of DUVRR spectra of hen egg white lysozyme acquired at various stages of HD exchange. Prior information about the concentration matrix and the spectral features of the individual components was incorporated into the Bayesian equation to eliminate the ill-conditioning of the problem caused by 100% correlation of the concentration profiles of protonated and deuterated species. Secondary structure fractions obtained by partial least squares (PLS) and least squares support vector machines (LS-SVMs) were used as the initial guess for the Bayessian source separation. Advantages of the PLS and LS-SVMs methods over the classical least squares calibration (CLSC) are discussed and illustrated using the DUVRR data of the prion protein in its native and aggregated forms. Copyright (c) 2010 Elsevier Inc. All rights reserved.
Lansford, Jennifer E.; Malone, Patrick S.; Castellino, Domini R.; Dodge, Kenneth A.; Pettit, Gregory S.; Bates, John E.
2009-01-01
This study examined whether the occurrence and timing of parental separation or divorce was related to trajectories of academic grades and mother- and teacher-reported internalizing and externalizing problems. The authors used hierarchical linear models to estimate trajectories for children who did and did not experience their parents' divorce or separation in kindergarten through 10th grade (N = 194). A novel approach to analyzing the timing of divorce/separation was adopted, and trajectories were estimated from 1 year prior to the divorce/separation to 3 years after the event. Results suggest that early parental divorce/separation is more negatively related to trajectories of internalizing and externalizing problems than is later divorce/separation, whereas later divorce/separation is more negatively related to grades. One implication of these findings is that children may benefit most from interventions focused on preventing internalizing and externalizing problems, whereas adolescents may benefit most from interventions focused on promoting academic achievement. PMID:16756405
Lansford, Jennifer E; Malone, Patrick S; Castellino, Domini R; Dodge, Kenneth A; Pettit, Gregory S; Bates, John E
2006-06-01
This study examined whether the occurrence and timing of parental separation or divorce was related to trajectories of academic grades and mother- and teacher-reported internalizing and externalizing problems. The authors used hierarchical linear models to estimate trajectories for children who did and did not experience their parents' divorce or separation in kindergarten through 10th grade (N = 194). A novel approach to analyzing the timing of divorce/separation was adopted, and trajectories were estimated from 1 year prior to the divorce/separation to 3 years after the event. Results suggest that early parental divorce/separation is more negatively related to trajectories of internalizing and externalizing problems than is later divorce/separation, whereas later divorce/separation is more negatively related to grades. One implication of these findings is that children may benefit most from interventions focused on preventing internalizing and externalizing problems, whereas adolescents may benefit most from interventions focused on promoting academic achievement. ((c) 2006 APA, all rights reserved).
Audio visual speech source separation via improved context dependent association model
NASA Astrophysics Data System (ADS)
Kazemi, Alireza; Boostani, Reza; Sobhanmanesh, Fariborz
2014-12-01
In this paper, we exploit the non-linear relation between a speech source and its associated lip video as a source of extra information to propose an improved audio-visual speech source separation (AVSS) algorithm. The audio-visual association is modeled using a neural associator which estimates the visual lip parameters from a temporal context of acoustic observation frames. We define an objective function based on mean square error (MSE) measure between estimated and target visual parameters. This function is minimized for estimation of the de-mixing vector/filters to separate the relevant source from linear instantaneous or time-domain convolutive mixtures. We have also proposed a hybrid criterion which uses AV coherency together with kurtosis as a non-Gaussianity measure. Experimental results are presented and compared in terms of visually relevant speech detection accuracy and output signal-to-interference ratio (SIR) of source separation. The suggested audio-visual model significantly improves relevant speech classification accuracy compared to existing GMM-based model and the proposed AVSS algorithm improves the speech separation quality compared to reference ICA- and AVSS-based methods.
Automated Conflict Resolution, Arrival Management and Weather Avoidance for ATM
NASA Technical Reports Server (NTRS)
Erzberger, H.; Lauderdale, Todd A.; Chu, Yung-Cheng
2010-01-01
The paper describes a unified solution to three types of separation assurance problems that occur in en-route airspace: separation conflicts, arrival sequencing, and weather-cell avoidance. Algorithms for solving these problems play a key role in the design of future air traffic management systems such as NextGen. Because these problems can arise simultaneously in any combination, it is necessary to develop integrated algorithms for solving them. A unified and comprehensive solution to these problems provides the foundation for a future air traffic management system that requires a high level of automation in separation assurance. The paper describes the three algorithms developed for solving each problem and then shows how they are used sequentially to solve any combination of these problems. The first algorithm resolves loss-of-separation conflicts and is an evolution of an algorithm described in an earlier paper. The new version generates multiple resolutions for each conflict and then selects the one giving the least delay. Two new algorithms, one for sequencing and merging of arrival traffic, referred to as the Arrival Manager, and the other for weather-cell avoidance are the major focus of the paper. Because these three problems constitute a substantial fraction of the workload of en-route controllers, integrated algorithms to solve them is a basic requirement for automated separation assurance. The paper also reviews the Advanced Airspace Concept, a proposed design for a ground-based system that postulates redundant systems for separation assurance in order to achieve both high levels of safety and airspace capacity. It is proposed that automated separation assurance be introduced operationally in several steps, each step reducing controller workload further while increasing airspace capacity. A fast time simulation was used to determine performance statistics of the algorithm at up to 3 times current traffic levels.
scarlet: Source separation in multi-band images by Constrained Matrix Factorization
NASA Astrophysics Data System (ADS)
Melchior, Peter; Moolekamp, Fred; Jerdee, Maximilian; Armstrong, Robert; Sun, Ai-Lei; Bosch, James; Lupton, Robert
2018-03-01
SCARLET performs source separation (aka "deblending") on multi-band images. It is geared towards optical astronomy, where scenes are composed of stars and galaxies, but it is straightforward to apply it to other imaging data. Separation is achieved through a constrained matrix factorization, which models each source with a Spectral Energy Distribution (SED) and a non-parametric morphology, or multiple such components per source. The code performs forced photometry (with PSF matching if needed) using an optimal weight function given by the signal-to-noise weighted morphology across bands. The approach works well if the sources in the scene have different colors and can be further strengthened by imposing various additional constraints/priors on each source. Because of its generic utility, this package provides a stand-alone implementation that contains the core components of the source separation algorithm. However, the development of this package is part of the LSST Science Pipeline; the meas_deblender package contains a wrapper to implement the algorithms here for the LSST stack.
The necessity of recognizing all events in X-ray detection.
Papp, T; Maxwell, J A; Papp, A T
2010-01-01
In our work in studying properties of inner shell ionization, we are troubled that the experimental data used to determine the basic parameters of X-ray physics have a large and unexplainable scatter. As we looked into the problems we found that many of them contradict simple logic, elemental arithmetic, even parity and angular momentum conservation laws. We have identified that the main source of the problems, other than the human factor, is rooted in the signal processing electronics. To overcome these problems we have developed a fully digital signal processor, which not only has excellent resolution and line shape, but also allows proper accounting of all events. This is achieved by processing all events and separating them into two or more spectra (maximum 16), where the first spectrum is the accepted or good spectrum and the second spectrum is the spectrum of all rejected events. The availability of all the events allows one to see the other part of the spectrum. To our surprise the total information explains many of the shortcomings and contradictions of the X-ray database. The data processing methodology cannot be established on the partial and fractional information offered by other approaches. Comparing Monte Carlo detector modeling results with the partial spectra is ambiguous. It suggests that the metrology of calibration by radioactive sources as well as other X-ray measurements could be improved by the availability of the proper accounting of all events. It is not enough to know that an event was rejected and increment the input counter, it is necessary to know, what was rejected and why it happened, whether it was a noise or a disturbed event, a retarded event or a true event, or any pile up combination of these events. Such information is supplied by our processor reporting the events rejected by each discriminator in separate spectra. Several industrial applications of this quality assurance capable signal processor are presented. Copyright 2009 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Wright, L.; Coddington, O.; Pilewskie, P.
2016-12-01
Hyperspectral instruments are a growing class of Earth observing sensors designed to improve remote sensing capabilities beyond discrete multi-band sensors by providing tens to hundreds of continuous spectral channels. Improved spectral resolution, range and radiometric accuracy allow the collection of large amounts of spectral data, facilitating thorough characterization of both atmospheric and surface properties. These new instruments require novel approaches for processing imagery and separating surface and atmospheric signals. One approach is numerical source separation, which allows the determination of the underlying physical causes of observed signals. Improved source separation will enable hyperspectral imagery to better address key science questions relevant to climate change, including land-use changes, trends in clouds and atmospheric water vapor, and aerosol characteristics. We developed an Informed Non-negative Matrix Factorization (INMF) method for separating atmospheric and surface sources. INMF offers marked benefits over other commonly employed techniques including non-negativity, which avoids physically impossible results; and adaptability, which tailors the method to hyperspectral source separation. The INMF algorithm is adapted to separate contributions from physically distinct sources using constraints on spectral and spatial variability, and library spectra to improve the initial guess. We also explore methods to produce an initial guess of the spatial separation patterns. Using this INMF algorithm we decompose hyperspectral imagery from the NASA Hyperspectral Imager for the Coastal Ocean (HICO) with a focus on separating surface and atmospheric signal contributions. HICO's coastal ocean focus provides a dataset with a wide range of atmospheric conditions, including high and low aerosol optical thickness and cloud cover, with only minor contributions from the ocean surfaces in order to isolate the contributions of the multiple atmospheric sources.
Life cycle assessment of a household solid waste source separation programme: a Swedish case study.
Bernstad, Anna; la Cour Jansen, Jes; Aspegren, Henrik
2011-10-01
The environmental impact of an extended property close source-separation system for solid household waste (i.e., a systems for collection of recyclables from domestic properties) is investigated in a residential area in southern Sweden. Since 2001, households have been able to source-separate waste into six fractions of dry recyclables and food waste sorting. The current system was evaluated using the EASEWASTE life cycle assessment tool. Current status is compared with an ideal scenario in which households display perfect source-separation behaviour and a scenario without any material recycling. Results show that current recycling provides substantial environmental benefits compared to a non-recycling alternative. The environmental benefit varies greatly between recyclable fractions, and the recyclables currently most frequently source-separated by households are often not the most beneficial from an environmental perspective. With optimal source-separation of all recyclables, the current net contribution to global warming could be changed to a net-avoidance while current avoidance of nutrient enrichment, acidification and photochemical ozone formation could be doubled. Sensitivity analyses show that the type of energy substituted by incineration of non-recycled waste, as well as energy used in recycling processes and in the production of materials substituted by waste recycling, is of high relevance for the attained results.
NASA Astrophysics Data System (ADS)
Araújo, Iván Gómez; Sánchez, Jesús Antonio García; Andersen, Palle
2018-05-01
Transmissibility-based operational modal analysis is a recent and alternative approach used to identify the modal parameters of structures under operational conditions. This approach is advantageous compared with traditional operational modal analysis because it does not make any assumptions about the excitation spectrum (i.e., white noise with a flat spectrum). However, common methodologies do not include a procedure to extract closely spaced modes with low signal-to-noise ratios. This issue is relevant when considering that engineering structures generally have closely spaced modes and that their measured responses present high levels of noise. Therefore, to overcome these problems, a new combined method for modal parameter identification is proposed in this work. The proposed method combines blind source separation (BSS) techniques and transmissibility-based methods. Here, BSS techniques were used to recover source signals, and transmissibility-based methods were applied to estimate modal information from the recovered source signals. To achieve this combination, a new method to define a transmissibility function was proposed. The suggested transmissibility function is based on the relationship between the power spectral density (PSD) of mixed signals and the PSD of signals from a single source. The numerical responses of a truss structure with high levels of added noise and very closely spaced modes were processed using the proposed combined method to evaluate its ability to identify modal parameters in these conditions. Colored and white noise excitations were used for the numerical example. The proposed combined method was also used to evaluate the modal parameters of an experimental test on a structure containing closely spaced modes. The results showed that the proposed combined method is capable of identifying very closely spaced modes in the presence of noise and, thus, may be potentially applied to improve the identification of damping ratios.
Improving the Nulling Beamformer Using Subspace Suppression.
Rana, Kunjan D; Hämäläinen, Matti S; Vaina, Lucia M
2018-01-01
Magnetoencephalography (MEG) captures the magnetic fields generated by neuronal current sources with sensors outside the head. In MEG analysis these current sources are estimated from the measured data to identify the locations and time courses of neural activity. Since there is no unique solution to this so-called inverse problem, multiple source estimation techniques have been developed. The nulling beamformer (NB), a modified form of the linearly constrained minimum variance (LCMV) beamformer, is specifically used in the process of inferring interregional interactions and is designed to eliminate shared signal contributions, or cross-talk, between regions of interest (ROIs) that would otherwise interfere with the connectivity analyses. The nulling beamformer applies the truncated singular value decomposition (TSVD) to remove small signal contributions from a ROI to the sensor signals. However, ROIs with strong crosstalk will have high separating power in the weaker components, which may be removed by the TSVD operation. To address this issue we propose a new method, the nulling beamformer with subspace suppression (NBSS). This method, controlled by a tuning parameter, reweights the singular values of the gain matrix mapping from source to sensor space such that components with high overlap are reduced. By doing so, we are able to measure signals between nearby source locations with limited cross-talk interference, allowing for reliable cortical connectivity analysis between them. In two simulations, we demonstrated that NBSS reduces cross-talk while retaining ROIs' signal power, and has higher separating power than both the minimum norm estimate (MNE) and the nulling beamformer without subspace suppression. We also showed that NBSS successfully localized the auditory M100 event-related field in primary auditory cortex, measured from a subject undergoing an auditory localizer task, and suppressed cross-talk in a nearby region in the superior temporal sulcus.
NASA Astrophysics Data System (ADS)
Capuano, Paolo; De Lauro, Enza; De Martino, Salvatore; Falanga, Mariarosaria; Petrosino, Simona
2015-04-01
One of the main challenge in volcano-seismological literature is to locate and characterize the source of volcano/tectonic seismic activity. This passes through the identification at least of the onset of the main phases, i.e. the body waves. Many efforts have been made to solve the problem of a clear separation of P and S phases both from a theoretical point of view and developing numerical algorithms suitable for specific cases (see, e.g., Küperkoch et al., 2012). Recently, a robust automatic procedure has been implemented for extracting the prominent seismic waveforms from continuously recorded signals and thus allowing for picking the main phases. The intuitive notion of maximum non-gaussianity is achieved adopting techniques which involve higher-order statistics in frequency domain., i.e, the Convolutive Independent Component Analysis (CICA). This technique is successful in the case of the blind source separation of convolutive mixtures. In seismological framework, indeed, seismic signals are thought as the convolution of a source function with path, site and the instrument response. In addition, time-delayed versions of the same source exist, due to multipath propagation typically caused by reverberations from some obstacle. In this work, we focus on the Volcano Tectonic (VT) activity at Campi Flegrei Caldera (Italy) during the 2006 ground uplift (Ciaramella et al., 2011). The activity was characterized approximately by 300 low-magnitude VT earthquakes (Md < 2; for the definition of duration magnitude, see Petrosino et al. 2008). Most of them were concentrated in distinct seismic sequences with hypocenters mainly clustered beneath the Solfatara-Accademia area, at depths ranging between 1 and 4 km b.s.l.. The obtained results show the clear separation of P and S phases: the technique not only allows the identification of the S-P time delay giving the timing of both phases but also provides the independent waveforms of the P and S phases. This is an enormous advantage for all the problems related to the source inversion and location In addition, the VT seismicity was accompanied by hundreds of LP events (characterized by spectral peaks in the 0.5-2-Hz frequency band) that were concentrated in a 7-day interval. The main interest is to establish whether the occurrence of LPs is only limited to the swarm that reached a climax on days 26-28 October as indicated by Saccorotti et al. (2007), or a longer period is experienced. The automatically extracted waveforms with improved signal-to-noise ratio via CICA coupled with automatic phase picking allowed to compile a more complete seismic catalog and to better quantify the seismic energy release including the presence of LP events from the beginning of October until mid of November. Finally, a further check of the volcanic nature of extracted signals is achieved by looking at the seismological properties and the content of entropy held in the traces (Falanga and Petrosino 2012; De Lauro et al., 2012). Our results allow us to move towards a full description of the complexity of the source, which can be used for hazard-model development and forecast-model testing, showing an illustrative example of the applicability of the CICA method to regions with low seismicity in high ambient noise
Wang, Sheng H; Lobier, Muriel; Siebenhühner, Felix; Puoliväli, Tuomas; Palva, Satu; Palva, J Matias
2018-06-01
Inter-areal functional connectivity (FC), neuronal synchronization in particular, is thought to constitute a key systems-level mechanism for coordination of neuronal processing and communication between brain regions. Evidence to support this hypothesis has been gained largely using invasive electrophysiological approaches. In humans, neuronal activity can be non-invasively recorded only with magneto- and electroencephalography (MEG/EEG), which have been used to assess FC networks with high temporal resolution and whole-scalp coverage. However, even in source-reconstructed MEG/EEG data, signal mixing, or "source leakage", is a significant confounder for FC analyses and network localization. Signal mixing leads to two distinct kinds of false-positive observations: artificial interactions (AI) caused directly by mixing and spurious interactions (SI) arising indirectly from the spread of signals from true interacting sources to nearby false loci. To date, several interaction metrics have been developed to solve the AI problem, but the SI problem has remained largely intractable in MEG/EEG all-to-all source connectivity studies. Here, we advance a novel approach for correcting SIs in FC analyses using source-reconstructed MEG/EEG data. Our approach is to bundle observed FC connections into hyperedges by their adjacency in signal mixing. Using realistic simulations, we show here that bundling yields hyperedges with good separability of true positives and little loss in the true positive rate. Hyperedge bundling thus significantly decreases graph noise by minimizing the false-positive to true-positive ratio. Finally, we demonstrate the advantage of edge bundling in the visualization of large-scale cortical networks with real MEG data. We propose that hypergraphs yielded by bundling represent well the set of true cortical interactions that are detectable and dissociable in MEG/EEG connectivity analysis. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
Laplace Boundary-Value Problem in Paraboloidal Coordinates
ERIC Educational Resources Information Center
Duggen, L.; Willatzen, M.; Voon, L. C. Lew Yan
2012-01-01
This paper illustrates both a problem in mathematical physics, whereby the method of separation of variables, while applicable, leads to three ordinary differential equations that remain fully coupled via two separation constants and a five-term recurrence relation for series solutions, and an exactly solvable problem in electrostatics, as a…
Supramolecular complexation for environmental control.
Albelda, M Teresa; Frías, Juan C; García-España, Enrique; Schneider, Hans-Jörg
2012-05-21
Supramolecular complexes offer a new and efficient way for the monitoring and removal of many substances emanating from technical processes, fertilization, plant and animal protection, or e.g. chemotherapy. Such pollutants range from toxic or radioactive metal ions and anions to chemical side products, herbicides, pesticides to drugs including steroids, and include degradation products from natural sources. The applications involve usually fast and reversible complex formation, due to prevailing non-covalent interactions. This is of importance for sensing as well as for separation techniques, where the often expensive host compounds can then be reused almost indefinitely. Immobilization of host compounds, e.g. on exchange resins or on membranes, and their implementation in smart new materials hold particular promise. The review illustrates how the design of suitable host compounds in combination with modern sensing and separation methods can contribute to solve some of the biggest problems facing chemistry, which arise from the everyday increasing pollution of the environment.
Thermochemical water decomposition. [hydrogen separation for energy applications
NASA Technical Reports Server (NTRS)
Funk, J. E.
1977-01-01
At present, nearly all of the hydrogen consumed in the world is produced by reacting hydrocarbons with water. As the supply of hydrocarbons diminishes, the problem of producing hydrogen from water alone will become increasingly important. Furthermore, producing hydrogen from water is a means of energy conversion by which thermal energy from a primary source, such as solar or nuclear fusion of fission, can be changed into an easily transportable and ecologically acceptable fuel. The attraction of thermochemical processes is that they offer the potential for converting thermal energy to hydrogen more efficiently than by water electrolysis. A thermochemical hydrogen-production process is one which requires only water as material input and mainly thermal energy, or heat, as an energy input. Attention is given to a definition of process thermal efficiency, the thermodynamics of the overall process, the single-stage process, the two-stage process, multistage processes, the work of separation and a process evaluation.
NASA Astrophysics Data System (ADS)
Saito, Takahiro; Takahashi, Hiromi; Komatsu, Takashi
2006-02-01
The Retinex theory was first proposed by Land, and deals with separation of irradiance from reflectance in an observed image. The separation problem is an ill-posed problem. Land and others proposed various Retinex separation algorithms. Recently, Kimmel and others proposed a variational framework that unifies the previous Retinex algorithms such as the Poisson-equation-type Retinex algorithms developed by Horn and others, and presented a Retinex separation algorithm with the time-evolution of a linear diffusion process. However, the Kimmel's separation algorithm cannot achieve physically rational separation, if true irradiance varies among color channels. To cope with this problem, we introduce a nonlinear diffusion process into the time-evolution. Moreover, as to its extension to color images, we present two approaches to treat color channels: the independent approach to treat each color channel separately and the collective approach to treat all color channels collectively. The latter approach outperforms the former. Furthermore, we apply our separation algorithm to a high quality chroma key in which before combining a foreground frame and a background frame into an output image a color of each pixel in the foreground frame are spatially adaptively corrected through transformation of the separated irradiance. Experiments demonstrate superiority of our separation algorithm over the Kimmel's separation algorithm.
Partial information decomposition as a spatiotemporal filter.
Flecker, Benjamin; Alford, Wesley; Beggs, John M; Williams, Paul L; Beer, Randall D
2011-09-01
Understanding the mechanisms of distributed computation in cellular automata requires techniques for characterizing the emergent structures that underlie information processing in such systems. Recently, techniques from information theory have been brought to bear on this problem. Building on this work, we utilize the new technique of partial information decomposition to show that previous information-theoretic measures can confound distinct sources of information. We then propose a new set of filters and demonstrate that they more cleanly separate out the background domains, particles, and collisions that are typically associated with information storage, transfer, and modification in cellular automata.
Lake water quality mapping from LANDSAT
NASA Technical Reports Server (NTRS)
Scherz, J. P.
1977-01-01
The lakes in three LANDSAT scenes were mapped by the Bendix MDAS multispectral analysis system. Field checking the maps by three separate individuals revealed approximately 90-95% correct classification for the lake categories selected. Variations between observers was about 5%. From the MDAS color coded maps the lake with the worst algae problem was easily located. This lake was closely checked and a pollution source of 100 cows was found in the springs which fed this lake. The theory, lab work and field work which made it possible for this demonstration project to be a practical lake classification procedure are presented.
Induced natural convection thermal cycling device
Heung, Leung Kit [Aiken, SC
2002-08-13
A device for separating gases, especially isotopes, by thermal cycling of a separation column using a pressure vessel mounted vertically and having baffled sources for cold and heat. Coils at the top are cooled with a fluid such as liquid nitrogen. Coils at the bottom are either electrical resistance coils or a tubular heat exchange. The sources are shrouded with an insulated "top hat" and simultaneously opened and closed at the outlets to cool or heat the separation column. Alternatively, the sources for cold and heat are mounted separately outside the vessel and an external loop is provided for each circuit.
46 CFR 129.395 - Radio installations.
Code of Federal Regulations, 2014 CFR
2014-10-01
... INSTALLATIONS Power Sources and Distribution Systems § 129.395 Radio installations. A separate circuit, with... radios, if installed, may be powered from a local lighting power source, such as the pilothouse lighting panel, provided each radio power source has a separate overcurrent protection device. ...
46 CFR 129.395 - Radio installations.
Code of Federal Regulations, 2011 CFR
2011-10-01
... INSTALLATIONS Power Sources and Distribution Systems § 129.395 Radio installations. A separate circuit, with... radios, if installed, may be powered from a local lighting power source, such as the pilothouse lighting panel, provided each radio power source has a separate overcurrent protection device. ...
46 CFR 129.395 - Radio installations.
Code of Federal Regulations, 2012 CFR
2012-10-01
... INSTALLATIONS Power Sources and Distribution Systems § 129.395 Radio installations. A separate circuit, with... radios, if installed, may be powered from a local lighting power source, such as the pilothouse lighting panel, provided each radio power source has a separate overcurrent protection device. ...
46 CFR 129.395 - Radio installations.
Code of Federal Regulations, 2013 CFR
2013-10-01
... INSTALLATIONS Power Sources and Distribution Systems § 129.395 Radio installations. A separate circuit, with... radios, if installed, may be powered from a local lighting power source, such as the pilothouse lighting panel, provided each radio power source has a separate overcurrent protection device. ...
NASA Astrophysics Data System (ADS)
Perminov, A. V.; Nikulin, I. L.
2016-03-01
We propose a mathematical model describing the motion of a metal melt in a variable inhomogeneous magnetic field of a short solenoid. In formulating the problem, we made estimates and showed the possibility of splitting the complete magnetohydrodynamical problem into two subproblems: a magnetic field diffusion problem where the distributions of the external and induced magnetic fields and currents are determined, and a heat and mass transfer problem with known distributions of volume sources of heat and forces. The dimensionless form of the heat and mass transfer equation was obtained with the use of averaging and multiscale methods, which permitted writing and solving separately the equations for averaged flows and temperature fields and their oscillations. For the heat and mass transfer problem, the boundary conditions for a real technological facility are discussed. The dimensionless form of the magnetic field diffusion equation is presented, and the experimental computational procedure and results of the numerical simulation of the magnetic field structure in the melt for various magnetic Reynolds numbers are described. The extreme dependence of heat release on the magnetic Reynolds number has been interpreted.
Adam, Emma K; Chase-Lansdale, P Lindsay
2002-09-01
Associations between histories of family disruption (residential moves and separations from parent figures) and adolescent adjustment (including educational, internalizing, externalizing, and sexual behavior outcomes) were examined in a random sample of 267 African American girls from 3 urban poverty neighborhoods. Higher numbers of residential moves and parental separations significantly predicted greater adolescent adjustment problems after household demographic characteristics were controlled. Adolescents' perceptions of their current relationships and neighborhoods were significantly associated with adolescent adjustment but did not mediate the effects of family disruption. Associations between parental separations and adolescent outcomes were strongest for externalizing problems and were found for both male and female caregivers, for long-standing and more temporary caregivers, and for separations in early childhood, middle childhood, and adolescence.
Cohen, Michael X
2017-09-27
The number of simultaneously recorded electrodes in neuroscience is steadily increasing, providing new opportunities for understanding brain function, but also new challenges for appropriately dealing with the increase in dimensionality. Multivariate source separation analysis methods have been particularly effective at improving signal-to-noise ratio while reducing the dimensionality of the data and are widely used for cleaning, classifying and source-localizing multichannel neural time series data. Most source separation methods produce a spatial component (that is, a weighted combination of channels to produce one time series); here, this is extended to apply source separation to a time series, with the idea of obtaining a weighted combination of successive time points, such that the weights are optimized to satisfy some criteria. This is achieved via a two-stage source separation procedure, in which an optimal spatial filter is first constructed and then its optimal temporal basis function is computed. This second stage is achieved with a time-delay-embedding matrix, in which additional rows of a matrix are created from time-delayed versions of existing rows. The optimal spatial and temporal weights can be obtained by solving a generalized eigendecomposition of covariance matrices. The method is demonstrated in simulated data and in an empirical electroencephalogram study on theta-band activity during response conflict. Spatiotemporal source separation has several advantages, including defining empirical filters without the need to apply sinusoidal narrowband filters. © 2017 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Improved Multiple-Species Cyclotron Ion Source
NASA Technical Reports Server (NTRS)
Soli, George A.; Nichols, Donald K.
1990-01-01
Use of pure isotope 86Kr instead of natural krypton in multiple-species ion source enables source to produce krypton ions separated from argon ions by tuning cylcotron with which source used. Addition of capability to produce and separate krypton ions at kinetic energies of 150 to 400 MeV necessary for simulation of worst-case ions occurring in outer space.
Parameterizing unresolved obstacles with source terms in wave modeling: A real-world application
NASA Astrophysics Data System (ADS)
Mentaschi, Lorenzo; Kakoulaki, Georgia; Vousdoukas, Michalis; Voukouvalas, Evangelos; Feyen, Luc; Besio, Giovanni
2018-06-01
Parameterizing the dissipative effects of small, unresolved coastal features, is fundamental to improve the skills of wave models. The established technique to deal with this problem consists in reducing the amount of energy advected within the propagation scheme, and is currently available only for regular grids. To find a more general approach, Mentaschi et al., 2015b formulated a technique based on source terms, and validated it on synthetic case studies. This technique separates the parameterization of the unresolved features from the energy advection, and can therefore be applied to any numerical scheme and to any type of mesh. Here we developed an open-source library for the estimation of the transparency coefficients needed by this approach, from bathymetric data and for any type of mesh. The spectral wave model WAVEWATCH III was used to show that in a real-world domain, such as the Caribbean Sea, the proposed approach has skills comparable and sometimes better than the established propagation-based technique.
Design of an Image Fusion Phantom for a Small Animal microPET/CT Scanner Prototype
NASA Astrophysics Data System (ADS)
Nava-García, Dante; Alva-Sánchez, Héctor; Murrieta-Rodríguez, Tirso; Martínez-Dávalos, Arnulfo; Rodríguez-Villafuerte, Mercedes
2010-12-01
Two separate microtomography systems recently developed at Instituto de Física, UNAM, produce anatomical (microCT) and physiological images (microPET) of small animals. In this work, the development and initial tests of an image fusion method based on fiducial markers for image registration between the two modalities are presented. A modular Helix/Line-Sources phantom was designed and constructed; this phantom contains fiducial markers that can be visualized in both imaging systems. The registration was carried out by solving the rigid body alignment problem of Procrustes to obtain rotation and translation matrices required to align the two sets of images. The microCT/microPET image fusion of the Helix/Line-Sources phantom shows excellent visual coincidence between different structures, showing a calculated target-registration-error of 0.32 mm.
STS-32 OV-102 air revitalization system (ARS) humidity separator problem
1990-01-20
During STS-32, onboard Columbia, Orbiter Vehicle (OV) 102, a leakage problem at environmental control and life support system (ECLSS) air revitalization system (ARS) humidity separator A below the middeck is solved with a plastic bag and a towel. The towel inserted inside a plastic bag absorbed the water that had collected at the separator inlet.
STS-32 OV-102 air revitalization system (ARS) humidity separator problem
NASA Technical Reports Server (NTRS)
1990-01-01
During STS-32, onboard Columbia, Orbiter Vehicle (OV) 102, a leakage problem at environmental control and life support system (ECLSS) air revitalization system (ARS) humidity separator A below the middeck is solved with a plastic bag and a towel. The towel inserted inside a plastic bag absorbed the water that had collected at the separator inlet.
Rapid determination of 210Po in water samples
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maxwell, Sherrod L.; Culligan, Brian K.; Hutchison, Jay B.
2013-08-02
A new rapid method for the determination of 210Po in water samples has been developed at the Savannah River National Laboratory (SRNL) that can be used for emergency response or routine water analyses. If a radiological dispersive device (RDD) event or a radiological attack associated with drinking water supplies occurs, there will be an urgent need for rapid analyses of water samples, including drinking water, ground water and other water effluents. Current analytical methods for the assay of 210Po in water samples have typically involved spontaneous auto-deposition of 210Po onto silver or other metal disks followed by counting by alphamore » spectrometry. The auto-deposition times range from 90 minutes to 24 hours or more, at times with yields that may be less than desirable. If sample interferences are present, decreased yields and degraded alpha spectrums can occur due to unpredictable thickening in the deposited layer. Separation methods have focused on the use of Sr Resin, often in combination with 210Pb analysis. A new rapid method for 210Po in water samples has been developed at the Savannah River National Laboratory (SRNL) that utilizes a rapid calcium phosphate co-precipitation method, separation using DGA Resin (N,N,N,N-tetraoctyldiglycolamide extractant-coated resin, Eichrom Technologies or Triskem-International), followed by rapid microprecipitation of 210Po using bismuth phosphate for counting by alpha spectrometry. This new method can be performed quickly with excellent removal of interferences, high chemical yields and very good alpha peak resolution, eliminating any potential problems with the alpha source preparation for emergency or routine samples. A rapid sequential separation method to separate 210Po and actinide isotopes was also developed. This new approach, rapid separation with DGA Resin plus microprecipitation for alpha source preparation, is a significant advance in radiochemistry for the rapid determination of 210Po.« less
Kurtosis Approach for Nonlinear Blind Source Separation
NASA Technical Reports Server (NTRS)
Duong, Vu A.; Stubbemd, Allen R.
2005-01-01
In this paper, we introduce a new algorithm for blind source signal separation for post-nonlinear mixtures. The mixtures are assumed to be linearly mixed from unknown sources first and then distorted by memoryless nonlinear functions. The nonlinear functions are assumed to be smooth and can be approximated by polynomials. Both the coefficients of the unknown mixing matrix and the coefficients of the approximated polynomials are estimated by the gradient descent method conditional on the higher order statistical requirements. The results of simulation experiments presented in this paper demonstrate the validity and usefulness of our approach for nonlinear blind source signal separation.
NASA Astrophysics Data System (ADS)
Rolsma, Caleb
As a class of carbon-based nanomaterials, single-walled carbon nanotubes (SWNT) have many structural variations, called chiralities, each with different properties. Many potential applications of SWNT require the properties of a single chirality, but current synthesis methods can only produce single chiralities at prohibitive costs, or mixtures of chiralities at more affordable prices. Post-synthesis chirality separations provide a solution to this problem, and hydrogel separations are one such method. Despite much work in this field, the underlying interactions between SWNT and hydrogel are not fully understood. During separation, large quantities of SWNT are irretrievably lost due to irreversible adsorption to the hydrogel, posing a major problem to separation efficiency, while also offering an interesting scientific problem concerning the interaction of SWNT with hydrogels and surfactants. This thesis explores the problem of irreversible adsorption, offering an explanation for the process from a mechanistic viewpoint, opening new ways for improvement in separation. In brief, this work concludes adsorption follows three pathways, two of which lead to irreversible adsorption, both mediated by the presence of surfactants and limited by characteristics of the hydrogel surface. These findings stand to increase the general understanding of hydrogel SWNT separations, leading to improvements in separation, and bringing the research field closer to the many potential applications of single-chirality SWNT.
Systematic study of target localization for bioluminescence tomography guided radiation therapy
Yu, Jingjing; Zhang, Bin; Iordachita, Iulian I.; Reyes, Juvenal; Lu, Zhihao; Brock, Malcolm V.; Patterson, Michael S.; Wong, John W.
2016-01-01
Purpose: To overcome the limitation of CT/cone-beam CT (CBCT) in guiding radiation for soft tissue targets, the authors developed a spectrally resolved bioluminescence tomography (BLT) system for the small animal radiation research platform. The authors systematically assessed the performance of the BLT system in terms of target localization and the ability to resolve two neighboring sources in simulations, tissue-mimicking phantom, and in vivo environments. Methods: Multispectral measurements acquired in a single projection were used for the BLT reconstruction. The incomplete variables truncated conjugate gradient algorithm with an iterative permissible region shrinking strategy was employed as the optimization scheme to reconstruct source distributions. Simulation studies were conducted for single spherical sources with sizes from 0.5 to 3 mm radius at depth of 3–12 mm. The same configuration was also applied for the double source simulation with source separations varying from 3 to 9 mm. Experiments were performed in a standalone BLT/CBCT system. Two self-illuminated sources with 3 and 4.7 mm separations placed inside a tissue-mimicking phantom were chosen as the test cases. Live mice implanted with single-source at 6 and 9 mm depth, two sources at 3 and 5 mm separation at depth of 5 mm, or three sources in the abdomen were also used to illustrate the localization capability of the BLT system for multiple targets in vivo. Results: For simulation study, approximate 1 mm accuracy can be achieved at localizing center of mass (CoM) for single-source and grouped CoM for double source cases. For the case of 1.5 mm radius source, a common tumor size used in preclinical study, their simulation shows that for all the source separations considered, except for the 3 mm separation at 9 and 12 mm depth, the two neighboring sources can be resolved at depths from 3 to 12 mm. Phantom experiments illustrated that 2D bioluminescence imaging failed to distinguish two sources, but BLT can provide 3D source localization with approximately 1 mm accuracy. The in vivo results are encouraging that 1 and 1.7 mm accuracy can be attained for the single-source case at 6 and 9 mm depth, respectively. For the 2 sources in vivo study, both sources can be distinguished at 3 and 5 mm separations, and approximately 1 mm localization accuracy can also be achieved. Conclusions: This study demonstrated that their multispectral BLT/CBCT system could be potentially applied to localize and resolve multiple sources at wide range of source sizes, depths, and separations. The average accuracy of localizing CoM for single-source and grouped CoM for double sources is approximately 1 mm except deep-seated target. The information provided in this study can be instructive to devise treatment margins for BLT-guided irradiation. These results also suggest that the 3D BLT system could guide radiation for the situation with multiple targets, such as metastatic tumor models. PMID:27147371
Liao, Wei; Hua, Xue-Ming; Zhang, Wang; Li, Fang
2014-05-01
In the present paper, the authors calculated the plasma's peak electron temperatures under different heat source separation distance in laser- pulse GMAW hybrid welding based on Boltzmann spectrometry. Plasma's peak electron densities under the corresponding conditions were also calculated by using the Stark width of the plasma spectrum. Combined with high-speed photography, the effect of heat source separation distance on electron temperature and electron density was studied. The results show that with the increase in heat source separation distance, the electron temperatures and electron densities of laser plasma did not changed significantly. However, the electron temperatures of are plasma decreased, and the electron densities of are plasma first increased and then decreased.
Albert, Carlo; Ulzega, Simone; Stoop, Ruedi
2016-04-01
Parameter inference is a fundamental problem in data-driven modeling. Given observed data that is believed to be a realization of some parameterized model, the aim is to find parameter values that are able to explain the observed data. In many situations, the dominant sources of uncertainty must be included into the model for making reliable predictions. This naturally leads to stochastic models. Stochastic models render parameter inference much harder, as the aim then is to find a distribution of likely parameter values. In Bayesian statistics, which is a consistent framework for data-driven learning, this so-called posterior distribution can be used to make probabilistic predictions. We propose a novel, exact, and very efficient approach for generating posterior parameter distributions for stochastic differential equation models calibrated to measured time series. The algorithm is inspired by reinterpreting the posterior distribution as a statistical mechanics partition function of an object akin to a polymer, where the measurements are mapped on heavier beads compared to those of the simulated data. To arrive at distribution samples, we employ a Hamiltonian Monte Carlo approach combined with a multiple time-scale integration. A separation of time scales naturally arises if either the number of measurement points or the number of simulation points becomes large. Furthermore, at least for one-dimensional problems, we can decouple the harmonic modes between measurement points and solve the fastest part of their dynamics analytically. Our approach is applicable to a wide range of inference problems and is highly parallelizable.
Yuan, Yalin; Yabe, Mitsuyasu
2014-01-01
A source separation program for household kitchen waste has been in place in Beijing since 2010. However, the participation rate of residents is far from satisfactory. This study was carried out to identify residents’ preferences based on an improved management strategy for household kitchen waste source separation. We determine the preferences of residents in an ad hoc sample, according to their age level, for source separation services and their marginal willingness to accept compensation for the service attributes. We used a multinomial logit model to analyze the data, collected from 394 residents in Haidian and Dongcheng districts of Beijing City through a choice experiment. The results show there are differences of preferences on the services attributes between young, middle, and old age residents. Low compensation is not a major factor to promote young and middle age residents accept the proposed separation services. However, on average, most of them prefer services with frequent, evening, plastic bag attributes and without instructor. This study indicates that there is a potential for local government to improve the current separation services accordingly. PMID:25546279
Ion current detector for high pressure ion sources for monitoring separations
Smith, R.D.; Wahl, J.H.; Hofstadler, S.A.
1996-08-13
The present invention relates generally to any application involving the monitoring of signal arising from ions produced by electrospray or other high pressure (>100 torr) ion sources. The present invention relates specifically to an apparatus and method for the detection of ions emitted from a capillary electrophoresis (CE) system, liquid chromatography, or other small-scale separation methods. And further, the invention provides a very simple diagnostic as to the quality of the separation and the operation of an electrospray source. 7 figs.
Ion current detector for high pressure ion sources for monitoring separations
Smith, Richard D.; Wahl, Jon H.; Hofstadler, Steven A.
1996-01-01
The present invention relates generally to any application involving the monitoring of signal arising from ions produced by electrospray or other high pressure (>100 torr) ion sources. The present invention relates specifically to an apparatus and method for the detection of ions emitted from a capillary electrophoresis (CE) system, liquid chromatography, or other small-scale separation methods. And further, the invention provides a very simple diagnostic as to the quality of the separation and the operation of an electrospray source.
On a young-elderly support system maintained in separation in urban areas.
Wang, S
1995-01-01
The model of economic support for the elderly in China is moving from extended family support in cohabitation to a separation between aged and kin households. This article analyzes the nature and problems with a young-old support system based on separation of households. The breakdown in the traditional support system for the elderly is attributed to family control policy, the rapid pace of economic reforms, and modernization influences from abroad. In 1993 only 27.2% of households in Beijing comprised young or middle-aged persons living with the elderly. Today, the empty nest period lasts almost 30 years. The 1993 Beijing survey on aging found that value orientations were different between the young and the old. The greatest conflict in values was found to be over children's education. Parents preferred strict discipline, while grandparents tended to spoil grandchildren. Cross-generational differences were evident in general living arrangements, patterns of consumption, recreational activities, social exchanges, and opinions about social life. There was a stated desire to maintain privacy and live separately from parents. There was a shift in the nature of support for the elderly. Today, support tends to emphasize spiritual comfort. The aged in the 1993 survey reported the value of children as preventing solitude, having a complete family, bringing happiness, and a number of other reasons. The highest rankings favored children as a source of psychological comfort. The health of the elderly is improved, which allows for their greater independence in living. Many elderly in Beijing (77%) have a relatively stable income from pensions. Findings from a 1993 Western District survey are quoted as showing that 84.2% of young and middle-aged people are willing to provide financial support, but only 31.3% do so. The average level of support for the elderly was 33.7% of total household monthly income. Problems are identified as the financial burden on low-income young and middle-aged people, the heavy burden of household chores among those supporting the elderly, and a negative impact on study, work, and family relations from elder care. Five specific suggestions are made to overcome these problems.
Fehr, M
2014-09-01
Business opportunities in the household waste sector in emerging economies still evolve around the activities of bulk collection and tipping with an open material balance. This research, conducted in Brazil, pursued the objective of shifting opportunities from tipping to reverse logistics in order to close the balance. To do this, it illustrated how specific knowledge of sorted waste composition and reverse logistics operations can be used to determine realistic temporal and quantitative landfill diversion targets in an emerging economy context. Experimentation constructed and confirmed the recycling trilogy that consists of source separation, collection infrastructure and reverse logistics. The study on source separation demonstrated the vital difference between raw and sorted waste compositions. Raw waste contained 70% biodegradable and 30% inert matter. Source separation produced 47% biodegradable, 20% inert and 33% mixed material. The study on collection infrastructure developed the necessary receiving facilities. The study on reverse logistics identified private operators capable of collecting and processing all separated inert items. Recycling activities for biodegradable material were scarce and erratic. Only farmers would take the material as animal feed. No composting initiatives existed. The management challenge was identified as stimulating these activities in order to complete the trilogy and divert the 47% source-separated biodegradable discards from the landfills. © The Author(s) 2014.
Thurston, Idia B; Hardin, Robin; Decker, Kristina; Arnold, Trisha; Howell, Kathryn H; Phares, Vicky
2018-01-01
Understanding social and environmental factors that contribute to parental help-seeking intentions is an important step in addressing service underutilization for children in need of treatment. This study examined factors that contribute to parents' intentions to seek formal and informal help for child psychopathology (anxiety and attention-deficit/hyperactivity disorder [ADHD]). A total of 251 parents (N = 128 mothers, N = 123 fathers; 49% Black, 51% White) read 3 vignettes describing children with anxiety, ADHD, and no diagnosis. Measures of problem recognition, perceived barriers, and formal (pediatricians, psychologists, teachers) and informal (religious leaders, family/friends, self-help) help seeking were completed. Four separate hierarchical logistic regression models were used to examine parental help-seeking likelihood from formal and informal sources for internalizing and externalizing symptoms. Predictors were socioeconomic status, parent race, age, and sex, parent problem recognition (via study vignettes), and perceived barriers to mental health service utilization. Mothers were more likely than fathers to seek help from pediatricians, psychologists, teachers, and religious leaders for child anxiety and pediatricians, religious leaders, and self-help resources for child ADHD. Black parents were more likely to seek help from religious leaders and White parents were more likely to use self-help resources. Problem recognition was associated with greater intentions to seek help from almost all formal and informal sources (except from friends/family). Understanding factors that contribute to parental help seeking for child psychopathology is critical for increasing service utilization and reducing the negative effects of mental health problems. This study highlights the importance of decreasing help-seeking barriers and increasing problem recognition to improve health equity. © 2017 Wiley Periodicals, Inc.
Development of Vertical Cable Seismic System for Hydrothermal Deposit Survey (2) - Feasibility Study
NASA Astrophysics Data System (ADS)
Asakawa, E.; Murakami, F.; Sekino, Y.; Okamoto, T.; Mikada, H.; Takekawa, J.; Shimura, T.
2010-12-01
In 2009, Ministry of Education, Culture, Sports, Science and Technology(MEXT) started the survey system development for Hydrothermal deposit. We proposed the Vertical Cable Seismic (VCS), the reflection seismic survey with vertical cable above seabottom. VCS has the following advantages for hydrothermal deposit survey. . (1) VCS is an effective high-resolution 3D seismic survey within limited area. (2) It achieves high-resolution image because the sensors are closely located to the target. (3) It avoids the coupling problems between sensor and seabottom that cause serious damage of seismic data quality. (4) Various types of marine source are applicable with VCS such as sea-surface source (air gun, water gun etc.) , deep-towed or ocean bottom sources. (5) Autonomous recording system. Our first experiment of 2D/3D VCS surveys has been carried out in Lake Biwa, JAPAN. in November 2009. The 2D VCS data processing follows the walk-away VSP, including wave field separation and depth migration. The result gives clearer image than the conventional surface seismic. Prestack depth migration is applied to 3D data to obtain good quality 3D depth volume. Uncertainty of the source/receiver poisons in water causes the serious problem of the imaging. We used several transducer/transponder to estimate these positions. The VCS seismic records themselves can also provide sensor position using the first break of each trace and we calibrate the positions. We are currently developing the autonomous recording VCS system and planning the trial experiment in actual ocean to establish the way of deployment/recovery and the examine the position through the current flow in November, 2010. The second VCS survey will planned over the actual hydrothermal deposit with deep-towed source in February, 2011.
Kurtosis Approach Nonlinear Blind Source Separation
NASA Technical Reports Server (NTRS)
Duong, Vu A.; Stubbemd, Allen R.
2005-01-01
In this paper, we introduce a new algorithm for blind source signal separation for post-nonlinear mixtures. The mixtures are assumed to be linearly mixed from unknown sources first and then distorted by memoryless nonlinear functions. The nonlinear functions are assumed to be smooth and can be approximated by polynomials. Both the coefficients of the unknown mixing matrix and the coefficients of the approximated polynomials are estimated by the gradient descent method conditional on the higher order statistical requirements. The results of simulation experiments presented in this paper demonstrate the validity and usefulness of our approach for nonlinear blind source signal separation Keywords: Independent Component Analysis, Kurtosis, Higher order statistics.
Dong, Jun; Ni, Mingjiang; Chi, Yong; Zou, Daoan; Fu, Chao
2013-08-01
In China, the continuously increasing amount of municipal solid waste (MSW) has resulted in an urgent need for changing the current municipal solid waste management (MSWM) system based on mixed collection. A pilot program focusing on source-separated MSW collection was thus launched (2010) in Hangzhou, China, to lessen the related environmental loads. And greenhouse gas (GHG) emissions (Kyoto Protocol) are singled out in particular. This paper uses life cycle assessment modeling to evaluate the potential environmental improvement with regard to GHG emissions. The pre-existing MSWM system is assessed as baseline, while the source separation scenario is compared internally. Results show that 23 % GHG emissions can be decreased by source-separated collection compared with the base scenario. In addition, the use of composting and anaerobic digestion (AD) is suggested for further optimizing the management of food waste. 260.79, 82.21, and -86.21 thousand tonnes of GHG emissions are emitted from food waste landfill, composting, and AD, respectively, proving the emission reduction potential brought by advanced food waste treatment technologies. Realizing the fact, a modified MSWM system is proposed by taking AD as food waste substitution option, with additional 44 % GHG emissions saved than current source separation scenario. Moreover, a preliminary economic assessment is implemented. It is demonstrated that both source separation scenarios have a good cost reduction potential than mixed collection, with the proposed new system the most cost-effective one.
ERIC Educational Resources Information Center
Biskup, Claudia; Pfister, Gertrud; Robke, Cathrin
1998-01-01
Examines the results of interviews with elementary school children that gauged the attitudes towards and reasons for a partial separation by gender. Proposes an occassional separation of girls and boys for special pedogogical intervention. Discusses the findings. (CMK)
Ammonia producing engine utilizing oxygen separation
Easley, Jr., William Lanier; Coleman, Gerald Nelson [Petersborough, GB; Robel, Wade James [Peoria, IL
2008-12-16
A power system is provided having a power source, a first power source section with a first intake passage and a first exhaust passage, a second power source section with a second intake passage and a second exhaust passage, and an oxygen separator. The second intake passage may be fluidly isolated from the first intake passage.
NASA Technical Reports Server (NTRS)
Wolgemuth, D. J.; Gizang-Ginsberg, E.; Engelmyer, E.; Gavin, B. J.; Ponzetto, C.
1985-01-01
The use of a self-contained unit-gravity cell separation apparatus for separation of populations of mouse testicular cells is described. The apparatus, a Celsep (TM), maximizes the unit area over which sedimentation occurs, reduces the amount of separation medium employed, and is quite reproducible. Cells thus isolated have been good sources for isolation of DNA, and notably, high molecular weight RNA.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karim Ghani, Wan Azlina Wan Ab., E-mail: wanaz@eng.upm.edu.my; Rusli, Iffah Farizan, E-mail: iffahrusli@yahoo.com; Biak, Dayang Radiah Awang, E-mail: dayang@eng.upm.edu.my
Highlights: ► Theory of planned behaviour (TPB) has been conducted to identify the influencing factors for participation in source separation of food waste using self administered questionnaires. ► The findings suggested several implications for the development and implementation of waste separation at home programme. ► The analysis indicates that the attitude towards waste separation is determined as the main predictors where this in turn could be a significant predictor of the repondent’s actual food waste separation behaviour. ► To date, none of similar have been reported elsewhere and this finding will be beneficial to local Authorities as indicator in designingmore » campaigns to promote the use of waste separation programmes to reinforce the positive attitudes. - Abstract: Tremendous increases in biodegradable (food waste) generation significantly impact the local authorities, who are responsible to manage, treat and dispose of this waste. The process of separation of food waste at its generation source is identified as effective means in reducing the amount food waste sent to landfill and can be reused as feedstock to downstream treatment processes namely composting or anaerobic digestion. However, these efforts will only succeed with positive attitudes and highly participations rate by the public towards the scheme. Thus, the social survey (using questionnaires) to analyse public’s view and influencing factors towards participation in source separation of food waste in households based on the theory of planned behaviour technique (TPB) was performed in June and July 2011 among selected staff in Universiti Putra Malaysia, Serdang, Selangor. The survey demonstrates that the public has positive intention in participating provided the opportunities, facilities and knowledge on waste separation at source are adequately prepared by the respective local authorities. Furthermore, good moral values and situational factors such as storage convenience and collection times are also encouraged public’s involvement and consequently, the participations rate. The findings from this study may provide useful indicator to the waste management authorities in Malaysia in identifying mechanisms for future development and implementation of food waste source separation activities in household programmes and communication campaign which advocate the use of these programmes.« less
VizieR Online Data Catalog: Blazars equivalent widths and radio luminosity (Landt+, 2004)
NASA Astrophysics Data System (ADS)
Landt, H.; Padovani, P.; Perlman, E. S.; Giommi, P.
2004-07-01
Blazars are currently separated into BL Lacertae objects (BL Lacs) and flat spectrum radio quasars based on the strength of their emission lines. This is performed rather arbitrarily by defining a diagonal line in the Ca H&K break value-equivalent width plane, following Marcha et al. (1996MNRAS.281..425M). We readdress this problem and put the classification scheme for blazars on firm physical grounds. We study ~100 blazars and radio galaxies from the Deep X-ray Radio Blazar Survey (DXRBS, Cat. and ) and 2-Jy radio survey and find a significant bimodality for the narrow emission line [OIII]{lambda}5007. This suggests the presence of two physically distinct classes of radio-loud active galactic nuclei (AGN). We show that all radio-loud AGN, blazars and radio galaxies, can be effectively separated into weak- and strong-lined sources using the [OIII]{lambda}5007-[OII]{lambda}3727 equivalent width plane. This plane allows one to disentangle orientation effects from intrinsic variations in radio-loud AGN. Based on DXRBS, the strongly beamed sources of the new class of weak-lined radio-loud AGN are made up of BL Lacs at the ~75 per cent level, whereas those of the strong-lined radio-loud AGN include mostly (~97 per cent) quasars. (4 data files).
Efficient source separation algorithms for acoustic fall detection using a microsoft kinect.
Li, Yun; Ho, K C; Popescu, Mihail
2014-03-01
Falls have become a common health problem among older adults. In previous study, we proposed an acoustic fall detection system (acoustic FADE) that employed a microphone array and beamforming to provide automatic fall detection. However, the previous acoustic FADE had difficulties in detecting the fall signal in environments where interference comes from the fall direction, the number of interferences exceeds FADE's ability to handle or a fall is occluded. To address these issues, in this paper, we propose two blind source separation (BSS) methods for extracting the fall signal out of the interferences to improve the fall classification task. We first propose the single-channel BSS by using nonnegative matrix factorization (NMF) to automatically decompose the mixture into a linear combination of several basis components. Based on the distinct patterns of the bases of falls, we identify them efficiently and then construct the interference free fall signal. Next, we extend the single-channel BSS to the multichannel case through a joint NMF over all channels followed by a delay-and-sum beamformer for additional ambient noise reduction. In our experiments, we used the Microsoft Kinect to collect the acoustic data in real-home environments. The results show that in environments with high interference and background noise levels, the fall detection performance is significantly improved using the proposed BSS approaches.
Qi, H.; Coplen, T.B.; Wassenaar, L.I.
2011-01-01
It is well known that N2 in the ion source of a mass spectrometer interferes with the CO background during the δ18O measurement of carbon monoxide. A similar problem arises with the high-temperature conversion (HTC) analysis of nitrogenous O-bearing samples (e.g. nitrates and keratins) to CO for δ18O measurement, where the sample introduces a significant N2 peak before the CO peak, making determination of accurate oxygen isotope ratios difficult. Although using a gas chromatography (GC) column longer than that commonly provided by manufacturers (0.6 m) can improve the efficiency of separation of CO and N2 and using a valve to divert nitrogen and prevent it from entering the ion source of a mass spectrometer improved measurement results, biased δ18O values could still be obtained. A careful evaluation of the performance of the GC separation column was carried out. With optimal GC columns, the δ18O reproducibility of human hair keratins and other keratin materials was better than ±0.15 ‰ (n = 5; for the internal analytical reproducibility), and better than ±0.10 ‰ (n = 4; for the external analytical reproducibility).
Cultured Cortical Neurons Can Perform Blind Source Separation According to the Free-Energy Principle
Isomura, Takuya; Kotani, Kiyoshi; Jimbo, Yasuhiko
2015-01-01
Blind source separation is the computation underlying the cocktail party effect––a partygoer can distinguish a particular talker’s voice from the ambient noise. Early studies indicated that the brain might use blind source separation as a signal processing strategy for sensory perception and numerous mathematical models have been proposed; however, it remains unclear how the neural networks extract particular sources from a complex mixture of inputs. We discovered that neurons in cultures of dissociated rat cortical cells could learn to represent particular sources while filtering out other signals. Specifically, the distinct classes of neurons in the culture learned to respond to the distinct sources after repeating training stimulation. Moreover, the neural network structures changed to reduce free energy, as predicted by the free-energy principle, a candidate unified theory of learning and memory, and by Jaynes’ principle of maximum entropy. This implicit learning can only be explained by some form of Hebbian plasticity. These results are the first in vitro (as opposed to in silico) demonstration of neural networks performing blind source separation, and the first formal demonstration of neuronal self-organization under the free energy principle. PMID:26690814
Fusion of 3D laser scanner and depth images for obstacle recognition in mobile applications
NASA Astrophysics Data System (ADS)
Budzan, Sebastian; Kasprzyk, Jerzy
2016-02-01
The problem of obstacle detection and recognition or, generally, scene mapping is one of the most investigated problems in computer vision, especially in mobile applications. In this paper a fused optical system using depth information with color images gathered from the Microsoft Kinect sensor and 3D laser range scanner data is proposed for obstacle detection and ground estimation in real-time mobile systems. The algorithm consists of feature extraction in the laser range images, processing of the depth information from the Kinect sensor, fusion of the sensor information, and classification of the data into two separate categories: road and obstacle. Exemplary results are presented and it is shown that fusion of information gathered from different sources increases the effectiveness of the obstacle detection in different scenarios, and it can be used successfully for road surface mapping.
Decomposition Technique for Remaining Useful Life Prediction
NASA Technical Reports Server (NTRS)
Saha, Bhaskar (Inventor); Goebel, Kai F. (Inventor); Saxena, Abhinav (Inventor); Celaya, Jose R. (Inventor)
2014-01-01
The prognostic tool disclosed here decomposes the problem of estimating the remaining useful life (RUL) of a component or sub-system into two separate regression problems: the feature-to-damage mapping and the operational conditions-to-damage-rate mapping. These maps are initially generated in off-line mode. One or more regression algorithms are used to generate each of these maps from measurements (and features derived from these), operational conditions, and ground truth information. This decomposition technique allows for the explicit quantification and management of different sources of uncertainty present in the process. Next, the maps are used in an on-line mode where run-time data (sensor measurements and operational conditions) are used in conjunction with the maps generated in off-line mode to estimate both current damage state as well as future damage accumulation. Remaining life is computed by subtracting the instance when the extrapolated damage reaches the failure threshold from the instance when the prediction is made.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Counselman, C.C. III
1973-09-01
Very-long-baseline interferometry (VLBI) techniques have already been used to determine the vector separations between antennas thousands of kilometers apart to within 2 m and the directions of extragalactic radio sources to 0.1'', and to track an artificial satellite of the earth and the Apollo Lunar Rover on the surface of the Moon. The relative loostions of the Apollo Lunar Surface Experiment Package (ALSEP) transmitters on the lunar surface are being measured within 1 m, and the Moon's libration is being messured to 1'' of selenocentric src. Attempts are under way to measure the solar gravitational deflection of radio waves moremore » accurately than previously possible, by means of VLBI. A wide variety of scientific problems is being attacked by VLBI techniques, which may soon be two orders of magnitude more accurate than at present. (auth)« less
Time-frequency approach to underdetermined blind source separation.
Xie, Shengli; Yang, Liu; Yang, Jun-Mei; Zhou, Guoxu; Xiang, Yong
2012-02-01
This paper presents a new time-frequency (TF) underdetermined blind source separation approach based on Wigner-Ville distribution (WVD) and Khatri-Rao product to separate N non-stationary sources from M(M <; N) mixtures. First, an improved method is proposed for estimating the mixing matrix, where the negative value of the auto WVD of the sources is fully considered. Then after extracting all the auto-term TF points, the auto WVD value of the sources at every auto-term TF point can be found out exactly with the proposed approach no matter how many active sources there are as long as N ≤ 2M-1. Further discussion about the extraction of auto-term TF points is made and finally the numerical simulation results are presented to show the superiority of the proposed algorithm by comparing it with the existing ones.
Systematic study of target localization for bioluminescence tomography guided radiation therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, Jingjing; Zhang, Bin; Reyes, Juvenal
Purpose: To overcome the limitation of CT/cone-beam CT (CBCT) in guiding radiation for soft tissue targets, the authors developed a spectrally resolved bioluminescence tomography (BLT) system for the small animal radiation research platform. The authors systematically assessed the performance of the BLT system in terms of target localization and the ability to resolve two neighboring sources in simulations, tissue-mimicking phantom, and in vivo environments. Methods: Multispectral measurements acquired in a single projection were used for the BLT reconstruction. The incomplete variables truncated conjugate gradient algorithm with an iterative permissible region shrinking strategy was employed as the optimization scheme to reconstructmore » source distributions. Simulation studies were conducted for single spherical sources with sizes from 0.5 to 3 mm radius at depth of 3–12 mm. The same configuration was also applied for the double source simulation with source separations varying from 3 to 9 mm. Experiments were performed in a standalone BLT/CBCT system. Two self-illuminated sources with 3 and 4.7 mm separations placed inside a tissue-mimicking phantom were chosen as the test cases. Live mice implanted with single-source at 6 and 9 mm depth, two sources at 3 and 5 mm separation at depth of 5 mm, or three sources in the abdomen were also used to illustrate the localization capability of the BLT system for multiple targets in vivo. Results: For simulation study, approximate 1 mm accuracy can be achieved at localizing center of mass (CoM) for single-source and grouped CoM for double source cases. For the case of 1.5 mm radius source, a common tumor size used in preclinical study, their simulation shows that for all the source separations considered, except for the 3 mm separation at 9 and 12 mm depth, the two neighboring sources can be resolved at depths from 3 to 12 mm. Phantom experiments illustrated that 2D bioluminescence imaging failed to distinguish two sources, but BLT can provide 3D source localization with approximately 1 mm accuracy. The in vivo results are encouraging that 1 and 1.7 mm accuracy can be attained for the single-source case at 6 and 9 mm depth, respectively. For the 2 sources in vivo study, both sources can be distinguished at 3 and 5 mm separations, and approximately 1 mm localization accuracy can also be achieved. Conclusions: This study demonstrated that their multispectral BLT/CBCT system could be potentially applied to localize and resolve multiple sources at wide range of source sizes, depths, and separations. The average accuracy of localizing CoM for single-source and grouped CoM for double sources is approximately 1 mm except deep-seated target. The information provided in this study can be instructive to devise treatment margins for BLT-guided irradiation. These results also suggest that the 3D BLT system could guide radiation for the situation with multiple targets, such as metastatic tumor models.« less
NASA Technical Reports Server (NTRS)
Shaw, Harry C.
2007-01-01
Rapid identification of pathogenic bacterial species is an important factor in combating public health problems such as E. coli contamination. Food and waterborne pathogens account for sickness in 76 million people annually (CDC). Diarrheagenic E. coli is a major source of gastrointestinal illness. Severe sepsis and Septicemia within the hospital environment are also major problems. 75 1,000 cases annually with a 30-50% mortality rate (Crit Care Med, July '01, Vol. 29, 1303-10). Patient risks run the continuum from fever to organ failure and death. Misdiagnosis or inappropriate treatment increases mortality. There exists a need for rapid screening of samples for identification of pathogenic species (Certain E. coli strains are essential for health). Critical to the identification process is the ability to isolate analytes of interest rapidly. This poster discusses novel devices for the separation of particles on the basis of the dielectric properties, mass and surface charge characteristics is presented. Existing designs involve contact between electrode surfaces and analyte medium resulting in contamination of the electrode bearing elements Two different device designs using different bulk micromachining MEMS processes (PolyMUMPS and a PyrexBIGold electrode design) are presented. These designs cover a range of particle sizes from small molecules through eucaryotic cells. The application of separation of bacteria is discussed in detail. Simulation data for electrostatic and microfluidic characteristics are provided. Detailed design characteristics and physical features of the as fabricated PolyMUMPS design are provided. Analysis of the simulation data relative to the expected performance of the devices will be provided and subsequent conclusions discussed.
Study on Separation of Structural Isomer with Magneto-Archimedes method
NASA Astrophysics Data System (ADS)
Kobayashi, T.; Mori, T.; Akiyama, Y.; Mishima, F.; Nishijima, S.
2017-09-01
Organic compounds are refined by separating their structural isomers, however each separation method has some problems. For example, distillation consumes large energy. In order to solve these problems, new separation method is needed. Considering organic compounds are diamagnetic, we focused on magneto-Archimedes method. With this method, particle mixture dispersed in a paramagnetic medium can be separated in a magnetic field due to the difference of the density and magnetic susceptibility of the particles. In this study, we succeeded in separating isomers of phthalic acid as an example of structural isomer using MnCl2 solution as the paramagnetic medium. In order to use magneto-Archimedes method for separating materials for food or medicine, we proposed harmless medium using oxygen and fluorocarbon instead of MnCl2 aqueous solution. As a result, the possibility of separating every structural isomer was shown.
A time reversal algorithm in acoustic media with Dirac measure approximations
NASA Astrophysics Data System (ADS)
Bretin, Élie; Lucas, Carine; Privat, Yannick
2018-04-01
This article is devoted to the study of a photoacoustic tomography model, where one is led to consider the solution of the acoustic wave equation with a source term writing as a separated variables function in time and space, whose temporal component is in some sense close to the derivative of the Dirac distribution at t = 0. This models a continuous wave laser illumination performed during a short interval of time. We introduce an algorithm for reconstructing the space component of the source term from the measure of the solution recorded by sensors during a time T all along the boundary of a connected bounded domain. It is based at the same time on the introduction of an auxiliary equivalent Cauchy problem allowing to derive explicit reconstruction formula and then to use of a deconvolution procedure. Numerical simulations illustrate our approach. Finally, this algorithm is also extended to elasticity wave systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leung, P. T.; Young, K.
Reciprocity in wave propagation usually refers to the symmetry of the Green's function under the interchange of the source and the observer coordinates, but this condition is not gauge invariant in quantum mechanics, a problem that is particularly significant in the presence of a vector potential. Several possible alternative criteria are given and analyzed with reference to different examples with nonzero magnetic fields and/or vector potentials, including the case of a multiply connected spatial domain. It is shown that the appropriate reciprocity criterion allows for specific phase factors separable into functions of the source and observer coordinates and that thismore » condition is robust with respect to the addition of any scalar potential. In the Aharonov-Bohm effect, reciprocity beyond monoenergetic experiments holds only because of subsidiary conditions satisfied in actual experiments: the test charge is in units of e and the flux is produced by a condensate of particles with charge 2e.« less
Numerical simulation of tonal fan noise of computers and air conditioning systems
NASA Astrophysics Data System (ADS)
Aksenov, A. A.; Gavrilyuk, V. N.; Timushev, S. F.
2016-07-01
Current approaches to fan noise simulation are mainly based on the Lighthill equation and socalled aeroacoustic analogy, which are also based on the transformed Lighthill equation, such as the wellknown FW-H equation or the Kirchhoff theorem. A disadvantage of such methods leading to significant modeling errors is associated with incorrect solution of the decomposition problem, i.e., separation of acoustic and vortex (pseudosound) modes in the area of the oscillation source. In this paper, we propose a method for tonal noise simulation based on the mesh solution of the Helmholtz equation for the Fourier transform of pressure perturbation with boundary conditions in the form of the complex impedance. A noise source is placed on the surface surrounding each fan rotor. The acoustic fan power is determined by the acoustic-vortex method, which ensures more accurate decomposition and determination of the pressure pulsation amplitudes in the near field of the fan.
Floating-point scaling technique for sources separation automatic gain control
NASA Astrophysics Data System (ADS)
Fermas, A.; Belouchrani, A.; Ait-Mohamed, O.
2012-07-01
Based on the floating-point representation and taking advantage of scaling factor indetermination in blind source separation (BSS) processing, we propose a scaling technique applied to the separation matrix, to avoid the saturation or the weakness in the recovered source signals. This technique performs an automatic gain control in an on-line BSS environment. We demonstrate the effectiveness of this technique by using the implementation of a division-free BSS algorithm with two inputs, two outputs. The proposed technique is computationally cheaper and efficient for a hardware implementation compared to the Euclidean normalisation.
Predictors of continued problem drinking and substance use following military discharge.
Norman, Sonya B; Schmied, Emily; Larson, Gerald E
2014-07-01
The goals of the present study were to (a) examine change in rates of problem alcohol/substance use among a sample of veterans between their last year of military service and their first year following separation, (b) identify predictors of continued problem use in the first year after separation, and (c) evaluate the hypothesis that avoidant coping, posttraumatic stress disorder (PTSD) symptoms, and chronic stress place individuals at particularly high risk for continued problem use. Participants (N = 1,599) completed self-report measures before and during the year following separation. Participants who endorsed either having used more than intended or wanting or needing to cut down during the past year were considered to have problem use. Of 742 participants reporting problem substance use at baseline, 42% reported continued problem substance use at follow-up ("persistors"). Persistors reported more trouble adjusting to civilian life, had a greater likelihood of driving while intoxicated, and had a greater likelihood of aggression. Multivariate analyses showed that avoidant coping score at baseline and higher PTSD symptom score and greater sensation seeking at follow up predicted continued problem use. Understanding risk factors for continued problem use is a prerequisite for targeted prevention of chronic problems and associated negative life consequences.
Rigamonti, L; Grosso, M; Giugliano, M
2009-02-01
This life cycle assessment study analyses material and energy recovery within integrated municipal solid waste (MSW) management systems, and, in particular, the recovery of the source-separated materials (packaging and organic waste) and the energy recovery from the residual waste. The recovery of materials and energy are analysed together, with the final aim to evaluate possible optimum levels of source-separated collection that lead to the most favourable energetic and environmental results; this method allows identification of an optimum configuration of the MSW management system. The results show that the optimum level of source-separated collection is about 60%, when all the materials are recovered with high efficiency; it decreases to about 50%, when the 60% level is reached as a result of a very high recovery efficiency for organic fractions at the expense of the packaging materials, or when this implies an appreciable reduction of the quality of collected materials. The optimum MSW management system is thus characterized by source-separated collection levels as included in the above indicated range, with subsequent recycling of the separated materials and energy recovery of the residual waste in a large-scale incinerator operating in combined heat and power mode.
Separating Turbofan Engine Noise Sources Using Auto and Cross Spectra from Four Microphones
NASA Technical Reports Server (NTRS)
Miles, Jeffrey Hilton
2008-01-01
The study of core noise from turbofan engines has become more important as noise from other sources such as the fan and jet were reduced. A multiple-microphone and acoustic-source modeling method to separate correlated and uncorrelated sources is discussed. The auto- and cross spectra in the frequency range below 1000 Hz are fitted with a noise propagation model based on a source couplet consisting of a single incoherent monopole source with a single coherent monopole source or a source triplet consisting of a single incoherent monopole source with two coherent monopole point sources. Examples are presented using data from a Pratt& Whitney PW4098 turbofan engine. The method separates the low-frequency jet noise from the core noise at the nozzle exit. It is shown that at low power settings, the core noise is a major contributor to the noise. Even at higher power settings, it can be more important than jet noise. However, at low frequencies, uncorrelated broadband noise and jet noise become the important factors as the engine power setting is increased.
Ishii, Stephanie K L; Boyer, Treavor H
2015-08-01
Alternative approaches to wastewater management including urine source separation have the potential to simultaneously improve multiple aspects of wastewater treatment, including reduced use of potable water for waste conveyance and improved contaminant removal, especially nutrients. In order to pursue such radical changes, system-level evaluations of urine source separation in community contexts are required. The focus of this life cycle assessment (LCA) is managing nutrients from urine produced in a residential setting with urine source separation and struvite precipitation, as compared with a centralized wastewater treatment approach. The life cycle impacts evaluated in this study pertain to construction of the urine source separation system and operation of drinking water treatment, decentralized urine treatment, and centralized wastewater treatment. System boundaries include fertilizer offsets resulting from the production of urine based struvite fertilizer. As calculated by the Tool for the Reduction and Assessment of Chemical and Other Environmental Impacts (TRACI), urine source separation with MgO addition for subsequent struvite precipitation with high P recovery (Scenario B) has the smallest environmental cost relative to existing centralized wastewater treatment (Scenario A) and urine source separation with MgO and Na3PO4 addition for subsequent struvite precipitation with concurrent high P and N recovery (Scenario C). Preliminary economic evaluations show that the three urine management scenarios are relatively equal on a monetary basis (<13% difference). The impacts of each urine management scenario are most sensitive to the assumed urine composition, the selected urine storage time, and the assumed electricity required to treat influent urine and toilet water used to convey urine at the centralized wastewater treatment plant. The importance of full nutrient recovery from urine in combination with the substantial chemical inputs required for N recovery via struvite precipitation indicate the need for alternative methods of N recovery. Copyright © 2015 Elsevier Ltd. All rights reserved.
Procedure for Separating Noise Sources in Measurements of Turbofan Engine Core Noise
NASA Technical Reports Server (NTRS)
Miles, Jeffrey Hilton
2006-01-01
The study of core noise from turbofan engines has become more important as noise from other sources like the fan and jet have been reduced. A multiple microphone and acoustic source modeling method to separate correlated and uncorrelated sources has been developed. The auto and cross spectrum in the frequency range below 1000 Hz is fitted with a noise propagation model based on a source couplet consisting of a single incoherent source with a single coherent source or a source triplet consisting of a single incoherent source with two coherent point sources. Examples are presented using data from a Pratt & Whitney PW4098 turbofan engine. The method works well.
Transverse Dimensions of Chorus in the Source Region
NASA Technical Reports Server (NTRS)
Santolik, O.; Gurnett, D. A.
2003-01-01
We report measurement of whistler-mode chorus by the four Cluster spacecraft at close separations. We focus our analysis on the generation region close to the magnetic equatorial plane at a radial distance of 4.4 Earth's radii. We use both linear and rank correlation analysis to define perpendicular dimensions of the sources of chorus elements below one half of the electron cyclotron frequency. Correlation is significant throughout the range of separation distances of 60-260 km parallel to the field line and 7-100 km in the perpendicular plane. At these scales, the correlation coefficient is independent for parallel separations, and decreases with perpendicular separation. The observations are consistent with a statistical model of the source region assuming individual sources as gaussian peaks of radiated power with a common half-width of 35 km perpendicular to the magnetic field. This characteristic scale is comparable to the wavelength of observed waves.
Statistics of natural reverberation enable perceptual separation of sound and space
Traer, James; McDermott, Josh H.
2016-01-01
In everyday listening, sound reaches our ears directly from a source as well as indirectly via reflections known as reverberation. Reverberation profoundly distorts the sound from a source, yet humans can both identify sound sources and distinguish environments from the resulting sound, via mechanisms that remain unclear. The core computational challenge is that the acoustic signatures of the source and environment are combined in a single signal received by the ear. Here we ask whether our recognition of sound sources and spaces reflects an ability to separate their effects and whether any such separation is enabled by statistical regularities of real-world reverberation. To first determine whether such statistical regularities exist, we measured impulse responses (IRs) of 271 spaces sampled from the distribution encountered by humans during daily life. The sampled spaces were diverse, but their IRs were tightly constrained, exhibiting exponential decay at frequency-dependent rates: Mid frequencies reverberated longest whereas higher and lower frequencies decayed more rapidly, presumably due to absorptive properties of materials and air. To test whether humans leverage these regularities, we manipulated IR decay characteristics in simulated reverberant audio. Listeners could discriminate sound sources and environments from these signals, but their abilities degraded when reverberation characteristics deviated from those of real-world environments. Subjectively, atypical IRs were mistaken for sound sources. The results suggest the brain separates sound into contributions from the source and the environment, constrained by a prior on natural reverberation. This separation process may contribute to robust recognition while providing information about spaces around us. PMID:27834730
Statistics of natural reverberation enable perceptual separation of sound and space.
Traer, James; McDermott, Josh H
2016-11-29
In everyday listening, sound reaches our ears directly from a source as well as indirectly via reflections known as reverberation. Reverberation profoundly distorts the sound from a source, yet humans can both identify sound sources and distinguish environments from the resulting sound, via mechanisms that remain unclear. The core computational challenge is that the acoustic signatures of the source and environment are combined in a single signal received by the ear. Here we ask whether our recognition of sound sources and spaces reflects an ability to separate their effects and whether any such separation is enabled by statistical regularities of real-world reverberation. To first determine whether such statistical regularities exist, we measured impulse responses (IRs) of 271 spaces sampled from the distribution encountered by humans during daily life. The sampled spaces were diverse, but their IRs were tightly constrained, exhibiting exponential decay at frequency-dependent rates: Mid frequencies reverberated longest whereas higher and lower frequencies decayed more rapidly, presumably due to absorptive properties of materials and air. To test whether humans leverage these regularities, we manipulated IR decay characteristics in simulated reverberant audio. Listeners could discriminate sound sources and environments from these signals, but their abilities degraded when reverberation characteristics deviated from those of real-world environments. Subjectively, atypical IRs were mistaken for sound sources. The results suggest the brain separates sound into contributions from the source and the environment, constrained by a prior on natural reverberation. This separation process may contribute to robust recognition while providing information about spaces around us.
Separation of time scales in the HCA model for sand
NASA Astrophysics Data System (ADS)
Niemunis, Andrzej; Wichtmann, Torsten
2014-10-01
Separation of time scales is used in a high cycle accumulation (HCA) model for sand. An important difficulty of the model is the limited applicability of the Miner's rule to multiaxial cyclic loadings applied simultaneously or in a combination with monotonic loading. Another problem is the lack of simplified objective HCA formulas for geotechnical settlement problems. Possible solutions of these problems are discussed.
Babinec, Peter; Krafcík, Andrej; Babincová, Melánia; Rosenecker, Joseph
2010-08-01
Magnetic nanoparticles for therapy and diagnosis are at the leading edge of the rapidly developing field of bionanotechnology. In this study, we have theoretically studied motion of magnetic nano- as well as micro-particles in the field of cylindrical Halbach array of permanent magnets. Magnetic flux density was modeled as magnetostatic problem by finite element method and particle motion was described using system of ordinary differential equations--Newton law. Computations were done for nanoparticles Nanomag-D with radius 65 nm, which are often used in magnetic drug targeting, as well as microparticles DynaBeads-M280 with radius 1.4 microm, which can be used for magnetic separation. Analyzing snapshots of trajectories of hundred magnetite particles of each size in the water as well as in the air, we have found that optimally designed magnetic circuits of permanent magnets in quadrupolar Halbach array have substantially shorter capture time than simple blocks of permanent magnets commonly used in experiments, therefore, such a Halbach array may be useful as a potential source of magnetic field for magnetic separation and targeting of magnetic nanoparticles as well as microparticles for delivery of drugs, genes, and cells in various biomedical applications.
Acoustic and elastic multiple scattering and radiation from cylindrical structures
NASA Astrophysics Data System (ADS)
Amirkulova, Feruza Abdukadirovna
Multiple scattering (MS) and radiation of waves by a system of scatterers is of great theoretical and practical importance and is required in a wide variety of physical contexts such as the implementation of "invisibility" cloaks, the effective parameter characterization, and the fabrication of dynamically tunable structures, etc. The dissertation develops fast, rapidly convergent iterative techniques to expedite the solution of MS problems. The formulation of MS problems reduces to a system of linear algebraic equations using Graf's theorem and separation of variables. The iterative techniques are developed using Neumann expansion and Block Toeplitz structure of the linear system; they are very general, and suitable for parallel computations and a large number of MS problems, i.e. acoustic, elastic, electromagnetic, etc., and used for the first time to solve MS problems. The theory is implemented in Matlab and FORTRAN, and the theoretical predictions are compared to computations obtained by COMSOL. To formulate the MS problem, the transition matrix is obtained by analyzing an acoustic and an elastic single scattering of incident waves by elastic isotropic and anisotropic solids. The mathematical model of wave scattering from multilayered cylindrical and spherical structures is developed by means of an exact solution of dynamic 3D elasticity theory. The recursive impedance matrix algorithm is derived for radially heterogeneous anisotropic solids. An explicit method for finding the impedance in piecewise uniform, transverse-isotropic material is proposed; the solution is compared to elasticity theory solutions involving Buchwald potentials. Furthermore, active exterior cloaking devices are modeled for acoustic and elastic media using multipole sources. A cloaking device can render an object invisible to some incident waves as seen by some external observer. The active cloak is generated by a discrete set of multipole sources that destructively interfere with an incident wave to produce zero total field over a finite spatial region. The approach precisely determines the necessary source amplitudes and enables a cloaked region to be determined using Graf's theorem. To apply the approach, the infinite series of multipole expansions are truncated, and the accuracy of cloaking is studied by modifying the truncation parameter.
NASA Astrophysics Data System (ADS)
Wright, L.; Coddington, O.; Pilewskie, P.
2015-12-01
Current challenges in Earth remote sensing require improved instrument spectral resolution, spectral coverage, and radiometric accuracy. Hyperspectral instruments, deployed on both aircraft and spacecraft, are a growing class of Earth observing sensors designed to meet these challenges. They collect large amounts of spectral data, allowing thorough characterization of both atmospheric and surface properties. The higher accuracy and increased spectral and spatial resolutions of new imagers require new numerical approaches for processing imagery and separating surface and atmospheric signals. One potential approach is source separation, which allows us to determine the underlying physical causes of observed changes. Improved signal separation will allow hyperspectral instruments to better address key science questions relevant to climate change, including land-use changes, trends in clouds and atmospheric water vapor, and aerosol characteristics. In this work, we investigate a Non-negative Matrix Factorization (NMF) method for the separation of atmospheric and land surface signal sources. NMF offers marked benefits over other commonly employed techniques, including non-negativity, which avoids physically impossible results, and adaptability, which allows the method to be tailored to hyperspectral source separation. We adapt our NMF algorithm to distinguish between contributions from different physically distinct sources by introducing constraints on spectral and spatial variability and by using library spectra to inform separation. We evaluate our NMF algorithm with simulated hyperspectral images as well as hyperspectral imagery from several instruments including, the NASA Airborne Visible/Infrared Imaging Spectrometer (AVIRIS), NASA Hyperspectral Imager for the Coastal Ocean (HICO) and National Ecological Observatory Network (NEON) Imaging Spectrometer.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 26 2012-07-01 2011-07-01 true Scope. 246.100 Section 246.100 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SOLID WASTES SOURCE SEPARATION FOR... source separation of residential, commercial, and institutional solid wastes. Explicitly excluded are...
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 25 2011-07-01 2011-07-01 false Scope. 246.100 Section 246.100 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SOLID WASTES SOURCE SEPARATION FOR... source separation of residential, commercial, and institutional solid wastes. Explicitly excluded are...
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 26 2013-07-01 2013-07-01 false Scope. 246.100 Section 246.100 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SOLID WASTES SOURCE SEPARATION FOR... source separation of residential, commercial, and institutional solid wastes. Explicitly excluded are...
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 24 2010-07-01 2010-07-01 false Scope. 246.100 Section 246.100 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SOLID WASTES SOURCE SEPARATION FOR... source separation of residential, commercial, and institutional solid wastes. Explicitly excluded are...
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 25 2014-07-01 2014-07-01 false Scope. 246.100 Section 246.100 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SOLID WASTES SOURCE SEPARATION FOR... source separation of residential, commercial, and institutional solid wastes. Explicitly excluded are...
Indirect current control with separate IZ drop compensation for voltage source converters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kanetkar, V.R.; Dawande, M.S.; Dubey, G.K.
1995-12-31
Indirect Current Control (ICC) of boost type Voltage Source Converters (VSCs) using separate compensation of line IZ voltage drop is presented. A separate bi-directional VSC is used to produce the compensation voltage. This simplifies the ICC regulator scheme as the power flow is controlled through single modulation index. Experimental verification is provided for bi-directional control of the power flow.
Separation of concurrent broadband sound sources by human listeners
NASA Astrophysics Data System (ADS)
Best, Virginia; van Schaik, André; Carlile, Simon
2004-01-01
The effect of spatial separation on the ability of human listeners to resolve a pair of concurrent broadband sounds was examined. Stimuli were presented in a virtual auditory environment using individualized outer ear filter functions. Subjects were presented with two simultaneous noise bursts that were either spatially coincident or separated (horizontally or vertically), and responded as to whether they perceived one or two source locations. Testing was carried out at five reference locations on the audiovisual horizon (0°, 22.5°, 45°, 67.5°, and 90° azimuth). Results from experiment 1 showed that at more lateral locations, a larger horizontal separation was required for the perception of two sounds. The reverse was true for vertical separation. Furthermore, it was observed that subjects were unable to separate stimulus pairs if they delivered the same interaural differences in time (ITD) and level (ILD). These findings suggested that the auditory system exploited differences in one or both of the binaural cues to resolve the sources, and could not use monaural spectral cues effectively for the task. In experiments 2 and 3, separation of concurrent noise sources was examined upon removal of low-frequency content (and ITDs), onset/offset ITDs, both of these in conjunction, and all ITD information. While onset and offset ITDs did not appear to play a major role, differences in ongoing ITDs were robust cues for separation under these conditions, including those in the envelopes of high-frequency channels.
Connes' embedding problem and Tsirelson's problem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Junge, M.; Palazuelos, C.; Navascues, M.
2011-01-15
We show that Tsirelson's problem concerning the set of quantum correlations and Connes' embedding problem on finite approximations in von Neumann algebras (known to be equivalent to Kirchberg's QWEP conjecture) are essentially equivalent. Specifically, Tsirelson's problem asks whether the set of bipartite quantum correlations generated between tensor product separated systems is the same as the set of correlations between commuting C{sup *}-algebras. Connes' embedding problem asks whether any separable II{sub 1} factor is a subfactor of the ultrapower of the hyperfinite II{sub 1} factor. We show that an affirmative answer to Connes' question implies a positive answer to Tsirelson's. Conversely,more » a positive answer to a matrix valued version of Tsirelson's problem implies a positive one to Connes' problem.« less
Method for the chemical separation of GE-68 from its daughter Ga-68
Fitzsimmons, Jonathan M.; Atcher, Robert W.
2010-06-01
The present invention is directed to a generator apparatus for separating a daughter gallium-68 radioisotope substantially free of impurities from a parent gernanium-68 radioisotope, including a first resin-containing column containing parent gernanium-68 radioisotope and daughter gallium-68 radioisotope, a source of first eluent connected to said first resin-containing column for separating daughter gallium-68 radioisotope from the first resin-containing column, said first eluent including citrate whereby the separated gallium is in the form of gallium citrate, a mixing space connected to said first resin-containing column for admixing a source of hydrochloric acid with said separated gallium citrate whereby gallium citrate is converted to gallium tetrachloride, a second resin-containing column for retention of gallium-68 tetrachloride, and, a source of second eluent connected to said second resin-containing column for eluting the daughter gallium-68 radioisotope from said second resin-containing column.
Neural imaging to track mental states while using an intelligent tutoring system.
Anderson, John R; Betts, Shawn; Ferris, Jennifer L; Fincham, Jon M
2010-04-13
Hemodynamic measures of brain activity can be used to interpret a student's mental state when they are interacting with an intelligent tutoring system. Functional magnetic resonance imaging (fMRI) data were collected while students worked with a tutoring system that taught an algebra isomorph. A cognitive model predicted the distribution of solution times from measures of problem complexity. Separately, a linear discriminant analysis used fMRI data to predict whether or not students were engaged in problem solving. A hidden Markov algorithm merged these two sources of information to predict the mental states of students during problem-solving episodes. The algorithm was trained on data from 1 day of interaction and tested with data from a later day. In terms of predicting what state a student was in during a 2-s period, the algorithm achieved 87% accuracy on the training data and 83% accuracy on the test data. The results illustrate the importance of integrating the bottom-up information from imaging data with the top-down information from a cognitive model.
Separation in Logistic Regression: Causes, Consequences, and Control.
Mansournia, Mohammad Ali; Geroldinger, Angelika; Greenland, Sander; Heinze, Georg
2018-04-01
Separation is encountered in regression models with a discrete outcome (such as logistic regression) where the covariates perfectly predict the outcome. It is most frequent under the same conditions that lead to small-sample and sparse-data bias, such as presence of a rare outcome, rare exposures, highly correlated covariates, or covariates with strong effects. In theory, separation will produce infinite estimates for some coefficients. In practice, however, separation may be unnoticed or mishandled because of software limits in recognizing and handling the problem and in notifying the user. We discuss causes of separation in logistic regression and describe how common software packages deal with it. We then describe methods that remove separation, focusing on the same penalized-likelihood techniques used to address more general sparse-data problems. These methods improve accuracy, avoid software problems, and allow interpretation as Bayesian analyses with weakly informative priors. We discuss likelihood penalties, including some that can be implemented easily with any software package, and their relative advantages and disadvantages. We provide an illustration of ideas and methods using data from a case-control study of contraceptive practices and urinary tract infection.
A new solution to emulsion liquid membrane problems by non-Newtonian conversion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Skelland, A.H.P.; Meng, X.
1996-02-01
Surfactant-stabilized emulsion liquid membrane processes constitute an emerging separation technology that has repeatedly been shown to be highly suited for such diverse separation processes as metal recovery or removal from dilute aqueous solutions; separations in the food industry; removal of organic bases and acids from water; and separation of hydrocarbons. Emulsion liquid membrane separation processes remain excessively vulnerable to one or more of four major problems. Difficulties lie in developing liquid membranes that combine high levels of both stability and permeability with acceptably low levels of swelling and ease of subsequent demulsification for membrane and solute recovery. This article providesmore » a new technique for simultaneously overcoming the first three problems, while identifying physical indications that the proposed solution may have little adverse effect on the fourth problem (demulsification) and may even alleviate it. Numerous benefits of optimized conversion of the membrane phase into suitable non-Newtonian form are identified, their mechanisms outlined, and experimental verifications provided. These include increased stability, retained (or enhanced) permeability, reduced swelling, increased internal phase volume, and increased stirrer speeds. The highly favorable responsiveness of both aliphatic and aromatic membranes to the new technique is demonstrated.« less
Active room compensation for sound reinforcement using sound field separation techniques.
Heuchel, Franz M; Fernandez-Grande, Efren; Agerkvist, Finn T; Shabalina, Elena
2018-03-01
This work investigates how the sound field created by a sound reinforcement system can be controlled at low frequencies. An indoor control method is proposed which actively absorbs the sound incident on a reflecting boundary using an array of secondary sources. The sound field is separated into incident and reflected components by a microphone array close to the secondary sources, enabling the minimization of reflected components by means of optimal signals for the secondary sources. The method is purely feed-forward and assumes constant room conditions. Three different sound field separation techniques for the modeling of the reflections are investigated based on plane wave decomposition, equivalent sources, and the Spatial Fourier transform. Simulations and an experimental validation are presented, showing that the control method performs similarly well at enhancing low frequency responses with the three sound separation techniques. Resonances in the entire room are reduced, although the microphone array and secondary sources are confined to a small region close to the reflecting wall. Unlike previous control methods based on the creation of a plane wave sound field, the investigated method works in arbitrary room geometries and primary source positions.
On Some Separated Algorithms for Separable Nonlinear Least Squares Problems.
Gan, Min; Chen, C L Philip; Chen, Guang-Yong; Chen, Long
2017-10-03
For a class of nonlinear least squares problems, it is usually very beneficial to separate the variables into a linear and a nonlinear part and take full advantage of reliable linear least squares techniques. Consequently, the original problem is turned into a reduced problem which involves only nonlinear parameters. We consider in this paper four separated algorithms for such problems. The first one is the variable projection (VP) algorithm with full Jacobian matrix of Golub and Pereyra. The second and third ones are VP algorithms with simplified Jacobian matrices proposed by Kaufman and Ruano et al. respectively. The fourth one only uses the gradient of the reduced problem. Monte Carlo experiments are conducted to compare the performance of these four algorithms. From the results of the experiments, we find that: 1) the simplified Jacobian proposed by Ruano et al. is not a good choice for the VP algorithm; moreover, it may render the algorithm hard to converge; 2) the fourth algorithm perform moderately among these four algorithms; 3) the VP algorithm with the full Jacobian matrix perform more stable than that of the VP algorithm with Kuafman's simplified one; and 4) the combination of VP algorithm and Levenberg-Marquardt method is more effective than the combination of VP algorithm and Gauss-Newton method.
No-search algorithm for direction of arrival estimation
NASA Astrophysics Data System (ADS)
Tuncer, T. Engin; Ã-Zgen, M. Tankut
2009-10-01
Direction of arrival estimation (DOA) is an important problem in ionospheric research and electromagnetics as well as many other fields. When superresolution techniques are used, a computationally expensive search should be performed in general. In this paper, a no-search algorithm is presented. The idea is to separate the source signals in the time-frequency plane by using the Short-Time Fourier Transform. The direction vector for each source is found by coherent summation over the instantaneous frequency (IF) tracks of the individual sources which are found automatically by employing morphological image processing. Both overlapping and nonoverlapping source IF tracks can be processed and identified by the proposed approach. The CLEAN algorithm is adopted in order to isolate the IF tracks of the overlapping sources with different powers. The proposed method is very effective in finding the IF tracks and can be applied for signals with arbitrary IF characteristics. While the proposed method can be applied to any sensor geometry, planar uniform circular arrays (UCA) bring additional advantages. Different properties of the UCA are presented, and it is shown that the DOA angles can be found as the mean-square error optimum solution of a linear matrix equation. Several simulations are done, and it is shown that the proposed approach performs significantly better than the conventional methods.
NASA Technical Reports Server (NTRS)
Johnson, O. W.
1964-01-01
A modified spray gun, with separate containers for resin and additive components, solves the problems of quick hardening and nozzle clogging. At application, separate atomizers spray the liquids in front of the nozzle face where they blend.
Theoretical and Numerical Studies of a Vortex - Interaction Problem
NASA Astrophysics Data System (ADS)
Hsu, To-Ming
The problem of vortex-airfoil interaction has received considerable interest in the helicopter industry. This phenomenon has been shown to be a major source of noise, vibration, and structural fatigue in helicopter flight. Since unsteady flow is always associated with vortex shedding and movement of free vortices, the problem of vortex-airfoil interaction also serves as a basic building block in unsteady aerodynamics. A careful study of the vortex-airfoil interaction reveals the major effects of the vortices on the generation of unsteady aerodynamic forces, especially the lift. The present work establishes three different flow models to study the vortex-airfoil interaction problem: a theoretical model, an inviscid flow model, and a viscous flow model. In the first two models, a newly developed aerodynamic force theorem has been successfully applied to identify the contributions to unsteady forces from various vortical systems in the flow field. Through viscous flow analysis, different features of laminar interaction, turbulent attached interaction, and turbulent separated interaction are examined. Along with the study of the vortex-airfoil interaction problem, several new schemes are developed for inviscid and viscous flow solutions. New formulas are derived to determine the trailing edge flow conditions, such as flow velocity and direction, in unsteady inviscid flow. A new iteration scheme that is faster for higher Reynolds number is developed for solving the viscous flow problem.
NASA Astrophysics Data System (ADS)
Kustusch, Mary Bridget
2016-06-01
Students in introductory physics struggle with vector algebra and these challenges are often associated with contextual and representational features of the problems. Performance on problems about cross product direction is particularly poor and some research suggests that this may be primarily due to misapplied right-hand rules. However, few studies have had the resolution to explore student use of right-hand rules in detail. This study reviews literature in several disciplines, including spatial cognition, to identify ten contextual and representational problem features that are most likely to influence performance on problems requiring a right-hand rule. Two quantitative measures of performance (correctness and response time) and two qualitative measures (methods used and type of errors made) were used to explore the impact of these problem features on student performance. Quantitative results are consistent with expectations from the literature, but reveal that some features (such as the type of reasoning required and the physical awkwardness of using a right-hand rule) have a greater impact than others (such as whether the vectors are placed together or separate). Additional insight is gained by the qualitative analysis, including identifying sources of difficulty not previously discussed in the literature and revealing that the use of supplemental methods, such as physically rotating the paper, can mitigate errors associated with certain features.
An analysis of the crossover between local and massive separation on airfoils
NASA Technical Reports Server (NTRS)
Barnett, M.; Carter, J. E.
1987-01-01
Massive separation on airfoils operating at high Reynolds number is an important problem to the aerodynamicist, since its onset generally determines the limiting performance of an airfoil, and it can lead to serious problems related to aircraft control as well as turbomachinery operation. The phenomenon of crossover between local separation and massive separation on realistic airfoil geometries induced by airfoil thickness is investigated for low speed (incompressible) flow. The problem is studied both for the asymptotic limit of infinite Reynolds number using triple-deck theory, and for finite Reynolds number using interacting boundary-layer theory. Numerical results are presented which follow the evolution of the flow as it develops from a mildly separated state to one dominated by the massively separated flow structure as the thickness of the airfoil geometry is systematically increased. The effect of turbulence upon the evolution of the flow is considered, and the impact is significant, with the principal effect being the suppression of the onset of separation. Finally, the effect of surface suction and injection for boundary-layer control is considered. The approach which was developed provides a valuable tool for the analysis of boundary-layer separation up to and beyond stall. Another important conclusion is that interacting boundary-layer theory provides an efficient tool for the analysis of the effect of turbulence and boundary-layer control upon separated vicsous flow.
Golden roots to golden fruits of mental health in Gujarat.
Mehta, Ritambhara; Shah, Anil; Vankar, G K; Chauhan, Ajay; Bakre, Ravindra
2018-02-01
Through the behavioral descriptions in age-old texts it is obvious that Mental Health problems exist since the existence of Homo Sapiens and humanity, with ever changing norms, contexts, definitions and hence their management. Gujarat state of India is one of the oldest land plateaus existing. It has been inhabited, ruled and governed by many different people, races, kings; and invaded through its longest sea-coast by Dutch, Portuguese, British. Even after freedom of India in 1947, Gujarat emerged as a separate state in 1960 only. The history of Mental Health, before being a separate state, could be summed up in 2 Mental Hospitals started by British governance and 2 very unique institutions. Post NMHP, there has been a tremendous growth in the sector, supported by many leaders in the governance. This is an attempt to review some documented and some gathered information from dependable sources, from pre-independence colonial era, post-independence and post-statehood contemporary period.
Valero, Enrique; Álvarez, Xana; Cancela, Ángeles; Sánchez, Ángel
2015-01-01
Each year there are more frequent blooms of green algae and cyanobacteria, representing a serious environmental problem of eutrophication. Electroflocculation (EF) was studied to harvest the algae which are present in reservoirs, as well as different factors which may influence on the effectiveness of the process: the voltage applied to the culture medium, run times, electrodes separation and natural sedimentation. Finally, the viability of its use to obtain biodiesel was studied by direct transesterification. The EF process carried out at 10V for 1min, with an electrode separation of 5.5cm and a height of 4cm in culture vessel, obtained a recovery efficiency greater than 95%, and octadecenoic and palmitic acids were obtained as the fatty acid methyl esters (FAMEs). EF is an effective method to harvest green algae during the blooms, obtaining the greatest amount of biomass for subsequent use as a source of biodiesel. Copyright © 2015 Elsevier Ltd. All rights reserved.
Mass transfer apparatus and method for separation of gases
Blount, Gerald C.
2015-10-13
A process and apparatus for separating components of a source gas is provided in which more soluble components of the source gas are dissolved in an aqueous solvent at high pressure. The system can utilize hydrostatic pressure to increase solubility of the components of the source gas. The apparatus includes gas recycle throughout multiple mass transfer stages to improve mass transfer of the targeted components from the liquid to gas phase. Separated components can be recovered for use in a value added application or can be processed for long-term storage, for instance in an underwater reservoir.
Mass transfer apparatus and method for separation of gases
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blount, Gerald C.; Gorensek, Maximilian Boris; Hamm, Luther L.
A process and apparatus for separating components of a source gas is provided in which more soluble components of the source gas are dissolved in an aqueous solvent at high pressure. The system can utilize hydrostatic pressure to increase solubility of the components of the source gas. The apparatus includes gas recycle throughout multiple mass transfer stages to improve mass transfer of the targeted components from the liquid to gas phase. Separated components can be recovered for use in a value added application or can be processed for long-term storage, for instance in an underwater reservoir.
Bernstad, Anna; la Cour Jansen, Jes; Aspegren, Henrik
2011-03-01
Through an agreement with EEE producers, Swedish municipalities are responsible for collection of hazardous waste and waste electrical and electronic equipment (WEEE). In most Swedish municipalities, collection of these waste fractions is concentrated to waste recycling centres where households can source-separate and deposit hazardous waste and WEEE free of charge. However, the centres are often located on the outskirts of city centres and cars are needed in order to use the facilities in most cases. A full-scale experiment was performed in a residential area in southern Sweden to evaluate effects of a system for property-close source separation of hazardous waste and WEEE. After the system was introduced, results show a clear reduction in the amount of hazardous waste and WEEE disposed of incorrectly amongst residual waste or dry recyclables. The systems resulted in a source separation ratio of 70 wt% for hazardous waste and 76 wt% in the case of WEEE. Results show that households in the study area were willing to increase source separation of hazardous waste and WEEE when accessibility was improved and that this and similar collection systems can play an important role in building up increasingly sustainable solid waste management systems. Copyright © 2010 Elsevier Ltd. All rights reserved.
A Review of Function Allocation and En Route Separation Assurance
NASA Technical Reports Server (NTRS)
Lewis, Timothy A.; Aweiss, Arwa S.; Guerreiro, Nelson M.; Daiker, Ronald J.
2016-01-01
Today's air traffic control system has reached a limit to the number of aircraft that can be safely managed at the same time. This air traffic capacity bottleneck is a critical problem along the path to modernization for air transportation. The design of the next separation assurance system to address this problem is a cornerstone of air traffic management research today. This report reviews recent work by NASA and others in the areas of function allocation and en route separation assurance. This includes: separation assurance algorithms and technology prototypes; concepts of operations and designs for advanced separation assurance systems; and specific investigations into air-ground and human-automation function allocation.
Jamali, Shahab; Fujioka, Takako; Ross, Bernhard
2014-06-01
Extensive rehabilitation training can lead to functional improvement even years after a stroke. Although neuronal plasticity is considered as a main origin of such ameliorations, specific subtending mechanisms need further investigation. Our aim was to obtain objective neuromagnetic measures sensitive to brain reorganizations induced by a music-supported training. We applied 20-Hz vibrotactile stimuli to the index finger and the ring finger, recorded somatosensory steady-state responses with magnetoencephalography, and analyzed the cortical sources displaying oscillations synchronized with the external stimuli in two groups of healthy older adults before and after musical training or without training. In addition, we applied the same analysis for an anecdotic report of a single chronic stroke patient with hemiparetic arm and hand problems, who received music-supported therapy (MST). Healthy older adults showed significant finger separation within the primary somatotopic map. Beta dipole sources were more anterior located compared to gamma sources. An anterior shift of sources and increases in synchrony between the stimuli and beta and gamma oscillations were observed selectively after music training. In the stroke patient a normalization of somatotopic organization was observed after MST, with digit separation recovered after training and stimulus induced gamma synchrony increased. The proposed stimulation paradigm captures the integrity of primary somatosensory hand representation. Source position and synchronization between the stimuli and gamma activity are indices, sensitive to music-supported training. Responsiveness was also observed in a chronic stroke patient, encouraging for the music-supported therapy. Notably, changes in somatosensory responses were observed, even though the therapy did not involve specific sensory discrimination training. The proposed protocol can be used for monitoring changes in neuronal organization during training and will improve the understanding of the brain mechanisms underlying rehabilitation. Copyright © 2013 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
UP TO 100,000 RELIABLE STRONG GRAVITATIONAL LENSES IN FUTURE DARK ENERGY EXPERIMENTS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Serjeant, S.
2014-09-20
The Euclid space telescope will observe ∼10{sup 5} strong galaxy-galaxy gravitational lens events in its wide field imaging survey over around half the sky, but identifying the gravitational lenses from their observed morphologies requires solving the difficult problem of reliably separating the lensed sources from contaminant populations, such as tidal tails, as well as presenting challenges for spectroscopic follow-up redshift campaigns. Here I present alternative selection techniques for strong gravitational lenses in both Euclid and the Square Kilometre Array, exploiting the strong magnification bias present in the steep end of the Hα luminosity function and the H I mass function.more » Around 10{sup 3} strong lensing events are detectable with this method in the Euclid wide survey. While only ∼1% of the total haul of Euclid lenses, this sample has ∼100% reliability, known source redshifts, high signal-to-noise, and a magnification-based selection independent of assumptions of lens morphology. With the proposed Square Kilometre Array dark energy survey, the numbers of reliable strong gravitational lenses with source redshifts can reach 10{sup 5}.« less
Step-off, vertical electromagnetic responses of a deep resistivity layer buried in marine sediments
NASA Astrophysics Data System (ADS)
Jang, Hangilro; Jang, Hannuree; Lee, Ki Ha; Kim, Hee Joon
2013-04-01
A frequency-domain, marine controlled-source electromagnetic (CSEM) method has been applied successfully in deep water areas for detecting hydrocarbon (HC) reservoirs. However, a typical technique with horizontal transmitters and receivers requires large source-receiver separations with respect to the target depth. A time-domain EM system with vertical transmitters and receivers can be an alternative because vertical electric fields are sensitive to deep resistive layers. In this paper, a time-domain modelling code, with multiple source and receiver dipoles that are finite in length, has been written to investigate transient EM problems. With the use of this code, we calculate step-off responses for one-dimensional HC reservoir models. Although the vertical electric field has much smaller amplitude of signal than the horizontal field, vertical currents resulting from a vertical transmitter are sensitive to resistive layers. The modelling shows a significant difference between step-off responses of HC- and water-filled reservoirs, and the contrast can be recognized at late times at relatively short offsets. A maximum contrast occurs at more than 4 s, being delayed with the depth of the HC layer.
NASA Astrophysics Data System (ADS)
Perton, Mathieu; Contreras-Zazueta, Marcial A.; Sánchez-Sesma, Francisco J.
2016-06-01
A new implementation of indirect boundary element method allows simulating the elastic wave propagation in complex configurations made of embedded regions that are homogeneous with irregular boundaries or flat layers. In an older implementation, each layer of a flat layered region would have been treated as a separated homogeneous region without taking into account the flat boundary information. For both types of regions, the scattered field results from fictitious sources positioned along their boundaries. For the homogeneous regions, the fictitious sources emit as in a full-space and the wave field is given by analytical Green's functions. For flat layered regions, fictitious sources emit as in an unbounded flat layered region and the wave field is given by Green's functions obtained from the discrete wavenumber (DWN) method. The new implementation allows then reducing the length of the discretized boundaries but DWN Green's functions require much more computation time than the full-space Green's functions. Several optimization steps are then implemented and commented. Validations are presented for 2-D and 3-D problems. Higher efficiency is achieved in 3-D.
An evaluation of the Parents Plus-Parenting When Separated programme.
Keating, Adele; Sharry, John; Murphy, Michelle; Rooney, Brendan; Carr, Alan
2016-04-01
This study evaluated the Parents Plus-Parenting when Separated Programme, an intervention specifically designed to address the needs of separated parents in an Irish context. In a randomized control trial, 82 separated parents with young children were assigned to the Parents Plus-Parenting when Separated Programme treatment group and 79 to a waiting-list control group. They were assessed on measures of client goals, parenting satisfaction, child and parental adjustment and interparental conflict at baseline (Time 1) and 6 weeks later (Time 2), after the treatment group completed the Parents Plus-Parenting when Separated Programme. From Time 1 to 2, significant goal attainment, increases in parenting satisfaction and decreases in child behaviour problems, parental adjustment problems and interparental conflict occurred in the Parents Plus-Parenting when Separated Programme group, but not in the control group. These results supported the effectiveness of Parents Plus-Parenting when Separated Programme, which should be made more widely available to separated parents. © The Author(s) 2015.
NASA Astrophysics Data System (ADS)
Armenio, Vincenzo; Fakhari, Ahmad; Petronio, Andrea; Padovan, Roberta; Pittaluga, Chiara; Caprino, Giovanni
2015-11-01
Massive flow separation is ubiquitous in industrial applications, ruling drag and hydrodynamic noise. In spite of considerable efforts, its numerical prediction still represents a challenge for CFD models in use in engineering. Aside commercial software, over the latter years the opensource software OpenFOAMR (OF) has emerged as a valid tool for prediction of complex industrial flows. In the present work, we simulate two flows representative of a class of situations occurring in industrial problems: the flow around sphere and that around a wall-mounted square cylinder at Re = 10000 . We compare the performance two different tools, namely OF and ANSYS CFX 15.0 (CFX) using different unstructured grids and turbulence models. The grids have been generated using SNAPPYHEXMESH and ANSYS ICEM CFD 15.0 with different near wall resolutions. The codes have been run in a RANS mode using k - ɛ model (OF) and SST - k - ω (CFX) with and without wall-layer models. OF has been also used in LES, WMLES and DES mode. Regarding the sphere, RANS models were not able to catch separation, while good prediction of separation and distribution of stresses over the surface were obtained using LES, WMLES and DES. Results for the second test case are currently under analysis. Financial support from COSMO ``cfd open source per opera mortta'' PAR FSC 2007-2013, Friuli Venezia Giulia.
Liang, Tu; Fu, Qing; Li, Fangbing; Zhou, Wei; Xin, Huaxia; Wang, Hui; Jin, Yu; Liang, Xinmiao
2015-08-01
A systematic strategy based on hydrophilic interaction liquid chromatography was developed for the separation, purification and quantification of raffinose family oligosaccharides from Lycopus lucidus Turcz. Methods with enough hydrophilicity and selectivity were utilized to resolve the problems encountered in the separation of oligosaccharides such as low retention, low resolution and poor solubility. The raffinose family oligosaccharides in L. lucidus Turcz. were isolated using solid-phase extraction followed by hydrophilic interaction liquid chromatography at semi-preparative scale to obtain standards of stachyose, verbascose and ajugose. Utilizing the obtained oligosaccharides as standards, a quantitative determination method was developed, validated and applied for the content determination of raffinose family oligosaccharides both in the aerial and root parts of L. lucidus Turcz. There were no oligosaccharides in the aerial parts, while in the root parts, the total content was 686.5 mg/g with the average distribution: raffinose 66.5 mg/g, stachyose 289.0 mg/g, verbascose 212.4 mg/g, and ajugose 118.6 mg/g. The result provided the potential of roots of L. lucidus Turcz. as new raffinose family oligosaccharides sources for functional food. Moreover, since the present systematic strategy is efficient, sensitive and robust, separation, purification and quantification of oligosaccharides by hydrophilic interaction liquid chromatography seems to be possible. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Alternate Sources for Propellant Ingredients.
1976-07-07
0dJ variety of reasons; (3) sole source; (4) medical/ OSHA /EPA problems; (5) dependent on foreign Imports; and (6) specification problems. •’. .’ . . I...problems exist for a variety of reasons; (3) sole sourc:e; (4) medical/ OSHA /EPA problems; (5) dependent on foreign imports; and (6) specification problems...regulations of OSHA or EPA affect pro- duction or use of the product; 5. Plant capacity - when demand increases faster that; predictions; 6. Supply
NASA Astrophysics Data System (ADS)
Subudhi, Sudhakar; Sreenivas, K. R.; Arakeri, Jaywant H.
2013-01-01
This work is concerned with the removal of unwanted fluid through the source-sink pair. The source consists of fluid issuing out of a nozzle in the form of a jet and the sink is a pipe that is kept some distance from the source pipe. Of concern is the percentage of source fluid sucked through the sink. The experiments have been carried in a large glass water tank. The source nozzle diameter is 6 mm and the sink pipe diameter is either 10 or 20 mm. The horizontal and vertical separations and angles between these source and sink pipes are adjustable. The flow was visualized using KMnO4 dye, planer laser induced fluorescence and particle streak photographs. To obtain the effectiveness (that is percentage of source fluid entering the sink pipe), titration method is used. The velocity profiles with and without the sink were obtained using particle image velocimetry. The sink flow rate to obtain a certain effectiveness increase dramatically with lateral separation. The sink diameter and the angle between source and the sink axes don't influence effectiveness as much as the lateral separation.
Özbek, Emel; Bongers, Ilja L; Lobbestael, Jill; van Nieuwenhuizen, Chijs
2015-12-01
This study investigated the relationship between acculturation and psychological problems in Turkish and Moroccan young adults living in the Netherlands. A sample of 131 healthy young adults aged between 18 and 24 years old, with a Turkish or Moroccan background was recruited using snowball sampling. Data on acculturation, internalizing and externalizing problems, beliefs about psychological problems, attributions of psychological problems and barriers to care were collected and analyzed using Latent Class Analysis and multinomial logistic regression. Three acculturation classes were identified in moderately to highly educated, healthy Turkish or Moroccan young adults: integration, separation and diffusion. None of the participants in the sample were marginalized or assimilated. Young adults reporting diffuse acculturation reported more internalizing and externalizing problems than those who were integrated or separated. Separated young adults reported experiencing more practical barriers to care than integrated young adults. Further research with a larger sample, including young adult migrants using mental health services, is required to improve our understanding of acculturation, psychological problems and barriers to care in this population. Including experiences of discrimination in the model might improve our understanding of the relationship between different forms of acculturation and psychological problems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Patterson, M.E.; Hammitt, W.E.; Titre, J.P.
1992-12-01
Recreational use of the Chickamauga Lock has more than doubled since 1984, when 3,139 recreational craft used the lock. Forty percent of the total annual use occurs during the month of June. The most common reason for heavy use during the month is to attend special events, although other locks in the Tennessee River Navigation System have also shown an increase in recreational use since 1984. Overall, the study suggests a low level of conflict between recreational and commercial users. Conflict among recreational users appears to be even less of a problem. The biggest source of conflict at the currentmore » time is not the actual delays, but recreational boaters' inability to predict whether the lock will be available for use prior to arriving at the dam. Nearly one half of the boaters indicated that this is a common problem during special events. The Corps can reduce this source of conflict to some extent by using an FM repeater to announce the estimated time of recreational and commercial lockages. A majority of the respondents supported this management alternative. The second most popular management alternative was the construction of a separate lock for commercial traffic.« less
Howell, D; Oliver, T K; Keller-Olaman, S; Davidson, J R; Garland, S; Samuels, C; Savard, J; Harris, C; Aubin, M; Olson, K; Sussman, J; MacFarlane, J; Taylor, C
2014-04-01
Sleep disturbance is prevalent in cancer with detrimental effects on health outcomes. Sleep problems are seldom identified or addressed in cancer practice. The purpose of this review was to identify the evidence base for the assessment and management of cancer-related sleep disturbance (insomnia and insomnia syndrome) for oncology practice. The search of the health literature included grey literature data sources and empirical databases from June 2004 to June 2012. The evidence was reviewed by a Canadian Sleep Expert Panel, comprised of nurses, psychologists, primary care physicians, oncologists, physicians specialized in sleep disturbances, researchers and guideline methodologists to develop clinical practice recommendations for pan-Canadian use reported in a separate paper. Three clinical practice guidelines and 12 randomized, controlled trials were identified as the main source of evidence. Additional guidelines and systematic reviews were also reviewed for evidence-based recommendations on the assessment and management of insomnia not necessarily in cancer. A need to routinely screen for sleep disturbances was identified and the randomized, controlled trial (RCT) evidence suggests benefits for cognitive behavioural therapy for improving sleep quality in cancer. Sleep disturbance is a prevalent problem in cancer that needs greater recognition in clinical practice and in future research.
The X-ray Pulsar 2A 1822-371 as a super-Eddington source
NASA Astrophysics Data System (ADS)
Bak Nielsen, A.; Patruno, A.
2017-10-01
The LMXB pulsar 2A 1822-371 is a slow accreting x-ray pulsar which shows several peculiar properties. The pulsar is observed to spin-up continuously on a timescale of 7000 years , shorter than expected for these type of systems. The orbital period is expanding on an extremely short timescale that challenges current theories of binary evolution. Furthermore, the presence of a thick accretion disc corona poses a problem, since we observe X-ray pulsations which would otherwise be smeared out by the Compton scattering. I propose a solution to all of the above problems by suggesting that the system may be a super-Eddington source with a donor out of thermal equilibrium. I propose that 2A 1822-371 has a thin accretion outflow being launched from the inner accretion disk region. The solution reconciles both the need for an accretion disk corona, the fast spin-up and the changes in the orbital separation. I will also present preliminary results obtained with new XMM-Newton data that show the possible presence of a low frequency modulation similar to those observed in two accreting millisecond pulsars. Given the relatively strong magnetic field of 2A 1822-371, the modulation requires a super-Eddington mass transfer rate, further strengthening the proposed scenario.
NASA Astrophysics Data System (ADS)
Joung, InSuk; Kim, Jong Yun; Gross, Steven P.; Joo, Keehyoung; Lee, Jooyoung
2018-02-01
Many problems in science and engineering can be formulated as optimization problems. One way to solve these problems is to develop tailored problem-specific approaches. As such development is challenging, an alternative is to develop good generally-applicable algorithms. Such algorithms are easy to apply, typically function robustly, and reduce development time. Here we provide a description for one such algorithm called Conformational Space Annealing (CSA) along with its python version, PyCSA. We previously applied it to many optimization problems including protein structure prediction and graph community detection. To demonstrate its utility, we have applied PyCSA to two continuous test functions, namely Ackley and Eggholder functions. In addition, in order to provide complete generality of PyCSA to any types of an objective function, we demonstrate the way PyCSA can be applied to a discrete objective function, namely a parameter optimization problem. Based on the benchmarking results of the three problems, the performance of CSA is shown to be better than or similar to the most popular optimization method, simulated annealing. For continuous objective functions, we found that, L-BFGS-B was the best performing local optimization method, while for a discrete objective function Nelder-Mead was the best. The current version of PyCSA can be run in parallel at the coarse grained level by calculating multiple independent local optimizations separately. The source code of PyCSA is available from http://lee.kias.re.kr.
Estimating the Earthquake Source Time Function by Markov Chain Monte Carlo Sampling
NASA Astrophysics Data System (ADS)
Dȩbski, Wojciech
2008-07-01
Many aspects of earthquake source dynamics like dynamic stress drop, rupture velocity and directivity, etc. are currently inferred from the source time functions obtained by a deconvolution of the propagation and recording effects from seismograms. The question of the accuracy of obtained results remains open. In this paper we address this issue by considering two aspects of the source time function deconvolution. First, we propose a new pseudo-spectral parameterization of the sought function which explicitly takes into account the physical constraints imposed on the sought functions. Such parameterization automatically excludes non-physical solutions and so improves the stability and uniqueness of the deconvolution. Secondly, we demonstrate that the Bayesian approach to the inverse problem at hand, combined with an efficient Markov Chain Monte Carlo sampling technique, is a method which allows efficient estimation of the source time function uncertainties. The key point of the approach is the description of the solution of the inverse problem by the a posteriori probability density function constructed according to the Bayesian (probabilistic) theory. Next, the Markov Chain Monte Carlo sampling technique is used to sample this function so the statistical estimator of a posteriori errors can be easily obtained with minimal additional computational effort with respect to modern inversion (optimization) algorithms. The methodological considerations are illustrated by a case study of the mining-induced seismic event of the magnitude M L ≈3.1 that occurred at Rudna (Poland) copper mine. The seismic P-wave records were inverted for the source time functions, using the proposed algorithm and the empirical Green function technique to approximate Green functions. The obtained solutions seem to suggest some complexity of the rupture process with double pulses of energy release. However, the error analysis shows that the hypothesis of source complexity is not justified at the 95% confidence level. On the basis of the analyzed event we also show that the separation of the source inversion into two steps introduces limitations on the completeness of the a posteriori error analysis.
Wang, Rui; Jin, Xin; Wang, Ziyuan; Gu, Wantao; Wei, Zhechao; Huang, Yuanjie; Qiu, Zhuang; Jin, Pengkang
2018-01-01
This paper proposes a new system of multilevel reuse with source separation in printing and dyeing wastewater (PDWW) treatment in order to dramatically improve the water reuse rate to 35%. By analysing the characteristics of the sources and concentrations of pollutants produced in different printing and dyeing processes, special, highly, and less contaminated wastewaters (SCW, HCW, and LCW, respectively) were collected and treated separately. Specially, a large quantity of LCW was sequentially reused at multiple levels to meet the water quality requirements for different production processes. Based on this concept, a multilevel reuse system with a source separation process was established in a typical printing and dyeing enterprise. The water reuse rate increased dramatically to 62%, and the reclaimed water was reused in different printing and dyeing processes based on the water quality. This study provides promising leads in water management for wastewater reclamation. Copyright © 2017 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gordon, John Howard; Alvare, Javier
A reactor has two chambers, namely an oil feedstock chamber and a source chamber. An ion separator separates the oil feedstock chamber from the source chamber, wherein the ion separator allows alkali metal ions to pass from the source chamber, through the ion separator, and into the oil feedstock chamber. A cathode is at least partially housed within the oil feedstock chamber and an anode is at least partially housed within the source chamber. A quantity of an oil feedstock is within the oil feedstock chamber, the oil feedstock comprising at least one carbon atom and a heteroatom and/or onemore » or more heavy metals, the oil feedstock further comprising naphthenic acid. When the alkali metal ion enters the oil feedstock chamber, the alkali metal reacts with the heteroatom, the heavy metals and/or the naphthenic acid, wherein the reaction with the alkali metal forms inorganic products.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, B; Reyes, J; Wong, J
Purpose: To overcome the limitation of CT/CBCT in guiding radiation for soft tissue targets, we developed a bioluminescence tomography(BLT) system for preclinical radiation research. We systematically assessed the system performance in target localization and the ability of resolving two sources in simulations, phantom and in vivo environments. Methods: Multispectral images acquired in single projection were used for the BLT reconstruction. Simulation studies were conducted for single spherical source radius from 0.5 to 3 mm at depth of 3 to 12 mm. The same configuration was also applied for the double sources simulation with source separations varying from 3 to 9more » mm. Experiments were performed in a standalone BLT/CBCT system. Two sources with 3 and 4.7 mm separations placed inside a tissue-mimicking phantom were chosen as the test cases. Live mice implanted with single source at 6 and 9 mm depth, 2 sources with 3 and 5 mm separation at depth of 5 mm or 3 sources in the abdomen were also used to illustrate the in vivo localization capability of the BLT system. Results: Simulation and phantom results illustrate that our BLT can provide 3D source localization with approximately 1 mm accuracy. The in vivo results are encouraging that 1 and 1.7 mm accuracy can be attained for the single source case at 6 and 9 mm depth, respectively. For the 2 sources study, both sources can be distinguished at 3 and 5 mm separations at approximately 1 mm accuracy using 3D BLT but not 2D bioluminescence image. Conclusion: Our BLT/CBCT system can be potentially applied to localize and resolve targets at a wide range of target sizes, depths and separations. The information provided in this study can be instructive to devise margins for BLT-guided irradiation and suggests that the BLT could guide radiation for multiple targets, such as metastasis. Drs. John W. Wong and Iulian I. Iordachita receive royalty payment from a licensing agreement between Xstrahl Ltd and Johns Hopkins University.« less
Bai, John Y H; Jonas Chan, C K; Elliffe, Douglas; Podlesnik, Christopher A
2016-11-01
The baseline rate of a reinforced target response decreases with the availability of response-independent sources of alternative reinforcement; however, resistance to disruption and relapse increases. Because many behavioral treatments for problem behavior include response-dependent reinforcement of alternative behavior, the present study assessed whether response-dependent alternative reinforcement also decreases baseline response rates but increases resistance to extinction and relapse. We reinforced target responding at equal rates across two components of a multiple schedule with pigeons. We compared resistance to extinction and relapse via reinstatement of (1) a target response trained concurrently with a reinforced alternative response in one component with (2) a target response trained either concurrently or in separate components from the alternative response across conditions. Target response rates trained alone in baseline were higher but resistance to extinction and relapse via reinstatement tests were greater after training concurrently with the alternative response. In another assessment, training target and alternative responding together, but separating them during extinction and reinstatement tests, produced equal resistance to extinction and relapse. Together, these findings are consistent with behavioral momentum theory-operant response-reinforcer relations determined baseline response rates but Pavlovian stimulus-reinforcer relations established during training determined resistance to extinction and relapse. These findings imply that reinforcing alternative behavior to treat problem behavior could initially reduce rates but increase persistence. © 2016 Society for the Experimental Analysis of Behavior.
The novel high-performance 3-D MT inverse solver
NASA Astrophysics Data System (ADS)
Kruglyakov, Mikhail; Geraskin, Alexey; Kuvshinov, Alexey
2016-04-01
We present novel, robust, scalable, and fast 3-D magnetotelluric (MT) inverse solver. The solver is written in multi-language paradigm to make it as efficient, readable and maintainable as possible. Separation of concerns and single responsibility concepts go through implementation of the solver. As a forward modelling engine a modern scalable solver extrEMe, based on contracting integral equation approach, is used. Iterative gradient-type (quasi-Newton) optimization scheme is invoked to search for (regularized) inverse problem solution, and adjoint source approach is used to calculate efficiently the gradient of the misfit. The inverse solver is able to deal with highly detailed and contrasting models, allows for working (separately or jointly) with any type of MT responses, and supports massive parallelization. Moreover, different parallelization strategies implemented in the code allow optimal usage of available computational resources for a given problem statement. To parameterize an inverse domain the so-called mask parameterization is implemented, which means that one can merge any subset of forward modelling cells in order to account for (usually) irregular distribution of observation sites. We report results of 3-D numerical experiments aimed at analysing the robustness, performance and scalability of the code. In particular, our computational experiments carried out at different platforms ranging from modern laptops to HPC Piz Daint (6th supercomputer in the world) demonstrate practically linear scalability of the code up to thousands of nodes.
Physical angular momentum separation for QED
NASA Astrophysics Data System (ADS)
Sun, Weimin
2017-04-01
We study the non-uniqueness problem of the gauge-invariant angular momentum separation for the case of QED, which stems from the recent controversy concerning the proper definitions of the orbital angular momentum and spin operator of the individual parts of a gauge field system. For the free quantum electrodynamics without matter, we show that the basic requirement of Euclidean symmetry selects a unique physical angular momentum separation scheme from the multitude of the possible angular momentum separation schemes constructed using the various gauge-invariant extensions (GIEs). Based on these results, we propose a set of natural angular momentum separation schemes for the case of interacting QED by invoking the formalism of asymptotic fields. Some perspectives on such a problem for the case of QCD are briefly discussed.
Stanaćević, Milutin; Li, Shuo; Cauwenberghs, Gert
2016-07-01
A parallel micro-power mixed-signal VLSI implementation of independent component analysis (ICA) with reconfigurable outer-product learning rules is presented. With the gradient sensing of the acoustic field over a miniature microphone array as a pre-processing method, the proposed ICA implementation can separate and localize up to 3 sources in mild reverberant environment. The ICA processor is implemented in 0.5 µm CMOS technology and occupies 3 mm × 3 mm area. At 16 kHz sampling rate, ASIC consumes 195 µW power from a 3 V supply. The outer-product implementation of natural gradient and Herault-Jutten ICA update rules demonstrates comparable performance to benchmark FastICA algorithm in ideal conditions and more robust performance in noisy and reverberant environment. Experiments demonstrate perceptually clear separation and precise localization over wide range of separation angles of two speech sources presented through speakers positioned at 1.5 m from the array on a conference room table. The presented ASIC leads to a extreme small form factor and low power consumption microsystem for source separation and localization required in applications like intelligent hearing aids and wireless distributed acoustic sensor arrays.
Yan, Bo; Pan, Chongle; Olman, Victor N; Hettich, Robert L; Xu, Ying
2004-01-01
Mass spectrometry is one of the most popular analytical techniques for identification of individual proteins in a protein mixture, one of the basic problems in proteomics. It identifies a protein through identifying its unique mass spectral pattern. While the problem is theoretically solvable, it remains a challenging problem computationally. One of the key challenges comes from the difficulty in distinguishing the N- and C-terminus ions, mostly b- and y-ions respectively. In this paper, we present a graph algorithm for solving the problem of separating bfrom y-ions in a set of mass spectra. We represent each spectral peak as a node and consider two types of edges: a type-1 edge connects two peaks possibly of the same ion types and a type-2 edge connects two peaks possibly of different ion types, predicted based on local information. The ion-separation problem is then formulated and solved as a graph partition problem, which is to partition the graph into three subgraphs, namely b-, y-ions and others respectively, so to maximize the total weight of type-1 edges while minimizing the total weight of type-2 edges within each subgraph. We have developed a dynamic programming algorithm for rigorously solving this graph partition problem and implemented it as a computer program PRIME. We have tested PRIME on 18 data sets of high accurate FT-ICR tandem mass spectra and found that it achieved ~90% accuracy for separation of b- and y- ions.
The effects of the structure characteristics on Magnetic Barkhausen noise in commercial steels
NASA Astrophysics Data System (ADS)
Deng, Yu; Li, Zhe; Chen, Juan; Qi, Xin
2018-04-01
This study has been done by separately measuring Magnetic Barkhausen noise (MBN) under different structure characteristics, namely the carbon content, hardness, roughness, and elastic modulus in commercial steels. The result of the experiments shows a strong dependence of MBN parameters (peak height, Root mean square (RMS), and average value) on structure characteristics. These effects, according to this study, can be explained by two kinds of source mechanisms of the MBN, domain wall nucleation and wall propagation. The discovery obtained in this paper can provide basic knowledge to understand the existing surface condition problem of Magnetic Barkhausen noise as a non-destructive evaluation technique and bring MBN into wider application.
Nanotechnology and clean energy: sustainable utilization and supply of critical materials
NASA Astrophysics Data System (ADS)
Fromer, Neil A.; Diallo, Mamadou S.
2013-11-01
Advances in nanoscale science and engineering suggest that many of the current problems involving the sustainable utilization and supply of critical materials in clean and renewable energy technologies could be addressed using (i) nanostructured materials with enhanced electronic, optical, magnetic and catalytic properties and (ii) nanotechnology-based separation materials and systems that can recover critical materials from non-traditional sources including mine tailings, industrial wastewater and electronic wastes with minimum environmental impact. This article discusses the utilization of nanotechnology to improve or achieve materials sustainability for energy generation, conversion and storage. We highlight recent advances and discuss opportunities of utilizing nanotechnology to address materials sustainability for clean and renewable energy technologies.
NASA Technical Reports Server (NTRS)
Trainor, J. H.; Teegarden, B. J.
1971-01-01
Demonstration that meaningful galactic and solar cosmic radiation measurements can be carried out on deep space missions. The radioisotopic thermoelectric generators (RTGs) which must be used as a source of power and perhaps of heat are a problem, but with proper separation from the experiments, with orientation, and with some shielding the damage effects can be reduced to an acceptable level. The Pioneer spacecraft are crucial in that they are targeted at the heart of Jupiter's radiation belts, and should supply the details of those belts. The subsequent Grand Tour opportunities can be selected for those periods which result in larger distances of closest approach to Jupiter if necessary.
NASA Technical Reports Server (NTRS)
Boggs, S. E.; Lin, R. P.; Coburn, W.; Feffer, P.; Pelling, R. M.; Schroeder, P.; Slassi-Sennou, S.
1997-01-01
The balloon-borne high resolution gamma ray and X-ray germanium spectrometer (HIREGS) was used to observe the Galactic center and two positions along the Galactic plane from Antarctica in January 1995. For its flight, the collimators were configured to measure the Galactic diffuse hard X-ray continuum between 20 and 200 keV by directly measuring the point source contributions to the wide field of view flux for subtraction. The hard X-ray spectra of GX 1+4 and GRO J1655-40 were measured with the diffuse continuum subtracted off. The analysis technique for source separation is discussed and the preliminary separated spectra for these point sources and the Galactic diffuse emission are presented.
Source Separation and Treament of Anthropogenic Urine (WERF Report INFR4SG09b)
Abstract: Anthropogenic urine, although only 1% of domestic wastewater flow, is responsible for 50-80% of the nutrients and a substantial portion of the pharmaceuticals and hormones present in the influent to wastewater treatment plants. Source separation and treatment of urine...
Inverting Monotonic Nonlinearities by Entropy Maximization
López-de-Ipiña Pena, Karmele; Caiafa, Cesar F.
2016-01-01
This paper proposes a new method for blind inversion of a monotonic nonlinear map applied to a sum of random variables. Such kinds of mixtures of random variables are found in source separation and Wiener system inversion problems, for example. The importance of our proposed method is based on the fact that it permits to decouple the estimation of the nonlinear part (nonlinear compensation) from the estimation of the linear one (source separation matrix or deconvolution filter), which can be solved by applying any convenient linear algorithm. Our new nonlinear compensation algorithm, the MaxEnt algorithm, generalizes the idea of Gaussianization of the observation by maximizing its entropy instead. We developed two versions of our algorithm based either in a polynomial or a neural network parameterization of the nonlinear function. We provide a sufficient condition on the nonlinear function and the probability distribution that gives a guarantee for the MaxEnt method to succeed compensating the distortion. Through an extensive set of simulations, MaxEnt is compared with existing algorithms for blind approximation of nonlinear maps. Experiments show that MaxEnt is able to successfully compensate monotonic distortions outperforming other methods in terms of the obtained Signal to Noise Ratio in many important cases, for example when the number of variables in a mixture is small. Besides its ability for compensating nonlinearities, MaxEnt is very robust, i.e. showing small variability in the results. PMID:27780261
Inverting Monotonic Nonlinearities by Entropy Maximization.
Solé-Casals, Jordi; López-de-Ipiña Pena, Karmele; Caiafa, Cesar F
2016-01-01
This paper proposes a new method for blind inversion of a monotonic nonlinear map applied to a sum of random variables. Such kinds of mixtures of random variables are found in source separation and Wiener system inversion problems, for example. The importance of our proposed method is based on the fact that it permits to decouple the estimation of the nonlinear part (nonlinear compensation) from the estimation of the linear one (source separation matrix or deconvolution filter), which can be solved by applying any convenient linear algorithm. Our new nonlinear compensation algorithm, the MaxEnt algorithm, generalizes the idea of Gaussianization of the observation by maximizing its entropy instead. We developed two versions of our algorithm based either in a polynomial or a neural network parameterization of the nonlinear function. We provide a sufficient condition on the nonlinear function and the probability distribution that gives a guarantee for the MaxEnt method to succeed compensating the distortion. Through an extensive set of simulations, MaxEnt is compared with existing algorithms for blind approximation of nonlinear maps. Experiments show that MaxEnt is able to successfully compensate monotonic distortions outperforming other methods in terms of the obtained Signal to Noise Ratio in many important cases, for example when the number of variables in a mixture is small. Besides its ability for compensating nonlinearities, MaxEnt is very robust, i.e. showing small variability in the results.
Volk, Sonja; Schreiber, Thomas D.; Eisen, David; Wiese, Calvin; Planatscher, Hannes; Pynn, Christopher J.; Stoll, Dieter; Templin, Markus F.; Joos, Thomas O.; Pötz, Oliver
2012-01-01
Blood plasma is a valuable source of potential biomarkers. However, its complexity and the huge dynamic concentration range of its constituents complicate its analysis. To tackle this problem, an immunoprecipitation strategy was employed using antibodies directed against short terminal epitope tags (triple X proteomics antibodies), which allow the enrichment of groups of signature peptides derived from trypsin-digested plasma. Isolated signature peptides are subsequently detected using MALDI-TOF/TOF mass spectrometry. Sensitivity of the immunoaffinity approach was, however, compromised by the presence of contaminant peaks derived from the peptides of nontargeted high abundant proteins. A closer analysis of the enrichment strategy revealed nonspecific peptide binding to the solid phase affinity matrix as the major source of the contaminating peptides. We therefore implemented a sucrose density gradient ultracentrifugation separation step into the procedure. This yielded a 99% depletion of contaminating peptides from a sucrose fraction containing 70% of the peptide-antibody complexes and enabled the detection of the previously undetected low abundance protein filamin-A. Assessment of this novel approach using 15 different triple X proteomics antibodies demonstrated a more consistent detection of a greater number of targeted peptides and a significant reduction in the intensity of nonspecific peptides. Ultracentrifugation coupled with immunoaffinity MS approaches presents a powerful tool for multiplexed plasma protein analysis without the requirement for demanding liquid chromatography separation techniques. PMID:22527512
Dierkes, C; Göbel, P; Lohmann, M; Coldewey, W G
2006-01-01
Source control by on-site retention and infiltration of stormwater is a sustainable and proven alternative to classical drainage methods. Unfortunately, sedimentary particles and pollutants from drained surfaces cause clogging and endanger soil and groundwater during long-term operation of infiltration devices. German water authorities recommend the use of infiltration devices, such as swales or swale-trench-systems. Direct infiltration by underground facilities, such as pipes, trenches or sinks, without pretreatment of runoff is generally not permitted. Problems occur with runoff from metal roofs, traffic areas and industrial sites. However, due to site limitations, underground systems are often the only feasible option. To overcome this situation, a pollution control pit was developed with a hydrodynamic separator and a multistage filter made of coated porous concrete. The system treats runoff at source and protects soil, groundwater and receiving waterways. Typically, more than 90% of the pollutants such as sedimentary particles, hydrocarbons and heavy metals can be removed. Filters have been developed to treat even higher polluted stormwater loads from metal roofs and industrial sites. The treatment process is based on sedimentation, filtration, adsorption and chemical precipitation. Sediments are trapped in a special chamber within the pit and can be removed easily. Other pollutants are captured in the concrete filter upstream of the sediment separator chamber. Filters can be easily replaced.
Tien, Nguyen Xuan; Kim, Semog; Rhee, Jong Myung; Park, Sang Yoon
2017-07-25
Fault tolerance has long been a major concern for sensor communications in fault-tolerant cyber physical systems (CPSs). Network failure problems often occur in wireless sensor networks (WSNs) due to various factors such as the insufficient power of sensor nodes, the dislocation of sensor nodes, the unstable state of wireless links, and unpredictable environmental interference. Fault tolerance is thus one of the key requirements for data communications in WSN applications. This paper proposes a novel path redundancy-based algorithm, called dual separate paths (DSP), that provides fault-tolerant communication with the improvement of the network traffic performance for WSN applications, such as fault-tolerant CPSs. The proposed DSP algorithm establishes two separate paths between a source and a destination in a network based on the network topology information. These paths are node-disjoint paths and have optimal path distances. Unicast frames are delivered from the source to the destination in the network through the dual paths, providing fault-tolerant communication and reducing redundant unicast traffic for the network. The DSP algorithm can be applied to wired and wireless networks, such as WSNs, to provide seamless fault-tolerant communication for mission-critical and life-critical applications such as fault-tolerant CPSs. The analyzed and simulated results show that the DSP-based approach not only provides fault-tolerant communication, but also improves network traffic performance. For the case study in this paper, when the DSP algorithm was applied to high-availability seamless redundancy (HSR) networks, the proposed DSP-based approach reduced the network traffic by 80% to 88% compared with the standard HSR protocol, thus improving network traffic performance.
Plasma separation process. Betacell (BCELL) code, user's manual
NASA Astrophysics Data System (ADS)
Taherzadeh, M.
1987-11-01
The emergence of clearly defined applications for (small or large) amounts of long-life and reliable power sources has given the design and production of betavoltaic systems a new life. Moreover, because of the availability of the Plasma Separation Program, (PSP) at TRW, it is now possible to separate the most desirable radioisotopes for betacell power generating devices. A computer code, named BCELL, has been developed to model the betavoltaic concept by utilizing the available up-to-date source/cell parameters. In this program, attempts have been made to determine the betacell energy device maximum efficiency, degradation due to the emitting source radiation and source/cell lifetime power reduction processes. Additionally, comparison is made between the Schottky and PN junction devices for betacell battery design purposes. Certain computer code runs have been made to determine the JV distribution function and the upper limit of the betacell generated power for specified energy sources. A Ni beta emitting radioisotope was used for the energy source and certain semiconductors were used for the converter subsystem of the betacell system. Some results for a Promethium source are also given here for comparison.
Bracke, Piet F; Colman, Elien; Symoens, Sara A A; Van Praag, Lore
2010-04-29
Little is known about differences in professional care seeking based on marital status. The few existing studies show more professional care seeking among the divorced or separated compared to the married or cohabiting. The aim of this study is to determine whether, in a sample of the European general population, the divorced or separated seek more professional mental health care than the married or cohabiting, regardless of self-reported mental health problems. Furthermore, we examine whether two country-level features--the supply of mental health professionals and the country-level divorce rates--contribute to marital status differences in professional care-seeking behavior. We use data from the Eurobarometer 248 on mental well-being that was collected via telephone interviews. The unweighted sample includes 27,146 respondents (11,728 men and 15,418 women). Poisson hierarchical regression models were estimated to examine whether the divorced or separated have higher professional health care use for emotional or psychological problems, after controlling for mental and somatic health, sociodemographic characteristics, support from family and friends, and degree of urbanization. We also considered country-level divorce rates and indicators of the supply of mental health professionals, and applied design and population weights. We find that professional care seeking is strongly need based. Moreover, the divorced or separated consult health professionals for mental health problems more often than people who are married or who cohabit do. In addition, we find that the gap between the divorced or separated and the married or cohabiting is highest in countries with low divorce rates. The higher rates of professional care seeking for mental health problems among the divorced or separated only partially correlates with their more severe mental health problems. In countries where marital dissolution is more common, the marital status gap in professional care seeking is narrower, partially because professional care seeking is more common among the married or cohabiting.
Developing a system for blind acoustic source localization and separation
NASA Astrophysics Data System (ADS)
Kulkarni, Raghavendra
This dissertation presents innovate methodologies for locating, extracting, and separating multiple incoherent sound sources in three-dimensional (3D) space; and applications of the time reversal (TR) algorithm to pinpoint the hyper active neural activities inside the brain auditory structure that are correlated to the tinnitus pathology. Specifically, an acoustic modeling based method is developed for locating arbitrary and incoherent sound sources in 3D space in real time by using a minimal number of microphones, and the Point Source Separation (PSS) method is developed for extracting target signals from directly measured mixed signals. Combining these two approaches leads to a novel technology known as Blind Sources Localization and Separation (BSLS) that enables one to locate multiple incoherent sound signals in 3D space and separate original individual sources simultaneously, based on the directly measured mixed signals. These technologies have been validated through numerical simulations and experiments conducted in various non-ideal environments where there are non-negligible, unspecified sound reflections and reverberation as well as interferences from random background noise. Another innovation presented in this dissertation is concerned with applications of the TR algorithm to pinpoint the exact locations of hyper-active neurons in the brain auditory structure that are directly correlated to the tinnitus perception. Benchmark tests conducted on normal rats have confirmed the localization results provided by the TR algorithm. Results demonstrate that the spatial resolution of this source localization can be as high as the micrometer level. This high precision localization may lead to a paradigm shift in tinnitus diagnosis, which may in turn produce a more cost-effective treatment for tinnitus than any of the existing ones.
The two major sources of arsenic exposure used in an arsenic risk assessment are water and diet. The extraction, separation and quantification of individual arsenic species from dietary sources is considered an area of uncertainty within the arsenic risk assessment. The uncertain...
Bodin, Theo; Björk, Jonas; Ardö, Jonas; Albin, Maria
2015-01-01
Background: Access to a quiet side in one’s dwelling is thought to compensate for higher noise levels at the most exposed façade. It has also been indicated that noise from combined traffic sources causes more noise annoyance than equal average levels from either road traffic or railway noise separately. Methods: 2612 persons in Malmö, Sweden, answered to a residential environment survey including questions on outdoor environment, noise sensitivity, noise annoyance, sleep quality and concentration problems. Road traffic and railway noise was modeled using Geographic Information System. Results: Access to a quiet side, i.e., at least one window facing yard, water or green space, was associated with reduced risk of annoyance OR (95%CI) 0.47 (0.38–0.59), and concentration problems 0.76 (0.61–0.95). Bedroom window facing the same environment was associated to reduced risk of reporting of poor sleep quality 0.78 (0.64–1.00). Railway noise was associated with reduced risk of annoyance below 55 dB(A) but not at higher levels of exposure. Conclusions: Having a window facing a yard, water or green space was associated to a substantially reduced risk of noise annoyance and concentration problems. If this window was the bedroom window, sleeping problems were less likely. PMID:25642690
NASA Astrophysics Data System (ADS)
Moini, Mehdi; Rollman, Christopher M.
2016-03-01
We introduce a battery operated capillary electrophoresis electrospray ionization (CE/ESI) source for mass spectrometry with optical isomer separation capability. The source fits in front of low or high resolution mass spectrometers similar to a nanospray source with about the same weight and size. The source has two high voltage power supplies (±25 kV HVPS) capable of operating in forward or reverse polarity modes and powered by a 12 V rechargeable lithium ion battery with operation time of ~10 h. In ultrafast CE mode, in which short narrow capillaries (≤15 μm i.d., 15-25 cm long) and field gradients ≥1000 V/cm are used, peak widths at the base are <1 s wide. Under these conditions, the source provides high resolution separation, including optical isomer resolution in ~1 min. Using a low resolution mass spectrometer (LTQ Velos) with a scan time of 0.07 s/scan, baseline separation of amino acids and their optical isomers were achieved in ~1 min. Moreover, bovine serum albumin (BSA) was analyzed in ~1 min with 56% coverage using the data-dependent MS/MS. Using a high resolution mass spectrometer (Thermo Orbitrap Elite) with 15,000 resolution, the fastest scan time achieved was 0.15 s, which was adequate for CE-MS analysis when optical isomer separation is not required or when the optical isomers were well separated. Figures of merit including a detection limit of 2 fmol and linear dynamic range of two orders of magnitude were achieved for amino acids.
Incorporation of epidemiological findings into radiation protection standards.
Goldsmith, J R
In standard setting there is a tendency to use data from experimental studies in preference to findings from epidemiological studies. Yet the epidemiological studies are usually the first and at times the only source of data on such critical effects as cancer, reproductive failure, and chronic cardiac and cardiovascular disease in exposed humans. A critique of the protection offered by current and proposed standards for ionizing and non-ionizing radiation illustrates some of the problems. Similar problems occur with water and air pollutants and with occupational exposures of many types. The following sorts of problems were noted: (a) Consideration of both thermal and non-thermal effects especially of non-ionizing radiation. (b) Interpretation of non-significant results as equivalent to no effect. (c) Accepting author's interpretation of a study, rather than examining its data independently for evidence of hazard. (d) Discounting data on unanticipated effects because of poor fit to preconceptions. (e) Dependence on threshold assumptions and demonstrations of dose-response relationships. (f) Choice of insensitive epidemiological indicators and procedures. (g) Consideration of each study separately, rather than giving weight to the conjunction of evidence from all available studies. These problems may be minimized by greater involvement of epidemiologists and their professional organizations in decisions about health protection.
Moisture separator reheater failure prevention
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gilcrest, J.D.; Mollerus, F.J.
1983-01-01
Moisture separator reheaters (MSRs) are used in many nuclear plants between the HP and LP turbines to remove moisture and provide some superheat, thereby improving the plant heat rate. Many of the operating MSRs have experienced problems of the following types: flow induced vibration, condensate subcooling oscillation, excessive U-tube leg ..delta..T, and shroud buckling. Although MSR vendors have made modifications to reduce these problems, the problems have not been completely solved. Further improvements in both MSR design and operation are needed. This paper discusses the necessary improvements.
Improved moving source photometry with TRIPPy
NASA Astrophysics Data System (ADS)
Alexandersen, Mike; Fraser, Wesley Cristopher
2017-10-01
Photometry of moving sources is more complicated than for stationary sources, because the sources trail their signal out over more pixels than a point source of the same magnitude. Using a circular aperture of same size as would be appropriate for point sources can cut out a large amount of flux if a moving source moves substantially relative to the size of the aperture during the exposure, resulting in underestimated fluxes. Using a large circular aperture can mitigate this issue at the cost of a significantly reduced signal to noise compared to a point source, as a result of the inclusion of a larger background region within the aperture.Trailed Image Photometry in Python (TRIPPy) solves this problem by using a pill-shaped aperture: the traditional circular aperture is sliced in half perpendicular to the direction of motion and separated by a rectangle as long as the total motion of the source during the exposure. TRIPPy can also calculate the appropriate aperture correction (which will depend both on the radius and trail length of the pill-shaped aperture), and has features for selecting good PSF stars, creating a PSF model (convolved moffat profile + lookup table) and selecting a custom sky-background area in order to ensure no other sources contribute to the background estimate.In this poster, we present an overview of the TRIPPy features and demonstrate the improvements resulting from using TRIPPy compared to photometry obtained by other methods with examples from real projects where TRIPPy has been implemented in order to obtain the best-possible photometric measurements of Solar System objects. While TRIPPy has currently mainly been used for Trans-Neptunian Objects, the improvement from using the pill-shaped aperture increases with source motion, making TRIPPy highly relevant for asteroid and centaur photometry as well.
EBQ code: Transport of space-charge beams in axially symmetric devices
NASA Astrophysics Data System (ADS)
Paul, A. C.
1982-11-01
Such general-purpose space charge codes as EGUN, BATES, WODF, and TRANSPORT do not gracefully accommodate the simulation of relativistic space-charged beams propagating a long distance in axially symmetric devices where a high degree of cancellation has occurred between the self-magnetic and self-electric forces of the beam. The EBQ code was written specifically to follow high current beam particles where space charge is important in long distance flight in axially symmetric machines possessing external electric and magnetic field. EBQ simultaneously tracks all trajectories so as to allow procedures for charge deposition based on inter-ray separations. The orbits are treated in Cartesian geometry (position and momentum) with z as the independent variable. Poisson's equation is solved in cylindrical geometry on an orthogonal rectangular mesh. EBQ can also handle problems involving multiple ion species where the space charge from each must be included. Such problems arise in the design of ion sources where different charge and mass states are present.
NASA Astrophysics Data System (ADS)
Arhatari, Benedicta D.; Abbey, Brian
2018-01-01
Ross filter pairs have recently been demonstrated as a highly effective means of producing quasi-monoenergetic beams from polychromatic X-ray sources. They have found applications in both X-ray spectroscopy and for elemental separation in X-ray computed tomography (XCT). Here we explore whether they could be applied to the problem of metal artefact reduction (MAR) for applications in medical imaging. Metal artefacts are a common problem in X-ray imaging of metal implants embedded in bone and soft tissue. A number of data post-processing approaches to MAR have been proposed in the literature, however these can be time-consuming and sometimes have limited efficacy. Here we describe and demonstrate an alternative approach based on beam conditioning using Ross filter pairs. This approach obviates the need for any complex post-processing of the data and enables MAR and segmentation from the surrounding tissue by exploiting the absorption edge contrast of the implant.
NASA Astrophysics Data System (ADS)
Xia, Ya-Rong; Zhang, Shun-Li; Xin, Xiang-Peng
2018-03-01
In this paper, we propose the concept of the perturbed invariant subspaces (PISs), and study the approximate generalized functional variable separation solution for the nonlinear diffusion-convection equation with weak source by the approximate generalized conditional symmetries (AGCSs) related to the PISs. Complete classification of the perturbed equations which admit the approximate generalized functional separable solutions (AGFSSs) is obtained. As a consequence, some AGFSSs to the resulting equations are explicitly constructed by way of examples.
NASA Astrophysics Data System (ADS)
Godino, Neus; Jorde, Felix; Lawlor, Daryl; Jaeger, Magnus; Duschl, Claus
2015-08-01
Microalgae are a promising source of bioactive ingredients for the food, pharmaceutical and cosmetic industries. Every microalgae research group or production facility is facing one major problem regarding the potential contamination of the algal cell with bacteria. Prior to the storage of the microalgae in strain collections or to cultivation in bioreactors, it is necessary to carry out laborious purification procedures to separate the microalgae from the undesired bacterial cells. In this work, we present a disposable microfluidic cartridge for the high-throughput purification of microalgae samples based on inertial microfluidics. Some of the most relevant microalgae strains have a larger size than the relatively small, few micron bacterial cells, so making them distinguishable by size. The inertial microfluidic cartridge was fabricated with inexpensive materials, like pressure sensitive adhesive (PSA) and thin plastic layers, which were patterned using a simple cutting plotter. In spite of fabrication restrictions and the intrinsic difficulties of biological samples, the separation of microalgae from bacteria reached values in excess of 99%, previously only achieved using conventional high-end and high cost lithography methods. Moreover, due to the simple and high-throughput characteristic of the separation, it is possible to concatenate serial purification to exponentially decrease the absolute amount of bacteria in the final purified sample.
Optimum rocket propulsion for energy-limited transfer
NASA Technical Reports Server (NTRS)
Zuppero, Anthony; Landis, Geoffrey A.
1991-01-01
In order to effect large-scale return of extraterrestrial resources to Earth orbit, it is desirable to optimize the propulsion system to maximize the mass of payload returned per unit energy expended. This optimization problem is different from the conventional rocket propulsion optimization. A rocket propulsion system consists of an energy source plus reaction mass. In a conventional chemical rocket, the energy source and the reaction mass are the same. For the transportation system required, however, the best system performance is achieved if the reaction mass used is from a locally available source. In general, the energy source and the reaction mass will be separate. One such rocket system is the nuclear thermal rocket, in which the energy source is a reactor and the reaction mass a fluid which is heated by the reactor and exhausted. Another energy-limited rocket system is the hydrogen/oxygen rocket where H2/O2 fuel is produced by electrolysis of water using a solar array or a nuclear reactor. The problem is to choose the optimum specific impulse (or equivalently exhaust velocity) to minimize the amount of energy required to produce a given mission delta-v in the payload. The somewhat surprising result is that the optimum specific impulse is not the maximum possible value, but is proportional to the mission delta-v. In general terms, at the beginning of the mission it is optimum to use a very low specific impulse and expend a lot of reaction mass, since this is the most energy efficient way to transfer momentum. However, as the mission progresses, it becomes important to minimize the amount of reaction mass expelled, since energy is wasted moving the reaction mass. Thus, the optimum specific impulse will increase with the mission delta-v. Optimum I(sub sp) is derived for maximum payload return per energy expended for both the case of fixed and variable I(sub sp) engines. Sample missions analyzed include return of water payloads from the moons of Mars and of Saturn.
Method and apparatus for controlling carrier envelope phase
Chang, Zenghu [Manhattan, KS; Li, Chengquan [Sunnyvale, CA; Moon, Eric [Manhattan, KS
2011-12-06
A chirped pulse amplification laser system. The system generally comprises a laser source, a pulse modification apparatus including first and second pulse modification elements separated by a separation distance, a positioning element, a measurement device, and a feedback controller. The laser source is operable to generate a laser pulse and the pulse modification apparatus operable to modify at least a portion of the laser pulse. The positioning element is operable to reposition at least a portion of the pulse modification apparatus to vary the separation distance. The measurement device is operable to measure the carrier envelope phase of the generated laser pulse and the feedback controller is operable to control the positioning element based on the measured carrier envelope phase to vary the separation distance of the pulse modification elements and control the carrier envelope phase of laser pulses generated by the laser source.
Wu, Jiang; Qin, Yufei; Zhou, Quan; Xu, Zhenming
2009-05-30
The electrostatic separation is an effective and environmentally friendly method for recycling metals and nonmetals from crushed printed circuit board (PCB) wastes. However, it still confronts some problems brought by nonconductive powder (NP). Firstly, the NP is fine and liable to aggregate. This leads to an increase of middling products and loss of metals. Secondly, the stability of separation process is influenced by NP. Finally, some NPs accumulate on the surface of the corona and electrostatic electrodes during the process. These problems lead to an inefficient separation. In the present research, the impacts of NP on electrostatic separation are investigated. The experimental results show that: the separation is notably influenced when the NP content is more than 10%. With the increase of NP content, the middling products sharply increase from 1.4 g to 4.3g (increase 207.1%), while the conductive products decrease from 24.0 g to 19.1g (decrease 20.4%), and the separation process become more instable.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Amitava Sarkar; James K. Neathery; Burtron H. Davis
A fundamental filtration study was started to investigate the separation of Fischer-Tropsch Synthesis (FTS) liquids from iron-based catalyst particles. Slurry-phase FTS in slurry bubble column reactor systems is the preferred mode of operation since the reaction is highly exothermic. Consequently, heavy wax products in one approach may be separated from catalyst particles before being removed from the reactor system. Achieving an efficient wax product separation from iron-based catalysts is one of the most challenging technical problems associated with slurry-phase iron-based FTS and is a key factor for optimizing operating costs. The separation problem is further compounded by attrition of ironmore » catalyst particles and the formation of ultra-fine particles.« less
Hearing Scenes: A Neuromagnetic Signature of Auditory Source and Reverberant Space Separation
Oliva, Aude
2017-01-01
Abstract Perceiving the geometry of surrounding space is a multisensory process, crucial to contextualizing object perception and guiding navigation behavior. Humans can make judgments about surrounding spaces from reverberation cues, caused by sounds reflecting off multiple interior surfaces. However, it remains unclear how the brain represents reverberant spaces separately from sound sources. Here, we report separable neural signatures of auditory space and source perception during magnetoencephalography (MEG) recording as subjects listened to brief sounds convolved with monaural room impulse responses (RIRs). The decoding signature of sound sources began at 57 ms after stimulus onset and peaked at 130 ms, while space decoding started at 138 ms and peaked at 386 ms. Importantly, these neuromagnetic responses were readily dissociable in form and time: while sound source decoding exhibited an early and transient response, the neural signature of space was sustained and independent of the original source that produced it. The reverberant space response was robust to variations in sound source, and vice versa, indicating a generalized response not tied to specific source-space combinations. These results provide the first neuromagnetic evidence for robust, dissociable auditory source and reverberant space representations in the human brain and reveal the temporal dynamics of how auditory scene analysis extracts percepts from complex naturalistic auditory signals. PMID:28451630
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carmichael, Joshua D.; Hartse, Hans
Colocated explosive sources often produce correlated seismic waveforms. Multichannel correlation detectors identify these signals by scanning template waveforms recorded from known reference events against "target" data to find similar waveforms. This screening problem is challenged at thresholds required to monitor smaller explosions, often because non-target signals falsely trigger such detectors. Therefore, it is generally unclear what thresholds will reliably identify a target explosion while screening non-target background seismicity. Here, we estimate threshold magnitudes for hypothetical explosions located at the North Korean nuclear test site over six months of 2010, by processing International Monitoring System (IMS) array data with a multichannelmore » waveform correlation detector. Our method (1) accounts for low amplitude background seismicity that falsely triggers correlation detectors but is unidentifiable with conventional power beams, (2) adapts to diurnally variable noise levels and (3) uses source-receiver reciprocity concepts to estimate thresholds for explosions spatially separated from the template source. Furthermore, we find that underground explosions with body wave magnitudes m b = 1.66 are detectable at the IMS array USRK with probability 0.99, when using template waveforms consisting only of P -waves, without false alarms. We conservatively find that these thresholds also increase by up to a magnitude unit for sources located 4 km or more from the Feb.12, 2013 announced nuclear test.« less
Single-trial event-related potential extraction through one-unit ICA-with-reference
NASA Astrophysics Data System (ADS)
Lih Lee, Wee; Tan, Tele; Falkmer, Torbjörn; Leung, Yee Hong
2016-12-01
Objective. In recent years, ICA has been one of the more popular methods for extracting event-related potential (ERP) at the single-trial level. It is a blind source separation technique that allows the extraction of an ERP without making strong assumptions on the temporal and spatial characteristics of an ERP. However, the problem with traditional ICA is that the extraction is not direct and is time-consuming due to the need for source selection processing. In this paper, the application of an one-unit ICA-with-Reference (ICA-R), a constrained ICA method, is proposed. Approach. In cases where the time-region of the desired ERP is known a priori, this time information is utilized to generate a reference signal, which is then used for guiding the one-unit ICA-R to extract the source signal of the desired ERP directly. Main results. Our results showed that, as compared to traditional ICA, ICA-R is a more effective method for analysing ERP because it avoids manual source selection and it requires less computation thus resulting in faster ERP extraction. Significance. In addition to that, since the method is automated, it reduces the risks of any subjective bias in the ERP analysis. It is also a potential tool for extracting the ERP in online application.
Single-trial event-related potential extraction through one-unit ICA-with-reference.
Lee, Wee Lih; Tan, Tele; Falkmer, Torbjörn; Leung, Yee Hong
2016-12-01
In recent years, ICA has been one of the more popular methods for extracting event-related potential (ERP) at the single-trial level. It is a blind source separation technique that allows the extraction of an ERP without making strong assumptions on the temporal and spatial characteristics of an ERP. However, the problem with traditional ICA is that the extraction is not direct and is time-consuming due to the need for source selection processing. In this paper, the application of an one-unit ICA-with-Reference (ICA-R), a constrained ICA method, is proposed. In cases where the time-region of the desired ERP is known a priori, this time information is utilized to generate a reference signal, which is then used for guiding the one-unit ICA-R to extract the source signal of the desired ERP directly. Our results showed that, as compared to traditional ICA, ICA-R is a more effective method for analysing ERP because it avoids manual source selection and it requires less computation thus resulting in faster ERP extraction. In addition to that, since the method is automated, it reduces the risks of any subjective bias in the ERP analysis. It is also a potential tool for extracting the ERP in online application.
Stripline split-ring resonator with integrated optogalvanic sample cell
NASA Astrophysics Data System (ADS)
Persson, Anders; Berglund, Martin; Thornell, Greger; Possnert, Göran; Salehpour, Mehran
2014-04-01
Intracavity optogalvanic spectroscopy (ICOGS) has been proposed as a method for unambiguous detection of rare isotopes. Of particular interest is 14C, where detection of extremely low concentrations in the 1:1015 range (14C: 12C), is of interest in, e.g., radiocarbon dating and pharmaceutical sciences. However, recent reports show that ICOGS suffers from substantial problems with reproducibility. To qualify ICOGS as an analytical method, more stable and reliable plasma generation and signal detection are needed. In our proposed setup, critical parameters have been improved. We have utilized a stripline split-ring resonator microwave-induced microplasma source to excite and sustain the plasma. Such a microplasma source offers several advantages over conventional ICOGS plasma sources. For example, the stripline split-ring resonator concept employs separated plasma generation and signal detection, which enables sensitive detection at stable plasma conditions. The concept also permits in situ observation of the discharge conditions, which was found to improve reproducibility. Unique to the stripline split-ring resonator microplasma source in this study, is that the optogalvanic sample cell has been embedded in the device itself. This integration enables improved temperature control and more stable and accurate signal detection. Significant improvements are demonstrated, including reproducibility, signal-to-noise ratio, and precision.
Duplicate Health Insurance Coverage: Determinants of Variation Across States
Luft, Harold S.; Maerki, Susan C.
1982-01-01
Although it is recognized that many people have duplicate private health insurance coverage, either through separate purchase or as health benefits in multi-earner families, there has been little analysis of the factors determining duplicate coverage rates. A new data source, the Survey of Income and Education, offers a comparison with the only previous source of state level data, the estimates from the Health Insurance Association of America. The R2 between the two sets is only .3 and certain problems can be traced to the methodology underlying the HIAA figures. Using figures for gross and net coverage, the ratio of total policies to people with private coverage ranges from .94 in Utah to 1.53 in Illinois. Measures of industry distribution, per capita income and employment explain a large portion of the variance, but it appears that these factors operate in opposite directions for group and non-group policies. Similar sociodemographic variables also explain net coverage. These findings have substantial implications for research and the structuring of employee health benefits. PMID:10309638
Development of a test method for carbonyl compounds from stationary source emissions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhihua Fan; Peterson, M.R.; Jayanty, R.K.M.
1997-12-31
Carbonyl compounds have received increasing attention because of their important role in ground-level ozone formation. The common method used for the measurement of aldehydes and ketones is 2,4-dinitrophenylhydrazine (DNPH) derivatization followed by high performance liquid chromatography and ultra violet (HPLC-UV) analysis. One of the problems associated with this method is the low recovery for certain compounds such as acrolein. This paper presents a study in the development of a test method for the collection and measurement of carbonyl compounds from stationary source emissions. This method involves collection of carbonyl compounds in impingers, conversion of carbonyl compounds to a stable derivativemore » with O-2,3,4,5,6-pentafluorobenzyl hydroxylamine hydrochloride (PFBHA), and separation and measurement by electron capture gas chromatography (GC-ECD). Eight compounds were selected for the evaluation of this method: formaldehyde, acetaldehyde, acrolein, acetone, butanal, methyl ethyl ketone (MEK), methyl isobutyl ketone (MIBK), and hexanal.« less
Components of executive functioning in metamemory.
Mäntylä, Timo; Rönnlund, Michael; Kliegel, Matthias
2010-10-01
This study examined metamemory in relation to three basic executive functions (set shifting, working memory updating, and response inhibition) measured as latent variables. Young adults (Experiment 1) and middle-aged adults (Experiment 2) completed a set of executive functioning tasks and the Prospective and Retrospective Memory Questionnaire (PRMQ). In Experiment 1, source recall and face recognition tasks were included as indicators of objective memory performance. In both experiments, analyses of the executive functioning data yielded a two-factor solution, with the updating and inhibition tasks constituting a common factor and the shifting tasks a separate factor. Self-reported memory problems showed low predictive validity, but subjective and objective memory performance were related to different components of executive functioning. In both experiments, set shifting, but not updating and inhibition, was related to PRMQ, whereas source recall showed the opposite pattern of correlations in Experiment 1. These findings suggest that metamemorial judgments reflect selective effects of executive functioning and that individual differences in mental flexibility contribute to self-beliefs of efficacy.
NASA Astrophysics Data System (ADS)
Eriçok, Ozan Burak; Ertürk, Hakan
2018-07-01
Optical characterization of nanoparticle aggregates is a complex inverse problem that can be solved by deterministic or statistical methods. Previous studies showed that there exists a different lower size limit of reliable characterization, corresponding to the wavelength of light source used. In this study, these characterization limits are determined considering a light source wavelength range changing from ultraviolet to near infrared (266-1064 nm) relying on numerical light scattering experiments. Two different measurement ensembles are considered. Collection of well separated aggregates made up of same sized particles and that of having particle size distribution. Filippov's cluster-cluster algorithm is used to generate the aggregates and the light scattering behavior is calculated by discrete dipole approximation. A likelihood-free Approximate Bayesian Computation, relying on Adaptive Population Monte Carlo method, is used for characterization. It is found that when the wavelength range of 266-1064 nm is used, successful characterization limit changes from 21-62 nm effective radius for monodisperse and polydisperse soot aggregates.
Liang, Yong [Richland, WA; Daschbach, John L [Richland, WA; Su, Yali [Richland, WA; Chambers, Scott A [Kennewick, WA
2006-08-22
A method for producing quantum dots. The method includes cleaning an oxide substrate and separately cleaning a metal source. The substrate is then heated and exposed to the source in an oxygen environment. This causes metal oxide quantum dots to form on the surface of the substrate.
Liang, Yong [Richland, WA; Daschbach, John L [Richland, WA; Su, Yali [Richland, WA; Chambers, Scott A [Kennewick, WA
2003-03-18
A method for producing quantum dots. The method includes cleaning an oxide substrate and separately cleaning a metal source. The substrate is then heated and exposed to the source in an oxygen environment. This causes metal oxide quantum dots to form on the surface of the substrate.
Cohen, Michael X; Gulbinaite, Rasa
2017-02-15
Steady-state evoked potentials (SSEPs) are rhythmic brain responses to rhythmic sensory stimulation, and are often used to study perceptual and attentional processes. We present a data analysis method for maximizing the signal-to-noise ratio of the narrow-band steady-state response in the frequency and time-frequency domains. The method, termed rhythmic entrainment source separation (RESS), is based on denoising source separation approaches that take advantage of the simultaneous but differential projection of neural activity to multiple electrodes or sensors. Our approach is a combination and extension of existing multivariate source separation methods. We demonstrate that RESS performs well on both simulated and empirical data, and outperforms conventional SSEP analysis methods based on selecting electrodes with the strongest SSEP response, as well as several other linear spatial filters. We also discuss the potential confound of overfitting, whereby the filter captures noise in absence of a signal. Matlab scripts are available to replicate and extend our simulations and methods. We conclude with some practical advice for optimizing SSEP data analyses and interpreting the results. Copyright © 2016 Elsevier Inc. All rights reserved.
Public opinion about the source separation of municipal solid waste in Shanghai, China.
Zhang, Weiqian; Che, Yue; Yang, Kai; Ren, Xiangyu; Tai, Jun
2012-12-01
For decades the generation of municipal solid waste (MSW) in Shanghai has been increasing. Despite the long-time efforts aimed at MSW management (MSWM), the disposal of MSW achieves poor performance. Thus, a MSW minimisation plan for Shanghai was proposed in December 2010. In this study, direct face-to-face interviews and a structured questionnaire survey were used in four different Shanghai community types. We conducted an econometric analysis of the social factors that influence the willingness to pay for MSW separation and discussed the household waste characteristics, the daily waste generation and the current treatment of kitchen wastes. The results suggested that the respondents are environmentally aware of separation, but only practise minimal separation. Negative neighbour effects, confused classification of MSW, and mixed transportation and disposal are the dominant limitations of MSW source-separated collection. Most respondents are willing to pay for MSWM. Public support is influenced by household population, income and cost. The attitudes and behaviours of citizens are important for reducing the amount of MSW disposal by 50% per capita by 2020 (relative to 2010). Concerted efforts should be taken to enlarge pilot areas. In addition, the source separation of kitchen wastes should be promoted.
De Feo, Giovanni; Ferrara, Carmen; Finelli, Alessio; Grosso, Alberto
2017-12-07
The main aim of this study was to perform a Life cycle assessment study as well as an economic evaluation of the recovery of recyclable materials in a municipal solid waste management system. If citizens separate erroneously waste fractions, they produce both environmental and economic damages. The environmental and economic evaluation was performed for the case study of Nola (34.349 inhabitants) in Southern Italy, with a kerbside system that assured a source separation of 62% in 2014. The economic analysis provided a quantification of the economic benefits obtainable for the population in function of the achievable percentage of source separation. The comparison among the environmental performance of four considered scenarios showed that the higher the level of source separation was, the lower the overall impacts were. This occurred because, even if the impacts of the waste collection and transport increased, they were overcome by the avoided impacts of the recycling processes. Increasing the source separation by 1% could avoid the emission of 5 kg CO 2 eq. and 5 g PM10 for each single citizen. The economic and environmental indicators defined in this study provide simple and effective information useful for a wide-ranging audience in a behavioural change programme perspective.
NASA Astrophysics Data System (ADS)
Milej, Daniel; Janusek, Dariusz; Gerega, Anna; Wojtkiewicz, Stanislaw; Sawosz, Piotr; Treszczanowicz, Joanna; Weigl, Wojciech; Liebert, Adam
2015-10-01
The aim of the study was to determine optimal measurement conditions for assessment of brain perfusion with the use of optical contrast agent and time-resolved diffuse reflectometry in the near-infrared wavelength range. The source-detector separation at which the distribution of time of flights (DTOF) of photons provided useful information on the inflow of the contrast agent to the intracerebral brain tissue compartments was determined. Series of Monte Carlo simulations was performed in which the inflow and washout of the dye in extra- and intracerebral tissue compartments was modeled and the DTOFs were obtained at different source-detector separations. Furthermore, tests on diffuse phantoms were carried out using a time-resolved setup allowing the measurement of DTOFs at 16 source-detector separations. Finally, the setup was applied in experiments carried out on the heads of adult volunteers during intravenous injection of indocyanine green. Analysis of statistical moments of the measured DTOFs showed that the source-detector separation of 6 cm is recommended for monitoring of inflow of optical contrast to the intracerebral brain tissue compartments with the use of continuous wave reflectometry, whereas the separation of 4 cm is enough when the higher-order moments of DTOFs are available.
Determination of urine-derived odorous compounds in a source separation sanitation system.
Liu, Bianxia; Giannis, Apostolos; Chen, Ailu; Zhang, Jiefeng; Chang, Victor W C; Wang, Jing-Yuan
2017-02-01
Source separation sanitation systems have attracted more and more attention recently. However, separate urine collection and treatment could induce odor issues, especially in large scale application. In order to avoid such issues, it is necessary to monitor the odor related compounds that might be generated during urine storage. This study investigated the odorous compounds that emitted from source-separated human urine under different hydrolysis conditions. Batch experiments were conducted to investigate the effect of temperature, stale/fresh urine ratio and urine dilution on odor emissions. It was found that ammonia, dimethyl disulfide, allyl methyl sulfide and 4-heptanone were the main odorous compounds generated from human urine, with headspace concentrations hundreds of times higher than their respective odor thresholds. Furthermore, the high temperature accelerated urine hydrolysis and liquid-gas mass transfer, resulting a remarkable increase of odor emissions from the urine solution. The addition of stale urine enhanced urine hydrolysis and expedited odor emissions. On the contrary, diluted urine emitted less odorous compounds ascribed to reduced concentrations of odorant precursors. In addition, this study quantified the odor emissions and revealed the constraints of urine source separation in real-world applications. To address the odor issue, several control strategies are recommended for odor mitigation or elimination from an engineering perspective. Copyright © 2016. Published by Elsevier B.V.
Relative Water Uptake as a Criterion for the Design of Trickle Irrigation Systems
NASA Astrophysics Data System (ADS)
Communar, G.; Friedman, S. P.
2008-12-01
Previously derived analytical solutions to the 2- and 3-dimensional water flow problems describing trickle irrigation are not being widely used in practice because those formulations either ignore root water uptake or refer to it as a known input. In this lecture we are going to describe a new modeling approach and demonstrate its applicability for designing the geometry of trickle irrigation systems, namely the spacing between the emitters and drip lines. The major difference between our and previous modeling approaches is that we refer to the root water uptake as to the unknown solution of the problem and not as to a known input. We postulate that the solution to the steady-state water flow problem with a root sink that is acting under constant, maximum suction defines un upper bound to the relative water uptake (water use efficiency) in actual transient situations and propose to use it as a design criterion. Following previous derivations of analytical solutions we assume that the soil hydraulic conductivity increases exponentially with its matric head, which allows the linearization of the Richards equation, formulated in terms of the Kirchhoff matric flux potential. Since the transformed problem is linear, the relative water uptake for any given configuration of point or line sources and sinks can be calculated by superposition of the Green's functions of all relevant water sources and sinks. In addition to evaluating the relative water uptake, we also derived analytical expressions for the steam functions. The stream lines separating the water uptake zone from the percolating water provide insight to the dependence of the shape and extent of the actual rooting zone on the source- sink geometry and soil properties. A minimal number of just 3 system parameters: Gardner's (1958) alfa as a soil type quantifier and the depth and diameter of the pre-assumed active root zone are sufficient to characterize the interplay between capillary and gravitational effects on water flow and the competition between the processes of root water uptake and percolation. For accounting also for evaporation from the soil surface, when significant, another parameter is required, adopting the solution of Lomen and Warrick (1978).
Parental separation/divorce in childhood and partnership outcomes at age 30.
Fergusson, David M; McLeod, Geraldine F H; John Horwood, L
2014-04-01
Previous research has found that children exposed to separation/divorce may also experience relationship problems in adulthood. The aim of this investigation was to examine this issue in a birth cohort of over 900 New Zealand children studied to age 30. Data were gathered over the course of the Christchurch Health and Development Study (CHDS). The CHDS is a 30 year longitudinal study of a birth cohort of 1265 children born in Christchurch (NZ) in 1977. The data collected included the following: (a) timing and number of parental separations and divorces from birth to 15 years; (b) partnership outcomes (16-30 years) of the number of cohabiting/marriage partnerships; positive partner relations; negative partner relations; partner adjustment/conduct problems; and interpartner violence victimization and perpetration; and (c) potential covariate factors. Study findings showed the presence of significant associations between childhood parental separations/divorces and number of cohabiting/marriage partnerships (16-30 years) (p < .001), negative partner relations (p = .021), extent of partner adjustment/conduct problems (p < .001), and perpetration of interpartner violence (p = .018). Childhood parental separation/divorce explained less than 2.5% of the variance in partnership outcomes. These associations were explained statistically by a series of covariate factors associated with childhood parental separation/divorce including parental history of illicit drug use, childhood sexual abuse, childhood conduct problems (7-9 years), interparental conflict and violence, childhood physical punishment/maltreatment, family socio-economic status at the child's birth, and parental history of criminality. Tests of gender interaction showed that the effect of childhood parental separations/divorces may be the same for males and females. Analysis of the number of childhood parental separations/divorces experienced into three age groups (birth to 5, 5-10 years and 10-15 years) yielded similar results. These findings suggest that the general associations between childhood parental separation/divorce and partner relationships in adulthood reflect the consequences of various contextual factors that are associated with childhood parental separation. © 2013 The Authors. Journal of Child Psychology and Psychiatry © 2013 Association for Child and Adolescent Mental Health.
ERIC Educational Resources Information Center
Kim, Kyung-Sun; Sin, Sei-Ching Joanna
2007-01-01
A survey of undergraduate students examined how students' beliefs about their problem-solving styles and abilities (including avoidant style, confidence, and personal control in problem-solving) influenced their perception and selection of sources, as reflected in (1) perceived characteristics of sources, (2) source characteristics considered…
Role of diversity in ICA and IVA: theory and applications
NASA Astrophysics Data System (ADS)
Adalı, Tülay
2016-05-01
Independent component analysis (ICA) has been the most popular approach for solving the blind source separation problem. Starting from a simple linear mixing model and the assumption of statistical independence, ICA can recover a set of linearly-mixed sources to within a scaling and permutation ambiguity. It has been successfully applied to numerous data analysis problems in areas as diverse as biomedicine, communications, finance, geo- physics, and remote sensing. ICA can be achieved using different types of diversity—statistical property—and, can be posed to simultaneously account for multiple types of diversity such as higher-order-statistics, sample dependence, non-circularity, and nonstationarity. A recent generalization of ICA, independent vector analysis (IVA), generalizes ICA to multiple data sets and adds the use of one more type of diversity, statistical dependence across the data sets, for jointly achieving independent decomposition of multiple data sets. With the addition of each new diversity type, identification of a broader class of signals become possible, and in the case of IVA, this includes sources that are independent and identically distributed Gaussians. We review the fundamentals and properties of ICA and IVA when multiple types of diversity are taken into account, and then ask the question whether diversity plays an important role in practical applications as well. Examples from various domains are presented to demonstrate that in many scenarios it might be worthwhile to jointly account for multiple statistical properties. This paper is submitted in conjunction with the talk delivered for the "Unsupervised Learning and ICA Pioneer Award" at the 2016 SPIE Conference on Sensing and Analysis Technologies for Biomedical and Cognitive Applications.
Miniature multichannel biotelemeter system
NASA Technical Reports Server (NTRS)
Carraway, J. B.; Sumida, J. T. (Inventor)
1974-01-01
A miniature multichannel biotelemeter system is described. The system includes a transmitter where signals from different sources are sampled to produce a wavetrain of pulses. The transmitter also separates signals by sync pulses. The pulses amplitude modulate a radio frequency carrier which is received at a receiver unit. There the sync pulses are detected by a demultiplexer which routes the pulses from each different source to a separate output channel where the pulses are used to reconstruct the signals from the particular source.
Carbon footprint of urban source separation for nutrient recovery.
Kjerstadius, H; Bernstad Saraiva, A; Spångberg, J; Davidsson, Å
2017-07-15
Source separation systems for the management of domestic wastewater and food waste has been suggested as more sustainable sanitation systems for urban areas. The present study used an attributional life cycle assessment to investigate the carbon footprint and potential for nutrient recovery of two sanitation systems for a hypothetical urban area in Southern Sweden. The systems represented a typical Swedish conventional system and a possible source separation system with increased nutrient recovery. The assessment included the management chain from household collection, transport, treatment and final return of nutrients to agriculture or disposal of the residuals. The results for carbon footprint and nutrient recovery (phosphorus and nitrogen) concluded that the source separation system could increase nutrient recovery (0.30-0.38 kg P capita -1 year -1 and 3.10-3.28 kg N capita -1 year -1 ), while decreasing the carbon footprint (-24 to -58 kg CO 2 -eq. capita -1 year -1 ), compared to the conventional system. The nutrient recovery was increased by the use of struvite precipitation and ammonium stripping at the wastewater treatment plant. The carbon footprint decreased, mainly due to the increased biogas production, increased replacement of mineral fertilizer in agriculture and less emissions of nitrous oxide from wastewater treatment. In conclusion, the study showed that source separation systems could potentially be used to increase nutrient recovery from urban areas, while decreasing the climate impact. Copyright © 2017 Elsevier Ltd. All rights reserved.
Karim Ghani, Wan Azlina Wan Ab; Rusli, Iffah Farizan; Biak, Dayang Radiah Awang; Idris, Azni
2013-05-01
Tremendous increases in biodegradable (food waste) generation significantly impact the local authorities, who are responsible to manage, treat and dispose of this waste. The process of separation of food waste at its generation source is identified as effective means in reducing the amount food waste sent to landfill and can be reused as feedstock to downstream treatment processes namely composting or anaerobic digestion. However, these efforts will only succeed with positive attitudes and highly participations rate by the public towards the scheme. Thus, the social survey (using questionnaires) to analyse public's view and influencing factors towards participation in source separation of food waste in households based on the theory of planned behaviour technique (TPB) was performed in June and July 2011 among selected staff in Universiti Putra Malaysia, Serdang, Selangor. The survey demonstrates that the public has positive intention in participating provided the opportunities, facilities and knowledge on waste separation at source are adequately prepared by the respective local authorities. Furthermore, good moral values and situational factors such as storage convenience and collection times are also encouraged public's involvement and consequently, the participations rate. The findings from this study may provide useful indicator to the waste management authorities in Malaysia in identifying mechanisms for future development and implementation of food waste source separation activities in household programmes and communication campaign which advocate the use of these programmes. Copyright © 2012 Elsevier Ltd. All rights reserved.
Xu, Yihua; Pitot, Henry C
2006-03-01
In the studies of quantitative stereology of rat hepatocarcinogenesis, we have used image analysis technology (automatic particle analysis) to obtain data such as liver tissue area, size and location of altered hepatic focal lesions (AHF), and nuclei counts. These data are then used for three-dimensional estimation of AHF occurrence and nuclear labeling index analysis. These are important parameters for quantitative studies of carcinogenesis, for screening and classifying carcinogens, and for risk estimation. To take such measurements, structures or cells of interest should be separated from the other components based on the difference of color and density. Common background problems seen on the captured sample image such as uneven light illumination or color shading can cause severe problems in the measurement. Two application programs (BK_Correction and Pixel_Separator) have been developed to solve these problems. With BK_Correction, common background problems such as incorrect color temperature setting, color shading, and uneven light illumination background, can be corrected. With Pixel_Separator different types of objects can be separated from each other in relation to their color, such as seen with different colors in immunohistochemically stained slides. The resultant images of such objects separated from other components are then ready for particle analysis. Objects that have the same darkness but different colors can be accurately differentiated in a grayscale image analysis system after application of these programs.
Full-Scale Turbofan Engine Noise-Source Separation Using a Four-Signal Method
NASA Technical Reports Server (NTRS)
Hultgren, Lennart S.; Arechiga, Rene O.
2016-01-01
Contributions from the combustor to the overall propulsion noise of civilian transport aircraft are starting to become important due to turbofan design trends and expected advances in mitigation of other noise sources. During on-ground, static-engine acoustic tests, combustor noise is generally sub-dominant to other engine noise sources because of the absence of in-flight effects. Consequently, noise-source separation techniques are needed to extract combustor-noise information from the total noise signature in order to further progress. A novel four-signal source-separation method is applied to data from a static, full-scale engine test and compared to previous methods. The new method is, in a sense, a combination of two- and three-signal techniques and represents an attempt to alleviate some of the weaknesses of each of those approaches. This work is supported by the NASA Advanced Air Vehicles Program, Advanced Air Transport Technology Project, Aircraft Noise Reduction Subproject and the NASA Glenn Faculty Fellowship Program.
Interferometric superlocalization of two incoherent optical point sources.
Nair, Ranjith; Tsang, Mankei
2016-02-22
A novel interferometric method - SLIVER (Super Localization by Image inVERsion interferometry) - is proposed for estimating the separation of two incoherent point sources with a mean squared error that does not deteriorate as the sources are brought closer. The essential component of the interferometer is an image inversion device that inverts the field in the transverse plane about the optical axis, assumed to pass through the centroid of the sources. The performance of the device is analyzed using the Cramér-Rao bound applied to the statistics of spatially-unresolved photon counting using photon number-resolving and on-off detectors. The analysis is supported by Monte-Carlo simulations of the maximum likelihood estimator for the source separation, demonstrating the superlocalization effect for separations well below that set by the Rayleigh criterion. Simulations indicating the robustness of SLIVER to mismatch between the optical axis and the centroid are also presented. The results are valid for any imaging system with a circularly symmetric point-spread function.
Aittomäki, Akseli; Martikainen, Pekka; Laaksonen, Mikko; Lahelma, Eero; Rahkonen, Ossi
2012-10-01
Our aim was to find out whether the associations between health and both individual and household economic position reflected a causal effect on health of household affluence and consumption potential. We attempted to separate this effect from health-selection effects, in other words the potential effect of health on economic position, and from various effects related to occupational position and prestige that might correlate with the economic indicators. We made a distinction between individual labour-market advantage and household economic resources in order to reflect these theoretical definitions. Our aim was to test and compare two hypotheses: 1) low household economic resources lead to an increase in health problems later on, and 2) health problems are disadvantageous on the labour market, and consequently decrease the level of economic resources. We used prospective register data obtained from the databases of Statistics Finland and constituting an 11-per-cent random sample of the Finnish population in 1993-2006. Health problems were measured in terms of sickness allowance paid by the Finnish Social Insurance Institution, household economic resources in terms of household-equivalent disposable income and taxable wealth, and labour-market advantage in terms of individual taxable income and months of unemployment. We used structural equation models (n = 211,639) to examine the hypothesised causal pathways. Low household economic resources predicted future health problems, and health problems predicted future deterioration in labour-market advantage. The effect of economic resources on health problems was somewhat stronger. These results suggest that accumulated exposure to low economic resources leads to increasing health problems, and that this causal mechanism is a more significant source of persistent health inequalities than health problems that bring about a permanent decrease in economic resources. Copyright © 2012 Elsevier Ltd. All rights reserved.
Influence of the electrode gap separation on the pseudospark-sourced electron beam generation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, J., E-mail: junping.zhao@qq.com; State Key Laboratory of Electrical Insulation and Power Equipment, West Xianning Road, Xi'an 710049; Department of Physics, SUPA, University of Strathclyde, Glasgow, G4 0NG Scotland
Pseudospark-sourced electron beam is a self-focused intense electron beam which can propagate without any external focusing magnetic field. This electron beam can drive a beam-wave interaction directly or after being post-accelerated. It is especially suitable for terahertz radiation generation due to the ability of a pseudospark discharge to produce small size in the micron range and very high current density and bright electron beams. In this paper, a single-gap pseudospark discharge chamber has been built and tested with several electrode gap separations to explore the dependence of the pseudospark-sourced electron beam current on the discharge voltage and the electrode gapmore » separation. Experimental results show that the beam pulses have similar pulse width and delay time from the distinct drop of the applied voltage for smaller electrode gap separations but longer delay time for the largest gap separation used in the experiment. It has been found that the electron beam only starts to occur when the charging voltage is above a certain value, which is defined as the starting voltage of the electron beam. The starting voltage is different for different electrode gap separations and decreases with increasing electrode gap separation in our pseudospark discharge configuration. The electron beam current increases with the increasing discharge voltage following two tendencies. Under the same discharge voltage, the configuration with the larger electrode gap separation will generate higher electron beam current. When the discharge voltage is higher than 10 kV, the beam current generated at the electrode gap separation of 17.0 mm, is much higher than that generated at smaller gap separations. The ionization of the neutral gas in the main gap is inferred to contribute more to the current increase with increasing electrode gap separation.« less
Zhang, Yaguang; Jia, Dan; Sun, Wanqi; Yang, Xue; Zhang, Chuanbo; Zhao, Fanglong; Lu, Wenyu
2018-05-01
Sophorolipids (SLs) are biosurfactants with widespread applications. The yield and purity of SLs are two important factors to be considered during their commercial large-scale production. Notably, SL accumulation causes an increase in viscosity, decrease in dissolved oxygen and product inhibition in the fermentation medium. This inhibits the further production and purification of SLs. This describes the development of a novel integrated system for SL production using Candida albicans O-13-1. Semicontinuous fermentation was performed using a novel bioreactor with dual ventilation pipes and dual sieve-plates (DVDSB). SLs were separated and recovered using a newly designed two-stage separation system. After SL recovery, the fermentation broth containing residual glucose and oleic acid was recycled back into the bioreactor. This novel approach considerably alleviated the problem of product inhibition and accelerated the rate of substrate utilization. Production of SLs achieved was 477 g l -1 , while their productivity was 1.59 g l -1 h -1 . Purity of SLs improved by 23.3%, from 60% to 74%, using DVDSB with the separation system. The conversion rate of carbon source increased from 0.5 g g -1 (in the batch fermentation) to 0.6 g g -1 . These results indicated that the integrated system could improve the efficiency of production and purity of SLs. © 2017 The Authors. Microbial Biotechnology published by John Wiley & Sons Ltd and Society for Applied Microbiology.
Stormflow-hydrograph separation based on isotopes: the thrill is gone--what's next?
Burns, Douglas A.
2002-01-01
Beginning in the 1970s, the promise of a new method for separatingstormflow hydrographs using18O,2H, and3Hprovedanirresistibletemptation, and was a vast improvement over graphical separationand solute tracer methods that were prevalent at the time. Eventu-ally, hydrologists realized that this new method entailed a plethoraof assumptions about temporal and spatial homogeneity of isotopiccomposition (many of which were commonly violated). Nevertheless,hydrologists forged ahead with dozens of isotope-based hydrograph-separation studies that were published in the 1970s and 1980s.Hortonian overland flow was presumed dead. By the late 1980s,the new isotope-based hydrograph separation technique had movedinto adolescence, accompanied by typical adolescent problems suchas confusion and a search for identity. As experienced hydrologistscontinued to use the isotope technique to study stormflow hydrol-ogy in forested catchments in humid climates, their younger peersfollowed obligingly—again and again. Was Hortonian overland flowreally dead and forgotten, though? What about catchments in whichpeople live and work? And what about catchments in dry climatesand the tropics? How useful were study results when several of theassumptions about the homogeneity of source waters were commonlyviolated? What if two components could not explain the variation ofisotopic composition measured in the stream during stormflow? Andwhat about uncertainty? As with many new tools, once the initialshine wore off, the limitations of the method became a concern—oneof which was that isotope-based hydrograph separations alone couldnot reveal much about the flow paths by which water arrives at astream channel during storms.
Cost-effectiveness Analysis with Influence Diagrams.
Arias, M; Díez, F J
2015-01-01
Cost-effectiveness analysis (CEA) is used increasingly in medicine to determine whether the health benefit of an intervention is worth the economic cost. Decision trees, the standard decision modeling technique for non-temporal domains, can only perform CEA for very small problems. To develop a method for CEA in problems involving several dozen variables. We explain how to build influence diagrams (IDs) that explicitly represent cost and effectiveness. We propose an algorithm for evaluating cost-effectiveness IDs directly, i.e., without expanding an equivalent decision tree. The evaluation of an ID returns a set of intervals for the willingness to pay - separated by cost-effectiveness thresholds - and, for each interval, the cost, the effectiveness, and the optimal intervention. The algorithm that evaluates the ID directly is in general much more efficient than the brute-force method, which is in turn more efficient than the expansion of an equivalent decision tree. Using OpenMarkov, an open-source software tool that implements this algorithm, we have been able to perform CEAs on several IDs whose equivalent decision trees contain millions of branches. IDs can perform CEA on large problems that cannot be analyzed with decision trees.
Plasma Separation Process: Betacell (BCELL) code: User's manual. [Bipolar barrier junction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taherzadeh, M.
1987-11-13
The emergence of clearly defined applications for (small or large) amounts of long-life and reliable power sources has given the design and production of betavoltaic systems a new life. Moreover, because of the availability of the plasma separation program, (PSP) at TRW, it is now possible to separate the most desirable radioisotopes for betacell power generating devices. A computer code, named BCELL, has been developed to model the betavoltaic concept by utilizing the available up-to-date source/cell parameters. In this program, attempts have been made to determine the betacell energy device maximum efficiency, degradation due to the emitting source radiation andmore » source/cell lifetime power reduction processes. Additionally, comparison is made between the Schottky and PN junction devices for betacell battery design purposes. Certain computer code runs have been made to determine the JV distribution function and the upper limit of the betacell generated power for specified energy sources. A Ni beta emitting radioisotope was used for the energy source and certain semiconductors were used for the converter subsystem of the betacell system. Some results for a Promethium source are also given here for comparison. 16 refs.« less
A selection of giant radio sources from NVSS
Proctor, D. D.
2016-06-01
Results of the application of pattern-recognition techniques to the problem of identifying giant radio sources (GRSs) from the data in the NVSS catalog are presented, and issues affecting the process are explored. Decision-tree pattern-recognition software was applied to training-set source pairs developed from known NVSS large-angular-size radio galaxies. The full training set consisted of 51,195 source pairs, 48 of which were known GRSs for which each lobe was primarily represented by a single catalog component. The source pairs had a maximum separation ofmore » $$20^{\\prime} $$ and a minimum component area of 1.87 square arcmin at the 1.4 mJy level. The importance of comparing the resulting probability distributions of the training and application sets for cases of unknown class ratio is demonstrated. The probability of correctly ranking a randomly selected (GRS, non-GRS) pair from the best of the tested classifiers was determined to be 97.8 ± 1.5%. The best classifiers were applied to the over 870,000 candidate pairs from the entire catalog. Images of higher-ranked sources were visually screened, and a table of over 1600 candidates, including morphological annotation, is presented. These systems include doubles and triples, wide-angle tail and narrow-angle tail, S- or Z-shaped systems, and core-jets and resolved cores. In conclusion, while some resolved-lobe systems are recovered with this technique, generally it is expected that such systems would require a different approach.« less
Separability and dynamical symmetry of Quantum Dots
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, P.-M., E-mail: zhpm@impcas.ac.cn; Zou, L.-P., E-mail: zoulp@impcas.ac.cn; Horvathy, P.A., E-mail: horvathy@lmpt.univ-tours.fr
2014-02-15
The separability and Runge–Lenz-type dynamical symmetry of the internal dynamics of certain two-electron Quantum Dots, found by Simonović et al. (2003), are traced back to that of the perturbed Kepler problem. A large class of axially symmetric perturbing potentials which allow for separation in parabolic coordinates can easily be found. Apart from the 2:1 anisotropic harmonic trapping potential considered in Simonović and Nazmitdinov (2013), they include a constant electric field parallel to the magnetic field (Stark effect), the ring-shaped Hartmann potential, etc. The harmonic case is studied in detail. -- Highlights: • The separability of Quantum Dots is derived frommore » that of the perturbed Kepler problem. • Harmonic perturbation with 2:1 anisotropy is separable in parabolic coordinates. • The system has a conserved Runge–Lenz type quantity.« less
NASA Astrophysics Data System (ADS)
Plestenjak, Bor; Gheorghiu, Călin I.; Hochstenbach, Michiel E.
2015-10-01
In numerous science and engineering applications a partial differential equation has to be solved on some fairly regular domain that allows the use of the method of separation of variables. In several orthogonal coordinate systems separation of variables applied to the Helmholtz, Laplace, or Schrödinger equation leads to a multiparameter eigenvalue problem (MEP); important cases include Mathieu's system, Lamé's system, and a system of spheroidal wave functions. Although multiparameter approaches are exploited occasionally to solve such equations numerically, MEPs remain less well known, and the variety of available numerical methods is not wide. The classical approach of discretizing the equations using standard finite differences leads to algebraic MEPs with large matrices, which are difficult to solve efficiently. The aim of this paper is to change this perspective. We show that by combining spectral collocation methods and new efficient numerical methods for algebraic MEPs it is possible to solve such problems both very efficiently and accurately. We improve on several previous results available in the literature, and also present a MATLAB toolbox for solving a wide range of problems.
NASA Astrophysics Data System (ADS)
Lo, Y. T.; Yen, H. Y.
2012-04-01
Taiwan is located at a complex juncture between the Eurasian and Philippine Sea plates. The mountains in Taiwan are very young, formed as a result of the collision between an island arc system and the Asian continental margin. To separate sources of gravity field in depth, a method is suggested, based on upward and downward continuation. Both new methods are applied to isolate the contribution of the Moho interface to the total field and to find its 3D topography. At the first stage, we separate near surface and deeper sources. At the next stage, we isolate the effect of very deep sources. After subtracting this field from the total effect of deeper sources, we obtain the contribution of the Moho interface. We make inversion separately for the area. In this study, we use the detail gravity data around this area to investigate the reliable subsurface density structure. First, we combine with land and marine gravity data to obtain gravity anomaly. Second, considering the geology, tomography and other constrains, we simulate the 3D density structure. The main goal of our study is to understand the Moho topography and sediment-crustal boundary in Taiwan area. We expect that our result can consistent with previous studies.
Is plagioclase removal responsible for the negative Eu anomaly in the source regions of mare basalts
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shearer, C.K.; Papike, J.J.
1989-12-01
The nearly ubiquitous presence of a negative Eu anomaly in the mare basalts has been suggested to indicate prior separation and flotation of plagioclase from the basalt source region during its crystallization from a lunar magma ocean (LMO). Are there any mare basalts derived from a mantle source which did not experience prior plagioclase separation Crystal chemical rationale for REE substitution in pyroxene suggests that the combination of REE size and charge, M2 site characteristics of pyroxene, fO{sub 2}, magma chemistry, and temperature may account for the negative Eu anomaly in the source region of some types of primitive, lowmore » TiO{sub 2} mare basalts. This origin for the negative Eu anomaly does not preclude the possibility of the LMO as many mare basalts still require prior plagioclase crystallization and separation and/or hybridization involving a KREEP component.« less
Combined ICA-LORETA analysis of mismatch negativity.
Marco-Pallarés, J; Grau, C; Ruffini, G
2005-04-01
A major challenge for neuroscience is to map accurately the spatiotemporal patterns of activity of the large neuronal populations that are believed to underlie computing in the human brain. To study a specific example, we selected the mismatch negativity (MMN) brain wave (an event-related potential, ERP) because it gives an electrophysiological index of a "primitive intelligence" capable of detecting changes, even abstract ones, in a regular auditory pattern. ERPs have a temporal resolution of milliseconds but appear to result from mixed neuronal contributions whose spatial location is not fully understood. Thus, it is important to separate these sources in space and time. To tackle this problem, a two-step approach was designed combining the independent component analysis (ICA) and low-resolution tomography (LORETA) algorithms. Here we implement this approach to analyze the subsecond spatiotemporal dynamics of MMN cerebral sources using trial-by-trial experimental data. We show evidence that a cerebral computation mechanism underlies MMN. This mechanism is mediated by the orchestrated activity of several spatially distributed brain sources located in the temporal, frontal, and parietal areas, which activate at distinct time intervals and are grouped in six main statistically independent components.
NASA Technical Reports Server (NTRS)
Klink, D. M.
1977-01-01
The characteristics of five fuel loads burned within a metal lavatory were identified. In 15 of the tests the lavatory door remained closed for the 30-minute test period while in 15 additional tests the door was opened after the fire had developed. Upon completion of these tests the most severe source was selected for use in the baseline test. In the baseline test, the lavatory and adjacent panels, all of which were constructed of contemporary materials, were tested for a period of 1 hour. Thermal, environmental, and biological data were obtained for all fuel loads, door conditions, and the baseline test. All tests were conducted in a cabin fire simulator with separate ventilation of the cabin and lavatory representative of an inflight condition. The baseline test established that by using the most severe fuel source: (1) the exposed animal subject survived without complications; (2) no toxic levels of gas within the cabin were detected; (3) a propagating fire did not develop in adjacent structures; (4) the lavatory containing the fire remained structurally intact; (5) decomposition of portions of the lavatory did occur; and (6) cabin visibility would have presented a problem after 5 minutes.
A review on automated sorting of source-separated municipal solid waste for recycling.
Gundupalli, Sathish Paulraj; Hait, Subrata; Thakur, Atul
2017-02-01
A crucial prerequisite for recycling forming an integral part of municipal solid waste (MSW) management is sorting of useful materials from source-separated MSW. Researchers have been exploring automated sorting techniques to improve the overall efficiency of recycling process. This paper reviews recent advances in physical processes, sensors, and actuators used as well as control and autonomy related issues in the area of automated sorting and recycling of source-separated MSW. We believe that this paper will provide a comprehensive overview of the state of the art and will help future system designers in the area. In this paper, we also present research challenges in the field of automated waste sorting and recycling. Copyright © 2016 Elsevier Ltd. All rights reserved.
Mediation and Moderation of Divorce Effects on Children’s Behavior Problems
Weaver, Jennifer; Schofield, Thomas
2016-01-01
Using data from the NICHD Study of Early Child Care and Youth Development, we examined children’s internalizing and externalizing behavior problems from age 5 to age 15 in relation to whether they had experienced a parental divorce. Children from divorced families had more behavior problems compared with a propensity score-matched sample of children from intact families according to both teachers and mothers. They exhibited more internalizing and externalizing problems at the first assessment after the parents’ separation and at the last available assessment (age 11 for teacher reports, or age 15 for mother reports). Divorce also predicted both short-term and long-term rank-order increases in behavior problems. Associations between divorce and child behavior problems were moderated by family income (assessed before the divorce) such that children from families with higher incomes prior to the separation had fewer internalizing problems than children from families with lower incomes prior to the separation. Higher levels of pre-divorce maternal sensitivity and child IQ also functioned as protective factors for children of divorce. Mediation analyses showed that children were more likely to exhibit behavior problems after the divorce if their post-divorce home environment was less supportive and stimulating, their mother was less sensitive and more depressed, and their household income was lower. We discuss avenues for intervention, particularly efforts to improve the quality of home environments in divorced families. PMID:25419913
Redundant speed control for brushless Hall effect motor
NASA Technical Reports Server (NTRS)
Nola, F. J. (Inventor)
1973-01-01
A speed control system for a brushless Hall effect device equipped direct current (D.C.) motor is described. Separate windings of the motor are powered by separate speed responsive power sources. A change in speed, upward or downward, because of the failure of a component of one of the power sources results in a corrective signal being generated in the other power source to supply an appropriate power level and polarity to one winding to cause the motor to be corrected in speed.
Direction of arrival estimation using blind separation of sources
NASA Astrophysics Data System (ADS)
Hirari, Mehrez; Hayakawa, Masashi
1999-05-01
The estimation of direction of arrival (DOA) and polarization of an incident electromagnetic (EM) wave is of great importance in many applications. In this paper we propose a new approach for the estimation of DOA for polarized EM waves using blind separation of sources. In this approach we use a vector sensor, a sensor whose output is a complete set of the EM field components of the irradiating wave, and we reconstruct the waveforms of all the original signals that is, all the EM components of the sources' fields. From the waveform of each source we calculate its amplitude and phase and consequently calculate its DOA and polarization using the field analysis method. The separation of sources is conducted iteratively using a recurrent Hopfield-like single-layer neural network. The simulation results for two sources have been investigated. We have considered coherent and incoherent sources and also the case of varying DOAs vis-ā-vis the sensor and a varying polarization. These are cases seldom treated by other approaches even though they exist in real-world applications. With the proposed method we have obtained almost on-time tracking for the DOA and polarization of any incident sources with a significant reduction of both memory and computation costs.
Nanotechnology, resources, and pollution control
NASA Astrophysics Data System (ADS)
Gillett, Stephen L.
1996-09-01
The separation of different kinds of atoms or molecules from each other is a fundamental technological problem. Current techniques of resource extraction, which use the ancient paradigm of the differential partitioning of elements into coexisting phases, are simple but extremely wasteful and require feedstocks (`ores') that are already anomalously enriched. This is impractical for pollution control and desalination, which require extraction of low concentrations; instead, atomistic separation, typically by differential motion through semipermeable membranes, is used. The present application of such membranes is seriously limited, however, mostly because of limitations in their fabrication by conventional bulk techniques. The capabilities of biological systems, such as vertebrate kidneys, are vastly better, largely because they are intrinsically structured at a molecular scale. Nanofabrication of semipermeable membranes promises capabilities on the order of those of biological systems, and this in turn could provide much financial incentive for the development of molecular assemblers, as well established markets exist already. Continued incentives would exist, moreover, as markets expanded with decreasing costs, leading to such further applications as remediation of polluted sites, cheap desalination, and resource extraction from very low-grade sources.
NASA Technical Reports Server (NTRS)
Hartfield, Roy, Jr.
1996-01-01
Raman scattering is an inelastic molecular scattering process in which incident radiation is reemitted at a fixed change in frequency. Raman spectroscopy can be used to measure the number density and temperature of the irradiated species. The strength of the Raman signal is inversely proportional to the wavelength raised to the fourth power. Consequently, high signal to noise ratios are obtained by using ultraviolet (UV) excitation sources. Using UV sources for Raman Spectroscopy in flames is complicated by the fact that some of the primary constituents in hydrogen-oxygen combustion absorb and reemit light in the UV and these fluorescence processes interfere with the Raman signals. This problem has been handled in atmospheric pressure flames in some instances by using a narrowband tunable excimer laser as a source. This allows for detuning from absorption transitions and the elimination of interfering fluorescence signals at the Raman wavelengths. This approach works well in the atmospheric pressure flame; however, it has two important disadvantages. First, injection-locked narrowband tunable excimer lasers are very expensive. More importantly, however, is the fact that at the high pressures characteristic of rocket engine combustion chambers, the absorption transitions are broadened making it difficult to tune to a spectral location at which substantial absorption would not occur. The approach taken in this work is to separate the Raman signal from the fluorescence background by taking advantage of the fact that Raman signal has nonisotropic polarization characteristics while the fluorescence signals are unpolarized. Specifically, for scattering at right angles to the excitation beam path, the Raman signal is completely polarized. The Raman signal is separated from the fluorescence background by collecting both horizontally and vertically polarized signals separately. One of the polarizations has both the Raman signal and the fluorescence background while the other has only the fluorescence signal. The Raman scatter is the difference between the signals. By choosing an appropriate optical setup, both signals can be obtained simultaneously with the same monochromator; hence, time resolved measurements are possible using this approach.
Pervaporative stripping of acetone, butanol and ethanol to improve ABE fermentation.
Jitesh, K; Pangarkar, V G; Niranjan, K
2000-01-01
Acetone-butanol-ethanol fermentation by anaerobic bacterium C. acetobutylicum is a potential source for feedstock chemicals. The problem of product induced inhibition makes this fermentation economically infeasible. Pervaporation is studied as an effective separation technique to remove the toxic inhibitory products. Various membranes like Styrene Butadiene Rubber (SBR), Ethylene Propylene Diene Rubber (EPDM), plain Poly Dimethyl Siloxane (PDMS) and silicalite filled PDMS were studied for the removal of acetone, butanol and ethanol, from binary aqueous mixtures and from a quaternary mixture. It was found that the overall performance of PDMS filled with 15% w/w of silicalite was the best for removal of butanol in binary mixture study. SBR performance was best for the quaternary mixture studied.
Sub-nanometer periodic nonlinearity error in absolute distance interferometers
NASA Astrophysics Data System (ADS)
Yang, Hongxing; Huang, Kaiqi; Hu, Pengcheng; Zhu, Pengfei; Tan, Jiubin; Fan, Zhigang
2015-05-01
Periodic nonlinearity which can result in error in nanometer scale has become a main problem limiting the absolute distance measurement accuracy. In order to eliminate this error, a new integrated interferometer with non-polarizing beam splitter is developed. This leads to disappearing of the frequency and/or polarization mixing. Furthermore, a strict requirement on the laser source polarization is highly reduced. By combining retro-reflector and angel prism, reference and measuring beams can be spatially separated, and therefore, their optical paths are not overlapped. So, the main cause of the periodic nonlinearity error, i.e., the frequency and/or polarization mixing and leakage of beam, is eliminated. Experimental results indicate that the periodic phase error is kept within 0.0018°.
Nonstationary signal analysis in episodic memory retrieval
NASA Astrophysics Data System (ADS)
Ku, Y. G.; Kawasumi, Masashi; Saito, Masao
2004-04-01
The problem of blind source separation from a mixture that has nonstationarity can be seen in signal processing, speech processing, spectral analysis and so on. This study analyzed EEG signal during episodic memory retrieval using ICA and TVAR. This paper proposes a method which combines ICA and TVAR. The signal from the brain not only exhibits the nonstationary behavior, but also contain artifacts. EEG data at the frontal lobe (F3) from the scalp is collected during the episodic memory retrieval task. The method is applied to EEG data for analysis. The artifact (eye movement) is removed by ICA, and a single burst (around 6Hz) is obtained by TVAR, suggesting that the single burst is related to the brain activity during the episodic memory retrieval.
NASA Astrophysics Data System (ADS)
Mohammad-Djafari, Ali
2015-01-01
The main object of this tutorial article is first to review the main inference tools using Bayesian approach, Entropy, Information theory and their corresponding geometries. This review is focused mainly on the ways these tools have been used in data, signal and image processing. After a short introduction of the different quantities related to the Bayes rule, the entropy and the Maximum Entropy Principle (MEP), relative entropy and the Kullback-Leibler divergence, Fisher information, we will study their use in different fields of data and signal processing such as: entropy in source separation, Fisher information in model order selection, different Maximum Entropy based methods in time series spectral estimation and finally, general linear inverse problems.
Recent progress in multi-electrode spike sorting methods
Lefebvre, Baptiste; Yger, Pierre; Marre, Olivier
2017-01-01
In recent years, arrays of extracellular electrodes have been developed and manufactured to record simultaneously from hundreds of electrodes packed with a high density. These recordings should allow neuroscientists to reconstruct the individual activity of the neurons spiking in the vicinity of these electrodes, with the help of signal processing algorithms. Algorithms need to solve a source separation problem, also known as spike sorting. However, these new devices challenge the classical way to do spike sorting. Here we review different methods that have been developed to sort spikes from these large-scale recordings. We describe the common properties of these algorithms, as well as their main differences. Finally, we outline the issues that remain to be solved by future spike sorting algorithms. PMID:28263793
Estimation of tunnel blockage from wall pressure signatures: A review and data correlation
NASA Technical Reports Server (NTRS)
Hackett, J. E.; Wilsden, D. J.; Lilley, D. E.
1979-01-01
A method is described for estimating low speed wind tunnel blockage, including model volume, bubble separation and viscous wake effects. A tunnel-centerline, source/sink distribution is derived from measured wall pressure signatures using fast algorithms to solve the inverse problem in three dimensions. Blockage may then be computed throughout the test volume. Correlations using scaled models or tests in two tunnels were made in all cases. In many cases model reference area exceeded 10% of the tunnel cross-sectional area. Good correlations were obtained regarding model surface pressures, lift drag and pitching moment. It is shown that blockage-induced velocity variations across the test section are relatively unimportant but axial gradients should be considered when model size is determined.
Stage Separation CFD Tool Development and Evaluation
NASA Technical Reports Server (NTRS)
Droege, Alan; Gomez, Reynaldo; Wang, Ten-See
2002-01-01
This viewgraph presentation evaluates CFD (Computational Fluid Dynamics) tools for solving stage separation problems. The demonstration and validation of the tools is for a second generation RLV (Reusable Launch Vehicle) stage separation. The flow solvers are: Cart3D; Overflow/Overflow-D; Unic.
PDQ-8 reference manual (LWBR development program)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pfiefer, C J; Spitz, C J
1978-05-01
The PDQ-8 program is designed to solve the neutron diffusion, depletion problem in one, two, or three dimensions on the CDC-6600 and CDC-7600 computers. The three dimensional spatial calculation may be either explicit or discontinuous trial function synthesis. Up to five lethargy groups are permitted. The fast group treatment may be simplified P(3), and the thermal neutrons may be represented by a single group or a pair of overlapping groups. Adjoint, fixed source, one iteration, additive fixed source, eigenvalue, and boundary value calculations may be performed. The HARMONY system is used for cross section variation and generalized depletion chain solutions.more » The depletion is a combination gross block depletion for all nuclides as well as a fine block depletion for a specified subset of the nuclides. The geometries available include rectangular, cylindrical, spherical, hexagonal, and a very general quadrilateral geometry with diagonal interfaces. All geometries allow variable mesh in all dimensions. Various control searches as well as temperature and xenon feedbacks are provided. The synthesis spatial solution time is dependent on the number of trial functions used and the number of gross blocks. The PDQ-8 program is used at Bettis on a production basis for solving diffusion--depletion problems. The report describes the various features of the program and then separately describes the input required to utilize these features.« less
Louis, A. K.
2006-01-01
Many algorithms applied in inverse scattering problems use source-field systems instead of the direct computation of the unknown scatterer. It is well known that the resulting source problem does not have a unique solution, since certain parts of the source totally vanish outside of the reconstruction area. This paper provides for the two-dimensional case special sets of functions, which include all radiating and all nonradiating parts of the source. These sets are used to solve an acoustic inverse problem in two steps. The problem under discussion consists of determining an inhomogeneous obstacle supported in a part of a disc, from data, known for a subset of a two-dimensional circle. In a first step, the radiating parts are computed by solving a linear problem. The second step is nonlinear and consists of determining the nonradiating parts. PMID:23165060
Stephens, Cristina; Westmaas, J Lee; Kim, Jihye; Cannady, Rachel; Stein, Kevin
2016-10-01
Research suggests that a cancer diagnosis predicts marital dissolution more strongly for women survivors than men, but there is a paucity of research on potential processes underlying this vulnerability. The present cross-sectional study examined whether specific cancer-related problems were associated with the odds of relationship breakup following diagnosis and whether these relationships differed between male and female cancer survivors. A national cross-sectional quality of life study assessed self-reported cancer-related problems and relationship change among survivors who were either 2, 6, or 10 years post-diagnosis (n = 6099). Bivariate analyses indicated that cancer-related problems (e.g., emotional distress) were greater for divorced/separated survivors compared to those with intact relationships and were greater for women versus men. Logistic regressions indicated that for both male and female survivors, lower income, younger age, and longer time since diagnosis were associated with greater odds of divorce or separation after diagnosis (ORs > 2.14, p < .01). For women only, greater emotional distress (OR = 1.14, p < 0.01) and employment and financial problems (OR = 1.23, p < 0.0001) were associated with greater odds of post-diagnosis divorce or separation. For men only, fear of cancer recurrence was associated with greater odds of divorce or separation (OR = 1.32, p < 0.001). Female and male survivors differed in the extent to which emotional or financial/employment problems attributed to the cancer diagnosis were associated with the likelihood of reporting relationship dissolution. Although directions of causality could not be ascertained, results suggest the possibility that helping male and female cancer survivors cope with specific cancer-related problems may benefit the quality and stability of their relationships with significant others following diagnosis.
NASA Astrophysics Data System (ADS)
Kryanev, A. V.; Ivanov, V. V.; Romanova, A. O.; Sevastyanov, L. A.; Udumyan, D. K.
2018-03-01
This paper considers the problem of separating the trend and the chaotic component of chaotic time series in the absence of information on the characteristics of the chaotic component. Such a problem arises in nuclear physics, biomedicine, and many other applied fields. The scheme has two stages. At the first stage, smoothing linear splines with different values of smoothing parameter are used to separate the "trend component." At the second stage, the method of least squares is used to find the unknown variance σ2 of the noise component.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vuchot, L.; Ginocchio, A. et al.
1959-10-31
As uranium ores, like most other ores, are not definite substances which can be treated directly for the production of the metal, the ores must be concentrated. The common physical processes used for all ores, such as sieving, gravimetric separation, flotation, electromagnetic separation, and electrostatic separation, are applicable to the beneficiation of uranium. The radioactivity of uranium ores has led to a radiometric method for the concentration. This method is described in detail. As an example, the preconcentration of Forez ores is discussed. (J.S.R.)
Using nonlocal means to separate cardiac and respiration sounds
NASA Astrophysics Data System (ADS)
Rudnitskii, A. G.
2014-11-01
The paper presents the results of applying nonlocal means (NLMs) approach in the problem of separating respiration and cardiac sounds in a signal recorded on a human chest wall. The performance of the algorithm was tested both by simulated and real signals. As a quantitative efficiency measure of NLM filtration, the angle of divergence between isolated and reference signal was used. It is shown that for a wide range of signal-to-noise ratios, the algorithm makes it possible to efficiently solve this problem of separating cardiac and respiration sounds in the sum signal recorded on a human chest wall.
NASA Astrophysics Data System (ADS)
Regel, L. L.; Vedernikov, A. A.; Queeckers, P.; Legros, J.-C.
1991-12-01
The problem of the separation of crystals from their feeding solutions and their conservation at the end of the crystallization under microgravity is investigated. The goal to be reached is to propose an efficient and simple system. This method has to be applicable for an automatic separation on board a spacecraft, without using a centrifuge. The injection of an immiscible and inert liquid into the cell is proposed to solve the problem. The results of numerical modeling, earth simulation tests and experiments under short durations of weightlessness (using aircraft parabolic flights) are described.
Short separation channel location impacts the performance of short channel regression in NIRS
Gagnon, Louis; Cooper, Robert J.; Yücel, Meryem A.; Perdue, Katherine L.; Greve, Douglas N.; Boas, David A.
2011-01-01
Near-Infrared Spectroscopy (NIRS) allows the recovery of cortical oxy-and deoxyhemoglobin changes associated with evoked brain activity. NIRS is a back-reflection measurement making it very sensitive to the superficial layers of the head, i.e. the skin and the skull, where systemic interference occurs. As a result, the NIRS signal is strongly contaminated with systemic interference of superficial origin. A recent approach to overcome this problem has been the use of additional short source-detector separation optodes as regressors. Since these additional measurements are mainly sensitive to superficial layers in adult humans, they can be used to remove the systemic interference present in longer separation measurements, improving the recovery of the cortical hemodynamic response function (HRF). One question that remains to answer is whether or not a short separation measurement is required in close proximity to each long separation NIRS channel. Here, we show that the systemic interference occurring in the superficial layers of the human head is inhomogeneous across the surface of the scalp. As a result, the improvement obtained by using a short separation optode decreases as the relative distance between the short and the long measurement is increased. NIRS data was acquired on 6 human subjects both at rest and during a motor task consisting of finger tapping. The effect of distance between the short and the long channel was first quantified by recovering a synthetic hemodynamic response added over the resting-state data. The effect was also observed in the functional data collected during the finger tapping task. Together, these results suggest that the short separation measurement must be located as close as 1.5 cm from the standard NIRS channel in order to provide an improvement which is of practical use. In this case, the improvement in Contrast-to-Noise Ratio (CNR) compared to a standard General Linear Model (GLM) procedure without using any small separation optode reached 50 % for HbO and 100 % for HbR. Using small separations located farther than 2 cm away resulted in mild or negligible improvements only. PMID:21945793
Synthesis and characterization of the removal of organic pollutants in effluents.
Bakayoko, Moussa; Kalakodio, Loissi; Kalagodio, Adiara; Abo, Bodjui Olivier; Muhoza, Jean Pierre; Ismaila, El Moctar
2018-06-27
The use of a large number of organic pollutants results in the accumulation of effluents at the places of production and the environment. These substances are, therefore, dangerous for living organisms and can cause heavy environmental damage. Hence, to cure these problems certain methods were used for the elimination of organic effluents. Indeed, the methods of elimination through magnetic adsorption and/or separation prove to be effective in the treatment of certain wastes, but the effectiveness of each one of these methods depends on several characteristics and also present limitations according to the pollutants they adsorb. This review examines on the one hand the capacity of certain elements of these methods in the elimination of certain pollutants and on the other hand the advantages and limits of these methods. Elements like biochars, biosorbents and composite materials are used due to their very strong porosity which makes it possible for them to develop an important contact surface with the external medium, at low costs, and the possibility of producing them from renewable sources. The latter still run up however against the problems of formation of mud and regeneration. Depollution by magnetic separation is also used due to its capacity to mitigate the disadvantages of certain methods which generally lead to the formation of mud and overcoming also the difficulties like obtaining an active material and at the same time being able to fix the pollutants present in the effluents to treat and sensitize them to external magnetic fields.
NASA Astrophysics Data System (ADS)
Han, Guang; Liu, Jin; Liu, Rong; Xu, Kexin
2016-10-01
Position-based reference measurement method is taken as one of the most promising method in non-invasive measurement of blood glucose based on spectroscopic methodology. Selecting an appropriate source-detector separation as the reference position is important for deducting the influence of background change and reducing the loss of useful signals. Our group proposed a special source-detector separation named floating-reference position where the signal contains only background change, that is to say, the signal at this source-detector separation is uncorrelated with glucose concentration. The existence of floating-reference position has been verified in a three layer skin by Monte Carlo simulation and in the in vitro experiment. But it is difficult to verify the existence of floating-reference position on the human body because the interference is more complex during in vivo experiment. Aiming at this situation, this paper studies the determination of the best reference position on human body by collecting signals at several source-detector separations on the palm and measuring the true blood glucose levels during oral glucose tolerance test (OGTT) experiments of 3 volunteers. Partial least square (PLS) calibration model is established between the signals at every source-detector separation and its corresponding blood glucose levels. The results shows that the correlation coefficient (R) between 1.32 mm to 1.88 mm is lowest and they can be used as reference for background correction. The signal of this special position is important for improving the accuracy of near-infrared non-invasive blood glucose measurement.
Selective Listening Point Audio Based on Blind Signal Separation and Stereophonic Technology
NASA Astrophysics Data System (ADS)
Niwa, Kenta; Nishino, Takanori; Takeda, Kazuya
A sound field reproduction method is proposed that uses blind source separation and a head-related transfer function. In the proposed system, multichannel acoustic signals captured at distant microphones are decomposed to a set of location/signal pairs of virtual sound sources based on frequency-domain independent component analysis. After estimating the locations and the signals of the virtual sources by convolving the controlled acoustic transfer functions with each signal, the spatial sound is constructed at the selected point. In experiments, a sound field made by six sound sources is captured using 48 distant microphones and decomposed into sets of virtual sound sources. Since subjective evaluation shows no significant difference between natural and reconstructed sound when six virtual sources and are used, the effectiveness of the decomposing algorithm as well as the virtual source representation are confirmed.
Coma cluster ultradiffuse galaxies are not standard radio galaxies
NASA Astrophysics Data System (ADS)
Struble, Mitchell F.
2018-02-01
Matching members in the Coma cluster catalogue of ultradiffuse galaxies (UDGs) from SUBARU imaging with a very deep radio continuum survey source catalogue of the cluster using the Karl G. Jansky Very Large Array (VLA) within a rectangular region of ∼1.19 deg2 centred on the cluster core reveals matches consistent with random. An overlapping set of 470 UDGs and 696 VLA radio sources in this rectangular area finds 33 matches within a separation of 25 arcsec; dividing the sample into bins with separations bounded by 5, 10, 20 and 25 arcsec finds 1, 4, 17 and 11 matches. An analytical model estimate, based on the Poisson probability distribution, of the number of randomly expected matches within these same separation bounds is 1.7, 4.9, 19.4 and 14.2, each, respectively, consistent with the 95 per cent Poisson confidence intervals of the observed values. Dividing the data into five clustercentric annuli of 0.1° and into the four separation bins, finds the same result. This random match of UDGs with VLA sources implies that UDGs are not radio galaxies by the standard definition. Those VLA sources having integrated flux >1 mJy at 1.4 GHz in Miller, Hornschemeier and Mobasher without SDSS galaxy matches are consistent with the known surface density of background radio sources. We briefly explore the possibility that some unresolved VLA sources near UDGs could be young, compact, bright, supernova remnants of Type Ia events, possibly in the intracluster volume.
Decentralized modal identification using sparse blind source separation
NASA Astrophysics Data System (ADS)
Sadhu, A.; Hazra, B.; Narasimhan, S.; Pandey, M. D.
2011-12-01
Popular ambient vibration-based system identification methods process information collected from a dense array of sensors centrally to yield the modal properties. In such methods, the need for a centralized processing unit capable of satisfying large memory and processing demands is unavoidable. With the advent of wireless smart sensor networks, it is now possible to process information locally at the sensor level, instead. The information at the individual sensor level can then be concatenated to obtain the global structure characteristics. A novel decentralized algorithm based on wavelet transforms to infer global structure mode information using measurements obtained using a small group of sensors at a time is proposed in this paper. The focus of the paper is on algorithmic development, while the actual hardware and software implementation is not pursued here. The problem of identification is cast within the framework of under-determined blind source separation invoking transformations of measurements to the time-frequency domain resulting in a sparse representation. The partial mode shape coefficients so identified are then combined to yield complete modal information. The transformations are undertaken using stationary wavelet packet transform (SWPT), yielding a sparse representation in the wavelet domain. Principal component analysis (PCA) is then performed on the resulting wavelet coefficients, yielding the partial mixing matrix coefficients from a few measurement channels at a time. This process is repeated using measurements obtained from multiple sensor groups, and the results so obtained from each group are concatenated to obtain the global modal characteristics of the structure.
Darnaude, Audrey M.
2016-01-01
Background Mixture models (MM) can be used to describe mixed stocks considering three sets of parameters: the total number of contributing sources, their chemical baseline signatures and their mixing proportions. When all nursery sources have been previously identified and sampled for juvenile fish to produce baseline nursery-signatures, mixing proportions are the only unknown set of parameters to be estimated from the mixed-stock data. Otherwise, the number of sources, as well as some/all nursery-signatures may need to be also estimated from the mixed-stock data. Our goal was to assess bias and uncertainty in these MM parameters when estimated using unconditional maximum likelihood approaches (ML-MM), under several incomplete sampling and nursery-signature separation scenarios. Methods We used a comprehensive dataset containing otolith elemental signatures of 301 juvenile Sparus aurata, sampled in three contrasting years (2008, 2010, 2011), from four distinct nursery habitats. (Mediterranean lagoons) Artificial nursery-source and mixed-stock datasets were produced considering: five different sampling scenarios where 0–4 lagoons were excluded from the nursery-source dataset and six nursery-signature separation scenarios that simulated data separated 0.5, 1.5, 2.5, 3.5, 4.5 and 5.5 standard deviations among nursery-signature centroids. Bias (BI) and uncertainty (SE) were computed to assess reliability for each of the three sets of MM parameters. Results Both bias and uncertainty in mixing proportion estimates were low (BI ≤ 0.14, SE ≤ 0.06) when all nursery-sources were sampled but exhibited large variability among cohorts and increased with the number of non-sampled sources up to BI = 0.24 and SE = 0.11. Bias and variability in baseline signature estimates also increased with the number of non-sampled sources, but tended to be less biased, and more uncertain than mixing proportion ones, across all sampling scenarios (BI < 0.13, SE < 0.29). Increasing separation among nursery signatures improved reliability of mixing proportion estimates, but lead to non-linear responses in baseline signature parameters. Low uncertainty, but a consistent underestimation bias affected the estimated number of nursery sources, across all incomplete sampling scenarios. Discussion ML-MM produced reliable estimates of mixing proportions and nursery-signatures under an important range of incomplete sampling and nursery-signature separation scenarios. This method failed, however, in estimating the true number of nursery sources, reflecting a pervasive issue affecting mixture models, within and beyond the ML framework. Large differences in bias and uncertainty found among cohorts were linked to differences in separation of chemical signatures among nursery habitats. Simulation approaches, such as those presented here, could be useful to evaluate sensitivity of MM results to separation and variability in nursery-signatures for other species, habitats or cohorts. PMID:27761305
Nonlinear programming for classification problems in machine learning
NASA Astrophysics Data System (ADS)
Astorino, Annabella; Fuduli, Antonio; Gaudioso, Manlio
2016-10-01
We survey some nonlinear models for classification problems arising in machine learning. In the last years this field has become more and more relevant due to a lot of practical applications, such as text and web classification, object recognition in machine vision, gene expression profile analysis, DNA and protein analysis, medical diagnosis, customer profiling etc. Classification deals with separation of sets by means of appropriate separation surfaces, which is generally obtained by solving a numerical optimization model. While linear separability is the basis of the most popular approach to classification, the Support Vector Machine (SVM), in the recent years using nonlinear separating surfaces has received some attention. The objective of this work is to recall some of such proposals, mainly in terms of the numerical optimization models. In particular we tackle the polyhedral, ellipsoidal, spherical and conical separation approaches and, for some of them, we also consider the semisupervised versions.
Transformational and derivational strategies in analogical problem solving.
Schelhorn, Sven-Eric; Griego, Jacqueline; Schmid, Ute
2007-03-01
Analogical problem solving is mostly described as transfer of a source solution to a target problem based on the structural correspondences (mapping) between source and target. Derivational analogy (Carbonell, Machine learning: an artificial intelligence approach Los Altos. Morgan Kaufmann, 1986) proposes an alternative view: a target problem is solved by replaying a remembered problem-solving episode. Thus, the experience with the source problem is used to guide the search for the target solution by applying the same solution technique rather than by transferring the complete solution. We report an empirical study using the path finding problems presented in Novick and Hmelo (J Exp Psychol Learn Mem Cogn 20:1296-1321, 1994) as material. We show that both transformational and derivational analogy are problem-solving strategies realized by human problem solvers. Which strategy is evoked in a given problem-solving context depends on the constraints guiding object-to-object mapping between source and target problem. Specifically, if constraints facilitating mapping are available, subjects are more likely to employ a transformational strategy, otherwise they are more likely to use a derivational strategy.
UAS Integration into the NAS: Unmanned Aircraft System (UAS) Delegation of Separation
NASA Technical Reports Server (NTRS)
Fern, Lisa Carolynn; Kenny, Caitlin Ailis
2012-01-01
FAA Modernization and Reform Act of 2012 mandates UAS integration in the NAS by 2015. Operators must be able to safely maneuver UAS to maintain separation and collision avoidance. Delegated Separation is defined as the transfer of responsibility for maintaining separation between aircraft or vehicles from the air navigation service provider to the relevant flight operator, and will likely begin in sparsely trafficked areas before moving to more heavily populated airspace. As UAS operate primarily in areas with lower traffic density and perform maneuvers routinely that are currently managed through special handling, they have the advantage of becoming an early adopter of delegated separation. This experiment will examine if UAS are capable of performing delegated separation in 5 nm horizontal and 1000 ft vertical distances under two delegation conditions. In Extended Delegation, ATC are in charge of identifying problems and delegating to pilot identification and implementation of the solution and monitoring. In Full Delegation, the pilots are responsible for all tasks related to separation assurance: identification of problems and solutions, implementation and monitoring.
Spatiotemporal signal space separation method for rejecting nearby interference in MEG measurements
NASA Astrophysics Data System (ADS)
Taulu, S.; Simola, J.
2006-04-01
Limitations of traditional magnetoencephalography (MEG) exclude some important patient groups from MEG examinations, such as epilepsy patients with a vagus nerve stimulator, patients with magnetic particles on the head or having magnetic dental materials that cause severe movement-related artefact signals. Conventional interference rejection methods are not able to remove the artefacts originating this close to the MEG sensor array. For example, the reference array method is unable to suppress interference generated by sources closer to the sensors than the reference array, about 20-40 cm. The spatiotemporal signal space separation method proposed in this paper recognizes and removes both external interference and the artefacts produced by these nearby sources, even on the scalp. First, the basic separation into brain-related and external interference signals is accomplished with signal space separation based on sensor geometry and Maxwell's equations only. After this, the artefacts from nearby sources are extracted by a simple statistical analysis in the time domain, and projected out. Practical examples with artificial current dipoles and interference sources as well as data from real patients demonstrate that the method removes the artefacts without altering the field patterns of the brain signals.
NASA Technical Reports Server (NTRS)
Gabrielsen, R. E.; Karel, S.
1975-01-01
An algorithm for solving the nonlinear stationary Navier-Stokes problem is developed. Explicit error estimates are given. This mathematical technique is potentially adaptable to the separation problem.
The crack and wedging problem for an orthotropic strip
NASA Technical Reports Server (NTRS)
Cinar, A.; Erdogan, F.
1982-01-01
The plane elasticity problem for an orthotropic strip containing a crack parallel to its boundaries is considered. The problem is formulated under general mixed mode loading conditions. The stress intensity factors depend on two dimensionless orthotropic constants only. For the crack problem the results are given for a single crack and two collinear cracks. The calculated results show that of the two orthotropic constants the influence of the stiffness ratio on the stress intensity factors is much more significant than that of the shear parameter. The problem of loading the strip by a rigid rectangular lengths continuous contact is maintained along the wedge strip interface; at a certain critical wedge length the separation starts at the midsection of the wedge, and the length of the separation zone increases rapidly with increasing wedge length.
Lörincz, András; Póczos, Barnabás
2003-06-01
In optimizations the dimension of the problem may severely, sometimes exponentially increase optimization time. Parametric function approximatiors (FAPPs) have been suggested to overcome this problem. Here, a novel FAPP, cost component analysis (CCA) is described. In CCA, the search space is resampled according to the Boltzmann distribution generated by the energy landscape. That is, CCA converts the optimization problem to density estimation. Structure of the induced density is searched by independent component analysis (ICA). The advantage of CCA is that each independent ICA component can be optimized separately. In turn, (i) CCA intends to partition the original problem into subproblems and (ii) separating (partitioning) the original optimization problem into subproblems may serve interpretation. Most importantly, (iii) CCA may give rise to high gains in optimization time. Numerical simulations illustrate the working of the algorithm.
Ardila-Rey, Jorge Alfredo; Rojas-Moreno, Mónica Victoria; Martínez-Tarifa, Juan Manuel; Robles, Guillermo
2014-02-19
Partial discharge (PD) detection is a standardized technique to qualify electrical insulation in machines and power cables. Several techniques that analyze the waveform of the pulses have been proposed to discriminate noise from PD activity. Among them, spectral power ratio representation shows great flexibility in the separation of the sources of PD. Mapping spectral power ratios in two-dimensional plots leads to clusters of points which group pulses with similar characteristics. The position in the map depends on the nature of the partial discharge, the setup and the frequency response of the sensors. If these clusters are clearly separated, the subsequent task of identifying the source of the discharge is straightforward so the distance between clusters can be a figure of merit to suggest the best option for PD recognition. In this paper, two inductive sensors with different frequency responses to pulsed signals, a high frequency current transformer and an inductive loop sensor, are analyzed to test their performance in detecting and separating the sources of partial discharges.
The current status of the MASHA setup
NASA Astrophysics Data System (ADS)
Vedeneev, V. Yu.; Rodin, A. M.; Krupa, L.; Belozerov, A. V.; Chernysheva, E. V.; Dmitriev, S. N.; Gulyaev, A. V.; Gulyaeva, A. V.; Kamas, D.; Kliman, J.; Komarov, A. B.; Motycak, S.; Novoselov, A. S.; Salamatin, V. S.; Stepantsov, S. V.; Podshibyakin, A. V.; Yukhimchuk, S. A.; Granja, C.; Pospisil, S.
2017-11-01
The MASHA setup designed as the mass-separator with the resolving power of about 1700, which allows mass identification of superheavy nuclides is described. The setup uses solid ISOL (Isotope Separation On-Line) method. In the present article the upgrade of some parts of MASHA are described: target box (rotating target + hot catcher), ion source based on electron cyclotron resonance, data acquisition, beam diagnostics and control systems. The upgrade is undertaken in order to increase the total separation efficiency, reduce the separation time, of the installation and working stability and make possible continuous measurements at high beam currents. Ion source efficiency was measured in autonomous regime with using calibrated gas leaks of Kr and Xe injected directly to ion source. Some results of the first experiments for production of radon isotopes using the multi-nucleon transfer reaction 48Ca+242Pu are described in the present article. The using of TIMEPIX detector with MASHA setup for neutron-rich Rn isotopes identification is also described.
Method for sequencing DNA base pairs
Sessler, Andrew M.; Dawson, John
1993-01-01
The base pairs of a DNA structure are sequenced with the use of a scanning tunneling microscope (STM). The DNA structure is scanned by the STM probe tip, and, as it is being scanned, the DNA structure is separately subjected to a sequence of infrared radiation from four different sources, each source being selected to preferentially excite one of the four different bases in the DNA structure. Each particular base being scanned is subjected to such sequence of infrared radiation from the four different sources as that particular base is being scanned. The DNA structure as a whole is separately imaged for each subjection thereof to radiation from one only of each source.
Poultry femoral head separation and necrosis: A review
USDA-ARS?s Scientific Manuscript database
Femoral head separation (FHS) is a degenerative skeletal problem in fast growing poultry where the growth plate of proximal femur separates from its articular cartilage. FHS can remain asymptomatic but under strenuous conditions the damage is pronounced leading to lameness. The etiology of FHS is po...
Iterative algorithm for joint zero diagonalization with application in blind source separation.
Zhang, Wei-Tao; Lou, Shun-Tian
2011-07-01
A new iterative algorithm for the nonunitary joint zero diagonalization of a set of matrices is proposed for blind source separation applications. On one hand, since the zero diagonalizer of the proposed algorithm is constructed iteratively by successive multiplications of an invertible matrix, the singular solutions that occur in the existing nonunitary iterative algorithms are naturally avoided. On the other hand, compared to the algebraic method for joint zero diagonalization, the proposed algorithm requires fewer matrices to be zero diagonalized to yield even better performance. The extension of the algorithm to the complex and nonsquare mixing cases is also addressed. Numerical simulations on both synthetic data and blind source separation using time-frequency distributions illustrate the performance of the algorithm and provide a comparison to the leading joint zero diagonalization schemes.
Compact, maintainable 80-KeV neutral beam module
Fink, Joel H.; Molvik, Arthur W.
1980-01-01
A compact, maintainable 80-keV arc chamber, extractor module for a neutral beam system immersed in a vacuum of <10.sup.-2 Torr, incorporating a nested 60-keV gradient shield located midway between the high voltage ion source and surrounding grounded frame. The shield reduces breakdown or arcing path length without increasing the voltage gradient, tends to keep electric fields normal to conducting surfaces rather than skewed and reduces the peak electric field around irregularities on the 80-keV electrodes. The arc chamber or ion source is mounted separately from the extractor or ion accelerator to reduce misalignment of the accelerator and to permit separate maintenance to be performed on these systems. The separate mounting of the ion source provides for maintaining same without removing the ion accelerator.
Non-destructive component separation using infrared radiant energy
Simandl, Ronald F [Knoxville, TN; Russell, Steven W [Knoxville, TN; Holt, Jerrid S [Knoxville, TN; Brown, John D [Harriman, TN
2011-03-01
A method for separating a first component and a second component from one another at an adhesive bond interface between the first component and second component. Typically the method involves irradiating the first component with infrared radiation from a source that radiates substantially only short wavelengths until the adhesive bond is destabilized, and then separating the first component and the second component from one another. In some embodiments an assembly of components to be debonded is placed inside an enclosure and the assembly is illuminated from an IR source that is external to the enclosure. In some embodiments an assembly of components to be debonded is simultaneously irradiated by a multi-planar array of IR sources. Often the IR radiation is unidirectional. In some embodiments the IR radiation is narrow-band short wavelength infrared radiation.
Removal of EOG Artifacts from EEG Recordings Using Stationary Subspace Analysis
Zeng, Hong; Song, Aiguo
2014-01-01
An effective approach is proposed in this paper to remove ocular artifacts from the raw EEG recording. The proposed approach first conducts the blind source separation on the raw EEG recording by the stationary subspace analysis (SSA) algorithm. Unlike the classic blind source separation algorithms, SSA is explicitly tailored to the understanding of distribution changes, where both the mean and the covariance matrix are taken into account. In addition, neither independency nor uncorrelation is required among the sources by SSA. Thereby, it can concentrate artifacts in fewer components than the representative blind source separation methods. Next, the components that are determined to be related to the ocular artifacts are projected back to be subtracted from EEG signals, producing the clean EEG data eventually. The experimental results on both the artificially contaminated EEG data and real EEG data have demonstrated the effectiveness of the proposed method, in particular for the cases where limited number of electrodes are used for the recording, as well as when the artifact contaminated signal is highly nonstationary and the underlying sources cannot be assumed to be independent or uncorrelated. PMID:24550696
Multilevel cascade voltage source inverter with separate DC sources
Peng, F.Z.; Lai, J.S.
1997-06-24
A multilevel cascade voltage source inverter having separate DC sources is described herein. This inverter is applicable to high voltage, high power applications such as flexible AC transmission systems (FACTS) including static VAR generation (SVG), power line conditioning, series compensation, phase shifting and voltage balancing and fuel cell and photovoltaic utility interface systems. The M-level inverter consists of at least one phase wherein each phase has a plurality of full bridge inverters equipped with an independent DC source. This inverter develops a near sinusoidal approximation voltage waveform with only one switching per cycle as the number of levels, M, is increased. The inverter may have either single-phase or multi-phase embodiments connected in either wye or delta configurations. 15 figs.
Iliev, Filip L.; Stanev, Valentin G.; Vesselinov, Velimir V.
2018-01-01
Factor analysis is broadly used as a powerful unsupervised machine learning tool for reconstruction of hidden features in recorded mixtures of signals. In the case of a linear approximation, the mixtures can be decomposed by a variety of model-free Blind Source Separation (BSS) algorithms. Most of the available BSS algorithms consider an instantaneous mixing of signals, while the case when the mixtures are linear combinations of signals with delays is less explored. Especially difficult is the case when the number of sources of the signals with delays is unknown and has to be determined from the data as well. To address this problem, in this paper, we present a new method based on Nonnegative Matrix Factorization (NMF) that is capable of identifying: (a) the unknown number of the sources, (b) the delays and speed of propagation of the signals, and (c) the locations of the sources. Our method can be used to decompose records of mixtures of signals with delays emitted by an unknown number of sources in a nondispersive medium, based only on recorded data. This is the case, for example, when electromagnetic signals from multiple antennas are received asynchronously; or mixtures of acoustic or seismic signals recorded by sensors located at different positions; or when a shift in frequency is induced by the Doppler effect. By applying our method to synthetic datasets, we demonstrate its ability to identify the unknown number of sources as well as the waveforms, the delays, and the strengths of the signals. Using Bayesian analysis, we also evaluate estimation uncertainties and identify the region of likelihood where the positions of the sources can be found. PMID:29518126
Iliev, Filip L; Stanev, Valentin G; Vesselinov, Velimir V; Alexandrov, Boian S
2018-01-01
Factor analysis is broadly used as a powerful unsupervised machine learning tool for reconstruction of hidden features in recorded mixtures of signals. In the case of a linear approximation, the mixtures can be decomposed by a variety of model-free Blind Source Separation (BSS) algorithms. Most of the available BSS algorithms consider an instantaneous mixing of signals, while the case when the mixtures are linear combinations of signals with delays is less explored. Especially difficult is the case when the number of sources of the signals with delays is unknown and has to be determined from the data as well. To address this problem, in this paper, we present a new method based on Nonnegative Matrix Factorization (NMF) that is capable of identifying: (a) the unknown number of the sources, (b) the delays and speed of propagation of the signals, and (c) the locations of the sources. Our method can be used to decompose records of mixtures of signals with delays emitted by an unknown number of sources in a nondispersive medium, based only on recorded data. This is the case, for example, when electromagnetic signals from multiple antennas are received asynchronously; or mixtures of acoustic or seismic signals recorded by sensors located at different positions; or when a shift in frequency is induced by the Doppler effect. By applying our method to synthetic datasets, we demonstrate its ability to identify the unknown number of sources as well as the waveforms, the delays, and the strengths of the signals. Using Bayesian analysis, we also evaluate estimation uncertainties and identify the region of likelihood where the positions of the sources can be found.
Cai, C; Rodet, T; Legoupil, S; Mohammad-Djafari, A
2013-11-01
Dual-energy computed tomography (DECT) makes it possible to get two fractions of basis materials without segmentation. One is the soft-tissue equivalent water fraction and the other is the hard-matter equivalent bone fraction. Practical DECT measurements are usually obtained with polychromatic x-ray beams. Existing reconstruction approaches based on linear forward models without counting the beam polychromaticity fail to estimate the correct decomposition fractions and result in beam-hardening artifacts (BHA). The existing BHA correction approaches either need to refer to calibration measurements or suffer from the noise amplification caused by the negative-log preprocessing and the ill-conditioned water and bone separation problem. To overcome these problems, statistical DECT reconstruction approaches based on nonlinear forward models counting the beam polychromaticity show great potential for giving accurate fraction images. This work proposes a full-spectral Bayesian reconstruction approach which allows the reconstruction of high quality fraction images from ordinary polychromatic measurements. This approach is based on a Gaussian noise model with unknown variance assigned directly to the projections without taking negative-log. Referring to Bayesian inferences, the decomposition fractions and observation variance are estimated by using the joint maximum a posteriori (MAP) estimation method. Subject to an adaptive prior model assigned to the variance, the joint estimation problem is then simplified into a single estimation problem. It transforms the joint MAP estimation problem into a minimization problem with a nonquadratic cost function. To solve it, the use of a monotone conjugate gradient algorithm with suboptimal descent steps is proposed. The performance of the proposed approach is analyzed with both simulated and experimental data. The results show that the proposed Bayesian approach is robust to noise and materials. It is also necessary to have the accurate spectrum information about the source-detector system. When dealing with experimental data, the spectrum can be predicted by a Monte Carlo simulator. For the materials between water and bone, less than 5% separation errors are observed on the estimated decomposition fractions. The proposed approach is a statistical reconstruction approach based on a nonlinear forward model counting the full beam polychromaticity and applied directly to the projections without taking negative-log. Compared to the approaches based on linear forward models and the BHA correction approaches, it has advantages in noise robustness and reconstruction accuracy.
Air Pollution Control and Waste Management
This special issue addresses air pollution control and waste management, two environmental problems that are usually considered separately. Indeed, one of the challenges of environmental protection is that problems are addressed in 'media-specific' ways. In reality, these problem...
Tommasino, Paolo; Campolo, Domenico
2017-02-03
In this work, we address human-like motor planning in redundant manipulators. Specifically, we want to capture postural synergies such as Donders' law, experimentally observed in humans during kinematically redundant tasks, and infer a minimal set of parameters to implement similar postural synergies in a kinematic model. For the model itself, although the focus of this paper is to solve redundancy by implementing postural strategies derived from experimental data, we also want to ensure that such postural control strategies do not interfere with other possible forms of motion control (in the task-space), i.e. solving the posture/movement problem. The redundancy problem is framed as a constrained optimization problem, traditionally solved via the method of Lagrange multipliers. The posture/movement problem can be tackled via the separation principle which, derived from experimental evidence, posits that the brain processes static torques (i.e. posture-dependent, such as gravitational torques) separately from dynamic torques (i.e. velocity-dependent). The separation principle has traditionally been applied at a joint torque level. Our main contribution is to apply the separation principle to Lagrange multipliers, which act as task-space force fields, leading to a task-space separation principle. In this way, we can separate postural control (implementing Donders' law) from various types of tasks-space movement planners. As an example, the proposed framework is applied to the (redundant) task of pointing with the human wrist. Nonlinear inverse optimization (NIO) is used to fit the model parameters and to capture motor strategies displayed by six human subjects during pointing tasks. The novelty of our NIO approach is that (i) the fitted motor strategy, rather than raw data, is used to filter and down-sample human behaviours; (ii) our framework is used to efficiently simulate model behaviour iteratively, until it converges towards the experimental human strategies.
Mediation and moderation of divorce effects on children's behavior problems.
Weaver, Jennifer M; Schofield, Thomas J
2015-02-01
Using data from the National Institute of Child Health and Human Development Study of Early Child Care and Youth Development, we examined children's internalizing and externalizing behavior problems from age 5 to 15 years in relation to whether they had experienced a parental divorce. Children from divorced families had more behavior problems compared with a propensity-score-matched sample of children from intact families, according to both teachers and mothers. They exhibited more internalizing and externalizing problems at the first assessment after the parents' separation and at the last available assessment (age 11 years for teacher reports, or 15 years for mother reports). Divorce also predicted both short-term and long-term rank-order increases in behavior problems. Associations between divorce and child behavior problems were moderated by family income (assessed before the divorce) such that children from families with higher incomes prior to the separation had fewer internalizing problems than children from families with lower incomes prior to the separation. Higher levels of predivorce maternal sensitivity and child IQ also functioned as protective factors for children of divorce. Mediation analyses showed that children were more likely to exhibit behavior problems after the divorce if their postdivorce home environment was less supportive and stimulating, their mother was less sensitive and more depressed, and their household income was lower. We discuss avenues for intervention, particularly efforts to improve the quality of home environments in divorced families. PsycINFO Database Record (c) 2015 APA, all rights reserved.
Family factors and life events as risk factors for behavioural and emotional problems in children.
Harland, P; Reijneveld, S A; Brugman, E; Verloove-Vanhorick, S P; Verhulst, F C
2002-08-01
The aim of this study was to identify groups of children at increased risk of behavioural or emotional problems on the basis of socio-demographic characteristics, family characteristics, and recent life events with a focus on unemployment and divorce or separation. We obtained data on the Child Behavior Checklist (CBCL) from a community-based national sample of 4480 parents of school-aged children and interviewed them about their demographic and family characteristics and about the child's recent life events. Results showed that family characteristics and recent life events were more strongly associated with children's risks of behavioural and emotional problems as measured by the CBCL than other demographic characteristics. Risks were somewhat higher for children who had experienced parental unemployment and divorce or separation recently, as compared to those who had experienced these events in the more distant past. We conclude that children with recent experience of parental unemployment or parental divorce or separation are at a relatively high risk of behavioural and emotional problems as reported by parents. Although relatively high, the risks that were found do not justify restriction of screening for behavioural and emotional problems to these children.
ECLSS ARS humidifier separator repair onboard Atlantis, OV-104, during STS-44
NASA Technical Reports Server (NTRS)
1991-01-01
During STS-44, the Environmental Control and Life Support System (ECLSS) Air Revitalization System (ARS) humidifier separator is repaired using a towel and a plastic bag underneath the middeck subfloor of Atlantis, Orbiter Vehicle (OV) 104. Problems with the humidifier separator began about midway through the mission.
Separation of Powers in Foreign and Domestic Contexts.
ERIC Educational Resources Information Center
Bennett, Robert W.
1987-01-01
Explores the concept of separation of powers in terms of recent conflicts between the executive and legislative branches of the U.S. government. Points out that what the Supreme Court has said about separation of powers in the domestic context may complicate resolution of more serious problems in foreign affairs. (BSR)
Method and Device for Extraction of Liquids from a Solid Particle Material
NASA Technical Reports Server (NTRS)
deMayo, Benjamin (Inventor)
2017-01-01
A method, system, and device for separating oil from oil sands or oil shale is disclosed. The method includes heating the oil sands, spinning the heated oil sands, confining the sand particles mechanically, and recovering the oil substantially free of the sand. The method can be used without the addition of chemical extraction agents. The system includes a source of centrifugal force, a heat source, a separation device, and a recovery device. The separation device includes a method of confining the sands while allowing the oil to escape, such as through an aperture.
NASA Astrophysics Data System (ADS)
Zhou, Huai-Bei
This dissertation examines the dynamic response of a magnetoplasma to an external time-dependent current source. To achieve this goal a new method which combines analytic and numerical techniques to study the dynamic response of a 3-D magnetoplasma to a time-dependent current source imposed across the magnetic field was developed. The set of the cold electron and/or ion plasma equations and Maxwell's equations are first solved analytically in (k, omega)^ace; inverse Laplace and 3 -D complex Fast Fourier Transform (FFT) techniques are subsequently used to numerically transform the radiation fields and plasma currents from the (k, omega) ^ace to the (r, t) space. The dynamic responses of the electron plasma and of the compensated two-component plasma to external current sources are studied separately. The results show that the electron plasma responds to a time -varying current source imposed across the magnetic field by exciting whistler/helicon waves and forming of an expanding local current loop, induced by field aligned plasma currents. The current loop consists of two anti-parallel field-aligned current channels concentrated at the ends of the imposed current and a cross-field current region connecting these channels. The latter is driven by an electron Hall drift. A compensated two-component plasma responds to the same current source as following: (a) For slow time scales tau > Omega_sp{i}{-1} , it generates Alfven waves and forms a non-local current loop in which the ion polarization currents dominate the cross-field current; (b) For fast time scales tau < Omega_sp{i}{-1} , the dynamic response of the compensated two-component plasma is the same as that of the electron plasma. The characteristics of the current closure region are determined by the background plasma density, the magnetic field and the time scale of the current source. This study has applications to a diverse range of space and solid state plasma problems. These problems include current closure in emf inducing tethered satellite systems (TSS), generation of ELF/VLF waves by ionospheric heating, current closure and quasineutrality in thin magnetopause transitions, and short electromagnetic pulse generation in solid state plasmas. The cross-field current in TSS builds up on a time scale corresponding to the whistler waves and results in local current closure. Amplitude modulated HF ionospheric heating generates ELF/VLF waves by forming a horizontal magnetic dipole. The dipole is formed by the current closure in the modified region. For thin transition the time-dependent cross-field polarization field at the magnetopause could be neutralized by the formation of field aligned current loops that close by a cross-field electron Hall current. A moving current source in a solid state plasma results in microwave emission if the speed of the source exceeds the local phase velocity of the helicon or Alfven waves. Detailed analysis of the above problems is presented in the thesis.
NASA Astrophysics Data System (ADS)
Qian, Jin; Zheng, Hao; Wang, Peifang; Liao, Xiaolin; Wang, Chao; Hou, Jun; Ao, Yanhui; Shen, Mengmeng; Liu, Jingjing; Li, Kun
2017-10-01
In this study we used a dual stable isotope approach (δ18O and δ2H) to assess the ecohydrological separation hypothesis and to identify the seasonal variation in water sources of Ginkgo biloba L. in the riparian zone in the Taihu Lake basin, China. Three study sites located at 5, 10, and 30 m from a river bank were established. From August 2014 to July 2015, samples of rainwater, river water, groundwater, bulk soil water at five soil depths (i.e. 0-30, 30-60, 60-90, 90-120, 120-150 cm), and xylem water of G. biloba, were collected and their δ18O and δ2H values were measured. Generally, the δ18O and δ2H values for xylem water, groundwater, and soil water clustered together and separated from those of river water, suggesting the possible occurrence of ecohydrological separation. However, the line-conditioned excess (lc-excess) values of most xylem water were positive, indicating a mixture of different water sources. Significant correlations were observed between the contributions of precipitation, soil water, and groundwater to water uptake by G. biloba, further supporting ecohydrological connectivity rather than ecohydrological separation. G. biloba switched its major water sources from soil water at 0-60 cm depth and precipitation in the wet summer, to soil water from >90 cm depth and groundwater in the dry winter. The river water was a minor water source for G. biloba, but its contribution was comparatively greater at the site closest to the river bank. Our findings contribute to understanding of plant-soil-water relationships and the water balance, and may provide important information for investigations of nutrient sources and sinks in riparian zones. The present study suggests the need to rethink the application of ecohydrological connectivity and separation in different biomes, especially where river water and groundwater recharge each other over time.
Fizeau simultaneous phase-shifting interferometry based on extended source
NASA Astrophysics Data System (ADS)
Wang, Shanshan; Zhu, Qiudong; Hou, Yinlong; Cao, Zheng
2016-09-01
Coaxial Fizeau simultaneous phase-shifting interferometer plays an important role in many fields for its characteristics of long optical path, miniaturization, and elimination of reference surface high-frequency error. Based on the matching of coherence between extended source and interferometer, orthogonal polarization reference wave and measurement wave can be obtained by Fizeau interferometry with Michelson interferometer preposed. Through matching spatial coherence length between preposed interferometer and primary interferometer, high contrast interference fringes can be obtained and additional interference fringes can be eliminated. Thus, the problem of separation of measurement and reference surface in the common optical path Fizeau interferometer is solved. Numerical simulation and principle experiment is conducted to verify the feasibility of extended source interferometer. Simulation platform is established by using the communication technique of DDE (dynamic data exchange) to connect Zemax and Matlab. The modeling of the extended source interferometer is realized by using Zemax. Matlab codes are programmed to automatically rectify the field parameters of the optical system and conveniently calculate the visibility of interference fringes. Combined with the simulation, the experimental platform of the extended source interferometer is established. After experimental research on the influence law of scattering screen granularity to interference fringes, the granularity of scattering screen is determined. Based on the simulation platform and experimental platform, the impacts on phase measurement accuracy of the imaging system aberration and collimation system aberration of the interferometer are analyzed. Compared the visibility relation curves between experimental measurement and simulation result, the experimental result is in line with the theoretical result.
Du, Zhenxia; Sun, Tangqiang; Zhao, Jianan; Wang, Di; Zhang, Zhongxia; Yu, Wenlian
2018-07-01
Ion mobility spectrometry (IMS) which acts as a rapid analysis technique is widely used in the field detection of illicit drugs and explosives. Due to limited separation abilities of the pint-sized IMS challenges and problems still exist regarding high false positive and false negative responses due to the interference of the matrix. In addition, the gas-phase ion chemistry and special phenomena in the IMS spectra, such one substance showing two peaks, were not identified unambiguously. In order to explain or resolve these questions, in this paper, an ion mobility spectrometry was coupled to a mass spectrometry (IMS-MS). A commercial IMS is embedded in a custom-built ion chamber shell was attached to the mass spectrometer. The faraday plate of IMS was fabricated with a hole for the ions to passing through to the mass spectrometer. The ion transmission efficiency of IMS-MS was optimized by optimizing the various parameters, especially the distance between the faraday plate and the cone of mass spectrum. This design keeps the integrity of the two original instruments and the mass spectrometry still works with multimode ionization source (i.e., IMS-MS, ESI-MS, APCI-MS modes). The illicit drugs and explosive samples were analyzed by the IMS-MS with 63 Ni source. The results showed that the IMS-MS is of high sensitivity. The ionization mechanism of the illicit drug and explosive samples with 63 Ni source were systematically studied. In addition, the interferent which interfered the detection of cocaine was identified as dibutyl phthalate (DBP) by this platform. The reason why the acetone solution of amphetamine showed two peaks was explained. Copyright © 2018 Elsevier B.V. All rights reserved.
Data Transfer for Multiple Sensor Networks Over a Broad Temperature Range
NASA Technical Reports Server (NTRS)
Krasowski, Michael
2013-01-01
At extreme temperatures, cryogenic and over 300 C, few electronic components are available to support intelligent data transfer over a common, linear combining medium. This innovation allows many sensors to operate on the same wire bus (or on the same airwaves or optical channel: any linearly combining medium), transmitting simultaneously, but individually recoverable at a node in a cooler part of the test area. This innovation has been demonstrated using room-temperature silicon microcircuits as proxy. The microcircuits have analog functionality comparable to componentry designed using silicon carbide. Given a common, linearly combining medium, multiple sending units may transmit information simultaneously. A listening node, using various techniques, can pick out the signal from a single sender, if it has unique qualities, e.g. a voice. The problem being solved is commonly referred to as the cocktail party problem. The human brain uses the cocktail party effect when it is able to recognize and follow a single conversation in a party full of talkers and other noise sources. High-temperature sensors have been used in silicon carbide electronic oscillator circuits. The frequency of the oscillator changes as a function of the changes in the sensed parameter, such as pressure. This change is analogous to changes in the pitch of a person s voice. The output of this oscillator and many others may be superimposed onto a single medium. This medium may be the power lines supplying current to the sensors, a third wire dedicated to data transmission, the airwaves through radio transmission, an optical medium, etc. However, with nothing to distinguish the identities of each source that is, the source separation this system is useless. Using digital electronic functions, unique codes or patterns are created and used to modulate the output of the sensor.
Migration of scattered teleseismic body waves
NASA Astrophysics Data System (ADS)
Bostock, M. G.; Rondenay, S.
1999-06-01
The retrieval of near-receiver mantle structure from scattered waves associated with teleseismic P and S and recorded on three-component, linear seismic arrays is considered in the context of inverse scattering theory. A Ray + Born formulation is proposed which admits linearization of the forward problem and economy in the computation of the elastic wave Green's function. The high-frequency approximation further simplifies the problem by enabling (1) the use of an earth-flattened, 1-D reference model, (2) a reduction in computations to 2-D through the assumption of 2.5-D experimental geometry, and (3) band-diagonalization of the Hessian matrix in the inverse formulation. The final expressions are in a form reminiscent of the classical diffraction stack of seismic migration. Implementation of this procedure demands an accurate estimate of the scattered wave contribution to the impulse response, and thus requires the removal of both the reference wavefield and the source time signature from the raw record sections. An approximate separation of direct and scattered waves is achieved through application of the inverse free-surface transfer operator to individual station records and a Karhunen-Loeve transform to the resulting record sections. This procedure takes the full displacement field to a wave vector space wherein the first principal component of the incident wave-type section is identified with the direct wave and is used as an estimate of the source time function. The scattered displacement field is reconstituted from the remaining principal components using the forward free-surface transfer operator, and may be reduced to a scattering impulse response upon deconvolution of the source estimate. An example employing pseudo-spectral synthetic seismograms demonstrates an application of the methodology.
NASA Astrophysics Data System (ADS)
Crochet, M. W.; Gonthier, K. A.
2013-12-01
Systems of hyperbolic partial differential equations are frequently used to model the flow of multiphase mixtures. These equations often contain sources, referred to as nozzling terms, that cannot be posed in divergence form, and have proven to be particularly challenging in the development of finite-volume methods. Upwind schemes have recently shown promise in properly resolving the steady wave solution of the associated multiphase Riemann problem. However, these methods require a full characteristic decomposition of the system eigenstructure, which may be either unavailable or computationally expensive. Central schemes, such as the Kurganov-Tadmor (KT) family of methods, require minimal characteristic information, which makes them easily applicable to systems with an arbitrary number of phases. However, the proper implementation of nozzling terms in these schemes has been mathematically ambiguous. The primary objectives of this work are twofold: first, an extension of the KT family of schemes is proposed that formally accounts for the nonconservative nozzling sources. This modification results in a semidiscrete form that retains the simplicity of its predecessor and introduces little additional computational expense. Second, this modified method is applied to multiple, but equivalent, forms of the multiphase equations to perform a numerical study by solving several one-dimensional test problems. Both ideal and Mie-Grüneisen equations of state are used, with the results compared to an analytical solution. This study demonstrates that the magnitudes of the resulting numerical errors are sensitive to the form of the equations considered, and suggests an optimal form to minimize these errors. Finally, a separate modification of the wave propagation speeds used in the KT family is also suggested that can reduce the extent of numerical diffusion in multiphase flows.
Ahdesmäki, Miika J; Gray, Simon R; Johnson, Justin H; Lai, Zhongwu
2016-01-01
Grafting of cell lines and primary tumours is a crucial step in the drug development process between cell line studies and clinical trials. Disambiguate is a program for computationally separating the sequencing reads of two species derived from grafted samples. Disambiguate operates on DNA or RNA-seq alignments to the two species and separates the components at very high sensitivity and specificity as illustrated in artificially mixed human-mouse samples. This allows for maximum recovery of data from target tumours for more accurate variant calling and gene expression quantification. Given that no general use open source algorithm accessible to the bioinformatics community exists for the purposes of separating the two species data, the proposed Disambiguate tool presents a novel approach and improvement to performing sequence analysis of grafted samples. Both Python and C++ implementations are available and they are integrated into several open and closed source pipelines. Disambiguate is open source and is freely available at https://github.com/AstraZeneca-NGS/disambiguate.
Ball, J.W.; Bassett, R.L.
2000-01-01
A method has been developed for separating the Cr dissolved in natural water from matrix elements and determination of its stable isotope ratios using solid-source thermal-ionization mass spectrometry (TIMS). The separation method takes advantage of the existence of the oxidized form of Cr as an oxyanion to separate it from interfering cations using anion-exchange chromatography, and of the reduced form of Cr as a positively charged ion to separate it from interfering anions such as sulfate. Subsequent processing of the separated sample eliminates residual organic material for application to a solid source filament. Ratios for 53Cr/52Cr for National Institute of Standards and Technology Standard Reference Material 979 can be measured using the silica gel-boric acid technique with a filament-to-filament standard deviation in the mean 53Cr/52Cr ratio for 50 replicates of 0.00005 or less. (C) 2000 Elsevier Science B.V. All rights reserved.
Development of Improved Surface Integral Methods for Jet Aeroacoustic Predictions
NASA Technical Reports Server (NTRS)
Pilon, Anthony R.; Lyrintzis, Anastasios S.
1997-01-01
The accurate prediction of aerodynamically generated noise has become an important goal over the past decade. Aeroacoustics must now be an integral part of the aircraft design process. The direct calculation of aerodynamically generated noise with CFD-like algorithms is plausible. However, large computer time and memory requirements often make these predictions impractical. It is therefore necessary to separate the aeroacoustics problem into two parts, one in which aerodynamic sound sources are determined, and another in which the propagating sound is calculated. This idea is applied in acoustic analogy methods. However, in the acoustic analogy, the determination of far-field sound requires the solution of a volume integral. This volume integration again leads to impractical computer requirements. An alternative to the volume integrations can be found in the Kirchhoff method. In this method, Green's theorem for the linear wave equation is used to determine sound propagation based on quantities on a surface surrounding the source region. The change from volume to surface integrals represents a tremendous savings in the computer resources required for an accurate prediction. This work is concerned with the development of enhancements of the Kirchhoff method for use in a wide variety of aeroacoustics problems. This enhanced method, the modified Kirchhoff method, is shown to be a Green's function solution of Lighthill's equation. It is also shown rigorously to be identical to the methods of Ffowcs Williams and Hawkings. This allows for development of versatile computer codes which can easily alternate between the different Kirchhoff and Ffowcs Williams-Hawkings formulations, using the most appropriate method for the problem at hand. The modified Kirchhoff method is developed primarily for use in jet aeroacoustics predictions. Applications of the method are shown for two dimensional and three dimensional jet flows. Additionally, the enhancements are generalized so that they may be used in any aeroacoustics problem.
NASA Astrophysics Data System (ADS)
Schumacher, Florian; Friederich, Wolfgang; Lamara, Samir; Gutt, Phillip; Paffrath, Marcel
2015-04-01
We present a seismic full waveform inversion concept for applications ranging from seismological to enineering contexts, based on sensitivity kernels for full waveforms. The kernels are derived from Born scattering theory as the Fréchet derivatives of linearized frequency-domain full waveform data functionals, quantifying the influence of elastic earth model parameters and density on the data values. For a specific source-receiver combination, the kernel is computed from the displacement and strain field spectrum originating from the source evaluated throughout the inversion domain, as well as the Green function spectrum and its strains originating from the receiver. By storing the wavefield spectra of specific sources/receivers, they can be re-used for kernel computation for different specific source-receiver combinations, optimizing the total number of required forward simulations. In the iterative inversion procedure, the solution of the forward problem, the computation of sensitivity kernels and the derivation of a model update is held completely separate. In particular, the model description for the forward problem and the description of the inverted model update are kept independent. Hence, the resolution of the inverted model as well as the complexity of solving the forward problem can be iteratively increased (with increasing frequency content of the inverted data subset). This may regularize the overall inverse problem and optimizes the computational effort of both, solving the forward problem and computing the model update. The required interconnection of arbitrary unstructured volume and point grids is realized by generalized high-order integration rules and 3D-unstructured interpolation methods. The model update is inferred solving a minimization problem in a least-squares sense, resulting in Gauss-Newton convergence of the overall inversion process. The inversion method was implemented in the modularized software package ASKI (Analysis of Sensitivity and Kernel Inversion), which provides a generalized interface to arbitrary external forward modelling codes. So far, the 3D spectral-element code SPECFEM3D (Tromp, Komatitsch and Liu, 2008) and the 1D semi-analytical code GEMINI (Friederich and Dalkolmo, 1995) in both, Cartesian and spherical framework are supported. The creation of interfaces to further forward codes is planned in the near future. ASKI is freely available under the terms of the GPL at www.rub.de/aski . Since the independent modules of ASKI must communicate via file output/input, large storage capacities need to be accessible conveniently. Storing the complete sensitivity matrix to file, however, permits the scientist full manual control over each step in a customized procedure of sensitivity/resolution analysis and full waveform inversion. In the presentation, we will show some aspects of the theory behind the full waveform inversion method and its practical realization by the software package ASKI, as well as synthetic and real-data applications from different scales and geometries.
A test of a mechanical multi-impact shear-wave seismic source
Worley, David M.; Odum, Jack K.; Williams, Robert A.; Stephenson, William J.
2001-01-01
We modified two gasoline-engine-powered earth tampers, commonly used as compressional-(P) wave seismic energy sources for shallow reflection studies, for use as shear(S)-wave energy sources. This new configuration, termed ?Hacker? (horizontal Wacker?), is evaluated as an alternative to the manual sledgehammer typically used in conjunction with a large timber held down by the front wheels of a vehicle. The Hacker maximizes the use of existing equipment by a quick changeover of bolt-on accessories as opposed to the handling of a separate source, and is intended to improve the depth of penetration of S-wave data by stacking hundreds of impacts over a two to three minute period. Records were made with a variety of configurations involving up to two Hackers simultaneously then compared to a reference record made with a sledgehammer. Preliminary results indicate moderate success by the higher amplitude S-waves recorded with the Hacker as compared to the hammer method. False triggers generated by the backswing of the Hacker add unwanted noise and we are currently working to modify the device to eliminate this effect. Correlation noise caused by insufficient randomness of the Hacker impact sequence is also a significant noise problem that we hope to reduce by improving the coupling of the Hacker to the timber so that the operator has more control over the impact sequence.
Threshold magnitudes for a multichannel correlation detector in background seismicity
Carmichael, Joshua D.; Hartse, Hans
2016-04-01
Colocated explosive sources often produce correlated seismic waveforms. Multichannel correlation detectors identify these signals by scanning template waveforms recorded from known reference events against "target" data to find similar waveforms. This screening problem is challenged at thresholds required to monitor smaller explosions, often because non-target signals falsely trigger such detectors. Therefore, it is generally unclear what thresholds will reliably identify a target explosion while screening non-target background seismicity. Here, we estimate threshold magnitudes for hypothetical explosions located at the North Korean nuclear test site over six months of 2010, by processing International Monitoring System (IMS) array data with a multichannelmore » waveform correlation detector. Our method (1) accounts for low amplitude background seismicity that falsely triggers correlation detectors but is unidentifiable with conventional power beams, (2) adapts to diurnally variable noise levels and (3) uses source-receiver reciprocity concepts to estimate thresholds for explosions spatially separated from the template source. Furthermore, we find that underground explosions with body wave magnitudes m b = 1.66 are detectable at the IMS array USRK with probability 0.99, when using template waveforms consisting only of P -waves, without false alarms. We conservatively find that these thresholds also increase by up to a magnitude unit for sources located 4 km or more from the Feb.12, 2013 announced nuclear test.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Afroz, Rafia, E-mail: rafia_afroz@yahoo.com; Masud, Muhammad Mehedi
2011-04-15
This study employed contingent valuation method to estimate the willingness to pay (WTP) of the households to improve the waste collection system in Kuala Lumpur, Malaysia. The objective of this study is to evaluate how household WTP changes when recycling and waste separation at source is made mandatory. The methodology consisted of asking people directly about their WTP for an additional waste collection service charge to cover the costs of a new waste management project. The new waste management project consisted of two versions: version A (recycling and waste separation is mandatory) and version B (recycling and waste separation ismore » not mandatory). The households declined their WTP for version A when they were asked to separate the waste at source although all the facilities would be given to them for waste separation. The result of this study indicates that the households were not conscious about the benefits of recycling and waste separation. Concerted efforts should be taken to raise environmental consciousness of the households through education and more publicity regarding waste separation, reducing and recycling.« less
NASA Technical Reports Server (NTRS)
Fox, George Edward (Inventor); Jackson, George William (Inventor); Willson, Richard Coale (Inventor)
2011-01-01
A device for separating and purifying useful quantities of particles comprises: a. an anolyte reservoir connected to an anode, the anolyte reservoir containing an electrophoresis buffer; b. a catholyte reservoir connected to a cathode, the catholyte reservoir also containing the electrophoresis buffer; c. a power supply connected to the anode and to the cathode; d. a column having a first end inserted into the anolyte reservoir, a second end inserted into the catholyte reservoir, and containing a separation medium; e. a light source; f. a first optical fiber having a first fiber end inserted into the separation medium, and having a second fiber end connected to the light source; g. a photo detector; h. a second optical fiber having a third fiber end inserted into the separation medium, and having a fourth fiber end connected to the photo detector; and i. an ion-exchange membrane in the anolyte reservoir.
Method for Monitored Separation and Collection of Biological Materials
NASA Technical Reports Server (NTRS)
Fox, George Edward (Inventor); Jackson, George William (Inventor); Willson, Richard Coale (Inventor)
2014-01-01
A device for separating and purifying useful quantities of particles comprises: (a) an anolyte reservoir connected to an anode, the anolyte reservoir containing an electrophoresis buffer; (b) a catholyte reservoir connected to a cathode, the catholyte reservoir also containing the electrophoresis buffer; (c) a power supply connected to the anode and to the cathode; (d) a column having a first end inserted into the anolyte reservoir, a second end inserted into the catholyte reservoir, and containing a separation medium; (e) a light source; (f) a first optical fiber having a first fiber end inserted into the separation medium, and having a second fiber end connected to the light source; (g) a photo detector; (h) a second optical fiber having a third fiber end inserted into the separation medium, and having a fourth fiber end connected to the photo detector; and (i) an ion-exchange membrane in the anolyte reservoir.
Separation anxiety among birth-assigned male children in a specialty gender identity service.
VanderLaan, Doug P; Santarossa, Alanna; Nabbijohn, A Natisha; Wood, Hayley; Owen-Anderson, Allison; Zucker, Kenneth J
2018-01-01
Previous research suggested that separation anxiety disorder (SAD) is overrepresented among birth-assigned male children clinic-referred for gender dysphoria (GD). The present study examined maternally reported separation anxiety of birth-assigned male children assessed in a specialty gender identity service (N = 360). SAD was determined in relation to DSM-III and DSM-IV criteria, respectively. A dimensional metric of separation anxiety was examined in relation to several additional factors: age, ethnicity, parental marital status and social class, IQ, gender nonconformity, behavioral and emotional problems, and poor peer relations. When defined in a liberal fashion, 55.8% were classified as having SAD. When using a more conservative criterion, 5.3% were classified as having SAD, which was significantly greater than the estimated general population prevalence for boys, but not for girls. Dimensionally, separation anxiety was associated with having parents who were not married or cohabitating as well as with elevations in gender nonconformity; however, the association with gender nonconformity was no longer significant when statistically controlling for internalizing problems. Thus, SAD appears to be common among birth-assigned males clinic-referred for GD when defined in a liberal fashion, and more common than in boys, but not girls, from the general population even when more stringent criteria were applied. Also, the degree of separation anxiety appears to be linked to generic risk factors (i.e., parental marital status, internalizing problems). As such, although separation anxiety is common among birth-assigned male children clinic-referred for GD, it seems unlikely to hold unique significance for this population based on the current data.
ERIC Educational Resources Information Center
Fukumine, Eri; Kennison, Shelia M.
2016-01-01
The present research investigated analogical transfer during problem solving by bilinguals. In a study with 50 Spanish-English bilinguals, participants solved a target problem whose solution was similar to that of a preceding source problem. The source problem was always presented in the 2nd language; the target problem was always presented in the…
Young Children's Analogical Problem Solving: Gaining Insights from Video Displays
ERIC Educational Resources Information Center
Chen, Zhe; Siegler, Robert S.
2013-01-01
This study examined how toddlers gain insights from source video displays and use the insights to solve analogous problems. Two- to 2.5-year-olds viewed a source video illustrating a problem-solving strategy and then attempted to solve analogous problems. Older but not younger toddlers extracted the problem-solving strategy depicted in the video…
NASA Technical Reports Server (NTRS)
Hartfield, Roy
1996-01-01
Raman scattering is a powerful technique for quantitatively probing high temperature and high speed flows. However, this technique has typically been limited to clean hydrogen flames because of the broadband fluorescence interference which occurs in hydrocarbon flames. Fluorescence can also interfere with the Raman signal in clean hydrogen flames when broadband UV lasers are used as the scattering source. A solution to this problem has been demonstrated. The solution to the fluorescence interference lies in the fact that the vibrational Q-branch Raman signal is highly polarized for 90 deg. signal collection and the fluorescence background is essentially unpolarized. Two basic schemes are available for separating the Raman from the background. One scheme involves using a polarized laser and collecting a signal with both horizontal and vertical laser polarizations separately. The signal with the vertical polarization will contain both the Raman and the fluorescence while the signal with the horizontal polarization will contain only the fluorescence. The second scheme involves polarization discrimination on the collection side of the optical setup. For vertical laser polarization, the scattered Q-branch Raman signal will be vertically polarized; hence the two polarizations can be collected separately and the difference between the two is the Raman signal. This approach has been used for the work found herein and has the advantage of allowing the data to be collected from the same laser shot(s). This makes it possible to collect quantitative Raman data with single shot resolution in conditions where interference cannot otherwise be eliminated.
The crack and wedging problem for an orthotropic strip
NASA Technical Reports Server (NTRS)
Cinar, A.; Erdogan, F.
1983-01-01
The plane elasticity problem for an orthotropic strip containing a crack parallel to its boundaries is considered. The problem is formulated under general mixed mode loading conditions. The stress intensity factors depend on two dimensionless orthotropic constants only. For the crack problem the results are given for a single crack and two collinear cracks. The calculated results show that of the two orthotropic constants the influence of the stiffness ratio on the stress intensity factors is much more significant than that of the shear parameter. The problem of loading the strip by a rigid rectangular lengths continuous contact is maintained along the wedge strip interface; at a certain critical wedge length the separation starts at the midsection of the wedge, and the length of the separation zone increases rapidly with increasing wedge length. Previously announced in STAR as N82-26707
Distinguishing one from many using super-resolution compressive sensing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anthony, Stephen Michael; Mulcahy-Stanislawczyk, Johnathan; Shields, Eric A.
We present that distinguishing whether a signal corresponds to a single source or a limited number of highly overlapping point spread functions (PSFs) is a ubiquitous problem across all imaging scales, whether detecting receptor-ligand interactions in cells or detecting binary stars. Super-resolution imaging based upon compressed sensing exploits the relative sparseness of the point sources to successfully resolve sources which may be separated by much less than the Rayleigh criterion. However, as a solution to an underdetermined system of linear equations, compressive sensing requires the imposition of constraints which may not always be valid. One typical constraint is that themore » PSF is known. However, the PSF of the actual optical system may reflect aberrations not present in the theoretical ideal optical system. Even when the optics are well characterized, the actual PSF may reflect factors such as non-uniform emission of the point source (e.g. fluorophore dipole emission). As such, the actual PSF may differ from the PSF used as a constraint. Similarly, multiple different regularization constraints have been suggested including the l 1-norm, l 0-norm, and generalized Gaussian Markov random fields (GGMRFs), each of which imposes a different constraint. Other important factors include the signal-to-noise ratio of the point sources and whether the point sources vary in intensity. In this work, we explore how these factors influence super-resolution image recovery robustness, determining the sensitivity and specificity. In conclusion, we determine an approach that is more robust to the types of PSF errors present in actual optical systems.« less
Distinguishing one from many using super-resolution compressive sensing
Anthony, Stephen Michael; Mulcahy-Stanislawczyk, Johnathan; Shields, Eric A.; ...
2018-05-14
We present that distinguishing whether a signal corresponds to a single source or a limited number of highly overlapping point spread functions (PSFs) is a ubiquitous problem across all imaging scales, whether detecting receptor-ligand interactions in cells or detecting binary stars. Super-resolution imaging based upon compressed sensing exploits the relative sparseness of the point sources to successfully resolve sources which may be separated by much less than the Rayleigh criterion. However, as a solution to an underdetermined system of linear equations, compressive sensing requires the imposition of constraints which may not always be valid. One typical constraint is that themore » PSF is known. However, the PSF of the actual optical system may reflect aberrations not present in the theoretical ideal optical system. Even when the optics are well characterized, the actual PSF may reflect factors such as non-uniform emission of the point source (e.g. fluorophore dipole emission). As such, the actual PSF may differ from the PSF used as a constraint. Similarly, multiple different regularization constraints have been suggested including the l 1-norm, l 0-norm, and generalized Gaussian Markov random fields (GGMRFs), each of which imposes a different constraint. Other important factors include the signal-to-noise ratio of the point sources and whether the point sources vary in intensity. In this work, we explore how these factors influence super-resolution image recovery robustness, determining the sensitivity and specificity. In conclusion, we determine an approach that is more robust to the types of PSF errors present in actual optical systems.« less
Shelley, Jacob T; Hieftje, Gary M
2010-04-01
The recent development of ambient desorption/ionization mass spectrometry (ADI-MS) has enabled fast, simple analysis of many different sample types. The ADI-MS sources have numerous advantages, including little or no required sample pre-treatment, simple mass spectra, and direct analysis of solids and liquids. However, problems of competitive ionization and limited fragmentation require sample-constituent separation, high mass accuracy, and/or tandem mass spectrometry (MS/MS) to detect, identify, and quantify unknown analytes. To maintain the inherent high throughput of ADI-MS, it is essential for the ion source/mass analyzer combination to measure fast transient signals and provide structural information. In the current study, the flowing atmospheric-pressure afterglow (FAPA) ionization source is coupled with a time-of-flight mass spectrometer (TOF-MS) to analyze fast transient signals (<500 ms FWHM). It was found that gas chromatography (GC) coupled with the FAPA source resulted in a reproducible (<5% RSD) and sensitive (detection limits of <6 fmol for a mixture of herbicides) system with analysis times of ca. 5 min. Introducing analytes to the FAPA in a transient was also shown to significantly reduce matrix effects caused by competitive ionization by minimizing the number and amount of constituents introduced into the ionization source. Additionally, MS/MS with FAPA-TOF-MS, enabling analyte identification, was performed via first-stage collision-induced dissociation (CID). Lastly, molecular and structural information was obtained across a fast transient peak by modulating the conditions that caused the first-stage CID.
High- and low-level hierarchical classification algorithm based on source separation process
NASA Astrophysics Data System (ADS)
Loghmari, Mohamed Anis; Karray, Emna; Naceur, Mohamed Saber
2016-10-01
High-dimensional data applications have earned great attention in recent years. We focus on remote sensing data analysis on high-dimensional space like hyperspectral data. From a methodological viewpoint, remote sensing data analysis is not a trivial task. Its complexity is caused by many factors, such as large spectral or spatial variability as well as the curse of dimensionality. The latter describes the problem of data sparseness. In this particular ill-posed problem, a reliable classification approach requires appropriate modeling of the classification process. The proposed approach is based on a hierarchical clustering algorithm in order to deal with remote sensing data in high-dimensional space. Indeed, one obvious method to perform dimensionality reduction is to use the independent component analysis process as a preprocessing step. The first particularity of our method is the special structure of its cluster tree. Most of the hierarchical algorithms associate leaves to individual clusters, and start from a large number of individual classes equal to the number of pixels; however, in our approach, leaves are associated with the most relevant sources which are represented according to mutually independent axes to specifically represent some land covers associated with a limited number of clusters. These sources contribute to the refinement of the clustering by providing complementary rather than redundant information. The second particularity of our approach is that at each level of the cluster tree, we combine both a high-level divisive clustering and a low-level agglomerative clustering. This approach reduces the computational cost since the high-level divisive clustering is controlled by a simple Boolean operator, and optimizes the clustering results since the low-level agglomerative clustering is guided by the most relevant independent sources. Then at each new step we obtain a new finer partition that will participate in the clustering process to enhance semantic capabilities and give good identification rates.
NASA Astrophysics Data System (ADS)
Prasad, S.; Bruce, L. M.
2007-04-01
There is a growing interest in using multiple sources for automatic target recognition (ATR) applications. One approach is to take multiple, independent observations of a phenomenon and perform a feature level or a decision level fusion for ATR. This paper proposes a method to utilize these types of multi-source fusion techniques to exploit hyperspectral data when only a small number of training pixels are available. Conventional hyperspectral image based ATR techniques project the high dimensional reflectance signature onto a lower dimensional subspace using techniques such as Principal Components Analysis (PCA), Fisher's linear discriminant analysis (LDA), subspace LDA and stepwise LDA. While some of these techniques attempt to solve the curse of dimensionality, or small sample size problem, these are not necessarily optimal projections. In this paper, we present a divide and conquer approach to address the small sample size problem. The hyperspectral space is partitioned into contiguous subspaces such that the discriminative information within each subspace is maximized, and the statistical dependence between subspaces is minimized. We then treat each subspace as a separate source in a multi-source multi-classifier setup and test various decision fusion schemes to determine their efficacy. Unlike previous approaches which use correlation between variables for band grouping, we study the efficacy of higher order statistical information (using average mutual information) for a bottom up band grouping. We also propose a confidence measure based decision fusion technique, where the weights associated with various classifiers are based on their confidence in recognizing the training data. To this end, training accuracies of all classifiers are used for weight assignment in the fusion process of test pixels. The proposed methods are tested using hyperspectral data with known ground truth, such that the efficacy can be quantitatively measured in terms of target recognition accuracies.
Ma, Jing; Hipel, Keith W; Hanson, Mark L
2017-12-21
A comprehensive evaluation of public participation in rural domestic waste (RDW) source-separated collection in China was carried out within a social-dimension framework, specifically in terms of public perception, awareness, attitude, and willingness to pay for RDW management. The evaluation was based on a case study conducted in Guilin, Guangxi Zhuang Autonomous Region, China, which is a representative of most inland areas of the country with a GDP around the national average. It was found that unlike urban residents, rural residents maintained a high rate of recycling, but in a spontaneous manner; they paid more attention to issues closely related to their daily lives, but less attention to those at the general level; their awareness of RDW source-separated collection was low and different age groups showed significantly different preferences regarding the sources of knowledge acquirement. Among potential information sources, village committees played a very important role in knowledge dissemination; for the respondents' pro-environmental attitudes, the influencing factor of "lack of legislation/policy" was considered to be significant; mandatory charges for waste collection and disposal had a high rate of acceptance among rural residents; and high monthly incomes had a positive correlation with both public pro-environmental attitudes and public willingness to pay for extra charges levied by RDW management. These observations imply that, for decision-makers in the short term, implementing mandatory RDW source-separated collection programs with enforced guidelines and economic compensation is more effective, while in the long run, promoting pro-environmental education to rural residents is more important.
Casas Ferreira, Ana María; Moreno Cordero, Bernardo; Pérez Pavón, José Luis
2017-02-01
Sometimes it is not necessary to separate the individual compounds of a sample to resolve an analytical problem, it is enough to obtain a signal profile of the sample formed by all the components integrating it. Within this strategy, electronic noses based on the direct coupling of a headspace sampler with a mass spectrometer (HS-MS) have been proposed. Nevertheless, this coupling is not suitable for the analysis of non-volatile compounds. In order to propose an alternative to HS-MS determinations for non-volatile compounds, here we present the first 'proof of concept' use of the direct coupling of microextraction by packed sorbents (MEPS) to a mass spectrometer device using an electron ionization (EI) and a single quadrupole as ionization source and analyzer, respectively. As target compounds, a set of analytes with different physic-chemical properties were evaluated (2-ethyl-1-hexanol, styrene, 2-heptanone, among others). The use of MEPS extraction present many advantages, such as it is fast, simple, easy to automate and requires small volumes of sample and organic solvents. Moreover, MEPS cartridges are re-usable as samples can be extracted more than 100 times using the same syringe. In order to introduce into the system all the elution volume from the MEPS extraction, a programmable temperature vaporizer (PTV) is proposed as the injector device. Results obtained with the proposed methodology (MEPS-PTV/MS) were compared with the ones obtained based on the separative scheme, i.e. using gas chromatography separation (MEPS-PTV-GC/MS), and both methods provided similar results. Limits of detection were found to be between 3.26 and 146.6μgL -1 in the non-separative scheme and between 0.02 and 1.72μgL -1 when the separative methodology was used. Repeatability and reproducibility were evaluated with values below 17% in all cases. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Reiter, D. T.; Rodi, W. L.
2015-12-01
Constructing 3D Earth models through the joint inversion of large geophysical data sets presents numerous theoretical and practical challenges, especially when diverse types of data and model parameters are involved. Among the challenges are the computational complexity associated with large data and model vectors and the need to unify differing model parameterizations, forward modeling methods and regularization schemes within a common inversion framework. The challenges can be addressed in part by decomposing the inverse problem into smaller, simpler inverse problems that can be solved separately, providing one knows how to merge the separate inversion results into an optimal solution of the full problem. We have formulated an approach to the decomposition of large inverse problems based on the augmented Lagrangian technique from optimization theory. As commonly done, we define a solution to the full inverse problem as the Earth model minimizing an objective function motivated, for example, by a Bayesian inference formulation. Our decomposition approach recasts the minimization problem equivalently as the minimization of component objective functions, corresponding to specified data subsets, subject to the constraints that the minimizing models be equal. A standard optimization algorithm solves the resulting constrained minimization problems by alternating between the separate solution of the component problems and the updating of Lagrange multipliers that serve to steer the individual solution models toward a common model solving the full problem. We are applying our inversion method to the reconstruction of the·crust and upper-mantle seismic velocity structure across Eurasia.· Data for the inversion comprise a large set of P and S body-wave travel times·and fundamental and first-higher mode Rayleigh-wave group velocities.
NASA Astrophysics Data System (ADS)
Zhou, Kenneth J.; Chen, Jun
2014-03-01
The fluorophores of malignant human breast cells change their compositions that may be exposed in the fluorescence spectroscopy and blind source separation method. The content of the fluorophores mixture media such as tryptophan, collagen, elastin, NADH, and flavin were varied according to the cancer development. The native fluorescence spectra of these key fluorophores mixture media excited by the selective excitation wavelengths of 300 nm and 340 nm were analyzed using a blind source separation method: Nonnegative Matrix Factorization (NMF). The results show that the contribution from tryptophan, NADH and flavin to the fluorescence spectra of the mixture media is proportional to the content of each fluorophore. These data present a possibility that native fluorescence spectra decomposed by NMF can be used as potential native biomarkers for cancer detection evaluation of the cancer.
Detection of Partial Discharge Sources Using UHF Sensors and Blind Signal Separation
Boya, Carlos; Parrado-Hernández, Emilio
2017-01-01
The measurement of the emitted electromagnetic energy in the UHF region of the spectrum allows the detection of partial discharges and, thus, the on-line monitoring of the condition of the insulation of electrical equipment. Unfortunately, determining the affected asset is difficult when there are several simultaneous insulation defects. This paper proposes the use of an independent component analysis (ICA) algorithm to separate the signals coming from different partial discharge (PD) sources. The performance of the algorithm has been tested using UHF signals generated by test objects. The results are validated by two automatic classification techniques: support vector machines and similarity with class mean. Both methods corroborate the suitability of the algorithm to separate the signals emitted by each PD source even when they are generated by the same type of insulation defect. PMID:29140267
Source-separated urine opens golden opportunities for microbial electrochemical technologies.
Ledezma, Pablo; Kuntke, Philipp; Buisman, Cees J N; Keller, Jürg; Freguia, Stefano
2015-04-01
The food security of a booming global population demands a continuous and sustainable supply of fertilisers. Their current once-through use [especially of the macronutrients nitrogen (N), phosphorus (P), and potassium (K)] requires a paradigm shift towards recovery and reuse. In the case of source-separated urine, efficient recovery could supply 20% of current macronutrient usage and remove 50-80% of nutrients present in wastewater. However, suitable technology options are needed to allow nutrients to be separated from urine close to the source. Thus far none of the proposed solutions has been widely implemented due to intrinsic limitations. Microbial electrochemical technologies (METs) have proved to be technically and economically viable for N recovery from urine, opening the path for novel decentralised systems focused on nutrient recovery and reuse. Copyright © 2015 Elsevier Ltd. All rights reserved.
Oxygen separation from air using zirconia solid electrolyte membranes
NASA Technical Reports Server (NTRS)
Suitor, J. W.; Marner, W. J.; Schroeder, J. E.; Losey, R. W.; Ferrall, J. F.
1988-01-01
Air separation using a zirconia solid electrolyte membrane is a possible alternative source of oxygen. The process of zirconia oxygen separation is reviewed, and an oxygen plant concept using such separation is described. Potential cell designs, stack designs, and testing procedures are examined. Fabrication of the materials used in a zirconia module as well as distribution plate design and fabrication are examined.
Single-sensor multispeaker listening with acoustic metamaterials
Xie, Yangbo; Tsai, Tsung-Han; Konneker, Adam; Popa, Bogdan-Ioan; Brady, David J.; Cummer, Steven A.
2015-01-01
Designing a “cocktail party listener” that functionally mimics the selective perception of a human auditory system has been pursued over the past decades. By exploiting acoustic metamaterials and compressive sensing, we present here a single-sensor listening device that separates simultaneous overlapping sounds from different sources. The device with a compact array of resonant metamaterials is demonstrated to distinguish three overlapping and independent sources with 96.67% correct audio recognition. Segregation of the audio signals is achieved using physical layer encoding without relying on source characteristics. This hardware approach to multichannel source separation can be applied to robust speech recognition and hearing aids and may be extended to other acoustic imaging and sensing applications. PMID:26261314
Method for sequencing DNA base pairs
Sessler, A.M.; Dawson, J.
1993-12-14
The base pairs of a DNA structure are sequenced with the use of a scanning tunneling microscope (STM). The DNA structure is scanned by the STM probe tip, and, as it is being scanned, the DNA structure is separately subjected to a sequence of infrared radiation from four different sources, each source being selected to preferentially excite one of the four different bases in the DNA structure. Each particular base being scanned is subjected to such sequence of infrared radiation from the four different sources as that particular base is being scanned. The DNA structure as a whole is separately imaged for each subjection thereof to radiation from one only of each source. 6 figures.
A posteriori error estimates in voice source recovery
NASA Astrophysics Data System (ADS)
Leonov, A. S.; Sorokin, V. N.
2017-12-01
The inverse problem of voice source pulse recovery from a segment of a speech signal is under consideration. A special mathematical model is used for the solution that relates these quantities. A variational method of solving inverse problem of voice source recovery for a new parametric class of sources, that is for piecewise-linear sources (PWL-sources), is proposed. Also, a technique for a posteriori numerical error estimation for obtained solutions is presented. A computer study of the adequacy of adopted speech production model with PWL-sources is performed in solving the inverse problems for various types of voice signals, as well as corresponding study of a posteriori error estimates. Numerical experiments for speech signals show satisfactory properties of proposed a posteriori error estimates, which represent the upper bounds of possible errors in solving the inverse problem. The estimate of the most probable error in determining the source-pulse shapes is about 7-8% for the investigated speech material. It is noted that a posteriori error estimates can be used as a criterion of the quality for obtained voice source pulses in application to speaker recognition.
A deterministic Lagrangian particle separation-based method for advective-diffusion problems
NASA Astrophysics Data System (ADS)
Wong, Ken T. M.; Lee, Joseph H. W.; Choi, K. W.
2008-12-01
A simple and robust Lagrangian particle scheme is proposed to solve the advective-diffusion transport problem. The scheme is based on relative diffusion concepts and simulates diffusion by regulating particle separation. This new approach generates a deterministic result and requires far less number of particles than the random walk method. For the advection process, particles are simply moved according to their velocity. The general scheme is mass conservative and is free from numerical diffusion. It can be applied to a wide variety of advective-diffusion problems, but is particularly suited for ecological and water quality modelling when definition of particle attributes (e.g., cell status for modelling algal blooms or red tides) is a necessity. The basic derivation, numerical stability and practical implementation of the NEighborhood Separation Technique (NEST) are presented. The accuracy of the method is demonstrated through a series of test cases which embrace realistic features of coastal environmental transport problems. Two field application examples on the tidal flushing of a fish farm and the dynamics of vertically migrating marine algae are also presented.
Rule-based interface generation on mobile devices for structured documentation.
Kock, Ann-Kristin; Andersen, Björn; Handels, Heinz; Ingenerf, Josef
2014-01-01
In many software systems to date, interactive graphical user interfaces (GUIs) are represented implicitly in the source code, together with the application logic. Hence, the re-use, development, and modification of these interfaces is often very laborious. Flexible adjustments of GUIs for various platforms and devices as well as individual user preferences are furthermore difficult to realize. These problems motivate a software-based separation of content and GUI models on the one hand, and application logic on the other. In this project, a software solution for structured reporting on mobile devices is developed. Clinical content archetypes developed in a previous project serve as the content model while the Android SDK provides the GUI model. The necessary bindings between the models are specified using the Jess Rule Language.
Talarowska, Monika; Galecki, Piotr
2016-01-01
Separating emotions from cognition seems impossible in everyday experiences of a human being. Emotional processes have an impact on the ability of planning and solving problems, or decision-making skills. They are a valuable source of information about ourselves, our partners in interactions and the surrounding world. Recent years have shown that axial symptoms of depression are caused by emotion regulation disorders, dysfunctions in the reward system and deficits of cognitive processes. There is a few studies concerning a link between emotional and inflammatory processes in depression. The aim of this article is to present results of contemporary research studies over mutual connections between social cognition, cognitive processes and inflammatory factors significant for the aetiology of recurrent depressive disorders, with particular reference to the role of kynurenine pathways.
Non-linear Parameter Estimates from Non-stationary MEG Data
Martínez-Vargas, Juan D.; López, Jose D.; Baker, Adam; Castellanos-Dominguez, German; Woolrich, Mark W.; Barnes, Gareth
2016-01-01
We demonstrate a method to estimate key electrophysiological parameters from resting state data. In this paper, we focus on the estimation of head-position parameters. The recovery of these parameters is especially challenging as they are non-linearly related to the measured field. In order to do this we use an empirical Bayesian scheme to estimate the cortical current distribution due to a range of laterally shifted head-models. We compare different methods of approaching this problem from the division of M/EEG data into stationary sections and performing separate source inversions, to explaining all of the M/EEG data with a single inversion. We demonstrate this through estimation of head position in both simulated and empirical resting state MEG data collected using a head-cast. PMID:27597815
Recent progress in multi-electrode spike sorting methods.
Lefebvre, Baptiste; Yger, Pierre; Marre, Olivier
2016-11-01
In recent years, arrays of extracellular electrodes have been developed and manufactured to record simultaneously from hundreds of electrodes packed with a high density. These recordings should allow neuroscientists to reconstruct the individual activity of the neurons spiking in the vicinity of these electrodes, with the help of signal processing algorithms. Algorithms need to solve a source separation problem, also known as spike sorting. However, these new devices challenge the classical way to do spike sorting. Here we review different methods that have been developed to sort spikes from these large-scale recordings. We describe the common properties of these algorithms, as well as their main differences. Finally, we outline the issues that remain to be solved by future spike sorting algorithms. Copyright © 2017 Elsevier Ltd. All rights reserved.
Hierarchical Ensemble Methods for Protein Function Prediction
2014-01-01
Protein function prediction is a complex multiclass multilabel classification problem, characterized by multiple issues such as the incompleteness of the available annotations, the integration of multiple sources of high dimensional biomolecular data, the unbalance of several functional classes, and the difficulty of univocally determining negative examples. Moreover, the hierarchical relationships between functional classes that characterize both the Gene Ontology and FunCat taxonomies motivate the development of hierarchy-aware prediction methods that showed significantly better performances than hierarchical-unaware “flat” prediction methods. In this paper, we provide a comprehensive review of hierarchical methods for protein function prediction based on ensembles of learning machines. According to this general approach, a separate learning machine is trained to learn a specific functional term and then the resulting predictions are assembled in a “consensus” ensemble decision, taking into account the hierarchical relationships between classes. The main hierarchical ensemble methods proposed in the literature are discussed in the context of existing computational methods for protein function prediction, highlighting their characteristics, advantages, and limitations. Open problems of this exciting research area of computational biology are finally considered, outlining novel perspectives for future research. PMID:25937954
Main devices design of submarine oil-water separation system
NASA Astrophysics Data System (ADS)
Cai, Wen-Bin; Liu, Bo-Hong
2017-11-01
In the process of offshore oil production, in order to thoroughly separate oil from produced fluid, solve the environment problem caused by oily sewage, and improve the economic benefit of offshore drilling, from the perspective of new oil-water separation, a set of submarine oil-water separation devices were designed through adsorption and desorption mechanism of the polymer materials for crude oil in this paper. The paper introduces the basic structure of gas-solid separation device, periodic separation device and adsorption device, and proves the rationality and feasibility of this device.
Nickel-hydrogen separator development
NASA Technical Reports Server (NTRS)
Gonzalez-Sanabria, O. D.
1986-01-01
The separator technology is a critical element in the nickel-hydrogen (Ni-H2) systems. Previous research and development work carried out at NASA Lewis Research Center has determined that separators made from zirconium oxide (ZrO2) and potassium titanate (PKT) fibers will function satisfactorily in Ni-H2 cells without exhibiting the problems associated with the asbestos separators. These separators and their characteristics were previously discussed. A program was established to transfer the separator technology into a commercial production line. A detailed plan of this program will be presented and the preliminary results will be discussed.
Notes on the Prediction of Shock-induced Boundary-layer Separation
NASA Technical Reports Server (NTRS)
Lange, Roy H.
1953-01-01
The present status of available information relative to the prediction of shock-induced boundary-layer separation is discussed. Experimental results showing the effects of Reynolds number and Mach number on the separation of both laminar and turbulent boundary layer are given and compared with available methods for predicting separation. The flow phenomena associated with separation caused by forward-facing steps, wedges, and incident shock waves are discussed. Applications of the flat-plate data to problems of separation on spoilers, diffusers, and scoop inlets are indicated for turbulent boundary layers.
Effect of emulsifier and viscosity on oil separation in ready-to-use therapeutic food
USDA-ARS?s Scientific Manuscript database
Oil separation is a common food quality problem in ready-to-use therapeutic food (RUTF), the shelf-stable, peanut-based food used to treat severe acute malnutrition in home settings. Our objective was to evaluate the effect on oil separation of three emulsifiers at different concentrations in RUTF. ...
Emotional Separation and Detachment as Two Distinct Dimensions of Parent-Adolescent Relationships
ERIC Educational Resources Information Center
Ingoglia, Sonia; Lo Coco, Alida; Liga, Francesca; Lo Cricchio, Maria Grazia
2011-01-01
The study examined adolescents' emotional separation and detachment from parents, analyzing their relations with connectedness and agency, with some aspects of self-other boundary regulation and with problem behavior. The participants were 331 Italian adolescents, aged from 16 to 19 years (mean age = 17.40, SD = 1.14). Separation and detachment…
Proceedings of the First Hanford Separation Science Workshop
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1993-05-01
The First Hanford Separation Science Workshop, sponsored by PNL had two main objectives: (1) assess the applicability of available separation methods for environmental restoration and for minimization, recovery, and recycle of mixed and radioactive mutes; and (2) identify research needs that must be addressed to create new or improved technologies. The information gathered at this workshop not only applies to Hanford but could be adapted to DOE facilities throughout the nation as well. These proceedings have been divided into three components: Background and Introduction to the Problem gives an overview of the history of the Site and the cleanup mission,more » including waste management operations, past disposal practices, current operations, and plans for the future. Also included in this section is a discussion of specific problems concerning the chemistry of the Hanford wastes. Separation Methodologies contains the papers given at the workshop by national experts in the field of separation science regarding the state-of-the-art of various methods and their applicability/adaptability to Hanford. Research Needs identifies further research areas developed in working group sessions. Individual papers are indexed separately.« less
Investigation of failure to separate an Inconel 718 frangible nut
NASA Technical Reports Server (NTRS)
Hoffman, William C., III; Hohmann, Carl
1994-01-01
The 2.5-inch frangible nut is used in two places to attach the Space Shuttle Orbiter to the External Tank. It must be capable of sustaining structural loads and must also separate into two pieces upon command. Structural load capability is verified by proof loading each flight nut, while ability to separate is verified on a sample of a production lot. Production lots of frangible nuts beginning in 1987 experienced an inability to reliably separate using one of two redundant explosive boosters. The problems were identified in lot acceptance tests, and the cause of failure has been attributed to differences in the response of the Inconel 718. Subsequent tests performed on the frangible nuts resulted in design modifications to the nuts along with redesign of the explosive booster to reliably separate the frangible nut. The problem history along with the design modifications to both the explosive booster and frangible nut are discussed in this paper. Implications of this failure experience impact any pyrotechnic separation system involving fracture of materials with respect to design margin control and lot acceptance testing.
Optimal SVM parameter selection for non-separable and unbalanced datasets.
Jiang, Peng; Missoum, Samy; Chen, Zhao
2014-10-01
This article presents a study of three validation metrics used for the selection of optimal parameters of a support vector machine (SVM) classifier in the case of non-separable and unbalanced datasets. This situation is often encountered when the data is obtained experimentally or clinically. The three metrics selected in this work are the area under the ROC curve (AUC), accuracy, and balanced accuracy. These validation metrics are tested using computational data only, which enables the creation of fully separable sets of data. This way, non-separable datasets, representative of a real-world problem, can be created by projection onto a lower dimensional sub-space. The knowledge of the separable dataset, unknown in real-world problems, provides a reference to compare the three validation metrics using a quantity referred to as the "weighted likelihood". As an application example, the study investigates a classification model for hip fracture prediction. The data is obtained from a parameterized finite element model of a femur. The performance of the various validation metrics is studied for several levels of separability, ratios of unbalance, and training set sizes.
NASA Astrophysics Data System (ADS)
Liebert, Adam; Sawosz, Piotr; Milej, Daniel; Kacprzak, Michał; Weigl, Wojciech; Botwicz, Marcin; MaCzewska, Joanna; Fronczewska, Katarzyna; Mayzner-Zawadzka, Ewa; Królicki, Leszek; Maniewski, Roman
2011-04-01
Recently, it was shown in measurements carried out on humans that time-resolved near-infrared reflectometry and fluorescence spectroscopy may allow for discrimination of information originating directly from the brain avoiding influence of contaminating signals related to the perfusion of extracerebral tissues. We report on continuation of these studies, showing that the near-infrared light can be detected noninvasively on the surface of the tissue at large interoptode distance. A multichannel time-resolved optical monitoring system was constructed for measurements of diffuse reflectance in optically turbid medium at very large source-detector separation up to 9 cm. The instrument was applied during intravenous injection of indocyanine green and the distributions of times of flight of photons were successfully acquired showing inflow and washout of the dye in the tissue. Time courses of the statistical moments of distributions of times of flight of photons are presented and compared to the results obtained simultaneously at shorter source-detector separations (3, 4, and 5 cm). We show in a series of experiments carried out on physical phantom and healthy volunteers that the time-resolved data acquisition in combination with very large source-detector separation may allow one to improve depth selectivity of perfusion assessment in the brain.
Separation Anxiety (For Parents)
... older child, there might be another problem, like bullying or abuse. Separation anxiety is different from the normal feelings older kids have when they don't want a parent to leave (which can usually be overcome if ...
Computational inverse methods of heat source in fatigue damage problems
NASA Astrophysics Data System (ADS)
Chen, Aizhou; Li, Yuan; Yan, Bo
2018-04-01
Fatigue dissipation energy is the research focus in field of fatigue damage at present. It is a new idea to solve the problem of calculating fatigue dissipation energy by introducing inverse method of heat source into parameter identification of fatigue dissipation energy model. This paper introduces the research advances on computational inverse method of heat source and regularization technique to solve inverse problem, as well as the existing heat source solution method in fatigue process, prospects inverse method of heat source applying in fatigue damage field, lays the foundation for further improving the effectiveness of fatigue dissipation energy rapid prediction.
Multipathing Via Three Parameter Common Image Gathers (CIGs) From Reverse Time Migration
NASA Astrophysics Data System (ADS)
Ostadhassan, M.; Zhang, X.
2015-12-01
A noteworthy problem for seismic exploration is effects of multipathing (both wanted or unwanted) caused by subsurface complex structures. We show that reverse time migration (RTM) combined with a unified, systematic three parameter framework that flexibly handles multipathing can be accomplished by adding one more dimension (image time) to the angle domain common image gather (ADCIG) data. RTM is widely used to generate prestack depth migration images. When using the cross-correlation image condition in 2D prestack migration in RTM, the usual practice is to sum over all the migration time steps. Thus all possible wave types and paths automatically contribute to the resulting image, including destructive wave interferences, phase shifts, and other distortions. One reason is that multipath (prismatic wave) contributions are not properly sorted and mapped in the ADCIGs. Also, multipath arrivals usually have different instantaneous attributes (amplitude, phase and frequency), and if not separated, the amplitudes and phases in the final prestack image will not stack coherently across sources. A prismatic path satisfies an image time for it's unique path; Cavalca and Lailly (2005) show that RTM images with multipaths can provide more complete target information in complex geology, as multipaths usually have different incident angles and amplitudes compared to primary reflections. If the image time slices within a cross-correlation common-source migration are saved for each image time, this three-parameter (incident angle, depth, image time) volume can be post-processed to generate separate, or composite, images of any desired subset of the migrated data. Images can by displayed for primary contributions, any combination of primary and multipath contributions (with or without artifacts), or various projections, including the conventional ADCIG (angle vs depth) plane. Examples show that signal from the true structure can be separated from artifacts caused by multiple arrivals when they have different image times. This improves the quality of images and benefits migration velocity analysis (MVA) and amplitude variation with angle (AVA) inversion.
Brain functional BOLD perturbation modelling for forward fMRI and inverse mapping
Robinson, Jennifer; Calhoun, Vince
2018-01-01
Purpose To computationally separate dynamic brain functional BOLD responses from static background in a brain functional activity for forward fMRI signal analysis and inverse mapping. Methods A brain functional activity is represented in terms of magnetic source by a perturbation model: χ = χ0 +δχ, with δχ for BOLD magnetic perturbations and χ0 for background. A brain fMRI experiment produces a timeseries of complex-valued images (T2* images), whereby we extract the BOLD phase signals (denoted by δP) by a complex division. By solving an inverse problem, we reconstruct the BOLD δχ dataset from the δP dataset, and the brain χ distribution from a (unwrapped) T2* phase image. Given a 4D dataset of task BOLD fMRI, we implement brain functional mapping by temporal correlation analysis. Results Through a high-field (7T) and high-resolution (0.5mm in plane) task fMRI experiment, we demonstrated in detail the BOLD perturbation model for fMRI phase signal separation (P + δP) and reconstructing intrinsic brain magnetic source (χ and δχ). We also provided to a low-field (3T) and low-resolution (2mm) task fMRI experiment in support of single-subject fMRI study. Our experiments show that the δχ-depicted functional map reveals bidirectional BOLD χ perturbations during the task performance. Conclusions The BOLD perturbation model allows us to separate fMRI phase signal (by complex division) and to perform inverse mapping for pure BOLD δχ reconstruction for intrinsic functional χ mapping. The full brain χ reconstruction (from unwrapped fMRI phase) provides a new brain tissue image that allows to scrutinize the brain tissue idiosyncrasy for the pure BOLD δχ response through an automatic function/structure co-localization. PMID:29351339
Limitations and applications of ICA for surface electromyogram.
Djuwari, D; Kumar, D K; Naik, G R; Arjunan, S P; Palaniswami, M
2006-09-01
Surface electromyogram (SEMG) has numerous applications, but the presence of artefacts and noise, especially at low level of muscle activity make the recordings unreliable. Spectral and temporal overlap can make the removal of artefacts and noise, or separation of relevant signals from other bioelectric signals extremely difficult. Individual muscles may be considered as independent at the local level and this makes an argument for separating the signals using independent component analysis (ICA). In the recent past, due to the easy availability of ICA tools, numbers of researchers have attempted to use ICA for this application. This paper reports research conducted to evaluate the use of ICA for the separation of muscle activity and removal of the artefacts from SEMG. It discusses some of the conditions that could affect the reliability of the separation and evaluates issues related to the properties of the signals and number of sources. The paper also identifies the lack of suitable measure of quality of separation for bioelectric signals and it recommends and tests a more robust measure of separation. The paper also reports tests using Zibulevsky's technique of temporal plotting to identify number of independent sources in SEMG recordings. The theoretical analysis and experimental results demonstrate that ICA is suitable for SEMG signals. The results identify the unsuitability of ICA when the number of sources is greater than the number of recording channels. The results also demonstrate the limitations of such applications due to the inability of the system to identify the correct order and magnitude of the signals. The paper determines the suitability of the use of error measure using simulated mixing matrix and the estimated unmixing matrix as a means identifying the quality of separation of the output. The work demonstrates that even at extremely low level of muscle contraction, and with filtering using wavelets and band pass filters, it is not possible to get the data sparse enough to identify number of independent sources using Zibulevs.ky's technique.
NASA Astrophysics Data System (ADS)
Wason, H.; Herrmann, F. J.; Kumar, R.
2016-12-01
Current efforts towards dense shot (or receiver) sampling and full azimuthal coverage to produce high resolution images have led to the deployment of multiple source vessels (or streamers) across marine survey areas. Densely sampled marine seismic data acquisition, however, is expensive, and hence necessitates the adoption of sampling schemes that save acquisition costs and time. Compressed sensing is a sampling paradigm that aims to reconstruct a signal--that is sparse or compressible in some transform domain--from relatively fewer measurements than required by the Nyquist sampling criteria. Leveraging ideas from the field of compressed sensing, we show how marine seismic acquisition can be setup as a compressed sensing problem. A step ahead from multi-source seismic acquisition is simultaneous source acquisition--an emerging technology that is stimulating both geophysical research and commercial efforts--where multiple source arrays/vessels fire shots simultaneously resulting in better coverage in marine surveys. Following the design principles of compressed sensing, we propose a pragmatic simultaneous time-jittered time-compressed marine acquisition scheme where single or multiple source vessels sail across an ocean-bottom array firing airguns at jittered times and source locations, resulting in better spatial sampling and speedup acquisition. Our acquisition is low cost since our measurements are subsampled. Simultaneous source acquisition generates data with overlapping shot records, which need to be separated for further processing. We can significantly impact the reconstruction quality of conventional seismic data from jittered data and demonstrate successful recovery by sparsity promotion. In contrast to random (sub)sampling, acquisition via jittered (sub)sampling helps in controlling the maximum gap size, which is a practical requirement of wavefield reconstruction with localized sparsifying transforms. We illustrate our results with simulations of simultaneous time-jittered marine acquisition for 2D and 3D ocean-bottom cable survey.
Nickel-hydrogen separator development
NASA Technical Reports Server (NTRS)
Gonzalez-Sanabria, O. D.
1986-01-01
The separator technology is a critical element in the nickel-hydrogen (Ni-H2) systems. Previous research and development work carried out at NASA Lewis Research Center has determined that separators made from zirconium oxide (ZrO2) and potassium titanate (PKT) fibers will function satisfactorily in Ni-H2 cells without exhibiting the problems associated with the asbestos separators. A program has been established to transfer the separator technology into a commercial production line. A detailed plan of this program will be presented and the preliminary results will be discussed.
Johnson, R K; Frary, C
2001-10-01
As part of the 2000 Dietary Guidelines for Americans, the public is advised to choose beverages and foods to moderate their intake of sugars. The term sugars is conventionally used to describe the mono- and disaccharides. However, the Dietary Guidelines for Americans distinguish between added sugars and other sources of carbohydrates. The concept of added sugars provides consumers with useful information, especially if they are trying to limit excessive use of caloric sweeteners. Added sugars are defined as sugars that are eaten separately at the table or used as ingredients in processed or prepared foods. Consumption of added sugars has increased steadily as documented by both food supply data and nationwide food consumption survey data. The largest source of added sugars in the U.S. diet is nondiet soft drinks, accounting for one third of total intake. Diets high in sugars have been associated with various health problems, including dental caries, dyslipidemias, obesity, bone loss and fractures, and poor diet quality. Research gaps are identified.
Feeding ducks, bacterial chemotaxis, and the Gini index
NASA Astrophysics Data System (ADS)
Peaudecerf, François J.; Goldstein, Raymond E.
2015-08-01
Classic experiments on the distribution of ducks around separated food sources found consistency with the "ideal free" distribution in which the local population is proportional to the local supply rate. Motivated by this experiment and others, we examine the analogous problem in the microbial world: the distribution of chemotactic bacteria around multiple nearby food sources. In contrast to the optimization of uptake rate that may hold at the level of a single cell in a spatially varying nutrient field, nutrient consumption by a population of chemotactic cells will modify the nutrient field, and the uptake rate will generally vary throughout the population. Through a simple model we study the distribution of resource uptake in the presence of chemotaxis, consumption, and diffusion of both bacteria and nutrients. Borrowing from the field of theoretical economics, we explore how the Gini index can be used as a means to quantify the inequalities of uptake. The redistributive effect of chemotaxis can lead to a phenomenon we term "chemotactic levelling," and the influence of these results on population fitness are briefly considered.
Online EEG artifact removal for BCI applications by adaptive spatial filtering.
Guarnieri, Roberto; Marino, Marco; Barban, Federico; Ganzetti, Marco; Mantini, Dante
2018-06-28
The performance of brain computer interfaces (BCIs) based on electroencephalography (EEG) data strongly depends on the effective attenuation of artifacts that are mixed in the recordings. To address this problem, we have developed a novel online EEG artifact removal method for BCI applications, which combines blind source separation (BSS) and regression (REG) analysis. The BSS-REG method relies on the availability of a calibration dataset of limited duration for the initialization of a spatial filter using BSS. Online artifact removal is implemented by dynamically adjusting the spatial filter in the actual experiment, based on a linear regression technique. Our results showed that the BSS-REG method is capable of attenuating different kinds of artifacts, including ocular and muscular, while preserving true neural activity. Thanks to its low computational requirements, BSS-REG can be applied to low-density as well as high-density EEG data. We argue that BSS-REG may enable the development of novel BCI applications requiring high-density recordings, such as source-based neurofeedback and closed-loop neuromodulation. © 2018 IOP Publishing Ltd.
High-order perturbations of a spherical collapsing star
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brizuela, David; Martin-Garcia, Jose M.; Sperhake, Ulrich
2010-11-15
A formalism to deal with high-order perturbations of a general spherical background was developed in earlier work [D. Brizuela, J. M. Martin-Garcia, and G. A. Mena Marugan, Phys. Rev. D 74, 044039 (2006); D. Brizuela, J. M. Martin-Garcia, and G. A. Mena Marugan, Phys. Rev. D 76, 024004 (2007)]. In this paper, we apply it to the particular case of a perfect fluid background. We have expressed the perturbations of the energy-momentum tensor at any order in terms of the perturbed fluid's pressure, density, and velocity. In general, these expressions are not linear and have sources depending on lower-order perturbations.more » For the second-order case we make the explicit decomposition of these sources in tensor spherical harmonics. Then, a general procedure is given to evolve the perturbative equations of motions of the perfect fluid for any value of the harmonic label. Finally, with the problem of a spherical collapsing star in mind, we discuss the high-order perturbative matching conditions across a timelike surface, in particular, the surface separating the perfect fluid interior from the exterior vacuum.« less
NASA Astrophysics Data System (ADS)
Kryjevskaia, Mila; Stetzer, MacKenzie R.; Heron, Paula R. L.
2013-06-01
In a previous paper that focused on the transmission of periodic waves at the boundary between two media, we documented difficulties with the basic concepts of wavelength, frequency, and propagation speed, and with the relationship v=fλ. In this paper, we report on student attempts to apply this relationship in problems involving two-source and thin-film interference. In both cases, interference arises from differences in the path lengths traveled by two waves. We found that some students (up to 40% on certain questions) had difficulty with a task that is fundamental to understanding these phenomena: expressing a physical distance, such as the separation between two sources, in terms of the wavelength of a periodic wave. We administered a series of questions to try to identify factors that influence student performance. We concluded that most incorrect responses stemmed from erroneous judgment about the type of reasoning required, not an inability to do said reasoning. A number of students do not seem to treat the spacing of moving wave fronts as analogous to immutable measurement tools (e.g., rulers).
Moderate Resolution Spectroscopy of Substellar Companion Kappa Andromeda B
NASA Astrophysics Data System (ADS)
Wilcomb, Kielan; Konopacky, Quinn; Barman, Travis; Brown, Jessie; Brock, Laci; Macintosh, Bruce; Ruffio, Jean-Baptiste; Marois, Christian
2018-01-01
Recent direct imaging of exoplanets has revealed a population of Jupiter-like objects that orbit at large separations (~10-100 AU) from their host stars. These planets, with masses of ~2-14 MJup and temperatures of ~500-2000 K, remain a problem for the two main planet formation models—core accretion and gravitational instability. OSIRIS observations of directly imaged planets have expanded our understanding of their atmospheres, alluded to their formation, and uncovered individual molecular lines. Here, we present OSIRIS K band spectra of the “super-Jupiter,” Kappa Andromeda b. Kappa Andromeda b has a lower mass limit at the deuterium burning limit, but also has an uncertain age which may indicate the source is a higher mass brown dwarf. The spectra reveal resolved molecular lines from water and CO. We will present atmospheric properties of this object derived from comparison to PHOENIX atmosphere models, and measure a best fit C/O ratio for the source. We will compare our results to atmospheric properties of other brown dwarfs and gas giant planets in an effort to improve our knowledge of intricate atmospheres of young, substellar objects.
A Voyage of Mathematical and Cultural Awareness for Students of Upper Secondary School
NASA Astrophysics Data System (ADS)
Panagiotou, Evangelos N.
2014-01-01
Many papers have emphasized the need for and importance of particular examples and the underlying rationale for introducing a historical dimension in mathematics education. This article presents the development and implementation of a project, based on original sources, in a situation where the existing curriculum does not include history. The subject was conic sections and the motivating problems and original work which eventually found resolution in modern concepts. The project was carried out during the school year 2006-2007 with 18 students of a Greek experimental high school 2nd class (11th degree). It was devised as a series of worksheets, separate readings and oral presentations and written essays so that students might appreciate that mathematics evolves under the influence of factors intrinsic and extrinsic to it. Both epistemological and disciplinary issues are taken into account. Even though this work is just one case study, we have found that exposing students directly to primary sources in mathematics contributes greatly to motivation and understanding, and illustrates the nature of mathematics as a discipline and as a human endeavour.
NASA Astrophysics Data System (ADS)
Fornaini, Carlo; Merigo, Elisabetta; Selleri, Stefano; Cucinotta, Annamaria
2016-03-01
With the introduction of more and more new wavelengths, one of the main problems of medical laser users was centered on the study of laser-tissue interactions with the aim of determining the ideal wavelength for their treatments. The aim of this ex vivo study was to determine, by means of the utilization of a supercontinuum source, the amount of transmitted energy of different wavelengths in different organ samples obtained by Sprague Dawley rats. Supercontinuum light is generated by exploiting high optical non-linearity in a material and it combines the broadband attributes of a lamp with the spatial coherence and high brightness of laser. Even if the single transmission measurement does not allow us to separate out the respective contribution of scattering and absorption, it gives us an evaluation of the wavelengths not interacting with the tissue. In this way, being possible to determine what of the laser wavelengths are not useful or active in the different kinds of tissue, physicians may choose the proper device for his clinical treatments.
Evaluation of Deblur Methods for Radiography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wood, William M.
2014-03-31
Radiography is used as a primary diagnostic for dynamic experiments, providing timeresolved radiographic measurements of areal mass density along a line of sight through the experiment. It is well known that the finite spot extent of the radiographic source, as well as scattering, are sources of blurring of the radiographic images. This blurring interferes with quantitative measurement of the areal mass density. In order to improve the quantitative utility of this diagnostic, it is necessary to deblur or “restore” the radiographs to recover the “true” areal mass density from a radiographic transmission measurement. Towards this end, I am evaluating threemore » separate methods currently in use for deblurring radiographs. I begin by briefly describing the problems associated with image restoration, and outlining the three methods. Next, I illustrate how blurring affects the quantitative measurements using radiographs. I then present the results of the various deblur methods, evaluating each according to several criteria. After I have summarized the results of the evaluation, I give a detailed account of how the restoration process is actually implemented.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krook, J.; Martensson, A.; Eklund, M.
In this paper, wood waste (RWW) recovered for heat production in Sweden was studied. Previous research has concluded that RWW contains elevated amounts of heavy metals, causing environmental problems during waste management. This study extends previous work on RWW by analysing which pollution sources cause this contamination. Using existing data on the metal contents in various materials, and the amounts of these materials in RWW, the share of the elevated amounts of metals in RWW that these materials explain was quantified. Six different materials occurring in RWW were studied and the results show that they explain from 70% to 100%more » of the amounts of arsenic, chromium, lead, copper and zinc in RWW. The most important materials contributing to contamination of RWW are surface-treated wood, industrial preservative-treated wood, plastic and galvanised fastening systems. These findings enable the development and evaluation of strategies aiming to decrease pollution and resource loss from handling RWW. It is argued that source separation and measures taken further downstream from the generation site, such as treatment, need to be combined to substantially decrease the amount of heavy metals in RWW.« less
Metal-organic framework-based separator for lithium-sulfur batteries
NASA Astrophysics Data System (ADS)
Bai, Songyan; Liu, Xizheng; Zhu, Kai; Wu, Shichao; Zhou, Haoshen
2016-07-01
Lithium-sulfur batteries are a promising energy-storage technology due to their relatively low cost and high theoretical energy density. However, one of their major technical problems is the shuttling of soluble polysulfides between electrodes, resulting in rapid capacity fading. Here, we present a metal-organic framework (MOF)-based battery separator to mitigate the shuttling problem. We show that the MOF-based separator acts as an ionic sieve in lithium-sulfur batteries, which selectively sieves Li+ ions while efficiently suppressing undesired polysulfides migrating to the anode side. When a sulfur-containing mesoporous carbon material (approximately 70 wt% sulfur content) is used as a cathode composite without elaborate synthesis or surface modification, a lithium-sulfur battery with a MOF-based separator exhibits a low capacity decay rate (0.019% per cycle over 1,500 cycles). Moreover, there is almost no capacity fading after the initial 100 cycles. Our approach demonstrates the potential for MOF-based materials as separators for energy-storage applications.
Mass analyzer ``MASHA'' high temperature target and plasma ion source
NASA Astrophysics Data System (ADS)
Semchenkov, A. G.; Rassadov, D. N.; Bekhterev, V. V.; Bystrov, V. A.; Chizov, A. Yu.; Dmitriev, S. N.; Efremov, A. A.; Guljaev, A. V.; Kozulin, E. M.; Oganessian, Yu. Ts.; Starodub, G. Ya.; Voskresensky, V. M.; Bogomolov, S. L.; Paschenko, S. V.; Zelenak, A.; Tikhonov, V. I.
2004-05-01
A new separator and mass analyzer of super heavy atoms (MASHA) has been created at the FLNR JINR Dubna to separate and measure masses of nuclei and molecules with precision better than 10-3. First experiments with the FEBIAD plasma ion source have been done and give an efficiency of ionization of up to 20% for Kr with a low flow test leak (6 particle μA). We suppose a magnetic field optimization, using the additional electrode (einzel lens type) in the extracting system, and an improving of the vacuum conditions in order to increase the ion source efficiency.
Fahnline, John B
2016-12-01
An equivalent source method is developed for solving transient acoustic boundary value problems. The method assumes the boundary surface is discretized in terms of triangular or quadrilateral elements and that the solution is represented using the acoustic fields of discrete sources placed at the element centers. Also, the boundary condition is assumed to be specified for the normal component of the surface velocity as a function of time, and the source amplitudes are determined to match the known elemental volume velocity vector at a series of discrete time steps. Equations are given for marching-on-in-time schemes to solve for the source amplitudes at each time step for simple, dipole, and tripole source formulations. Several example problems are solved to illustrate the results and to validate the formulations, including problems with closed boundary surfaces where long-time numerical instabilities typically occur. A simple relationship between the simple and dipole source amplitudes in the tripole source formulation is derived so that the source radiates primarily in the direction of the outward surface normal. The tripole source formulation is shown to eliminate interior acoustic resonances and long-time numerical instabilities.
Izatt, Reed M.; Christensen, James J.; Hawkins, Richard T.
1984-01-01
A process of recovering cesium ions from mixtures of ions containing them and other ions, e.g., a solution of nuclear waste materials, which comprises establishing a separate source phase containing such a mixture of ions, establishing a separate recipient phase, establishing a liquid membrane phase in interfacial contact with said source and recipient phases, said membrane phase containing a ligand, preferably a selected calixarene as depicted in the drawing, maintaining said interfacial contact for a period of time long enough to transport by said ligand a substantial portion of the cesium ion from the source phase to the recipient phase, and recovering the cesium ion from the recipient phase. The separation of the source and recipient phases may be by the membrane phase only, e.g., where these aqueous phases are emulsified as dispersed phases in a continuous membrane phase, or may include a physical barrier as well, e.g., an open-top outer container with an inner open-ended container of smaller cross-section mounted in the outer container with its open bottom end spaced from and above the closed bottom of the outer container so that the membrane phase may fill the outer container to a level above the bottom of the inner container and have floating on its upper surface a source phase and a recipient phase separated by the wall of the inner container as a physical barrier. A preferred solvent for the ligand is a mixture of methylene chloride and carbon tetrachloride.
Overview of the ISOL facility for the RISP
NASA Astrophysics Data System (ADS)
Woo, H. J.; Kang, B. H.; Tshoo, K.; Seo, C. S.; Hwang, W.; Park, Y.-H.; Yoon, J. W.; Yoo, S. H.; Kim, Y. K.; Jang, D. Y.
2015-02-01
The key feature of the Isotope Separation On-Line (ISOL) facility is its ability to provide high-intensity and high-quality beams of neutron-rich isotopes with masses in the range of 80-160 by means of a 70-MeV proton beam directly impinging on uranium-carbide thin-disc targets to perform forefront research in nuclear structure, nuclear astrophysics, reaction dynamics and interdisciplinary fields like medical, biological and material sciences. The technical design of the 10-kW and the 35-kW direct fission targets with in-target fission rates of up to 1014 fissions/s has been finished, and for the development of the ISOL fission-target chemistry an initial effort has been made to produce porous lanthanum-carbide (LaCx) discs as a benchmark for the final production of porous UCx discs. For the production of various beams, three classes of ion sources are under development at RISP (Rare Isotope Science Project), the surface ion source, the plasma ion source (FEBIAD), the laser ion source, and the engineering design of the FEBIAD is in progress for prototype fabrication. The engineering design of the ISOL target/ion source front-end system is also in progress, and a prototype will be used for an off-line test facility in front of the pre-separator. The technical designs of other basic elements at the ISOL facility, such as the RF-cooler, the high-resolution mass separator, and the A/q separator, have been finished, and the results, along with the future plans, are introduced.
Cognitive Intervention in the Normal Developmental Problems of Young Adults
ERIC Educational Resources Information Center
Wilson, Stephen B.
1978-01-01
The common developmental problems of young adults--career focus, sex confidence, clarification of beliefs, and separation from parents--provide themes of interest to young adults. Using these themes and the human tendency to problem solve, specific information can be given to improve personal problem-solving skills without psychological games.…
Isotope separation by photoselective dissociative electron capture
Stevens, C.G.
1978-08-29
Disclosed is a method of separating isotopes based on photoselective electron capture dissociation of molecules having an electron capture cross section dependence on the vibrational state of the molecule. A molecular isotope source material is irradiated to selectively excite those molecules containing a desired isotope to a predetermined vibrational state having associated therewith an electron capture energy region substantially non-overlapping with the electron capture energy ranges associated with the lowest vibration states of the molecules. The isotope source is also subjected to electrons having an energy corresponding to the non-overlapping electron capture region whereby the selectively excited molecules preferentially capture electrons and dissociate into negative ions and neutrals. The desired isotope may be in the negative ion product or in the neutral product depending upon the mechanism of dissociation of the particular isotope source used. The dissociation product enriched in the desired isotope is then separated from the reaction system by conventional means. Specifically, [sup 235]UF[sub 6] is separated from a UF[sub 6] mixture by selective excitation followed by dissociative electron capture into [sup 235]UF[sub 5]- and F. 2 figs.
Isotope separation by photoselective dissociative electron capture
Stevens, Charles G. [Pleasanton, CA
1978-08-29
A method of separating isotopes based on photoselective electron capture dissociation of molecules having an electron capture cross section dependence on the vibrational state of the molecule. A molecular isotope source material is irradiated to selectively excite those molecules containing a desired isotope to a predetermined vibrational state having associated therewith an electron capture energy region substantially non-overlapping with the electron capture energy ranges associated with the lowest vibration states of the molecules. The isotope source is also subjected to electrons having an energy corresponding to the non-overlapping electron capture region whereby the selectively excited molecules preferentially capture electrons and dissociate into negative ions and neutrals. The desired isotope may be in the negative ion product or in the neutral product depending upon the mechanism of dissociation of the particular isotope source used. The dissociation product enriched in the desired isotope is then separated from the reaction system by conventional means. Specifically, .sup.235 UF.sub.6 is separated from a UF.sub.6 mixture by selective excitation followed by dissociative electron capture into .sup.235 UF.sub.5 - and F.
Ardila-Rey, Jorge Alfredo; Rojas-Moreno, Mónica Victoria; Martínez-Tarifa, Juan Manuel; Robles, Guillermo
2014-01-01
Partial discharge (PD) detection is a standardized technique to qualify electrical insulation in machines and power cables. Several techniques that analyze the waveform of the pulses have been proposed to discriminate noise from PD activity. Among them, spectral power ratio representation shows great flexibility in the separation of the sources of PD. Mapping spectral power ratios in two-dimensional plots leads to clusters of points which group pulses with similar characteristics. The position in the map depends on the nature of the partial discharge, the setup and the frequency response of the sensors. If these clusters are clearly separated, the subsequent task of identifying the source of the discharge is straightforward so the distance between clusters can be a figure of merit to suggest the best option for PD recognition. In this paper, two inductive sensors with different frequency responses to pulsed signals, a high frequency current transformer and an inductive loop sensor, are analyzed to test their performance in detecting and separating the sources of partial discharges. PMID:24556674
A new submarine oil-water separation system
NASA Astrophysics Data System (ADS)
Cai, Wen-Bin; Liu, Bo-Hong
2017-12-01
In order to solve the oil field losses of environmental problems and economic benefit caused by the separation of lifting production liquid to offshore platforms in the current offshore oil production, from the most basic separation principle, a new oil-water separation system has been processed of adsorption and desorption on related materials, achieving high efficiency and separation of oil and water phases. And the submarine oil-water separation device has been designed. The main structure of the device consists of gas-solid phase separation device, period separating device and adsorption device that completed high efficiency separation of oil, gas and water under the adsorption and desorption principle, and the processing capacity of the device is calculated.