Compson, Zacchaeus G; Adams, Kenneth J; Edwards, Joeseph A; Maestas, Jesse M; Whitham, Thomas G; Marks, Jane C
2013-10-01
Reciprocal subsidies between rivers and terrestrial habitats are common where terrestrial leaf litter provides energy to aquatic invertebrates while emerging aquatic insects provide energy to terrestrial predators (e.g., birds, lizards, spiders). We examined how aquatic insect emergence changed seasonally with litter from two foundation riparian trees, whose litter often dominates riparian streams of the southwestern United States: Fremont (Populus fremontii) and narrowleaf (Populus angustifolia) cottonwood. P. fremontii litter is fast-decomposing and lower in defensive phytochemicals (i.e., condensed tannins, lignin) relative to P. angustifolia. We experimentally manipulated leaf litter from these two species by placing them in leaf enclosures with emergence traps attached in order to determine how leaf type influenced insect emergence. Contrary to our initial predictions, we found that packs with slow-decomposing leaves tended to support more emergent insects relative to packs with fast-decomposing leaves. Three findings emerged. Firstly, abundance (number of emerging insects m^-2 day^-1) was 25% higher on narrowleaf compared to Fremont leaves in the spring but did not differ in the fall, demonstrating that leaf quality from two dominant trees of the same genus yielded different emergence patterns and that these patterns changed seasonally. Secondly, functional feeding groups of emerging insects differed between treatments and seasons. Specifically, in the spring collector-gatherer abundance and biomass were higher on narrowleaf leaves, whereas collector-filterer abundance and biomass were higher on Fremont leaves. Shredder abundance and biomass were higher on narrowleaf leaves in the fall. Thirdly, diversity (Shannon's H') was higher on Fremont leaves in the spring, but no differences were found in the fall, showing that fast-decomposing leaves can support a more diverse, complex emergent insect assemblage during certain times of the year.
Collectively, these results challenge the notion that leaf quality is a simple function of decomposition, suggesting instead that aquatic insects benefit differentially from different leaf types, such that some use slow-decomposing litter for habitat and its temporal longevity and others utilize fast-decomposing litter with more immediate nutrient release.
[Investigation of fast filter of ECG signals with lifting wavelet and smooth filter].
Li, Xuefei; Mao, Yuxing; He, Wei; Yang, Fan; Zhou, Liang
2008-02-01
The lifting wavelet is used to decompose the original ECG signal into low-frequency approximation signals and high-frequency detail signals, based on their frequency characteristics. Parts of the detail signals are discarded according to those characteristics. To avoid distortion of the QRS complexes, the approximation signals are filtered by an adaptive smoothing filter with a proper threshold value. Through the inverse lifting wavelet transform, the reserved approximation signals are reconstructed, and the three primary kinds of noise are limited effectively. In addition, the method is fast and introduces no time delay between input and output.
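For illustration, the lifting idea (split the samples, predict the odd ones from the even ones, then update the evens to preserve the mean) can be sketched with a minimal Haar lifting step. This is an assumption-laden sketch: the paper's actual lifting wavelet and threshold rule are not specified in the abstract.

```python
import numpy as np

def haar_lift(x):
    """One Haar lifting step: split, predict, update.

    Returns (approximation, detail); a sketch of the idea, not the
    paper's exact lifting wavelet.
    """
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    detail = odd - even            # predict odd samples from even ones
    approx = even + detail / 2     # update so the running mean is preserved
    return approx, detail

def haar_unlift(approx, detail):
    """Inverse lifting: undo update, undo predict, merge."""
    even = approx - detail / 2
    odd = detail + even
    x = np.empty(even.size + odd.size)
    x[0::2], x[1::2] = even, odd
    return x

# Denoising in the style described: keep the low-frequency approximation,
# zero (or threshold) the high-frequency detail, then reconstruct.
x = np.array([2.0, 2.1, 1.9, 2.0, 8.0, 8.1, 7.9, 8.0])
a, d = haar_lift(x)
smoothed = haar_unlift(a, np.zeros_like(d))
```

Keeping the detail untouched gives perfect reconstruction; zeroing it leaves a piecewise-average, smoothed signal.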
A high speed model-based approach for wavefront sensorless adaptive optics systems
NASA Astrophysics Data System (ADS)
Lianghua, Wen; Yang, Ping; Shuai, Wang; Wenjing, Liu; Shanqiu, Chen; Xu, Bing
2018-02-01
To improve the temporal-frequency properties of wavefront sensorless adaptive optics (AO) systems, a fast general model-based aberration correction algorithm is presented. The approach is based on the approximately linear relation between the mean square of the aberration gradients and the second moment of the far-field intensity distribution. The presented model-based method can effectively correct a modal aberration by applying only one disturbance to the deformable mirror (one correction per disturbance); the mode is reconstructed via singular value decomposition of the correlation matrix of the Zernike functions' gradients. Numerical simulations of AO corrections under various random and dynamic aberrations are implemented. The simulation results indicate that the equivalent control bandwidth is 2-3 times that of the previous method, which performs one aberration correction after applying N disturbances to the deformable mirror (one correction per N disturbances).
NASA Astrophysics Data System (ADS)
Wang, Lei; Liu, Zhiwen; Miao, Qiang; Zhang, Xin
2018-03-01
A time-frequency analysis method based on ensemble local mean decomposition (ELMD) and the fast kurtogram (FK) is proposed for rotating machinery fault diagnosis. Local mean decomposition (LMD), an adaptive non-stationary and nonlinear signal processing method, can decompose a multicomponent modulated signal into a series of demodulated mono-components. However, mode mixing is a serious drawback. To alleviate this, ELMD, a noise-assisted variant, was developed. Still, environmental noise present in the raw signal remains in the product function (PF) that contains the component of interest. FK performs well at impulse detection in the presence of strong environmental noise, but it is susceptible to non-Gaussian noise. The proposed method combines the merits of ELMD and FK to detect faults in rotating machinery. First, the raw signal is decomposed by ELMD into a set of PFs. Then, the PF that best characterizes the fault information is selected according to a kurtosis index. Finally, the selected PF is filtered by an optimal band-pass filter based on FK to extract the impulse signal. Faults can be identified by the appearance of fault characteristic frequencies in the squared envelope spectrum of the filtered signal. The advantages of ELMD over LMD and EEMD are illustrated in simulation analyses. Furthermore, the efficiency of the proposed method in fault diagnosis for rotating machinery is demonstrated on gearbox and rolling bearing case analyses.
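The two selection tools this pipeline relies on, a kurtosis index for picking the impulsive PF and a squared envelope spectrum for revealing fault frequencies, can be sketched as follows. This is a generic illustration with synthetic signals, not the paper's ELMD or FK implementation; an FFT-based analytic signal stands in for the Hilbert transform.

```python
import numpy as np

def kurtosis(x):
    """Fourth standardized moment; high for impulsive signals."""
    x = x - x.mean()
    return np.mean(x**4) / np.mean(x**2) ** 2

def squared_envelope_spectrum(x):
    """Magnitude spectrum of the squared envelope (FFT-based analytic signal)."""
    n = x.size
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    analytic = np.fft.ifft(X * h)      # zero the negative frequencies
    env2 = np.abs(analytic) ** 2
    return np.abs(np.fft.rfft(env2 - env2.mean()))

# PF selection by kurtosis: an impulsive PF beats a smooth harmonic one.
t = np.arange(2048) / 2048.0
pf_smooth = np.sin(2 * np.pi * 50 * t)
pf_impulsive = np.zeros_like(t)
pf_impulsive[::256] = 5.0                          # fault-like periodic impulses
pfs = [pf_smooth, pf_impulsive]
selected = max(range(len(pfs)), key=lambda i: kurtosis(pfs[i]))
spectrum = squared_envelope_spectrum(pfs[selected])
```

With 8 impulses across 2048 samples, the envelope spectrum concentrates its energy at bin 8 and its harmonics, which is how a fault characteristic frequency shows up in practice.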
NASA Astrophysics Data System (ADS)
Jing, Ya-Bing; Liu, Chang-Wen; Bi, Feng-Rong; Bi, Xiao-Yang; Wang, Xia; Shao, Kang
2017-07-01
Vibration-based techniques are rarely applied directly to diesel engine fault diagnosis because the surface vibration signals of diesel engines have complex non-stationary, nonlinear, time-varying features. To investigate the fault diagnosis of diesel engines, the fractal correlation dimension, wavelet energy and entropy, as features reflecting the fractal and energy characteristics of diesel engine faults, are extracted from the decomposed signals by analyzing vibration acceleration signals derived from the cylinder head in seven different valve-train states. An intelligent fault detector, FastICA-SVM, is applied for diesel engine fault diagnosis and classification. The results demonstrate that FastICA-SVM achieves higher classification accuracy and generalizes better with small samples. Moreover, when the fractal correlation dimension and wavelet energy and entropy are used as input vectors to the FastICA-SVM classifier, excellent classification results are produced. The proposed methodology improves the accuracy of feature extraction and the fault diagnosis of diesel engines.
Arikan and Alamouti matrices based on fast block-wise inverse Jacket transform
NASA Astrophysics Data System (ADS)
Lee, Moon Ho; Khan, Md Hashem Ali; Kim, Kyeong Jin
2013-12-01
Recently, Lee and Hou (IEEE Signal Process Lett 13:461-464, 2006) proposed one-dimensional and two-dimensional fast algorithms for block-wise inverse Jacket transforms (BIJTs). Their BIJTs are not true inverse Jacket transforms from a mathematical point of view, because the product of a matrix with its proposed inverse does not equal the identity matrix. Therefore, we mathematically propose a fast block-wise inverse Jacket transform of orders N = 2^k, 3^k, 5^k, and 6^k, where k is a positive integer. Based on the Kronecker product of the successive lower-order Jacket matrices and the basis matrix, fast algorithms for realizing these transforms are obtained. Due to the simple inverses and fast algorithms of the Arikan polar binary and Alamouti multiple-input multiple-output (MIMO) non-binary matrices obtained from BIJTs, they can be applied in areas such as 3GPP physical-layer permutation matrix design for ultra mobile broadband, first-order q-ary Reed-Muller code design, diagonal channel design, diagonal subchannel decomposition for interference alignment, and Alamouti precoding design for 4G MIMO long-term evolution.
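The defining Jacket property, that the inverse is the transpose of the element-wise reciprocal scaled by 1/N, and the Kronecker construction of higher orders can be checked numerically. Below is a sketch using the order-2 Hadamard matrix, the simplest Jacket matrix, not the Arikan or Alamouti constructions of the paper.

```python
import numpy as np

def jacket_inverse(J):
    """Jacket property: J^{-1} = (1/N) * (element-wise reciprocal of J)^T."""
    return (1.0 / J).T / J.shape[0]

J2 = np.array([[1.0,  1.0],
               [1.0, -1.0]])        # order-2 Hadamard matrix, a Jacket matrix
J4 = np.kron(J2, J2)                # Kronecker product gives the next order
J8 = np.kron(J2, J4)
```

For Hadamard entries (+/-1) the element-wise reciprocal is the matrix itself, so the Jacket inverse reduces to the familiar (1/N) H^T.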
Taki, Hirofumi; Nagatani, Yoshiki; Matsukawa, Mami; Kanai, Hiroshi; Izumi, Shin-Ichi
2017-10-01
Ultrasound signals that pass through cancellous bone may be considered to consist of two longitudinal waves, called fast and slow waves. Accurate decomposition of these fast and slow waves is considered highly beneficial for determining the characteristics of cancellous bone. In the present study, a fast decomposition method using a wave transfer function with a phase rotation parameter was applied to signals that had passed through bovine bone specimens with various bone volume to total volume (BV/TV) ratios in a simulation study, where the elastic finite-difference time-domain method was used and the ultrasound wave propagated parallel to the bone axes. The proposed method succeeded in decomposing both fast and slow waves accurately; the normalized residual intensity was less than -19.5 dB when the specimen thickness ranged from 4 to 7 mm and the BV/TV value ranged from 0.144 to 0.226. There was a strong relationship between the phase rotation value and the BV/TV value. The ratio of the peak envelope amplitude of the decomposed fast wave to that of the slow wave increased monotonically with increasing BV/TV ratio, indicating the high performance of the proposed method in estimating the BV/TV value in cancellous bone.
Competitive code-based fast palmprint identification using a set of cover trees
NASA Astrophysics Data System (ADS)
Yue, Feng; Zuo, Wangmeng; Zhang, David; Wang, Kuanquan
2009-06-01
A palmprint identification system recognizes a query palmprint image by searching for its nearest neighbor from among all the templates in a database. When applied on a large-scale identification system, it is often necessary to speed up the nearest-neighbor searching process. We use competitive code, which has very fast feature extraction and matching speed, for palmprint identification. To speed up the identification process, we extend the cover tree method and propose to use a set of cover trees to facilitate the fast and accurate nearest-neighbor searching. We can use the cover tree method because, as we show, the angular distance used in competitive code can be decomposed into a set of metrics. Using the Hong Kong PolyU palmprint database (version 2) and a large-scale palmprint database, our experimental results show that the proposed method searches for nearest neighbors faster than brute force searching.
A new fast direct solver for the boundary element method
NASA Astrophysics Data System (ADS)
Huang, S.; Liu, Y. J.
2017-09-01
A new fast direct linear equation solver for the boundary element method (BEM) is presented in this paper. The idea of the new fast direct solver stems from the concept of the hierarchical off-diagonal low-rank matrix, which can be decomposed into a product of several diagonal block matrices. The inverse of a hierarchical off-diagonal low-rank matrix can be calculated efficiently with the Sherman-Morrison-Woodbury formula. In this paper, a more general and efficient approach to approximating the coefficient matrix of the BEM with a hierarchical off-diagonal low-rank matrix is proposed. Compared to the current fast direct solver based on the hierarchical off-diagonal low-rank matrix, the proposed method is suitable for solving general 3-D boundary element models. Several numerical examples of 3-D potential problems with more than 200,000 unknowns are presented. The results show that the new fast direct solver can solve large 3-D BEM models accurately and more efficiently than the conventional BEM.
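The Sherman-Morrison-Woodbury identity at the heart of such solvers can be sketched directly. This is a generic numerical check with a small diagonal-plus-low-rank matrix, not the paper's hierarchical implementation.

```python
import numpy as np

def smw_inverse(A_inv, U, C, V):
    """(A + U C V)^{-1} = A^{-1} - A^{-1} U (C^{-1} + V A^{-1} U)^{-1} V A^{-1}.

    Reuses A^{-1} and inverts only a small k x k system, which is why the
    formula is cheap when A is block diagonal (easy to invert) and U C V is
    low rank, exactly the structure of hierarchical off-diagonal low-rank
    matrices.
    """
    S = np.linalg.inv(np.linalg.inv(C) + V @ A_inv @ U)
    return A_inv - A_inv @ U @ S @ V @ A_inv

n, k = 8, 2
A = 2.0 * np.eye(n)                           # easy-to-invert "diagonal" part
U = 0.5 * np.tile(np.eye(k), (n // k, 1))     # tall low-rank factor (n x k)
V = U.T
C = np.eye(k)
M_inv = smw_inverse(np.eye(n) / 2.0, U, C, V)
```

Only a 2 x 2 matrix is inverted here, yet the result is the exact inverse of the full 8 x 8 matrix A + U C V.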
A FAST POLYNOMIAL TRANSFORM PROGRAM WITH A MODULARIZED STRUCTURE
NASA Technical Reports Server (NTRS)
Truong, T. K.
1994-01-01
This program utilizes a fast polynomial transformation (FPT) algorithm applicable to two-dimensional mathematical convolutions. Two-dimensional convolution has many applications, particularly in image processing. Two-dimensional cyclic convolutions can be converted to a one-dimensional convolution in a polynomial ring. Traditional FPT methods decompose the one-dimensional cyclic polynomial into polynomial convolutions of different lengths. This program will decompose a cyclic polynomial into polynomial convolutions of the same length. Thus, only FPTs and Fast Fourier Transforms of the same length are required. This modular approach can save computational resources. To further enhance its appeal, the program is written in the transportable 'C' language. The steps in the algorithm are: 1) formulate the modulus reduction equations, 2) calculate the polynomial transforms, 3) multiply the transforms using a generalized fast Fourier transformation, 4) compute the inverse polynomial transforms, and 5) reconstruct the final matrices using the Chinese remainder theorem. Input to this program is comprised of the row and column dimensions and the initial two matrices. The matrices are printed out at all steps, ending with the final reconstruction. This program is written in 'C' for batch execution and has been implemented on the IBM PC series of computers under DOS with a central memory requirement of approximately 18K of 8 bit bytes. This program was developed in 1986.
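The operation the program computes, 2-D cyclic convolution, can be sketched with ordinary floating-point FFTs for reference; the FPT itself works in a polynomial ring and avoids this route.

```python
import numpy as np

def cyclic_conv2d_direct(a, b):
    """Reference 2-D cyclic convolution by direct summation."""
    r, c = a.shape
    out = np.zeros((r, c))
    for i in range(r):
        for j in range(c):
            for m in range(r):
                for n in range(c):
                    out[i, j] += a[m, n] * b[(i - m) % r, (j - n) % c]
    return out

def cyclic_conv2d_fft(a, b):
    """Same convolution via the 2-D convolution theorem."""
    return np.real(np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(b)))

a = np.arange(12.0).reshape(3, 4)
b = np.arange(12.0)[::-1].reshape(3, 4)
```

Both routes agree; the FPT replaces the floating-point FFT pair with exact transforms over a polynomial ring plus a Chinese-remainder reconstruction.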
Wear, Keith A
2014-04-01
In through-transmission interrogation of cancellous bone, two longitudinal pulses ("fast" and "slow" waves) may be generated. Fast and slow wave properties convey information about material and micro-architectural characteristics of bone. However, these properties can be difficult to assess when fast and slow wave pulses overlap in time and frequency domains. In this paper, two methods are applied to decompose signals into fast and slow waves: bandlimited deconvolution and modified least-squares Prony's method with curve-fitting (MLSP + CF). The methods were tested in plastic and Zerdine(®) samples that provided fast and slow wave velocities commensurate with velocities for cancellous bone. Phase velocity estimates were accurate to within 6 m/s (0.4%) (slow wave with both methods and fast wave with MLSP + CF) and 26 m/s (1.2%) (fast wave with bandlimited deconvolution). Midband signal loss estimates were accurate to within 0.2 dB (1.7%) (fast wave with both methods), and 1.0 dB (3.7%) (slow wave with both methods). Similar accuracies were found for simulations based on fast and slow wave parameter values published for cancellous bone. These methods provide sufficient accuracy and precision for many applications in cancellous bone such that experimental error is likely to be a greater limiting factor than estimation error.
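The classical least-squares Prony step, fitting a linear prediction, taking the roots of its characteristic polynomial as mode poles, then solving a Vandermonde system for amplitudes, can be sketched as follows. This is a bare two-mode illustration of the principle, not the modified method with curve-fitting used in the paper.

```python
import numpy as np

def prony(y, p):
    """Least-squares Prony: model y[k] as a sum of p damped exponentials.

    Returns complex poles z_i and amplitudes b_i with y[k] ~ sum_i b_i * z_i**k.
    """
    n = y.size
    # linear prediction: y[k] = -a1*y[k-1] - ... - ap*y[k-p]
    A = np.column_stack([y[p - 1 - j:n - 1 - j] for j in range(p)])
    a, *_ = np.linalg.lstsq(A, -y[p:], rcond=None)
    poles = np.roots(np.concatenate(([1.0], a)))
    # Vandermonde solve for the amplitudes: V[k, i] = z_i**k
    V = np.vander(poles.astype(complex), n, increasing=True).T
    b, *_ = np.linalg.lstsq(V, y.astype(complex), rcond=None)
    return poles, b

k = np.arange(32)
y = 0.9**k + 0.5 * (-0.7)**k       # a slowly and a quickly decaying mode
poles, amps = prony(y, 2)
```

On noise-free data the two poles (0.9 and -0.7) and amplitudes (1.0 and 0.5) are recovered essentially exactly; overlapping, noisy pulses are what motivate the modified, curve-fitted variant in the paper.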
Sparse time-frequency decomposition based on dictionary adaptation.
Hou, Thomas Y; Shi, Zuoqiang
2016-04-13
In this paper, we propose a time-frequency analysis method to obtain instantaneous frequencies and the corresponding decomposition by solving an optimization problem. In this optimization problem, the basis used to decompose the signal is not known a priori; instead, it is adapted to the signal and determined as part of the optimization. In this sense, the problem can be seen as a dictionary adaptation problem, in which the dictionary is adapted to a single signal rather than to a training set as in dictionary learning. The dictionary adaptation problem is solved iteratively using the augmented Lagrangian multiplier (ALM) method, and each ALM iteration is accelerated with the fast wavelet transform. We apply our method to decompose several signals, including signals with poor scale separation, signals with outliers or polluted by noise, and a real signal. The results show that this method can accurately recover both the instantaneous frequencies and the intrinsic mode functions. © 2016 The Author(s).
Wu, Zhaohua; Feng, Jiaxin; Qiao, Fangli; Tan, Zhe-Min
2016-04-13
In this big data era, it is more urgent than ever to solve two major issues: (i) fast data transmission methods that can facilitate access to data from non-local sources and (ii) fast and efficient data analysis methods that can reveal the key information in the available data for particular purposes. Although approaches to these two questions differ significantly across fields, the common part must involve data compression techniques and a fast algorithm. This paper introduces the recently developed adaptive and spatio-temporally local analysis method, namely the fast multidimensional ensemble empirical mode decomposition (MEEMD), for the analysis of large spatio-temporal datasets. The original MEEMD uses ensemble empirical mode decomposition to decompose the time series at each spatial grid point and then pieces together the temporal-spatial evolution of climate variability and change on naturally separated timescales, which is computationally expensive. By taking advantage of the high efficiency of principal component analysis/empirical orthogonal function analysis for expressing spatio-temporally coherent data, we design a lossy compression method for climate data to facilitate its non-local transmission. We also explain the basic principles behind the fast MEEMD, which decomposes principal components instead of the original grid-wise time series to speed up the computation. Using a typical climate dataset as an example, we demonstrate that our newly designed methods can (i) compress data by one to two orders of magnitude; and (ii) speed up the MEEMD algorithm by one to two orders of magnitude. © 2016 The Authors.
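The compression idea, keeping only the leading principal components of a spatio-temporally coherent field, can be sketched with a truncated SVD. This is a generic PCA/EOF illustration on a synthetic rank-2 field, not the paper's climate pipeline.

```python
import numpy as np

def eof_compress(data, k):
    """Truncated SVD (PCA/EOF) compression of a (time x space) data matrix.

    Keeps k principal components; storage drops from t*s values to
    roughly k*(t + s + 1).
    """
    U, s, Vt = np.linalg.svd(data, full_matrices=False)
    return U[:, :k], s[:k], Vt[:k]

def eof_reconstruct(U, s, Vt):
    return (U * s) @ Vt

# A spatio-temporally coherent field made of two modes, so keeping
# two components is effectively lossless.
t = np.linspace(0.0, 10.0, 200)[:, None]
x = np.linspace(0.0, 1.0, 500)[None, :]
field = np.sin(t) * np.cos(np.pi * x) + 0.3 * np.cos(3 * t) * np.sin(2 * np.pi * x)

Uk, sk, Vtk = eof_compress(field, 2)
recon = eof_reconstruct(Uk, sk, Vtk)
```

Here 100,000 grid values compress into about 1,400 numbers (a factor of ~70) with negligible error; real climate fields are not exactly low rank, which is why the paper's scheme is lossy.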
Computer simulation results of attitude estimation of earth orbiting satellites
NASA Technical Reports Server (NTRS)
Kou, S. R.
1976-01-01
Computer simulation results of attitude estimation for Earth-orbiting satellites (including the Space Telescope) subjected to environmental disturbances and noise are presented. A decomposed linear recursive filter and a Kalman filter were used as estimation tools. Six programs were developed for this simulation; all were written in the BASIC language and run on HP 9830A and HP 9866A computers. Simulation results show that the decomposed linear recursive filter is accurate in estimation and fast in response time. Furthermore, for higher-order systems, this filter has computational advantages (i.e., smaller integration and roundoff errors) over a Kalman filter.
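The flavor of recursive state estimation can be sketched with a minimal scalar Kalman filter for a constant state. This is an illustration only; the report's decomposed linear recursive filter is not specified in the abstract, and the noise levels below are invented.

```python
import numpy as np

def scalar_kalman(measurements, r, x0=0.0, p0=1e6):
    """Recursive estimate of a constant scalar state from noisy measurements.

    r is the measurement noise variance; returns the estimate history.
    """
    x, p, history = x0, p0, []
    for z in measurements:
        k = p / (p + r)          # Kalman gain
        x = x + k * (z - x)      # measurement update
        p = (1.0 - k) * p        # covariance update
        history.append(x)
    return np.array(history)

rng = np.random.default_rng(1)
truth = 3.7
z = truth + rng.normal(0.0, 0.5, size=200)
est = scalar_kalman(z, r=0.25)
```

With a diffuse prior (large p0) the recursion converges to the running average of the measurements, tightening as data accumulate.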
Bumb, Iris; Garnier, Eric; Coq, Sylvain; Nahmani, Johanne; Del Rey Granado, Maria; Gimenez, Olivier; Kazakou, Elena
2018-03-05
Forage quality for herbivores and litter quality for decomposers are two key plant properties affecting ecosystem carbon and nutrient cycling. Although there is a positive relationship between palatability and decomposition, very few studies have focused on larger vertebrate herbivores while considering links between the digestibility of living leaves and stems and the decomposability of litter and associated traits. The hypothesis tested is that some defences of living organs would reduce their digestibility and, as a consequence, their litter decomposability, through 'afterlife' effects. Additionally in high-fertility conditions the presence of intense herbivory would select for communities dominated by fast-growing plants, which are able to compensate for tissue loss by herbivory, producing both highly digestible organs and easily decomposable litter. Relationships between dry matter digestibility and decomposability were quantified in 16 dominant species from Mediterranean rangelands, which are subject to management regimes that differ in grazing intensity and fertilization. The digestibility and decomposability of leaves and stems were estimated at peak standing biomass, in plots that were either fertilized and intensively grazed or unfertilized and moderately grazed. Several traits were measured on living and senesced organs: fibre content, dry matter content and nitrogen, phosphorus and tannin concentrations. Digestibility was positively related to decomposability, both properties being influenced in the same direction by management regime, organ and growth forms. Digestibility of leaves and stems was negatively related to their fibre concentrations, and positively related to their nitrogen concentration. Decomposability was more strongly related to traits measured on living organs than on litter. 
Digestibility and decomposition were governed by similar structural traits, in particular fibre concentration, affecting both herbivores and micro-organisms through the afterlife effects. This study contributes to a better understanding of the interspecific relationships between forage quality and litter decomposition in leaves and stems and demonstrates the key role these traits play in the link between plant and soil via herbivory and decomposition. Fibre concentration and dry matter content can be considered good predictors of both digestibility and decomposability. © The Author(s) 2018. Published by Oxford University Press on behalf of the Annals of Botany Company. All rights reserved.
Xu, Chonggang; Gertner, George
2013-01-01
Fourier Amplitude Sensitivity Test (FAST) is one of the most popular uncertainty and sensitivity analysis techniques. It uses a periodic sampling approach and a Fourier transformation to decompose the variance of a model output into partial variances contributed by different model parameters. Until now, the FAST analysis is mainly confined to the estimation of partial variances contributed by the main effects of model parameters, but does not allow for those contributed by specific interactions among parameters. In this paper, we theoretically show that FAST analysis can be used to estimate partial variances contributed by both main effects and interaction effects of model parameters using different sampling approaches (i.e., traditional search-curve based sampling, simple random sampling and random balance design sampling). We also analytically calculate the potential errors and biases in the estimation of partial variances. Hypothesis tests are constructed to reduce the effect of sampling errors on the estimation of partial variances. Our results show that compared to simple random sampling and random balance design sampling, sensitivity indices (ratios of partial variances to variance of a specific model output) estimated by search-curve based sampling generally have higher precision but larger underestimations. Compared to simple random sampling, random balance design sampling generally provides higher estimation precision for partial variances contributed by the main effects of parameters. The theoretical derivation of partial variances contributed by higher-order interactions and the calculation of their corresponding estimation errors in different sampling schemes can help us better understand the FAST method and provide a fundamental basis for FAST applications and further improvements. PMID:24143037
Air-stable ink for scalable, high-throughput layer deposition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weil, Benjamin D; Connor, Stephen T; Cui, Yi
A method for producing and depositing air-stable, easily decomposable, vulcanized ink on any of a wide range of substrates is disclosed. The ink enables high-volume production of optoelectronic and/or electronic devices using scalable production methods, such as roll-to-roll transfer, fast rolling processes, and the like.
Vegetation effects on soil organic matter chemistry of aggregate fractions in a Hawaiian forest
USDA-ARS?s Scientific Manuscript database
We examined chemical changes from live plant tissue to soil organic matter (SOM) to determine the persistence of individual plant compounds into soil aggregate fractions. We characterized the tissue chemistry of a slow- (Dicranopteris linearis) and fast-decomposing species (Cheirodendron trigynum) a...
Radiocarbon Evidence That Millennial and Fast-Cycling Soil Carbon are Equally Sensitive to Warming
NASA Astrophysics Data System (ADS)
Vaughn, L. S.; Torn, M. S.; Porras, R. C.
2017-12-01
Within the century, the Arctic is expected to shift from a sink to a source of atmospheric CO2 due to climate-induced increases in soil carbon mineralization. The magnitude of this effect remains uncertain, due in large part to unknown temperature sensitivities of organic matter decomposition. In particular, the distribution of temperature sensitivities across soil carbon pools remains unknown. New experimental approaches are needed, because studies that fit multi-pool models to CO2 flux measurements may be sensitive to model assumptions, statistical effects, and non-steady-state changes in substrate availability or microbial activity. In this study, we developed a new methodology using natural abundance radiocarbon to evaluate temperature sensitivities across soil carbon pools. In two incubation experiments with soils from Barrow, AK, we (1) evaluated soil carbon age and decomposability, (2) disentangled the effects of temperature and substrate depletion on carbon mineralization, and (3) compared the temperature sensitivities of fast- and slow-cycling soil carbon pools. From a long-term incubation, both respired CO2 and the remaining soil organic matter were highly depleted in radiocarbon. At 20 cm depth, median Δ14C values were -167‰ in respired CO2 and -377‰ in soil organic matter, corresponding to turnover times of 1800 and 4800 years, respectively. Such negative Δ14C values indicate both storage and decomposition of old, stabilized carbon, while radiocarbon differences between the mineralized and non-mineralized fractions suggest that decomposability varies along a turnover time gradient. Applying a new analytical method combining CO2 flux and Δ14C, we found that fast- and slow-cycling carbon pools were equally sensitive to temperature, with a Q10 of 2 irrespective of turnover time. We conclude that in these Arctic soils, ancient soil carbon is vulnerable to warming under thawed, aerobic conditions. 
In contrast to many previous studies, we found no difference in temperature sensitivity of decomposition between fast- and slow-cycling pools. These findings suggest that in these soils, carbon stabilization mechanisms other than chemical recalcitrance mediate temperature sensitivities, and even old SOC will be readily decomposable as climate warms.
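The Q10 relation behind these statements, where a Q10 of 2 means the mineralization rate doubles for every 10 °C of warming, is a one-line calculation (a generic formula with illustrative numbers):

```python
def q10_rate(rate_ref, t_ref, t, q10=2.0):
    """Scale a mineralization rate from reference temperature t_ref to t."""
    return rate_ref * q10 ** ((t - t_ref) / 10.0)

# with Q10 = 2, warming from 5 C to 15 C doubles the rate
doubled = q10_rate(1.0, 5.0, 15.0)
```

The study's finding is that the same q10 value applies across pools regardless of turnover time.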
Sensitivity and Uncertainty Analysis of the GFR MOX Fuel Subassembly
NASA Astrophysics Data System (ADS)
Lüley, J.; Vrban, B.; Čerba, Š.; Haščík, J.; Nečas, V.; Pelloni, S.
2014-04-01
We performed sensitivity and uncertainty analysis as well as benchmark similarity assessment of the MOX fuel subassembly designed for the Gas-Cooled Fast Reactor (GFR) as a representative material of the core. Material composition was defined for each assembly ring separately allowing us to decompose the sensitivities not only for isotopes and reactions but also for spatial regions. This approach was confirmed by direct perturbation calculations for chosen materials and isotopes. Similarity assessment identified only ten partly comparable benchmark experiments that can be utilized in the field of GFR development. Based on the determined uncertainties, we also identified main contributors to the calculation bias.
Hydrogen and elemental carbon production from natural gas and other hydrocarbons
Detering, Brent A.; Kong, Peter C.
2002-01-01
Diatomic hydrogen and unsaturated hydrocarbons are produced as reactor gases in a fast quench reactor. During the fast quench, the unsaturated hydrocarbons are further decomposed by reheating the reactor gases. More diatomic hydrogen is produced, along with elemental carbon. Other gas may be added at different stages in the process to form a desired end product and prevent back reactions. The product is a substantially clean-burning hydrogen fuel that leaves no greenhouse gas emissions, and elemental carbon that may be used in powder form as a commodity for several processes.
Li, Wei
2016-06-01
This paper considers a unified geometric projection approach for: 1) decomposing a general system of cooperative agents coupled via Laplacian matrices or stochastic matrices and 2) deriving a centroid-subsystem and many shape-subsystems, where each shape-subsystem has distinct properties (e.g., preservation of formation and stability of the original system, sufficiently simple structure and explicit formation evolution of agents, and decoupling from the centroid-subsystem) that facilitate subsequent analyses. In particular, this paper highlights an additional merit of the approach: for adjustments of the coupling topologies of agents, which frequently occur in system design (e.g., adding or removing an edge, moving an edge to a new place, or changing the weight of an edge), the corresponding new shape-subsystems can be derived by a few simple computations merely from the old shape-subsystems, without referring to the original system, providing further convenience for analysis and flexibility of choice. Finally, such fast recalculations of new subsystems under topology adjustments are illustrated with examples.
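The centroid/shape split can be illustrated on generic Laplacian-coupled consensus dynamics: the average (centroid) is invariant under x' = -Lx, while the orthogonal shape part decays. This is a sketch of the underlying idea, not the paper's specific projection; the path-graph Laplacian and step size below are illustrative.

```python
import numpy as np

# path-graph Laplacian for 4 agents (symmetric, zero row sums)
L = np.array([[ 1.0, -1.0,  0.0,  0.0],
              [-1.0,  2.0, -1.0,  0.0],
              [ 0.0, -1.0,  2.0, -1.0],
              [ 0.0,  0.0, -1.0,  1.0]])

x = np.array([4.0, 0.0, -1.0, 5.0])
centroid0 = x.mean()

eps = 0.1                       # Euler step; stable since eps < 2 / lambda_max
for _ in range(500):
    x = x - eps * L @ x         # discretized consensus dynamics x' = -L x

# the centroid (average) is preserved exactly; the shape part decays to zero
shape = x - x.mean()
```

Because the all-ones vector is in the null space of L, the centroid-subsystem is decoupled and trivial here; all the interesting dynamics live in the shape part, which is the structure the paper's projection exploits.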
Robust control of combustion instabilities
NASA Astrophysics Data System (ADS)
Hong, Boe-Shong
Several interacting dynamical subsystems, each of which has its own time scale and physical significance, are decomposed to build a feedback-controlled, robust combustion-fluid dynamic system. On the fast time scale, the phenomenon of combustion instability corresponds to the internal feedback of two subsystems: acoustic dynamics and flame dynamics, which are parametrically dependent on the slow-time-scale mean-flow dynamics controlled for global performance by a mean-flow controller. This dissertation constructs such a control system, through modeling, analysis, and synthesis, to deal with model uncertainties, environmental noise, and time-varying mean-flow operation. The conservation laws are decomposed into fast-time acoustic dynamics and slow-time mean-flow dynamics, which serve to synthesize an LPV (linear parameter varying) L2-gain robust control law, in which a robust observer is embedded for estimating and controlling the internal state while achieving trade-offs among robustness, performance, and operation. The robust controller is formulated as two LPV-type Linear Matrix Inequalities (LMIs), whose numerical solver is developed by the finite-element method. Some important issues related to physical understanding and engineering application are discussed with simulated results of the control system.
Detering, Brent A.; Kong, Peter C.
2006-08-29
A fast-quench reactor for production of diatomic hydrogen and unsaturated hydrocarbons is provided. During the fast quench in the downstream diverging section of the nozzle, such as in a free expansion chamber, the unsaturated hydrocarbons are further decomposed by reheating the reactor gases. More diatomic hydrogen is produced, along with elemental carbon. Other gas may be added at different stages in the process to form a desired end product and prevent back reactions. The product is a substantially clean-burning hydrogen fuel that leaves no greenhouse gas emissions, and elemental carbon that may be used in powder form as a commodity for several processes.
Why are idioms recognized fast?
Tabossi, Patrizia; Fanari, Rachele; Wolf, Kinou
2009-06-01
It is an established fact that idiomatic expressions are fast to process. However, the explanation of this phenomenon is controversial. Using a semantic judgment paradigm, in which people decide whether a string is meaningful or not, the present experiment tested the predictions of the three main theories of idiom recognition: the lexical representation hypothesis, the idiom decomposition hypothesis, and the configuration hypothesis. Participants were faster at judging decomposable idioms, nondecomposable idioms, and clichés than at judging their matched controls. The effect was comparable for all conventional expressions. The results suggest that, as posited by the configuration hypothesis, it is the fact that these are known expressions, rather than their idiomaticity, that explains their fast recognition.
Motion-based, high-yielding, and fast separation of different charged organics in water.
Xuan, Mingjun; Lin, Xiankun; Shao, Jingxin; Dai, Luru; He, Qiang
2015-01-12
We report a self-propelled Janus silica micromotor as a motion-based analytical method for achieving fast target separation of polyelectrolyte microcapsules, enriching different charged organics with low molecular weights in water. The self-propelled Janus silica micromotor catalytically decomposes a hydrogen peroxide fuel and moves along the direction of the catalyst face at a speed of 126.3 μm s(-1). Biotin-functionalized Janus micromotors can specifically capture and rapidly transport streptavidin-modified polyelectrolyte multilayer capsules, which can effectively enrich and separate different charged organics in water. The interior of the polyelectrolyte multilayer microcapsules was filled with a strongly charged polyelectrolyte, and thus a favorable Donnan equilibrium is established between the inner solution within the capsules and the bulk solution to entrap oppositely charged organics in water. The integration of these self-propelled Janus silica micromotors and polyelectrolyte multilayer capsules into a lab-on-chip device that enables the separation and analysis of charged organics could be attractive for a diverse range of applications. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Stock, Joachim W.; Kitzmann, Daniel; Patzer, A. Beate C.; Sedlmayr, Erwin
2018-06-01
For the calculation of complex neutral/ionized gas phase chemical equilibria, we present a versatile and efficient semi-analytical computer program, called FastChem. The applied method is based on the solution of a system of coupled nonlinear (and linear) algebraic equations, namely the law of mass action and the element conservation equations including charge balance, in many variables. Specifically, the system of equations is decomposed into a set of coupled nonlinear equations in one variable each, which are solved analytically whenever feasible to reduce computation time. Notably, the electron density is determined by using the method of Nelder and Mead at low temperatures. The program is written in object-oriented C++, which makes it easy to couple the code with other programs, although a stand-alone version is provided. FastChem can be used in parallel or sequentially and is available under the GNU General Public License version 3 at https://github.com/exoclime/FastChem together with several sample applications. The code has been successfully validated against previous studies and its convergence behavior has been tested even for extreme physical parameter ranges down to 100 K and up to 1000 bar. FastChem converges stably and robustly in even the most demanding chemical situations, which have sometimes posed extreme challenges for previous algorithms.
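The decomposition into one-variable equations can be illustrated with the simplest mass-action system: the dissociation H2 <-> 2H, where the equilibrium constant and hydrogen element conservation together determine a single unknown. This is only a toy sketch of the idea, not FastChem's actual solver; K and the total density are made-up numbers.

```python
def h_equilibrium(K, N_H):
    """Solve the one-variable equilibrium for H2 <-> 2H.
    K   : equilibrium constant, n_H**2 / n_H2 (assumed value)
    N_H : total hydrogen nuclei density, n_H + 2*n_H2 = N_H
    Returns (n_H, n_H2)."""
    # f(n) = n + 2*n**2/K - N_H is monotone increasing for n >= 0,
    # so the root is unique; Newton's method in one variable finds it.
    n = N_H  # start from the fully dissociated limit
    for _ in range(50):
        f = n + 2.0 * n * n / K - N_H
        df = 1.0 + 4.0 * n / K
        n -= f / df
    return n, n * n / K

n_H, n_H2 = h_equilibrium(K=1.0, N_H=1.0)
```

At the root, both the law of mass action and element conservation hold simultaneously; a network of many species couples such one-variable solves through the shared element abundances.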
A novel ECG data compression method based on adaptive Fourier decomposition
NASA Astrophysics Data System (ADS)
Tan, Chunyu; Zhang, Liming
2017-12-01
This paper presents a novel electrocardiogram (ECG) compression method based on adaptive Fourier decomposition (AFD). AFD is a newly developed signal decomposition approach, which can decompose a signal with fast convergence, and hence reconstruct ECG signals with high fidelity. Unlike most of the high performance algorithms, our method does not make use of any preprocessing operation before compression. Huffman coding is employed for further compression. Validated with 48 ECG recordings of MIT-BIH arrhythmia database, the proposed method achieves the compression ratio (CR) of 35.53 and the percentage root mean square difference (PRD) of 1.47% on average with N = 8 decomposition times and a robust PRD-CR relationship. The results demonstrate that the proposed method has a good performance compared with the state-of-the-art ECG compressors.
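The two reported figures of merit can be computed with a few lines of numpy. The PRD definition below (no mean removal) is one common convention; the paper may normalize differently, and the toy signal is purely illustrative.

```python
import numpy as np

def prd(x, x_rec):
    """Percentage root-mean-square difference between a signal and its
    reconstruction (one common definition, without mean removal)."""
    return 100.0 * np.sqrt(np.sum((x - x_rec) ** 2) / np.sum(x ** 2))

def compression_ratio(n_original_bits, n_compressed_bits):
    """CR = original size / compressed size."""
    return n_original_bits / n_compressed_bits

x = np.sin(np.linspace(0, 2 * np.pi, 100))                 # toy "ECG" segment
x_rec = x + 0.01 * np.cos(np.linspace(0, 2 * np.pi, 100))  # toy reconstruction
prd_value = prd(x, x_rec)          # about 1% for this 1%-amplitude error
cr_value = compression_ratio(3553, 100)
```

A PRD near 1-2% is generally considered a high-fidelity ECG reconstruction, which is what makes the reported CR of 35.53 at PRD 1.47% a strong operating point.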
Bio-oil production from palm fronds by fast pyrolysis process in fluidized bed reactor
NASA Astrophysics Data System (ADS)
Rinaldi, Nino; Simanungkalit, Sabar P.; Kiky Corneliasari, S.
2017-01-01
Fast pyrolysis of palm fronds was conducted in a fluidized bed reactor to yield a bio-oil product (pyrolysis oil), using sea sand as the heat transfer medium. The objectives of this study were to design the fluidized bed reactor, to conduct the fast pyrolysis process to produce bio-oil from palm fronds, and to characterize the feed and the bio-oil product. The fast pyrolysis process was run continuously at a feeding rate of around 500 g/hr. The biomass conversion to bio-oil was about 35.5%, which is still low; this is likely because the heating inside the reactor was insufficient to fully decompose the palm frond feedstock. Moreover, acid compounds were the species most frequently observed in the bio-oil product.
Catalytic Fast Pyrolysis of Cellulose by Integrating Dispersed Nickel Catalyst with HZSM-5 Zeolite
NASA Astrophysics Data System (ADS)
Lei, Xiaojuan; Bi, Yadong; Zhou, Wei; Chen, Hui; Hu, Jianli
2018-01-01
The effect of integrating dispersed nickel catalyst with HZSM-5 zeolite on upgrading of vapors produced from pyrolysis of lignocellulosic biomass was investigated. The active component nickel nitrate was introduced onto the cellulose substrate by an impregnation technique. Based on TGA experimental results, we discovered that nickel nitrate first released its water of crystallization and then decomposed into nickel oxide, which was reduced in situ to metallic nickel through a carbothermal reduction reaction. The in-situ generated nickel nanoparticles were found to be highly dispersed over the carbon substrate and were responsible for catalyzing reforming and cracking of tars. In catalytic fast pyrolysis of cellulose, the addition of nickel nitrate caused more char formation at the expense of the yield of the condensable liquid products. In addition, the selectivity of linear oxygenates was increased whereas the yield of levoglucosan was reduced. Oxygen-containing compounds in pyrolysis vapors were deoxygenated into aromatics using HZSM-5. Moreover, the amount of condensable liquid products decreased with the addition of HZSM-5.
Bacterial decontamination using ambient pressure nonthermal discharges
DOE Office of Scientific and Technical Information (OSTI.GOV)
Birmingham, J.G.; Hammerstrom, D.J.
2000-02-01
Atmospheric pressure nonthermal plasmas can efficiently deactivate bacteria in gases, liquids, and on surfaces, and can decompose hazardous chemicals. This paper focuses on the changes to bacterial spores and toxic biochemical compounds, such as mycotoxins, after their treatment in ambient pressure discharges. The ability of nonthermal plasmas to decompose toxic chemicals and deactivate hazardous biological materials has been applied to sterilizing medical instruments, ozonating water, and purifying air. In addition, the fast lysis of bacterial spores and other cells has led us to include plasma devices within pathogen detection instruments, where nucleic acids must be accessed. Decontaminating chemical and biological warfare materials from large, high-value targets such as building surfaces after a terrorist attack is especially challenging. A large-area plasma decontamination technology is described.
Separation of Doppler radar-based respiratory signatures.
Lee, Yee Siong; Pathirana, Pubudu N; Evans, Robin J; Steinfort, Christopher L
2016-08-01
Respiration detection using microwave Doppler radar has attracted significant interest primarily due to its unobtrusive form of measurement. With less preparation in comparison with attaching physical sensors on the body or wearing special clothing, Doppler radar for respiration detection and monitoring is particularly useful for long-term monitoring applications such as sleep studies (e.g. sleep apnoea, SIDS). However, motion artefacts and interference from multiple sources limit the widespread use and the scope of potential applications of this technique. Utilising recent advances in independent component analysis (ICA) and multiple antenna configuration schemes, this work investigates the feasibility of decomposing the respiratory signature of each subject from the Doppler-based measurements. Experimental results demonstrated that FastICA is capable of separating two distinct respiratory signatures from two subjects adjacent to each other even in the presence of apnoea. In each test scenario, the separated respiratory patterns correlate closely to the reference respiration strap readings. The effectiveness of FastICA in dealing with the mixed Doppler radar respiration signals confirms its applicability in healthcare applications, especially in long-term home-based monitoring, as such settings usually involve at least two people in the same environment (e.g. two people sleeping next to each other). Further, the use of FastICA to separate involuntary movements such as arm swing from the respiratory signature of a single subject was explored in a multiple antenna environment. The separated respiratory signal indeed demonstrated a high correlation with the measurements made by a respiratory strap used currently in clinical settings.
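The blind-source-separation step can be sketched with a minimal symmetric FastICA implementation on synthetic mixtures. The two "breathing-like" sources, the mixing matrix, and the tanh nonlinearity are all illustrative choices, not the study's radar data or processing chain.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 8, 2000)

# Two synthetic non-Gaussian sources standing in for two subjects' breathing
s1 = np.sign(np.sin(2 * np.pi * 0.9 * t))          # square wave
s2 = 2 * ((0.31 * t) % 1.0) - 1                    # sawtooth
S = np.vstack([s1, s2])

A = np.array([[1.0, 0.6],
              [0.4, 1.0]])                         # unknown mixing (two antennas)
X = A @ S                                          # observed mixtures

# --- minimal symmetric FastICA with a tanh nonlinearity ---
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(X))
Xw = E @ np.diag(d ** -0.5) @ E.T @ X              # whitening

W = rng.standard_normal((2, 2))
for _ in range(100):
    G = np.tanh(W @ Xw)
    W_new = G @ Xw.T / Xw.shape[1] - np.diag((1 - G ** 2).mean(axis=1)) @ W
    U, _, Vt = np.linalg.svd(W_new)
    W = U @ Vt                                     # symmetric decorrelation

Y = W @ Xw                                         # sources recovered up to sign/order
```

Each recovered component should correlate strongly with exactly one source; sign, order, and scale are inherently ambiguous in ICA, which is why validation against a reference respiration strap is done via correlation.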
An algorithm of adaptive scale object tracking in occlusion
NASA Astrophysics Data System (ADS)
Zhao, Congmei
2017-05-01
Although correlation filter-based trackers achieve competitive results in both accuracy and robustness, problems remain in handling scale variation, object occlusion, fast motion, and so on. In this paper, a multi-scale kernel correlation filter algorithm based on a random fern detector is proposed. The tracking task is decomposed into target scale estimation and translation estimation. At the same time, Color Names features and HOG features are fused at the response level to further improve the overall tracking performance of the algorithm. In addition, an online random fern classifier is trained to re-acquire the target after it is lost. Comparisons with algorithms such as KCF, DSST, TLD, MIL, CT and CSK show that the proposed approach estimates the object state accurately and handles object occlusion effectively.
Multispectral image fusion for illumination-invariant palmprint recognition
Zhang, Xinman; Xu, Xuebin; Shang, Dongpeng
2017-01-01
Multispectral palmprint recognition has shown broad prospects for personal identification due to its high accuracy and great stability. In this paper, we develop a novel illumination-invariant multispectral palmprint recognition method. To combine the information from multiple spectral bands, an image-level fusion framework is completed based on a fast and adaptive bidimensional empirical mode decomposition (FABEMD) and a weighted Fisher criterion. The FABEMD technique decomposes the multispectral images into their bidimensional intrinsic mode functions (BIMFs), on which an illumination compensation operation is performed. The weighted Fisher criterion is to construct the fusion coefficients at the decomposition level, making the images be separated correctly in the fusion space. The image fusion framework has shown strong robustness against illumination variation. In addition, a tensor-based extreme learning machine (TELM) mechanism is presented for feature extraction and classification of two-dimensional (2D) images. In general, this method has fast learning speed and satisfying recognition accuracy. Comprehensive experiments conducted on the PolyU multispectral palmprint database illustrate that the proposed method can achieve favorable results. For the testing under ideal illumination, the recognition accuracy is as high as 99.93%, and the result is 99.50% when the lighting condition is unsatisfied. PMID:28558064
Multispectral image fusion for illumination-invariant palmprint recognition.
Lu, Longbin; Zhang, Xinman; Xu, Xuebin; Shang, Dongpeng
2017-01-01
Fang, Li-Min; Lin, Min
2009-08-01
For the rapid detection of ethanol content, pH, and residual sugar in red wine, infrared (IR) spectra of 44 wine samples were analyzed. The fast independent component analysis (FastICA) algorithm was used to decompose the IR spectral data, yielding the independent components and the mixing matrix. Then, an ICA-ANN calibration model with a three-layer artificial neural network (ANN) structure was built using the back-propagation (BP) algorithm. The models were used to estimate ethanol content, pH, and residual sugar in red wine samples in both the calibration set and the prediction set. The correlation coefficient (r) of prediction and the root mean square error of prediction (RMSEP) were used as evaluation indexes. The results indicate that the r and RMSEP for the prediction of ethanol content, pH, and residual sugar content are 0.953, 0.983 and 0.994, and 0.161, 0.017 and 0.181, respectively. The maximum relative deviations between the values predicted by the ICA-ANN method and the reference values for the 22 samples in the prediction set are less than 4%. These results provide a foundation for the application and further development of an IR on-line red wine analyzer.
NASA Astrophysics Data System (ADS)
Zhang, Mingkai; Liu, Yanchen; Cheng, Xun; Zhu, David Z.; Shi, Hanchang; Yuan, Zhiguo
2018-03-01
Quantifying rainfall-derived inflow and infiltration (RDII) in a sanitary sewer is difficult when RDII and overflow occur simultaneously. This study proposes a novel conductivity-based method for estimating RDII. The method decomposes RDII into rainfall-derived inflow (RDI) and rainfall-induced infiltration (RII) on the basis of conductivity data. The fast Fourier transform was adopted to analyze variations in the flow and water quality during dry weather. Nonlinear curve fitting based on the least squares algorithm was used to optimize parameters in the proposed RDII model. The method was successfully applied to real-life case studies, in which inflow and infiltration were estimated for three typical rainfall events with total rainfall volumes of 6.25 mm (light), 28.15 mm (medium), and 178 mm (heavy). Uncertainties of model parameters were estimated using the generalized likelihood uncertainty estimation (GLUE) method and were found to be acceptable. Compared with traditional flow-based methods, the proposed approach exhibits distinct advantages in estimating RDII and overflow, particularly when the two processes happen simultaneously.
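The FFT step for characterizing dry-weather variation can be sketched on synthetic data: hourly flow with a diurnal cycle plus noise, whose dominant 24 h period falls out of the spectrum. The flow numbers and two-week window are invented for illustration, not the study's measurements.

```python
import numpy as np

# Two weeks of synthetic hourly dry-weather flow: baseline + 24 h cycle + noise
rng = np.random.default_rng(1)
hours = np.arange(14 * 24)
flow = (100.0
        + 20.0 * np.sin(2 * np.pi * hours / 24 + 1.0)
        + rng.normal(0.0, 2.0, hours.size))

spec = np.abs(np.fft.rfft(flow - flow.mean()))   # remove mean before the FFT
freqs = np.fft.rfftfreq(hours.size, d=1.0)       # cycles per hour

period_hours = 1.0 / freqs[np.argmax(spec)]      # dominant period of the pattern
```

With the mean removed, the largest spectral peak sits at the diurnal frequency; in the RDII setting, deviations of wet-weather flow from this reconstructed dry-weather pattern are what get attributed to inflow and infiltration.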
Weighted least squares phase unwrapping based on the wavelet transform
NASA Astrophysics Data System (ADS)
Chen, Jiafeng; Chen, Haiqin; Yang, Zhengang; Ren, Haixia
2007-01-01
The weighted least squares phase unwrapping algorithm is a robust and accurate method to solve the phase unwrapping problem. This method usually leads to a large sparse linear equation system. The Gauss-Seidel relaxation iterative method is usually used to solve this large linear system. However, this method is not practical due to its extremely slow convergence. The multigrid method is an efficient algorithm to improve the convergence rate, but it needs an additional weight restriction operator which is very complicated. For this reason, a multiresolution analysis method based on the wavelet transform is proposed. By applying the wavelet transform, the original system is decomposed into its coarse and fine resolution levels, and an equivalent equation system with a better convergence condition can be obtained. Fast convergence in the separate coarse resolution levels speeds up the overall system convergence rate. The simulated experiment shows that the proposed method converges faster and provides better results than the multigrid method.
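In one dimension the least-squares phase unwrapping problem has a closed form that makes the idea concrete: the normal equations reduce to integrating the wrapped phase differences (it is only in 2-D that the large sparse system above arises). A minimal sketch on a synthetic phase ramp:

```python
import numpy as np

def wrap(p):
    """Wrap phase into [-pi, pi)."""
    return (p + np.pi) % (2 * np.pi) - np.pi

def unwrap_ls_1d(psi):
    """1-D least-squares phase unwrapping: the exact solution is the
    cumulative sum of the wrapped phase differences."""
    d = wrap(np.diff(psi))
    return psi[0] + np.concatenate([[0.0], np.cumsum(d)])

x = np.linspace(0.0, 20.0, 200)
true_phase = x ** 1.2                 # smooth ramp spanning many multiples of 2*pi
recovered = unwrap_ls_1d(wrap(true_phase))
```

Recovery is exact as long as the true phase changes by less than pi between samples; in 2-D, noisy or aliased gradients make the residues inconsistent, which is exactly why the weighted least-squares formulation and fast solvers for it are needed.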
Fast sweeping method for the factored eikonal equation
NASA Astrophysics Data System (ADS)
Fomel, Sergey; Luo, Songting; Zhao, Hongkai
2009-09-01
We develop a fast sweeping method for the factored eikonal equation. The solution of a general eikonal equation is decomposed as the product of two factors: the first factor is the solution to a simple eikonal equation (such as distance), or a previously computed solution to an approximate eikonal equation; the second factor is a necessary modification/correction. Appropriate discretization and a fast sweeping strategy are designed for the equation of the correction part. The key idea is to enforce the causality of the original eikonal equation during the Gauss-Seidel iterations. Using extensive numerical examples we demonstrate that (1) the convergence behavior of the fast sweeping method for the factored eikonal equation is the same as for the original eikonal equation, i.e., the number of iterations for the Gauss-Seidel iterations is independent of the mesh size, and (2) the numerical solution from the factored eikonal equation is more accurate than the numerical solution directly computed from the original eikonal equation, especially for point sources.
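A minimal sketch of the underlying fast sweeping scheme (applied here to the plain, unfactored eikonal equation |grad T| = 1 with a point source; the paper applies the same sweeps to the correction factor instead) shows the Gauss-Seidel passes in alternating orderings:

```python
import numpy as np

def fast_sweep(n=81, sweeps=2):
    """First-order fast sweeping for |grad T| = 1 on [0,1]^2 with a
    point source at the center (toy version of the unfactored scheme)."""
    h = 1.0 / (n - 1)
    T = np.full((n, n), 1e10)
    src = n // 2
    T[src, src] = 0.0

    for _ in range(sweeps):
        for si, sj in [(1, 1), (1, -1), (-1, 1), (-1, -1)]:   # 4 sweep orderings
            for i in range(n)[::si]:
                for j in range(n)[::sj]:
                    if i == src and j == src:
                        continue
                    # upwind neighbors in each direction (1e10 = "outside")
                    a = min(T[i - 1, j] if i > 0 else 1e10,
                            T[i + 1, j] if i < n - 1 else 1e10)
                    b = min(T[i, j - 1] if j > 0 else 1e10,
                            T[i, j + 1] if j < n - 1 else 1e10)
                    # Godunov update for the quadratic local solver
                    if abs(a - b) >= h:
                        t = min(a, b) + h
                    else:
                        t = 0.5 * (a + b + np.sqrt(2 * h * h - (a - b) ** 2))
                    T[i, j] = min(T[i, j], t)                  # enforce causality
    return T, h, src

T, h, src = fast_sweep()
```

For unit speed the exact solution is the Euclidean distance from the source; the largest errors of this unfactored scheme cluster near the source singularity, which is precisely the error the factored formulation removes.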
Fast large scale structure perturbation theory using one-dimensional fast Fourier transforms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schmittfull, Marcel; Vlah, Zvonimir; McDonald, Patrick
The usual fluid equations describing the large-scale evolution of mass density in the universe can be written as local in the density, velocity divergence, and velocity potential fields. As a result, the perturbative expansion in small density fluctuations, usually written in terms of convolutions in Fourier space, can be written as a series of products of these fields evaluated at the same location in configuration space. Based on this, we establish a new method to numerically evaluate the 1-loop power spectrum (i.e., Fourier transform of the 2-point correlation function) with one-dimensional fast Fourier transforms. This is exact and a few orders of magnitude faster than previously used numerical approaches. Numerical results of the new method are in excellent agreement with the standard quadrature integration method. This fast model evaluation can in principle be extended to higher loop order where existing codes become painfully slow. Our approach follows by writing higher order corrections to the 2-point correlation function as, e.g., the correlation between two second-order fields or the correlation between a linear and a third-order field. These are then decomposed into products of correlations of linear fields and derivatives of linear fields. In conclusion, the method can also be viewed as evaluating three-dimensional Fourier space convolutions using products in configuration space, which may also be useful in other contexts where similar integrals appear.
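The core trick, evaluating a Fourier-space convolution as a pointwise product in configuration space, can be demonstrated in one dimension with numpy (a generic convolution-theorem sketch, not the paper's 1-loop kernels):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 64
a_k = rng.standard_normal(n) + 1j * rng.standard_normal(n)  # Fourier-space field
b_k = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# Direct circular convolution in Fourier space: O(n^2)
conv_direct = np.array([sum(a_k[m] * b_k[(q - m) % n] for m in range(n))
                        for q in range(n)])

# Same convolution via a pointwise product in configuration space: O(n log n)
a_x = np.fft.ifft(a_k)
b_x = np.fft.ifft(b_k)
conv_fft = np.fft.fft(a_x * b_x) * n    # factor n fixes numpy's 1/n in ifft
```

The two results agree to floating-point precision; the paper's speedup comes from replacing expensive multi-dimensional convolution integrals with products of one-dimensional transforms in exactly this way.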
Fast large scale structure perturbation theory using one-dimensional fast Fourier transforms
Schmittfull, Marcel; Vlah, Zvonimir; McDonald, Patrick
2016-05-01
ParaExp Using Leapfrog as Integrator for High-Frequency Electromagnetic Simulations
NASA Astrophysics Data System (ADS)
Merkel, M.; Niyonzima, I.; Schöps, S.
2017-12-01
Recently, ParaExp was proposed for the time integration of linear hyperbolic problems. It splits the time interval of interest into subintervals and computes the solution on each subinterval in parallel. The overall solution is decomposed into a particular solution defined on each subinterval with zero initial conditions and a homogeneous solution propagated by the matrix exponential applied to the initial conditions. The efficiency of the method depends on fast approximations of this matrix exponential based on recent results from numerical linear algebra. This paper deals with the application of ParaExp in combination with Leapfrog to electromagnetic wave problems in time domain. Numerical tests are carried out for a simple toy problem and a realistic spiral inductor model discretized by the Finite Integration Technique.
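The homogeneous-solution propagation at the heart of ParaExp, applying the matrix exponential to initial conditions, can be sketched with a self-contained scaling-and-squaring exponential (a simple stand-in for a production expm; the 2x2 oscillator system is a toy, not the wave problem in the paper):

```python
import numpy as np

def expm_taylor(A, terms=20, squarings=8):
    """Matrix exponential by scaling-and-squaring with a truncated
    Taylor series (illustrative stand-in for scipy.linalg.expm)."""
    A = A / 2.0 ** squarings
    E = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        E = E + term
    for _ in range(squarings):
        E = E @ E                    # undo the scaling by repeated squaring
    return E

# Homogeneous solution of u' = A u propagated by exp(A t), as in ParaExp.
# Toy undamped oscillator u'' = -u written as a first-order system:
A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])
u0 = np.array([1.0, 0.0])
t = 1.3
u_t = expm_taylor(A * t) @ u0        # analytically (cos t, -sin t)
```

For the large, sparse, non-symmetric systems arising from spatial discretization, ParaExp relies on Krylov or polynomial approximations of this exponential action rather than a dense expm, which is where the cited numerical linear algebra results come in.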
Fast Detection of Material Deformation through Structural Dissimilarity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ushizima, Daniela; Perciano, Talita; Parkinson, Dilworth
2015-10-29
Designing materials that are resistant to extreme temperatures and brittleness relies on assessing the structural dynamics of samples. Algorithms are critically important to characterize material deformation under stress conditions. Here, we report on our design of coarse-grain parallel algorithms for image quality assessment based on structural information and on crack detection of gigabyte-scale experimental datasets. We show how key steps can be decomposed into distinct processing flows, one based on the structural similarity (SSIM) quality measure, and another on spectral content. These algorithms act upon image blocks that fit into memory, and can execute independently. We discuss the scientific relevance of the problem, key developments, and the decomposition of complementary tasks into separate executions. We show how to apply SSIM to detect material degradation, and illustrate how this metric can be allied to spectral analysis for structure probing, while using tiled multi-resolution pyramids stored in HDF5 chunked multi-dimensional arrays. Results show that the proposed experimental data representation supports an average compression rate of 10X, and data compression scales linearly with the data size. We also illustrate how to correlate SSIM to crack formation, and how to use our numerical schemes to enable fast detection of deformation from 3D datasets evolving in time.
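The SSIM measure itself is a short formula. The sketch below computes a single global SSIM value over one block (the pipeline described above applies it blockwise across tiles; the images here are random toys):

```python
import numpy as np

def ssim_global(x, y, dynamic_range=1.0):
    """Single-window SSIM index between two equal-sized image blocks,
    using the standard constants C1 = (0.01 R)^2, C2 = (0.03 R)^2."""
    C1 = (0.01 * dynamic_range) ** 2
    C2 = (0.03 * dynamic_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = ((x - mx) * (y - my)).mean()     # covariance
    return ((2 * mx * my + C1) * (2 * cxy + C2)) / \
           ((mx ** 2 + my ** 2 + C1) * (vx + vy + C2))

rng = np.random.default_rng(3)
img = rng.random((64, 64))                                  # toy reference block
noisy = np.clip(img + rng.normal(0, 0.2, img.shape), 0, 1)  # toy "deformed" block
ssim_same = ssim_global(img, img)        # identical blocks score 1
ssim_degraded = ssim_global(img, noisy)  # structural change lowers the score
```

Tracking how blockwise SSIM between consecutive load steps drops below a threshold is one way such a metric flags the onset of cracks or deformation.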
Hardware design and implementation of fast DOA estimation method based on multicore DSP
NASA Astrophysics Data System (ADS)
Guo, Rui; Zhao, Yingxiao; Zhang, Yue; Lin, Qianqiang; Chen, Zengping
2016-10-01
In this paper, we present a high-speed real-time signal processing hardware platform based on a multicore digital signal processor (DSP). The real-time signal processing platform has several excellent characteristics, including high-performance computing, low power consumption, large-capacity data storage, and high-speed data transmission, which enable it to meet the constraint of real-time direction of arrival (DOA) estimation. To reduce the high computational complexity of the DOA estimation algorithm, a novel real-valued MUSIC estimator is used. The algorithm is decomposed into several independent steps and the time consumption of each step is counted. Based on these timing statistics, we present a new parallel processing strategy to distribute the task of DOA estimation across the cores of the real-time signal processing hardware platform. Experimental results demonstrate that the high processing capability of the signal processing platform meets the constraint of real-time DOA estimation.
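For reference, the standard (complex-valued) MUSIC estimator that the real-valued variant accelerates can be sketched in numpy: form the sample covariance, split off the noise subspace, and scan a steering-vector grid. The array geometry, angles, and SNR below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
M, d = 8, 0.5                          # array elements, spacing in wavelengths
true_doas = np.array([-20.0, 25.0])    # source directions, degrees
snapshots = 400

def steering(theta_deg):
    phase = 2 * np.pi * d * np.sin(np.radians(theta_deg)) * np.arange(M)
    return np.exp(1j * phase)

A = np.column_stack([steering(a) for a in true_doas])
S = rng.standard_normal((2, snapshots)) + 1j * rng.standard_normal((2, snapshots))
N = 0.1 * (rng.standard_normal((M, snapshots)) + 1j * rng.standard_normal((M, snapshots)))
X = A @ S + N                          # received snapshots

R = X @ X.conj().T / snapshots         # sample covariance
_, V = np.linalg.eigh(R)               # eigenvalues in ascending order
En = V[:, : M - 2]                     # noise subspace (2 sources assumed known)

grid = np.arange(-90.0, 90.0, 0.25)
P = np.array([1.0 / np.linalg.norm(En.conj().T @ steering(g)) ** 2 for g in grid])

peaks = [i for i in range(1, len(grid) - 1) if P[i] > P[i - 1] and P[i] >= P[i + 1]]
doa_est = np.sort(grid[sorted(peaks, key=lambda i: P[i])[-2:]])
```

The eigendecomposition and the grid scan dominate the cost, which is why the paper both replaces the complex arithmetic with a real-valued formulation and parallelizes the independent steps across DSP cores.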
Joint Feature Extraction and Classifier Design for ECG-Based Biometric Recognition.
Gutta, Sandeep; Cheng, Qi
2016-03-01
Traditional biometric recognition systems often utilize physiological traits such as fingerprint, face, iris, etc. Recent years have seen a growing interest in electrocardiogram (ECG)-based biometric recognition techniques, especially in the field of clinical medicine. In existing ECG-based biometric recognition methods, feature extraction and classifier design are usually performed separately. In this paper, a multitask learning approach is proposed, in which feature extraction and classifier design are carried out simultaneously. Weights are assigned to the features within the kernel of each task. We decompose the matrix consisting of all the feature weights into sparse and low-rank components. The sparse component determines the features that are relevant to identify each individual, and the low-rank component determines the common feature subspace that is relevant to identify all the subjects. A fast optimization algorithm is developed, which requires only the first-order information. The performance of the proposed approach is demonstrated through experiments using the MIT-BIH Normal Sinus Rhythm database.
A fast estimation of shock wave pressure based on trend identification
NASA Astrophysics Data System (ADS)
Yao, Zhenjian; Wang, Zhongyu; Wang, Chenchen; Lv, Jing
2018-04-01
In this paper, a fast method based on trend identification is proposed to accurately estimate the shock wave pressure in a dynamic measurement. Firstly, the collected output signal of the pressure sensor is reconstructed by discrete cosine transform (DCT) to reduce the computational complexity for the subsequent steps. Secondly, the empirical mode decomposition (EMD) is applied to decompose the reconstructed signal into several components with different frequency-bands, and the last few low-frequency components are chosen to recover the trend of the reconstructed signal. In the meantime, the optimal component number is determined based on the correlation coefficient and the normalized Euclidean distance between the trend and the reconstructed signal. Thirdly, with the areas under the gradient curve of the trend signal, the stable interval that produces the minimum can be easily identified. As a result, the stable value of the output signal is achieved in this interval. Finally, the shock wave pressure can be estimated according to the stable value of the output signal and the sensitivity of the sensor in the dynamic measurement. A series of shock wave pressure measurements are carried out with a shock tube system to validate the performance of this method. The experimental results show that the proposed method works well in shock wave pressure estimation. Furthermore, comparative experiments also demonstrate the superiority of the proposed method over the existing approaches in both estimation accuracy and computational efficiency.
Unified commutation-pruning technique for efficient computation of composite DFTs
NASA Astrophysics Data System (ADS)
Castro-Palazuelos, David E.; Medina-Melendrez, Modesto Gpe.; Torres-Roman, Deni L.; Shkvarko, Yuriy V.
2015-12-01
An efficient computation of a composite length discrete Fourier transform (DFT), as well as a fast Fourier transform (FFT) of both time and space data sequences in uncertain (non-sparse or sparse) computational scenarios, requires specific processing algorithms. Traditional algorithms typically employ some pruning methods without any commutations, which prevents them from attaining the potential computational efficiency. In this paper, we propose an alternative unified approach with automatic commutations between three computational modalities aimed at efficient computations of the pruned DFTs adapted for variable composite lengths of the non-sparse input-output data. The first modality is an implementation of the direct computation of a composite length DFT, the second one employs the second-order recursive filtering method, and the third one performs the new pruned decomposed transform. The pruned decomposed transform algorithm performs the decimation in time or space (DIT) data acquisition domain and, then, decimation in frequency (DIF). The unified combination of these three algorithms is addressed as the DFTCOMM technique. Based on the treatment of the combinational-type hypotheses testing optimization problem of preferable allocations between all feasible commuting-pruning modalities, we have found the global optimal solution to the pruning problem that always requires a fewer or, at most, the same number of arithmetic operations than other feasible modalities. The DFTCOMM method outperforms the existing competing pruning techniques in the sense of attainable savings in the number of required arithmetic operations. It requires fewer or at most the same number of arithmetic operations for its execution than any other of the competing pruning methods reported in the literature. Finally, we provide the comparison of the DFTCOMM with the recently developed sparse fast Fourier transform (SFFT) algorithmic family. 
We show that, in sensing scenarios with a sparse or non-sparse data Fourier spectrum, the DFTCOMM technique is robust against such model uncertainties, in the sense of insensitivity to sparsity/non-sparsity restrictions and to the variability of the operating parameters.
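Output pruning, one ingredient of the approach above, is easy to sketch: when only K of the N output bins are needed, evaluating those bins directly costs O(KN) operations and beats a full O(N log N) FFT for small K. The snippet below is an illustrative baseline only, not the DFTCOMM algorithm itself.

```python
import cmath

def pruned_dft(x, wanted_bins):
    """Direct DFT evaluated only at the requested output bins.

    Illustrative sketch of output pruning (not DFTCOMM): for K wanted
    bins out of N, the cost is O(K*N) instead of the full transform.
    """
    N = len(x)
    out = {}
    for k in wanted_bins:
        w = cmath.exp(-2j * cmath.pi * k / N)  # twiddle factor for bin k
        out[k] = sum(x[n] * w ** n for n in range(N))
    return out
```

The commuting idea in the abstract amounts to choosing, per problem size and bin set, whichever of several such evaluation strategies minimizes the operation count.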
Pan, Xu; Cornelissen, Johannes H C; Zhao, Wei-Wei; Liu, Guo-Fang; Hu, Yu-Kun; Prinzing, Andreas; Dong, Ming; Cornwell, William K
2014-09-01
Leaf litter decomposability is an important effect trait for ecosystem functioning. However, it is unknown how this effect trait evolved through plant history as a leaf 'afterlife' integrator of the evolution of multiple underlying traits upon which adaptive selection must have acted. Did decomposability evolve in a Brownian fashion without any constraints? Was evolution rapid at first and then slowed? Or was there an underlying mean-reverting process that makes the evolution of extreme trait values unlikely? Here, we test the hypothesis that the evolution of decomposability has undergone certain mean-reverting forces due to strong constraints and trade-offs in the leaf traits that have afterlife effects on litter quality to decomposers. To test this, we examined the leaf litter decomposability and seven key leaf traits of 48 tree species in the temperate area of China and fitted them to three evolutionary models: the Brownian motion model (BM), the early burst model (EB), and the Ornstein-Uhlenbeck model (OU). The OU model, which does not allow unlimited trait divergence through time, was the best-fit model for leaf litter decomposability and all seven leaf traits. These results support the hypothesis that neither decomposability nor the underlying traits have been able to diverge toward progressively extreme values through evolutionary time. These results reinforce our understanding of the relationships between leaf litter decomposability and leaf traits from an evolutionary perspective and may be a helpful step toward reconstructing deep-time carbon cycling based on taxonomic composition with more confidence.
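The contrast between the candidate models can be sketched by simulation: an Ornstein-Uhlenbeck process reverts toward a mean, so trait variance saturates rather than growing without bound as under Brownian motion. The parameter values below are hypothetical, chosen only to show that contrast.

```python
import random

def simulate(theta, mu, sigma, x0, dt, steps, rng):
    """Euler-Maruyama path of an Ornstein-Uhlenbeck trait model,
    dx = -theta*(x - mu)*dt + sigma*dW. Setting theta = 0 recovers
    Brownian motion, so one routine illustrates both models from the
    abstract (parameters are hypothetical, not fitted values)."""
    x = x0
    path = [x]
    for _ in range(steps):
        dw = rng.gauss(0.0, dt ** 0.5)   # Brownian increment
        x += -theta * (x - mu) * dt + sigma * dw
        path.append(x)
    return path
```

Under OU, the endpoint variance approaches sigma^2 / (2*theta); under BM it grows linearly with time, which is why the OU fit signals bounded trait divergence.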
An Optimization-based Framework to Learn Conditional Random Fields for Multi-label Classification
Naeini, Mahdi Pakdaman; Batal, Iyad; Liu, Zitao; Hong, CharmGil; Hauskrecht, Milos
2015-01-01
This paper studies the multi-label classification problem, in which data instances are associated with multiple, possibly high-dimensional, label vectors. This problem is especially challenging when labels are dependent and one cannot decompose the problem into a set of independent classification problems. To address the problem and properly represent label dependencies, we propose and study a pairwise conditional random field (CRF) model. We develop a new approach for learning the structure and parameters of the CRF from data. The approach maximizes the pseudo-likelihood of observed labels and relies on fast proximal gradient descent for learning the structure and limited-memory BFGS for learning the parameters of the model. Empirical results on several datasets show that our approach outperforms several multi-label classification baselines, including recently published state-of-the-art methods. PMID:25927015
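The pseudo-likelihood surrogate maximized above replaces the intractable joint likelihood with a product of per-label conditionals, each of which is cheap to evaluate. A minimal sketch for binary labels under a logistic link (toy parameters, not the authors' learned CRF):

```python
import math

def pseudo_log_likelihood(y, unary, pairwise):
    """Pseudo-log-likelihood of one binary label vector y under a
    pairwise model: each label is scored conditionally on the observed
    values of all the others, the tractable surrogate objective used in
    place of the joint likelihood (toy parameters for illustration)."""
    d = len(y)
    total = 0.0
    for i in range(d):
        # local field on label i given all other observed labels
        field = unary[i] + sum(pairwise[i][j] * y[j] for j in range(d) if j != i)
        p1 = 1.0 / (1.0 + math.exp(-field))   # P(y_i = 1 | rest)
        total += math.log(p1 if y[i] == 1 else 1.0 - p1)
    return total
```

Learning then amounts to ascending this objective in the unary and pairwise parameters, with sparsity-inducing penalties on the pairwise terms selecting the structure.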
Improved cache performance in Monte Carlo transport calculations using energy banding
NASA Astrophysics Data System (ADS)
Siegel, A.; Smith, K.; Felker, K.; Romano, P.; Forget, B.; Beckman, P.
2014-04-01
We present an energy banding algorithm for Monte Carlo (MC) neutral particle transport simulations which depend on large cross section lookup tables. In MC codes, read-only cross section data tables are accessed frequently, exhibit poor locality, and are typically too large to fit in fast memory. Thus, performance is often limited by long latencies to RAM, or by off-node communication latencies when the data footprint is very large and must be decomposed on a distributed memory machine. The proposed energy banding algorithm allows maximal temporal reuse of data in band sizes that can flexibly accommodate different architectural features. The energy banding algorithm is general and has a number of benefits compared to the traditional approach. In the present analysis we explore its potential to achieve improvements in time-to-solution on modern cache-based architectures.
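The core of the banding idea is to process particles grouped by energy band, so that each band's slice of the cross-section table is streamed through cache once per band rather than thrashed by random per-particle lookups. A schematic sketch of the grouping step (not the authors' MC implementation):

```python
def band_particles(energies, band_edges):
    """Group particle indices by energy band. Processing one band at a
    time lets the band's cross-section slice stay cache-resident;
    band_edges is an ascending list of boundaries (schematic sketch)."""
    bands = [[] for _ in range(len(band_edges) - 1)]
    for i, e in enumerate(energies):
        for b in range(len(band_edges) - 1):
            if band_edges[b] <= e < band_edges[b + 1]:
                bands[b].append(i)
                break
    return bands
```

Band widths can then be tuned so that one band's table slice fits the cache level being targeted.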
DOE Office of Scientific and Technical Information (OSTI.GOV)
Emami, F.; Hatami, M.; Keshavarz, A. R.
2009-08-13
Using a combination of the Runge-Kutta and Jacobi iterative methods, we solve the nonlinear Schrödinger equation describing pulse propagation in FBGs. By decomposing the electric field into forward and backward components in the fiber Bragg grating and utilizing Fourier series analysis, the boundary-value problem for the set of coupled equations governing pulse propagation in the FBG is transformed into an initial-value problem for coupled equations, which can be solved by the standard Runge-Kutta method.
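Once the problem is recast as an initial-value system, the integrator referred to above is the classical fourth-order Runge-Kutta step; a generic sketch for a system y' = f(t, y):

```python
def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for a system
    y' = f(t, y), where y is a list of state components and f returns a
    list of derivatives; this is the standard integrator the abstract
    applies after the boundary-value problem is recast."""
    k1 = f(t, y)
    k2 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
    k3 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)])
    k4 = f(t + h, [yi + h * ki for yi, ki in zip(y, k3)])
    return [yi + h / 6 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]
```

For the FBG system, y would hold the sampled forward and backward field envelopes and f their coupled-mode right-hand sides.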
An integrated condition-monitoring method for a milling process using reduced decomposition features
Liu, Jie; Wu, Bo; Wang, Yan; Hu, Youmin
2017-08-01
Complex and non-stationary cutting chatter affects productivity and quality in the milling process, so developing an effective condition-monitoring approach is critical to accurately identifying cutting chatter. In this paper, an integrated condition-monitoring method is proposed in which reduced features are used to efficiently recognize and classify machine states in the milling process. In the proposed method, vibration signals are decomposed into multiple modes with variational mode decomposition, and Shannon power spectral entropy is calculated to extract features from the decomposed signals. Principal component analysis is adopted to reduce feature size and computational cost. With the extracted feature information, a probabilistic neural network model is used to recognize and classify the machine states, including stable, transition, and chatter states. Experimental studies show that the proposed method can effectively detect cutting chatter under different milling operation conditions, and that it is efficient enough for fast machine-state recognition and classification.
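The entropy feature in the pipeline above has a compact form: normalize the power spectrum of a decomposed mode into a probability distribution and take its Shannon entropy. A sketch (the variational mode decomposition that produces each mode is omitted):

```python
import math

def spectral_entropy(power):
    """Shannon entropy of a normalized power spectrum: low for a mode
    whose energy is concentrated in few bins, high for a broadband
    (chatter-like) mode. Input is a list of non-negative power values."""
    total = sum(power)
    probs = [p / total for p in power if p > 0]   # drop empty bins
    return -sum(p * math.log(p) for p in probs)
```

One such scalar per mode yields a small feature vector, which PCA then compresses further before classification.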
Solution of the determinantal assignment problem using the Grassmann matrices
Karcanias, Nicos; Leventides, John
2016-02-01
The paper provides a direct solution to the determinantal assignment problem (DAP), which unifies all frequency assignment problems of linear control theory. The approach is based on the solvability of the exterior equation ? where ? is an n-dimensional vector space over ?, which is an integral part of the solution of DAP. New criteria for the existence of solutions, and for their computation, are derived based on the properties of structured matrices referred to as Grassmann matrices. The solvability of this exterior equation is referred to as decomposability of ?, and it is in turn characterised by the set of quadratic Plücker relations (QPRs) describing the Grassmann variety of the corresponding projective space. Alternative new tests for decomposability of the multi-vector ? are given in terms of the rank properties of the Grassmann matrix ? of the vector ?, which is constructed from the coordinates of ?. It is shown that the exterior equation is solvable (? is decomposable) if and only if ?, where ?; the solution space for a decomposable ? is the space ?. This provides an alternative linear-algebra characterisation of the decomposability problem and of the Grassmann variety to that defined by the QPRs. Further properties of the Grassmann matrices are explored by defining the Hodge-Grassmann matrix as the dual of the Grassmann matrix. The connections of the Hodge-Grassmann matrix to the solution of exterior equations are examined, and an alternative new characterisation of decomposability is given in terms of the dimension of its image space. The framework based on the Grassmann matrices provides the means for developing a new computational method for solutions of the exact DAP (when such solutions exist), as well as for computing approximate solutions when exact solutions do not exist.
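The simplest instance of the quadratic Plücker relations mentioned above is a 2-vector in the second exterior power of R^4, where decomposability reduces to a single quadratic equation in the six coordinates; the Grassmann-matrix rank test in the paper generalizes this kind of check. A sketch:

```python
def is_decomposable_2vector_r4(p):
    """Decomposability test for a 2-vector in wedge^2(R^4): z = a ^ b
    for some vectors a, b iff the single quadratic Pluecker relation
    p12*p34 - p13*p24 + p14*p23 = 0 holds. p maps index pairs (i, j),
    i < j, to the corresponding coordinate (illustrative special case)."""
    qpr = (p[(1, 2)] * p[(3, 4)]
           - p[(1, 3)] * p[(2, 4)]
           + p[(1, 4)] * p[(2, 3)])
    return abs(qpr) < 1e-12
```

In higher dimensions there are many such relations, which is why the rank test on the Grassmann matrix offers a more practical characterisation.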
Joint Power Charging and Routing in Wireless Rechargeable Sensor Networks.
Jia, Jie; Chen, Jian; Deng, Yansha; Wang, Xingwei; Aghvami, Abdol-Hamid
2017-10-09
The development of wireless power transfer (WPT) technology has inspired the transition from traditional battery-based wireless sensor networks (WSNs) towards wireless rechargeable sensor networks (WRSNs). While extensive efforts have been made to improve charging efficiency, little has been done for routing optimization. In this work, we present a joint optimization model to maximize both charging efficiency and routing structure. By analyzing the structure of the optimization model, we first decompose the problem and propose a heuristic algorithm to find the optimal charging efficiency for the predefined routing tree. Furthermore, by coding the many-to-one communication topology as an individual, we further propose to apply a genetic algorithm (GA) for the joint optimization of both routing and charging. The genetic operations, including tree-based recombination and mutation, are proposed to obtain a fast convergence. Our simulation results show that the heuristic algorithm reduces the number of resident locations and the total moving distance. We also show that our proposed algorithm achieves a higher charging efficiency compared with existing algorithms.
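The GA machinery referred to above (selection, recombination, mutation) can be sketched generically on bit strings. The paper's individuals are many-to-one routing trees with tree-based operators, which this toy loop does not reproduce; all parameters below are hypothetical.

```python
import random

def genetic_search(fitness, genome_len, rng, pop_size=30, gens=40, p_mut=0.05):
    """Minimal genetic-algorithm loop: tournament selection, one-point
    crossover, and bit-flip mutation over bit-string genomes. Only the
    generic GA skeleton is shown; the routing-tree encoding and
    tree-based operators from the paper are not reproduced here."""
    population = [[rng.randint(0, 1) for _ in range(genome_len)]
                  for _ in range(pop_size)]
    for _ in range(gens):
        nxt = []
        for _ in range(pop_size):
            # tournament selection of two parents
            p1 = max(rng.sample(population, 3), key=fitness)
            p2 = max(rng.sample(population, 3), key=fitness)
            # one-point crossover
            cut = rng.randrange(1, genome_len)
            child = p1[:cut] + p2[cut:]
            # bit-flip mutation
            child = [1 - g if rng.random() < p_mut else g for g in child]
            nxt.append(child)
        population = nxt
    return max(population, key=fitness)
```

Swapping the bit-string genome for a tree encoding, and crossover/mutation for subtree exchange and reattachment, recovers the structure the paper optimizes.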
A Predictive Model of Anesthesia Depth Based on SVM in the Primary Visual Cortex
Shi, Li; Li, Xiaoyuan; Wan, Hong
2013-01-01
In this paper, a novel model for predicting anesthesia depth is put forward based on local field potentials (LFPs) in the primary visual cortex (V1 area) of rats. The model is constructed using a support vector machine (SVM) to realize online prediction and classification of anesthesia depth. The raw LFP signal was first decomposed by wavelet transform into scaling components; among these, the components containing higher-frequency information were well suited for more precise analysis of anesthetic depth. Secondly, the characteristics of anesthetized states were extracted by complexity analysis, and two frequency-domain parameters were also selected. The extracted features were used as the input vector of the predictive model. Finally, we collected anesthesia samples from LFP recordings during visual stimulus experiments on Long Evans rats. Our results indicate that the predictive model is accurate and computationally fast, and that it is well suited for online prediction. PMID:24044024
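One widely used "complexity analysis" feature for neural recordings is Lempel-Ziv complexity. The abstract does not name the exact measure used, so the sketch below is a representative choice rather than the authors' feature.

```python
def lempel_ziv_complexity(bits):
    """Number of phrases in a simple left-to-right Lempel-Ziv parsing of
    a binary sequence: each phrase is the shortest substring starting at
    the current position that has not occurred in the already-parsed
    prefix. Regular signals parse into few phrases, irregular ones into
    many (one of several LZ variants; shown as an illustrative measure)."""
    s = "".join(str(b) for b in bits)
    phrases, i = 0, 0
    while i < len(s):
        j = i + 1
        # extend the candidate phrase while it already occurred earlier
        while j <= len(s) and s[i:j] in s[:i]:
            j += 1
        phrases += 1
        i = j
    return phrases
```

Applied to a binarized LFP component (e.g., thresholded at the median), the phrase count drops as anesthesia deepens and the signal becomes more regular.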
Spatio-temporal hierarchy in the dynamics of a minimalist protein model
Matsunaga, Yasuhiro; Baba, Akinori; Li, Chun-Biu; Straub, John E.; Toda, Mikito; Komatsuzaki, Tamiki; Berry, R. Stephen
2013-12-01
A method for time series analysis of molecular dynamics simulation of a protein is presented. In this approach, wavelet analysis and principal component analysis are combined to decompose the spatio-temporal protein dynamics into contributions from a hierarchy of different time and space scales. Unlike the conventional Fourier-based approaches, the time-localized wavelet basis captures the vibrational energy transfers among the collective motions of proteins. As an illustrative vehicle, we have applied our method to a coarse-grained minimalist protein model. During the folding and unfolding transitions of the protein, vibrational energy transfers between the fast and slow time scales were observed among the large-amplitude collective coordinates while the other small-amplitude motions are regarded as thermal noise. Analysis employing a Gaussian-based measure revealed that the time scales of the energy redistribution in the subspace spanned by such large-amplitude collective coordinates are slow compared to the other small-amplitude coordinates. Future prospects of the method are discussed in detail.
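The wavelet half of the decomposition above can be illustrated with its simplest case, a one-level Haar transform: pairwise averages carry the slow (coarse) motion and pairwise differences the fast (detail) motion. This is only the most elementary version of the time-scale separation the paper performs.

```python
def haar_step(x):
    """One level of the Haar wavelet transform of an even-length signal:
    returns (slow, fast), where slow holds pairwise averages (coarse
    motion) and fast holds pairwise differences (detail motion). The
    original pair is recovered as slow[i] +/- fast[i]."""
    half = len(x) // 2
    slow = [(x[2 * i] + x[2 * i + 1]) / 2 for i in range(half)]
    fast = [(x[2 * i] - x[2 * i + 1]) / 2 for i in range(half)]
    return slow, fast
```

Recursing on the slow part yields the hierarchy of time scales on which PCA is then applied per level.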
Fast multi-scale feature fusion for ECG heartbeat classification
Ai, Danni; Yang, Jian; Wang, Zeyu; Fan, Jingfan; Ai, Changbin; Wang, Yongtian
2015-12-01
Electrocardiogram (ECG) monitoring records the electrical activity of the heart as small-amplitude, short-duration signals; as a result, hidden information present in ECG data is difficult to determine, yet this concealed information can be used to detect abnormalities. In our study, a fast feature-fusion method for ECG heartbeat classification based on multi-linear subspace learning is proposed. The method consists of four stages. First, baseline wander and high frequencies are removed to segment heartbeats. Second, wavelet-packet decomposition is conducted to extract features, providing good time and frequency resolution simultaneously. Third, the decomposed coefficients are arranged as a two-way tensor, in which feature fusion is directly implemented with generalized N-dimensional ICA (GND-ICA). In this method, the co-relationship among different data information is considered, the disadvantages of high dimensionality are avoided, and computation is reduced compared with linear subspace-learning methods such as PCA. Finally, a support vector machine (SVM) is used as the classifier for heartbeat classification. In this study, ECG records are obtained from the MIT-BIH arrhythmia database, and four main heartbeat classes are used to examine the proposed algorithm. Based on five measures (sensitivity, positive predictivity, accuracy, average accuracy, and a t-test), we conclude that the GND-ICA-based strategy provides enhanced ECG heartbeat classification; furthermore, largely redundant features are eliminated and classification time is reduced.
Intrinsic Multi-Scale Dynamic Behaviors of Complex Financial Systems.
Ouyang, Fang-Yan; Zheng, Bo; Jiang, Xiong-Fei
2015-01-01
The empirical mode decomposition is applied to analyze the intrinsic multi-scale dynamic behaviors of complex financial systems. In this approach, the time series of the price returns of each stock is decomposed into a small number of intrinsic mode functions, which represent the price motion from high frequency to low frequency. These intrinsic mode functions are then grouped into three modes, i.e., the fast mode, medium mode and slow mode. The probability distribution of returns and auto-correlation of volatilities for the fast and medium modes exhibit similar behaviors as those of the full time series, i.e., these characteristics are rather robust in multi time scale. However, the cross-correlation between individual stocks and the return-volatility correlation are time scale dependent. The structure of business sectors is mainly governed by the fast mode when returns are sampled at a couple of days, while by the medium mode when returns are sampled at dozens of days. More importantly, the leverage and anti-leverage effects are dominated by the medium mode.
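The grouping step described above can be sketched by sorting IMFs on a simple frequency proxy such as zero-crossing count. The thresholds below are hypothetical, and the empirical mode decomposition that produces the IMFs is assumed to have been run already.

```python
def group_imfs(imfs, fast_min_zc, medium_min_zc):
    """Group intrinsic mode functions into fast/medium/slow modes by
    zero-crossing count, a crude proxy for characteristic frequency
    (thresholds are hypothetical; the EMD step itself is omitted)."""
    def zero_crossings(sig):
        return sum(1 for a, b in zip(sig, sig[1:]) if a * b < 0)

    modes = {"fast": [], "medium": [], "slow": []}
    for imf in imfs:
        zc = zero_crossings(imf)
        if zc >= fast_min_zc:
            modes["fast"].append(imf)
        elif zc >= medium_min_zc:
            modes["medium"].append(imf)
        else:
            modes["slow"].append(imf)
    return modes
```

Summing the IMFs within each group reconstructs the fast, medium, and slow return components whose correlations the paper compares.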
Reactivity at the Lithium–Metal Anode Surface of Lithium–Sulfur Batteries
Camacho-Forero, Luis E.; Smith, Taylor W.; Bertolini, Samuel; ...
2015-11-11
Due to their high energy density and reduced cost, lithium–sulfur batteries are promising alternatives for applications such as electric vehicles. However, a number of technical challenges need to be overcome to make them feasible for commercial use. These challenges arise from the battery's highly interconnected chemistry, which, besides the electrochemical reactions, includes side reactions at both electrodes and migration of soluble polysulfide (PS) species produced at the cathode to the anode side. The presence of such PS species alters the already complex reactivity of the Li anode. In this paper, interfacial reactions occurring at the surface of Li metal anodes due to electrochemical instability of the electrolyte components and PS species are investigated with density functional theory and ab initio molecular dynamics methods. It is found that the bis(trifluoromethane)sulfonimide lithium salt reacts very quickly when in contact with the Li surface, and anion decomposition precedes salt dissociation. The anion decomposition mechanisms are fully elucidated. Two of the typical solvents used in Li–S technology, 1,3-dioxolane and 1,2-dimethoxyethane, are found to be stable during the entire simulation length, in contrast with ethylene carbonate, which is rapidly decomposed by sequential 2- or 4-electron mechanisms. Finally, the fast reactivity of the soluble PS species alters the side reactions: the PS fully decomposes before any of the electrolyte components, forming Li2S on the anode surface.
Extension of the frequency-domain pFFT method for wave structure interaction in finite depth
Teng, Bin; Song, Zhi-jie
2017-06-01
To analyze wave interaction with a large-scale body in the frequency domain, a precorrected Fast Fourier Transform (pFFT) method had previously been proposed for infinite-depth problems with the deep-water Green function, as it can form a matrix with Toeplitz and Hankel properties. In this paper, a method is proposed to decompose the finite-depth Green function into two terms, which form matrices with Toeplitz and Hankel properties, respectively; a pFFT method for finite-depth problems is then developed. Based on the pFFT method, a numerical code, pFFT-HOBEM, is developed using higher-order element discretization. The model is validated, and the computing efficiency and memory requirements of the new method are examined, showing that it has the same advantages as the infinite-depth method.
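The structural point above is that a Toeplitz (or Hankel) matrix is fully determined by its first column and first row, which is what allows an FFT to accelerate its matrix-vector product to O(N log N). The direct O(N^2) product below just defines the operation being accelerated; the FFT-based circulant embedding is omitted.

```python
def toeplitz_matvec(first_col, first_row, x):
    """Direct product y = T x for a Toeplitz matrix T defined by its
    first column and first row (first_col[0] == first_row[0]); entry
    T[i][j] depends only on i - j. This O(N^2) loop is the baseline
    that the pFFT machinery replaces with an FFT-based product."""
    n = len(x)
    y = []
    for i in range(n):
        s = 0.0
        for j in range(n):
            t = first_col[i - j] if i >= j else first_row[j - i]
            s += t * x[j]
        y.append(s)
    return y
```

Splitting the finite-depth Green function so each part yields this kind of structured matrix is what makes the same acceleration available away from the deep-water case.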
Geng, Jing; Wang, Wen-Liang; Yu, Yu-Xiang; Chang, Jian-Min; Cai, Li-Ping; Shi, Sheldon Q
2017-03-01
The composition of pyrolysis vapors obtained from alkali lignin pyrolysis with a nickel formate additive was examined using pyrolysis gas chromatography-mass spectrometry (Py-GC/MS), and the bio-chars were characterized using X-ray diffraction (XRD). Results showed that nickel formate significantly increased the liquid yield, simplified the types of alkali lignin pyrolysis products, and increased the contents of individual components. The nickel formate additive increased the contents of alkylphenols and aromatics from alkali lignin pyrolysis, and these relative contents increased further with temperature. The nickel formate thermally decomposed to form hydrogen, resulting in hydrodeoxygenation of the alkali lignin during pyrolysis. It was also found that Ni favors the production of alkylphenols. The analysis of the experimental results provides evidence for a proposed reaction mechanism for the pyrolysis of nickel formate-assisted alkali lignin.
Anomaly detection for medical images based on a one-class classification
Wei, Qi; Ren, Yinhao; Hou, Rui; Shi, Bibo; Lo, Joseph Y.; Carin, Lawrence
2018-02-01
Detecting an anomaly such as a malignant tumor or a nodule in medical images, including mammography, CT, or PET images, is an ongoing research problem drawing a lot of attention, with applications in medical diagnosis. A conventional way to address this is to learn a discriminative model using training datasets of negative and positive samples; the learned model can then classify a testing sample as positive or negative. However, in medical applications, the high imbalance between negative and positive samples poses a difficulty for learning algorithms, as they will be biased towards the majority group, i.e., the negative one. To address this imbalanced-data issue and leverage the huge number of negative samples, i.e., normal medical images, we propose to learn an unsupervised model to characterize the negative class. To make the learned model more flexible and extendable to medical images of different scales, we have designed an autoencoder based on a deep neural network to characterize the negative patches decomposed from large medical images. A testing image is decomposed into patches and then fed into the learned autoencoder to reconstruct these patches. The reconstruction error of each patch is used to classify it as positive or negative, leading to a one-class classifier; the positive patches highlight the suspicious areas containing anomalies in a large medical image. The proposed method has been tested on the INbreast dataset and achieves an AUC of 0.84. The main contributions of our work can be summarized as follows. 1) The proposed one-class learning requires only data from one class, i.e., the negative data; 2) the patch-based learning makes the proposed method scalable to images of different sizes and helps avoid the large-scale problem for medical images; 3) the training of the proposed deep convolutional neural network (DCNN)-based autoencoder is fast and stable.
Rortex—A new vortex vector definition and vorticity tensor and vector decompositions
Liu, Chaoqun; Gao, Yisheng; Tian, Shuling; Dong, Xiangrui
2018-03-01
A vortex is intuitively recognized as the rotational/swirling motion of a fluid. However, an unambiguous and universally accepted definition of a vortex is yet to be achieved in fluid mechanics, which is probably one of the major obstacles causing considerable confusion and misunderstanding in turbulence research. In our previous work, a new vector quantity called the vortex vector was proposed to accurately describe the local fluid rotation and clearly display vortical structures. In this paper, the definition of the vortex vector, named Rortex here, is revisited from the mathematical perspective. The existence of a possible rotational axis is proved through the real Schur decomposition, and, based on this decomposition, a fast algorithm for calculating Rortex is presented. In addition, new vorticity tensor and vector decompositions are introduced: the vorticity tensor is decomposed into a rigidly rotational part and a non-rotational anti-symmetric part, and the vorticity vector is decomposed into a rigidly rotational vector, called the Rortex vector, and a non-rotational vector, called the shear vector. Several cases, including 2D Couette flow, 2D rigid rotational flow, and 3D boundary layer transition on a flat plate, are studied to demonstrate the soundness of the definition of Rortex. It can be observed that Rortex identifies both the precise swirling strength and the rotational axis, and thus it can reasonably represent the local fluid rotation and provide a new powerful tool for vortex dynamics and turbulence research.
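In two dimensions the decomposition can be sketched in closed form without a Schur step, using the relation R = omega - sqrt(omega^2 - 4*lambda_ci^2) (for positive vorticity) between the vorticity omega and the swirling strength lambda_ci. This planar sketch is illustrative only; the 3D case requires the real Schur decomposition described in the abstract.

```python
import math

def rortex_2d(a, b, c, d):
    """Split the out-of-plane vorticity of the 2D velocity-gradient
    tensor [[a, b], [c, d]] into a rigid-rotation (Rortex) part and a
    residual shear part. Local swirl exists only when the eigenvalues
    are complex; then |R| = |omega| - sqrt(omega^2 - 4*lambda_ci^2),
    signed like omega (planar special case, shown for illustration)."""
    omega = c - b                       # out-of-plane vorticity
    tr, det = a + d, a * d - b * c
    disc = tr * tr - 4 * det
    if disc >= 0:                       # real eigenvalues: pure shear/strain
        return 0.0, omega
    lam_ci = math.sqrt(-disc) / 2       # swirling strength
    r = math.copysign(abs(omega) - math.sqrt(omega * omega - 4 * lam_ci * lam_ci),
                      omega)
    return r, omega - r                 # (Rortex part, shear part)
```

The two canonical cases from the abstract behave as expected: rigid rotation is all Rortex with zero shear, while plane Couette flow has vorticity but zero Rortex.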
DECOMP: a PDB decomposition tool on the web.
Ordog, Rafael; Szabadka, Zoltán; Grolmusz, Vince
2009-07-27
The Protein Data Bank (PDB) contains high-quality structural data for computational structural biology investigations. We have earlier described a fast tool (decomp_pdb) for identifying and marking missing atoms and residues in PDB files; the tool also automatically decomposes PDB entries into separate files describing ligands and polypeptide chains. Here, we describe a web interface named DECOMP for the tool. Our program correctly identifies multi-monomer ligands, and the server also offers the preprocessed ligand-protein decomposition of the complete PDB for download (up to 5 GB in size). Availability: http://decomp.pitgroup.org.
Guan, W.; Cheng, X.; Huang, J.; Huber, G.; Li, W.; McCammon, J. A.; Zhang, B.
2018-06-01
RPYFMM is a software package for the efficient evaluation of the potential field governed by the Rotne-Prager-Yamakawa (RPY) tensor interactions in biomolecular hydrodynamics simulations. In our algorithm, the RPY tensor is decomposed as a linear combination of four Laplace interactions, each of which is evaluated using the adaptive fast multipole method (FMM) (Greengard and Rokhlin, 1997) where the exponential expansions are applied to diagonalize the multipole-to-local translation operators. RPYFMM offers a unified execution on both shared and distributed memory computers by leveraging the DASHMM library (DeBuhr et al., 2016, 2018). Preliminary numerical results show that the interactions for a molecular system of 15 million particles (beads) can be computed within one second on a Cray XC30 cluster using 12,288 cores, while achieving approximately 54% strong-scaling efficiency.
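Each of the four Laplace interactions mentioned above is, at its core, a pairwise 1/r sum over all particles. The direct O(N^2) evaluation below is the baseline an FMM reduces to near-linear cost; the physical prefactors and the recombination into the RPY tensor are omitted.

```python
def direct_potentials(points, charges):
    """Direct O(N^2) evaluation of Laplace-type potentials
    phi_i = sum_{j != i} q_j / |r_i - r_j| over 3D points. This is the
    brute-force baseline that a fast multipole method replaces; units
    and RPY-specific prefactors are deliberately omitted."""
    n = len(points)
    phi = [0.0] * n
    for i in range(n):
        xi, yi, zi = points[i]
        for j in range(n):
            if i == j:
                continue
            xj, yj, zj = points[j]
            r = ((xi - xj) ** 2 + (yi - yj) ** 2 + (zi - zj) ** 2) ** 0.5
            phi[i] += charges[j] / r
    return phi
```

An FMM computes the same sums to a controlled tolerance in roughly O(N) time, which is what makes the 15-million-bead run quoted above feasible.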
Generation of skeletal mechanism by means of projected entropy participation indices
Paolucci, Samuel; Valorani, Mauro; Ciottoli, Pietro Paolo; Galassi, Riccardo Malpica
2017-11-01
When the dynamics of reactive systems develop very-slow and very-fast time scales separated by a range of active time scales, with gaps in the fast/active and slow/active time scales, then it is possible to achieve multi-scale adaptive model reduction along with the integration of the ODEs using the G-Scheme. The scheme assumes that the dynamics is decomposed into active, slow, fast, and invariant subspaces. We derive expressions that establish a direct link between time scales and entropy production by using estimates provided by the G-Scheme. To calculate the contribution to entropy production, we resort to a standard model of a constant pressure, adiabatic, batch reactor, where the mixture temperature of the reactants is initially set above the auto-ignition temperature. Numerical experiments show that the contribution to entropy production of the fast subspace is of the same magnitude as the error threshold chosen for the identification of the decomposition of the tangent space, and the contribution of the slow subspace is generally much smaller than that of the active subspace. The information on entropy production associated with reactions within each subspace is used to define an entropy participation index that is subsequently utilized for model reduction.
Song, Pengfei; Manduca, Armando; Zhao, Heng; Urban, Matthew W.; Greenleaf, James F.; Chen, Shigao
2014-01-01
A fast shear compounding method was developed in this study using only one shear wave push-detect cycle, such that the shear wave imaging frame rate is preserved and motion artifacts are minimized. The proposed method is composed of the following steps: 1. applying a comb-push to produce multiple differently angled shear waves at different spatial locations simultaneously; 2. decomposing the complex shear wave field into individual shear wave fields with differently oriented shear waves using a multi-directional filter; 3. using a robust two-dimensional (2D) shear wave speed calculation to reconstruct 2D shear elasticity maps from each filter direction; 4. compounding these 2D maps from different directions into a final map. An inclusion phantom study showed that the fast shear compounding method could achieve comparable performance to conventional shear compounding without sacrificing the imaging frame rate. A multi-inclusion phantom experiment showed that the fast shear compounding method could provide a full field-of-view (FOV), 2D, and compounded shear elasticity map with three types of inclusions clearly resolved and stiffness measurements showing excellent agreement to the nominal values. PMID:24613636
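A basic local estimator of the kind underlying step 3 fits lateral position against tracked wave arrival time; its slope is the shear wave speed. The robust 2D estimator in the abstract is more elaborate than this least-squares sketch, and the sample values in the test are hypothetical.

```python
def shear_wave_speed(positions_mm, arrival_ms):
    """Least-squares slope of lateral position (mm) versus shear wave
    arrival time (ms); mm/ms equals m/s. A simple stand-in for the
    robust 2D speed estimator described in the abstract."""
    n = len(positions_mm)
    mt = sum(arrival_ms) / n
    mx = sum(positions_mm) / n
    num = sum((t - mt) * (x - mx) for t, x in zip(arrival_ms, positions_mm))
    den = sum((t - mt) ** 2 for t in arrival_ms)
    return num / den
```

Repeating this fit in small windows across the field of view, once per filter direction, yields the per-direction speed maps that are then compounded.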
Manzoni, S.; Capek, P.; Mooshammer, M.; Lindahl, B.; Richter, A.; Santruckova, H.
2016-12-01
Litter and soil organic matter decomposers feed on substrates with much wider C:N and C:P ratios than their own cellular composition, raising the question as to how they can adapt their metabolism to such a chronic stoichiometric imbalance. Here we propose an optimality framework to address this question, based on the hypothesis that carbon-use efficiency (CUE) can be optimally adjusted to maximize the decomposer growth rate. When nutrients are abundant, increasing CUE improves decomposer growth rate, at the expense of higher nutrient demand. However, when nutrients are scarce, increased nutrient demand driven by high CUE can trigger nutrient limitation and inhibit growth. An intermediate, 'optimal' CUE ensures balanced growth at the verge of nutrient limitation. We derive a simple analytical equation that links this optimal CUE to organic substrate and decomposer biomass C:N and C:P ratios, and to the rate of inorganic nutrient supply (e.g., fertilization). This equation allows formulating two specific hypotheses: i) decomposer CUE should decrease with widening organic substrate C:N and C:P ratios, with a scaling exponent between 0 (abundant inorganic nutrients) and -1 (scarce inorganic nutrients), and ii) CUE should increase with increasing inorganic nutrient supply, for a given organic substrate stoichiometry. These hypotheses are tested using a new database encompassing nearly 2000 estimates of CUE from about 160 studies, spanning aquatic and terrestrial decomposers of litter and more stabilized organic matter. The theoretical predictions are largely confirmed by our data analysis, except for the lack of fertilization effects on terrestrial decomposer CUE. While stoichiometric drivers constrain the general trends in CUE, the relatively large variability in CUE estimates suggests that other factors could be at play as well.
For example, temperature is often cited as a potential driver of CUE, but we only found limited evidence of temperature effects, although in some subsets of data, temperature and substrate stoichiometry appeared to interact. Based on our results, the optimality principle can provide a solid (but still incomplete) framework to develop CUE models for large-scale applications.
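Hypothesis (i) above is a power-law scaling of CUE with substrate stoichiometry. The sketch below illustrates that relation; the functional form and all parameter names and values (`cn_critical`, `cue_max`, `beta`) are illustrative assumptions, not the paper's actual equation.

```python
# Hedged sketch of hypothesis (i): decomposer CUE scales with substrate C:N
# with an exponent between 0 (abundant inorganic nutrients) and -1 (scarce).
# All parameter names and values are illustrative assumptions.

def cue(cn_substrate, cn_critical=10.0, cue_max=0.6, beta=-0.5):
    """CUE ~ (C:N / C:N_crit)^beta above a critical C:N; capped at cue_max.
    beta = 0 mimics nutrient-rich conditions, beta = -1 nutrient-poor ones."""
    if cn_substrate <= cn_critical:
        return cue_max
    return cue_max * (cn_substrate / cn_critical) ** beta

# Wider (more nutrient-poor) substrate C:N lowers optimal CUE when beta < 0:
print(cue(10))  # nutrient-rich litter -> cue_max (0.6)
print(cue(40))  # nutrient-poor litter -> reduced CUE (0.3)
```

With `beta = 0` the same function reproduces the nutrient-abundant limit in which CUE is insensitive to substrate stoichiometry.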
Long-term Priming-induced Changes in Permafrost Soil Organic Matter Decomposition
NASA Astrophysics Data System (ADS)
Pegoraro, E.; Bracho, R. G.; Schuur, E.
2016-12-01
Warming of tundra ecosystems due to climate change is predicted to thaw permafrost and increase plant biomass and litter input to soil. Additional input of easily decomposable carbon can stimulate microbial activity, consequently increasing soil organic matter decomposition rates. This phenomenon, known as the priming effect, can exacerbate the effects of climate change by releasing more CO2 from permafrost soils; however, the extent to which it could decrease soil carbon stocks in the Arctic is unknown. Most priming incubation studies are conducted for a short period of time, making it difficult to assess whether priming is a short-term phenomenon or could persist over the long term. We incubated permafrost soil from a moist acidic tundra site in Healy, Alaska for 456 days at 15 °C. Soils from surface and deep layers were amended with three pulses of uniformly 13C-labeled glucose, a fast-decomposing substrate, every 152 days. We also quantified the proportion of old carbon respired by measuring 14CO2. Substrate addition resulted in higher respiration rates in glucose-amended soils; however, positive priming was only observed in deep layers, where on average 9%, 57%, and 25% more soil-derived C was respired at the 45-55, 65-75, and 75-85 cm depth increments, respectively, for the duration of the experiment. This suggests that microbes in deep layers are energy-limited, and the addition of easily decomposable carbon increases native soil organic matter decomposition.
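Studies like this one typically attribute respired CO2 to substrate versus native soil with a two-source isotope mixing model. The sketch below shows that standard calculation; the atom-fraction values and fluxes are illustrative assumptions, not data from the study.

```python
# Hedged sketch of two-pool 13C partitioning as commonly used in priming
# studies: respired CO2 is a mix of substrate-derived and soil-derived CO2.
# All numbers below are illustrative, not from the study.

def soil_derived_fraction(af_sample, af_substrate, af_soil):
    """Atom-fraction 13C mixing model: solve the two-source mass balance
    for the fraction of respired CO2 that came from native soil C."""
    return (af_sample - af_substrate) / (af_soil - af_substrate)

# Example: 99 atom% labeled glucose, near-natural-abundance soil (~1.08 atom%)
f_soil = soil_derived_fraction(af_sample=0.25, af_substrate=0.99, af_soil=0.0108)

# Priming = soil-derived respiration in the amended soil minus respiration of
# an unamended control (hypothetical fluxes in mg C per day):
primed = f_soil * 12.0 - 8.0
print(round(f_soil, 3))  # fraction of CO2 from native soil C
```

A positive `primed` value corresponds to the positive priming reported for the deep layers.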
Multigranular integrated services optical network
NASA Astrophysics Data System (ADS)
Yu, Oliver; Yin, Leping; Xu, Huan; Liao, Ming
2006-12-01
Based on all-optical switches without requiring fiber delay lines and optical-electrical-optical interfaces, the multigranular optical switching (MGOS) network integrates three transport services via unified core control to efficiently support bursty and stream traffic of subwavelength to multiwavelength bandwidth. Adaptive robust optical burst switching (AR-OBS) aggregates subwavelength burst traffic into asynchronous light-rate bursts, transported via slotted-time light paths established by fast two-way reservation with robust blocking recovery control. Multiwavelength optical switching (MW-OS) decomposes multiwavelength stream traffic into a group of timing-related light-rate streams, transported via a light-path group to meet end-to-end delay-variation requirements. Optical circuit switching (OCS) simply converts wavelength stream traffic from an electrical-rate into a light-rate stream. The MGOS network employs decoupled routing, wavelength, and time-slot assignment (RWTA) and novel group routing and wavelength assignment (GRWA) to select slotted-time light paths and light-path groups, respectively. The selected resources are reserved by the unified multigranular robust fast optical reservation protocol (MG-RFORP). Simulation results show that elastic traffic is efficiently supported via AR-OBS in terms of loss rate and wavelength utilization, while connection-oriented wavelength traffic is efficiently supported via wavelength-routed OCS in terms of connection blocking and wavelength utilization. The GRWA-tuning result for MW-OS is also shown.
Draft genome sequence of the white-rot fungus Obba rivulosa 3A-2
Otto Miettinen; Robert Riley; Kerrie Barry; Daniel Cullen; Ronald P. de Vries; Matthieu Hainaut; Annele Hatakka; Bernard Henrissat; Kristiina Hilden; Rita Kuo; Kurt LaButti; Anna Lipzen; Miia R. Makela; Laura Sandor; Joseph W. Spatafora; Igor V. Grigoriev; David S. Hibbett
2016-01-01
We report here the first genome sequence of the white-rot fungus Obba rivulosa (Polyporales, Basidiomycota), a polypore known for its lignin-decomposing ability. The genome is based on the homokaryon 3A-2 originating in Finland. The genome is typical in size and carbohydrate-active enzyme (CAZy) content for wood-decomposing basidiomycetes.
ERIC Educational Resources Information Center
Dos Santos, Luiz Miguel Renda; Okazaki, Shintaro
2013-01-01
This study sheds light on the organizational dimensions underlying e-learning adoption among Brazilian universities. We propose an organizational e-learning adoption model based on the decomposed theory of planned behavior (TPB). A series of hypotheses are posited with regard to the relationships among the proposed constructs. The model is…
Scheduling double round-robin tournaments with divisional play using constraint programming
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carlsson, Mats; Johansson, Mikael; Larson, Jeffrey
We study a tournament format that extends a traditional double round-robin format with divisional single round-robin tournaments. Elitserien, the top Swedish handball league, uses such a format for its league schedule. We present a constraint programming model that characterizes the general double round-robin plus divisional single round-robin format. This integrated model allows scheduling to be performed in a single step, as opposed to common multistep approaches that decompose scheduling into smaller problems and possibly miss optimal solutions. In addition to general constraints, we introduce Elitserien-specific requirements for its tournament. These general and league-specific constraints allow us to identify implicit and symmetry-breaking properties that reduce the time to solution from hours to seconds. A scalability study of the number of teams shows that our approach is reasonably fast for even larger league sizes. The experimental evaluation of the integrated approach takes considerably less computational effort to schedule Elitserien than does the previous decomposed approach. (C) 2016 Elsevier B.V. All rights reserved.
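To make the scheduling object concrete: a double round-robin can be built from the classic "circle method" single round-robin with home/away mirrored in the second half. This is a minimal illustration of the format only, not the authors' constraint programming model (which additionally handles divisional play and league-specific constraints).

```python
# Circle method: the standard constructive algorithm for a single round-robin.
# Playing it twice with home/away swapped gives a double round-robin.
# Illustration of the tournament format only; NOT the paper's CP model.

def single_round_robin(teams):
    """n teams (n even) -> n-1 rounds, each with n/2 pairings."""
    n = len(teams)
    assert n % 2 == 0, "add a dummy 'bye' team for odd n"
    fixed, rest = teams[0], list(teams[1:])
    rounds = []
    for _ in range(n - 1):
        lineup = [fixed] + rest
        rounds.append([(lineup[i], lineup[n - 1 - i]) for i in range(n // 2)])
        rest = rest[-1:] + rest[:-1]  # rotate every team except the fixed one
    return rounds

def double_round_robin(teams):
    first = single_round_robin(teams)
    return first + [[(away, home) for (home, away) in rnd] for rnd in first]

schedule = double_round_robin(["A", "B", "C", "D"])
print(len(schedule))  # 2*(n-1) = 6 rounds for 4 teams
```

Each ordered pair of teams meets exactly once, which is the feasibility baseline that the paper's integrated CP model then constrains further (divisions, breaks, venue patterns).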
Niemcunowicz-Janica, Anna; Pepiński, Witold; Janica, Jacek Robert; Janica, Jerzy; Skawrońska, Małgorzata; Koc-Zórawska, Ewa
2007-01-01
In cases of decomposed bodies, Y chromosomal STR markers may be useful in identification of a male relative. The authors assessed typeability of PowerPlex Y (Promega) loci in post mortem tissue material stored in various environments. Kidney, spleen and pancreas specimens were collected during autopsies of five persons aged 20-30 years, whose time of death was determined within the limit of 14 hours. Tissue material was incubated at 21 degrees C and 4 degrees C in various environmental conditions. DNA was extracted by the organic method from tissue samples collected in 7-day intervals and subsequently typed using the PowerPlexY-STR kit and ABI 310. A fast decrease in the typeability rate was seen in specimens incubated in peat soil and in sand. Kidney tissue samples were typeable in all PowerPlexY-STR loci within 63 days of incubation at 4 degrees C. Faster DNA degradation was recorded in spleen and pancreas specimens. In samples with negative genotyping results, no DNA was found by fluorometric quantitation. Decomposed soft tissues are a potential material for DNA typing.
NASA Astrophysics Data System (ADS)
Yoshida, Yuki; Karakida, Ryo; Okada, Masato; Amari, Shun-ichi
2017-04-01
Weight normalization, a newly proposed optimization method for neural networks by Salimans and Kingma (2016), decomposes the weight vector of a neural network into a radial length and a direction vector, and the decomposed parameters follow their steepest descent update. They reported that learning with weight normalization achieves faster convergence in several tasks, including image recognition and reinforcement learning, than learning with the conventional parameterization. However, it remains theoretically unexplained how weight normalization improves the convergence speed. In this study, we applied a statistical mechanical technique to analyze on-line learning in single-layer linear and nonlinear perceptrons with weight normalization. By deriving order parameters of the learning dynamics, we confirmed quantitatively that weight normalization realizes fast convergence by automatically tuning the effective learning rate, regardless of the nonlinearity of the neural network. This property is realized when the initial value of the radial length is near the global minimum; therefore, our theory suggests that it is important to choose the initial value of the radial length appropriately when using weight normalization.
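The reparameterization discussed above is w = (g / ||v||) v, with gradients taken with respect to the scalar length g and the direction vector v. A minimal sketch of that gradient mapping (standard chain-rule algebra from the Salimans and Kingma parameterization; the example vectors are arbitrary):

```python
import math

# Weight normalization reparameterizes w = (g / ||v||) * v and updates (g, v)
# instead of w. Mapping a gradient dL/dw onto (g, v) gives:
#   dL/dg = (v/||v||) . dL/dw
#   dL/dv = (g/||v||) * (dL/dw - (dL/dg) * v/||v||)

def weightnorm_grads(v, g, grad_w):
    norm = math.sqrt(sum(vi * vi for vi in v))
    unit = [vi / norm for vi in v]
    grad_g = sum(ui * gi for ui, gi in zip(unit, grad_w))
    grad_v = [(g / norm) * (gi - grad_g * ui) for gi, ui in zip(grad_w, unit)]
    return grad_g, grad_v

# The direction gradient is orthogonal to v, so updates never shrink v along
# itself -- the mechanism behind the self-tuned effective learning rate:
g_grad, v_grad = weightnorm_grads(v=[3.0, 4.0], g=2.0, grad_w=[1.0, 0.0])
print(sum(a * b for a, b in zip(v_grad, [3.0, 4.0])))  # ~0: orthogonal to v
```

Because dL/dv is orthogonal to v, gradient steps monotonically grow ||v||, which scales down the effective learning rate g/||v|| as training proceeds.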
Selective Listening Point Audio Based on Blind Signal Separation and Stereophonic Technology
NASA Astrophysics Data System (ADS)
Niwa, Kenta; Nishino, Takanori; Takeda, Kazuya
A sound field reproduction method is proposed that uses blind source separation and a head-related transfer function. In the proposed system, multichannel acoustic signals captured at distant microphones are decomposed into a set of location/signal pairs of virtual sound sources based on frequency-domain independent component analysis. After estimating the locations and the signals of the virtual sources, the spatial sound is constructed at the selected point by convolving the controlled acoustic transfer functions with each signal. In experiments, a sound field made by six sound sources is captured using 48 distant microphones and decomposed into sets of virtual sound sources. Since subjective evaluation shows no significant difference between natural and reconstructed sound when six virtual sources are used, the effectiveness of the decomposing algorithm as well as of the virtual source representation is confirmed.
Intrinsic Multi-Scale Dynamic Behaviors of Complex Financial Systems
Ouyang, Fang-Yan; Zheng, Bo; Jiang, Xiong-Fei
2015-01-01
The empirical mode decomposition is applied to analyze the intrinsic multi-scale dynamic behaviors of complex financial systems. In this approach, the time series of the price returns of each stock is decomposed into a small number of intrinsic mode functions, which represent the price motion from high frequency to low frequency. These intrinsic mode functions are then grouped into three modes, i.e., the fast mode, medium mode and slow mode. The probability distribution of returns and auto-correlation of volatilities for the fast and medium modes exhibit similar behaviors as those of the full time series, i.e., these characteristics are rather robust in multi time scale. However, the cross-correlation between individual stocks and the return-volatility correlation are time scale dependent. The structure of business sectors is mainly governed by the fast mode when returns are sampled at a couple of days, while by the medium mode when returns are sampled at dozens of days. More importantly, the leverage and anti-leverage effects are dominated by the medium mode. PMID:26427063
Composite fuzzy sliding mode control of nonlinear singularly perturbed systems.
Nagarale, Ravindrakumar M; Patre, B M
2014-05-01
This paper deals with the robust asymptotic stabilization of a class of nonlinear singularly perturbed systems using the fuzzy sliding mode control technique. In the proposed approach, the original system is decomposed into slow and fast subsystems by the singular perturbation method. The composite fuzzy sliding mode controller is designed to stabilize the full-order system by combining separately designed slow and fast fuzzy sliding mode controllers. The two-time scale design approach minimizes the effect of the boundary layer system on the full-order system. A stability analysis allows us to provide sufficient conditions for the asymptotic stability of the full-order closed-loop system. The simulation results show improved system performance of the proposed controller as compared to existing methods. The experimental results validate the effectiveness of the proposed controller. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
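The slow/fast decomposition referred to above follows the standard singularly perturbed form (textbook form; the paper's specific plant model is not reproduced here):

```latex
% Standard two-time-scale form assumed in singular perturbation design:
\begin{aligned}
\dot{x} &= f(x, z, u), & x(t_0) &= x_0 \quad \text{(slow state)}\\
\varepsilon \dot{z} &= g(x, z, u), & z(t_0) &= z_0 \quad \text{(fast state)}
\end{aligned}
% Setting \varepsilon = 0 gives the quasi-steady-state (slow) subsystem via
% 0 = g(\bar{x}, \bar{z}, \bar{u}); the boundary-layer (fast) subsystem is
% obtained in the stretched time \tau = t / \varepsilon.
```

Separate controllers are then designed for the slow and boundary-layer subsystems and composed, which is the structure the abstract's composite controller follows.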
Orizio, Claudio; Cogliati, Marta; Bissolotti, Luciano; Diemont, Bertrand; Gobbo, Massimiliano; Celichowski, Jan
2016-01-01
This work aimed to verify whether a maximal electrically evoked single twitch (STmax) scan discloses the relative functional weight of fast and slow small bundles of fibres (SBF) in determining the contractile features of the tibialis anterior (TA) with ageing. SBFs were recruited by TA main motor point stimulation through 60 increasing levels of stimulation (LS): 20 stimuli at 2 Hz for each LS. The lowest and highest LS provided the smallest ST and STmax, respectively. The scanned STmax was decomposed into individual SBF STs, which were identified when twitches from adjacent LS were significantly different and then subtracted from each other. Nine young (Y) and eleven old (O) subjects were investigated. Contraction time (CT) and STarea/STpeak (A/PT) were calculated for each SBF ST. 143 and 155 SBF STs were obtained in Y and O, respectively. In Y, CT and A/PT ranged over 45-105 ms and 67-183 mNs/mN, respectively. Literature data set the TA fast-fibre fraction at 34%, so, from the arrays of CT and A/PT, 65 ms and 100 mNs/mN were identified as the upper limits for classifying an SBF ST as fast. In O, no SBF ST could be classified as fast. The STmax scan reveals age-related changes in the relative contribution of fast and slow SBFs to the overall muscle mechanics. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
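The decomposition step described above subtracts twitch waveforms recorded at adjacent stimulation levels to isolate the twitch of the newly recruited bundle. A toy sketch of that idea (the waveforms, the peak-difference threshold, and the helper name are invented for illustration; the study used a proper statistical test):

```python
# Hedged sketch of the STmax-scan decomposition: the twitch of the SBF
# recruited between two adjacent stimulation levels is estimated as the
# difference of their mean twitch waveforms. Synthetic data and a crude
# peak-difference threshold stand in for the study's statistical test.

def decompose_sbf_twitches(twitches, threshold=0.05):
    """twitches: mean twitch waveforms (lists of force samples), ordered by
    increasing stimulation level. Returns the significant increments."""
    sbf = []
    for lo, hi in zip(twitches, twitches[1:]):
        diff = [h - l for h, l in zip(hi, lo)]
        if max(diff) > threshold:  # stand-in for "significantly different"
            sbf.append(diff)
    return sbf

# Three levels: the second recruits an extra bundle, the third adds nothing.
levels = [[0.0, 1.0, 0.5], [0.0, 1.5, 0.8], [0.0, 1.5, 0.8]]
sbf = decompose_sbf_twitches(levels)
print(len(sbf))  # 1 significant increment
```

Per-bundle metrics such as contraction time or area/peak would then be computed on each extracted increment.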
NASA Astrophysics Data System (ADS)
Zhu, Ming; Liu, Tingting; Zhang, Xiangqun; Li, Caiyun
2018-01-01
Recently, a decomposition method of acoustic relaxation absorption spectra was used to capture the entire molecular multimode relaxation process of gas. In this method, the acoustic attenuation and phase velocity were measured jointly based on the relaxation absorption spectra. However, fast and accurate measurements of the acoustic attenuation remain challenging. In this paper, we present a method of capturing the molecular relaxation process by only measuring acoustic velocity, without the necessity of obtaining acoustic absorption. The method is based on the fact that the frequency-dependent velocity dispersion of a multi-relaxation process in a gas is the serial connection of the dispersions of interior single-relaxation processes. Thus, one can capture the relaxation times and relaxation strengths of N decomposed single-relaxation dispersions to reconstruct the entire multi-relaxation dispersion using the measurements of acoustic velocity at 2N + 1 frequencies. The reconstructed dispersion spectra are in good agreement with experimental data for various gases and mixtures. The simulations also demonstrate the robustness of our reconstructive method.
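The premise above is that multi-relaxation velocity dispersion is the serial connection of single-relaxation dispersion steps. The sketch below uses a standard textbook approximation for a single-relaxation contribution to the squared phase speed; the paper's exact expressions may differ, and all parameter values are illustrative.

```python
import math

# Hedged sketch: multi-relaxation velocity dispersion modeled as a sum of
# single-relaxation steps, each contributing
#   delta_c2 * (w*tau)^2 / (1 + (w*tau)^2)
# to the squared phase speed. Textbook approximation; parameters illustrative.

def c_squared(omega, c0, steps):
    """steps: list of (delta_c2, tau) pairs, one per relaxation process."""
    return c0 ** 2 + sum(
        dc2 * (omega * tau) ** 2 / (1.0 + (omega * tau) ** 2)
        for dc2, tau in steps
    )

# Two relaxation processes: the dispersion curve rises in two steps.
steps = [(500.0, 1e-4), (300.0, 1e-6)]  # (m^2/s^2, s) -- illustrative
low = math.sqrt(c_squared(2 * math.pi * 1e1, 340.0, steps))
high = math.sqrt(c_squared(2 * math.pi * 1e7, 340.0, steps))
print(low < high)  # phase velocity increases with frequency
```

Measuring the velocity at 2N + 1 frequencies then suffices, in principle, to fit the N pairs of relaxation strengths and times plus the low-frequency speed, which is the counting argument behind the paper's reconstruction.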
DOE Office of Scientific and Technical Information (OSTI.GOV)
Toosi, E. R.; Kravchenko, A. N.; Mao, J.
Macroaggregates are of interest because of their fast response to land management and their role in the loss or restoration of soil organic carbon (SOC). The study included two experiments. In Experiment I, we investigated the effect of long-term (27 years) land management on the chemical composition of organic matter (OM) of macroaggregates. Macroaggregates were sampled from topsoil under conventional cropping, cover cropping and natural succession systems. The OM of macroaggregates from conventional cropping was more decomposed than that of cover cropping and especially natural succession, based on larger δ15N values and decomposition indices determined by magic-angle spinning nuclear magnetic resonance (13C CP/MAS NMR) and Fourier transform infrared (FTIR) spectroscopy. Previous research at the sites studied suggested that this was mainly because of reduced diversity and activity of the decomposer community, change in nutrient stoichiometry from fertilization and contrasting formation pathways of macroaggregates in conventional cropping compared with cover cropping and, specifically, natural succession. In Experiment II, we investigated the relation between OM composition and pore characteristics of macroaggregates. Macroaggregates from the natural succession system only were studied. We determined the 3-D pore-size distribution of macroaggregates with X-ray microtomography, for which we cut the macroaggregates into sections that had contrasting dominant pore sizes. Then, we characterized the OM of macroaggregate sections with FTIR and δ15N methods. The results showed that within a macroaggregate, the OM was less decomposed in areas where the small (13–32 µm) or large (136–260 µm) pores were abundant. This was attributed to the role of large pores in supplying fresh OM and small pores in the effective protection of OM in macroaggregates.
Previous research at the site studied had shown increased abundance of large and small intra-aggregate pores following adoption of less intensive management systems. It appears that land management can alter the OM composition of macroaggregates, partly by the regulation of OM turnover at the intra-aggregate scale.
Ju, Jinyong; Li, Wei; Wang, Yuqiao; Fan, Mengbao; Yang, Xuefeng
2016-01-01
Effective feedback control requires all state variable information of the system. However, in the translational flexible-link manipulator (TFM) system, it is unrealistic to measure the vibration signals and their time derivatives at every point of the TFM with infinitely many sensors. With the rigid-flexible coupling between the global motion of the rigid base and the elastic vibration of the flexible-link manipulator considered, a two-time scale virtual sensor, which includes a speed observer and a vibration observer, is designed to estimate the vibration signals and their time derivatives of the TFM; the speed observer and the vibration observer are separately designed for the slow and fast subsystems, which are decomposed from the dynamic model of the TFM by the singular perturbation method. Additionally, based on linear-quadratic differential games, the observer gains of the two-time scale virtual sensor are optimized, with the aim of minimizing the estimation error while keeping the observer stable. Finally, numerical calculation and experiment verify the efficiency of the designed two-time scale virtual sensor. PMID:27801840
Song, Pengfei; Manduca, Armando; Zhao, Heng; Urban, Matthew W; Greenleaf, James F; Chen, Shigao
2014-06-01
A fast shear compounding method was developed in this study using only one shear wave push-detect cycle, such that the shear wave imaging frame rate is preserved and motion artifacts are minimized. The proposed method is composed of the following steps: 1. Applying a comb-push to produce multiple differently angled shear waves at different spatial locations simultaneously; 2. Decomposing the complex shear wave field into individual shear wave fields with differently oriented shear waves using a multi-directional filter; 3. Using a robust 2-D shear wave speed calculation to reconstruct 2-D shear elasticity maps from each filter direction; and 4. Compounding these 2-D maps from different directions into a final map. An inclusion phantom study showed that the fast shear compounding method could achieve comparable performance to conventional shear compounding without sacrificing the imaging frame rate. A multi-inclusion phantom experiment showed that the fast shear compounding method could provide a full field-of-view, 2-D and compounded shear elasticity map with three types of inclusions clearly resolved and stiffness measurements showing excellent agreement to the nominal values. Copyright © 2014 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.
Detection of buried magnetic objects by a SQUID gradiometer system
NASA Astrophysics Data System (ADS)
Meyer, Hans-Georg; Hartung, Konrad; Linzen, Sven; Schneider, Michael; Stolz, Ronny; Fried, Wolfgang; Hauspurg, Sebastian
2009-05-01
We present a magnetic detection system based on superconducting gradiometric sensors (SQUID gradiometers). The system provides a unique fast mapping of large areas with a high resolution of the magnetic field gradient as well as the local position. A main part of this work is the localization and classification of magnetic objects in the ground by automatic interpretation of geomagnetic field gradients, measured by the SQUID system. In accordance with specific features the field is decomposed into segments, which allow inferences to possible objects in the ground. The global consideration of object describing properties and their optimization using error minimization methods allows the reconstruction of superimposed features and detection of buried objects. The analysis system of measured geomagnetic fields works fully automatically. By a given surface of area-measured gradients the algorithm determines within numerical limits the absolute position of objects including depth with sub-pixel accuracy and allows an arbitrary position and attitude of sources. Several SQUID gradiometer data sets were used to show the applicability of the analysis algorithm.
Zheng, Le; Crippen, Tawni L; Dabney, Alan; Gordy, Alex; Tomberlin, Jeffery K
2017-09-01
The impact of six sterilized diets (blood-yeast agar diet, decomposed beef liver diet, powdered beef liver diet, powdered fish diet, milk-based diet, and a chemically defined diet) on Lucilia sericata (Meigen) larvae reared at three densities (10, 20, and 40 larvae on 20 g of diet) was determined in comparison to fresh beef liver as a control. Specifically, the effects of these diets on the following traits of L. sericata were measured: 1) pupal weight, 2) pupation percentage, 3) eclosion percentage, and 4) adult longevity. The experiment included two trials with five technical replicates each. Lucilia sericata did not successfully develop on the powdered fish, milk-based, or chemically defined diets. Overall, the liver-based diets (decomposed and powdered) resulted in fly development most similar to that on fresh beef liver. Larvae reared on the blood-yeast agar diet had a significantly greater pupation rate (20.56% ± 8.09% higher) than those reared on the decomposed and powdered beef liver diets. Pupae from larvae fed the fresh beef liver were significantly larger (by 6.27 ± 1.01 mg and 4.05 ± 0.94 mg, respectively) than those reared on the blood-yeast agar diet and the decomposed and powdered beef liver diets. Overall, the results revealed that larvae reared on sterilized liver-based diets exhibited traits similar to those raised on fresh beef liver. Owing to their low cost, the sterile liver-based diets could be produced and used in settings with limited infrastructure and economic resources. © The Authors 2017. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Parallel algorithms for placement and routing in VLSI design. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Brouwer, Randall Jay
1991-01-01
The computational requirements for high quality synthesis, analysis, and verification of very large scale integration (VLSI) designs have rapidly increased with the fast growing complexity of these designs. Research in the past has focused on the development of heuristic algorithms, special purpose hardware accelerators, or parallel algorithms for the numerous design tasks to decrease the time required for solution. Two new parallel algorithms are proposed for two VLSI synthesis tasks, standard cell placement and global routing. The first algorithm, a parallel algorithm for global routing, uses hierarchical techniques to decompose the routing problem into independent routing subproblems that are solved in parallel. Results are then presented which compare the routing quality to the results of other published global routers and which evaluate the speedups attained. The second algorithm, a parallel algorithm for cell placement and global routing, hierarchically integrates a quadrisection placement algorithm, a bisection placement algorithm, and the previous global routing algorithm. Unique partitioning techniques are used to decompose the various stages of the algorithm into independent tasks which can be evaluated in parallel. Finally, results are presented which evaluate the various algorithm alternatives and compare the algorithm performance to other placement programs. Measurements are presented on the parallel speedups available.
NASA Astrophysics Data System (ADS)
Mehta, Dalip Singh; Sharma, Anuradha; Dubey, Vishesh; Singh, Veena; Ahmad, Azeem
2016-03-01
We present single-shot white light interference microscopy for the quantitative phase imaging (QPI) of biological cells and tissues. A common-path white light interference microscope is developed, and a color white light interferogram is recorded by a three-chip color CCD camera. The recorded white light interferogram is decomposed into its red, green and blue wavelength component interferograms, which are processed to determine the refractive index (RI) at the different color wavelengths. The decomposed interferograms are analyzed using a local model fitting (LMF) algorithm developed for reconstructing the phase map from a single interferogram. LMF is a slightly off-axis interferometric QPI method that employs only a single image, so it is fast and accurate. The present method is very useful for dynamic processes where the path length changes at the millisecond level. From the single interferogram, wavelength-dependent quantitative phase images of human red blood cells (RBCs) are reconstructed and the refractive index is determined. The LMF algorithm is simple to implement and computationally efficient. The results are compared with conventional phase shifting interferometry and Hilbert transform techniques.
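The first processing step above, splitting the color interferogram into per-wavelength interferograms, is simple channel separation. A minimal sketch (toy 2x2 frame; the LMF phase reconstruction itself is not reproduced here):

```python
# Hedged sketch of the channel-decomposition step: a three-chip color camera
# frame of (r, g, b) pixels is split into three single-wavelength
# interferograms, one per color channel. Toy data for illustration only.

def split_channels(rgb_frame):
    """rgb_frame: 2D grid of (r, g, b) tuples.
    Returns the (red, green, blue) single-channel interferograms."""
    return tuple(
        [[px[c] for px in row] for row in rgb_frame] for c in range(3)
    )

frame = [[(10, 20, 30), (11, 21, 31)],
         [(12, 22, 32), (13, 23, 33)]]
red, green, blue = split_channels(frame)
print(red[0][1], blue[1][0])  # 11 32
```

Each single-channel interferogram would then be passed to the phase-retrieval algorithm independently, yielding the wavelength-dependent phase maps.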
Fast segmentation of satellite images using SLIC, WebGL and Google Earth Engine
NASA Astrophysics Data System (ADS)
Donchyts, Gennadii; Baart, Fedor; Gorelick, Noel; Eisemann, Elmar; van de Giesen, Nick
2017-04-01
Google Earth Engine (GEE) is a parallel geospatial processing platform, which harmonizes access to petabytes of freely available satellite images. It provides a very rich API, allowing development of dedicated algorithms to extract useful geospatial information from these images. At the same time, modern GPUs provide thousands of computing cores, which are mostly not utilized in this context. In the last years, WebGL became a popular and well-supported API, allowing fast image processing directly in web browsers. In this work, we will evaluate the applicability of WebGL to enable fast segmentation of satellite images. A new implementation of a Simple Linear Iterative Clustering (SLIC) algorithm using GPU shaders will be presented. SLIC is a simple and efficient method to decompose an image in visually homogeneous regions. It adapts a k-means clustering approach to generate superpixels efficiently. While this approach will be hard to scale, due to a significant amount of data to be transferred to the client, it should significantly improve exploratory possibilities and simplify development of dedicated algorithms for geoscience applications. Our prototype implementation will be used to improve surface water detection of the reservoirs using multispectral satellite imagery.
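SLIC, as used above, is k-means clustering in a joint color-and-position space with seeds on a regular grid and a bounded search window. A heavily simplified, pure-Python sketch on a grayscale toy image (real SLIC works in CIELAB, shifts seeds off gradients, and in the paper's setting runs in WebGL shaders; the parameter names here are illustrative):

```python
# Minimal SLIC-style superpixels: k-means over (intensity, y, x) with seeds
# on a regular grid and a 2*step search window. Simplified illustration only.

def slic(img, n_seg, m=10.0, iters=5):
    """img: 2D list of intensities in [0, 1]. Returns a label grid."""
    h, w = len(img), len(img[0])
    step = max(1, int((h * w / n_seg) ** 0.5))
    centers = [[img[y][x], float(y), float(x)]          # (intensity, y, x)
               for y in range(step // 2, h, step)
               for x in range(step // 2, w, step)]
    labels = [[0] * w for _ in range(h)]
    for _ in range(iters):
        # assignment: nearest center within a 2*step window, with intensity
        # differences weighted by the compactness factor m
        for y in range(h):
            for x in range(w):
                best, best_d = 0, float("inf")
                for k, (ci, cy, cx) in enumerate(centers):
                    if abs(y - cy) > 2 * step or abs(x - cx) > 2 * step:
                        continue
                    d = (m * (img[y][x] - ci)) ** 2 + (y - cy) ** 2 + (x - cx) ** 2
                    if d < best_d:
                        best, best_d = k, d
                labels[y][x] = best
        # update: move each center to the mean of its assigned pixels
        sums = [[0.0, 0.0, 0.0, 0] for _ in centers]
        for y in range(h):
            for x in range(w):
                s = sums[labels[y][x]]
                s[0] += img[y][x]; s[1] += y; s[2] += x; s[3] += 1
        for k, (si, sy, sx, n) in enumerate(sums):
            if n:
                centers[k] = [si / n, sy / n, sx / n]
    return labels

# Two flat halves: superpixels should not straddle the intensity boundary.
img = [[0.0] * 4 + [1.0] * 4 for _ in range(8)]
labels = slic(img, n_seg=4)
print(labels[0][0] != labels[0][7])  # opposite halves in different segments
```

The per-pixel assignment loop is embarrassingly parallel, which is exactly what makes the algorithm a good fit for a GPU fragment-shader implementation.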
Hesford, Andrew J.; Chew, Weng C.
2010-01-01
The distorted Born iterative method (DBIM) computes iterative solutions to nonlinear inverse scattering problems through successive linear approximations. By decomposing the scattered field into a superposition of scattering by an inhomogeneous background and by a material perturbation, large or high-contrast variations in medium properties can be imaged through iterations that are each subject to the distorted Born approximation. However, the need to repeatedly compute forward solutions still imposes a very heavy computational burden. To ameliorate this problem, the multilevel fast multipole algorithm (MLFMA) has been applied as a forward solver within the DBIM. The MLFMA computes forward solutions in linear time for volumetric scatterers. The typically regular distribution and shape of scattering elements in the inverse scattering problem allow the method to take advantage of data redundancy and reduce the computational demands of the normally expensive MLFMA setup. Additional benefits are gained by employing Kaczmarz-like iterations, where partial measurements are used to accelerate convergence. Numerical results demonstrate both the efficiency of the forward solver and the successful application of the inverse method to imaging problems with dimensions in the neighborhood of ten wavelengths. PMID:20707438
Phenotypic responses to microbial volatiles render a mold fungus more susceptible to insect damage.
Caballero Ortiz, Silvia; Trienens, Monika; Pfohl, Katharina; Karlovsky, Petr; Holighaus, Gerrit; Rohlfs, Marko
2018-04-01
In decomposer systems, fungi show diverse phenotypic responses to volatile organic compounds of microbial origin (volatiles). The mechanisms underlying such responses and their consequences for the performance and ecological success of fungi in a multitrophic community context have rarely been tested explicitly. We used a laboratory-based approach in which we investigated a tripartite yeast-mold-insect model decomposer system to understand the possible influence of yeast-borne volatiles on the ability of a chemically defended mold fungus to resist insect damage. The volatile-exposed mold phenotype (1) did not exhibit protein kinase A-dependent morphological differentiation, (2) was more susceptible to insect foraging activity, and (3) had reduced insecticidal properties. Additionally, the volatile-exposed phenotype was strongly impaired in secondary metabolite formation and unable to activate "chemical defense" genes upon insect damage. These results suggest that volatiles can be ecologically important factors that affect the chemical-based combative abilities of fungi against insect antagonists and, consequently, the structure and dynamics of decomposer communities.
Kaiser, Christina; Franklin, Oskar; Richter, Andreas; Dieckmann, Ulf
2015-01-01
The chemical structure of organic matter has been shown to be only marginally important for its decomposability by microorganisms. The question of why organic matter does accumulate in the face of powerful microbial degraders is thus key for understanding terrestrial carbon and nitrogen cycling. Here we demonstrate, based on an individual-based microbial community model, that social dynamics among microbes producing extracellular enzymes (‘decomposers') and microbes exploiting the catalytic activities of others (‘cheaters') regulate organic matter turnover. We show that the presence of cheaters increases nitrogen retention and organic matter build-up by downregulating the ratio of extracellular enzymes to total microbial biomass, allowing nitrogen-rich microbial necromass to accumulate. Moreover, increasing catalytic efficiencies of enzymes are outbalanced by a strong negative feedback on enzyme producers, leading to less enzymes being produced at the community level. Our results thus reveal a possible control mechanism that may buffer soil CO2 emissions in a future climate. PMID:26621582
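The core mechanism above, cheaters lowering the enzyme-to-biomass ratio and thereby slowing organic matter turnover, can be caricatured in a few lines. This is a hedged toy sketch, not the authors' individual-based model; every rate constant is invented for illustration.

```python
# Toy sketch of the decomposer/cheater mechanism: only enzyme producers make
# extracellular enzymes, so a larger cheater fraction means fewer enzymes per
# unit community biomass and slower organic matter decomposition.
# NOT the authors' individual-based model; all parameters are illustrative.

def som_remaining(cheater_frac, steps=100, som=100.0, biomass=1.0):
    producers = biomass * (1.0 - cheater_frac)
    for _ in range(steps):
        enzymes = 0.1 * producers            # enzyme pool scales with producers
        som -= min(som, 0.05 * enzymes * som)  # first-order enzymatic release
    return som

full = som_remaining(0.0)   # producer-only community
half = som_remaining(0.5)   # half the biomass cheats
print(full < half)  # cheaters leave more organic matter behind
```

Even this caricature reproduces the qualitative result that organic matter accumulates when exploiters downregulate community-level enzyme production.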
Effect of argon and hydrogen on deposition of silicon from tetrachlorosilane in cold plasmas
NASA Technical Reports Server (NTRS)
Manory, R. R.; d.
1985-01-01
The roles of Ar and H2 in the decomposition of SiCl4 in a cold plasma were investigated by Langmuir probes and mass spectrometry. Decomposition of the reactant by Ar alone was found to be very slow. In the presence of H2 in the plasma, SiCl4 is decomposed by fast radical-molecule reactions, which are further enhanced by Ar owing to additional ion-molecule reactions in which more H radicals are produced. A model for the plasma-surface interactions during deposition of μ-Si in the Ar + H2 + SiCl4 system is presented.
Cong, Fengyu; Puoliväli, Tuomas; Alluri, Vinoo; Sipola, Tuomo; Burunat, Iballa; Toiviainen, Petri; Nandi, Asoke K; Brattico, Elvira; Ristaniemi, Tapani
2014-02-15
Independent component analysis (ICA) has often been used to decompose fMRI data, mostly for resting-state, block and event-related designs, owing to its outstanding advantages; for fMRI data acquired during free-listening experiences, only a few exploratory studies have applied ICA. To process fMRI data elicited by a 512-s piece of modern tango, an FFT-based band-pass filter was first used to further pre-process the data and remove sources of no interest and noise. Then, a fast model order selection method was applied to estimate the number of sources. Next, both individual ICA and group ICA were performed. Subsequently, ICA components whose temporal courses were significantly correlated with musical features were selected. Finally, for individual ICA, components common across the majority of participants were found by diffusion map and spectral clustering. The spatial maps extracted by the new ICA approach and common across most participants evidenced slightly right-lateralized activity within and surrounding the auditory cortices, and were found to be associated with the musical features. Compared with the conventional ICA approach, more participants were found to share the common spatial maps extracted by the new approach. Conventional model order selection methods underestimated the true number of sources in the conventionally pre-processed fMRI data for individual ICA. Pre-processing the fMRI data with a reasonable band-pass digital filter can thus greatly benefit subsequent model order selection and ICA for fMRI data from naturalistic paradigms. Diffusion map and spectral clustering are straightforward tools for finding common ICA spatial maps. Copyright © 2013 Elsevier B.V. All rights reserved.
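The band-pass pre-filtering step described above can be sketched as follows. The cut-off frequencies, sampling rate, and toy voxel time course below are illustrative assumptions, not values from the study; masking rFFT bins is the simplest form of FFT-based band-pass filtering.

```python
import numpy as np

def fft_bandpass(x, fs, f_lo, f_hi):
    """Zero out FFT bins outside [f_lo, f_hi] (hypothetical band edges)."""
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return np.fft.irfft(X * mask, n=x.size)

fs = 0.5                                      # 1 volume per 2 s (assumed TR)
t = np.arange(256) / fs
x = (0.5 * np.sin(2 * np.pi * 0.002 * t)      # slow scanner drift (toy)
     + np.sin(2 * np.pi * 0.03 * t)           # band of interest
     + 0.3 * np.sin(2 * np.pi * 0.2 * t))     # fast noise
y = fft_bandpass(x, fs, 0.01, 0.1)            # keep only 0.01-0.1 Hz
```

A production pipeline would typically use a digital filter with a smoother transition band, but the hard spectral mask makes the idea of removing "sources of no interest" before ICA explicit.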
Fast, Nonlinear, Fully Probabilistic Inversion of Large Geophysical Problems
NASA Astrophysics Data System (ADS)
Curtis, A.; Shahraeeni, M.; Trampert, J.; Meier, U.; Cho, G.
2010-12-01
Almost all geophysical inverse problems are in reality nonlinear. Fully nonlinear inversion including non-approximated physics, and solving for probability distribution functions (pdf's) that describe the solution uncertainty, generally requires sampling-based Monte-Carlo style methods that are computationally intractable for most large problems. In order to solve such problems, physical relationships are usually linearized, leading to efficiently solved, (possibly iterated) linear inverse problems. However, it is well known that linearization can lead to erroneous solutions, and in particular to overly optimistic uncertainty estimates. What is needed across many geophysical disciplines is a method to solve large inverse problems (or potentially tens of thousands of small inverse problems) fully probabilistically and without linearization. This talk shows how very large nonlinear inverse problems can be solved fully probabilistically, incorporating any available prior information, using mixture density networks (driven by neural network banks), provided the problem can be decomposed into many small inverse problems. In this talk I will explain the methodology, compare multi-dimensional pdf inversion results to full Monte Carlo solutions, and illustrate the method with two applications: first, inverting surface wave group and phase velocities for a fully probabilistic global tomography model of the Earth's crust and mantle, and second, inverting industrial 3D seismic data for petrophysical properties throughout and around a subsurface hydrocarbon reservoir. The latter problem is typically decomposed into 10^4 to 10^5 individual inverse problems, each solved fully probabilistically and without linearization. The results in both cases are sufficiently close to the Monte Carlo solutions to exhibit realistic uncertainty, multimodality and bias. This provides far greater confidence in the results, and in decisions made on their basis.
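For intuition, one of the many small inverse problems can be solved fully probabilistically, without linearization, by brute force on a model grid. The toy forward function, noise level, and flat prior below are illustrative assumptions (the talk itself uses mixture density networks, not grid search):

```python
import numpy as np

def posterior_grid(d_obs, forward, m_grid, sigma=0.1):
    """Posterior pdf over a 1-D model grid for one datum, Gaussian noise assumed."""
    misfit = (forward(m_grid) - d_obs) ** 2
    like = np.exp(-0.5 * misfit / sigma ** 2)
    prior = np.ones_like(m_grid)              # flat prior (assumption)
    post = like * prior
    dm = m_grid[1] - m_grid[0]
    return post / (post.sum() * dm)           # normalise to a pdf

forward = lambda m: m ** 2                    # toy non-linear "physics"
m = np.linspace(-2.0, 2.0, 2001)
p = posterior_grid(1.0, forward, m)           # datum d = 1 -> modes at m = +/-1
```

Because the physics is kept non-linear, the posterior is bimodal (m = +1 and m = -1 fit the datum equally well); a linearized inversion around one starting model would miss the second mode entirely, which is exactly the failure mode the abstract warns about.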
Qu, Chang-feng; Song, Jin-ming; Li, Ning; Li, Xue-gang; Yuan, Hua-mao; Duan, Li-qin
2016-01-01
Jellyfish blooms have been increasing in Chinese seas, and decomposition after a bloom has great influence on the marine ecological environment. We conducted incubation experiments on decomposing Nemopilema nomurai to evaluate its effect on carbon, nitrogen and phosphorus recycling in the water column. The results showed that jellyfish decomposition was characterized by fast release of biogenic elements, with the release of carbon, nitrogen and phosphorus reaching its maximum at the beginning of decomposition. The release of biogenic elements from jellyfish decomposition was dominated by dissolved matter, at much higher levels than particulate matter. The highest net release rates of dissolved organic carbon and particulate organic carbon reached (103.77 ± 12.60) and (1.52 ± 0.37) mg · kg⁻¹ · h⁻¹, respectively. Dissolved nitrogen was dominated by NH₄⁺-N during the whole incubation, accounting for 69.6%-91.6% of total dissolved nitrogen, whereas dissolved phosphorus was dominated by dissolved organic phosphorus during the initial stage of decomposition (63.9%-86.7% of total dissolved phosphorus) and by PO₄³⁻-P during the late stage (50.4%-60.2%). In contrast, particulate nitrogen was mainly particulate organic nitrogen, accounting for (88.6 ± 6.9)% of total particulate nitrogen, whereas particulate phosphorus was mainly particulate inorganic phosphorus, accounting for (73.9 ± 10.5)% of total particulate phosphorus. In addition, jellyfish decomposition decreased the C/N ratio and increased the N/P ratio of the water column. These results indicate that jellyfish decomposition could result in relatively high carbon and nitrogen loads.
Fast localized orthonormal virtual orbitals which depend smoothly on nuclear coordinates.
Subotnik, Joseph E; Dutoi, Anthony D; Head-Gordon, Martin
2005-09-15
We present here an algorithm for computing stable, well-defined localized orthonormal virtual orbitals which depend smoothly on nuclear coordinates. The algorithm is very fast, limited only by the diagonalization of two matrices whose dimension is the number of virtual orbitals. Furthermore, we require no more than quadratic (in the number of electrons) storage. The basic premise behind our algorithm is that one can decompose any given atomic-orbital (AO) vector space into a minimal basis space (which includes the occupied and valence virtual spaces) and a hard-virtual (HV) space (which includes everything else). The valence virtual space localizes easily with standard methods, while the hard-virtual space is constructed to be atom-centered and automatically local. The orbitals presented here may be computed almost as quickly as projecting the AO basis onto the virtual space and are almost as local (according to orbital variance), while being orthonormal (rather than redundant and nonorthogonal). We expect this algorithm to find use in local-correlation methods.
Accounting carbon storage in decaying root systems of harvested forests.
Wang, G Geoff; Van Lear, David H; Hu, Huifeng; Kapeluck, Peter R
2012-05-01
Decaying root systems of harvested trees can be a significant component of belowground carbon storage, especially in intensively managed forests where harvest occurs repeatedly in relatively short rotations. Based on destructive sampling of root systems of harvested loblolly pine trees, we estimated that root systems contained about 32% (17.2 Mg ha(-1)) of the soil organic carbon at the time of harvest, and about 13% (6.1 Mg ha(-1)) 10 years later. Based on published roundwood output data, we estimated belowground biomass at the time of harvest for loblolly-shortleaf pine forests harvested between 1995 and 2005 in South Carolina. We then calculated the C that remained in the decomposing root systems in 2005 using the decay function developed for loblolly pine. Our calculations indicate that the amount of C stored in decaying roots of loblolly-shortleaf pine forests harvested between 1995 and 2005 in South Carolina was 7.1 Tg. Using a simple extrapolation method, we estimated 331.8 Tg C stored in decomposing roots due to timber harvest from 1995 to 2005 in the conterminous USA. To fully account for the C stored in the decomposing roots of US forests, future studies need (1) to quantify decay rates of coarse roots for major tree species in different regions, and (2) to develop a methodology that can determine the C stock in decomposing roots resulting from natural mortality.
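The two carbon figures in the abstract imply a decay constant if one assumes a single-exponential decay function; this is a simplifying assumption for illustration, since the study's fitted loblolly pine decay function may have a different form:

```python
import math

C0 = 17.2   # Mg C per ha in decomposing roots at harvest (from the abstract)
C10 = 6.1   # Mg C per ha remaining 10 years later (from the abstract)
k = math.log(C0 / C10) / 10.0        # implied first-order decay constant, 1/yr

def root_carbon(t):
    """Carbon remaining in the decomposing root system t years after harvest,
    under the single-exponential assumption C(t) = C0 * exp(-k t)."""
    return C0 * math.exp(-k * t)
```

With these numbers k works out to roughly 0.10 per year, i.e. about 10% of the remaining root carbon is lost annually, which is the kind of species- and region-specific rate the authors call for quantifying.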
Liu, Zhiwen; He, Zhengjia; Guo, Wei; Tang, Zhangchun
2016-03-01
In order to extract fault features of large-scale power equipment from strong background noise, a hybrid fault diagnosis method based on second generation wavelet de-noising (SGWD) and local mean decomposition (LMD) is proposed in this paper. In this method, a de-noising algorithm based on the second generation wavelet transform (SGWT) using neighboring coefficients is employed as a pretreatment to remove noise from rotating machinery vibration signals, by virtue of its good effect in enhancing the signal-to-noise ratio (SNR). Then, the LMD method is used to decompose the de-noised signals into several product functions (PFs). The PF corresponding to the fault feature signal is selected according to the correlation coefficient criterion. Finally, the frequency spectrum is analyzed by applying the FFT to the selected PF. The proposed method is applied to analyze vibration signals collected from an experimental gearbox and a real locomotive rolling bearing. The results demonstrate that the proposed method achieves better performance, such as higher SNR and faster convergence, than the normal LMD method. Crown Copyright © 2016. Published by Elsevier Ltd. All rights reserved.
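The component-selection and spectrum steps of the pipeline can be sketched as below; the synthetic "product functions", fault frequency, and sampling rate are invented stand-ins for real LMD output, which is not reproduced here:

```python
import numpy as np

fs = 1000.0                                   # Hz, assumed sampling rate
t = np.arange(2048) / fs
fault = np.sin(2 * np.pi * 120 * t)           # assumed fault frequency component
rng = np.random.default_rng(0)
raw = fault + 0.2 * rng.normal(size=t.size)   # stand-in for the de-noised signal

# synthetic stand-ins for two LMD product functions (PFs)
pfs = [np.sin(2 * np.pi * 30 * t),
       fault + 0.05 * np.sin(2 * np.pi * 300 * t)]

# correlation-coefficient criterion: keep the PF best matching the signal
corrs = [abs(np.corrcoef(raw, pf)[0, 1]) for pf in pfs]
best = pfs[int(np.argmax(corrs))]

# FFT of the selected PF reveals the dominant (fault) frequency
spec = np.abs(np.fft.rfft(best))
peak_hz = np.fft.rfftfreq(best.size, 1.0 / fs)[int(np.argmax(spec))]
```

The selection rule is deliberately simple: whichever candidate component correlates best with the (de-noised) raw signal is assumed to carry the fault signature, and its spectrum is then inspected for characteristic frequencies.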
Synthesis of Graphene-Based Sensors and Application on Detecting SF6 Decomposing Products: A Review
Zhang, Xiaoxing; Cui, Hao; Gui, Yingang
2017-01-01
Graphene-based materials have attracted enormous attention across a wide range of engineering fields because of their unique structure. One of the most promising applications is gas adsorption and sensing. In electrical engineering, graphene-based sensors are also employed as detecting devices to estimate the operation status of gas insulated switchgear (GIS). This paper reviews the main synthesis methods of graphene, the gas adsorption and sensing mechanisms of graphene-based sensors, and their applications in detecting SF6 decomposition products, such as SO2, H2S, SO2F2, and SOF2, in GIS. Both theoretical and experimental research on the gas response of graphene-based sensors to these typical gases is summarized. Finally, future research trends in graphene synthesis techniques and relevant perspectives are also given. PMID:28208836
Arterial stiffness estimation based on photoplethysmographic pulse wave analysis
NASA Astrophysics Data System (ADS)
Huotari, Matti; Maatta, Kari; Kostamovaara, Juha
2010-11-01
Arterial stiffness is one of the indices of vascular health and can be estimated by pulse wave analysis. Here, the pulse waveform, measured optically with a photoplethysmograph, is decomposed into four lognormal component waveforms whose sum fits the original pulse wave very closely, and arterial elasticity is estimated from this decomposition. Several studies have demonstrated that such measures predict cardiovascular events. Arterial stiffness also depends on fixed structural features of the vascular wall. In this work, arterial stiffness is estimated from pulse wave decomposition analysis in the radial and tibial arteries. Elucidation of the precise relationship between endothelial function and vascular stiffness still awaits further study.
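A minimal sketch of the pulse-wave decomposition idea, fitting a sum of lognormal components by least squares. Two components are used for brevity (the study summed four), and all waveform parameters are invented for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

def lognormal_wave(t, a, mu, s):
    """One lognormal pulse component (defined for t > 0)."""
    return a * np.exp(-(np.log(t) - mu) ** 2 / (2 * s ** 2)) / t

def model(t, a1, m1, s1, a2, m2, s2):
    """Forward (percussion) plus reflected wave; the study summed four terms."""
    return lognormal_wave(t, a1, m1, s1) + lognormal_wave(t, a2, m2, s2)

t = np.linspace(0.05, 2.0, 400)               # s, roughly one cardiac cycle
true = (1.0, -1.0, 0.30, 0.4, -0.3, 0.35)     # invented component parameters
pulse = model(t, *true)                       # synthetic measured waveform

# in practice p0 would come from physiological priors; here we start at truth
fit, _ = curve_fit(model, t, pulse, p0=true)
resid = np.sqrt(np.mean((model(t, *fit) - pulse) ** 2))
```

Indices of stiffness are then derived from the fitted component parameters, e.g. the timing and amplitude ratio of the reflected wave relative to the forward wave.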
Decomposition of timed automata for solving scheduling problems
NASA Astrophysics Data System (ADS)
Nishi, Tatsushi; Wakatake, Masato
2014-03-01
A decomposition algorithm for scheduling problems based on a timed automata (TA) model is proposed. The problem is represented as an optimal state transition problem for TA. The model comprises the parallel composition of submodels such as jobs and resources. The proposed methodology consists of two steps. The first step is to decompose the TA model into several submodels using a decomposability condition. The second step is to combine the individual solutions of the subproblems for the decomposed submodels by the penalty function method. A feasible solution for the entire model is derived through iterated computation, solving the subproblem for each submodel. The proposed methodology is applied to solve flowshop and jobshop scheduling problems. Computational experiments demonstrate the effectiveness of the proposed algorithm compared with a conventional TA scheduling algorithm without decomposition.
Reaction mechanisms in the organometallic vapor phase epitaxial growth of GaAs
NASA Technical Reports Server (NTRS)
Larsen, C. A.; Buchan, N. I.; Stringfellow, G. B.
1988-01-01
The decomposition mechanisms of AsH3, trimethylgallium (TMGa), and mixtures of the two have been studied in an atmospheric-pressure flow system with the use of D2 to label the reaction products which are analyzed in a time-of-flight mass spectrometer. AsH3 decomposes entirely heterogeneously to give H2. TMGa decomposes by a series of gas-phase steps, involving methyl radicals and D atoms to produce CH3D, CH4, C2H6, and HD. TMGa decomposition is accelerated by the presence of AsH3. When the two are mixed, as in the organometallic vapor phase epitaxial growth of GaAs, both compounds decompose in concert to produce only CH4. A likely model is that of a Lewis acid-base adduct that forms and subsequently eliminates CH4.
Fourier transform for fermionic systems and the spectral tensor network.
Ferris, Andrew J
2014-07-04
Leveraging the decomposability of the fast Fourier transform, I propose a new class of tensor network that is efficiently contractible and able to represent many-body systems with local entanglement that is greater than the area law. Translationally invariant systems of free fermions in arbitrary dimensions as well as 1D systems solved by the Jordan-Wigner transformation are shown to be exactly represented in this class. Further, it is proposed that these tensor networks be used as generic structures to variationally describe more complicated systems, such as interacting fermions. This class shares some similarities with the Evenbly-Vidal branching multiscale entanglement renormalization ansatz, but with some important differences and greatly reduced computational demands.
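The decomposability being leveraged is the radix-2 Cooley-Tukey splitting of the FFT into two interleaved half-size transforms plus a layer of butterflies, which is what gives the network its hierarchical, efficiently contractible structure:

```python
import numpy as np

def fft_recursive(x):
    """Radix-2 Cooley-Tukey FFT: split into even/odd half-size transforms,
    then combine with twiddle-factor butterflies. Length must be a power of 2."""
    n = len(x)
    if n == 1:
        return np.asarray(x, dtype=complex)
    even = fft_recursive(x[0::2])             # DFT of even-indexed samples
    odd = fft_recursive(x[1::2])              # DFT of odd-indexed samples
    twiddle = np.exp(-2j * np.pi * np.arange(n // 2) / n)
    return np.concatenate([even + twiddle * odd,
                           even - twiddle * odd])

x = np.random.default_rng(1).normal(size=64)
X = fft_recursive(x)
```

Each recursion level is a sparse layer of 2x2 butterflies; reinterpreting those layers as tensors is, loosely, how the spectral tensor network obtains O(N log N)-type structure rather than a dense N x N transform.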
Development of a Dual Windowed Test Vehicle for Live Streaming of Cook-Off in Energetic Materials
NASA Astrophysics Data System (ADS)
Cheese, Phil; Reeves, Tom; White, Nathan; Stennett, Christopher; Wood, Andrew; Cook, Malcolm; Syanco Ltd Team; Cranfield University Team; DE&S, MoD Abbey Wood Team
2017-06-01
A modular, axially connected test vehicle has been developed and tested for researching how various heating rates (cook-off) influence energetic materials and how they fundamentally decompose, leading to a violent reaction. The vehicle can accommodate samples up to 50 mm in diameter, with thicknesses variable from 0.5 mm up to 50 mm. A unique feature of this vehicle is the ability to have a live high-speed camera view, without compromising confinement, during the cook-off process. This is achieved via two special windows that allow artificial backlighting to be provided at one end for clear observation of the test sample; this has allowed unprecedented views of how explosives decompose and run away to violent reaction, has given insight into the reaction mechanisms operating, and challenges current theories. Using glass windows, a burst pressure of 20 MPa has been measured. The heating rate is fully adjustable from slow to fast, and the design allows confinement to be varied to study its influence on the violence of reaction during cook-off. In addition to the view of the test sample during cook-off, embedded thermocouples provide detailed temperature records, and the ability to use PDV instrumentation is also incorporated.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maurer, Simon A.; Clin, Lucien; Ochsenfeld, Christian, E-mail: christian.ochsenfeld@uni-muenchen.de
2014-06-14
Our recently developed QQR-type integral screening is introduced in our Cholesky-decomposed pseudo-densities Møller-Plesset perturbation theory of second order (CDD-MP2) method. We use the resolution-of-the-identity (RI) approximation in combination with efficient integral transformations employing sparse matrix multiplications. The RI-CDD-MP2 method shows an asymptotic cubic scaling behavior with system size and a small prefactor that results in an early crossover to conventional methods for both small and large basis sets. We also explore the use of local fitting approximations, which allow us to further reduce the scaling behavior for very large systems. The reliability of our method is demonstrated on test sets for interaction and reaction energies of medium-sized systems and on a diverse selection from our own benchmark set for total energies of larger systems. Timings on DNA systems show that fast calculations for systems with more than 500 atoms are feasible using a single processor core. Parallelization extends the range of accessible system sizes on one computing node with multiple cores to more than 1000 atoms in a double-zeta basis and more than 500 atoms in a triple-zeta basis.
Profiler - A Fast and Versatile New Program for Decomposing Galaxy Light Profiles
NASA Astrophysics Data System (ADS)
Ciambur, Bogdan C.
2016-12-01
I introduce Profiler, a user-friendly program designed to analyse the radial surface brightness profiles of galaxies. With an intuitive graphical user interface, Profiler can accurately model galaxies of a broad range of morphological types, with various parametric functions routinely employed in the field (Sérsic, core-Sérsic, exponential, Gaussian, Moffat, and Ferrers). In addition to these, Profiler can employ the broken exponential model for disc truncations or anti-truncations, and two special cases of the edge-on disc model: along the disc's major or minor axis. The convolution of (circular or elliptical) models with the point spread function is performed in 2D, and offers a choice between Gaussian, Moffat or a user-provided profile for the point spread function. Profiler is optimised to work with galaxy light profiles obtained from isophotal measurements, which allow for radial gradients in the geometric parameters of the isophotes, and are thus often better at capturing the total light than 2D image-fitting programs. Additionally, the 1D approach is generally less computationally expensive and more stable. I demonstrate Profiler's features by decomposing three case-study galaxies: the cored elliptical galaxy NGC 3348, the nucleated dwarf Seyfert I galaxy Pox 52, and NGC 2549, a double-barred galaxy with an edge-on, truncated disc.
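As an example of the parametric functions Profiler employs, the Sérsic profile in surface-brightness (magnitude) units can be evaluated as below. The b_n approximation used (b_n ≈ 1.9992 n − 0.3271, reasonable for roughly 0.5 < n < 10) and the example parameters are standard illustrative choices, not values from the paper:

```python
import numpy as np

def sersic_mu(R, mu_e, R_e, n):
    """Sersic surface-brightness profile, mag arcsec^-2.
    mu_e: surface brightness at the effective radius R_e; n: Sersic index."""
    b_n = 1.9992 * n - 0.3271                 # common linear approximation
    return mu_e + 2.5 * b_n / np.log(10.0) * ((R / R_e) ** (1.0 / n) - 1.0)

R = np.linspace(0.1, 50.0, 500)               # radius, arcsec
mu = sersic_mu(R, mu_e=20.0, R_e=10.0, n=4.0) # de Vaucouleurs-like bulge
```

By construction the profile equals mu_e at R = R_e and grows fainter (larger magnitude) outward; a decomposition then fits a sum of such components (bulge, disc, bar, etc.) to the measured 1D profile.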
Opposing effects of different soil organic matter fractions on crop yields.
Wood, Stephen A; Sokol, Noah; Bell, Colin W; Bradford, Mark A; Naeem, Shahid; Wallenstein, Matthew D; Palm, Cheryl A
2016-10-01
Soil organic matter is critical to sustainable agriculture because it provides nutrients to crops as it decomposes and increases nutrient- and water-holding capacity when built up. Fast- and slow-cycling fractions of soil organic matter can have different impacts on crop production because fast-cycling fractions rapidly release nutrients for short-term plant growth and slow-cycling fractions bind nutrients that mineralize slowly and build up water-holding capacity. We explored the controls on these fractions in a tropical agroecosystem and their relationship to crop yields. We performed physical fractionation of soil organic matter from 48 farms and plots in western Kenya. We found that fast-cycling, particulate organic matter was positively related to crop yields, but did not have a strong effect, while slower-cycling, mineral-associated organic matter was negatively related to yields. Our finding that slower-cycling organic matter was negatively related to yield points to a need to revise the view that stabilization of organic matter positively impacts food security. Our results support a new paradigm that different soil organic matter fractions are controlled by different mechanisms, potentially leading to different relationships with management outcomes, like crop yield. Effectively managing soils for sustainable agriculture requires quantifying the effects of specific organic matter fractions on these outcomes. © 2016 by the Ecological Society of America.
An efficient CU partition algorithm for HEVC based on improved Sobel operator
NASA Astrophysics Data System (ADS)
Sun, Xuebin; Chen, Xiaodong; Xu, Yong; Sun, Gang; Yang, Yunsheng
2018-04-01
As the latest video coding standard, High Efficiency Video Coding (HEVC) achieves over 50% bit rate reduction with similar video quality compared with the previous standard H.264/AVC. However, the higher compression efficiency is attained at the cost of a significantly increased computational load. In order to reduce this complexity, this paper proposes a fast coding unit (CU) partition technique to speed up the process. To detect the edge features of each CU, a more accurate improved Sobel filtering is developed and performed. By analyzing the textural features of a CU, an early CU splitting termination is proposed to decide whether a CU should be decomposed into four lower-dimension CUs or not. Compared with the reference software HM16.7, experimental results indicate the proposed algorithm can reduce encoding time by 44.09% on average, with a negligible bit rate increase of 0.24% and quality losses below 0.03 dB. In addition, the proposed algorithm achieves a better trade-off between complexity and rate-distortion than other proposed works.
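A plain 3x3 Sobel gradient magnitude, the starting point for the paper's improved operator, can be sketched as follows; the step-edge test image is an invented illustration of how edge strength would drive the split/no-split decision for a CU:

```python
import numpy as np

def sobel_magnitude(img):
    """Gradient magnitude with the standard 3x3 Sobel kernels (valid region
    only; the paper's 'improved' operator modifies this basic scheme)."""
    kx = np.array([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
    ky = kx.T
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            out[i, j] = np.hypot((patch * kx).sum(), (patch * ky).sum())
    return out

# a vertical step edge: strong response near the edge, zero in flat regions
img = np.zeros((8, 8))
img[:, 4:] = 1.0
g = sobel_magnitude(img)
```

A CU whose gradient map is uniformly weak (flat texture) is a candidate for early split termination, while strong, localized edges suggest further decomposition into sub-CUs.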
Modularity and the spread of perturbations in complex dynamical systems
NASA Astrophysics Data System (ADS)
Kolchinsky, Artemy; Gates, Alexander J.; Rocha, Luis M.
2015-12-01
We propose a method to decompose dynamical systems based on the idea that modules constrain the spread of perturbations. We find partitions of system variables that maximize "perturbation modularity," defined as the autocovariance of coarse-grained perturbed trajectories. The measure effectively separates the fast intramodular from the slow intermodular dynamics of perturbation spreading (in this respect, it is a generalization of the "Markov stability" method of network community detection). Our approach captures variation of modular organization across different system states, time scales, and in response to different kinds of perturbations: aspects of modularity which are all relevant to real-world dynamical systems. It offers a principled alternative to detecting communities in networks of statistical dependencies between system variables (e.g., "relevance networks" or "functional networks"). Using coupled logistic maps, we demonstrate that the method uncovers hierarchical modular organization planted in a system's coupling matrix. Additionally, in homogeneously coupled map lattices, it identifies the presence of self-organized modularity that depends on the initial state, dynamical parameters, and type of perturbations. Our approach offers a powerful tool for exploring the modular organization of complex dynamical systems.
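The intuition that perturbations spread fast within modules and slowly between them can be seen in a toy system of four coupled logistic maps with two planted modules; all coupling strengths and parameters below are invented for illustration and are not from the paper:

```python
import numpy as np

r, eps_in, eps_out = 3.9, 0.3, 0.01           # invented coupling strengths
W = np.array([[0.0, eps_in, eps_out, 0.0],    # units {0,1} and {2,3} form
              [eps_in, 0.0, 0.0, eps_out],    # the two planted modules
              [eps_out, 0.0, 0.0, eps_in],
              [0.0, eps_out, eps_in, 0.0]])

def step(x):
    f = r * x * (1.0 - x)                     # local logistic dynamics
    return (1.0 - W.sum(axis=1)) * f + W @ f  # diffusive coupling

x = np.full(4, 0.4)
for _ in range(100):                          # relax onto the attractor
    x = step(x)
y = x.copy()
y[0] += 1e-6                                  # perturb unit 0 only
for _ in range(3):                            # short, coarse-grained horizon
    x, y = step(x), step(y)
d = np.abs(y - x)                             # perturbation size per unit
```

After a few steps the perturbation has reached unit 0's module partner (unit 1) far more strongly than the other module (units 2 and 3); perturbation modularity formalizes exactly this asymmetry via the autocovariance of coarse-grained perturbed trajectories.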
Leak detection in gas pipeline by acoustic and signal processing - A review
NASA Astrophysics Data System (ADS)
Adnan, N. F.; Ghazali, M. F.; Amin, M. M.; Hamat, A. M. A.
2015-12-01
The pipeline system is the most important part of media transport for delivering fluid to another station. Weak maintenance and poor safety contribute to financial losses in terms of wasted fluid and environmental impacts. There are many classifications of leak detection techniques, each with its specific methods and applications. This paper discusses gas leak detection in pipeline systems using the acoustic method. Wave propagation in the pipeline is a key parameter of the acoustic method: when a leak occurs, the pressure balance of the pipe is disturbed and acoustic waves are generated by friction at the pipe wall. Signal processing is used to decompose the raw signal and represent it in the time-frequency domain. Findings based on the acoustic method can be used for comparative study in the future. Acoustic signals combined with the Hilbert-Huang transform (HHT) appear to be the best method to detect leaks in gas pipelines. More experiments and simulations need to be carried out to achieve fast detection of leaks and estimation of their location.
Direct-Solve Image-Based Wavefront Sensing
NASA Technical Reports Server (NTRS)
Lyon, Richard G.
2009-01-01
A method of wavefront sensing (more precisely characterized as a method of determining the deviation of a wavefront from a nominal figure) has been invented as an improved means of assessing the performance of an optical system as affected by such imperfections as misalignments, design errors, and fabrication errors. The method is implemented by software running on a single-processor computer that is connected, via a suitable interface, to the image sensor (typically, a charge-coupled device) in the system under test. The software collects a digitized single image from the image sensor. The image is displayed on a computer monitor. The software directly solves for the wavefront in a fraction of a second, and a picture of the wavefront is displayed. The solution process involves, among other things, fast Fourier transforms. It has been reported that some measure of the wavefront is decomposed into modes of the optical system under test, but not whether this decomposition is postprocessing of the solution or part of the solution process.
NASA Astrophysics Data System (ADS)
Shetti, Nagaraj P.; Hegde, Rajesh N.; Nandibewoor, Sharanappa T.
2009-07-01
Oxidation of the penicillin derivative ampicillin (AMP) by diperiodatocuprate(III) (DPC) in alkaline medium at a constant ionic strength of 0.01 mol dm⁻³ was studied spectrophotometrically. The reaction between DPC and ampicillin in alkaline medium exhibits 1:4 stoichiometry (ampicillin:DPC). Intervention of free radicals was observed in the reaction. Based on the observed orders and experimental evidence, a mechanism involving the protonated form of DPC as the reactive oxidant species has been proposed. The oxidation in alkaline medium proceeds via a DPC-AMP complex, which decomposes slowly in a rate-determining step to yield phenylglycine (PG) and a free radical species of 6-aminopenicillanic acid (6-APA), followed by other fast steps that give the products. The two major products were characterized by IR, NMR, LC-MS and spot tests. The reaction constants involved in the different steps of the mechanism were calculated. The activation parameters with respect to the slow step of the mechanism were computed and discussed, and thermodynamic quantities were also determined.
Adsorption mechanism of SF6 decomposed species on pyridine-like PtN3 embedded CNT: A DFT study
NASA Astrophysics Data System (ADS)
Cui, Hao; Zhang, Xiaoxing; Chen, Dachang; Tang, Ju
2018-07-01
Metal-Nx-embedded CNTs have attracted considerable attention in the field of gas interaction due to their strong catalytic behavior, which offers promising prospects for gas adsorption and sensing. Detecting SF6 decomposed species in certain devices is essential to guarantee their safe operation. In this work, we performed DFT calculations to simulate the adsorption of three SF6 decomposed gases (SO2, SOF2 and SO2F2) onto the PtN3-embedded CNT surface, in order to shed light on its adsorption ability and sensing mechanism. The results suggest that the CNT embedded with a PtN3 center interacts strongly with these gas molecules, leading to high hybridization between the Pt dopant and the active atoms of the gas molecules. These interactions can be regarded as chemisorption, given the remarkable Ead and QT, and they result in dramatic deformations of the electronic structure of PtN3-CNT near the Fermi level. Furthermore, based on frontier molecular orbital theory, the electronic redistribution causes an increase in the conductivity of the proposed material in all three systems. Our calculations suggest a novel sensing material that could potentially be employed in the detection of SF6 decomposed components.
NASA Astrophysics Data System (ADS)
Rodigast, Maria; Mutzel, Anke; Herrmann, Hartmut
2017-03-01
Methylglyoxal forms oligomeric compounds in the atmospheric aqueous particle phase, which could contribute significantly to the formation of aqueous secondary organic aerosol (aqSOA). Thus far, no suitable method for the quantification of methylglyoxal oligomers is available, despite the great effort spent on structure elucidation. In the present study a simplified method was developed to quantify heat-decomposable methylglyoxal oligomers as a sum parameter. The method is based on the thermal decomposition of oligomers into methylglyoxal monomers. The formed methylglyoxal monomers were detected using PFBHA (o-(2,3,4,5,6-pentafluorobenzyl)hydroxylamine hydrochloride) derivatisation and gas chromatography-mass spectrometry (GC/MS) analysis. The method development focused on the heating time (varied between 15 and 48 h), the pH during the heating process (pH = 1-7), and the heating temperature (50, 100 °C). The optimised values of these method parameters are presented. The developed method was applied to quantify heat-decomposable methylglyoxal oligomers formed during the OH-radical oxidation of 1,3,5-trimethylbenzene (TMB) in the Leipzig aerosol chamber (LEipziger AerosolKammer, LEAK). Oligomer formation was investigated as a function of seed particle acidity and relative humidity. Heat-decomposable methylglyoxal oligomers made up as much as 8% of the produced organic particle mass, highlighting the importance for SOA formation of oligomers formed solely from methylglyoxal. Overall, the present study provides a new and suitable method for the quantification of heat-decomposable methylglyoxal oligomers in the aqueous particle phase.
Liu, Wu-Jun; Tian, Ke; Jiang, Hong; Zhang, Xue-Song; Ding, Hong-Sheng; Yu, Han-Qing
2012-07-17
Heavy-metal-polluted biomass derived from phytoremediation or biosorption is widespread and difficult to dispose of. In this work, the simultaneous conversion of waste woody biomass into bio-oil and recovery of Cu in a fast pyrolysis reactor were investigated. The results show that Cu can effectively catalyze the thermo-decomposition of biomass. Both the yield and the higher heating value (HHV) of the bio-oil derived from Cu-polluted fir sawdust biomass (Cu-FSD) are significantly improved compared with those of the bio-oil derived from fir sawdust (FSD). The UV-vis and (1)H NMR spectra of the bio-oil indicate that pyrolytic lignin is further decomposed into small-molecule aromatic compounds by the catalysis of Cu, in agreement with the GC-MS results showing that the fractions of C7-C10 compounds in the bio-oil significantly increase. Inductively coupled plasma-atomic emission spectrometry, X-ray diffraction, and X-ray photoelectron spectroscopy analyses of the migration and transformation of Cu during fast pyrolysis show that more than 91% of the total Cu in the Cu-FSD is enriched in the char in the form of zerovalent Cu with a face-centered cubic crystalline phase. This study gives insight into the catalytic fast pyrolysis of heavy-metal-polluted biomass and demonstrates the technical feasibility of an eco-friendly process for its disposal.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dong, X; Petrongolo, M; Wang, T
Purpose: A general problem of dual-energy CT (DECT) is that the decomposition is sensitive to noise in the two sets of dual-energy projection data, resulting in severely degraded quality of the decomposed images. We have previously proposed an iterative denoising method for DECT. Using a linear decomposition function, the method does not gain the full benefits of DECT on beam-hardening correction. In this work, we expand the framework of our iterative method to include non-linear decomposition models for noise suppression in DECT. Methods: We first obtain decomposed projections, which are free of beam-hardening artifacts, using a lookup table pre-measured on a calibration phantom. First-pass material images with high noise are reconstructed from the decomposed projections using standard filtered-backprojection reconstruction. Noise on the decomposed images is then suppressed by an iterative method, which is formulated in the form of least-square estimation with smoothness regularization. Based on the design principles of a best linear unbiased estimator, we include the inverse of the estimated variance-covariance matrix of the decomposed images as the penalty weight in the least-square term. Analytical formulae are derived to compute the variance-covariance matrix from the measured decomposition lookup table. Results: We have evaluated the proposed method via phantom studies. Using non-linear decomposition, our method effectively suppresses the streaking artifacts of beam-hardening and obtains more uniform images than our previous approach based on a linear model. The proposed method reduces the average noise standard deviation of two basis materials by one order of magnitude without sacrificing spatial resolution. Conclusion: We propose a general framework of iterative denoising for material decomposition in DECT. Preliminary phantom studies have shown that the proposed method improves image uniformity and reduces the noise level without resolution loss.
In the future, we will perform more phantom studies to further validate the performance of the proposed method. This work is supported by a Varian MRA grant.
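The denoising step described in the Methods can be illustrated with a toy penalized weighted least-squares estimator in one dimension. This is a sketch of the general idea only (inverse-variance weighting plus a smoothness penalty), not the authors' implementation; the function name and test signal are invented:

```python
import numpy as np

def penalized_wls_denoise(y, var, beta=1.0):
    """Denoise a 1-D signal by least-square estimation with smoothness
    regularization: minimize (x - y)^T W (x - y) + beta * ||D x||^2,
    where W = diag(1/var) is the inverse-variance penalty weight (cf. the
    BLUE design principle in the abstract) and D is a first-difference
    operator. The minimizer solves (W + beta D^T D) x = W y."""
    n = len(y)
    W = np.diag(1.0 / np.asarray(var, dtype=float))
    D = np.diff(np.eye(n), axis=0)          # (n-1) x n first differences
    A = W + beta * (D.T @ D)
    return np.linalg.solve(A, W @ y)

# Noisy, roughly constant signal: smoothing shrinks the fluctuations
y = np.array([1.0, 1.2, 0.8, 1.1, 0.9])
x = penalized_wls_denoise(y, var=np.full(5, 0.04), beta=10.0)
print(float(np.std(x)) < float(np.std(y)))  # True: estimate is smoother
```

The inverse-variance weight means samples known to be noisier are trusted less, which is the role the estimated variance-covariance matrix plays in the paper's 2-D material-image setting.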
Correlation between the morphogenetic types of litter and their properties in bog birch forests
NASA Astrophysics Data System (ADS)
Efremova, T. T.; Efremov, S. P.; Avrova, A. F.
2010-08-01
A formalized arrangement of the morphogenetic types of litter according to their physicochemical parameters provided a significant grouping into three genetic associations. The litter group (highly decomposed + moderately decomposed) is confined to the tall-grass group of bog birch forests. The rhizomatous (roughly decomposed) litter is formed in the sedge-reed grass bog birch forests. The litter group (peaty + peatified + peat) is associated with the bog-herbaceous-moss group of forest types. The genetic associations of the litters (a) reliably characterize the edaphic conditions of bog birch forests and (b) correspond to the formation of peat of certain ecological groups. We found the acid-base parameters, the exchangeable cations (Ca2+ + Mg2+), and the total potential acidity to be highly informative, differentiating the genetic associations of litter with practically 100% probability. The expediency of studying litters under groups of forest types, rather than under separate types of bog birch forests, was demonstrated.
Why does Kevlar decompose, while Nomex does not, when treated with aqueous chlorine solutions?
Akdag, Akin; Kocer, Hasan B; Worley, S D; Broughton, R M; Webb, T R; Bray, Travis H
2007-05-24
Kevlar and Nomex are high-performance polymers which have wide varieties of applications in daily life. Recently, they have been proposed to be biocidal materials when reacted with household bleach (sodium hypochlorite solution) because they contain amide moieties which can be chlorinated to generate biocidal N-halamine functional groups. Although Nomex can be chlorinated without any significant decomposition, Kevlar decomposes under the same chlorination conditions. In this study, two mimics for each of the polymers were synthesized to simulate the carboxylate and diaminophenylene components of the materials. It was found that the p-diaminophenylene component of the Kevlar mimic is oxidized to a quinone-type structure upon treatment with hypochlorous acid, which then decomposes. However, such a mechanism for the Nomex mimic is not possible. In this paper, based upon these observations, a plausible answer will be provided to the title question.
Removal of methylmercury and tributyltin (TBT) using marine microorganisms.
Lee, Seong Eon; Chung, Jin Wook; Won, Ho Shik; Lee, Dong Sup; Lee, Yong-Woo
2012-02-01
Two marine species of bacteria were isolated that are capable of degrading organometallic contaminants: Pseudomonas balearica, which decomposes methylmercury; and Shewanella putrefaciens, which decomposes tributyltin. P. balearica decomposed 97% of methylmercury (20.0 μg/L) into inorganic mercury after 3 h, while S. putrefaciens decomposed 88% of tributyltin (55.3 μg Sn/L) in real wastewater after 36 h. These data indicate that the two bacteria efficiently decomposed the targeted substances and may be applied to real wastewater.
Decomposed bodies--still an unrewarding autopsy?
Ambade, Vipul Namdeorao; Keoliya, Ajay Narmadaprasad; Deokar, Ravindra Baliram; Dixit, Pradip Gangadhar
2011-04-01
One of the classic mistakes in forensic pathology is to regard the autopsy of a decomposed body as unrewarding. The present study was undertaken with a view to debunking this myth and determining the characteristic pattern in decomposed bodies brought for medicolegal autopsy. Of a total of 4997 medicolegal deaths reported at an Apex Medical Centre, Yeotmal, a rural district of Maharashtra, over the seven-year study period, only 180 cases were decomposed, representing 3.6% of the total medicolegal autopsies, at a rate of 1.5 decomposed bodies/100,000 population per year. Male (79.4%) predominance was seen in decomposed bodies, with a male:female ratio of 3.9:1. Most of the victims were between the ages of 31 and 60 years, with a peak at 31-40 years (26.7%) followed by 41-50 years (19.4%). Age above 60 years was found in 8.6% of cases. Married individuals (64.4%) outnumbered unmarried ones. Most of the decomposed bodies were complete (83.9%) and identified (75%), but when the body was incomplete/mutilated or skeletonised, 57.7% of the deceased remained unidentified. The cause and manner of death were ascertained in 85.6% and 81.1% of cases, respectively. Drowning (35.6%) was the commonest cause of death in decomposed bodies, with suicide (52.8%) the commonest manner of death. Decomposed bodies were commonly recovered from open places (43.9%), followed by water sources (43.3%) and enclosed places (12.2%). Most of the decomposed bodies were retrieved from wells (49 cases), followed by barren land (27 cases) and forest (17 cases). In 83.8% of cases the body was recovered within 72 h; only in 16.2% of cases was the time since death more than 72 h, these mostly recovered from barren land, forest, and rivers. Most of the decomposed bodies were found in the summer season (42.8%), with a peak in the month of May.
Despite technical difficulties in handling the body and artefactual alteration of the tissue, the decomposed body may still reveal cause and manner of death in significant number of cases. Copyright © 2011 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
Catalyst for Decomposition of Nitrogen Oxides
NASA Technical Reports Server (NTRS)
Schryer, David R. (Inventor); Akyurtlu, Ates (Inventor); Jordan, Jeffrey D. (Inventor); Akyurtlu, Jale (Inventor)
2015-01-01
This invention relates generally to a platinized tin oxide-based catalyst. It relates particularly to an improved platinized tin oxide-based catalyst able to decompose nitric oxide to nitrogen and oxygen without the necessity of a reducing gas.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Takamoto, Makoto; Lazarian, Alexandre, E-mail: mtakamoto@eps.s.u-tokyo.ac.jp, E-mail: alazarian@facstaff.wisc.edu
2016-11-10
In this Letter, we report compressible mode effects on relativistic magnetohydrodynamic (RMHD) turbulence in Poynting-dominated plasmas using three-dimensional numerical simulations. We decomposed fluctuations in the turbulence into 3 MHD modes (fast, slow, and Alfvén) following the procedure of mode decomposition in Cho and Lazarian, and analyzed their energy spectra and structure functions separately. We also analyzed the ratio of compressible mode to Alfvén mode energy with respect to its Mach number. We found the ratio of compressible mode increases not only with the Alfvén Mach number, but also with the background magnetization, which indicates a strong coupling between the fast and Alfvén modes. It also signifies the appearance of a new regime of RMHD turbulence in Poynting-dominated plasmas where the fast and Alfvén modes are strongly coupled and, unlike the non-relativistic MHD regime, cannot be treated separately. This finding will affect particle acceleration efficiency obtained by assuming Alfvénic critical-balance turbulence and can change the resulting photon spectra emitted by non-thermal electrons.
Interplay between morphology and frequency in lexical access: The case of the base frequency effect
Vannest, Jennifer; Newport, Elissa L.; Newman, Aaron J.; Bavelier, Daphne
2011-01-01
A major issue in lexical processing concerns storage and access of lexical items. Here we make use of the base frequency effect to examine this. Specifically, reaction time to morphologically complex words (words made up of base and suffix, e.g., agree+able) typically reflects frequency of the base element (i.e., total frequency of all words in which agree appears) rather than surface word frequency (i.e., frequency of agreeable itself). We term these complex words decomposable. However, a class of words termed whole-word do not show such sensitivity to base frequency (e.g., serenity). Using an event-related MRI design, we exploited the fact that processing low-frequency words increases BOLD activity relative to high frequency ones, and examined effects of base frequency on brain activity for decomposable and whole-word items. Morphologically complex words, half high and half low base frequency, were compared to matched high and low frequency simple monomorphemic words using a lexical decision task. Morphologically complex words increased activation in left inferior frontal and left superior temporal cortices versus simple words. The only area to mirror the behavioral distinction between decomposable and whole-word types was the thalamus. Surprisingly, most frequency-sensitive areas failed to show base frequency effects. This variety of responses to frequency and word type across brain areas supports an integrative view of multiple variables during lexical access, rather than a dichotomy between memory-based access and on-line computation. Lexical access appears best captured as interplay of several neural processes with different sensitivities to various linguistic factors including frequency and morphological complexity. PMID:21167136
On-line range images registration with GPGPU
NASA Astrophysics Data System (ADS)
Będkowski, J.; Naruniec, J.
2013-03-01
This paper concerns the implementation of algorithms for two important aspects of modern 3D data processing: data registration and segmentation. The solution proposed for the first topic is based on 3D space decomposition, while the latter is based on image processing and local neighbourhood search. Data processing is implemented using NVIDIA compute unified device architecture (NVIDIA CUDA) parallel computation. The result of the segmentation is a coloured map where different colours correspond to different objects, such as walls, floor and stairs. The research is related to the problem of collecting 3D data with an RGB-D camera mounted on a rotated head, to be used in mobile robot applications. The data registration algorithm is designed for on-line processing. The iterative closest point (ICP) approach is chosen as the registration method. Computations are based on a parallel fast nearest neighbour search. This procedure decomposes 3D space into cubic buckets and, therefore, the time of the matching is deterministic. The first data segmentation technique uses accelerometers integrated with the RGB-D sensor to obtain rotation compensation, and an image processing method for defining prerequisites of the known categories. The second technique uses the adapted nearest neighbour search procedure to obtain normal vectors for each range point.
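The cubic-bucket nearest-neighbour search described above can be sketched serially as follows (the paper's version runs in parallel on CUDA; the helper names here are invented, and the sketch assumes the bucket edge is at least the maximum matching distance, so scanning the query's bucket and its 26 neighbours suffices):

```python
from collections import defaultdict
import math

def build_buckets(points, cell):
    """Hash 3-D points into cubic buckets of edge length `cell`
    (the space decomposition underlying the deterministic matching time)."""
    grid = defaultdict(list)
    for p in points:
        key = tuple(int(math.floor(c / cell)) for c in p)
        grid[key].append(p)
    return grid

def nearest(grid, q, cell):
    """Search only the query's bucket and its 26 neighbours, so the
    candidate set (and hence the per-query cost) is bounded."""
    kx, ky, kz = (int(math.floor(c / cell)) for c in q)
    best, best_d2 = None, float("inf")
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dz in (-1, 0, 1):
                for p in grid.get((kx + dx, ky + dy, kz + dz), ()):
                    d2 = sum((a - b) ** 2 for a, b in zip(p, q))
                    if d2 < best_d2:
                        best, best_d2 = p, d2
    return best

pts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (5.0, 5.0, 5.0)]
grid = build_buckets(pts, cell=1.0)
print(nearest(grid, (0.9, 0.1, 0.0), cell=1.0))  # (1.0, 0.0, 0.0)
```

In an ICP loop, this lookup replaces the brute-force correspondence search; because each query touches at most 27 buckets, the matching time per point is bounded regardless of cloud size.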
NASA Astrophysics Data System (ADS)
Rai, Prashant; Sargsyan, Khachik; Najm, Habib; Hermes, Matthew R.; Hirata, So
2017-09-01
A new method is proposed for a fast evaluation of high-dimensional integrals of potential energy surfaces (PES) that arise in many areas of quantum dynamics. It decomposes a PES into a canonical low-rank tensor format, reducing its integral into a relatively short sum of products of low-dimensional integrals. The decomposition is achieved by the alternating least squares (ALS) algorithm, requiring only a small number of single-point energy evaluations. Therefore, it eradicates a force-constant evaluation as the hotspot of many quantum dynamics simulations and also possibly lifts the curse of dimensionality. This general method is applied to the anharmonic vibrational zero-point and transition energy calculations of molecules using the second-order diagrammatic vibrational many-body Green's function (XVH2) theory with a harmonic-approximation reference. In this application, high dimensional PES and Green's functions are both subjected to a low-rank decomposition. Evaluating the molecular integrals over a low-rank PES and Green's functions as sums of low-dimensional integrals using the Gauss-Hermite quadrature, this canonical-tensor-decomposition-based XVH2 (CT-XVH2) achieves an accuracy of 0.1 cm-1 or higher and nearly an order of magnitude speedup as compared with the original algorithm using force constants for water and formaldehyde.
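The core idea, replacing a high-dimensional grid sum with products of one-dimensional sums over a canonical low-rank representation fitted by alternating least squares, can be sketched for the rank-1 case (a toy analogue only, not the CT-XVH2 code; all names are illustrative):

```python
import numpy as np

def cp_rank1_als(T, iters=50):
    """Fit a rank-1 canonical (CP) model T ≈ a ⊗ b ⊗ c by alternating
    least squares: each factor update is the closed-form LS solution
    with the other two factors held fixed."""
    a = np.ones(T.shape[0]); b = np.ones(T.shape[1]); c = np.ones(T.shape[2])
    for _ in range(iters):
        a = np.einsum('ijk,j,k->i', T, b, c) / ((b @ b) * (c @ c))
        b = np.einsum('ijk,i,k->j', T, a, c) / ((a @ a) * (c @ c))
        c = np.einsum('ijk,i,j->k', T, a, b) / ((a @ a) * (b @ b))
    return a, b, c

# A separable "surface" is recovered exactly by the rank-1 model, so its
# full-grid sum collapses into a product of one-dimensional sums -- the
# mechanism that shortens the high-dimensional integrals in the abstract.
x, y, z = np.array([1., 2.]), np.array([3., 1.]), np.array([2., 5.])
T = np.einsum('i,j,k->ijk', x, y, z)
a, b, c = cp_rank1_als(T)
approx = np.einsum('i,j,k->ijk', a, b, c)
print(np.allclose(T, approx))                             # True
print(np.isclose(T.sum(), a.sum() * b.sum() * c.sum()))   # True
```

For a genuine PES the quadrature weights multiply each one-dimensional sum, and the rank is generally greater than one, but the cost reduction has the same origin: an N^d sum becomes a sum of rank terms, each a product of d one-dimensional sums.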
NASA Astrophysics Data System (ADS)
Audette, M. A.; Hertel, I.; Burgert, O.; Strauss, G.
This paper presents on-going work on a method for determining which subvolumes of a patient-specific tissue map, extracted from CT data of the head, are relevant to simulating endoscopic sinus surgery of that individual, and for decomposing these relevant tissues into triangles and tetrahedra whose mesh size is well controlled. The overall goal is to limit the complexity of the real-time biomechanical interaction while ensuring the clinical relevance of the simulation. Relevant tissues are determined as the union of the pathology present in the patient, of critical tissues deemed to be near the intended surgical path or pathology, and of bone and soft tissue near the intended path, pathology or critical tissues. The processing of tissues, prior to meshing, is based on the Fast Marching method applied under various guises, in a conditional manner that is related to tissue classes. The meshing is based on an adaptation of a meshing method of ours, which combines the Marching Tetrahedra method and the discrete Simplex mesh surface model to produce a topologically faithful surface mesh with well controlled edge and face size as a first stage, and Almost-regular Tetrahedralization of the same prescribed mesh size as a last stage.
NASA Astrophysics Data System (ADS)
He, Zhi; Liu, Lin
2016-11-01
Empirical mode decomposition (EMD) and its variants have recently been applied for hyperspectral image (HSI) classification due to their ability to extract useful features from the original HSI. However, it remains a challenging task to effectively exploit the spectral-spatial information by the traditional vector or image-based methods. In this paper, a three-dimensional (3D) extension of EMD (3D-EMD) is proposed to naturally treat the HSI as a cube and decompose the HSI into varying oscillations (i.e. 3D intrinsic mode functions (3D-IMFs)). To achieve fast 3D-EMD implementation, 3D Delaunay triangulation (3D-DT) is utilized to determine the distances of extrema, while separable filters are adopted to generate the envelopes. Taking the extracted 3D-IMFs as features of different tasks, robust multitask learning (RMTL) is further proposed for HSI classification. In RMTL, pairs of low-rank and sparse structures are formulated by trace-norm and l1,2 -norm to capture task relatedness and specificity, respectively. Moreover, the optimization problems of RMTL can be efficiently solved by the inexact augmented Lagrangian method (IALM). Compared with several state-of-the-art feature extraction and classification methods, the experimental results conducted on three benchmark data sets demonstrate the superiority of the proposed methods.
NASA Technical Reports Server (NTRS)
Lohner, Kevin A. (Inventor); Mays, Jeffrey A. (Inventor); Sevener, Kathleen M. (Inventor)
2004-01-01
A method for designing and assembling a high performance catalyst bed gas generator for use in decomposing propellants, particularly hydrogen peroxide propellants, for use in target, space, and on-orbit propulsion systems and low-emission terrestrial power and gas generation. The gas generator utilizes a sectioned catalyst bed system, and incorporates a robust, high temperature mixed metal oxide catalyst. The gas generator requires no special preheat apparatus or special sequencing to meet start-up requirements, enabling a fast overall response time. The high performance catalyst bed gas generator system has consistently demonstrated high decomposition efficiency, extremely low decomposition roughness, and long operating life on multiple test articles.
Functional renormalization group and Kohn-Sham scheme in density functional theory
NASA Astrophysics Data System (ADS)
Liang, Haozhao; Niu, Yifei; Hatsuda, Tetsuo
2018-04-01
Deriving an accurate energy density functional is one of the central problems in condensed matter physics, nuclear physics, and quantum chemistry. We propose a novel method to deduce the energy density functional by combining the idea of the functional renormalization group with the Kohn-Sham scheme in density functional theory. The key idea is to solve the renormalization group flow for the effective action decomposed into a mean-field part and a correlation part. We also propose a simple practical method to quantify the uncertainty associated with the truncation of the correlation part. Taking the φ4 theory in zero dimensions as a benchmark, we demonstrate that our method shows extremely fast convergence to the exact result even in the strongly coupled regime.
Enabling fast, stable and accurate peridynamic computations using multi-time-step integration
Lindsay, P.; Parks, M. L.; Prakash, A.
2016-04-13
Peridynamics is a nonlocal extension of classical continuum mechanics that is well-suited for solving problems with discontinuities such as cracks. This paper extends the peridynamic formulation to decompose a problem domain into a number of smaller overlapping subdomains and to enable the use of different time steps in different subdomains. This approach allows regions of interest to be isolated and solved at a small time step for increased accuracy while the rest of the problem domain can be solved at a larger time step for greater computational efficiency. Lastly, performance of the proposed method in terms of stability, accuracy, and computational cost is examined and several numerical examples are presented to corroborate the findings.
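The multi-time-step idea, a refined subdomain subcycled at a smaller step inside each coarse step, can be sketched with a toy explicit integrator (the peridynamic force evaluation and the interface coupling between subdomains are omitted for brevity; all names are illustrative):

```python
def subcycled_integrate(x_fine, x_coarse, v_fine, v_coarse,
                        f_fine, f_coarse, dt_coarse, ratio, steps):
    """Multi-time-step sketch: the 'region of interest' advances with a
    small step dt_coarse/ratio while the bulk subdomain advances with
    dt_coarse. A semi-implicit (symplectic) Euler update is used so the
    toy system stays stable."""
    dt_fine = dt_coarse / ratio
    for _ in range(steps):
        # one coarse step for the bulk subdomain
        v_coarse += dt_coarse * f_coarse(x_coarse)
        x_coarse += dt_coarse * v_coarse
        # `ratio` substeps for the refined subdomain
        for _ in range(ratio):
            v_fine += dt_fine * f_fine(x_fine)
            x_fine += dt_fine * v_fine
    return x_fine, x_coarse

# Toy forces: both subdomains are unit linear springs; the subcycled
# region takes 10 substeps per coarse step.
xf, xc = subcycled_integrate(1.0, 1.0, 0.0, 0.0,
                             lambda x: -x, lambda x: -x,
                             dt_coarse=0.1, ratio=10, steps=100)
print(abs(xf) < 1.5 and abs(xc) < 1.5)  # True: both stay bounded
```

The payoff mirrors the paper's: force evaluations concentrate in the small refined region, while the large bulk region is advanced far less often.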
Untangling Galaxy Components - The Angular Momentum Parameter
NASA Astrophysics Data System (ADS)
Tabor, Martha; Merrifield, Michael; Aragon-Salamanca, Alfonso
2017-06-01
We have developed a new technique to decompose Integral Field spectral data cubes into separate bulge and disk components, allowing us to study the kinematic and stellar population properties of the individual components and how they vary with position. We present here the application of this method to a sample of fast rotator early type galaxies from the MaNGA integral field survey, and demonstrate how it can be used to explore key properties of the individual components. By extracting ages, metallicities and the angular momentum parameter lambda of the bulges and disks, we show how this method can give us new insights into the underlying structure of the galaxies and discuss what this can tell us about their evolution history.
A wavelet based approach to measure and manage contagion at different time scales
NASA Astrophysics Data System (ADS)
Berger, Theo
2015-10-01
We decompose financial return series of US stocks into different time scales with respect to different market regimes. First, we examine the dependence structure of the decomposed financial return series and analyze the impact of the current financial crisis on contagion and changing interdependencies, as well as upper and lower tail dependence, at different time scales. Second, we demonstrate to what extent the information in different time scales can be used in the context of portfolio management. As a result, minimizing the variance of short-run noise outperforms a portfolio that minimizes the variance of the return series.
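A minimal sketch of the scale-decomposition step, using a plain Haar wavelet transform (the paper's wavelet choice and its dependence modelling are not reproduced here; the return series is simulated):

```python
import numpy as np

def haar_decompose(x, levels):
    """Decompose a return series into detail series at successively
    longer time scales plus a coarse trend (plain Haar DWT)."""
    details, approx = [], np.asarray(x, dtype=float)
    for _ in range(levels):
        even, odd = approx[0::2], approx[1::2]
        details.append((even - odd) / np.sqrt(2))   # short-scale fluctuation
        approx = (even + odd) / np.sqrt(2)          # longer-scale trend
    return details, approx

rng = np.random.default_rng(0)
returns = rng.normal(0.0, 0.01, size=64)            # simulated daily returns
details, approx = haar_decompose(returns, levels=3)
# The transform is orthogonal, so total "energy" (squared variation) is
# preserved across scales -- variance can be attributed scale by scale.
energy = sum(float(d @ d) for d in details) + float(approx @ approx)
print(np.isclose(energy, float(returns @ returns)))  # True
```

Because the transform partitions the variance across scales, one can, as in the abstract, minimize only the variance of the short-run (fine-scale) components rather than that of the raw return series.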
Bimolecular Coupling as a Vector for Decomposition of Fast-Initiating Olefin Metathesis Catalysts.
Bailey, Gwendolyn A; Foscato, Marco; Higman, Carolyn S; Day, Craig S; Jensen, Vidar R; Fogg, Deryn E
2018-06-06
The correlation between rapid initiation and rapid decomposition in olefin metathesis is probed for a series of fast-initiating, phosphine-free Ru catalysts: the Hoveyda catalyst HII, RuCl2(L)(═CHC6H4-o-OiPr); the Grela catalyst nG (a derivative of HII with a nitro group para to OiPr); the Piers catalyst PII, [RuCl2(L)(═CHPCy3)]OTf; the third-generation Grubbs catalyst GIII, RuCl2(L)(py)2(═CHPh); and dianiline catalyst DA, RuCl2(L)(o-dianiline)(═CHPh), in all of which L = H2IMes = N,N'-bis(mesityl)imidazolin-2-ylidene. Prior studies of ethylene metathesis have established that various Ru metathesis catalysts can decompose by β-elimination of propene from the metallacyclobutane intermediate RuCl2(H2IMes)(κ2-C3H6), Ru-2. The present work demonstrates that in metathesis of terminal olefins, β-elimination yields only ca. 25-40% propenes for HII, nG, PII, or DA, and none for GIII. The discrepancy is attributed to competing decomposition via bimolecular coupling of the methylidene intermediate RuCl2(H2IMes)(═CH2), Ru-1. Direct evidence for methylidene coupling is presented via the controlled decomposition of transiently stabilized adducts of Ru-1, RuCl2(H2IMes)Ln(═CH2) (Ln = pyn'; n' = 1, 2, or o-dianiline). These adducts were synthesized by treating in situ-generated metallacyclobutane Ru-2 with pyridine or o-dianiline, and were isolated by precipitation at low temperature (-116 or -78 °C, respectively). On warming, both undergo methylidene coupling, liberating ethylene and forming RuCl2(H2IMes)Ln. A mechanism is proposed based on kinetic studies and molecular-level computational analysis. Bimolecular coupling emerges as an important contributor to the instability of Ru-1, and a potentially major pathway for decomposition of fast-initiating, phosphine-free metathesis catalysts.
Integrating microbial physiology and enzyme traits in the quality model
NASA Astrophysics Data System (ADS)
Sainte-Marie, Julien; Barrandon, Matthieu; Martin, Francis; Saint-André, Laurent; Derrien, Delphine
2017-04-01
Microbe activity plays an indisputable role in soil carbon storage, and there have been many calls to integrate microbial ecology into soil carbon (C) models. With regard to this challenge, a few trait-based microbial models of C dynamics have emerged during the past decade. They parameterize specific traits related to decomposer physiology (substrate use efficiency, growth and mortality rates...) and enzyme properties (enzyme production rate, catalytic properties of enzymes...). But these models are built on the premise that organic matter (OM) can be represented as a single entity or divided into a few pools, while organic matter exists as a continuum of many different compounds, spanning from intact plant molecules to highly oxidised microbial metabolites. In addition, a given molecule may also exist in different forms, depending on its stage of polymerization or on its interactions with other organic compounds or mineral phases of the soil. Here we develop a general theoretical model relating the evolution of soil organic matter, as a continuum of progressively decomposing compounds, to decomposer activity and enzyme traits. The model is based on the notion of quality developed by Agren and Bosatta (1998), which is a measure of molecule accessibility to degradation. The model integrates three major processes: OM depolymerisation by enzyme action, OM assimilation, and OM biotransformation. For any enzyme, the model reports the quality range where this enzyme selectively operates and how the initial quality distribution of the OM subset evolves into another distribution of qualities under the enzyme action. The model also defines the quality range where the OM can be taken up and assimilated by microbes. It finally describes how the quality of the assimilated molecules is transformed into another quality distribution, corresponding to the signature of the decomposer metabolites. Upon decomposer death, these metabolites return to the substrate.
We explore here how microbial physiology and enzyme traits can be incorporated into a model based on a continuous representation of the organic matter, and evaluate how this can improve our ability to predict soil C cycling. To do so, we analyse the properties of the model by implementing different scenarios and test the sensitivity of its parameters. Agren, G. I., & Bosatta, E. (1998). Theoretical ecosystem ecology: understanding element cycles. Cambridge University Press.
Force-Based Reasoning for Assembly Planning and Subassembly Stability Analysis
NASA Technical Reports Server (NTRS)
Lee, S.; Yi, C.; Wang, F-C.
1993-01-01
In this paper, we show that force-based reasoning, for identifying a cluster of parts that can be decomposed naturally by the applied force, plays an important role in selecting feasible subassemblies and analyzing subassembly stability in assembly planning.
Reduced Toxicity Fuel Satellite Propulsion System
NASA Technical Reports Server (NTRS)
Schneider, Steven J. (Inventor)
2001-01-01
A reduced toxicity fuel satellite propulsion system including a reduced toxicity propellant supply for consumption in an axial class thruster and an ACS class thruster. The system includes suitable valves and conduits for supplying the reduced toxicity propellant to the ACS decomposing element of an ACS thruster. The ACS decomposing element is operative to decompose the reduced toxicity propellant into hot propulsive gases. In addition the system includes suitable valves and conduits for supplying the reduced toxicity propellant to an axial decomposing element of the axial thruster. The axial decomposing element is operative to decompose the reduced toxicity propellant into hot gases. The system further includes suitable valves and conduits for supplying a second propellant to a combustion chamber of the axial thruster, whereby the hot gases and the second propellant auto-ignite and begin the combustion process for producing thrust.
Reduced Toxicity Fuel Satellite Propulsion System Including Plasmatron
NASA Technical Reports Server (NTRS)
Schneider, Steven J. (Inventor)
2003-01-01
A reduced toxicity fuel satellite propulsion system including a reduced toxicity propellant supply for consumption in an axial class thruster and an ACS class thruster. The system includes suitable valves and conduits for supplying the reduced toxicity propellant to the ACS decomposing element of an ACS thruster. The ACS decomposing element is operative to decompose the reduced toxicity propellant into hot propulsive gases. In addition the system includes suitable valves and conduits for supplying the reduced toxicity propellant to an axial decomposing element of the axial thruster. The axial decomposing element is operative to decompose the reduced toxicity propellant into hot gases. The system further includes suitable valves and conduits for supplying a second propellant to a combustion chamber of the axial thruster, whereby the hot gases and the second propellant auto-ignite and begin the combustion process for producing thrust.
Waters, Christopher L.; Janupala, Rajiv R.; Mallinson, Richard G.; ...
2017-05-25
Thermal conversion technologies may be the most efficient means of production of transportation fuels from lignocellulosic biomass. In order to increase the viability and improve the carbon emissions profile of pyrolysis biofuels, improvements must be made to the required catalytic upgrading to increase both hydrogen utilization efficiency and final liquid carbon yields. However, no current single catalytic valorization strategy can be optimized to convert the complex mixture of compounds produced upon fast pyrolysis of biomass. Staged thermal fractionation, which entails a series of sequentially increasing temperature steps to decompose biomass, has been proposed as a simple means to create vapor product streams of enhanced purity as compared to fast pyrolysis. In this work, we use analytical pyrolysis to investigate the effects of time and temperature on a thermal step designed to segregate the lignin and cellulose pyrolysis products of a biomass which has been pre-torrefied to remove hemicellulose. At process conditions of 380 °C and 180 s isothermal hold time, a stream containing less than 20% phenolics (carbon basis) was produced, and upon subsequent fast pyrolysis of the residual solid a stream of 81.5% levoglucosan (carbon basis) was produced. The thermal segregation comes at the expense of vapor product carbon yield, but the improvement in catalytic performance may offset these losses.
Thermochemical hydrogen production based on magnetic fusion
NASA Astrophysics Data System (ADS)
Krikorian, O. H.; Brown, L. C.
Preliminary results are presented from a DOE study to define the configuration and production costs for a Tandem Mirror Reactor (TMR) heat-source H2 fuel production plant. The TMR uses the D-T reaction to produce thermal energy and dc electrical current, with a Li blanket employed to breed more H-3 (tritium) for fuel. Various blanket designs are being considered, and the coupling of two of them is discussed: a heat-pipe blanket to a Joule-boosted decomposer, and a two-temperature-zone blanket to a fluidized-bed decomposer. The thermal energy would be used in an H2SO4 thermochemical cycle to produce the H2. The Joule-boosted decomposer, which uses electrically heated commercial SiC furnace elements to transfer process heat to the thermochemical H2 cycle, is found to yield H2 fuel at a cost of $12-14/GJ, the projected cost of fossil fuels in 30-40 yr, when the TMR H2 production facility would be operable.
Automatic single-image-based rain streaks removal via image decomposition.
Kang, Li-Wei; Lin, Chia-Wen; Fu, Yu-Hsiang
2012-04-01
Rain removal from a video is a challenging problem that has recently been investigated extensively. Nevertheless, the problem of rain removal from a single image has rarely been studied in the literature; with no temporal information among successive images to exploit, the problem is very challenging. In this paper, we propose a single-image-based rain removal framework that formulates rain removal as an image decomposition problem based on morphological component analysis. Instead of directly applying a conventional image decomposition technique, the proposed method first decomposes an image into low-frequency and high-frequency (HF) parts using a bilateral filter. The HF part is then decomposed into a "rain component" and a "nonrain component" by performing dictionary learning and sparse coding. As a result, the rain component can be successfully removed from the image while preserving most original image details. Experimental results demonstrate the efficacy of the proposed algorithm.
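The first stage of this pipeline, splitting an image into a low-frequency part and a high-frequency residual with a bilateral filter, can be sketched as follows. This is a naive, unoptimized implementation; the window radius and sigma values are illustrative, and the dictionary-learning stage is not shown.

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Naive O(N * w^2) bilateral filter; img is a 2-D float array in [0, 1]."""
    H, W = img.shape
    pad = np.pad(img, radius, mode="reflect")
    out = np.empty_like(img)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys**2 + xs**2) / (2 * sigma_s**2))   # spatial kernel
    for i in range(H):
        for j in range(W):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # Range kernel: down-weight pixels with dissimilar intensity
            rng = np.exp(-(patch - img[i, j])**2 / (2 * sigma_r**2))
            w = spatial * rng
            out[i, j] = (w * patch).sum() / w.sum()
    return out

# Decompose: low-frequency part + high-frequency residual (lf + hf == image)
img = np.clip(np.random.default_rng(0).normal(0.5, 0.1, (32, 32)), 0, 1)
lf = bilateral_filter(img)
hf = img - lf
assert np.allclose(lf + hf, img)
```

The rain streaks would then be isolated from `hf` by sparse coding over a learned dictionary, which this sketch omits.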
NASA Astrophysics Data System (ADS)
Zhang, Cheng; Wenbo, Mei; Huiqian, Du; Zexian, Wang
2018-04-01
This paper proposes a new algorithm for medical image fusion that combines a gradient-minimization smoothing filter (GMSF) with a non-subsampled directional filter bank (NSDFB). To preserve more detail information, a multi-scale edge-preserving decomposition framework (MEDF) is used to decompose an image into a base image and a series of detail images. For the fusion of base images, a local Gaussian membership function is applied to construct the fusion weighting factor. For the fusion of detail images, the NSDFB is applied to decompose each detail image into multiple directional sub-images, which are then fused by a pulse-coupled neural network (PCNN). The experimental results demonstrate that the proposed algorithm is superior to the compared algorithms in both visual effect and objective assessment.
NASA Technical Reports Server (NTRS)
Schneider, Steven J. (Inventor)
2001-01-01
A reduced toxicity fuel satellite propulsion system including a reduced toxicity propellant supply for consumption in an axial class thruster and an ACS class thruster. The system includes suitable valves and conduits for supplying the reduced toxicity propellant to the ACS decomposing element of an ACS thruster. The ACS decomposing element is operative to decompose the reduced toxicity propellant into hot propulsive gases. In addition the system includes suitable valves and conduits for supplying the reduced toxicity propellant to an axial decomposing element of the axial thruster. The axial decomposing element is operative to decompose the reduced toxicity propellant into hot gases. The system further includes suitable valves and conduits for supplying a second propellant to a combustion chamber of the axial thruster, whereby the hot gases and the second propellant auto-ignite and begin the combustion process for producing thrust.
Double loaded self-decomposable SiO2 nanoparticles for sustained drug release
NASA Astrophysics Data System (ADS)
Zhao, Saisai; Zhang, Silu; Ma, Jiang; Fan, Li; Yin, Chun; Lin, Ge; Li, Quan
2015-10-01
Sustained drug release over a long duration is a desired feature of modern drugs. Using double-loaded self-decomposable SiO2 nanoparticles, we demonstrated sustained drug release in a controllable manner. The double loading of the drugs was achieved using two different mechanisms: the first via a co-growth mechanism, and the second by absorption. A two-phase sustained drug release was first revealed in an in vitro system, and then further demonstrated in mice. After a single intravenous injection, the drug was controllably released from the nanoparticles into blood circulation with a Tmax of about 8 h; afterwards, a long-lasting release pattern maintained systemic drug exposure with a plasma elimination half-life of approximately 28 h. We found that the absorbed drug molecules contributed to the initial fast release, quickly reaching the therapeutic level with relatively high plasma concentrations, while the "grown-in" drugs maintained the therapeutic level via the later controlled slow and sustained release. The present nanoparticle-carrier drug configuration and the loading/maintenance release mechanisms provide a promising platform that ensures a prolonged therapeutic effect by controlling drug concentrations within the therapeutic window: a sustained drug-delivery system with great potential for improving the management of chronic diseases. Electronic supplementary information (ESI) available. See DOI: 10.1039/c5nr03029c
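A two-phase release profile of this shape can be sketched as a sum of two first-order input terms sharing one elimination rate. All constants below are hypothetical, chosen only to roughly match the reported Tmax of about 8 h and the 28 h elimination half-life; they are not the study's fitted parameters.

```python
import math

# Illustrative two-phase release model (all constants hypothetical):
ke = math.log(2) / 28.0         # elimination rate from the ~28 h half-life
ka_fast, ka_slow = 0.36, 0.06   # release rates of absorbed vs. "grown-in" drug
A_fast, A_slow = 1.0, 0.5       # relative loading of the two fractions

def conc(t):
    """Plasma concentration: two Bateman-type terms with a shared elimination rate."""
    fast = A_fast * (math.exp(-ke * t) - math.exp(-ka_fast * t))
    slow = A_slow * (math.exp(-ke * t) - math.exp(-ka_slow * t))
    return fast + slow

# Locate the concentration peak on a 0-72 h grid.
ts = [i / 10 for i in range(0, 721)]
cs = [conc(t) for t in ts]
tmax = ts[cs.index(max(cs))]
assert 6.0 < tmax < 14.0        # peak lands near the reported Tmax of ~8 h
```

The fast term dominates the early rise while the slow term sustains exposure; the terminal decay is governed by `ke`, reproducing the long elimination tail.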
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kılıç, Emre, E-mail: emre.kilic@tum.de; Eibert, Thomas F.
An approach combining boundary integral and finite element methods is introduced for the solution of three-dimensional inverse electromagnetic medium scattering problems. Based on the equivalence principle, unknown equivalent electric and magnetic surface current densities on a closed surface are utilized to decompose the inverse medium problem into two parts: a linear radiation problem and a nonlinear cavity problem. The first problem is formulated by a boundary integral equation, the computational burden of which is reduced by employing the multilevel fast multipole method (MLFMM). Reconstructed Cauchy data on the surface allow the utilization of the Lorentz reciprocity and Poynting's theorems. Exploiting these theorems, the noise level and an initial guess are estimated for the cavity problem. Moreover, it is possible to determine whether the material is lossy or not. In the second problem, the estimated surface currents form inhomogeneous boundary conditions of the cavity problem. The cavity problem is formulated by the finite element technique and solved iteratively by the Gauss-Newton method to reconstruct the properties of the object. Regularization for both the first and the second problems is achieved by a Krylov subspace method. The proposed method is tested against both synthetic and experimental data, and promising reconstruction results are obtained.
Kinetics of acetaminophen degradation by Fenton oxidation in a fluidized-bed reactor.
de Luna, Mark Daniel G; Briones, Rowena M; Su, Chia-Chi; Lu, Ming-Chun
2013-01-01
Acetaminophen (ACT), an analgesic and antipyretic substance, is one of the most commonly detected pharmaceutical compounds in surface waters and wastewaters. In this study, fluidized-bed Fenton (FB-Fenton) was used to decompose ACT into its final degradation products. The 1.45-L cylindrical glass reactor had inlet, outlet and recirculating sections. SiO2 carrier particles were supported by glass beads 2-4 mm in diameter. ACT concentration was determined by high-performance liquid chromatography (HPLC). During the first 40 min of reaction, a fast initial ACT removal was observed, and the "two-stage" ACT degradation conformed to pseudo-second-order reaction kinetics. The effects of ferrous ion dosage and [Fe2+]/[H2O2] (FH ratio) were integrated into the derived pseudo-second-order kinetic model. A reaction pathway was proposed based on the intermediates detected through SPME/GC-MS. The aromatic intermediates identified were hydroquinone, benzaldehydes and benzoic acids, while the non-aromatic substances included alcohols, ketones, aldehydes and carboxylic acids. A rapid initial ACT degradation rate can be achieved with a high initial ferrous ion concentration and/or a low FH ratio. Copyright © 2012 Elsevier Ltd. All rights reserved.
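The pseudo-second-order kinetic form referenced above has a simple closed-form solution; the linearized plot of 1/C versus t is the standard way to recover the rate constant. A minimal sketch with illustrative rate constants (not the paper's fitted values):

```python
def c_pso(t, c0, k):
    """Analytic pseudo-second-order decay: dC/dt = -k*C^2  ->  1/C = 1/C0 + k*t."""
    return 1.0 / (1.0 / c0 + k * t)

c0, k = 1.0, 0.05            # mM and 1/(mM*min); illustrative values only
ts = [0, 5, 10, 20, 40]
cs = [c_pso(t, c0, k) for t in ts]

# Recover k as the least-squares slope of the linearized form 1/C = 1/C0 + k*t.
xs = ts
ys = [1.0 / c for c in cs]
n = len(xs)
mx = sum(xs) / n
my = sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
assert abs(slope - k) < 1e-9   # slope of the linearized plot recovers k
```

With real data the fit would be performed on measured HPLC concentrations, and deviations from linearity would flag the two-stage behavior the authors describe.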
Fast social-like learning of complex behaviors based on motor motifs.
Calvo Tapia, Carlos; Tyukin, Ivan Y; Makarov, Valeri A
2018-05-01
Social learning is widely observed in many species. Less experienced agents copy successful behaviors exhibited by more experienced individuals. Nevertheless, the dynamical mechanisms behind this process remain largely unknown. Here we assume that a complex behavior can be decomposed into a sequence of n motor motifs. Then a neural network capable of activating motor motifs in a given sequence can drive an agent. To account for (n-1)! possible sequences of motifs in a neural network, we employ the winnerless competition approach. We then consider a teacher-learner situation: one agent exhibits a complex movement, while another one aims at mimicking the teacher's behavior. Despite the huge variety of possible motif sequences we show that the learner, equipped with the provided learning model, can rewire "on the fly" its synaptic couplings in no more than (n-1) learning cycles and converge exponentially to the durations of the teacher's motifs. We validate the learning model on mobile robots. Experimental results show that the learner is indeed capable of copying the teacher's behavior composed of six motor motifs in a few learning cycles. The reported mechanism of learning is general and can be used for replicating different functions, including, for example, sound patterns or speech.
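The claimed exponential convergence of the learner's motif durations can be illustrated with a toy error-correction rule. This is not the winnerless-competition network itself, only a minimal sketch of the convergence behavior; the durations and learning rate are made up.

```python
# Toy sketch: a learner's motif-duration estimates converge exponentially
# toward the teacher's durations under a simple error-correction rule.
teacher = [0.8, 1.2, 0.5, 1.0, 0.7, 0.9]   # durations (s) of n = 6 motifs (assumed)
learner = [1.0] * len(teacher)              # naive initial estimates
eta = 0.5                                   # learning rate, 0 < eta < 1 (assumed)

errors = []
for cycle in range(10):
    learner = [d + eta * (t - d) for d, t in zip(learner, teacher)]
    errors.append(max(abs(t - d) for d, t in zip(learner, teacher)))

# The worst-case error shrinks by a factor (1 - eta) each cycle,
# i.e. exponential convergence to the teacher's durations.
for e0, e1 in zip(errors, errors[1:]):
    assert abs(e1 - (1 - eta) * e0) < 1e-12
```

In the paper's model the update is driven by synaptic rewiring during observation of the teacher, but the resulting geometric error decay is the same qualitative behavior shown here.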
NASA Astrophysics Data System (ADS)
Murawski, Jens; Kleine, Eckhard
2017-04-01
Sea ice remains one of the frontiers of ocean modelling and is of vital importance for correct forecasts of the northern oceans. At large scale, it is commonly treated as a continuous medium whose dynamics is modelled in terms of continuum mechanics. Its specific character is a matter of constitutive behaviour, which may be characterised as rigid-plastic. The newly developed sea-ice dynamics module is based on general principles and follows a systematic approach to the problem. Both the drift field and the stress field are modelled by a variational property. Rigidity is treated by Lagrangian relaxation, which leads to a sensible numerical method. Modelling fast ice remains a challenge. It is understood that ridging and the formation of grounded ice keels play a role in the process. The ice dynamics model includes a parameterisation of the stress associated with grounded ice keels. Shear against the grounded bottom contact may lead to plastic deformation and loss of integrity. The numerical scheme involves a potentially large system of linear equations which is solved by pre-conditioned iteration. The entire algorithm consists of several components which result from decomposing the problem. The algorithm has been implemented and tested in practice.
Rule groupings: An approach towards verification of expert systems
NASA Technical Reports Server (NTRS)
Mehrotra, Mala
1991-01-01
Knowledge-based expert systems are playing an increasingly important role in NASA space and aircraft systems. However, many of NASA's software applications are life- or mission-critical and knowledge-based systems do not lend themselves to the traditional verification and validation techniques for highly reliable software. Rule-based systems lack the control abstractions found in procedural languages. Hence, it is difficult to verify or maintain such systems. Our goal is to automatically structure a rule-based system into a set of rule-groups having a well-defined interface to other rule-groups. Once a rule base is decomposed into such 'firewalled' units, studying the interactions between rules would become more tractable. Verification-aid tools can then be developed to test the behavior of each such rule-group. Furthermore, the interactions between rule-groups can be studied in a manner similar to integration testing. Such efforts will go a long way towards increasing our confidence in the expert-system software. Our research efforts address the feasibility of automating the identification of rule groups, in order to decompose the rule base into a number of meaningful units.
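The grouping step can be illustrated with a toy sketch: treat rules as graph nodes, connect rules that reference a common fact, and take connected components as candidate rule-groups. The rules and fact names below are hypothetical, and the paper's actual grouping criteria may be richer than simple fact-sharing.

```python
from collections import defaultdict

# Hypothetical rules: each rule reads some facts and asserts others.  Rules
# that share facts (directly or transitively) end up in one rule-group.
rules = {
    "R1": {"reads": {"temp"},     "writes": {"alarm"}},
    "R2": {"reads": {"alarm"},    "writes": {"shutdown"}},
    "R3": {"reads": {"fuel"},     "writes": {"fuel_low"}},
    "R4": {"reads": {"fuel_low"}, "writes": {"warning"}},
}

# Build an undirected graph: connect any two rules that share a fact.
adj = defaultdict(set)
names = list(rules)
for i, a in enumerate(names):
    fa = rules[a]["reads"] | rules[a]["writes"]
    for b in names[i + 1:]:
        fb = rules[b]["reads"] | rules[b]["writes"]
        if fa & fb:
            adj[a].add(b)
            adj[b].add(a)

# Connected components = candidate "firewalled" rule-groups.
seen, groups = set(), []
for r in names:
    if r in seen:
        continue
    stack, comp = [r], set()
    while stack:
        x = stack.pop()
        if x in comp:
            continue
        comp.add(x)
        stack.extend(adj[x] - comp)
    seen |= comp
    groups.append(comp)

assert sorted(map(sorted, groups)) == [["R1", "R2"], ["R3", "R4"]]
```

Each resulting group exposes only the facts that cross its boundary, which is what makes per-group verification and integration-style testing tractable.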
NASA Astrophysics Data System (ADS)
Baynes, K.; Gilman, J.; Pilone, D.; Mitchell, A. E.
2015-12-01
The NASA EOSDIS (Earth Observing System Data and Information System) Common Metadata Repository (CMR) is a continuously evolving metadata system that merges all existing capabilities and metadata from the EOS ClearingHouse (ECHO) and Global Change Master Directory (GCMD) systems. This flagship catalog has been developed with several key requirements: fast search and ingest performance; the ability to integrate heterogeneous external inputs and outputs; high availability and resiliency; scalability; and evolvability and expandability. This talk will focus on the advantages and potential challenges of tackling these requirements using a microservices architecture, which decomposes system functionality into smaller, loosely coupled, individually scalable elements that communicate via well-defined APIs. In addition, time will be spent examining specific elements of the CMR architecture and identifying opportunities for future integrations.
Kulkarni, Raviraj M; Bilehal, Dinesh C; Nandibewoor, Sharanappa T
2004-04-01
The kinetics of oxidation of isoniazid (INH) by quinolinium dichromate (QDC) in acidic medium was studied spectrophotometrically. The reaction between QDC and isoniazid in acid medium exhibits 4:1 (QDC:isoniazid) stoichiometry. The reaction showed first-order kinetics in QDC concentration and an order of less than unity in isoniazid and acid concentrations. The oxidation proceeds via a protonated QDC species, which forms a complex with isoniazid. The latter decomposes in a slow step to give a free radical derived from isoniazid and an intermediate chromium(V) species, followed by subsequent fast steps that give the products. The reaction constants involved in the mechanism are evaluated. Isoniazid was analyzed by kinetic methods in pure form and in pharmaceutical formulations.
Incipient fault feature extraction of rolling bearings based on the MVMD and Teager energy operator.
Ma, Jun; Wu, Jiande; Wang, Xiaodong
2018-06-04
The incipient faults of rolling bearings are difficult to recognize, and the number of intrinsic mode functions (IMFs) produced by variational mode decomposition (VMD) must be set in advance rather than selected adaptively. Taking full advantage of the adaptive segmentation of the scale spectrum and of Teager energy operator (TEO) demodulation, a new method for early fault-feature extraction of rolling bearings based on modified VMD and the Teager energy operator (MVMD-TEO) is proposed. Firstly, the vibration signal of the rolling bearings is analyzed by adaptive scale-space spectrum segmentation to obtain the spectrum-segmentation support boundary, from which the number K of IMFs decomposed by VMD is adaptively determined. Secondly, the original vibration signal is adaptively decomposed into K IMFs, and the effective IMF components are extracted based on the correlation-coefficient criterion. Finally, the Teager energy spectrum of the signal reconstructed from the effective IMF components is calculated by the TEO, and the early fault features of the rolling bearings are extracted to realize fault identification and location. Comparative experiments between the proposed method and an existing fault-feature extraction method based on local mean decomposition and the Teager energy operator (LMD-TEO) were implemented using experimental data sets and a measured data set. The results in three application cases show that the presented method achieves comparable or slightly better performance than the LMD-TEO method, demonstrating its validity and feasibility. Copyright © 2018. Published by Elsevier Ltd.
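The Teager energy operator at the heart of this method is a three-sample nonlinear operator. A minimal sketch, using the classic result that it returns a constant for a pure tone (which is why it tracks amplitude and frequency jointly):

```python
import math

def teager(x):
    """Discrete Teager energy operator: psi[n] = x[n]^2 - x[n-1]*x[n+1]."""
    return [x[n] ** 2 - x[n - 1] * x[n + 1] for n in range(1, len(x) - 1)]

# For a pure tone x[n] = A*cos(w*n), psi is the constant A^2 * sin(w)^2,
# so it demodulates "energy" (amplitude and frequency together).
A, w = 2.0, 0.3
x = [A * math.cos(w * n) for n in range(100)]
psi = teager(x)
expected = A ** 2 * math.sin(w) ** 2
assert all(abs(p - expected) < 1e-9 for p in psi)
```

Applied to a bearing signal reconstructed from the effective IMFs, the spectrum of `psi` exposes the impulsive fault frequencies more clearly than the raw signal's spectrum.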
NASA Astrophysics Data System (ADS)
Krab, E. J.; Berg, M. P.; Aerts, R.; van Logtestijn, R. S. P.; Cornelissen, H. H. C.
2014-12-01
Climate-change-induced trends towards shrub dominance in subarctic, moss-dominated peatlands will most likely have large effects on soil carbon (C) dynamics through an input of more easily decomposable litter. The mechanisms by which this increase in vascular litter input interacts with the abundance and diet choice of the decomposer community to alter C processing have, however, not yet been unraveled. We used a novel 13C tracer approach to link invertebrate (Collembola) species composition, abundance and species-specific feeding behavior to C processing of vascular and peat moss litters. We incubated different litter mixtures, 100% Sphagnum moss litter, 100% Betula leaf litter, and a 50/50 mixture of both, in mesocosms for 406 days. We traced the transfer of C from the litters to the soil invertebrate species by 13C labeling of each of the litter types and assessed the 13C signatures of the invertebrates. Collembola species composition differed significantly between Sphagnum and Betula litter. Within the single-litter-type mesocosms, Collembola species showed different 13C signatures, implying species-specific differences in diet choice. Surprisingly, the species composition and Collembola abundance changed relatively little as a consequence of Betula input to a Sphagnum-based system. Their diet choice, however, changed drastically; species-specific differences in diet choice disappeared, and approximately 67% of the food ingested by all Collembola originated from Betula litter. Furthermore, litter decomposition patterns corresponded to these findings; mass loss of Betula increased from 16.1% to 26.2% when decomposing in combination with Sphagnum, while Sphagnum decomposed even more slowly in combination with Betula litter (1.9%) than alone (4.7%). This study is the first to empirically show that collective diet shifts of the peatland decomposer community from mosses towards vascular plant litter may drive altered decomposition patterns. In addition, we showed that although species-specific differences in Collembola feeding behavior appear to exist, the species are very plastic in their diet. This implies that changes in C turnover rates with vegetation shifts might well be due to diet shifts of the existing decomposer community rather than to changes in species composition.
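A diet-fraction estimate like the ~67% from Betula is the kind of number a standard two-source linear isotope mixing model produces. A minimal sketch follows; the delta-13C values are hypothetical, chosen only to illustrate the arithmetic, and the study's actual calculation may differ.

```python
def two_source_fraction(d_consumer, d_source_a, d_source_b):
    """Fraction of diet from source A under a two-source linear mixing model:
       f = (d_consumer - d_B) / (d_A - d_B)."""
    return (d_consumer - d_source_b) / (d_source_a - d_source_b)

# Hypothetical delta-13C values (per mil) for a labeled-litter experiment:
d_betula, d_sphagnum = 120.0, -28.0   # 13C-enriched Betula vs. unlabeled moss
d_collembola = 71.2                   # hypothetical measured consumer signature
f_betula = two_source_fraction(d_collembola, d_betula, d_sphagnum)
assert abs(f_betula - 0.67) < 0.01    # ~67% of diet attributed to Betula litter
```

The strong 13C enrichment of the labeled litter is what makes the two end-members well separated and the fraction estimate robust to small measurement error.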
Forensic entomology of decomposing humans and their decomposing pets.
Sanford, Michelle R
2015-02-01
Domestic pets are commonly found in the homes of decedents whose deaths are investigated by a medical examiner or coroner. When these pets become trapped with a decomposing decedent, they may resort to feeding on the body or succumb to starvation and/or dehydration and begin to decompose as well. In this case report, photographic documentation of cases involving pets and decedents was examined from 2009 through the beginning of 2014. This photo review indicated that in many cases the pets were cats and dogs that had been trapped with the decedent, died, and were discovered in a moderate (bloat to active decay) state of decomposition. In addition, three cases involving decomposing humans and their decomposing pets are described as they were processed for time of insect colonization by a forensic entomological approach. Differences in the timing and species colonizing the human and animal bodies were noted, as was the potential for human- or animal-derived specimens to contaminate one another at the scene. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Philpott, Timothy J; Barker, Jason S; Prescott, Cindy E; Grayston, Sue J
2018-02-01
Fine root litter is the principal source of carbon stored in forest soils and a dominant source of carbon for fungal decomposers. Differences in decomposer capacity between fungal species may be important determinants of fine-root decomposition rates. Variable-retention harvesting (VRH) provides refuge for ectomycorrhizal fungi, but its influence on fine-root decomposers is unknown, as are the effects of functional shifts in these fungal communities on carbon cycling. We compared fungal communities decomposing fine roots (in litter bags) under VRH, clear-cut, and uncut stands at two sites (6 and 13 years postharvest) and two decay stages (43 days and 1 year after burial) in Douglas fir forests in coastal British Columbia, Canada. Fungal species and guilds were identified from decomposed fine roots using high-throughput sequencing. Variable retention had short-term effects on β-diversity; harvest treatment modified the fungal community composition at the 6-year-postharvest site, but not at the 13-year-postharvest site. Ericoid and ectomycorrhizal guilds were not more abundant under VRH, but stand age significantly structured species composition. Guild composition varied by decay stage, with ruderal species later replaced by saprotrophs and ectomycorrhizae. Ectomycorrhizal abundance on decomposing fine roots may partially explain why fine roots typically decompose more slowly than surface litter. Our results indicate that stand age structures fine-root decomposers but that decay stage is more important in structuring the fungal community than shifts caused by harvesting. The rapid postharvest recovery of fungal communities decomposing fine roots suggests resiliency within this community, at least in these young regenerating stands in coastal British Columbia. 
IMPORTANCE Globally, fine roots are a dominant source of carbon in forest soils, yet the fungi that decompose this material and that drive the sequestration or respiration of this carbon remain largely uncharacterized. Fungi vary in their capacity to decompose plant litter, suggesting that fungal community composition is an important determinant of decomposition rates. Variable-retention harvesting is a forestry practice that modifies fungal communities by providing refuge for ectomycorrhizal fungi. We evaluated the effects of variable-retention and clear-cut harvesting on fungal communities decomposing fine roots at two sites (6 and 13 years postharvest), at two decay stages (43 days and 1 year), and in uncut stands in temperate rainforests. Harvesting impacts on fungal community composition were detected only at the site 6 years after harvest. We suggest that fungal community composition may be an important factor that reduces fine-root decomposition rates relative to those of above-ground plant litter, which has important consequences for forest carbon cycling. Copyright © 2018 American Society for Microbiology.
Aras, N; Altinel, I K; Oommen, J
2003-01-01
In addition to the classical heuristic algorithms of operations research, several approaches based on artificial neural networks have been proposed for solving the traveling salesman problem (TSP). Their efficiency, however, decreases as the problem size (number of cities) increases. A technique to reduce the complexity of a large-scale TSP instance is to decompose or partition it into smaller subproblems. We introduce an all-neural decomposition heuristic based on a recent self-organizing map called KNIES, which has been successfully implemented for solving both the Euclidean TSP and the Euclidean Hamiltonian path problem (HPP). Our solution for the Euclidean TSP proceeds by solving the Euclidean HPP for the subproblems and then patching these solutions together. No such all-neural solution has previously been reported.
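The decompose-and-patch strategy can be sketched without the neural component: partition the cities spatially, solve a Hamiltonian path on each part, and concatenate. The median split and greedy nearest-neighbor path below are crude stand-ins for the KNIES self-organizing map and its HPP solver.

```python
import math
import random

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def nearest_neighbor_path(cities, start):
    """Greedy Hamiltonian path over one subproblem (stand-in for the HPP solver)."""
    path, rest = [start], set(cities) - {start}
    while rest:
        nxt = min(rest, key=lambda c: dist(path[-1], c))
        path.append(nxt)
        rest.remove(nxt)
    return path

random.seed(1)
cities = [(random.random(), random.random()) for _ in range(60)]

# Decompose: split the cities into two spatial clusters by x-coordinate median
# (a crude stand-in for the self-organizing-map partition).
cities.sort()
left, right = cities[:30], cities[30:]

# Solve each subproblem, then patch: start the second path at the city
# nearest to where the first path ends.
p1 = nearest_neighbor_path(left, left[0])
p2 = nearest_neighbor_path(right, min(right, key=lambda c: dist(p1[-1], c)))
tour = p1 + p2
tour_len = sum(dist(tour[i], tour[(i + 1) % len(tour)]) for i in range(len(tour)))

assert sorted(tour) == sorted(cities)   # every city visited exactly once
```

The quality of the patched tour depends heavily on where each subpath starts and ends, which is exactly the part the all-neural heuristic is designed to handle well.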
3D shape decomposition and comparison for gallbladder modeling
NASA Astrophysics Data System (ADS)
Huang, Weimin; Zhou, Jiayin; Liu, Jiang; Zhang, Jing; Yang, Tao; Su, Yi; Law, Gim Han; Chui, Chee Kong; Chang, Stephen
2011-03-01
This paper presents an approach to gallbladder shape comparison using 3D shape modeling and decomposition. The gallbladder models can be used for shape-anomaly analysis and for model comparison and selection in image-guided robotic surgical training, especially for laparoscopic cholecystectomy simulation. The 3D shape of a gallbladder is first represented as a surface model, reconstructed from contours segmented in CT data by a scheme of propagation-based voxel learning and classification. To better extract the shape features, the surface mesh is down-sampled by a decimation filter and smoothed by the Taubin algorithm, followed by an advancing-front algorithm to further enhance the regularity of the mesh. Multi-scale curvatures are then computed on the regularized mesh for robust saliency-landmark localization on the surface. The shape decomposition is based on the saliency landmarks and the concavity, measured by the distance from a surface point to the convex hull. With a given tolerance, the 3D shape can be decomposed and represented as 3D ellipsoids, which reveal the shape topology and anomalies of a gallbladder. Features based on the decomposed shape model are proposed for gallbladder shape comparison, which can be used for new model selection. We collected 19 sets of abdominal CT scan data with gallbladders, some with normal and some with abnormal shapes. The experiments show that the decomposed shapes reveal important topological features.
Event-driven Monte Carlo: Exact dynamics at all time scales for discrete-variable models
NASA Astrophysics Data System (ADS)
Mendoza-Coto, Alejandro; Díaz-Méndez, Rogelio; Pupillo, Guido
2016-06-01
We present an algorithm for the simulation of the exact real-time dynamics of classical many-body systems with discrete energy levels. In the same spirit as kinetic Monte Carlo methods, a stochastic solution of the master equation is found, with no need to define any other phase-space construction. However, unlike existing methods, the present algorithm does not assume any particular statistical distribution to perform moves or to advance the time, and thus is a unique tool for the numerical exploration of fast and ultra-fast dynamical regimes. By decomposing the problem into a set of two-level subsystems, we find a natural variable step size that is well defined from the normalization condition of the transition probabilities between the levels. We successfully test the algorithm against known exact solutions for the non-equilibrium dynamics and equilibrium thermodynamic properties of Ising-spin models in one and two dimensions, and compare it to standard implementations of kinetic Monte Carlo methods. The present algorithm is directly applicable to the study of the real-time dynamics of a large class of classical Markovian chains, and particularly to short-time situations where the exact evolution is relevant.
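For context, the standard rejection-free kinetic Monte Carlo baseline that the authors compare against draws exponential waiting times from the current transition rate. A minimal two-level sketch (the rates are illustrative, and this is the conventional scheme, not the paper's distribution-free algorithm):

```python
import math
import random

# Standard rejection-free KMC for a single two-level system with transition
# rates r_up (0 -> 1) and r_down (1 -> 0).  The excited-level occupation
# should equilibrate to p1 = r_up / (r_up + r_down).
random.seed(42)
r_up, r_down = 1.0, 3.0
state, t = 0, 0.0
time_in_1 = 0.0
t_end = 10000.0

while t < t_end:
    rate = r_up if state == 0 else r_down
    dt = -math.log(1.0 - random.random()) / rate   # exponential waiting time
    if state == 1:
        time_in_1 += min(dt, t_end - t)            # clip the final interval
    t += dt
    state ^= 1                                      # flip 0 <-> 1

p1_est = time_in_1 / t_end
assert abs(p1_est - r_up / (r_up + r_down)) < 0.02  # ~0.25 at equilibrium
```

The proposed event-driven algorithm replaces the assumed exponential waiting-time distribution with step sizes derived from the normalization of the two-level transition probabilities, which is what allows it to remain exact in fast dynamical regimes.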
Nanoparticle Plasma Jet as Fast Probe for Runaway Electrons in Tokamak Disruptions
NASA Astrophysics Data System (ADS)
Bogatu, I. N.; Galkin, S. A.
2017-10-01
Successful probing of runaway electrons (REs) requires fast (1-2 ms) injection of enough mass to penetrate the tokamak toroidal B-field (2-5 T) over a 1-2 m distance, with a large assimilation fraction in the core plasma. A nanoparticle plasma jet (NPPJ) from a plasma gun offers a unique combination of millisecond trigger-to-delivery response and a mass-velocity of 100 mg at several km/s for deep direct injection into the current channel of the rapidly (1 ms) cooling post-TQ core plasma. After a C60 NPPJ test-bed demonstration, we began work on an ITER-compatible boron nitride (BN) NPPJ. Once injected into the plasma, a BN nanoparticle undergoes ablative sublimation, thermally decomposes into B and N, and releases abundant high-charge B and N ions along the plasma-traversing path and into the core. We present basic characteristics of our BN NPPJ concept and first results on the effect of the Zeff > 1 contribution from B and N ions on RE dynamics, using a self-consistent model for the RE current density. Simulation results of BNQ+ NPPJ penetration through the tokamak B-field to the RE beam location, performed with the Hybrid Electro-Magnetic code (HEM-2D), are also presented. Work supported by a U.S. DOE SBIR Grant.
Lu, Yan; Li, Gang; Liu, Wei; Yuan, Hongyan; Xiao, Dan
2018-08-15
It is known that most of the refractory ore are the basis of national economy and widely applied in various fields, however, the complexity of the chemical composition and the diversity of the crystallinity in the mineral phases make the sample pre-treatment of refractory ore still remains a challenge. In this work, the complete decomposition of the refractory ore sample can be achieved just by exposing the solid fusion agent and the refractory ore sample in the microwave irradiation environment for a few minutes, and induced by a drop of water. A digestion time of 15 min for 3.0 g solid fusion agent mixture of sodium peroxide/sodium carbonate (Na 2 O 2 /Na 2 CO 3 ) in a corundum crucible via microwave heating is sufficient to decompose 0.1 g refractory ore sample. An excellent microwave digestion solid agent should meet the following conditions, a good decomposition ability, an outstanding ability of absorbing microwave energy and converting it into heat quickly, a higher melting point than the decomposing temperature of the ore sample. In the research, the induction effect of water plays an important role for the microwave digestion. The energy which is released by the reaction of water and the solid fusion agent (Na 2 O 2 ) is the key to decompose refractory ore samples with solid fusion agent, which replenished the total energy required for the microwave digestion and made the microwave digestion completed successfully. This microwave digestion technique has good reproducibility and precision, RSD % for Mo, Fe, Ti, Cr and W in the refractory ore samples were all better than 6, except RSD % for Be of about 8 because of the influence of matrix-effect. Meanwhile, the analysis results of the elements in the refractory ore samples provided by the microwave digestion technique were all in good agreement with the analysis results provided by the traditional fusion method except for Cr in the mixture ore samples. 
In this study, the non-linear dependence of the electromagnetic and thermal properties of the solid fusion agent on temperature under microwave irradiation, together with the selective heating of microwaves, is fully exploited in this simple microwave technique. Compared with the traditional fusion decomposition method, this microwave digestion technique is a simple, economical, fast, and energy-saving sample pre-treatment technique. Copyright © 2018 Elsevier B.V. All rights reserved.
Our World without Decomposers: How Scary!
ERIC Educational Resources Information Center
Spring, Patty; Harr, Natalie
2014-01-01
Bugs, slugs, bacteria, and fungi are decomposers at the heart of every ecosystem. Fifth graders at Dodge Intermediate School in Twinsburg, Ohio, ventured outdoors to learn about the necessity of these amazing organisms. With the help of a naturalist, students explored their local park and discovered the wonder of decomposers and their…
Univariate Time Series Prediction of Solar Power Using a Hybrid Wavelet-ARMA-NARX Prediction Method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nazaripouya, Hamidreza; Wang, Yubo; Chu, Chi-Cheng
This paper proposes a new hybrid method for super-short-term solar power prediction. Solar output power usually has complex, nonstationary, and nonlinear characteristics due to the intermittent and time-varying behavior of solar radiance. In addition, solar power dynamics are fast and nearly inertia-free. An accurate super-short-term prediction is required to compensate for the fluctuations and reduce the impact of solar power penetration on the power system. The objective is to predict one-step-ahead solar power generation based only on historical solar power time-series data. The proposed method incorporates the discrete wavelet transform (DWT), Auto-Regressive Moving Average (ARMA) models, and Recurrent Neural Networks (RNN), where the RNN architecture is based on Nonlinear Auto-Regressive models with eXogenous inputs (NARX). The wavelet transform is utilized to decompose the solar power time series into a set of better-behaved constitutive series for prediction. The ARMA model is employed as a linear predictor, while NARX is used as a nonlinear pattern-recognition tool to estimate and compensate the error of the wavelet-ARMA prediction. The proposed method is applied to data captured from UCLA solar PV panels, and the results are compared with some of the common and most recent solar power prediction methods. The results validate the effectiveness of the proposed approach and show a considerable improvement in prediction precision.
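The hybrid idea (decompose the series, predict each sub-series, recombine) can be sketched with a one-level Haar DWT and a least-squares AR(1) forecaster standing in for the authors' DWT/ARMA/NARX pipeline; the toy series and all names here are illustrative:

```python
import numpy as np

def haar_dwt(x):
    """One-level Haar DWT: split a series into approximation and detail parts."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return approx, detail

def haar_idwt(approx, detail):
    """Inverse of haar_dwt (perfect reconstruction)."""
    x = np.empty(2 * len(approx))
    x[0::2] = (approx + detail) / np.sqrt(2.0)
    x[1::2] = (approx - detail) / np.sqrt(2.0)
    return x

def ar1_forecast(s):
    """Least-squares AR(1) one-step-ahead forecast (stand-in for the ARMA/NARX stages)."""
    y, z = s[1:], s[:-1]
    phi = float(np.dot(z, y) / np.dot(z, z))
    return phi * s[-1]

# Decompose a toy "solar power" series, forecast each sub-series,
# and recombine the forecasts into a prediction of the next sample.
power = 2.0 + np.sin(np.linspace(0.0, 8.0 * np.pi, 64))
a, d = haar_dwt(power)
next_sample = (ar1_forecast(a) + ar1_forecast(d)) / np.sqrt(2.0)
```

The transform is lossless, so whatever structure the predictor misses in one sub-series can still be captured in the other.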
NASA Astrophysics Data System (ADS)
Bassier, M.; Bonduel, M.; Van Genechten, B.; Vergauwen, M.
2017-11-01
Point cloud segmentation is a crucial step in scene understanding and interpretation. The goal is to decompose the initial data into sets of workable clusters with similar properties. Additionally, it is a key aspect in the automated procedure from point cloud data to BIM. Current approaches typically segment only a single type of primitive, such as planes or cylinders. Also, current algorithms suffer from oversegmenting the data and are often sensor or scene dependent. In this work, a method is presented to automatically segment large unstructured point clouds of buildings. More specifically, the segmentation is formulated as a graph optimisation problem. First, the data is oversegmented with a greedy octree-based region growing method. The growing is conditioned on the segmentation of planes as well as smooth surfaces. Next, the candidate clusters are represented by a Conditional Random Field, after which the most likely configuration of candidate clusters is computed given a set of local and contextual features. The experiments show that the method is a fast and reliable framework for unstructured point cloud segmentation. Processing speeds up to 40,000 points per second are recorded for the region growing. Additionally, the recall and precision of the graph clustering are approximately 80%. Overall, clustering the data reduces oversegmentation by nearly 22%. These clusters will be classified and used as a basis for the reconstruction of BIM models.
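The greedy region-growing step can be illustrated in one dimension: walk an ordered point list and start a new cluster whenever the local normal deviates from the current seed normal by more than a threshold. This is a deliberately simplified stand-in for the paper's octree-based neighbourhood search; all names and data are illustrative:

```python
import numpy as np

def region_grow(normals, angle_thresh_deg=10.0):
    """Greedy region growing over an ordered list of unit normals:
    points join the current cluster while their normal stays within
    angle_thresh_deg of the cluster's seed normal."""
    cos_t = np.cos(np.radians(angle_thresh_deg))
    labels = [0]
    seed = normals[0]
    for n in normals[1:]:
        if np.dot(n, seed) < cos_t:   # normal deviates: start a new cluster
            labels.append(labels[-1] + 1)
            seed = n
        else:                         # smooth continuation: same cluster
            labels.append(labels[-1])
    return labels

# Two planar patches: a horizontal "floor" followed by a vertical "wall".
floor = [(0.0, 0.0, 1.0)] * 5
wall = [(1.0, 0.0, 0.0)] * 5
labels = region_grow(np.array(floor + wall))
```

A real implementation grows in 3-D over octree neighbours and fits planes per cluster, but the seed-and-threshold logic is the same.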
Rai, Prashant; Sargsyan, Khachik; Najm, Habib; ...
2017-03-07
Here, a new method is proposed for a fast evaluation of high-dimensional integrals of potential energy surfaces (PES) that arise in many areas of quantum dynamics. It decomposes a PES into a canonical low-rank tensor format, reducing its integral into a relatively short sum of products of low-dimensional integrals. The decomposition is achieved by the alternating least squares (ALS) algorithm, requiring only a small number of single-point energy evaluations. Therefore, it eradicates a force-constant evaluation as the hotspot of many quantum dynamics simulations and also possibly lifts the curse of dimensionality. This general method is applied to the anharmonic vibrational zero-point and transition energy calculations of molecules using the second-order diagrammatic vibrational many-body Green's function (XVH2) theory with a harmonic-approximation reference. In this application, high-dimensional PES and Green's functions are both subjected to a low-rank decomposition. Evaluating the molecular integrals over a low-rank PES and Green's functions as sums of low-dimensional integrals using the Gauss–Hermite quadrature, this canonical-tensor-decomposition-based XVH2 (CT-XVH2) achieves an accuracy of 0.1 cm⁻¹ or better and nearly an order of magnitude speedup as compared with the original algorithm using force constants for water and formaldehyde.
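The payoff of the canonical low-rank format is that a high-dimensional quadrature collapses into products of one-dimensional quadratures. A minimal numpy sketch for a rank-1 three-dimensional "PES" (trapezoid weights on [0, 1] standing in for the Gauss–Hermite rule; everything here is illustrative):

```python
import numpy as np

# Quadrature nodes and trapezoid weights per dimension.
n = 20
x = np.linspace(0.0, 1.0, n)
w = np.full(n, 1.0 / (n - 1))
w[0] = w[-1] = 0.5 / (n - 1)

# A rank-1 ("canonical") surface: V(x, y, z) = f(x) g(y) h(z).
f, g, h = np.exp(-x), np.cos(x), x ** 2
V = np.einsum('i,j,k->ijk', f, g, h)          # full tensor: n**3 entries

# Full 3-D quadrature vs. a product of three 1-D quadratures.
full = np.einsum('ijk,i,j,k->', V, w, w, w)   # O(n**3) work
lowrank = (w @ f) * (w @ g) * (w @ h)         # O(3n) work
```

For a rank-R decomposition the integral becomes a sum of R such products, which is the "relatively short sum of products of low-dimensional integrals" in the abstract.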
Swertz, Morris A; De Brock, E O; Van Hijum, Sacha A F T; De Jong, Anne; Buist, Girbe; Baerends, Richard J S; Kok, Jan; Kuipers, Oscar P; Jansen, Ritsert C
2004-09-01
Genomic research laboratories need adequate infrastructure to support management of their data production and research workflow. But what makes infrastructure adequate? A lack of appropriate criteria makes any decision on buying or developing a system difficult. Here, we report on the decision process for the case of a molecular genetics group establishing a microarray laboratory. Five typical requirements for experimental genomics database systems were identified: (i) the ability to evolve with the fast-developing genomics field; (ii) a suitable data model to deal with local diversity; (iii) suitable storage of data files in the system; (iv) easy exchange with other software; and (v) low maintenance costs. The computer scientists and the researchers of the local microarray laboratory considered alternative solutions for these five requirements and chose the following options: (i) use of automatic code generation; (ii) a customized data model based on standards; (iii) storage of datasets as black boxes instead of decomposing them into database tables; (iv) loose linking to other programs for improved flexibility; and (v) a low-maintenance web-based user interface. Our team evaluated existing microarray databases and then decided to build a new system, Molecular Genetics Information System (MOLGENIS), implemented using code generation in a period of three months. This case can provide valuable insights and lessons to both software developers and a user community embarking on large-scale genomic projects. http://www.molgenis.nl
Laser-assisted chemical vapor deposition setup for fast synthesis of graphene patterns
NASA Astrophysics Data System (ADS)
Zhang, Chentao; Zhang, Jianhuan; Lin, Kun; Huang, Yuanqing
2017-05-01
An automatic setup based on the laser-assisted chemical vapor deposition method has been developed for the rapid synthesis of graphene patterns. The key components of this setup include a laser beam control and focusing unit, a laser spot monitoring unit, and a vacuum and flow control unit. A laser beam with precision control of laser power is focused on the surface of a nickel foil substrate by the laser beam control and focusing unit for localized heating. A rapid heating and cooling process at the localized region is induced by the relative movement between the focalized laser spot and the nickel foil substrate, which causes the decomposing of gaseous hydrocarbon and the out-diffusing of excess carbon atoms to form graphene patterns on the laser scanning path. All the fabrication parameters that affect the quality and number of graphene layers, such as laser power, laser spot size, laser scanning speed, pressure of vacuum chamber, and flow rates of gases, can be precisely controlled and monitored during the preparation of graphene patterns. A simulation of temperature distribution was carried out via the finite element method, providing a scientific guidance for the regulation of temperature distribution during experiments. A multi-layer graphene ribbon with few defects was synthesized to verify its performance of the rapid growth of high-quality graphene patterns. Furthermore, this setup has potential applications in other laser-based graphene synthesis and processing.
Ink-Jet Printer Forms Solar-Cell Contacts
NASA Technical Reports Server (NTRS)
Alexander, Paul, Jr.; Vest, R. W.; Binford, Don A.; Tweedell, Eric P.
1988-01-01
Contacts formed in controllable patterns with metal-based inks. System forms upper metal contact patterns on silicon photovoltaic cells. Uses metallo-organic ink that decomposes when heated, leaving behind a metallic, electrically conductive residue in the printed area.
Scalable Domain Decomposed Monte Carlo Particle Transport
NASA Astrophysics Data System (ADS)
O'Brien, Matthew Joseph
In this dissertation, we present the parallel algorithms necessary to run domain decomposed Monte Carlo particle transport on large numbers of processors (millions of processors). Previous algorithms were not scalable, and the parallel overhead became more computationally costly than the numerical simulation. The main algorithms we consider are:
• Domain decomposition of constructive solid geometry: enables extremely large calculations in which the background geometry is too large to fit in the memory of a single computational node.
• Load balancing: keeps the workload per processor as even as possible so the calculation runs efficiently.
• Global particle find: if particles are on the wrong processor, globally resolve their locations to the correct processor based on particle coordinate and background domain.
• Visualizing constructive solid geometry, sourcing particles, deciding that particle-streaming communication is completed, and spatial redecomposition.
These algorithms are some of the most important parallel algorithms required for domain decomposed Monte Carlo particle transport. We demonstrate that our previous algorithms were not scalable, prove that our new algorithms are scalable, and run some of the algorithms up to 2 million MPI processes on the Sequoia supercomputer.
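The global particle find reduces, per coordinate axis, to locating a particle in the table of domain boundaries and forwarding it to the owning rank. A one-axis sketch (hypothetical edges and rank map, not the dissertation's actual data structures):

```python
import bisect

def owning_rank(coord, edges, domain_to_rank):
    """Map a particle x-coordinate to the MPI rank owning that spatial domain.
    `edges` are the sorted 1-D domain boundaries; `domain_to_rank` maps each
    domain index to the rank assigned to it."""
    if not edges[0] <= coord < edges[-1]:
        raise ValueError("particle left the global domain")
    domain = bisect.bisect_right(edges, coord) - 1
    return domain_to_rank[domain]

edges = [0.0, 1.0, 2.0, 4.0]      # three spatial domains along x
domain_to_rank = [0, 1, 2]
rank = owning_rank(2.5, edges, domain_to_rank)
```

In a full transport code this lookup is done per axis of the decomposition, and mislocated particles are then communicated to the rank the lookup returns.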
Improvement of charge separation in TiO2 by its modification with different tungsten compounds
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tryba, B., E-mail: beata.tryba@zut.edu.pl; Tygielska, M.; Grzeskowiak, M.
2016-04-15
Highlights: • Ammonium m-tungstate doped into TiO2 greatly improved charge separation in TiO2. • Negative electrokinetic potential of TiO2 facilitates hole migration to its surface. • Fast migration of holes to TiO2 surfaces increased the yield of OH radical formation. • Adsorption of dyes on the photocatalyst increased their decomposition under visible light. - Abstract: Three different tungsten precursors were used for TiO2 modification: H2WO4, WO2, and ammonium m-tungstate. It was shown that modification of TiO2 with tungsten compounds enhanced its photocatalytic activity through the improvement of charge separation. This effect was obtained by coating TiO2 particles with the tungsten compound, which changed their surface electrokinetic potential from positive to negative. The most efficient tungsten compound, which caused enhanced separation of free carriers, was ammonium m-tungstate (AMT). Two dyes with different ionic character were used for photocatalytic decomposition. The cationic dye Methylene Blue was strongly adsorbed on the negatively charged surface of AMT-modified TiO2 and decomposed, although this photocatalyst was quickly deactivated, whereas the anionic dye acid red was better adsorbed on the less acidic TiO2 surface and was rapidly decomposed at almost the same rate over five successive cycles.
Decomposed direct matrix inversion for fast non-cartesian SENSE reconstructions.
Qian, Yongxian; Zhang, Zhenghui; Wang, Yi; Boada, Fernando E
2006-08-01
A new k-space direct matrix inversion (DMI) method is proposed here to accelerate non-Cartesian SENSE reconstructions. In this method a global k-space matrix equation is established on basic MRI principles, and the inverse of the global encoding matrix is found from a set of local matrix equations by taking advantage of the small extension of k-space coil maps. The DMI algorithm's efficiency is achieved by reloading the precalculated global inverse when the coil maps and trajectories remain unchanged, such as in dynamic studies. Phantom and human-subject experiments were performed on a 1.5T scanner with a standard four-channel phased-array cardiac coil. Interleaved spiral trajectories were used to collect fully sampled and undersampled 3D raw data. The equivalence of the global k-space matrix equation to its image-space version was verified via conjugate gradient (CG) iterative algorithms on 2x undersampled phantom and numerical-model data sets. When applied to the 2x undersampled phantom and human-subject raw data, the decomposed DMI method produced images with small errors (≤3.9%) relative to the reference images obtained from the fully sampled data, at a rate of 2 s per slice (excluding 4 min for precalculating the global inverse at an image size of 256 x 256). The DMI method may be useful for noise evaluations in parallel coil designs, dynamic MRI, and 3D sodium MRI with fixed coils and trajectories. Copyright 2006 Wiley-Liss, Inc.
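The efficiency argument, reloading a precalculated inverse while coil maps and trajectories stay fixed, can be sketched with a toy dense encoding matrix (random complex numbers standing in for the actual SENSE encoding; sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy overdetermined encoding matrix: 8 k-space samples, 6 image unknowns.
E = rng.standard_normal((8, 6)) + 1j * rng.standard_normal((8, 6))
E_pinv = np.linalg.pinv(E)           # precompute once (coils/trajectory fixed)

# In a dynamic series every frame reuses the same inverse: one
# matrix-vector product per frame instead of a fresh iterative solve.
x_true = rng.standard_normal(6)
recon = []
for frame in range(3):
    y = E @ x_true                   # noise-free toy acquisition
    recon.append(E_pinv @ y)
```

The paper's contribution is building that inverse cheaply from local k-space equations rather than inverting the full global matrix directly, but the amortization logic is the same.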
Integrated boiler, superheater, and decomposer for sulfuric acid decomposition
Moore, Robert [Edgewood, NM]; Pickard, Paul S [Albuquerque, NM]; Parma, Edward J, Jr.; Vernon, Milton E [Albuquerque, NM]; Gelbard, Fred [Albuquerque, NM]; Lenard, Roger X [Edgewood, NM]
2010-01-12
A method and apparatus, constructed of ceramics and other corrosion resistant materials, for decomposing sulfuric acid into sulfur dioxide, oxygen and water using an integrated boiler, superheater, and decomposer unit comprising a bayonet-type, dual-tube, counter-flow heat exchanger with a catalytic insert and a central baffle to increase recuperation efficiency.
Procedures for Decomposing a Redox Reaction into Half-Reactions
ERIC Educational Resources Information Center
Fishtik, Ilie; Berka, Ladislav H.
2005-01-01
A simple algorithm for a complete enumeration of the possible ways a redox reaction (RR) might be uniquely decomposed into half-reactions (HRs) using the response reactions (RERs) formalism is presented. A complete enumeration of the possible ways a RR may be decomposed into HRs is equivalent to a complete enumeration of stoichiometrically…
Optimal reconstruction of the states in qutrit systems
NASA Astrophysics Data System (ADS)
Yan, Fei; Yang, Ming; Cao, Zhuo-Liang
2010-10-01
Based on mutually unbiased measurements, an optimal tomographic scheme for the multiqutrit states is presented explicitly. Because the reconstruction process of states based on mutually unbiased states is free of information waste, we refer to our scheme as the optimal scheme. By optimal we mean that the number of the required conditional operations reaches the minimum in this tomographic scheme for the states of qutrit systems. Special attention will be paid to how those different mutually unbiased measurements are realized; that is, how to decompose each transformation that connects each mutually unbiased basis with the standard computational basis. It is found that all those transformations can be decomposed into several basic implementable single- and two-qutrit unitary operations. For the three-qutrit system, there exist five different mutually unbiased-bases structures with different entanglement properties, so we introduce the concept of physical complexity to minimize the number of nonlocal operations needed over the five different structures. This scheme is helpful for experimental scientists to realize the most economical reconstruction of quantum states in qutrit systems.
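Two of the four mutually unbiased bases in dimension 3 are easy to write down: the computational basis and the discrete-Fourier basis built from the cube root of unity. A quick numpy check of the defining property, that every cross-basis overlap squares to 1/3 (this sketch covers only this pair, not the paper's full tomographic scheme):

```python
import numpy as np

omega = np.exp(2j * np.pi / 3)        # primitive cube root of unity
computational = np.eye(3)             # standard qutrit basis, columns |0>,|1>,|2>

# Discrete-Fourier basis: columns are the Fourier-transformed basis states.
fourier = np.array([[omega ** (j * k) for k in range(3)]
                    for j in range(3)]) / np.sqrt(3)

# Unbiasedness: |<a|b>|^2 = 1/d = 1/3 for every pair across the two bases.
overlaps = np.abs(computational.conj().T @ fourier) ** 2
```

The transformation connecting the two bases is exactly the unitary `fourier`, which is the kind of basis-change operation the paper decomposes into elementary single- and two-qutrit gates.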
Simultaneous acquisition of differing image types
Demos, Stavros G
2012-10-09
A system in one embodiment includes an image forming device for forming an image from an area of interest containing different image components; an illumination device for illuminating the area of interest with light containing multiple components; at least one light source coupled to the illumination device, the at least one light source providing light to the illumination device containing different components, each component having distinct spectral characteristics and relative intensity; an image analyzer coupled to the image forming device, the image analyzer decomposing the image formed by the image forming device into multiple component parts based on type of imaging; and multiple image capture devices, each image capture device receiving one of the component parts of the image. A method in one embodiment includes receiving an image from an image forming device; decomposing the image formed by the image forming device into multiple component parts based on type of imaging; receiving the component parts of the image; and outputting image information based on the component parts of the image. Additional systems and methods are presented.
Autonomous Information Unit: Why Making Data Smart Can also Make Data Secured?
NASA Technical Reports Server (NTRS)
Chow, Edward T.
2006-01-01
In this paper, we introduce a new fine-grain distributed information protection mechanism which can self-protect, self-discover, self-organize, and self-manage. In our approach, we decompose data into smaller pieces and provide individualized protection. We also provide a policy control mechanism to allow 'smart' access control and context based re-assembly of the decomposed data. By combining smart policy with individually protected data, we are able to provide better protection of sensitive information and achieve more flexible access during emergency conditions. As a result, this new fine-grain protection mechanism can enable us to achieve better solutions for problems such as distributed information protection and identity theft.
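One concrete way to "decompose data into smaller pieces and provide individualized protection" is n-of-n XOR secret sharing, where every piece is needed for reassembly. This is a generic sketch of the decompose-and-reassemble idea, not the AIU mechanism itself:

```python
import os

def split(secret, n=3):
    """Decompose data into n random-looking pieces; all n are required
    to reconstruct (n-of-n XOR secret sharing)."""
    shares = [bytearray(os.urandom(len(secret))) for _ in range(n - 1)]
    last = bytearray(secret)
    for s in shares:                  # XOR the randomness into the last piece
        for i, b in enumerate(s):
            last[i] ^= b
    return [bytes(s) for s in shares] + [bytes(last)]

def reassemble(shares):
    """XOR all pieces together to recover the original data."""
    out = bytearray(len(shares[0]))
    for s in shares:
        for i, b in enumerate(s):
            out[i] ^= b
    return bytes(out)

pieces = split(b"sensitive record", 4)
```

A policy layer, as described in the paper, would then control which parties may hold or combine which pieces.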
Full waveform inversion using a decomposed single frequency component from a spectrogram
NASA Astrophysics Data System (ADS)
Ha, Jiho; Kim, Seongpil; Koo, Namhyung; Kim, Young-Ju; Woo, Nam-Sub; Han, Sang-Mok; Chung, Wookeen; Shin, Sungryul; Shin, Changsoo; Lee, Jaejoon
2018-06-01
Many full waveform inversion methods have been developed to construct velocity models of the subsurface, and various approaches have been presented to obtain inversion results with long-wavelength features even when the seismic data lack low-frequency components. In this study, a new full waveform inversion algorithm was proposed to recover a long-wavelength velocity model that reflects the inherent characteristics of each frequency component of seismic data, using a single frequency component decomposed from the spectrogram. We utilized the wavelet transform method to obtain the spectrogram, and the signal decomposed from the spectrogram was used as the transformed data. The Gauss-Newton method with the diagonal elements of an approximate Hessian matrix was used to update the model parameters at each iteration. Based on the results of time-frequency analysis of the spectrogram, numerical tests with several decomposed frequency components were performed using a modified SEG/EAGE salt dome (A-A‧) line to demonstrate the feasibility of the proposed inversion algorithm. This demonstrated that a reasonable inverted velocity model with long-wavelength structures can be obtained using a single frequency component. It was also confirmed that when strong noise occurs in part of the frequency band, it is feasible to obtain a long-wavelength velocity model from the noisy data using a frequency component that is less affected by the noise. Finally, it was confirmed that the results obtained from the spectrogram inversion can be used as an initial velocity model in conventional inversion methods.
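Extracting a single frequency component from a trace amounts to computing one Fourier coefficient, i.e. correlating the trace with a complex exponential. This is a stand-in for the paper's wavelet-transform spectrogram; the trace and frequencies below are illustrative:

```python
import numpy as np

def single_freq_component(trace, dt, freq):
    """One DFT coefficient of a sampled trace at frequency `freq` (Hz):
    correlate the trace with exp(-2*pi*i*f*t)."""
    t = np.arange(len(trace)) * dt
    return np.sum(trace * np.exp(-2j * np.pi * freq * t)) * dt

dt = 0.004                          # 4 ms sampling, 2 s window (500 samples)
t = np.arange(500) * dt
trace = np.sin(2 * np.pi * 5.0 * t) + 0.5 * np.sin(2 * np.pi * 30.0 * t)

c5 = single_freq_component(trace, dt, 5.0)    # energy present at 5 Hz
c12 = single_freq_component(trace, dt, 12.0)  # nothing at 12 Hz
```

In the inversion, such a decomposed monochromatic component (rather than the broadband trace) drives the misfit, which is why a band dominated by noise can simply be avoided.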
DOE Office of Scientific and Technical Information (OSTI.GOV)
Song, Shidong; Xu, Wu; Zheng, Jianming
Incomplete decomposition of Li2CO3 during the charge process is a critical barrier for rechargeable Li-O2 batteries. Here we report complete decomposition of Li2CO3 in Li-O2 batteries using an ultrafine iridium-decorated boron carbide (Ir/B4C) nanocomposite as the oxygen electrode. A systematic investigation of charging Li2CO3-preloaded Ir/B4C electrodes in an ether-based electrolyte demonstrates that the Ir/B4C electrode can decompose Li2CO3 with an efficiency close to 100% below 4.37 V. In contrast, bare B4C without the Ir electrocatalyst can decompose only 4.7% of the preloaded Li2CO3. The reaction mechanism of Li2CO3 decomposition in the presence of the Ir/B4C electrocatalyst has been further investigated. A Li-O2 battery using Ir/B4C as the oxygen electrode material shows greatly enhanced cycling stability compared with one using a bare B4C oxygen electrode. These results clearly demonstrate that Ir/B4C is an effective oxygen electrode material for completely decomposing Li2CO3 at relatively low charge voltages and is of significant importance in improving the cycle performance of aprotic Li-O2 batteries.
A comparison of two software architectural styles for space-based control systems
NASA Technical Reports Server (NTRS)
Dvorak, D.
2003-01-01
In the hardware/software design of control systems it is almost an article of faith to decompose a system into loosely coupled subsystems, with state variables encapsulated inside device and subsystem objects.
Walter M. Broadfoot; Edward H. Tyner
1939-01-01
Methods which are satisfactory for determining the base-exchange capacity of mineral soils cannot be applied indiscriminately for determining the exchange capacity of fresh or decomposed plant residues.
Estimation of Soil Moisture with L-band Multi-polarization Radar
NASA Technical Reports Server (NTRS)
Shi, J.; Chen, K. S.; Kim, Chung-Li Y.; Van Zyl, J. J.; Njoku, E.; Sun, G.; O'Neill, P.; Jackson, T.; Entekhabi, D.
2004-01-01
Through analyses of the model-simulated database, we developed a technique to estimate surface soil moisture under the HYDROS radar sensor configuration (L-band multi-polarization, 40° incidence). This technique includes two steps. First, it decomposes the total backscattering signals into two components: the surface scattering components (the bare-surface backscattering signals attenuated by the overlying vegetation layer) and the sum of the direct volume scattering components and surface-volume interaction components at different polarizations. On the model-simulated database, our decomposition technique works quite well in estimating the surface scattering components, with RMSEs of 0.12, 0.25, and 0.55 dB for VV, HH, and VH polarizations, respectively. Second, we use the decomposed surface backscattering signals to estimate the soil moisture and the combined surface roughness and vegetation attenuation correction factors with all three polarizations.
System for thermochemical hydrogen production
Werner, R.W.; Galloway, T.R.; Krikorian, O.H.
1981-05-22
Method and apparatus are described for joule boosting an SO3 decomposer, using electrical instead of thermal energy to heat the reactants of the high-temperature SO3 decomposition step of a thermochemical hydrogen production process driven by a tandem mirror reactor. Joule boosting the decomposer to a sufficiently high temperature from a lower-temperature heat source eliminates the need for expensive catalysts and reduces the temperature and consequent materials requirements for the reactor blanket. A particular decomposer design utilizes electrically heated silicon carbide rods, at a temperature of 1250 K, to decompose a cross flow of SO3 gas.
Cerebellar ataxia: abnormal control of interaction torques across multiple joints.
Bastian, A J; Martin, T A; Keating, J G; Thach, W T
1996-07-01
1. We studied seven subjects with cerebellar lesions and seven control subjects as they made reaching movements in the sagittal plane to a target directly in front of them. Reaches were made under three different conditions: 1) "slow-accurate," 2) "fast-accurate," and 3) "fast as possible." All subjects were videotaped moving in a sagittal plane with markers on the index finger, wrist, elbow, and shoulder. Marker positions were digitized and then used to calculate joint angles. For each of the shoulder, elbow and wrist joints, inverse dynamics equations based on a three-segment limb model were used to estimate the net torque (sum of components) and each of the component torques. The component torques consisted of the torque due to gravity, the dynamic interaction torques induced passively by the movement of the adjacent joint, and the torque produced by the muscles and passive tissue elements (sometimes called "residual" torque). 2. A kinematic analysis of the movement trajectory and the change in joint angles showed that the reaches of subjects with cerebellar lesions were abnormal compared with reaches of control subjects. In both the slow-accurate and fast-accurate conditions the cerebellar subjects made abnormally curved wrist paths; the curvature was greater in the slow-accurate condition. During the slow-accurate condition, cerebellar subjects showed target undershoot and tended to move one joint at a time (decomposition). During the fast-accurate reaches, the cerebellar subjects showed target overshoot. Additionally, in the fast-accurate condition, cerebellar subjects moved the joints at abnormal rates relative to one another, but the movements were less decomposed. Only three subjects were tested in the fast as possible condition; this condition was analyzed only to determine maximal reaching speeds of subjects with cerebellar lesions. Cerebellar subjects moved more slowly than controls in all three conditions. 3. 
A kinetic analysis of torques generated at each joint during the slow-accurate reaches and the fast-accurate reaches revealed that subjects with cerebellar lesions produced very different torque profiles compared with control subjects. In the slow-accurate condition, the cerebellar subjects produced abnormal elbow muscle torques that prevented the normal elbow extension early in the reach. In the fast-accurate condition, the cerebellar subjects produced inappropriate levels of shoulder muscle torque and also produced elbow muscle torques that did not vary appropriately with the dynamic interaction torques that occurred at the elbow. Lack of appropriate muscle torque resulted in excessive contributions of the dynamic interaction torque during the fast-accurate reaches. 4. The inability to produce muscle torques that predict, accommodate, and compensate for the dynamic interaction torques appears to be an important cause of the classic kinematic deficits shown by cerebellar subjects during attempted reaching. These kinematic deficits include incoordination of the shoulder and the elbow joints, a curved trajectory, and overshoot. In the fast-accurate condition, cerebellar subjects often made inappropriate muscle torques relative to the dynamic interaction torques. Because of this, interaction torques often determined the pattern of incoordination of the elbow and shoulder that produced the curved trajectory and target overshoot. In the slow-accurate condition, we reason that the cerebellar subjects may use a decomposition strategy so as to simplify the movement and not have to control both joints simultaneously. From these results, we suggest that a major role of the cerebellum is in generating muscle torques at a joint that will predict the interaction torques being generated by other moving joints and compensate for them as they occur.
ERIC Educational Resources Information Center
Cieslicka, Anna B.
2013-01-01
The purpose of this study was to explore possible cerebral asymmetries in the processing of decomposable and nondecomposable idioms by fluent nonnative speakers of English. In the study, native language (Polish) and foreign language (English) decomposable and nondecomposable idioms were embedded in ambiguous (neutral) and unambiguous (biasing…
Edwards, J C; Quinn, P J
1982-09-01
The unsaturated fatty acyl residues of egg yolk lecithin are selectively removed when bilayer dispersions of the lipid are exposed to decomposing peroxychromate at pH 7.6 or pH 9.0. Mannitol (50 mM or 100 mM) partially prevents the oxidation of the phospholipid due to decomposing peroxychromate at pH 7.6, and the amount of lipid lost is inversely proportional to the concentration of mannitol. N,N-Dimethyl-p-nitrosoaniline, mixed with the lipid in a molar ratio of 1.3:1, completely prevents the oxidation of lipid due to decomposing peroxychromate at pH 9.0, but some linoleic acid is lost if the incubation is done at pH 7.6. If the concentration of this quench reagent is reduced tenfold, oxidation of linoleic acid by decomposing peroxychromate at pH 9.0 is observed. Hydrogen peroxide is capable of oxidizing the unsaturated fatty acids of lecithin dispersions. Catalase or boiled catalase (2 mg/ml) protects the lipid from oxidation due to decomposing peroxychromate at pH 7.6 to approximately the same extent, but their protective effect is believed to be due to the non-specific removal of ·OH. It is concluded that ·OH is the species responsible for the lipid oxidation caused by decomposing peroxychromate. This is consistent with the observed bleaching of N,N-dimethyl-p-nitrosoaniline and the formation of a characteristic paramagnetic ·OH adduct of the spin trap, 5,5-dimethylpyrroline-1-oxide.
NASA Astrophysics Data System (ADS)
Sekiguchi, K.; Shirakawa, H.; Yamamoto, Y.; Araidai, M.; Kangawa, Y.; Kakimoto, K.; Shiraishi, K.
2017-06-01
We analyzed the decomposition mechanisms of trimethylgallium (TMG), the gallium source for GaN fabrication, based on first-principles calculations and thermodynamic analysis. We considered two conditions: one is a total pressure of 1 atm, and the other is metal-organic vapor phase epitaxy (MOVPE) growth of GaN. Our calculated results show that H2 is indispensable for TMG decomposition under both conditions. In GaN MOVPE, TMG with H2 spontaneously decomposes into Ga(CH3), and Ga(CH3) decomposes into Ga atom gas when the temperature is higher than 440 K. From these calculations, we confirmed that TMG indeed becomes Ga atom gas near the GaN substrate surface.
Fluid-mechanic/thermal interaction of a molten material and a decomposing solid
DOE Office of Scientific and Technical Information (OSTI.GOV)
Larson, D.W.; Lee, D.O.
1976-12-01
Bench-scale experiments of a molten material in contact with a decomposing solid were conducted to gain insight into the expected interaction of a hot, molten reactor core with a concrete base. The results indicate that either of two regimes can occur: violent agitation and splattering of the melt or a very quiescent settling of the melt when placed in contact with the solid. The two regimes appear to be governed by the interface temperature condition. A conduction heat transfer model predicts the critical interface temperature with reasonable accuracy. In addition, a film thermal resistance model correlates well with the data in predicting the time for a solid skin to form on the molten material.
Baskaran, Suresh; Graff, Gordon L.; Song, Lin
1998-01-01
The invention provides a method for synthesizing a titanium oxide-containing film comprising the following steps: (a) preparing an aqueous solution of a titanium chelate with a titanium molarity in the range of 0.01 M to 0.6 M, (b) immersing a substrate in the prepared solution, and (c) decomposing the titanium chelate to deposit a film on the substrate. The titanium chelate may be decomposed by acid, base, temperature, or other means. A preferred method provides for the deposition of adherent titanium oxide films from C2 to C5 hydroxy carboxylic acids. In another aspect, the invention is a novel article of manufacture having a titanium coating which protects the substrate against ultraviolet damage. In another aspect, the invention provides novel semipermeable gas separation membranes and a method for producing them.
Tsuji, Masaharu; Yokota, Yuji; Kudoh, Sakae; Hoshino, Tamotsu
2015-06-01
Milk fat curdle is difficult to remove from sewage. In an attempt to identify an appropriate agent for bioremediation of milk fat curdle, Mrakia strains were collected from the Skarvsnes ice-free area of Antarctica. A total of 27 strains were isolated and tested for their ability to decompose milk fat at temperatures ranging from 4°C to 15°C. All strains could decompose milk fat at 4°C and 10°C. Phylogenetic analysis and comparison of the decomposition ability of milk fat (DAMF) suggested that DAMF may be a useful predictor of phylogenetic grouping based on ITS sequences. Copyright © 2015 Elsevier Inc. All rights reserved.
Baker, Nameer R; Khalili, Banafshe; Martiny, Jennifer B H; Allison, Steven D
2018-06-01
Microbial decomposers mediate the return of CO2 to the atmosphere by producing extracellular enzymes to degrade complex plant polymers, making plant carbon available for metabolism. Determining if and how these decomposer communities are constrained in their ability to degrade plant litter is necessary for predicting how carbon cycling will be affected by future climate change. We analyzed mass loss, litter chemistry, microbial biomass, extracellular enzyme activities, and enzyme temperature sensitivities in grassland litter transplanted along a Mediterranean climate gradient in southern California. Microbial community composition was manipulated by caging litter within bags made of nylon membrane that prevent microbial immigration. To test whether grassland microbes were constrained by climate history, half of the bags were inoculated with local microbial communities native to each gradient site. We determined that temperature and precipitation likely interact to limit microbial decomposition in the extreme sites along our gradient. Despite their unique climate history, grassland microbial communities were not restricted in their ability to decompose litter under different climate conditions across the gradient, although microbial communities across our gradient may be restricted in their ability to degrade different types of litter. We did find some evidence that local microbial communities were optimized based on climate, but local microbial taxa that proliferated after inoculation into litterbags did not enhance litter decomposition. Our results suggest that microbial community composition does not constrain C-cycling rates under climate change in our system, but optimization to particular resource environments may act as more general constraints on microbial communities. © 2018 by the Ecological Society of America.
NASA Astrophysics Data System (ADS)
Cheng, Jiqi; Lu, Jian-Yu
2002-05-01
Angular spectrum is one of the most powerful tools for field calculation. It is based on linear system theory and the Fourier transform and is used for the calculation of propagating sound fields at different distances. In this report, the generalization and interpretation of the angular spectrum and its intrinsic relationship with limited diffraction beams are studied. With an angular spectrum, the field at the surface of a transducer is decomposed into limited diffraction beams. For an array transducer, a linear relationship between the quantized fields at the surface of elements of the array and the propagating field at any point in space can be established. For an annular array, the field is decomposed into limited diffraction Bessel beams [P. D. Fox and S. Holm, IEEE Trans. Ultrason. Ferroelectr. Freq. Control 49, 85-93 (2002)], while for a two-dimensional (2-D) array the field is decomposed into limited diffraction array beams [J-y. Lu and J. Cheng, J. Acoust. Soc. Am. 109, 2397-2398 (2001)]. The angular spectrum reveals the intrinsic link between these decompositions. [Work supported in part by Grant 5RO1 HL60301 from NIH.]
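The propagation step that the angular spectrum formalism describes can be sketched numerically. The following is a generic 1-D illustration of angular spectrum propagation, not the limited-diffraction-beam decomposition of the report; the grid spacing, wavelength, and distance are arbitrary placeholder values:

```python
import numpy as np

def angular_spectrum_propagate(u0, dx, wavelength, z):
    """Propagate a 1-D complex field u0 a distance z via its angular spectrum."""
    n = len(u0)
    fx = np.fft.fftfreq(n, dx)                           # spatial frequencies
    k = 2 * np.pi / wavelength
    kz = np.sqrt((k**2 - (2 * np.pi * fx)**2).astype(complex))
    # each plane-wave component picks up a phase exp(i*kz*z)
    return np.fft.ifft(np.fft.fft(u0) * np.exp(1j * kz * z))

# a Gaussian profile, sampled so that all components are propagating waves
x = np.arange(256) * 1e-6
u0 = np.exp(-((x - x.mean()) ** 2) / (2 * (10e-6) ** 2)).astype(complex)
u1 = angular_spectrum_propagate(u0, 1e-6, 0.5e-6, 50e-6)
u_back = angular_spectrum_propagate(u1, 1e-6, 0.5e-6, -50e-6)  # inverse propagation
```

Because every spectral component here is propagating (|kz| real), the transfer function is unitary: energy is conserved and back-propagation recovers the original field.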
NASA Astrophysics Data System (ADS)
Juniati, D.; Khotimah, C.; Wardani, D. E. K.; Budayasa, K.
2018-01-01
Heart abnormalities can be detected from heart sound. A heart sound can be heard directly with a stethoscope or indirectly by a phonocardiograph, a machine for recording heart sounds. This paper presents the implementation of fractal dimension theory to classify phonocardiograms into a normal heart sound, a murmur, or an extrasystole. The main algorithm used to calculate the fractal dimension was Higuchi's algorithm. Classification of phonocardiograms involved two steps: feature extraction and classification. For feature extraction, we used the Discrete Wavelet Transform to decompose the heart sound signal into several sub-bands depending on the selected level. After the decomposition process, the signal was processed using the Fast Fourier Transform (FFT) to determine the spectral frequency. The fractal dimension of the FFT output was calculated using Higuchi's algorithm. The classification of the fractal dimensions of all phonocardiograms was done with KNN and fuzzy c-means clustering methods. Based on the research results, the best accuracy obtained was 86.17%, using feature extraction by DWT decomposition at level 3 with kmax = 50, 5-fold cross validation, and k = 5 neighbors in the KNN algorithm. Meanwhile, for fuzzy c-means clustering, the accuracy was 78.56%.
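The core of the feature-extraction chain above is Higuchi's fractal dimension estimate. A minimal NumPy sketch of that algorithm follows (the DWT/FFT stages and the KNN/fuzzy c-means classifiers are omitted); for a straight-line signal the estimate should be close to 1:

```python
import numpy as np

def higuchi_fd(x, kmax):
    """Estimate the fractal dimension of a 1-D signal by Higuchi's algorithm."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    log_lk, log_inv_k = [], []
    for k in range(1, kmax + 1):
        lengths = []
        for m in range(k):                      # k subsampled series, offsets m
            idx = np.arange(m, n, k)
            if len(idx) < 2:
                continue
            # normalized curve length of the subsampled series starting at m
            lm = np.abs(np.diff(x[idx])).sum() * (n - 1) / ((len(idx) - 1) * k * k)
            lengths.append(lm)
        log_lk.append(np.log(np.mean(lengths)))
        log_inv_k.append(np.log(1.0 / k))
    slope, _ = np.polyfit(log_inv_k, log_lk, 1)  # FD is the slope of log L(k) vs log(1/k)
    return slope

fd_line = higuchi_fd(np.arange(1000.0), kmax=8)  # a line has FD ~ 1
```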
Chen, He; Du, Wei; Li, Ning; Chen, Gong; Zheng, Xiaoying
2013-06-01
Traffic crashes have become the fifth leading cause of the burden of diseases and injuries in China. More importantly, they may further aggravate the degree of health inequality among the Chinese population, which is still under-investigated. Based on nationally representative data, we calculated the concentration index (CI) to measure the socioeconomic inequality in traffic-related disability (TRD), and decomposed the CI into potential sources of the inequality. Results show that more than 1.5 million Chinese adults were disabled by traffic crashes and that adults with financial disadvantage bear a disproportionately heavier burden of TRD. Strategies for reducing income inequality and protecting the safety of poor road users are therefore of great importance. Residence appears to counteract the socioeconomic inequality in TRD; however, this does not necessarily support an optimistic conclusion. In addition to the worrying income gap between rural and urban areas, other possible mechanisms, e.g. the low level of post-crash medical resources in rural areas, need further study. China is one of the developing countries undergoing fast motorization, and our findings could provide other countries in similar contexts with some insights about how to maintain socioeconomic equality in road safety. Copyright © 2013 Elsevier Ltd. All rights reserved.
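The concentration index used in this study can be computed from individual-level data as twice the covariance between the health variable and the fractional income rank, divided by the mean of the health variable. The sketch below uses made-up data, and the further decomposition of the CI into contributing factors (Wagstaff-style regression-based methods) is not shown:

```python
import numpy as np

def concentration_index(health, income):
    """CI = 2 * cov(health, fractional income rank) / mean(health)."""
    order = np.argsort(income)
    h = np.asarray(health, dtype=float)[order]
    n = len(h)
    rank = (np.arange(1, n + 1) - 0.5) / n     # fractional rank by income
    return 2.0 * np.cov(h, rank, bias=True)[0, 1] / h.mean()

income = np.arange(1, 11)
ci_equal = concentration_index(np.ones(10), income)          # equal burden -> ~0
ci_rich = concentration_index(income.astype(float), income)  # burden rises with income -> > 0
```

A negative CI indicates the burden is concentrated among the poor, which is the pattern the abstract reports for traffic-related disability.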
A hardware implementation of the discrete Pascal transform for image processing
NASA Astrophysics Data System (ADS)
Goodman, Thomas J.; Aburdene, Maurice F.
2006-02-01
The discrete Pascal transform is a polynomial transform with applications in pattern recognition, digital filtering, and digital image processing. It already has been shown that the Pascal transform matrix can be decomposed into a product of binary matrices. Such a factorization leads to a fast and efficient hardware implementation without the use of multipliers, which consume large amounts of hardware. We recently developed a field-programmable gate array (FPGA) implementation to compute the Pascal transform. Our goal was to demonstrate the computational efficiency of the transform while keeping hardware requirements at a minimum. Images are uploaded into memory from a remote computer prior to processing, and the transform coefficients can be offloaded from the FPGA board for analysis. Design techniques like as-soon-as-possible scheduling and adder sharing allowed us to develop a fast and efficient system. An eight-point, one-dimensional transform completes in 13 clock cycles and requires only four adders. An 8x8 two-dimensional transform completes in 240 cycles and requires only a top-level controller in addition to the one-dimensional transform hardware. Finally, through minor modifications to the controller, the transform operations can be pipelined to achieve 100% utilization of the four adders, allowing one eight-point transform to complete every seven clock cycles.
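The discrete Pascal transform itself is a signed binomial-coefficient matrix and is an involution, i.e. applying it twice returns the input; this structure is what makes a multiplier-free binary-matrix factorization attractive. The sketch below only verifies the involution property; the exact binary factorization used in the FPGA design is not reproduced here:

```python
import numpy as np
from math import comb

def pascal_matrix(n):
    """Discrete Pascal transform matrix: P[i, j] = (-1)^j * C(i, j) for j <= i."""
    return np.array([[(-1) ** j * comb(i, j) if j <= i else 0
                      for j in range(n)] for i in range(n)], dtype=float)

P = pascal_matrix(8)
x = np.arange(8, dtype=float)
y = P @ x           # forward eight-point transform
x_back = P @ y      # the transform is its own inverse
```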
Dynamics of multiple elements in fast decomposing vegetable residues.
Cao, Chun; Liu, Si-Qi; Ma, Zhen-Bang; Lin, Yun; Su, Qiong; Chen, Huan; Wang, Jun-Jian
2018-03-01
Litter decomposition regulates the cycling of nutrients and toxicants but is poorly studied in farmlands. To understand the unavoidable in-situ decomposition process, we quantified the dynamics of C, H, N, As, Ca, Cd, Cr, Cu, Fe, Hg, K, Mg, Mn, Na, Ni, Pb, and Zn during a 180-d decomposition study of leafy lettuce (Lactuca sativa var. longifolia) and rape (Brassica chinensis) residues in a wastewater-irrigated farmland in northwestern China. In contrast to most studied natural ecosystems, the managed vegetable farmland had a much faster litter decomposition rate (half-life of 18-60 d) and, interestingly, faster decomposition of roots relative to leaves for both vegetables. Faster root decomposition can be explained by the initial biochemical composition (more O-alkyl C and less alkyl and aromatic C) but not by C/N stoichiometry. Multi-element dynamics varied greatly, with C, H, N, K, and Na being highly released (remaining proportion < 20%), Ca, Cd, Cr, Mg, Ni, and Zn released, and As, Cu, Fe, Hg, Mn, and Pb possibly accumulated. Although vegetable residues serve as temporary sinks of some metal(loid)s, their fast decomposition, particularly for the O-alkyl-C-rich leafy-lettuce roots, suggests that toxic metal(loid)s can be released from residues, which therefore become secondary pollution sources. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Wang, Chao; Yu, Rencheng; Zhou, Mingjiang
2011-05-01
From 2007 to 2009, large-scale blooms of green algae (the so-called "green tides") occurred every summer in the Yellow Sea, China. In June 2008, huge amounts of floating green algae accumulated along the coast of Qingdao and led to mass mortality of cultured abalone and sea cucumber. However, the mechanism for the mass mortality of cultured animals remains undetermined. This study examined the toxic effects of Ulva (Enteromorpha) prolifera, the causative species of green tides in the Yellow Sea during the last three years. The acute toxicity of fresh culture medium and decomposing algal effluent of U. prolifera to the cultured abalone Haliotis discus hannai was tested. It was found that both fresh culture medium and decomposing algal effluent had toxic effects on abalone, and decomposing algal effluent was more toxic than fresh culture medium. The acute toxicity of decomposing algal effluent could be attributed to the ammonia and sulfide present in the effluent, as well as the hypoxia caused by the decomposition process.
Plant–herbivore–decomposer stoichiometric mismatches and nutrient cycling in ecosystems
Cherif, Mehdi; Loreau, Michel
2013-01-01
Plant stoichiometry is thought to have a major influence on how herbivores affect nutrient availability in ecosystems. Most conceptual models predict that plants with high nutrient contents increase nutrient excretion by herbivores, in turn raising nutrient availability. To test this hypothesis, we built a stoichiometrically explicit model that includes a simple but thorough description of the processes of herbivory and decomposition. Our results challenge traditional views of herbivore impacts on nutrient availability in many ways. They show that the relationship between plant nutrient content and the impact of herbivores predicted by conceptual models holds only at high plant nutrient contents. At low plant nutrient contents, the impact of herbivores is mediated by the mineralization/immobilization of nutrients by decomposers and by the type of resource limiting the growth of decomposers. Both parameters are functions of the mismatch between plant and decomposer stoichiometries. Our work provides new predictions about the impacts of herbivores on ecosystem fertility that depend on critical interactions between plant, herbivore and decomposer stoichiometries in ecosystems. PMID:23303537
Zhang, Xiaoxing; Yu, Lei; Tie, Jing; Dong, Xingchen
2014-01-01
Analysis of SF6 decomposition gases is an efficient diagnostic approach for detecting partial discharge in gas-insulated switchgear (GIS) and assessing the operating state of power equipment. This paper applied an Au-doped TiO2 nanotube array sensor (Au-TiO2 NTAs) to detect SF6 decomposition components. The electrochemical constant-potential method was adopted in the Au-TiO2 NTAs' fabrication, and a series of experiments was conducted to test the characteristic SF6 decomposition gases for a thorough investigation of sensing performance. The sensing characteristic curves of intrinsic and Au-doped TiO2 NTAs were compared to study the mechanism of the gas sensing response. The results indicated that the doped Au could change the gas sensing selectivity of the TiO2 nanotube arrays toward SF6 decomposition components, as well as reduce the working temperature of the TiO2 NTAs. PMID:25330053
An experimental study of postmortem decomposition of methomyl in blood.
Kawakami, Yuka; Fuke, Chiaki; Fukasawa, Maki; Ninomiya, Kenji; Ihama, Yoko; Miyazaki, Tetsuji
2017-03-01
Methomyl (S-methyl-1-N-[(methylcarbamoyl)oxy]thioacetimidate) is a carbamate pesticide. It has been noted that in some cases of methomyl poisoning, methomyl is either not detected or detected only in low concentrations in the blood of the victims. However, in such cases, methomyl is detected at higher concentrations in the vitreous humor than in the blood. This indicates that methomyl in the blood is possibly decomposed after death. However, the reasons for this phenomenon have been unclear. We have previously reported that methomyl is decomposed to dimethyl disulfide (DMDS) in the livers and kidneys of pigs but not in their blood. In addition, in the field of forensic toxicology, it is known that some compounds are decomposed or produced by internal bacteria in biological samples after death. This indicates that there is a possibility that methomyl in blood may be decomposed by bacteria after death. The aim of this study was therefore to investigate whether methomyl in blood is decomposed by bacteria isolated from human stool. Our findings demonstrated that methomyl was decomposed in human stool homogenates, resulting in the generation of DMDS. In addition, it was observed that three bacterial species isolated from the stool homogenates, Bacillus cereus, Pseudomonas aeruginosa, and Bacillus sp., showed methomyl-decomposing activity. The results therefore indicated that one reason for the difficulty in detecting methomyl in postmortem blood from methomyl-poisoning victims is the decomposition of methomyl by internal bacteria such as B. cereus, P. aeruginosa, and Bacillus sp. Copyright © 2017 Elsevier B.V. All rights reserved.
Vertebrate Decomposition Is Accelerated by Soil Microbes
Lauber, Christian L.; Metcalf, Jessica L.; Keepers, Kyle; Ackermann, Gail; Carter, David O.
2014-01-01
Carrion decomposition is an ecologically important natural phenomenon influenced by a complex set of factors, including temperature, moisture, and the activity of microorganisms, invertebrates, and scavengers. The role of soil microbes as decomposers in this process is essential but not well understood and represents a knowledge gap in carrion ecology. To better define the role and sources of microbes in carrion decomposition, lab-reared mice were decomposed on either (i) soil with an intact microbial community or (ii) soil that was sterilized. We characterized the microbial community (16S rRNA gene for bacteria and archaea, and the 18S rRNA gene for fungi and microbial eukaryotes) for three body sites along with the underlying soil (i.e., gravesoils) at time intervals coinciding with visible changes in carrion morphology. Our results indicate that mice placed on soil with intact microbial communities reach advanced stages of decomposition 2 to 3 times faster than those placed on sterile soil. Microbial communities associated with skin and gravesoils of carrion in stages of active and advanced decay were significantly different between soil types (sterile versus untreated), suggesting that substrates on which carrion decompose may partially determine the microbial decomposer community. However, the source of the decomposer community (soil- versus carcass-associated microbes) was not clear in our data set, suggesting that greater sequencing depth needs to be employed to identify the origin of the decomposer communities in carrion decomposition. Overall, our data show that soil microbial communities have a significant impact on the rate at which carrion decomposes and have important implications for understanding carrion ecology. PMID:24907317
NASA Astrophysics Data System (ADS)
Spirito, Florencia; Yahdjian, Laura; Tognetti, Pedro M.; Chaneton, Enrique J.
2014-01-01
Old fields often become dominated by exotic plants establishing persistent community states. Ecosystem functioning may differ widely between such novel communities and the native-dominated counterparts. We evaluated soil ecosystem attributes in native and exotic (synthetic) grass assemblages established on a newly abandoned field, and in remnants of native grassland in the Inland Pampa, Argentina. We asked whether exotic species alter soil functioning through the quality of the litter they shed or by changing the decomposition environment. Litter decomposition of the exotic dominant Festuca arundinacea in exotic assemblages was faster than that of the native dominant Paspalum quadrifarium in native assemblages and remnant grasslands. Decomposition of a standard litter (Triticum aestivum) was also faster in exotic assemblages than in native assemblages and remnant grasslands. In a common garden, F. arundinacea showed higher decay rates than P. quadrifarium, which reflected the higher N content and lower C:N of the exotic grass litter. Soil respiration rates were higher in the exotic than in the native assemblages and remnant grasslands. Yet there were no significant differences in soil N availability or net N mineralization between exotic and native assemblages. Our results suggest that exotic grass dominance affected ecosystem function by producing a more decomposable leaf litter and by increasing soil decomposer activity. These changes might contribute to the extended dominance of fast-growing exotic grasses during old-field succession. Further, increased organic matter turnover under novel, exotic communities could reduce the carbon storage capacity of the system in the long term.
Amorphous Silica Based Nanomedicine with Safe Carrier Excretion and Enhanced Drug Efficacy
NASA Astrophysics Data System (ADS)
Zhang, Silu
With the recent development of nanoscience and nanotechnology, great effort has been devoted to nanomedicine development. Among various nanomaterials, the silica nanoparticle (NP) is generally accepted as non-toxic and can provide a versatile platform for drug loading. In addition, the surface of the silica NP is hydrophilic, which is favorable for cellular uptake. It is therefore considered one of the most promising candidates to serve as a carrier for drugs. The present thesis mainly focuses on the design of silica-based nanocarrier-drug systems, aiming at achieving safe nanocarrier excretion from the biological system and enhanced drug efficacy, two of the most important issues in nanomedicine development. To address the safe carrier excretion issue, we have developed a special type of self-decomposable SiO2-drug composite NP. By creating a radial concentration gradient of drug in the NP, drug release occurs simultaneously with silica carrier decomposition. This unique characteristic differs from the conventional dense SiO2-drug NP, in which drug is uniformly distributed and can hardly escape the carrier. We found that the controllable release of the drug was primarily determined by diffusion, driven by the radial drug concentration gradient in the NP. Escape of the drug molecules then triggered the silica carrier decomposition, which started from the center of the NP and eventually led to its complete fragmentation. The small size of the final carrier fragments enabled their easy excretion via the renal system. Apart from the feature of safe carrier excretion, we also found that the controlled release of drugs contributes significantly to drug efficacy enhancement. By loading the anticancer drug doxorubicin (Dox) into decomposable SiO2-methylene blue (MB) NPs, we achieved a self-decomposable SiO2(MB)-Dox nanomedicine.
The gradual escape of drug molecules from the NPs, together with their optically switched cytosolic release, led to not only a high but also a stable drug concentration in the cytosol over a sustained period. This resulted in enhanced drug efficacy, which is especially manifested in multidrug-resistant (MDR) cancer cells, due to the fact that the NP-carried drug can efficiently bypass efflux mechanisms and increase drug availability. Together with its features of spontaneous carrier decomposition and safe excretion, the high drug efficacy of this type of nanomedicine highlights its potential for low-dose anticancer drug treatment with reduced adverse effects on the biological system, holding great promise for clinical translation. The enhanced drug efficacy obtained by employing the self-decomposable silica nanocarrier is also demonstrated in photodynamic therapy (PDT). The loose and fragmentable features of the self-decomposable SiO2-photosensitizer (PS) NPs promoted the out-diffusion of the generated ROS, which resulted in a higher efficacy than that of dense SiO2-PS NPs. On the other hand, we also explored another nanocarrier configuration of Au nanorod-decorated SiO2 NPs, with PS drug embedded in a dense SiO2 matrix. A different mechanism of drug efficacy enhancement was presented, as the surface plasmon resonance of the Au enhanced ROS production. Although the drug efficacy of such SiO2(PS)-Au NPs was similar to that of self-decomposable SiO2-PS NPs, their potential for clinical applications was limited without the feature of safe carrier excretion. In summary, the self-decomposable SiO2-based NP developed here is a highly promising system to serve as a safe and effective carrier for drugs. Together with the known biocompatibility of silica, the controllable drug release and simultaneous carrier decomposition achieved in the self-decomposable SiO2-drug NPs make them ideal for a wide range of therapeutic applications.
Map-invariant spectral analysis for the identification of DNA periodicities
2012-01-01
Many signal processing based methods for finding hidden periodicities in DNA sequences have primarily focused on assigning numerical values to the symbolic DNA sequence and then applying spectral analysis tools such as the short-time discrete Fourier transform (ST-DFT) to locate these repeats. The key results pertaining to this approach are, however, obtained using a very specific symbolic-to-numerical map, namely the so-called Voss representation. An important research problem is therefore to quantify the sensitivity of these results to the choice of the symbolic-to-numerical map. In this article, a novel algebraic approach to the periodicity detection problem is presented and provides a natural framework for studying the role of the symbolic-to-numerical map in finding these repeats. More specifically, we derive a new matrix-based expression of the DNA spectrum that comprises most of the widely used mappings in the literature as special cases, show that the DNA spectrum is in fact invariant under all these mappings, and give a necessary and sufficient condition for the invariance of the DNA spectrum to the symbolic-to-numerical map. Furthermore, the new algebraic framework decomposes the periodicity detection problem into several fundamental building blocks that are totally independent of each other. Sophisticated digital filters and/or alternate fast data transforms such as the discrete cosine and sine transforms can therefore always be incorporated in the periodicity detection scheme regardless of the choice of the symbolic-to-numerical map. Although the newly proposed framework is matrix based, identification of these periodicities can be achieved at a low computational cost. PMID:23067324
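The baseline approach that the article generalizes — Voss indicator sequences plus a DFT — can be sketched as follows. Each base gets a binary indicator sequence, and the DNA spectrum is the sum of the squared DFT magnitudes; a period-3 repeat then shows up as a spectral peak at bin N/3 (the toy sequence below is synthetic):

```python
import numpy as np

def voss_spectrum(seq):
    """DNA spectrum from the Voss representation: sum of |DFT|^2 over the
    four binary indicator sequences (one per base)."""
    n = len(seq)
    s = np.zeros(n)
    for base in "ACGT":
        u = np.array([1.0 if c == base else 0.0 for c in seq])
        s += np.abs(np.fft.fft(u)) ** 2
    return s

seq = "ACG" * 60                       # exact period-3 repeat, N = 180
spec = voss_spectrum(seq)
peak_k = int(np.argmax(spec[1 : len(seq) // 2])) + 1  # skip the DC bin
# the peak lands at N/3 = 60, revealing the period-3 structure
```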
Li, Zheng-Zhou; Chen, Jing; Hou, Qian; Fu, Hong-Xia; Dai, Zhen; Jin, Gang; Li, Ru-Zhang; Liu, Chang-Ju
2014-01-01
It is difficult for structural over-complete dictionaries such as the Gabor function and discriminative over-complete dictionary, which are learned offline and classified manually, to represent natural images with the goal of ideal sparseness and to enhance the difference between background clutter and target signals. This paper proposes an infrared dim target detection approach based on sparse representation on a discriminative over-complete dictionary. An adaptive morphological over-complete dictionary is trained and constructed online according to the content of infrared image by K-singular value decomposition (K-SVD) algorithm. Then the adaptive morphological over-complete dictionary is divided automatically into a target over-complete dictionary describing target signals, and a background over-complete dictionary embedding background by the criteria that the atoms in the target over-complete dictionary could be decomposed more sparsely based on a Gaussian over-complete dictionary than the one in the background over-complete dictionary. This discriminative over-complete dictionary can not only capture significant features of background clutter and dim targets better than a structural over-complete dictionary, but also strengthens the sparse feature difference between background and target more efficiently than a discriminative over-complete dictionary learned offline and classified manually. The target and background clutter can be sparsely decomposed over their corresponding over-complete dictionaries, yet couldn't be sparsely decomposed based on their opposite over-complete dictionary, so their residuals after reconstruction by the prescribed number of target and background atoms differ very visibly. Some experiments are included and the results show that this proposed approach could not only improve the sparsity more efficiently, but also enhance the performance of small target detection more effectively. PMID:24871988
High-Fidelity Single-Shot Toffoli Gate via Quantum Control.
Zahedinejad, Ehsan; Ghosh, Joydip; Sanders, Barry C
2015-05-22
A single-shot Toffoli, or controlled-controlled-not, gate is desirable for classical and quantum information processing. The Toffoli gate alone is universal for reversible computing and, accompanied by the Hadamard gate, forms a universal gate set for quantum computing. The Toffoli gate is also a key ingredient for (nontopological) quantum error correction. Currently, Toffoli gates are achieved by decomposing into sequentially implemented single- and two-qubit gates, which require much longer times and yield lower overall fidelities compared to a single-shot implementation. We develop a quantum-control procedure to construct a single-shot Toffoli gate for three nearest-neighbor-coupled superconducting transmon systems such that the fidelity is 99.9% and is as fast as an entangling two-qubit gate under the same realistic conditions. The gate is achieved by a nongreedy quantum control procedure using our enhanced version of the differential evolution algorithm.
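For reference, the single-shot gate targeted here implements the same unitary as the textbook 8×8 Toffoli matrix, which flips the target qubit only when both controls are |1⟩. A quick NumPy check of its defining properties:

```python
import numpy as np

# Toffoli (CCNOT) in the computational basis |c1 c2 t>:
# identity except that |110> and |111> are swapped.
toffoli = np.eye(8)
toffoli[[6, 7]] = toffoli[[7, 6]]

state_110 = np.zeros(8)
state_110[6] = 1.0            # both controls set, target 0
flipped = toffoli @ state_110  # target flips: state becomes |111>
```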
An optical Fourier transform coprocessor with direct phase determination.
Macfaden, Alexander J; Gordon, George S D; Wilkinson, Timothy D
2017-10-20
The Fourier transform is a ubiquitous mathematical operation which arises naturally in optics. We propose and demonstrate a practical method to optically evaluate a complex-to-complex discrete Fourier transform. By implementing the Fourier transform optically we can overcome the limiting O(n log n) complexity of fast Fourier transform algorithms. Efficiently extracting the phase from the well-known optical Fourier transform is challenging. By appropriately decomposing the input and exploiting symmetries of the Fourier transform we are able to determine the phase directly from straightforward intensity measurements, creating an optical Fourier transform with O(n) apparent complexity. Performing larger optical Fourier transforms requires higher resolution spatial light modulators, but the execution time remains unchanged. This method could unlock the potential of the optical Fourier transform to permit 2D complex-to-complex discrete Fourier transforms with a performance that is currently untenable, with applications across information processing and computational physics.
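The operation being offloaded is the ordinary complex-to-complex DFT. The O(n) apparent complexity concerns the optical evaluation; in software the same transform is a matrix-vector product, which the FFT accelerates to O(n log n). A small cross-check of the two software formulations:

```python
import numpy as np

n = 64
x = np.exp(2j * np.pi * np.arange(n) * 5 / n)  # a complex tone at bin 5

# direct DFT as a matrix-vector product: O(n^2) in software
omega = np.exp(-2j * np.pi * np.outer(np.arange(n), np.arange(n)) / n)
X_direct = omega @ x

X_fft = np.fft.fft(x)  # same result via the O(n log n) FFT
```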
Gamma signatures of the C-BORD Tagged Neutron Inspection System
NASA Astrophysics Data System (ADS)
Sardet, A.; Pérot, B.; Carasco, C.; Sannié, G.; Moretto, S.; Nebbia, G.; Fontana, C.; Pino, F.; Iovene, A.; Tintori, C.
2018-01-01
In the framework of the C-BORD project (EU H2020 program), a Rapidly relocatable Tagged Neutron Inspection System (RRTNIS) is being developed to non-intrusively detect explosives, chemical threats, and other illicit goods in cargo containers. Material identification is performed through gamma spectroscopy, using twenty NaI detectors and four LaBr3 detectors, to determine the elements composing the inspected item from their specific gamma signatures induced by fast neutrons. An unfolding algorithm decomposes the energy spectrum of a suspect item, selected by X-ray radiography and on which the RRTNIS inspection is focused, over a database of pure-element gamma signatures. This paper reports on simulated signatures for the NaI and LaBr3 detectors, constructed using the MCNP6 code. First experimental spectra of a few elements of interest are also presented.
Reducing neural network training time with parallel processing
NASA Technical Reports Server (NTRS)
Rogers, James L., Jr.; Lamarsh, William J., II
1995-01-01
Obtaining optimal solutions for engineering design problems is often expensive because the process typically requires numerous iterations involving analysis and optimization programs. Previous research has shown that a near-optimum solution can be obtained in less time by simulating a slow, expensive analysis with a fast, inexpensive neural network. A new approach has been developed to further reduce this time. This approach decomposes a large neural network into many smaller neural networks that can be trained in parallel. Guidelines are developed to avoid some of the pitfalls of training smaller neural networks in parallel. These guidelines allow the engineer to determine the number of nodes on the hidden layer of the smaller neural networks; to choose the initial training weights; and to select a network configuration that will capture the interactions among the smaller neural networks. This paper presents results describing how these guidelines are developed.
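The decompose-and-train-in-parallel idea can be sketched with hypothetical stand-ins: each output of a multi-output mapping gets its own small "network" (here simply a linear model fitted by gradient descent), and the small models are trained concurrently. This illustrates the parallel decomposition only, not the paper's guidelines:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4))          # design-variable samples
W_true = rng.normal(size=(4, 3))
Y = X @ W_true                         # three analysis outputs to learn

def train_small_net(y, lr=0.05, steps=2000):
    """One small 'network' per output, trained by gradient descent."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

# Train the small networks in parallel, one per output column.
with ThreadPoolExecutor() as ex:
    weights = list(ex.map(train_small_net, Y.T))

W_est = np.stack(weights, axis=1)
print(np.allclose(W_est, W_true, atol=1e-3))
```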
An efficient and robust 3D mesh compression based on 3D watermarking and wavelet transform
NASA Astrophysics Data System (ADS)
Zagrouba, Ezzeddine; Ben Jabra, Saoussen; Didi, Yosra
2011-06-01
The compression and watermarking of 3D meshes are very important in many areas of activity, including digital cinematography, virtual reality and CAD design. However, most studies on 3D watermarking and 3D compression are done independently. To achieve a good trade-off between protection and fast transfer of 3D meshes, this paper proposes a new approach which combines 3D mesh compression with mesh watermarking. This combination is based on a wavelet transformation. The compression method used is decomposed into two stages: geometric encoding and topologic encoding. The proposed approach inserts a signature between these two stages. First, the wavelet transformation is applied to the original mesh to obtain two components: wavelet coefficients and a coarse mesh. Then, geometric encoding is performed on these two components. The coarse mesh is marked using a robust mesh watermarking scheme; inserting into the coarse mesh yields high robustness to several attacks. Finally, topologic encoding is applied to the marked coarse mesh to obtain the compressed mesh. Combining compression and watermarking makes it possible to detect the signature after compression of the marked mesh, and allows protected 3D meshes to be transferred at minimum size. Experiments and evaluations show that the proposed approach gives efficient results in terms of compression gain, invisibility, and robustness of the signature against many attacks.
Into the decomposed body-forensic digital autopsy using multislice-computed tomography.
Thali, M J; Yen, K; Schweitzer, W; Vock, P; Ozdoba, C; Dirnhofer, R
2003-07-08
It is impossible to obtain a representative anatomical documentation of an entire body using classical X-ray methods, since they project three-dimensional bodies onto a two-dimensional plane. We used the novel multislice-computed tomography (MSCT) technique to evaluate a case of homicide with putrefaction of the corpse before performing a classical forensic autopsy. This non-invasive method showed gaseous distension of the decomposing organs and tissues in detail, as well as a complex fracture of the calvarium. MSCT also proved useful in screening for foreign matter in decomposing bodies, and full-body scanning took only a few minutes. In conclusion, we believe postmortem MSCT imaging is an excellent visualization tool with great potential for forensic documentation and evaluation of decomposed bodies.
Wavelet-Based Processing for Fiber Optic Sensing Systems
NASA Technical Reports Server (NTRS)
Hamory, Philip J. (Inventor); Parker, Allen R., Jr. (Inventor)
2016-01-01
The present invention is an improved method of processing conglomerate data. The method employs a Triband Wavelet Transform that decomposes and decimates the conglomerate signal to obtain a final result. The invention may be employed to improve performance of Optical Frequency Domain Reflectometry systems.
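A two-band Haar filter bank gives a minimal analogy for the decompose-and-decimate idea (the patented transform is triband, so this only illustrates the principle, not the invention itself): each analysis band is filtered and downsampled, and the original signal is perfectly reconstructible from the decimated bands.

```python
import numpy as np

def haar_step(signal):
    """One level of two-band analysis: filter, then decimate by 2."""
    s = np.asarray(signal, dtype=float)
    approx = (s[0::2] + s[1::2]) / np.sqrt(2)   # low-pass band
    detail = (s[0::2] - s[1::2]) / np.sqrt(2)   # high-pass band
    return approx, detail

def haar_inverse(approx, detail):
    """Synthesis: upsample and recombine the two bands."""
    s = np.empty(2 * len(approx))
    s[0::2] = (approx + detail) / np.sqrt(2)
    s[1::2] = (approx - detail) / np.sqrt(2)
    return s

x = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
a, d = haar_step(x)
assert len(a) == len(x) // 2               # decimation halves each band
assert np.allclose(haar_inverse(a, d), x)  # perfect reconstruction
```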
Augmenting the decomposition of EMG signals using supervised feature extraction techniques.
Parsaei, Hossein; Gangeh, Mehrdad J; Stashuk, Daniel W; Kamel, Mohamed S
2012-01-01
Electromyographic (EMG) signal decomposition is the process of resolving an EMG signal into its constituent motor unit potential trains (MUPTs). In this work, the possibility of improving the decomposition results using two supervised feature extraction methods, i.e., Fisher discriminant analysis (FDA) and supervised principal component analysis (SPCA), is explored. Using the MUP labels provided by a decomposition-based quantitative EMG system as training data for FDA and SPCA, the MUPs are transformed into a new feature space such that the MUPs of a single MU become as close as possible to each other while those created by different MUs become as far apart as possible. The MUPs are then reclassified using a certainty-based classification algorithm. Evaluation results using 10 simulated EMG signals comprised of 3-11 MUPTs demonstrate that FDA and SPCA on average improve the decomposition accuracy by 6%. The improvement for the most difficult-to-decompose signal is about 12%, which shows the proposed approach is most beneficial in the decomposition of more complex signals.
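The role of Fisher discriminant analysis here can be sketched on simulated two-unit data: the two clusters stand in for MUPs of two motor units (illustrative features, not real EMG), and the FDA direction pulls same-unit potentials together while pushing different units apart.

```python
import numpy as np

rng = np.random.default_rng(2)
# Two simulated "motor units" emitting 2-D feature vectors.
mu0 = rng.normal(loc=[0, 0], scale=1.0, size=(100, 2))
mu1 = rng.normal(loc=[3, 1], scale=1.0, size=(100, 2))

m0, m1 = mu0.mean(axis=0), mu1.mean(axis=0)
Sw = np.cov(mu0.T) + np.cov(mu1.T)          # within-class scatter
w = np.linalg.solve(Sw, m1 - m0)            # Fisher discriminant direction
w /= np.linalg.norm(w)

p0, p1 = mu0 @ w, mu1 @ w                   # project both units onto w
# Between-class separation should dwarf within-class spread on this axis.
separation = abs(p1.mean() - p0.mean()) / np.sqrt(p0.var() + p1.var())
print(separation > 1.5)
```

A certainty-based classifier would then operate in this projected space rather than on the raw features.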
NASA Astrophysics Data System (ADS)
Liang, Ruiyu; Xi, Ji; Bao, Yongqiang
2017-07-01
To improve the performance of gain compensation based on a three-segment sound pressure level (SPL) in hearing aids, an improved multichannel loudness compensation method based on an eight-segment SPL was proposed. First, a uniform cosine modulated filter bank was designed, and adjacent channels with low or gradual slopes were adaptively merged to obtain the corresponding non-uniform cosine modulated filter bank according to the audiogram of the hearing-impaired person. Second, the input speech was decomposed into sub-band signals and the SPL of every sub-band signal was computed. Meanwhile, the audible range from 0 dB SPL to 120 dB SPL was equally divided into eight segments; based on these segments, a prescription formula was designed to compute a more detailed compensation gain according to the audiogram and the computed SPL. Finally, the enhanced signal was synthesized. Objective experiments showed that the signals decomposed by the cosine modulated filter bank had little distortion, and that the hearing aids speech perception index (HASPI) and hearing aids speech quality index (HASQI) increased by 0.083 and 0.082 on average, respectively. Subjective experiments showed that the proposed algorithm can effectively improve speech recognition for six hearing-impaired listeners.
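The eight-segment idea can be sketched as follows, with hypothetical per-segment gains (the paper's actual prescription formula is not given in the abstract): compute a sub-band's SPL, locate it in one of eight equal 15 dB segments of the 0-120 dB range, and look up a gain for that segment.

```python
import numpy as np

def spl_db(samples, ref=20e-6):
    """Sound pressure level in dB re 20 uPa."""
    rms = np.sqrt(np.mean(np.square(samples)))
    return 20 * np.log10(rms / ref)

# Hypothetical gains (dB) for the eight 15 dB segments: strong
# amplification for quiet sounds, little or none near the loud end.
segment_gains = np.array([40, 35, 28, 20, 12, 6, 2, 0], dtype=float)

def gain_for(samples):
    seg = int(np.clip(spl_db(samples) // 15, 0, 7))
    return segment_gains[seg]

quiet = 20e-6 * 10 ** (25 / 20) * np.ones(256)   # block at ~25 dB SPL
loud = 20e-6 * 10 ** (95 / 20) * np.ones(256)    # block at ~95 dB SPL
print(gain_for(quiet), gain_for(loud))
```

With eight segments rather than three, the gain curve can follow the audiogram in finer steps across the dynamic range.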
Fast sparse Raman spectral unmixing for chemical fingerprinting and quantification
NASA Astrophysics Data System (ADS)
Yaghoobi, Mehrdad; Wu, Di; Clewes, Rhea J.; Davies, Mike E.
2016-10-01
Raman spectroscopy is a well-established spectroscopic method for the detection of condensed-phase chemicals. It is based on light scattered when a target material is exposed to a narrowband laser beam, and the information generated enables presumptive identification by measuring correlation with library spectra. While this approach succeeds in identifying single-component samples, it is more difficult to apply to spectral mixtures. The capability of handling spectral mixtures is crucial for defence and security applications, as hazardous materials may be present as mixtures due to degradation, interferents or precursors. A novel method for spectral unmixing is proposed here. Most modern decomposition techniques are based on sparse decomposition of the mixture and the application of extra constraints to preserve the sum of concentrations. These methods have often been proposed for passive spectroscopy, where spectral baseline correction is not required, and the most successful of them are computationally expensive, e.g. convex optimisation and Bayesian approaches. We present a novel low-complexity sparsity-based method to decompose the spectra using a reference library of spectra; it can be implemented on a hand-held spectrometer in near real-time. The algorithm iteratively subtracts the contribution of selected spectra and updates the contribution of each spectrum. The core algorithm is called fast non-negative orthogonal matching pursuit, which the authors previously proposed in the context of non-negative sparse representations. The iteration terminates when the maximum number of expected chemicals has been found or the residual spectrum has negligible energy, i.e. on the order of the noise level. A backtracking step removes the least contributing spectrum from the list of detected chemicals and reports it as an alternative component.
This feature is particularly useful for detecting chemicals with small contributions, which would otherwise go undetected. The proposed algorithm is easily reconfigurable to include new library entries and optional preferential threat searches in the presence of predetermined threat indicators. Under Ministry of Defence funding, we have demonstrated the algorithm for fingerprinting and rough quantification of the concentration of chemical mixtures using a set of reference spectral mixtures. In our experiments, the algorithm successfully detected chemicals at concentrations below 10 percent. The running time of the algorithm is on the order of one second using a single core of a desktop computer.
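A simplified greedy unmixer in the spirit of (non-negative) orthogonal matching pursuit can be sketched as below. The library, mixture weights, and stopping rule are illustrative, and the plain least-squares refit omits the explicit non-negativity projection and backtracking of the authors' algorithm:

```python
import numpy as np

rng = np.random.default_rng(3)
n_bands, n_lib = 200, 6
library = np.abs(rng.normal(size=(n_bands, n_lib)))   # nonneg library spectra
library /= np.linalg.norm(library, axis=0)
true_conc = np.zeros(n_lib)
true_conc[[1, 4]] = [0.7, 0.3]                        # two-component mixture
mixture = library @ true_conc

def unmix(y, D, max_components=3, tol=1e-8):
    """Greedy sparse unmixing: pick, refit, subtract, repeat."""
    support, residual = [], y.copy()
    coeffs = np.zeros(D.shape[1])
    while len(support) < max_components and residual @ residual > tol:
        k = int(np.argmax(D.T @ residual))   # best-matching library spectrum
        if k in support:
            break
        support.append(k)
        sol, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        coeffs[:] = 0.0
        coeffs[support] = sol                # refit on the current support
        residual = y - D @ coeffs            # subtract explained contribution
    return coeffs

est = unmix(mixture, library)
print([int(k) for k in np.flatnonzero(est > 1e-6)])   # recovered components
```

The loop terminates either at the expected component count or once the residual energy drops to the (here, noiseless) floor, mirroring the stopping rule described above.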
Integrated control/structure optimization by multilevel decomposition
NASA Technical Reports Server (NTRS)
Zeiler, Thomas A.; Gilbert, Michael G.
1990-01-01
A method for integrated control/structure optimization by multilevel decomposition is presented. It is shown that several previously reported methods were actually partial decompositions wherein only the control was decomposed into a subsystem design. One of these partially decomposed problems was selected as a benchmark example for comparison. The system is fully decomposed into structural and control subsystem designs and an improved design is produced. Theory, implementation, and results for the method are presented and compared with the benchmark example.
Cornthwaite, H M; Watterson, J H
2014-10-01
The influence of body position and microclimate on ketamine (KET) and metabolite distribution in decomposed bone tissue was examined. Rats received 75 mg/kg (i.p.) KET (n = 30) or remained drug-free (controls, n = 4). Following euthanasia, rats were divided into two groups and placed outdoors to decompose in one of three positions: supine (SUP), prone (PRO) or upright (UPR). One group decomposed in a shaded, wooded microclimate (Site 1), while the other decomposed in an exposed, sunlit microclimate with gravel substrate (Site 2), roughly 500 m from Site 1. Following decomposition, bones (lumbar vertebrae, thoracic vertebra, cervical vertebrae, rib, pelvis, femora, tibiae, humeri and scapulae) were collected and sorted for analysis. Clean, ground bones underwent microwave-assisted extraction using an acetone:hexane mixture (1:1, v/v), followed by solid-phase extraction and analysis using GC-MS. Drug levels, expressed as mass-normalized response ratios, were compared across all bone types between body positions and microclimates. Bone type was a main effect (P < 0.05) for drug level and drug/metabolite level ratio for all body positions and microclimates examined. Microclimate and body position significantly influenced observed drug levels: higher levels were observed in carcasses decomposing in direct sunlight, where reduced entomological activity led to slowed decomposition. © The Author 2014. Published by Oxford University Press. All rights reserved.
ADVANCED OXIDATION: OXALATE DECOMPOSITION TESTING WITH OZONE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ketusky, E.; Subramanian, K.
At the Savannah River Site (SRS), oxalic acid is currently considered the preferred agent for chemically cleaning the large underground Liquid Radioactive Waste Tanks. It is applied only in the final stages of emptying a tank, when generally less than 5,000 kg of waste solids remain and slurrying-based removal methods are no longer effective. The use of oxalic acid is preferred because of its combined dissolution and chelating properties, as well as the fact that corrosion of the carbon steel tank walls can be controlled. Although oxalic acid is the preferred agent, there are significant potential downstream impacts, including: (1) degraded evaporator operation; (2) resultant oxalate precipitates taking away critically needed operating volume; and (3) eventual creation of significant volumes of additional feed to salt processing. As an alternative to dealing with the downstream impacts, oxalate decomposition using variations of an ozone-based Advanced Oxidation Process (AOP) was investigated. In general, AOPs use ozone or peroxide and a catalyst to create hydroxyl radicals. Hydroxyl radicals have among the highest oxidation potentials and are commonly used to decompose organics. Although oxalate is considered among the most difficult organics to decompose, the ability of hydroxyl radicals to decompose oxalate is well demonstrated. In addition, as AOPs are considered 'green', their use enables any net chemical additions to the waste to be minimized. In order to test the ability to decompose the oxalate and determine the decomposition rates, a test rig was designed in which 10 vol% ozone would be educted into a spent oxalic acid decomposition loop, with the loop maintained at 70 C and recirculated at 40 L/min.
Each of the spent oxalic acid streams would be created from three oxalic acid strikes of an F-Area simulant (i.e., Purex = high Fe/Al concentration) and an H-Area simulant (i.e., H-Area modified Purex = high Al/Fe concentration) after nearing dissolution equilibrium, and then decomposed to ≤100 parts per million (ppm) oxalate. Since AOP technology largely originated with ultraviolet (UV) light as a primary catalyst, decomposition of the spent oxalic acid, well exposed to a medium-pressure mercury vapor light, was considered the benchmark. However, with multi-valent metals already contained in the feed and maintenance of the UV light a concern, testing was conducted to evaluate the impact of removing the UV light. Using current AOP terminology, the test without the UV light would likely be considered an ozone-based, dark, ferrioxalate-type decomposition process. Specifically, as part of the testing, the impacts of the following were investigated: (1) the importance of the UV light on the decomposition rates when decomposing 1 wt% spent oxalic acid; (2) the impact of increasing the oxalic acid strength from 1 to 2.5 wt% on the decomposition rates; and (3) for F-Area testing, the advantage of increasing the spent oxalic acid flowrate from 40 L/min (liters/minute) to 50 L/min during decomposition of the 2.5 wt% spent oxalic acid. The results showed that removal of the UV light (from 1 wt% testing) slowed the decomposition rates in both the F and H testing. Specifically, for F-Area Strike 1, the time increased from about 6 hours to 8 hours. In H-Area, the impact was not as significant, with the time required for Strike 1 to be decomposed to less than 100 ppm increasing slightly, from 5.4 to 6.4 hours. For all of the spent 2.5 wt% oxalic acid decomposition tests without the UV light, the F-Area decompositions required approximately 10 to 13 hours, while the corresponding H-Area decomposition times ranged from 10 to 21 hours.
For the 2.5 wt% F-Area sludge, the increased availability of iron likely caused the higher decomposition rates relative to the 1 wt% oxalic acid tests. In addition, for the F-Area testing, increasing the recirculation flow rate from 40 L/min to 50 L/min resulted in an increased decomposition rate, suggesting better use of the ozone.
Toxicity to woodlice of zinc and lead oxides added to soil litter
Beyer, W.N.; Anderson, A.
1985-01-01
Previous studies have shown that high concentrations of metals in soil are associated with reductions in decomposer populations. Here we determined the relation between the concentrations of lead and zinc added as oxides to soil litter and the survival and reproduction of a decomposer population under controlled conditions. Laboratory populations of woodlice (Porcellio scaber Latr.) were fed soil litter treated with lead or zinc at concentrations ranging from 100 to 12,800 ppm. The survival of the adults, the maximum number of young alive, and the average number of young alive were recorded over 64 weeks. Lead at 12,800 ppm and zinc at 1,600 ppm or more had statistically significant (p < 0.05) negative effects on the populations. These results agree with field observations suggesting that lead and zinc have reduced populations of decomposers in contaminated forest soil litter, and the concentrations are similar to those reported to be associated with reductions in natural populations of decomposers. Poisoning of decomposers may disrupt nutrient cycling, reduce the numbers of invertebrates available to other wildlife for food, and contribute to the contamination of food chains.
Cat got your tongue? Using the tip-of-the-tongue state to investigate fixed expressions.
Nordmann, Emily; Cleland, Alexandra A; Bull, Rebecca
2013-01-01
Despite the fact that they play a prominent role in everyday speech, the representation and processing of fixed expressions during language production is poorly understood. Here, we report a study investigating the processes underlying fixed expression production. "Tip-of-the-tongue" (TOT) states were elicited for well-known idioms (e.g., hit the nail on the head) and participants were asked to report any information they could regarding the content of the phrase. Participants were able to correctly report individual words for idioms that they could not produce. In addition, participants produced both figurative (e.g., pretty for easy on the eye) and literal errors (e.g., hammer for hit the nail on the head) when in a TOT state, suggesting that both figurative and literal meanings are active during production. There was no effect of semantic decomposability on overall TOT incidence; however, participants recalled a greater proportion of words for decomposable rather than non-decomposable idioms. This finding suggests there may be differences in how decomposable and non-decomposable idioms are retrieved during production. Copyright © 2013 Cognitive Science Society, Inc.
An accurate, fast, and scalable solver for high-frequency wave propagation
NASA Astrophysics Data System (ADS)
Zepeda-Núñez, L.; Taus, M.; Hewett, R.; Demanet, L.
2017-12-01
In many science and engineering applications, solving time-harmonic high-frequency wave propagation problems quickly and accurately is of paramount importance. For example, in geophysics, particularly in oil exploration, such problems can be the forward problem in an iterative process for solving the inverse problem of subsurface inversion. It is important to solve these wave propagation problems accurately in order to efficiently obtain meaningful solutions of the inverse problems: low-order forward modeling can hinder convergence. Additionally, due to the volume of data and the iterative nature of most optimization algorithms, the forward problem must be solved many times. Therefore, a fast solver is necessary to make solving the inverse problem feasible. For time-harmonic high-frequency wave propagation, obtaining both speed and accuracy is historically challenging. Recently, there have been many advances in the development of fast solvers for such problems, including methods which have linear complexity with respect to the number of degrees of freedom. While most methods scale optimally only in the context of low-order discretizations and smooth wave speed distributions, the method of polarized traces has been shown to retain optimal scaling for high-order discretizations, such as hybridizable discontinuous Galerkin methods, and for highly heterogeneous (and even discontinuous) wave speeds. The resulting fast and accurate solver is consequently highly attractive for geophysical applications. To date, this method relies on a layered domain decomposition together with a preconditioner applied in a sweeping fashion, which limits straightforward parallelization. In this work, we introduce a new version of the method of polarized traces which reveals more parallel structure than previous versions while preserving all of its other advantages.
We achieve this by further decomposing each layer and applying the preconditioner to these new components separately and in parallel. We demonstrate that this produces an even more effective and parallelizable preconditioner for a single right-hand side. As before, additional speed can be gained by pipelining several right-hand-sides.
A Graph-Based Recovery and Decomposition of Swanson’s Hypothesis using Semantic Predications
Cameron, Delroy; Bodenreider, Olivier; Yalamanchili, Hima; Danh, Tu; Vallabhaneni, Sreeram; Thirunarayan, Krishnaprasad; Sheth, Amit P.; Rindflesch, Thomas C.
2014-01-01
Objectives This paper presents a methodology for recovering and decomposing Swanson’s Raynaud Syndrome–Fish Oil Hypothesis semi-automatically. The methodology leverages the semantics of assertions extracted from biomedical literature (called semantic predications) along with structured background knowledge and graph-based algorithms to semi-automatically capture the informative associations originally discovered manually by Swanson. Demonstrating that Swanson’s manually intensive techniques can be undertaken semi-automatically paves the way for fully automatic semantics-based hypothesis generation from scientific literature. Methods Semantic predications obtained from biomedical literature allow the construction of labeled directed graphs which contain various associations among concepts from the literature. By aggregating such associations into informative subgraphs, some of the relevant details originally articulated by Swanson have been uncovered. Furthermore, by leveraging background knowledge to bridge important knowledge gaps in the literature, a methodology has been developed for semi-automatically capturing the detailed associations originally explicated in natural language by Swanson. Results Our methodology not only recovered the 3 associations commonly recognized as Swanson’s Hypothesis, but also decomposed them into an additional 16 detailed associations, formulated as chains of semantic predications. Altogether, 14 out of the 19 associations that can be attributed to Swanson were retrieved using our approach. To the best of our knowledge, such an in-depth recovery and decomposition of Swanson’s Hypothesis has never been attempted. Conclusion In this work, therefore, we presented a methodology for semi-automatically recovering and decomposing Swanson’s RS-DFO Hypothesis using semantic representations and graph algorithms. Our methodology provides new insights into potential prerequisites for semantics-driven Literature-Based Discovery (LBD).
These suggest that three critical aspects of LBD include: 1) the need for more expressive representations beyond Swanson’s ABC model; 2) an ability to accurately extract semantic information from text; and 3) the semantic integration of scientific literature with structured background knowledge. PMID:23026233
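Swanson's ABC pattern behind such predication graphs can be sketched with a toy set of hypothetical triples (simplified relations, not actual SemRep output): concept A and concept C are never linked directly, but share intermediate B-terms reachable through labeled edges.

```python
# Hypothetical semantic predications: subject, relation, object.
predications = [
    ("fish_oil", "REDUCES", "blood_viscosity"),
    ("fish_oil", "INHIBITS", "platelet_aggregation"),
    ("blood_viscosity", "ASSOCIATED_WITH", "raynaud_syndrome"),
    ("platelet_aggregation", "ASSOCIATED_WITH", "raynaud_syndrome"),
    ("fish_oil", "CONTAINS", "epa"),
]

def b_terms(graph, a, c):
    """Concepts B with edges A->B and B->C (a chain of two predications)."""
    out = {s: {} for s, _, _ in graph}
    for s, rel, o in graph:
        out[s][o] = rel
    return sorted(b for b in out.get(a, {}) if c in out.get(b, {}))

print(b_terms(predications, "fish_oil", "raynaud_syndrome"))
```

The more expressive chains described above go beyond this two-hop pattern, but the graph traversal core is the same.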
Preservation and rapid purification of DNA from decomposing human tissue samples.
Sorensen, Amy; Rahman, Elizabeth; Canela, Cassandra; Gangitano, David; Hughes-Stamm, Sheree
2016-11-01
One of the key features to be considered in a mass disaster is victim identification. However, the recovery and identification of human remains are sometimes complicated by harsh environmental conditions, limited facilities, loss of electricity and lack of refrigeration. If human remains cannot be collected, stored, or identified immediately, bodies decompose and DNA degrades, making genotyping more difficult and ultimately decreasing DNA profiling success. In order to prevent further DNA damage and degradation after collection, tissue preservatives may be used. The goal of this study was to evaluate three customized (modified TENT, DESS, LST) and two commercial DNA preservatives (RNAlater and DNAgard®) on fresh and decomposed human skin and muscle samples stored in hot (35°C) and humid (60-70% relative humidity) conditions for up to three months. Skin and muscle samples were harvested from the thigh of three human cadavers placed outdoors for up to two weeks. In addition, the possibility of purifying DNA directly from the preservative solutions ("free DNA") was investigated in order to eliminate lengthy tissue digestion processes and increase throughput. The efficiency of each preservative was evaluated based on the quantity of DNA recovered from both the "free DNA" in solution and the tissue sample itself in conjunction with the quality and completeness of downstream STR profiles. As expected, DNA quantity and STR success decreased with time of decomposition. However, a marked decrease in DNA quantity and STR quality was observed in all samples after the bodies entered the bloat stage (approximately six days of decomposition in this study). Similar amounts of DNA were retrieved from skin and muscle samples over time, but slightly more complete STR profiles were obtained from muscle tissue.
Although higher amounts of DNA were recovered from tissue samples than from the surrounding preservative, the average number of reportable alleles from the "free DNA" was comparable. Overall, DNAgard® and the modified TENT buffer were the most successful tissue preservatives tested in this study based on STR profile success from "free DNA" in solution when decomposing tissues were stored for up to three months in hot, humid conditions. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Iterative image-domain decomposition for dual-energy CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Niu, Tianye; Dong, Xue; Petrongolo, Michael
2014-04-15
Purpose: Dual energy CT (DECT) imaging plays an important role in advanced imaging applications due to its capability of material decomposition. Direct decomposition via matrix inversion suffers from significant degradation of image signal-to-noise ratios, which reduces clinical values of DECT. Existing denoising algorithms achieve suboptimal performance since they suppress image noise either before or after the decomposition and do not fully explore the noise statistical properties of the decomposition process. In this work, the authors propose an iterative image-domain decomposition method for noise suppression in DECT, using the full variance-covariance matrix of the decomposed images. Methods: The proposed algorithm is formulated in the form of least-square estimation with smoothness regularization. Based on the design principles of a best linear unbiased estimator, the authors include the inverse of the estimated variance-covariance matrix of the decomposed images as the penalty weight in the least-square term. The regularization term enforces the image smoothness by calculating the square sum of neighboring pixel value differences. To retain the boundary sharpness of the decomposed images, the authors detect the edges in the CT images before decomposition. These edge pixels have small weights in the calculation of the regularization term. Distinct from the existing denoising algorithms applied on the images before or after decomposition, the method has an iterative process for noise suppression, with decomposition performed in each iteration. The authors implement the proposed algorithm using a standard conjugate gradient algorithm. The method performance is evaluated using an evaluation phantom (Catphan©600) and an anthropomorphic head phantom.
The results are compared with those generated using direct matrix inversion with no noise suppression, a denoising method applied on the decomposed images, and an existing algorithm with similar formulation as the proposed method but with an edge-preserving regularization term. Results: On the Catphan phantom, the method maintains the same spatial resolution on the decomposed images as that of the CT images before decomposition (8 pairs/cm) while significantly reducing their noise standard deviation. Compared to that obtained by the direct matrix inversion, the noise standard deviation in the images decomposed by the proposed algorithm is reduced by over 98%. Without considering the noise correlation properties in the formulation, the denoising scheme degrades the spatial resolution to 6 pairs/cm for the same level of noise suppression. Compared to the edge-preserving algorithm, the method achieves better low-contrast detectability. A quantitative study is performed on the contrast-rod slice of Catphan phantom. The proposed method achieves lower electron density measurement error as compared to that by the direct matrix inversion, and significantly reduces the error variation by over 97%. On the head phantom, the method reduces the noise standard deviation of decomposed images by over 97% without blurring the sinus structures. Conclusions: The authors propose an iterative image-domain decomposition method for DECT. The method combines noise suppression and material decomposition into an iterative process and achieves both goals simultaneously. By exploring the full variance-covariance properties of the decomposed images and utilizing the edge predetection, the proposed algorithm shows superior performance on noise suppression with high image spatial resolution and low-contrast detectability.
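The penalized formulation can be sketched on a 1D profile of pixels. The attenuation matrix A, the Gaussian test object, and the scalar penalty weight are all hypothetical, and a dense solve stands in for the authors' conjugate-gradient iterations and variance-covariance weighting:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 64                                   # pixels along a 1D profile
A = np.array([[0.8, 0.5],                # hypothetical energy-dependent
              [0.3, 0.9]])               # attenuation of two basis materials
t = np.linspace(0.0, 1.0, n)
x_true = np.stack([np.exp(-((t - 0.5) / 0.15) ** 2),
                   0.4 * np.sin(np.pi * t) ** 2], axis=1)
mu = x_true @ A.T + 0.08 * rng.normal(size=(n, 2))   # noisy dual-energy data

def solve(lam):
    """min ||A x - mu||^2 + lam ||grad x||^2 over the stacked unknowns."""
    D = np.diff(np.eye(n), axis=0)       # finite-difference (smoothness) operator
    H = np.kron(np.eye(n), A.T @ A) + lam * np.kron(D.T @ D, np.eye(2))
    rhs = (mu @ A).ravel()               # A^T mu, pixel by pixel
    return np.linalg.solve(H, rhs).reshape(n, 2)

noisy = solve(lam=0.0)                   # direct per-pixel matrix inversion
smooth = solve(lam=5.0)                  # regularized decomposition
err = lambda x: np.sqrt(np.mean((x - x_true) ** 2))
assert err(smooth) < err(noisy)          # regularization suppresses noise
```

With lam=0 the solver reduces exactly to per-pixel inversion x = A^{-1} mu, which amplifies the measurement noise through the poorly conditioned material basis; the smoothness penalty trades a little bias for a large variance reduction.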
Yamamoto, Shuji; Suzuki, Kei; Araki, Yoko; Mochihara, Hiroki; Hosokawa, Tetsuya; Kubota, Hiroko; Chiba, Yusuke; Rubaba, Owen; Tashiro, Yosuke; Futamata, Hiroyuki
2014-01-01
The relationship between the bacterial communities in the anolyte and anode biofilms and the electrochemical properties of microbial fuel cells (MFCs) was investigated when a complex organic waste-decomposing solution was continuously supplied to MFCs as an electron donor. The current density increased gradually and was maintained at approximately 100 to 150 mA m−2. Polarization curve analyses revealed that the maximum power density was 7.4 W m−3 with an internal resistance of 110 Ω. Bacterial community structures in the organic waste-decomposing solution and the MFCs differed from each other. Clonal analyses targeting 16S rRNA genes indicated that bacterial communities in the biofilms of the MFCs developed into specific communities dominated by novel Geobacter. Multidimensional scaling analyses based on DGGE profiles revealed that bacterial communities in the organic waste-decomposing solution fluctuated and had no dynamic equilibrium. Bacterial communities in the anolyte of the MFCs had a dynamic equilibrium with fluctuations, while those of the biofilm converged to the Geobacter-dominated structure. These bacterial community dynamics of the MFCs differed from those of control MFCs under open-circuit conditions. These results suggested that the bacterial communities in the anolyte and biofilm form a gentle symbiotic system through electron flow, which resulted in the increase in current density from complex organic waste. PMID:24789988
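Internal resistance and peak power of the kind reported above are typically read off a polarization curve by fitting the ohmic region, V = E0 − R_int·I. A minimal sketch with made-up numbers (the 0.65 V open-circuit voltage and current range are illustrative; only the 110 Ω value echoes the abstract):

```python
import numpy as np

E0, R_int = 0.65, 110.0                 # open-circuit voltage (V), internal resistance (ohm)
I = np.linspace(0.5e-3, 4e-3, 8)        # applied currents (A), illustrative
V = E0 - R_int * I                      # ohmic region of the polarization curve

slope, intercept = np.polyfit(I, V, 1)  # linear fit: slope = -R_int, intercept = E0
R_est = -slope
P = V * I                               # power at each operating point (W)
I_opt = E0 / (2 * R_int)                # current at theoretical maximum power
```

For a purely ohmic cell, maximum power transfer occurs when the external load equals the internal resistance, i.e. at I = E0/(2·R_int).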
The Symbolism of Death in the Later Middle Ages.
ERIC Educational Resources Information Center
Helgeland, John
1985-01-01
Discusses the gruesome images of death occurring in medieval art and letters. Suggests that the images are a form of symbolism based on body metaphors. By means of decomposing bodies, artists and poets symbolized the disintegration of medieval institutions and the transition to the early modern period in Europe. (JAC)
Addition of biochar to simulated golf greens promotes creeping bentgrass growth
USDA-ARS's Scientific Manuscript database
Organic amendments such as peat moss and various composts are typically added to sand-based root zones such as golf greens to increase water and nutrient retention. However, these attributes are generally lost as these amendments decompose in a few years. Biochar is a high carbon, extremely porous ...
Morphological Decomposition Based on the Analysis of Orthography
ERIC Educational Resources Information Center
Rastle, Kathleen; Davis, Matthew H.
2008-01-01
Recent theories of morphological processing have been dominated by the notion that morphologically complex words are decomposed into their constituents on the basis of their semantic properties. In this article we argue that the weight of evidence now suggests that the recognition of morphologically complex words begins with a rapid morphemic…
Parallel ICA and its hardware implementation in hyperspectral image analysis
NASA Astrophysics Data System (ADS)
Du, Hongtao; Qi, Hairong; Peterson, Gregory D.
2004-04-01
Advances in hyperspectral imaging have dramatically boosted remote sensing applications by providing abundant information using hundreds of contiguous spectral bands. However, the high volume of information also results in an excessive computational burden. Since most materials have specific characteristics only at certain bands, much of this information is redundant. This property of hyperspectral images has motivated many researchers to study various dimensionality reduction algorithms, including Projection Pursuit (PP), Principal Component Analysis (PCA), wavelet transform, and Independent Component Analysis (ICA), where ICA is one of the most popular techniques. It searches for a linear or nonlinear transformation which minimizes the statistical dependence between spectral bands. Through this process, ICA can eliminate superfluous information while retaining practical information, given only the observations of hyperspectral images. One hurdle to applying ICA in hyperspectral image (HSI) analysis, however, is its long computation time, especially for high-volume hyperspectral data sets. Even the most efficient method, FastICA, is a very time-consuming process. In this paper, we present a parallel ICA (pICA) algorithm derived from FastICA. During the unmixing process, pICA divides the estimation of the weight matrix into sub-processes which can be conducted in parallel on multiple processors. The decorrelation process is decomposed into internal decorrelation and external decorrelation, which perform weight vector decorrelations within individual processors and between cooperative processors, respectively. In order to further improve the performance of pICA, we seek hardware solutions for the implementation of pICA. Until now, there have been very few hardware designs for ICA-related processes due to the complicated and iterative computation involved. This paper discusses the capacity limitations of FPGA implementations of pICA in HSI analysis.
An Application-Specific Integrated Circuit (ASIC) synthesis is designed for pICA-based dimensionality reduction in HSI analysis. The pICA design is implemented using standard-height cells and targets the TSMC 0.18 micron process. During the synthesis procedure, three ICA-related reconfigurable components are developed for reuse and retargeting. Preliminary results show that the standard-height-cell-based ASIC synthesis provides an effective solution for pICA and ICA-related processes in HSI analysis.
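The decorrelation step that pICA partitions across processors can be sketched via FastICA's symmetric decorrelation, W ← (W Wᵀ)^(−1/2) W; pICA splits this into per-processor ("internal") and cross-processor ("external") stages, while the single-processor operation itself looks like this (the 4x4 matrix is a stand-in for the estimated unmixing vectors):

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.standard_normal((4, 4))        # stand-in for estimated unmixing vectors

# Symmetric decorrelation: W <- (W W^T)^(-1/2) W, via eigendecomposition
# of the (symmetric positive-definite) Gram matrix W W^T.
vals, vecs = np.linalg.eigh(W @ W.T)
W_dec = vecs @ np.diag(vals ** -0.5) @ vecs.T @ W
```

After this step the rows of W_dec are mutually orthonormal, which is exactly the property the internal/external split must preserve across processors.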
NASA Astrophysics Data System (ADS)
Torrungrueng, Danai; Johnson, Joel T.; Chou, Hsi-Tseng
2002-03-01
The novel spectral acceleration (NSA) algorithm has been shown to produce an O(Ntot) efficient iterative method of moments for the computation of radiation/scattering from both one-dimensional (1-D) and two-dimensional large-scale quasi-planar structures, where Ntot is the total number of unknowns to be solved. This method accelerates the matrix-vector multiplication in an iterative method of moments solution and divides contributions between points into "strong" (exact matrix elements) and "weak" (NSA algorithm) regions. The NSA method is based on a spectral representation of the electromagnetic Green's function and appropriate contour deformation, resulting in a fast multipole-like formulation in which contributions from large numbers of points to a single point are evaluated simultaneously. In the standard NSA algorithm the NSA parameters are derived on the basis of the assumption that the outermost possible saddle point, φs,max, along the real axis in the complex angular domain is small. For given height variations of quasi-planar structures, this assumption can be satisfied by adjusting the size of the strong region Ls. However, for quasi-planar structures with large height variations, the adjusted size of the strong region is typically large, resulting in significant increases in computational time for the computation of the strong-region contribution and degrading the overall efficiency of the NSA algorithm. In addition, for the case of extremely large scale structures, studies based on the physical optics approximation and a flat-surface assumption show that the given NSA parameters in the standard NSA algorithm may yield inaccurate results. In this paper, analytical formulas associated with the NSA parameters for an arbitrary value of φs,max are presented, resulting in more flexibility in selecting Ls to compromise between the computation of the contributions of the strong and weak regions.
In addition, a "multilevel" algorithm, decomposing 1-D extremely large scale quasi-planar structures into more than one weak region and appropriately choosing the NSA parameters for each weak region, is incorporated into the original NSA method to improve its accuracy.
Semiconductor laser self-mixing micro-vibration measuring technology based on Hilbert transform
NASA Astrophysics Data System (ADS)
Tao, Yufeng; Wang, Ming; Xia, Wei
2016-06-01
A signal-processing method combining the wavelet transform and the Hilbert transform is employed to measure uniform and non-uniform vibrations with a self-mixing interferometer based on a quantum-well semiconductor laser diode. Background noise and fringe inclination are removed by wavelet decomposition, fringe counting is adopted to automatically determine the decomposition level, and a pair of exact quadrature signals is produced by the Hilbert transform to extract the vibration. The potential of the proposed method for real-time micro-vibration measurement with high accuracy and wide dynamic response bandwidth is demonstrated by both simulation and experiment. Advantages and error sources are presented as well. The main features of the proposed semiconductor laser self-mixing interferometer are constant-current supply, high resolution, a simple optical path, and much higher tolerance to feedback level than existing self-mixing interferometers, making it competitive for non-contact vibration measurement.
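The Hilbert-transform step, producing an exact quadrature pair from a single fringe signal, can be sketched with the standard FFT construction of the discrete analytic signal (the 200 Hz test tone is illustrative, not a real fringe trace):

```python
import numpy as np

def analytic_signal(x):
    """Discrete analytic signal via the FFT (one-sided spectrum doubling)."""
    N = len(x)                 # N assumed even here
    X = np.fft.fft(x)
    h = np.zeros(N)
    h[0] = 1.0
    h[1:N // 2] = 2.0          # double positive frequencies
    h[N // 2] = 1.0            # keep Nyquist bin
    return np.fft.ifft(X * h)  # negative frequencies are zeroed

fs = 10_000
t = np.arange(0, 0.1, 1 / fs)              # 1000 samples, 20 full cycles
x = np.cos(2 * np.pi * 200 * t)            # stand-in self-mixing fringe signal
z = analytic_signal(x)
quadrature = z.imag                        # 90-degree-shifted copy of x
phase = np.unwrap(np.angle(z))             # unwrapped phase -> displacement
```

The unwrapped phase is what a self-mixing interferometer converts to displacement (one fringe corresponds to half a laser wavelength of target motion).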
NASA Astrophysics Data System (ADS)
Heya, Akira; Matsuo, Naoto
2018-04-01
Guidelines for a bottom-up approach to nanographene formation from pentacene using heated tungsten were investigated with a novel method called hot mesh deposition (HMD). In this method, a heated W mesh was set between a pentacene source and a quartz substrate. Pentacene molecules were decomposed by the heated W mesh, and the resulting pentacene-based precursors were then deposited on the quartz substrate. The pentacene dimer (peripentacene) was obtained from pentacene by HMD using two heated catalysts. As expected from density functional theory calculations in the literature, it was confirmed that the pentacene dimer can be formed by a reaction between pentacene and 6,13-dihydropentacene. This technique can be applied to the formation of novel nanographene on various substrates without metal catalysts.
NASA Astrophysics Data System (ADS)
Wang, Shibin; Chen, Xuefeng; Selesnick, Ivan W.; Guo, Yanjie; Tong, Chaowei; Zhang, Xingwu
2018-02-01
Synchrosqueezing transform (SST) can effectively improve the readability of the time-frequency (TF) representation (TFR) of nonstationary signals composed of multiple components with slow varying instantaneous frequency (IF). However, for signals composed of multiple components with fast varying IF, SST still suffers from TF blurs. In this paper, we introduce a time-frequency analysis (TFA) method called matching synchrosqueezing transform (MSST) that achieves a highly concentrated TF representation comparable to the standard TF reassignment methods (STFRM), even for signals with fast varying IF, and furthermore, MSST retains the reconstruction benefit of SST. MSST captures the philosophy of STFRM to simultaneously consider time and frequency variables, and incorporates three estimators (i.e., the IF estimator, the group delay estimator, and a chirp-rate estimator) into a comprehensive and accurate IF estimator. In this paper, we first introduce the motivation of MSST with three heuristic examples. Then we introduce a precise mathematical definition of a class of chirp-like intrinsic-mode-type functions that locally can be viewed as a sum of a reasonably small number of approximate chirp signals, and we prove that MSST does indeed succeed in estimating chirp-rate and IF of arbitrary functions in this class and succeed in decomposing these functions. Furthermore, we describe an efficient numerical algorithm for the practical implementation of the MSST, and we provide an adaptive IF extraction method for MSST reconstruction. Finally, we verify the effectiveness of the MSST in practical applications for machine fault diagnosis, including gearbox fault diagnosis for a wind turbine in variable speed conditions and rotor rub-impact fault diagnosis for a dual-rotor turbofan engine.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moore, J. A. M.; Jiang, J.; Post, W. M.
Carbon cycle models often lack explicit belowground organism activity, yet belowground organisms regulate carbon storage and release in soil. Ectomycorrhizal fungi are important players in the carbon cycle because they are a conduit into soil for carbon assimilated by the plant. It is hypothesized that ectomycorrhizal fungi can also be active decomposers when plant carbon allocation to fungi is low. Here, we reviewed the literature on ectomycorrhizal decomposition and we developed a simulation model of the plant-mycorrhizae interaction where a reduction in plant productivity stimulates ectomycorrhizal fungi to decompose soil organic matter. Our review highlights evidence demonstrating the potential for ectomycorrhizal fungi to decompose soil organic matter. Our model output suggests that ectomycorrhizal activity accounts for a portion of carbon decomposed in soil, but this portion varied with plant productivity and the mycorrhizal carbon uptake strategy simulated. Lower organic matter inputs to soil were largely responsible for reduced soil carbon storage. Using mathematical theory, we demonstrated that biotic interactions affect predictions of ecosystem functions. Specifically, we developed a simple function to model the mycorrhizal switch in function from plant symbiont to decomposer. In conclusion, we show that including mycorrhizal fungi with the flexibility of mutualistic and saprotrophic lifestyles alters predictions of ecosystem function.
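The "switch in function" described above might be modeled, as one simple possibility, by a sigmoid that raises the decomposer fraction as plant carbon allocation falls (this function and its parameters are our illustration, not the authors' published formulation):

```python
import numpy as np

def decomposer_fraction(plant_c, half_sat=1.0, steepness=4.0):
    """Hypothetical switch: the share of mycorrhizal C demand met by
    decomposing soil organic matter rises smoothly as plant C allocation
    (plant_c, in arbitrary units) falls below the half-saturation point."""
    return 1.0 / (1.0 + np.exp(steepness * (plant_c - half_sat)))

# High plant productivity -> mostly symbiont; low productivity -> mostly decomposer.
f_high = decomposer_fraction(2.0)
f_low = decomposer_fraction(0.0)
```

A smooth switch of this kind keeps the model differentiable, which matters for the stability analysis that mathematical theory of this sort typically relies on.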
Becky A. Ball; Mark A. Bradford; Dave C. Coleman; Mark D. Hunter
2009-01-01
Inputs of aboveground plant litter influence the abundance and activities of belowground decomposer biota. Litter-mixing studies have examined whether the diversity and heterogeneity of litter inputs...
NASA Astrophysics Data System (ADS)
Luo, Bin; Lin, Lin; Zhong, ShiSheng
2018-02-01
In this research, we propose a preference-guided optimisation algorithm for multi-criteria decision-making (MCDM) problems with interval-valued fuzzy preferences. The interval-valued fuzzy preferences are first decomposed into a series of precise and evenly distributed preference-vectors (reference directions) for the objectives to be optimised, on the basis of a uniform design strategy. Then the preference information is further incorporated into the preference-vectors based on the boundary intersection approach; meanwhile, the MCDM problem with interval-valued fuzzy preferences is reformulated into a series of single-objective optimisation sub-problems (each sub-problem corresponds to a decomposed preference-vector). Finally, a preference-guided optimisation algorithm based on MOEA/D (multi-objective evolutionary algorithm based on decomposition) is proposed to solve the sub-problems in a single run. The proposed algorithm incorporates the preference-vectors within the optimisation process to guide the search towards a more promising subset of the efficient solutions matching the interval-valued fuzzy preferences. Numerous test instances and an engineering application are employed to validate the performance of the proposed algorithm, and the results demonstrate its effectiveness and feasibility.
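The decomposition into single-objective sub-problems works by pairing each preference-vector with a scalarizing function. A minimal two-objective sketch using the classic weighted Tchebycheff scalarization (the paper itself uses a boundary-intersection approach; the evenly spread vectors and ideal point below are illustrative):

```python
import numpy as np

def tchebycheff(f, w, z_star):
    """Generic MOEA/D sub-problem objective: g(x | w, z*) = max_i w_i * |f_i - z*_i|."""
    return np.max(w * np.abs(f - z_star))

m = 5
a = np.linspace(0.0, 1.0, m)
vectors = np.stack([a, 1.0 - a], axis=1)   # evenly distributed preference-vectors
z_star = np.zeros(2)                       # ideal point (assumed known)

f = np.array([1.0, 2.0])                   # objective values of some candidate
scores = np.array([tchebycheff(f, w, z_star) for w in vectors])
```

Each vector defines one sub-problem; minimizing its scalarized score drives the search toward the part of the Pareto front that vector points at, which is how preference information steers the population.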
Application of wavelet-based multi-model Kalman filters to real-time flood forecasting
NASA Astrophysics Data System (ADS)
Chou, Chien-Ming; Wang, Ru-Yih
2004-04-01
This paper presents the application of a multimodel method using a wavelet-based Kalman filter (WKF) bank to simultaneously estimate decomposed state variables and unknown parameters for real-time flood forecasting. Applying the Haar wavelet transform alters the state vector and input vector of the state space. In this way, an overall detail plus approximation describes each new state vector and input vector, which allows the WKF to simultaneously estimate and decompose state variables. The wavelet-based multimodel Kalman filter (WMKF) is a multimodel Kalman filter (MKF) in which a WKF has been substituted for the Kalman filter. The WMKF then obtains M estimated state vectors. Next, the M state-estimates, each of which is weighted by its probability, also determined on-line, are combined to form an optimal estimate. Validations conducted for the Wu-Tu watershed, a small watershed in Taiwan, have demonstrated that the method is effective because of the decomposition of the wavelet transform, the adaptation of the time-varying Kalman filter and the characteristics of the multimodel method. Validation results also reveal that the resulting method enhances the accuracy of the runoff prediction of the rainfall-runoff process in the Wu-Tu watershed.
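The Haar transform that splits each state vector into an approximation plus a detail can be sketched at one level, and it inverts exactly (the toy state vector is illustrative):

```python
import numpy as np

def haar_analysis(x):
    """One-level Haar transform: approximation and detail coefficients."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # pairwise averages (scaled)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # pairwise differences (scaled)
    return a, d

def haar_synthesis(a, d):
    """Inverse one-level Haar transform (perfect reconstruction)."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

x = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])  # toy state vector
a, d = haar_analysis(x)
```

Because the transform is orthogonal and exactly invertible, a Kalman filter can be run on the transformed coordinates (approximation plus detail) without losing any state information.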
Decomposition and extraction: a new framework for visual classification.
Fang, Yuqiang; Chen, Qiang; Sun, Lin; Dai, Bin; Yan, Shuicheng
2014-08-01
In this paper, we present a novel framework for visual classification based on hierarchical image decomposition and hybrid midlevel feature extraction. Unlike most midlevel feature learning methods, which focus on the process of coding or pooling, we emphasize that the mechanism of image composition also strongly influences the feature extraction. To effectively explore the image content for feature extraction, we model a multiplicity feature representation mechanism through meaningful hierarchical image decomposition followed by a fusion step. In particular, we first propose a new hierarchical image decomposition approach in which each image is decomposed into a series of hierarchical semantic components, i.e., the structure and texture images. Then, different feature extraction schemes can be adopted to match the decomposed structure and texture processes in a dissociative manner. Here, two schemes are explored to produce property-related feature representations. One is based on a single-stage network over hand-crafted features and the other is based on a multistage network, which can learn features from raw pixels automatically. Finally, these multiple midlevel features are incorporated by solving a multiple kernel learning task. Extensive experiments are conducted on several challenging data sets for visual classification, and the experimental results demonstrate the effectiveness of the proposed method.
Multi-material decomposition of spectral CT images
NASA Astrophysics Data System (ADS)
Mendonça, Paulo R. S.; Bhotika, Rahul; Maddah, Mahnaz; Thomsen, Brian; Dutta, Sandeep; Licato, Paul E.; Joshi, Mukta C.
2010-04-01
Spectral Computed Tomography (Spectral CT), and in particular fast kVp switching dual-energy computed tomography, is an imaging modality that extends the capabilities of conventional computed tomography (CT). Spectral CT enables the estimation of the full linear attenuation curve of the imaged subject at each voxel in the CT volume, instead of a scalar image in Hounsfield units. Because the space of linear attenuation curves in the energy ranges of medical applications can be accurately described through a two-dimensional manifold, this decomposition procedure would be, in principle, limited to two materials. This paper describes an algorithm that overcomes this limitation, allowing for the estimation of N-tuples of material-decomposed images. The algorithm works by assuming that the mixing of substances and tissue types in the human body has the physicochemical properties of an ideal solution, which yields a model for the density of the imaged material mix. Under this model the mass attenuation curve of each voxel in the image can be estimated, immediately resulting in a material-decomposed image triplet. Decomposition into an arbitrary number of pre-selected materials can be achieved by automatically selecting adequate triplets from an application-specific material library. The decomposition is expressed in terms of the volume fractions of each constituent material in the mix; this provides for a straightforward, physically meaningful interpretation of the data. One important application of this technique is in the digital removal of contrast agent from a dual-energy exam, producing a virtual nonenhanced image, as well as in the quantification of the concentration of contrast observed in a targeted region, thus providing an accurate measure of tissue perfusion.
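Under the ideal-solution model described above, the volume fractions of a chosen material triplet follow from the measured attenuation pair plus the constraint that the fractions sum to one, giving three equations in three unknowns. A toy sketch per voxel (the attenuation values and material triplet are made up for illustration):

```python
import numpy as np

# Hypothetical linear-attenuation values (1/cm) of a material triplet at the
# two effective energies of a dual-energy scan (numbers are illustrative).
M = np.array([[0.20, 4.90, 1.10],   # low-energy row:  water, iodine, calcium
              [0.18, 2.10, 0.55]])  # high-energy row
f_true = np.array([0.7, 0.1, 0.2])  # true volume fractions at one voxel
mu = M @ f_true                     # "measured" dual-energy attenuation pair

# Two measurements plus the sum-to-one constraint give a 3x3 linear system.
A = np.vstack([M, np.ones(3)])
b = np.append(mu, 1.0)
f_est = np.linalg.solve(A, b)
```

Decomposition into more than three materials then amounts to picking the best-fitting triplet per voxel from an application-specific library, as the abstract describes.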
Liu, Guicai; Liao, Yanfen; Ma, Xiaoqian
2017-03-01
As important plastic blends in End-of-Life Vehicles (ELV), the pyrolysis profiles of ABS/PVC, ABS/PA6 and ABS/PC were investigated using a thermogravimetric-Fourier transform infrared spectrometer (TG-FTIR). CaCO3 was also added as a plastic filler to assess its effects on the pyrolysis of these plastics. The results showed that the interaction between ABS and PVC made PVC pyrolysis occur earlier and slightly accelerated HCl emission. Mixing ABS and PA6 brought their decomposition temperatures closer, and ketones in the PA6 pyrolysis products were reduced. The presence of ABS made PC pyrolysis occur earlier, and phenyl compounds in the PC pyrolysis products could be transferred into alcohols or H2O. The interaction between ABS and the other polymers during pyrolysis could be attributed to intermolecular radical transfer: free radicals from the polymer that decomposed first led to fast initiation of the decomposition of the other polymer. As a plastic filler, CaCO3 promoted the thermal decomposition of PA6 and PC, and had no obvious effect on the ABS and PVC pyrolysis processes. CaCO3 also made the pyrolysis products from PA6 and PC decompose further into small-molecule compounds such as CO2. The kinetics analysis showed that an isoconversional method such as the Starink method was more suitable for these polymer blends. The Starink method gave average activation energies for ABS50/PVC50, ABS50/PA50 and ABS50/PC50 of 186.63 kJ/mol, 239.61 kJ/mol and 248.95 kJ/mol, respectively, and the interaction among them could be reflected by the activation energy variation. Copyright © 2017 Elsevier Ltd. All rights reserved.
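The Starink isoconversional method referenced above fits ln(β/T^1.92) against 1/T at a fixed conversion; the slope is −1.0008·Ea/R. A sketch with synthetic temperatures constructed to satisfy the relation exactly (the 200 kJ/mol value is illustrative, not one of the paper's results):

```python
import numpy as np

R = 8.314                      # gas constant, J/(mol K)
Ea_true = 200e3                # illustrative activation energy, J/mol
C = 10.0                       # intercept constant at a fixed conversion

# Iso-conversional temperatures at three heating rates, built to satisfy
# the Starink relation ln(beta / T^1.92) = C - 1.0008 * Ea / (R * T):
T = np.array([600.0, 610.0, 620.0])
beta = T ** 1.92 * np.exp(C - 1.0008 * Ea_true / (R * T))

# Fit ln(beta / T^1.92) vs 1/T; the slope recovers the activation energy.
slope, intercept = np.polyfit(1.0 / T, np.log(beta / T ** 1.92), 1)
Ea_est = -slope * R / 1.0008
```

In practice T is read off the TG curves at the same conversion level for each heating rate, and the fit is repeated across conversion levels to obtain the average Ea values quoted in the abstract.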
Method for preparing a thick film conductor
Nagesh, Voddarahalli K.; Fulrath, deceased, Richard M.
1978-01-01
A method for preparing a thick film conductor which comprises providing surface active glass particles, mixing the surface active glass particles with a thermally decomposable organometallic compound, for example, a silver resinate, and then decomposing the organometallic compound by heating, thereby chemically depositing metal on the glass particles. The glass particle mixture is applied to a suitable substrate either before or after the organometallic compound is thermally decomposed. The resulting system is then fired in an oxidizing atmosphere, providing a microstructure of glass particles substantially uniformly coated with metal.
Rasouli, Omid; Vasseljen, Ottar; Fors, Egil A; Lorås, Håvard W; Stensdotter, Ann-Katrin
2018-01-01
As many similar symptoms are reported in fibromyalgia (FM) and chronic fatigue syndrome (CFS), underlying deficits may potentially also be similar. Postural disequilibrium reported in both conditions may thus be explained by similar deviations in postural control strategies. 75 females (25/group FM, CFS and control, age 19-49 years) performed 60 s of quiet standing on a force platform in each of three conditions: 1) firm surface with vision, 2) firm surface without vision and, 3) compliant surface with vision. Migration of the center of pressure was decomposed into a slow and a fast component, denoting postural sway and the lateral forces controlling postural sway, analyzed in the time and frequency domains. Main effects of group for the antero-posterior (AP) and medio-lateral (ML) directions showed that patients displayed larger amplitudes (AP, p = 0.002; ML, p = 0.021) and lower frequencies (AP, p < 0.001; ML, p < 0.001) for the slow component, as well as for the fast component (amplitudes: AP, p = 0.010; ML, p = 0.001 and frequencies: AP, p = 0.001; ML, p = 0.029) compared to controls. Post hoc analyses showed no significant differences between patient groups. In conclusion, both the CFS and FM groups differed from the control group. Larger postural sway and insufficient control were found in patients compared to controls, with no significant differences between the two patient groups.
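The slow/fast split of the center-of-pressure signal can be sketched as low-pass filtering plus residual; the 0.4 s moving-average cut used here is our assumption for illustration, and the study's actual decomposition method may differ:

```python
import numpy as np

rng = np.random.default_rng(2)
fs = 100                                   # sampling rate, Hz (assumed)
t = np.arange(0, 60, 1 / fs)               # 60 s of quiet standing
# Toy COP trace: slow 0.2 Hz sway plus fast low-amplitude corrections.
cop = np.sin(2 * np.pi * 0.2 * t) + 0.1 * rng.standard_normal(len(t))

win = int(0.4 * fs)                        # 0.4 s moving-average window (assumed)
kernel = np.ones(win) / win
slow = np.convolve(cop, kernel, mode="same")   # slow component ~ postural sway
fast = cop - slow                              # fast component ~ corrective forces
```

Amplitude and frequency measures of the kind reported in the abstract are then computed separately on the slow and fast components, in each direction (AP, ML).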
Klix, Sabrina; Hezel, Fabian; Fuchs, Katharina; Ruff, Jan; Dieringer, Matthias A.; Niendorf, Thoralf
2014-01-01
Purpose Design, validation and application of an accelerated fast spin-echo (FSE) variant that uses a split-echo approach for self-calibrated parallel imaging. Methods For self-calibrated, split-echo FSE (SCSE-FSE), extra displacement gradients were incorporated into FSE to decompose odd and even echo groups, which were independently phase encoded to derive coil sensitivity maps and to generate undersampled data (reduction factor up to R = 3). Reference and undersampled data were acquired simultaneously. SENSE reconstruction was employed. Results The feasibility of SCSE-FSE was demonstrated in phantom studies. The point spread function performance of SCSE-FSE was found to be competitive with traditional FSE variants. The immunity of SCSE-FSE to motion-induced mis-registration between reference and undersampled data was shown using a dynamic left ventricular model and cardiac imaging. The applicability of black-blood prepared SCSE-FSE for cardiac imaging was demonstrated in healthy volunteers, including accelerated multi-slice per breath-hold imaging and accelerated high spatial resolution imaging. Conclusion SCSE-FSE obviates the need for external reference scans for SENSE-reconstructed parallel imaging with FSE. SCSE-FSE reduces the risk of mis-registration between reference scans and accelerated acquisitions. SCSE-FSE is feasible for imaging of the heart and of large cardiac vessels but also meets the needs of brain, abdominal and liver imaging. PMID:24728341
Ghaffari, Mahsa; Tangen, Kevin; Alaraj, Ali; Du, Xinjian; Charbel, Fady T; Linninger, Andreas A
2017-12-01
In this paper, we present a novel technique for automatic parametric mesh generation of subject-specific cerebral arterial trees. This technique generates high-quality and anatomically accurate computational meshes for fast blood flow simulations, extending the scope of 3D vascular modeling to a large portion of cerebral arterial trees. For this purpose, a parametric meshing procedure was developed to automatically decompose the vascular skeleton, extract geometric features and generate hexahedral meshes using a body-fitted coordinate system that optimally follows the vascular network topology. To validate the anatomical accuracy of the reconstructed vasculature, we performed statistical analysis to quantify the alignment between parametric meshes and raw vascular images using a receiver operating characteristic curve. Geometric accuracy evaluation showed agreement between the constructed mesh and raw MRA data sets, with an area-under-the-curve value of 0.87. Parametric meshing yielded, on average, 36.6% and 21.7% improvements in orthogonal and equiangular skew quality over unstructured tetrahedral meshes. The parametric meshing and processing pipeline constitutes an automated technique to reconstruct and simulate blood flow throughout a large portion of the cerebral arterial tree, down to the level of pial vessels. This study is the first step towards fast large-scale subject-specific hemodynamic analysis for clinical applications. Copyright © 2017 Elsevier Ltd. All rights reserved.
Diversity of Riparian Plants among and within Species Shapes River Communities
Jackrel, Sara L.; Wootton, J. Timothy
2015-01-01
Organismal diversity among and within species may affect ecosystem function with effects transmitting across ecosystem boundaries. Whether recipient communities adjust their composition, in turn, to maximize their function in response to changes in donor composition at these two scales of diversity is unknown. We use small stream communities that rely on riparian subsidies as a model system. We used leaf pack experiments to ask how variation in plants growing beside streams in the Olympic Peninsula of Washington State, USA affects stream communities via leaf subsidies. Leaves from red alder (Alnus rubra), vine maple (Acer circinatum), bigleaf maple (Acer macrophyllum) and western hemlock (Tsuga heterophylla) were assembled in leaf packs to contrast low versus high diversity, and deployed in streams to compare local versus non-local leaf sources at the among and within species scales. Leaves from individuals within species decomposed at varying rates; most notably thin leaves decomposed rapidly. Among deciduous species, vine maple decomposed most rapidly, harbored the least algal abundance, and supported the greatest diversity of aquatic invertebrates, while bigleaf maple was at the opposite extreme for these three metrics. Recipient communities decomposed leaves from local species rapidly: leaves from early successional plants decomposed rapidly in stream reaches surrounded by early successional forest and leaves from later successional plants decomposed rapidly adjacent to later successional forest. The species diversity of leaves inconsistently affected decomposition, algal abundance and invertebrate metrics. Intraspecific diversity of leaf packs also did not affect decomposition or invertebrate diversity. However, locally sourced alder leaves decomposed more rapidly and harbored greater levels of algae than leaves sourced from conspecifics growing in other areas on the Olympic Peninsula, but did not harbor greater aquatic invertebrate diversity. 
In contrast to alder, local intraspecific differences via decomposition, algal or invertebrate metrics were not observed consistently among maples. These results emphasize that biodiversity of riparian subsidies at the within and across species scale have the potential to affect aquatic ecosystems, although there are complex species-specific effects. PMID:26539714
Simandl, Ronald F.; Brown, John D.; Whinnery, Jr., LeRoy L.
1999-01-01
In an improved ozone-decomposing air filter, carbon fibers are held together with a carbonized binder in a perforated structure. The structure is made by combining rayon fibers with gelatin, forming the mixture in a mold, freeze-drying, and vacuum baking.
Limited-memory fast gradient descent method for graph regularized nonnegative matrix factorization.
Guan, Naiyang; Wei, Lei; Luo, Zhigang; Tao, Dacheng
2013-01-01
Graph regularized nonnegative matrix factorization (GNMF) decomposes a nonnegative data matrix X ∈ R^(m×n) into the product of two lower-rank nonnegative factor matrices, i.e., W ∈ R^(m×r) and H ∈ R^(r×n) (r < min{m,n}), and aims to preserve the local geometric structure of the dataset by minimizing the squared Euclidean distance or Kullback-Leibler (KL) divergence between X and WH. The multiplicative update rule (MUR) is usually applied to optimize GNMF, but it suffers from slow convergence because it intrinsically advances one step along the rescaled negative gradient direction with a non-optimal step size. Recently, a multiple step-sizes fast gradient descent (MFGD) method has been proposed for optimizing NMF; it accelerates MUR by searching for the optimal step size along the rescaled negative gradient direction with Newton's method. However, the computational cost of MFGD is high because 1) the high-dimensional Hessian matrix is dense and costs too much memory, and 2) the Hessian inverse operator and its multiplication with the gradient cost too much time. To overcome these deficiencies of MFGD, we propose an efficient limited-memory FGD (L-FGD) method for optimizing GNMF. In particular, we apply the limited-memory BFGS (L-BFGS) method to directly approximate the multiplication of the inverse Hessian and the gradient when searching for the optimal step size in MFGD. Preliminary results on real-world datasets show that L-FGD is more efficient than both MFGD and MUR. To evaluate the effectiveness of L-FGD, we validate its clustering performance for optimizing KL-divergence-based GNMF on two popular face image datasets (ORL and PIE) and two text corpora (Reuters and TDT2). The experimental results confirm the effectiveness of L-FGD in comparison with representative GNMF solvers.
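As a concrete illustration, the baseline multiplicative update rule can be sketched for plain (unregularized) NMF with squared Euclidean loss; the graph-regularization term and the L-FGD acceleration described above are omitted, and all matrix values below are toy examples:

```python
# Toy multiplicative update rule (MUR) for plain NMF with squared
# Euclidean loss. Matrices are small lists-of-lists; this is a sketch,
# not the paper's GNMF implementation.

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def mur_step(X, W, H, eps=1e-9):
    """One multiplicative update of W and H for min ||X - WH||^2."""
    Ht = transpose(H)
    XHt, WHHt = matmul(X, Ht), matmul(matmul(W, H), Ht)
    W = [[W[i][j] * XHt[i][j] / (WHHt[i][j] + eps)
          for j in range(len(W[0]))] for i in range(len(W))]
    Wt = transpose(W)
    WtX, WtWH = matmul(Wt, X), matmul(matmul(Wt, W), H)
    H = [[H[i][j] * WtX[i][j] / (WtWH[i][j] + eps)
          for j in range(len(H[0]))] for i in range(len(H))]
    return W, H

def loss(X, W, H):
    WH = matmul(X := X, W := W, H := H) if False else matmul(W, H)
    return sum((X[i][j] - WH[i][j]) ** 2
               for i in range(len(X)) for j in range(len(X[0])))

X = [[1.0, 2.0], [2.0, 4.0]]          # exactly rank-1 toy data
W, H = [[0.5], [0.5]], [[1.0, 1.0]]   # r = 1 nonnegative factors
for _ in range(50):
    W, H = mur_step(X, W, H)
```

Each update multiplies the factors elementwise by a ratio of nonnegative terms, which preserves nonnegativity but, as the abstract notes, converges slowly in general.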
Reactive codoping of GaAlInP compound semiconductors
Hanna, Mark Cooper [Boulder, CO; Reedy, Robert [Golden, CO
2008-02-12
A GaAlInP compound semiconductor and a method of producing a GaAlInP compound semiconductor are provided. The apparatus and method comprise a GaAs crystal substrate in a metal organic vapor deposition reactor. Al, Ga, and In vapors are prepared by thermally decomposing organometallic compounds. P vapors are prepared by thermally decomposing phosphine gas; group II vapors are prepared by thermally decomposing an organometallic group IIA or IIB compound; and group VIB vapors are prepared by thermally decomposing a gaseous compound of group VIB. The Al, Ga, In, P, group II, and group VIB vapors grow a GaAlInP crystal doped with group IIA or IIB and group VIB elements on the substrate, wherein the group IIA or IIB and group VIB vapors produce a codoped GaAlInP compound semiconductor with the group IIA or IIB element serving as a p-type dopant having low group II atomic diffusion.
Yang, Ya; Moore, Michael J.; Brockington, Samuel F.; Soltis, Douglas E.; Wong, Gane Ka-Shu; Carpenter, Eric J.; Zhang, Yong; Chen, Li; Yan, Zhixiang; Xie, Yinlong; Sage, Rowan F.; Covshoff, Sarah; Hibberd, Julian M.; Nelson, Matthew N.; Smith, Stephen A.
2015-01-01
Many phylogenomic studies based on transcriptomes have been limited to “single-copy” genes due to methodological challenges in homology and orthology inference. Only a relatively small number of studies have explored analyses beyond reconstructing species relationships. We sampled 69 transcriptomes in the hyperdiverse plant clade Caryophyllales and 27 outgroups from annotated genomes across eudicots. Using a combined similarity- and phylogenetic tree-based approach, we recovered 10,960 homolog groups, each represented by at least eight ingroup taxa. By decomposing these homolog trees, taking gene duplications into account, we obtained 17,273 ortholog groups, each represented by at least ten ingroup taxa. We reconstructed the species phylogeny using a 1,122-gene data set with a gene occupancy of 92.1%. From the homolog trees, we found that both synonymous and nonsynonymous substitution rates in herbaceous lineages are up to three times as fast as in their woody relatives. This is the first time such a pattern has been shown across thousands of nuclear genes with dense taxon sampling. We also pinpointed regions of the Caryophyllales tree characterized by relatively high frequencies of gene duplication, including three previously unrecognized whole-genome duplications. By further combining information from homolog tree topology and synonymous distance between paralog pairs, phylogenetic locations for 13 putative genome duplication events were identified. Genes that experienced the greatest gene family expansion were concentrated among those involved in signal transduction and oxidoreduction, including a cytochrome P450 gene that encodes a key enzyme in the betalain synthesis pathway. Our study demonstrates a new framework for functional phylogenomic analysis in nonmodel species that is based on homolog groups in addition to inferred ortholog groups. PMID:25837578
3D tumor measurement in cone-beam CT breast imaging
NASA Astrophysics Data System (ADS)
Chen, Zikuan; Ning, Ruola
2004-05-01
Cone-beam CT breast imaging provides a digital volume representation of a breast. With a digital breast volume, the immediate task is to extract breast tissue information, especially for suspicious tumors, preferably automatically or with minimal user interaction. This paper reports a program for three-dimensional breast tissue analysis. It consists of volumetric segmentation (by global thresholding), subsegmentation (connection-based separation), and volumetric component measurement (volume, surface, shape, and other geometrical specifications). A combined scheme of multi-thresholding and binary volume morphology is proposed to rapidly determine surface gradients, which may be interpreted as the surface evolution (outward growth or inward shrinkage) of a tumor volume. This scheme is also used to optimize the volumetric segmentation. Given a binary volume, we decompose the foreground into components according to spatial connectedness. Since this decomposition is performed after volumetric segmentation, it is called subsegmentation. Subsegmentation facilitates component visualization and measurement in the whole support space, without interference from other components. Once the tumor component is identified, we measure the following specifications: volume, surface area, roundness, elongation, aspect, star-shapedness, and location (centroid). A 3D morphological operation is used to extract the cluster shell and, by delineating the corresponding volume from the grayscale volume, to measure shell stiffness. This 3D tissue measurement is demonstrated on a tumor-bearing breast specimen (a surgical part).
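The connection-based "subsegmentation" step can be sketched as connected-component labeling of a binary volume; this minimal pure-Python version uses 6-connectivity and invented toy data, not the paper's implementation:

```python
# Connected-component "subsegmentation" of a binary volume using BFS
# flood fill with 6-connectivity.
from collections import deque

def label_components(volume):
    """Map component id -> list of (z, y, x) foreground voxels."""
    nz, ny, nx = len(volume), len(volume[0]), len(volume[0][0])
    seen, components, cid = set(), {}, 0
    for z in range(nz):
        for y in range(ny):
            for x in range(nx):
                if not volume[z][y][x] or (z, y, x) in seen:
                    continue
                cid += 1
                components[cid] = []
                queue = deque([(z, y, x)])
                seen.add((z, y, x))
                while queue:
                    cz, cy, cx = queue.popleft()
                    components[cid].append((cz, cy, cx))
                    for dz, dy, dx in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                                       (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                        n = (cz + dz, cy + dy, cx + dx)
                        if (0 <= n[0] < nz and 0 <= n[1] < ny
                                and 0 <= n[2] < nx and n not in seen
                                and volume[n[0]][n[1]][n[2]]):
                            seen.add(n)
                            queue.append(n)
    return components

# Two disjoint blobs in a 3x3x3 volume:
vol = [[[0] * 3 for _ in range(3)] for _ in range(3)]
vol[0][0][0] = vol[0][0][1] = 1   # two-voxel blob
vol[2][2][2] = 1                  # single-voxel blob
parts = label_components(vol)
```

Each component can then be measured (volume = voxel count, centroid, etc.) independently of the others, which is the convenience the abstract describes.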
Widening and Deepening Questions in Web-Based Investigative Learning
ERIC Educational Resources Information Center
Kashihara, Akihiro; Akiyama, Naoto
2016-01-01
The Web allows learners to investigate any question with a great variety of Web resources, from which they can construct wider and deeper knowledge. In such an investigative learning process, it is important for them to deepen and widen the question, which involves decomposing it into sub-questions to be further investigated. This…
Early Decomposition in Visual Word Recognition: Dissociating Morphology, Form, and Meaning
ERIC Educational Resources Information Center
Marslen-Wilson, William D.; Bozic, Mirjana; Randall, Billi
2008-01-01
The role of morphological, semantic, and form-based factors in the early stages of visual word recognition was investigated across different SOAs in a masked priming paradigm, focusing on English derivational morphology. In a first set of experiments, stimulus pairs co-varying in morphological decomposability and in semantic and orthographic…
NASA Astrophysics Data System (ADS)
Katayama, Ayumi; Khoon Koh, Lip; Kume, Tomonori; Makita, Naoki; Matsumoto, Kazuho; Ohashi, Mizue
2016-04-01
Considerable carbon is allocated belowground and used for respiration and the production of roots. Approximately 40% of gross primary production (GPP) is reportedly allocated belowground in a Bornean tropical rainforest, much more than in Neotropical rainforests; this may be caused by high root production in this forest. The ingrowth core is a popular method for estimating fine root production, but a recent study by Osawa et al. (2012) showed that this method can underestimate production because it does not account for roots that decompose during the measurement period. Accounting for decomposed roots is especially important in the tropics, where decomposition rates are higher than in other regions. The objective of this study is therefore to estimate fine root production, with consideration of decomposed roots, using ingrowth cores and root litter bags in a tropical rainforest. The study was conducted in Lambir Hills National Park in Borneo. Ingrowth cores and litter bags for fine roots were buried in March 2013. Eighteen ingrowth cores and 27 litter bags were collected in May and September 2013, March 2014, and March 2015. Fine root production was comparable to aboveground biomass increment and litterfall amount and accounted for only 10% of GPP at this study site, suggesting that most of the carbon allocated belowground is used for other purposes. Fine root production was comparable to that in the Neotropics. Decomposed roots accounted for 18% of fine root production. This result suggests that neglecting decomposed fine roots may lead to underestimation of fine root production.
Jackrel, Sara L.; Wootton, J. Timothy
2015-01-01
Herbivores induce plants to undergo diverse processes that minimize costs to the plant, such as producing defences to deter herbivory or reallocating limited resources to inaccessible portions of the plant. Yet most plant tissue is consumed by decomposers, not herbivores, and these defensive processes aimed to deter herbivores may alter plant tissue even after detachment from the plant. All consumers value nutrients, but plants also require these nutrients for primary functions and defensive processes. We experimentally simulated herbivory with and without nutrient additions on red alder (Alnus rubra), which supplies the majority of leaf litter for many rivers in western North America. Simulated herbivory induced a defence response with cascading effects: terrestrial herbivores and aquatic decomposers fed less on leaves from stressed trees. This effect was context dependent: leaves from fertilized-only trees decomposed most rapidly while leaves from fertilized trees receiving the herbivory treatment decomposed least, suggesting plants funnelled a nutritionally valuable resource into enhanced defence. One component of the defence response was a decrease in leaf nitrogen leading to elevated carbon : nitrogen. Aquatic decomposers prefer leaves naturally low in C : N and this altered nutrient profile largely explains the lower rate of aquatic decomposition. Furthermore, terrestrial soil decomposers were unaffected by either treatment but did show a preference for local and nitrogen-rich leaves. Our study illustrates the ecological implications of terrestrial herbivory and these findings demonstrate that the effects of selection caused by terrestrial herbivory in one ecosystem can indirectly shape the structure of other ecosystems through ecological fluxes across boundaries. PMID:25788602
Maheshwari, Shishir; Pachori, Ram Bilas; Acharya, U Rajendra
2017-05-01
Glaucoma is an ocular disorder caused by increased intraocular fluid pressure. It damages the optic nerve and subsequently causes loss of vision. The available scanning methods are Heidelberg retinal tomography, scanning laser polarimetry, and optical coherence tomography; these methods are expensive and require experienced clinicians. There is therefore a need for accurate, low-cost glaucoma diagnosis. In this paper, we present a new methodology for automated diagnosis of glaucoma from digital fundus images based on the empirical wavelet transform (EWT). The EWT is used to decompose the image, and correntropy features are obtained from the decomposed EWT components. These features are ranked using a t-value-based feature selection algorithm and then used to classify normal and glaucoma images with a least-squares support vector machine (LS-SVM) classifier. The LS-SVM is employed with radial basis function, Morlet wavelet, and Mexican-hat wavelet kernels. The classification accuracy of the proposed method is 98.33% and 96.67% using threefold and tenfold cross-validation, respectively.
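The t-value feature ranking can be sketched as follows; Welch's t statistic is assumed here, and the feature values are invented, since the abstract does not give the exact formula:

```python
# Rank features by |t| between two classes (Welch's t statistic).
# Toy sketch: feature 0 separates the classes, feature 1 is noise.
import math

def t_value(a, b):
    """Welch's t statistic magnitude for two samples (lists of floats)."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return abs(ma - mb) / math.sqrt(va / len(a) + vb / len(b))

def rank_features(class_a, class_b):
    """class_a/class_b: lists of feature vectors; returns feature
    indices sorted most-discriminative first."""
    scores = []
    for j in range(len(class_a[0])):
        a = [row[j] for row in class_a]
        b = [row[j] for row in class_b]
        scores.append((t_value(a, b), j))
    return [j for _, j in sorted(scores, reverse=True)]

normal_feats   = [[0.10, 5.0], [0.20, 4.9], [0.15, 5.1]]
glaucoma_feats = [[0.90, 5.0], [1.00, 5.1], [0.95, 4.9]]
order = rank_features(normal_feats, glaucoma_feats)
```

The top-ranked indices would then feed the classifier; the cutoff for how many features to keep is a design choice the abstract does not specify.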
DOE Office of Scientific and Technical Information (OSTI.GOV)
Futatani, S.; Bos, W.J.T.; Del-Castillo-Negrete, Diego B
2011-01-01
We assess two techniques for extracting coherent vortices out of turbulent flows: the wavelet-based Coherent Vorticity Extraction (CVE) and the Proper Orthogonal Decomposition (POD). The former decomposes the flow field into an orthogonal wavelet representation; subsequent thresholding of the coefficients allows one to split the flow into organized coherent vortices with non-Gaussian statistics and a structureless, incoherent random part. POD is based on the singular value decomposition and decomposes the flow into basis functions that are optimal with respect to the retained energy for the ensemble average. Both techniques are applied to direct numerical simulation data of two-dimensional drift-wave turbulence governed by the Hasegawa-Wakatani equation, considering two limit cases: the quasi-hydrodynamic and quasi-adiabatic regimes. The results are compared in terms of compression rate, retained energy, retained enstrophy and retained radial flux, together with the enstrophy spectrum and higher-order statistics. © 2010 Published by Elsevier Masson SAS on behalf of Académie des sciences.
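The CVE idea can be illustrated in 1-D with an orthogonal Haar transform: threshold the coefficients and reconstruct "coherent" and "incoherent" parts that sum back to the original signal. The paper works on 2-D vorticity fields with more sophisticated wavelets; this sketch, with an invented signal and threshold, only shows the split:

```python
# 1-D Haar analogue of coherent vorticity extraction.

def haar(signal):
    """Full orthogonal Haar decomposition (length must be a power of 2)."""
    coeffs, approx = [], list(signal)
    while len(approx) > 1:
        detail = [(approx[i] - approx[i + 1]) / 2 ** 0.5
                  for i in range(0, len(approx), 2)]
        approx = [(approx[i] + approx[i + 1]) / 2 ** 0.5
                  for i in range(0, len(approx), 2)]
        coeffs.append(detail)
    return approx[0], coeffs

def inverse_haar(a0, coeffs):
    approx = [a0]
    for detail in reversed(coeffs):
        nxt = []
        for a, d in zip(approx, detail):
            nxt += [(a + d) / 2 ** 0.5, (a - d) / 2 ** 0.5]
        approx = nxt
    return approx

def cve_split(signal, threshold):
    """Keep large coefficients (coherent part), drop the rest."""
    a0, coeffs = haar(signal)
    keep = [[d if abs(d) >= threshold else 0.0 for d in lvl] for lvl in coeffs]
    drop = [[0.0 if abs(d) >= threshold else d for d in lvl] for lvl in coeffs]
    return inverse_haar(a0, keep), inverse_haar(0.0, drop)

sig = [4.0, 4.0, 4.0, 4.0, 4.1, 3.9, 4.05, 3.95]  # mean flow + weak noise
coh, inc = cve_split(sig, threshold=0.2)
```

Because the transform is orthogonal and the split is linear, the two parts always sum exactly to the original field, which is what makes the decomposition lossless.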
Infrared small target detection in heavy sky scene clutter based on sparse representation
NASA Astrophysics Data System (ADS)
Liu, Depeng; Li, Zhengzhou; Liu, Bing; Chen, Wenhao; Liu, Tianmei; Cao, Lei
2017-09-01
A novel infrared small target detection method based on sparse representation of sky clutter and target is proposed in this paper to cope with the representation uncertainty of clutter and target. The sky-scene background clutter is described by a fractal random field and is perceived and eliminated via sparse representation over a fractal background over-complete dictionary (FBOD). The infrared small target signal is simulated by a generalized Gaussian intensity model and is expressed by a generalized Gaussian target over-complete dictionary (GGTOD), which can describe small targets more efficiently than traditional structured dictionaries. The infrared image is decomposed on the union of the FBOD and GGTOD, and the sparse representation energies of the target signal and background clutter on the GGTOD differ so distinctly that this difference is adopted to distinguish target from clutter. Experimental results show that the proposed approach improves small target detection performance, especially under heavy clutter, because background clutter can be efficiently perceived and suppressed by the FBOD and the changing target can be accurately represented by the GGTOD.
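The decomposition over a union of dictionaries can be sketched with plain matching pursuit in 1-D; the atoms and signal below are invented stand-ins for the FBOD/GGTOD construction, which the abstract does not fully specify:

```python
# Matching pursuit over a union of a "background" dictionary (smooth
# atoms) and a "target" dictionary (localized spikes). The energy each
# dictionary captures is then compared, mirroring the detection idea.
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def normalize(v):
    n = math.sqrt(dot(v, v))
    return [x / n for x in v]

def matching_pursuit(signal, atoms, steps=10):
    """Greedy MP; returns per-atom coefficients."""
    residual = list(signal)
    coeffs = [0.0] * len(atoms)
    for _ in range(steps):
        projs = [dot(residual, a) for a in atoms]
        k = max(range(len(atoms)), key=lambda i: abs(projs[i]))
        coeffs[k] += projs[k]
        residual = [r - projs[k] * a for r, a in zip(residual, atoms[k])]
    return coeffs

background = [normalize([1.0] * 8),
              normalize([1, 1, 1, 1, -1, -1, -1, -1])]
target = [normalize([0, 0, 0, 1.0, 0, 0, 0, 0]),
          normalize([0, 0, 0, 0, 1.0, 0, 0, 0])]
atoms = background + target

spike = [0, 0, 0, 2.0, 0, 0, 0, 0]   # a small "target" in the scene
c = matching_pursuit(spike, atoms)
target_energy = sum(x * x for x in c[len(background):])
background_energy = sum(x * x for x in c[:len(background)])
```

A spike-like signal concentrates its representation energy on the target atoms, while smooth clutter would load the background atoms instead.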
Zhou, Qingping; Jiang, Haiyan; Wang, Jianzhou; Zhou, Jianling
2014-10-15
Exposure to high concentrations of fine particulate matter (PM₂.₅) can cause serious health problems because PM₂.₅ contains microscopic solid or liquid droplets that are sufficiently small to be ingested deep into human lungs. Thus, daily prediction of PM₂.₅ levels is notably important for regulatory plans that inform the public and restrict social activities in advance when harmful episodes are foreseen. A hybrid EEMD-GRNN (ensemble empirical mode decomposition-general regression neural network) model based on data preprocessing and analysis is firstly proposed in this paper for one-day-ahead prediction of PM₂.₅ concentrations. The EEMD part is utilized to decompose original PM₂.₅ data into several intrinsic mode functions (IMFs), while the GRNN part is used for the prediction of each IMF. The hybrid EEMD-GRNN model is trained using input variables obtained from principal component regression (PCR) model to remove redundancy. These input variables accurately and succinctly reflect the relationships between PM₂.₅ and both air quality and meteorological data. The model is trained with data from January 1 to November 1, 2013 and is validated with data from November 2 to November 21, 2013 in Xi'an Province, China. The experimental results show that the developed hybrid EEMD-GRNN model outperforms a single GRNN model without EEMD, a multiple linear regression (MLR) model, a PCR model, and a traditional autoregressive integrated moving average (ARIMA) model. The hybrid model with fast and accurate results can be used to develop rapid air quality warning systems. Copyright © 2014 Elsevier B.V. All rights reserved.
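The GRNN component is essentially Nadaraya-Watson kernel regression over the training samples; a minimal sketch follows (EEMD preprocessing omitted, toy values in place of PM₂.₅ records):

```python
# Minimal general regression neural network (GRNN): a Gaussian-kernel
# weighted average of training targets. Sigma (the smoothing factor)
# and the data are illustrative choices, not the paper's settings.
import math

def grnn_predict(x, train_x, train_y, sigma=1.0):
    """Predict y at x as a Gaussian-weighted average of training targets."""
    weights = [math.exp(-sum((a - b) ** 2 for a, b in zip(x, xi))
                        / (2 * sigma ** 2))
               for xi in train_x]
    total = sum(weights)
    return sum(w * y for w, y in zip(weights, train_y)) / total

# Toy one-feature training set following y = 2 * x:
train_x = [[0.0], [1.0], [2.0], [3.0]]
train_y = [0.0, 2.0, 4.0, 6.0]
pred = grnn_predict([1.5], train_x, train_y, sigma=0.3)
```

In the hybrid model, one such predictor would be trained per intrinsic mode function, and the per-IMF forecasts summed to give the PM₂.₅ prediction.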
Fast object reconstruction in block-based compressive low-light-level imaging
NASA Astrophysics Data System (ADS)
Ke, Jun; Sui, Dong; Wei, Ping
2014-11-01
In this paper we propose a simple yet effective and efficient method for long-term object tracking. Unlike traditional visual tracking methods, which mainly depend on frame-to-frame correspondence, we combine high-level semantic information with low-level correspondences. Our method is formulated in a confidence selection framework, which allows the system to recover from drift and partly deal with occlusion. The algorithm decomposes into an initialization stage and a tracking stage. In the initialization stage, an offline classifier is trained to capture object appearance information at the category level. When the video stream arrives, the pre-trained offline classifier detects the potential target and initializes the tracking stage. The tracking stage consists of three parts: online tracking, offline detection, and confidence judgment. The online tracking part captures target-specific appearance information, while the detection part localizes the object using the pre-trained offline classifier. Since there is no data dependence between online tracking and offline detection, the two parts run in parallel, significantly improving processing speed. A confidence selection mechanism is proposed to optimize the object location. We also propose a simple mechanism to judge the absence of the object: if the target is lost, the pre-trained offline classifier re-initializes the whole algorithm once the target is re-located. In experiments, we evaluate our method on several challenging video sequences and demonstrate competitive results.
Innovating Method of Existing Mechanical Product Based on TRIZ Theory
NASA Astrophysics Data System (ADS)
Zhao, Cunyou; Shi, Dongyan; Wu, Han
The main approaches to product development are adaptive design and variant design based on existing products. In this paper, a conceptual design framework and its flow model for product innovation are put forward by combining conceptual design methods with TRIZ theory. A process system model of innovative design is constructed, comprising requirement analysis, total function analysis and decomposition, engineering problem analysis, engineering problem solving, and preliminary design; this establishes the basis for the innovative redesign of existing products.
An Improved Interferometric Calibration Method Based on Independent Parameter Decomposition
NASA Astrophysics Data System (ADS)
Fan, J.; Zuo, X.; Li, T.; Chen, Q.; Geng, X.
2018-04-01
Interferometric SAR is sensitive to earth surface undulation. The accuracy of interferometric parameters plays a significant role in producing a precise digital elevation model (DEM). Interferometric calibration obtains a high-precision global DEM by estimating the interferometric parameters from ground control points (GCPs). However, interferometric parameters are usually calculated jointly, making them difficult to decompose precisely. In this paper, we propose an interferometric calibration method based on independent parameter decomposition (IPD). Firstly, the parameters related to interferometric SAR measurement are determined based on the three-dimensional reconstruction model. Secondly, the sensitivity of the interferometric parameters is quantitatively analyzed after the geometric parameters are completely decomposed. Finally, each interferometric parameter is calculated based on IPD and an interferometric calibration model is established. We take Weinan in Shaanxi province as an example and choose 4 TerraDEM-X image pairs to carry out an interferometric calibration experiment. The results show that the elevation accuracy of all SAR images is better than 2.54 m after interferometric calibration. Furthermore, the proposed method obtains DEM products with accuracy better than 2.43 m in flat areas and 6.97 m in mountainous areas, demonstrating the correctness and effectiveness of the proposed IPD-based interferometric calibration method. The results provide a technical basis for topographic mapping at 1:50,000 and larger scales in flat and mountainous areas.
PEROXIDE DESTRUCTION TESTING FOR THE 200 AREA EFFLUENT TREATMENT FACILITY
DOE Office of Scientific and Technical Information (OSTI.GOV)
HALGREN DL
2010-03-12
The hydrogen peroxide decomposer columns at the 200 Area Effluent Treatment Facility (ETF) have been taken out of service due to ongoing problems with particulate fines and poor destruction performance from the granular activated carbon (GAC) used in the columns. An alternatives search was initiated and led to bench-scale testing and then pilot-scale testing. Based on the bench-scale testing, three manganese dioxide based catalysts were evaluated in the peroxide destruction pilot column installed at the 300 Area Treated Effluent Disposal Facility. The ten-inch-diameter, nine-foot-tall, clear polyvinyl chloride (PVC) column allowed for the same six-foot catalyst bed depth as in the existing ETF system. The flow rate to the column was controlled to evaluate performance at the same superficial velocity (gpm/ft²) as the full-scale design flow and normal process flow. Each catalyst was evaluated on peroxide destruction performance and on particulate fines capacity and carryover. Peroxide destruction was measured by hydrogen peroxide concentration analysis of samples taken before and after the column. The presence of fines in the column headspace and in the discharge from carryover was generally assessed by visual observation. All three catalysts met the peroxide destruction criteria by achieving hydrogen peroxide discharge concentrations of less than 0.5 mg/L at the design flow with inlet peroxide concentrations greater than 100 mg/L. The Sud-Chemie T-2525 catalyst was markedly better at minimizing fines and particle carryover. It is anticipated that the T-2525 can be installed as a direct replacement for the GAC in the peroxide decomposer columns. Based on the results of this development work, the recommendation is to purchase the T-2525 catalyst and initially load one of the ETF decomposer columns for full-scale testing.
Wavelet regression model in forecasting crude oil price
NASA Astrophysics Data System (ADS)
Hamid, Mohd Helmie; Shabri, Ani
2017-05-01
This study presents the performance of a wavelet multiple linear regression (WMLR) technique in daily crude oil forecasting. The WMLR model was developed by integrating the discrete wavelet transform (DWT) and the multiple linear regression (MLR) model. The original time series was decomposed into sub-time series at different scales by wavelet theory. Correlation analysis was conducted to assist in the selection of optimal decomposed components as inputs for the WMLR model. The daily WTI crude oil price series was used to test the prediction capability of the proposed model. The forecasting performance of the WMLR model was also compared with regular multiple linear regression (MLR), autoregressive integrated moving average (ARIMA) and generalized autoregressive conditional heteroscedasticity (GARCH) models using root mean square error (RMSE) and mean absolute error (MAE). Based on the experimental results, the WMLR model performs better than the other forecasting techniques tested in this study.
Distributed Task Offloading in Heterogeneous Vehicular Crowd Sensing
Liu, Yazhi; Wang, Wendong; Ma, Yuekun; Yang, Zhigang; Yu, Fuxing
2016-01-01
The ability of road vehicles to efficiently execute different sensing tasks varies because of the heterogeneity in their sensing ability and trajectories. Therefore, the data collection sensing task, which requires tempo-spatial sensing data, becomes a serious problem in vehicular sensing systems, particularly those with limited sensing capabilities. A utility-based sensing task decomposition and offloading algorithm is proposed in this paper. The utility function for a task executed by a certain vehicle is built according to the mobility traces and sensing interfaces of the vehicle, as well as the sensing data type and tempo-spatial coverage requirements of the sensing task. Then, the sensing tasks are decomposed and offloaded to neighboring vehicles according to the utilities of the neighboring vehicles to the decomposed sensing tasks. Real trace-driven simulation shows that the proposed task offloading is able to collect much more comprehensive and uniformly distributed sensing data than other algorithms. PMID:27428967
Understanding a reference-free impedance method using collocated piezoelectric transducers
NASA Astrophysics Data System (ADS)
Kim, Eun Jin; Kim, Min Koo; Sohn, Hoon; Park, Hyun Woo
2010-03-01
A new concept of a reference-free impedance method, which does not require direct comparison with a baseline impedance signal, is proposed for damage detection in a plate-like structure. A single pair of piezoelectric (PZT) wafers collocated on both surfaces of a plate is utilized for extracting electro-mechanical signatures (EMS) associated with mode conversion due to damage. A numerical simulation is conducted to investigate the EMS of collocated PZT wafers in the frequency domain in the presence of damage through spectral element analysis. Then, the EMS due to mode conversion induced by damage are extracted using a signal decomposition technique based on the polarization characteristics of the collocated PZT wafers. The effects of the size and location of damage on the decomposed EMS are investigated as well. Finally, the applicability of the decomposed EMS to reference-free damage diagnosis is discussed.
NASA Technical Reports Server (NTRS)
McDowell, Mark
2004-01-01
An integrated algorithm for decomposing overlapping particle images (multi-particle objects), along with determining each object's constituent particle centroid(s), has been developed using image analysis techniques. The centroid-finding algorithm uses a modified eight-direction search method for finding the perimeter of any enclosed object. The centroid is calculated using the intensity-weighted center of mass of the object. The overlap decomposition algorithm further analyzes the object data and breaks it down into its constituent particle centroid(s). This is accomplished with an artificial neural network, feature-based technique and provides an efficient way of decomposing overlapping particles. Combining the centroid-finding and overlap decomposition routines into a single algorithm allows us to accurately predict the error associated with finding the centroid(s) of particles in our experiments. This algorithm has been tested using real, simulated, and synthetic data and the results are presented and discussed.
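The intensity-weighted center-of-mass step can be sketched directly; the image below is a toy example, and the perimeter search and neural-network overlap decomposition are not reproduced:

```python
# Intensity-weighted centroid of a 2-D grayscale image.

def centroid(image):
    """Return the (row, col) intensity-weighted center of mass."""
    total = r_acc = c_acc = 0.0
    for i, row in enumerate(image):
        for j, v in enumerate(row):
            total += v
            r_acc += i * v
            c_acc += j * v
    return r_acc / total, c_acc / total

# A uniform 2x2 bright particle centered between rows/cols 1 and 2:
img = [[0, 0, 0, 0],
       [0, 9, 9, 0],
       [0, 9, 9, 0],
       [0, 0, 0, 0]]
cen = centroid(img)
```

For a uniform particle this reduces to the geometric center; non-uniform intensity pulls the centroid toward the brighter pixels, which is why the weighting matters for real particle images.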
Upgrading non-oxidized carbon nanotubes by thermally decomposed hydrazine
NASA Astrophysics Data System (ADS)
Wang, Pen-Cheng; Liao, Yu-Chun; Liu, Li-Hung; Lai, Yu-Ling; Lin, Ying-Chang; Hsu, Yao-Jane
2014-06-01
We found that the electrical properties of conductive thin films based on non-oxidized carbon nanotubes (CNTs) could be further improved when the CNTs consecutively underwent a mild hydrazine adsorption treatment and then a sufficiently effective thermal desorption treatment. We also found that, after several rounds of vapor-phase hydrazine treatments and baking treatments were applied to an inferior single-CNT field-effect transistor device, the device showed improvement in Ion/Ioff ratio and reduction in the extent of gate-sweeping hysteresis. Our experimental results indicate that, even though hydrazine is a well-known reducing agent, the characteristics of our hydrazine-exposed CNT samples subject to certain treatment conditions could become more graphenic than graphanic, suggesting that the improvement in the electrical and electronic properties of CNT samples could be related to the transient bonding and chemical scavenging of thermally decomposed hydrazine on the surface of CNTs.
A new approach for SSVEP detection using PARAFAC and canonical correlation analysis.
Tello, Richard; Pouryazdian, Saeed; Ferreira, Andre; Beheshti, Soosan; Krishnan, Sridhar; Bastos, Teodiano
2015-01-01
This paper presents a new way to automatically detect SSVEPs through correlation analysis between tensor models. A 3-way EEG tensor (channel × frequency × time) is decomposed into its constituent factor matrices using the PARAFAC model. PARAFAC analysis of the EEG tensor decomposes multichannel EEG into constituent temporal, spectral and spatial signatures. SSVEPs, characterized by localized spectral and spatial signatures, are then detected by exploiting a correlation analysis between the extracted signatures of the EEG tensor and the corresponding simulated signatures of all target SSVEP signals. The SSVEP with the highest correlation is selected as the intended target. Two flickers blinking at 8 and 13 Hz were used as visual stimuli, and detection was performed on non-overlapping 1-second data packets. Five subjects participated in the experiments, and the highest classification rate of 83.34% was achieved, corresponding to an Information Transfer Rate (ITR) of 21.01 bits/min.
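The final selection step, choosing the target whose simulated signature correlates best with the extracted signature, can be sketched in pure Python. The toy signatures below are invented for illustration; the paper extracts its signatures via PARAFAC:

```python
import math

def pearson(a, b):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def detect_target(extracted_signature, simulated_signatures):
    """Return the index of the reference signature with the highest correlation."""
    scores = [pearson(extracted_signature, s) for s in simulated_signatures]
    return max(range(len(scores)), key=scores.__getitem__)

# Toy spectral signatures peaking at the 8 Hz vs 13 Hz frequency bins
ref_8hz  = [0.1, 1.0, 0.1, 0.1]
ref_13hz = [0.1, 0.1, 0.1, 1.0]
observed = [0.2, 0.9, 0.1, 0.2]   # resembles the 8 Hz target
print(detect_target(observed, [ref_8hz, ref_13hz]))  # 0
```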
‘Willpower’ over the life span: decomposing self-regulation
Ayduk, Ozlem; Berman, Marc G.; Casey, B. J.; Gotlib, Ian H.; Jonides, John; Kross, Ethan; Teslovich, Theresa; Wilson, Nicole L.; Zayas, Vivian
2011-01-01
In the 1960s, Mischel and colleagues developed a simple ‘marshmallow test’ to measure preschoolers’ ability to delay gratification. In numerous follow-up studies over 40 years, this ‘test’ proved to have surprisingly significant predictive validity for consequential social, cognitive and mental health outcomes over the life course. In this article, we review key findings from the longitudinal work and from earlier delay-of-gratification experiments examining the cognitive appraisal and attention control strategies that underlie this ability. Further, we outline a set of hypotheses that emerge from the intersection of these findings with research on ‘cognitive control’ mechanisms and their neural bases. We discuss implications of these hypotheses for decomposing the phenomena of ‘willpower’ and the lifelong individual differences in self-regulatory ability that were identified in the earlier research and that are currently being pursued. PMID:20855294
Modeling diffusion control on organic matter decomposition in unsaturated soil pore space
NASA Astrophysics Data System (ADS)
Vogel, Laure; Pot, Valérie; Garnier, Patricia; Vieublé-Gonod, Laure; Nunan, Naoise; Raynaud, Xavier; Chenu, Claire
2014-05-01
Soil organic matter decomposition is affected by soil structure and water content, but field and laboratory studies of this issue yield highly variable outcomes. The variability could be explained by the discrepancy between the scale at which key processes occur and the scale of measurement. We argue that the physical and biological interactions driving carbon transformation dynamics are best understood at the pore scale. Because of the spatial disconnection between carbon sources and decomposers, the latter rely on nutrient transport unless they can actively move. In the hydrostatic case, diffusion in the soil pore space is thus thought to regulate biological activity. In unsaturated conditions, the heterogeneous distribution of water modifies diffusion pathways and rates, and thus affects diffusion control on decomposition. Innovative imaging and modeling tools offer new means to address these effects. We have developed a new model that couples a 3D lattice-Boltzmann model (LBM) with a nondimensional decomposition module. We designed scenarios to study the impact of physical properties (geometry, saturation, decomposer position) and biological properties on decomposition. The model was applied to porous media with various morphologies. We selected three cubic images, 100 voxels on a side, from µCT-scanned images of an undisturbed soil sample at 68 µm resolution. We used the LBM to perform phase separation and obtained water-phase distributions at equilibrium for different saturation indices. We then simulated the diffusion of a simple soluble substrate (glucose) and its consumption by bacteria. The same mass of glucose was added as a pulse at the beginning of all simulations. Bacteria were placed in a few voxels, either regularly spaced or concentrated close to or far from the glucose source. We modulated the physiological features of the decomposers in order to weigh them against abiotic conditions.
We identified several effects that create unequal substrate access for decomposers and hence induce contrasting decomposition kinetics: the position of bacteria relative to substrate diffusion pathways, the diffusion rate and hydraulic connectivity between bacteria and the substrate source, and local substrate enrichment due to restricted mass transfer. Physiological characteristics had a strong impact on decomposition only when glucose diffused easily, not when diffusion limitation prevailed. This suggests that carbon dynamics should not be considered to derive from decomposer physiology alone, but rather from the interactions of biological and physical processes at the microscale.
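The core idea, that decomposer position relative to a substrate pulse controls decomposition kinetics when diffusion is limiting, can be illustrated with a toy 1D finite-difference model. This is a deliberately simplified sketch (explicit diffusion, first-order uptake at one cell, invented parameter values), not the authors' 3D lattice-Boltzmann model:

```python
def simulate(n=20, steps=200, D=0.2, k=0.5, source=0, bacteria=5):
    """Explicit 1D diffusion of a substrate pulse with first-order uptake
    at a single 'bacteria' cell; reflecting (zero-flux) boundaries.
    Returns the total substrate consumed over the simulated time."""
    c = [0.0] * n
    c[source] = 1.0          # glucose pulse at t = 0
    consumed = 0.0
    for _ in range(steps):
        nxt = c[:]
        for i in range(n):
            left = c[i - 1] if i > 0 else c[i]
            right = c[i + 1] if i < n - 1 else c[i]
            nxt[i] = c[i] + D * (left - 2 * c[i] + right)
        uptake = k * nxt[bacteria]
        nxt[bacteria] -= uptake
        consumed += uptake
        c = nxt
    return consumed

# Bacteria near the pulse decompose more than bacteria far from it
print(simulate(bacteria=2) > simulate(bacteria=15))  # True
```

Even this crude model reproduces the qualitative result: with short diffusion pathways the biology dominates, while with long pathways transport limits decomposition.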
Scalable Domain Decomposed Monte Carlo Particle Transport
DOE Office of Scientific and Technical Information (OSTI.GOV)
O'Brien, Matthew Joseph
2013-12-05
In this dissertation, we present the parallel algorithms necessary to run domain decomposed Monte Carlo particle transport on large numbers of processors (millions of processors). Previous algorithms were not scalable, and the parallel overhead became more computationally costly than the numerical simulation.
Decomposers and the fire cycle in a phryganic (East Mediterranean) ecosystem.
Arianoutsou-Faraggitaki, M; Margaris, N S
1982-06-01
Dehydrogenase activity, cellulose decomposition, nitrification, and CO2 release were measured for 2 years to estimate the effects of a wildfire on a phryganic ecosystem. In the decomposer subsystem we found that, compared with the control site, fire mainly affected the nitrification process throughout the period, and soil respiration in the second post-fire year. Our data suggest that after 3-4 months the activity of microbial decomposers is almost the same at the two sites, indicating that fire is not a catastrophic event but a simple perturbation common to Mediterranean-type ecosystems.
NASA Astrophysics Data System (ADS)
Tan, Wee Choon; Iwai, Hiroshi; Kishimoto, Masashi; Brus, Grzegorz; Szmyd, Janusz S.; Yoshida, Hideo
2018-04-01
Planar solid oxide fuel cells (SOFCs) fueled with decomposed ammonia are numerically studied to investigate the effect of the cell aspect ratio. The ammonia decomposer is assumed to be located next to the SOFCs, and the heat required for the endothermic decomposition reaction is supplied by thermal radiation from the SOFCs. Cells with aspect ratios (ratios of the streamwise length to the spanwise width) between 0.130 and 7.68 are provided with the reactants at a constant mass flow rate. A parametric study is conducted by varying the cell temperature and fuel utilization factor to investigate their effects on cell performance in terms of voltage efficiency. The effect of the heat supply to the ammonia decomposer is also studied. The developed model shows good agreement, in terms of the current-voltage curve, with experimental data obtained from a short stack, without parameter tuning. The simulation study reveals that the cell with the highest aspect ratio achieves the highest performance under furnace operation. On the other hand, the cell with an aspect ratio of 0.750, which has the highest voltage efficiency of 0.67, is capable of thermally sustaining the ammonia decomposer at a fuel utilization of 0.80 using the thermal radiation from both sidewalls.
Barantal, Sandra; Schimann, Heidy; Fromin, Nathalie; Hättenschwiler, Stephan
2014-01-01
Plant leaf litter generally decomposes faster as a group of different species than when individual species decompose alone, but the underlying mechanisms of these diversity effects remain poorly understood. Because resource C : N : P stoichiometry (i.e. the ratios of these key elements) exerts strong control on consumers, we hypothesized that the stoichiometric dissimilarity of litter mixtures (i.e. the divergence in C : N : P ratios among species) improves resource complementarity to decomposers, leading to faster mixture decomposition. We tested this hypothesis with: (i) a wide range of leaf litter mixtures of neotropical tree species varying in C : N : P dissimilarity, and (ii) a nutrient addition experiment (C, N and P) to create stoichiometric similarity. Litter mixtures were decomposed in the field using two types of litterbags, either allowing or preventing access by soil fauna. Litter mixture mass loss was higher than expected from species decomposing singly, especially in the presence of soil fauna. With fauna, synergistic litter mixture effects increased with increasing stoichiometric dissimilarity of the litter mixtures, and this positive relationship disappeared with fertilizer addition. Our results indicate that litter stoichiometric dissimilarity drives mixture effects via the nutritional requirements of soil fauna. Incorporating ecological stoichiometry in biodiversity research allows refinement of the underlying mechanisms of how changing biodiversity affects ecosystem functioning. PMID:25320173
Decomposition by ectomycorrhizal fungi alters soil carbon storage in a simulation model
Moore, J. A. M.; Jiang, J.; Post, W. M.; ...
2015-03-06
Carbon cycle models often lack explicit belowground organism activity, yet belowground organisms regulate carbon storage and release in soil. Ectomycorrhizal fungi are important players in the carbon cycle because they are a conduit into soil for carbon assimilated by the plant. It is hypothesized that ectomycorrhizal fungi can also be active decomposers when plant carbon allocation to fungi is low. Here, we reviewed the literature on ectomycorrhizal decomposition and we developed a simulation model of the plant-mycorrhizae interaction where a reduction in plant productivity stimulates ectomycorrhizal fungi to decompose soil organic matter. Our review highlights evidence demonstrating the potential for ectomycorrhizal fungi to decompose soil organic matter. Our model output suggests that ectomycorrhizal activity accounts for a portion of carbon decomposed in soil, but this portion varied with plant productivity and the mycorrhizal carbon uptake strategy simulated. Lower organic matter inputs to soil were largely responsible for reduced soil carbon storage. Using mathematical theory, we demonstrated that biotic interactions affect predictions of ecosystem functions. Specifically, we developed a simple function to model the mycorrhizal switch in function from plant symbiont to decomposer. In conclusion, we show that including mycorrhizal fungi with the flexibility of mutualistic and saprotrophic lifestyles alters predictions of ecosystem function.
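A "switch" from symbiont to decomposer as plant carbon allocation falls could take the form of a smooth threshold function. The logistic form, parameter names, and values below are illustrative assumptions, not the function used in the paper's model:

```python
import math

def mycorrhizal_decomposition_rate(plant_c_allocation, max_rate=1.0,
                                   threshold=0.5, steepness=10.0):
    """Hypothetical sigmoid switch: ectomycorrhizal decomposition activity
    ramps up as plant carbon allocation to the fungus falls below a
    threshold, and shuts off when allocation is high (symbiotic mode)."""
    return max_rate / (1.0 + math.exp(steepness * (plant_c_allocation - threshold)))

# Low allocation -> fungi decompose; high allocation -> mostly symbiotic
print(mycorrhizal_decomposition_rate(0.1) > mycorrhizal_decomposition_rate(0.9))  # True
```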
Koda, Shin-ichi
2015-05-28
Existing studies have shown that certain linear dynamical systems defined on a dendritic network are, in special cases, equivalent to systems defined on a set of one-dimensional networks, and that this transformation to a simpler picture, which we call linear chain (LC) decomposition, offers a significant advantage in understanding the properties of dendrimers. In this paper, we expand the class of LC-decomposable systems with several generalizations. In addition, we propose two general sufficient conditions for LC decomposability, together with a procedure to systematically realize the LC decomposition. Some examples of LC-decomposable linear dynamical systems are also presented with their graphs. The generalization of the LC decomposition covers three aspects: (i) the type of linear operators; (ii) the shape of the dendritic networks on which the linear operators are defined; and (iii) the type of symmetry operations representing the symmetry of the systems. In generalization (iii), symmetry groups that represent the symmetry of dendritic systems are defined. The LC decomposition is realized by changing the basis of a linear operator defined on a dendritic network into bases of irreducible representations of the symmetry group. These results make it easier to utilize the LC decomposition in various cases, which may lead to a further understanding of the relation between the structure and functions of dendrimers in future studies.
Photodecomposition of volatile organic compounds using TiO2 nanoparticles.
Jwo, Ching-Song; Chang, Ho; Kao, Mu-Jnug; Lin, Chi-Hsiang
2007-06-01
This study examined the photodecomposition of volatile organic compounds (VOCs) using a TiO2 catalyst fabricated by the Submerged Arc Nanoparticle Synthesis System (SANSS). The TiO2 catalyst was employed to decompose volatile organic compounds and was compared with Degussa P25 TiO2 in terms of decomposition efficiency. In the electric-discharge manufacturing process, a Ti bar, applied as the electrode, was melted and vaporized at high temperature. The vaporized Ti powders were then rapidly quenched under low-temperature and low-pressure conditions in deionized water, nucleating and forming nanocrystalline powders uniformly dispersed in the base solvent. The average diameter of the TiO2 nanoparticles was 20 nm. X-ray diffraction analysis confirmed that the nanoparticles in the deionized water were anatase-type TiO2. It was found that gaseous toluene exposed to UV irradiation produced intermediates that were even harder to decompose. After 60 min of photodecomposition, Degussa P25 TiO2 reduced the concentration of gaseous toluene to 8.18%, while the concentration after decomposition by the SANSS TiO2 catalyst dropped to 0.35%. Under UV irradiation at 253.7 and 184.9 nm, TiO2 prepared by SANSS can produce strong chemical debonding energy, thus showing great efficiency, superior to that of Degussa P25 TiO2, in decomposing gaseous toluene and its intermediates.
Zhang, Xiaoxing; Chen, Qinchuan; Tang, Ju; Hu, Weihua; Zhang, Jinbin
2014-01-01
The detection of partial discharge by analyzing the components of SF6 gas in gas-insulated switchgear is important to the diagnosis and assessment of the operational state of power equipment. A gas sensor based on anatase TiO2 is used to detect decomposed gases in SF6. In this paper, first-principles density functional theory calculations are adopted to analyze the adsorption of SO2, SOF2, and SO2F2, the primary decomposition by-products of SF6 under partial discharge, on anatase (101) and (001) surfaces. Simulation results show that the perfect anatase (001) surface interacts more strongly with the three gases than the anatase (101) surface, and that both surfaces are more sensitive and selective to SO2 than to SOF2 and SO2F2. The selectivity of a defective surface toward SO2, SOF2, and SO2F2 differs from that of a perfect surface. This theoretical result is corroborated by a sensing experiment using a TiO2 nanotube array (TNTA) gas sensor. The calculated values are analyzed to explain the results of the Pt-doped TNTA gas sensor experiment. The results imply that Pt nanoparticles deposited on the surface increase the number of active sites and that gas molecules may decompose upon adsorption on these active sites. PMID:24755845
Fan, Pingping; Guo, Dali
2010-06-01
Among tree fine roots, the distal small-diameter lateral branches comprising first- and second-order roots lack secondary (wood) development. These roots are therefore expected to decompose more rapidly than higher order woody roots, but this prediction has not been tested and may not be correct. Current evidence suggests that lower order roots may decompose more slowly than higher order roots in tree species associated with ectomycorrhizal (EM) fungi, because they are preferentially colonized by fungi and encased in a fungal sheath rich in chitin (a recalcitrant compound). In trees associated with arbuscular mycorrhizal (AM) fungi, lower order roots do not form fungal sheaths, but they may have poorer C quality, e.g. lower concentrations of soluble carbohydrates and higher concentrations of acid-insolubles than higher order roots, and thus may decompose more slowly. In addition, litter with high concentrations of acid-insolubles decomposes more slowly when N concentrations are high (as in lower order roots). Therefore, we propose that in both AM and EM trees, lower order roots decompose more slowly than higher order roots due to the combination of poor C quality and high N concentrations. To test this hypothesis, we examined decomposition of the first six root orders in Fraxinus mandshurica (an AM species) and Larix gmelinii (an EM species) using the litterbag method in northeastern China. We found that lower order roots of both species decomposed more slowly than higher order roots, and this pattern appears to be associated mainly with initial C quality and N concentrations. Because these lower order roots have short life spans and thus dominate root mortality, their slow decomposition implies that a substantial fraction of the stable soil organic matter pool is derived from lower order roots, at least in the two species we studied.
NASA Technical Reports Server (NTRS)
Takacs, Lawrence L.; Sawyer, William; Suarez, Max J. (Editor); Fox-Rabinowitz, Michael S.
1999-01-01
This report documents the techniques used to filter quantities on a stretched grid general circulation model. Standard high-latitude filtering techniques (e.g., using an FFT (Fast Fourier Transformations) to decompose and filter unstable harmonics at selected latitudes) applied on a stretched grid are shown to produce significant distortions of the prognostic state when used to control instabilities near the pole. A new filtering technique is developed which accurately accounts for the non-uniform grid by computing the eigenvectors and eigenfrequencies associated with the stretching. A filter function, constructed to selectively damp those modes whose associated eigenfrequencies exceed some critical value, is used to construct a set of grid-spaced weights which are shown to effectively filter without distortion. Both offline and GCM (General Circulation Model) experiments are shown using the new filtering technique. Finally, a brief examination is also made on the impact of applying the Shapiro filter on the stretched grid.
NASA Astrophysics Data System (ADS)
Wu, Xiaolin; Rong, Yue
2015-12-01
Quality-of-service (QoS) criteria (measured here as a minimum capacity requirement) are very important to practical indoor power line communication (PLC) applications, as they greatly affect the user experience. With a two-way multicarrier relay configuration, in this paper we investigate joint terminal and relay power optimization for the indoor broadband PLC environment, where the relay node works in the amplify-and-forward (AF) mode. As the QoS-constrained power allocation problem is highly non-convex, the globally optimal solution is computationally intractable. To overcome this challenge, we propose an alternating optimization (AO) method that decomposes the problem into three convex/quasi-convex sub-problems. Simulation results demonstrate the fast convergence of the proposed algorithm under practical PLC channel conditions. Compared with the conventional bidirectional direct transmission (BDT) system, the relay-assisted two-way information exchange (R2WX) scheme can meet the same QoS requirement with less total power consumption.
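The AO idea, cycling through blocks of variables and solving each convex sub-problem exactly while the others are held fixed, can be shown on a toy problem. The objective below is invented for illustration (it is non-convex jointly but convex in each variable alone); it is not the paper's PLC power-allocation problem:

```python
def alternating_optimization(x=2.0, y=0.5, iters=50):
    """Minimize f(x, y) = (x*y - 1)^2 + 0.1*(x^2 + y^2) by alternating
    exact single-variable minimizations (each update has a closed form
    because f is quadratic in one variable with the other fixed)."""
    for _ in range(iters):
        x = y / (y * y + 0.1)   # argmin over x with y fixed
        y = x / (x * x + 0.1)   # argmin over y with x fixed
    return x, y, (x * y - 1) ** 2 + 0.1 * (x * x + y * y)

x, y, f = alternating_optimization()
print(round(f, 3))  # 0.19
```

Each update can only decrease the objective, so the iterates converge to a stationary point (here x = y = sqrt(0.9), f = 0.19); as in the paper, AO trades global optimality for tractable sub-problems.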
Block-Parallel Data Analysis with DIY2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morozov, Dmitriy; Peterka, Tom
DIY2 is a programming model and runtime for block-parallel analytics on distributed-memory machines. Its main abstraction is block-structured data parallelism: data are decomposed into blocks; blocks are assigned to processing elements (processes or threads); computation is described as iterations over these blocks; and communication between blocks is defined by reusable patterns. By expressing computation in this general form, the DIY2 runtime is free to optimize the movement of blocks between slow and fast memories (disk and flash vs. DRAM) and to concurrently execute blocks residing in memory with multiple threads. This enables the same program to execute in-core, out-of-core, serial, parallel, single-threaded, multithreaded, or combinations thereof. This paper describes the implementation of the main features of the DIY2 programming model and optimizations to improve performance. DIY2 is evaluated on benchmark test cases to establish baseline performance for several common patterns and on larger complete analysis codes running on large-scale HPC machines.
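The block-structured pattern, decompose data into blocks, iterate a local operation over blocks, then combine the results, can be sketched serially in Python. This illustrates only the abstraction; DIY2 itself is a C++ library that distributes blocks over MPI processes and threads:

```python
def decompose(data, nblocks):
    """Split `data` into `nblocks` contiguous blocks of near-equal size."""
    n = len(data)
    bounds = [n * i // nblocks for i in range(nblocks + 1)]
    return [data[bounds[i]:bounds[i + 1]] for i in range(nblocks)]

def foreach_blocks(blocks, local_op):
    """Computation described as an iteration of `local_op` over blocks."""
    return [local_op(b) for b in blocks]

data = list(range(10))
blocks = decompose(data, 3)
partial = foreach_blocks(blocks, sum)  # independent local computation
print(sum(partial))  # 45 -- global reduction over the block results
```

Because each block's computation is independent, the runtime is free to schedule blocks across threads or page them between memory and disk, which is exactly the flexibility the abstraction buys DIY2.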
Tracking dipeptides at work-uptake and intracellular fate in CHO culture.
Sánchez-Kopper, Andres; Becker, Max; Pfizenmaier, Jennifer; Kessler, Christian; Karau, Andreas; Takors, Ralf
2016-12-01
Market demand for monoclonal antibodies (mAbs) is steadily increasing worldwide. As a result, production processes using Chinese hamster ovary (CHO) cells are the focus of ongoing intensification studies aimed at maximizing cell-specific and volumetric productivities. This includes the optimization of animal-derived-component-free (ADCF) cultivation media as part of good cell culture practice. Dipeptides are known to improve CHO culture performance, but little, and even conflicting, evidence exists about their putative import and functionality inside the cells. A set of well-known performance boosters and new dipeptide prospects was evaluated. The present study revealed that dipeptides are indeed imported into the cells, where they are decomposed into their amino acid building blocks. Subsequently, these are metabolized or, unexpectedly, secreted into the medium. Monoclonal antibody production-boosting additives such as L-alanine-L-glutamine (AQ) and glycyl-L-glutamine (GQ) can be assigned to fast or slow dipeptide uptake, respectively, pointing to the need to study dipeptide kinetics and to adjust their feeding individually to optimize mAb production.
NASA Astrophysics Data System (ADS)
Puranen, Jouni; Lagerbom, Juha; Hyvärinen, Leo; Kylmälahti, Mikko; Himanen, Olli; Pihlatie, Mikko; Kiviaho, Jari; Vuoristo, Petri
2011-01-01
Manganese cobalt oxide spinel doped with Fe2O3 was studied as a protective coating on ferritic stainless steel interconnects. Chromium alloying causes problems at high operating temperatures in oxidizing conditions, where chromium compounds evaporate and poison the cathode active area, causing degradation of the solid oxide fuel cell. To prevent chromium evaporation, these interconnects need a protective coating that blocks chromium evaporation while maintaining adequate electrical conductivity. Thermal spraying is regarded as a promising way to produce dense, protective layers. In the present work, ceramic Mn-Co-Fe oxide spinel coatings were produced using the atmospheric plasma spray process. Coatings with low thickness and low porosity were produced by optimizing the deposition conditions. The original spinel structure decomposed because of the rapid solid-liquid-solid transformation during spraying, but was partially restored by a post-annealing treatment.
A spectral approach for discrete dislocation dynamics simulations of nanoindentation
NASA Astrophysics Data System (ADS)
Bertin, Nicolas; Glavas, Vedran; Datta, Dibakar; Cai, Wei
2018-07-01
We present a spectral approach to perform nanoindentation simulations using three-dimensional nodal discrete dislocation dynamics. The method relies on a two step approach. First, the contact problem between an indenter of arbitrary shape and an isotropic elastic half-space is solved using a spectral iterative algorithm, and the contact pressure is fully determined on the half-space surface. The contact pressure is then used as a boundary condition of the spectral solver to determine the resulting stress field produced in the simulation volume. In both stages, the mechanical fields are decomposed into Fourier modes and are efficiently computed using fast Fourier transforms. To further improve the computational efficiency, the method is coupled with a subcycling integrator and a special approach is devised to approximate the displacement field associated with surface steps. As a benchmark, the method is used to compute the response of an elastic half-space using different types of indenter. An example of a dislocation dynamics nanoindentation simulation with complex initial microstructure is presented.
NASA Astrophysics Data System (ADS)
Lamani, S. D.; Veeresh, T. M.; Nandibewoor, S. T.
2009-12-01
The kinetics of oxidation of L-phenylalanine (L-Phe) by diperiodatoargentate(III) (DPA) in alkaline medium at a constant ionic strength of 0.25 mol dm-3 has been studied spectrophotometrically. The reaction between DPA and L-phenylalanine in alkaline medium exhibits 1:1 stoichiometry (L-phenylalanine:DPA). The reaction is first order in [DPA], shows less-than-unit-order dependence on both [L-Phe] and [alkali], and is retarded by [IO4-] under the reaction conditions. The active species of DPA is understood to be monoperiodatoargentate(III) (MPA). The reaction is shown to proceed via an MPA-L-Phe complex, which decomposes in a rate-determining step to give intermediates, followed by fast steps to give the products. The products were identified by spot tests and spectroscopic studies. The reaction constants involved in the different steps of the mechanism were calculated. The activation parameters with respect to the slow step of the mechanism were computed and discussed, and the thermodynamic quantities for the reaction were also determined.
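A mechanism of this general shape (a rapid DPA/MPA pre-equilibrium that releases periodate, complexation of MPA with L-Phe, then rate-determining decomposition of the complex) yields a rate law of the following form. The symbols $K_1$, $K_2$ and $k$ denote generic equilibrium and rate constants to illustrate the kind of expression such mechanisms produce; they are not the paper's fitted values:

\[
\text{rate} \;=\; \frac{k\, K_1 K_2\,[\mathrm{DPA}]_T\,[\text{L-Phe}]\,[\mathrm{OH}^-]}
{[\mathrm{IO_4^-}] \;+\; K_1[\mathrm{OH}^-] \;+\; K_1 K_2\,[\mathrm{OH}^-][\text{L-Phe}]}
\]

This form reproduces the reported behavior: first order in total [DPA], less-than-unit order in [L-Phe] and [OH-], and retardation by added periodate (which appears in the denominator through the pre-equilibrium).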
Adaptive Skin Meshes Coarsening for Biomolecular Simulation
Shi, Xinwei; Koehl, Patrice
2011-01-01
In this paper, we present efficient algorithms for generating hierarchical molecular skin meshes with decreasing size and guaranteed quality. Our algorithms generate a sequence of coarse meshes for both the surfaces and the bounded volumes. Each coarser surface mesh is adaptive to the surface curvature and maintains the topology of the skin surface with guaranteed mesh quality. The corresponding tetrahedral mesh conforms to the interface surface mesh and contains high-quality tetrahedra that decompose both the interior of the molecule and the surrounding region (enclosed in a sphere). Our hierarchical tetrahedral meshes have a number of advantages that will facilitate fast and accurate multigrid PDE solvers. Firstly, the quality of both the surface triangulations and the tetrahedral meshes is guaranteed. Secondly, the interface in the tetrahedral mesh is an accurate approximation of the molecular boundary; in particular, all the boundary points lie on the skin surface. Thirdly, our meshes are Delaunay meshes. Finally, the meshes are adaptive to the geometry. PMID:21779137
Determinants of carbon release from the active layer and permafrost deposits on the Tibetan Plateau
Chen, Leiyi; Liang, Junyi; Qin, Shuqi; Liu, Li; Fang, Kai; Xu, Yunping; Ding, Jinzhi; Li, Fei; Luo, Yiqi; Yang, Yuanhe
2016-01-01
The sign and magnitude of permafrost carbon (C)-climate feedback are highly uncertain due to the limited understanding of the decomposability of thawing permafrost and relevant mechanistic controls over C release. Here, by combining aerobic incubation with biomarker analysis and a three-pool model, we reveal that C quality (represented by a higher amount of fast cycling C but a lower amount of recalcitrant C compounds) and normalized CO2–C release in permafrost deposits were similar or even higher than those in the active layer, demonstrating a high vulnerability of C in Tibetan upland permafrost. We also illustrate that C quality exerts the most control over CO2–C release from the active layer, whereas soil microbial abundance is more directly associated with CO2–C release after permafrost thaw. Taken together, our findings highlight the importance of incorporating microbial properties into Earth System Models when predicting permafrost C dynamics under a changing environment. PMID:27703168
Soil chemistry changes beneath decomposing cadavers over a one-year period.
Szelecz, Ildikó; Koenig, Isabelle; Seppey, Christophe V W; Le Bayon, Renée-Claire; Mitchell, Edward A D
2018-05-01
Decomposing vertebrate cadavers release large, localized inputs of nutrients. These temporally limited resource patches affect nutrient cycling and soil organisms. The impact of decomposing cadavers on soil chemistry is relevant to soil biology, as a natural disturbance, and forensic science, to estimate the postmortem interval. However, cadaver impacts on soils are rarely studied, making it difficult to identify common patterns. We investigated the effects of decomposing pig cadavers (Sus scrofa domesticus) on soil chemistry (pH, ammonium, nitrate, nitrogen, phosphorous, potassium and carbon) over a one-year period in a spruce-dominant forest. Four treatments were applied, each with five replicates: two treatments including pig cadavers (placed on the ground and hung one metre above ground) and two controls (bare soil and bags filled with soil placed on the ground i.e. "fake pig" treatment). In the first two months (15-59 days after the start of the experiment), cadavers caused significant increases of ammonium, nitrogen, phosphorous and potassium (p<0.05) whereas nitrate significantly increased towards the end of the study (263-367 days; p<0.05). Soil pH increased significantly at first and then decreased significantly at the end of the experiment. After one year, some markers returned to basal levels (i.e. not significantly different from control plots), whereas others were still significantly different. Based on these response patterns and in comparison with previous studies, we define three categories of chemical markers that may have the potential to date the time since death: early peak markers (EPM), late peak markers (LPM) and late decrease markers (LDM). The marker categories will enhance our understanding of soil processes and can be highly useful when changes in soil chemistry are related to changes in the composition of soil organism communities. 
For actual casework, further studies and more data are necessary to refine the marker categories along a more precise timeline and to develop a method that can be used in court.
Giuliani, Sara; McArthur, Alexa; Greenwood, John
2015-11-01
Major burn injury patients commonly fast preoperatively before multiple surgical procedures. The societies of anesthesiology in Europe and the United States recommend fasting from clear fluids for two hours and from solids for six to eight hours preoperatively. However, at the Royal Adelaide Hospital, patients often fast from midnight preceding the day of surgery. This project aims to promote evidence-based practice to minimize extended preoperative fasting in major burn patients. A baseline audit was conducted measuring percentage compliance with audit criteria, specifically on preoperative fasting documentation and appropriate instructions in line with evidence-based guidelines. Strategies were then implemented to address areas of non-compliance, including staff education, development of documentation tools and completion of a perioperative feeding protocol for major burn patients. Following this, a post-implementation audit assessed the extent of change compared with the baseline audit results. Education on evidence-based fasting guidelines was delivered to 54% of staff. This resulted in a 19% improvement in compliance with fasting documentation and a 52% increase in adherence to appropriate evidence-based instructions. There was a notable shift from the most common fasting instruction being "fast from midnight" to "fast from 03:00 hours", with an overall four-hour reduction in fasting per theater admission. These results demonstrate that education improves compliance with documentation and promotes preoperative fasting that is more reflective of evidence-based practice. Collaboration with key stakeholders and a hospital-wide fasting protocol is warranted to sustain change and further advance compliance with evidence-based practice at an organizational level.
FPGA-Based Filterbank Implementation for Parallel Digital Signal Processing
NASA Technical Reports Server (NTRS)
Berner, Stephan; DeLeon, Phillip
1999-01-01
One approach to parallel digital signal processing decomposes a high bandwidth signal into multiple lower bandwidth (rate) signals by an analysis bank. After processing, the subband signals are recombined into a fullband output signal by a synthesis bank. This paper describes an implementation of the analysis and synthesis banks using field-programmable gate arrays (FPGAs).
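The analysis/synthesis structure described above can be sketched in software. The following is a toy numpy version that uses ideal (FFT-mask) subband filters rather than the FIR polyphase banks an FPGA implementation would use; it only illustrates the decompose-then-recombine idea:

```python
import numpy as np

def analysis_bank(x, num_bands):
    """Decompose x into num_bands subband signals by masking its DFT.
    An idealized stand-in for the FIR analysis filters on the FPGA."""
    X = np.fft.fft(x)
    edges = np.linspace(0, len(x), num_bands + 1, dtype=int)
    subbands = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        Xk = np.zeros_like(X)
        Xk[lo:hi] = X[lo:hi]          # keep only this band's bins
        subbands.append(np.fft.ifft(Xk))
    return subbands

def synthesis_bank(subbands):
    """Recombine the subband signals into the fullband output."""
    return np.real(sum(subbands))

rng = np.random.default_rng(0)
x = rng.standard_normal(64)
y = synthesis_bank(analysis_bank(x, 4))
# with ideal filters the bank reconstructs x up to rounding error
```

A real filterbank would trade this ideal reconstruction for causal FIR filters and decimation, which is what makes the subband signals processable in parallel at a lower rate.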
Denis Valle; Benjamin Baiser; Christopher W. Woodall; Robin Chazdon; Jerome Chave
2014-01-01
We propose a novel multivariate method to analyse biodiversity data based on the Latent Dirichlet Allocation (LDA) model. LDA, a probabilistic model, reduces assemblages to sets of distinct component communities. It produces easily interpretable results, can represent abrupt and gradual changes in composition, accommodates missing data and allows for coherent estimates...
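The LDA-on-abundance-data idea can be sketched with scikit-learn's `LatentDirichletAllocation` on a hypothetical site-by-species count matrix (the data below are made up: 20 sites sampled from two latent "component communities"):

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(0)
# two hypothetical component communities with different species profiles
comm = np.array([[5, 5, 5, 5, 5, 1, 1, 1, 1, 1],
                 [1, 1, 1, 1, 1, 5, 5, 5, 5, 5]], dtype=float)
comm /= comm.sum(axis=1, keepdims=True)      # rows are species probabilities
mix = rng.dirichlet([0.5, 0.5], size=20)     # per-site community mixtures
counts = np.vstack([rng.multinomial(200, mix[i] @ comm) for i in range(20)])

lda = LatentDirichletAllocation(n_components=2, random_state=0)
theta = lda.fit_transform(counts)   # per-site community proportions, rows sum to ~1
```

`theta` is the "sets of distinct component communities" representation: each site is reduced to its mixture over the fitted components, and `lda.components_` holds the species profile of each component.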
ERIC Educational Resources Information Center
Sun, Jennifer; van Es, Elizabeth A.
2015-01-01
We designed a video-based course to develop preservice teachers' vision of ambitious instruction by decomposing instruction to learn to attend to student thinking and to examine how particular teaching moves influence student learning. In this study, we examine the influence that learning to systematically analyze ambitious pedagogy in the course…
Modular architecture for robotics and teleoperation
Anderson, Robert J.
1996-12-03
Systems and methods for modularization and discretization of real-time robot, telerobot and teleoperation systems using passive, network-based control laws. Modules consist of network one-ports and two-ports. Wave variables and position information are passed between modules. The behavior of each module is decomposed into uncoupled linear time-invariant elements and coupled nonlinear memoryless elements, which are then separately discretized.
Wang, Xiuran; Peng, Zhongqi; Sun, Xiaoling; Liu, Dongbo; Chen, Shan; Li, Fan; Xia, Hongmei; Lu, Tiancheng
2012-01-01
Sporocytophaga sp. JL-01 is a gliding, cellulose-degrading bacterium that can decompose filter paper (FP), carboxymethyl cellulose (CMC) and cellulose CF11. In this paper, the morphological characteristics of Sporocytophaga sp. JL-01 growing in FP liquid medium were studied by scanning electron microscopy (SEM), and one of the FPase components of this bacterium was analyzed. The results showed that the cell shapes varied during filter paper cellulose decomposition and that the rod shape might be connected with filter paper decomposition. After incubation for 120 h, the filter paper was decomposed significantly, and it was completely degraded within 144 h. An FPase, FPase1, was purified from the supernatant and characterized. The molecular weight of FPase1 was 55 kDa. The optimum pH was 7.2 and the optimum temperature was 50 °C under the experimental conditions. Zn²⁺ and Co²⁺ enhanced the enzyme activity, but Fe³⁺ inhibited it.
Xiao, Xiaopeng; Mazza, Lorenzo; Yu, Yongqiang; Cai, Minmin; Zheng, Longyu; Tomberlin, Jeffery K; Yu, Jeffrey; van Huis, Arnold; Yu, Ziniu; Fasulo, Salvatore; Zhang, Jibin
2018-07-01
A chicken manure management process was carried out through co-conversion of Hermetia illucens L. larvae (BSFL) with functional bacteria to produce larvae as feedstuff and organic fertilizer. Thirteen days of co-conversion of 1000 kg of chicken manure inoculated with one million 6-day-old BSFL and 10⁹ CFU of Bacillus subtilis BSF-CL produced aging larvae, followed by eleven days of aerobic fermentation inoculated with a decomposing agent to reach maturity. 93.2 kg of fresh larvae were harvested from the B. subtilis BSF-CL-inoculated group, while the control group yielded only 80.4 kg. The chicken manure reduction rate of the B. subtilis BSF-CL-inoculated group was 40.5%, versus 35.8% for the control group. The weight of BSFL increased by 15.9%, the BSFL conversion rate increased by 12.7%, and the chicken manure reduction rate increased by 13.4% compared to the control (no B. subtilis BSF-CL). The residue inoculated with the decomposing agent had higher maturity (germination index >92%) than the group without the decomposing agent (germination index ∼86%). The activity patterns of different enzymes further indicated that this product was more mature and stable than that of the group without the decomposing agent. Physical and chemical production parameters showed that the residue inoculated with the decomposing agent was more suitable as organic fertilizer. Together, the co-conversion of chicken manure by BSFL with its synergistic bacteria and the aerobic fermentation with the decomposing agent required only 24 days. The results demonstrate that the co-conversion process can shorten the processing time of chicken manure compared to the traditional compost process, and that gut bacteria can enhance manure conversion and reduction. We established an efficient manure co-conversion process using the black soldier fly and bacteria to harvest high value-added larval mass and biofertilizer. Copyright © 2018 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fell, J.W.; Cefalu, R.
1984-01-01
The paper discusses the meiofauna associated with decomposing leaf litter from two species of coastal marshland plants: the black needle rush, Juncus roemerianus and the red mangrove, Rhizophora mangle. The following aspects were investigated: (1) types of meiofauna present, especially nematodes; (2) changes in meiofaunal community structures with regard to season, station location, and type of plant litter; (3) amount of nematode and copepod biomass present on the decomposing plant litter; and (4) an estimation of the possible role of the nematodes in the decomposition process. 28 references, 5 figures, 9 tables. (ACR)
Catalytic cartridge SO₃ decomposer
Galloway, Terry R.
1982-01-01
An internally heated catalytic cartridge is utilized as an SO₃ decomposer for thermochemical hydrogen production. The cartridge has two embodiments, a cross-flow cartridge and an axial-flow cartridge. In the cross-flow cartridge, SO₃ gas flows through a chamber and is incident normally on a catalyst-coated tube extending through the chamber, the tube being internally heated. In the axial-flow cartridge, SO₃ gas flows through the annular space between concentric inner and outer cylindrical walls, the inner wall being coated with a catalyst and internally heated. The modular cartridge decomposer provides high thermal efficiency, high conversion efficiency, and increased safety.
Stefanutti, Luca; Robusto, Egidio; Vianello, Michelangelo; Anselmi, Pasquale
2013-06-01
A formal model is proposed that decomposes the implicit association test (IAT) effect into three process components: stimuli discrimination, automatic association, and termination criterion. Both response accuracy and reaction time are considered. Four independent and parallel Poisson processes, one for each of the four label categories of the IAT, are assumed. The model parameters are the rate at which information accrues on the counter of each process and the amount of information that is needed before a response is given. The aim of this study is to present the model and an illustrative application in which the process components of a Coca-Pepsi IAT are decomposed.
Thermally Regenerative Battery with Intercalatable Electrodes and Selective Heating Means
NASA Technical Reports Server (NTRS)
Sharma, Pramod K. (Inventor); Narayanan, Sekharipuram R. (Inventor); Hickey, Gregory S. (Inventor)
2000-01-01
The battery contains at least one electrode, such as graphite, that intercalates a first species from the electrolyte disposed in a first compartment, such as bromine, to form a thermally decomposable complex during discharge. The other electrode can also be graphite, which supplies another species, such as lithium, to the electrolyte in a second electrode compartment. The thermally decomposable complex is stable at room temperature but decomposes at elevated temperatures, such as 50 °C to 150 °C. The electrode compartments are separated by a selective ion-permeable membrane that is impermeable to the first species. Charging is effected by selectively heating the first electrode.
Ma, R; Castellanos, D C; Bachman, J
2016-07-01
China is in the midst of the nutrition transition, with increasing rates of obesity and dietary changes. One contributor is the increase in fast food chains within the country. The purpose of this cross-sectional study was to develop a theory-based instrument that explores influencing factors of fast food consumption in adolescents residing in Beijing, China. Value expectancy and the theory of planned behaviour were utilised to explore influencing factors of fast food consumption in the target population. Participants were 201 Chinese adolescents between the ages of 12 and 18. Cronbach's alpha correlation coefficients were used to examine the internal reliability of the theory-based questionnaire. Bivariate correlations and a MANOVA were utilised to determine the relationship between theory-based constructs, body mass index (BMI)-for-age and fast food intake frequency, and to determine differences in theory-based scores among fast food consumption frequency groupings. The theory-based questionnaire showed good reliability. Furthermore, there was a significant difference in theory-based subcategory scores between fast food frequency groups. A significant positive correlation was observed between times per week fast food was consumed and each theory-based subscale score. Using the BMI-for-age of 176 participants, 81% were normal weight and 19% were overweight or obese. Fast food was consumed on average 1.50 ± 1.33 times per week. The relationship between BMI-for-age and times per week fast food was consumed was not significant. As the nutrition transition continues and fast food chains expand, it is important to explore factors affecting fast food consumption in China. Interventions targeting influencing factors can be developed to encourage healthy dietary choices in the midst of this transition. Copyright © 2016. Published by Elsevier Ltd.
A Removal of Eye Movement and Blink Artifacts from EEG Data Using Morphological Component Analysis
Wagatsuma, Hiroaki
2017-01-01
EEG signals contain a large amount of ocular artifacts with different time-frequency properties mixed into the EEGs of interest. Artifact removal has largely been addressed by existing decomposition methods, such as PCA and ICA, based on the orthogonality of signal vectors or the statistical independence of signal components. We focused on signal morphology and propose a systematic decomposition method that identifies the type of signal components on the basis of sparsity in the time-frequency domain, using Morphological Component Analysis (MCA), which guarantees accurate reconstruction by using multiple bases in accordance with the concept of a "dictionary." MCA was applied to decompose real EEG signals and to clarify the best combination of dictionaries for this purpose. In our proposed semirealistic biological signal analysis, with iEEGs recorded intracranially from the brain, the signals were successfully decomposed into their original types by a linear expansion over redundant transforms: UDWT, DCT, LDCT, DST, and DIRAC. Our results demonstrated that the most suitable combination for EEG data analysis was UDWT, DST, and DIRAC, representing the baseline envelope, multifrequency waveforms, and spiking activities, respectively, as representative types of EEG morphologies. PMID:28194221
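The core MCA idea (each morphology is sparse in a different dictionary) can be illustrated with a minimal numpy sketch: a two-dictionary (DCT + DIRAC) separation by alternating hard-thresholding with a decreasing threshold. The dictionaries, signal, and threshold schedule here are illustrative, not those of the paper:

```python
import numpy as np

def dct_basis(n):
    """Orthonormal DCT-II basis matrix (rows are atoms)."""
    t = np.arange(n)
    C = np.cos(np.pi * np.outer(np.arange(n), t + 0.5) / n) * np.sqrt(2.0 / n)
    C[0] /= np.sqrt(2.0)
    return C                              # C @ C.T is the identity

n = 128
C = dct_basis(n)
smooth = 8.0 * C[5]                       # one oscillatory atom (sparse in DCT)
spikes = np.zeros(n)
spikes[[20, 64, 100]] = 5.0               # spiking activity (sparse in DIRAC)
x = smooth + spikes

s_c = np.zeros(n)                         # DCT (oscillatory) component
s_d = np.zeros(n)                         # DIRAC (spike) component
for lam in np.linspace(4.0, 0.05, 40):    # decreasing threshold schedule
    a = C @ (x - s_d)                     # DCT coefficients of the residual
    a[np.abs(a) < lam] = 0.0              # hard-threshold: keep sparse DCT part
    s_c = C.T @ a
    r = x - s_c
    s_d = np.where(np.abs(r) >= lam, r, 0.0)   # spikes land in the DIRAC part
# s_d now isolates the spike locations; s_c carries the oscillation
```

The paper's actual dictionaries (UDWT, DST, DIRAC) are redundant rather than orthonormal, but the alternating sparsity-driven assignment of signal content to dictionaries works the same way.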
Bi-spectrum based-EMD applied to the non-stationary vibration signals for bearing faults diagnosis.
Saidi, Lotfi; Ali, Jaouher Ben; Fnaiech, Farhat
2014-09-01
Empirical mode decomposition (EMD) has been widely applied to analyze the behavior of vibration signals for bearing failure detection. Vibration signals are almost always non-stationary, since bearings are inherently dynamic (e.g., speed and load conditions change over time). Using EMD, a complicated non-stationary vibration signal is decomposed into a number of stationary intrinsic mode functions (IMFs) based on the local characteristic time scale of the signal. The bi-spectrum, a third-order statistic, helps to identify phase-coupling effects; it is theoretically zero for Gaussian noise and flat for non-Gaussian white noise, so bi-spectrum analysis is insensitive to random noise, which is useful for detecting faults in induction machines. Utilizing the advantages of EMD and the bi-spectrum, this article proposes a joint method for detecting such faults, called bi-spectrum based EMD (BSEMD). First, original vibration signals collected from accelerometers are decomposed by EMD and a set of IMFs is produced. Then, the IMF signals are analyzed via the bi-spectrum to detect outer-race bearing defects. The procedure is illustrated with experimental bearing vibration data. The experimental results show that BSEMD techniques can effectively diagnose bearing failures. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
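The bi-spectrum half of BSEMD can be illustrated directly. Below is a numpy sketch of the direct (segment-averaged) bispectrum estimator applied to a synthetic quadratically phase-coupled signal; the frequencies and segment counts are illustrative, not taken from the paper's data:

```python
import numpy as np

def bispectrum(x, nfft=64):
    """Direct bispectrum estimate B(f1,f2) = <X(f1) X(f2) X*(f1+f2)>,
    averaged over non-overlapping segments of length nfft."""
    nseg = len(x) // nfft
    B = np.zeros((nfft // 4, nfft // 4), dtype=complex)
    for s in range(nseg):
        X = np.fft.fft(x[s * nfft:(s + 1) * nfft])
        for f1 in range(B.shape[0]):
            for f2 in range(B.shape[1]):
                B[f1, f2] += X[f1] * X[f2] * np.conj(X[f1 + f2])
    return np.abs(B) / nseg

# quadratic phase coupling: components at bins 6 and 10 and their sum, bin 16,
# with phases phi1, phi2, phi1+phi2 randomized per segment
rng = np.random.default_rng(0)
nfft, nseg = 64, 40
t = np.arange(nfft)
segs = []
for _ in range(nseg):
    p1, p2 = rng.uniform(0, 2 * np.pi, 2)
    segs.append(np.cos(2 * np.pi * 6 * t / nfft + p1)
                + np.cos(2 * np.pi * 10 * t / nfft + p2)
                + np.cos(2 * np.pi * 16 * t / nfft + p1 + p2)
                + 0.1 * rng.standard_normal(nfft))
B = bispectrum(np.concatenate(segs), nfft)
peak = np.unravel_index(np.argmax(B), B.shape)   # coupling at (6, 10) / (10, 6)
```

Phase randomization per segment is what makes the test meaningful: only the phase-coupled triple adds coherently across segments, exactly the property BSEMD exploits to pull fault signatures out of noise.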
Instantaneous Respiratory Estimation from Thoracic Impedance by Empirical Mode Decomposition.
Wang, Fu-Tai; Chan, Hsiao-Lung; Wang, Chun-Li; Jian, Hung-Ming; Lin, Sheng-Hsiung
2015-07-07
Impedance plethysmography provides a way to measure respiratory activity by sensing the change in thoracic impedance caused by inspiration and expiration. This measurement imposes little pressure on the body and uses the human body as the sensor, thereby reducing the need for adjustments as body position changes and making it suitable for long-term or ambulatory monitoring. Empirical mode decomposition (EMD) can decompose a signal into several intrinsic mode functions (IMFs) that disclose nonstationary as well as stationary components and, similarly, capture respiratory episodes from thoracic impedance. However, upper-body movements usually produce motion artifacts that are not easily removed by digital filtering; moreover, large motion artifacts prevent the EMD from decomposing respiratory components. In this paper, motion artifacts are detected and replaced by data mirrored from before and after the artifact prior to EMD processing. A novel intrinsic respiratory reconstruction index that considers both global and local properties of IMFs is proposed to identify respiration-related IMFs for respiration reconstruction and instantaneous respiratory estimation. Based on experiments performing a series of static and dynamic physical activities, our results showed that the proposed method had higher cross-correlations between respiratory frequencies estimated from thoracic impedance and those from oronasal airflow, based on small window sizes, compared to the Fourier transform-based method.
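The artifact-replacement step (substituting motion-corrupted samples with data mirrored from just before and just after the artifact) is simple to sketch. This toy version assumes the artifact interval has already been detected; the signal and artifact are synthetic:

```python
import numpy as np

def mirror_replace(x, start, end):
    """Replace x[start:end] with samples mirrored from the segments
    immediately before and after the artifact interval."""
    length = end - start
    half = length // 2
    y = x.copy()
    y[start:start + half] = x[start - half:start][::-1]       # mirror prior data
    y[start + half:end] = x[end:end + (length - half)][::-1]  # mirror posterior data
    return y

t = np.arange(500)
resp = np.sin(2 * np.pi * t / 50)        # clean respiratory-like oscillation
x = resp.copy()
x[200:240] += 8.0                        # large motion artifact
cleaned = mirror_replace(x, 200, 240)    # amplitude back in the normal range
```

Because the mirrored data share the local amplitude and rough periodicity of the surrounding signal, the subsequent EMD is no longer dominated by the artifact's large excursion.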
A Hybrid Generalized Hidden Markov Model-Based Condition Monitoring Approach for Rolling Bearings
Liu, Jie; Hu, Youmin; Wu, Bo; Wang, Yan; Xie, Fengyun
2017-01-01
The operating condition of rolling bearings affects productivity and quality in the rotating machine process. Developing an effective rolling bearing condition monitoring approach is critical to accurately identify the operating condition. In this paper, a hybrid generalized hidden Markov model-based condition monitoring approach for rolling bearings is proposed, where interval valued features are used to efficiently recognize and classify machine states in the machine process. In the proposed method, vibration signals are decomposed into multiple modes with variational mode decomposition (VMD). Parameters of the VMD, in the form of generalized intervals, provide a concise representation for aleatory and epistemic uncertainty and improve the robustness of identification. The multi-scale permutation entropy method is applied to extract state features from the decomposed signals in different operating conditions. Traditional principal component analysis is adopted to reduce feature size and computational cost. With the extracted features’ information, the generalized hidden Markov model, based on generalized interval probability, is used to recognize and classify the fault types and fault severity levels. Finally, the experiment results show that the proposed method is effective at recognizing and classifying the fault types and fault severity levels of rolling bearings. This monitoring method is also efficient enough to quantify the two uncertainty components. PMID:28524088
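The multi-scale permutation entropy features can be sketched in numpy as coarse-graining plus the Bandt-Pompe ordinal-pattern entropy; the order, delay, and scales below are illustrative choices, not the paper's settings:

```python
import numpy as np

def coarse_grain(x, scale):
    """Average non-overlapping windows of length `scale`."""
    n = len(x) // scale
    return x[:n * scale].reshape(n, scale).mean(axis=1)

def permutation_entropy(x, order=3, delay=1):
    """Shannon entropy (bits) of ordinal patterns of length `order`."""
    n = len(x) - (order - 1) * delay
    counts = {}
    for i in range(n):
        pattern = tuple(np.argsort(x[i:i + order * delay:delay]))
        counts[pattern] = counts.get(pattern, 0) + 1
    p = np.array(list(counts.values()), dtype=float) / n
    return float(-np.sum(p * np.log2(p)))

rng = np.random.default_rng(0)
noise = rng.standard_normal(4000)
ramp = np.arange(4000, dtype=float)
# white noise stays near the maximum log2(3!) ~ 2.58 bits at every scale,
# while a monotone series has a single pattern and zero entropy
pe_by_scale = {s: permutation_entropy(coarse_grain(noise, s)) for s in (1, 2, 4)}
pe_ramp = permutation_entropy(ramp)
```

Structured signals, such as the decomposed vibration modes of a faulty bearing, fall between these extremes, which is what makes the entropy profile across scales usable as a state feature.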
Draft Genome Sequence of the Lignocellulose Decomposer Thermobifida fusca Strain TM51.
Tóth, Akos; Barna, Terézia; Nagy, István; Horváth, Balázs; Nagy, István; Táncsics, András; Kriszt, Balázs; Baka, Erzsébet; Fekete, Csaba; Kukolya, József
2013-07-11
Here, we present the complete genome sequence of Thermobifida fusca strain TM51, which was isolated from the hot upper layer of a compost pile in Hungary. T. fusca TM51 is a thermotolerant, aerobic actinomycete with outstanding lignocellulose-decomposing activity.
Identifying the biotic (e.g. decomposers, vegetation) and abiotic (e.g. temperature, moisture) mechanisms controlling litter decomposition is key to understanding ecosystem function, especially where variation in ecosystem structure due to successional processes may alter the str...
Wei, Qiang; Ling, Lei; Zhang, Guang-zhong; Yan, Pei-bin; Tao, Ji-xin; Chai, Chun-shan; Xue, Rui
2011-10-01
Using field surveys and laboratory soaking extraction, an investigation was conducted on the accumulation amount, water-holding capacity, water-holding rate, and water-absorption rate of the litters under six main forests (Picea wilsonii forest, P. wilsonii - Betula platyphylla forest, Populus davidiana - B. platyphylla forest, Cotoneaster multiflorus - Rosa xanthina shrubs, Pinus tabulaeformis forest, and Larix principis-rupprechtii forest) in Xinglong Mountain of Gansu. The accumulation amount of the litters under the forests was 13.40-46.32 t hm⁻², in the order P. tabulaeformis forest > P. wilsonii - B. platyphylla forest > L. principis-rupprechtii forest > P. wilsonii forest > C. multiflorus - R. xanthina shrubs > P. davidiana - B. platyphylla forest. The litter storage of coniferous forests was greater than that of broadleaved forests, and the storage percentage of semi-decomposed litters was consistently higher than that of un-decomposed litters. The maximum water-holding rate of the litters was 185.5%-303.6%, being highest for L. principis-rupprechtii forest and lowest for P. tabulaeformis forest. The litters' water-holding capacity changed logarithmically with soaking time. For coniferous forests, un-decomposed litters had a lower water-holding rate than semi-decomposed litters, whereas for broadleaved forests the reverse was true. The maximum water-holding capacity of the litters varied from 3.94 mm to 8.59 mm, in the order P. tabulaeformis forest > L. principis-rupprechtii forest > P. wilsonii - B. platyphylla forest > P. wilsonii forest > C. multiflorus - R. xanthina shrubs > P. davidiana - B. platyphylla forest. The litters' water-holding capacity also changed logarithmically with immersion time, and the semi-decomposed litters had a larger water-holding capacity than un-decomposed litters. The water-absorption rate of the litters followed a power function of immersion time.
Within the first hour of immersion in water, the water-absorption rate of the litters declined linearly; after the first hour, the water-absorption rate became smaller and changed slowly across the different immersion stages. Semi-decomposed litters had a higher water-absorption rate than un-decomposed litters. The effective retaining amount (depth) of the litters was in the order P. wilsonii - B. platyphylla forest (5.97 mm) > P. tabulaeformis forest (5.59 mm) > L. principis-rupprechtii forest (5.46 mm) > P. wilsonii forest (4.30 mm) > C. multiflorus - R. xanthina shrubs (3.03 mm) > P. davidiana - B. platyphylla forest (2.13 mm).
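A logarithmic water-holding relation like the one reported above, W = a·ln(t) + b, is fit by a linear regression on ln(t). A numpy sketch with made-up numbers (the soaking times and rates below are hypothetical, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.array([0.25, 0.5, 1, 2, 4, 8, 16, 24])            # soaking time, h (hypothetical)
w = 60.0 * np.log(t) + 150.0 + rng.normal(0, 2, t.size)  # water-holding rate, % (hypothetical)

# linear in ln(t) is equivalent to logarithmic in t
a, b = np.polyfit(np.log(t), w, 1)   # recovers slope ~60 and intercept ~150
```

The power-function absorption rate mentioned in the abstract would be fit the same way after taking logs of both axes.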
Barantal, Sandra; Schimann, Heidy; Fromin, Nathalie; Hättenschwiler, Stephan
2014-12-07
Plant leaf litter generally decomposes faster as a group of different species than when individual species decompose alone, but the mechanisms underlying these diversity effects remain poorly understood. Because resource C : N : P stoichiometry (i.e. the ratios of these key elements) exerts strong control on consumers, we hypothesized that the stoichiometric dissimilarity of litter mixtures (i.e. the divergence in C : N : P ratios among species) improves resource complementarity to decomposers, leading to faster mixture decomposition. We tested this hypothesis with: (i) a wide range of leaf litter mixtures of neotropical tree species varying in C : N : P dissimilarity, and (ii) a nutrient addition experiment (C, N and P) to create stoichiometric similarity. Litter mixtures were decomposed in the field using two types of litterbags, either allowing or preventing access by soil fauna. Litter mixture mass loss was higher than expected from species decomposing singly, especially in the presence of soil fauna. With fauna, synergistic litter mixture effects increased with increasing stoichiometric dissimilarity of litter mixtures, and this positive relationship disappeared with fertilizer addition. Our results indicate that litter stoichiometric dissimilarity drives mixture effects via the nutritional requirements of soil fauna. Incorporating ecological stoichiometry in biodiversity research allows refinement of the underlying mechanisms of how changing biodiversity affects ecosystem functioning. © 2014 The Author(s) Published by the Royal Society. All rights reserved.
Nagy, Péter; Ashby, Michael T
2005-06-01
Cystine and HOCl (a neutrophil-derived oxidant) react to form an intermediate that has a half-life of ca. 5 min at pH 7.5. The intermediate subsequently decomposes to eventually yield a mixture of cystine, higher oxides of Cys, and other uncharacterized species. Spectral titrations, transitory ¹H NMR and UV-vis spectra, and the reaction properties of the intermediate are consistent with a formulation of N,N'-dichlorocystine {NDC = [-SCH₂CH(NHCl)(CO₂H)]₂}. The reaction of equimolar amounts of HOCl with cystine at pH 11.3 does not yield N-chlorocystine [NCC = (⁻O₂C)(H₃N⁺)CHCH₂SSCH₂CH(NHCl)(CO₂H)] but rather a 1:1 mixture of NDC and cystine. This result could be explained by two mechanisms: rapid disproportionation of NCC to produce NDC and cystine, or a faster reaction of the second equivalent of HOCl with NCC than of the first equivalent of HOCl with cystine. The latter mechanism is favored because of our observation by NMR spectroscopy that NDC decomposes via a species that we have assigned as NCC; disproportionation of NCC is thus apparently a relatively slow process. The rates of reaction of cystine⁰ = [-SCH₂CH(NH₃⁺)(CO₂⁻)]₂⁰, cystine¹⁻ = [(⁻O₂C)(H₂N)CHCH₂SSCH₂CH(NH₃⁺)(CO₂⁻)]⁻, and cystine²⁻ = [-SCH₂CH(NH₂)(CO₂⁻)]₂²⁻ have been investigated, and it is clear that cystine⁰ is unreactive, whereas cystine²⁻ is about four times more reactive than cystine¹⁻. Accordingly, the following mechanism is proposed (constants for 5 °C): HOCl ⇌ H⁺ + OCl⁻, pK₁ = 7.47; cystine⁰ ⇌ cystine¹⁻ + H⁺, pK₂ = 8.15; cystine¹⁻ ⇌ cystine²⁻ + H⁺, pK₃ = 9.00; cystine¹⁻ + HOCl → NCC¹⁻ + H₂O, k₄ = 4.3(2) × 10⁶ M⁻¹ s⁻¹; cystine²⁻ + HOCl → NCC²⁻ + H₂O, k₅ = 1.6(2) × 10⁷ M⁻¹ s⁻¹; NCC¹⁻ → NCC²⁻ + H⁺, k₆ = fast; NCC²⁻ + HOCl → NDC²⁻ + H₂O, k₇ = fast. At physiologic pH, the k₄ pathway dominates. The generation of long-lived chloramine derivatives of cystine may have physiological consequences, since such compounds are known to react with nucleophiles via a mechanism that is also characteristic of HOCl: electrophilic Cl⁺ transfer.
Decomposing University Grades: A Longitudinal Study of Students and Their Instructors
ERIC Educational Resources Information Center
Beenstock, Michael; Feldman, Dan
2018-01-01
First-degree course grades for a cohort of social science students are matched to their instructors, and are statistically decomposed into departmental, course, instructor, and student components. Student ability is measured alternatively by university acceptance scores, or by fixed effects estimated using panel data methods. After controlling for…
Gaze Fluctuations Are Not Additively Decomposable: Reply to Bogartz and Staub
ERIC Educational Resources Information Center
Kelty-Stephen, Damian G.; Mirman, Daniel
2013-01-01
Our previous work interpreted single-lognormal fits to inter-gaze distance (i.e., "gaze steps") histograms as evidence of multiplicativity and hence interactions across scales in visual cognition. Bogartz and Staub (2012) proposed that gaze steps are additively decomposable into fixations and saccades, matching the histograms better and…
Decomposing Achievement Gaps among OECD Countries
ERIC Educational Resources Information Center
Zhang, Liang; Lee, Kristen A.
2011-01-01
In this study, we use decomposition methods on PISA 2006 data to compare student academic performance across OECD countries. We first establish an empirical model to explain the variation in academic performance across individuals, and then use the Oaxaca-Blinder decomposition method to decompose the achievement gap between each of the OECD…
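The Oaxaca-Blinder identity underlying the study is an exact algebraic decomposition of a mean gap into an endowments term and a coefficients term. A numpy sketch on synthetic data (all numbers are hypothetical, for illustration only):

```python
import numpy as np

def ols(X, y):
    """Ordinary least squares coefficients."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

rng = np.random.default_rng(1)
n = 500
# two groups with different characteristics (column 1) and different returns
Xa = np.column_stack([np.ones(n), rng.normal(1.0, 1.0, n)])
Xb = np.column_stack([np.ones(n), rng.normal(0.5, 1.0, n)])
ya = Xa @ np.array([2.0, 1.5]) + 0.1 * rng.standard_normal(n)
yb = Xb @ np.array([1.0, 1.2]) + 0.1 * rng.standard_normal(n)

ba, bb = ols(Xa, ya), ols(Xb, yb)
xa_bar, xb_bar = Xa.mean(axis=0), Xb.mean(axis=0)

gap = ya.mean() - yb.mean()
explained = (xa_bar - xb_bar) @ bb    # endowments: differences in characteristics
unexplained = xa_bar @ (ba - bb)      # coefficients: differences in returns
# gap == explained + unexplained exactly, because OLS with an intercept
# forces mean(y) = mean(X) @ beta within each group
```

In the achievement-gap setting, "explained" corresponds to differences in measured student and school characteristics between countries, while "unexplained" captures differences in how those characteristics translate into scores.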
Behavior of decomposition of rifampicin in the presence of isoniazid in the pH range 1-3.
Sankar, R; Sharda, Nishi; Singh, Saranjit
2003-08-01
The extent of decomposition of rifampicin in the presence of isoniazid was determined over the pH range 1-3 at 37 °C in 50 min, the mean stomach residence time. With increasing pH, degradation initially increased from pH 1 to 2 and then decreased, resulting in a bell-shaped pH-decomposition profile. This showed that rifampicin degraded in the presence of isoniazid to the greatest extent at pH 2, the maximum pH in the fasting condition under which antituberculosis fixed-dose combination (FDC) products are administered. At this pH and in 50 min, rifampicin decomposed by approximately 34%, while the loss of isoniazid was 10%. The extent of decomposition for the two drugs was also determined in marketed formulations, and the values ranged between 13-35% and 4-11%, respectively. The extents of decomposition at stomach residence times of 15 min and 3 h were 11.94% and 62.57%, respectively, for rifampicin and 4.78% and 11.12%, respectively, for isoniazid. The results show that extensive loss of rifampicin and isoniazid can occur as a result of the interaction between them under fasting pH conditions. This emphasizes that antituberculosis FDC formulations, which contain both drugs, should be designed so that the interaction of the two drugs is prevented when the formulations are administered on an empty stomach.
Aquatic Plants Aid Sewage Filter
NASA Technical Reports Server (NTRS)
Wolverton, B. C.
1985-01-01
This method of wastewater treatment combines micro-organisms and aquatic plant roots in a filter bed. Treatment occurs as liquid flows up through the system. Micro-organisms, attached to the rocky base material of the filter, act in several steps to decompose organic matter in the wastewater. Vascular aquatic plants (typically reeds, rushes, cattails, or water hyacinths) absorb nitrogen, phosphorus, other nutrients, and heavy metals from the water through finely divided roots.
ERIC Educational Resources Information Center
Sadaf, Ayesha; Newby, Timothy J.; Ertmer, Peggy A.
2016-01-01
The purpose of the study was to investigate factors that predict preservice teachers' intentions and actual uses of Web 2.0 tools in their classrooms. A two-phase, mixed method, sequential explanatory design was used. The first phase explored factors, based on the decomposed theory of planned behavior, that predict preservice teachers' intentions…
NASA Technical Reports Server (NTRS)
1998-01-01
Pointwise Inc.'s Gridgen software is a system for the generation of three-dimensional (3D) multiple-block, structured grids. Gridgen is a visually oriented, graphics-based interactive code used to decompose a 3D domain into blocks, distribute grid points on curves, initialize and refine grid points on surfaces, and initialize volume grid points. Gridgen is available to U.S. citizens and American-owned companies by license.
ERIC Educational Resources Information Center
Sadaf, Ayesha
2013-01-01
The purpose of this two phase mixed methods sequential explanatory study was to investigate factors that predict preservice teachers' intentions to use Web 2.0 technologies in their future classrooms and their ability to carry out their intentions during student teaching. The first phase explored factors based on the Decomposed Theory of Planned…
Building a Relationship between Elements of Product Form Features and Vocabulary Assessment Models
ERIC Educational Resources Information Center
Lo, Chi-Hung
2016-01-01
Based on the characteristic feature parameterization and the superiority evaluation method (SEM) in extension engineering, a product-shape design method was proposed in this study. The first step of this method is to decompose the basic feature components of a product. After that, the morphological chart method is used to segregate the ideas so as…
1993-11-10
realized. Metal carboxylates are often used as precursors for ceramic oxides since they tend to be air-stable, soluble in organic solvents, and decompose...metalorganic precursors [9]. These include routes based solely on metal alkoxides [9, 10] or metal carboxylates (e.g., the Pechini (or citrate) process
ERIC Educational Resources Information Center
Lai, Horng-Ji
2017-01-01
The purpose of this study was to investigate the decisions of civil servants to use Web 2.0 applications while engaging in online learning. The participants were 439 civil servants enrolled in asynchronous online learning programs, using an e-learning portal provided by Taiwan's Regional Civil Service Development Institute. The participants…
Adsorption performance of Rh decorated SWCNT upon SF6 decomposed components based on DFT method
NASA Astrophysics Data System (ADS)
Zhang, Xiaoxing; Cui, Hao; Dong, Xingchen; Chen, Dachang; Tang, Ju
2017-10-01
Transition-metal-decorated carbon nanotubes (CNTs) applied in the field of gas adsorption and storage have attracted considerable attention in recent years because of their superior adsorption performance. In electrical engineering, they are employed as adsorbents to remove the decomposition products of SF6 caused by partial discharge, in order to guarantee the insulation status of gas-insulated switchgear (GIS). In this paper, Rh-doped SWCNT is introduced to investigate its adsorption properties towards typical SF6 decomposition gases based on the density functional theory (DFT) method. Both single- and double-molecule adsorption systems were simulated to investigate the adsorption ability of the proposed material. Results indicate that Rh-CNT, which interacts strongly with the studied gas molecules, is a promising material for adsorbing SF6 decomposition products, especially SO2 and SOF2, which exhibit the highest sensitivity to the modified surface. We therefore suggest Rh-CNT as an adsorbent to be applied in GIS to safeguard the operating state of such devices, and even as a gas sensor to evaluate the insulation state of the power system. Our calculations provide experimentalists with a first insight into the physicochemical properties of this material.
The prospect of hazardous sludge reduction through gasification process
NASA Astrophysics Data System (ADS)
Hakiki, R.; Wikaningrum, T.; Kurniawan, T.
2018-01-01
Biological sludge generated from centralized industrial WWTPs is classified as toxic and hazardous waste under Indonesia's Government Regulation No. 101/2014. The mass and volume of sludge produced affect the cost of management and disposal. The main objective of this study is to identify the opportunity for gasification technology to reduce the quantity of hazardous sludge before it is sent to final disposal. This preliminary study covers the technical and economic assessment of the gasification process, combining lab-scale experimental results with assumptions based on prior research. The results showed that the process was quite effective in reducing the mass and volume of hazardous sludge, thereby reducing disposal costs without negative environmental impact. The mass reduced consists of moisture and volatile carbon, which are decomposed, while the residues are fixed carbon and other minerals, which are not decomposed by the thermal process. The economic simulation showed that the project would achieve a payback period of 2.5 years, an IRR of 53%, and a B/C ratio of 2.3. A further pilot-scale study to obtain a more accurate design and calculations is recommended.
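The payback period, IRR, and B/C ratio reported above are standard discounted-cash-flow quantities. A minimal sketch of how such indicators are computed, using hypothetical cash flows rather than the study's data:

```python
# Sketch: payback period and IRR from a cash-flow series.
# The flows below are hypothetical placeholders, not the study's data.

def npv(rate, flows):
    """Net present value; flows[0] occurs at time zero."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(flows))

def irr(flows, lo=-0.99, hi=10.0, tol=1e-9):
    """Internal rate of return via bisection (NPV is monotone
    decreasing for a conventional investment profile)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if npv(mid, flows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def payback_period(flows):
    """First time the cumulative cash flow turns non-negative,
    interpolating linearly within the crossing period."""
    cum = 0.0
    for t, cf in enumerate(flows):
        if cum + cf >= 0:
            return (t - 1) + (-cum / cf)
        cum += cf
    return None

flows = [-100.0, 40.0, 40.0, 40.0, 40.0, 40.0]  # hypothetical project
print(round(irr(flows), 4))
print(payback_period(flows))  # 2.5 years for these flows
```

The benefit/cost ratio follows the same pattern: discounted benefits divided by discounted costs.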
Gaussian process regression of chirplet decomposed ultrasonic B-scans of a simulated design case
NASA Astrophysics Data System (ADS)
Wertz, John; Homa, Laura; Welter, John; Sparkman, Daniel; Aldrin, John
2018-04-01
The US Air Force seeks to implement damage-tolerant lifecycle management of composite structures. Nondestructive characterization of damage is a key input to this framework. One approach to characterization is model-based inversion of the ultrasonic response from damage features; however, the computational expense of modeling ultrasonic waves within composites is a major hurdle to implementation. A surrogate forward model with sufficient accuracy and greater computational efficiency is therefore critical to enabling model-based inversion and damage characterization. In this work, a surrogate model is developed on the simulated ultrasonic response from delamination-like structures placed at different locations within a representative composite layup. The resulting B-scans are decomposed via the chirplet transform, and a Gaussian process model is trained on the chirplet parameters. The quality of the surrogate is tested by predicting the B-scan for a delamination configuration not represented within the training data set. The estimated B-scan has a maximum error of ˜15%, with an estimated reduction in computational runtime of ˜95% for 200 function calls. This considerable reduction in computational expense makes full 3D characterization of impact damage tractable.
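As an illustration of the surrogate idea (not the paper's actual model), a Gaussian process regressor can map a damage parameter to a chirplet parameter. A minimal NumPy sketch, where both the delamination depth grid and the linear depth-to-arrival-time relation are hypothetical:

```python
import numpy as np

# Gaussian process regression with an RBF kernel, pure NumPy.
# Maps a hypothetical delamination depth to a synthetic chirplet
# "arrival time" parameter; all data here is illustrative.

def rbf(a, b, length=0.5, var=1.0):
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

def gp_predict(x_train, y_train, x_test, noise=1e-6):
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(x_test, x_train)
    mean = Ks @ np.linalg.solve(K, y_train)
    cov = rbf(x_test, x_test) - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.diag(cov)

depth = np.linspace(0.0, 2.0, 9)      # training depths (hypothetical)
arrival = 1.0 + 0.8 * depth           # synthetic chirplet parameter
mean, var = gp_predict(depth, arrival, np.array([1.0]))
print(mean)  # close to 1.8 at depth 1.0
```

In the paper's setting the output would be the full vector of chirplet parameters per B-scan rather than a single scalar.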
A brick-architecture-based mobile under-vehicle inspection system
NASA Astrophysics Data System (ADS)
Qian, Cheng; Page, David; Koschan, Andreas; Abidi, Mongi
2005-05-01
In this paper, a mobile scanning system for real-time under-vehicle inspection is presented, founded on a "Brick" architecture. In this architecture, the inspection system is decomposed into bricks of three kinds: sensing, mobility, and computing. These bricks are physically and logically independent and communicate with each other wirelessly. Each brick is composed of five modules: data acquisition, data processing, data transmission, power, and self-management. These five modules can be further decomposed into submodules with well-defined functions and interfaces. Based on this architecture, the system is built from four bricks: two sensing bricks consisting of a range scanner and a line CCD, one mobility brick, and one computing brick. The sensing bricks capture geometric and texture data of the under-vehicle scene, while the mobility brick provides positioning data along the motion path. Data from these three modalities are transmitted to the computing brick, where they are fused to reconstruct a 3D under-vehicle model for visualization and threat inspection. This system has been successfully used in several military applications and proved to be an effective and safer method for national security.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sengupta, S.; Soda, H.; McLean, A.
2000-01-01
A ternary eutectic alloy with a composition of 57.2 pct Bi, 24.8 pct In, and 18 pct Sn was continuously cast into wire of 2 mm diameter at casting speeds of 14 and 79 mm/min using the Ohno Continuous Casting (OCC) process. The microstructures obtained were compared with those of statically cast specimens. Extensive segregation of massive Bi blocks, Bi complex structures, and tin-rich dendrites was found in specimens that were statically cast. Decomposition of γSn by a eutectoid reaction was confirmed based on microstructural evidence. Ternary eutectic alloy cooled at approximately 1 C/min formed a double binary eutectic, which consisted of regions of BiIn and decomposed γSn in the form of a dendrite cell structure and regions of Bi and decomposed γSn in the form of a complex-regular cell. The Bi complex-regular cells, which are a ternary eutectic constituent, existed either along the boundaries of the BiIn-decomposed γSn dendrite cells or at the front of elongated dendrite cell structures. In the continuously cast wires, primary Sn dendrites coupled with a small Bi phase were uniformly distributed within the Bi-In alloy matrix. Neither massive Bi phases, Bi complex-regular cells, nor BiIn eutectic dendrite cells were observed, resulting in a more uniform microstructure in contrast to the heavily segregated structures of the statically cast specimens.
Hsieh, Pi-Jung
2015-01-01
Electronic medical records (EMRs) exchange improves clinical quality and reduces medical costs. However, few studies address the antecedent factors of physicians' intentions to use EMR exchange. Based on institutional trust and perceived risk integrated with the decomposed theory of planned behavior (TPB) model, we propose a theoretical model to explain the intention of physicians to use an EMR exchange system. We conducted a field survey in Taiwan to collect data from physicians who had experience using EMR exchange systems. A valid sample of 191 responses was collected for data analysis. To test the proposed research model, we employed structural equation modeling using the partial least squares method. The study findings show that the following five factors have a significant influence on physicians' intentions to use EMR exchange systems: (a) attitude; (b) subjective norm; (c) perceived behavioral control; (d) institutional trust; and (e) perceived risk. These five factors are, in turn, predicted by perceived usefulness, perceived ease of use, and compatibility; interpersonal and governmental influence; facilitating conditions and self-efficacy; situational normality and structural assurance; and institutional trust, respectively. The results also indicate that institutional trust and perceived risk integrated with the decomposed TPB model improve the prediction of physicians' intentions to use EMR exchange. The results of this study indicate that our research model effectively predicts the intention of physicians to use EMR exchange, and provides valuable implications for academics and practitioners. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Leutenegger, Scott T.; Horton, Graham
1994-01-01
Recently, the Multi-Level algorithm was introduced as a general-purpose solver for the steady-state solution of Markov chains. In this paper, we consider the performance of the Multi-Level algorithm for solving Nearly Completely Decomposable (NCD) Markov chains, for which special-purpose iterative aggregation/disaggregation algorithms such as the Koury-McAllister-Stewart (KMS) method have been developed that can exploit the decomposability of the Markov chain. We present experimental results indicating that the general-purpose Multi-Level algorithm is competitive, and can be significantly faster than the special-purpose KMS algorithm when Gauss-Seidel and Gaussian elimination are used for solving the individual blocks.
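The steady-state problem both solvers address is finding the distribution pi with pi P = pi for a row-stochastic matrix P. A toy NCD chain (two strongly coupled 2-state blocks joined by weak transitions of size eps) solved by plain power iteration, purely to illustrate the problem setup rather than the Multi-Level or KMS algorithms:

```python
import numpy as np

# Toy nearly-completely-decomposable (NCD) chain: within-block
# transitions are O(1); between-block transitions are O(eps).
eps = 1e-3
P = np.array([
    [0.6 - eps, 0.4,       eps,       0.0],
    [0.5,       0.5 - eps, 0.0,       eps],
    [eps,       0.0,       0.7 - eps, 0.3],
    [0.0,       eps,       0.2,       0.8 - eps],
])
assert np.allclose(P.sum(axis=1), 1.0)  # row-stochastic

# Power iteration; converges slowly for NCD chains because the
# subdominant eigenvalue is close to 1 -- the situation the
# special-purpose solvers are designed to handle efficiently.
pi = np.full(4, 0.25)
for _ in range(200000):
    nxt = pi @ P
    if np.max(np.abs(nxt - pi)) < 1e-13:
        pi = nxt
        break
    pi = nxt

print(pi)  # stationary distribution: pi P = pi
```

The slow convergence of simple iteration on such chains is exactly why aggregation/disaggregation and multilevel methods exist.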
Methods for assessing the impact of avermectins on the decomposer community of sheep pastures.
King, K L
1993-06-01
This paper outlines methods which can be used in the field assessment of potentially toxic chemicals such as the avermectins. The procedures focus on measuring the effects of the drug on decomposer organisms and the nutrient cycling process in pastures grazed by sheep. Measurements of decomposer activity are described along with methods for determining dry and organic matter loss and mineral loss from dung to the underlying soil. Sampling methods for both micro- and macro-invertebrates are discussed along with determination of the percentage infection of plant roots with vesicular-arbuscular mycorrhizal fungi. An integrated sampling unit for assessing the ecotoxicity of ivermectin in pastures grazed by sheep is presented.
Fluidized bed silicon deposition from silane
NASA Technical Reports Server (NTRS)
Hsu, George C. (Inventor); Levin, Harry (Inventor); Hogle, Richard A. (Inventor); Praturi, Ananda (Inventor); Lutwack, Ralph (Inventor)
1982-01-01
A process and apparatus for thermally decomposing silicon containing gas for deposition on fluidized nucleating silicon seed particles is disclosed. Silicon seed particles are produced in a secondary fluidized reactor by thermal decomposition of a silicon containing gas. The thermally produced silicon seed particles are then introduced into a primary fluidized bed reactor to form a fluidized bed. Silicon containing gas is introduced into the primary reactor where it is thermally decomposed and deposited on the fluidized silicon seed particles. Silicon seed particles having the desired amount of thermally decomposed silicon product thereon are removed from the primary fluidized reactor as ultra pure silicon product. An apparatus for carrying out this process is also disclosed.
Fluidized bed silicon deposition from silane
NASA Technical Reports Server (NTRS)
Hsu, George (Inventor); Levin, Harry (Inventor); Hogle, Richard A. (Inventor); Praturi, Ananda (Inventor); Lutwack, Ralph (Inventor)
1984-01-01
A process and apparatus for thermally decomposing silicon containing gas for deposition on fluidized nucleating silicon seed particles is disclosed. Silicon seed particles are produced in a secondary fluidized reactor by thermal decomposition of a silicon containing gas. The thermally produced silicon seed particles are then introduced into a primary fluidized bed reactor to form a fluidized bed. Silicon containing gas is introduced into the primary reactor where it is thermally decomposed and deposited on the fluidized silicon seed particles. Silicon seed particles having the desired amount of thermally decomposed silicon product thereon are removed from the primary fluidized reactor as ultra pure silicon product. An apparatus for carrying out this process is also disclosed.
The complexity of divisibility.
Bausch, Johannes; Cubitt, Toby
2016-09-01
We address two sets of long-standing open questions in linear algebra and probability theory, from a computational complexity perspective: stochastic matrix divisibility, and divisibility and decomposability of probability distributions. We prove that finite divisibility of stochastic matrices is an NP-complete problem, and extend this result to nonnegative matrices, and completely-positive trace-preserving maps, i.e. the quantum analogue of stochastic matrices. We further prove a complexity hierarchy for the divisibility and decomposability of probability distributions, showing that finite distribution divisibility is in P, but decomposability is NP-hard. For the former, we give an explicit polynomial-time algorithm. All results on distributions extend to weak-membership formulations, proving that the complexity of these problems is robust to perturbations.
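Membership of matrix divisibility in NP rests on the fact that a proposed root is easy to verify, even though deciding whether one exists is NP-complete. A sketch of that polynomial-time verification step for a hypothetical 2x2 example:

```python
import numpy as np

# Verifying a divisibility certificate: P is "n-divisible" if
# P = Q^n for some stochastic Q. Checking a candidate Q is cheap,
# which is what places the decision problem in NP.

def is_stochastic(M, tol=1e-12):
    return bool(np.all(M >= -tol) and
                np.allclose(M.sum(axis=1), 1.0, atol=1e-9))

def verifies_root(P, Q, n, tol=1e-9):
    return is_stochastic(Q) and np.allclose(
        np.linalg.matrix_power(Q, n), P, atol=tol)

Q = np.array([[0.9, 0.1],
              [0.2, 0.8]])
P = Q @ Q                                # 2-divisible by construction
print(verifies_root(P, Q, 2))            # True
print(verifies_root(P, np.eye(2), 2))    # False
```

The hardness lies entirely in finding such a Q; no efficient search is known unless P = NP.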
A Tale of Three Classes: Case Studies in Course Complexity
ERIC Educational Resources Information Center
Gill, T. Grandon; Jones, Joni
2010-01-01
This paper examines the question of decomposability versus complexity of teaching situations by presenting three case studies of MIS courses. Because all three courses were highly successful in their observed outcomes, the paper hypothesizes that if the attributes of effective course design are decomposable, one would expect to see a large number…
NASA Technical Reports Server (NTRS)
Wahl, Kurt; Klemm, Wilhelm
1988-01-01
The reaction of KO2 and CuO in an O2 atmosphere at 400 to 450 C results in KCuO, which is a steel-blue and nonmagnetic compound. This substance exhibits a characteristic X-ray diagram; it decomposes in dilute acids to form O2 and Cu(II) salts. It decomposes thermally above 500 C.
USDA-ARS?s Scientific Manuscript database
If not properly accounted for, auto-correlated errors in observations can lead to inaccurate results in soil moisture data analysis and reanalysis. Here, we propose a more generalized form of the triple collocation algorithm (GTC) capable of decomposing the total error variance of remotely-sensed surf...
Kill the Song--Steal the Show: What Does Distinguish Predicative Metaphors from Decomposable Idioms?
ERIC Educational Resources Information Center
Caillies, Stephanie; Declercq, Christelle
2011-01-01
This study examined the semantic processing difference between decomposable idioms and novel predicative metaphors. It was hypothesized that idiom comprehension results from the retrieval of a figurative meaning stored in memory, that metaphor comprehension requires a sense creation process and that this process difference affects the processing…
USDA-ARS?s Scientific Manuscript database
Blow flies are commonly associated with decomposing material. In most cases, the larvae are found feeding on decomposing vertebrate remains. However, some species have specialized to feed on living tissue or can survive on other alternate resources like feces. Because of their affiliation with su...
Saito, Shota; Hirata, Yoshito; Sasahara, Kazutoshi; Suzuki, Hideyuki
2015-01-01
Micro-blogging services, such as Twitter, offer opportunities to analyse user behaviour. Discovering and distinguishing behavioural patterns in micro-blogging services is valuable. However, it is challenging to distinguish users and to track the temporal development of collective attention within distinct user groups on Twitter. In this paper, we formulate this problem as tracking matrices decomposed by Nonnegative Matrix Factorisation for time-sequential matrix data, and propose a novel extension of Nonnegative Matrix Factorisation, which we refer to as Time Evolving Nonnegative Matrix Factorisation (TENMF). In our method, we describe the users and words posted in some time interval by a matrix, and use several such matrices as time-sequential data. We then apply Time Evolving Nonnegative Matrix Factorisation to these time-sequential matrices. TENMF can decompose time-sequential matrices while tracking the connection among the decomposed matrices, whereas conventional NMF decomposes each matrix into two lower-dimensional matrices independently, which can lose the time-sequential connection. Our proposed method performs well on artificial data. Moreover, we present several results and insights from experiments using real data from Twitter.
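TENMF builds on standard NMF. A minimal sketch of the underlying Lee-Seung multiplicative updates on a toy nonnegative "users x words" matrix (not the paper's TENMF algorithm, which adds a coupling between consecutive time steps):

```python
import numpy as np

# Basic NMF: factorise nonnegative V (users x words) into
# W (users x topics) and H (topics x words), V ~= W @ H.
rng = np.random.default_rng(0)
V = rng.random((6, 8))           # toy user-word matrix
k = 2                            # number of latent topics
W = rng.random((6, k)) + 0.1
H = rng.random((k, 8)) + 0.1

for _ in range(500):
    # Lee-Seung multiplicative updates: ratios of nonnegative
    # quantities, so W and H stay nonnegative throughout.
    H *= (W.T @ V) / (W.T @ W @ H + 1e-12)
    W *= (V @ H.T) / (W @ H @ H.T + 1e-12)

err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
print(round(err, 3))  # relative reconstruction error
```

TENMF, roughly, would factorise each interval's matrix this way while penalising divergence between factors of adjacent intervals.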
When microbes and consumers determine the limiting nutrient of autotrophs: a theoretical analysis
Cherif, Mehdi; Loreau, Michel
2008-01-01
Ecological stoichiometry postulates that differential nutrient recycling of elements such as nitrogen and phosphorus by consumers can shift the element that limits plant growth. However, this hypothesis has so far considered the effect of consumers, mostly herbivores, out of their food-web context. Microbial decomposers are important components of food webs, and might prove as important as consumers in changing the availability of elements for plants. In this theoretical study, we investigate how decomposers determine the nutrient that limits plants, both by feeding on nutrients and organic carbon released by plants and consumers, and by being fed upon by omnivorous consumers. We show that decomposers can greatly alter the relative availability of nutrients for plants. The type of limiting nutrient promoted by decomposers depends on their own elemental composition and, when applicable, on their ingestion by consumers. Our results highlight the limitations of previous stoichiometric theories of plant nutrient limitation control, which often ignored trophic levels other than plants and herbivores. They also suggest that detrital chains play an important role in determining plant nutrient limitation in many ecosystems. PMID:18854301
Myers, Ronald L
2013-09-01
In Raffia (Raphia taedigera) palm swamps, high mounds are frequently observed at the base of palm clumps. These mounds are formed by the accumulation of litter and organic matter, or may result from the upturned roots of wind-thrown trees. The mounds serve as anchorage sites for the palms, and could be important for the establishment of woody tree species in the swamp. Their formation might be explained by the unequal accumulation of organic matter in the wetland, or by differences in decomposition rates between Raffia litter and the litter produced in adjacent mixed forests. To distinguish between these hypotheses, I compared the spatial distribution of litter in a R. taedigera swamp with the litter distribution in an adjacent slope forest, where litter distribution is expected to be homogeneous. In addition, I compared decomposition rates of the major components of fine litter in three environments: two wetlands dominated by palms (R. taedigera and Manicaria saccifera) and a slope forest subject to less inundation. In the palm swamp, litter was noticeably concentrated near the bases of palm clumps rather than on the swamp floor. In the adjacent slope forest, the differences in litter distribution were small and there was no accumulation at the base of emergent trees. Litter production was also found to increase during heavy rains and storms that follow dry periods. The swamp environment, independent of litter type, showed significantly lower decomposition rates than the surrounding slope forest. Furthermore, R. taedigera litter decomposes as fast as slope forest litter. Overall, these results suggest that resistance to decomposition is not a major factor in the formation of mounds at the bases of R. taedigera clumps. Instead, litter accumulation contributes to the formation of mounds that rise above the surface of the swamp.
Optical diagnosis of cervical cancer by intrinsic mode functions
NASA Astrophysics Data System (ADS)
Mukhopadhyay, Sabyasachi; Pratiher, Sawon; Pratiher, Souvik; Pradhan, Asima; Ghosh, Nirmalya; Panigrahi, Prasanta K.
2017-03-01
In this paper, we use empirical mode decomposition (EMD) to discriminate cervical cancer tissues from normal ones based on elastic scattering spectroscopy. The phase space is reconstructed by decomposing the optical signal into a finite set of band-limited signals known as intrinsic mode functions (IMFs). It is shown that the area measure of the analytic IMFs provides good discrimination performance. Simulation results validate the efficacy of the IMFs followed by SVM-based classification.
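The "analytic IMF" referred to above is the complex analytic signal of each intrinsic mode function. A sketch of computing it with an FFT-based Hilbert transform (the same construction as scipy.signal.hilbert), applied to a synthetic narrow-band tone standing in for an IMF:

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via FFT: zero the negative frequencies,
    double the positive ones. Real part recovers x; imaginary
    part is the Hilbert transform of x."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

t = np.linspace(0.0, 1.0, 1024, endpoint=False)
imf = np.cos(2 * np.pi * 50 * t)   # synthetic narrow-band "IMF"
z = analytic_signal(imf)
envelope = np.abs(z)               # instantaneous amplitude
print(np.allclose(z.real, imf))    # True
```

Area-type discriminants can then be built from the trajectory of z in the complex plane.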
NASA Astrophysics Data System (ADS)
Lv, Gangming; Zhu, Shihua; Hui, Hui
Multi-cell resource allocation under a minimum rate requirement for each user in OFDMA networks is addressed in this paper. Based on Lagrange dual decomposition theory, the joint multi-cell resource allocation problem is decomposed and modeled as a limited-cooperation game, and a distributed multi-cell resource allocation algorithm is proposed. Analysis and simulation results show that, compared with the non-cooperative iterative water-filling algorithm, the proposed algorithm can remarkably reduce the inter-cell interference (ICI) level and improve overall system performance.
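The non-cooperative baseline mentioned above builds on per-cell water-filling. A minimal single-cell sketch with hypothetical channel gains (not the paper's distributed algorithm):

```python
import numpy as np

def water_filling(g, p_total):
    """Allocate p_k = max(mu - 1/g_k, 0) over subcarriers with
    gains g so that sum(p) = p_total (classic water-filling)."""
    inv = np.sort(1.0 / g)                 # inverse gains, ascending
    for k in range(len(g), 0, -1):         # try filling best k channels
        mu = (p_total + inv[:k].sum()) / k # candidate water level
        if mu > inv[k - 1]:                # level covers all k channels
            return np.maximum(mu - 1.0 / g, 0.0)
    return np.zeros_like(g)

g = np.array([2.0, 1.0, 0.25])             # hypothetical channel gains
p = water_filling(g, 1.0)
print(p, p.sum())                          # [0.75 0.25 0.  ] 1.0
```

In the iterative multi-cell version, each cell repeats this allocation treating the other cells' interference as fixed noise, which is what drives up the ICI the proposed algorithm reduces.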
Particle agglomeration and fuel decomposition in burning slurry droplets
NASA Astrophysics Data System (ADS)
Choudhury, P. Roy; Gerstein, Melvin
In a burning slurry droplet the particles tend to agglomerate and produce large clusters which are difficult to burn. As a consequence, the combustion efficiency is drastically reduced. For such a droplet, the nonlinear D2-t behavior associated with the formation of hard-to-burn agglomerates can be explained if the fuel decomposes on the surface of the particles. This paper deals with analysis of and experiments with JP-10 and Diesel #2 slurries prepared with inert SiC and Al2O3 particles. It provides direct evidence of decomposed fuel residue on the surface of particles heated by flame radiation. These decomposed fuel residues act as bonding agents and appear to be responsible for the observed agglomeration of particles in a slurry. Chemical analysis, scanning electron microscope photographs, and micro-analysis by electron scattering clearly show the presence of decomposed fuel residue on the surface of the particles. Diesel #2 decomposes relatively easily and therefore leaves a thicker deposit on SiC and forms larger agglomerates than the more stable JP-10. A surface reaction model with particles heated by flame radiation is able to describe the observed trend of the diameter history of the slurry fuel. Additional experiments with particles of lower emissivity (Al2O3) and radiation-absorbing dye validate the theoretical model of the role of flame radiation in fuel decomposition and the formation of agglomerates in burning slurry droplets.
2015-03-01
Heavy Oxide Inorganic Scintillator Crystals for Direct Detection of Fast Neutrons Based on Inelastic Scattering, by Philip R. Rusiecki. Heavy oxide inorganic scintillators may prove viable in the detection of fast neutrons based on the mechanism of inelastic scattering.
Management intensity alters decomposition via biological pathways
Wickings, Kyle; Grandy, A. Stuart; Reed, Sasha; Cleveland, Cory
2011-01-01
Current conceptual models predict that changes in plant litter chemistry during decomposition are primarily regulated by both initial litter chemistry and the stage, or extent, of mass loss. Far less is known about how variations in decomposer community structure (e.g., resulting from different ecosystem management types) could influence litter chemistry during decomposition. Given the recent agricultural intensification occurring globally and the importance of litter chemistry in regulating soil organic matter storage, our objectives were to determine the potential effects of agricultural management on plant litter chemistry and decomposition rates, and to investigate possible links between ecosystem management, litter chemistry and decomposition, and decomposer community composition and activity. We measured decomposition rates, changes in litter chemistry, extracellular enzyme activity, microarthropod communities, and bacterial versus fungal relative abundance in replicated conventional-till, no-till, and old field agricultural sites for both corn and grass litter. After one growing season, litter decomposition under conventional-till was 20% greater than in old field communities. However, decomposition rates in no-till were not significantly different from those in old field or conventional-till sites. After decomposition, grass residue in both conventional- and no-till systems was enriched in total polysaccharides relative to initial litter, while grass litter decomposed in old fields was enriched in nitrogen-bearing compounds and lipids. These differences corresponded with differences in decomposer communities, which also exhibited strong responses to both litter and management type. Overall, our results indicate that agricultural intensification can increase litter decomposition rates, alter decomposer communities, and influence litter chemistry in ways that could have important and long-term effects on soil organic matter dynamics.
We suggest that future efforts to more accurately predict soil carbon dynamics under different management regimes may need to explicitly consider how changes in litter chemistry during decomposition are influenced by the specific metabolic capabilities of the extant decomposer communities.
Scheibe, Andrea; Gleixner, Gerd
2014-01-01
We investigated the effect of leaf litter on below ground carbon export and soil carbon formation in order to understand how litter diversity affects carbon cycling in forest ecosystems. 13C labeled and unlabeled leaf litter of beech (Fagus sylvatica) and ash (Fraxinus excelsior), characterized by low and high decomposability, were used in a litter exchange experiment in the Hainich National Park (Thuringia, Germany). Litter was added in pure and mixed treatments with either beech or ash labeled with 13C. We collected soil water at 5 cm mineral soil depth below each treatment biweekly and determined dissolved organic carbon (DOC), δ13C values, and anion contents. In addition, we measured carbon concentrations and δ13C values in the organic and mineral soil (collected in 1 cm increments) down to 5 cm soil depth at the end of the experiment. Litter-derived C contributed less than 1% to the dissolved organic matter (DOM) collected at 5 cm mineral soil depth. The more readily decomposable ash litter released significantly more litter carbon (0.50±0.17%) than beech litter (0.17±0.07%). All soil layers together held around 30% of litter-derived carbon, indicating the large retention potential for litter-derived C in the top soil. Interestingly, in mixed (ash and beech litter) treatments we did not find a higher contribution of ash-derived carbon in DOM, the O horizon, or the mineral soil. This suggests that the known selective decomposition of more decomposable litter by soil fauna has no or only minor effects on the release and formation of litter-derived DOM and soil organic matter. Overall, our experiment showed that 1) litter-derived carbon is of low importance for dissolved organic carbon release and 2) litter of higher decomposability is decomposed faster, but litter diversity does not influence the carbon flow. PMID:25486628
NASA Technical Reports Server (NTRS)
Duraj, S. A.; Duffy, N. V.; Hepp, A. F.; Cowen, J. E.; Hoops, M. D.; Brothrs, S. M.; Baird, M. J.; Fanwick, P. E.; Harris, J. D.; Jin, M. H.-C.
2009-01-01
Ten dithiocarbamate complexes of indium(III) and gallium(III) have been prepared and characterized by elemental analysis, infrared spectra, and melting point. Each complex was decomposed thermally and its decomposition products separated and identified by combined gas chromatography/mass spectrometry. Their potential utility as precursors for photovoltaic materials was assessed. Bis(dibenzyldithiocarbamato)- and bis(diethyldithiocarbamato)copper(II), Cu(S2CN(CH2C6H5)2)2 and Cu(S2CN(C2H5)2)2 respectively, have also been examined for their suitability as precursors for copper sulfides for the fabrication of photovoltaic materials. Each complex was decomposed thermally and the products analyzed by GC/MS, TGA, and FTIR. The dibenzyl derivative decomposed at a lower temperature (225-320 C) to yield CuS as the product. The diethyl derivative decomposed at a higher temperature (260-325 C) to yield Cu2S. No Cu-containing fragments were noted in the mass spectra. Unusual recombination fragments were observed in the mass spectra of the diethyl derivative. Tris(bis(phenylmethyl)carbamodithioato-S,S'), commonly referred to as tris(N,N-dibenzyldithiocarbamato)indium(III), In(S2CNBz2)3, was synthesized and characterized by single-crystal X-ray crystallography. The compound crystallizes in the triclinic space group P1(bar) with two molecules per unit cell. The material was further characterized using a novel analytical system combining thermogravimetric analysis, gas chromatography/mass spectrometry, and Fourier transform infrared (FT-IR) spectroscopy to investigate its potential use as a precursor for the chemical vapor deposition (CVD) of thin-film materials for photovoltaic applications. Upon heating, the material thermally decomposes to release CS2 and benzyl moieties into the gas phase, resulting in bulk In2S3.
Preliminary spray CVD experiments indicate that In(S2CNBz2)3 decomposed on a Cu substrate reacts to produce stoichiometric CuInS2 films.
The evolution and disintegration of matter
Clarke, Frank Wigglesworth
1925-01-01
In any attempt to study the evolution of matter it is necessary to begin with its simplest known forms, the so-called chemical elements. During a great part of the nineteenth century many philosophical chemists held a vague belief that these elements were not distinct entities but manifestations of one primal substance-the protyle, as it is sometimes called. Other chemists, more conservative, looked askance at all such speculations and held fast to what they regarded as established facts. To them an element was something distinct from other kinds of matter, a substance which could neither be decomposed nor transmuted into anything else. This belief, however, was based entirely upon negative evidence-the inadequacy of our existing resources to produce such sweeping changes. Many important facts were ignored, and especially the fact that the elements are connected by very intimate relations, such as are best shown in the periodic law of Mendeleef, who, from gaps in his table of atomic weights, predicted the existence of three unknown metals, which have since been discovered. For these metals, scandium, gallium, and germanium, he foretold not only their atomic weights but also their most characteristic physical properties and the sort of compounds that each one would form. His prophecies have been verified in every essential particular. One obvious conclusion was soon drawn from Mendeleef's "law," although he was too cautious to admit it, namely, that the chemical elements must have had some community of origin. The philosophical speculations as to their nature were fully justified.
Multi-functional optical signal processing using optical spectrum control circuit
NASA Astrophysics Data System (ADS)
Hayashi, Shuhei; Ikeda, Tatsuhiko; Mizuno, Takayuki; Takahashi, Hiroshi; Tsuda, Hiroyuki
2015-02-01
Processing ultra-fast optical signals without optical/electronic conversion is in demand, and time-to-space conversion has been proposed as an effective solution. We have designed and fabricated an arrayed-waveguide grating (AWG) based optical spectrum control circuit (OSCC) using silica planar lightwave circuit (PLC) technology. This device is composed of an AWG, tunable phase shifters and a mirror. The principle of signal processing is to spatially decompose the signal's frequency components by using the AWG. Then, the phase of each frequency component is controlled by the tunable phase shifters. Finally, the light is reflected back to the AWG by the mirror and synthesized. The amplitude of each frequency component can be controlled by distributing the power to high diffraction order light. The spectral controlling range of the OSCC is 100 GHz and its resolution is 1.67 GHz. This paper describes equipping the OSCC with optical code division multiplexing (OCDM) encoder/decoder functionality. The encoding principle is to apply certain phase patterns to the signal's frequency components and intentionally disperse the signal. The decoding principle is likewise to apply certain phase patterns to the frequency components at the receiving side. If the applied phase pattern compensates for the intentional dispersion, the waveform is regenerated, but if the pattern is not appropriate, the waveform remains dispersed. We also propose an arbitrary filter function by exploiting the OSCC's amplitude and phase control attributes. For example, a filtered optical signal transmitted through multiple optical nodes that use the wavelength multiplexer/demultiplexer can be equalized.
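The encode/decode principle described above can be sketched numerically: multiplying each frequency component of a pulse by a pseudo-random phase pattern disperses the waveform, and applying the conjugate pattern at the receiver restores it. This is a minimal discrete-time sketch; the signal length, phase pattern, and naive DFT are illustrative assumptions, not the paper's photonic implementation.

```python
import cmath
import random

def dft(x):
    """Naive discrete Fourier transform (stand-in for the AWG's spectral decomposition)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    """Inverse DFT (stand-in for re-synthesis after reflection back through the AWG)."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def apply_phase(X, pattern):
    """Multiply each frequency component by exp(i*phi): the encoder/decoder core."""
    return [Xk * cmath.exp(1j * p) for Xk, p in zip(X, pattern)]

rng = random.Random(0)
pattern = [rng.uniform(0, 2 * cmath.pi) for _ in range(16)]  # hypothetical OCDM code

pulse = [1.0] + [0.0] * 15                                   # a short pulse (illustrative)
encoded = idft(apply_phase(dft(pulse), pattern))             # intentionally dispersed
decoded = idft(apply_phase(dft(encoded), [-p for p in pattern]))  # conjugate pattern
```

With the matching (conjugate) pattern the pulse is regenerated; any other pattern leaves the waveform dispersed.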
Zheng, Shuanghao; Li, Zhilin; Wu, Zhong-Shuai; Dong, Yanfeng; Zhou, Feng; Wang, Sen; Fu, Qiang; Sun, Chenglin; Guo, Liwei; Bao, Xinhe
2017-04-25
Interfacial integration of a shape-engineered electrode with a strongly bonded current collector is the key for minimizing both ionic and electronic resistance and then developing high-power supercapacitors. Herein, we demonstrated the construction of high-power micro-supercapacitors (VG-MSCs) based on high-density unidirectional arrays of vertically aligned graphene (VG) nanosheets, derived from a thermally decomposed SiC substrate. The as-grown VG arrays showed a standing basal plane orientation grown on a (0001̅) SiC substrate, tailored thickness (3.5-28 μm), high-density structurally ordering alignment of graphene consisting of 1-5 layers, vertically oriented edges, open intersheet channels, high electrical conductivity (192 S cm -1 ), and strong bonding of the VG edges to the SiC substrate. As a result, the demonstrated VG-MSCs displayed a high areal capacitance of ∼7.3 mF cm -2 and a fast frequency response with a short time constant of 9 ms. Furthermore, VG-MSCs in both an aqueous polymer gel electrolyte and nonaqueous ionic liquid of 1-ethyl-3-methylimidazolium tetrafluoroborate operated well at high scan rates of up to 200 V s -1 . More importantly, VG-MSCs offered a high power density of ∼15 W cm -3 in gel electrolyte and ∼61 W cm -3 in ionic liquid. Therefore, this strategy of producing high-density unidirectional VG nanosheets directly bonded on a SiC current collector demonstrated the feasibility of manufacturing high-power compact supercapacitors.
DOE Office of Scientific and Technical Information (OSTI.GOV)
DeAngelis, Kristen M.; Sharma, Deepak; Varney, Rebecca
2013-08-29
The anaerobic isolate Enterobacter lignolyticus SCF1 was initially cultivated based on anaerobic growth on lignin as the sole carbon source. The bacteria were isolated from tropical forest soils that decompose litter rapidly under low and fluctuating redox potentials, making it likely that bacteria using oxygen-independent enzymes play an important role in decomposition. We have examined differential expression of Enterobacter lignolyticus SCF1 during growth on lignin. After 48 hours of growth, we used transcriptomics and proteomics to define the enzymes and other regulatory machinery that these organisms use to degrade lignin, as well as metabolomics to measure lignin degradation and monitor the use of lignin and iron as terminal electron acceptors that facilitate more efficient use of carbon. Proteomics revealed accelerated xylose uptake and metabolism under lignin-amended growth, and lignin degradation via the 4-hydroxyphenylacetate degradation pathway, catalase/peroxidase enzymes, and the glutathione biosynthesis and glutathione S-transferase proteins. We also observed increased production of NADH-quinone oxidoreductase, other electron transport chain proteins, and ATP synthase and ATP-binding cassette (ABC) transporters. Our data show the advantages of a multi-omics approach, where incomplete pathways identified by genomics were completed, and new observations were made on coping with poor carbon availability. The fast growth, high efficiency and specificity of the enzymes employed in bacterial anaerobic litter deconstruction make these soils useful templates for improving biofuel production.
Mohammadi Khalfbadam, Hassan; Cheng, Ka Yu; Sarukkalige, Ranjan; Kaksonen, Anna H; Kayaalp, Ahmet S; Ginige, Maneesha P
2016-09-01
This study examined for the first time the use of bioelectrochemical systems (BES) to entrap, decompose and oxidise fresh algal biomass from an algae-laden effluent. The experimental process consisted of a photobioreactor for continuous production of the algal-laden effluent, and a two-chamber BES equipped with anodic graphite granules and carbon felt to physically remove and oxidise algal biomass from the influent. Results showed that the BES filter could retain ca. 90% of the suspended solids (SS) loaded. A coulombic efficiency (CE) of 36.6% (based on particulate chemical oxygen demand (PCOD) removed) was achieved, which was consistent with the highest CEs of BES studies (operated in microbial fuel cell (MFC) mode) that included additional pre-treatment steps for algae hydrolysis. Overall, this study suggests that a filter-type BES anode can effectively entrap, decompose and in situ oxidise algae without the need for a separate pre-treatment step.
Spatially resolved spectroscopy analysis of the XMM-Newton large program on SN1006
NASA Astrophysics Data System (ADS)
Li, Jiang-Tao; Decourchelle, Anne; Miceli, Marco; Vink, Jacco; Bocchino, Fabrizio
2016-04-01
We perform analysis of the XMM-Newton large program on SN1006 based on our newly developed methods of spatially resolved spectroscopy analysis. We extract spectra from both low and high resolution meshes. The high resolution meshes (3596 in total) are used to roughly decompose the thermal and non-thermal components and characterize the spatial distributions of different parameters, such as temperature, abundances of different elements, ionization age, and electron density of the thermal component, as well as photon index and cutoff frequency of the non-thermal component. The low resolution meshes (583 in total), on the other hand, focus on the interior region dominated by the thermal emission and have enough counts to well characterize the Si lines. We fit the spectra from the low resolution meshes with different models, in order to decompose the multiple plasma components at different thermal and ionization states and compare their spatial distributions. In this poster, we will present the initial results of this project.
Composting oily sludges: Characterizing microflora using randomly amplified polymorphic DNA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Persson, A.; Quednau, M.; Ahrne, S.
1995-12-31
Laboratory-scale composts in which oily sludge was composted under mesophilic conditions with amendments such as peat, bark, and fresh or decomposed horse manure were studied with respect to basic parameters such as oil degradation, respirometry, and bacterial numbers. Further, an attempt was made to characterize part of the bacterial flora using randomly amplified polymorphic DNA (RAPD). The compost based on decomposed horse manure showed the greatest reduction of oil (85%). Comparison with a killed control indicated that microbial degradation had actually occurred. However, a substantial part of the oil was stabilized rather than totally broken down. Volatiles, on the contrary, accounted for a rather small percentage (5%) of the observed reduction. RAPD indicated that a selection had taken place and that the microbial flora dominating during the active degradation of oil were not the same as the ones dominating the different basic materials. The stabilized compost, on the other hand, had bacterial flora with similarities to the ones found in peat and bark.
Zhao, Kai; Musolesi, Mirco; Hui, Pan; Rao, Weixiong; Tarkoma, Sasu
2015-03-16
Human mobility has been empirically observed to exhibit Lévy flight characteristics and behaviour with power-law distributed jump sizes. The fundamental mechanisms behind this behaviour have not yet been fully explained. In this paper, we propose to explain the Lévy walk behaviour observed in human mobility patterns by decomposing them into different classes according to the different transportation modes, such as Walk/Run, Bike, Train/Subway or Car/Taxi/Bus. Our analysis is based on two real-life GPS datasets containing approximately 10 and 20 million GPS samples with transportation mode information. We show that human mobility can be modelled as a mixture of different transportation modes, and that these single movement patterns can be approximated by a lognormal distribution rather than a power-law distribution. Then, we demonstrate that the mixture of the decomposed lognormal flight distributions associated with each modality is a power-law distribution, providing an explanation for the emergence of the Lévy walk patterns that characterize human mobility.
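The paper's central idea, that per-mode jump sizes are lognormal while the mixture across modes has a much heavier tail, can be illustrated with a small simulation. The mode names and (mu, sigma) parameters below are hypothetical placeholders, not values fitted to the paper's GPS datasets.

```python
import random

rng = random.Random(42)

# Hypothetical (mu, sigma) of log jump size for each transportation mode.
MODES = {
    "walk":  (0.0, 0.5),
    "bike":  (1.0, 0.6),
    "car":   (2.5, 0.8),
    "train": (3.5, 0.9),
}

def sample_flights(n):
    """Draw n jump sizes from the mode mixture (uniform mode choice, illustrative)."""
    params = list(MODES.values())
    return [rng.lognormvariate(*rng.choice(params)) for _ in range(n)]

flights = sample_flights(5000)
```

Plotting the empirical distribution of `flights` on log-log axes would show an approximately straight (power-law-like) tail even though every component is lognormal, which is the qualitative effect the paper demonstrates.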
Multiscale structure of time series revealed by the monotony spectrum.
Vamoş, Călin
2017-03-01
Observation of complex systems produces time series with specific dynamics at different time scales. The majority of existing numerical methods for multiscale analysis first decompose the time series into several simpler components, and the multiscale structure is then inferred from the properties of those components. We present a numerical method which describes the multiscale structure of arbitrary time series without decomposing them. It is based on the monotony spectrum, defined as the variation of the mean amplitude of the monotonic segments with respect to the mean local time scale during successive averagings of the time series, the local time scales being the durations of the monotonic segments. The maxima of the monotony spectrum indicate the time scales which dominate the variations of the time series. We show that the monotony spectrum can correctly analyze a diversity of artificial time series and can distinguish deterministic variations at large time scales from random fluctuations. As an application we analyze the multifractal structure of some hydrological time series.
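The basic quantities involved can be sketched as follows: split a series into maximal monotonic segments, then take the mean segment duration (the mean local time scale) and the mean segment amplitude. This is only the innermost step; the full method iterates it under successive averagings of the series, which is omitted here.

```python
def monotonic_segments(x):
    """Split a series into maximal monotonic runs; return (duration, amplitude) pairs."""
    segs, start = [], 0
    for i in range(1, len(x) - 1):
        rising_before = x[i] >= x[i - 1]
        rising_after = x[i + 1] >= x[i]
        if rising_before != rising_after:          # local extremum: segment boundary
            segs.append((i - start, abs(x[i] - x[start])))
            start = i
    segs.append((len(x) - 1 - start, abs(x[-1] - x[start])))
    return segs

def monotony_point(x):
    """Mean local time scale and mean amplitude of the monotonic segments."""
    segs = monotonic_segments(x)
    mean_scale = sum(s[0] for s in segs) / len(segs)
    mean_amp = sum(s[1] for s in segs) / len(segs)
    return mean_scale, mean_amp

# Toy series with two peaks: three monotonic segments.
scale, amp = monotony_point([0, 1, 2, 1, 0, 1])
```

Repeating `monotony_point` on successively averaged copies of the series traces out the monotony spectrum, whose maxima indicate dominant time scales.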
Multivariate Time Series Decomposition into Oscillation Components.
Matsuda, Takeru; Komaki, Fumiyasu
2017-08-01
Many time series are considered to be a superposition of several oscillation components. We have proposed a method for decomposing univariate time series into oscillation components and estimating their phases (Matsuda & Komaki, 2017). In this study, we extend that method to multivariate time series. We assume that several oscillators underlie the given multivariate time series and that each variable corresponds to a superposition of the projections of the oscillators. Thus, the oscillators superpose on each variable with amplitude and phase modulation. Based on this idea, we develop Gaussian linear state-space models and use them to decompose the given multivariate time series. The model parameters are estimated from data using the empirical Bayes method, and the number of oscillators is determined using the Akaike information criterion. Therefore, the proposed method extracts underlying oscillators in a data-driven manner and enables investigation of phase dynamics in a given multivariate time series. Numerical results show the effectiveness of the proposed method. From monthly mean north-south sunspot number data, the proposed method reveals an interesting phase relationship.
Adsorption behaviour of SF6 decomposed species onto Pd4-decorated single-walled CNT: a DFT study
NASA Astrophysics Data System (ADS)
Cui, Hao; Zhang, Xiaoxing; Zhang, Jun; Tang, Ju
2018-07-01
Metal nanocluster decorated single-walled carbon nanotubes (SWCNTs), which show improved adsorption behaviour towards gaseous molecules compared with intrinsic ones, have been widely accepted as a workable medium for gas sensing due to their strong catalytic activity. In this work, a Pd4 cluster is adopted as a catalytic centre to theoretically study the adsorption properties of Pd4-decorated SWCNT upon SF6 decomposed species. Results indicate that Pd4-SWCNT, possessing good responses and sensitivities towards three decomposed species of SF6, could realise selective detection of them according to the different conductivity changes resulting from the varying adsorption ability. The response of Pd4-SWCNT towards the three molecules, in order, is SOF2 > H2S > SO2, and the conductivity of the proposed material increases in the SOF2 and H2S systems while declining in the SO2 system. Such conclusions would be helpful for experimentalists exploring novel SWCNT-based sensors for evaluating the operating state of SF6 insulation devices.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Glascoe, E A; Hsu, P C; Springer, H K
PBXN-9, an HMX formulation, was thermally damaged and thermally decomposed in order to determine the morphological changes and decomposition kinetics that occur in the material after mild to moderate heating. The material and its constituents were decomposed using standard thermal analysis techniques (DSC and TGA) and the decomposition kinetics are reported using different kinetic models. Pressed parts and prill were thermally damaged, i.e. heated to temperatures that resulted in material changes but did not result in significant decomposition or explosion, and analyzed. In general, the thermally damaged samples showed a significant increase in porosity, a decrease in density, and a small amount of weight loss. These PBXN-9 samples appear to sustain more thermal damage than similar HMX-Viton A formulations, and the most likely reasons are the decomposition/evaporation of a volatile plasticizer and a polymorphic transition of the HMX from the β to the δ phase.
Bacteria in decomposing wood and their interactions with wood-decay fungi.
Johnston, Sarah R; Boddy, Lynne; Weightman, Andrew J
2016-11-01
The fungal community within dead wood has received considerable study, but far less attention has been paid to bacteria in the same habitat. Bacteria have long been known to inhabit decomposing wood, but much remains underexplored about their identity and ecology. Bacteria within the dead wood environment must interact with wood-decay fungi, but again, very little is known about the form this takes; there are indications of both antagonistic and beneficial interactions within this fungal microbiome. Fungi are hypothesised to play an important role in shaping bacterial communities in wood, and conversely, bacteria may affect wood-decay fungi in a variety of ways. This minireview considers what is currently known about bacteria in wood and their interactions with fungi, and proposes possible associations based on examples from other habitats. It aims to identify key knowledge gaps and pressing questions for future research.
USDA-ARS?s Scientific Manuscript database
The large amounts of organic matter stored in permafrost-region soils are preserved in a relatively undecomposed state by the cold and wet environmental conditions limiting decomposer activity. With pending climate changes and the potential for warming of Arctic soils, there is a need to better unde...
Environmental Influences on Well-Being: A Dyadic Latent Panel Analysis of Spousal Similarity
ERIC Educational Resources Information Center
Schimmack, Ulrich; Lucas, Richard E.
2010-01-01
This article uses dyadic latent panel analysis (DLPA) to examine environmental influences on well-being. DLPA requires longitudinal dyadic data. It decomposes the observed variance of both members of a dyad into a trait, state, and an error component. Furthermore, state variance is decomposed into initial and new state variance. Total observed…
Dust to dust - How a human corpse decomposes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vass, Arpad Alexander
2010-01-01
After death, the human body decomposes through four stages. The final, skeleton stage may be reached as quickly as two weeks or as slowly as two years, depending on temperature, humidity and other environmental conditions where the body lies. Dead bodies emit a surprising array of chemicals, from benzene to freon, which can help forensic scientists find clandestine graves.
Chemical vapor deposition of group IIIB metals
Erbil, A.
1989-11-21
Coatings of Group IIIB metals and compounds thereof are formed by chemical vapor deposition, in which a heat decomposable organometallic compound of the formula given in the patent, where M is a Group IIIB metal, such as lanthanum or yttrium, and R is a lower alkyl or alkenyl radical containing from 2 to about 6 carbon atoms, is contacted with a heated substrate which is above the decomposition temperature of the organometallic compound. The pure metal is obtained when the compound of formula 1 is the sole heat decomposable compound present and deposition is carried out under nonoxidizing conditions. Intermetallic compounds such as lanthanum telluride can be deposited from a lanthanum compound of formula 1 and a heat decomposable tellurium compound under nonoxidizing conditions.
Zambrow, J.; Hausner, H.
1957-09-24
A method of joining metal parts for the preparation of relatively long, thin fuel element cores of uranium or alloys thereof for nuclear reactors is described. The process includes the steps of cleaning the surfaces to be joined, placing the surfaces together, and providing between and in contact with them a layer of a compound, in finely divided form, that is decomposable to metal by heat. The fuel element members are then heated at the contact zone and maintained under pressure during the heating to decompose the compound to metal and sinter the members and reduced metal together, producing a weld. The preferred class of decomposable compounds is the metal hydrides, such as uranium hydride, which release hydrogen, thus providing a reducing atmosphere in the vicinity of the welding operation.
Catalytic cartridge SO3 decomposer
Galloway, Terry R.
1982-01-01
A catalytic cartridge surrounding a heat pipe driven by a heat source is utilized as an SO3 decomposer for thermochemical hydrogen production. The cartridge has two embodiments, a cross-flow cartridge and an axial-flow cartridge. In the cross-flow cartridge, SO3 gas is flowed through a chamber and incident normally to a catalyst-coated tube extending through the chamber, the catalyst-coated tube surrounding the heat pipe. In the axial-flow cartridge, SO3 gas is flowed through the annular space between concentric inner and outer cylindrical walls, the inner cylindrical wall being coated with a catalyst and surrounding the heat pipe. The modular cartridge decomposer provides high thermal efficiency, high conversion efficiency, and increased safety.
Chemical vapor deposition of group IIIB metals
Erbil, Ahmet
1989-01-01
Coatings of Group IIIB metals and compounds thereof are formed by chemical vapor deposition, in which a heat decomposable organometallic compound of the formula (I) given in the patent, where M is a Group IIIB metal, such as lanthanum or yttrium, and R is a lower alkyl or alkenyl radical containing from 2 to about 6 carbon atoms, is contacted with a heated substrate which is above the decomposition temperature of the organometallic compound. The pure metal is obtained when the compound of the formula I is the sole heat decomposable compound present and deposition is carried out under nonoxidizing conditions. Intermetallic compounds such as lanthanum telluride can be deposited from a lanthanum compound of formula I and a heat decomposable tellurium compound under nonoxidizing conditions.
Method for forming hermetic seals
NASA Technical Reports Server (NTRS)
Gallagher, Brian D.
1987-01-01
A firmly adherent film of bondable metal, such as silver, is applied to the surface of glass or another substrate by decomposing a layer of a solution of a thermally decomposable metallo-organic deposition (MOD) compound, such as silver neodecanoate in xylene. The MOD compound thermally decomposes into metal and gaseous by-products. Sealing is accomplished by depositing a layer of bonding metal, such as solder or a brazing alloy, on the metal film and then forming an assembly with another high melting point metal surface, such as a layer of Kovar. When the assembly is heated above the melting point of the solder, the solder flows, wets the adjacent surfaces, and forms a hermetic seal between the metal film and metal surface as the assembly cools.
Seeing the Implications of Zero Again
ERIC Educational Resources Information Center
Ponce, Gregorio A.
2015-01-01
Composing and decomposing numbers with base-ten blocks depends on children being able to see ten both as ten units and as one group of ten units (a long), cognizant that its value is the same in either case. Being able to see, or deciding when to see, an object or collection of objects as a unit is a key skill that children must develop to solve…
Program Helps Decompose Complex Design Systems
NASA Technical Reports Server (NTRS)
Rogers, James L., Jr.; Hall, Laura E.
1994-01-01
DeMAID (A Design Manager's Aid for Intelligent Decomposition) is a knowledge-based software system for ordering the sequence of modules and identifying a possible multilevel structure for a design problem. It groups modular subsystems on the basis of interactions among them, saving considerable money and time in the total design process, particularly for a new design problem in which the order of modules has not been defined. Available in two machine versions: Macintosh and Sun.
Propellant Charge with Reduced Muzzle Smoke and Flash Characteristics.
a conventional double base extruded propellant as well as a more energetic nitramine composition and a microencapsulated oxamide coolant additive for...cooling the gases exiting the weapon's barrel. In the preferred embodiment, the oxamide is encapsulated with a gelatin and the resulting microcapsules ...of this invention to provide a novel microencapsulated propellant additive which will pass through the propellant flame zone intact and decompose
Study on the decomposition of trace benzene over V2O5–WO3/TiO2-based catalysts in simulated flue gas
Trace levels (1 and 10 ppm) of gaseous benzene were catalytically decomposed in a fixed-bed catalytic reactor with monolithic oxides of vanadium and tungsten supported on titanium oxide (V2O5–WO3/TiO2) catalysts under conditions simulating the cooling of waste incineration flue g...
MO-FG-204-01: Improved Noise Suppression for Dual-Energy CT Through Entropy Minimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Petrongolo, M; Zhu, L
2015-06-15
Purpose: In dual energy CT (DECT), noise amplification during signal decomposition significantly limits the utility of basis material images. Since clinically relevant objects contain a limited number of materials, we propose to suppress noise for DECT based on image entropy minimization. An adaptive weighting scheme is employed during noise suppression to improve decomposition accuracy with limited effect on spatial resolution and image texture preservation. Methods: From decomposed images, we first generate a 2D plot of scattered data points, using basis material densities as coordinates. Data points representing the same material generate a highly asymmetric cluster. We orient an axis by minimizing the entropy in a 1D histogram of these points projected onto the axis. To suppress noise, we replace pixel values of decomposed images with center-of-mass values in the direction perpendicular to the optimal axis. To limit errors due to cluster overlap, we weight each data point's contribution based on its high and low energy CT values and location within the image. The proposed method's performance is assessed on physical phantom studies. Electron density is used as the quality metric for decomposition accuracy. Our results are compared to those without noise suppression and with a recently developed iterative method. Results: The proposed method reduces noise standard deviations of the decomposed images by at least one order of magnitude. On the Catphan phantom, this method greatly preserves the spatial resolution and texture of the CT images and limits induced error in measured electron density to below 1.2%. In the head phantom study, the proposed method performs the best in retaining fine, intricate structures. Conclusion: The entropy minimization based algorithm with adaptive weighting substantially reduces DECT noise while preserving image spatial resolution and texture.
Future investigations will include extensive studies of material decomposition accuracy that go beyond the current electron density calculations. This work was supported in part by the National Institutes of Health (NIH) under Grant Number R21 EB012700.
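The axis-orientation step described in the abstract can be sketched as a brute-force 1D search: project the scatter points onto candidate axes, histogram the projections, and keep the axis whose histogram has minimal Shannon entropy. The synthetic elongated cluster, bin width, and degree-by-degree scan below are illustrative assumptions, not the paper's implementation.

```python
import math
import random

rng = random.Random(1)

# Synthetic "single material" cluster elongated along the 45-degree direction
# (hypothetical data standing in for basis-material density pairs).
pts = [(t + rng.gauss(0, 0.01), t + rng.gauss(0, 0.01))
       for t in (rng.uniform(0, 10) for _ in range(400))]

def entropy_of_projection(points, theta, bin_w=0.05):
    """Shannon entropy of the 1D histogram of points projected onto an axis at angle theta."""
    c, s = math.cos(theta), math.sin(theta)
    counts = {}
    for x, y in points:
        b = int((x * c + y * s) // bin_w)       # histogram bin of the projection
        counts[b] = counts.get(b, 0) + 1
    n = len(points)
    return -sum(k / n * math.log(k / n) for k in counts.values())

# The optimal axis is the one with the most compact (lowest-entropy) projection,
# i.e. perpendicular to the cluster's elongation (135 degrees here).
best_deg = min(range(180), key=lambda d: entropy_of_projection(pts, math.radians(d)))
```

Noise suppression then replaces each point by the cluster's center of mass along the direction perpendicular to this optimal axis, which is the step the adaptive weighting refines.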
NASA Astrophysics Data System (ADS)
Beigi, Parmida; Salcudean, Septimiu E.; Rohling, Robert; Ng, Gary C.
2016-03-01
This paper presents an automatic localization method for a standard hand-held needle in ultrasound based on temporal motion analysis of spatially decomposed data. Subtle displacement arising from tremor motion has a periodic pattern which is usually imperceptible in the intensity image but may convey information in the phase image. Our method aims to detect such periodic motion of a hand-held needle and distinguish it from intrinsic tissue motion, using a technique inspired by video magnification. Complex steerable pyramids allow specific design of the wavelets' orientations according to the insertion angle as well as the measurement of the local phase. We therefore use steerable pairs of even and odd Gabor wavelets to decompose the ultrasound B-mode sequence into various spatial frequency bands. Variations of the local phase measurements in the spatially decomposed input data is then temporally analyzed using a finite impulse response bandpass filter to detect regions with a tremor motion pattern. Results obtained from different pyramid levels are then combined and thresholded to generate the binary mask input for the Hough transform, which determines an estimate of the direction angle and discards some of the outliers. Polynomial fitting is used at the final stage to remove any remaining outliers and improve the trajectory detection. The detected needle is finally added back to the input sequence as an overlay of a cloud of points. We demonstrate the efficiency of our approach to detect the needle using subtle tremor motion in an agar phantom and in-vivo porcine cases where intrinsic motion is also present. The localization accuracy was calculated by comparing to expert manual segmentation, and presented in (mean, standard deviation and root-mean-square error) of (0.93°, 1.26° and 0.87°) and (1.53 mm, 1.02 mm and 1.82 mm) for the trajectory and the tip, respectively.
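The phase-measurement ingredient of the method above, a steerable pair of even and odd Gabor wavelets whose responses yield a local phase via atan2, can be sketched in 1D. The filter parameters and test signal are illustrative; the paper's full pipeline (complex steerable pyramid, temporal bandpass filtering, Hough transform, polynomial fitting) is not reproduced here.

```python
import math

def gabor_pair(sigma, freq, half):
    """Quadrature pair of Gabor filters: Gaussian-windowed cosine (even) and sine (odd)."""
    xs = range(-half, half + 1)
    env = [math.exp(-x * x / (2.0 * sigma * sigma)) for x in xs]
    even = [e * math.cos(2 * math.pi * freq * x) for e, x in zip(env, xs)]
    odd = [e * math.sin(2 * math.pi * freq * x) for e, x in zip(env, xs)]
    return even, odd

def local_phase(signal, even, odd):
    """Local phase at each position where the filters fit entirely inside the signal."""
    half = len(even) // 2
    phases = []
    for i in range(half, len(signal) - half):
        window = signal[i - half:i + half + 1]
        re = sum(w * e for w, e in zip(window, even))
        im = sum(w * o for w, o in zip(window, odd))
        phases.append(math.atan2(im, re))
    return phases

# A pure 0.1 cycles/sample tone: the local phase changes by ~2*pi*0.1 per sample.
even, odd = gabor_pair(sigma=4.0, freq=0.1, half=10)
tone = [math.sin(2 * math.pi * 0.1 * n) for n in range(100)]
phases = local_phase(tone, even, odd)
```

In the paper's setting the same local-phase measurement is made per pixel and per pyramid orientation, and it is the temporal periodicity of these phases, not the intensity, that reveals the tremor of the hand-held needle.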
Daily water level forecasting using wavelet decomposition and artificial intelligence techniques
NASA Astrophysics Data System (ADS)
Seo, Youngmin; Kim, Sungwon; Kisi, Ozgur; Singh, Vijay P.
2015-01-01
Reliable water level forecasting for reservoir inflow is essential for reservoir operation. The objective of this paper is to develop and apply two hybrid models for daily water level forecasting and investigate their accuracy. These two hybrid models are the wavelet-based artificial neural network (WANN) and the wavelet-based adaptive neuro-fuzzy inference system (WANFIS). Wavelet decomposition is employed to decompose an input time series into approximation and detail components. The decomposed time series are used as inputs to artificial neural networks (ANN) and an adaptive neuro-fuzzy inference system (ANFIS) for the WANN and WANFIS models, respectively. Based on statistical performance indexes, the WANN and WANFIS models are found to perform better than the ANN and ANFIS models, with WANFIS7-sym10 yielding the best performance among all the models tested; wavelet decomposition thus improves the accuracy of ANN and ANFIS. This study also evaluates the accuracy of the WANN and WANFIS models for different mother wavelets, including Daubechies, Symmlet and Coiflet wavelets. Model performance depends on the input sets and mother wavelets, and wavelet decomposition using the mother wavelet db10 can further improve the efficiency of the ANN and ANFIS models. Results obtained from this study indicate that the conjunction of wavelet decomposition and artificial intelligence models can be a useful tool for accurately forecasting daily water levels and can outperform conventional forecasting models.
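The decomposition step can be illustrated with a hand-rolled one-level Haar transform. The study itself used Daubechies, Symmlet and Coiflet wavelets via standard toolboxes; `haar_dwt` and `wavelet_features` below are hypothetical helpers showing the approximation/detail split, not the authors' code.

```python
import numpy as np

def haar_dwt(x):
    """One-level Haar wavelet decomposition of an even-length series:
    returns (approximation, detail) half-length components."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)  # low-pass: smooth trend
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)  # high-pass: local fluctuations
    return a, d

def wavelet_features(series, n_levels=2):
    """Decompose a water-level series into sub-series that could be fed
    to an ANN/ANFIS as separate inputs (hypothetical feature builder)."""
    feats = []
    a = np.asarray(series, dtype=float)
    for _ in range(n_levels):
        a, d = haar_dwt(a)
        feats.append(d)      # detail at each scale
    feats.append(a)          # final coarse approximation
    return feats

levels = wavelet_features(np.sin(np.linspace(0.0, 6.28, 64)), n_levels=2)
```

Each element of `levels` would become one input channel of the hybrid model; a real implementation would use a library wavelet (e.g. db10) rather than Haar.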
McLaren, Jennie R; Buckeridge, Kate M; van de Weg, Martine J; Shaver, Gaius R; Schimel, Joshua P; Gough, Laura
2017-05-01
Rapid arctic vegetation change as a result of global warming includes an increase in the cover and biomass of deciduous shrubs. Increases in shrub abundance will result in a proportional increase of shrub litter in the litter community, potentially affecting carbon turnover rates in arctic ecosystems. We investigated the effects of leaf and root litter of a deciduous shrub, Betula nana, on decomposition, by examining species-specific decomposition patterns, as well as effects of Betula litter on the decomposition of other species. We conducted a 2-yr decomposition experiment in moist acidic tundra in northern Alaska, where we decomposed three tundra species (Vaccinium vitis-idaea, Rhododendron palustre, and Eriophorum vaginatum) alone and in combination with Betula litter. Decomposition patterns for leaf and root litter were determined using three different measures of decomposition (mass loss, respiration, extracellular enzyme activity). We report faster decomposition of Betula leaf litter compared to other species, with support for species differences coming from all three measures of decomposition. Mixing effects were less consistent among the measures, with negative mixing effects shown only for mass loss. In contrast, there were few species differences or mixing effects for root decomposition. Overall, we attribute longer-term litter mass loss patterns to patterns created by early decomposition processes in the first winter. We note numerous differences for species patterns between leaf and root decomposition, indicating that conclusions from leaf litter experiments should not be extrapolated to below-ground decomposition. 
The high decomposition rates of Betula leaf litter aboveground, and relatively similar decomposition rates of multiple species below, suggest a potential for increases in turnover in the fast-decomposing carbon pool of leaves and fine roots as the dominance of deciduous shrubs in the Arctic increases, but this outcome may be tempered by negative litter mixing effects during the early stages of encroachment. © 2017 by the Ecological Society of America.
Systems and methods for analyzing building operations sensor data
Mezic, Igor; Eisenhower, Bryan A.
2015-05-26
Systems and methods are disclosed for analyzing building sensor information and decomposing the information therein to a more manageable and more useful form. Certain embodiments integrate energy-based and spectral-based analysis methods with parameter sampling and uncertainty/sensitivity analysis to achieve a more comprehensive perspective of building behavior. The results of this analysis may be presented to a user via a plurality of visualizations and/or used to automatically adjust certain building operations. In certain embodiments, advanced spectral techniques, including Koopman-based operations, are employed to discern features from the collected building sensor data.
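One common spectral technique in this family is dynamic mode decomposition (DMD), a standard numerical route to Koopman mode analysis of sensor time series. The sketch below is illustrative only, not the patented method, and the synthetic two-sensor data set is an assumption.

```python
import numpy as np

def dmd_modes(X, r=None):
    """Exact dynamic mode decomposition. X: snapshot matrix
    (n_sensors, n_times). Finds eigenvalues/modes of the best-fit
    linear operator A with X[:, 1:] ~= A @ X[:, :-1]."""
    X1, X2 = X[:, :-1], X[:, 1:]
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    if r is not None:                      # optional rank truncation
        U, s, Vh = U[:, :r], s[:r], Vh[:r]
    Atilde = U.conj().T @ X2 @ Vh.conj().T @ np.diag(1.0 / s)
    eigvals, W = np.linalg.eig(Atilde)
    modes = X2 @ Vh.conj().T @ np.diag(1.0 / s) @ W
    return eigvals, modes

# synthetic periodic "building sensor" data: the eigenvalues should sit
# on the unit circle at the oscillation's angular frequency per step
t = np.arange(200) * 0.1
X = np.vstack([np.cos(2 * np.pi * 0.2 * t), np.sin(2 * np.pi * 0.2 * t)])
eigvals, modes = dmd_modes(X, r=2)
```

For building data, eigenvalues near the unit circle at daily/weekly frequencies would flag periodic operational behavior, which is the kind of feature the disclosed analysis aims to discern.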
A new approach to hand-based authentication
NASA Astrophysics Data System (ADS)
Amayeh, G.; Bebis, G.; Erol, A.; Nicolescu, M.
2007-04-01
Hand-based authentication is a key biometric technology with a wide range of potential applications both in industry and government. Traditionally, hand-based authentication is performed by extracting information from the whole hand. To account for hand and finger motion, guidance pegs are employed to fix the position and orientation of the hand. In this paper, we consider a component-based approach to hand-based verification. Our objective is to investigate the discrimination power of different parts of the hand in order to develop a simpler, faster, and possibly more accurate and robust verification system. Specifically, we propose a new approach which decomposes the hand into different regions, corresponding to the fingers and the back of the palm, and performs verification using information from certain parts of the hand only. Our approach operates on 2D images acquired by placing the hand on a flat lighting table. Using a part-based representation of the hand allows the system to compensate for hand and finger motion without using any guidance pegs. To decompose the hand into different regions, we use a robust methodology based on morphological operators which does not require detecting any landmark points on the hand. To capture the geometry of the back of the palm and the fingers in sufficient detail, we employ high-order Zernike moments which are computed using an efficient methodology. The proposed approach has been evaluated on a database of 100 subjects with 10 images per subject, illustrating promising performance. Comparisons with related approaches using the whole hand for verification illustrate the superiority of the proposed approach. Moreover, qualitative comparisons with state-of-the-art approaches indicate that the proposed approach has comparable or better performance.
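The morphological decomposition idea can be illustrated with a simple opening-based palm/finger split on a toy silhouette. This is a generic sketch of the principle (opening with a large disk retains the thick palm; the residue contains the thin fingers), not the authors' exact operator chain, and the toy geometry is an assumption.

```python
import numpy as np
from scipy.ndimage import binary_opening

def split_palm_fingers(hand, radius=4):
    """Illustrative morphological split of a binary hand silhouette:
    opening with a disk keeps the thick palm region; the set
    difference leaves the thin protrusions (fingers)."""
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    disk = (x * x + y * y) <= radius * radius
    palm = binary_opening(hand, structure=disk)
    fingers = hand & ~palm
    return palm, fingers

# toy silhouette: a 20x20 "palm" block with a 3-pixel-wide "finger"
hand = np.zeros((40, 40), dtype=bool)
hand[15:35, 10:30] = True   # palm
hand[2:15, 18:21] = True    # finger
palm, fingers = split_palm_fingers(hand, radius=4)
```

In the paper, each separated region is then described by high-order Zernike moments; that stage is omitted here.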
Mark A. Bradford; Tara Gancos; Christopher J. Frost
2008-01-01
In terrestrial systems there is a close relationship between litter quality and the activity and abundance of decomposers. Therefore, the potential exists for aboveground, herbivore-induced changes in foliar chemistry to affect soil decomposer fauna. These herbivore-induced changes in chemistry may persist across growing seasons. While the impacts of such slow-cycle...
A test of the hierarchical model of litter decomposition.
Bradford, Mark A; Veen, G F Ciska; Bonis, Anne; Bradford, Ella M; Classen, Aimee T; Cornelissen, J Hans C; Crowther, Thomas W; De Long, Jonathan R; Freschet, Gregoire T; Kardol, Paul; Manrubia-Freixa, Marta; Maynard, Daniel S; Newman, Gregory S; Logtestijn, Richard S P; Viketoft, Maria; Wardle, David A; Wieder, William R; Wood, Stephen A; van der Putten, Wim H
2017-12-01
Our basic understanding of plant litter decomposition informs the assumptions underlying widely applied soil biogeochemical models, including those embedded in Earth system models. Confidence in projected carbon cycle-climate feedbacks therefore depends on accurate knowledge about the controls regulating the rate at which plant biomass is decomposed into products such as CO2. Here we test underlying assumptions of the dominant conceptual model of litter decomposition. The model posits that a primary control on the rate of decomposition at regional to global scales is climate (temperature and moisture), with the controlling effects of decomposers negligible at such broad spatial scales. Using a regional-scale litter decomposition experiment at six sites spanning from northern Sweden to southern France, capturing both within- and among-site variation in putative controls, we find that, contrary to predictions from the hierarchical model, decomposer (microbial) biomass strongly regulates decomposition at regional scales. Furthermore, the size of the microbial biomass dictates the absolute change in decomposition rates with changing climate variables. Our findings suggest the need for revision of the hierarchical model, with decomposers acting as both local- and broad-scale controls on litter decomposition rates, necessitating their explicit consideration in global biogeochemical models.
Kreutzweiser, David P; Good, Kevin P; Chartrand, Derek T; Scarr, Taylor A; Thompson, Dean G
2008-01-01
The systemic insecticide imidacloprid may be applied to deciduous trees for control of the Asian longhorned beetle, an invasive wood-boring insect. Senescent leaves falling from systemically treated trees contain imidacloprid concentrations that could pose a risk to natural decomposer organisms. We examined the effects of foliar imidacloprid concentrations on decomposer organisms by adding leaves from imidacloprid-treated sugar maple trees to aquatic and terrestrial microcosms under controlled laboratory conditions. Imidacloprid in maple leaves at realistic field concentrations (3-11 mg kg⁻¹) did not affect survival of aquatic leaf-shredding insects or litter-dwelling earthworms. However, adverse sublethal effects at these concentrations were detected. Feeding rates by aquatic insects and earthworms were reduced, leaf decomposition (mass loss) was decreased, measurable weight losses occurred among earthworms, and aquatic and terrestrial microbial decomposition activity was significantly inhibited. Results of this study suggest that sugar maple trees systemically treated with imidacloprid to control Asian longhorned beetles may yield senescent leaves with residue levels sufficient to reduce natural decomposition processes in aquatic and terrestrial environments through adverse effects on non-target decomposer organisms.
Screening on oil-decomposing microorganisms and application in organic waste treatment machine.
Lu, Yi-Tong; Chen, Xiao-Bin; Zhou, Pei; Li, Zhen-Hong
2005-01-01
As an oil-decomposing mixture of two bacterial strains (Bacillus sp. and Pseudomonas sp.), Y3 was isolated after 50 d of domestication under the condition that oil was used as the limiting carbon source. The decomposition rate achieved by Y3 was higher than that of either individual strain, indicating a synergistic effect of the two bacteria. Under the conditions T = 25-40 degrees C, pH = 6-8, HRT (hydraulic retention time) = 36 h and an oil concentration of 0.1%, Y3 yielded its highest decomposition rate of 95.7%. Y3 was also applied in an organic waste treatment machine, with a certain proportion of activated bacteria added to the stuffing. A series of tests of the stuffing's humidity, pH, temperature, C/N ratio and oil percentage were carried out to check the efficacy of oil decomposition. Results showed that the oil content of the stuffing with inoculum was only half that of the control. Furthermore, the bacteria also helped maintain stable operation of the machine. The bacterial mixture, as well as the machines in this study, could therefore be very useful for waste treatment.
Natural image statistics and low-complexity feature selection.
Vasconcelos, Manuela; Vasconcelos, Nuno
2009-02-01
Low-complexity feature selection is analyzed in the context of visual recognition. It is hypothesized that high-order dependences of bandpass features contain little information for discrimination of natural images. This hypothesis is characterized formally by the introduction of the concepts of conjunctive interference and decomposability order of a feature set. Necessary and sufficient conditions for the feasibility of low-complexity feature selection are then derived in terms of these concepts. It is shown that the intrinsic complexity of feature selection is determined by the decomposability order of the feature set and not its dimension. Feature selection algorithms are then derived for all levels of complexity and are shown to be approximated by existing information-theoretic methods, which they consistently outperform. The new algorithms are also used to objectively test the hypothesis of low decomposability order through comparison of classification performance. It is shown that, for image classification, the gain of modeling feature dependencies has strongly diminishing returns: best results are obtained under the assumption of decomposability order 1. This suggests a generic law for bandpass features extracted from natural images: that the effect, on the dependence of any two features, of observing any other feature is constant across image classes.
Song, Shidong; Xu, Wu; Zheng, Jianming; Luo, Langli; Engelhard, Mark H; Bowden, Mark E; Liu, Bin; Wang, Chong-Min; Zhang, Ji-Guang
2017-03-08
Instability of carbon-based oxygen electrodes and incomplete decomposition of Li2CO3 during the charge process are critical barriers for rechargeable Li-O2 batteries. Here we report the complete decomposition of Li2CO3 in Li-O2 batteries using an ultrafine iridium-decorated boron carbide (Ir/B4C) nanocomposite as a noncarbon-based oxygen electrode. A systematic investigation of charging the Li2CO3-preloaded Ir/B4C electrode in an ether-based electrolyte demonstrates that the Ir/B4C electrode can decompose Li2CO3 with an efficiency close to 100% at a voltage below 4.37 V. In contrast, bare B4C without the Ir electrocatalyst can only decompose 4.7% of the preloaded Li2CO3. Theoretical analysis indicates that the highly efficient decomposition of Li2CO3 can be attributed to the synergistic effects of Ir and B4C. Ir has a high affinity for oxygen species, which could lower the energy barrier for the electrochemical oxidation of Li2CO3. B4C exhibits much higher chemical and electrochemical stability than carbon-based electrodes and high catalytic activity for Li-O2 reactions. A Li-O2 battery using Ir/B4C as the oxygen electrode material shows much better cycling stability than those using the bare B4C oxygen electrode. Further development of these stable oxygen electrodes could accelerate practical applications of Li-O2 batteries.
NASA Astrophysics Data System (ADS)
Ha, J.; Chung, W.; Shin, S.
2015-12-01
Many waveform inversion algorithms have been proposed in order to construct subsurface velocity structures from seismic data sets. These algorithms have suffered from computational burden, local minima problems, and the lack of low-frequency components. Computational efficiency can be improved by the application of back-propagation techniques and advances in computing hardware. In addition, waveform inversion algorithms that first recover long-wavelength velocity models can avoid both the local minima problem and the effect of the lack of low-frequency components in seismic data. In this study, we propose spectrogram inversion as a technique for recovering long-wavelength velocity models. In spectrogram inversion, frequency components decomposed from spectrograms of traces, in the observed and calculated data, are utilized to generate traces with reproduced low-frequency components. Moreover, since each decomposed component can reveal different characteristics of a subsurface structure, several frequency components were utilized to analyze the velocity features in the subsurface. We performed spectrogram inversion using a modified SEG/EAGE salt A-A' line. Numerical results demonstrate that spectrogram inversion can recover the long-wavelength velocity features. However, inversion results varied according to the frequency components utilized. Based on the results of inversion using a decomposed single-frequency component, we noticed that robust inversion results are obtained when a dominant frequency component of the spectrogram is utilized. In addition, detailed information on recovered long-wavelength velocity models was obtained using a multi-frequency component combined with single-frequency components. Numerical examples indicate that various detailed analyses of long-wavelength velocity models can be carried out utilizing several frequency components.
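The per-trace decomposition into time-varying frequency components can be sketched with standard tools. The sampling rate, window parameters and two-tone test trace below are illustrative assumptions, not the study's data.

```python
import numpy as np
from scipy.signal import spectrogram

# a seismic-like trace containing a 10 Hz and a weaker 40 Hz component
fs = 500.0
t = np.arange(0, 2.0, 1.0 / fs)
trace = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)

# decompose the trace into its spectrogram (frequency x time) components
freqs, times, Sxx = spectrogram(trace, fs=fs, nperseg=256, noverlap=192)

def component(Sxx, freqs, f0):
    """Pick the single-frequency component nearest a target frequency."""
    return Sxx[np.argmin(np.abs(freqs - f0)), :]

low = component(Sxx, freqs, 10.0)   # dominant component
mid = component(Sxx, freqs, 40.0)   # weaker component
```

In spectrogram inversion, such components from observed and calculated traces are compared (singly or combined) rather than the raw waveforms, which is what allows the dominant component to drive a robust long-wavelength update.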
Impact of Resource-Based Practice Expenses on the Medicare Physician Volume
Maxwell, Stephanie; Zuckerman, Stephen
2007-01-01
In 1999, Medicare implemented a resource-based relative value unit (RVU) system for physician practice expense payments, and increased the number of services for which practice expense payments differ by site. Using 1998-2004 data, we examined RVU growth and decomposed that growth into resource-based RVUs, site of service, and service quantity and mix. We found that the number of services with site-of-service differentials doubled, and that shifts in site of service and the introduction of resource-based practice expenses (RBPE) were important sources of change in practice expense RVU volume. Service quantity and mix remained the largest source of growth in total RVU volume. PMID:18435224
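The growth decomposition rests on an accounting identity. The sketch below shows a two-factor version with hypothetical numbers; the actual study separates site-of-service shifts and resource-based rate changes more finely.

```python
def decompose_growth(q0, r0, q1, r1):
    """Two-factor decomposition of total volume V = sum(q * r) across
    service categories: a quantity/mix effect valued at base-period
    RVU rates, plus an RVU-rate effect valued at final quantities."""
    v0 = sum(q * r for q, r in zip(q0, r0))
    v1 = sum(q * r for q, r in zip(q1, r1))
    quantity_mix_effect = sum((qa - qb) * r for qa, qb, r in zip(q1, q0, r0))
    rvu_effect = sum(q * (ra - rb) for q, ra, rb in zip(q1, r1, r0))
    return v1 - v0, quantity_mix_effect, rvu_effect

# two hypothetical service categories (e.g. office vs. facility site)
total, qty, rvu = decompose_growth(q0=[100, 50], r0=[2.0, 1.5],
                                   q1=[120, 60], r1=[2.2, 1.4])
```

By construction the two effects sum exactly to the total change, which is what lets the study attribute shares of RVU volume growth to each source.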
Microbial community assembly and metabolic function during mammalian corpse decomposition
Metcalf, Jessica L; Xu, Zhenjiang Zech; Weiss, Sophie; Lax, Simon; Van Treuren, Will; Hyde, Embriette R.; Song, Se Jin; Amir, Amnon; Larsen, Peter; Sangwan, Naseer; Haarmann, Daniel; Humphrey, Greg C; Ackermann, Gail; Thompson, Luke R; Lauber, Christian; Bibat, Alexander; Nicholas, Catherine; Gebert, Matthew J; Petrosino, Joseph F; Reed, Sasha C.; Gilbert, Jack A; Lynne, Aaron M; Bucheli, Sibyl R; Carter, David O; Knight, Rob
2016-01-01
Vertebrate corpse decomposition provides an important stage in nutrient cycling in most terrestrial habitats, yet microbially mediated processes are poorly understood. Here we combine deep microbial community characterization, community-level metabolic reconstruction, and soil biogeochemical assessment to understand the principles governing microbial community assembly during decomposition of mouse and human corpses on different soil substrates. We find a suite of bacterial and fungal groups that contribute to nitrogen cycling and a reproducible network of decomposers that emerge on predictable time scales. Our results show that this decomposer community is derived primarily from bulk soil, but key decomposers are ubiquitous in low abundance. Soil type was not a dominant factor driving community development, and the process of decomposition is sufficiently reproducible to offer new opportunities for forensic investigations.
Microbial community assembly and metabolic function during mammalian corpse decomposition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Metcalf, J. L.; Xu, Z. Z.; Weiss, S.
2015-12-10
Vertebrate corpse decomposition provides an important stage in nutrient cycling in most terrestrial habitats, yet microbially mediated processes are poorly understood. Here we combine deep microbial community characterization, community-level metabolic reconstruction, and soil biogeochemical assessment to understand the principles governing microbial community assembly during decomposition of mouse and human corpses on different soil substrates. We find a suite of bacterial and fungal groups that contribute to nitrogen cycling and a reproducible network of decomposers that emerge on predictable time scales. Our results show that this decomposer community is derived primarily from bulk soil, but key decomposers are ubiquitous in low abundance. Soil type was not a dominant factor driving community development, and the process of decomposition is sufficiently reproducible to offer new opportunities for forensic investigations.
Catalytic cartridge SO3 decomposer
Galloway, T.R.
1980-11-18
A catalytic cartridge surrounding a heat pipe driven by a heat source is utilized as an SO3 decomposer for thermochemical hydrogen production. The cartridge has two embodiments, a cross-flow cartridge and an axial-flow cartridge. In the cross-flow cartridge, SO3 gas is flowed through a chamber and incident normally to a catalyst-coated tube extending through the chamber, the catalyst-coated tube surrounding the heat pipe. In the axial-flow cartridge, SO3 gas is flowed through the annular space between concentric inner and outer cylindrical walls, the inner cylindrical wall being coated by a catalyst and surrounding the heat pipe. The modular cartridge decomposer provides high thermal efficiency, high conversion efficiency, and increased safety. A fusion reactor may be used as the heat source.
Alcoa Pressure Calcination Process for Alumina
NASA Astrophysics Data System (ADS)
Sucech, S. W.; Misra, C.
A new alumina calcination process developed at Alcoa Laboratories is described. Alumina is calcined in two stages. In the first stage, alumina hydrate is heated indirectly to 500°C in a decomposer vessel. Released water is recovered as process steam at 110 psig pressure. Partial transformation of gibbsite to boehmite occurs under the hydrothermal conditions of the decomposer. The product from the decomposer, containing about 5% LOI, is then calcined by direct heating to 850°C to obtain smelting grade alumina. The final product is highly attrition resistant, has a surface area of 50-80 m2/g and an LOI of less than 1%. Accounting for the recovered steam, the effective fuel consumption for the new calcination process is only 1.6 GJ/t Al2O3.
NASA Astrophysics Data System (ADS)
Anderson, Carly; Clark, Douglas; Graves, David
2014-10-01
We present evidence for the existence of two distinct processes that contribute to the generation of reactive oxygen and nitrogen species (RONS) in liquids exposed to cold atmospheric plasma (CAP) in air. At the plasma-liquid interface, there exists a fast surface reaction zone where RONS from the gas phase interact with species in the liquid. RONS can also be produced by "slow" chemical reactions in the bulk liquid, even long after plasma exposure. To separate the effects of these processes, we used indigo dye as an indicator of ROS production, specifically the generation of hydroxyl radical. The rate of indigo decolorization while in direct contact with CAP is compared with the expected rate of hydroxyl radical generation at the liquid surface. When added to aqueous solutions after CAP exposure, indigo dye reacts on a time scale consistent with the production of peroxynitrous acid, ONOOH, which is known to decompose to hydroxyl radical below a pH of 6.8. In this study, the CAP used was an air corona discharge plasma run in positive streamer mode.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Yu-Xiao; Zhao, Lin; Gu, Gen-Da
2016-06-01
Here, we report a reproducible approach to preparing high-quality overdoped Bi2Sr2CaCu2O8+δ (Bi2212) single crystals by annealing Bi2212 crystals in high-pressure oxygen followed by fast quenching. High-quality overdoped and heavily overdoped Bi2212 single crystals are obtained by controlling the annealing oxygen pressure. Furthermore, we find that, beyond a limit of oxygen pressure that achieves the most heavily overdoped Bi2212 with a Tc of ~63 K, the annealed Bi2212 begins to decompose. This accounts for the existence of the hole-doping limit, and thus the Tc limit, in the heavily overdoped region of Bi2212 prepared by the oxygen annealing process. Our results provide a reliable way of preparing high-quality overdoped and heavily overdoped Bi2212 crystals that are important for studies of the physical properties, electronic structure and superconductivity mechanism of the cuprate superconductors.
Cell Structure Evolution of Aluminum Foams Under Reduced Pressure Foaming
NASA Astrophysics Data System (ADS)
Cao, Zhuokun; Yu, Yang; Li, Min; Luo, Hongjie
2016-09-01
Ti-H particles are used to increase the gas content in aluminum melts for reduced pressure foaming (RPF). This paper reports on the RPF process of an Al-Ca alloy with added TiH2, but in smaller amounts than in the traditional process. TiH2 is completely decomposed by stirring the melt, after which reduced pressure is applied. TiH2 is not added as the blowing agent; instead, it is added to increase the H2 concentration in the liquid Al-Ca melt. It is shown that the pressure change induces further release of hydrogen from the Ti phase. It is also found that foam collapse is caused by rapid bubble coalescence during the pressure-reduction procedure, and that the instability of the liquid films is related to a significant increase in the critical thickness for film rupture. Combining lower amounts of TiH2 with reduced pressure is another way of increasing the hydrogen content in liquid aluminum. A key benefit of this process is that it provides time to transfer the molten metal to a mold and then apply the reduced pressure to produce net-shape foam parts.
High performance computing environment for multidimensional image analysis
Rao, A Ravishankar; Cecchi, Guillermo A; Magnasco, Marcelo
2007-01-01
Background The processing of images acquired through microscopy is a challenging task due to the large size of datasets (several gigabytes) and the fast turnaround time required. If the throughput of the image processing stage is significantly increased, it can have a major impact in microscopy applications. Results We present a high performance computing (HPC) solution to this problem. This involves decomposing the spatial 3D image into segments that are assigned to unique processors, and matched to the 3D torus architecture of the IBM Blue Gene/L machine. Communication between segments is restricted to the nearest neighbors. When running on a 2 GHz Intel CPU, the task of 3D median filtering on a typical 256 megabyte dataset takes two and a half hours, whereas by using 1024 nodes of Blue Gene, this task can be performed in 18.8 seconds, a 478× speedup. Conclusion Our parallel solution dramatically improves the performance of image processing, feature extraction and 3D reconstruction tasks. This increased throughput permits biologists to conduct unprecedented large scale experiments with massive datasets. PMID:17634099
High performance computing environment for multidimensional image analysis.
Rao, A Ravishankar; Cecchi, Guillermo A; Magnasco, Marcelo
2007-07-10
The processing of images acquired through microscopy is a challenging task due to the large size of datasets (several gigabytes) and the fast turnaround time required. If the throughput of the image processing stage is significantly increased, it can have a major impact in microscopy applications. We present a high performance computing (HPC) solution to this problem. This involves decomposing the spatial 3D image into segments that are assigned to unique processors, and matched to the 3D torus architecture of the IBM Blue Gene/L machine. Communication between segments is restricted to the nearest neighbors. When running on a 2 GHz Intel CPU, the task of 3D median filtering on a typical 256 megabyte dataset takes two and a half hours, whereas by using 1024 nodes of Blue Gene, this task can be performed in 18.8 seconds, a 478x speedup. Our parallel solution dramatically improves the performance of image processing, feature extraction and 3D reconstruction tasks. This increased throughput permits biologists to conduct unprecedented large scale experiments with massive datasets.
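The slab decomposition with nearest-neighbor halo exchange can be emulated serially to show why the stitched result matches a monolithic filter. This sketch stands in for the actual MPI/Blue Gene implementation; the segment count, volume size, and the one-axis (slab) split are simplifying assumptions.

```python
import numpy as np
from scipy.ndimage import median_filter

def decompose_and_filter(volume, nseg, size=3):
    """Serial emulation of the HPC strategy: split a 3D volume into
    slabs along z, give each slab a halo of voxels from its nearest
    neighbors, filter each slab independently, and stitch the results."""
    halo = size // 2
    bounds = np.linspace(0, volume.shape[0], nseg + 1, dtype=int)
    out = np.empty_like(volume)
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        a, b = max(lo - halo, 0), min(hi + halo, volume.shape[0])
        filt = median_filter(volume[a:b], size=size)   # local slab + halo
        out[lo:hi] = filt[lo - a : (lo - a) + (hi - lo)]  # drop halo rows
    return out

vol = np.random.default_rng(0).random((32, 16, 16))
stitched = decompose_and_filter(vol, nseg=4)
reference = median_filter(vol, size=3)  # monolithic filtering, for comparison
```

Because the halo supplies every real neighbor a 3x3x3 median needs, each processor's slab can be filtered independently, with only nearest-neighbor communication, and the stitched output equals the monolithic result exactly.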
Singular-Arc Time-Optimal Trajectory of Aircraft in Two-Dimensional Wind Field
NASA Technical Reports Server (NTRS)
Nguyen, Nhan
2006-01-01
This paper presents a study of a minimum time-to-climb trajectory analysis for aircraft flying in a two-dimensional, altitude-dependent wind field. The time-optimal control problem possesses a singular control structure when the lift coefficient is taken as a control variable. A singular arc analysis is performed to obtain an optimal control solution on the singular arc. Using a time-scale separation with the flight path angle treated as a fast state, the dimensionality of the optimal control solution is reduced by eliminating the lift coefficient control. A further singular arc analysis is used to decompose the original optimal control solution into the flight path angle solution and a trajectory solution as a function of the airspeed and altitude. The optimal control solutions for the initial and final climb segments are computed using a shooting method with known starting values on the singular arc. The numerical results of the shooting method show that the optimal flight path angles on the initial and final climb segments are constant. The analytical approach provides a rapid means for analyzing a time-optimal trajectory for aircraft performance.
Blacker, Teddy D.
1994-01-01
An automatic quadrilateral surface discretization method and apparatus is provided for automatically discretizing a geometric region without decomposing the region. The automated quadrilateral surface discretization method and apparatus automatically generates a mesh of all quadrilateral elements, which is particularly useful in finite element analysis. The generated mesh of all quadrilateral elements is boundary sensitive, orientation insensitive and has few irregular nodes on the boundary. A permanent boundary of the geometric region is input and rows are iteratively layered toward the interior of the geometric region. Alternatively, an exterior permanent boundary and an interior permanent boundary for a geometric region may be input, and the rows are iteratively layered inward from the exterior boundary in a first, counterclockwise direction while the rows are iteratively layered from the interior permanent boundary toward the exterior of the region in a second, clockwise direction. As a result, a high quality mesh for an arbitrary geometry may be generated with a technique that is robust and fast for complex geometric regions and extreme mesh gradations.
Ayala, Raul E.
1993-01-01
This invention relates to additives to mixed-metal oxides that act simultaneously as sorbents and catalysts in cleanup systems for hot coal gases. Such additives of this type, generally, act as a sorbent to remove sulfur from the coal gases while substantially simultaneously, catalytically decomposing appreciable amounts of ammonia from the coal gases.
High-quality compressive ghost imaging
NASA Astrophysics Data System (ADS)
Huang, Heyan; Zhou, Cheng; Tian, Tian; Liu, Dongqi; Song, Lijun
2018-04-01
We propose a high-quality compressive ghost imaging method based on projected Landweber regularization and a guided filter, which effectively reduces undersampling noise and improves resolution. In our scheme, the original object is reconstructed by decomposing the compressive reconstruction process into regularization and denoising steps instead of solving a single minimization problem. The simulation and experimental results show that our method can obtain high ghost imaging quality in terms of PSNR and visual observation.
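A minimal projected Landweber iteration is sketched below. The plug-in `denoise` hook stands in for the paper's guided-filter step, and the random Gaussian measurement matrix and problem sizes are assumptions for the demo, not the authors' setup.

```python
import numpy as np

def projected_landweber(A, y, n_iter=500, step=None, denoise=None):
    """Projected Landweber reconstruction sketch: gradient step on
    ||Ax - y||^2, projection onto the nonnegative orthant, and an
    optional plug-in denoiser between iterations."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2  # step below 1/sigma_max^2 for stability
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x + step * A.T @ (y - A @ x)   # Landweber (gradient) step
        x = np.clip(x, 0.0, None)          # projection: image intensities >= 0
        if denoise is not None:
            x = denoise(x)                 # e.g. a guided filter in the paper
    return x

# compressive demo: 60 random measurements of a 100-pixel nonnegative image
rng = np.random.default_rng(1)
A = rng.standard_normal((60, 100)) / np.sqrt(60)
x_true = np.clip(rng.standard_normal(100), 0.0, None)
y = A @ x_true
x_rec = projected_landweber(A, y)
```

Interleaving a projection (and denoiser) with the gradient step is what the abstract means by splitting reconstruction into regularization and denoising stages rather than solving one minimization problem.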
Research on Green Manufacturing Innovation Based on Resource Environment Protection
NASA Astrophysics Data System (ADS)
Jie, Xu
2017-11-01
Green manufacturing is a trend of manufacturing industry in the future, and is of great significance to resource protection and environmental protection. This paper first studies the green manufacturing innovation system, and then decomposes the green manufacturing innovation dimensions, and constructs the green manufacturing innovation dimension space. Finally, from the view of resource protection and environmental protection, this paper explores the path of green manufacturing innovation.
Morphological decomposition of 2-D binary shapes into convex polygons: a heuristic algorithm.
Xu, J
2001-01-01
In many morphological shape decomposition algorithms, either a shape can only be decomposed into shape components of extremely simple forms or a time consuming search process is employed to determine a decomposition. In this paper, we present a morphological shape decomposition algorithm that decomposes a two-dimensional (2-D) binary shape into a collection of convex polygonal components. A single convex polygonal approximation for a given image is first identified. This first component is determined incrementally by selecting a sequence of basic shape primitives. These shape primitives are chosen based on shape information extracted from the given shape at different scale levels. Additional shape components are identified recursively from the difference image between the given image and the first component. Simple operations are used to repair certain concavities caused by the set difference operation. The resulting hierarchical structure provides descriptions for the given shape at different detail levels. The experiments show that the decomposition results produced by the algorithm seem to be in good agreement with the natural structures of the given shapes. The computational cost of the algorithm is significantly lower than that of an earlier search-based convex decomposition algorithm. Compared to nonconvex decomposition algorithms, our algorithm allows accurate approximations for the given shapes at low coding costs.
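The peel-and-recurse structure described above (fit one simple component, take the set difference, recurse on the residue) can be illustrated with a deliberately simpler primitive. The sketch below greedily decomposes a binary image into axis-aligned squares rather than convex polygons; the function name and the greedy placement rule are mine, not the paper's algorithm.

```python
import numpy as np
from scipy.ndimage import binary_erosion

def decompose_into_squares(shape_img):
    """Greedily peel a binary shape into square components.

    Illustrates the generic morphological idea: extract a simple
    primitive found by erosion, then recurse on the set-difference
    residue until nothing is left.
    """
    components = []
    residue = shape_img.astype(bool).copy()
    while residue.any():
        # Find the largest odd-sized square structuring element that fits.
        k = 1
        while binary_erosion(residue, np.ones((k + 2, k + 2))).any():
            k += 2                      # odd sizes keep a central pixel
        eroded = binary_erosion(residue, np.ones((k, k))) if k > 1 else residue
        # Place one k x k primitive at an arbitrary surviving centre.
        r, c = np.argwhere(eroded)[0]
        comp = np.zeros_like(residue)
        comp[r - k // 2 : r + k // 2 + 1, c - k // 2 : c + k // 2 + 1] = True
        components.append(comp)
        residue &= ~comp                # recurse on the difference image
    return components
```

Because every extracted component is a subset of the current residue, the union of the components always reconstructs the input exactly, mirroring the hierarchical, difference-image-driven recursion of the paper.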
NASA Astrophysics Data System (ADS)
Naya, Tomoki; Kohga, Makoto
2015-04-01
Ammonium nitrate (AN) has attracted much attention due to its clean burning nature as an oxidizer. However, an AN-based composite propellant has the disadvantages of low burning rate and poor ignitability. In this study, we added nitramine of cyclotrimethylene trinitramine (RDX) or cyclotetramethylene tetranitramine (HMX) as a high-energy material to AN propellants to overcome these disadvantages. The thermal decomposition and burning rate characteristics of the prepared propellants were examined as the ratio of AN and nitramine was varied. In the thermal decomposition process, AN/RDX propellants showed unique mass loss peaks in the lower temperature range that were not observed for AN or RDX propellants alone. AN and RDX decomposed continuously as an almost single oxidizer in the AN/RDX propellant. In contrast, AN/HMX propellants exhibited thermal decomposition characteristics similar to those of AN and HMX, which decomposed almost separately in the thermal decomposition of the AN/HMX propellant. The ignitability was improved and the burning rate increased by the addition of nitramine for both AN/RDX and AN/HMX propellants. The increased burning rates of AN/RDX propellants were greater than those of AN/HMX. The difference in the thermal decomposition and burning characteristics was caused by the interaction between AN and RDX.
Ahnn, Jong Hoon; Potkonjak, Miodrag
2013-10-01
Although mobile health monitoring, where mobile sensors continuously gather, process, and update sensor readings (e.g. vital signs) from a patient's sensors, is emerging, little effort has been devoted to energy-efficient management of sensor information gathering and processing. Mobile health monitoring with a focus on energy consumption may instead be holistically analyzed and systematically designed as a global solution to optimization subproblems. This paper presents an attempt to decompose the very complex mobile health monitoring system so that each layer in the system corresponds to a decomposed subproblem, with the interfaces between them quantified as functions of the optimization variables in order to orchestrate the subproblems. We propose a distributed and energy-saving mobile health platform, called mHealthMon, where mobile users publish/access sensor data via a cloud computing-based distributed P2P overlay network. The key objective is to satisfy the mobile health monitoring application's quality of service requirements by modeling each subsystem: mobile clients with medical sensors, the wireless network medium, and distributed cloud services. Through simulations based on experimental data, we show that the proposed system can be up to 10.1 times more energy-efficient and 20.2 times faster than a standalone mobile health monitoring application in various mobile health monitoring scenarios applying a realistic mobility model.
NASA Astrophysics Data System (ADS)
Baysal, Gulcin; Kalav, Berdan; Karagüzel Kayaoğlu, Burçak
2017-10-01
The current study aims to determine the effect of pigment concentration on fastness and colour values of thermal and ultraviolet (UV) curable pigment printing on synthetic leather. For this purpose, thermally curable solvent-based and UV curable water-based formulations were prepared with different pigment concentrations (3, 5 and 7%) separately and applied by the screen printing technique using a screen printing machine. Samples printed with solvent-based formulations were thermally cured, and samples printed with water-based formulations were cured using a UV curing machine equipped with gallium and mercury (Ga/Hg) lamps at room temperature. The crock fastness values of samples printed with solvent-based formulations showed that an increase in pigment concentration had no effect on either dry or wet crock fastness. On the other hand, in samples printed with UV curable water-based formulations, dry crock fastness was improved and evaluated as very good for all pigment concentrations. However, increasing the pigment concentration adversely affected the wet crock fastness values and lower values were observed. As the energy level increased for each irradiation source, the fastness values improved. In comparison with samples printed with solvent-based formulations, samples printed with UV curable water-based formulations yielded higher K/S values at all pigment concentrations. The results suggested that higher K/S values can be obtained in samples printed with UV curable water-based formulations at a lower pigment concentration compared to samples printed with solvent-based formulations.
Forsyth, Ann; Wall, Melanie; Larson, Nicole; Story, Mary; Neumark-Sztainer, Dianne
2012-01-01
This population-based study examined whether residential or school neighborhood access to fast food restaurants is related to adolescents’ frequency of eating fast food. A classroom-based survey of racially/ethnically diverse adolescents (n=2,724) in 20 secondary schools in Minneapolis/St. Paul, Minnesota was used to assess eating frequency at five types of fast food restaurants. Black, Hispanic, and Native American adolescents lived near more fast food restaurants than white and Asian adolescents and also ate at fast food restaurants more often. After controlling for individual-level socio-demographics, adolescent males living near high numbers of fast food restaurants ate more frequently from these venues compared to their peers. PMID:23064515
Thermal decomposition of dolomite under CO2: insights from TGA and in situ XRD analysis.
Valverde, Jose Manuel; Perejon, Antonio; Medina, Santiago; Perez-Maqueda, Luis A
2015-11-28
Thermal decomposition of dolomite in the presence of CO2 in a calcination environment is investigated by means of in situ X-ray diffraction (XRD) and thermogravimetric analysis (TGA). The in situ XRD results suggest that dolomite decomposes directly at a temperature around 700 °C into MgO and CaO. Immediate carbonation of nascent CaO crystals leads to the formation of calcite as an intermediate product of decomposition. Subsequently, decarbonation of this poorly crystalline calcite occurs when the reaction is thermodynamically favorable and sufficiently fast, at a temperature depending on the CO2 partial pressure in the calcination atmosphere. Decarbonation of this dolomitic calcite occurs at a lower temperature than limestone decarbonation due to the relatively low crystallinity of the former. Full decomposition of dolomite also leads to a relatively low-crystallinity CaO, which exhibits a high reactivity compared to limestone-derived CaO. Under CO2 capture conditions in the Calcium-Looping (CaL) process, MgO grains remain inert yet favor the carbonation reactivity of dolomitic CaO, especially in the solid-state diffusion-controlled phase. The fundamental mechanism that drives the crystallographic transformation of dolomite in the presence of CO2 is thus responsible for its fast calcination kinetics and the high carbonation reactivity of dolomitic CaO, which makes natural dolomite a potentially advantageous alternative to limestone for CO2 capture in the CaL technology as well as for in situ SO2 removal in oxy-combustion fluidized bed reactors.
The plant cell wall--decomposing machinery underlies the functional diversity of forest fungi
Daniel C. Eastwood; Dimitrios Floudas; Manfred Binder; Andrzej Majcherczyk; Patrick Schneider; Andrea Aerts; Fred O. Asiegbu; Scott E. Baker; Kerrie Barry; Mika Bendiksby; Melanie Blumentritt; Pedro M. Coutinho; Dan Cullen; Ronald P. de Vries; Allen Gathman; Barry Goodell; Bernard Henrissat; Katarina Ihrmark; Havard Kauserud; Annegret Kohler; Kurt LaButti; Alla Lapidus; Jose L. Lavin; Yong-Hwan Lee; Erika Lindquist; Walt Lilly; Susan Lucas; Emmanuelle Morin; Claude Murat; Jose A. Oguiza; Jongsun Park; Antonio G. Pisabarro; Robert Riley; Anna Rosling; Asaf Salamov; Olaf Schmidt; Jeremy Schmutz; Inger Skrede; Jan Stenlid; Ad Wiebenga; Xinfeng Xie; Ursula Kues; David S. Hibbett; Dirk Hoffmeister; Nils Hogberg; Francis Martin; Igor V. Grigoriev; Sarah C. Watkinson
2011-01-01
Brown rot decay removes cellulose and hemicelluloses from wood, with residual lignin contributing up to 30% of forest soil carbon, and is derived from an ancestral white rot saprotrophy in which both lignin and cellulose are decomposed. Comparative and functional genomics of the "dry rot" fungus Serpula lacrymans, derived from forest ancestors, demonstrated that the evolution...
Installation Restoration of Frankford Arsenal, Pennsylvania, Concept Plan
1977-09-01
combined with 650 ml of tetrazine slurry. To this mix was added other components (e.g., antimony sulfide, powdered aluminum, PETN, barium nitrate and a...such materials as barium nitrate, magnesium and aluminum powders, potassium perchlorate, iron oxide, red phosphorus, strontium peroxide, strontium...soluble in acetone and methyl acetate. Chemical Activity: Decomposed slowly by boiling 2.5% aqueous caustic; decomposed slowly by sodium sulfide
Tiunov, Alexei V; Semenina, Eugenia E; Aleksandrova, Alina V; Tsurikov, Sergey M; Anichkin, Alexander E; Novozhilov, Yuri K
2015-08-30
Data on the bulk stable isotope composition of soil bacteria and bacterivorous soil animals are required to estimate the nutrient and energy fluxes via bacterial channels within detrital food webs. We measured the isotopic composition of slime molds (Myxogastria, Amoebozoa), a group of soil protozoans forming macroscopic spore-bearing fruiting bodies. An analysis of largely bacterivorous slime molds can provide information on the bulk stable isotope composition of soil bacteria. Fruiting bodies of slime molds were collected in a monsoon tropical forest of Cat Tien National Park, Vietnam, and analyzed by continuous-flow isotope ratio mass spectrometry. Prior to stable isotope analysis, carbonates were removed from a subset of samples by acidification. To estimate the trophic position of slime molds, their δ(13)C and δ(15)N values were compared with those of plant debris, soil, microbial destructors (litter-decomposing, humus-decomposing, and ectomycorrhizal fungi) and members of higher trophic levels (oribatid mites, termites, predatory macroinvertebrates). Eight species of slime molds represented by at least three independent samples were 3-6‰ enriched in (13)C and (15)N relative to plant litter. A small but significant difference in the δ(13)C and δ(15)N values suggests that different species of myxomycetes can differ in feeding behavior. The slime molds were enriched in (15)N compared with litter-decomposing fungi, and depleted in (15)N compared with mycorrhizal or humus-decomposing fungi. Slime mold sporocarps and plasmodia largely overlapped with oribatid mites in the isotopic bi-plot, but were depleted in (15)N compared with predatory invertebrates and humiphagous termites. A comparison with reference groups of soil organisms suggests strong trophic links of slime molds to saprotrophic microorganisms which decompose plant litter, but not to humus-decomposing microorganisms or to mycorrhizal fungi.
Under the assumption that slime molds are primarily feeding on bacteria, the isotopic similarity of slime molds and mycophagous soil animals indicates that saprotrophic soil bacteria and fungi are similar in bulk isotopic composition. Copyright © 2015 John Wiley & Sons, Ltd.
Wavelet-based group and phase velocity measurements: Method
NASA Astrophysics Data System (ADS)
Yang, H. Y.; Wang, W. W.; Hung, S. H.
2016-12-01
Measurements of group and phase velocities of surface waves are often carried out by applying a series of narrow bandpass or stationary Gaussian filters localized at specific frequencies to wave packets and estimating the corresponding arrival times at the peak envelopes and phases of the Fourier spectra. However, it is known that seismic waves are inherently nonstationary and not well represented by a sum of sinusoids. Alternatively, a continuous wavelet transform (CWT), which decomposes a time series into a family of wavelets, translated and scaled copies of a generally fast oscillating and decaying function known as the mother wavelet, is capable of retaining localization in both the time and frequency domains and is well-suited for the time-frequency analysis of nonstationary signals. Here we develop a wavelet-based method to measure frequency-dependent group and phase velocities, an essential dataset used in crust and mantle tomography. For a given time series, we employ the complex Morlet wavelet to obtain the scalogram of amplitude modulus |Wg| and phase φ on the time-frequency plane. The instantaneous frequency (IF) is then calculated by taking the derivative of phase with respect to time, i.e., (1/2π)dφ(f, t)/dt. Time windows comprising strong energy arrivals to be measured can be identified by those IFs close to the frequencies with the maximum modulus and varying smoothly and monotonically with time. The respective IFs in each selected time window are further interpolated to yield a smooth branch of ridge points or representative IFs at which the arrival time, tridge(f), and phase, φridge(f), after unwrapping and correcting cycle skipping based on a priori knowledge of the possible velocity range, are determined for group and phase velocity estimation. We will demonstrate our measurement method using both ambient noise cross correlation functions and multi-mode surface waves from earthquakes.
The obtained dispersion curves will be compared with those by a conventional narrow bandpass method.
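A single-scale toy of the measurement chain described above: convolve the trace with a complex Morlet wavelet, unwrap the phase, take IF = (1/2π)dφ/dt, and read values at the envelope peak (the ridge). The wavelet parameterization and the test signal are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def morlet(t, f0, sigma):
    """Complex Morlet wavelet centred on frequency f0 (minimal form)."""
    return np.exp(2j * np.pi * f0 * t) * np.exp(-t**2 / (2 * sigma**2))

def instantaneous_frequency(signal, fs, f0, sigma=0.2):
    """Wavelet coefficients at one centre frequency and IF = (1/2pi) dphi/dt."""
    dt = 1.0 / fs
    t = np.arange(-4 * sigma, 4 * sigma, dt)
    w = np.convolve(signal, morlet(t, f0, sigma), mode="same")
    phase = np.unwrap(np.angle(w))
    inst_f = np.gradient(phase, dt) / (2 * np.pi)
    return np.abs(w), inst_f

fs = 500.0
t = np.arange(0, 2, 1 / fs)
sig = np.sin(2 * np.pi * 25.0 * t)     # stationary 25 Hz test tone
amp, inst_f = instantaneous_frequency(sig, fs, f0=25.0)
ridge = np.argmax(amp)                 # sample at the peak envelope (group arrival)
```

For a dispersive record one would repeat this over many centre frequencies, interpolate the resulting ridge points, and convert ridge arrival times and unwrapped phases into group and phase velocities.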
FAST: FAST Analysis of Sequences Toolbox
Lawrence, Travis J.; Kauffman, Kyle T.; Amrine, Katherine C. H.; Carper, Dana L.; Lee, Raymond S.; Becich, Peter J.; Canales, Claudia J.; Ardell, David H.
2015-01-01
FAST (FAST Analysis of Sequences Toolbox) provides simple, powerful open source command-line tools to filter, transform, annotate and analyze biological sequence data. Modeled after the GNU (GNU's Not Unix) Textutils such as grep, cut, and tr, FAST tools such as fasgrep, fascut, and fastr make it easy to rapidly prototype expressive bioinformatic workflows in a compact and generic command vocabulary. Compact combinatorial encoding of data workflows with FAST commands can simplify the documentation and reproducibility of bioinformatic protocols, supporting better transparency in biological data science. Interface self-consistency and conformity with conventions of GNU, Matlab, Perl, BioPerl, R, and GenBank help make FAST easy and rewarding to learn. FAST automates numerical, taxonomic, and text-based sorting, selection and transformation of sequence records and alignment sites based on content, index ranges, descriptive tags, annotated features, and in-line calculated analytics, including composition and codon usage. Automated content- and feature-based extraction of sites and support for molecular population genetic statistics make FAST useful for molecular evolutionary analysis. FAST is portable, easy to install and secure thanks to the relative maturity of its Perl and BioPerl foundations, with stable releases posted to CPAN. Development as well as a publicly accessible Cookbook and Wiki are available on the FAST GitHub repository at https://github.com/tlawrence3/FAST. The default data exchange format in FAST is Multi-FastA (specifically, a restriction of BioPerl FastA format). Sanger and Illumina 1.8+ FastQ formatted files are also supported. FAST makes it easier for non-programmer biologists to interactively investigate and control biological data at the speed of thought. PMID:26042145
Attitude control of the space construction base: A modular approach
NASA Technical Reports Server (NTRS)
Oconnor, D. A.
1982-01-01
A planar model of a space base and one module is considered. For this simplified system, a feedback controller which is compatible with the modular construction method is described. The system dynamics are decomposed into two parts corresponding to base and module. The information structure of the problem is non-classical in that not all system information is supplied to each controller. The base controller is designed to accommodate structural changes that occur as the module is added, and the module controller is designed to regulate its own states and follow commands from the base. Overall stability of the system is checked by Liapunov analysis and controller effectiveness is verified by computer simulation.
A Randomized Field Trial of the Fast ForWord Language Computer-Based Training Program
ERIC Educational Resources Information Center
Borman, Geoffrey D.; Benson, James G.; Overman, Laura
2009-01-01
This article describes an independent assessment of the Fast ForWord Language computer-based training program developed by Scientific Learning Corporation. Previous laboratory research involving children with language-based learning impairments showed strong effects on their abilities to recognize brief and fast sequences of nonspeech and speech…
2016-09-01
par. 4) Based on a RED projected size of 22.16 m, a sample calculation for the unadjusted single shot probability of kill for HELLFIRE missiles is...framework based on intelligent objects (SIMIO) environment to model a fast attack craft/fast inshore attack craft anti-surface warfare expanded kill chain...concept of operation efficiency. Based on the operational environment, low cost and less capable unmanned aircraft provide an alternative to the
NASA Astrophysics Data System (ADS)
Mazzoleni, Stefano; Bonanomi, Giuliano; Incerti, Guido; El-Gawad, Ahmed M. Abd; Sarker, Tushar Chandra; Cesarano, Gaspare; Saulino, Luigi; Saracino, Antonio; Castro Rego, Francisco
2017-04-01
Litter burning and biological decomposition are oxidative processes co-occurring in many terrestrial ecosystems, producing organic matter with different chemical properties and differently affecting plant growth and soil microbial activity. Here, we tested the chemical convergence hypothesis (i.e. materials with different initial chemistry tend to converge towards a common profile, with similar biological effects, as the oxidative process advances) for burning and decomposition. We compared the molecular composition of 63 organic materials - 7 litter types either fresh, decomposed for 30, 90, 180 days, or heated at 100, 200, 300, 400, 500 °C - as assessed by 13C NMR. We used litter water extracts (5% dw) as treatments in bioassays on plant (Lepidium sativum) and fungal (Aspergillus niger) growth, and a washed quartz sand amended with litter materials (0.5 % dw) to assess heterotrophic respiration by CO2 flux chamber. We observed different molecular variations for materials either burning (i.e. a sharp increase of aromatic C and a decrease of most other fractions above 200 °C) or decomposing (i.e. early increase of alkyl, methoxyl and N-alkyl C and decrease of O-alkyl and di-O-alkyl C fractions). Soil respiration and fungal growth progressively decreased with litter age and temperature. Plant growth underwent an inhibitory effect by untreated litter, more and less rapidly released over decomposing and burning materials, respectively. Correlation analysis between NMR and bioassay data showed that opposite responses for soil respiration and fungi, compared to plants, are related to essentially the same C molecular types. Our findings suggest a functional convergence of decomposed and burnt organic substrates, emerging from the balance between the bioavailability of labile C sources and the presence of recalcitrant and pyrogenic compounds, oppositely affecting different trophic levels.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tsirel'nikov, V.I.; Komissarova, L.N.; Spitsyn, V.I.
1962-09-01
The decomposition coefficients of the chlorides, bromides, and iodides of Zr and Hf were determined as a function of the temperature of a hot surface. The tetrahalides were carefully purified in an inert atmosphere of argon. The halide compound in a quartz ampoule was heated by a removable heater. The vapors passed through a capillary opening and struck a molybdenum foil 0.5 mm thick and 12 to 15 mm wide. The Mo foil was heated electrically to control the surface temperature, which was measured by an optical pyrometer. The tetrahalide decomposed according to the following reaction: Me(Hal)4 yields Me + 2(Hal)2. The lower halides dissociated completely to metal and free halides, since the temperature was >600 deg C. The Mo backing was dissolved in nitric acid, and the unsupported metal deposit of Zr or Hf was weighed. The decomposition coefficient was calculated from the weight of metal evaporated. ZrI4 decomposed completely (100%) at 1500 deg C, while only 96% of the HfI4 was decomposed at this temperature. The ZrBr4 and HfBr4 were decomposed by 68 and 61%, respectively. The ZrCl4 and HfCl4 were stable at 1500 deg C (5% of the ZrCl4 was decomposed at 1500 deg C). In all cases, the hafnium halide was more stable than the zirconium halide, especially in the case of the iodides. The decomposition was directly proportional to the temperature of the molybdenum target. (TTT)
Microbial Decomposers Not Constrained by Climate History Along a Mediterranean Climate Gradient
NASA Astrophysics Data System (ADS)
Baker, N. R.; Khalili, B.; Martiny, J. B. H.; Allison, S. D.
2017-12-01
The return of organic carbon to the atmosphere through terrestrial decomposition is mediated through the breakdown of complex organic polymers by extracellular enzymes produced by microbial decomposer communities. Determining if and how these decomposer communities are constrained in their ability to degrade plant litter is necessary for predicting how carbon cycling will be affected by future climate change. To address this question, we deployed fine-pore nylon mesh "microbial cage" litterbags containing grassland litter with and without local inoculum across five sites in southern California, spanning a gradient of 10.3-22.8° C in mean annual temperature and 100-400+ mm mean annual precipitation. Litterbags were deployed in October 2014 and collected four times over the course of 14 months. Recovered litter was assayed for mass loss, litter chemistry, microbial biomass, extracellular enzymes (Vmax and Km), and enzyme temperature sensitivities. We hypothesized that grassland litter would decompose most rapidly in the grassland site, and that access to local microbial communities would enhance litter decomposition rates and microbial activity in the other sites along the gradient. We determined that temperature and precipitation likely interact to limit microbial decomposition in the extreme sites along our gradient. Despite their unique climate history, grassland microbes were not restricted in their ability to decompose litter under different climate conditions. Although we observed a strong correlation between bacterial biomass and mass loss across the gradient, litter that was inoculated with local microbial communities lost less mass despite having greater bacterial biomass and potentially accumulating more microbial residues. Our results suggest that microbial community composition may not constrain C-cycling rates under climate change in our system. 
However, there may be community constraints on decomposition if climate change alters litter chemistry, a mechanism only indirectly addressed by our design.
Multichannel analysis of surface waves
Park, C.B.; Miller, R.D.; Xia, J.
1999-01-01
The frequency-dependent properties of Rayleigh-type surface waves can be utilized for imaging and characterizing the shallow subsurface. Most surface-wave analysis relies on the accurate calculation of phase velocities for the horizontally traveling fundamental-mode Rayleigh wave acquired by stepping out a pair of receivers at intervals based on calculated ground roll wavelengths. Interference by coherent source-generated noise inhibits the reliability of shear-wave velocities determined through inversion of the whole wave field. Among these nonplanar, nonfundamental-mode Rayleigh waves (noise) are body waves, scattered and nonsource-generated surface waves, and higher-mode surface waves. The degree to which each of these types of noise contaminates the dispersion curve and, ultimately, the inverted shear-wave velocity profile is dependent on frequency as well as distance from the source. Multichannel recording permits effective identification and isolation of noise according to distinctive trace-to-trace coherency in arrival time and amplitude. An added advantage is the speed and redundancy of the measurement process. Decomposition of a multichannel record into a time variable-frequency format, similar to an uncorrelated Vibroseis record, permits analysis and display of each frequency component in a unique and continuous format. Coherent noise contamination can then be examined and its effects appraised in both frequency and offset space. Separation of frequency components permits real-time maximization of the S/N ratio during acquisition and subsequent processing steps. Linear separation of each ground roll frequency component allows calculation of phase velocities by simply measuring the linear slope of each frequency component. Breaks in coherent surface-wave arrivals, observable on the decomposed record, can be compensated for during acquisition and processing. 
Multichannel recording permits single-measurement surveying of a broad depth range, high levels of redundancy with a single field configuration, and the ability to adjust the offset, effectively reducing random or nonlinear noise introduced during recording. A multichannel shot gather decomposed into a swept-frequency record allows the fast generation of an accurate dispersion curve. The accuracy of dispersion curves determined using this method is proven through field comparisons of the inverted shear-wave velocity (??(s)) profile with a downhole ??(s) profile.Multichannel recording is an efficient method of acquiring ground roll. By displaying the obtained information in a swept-frequency format, different frequency components of Rayleigh waves can be identified by distinctive and simple coherency. In turn, a seismic surface-wave method is derived that provides a useful noninvasive tool, where information about elastic properties of near-surface materials can be effectively obtained.
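A two-receiver toy of the phase-slope idea above: at each frequency, the phase difference accumulated over a known offset gives the phase velocity directly, c(f) = 2πf·dx/Δφ. The function below is an illustrative sketch with my own names, valid only when dx is shorter than half a wavelength so the phase difference does not wrap.

```python
import numpy as np

def phase_velocity(trace1, trace2, dx, fs, f_target):
    """Estimate phase velocity at f_target from two traces dx metres apart,
    via the cross-spectrum phase difference: c(f) = 2*pi*f*dx / delta_phi.
    """
    n = trace1.size
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    cross = np.fft.rfft(trace1) * np.conj(np.fft.rfft(trace2))
    k = np.argmin(np.abs(freqs - f_target))
    dphi = np.angle(cross[k])          # phase lead of trace1 over trace2 at this bin
    return 2 * np.pi * freqs[k] * dx / dphi

# Synthetic: a 10 Hz wave travelling at 200 m/s recorded 5 m apart.
fs, f, c, dx = 1000.0, 10.0, 200.0, 5.0
t = np.arange(0, 2, 1 / fs)
tr1 = np.sin(2 * np.pi * f * t)
tr2 = np.sin(2 * np.pi * f * (t - dx / c))   # arrives dx/c later at the far receiver
v = phase_velocity(tr1, tr2, dx, fs, f)      # v is approximately 200 m/s
```

A multichannel version instead fits a line to phase versus offset across many receivers at each frequency, which is what makes the swept-frequency display robust against coherent noise.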
Regional simulation of soil nitrogen dynamics and balance in Swiss cropping systems
NASA Astrophysics Data System (ADS)
Lee, Juhwan; Necpalova, Magdalena; Six, Johan
2017-04-01
We evaluated the regional-scale potential of various crop and soil management practices to reduce the dependency of crop N demand on external N inputs and N losses to the environment. The estimates of soil N balance were simulated and compared under alternative and conventional crop production across all Swiss cropland. Alternative practices were all combinations of organic fertilization, reduced tillage and winter cover cropping. Using the DayCent model, we simulated changes in crop N yields as well as the contribution of inputs and outputs to soil N balance under alternative practices, which was complemented with corresponding measurements from available long-term field experiments and site-level simulations. In addition, the effects of reducing (between 0% and 80% of recommended application rates) or increasing chemical fertilizer input rates (between 120% and 300% of recommended application rates) on system-level N dynamics were also simulated. Modeled yields at recommended N rates were only 37-87% of the maximum yield potential across common Swiss crops, and crop productivity was sensitive to the level of external N inputs, except for grass-clover mixture, soybean and peas. Overall, differences in soil N inputs and outputs decreased or increased in proportion to the deviation of the N input from the recommended rate. As a result, there was no additional difference in soil N balance in response to N application rates. Nitrate leaching accounted for 40-81% of total N output differences, while up to 47% of total N output occurred through harvest and straw removal. Regardless of crop, yield potential became insensitive to high N rates. Differences in N2O and N2 emissions increased slightly with increasing N inputs, with each gas responsible for only about 1% of changes in total N output. Overall, there was a positive soil N balance under alternative practices.
In particular, considerable improvement in soil N balance was expected when slowly decomposing organic fertilizer was used in combination with cover cropping and/or reduced tillage. However, the increase in soil N balance was due to decreases in harvested yield and nitrate leaching under these organically based cropping practices. Instead, the use of fast-decomposing organic matter with cover cropping could be considered to avoid any yield penalty while decreasing nitrate leaching, hence reducing total N output. In order to effectively reduce N losses from soils, approaches that combine multiple alternative options should be taken into account at the regional scale.
A general framework of noise suppression in material decomposition for dual-energy CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Petrongolo, Michael; Dong, Xue; Zhu, Lei, E-mail: leizhu@gatech.edu
Purpose: As a general problem of dual-energy CT (DECT), noise amplification in material decomposition severely reduces the signal-to-noise ratio on the decomposed images compared to that on the original CT images. In this work, the authors propose a general framework of noise suppression in material decomposition for DECT. The method is based on an iterative algorithm recently developed in their group for image-domain decomposition of DECT, with an extension to include nonlinear decomposition models. The generalized framework of iterative DECT decomposition enables beam-hardening correction with simultaneous noise suppression, which improves the clinical benefits of DECT. Methods: The authors propose to suppress noise on the decomposed images of DECT using convex optimization, which is formulated in the form of least-squares estimation with smoothness regularization. Based on the design principles of a best linear unbiased estimator, the authors include the inverse of the estimated variance–covariance matrix of the decomposed images as the penalty weight in the least-squares term. Analytical formulas are derived to compute the variance–covariance matrix for decomposed images with general-form numerical or analytical decomposition. As a demonstration, the authors implement the proposed algorithm on phantom data using an empirical polynomial function of decomposition measured on a calibration scan. The polynomial coefficients are determined from the projection data acquired on a wedge phantom, and the signal decomposition is performed in the projection domain. Results: On the Catphan®600 phantom, the proposed noise suppression method reduces the average noise standard deviation of basis material images by one to two orders of magnitude, with a superior performance on spatial resolution as shown in comparisons of line-pair images and modulation transfer function measurements.
On the synthesized monoenergetic CT images, the noise standard deviation is reduced by a factor of 2–3. By using nonlinear decomposition on projections, the authors’ method effectively suppresses the streaking artifacts of beam hardening and obtains more uniform images than their previous approach based on a linear model. Similar performance of noise suppression is observed in the results of an anthropomorphic head phantom and a pediatric chest phantom generated by the proposed method. With beam-hardening correction enabled by their approach, the image spatial nonuniformity on the head phantom is reduced from around 10% on the original CT images to 4.9% on the synthesized monoenergetic CT image. On the pediatric chest phantom, their method suppresses image noise standard deviation by a factor of around 7.5, and compared with linear decomposition, it reduces the estimation error of electron densities from 33.3% to 8.6%. Conclusions: The authors propose a general framework of noise suppression in material decomposition for DECT. Phantom studies have shown the proposed method improves the image uniformity and the accuracy of electron density measurements by effective beam-hardening correction and reduces noise level without noticeable resolution loss.
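The core estimator in this framework, least-squares fitting with a smoothness penalty weighted by the inverse noise variance, can be illustrated with a minimal 1-D sketch. This is not the authors' implementation: the signal, weights, and regularization strength below are hypothetical, and a simple first-difference penalty stands in for their regularizer.

```python
import numpy as np

def penalized_wls_denoise(y, weights, lam):
    """Solve min_x (x-y)^T W (x-y) + lam * ||D x||^2, where W holds the
    inverse noise variances and D is the first-difference operator
    enforcing smoothness (closed-form normal-equation solve)."""
    n = len(y)
    W = np.diag(weights)
    D = np.diff(np.eye(n), axis=0)          # (n-1) x n difference matrix
    A = W + lam * D.T @ D
    return np.linalg.solve(A, W @ y)

rng = np.random.default_rng(0)
truth = np.sin(np.linspace(0, np.pi, 50))   # smooth underlying signal
noise_sd = 0.3 * np.ones(50)
noisy = truth + rng.normal(0, noise_sd)
denoised = penalized_wls_denoise(noisy, 1.0 / noise_sd**2, lam=50.0)
print(np.std(noisy - truth), np.std(denoised - truth))
```

Weighting by the inverse variance means noisier samples pull the fit less, which is the best-linear-unbiased-estimator intuition the abstract invokes.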
NASA Astrophysics Data System (ADS)
Sekiguchi, Kazuki; Shirakawa, Hiroki; Chokawa, Kenta; Araidai, Masaaki; Kangawa, Yoshihiro; Kakimoto, Koichi; Shiraishi, Kenji
2018-04-01
We analyzed the decomposition of Ga(CH3)3 (TMG) during the metal organic vapor phase epitaxy (MOVPE) of GaN on the basis of first-principles calculations and thermodynamic analysis. We performed activation energy calculations of TMG decomposition and determined the main reaction processes of TMG during GaN MOVPE. We found that TMG reacts with the H2 carrier gas and that (CH3)2GaH is generated after the desorption of the methyl group. Next, (CH3)2GaH decomposes into (CH3)GaH2 and this decomposes into GaH3. Finally, GaH3 becomes GaH. In the MOVPE growth of GaN, TMG decomposes into GaH by the successive desorption of its methyl groups. The results presented here concur with recent high-resolution mass spectroscopy results.
Microbial community assembly and metabolic function during mammalian corpse decomposition.
Metcalf, Jessica L; Xu, Zhenjiang Zech; Weiss, Sophie; Lax, Simon; Van Treuren, Will; Hyde, Embriette R; Song, Se Jin; Amir, Amnon; Larsen, Peter; Sangwan, Naseer; Haarmann, Daniel; Humphrey, Greg C; Ackermann, Gail; Thompson, Luke R; Lauber, Christian; Bibat, Alexander; Nicholas, Catherine; Gebert, Matthew J; Petrosino, Joseph F; Reed, Sasha C; Gilbert, Jack A; Lynne, Aaron M; Bucheli, Sibyl R; Carter, David O; Knight, Rob
2016-01-08
Vertebrate corpse decomposition provides an important stage in nutrient cycling in most terrestrial habitats, yet microbially mediated processes are poorly understood. Here we combine deep microbial community characterization, community-level metabolic reconstruction, and soil biogeochemical assessment to understand the principles governing microbial community assembly during decomposition of mouse and human corpses on different soil substrates. We find a suite of bacterial and fungal groups that contribute to nitrogen cycling and a reproducible network of decomposers that emerge on predictable time scales. Our results show that this decomposer community is derived primarily from bulk soil, but key decomposers are ubiquitous in low abundance. Soil type was not a dominant factor driving community development, and the process of decomposition is sufficiently reproducible to offer new opportunities for forensic investigations. Copyright © 2016, American Association for the Advancement of Science.
Decomposing Huge Networks into Skeleton Graphs by Reachable Relations
2017-06-07
AFRL-AFOSR-JP-TR-2017-0047: Decomposing Huge Networks into Skeleton Graphs by Reachable Relations. Kazumi Saito, University of Shizuoka. Final report, 06/07/2017. DISTRIBUTION A: Distribution approved for public release. AF Office of Scientific Research (AFOSR)/IOA, Arlington, Virginia 22203.
2014-10-01
nonlinear and non-stationary signals. It aims at decomposing a signal, via an iterative sifting procedure, into several intrinsic mode functions (IMFs). It is well known that nonlinear and non-stationary signal analysis is important and difficult.
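A single sifting step of the procedure described above can be sketched as follows: find the local extrema, interpolate upper and lower cubic-spline envelopes, and subtract their mean from the signal. This is a schematic illustration only; practical empirical mode decomposition iterates this step with stopping criteria and careful boundary handling, and the test signal below is made up.

```python
import numpy as np
from scipy.signal import argrelextrema
from scipy.interpolate import CubicSpline

def sift_once(t, x):
    """One sifting step: subtract the mean of the upper and lower
    cubic-spline envelopes, yielding a candidate IMF and a residual."""
    maxi = argrelextrema(x, np.greater)[0]
    mini = argrelextrema(x, np.less)[0]
    upper = CubicSpline(t[maxi], x[maxi])(t)
    lower = CubicSpline(t[mini], x[mini])(t)
    mean_env = 0.5 * (upper + lower)
    return x - mean_env, mean_env          # candidate IMF, residual trend

t = np.linspace(0, 1, 1000)
x = np.sin(2 * np.pi * 25 * t) + 0.5 * np.sin(2 * np.pi * 3 * t)
imf, trend = sift_once(t, x)               # fast oscillation vs slow trend
```

By construction the candidate IMF and the residual sum back to the original signal, which is the property the full decomposition preserves across all extracted IMFs.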
Layout compliance for triple patterning lithography: an iterative approach
NASA Astrophysics Data System (ADS)
Yu, Bei; Garreton, Gilda; Pan, David Z.
2014-10-01
As the semiconductor process further scales down, the industry encounters many lithography-related issues. In the 14nm logic node and beyond, triple patterning lithography (TPL) is one of the most promising techniques for the Metal1 layer and possibly the Via0 layer. Layout decomposition, one of the most challenging problems in TPL, has recently received increasing attention from both industry and academia. Ideally the decomposer should point out locations in the layout that are not triple patterning decomposable and therefore require manual intervention by designers. A traditional decomposition flow would be an iterative process, where each iteration consists of an automatic layout decomposition step and a manual layout modification task. However, due to the NP-hardness of triple patterning layout decomposition, automatic full-chip layout decomposition requires long computational time, and therefore design closure issues linger in the traditional flow. Challenged by this issue, we present a novel incremental layout decomposition framework to facilitate accelerated iterative decomposition. In the first iteration, our decomposer not only points out all conflicts, but also provides suggestions to fix them. After the layout modification, instead of solving the full-chip problem from scratch, our decomposer can provide a quick solution for a selected portion of the layout. We believe this framework is efficient in terms of performance, and designer friendly.
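At its core, TPL decomposition is 3-coloring of the layout conflict graph (features are nodes, too-close pairs are edges), which is what makes the problem NP-hard. A brute-force backtracking sketch on a hypothetical toy conflict graph shows both a decomposable case and a native conflict that a decomposer must flag for manual fixing; real tools use far more scalable algorithms.

```python
def three_color(adj):
    """Backtracking mask assignment: give each feature one of three masks
    so no conflicting pair shares a mask. `adj` maps a feature to the set
    of features it conflicts with. Returns a coloring dict or None."""
    nodes = sorted(adj)
    colors = {}

    def assign(i):
        if i == len(nodes):
            return True
        v = nodes[i]
        for mask in (0, 1, 2):
            if all(colors.get(u) != mask for u in adj[v]):
                colors[v] = mask
                if assign(i + 1):
                    return True
                del colors[v]
        return False

    return colors if assign(0) else None

# K4 (four mutually conflicting features) is not 3-colorable; a 5-cycle is.
k4 = {a: {b for b in "abcd" if b != a} for a in "abcd"}
cycle5 = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
print(three_color(k4))                 # None -> native TPL conflict
print(three_color(cycle5) is not None)
```

The incremental framework in the abstract amounts to re-running such a search only on the modified portion of the graph rather than the full chip.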
Atomic-batched tensor decomposed two-electron repulsion integrals
NASA Astrophysics Data System (ADS)
Schmitz, Gunnar; Madsen, Niels Kristian; Christiansen, Ove
2017-04-01
We present a new integral format for 4-index electron repulsion integrals, in which several strategies like the Resolution-of-the-Identity (RI) approximation and other more general tensor-decomposition techniques are combined with an atomic batching scheme. The 3-index RI integral tensor is divided into sub-tensors defined by atom pairs on which we perform an accelerated decomposition to the canonical product (CP) format. In a first step, the RI integrals are decomposed to a high-rank CP-like format by repeated singular value decompositions followed by a rank reduction, which uses a Tucker decomposition as an intermediate step to lower the prefactor of the algorithm. After decomposing the RI sub-tensors (within the Coulomb metric), they can be reassembled to the full decomposed tensor (RC approach) or the atomic batched format can be maintained (ABC approach). In the first case, the integrals are very similar to the well-known tensor hypercontraction integral format, which gained some attraction in recent years since it allows for quartic scaling implementations of MP2 and some coupled cluster methods. On the MP2 level, the RC and ABC approaches are compared concerning efficiency and storage requirements. Furthermore, the overall accuracy of this approach is assessed. Initial test calculations show a good accuracy and that it is not limited to small systems.
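The repeated-SVD compression step described above rests on low-rank approximation by truncated SVD. A minimal sketch on a synthetic matrix (standing in for a matricized integral sub-tensor; the sizes and ranks are made up) shows the approximation error falling as the retained rank grows.

```python
import numpy as np

def truncated_svd(a, rank):
    """Best rank-r approximation of a matrix in the Frobenius norm,
    the building block of the repeated-SVD compression (illustrative
    sketch, not the authors' code)."""
    u, s, vt = np.linalg.svd(a, full_matrices=False)
    return u[:, :rank] * s[:rank] @ vt[:rank]

rng = np.random.default_rng(1)
# Synthetic matrix with numerical rank 8 plus a tiny noise floor.
a = (rng.normal(size=(40, 8)) @ rng.normal(size=(8, 40))
     + 1e-6 * rng.normal(size=(40, 40)))
err = [np.linalg.norm(a - truncated_svd(a, r)) for r in (2, 4, 8)]
print(err)
```

Rapid singular-value decay is exactly what makes the RI sub-tensors compressible to a low-rank CP-like format at controlled accuracy.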
Fast-ion D(alpha) measurements and simulations in DIII-D
NASA Astrophysics Data System (ADS)
Luo, Yadong
The fast-ion Dalpha diagnostic measures the Doppler-shifted Dalpha light emitted by neutralized fast ions. For a favorable viewing geometry, the bright interferences from beam neutrals, halo neutrals, and edge neutrals span a small wavelength range around the Dalpha rest wavelength and are blocked by a vertical bar at the exit focal plane of the spectrometer. Background subtraction and fitting techniques eliminate various contaminants in the spectrum. Fast-ion data are acquired with a time resolution of ˜1 ms, spatial resolution of ˜5 cm, and energy resolution of ˜10 keV. A weighted Monte Carlo simulation code models the fast-ion Dalpha spectra based on the fast-ion distribution function from other sources. In quiet plasmas, the measured and simulated spectral shapes are in excellent agreement, and the absolute magnitudes agree reasonably well. The fast-ion Dalpha signal has the expected dependencies on plasma and neutral beam parameters. The neutral particle diagnostic and neutron diagnostic corroborate the fast-ion Dalpha measurements. The relative spatial profile is in agreement with the simulated profile based on the fast-ion distribution function from the TRANSP analysis code. During ion cyclotron heating, fast ions with high perpendicular energy are accelerated, while those with low perpendicular energy are barely affected. The spatial profile is compared with the simulated profiles based on the fast-ion distribution functions from the CQL Fokker-Planck code. In discharges with Alfven instabilities, both the spatial profile and the spectral shape suggest that fast ions are redistributed. The flattened fast-ion Dalpha profile is in agreement with the fast-ion pressure profile.
NASA Astrophysics Data System (ADS)
Wang, Zuo-Cai; Xin, Yu; Ren, Wei-Xin
2016-08-01
This paper proposes a new nonlinear joint model updating method for shear type structures based on the instantaneous characteristics of the decomposed structural dynamic responses. To obtain an accurate representation of a nonlinear system's dynamics, the nonlinear joint model is described as the nonlinear spring element with bilinear stiffness. The instantaneous frequencies and amplitudes of the decomposed mono-component are first extracted by the analytical mode decomposition (AMD) method. Then, an objective function based on the residuals of the instantaneous frequencies and amplitudes between the experimental structure and the nonlinear model is created for the nonlinear joint model updating. The optimal values of the nonlinear joint model parameters are obtained by minimizing the objective function using the simulated annealing global optimization method. To validate the effectiveness of the proposed method, a single-story shear type structure subjected to earthquake and harmonic excitations is simulated as a numerical example. Then, a beam structure with multiple local nonlinear elements subjected to earthquake excitation is also simulated. The nonlinear beam structure is updated based on the global and local model using the proposed method. The results show that the proposed local nonlinear model updating method is more effective for structures with multiple local nonlinear elements. Finally, the proposed method is verified by the shake table test of a real high voltage switch structure. The accuracy of the proposed method is quantified both in numerical and experimental applications using the defined error indices. Both the numerical and experimental results have shown that the proposed method can effectively update the nonlinear joint model.
Empirical Wavelet Transform Based Features for Classification of Parkinson's Disease Severity.
Oung, Qi Wei; Muthusamy, Hariharan; Basah, Shafriza Nisha; Lee, Hoileong; Vijean, Vikneswaran
2017-12-29
Parkinson's disease (PD) is a progressive neurodegenerative disorder that affects a large part of the population. Symptoms of PD include tremor, rigidity, slowness of movements and vocal impairments. In order to develop an effective diagnostic system, a number of algorithms have been proposed, mainly to distinguish healthy individuals from those with PD. However, most previous works were based on binary classification, treating the early and advanced PD stages equally. Therefore, in this work, we propose a multiclass classification with three classes of PD severity level (mild, moderate, severe) and healthy control. The focus is to detect and classify PD using signals from wearable motion and audio sensors based on the empirical wavelet transform (EWT) and empirical wavelet packet transform (EWPT) respectively. The EWT/EWPT was applied to decompose both speech and motion data signals up to five levels. Next, several features are extracted after obtaining the instantaneous amplitudes and frequencies from the coefficients of the decomposed signals by applying the Hilbert transform. The performance of the algorithm was analysed using three classifiers: K-nearest neighbour (KNN), probabilistic neural network (PNN) and extreme learning machine (ELM). Experimental results demonstrated that the proposed approach can differentiate PD from non-PD subjects, including their severity level, with classification accuracies of more than 90% using EWT/EWPT-ELM on signals from motion and audio sensors respectively. Additionally, classification accuracy of more than 95% was achieved when EWT/EWPT-ELM was applied to the combined information from both sensor types.
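The feature-extraction step, obtaining instantaneous amplitude and frequency from a decomposed component via the Hilbert transform, is generic enough to sketch. The signal below is a synthetic test tone, not sensor data from the study, and the decomposition stage is omitted.

```python
import numpy as np
from scipy.signal import hilbert

def instantaneous_features(x, fs):
    """Instantaneous amplitude and frequency of a (mono-component)
    signal via the analytic signal from the Hilbert transform."""
    analytic = hilbert(x)
    amplitude = np.abs(analytic)
    phase = np.unwrap(np.angle(analytic))
    freq = np.diff(phase) * fs / (2 * np.pi)   # Hz, length N-1
    return amplitude, freq

fs = 1000.0
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * 50 * t)                 # 50 Hz unit-amplitude tone
amp, freq = instantaneous_features(x, fs)
print(np.median(amp), np.median(freq))
```

Statistics of these amplitude and frequency tracks (means, medians, variances per decomposition level) are the kind of features typically fed to classifiers such as KNN, PNN, or ELM.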
A spectral analysis of the domain decomposed Monte Carlo method for linear systems
Slattery, Stuart R.; Evans, Thomas M.; Wilson, Paul P. H.
2015-09-08
The domain decomposed behavior of the adjoint Neumann-Ulam Monte Carlo method for solving linear systems is analyzed using the spectral properties of the linear operator. Relationships for the average length of the adjoint random walks, a measure of convergence speed and serial performance, are made with respect to the eigenvalues of the linear operator. In addition, relationships for the effective optical thickness of a domain in the decomposition are presented based on the spectral analysis and diffusion theory. Using the effective optical thickness, the Wigner rational approximation and the mean chord approximation are applied to estimate the leakage fraction of random walks from a domain in the decomposition as a measure of parallel performance and potential communication costs. The one-speed, two-dimensional neutron diffusion equation is used as a model problem in numerical experiments to test the models for symmetric operators with spectral qualities similar to light water reactor problems. We find, in general, the derived approximations show good agreement with random walk lengths and leakage fractions computed by the numerical experiments.
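The Monte Carlo linear-solver idea underlying this analysis can be sketched in a few lines: to solve x = Hx + b (spectral radius of H below 1), random walks sample the Neumann series term by term. This is a simple forward-walk variant with a made-up 2x2 system, not the adjoint domain-decomposed method of the paper.

```python
import numpy as np

def mc_solve(h, b, n_walks=20000, rng=None):
    """Forward Neumann-Ulam Monte Carlo estimate of x = H x + b.
    Walks transition uniformly at random; importance weights carry the
    H entries, and a 0.5 absorption probability terminates each walk."""
    rng = rng or np.random.default_rng(0)
    n = len(b)
    x = np.zeros(n)
    p = 1.0 / n                           # uniform transition probability
    for i in range(n):
        total = 0.0
        for _ in range(n_walks):
            state, weight = i, 1.0
            while True:
                total += weight * b[state]
                if rng.random() < 0.5:    # absorb and stop this walk
                    break
                weight /= 0.5             # compensate for survival prob
                nxt = rng.integers(n)
                weight *= h[state, nxt] / p
                state = nxt
        x[i] = total / n_walks
    return x

h = np.array([[0.1, 0.2], [0.3, 0.1]])
b = np.array([1.0, 2.0])
exact = np.linalg.solve(np.eye(2) - h, b)
approx = mc_solve(h, b)
print(exact, approx)
```

The average walk length here is set by the absorption probability; in the paper's analysis it is tied instead to the operator's eigenvalues, which is what links spectrum to serial and parallel performance.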
Multi-Satellite Observation Scheduling for Large Area Disaster Emergency Response
NASA Astrophysics Data System (ADS)
Niu, X. N.; Tang, H.; Wu, L. X.
2018-04-01
Generating an optimal imaging plan plays a key role in coordinating multiple satellites to monitor the disaster area. In this paper, to generate the imaging plan dynamically as disaster relief proceeds, we propose a dynamic satellite task scheduling method for large-area disaster response. First, an initial robust scheduling scheme is generated by a robust satellite scheduling model in which both the profit and the robustness of the schedule are simultaneously maximized. Then, we use a multi-objective optimization model to obtain a series of decomposing schemes. Based on the initial imaging plan, we propose a mixed optimizing algorithm named HA_NSGA-II to allocate the decomposing results and thus obtain an adjusted imaging schedule. A real disaster scenario, the 2008 Wenchuan earthquake, is revisited in terms of rapid response using satellite resources and used to evaluate the performance of the proposed method against state-of-the-art approaches. We conclude that our satellite scheduling model can optimize the usage of satellite resources so as to obtain images for disaster response in a more timely and efficient manner.
A structural model decomposition framework for systems health management
NASA Astrophysics Data System (ADS)
Roychoudhury, I.; Daigle, M.; Bregon, A.; Pulido, B.
Systems health management (SHM) is an important set of technologies aimed at increasing system safety and reliability by detecting, isolating, and identifying faults; and predicting when the system reaches end of life (EOL), so that appropriate fault mitigation and recovery actions can be taken. Model-based SHM approaches typically make use of global, monolithic system models for online analysis, which results in a loss of scalability and efficiency for large-scale systems. Improvement in scalability and efficiency can be achieved by decomposing the system model into smaller local submodels and operating on these submodels instead. In this paper, the global system model is analyzed offline and structurally decomposed into local submodels. We define a common model decomposition framework for extracting submodels from the global model. This framework is then used to develop algorithms for solving model decomposition problems for the design of three separate SHM technologies, namely, estimation (which is useful for fault detection and identification), fault isolation, and EOL prediction. We solve these model decomposition problems using a three-tank system as a case study.
Högberg, Peter; Plamboeck, Agneta H.; Taylor, Andrew F. S.; Fransson, Petra M. A.
1999-01-01
Fungi play crucial roles in the biogeochemistry of terrestrial ecosystems, most notably as saprophytes decomposing organic matter and as mycorrhizal fungi enhancing plant nutrient uptake. However, a recurrent problem in fungal ecology is to establish the trophic status of species in the field. Our interpretations and conclusions are too often based on extrapolations from laboratory microcosm experiments or on anecdotal field evidence. Here, we used natural variations in stable carbon isotope ratios (δ13C) as an approach to distinguish between fungal decomposers and symbiotic mycorrhizal fungal species in the rich sporocarp flora (our sample contains 135 species) of temperate forests. We also demonstrated that host-specific mycorrhizal fungi that receive C from overstorey or understorey tree species differ in their δ13C. The many promiscuous mycorrhizal fungi, associated with and connecting several tree hosts, were calculated to receive 57–100% of their C from overstorey trees. Thus, overstorey trees also support, partly or wholly, the nutrient-absorbing mycelia of their alleged competitors, the understorey trees. PMID:10411910
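The 57–100% overstorey contributions quoted above follow from a standard two-end-member stable-isotope mixing model. The sketch below uses hypothetical δ13C values, not data from the study.

```python
def overstorey_fraction(d13c_mix, d13c_over, d13c_under):
    """Two-end-member mixing model: fraction of fungal carbon derived
    from the overstorey source, given the sporocarp value and the two
    source signatures (per-mil delta-13C)."""
    return (d13c_mix - d13c_under) / (d13c_over - d13c_under)

# Hypothetical per-mil values for a sporocarp and the two host canopies.
f = overstorey_fraction(d13c_mix=-27.0, d13c_over=-26.0, d13c_under=-30.0)
print(f)  # 0.75 -> 75% of carbon from overstorey trees
```

The model assumes linear mixing of the two carbon sources and distinct, known end-member signatures, which is why the natural δ13C separation between saprophytes and mycorrhizal fungi matters.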
Heat treatment's effects on hydroxyapatite powders in water vapor and air atmosphere
NASA Astrophysics Data System (ADS)
Karabulut, A.; Baştan, F. E.; Erdoǧan, G.; Üstel, F.
2015-03-01
Hydroxyapatite (HA; Ca10(PO4)6(OH)2) is the main mineral constituent of bone tissue (~70%). As a calcium-phosphate ceramic that forms the inorganic component of hard tissues such as bone and teeth, HA is used, owing to its biocompatibility, in synthetic bone prostheses, in the restoration of fractured and broken bone, in coatings for metallic biomaterials, and in dental applications. HA is known to decompose under high heat energy during heat treatment; powders heated in a water-vapor atmosphere are therefore expected to show fewer decomposed phases and a lower amorphous phase content than powders heated in air. In this study, high-purity hydroxyapatite powders were heat treated at 900, 1000 and 1200 °C in an open-atmosphere furnace and in a water-vapor atmosphere. The morphology of powders of the same size was analyzed by SEM. The chemical structure of the heat-treated powders was examined by XRD, and particle size and morphological structure were characterized by particle-size analysis and SEM, respectively. The weight change of the samples during heating and cooling was recorded by thermogravimetric analysis (TGA).
Process for synthesis of ammonia borane for bulk hydrogen storage
Autrey, S Thomas [West Richland, WA; Heldebrant, David J [Richland, WA; Linehan, John C [Richland, WA; Karkamkar, Abhijeet J [Richland, WA; Zheng, Feng [Richland, WA
2011-03-01
The present invention discloses new methods for synthesizing ammonia borane (NH.sub.3BH.sub.3, or AB). Ammonium borohydride (NH.sub.4BH.sub.4) is formed from the reaction of borohydride salts and ammonium salts in liquid ammonia. Ammonium borohydride is decomposed in an ether-based solvent that yields AB at a near quantitative yield. The AB product shows promise as a chemical hydrogen storage material for fuel cell powered applications.
A hierarchy of generalized Jaulent-Miodek equations and their explicit solutions
NASA Astrophysics Data System (ADS)
Geng, Xianguo; Guan, Liang; Xue, Bo
A hierarchy of generalized Jaulent-Miodek (JM) equations related to a new spectral problem with energy-dependent potentials is proposed. Depending on the Lax matrix and elliptic variables, the generalized JM hierarchy is decomposed into two systems of solvable ordinary differential equations. Explicit theta function representations of the meromorphic function and the Baker-Akhiezer function are constructed, the solutions of the hierarchy are obtained based on the theory of algebraic curves.
NASA Astrophysics Data System (ADS)
Selivanova, Karina G.; Avrunin, Oleg G.; Zlepko, Sergii M.; Romanyuk, Sergii O.; Zabolotna, Natalia I.; Kotyra, Andrzej; Komada, Paweł; Smailova, Saule
2016-09-01
Research and systematization of motor disorders, taking into account clinical and neurophysiologic phenomena, is an important and topical problem of neurology. The article describes a technique for decomposing surface electromyography (EMG) signals using principal component analysis. The decomposition is achieved by a set of algorithms developed specifically for EMG analysis. The accuracy was verified by calculating the Mahalanobis distance and the probability of error.
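The two building blocks named here, principal component analysis of multichannel EMG and Mahalanobis distance for accuracy checks, can be sketched generically. The channel data below is synthetic, and this is not the authors' algorithm set.

```python
import numpy as np

def pca(x, n_components):
    """PCA of zero-meaned multichannel data via SVD: returns component
    scores, loading vectors, and the explained-variance ratios."""
    xc = x - x.mean(axis=0)
    u, s, vt = np.linalg.svd(xc, full_matrices=False)
    scores = u[:, :n_components] * s[:n_components]
    explained = s**2 / np.sum(s**2)
    return scores, vt[:n_components], explained

def mahalanobis(v, mean, cov):
    """Mahalanobis distance of a feature vector from a class centre."""
    d = v - mean
    return float(np.sqrt(d @ np.linalg.solve(cov, d)))

rng = np.random.default_rng(2)
emg = rng.normal(size=(500, 8)) @ rng.normal(size=(8, 8))  # correlated channels
scores, components, explained = pca(emg, n_components=2)
print(scores.shape, float(np.sum(explained)))
```

Mahalanobis distance between projected samples and class centres is one conventional way to turn such a decomposition into a classification error estimate.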
Structural system identification based on variational mode decomposition
NASA Astrophysics Data System (ADS)
Bagheri, Abdollah; Ozbulut, Osman E.; Harris, Devin K.
2018-03-01
In this paper, a new structural identification method is proposed to identify the modal properties of engineering structures based on dynamic response decomposition using the variational mode decomposition (VMD). The VMD approach is a decomposition algorithm that has been developed as a means to overcome some of the drawbacks and limitations of the empirical mode decomposition method. The VMD-based modal identification algorithm decomposes the acceleration signal into a series of distinct modal responses and their respective center frequencies, such that when combined their cumulative modal responses reproduce the original acceleration response. The decaying amplitude of the extracted modal responses is then used to identify the modal damping ratios using a linear fitting function on modal response data. Finally, after extracting modal responses from available sensors, the mode shape vector for each of the decomposed modes in the system is identified from all obtained modal response data. To demonstrate the efficiency of the algorithm, a series of numerical, laboratory, and field case studies were evaluated. The laboratory case study utilized the vibration response of a three-story shear frame, whereas the field study leveraged the ambient vibration response of a pedestrian bridge to characterize the modal properties of the structure. The modal properties of the shear frame were computed using analytical approach for a comparison with the experimental modal frequencies. Results from these case studies demonstrated that the proposed method is efficient and accurate in identifying modal data of the structures.
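The damping-identification step, a linear fit to the decaying amplitude of an extracted modal response, is simple enough to sketch. The envelope below is synthetic with a known damping ratio; real use would take the amplitude track of a VMD mode.

```python
import numpy as np

def damping_from_envelope(t, amplitude, omega_n):
    """Estimate the modal damping ratio from a decaying amplitude
    envelope A(t) = A0 * exp(-zeta * omega_n * t) by least-squares
    fitting of log A(t) against time."""
    slope, _ = np.polyfit(t, np.log(amplitude), 1)
    return -slope / omega_n

# Synthetic envelope: zeta = 0.02, natural frequency 1.5 Hz.
omega_n = 2 * np.pi * 1.5
t = np.linspace(0, 10, 200)
amp = 3.0 * np.exp(-0.02 * omega_n * t)
print(damping_from_envelope(t, amp, omega_n))
```

Fitting in log space turns the exponential decay into a straight line, so the damping ratio falls out of an ordinary linear regression.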
Ecosystem assembly rules: the interplay of green and brown webs during salt marsh succession.
Schrama, Maarten; Berg, Matty P; Olff, Han
2012-11-01
Current theories about vegetation succession and food web assembly are poorly compatible, as food webs are generally viewed to be static, and succession is usually analyzed without the inclusion of higher trophic levels. In this study we present results from a detailed analysis of ecosystem assembly rules over a chronosequence of 100 years of salt marsh succession. First, using 13 yearlong observations on vegetation and soil parameters in different successional stages, we show that the space-for-time substitution is valid for this chronosequence. We then quantify biomass changes for all dominant invertebrate and vertebrate species across all main trophic groups of plants and animals. All invertebrate and vertebrate species were assigned to a trophic group according to feeding preference, and changes in trophic group abundance were quantified for seven different successional stages of the ecosystem. We found changes from a marine-fueled, decomposer-based (brown) food web in early stages to a more terrestrial, plant-based, herbivore-driven (green) food web in intermediate succession stages, and finally to a decomposer-based, terrestrial-driven food web in the latest stages. These changes were accompanied not only by an increase in live plant biomass and a leveling toward late succession but also by a constant increase in the amount of dead plant biomass over succession. Our results show that the structure and dynamics of salt marsh food webs cannot be understood except in light of vegetation succession, and vice versa.
The capital-asset-pricing model and arbitrage pricing theory: A unification
Khan, M. Ali; Sun, Yeneng
1997-01-01
We present a model of a financial market in which naive diversification, based simply on portfolio size and obtained as a consequence of the law of large numbers, is distinguished from efficient diversification, based on mean-variance analysis. This distinction yields a valuation formula involving only the essential risk embodied in an asset’s return, where the overall risk can be decomposed into a systematic and an unsystematic part, as in the arbitrage pricing theory; and the systematic component further decomposed into an essential and an inessential part, as in the capital-asset-pricing model. The two theories are thus unified, and their individual asset-pricing formulas shown to be equivalent to the pervasive economic principle of no arbitrage. The factors in the model are endogenously chosen by a procedure analogous to the Karhunen–Loéve expansion of continuous time stochastic processes; it has an optimality property justifying the use of a relatively small number of them to describe the underlying correlational structures. Our idealized limit model is based on a continuum of assets indexed by a hyperfinite Loeb measure space, and it is asymptotically implementable in a setting with a large but finite number of assets. Because the difficulties in the formulation of the law of large numbers with a standard continuum of random variables are well known, the model uncovers some basic phenomena not amenable to classical methods, and whose approximate counterparts are not already, or even readily, apparent in the asymptotic setting. PMID:11038614
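The decomposition of overall risk into a systematic and an unsystematic part can be illustrated with a one-factor sample calculation. The factor and returns below are simulated, and this single-factor regression is only a finite-sample cartoon of the paper's idealized continuum model.

```python
import numpy as np

def variance_decomposition(asset, factor):
    """Split an asset's sample return variance into a systematic part
    (beta^2 times factor variance) and an unsystematic residual part,
    as in a one-factor pricing model."""
    beta = np.cov(asset, factor, ddof=0)[0, 1] / np.var(factor)
    residual = asset - beta * factor
    return beta, beta**2 * np.var(factor), np.var(residual)

rng = np.random.default_rng(3)
market = rng.normal(0.0, 0.04, size=5000)                 # factor returns
asset = 1.2 * market + rng.normal(0.0, 0.02, size=5000)   # beta = 1.2
beta, sys_var, idio_var = variance_decomposition(asset, market)
print(beta, sys_var, idio_var)
```

Because the residual is sample-orthogonal to the factor, the two parts sum exactly to the total variance; naive diversification across many assets shrinks only the unsystematic part.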
Whisker Contact Detection of Rodents Based on Slow and Fast Mechanical Inputs
Claverie, Laure N.; Boubenec, Yves; Debrégeas, Georges; Prevost, Alexis M.; Wandersman, Elie
2017-01-01
Rodents use their whiskers to locate nearby objects with extreme precision. To perform such tasks, they need to detect whisker/object contacts with high temporal accuracy. This contact detection is conveyed by classes of mechanoreceptors whose neural activity is sensitive to either slow or fast time-varying mechanical stresses acting at the base of the whiskers. We developed a biomimetic approach to separate and characterize slow quasi-static and fast vibrational stress signals acting on a whisker base in realistic exploratory phases, using experiments on both real and artificial whiskers. Both slow and fast mechanical inputs are successfully captured using a mechanical model of the whisker. We present and discuss consequences of the whisking process in purely mechanical terms and hypothesize that free whisking in air sets a mechanical threshold for contact detection. The time resolution and robustness of the contact detection strategies based on either slow or fast stress signals are determined. Contact detection based on the vibrational signal is faster and more robust to exploratory conditions than that based on the slow quasi-static component, although both slow/fast components allow localizing the object. PMID:28119582
Topographic Spreading Analysis of an Empirical Sex Workers' Network
NASA Astrophysics Data System (ADS)
Bjell, Johannes; Canright, Geoffrey; Engø-Monsen, Kenth; Remple, Valencia P.
The problem of epidemic spreading over networks has received considerable attention in recent years, due both to its intrinsic intellectual challenge and to its practical importance. A good recent summary of such work may be found in Newman (8), while (9) gives an outstanding example of a non-trivial prediction which is obtained from explicitly modeling the network in the epidemic spreading. In the language of mathematicians and computer scientists, a network of nodes connected by edges is called a graph. Most work on epidemic spreading over networks focuses on whole-graph properties, such as the percentage of infected nodes at long time. Two of us have, in contrast, focused on understanding the spread of an infection over time and space (the network) (61; 63; 62). This work involves decomposing any given network into subgraphs called regions (61). Regions are precisely defined as disjoint subgraphs which may be viewed as coarse-grained units of infection—in that, once one node in a region is infected, the progress of the infection over the remainder of the region is relatively fast and predictable (63). We note that this approach is based on the ‘Susceptible-Infected’ (SI) model of infection, in which nodes, once infected, are never cured. This model is reasonable for some infections, such as HIV—which is one of the diseases studied here. We also study gonorrhea and chlamydia, for which a more appropriate model is Susceptible-Infected-Susceptible (SIS) (67) (since nodes can be cured); we discuss the limitations of our approach for these cases below.
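The SI dynamics underlying the regions decomposition can be illustrated with a toy simulation. The graph below is an invented two-triangle topology joined by a bridge, not the empirical network studied in the paper, and the infection probability is arbitrary; the point is only that once one node in a dense region is infected, the rest of the region follows quickly and nodes are never cured.

```python
import random

random.seed(1)

# Toy topology (invented): two dense regions joined by one bridge edge.
edges = [(0, 1), (1, 2), (0, 2),   # region A: triangle
         (3, 4), (4, 5), (3, 5),   # region B: triangle
         (2, 3)]                   # bridge between the regions

adj = {}
for u, v in edges:
    adj.setdefault(u, set()).add(v)
    adj.setdefault(v, set()).add(u)

def si_spread(seed_node, p=0.5, steps=50):
    """Susceptible-Infected dynamics: infected nodes stay infected and
    infect each susceptible neighbour with probability p per step."""
    infected = {seed_node}
    history = [len(infected)]
    for _ in range(steps):
        new = {v for u in infected for v in adj[u]
               if v not in infected and random.random() < p}
        infected |= new
        history.append(len(infected))
        if len(infected) == len(adj):
            break
    return history

print(si_spread(0))  # infected count grows monotonically to all 6 nodes
```

An SIS variant, appropriate for curable infections such as gonorrhea and chlamydia, would additionally move infected nodes back to susceptible with some recovery probability each step.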
ARTIST: A fully automated artifact rejection algorithm for single-pulse TMS-EEG data.
Wu, Wei; Keller, Corey J; Rogasch, Nigel C; Longwell, Parker; Shpigel, Emmanuel; Rolle, Camarin E; Etkin, Amit
2018-04-01
Concurrent single-pulse TMS-EEG (spTMS-EEG) is an emerging noninvasive tool for probing causal brain dynamics in humans. However, in addition to the common artifacts in standard EEG data, spTMS-EEG data suffer from enormous stimulation-induced artifacts, posing significant challenges to the extraction of neural information. Typically, neural signals are analyzed after a manual time-intensive and often subjective process of artifact rejection. Here we describe a fully automated algorithm for spTMS-EEG artifact rejection. A key step of this algorithm is to decompose the spTMS-EEG data into statistically independent components (ICs), and then train a pattern classifier to automatically identify artifact components based on knowledge of the spatio-temporal profile of both neural and artefactual activities. The autocleaned and hand-cleaned data yield qualitatively similar group evoked potential waveforms. The algorithm achieves a 95% IC classification accuracy referenced to expert artifact rejection performance, and does so across a large number of spTMS-EEG data sets (n = 90 stimulation sites), retains high accuracy across stimulation sites/subjects/populations/montages, and outperforms current automated algorithms. Moreover, the algorithm was superior to the artifact rejection performance of relatively novice individuals, who would be the likely users of spTMS-EEG as the technique becomes more broadly disseminated. In summary, our algorithm provides an automated, fast, objective, and accurate method for cleaning spTMS-EEG data, which can increase the utility of TMS-EEG in both clinical and basic neuroscience settings. © 2018 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Zeng, Jing; Huang, Handong; Li, Huijie; Miao, Yuxin; Wen, Junxiang; Zhou, Fei
2017-12-01
The main emphasis of exploration and development is shifting from simple structural reservoirs to complex reservoirs, which are characterized by complex structure, thin reservoir thickness and large burial depth. Given these complex geological features, hydrocarbon detection technology, which directly indicates changes in hydrocarbon reservoirs, is a good approach for delimiting the distribution of underground reservoirs. It is common to utilize the time-frequency (TF) features of seismic data in detecting hydrocarbon reservoirs. We therefore study the complex domain-matching pursuit (CDMP) method and propose some improvements. The first is the introduction of a scale parameter, which corrects the defect that atomic waveforms change only with the frequency parameter. Its introduction not only decomposes the seismic signal with high accuracy and high efficiency but also reduces the number of iterations. We also integrate jumping search with ergodic search to improve computational efficiency while maintaining reasonable accuracy. We then combine the improved CDMP with the Wigner-Ville distribution to obtain a high-resolution TF spectrum. A one-dimensional modeling experiment has proved the validity of our method. Based on the low-frequency domain reflection coefficient in fluid-saturated porous media, we finally derive an approximation formula for the mobility attributes of reservoir fluid. This approximation formula is used as a hydrocarbon identification factor to predict deep-water gas-bearing sands of the M oil field in the South China Sea. The results are consistent with the actual well test results and our method can help inform the future exploration of deep-water gas reservoirs.
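The greedy core of matching pursuit can be sketched independently of the CDMP refinements. In the toy below, real-valued unit-norm cosine atoms stand in for the paper's scaled complex atoms (an assumption made purely for brevity); at each iteration the atom best correlated with the residual is selected and its projection subtracted.

```python
import math

N = 64  # signal length (assumed)

def atom(freq):
    """Unit-norm cosine atom; a real-valued stand-in for the scaled
    complex atoms used by CDMP."""
    a = [math.cos(2 * math.pi * freq * t / N) for t in range(N)]
    norm = math.sqrt(sum(x * x for x in a))
    return [x / norm for x in a]

dictionary = {f: atom(f) for f in range(1, 16)}

def matching_pursuit(signal, n_iter=3):
    """Greedy decomposition: repeatedly pick the atom with the largest
    inner product with the residual and subtract its projection."""
    residual = list(signal)
    picks = []
    for _ in range(n_iter):
        best_f, best_c = max(
            ((f, sum(r * x for r, x in zip(residual, a)))
             for f, a in dictionary.items()),
            key=lambda fc: abs(fc[1]))
        picks.append((best_f, best_c))
        residual = [r - best_c * x
                    for r, x in zip(residual, dictionary[best_f])]
    return picks, residual

# A signal built from the atoms at frequencies 3 and 7 is recovered in
# order of coefficient magnitude.
sig = [5 * a + 2 * b for a, b in zip(dictionary[3], dictionary[7])]
picks, residual = matching_pursuit(sig, n_iter=2)
print([f for f, _ in picks])  # [3, 7]
```

The scale parameter and jumping/ergodic search strategies of the paper would enter here as extra dictionary dimensions and a smarter enumeration over atoms, respectively.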
Self-Assessment of Individual Differences in Language Switching
Rodriguez-Fornells, Antoni; Krämer, Ulrike M.; Lorenzo-Seva, Urbano; Festman, Julia; Münte, Thomas F.
2012-01-01
Language switching is omnipresent in bilingual individuals. In fact, the ability to switch languages (code switching) is a very fast, efficient, and flexible process that seems to be a fundamental aspect of bilingual language processing. In this study, we aimed to characterize psychometrically self-perceived individual differences in language switching and to create a reliable measure of this behavioral pattern by introducing a bilingual switching questionnaire. As a working hypothesis based on the previous literature about code switching, we decomposed language switching into four constructs: (i) L1 switching tendencies (the tendency to switch to L1; L1-switch); (ii) L2 switching tendencies (L2-switch); (iii) contextual switch, which indexes the frequency of switches usually triggered by a particular situation, topic, or environment; and (iv) unintended switch, which measures the lack of intention and awareness of the language switches. A total of 582 Spanish–Catalan bilingual university students were studied. Twelve items were selected (three for each construct). The correlation matrix was factor-analyzed using minimum rank factor analysis followed by oblique direct oblimin rotation. The overall proportion of common variance explained by the four extracted factors was 0.86. Finally, to assess the external validity of the individual differences scored with the new questionnaire, we evaluated the correlations between these measures and several psychometric (language proficiency) and behavioral measures related to cognitive and attentional control. The present study highlights the importance of evaluating individual differences in language switching using self-assessment instruments when studying the interface between cognitive control and bilingualism. PMID:22291668
Groupwise Image Registration Guided by a Dynamic Digraph of Images.
Tang, Zhenyu; Fan, Yong
2016-04-01
For groupwise image registration, graph theoretic methods have been adopted for discovering the manifold of images to be registered so that accurate registration of images to a group center image can be achieved by aligning similar images that are linked by the shortest graph paths. However, the image similarity measures adopted to build a graph of images in the extant methods are essentially pairwise measures, not effective for capturing the groupwise similarity among multiple images. To overcome this problem, we present a groupwise image similarity measure that is built on sparse coding for characterizing image similarity among all input images and build a directed graph (digraph) of images so that similar images are connected by the shortest paths of the digraph. Following the shortest paths determined according to the digraph, images are registered to a group center image in an iterative manner by decomposing a large anatomical deformation field required to register an image to the group center image into a series of small ones between similar images. During the iterative image registration, the digraph of images evolves dynamically at each iteration step to pursue an accurate estimation of the image manifold. Moreover, an adaptive dictionary strategy is adopted in the groupwise image similarity measure to ensure fast convergence of the iterative registration procedure. The proposed method has been validated based on both simulated and real brain images, and experiment results have demonstrated that our method was more effective for learning the manifold of input images and achieved higher registration accuracy than state-of-the-art groupwise image registration methods.
Multi-level basis selection of wavelet packet decomposition tree for heart sound classification.
Safara, Fatemeh; Doraisamy, Shyamala; Azman, Azreen; Jantan, Azrul; Abdullah Ramaiah, Asri Ranga
2013-10-01
Wavelet packet transform decomposes a signal into a set of orthonormal bases (nodes) and provides opportunities to select an appropriate set of these bases for feature extraction. In this paper, multi-level basis selection (MLBS) is proposed to preserve the most informative bases of a wavelet packet decomposition tree through removing less informative bases by applying three exclusion criteria: frequency range, noise frequency, and energy threshold. MLBS achieved an accuracy of 97.56% for classifying normal heart sound, aortic stenosis, mitral regurgitation, and aortic regurgitation. MLBS is a promising basis selection method for signals with a small range of frequencies. Copyright © 2013 The Authors. Published by Elsevier Ltd. All rights reserved.
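A minimal flavor of energy-based node selection on a wavelet packet tree can be sketched with the Haar wavelet. This is only an illustration of the energy-threshold criterion: MLBS additionally applies frequency-range and noise-frequency exclusion criteria, which are omitted here, and the 5% threshold is invented.

```python
def haar_step(x):
    """One Haar analysis step: split into low-pass (averages) and
    high-pass (differences) halves, preserving energy."""
    lo = [(x[2 * i] + x[2 * i + 1]) / 2 ** 0.5 for i in range(len(x) // 2)]
    hi = [(x[2 * i] - x[2 * i + 1]) / 2 ** 0.5 for i in range(len(x) // 2)]
    return lo, hi

def wavelet_packet(x, depth):
    """Full wavelet packet tree as {(level, index): coefficients}."""
    tree = {(0, 0): list(x)}
    for level in range(depth):
        for index in range(2 ** level):
            lo, hi = haar_step(tree[(level, index)])
            tree[(level + 1, 2 * index)] = lo
            tree[(level + 1, 2 * index + 1)] = hi
    return tree

def select_bases(tree, depth, energy_frac=0.05):
    """Keep only leaf nodes carrying at least energy_frac of the total
    leaf energy (energy-threshold criterion only)."""
    leaves = {k: v for k, v in tree.items() if k[0] == depth}
    total = sum(c * c for v in leaves.values() for c in v)
    return [k for k, v in sorted(leaves.items())
            if sum(c * c for c in v) >= energy_frac * total]

signal = [1, 1, 1, 1, -1, -1, -1, -1]  # energy concentrated in one node
tree = wavelet_packet(signal, depth=2)
print(select_bases(tree, depth=2))  # [(2, 0)]
```

Feature extraction would then proceed only on the retained nodes, discarding the low-energy ones.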
A hierarchical model for probabilistic independent component analysis of multi-subject fMRI studies
Tang, Li
2014-01-01
An important goal in fMRI studies is to decompose the observed series of brain images to identify and characterize underlying brain functional networks. Independent component analysis (ICA) has been shown to be a powerful computational tool for this purpose. Classic ICA has been successfully applied to single-subject fMRI data. The extension of ICA to group inferences in neuroimaging studies, however, is challenging due to the unavailability of a pre-specified group design matrix. Existing group ICA methods generally concatenate observed fMRI data across subjects on the temporal domain and then decompose multi-subject data in a similar manner to single-subject ICA. The major limitation of existing methods is that they ignore between-subject variability in spatial distributions of brain functional networks in group ICA. In this paper, we propose a new hierarchical probabilistic group ICA method to formally model subject-specific effects in both temporal and spatial domains when decomposing multi-subject fMRI data. The proposed method provides model-based estimation of brain functional networks at both the population and subject level. An important advantage of the hierarchical model is that it provides a formal statistical framework to investigate similarities and differences in brain functional networks across subjects, e.g., subjects with mental disorders or neurodegenerative diseases such as Parkinson’s as compared to normal subjects. We develop an EM algorithm for model estimation where both the E-step and M-step have explicit forms. We compare the performance of the proposed hierarchical model with that of two popular group ICA methods via simulation studies. We illustrate our method with application to an fMRI study of Zen meditation. PMID:24033125
Denoising, deconvolving, and decomposing photon observations. Derivation of the D3PO algorithm
NASA Astrophysics Data System (ADS)
Selig, Marco; Enßlin, Torsten A.
2015-02-01
The analysis of astronomical images is a non-trivial task. The D3PO algorithm addresses the inference problem of denoising, deconvolving, and decomposing photon observations. Its primary goal is the simultaneous but individual reconstruction of the diffuse and point-like photon flux given a single photon count image, where the fluxes are superimposed. In order to discriminate between these morphologically different signal components, a probabilistic algorithm is derived in the language of information field theory based on a hierarchical Bayesian parameter model. The signal inference exploits prior information on the spatial correlation structure of the diffuse component and the brightness distribution of the spatially uncorrelated point-like sources. A maximum a posteriori solution and a solution minimizing the Gibbs free energy of the inference problem using variational Bayesian methods are discussed. Since the derivation of the solution is not dependent on the underlying position space, the implementation of the D3PO algorithm uses the nifty package to ensure applicability to various spatial grids and at any resolution. The fidelity of the algorithm is validated by the analysis of simulated data, including a realistic high energy photon count image showing a 32 × 32 arcmin2 observation with a spatial resolution of 0.1 arcmin. In all tests the D3PO algorithm successfully denoised, deconvolved, and decomposed the data into a diffuse and a point-like signal estimate for the respective photon flux components. A copy of the code is available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/574/A74
Visual saliency-based fast intracoding algorithm for high efficiency video coding
NASA Astrophysics Data System (ADS)
Zhou, Xin; Shi, Guangming; Zhou, Wei; Duan, Zhemin
2017-01-01
Intraprediction has been significantly improved in high efficiency video coding over H.264/AVC, with a quad-tree-based coding unit (CU) structure from size 64×64 to 8×8 and more prediction modes. However, these techniques cause a dramatic increase in computational complexity. An intracoding algorithm is proposed that consists of a perceptual fast CU size decision algorithm and a fast intraprediction mode decision algorithm. First, based on visual saliency detection, an adaptive and fast CU size decision method is proposed to alleviate intraencoding complexity. Furthermore, a fast intraprediction mode decision algorithm with a step-halving rough mode decision method and an early modes pruning algorithm is presented to selectively check the potential modes and effectively reduce the complexity of computation. Experimental results show that our proposed fast method reduces the computational complexity of the current HM to about 57% in encoding time with only a 0.37% increase in BD rate. Meanwhile, the proposed fast algorithm has reasonable peak signal-to-noise ratio losses and nearly the same subjective perceptual quality.
Method for the decontamination of soil containing solid organic explosives therein
Radtke, Corey W.; Roberto, Francisco F.
2000-01-01
An efficient method for decontaminating soil containing organic explosives ("TNT" and others) in the form of solid portions or chunks which are not ordinarily subject to effective bacterial degradation. The contaminated soil is treated by delivering an organic solvent to the soil which is capable of dissolving the explosives. This process makes the explosives more bioavailable to natural bacteria in the soil which can decompose the explosives. An organic nutrient composition is also preferably added to facilitate decomposition and yield a compost product. After dissolution, the explosives are allowed to remain in the soil until they are decomposed by the bacteria. Decomposition occurs directly in the soil which avoids the need to remove both the explosives and the solvents (which either evaporate or are decomposed by the bacteria). Decomposition is directly facilitated by the solvent pre-treatment process described above which enables rapid bacterial remediation of the soil.
Mizoguchi, T; Ishii, H
1980-06-01
Sulphate in sulphate ores, e.g., alunite, anglesite, barytes, chalcanthite, gypsum, manganese sulphate ore, is reduced to hydrogen sulphide by the hypophosphite-tin metal-CPA method, if a slight modification is made. Sulphide ores, e.g., galena, sphalerite, are quantitatively decomposed with CPA alone to give hydrogen sulphide. Suitable reducing agents must be used for the quantitative recovery of hydrogen sulphide from pyrite, nickel sulphide, cobalt sulphide and cadmium sulphide, or elemental sulphur is liberated. Iodide must be used in the decomposition of chalcopyrite; the copper sulphide is too stable to be decomposed by CPA alone. Molybdenite is not decomposed in CPA even if reducing agents are added. The pretreatment methods for the determination of sulphur in sulphur oxyacids and elemental sulphur have also been investigated.
Kill the song—steal the show: what does distinguish predicative metaphors from decomposable idioms?
Caillies, Stéphanie; Declercq, Christelle
2011-06-01
This study examined the semantic processing difference between decomposable idioms and novel predicative metaphors. It was hypothesized that idiom comprehension results from the retrieval of a figurative meaning stored in memory, that metaphor comprehension requires a sense creation process and that this process difference affects the processing time of idiomatic and metaphoric expressions. In the first experiment, participants read sentences containing decomposable idioms, predicative metaphors or control expressions and performed a lexical decision task on figurative targets presented 0, 350, 500, or 750 ms after reading. Results demonstrated that idiomatic expressions were processed sooner than metaphoric ones. In the second experiment, participants were asked to assess the meaningfulness of idiomatic, metaphoric and literal expressions after reading a verb prime that belongs to the target phrase (identity priming). The results showed that verb identity priming was stronger for idiomatic expressions than for metaphoric ones, indicating different mental representations.
NASA Technical Reports Server (NTRS)
Agah, Arvin; Bekey, George A.
1994-01-01
This paper describes autonomous mobile robot teams performing tasks in unstructured environments. The behavior and the intelligence of the group is distributed, and the system does not include a central command base or leader. The novel concept of the Tropism-Based Cognitive Architecture is introduced, which is used by the robots in order to produce behavior transforming their sensory information to proper action. The results of a number of simulation experiments are presented. These experiments include worlds where the robot teams must locate, decompose, and gather objects, and defend themselves against hostile predators, while navigating around stationary and mobile obstacles.
Monotonicity-based electrical impedance tomography for lung imaging
NASA Astrophysics Data System (ADS)
Zhou, Liangdong; Harrach, Bastian; Seo, Jin Keun
2018-04-01
This paper presents a monotonicity-based spatiotemporal conductivity imaging method for continuous regional lung monitoring using electrical impedance tomography (EIT). The EIT data (i.e. the boundary current-voltage data) can be decomposed into pulmonary, cardiac and other parts using their different periodic natures. The time-differential current-voltage operator corresponding to the lung ventilation can be viewed as either semi-positive or semi-negative definite owing to monotonic conductivity changes within the lung regions. We used these monotonicity constraints to improve the quality of lung EIT imaging. We tested the proposed methods in numerical simulations, phantom experiments and human experiments.
NASA Astrophysics Data System (ADS)
Yamamoto, Takanori; Bannai, Hideo; Nagasaki, Masao; Miyano, Satoru
We present new decomposition heuristics for finding the optimal solution for the maximum-weight connected graph problem, which is known to be NP-hard. Previous optimal algorithms for solving the problem decompose the input graph into subgraphs using heuristics based on node degree. We propose new heuristics based on betweenness centrality measures, and show through computational experiments that our new heuristics tend to reduce the number of subgraphs in the decomposition, and therefore could lead to the reduction in computational time for finding the optimal solution. The method is further applied to analysis of biological pathway data.
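The betweenness centrality measure driving the proposed heuristics can be sketched by brute force on a toy graph. This is not the authors' decomposition algorithm for the maximum-weight connected graph problem; the graph, the 2-node cut, and the path enumeration (BFS distances plus a backward walk, rather than the usual Brandes algorithm) are all illustrative choices.

```python
from collections import deque
from itertools import combinations

# Toy graph (invented): two triangles joined through nodes 2 and 3.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3},
       3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}

def shortest_paths(s, t):
    """Enumerate all shortest s-t paths: BFS distances from s, then a
    backward walk from t that always steps one level closer to s."""
    dist = {s: 0}
    queue = deque([s])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    paths, stack = [], [[t]]
    while stack:
        path = stack.pop()
        u = path[-1]
        if u == s:
            paths.append(path[::-1])
            continue
        for v in adj[u]:
            if dist.get(v) == dist[u] - 1:
                stack.append(path + [v])
    return paths

def betweenness(v):
    """Sum over node pairs of the fraction of shortest paths through v."""
    score = 0.0
    for s, t in combinations(adj, 2):
        if v in (s, t):
            continue
        paths = shortest_paths(s, t)
        score += sum(v in p for p in paths) / len(paths)
    return score

# The bridge nodes 2 and 3 score highest, so a betweenness-guided
# heuristic would decompose the graph there.
print(sorted(adj, key=betweenness, reverse=True)[:2])  # [2, 3]
```

Splitting at high-betweenness nodes tends to produce balanced subgraphs with few crossing edges, which is what reduces the number of subgraphs in the decomposition.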
Self-reduction of a copper complex MOD ink for inkjet printing conductive patterns on plastics.
Farraj, Yousef; Grouchko, Michael; Magdassi, Shlomo
2015-01-31
Highly conductive copper patterns on low-cost flexible substrates are obtained by inkjet printing a metal complex based ink. Upon heating the ink, the soluble complex, which is composed of copper formate and 2-amino-2-methyl-1-propanol, decomposes under nitrogen at 140 °C and is converted to pure metallic copper. The decomposition process of the complex is investigated and a suggested mechanism is presented. The ink is stable in air for prolonged periods, with no sedimentation or oxidation problems, which are usually encountered in copper nanoparticle based inks.
NASA Astrophysics Data System (ADS)
Matamala, R.; Fan, Z.; Jastrow, J. D.; Liang, C.; Calderon, F.; Michaelson, G.; Ping, C. L.; Mishra, U.; Hofmann, S. M.
2016-12-01
The large amounts of organic matter stored in permafrost-region soils are preserved in a relatively undecomposed state by the cold and wet environmental conditions limiting decomposer activity. With pending climate changes and the potential for warming of Arctic soils, there is a need to better understand the amount and potential susceptibility to mineralization of the carbon stored in the soils of this region. Studies have suggested that soil C:N ratio or other indicators based on the molecular composition of soil organic matter could be good predictors of potential decomposability. In this study, we investigated the capability of Fourier-transform mid-infrared (MidIR) spectroscopy to predict the evolution of carbon dioxide (CO2) produced by Arctic tundra soils during a 60-day laboratory incubation. Soils collected from four tundra sites on the Coastal Plain and Arctic Foothills of the North Slope of Alaska were separated into active-layer organic, active-layer mineral, and upper permafrost and incubated at 1, 4, 8 and 16 °C. Carbon dioxide production was measured throughout the incubations. Total soil organic carbon (SOC) and total nitrogen (TN) concentrations, salt (0.5 M K2SO4) extractable organic matter (SEOM), and MidIR spectra of the soils were measured before and after incubation. Multivariate partial least squares (PLS) modeling was used to predict cumulative CO2 production, decay rates, and the other measurements. MidIR reliably estimated SOC, TN, and SEOM concentrations. The MidIR prediction models of CO2 production were very good for active-layer mineral and upper permafrost soils and good for the active-layer organic soils. SEOM was also a very good predictor of CO2 produced during the incubations. Analysis of the standardized beta coefficients from the PLS models of CO2 production for the three soil layers indicated a small number (9) of influential spectral bands. Of these, bands associated with O-H and N-H stretch, carbonates, and ester C-O appeared to be most important for predicting CO2 production for both active-layer mineral and upper permafrost soils. Further analysis of these influential bands and their relationships to SEOM in soil will be explored. Our results show that the MidIR spectra contain valuable information that can be related to the decomposability of soils.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, S; Kang, S; Eom, J
Purpose: Photon-counting detectors (PCDs) allow multi-energy X-ray imaging without additional exposures and spectral overlap. This capability results in the improvement of accuracy of material decomposition for dual-energy X-ray imaging and the reduction of radiation dose. In this study, the PCD-based contrast-enhanced dual-energy mammography (CEDM) was compared with the conventional CEDM in terms of radiation dose, image quality and accuracy of material decomposition. Methods: A dual-energy model was designed by using Beer-Lambert's law and a rational inverse fitting function for decomposing materials from a polychromatic X-ray source. A cadmium zinc telluride (CZT)-based PCD, which has five energy thresholds, and iodine solutions included in a 3D half-cylindrical phantom, which was composed of 50% glandular and 50% adipose tissues, were simulated by using a Monte Carlo simulation tool. The low- and high-energy images were obtained in accordance with the clinical exposure conditions for the conventional CEDM. Energy bins of 20–33 and 34–50 keV were defined from X-ray energy spectra simulated at 50 kVp with different dose levels for implementing the PCD-based CEDM. The dual-energy mammographic techniques were compared by means of absorbed dose, noise property and normalized root-mean-square error (NRMSE). Results: Compared to the conventional CEDM, the iodine solutions were clearly decomposed for the PCD-based CEDM. Although the radiation dose for the PCD-based CEDM was lower than that for the conventional CEDM, the PCD-based CEDM improved the noise property and accuracy of decomposition images. Conclusion: This study demonstrates that the PCD-based CEDM allows quantitative material decomposition and reduces radiation dose in comparison with the conventional CEDM. Therefore, the PCD-based CEDM is able to provide useful information for detecting breast tumors and enhancing diagnostic accuracy in mammography.
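In the idealized monochromatic two-material case, the material-decomposition step of dual-energy imaging reduces to a 2x2 linear system from Beer-Lambert's law. The sketch below uses made-up attenuation coefficients and thicknesses, not the simulated CZT spectra or phantom of the study.

```python
def decompose_two_materials(logs, mu):
    """Solve the 2x2 Beer-Lambert system
        L_E = mu1(E)*t1 + mu2(E)*t2   for E in ("low", "high")
    for the material thicknesses (t1, t2), where L_E is the measured
    log attenuation and mu[E] = (mu1, mu2)."""
    (a, b), (c, d) = mu["low"], mu["high"]
    det = a * d - b * c
    t1 = (logs["low"] * d - logs["high"] * b) / det
    t2 = (logs["high"] * a - logs["low"] * c) / det
    return t1, t2

# Illustrative (made-up) coefficients for an iodine/tissue pair, in 1/cm.
mu = {"low": (3.0, 0.5), "high": (1.2, 0.3)}
true_t = (0.2, 4.0)  # cm of iodine, cm of tissue (invented)
logs = {E: mu[E][0] * true_t[0] + mu[E][1] * true_t[1] for E in mu}
print(decompose_two_materials(logs, mu))  # recovers t1 ≈ 0.2, t2 ≈ 4.0
```

With real polychromatic spectra the linear system is replaced by the rational inverse fitting the abstract mentions, and noise in the two measurements propagates through the matrix inverse, which is why the noise property of the detector matters.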
King, C. Judson; Husson, Scott M.
1999-01-01
Carboxylic acids are sorbed from aqueous feedstocks onto a solid adsorbent. The acids are freed from the sorbent phase by treating it with an organic solution of alkylamine, thus forming an alkylamine/carboxylic acid complex, which is decomposed with improved efficiency to the desired carboxylic acid and the alkylamine. Carbon dioxide addition can be used to improve the adsorption of the carboxylic acids by the solid-phase sorbent.
Separated Component-Based Restoration of Speckled SAR Images
2014-01-01
One of the simplest approaches for speckle noise reduction is known as multi-look processing. It involves non-coherently summing the independent...image is assumed to be piecewise smooth [21], [22], [23]. It has been shown that TV regularization often yields images with the stair-casing effect...as a function f, is to be decomposed into a sum of two components f = u + v, where u represents the cartoon or geometric (i.e. piecewise smooth
Deterministic representation of chaos with application to turbulence
NASA Technical Reports Server (NTRS)
Zak, M.
1987-01-01
Chaotic motions of nonlinear dynamical systems are decomposed into mean components and fluctuations. The approach is based upon the concept that the fluctuations driven by the instability of the original (unperturbed) motion grow until a new stable state is approached. The Reynolds-type equations written for continuous as well as for finite-degrees-of-freedom dynamical systems are closed by using this stabilization principle. The theory is applied to conservative systems, to strange attractors and to turbulent motions.
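The mean/fluctuation split applied to chaotic motions can be illustrated on a simple chaotic system. The logistic map below is our stand-in example (the paper treats continuous and finite-degrees-of-freedom dynamical systems); the orbit is decomposed exactly into a mean component plus zero-mean fluctuations, the starting point of any Reynolds-type averaging.

```python
def logistic_orbit(x0=0.3, r=4.0, n=5000, burn=100):
    """Chaotic orbit of the logistic map x -> r*x*(1 - x), after
    discarding a transient of `burn` iterations."""
    x = x0
    for _ in range(burn):
        x = r * x * (1 - x)
    orbit = []
    for _ in range(n):
        x = r * x * (1 - x)
        orbit.append(x)
    return orbit

orbit = logistic_orbit()
mean = sum(orbit) / len(orbit)     # the "mean motion" component
fluct = [x - mean for x in orbit]  # fluctuations driven by instability

# The split is exact (x = mean + fluctuation) and the fluctuations
# average to zero by construction, as in a Reynolds decomposition.
print(round(mean, 3))
```

Closing the resulting Reynolds-type equations, i.e. expressing fluctuation statistics in terms of the mean, is where the paper's stabilization principle enters.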
Working Papers in Speech Recognition. IV. The Hearsay II System
1976-02-01
implementation of this model (Reddy, Erman, and Neely [73]; Reddy, Erman, Fennell, and Neely [73]; Neely [73]; Erman [74]). This system, which was the... Fennell, Erman, and Reddy [74]). Hearsay II is also based on the Hearsay model: it generalizes and extends many of the concepts which exist in a...difficulty of decomposing large problems for such machines. Erman, Fennell, Lesser, and Reddy [73] describe this problem and outline some early solutions
First-principles calculated decomposition pathways for LiBH4 nanoclusters
Huang, Zhi-Quan; Chen, Wei-Chih; Chuang, Feng-Chuan; Majzoub, Eric H.; Ozoliņš, Vidvuds
2016-01-01
We analyze thermodynamic stability and decomposition pathways of LiBH4 nanoclusters using grand-canonical free-energy minimization based on total energies and vibrational frequencies obtained from density-functional theory (DFT) calculations. We consider (LiBH4)n nanoclusters with n = 2 to 12 as reactants, while the possible products include (Li)n, (B)n, (LiB)n, (LiH)n, and Li2BnHn; off-stoichiometric LinBnHm (m ≤ 4n) clusters were considered for n = 2, 3, and 6. Cluster ground-state configurations have been predicted using prototype electrostatic ground-state (PEGS) and genetic algorithm (GA) based structural optimizations. Free-energy calculations show hydrogen release pathways markedly differ from those in bulk LiBH4. While experiments have found that the bulk material decomposes into LiH and B, with Li2B12H12 as a kinetically inhibited intermediate phase, (LiBH4)n nanoclusters with n ≤ 12 are predicted to decompose into mixed LinBn clusters via a series of intermediate clusters of LinBnHm (m ≤ 4n). The calculated pressure-composition isotherms and temperature-pressure isobars exhibit sloping plateaus due to finite size effects on reaction thermodynamics. Generally, decomposition temperatures of free-standing clusters are found to increase with decreasing cluster size due to thermodynamic destabilization of reaction products. PMID:27189731
First-principles calculated decomposition pathways for LiBH 4 nanoclusters
Huang, Zhi -Quan; Chen, Wei -Chih; Chuang, Feng -Chuan; ...
2016-05-18
Here, we analyze thermodynamic stability and decomposition pathways of LiBH 4 nanoclusters using grand-canonical free-energy minimization based on total energies and vibrational frequencies obtained from density-functional theory (DFT) calculations. We consider (LiBH 4) n nanoclusters with n = 2 to 12 as reactants, while the possible products include (Li) n, (B) n, (LiB) n, (LiH) n, and Li 2B nH n; off-stoichiometric LinBnHm (m ≤ 4n) clusters were considered for n = 2, 3, and 6. Cluster ground-state configurations have been predicted using prototype electrostatic ground-state (PEGS) and genetic algorithm (GA) based structural optimizations. Free-energy calculations show hydrogen release pathwaysmore » markedly differ from those in bulk LiBH 4. While experiments have found that the bulk material decomposes into LiH and B, with Li 2B 12H 12 as a kinetically inhibited intermediate phase, (LiBH 4) n nanoclusters with n ≤ 12 are predicted to decompose into mixed Li nB n clusters via a series of intermediate clusters of Li nB nH m (m ≤ 4n). The calculated pressure-composition isotherms and temperature-pressure isobars exhibit sloping plateaus due to finite size effects on reaction thermodynamics. Generally, decomposition temperatures of free-standing clusters are found to increase with decreasing cluster size due to thermodynamic destabilization of reaction products.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
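The sloping isobars and plateau pressures described above follow from the van 't Hoff relation; a minimal sketch, with illustrative (not fitted) thermodynamic values for a hydride-like reaction:

```python
import math

R = 8.314  # gas constant, J/(mol K)

def plateau_pressure(dH, dS, T):
    """Equilibrium H2 pressure (bar) from the van 't Hoff relation:
    ln(P/P0) = -dH/(R*T) + dS/R, with P0 = 1 bar."""
    return math.exp(-dH / (R * T) + dS / R)

def decomposition_temperature(dH, dS):
    """Temperature (K) where the plateau pressure reaches 1 bar: T = dH/dS."""
    return dH / dS

# Illustrative values per mole of released H2 (not from the paper)
dH, dS = 67e3, 97.0  # J/mol, J/(mol K)
Td = decomposition_temperature(dH, dS)
```

Size-dependent destabilization of the products shifts dH and dS, which is how the finite-size effects translate into sloping plateaus and higher decomposition temperatures for smaller clusters.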
NASA Astrophysics Data System (ADS)
Holobar, A.; Minetto, M. A.; Farina, D.
2014-02-01
Objective. A signal-based metric for assessment of accuracy of motor unit (MU) identification from high-density surface electromyograms (EMG) is introduced. This metric, the so-called pulse-to-noise ratio (PNR), is computationally efficient, incurs no additional experimental cost and can be applied to every MU that is identified by the previously developed convolution kernel compensation technique. Approach. The analytical derivation of the newly introduced metric is provided, along with its extensive experimental validation on both synthetic and experimental surface EMG signals with signal-to-noise ratios ranging from 0 to 20 dB and muscle contraction forces from 5% to 70% of the maximum voluntary contraction. Main results. In all the experimental and simulated signals, the newly introduced metric correlated significantly with both sensitivity and false alarm rate in identification of MU discharges. Practically all the MUs with PNR > 30 dB exhibited sensitivity >90% and false alarm rates <2%. Therefore, a threshold of 30 dB in PNR can be used as a simple method for selecting only reliably decomposed units. Significance. The newly introduced metric is considered a robust and reliable indicator of accuracy of MU identification. The study also shows that high-density surface EMG can be reliably decomposed at contraction forces as high as 70% of the maximum.
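The general shape of such a metric (pulse energy of the estimated innervation pulse train vs. baseline energy, in dB) can be sketched as follows; this is an illustrative reading, not Holobar et al.'s exact estimator, and all names are hypothetical:

```python
import numpy as np

def pulse_to_noise_ratio(ipt, pulse_idx):
    """PNR in dB: mean squared amplitude of the innervation pulse train
    at the identified discharge instants, relative to the mean squared
    amplitude everywhere else (treated as baseline noise)."""
    ipt = np.asarray(ipt, float)
    mask = np.zeros(ipt.size, bool)
    mask[pulse_idx] = True
    signal = np.mean(ipt[mask] ** 2)
    noise = np.mean(ipt[~mask] ** 2)
    return 10.0 * np.log10(signal / noise)

# Synthetic pulse train: unit-height discharges on a weak noise floor
rng = np.random.default_rng(0)
ipt = 0.01 * rng.standard_normal(5000)
pulses = np.arange(50, 5000, 100)   # regular discharge instants
ipt[pulses] = 1.0
pnr = pulse_to_noise_ratio(ipt, pulses)
```

A clean train like this scores well above the 30 dB acceptance threshold quoted above; as the baseline noise grows, the PNR falls.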
Using pattern based layout comparison for a quick analysis of design changes
NASA Astrophysics Data System (ADS)
Huang, Lucas; Yang, Legender; Kan, Huan; Zou, Elain; Wan, Qijian; Du, Chunshan; Hu, Xinyi; Liu, Zhengfang
2018-03-01
A design usually goes through several versions before reaching its most successful one. The changes between versions are not a complete substitution but a continual improvement, either fixing known issues of a prior version (an engineering change order) or substituting a more optimized design for a portion of the layout. On the manufacturing side, process engineers care most about design pattern changes, because any new pattern occurrence may be a yield killer. An effective and efficient way to narrow down the diagnosis scope is therefore valuable. What is the best approach to comparing two layouts? A direct overlay of two layouts may not always work: even though most design instances are kept from version to version, their actual placements may differ. An alternative, pattern based layout comparison, comes into play. By expanding this application, learning from one cycle can be transferred to another, accelerating failure analysis. This paper presents a solution that compares two layouts using Calibre DRC and Pattern Matching. The key step in this flow is layout decomposition. In theory, with a fixed pattern size, a layout can always be decomposed into a finite number of patterns by moving the pattern center around the layout; the number is finite but can be enormous if the layout is not processed smartly. A purely mathematical answer is not what we are looking for; an engineering solution is desired: layouts must be decomposed into patterns with physical meaning. When a layout is decomposed and its patterns are classified, a pattern library containing the unique patterns is created for that layout. After individual pattern libraries for each layout are created, the pattern comparison utility provided by Calibre Pattern Matching compares the libraries and reports the patterns unique to each layout.
This paper illustrates this flow in detail and demonstrates the advantage of combining Calibre DRC and Calibre Pattern Matching.
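A toy version of the decompose-classify-compare flow, hashing fixed-size windows of a rasterized layout into a pattern library (no relation to Calibre's actual internals), might look like:

```python
import numpy as np

def pattern_library(layout, size):
    """Slide a size x size window over a rasterized layout and collect
    the set of unique patterns (hashed via their raw bytes). A toy
    stand-in for the pattern decomposition and classification step."""
    lib = set()
    h, w = layout.shape
    for i in range(h - size + 1):
        for j in range(w - size + 1):
            lib.add(layout[i:i + size, j:j + size].tobytes())
    return lib

# Two toy 'layout versions': v2 keeps v1's features and adds one more,
# so new patterns appear in v2's library.
v1 = np.zeros((8, 8), dtype=np.uint8)
v1[2:4, 2:6] = 1                    # original feature
v2 = v1.copy()
v2[6, 1:5] = 1                      # ECO-style added feature
lib1, lib2 = pattern_library(v1, 3), pattern_library(v2, 3)
new_in_v2 = lib2 - lib1             # the diagnosis-relevant diff
```

The set difference plays the role of the pattern comparison utility: only the patterns unique to each version need to be inspected.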
Fungal decomposers of leaf litter from an invaded and native mountain forest of NW Argentina.
Fernandez, Romina Daiana; Bulacio, Natalia; Álvarez, Analía; Pajot, Hipólito; Aragón, Roxana
2017-09-01
The impact of plant species invasions on the abundance, composition and activity of fungal decomposers of leaf litter is poorly understood. In this study, we isolated and compared the relative abundance of ligninocellulolytic fungi of leaf litter mixtures from a native forest and a forest invaded by Ligustrum lucidum in a lower mountain forest of Tucuman, Argentina. In addition, we evaluated the relationship between the relative abundance of ligninocellulolytic fungi and properties of the soil of both forest types. Finally, we identified lignin degrading fungi and characterized their polyphenol oxidase activities. The relative abundance of ligninocellulolytic fungi was higher in leaf litter mixtures from the native forest. The abundance of cellulolytic fungi was negatively related to soil pH, while the abundance of ligninolytic fungi was positively related to soil humidity. We identified fifteen genera of ligninolytic fungi; four strains were isolated from both forest types, six strains only from the invaded forest and five strains only from the native forest. The results of this study suggest that L. lucidum invasion could alter the abundance and composition of fungal decomposers. Long-term studies that include an analysis of the nutritional quality of litter are needed for a more complete overview of the influence of L. lucidum invasion on fungal decomposers and on leaf litter decomposition.
Nordmann, Emily; Cleland, Alexandra A; Bull, Rebecca
2014-06-01
To date, there have been several attempts to build a database of normative data for English idiomatic expressions (e.g., Libben & Titone, 2008; Titone & Connine, 1994); however, there has been some discussion in the literature as to the validity and reliability of the data obtained, particularly for decomposability ratings. Our work aimed to address these issues by looking at ratings from native and non-native speakers and to extend the deeper investigation and analysis of decomposability to other aspects of idiomatic expressions, namely familiarity, meaning and literality. Poor reliability was observed on all types of ratings, suggesting that rather than decomposability being a special case, individual variability plays a large role in how participants rate idiomatic phrases in general. Ratings from native and non-native speakers were positively correlated, and an analysis of covariance found that once familiarity with an idiom was accounted for, most of the differences between native and non-native ratings were not significant. Overall, the results suggest that individual experience with idioms plays an important role in how they are perceived, and this should be taken into account when selecting stimuli for experimental studies. Furthermore, the results are suggestive of the inability of speakers to inhibit the figurative meanings of idioms that they are highly familiar with. Copyright © 2014 Elsevier B.V. All rights reserved.
A Kullback-Leibler approach for 3D reconstruction of spectral CT data corrupted by Poisson noise
NASA Astrophysics Data System (ADS)
Hohweiller, Tom; Ducros, Nicolas; Peyrin, Françoise; Sixou, Bruno
2017-09-01
While standard computed tomography (CT) data do not depend on energy, spectral computed tomography (SPCT) acquires energy-resolved data, which allows material decomposition of the object of interest. Decompositions in the projection domain yield a projection mass density (PMD) per material. From the decomposed projections, a tomographic reconstruction creates a 3D material density volume. The decomposition is made possible by minimizing a cost function; a variational approach is preferred since this is an ill-posed non-linear inverse problem. Moreover, noise plays a critical role when decomposing the data, which is why in this paper a new data fidelity term is used to account for the photon noise. In this work two data fidelity terms were investigated: a weighted least squares (WLS) term, adapted to Gaussian noise, and the Kullback-Leibler distance (KL), adapted to Poisson noise. A regularized Gauss-Newton algorithm minimizes the cost function iteratively. Both methods were used to decompose materials from a numerical phantom of a mouse. Soft tissues and bones are decomposed in the projection domain; a tomographic reconstruction then creates a 3D material density volume for each material. Comparing relative errors, KL is shown to outperform WLS for low photon counts, in 2D and 3D. This new method could be of particular interest when low-dose acquisitions are performed.
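The two data fidelity terms can be written down directly; a minimal sketch of the WLS term and the KL term in its standard Poisson form (not the paper's full Gauss-Newton pipeline):

```python
import numpy as np

def wls(y, m, w):
    """Weighted least-squares data fidelity, adapted to Gaussian noise:
    0.5 * sum_i w_i * (y_i - m_i)^2 for counts y and model mean m."""
    return 0.5 * np.sum(w * (y - m) ** 2)

def kl(y, m):
    """Kullback-Leibler data fidelity for Poisson counts y and model
    mean m: sum_i (m_i - y_i + y_i*log(y_i/m_i)), with 0*log(0) = 0."""
    y = np.asarray(y, float)
    m = np.asarray(m, float)
    pos = y > 0
    t = np.zeros_like(y)
    t[pos] = y[pos] * np.log(y[pos] / m[pos])
    return np.sum(m - y + t)

y = np.array([3.0, 0.0, 10.0])   # toy photon counts, including a zero bin
```

Both terms vanish when the model matches the data exactly; the KL term additionally penalizes low-count bins in a way that matches Poisson statistics, which is the advantage exploited at low photon counts.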
Zhang, Lu; Peng, Xue; Liu, Biyun; Zhang, Yi; Zhou, Qiaohong; Wu, Zhenbin
2018-08-15
Excessive proliferation of filamentous green algae (FGA) has been considered an important factor resulting in the poor growth or even decline of submerged macrophytes. However, there is a lack of detailed information regarding the effect of decaying FGA on submerged macrophytes. This study aimed to investigate whether the decomposing liquid from Cladophora oligoclona negatively affects Hydrilla verticillata turion germination and seedling growth. The results showed that the highest-concentration decomposing liquid treatment inhibited the turion germination rate, which was the lowest among all treatments at only 84%. The chlorophyll a fluorescence (JIP test) and physiological indicators (chlorophyll a content, soluble sugars, Ca2+/Mg2+-ATPase and PAL activity) were also measured. The chlorophyll a content in the highest-concentration (40% of the original decomposing liquid) treatment group decreased by 43.53% relative to the control; however, soluble sugars, Ca2+/Mg2+-ATPase, and PAL activity increased by 172.46%, 271.19%, and 26.43%, respectively. The overall results indicated that FGA decay has a considerable effect on submerged macrophyte turion germination and seedling growth, which could inhibit their expansion and reproduction. This study emphasized the need to focus on the effects of FGA decomposition on the early growth stages of submerged macrophytes and offered technological guidance for submerged vegetation restoration in lakes and shallow waters. Copyright © 2018 Elsevier Inc. All rights reserved.
FastChem: An ultra-fast equilibrium chemistry
NASA Astrophysics Data System (ADS)
Kitzmann, Daniel; Stock, Joachim
2018-04-01
FastChem is an equilibrium chemistry code that calculates the chemical composition of the gas phase for given temperatures and pressures. Written in C++, it is based on a semi-analytic approach, and is optimized for extremely fast and accurate calculations.
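Setting FastChem's internals aside, the underlying law-of-mass-action problem admits a closed form in the simplest dissociation case; a sketch of that toy case (not FastChem's actual API):

```python
import math

def dissociation_fraction(K, P):
    """Degree of dissociation alpha for A2 <=> 2A at total pressure P
    (bar) from the law of mass action. With mole fractions
    x_A = 2a/(1+a), x_A2 = (1-a)/(1+a), the equilibrium constant is
    K = x_A^2 * P / x_A2 = 4a^2 P / (1 - a^2),
    which inverts to a = sqrt(K / (K + 4P)).
    A one-reaction toy case of the coupled mass-action systems an
    equilibrium chemistry code solves for a full gas mixture."""
    return math.sqrt(K / (K + 4.0 * P))
```

The closed form makes the qualitative behavior explicit: dissociation grows with K (higher temperature, typically) and is suppressed by pressure, exactly the trade-off an equilibrium solver resolves simultaneously for hundreds of species.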
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bortolaz de Oliveira, Henrique; Wypych, Fernando, E-mail: wypych@ufpr.br
Layered zinc hydroxide nitrate (ZnHN) and Zn/Ni layered double hydroxide salts were synthesized and used to remove chromate ions from solutions at pH 8.0. The materials were characterized by many instrumental techniques before and after chromate ion removal. ZnHN decomposed after contact with the chromate solution, whereas Zn/Ni hydroxide nitrate (Zn/NiHN) and Zn/Ni hydroxide acetate (Zn/NiHA) kept their layers intact after the topotactic anionic exchange reaction, only changing their basal distances. ZnHN, Zn/NiHN, and Zn/NiHA removed 210.1, 144.8, and 170.1 mg of CrO4^2-/g of material, respectively. Although the removal values obtained for Zn/NiHN and Zn/NiHA were smaller than the values predicted for the ideal formulas of the solids (194.3 and 192.4 mg of CrO4^2-/g of material, respectively), the measured capacities were higher than the values achieved with many materials reported in the literature. Kinetic experiments showed the removal reaction was fast. To facilitate the solid/liquid separation process after chromium removal, Zn/Ni layered double hydroxide salts with magnetic supports were also synthesized, and their ability to remove chromate was evaluated. - Highlights: • Zinc hydroxide nitrate and Zn/Ni hydroxide nitrate or acetate were synthesized. • The interlayer anions were replaced by chromate anions at pH 8.0. • Only Zn/Ni hydroxide nitrate or acetate have their structure preserved after exchange. • Fast exchange reaction and high chromate removal capacity were observed. • Magnetic materials were obtained to facilitate removal of the solids from the solutions.
Prymont-Przyminska, Anna; Zwolinska, Anna; Sarniak, Agata; Wlodarczyk, Anna; Krol, Maciej; Nowak, Michal; de Graft-Johnson, Jeffrey; Padula, Gianluca; Bialasiewicz, Piotr; Markowski, Jaroslaw; Rutkowski, Krzysztof P.; Nowak, Dariusz
2014-01-01
Strawberries contain anthocyanins and ellagitannins, which have antioxidant properties. We determined whether the consumption of strawberries increases the plasma antioxidant activity measured as the ability to decompose the 2,2-diphenyl-1-picrylhydrazyl radical (DPPH) in healthy subjects. The study involved 10 volunteers (age 41 ± 6 years, body weight 74.4 ± 12.7 kg) that consumed 500 g of strawberries daily for 9 days and 7 matched controls. Fasting plasma and spot morning urine samples were collected at baseline, during fruit consumption and after a 6 day wash-out period. DPPH decomposition was measured in deproteinized plasma specimens, both native and pretreated with uricase (non-urate plasma). Twelve phenolics were determined with HPLC. Strawberries had no effect on the antioxidant activity of native plasma or on circulating phenolics. Non-urate plasma DPPH decomposition increased from 5.7 ± 0.6% to 6.6 ± 0.6%, 6.5 ± 1.0% and 6.3 ± 1.4% after 3, 6 and 9 days of supplementation, respectively. The wash-out period reversed this activity back to 5.7 ± 0.8% (p<0.01). Control subjects did not reveal any changes of plasma antioxidant activity. Significant increases in urinary urolithin A and 4-hydroxyhippuric acid (by 8.7 and 5.9 times after 6 days of supplementation with fruits) were noted. Strawberry consumption can increase the non-urate plasma antioxidant activity which, in turn, may decrease the risk of systemic oxidant overactivity. PMID:25120279
Starling, Anne P; Adgate, John L; Hamman, Richard F; Kechris, Katerina; Calafat, Antonia M; Ye, Xiaoyun; Dabelea, Dana
2017-06-26
Certain perfluoroalkyl and polyfluoroalkyl substances (PFAS) are widespread, persistent environmental contaminants. Prenatal PFAS exposure has been associated with lower birth weight; however, impacts on body composition and factors responsible for this association are unknown. We aimed to estimate associations between maternal PFAS concentrations and offspring weight and adiposity at birth, and secondarily to estimate associations between PFAS concentrations and maternal glucose and lipids, and to evaluate the potential for these nutrients to mediate associations between PFAS and neonatal outcomes. Within the Healthy Start prospective cohort, concentrations of 11 PFAS, fasting glucose, and lipids were measured in maternal mid-pregnancy serum (n=628). Infant body composition was measured using air displacement plethysmography. Associations between PFAS and birth weight and adiposity, and between PFAS and maternal glucose and lipids, were estimated via linear regression. Associations were decomposed into direct and indirect effects. Five PFAS were detectable in >50% of participants. Maternal perfluorooctanoate (PFOA) and perfluorononanoate (PFNA) concentrations were inversely associated with birth weight. Adiposity at birth was approximately 10% lower in the highest categories of PFOA, PFNA, and perfluorohexane sulfonate (PFHxS) compared to the lowest categories. PFOA, PFNA, perfluorodecanoate (PFDeA), and PFHxS were inversely associated with maternal glucose. Up to 11.6% of the effect of PFAS on neonatal adiposity was mediated by maternal glucose concentrations. Perfluorooctane sulfonate (PFOS) was not significantly associated with any outcomes studied. Follow-up of offspring will determine the potential long-term consequences of lower weight and adiposity at birth associated with prenatal PFAS exposure. https://doi.org/10.1289/EHP641.
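In the linear case, the direct/indirect decomposition reported above is the classical product-of-coefficients mediation identity; a sketch on simulated data, where the variable roles (exposure, mediator, outcome) are illustrative rather than the study's actual models:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
x = rng.standard_normal(n)                      # exposure (e.g., a PFAS level)
m = 0.5 * x + rng.standard_normal(n)            # mediator (e.g., maternal glucose)
y = 0.3 * m - 0.2 * x + rng.standard_normal(n)  # outcome (e.g., adiposity)

def slope(u, v):
    """OLS slope of v on u (single centered predictor)."""
    u = u - u.mean()
    v = v - v.mean()
    return float(u @ v / (u @ u))

a = slope(x, m)                   # exposure -> mediator path
total = slope(x, y)               # total effect of exposure on outcome
# Direct effect and mediator path: coefficients of x and m in y ~ 1 + x + m
X = np.column_stack([np.ones(n), x, m])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
direct, b = float(beta[1]), float(beta[2])
indirect = a * b                  # product-of-coefficients estimate
```

For nested OLS models this decomposition is an algebraic identity, total = direct + indirect, so the "proportion mediated" is simply indirect/total.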
Simplifying the complexity of a coupled carbon turnover and pesticide degradation model
NASA Astrophysics Data System (ADS)
Marschmann, Gianna; Erhardt, André H.; Pagel, Holger; Kügler, Philipp; Streck, Thilo
2016-04-01
The mechanistic one-dimensional model PECCAD (PEsticide degradation Coupled to CArbon turnover in the Detritusphere; Pagel et al. 2014, Biogeochemistry 117, 185-204) has been developed as a tool to elucidate regulation mechanisms of pesticide degradation in soil. A feature of this model is that it integrates functional traits of microorganisms, identifiable by molecular tools, and physicochemical processes such as transport and sorption that control substrate availability. Predicting the behavior of microbially active interfaces demands a fundamental understanding of the factors controlling their dynamics. Concepts from dynamical systems theory allow us to study general properties of the model such as its qualitative behavior, intrinsic timescales and dynamic stability. Using a Latin hypercube method, we sampled the parameter space for physically realistic steady states of the PECCAD ODE system and set up a numerical continuation and bifurcation problem with the open-source toolbox MatCont in order to obtain a complete classification of the dynamical system's behavior. Bifurcation analysis reveals an equilibrium state of the system entirely controlled by fungal kinetic parameters. The equilibrium is generally unstable in response to small perturbations, except for a small band in parameter space where the pesticide pool is stable. Time scale separation is a phenomenon that occurs in almost every complex open physical system. Motivated by the notion of "initial-stage" and "late-stage" decomposers and the concept of r-, K- or L-selected microbial life strategies, we test the applicability of geometric singular perturbation theory to identify fast and slow time scales of PECCAD. Revealing a generic fast-slow structure would greatly simplify the analysis of complex models of organic matter turnover by reducing the number of unknowns and parameters and providing a systematic mathematical framework for studying their properties.
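The Latin hypercube sampling step can be sketched generically (the actual PECCAD parameter ranges are problem-specific and not reproduced here):

```python
import numpy as np

def latin_hypercube(n, d, rng):
    """n samples in [0,1]^d with exactly one sample per 1/n stratum
    along each dimension: place one point per stratum per column,
    then shuffle each column independently."""
    samples = (rng.random((n, d)) + np.arange(n)[:, None]) / n
    for k in range(d):
        rng.shuffle(samples[:, k])  # in-place shuffle of column k
    return samples

# 10 samples of a 3-parameter space; rescale columns to physical
# parameter ranges afterwards as needed.
pts = latin_hypercube(10, 3, np.random.default_rng(42))
```

Compared with plain uniform sampling, this guarantees every marginal stratum is visited once, which is why it is the usual choice for seeding steady-state searches in moderate-dimensional parameter spaces.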
MHD Wave Propagation at the Interface Between Solar Chromosphere and Corona
NASA Astrophysics Data System (ADS)
Huang, Y.; Song, P.; Vasyliunas, V. M.
2017-12-01
We study the electromagnetic and momentum constraints at the solar transition region, the sharp layer interfacing between the solar chromosphere and corona. When mass transfer between the two domains is neglected, the transition region can be treated as a contact discontinuity across which the magnetic flux is conserved and the total forces are balanced. We consider an Alfvénic perturbation that propagates along the magnetic field and is incident onto the interface from one side. In order to satisfy the boundary conditions at the transition region, only part of the incident energy flux is transmitted through; the rest is reflected. Taking into account the highly anisotropic propagation of waves in magnetized plasmas, we generalize the law of reflection and specify Snell's law for each of the three MHD wave modes: the incompressible Alfvén mode and the compressible fast and slow modes. Unlike conventional optical systems, the interface between two magnetized plasmas is not rigid but can be deformed by the waves, allowing momentum and energy to be transferred by compression. With compressible modes included, the Fresnel conditions need substantial modification. We derive Fresnel conditions, reflectivities and transmittances, and mode conversion for incident waves propagating along the background magnetic field. The results are well organized when the incident perturbation is decomposed into components in and normal to the incident plane (containing the background magnetic field and the normal direction of the interface). For a perturbation normal to the incident plane, both transmitted and reflected perturbations are incompressible Alfvén mode waves. For a perturbation in the incident plane, they can be compressible slow and fast mode waves, which may produce ripples on the transition region.
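For the special case of an Alfvén wave normally incident on a sharp density jump, the textbook impedance-matching result already exhibits the partial transmission described above; a sketch of that limit (not the paper's full anisotropic treatment):

```python
def alfven_reflection(vA1, vA2):
    """Velocity-amplitude reflection r and transmission t for an Alfven
    wave normally incident from medium 1 onto a sharp density jump
    (same B on both sides, so vA ~ 1/sqrt(rho)). Matching velocity and
    magnetic perturbations at the interface gives, with X = vA1/vA2:
        r = (1 - X) / (1 + X),   t = 2 / (1 + X).
    Energy conservation then reads 1 - r^2 = X * t^2."""
    X = vA1 / vA2
    r = (1.0 - X) / (1.0 + X)
    t = 2.0 / (1.0 + X)
    return r, t
```

Crossing from a dense chromosphere into a tenuous corona means vA2 >> vA1, so r approaches 1 and most of the incident energy flux is reflected, consistent with the partial transmission the abstract describes.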
DOE Office of Scientific and Technical Information (OSTI.GOV)
Niu, T; Dong, X; Petrongolo, M
Purpose: Dual energy CT (DECT) imaging plays an important role in advanced imaging applications due to its material decomposition capability. Direct decomposition via matrix inversion suffers from significant degradation of image signal-to-noise ratios, which reduces clinical value. Existing de-noising algorithms achieve suboptimal performance since they suppress image noise either before or after the decomposition and do not fully explore the noise statistical properties of the decomposition process. We propose an iterative image-domain decomposition method for noise suppression in DECT, using the full variance-covariance matrix of the decomposed images. Methods: The proposed algorithm is formulated in the form of least-squares estimation with smoothness regularization. It includes the inverse of the estimated variance-covariance matrix of the decomposed images as the penalty weight in the least-squares term. Performance is evaluated using an evaluation phantom (Catphan 600) and an anthropomorphic head phantom. Results are compared to those generated using direct matrix inversion with no noise suppression, a de-noising method applied on the decomposed images, and an existing algorithm with a similar formulation but with an edge-preserving regularization term. Results: On the Catphan phantom, our method retains the same spatial resolution as the CT images before decomposition while reducing the noise standard deviation of the decomposed images by over 98%. The other methods either degrade spatial resolution or achieve less low-contrast detectability. Also, our method yields lower electron density measurement error than direct matrix inversion and reduces error variation by over 97%. On the head phantom, it reduces the noise standard deviation of the decomposed images by over 97% without blurring the sinus structures. Conclusion: We propose an iterative image-domain decomposition method for DECT.
The method combines noise suppression and material decomposition into an iterative process and achieves both goals simultaneously. The proposed algorithm shows superior performance on noise suppression with high image spatial resolution and low-contrast detectability. This work is supported by a Varian MRA grant.
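The role of the variance-covariance matrix can be illustrated on a single pixel with a toy two-material system; the matrix entries below are illustrative, not calibrated DECT values:

```python
import numpy as np

# Toy two-material decomposition: per pixel, the dual-energy measurement
# y relates to material densities x via y = A x, so x = A^{-1} y.
# Noise with covariance S in y propagates to Cov(x) = A^{-1} S A^{-T};
# the inverse of this covariance is what weights the least-squares
# data term in a covariance-aware iterative method.
A = np.array([[0.30, 0.05],    # low-kVp attenuation per unit (bone, tissue)
              [0.20, 0.04]])   # high-kVp attenuation per unit (bone, tissue)
S = np.diag([1e-4, 1e-4])      # CT measurement noise variances

Ainv = np.linalg.inv(A)
cov = Ainv @ S @ Ainv.T        # covariance of the decomposed images
W = np.linalg.inv(cov)         # penalty weight for the data-fidelity term

y = np.array([0.25, 0.17])     # one pixel's dual-energy measurement
x = Ainv @ y                   # direct (noise-amplifying) decomposition
```

Because A is nearly singular, the decomposed-image variances on the diagonal of `cov` are orders of magnitude larger than the input variances in `S`, which is exactly the noise amplification the covariance-weighted iterative formulation is designed to counteract.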
Thermochemical process for recovering Cu from CuO or CuO2
Richardson, deceased, Donald M.; Bamberger, Carlos E.
1981-01-01
A process for producing hydrogen comprises the step of reacting metallic Cu with Ba(OH)2 in the presence of steam to produce hydrogen and BaCu2O2. The BaCu2O2 is reacted with H2O to form Cu2O and a Ba(OH)2 product for recycle to the initial reaction step. Cu can be obtained from the Cu2O product by several methods. In one embodiment the Cu2O is reacted with HF solution to provide CuF2 and Cu. The CuF2 is reacted with H2O to provide CuO and HF. CuO is decomposed to Cu2O and O2. The HF, Cu and Cu2O are recycled. In another embodiment the Cu2O is reacted with aqueous H2SO4 solution to provide CuSO4 solution and Cu. The CuSO4 is decomposed to CuO and SO3. The CuO is decomposed to form Cu2O and O2. The SO3 is dissolved to form H2SO4. H2SO4, Cu and Cu2O are recycled. In another embodiment Cu2O is decomposed electrolytically to Cu and O2. In another aspect of the invention, Cu is recovered from CuO by the steps of decomposing CuO to Cu2O and O2, reacting the Cu2O with aqueous HF solution to produce Cu and CuF2, reacting the CuF2 with H2O to form CuO and HF, and recycling the CuO and HF to previous reaction steps.
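The cycle's steps can be sanity-checked for element balance with a small formula parser; the stoichiometric coefficients below are those implied by the abstract, with steam treated as a reaction medium in the first step:

```python
import re
from collections import Counter

def parse(formula):
    """Element counts for a formula like 'Ba(OH)2' or 'BaCu2O2'.
    Handles one level of parentheses, enough for these reactions."""
    def expand(s):
        while True:
            m = re.search(r'\(([^()]*)\)(\d*)', s)
            if not m:
                return s
            n = int(m.group(2) or 1)
            s = s[:m.start()] + m.group(1) * n + s[m.end():]
    counts = Counter()
    for sym, num in re.findall(r'([A-Z][a-z]?)(\d*)', expand(formula)):
        counts[sym] += int(num or 1)
    return counts

def balanced(lhs, rhs):
    """True if both sides of a reaction carry the same element totals."""
    total = lambda side: sum((parse(f) for f in side), Counter())
    return total(lhs) == total(rhs)

# First step of the cycle: 2 Cu + Ba(OH)2 -> BaCu2O2 + H2
ok = balanced(['Cu', 'Cu', 'Ba(OH)2'], ['BaCu2O2', 'H2'])
```

The same check applies to the regeneration steps, e.g. CuF2 + H2O -> CuO + 2 HF.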