Sample records for based return decompositions

  1. Object detection with a multistatic array using singular value decomposition

    DOEpatents

    Hallquist, Aaron T.; Chambers, David H.

    2014-07-01

    A method and system for detecting the presence of subsurface objects within a medium is provided. In some embodiments, the detection system operates in a multistatic mode to collect radar return signals generated by an array of transceiver antenna pairs that is positioned across a surface and that travels down the surface. The detection system converts the return signals from a time domain to a frequency domain, resulting in frequency return signals. The detection system then performs a singular value decomposition for each frequency to identify singular values for each frequency. The detection system then detects the presence of a subsurface object based on a comparison of the identified singular values to expected singular values when no subsurface object is present.
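
    As an illustration of the general scheme described above (frequency-domain conversion, per-frequency singular value decomposition, comparison against a no-object baseline), a minimal numpy sketch follows. The array shapes, baseline statistics and threshold rule are assumptions for the example, not the patented implementation.

```python
# Hedged sketch: per-frequency SVD of multistatic radar returns with a simple
# baseline-deviation detector. Shapes and threshold are illustrative only.
import numpy as np

def singular_values_per_frequency(returns_td):
    """returns_td: array (n_tx, n_rx, n_samples) of time-domain return signals."""
    returns_fd = np.fft.rfft(returns_td, axis=-1)        # time domain -> frequency domain
    n_freq = returns_fd.shape[-1]
    sv = np.empty((n_freq, min(returns_fd.shape[:2])))
    for f in range(n_freq):
        # n_tx x n_rx multistatic response matrix at this frequency
        sv[f] = np.linalg.svd(returns_fd[:, :, f], compute_uv=False)
    return sv

def detect_object(sv, sv_baseline, threshold=3.0):
    """Flag a subsurface object when singular values exceed the expected
    no-object values by more than `threshold` baseline standard deviations.
    sv_baseline: array (n_trials, n_freq, n_sv) from object-free data."""
    z = (sv - sv_baseline.mean(axis=0)) / sv_baseline.std(axis=0)
    return bool(np.any(z > threshold))
```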

  2. Which Mechanisms Explain Monetary Returns to International Student Mobility?

    ERIC Educational Resources Information Center

    Kratz, Fabian; Netz, Nicolai

    2018-01-01

    The authors develop a conceptual framework explaining monetary returns to international student mobility (ISM). Based on data from two German graduate panel surveys, they test this framework using growth curve models and Oaxaca-Blinder decompositions. The results indicate that ISM-experienced graduates enjoy a steeper wage growth after graduation…
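
    The Oaxaca-Blinder step mentioned above has a simple closed form; a minimal numpy sketch of the two-fold decomposition of a mean wage gap is given below. Using the non-mobile group's coefficients as the reference, and the function names, are assumptions for the example, not the authors' exact specification.

```python
# Hedged sketch: two-fold Oaxaca-Blinder decomposition of a mean outcome gap
# between group A (e.g. internationally mobile) and group B (non-mobile).
import numpy as np

def ols(X, y):
    """OLS coefficients with an intercept prepended."""
    X1 = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta

def oaxaca_blinder(X_a, y_a, X_b, y_b):
    beta_a, beta_b = ols(X_a, y_a), ols(X_b, y_b)
    xbar_a = np.r_[1.0, X_a.mean(axis=0)]
    xbar_b = np.r_[1.0, X_b.mean(axis=0)]
    gap = y_a.mean() - y_b.mean()
    explained = (xbar_a - xbar_b) @ beta_b      # endowment (composition) effect
    unexplained = xbar_a @ (beta_a - beta_b)    # coefficient (returns) effect
    return gap, explained, unexplained          # gap == explained + unexplained
```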

  3. Long memory in international financial markets trends and short movements during 2008 financial crisis based on variational mode decomposition and detrended fluctuation analysis

    NASA Astrophysics Data System (ADS)

    Lahmiri, Salim

    2015-11-01

    The purpose of this study is to investigate long-range dependence in the trends and short-term variations of stock market price and return series before, during, and after the 2008 financial crisis. Variational mode decomposition (VMD), a newly introduced signal-processing technique, is adopted to decompose stock market data into a finite set of modes so as to obtain the long-term trends and short-term movements of the data. Then, detrended fluctuation analysis (DFA) and rescaled range (R/S) analysis are used to estimate the Hurst exponent in each variational mode obtained from VMD. For both price and return series, the empirical results from twelve international stock markets show evidence that long-term trends are persistent, whilst short-term variations are anti-persistent before, during, and after the 2008 financial crisis.
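
    To make the DFA step concrete, here is a compact numpy sketch of a detrended fluctuation analysis exponent estimate; the window sizes and linear detrending order are illustrative choices, and the VMD step is assumed to have been applied beforehand (each input would be one variational mode).

```python
# Hedged sketch: detrended fluctuation analysis (DFA) scaling exponent.
import numpy as np

def dfa_exponent(x, scales=(8, 16, 32, 64, 128)):
    profile = np.cumsum(x - np.mean(x))              # integrated, mean-removed series
    flucts = []
    for n in scales:
        n_windows = len(profile) // n
        segs = profile[:n_windows * n].reshape(n_windows, n)
        t = np.arange(n)
        rms = []
        for seg in segs:
            coef = np.polyfit(t, seg, 1)             # local linear trend
            rms.append(np.sqrt(np.mean((seg - np.polyval(coef, t)) ** 2)))
        flucts.append(np.mean(rms))
    # slope of log F(n) vs log n: > 0.5 persistent, < 0.5 anti-persistent
    return np.polyfit(np.log(scales), np.log(flucts), 1)[0]
```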

  4. Spectroscopic characterization and evaluation of SOM in areas under different soil tillage systems

    USDA-ARS?s Scientific Manuscript database

    Agricultural management influences the amount of carbon returned to the soil in the form of plant residues and animal manures and the rate of decomposition of soil carbon. The physical and chemical characteristics of soil carbon influence its recalcitrance to decomposition. We sampled soil from th...

  5. [Effects of nitrogen application on decomposition and nutrient release of returned maize straw in Guanzhong Plain, Northwest China].

    PubMed

    Huang, Ting Miao; Wang, Zhao Hui; Hou, Yang Yi; Gu, Chi Ming; Li, Xiao; Zheng, Xian Feng

    2017-07-18

    With ¹⁵N isotope-labeled maize straw placed in nylon net bags and buried in a wheat field at two N rates (0 and 200 kg N·hm⁻²), the effects of nitrogen application on the decomposition of straw dry matter and the release dynamics of carbon, nitrogen, phosphorus and potassium (C, N, P and K) after maize straw retention were investigated in the winter wheat-summer maize rotation system in Guanzhong Plain, Shaanxi, China. Results showed that N application did not affect the decomposition of the returned straw C and dry matter, but promoted the release of P and inhibited the release of N and K from straw from sowing to the wintering period of winter wheat. From grain filling to the harvest of winter wheat, the decomposition of the returned straw and the release of N, P and K were not affected, but the release of straw C was significantly enhanced by N application. The release dynamics of straw C were synchronized with the decomposition of the dry matter, and the C/N ratio of the straw declined gradually as the wheat growing season progressed. By the harvest of winter wheat, the accumulative decomposition rate of straw dry matter was less than 50%, and the total straw C release rate was around 47.9% to 51.1%. The C/N ratio of the returned straw decreased from 32.2 to 20.2 and 17.9 at N rates of 0 and 200 kg N·hm⁻², respectively. From sowing to harvest of winter wheat, a net release of N, P and K from the straw was observed. The N release was 7.2-9.4 kg·hm⁻², or 12.7%-16.6% of the total straw N, and the P release was 1.29-1.44 kg·hm⁻², or 29.0%-32.4% of the total straw P, while a great deal of K was released quickly, with approximately 80% of the straw K released before wintering and 51.8-52.5 kg·hm⁻², or 90.5%-91.7% of the total straw K, released by wheat harvest. It is suggested that K fertilizer application should be decreased for winter wheat owing to the large amount of K released from the returned maize straw, and that extra N and P fertilizer should be applied under the straw retention cropping system.

  6. Volatility and correlation-based systemic risk measures in the US market

    NASA Astrophysics Data System (ADS)

    Civitarese, Jamil

    2016-10-01

    This paper deals with the problem of how to use simple systemic risk measures to assess portfolio risk characteristics. Using three simple examples taken from the previous literature, one based on raw and partial correlations, another based on the eigenvalue decomposition of the covariance matrix, and the last one based on an eigenvalue entropy, a Granger-causation analysis revealed that some of them are not always a good measure of risk in the S&P 500 and in the VIX. The selected measures do not Granger-cause the VIX index in all windows considered; therefore, in the sense of risk as volatility, the indicators are not always suitable. Nevertheless, their behavior with respect to returns is similar to that reported in previous works that endorse them. A deeper analysis has shown, however, that any symmetric measure based on the eigenvalue decomposition of correlation matrices is not useful as a measure of "correlation" risk. The empirical counterpart of this proposition is that negative correlations are usually small and, therefore, do not heavily distort the behavior of the indicator.
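
    As a rough illustration of one of the indicator families discussed above, the sketch below computes a normalised eigenvalue entropy from a rolling correlation matrix of returns; the window handling and normalisation are assumptions, not the paper's exact measures.

```python
# Hedged sketch: normalised eigenvalue entropy of a returns correlation matrix.
import numpy as np

def eigenvalue_entropy(returns_window):
    """returns_window: array (n_days, n_assets) of returns in one rolling window."""
    corr = np.corrcoef(returns_window, rowvar=False)
    eigvals = np.linalg.eigvalsh(corr)
    p = eigvals / eigvals.sum()                 # eigenvalue spectrum as probabilities
    p = p[p > 0]
    return -np.sum(p * np.log(p)) / np.log(len(eigvals))   # 1 = fully diversified spectrum
```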

  7. Garbage Grows Great Plants.

    ERIC Educational Resources Information Center

    Brittain, Alexander N.

    1996-01-01

    Describes activities in which students explore composting. Enables students to learn that all organic material returns naturally to the earth through a process of decomposition that involves many living organisms. (JRH)

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whitford, W.G.; Elkins, N.Z.; Parker, L.W.

    In laboratory microcosms of coal mine spoil amended with bark and wood chips, the activity of termites increased organic matter and increased total nitrogen. Termite survival was reduced in microcosms with spoil and paper or straw amendments. Field studies evaluating the efficacy of organic amendments in developing a soil biota showed that decomposition rates of wood chip-bark amended spoil were the same as unmined soil and that decomposition rates were lower than all other mulch-spoil combinations. Wood- and bark-amended spoil had the highest density and diversity of soil fauna. Top dressing spoils with borrow soil did not improve any of the soil biological parameters measured. Based on these data it was recommended that reclamation procedures be changed to eliminate borrow soil top-dressing and that wood removed from mined areas be returned to the contoured spoil as wood chip amendment in addition to straw mulch.

  9. Highly sensitive antenna using inkjet overprinting with particle-free conductive inks.

    PubMed

    Komoda, Natsuki; Nogi, Masaya; Suganuma, Katsuaki; Otsuka, Kanji

    2012-11-01

    Printed antennas with low signal losses and fast response in high-frequency bands are in demand. Here we report on highly sensitive antennas fabricated by additive patterning of particle-free metallo-organic decomposition silver inks. Inkjet overprinting of metallo-organic decomposition inks onto copper foil and silver nanowire lines produced antennas with mirror-like surfaces. As a result, the overprinted antennas showed reduced return losses at 0.5-4.0 GHz and increased data communication speed in a WiFi network.

  10. Gaussian Decomposition of Laser Altimeter Waveforms

    NASA Technical Reports Server (NTRS)

    Hofton, Michelle A.; Minster, J. Bernard; Blair, J. Bryan

    1999-01-01

    We develop a method to decompose a laser altimeter return waveform into its Gaussian components assuming that the position of each Gaussian within the waveform can be used to calculate the mean elevation of a specific reflecting surface within the laser footprint. We estimate the number of Gaussian components from the number of inflection points of a smoothed copy of the laser waveform, and obtain initial estimates of the Gaussian half-widths and positions from the positions of its consecutive inflection points. Initial amplitude estimates are obtained using a non-negative least-squares method. To reduce the likelihood of fitting the background noise within the waveform and to minimize the number of Gaussians needed in the approximation, we rank the "importance" of each Gaussian in the decomposition using its initial half-width and amplitude estimates. The initial parameter estimates of all Gaussians ranked "important" are optimized using the Levenberg-Marquardt method. If the sum of the Gaussians does not approximate the return waveform to a prescribed accuracy, then additional Gaussians are included in the optimization procedure. The Gaussian decomposition method is demonstrated on data collected by the airborne Laser Vegetation Imaging Sensor (LVIS) in October 1997 over the Sequoia National Forest, California.
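
    A hedged scipy sketch of the core steps (inflection points of a smoothed waveform for initial estimates, then Levenberg-Marquardt refinement) is shown below; the smoothing width and pairing rule are illustrative, and the non-negative least-squares amplitude initialisation and the importance ranking described in the abstract are omitted.

```python
# Hedged sketch: Gaussian decomposition of a return waveform.
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.optimize import curve_fit

def sum_of_gaussians(t, *params):                 # params = [amp1, mu1, sigma1, amp2, ...]
    p = np.asarray(params).reshape(-1, 3)
    return sum(a * np.exp(-0.5 * ((t - m) / s) ** 2) for a, m, s in p)

def decompose_waveform(t, w, smooth_sigma=2.0):
    ws = gaussian_filter1d(w, smooth_sigma)
    sign_change = np.diff(np.sign(np.diff(ws, 2)))        # sign changes of 2nd difference
    inflections = np.where(sign_change != 0)[0]
    p0 = []
    # one Gaussian per pair of consecutive inflection points (rough initial guesses)
    for i0, i1 in zip(inflections[:-1:2], inflections[1::2]):
        mid = (i0 + i1) // 2
        p0 += [w[mid], t[mid], max((t[i1] - t[i0]) / 2.0, 1e-6)]
    popt, _ = curve_fit(sum_of_gaussians, t, w, p0=p0, method="lm", maxfev=10000)
    return popt.reshape(-1, 3)                    # rows: (amplitude, centre, half-width)
```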

  11. How Human and Natural Disturbance Affects the U.S. Carbon Sink

    NASA Astrophysics Data System (ADS)

    Felzer, B. S.

    2015-12-01

    Gridded datasets of Net Ecosystem Exchange derived from eddy covariance and remote sensing measurements (EC-MOD and FLUXNET-MTE) provide a means of validating Net Ecosystem Productivity (NEP, the opposite of NEE) from terrestrial ecosystem models. While most forested regions in the U.S. are observed to be moderate to strong carbon sinks, models not including human or natural disturbances will tend to be more carbon neutral, which is expected of mature ecosystems. I have developed the Terrestrial Ecosystems Model Hydro version (TEM-Hydro) to include both human and natural disturbances to compare against gridded NEP datasets. Human disturbances are based on the Hurtt et al. land use transition dataset and include transient agricultural (crops and pasture) conversion and abandonment and timber harvest. Natural disturbances include tropical storms, hurricanes, and fires based on stochastic return intervals. Model results indicate that forests are the largest carbon sink, followed by croplands and pastures, if not accounting for decomposition of agricultural products and animal respiration. Grasslands and shrublands are both small sinks or carbon neutral. The NEP of forests in EC-MOD from 2001-2006 is 240 gC m⁻² yr⁻¹ and for FLUXNET-MTE from 1982-2007 is 375 gC m⁻² yr⁻¹. With potential vegetation, the forest sinks for those two time periods are 54 and 62 gC m⁻² yr⁻¹, respectively. Including the effects of human disturbance increases the sinks to 154 and 147 gC m⁻² yr⁻¹. The effect of stochastic fire and storms is to reduce the NEP to 114 and 108 gC m⁻² yr⁻¹. While the positive carbon sink today is the result of past land use disturbance, net carbon sequestration, including product decomposition, conversion fluxes, and animal respiration, has not yet returned to predisturbance levels as seen in the potential vegetation. Differences in response to disturbance have to do with the type, frequency, and intensity of disturbance. Fire, in particular, is seen to have a net negative effect on carbon storage in forests due to decomposition of coarse woody debris and the fact that some nitrogen is lost during volatilization. Croplands become a carbon source if product decomposition is assumed to occur where the crops are grown, and pasturelands become carbon neutral if animal respiration is accounted for.

  12. Precipitation intensity-duration-frequency curves for central Belgium with an ensemble of EURO-CORDEX simulations, and associated uncertainties

    NASA Astrophysics Data System (ADS)

    Hosseinzadehtalaei, Parisa; Tabari, Hossein; Willems, Patrick

    2018-02-01

    An ensemble of 88 regional climate model (RCM) simulations at 0.11° and 0.44° spatial resolutions from the EURO-CORDEX project is analyzed for central Belgium to investigate the projected impact of climate change on precipitation intensity-duration-frequency (IDF) relationships and extreme precipitation quantiles typically used in water engineering designs. The uncertainty arising from the choice of RCM, driving GCM, and representative concentration pathway (RCP4.5 & RCP8.5) is quantified using a variance decomposition technique after reconstruction of missing data in GCM × RCM combinations. A comparative analysis between the historical simulations of the EURO-CORDEX 0.11° and 0.44° RCMs shows higher precipitation intensities in the finer resolution runs, leading to a larger overestimation of the observations-based IDFs by the 0.11° runs. The results reveal that making a temporal stationarity assumption for the climate system may lead to underestimation of precipitation quantiles by up to 70% by the end of this century. This projected increase is generally larger for the 0.11° RCMs than for the 0.44° RCMs. The relative changes in extreme precipitation depend on return period and duration, indicating an amplification for larger return periods and for smaller durations. The variance decomposition approach generally identifies the RCM as the most dominant component of uncertainty in changes of more extreme precipitation (return period of 10 years) for both the 0.11° and 0.44° resolutions, followed by the GCM and the RCP scenario. The uncertainties associated with cross-contributions of RCMs, GCMs, and RCPs play a non-negligible role in the overall uncertainty of the changes.
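
    The variance decomposition idea can be sketched with a plain ANOVA-style partition over a filled GCM × RCM × RCP array, as below; the one-value-per-combination layout is an assumption, and the reconstruction of missing GCM × RCM combinations used in the paper is not shown.

```python
# Hedged sketch: partition of ensemble spread into GCM/RCM/RCP main effects.
import numpy as np

def variance_fractions(changes):
    """changes: array (n_gcm, n_rcm, n_rcp) of projected changes in a quantile."""
    total = np.var(changes)
    parts = {
        "GCM": np.var(changes.mean(axis=(1, 2))),   # variance of GCM marginal means
        "RCM": np.var(changes.mean(axis=(0, 2))),
        "RCP": np.var(changes.mean(axis=(0, 1))),
    }
    parts["cross-contributions"] = total - sum(parts.values())
    return {k: v / total for k, v in parts.items()}

# example with synthetic numbers
rng = np.random.default_rng(0)
fake_changes = 10 + 5 * rng.standard_normal((5, 10, 2))
print(variance_fractions(fake_changes))
```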

  13. Probabilistic assessment of landslide tsunami hazard for the northern Gulf of Mexico

    NASA Astrophysics Data System (ADS)

    Pampell-Manis, A.; Horrillo, J.; Shigihara, Y.; Parambath, L.

    2016-01-01

    The devastating consequences of recent tsunamis affecting Indonesia and Japan have prompted a scientific response to better assess unexpected tsunami hazards. Although much uncertainty exists regarding the recurrence of large-scale tsunami events in the Gulf of Mexico (GoM), geological evidence indicates that a tsunami is possible and would most likely come from a submarine landslide triggered by an earthquake. This study customizes for the GoM a first-order probabilistic landslide tsunami hazard assessment. Monte Carlo Simulation (MCS) is employed to determine landslide configurations based on distributions obtained from observational submarine mass failure (SMF) data. Our MCS approach incorporates a Cholesky decomposition method for correlated landslide size parameters to capture correlations seen in the data as well as uncertainty inherent in these events. Slope stability analyses are performed using landslide and sediment properties and regional seismic loading to determine landslide configurations which fail and produce a tsunami. The probability of each tsunamigenic failure is calculated based on the joint probability of slope failure and probability of the triggering earthquake. We are thus able to estimate sizes and return periods for probabilistic maximum credible landslide scenarios. We find that the Cholesky decomposition approach generates landslide parameter distributions that retain the trends seen in observational data, improving the statistical validity and relevancy of the MCS technique in the context of landslide tsunami hazard assessment. Estimated return periods suggest that probabilistic maximum credible SMF events in the north and northwest GoM have a recurrence of 5000-8000 years, in agreement with age dates of observed deposits.
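
    The Cholesky step is standard and easy to sketch: draw correlated standard normals via the Cholesky factor of a correlation matrix and map them to lognormal landslide size parameters. The parameter names, marginals and correlation values below are hypothetical, not the observational SMF statistics used in the study.

```python
# Hedged sketch: Monte Carlo sampling of correlated landslide size parameters.
import numpy as np

rng = np.random.default_rng(0)

# hypothetical log-space means/std devs of (length, width, thickness)
mu = np.array([np.log(5000.0), np.log(1500.0), np.log(50.0)])
sigma = np.array([0.8, 0.6, 0.5])
corr = np.array([[1.0, 0.7, 0.5],
                 [0.7, 1.0, 0.6],
                 [0.5, 0.6, 1.0]])

L = np.linalg.cholesky(corr)                      # corr = L @ L.T
z = rng.standard_normal((10000, 3)) @ L.T         # correlated standard normals
samples = np.exp(mu + sigma * z)                  # correlated lognormal parameters

print(np.corrcoef(np.log(samples), rowvar=False)) # recovers approximately `corr`
```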

  14. Understanding the City Size Wage Gap*

    PubMed Central

    Baum-Snow, Nathaniel; Pavan, Ronni

    2013-01-01

    In this paper, we decompose city size wage premia into various components. We base these decompositions on an estimated on-the-job search model that incorporates latent ability, search frictions, firm-worker match quality, human capital accumulation and endogenous migration between large, medium and small cities. Counterfactual simulations of the model indicate that variation in returns to experience and differences in wage intercepts across location type are the most important mechanisms contributing to observed city size wage premia. Variation in returns to experience is more important for generating wage premia between large and small locations while differences in wage intercepts are more important for generating wage premia between medium and small locations. Sorting on unobserved ability within education group and differences in labor market search frictions and distributions of firm-worker match quality contribute little to observed city size wage premia. These conclusions hold for separate samples of high school and college graduates. PMID:24273347

  15. Understanding the City Size Wage Gap.

    PubMed

    Baum-Snow, Nathaniel; Pavan, Ronni

    2012-01-01

    In this paper, we decompose city size wage premia into various components. We base these decompositions on an estimated on-the-job search model that incorporates latent ability, search frictions, firm-worker match quality, human capital accumulation and endogenous migration between large, medium and small cities. Counterfactual simulations of the model indicate that variation in returns to experience and differences in wage intercepts across location type are the most important mechanisms contributing to observed city size wage premia. Variation in returns to experience is more important for generating wage premia between large and small locations while differences in wage intercepts are more important for generating wage premia between medium and small locations. Sorting on unobserved ability within education group and differences in labor market search frictions and distributions of firm-worker match quality contribute little to observed city size wage premia. These conclusions hold for separate samples of high school and college graduates.

  16. Interdependence between Greece and other European stock markets: A comparison of wavelet and VMD copula, and the portfolio implications

    NASA Astrophysics Data System (ADS)

    Shahzad, Syed Jawad Hussain; Kumar, Ronald Ravinesh; Ali, Sajid; Ameer, Saba

    2016-09-01

    The interdependence of Greece and other European stock markets and the subsequent portfolio implications are examined in wavelet and variational mode decomposition domain. In applying the decomposition techniques, we analyze the structural properties of data and distinguish between short and long term dynamics of stock market returns. First, the GARCH-type models are fitted to obtain the standardized residuals. Next, different copula functions are evaluated, and based on the conventional information criteria and time varying parameter, Joe-Clayton copula is chosen to model the tail dependence between the stock markets. The short-run lower tail dependence time paths show a sudden increase in comovement during the global financial crises. The results of the long-run dependence suggest that European stock markets have higher interdependence with Greece stock market. Individual country's Value at Risk (VaR) separates the countries into two distinct groups. Finally, the two-asset portfolio VaR measures provide potential markets for Greece stock market investment diversification.
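
    As a much-simplified illustration of the final portfolio step, the sketch below computes a two-asset portfolio Value at Risk by historical simulation; the weights, confidence level and use of raw returns are simplifying assumptions, whereas the paper works with GARCH-filtered residuals and a time-varying Joe-Clayton copula.

```python
# Hedged sketch: historical-simulation VaR for a two-asset portfolio.
import numpy as np

def historical_var(returns_a, returns_b, w_a=0.5, alpha=0.05):
    """returns_a, returns_b: 1-D arrays of daily returns; alpha: tail probability.
    Returns the loss level exceeded with probability alpha."""
    portfolio = w_a * returns_a + (1.0 - w_a) * returns_b
    return -np.quantile(portfolio, alpha)

rng = np.random.default_rng(1)
print(historical_var(rng.normal(0, 0.02, 1000), rng.normal(0, 0.03, 1000)))
```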

  17. Independent EEG Sources Are Dipolar

    PubMed Central

    Delorme, Arnaud; Palmer, Jason; Onton, Julie; Oostenveld, Robert; Makeig, Scott

    2012-01-01

    Independent component analysis (ICA) and blind source separation (BSS) methods are increasingly used to separate individual brain and non-brain source signals mixed by volume conduction in electroencephalographic (EEG) and other electrophysiological recordings. We compared results of decomposing thirteen 71-channel human scalp EEG datasets by 22 ICA and BSS algorithms, assessing the pairwise mutual information (PMI) in scalp channel pairs, the remaining PMI in component pairs, the overall mutual information reduction (MIR) effected by each decomposition, and decomposition ‘dipolarity’ defined as the number of component scalp maps matching the projection of a single equivalent dipole with less than a given residual variance. The least well-performing algorithm was principal component analysis (PCA); best performing were AMICA and other likelihood/mutual information based ICA methods. Though these and other commonly-used decomposition methods returned many similar components, across 18 ICA/BSS algorithms mean dipolarity varied linearly with both MIR and with PMI remaining between the resulting component time courses, a result compatible with an interpretation of many maximally independent EEG components as being volume-conducted projections of partially-synchronous local cortical field activity within single compact cortical domains. To encourage further method comparisons, the data and software used to prepare the results have been made available (http://sccn.ucsd.edu/wiki/BSSComparison). PMID:22355308

  18. Storage of Organic and Inorganic Carbon in Human Settlements

    NASA Astrophysics Data System (ADS)

    Churkina, G.

    2009-12-01

    It has been shown that urban areas have a carbon density comparable with that of tropical forests. The carbon density of urban areas may be even higher, because only the density of organic carbon was taken into account. Human settlements store carbon in two forms: organic and inorganic. Carbon is stored in organic form in living biomass such as trees and grasses, or in artifacts derived from biomass such as wooden furniture, building structures, paper, and clothes and shoes made from natural materials. Inorganic or fossil carbon, meanwhile, is primarily stored in materials fabricated by people, such as concrete, plastic, asphalt, and bricks. The key difference between organic and inorganic forms of carbon is how they return to the gaseous state. Organic carbon can be returned to the atmosphere without additional artificial energy input through decomposition of organic matter, whereas an energy input such as burning is needed to release inorganic carbon. In this study I compare inorganic with organic carbon storage and discuss their carbon residence times, decomposition rates, and possible implications for carbon emissions.

  19. The role of soil drainage class in carbon dioxide exchange and decomposition in boreal black spruce (Picea mariana) forest stands

    USGS Publications Warehouse

    Wickland, K.P.; Neff, J.C.; Harden, J.W.

    2010-01-01

    Black spruce (Picea mariana (Mill.) B.S.P.) forest stands range from well drained to poorly drained, typically contain large amounts of soil organic carbon (SOC), and are often underlain by permafrost. To better understand the role of soil drainage class in carbon dioxide (CO2) exchange and decomposition, we measured soil respiration and net CO2 fluxes, litter decomposition and litterfall rates, and SOC stocks above permafrost in three Alaska black spruce forest stands characterized as well drained (WD), moderately drained (MD), and poorly drained (PD). Soil respiration and net CO2 fluxes were not significantly different among sites, although the relation between soil respiration rate and temperature varied with site (Qw: WD > MD > PD). Annual estimated soil respiration, litter decomposition, and groundcover photosynthesis were greatest at PD. These results suggest that soil temperature and moisture conditions in shallow organic horizon soils at PD were more favorable for decomposition compared with the better drained sites. SOC stocks, however, increase from WD to MD to PD such that surface decomposition and C storage are diametric. Greater groundcover vegetation productivity, protection of deep SOC by permafrost and anoxic conditions, and differences in fire return interval and (or) severity at PD counteract the relatively high near-surface decomposition rates, resulting in high net C accumulation.

  20. Crowdsourcing Austrian data on decomposition with the help of citizen scientists

    NASA Astrophysics Data System (ADS)

    Sandén, Taru; Berthold, Helene; Schwarz, Michael; Baumgarten, Andreas; Spiegel, Heide

    2017-04-01

    Decay of organic material, decomposition, is a critical process for life on earth. Through decomposition, food becomes available for plants and soil organisms that they use in their growth and maintenance. When plant material decomposes, it loses weight and releases the greenhouse gas carbon dioxide (CO2) into the atmosphere. Terrestrial soils contain about three times more carbon than the atmosphere and, therefore, changes in the balance of soil carbon storage and release can significantly amplify or attenuate global warming. Many factors affecting the global carbon cycle are already known and mapped; however, an index for decomposition rate is still missing, even though it is needed for climate modelling. The Tea Bag Index (TBI) measures decomposition in a standardised, achievable, climate-relevant, and time-relevant way by burying commercial nylon tea bags in soils for three months (Keuskamp et al., 2013). In the summer of 2016, TBI (expressed as decomposition rate (k) and stabilisation index (S)) was measured with the help of Austrian citizen scientists at 7-8 cm soil depth in three different land uses (maize croplands, grasslands and forests). In total ca. 2700 tea bags were sent to the citizen scientists of which ca. 50% were returned. The data generated by the citizen scientists will be incorporated into an Austrian as well as a global soil map of decomposition. This map can be used as input to improve climate modelling in the future.
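
    For readers who want the arithmetic behind k and S, a sketch of the standard Tea Bag Index calculation follows; the hydrolysable fractions used below (0.842 for green tea, 0.552 for rooibos) are the reference values reported by Keuskamp et al. (2013) and should be treated as assumptions to be checked against the original protocol.

```python
# Hedged sketch: Tea Bag Index decomposition rate k and stabilisation factor S.
import numpy as np

H_GREEN, H_ROOIBOS = 0.842, 0.552     # assumed hydrolysable (decomposable) fractions

def tea_bag_index(green_remaining, rooibos_remaining, days=90):
    """Inputs are remaining mass fractions (final/initial dry weight) after burial."""
    a_green = 1.0 - green_remaining                  # decomposed fraction of green tea
    S = 1.0 - a_green / H_GREEN                      # stabilisation factor
    a_rooibos = H_ROOIBOS * (1.0 - S)                # predicted labile fraction of rooibos
    # rooibos mass model: W(t) = a*exp(-k*t) + (1 - a)  ->  solve for k
    k = np.log(a_rooibos / (rooibos_remaining - (1.0 - a_rooibos))) / days
    return k, S

# example: 40% of green tea and 75% of rooibos tea left after 90 days
print(tea_bag_index(0.40, 0.75))
```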

  1. Methods for estimating litter decomposition. Chapter 8

    Treesearch

    Noah J. Karberg; Neal A. Scott; Christian P. Giardina

    2008-01-01

    Litterfall in terrestrial ecosystems represents the primary pathway for nutrient return to soil. Heterotrophic metabolism, facilitated through comminution by small insects and leaching during precipitation events, results in the release of plant litter carbon as CO2 into the atmosphere. The balance between litter inputs and heterotrophic litter...

  2. Spatial, temporal, and hybrid decompositions for large-scale vehicle routing with time windows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bent, Russell W

    This paper studies the use of decomposition techniques to quickly find high-quality solutions to large-scale vehicle routing problems with time windows. It considers an adaptive decomposition scheme which iteratively decouples a routing problem based on the current solution. Earlier work considered vehicle-based decompositions that partition the vehicles across the subproblems. The subproblems can then be optimized independently and merged easily. This paper argues that vehicle-based decompositions, although very effective on various problem classes, also have limitations. In particular, they do not accommodate temporal decompositions and may produce spatial decompositions that are not focused enough. This paper then proposes customer-based decompositions which generalize vehicle-based decouplings and allow for focused spatial and temporal decompositions. Experimental results on class R2 of the extended Solomon benchmarks demonstrate the benefits of the customer-based adaptive decomposition scheme and its spatial, temporal, and hybrid instantiations. In particular, they show that customer-based decompositions bring significant benefits over large neighborhood search, in contrast to vehicle-based decompositions.

  3. Intrinsic Multi-Scale Dynamic Behaviors of Complex Financial Systems.

    PubMed

    Ouyang, Fang-Yan; Zheng, Bo; Jiang, Xiong-Fei

    2015-01-01

    The empirical mode decomposition is applied to analyze the intrinsic multi-scale dynamic behaviors of complex financial systems. In this approach, the time series of price returns of each stock is decomposed into a small number of intrinsic mode functions, which represent the price motion from high frequency to low frequency. These intrinsic mode functions are then grouped into three modes, i.e., the fast mode, medium mode and slow mode. The probability distribution of returns and the auto-correlation of volatilities for the fast and medium modes exhibit behaviors similar to those of the full time series, i.e., these characteristics are rather robust across time scales. However, the cross-correlation between individual stocks and the return-volatility correlation are time-scale dependent. The structure of business sectors is mainly governed by the fast mode when returns are sampled at intervals of a couple of days, and by the medium mode when returns are sampled at intervals of dozens of days. More importantly, the leverage and anti-leverage effects are dominated by the medium mode.
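
    A sketch of the decompose-and-group step is given below, assuming the third-party PyEMD package ("EMD-signal" on PyPI) for the empirical mode decomposition itself; the split into fast/medium/slow groups by IMF index is an illustrative convention, not the paper's exact grouping rule.

```python
# Hedged sketch: EMD of a return series and grouping of IMFs into three modes.
# Assumed dependency: pip install EMD-signal
import numpy as np
from PyEMD import EMD

def grouped_modes(returns):
    imfs = EMD().emd(np.asarray(returns, dtype=float))  # IMFs ordered high to low frequency
    fast = imfs[:2].sum(axis=0)                         # highest-frequency IMFs
    medium = imfs[2:5].sum(axis=0)                      # intermediate time scales
    slow = imfs[5:].sum(axis=0)                         # remaining low-frequency content
    return fast, medium, slow
```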

  4. Shoot litter breakdown and zinc dynamics of an aquatic plant, Schoenoplectus californicus.

    PubMed

    Arreghini, Silvana; de Cabo, Laura; Serafini, Roberto José María; Fabrizio de Iorio, Alicia

    2018-07-03

    Decomposition of plant debris is an important process in determining the structure and function of aquatic ecosystems. The aims were to find a mathematical model fitting the decomposition process of Schoenoplectus californicus shoots containing different Zn concentrations, to compare the decomposition rates, and to assess metal accumulation/mobilization during decomposition. A litterbag technique was applied with shoots containing three levels of Zn: collected from an unpolluted river (RIV) and from experimental populations at low (LoZn) and high (HiZn) Zn supply. The double exponential model explained S. californicus shoot decomposition. At first, the higher initial proportion of the refractory fraction in RIV detritus resulted in a lower decay rate, and until 68 days RIV and LoZn detritus behaved as a source of metal, releasing soluble/weakly bound zinc into the water; after 68 days, they became a sink. HiZn detritus, however, showed rapid release into the water during the first 8 days, changing to the sink condition up to 68 days, and then returning to the source condition up to 369 days. Knowledge of the role of detritus (sink/source) will allow a correct management of the vegetation used for zinc removal and provide a valuable tool for environmental remediation and rehabilitation planning.
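
    The double exponential model referred to above can be fitted in a few lines with scipy; in the sketch below the mass-remaining data and starting values are hypothetical, not the S. californicus measurements.

```python
# Hedged sketch: fitting a double exponential (labile + refractory) decay model.
import numpy as np
from scipy.optimize import curve_fit

def double_exponential(t, labile, k_fast, k_slow):
    """Mass fraction remaining: labile pool decays at k_fast, the rest at k_slow."""
    return labile * np.exp(-k_fast * t) + (1.0 - labile) * np.exp(-k_slow * t)

t_days = np.array([0, 8, 30, 68, 132, 369], dtype=float)
mass_remaining = np.array([1.00, 0.82, 0.70, 0.61, 0.55, 0.42])   # hypothetical data

popt, _ = curve_fit(double_exponential, t_days, mass_remaining,
                    p0=[0.4, 0.05, 0.001], bounds=([0, 0, 0], [1, 1, 1]))
labile, k_fast, k_slow = popt
print(f"labile fraction={labile:.2f}, k_fast={k_fast:.3f}/d, k_slow={k_slow:.4f}/d")
```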

  5. Hammerstein system representation of financial volatility processes

    NASA Astrophysics Data System (ADS)

    Capobianco, E.

    2002-05-01

    We show new modeling aspects of stock return volatility processes by first representing them through Hammerstein systems, and then approximating the observed and transformed dynamics with wavelet-based atomic dictionaries. We thus propose a hybrid statistical methodology for volatility approximation and non-parametric estimation, and aim to use the information embedded in a bank of volatility sources obtained by decomposing the observed signal with multiresolution techniques. Scale-dependent information refers both to market activity inherent to different temporally aggregated trading horizons, and to a variable degree of sparsity in representing the signal. A decomposition of the expansion coefficients into least dependent coordinates is then implemented through Independent Component Analysis. Based on the described steps, the features of volatility can be more effectively detected through global and greedy algorithms.

  6. Self-consistent asset pricing models

    NASA Astrophysics Data System (ADS)

    Malevergne, Y.; Sornette, D.

    2007-08-01

    We discuss the foundations of factor or regression models in the light of the self-consistency condition that the market portfolio (and more generally the risk factors) is (are) constituted of the assets whose returns it is (they are) supposed to explain. As already reported in several articles, self-consistency implies correlations between the return disturbances. As a consequence, the alphas and betas of the factor model are unobservable. Self-consistency leads to renormalized betas with zero effective alphas, which are observable with standard OLS regressions. When the conditions derived from internal consistency are not met, the model is necessarily incomplete, which means that some sources of risk cannot be replicated (or hedged) by a portfolio of stocks traded on the market, even for infinite economies. Analytical derivations and numerical simulations show that, for arbitrary choices of the proxy which are different from the true market portfolio, a modified linear regression holds with a non-zero value αi at the origin between an asset i's return and the proxy's return. Self-consistency also introduces “orthogonality” and “normality” conditions linking the betas, alphas (as well as the residuals) and the weights of the proxy portfolio. Two diagnostics based on these orthogonality and normality conditions are implemented on a basket of 323 assets which have been components of the S&P500 in the period from January 1990 to February 2005. These two diagnostics show interesting departures from dynamical self-consistency starting about 2 years before the end of the Internet bubble. Assuming that the CAPM holds with the self-consistency condition, the OLS method automatically obeys the resulting orthogonality and normality conditions and therefore provides a simple way to self-consistently assess the parameters of the model by using proxy portfolios made only of the assets which are used in the CAPM regressions. Finally, the factor decomposition with the self-consistency condition derives a risk-factor decomposition in the multi-factor case which is identical to the principal component analysis (PCA), thus providing a direct link between model-driven and data-driven constructions of risk factors. This correspondence shows that PCA will therefore suffer from the same limitations as the CAPM and its multi-factor generalization, namely lack of out-of-sample explanatory power and predictability. In the multi-period context, the self-consistency conditions force the betas to be time-dependent with specific constraints.

  7. Intrinsic Multi-Scale Dynamic Behaviors of Complex Financial Systems

    PubMed Central

    Ouyang, Fang-Yan; Zheng, Bo; Jiang, Xiong-Fei

    2015-01-01

    The empirical mode decomposition is applied to analyze the intrinsic multi-scale dynamic behaviors of complex financial systems. In this approach, the time series of price returns of each stock is decomposed into a small number of intrinsic mode functions, which represent the price motion from high frequency to low frequency. These intrinsic mode functions are then grouped into three modes, i.e., the fast mode, medium mode and slow mode. The probability distribution of returns and the auto-correlation of volatilities for the fast and medium modes exhibit behaviors similar to those of the full time series, i.e., these characteristics are rather robust across time scales. However, the cross-correlation between individual stocks and the return-volatility correlation are time-scale dependent. The structure of business sectors is mainly governed by the fast mode when returns are sampled at intervals of a couple of days, and by the medium mode when returns are sampled at intervals of dozens of days. More importantly, the leverage and anti-leverage effects are dominated by the medium mode. PMID:26427063

  8. Dictionary-Based Tensor Canonical Polyadic Decomposition

    NASA Astrophysics Data System (ADS)

    Cohen, Jeremy Emile; Gillis, Nicolas

    2018-04-01

    To ensure interpretability of extracted sources in tensor decomposition, we introduce in this paper a dictionary-based tensor canonical polyadic decomposition which enforces one factor to belong exactly to a known dictionary. A new formulation of sparse coding is proposed which enables dictionary-based canonical polyadic decomposition of high-dimensional tensors. The benefits of using a dictionary in tensor decomposition models are explored in terms of both parameter identifiability and estimation accuracy. The performance of the proposed algorithms is evaluated on the decomposition of simulated data and the unmixing of hyperspectral images.

  9. An evaluation of soil chemistry in human cadaver decomposition islands: Potential for estimating postmortem interval (PMI).

    PubMed

    Fancher, J P; Aitkenhead-Peterson, J A; Farris, T; Mix, K; Schwab, A P; Wescott, D J; Hamilton, M D

    2017-10-01

    Soil samples from the Forensic Anthropology Research Facility (FARF) at Texas State University, San Marcos, TX, were analyzed for multiple soil characteristics from cadaver decomposition islands to a depth of 5 centimeters (cm) at 63 human decomposition sites, as well as at depths up to 15 cm in a subset of 11 of the cadaver decomposition islands plus control soils. Postmortem interval (PMI) of the cadaver decomposition islands ranged from 6 to 1752 days. Some soil chemistry, including nitrate-N (NO₃-N), ammonium-N (NH₄-N), and dissolved inorganic carbon (DIC), peaked at early PMI values, and their concentrations at 0-5 cm returned to near control values over time, likely due to translocation down the soil profile. Other soil chemistry, including dissolved organic carbon (DOC), dissolved organic nitrogen (DON), orthophosphate-P (PO₄-P), sodium (Na⁺), and potassium (K⁺), remained higher than the control soil up to a PMI of 1752 days postmortem. The body mass index (BMI) of the cadaver appeared to have some effect on the cadaver decomposition island chemistry. To estimate PMI using soil chemistry, backward stepwise multiple regression analysis was used with PMI as the dependent variable and soil chemistry, body mass index (BMI) and physical soil characteristics such as saturated hydraulic conductivity as independent variables. Measures of soil parameters derived from predator- and microbially mediated decomposition of human remains show promise in estimating PMI to within 365 days for a period up to nearly five years. This persistent change in soil chemistry extends the ability to estimate PMI beyond the traditionally utilized methods of entomology and taphonomy in support of medical-legal investigations, humanitarian recovery efforts, and criminal and civil cases. Copyright © 2017 Elsevier B.V. All rights reserved.
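
    A simplified sketch of backward stepwise selection for such a PMI regression is shown below, using statsmodels OLS with p-value elimination; the response column name and the 0.05 threshold are illustrative assumptions, not the published model.

```python
# Hedged sketch: backward stepwise OLS (p-value elimination) for PMI estimation.
# `df` is a pandas DataFrame of soil chemistry / BMI columns plus the response column.
import statsmodels.api as sm

def backward_stepwise(df, response="PMI_days", alpha=0.05):
    predictors = [c for c in df.columns if c != response]
    while predictors:
        X = sm.add_constant(df[predictors])
        fit = sm.OLS(df[response], X).fit()
        pvals = fit.pvalues.drop("const")
        worst = pvals.idxmax()
        if pvals[worst] <= alpha:
            return fit                    # all remaining predictors significant
        predictors.remove(worst)          # drop the least significant predictor
    return None
```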

  10. Biogeochemistry of Decomposition and Detrital Processing

    NASA Astrophysics Data System (ADS)

    Sanderman, J.; Amundson, R.

    2003-12-01

    Decomposition is a key ecological process that roughly balances net primary production in terrestrial ecosystems and is an essential process in resupplying nutrients to the plant community. Decomposition consists of three concurrent processes: comminution or fragmentation, leaching of water-soluble compounds, and microbial catabolism. Decomposition can also be viewed as a sequential process, what Eijsackers and Zehnder (1990) compare to a Russian matriochka doll. Soil macrofauna fragment and partially solubilize plant residues, facilitating establishment of a community of decomposer microorganisms. This decomposer community will gradually shift as the most easily degraded plant compounds are utilized and the more recalcitrant materials begin to accumulate. Given enough time and the proper environmental conditions, most naturally occurring compounds can completely be mineralized to inorganic forms. Simultaneously with mineralization, the process of humification acts to transform a fraction of the plant residues into stable soil organic matter (SOM) or humus. For reference, Schlesinger (1990) estimated that only ~0.7% of detritus eventually becomes stabilized into humus. Decomposition plays a key role in the cycling of most plant macro- and micronutrients and in the formation of humus. Figure 1 places the roles of detrital processing and mineralization within the context of the biogeochemical cycling of essential plant nutrients. Chapin (1991) found that while the atmosphere supplied 4% and mineral weathering supplied no nitrogen and <1% of phosphorus, internal nutrient recycling is the source for >95% of all the nitrogen and phosphorus uptake by tundra species in Barrow, Alaska. In a cool temperate forest, nutrient recycling accounted for 93%, 89%, 88%, and 65% of total sources for nitrogen, phosphorus, potassium, and calcium, respectively (Chapin, 1991).

    Figure 1. A decomposition-centric biogeochemical model of nutrient cycling. Although there is significant external input (1) and output (2) from neighboring ecosystems (such as erosion), weathering of primary minerals (3), loss of secondary minerals (4), atmospheric deposition and N-fixation (5) and volatilization (6), the majority of plant-available nutrients are supplied by internal recycling through decomposition. Nutrients that are taken up by plants (7) are either consumed by fauna (8) and returned to the soil through defecation and mortality (10) or returned to the soil through litterfall and mortality (9). Detritus and humus can be immobilized into microbial biomass (11 and 13). Humus is formed by the transformation and stabilization of detrital (12) and microbial (14) compounds. During these transformations, SOM is being continually mineralized by the microorganisms (15), replenishing the inorganic nutrient pool (after Swift et al., 1979).

    The second major ecosystem role of decomposition is in the formation and stabilization of humus. The cycling and stabilization of SOM in the litter-soil system is presented in a conceptual model in Figure 2. Parallel with litterfall and most root turnover, detrital processing is concentrated at or near the soil surface. As labile SOM is preferentially degraded, there is a progressive shift from labile to passive SOM with increasing depth.

    There are three basic mechanisms for SOM accumulation in the mineral soil: bioturbation or physical mixing of the soil by burrowing animals (e.g., earthworms, gophers, etc.), in situ decomposition of roots and root exudates, and the leaching of soluble organic compounds. In the absence of bioturbation, distinct litter layers often accumulate above the mineral soil. In grasslands where the majority of net primary productivity (NPP) is allocated belowground, root inputs will dominate. In sandy soils with ample rainfall, leaching may be the major process incorporating carbon into the soil.

    Figure 2. Conceptual model of carbon cycling in the litter-soil system. In each horizon or depth increment, SOM is represented by three pools: labile SOM, slow SOM, and passive SOM. Inputs include aboveground litterfall and belowground root turnover and exudates, which will be distributed among the pools based on the biochemical nature of the material. Outputs from each pool include mineralization to CO2 (dashed lines), humification (labile→slow→passive), and downward transport due to leaching and physical mixing. Comminution by soil fauna will accelerate the decomposition process and reveal previously inaccessible materials. Soil mixing and other disturbances can also make physically protected passive SOM available to microbial attack (passive→slow).

    There exists an amazing body of literature on the subject of decomposition that draws from many disciplines - including ecology, soil science, microbiology, plant physiology, biochemistry, and zoology. In this chapter, we have attempted to draw information from all of these fields to present an integrated analysis of decomposition in a biogeochemical context. We begin by reviewing the composition of detrital resources and SOM (Section 8.07.2), the organisms responsible for decomposition (Section 8.07.3), and some methods for quantifying decomposition rates (Section 8.07.4). This is followed by a discussion of the mechanisms behind decomposition (Section 8.07.5), humification (Section 8.07.6), and the controls on these processes (Section 8.07.7). We conclude the chapter with a brief discussion on how current biogeochemical models incorporate this information (Section 8.07.8).

  11. Developing a New Computer-Aided Clinical Decision Support System for Prediction of Successful Postcardioversion Patients with Persistent Atrial Fibrillation.

    PubMed

    Sterling, Mark; Huang, David T; Ghoraani, Behnaz

    2015-01-01

    We propose a new algorithm to predict the outcome of direct-current electric (DCE) cardioversion for atrial fibrillation (AF) patients. AF is the most common cardiac arrhythmia and DCE cardioversion is a noninvasive treatment to end AF and return the patient to sinus rhythm (SR). Unfortunately, there is a high risk of AF recurrence in persistent AF patients; hence clinically it is important to predict the DCE outcome in order to avoid the procedure's side effects. This study develops a feature extraction and classification framework to predict AF recurrence patients from the underlying structure of atrial activity (AA). A multiresolution signal decomposition technique, based on matching pursuit (MP), was used to project the AA over a dictionary of wavelets. Seven novel features were derived from the decompositions and were employed in a quadratic discrimination analysis classification to predict the success of post-DCE cardioversion in 40 patients with persistent AF. The proposed algorithm achieved 100% sensitivity and 95% specificity, indicating that the proposed computational approach captures detailed structural information about the underlying AA and could provide reliable information for effective management of AF.

  12. Trenching reduces soil heterotrophic activity in a loblolly pine (Pinus taeda) forest exposed to elevated atmospheric [CO2] and N fertilization

    Treesearch

    J.E. Drake; A.C. Oishi; M. A. Giasson; R. Oren; Kurt Johnsen; A.C. Finzi

    2012-01-01

    Forests return large quantities of C to the atmosphere through soil respiration (Rsoil), which is often conceptually separated into autotrophic C respired by living roots (Rroot) and heterotrophic decomposition (Rhet) of soil organic matter (SOM). Live roots provide C sources for microbial metabolism via exudation, allocation to fungal associates, sloughed-off cells,...

  13. Predicting live and dead basal area in bark beetle-affected forests from discrete-return LiDAR

    Treesearch

    Andrew T. Hudak; Ben Bright; Jose Negron; Robert McGaughey; Hans-Erik Andersen; Jeffrey A. Hicke

    2012-01-01

    Recent bark beetle outbreaks in western North America have been widespread and severe. High tree mortality due to bark beetles affects the fundamental ecosystem processes of primary production and decomposition that largely determine carbon balance (Kurz et al. 2008, Pfeifer et al. 2011, Hicke et al. 2012). Forest managers need accurate data on beetle-induced tree...

  14. Optimized Kernel Entropy Components.

    PubMed

    Izquierdo-Verdiguier, Emma; Laparra, Valero; Jenssen, Robert; Gomez-Chova, Luis; Camps-Valls, Gustau

    2017-06-01

    This brief addresses two main issues of the standard kernel entropy component analysis (KECA) algorithm: the optimization of the kernel decomposition and the optimization of the Gaussian kernel parameter. KECA roughly reduces to a sorting of the importance of kernel eigenvectors by entropy instead of variance, as in the kernel principal components analysis. In this brief, we propose an extension of the KECA method, named optimized KECA (OKECA), that directly extracts the optimal features retaining most of the data entropy by means of compacting the information in very few features (often in just one or two). The proposed method produces features which have higher expressive power. In particular, it is based on the independent component analysis framework, and introduces an extra rotation to the eigen decomposition, which is optimized via gradient-ascent search. This maximum entropy preservation suggests that OKECA features are more efficient than KECA features for density estimation. In addition, a critical issue in both the methods is the selection of the kernel parameter, since it critically affects the resulting performance. Here, we analyze the most common kernel length-scale selection criteria. The results of both the methods are illustrated in different synthetic and real problems. Results show that OKECA returns projections with more expressive power than KECA, the most successful rule for estimating the kernel parameter is based on maximum likelihood, and OKECA is more robust to the selection of the length-scale parameter in kernel density estimation.
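
    For orientation, the sketch below implements standard KECA in numpy (RBF kernel, eigendecomposition, components sorted by Renyi entropy contribution rather than by eigenvalue); the fixed kernel width is a placeholder, and OKECA's extra rotation and maximum-likelihood length-scale selection are not implemented here.

```python
# Hedged sketch: kernel entropy component analysis (entropy-sorted kernel eigenpairs).
import numpy as np

def keca_features(X, n_components=2, length_scale=1.0):
    """X: array (n_samples, n_features). Returns (n_samples, n_components) scores."""
    sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-sq_dists / (2.0 * length_scale ** 2))       # uncentred RBF kernel matrix
    eigvals, eigvecs = np.linalg.eigh(K)                     # ascending order
    eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]
    # Renyi entropy contribution of each eigenpair: lambda_i * (1^T e_i)^2
    entropy = eigvals * (eigvecs.sum(axis=0) ** 2)
    order = np.argsort(entropy)[::-1][:n_components]
    # kernel-PCA-style scores, restricted to the most "entropic" components
    return eigvecs[:, order] * np.sqrt(np.clip(eigvals[order], 0.0, None))
```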

  15. Ozone assisted oxidation of gaseous PCDD/Fs over CNTs-containing composite catalysts at low temperature.

    PubMed

    Wang, Qiulin; Tang, Minghui; Peng, Yaqi; Du, Cuicui; Lu, Shengyong

    2018-05-01

    Ozone-assisted catalytic oxidation of gaseous PCDD/Fs (polychlorinated dibenzo-p-dioxins and polychlorinated dibenzofurans) at low temperature (150 °C) over carbon nanotube (CNT)-supported vanadium oxide/titanium dioxide (V/Ti-CNTs) or vanadium oxide-manganese oxide/titanium dioxide (V-Mn/Ti-CNTs) catalysts was investigated. The removal efficiency (RE) and decomposition efficiency (DE) of PCDD/Fs achieved with V-Mn/Ti-CNTs alone were 95% and 45% at 150 °C under a space velocity (SV) of 14000 h⁻¹; these values reached 99% and 91% when the catalyst and a low concentration (50 ppm) of ozone were used in combination. The ozone promotion effect on catalytic activity was further enhanced by the addition of manganese oxide (MnOₓ) and CNTs. Adding MnOₓ and CNTs to V/Ti catalysts facilitated ozone decomposition (creating more active species on the catalyst surface) and thus improved ozone utilization (demanding a relatively lower ozone addition concentration). This study also sheds light on the ozone promotion mechanism based on a comparison of catalyst properties (i.e., components, surface area, surface acidity, redox ability and oxidation state) before and after ozone treatment. The experimental results indicate that a synergistic effect exists between catalyst and ozone: ozone is captured and decomposed on the catalyst surface, while the catalyst properties are in turn changed by the ozone. Reactive oxygen species from ozone decomposition and the accompanying optimization of catalyst properties are crucial reasons for catalyst activation at low temperature. Copyright © 2018 Elsevier Ltd. All rights reserved.

  16. Hydrogen iodide decomposition

    DOEpatents

    O'Keefe, Dennis R.; Norman, John H.

    1983-01-01

    Liquid hydrogen iodide is decomposed to form hydrogen and iodine in the presence of water using a soluble catalyst. Decomposition is carried out at a temperature between about 350 K and about 525 K and at a corresponding pressure between about 25 and about 300 atmospheres in the presence of an aqueous solution which acts as a carrier for the homogeneous catalyst. Various halides of the platinum group metals, particularly Pd, Rh and Pt, are used, particularly the chlorides and iodides which exhibit good solubility. After separation of the H₂, the stream from the decomposer is countercurrently extracted with nearly dry HI to remove I₂. The wet phase contains most of the catalyst and is recycled directly to the decomposition step. The catalyst in the remaining almost dry HI-I₂ phase is then extracted into a wet phase which is also recycled. The catalyst-free HI-I₂ phase is finally distilled to separate the HI and I₂. The HI is recycled to the reactor; the I₂ is returned to a reactor operating in accordance with the Bunsen equation to create more HI.

  17. Climate fails to predict wood decomposition at regional scales

    Treesearch

    Mark A. Bradford; Robert J. Warren; Petr Baldrian; Thomas W. Crowther; Daniel S. Maynard; Emily E. Oldfield; William R. Wieder; Stephen A. Wood; Joshua R. King

    2014-01-01

    Decomposition of organic matter strongly influences ecosystem carbon storage [1]. In Earth-system models, climate is a predominant control on the decomposition rates of organic matter [2-5]. This assumption is based on the mean response of decomposition to climate, yet there is a growing appreciation in other areas of global change science that projections based on...

  18. An Efficient Local Correlation Matrix Decomposition Approach for the Localization Implementation of Ensemble-Based Assimilation Methods

    NASA Astrophysics Data System (ADS)

    Zhang, Hongqin; Tian, Xiangjun

    2018-04-01

    Ensemble-based data assimilation methods often use the so-called localization scheme to improve the representation of the ensemble background error covariance (Be). Extensive research has been undertaken to reduce the computational cost of these methods by using the localized ensemble samples to localize Be by means of a direct decomposition of the local correlation matrix C. However, the computational costs of the direct decomposition of the local correlation matrix C are still extremely high due to its high dimension. In this paper, we propose an efficient local correlation matrix decomposition approach based on the concept of alternating directions. This approach is intended to avoid direct decomposition of the correlation matrix. Instead, we first decompose the correlation matrix into 1-D correlation matrices in the three coordinate directions, then construct their empirical orthogonal function decomposition at low resolution. This procedure is followed by the 1-D spline interpolation process to transform the above decompositions to the high-resolution grid. Finally, an efficient correlation matrix decomposition is achieved by computing the very similar Kronecker product. We conducted a series of comparison experiments to illustrate the validity and accuracy of the proposed local correlation matrix decomposition approach. The effectiveness of the proposed correlation matrix decomposition approach and its efficient localization implementation of the nonlinear least-squares four-dimensional variational assimilation are further demonstrated by several groups of numerical experiments based on the Advanced Research Weather Research and Forecasting model.
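
    The separable construction can be illustrated briefly: build 1-D correlation matrices along each coordinate direction, truncate their eigen decompositions, and combine them with Kronecker products. The Gaussian correlation function, grid sizes and truncation below are assumptions, and the coarse-to-fine spline interpolation used in the paper is omitted.

```python
# Hedged sketch: a 3-D correlation "square root" built from 1-D directional factors.
import numpy as np

def corr_1d(n, length_scale):
    """Gaussian correlation matrix along one coordinate direction."""
    i = np.arange(n)
    return np.exp(-0.5 * ((i[:, None] - i[None, :]) / length_scale) ** 2)

def truncated_sqrt(C, n_modes):
    """Leading-EOF square root: S @ S.T approximates C."""
    vals, vecs = np.linalg.eigh(C)
    idx = np.argsort(vals)[::-1][:n_modes]
    return vecs[:, idx] * np.sqrt(np.clip(vals[idx], 0.0, None))

# 1-D factors in x, y, z; the 3-D correlation matrix is approximated by Cx ⊗ Cy ⊗ Cz
Sx = truncated_sqrt(corr_1d(20, 4.0), 8)
Sy = truncated_sqrt(corr_1d(20, 4.0), 8)
Sz = truncated_sqrt(corr_1d(10, 2.0), 4)

S3d = np.kron(np.kron(Sx, Sy), Sz)   # (Sx⊗Sy⊗Sz)(Sx⊗Sy⊗Sz)' = (SxSx')⊗(SySy')⊗(SzSz')
print(S3d.shape)                     # (20*20*10, 8*8*4) = (4000, 256)
```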

  19. A study of stationarity in time series by using wavelet transform

    NASA Astrophysics Data System (ADS)

    Dghais, Amel Abdoullah Ahmed; Ismail, Mohd Tahir

    2014-07-01

    In this work the core objective is to apply discrete wavelet transform (DWT) functions, namely the Haar, Daubechies, Symmlet, Coiflet and discrete approximation of the Meyer wavelets, to non-stationary financial time series data from the US stock market (DJIA30). The data consist of 2048 daily closing index values from December 17, 2004 until October 23, 2012. The unit root test results show that the data are non-stationary in levels. In order to study the stationarity of a time series, the autocorrelation function (ACF) is used. Results indicate that the Haar function yields the least noisy series compared with the Daubechies, Symmlet, Coiflet and discrete approximation of the Meyer wavelets. In addition, the original data decomposed by DWT give a less noisy series than the DWT decomposition of the return time series.
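
    A minimal sketch of the decomposition step is given below, assuming the PyWavelets package; the closing-index series is synthetic random-walk data rather than the DJIA30 sample used in the paper.

```python
# Hedged sketch: DWT of a level series and its returns with several wavelet families.
# Assumed dependency: pip install PyWavelets
import numpy as np
import pywt

rng = np.random.default_rng(1)
index_level = 10000 + np.cumsum(rng.normal(0, 50, size=2048))   # synthetic closing index
returns = np.diff(np.log(index_level))

for name in ["haar", "db4", "sym8", "coif3", "dmey"]:
    coeffs_level = pywt.wavedec(index_level, name, level=4)     # approximation + details
    coeffs_return = pywt.wavedec(returns, name, level=4)
    print(name, [len(c) for c in coeffs_level])
```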

  20. [Effects of nitrogen application rates and straw returning on nutrient balance and grain yield of late sowing wheat in rice-wheat rotation].

    PubMed

    Zhang, Shan; Shi, Zu-liang; Yang, Si-jun; Gu, Ke-jun; Dai, Ting-bo; Wang, Fei; Li, Xiang; Sun, Ren-hua

    2015-09-01

    Field experiments were conducted to study the effects of nitrogen application rates and straw returning on grain yield, nutrient accumulation, nutrient release from straw and nutrient balance in late-sown wheat. The results showed that straw returning together with an appropriate application of nitrogen fertilizer improved grain yield. Dry matter, nitrogen, phosphorus and potassium accumulation increased significantly as the nitrogen application rate increased. At the same nitrogen application rate (270 kg N·hm⁻²), the dry matter, phosphorus and potassium accumulation of the treatment with straw returning was higher than that without straw returning, but the nitrogen accumulation was lower. Higher nitrogen application rates promoted straw decomposition and nutrient release, and decreased the proportion of nutrients released from straw after jointing. The release of dry matter, phosphorus and potassium from straw showed a reverse 'N'-type change over the wheat growing season, while nitrogen release showed a 'V'-type change. The nutrient surplus increased significantly with the nitrogen application rate. At the nitrogen application rate giving the highest grain yield, nitrogen and potassium showed significant surpluses, while phosphorus input remained roughly in balance. It is concluded that for late-sown wheat with straw returning, applying nitrogen at 257 kg·hm⁻² and reducing potassium fertilizer application could improve grain yield and reduce nutrient losses.

  1. Wavelet optimization for content-based image retrieval in medical databases.

    PubMed

    Quellec, G; Lamard, M; Cazuguel, G; Cochener, B; Roux, C

    2010-04-01

    We propose in this article a content-based image retrieval (CBIR) method for diagnosis aid in medical fields. In the proposed system, images are indexed in a generic fashion, without extracting domain-specific features: a signature is built for each image from its wavelet transform. These image signatures characterize the distribution of wavelet coefficients in each subband of the decomposition. A distance measure is then defined to compare two image signatures and thus retrieve the most similar images in a database when a query image is submitted by a physician. To retrieve relevant images from a medical database, the signatures and the distance measure must be related to the medical interpretation of images. As a consequence, we introduce several degrees of freedom in the system so that it can be tuned to any pathology and image modality. In particular, we propose to adapt the wavelet basis, within the lifting scheme framework, and to use a custom decomposition scheme. Weights are also introduced between subbands. All these parameters are tuned by an optimization procedure, using the medical grading of each image in the database to define a performance measure. The system is assessed on two medical image databases: one for diabetic retinopathy follow up and one for screening mammography, as well as a general purpose database. Results are promising: a mean precision of 56.50%, 70.91% and 96.10% is achieved for these three databases, when five images are returned by the system. Copyright 2009 Elsevier B.V. All rights reserved.
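
    A minimal sketch of the signature-and-distance idea described above: per-subband statistics of a 2-D wavelet transform as the image signature and a weighted distance between signatures. The wavelet, level and the simple mean/standard-deviation statistics are assumptions; the paper's tuned lifting-scheme basis, custom decomposition scheme and optimized subband weights are not reproduced.

      import numpy as np
      import pywt

      def wavelet_signature(image, wavelet="db2", level=3):
          """Per-subband statistics of the 2-D wavelet transform (a simplified image signature)."""
          coeffs = pywt.wavedec2(image, wavelet, level=level)
          sig = [(np.mean(np.abs(coeffs[0])), np.std(coeffs[0]))]      # approximation subband
          for (cH, cV, cD) in coeffs[1:]:
              for band in (cH, cV, cD):
                  sig.append((np.mean(np.abs(band)), np.std(band)))
          return np.asarray(sig)

      def signature_distance(sig_a, sig_b, weights=None):
          """Weighted L1 distance between two signatures; weights would be tuned per pathology."""
          d = np.abs(sig_a - sig_b).sum(axis=1)
          w = np.ones(len(d)) if weights is None else np.asarray(weights)
          return float(np.dot(w, d))

      rng = np.random.default_rng(1)
      query, candidate = rng.random((128, 128)), rng.random((128, 128))
      print(signature_distance(wavelet_signature(query), wavelet_signature(candidate)))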

  2. Comparison of methods for non-stationary hydrologic frequency analysis: Case study using annual maximum daily precipitation in Taiwan

    NASA Astrophysics Data System (ADS)

    Chen, Po-Chun; Wang, Yuan-Heng; You, Gene Jiing-Yun; Wei, Chih-Chiang

    2017-02-01

    Future climatic conditions will likely not satisfy the stationarity assumption. To address this concern, this study applied three methods to analyze non-stationarity in hydrologic conditions. Based on the principle of identifying distribution and trends (IDT) with time-varying moments, we employed the parametric weighted least squares (WLS) estimation in conjunction with the non-parametric discrete wavelet transform (DWT) and ensemble empirical mode decomposition (EEMD). Our aim was to evaluate the applicability of non-parametric approaches compared with traditional parametric methods. In contrast to most previous studies, which analyzed the non-stationarity of first moments, we incorporated second-moment analysis. Through the estimation of long-term risk, we were able to examine the behavior of return periods under two different definitions: the reciprocal of the exceedance probability of occurrence and the expected recurrence time. The proposed framework represents an improvement over stationary frequency analysis for the design of hydraulic systems. A case study was performed using precipitation data from major climate stations in Taiwan to evaluate the non-stationarity of annual maximum daily precipitation. The results demonstrate the applicability of these three methods in the identification of non-stationarity. For most cases, no significant differences were observed with regard to the trends identified using WLS, DWT, and EEMD. According to the results, a linear model should be able to capture time-variance in either the first or second moment, while parabolic trends should be used with caution due to their characteristic rapid increases. It is also observed that local variations in precipitation tend to be overemphasized by DWT and EEMD. The two definitions provided for the concept of return period allow for ambiguous interpretation. With the consideration of non-stationarity, the return period is relatively small under the definition of expected recurrence time compared to the estimation using the reciprocal of the exceedance probability of occurrence. However, the calculation of expected recurrence time is based on the assumption of perfect knowledge of long-term risk, which involves high uncertainty. When the risk is decreasing with time, the expected recurrence time will lead to the divergence of the return period and make this definition inapplicable for engineering purposes.
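
    A minimal numpy sketch contrasting the two definitions of return period mentioned above under a time-varying exceedance probability. The linearly increasing probability is an illustrative assumption, not the study's fitted model.

      import numpy as np

      def expected_recurrence_time(p, horizon=2000):
          """Expected waiting time E[X] = sum_x x * p_x * prod_{t<x} (1 - p_t) for time-varying p_t."""
          p = np.asarray([p(t) for t in range(1, horizon + 1)])
          survival = np.concatenate(([1.0], np.cumprod(1.0 - p)[:-1]))   # prod_{t<x}(1 - p_t)
          return float(np.sum(np.arange(1, horizon + 1) * p * survival))

      # Illustrative non-stationary exceedance probability, starting near 1% per year and increasing.
      p_t = lambda t: min(1.0, 0.01 + 0.0002 * t)

      T_reciprocal = 1.0 / p_t(1)                 # definition 1: reciprocal of exceedance probability
      T_expected = expected_recurrence_time(p_t)  # definition 2: expected recurrence time
      print(T_reciprocal, T_expected)             # expected recurrence time is smaller when risk grows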

  3. A compositional approach to building applications in a computational environment

    NASA Astrophysics Data System (ADS)

    Roslovtsev, V. V.; Shumsky, L. D.; Wolfengagen, V. E.

    2014-04-01

    The paper presents an approach to creating an applicative computational environment featuring decomposition of computational processes and data, and a compositional approach to application building. The approach in question is based on the notion of combinator - both in systems with variable binding (such as λ-calculi) and in those allowing programming without variables (combinatory logic style). We present a computation decomposition technique based on objects' structural decomposition. The computational environment's architecture is based on a network with nodes playing several roles simultaneously.
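
    For readers unfamiliar with combinator-style composition, here is a tiny illustration in Python (not drawn from the paper's applicative environment): the classic S, K and I combinators and a variable-free function composition built from them.

      # Classic S, K, I combinators written as curried Python functions.
      S = lambda f: lambda g: lambda x: f(x)(g(x))   # substitution/application combinator
      K = lambda x: lambda y: x                      # constant combinator
      I = S(K)(K)                                    # identity, derived as S K K

      # Variable-free composition: B f g x = f (g x), built from S and K.
      B = S(K(S))(K)

      double = lambda n: 2 * n
      increment = lambda n: n + 1
      print(I(42))                      # 42
      print(B(double)(increment)(10))   # double(increment(10)) == 22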

  4. Classification of subsurface objects using singular values derived from signal frames

    DOEpatents

    Chambers, David H; Paglieroni, David W

    2014-05-06

    The classification system represents a detected object with a feature vector derived from the return signals acquired by an array of N transceivers operating in multistatic mode. The classification system generates the feature vector by transforming the real-valued return signals into complex-valued spectra, using, for example, a Fast Fourier Transform. The classification system then generates a feature vector of singular values for each user-designated spectral sub-band by applying a singular value decomposition (SVD) to the N×N square complex-valued matrix formed from sub-band samples associated with all possible transmitter-receiver pairs. The resulting feature vector of singular values may be transformed into a feature vector of singular value likelihoods and then subjected to a multi-category linear or neural network classifier for object classification.
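
    A minimal numpy sketch of this feature-extraction pipeline, assuming synthetic return signals and equally spaced sub-bands; averaging each sub-band into a single N×N matrix is a simplification of the patent's per-sample construction.

      import numpy as np

      def subband_singular_values(returns, n_subbands=4):
          """returns: array of shape (N, N, T) -- time-domain signal for each Tx/Rx pair."""
          N, _, T = returns.shape
          spectra = np.fft.rfft(returns, axis=-1)                  # complex-valued spectra
          bands = np.array_split(np.arange(spectra.shape[-1]), n_subbands)
          features = []
          for band in bands:
              M = spectra[:, :, band].mean(axis=-1)                # N x N complex matrix for this sub-band
              features.append(np.linalg.svd(M, compute_uv=False))  # singular values only
          return np.concatenate(features)

      rng = np.random.default_rng(2)
      signals = rng.standard_normal((6, 6, 256))                   # 6 transceivers, 256 time samples
      print(subband_singular_values(signals).shape)                # 4 sub-bands x 6 singular values = (24,)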

  5. Zero-Point Energy Constraint for Unimolecular Dissociation Reactions. Giving Trajectories Multiple Chances To Dissociate Correctly.

    PubMed

    Paul, Amit K; Hase, William L

    2016-01-28

    A zero-point energy (ZPE) constraint model is proposed for classical trajectory simulations of unimolecular decomposition and applied to CH4* → H + CH3 decomposition. With this model trajectories are not allowed to dissociate unless they have ZPE in the CH3 product. If not, they are returned to the CH4* region of phase space and, if necessary, given additional opportunities to dissociate with ZPE. The lifetime for dissociation of an individual trajectory is the time it takes to dissociate with ZPE in CH3, including multiple possible returns to CH4*. With this ZPE constraint the dissociation of CH4* is exponential in time as expected for intrinsic RRKM dynamics and the resulting rate constant is in good agreement with the harmonic quantum value of RRKM theory. In contrast, a model that discards trajectories without ZPE in the reaction products gives a CH4* → H + CH3 rate constant that agrees with the classical and not quantum RRKM value. The rate constant for the purely classical simulation indicates that anharmonicity may be important and the rate constant from the ZPE constrained classical trajectory simulation may not represent the complete anharmonicity of the RRKM quantum dynamics. The ZPE constraint model proposed here is compared with previous models for restricting ZPE flow in intramolecular dynamics, and connecting product and reactant/product quantum energy levels in chemical dynamics simulations.

  6. Canonical Sectors and Evolution of Firms in the US Stock Markets

    NASA Astrophysics Data System (ADS)

    Hayden, Lorien; Chachra, Ricky; Alemi, Alexander; Ginsparg, Paul; Sethna, James

    2015-03-01

    In this work, we show how unsupervised machine learning can provide a more objective and comprehensive broad-level sector decomposition of stocks. Classification of companies into sectors of the economy is important for macroeconomic analysis, and for investments into the sector-specific financial indices and exchange traded funds (ETFs). Historically, these major industrial classification systems and financial indices have been based on expert opinion and developed manually. Our method, in contrast, produces an emergent low-dimensional structure in the space of historical stock price returns. This emergent structure automatically identifies ``canonical sectors'' in the market, and assigns every stock a participation weight into these sectors. Furthermore, by analyzing data from different periods, we show how these weights for listed firms have evolved over time. This work was partially supported by NSF Grants DMR 1312160, OCI 0926550 and DGE-1144153 (LXH).

  7. Volatility behavior of visibility graph EMD financial time series from Ising interacting system

    NASA Astrophysics Data System (ADS)

    Zhang, Bo; Wang, Jun; Fang, Wen

    2015-08-01

    A financial market dynamics model is developed and investigated by a stochastic Ising system, where the Ising model is the most popular ferromagnetic model in statistical physics systems. Applying two graph-based analyses and the multiscale entropy method, we investigate and compare the statistical volatility behavior of return time series and the corresponding IMF series derived from the empirical mode decomposition (EMD) method. Real stock market indices are also studied comparatively with the simulation data of the proposed model. Further, we find that the degree distribution of the visibility graph for the simulation series has power-law tails, and the assortative network exhibits the mixing pattern property. All these features are in agreement with the real market data; the research confirms that the financial model established by the Ising system is reasonable.

  8. A practical material decomposition method for x-ray dual spectral computed tomography.

    PubMed

    Hu, Jingjing; Zhao, Xing

    2016-03-17

    X-ray dual spectral CT (DSCT) scans the measured object with two different x-ray spectra, and the acquired rawdata can be used to perform the material decomposition of the object. Direct calibration methods allow a faster material decomposition for DSCT and can be separated in two groups: image-based and rawdata-based. The image-based method is an approximative method, and beam hardening artifacts remain in the resulting material-selective images. The rawdata-based method generally obtains better image quality than the image-based method, but this method requires geometrically consistent rawdata. However, today's clinical dual energy CT scanners usually measure different rays for different energy spectra and acquire geometrically inconsistent rawdata sets, and thus cannot meet the requirement. This paper proposes a practical material decomposition method to perform rawdata-based material decomposition in the case of inconsistent measurement. This method first yields the desired consistent rawdata sets from the measured inconsistent rawdata sets, and then employs rawdata-based technique to perform material decomposition and reconstruct material-selective images. The proposed method was evaluated by use of simulated FORBILD thorax phantom rawdata and dental CT rawdata, and simulation results indicate that this method can produce highly quantitative DSCT images in the case of inconsistent DSCT measurements.

  9. Through-wall image enhancement using fuzzy and QR decomposition.

    PubMed

    Riaz, Muhammad Mohsin; Ghafoor, Abdul

    2014-01-01

    QR decomposition and fuzzy logic based scheme is proposed for through-wall image enhancement. QR decomposition is less complex compared to singular value decomposition. Fuzzy inference engine assigns weights to different overlapping subspaces. Quantitative measures and visual inspection are used to analyze existing and proposed techniques.

  10. Adaptive Fourier decomposition based ECG denoising.

    PubMed

    Wang, Ze; Wan, Feng; Wong, Chi Man; Zhang, Liming

    2016-10-01

    A novel ECG denoising method is proposed based on the adaptive Fourier decomposition (AFD). The AFD decomposes a signal according to its energy distribution, thereby making this algorithm suitable for separating pure ECG signal and noise with overlapping frequency ranges but different energy distributions. A stop criterion for the iterative decomposition process in the AFD is calculated on the basis of the estimated signal-to-noise ratio (SNR) of the noisy signal. The proposed AFD-based method is validated on a synthetic ECG signal generated by an ECG model and on real ECG signals from the MIT-BIH Arrhythmia Database, both with additive Gaussian white noise. Simulation results of the proposed method show better performance on denoising and QRS detection in comparison with major ECG denoising schemes based on the wavelet transform, the Stockwell transform, the empirical mode decomposition, and the ensemble empirical mode decomposition. Copyright © 2016 Elsevier Ltd. All rights reserved.

  11. Land Covers Classification Based on Random Forest Method Using Features from Full-Waveform LIDAR Data

    NASA Astrophysics Data System (ADS)

    Ma, L.; Zhou, M.; Li, C.

    2017-09-01

    In this study, a Random Forest (RF) based land cover classification method is presented to predict the types of land covers in the Miyun area. The returned full-waveforms, which were acquired by a LiteMapper 5600 airborne LiDAR system, were processed, including waveform filtering, waveform decomposition and feature extraction. The commonly used features, namely distance, intensity, Full Width at Half Maximum (FWHM), skewness and kurtosis, were extracted. These waveform features were used as attributes of training data for generating the RF prediction model. The RF prediction model was applied to predict the types of land covers in the Miyun area as trees, buildings, farmland and ground. The classification results for these four types of land covers were assessed according to the ground truth information acquired from CCD image data of the same region. The RF classification results were compared with those of the SVM method and showed better performance. The RF classification accuracy reached 89.73% and the classification Kappa was 0.8631.
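
    A minimal scikit-learn sketch of this classification setup, with synthetic stand-ins for the five waveform features and the four land-cover classes; the hyperparameters are illustrative, not those of the study.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import accuracy_score, cohen_kappa_score

      rng = np.random.default_rng(3)
      # Synthetic stand-ins for the five waveform features: distance, intensity, FWHM, skewness, kurtosis.
      X = rng.standard_normal((2000, 5))
      y = rng.integers(0, 4, size=2000)   # 0: trees, 1: buildings, 2: farmland, 3: ground

      X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
      rf = RandomForestClassifier(n_estimators=200, random_state=0)
      rf.fit(X_train, y_train)
      pred = rf.predict(X_test)
      print("accuracy:", accuracy_score(y_test, pred), "kappa:", cohen_kappa_score(y_test, pred))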

  12. Rank-based decompositions of morphological templates.

    PubMed

    Sussner, P; Ritter, G X

    2000-01-01

    Methods for matrix decomposition have found numerous applications in image processing, in particular for the problem of template decomposition. Since existing matrix decomposition techniques are mainly concerned with the linear domain, we consider it timely to investigate matrix decomposition techniques in the nonlinear domain with applications in image processing. The mathematical basis for these investigations is the new theory of rank within minimax algebra. Thus far, only minimax decompositions of rank 1 and rank 2 matrices into outer product expansions are known to the image processing community. We derive a heuristic algorithm for the decomposition of matrices having arbitrary rank.

  13. Climate fails to predict wood decomposition at regional scales

    NASA Astrophysics Data System (ADS)

    Bradford, Mark A.; Warren, Robert J., II; Baldrian, Petr; Crowther, Thomas W.; Maynard, Daniel S.; Oldfield, Emily E.; Wieder, William R.; Wood, Stephen A.; King, Joshua R.

    2014-07-01

    Decomposition of organic matter strongly influences ecosystem carbon storage. In Earth-system models, climate is a predominant control on the decomposition rates of organic matter. This assumption is based on the mean response of decomposition to climate, yet there is a growing appreciation in other areas of global change science that projections based on mean responses can be irrelevant and misleading. We test whether climate controls on the decomposition rate of dead wood--a carbon stock estimated to represent 73 +/- 6 Pg carbon globally--are sensitive to the spatial scale from which they are inferred. We show that the common assumption that climate is a predominant control on decomposition is supported only when local-scale variation is aggregated into mean values. Disaggregated data instead reveal that local-scale factors explain 73% of the variation in wood decomposition, and climate only 28%. Further, the temperature sensitivity of decomposition estimated from local versus mean analyses is 1.3-times greater. Fundamental issues with mean correlations were highlighted decades ago, yet mean climate-decomposition relationships are used to generate simulations that inform management and adaptation under environmental change. Our results suggest that to predict accurately how decomposition will respond to climate change, models must account for local-scale factors that control regional dynamics.

  14. The Study and Development of Metal Oxide Reactive Adsorbents for the Destruction of Toxic Organic Compounds

    DTIC Science & Technology

    2008-04-15

    been achieved, but our microreactor studies showed a slight loss in product flow from the reactor, indicating a loss of decomposition capacity for...examination by infrared spectroscopy. A second sample of the same solid was placed in the microreactor as before and treated in the same fashion...

  15. Fast heap transform-based QR-decomposition of real and complex matrices: algorithms and codes

    NASA Astrophysics Data System (ADS)

    Grigoryan, Artyom M.

    2015-03-01

    In this paper, we describe a new look at the application of Givens rotations to the QR-decomposition problem, which is similar to the method of Householder transformations. We apply the concept of the discrete heap transform, or signal-induced unitary transforms, which had been introduced by Grigoryan (2006) and used in signal and image processing. Both cases of real and complex nonsingular matrices are considered and examples of performing QR-decomposition of square matrices are given. The proposed method of QR-decomposition for the complex matrix is novel, differs from the known method of complex Givens rotation, and is based on analytical equations for the heap transforms. Many examples illustrating the proposed heap transform method of QR-decomposition are given, the algorithms are described in detail, and MATLAB-based codes are included.
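
    For context, a standard Givens-rotation QR factorization of a real matrix is sketched below in numpy; this is the textbook rotation-based approach, not the heap-transform variant proposed in the paper.

      import numpy as np

      def givens_qr(A):
          """QR decomposition of a real matrix via Givens rotations (standard textbook version)."""
          R = A.astype(float).copy()
          m, n = R.shape
          Q = np.eye(m)
          for j in range(n):
              for i in range(m - 1, j, -1):          # zero out R[i, j] using rows i-1 and i
                  a, b = R[i - 1, j], R[i, j]
                  r = np.hypot(a, b)
                  if r == 0.0:
                      continue
                  c, s = a / r, b / r
                  G = np.array([[c, s], [-s, c]])
                  R[[i - 1, i], :] = G @ R[[i - 1, i], :]
                  Q[:, [i - 1, i]] = Q[:, [i - 1, i]] @ G.T
          return Q, R

      A = np.array([[4.0, 1.0, 2.0], [2.0, 3.0, 0.0], [1.0, 2.0, 5.0]])
      Q, R = givens_qr(A)
      print(np.allclose(Q @ R, A), np.allclose(Q.T @ Q, np.eye(3)))   # True True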

  16. Capturing molecular multimode relaxation processes in excitable gases based on decomposition of acoustic relaxation spectra

    NASA Astrophysics Data System (ADS)

    Zhu, Ming; Liu, Tingting; Wang, Shu; Zhang, Kesheng

    2017-08-01

    Existing two-frequency reconstructive methods can only capture primary (single) molecular relaxation processes in excitable gases. In this paper, we present a reconstructive method based on the novel decomposition of frequency-dependent acoustic relaxation spectra to capture the entire molecular multimode relaxation process. This decomposition of acoustic relaxation spectra is developed from the frequency-dependent effective specific heat, indicating that a multi-relaxation process is the sum of the interior single-relaxation processes. Based on this decomposition, we can reconstruct the entire multi-relaxation process by capturing the relaxation times and relaxation strengths of N interior single-relaxation processes, using the measurements of acoustic absorption and sound speed at 2N frequencies. Experimental data for the gas mixtures CO2-N2 and CO2-O2 validate our decomposition and reconstruction approach.
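
    A minimal sketch of the underlying idea that a multi-relaxation spectrum is a sum of single-relaxation processes, here assuming a Debye-type absorption-per-wavelength form and a least-squares fit to a noisy synthetic spectrum; the paper's reconstruction instead uses exact measurements of absorption and sound speed at 2N frequencies.

      import numpy as np
      from scipy.optimize import curve_fit

      def multi_relaxation(f, A1, f1, A2, f2):
          """Sum of two Debye-type single-relaxation absorption terms (illustrative form)."""
          term = lambda A, fr: A * 2 * (f / fr) / (1 + (f / fr) ** 2)
          return term(A1, f1) + term(A2, f2)

      # Synthetic "measured" spectrum from two relaxation processes plus noise.
      f = np.logspace(3, 7, 60)
      true = (0.012, 3.0e4, 0.004, 8.0e5)   # strengths and relaxation frequencies
      rng = np.random.default_rng(4)
      alpha = multi_relaxation(f, *true) * (1 + 0.02 * rng.standard_normal(f.size))

      popt, _ = curve_fit(multi_relaxation, f, alpha, p0=(0.01, 1e4, 0.01, 1e6))
      print(popt)   # recovered (A1, f1, A2, f2); compare with `true`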

  17. Extracting fingerprint of wireless devices based on phase noise and multiple level wavelet decomposition

    NASA Astrophysics Data System (ADS)

    Zhao, Weichen; Sun, Zhuo; Kong, Song

    2016-10-01

    Wireless devices can be identified by the fingerprint extracted from the transmitted signal, which is useful in wireless communication security and other fields. This paper presents a method that extracts the fingerprint based on the phase noise of the signal and multiple-level wavelet decomposition. The phase of the signal is extracted first and then decomposed by multiple-level wavelet decomposition. The statistics of each wavelet coefficient vector are used to construct the fingerprint. In addition, the relationship between the wavelet decomposition level and recognition accuracy is simulated, and a recommended decomposition level is identified as well. Compared with previous methods, our method is simpler and the recognition accuracy remains high when the signal-to-noise ratio (SNR) is low.
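
    A minimal sketch of this kind of fingerprint extraction using PyWavelets and SciPy, assuming a synthetic carrier with random phase noise; the choice of wavelet, decomposition level and statistics is illustrative rather than the paper's configuration.

      import numpy as np
      import pywt
      from scipy.signal import hilbert

      def phase_wavelet_fingerprint(signal, wavelet="db4", level=5):
          """Fingerprint = per-level statistics of the wavelet decomposition of the unwrapped phase."""
          phase = np.unwrap(np.angle(hilbert(signal)))          # instantaneous phase (incl. phase noise)
          coeffs = pywt.wavedec(phase, wavelet, level=level)
          stats = []
          for c in coeffs:
              stats.extend([np.mean(c), np.std(c), np.mean(np.abs(c))])
          return np.asarray(stats)

      rng = np.random.default_rng(5)
      t = np.arange(4096) / 1e6
      carrier = np.cos(2 * np.pi * 1.0e5 * t + 0.05 * np.cumsum(rng.standard_normal(t.size)))
      print(phase_wavelet_fingerprint(carrier).shape)           # (level + 1) bands x 3 statistics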

  18. Ozone decomposition

    PubMed Central

    Batakliev, Todor; Georgiev, Vladimir; Anachkov, Metody; Rakovsky, Slavcho

    2014-01-01

    Catalytic ozone decomposition is of great significance because ozone is a toxic substance commonly found or generated in human environments (aircraft cabins, offices with photocopiers, laser printers, sterilizers). Considerable work has been done on ozone decomposition reported in the literature. This review provides a comprehensive summary of the literature, concentrating on analysis of the physico-chemical properties, synthesis and catalytic decomposition of ozone. This is supplemented by a review on kinetics and catalyst characterization which ties together the previously reported results. Noble metals and oxides of transition metals have been found to be the most active substances for ozone decomposition. The high price of precious metals stimulated the use of metal oxide catalysts, particularly catalysts based on manganese oxide. It has been determined that the kinetics of ozone decomposition is first order. A mechanism of the reaction of catalytic ozone decomposition is discussed, based on detailed spectroscopic investigations of the catalytic surface, showing the existence of peroxide and superoxide surface intermediates. PMID:26109880

  19. The design and implementation of signal decomposition system of CL multi-wavelet transform based on DSP builder

    NASA Astrophysics Data System (ADS)

    Huang, Yan; Wang, Zhihui

    2015-12-01

    With the development of FPGA, DSP Builder is widely applied to design system-level algorithms. The CL multi-wavelet algorithm is more advanced and effective than scalar wavelets in processing signal decomposition. Thus, a CL multi-wavelet system based on DSP Builder is designed for the first time in this paper. The system mainly contains three parts: a pre-filtering subsystem, a one-level decomposition subsystem and a two-level decomposition subsystem. It can be converted into the hardware language VHDL by the Signal Compiler block that can be used in Quartus II. Analysis of the energy indicator shows that this system outperforms the Daubechies wavelet in signal decomposition. Furthermore, it has proved to be suitable for the implementation of signal fusion based on SoPC hardware, and it will become a solid foundation in this new field.

  20. Scalable parallel elastic-plastic finite element analysis using a quasi-Newton method with a balancing domain decomposition preconditioner

    NASA Astrophysics Data System (ADS)

    Yusa, Yasunori; Okada, Hiroshi; Yamada, Tomonori; Yoshimura, Shinobu

    2018-04-01

    A domain decomposition method for large-scale elastic-plastic problems is proposed. The proposed method is based on a quasi-Newton method in conjunction with a balancing domain decomposition preconditioner. The use of a quasi-Newton method overcomes two problems associated with the conventional domain decomposition method based on the Newton-Raphson method: (1) avoidance of a double-loop iteration algorithm, which generally has large computational complexity, and (2) consideration of the local concentration of nonlinear deformation, which is observed in elastic-plastic problems with stress concentration. Moreover, the application of a balancing domain decomposition preconditioner ensures scalability. Using the conventional and proposed domain decomposition methods, several numerical tests, including weak scaling tests, were performed. The convergence performance of the proposed method is comparable to that of the conventional method. In particular, in elastic-plastic analysis, the proposed method exhibits better convergence performance than the conventional method.

  1. ENVIRONMENTAL ASSESSMENT OF THE BASE CATALYZED DECOMPOSITION (BCD) PROCESS

    EPA Science Inventory

    This report summarizes laboratory-scale, pilot-scale, and field performance data on BCD (Base Catalyzed Decomposition) technology, collected to date by various governmental, academic, and private organizations.

  2. A Type-2 Block-Component-Decomposition Based 2D AOA Estimation Algorithm for an Electromagnetic Vector Sensor Array

    PubMed Central

    Gao, Yu-Fei; Gui, Guan; Xie, Wei; Zou, Yan-Bin; Yang, Yue; Wan, Qun

    2017-01-01

    This paper investigates a two-dimensional angle of arrival (2D AOA) estimation algorithm for the electromagnetic vector sensor (EMVS) array based on Type-2 block component decomposition (BCD) tensor modeling. Such a tensor decomposition method can take full advantage of the multidimensional structural information of electromagnetic signals to accomplish blind estimation for array parameters with higher resolution. However, existing tensor decomposition methods encounter many restrictions in applications of the EMVS array, such as the strict requirement for uniqueness conditions of decomposition, the inability to handle partially-polarized signals, etc. To solve these problems, this paper investigates tensor modeling for partially-polarized signals of an L-shaped EMVS array. The 2D AOA estimation algorithm based on rank-(L1,L2,·) BCD is developed, and the uniqueness condition of decomposition is analyzed. By means of the estimated steering matrix, the proposed algorithm can automatically achieve angle pair-matching. Numerical experiments demonstrate that the present algorithm has the advantages of both accuracy and robustness of parameter estimation. Even under the conditions of lower SNR, small angular separation and limited snapshots, the proposed algorithm still possesses better performance than subspace methods and the canonical polyadic decomposition (CPD) method. PMID:28448431

  3. A Type-2 Block-Component-Decomposition Based 2D AOA Estimation Algorithm for an Electromagnetic Vector Sensor Array.

    PubMed

    Gao, Yu-Fei; Gui, Guan; Xie, Wei; Zou, Yan-Bin; Yang, Yue; Wan, Qun

    2017-04-27

    This paper investigates a two-dimensional angle of arrival (2D AOA) estimation algorithm for the electromagnetic vector sensor (EMVS) array based on Type-2 block component decomposition (BCD) tensor modeling. Such a tensor decomposition method can take full advantage of the multidimensional structural information of electromagnetic signals to accomplish blind estimation for array parameters with higher resolution. However, existing tensor decomposition methods encounter many restrictions in applications of the EMVS array, such as the strict requirement for uniqueness conditions of decomposition, the inability to handle partially-polarized signals, etc. To solve these problems, this paper investigates tensor modeling for partially-polarized signals of an L-shaped EMVS array. The 2D AOA estimation algorithm based on rank-(L1,L2,·) BCD is developed, and the uniqueness condition of decomposition is analyzed. By means of the estimated steering matrix, the proposed algorithm can automatically achieve angle pair-matching. Numerical experiments demonstrate that the present algorithm has the advantages of both accuracy and robustness of parameter estimation. Even under the conditions of lower SNR, small angular separation and limited snapshots, the proposed algorithm still possesses better performance than subspace methods and the canonical polyadic decomposition (CPD) method.

  4. An Intelligent Pattern Recognition System Based on Neural Network and Wavelet Decomposition for Interpretation of Heart Sounds

    DTIC Science & Technology

    2001-10-25

    wavelet decomposition of signals and classification using neural network. Inputs to the system are the heart sound signals acquired by a stethoscope in a...

  5. What determines the income gap between French male and female GPs - the role of medical practices

    PubMed Central

    2012-01-01

    Background In many OECD countries, the gender differences in physicians’ pay favour male doctors. Due to the feminisation of the doctor profession, it is essential to measure this income gap in the French context of Fee-for-service payment (FFS) and then to precisely identify its determinants. The objective of this study is to measure and analyse the 2008 income gap between male and female general practitioners (GPs). This paper focuses on the role of gender differentials in medical practices among GPs working in private practice in the southwest region of France. Methods Using data from 339 private-practice GPs, we measured an average gender income gap of approximately 26% in favour of men. Using the decomposition method, we examined the factors that could explain gender disparities in income. Results The analysis showed that 73% of the income gap can be explained by the average differences in doctors’ characteristics; for example, 61% of the gender income gap is explained by the gender differences in workload, i.e., number of consultations and visits, which is on average significantly lower for female GPs than for male GPs. Furthermore, the decomposition method allowed us to highlight the differences in the marginal returns of doctors’ characteristics and variables contributing to income, such as GP workload; we found that female GPs have a higher marginal return in terms of earnings when performing an additional medical service. Conclusions The findings of this study help to understand the determinants of the income gap between male and female GPs. Even though workload is clearly an essential determinant of income, FFS does not reduce the gender income gap, and there is an imperfect relationship between the provision of medical services and income. In the context of feminisation, it appears that female GPs receive a lower income but attain higher marginal returns when performing an additional consultation. PMID:22998173

  6. What determines the income gap between French male and female GPs - the role of medical practices.

    PubMed

    Dumontet, Magali; Le Vaillant, Marc; Franc, Carine

    2012-09-21

    In many OECD countries, the gender differences in physicians' pay favour male doctors. Due to the feminisation of the doctor profession, it is essential to measure this income gap in the French context of Fee-for-service payment (FFS) and then to precisely identify its determinants. The objective of this study is to measure and analyse the 2008 income gap between male and female general practitioners (GPs). This paper focuses on the role of gender differentials in medical practices among GPs working in private practice in the southwest region of France. Using data from 339 private-practice GPs, we measured an average gender income gap of approximately 26% in favour of men. Using the decomposition method, we examined the factors that could explain gender disparities in income. The analysis showed that 73% of the income gap can be explained by the average differences in doctors' characteristics; for example, 61% of the gender income gap is explained by the gender differences in workload, i.e., number of consultations and visits, which is on average significantly lower for female GPs than for male GPs. Furthermore, the decomposition method allowed us to highlight the differences in the marginal returns of doctors' characteristics and variables contributing to income, such as GP workload; we found that female GPs have a higher marginal return in terms of earnings when performing an additional medical service. The findings of this study help to understand the determinants of the income gap between male and female GPs. Even though workload is clearly an essential determinant of income, FFS does not reduce the gender income gap, and there is an imperfect relationship between the provision of medical services and income. In the context of feminisation, it appears that female GPs receive a lower income but attain higher marginal returns when performing an additional consultation.
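
    The abstract refers only to "the decomposition method"; a standard choice for this kind of explained/unexplained income-gap analysis is the Oaxaca-Blinder decomposition, sketched below with numpy on synthetic data (workload as the single characteristic). This is an illustrative assumption, not necessarily the authors' exact specification.

      import numpy as np

      def oaxaca_blinder(X_m, y_m, X_f, y_f):
          """Two-fold Oaxaca-Blinder decomposition of the mean outcome gap (male coefficients as reference)."""
          add1 = lambda X: np.column_stack([np.ones(len(X)), X])
          b_m, *_ = np.linalg.lstsq(add1(X_m), y_m, rcond=None)   # OLS for the male group
          b_f, *_ = np.linalg.lstsq(add1(X_f), y_f, rcond=None)   # OLS for the female group
          xbar_m, xbar_f = add1(X_m).mean(axis=0), add1(X_f).mean(axis=0)
          explained = (xbar_m - xbar_f) @ b_m          # gap due to different characteristics (e.g. workload)
          unexplained = xbar_f @ (b_m - b_f)           # gap due to different returns to characteristics
          return explained, unexplained

      rng = np.random.default_rng(6)
      X_m = rng.normal(5000, 800, (200, 1));  y_m = 20 + 0.013 * X_m[:, 0] + rng.normal(0, 5, 200)
      X_f = rng.normal(4200, 800, (140, 1));  y_f = 12 + 0.015 * X_f[:, 0] + rng.normal(0, 5, 140)
      explained, unexplained = oaxaca_blinder(X_m, y_m, X_f, y_f)
      print(explained, unexplained, y_m.mean() - y_f.mean())   # the two parts sum to the mean gap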

  7. Steganography based on pixel intensity value decomposition

    NASA Astrophysics Data System (ADS)

    Abdulla, Alan Anwar; Sellahewa, Harin; Jassim, Sabah A.

    2014-05-01

    This paper focuses on steganography based on pixel intensity value decomposition. A number of existing schemes such as binary, Fibonacci, Prime, Natural, Lucas, and Catalan-Fibonacci (CF) are evaluated in terms of payload capacity and stego quality. A new technique based on a specific representation is proposed to decompose pixel intensity values into 16 (virtual) bit-planes suitable for embedding purposes. The proposed decomposition has a desirable property whereby the sum of all bit-planes does not exceed the maximum pixel intensity value, i.e. 255. Experimental results demonstrate that the proposed technique offers an effective compromise between payload capacity and stego quality of existing embedding techniques based on pixel intensity value decomposition. Its capacity is equal to that of binary and Lucas, while it offers a higher capacity than Fibonacci, Prime, Natural, and CF when the secret bits are embedded in 1st Least Significant Bit (LSB). When the secret bits are embedded in higher bit-planes, i.e., 2nd LSB to 8th Most Significant Bit (MSB), the proposed scheme has more capacity than Natural numbers based embedding. However, from the 6th bit-plane onwards, the proposed scheme offers better stego quality. In general, the proposed decomposition scheme has less effect in terms of quality on pixel value when compared to most existing pixel intensity value decomposition techniques when embedding messages in higher bit-planes.
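
    Two of the intensity-value representations compared above, binary bit-planes and the Fibonacci (Zeckendorf) decomposition, can be sketched in a few lines of Python; the paper's proposed 16-plane scheme itself is not reproduced here.

      def binary_planes(value, n_planes=8):
          """Standard binary bit-plane decomposition of an 8-bit pixel value (LSB first)."""
          return [(value >> i) & 1 for i in range(n_planes)]

      def fibonacci_planes(value, fibs=(1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233)):
          """Zeckendorf (Fibonacci) decomposition: greedy, never uses two consecutive Fibonacci numbers."""
          planes, remaining = [0] * len(fibs), value
          for i in range(len(fibs) - 1, -1, -1):
              if fibs[i] <= remaining:
                  planes[i], remaining = 1, remaining - fibs[i]
          return planes

      pixel = 200
      print(binary_planes(pixel))        # [0, 0, 0, 1, 0, 0, 1, 1]  -> 8 + 64 + 128 = 200
      print(fibonacci_planes(pixel))     # 200 = 144 + 55 + 1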

  8. A comparison of reduced-order modelling techniques for application in hyperthermia control and estimation.

    PubMed

    Bailey, E A; Dutton, A W; Mattingly, M; Devasia, S; Roemer, R B

    1998-01-01

    Reduced-order modelling techniques can make important contributions in the control and state estimation of large systems. In hyperthermia, reduced-order modelling can provide a useful tool by which a large thermal model can be reduced to the most significant subset of its full-order modes, making real-time control and estimation possible. Two such reduction methods, one based on modal decomposition and the other on balanced realization, are compared in the context of simulated hyperthermia heat transfer problems. The results show that the modal decomposition reduction method has three significant advantages over that of balanced realization. First, modal decomposition reduced models result in less error, when compared to the full-order model, than balanced realization reduced models of similar order in problems with low or moderate advective heat transfer. Second, because the balanced realization based methods require a priori knowledge of the sensor and actuator placements, the reduced-order model is not robust to changes in sensor or actuator locations, a limitation not present in modal decomposition. Third, the modal decomposition transformation is less demanding computationally. On the other hand, in thermal problems dominated by advective heat transfer, numerical instabilities make modal decomposition based reduction problematic. Modal decomposition methods are therefore recommended for reduction of models in which advection is not dominant and research continues into methods to render balanced realization based reduction more suitable for real-time clinical hyperthermia control and estimation.

  9. A New Approach of evaluating the damage in simply-supported reinforced concrete beam by Local mean decomposition (LMD)

    NASA Astrophysics Data System (ADS)

    Zhang, Xuebing; Liu, Ning; Xi, Jiaxin; Zhang, Yunqi; Zhang, Wenchun; Yang, Peipei

    2017-08-01

    How to analyze non-stationary response signals and obtain vibration characteristics is extremely important in vibration-based structural diagnosis methods. In this work, we introduce a more reasonable time-frequency decomposition method termed local mean decomposition (LMD) in place of the widely-used empirical mode decomposition (EMD). By employing the LMD method, one can derive a group of component signals, each of which is more stationary, and then analyze the vibration state and assess the structural damage of a construction or building. We illustrate the effectiveness of LMD on a synthetic data set and on experimental data recorded on a simply-supported reinforced concrete beam. Then, based on the decomposition results, an elementary method of damage diagnosis is proposed.

  10. Water/cortical bone decomposition: A new approach in dual energy CT imaging for bone marrow oedema detection. A feasibility study.

    PubMed

    Biondi, M; Vanzi, E; De Otto, G; Banci Buonamici, F; Belmonte, G M; Mazzoni, L N; Guasti, A; Carbone, S F; Mazzei, M A; La Penna, A; Foderà, E; Guerreri, D; Maiolino, A; Volterrani, L

    2016-12-01

    Many studies have aimed at validating the application of Dual Energy Computed Tomography (DECT) in clinical practice where conventional CT is not exhaustive. An example is given by bone marrow oedema detection, in which DECT based on water/calcium (W/Ca) decomposition was applied. In this paper a new DECT approach, based on water/cortical bone (W/CB) decomposition, was investigated. Eight patients suffering from marrow oedema were scanned with MRI and DECT. Two-material density decomposition was performed in ROIs corresponding to normal bone marrow and oedema. These regions were drawn on DECT images using MRI information. Both W/Ca and W/CB were considered as material bases. Scatter plots of W/Ca and W/CB concentrations were made for each ROI in order to evaluate whether oedema could be distinguished from normal bone marrow. Thresholds were defined on the scatter plots in order to produce DECT images where oedema regions were highlighted through color maps. The agreement between these images and MR was scored by two expert radiologists. For all the patients, the best scores were obtained using W/CB density decomposition. In all cases, DECT color map images based on W/CB decomposition showed better agreement with MR in bone marrow oedema identification with respect to W/Ca decomposition. This result encourages further studies in order to evaluate whether DECT based on W/CB decomposition could be an alternative technique to MR, which would be important when short scanning duration is relevant, as in the case of aged or traumatic patients. Copyright © 2016 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
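
    A minimal numpy sketch of per-pixel two-material density decomposition, the basic operation behind both the W/Ca and W/CB bases discussed above; the attenuation coefficients and images are placeholders, not calibrated scanner values.

      import numpy as np

      # Placeholder mass-attenuation-style coefficients for the two basis materials at the
      # low and high tube energies; real values come from scanner calibration.
      M = np.array([[0.26, 0.55],    # low  energy: [water, cortical bone]
                    [0.20, 0.33]])   # high energy: [water, cortical bone]

      def two_material_decomposition(mu_low, mu_high):
          """Per-pixel densities (water, bone) from attenuation measured at two energies."""
          mu = np.stack([mu_low.ravel(), mu_high.ravel()])        # shape (2, n_pixels)
          dens = np.linalg.solve(M, mu)                           # solve M @ dens = mu for each pixel
          return dens[0].reshape(mu_low.shape), dens[1].reshape(mu_low.shape)

      rng = np.random.default_rng(7)
      water_true, bone_true = rng.random((64, 64)), 0.2 * rng.random((64, 64))
      mu_low  = M[0, 0] * water_true + M[0, 1] * bone_true
      mu_high = M[1, 0] * water_true + M[1, 1] * bone_true
      water_est, bone_est = two_material_decomposition(mu_low, mu_high)
      print(np.allclose(water_est, water_true), np.allclose(bone_est, bone_true))   # True True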

  11. Significant Performance Enhancement in Asymmetric Supercapacitors based on Metal Oxides, Carbon nanotubes and Neutral Aqueous Electrolyte.

    PubMed

    Singh, Arvinder; Chandra, Amreesh

    2015-10-23

    Amongst the materials being investigated for supercapacitor electrodes, carbon based materials are most investigated. However, pure carbon materials suffer from inherent physical processes which limit the maximum specific energy and power that can be achieved in an energy storage device. Therefore, use of carbon-based composites with suitable nano-materials is attaining prominence. The synergistic effect between the pseudocapacitive nanomaterials (high specific energy) and carbon (high specific power) is expected to deliver the desired improvements. We report the fabrication of high capacitance asymmetric supercapacitor based on electrodes of composites of SnO2 and V2O5 with multiwall carbon nanotubes and neutral 0.5 M Li2SO4 aqueous electrolyte. The advantages of the fabricated asymmetric supercapacitors are compared with the results published in the literature. The widened operating voltage window is due to the higher over-potential of electrolyte decomposition and a large difference in the work functions of the used metal oxides. The charge balanced device returns the specific capacitance of ~198 F g(-1) with corresponding specific energy of ~89 Wh kg(-1) at 1 A g(-1). The proposed composite systems have shown great potential in fabricating high performance supercapacitors.

  12. 3D tensor-based blind multispectral image decomposition for tumor demarcation

    NASA Astrophysics Data System (ADS)

    Kopriva, Ivica; Peršin, Antun

    2010-03-01

    Blind decomposition of multi-spectral fluorescent image for tumor demarcation is formulated exploiting tensorial structure of the image. First contribution of the paper is identification of the matrix of spectral responses and 3D tensor of spatial distributions of the materials present in the image from Tucker3 or PARAFAC models of 3D image tensor. Second contribution of the paper is clustering based estimation of the number of the materials present in the image as well as matrix of their spectral profiles. 3D tensor of the spatial distributions of the materials is recovered through 3-mode multiplication of the multi-spectral image tensor and inverse of the matrix of spectral profiles. Tensor representation of the multi-spectral image preserves its local spatial structure that is lost, due to vectorization process, when matrix factorization-based decomposition methods (such as non-negative matrix factorization and independent component analysis) are used. Superior performance of the tensor-based image decomposition over matrix factorization-based decompositions is demonstrated on experimental red-green-blue (RGB) image with known ground truth as well as on RGB fluorescent images of the skin tumor (basal cell carcinoma).

  13. Decomposition of metabolic network into functional modules based on the global connectivity structure of reaction graph.

    PubMed

    Ma, Hong-Wu; Zhao, Xue-Ming; Yuan, Ying-Jin; Zeng, An-Ping

    2004-08-12

    Metabolic networks are organized in a modular, hierarchical manner. Methods for a rational decomposition of the metabolic network into relatively independent functional subsets are essential to better understand the modularity and organization principle of a large-scale, genome-wide network. Network decomposition is also necessary for functional analysis of metabolism by pathway analysis methods that are often hampered by the problem of combinatorial explosion due to the complexity of metabolic networks. Decomposition methods proposed in the literature are mainly based on the connection degree of metabolites. To obtain a more reasonable decomposition, the global connectivity structure of metabolic networks should be taken into account. In this work, we use a reaction graph representation of a metabolic network for the identification of its global connectivity structure and for decomposition. A bow-tie connectivity structure similar to that previously discovered for the metabolite graph is found also to exist in the reaction graph. Based on this bow-tie structure, a new decomposition method is proposed, which uses a distance definition derived from the path length between two reactions. A hierarchical classification tree is first constructed from the distance matrix among the reactions in the giant strong component of the bow-tie structure. These reactions are then grouped into different subsets based on the hierarchical tree. Reactions in the IN and OUT subsets of the bow-tie structure are subsequently placed in the corresponding subsets according to a 'majority rule'. Compared with the decomposition methods proposed in the literature, ours is based on combined properties of the global network structure and local reaction connectivity rather than, primarily, on the connection degree of metabolites. The method is applied to decompose the metabolic network of Escherichia coli. Eleven subsets are obtained. More detailed investigations of the subsets show that reactions in the same subset are really functionally related. The rational decomposition of metabolic networks, and subsequent studies of the subsets, makes it easier to understand the inherent organization and functionality of metabolic networks at the modular level. http://genome.gbf.de/bioinformatics/
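
    The tree-building step described above can be sketched with SciPy's hierarchical clustering: build a linkage tree from a reaction-reaction distance matrix and cut it into subsets. The toy random distance matrix stands in for the path-length distances of the giant strong component, and the number of subsets is arbitrary here.

      import numpy as np
      from scipy.cluster.hierarchy import linkage, fcluster
      from scipy.spatial.distance import squareform

      rng = np.random.default_rng(8)
      # Toy symmetric distance matrix standing in for shortest path lengths between reactions.
      n = 20
      D = rng.integers(1, 8, size=(n, n)).astype(float)
      D = np.triu(D, 1)
      D = D + D.T                                          # symmetric, zero diagonal

      Z = linkage(squareform(D), method="average")         # hierarchical classification tree
      subsets = fcluster(Z, t=4, criterion="maxclust")     # cut the tree into 4 subsets (modules)
      print(subsets)                                       # module label for each reaction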

  14. An NN-Based SRD Decomposition Algorithm and Its Application in Nonlinear Compensation

    PubMed Central

    Yan, Honghang; Deng, Fang; Sun, Jian; Chen, Jie

    2014-01-01

    In this study, a neural network-based square root of descending (SRD) order decomposition algorithm for compensating for nonlinear data generated by sensors is presented. The study aims at exploring the optimized decomposition of data 1.00,0.00,0.00 and minimizing the computational complexity and memory space of the training process. A linear decomposition algorithm, which automatically finds the optimal decomposition into N subparts and reduces the training time to 1/N and the memory cost to 1/N, has been implemented on nonlinear data obtained from an encoder. Particular focus is given to the theoretical approach of estimating the numbers of hidden nodes and the precision of varying the decomposition method. Numerical experiments are designed to evaluate the effect of this algorithm. Moreover, a designed device for angular sensor calibration is presented. We conduct an experiment that samples the data of an encoder and compensates for the nonlinearity of the encoder to test this novel algorithm. PMID:25232912

  15. Acid and alkali effects on the decomposition of HMX molecule: a computational study.

    PubMed

    Zhang, Chaoyang; Li, Yuzhen; Xiong, Ying; Wang, Xiaolin; Zhou, Mingfei

    2011-11-03

    The stored and wasted explosives are usually in an acid or alkali environment, leading to the importance of exploring the acid and alkali effects on the decomposition mechanism of explosives. The acid and alkali effects on the decomposition of HMX molecule in gaseous state and in aqueous solution at 298 K are studied using quantum chemistry and molecular force field calculations. The results show that both H(+) and OH(-) make the decomposition in gaseous state energetically favorable. However, the effect of H(+) is much different from that of OH(-) in aqueous solution: OH(-) can accelerate the decomposition but H(+) cannot. The difference is mainly caused by the large aqueous solvation energy difference between H(+) and OH(-). The results confirm that the dissociation of HMX is energetically favored only in the base solutions, in good agreement with previous HMX base hydrolysis experimental observations. The different acid and alkali effects on the HMX decomposition are dominated by the large aqueous solvation energy difference between H(+) and OH(-).

  16. A knowledge-based tool for multilevel decomposition of a complex design problem

    NASA Technical Reports Server (NTRS)

    Rogers, James L.

    1989-01-01

    Although much work has been done in applying artificial intelligence (AI) tools and techniques to problems in different engineering disciplines, only recently has the application of these tools begun to spread to the decomposition of complex design problems. A new tool based on AI techniques has been developed to implement a decomposition scheme suitable for multilevel optimization and display of data in an N x N matrix format.

  17. Role of Reactive Mn Complexes in a Litter Decomposition Model System

    NASA Astrophysics Data System (ADS)

    Nico, P. S.; Keiluweit, M.; Bougoure, J.; Kleber, M.; Summering, J. A.; Maynard, J. J.; Johnson, M.; Pett-Ridge, J.

    2012-12-01

    The search for controls on litter decomposition rates and pathways has yet to return definitive characteristics that are both statistically robust and can be understood as part of a mechanistic or numerical model. Herein we focus on Mn, an element present in all litter that is likely an active chemical agent of decomposition. Berg and co-workers (2010) found a strong correlation between Mn concentration in litter and the magnitude of litter degradation in boreal forests, suggesting that litter decomposition proceeds more efficiently in the presence of Mn. Although there is much circumstantial evidence for the potential role of Mn in lignin decomposition, few reports exist on mechanistic details of this process. For the current work, we are guided by the hypothesis that the dependence of decomposition on Mn is due to Mn (III)-oxalate complexes act as a 'pretreatment' for structurally intact ligno-carbohydrate complexes (LCC) in fresh plant cell walls (e.g. in litter, root and wood). Manganese (III)-ligand complexes such as Mn (III)-oxalate are known to be potent oxidizers of many different organic and inorganic compounds. In the litter system, the unique property of these complexes may be that they are much smaller than exo-enzymes and therefore more easily able to penetrate LCC complexes in plant cell walls. By acting as 'diffusible oxidizers' and reacting with the organic matrix of the cell wall, these compounds can increase the porosity of fresh litter thereby facilitating access of more specific lignin- and cellulose decomposing enzymes. This possibility was investigated by reacting cell walls of single Zinnia elegans tracheary elements with Mn (III)-oxalate complexes in a continuous flow reactor. The uniformity of these individual plant cells allowed us to examine Mn (III)-induced changes in cell wall chemistry and ultrastructure on the micro-scale using fluorescence and electron microscopy as well as IR and X-ray spectromicroscopy. This presentation will discuss the chemical changes induced by reaction of Mn (III)-complexes with the Zinnia cells, the impact of such reactions on cell integrity, and potential implications for soil C cycling.

  18. Study on the decomposition of trace benzene over V2O5-WO3 ...

    EPA Pesticide Factsheets

    Commercial and laboratory-prepared V2O5–WO3/TiO2-based catalysts with different compositions were tested for catalytic decomposition of chlorobenzene (ClBz) in simulated flue gas. Resonance enhanced multiphoton ionization-time of flight mass spectrometry (REMPI-TOFMS) was employed to measure real-time, trace concentrations of ClBz contained in the flue gas before and after the catalyst. The effects of various parameters, including vanadium content of the catalyst, the catalyst support, as well as the reaction temperature on decomposition of ClBz were investigated. The results showed that the ClBz decomposition efficiency was significantly enhanced when nano-TiO2 instead of conventional TiO2 was used as the catalyst support. No promotion effects were found in the ClBz decomposition process when the catalysts were wet-impregnated with CuO and CeO2. Tests with different concentrations (1,000, 500, and 100 ppb) of ClBz showed that ClBz-decomposition efficiency decreased with increasing concentration, unless active sites were plentiful. A comparison between ClBz and benzene decomposition on the V2O5–WO3/TiO2-based catalyst and the relative kinetics analysis showed that two different active sites were likely involved in the decomposition mechanism and the V=O and V-O-Ti groups may only work for the degradation of the phenyl group and the benzene ring rather than the C-Cl bond. V2O5-WO3/TiO2 based catalysts, that have been used for destruction of a wide variet

  19. Modeling the Response of Soil Organic Matter Decomposition to Warming: Effects of Dynamical Enzyme Productivity and Nuanced Representation of Respiration.

    NASA Astrophysics Data System (ADS)

    Sihi, D.; Gerber, S.; Inglett, K. S.; Inglett, P.

    2014-12-01

    Recent developments in modeling soil organic carbon (SOC) decomposition include the explicit incorporation of enzyme and microbial dynamics. A characteristic of these models is a feedback between substrate and consumers which is absent in traditional first-order decay models. Second, microbial decomposition models incorporate carbon use efficiency (CUE) as a function of temperature, which has proved to be critical to the prediction of SOC under warming. Our main goal is to explore microbial decomposition models with respect to the responses of microbes to enzyme activity, the costs of enzyme production, and the incorporation of growth vs. maintenance respiration. In order to simplify the modeling setup we assumed quick adjustment of enzyme activity and depolymerized carbon to microbial and SOC pools. Enzyme activity plays an important role in decomposition if its production is scaled to microbial biomass. In fact, if microbes are allowed to optimize enzyme productivity, the microbial enzyme model becomes unstable. Thus, if the assumption on enzyme productivity is relaxed, other limiting factors must come into play. To stabilize the model, we account for two feedbacks: the cost of enzyme production and the diminishing return of depolymerization with increasing enzyme concentration and activity. These feedback mechanisms caused the model to behave in a similar way to traditional, first-order decay models. Most importantly, we found that, under warming, the changes in SOC were more severe if enzyme synthesis is costly. In turn, carbon use efficiency (CUE) and its dynamical response to temperature are mainly determined by 1) the rate of turnover of microbes, 2) the partitioning of dead microbial matter into different quality pools, and 3) whether growth, maintenance respiration and microbial death rate have distinct responses to changes in temperature. Abbreviations: p: decay of enzyme, g: coefficient for growth respiration, : fraction of material from microbial turnover that enters the DOC pool, loss of C scaled to microbial mass, half saturation constant.

  20. The Decomposition of Carbonates and Organics on Mars

    NASA Technical Reports Server (NTRS)

    Quinn, Richard C.; Zent, Aaron; McKay, Chris; DeVincenzi, Donald L. (Technical Monitor)

    2000-01-01

    The return and analysis of pristine material that is relict of a putative period of chemical evolution is a fundamental goal of the exobiological exploration of Mars. In order to accomplish this objective, it is desirable to find oxidant-free regions where pristine material can be accessed at the shallowest possible depth (ideally directly from the surface). The objective of our ongoing research is to understand the spatial and temporal distribution of oxidants in the martian regolith and the redox chemistry of the soil; in effect, to understand the chemical mechanisms and kinetics relating to the in-situ destruction of organics and the formation of the reactive species responsible for the Viking biology results. In this work, we report on experimental studies of oxidizing processes that may contribute to carbonate and organic degradation on Mars. Organic molecules directly exposed to solar UV may decompose either directly into CO2 or into more volatile organic fragments. Organic macromolecules not directly exposed to high UV flux are most likely to be affected by atmospheric oxidants which can diffuse to their surfaces. The oxidizing processes examined include: gas-phase oxidants, UV photolysis, and UV-assisted heterogeneous catalysis. For example, assuming a meteoritic infall rate of 4 x 10(exp -4) g/m^2yr (Flynn and McKay 1990) and a flux of organic carbon of 2 x 10(exp -5) g/m^2yr, laboratory measurements of the UV-assisted decomposition of benzenehexacarboxylic acid (mellitic acid, a likely intermediate of kerogen oxidation) indicate that its decomposition rate on Mars would exceed the total flux of organic carbon to the planet by over four orders of magnitude. Our measurements indicate that although the decomposition temperature of kerogens in some cases exceeds the temperature limit of the Viking GCMS, it is unlikely that kerogens or their decomposition intermediates were present at the Viking landing sites at levels above the GCMS detection limits.

  1. Deep Coastal Marine Taphonomy: Investigation into Carcass Decomposition in the Saanich Inlet, British Columbia Using a Baited Camera

    PubMed Central

    Anderson, Gail S.; Bell, Lynne S.

    2014-01-01

    Decomposition and faunal colonization of a carcass in the terrestrial environment has been well studied, but knowledge of decomposition in the marine environment is based almost entirely on anecdotal reports. Three pig carcasses were deployed in Saanich Inlet, BC, over 3 years utilizing Ocean Networks Canada’s VENUS observatory. Each carcass was deployed in late summer/early fall at 99 m under a remotely controlled camera and observed several times a day. Dissolved oxygen, temperature, salinity, density and pressure were continuously measured. Carcass 1 was immediately colonized by Munida quadrispina, Pandalus platyceros and Metacarcinus magister, rapidly scavenged then dragged from view by Day 22. Artifacts specific to each of the crustaceans’ feeding patterns were observed. Carcass 2 was scavenged in a similar fashion. Exposed tissue became covered by Orchomenella obtusa (Family Lysianassidae) which removed all the internal tissues rapidly. Carcass 3 attracted only a few M. quadrispina, remaining intact, developing a thick filamentous sulphur bacterial mat, until Day 92, when it was skeletonized by crustacea. The major difference between the deployments was dissolved oxygen levels. The first two carcasses were placed when oxygen levels were tolerable, becoming more anoxic. This allowed larger crustacea to feed. However, Carcass 3 was deployed when the water was already extremely anoxic, which prevented larger crustacea from accessing the carcass. The smaller M. quadrispina were unable to break the skin alone. The larger crustacea returned when the Inlet was re-oxygenated in spring. Oxygen levels, therefore, drive the biota in this area, although most crustacea endured stressful levels of oxygen to access the carcasses for much of the time. These data will be valuable in forensic investigations involving submerged bodies, indicating types of water conditions to which the body has been exposed, identifying post-mortem artifacts and providing realistic expectations for recovery divers and families of the deceased. PMID:25329759

  2. A novel hybrid decomposition-and-ensemble model based on CEEMD and GWO for short-term PM2.5 concentration forecasting

    NASA Astrophysics Data System (ADS)

    Niu, Mingfei; Wang, Yufang; Sun, Shaolong; Li, Yongwu

    2016-06-01

    To enhance prediction reliability and accuracy, a hybrid model based on the promising principle of "decomposition and ensemble" and a recently proposed meta-heuristic called grey wolf optimizer (GWO) is introduced for daily PM2.5 concentration forecasting. Compared with existing PM2.5 forecasting methods, this proposed model has improved the prediction accuracy and hit rates of directional prediction. The proposed model involves three main steps, i.e., decomposing the original PM2.5 series into several intrinsic mode functions (IMFs) via complementary ensemble empirical mode decomposition (CEEMD) for simplifying the complex data; individually predicting each IMF with support vector regression (SVR) optimized by GWO; integrating all predicted IMFs for the ensemble result as the final prediction by another SVR optimized by GWO. Seven benchmark models, including single artificial intelligence (AI) models, other decomposition-ensemble models with different decomposition methods and models with the same decomposition-ensemble method but optimized by different algorithms, are considered to verify the superiority of the proposed hybrid model. The empirical study indicates that the proposed hybrid decomposition-ensemble model is remarkably superior to all considered benchmark models for its higher prediction accuracy and hit rates of directional prediction.
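
    As a rough illustration of the decomposition-and-ensemble workflow, the sketch below decomposes a synthetic series, fits one support vector regressor per mode, and recombines the component forecasts. PyEMD's CEEMDAN is used as a stand-in for CEEMD, a plain grid search replaces the grey wolf optimizer, and the final ensemble step is a simple sum rather than a second SVR; all of these are simplifying assumptions, not the model proposed in this record.

```python
# Sketch of "decomposition and ensemble": decompose a series into IMFs, fit
# one SVR per IMF, then combine the per-component one-step-ahead forecasts.
# CEEMDAN stands in for CEEMD and a grid search replaces GWO (assumptions).
import numpy as np
from PyEMD import CEEMDAN
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV

def lag_matrix(x, n_lags=5):
    """Build a supervised (lags -> next value) dataset from a 1-D series."""
    X = np.column_stack([x[i:len(x) - n_lags + i] for i in range(n_lags)])
    y = x[n_lags:]
    return X, y

rng = np.random.default_rng(0)
series = np.cumsum(rng.normal(size=400)) + 50.0   # synthetic stand-in for PM2.5

imfs = CEEMDAN()(series)                          # rows: IMFs plus residue

component_preds = []
for imf in imfs:
    X, y = lag_matrix(imf)
    search = GridSearchCV(SVR(), {"C": [1, 10, 100], "gamma": ["scale", 0.1]}, cv=3)
    search.fit(X[:-50], y[:-50])                  # train on all but the last 50 points
    component_preds.append(search.predict(X[-50:]))  # one-step-ahead, observed lags

# Simple ensemble: sum the component forecasts (the paper uses a second SVR here).
forecast = np.sum(component_preds, axis=0)
print("MAE:", np.mean(np.abs(forecast - series[-50:])))
```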

  3. Early stage litter decomposition across biomes

    Treesearch

    Ika Djukic; Sebastian Kepfer-Rojas; Inger Kappel Schmidt; Klaus Steenberg Larsen; Claus Beier; Björn Berg; Kris Verheyen; Adriano Caliman; Alain Paquette; Alba Gutiérrez-Girón; Alberto Humber; Alejandro Valdecantos; Alessandro Petraglia; Heather Alexander; Algirdas Augustaitis; Amélie Saillard; Ana Carolina Ruiz Fernández; Ana I. Sousa; Ana I. Lillebø; Anderson da Rocha Gripp; André-Jean Francez; Andrea Fischer; Andreas Bohner; Andrey Malyshev; Andrijana Andrić; Andy Smith; Angela Stanisci; Anikó Seres; Anja Schmidt; Anna Avila; Anne Probst; Annie Ouin; Anzar A. Khuroo; Arne Verstraeten; Arely N. Palabral-Aguilera; Artur Stefanski; Aurora Gaxiola; Bart Muys; Bernard Bosman; Bernd Ahrends; Bill Parker; Birgit Sattler; Bo Yang; Bohdan Juráni; Brigitta Erschbamer; Carmen Eugenia Rodriguez Ortiz; Casper T. Christiansen; E. Carol Adair; Céline Meredieu; Cendrine Mony; Charles A. Nock; Chi-Ling Chen; Chiao-Ping Wang; Christel Baum; Christian Rixen; Christine Delire; Christophe Piscart; Christopher Andrews; Corinna Rebmann; Cristina Branquinho; Dana Polyanskaya; David Fuentes Delgado; Dirk Wundram; Diyaa Radeideh; Eduardo Ordóñez-Regil; Edward Crawford; Elena Preda; Elena Tropina; Elli Groner; Eric Lucot; Erzsébet Hornung; Esperança Gacia; Esther Lévesque; Evanilde Benedito; Evgeny A. Davydov; Evy Ampoorter; Fabio Padilha Bolzan; Felipe Varela; Ferdinand Kristöfel; Fernando T. Maestre; Florence Maunoury-Danger; Florian Hofhansl; Florian Kitz; Flurin Sutter; Francisco Cuesta; Francisco de Almeida Lobo; Franco Leandro de Souza; Frank Berninger; Franz Zehetner; Georg Wohlfahrt; George Vourlitis; Geovana Carreño-Rocabado; Gina Arena; Gisele Daiane Pinha; Grizelle González; Guylaine Canut; Hanna Lee; Hans Verbeeck; Harald Auge; Harald Pauli; Hassan Bismarck Nacro; Héctor A. Bahamonde; Heike Feldhaar; Heinke Jäger; Helena C. Serrano; Hélène Verheyden; Helge Bruelheide; Henning Meesenburg; Hermann Jungkunst; Hervé Jactel; Hideaki Shibata; Hiroko Kurokawa; Hugo López Rosas; Hugo L. Rojas Villalobos; Ian Yesilonis; Inara Melece; Inge Van Halder; Inmaculada García Quirós; Isaac Makelele; Issaka Senou; István Fekete; Ivan Mihal; Ivika Ostonen; Jana Borovská; Javier Roales; Jawad Shoqeir; Jean-Christophe Lata; Jean-Paul Theurillat; Jean-Luc Probst; Jess Zimmerman; Jeyanny Vijayanathan; Jianwu Tang; Jill Thompson; Jiří Doležal; Joan-Albert Sanchez-Cabeza; Joël Merlet; Joh Henschel; Johan Neirynck; Johannes Knops; John Loehr; Jonathan von Oppen; Jónína Sigríður Þorláksdóttir; Jörg Löffler; José-Gilberto Cardoso-Mohedano; José-Luis Benito-Alonso; Jose Marcelo Torezan; Joseph C. Morina; Juan J. Jiménez; Juan Dario Quinde; Juha Alatalo; Julia Seeber; Jutta Stadler; Kaie Kriiska; Kalifa Coulibaly; Karibu Fukuzawa; Katalin Szlavecz; Katarína Gerhátová; Kate Lajtha; Kathrin Käppeler; Katie A. Jennings; Katja Tielbörger; Kazuhiko Hoshizaki; Ken Green; Lambiénou Yé; Laryssa Helena Ribeiro Pazianoto; Laura Dienstbach; Laura Williams; Laura Yahdjian; Laurel M. Brigham; Liesbeth van den Brink; Lindsey Rustad; al. et

    2018-01-01

    Through litter decomposition, enormous amounts of carbon are emitted to the atmosphere. Numerous large-scale decomposition experiments have been conducted focusing on this fundamental soil process in order to understand the controls on the terrestrial carbon transfer to the atmosphere. However, previous studies were mostly based on site-specific litter and methodologies...

  4. Thermal decomposition of high-nitrogen energetic compounds: TAGzT and GUzT

    NASA Astrophysics Data System (ADS)

    Hayden, Heather F.

    The U.S. Navy is exploring high-nitrogen compounds as burning-rate additives to meet the growing demands of future high-performance gun systems. Two high-nitrogen compounds investigated as potential burning-rate additives are bis(triaminoguanidinium) 5,5-azobitetrazolate (TAGzT) and bis(guanidinium) 5,5'-azobitetrazolate (GUzT). Small-scale tests showed that formulations containing TAGzT exhibit significant increases in the burning rates of RDX-based gun propellants. However, when GUzT, a similarly structured molecule was incorporated into the formulation, there was essentially no effect on the burning rate of the propellant. Through the use of simultaneous thermogravimetric modulated beam mass spectrometry (STMBMS) and Fourier-Transform ion cyclotron resonance (FTICR) mass spectrometry methods, an investigation of the underlying chemical and physical processes that control the thermal decomposition behavior of TAGzT and GUzT alone and in the presence of RDX, was conducted. The objective was to determine why GUzT is not as good a burning-rate enhancer in RDX-based gun propellants as compared to TAGzT. The results show that TAGzT is an effective burning-rate modifier in the presence of RDX because the decomposition of TAGzT alters the initial stages of the decomposition of RDX. Hydrazine, formed in the decomposition of TAGzT, reacts faster with RDX than RDX can decompose itself. The reactions occur at temperatures below the melting point of RDX and thus the TAGzT decomposition products react with RDX in the gas phase. Although there is no hydrazine formed in the decomposition of GUzT, amines formed in the decomposition of GUzT react with aldehydes, formed in the decomposition of RDX, resulting in an increased reaction rate of RDX in the presence of GUzT. However, GUzT is not an effective burning-rate modifier because its decomposition does not alter the initial gas-phase decomposition of RDX. The decomposition of GUzT occurs at temperatures above the melting point of RDX. Therefore, the decomposition of GUzT affects reactions that are dominant in the liquid phase of RDX. Although GUzT is not an effective burning-rate modifier, features of its decomposition where the reaction between amines formed in the decomposition of GUzT react with the aldehydes, formed in the decomposition of RDX, may have implications from an insensitive-munitions perspective.

  5. Potential gains from hospital mergers in Denmark.

    PubMed

    Kristensen, Troels; Bogetoft, Peter; Pedersen, Kjeld Moeller

    2010-12-01

    The Danish hospital sector faces a major rebuilding program to centralize activity in fewer and larger hospitals. We aim to conduct an efficiency analysis of hospitals and to estimate the potential cost savings from the planned hospital mergers. We use Data Envelopment Analysis (DEA) to estimate a cost frontier. Based on this analysis, we calculate an efficiency score for each hospital and estimate the potential gains from the proposed mergers by comparing individual efficiencies with the efficiency of the combined hospitals. Furthermore, we apply a decomposition algorithm to split merger gains into technical efficiency, size (scale) and harmony (mix) gains. The motivation for this decomposition is that some of the apparent merger gains may actually be available with less than a full-scale merger, e.g., by sharing best practices and reallocating certain resources and tasks. Our results suggest that many hospitals are technically inefficient, and the expected "best practice" hospitals are quite efficient. Also, some mergers do not seem to lower costs. This finding indicates that some merged hospitals become too large and therefore experience diseconomies of scale. Other mergers lead to considerable cost reductions; we find potential gains resulting from learning better practices and the exploitation of economies of scope. To ensure robustness, we conduct a sensitivity analysis using two alternative returns-to-scale assumptions and two alternative estimation approaches. We consistently find potential gains from improving the technical efficiency and the exploitation of economies of scope from mergers.
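
    For readers unfamiliar with DEA, the sketch below computes a basic input-oriented, constant-returns-to-scale efficiency score by linear programming. The hospital inputs and outputs are invented numbers, and the paper's decomposition of merger gains into technical, size, and harmony components is not reproduced here.

```python
# Minimal input-oriented, constant-returns-to-scale DEA efficiency score,
# solved as a linear program with scipy. Hospital data are made up; the
# decomposition of merger gains is not reproduced in this sketch.
import numpy as np
from scipy.optimize import linprog

def dea_efficiency(inputs, outputs, unit):
    """Efficiency of `unit` relative to all units (CCR envelopment form)."""
    n_units, n_in = inputs.shape
    n_out = outputs.shape[1]
    # Decision variables: [theta, lambda_1, ..., lambda_n]; minimize theta.
    c = np.zeros(n_units + 1)
    c[0] = 1.0
    A_ub, b_ub = [], []
    for i in range(n_in):                        # sum_j lam_j * x_ij <= theta * x_i,unit
        A_ub.append(np.concatenate(([-inputs[unit, i]], inputs[:, i])))
        b_ub.append(0.0)
    for r in range(n_out):                       # sum_j lam_j * y_rj >= y_r,unit
        A_ub.append(np.concatenate(([0.0], -outputs[:, r])))
        b_ub.append(-outputs[unit, r])
    bounds = [(None, None)] + [(0, None)] * n_units
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=bounds)
    return res.x[0]

# Columns: inputs = (beds, staff), outputs = (discharges, outpatient visits).
X = np.array([[200, 900], [350, 1500], [500, 2600], [280, 1100]], dtype=float)
Y = np.array([[12000, 40000], [20000, 52000], [26000, 90000], [18000, 47000]], dtype=float)
for h in range(len(X)):
    print(f"hospital {h}: efficiency = {dea_efficiency(X, Y, h):.3f}")
```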

  6. Near-lossless multichannel EEG compression based on matrix and tensor decompositions.

    PubMed

    Dauwels, Justin; Srinivasan, K; Reddy, M Ramasubba; Cichocki, Andrzej

    2013-05-01

    A novel near-lossless compression algorithm for multichannel electroencephalogram (MC-EEG) is proposed based on matrix/tensor decomposition models. MC-EEG is represented in suitable multiway (multidimensional) forms to efficiently exploit temporal and spatial correlations simultaneously. Several matrix/tensor decomposition models are analyzed in view of efficient decorrelation of the multiway forms of MC-EEG. A compression algorithm is built based on the principle of “lossy plus residual coding,” consisting of a matrix/tensor decomposition-based coder in the lossy layer followed by arithmetic coding in the residual layer. This approach guarantees a specifiable maximum absolute error between original and reconstructed signals. The compression algorithm is applied to three different scalp EEG datasets and an intracranial EEG dataset, each with different sampling rate and resolution. The proposed algorithm achieves attractive compression ratios compared to compressing individual channels separately. For similar compression ratios, the proposed algorithm achieves nearly fivefold lower average error compared to a similar wavelet-based volumetric MC-EEG compression algorithm.
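
    The "lossy plus residual" principle can be sketched with a truncated SVD as the lossy layer and a uniform quantizer on the residual to bound the maximum absolute reconstruction error. The synthetic data, rank, and error bound below are assumptions, and the arithmetic coding stage of the residual layer is omitted.

```python
# Sketch of near-lossless "lossy plus residual" coding for a channels-by-samples
# matrix: truncated SVD as the lossy layer, uniformly quantized residual to
# bound the maximum absolute error. Entropy coding of the residual is omitted.
import numpy as np

def near_lossless_svd(X, rank=4, max_abs_err=0.5):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    lossy = U[:, :rank] * s[:rank] @ Vt[:rank]          # low-rank approximation
    residual = X - lossy
    step = 2.0 * max_abs_err                            # uniform quantizer step
    q = np.round(residual / step).astype(np.int32)      # integers to be entropy coded
    reconstruction = lossy + q * step
    return reconstruction, q

rng = np.random.default_rng(1)
eeg = np.cumsum(rng.normal(size=(16, 2048)), axis=1)    # synthetic 16-channel record
rec, q = near_lossless_svd(eeg, rank=4, max_abs_err=0.5)
print("max abs error:", np.max(np.abs(rec - eeg)))      # guaranteed <= 0.5
```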

  7. Technology test results from an intelligent, free-flying robot for crew and equipment retrieval in space

    NASA Technical Reports Server (NTRS)

    Erickson, J.; Goode, R.; Grimm, K.; Hess, C.; Norsworthy, R.; Anderson, G.; Merkel, L.; Phinney, D.

    1992-01-01

    The ground-based demonstrations of Extra Vehicular Activity (EVA) Retriever, a voice-supervised, intelligent, free-flying robot, are designed to evaluate the capability to retrieve objects (astronauts, equipment, and tools) which have accidentally separated from the Space Station. The EVA Retriever software is required to autonomously plan and execute a target rendezvous, grapple, and return to base while avoiding stationary and moving obstacles with subsequent object handover. The software architecture incorporates a hierarchical decomposition of the control system that is horizontally partitioned into five major functional subsystems: sensing, perception, world model, reasoning, and acting. The design provides for supervised autonomy as the primary mode of operation. It is intended to be an evolutionary system improving in capability over time and as it earns crew trust through reliable and safe operation. This paper gives an overview of the hardware, a focus on software, and a summary of results achieved recently from both computer simulations and air bearing floor demonstrations. Limitations of the technology used are evaluated. Plans for the next phase, during which moving targets and obstacles drive real-time behavior requirements, are discussed.

  8. Technology test results from an intelligent, free-flying robot for crew and equipment retrieval in space

    NASA Astrophysics Data System (ADS)

    Erickson, Jon D.; Goode, R.; Grimm, K. A.; Hess, Clifford W.; Norsworthy, Robert S.; Anderson, Greg D.; Merkel, L.; Phinney, Dale E.

    1992-03-01

    The ground-based demonstrations of Extra Vehicular Activity (EVA) Retriever, a voice- supervised, intelligent, free-flying robot, are designed to evaluate the capability to retrieve objects (astronauts, equipment, and tools) which have accidentally separated from the space station. The EVA Retriever software is required to autonomously plan and execute a target rendezvous, grapple, and return to base while avoiding stationary and moving obstacles with subsequent object handover. The software architecture incorporates a hierarchical decomposition of the control system that is horizontally partitioned into five major functional subsystems: sensing, perception, world model, reasoning, and acting. The design provides for supervised autonomy as the primary mode of operation. It is intended to be an evolutionary system improving in capability over time and as it earns crew trust through reliable and safe operation. This paper gives an overview of the hardware, a focus on software, and a summary of results achieved recently from both computer simulations and air bearing floor demonstrations. Limitations of the technology used are evaluated. Plans for the next phase, during which moving targets and obstacles drive realtime behavior requirements, are discussed.

  9. Multi-Centrality Graph Spectral Decompositions and Their Application to Cyber Intrusion Detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Pin-Yu; Choudhury, Sutanay; Hero, Alfred

    Many modern datasets can be represented as graphs and hence spectral decompositions such as graph principal component analysis (PCA) can be useful. Distinct from previous graph decomposition approaches based on subspace projection of a single topological feature, e.g., the centered graph adjacency matrix (graph Laplacian), we propose spectral decomposition approaches to graph PCA and graph dictionary learning that integrate multiple features, including graph walk statistics, centrality measures and graph distances to reference nodes. In this paper we propose a new PCA method for single graph analysis, called multi-centrality graph PCA (MC-GPCA), and a new dictionary learning method for ensembles of graphs, called multi-centrality graph dictionary learning (MC-GDL), both based on spectral decomposition of multi-centrality matrices. As an application to cyber intrusion detection, MC-GPCA can be an effective indicator of anomalous connectivity pattern and MC-GDL can provide discriminative basis for attack classification.
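
    A minimal illustration of the multi-centrality ingredient, stacking several per-node centrality measures into a feature matrix and applying PCA, is sketched below. It uses a standard networkx example graph and ordinary PCA; it is not the MC-GPCA or MC-GDL algorithm of this record.

```python
# Stack several per-node centrality measures into a feature matrix and run PCA;
# nodes far from the bulk in the leading components are candidate anomalies.
# This illustrates the "multiple topological features" idea only.
import numpy as np
import networkx as nx
from sklearn.decomposition import PCA

G = nx.karate_club_graph()                      # stand-in for a connectivity graph
features = np.column_stack([
    list(nx.degree_centrality(G).values()),
    list(nx.betweenness_centrality(G).values()),
    list(nx.closeness_centrality(G).values()),
    list(nx.eigenvector_centrality_numpy(G).values()),
])

scores = PCA(n_components=2).fit_transform(features)
dist = np.linalg.norm(scores - scores.mean(axis=0), axis=1)
print("most anomalous nodes:", np.argsort(dist)[-3:][::-1])
```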

  10. Domain Decomposition By the Advancing-Partition Method for Parallel Unstructured Grid Generation

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar Z.; Zagaris, George

    2009-01-01

    A new method of domain decomposition has been developed for generating unstructured grids in subdomains either sequentially or using multiple computers in parallel. Domain decomposition is a crucial and challenging step for parallel grid generation. Prior methods are generally based on auxiliary, complex, and computationally intensive operations for defining partition interfaces and usually produce grids of lower quality than those generated in single domains. The new technique, referred to as "Advancing Partition," is based on the Advancing-Front method, which partitions a domain as part of the volume mesh generation in a consistent and "natural" way. The benefits of this approach are: 1) the process of domain decomposition is highly automated, 2) partitioning of domain does not compromise the quality of the generated grids, and 3) the computational overhead for domain decomposition is minimal. The new method has been implemented in NASA's unstructured grid generation code VGRID.

  11. Domain Decomposition By the Advancing-Partition Method

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar Z.

    2008-01-01

    A new method of domain decomposition has been developed for generating unstructured grids in subdomains either sequentially or using multiple computers in parallel. Domain decomposition is a crucial and challenging step for parallel grid generation. Prior methods are generally based on auxiliary, complex, and computationally intensive operations for defining partition interfaces and usually produce grids of lower quality than those generated in single domains. The new technique, referred to as "Advancing Partition," is based on the Advancing-Front method, which partitions a domain as part of the volume mesh generation in a consistent and "natural" way. The benefits of this approach are: 1) the process of domain decomposition is highly automated, 2) partitioning of domain does not compromise the quality of the generated grids, and 3) the computational overhead for domain decomposition is minimal. The new method has been implemented in NASA's unstructured grid generation code VGRID.

  12. Effects of magnesium-based hydrogen storage materials on the thermal decomposition, burning rate, and explosive heat of ammonium perchlorate-based composite solid propellant.

    PubMed

    Liu, Leili; Li, Jie; Zhang, Lingyao; Tian, Siyu

    2018-01-15

    MgH2, Mg2NiH4, and Mg2CuH3 were prepared, and their structures and hydrogen storage properties were determined through X-ray photoelectron spectroscopy and thermal analysis. The effects of MgH2, Mg2NiH4, and Mg2CuH3 on the thermal decomposition, burning rate, and explosive heat of an ammonium perchlorate-based composite solid propellant were subsequently studied. Results indicated that MgH2, Mg2NiH4, and Mg2CuH3 can decrease the thermal decomposition peak temperature and increase the total released heat of decomposition. These compounds can thus enhance the thermal decomposition of the propellant. The burning rates of the propellant increased when Mg-based hydrogen storage materials were used as promoters. The burning rates also increased when MgH2 was used instead of Al in the propellant, but the explosive heat was not enlarged. Nonetheless, the combustion heat of MgH2 was higher than that of Al. A possible mechanism was thus proposed. Copyright © 2017. Published by Elsevier B.V.

  13. [Diversity of soil fauna in corn fields in Huang-Huai-Hai Plain of China under effects of conservation tillage].

    PubMed

    Zhu, Qiang-Gen; Zhu, An-Ning; Zhang, Jia-Bao; Zhang, Huan-Chao; Huang, Ping; Zhang, Cong-Zhi

    2009-10-01

    The abundance and diversity of soil fauna were investigated in corn fields under conventional and conservation tillage in the Huang-Huai-Hai Plain of China. The abundance and diversity of soil fauna were higher at corn maturity (September) than at the jointing stage (July), and higher at the jointing stage under conservation tillage than under conventional tillage. Soil fauna were mainly distributed in the surface soil layer (0-10 cm), but still occurred in considerable numbers in the 10-20 cm layer under conservation tillage. The numbers of acari, diptera, diplura, and microdrile oligochaetes, especially acari, were higher under conservation tillage than under conventional tillage. At the maturing stage, an obvious effect of straw return under conservation tillage was observed, i.e., the more straw returned, the higher the abundance of soil fauna; in particular, the numbers of collembola, acari, coleoptera, and psocoptera, especially collembola, increased significantly. The abundance of collembola at both the jointing and maturing stages was significantly positively correlated with the quantity of straw returned, suggesting that collembola play an important role in straw decomposition and nutrient cycling.

  14. Succession and dynamics of Pristionchus nematodes and their microbiome during decomposition of Oryctes borbonicus on La Réunion Island.

    PubMed

    Meyer, Jan M; Baskaran, Praveen; Quast, Christian; Susoy, Vladislav; Rödelsperger, Christian; Glöckner, Frank O; Sommer, Ralf J

    2017-04-01

    Insects and nematodes represent the most species-rich animal taxa and they occur together in a variety of associations. Necromenic nematodes of the genus Pristionchus are found on scarab beetles with more than 30 species known from worldwide samplings. However, little is known about the dynamics and succession of nematodes and bacteria during the decomposition of beetle carcasses. Here, we study nematode and bacterial succession of the decomposing rhinoceros beetle Oryctes borbonicus on La Réunion Island. We show that Pristionchus pacificus exits the arrested dauer stage seven days after the beetles' deaths. Surprisingly, new dauers are seen after 11 days, suggesting that some worms return to the dauer stage after one reproductive cycle. We used high-throughput sequencing of the 16S rRNA genes of decaying beetles, beetle guts and nematodes to study bacterial communities in comparison to soil. We find that soil environments have the most diverse bacterial communities. The bacterial communities of living and decaying beetles are more stable, but a single bacterial family dominates the microbiome of decaying beetles. In contrast, the microbiome of nematodes is relatively similar even across different families. This study represents the first characterization of the dynamics of nematode-bacterial interactions during the decomposition of insects. © 2017 Society for Applied Microbiology and John Wiley & Sons Ltd.

  15. An optimization approach for fitting canonical tensor decompositions.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dunlavy, Daniel M.; Acar, Evrim; Kolda, Tamara Gibson

    Tensor decompositions are higher-order analogues of matrix decompositions and have proven to be powerful tools for data analysis. In particular, we are interested in the canonical tensor decomposition, otherwise known as the CANDECOMP/PARAFAC decomposition (CPD), which expresses a tensor as the sum of component rank-one tensors and is used in a multitude of applications such as chemometrics, signal processing, neuroscience, and web analysis. The task of computing the CPD, however, can be difficult. The typical approach is based on alternating least squares (ALS) optimization, which can be remarkably fast but is not very accurate. Previously, nonlinear least squares (NLS) methods have also been recommended; existing NLS methods are accurate but slow. In this paper, we propose the use of gradient-based optimization methods. We discuss the mathematical calculation of the derivatives and further show that they can be computed efficiently, at the same cost as one iteration of ALS. Computational experiments demonstrate that the gradient-based optimization methods are much more accurate than ALS and orders of magnitude faster than NLS.
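
    A bare-bones ALS implementation of the CP decomposition, the baseline discussed above, is sketched below for a third-order tensor using plain numpy; the dimensions, rank, and fixed iteration count are arbitrary choices, and none of the gradient-based machinery of this record is reproduced.

```python
# Alternating least squares (ALS) for a rank-R CP decomposition of a
# third-order tensor, written with plain numpy for illustration.
import numpy as np

def unfold(T, mode):
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(A, B):
    return np.einsum("ir,jr->ijr", A, B).reshape(-1, A.shape[1])

def cp_als(T, rank=3, n_iter=200, seed=0):
    rng = np.random.default_rng(seed)
    factors = [rng.normal(size=(dim, rank)) for dim in T.shape]
    for _ in range(n_iter):
        for mode in range(3):
            others = [factors[m] for m in range(3) if m != mode]
            kr = khatri_rao(others[0], others[1])        # matches the unfold ordering
            gram = (others[0].T @ others[0]) * (others[1].T @ others[1])
            factors[mode] = unfold(T, mode) @ kr @ np.linalg.pinv(gram)
    return factors

# Build a random rank-3 tensor and check that ALS recovers it closely.
rng = np.random.default_rng(42)
A, B, C = (rng.normal(size=(d, 3)) for d in (10, 8, 6))
T = np.einsum("ir,jr,kr->ijk", A, B, C)
Ah, Bh, Ch = cp_als(T, rank=3)
That = np.einsum("ir,jr,kr->ijk", Ah, Bh, Ch)
print("relative error:", np.linalg.norm(T - That) / np.linalg.norm(T))
```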

  16. Spider foraging strategy affects trophic cascades under natural and drought conditions.

    PubMed

    Liu, Shengjie; Chen, Jin; Gan, Wenjin; Schaefer, Douglas; Gan, Jianmin; Yang, Xiaodong

    2015-07-23

    Spiders can cause trophic cascades affecting litter decomposition rates. However, it remains unclear how spiders with different foraging strategies influence faunal communities, or present cascading effects on decomposition. Furthermore, increased dry periods predicted in future climates will likely have important consequences for trophic interactions in detritus-based food webs. We investigated independent and interactive effects of spider predation and drought on litter decomposition in a tropical forest floor. We manipulated densities of dominant spiders with actively hunting or sit-and-wait foraging strategies in microcosms which mimicked the tropical-forest floor. We found a positive trophic cascade on litter decomposition was triggered by actively hunting spiders under ambient rainfall, but sit-and-wait spiders did not cause this. The drought treatment reversed the effect of actively hunting spiders on litter decomposition. Under drought conditions, we observed negative trophic cascade effects on litter decomposition in all three spider treatments. Thus, reduced rainfall can alter predator-induced indirect effects on lower trophic levels and ecosystem processes, and is an example of how such changes may alter trophic cascades in detritus-based webs of tropical forests.

  17. Spider foraging strategy affects trophic cascades under natural and drought conditions

    PubMed Central

    Liu, Shengjie; Chen, Jin; Gan, Wenjin; Schaefer, Douglas; Gan, Jianmin; Yang, Xiaodong

    2015-01-01

    Spiders can cause trophic cascades affecting litter decomposition rates. However, it remains unclear how spiders with different foraging strategies influence faunal communities, or present cascading effects on decomposition. Furthermore, increased dry periods predicted in future climates will likely have important consequences for trophic interactions in detritus-based food webs. We investigated independent and interactive effects of spider predation and drought on litter decomposition in a tropical forest floor. We manipulated densities of dominant spiders with actively hunting or sit-and-wait foraging strategies in microcosms which mimicked the tropical-forest floor. We found a positive trophic cascade on litter decomposition was triggered by actively hunting spiders under ambient rainfall, but sit-and-wait spiders did not cause this. The drought treatment reversed the effect of actively hunting spiders on litter decomposition. Under drought conditions, we observed negative trophic cascade effects on litter decomposition in all three spider treatments. Thus, reduced rainfall can alter predator-induced indirect effects on lower trophic levels and ecosystem processes, and is an example of how such changes may alter trophic cascades in detritus-based webs of tropical forests. PMID:26202370

  18. Corrected confidence bands for functional data using principal components.

    PubMed

    Goldsmith, J; Greven, S; Crainiceanu, C

    2013-03-01

    Functional principal components (FPC) analysis is widely used to decompose and express functional observations. Curve estimates implicitly condition on basis functions and other quantities derived from FPC decompositions; however these objects are unknown in practice. In this article, we propose a method for obtaining correct curve estimates by accounting for uncertainty in FPC decompositions. Additionally, pointwise and simultaneous confidence intervals that account for both model- and decomposition-based variability are constructed. Standard mixed model representations of functional expansions are used to construct curve estimates and variances conditional on a specific decomposition. Iterated expectation and variance formulas combine model-based conditional estimates across the distribution of decompositions. A bootstrap procedure is implemented to understand the uncertainty in principal component decomposition quantities. Our method compares favorably to competing approaches in simulation studies that include both densely and sparsely observed functions. We apply our method to sparse observations of CD4 cell counts and to dense white-matter tract profiles. Code for the analyses and simulations is publicly available, and our method is implemented in the R package refund on CRAN. Copyright © 2013, The International Biometric Society.
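
    The core idea, that conditioning on a single FPC decomposition understates uncertainty, can be sketched by bootstrapping the decomposition itself, as below. The simulated curves, number of components, and percentile bands are assumptions; the mixed-model variance combination and simultaneous bands of this record are not reproduced.

```python
# Bootstrap the FPC decomposition: refit the principal components on resampled
# subjects each time, so that the curve estimate's band reflects decomposition
# uncertainty as well as score uncertainty. Data are simulated dense curves.
import numpy as np

rng = np.random.default_rng(3)
n_subj, n_grid, n_pc = 60, 100, 2
t = np.linspace(0, 1, n_grid)
true_mean = np.sin(2 * np.pi * t)
phi = np.vstack([np.sqrt(2) * np.cos(2 * np.pi * t), np.sqrt(2) * np.sin(4 * np.pi * t)])
scores = rng.normal(scale=[2.0, 1.0], size=(n_subj, n_pc))
Y = true_mean + scores @ phi + rng.normal(scale=0.5, size=(n_subj, n_grid))

def fit_curve(sample, subject_row, n_pc=2):
    """Estimate one subject's curve from an FPC decomposition of `sample`."""
    mu = sample.mean(axis=0)
    _, _, Vt = np.linalg.svd(sample - mu, full_matrices=False)
    basis = Vt[:n_pc]                           # estimated FPCs
    est_scores = (subject_row - mu) @ basis.T   # project the subject onto them
    return mu + est_scores @ basis

# Resample subjects, refit the decomposition, and re-estimate subject 0's curve.
boot = np.array([fit_curve(Y[rng.integers(0, n_subj, n_subj)], Y[0]) for _ in range(200)])
lower, upper = np.percentile(boot, [2.5, 97.5], axis=0)
print("average pointwise band width:", np.mean(upper - lower))
```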

  19. Corrected Confidence Bands for Functional Data Using Principal Components

    PubMed Central

    Goldsmith, J.; Greven, S.; Crainiceanu, C.

    2014-01-01

    Functional principal components (FPC) analysis is widely used to decompose and express functional observations. Curve estimates implicitly condition on basis functions and other quantities derived from FPC decompositions; however these objects are unknown in practice. In this article, we propose a method for obtaining correct curve estimates by accounting for uncertainty in FPC decompositions. Additionally, pointwise and simultaneous confidence intervals that account for both model- and decomposition-based variability are constructed. Standard mixed model representations of functional expansions are used to construct curve estimates and variances conditional on a specific decomposition. Iterated expectation and variance formulas combine model-based conditional estimates across the distribution of decompositions. A bootstrap procedure is implemented to understand the uncertainty in principal component decomposition quantities. Our method compares favorably to competing approaches in simulation studies that include both densely and sparsely observed functions. We apply our method to sparse observations of CD4 cell counts and to dense white-matter tract profiles. Code for the analyses and simulations is publicly available, and our method is implemented in the R package refund on CRAN. PMID:23003003

  20. A Four-Stage Hybrid Model for Hydrological Time Series Forecasting

    PubMed Central

    Di, Chongli; Yang, Xiaohua; Wang, Xiaochao

    2014-01-01

    Hydrological time series forecasting remains a difficult task due to its complicated nonlinear, non-stationary and multi-scale characteristics. To solve this difficulty and improve the prediction accuracy, a novel four-stage hybrid model is proposed for hydrological time series forecasting based on the principle of ‘denoising, decomposition and ensemble’. The proposed model has four stages, i.e., denoising, decomposition, components prediction and ensemble. In the denoising stage, the empirical mode decomposition (EMD) method is utilized to reduce the noises in the hydrological time series. Then, an improved method of EMD, the ensemble empirical mode decomposition (EEMD), is applied to decompose the denoised series into a number of intrinsic mode function (IMF) components and one residual component. Next, the radial basis function neural network (RBFNN) is adopted to predict the trend of all of the components obtained in the decomposition stage. In the final ensemble prediction stage, the forecasting results of all of the IMF and residual components obtained in the third stage are combined to generate the final prediction results, using a linear neural network (LNN) model. For illustration and verification, six hydrological cases with different characteristics are used to test the effectiveness of the proposed model. The proposed hybrid model performs better than conventional single models, the hybrid models without denoising or decomposition and the hybrid models based on other methods, such as the wavelet analysis (WA)-based hybrid models. In addition, the denoising and decomposition strategies decrease the complexity of the series and reduce the difficulties of the forecasting. With its effective denoising and accurate decomposition ability, high prediction precision and wide applicability, the new model is very promising for complex time series forecasting. This new forecast model is an extension of nonlinear prediction models. PMID:25111782

  1. A four-stage hybrid model for hydrological time series forecasting.

    PubMed

    Di, Chongli; Yang, Xiaohua; Wang, Xiaochao

    2014-01-01

    Hydrological time series forecasting remains a difficult task due to its complicated nonlinear, non-stationary and multi-scale characteristics. To solve this difficulty and improve the prediction accuracy, a novel four-stage hybrid model is proposed for hydrological time series forecasting based on the principle of 'denoising, decomposition and ensemble'. The proposed model has four stages, i.e., denoising, decomposition, components prediction and ensemble. In the denoising stage, the empirical mode decomposition (EMD) method is utilized to reduce the noises in the hydrological time series. Then, an improved method of EMD, the ensemble empirical mode decomposition (EEMD), is applied to decompose the denoised series into a number of intrinsic mode function (IMF) components and one residual component. Next, the radial basis function neural network (RBFNN) is adopted to predict the trend of all of the components obtained in the decomposition stage. In the final ensemble prediction stage, the forecasting results of all of the IMF and residual components obtained in the third stage are combined to generate the final prediction results, using a linear neural network (LNN) model. For illustration and verification, six hydrological cases with different characteristics are used to test the effectiveness of the proposed model. The proposed hybrid model performs better than conventional single models, the hybrid models without denoising or decomposition and the hybrid models based on other methods, such as the wavelet analysis (WA)-based hybrid models. In addition, the denoising and decomposition strategies decrease the complexity of the series and reduce the difficulties of the forecasting. With its effective denoising and accurate decomposition ability, high prediction precision and wide applicability, the new model is very promising for complex time series forecasting. This new forecast model is an extension of nonlinear prediction models.

  2. Microbial Decomposers Not Constrained by Climate History Along a Mediterranean Climate Gradient

    NASA Astrophysics Data System (ADS)

    Baker, N. R.; Khalili, B.; Martiny, J. B. H.; Allison, S. D.

    2017-12-01

    The return of organic carbon to the atmosphere through terrestrial decomposition is mediated through the breakdown of complex organic polymers by extracellular enzymes produced by microbial decomposer communities. Determining if and how these decomposer communities are constrained in their ability to degrade plant litter is necessary for predicting how carbon cycling will be affected by future climate change. To address this question, we deployed fine-pore nylon mesh "microbial cage" litterbags containing grassland litter with and without local inoculum across five sites in southern California, spanning a gradient of 10.3-22.8° C in mean annual temperature and 100-400+ mm mean annual precipitation. Litterbags were deployed in October 2014 and collected four times over the course of 14 months. Recovered litter was assayed for mass loss, litter chemistry, microbial biomass, extracellular enzymes (Vmax and Km­), and enzyme temperature sensitivities. We hypothesized that grassland litter would decompose most rapidly in the grassland site, and that access to local microbial communities would enhance litter decomposition rates and microbial activity in the other sites along the gradient. We determined that temperature and precipitation likely interact to limit microbial decomposition in the extreme sites along our gradient. Despite their unique climate history, grassland microbes were not restricted in their ability to decompose litter under different climate conditions. Although we observed a strong correlation between bacterial biomass and mass loss across the gradient, litter that was inoculated with local microbial communities lost less mass despite having greater bacterial biomass and potentially accumulating more microbial residues. Our results suggest that microbial community composition may not constrain C-cycling rates under climate change in our system. However, there may be community constraints on decomposition if climate change alters litter chemistry, a mechanism only indirectly addressed by our design.

  3. Re-establishing marshes can return carbon sink functions to a current carbon source in the Sacramento-San Joaquin Delta of California, USA

    USGS Publications Warehouse

    Miller, Robin L.; Fujii, Roger; Schmidt, Paul E.

    2011-01-01

    The Sacramento-San Joaquin Delta in California was an historic, vast inland freshwater wetland, where organic soils almost 20 meters deep formed over the last several millennia as the land surface elevation of marshes kept pace with sea level rise. A system of levees and pumps was installed in the late 1800s and early 1900s to drain the land for agricultural use. Since then, the land surface has subsided more than 7 meters below sea level in some areas as organic soils have been lost to aerobic decomposition. As land surface elevations decrease, costs for levee maintenance and repair increase, as do the risks of flooding. Wetland restoration can be a way to mitigate subsidence by re-creating the environment in which the organic soils developed. A preliminary study of the effect of hydrologic regime on carbon cycling conducted on Twitchell Island during the mid-1990s showed that continuous, shallow flooding allowing for the growth of emergent marsh vegetation re-created a wetland environment where carbon preservation occurred. Under these conditions annual plant biomass carbon inputs were high, and microbial decomposition was reduced. Based on this preliminary study, the U.S. Geological Survey re-established permanently flooded wetlands in fall 1997, with shallow water depths of 25 and 55 centimeters, to investigate the potential to reverse subsidence of delta islands by preserving and accumulating organic substrates over time. Ten years after flooding, elevation gains from organic matter accumulation in areas of emergent marsh vegetation ranged from almost 30 to 60 centimeters, with average annual carbon storage rates approximating 1 kg/m2, while areas without emergent vegetation cover showed no significant change in elevation. Differences in accretion rates within areas of emergent marsh vegetation appeared to result from temporal and spatial variability in hydrologic factors and decomposition rates in the wetlands rather than variability in primary production. Decomposition rates were related to differences in hydrologic conditions, including water temperature, pH, dissolved oxygen concentration, and availability of alternate electron acceptors. The study showed that marsh re-establishment with permanent, low-energy, shallow flooding can limit oxidation of organic soils, thus effectively turning subsiding land from atmospheric carbon sources to carbon sinks, and at the same time reducing flood vulnerability.

  4. Model-based multiple patterning layout decomposition

    NASA Astrophysics Data System (ADS)

    Guo, Daifeng; Tian, Haitong; Du, Yuelin; Wong, Martin D. F.

    2015-10-01

    As one of the most promising next-generation lithography technologies, multiple patterning lithography (MPL) plays an important role in the attempts to keep pace with the 10 nm technology node and beyond. As feature sizes keep shrinking, it has become impossible to print dense layouts within a single exposure. As a result, MPL such as double patterning lithography (DPL) and triple patterning lithography (TPL) has been widely adopted. There is a large volume of literature on DPL/TPL layout decomposition, and the current approach is to formulate the problem as a classical graph-coloring problem: layout features (polygons) are represented by vertices in a graph G, and there is an edge between two vertices if and only if the distance between the two corresponding features is less than a minimum distance threshold value dmin. The problem is to color the vertices of G using k colors (k = 2 for DPL, k = 3 for TPL) such that no two vertices connected by an edge are given the same color. This is a rule-based approach, which imposes a geometric distance as a minimum constraint and simply decomposes polygons within that distance into different masks. It is not desirable in practice because this criterion cannot completely capture the behavior of the optics. For example, it lacks sufficient information such as the optical source characteristics and the effects between polygons outside the minimum distance. To remedy the deficiency, a model-based layout decomposition approach that bases the decomposition criteria on simulation results was first introduced at SPIE 2013 [1]. However, that algorithm [1] is based on simplified assumptions about the optical simulation model, and therefore its usage on real layouts is limited. Recently, AMSL [2] also proposed a model-based approach to layout decomposition by iteratively simulating the layout, which requires excessive computational resources and may lead to sub-optimal solutions. That approach [2] also potentially generates too many stitches. In this paper, we propose a model-based MPL layout decomposition method using a pre-simulated library of frequent layout patterns. Instead of using the graph G in the standard graph-coloring formulation, we build an expanded graph H where each vertex represents a group of adjacent features together with a coloring solution. By utilizing the library and running sophisticated graph algorithms on H, our approach can obtain optimal decomposition results efficiently. Our model-based solution can achieve a practical mask design which significantly improves the lithography quality on the wafer compared to rule-based decomposition.
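
    The rule-based baseline described above reduces to coloring a conflict graph. A minimal sketch follows, with made-up feature coordinates and minimum distance and a greedy heuristic coloring in place of an exact solver; the model-based pattern library of this record is not reproduced.

```python
# Rule-based layout decomposition as conflict-graph coloring: features closer
# than d_min get an edge, then the graph is colored; the number of colors used
# corresponds to the number of masks (2 -> DPL, 3 -> TPL). Greedy coloring is
# a heuristic, not an exact k-coloring solver. Coordinates are made up.
import itertools
import networkx as nx

# Each feature is (name, x, y) as a simple point abstraction of a polygon.
features = [("A", 0, 0), ("B", 30, 0), ("C", 60, 0), ("D", 30, 40), ("E", 90, 10)]
d_min = 45.0

G = nx.Graph()
G.add_nodes_from(name for name, _, _ in features)
for (n1, x1, y1), (n2, x2, y2) in itertools.combinations(features, 2):
    if ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5 < d_min:
        G.add_edge(n1, n2)                      # too close to share a mask

coloring = nx.coloring.greedy_color(G, strategy="largest_first")
n_masks = max(coloring.values()) + 1
print("masks needed:", n_masks, coloring)
```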

  5. Interface conditions for domain decomposition with radical grid refinement

    NASA Technical Reports Server (NTRS)

    Scroggs, Jeffrey S.

    1991-01-01

    Interface conditions for coupling the domains in a physically motivated domain decomposition method are discussed. The domain decomposition is based on an asymptotic-induced method for the numerical solution of hyperbolic conservation laws with small viscosity. The method consists of multiple stages. The first stage is to obtain a first approximation using a first-order method, such as the Godunov scheme. Subsequent stages of the method involve solving internal-layer problems via a domain decomposition. The method is derived and justified via singular perturbation techniques.

  6. A comparison between decomposition rates of buried and surface remains in a temperate region of South Africa.

    PubMed

    Marais-Werner, Anátulie; Myburgh, J; Becker, P J; Steyn, M

    2018-01-01

    Several studies have been conducted on decomposition patterns and rates of surface remains; however, much less is known about this process for buried remains. Understanding the process of decomposition in buried remains is extremely important and aids in criminal investigations, especially when attempting to estimate the post mortem interval (PMI). The aim of this study was to compare the rates of decomposition between buried and surface remains. For this purpose, 25 pigs (Sus scrofa; 45-80 kg) were buried and excavated at different post mortem intervals (7, 14, 33, 92, and 183 days). The observed total body scores were then compared to those of surface remains decomposing at the same location. Stages of decomposition were scored according to separate categories for different anatomical regions based on standardised methods. Variation in the degree of decomposition was considerable, especially with the buried 7-day interval pigs, which displayed different degrees of discolouration in the lower abdomen and trunk. At 14 and 33 days, buried pigs displayed features commonly associated with the early stages of decomposition, but with less variation. A state of advanced decomposition was reached where little change was observed in the next ±90-183 days after interment. Although the patterns of decomposition for buried and surface remains were very similar, the rates differed considerably. Based on the observations made in this study, guidelines for the estimation of PMI are proposed. This pertains to buried remains found at a depth of approximately 0.75 m in the Central Highveld of South Africa.

  7. Morphological decomposition of 2-D binary shapes into convex polygons: a heuristic algorithm.

    PubMed

    Xu, J

    2001-01-01

    In many morphological shape decomposition algorithms, either a shape can only be decomposed into shape components of extremely simple forms or a time consuming search process is employed to determine a decomposition. In this paper, we present a morphological shape decomposition algorithm that decomposes a two-dimensional (2-D) binary shape into a collection of convex polygonal components. A single convex polygonal approximation for a given image is first identified. This first component is determined incrementally by selecting a sequence of basic shape primitives. These shape primitives are chosen based on shape information extracted from the given shape at different scale levels. Additional shape components are identified recursively from the difference image between the given image and the first component. Simple operations are used to repair certain concavities caused by the set difference operation. The resulting hierarchical structure provides descriptions for the given shape at different detail levels. The experiments show that the decomposition results produced by the algorithm seem to be in good agreement with the natural structures of the given shapes. The computational cost of the algorithm is significantly lower than that of an earlier search-based convex decomposition algorithm. Compared to nonconvex decomposition algorithms, our algorithm allows accurate approximations for the given shapes at low coding costs.

  8. A statistical approach based on accumulated degree-days to predict decomposition-related processes in forensic studies.

    PubMed

    Michaud, Jean-Philippe; Moreau, Gaétan

    2011-01-01

    Using pig carcasses exposed over 3 years in rural fields during spring, summer, and fall, we studied the relationship between decomposition stages and degree-day accumulation (i) to verify the predictability of the decomposition stages used in forensic entomology to document carcass decomposition and (ii) to build a degree-day accumulation model applicable to various decomposition-related processes. Results indicate that the decomposition stages can be predicted with accuracy from temperature records and that a reliable degree-day index can be developed to study decomposition-related processes. The development of degree-day indices opens new doors for researchers and allows for the application of inferential tools unaffected by climatic variability, as well as for the inclusion of statistics in a science that is primarily descriptive and in need of validation methods in courtroom proceedings. © 2010 American Academy of Forensic Sciences.
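
    Accumulated degree-days (ADD) are simply the sum of daily mean temperatures above a base temperature, so that decomposition-related processes are indexed by thermal time rather than calendar time. A minimal sketch with illustrative temperatures and base temperature follows.

```python
# Minimal accumulated degree-day (ADD) calculation of the kind used to relate
# decomposition stage to thermal time. Temperatures and base are illustrative.
import numpy as np

def accumulated_degree_days(daily_mean_temps, base_temp=0.0):
    """Sum of daily mean temperatures above a base temperature."""
    return float(np.sum(np.clip(np.asarray(daily_mean_temps) - base_temp, 0.0, None)))

daily_means = [12.5, 15.0, 18.2, 9.8, -1.0, 4.5, 20.1]   # degrees C
print("ADD after one week:", accumulated_degree_days(daily_means), "degree-days")
```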

  9. Automatic network coupling analysis for dynamical systems based on detailed kinetic models.

    PubMed

    Lebiedz, Dirk; Kammerer, Julia; Brandt-Pollmann, Ulrich

    2005-10-01

    We introduce a numerical complexity reduction method for the automatic identification and analysis of dynamic network decompositions in (bio)chemical kinetics based on error-controlled computation of a minimal model dimension represented by the number of (locally) active dynamical modes. Our algorithm exploits a generalized sensitivity analysis along state trajectories and subsequent singular value decomposition of sensitivity matrices for the identification of these dominant dynamical modes. It allows for a dynamic coupling analysis of (bio)chemical species in kinetic models that can be exploited for the piecewise computation of a minimal model on small time intervals and offers valuable functional insight into highly nonlinear reaction mechanisms and network dynamics. We present results for the identification of network decompositions in a simple oscillatory chemical reaction, time scale separation based model reduction in a Michaelis-Menten enzyme system and network decomposition of a detailed model for the oscillatory peroxidase-oxidase enzyme system.
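
    The mode-counting step, a singular value decomposition of a local sensitivity matrix, can be sketched on a toy two-timescale kinetic system as below. The reaction scheme, finite-difference sensitivities, and fixed threshold are assumptions for illustration, not the error-controlled algorithm of this record.

```python
# Threshold the singular value spectrum of a local sensitivity matrix to
# estimate how many dynamical modes are active. The kinetic model is a generic
# two-species toy system with one slow and one fast mode.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y, k1=1.0, k2=50.0):
    a, b = y
    return [-k1 * a, k1 * a - k2 * b]           # slow production, fast consumption

def sensitivity_matrix(y0, t_end, eps=1e-5):
    """Finite-difference sensitivities of the state at t_end w.r.t. y0."""
    opts = dict(t_eval=[t_end], rtol=1e-10, atol=1e-12)
    base = solve_ivp(rhs, (0, t_end), y0, **opts).y[:, -1]
    cols = []
    for i in range(len(y0)):
        pert = np.array(y0, dtype=float)
        pert[i] += eps
        cols.append((solve_ivp(rhs, (0, t_end), pert, **opts).y[:, -1] - base) / eps)
    return np.column_stack(cols)

S = sensitivity_matrix([1.0, 0.0], t_end=0.5)
sing = np.linalg.svd(S, compute_uv=False)
active = int(np.sum(sing / sing[0] > 1e-3))     # fixed cutoff (illustrative)
print("singular values:", sing, "-> active modes:", active)
```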

  10. Testing the monogamy relations via rank-2 mixtures

    NASA Astrophysics Data System (ADS)

    Jung, Eylee; Park, DaeKil

    2016-10-01

    We introduce two tangle-based four-party entanglement measures t1 and t2, and two negativity-based measures n1 and n2, which are derived from the monogamy relations. These measures are computed for three four-qubit maximally entangled and W states explicitly. We also compute these measures for the rank-2 mixture ρ4 = p|GHZ4⟩⟨GHZ4| + (1 − p)|W4⟩⟨W4| by finding the corresponding optimal decompositions. It turns out that t1(ρ4) is trivial and the corresponding optimal decomposition is equal to the spectral decomposition. Probably, this triviality is a sign of the fact that the corresponding monogamy inequality is not sufficiently tight. We fail to compute t2(ρ4) due to the difficulty in the calculation of the residual entanglement. The negativity-based measures n1(ρ4) and n2(ρ4) are explicitly computed and the corresponding optimal decompositions are also derived explicitly.

  11. Multi-focus image fusion based on window empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Qin, Xinqiang; Zheng, Jiaoyue; Hu, Gang; Wang, Jiao

    2017-09-01

    In order to improve multi-focus image fusion quality, a novel fusion algorithm based on window empirical mode decomposition (WEMD) is proposed. This WEMD is an improved form of bidimensional empirical mode decomposition (BEMD), due to its decomposition process using the adding window principle, effectively resolving the signal concealment problem. We used WEMD for multi-focus image fusion, and formulated different fusion rules for bidimensional intrinsic mode function (BIMF) components and the residue component. For fusion of the BIMF components, the concept of the Sum-modified-Laplacian was used and a scheme based on the visual feature contrast adopted; when choosing the residue coefficients, a pixel value based on the local visibility was selected. We carried out four groups of multi-focus image fusion experiments and compared objective evaluation criteria with other three fusion methods. The experimental results show that the proposed fusion approach is effective and performs better at fusing multi-focus images than some traditional methods.

  12. Distributed Prognostics based on Structural Model Decomposition

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew J.; Bregon, Anibal; Roychoudhury, I.

    2014-01-01

    Within systems health management, prognostics focuses on predicting the remaining useful life of a system. In the model-based prognostics paradigm, physics-based models are constructed that describe the operation of a system and how it fails. Such approaches consist of an estimation phase, in which the health state of the system is first identified, and a prediction phase, in which the health state is projected forward in time to determine the end of life. Centralized solutions to these problems are often computationally expensive, do not scale well as the size of the system grows, and introduce a single point of failure. In this paper, we propose a novel distributed model-based prognostics scheme that formally describes how to decompose both the estimation and prediction problems into independent local subproblems whose solutions may be easily composed into a global solution. The decomposition of the prognostics problem is achieved through structural decomposition of the underlying models. The decomposition algorithm creates from the global system model a set of local submodels suitable for prognostics. Independent local estimation and prediction problems are formed based on these local submodels, resulting in a scalable distributed prognostics approach that allows the local subproblems to be solved in parallel, thus offering increases in computational efficiency. Using a centrifugal pump as a case study, we perform a number of simulation-based experiments to demonstrate the distributed approach, compare the performance with a centralized approach, and establish its scalability. Index Terms: model-based prognostics, distributed prognostics, structural model decomposition

  13. Multilevel decomposition of complete vehicle configuration in a parallel computing environment

    NASA Technical Reports Server (NTRS)

    Bhatt, Vinay; Ragsdell, K. M.

    1989-01-01

    This research summarizes various approaches to multilevel decomposition to solve large structural problems. A linear decomposition scheme based on the Sobieski algorithm is selected as a vehicle for automated synthesis of a complete vehicle configuration in a parallel processing environment. The research is in a developmental state. Preliminary numerical results are presented for several example problems.

  14. Effect of urea additive on the thermal decomposition kinetics of flame retardant greige cotton nonwoven fabric

    Treesearch

    Sunghyun Nam; Brian D. Condon; Robert H. White; Qi Zhao; Fei Yao; Michael Santiago Cintrón

    2012-01-01

    Urea is well known to have a synergistic action with phosphorus-based flame retardants (FRs) in enhancing the FR performance of cellulosic materials, but the effect of urea on the thermal decomposition kinetics has not been thoroughly studied. In this study, the activation energy (Ea) for the thermal decomposition of greige...

  15. Evaluating litter decomposition and soil organic matter dynamics in earth system models: contrasting analysis of long-term litter decomposition and steady-state soil carbon

    NASA Astrophysics Data System (ADS)

    Bonan, G. B.; Wieder, W. R.

    2012-12-01

    Decomposition is a large term in the global carbon budget, but models of the earth system that simulate carbon cycle-climate feedbacks are largely untested with respect to litter decomposition. Here, we demonstrate a protocol to document model performance with respect to both long-term (10 year) litter decomposition and steady-state soil carbon stocks. First, we test the soil organic matter parameterization of the Community Land Model version 4 (CLM4), the terrestrial component of the Community Earth System Model, with data from the Long-term Intersite Decomposition Experiment Team (LIDET). The LIDET dataset is a 10-year study of litter decomposition at multiple sites across North America and Central America. We show results for 10-year litter decomposition simulations compared with LIDET for 9 litter types and 20 sites in tundra, grassland, and boreal, conifer, deciduous, and tropical forest biomes. We show additional simulations with DAYCENT, a version of the CENTURY model, to ask how well an established ecosystem model matches the observations. The results reveal large discrepancy between the laboratory microcosm studies used to parameterize the CLM4 litter decomposition and the LIDET field study. Simulated carbon loss is more rapid than the observations across all sites, despite using the LIDET-provided climatic decomposition index to constrain temperature and moisture effects on decomposition. Nitrogen immobilization is similarly biased high. Closer agreement with the observations requires much lower decomposition rates, obtained with the assumption that nitrogen severely limits decomposition. DAYCENT better replicates the observations, for both carbon mass remaining and nitrogen, without requirement for nitrogen limitation of decomposition. Second, we compare global observationally-based datasets of soil carbon with simulated steady-state soil carbon stocks for both models. The models simulations were forced with observationally-based estimates of annual litterfall and model-derived climatic decomposition index. While comparison with the LIDET 10-year litterbag study reveals sharp contrasts between CLM4 and DAYCENT, simulations of steady-state soil carbon show less difference between models. Both CLM4 and DAYCENT significantly underestimate soil carbon. Sensitivity analyses highlight causes of the low soil carbon bias. The terrestrial biogeochemistry of earth system models must be critically tested with observations, and the consequences of particular model choices must be documented. Long-term litter decomposition experiments such as LIDET provide a real-world process-oriented benchmark to evaluate models and can critically inform model development. Analysis of steady-state soil carbon estimates reveal additional, but here different, inferences about model performance.

  16. Domain decomposition for aerodynamic and aeroacoustic analyses, and optimization

    NASA Technical Reports Server (NTRS)

    Baysal, Oktay

    1995-01-01

    The overarching theme was the domain decomposition, which was intended to improve the numerical solution technique for the partial differential equations at hand; in the present study, those that governed either the fluid flow, or the aeroacoustic wave propagation, or the sensitivity analysis for a gradient-based optimization. The role of the domain decomposition extended beyond the original impetus of discretizing geometrically complex regions or writing modular software for distributed-hardware computers. It induced function-space decompositions and operator decompositions that offered the valuable property of near independence of operator evaluation tasks. The objectives centered on extending and implementing previously developed or concurrently developed methodologies: (1) aerodynamic sensitivity analysis with domain decomposition (SADD); (2) computational aeroacoustics of cavities; and (3) dynamic, multibody computational fluid dynamics using unstructured meshes.

  17. Metallographic study of metallic fragment of lunar surface material

    NASA Technical Reports Server (NTRS)

    Mints, R. I.; Petukhova, T. M.; Ivanov, A. V.

    1974-01-01

    A high precision investigation of a metallic fragment from the lunar material returned by the Soviet Luna 16 automatic station revealed three characteristic temperature intervals with different kinetics of solid solution decomposition. The following were found in the structure of the iron-nickel-cobalt alloy: (1) delta-phase and alpha-ferrite of diffusional, displacement origin in the grain boundary and acicular forms; and (2) martensite of isothermal and athermal nature, acicular, lamellar, massive, and dendritic. The diversity of the shapes of structural constituents is associated with the effect on their formation of elastic distortions and various mechanisms of deformation relaxation processes.

  18. Neural image analysis for estimating aerobic and anaerobic decomposition of organic matter based on the example of straw decomposition

    NASA Astrophysics Data System (ADS)

    Boniecki, P.; Nowakowski, K.; Slosarz, P.; Dach, J.; Pilarski, K.

    2012-04-01

    The purpose of the project was to identify the degree of organic matter decomposition by means of a neural model based on graphical information derived from image analysis. Empirical data (photographs of compost content at various stages of maturation) were used to generate an optimal neural classifier (Boniecki et al. 2009, Nowakowski et al. 2009). The best classification properties were found in an RBF (Radial Basis Function) artificial neural network, which demonstrates that the process is non-linear.
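
    The abstract identifies an RBF artificial neural network as the best classifier of image-derived features. A minimal sketch of such a classifier is given below, assuming the image features have already been extracted; the synthetic feature matrix, the k-means choice of centres and the least-squares output layer are assumptions for illustration, not the authors' implementation.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)

    # Hypothetical image-derived features (e.g., colour and texture statistics) for
    # compost samples, with labels 0..2 for increasing degree of decomposition.
    X = rng.normal(size=(150, 4))
    y = rng.integers(0, 3, size=150)

    def rbf_design(X, centers, width):
        """Gaussian hidden-layer activations of an RBF network."""
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        return np.exp(-d2 / (2.0 * width ** 2))

    # Hidden layer: Gaussian units centred on k-means centroids of the features.
    centers = KMeans(n_clusters=10, n_init=10, random_state=0).fit(X).cluster_centers_
    width = np.median(np.linalg.norm(X[:, None] - centers[None], axis=2))

    H = rbf_design(X, centers, width)
    T = np.eye(3)[y]                              # one-hot class targets
    W = np.linalg.lstsq(H, T, rcond=None)[0]      # linear readout weights

    pred = rbf_design(X, centers, width).dot(W).argmax(axis=1)
    print("training accuracy:", (pred == y).mean())
    ```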

  19. An optimized time varying filtering based empirical mode decomposition method with grey wolf optimizer for machinery fault diagnosis

    NASA Astrophysics Data System (ADS)

    Zhang, Xin; Liu, Zhiwen; Miao, Qiang; Wang, Lei

    2018-03-01

    A time varying filtering based empirical mode decomposition (EMD) (TVF-EMD) method was proposed recently to solve the mode mixing problem of the EMD method. Compared with the classical EMD, TVF-EMD was proven to improve the frequency separation performance and be robust to noise interference. However, the decomposition parameters (i.e., bandwidth threshold and B-spline order) significantly affect the decomposition results of this method. In the original TVF-EMD method, the parameter values are assigned in advance, which makes it difficult to achieve satisfactory analysis results. To solve this problem, this paper develops an optimized TVF-EMD method based on the grey wolf optimizer (GWO) algorithm for fault diagnosis of rotating machinery. Firstly, a measurement index termed the weighted kurtosis index is constructed from the kurtosis index and the correlation coefficient. Subsequently, the optimal TVF-EMD parameters that match the input signal can be obtained by the GWO algorithm using the maximum weighted kurtosis index as the objective function. Finally, fault features can be extracted by analyzing the sensitive intrinsic mode function (IMF) with the maximum weighted kurtosis index. Simulations and comparisons highlight the performance of the TVF-EMD method for signal decomposition, and verify that the bandwidth threshold and B-spline order are critical to the decomposition results. Two case studies on rotating machinery fault diagnosis demonstrate the effectiveness and advantages of the proposed method.
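
    The abstract describes optimizing the two TVF-EMD parameters with GWO using a weighted kurtosis objective. The sketch below shows a standard GWO loop driving such an objective; the TVF-EMD step is replaced by a crude stand-in filter, and the weighted kurtosis index is assumed (for illustration only) to be the product of a component's kurtosis and its absolute correlation with the raw signal, since the paper's exact construction is not reproduced here.

    ```python
    import numpy as np
    from scipy.stats import kurtosis, pearsonr

    rng = np.random.default_rng(1)

    def tvf_emd_stub(signal, bw_thresh, order):
        """Placeholder for TVF-EMD: returns one crude 'IMF' via a moving-average
        detrend whose window depends on the two parameters. A real implementation
        would return the full set of intrinsic mode functions."""
        win = max(3, int(bw_thresh * 100) + int(order))
        trend = np.convolve(signal, np.ones(win) / win, mode="same")
        return signal - trend

    def weighted_kurtosis(params, signal):
        bw, order = params
        imf = tvf_emd_stub(signal, bw, order)
        return kurtosis(imf) * abs(pearsonr(imf, signal)[0])   # assumed combination

    def gwo(objective, bounds, n_wolves=8, n_iter=30):
        """Standard grey wolf optimizer, maximizing `objective` within `bounds`."""
        lo, hi = np.array(bounds).T
        wolves = rng.uniform(lo, hi, size=(n_wolves, len(lo)))
        scores = np.array([objective(w) for w in wolves])
        for it in range(n_iter):
            rank = np.argsort(scores)[::-1]
            alpha, beta, delta = wolves[rank[:3]]
            a = 2.0 - 2.0 * it / n_iter
            for i in range(n_wolves):
                new = np.zeros_like(wolves[i])
                for leader in (alpha, beta, delta):
                    r1, r2 = rng.random(len(lo)), rng.random(len(lo))
                    A, C = 2 * a * r1 - a, 2 * r2
                    new += leader - A * np.abs(C * leader - wolves[i])
                wolves[i] = np.clip(new / 3.0, lo, hi)
                scores[i] = objective(wolves[i])
        return wolves[np.argmax(scores)], scores.max()

    t = np.linspace(0, 1, 2000)
    signal = np.sin(2 * np.pi * 30 * t) + 0.3 * rng.normal(size=t.size)
    best, score = gwo(lambda p: weighted_kurtosis(p, signal),
                      bounds=[(0.05, 0.5), (5, 40)])     # illustrative parameter ranges
    print("best (bandwidth threshold, B-spline order):", best, "score:", score)
    ```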

  20. Phase correlation of foreign exchange time series

    NASA Astrophysics Data System (ADS)

    Wu, Ming-Chya

    2007-03-01

    Correlation of foreign exchange rates in currency markets is investigated based on the empirical data of USD/DEM and USD/JPY exchange rates for a period from February 1 1986 to December 31 1996. The return of exchange time series is first decomposed into a number of intrinsic mode functions (IMFs) by the empirical mode decomposition method. The instantaneous phases of the resultant IMFs calculated by the Hilbert transform are then used to characterize the behaviors of pricing transmissions, and the correlation is probed by measuring the phase differences between two IMFs in the same order. From the distribution of phase differences, our results show explicitly that the correlations are stronger in daily time scale than in longer time scales. The demonstration for the correlations in periods of 1986-1989 and 1990-1993 indicates two exchange rates in the former period were more correlated than in the latter period. The result is consistent with the observations from the cross-correlation calculation.
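
    The analysis pipeline is: decompose each return series into IMFs, take the Hilbert transform to obtain instantaneous phases, and study the distribution of phase differences between same-order IMFs. The snippet below sketches only the phase-difference step, with two synthetic components standing in for matched IMFs (the EMD step itself is assumed to have been done).

    ```python
    import numpy as np
    from scipy.signal import hilbert

    rng = np.random.default_rng(0)
    n = 4096
    t = np.arange(n)

    # Stand-ins for two same-order IMFs of two exchange-rate return series
    # (a real analysis would obtain these from empirical mode decomposition).
    common = np.sin(2 * np.pi * t / 64)
    imf_a = common + 0.5 * rng.normal(size=n)
    imf_b = np.roll(common, 3) + 0.5 * rng.normal(size=n)

    phase_a = np.unwrap(np.angle(hilbert(imf_a)))      # instantaneous phase of IMF a
    phase_b = np.unwrap(np.angle(hilbert(imf_b)))
    dphi = np.angle(np.exp(1j * (phase_a - phase_b)))  # wrap differences to (-pi, pi]

    # A narrow, peaked histogram of dphi indicates strong phase correlation
    # between the two series at this time scale.
    hist, edges = np.histogram(dphi, bins=36, range=(-np.pi, np.pi), density=True)
    print("peak density:", hist.max(), "near phase difference", edges[hist.argmax()])
    ```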

  1. Comparing the Scoring of Human Decomposition from Digital Images to Scoring Using On-site Observations.

    PubMed

    Dabbs, Gretchen R; Bytheway, Joan A; Connor, Melissa

    2017-09-01

    When in forensic casework or empirical research in-person assessment of human decomposition is not possible, the sensible substitution is color photographic images. To date, no research has confirmed the utility of color photographic images as a proxy for in situ observation of the level of decomposition. Sixteen observers scored photographs of 13 human cadavers in varying decomposition stages (PMI 2-186 days) using the Total Body Score system (total n = 929 observations). The on-site TBS was compared with recorded observations from digital color images using a paired samples t-test. The average difference between on-site and photographic observations was -0.20 (t = -1.679, df = 928, p = 0.094). Individually, only two observers, both students with <1 year of experience, demonstrated TBS statistically significantly different than the on-site value, suggesting that with experience, observations of human decomposition based on digital images can be substituted for assessments based on observation of the corpse in situ, when necessary. © 2017 American Academy of Forensic Sciences.
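
    A paired-samples t-test on the on-site versus photograph-based Total Body Scores is the core statistical comparison here. The sketch below runs that test on made-up paired scores; the values are illustrative only, not the study's data.

    ```python
    import numpy as np
    from scipy.stats import ttest_rel

    rng = np.random.default_rng(2)

    # Illustrative paired Total Body Score (TBS) observations: on-site scoring versus
    # scoring from digital images of the same remains (values are made up).
    tbs_onsite = rng.uniform(3, 30, size=50).round(1)
    tbs_photo = (tbs_onsite + rng.normal(-0.2, 1.0, size=50)).round(1)

    t_stat, p_value = ttest_rel(tbs_onsite, tbs_photo)
    mean_diff = np.mean(tbs_onsite - tbs_photo)
    print(f"mean difference = {mean_diff:.2f}, t = {t_stat:.3f}, p = {p_value:.3f}")
    # A non-significant p-value (as reported in the study) supports using photographs
    # as a proxy for in situ scoring.
    ```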

  2. Augmenting the decomposition of EMG signals using supervised feature extraction techniques.

    PubMed

    Parsaei, Hossein; Gangeh, Mehrdad J; Stashuk, Daniel W; Kamel, Mohamed S

    2012-01-01

    Electromyographic (EMG) signal decomposition is the process of resolving an EMG signal into its constituent motor unit potential trains (MUPTs). In this work, the possibility of improving the decomposition results using two supervised feature extraction methods, i.e., Fisher discriminant analysis (FDA) and supervised principal component analysis (SPCA), is explored. Using the MUP labels provided by a decomposition-based quantitative EMG system as training data for FDA and SPCA, the MUPs are transformed into a new feature space such that the MUPs of a single MU become as close as possible to each other while those created by different MUs become as far apart as possible. The MUPs are then reclassified using a certainty-based classification algorithm. Evaluation results using 10 simulated EMG signals comprised of 3-11 MUPTs demonstrate that FDA and SPCA on average improve the decomposition accuracy by 6%. The improvement for the most difficult-to-decompose signal is about 12%, which shows the proposed approach is most beneficial in the decomposition of more complex signals.
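
    The approach trains a supervised projection on the labels from an initial decomposition and then reclassifies the MUPs in the projected space. The sketch below illustrates that idea with scikit-learn's linear discriminant analysis standing in for FDA (SPCA is omitted) and a nearest-centroid reclassifier; the synthetic MUP waveforms are assumptions for illustration, not simulated EMG.

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.neighbors import NearestCentroid

    rng = np.random.default_rng(3)

    # Synthetic stand-in for motor unit potentials (MUPs): 300 waveforms generated
    # from 4 motor-unit templates of 80 samples each, plus noise.
    templates = rng.normal(size=(4, 80))
    labels = rng.integers(0, 4, size=300)           # labels from an initial decomposition
    mups = templates[labels] + 0.8 * rng.normal(size=(300, 80))

    # Supervised projection (Fisher/linear discriminant analysis) trained on the
    # initial labels, followed by reclassification in the new feature space.
    lda = LinearDiscriminantAnalysis(n_components=3)
    features = lda.fit_transform(mups, labels)

    clf = NearestCentroid().fit(features, labels)
    relabels = clf.predict(features)
    print("agreement with initial labels:", (relabels == labels).mean())
    ```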

  3. Partial differential equation-based approach for empirical mode decomposition: application on image analysis.

    PubMed

    Niang, Oumar; Thioune, Abdoulaye; El Gueirea, Mouhamed Cheikh; Deléchelle, Eric; Lemoine, Jacques

    2012-09-01

    The major problem with the empirical mode decomposition (EMD) algorithm is its lack of a theoretical framework, so it is difficult to characterize and evaluate this approach. In this paper, we propose, in the 2-D case, the use of an alternative implementation to the algorithmic definition of the so-called "sifting process" used in the original Huang's EMD method. This approach, based on partial differential equations (PDEs), was presented by Niang in previous works in 2005 and 2007, and relies on a nonlinear diffusion-based filtering process to solve the mean envelope estimation problem. In the 1-D case, the efficiency of the PDE-based method, compared to the original EMD algorithmic version, was also illustrated in a recent paper. Recently, several 2-D extensions of the EMD method have been proposed. Despite some effort, 2-D versions of EMD appear to perform poorly and are very time-consuming. In this paper, therefore, an extension of the PDE-based approach to the 2-D case is extensively described. This approach has been applied in cases of both signal and image decomposition. The obtained results confirm the usefulness of the new PDE-based sifting process for the decomposition of various kinds of data. Some results have been provided in the case of image decomposition. The effectiveness of the approach encourages its use in a number of signal and image applications such as denoising, detrending, or texture analysis.

  4. Global sensitivity analysis for fuzzy inputs based on the decomposition of fuzzy output entropy

    NASA Astrophysics Data System (ADS)

    Shi, Yan; Lu, Zhenzhou; Zhou, Yicheng

    2018-06-01

    To analyse the component of fuzzy output entropy, a decomposition method of fuzzy output entropy is first presented. After the decomposition of fuzzy output entropy, the total fuzzy output entropy can be expressed as the sum of the component fuzzy entropy contributed by fuzzy inputs. Based on the decomposition of fuzzy output entropy, a new global sensitivity analysis model is established for measuring the effects of uncertainties of fuzzy inputs on the output. The global sensitivity analysis model can not only tell the importance of fuzzy inputs but also simultaneously reflect the structural composition of the response function to a certain degree. Several examples illustrate the validity of the proposed global sensitivity analysis, which is a significant reference in engineering design and optimization of structural systems.

  5. A novel ECG data compression method based on adaptive Fourier decomposition

    NASA Astrophysics Data System (ADS)

    Tan, Chunyu; Zhang, Liming

    2017-12-01

    This paper presents a novel electrocardiogram (ECG) compression method based on adaptive Fourier decomposition (AFD). AFD is a newly developed signal decomposition approach, which can decompose a signal with fast convergence, and hence reconstruct ECG signals with high fidelity. Unlike most of the high performance algorithms, our method does not make use of any preprocessing operation before compression. Huffman coding is employed for further compression. Validated with 48 ECG recordings of MIT-BIH arrhythmia database, the proposed method achieves the compression ratio (CR) of 35.53 and the percentage root mean square difference (PRD) of 1.47% on average with N = 8 decomposition times and a robust PRD-CR relationship. The results demonstrate that the proposed method has a good performance compared with the state-of-the-art ECG compressors.
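
    The two figures of merit quoted above, compression ratio (CR) and percentage root-mean-square difference (PRD), are straightforward to compute once a reconstruction is available; the AFD itself is not shown here. A sketch follows, with the caveat that PRD definitions vary (some subtract the signal mean or baseline before normalizing).

    ```python
    import numpy as np

    def compression_ratio(n_original_bits, n_compressed_bits):
        """CR: size of the raw record divided by size of the compressed stream."""
        return n_original_bits / n_compressed_bits

    def prd(original, reconstructed):
        """Percentage root-mean-square difference between the original and the
        reconstructed ECG, in percent (no baseline subtraction in this variant)."""
        original = np.asarray(original, dtype=float)
        reconstructed = np.asarray(reconstructed, dtype=float)
        return 100.0 * np.sqrt(np.sum((original - reconstructed) ** 2)
                               / np.sum(original ** 2))

    # Toy check with a synthetic "ECG" and a slightly perturbed reconstruction.
    t = np.linspace(0, 1, 3600)
    ecg = np.sin(2 * np.pi * 1.2 * t) + 0.1 * np.sin(2 * np.pi * 50 * t)
    recon = ecg + 0.01 * np.random.default_rng(0).normal(size=t.size)
    print("PRD = %.2f%%" % prd(ecg, recon))
    print("CR  = %.1f" % compression_ratio(3600 * 11, 1100))   # e.g. 11-bit samples
    ```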

  6. Integrated Network Decompositions and Dynamic Programming for Graph Optimization (INDDGO)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    The INDDGO software package offers a set of tools for finding exact solutions to graph optimization problems via tree decompositions and dynamic programming algorithms. Currently the framework offers serial and parallel (distributed memory) algorithms for finding tree decompositions and solving the maximum weighted independent set problem. The parallel dynamic programming algorithm is implemented on top of the MADNESS task-based runtime.

  7. An inductance Fourier decomposition-based current-hysteresis control strategy for switched reluctance motors

    NASA Astrophysics Data System (ADS)

    Hua, Wei; Qi, Ji; Jia, Meng

    2017-05-01

    Switched reluctance machines (SRMs) have attracted extensive attention due to their inherent advantages, including a simple and robust structure, low cost, excellent fault tolerance and a wide speed range. However, one of the bottlenecks limiting further applications of SRMs is their unfavorable torque ripple, and the consequent noise and vibration, due to the unique doubly-salient structure and pulse-current-based power supply method. In this paper, an inductance Fourier decomposition-based current-hysteresis-control (IFD-CHC) strategy is proposed to reduce the torque ripple of SRMs. After obtaining a nonlinear inductance-current-position model based on Fourier decomposition, reference currents can be calculated from the reference torque and the derived inductance model. Both simulations and experimental results confirm the effectiveness of the proposed strategy.
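
    The core of the strategy is a Fourier-series description of the phase inductance as a function of rotor position. As an illustration of that decomposition step only, the sketch below extracts the first few harmonics of a synthetic inductance profile with an FFT; the profile and the number of harmonics are assumptions, and the paper's torque-to-reference-current mapping is not reproduced.

    ```python
    import numpy as np

    # Synthetic phase-inductance profile of an SRM over one electrical period,
    # sampled at 360 rotor positions (values are illustrative, in henries).
    theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)
    L = 0.012 + 0.006 * np.cos(theta) + 0.0015 * np.cos(2 * theta + 0.3)

    # Fourier decomposition of the inductance: complex spectrum via the real FFT.
    spectrum = np.fft.rfft(L) / L.size
    harmonics = {}
    for k in range(4):
        amplitude = (1 if k == 0 else 2) * np.abs(spectrum[k])
        phase = np.angle(spectrum[k])
        harmonics[k] = (amplitude, phase)
        print(f"harmonic {k}: amplitude {amplitude:.5f} H, phase {phase:+.3f} rad")

    # Truncated series L(theta) ~ sum_k A_k cos(k*theta + phi_k); a current-hysteresis
    # controller could use such a model to map a reference torque to reference currents.
    L_hat = sum(a * np.cos(k * theta + p) for k, (a, p) in harmonics.items())
    print("max reconstruction error:", np.max(np.abs(L - L_hat)))
    ```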

  8. Defect Detection in Textures through the Use of Entropy as a Means for Automatically Selecting the Wavelet Decomposition Level.

    PubMed

    Navarro, Pedro J; Fernández-Isla, Carlos; Alcover, Pedro María; Suardíaz, Juan

    2016-07-27

    This paper presents a robust method for defect detection in textures, entropy-based automatic selection of the wavelet decomposition level (EADL), based on a wavelet reconstruction scheme, for detecting defects in a wide variety of structural and statistical textures. Two main features are presented. One of the new features is an original use of the normalized absolute function value (NABS) calculated from the wavelet coefficients derived at various different decomposition levels in order to identify textures where the defect can be isolated by eliminating the texture pattern in the first decomposition level. The second is the use of Shannon's entropy, calculated over detail subimages, for automatic selection of the band for image reconstruction, which, unlike other techniques, such as those based on the co-occurrence matrix or on energy calculation, provides a lower decomposition level, thus avoiding excessive degradation of the image, allowing a more accurate defect segmentation. A metric analysis of the results of the proposed method with nine different thresholding algorithms determined that selecting the appropriate thresholding method is important to achieve optimum performance in defect detection. As a consequence, several different thresholding algorithms depending on the type of texture are proposed.
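
    A minimal sketch of the entropy criterion follows: decompose an image with a multilevel 2-D wavelet transform (PyWavelets), compute Shannon entropy over the detail sub-images at each level, and select a level from those entropies. The synthetic texture, the 'haar' wavelet and the maximum-entropy selection rule are assumptions for illustration; the paper's NABS feature and exact selection logic are not reproduced.

    ```python
    import numpy as np
    import pywt

    rng = np.random.default_rng(4)

    # Synthetic textured "image": a periodic pattern plus a small localized defect.
    x = np.arange(256)
    texture = np.sin(2 * np.pi * x / 8)[None, :] * np.sin(2 * np.pi * x / 8)[:, None]
    image = texture + 0.1 * rng.normal(size=(256, 256))
    image[120:130, 120:130] += 2.0                    # the "defect"

    def shannon_entropy(coeffs, bins=64):
        hist, _ = np.histogram(np.abs(coeffs).ravel(), bins=bins)
        p = hist / hist.sum()
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    # Multilevel 2-D wavelet decomposition; entropy of the detail sub-images per level.
    max_level = 4
    coeffs = pywt.wavedec2(image, "haar", level=max_level)
    entropies = {}
    for level in range(1, max_level + 1):
        cH, cV, cD = coeffs[-level]                   # detail sub-images at this level
        entropies[level] = np.mean([shannon_entropy(c) for c in (cH, cV, cD)])
        print(f"level {level}: mean detail entropy = {entropies[level]:.2f}")

    selected = max(entropies, key=entropies.get)      # one possible selection rule
    print("selected decomposition level:", selected)
    ```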

  9. Dominant modal decomposition method

    NASA Astrophysics Data System (ADS)

    Dombovari, Zoltan

    2017-03-01

    The paper deals with the automatic decomposition of experimental frequency response functions (FRF's) of mechanical structures. The decomposition of FRF's is based on the Green function representation of free vibratory systems. After the determination of the impulse dynamic subspace, the system matrix is formulated and the poles are calculated directly. By means of the corresponding eigenvectors, the contribution of each element of the impulse dynamic subspace is determined and the sufficient decomposition of the corresponding FRF is carried out. With the presented dominant modal decomposition (DMD) method, the mode shapes, the modal participation vectors and the modal scaling factors are identified using the decomposed FRF's. Analytical example is presented along with experimental case studies taken from machine tool industry.

  10. Significant Performance Enhancement in Asymmetric Supercapacitors based on Metal Oxides, Carbon nanotubes and Neutral Aqueous Electrolyte

    PubMed Central

    Singh, Arvinder; Chandra, Amreesh

    2015-01-01

    Amongst the materials being investigated for supercapacitor electrodes, carbon based materials are most investigated. However, pure carbon materials suffer from inherent physical processes which limit the maximum specific energy and power that can be achieved in an energy storage device. Therefore, use of carbon-based composites with suitable nano-materials is attaining prominence. The synergistic effect between the pseudocapacitive nanomaterials (high specific energy) and carbon (high specific power) is expected to deliver the desired improvements. We report the fabrication of high capacitance asymmetric supercapacitor based on electrodes of composites of SnO2 and V2O5 with multiwall carbon nanotubes and neutral 0.5 M Li2SO4 aqueous electrolyte. The advantages of the fabricated asymmetric supercapacitors are compared with the results published in the literature. The widened operating voltage window is due to the higher over-potential of electrolyte decomposition and a large difference in the work functions of the used metal oxides. The charge balanced device returns the specific capacitance of ~198 F g−1 with corresponding specific energy of ~89 Wh kg−1 at 1 A g−1. The proposed composite systems have shown great potential in fabricating high performance supercapacitors. PMID:26494197

  11. Significant Performance Enhancement in Asymmetric Supercapacitors based on Metal Oxides, Carbon nanotubes and Neutral Aqueous Electrolyte

    NASA Astrophysics Data System (ADS)

    Singh, Arvinder; Chandra, Amreesh

    2015-10-01

    Amongst the materials being investigated for supercapacitor electrodes, carbon based materials are most investigated. However, pure carbon materials suffer from inherent physical processes which limit the maximum specific energy and power that can be achieved in an energy storage device. Therefore, use of carbon-based composites with suitable nano-materials is attaining prominence. The synergistic effect between the pseudocapacitive nanomaterials (high specific energy) and carbon (high specific power) is expected to deliver the desired improvements. We report the fabrication of high capacitance asymmetric supercapacitor based on electrodes of composites of SnO2 and V2O5 with multiwall carbon nanotubes and neutral 0.5 M Li2SO4 aqueous electrolyte. The advantages of the fabricated asymmetric supercapacitors are compared with the results published in the literature. The widened operating voltage window is due to the higher over-potential of electrolyte decomposition and a large difference in the work functions of the used metal oxides. The charge balanced device returns the specific capacitance of ~198 F g-1 with corresponding specific energy of ~89 Wh kg-1 at 1 A g-1. The proposed composite systems have shown great potential in fabricating high performance supercapacitors.

  12. About decomposition approach for solving the classification problem

    NASA Astrophysics Data System (ADS)

    Andrianova, A. A.

    2016-11-01

    This article describes the application of decomposition methods to the binary classification problem of constructing a linear classifier based on the Support Vector Machine (SVM) method. Applying decomposition reduces the volume of calculations, in particular because it opens the possibility of building parallel versions of the algorithm, which is a very important advantage for solving problems with big data. The results of computational experiments conducted using the decomposition approach are analysed. The experiments use a known data set for the binary classification problem.

  13. Exploring Patterns of Soil Organic Matter Decomposition with Students through the Global Decomposition Project (GDP) and the Interactive Model of Leaf Decomposition (IMOLD)

    NASA Astrophysics Data System (ADS)

    Steiner, S. M.; Wood, J. H.

    2015-12-01

    As decomposition rates are affected by climate change, understanding crucial soil interactions that affect plant growth and decomposition becomes a vital part of contributing to the students' knowledge base. The Global Decomposition Project (GDP) is designed to introduce and educate students about soil organic matter and decomposition through a standardized protocol for collecting, reporting, and sharing data. The Interactive Model of Leaf Decomposition (IMOLD) utilizes animations and modeling to learn about the carbon cycle, leaf anatomy, and the role of microbes in decomposition. Paired together, IMOLD teaches the background information and allows simulation of numerous scenarios, and the GDP is a data collection protocol that allows students to gather usable measurements of decomposition in the field. Our presentation will detail how the GDP protocol works, how to obtain or make the materials needed, and how results will be shared. We will also highlight learning objectives from the three animations of IMOLD, and demonstrate how students can experiment with different climates and litter types using the interactive model to explore a variety of decomposition scenarios. The GDP demonstrates how scientific methods can be extended to educate broader audiences, and data collected by students can provide new insight into global patterns of soil decomposition. Using IMOLD, students will gain a better understanding of carbon cycling in the context of litter decomposition, as well as learn to pose questions they can answer with an authentic computer model. Using the GDP protocols and IMOLD provide a pathway for scientists and educators to interact and reach meaningful education and research goals.

  14. TE/TM decomposition of electromagnetic sources

    NASA Technical Reports Server (NTRS)

    Lindell, Ismo V.

    1988-01-01

    Three methods are given by which bounded EM sources can be decomposed into two parts radiating transverse electric (TE) and transverse magnetic (TM) fields with respect to a given constant direction in space. The theory applies source equivalence and nonradiating source concepts, which lead to decomposition methods based on a recursive formula or two differential equations for the determination of the TE and TM components of the original source. Decompositions for a dipole in terms of point, line, and plane sources are studied in detail. The planar decomposition is seen to match to an earlier result given by Clemmow (1963). As an application of the point decomposition method, it is demonstrated that the general exact image expression for the Sommerfeld half-space problem, previously derived through heuristic reasoning, can be more straightforwardly obtained through the present decomposition method.

  15. Thermal Decomposition Behavior of Hydroxytyrosol (HT) in Nitrogen Atmosphere Based on TG-FTIR Methods.

    PubMed

    Tu, Jun-Ling; Yuan, Jiao-Jiao

    2018-02-13

    The thermal decomposition behavior of olive hydroxytyrosol (HT) was first studied using thermogravimetry (TG). Chemical bond cleavage and evolved gas analysis during the thermal decomposition of HT were also investigated using thermogravimetry coupled with infrared spectroscopy (TG-FTIR). Thermogravimetry-differential thermogravimetry (TG-DTG) curves revealed that the thermal decomposition of HT began at 262.8 °C and ended at 409.7 °C with a main mass loss. A high heating rate (over 20 K·min-1) was shown to restrain the thermal decomposition of HT, resulting in an obvious thermal hysteresis. Furthermore, a thermal decomposition kinetics investigation of HT indicated that the non-isothermal decomposition mechanism was one-dimensional diffusion (D1), with integral form g(x) = x² and differential form f(x) = 1/(2x). Four combined approaches were employed to calculate the activation energy (E = 128.50 kJ·mol-1) and the Arrhenius preexponential factor (ln A = 24.39 min-1). In addition, a tentative mechanism of HT thermal decomposition was developed. The results provide a theoretical reference for the potential thermal stability of HT.
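
    Given the reported kinetic parameters, the Arrhenius expression k(T) = A·exp(-E/(RT)) converts them into rate constants at temperatures of interest. The sketch below evaluates it at the onset and end temperatures of the main mass loss; it is only an illustration of how the reported E and ln A translate into k values.

    ```python
    import numpy as np

    R = 8.314          # gas constant, J mol^-1 K^-1
    E = 128.50e3       # J mol^-1, activation energy reported for HT decomposition
    ln_A = 24.39       # natural log of the preexponential factor, A in min^-1

    def arrhenius_k(T_kelvin):
        """Rate constant k(T) = A * exp(-E / (R T)), in min^-1."""
        return np.exp(ln_A) * np.exp(-E / (R * T_kelvin))

    for T_celsius in (262.8, 330.0, 409.7):        # onset, mid-range, end of mass loss
        T = T_celsius + 273.15
        print(f"T = {T_celsius:6.1f} C  ->  k = {arrhenius_k(T):.3e} min^-1")
    ```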

  16. Double Bounce Component in Cross-Polarimetric SAR from a New Scattering Target Decomposition

    NASA Astrophysics Data System (ADS)

    Hong, Sang-Hoon; Wdowinski, Shimon

    2013-08-01

    Common vegetation scattering theories assume that the Synthetic Aperture Radar (SAR) cross-polarization (cross-pol) signal represents solely volume scattering. We found this assumption incorrect based on SAR phase measurements acquired over the south Florida Everglades wetlands, which indicate that the cross-pol radar signal often samples the water surface beneath the vegetation. Based on these new observations, we propose that the cross-pol measurement consists of both volume scattering and double bounce components. The simplest multi-bounce scattering mechanism that generates a cross-pol signal occurs by rotated dihedrals. Thus, we use the rotated dihedral mechanism with a probability density function to revise some of the vegetation scattering theories and develop a three-component decomposition algorithm with single bounce, double bounce from both co-pol and cross-pol, and volume scattering components. We applied the new decomposition analysis to both urban and rural environments using Radarsat-2 quad-pol datasets. The decomposition of San Francisco's urban area shows higher double bounce scattering and reduced volume scattering compared to other common three-component decompositions. The decomposition of the rural Everglades area shows that the relations between volume and cross-pol double bounce scattering depend on the vegetation density. The new decomposition can be useful for better understanding vegetation scattering behavior over various surfaces and for the estimation of above-ground biomass using SAR observations.

  17. Aging-driven decomposition in zolpidem hemitartrate hemihydrate and the single-crystal structure of its decomposition products.

    PubMed

    Vega, Daniel R; Baggio, Ricardo; Roca, Mariana; Tombari, Dora

    2011-04-01

    The "aging-driven" decomposition of zolpidem hemitartrate hemihydrate (form A) has been followed by X-ray powder diffraction (XRPD), and the crystal and molecular structures of the decomposition products studied by single-crystal methods. The process is very similar to the "thermally driven" one, recently described in the literature for form E (Halasz and Dinnebier. 2010. J Pharm Sci 99(2): 871-874), resulting in a two-phase system: the neutral free base (common to both decomposition processes) and, in the present case, a novel zolpidem tartrate monohydrate, unique to the "aging-driven" decomposition. Our room-temperature single-crystal analysis gives for the free base comparable results as the high-temperature XRPD ones already reported by Halasz and Dinnebier: orthorhombic, Pcba, a = 9.6360(10) Å, b = 18.2690(5) Å, c = 18.4980(11) Å, and V = 3256.4(4) Å(3) . The unreported zolpidem tartrate monohydrate instead crystallizes in monoclinic P21 , which, for comparison purposes, we treated in the nonstandard setting P1121 with a = 20.7582(9) Å, b = 15.2331(5) Å, c = 7.2420(2) Å, γ = 90.826(2)°, and V = 2289.73(14) Å(3) . The structure presents two complete moieties in the asymmetric unit (z = 4, z' = 2). The different phases obtained in both decompositions are readily explained, considering the diverse genesis of both processes. Copyright © 2010 Wiley-Liss, Inc.

  18. Importance of Past Human and Natural Disturbance in Present-Day Net Ecosystem Productivity

    NASA Astrophysics Data System (ADS)

    Felzer, B. S.; Phelps, P.

    2014-12-01

    Gridded datasets of Net Ecosystem Exchange derived from eddy covariance and remote sensing measurements provide a means of validating Net Ecosystem Productivity (NEP, opposite of NEE) from terrestrial ecosystem models. While most forested regions in the U.S. are observed to be moderate to strong carbon sinks, models not including human or natural disturbances will tend to be more carbon neutral, which is expected of mature ecosystems. We have developed the Terrestrial Ecosystems Model Hydro version (TEM-Hydro) to include both human and natural disturbances to compare against gridded NEP datasets. Human disturbances are based on the Hurtt et al. (2006) land use transition dataset and include transient agricultural (crops and pasture) conversion and abandonment and timber harvest. We include natural disturbances of storms and fires based on stochastic return intervals. Tropical storm and hurricane return intervals are based on Zheng et al. (2009) and occur only along the U.S. Atlantic and Gulf coasts. Fire return intervals are based on LANDFIRE Rapid Assessment Vegetation Models and vegetation types from the Hurtt dataset. We are running three experiments with TEM-Hydro from 1700-2011 for the conterminous U.S.: potential vegetation (POT), human disturbance only (agriculture and timber harvest, LULC), and human plus natural disturbance (agriculture, timber harvest, storms, and fire, DISTURB). The goal is to compare our NEP values to those obtained by FLUXNET-MTE (Jung et al. 2009) from 1982-2008 and ECMOD (Xiao et al., 2008) from 2000-2006 for different plant functional types (PFTs) within the conterminous U.S. Preliminary results show that, for the entire U.S., potential vegetation yields an NEP of 10.8 g C m-2 yr-1 vs 128.1 g C m-2 yr-1 for LULC and 89.8 g C m-2 yr-1 for DISTURB from 1982-2008. The effect of regrowth following agricultural and timber harvest disturbance therefore contributes substantially to the present-day carbon sink, while stochastic storms and fires have a negative effect on NEP. Even though the current NEP reflects the carbon uptake from regrowth, a full carbon accounting would also include the carbon released to the atmosphere during disturbance or carbon lost to decomposition of agricultural or timber products.

  19. Assessing the effect of different treatments on decomposition rate of dairy manure.

    PubMed

    Khalil, Tariq M; Higgins, Stewart S; Ndegwa, Pius M; Frear, Craig S; Stöckle, Claudio O

    2016-11-01

    Confined animal feeding operations (CAFOs) contribute to greenhouse gas emission, but the magnitude of these emissions as a function of operation size, infrastructure, and manure management are difficult to assess. Modeling is a viable option to estimate gaseous emission and nutrient flows from CAFOs. These models use a decomposition rate constant for carbon mineralization. However, this constant is usually determined assuming a homogenous mix of manure, ignoring the effects of emerging manure treatments. The aim of this study was to measure and compare the decomposition rate constants of dairy manure in single and three-pool decomposition models, and to develop an empirical model based on chemical composition of manure for prediction of a decomposition rate constant. Decomposition rate constants of manure before and after an anaerobic digester (AD), following coarse fiber separation, and fine solids removal were determined under anaerobic conditions for single and three-pool decomposition models. The decomposition rates of treated manure effluents differed significantly from untreated manure for both single and three-pool decomposition models. In the single-pool decomposition model, AD effluent containing only suspended solids had a relatively high decomposition rate of 0.060 d-1, while liquid with coarse fiber and fine solids removed had the lowest rate of 0.013 d-1. In the three-pool decomposition model, fast and slow decomposition rate constants (0.25 d-1 and 0.016 d-1, respectively) of untreated AD influent were also significantly different from treated manure fractions. A regression model to predict the decomposition rate of treated dairy manure fitted well (R² = 0.83) to observed data. Copyright © 2016 Elsevier Ltd. All rights reserved.
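
    Decomposition rate constants of this kind are typically obtained by fitting first-order models to incubation data. The sketch below fits both a single-pool and a three-pool model with scipy's curve_fit; the synthetic "observations" are generated from assumed parameters and are not the study's measurements.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(5)

    def single_pool(t, k):
        """Fraction of degradable carbon remaining under first-order decay."""
        return np.exp(-k * t)

    def three_pool(t, k_fast, k_slow, f_fast, f_slow):
        """Fast, slow and (implicitly) resistant pools; pool fractions sum to <= 1."""
        f_resistant = 1.0 - f_fast - f_slow
        return f_fast * np.exp(-k_fast * t) + f_slow * np.exp(-k_slow * t) + f_resistant

    # Synthetic incubation data (days, fraction of initial C remaining).
    t = np.arange(0, 120, 5, dtype=float)
    obs = three_pool(t, 0.25, 0.016, 0.3, 0.5) + 0.01 * rng.normal(size=t.size)

    k1, _ = curve_fit(single_pool, t, obs, p0=[0.05])
    p3, _ = curve_fit(three_pool, t, obs, p0=[0.2, 0.01, 0.3, 0.5],
                      bounds=([0, 0, 0, 0], [1, 1, 1, 1]))
    print("single-pool k  :", np.round(k1, 4), "d^-1")
    print("three-pool fit :", np.round(p3, 4), "(k_fast, k_slow, f_fast, f_slow)")
    ```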

  20. Carbon Turnover in Organic Soils of Central Saskatchewan: Insights From a Core-Based Decomposition Study

    NASA Astrophysics Data System (ADS)

    Bauer, I. E.; Bhatti, J. S.; Hurdle, P. A.

    2004-05-01

    Field-based decomposition studies that examine several site types tend to use one of two approaches: Either the decay of one (or more) standard litters is examined in all sites, or litters native to each site type are incubated in the environment they came from. The first of these approaches examines effects of environment on decay, whereas the latter determines rates of mass loss characteristic of each site type. Both methods are usually restricted to a limited number of litters, and neither allows for a direct estimate of ecosystem-level parameters (e.g. heterotrophic respiration). In order to examine changes in total organic matter turnover along forest - peatland gradients in central Saskatchewan, we measured mass loss of native peat samples from six different depths (surface to 50 cm) over one year. Samples were obtained by sectioning short peat cores, and cores and samples were returned to their original position after determining the initial weight of each sample. A standard litter (birch popsicle sticks) was included at each depth, and water tables and soil temperature were monitored over the growing season. After one year, average mass loss in surface peat samples was similar to published values from litter bag studies, ranging from 12 to 21 percent in the environments examined. Native peat mass loss showed few systematic differences between sites or along the forest - peatland gradient, with over 60 percent of the total variability explained by depth alone. Mass loss of standard litter samples was highly variable, with high values in areas at the transition between upland and peatland that may have experienced recent disturbance. In combination, these results suggest strong litter-based control over natural rates of organic matter turnover. Estimates of heterotrophic respiration calculated from the mass loss data are higher than values obtained by eddy covariance or static chamber techniques, probably reflecting loss of material during the handling of samples or increased mass loss from manipulated profiles. Nevertheless, the core-based method is a useful tool in examining carbon dynamics of organic soils, since it provides a good relative index of organic matter turnover, and allows for separate examination of environmental and litter-based effects.

  1. Adaptive Fourier decomposition based R-peak detection for noisy ECG Signals.

    PubMed

    Ze Wang; Chi Man Wong; Feng Wan

    2017-07-01

    An adaptive Fourier decomposition (AFD) based R-peak detection method is proposed for noisy ECG signals. Although many QRS detection methods have been proposed in the literature, most of them require high signal quality. The proposed method extracts the R waves from the energy domain using the AFD and determines the R-peak locations based on the key decomposition parameters, achieving denoising and R-peak detection at the same time. Validated with clinical ECG signals from the MIT-BIH Arrhythmia Database, the proposed method shows better performance than the Pan-Tompkins (PT) algorithm, both for the native PT and for the PT combined with a denoising step.

  2. [Biogeochemical cycles in natural forest and conifer plantations in the high mountains of Colombia].

    PubMed

    León, Juan Diego; González, María Isabel; Gallardo, Juan Fernando

    2011-12-01

    Plant litter production and decomposition are two important processes in forest ecosystems, since they provide the main organic matter input to soil and regulate nutrient cycling. With the aim to study these processes, litterfall, standing litter and nutrient return were studied for three years in an oak forest (Quercus humboldtii), pine (Pinus patula) and cypress (Cupressus lusitanica) plantations, located in highlands of the Central Cordillera of Colombia. Evaluation methods included: fine litter collection at fortnightly intervals using litter traps; the litter layer samples at the end of each sampling year and chemical analyses of both litterfall and standing litter. Fine litter fall observed was similar in oak forest (7.5 Mg ha/y) and in pine (7.8 Mg ha/y), but very low in cypress (3.5 Mg ha/y). Litter standing was 1.76, 1.73 and 1.3 Mg ha/y in oak, pine and cypress, respectively. The mean residence time of the standing litter was of 3.3 years for cypress, 2.1 years for pine and 1.8 years for oak forests. In contrast, the total amount of retained elements (N, P, S, Ca, Mg, K, Cu, Fe, Mn and Zn) in the standing litter was higher in pine (115 kg/ha), followed by oak (78 kg/ha) and cypress (24 kg/ha). Oak forests showed the lowest mean residence time of nutrients and the highest nutrients return to the soil as a consequence of a faster decomposition. Thus, a higher nutrient supply to soils from oaks than from tree plantations, seems to be an ecological advantage for recovering and maintaining the main ecosystem functioning features, which needs to be taken into account in restoration programs in this highly degraded Andean mountains.

  3. Do soil organisms affect aboveground litter decomposition in the semiarid Patagonian steppe, Argentina?

    PubMed

    Araujo, Patricia I; Yahdjian, Laura; Austin, Amy T

    2012-01-01

    Surface litter decomposition in arid and semiarid ecosystems is often faster than predicted by climatic parameters such as annual precipitation or evapotranspiration, or based on standard indices of litter quality such as lignin or nitrogen concentrations. Abiotic photodegradation has been demonstrated to be an important factor controlling aboveground litter decomposition in aridland ecosystems, but soil fauna, particularly macrofauna such as termites and ants, have also been identified as key players affecting litter mass loss in warm deserts. Our objective was to quantify the importance of soil organisms on surface litter decomposition in the Patagonian steppe in the absence of photodegradative effects, to establish the relative importance of soil organisms on rates of mass loss and nitrogen release. We estimated the relative contribution of soil fauna and microbes to litter decomposition of a dominant grass using litterboxes with variable mesh sizes that excluded groups of soil fauna based on size class (10, 2, and 0.01 mm), which were placed beneath shrub canopies. We also employed chemical repellents (naphthalene and fungicide). The exclusion of macro- and mesofauna had no effect on litter mass loss over 3 years (P = 0.36), as litter decomposition was similar in all soil fauna exclusions and naphthalene-treated litter. In contrast, reduction of fungal activity significantly inhibited litter decomposition (P < 0.001). Although soil fauna have been mentioned as a key control of litter decomposition in warm deserts, biogeographic legacies and temperature limitation may constrain the importance of these organisms in temperate aridlands, particularly in the southern hemisphere.

  4. Better Decomposition Heuristics for the Maximum-Weight Connected Graph Problem Using Betweenness Centrality

    NASA Astrophysics Data System (ADS)

    Yamamoto, Takanori; Bannai, Hideo; Nagasaki, Masao; Miyano, Satoru

    We present new decomposition heuristics for finding the optimal solution for the maximum-weight connected graph problem, which is known to be NP-hard. Previous optimal algorithms for solving the problem decompose the input graph into subgraphs using heuristics based on node degree. We propose new heuristics based on betweenness centrality measures, and show through computational experiments that our new heuristics tend to reduce the number of subgraphs in the decomposition, and therefore could lead to the reduction in computational time for finding the optimal solution. The method is further applied to analysis of biological pathway data.
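
    As a rough illustration of centrality-guided decomposition (not the authors' exact heuristic), the sketch below repeatedly removes the node with the highest betweenness centrality until every connected component is small enough to hand to an exact solver.

    ```python
    import networkx as nx

    def decompose_by_betweenness(graph, max_component_size):
        """Split a graph into components no larger than max_component_size by
        repeatedly removing the node with the highest betweenness centrality."""
        g = graph.copy()
        removed = []
        while max(len(c) for c in nx.connected_components(g)) > max_component_size:
            centrality = nx.betweenness_centrality(g)
            hub = max(centrality, key=centrality.get)
            removed.append(hub)
            g.remove_node(hub)
        return [g.subgraph(c).copy() for c in nx.connected_components(g)], removed

    # Toy graph standing in for a biological pathway network: two dense clusters
    # joined by a short path, so the path nodes carry the highest betweenness.
    graph = nx.barbell_graph(8, 2)
    subgraphs, cut_nodes = decompose_by_betweenness(graph, max_component_size=10)
    print("removed nodes:", cut_nodes)
    print("component sizes:", [len(s) for s in subgraphs])
    ```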

  5. A copyright protection scheme for digital images based on shuffled singular value decomposition and visual cryptography.

    PubMed

    Devi, B Pushpa; Singh, Kh Manglem; Roy, Sudipta

    2016-01-01

    This paper proposes a new watermarking algorithm based on the shuffled singular value decomposition and the visual cryptography for copyright protection of digital images. It generates the ownership and identification shares of the image based on visual cryptography. It decomposes the image into low and high frequency sub-bands. The low frequency sub-band is further divided into blocks of same size after shuffling it and then the singular value decomposition is applied to each randomly selected block. Shares are generated by comparing one of the elements in the first column of the left orthogonal matrix with its corresponding element in the right orthogonal matrix of the singular value decomposition of the block of the low frequency sub-band. The experimental results show that the proposed scheme clearly verifies the copyright of the digital images, and is robust to withstand several image processing attacks. Comparison with the other related visual cryptography-based algorithms reveals that the proposed method gives better performance. The proposed method is especially resilient against the rotation attack.

  6. In Situ Probes of Capture and Decomposition of Chemical Warfare Agent Simulants by Zr-Based Metal Organic Frameworks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Plonka, Anna M.; Wang, Qi; Gordon, Wesley O.

    Recently, Zr-based metal organic frameworks (MOFs) were shown to be among the fastest catalysts of nerve-agent hydrolysis in solution. Here, we report a detailed study of the adsorption and decomposition of a nerve-agent simulant, dimethyl methylphosphonate (DMMP), on UiO-66, UiO-67, MOF-808, and NU-1000 using synchrotron-based X-ray powder diffraction, X-ray absorption, and infrared spectroscopy, which reveals key aspects of the reaction mechanism. The diffraction measurements indicate that all four MOFs adsorb DMMP (introduced at atmospheric pressures through a flow of helium or air) within the pore space. In addition, the combination of X-ray absorption and infrared spectra suggests direct coordination of DMMP to the Zr6 cores of all MOFs, which ultimately leads to decomposition to phosphonate products. Our experimental probes into the mechanism of adsorption and decomposition of chemical warfare agent simulants on Zr-based MOFs open new opportunities in rational design of new and superior decontamination materials.

  7. In Situ Probes of Capture and Decomposition of Chemical Warfare Agent Simulants by Zr-Based Metal Organic Frameworks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Plonka, Anna M.; Wang, Qi; Gordon, Wesley O.

    Zr-based metal organic frameworks (MOFs) have been recently shown to be among the fastest catalysts of nerve-agent hydrolysis in solution. We report a detailed study of the adsorption and decomposition of a nerve-agent simulant, dimethyl methylphosphonate (DMMP), on UiO-66, UiO-67, MOF-808, and NU-1000 using synchrotron-based X-ray powder diffraction, X-ray absorption, and infrared spectroscopy, which reveals key aspects of the reaction mechanism. The diffraction measurements indicate that all four MOFs adsorb DMMP (introduced at atmospheric pressures through a flow of helium or air) within the pore space. In addition, the combination of X-ray absorption and infrared spectra suggests direct coordination of DMMP to the Zr6 cores of all MOFs, which ultimately leads to decomposition to phosphonate products. These experimental probes into the mechanism of adsorption and decomposition of chemical warfare agent simulants on Zr-based MOFs open new opportunities in rational design of new and superior decontamination materials.

  8. Phase unwrapping with graph cuts optimization and dual decomposition acceleration for 3D high-resolution MRI data.

    PubMed

    Dong, Jianwu; Chen, Feng; Zhou, Dong; Liu, Tian; Yu, Zhaofei; Wang, Yi

    2017-03-01

    Existence of low SNR regions and rapid-phase variations pose challenges to spatial phase unwrapping algorithms. Global optimization-based phase unwrapping methods are widely used, but are significantly slower than greedy methods. In this paper, dual decomposition acceleration is introduced to speed up a three-dimensional graph cut-based phase unwrapping algorithm. The phase unwrapping problem is formulated as a global discrete energy minimization problem, whereas the technique of dual decomposition is used to increase the computational efficiency by splitting the full problem into overlapping subproblems and enforcing the congruence of overlapping variables. Using three dimensional (3D) multiecho gradient echo images from an agarose phantom and five brain hemorrhage patients, we compared this proposed method with an unaccelerated graph cut-based method. Experimental results show up to 18-fold acceleration in computation time. Dual decomposition significantly improves the computational efficiency of 3D graph cut-based phase unwrapping algorithms. Magn Reson Med 77:1353-1358, 2017. © 2016 International Society for Magnetic Resonance in Medicine.

  9. In Situ Probes of Capture and Decomposition of Chemical Warfare Agent Simulants by Zr-Based Metal Organic Frameworks

    DOE PAGES

    Plonka, Anna M.; Wang, Qi; Gordon, Wesley O.; ...

    2016-12-30

    Recently, Zr-based metal organic frameworks (MOFs) were shown to be among the fastest catalysts of nerve-agent hydrolysis in solution. Here, we report a detailed study of the adsorption and decomposition of a nerve-agent simulant, dimethyl methylphosphonate (DMMP), on UiO-66, UiO-67, MOF-808, and NU-1000 using synchrotron-based X-ray powder diffraction, X-ray absorption, and infrared spectroscopy, which reveals key aspects of the reaction mechanism. The diffraction measurements indicate that all four MOFs adsorb DMMP (introduced at atmospheric pressures through a flow of helium or air) within the pore space. In addition, the combination of X-ray absorption and infrared spectra suggests direct coordination of DMMP to the Zr6 cores of all MOFs, which ultimately leads to decomposition to phosphonate products. Our experimental probes into the mechanism of adsorption and decomposition of chemical warfare agent simulants on Zr-based MOFs open new opportunities in rational design of new and superior decontamination materials.

  10. A Novel Multilevel-SVD Method to Improve Multistep Ahead Forecasting in Traffic Accidents Domain.

    PubMed

    Barba, Lida; Rodríguez, Nibaldo

    2017-01-01

    A novel method is proposed for decomposing a nonstationary time series into components of low and high frequency. The method is based on Multilevel Singular Value Decomposition (MSVD) of a Hankel matrix. The decomposition is used to improve the forecasting accuracy of Multiple Input Multiple Output (MIMO) linear and nonlinear models. Three time series from the traffic accidents domain are used. They represent the number of persons with injuries in traffic accidents in Santiago, Chile. The data were continuously collected by the Chilean Police and were weekly sampled from 2000:1 to 2014:12. The performance of MSVD is compared with the decomposition into components of low and high frequency of a commonly accepted method based on the Stationary Wavelet Transform (SWT). SWT in conjunction with an Autoregressive model (SWT + MIMO-AR) and SWT in conjunction with an Autoregressive Neural Network (SWT + MIMO-ANN) were evaluated. The empirical results show that the best accuracy was achieved by the forecasting model based on the proposed decomposition method MSVD, in comparison with the forecasting models based on SWT.
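
    A minimal single-level version of the Hankel/SVD idea is sketched below: embed the series in a Hankel matrix, keep the dominant singular triplet as the low-frequency component (recovered by anti-diagonal averaging), and treat the residual as the high-frequency component. The window length and the rank-1 truncation are assumptions; the full multilevel scheme repeats this decomposition on the resulting components.

    ```python
    import numpy as np

    def hankel_svd_split(series, window):
        """One SVD level: return (low_frequency, high_frequency) components of a
        1-D series using a Hankel embedding of the given window length."""
        x = np.asarray(series, dtype=float)
        n = x.size
        cols = n - window + 1
        H = np.column_stack([x[i:i + window] for i in range(cols)])   # Hankel matrix
        U, s, Vt = np.linalg.svd(H, full_matrices=False)
        H1 = s[0] * np.outer(U[:, 0], Vt[0])          # rank-1 (dominant) approximation
        low = np.zeros(n)
        counts = np.zeros(n)
        for j in range(cols):                         # average over anti-diagonals
            low[j:j + window] += H1[:, j]
            counts[j:j + window] += 1
        low /= counts
        return low, x - low

    rng = np.random.default_rng(6)
    weeks = np.arange(780)                            # roughly 15 years of weekly counts
    series = 50 + 10 * np.sin(2 * np.pi * weeks / 52) + 5 * rng.normal(size=weeks.size)
    low, high = hankel_svd_split(series, window=52)
    print("std of low/high components:", low.std().round(2), high.std().round(2))
    ```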

  11. A Novel Multilevel-SVD Method to Improve Multistep Ahead Forecasting in Traffic Accidents Domain

    PubMed Central

    Rodríguez, Nibaldo

    2017-01-01

    A novel method is proposed for decomposing a nonstationary time series into components of low and high frequency. The method is based on Multilevel Singular Value Decomposition (MSVD) of a Hankel matrix. The decomposition is used to improve the forecasting accuracy of Multiple Input Multiple Output (MIMO) linear and nonlinear models. Three time series from the traffic accidents domain are used. They represent the number of persons with injuries in traffic accidents in Santiago, Chile. The data were continuously collected by the Chilean Police and were weekly sampled from 2000:1 to 2014:12. The performance of MSVD is compared with the decomposition into components of low and high frequency of a commonly accepted method based on the Stationary Wavelet Transform (SWT). SWT in conjunction with an Autoregressive model (SWT + MIMO-AR) and SWT in conjunction with an Autoregressive Neural Network (SWT + MIMO-ANN) were evaluated. The empirical results show that the best accuracy was achieved by the forecasting model based on the proposed decomposition method MSVD, in comparison with the forecasting models based on SWT. PMID:28261267

  12. FWT2D: A massively parallel program for frequency-domain full-waveform tomography of wide-aperture seismic data—Part 1: Algorithm

    NASA Astrophysics Data System (ADS)

    Sourbier, Florent; Operto, Stéphane; Virieux, Jean; Amestoy, Patrick; L'Excellent, Jean-Yves

    2009-03-01

    This is the first paper in a two-part series that describes a massively parallel code that performs 2D frequency-domain full-waveform inversion of wide-aperture seismic data for imaging complex structures. Full-waveform inversion methods, namely quantitative seismic imaging methods based on the resolution of the full wave equation, are computationally expensive. Therefore, designing efficient algorithms which take advantage of parallel computing facilities is critical for the appraisal of these approaches when applied to representative case studies and for further improvements. Full-waveform modelling requires the resolution of a large sparse system of linear equations which is performed with the massively parallel direct solver MUMPS for efficient multiple-shot simulations. Efficiency of the multiple-shot solution phase (forward/backward substitutions) is improved by using the BLAS3 library. The inverse problem relies on a classic local optimization approach implemented with a gradient method. The direct solver returns the multiple-shot wavefield solutions distributed over the processors according to a domain decomposition driven by the distribution of the LU factors. The domain decomposition of the wavefield solutions is used to compute in parallel the gradient of the objective function and the diagonal Hessian, this latter providing a suitable scaling of the gradient. The algorithm allows one to test different strategies for multiscale frequency inversion ranging from successive mono-frequency inversion to simultaneous multifrequency inversion. These different inversion strategies will be illustrated in the following companion paper. The parallel efficiency and the scalability of the code will also be quantified.

  13. Microbial decomposers not constrained by climate history along a Mediterranean climate gradient in southern California.

    PubMed

    Baker, Nameer R; Khalili, Banafshe; Martiny, Jennifer B H; Allison, Steven D

    2018-06-01

    Microbial decomposers mediate the return of CO 2 to the atmosphere by producing extracellular enzymes to degrade complex plant polymers, making plant carbon available for metabolism. Determining if and how these decomposer communities are constrained in their ability to degrade plant litter is necessary for predicting how carbon cycling will be affected by future climate change. We analyzed mass loss, litter chemistry, microbial biomass, extracellular enzyme activities, and enzyme temperature sensitivities in grassland litter transplanted along a Mediterranean climate gradient in southern California. Microbial community composition was manipulated by caging litter within bags made of nylon membrane that prevent microbial immigration. To test whether grassland microbes were constrained by climate history, half of the bags were inoculated with local microbial communities native to each gradient site. We determined that temperature and precipitation likely interact to limit microbial decomposition in the extreme sites along our gradient. Despite their unique climate history, grassland microbial communities were not restricted in their ability to decompose litter under different climate conditions across the gradient, although microbial communities across our gradient may be restricted in their ability to degrade different types of litter. We did find some evidence that local microbial communities were optimized based on climate, but local microbial taxa that proliferated after inoculation into litterbags did not enhance litter decomposition. Our results suggest that microbial community composition does not constrain C-cycling rates under climate change in our system, but optimization to particular resource environments may act as more general constraints on microbial communities. © 2018 by the Ecological Society of America.

  14. On the hadron mass decomposition

    NASA Astrophysics Data System (ADS)

    Lorcé, Cédric

    2018-02-01

    We argue that the standard decompositions of the hadron mass overlook pressure effects, and hence should be interpreted with great care. Based on the semiclassical picture, we propose a new decomposition that properly accounts for these pressure effects. Because of Lorentz covariance, we stress that the hadron mass decomposition automatically comes along with a stability constraint, which we discuss for the first time. We show also that if a hadron is seen as made of quarks and gluons, one cannot decompose its mass into more than two contributions without running into trouble with the consistency of the physical interpretation. In particular, the so-called quark mass and trace anomaly contributions appear to be purely conventional. Based on the current phenomenological values, we find that on average quarks exert a repulsive force inside nucleons, exactly balanced by the attractive force of the gluons.

  15. Primary decomposition of zero-dimensional ideals over finite fields

    NASA Astrophysics Data System (ADS)

    Gao, Shuhong; Wan, Daqing; Wang, Mingsheng

    2009-03-01

    A new algorithm is presented for computing primary decomposition of zero-dimensional ideals over finite fields. Like Berlekamp's algorithm for univariate polynomials, the new method is based on the invariant subspace of the Frobenius map acting on the quotient algebra. The dimension of the invariant subspace equals the number of primary components, and a basis of the invariant subspace yields a complete decomposition. Unlike previous approaches for decomposing multivariate polynomial systems, the new method does not need primality testing nor any generic projection, instead it reduces the general decomposition problem directly to root finding of univariate polynomials over the ground field. Also, it is shown how Groebner basis structure can be used to get partial primary decomposition without any root finding.

  16. Increased nitrogen availability counteracts climatic change feedback from increased temperature on boreal forest soil organic matter degradation

    NASA Astrophysics Data System (ADS)

    Erhagen, Bjorn; Nilsson, Mats; Oquist, Mats; Ilstedt, Ulrik; Sparrman, Tobias; Schleucher, Jurgen

    2014-05-01

    Over the last century, the greenhouse gas concentrations in the atmosphere have increased dramatically, greatly exceeding pre-industrial levels that had prevailed for the preceding 420 000 years. At the same time the annual anthropogenic contribution to the global terrestrial nitrogen cycle has increased and currently exceeds natural inputs. Both temperature and nitrogen levels have profound effects on the global carbon cycle including the rate of organic matter decomposition, which is the most important biogeochemical process that returns CO2 to the atmosphere. Here we show for the first time that increasing the availability of nitrogen not only directly affects the rate of organic matter decomposition but also significantly affects its temperature dependence. We incubated litter and soil organic matter from a long-term (40 years) nitrogen fertilization experiment in a boreal Scots pine (Pinus silvestris L.) forest at different temperatures and determined the temperature dependence of the decomposition of the sample's organic matter in each case. Nitrogen fertilization did not affect the temperature sensitivity (Q10) of the decomposition of fresh plant litter but strongly reduced that for humus soil organic matter. The Q10 response of the 0-3 cm soil layer decreased from 2.5±0.35 to an average of 1.9±0.21 over all nitrogen treatments, and from 2.2±0.19 to 1.6±0.16 in response to the most intense nitrogen fertilization treatment in the 4-7 cm soil layer. Long-term nitrogen additions also significantly affected the organic chemical composition (as determined by 13C CP-MAS NMR spectroscopy) of the soil organic matter. These changes in chemical composition contributed significantly (p<0.05) to the reduced Q10 response. These new insights into the relationship between nitrogen availability and the temperature sensitivity of organic matter decomposition will be important for understanding and predicting how increases in global temperature and rising anthropogenic nitrogen inputs will affect the global carbon cycle and the associated climatic feedback processes.
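
    The Q10 values quoted above translate directly into multiplicative rate responses to warming via rate ratio = Q10^(ΔT/10). The sketch below applies the reported control and fertilized Q10 values for the 0-3 cm layer to a hypothetical 4 °C warming, and shows the inverse calculation from a pair of hypothetical incubation rates.

    ```python
    def rate_ratio(q10, delta_t):
        """Factor by which a first-order decomposition rate increases for a
        temperature rise of delta_t degrees C, given a Q10 value."""
        return q10 ** (delta_t / 10.0)

    def q10_from_rates(k_warm, k_cold, t_warm, t_cold):
        """Q10 estimated from rates measured at two incubation temperatures."""
        return (k_warm / k_cold) ** (10.0 / (t_warm - t_cold))

    # Reported Q10 for the 0-3 cm humus layer: 2.5 (unfertilized) vs 1.9 (N-fertilized).
    for label, q10 in [("control", 2.5), ("N-fertilized", 1.9)]:
        print(f"{label}: +4 C warming multiplies decomposition by {rate_ratio(q10, 4):.2f}")

    # Inverse calculation from hypothetical incubation rates at 14 C and 24 C.
    print("Q10 =", round(q10_from_rates(k_warm=0.020, k_cold=0.010, t_warm=24, t_cold=14), 2))
    ```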

  17. Repeated decompositions reveal the stability of infomax decomposition of fMRI data

    PubMed Central

    Duann, Jeng-Ren; Jung, Tzyy-Ping; Sejnowski, Terrence J.; Makeig, Scott

    2010-01-01

    In this study, we decomposed 12 fMRI data sets from six subjects each 101 times using the infomax algorithm. The first decomposition was taken as a reference decomposition; the others were used to form a component matrix of 100 by 100 components. Equivalence relations between components in this matrix, defined as maximum spatial correlations to the components of the reference decomposition, were found by the Hungarian sorting method and used to form 100 equivalence classes for each data set. We then tested the reproducibility of the matched components in the equivalence classes using uncertainty measures based on component distributions, time courses, and ROC curves. Infomax ICA rarely failed to derive nearly the same components in different decompositions. Very few components per data set were poorly reproduced, even using vector angle uncertainty measures stricter than correlation and detection theory measures. PMID:17281453
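    A minimal sketch of the matching step described above, assuming the spatial maps of each decomposition are stored as rows of a matrix: components of a repeat decomposition are paired with reference components by maximizing spatial correlation with the Hungarian algorithm. The function and variable names are illustrative and the data are random; this is not the authors' implementation.

    ```python
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def match_components(reference, repeat):
        """reference, repeat: (n_components, n_voxels) component spatial maps."""
        ref = (reference - reference.mean(1, keepdims=True)) / reference.std(1, keepdims=True)
        rep = (repeat - repeat.mean(1, keepdims=True)) / repeat.std(1, keepdims=True)
        corr = ref @ rep.T / reference.shape[1]           # pairwise spatial correlations
        row, col = linear_sum_assignment(-np.abs(corr))   # Hungarian sorting
        return col, np.abs(corr[row, col])                # permutation and match strengths

    rng = np.random.default_rng(0)
    reference = rng.standard_normal((10, 500))
    repeat = reference[rng.permutation(10)] + 0.1 * rng.standard_normal((10, 500))
    perm, strength = match_components(reference, repeat)
    ```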

  18. A Novel Two-Component Decomposition for Co-Polar Channels of GF-3 Quad-Pol Data

    NASA Astrophysics Data System (ADS)

    Kwok, E.; Li, C. H.; Zhao, Q. H.; Li, Y.

    2018-04-01

    Polarimetric target decomposition theory is the most dynamic and exploratory research area in the field of PolSAR, but most target decomposition methods are based on fully polarized (quad-pol) data and seldom utilize dual-polar data. Given this, we propose a novel two-component decomposition method for the co-polar channels of GF-3 quad-pol data. The method decomposes the data into two scattering contributions, surface and double bounce, in the dual co-polar channels. To solve this underdetermined problem, a criterion for determining the model is proposed. The criterion, named the second-order averaged scattering angle, originates from the H/α decomposition, and we also put forward an alternative parameter for it. To validate the effectiveness of the proposed decomposition, Liaodong Bay is selected as the research area. The area, located in northeastern China, supports various wetland resources and experiences sea ice in winter. The study data are GF-3 quad-pol observations, acquired by China's first C-band polarimetric synthetic aperture radar (PolSAR) satellite. The dependencies between the features of the proposed algorithm and comparison decompositions (Pauli decomposition, An&Yang decomposition, Yamaguchi S4R decomposition) were investigated. Through several aspects of the experimental discussion, we draw the following conclusions: the proposed algorithm may be suitable for scenes with low vegetation coverage or low vegetation in the non-growing season, and the proposed decomposition features, using only co-polar data, are highly correlated with the corresponding comparison decomposition features derived from quad-polarization data. Moreover, they could become inputs for subsequent classification or parameter inversion.

  19. A density functional theory study of the decomposition mechanism of nitroglycerin.

    PubMed

    Pei, Liguan; Dong, Kehai; Tang, Yanhui; Zhang, Bo; Yu, Chang; Li, Wenzuo

    2017-08-21

    The detailed decomposition mechanism of nitroglycerin (NG) in the gas phase was studied by examining reaction pathways using density functional theory (DFT) and canonical variational transition state theory combined with a small-curvature tunneling correction (CVT/SCT). The mechanism of NG autocatalytic decomposition was investigated at the B3LYP/6-31G(d,p) level of theory. Five possible decomposition pathways involving NG were identified and the rate constants for the pathways at temperatures ranging from 200 to 1000 K were calculated using CVT/SCT. There was found to be a lower energy barrier to the β-H abstraction reaction than to the α-H abstraction reaction during the initial step in the autocatalytic decomposition of NG. The decomposition pathways for CHOCOCHONO2 (a product obtained following the abstraction of three H atoms from NG by NO2) include O-NO2 cleavage or isomer production, meaning that the autocatalytic decomposition of NG has two reaction pathways, both of which are exothermic. The rate constants for these two reaction pathways are greater than the rate constants for the three pathways corresponding to unimolecular NG decomposition. The overall process of NG decomposition can be divided into two stages based on the NO2 concentration, which affects the decomposition products and reactions. In the first stage, the reaction pathway corresponding to O-NO2 cleavage is the main pathway, but the rates of the two autocatalytic decomposition pathways increase with increasing NO2 concentration. However, when a threshold NO2 concentration is reached, the NG decomposition process enters its second stage, with the two pathways for NG autocatalytic decomposition becoming the main and secondary reaction pathways.

  20. Image fusion method based on regional feature and improved bidimensional empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Qin, Xinqiang; Hu, Gang; Hu, Kai

    2018-01-01

    The decomposition of multiple source images using bidimensional empirical mode decomposition (BEMD) often produces mismatched bidimensional intrinsic mode functions, either by their number or their frequency, making image fusion difficult. A solution to this problem is proposed using a fixed number of iterations and a union operation in the sifting process. By combining the local regional features of the images, an image fusion method has been developed. First, the source images are decomposed using the proposed BEMD to produce the first intrinsic mode function (IMF) and residue component. Second, for the IMF component, a selection and weighted average strategy based on local area energy is used to obtain a high-frequency fusion component. Third, for the residue component, a selection and weighted average strategy based on local average gray difference is used to obtain a low-frequency fusion component. Finally, the fused image is obtained by applying the inverse BEMD transform. Experimental results show that the proposed algorithm provides superior performance over methods based on wavelet transform, line and column-based EMD, and complex empirical mode decomposition, both in terms of visual quality and objective evaluation criteria.
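    A minimal sketch of the two fusion rules described above, assuming the first IMF and the residue of each source image have already been obtained from some BEMD implementation (none is called here); the soft weighting by local energy and local average gray difference is an illustrative simplification of the selection and weighted average strategy.

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def fuse_imf(imf_a, imf_b, win=5):
        # High-frequency rule: weight each IMF by its local area energy
        e_a = uniform_filter(imf_a ** 2, size=win)
        e_b = uniform_filter(imf_b ** 2, size=win)
        w = e_a / (e_a + e_b + 1e-12)
        return w * imf_a + (1.0 - w) * imf_b

    def fuse_residue(res_a, res_b, win=5):
        # Low-frequency rule: weight each residue by its local average gray difference
        d_a = np.abs(res_a - uniform_filter(res_a, size=win))
        d_b = np.abs(res_b - uniform_filter(res_b, size=win))
        w = d_a / (d_a + d_b + 1e-12)
        return w * res_a + (1.0 - w) * res_b

    rng = np.random.default_rng(0)
    imf_a, imf_b = rng.standard_normal((2, 64, 64))
    res_a, res_b = rng.standard_normal((2, 64, 64))
    fused = fuse_imf(imf_a, imf_b) + fuse_residue(res_a, res_b)   # inverse BEMD = sum of fused components
    ```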

  1. An efficient computational approach to model statistical correlations in photon counting x-ray detectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Faby, Sebastian; Maier, Joscha; Sawall, Stefan

    2016-07-15

    Purpose: To introduce and evaluate an increment matrix approach (IMA) describing the signal statistics of energy-selective photon counting detectors including spatial–spectral correlations between energy bins of neighboring detector pixels. The importance of the occurring correlations for image-based material decomposition is studied. Methods: An IMA describing the counter increase patterns in a photon counting detector is proposed. This IMA has the potential to decrease the number of required random numbers compared to Monte Carlo simulations by pursuing an approach based on convolutions. To validate and demonstrate the IMA, an approximate semirealistic detector model is provided, simulating a photon counting detector in a simplified manner, e.g., by neglecting count rate-dependent effects. In this way, the spatial–spectral correlations on the detector level are obtained and fed into the IMA. The importance of these correlations in reconstructed energy bin images and the corresponding detector performance in image-based material decomposition is evaluated using a statistically optimal decomposition algorithm. Results: The results of IMA together with the semirealistic detector model were compared to other models and measurements using the spectral response and the energy bin sensitivity, finding a good agreement. Correlations between the different reconstructed energy bin images could be observed, and turned out to be of weak nature. These correlations were found to be not relevant in image-based material decomposition. An even simpler simulation procedure based on the energy bin sensitivity was tested instead and yielded similar results for the image-based material decomposition task, as long as the fact that one incident photon can increase multiple counters across neighboring detector pixels is taken into account. Conclusions: The IMA is computationally efficient as it required about 10^2 random numbers per ray incident on a detector pixel instead of an estimated 10^8 random numbers per ray as Monte Carlo approaches would need. The spatial–spectral correlations as described by IMA are not important for the studied image-based material decomposition task. Respecting the absolute photon counts and thus the multiple counter increases by a single x-ray photon, the same material decomposition performance could be obtained with a simpler detector description using the energy bin sensitivity.

  2. Mechanistic study of the influence of pyrolysis conditions on potassium speciation in biochar "preparation-application" process.

    PubMed

    Tan, Zhongxin; Liu, Liyun; Zhang, Limei; Huang, Qiaoyun

    2017-12-01

    Biochar samples produced from rice straw by pyrolysis at different temperatures (400°C and 800°C) and under different atmospheres (N2 and CO2) were applied to lettuce growth in a 'preparation-application' system. The conversion of potassium in the prepared biochar and the effect of the temperature used for pyrolysis on the bioavailability of potassium in the biochar were investigated. Root samples from lettuce plants grown with and without application of biochar were assayed by X-ray photoelectron spectroscopy (XPS). The optimal conditions for preparation of biochar to achieve the maximum bioavailability of potassium (i.e. for returning biochar to soil) were thus determined. Complex-K, a stable speciation of potassium in rice straw, was transformed into potassium sulfate, potassium nitrate, potassium nitrite, and potassium chloride after oxygen-limited pyrolysis. The aforementioned ionic-state potassium species can be directly absorbed and used by plants. Decomposition of the stable speciation of potassium during the pyrolysis process was more effective at higher temperature, whereas the pyrolysis atmosphere (CO2 and N2) had little effect on the quality of the biochar. Based on the potassium speciation in the biochar, the preparation cost, and the plant growth and vigor after returning the biochar to soil, 400°C and a CO2 atmosphere were the most appropriate conditions for preparation of biochar. Copyright © 2017. Published by Elsevier B.V.

  3. Analysis of Self-Excited Combustion Instabilities Using Decomposition Techniques

    DTIC Science & Technology

    2016-07-05

    Proper orthogonal decomposition and dynamic mode decomposition are evaluated for the study of self-excited longitudinal combustion instabilities in laboratory-scaled single-element gas turbine and rocket... (DOI: 10.2514/1.J054557). In addition, we also evaluate the capabilities of the methods to deal with data sets of different spatial extents and temporal resolution

  4. An Integrated Chemical Reactor-Heat Exchanger Based on Ammonium Carbamate (POSTPRINT)

    DTIC Science & Technology

    2012-10-01

    With the scrubber and exhaust operating, the test cell ammonia concentration remains below 5 ppm. To further reduce NH3 release into the test cell...material has a high decomposition enthalpy and exhibits decomposition over a wide range of temperatures. AC decomposition produces ammonia and carbon...installation due to toxic gas (ammonia) generation during operation. Therefore, the experiment is intended to be remotely operated. A secondary control

  5. Thermal Decomposition Behaviors and Burning Characteristics of AN/Nitramine-Based Composite Propellant

    NASA Astrophysics Data System (ADS)

    Naya, Tomoki; Kohga, Makoto

    2015-04-01

    Ammonium nitrate (AN) has attracted much attention due to its clean burning nature as an oxidizer. However, an AN-based composite propellant has the disadvantages of low burning rate and poor ignitability. In this study, we added nitramine of cyclotrimethylene trinitramine (RDX) or cyclotetramethylene tetranitramine (HMX) as a high-energy material to AN propellants to overcome these disadvantages. The thermal decomposition and burning rate characteristics of the prepared propellants were examined as the ratio of AN and nitramine was varied. In the thermal decomposition process, AN/RDX propellants showed unique mass loss peaks in the lower temperature range that were not observed for AN or RDX propellants alone. AN and RDX decomposed continuously as an almost single oxidizer in the AN/RDX propellant. In contrast, AN/HMX propellants exhibited thermal decomposition characteristics similar to those of AN and HMX, which decomposed almost separately in the thermal decomposition of the AN/HMX propellant. The ignitability was improved and the burning rate increased by the addition of nitramine for both AN/RDX and AN/HMX propellants. The increased burning rates of AN/RDX propellants were greater than those of AN/HMX. The difference in the thermal decomposition and burning characteristics was caused by the interaction between AN and RDX.

  6. Performance of tensor decomposition-based modal identification under nonstationary vibration

    NASA Astrophysics Data System (ADS)

    Friesen, P.; Sadhu, A.

    2017-03-01

    Health monitoring of civil engineering structures is of paramount importance when they are subjected to natural hazards or extreme climatic events like earthquakes, strong wind gusts or man-made excitations. Most traditional modal identification methods rely on the stationarity assumption of the vibration response and pose difficulties when analyzing nonstationary vibration (e.g. earthquake- or human-induced vibration). Recently, tensor decomposition based methods have emerged as powerful yet generic blind (i.e. without requiring knowledge of input characteristics) signal decomposition tools for structural modal identification. In this paper, a tensor decomposition based system identification method is further explored to estimate modal parameters using nonstationary vibration generated by either earthquake or pedestrian-induced excitation in a structure. The effects of lag parameters and sensor densities on tensor decomposition are studied with respect to the extent of nonstationarity of the responses, characterized by the stationary duration and peak ground acceleration of the earthquake. A suite of more than 1400 earthquakes is used to investigate the performance of the proposed method under a wide variety of ground motions, utilizing both complete and partial measurements of a high-rise building model. Apart from earthquakes, human-induced nonstationary vibration of a real-life pedestrian bridge is also used to verify the accuracy of the proposed method.

  7. [Progress in Raman spectroscopic measurement of methane hydrate].

    PubMed

    Xu, Feng; Zhu, Li-hua; Wu, Qiang; Xu, Long-jun

    2009-09-01

    Complex thermodynamic and kinetic problems are involved in methane hydrate formation and decomposition, and these problems are crucial to understanding the mechanisms of hydrate formation and decomposition. However, such information was difficult to obtain accurately because methane hydrate is only stable under low temperature and high pressure conditions; only in recent years has methane hydrate been measured in situ using Raman spectroscopy. Raman spectroscopy, a non-destructive and non-invasive technique, is used to study the vibrational modes of molecules. Studies of methane hydrate using Raman spectroscopy have developed over the last decade. The Raman spectra of CH4 in the vapor phase and in the hydrate phase are presented in this paper. Progress in research on methane hydrate formation thermodynamics, formation kinetics, decomposition kinetics and decomposition mechanisms based on Raman spectroscopic measurements in the laboratory and the deep sea is reviewed. Formation thermodynamic studies, including in situ observation of the formation conditions of methane hydrate, analysis of structure, and determination of hydrate cage occupancy and hydration numbers using Raman spectroscopy, are emphasized. Regarding formation kinetics, research on the variation in hydrate cage amount and methane concentration in water during hydrate growth using Raman spectroscopy is also introduced. For methane hydrate decomposition, investigations of the decomposition mechanism, the variation in cage occupancy ratio and the formulation of the decomposition rate in porous media are described. Important aspects of future hydrate research based on Raman spectroscopy are discussed.

  8. Experimental and modeling study on decomposition kinetics of methane hydrates in different media.

    PubMed

    Liang, Minyan; Chen, Guangjin; Sun, Changyu; Yan, Lijun; Liu, Jiang; Ma, Qinglan

    2005-10-13

    The decomposition kinetic behaviors of methane hydrates formed in 5 cm3 porous wet activated carbon were studied experimentally in a closed system in the temperature range of 275.8-264.4 K. The decomposition rates of methane hydrates formed from 5 cm3 of pure free water and an aqueous solution of 650 g x m(-3) sodium dodecyl sulfate (SDS) were also measured for comparison. The decomposition rates of methane hydrates in seven different cases were compared. The results showed that the methane hydrates dissociate more rapidly in porous activated carbon than in free systems. A mathematical model was developed for describing the decomposition kinetic behavior of methane hydrates below ice point based on an ice-shielding mechanism in which a porous ice layer was assumed to be formed during the decomposition of hydrate, and the diffusion of methane molecules through it was assumed to be one of the control steps. The parameters of the model were determined by correlating the decomposition rate data, and the activation energies were further determined with respect to three different media. The model was found to well describe the decomposition kinetic behavior of methane hydrate in different media.

  9. Localized motion in random matrix decomposition of complex financial systems

    NASA Astrophysics Data System (ADS)

    Jiang, Xiong-Fei; Zheng, Bo; Ren, Fei; Qiu, Tian

    2017-04-01

    Using random matrix theory, we decompose the multi-dimensional time series of complex financial systems into a set of orthogonal eigenmode functions, which are classified into the market mode, sector mode, and random mode. In particular, the localized motion generated by the business sectors plays an important role in financial systems. Both the business sectors and their impact on the stock market are identified from the localized motion. We clarify that the localized motion induces different characteristics of the time correlations for the stock-market index and individual stocks. With a variation of a two-factor model, we reproduce the return-volatility correlations of the eigenmodes.
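    A minimal sketch of the eigenmode decomposition under stated assumptions: the correlation matrix of synthetic returns is diagonalized and eigenvalues are split into market, sector and random modes using the Marchenko-Pastur upper edge, which is the usual random-matrix convention rather than the exact procedure of the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    T, N = 2000, 100                          # time points, stocks (synthetic returns)
    returns = rng.standard_normal((T, N))
    C = np.corrcoef(returns, rowvar=False)

    evals, evecs = np.linalg.eigh(C)          # eigenvalues in ascending order
    q = T / N
    lambda_max = (1 + 1 / np.sqrt(q)) ** 2    # Marchenko-Pastur upper edge

    market = evals[-1]                                          # largest eigenvalue: market mode
    sector = evals[(evals > lambda_max) & (evals < market)]     # localized sector modes
    random_bulk = evals[evals <= lambda_max]                    # random modes
    # (with purely random returns the sector set will typically be empty)
    ```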

  10. Decomposition Theory in the Teaching of Elementary Linear Algebra.

    ERIC Educational Resources Information Center

    London, R. R.; Rogosinski, H. P.

    1990-01-01

    Described is a decomposition theory from which the Cayley-Hamilton theorem, the diagonalizability of complex square matrices, and functional calculus can be developed. The theory and its applications are based on elementary polynomial algebra. (KR)

  11. Adaptive variational mode decomposition method for signal processing based on mode characteristic

    NASA Astrophysics Data System (ADS)

    Lian, Jijian; Liu, Zhuo; Wang, Haijun; Dong, Xiaofeng

    2018-07-01

    Variational mode decomposition is a completely non-recursive decomposition model in which all the modes are extracted concurrently. However, the model requires a preset mode number, which limits the adaptability of the method, since a large deviation in the preset number will cause modes to be discarded or mixed. Hence, a method called Adaptive Variational Mode Decomposition (AVMD) was proposed to automatically determine the mode number based on the characteristics of the intrinsic mode functions. The method was used to analyze simulated signals and measured signals from a hydropower plant. Comparisons have also been conducted to evaluate its performance against VMD, EMD and EWT. The results indicate that the proposed method has strong adaptability and is robust to noise. It can determine the mode number appropriately, without modulation, even when the signal frequencies are relatively close.

  12. Native conflict awared layout decomposition in triple patterning lithography using bin-based library matching method

    NASA Astrophysics Data System (ADS)

    Ke, Xianhua; Jiang, Hao; Lv, Wen; Liu, Shiyuan

    2016-03-01

    Triple patterning (TP) lithography has become a feasible technology for manufacturing as feature sizes scale down to sub-14/10 nm. In TP, a layout is decomposed into three masks, followed by exposure and etch/freeze processes, respectively. Previous works mostly focus on layout decomposition with minimal conflicts and stitches simultaneously. However, since any native conflict will result in layout re-design/modification and re-running the time-consuming decomposition, an effective method that is aware of native conflicts (NCs) in the layout is desirable. In this paper, a bin-based library matching method is proposed for NC detection and layout decomposition. First, a layout is divided into bins and the corresponding conflict graph in each bin is constructed. Then, we match each conflict graph against a prebuilt colored library, and as a result the NCs can be located and highlighted quickly.
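    A minimal sketch of native-conflict detection on one bin, under the standard interpretation that a conflict graph with no valid three-mask coloring contains a native conflict: brute-force enumeration stands in for the paper's library matching, and the K4 example graph is hypothetical.

    ```python
    from itertools import product

    def has_native_conflict(nodes, edges, n_masks=3):
        # Try every assignment of nodes to masks; succeed if no edge shares a mask
        for coloring in product(range(n_masks), repeat=len(nodes)):
            color = dict(zip(nodes, coloring))
            if all(color[u] != color[v] for u, v in edges):
                return False                  # a valid mask assignment exists
        return True                           # no 3-coloring: native conflict

    # K4 (four mutually conflicting features) cannot be split over three masks
    nodes = ["a", "b", "c", "d"]
    edges = [("a", "b"), ("a", "c"), ("a", "d"), ("b", "c"), ("b", "d"), ("c", "d")]
    print(has_native_conflict(nodes, edges))   # True
    ```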

  13. Two Dimensional Finite Element Based Magnetotelluric Inversion using Singular Value Decomposition Method on Transverse Electric Mode

    NASA Astrophysics Data System (ADS)

    Tjong, Tiffany; Yihaa’ Roodhiyah, Lisa; Nurhasan; Sutarno, Doddy

    2018-04-01

    In this work, an inversion scheme was performed using vector finite element (VFE) based 2-D magnetotelluric (MT) forward modelling. We use an inversion scheme with the singular value decomposition (SVD) method to improve the accuracy of MT inversion. The inversion scheme was applied to the transverse electric (TE) mode of MT. The SVD method was used in this inversion to decompose the Jacobian matrices. Singular values obtained from the decomposition process were analyzed. This enabled us to determine the importance of the data and therefore to define a threshold for the truncation process. The truncation of singular values in the inversion process could improve the resulting model.
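    A minimal sketch of a truncated-SVD model update for a linearized inversion step, assuming a Jacobian J and data residual d are available; the Jacobian here is random and the relative truncation threshold is an arbitrary illustration, not the value used in the study.

    ```python
    import numpy as np

    def truncated_svd_step(J, d, rel_threshold=1e-3):
        U, s, Vt = np.linalg.svd(J, full_matrices=False)
        keep = s > rel_threshold * s[0]           # discard small singular values
        s_inv = np.zeros_like(s)
        s_inv[keep] = 1.0 / s[keep]
        return Vt.T @ (s_inv * (U.T @ d))         # regularized model update

    rng = np.random.default_rng(0)
    J = rng.standard_normal((80, 40))             # stand-in Jacobian
    d = rng.standard_normal(80)                   # stand-in data residual
    dm = truncated_svd_step(J, d)
    ```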

  14. Reaction behaviors of decomposition of monocrotophos in aqueous solution by UV and UV/O3 processes.

    PubMed

    Ku, Y; Wang, W; Shen, Y S

    2000-02-01

    The decomposition of monocrotophos (cis-3-dimethoxyphosphinyloxy-N-methyl-crotonamide) in aqueous solution by UV and UV/O(3) processes was studied. The experiments were carried out under various solution pH values to investigate the decomposition efficiencies of the reactant and organic intermediates in order to determine the completeness of decomposition. The photolytic decomposition rate of monocrotophos was increased with increasing solution pH because the solution pH affects the distribution and light absorbance of monocrotophos species. The combination of O(3) with UV light apparently promoted the decomposition and mineralization of monocrotophos in aqueous solution. For the UV/O(3) process, the breakage of the >C=C< bond of monocrotophos by ozone molecules was found to occur first, followed by mineralization by hydroxyl radicals to generate CO(3)(2-), PO4(3-), and NO(3)(-) anions in sequence. The quasi-global kinetics based on a simplified consecutive-parallel reaction scheme was developed to describe the temporal behavior of monocrotophos decomposition in aqueous solution by the UV/O(3) process.

  15. Catalytic and inhibiting effects of lithium peroxide and hydroxide on sodium chlorate decomposition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cannon, J.C.; Zhang, Y.

    1995-09-01

    Chemical oxygen generators based on sodium chlorate and lithium perchlorate are used in airplanes, submarines, diving, and mine rescue. Catalytic decomposition of sodium chlorate in the presence of cobalt oxide, lithium peroxide, and lithium hydroxide is studied using thermal gravimetric analysis. Lithium peroxide and hydroxide are both moderately active catalysts for the decomposition of sodium chlorate when used alone, and inhibitors when used with the more active catalyst cobalt oxide.

  16. Reactivity of fluoroalkanes in reactions of coordinated molecular decomposition

    NASA Astrophysics Data System (ADS)

    Pokidova, T. S.; Denisov, E. T.

    2017-08-01

    Experimental results on the coordinated molecular decomposition of RF fluoroalkanes to olefin and HF are analyzed using the model of intersecting parabolas (IPM). The kinetic parameters are calculated to allow estimates of the activation energy (E) and rate constant (k) of these reactions, based on enthalpy and IPM algorithms. Parameters E and k are found for the first time for eight RF decomposition reactions. The factors that affect activation energy E of RF decomposition (the enthalpy of the reaction, the electronegativity of the atoms of reaction centers, and the dipole-dipole interaction of polar groups) are determined. The values of E and k for reverse reactions of addition are estimated.

  17. Study on the decomposition of trace benzene over V2O5-WO3/TiO2-based catalysts in simulated flue gas

    EPA Science Inventory

    Commercial and laboratory-prepared V2O5–WO3/TiO2-based catalysts with different compositions were tested for catalytic decomposition of chlorobenzene (ClBz) in simulated flue gas. Resonance enhanced multiphoton ionization-time of flight mass spectrometry (REMPI-TOFMS) was employe...

  18. Frequency hopping signal detection based on wavelet decomposition and Hilbert-Huang transform

    NASA Astrophysics Data System (ADS)

    Zheng, Yang; Chen, Xihao; Zhu, Rui

    2017-07-01

    Frequency hopping (FH) signals are widely adopted in military communications as a kind of low probability of interception signal. Therefore, it is very important to research FH signal detection algorithms. Existing FH signal detection algorithms based on time-frequency analysis cannot satisfy the time and frequency resolution requirements simultaneously due to the influence of the window function. To solve this problem, an algorithm based on wavelet decomposition and the Hilbert-Huang transform (HHT) is proposed. The proposed algorithm removes the noise of the received signals by wavelet decomposition and detects the FH signals by the Hilbert-Huang transform. Simulation results show the proposed algorithm takes into account both the time resolution and the frequency resolution. Correspondingly, the accuracy of FH signal detection can be improved.
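    A minimal sketch of the two stages described above: wavelet decomposition with soft thresholding of the detail coefficients suppresses noise, and a Hilbert transform then tracks the instantaneous frequency of the hop pattern. The Hilbert step stands in for the full Hilbert-Huang transform (no EMD is performed), and the sampling rate, hop frequencies and db4 wavelet are illustrative.

    ```python
    import numpy as np
    import pywt
    from scipy.signal import hilbert

    def denoise_wavelet(x, wavelet="db4", level=4):
        coeffs = pywt.wavedec(x, wavelet, level=level)
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745          # noise estimate from finest detail
        thr = sigma * np.sqrt(2 * np.log(len(x)))
        coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
        return pywt.waverec(coeffs, wavelet)[: len(x)]

    rng = np.random.default_rng(0)
    fs = 8000.0
    t = np.arange(0, 0.5, 1 / fs)
    hops = np.where(t < 0.25, 600.0, 1500.0)                    # two hop frequencies
    x = np.sin(2 * np.pi * hops * t) + 0.3 * rng.standard_normal(t.size)

    clean = denoise_wavelet(x)
    inst_freq = np.diff(np.unwrap(np.angle(hilbert(clean)))) * fs / (2 * np.pi)
    ```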

  19. Atomic Cholesky decompositions: a route to unbiased auxiliary basis sets for density fitting approximation with tunable accuracy and efficiency.

    PubMed

    Aquilante, Francesco; Gagliardi, Laura; Pedersen, Thomas Bondo; Lindh, Roland

    2009-04-21

    Cholesky decomposition of the atomic two-electron integral matrix has recently been proposed as a procedure for automated generation of auxiliary basis sets for the density fitting approximation [F. Aquilante et al., J. Chem. Phys. 127, 114107 (2007)]. In order to increase computational performance while maintaining accuracy, we propose here to reduce the number of primitive Gaussian functions of the contracted auxiliary basis functions by means of a second Cholesky decomposition. Test calculations show that this procedure is most beneficial in conjunction with highly contracted atomic orbital basis sets such as atomic natural orbitals, and that the error resulting from the second decomposition is negligible. We also demonstrate theoretically as well as computationally that the locality of the fitting coefficients can be controlled by means of the decomposition threshold even with the long-ranged Coulomb metric. Cholesky decomposition-based auxiliary basis sets are thus ideally suited for local density fitting approximations.

  20. Atomic Cholesky decompositions: A route to unbiased auxiliary basis sets for density fitting approximation with tunable accuracy and efficiency

    NASA Astrophysics Data System (ADS)

    Aquilante, Francesco; Gagliardi, Laura; Pedersen, Thomas Bondo; Lindh, Roland

    2009-04-01

    Cholesky decomposition of the atomic two-electron integral matrix has recently been proposed as a procedure for automated generation of auxiliary basis sets for the density fitting approximation [F. Aquilante et al., J. Chem. Phys. 127, 114107 (2007)]. In order to increase computational performance while maintaining accuracy, we propose here to reduce the number of primitive Gaussian functions of the contracted auxiliary basis functions by means of a second Cholesky decomposition. Test calculations show that this procedure is most beneficial in conjunction with highly contracted atomic orbital basis sets such as atomic natural orbitals, and that the error resulting from the second decomposition is negligible. We also demonstrate theoretically as well as computationally that the locality of the fitting coefficients can be controlled by means of the decomposition threshold even with the long-ranged Coulomb metric. Cholesky decomposition-based auxiliary basis sets are thus ideally suited for local density fitting approximations.
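    A minimal sketch of a threshold-driven pivoted Cholesky decomposition of a symmetric positive semidefinite matrix, the algebraic core of the procedure discussed above: vectors are generated until the largest remaining diagonal element falls below the decomposition threshold. This is a generic NumPy illustration on a random low-rank matrix, not the integral-handling code of any quantum chemistry package.

    ```python
    import numpy as np

    def pivoted_cholesky(M, threshold=1e-8):
        diag = np.diag(M).astype(float).copy()
        L = []
        while diag.max() > threshold:
            p = int(np.argmax(diag))                      # pivot on largest residual diagonal
            l = M[:, p] - sum(v * v[p] for v in L)        # residual column
            l = l / np.sqrt(diag[p])
            L.append(l)
            diag -= l ** 2                                # update residual diagonal
        return np.array(L).T                              # M ≈ L @ L.T

    rng = np.random.default_rng(0)
    A = rng.standard_normal((30, 6))
    M = A @ A.T                                           # rank-6 PSD test matrix
    L = pivoted_cholesky(M)
    assert np.allclose(M, L @ L.T, atol=1e-6)
    ```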

  1. Forecasting hotspots in East Kutai, Kutai Kartanegara, and West Kutai as early warning information

    NASA Astrophysics Data System (ADS)

    Wahyuningsih, S.; Goejantoro, R.; Rizki, N. A.

    2018-04-01

    The aims of this research are to model hotspots and forecast hotspots for 2017 in East Kutai, Kutai Kartanegara and West Kutai. The methods used in this research were Holt exponential smoothing, Holt's additive damped trend method, the Holt-Winters' additive method, the additive decomposition method, the multiplicative decomposition method, the Loess decomposition method and the Box-Jenkins method. Among the smoothing techniques, additive decomposition performed better than Holt's exponential smoothing. The hotspot models obtained using the Box-Jenkins method were the autoregressive moving average models ARIMA(1,1,0), ARIMA(0,2,1), and ARIMA(0,1,0). Comparing the results from all methods used in this research based on the root mean squared error (RMSE) shows that the Loess decomposition method is the best time series model, because it has the lowest RMSE. Thus the Loess decomposition model was used to forecast the number of hotspots. The forecasting results indicate that the hotspot pattern tends to increase at the end of 2017 in Kutai Kartanegara and West Kutai, but remains stationary in East Kutai.
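    A minimal sketch of a Loess-based seasonal-trend (STL) decomposition of a monthly hotspot series with statsmodels, under the assumptions that the counts are monthly (period 12) and synthetic; the in-sample RMSE of the remainder illustrates the comparison criterion used in the study.

    ```python
    import numpy as np
    import pandas as pd
    from statsmodels.tsa.seasonal import STL

    rng = np.random.default_rng(0)
    idx = pd.date_range("2010-01", periods=84, freq="MS")
    hotspots = pd.Series(
        50 + 0.3 * np.arange(84) + 20 * np.sin(2 * np.pi * np.arange(84) / 12)
        + rng.normal(0, 5, 84),
        index=idx,
    )

    res = STL(hotspots, period=12).fit()        # trend + seasonal + remainder components
    rmse = np.sqrt(np.mean(res.resid ** 2))     # in-sample RMSE of the decomposition
    ```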

  2. A general framework of noise suppression in material decomposition for dual-energy CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Petrongolo, Michael; Dong, Xue; Zhu, Lei, E-mail: leizhu@gatech.edu

    Purpose: As a general problem of dual-energy CT (DECT), noise amplification in material decomposition severely reduces the signal-to-noise ratio on the decomposed images compared to that on the original CT images. In this work, the authors propose a general framework of noise suppression in material decomposition for DECT. The method is based on an iterative algorithm recently developed in their group for image-domain decomposition of DECT, with an extension to include nonlinear decomposition models. The generalized framework of iterative DECT decomposition enables beam-hardening correction with simultaneous noise suppression, which improves the clinical benefits of DECT. Methods: The authors propose to suppress noise on the decomposed images of DECT using convex optimization, which is formulated in the form of least-squares estimation with smoothness regularization. Based on the design principles of a best linear unbiased estimator, the authors include the inverse of the estimated variance–covariance matrix of the decomposed images as the penalty weight in the least-squares term. Analytical formulas are derived to compute the variance–covariance matrix for decomposed images with general-form numerical or analytical decomposition. As a demonstration, the authors implement the proposed algorithm on phantom data using an empirical polynomial function of decomposition measured on a calibration scan. The polynomial coefficients are determined from the projection data acquired on a wedge phantom, and the signal decomposition is performed in the projection domain. Results: On the Catphan®600 phantom, the proposed noise suppression method reduces the average noise standard deviation of basis material images by one to two orders of magnitude, with a superior performance on spatial resolution as shown in comparisons of line-pair images and modulation transfer function measurements. On the synthesized monoenergetic CT images, the noise standard deviation is reduced by a factor of 2–3. By using nonlinear decomposition on projections, the authors’ method effectively suppresses the streaking artifacts of beam hardening and obtains more uniform images than their previous approach based on a linear model. Similar performance of noise suppression is observed in the results of an anthropomorphic head phantom and a pediatric chest phantom generated by the proposed method. With beam-hardening correction enabled by their approach, the image spatial nonuniformity on the head phantom is reduced from around 10% on the original CT images to 4.9% on the synthesized monoenergetic CT image. On the pediatric chest phantom, their method suppresses image noise standard deviation by a factor of around 7.5, and compared with linear decomposition, it reduces the estimation error of electron densities from 33.3% to 8.6%. Conclusions: The authors propose a general framework of noise suppression in material decomposition for DECT. Phantom studies have shown the proposed method improves the image uniformity and the accuracy of electron density measurements by effective beam-hardening correction and reduces noise level without noticeable resolution loss.

  3. Thermal stability and kinetics of decomposition of ammonium nitrate in the presence of pyrite.

    PubMed

    Gunawan, Richard; Zhang, Dongke

    2009-06-15

    The interaction between ammonium nitrate based industrial explosives and pyrite-rich minerals in mining operations can lead to the occurrence of spontaneous explosion of the explosives. In an effort to provide a scientific basis for safe applications of industrial explosives in reactive mining grounds containing pyrite, ammonium nitrate decomposition, with and without the presence of pyrite, was studied using a simultaneous Differential Scanning Calorimetry and Thermogravimetric Analyser (DSC-TGA) and a gas-sealed isothermal reactor, respectively. The activation energy and the pre-exponential factor of ammonium nitrate decomposition were determined to be 102.6 kJ mol(-1) and 4.55 x 10(7)s(-1) without the presence of pyrite and 101.8 kJ mol(-1) and 2.57 x 10(9)s(-1) with the presence of pyrite. The kinetics of ammonium nitrate decomposition was then used to calculate the critical temperatures for ammonium nitrate decomposition with and without the presence of pyrite, based on the Frank-Kamenetskii model of thermal explosion. It was shown that the presence of pyrite reduces the temperature for, and accelerates the rate of, decomposition of ammonium nitrate. It was further shown that pyrite can significantly reduce the critical temperature of ammonium nitrate decomposition, causing undesired premature detonation of the explosives. The critical temperature also decreases with increasing diameter of the blast holes charged with the explosive. The concept of using the critical temperature as indication of the thermal stability of the explosives to evaluate the risk of spontaneous explosion was verified in the gas-sealed isothermal reactor experiments.
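    A minimal sketch of a Frank-Kamenetskii critical-temperature estimate for a cylindrical charge, solved by bisection. All property values below (heat of decomposition Q, density rho, thermal conductivity lam, critical parameter delta_c) are hypothetical placeholders, not data from the study; only the Arrhenius parameters quoted in the abstract for the case with pyrite are reused.

    ```python
    import numpy as np

    R = 8.314                  # gas constant, J mol^-1 K^-1
    E = 101.8e3                # activation energy with pyrite, J mol^-1 (from abstract)
    A = 2.57e9                 # pre-exponential factor with pyrite, s^-1 (from abstract)
    Q = 1.6e6                  # heat of decomposition, J kg^-1 (assumed placeholder)
    rho = 800.0                # bulk density, kg m^-3 (assumed placeholder)
    lam = 0.3                  # thermal conductivity, W m^-1 K^-1 (assumed placeholder)
    delta_c = 2.0              # critical Frank-Kamenetskii parameter, infinite cylinder

    def delta(T, r):
        # Frank-Kamenetskii parameter for radius r at ambient temperature T
        return (E * Q * A * rho * r**2) / (lam * R * T**2) * np.exp(-E / (R * T))

    def critical_temperature(r, lo=300.0, hi=600.0):
        # Bisection on delta(T) = delta_c (delta increases with T in this range)
        for _ in range(100):
            mid = 0.5 * (lo + hi)
            lo, hi = (lo, mid) if delta(mid, r) > delta_c else (mid, hi)
        return 0.5 * (lo + hi)

    for r in (0.05, 0.1, 0.2):                     # hypothetical blast-hole radii in metres
        print(r, round(critical_temperature(r), 1))
    ```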

  4. Optimal domain decomposition strategies

    NASA Technical Reports Server (NTRS)

    Yoon, Yonghyun; Soni, Bharat K.

    1995-01-01

    The primary interest of the authors is in the area of grid generation, in particular, optimal domain decomposition about realistic configurations. A grid generation procedure with optimal blocking strategies has been developed to generate multi-block grids for a circular-to-rectangular transition duct. The focus of this study is the domain decomposition which optimizes solution algorithm/block compatibility based on geometrical complexities as well as the physical characteristics of flow field. The progress realized in this study is summarized in this paper.

  5. Thermal Decomposition of Condensed-Phase Nitromethane from Molecular Dynamics from ReaxFF Reactive Dynamics

    DTIC Science & Technology

    2011-05-04

    We studied the thermal decomposition and subsequent reaction of the energetic material nitromethane (CH3NO2) using molecular dynamics...with ReaxFF, a first principles-based reactive force field. We characterize the chemistry of liquid and solid nitromethane at high temperatures (2000

  6. Crop residue decomposition in Minnesota biochar-amended plots

    NASA Astrophysics Data System (ADS)

    Weyers, S. L.; Spokas, K. A.

    2014-06-01

    Impacts of biochar application at laboratory scales are routinely studied, but impacts of biochar application on decomposition of crop residues at field scales have not been widely addressed. The priming or hindrance of crop residue decomposition could have a cascading impact on soil processes, particularly those influencing nutrient availability. Our objectives were to evaluate biochar effects on field decomposition of crop residue, using plots that were amended with biochars made from different plant-based feedstocks and pyrolysis platforms in the fall of 2008. Litterbags containing wheat straw material were buried in July of 2011 below the soil surface in a continuous-corn cropped field in plots that had received one of seven different biochar amendments or an uncharred wood-pellet amendment 2.5 yr prior to the start of this study. Litterbags were collected over the course of 14 weeks. Microbial biomass was assessed in treatment plots the previous fall. Though first-order decomposition rate constants were positively correlated to microbial biomass, neither parameter was statistically affected by biochar or wood-pellet treatments. The findings indicated only a residual of potentially positive and negative initial impacts of biochars on residue decomposition, which is in line with established feedstock and pyrolysis influences. Overall, these findings indicate that no significant alteration in the microbial dynamics of the soil decomposer communities occurred as a consequence of the application of plant-based biochars evaluated here.

  7. Circular Mixture Modeling of Color Distribution for Blind Stain Separation in Pathology Images.

    PubMed

    Li, Xingyu; Plataniotis, Konstantinos N

    2017-01-01

    In digital pathology, to address color variation and histological component colocalization in pathology images, stain decomposition is usually performed preceding spectral normalization and tissue component segmentation. This paper examines the problem of stain decomposition, which is naturally a nonnegative matrix factorization (NMF) problem in algebra, and introduces a systematic and analytical solution consisting of a circular color analysis module and an NMF-based computation module. Unlike the paradigm of existing stain decomposition algorithms where stain proportions are computed from estimated stain spectra using a matrix inverse operation directly, the introduced solution estimates stain spectra and stain depths via probabilistic reasoning individually. Since the proposed method pays extra attention to achromatic pixels in color analysis and stain co-occurrence in pixel clustering, it achieves consistent and reliable stain decomposition with minimum decomposition residue. In particular, aware of the periodic and angular nature of hue, we propose the use of a circular von Mises mixture model to analyze the hue distribution, and provide a complete color-based pixel soft-clustering solution to address color mixing introduced by stain overlap. This innovation combined with saturation-weighted computation makes our study effective for weak stains and broad-spectrum stains. Extensive experimentation on multiple public pathology datasets suggests that our approach outperforms state-of-the-art blind stain separation methods in terms of decomposition effectiveness.
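    A minimal sketch of the algebraic NMF core of stain decomposition, assuming RGB pixels are first converted to optical density; scikit-learn's NMF stands in for the paper's probabilistic estimation, and the circular von Mises hue clustering is not reproduced. Pixel data here are random.

    ```python
    import numpy as np
    from sklearn.decomposition import NMF

    def stain_decompose(rgb, n_stains=2):
        """rgb: (n_pixels, 3) array of intensities in (0, 255]."""
        od = -np.log((rgb.astype(float) + 1.0) / 256.0)      # optical density, nonnegative
        model = NMF(n_components=n_stains, init="nndsvda", max_iter=500)
        depths = model.fit_transform(od)                      # (n_pixels, n_stains) stain depths
        spectra = model.components_                           # (n_stains, 3) stain color vectors
        return depths, spectra

    rng = np.random.default_rng(0)
    fake_pixels = rng.integers(1, 255, size=(1000, 3))
    depths, spectra = stain_decompose(fake_pixels)
    ```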

  8. Data-driven process decomposition and robust online distributed modelling for large-scale processes

    NASA Astrophysics Data System (ADS)

    Shu, Zhang; Lijuan, Li; Lijuan, Yao; Shipin, Yang; Tao, Zou

    2018-02-01

    With the increasing attention paid to networked control, system decomposition and distributed models are of significant importance in the implementation of model-based control strategies. In this paper, a data-driven system decomposition and online distributed subsystem modelling algorithm is proposed for large-scale chemical processes. The key controlled variables are first partitioned by the affinity propagation clustering algorithm into several clusters. Each cluster can be regarded as a subsystem. Then the inputs of each subsystem are selected by offline canonical correlation analysis between all process variables and its controlled variables. Process decomposition is then realised after the screening of input and output variables. When the system decomposition is finished, the online subsystem modelling can be carried out by recursively renewing the samples block-wise. The proposed algorithm was applied to the Tennessee Eastman process and its validity was verified.
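    A minimal sketch of the two decomposition steps described above on synthetic data: controlled variables are grouped with affinity propagation, and candidate inputs for one subsystem are then screened by canonical correlation analysis; the 0.3 screening threshold and the data shapes are arbitrary illustrations.

    ```python
    import numpy as np
    from sklearn.cluster import AffinityPropagation
    from sklearn.cross_decomposition import CCA

    rng = np.random.default_rng(0)
    X = rng.standard_normal((500, 12))                       # candidate input variables
    Y = np.hstack([X[:, :3] @ rng.standard_normal((3, 3)),   # controlled variables, two blocks
                   X[:, 6:9] @ rng.standard_normal((3, 3))])
    Y += 0.1 * rng.standard_normal(Y.shape)

    # Step 1: partition controlled variables into subsystems by their correlation pattern
    labels = AffinityPropagation(random_state=0).fit_predict(np.abs(np.corrcoef(Y, rowvar=False)))

    # Step 2: for the subsystem of the first controlled variable, rank inputs by CCA weights
    sub_Y = Y[:, labels == labels[0]]
    cca = CCA(n_components=1).fit(X, sub_Y)
    weights = np.abs(cca.x_weights_[:, 0])
    selected_inputs = np.where(weights > 0.3 * weights.max())[0]   # arbitrary screening threshold
    ```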

  9. Daily water level forecasting using wavelet decomposition and artificial intelligence techniques

    NASA Astrophysics Data System (ADS)

    Seo, Youngmin; Kim, Sungwon; Kisi, Ozgur; Singh, Vijay P.

    2015-01-01

    Reliable water level forecasting for reservoir inflow is essential for reservoir operation. The objective of this paper is to develop and apply two hybrid models for daily water level forecasting and investigate their accuracy. These two hybrid models are wavelet-based artificial neural network (WANN) and wavelet-based adaptive neuro-fuzzy inference system (WANFIS). Wavelet decomposition is employed to decompose an input time series into approximation and detail components. The decomposed time series are used as inputs to artificial neural networks (ANN) and adaptive neuro-fuzzy inference system (ANFIS) for WANN and WANFIS models, respectively. Based on statistical performance indexes, the WANN and WANFIS models are found to produce better efficiency than the ANN and ANFIS models. WANFIS7-sym10 yields the best performance among all other models. It is found that wavelet decomposition improves the accuracy of ANN and ANFIS. This study evaluates the accuracy of the WANN and WANFIS models for different mother wavelets, including Daubechies, Symmlet and Coiflet wavelets. It is found that the model performance is dependent on input sets and mother wavelets, and that wavelet decomposition using the mother wavelet db10 can further improve the efficiency of ANN and ANFIS models. Results obtained from this study indicate that the conjunction of wavelet decomposition and artificial intelligence models can be a useful tool for accurately forecasting daily water levels and can yield better efficiency than conventional forecasting models.
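    A minimal sketch of the WANN idea under stated assumptions: a synthetic daily series is decomposed with a discrete wavelet transform, each sub-series is reconstructed separately, and the sub-series feed a small neural network that predicts the next-day level; the db10 wavelet and the simple lag-1 input structure are illustrative choices, not the paper's exact setup.

    ```python
    import numpy as np
    import pywt
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    n = 1000
    water_level = 10 + np.sin(2 * np.pi * np.arange(n) / 365) + 0.1 * rng.standard_normal(n)

    coeffs = pywt.wavedec(water_level, "db10", level=3)      # approximation + detail coefficients

    def component(k):
        # Reconstruct one sub-series by zeroing all other coefficient arrays
        parts = [c if i == k else np.zeros_like(c) for i, c in enumerate(coeffs)]
        return pywt.waverec(parts, "db10")[:n]

    X = np.column_stack([component(k) for k in range(len(coeffs))])[:-1]
    y = water_level[1:]                                      # next-day target
    model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000,
                         random_state=0).fit(X[:800], y[:800])
    rmse = np.sqrt(np.mean((model.predict(X[800:]) - y[800:]) ** 2))
    ```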

  10. Power System Decomposition for Practical Implementation of Bulk-Grid Voltage Control Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vallem, Mallikarjuna R.; Vyakaranam, Bharat GNVSR; Holzer, Jesse T.

    Power system algorithms such as AC optimal power flow and coordinated volt/var control of the bulk power system are computationally intensive and become difficult to solve in operational time frames. The computational time required to run these algorithms increases exponentially as the size of the power system increases. The solution time for multiple subsystems is less than that for solving the entire system simultaneously, and the local nature of the voltage problem lends itself to such decomposition. This paper describes an algorithm that can be used to perform power system decomposition from the point of view of the voltage control problem. Our approach takes advantage of the dominant localized effect of voltage control and is based on clustering buses according to the electrical distances between them. One of the contributions of the paper is to use multidimensional scaling to compute n-dimensional Euclidean coordinates for each bus based on electrical distance to perform algorithms like K-means clustering. A simple coordinated reactive power control of photovoltaic inverters for voltage regulation is used to demonstrate the effectiveness of the proposed decomposition algorithm and its components. The proposed decomposition method is demonstrated on the IEEE 118-bus system.
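    A minimal sketch of the clustering step described above: bus coordinates are recovered from a precomputed electrical-distance matrix with multidimensional scaling and then grouped with K-means; the distance matrix here is random and symmetric purely for illustration, and the numbers of dimensions and zones are arbitrary.

    ```python
    import numpy as np
    from sklearn.manifold import MDS
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    n_buses = 30
    D = rng.uniform(0.1, 1.0, size=(n_buses, n_buses))
    D = (D + D.T) / 2.0
    np.fill_diagonal(D, 0.0)                        # symmetric electrical-distance matrix

    coords = MDS(n_components=3, dissimilarity="precomputed",
                 random_state=0).fit_transform(D)   # Euclidean bus coordinates
    zones = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(coords)
    ```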

  11. Hydrated electron based decomposition of perfluorooctane sulfonate (PFOS) in the VUV/sulfite system.

    PubMed

    Gu, Yurong; Liu, Tongzhou; Wang, Hongjie; Han, Huili; Dong, Wenyi

    2017-12-31

    As one of the most reactive species, the hydrated electron (eaq(-)) is promising for reductive decomposition of recalcitrant organic pollutants, such as perfluorooctane sulfonate (PFOS). In this study, PFOS decomposition using a vacuum ultraviolet (VUV)/sulfite system was systematically investigated in comparison with sole VUV and ultraviolet (UV)/sulfite systems. Fast and nearly complete (97.3%) PFOS decomposition was observed within 4 h from an initial concentration of 37.2 μM in the VUV/sulfite system. The observed rate constant (k obs) for PFOS decomposition in the studied system was 0.87±0.0060 h(-1), which was nearly 7.5- and 2-fold faster than that in the sole VUV and UV/sulfite systems, respectively. Compared to the previously studied UV/sulfite system, the VUV/sulfite system enhanced PFOS decomposition under both weakly acidic and alkaline pH conditions. Under weakly acidic conditions (pH 6.0), PFOS predominantly decomposed via direct VUV photolysis, whereas under alkaline conditions (pH>9.0), PFOS decomposition was mainly induced by eaq(-) generated from both sulfite and VUV photolytic reactions. At a fixed initial solution pH (pH 10.0), PFOS decomposition kinetics showed a positive linear dependence on sulfite dosage. The co-presence of humic acid (HA) and NO3(-) obviously suppressed PFOS decomposition, whereas HCO3(-) showed marginal inhibition. A small amount of short-chain perfluorocarboxylic acids (PFCAs) was detected in the PFOS decomposition process, and a high defluorination efficiency (75.4%) was achieved. These results suggest that most fluorine atoms in the PFOS molecule ultimately mineralized into fluoride ions, and mechanisms for PFOS decomposition in the VUV/sulfite system are proposed. Copyright © 2017 Elsevier B.V. All rights reserved.

  12. A Decomposition Method for Security Constrained Economic Dispatch of a Three-Layer Power System

    NASA Astrophysics Data System (ADS)

    Yang, Junfeng; Luo, Zhiqiang; Dong, Cheng; Lai, Xiaowen; Wang, Yang

    2018-01-01

    This paper proposes a new decomposition method for the security-constrained economic dispatch in a three-layer large-scale power system. The decomposition is realized using two main techniques. The first is to use Ward equivalencing-based network reduction to reduce the number of variables and constraints in the high-layer model without sacrificing accuracy. The second is to develop a price response function to exchange signal information between neighboring layers, which significantly improves the information exchange efficiency of each iteration and results in less iterations and less computational time. The case studies based on the duplicated RTS-79 system demonstrate the effectiveness and robustness of the proposed method.

  13. Thermal Decomposition of IMX-104: Ingredient Interactions Govern Thermal Insensitivity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maharrey, Sean; Wiese-Smith, Deneille; Highley, Aaron M.

    2015-04-01

    This report summarizes initial studies into the chemical basis of the thermal insensitivity of IMX-104. The work follows upon similar efforts investigating this behavior for another DNAN-based insensitive explosive, IMX-101. The experiments described demonstrate a clear similarity between the ingredient interactions that were shown to lead to the thermal insensitivity observed in IMX-101 and those that are active in IMX-104 at elevated temperatures. Specifically, the onset of decomposition of RDX is shifted to a lower temperature based on the interaction of the RDX with liquid DNAN. This early onset of decomposition dissipates some stored energy that is then unavailable for a delayed, more violent release.

  14. Potential macro-detritivore range expansion into the subarctic stimulates litter decomposition: a new positive feedback mechanism to climate change?

    PubMed

    van Geffen, Koert G; Berg, Matty P; Aerts, Rien

    2011-12-01

    As a result of low decomposition rates, high-latitude ecosystems store large amounts of carbon. Litter decomposition in these ecosystems is constrained by harsh abiotic conditions, but also by the absence of macro-detritivores. We have studied the potential effects of their climate change-driven northward range expansion on the decomposition of two contrasting subarctic litter types. Litter of Alnus incana and Betula pubescens was incubated in microcosms together with monocultures and all possible combinations of three functionally different macro-detritivores (the earthworm Lumbricus rubellus, isopod Oniscus asellus, and millipede Julus scandinavius). Our results show that these macro-detritivores stimulated decomposition, especially of the high-quality A. incana litter, and that the macro-detritivores tested differed in their decomposition-stimulating effects, with earthworms having the largest influence. Decomposition processes increased with increasing number of macro-detritivore species, and positive net diversity effects occurred in several macro-detritivore treatments. However, after correction for macro-detritivore biomass, all interspecific differences in macro-detritivore effects, as well as the positive effects of species number on subarctic litter decomposition, disappeared. The net diversity effects also appeared to be driven by variation in biomass, with a possible exception of net diversity effects in mass loss. Based on these results, we conclude that the expected climate change-induced range expansion of macro-detritivores into subarctic regions is likely to result in accelerated decomposition rates. Our results also indicate that the magnitude of macro-detritivore effects on subarctic decomposition will mainly depend on macro-detritivore biomass, rather than on macro-detritivore species number or identity.

  15. Surface Fitting Filtering of LIDAR Point Cloud with Waveform Information

    NASA Astrophysics Data System (ADS)

    Xing, S.; Li, P.; Xu, Q.; Wang, D.; Li, P.

    2017-09-01

    Full-waveform LiDAR is an active technology in photogrammetry and remote sensing. It provides more detailed information about objects along the path of a laser pulse than discrete-return topographic LiDAR. High-quality point cloud and waveform information can be obtained by waveform decomposition, which contributes to accurate filtering. A surface fitting filtering method using waveform information is proposed to exploit this advantage. First, the discrete point cloud and waveform parameters are resolved by globally convergent Levenberg-Marquardt decomposition. Second, ground seed points are selected, and abnormal seed points are detected using the waveform parameters and robust estimation. Third, the terrain surface is fitted and the height difference threshold is determined in consideration of the window size and mean square error. Finally, the points are classified gradually as the window size increases, and the filtering process ends when the window size exceeds the threshold. Waveform data in urban, farmland and mountain areas from "WATER (Watershed Allied Telemetry Experimental Research)" are selected for the experiments. Results show that, compared with the traditional method, the accuracy of point cloud filtering is further improved and the proposed method has high practical value.
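    A minimal sketch of decomposing a single return waveform into a sum of Gaussian pulses with a Levenberg-Marquardt nonlinear least-squares fit; the globally convergent safeguards of the paper are not reproduced, and the waveform, noise level and initial guesses are synthetic.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    def gaussians(params, t):
        # Sum of Gaussian pulses; params = [amp1, center1, width1, amp2, ...]
        a, mu, sig = params.reshape(-1, 3).T
        return (a[:, None] * np.exp(-0.5 * ((t[None, :] - mu[:, None]) / sig[:, None]) ** 2)).sum(0)

    t = np.arange(0, 100, 1.0)                             # sample times (e.g. ns)
    true = np.array([1.0, 30.0, 3.0, 0.6, 55.0, 4.0])      # two pulses
    wave = gaussians(true, t) + 0.02 * np.random.default_rng(0).standard_normal(t.size)

    x0 = np.array([0.8, 28.0, 2.0, 0.5, 57.0, 5.0])        # initial guess (e.g. from peak detection)
    fit = least_squares(lambda p: gaussians(p, t) - wave, x0, method="lm")
    amplitudes, centers, widths = fit.x.reshape(-1, 3).T
    ```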

  16. CP decomposition approach to blind separation for DS-CDMA system using a new performance index

    NASA Astrophysics Data System (ADS)

    Rouijel, Awatif; Minaoui, Khalid; Comon, Pierre; Aboutajdine, Driss

    2014-12-01

    In this paper, we present a canonical polyadic (CP) tensor decomposition isolating the scaling matrix. This has two major implications: (i) the problem conditioning shows up explicitly and could be controlled through a constraint on the so-called coherences and (ii) a performance criterion concerning the factor matrices can be exactly calculated and is more realistic than performance metrics used in the literature. Two new algorithms optimizing the CP decomposition based on gradient descent are proposed. This decomposition is illustrated by an application to direct-sequence code division multiple access (DS-CDMA) systems; computer simulations are provided and demonstrate the good behavior of these algorithms, compared to others in the literature.
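    A minimal sketch of fitting a rank-3 CP decomposition to a third-order tensor, such as a received DS-CDMA data tensor, assuming TensorLy's parafac API; the gradient-descent algorithms and coherence constraint of the paper are not reproduced, and the tensor is synthetic.

    ```python
    import numpy as np
    import tensorly as tl
    from tensorly.decomposition import parafac
    from tensorly.cp_tensor import cp_to_tensor

    rng = np.random.default_rng(0)
    rank = 3
    A, B, C = (rng.standard_normal((d, rank)) for d in (20, 15, 10))
    X = cp_to_tensor((np.ones(rank), [A, B, C]))            # noiseless rank-3 tensor
    X += 0.01 * rng.standard_normal(X.shape)                # additive noise

    weights, factors = parafac(tl.tensor(X), rank=rank, normalize_factors=True)
    rel_error = tl.norm(X - cp_to_tensor((weights, factors))) / tl.norm(X)
    ```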

  17. The Elusive Universal Post-Mortem Interval Formula

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vass, Arpad Alexander

    The following manuscript details our initial attempt at developing universal post-mortem interval formulas describing human decomposition. These formulas are empirically derived from data collected over the last 20 years from the University of Tennessee's Anthropology Research Facility, in Knoxville, Tennessee, USA. Two formulas were developed (surface decomposition and burial decomposition) based on temperature, moisture, and the partial pressure of oxygen, as being three of the four primary drivers for human decomposition. It is hoped that worldwide application of these formulas to environments and situations not readily studied in Tennessee will result in interdisciplinary cooperation between scientists and law enforcement personnel that will allow for future refinements of these models leading to increased accuracy.

  18. Synthesis of Antimalarial Agents from 2,3-Dihydro-1,6-Diazaphenalene Derivatives.

    DTIC Science & Technology

    1982-03-01

    ago; however, conversion of this stable salt to the free base (2) resulted in decomposition of 2, prohibiting simple alkylation of the material; a...however, Mr. Musallam pointed out it was a black gummy solid on arrival, hence the lack of activity may be due to decomposition which occurred in transit...16 decomposition, there is special interest with regard to the oxidation of 4. In particular, the similarities between the properties of 4 2a,b and

  19. Lumley decomposition of turbulent boundary layer at high Reynolds numbers

    NASA Astrophysics Data System (ADS)

    Tutkun, Murat; George, William K.

    2017-02-01

    The decomposition proposed by Lumley in 1966 is applied to a high Reynolds number turbulent boundary layer. The experimental database was created by a hot-wire rake of 143 probes in the Laboratoire de Mécanique de Lille wind tunnel. The Reynolds numbers based on momentum thickness (Reθ) are 9800 and 19 100. Three-dimensional decomposition is performed, namely, proper orthogonal decomposition (POD) in the inhomogeneous and bounded wall-normal direction, Fourier decomposition in the homogeneous spanwise direction, and Fourier decomposition in time. The first POD modes in both cases carry nearly 50% of turbulence kinetic energy when the energy is integrated over Fourier dimensions. The eigenspectra always peak near zero frequency and most of the large scale, energy carrying features are found at the low end of the spectra. The spanwise Fourier mode which has the largest amount of energy is the first spanwise mode and its symmetrical pair. Pre-multiplied eigenspectra have only one distinct peak and it matches the secondary peak observed in the log-layer of pre-multiplied velocity spectra. Energy carrying modes obtained from the POD scale with outer scaling parameters. Full or partial reconstruction of turbulent velocity signal based only on energetic modes or non-energetic modes revealed the behaviour of urms in distinct regions across the boundary layer. When urms is based on energetic reconstruction, there exists (a) an exponential decay from near wall to log-layer, (b) a constant layer through the log-layer, and (c) another exponential decay in the outer region. The non-energetic reconstruction reveals that urms has (a) an exponential decay from the near-wall to the end of log-layer and (b) a constant layer in the outer region. Scaling of urms using the outer parameters is best when both energetic and non-energetic profiles are combined.
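    A minimal sketch of a POD of fluctuating velocity data via the singular value decomposition, using a synthetic probe-by-snapshot matrix sized like the hot-wire rake; the energetic/non-energetic split at 50% cumulative energy mirrors the kind of partial reconstruction described above but is an arbitrary illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_probes, n_snapshots = 143, 2048
    u = rng.standard_normal((n_probes, n_snapshots))        # fluctuating velocity u'
    u -= u.mean(axis=1, keepdims=True)                      # remove the temporal mean

    modes, sing_vals, time_coeffs = np.linalg.svd(u, full_matrices=False)
    energy_fraction = sing_vals ** 2 / np.sum(sing_vals ** 2)

    # Partial reconstruction using only the most energetic modes (first 50% of energy)
    k = np.searchsorted(np.cumsum(energy_fraction), 0.5) + 1
    u_energetic = modes[:, :k] @ np.diag(sing_vals[:k]) @ time_coeffs[:k]
    u_rms_energetic = np.sqrt(np.mean(u_energetic ** 2, axis=1))
    ```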

  20. Experimental investigation of the catalytic decomposition and combustion characteristics of a non-toxic ammonium dinitramide (ADN)-based monopropellant thruster

    NASA Astrophysics Data System (ADS)

    Chen, Jun; Li, Guoxiu; Zhang, Tao; Wang, Meng; Yu, Yusong

    2016-12-01

    Low toxicity ammonium dinitramide (ADN)-based aerospace propulsion systems currently show promise with regard to applications such as controlling satellite attitude. In the present work, the decomposition and combustion processes of an ADN-based monopropellant thruster were systematically studied, using a thermally stable catalyst to promote the decomposition reaction. The performance of the ADN propulsion system was investigated using a ground test system under vacuum, and the physical properties of the ADN-based propellant were also examined. Using this system, the effects of the preheating temperature and feed pressure on the combustion characteristics and thruster performance during steady state operation were observed. The results indicate that the propellant and catalyst employed during this work, as well as the design and manufacture of the thruster, met performance requirements. Moreover, the 1 N ADN thruster generated a specific impulse of 223 s, demonstrating the efficacy of the new catalyst. The thruster operational parameters (specifically, the preheating temperature and feed pressure) were found to have a significant effect on the decomposition and combustion processes within the thruster, and the performance of the thruster was demonstrated to improve at higher feed pressures and elevated preheating temperatures. A lower temperature of 140 °C was determined to activate the catalytic decomposition and combustion processes more effectively compared with the results obtained using other conditions. The data obtained in this study should be beneficial to future systematic and in-depth investigations of the combustion mechanism and characteristics within an ADN thruster.

  1. Iterative filtering decomposition based on local spectral evolution kernel

    PubMed Central

    Wang, Yang; Wei, Guo-Wei; Yang, Siyang

    2011-01-01

    Synthesizing information, achieving understanding, and deriving insight from increasingly massive, time-varying, noisy and possibly conflicting data sets are some of the most challenging tasks in the present information age. Traditional technologies, such as the Fourier transform and wavelet multi-resolution analysis, are inadequate to handle all of the above-mentioned tasks. The empirical mode decomposition (EMD) has emerged as a new powerful tool for resolving many challenging problems in data processing and analysis. Recently, an iterative filtering decomposition (IFD) has been introduced to address the stability and efficiency problems of the EMD. Another data analysis technique is the local spectral evolution kernel (LSEK), which provides a near perfect low pass filter with desirable time-frequency localizations. The present work utilizes the LSEK to further stabilize the IFD, and offers an efficient, flexible and robust scheme for information extraction, complexity reduction, and signal and image understanding. The performance of the present LSEK based IFD is intensively validated over a wide range of data processing tasks, including mode decomposition, analysis of time-varying data, information extraction from nonlinear dynamic systems, etc. The utility, robustness and usefulness of the proposed LSEK based IFD are demonstrated via a large number of applications, such as the analysis of stock market data, the decomposition of ocean wave magnitudes, the understanding of physiologic signals and information recovery from noisy images. The performance of the proposed method is compared with that of existing methods in the literature. Our results indicate that the LSEK based IFD improves both the efficiency and the stability of conventional EMD algorithms. PMID:22350559
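    To make the iterative filtering idea concrete, the following minimal sketch (an assumption on my part, using a plain uniform moving average rather than the LSEK filter discussed in the paper) repeatedly subtracts a local mean from the signal until a zero-mean oscillatory component remains, then peels that component off and continues on the remainder.

```python
import numpy as np

def moving_average(x, half_width):
    """Centered uniform moving average with reflection padding (a simple low-pass mask)."""
    w = 2 * half_width + 1
    xp = np.pad(x, half_width, mode="reflect")
    return np.convolve(xp, np.ones(w) / w, mode="valid")

def iterative_filtering(x, half_width, n_components=4, max_iter=50, tol=1e-6):
    """Decompose x into oscillatory components plus a residual trend."""
    components, residual = [], x.astype(float).copy()
    for _ in range(n_components):
        h = residual.copy()
        for _ in range(max_iter):
            mean = moving_average(h, half_width)
            if np.linalg.norm(mean) < tol * np.linalg.norm(h):
                break
            h = h - mean                  # remove the local mean
        components.append(h)
        residual = residual - h           # the remainder feeds the next component
    return components, residual

t = np.linspace(0.0, 1.0, 1000)
signal = np.sin(2 * np.pi * 50 * t) + np.sin(2 * np.pi * 5 * t)
comps, trend = iterative_filtering(signal, half_width=10)
```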

  2. Lossless and Sufficient - Invariant Decomposition of Deterministic Target

    NASA Astrophysics Data System (ADS)

    Paladini, Riccardo; Ferro Famil, Laurent; Pottier, Eric; Martorella, Marco; Berizzi, Fabrizio

    2011-03-01

    The symmetric radar scattering matrix of a reciprocal target is projected onto the circular polarization basis and is decomposed into four orientation-invariant parameters, relative phase and relative orientation. The physical interpretation of these results is found in the wave-particle nature of radar scattering, due to the circular polarization nature of elemental packets of energy. The proposed decomposition is based on a left orthogonal to left Special Unitary basis, providing the target description in terms of a unitary vector. A comparison between the proposed CTD and the Cameron, Kennaugh and Krogager decompositions is also presented. A validation using both anechoic chamber data and airborne EMISAR data of DTU is used to show the effectiveness of this decomposition for the analysis of coherent targets. In a second paper we will show the application of the rotation group U(3) to the decomposition of distributed targets into nine meaningful parameters.

  3. Catalytic effects of inorganic acids on the decomposition of ammonium nitrate.

    PubMed

    Sun, Jinhua; Sun, Zhanhui; Wang, Qingsong; Ding, Hui; Wang, Tong; Jiang, Chuansheng

    2005-12-09

    In order to evaluate the catalytic effects of inorganic acids on the decomposition of ammonium nitrate (AN), the heat releases of decomposition or reaction of pure AN and its mixtures with inorganic acids were analyzed with a C80 heat flux calorimeter. Through the experiments, the different reaction mechanisms of AN and its mixtures were analyzed. Chemical reaction kinetic parameters such as reaction order, activation energy and frequency factor were calculated from the C80 experimental results for the different samples. Based on these parameters and the thermal runaway models (Semenov and Frank-Kamenetskii models), the self-accelerating decomposition temperatures (SADTs) of AN and its mixtures were calculated and compared. The results show that the mixtures of AN with acid are less stable than pure AN. The AN decomposition reaction is catalyzed by acid. The calculated SADTs of the AN mixtures with acid are much lower than that of pure AN.

  4. Geometric decomposition of the conformation tensor in viscoelastic turbulence

    NASA Astrophysics Data System (ADS)

    Hameduddin, Ismail; Meneveau, Charles; Zaki, Tamer A.; Gayme, Dennice F.

    2018-05-01

    This work introduces a mathematical approach to analysing the polymer dynamics in turbulent viscoelastic flows that uses a new geometric decomposition of the conformation tensor, along with associated scalar measures of the polymer fluctuations. The approach circumvents an inherent difficulty in traditional Reynolds decompositions of the conformation tensor: the fluctuating tensor fields are not positive-definite and so do not retain the physical meaning of the tensor. The geometric decomposition of the conformation tensor yields both mean and fluctuating tensor fields that are positive-definite. The fluctuating tensor in the present decomposition has a clear physical interpretation as a polymer deformation relative to the mean configuration. Scalar measures of this fluctuating conformation tensor are developed based on the non-Euclidean geometry of the set of positive-definite tensors. Drag-reduced viscoelastic turbulent channel flow is then used as an example case study. The conformation tensor field, obtained using direct numerical simulations, is analysed using the proposed framework.
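    The following toy calculation is my reading of the kind of construction described above (field data and tensor values are hypothetical): the fluctuation is formed by a congruence transformation with the inverse square root of the mean conformation tensor, which keeps it positive-definite, and a scalar deformation measure is taken from the matrix logarithm.

```python
import numpy as np
from scipy.linalg import sqrtm, logm

def fluctuation_tensor(C, C_mean):
    """Fluctuation of a conformation tensor C about a mean C_mean.

    G = C_mean^{-1/2} C C_mean^{-1/2} stays positive-definite whenever C is,
    and reduces to the identity when C coincides with the mean configuration.
    """
    M = np.linalg.inv(sqrtm(C_mean))
    return M @ C @ M

def fluctuation_magnitude(G):
    """Scalar measure of polymer deformation relative to the mean: ||log G||_F."""
    return np.linalg.norm(np.real(logm(G)), ord="fro")   # imaginary part is round-off

# Hypothetical mean and instantaneous conformation tensors (both SPD)
C_mean = np.diag([4.0, 2.0, 1.0])
C = np.array([[5.0, 0.5, 0.0],
              [0.5, 2.5, 0.2],
              [0.0, 0.2, 1.1]])
G = fluctuation_tensor(C, C_mean)
print("fluctuation magnitude:", fluctuation_magnitude(G))
```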

  5. Joint detection and tracking of size-varying infrared targets based on block-wise sparse decomposition

    NASA Astrophysics Data System (ADS)

    Li, Miao; Lin, Zaiping; Long, Yunli; An, Wei; Zhou, Yiyu

    2016-05-01

    The high variability of target size makes small target detection in Infrared Search and Track (IRST) a challenging task. A joint detection and tracking method based on block-wise sparse decomposition is proposed to address this problem. For detection, the infrared image is divided into overlapped blocks, and each block is weighted on the local image complexity and target existence probabilities. Target-background decomposition is solved by block-wise inexact augmented Lagrange multipliers. For tracking, a labeled multi-Bernoulli (LMB) tracker tracks multiple targets, taking the result of single-frame detection as input, and provides corresponding target existence probabilities for detection. Unlike fixed-size methods, the proposed method can accommodate size-varying targets, since no special assumption is made about the size and shape of small targets. Because of the exact decomposition, classical target measurements are extended and additional direction information is provided to improve tracking performance. The experimental results show that the proposed method can effectively suppress background clutter, and detect and track size-varying targets in infrared images.
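    The target-background separation step referred to above is, at its core, a low-rank plus sparse decomposition. The sketch below implements a generic inexact-ALM robust PCA on a single toy frame (not the paper's weighted block-wise variant; the parameter values and the toy image are my own assumptions).

```python
import numpy as np

def soft_threshold(x, tau):
    """Element-wise shrinkage operator."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def rpca_ialm(D, lam=None, rho=1.5, tol=1e-7, max_iter=500):
    """Split D into a low-rank background L and a sparse component S (D ~ L + S)
    using the inexact augmented Lagrange multiplier method."""
    m, n = D.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    norm_D = np.linalg.norm(D, "fro")
    mu = 1.25 / np.linalg.norm(D, 2)
    L, S = np.zeros_like(D), np.zeros_like(D)
    Y = D / np.linalg.norm(D, 2)
    for _ in range(max_iter):
        # Low-rank update: singular value thresholding
        U, sig, Vt = np.linalg.svd(D - S + Y / mu, full_matrices=False)
        L = (U * soft_threshold(sig, 1.0 / mu)) @ Vt
        # Sparse update: element-wise shrinkage
        S = soft_threshold(D - L + Y / mu, lam / mu)
        # Dual and penalty updates
        residual = D - L - S
        Y = Y + mu * residual
        mu = rho * mu
        if np.linalg.norm(residual, "fro") / norm_D < tol:
            break
    return L, S

# Toy infrared frame: smooth background plus two bright point-like targets
background = np.outer(np.linspace(1.0, 2.0, 64), np.linspace(1.0, 2.0, 64))
frame = background.copy()
frame[10, 20] += 5.0
frame[40, 45] += 5.0
L, S = rpca_ialm(frame)
```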

  6. Rotational-path decomposition based recursive planning for spacecraft attitude reorientation

    NASA Astrophysics Data System (ADS)

    Xu, Rui; Wang, Hui; Xu, Wenming; Cui, Pingyuan; Zhu, Shengying

    2018-02-01

    Spacecraft reorientation is a common task in many space missions. With multiple pointing constraints, the constrained spacecraft reorientation planning problem is very difficult to solve. To deal with this problem, an efficient rotational-path decomposition based recursive planning (RDRP) method is proposed in this paper. A uniform pointing-constraint-ignored attitude rotation planning process is designed to solve all rotations without considering pointing constraints. Then the whole path is checked node by node. If any pointing constraint is violated, the nearest critical increment approach is used to generate feasible alternative nodes in the process of rotational-path decomposition. As the planned path of each subdivision may still violate pointing constraints, multiple decompositions may be needed, and the reorientation planning is therefore designed in a recursive manner. Simulation results demonstrate the effectiveness of the proposed method. The proposed method has been successfully applied in the two SPARK microsatellites, developed by the Shanghai Engineering Center for Microsatellites and launched on 22 December 2016, to solve the onboard constrained attitude reorientation planning problem.

  7. Phosphorus cycles of forest and upland grassland ecosystems and some effects of land management practices.

    PubMed

    Harrison, A F

    The distribution of phosphorus capital and the net annual transfers of phosphorus between the major components of two unfertilized phosphorus-deficient UK ecosystems, an oak-ash woodland in the Lake District and an Agrostis-Festuca grassland in Snowdonia (both on acid brown-earth soils), have been estimated in terms of kg P ha-1. In both ecosystems less than 3% of the phosphorus, totalling 1890 kg P ha-1 and 3040 kg P ha-1 for the woodland and grassland, respectively, is contained in the living biomass, and half of that is below ground level. Nearly all the phosphorus is in the soil matrix. Although the biomass phosphorus is mostly in the vegetation and soil fauna, the annual turnover of phosphorus in the woodland vegetation is slower (25%) than in the grassland vegetation (208%). More than 85% of the net annual vegetation uptake of phosphorus from the soil is returned to the soil, mainly in organic debris, which in the grassland ecosystem is more than twice as rich in phosphorus (0.125% P) as in the woodland ecosystem (0.053% P). These concentrations are related to the rates of turnover (input/P content) of phosphorus in the litter layer on the soil surface; it is faster in the grassland (460%) than in the woodland (144%). In both cycles plant uptake of phosphorus largely depends on the release of phosphorus through decomposition of the organic matter returned to soil. In both the woodland and the grassland, the amount of cycling phosphorus is potentially reduced by its immobilization in tree and sheep production and in undecomposed organic matter accumulating in soil. It is assumed that the reductions are counterbalanced by the replenishment of cycling phosphorus by (i) some mineralization of organically bound phosphorus in the mineral soil, (ii) the income in rainfall and aerosols, which is not effectively lost in soil drainage waters, and (iii) rock weathering. The effects of the growth of conifers and of sheep grazing on the balance between decomposition and accumulation of organic matter returned to soil are considered in relation to the rate of phosphorus cycling and the pedogenetic changes in soil phosphorus condition leading to reduced fertility. Although controlled sheep grazing speeds up phosphorus cycling and may reverse the pedogenetic trend in favour of soil improvement, conifers may slow down phosphorus cycling and promote the pedogenetic trend towards infertility.

  8. Decomposition of 3,5-dinitrobenzamide in aqueous solution during UV/H2O2 and UV/TiO2 oxidation processes.

    PubMed

    Yan, Yingjie; Liao, Qi-Nan; Ji, Feng; Wang, Wei; Yuan, Shoujun; Hu, Zhen-Hu

    2017-02-01

    3,5-Dinitrobenzamide has been widely used as a feed additive to control coccidiosis in poultry, and part of the added 3,5-dinitrobenzamide is excreted into wastewater and surface water. The removal of 3,5-dinitrobenzamide from wastewater and surface water has not been reported in previous studies. Highly reactive hydroxyl radicals from UV/hydrogen peroxide (H2O2) and UV/titanium dioxide (TiO2) advanced oxidation processes (AOPs) can decompose organic contaminants efficiently. In this study, the decomposition of 3,5-dinitrobenzamide in aqueous solution during UV/H2O2 and UV/TiO2 oxidation processes was investigated. The decomposition of 3,5-dinitrobenzamide fits well with a fluence-based pseudo-first-order kinetics model. The decomposition in both oxidation processes was affected by solution pH, and was inhibited under alkaline conditions. Inorganic anions such as NO3-, Cl-, SO42-, HCO3-, and CO32- inhibited the degradation of 3,5-dinitrobenzamide during the UV/H2O2 and UV/TiO2 oxidation processes. After complete decomposition in both oxidation processes, approximately 50% of the 3,5-dinitrobenzamide was decomposed into organic intermediates, and the rest was mineralized to CO2, H2O, and other inorganic anions. Ions such as NH4+, NO3-, and NO2- were released into the aqueous solution during the degradation. The primary decomposition products of 3,5-dinitrobenzamide were identified using ion-trap time-of-flight mass spectrometry (LCMS-IT-TOF). Based on these products and the ions released, a possible decomposition pathway of 3,5-dinitrobenzamide in both the UV/H2O2 and UV/TiO2 processes was proposed.
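    For reference, the fluence-based pseudo-first-order model mentioned above is commonly written as (symbols here are generic, chosen for illustration) $$\ln\!\left(\frac{C}{C_0}\right) = -k_F\,F, \qquad F = E_p\,t,$$ where $C_0$ and $C$ are the initial and remaining concentrations of the target compound, $F$ is the UV fluence (average irradiance $E_p$ times exposure time $t$), and $k_F$ is the fluence-based rate constant, so a plot of $\ln(C/C_0)$ against fluence is linear with slope $-k_F$.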

  9. Domain decomposition: A bridge between nature and parallel computers

    NASA Technical Reports Server (NTRS)

    Keyes, David E.

    1992-01-01

    Domain decomposition is an intuitive organizing principle for a partial differential equation (PDE) computation, both physically and architecturally. However, its significance extends beyond the readily apparent issues of geometry and discretization, on one hand, and of modular software and distributed hardware, on the other. Engineering and computer science aspects are bridged by an old but recently enriched mathematical theory that offers the subject not only unity, but also tools for analysis and generalization. Domain decomposition induces function-space and operator decompositions with valuable properties. Function-space bases and operator splittings that are not derived from domain decompositions generally lack one or more of these properties. The evolution of domain decomposition methods for elliptically dominated problems has linked two major algorithmic developments of the last 15 years: multilevel and Krylov methods. Domain decomposition methods may be considered descendants of both classes with an inheritance from each: they are nearly optimal and at the same time efficiently parallelizable. Many computationally driven application areas are ripe for these developments. A progression is made from a mathematically informal motivation for domain decomposition methods to a specific focus on fluid dynamics applications. To be introductory rather than comprehensive, simple examples are provided while convergence proofs and algorithmic details are left to the original references; however, an attempt is made to convey their most salient features, especially where this leads to algorithmic insight.
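    To make the organizing principle concrete, here is a minimal sketch (my own illustration, not taken from the reference) of a classical overlapping Schwarz iteration for a 1D Poisson problem: the domain is split into two overlapping subdomains, and each local solve uses the latest values from the other subdomain as Dirichlet data.

```python
import numpy as np

def solve_poisson_block(f, h, left, right):
    """Direct solve of -u'' = f on a set of interior points with Dirichlet data left/right."""
    n = len(f)
    A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    rhs = f.copy()
    rhs[0] += left / h**2
    rhs[-1] += right / h**2
    return np.linalg.solve(A, rhs)

# Global problem: -u'' = 1 on (0, 1), u(0) = u(1) = 0, exact solution x(1 - x)/2
N = 101
x = np.linspace(0.0, 1.0, N)
h = x[1] - x[0]
f = np.ones(N)
u = np.zeros(N)

sub1 = np.arange(1, 60)        # interior unknowns of subdomain 1, covers (0, 0.6)
sub2 = np.arange(45, N - 1)    # interior unknowns of subdomain 2, covers (0.44, 1)

for _ in range(30):            # alternating Schwarz sweeps
    u[sub1] = solve_poisson_block(f[sub1], h, u[sub1[0] - 1], u[sub1[-1] + 1])
    u[sub2] = solve_poisson_block(f[sub2], h, u[sub2[0] - 1], u[sub2[-1] + 1])

print("max error vs exact solution:", np.max(np.abs(u - 0.5 * x * (1.0 - x))))
```

    In practice each subdomain solve would run on its own processor and exchange only interface data, which is exactly the modular-software/distributed-hardware bridge described above.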

  10. Canonical decomposition of magnetotelluric responses: Experiment on 1D anisotropic structures

    NASA Astrophysics Data System (ADS)

    Guo, Ze-qiu; Wei, Wen-bo; Ye, Gao-feng; Jin, Sheng; Jing, Jian-en

    2015-08-01

    Horizontal electrical heterogeneity of the subsurface earth originates mostly from structural complexity and electrical anisotropy, and local near-surface electrical heterogeneity can severely distort regional electromagnetic responses. Conventional distortion analyses for magnetotelluric soundings are primarily physical decomposition methods with respect to isotropic models, which mostly presume that the geoelectric distribution of geological structures follows local and regional patterns represented by 3D/2D models. Given the widespread anisotropy of earth media, the possible confusion between 1D anisotropic responses and 2D isotropic responses, and the shortcomings of physical decomposition methods, we propose to conduct modeling experiments with canonical decomposition in terms of 1D layered anisotropic models; the method is a mathematical decomposition based on eigenstate analyses, as distinct from distortion analyses, and can be used to recover electrical information such as strike directions and maximum and minimum conductivity. We tested this method with numerical simulation experiments on several 1D synthetic models, which showed that canonical decomposition is quite effective at revealing geological anisotropic information. Finally, against the background of anisotropy indicated by previous geological and seismological studies, canonical decomposition is applied to real data acquired in the North China Craton for 1D anisotropy analyses, and the result shows that, with effective modeling and cautious interpretation, canonical decomposition can be another good method to detect anisotropy of geological media.

  11. Litter decomposition patterns and dynamics across biomes: Initial results from the global TeaComposition initiative

    NASA Astrophysics Data System (ADS)

    Djukic, Ika; Kappel Schmidt, Inger; Steenberg Larsen, Klaus; Beier, Claus

    2017-04-01

    Litter decomposition represents one of the largest fluxes in the global terrestrial carbon cycle, and a number of large-scale decomposition experiments have been conducted focusing on this fundamental soil process. However, previous studies were most often based on site-specific litters and methodologies. The contrasting litter and soil types used and the general lack of common protocols still pose a major challenge, as they add major uncertainty to meta-analyses across different experiments and sites. In the TeaComposition initiative, we aim to investigate potential litter decomposition by using standardized substrates (tea) for comparison of temporal litter decomposition rates across different ecosystems worldwide. To this end, Lipton tea bags (Rooibos and Green Tea) have been buried in the H-A or Ah horizon and incubated over a period of 36 months at 400 sites covering diverse ecosystems in 9 zonobiomes. We measured initial litter chemistry and litter mass loss 3 months after the start of decomposition and linked the decomposition rates to site and climatic conditions as well as to the existing decomposition rates of the local litter. We will present and discuss the outcomes of this study. Acknowledgment: We are thankful to colleagues from more than 300 sites who participated in the implementation of this initiative and who are not yet mentioned individually as co-authors.

  12. Decomposition of sulfamethoxazole and trimethoprim by continuous UVA/LED/TiO2 photocatalysis: Decomposition pathways, residual antibacterial activity and toxicity.

    PubMed

    Cai, Qinqing; Hu, Jiangyong

    2017-02-05

    In this study, continuous LED/UVA/TiO2 photocatalytic decomposition of sulfamethoxazole (SMX) and trimethoprim (TMP) was investigated. More than 90% of the SMX and TMP were removed within 20 min by the continuous photoreactor (with an initial concentration of 400 ppb for each). The removal rates of SMX and TMP decreased with higher initial antibiotic loadings. SMX was decomposed much more readily under acidic conditions, while pH had little effect on the decomposition of TMP. An H2O2 dosage of 0.003% was found to be optimal for enhancing SMX photocatalytic decomposition. Decomposition pathways of SMX and TMP were proposed based on the intermediates identified using LC-MS-MS and GC-MS. Aniline was identified as a new intermediate generated during SMX photocatalytic decomposition. An antibacterial activity study with a reference Escherichia coli strain was also conducted during the photocatalytic process. Results indicated that the residual antibacterial activity decreased in proportion to the amount of TMP removed. However, the synergistic effect between SMX and TMP tended to slow down the antibacterial activity removal of the SMX and TMP mixture. Chronic toxicity studies conducted with Vibrio fischeri exhibited 13-20% bioluminescence inhibition during the decomposition of 1 ppm SMX and 1 ppm TMP; no acute toxicity to V. fischeri was observed during the photocatalytic process. Copyright © 2016 Elsevier B.V. All rights reserved.

  13. Investigation of Prediction Method and Fundamental Thermo-decomposition Properties on Gasification of Woody Biomass

    NASA Astrophysics Data System (ADS)

    Morita, Akihiro

    Recently, the development of energy conversion technologies based on woody biomass has been advancing rapidly, accompanying the boom in biomass gasification and liquefaction. Raising the energy yield of biomass for transportation, and its exergy, is extremely important for the efficient utilization and production of bio-fuels, because conversion to bio-fuels requires a detailed understanding of the thermo-decomposition characteristics of the main biomass constituents: cellulose, hemicellulose, and lignin. In this research, we analyze the thermo-decomposition characteristics of each main constituent in both active (air) and passive (N2) atmospheres. In particular, we propose a predictive model of gasification based on the change of the atomic carbon ratio during thermo-decomposition. 1) Even when cedar chips are heat-treated at 473 K, almost no energy is lost; the substances contributing to the weight reduction therefore have low energy value. 2) When cedar chips are heated around 473 K, low-energy-value substances such as water or acetic acid are predicted to arise from thermal decomposition; selecting and removing these should improve the transportation performance of the biomass. 3) The hydrogen, nitrogen, and oxygen released during the gasification process were each found to be directly proportional to the carbon dissipation rate. 4) The thermo-decomposition behaviour of the main elemental constituents of the biomass (carbon, hydrogen, nitrogen, and oxygen) can likely be predicted by a statistical method.

  14. On the Composition of Risk Preference and Belief

    ERIC Educational Resources Information Center

    Wakker, Peter P.

    2004-01-01

    Prospect theory assumes nonadditive decision weights for preferences over risky gambles. Such decision weights generalize additive probabilities. This article proposes a decomposition of decision weights into a component reflecting risk attitude and a new component depending on belief. The decomposition is based on an observable preference…

  15. Initial mechanisms for the unimolecular decomposition of electronically excited bisfuroxan based energetic materials.

    PubMed

    Yuan, Bing; Bernstein, Elliot R

    2017-01-07

    Unimolecular decomposition of energetic molecules, 3,3'-diamino-4,4'-bisfuroxan (labeled as A) and 4,4'-diamino-3,3'-bisfuroxan (labeled as B), has been explored via 226/236 nm single photon laser excitation/decomposition. These two energetic molecules, subsequent to UV excitation, create NO as an initial decomposition product at the nanosecond excitation energies (5.0-5.5 eV) with warm vibrational temperature (1170 ± 50 K for A, 1400 ± 50 K for B) and cold rotational temperature (<55 K). Initial decomposition mechanisms for these two electronically excited, isolated molecules are explored at the complete active space self-consistent field (CASSCF(12,12)/6-31G(d)) level with and without MP2 correction. Potential energy surface calculations illustrate that conical intersections play an essential role in the calculated decomposition mechanisms. Based on experimental observations and theoretical calculations, NO product is released through opening of the furoxan ring: ring opening can occur either on the S 1 excited or S 0 ground electronic state. The reaction path with the lowest energetic barrier is that for which the furoxan ring opens on the S 1 state via the breaking of the N1-O1 bond. Subsequently, the molecule moves to the ground S 0 state through related ring-opening conical intersections, and an NO product is formed on the ground state surface with little rotational excitation at the last NO dissociation step. For the ground state ring opening decomposition mechanism, the N-O bond and C-N bond break together in order to generate dissociated NO. With the MP2 correction for the CASSCF(12,12) surface, the potential energies of molecules with dissociated NO product are in the range from 2.04 to 3.14 eV, close to the theoretical result for the density functional theory (B3LYP) and MP2 methods. The CASMP2(12,12) corrected approach is essential in order to obtain a reasonable potential energy surface that corresponds to the observed decomposition behavior of these molecules. Apparently, highly excited states are essential for an accurate representation of the kinetics and dynamics of excited state decomposition of both of these bisfuroxan energetic molecules. The experimental vibrational temperatures of NO products of A and B are about 800-1000 K lower than previously studied energetic molecules with NO as a decomposition product.

  16. ADVANCED OXIDATION: OXALATE DECOMPOSITION TESTING WITH OZONE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ketusky, E.; Subramanian, K.

    At the Savannah River Site (SRS), oxalic acid is currently considered the preferred agent for chemically cleaning the large underground Liquid Radioactive Waste Tanks. It is applied only in the final stages of emptying a tank, when generally less than 5,000 kg of waste solids remain and slurrying-based removal methods are no longer effective. The use of oxalic acid is preferred because of its combined dissolution and chelating properties, as well as the fact that corrosion of the carbon steel tank walls can be controlled. Although oxalic acid is the preferred agent, there are significant potential downstream impacts. Impacts include: (1) degraded evaporator operation; (2) resultant oxalate precipitates taking away critically needed operating volume; and (3) eventual creation of significant volumes of additional feed to salt processing. As an alternative to dealing with the downstream impacts, oxalate decomposition using variations of ozone-based Advanced Oxidation Processes (AOPs) was investigated. In general, AOPs use ozone or peroxide and a catalyst to create hydroxyl radicals. Hydroxyl radicals have among the highest oxidation potentials and are commonly used to decompose organics. Although oxalate is considered among the most difficult organics to decompose, the ability of hydroxyl radicals to decompose oxalate is considered to be well demonstrated. In addition, as AOPs are considered to be 'green', their use enables any net chemical additions to the waste to be minimized. In order to test the ability to decompose the oxalate and determine the decomposition rates, a test rig was designed in which 10 vol% ozone would be educted into a spent oxalic acid decomposition loop, with the loop maintained at 70 °C and recirculated at 40 L/min. Each of the spent oxalic acid streams would be created from three oxalic acid strikes of an F-area simulant (i.e., Purex = high Fe/Al concentration) and an H-area simulant (i.e., H-area modified Purex = high Al/Fe concentration) after nearing dissolution equilibrium, and then decomposed to ≤ 100 parts per million (ppm) oxalate. Since AOP technology largely originated on using ultraviolet (UV) light as a primary catalyst, decomposition of the spent oxalic acid, well exposed to a medium pressure mercury vapor light, was considered the benchmark. However, with multi-valent metals already contained in the feed, and maintenance of the UV light a concern, testing was conducted to evaluate the impact of removing the UV light. Using current AOP terminology, the test without the UV light would likely be considered an ozone-based, dark, ferrioxalate-type decomposition process. Specifically, as part of the testing, the impacts of the following were investigated: (1) the importance of the UV light on the decomposition rates when decomposing 1 wt% spent oxalic acid; (2) the impact of increasing the oxalic acid strength from 1 to 2.5 wt% on the decomposition rates; and (3) for F-area testing, the advantage of increasing the spent oxalic acid flowrate from 40 L/min to 50 L/min during decomposition of the 2.5 wt% spent oxalic acid. The results showed that removal of the UV light (from 1 wt% testing) slowed the decomposition rates in both the F and H testing. Specifically, for F-Area Strike 1, the time increased from about 6 hours to 8 hours. In H-Area, the impact was not as significant, with the time required for Strike 1 to be decomposed to less than 100 ppm increasing slightly, from 5.4 to 6.4 hours.
For all of the spent 2.5 wt% oxalic acid decomposition tests without the UV light, the F-Area decompositions required approximately 10 to 13 hours, while the corresponding H-Area decomposition times ranged from 10 to 21 hours. For the 2.5 wt% F-Area sludge, the increased availability of iron likely caused the increased decomposition rates compared to the 1 wt% oxalic acid based tests. In addition, for the F-Area testing, increasing the recirculation flow rate from 40 liters/minute to 50 liters/minute resulted in an increased decomposition rate, suggesting a better use of the ozone.

  17. Parallelization of PANDA discrete ordinates code using spatial decomposition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Humbert, P.

    2006-07-01

    We present the parallel method, based on spatial domain decomposition, implemented in the 2D and 3D versions of the discrete ordinates code PANDA. The spatial mesh is orthogonal and the spatial domain decomposition is Cartesian. For 3D problems a 3D Cartesian domain topology is created and the parallel method is based on a domain diagonal plane ordered sweep algorithm. The parallel efficiency of the method is improved by direction and octant pipelining. The implementation of the algorithm is straightforward using MPI blocking point-to-point communications. The efficiency of the method is illustrated by an application to the 3D-Ext C5G7 benchmark of the OECD/NEA. (authors)

  18. Decomposition and extraction: a new framework for visual classification.

    PubMed

    Fang, Yuqiang; Chen, Qiang; Sun, Lin; Dai, Bin; Yan, Shuicheng

    2014-08-01

    In this paper, we present a novel framework for visual classification based on hierarchical image decomposition and hybrid midlevel feature extraction. Unlike most midlevel feature learning methods, which focus on the process of coding or pooling, we emphasize that the mechanism of image composition also strongly influences feature extraction. To effectively explore the image content for feature extraction, we model a multiplicity feature representation mechanism through meaningful hierarchical image decomposition followed by a fusion step. In particular, we first propose a new hierarchical image decomposition approach in which each image is decomposed into a series of hierarchical semantic components, i.e., the structure and texture images. Then, different feature extraction schemes can be adopted to match the decomposed structure and texture processes in a dissociative manner. Here, two schemes are explored to produce property-related feature representations. One is based on a single-stage network over hand-crafted features and the other is based on a multistage network, which can learn features from raw pixels automatically. Finally, these multiple midlevel features are integrated by solving a multiple kernel learning task. Extensive experiments are conducted on several challenging data sets for visual classification, and the experimental results demonstrate the effectiveness of the proposed method.

  19. Nutrient availability controls the decomposition activities of the ectomycorrhizal fungi Paxillus involutus and Laccaria bicolor

    NASA Astrophysics Data System (ADS)

    Nicolás, César; Martin-Bertelsen, Tomas; Bentzer, Johan; Johansson, Tomas; Smits, Mark; Troein, Carl; Persson, Per; Tunlid, Anders

    2017-04-01

    Ectomycorrhizal (ECM) fungi play an important role in the ecological sustainability of northern temperate and boreal forests by foraging and mining soil organic matter for nutrients for their host plants. In this process, the fungal partner provides the plant host with nutrients and receives carbon in return, which supports the growth of the extramatrical mycelium. Here, we examine the chemical changes in soil organic matter (SOM) and the physiological response of two species of ECM fungi, Paxillus involutus and Laccaria bicolor, during the decomposition of SOM and the utilization of glucose. These two ECM fungi were grown in axenic cultures containing a water extract of organic matter (WEOM), which was supplemented with glucose at the start of the experiment. The fungi then went through two phases: a decomposition phase characterized by a WEOM with glucose, followed by a starvation phase with no glucose left in the medium. The chemical modifications in the WEOM were followed using techniques such as infrared and X-ray absorption spectroscopy, while the fungal physiological response was studied using transcriptomic (RNAseq) analysis. The spectroscopic techniques showed that both fungi increased the amount of oxidized compounds while taking up glucose or nitrogen from the medium. In the case of P. involutus, this oxidation process was more pronounced than that occurring with L. bicolor. In addition, X-ray absorption spectroscopy showed a higher reduced-iron content in WEOM incubated with P. involutus than with L. bicolor, which may suggest a preference of P. involutus for oxidative mechanisms via Fenton chemistry. During the decomposition phase, both fungi expressed a large number of transcripts encoding proteins associated with the oxidation of lignocellulose in wood-decomposing fungi. In parallel, the expression levels of extracellular peptidases, and of enzymes involved in the metabolism of amino acids and assimilated glucose, were regulated. However, during prolonged starvation, transcripts encoding extracellular enzymes such as peptidases and laccases were upregulated concomitantly with transporters and metabolic enzymes, which suggests that some of the released cellular material was re-assimilated by the mycelium. These results show the concomitant changes in gene expression of the ECM fungi and in nutrient availability in the WEOM, and demonstrate that the combination of transcriptomic and spectroscopic techniques is a useful tool to better understand the decomposition process in soil.

  20. Legume presence reduces the decomposition rate of non-legume roots, role of plant traits?

    NASA Astrophysics Data System (ADS)

    De Deyn, Gerlinde B.; Saar, Sirgi; Barel, Janna; Semchenko, Marina

    2016-04-01

    Plant litter traits are known to play an important role in the rate of litter decomposition and mineralization, both for aboveground and belowground litter. However, the biotic and abiotic environment in which the litter decomposes also plays a significant role in the rate of decomposition. The presence of living plants may accelerate litter decomposition rates via priming effects, and the size of this effect is expected to be related to the traits of the litter. In this study we focus on root litter, given that roots and their link to ecosystem processes have received relatively little attention in trait-based research. To test the effect of a growing legume plant on root decomposition, and the role of root traits in this, we used dead roots of 7 different grassland species (comprising grasses, a forb and legumes), determined their C, N and P content, and quantified litter mass loss after eight weeks of incubation in soil with and without white clover. We expected faster root decomposition with white clover, especially for root litter with low N content. In contrast, we found slower decomposition of grass and forb roots, which were poor in N (negative priming), in the presence of white clover, while decomposition rates of legume roots were not affected by the presence of white clover. Overall, we found that root decomposition can be slowed down in the presence of a living plant and that this effect depends on the traits of the decomposing roots, with a pronounced reduction for root litter poor in N and P, but not for the relatively nutrient-rich legume root litters. The negative priming effect of legume plants on non-legume litter decomposition may have resulted from preferential substrate utilisation by soil microbes.

  1. Structural system identification based on variational mode decomposition

    NASA Astrophysics Data System (ADS)

    Bagheri, Abdollah; Ozbulut, Osman E.; Harris, Devin K.

    2018-03-01

    In this paper, a new structural identification method is proposed to identify the modal properties of engineering structures based on dynamic response decomposition using the variational mode decomposition (VMD). The VMD approach is a decomposition algorithm that has been developed as a means to overcome some of the drawbacks and limitations of the empirical mode decomposition method. The VMD-based modal identification algorithm decomposes the acceleration signal into a series of distinct modal responses and their respective center frequencies, such that when combined their cumulative modal responses reproduce the original acceleration response. The decaying amplitude of the extracted modal responses is then used to identify the modal damping ratios using a linear fitting function on the modal response data. Finally, after extracting modal responses from the available sensors, the mode shape vector for each of the decomposed modes in the system is identified from all of the obtained modal response data. To demonstrate the efficiency of the algorithm, a series of numerical, laboratory, and field case studies were evaluated. The laboratory case study utilized the vibration response of a three-story shear frame, whereas the field study leveraged the ambient vibration response of a pedestrian bridge to characterize the modal properties of the structure. The modal properties of the shear frame were also computed using an analytical approach for comparison with the experimental modal frequencies. Results from these case studies demonstrated that the proposed method is efficient and accurate in identifying the modal data of the structures.
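    The damping-identification step described above can be illustrated independently of any particular VMD implementation. In the sketch below (a simplified stand-in: the decomposed modal response is simulated as a free-decay oscillation, and the sampling rate and modal parameters are hypothetical), the damping ratio follows from a linear fit to the logarithm of the amplitude envelope and the frequency from the slope of the instantaneous phase.

```python
import numpy as np
from scipy.signal import hilbert

fs = 200.0                              # sampling rate (Hz), assumed
t = np.arange(0.0, 10.0, 1.0 / fs)
f_n, zeta = 2.5, 0.02                   # simulated modal frequency (Hz) and damping ratio
mode = np.exp(-zeta * 2 * np.pi * f_n * t) * np.cos(
    2 * np.pi * f_n * np.sqrt(1 - zeta**2) * t)

analytic = hilbert(mode)                # analytic signal of the decomposed modal response
envelope = np.abs(analytic)
phase = np.unwrap(np.angle(analytic))

omega_d = np.polyfit(t, phase, 1)[0]            # damped natural frequency (rad/s)
sigma = -np.polyfit(t, np.log(envelope), 1)[0]  # decay rate from the log-envelope fit
omega_n = np.sqrt(omega_d**2 + sigma**2)
zeta_id = sigma / omega_n

print(f"identified f_n = {omega_n / (2 * np.pi):.3f} Hz, zeta = {zeta_id:.4f}")
```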

  2. Intelligent diagnosis of short hydraulic signal based on improved EEMD and SVM with few low-dimensional training samples

    NASA Astrophysics Data System (ADS)

    Zhang, Meijun; Tang, Jian; Zhang, Xiaoming; Zhang, Jiaojiao

    2016-03-01

    The highly accurate classification ability of an intelligent diagnosis method often requires a large number of training samples with high-dimensional eigenvectors, and the characteristics of the signal need to be extracted accurately. Although the existing EMD (empirical mode decomposition) and EEMD (ensemble empirical mode decomposition) are suitable for processing non-stationary and non-linear signals, their decomposition accuracy becomes very poor when a short signal, such as a hydraulic impact signal, is concerned. An improved EEMD is proposed specifically for short hydraulic impact signals. The improvements of this new EEMD are mainly reflected in four aspects: self-adaptive de-noising based on EEMD, signal extension based on SVM (support vector machine), extreme center fitting based on cubic spline interpolation, and pseudo-component exclusion based on cross-correlation analysis. After the energy eigenvector is extracted from the result of the improved EEMD, fault pattern recognition based on SVM with a small number of low-dimensional training samples is studied. Finally, the diagnosis ability of the improved EEMD+SVM method is compared with that of the EEMD+SVM and EMD+SVM methods, and its diagnosis accuracy is distinctly higher than that of the other two methods, whether the dimension of the eigenvectors is low or high. The improved EEMD is well suited to the decomposition of short signals, such as hydraulic impact signals, and its combination with SVM has a high ability to diagnose hydraulic impact faults.
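    The last two steps, building the energy eigenvector and classifying it with an SVM, can be sketched as follows (a generic illustration under the assumption that IMFs are already available from some EEMD implementation; the toy data and labels are hypothetical).

```python
import numpy as np
from sklearn.svm import SVC

def energy_eigenvector(imfs):
    """Normalized energy of each IMF: a low-dimensional feature vector."""
    energies = np.array([np.sum(imf**2) for imf in imfs])
    return energies / energies.sum()

rng = np.random.default_rng(0)

def fake_imfs(fault):
    """Stand-in for the IMFs of a decomposed hydraulic impact signal."""
    imfs = [rng.standard_normal(512) / (k + 1) for k in range(5)]
    if fault:
        imfs[1] = 4.0 * imfs[1]        # a fault shifts energy towards the second IMF
    return imfs

# Few low-dimensional training samples, as in the paper's setting
X = np.array([energy_eigenvector(fake_imfs(label)) for label in (0, 0, 0, 1, 1, 1)])
y = np.array([0, 0, 0, 1, 1, 1])

clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X, y)
print(clf.predict([energy_eigenvector(fake_imfs(1))]))   # expected: fault class 1
```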

  3. Velocity boundary conditions for vorticity formulations of the incompressible Navier-Stokes equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kempka, S.N.; Strickland, J.H.; Glass, M.W.

    1995-04-01

    A formulation to satisfy velocity boundary conditions for the vorticity form of the incompressible, viscous fluid momentum equations is presented. The tangential and normal components of the velocity boundary condition are satisfied simultaneously by creating vorticity adjacent to boundaries. The newly created vorticity is determined using a kinematical formulation which is a generalization of Helmholtz's decomposition of a vector field. Though it has not been generally recognized, these formulations resolve the over-specification issue associated with creating vorticity to satisfy velocity boundary conditions. The generalized decomposition has not been widely used, apparently due to a lack of a useful physical interpretation. An analysis is presented which shows that the generalized decomposition has a relatively simple physical interpretation which facilitates its numerical implementation. The implementation of the generalized decomposition is discussed in detail. As an example, the flow in a two-dimensional lid-driven cavity is simulated. The solution technique is based on a Lagrangian transport algorithm in the hydrocode ALEGRA. ALEGRA's Lagrangian transport algorithm has been modified to solve the vorticity transport equation and the generalized decomposition, thus providing a new, accurate method to simulate incompressible flows. This numerical implementation and the new boundary condition formulation allow vorticity-based formulations to be used in a wider range of engineering problems.

  4. Limited-memory adaptive snapshot selection for proper orthogonal decomposition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oxberry, Geoffrey M.; Kostova-Vassilevska, Tanya; Arrighi, Bill

    2015-04-02

    Reduced order models are useful for accelerating simulations in many-query contexts, such as optimization, uncertainty quantification, and sensitivity analysis. However, offline training of reduced order models can have prohibitively expensive memory and floating-point operation costs in high-performance computing applications, where memory per core is limited. To overcome this limitation for proper orthogonal decomposition, we propose a novel adaptive selection method for snapshots in time that limits offline training costs by selecting snapshots according to an error control mechanism similar to that found in adaptive time-stepping ordinary differential equation solvers. The error estimator used in this work is related to theory bounding the approximation error in time of proper orthogonal decomposition-based reduced order models, and memory usage is minimized by computing the singular value decomposition using a single-pass incremental algorithm. Results for a viscous Burgers' test problem demonstrate convergence in the limit as the algorithm error tolerances go to zero; in this limit, the full order model is recovered to within discretization error. The resulting method can be used on supercomputers to generate proper orthogonal decomposition-based reduced order models, or as a subroutine within hyperreduction algorithms that require taking snapshots in time, or within greedy algorithms for sampling parameter space.
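    The selection idea can be illustrated with a deliberately simplified sketch (it rebuilds the basis with a full SVD instead of the single-pass incremental algorithm, and the error indicator is a plain projection error rather than the report's estimator): a snapshot is kept only when the current POD basis fails to represent it to within a tolerance.

```python
import numpy as np

def adaptive_snapshot_selection(states, tol=1e-3, max_modes=20):
    """Keep snapshots whose relative projection error onto the current POD
    basis exceeds tol, refreshing the basis whenever a snapshot is added."""
    selected = [states[0]]
    basis = states[0][:, None] / np.linalg.norm(states[0])
    for u in states[1:]:
        err = np.linalg.norm(u - basis @ (basis.T @ u)) / np.linalg.norm(u)
        if err > tol:
            selected.append(u)
            U, _, _ = np.linalg.svd(np.column_stack(selected), full_matrices=False)
            basis = U[:, :max_modes]
        # otherwise the snapshot is discarded: the basis already represents it
    return np.column_stack(selected), basis

# Hypothetical time history: a slowly rotating mixture of two spatial profiles
x = np.linspace(0.0, 1.0, 200)
states = [np.sin(np.pi * x) * np.cos(0.1 * k) + np.sin(2 * np.pi * x) * np.sin(0.1 * k)
          for k in range(100)]
snapshots, pod_basis = adaptive_snapshot_selection(states)
print(f"kept {snapshots.shape[1]} of {len(states)} snapshots")
```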

  5. Thermal decomposition of condensed-phase nitromethane from molecular dynamics from ReaxFF reactive dynamics.

    PubMed

    Han, Si-ping; van Duin, Adri C T; Goddard, William A; Strachan, Alejandro

    2011-05-26

    We studied the thermal decomposition and subsequent reaction of the energetic material nitromethane (CH(3)NO(2)) using molecular dynamics with ReaxFF, a first principles-based reactive force field. We characterize the chemistry of liquid and solid nitromethane at high temperatures (2000-3000 K) and a density of 1.97 g/cm(3) for times up to 200 ps. At T = 3000 K the first reaction in the decomposition of nitromethane is an intermolecular proton transfer leading to CH(3)NOOH and CH(2)NO(2). For lower temperatures (T = 2500 and 2000 K) the first reaction during decomposition is often an isomerization reaction involving the scission of the C-N bond and the formation of a C-O bond to form methyl nitrite (CH(3)ONO). Also at very early times we observe intramolecular proton transfer events. The main product of these reactions is H(2)O, which starts forming following those initiation steps. The appearance of H(2)O marks the beginning of the exothermic chemistry. Recent quantum-mechanics-based molecular dynamics simulations of the chemical reactions and time scales for decomposition of a crystalline sample heated to T = 3000 K for a few picoseconds are in excellent agreement with our results, providing an important, direct validation of ReaxFF.

  6. Energetic contaminants inhibit plant litter decomposition in soil.

    PubMed

    Kuperman, Roman G; Checkai, Ronald T; Simini, Michael; Sunahara, Geoffrey I; Hawari, Jalal

    2018-05-30

    The individual effects of the nitrogen-based energetic materials (EMs) 2,4-dinitrotoluene (2,4-DNT), 2-amino-4,6-dinitrotoluene (2-ADNT), 4-amino-2,6-dinitrotoluene (4-ADNT), nitroglycerin (NG), and 2,4,6,8,10,12-hexanitrohexaazaisowurtzitane (CL-20) on litter decomposition, an essential biologically-mediated soil process, were assessed using Orchard grass (Dactylis glomerata) straw in Sassafras sandy loam (SSL) soil, which has physicochemical characteristics that support "very high" qualitative relative bioavailability for organic chemicals. Batches of SSL soil were separately amended with individual EMs or an acetone carrier control. To quantify the decomposition rates, one straw cluster was harvested from a set of randomly selected replicate containers within each treatment after 1, 2, 3, 4, 6, and 8 months of exposure. Results showed that soil amended with 2,4-DNT or NG inhibited litter decomposition rates, with median effective concentration (EC50) values of 1122 mg/kg and 860 mg/kg, respectively. Exposure to 2-ADNT-, 4-ADNT- or CL-20-amended soil did not significantly affect litter decomposition in SSL soil at ≥ 10,000 mg/kg. These ecotoxicological data will be helpful in identifying concentrations of EMs in soil that present an acceptable ecological risk for biologically-mediated soil processes. Published by Elsevier Inc.

  7. A parallel domain decomposition-based implicit method for the Cahn–Hilliard–Cook phase-field equation in 3D

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zheng, Xiang; Yang, Chao; State Key Laboratory of Computer Science, Chinese Academy of Sciences, Beijing 100190

    2015-03-15

    We present a numerical algorithm for simulating the spinodal decomposition described by the three dimensional Cahn–Hilliard–Cook (CHC) equation, which is a fourth-order stochastic partial differential equation with a noise term. The equation is discretized in space and time based on a fully implicit, cell-centered finite difference scheme, with an adaptive time-stepping strategy designed to accelerate the progress to equilibrium. At each time step, a parallel Newton–Krylov–Schwarz algorithm is used to solve the nonlinear system. We discuss various numerical and computational challenges associated with the method. The numerical scheme is validated by a comparison with an explicit scheme of high accuracy (and unreasonably high cost). We present steady state solutions of the CHC equation in two and three dimensions. The effect of the thermal fluctuation on the spinodal decomposition process is studied. We show that the existence of the thermal fluctuation accelerates the spinodal decomposition process and that the final steady morphology is sensitive to the stochastic noise. We also show the evolution of the energies and statistical moments. In terms of the parallel performance, it is found that the implicit domain decomposition approach scales well on supercomputers with a large number of processors.
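    For orientation, one common dimensionless form of the equation being solved is (notation generic and possibly different from the paper's) $$\frac{\partial \phi}{\partial t} = \nabla^2\!\left(\phi^3 - \phi - \epsilon^2 \nabla^2 \phi\right) + \nabla\cdot\boldsymbol{\xi},$$ where $\phi$ is the order parameter (concentration), $\epsilon$ sets the interface width, and $\boldsymbol{\xi}$ is a Gaussian random flux whose amplitude is tied to temperature through the fluctuation-dissipation relation; dropping the noise term recovers the deterministic Cahn–Hilliard equation.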

  8. Selective NOx Recirculation for Stationary Lean-Burn Natural Gas Engines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nigel Clark; Gregory Thompson; Richard Atkinson

    Selective NOx Recirculation (SNR) involves cooling the engine exhaust gas and then adsorbing the oxides of nitrogen (NOx) from the exhaust stream, followed by the periodic desorption of NOx. By returning the desorbed, concentrated NOx into the engine intake and through the combustion chamber, a percentage of the NOx is decomposed during the combustion process. An initial study of NOx decomposition during lean-burn combustion was concluded in 2004 using a 1993 Cummins L10G 240 hp natural gas engine. It was observed that the air/fuel ratio, injected NO (nitric oxide) quantity and engine operating points affected NOx decomposition rates of the engine. Chemical kinetic modeling results were also used to determine optimum NOx decomposition operating points and were published in the 2004 annual report. A NOx decomposition rate of 27% was measured from this engine under lean-burn conditions while the software model predicted between 35-42% NOx decomposition for similar conditions. A later technology 1998 Cummins L10G 280 hp natural gas engine was procured with the assistance of Cummins Inc. to replace the previous engine used for 2005 experimental research. The new engine was equipped with an electronic fuel management system with closed-loop control that provided more stable air/fuel ratio control and improved the repeatability of the tests. The engine was instrumented with an in-cylinder pressure measurement system and electronic controls, and was adapted to operate over a range of air/fuel ratios. The engine was connected to a newly commissioned 300 hp alternating current (AC) motoring dynamometer. The second experimental campaign was performed to acquire both stoichiometric and slightly rich (0.97 lambda ratio) burn NOx decomposition rates. Effects of engine load and speed on decomposition were quantified, but Exhaust Gas Recirculation (EGR) was not varied independently. Decomposition rates of up to 92% were demonstrated. Following recommendations at the 2004 ARES peer review meeting at Argonne National Laboratories, in-cylinder pressure was measured to calculate engine indicated mean effective pressure (IMEP) changes due to NOx injections and EGR variations, and to observe conditions in the cylinder. The third experimental campaign gathered NOx decomposition data at 800, 1200 and 1800 rpm. EGR was added via an external loop, with EGR ranging from zero to the point of misfire. The air/fuel ratio was set at both stoichiometric and slightly rich conditions, and NOx decomposition rates were calculated for each set of runs. Modifications were made to the engine exhaust manifold to record individual exhaust temperatures. The three experimental campaigns have provided the data needed for a comprehensive model of NOx decomposition during the combustion process, and the data have confirmed that there was no significant impact of injected NO on in-cylinder pressure. The NOx adsorption system provided by Sorbent Technologies Corp. (Twinsburg, OH) comprised a NOx adsorber, a heat exchanger and a demister. These components were connected to the engine, and data were gathered to show both the adsorption of NOx from the engine, and the desorption of NOx from the carbon-based sorbent material back into the engine intake, using a heated air stream. In order to quantify the NOx adsorption/desorption characteristics of the sorbent material, a bench-top adsorption system was constructed and instrumented with thermocouples, and the system output was fed into a NOx analyzer.
The temperature of this apparatus was controlled while gathering data on the characteristics of the sorbent material. These data were required for development of a system model. Preliminary data were gathered in 2005, and data collection will continue in early 2006. To assess the economic benefits of the proposed SNR technology, the WVU research team was joined in the last quarter by Dr. Richard Turton (WVU-Chemical Engineering), who is modeling, sizing and costing the major components. The tasks will address modeling and preliminary design of the heat exchanger, demister and NOx sorbent chamber suitable for a given engine. A simplified linear driving force model was developed to predict NOx adsorption into the sorbent material as cooled exhaust passes over fresh sorbent material. This aspect of the research will continue into 2006, and the benefits and challenges of SNR will be compared with those of competing systems, such as Selective Catalytic Reduction. Chemical kinetic modeling using the CHEMKIN software package was extended in 2005 to the case of slightly rich burn with EGR. Simulations were performed with 10%, 20%, 30% and 40% of the intake air replaced with EGR. NOx decomposition efficiency was calculated at the point in time where 98% of the fuel was consumed, which is believed to be a conservative approach. The modeling data show that reductions of over 70% are possible using the "98% fuel burned" assumption.
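    The simplified linear driving force model mentioned above is conventionally written as (symbols chosen here for illustration) $$\frac{dq}{dt} = k_{\mathrm{LDF}}\left(q^{*} - q\right),$$ where $q$ is the NOx loading on the sorbent, $q^{*}$ is the equilibrium loading at the local gas-phase NOx concentration (taken from an adsorption isotherm), and $k_{\mathrm{LDF}}$ is a lumped mass-transfer coefficient fitted to breakthrough data.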

  9. Decomposition of terrestrial resource subsidies in headwater streams: Does consumer diversity matter?

    Treesearch

    David Stoker; Amber J. Falkner; Kelly M. Murray; Ashley K. Lang; Thomas R. Barnum; Jeffrey Hepinstall-Cymerman; Michael J. Conroy; Robert J. Cooper; Catherine M. Pringle

    2017-01-01

    Resource subsidies and biodiversity are essential for maintaining community structure and ecosystem functioning, but the relative importance of consumer diversity and resource characteristics to decomposition remains unclear. Forested headwater streams are detritus-based systems, dependent on leaf litter inputs from adjacent riparian ecosystems, and...

  10. THE SPITZER SURVEY OF STELLAR STRUCTURE IN GALAXIES (S4G): MULTI-COMPONENT DECOMPOSITION STRATEGIES AND DATA RELEASE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Salo, Heikki; Laurikainen, Eija; Laine, Jarkko

    The Spitzer Survey of Stellar Structure in Galaxies (S4G) is a deep 3.6 and 4.5 μm imaging survey of 2352 nearby (<40 Mpc) galaxies. We describe the S4G data analysis pipeline 4, which is dedicated to two-dimensional structural surface brightness decompositions of 3.6 μm images, using GALFIT3.0. Besides automatic 1-component Sérsic fits and 2-component Sérsic bulge + exponential disk fits, we present human-supervised multi-component decompositions, which include, when judged appropriate, a central point source, bulge, disk, and bar components. Comparison of the fitted parameters indicates that multi-component models are needed to obtain reliable estimates for the bulge Sérsic index and bulge-to-total light ratio (B/T), confirming earlier results. Here, we describe the preparations of input data done for the decompositions, give examples of our decomposition strategy, and describe the data products released via IRSA and via our web page (www.oulu.fi/astronomy/S4G-PIPELINE4/MAIN). These products include all the input data and decomposition files in electronic form, making it easy to extend the decompositions to suit specific science purposes. We also provide our IDL-based visualization tools (GALFIDL) developed for displaying/running GALFIT decompositions, as well as our mask editing procedure (MASK-EDIT) used in data preparation. A detailed analysis of the bulge, disk, and bar parameters derived from the multi-component decompositions will be published separately.
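    For context, the functional forms behind the bulge + disk fits referred to above are the standard Sérsic and exponential profiles (textbook definitions, not taken from the data release): $$I_{\mathrm{bulge}}(r) = I_e \exp\!\left\{-b_n\!\left[\left(\frac{r}{r_e}\right)^{1/n} - 1\right]\right\}, \qquad I_{\mathrm{disk}}(r) = I_0 \exp\!\left(-\frac{r}{h_r}\right),$$ where $r_e$ is the effective (half-light) radius, $I_e$ the surface brightness at $r_e$, $n$ the Sérsic index, $b_n$ a constant chosen so that $r_e$ encloses half of the total light, $I_0$ the central surface brightness of the disk, and $h_r$ the disk scale length.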

  11. Gas Pressure Monitored Iodide-Catalyzed Decomposition Kinetics of H₂O₂: Initial-Rate and Integrated-Rate Methods in the General Chemistry Lab

    ERIC Educational Resources Information Center

    Nyasulu, Frazier; Barlag, Rebecca

    2010-01-01

    The reaction kinetics of the iodide-catalyzed decomposition of H₂O₂ using the integrated-rate method is described. The method is based on the measurement of the total gas pressure using a datalogger and pressure sensor. This is a modification of a previously reported experiment based on the initial-rate approach. (Contains 2…

  12. Systems-based decomposition schemes for the approximate solution of multi-term fractional differential equations

    NASA Astrophysics Data System (ADS)

    Ford, Neville J.; Connolly, Joseph A.

    2009-07-01

    We give a comparison of the efficiency of three alternative decomposition schemes for the approximate solution of multi-term fractional differential equations using the Caputo form of the fractional derivative. The schemes we compare are based on conversion of the original problem into a system of equations. We review alternative approaches and consider how the most appropriate numerical scheme may be chosen to solve a particular equation.

  13. Tissue artifact removal from respiratory signals based on empirical mode decomposition.

    PubMed

    Liu, Shaopeng; Gao, Robert X; John, Dinesh; Staudenmayer, John; Freedson, Patty

    2013-05-01

    On-line measurement of respiration plays an important role in monitoring human physical activities. Such measurement commonly employs sensing belts secured around the rib cage and abdomen of the test object. Affected by the movement of body tissues, respiratory signals typically have a low signal-to-noise ratio. Removing tissue artifacts therefore is critical to ensuring effective respiration analysis. This paper presents a signal decomposition technique for tissue artifact removal from respiratory signals, based on the empirical mode decomposition (EMD). An algorithm based on the mutual information and power criteria was devised to automatically select appropriate intrinsic mode functions for tissue artifact removal and respiratory signal reconstruction. Performance of the EMD-algorithm was evaluated through simulations and real-life experiments (N = 105). Comparison with low-pass filtering that has been conventionally applied confirmed the effectiveness of the technique in tissue artifacts removal.

  14. Fast polar decomposition of an arbitrary matrix

    NASA Technical Reports Server (NTRS)

    Higham, Nicholas J.; Schreiber, Robert S.

    1988-01-01

    The polar decomposition of an m x n matrix A of full rank, where m is greater than or equal to n, can be computed using a quadratically convergent algorithm. The algorithm is based on a Newton iteration involving a matrix inverse. With the use of a preliminary complete orthogonal decomposition the algorithm can be extended to arbitrary A. How to use the algorithm to compute the positive semi-definite square root of a Hermitian positive semi-definite matrix is described. A hybrid algorithm which adaptively switches from the matrix inversion based iteration to a matrix multiplication based iteration due to Kovarik, and to Bjorck and Bowie is formulated. The decision when to switch is made using a condition estimator. This matrix multiplication rich algorithm is shown to be more efficient on machines for which matrix multiplication can be executed 1.5 times faster than matrix inversion.
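
    A minimal NumPy sketch of the Newton iteration described above, restricted for simplicity to a square nonsingular matrix (the preliminary complete orthogonal decomposition for rectangular or rank-deficient A, and the hybrid switch to the multiplication-rich iteration, are omitted). The Frobenius-norm scaling factor is one common acceleration choice, not necessarily the paper's.

    ```python
    import numpy as np

    def polar_newton(A, tol=1e-12, max_iter=100):
        """Polar decomposition A = U @ H of a square nonsingular matrix A
        via the Newton iteration X_{k+1} = (g * X_k + X_k^{-T} / g) / 2."""
        X = np.array(A, dtype=float)
        for _ in range(max_iter):
            Xinv_T = np.linalg.inv(X).T
            # Frobenius-norm scaling accelerates convergence
            g = np.sqrt(np.linalg.norm(Xinv_T, "fro") / np.linalg.norm(X, "fro"))
            X_next = 0.5 * (g * X + Xinv_T / g)
            if np.linalg.norm(X_next - X, "fro") <= tol * np.linalg.norm(X_next, "fro"):
                X = X_next
                break
            X = X_next
        U = X                                   # orthogonal polar factor
        H = 0.5 * (U.T @ A + (U.T @ A).T)       # symmetric positive semi-definite factor
        return U, H

    # A = np.random.rand(4, 4); U, H = polar_newton(A)
    # np.allclose(U @ H, A) and np.allclose(U.T @ U, np.eye(4)) should both hold.
    ```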

  15. A Graph Based Backtracking Algorithm for Solving General CSPs

    NASA Technical Reports Server (NTRS)

    Pang, Wanlin; Goodwin, Scott D.

    2003-01-01

    Many AI tasks can be formalized as constraint satisfaction problems (CSPs), which involve finding values for variables subject to constraints. While solving a CSP is an NP-complete task in general, tractable classes of CSPs have been identified based on the structure of the underlying constraint graphs. Much effort has been spent on exploiting structural properties of the constraint graph to improve the efficiency of finding a solution. These efforts contributed to development of a class of CSP solving algorithms called decomposition algorithms. The strength of CSP decomposition is that its worst-case complexity depends on the structural properties of the constraint graph and is usually better than the worst-case complexity of search methods. Its practical application is limited, however, since it cannot be applied if the CSP is not decomposable. In this paper, we propose a graph based backtracking algorithm called omega-CDBT, which shares merits and overcomes the weaknesses of both decomposition and search approaches.
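
    The omega-CDBT algorithm itself is not spelled out in the abstract; as a point of reference, a plain chronological backtracking search over binary constraints might look like the following sketch (the variable ordering and constraint encoding are illustrative assumptions).

    ```python
    def backtracking_search(variables, domains, constraints):
        """Chronological backtracking for a binary CSP.
        `constraints` maps an ordered pair (u, v) to a predicate taking
        (value_of_u, value_of_v); each constraint is stored under both orderings."""
        def consistent(var, value, assignment):
            return all(pred(value, assignment[other])
                       for (u, other), pred in constraints.items()
                       if u == var and other in assignment)

        def backtrack(assignment):
            if len(assignment) == len(variables):
                return dict(assignment)
            var = next(v for v in variables if v not in assignment)
            for value in domains[var]:
                if consistent(var, value, assignment):
                    assignment[var] = value
                    result = backtrack(assignment)
                    if result is not None:
                        return result
                    del assignment[var]
            return None            # no consistent value: backtrack

        return backtrack({})
    ```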

  16. Automatic single-image-based rain streaks removal via image decomposition.

    PubMed

    Kang, Li-Wei; Lin, Chia-Wen; Fu, Yu-Hsiang

    2012-04-01

    Rain removal from a video is a challenging problem and has been recently investigated extensively. Nevertheless, the problem of rain removal from a single image was rarely studied in the literature, where no temporal information among successive images can be exploited, making the problem very challenging. In this paper, we propose a single-image-based rain removal framework via properly formulating rain removal as an image decomposition problem based on morphological component analysis. Instead of directly applying a conventional image decomposition technique, the proposed method first decomposes an image into the low- and high-frequency (HF) parts using a bilateral filter. The HF part is then decomposed into a "rain component" and a "nonrain component" by performing dictionary learning and sparse coding. As a result, the rain component can be successfully removed from the image while preserving most original image details. Experimental results demonstrate the efficacy of the proposed algorithm.
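
    The first stage of the pipeline, splitting the image into low- and high-frequency parts with a bilateral filter, is straightforward to sketch; the dictionary-learning and sparse-coding stage that separates the rain component from the HF part is omitted here. The filter parameters below are illustrative assumptions, not the paper's settings.

    ```python
    import cv2
    import numpy as np

    def split_low_high(img_gray, d=9, sigma_color=75.0, sigma_space=75.0):
        """Bilateral-filter decomposition of a grayscale image (0-255 range assumed)
        into a low-frequency part and a high-frequency residual."""
        img = img_gray.astype(np.float32)
        low = cv2.bilateralFilter(img, d, sigma_color, sigma_space)
        high = img - low            # rain streaks live mostly in this HF part
        return low, high
    ```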

  17. Stabilization of the Thermal Decomposition of Poly(Propylene Carbonate) Through Copper Ion Incorporation and Use in Self-Patterning

    NASA Astrophysics Data System (ADS)

    Spencer, Todd J.; Chen, Yu-Chun; Saha, Rajarshi; Kohl, Paul A.

    2011-06-01

    Incorporation of copper ions into poly(propylene carbonate) (PPC) films cast from γ-butyrolactone (GBL), trichloroethylene (TCE) or methylene chloride (MeCl) solutions containing a photo-acid generator is shown to stabilize the PPC from thermal decomposition. Copper ions were introduced into the PPC mixtures by bringing the polymer mixture into contact with copper metal. The metal was oxidized and dissolved into the PPC mixture. The dissolved copper interferes with the decomposition mechanism of PPC, raising its decomposition temperature. Thermogravimetric analysis shows that copper ions make PPC more stable by up to 50°C. Spectroscopic analysis indicates that copper ions may stabilize terminal carboxylic acid groups, inhibiting PPC decomposition. The change in thermal stability based on PPC exposure to patterned copper substrates was used to provide a self-aligned patterning method for PPC on copper traces without the need for an additional photopatterning registration step. Thermal decomposition of PPC is then used to create air isolation regions around the copper traces. The spatial resolution of the self-patterning PPC process is limited by the lateral diffusion of the copper ions within the PPC. The concentration profiles of copper within the PPC, patterning resolution, and temperature effects on the PPC decomposition have been studied.

  18. Utilization of a balanced steady state free precession signal model for improved fat/water decomposition.

    PubMed

    Henze Bancroft, Leah C; Strigel, Roberta M; Hernando, Diego; Johnson, Kevin M; Kelcz, Frederick; Kijowski, Richard; Block, Walter F

    2016-03-01

    Chemical shift based fat/water decomposition methods such as IDEAL are frequently used in challenging imaging environments with large B0 inhomogeneity. However, they do not account for the signal modulations introduced by a balanced steady state free precession (bSSFP) acquisition. Here we demonstrate improved performance when the bSSFP frequency response is properly incorporated into the multipeak spectral fat model used in the decomposition process. Balanced SSFP allows for rapid imaging but also introduces a characteristic frequency response featuring periodic nulls and pass bands. Fat spectral components in adjacent pass bands will experience bulk phase offsets and magnitude modulations that change the expected constructive and destructive interference between the fat spectral components. A bSSFP signal model was incorporated into the fat/water decomposition process and used to generate images of a fat phantom, and bilateral breast and knee images in four normal volunteers at 1.5 Tesla. Incorporation of the bSSFP signal model into the decomposition process improved the performance of the fat/water decomposition. Incorporation of this model allows rapid bSSFP imaging sequences to use robust fat/water decomposition methods such as IDEAL. While only one set of imaging parameters were presented, the method is compatible with any field strength or repetition time. © 2015 Wiley Periodicals, Inc.

  19. A projection method for low speed flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Colella, P.; Pao, K.

    The authors propose a decomposition applicable to low speed, inviscid flows of all Mach numbers less than 1. By using the Hodge decomposition, they may write the velocity field as the sum of a divergence-free vector field and a gradient of a scalar function. Evolution equations for these parts are presented. A numerical procedure based on this decomposition is designed, using projection methods for solving the incompressible variables and a backward-Euler method for solving the potential variables. Numerical experiments are included to illustrate various aspects of the algorithm.
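
    For a periodic two-dimensional velocity field, the Hodge decomposition used above can be carried out spectrally: solve a Poisson equation for the scalar potential from the divergence and subtract its gradient. This is a minimal sketch under a periodicity assumption, not the paper's projection discretization.

    ```python
    import numpy as np

    def project_divergence_free(u, v, Lx=1.0, Ly=1.0):
        """Return the divergence-free part of a periodic 2D field (u, v) by solving
        laplacian(phi) = div(u, v) in Fourier space and subtracting grad(phi)."""
        ny, nx = u.shape
        KX = 2j * np.pi * np.fft.fftfreq(nx, d=Lx / nx)[None, :]
        KY = 2j * np.pi * np.fft.fftfreq(ny, d=Ly / ny)[:, None]
        u_hat, v_hat = np.fft.fft2(u), np.fft.fft2(v)
        div_hat = KX * u_hat + KY * v_hat
        k2 = KX**2 + KY**2              # equals -(kx^2 + ky^2)
        k2[0, 0] = 1.0                  # zero mode: potential defined up to a constant
        phi_hat = div_hat / k2
        phi_hat[0, 0] = 0.0
        u_df = np.real(np.fft.ifft2(u_hat - KX * phi_hat))
        v_df = np.real(np.fft.ifft2(v_hat - KY * phi_hat))
        return u_df, v_df
    ```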

  20. A new multivariate empirical mode decomposition method for improving the performance of SSVEP-based brain-computer interface

    NASA Astrophysics Data System (ADS)

    Chen, Yi-Feng; Atal, Kiran; Xie, Sheng-Quan; Liu, Quan

    2017-08-01

    Objective. Accurate and efficient detection of steady-state visual evoked potentials (SSVEP) in electroencephalogram (EEG) is essential for the related brain-computer interface (BCI) applications. Approach. Although the canonical correlation analysis (CCA) has been applied extensively and successfully to SSVEP recognition, the spontaneous EEG activities and artifacts that often occur during data recording can deteriorate the recognition performance. Therefore, it is meaningful to extract a few frequency sub-bands of interest to avoid or reduce the influence of unrelated brain activity and artifacts. This paper presents an improved method to detect the frequency component associated with SSVEP using multivariate empirical mode decomposition (MEMD) and CCA (MEMD-CCA). EEG signals from nine healthy volunteers were recorded to evaluate the performance of the proposed method for SSVEP recognition. Main results. We compared our method with CCA and temporally local multivariate synchronization index (TMSI). The results suggest that the MEMD-CCA achieved significantly higher accuracy in contrast to standard CCA and TMSI. It gave the improvements of 1.34%, 3.11%, 3.33%, 10.45%, 15.78%, 18.45%, 15.00% and 14.22% on average over CCA at time windows from 0.5 s to 5 s and 0.55%, 1.56%, 7.78%, 14.67%, 13.67%, 7.33% and 7.78% over TMSI from 0.75 s to 5 s. The method outperformed the filter-based decomposition (FB), empirical mode decomposition (EMD) and wavelet decomposition (WT) based CCA for SSVEP recognition. Significance. The results demonstrate the ability of our proposed MEMD-CCA to improve the performance of SSVEP-based BCI.
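
    The standard CCA step that MEMD-CCA builds on can be sketched with scikit-learn: the canonical correlation between the (possibly MEMD-filtered) multichannel EEG and sinusoidal references at each candidate stimulus frequency is computed, and the frequency with the largest correlation is selected. The MEMD sub-band selection itself is not shown, and the frequencies and sampling rate below are illustrative.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import CCA

    def cca_score(eeg, fs, freq, n_harmonics=2):
        """Canonical correlation between EEG (samples x channels) and
        sin/cos reference signals at `freq` and its harmonics."""
        t = np.arange(eeg.shape[0]) / fs
        refs = np.column_stack(
            [f(2 * np.pi * h * freq * t) for h in range(1, n_harmonics + 1)
                                         for f in (np.sin, np.cos)])
        cca = CCA(n_components=1)
        cca.fit(eeg, refs)
        U, V = cca.transform(eeg, refs)
        return np.corrcoef(U[:, 0], V[:, 0])[0, 1]

    # SSVEP recognition: pick the stimulus frequency with the highest correlation
    # target = max([8.0, 10.0, 12.0, 15.0], key=lambda f: cca_score(eeg, 250.0, f))
    ```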

  1. Highly Scalable Matching Pursuit Signal Decomposition Algorithm

    NASA Technical Reports Server (NTRS)

    Christensen, Daniel; Das, Santanu; Srivastava, Ashok N.

    2009-01-01

    Matching Pursuit Decomposition (MPD) is a powerful iterative algorithm for signal decomposition and feature extraction. MPD decomposes any signal into linear combinations of its dictionary elements, or atoms. A best-fit atom from an arbitrarily defined dictionary is determined through cross-correlation. The selected atom is subtracted from the signal and this procedure is repeated on the residual in the subsequent iterations until a stopping criterion is met. The reconstructed signal reveals the waveform structure of the original signal. However, a sufficiently large dictionary is required for an accurate reconstruction; this in turn increases the computational burden of the algorithm, thus limiting its applicability and level of adoption. The purpose of this research is to improve the scalability and performance of the classical MPD algorithm. Correlation thresholds were defined to prune insignificant atoms from the dictionary. The Coarse-Fine Grids and Multiple Atom Extraction techniques were proposed to decrease the computational burden of the algorithm. The Coarse-Fine Grids method enabled the approximation and refinement of the parameters for the best-fit atom. The ability to extract multiple atoms within a single iteration enhanced the effectiveness and efficiency of each iteration. These improvements were implemented to produce an improved Matching Pursuit Decomposition algorithm entitled MPD++. Disparate signal decomposition applications may require a particular emphasis on accuracy or computational efficiency. The prominence of the key signal features required for proper signal classification dictates the level of accuracy necessary in the decomposition. The MPD++ algorithm may be easily adapted to accommodate the imposed requirements. Certain feature extraction applications may require rapid signal decomposition. The full potential of MPD++ may be utilized to produce substantial performance gains while extracting only slightly less energy than the standard algorithm. When the utmost accuracy must be achieved, the modified algorithm extracts atoms more conservatively but still exhibits computational gains over classical MPD. The MPD++ algorithm was demonstrated using an over-complete dictionary on real-life data. Computational times were reduced by factors of 1.9 and 44 for the emphases of accuracy and performance, respectively. The modified algorithm extracted similar amounts of energy compared to classical MPD. The degree of the improvement in computational time depends on the complexity of the data, the initialization parameters, and the breadth of the dictionary. The results of the research confirm that the three modifications successfully improved the scalability and computational efficiency of the MPD algorithm. Correlation Thresholding decreased the time complexity by reducing the dictionary size. Multiple Atom Extraction also reduced the time complexity by decreasing the number of iterations required for a stopping criterion to be reached. The Coarse-Fine Grids technique enabled complicated atoms with numerous variable parameters to be effectively represented in the dictionary. Due to the nature of the three proposed modifications, they are capable of being stacked and have cumulative effects on the reduction of the time complexity.
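
    A bare-bones version of the classical MPD loop with the correlation-thresholding idea (stop extracting once the best correlation with the residual is insignificant) can be sketched as follows; the Coarse-Fine Grids and Multiple Atom Extraction refinements of MPD++ are not reproduced here.

    ```python
    import numpy as np

    def matching_pursuit(signal, dictionary, max_atoms=50, corr_threshold=1e-3):
        """Greedy matching pursuit. `dictionary` is (n_samples, n_atoms) with
        unit-norm columns; iteration stops when the best correlation drops
        below `corr_threshold` or `max_atoms` atoms have been extracted."""
        residual = np.array(signal, dtype=float)
        coeffs = np.zeros(dictionary.shape[1])
        for _ in range(max_atoms):
            corr = dictionary.T @ residual      # cross-correlation with every atom
            best = int(np.argmax(np.abs(corr)))
            if np.abs(corr[best]) < corr_threshold:
                break
            coeffs[best] += corr[best]
            residual -= corr[best] * dictionary[:, best]
        return coeffs, residual, dictionary @ coeffs   # coefficients, residual, reconstruction
    ```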

  2. The predictive power of singular value decomposition entropy for stock market dynamics

    NASA Astrophysics Data System (ADS)

    Caraiani, Petre

    2014-01-01

    We use a correlation-based approach to analyze financial data from the US stock market, both daily and monthly observations from the Dow Jones. We compute the entropy based on the singular value decomposition of the correlation matrix for the components of the Dow Jones Industrial Index. Based on a moving window, we derive time varying measures of entropy for both daily and monthly data. We find that the entropy has a predictive ability with respect to stock market dynamics as indicated by the Granger causality tests.
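
    The entropy measure described above is simple to reproduce: compute the correlation matrix of the return series inside a moving window, take its singular values, normalize them, and evaluate the Shannon entropy. The window length below is an illustrative choice, not the paper's.

    ```python
    import numpy as np

    def svd_entropy(window_returns):
        """SVD entropy of the correlation matrix of a (time x assets) return window."""
        corr = np.corrcoef(window_returns, rowvar=False)
        s = np.linalg.svd(corr, compute_uv=False)
        p = s / s.sum()                 # normalized singular values
        p = p[p > 0]
        return float(-np.sum(p * np.log(p)))

    def rolling_svd_entropy(returns, window=60):
        """Time-varying entropy over a moving window of daily (or monthly) returns."""
        return np.array([svd_entropy(returns[t - window:t])
                         for t in range(window, len(returns) + 1)])
    ```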

  3. Theoretical study of the decomposition mechanism of environmentally friendly insulating medium C3F7CN in the presence of H2O in a discharge

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaoxing; Li, Yi; Xiao, Song; Tian, Shuangshuang; Deng, Zaitao; Tang, Ju

    2017-08-01

    C3F7CN has been the focus of the alternative gas research field over the past two years because of its excellent insulation properties and environmental characteristics. Experimental studies on its insulation performance have made many achievements. However, few studies on the formation mechanism of the decomposition components exist. A discussion of the decomposition characteristics of insulating media will provide guidance for scientific experimental research and the work that must be completed before further engineering application. In this study, the decomposition mechanism of C3F7CN in the presence of trace H2O under discharge was calculated based on the density functional theory and transition state theory. The reaction heat, Gibbs free energy, and activation energy of different decomposition pathways were investigated. The ionization parameters and toxicity of C3F7CN and various decomposition products were analyzed from the molecular structure perspective. The formation mechanism of the C3F7CN discharge decomposition components and the influence of trace water were evaluated. This paper confirms that C3F7CN has excellent decomposition characteristics, which provide theoretical support for later experiments and related engineering applications. However, the existence of trace water has a negative impact on C3F7CN’s insulation performance. Thus, strict trace water content standards should be developed to ensure dielectric insulation and the safety of maintenance personnel.

  4. Soil organic matter decomposition follows plant productivity response to sea-level rise

    NASA Astrophysics Data System (ADS)

    Mueller, Peter; Jensen, Kai; Megonigal, James Patrick

    2015-04-01

    The accumulation of soil organic matter (SOM) is an important mechanism for many tidal wetlands to keep pace with sea-level rise. SOM accumulation is governed by the rates of production and decomposition of organic matter. While plant productivity responses to sea-level rise are well understood, far less is known about the response of SOM decomposition to accelerated sea-level rise. Here we quantified the effects of sea-level rise on SOM decomposition by exposing planted and unplanted tidal marsh monoliths to experimentally manipulated flood duration. The study was performed in a field-based mesocosm facility at the Smithsonian Global Change Research Wetland, a microtidal brackish marsh in Maryland, US. SOM decomposition was quantified as CO2 efflux, with plant- and SOM-derived CO2 separated using a stable carbon isotope approach. Despite the dogma that decomposition rates are inversely related to flooding, SOM mineralization was not sensitive to varying flood duration over a 35 cm range in surface elevation in unplanted mesocosms. In the presence of plants, decomposition rates were strongly and positively related to aboveground biomass (p≤0.01, R²≥0.59). We conclude that rates of soil carbon loss through decomposition are driven by plant responses to sea level in this intensively studied tidal marsh. If our result applies more generally to tidal wetlands, it has important implications for modeling carbon sequestration and marsh accretion in response to accelerated sea-level rise.

  5. Eigenvalue-eigenvector decomposition (EED) analysis of dissimilarity and covariance matrix obtained from total synchronous fluorescence spectral (TSFS) data sets of herbal preparations: Optimizing the classification approach

    NASA Astrophysics Data System (ADS)

    Tarai, Madhumita; Kumar, Keshav; Divya, O.; Bairi, Partha; Mishra, Kishor Kumar; Mishra, Ashok Kumar

    2017-09-01

    The present work compares the dissimilarity and covariance based unsupervised chemometric classification approaches by taking the total synchronous fluorescence spectroscopy data sets acquired for the cumin and non-cumin based herbal preparations. The conventional decomposition method involves eigenvalue-eigenvector analysis of the covariance of the data set and finds the factors that can explain the overall major sources of variation present in the data set. The conventional approach does this irrespective of the fact that the samples belong to intrinsically different groups and hence leads to poor class separation. The present work shows that classification of such samples can be optimized by performing the eigenvalue-eigenvector decomposition on the pair-wise dissimilarity matrix.
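
    The contrast drawn in the abstract, eigen-decomposition of the covariance matrix versus eigen-decomposition of a pair-wise dissimilarity matrix, can be illustrated generically as below. Squared Euclidean distance is used as a stand-in dissimilarity measure; the paper's actual dissimilarity metric for TSFS data may differ.

    ```python
    import numpy as np

    def covariance_scores(X, n_components=2):
        """Conventional route: eigen-decomposition of the covariance matrix."""
        Xc = X - X.mean(axis=0)
        evals, evecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
        order = np.argsort(evals)[::-1][:n_components]
        return Xc @ evecs[:, order]

    def dissimilarity_scores(X, n_components=2):
        """Alternative route: eigen-decomposition of the double-centred pair-wise
        (squared Euclidean) dissimilarity matrix, i.e. classical MDS."""
        D2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
        n = D2.shape[0]
        J = np.eye(n) - np.ones((n, n)) / n
        evals, evecs = np.linalg.eigh(-0.5 * J @ D2 @ J)
        order = np.argsort(evals)[::-1][:n_components]
        return evecs[:, order] * np.sqrt(np.maximum(evals[order], 0.0))
    ```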

  6. Decomposition rates and termite assemblage composition in semiarid Africa

    USGS Publications Warehouse

    Schuurman, G.

    2005-01-01

    Outside of the humid tropics, abiotic factors are generally considered the dominant regulators of decomposition, and biotic influences are frequently not considered in predicting decomposition rates. In this study, I examined the effect of termite assemblage composition and abundance on decomposition of wood litter of an indigenous species (Croton megalobotrys) in five terrestrial habitats of the highly seasonal semiarid Okavango Delta region of northern Botswana, to determine whether natural variation in decomposer community composition and abundance influences decomposition rates. I conducted the study in two areas, Xudum and Santawani, with the Xudum study preceding the Santawani study. I assessed termite assemblage composition and abundance using a grid of survey baits (rolls of toilet paper) placed on the soil surface and checked 2-4 times/month. I placed a billet (a section of wood litter) next to each survey bait and measured decomposition in a plot by averaging the mass loss of its billets. Decomposition rates varied up to sixfold among plots within the same habitat and locality, despite the fact that these plots experienced the same climate. In addition, billets decomposed significantly faster during the cooler and drier Santawani study, contradicting climate-based predictions. Because termite incidence was generally higher in Santawani plots, termite abundance initially seemed a likely determinant of decomposition in this system. However, no significant effect of termite incidence on billet mass loss rates was observed among the Xudum plots, where decomposition rates remained low even though termite incidence varied considerably. Considering the incidences of fungus-growing termites and non-fungus-growing termites separately resolves this apparent contradiction: in both Santawani and Xudum, only fungus-growing termites play a significant role in decomposition. This result is mirrored in an analysis of the full data set of combined Xudum and Santawani data. The determination that natural variation in the abundance of a single taxonomic group of soil fauna, a termite subfamily, determines almost all observed variation in decomposition rates supports the emerging view that biotic influences may be important in many biomes and that consideration of decomposer community composition and abundance may be critical for accurate prediction of decomposition rates. © 2005 by the Ecological Society of America.

  7. System and methods for determining masking signals for applying empirical mode decomposition (EMD) and for demodulating intrinsic mode functions obtained from application of EMD

    DOEpatents

    Senroy, Nilanjan [New Delhi, IN]; Suryanarayanan, Siddharth [Littleton, CO]

    2011-03-15

    A computer-implemented method of signal processing is provided. The method includes generating one or more masking signals based upon a computed Fourier transform of a received signal. The method further includes determining one or more intrinsic mode functions (IMFs) of the received signal by performing a masking-signal-based empirical mode decomposition (EMD) using the at least one masking signal.

  8. Ultrasonic technique for imaging tissue vibrations: preliminary results.

    PubMed

    Sikdar, Siddhartha; Beach, Kirk W; Vaezy, Shahram; Kim, Yongmin

    2005-02-01

    We propose an ultrasound (US)-based technique for imaging vibrations in the blood vessel walls and surrounding tissue caused by eddies produced during flow through narrowed or punctured arteries. Our approach is to utilize the clutter signal, normally suppressed in conventional color flow imaging, to detect and characterize local tissue vibrations. We demonstrate the feasibility of visualizing the origin and extent of vibrations relative to the underlying anatomy and blood flow in real-time and their quantitative assessment, including measurements of the amplitude, frequency and spatial distribution. We present two signal-processing algorithms, one based on phase decomposition and the other based on spectral estimation using eigen decomposition for isolating vibrations from clutter, blood flow and noise using an ensemble of US echoes. In simulation studies, the computationally efficient phase-decomposition method achieved 96% sensitivity and 98% specificity for vibration detection and was robust to broadband vibrations. Somewhat higher sensitivity (98%) and specificity (99%) could be achieved using the more computationally intensive eigen decomposition-based algorithm. Vibration amplitudes as low as 1 μm were measured accurately in phantom experiments. Real-time tissue vibration imaging at typical color-flow frame rates was implemented on a software-programmable US system. Vibrations were studied in vivo in a stenosed femoral bypass vein graft in a human subject and in a punctured femoral artery and incised spleen in an animal model.

  9. New insights into thermal decomposition of polycyclic aromatic hydrocarbon oxyradicals.

    PubMed

    Liu, Peng; Lin, He; Yang, Yang; Shao, Can; Gu, Chen; Huang, Zhen

    2014-12-04

    Thermal decompositions of polycyclic aromatic hydrocarbon (PAH) oxyradicals on various surface sites including five-membered ring, free-edge, zigzag, and armchair have been systematically investigated by using ab initio density functional theory with the B3LYP/6-311+G(d,p) basis set. The calculation based on Hückel theory indicates that PAHs (3H-cyclopenta[a]anthracene oxyradical) with oxyradicals on a five-membered ring site have high chemical reactivity. The rate coefficients of PAH oxyradical decomposition were evaluated by using Rice-Ramsperger-Kassel-Marcus theory and solving the master equations in the temperature range of 1500-2500 K and the pressure range of 0.1-10 atm. The kinetic calculations revealed that the rate coefficients of PAH oxyradical decomposition are temperature-, pressure-, and surface site-dependent, and the oxyradical on a five-membered ring is easier to decompose than that on a six-membered ring. Four-membered rings were found in decomposition of the five-membered ring, and a new reaction channel of PAH evolution involving four-membered rings is recommended.

  10. A fast new algorithm for a robot neurocontroller using inverse QR decomposition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morris, A.S.; Khemaissia, S.

    2000-01-01

    A new adaptive neural network controller for robots is presented. The controller is based on direct adaptive techniques. Unlike many neural network controllers in the literature, inverse dynamical model evaluation is not required. A numerically robust, computationally efficient processing scheme for neural network weight estimation is described, namely, the inverse QR decomposition (INVQR). The inverse QR decomposition and a weighted recursive least-squares (WRLS) method for neural network weight estimation are derived using Cholesky factorization of the data matrix. The algorithm that performs the efficient INVQR of the underlying space-time data matrix may be implemented in parallel on a triangular array. Furthermore, its systolic architecture is well suited for VLSI implementation. Another important benefit of the INVQR decomposition is that it solves directly for the time-recursive least-squares filter vector, while avoiding the sequential back-substitution step required by the QR decomposition approaches.

  11. The response of the HMX-based material PBXN-9 to thermal insults: thermal decomposition kinetics and morphological changes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Glascoe, E A; Hsu, P C; Springer, H K

    PBXN-9, an HMX formulation, is thermally damaged and thermally decomposed in order to determine the morphological changes and decomposition kinetics that occur in the material after mild to moderate heating. The material and its constituents were decomposed using standard thermal analysis techniques (DSC and TGA) and the decomposition kinetics are reported using different kinetic models. Pressed parts and prill were thermally damaged, i.e. heated to temperatures that resulted in material changes but did not result in significant decomposition or explosion, and analyzed. In general, the thermally damaged samples showed a significant increase in porosity, a decrease in density, and a small amount of weight loss. These PBXN-9 samples appear to sustain more thermal damage than similar HMX-Viton A formulations and the most likely reasons are the decomposition/evaporation of a volatile plasticizer and a polymorphic transition of the HMX from β to δ phase.

  12. Applying Novel Time-Frequency Moments Singular Value Decomposition Method and Artificial Neural Networks for Ballistocardiography

    NASA Astrophysics Data System (ADS)

    Akhbardeh, Alireza; Junnila, Sakari; Koivuluoma, Mikko; Koivistoinen, Teemu; Värri, Alpo

    2006-12-01

    As we know, singular value decomposition (SVD) is designed for computing the singular values (SVs) of a matrix. If it is used to find the SVs of an N-by-1 or 1-by-N array whose elements represent samples of a signal, it returns only one singular value, which is not enough to express the whole signal. To overcome this problem, we designed a new kind of feature extraction method which we call "time-frequency moments singular value decomposition (TFM-SVD)." In this new method, we use statistical features of the time series as well as of the frequency series (the Fourier transform of the signal). This information is extracted into a matrix with a fixed structure and the SVs of that matrix are sought. This transform can be used as a preprocessing stage in pattern clustering methods. The results of using it indicate that the performance of a combined system including this transform and classifiers is comparable with the performance of other feature extraction methods such as wavelet transforms. To evaluate TFM-SVD, we applied this new method and artificial neural networks (ANNs) to ballistocardiogram (BCG) data clustering to look for probable heart disease in six test subjects. BCG from the test subjects was recorded using a chair-like ballistocardiograph developed in our project. This kind of device, combined with automated recording and analysis, would be suitable for use in many places, such as the home and office. The results show that the method has high performance and is almost insensitive to BCG waveform latency or nonlinear disturbance.
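
    The core idea, packing statistical moments of the time series and of its spectrum into a small fixed-size matrix and using that matrix's singular values as features, can be sketched as below. The exact moments and matrix layout used in the paper are not given in the abstract, so the 2 x 4 arrangement here is an assumption.

    ```python
    import numpy as np
    from scipy.stats import kurtosis, skew

    def tfm_svd_features(x):
        """TFM-SVD-style feature vector: moments of the signal and of its
        magnitude spectrum form a 2 x 4 matrix whose singular values are
        returned as features."""
        x = np.asarray(x, dtype=float)
        spectrum = np.abs(np.fft.rfft(x))
        M = np.array([[np.mean(s), np.std(s), skew(s), kurtosis(s)]
                      for s in (x, spectrum)])
        return np.linalg.svd(M, compute_uv=False)   # two singular values
    ```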

  13. Decomposition Algorithm for Global Reachability on a Time-Varying Graph

    NASA Technical Reports Server (NTRS)

    Kuwata, Yoshiaki

    2010-01-01

    A decomposition algorithm has been developed for global reachability analysis on a space-time grid. By exploiting the upper block-triangular structure, the planning problem is decomposed into smaller subproblems, which is much more scalable than the original approach. Recent studies have proposed the use of a hot-air (Montgolfier) balloon for possible exploration of Titan and Venus because these bodies have thick haze or cloud layers that limit the science return from an orbiter, and the atmospheres would provide enough buoyancy for balloons. One of the important questions that needs to be addressed is what surface locations the balloon can reach from an initial location, and how long it would take. This is referred to as the global reachability problem, where the paths from starting locations to all possible target locations must be computed. The balloon could be driven with its own actuation, but its actuation capability is fairly limited. It would be more efficient to take advantage of the wind field and ride the wind that is much stronger than what the actuator could produce. It is possible to pose the path planning problem as a graph search problem on a directed graph by discretizing the spacetime world and the vehicle actuation. The decomposition algorithm provides reachability analysis of a time-varying graph. Because the balloon only moves in the positive direction in time, the adjacency matrix of the graph can be represented with an upper block-triangular matrix, and this upper block-triangular structure can be exploited to decompose a large graph search problem. The new approach consumes a much smaller amount of memory, which also helps speed up the overall computation when the computing resource has a limited physical memory compared to the problem size.
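
    Because the balloon only moves forward in time, reachability on the space-time graph can be propagated one time block at a time instead of searching the full graph; a minimal sketch of that forward sweep is given below (the adjacency representation and the construction of the wind-driven edges are assumptions).

    ```python
    import numpy as np

    def forward_reachability(adjacency_per_step, start_nodes):
        """adjacency_per_step[t][i, j] is True if the balloon can move from
        node i at time t to node j at time t+1. Returns the reachable node
        set after each time step."""
        n = adjacency_per_step[0].shape[0]
        reachable = np.zeros(n, dtype=bool)
        reachable[list(start_nodes)] = True
        history = [reachable.copy()]
        for A in adjacency_per_step:        # one upper-triangular block per step
            reachable = (A.astype(int).T @ reachable.astype(int)) > 0
            history.append(reachable.copy())
        return history
    ```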

  14. Flower litters of alpine plants affect soil nitrogen and phosphorus rapidly in the eastern Tibetan Plateau

    NASA Astrophysics Data System (ADS)

    Wang, Jinniu; Xu, Bo; Wu, Yan; Gao, Jing; Shi, Fusun

    2016-10-01

    Litters of reproductive organs have rarely been studied despite their role in allocating nutrients for offspring reproduction. This study determines the mechanism through which flower litters efficiently increase the available soil nutrient pool. Field experiments were conducted to collect plant litters and calculate biomass production in an alpine meadow of the eastern Tibetan Plateau. C, N, P, lignin, cellulose content, and their relevant ratios of litters were analyzed to identify their decomposition features. A pot experiment was performed to determine the effects of litter addition on the soil nutrition pool by comparing the treated and control samples. The litter-bag method was used to verify decomposition rates. The flower litters of phanerophyte plants were comparable with non-flower litters. Biomass partitioning of other herbaceous species accounted for 10-40 % of the aboveground biomass. Flower litter possessed significantly higher N and P levels but less C / N, N / P, lignin / N, and lignin and cellulose concentrations than leaf litter. The litter-bag experiment confirmed that the flower litters of Rhododendron przewalskii and Meconopsis integrifolia decompose approximately 3 times faster than mixed litters within 50 days. Pot experiment findings indicated that flower litter addition significantly increased the available nutrient pool and soil microbial productivity. The time of litter fall significantly influenced soil available N and P, and soil microbial biomass. Flower litters fed the soil nutrition pool and influenced nutrition cycling in alpine ecosystems more efficiently because of their non-ignorable production, faster decomposition rate, and higher nutrient contents compared with non-flower litters. The underlying mechanism can enrich nutrients, which return to the soil, and non-structural carbohydrates, which feed and enhance the transitions of soil microorganisms.

  15. Proof of a new colour decomposition for QCD amplitudes

    DOE PAGES

    Melia, Tom

    2015-12-16

    Recently, Johansson and Ochirov conjectured the form of a new colour decomposition for QCD tree-level amplitudes. This note provides a proof of that conjecture. The proof is based on ‘Mario World’ Feynman diagrams, which exhibit the hierarchical Dyck structure previously found to be very useful when dealing with multi-quark amplitudes.

  16. Teaching Quality Management Model for the Training of Innovation Ability and the Multilevel Decomposition Indicators

    ERIC Educational Resources Information Center

    Lu, Xingjiang; Yao, Chen; Zheng, Jianmin

    2013-01-01

    This paper focuses on the training of undergraduate students' innovation ability. On top of the theoretical framework of the Quality Function Deployment (QFD), we propose a teaching quality management model. Based on this model, we establish a multilevel decomposition indicator system, which integrates innovation ability characterized by four…

  17. Proof of a new colour decomposition for QCD amplitudes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Melia, Tom

    Recently, Johansson and Ochirov conjectured the form of a new colour decomposition for QCD tree-level amplitudes. This note provides a proof of that conjecture. The proof is based on ‘Mario World’ Feynman diagrams, which exhibit the hierarchical Dyck structure previously found to be very useful when dealing with multi-quark amplitudes.

  18. Mechanism of the Thermal Decomposition of Ethanethiol and Dimethylsulfide

    NASA Astrophysics Data System (ADS)

    Melhado, William Francis; Whitman, Jared Connor; Kong, Jessica; Anderson, Daniel Easton; Vasiliou, AnGayle (AJ)

    2016-06-01

    Combustion of organosulfur contaminants in petroleum-based fuels and biofuels produces sulfur oxides (SO_x). These pollutants are highly regulated by the EPA because they have been linked to poor respiratory health and negative environmental impacts. Therefore much effort has been made to remove sulfur compounds in petroleum-based fuels and biofuels. Currently desulfurization methods used in the fuel industry are costly and inefficient. Research of the thermal decomposition mechanisms of organosulfur species can be implemented via engineering simulations to modify existing refining technologies to design more efficient sulfur removal processes. We have used a resistively-heated SiC tubular reactor to study the thermal decomposition of ethanethiol (CH_3CH_2SH) and dimethylsulfide (CH_3SCH_3). The decomposition products are identified by two independent techniques: 118.2 nm VUV photoionization mass spectroscopy and infrared spectroscopy. The thermal cracking products for CH_3CH_2SH are CH_2CH_2, SH, and H_2S and the thermal cracking products from CH_3SCH_3 are CH_3S, CH_2S, and CH_3.

  19. Renewable energy in electric utility capacity planning: a decomposition approach with application to a Mexican utility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Staschus, K.

    1985-01-01

    In this dissertation, efficient algorithms for electric-utility capacity expansion planning with renewable energy are developed. The algorithms include a deterministic phase that quickly finds a near-optimal expansion plan using derating and a linearized approximation to the time-dependent availability of nondispatchable energy sources. A probabilistic second phase needs comparatively few computer-time consuming probabilistic simulation iterations to modify this solution towards the optimal expansion plan. For the deterministic first phase, two algorithms, based on a Lagrangian Dual decomposition and a Generalized Benders Decomposition, are developed. The probabilistic second phase uses a Generalized Benders Decomposition approach. Extensive computational tests of the algorithms are reported. Among the deterministic algorithms, the one based on Lagrangian Duality proves fastest. The two-phase approach is shown to save up to 80% in computing time as compared to a purely probabilistic algorithm. The algorithms are applied to determine the optimal expansion plan for the Tijuana-Mexicali subsystem of the Mexican electric utility system. A strong recommendation to push conservation programs in the desert city of Mexicali results from this implementation.

  20. An Orthogonal Evolutionary Algorithm With Learning Automata for Multiobjective Optimization.

    PubMed

    Dai, Cai; Wang, Yuping; Ye, Miao; Xue, Xingsi; Liu, Hailin

    2016-12-01

    Research on multiobjective optimization problems has become one of the hottest topics in intelligent computation. In order to improve the search efficiency of an evolutionary algorithm and maintain the diversity of solutions, in this paper the learning automata (LA) is first used for quantization orthogonal crossover (QOX), and a new fitness function based on decomposition is proposed to achieve these two purposes. Based on these, an orthogonal evolutionary algorithm with LA for complex multiobjective optimization problems with continuous variables is proposed. The experimental results show that in continuous states, the proposed algorithm is able to achieve accurate Pareto-optimal sets and wide Pareto-optimal fronts efficiently. Moreover, the comparison with several existing well-known algorithms: nondominated sorting genetic algorithm II, decomposition-based multiobjective evolutionary algorithm, decomposition-based multiobjective evolutionary algorithm with an ensemble of neighborhood sizes, multiobjective optimization by LA, and multiobjective immune algorithm with nondominated neighbor-based selection, on 15 multiobjective benchmark problems, shows that the proposed algorithm is able to find more accurate and evenly distributed Pareto-optimal fronts than the compared ones.
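
    The abstract does not give the exact form of its decomposition-based fitness function; a common choice in decomposition-based multiobjective algorithms is the weighted Tchebycheff scalarization, sketched here only as a reference point.

    ```python
    import numpy as np

    def tchebycheff(objectives, weight, ideal):
        """Weighted Tchebycheff scalar fitness for one weight vector (smaller is better)."""
        return float(np.max(np.asarray(weight) * np.abs(np.asarray(objectives) - np.asarray(ideal))))

    # For a population and a set of weight vectors, each subproblem keeps the solution
    # minimizing its own scalarized fitness (`s.objectives` is a hypothetical attribute):
    # best_for_w = min(population, key=lambda s: tchebycheff(s.objectives, w, ideal_point))
    ```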

  1. Empirical projection-based basis-component decomposition method

    NASA Astrophysics Data System (ADS)

    Brendel, Bernhard; Roessl, Ewald; Schlomka, Jens-Peter; Proksa, Roland

    2009-02-01

    Advances in the development of semiconductor based, photon-counting x-ray detectors stimulate research in the domain of energy-resolving pre-clinical and clinical computed tomography (CT). For counting detectors acquiring x-ray attenuation in at least three different energy windows, an extended basis component decomposition can be performed in which in addition to the conventional approach of Alvarez and Macovski a third basis component is introduced, e.g., a gadolinium based CT contrast material. After the decomposition of the measured projection data into the basis component projections, conventional filtered-backprojection reconstruction is performed to obtain the basis-component images. In recent work, this basis component decomposition was obtained by maximizing the likelihood-function of the measurements. This procedure is time consuming and often unstable for excessively noisy data or low intrinsic energy resolution of the detector. Therefore, alternative procedures are of interest. Here, we introduce a generalization of the idea of empirical dual-energy processing published by Stenner et al. to multi-energy, photon-counting CT raw data. Instead of working in the image-domain, we use prior spectral knowledge about the acquisition system (tube spectra, bin sensitivities) to parameterize the line-integrals of the basis component decomposition directly in the projection domain. We compare this empirical approach with the maximum-likelihood (ML) approach considering image noise and image bias (artifacts) and see that only moderate noise increase is to be expected for small bias in the empirical approach. Given the drastic reduction of pre-processing time, the empirical approach is considered a viable alternative to the ML approach.

  2. Decomposition of energetic chemicals contaminated with iron or stainless steel.

    PubMed

    Chervin, Sima; Bodman, Glenn T; Barnhart, Richard W

    2006-03-17

    Contamination of chemicals or reaction mixtures with iron or stainless steel is likely to take place during chemical processing. If energetic and thermally unstable chemicals are involved in a manufacturing process, contamination with iron or stainless steel can impact the decomposition characteristics of these chemicals and, subsequently, the safety of the processes, and should be investigated. The goal of this project was to undertake a systematic approach to study the impact of iron or stainless steel contamination on the decomposition characteristics of different chemical classes. Differential scanning calorimetry (DSC) was used to study the decomposition reaction by testing each chemical pure, and in mixtures with iron and stainless steel. The following classes of energetic chemicals were investigated: nitrobenzenes, tetrazoles, hydrazines, hydroxylamines and oximes, sulfonic acid derivatives and monomers. The following non-energetic groups were investigated for contributing effects: halogens, hydroxyls, amines, amides, nitriles, sulfonic acid esters, carbonyl halides and salts of hydrochloric acid. Based on the results obtained, conclusions were drawn regarding the sensitivity of the decomposition reaction to contamination with iron and stainless steel for the chemical classes listed above. It was demonstrated that the most sensitive classes are hydrazines and hydroxylamines/oximes. Contamination of these chemicals with iron or stainless steel not only destabilizes them, leading to decomposition at significantly lower temperatures, but also sometimes causes increased severity of the decomposition. The sensitivity of nitrobenzenes to contamination with iron or stainless steel depended upon the presence of other contributing groups: the presence of such groups as acid chlorides or chlorine/fluorine significantly increased the effect of contamination on decomposition characteristics of nitrobenzenes. The decomposition of sulfonic acid derivatives and tetrazoles was not impacted by presence of iron or stainless steel.

  3. Computer implemented empirical mode decomposition method apparatus, and article of manufacture utilizing curvature extrema

    NASA Technical Reports Server (NTRS)

    Shen, Zheng (Inventor); Huang, Norden Eh (Inventor)

    2003-01-01

    A computer-implemented physical signal analysis method includes two essential steps and the associated presentation techniques of the results. All the steps exist only in a computer: there are no analytic expressions resulting from the method. The first step is a computer-implemented Empirical Mode Decomposition to extract a collection of Intrinsic Mode Functions (IMF) from nonlinear, nonstationary physical signals based on local extrema and curvature extrema. The decomposition is based on the direct extraction of the energy associated with various intrinsic time scales in the physical signal. Expressed in the IMF's, they have well-behaved Hilbert Transforms from which instantaneous frequencies can be calculated. The second step is the Hilbert Transform. The final result is the Hilbert Spectrum. Thus, the invention can localize any event on the time as well as the frequency axis. The decomposition can also be viewed as an expansion of the data in terms of the IMF's. Then, these IMF's, based on and derived from the data, can serve as the basis of that expansion. The local energy and the instantaneous frequency derived from the IMF's through the Hilbert transform give a full energy-frequency-time distribution of the data which is designated as the Hilbert Spectrum.

  4. MEG masked priming evidence for form-based decomposition of irregular verbs

    PubMed Central

    Fruchter, Joseph; Stockall, Linnaea; Marantz, Alec

    2013-01-01

    To what extent does morphological structure play a role in early processing of visually presented English past tense verbs? Previous masked priming studies have demonstrated effects of obligatory form-based decomposition for genuinely affixed words (teacher-TEACH) and pseudo-affixed words (corner-CORN), but not for orthographic controls (brothel-BROTH). Additionally, MEG single word reading studies have demonstrated that the transition probability from stem to affix (in genuinely affixed words) modulates an early evoked response known as the M170; parallel findings have been shown for the transition probability from stem to pseudo-affix (in pseudo-affixed words). Here, utilizing the M170 as a neural index of visual form-based morphological decomposition, we ask whether the M170 demonstrates masked morphological priming effects for irregular past tense verbs (following a previous study which obtained behavioral masked priming effects for irregulars). Dual mechanism theories of the English past tense predict a rule-based decomposition for regulars but not for irregulars, while certain single mechanism theories predict rule-based decomposition even for irregulars. MEG data was recorded for 16 subjects performing a visual masked priming lexical decision task. Using a functional region of interest (fROI) defined on the basis of repetition priming and regular morphological priming effects within the left fusiform and inferior temporal regions, we found that activity in this fROI was modulated by the masked priming manipulation for irregular verbs, during the time window of the M170. We also found effects of the scores generated by the learning model of Albright and Hayes (2003) on the degree of priming for irregular verbs. The results favor a single mechanism account of the English past tense, in which even irregulars are decomposed into stems and affixes prior to lexical access, as opposed to a dual mechanism model, in which irregulars are recognized as whole forms. PMID:24319420

  5. Domain decomposition for a mixed finite element method in three dimensions

    USGS Publications Warehouse

    Cai, Z.; Parashkevov, R.R.; Russell, T.F.; Wilson, J.D.; Ye, X.

    2003-01-01

    We consider the solution of the discrete linear system resulting from a mixed finite element discretization applied to a second-order elliptic boundary value problem in three dimensions. Based on a decomposition of the velocity space, these equations can be reduced to a discrete elliptic problem by eliminating the pressure through the use of substructures of the domain. The practicality of the reduction relies on a local basis, presented here, for the divergence-free subspace of the velocity space. We consider additive and multiplicative domain decomposition methods for solving the reduced elliptic problem, and their uniform convergence is established.

  6. Artifact removal from EEG data with empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Grubov, Vadim V.; Runnova, Anastasiya E.; Efremova, Tatyana Yu.; Hramov, Alexander E.

    2017-03-01

    In this paper we propose a novel method for dealing with the physiological artifacts caused by intensive activity of facial and neck muscles and other movements in experimental human EEG recordings. The method is based on analysis of EEG signals with empirical mode decomposition (Hilbert-Huang transform). We introduce the mathematical algorithm of the method with the following steps: empirical mode decomposition of the EEG signal, identification of the empirical modes containing artifacts, removal of those modes, and reconstruction of the initial EEG signal. We test the method on filtration of experimental human EEG signals from movement artifacts and show the high efficiency of the method.
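
    Assuming the PyEMD package is available, the four steps listed above can be sketched as follows. The criterion used here for flagging artifact modes (dominant frequency inside an assumed low-frequency movement band) is a placeholder; the paper's selection rule may differ.

    ```python
    import numpy as np
    from PyEMD import EMD   # pip install EMD-signal (assumed available)

    def remove_movement_artifacts(eeg, fs, artifact_band=(0.0, 2.0)):
        """EMD-based artifact removal: decompose into IMFs, drop IMFs whose
        dominant frequency lies in `artifact_band`, and reconstruct the signal."""
        eeg = np.asarray(eeg, dtype=float)
        imfs = EMD().emd(eeg)
        kept = []
        for imf in imfs:
            freqs = np.fft.rfftfreq(imf.size, d=1.0 / fs)
            dominant = freqs[np.argmax(np.abs(np.fft.rfft(imf)))]
            if not (artifact_band[0] <= dominant <= artifact_band[1]):
                kept.append(imf)
        return np.sum(kept, axis=0) if kept else np.zeros_like(eeg)
    ```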

  7. Application of vacuum stability test to determine thermal decomposition kinetics of nitramines bonded by polyurethane matrix

    NASA Astrophysics Data System (ADS)

    Elbeih, Ahmed; Abd-Elghany, Mohamed; Elshenawy, Tamer

    2017-03-01

    Vacuum stability test (VST) is mainly used to study the compatibility and stability of energetic materials. In this work, VST has been used to study the thermal decomposition kinetics of four cyclic nitramines, 1,3,5-trinitro-1,3,5-triazinane (RDX), 1,3,5,7-tetranitro-1,3,5,7-tetrazocane (HMX), cis-1,3,4,6-tetranitrooctahydroimidazo-[4,5-d]imidazole (BCHMX), and 2,4,6,8,10,12-hexanitro-2,4,6,8,10,12-hexaazaisowurtzitane (ε-HNIW, CL-20), bonded by a polyurethane matrix based on hydroxyl terminated polybutadiene (HTPB). Model fitting and model free (isoconversional) methods have been applied to determine the decomposition kinetics from the VST results. For comparison, the decomposition kinetics were determined isothermally by the ignition delay technique and non-isothermally by the Advanced Kinetics and Technology Solution (AKTS) software. The activation energies for thermolysis obtained by the isoconversional method based on the VST technique for RDX/HTPB, HMX/HTPB, BCHMX/HTPB and CL20/HTPB were 157.1, 203.1, 190.0 and 176.8 kJ mol-1, respectively. The model fitting method showed that the mechanism of thermal decomposition of BCHMX/HTPB is controlled by the nucleation model, while all the other studied PBXs are controlled by diffusion models. A linear relationship between the ignition temperatures and the activation energies was observed. BCHMX/HTPB is an interesting new PBX at the research stage.

  8. Probabilistic Round Trip Contamination Analysis of a Mars Sample Acquisition and Handling Process Using Markovian Decompositions

    NASA Technical Reports Server (NTRS)

    Hudson, Nicolas; Lin, Ying; Barengoltz, Jack

    2010-01-01

    A method for evaluating the probability of a Viable Earth Microorganism (VEM) contaminating a sample during the sample acquisition and handling (SAH) process of a potential future Mars Sample Return mission is developed. A scenario is analyzed in which multiple core samples would be acquired using a rotary percussive coring tool deployed from an arm on a MER-class rover. The analysis is conducted in a structured way by decomposing the sample acquisition and handling process into a series of discrete time steps and breaking the physical system into a set of relevant components. At each discrete time step, two key functions are defined: the probability of a VEM being released from each component, and the transport matrix, which represents the probability of VEM transport from one component to another. By defining the expected number of VEMs on each component at the start of the sampling process, these decompositions allow the expected number of VEMs on each component at each sampling step to be represented as a Markov chain. This formalism provides a rigorous mathematical framework in which to analyze the probability of a VEM entering the sample chain, as well as making the analysis tractable by breaking the process down into small analyzable steps.
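
    By linearity of expectation, the expected number of VEMs on each component can be propagated step by step from the release probabilities and transport matrices; a minimal sketch of that propagation is shown below (the array layout is an assumption, not the mission analysis code).

    ```python
    import numpy as np

    def propagate_expected_vems(initial, release_probs, transport_matrices):
        """initial[i]: expected VEMs on component i at the start.
        release_probs[t][i]: probability a VEM on component i is released at step t.
        transport_matrices[t][i, j]: probability a released VEM moves from i to j
        (rows sum to 1 and may include i itself). Returns counts after each step."""
        v = np.asarray(initial, dtype=float)
        history = [v.copy()]
        for r, T in zip(release_probs, transport_matrices):
            released = v * np.asarray(r)
            v = (v - released) + released @ T   # retained VEMs plus redistributed VEMs
            history.append(v.copy())
        return history
    ```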

  9. [Effects of straw returning combined with medium and microelements application on soil organic carbon sequestration in cropland.

    PubMed

    Jiang, Zhen Hui; Shi, Jiang Lan; Jia, Zhou; Ding, Ting Ting; Tian, Xiao Hong

    2016-04-22

    A 52-day incubation experiment was conducted to investigate the effects of maize straw decomposition with combined medium element (S) and microelements (Fe and Zn) application on arable soil organic carbon sequestration. During the straw decomposition, the soil microbial biomass carbon (MBC) content and CO 2 -C mineralization rate increased with the addition of S, Fe and Zn, respectively. Also, the cumulative CO 2 -C efflux after 52-day laboratory incubation significantly increased in the treatments with S, or Fe, or Zn addition, while there was no significant reduction of soil organic carbon content in the treatments. In addition, Fe or Zn application increased the inert C pools and their proportion, and apparent balance of soil organic carbon, indicating a promoting effect of Fe or Zn addition on soil organic carbon sequestration. In contrast, S addition decreased the proportion of inert C pools and apparent balance of soil organic carbon, indicating an adverse effect of S addition on soil organic carbon sequestration. The results suggested that when nitrogen and phosphorus fertilizers were applied, inclusion of S, or Fe, or Zn in straw incorporation could promote soil organic carbon mineralization process, while organic carbon sequestration was favored by Fe or Zn addition, but not by S addition.

  10. Health care in the CIS countries : the case of hospitals in Ukraine.

    PubMed

    Pilyavsky, Anatoly; Staat, Matthias

    2006-09-01

    The study analyses the technical efficiency of community hospitals in Ukraine during 1997-2001. Hospital costs amount to two-thirds of Ukrainian spending on health care. Data are available on the number of beds, physicians and nurses employed, surgical procedures performed, and admissions and patient days. We employ data envelopment analysis to calculate the efficiency of hospitals and to assess productivity changes over time. The scores calculated with an output-oriented model assuming constant returns to scale range from 150% to 110%. Average relative inefficiency of the hospitals is initially above 30% and later drops to 15% or below. The average productivity change is positive but below 1%; a Malmquist index decomposition reveals that negative technological progress is overcompensated by positive catching-up.
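
    The output-oriented, constant-returns-to-scale DEA score mentioned above solves one linear program per hospital; a compact sketch using scipy's linprog is given below (the Malmquist decomposition over time is not shown).

    ```python
    import numpy as np
    from scipy.optimize import linprog

    def dea_output_ccr(X, Y, o):
        """Output-oriented CCR efficiency of decision-making unit o.
        X: (n_units x n_inputs), Y: (n_units x n_outputs). Returns phi >= 1;
        phi = 1 means unit o lies on the efficient frontier."""
        n, m = X.shape
        s = Y.shape[1]
        c = np.concatenate(([-1.0], np.zeros(n)))           # maximize phi
        A_ub, b_ub = [], []
        for i in range(m):                                   # sum_j lam_j * x_ij <= x_io
            A_ub.append(np.concatenate(([0.0], X[:, i])))
            b_ub.append(X[o, i])
        for r in range(s):                                   # phi * y_ro <= sum_j lam_j * y_rj
            A_ub.append(np.concatenate(([Y[o, r]], -Y[:, r])))
            b_ub.append(0.0)
        res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                      bounds=[(0, None)] * (n + 1), method="highs")
        return float(res.x[0])
    ```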

  11. Some ways of plants wastes utilization in bioregenerative life support systems

    NASA Astrophysics Data System (ADS)

    Kovaleva, N. P.; Tikhomirov, A. A.; Tirranen, L. S.; Ushakova, S. A.; Zolotukhin, I. G.; Anischenko, O. V.

    In work on experimental modeling of bioregenerative life support systems (BLSS) carried out at the Institute of Biophysics, Siberian Branch of the Russian Academy of Sciences (SB RAS), the possibility of increasing the degree of system closure under the condition of returning inedible plant biomass into the organic matter turnover was demonstrated. In that work, radish inedible biomass was subjected to biological oxidation in soil-like substrate (SLS) after drying, whereas wheat straw was subjected to stepwise processing that included a mushroom-growing stage. Mushroom cultivation facilitated lignin destruction and quicker straw decomposition; on the other hand, mushroom growing required additional technological procedures, complicating the technological chain of straw processing. The purpose of this work is to study the possibility of excluding the mushroom-growing stage from straw pretreatment for its further use as an equivalent of radish edible biomass grown on SLS. To address this, a radish cenosis was grown in a conveyor regime. The conveyor included radish of four ages with a conveyor step of 7 days. The experiment consisted of two successive stages. In the first stage, radish was grown without straw addition into the SLS (control); to return mineral elements into the SLS, the grown biomass was restored in the SLS. In the second stage, inedible radish biomass and wheat straw were returned into the SLS in a quantity equivalent to the edible biomass. The feasibility of the described method was estimated according to plant productivity, microbiological

  12. Waveform LiDAR processing: comparison of classic approaches and optimized Gold deconvolution to characterize vegetation structure and terrain elevation

    NASA Astrophysics Data System (ADS)

    Zhou, T.; Popescu, S. C.; Krause, K.

    2016-12-01

    Waveform Light Detection and Ranging (LiDAR) data have advantages over discrete-return LiDAR data in accurately characterizing vegetation structure. However, we lack a comprehensive understanding of waveform data processing approaches under different topography and vegetation conditions. The objective of this paper is to highlight a novel deconvolution algorithm, the Gold algorithm, for processing waveform LiDAR data with optimal deconvolution parameters. Further, we present a comparative study of waveform processing methods to provide insight into selecting an approach for a given combination of vegetation and terrain characteristics. We employed two waveform processing methods: 1) direct decomposition, and 2) deconvolution followed by decomposition. In the second method, we utilized two deconvolution algorithms: the Richardson-Lucy (RL) algorithm and the Gold algorithm. Comprehensive, quantitative comparisons were conducted in terms of the number of detected echoes, position accuracy, the bias of the end products (such as the digital terrain model (DTM) and canopy height model (CHM)) relative to discrete-return LiDAR data, and the parameter uncertainty for the end products obtained from the different methods. This study was conducted at three study sites covering diverse ecological regions and vegetation and elevation gradients. Results demonstrate that both deconvolution algorithms are sensitive to the pre-processing of the input data. The deconvolution and decomposition method is more capable of detecting hidden echoes with a lower false echo detection rate, especially for the Gold algorithm. Compared to the reference data, all approaches generate satisfactory accuracy, with small mean spatial differences (<1.22 m for DTMs, <0.77 m for CHMs) and root mean square errors (RMSE) (<1.26 m for DTMs, <1.93 m for CHMs). More specifically, the Gold algorithm is superior to the others with a smaller RMSE (<1.01 m), while the direct decomposition approach works better in terms of the percentage of spatial differences within 0.5 and 1 m. The parameter uncertainty analysis demonstrates that the Gold algorithm outperforms the other approaches in dense vegetation areas, with the smallest RMSE, and that the RL algorithm performs better in sparse vegetation areas in terms of RMSE.
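
    To illustrate the deconvolution step compared above, here is a minimal 1-D Richardson-Lucy iteration applied to a synthetic waveform (a Gaussian system response blurring two overlapping returns). This is a generic sketch of the RL algorithm named in the record, not the authors' optimized Gold implementation, and all signal parameters are made up.

```python
import numpy as np

def richardson_lucy_1d(observed, psf, iterations=50, eps=1e-12):
    """Basic Richardson-Lucy deconvolution of a non-negative 1-D waveform."""
    estimate = np.full_like(observed, observed.mean())
    psf_mirror = psf[::-1]
    for _ in range(iterations):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = observed / (blurred + eps)
        estimate *= np.convolve(ratio, psf_mirror, mode="same")
    return estimate

# Synthetic example: two overlapping echoes blurred by a Gaussian system response.
truth = np.zeros(200)
truth[80], truth[95] = 1.0, 0.6                       # two "hidden" returns
psf = np.exp(-0.5 * (np.arange(-25, 26) / 6.0) ** 2)
psf /= psf.sum()
observed = np.convolve(truth, psf, mode="same")

recovered = richardson_lucy_1d(observed, psf, iterations=200)
print("two strongest samples:", np.argsort(recovered)[-2:])  # should lie near 80 and 95
```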

  13. Climate Induced Changes in Global-Scale Litter Decomposition and Long-term Relationships with Net Primary Productivity

    NASA Astrophysics Data System (ADS)

    Silver, W. L.; Smith, W. K.; Parton, W. J.; Wieder, W. R.; DelGrosso, S.

    2016-12-01

    Surface litter decomposition represents the largest annual carbon (C) flux to the atmosphere from terrestrial ecosystems (Esser et al. 1982). Using broad-scale long-term datasets we show that litter decomposition rates are largely predicted by a climate-decomposition index (CDI) at a global scale, and use CDI to estimate patterns in litter decomposition over the 110 years from 1901-2011. There were rapid changes in CDI over the last 30 y of the record amounting to a 4.3% increase globally. Boreal forests (+13.9%), tundra (+12.2%), savannas (+5.3%), and temperate (+2.4%) and tropical (+2.1%) forests all experienced accelerated decomposition. During the same period, most biomes experienced corresponding increases in a primary production index (PPI) estimated from an ensemble of long-term, observation-based productivity indices. The percent increase in PPI was only half that of decomposition globally. Tropical forests and savannas showed no increase in PPI to offset greater decomposition rates. Temperature-limited ecosystems (i.e., tundra, boreal, and temperate forests) showed the greatest differences between CDI and PPI, highlighting potentially large decoupling of C fluxes in these biomes. Precipitation and actual evapotranspiration were the best climate predictors of CDI at a global scale, while PPI varied consistently with actual evapotranspiration. As expected, temperature was the best predictor of PPI across temperature limited ecosystems. Our results show that climate change could be leading to a decoupling of C uptake and losses, potentially resulting in lower C storage in northern latitudes, temperate and tropical forests, and savannas.

  14. Children's Understanding of the Relation between Addition and Subtraction: Inversion, Identity, and Decomposition.

    ERIC Educational Resources Information Center

    Bryant, Peter; Rendu, Alison; Christie, Clare

    1999-01-01

    Examined whether 5- and 6-year-olds understand that addition and subtraction cancel each other and whether this understanding is based on identity or quantity of addend and subtrahend. Found that children used the inversion principle. Six- to eight-year-olds also used inversion and decomposition to solve a + b - (b + 1) problems. Concluded that…

  15. Regional contingencies in the relationship between aboveground biomass and litter in the world’s grasslands

    Treesearch

    L.R. O'Halloran; E.T. Borer; E.W. Seabloom; A.S. MacDougall; E.E. Cleland; R.L. McCulley; S. Hobbie; S. Harpole; N.M. DeCrappeo; C.-J. Chu; J.D. Bakker; K.F. Davies; G. Du; J. Firn; N. Hagenah; K.S. Hofmockel; J.M.H. Knops; W. Li; B.A. Melbourne; J.W. Morgan; J.L. Orrock; S.M. Prober; C.J. Stevens

    2013-01-01

    Based on regional-scale studies, aboveground production and litter decomposition are thought to positively covary, because they are driven by shared biotic and climatic factors. Until now we have been unable to test whether production and decomposition are generally coupled across climatically dissimilar regions, because we lacked replicated data collected within a...

  16. Optimal classification for the diagnosis of duchenne muscular dystrophy images using support vector machines.

    PubMed

    Zhang, Ming-Huan; Ma, Jun-Shan; Shen, Ying; Chen, Ying

    2016-09-01

    This study aimed to investigate the optimal support vector machine (SVM)-based classifier of Duchenne muscular dystrophy (DMD) magnetic resonance imaging (MRI) images. T1-weighted (T1W) and T2-weighted (T2W) images of 15 boys with DMD and 15 normal controls were obtained. Textural features of the images were extracted and wavelet decomposed, and then principal features were selected. Scale transformation was then performed for the MRI images. Afterward, SVM-based classifiers of the MRI images were analyzed based on the radial basis function and decomposition levels. The cost parameter C and kernel parameter [Formula: see text] were used for classification. Then, the optimal SVM-based classifier, expressed as [Formula: see text], was identified by performance evaluation (sensitivity, specificity and accuracy). Eight of 12 textural features were selected as principal features (eigenvalues [Formula: see text]). Sixteen SVM-based classifiers were obtained using combinations of (C, [Formula: see text]), and those with lower C and [Formula: see text] values showed higher performance, especially the classifier of [Formula: see text]. The SVM-based classifiers of T1W images showed higher performance than those of T2W images at the same decomposition level. The T1W images in the classifier of [Formula: see text] at level 2 decomposition showed the highest performance of all, with sensitivity, specificity, and accuracy reaching 96.9, 97.3, and 97.1 %, respectively, demonstrating that it was the optimal classification for the diagnosis of DMD.
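
    A generic sketch of the kind of RBF-kernel SVM search over (C, gamma) described above, using scikit-learn; the texture and wavelet feature extraction from MRI is not reproduced, and the feature vectors here are random stand-ins.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, StratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder data: 30 subjects (15 DMD, 15 controls), 8 selected texture features each.
rng = np.random.default_rng(0)
X = rng.normal(size=(30, 8))
y = np.array([1] * 15 + [0] * 15)

# Grid over the cost parameter C and the RBF kernel parameter gamma.
param_grid = {"svc__C": [0.1, 1, 10, 100], "svc__gamma": [0.001, 0.01, 0.1, 1]}
pipe = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
search = GridSearchCV(pipe, param_grid, cv=StratifiedKFold(n_splits=5), scoring="accuracy")
search.fit(X, y)

print("best (C, gamma):", search.best_params_)
print("cross-validated accuracy:", round(search.best_score_, 3))
```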

  17. Three dimensional empirical mode decomposition analysis apparatus, method and article of manufacture

    NASA Technical Reports Server (NTRS)

    Gloersen, Per (Inventor)

    2004-01-01

    An apparatus and method of analysis for three-dimensional (3D) physical phenomena. The physical phenomena may include any varying 3D phenomena such as time-varying polar ice flows. A representation of the 3D phenomena is passed through a Hilbert transform to convert the data into complex form. A spatial variable is separated from the complex representation by producing a time-based covariance matrix. The temporal parts of the principal components are produced by applying Singular Value Decomposition (SVD). Based on the rapidity with which the eigenvalues decay, the first 3-10 complex principal components (CPC) are selected for Empirical Mode Decomposition into intrinsic modes. The intrinsic modes produced are filtered in order to reconstruct the spatial part of the CPC. Finally, a filtered time series may be reconstructed from the first 3-10 filtered complex principal components.

  18. Decomposition of Proteins into Dynamic Units from Atomic Cross-Correlation Functions.

    PubMed

    Calligari, Paolo; Gerolin, Marco; Abergel, Daniel; Polimeno, Antonino

    2017-01-10

    In this article, we present a clustering method of atoms in proteins based on the analysis of the correlation times of interatomic distance correlation functions computed from MD simulations. The goal is to provide a coarse-grained description of the protein in terms of fewer elements that can be treated as dynamically independent subunits. Importantly, this domain decomposition method does not take into account structural properties of the protein. Instead, the clustering of protein residues in terms of networks of dynamically correlated domains is defined on the basis of the effective correlation times of the pair distance correlation functions. For these properties, our method stands as a complementary analysis to the customary protein decomposition in terms of quasi-rigid, structure-based domains. Results obtained for a prototypal protein structure illustrate the approach proposed.

  19. Eigenvalue-eigenvector decomposition (EED) analysis of dissimilarity and covariance matrix obtained from total synchronous fluorescence spectral (TSFS) data sets of herbal preparations: Optimizing the classification approach.

    PubMed

    Tarai, Madhumita; Kumar, Keshav; Divya, O; Bairi, Partha; Mishra, Kishor Kumar; Mishra, Ashok Kumar

    2017-09-05

    The present work compares the dissimilarity and covariance based unsupervised chemometric classification approaches by taking the total synchronous fluorescence spectroscopy data sets acquired for the cumin and non-cumin based herbal preparations. The conventional decomposition method involves eigenvalue-eigenvector analysis of the covariance of the data set and finds the factors that can explain the overall major sources of variation present in the data set. The conventional approach does this irrespective of the fact that the samples belong to intrinsically different groups and hence leads to poor class separation. The present work shows that classification of such samples can be optimized by performing the eigenvalue-eigenvector decomposition on the pair-wise dissimilarity matrix. Copyright © 2017 Elsevier B.V. All rights reserved.
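
    A small sketch of the contrast drawn above: eigenvalue-eigenvector decomposition applied to the covariance matrix of a data set versus to a pairwise dissimilarity matrix. The spectra below are synthetic stand-ins for TSFS data, and the Euclidean dissimilarity is an illustrative choice rather than the measure used in the paper.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

# Synthetic stand-in for TSFS data: 20 samples x 50 spectral variables, two latent groups.
rng = np.random.default_rng(1)
base_a, base_b = rng.normal(size=50), rng.normal(size=50)
X = np.vstack([base_a + 0.1 * rng.normal(size=50) for _ in range(10)] +
              [base_b + 0.1 * rng.normal(size=50) for _ in range(10)])

# Conventional route: eigen-decomposition of the covariance matrix (PCA-like scores).
Xc = X - X.mean(axis=0)
cov = np.cov(Xc, rowvar=False)
w_cov, v_cov = np.linalg.eigh(cov)            # eigenvalues in ascending order
scores_cov = Xc @ v_cov[:, ::-1][:, :2]       # project onto the top-2 eigenvectors

# Alternative route: eigen-decomposition of the pairwise (Euclidean) dissimilarity matrix.
D = squareform(pdist(X, metric="euclidean"))  # symmetric 20 x 20 dissimilarity matrix
w_dis, v_dis = np.linalg.eigh(D)
scores_dis = v_dis[:, ::-1][:, :2]            # leading eigenvectors as sample coordinates

print("top covariance eigenvalues:   ", np.round(w_cov[::-1][:3], 3))
print("top dissimilarity eigenvalues:", np.round(w_dis[::-1][:3], 3))
```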

  20. Urban-area extraction from polarimetric SAR image using combination of target decomposition and orientation angle

    NASA Astrophysics Data System (ADS)

    Zou, Bin; Lu, Da; Wu, Zhilu; Qiao, Zhijun G.

    2016-05-01

    The results of model-based target decomposition are the main features used to discriminate urban and non-urban areas in polarimetric synthetic aperture radar (PolSAR) applications. Traditional urban-area extraction methods based on model-based target decomposition usually misclassify ground-trunk structures as urban area or misclassify rotated urban areas as forest. This paper introduces another feature, the orientation angle, to improve the urban-area extraction scheme for accurate urban mapping from PolSAR images. The proposed method first takes the randomness of the orientation angle into account to constrain the urban area and subsequently uses the rotation angle to improve the results, so that oriented urban areas are recognized as double-bounce objects rather than volume scattering. ESAR L-band PolSAR data of the Oberpfaffenhofen Test Site Area were used to validate the proposed algorithm.

  1. Is it worth hyperaccumulating Ni on non-serpentine soils? Decomposition dynamics of mixed-species litters containing hyperaccumulated Ni across serpentine and non-serpentine environments.

    PubMed

    Adamidis, George C; Kazakou, Elena; Aloupi, Maria; Dimitrakopoulos, Panayiotis G

    2016-06-01

    Nickel (Ni)-hyperaccumulating species produce high-Ni litters and may potentially influence important ecosystem processes such as decomposition. Although litters resembling the natural community conditions are essential in order to predict decomposition dynamics, decomposition of mixed-species litters containing hyperaccumulated Ni has never been studied. This study aims to test the effect of different litter mixtures containing hyperaccumulated Ni on decomposition and Ni release across serpentine and non-serpentine soils. Three different litter mixtures were prepared based on the relative abundance of the dominant species in three serpentine soils in the island of Lesbos, Greece where the Ni-hyperaccumulator Alyssum lesbiacum is present. Each litter mixture decomposed on its original serpentine habitat and on an adjacent non-serpentine habitat, in order to investigate whether the decomposition rates differ across the contrasted soils. In order to make comparisons across litter mixtures and to investigate whether additive or non-additive patterns of mass loss occur, a control non-serpentine site was used. Mass loss and Ni release were measured after 90, 180 and 270 d of field exposure. The decomposition rates and Ni release had higher values on serpentine soils after all periods of field exposure. The recorded rapid release of hyperaccumulated Ni is positively related to the initial litter Ni concentration. No differences were found in the decomposition of the three different litter mixtures at the control non-serpentine site, while their patterns of mass loss were additive. Our results: (1) demonstrate the rapid decomposition of litters containing hyperaccumulated Ni on serpentine soils, indicating the presence of metal-tolerant decomposers; and (2) imply the selective decomposition of low-Ni parts of litters by the decomposers on non-serpentine soils. This study provides support for the elemental allelopathy hypothesis of hyperaccumulation, presenting the potential selective advantages acquired by metal-hyperaccumulating plants through litter decomposition on serpentine soils. © The Author 2016. Published by Oxford University Press on behalf of the Annals of Botany Company. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  2. A data-driven method to enhance vibration signal decomposition for rolling bearing fault analysis

    NASA Astrophysics Data System (ADS)

    Grasso, M.; Chatterton, S.; Pennacchi, P.; Colosimo, B. M.

    2016-12-01

    Health condition analysis and diagnostics of rotating machinery requires the capability of properly characterizing the information content of sensor signals in order to detect and identify possible fault features. Time-frequency analysis plays a fundamental role, as it allows determining both the existence and the causes of a fault. The separation of components belonging to different time-frequency scales, either associated to healthy or faulty conditions, represents a challenge that motivates the development of effective methodologies for multi-scale signal decomposition. In this framework, the Empirical Mode Decomposition (EMD) is a flexible tool, thanks to its data-driven and adaptive nature. However, the EMD usually yields an over-decomposition of the original signals into a large number of intrinsic mode functions (IMFs). The selection of most relevant IMFs is a challenging task, and the reference literature lacks automated methods to achieve a synthetic decomposition into few physically meaningful modes by avoiding the generation of spurious or meaningless modes. The paper proposes a novel automated approach aimed at generating a decomposition into a minimal number of relevant modes, called Combined Mode Functions (CMFs), each consisting in a sum of adjacent IMFs that share similar properties. The final number of CMFs is selected in a fully data driven way, leading to an enhanced characterization of the signal content without any information loss. A novel criterion to assess the dissimilarity between adjacent CMFs is proposed, based on probability density functions of frequency spectra. The method is suitable to analyze vibration signals that may be periodically acquired within the operating life of rotating machineries. A rolling element bearing fault analysis based on experimental data is presented to demonstrate the performances of the method and the provided benefits.
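
    A rough sketch of the grouping idea described above: adjacent IMFs whose spectral probability density functions are similar are summed into combined mode functions (CMFs). The IMFs are assumed to come from any EMD implementation (for example the PyEMD package); the Jensen-Shannon dissimilarity between normalized spectra and the threshold used below are illustrative choices, not the authors' exact criterion, and the "IMFs" in the demo are band-passed noise stand-ins.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from scipy.spatial.distance import jensenshannon

def spectral_pdf(x):
    """Normalized magnitude spectrum of a mode, treated as a probability density."""
    spec = np.abs(np.fft.rfft(x))
    return spec / spec.sum()

def combine_modes(imfs, threshold=0.35):
    """Greedily merge adjacent IMFs whose spectral PDFs are similar into CMFs."""
    cmfs = [imfs[0].copy()]
    for imf in imfs[1:]:
        if jensenshannon(spectral_pdf(cmfs[-1]), spectral_pdf(imf)) < threshold:
            cmfs[-1] = cmfs[-1] + imf      # similar spectral content: extend current CMF
        else:
            cmfs.append(imf.copy())        # dissimilar content: start a new CMF
    return cmfs

# Stand-in "IMFs": band-passed noise in two overlapping bands plus one disjoint band.
fs, rng = 2048, np.random.default_rng(0)
noise = rng.normal(size=4 * fs)

def bandpass(x, lo, hi):
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

imfs = [bandpass(noise, 200, 300), bandpass(noise, 210, 310), bandpass(noise, 10, 40)]
print(f"{len(imfs)} modes grouped into {len(combine_modes(imfs))} CMFs")
```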

  3. RESOLVING THE ACTIVE GALACTIC NUCLEUS AND HOST EMISSION IN THE MID-INFRARED USING A MODEL-INDEPENDENT SPECTRAL DECOMPOSITION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hernán-Caballero, Antonio; Alonso-Herrero, Almudena; Hatziminaoglou, Evanthia

    2015-04-20

    We present results on the spectral decomposition of 118 Spitzer Infrared Spectrograph (IRS) spectra from local active galactic nuclei (AGNs) using a large set of Spitzer/IRS spectra as templates. The templates are themselves IRS spectra from extreme cases where a single physical component (stellar, interstellar, or AGN) completely dominates the integrated mid-infrared emission. We show that a linear combination of one template for each physical component reproduces the observed IRS spectra of AGN hosts with unprecedented fidelity for a template fitting method, with no need to model extinction separately. We use full probability distribution functions to estimate expectation values and uncertainties for observables, and find that the decomposition results are robust against degeneracies. Furthermore, we compare the AGN spectra derived from the spectral decomposition with sub-arcsecond resolution nuclear photometry and spectroscopy from ground-based observations. We find that the AGN component derived from the decomposition closely matches the nuclear spectrum, with a 1σ dispersion of 0.12 dex in luminosity and typical uncertainties of ∼0.19 in the spectral index and ∼0.1 in the silicate strength. We conclude that the emission from the host galaxy can be reliably removed from the IRS spectra of AGNs. This allows for unbiased studies of the AGN emission in intermediate- and high-redshift galaxies (currently inaccessible to ground-based observations) with archival Spitzer/IRS data and in the future with the Mid-InfraRed Instrument of the James Webb Space Telescope. The decomposition code and templates are available at http://denebola.org/ahc/deblendIRS.
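
    A toy sketch of the template-combination step described above: a non-negative linear combination of one "stellar", one "interstellar", and one "AGN" template is fitted to an observed spectrum with non-negative least squares. The templates, shapes, and weights here are synthetic, and the full probability-distribution treatment of the paper is not reproduced.

```python
import numpy as np
from scipy.optimize import nnls

# Synthetic mid-infrared templates on a common wavelength grid (arbitrary toy shapes).
wave = np.linspace(5.0, 35.0, 300)                       # microns
stellar = wave ** -1.5                                   # falling continuum
interstellar = np.exp(-0.5 * ((wave - 8.0) / 1.5) ** 2)  # PAH-like bump
agn = 0.02 * wave ** 1.2                                 # rising warm-dust continuum
templates = np.column_stack([stellar, interstellar, agn])

# Fake "observed" spectrum: known mixture plus noise.
true_weights = np.array([0.4, 1.2, 0.8])
rng = np.random.default_rng(2)
observed = templates @ true_weights + 0.01 * rng.normal(size=wave.size)

# Non-negative least squares keeps each physical component's contribution >= 0.
weights, residual = nnls(templates, observed)
print("recovered weights (stellar, interstellar, AGN):", np.round(weights, 3))
```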

  4. The suitability of visual taphonomic methods for digital photographs: An experimental approach with pig carcasses in a tropical climate.

    PubMed

    Ribéreau-Gayon, Agathe; Rando, Carolyn; Morgan, Ruth M; Carter, David O

    2018-05-01

    In the context of increased scrutiny of the methods in forensic sciences, it is essential to ensure that the approaches used in forensic taphonomy to measure decomposition and estimate the postmortem interval are underpinned by robust evidence-based data. Digital photographs are an important source of documentation in forensic taphonomic investigations, but the suitability of the current approaches for photographs, rather than for remains viewed in real time, is poorly studied, which can undermine accurate forensic conclusions. The present study aimed to investigate the suitability of 2D colour digital photographs for evaluating decomposition of exposed human analogues (Sus scrofa domesticus) in a tropical savanna environment (Hawaii), using two published scoring methods: Megyesi et al., 2005 and Keough et al., 2017. It was found that there were significant differences between the real-time and photograph decomposition scores when the Megyesi et al. method was used. However, the Keough et al. method applied to photographs reflected real-time decomposition more closely and thus appears more suitable for evaluating pig decomposition from 2D photographs. The findings indicate that the type of scoring method used has a significant impact on the ability to accurately evaluate the decomposition of exposed pig carcasses from photographs. It was further identified that photographic taphonomic analysis can reach high inter-observer reproducibility. These novel findings are of significant importance for the forensic sciences as they highlight the potential for high quality photograph coverage to provide useful complementary information for the forensic taphonomic investigation. New recommendations to develop robust transparent approaches adapted to photographs in forensic taphonomy are suggested based on these findings. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.

  5. In silico Pathway Activation Network Decomposition Analysis (iPANDA) as a method for biomarker development.

    PubMed

    Ozerov, Ivan V; Lezhnina, Ksenia V; Izumchenko, Evgeny; Artemov, Artem V; Medintsev, Sergey; Vanhaelen, Quentin; Aliper, Alexander; Vijg, Jan; Osipov, Andreyan N; Labat, Ivan; West, Michael D; Buzdin, Anton; Cantor, Charles R; Nikolsky, Yuri; Borisov, Nikolay; Irincheeva, Irina; Khokhlovich, Edward; Sidransky, David; Camargo, Miguel Luiz; Zhavoronkov, Alex

    2016-11-16

    Signalling pathway activation analysis is a powerful approach for extracting biologically relevant features from large-scale transcriptomic and proteomic data. However, modern pathway-based methods often fail to provide stable pathway signatures of a specific phenotype or reliable disease biomarkers. In the present study, we introduce the in silico Pathway Activation Network Decomposition Analysis (iPANDA) as a scalable robust method for biomarker identification using gene expression data. The iPANDA method combines precalculated gene coexpression data with gene importance factors based on the degree of differential gene expression and pathway topology decomposition for obtaining pathway activation scores. Using Microarray Analysis Quality Control (MAQC) data sets and pretreatment data on Taxol-based neoadjuvant breast cancer therapy from multiple sources, we demonstrate that iPANDA provides significant noise reduction in transcriptomic data and identifies highly robust sets of biologically relevant pathway signatures. We successfully apply iPANDA for stratifying breast cancer patients according to their sensitivity to neoadjuvant therapy.

  6. In silico Pathway Activation Network Decomposition Analysis (iPANDA) as a method for biomarker development

    PubMed Central

    Ozerov, Ivan V.; Lezhnina, Ksenia V.; Izumchenko, Evgeny; Artemov, Artem V.; Medintsev, Sergey; Vanhaelen, Quentin; Aliper, Alexander; Vijg, Jan; Osipov, Andreyan N.; Labat, Ivan; West, Michael D.; Buzdin, Anton; Cantor, Charles R.; Nikolsky, Yuri; Borisov, Nikolay; Irincheeva, Irina; Khokhlovich, Edward; Sidransky, David; Camargo, Miguel Luiz; Zhavoronkov, Alex

    2016-01-01

    Signalling pathway activation analysis is a powerful approach for extracting biologically relevant features from large-scale transcriptomic and proteomic data. However, modern pathway-based methods often fail to provide stable pathway signatures of a specific phenotype or reliable disease biomarkers. In the present study, we introduce the in silico Pathway Activation Network Decomposition Analysis (iPANDA) as a scalable robust method for biomarker identification using gene expression data. The iPANDA method combines precalculated gene coexpression data with gene importance factors based on the degree of differential gene expression and pathway topology decomposition for obtaining pathway activation scores. Using Microarray Analysis Quality Control (MAQC) data sets and pretreatment data on Taxol-based neoadjuvant breast cancer therapy from multiple sources, we demonstrate that iPANDA provides significant noise reduction in transcriptomic data and identifies highly robust sets of biologically relevant pathway signatures. We successfully apply iPANDA for stratifying breast cancer patients according to their sensitivity to neoadjuvant therapy. PMID:27848968

  7. Palm vein recognition based on directional empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Lee, Jen-Chun; Chang, Chien-Ping; Chen, Wei-Kuei

    2014-04-01

    Directional empirical mode decomposition (DEMD) has recently been proposed to make empirical mode decomposition suitable for texture analysis. Using DEMD, samples are decomposed into a series of images, referred to as two-dimensional intrinsic mode functions (2-D IMFs), from finer to larger scale. A DEMD-based two-directional linear discriminant analysis (2LDA) method for palm vein recognition is proposed. The proposed method progresses through three steps: (i) a set of 2-D IMF features of various scales and orientations is extracted using DEMD, (ii) the 2LDA method is then applied to reduce the dimensionality of the feature space in both the row and column directions, and (iii) the nearest neighbor classifier is used for classification. We also propose two strategies for using the set of 2-D IMF features: ensemble DEMD vein representation (EDVR) and multichannel DEMD vein representation (MDVR). In experiments using palm vein databases, the proposed MDVR-based 2LDA method achieved a recognition accuracy of 99.73%, thereby demonstrating its feasibility for palm vein recognition.

  8. Aligning observed and modelled behaviour based on workflow decomposition

    NASA Astrophysics Data System (ADS)

    Wang, Lu; Du, YuYue; Liu, Wei

    2017-09-01

    When business processes are mostly supported by information systems, the availability of event logs generated from these systems, as well as the requirement for appropriate process models, is increasing. Business processes can be discovered, monitored and enhanced by extracting process-related information. However, some events cannot be correctly identified because of the explosion in the amount of event logs. Therefore, a new process mining technique based on a workflow decomposition method is proposed in this paper. Petri nets (PNs) are used to describe business processes, and conformance checking of event logs against process models is then investigated. A decomposition approach is proposed to divide large process models and event logs into several separate parts that can be analysed independently, while an alignment approach based on a state equation method in PN theory enhances the performance of conformance checking. Both approaches are implemented in the process mining framework ProM. The correctness and effectiveness of the proposed methods are illustrated through experiments.

  9. Decomposition of Imidazolium-Based Ionic Liquids in Contact with Lithium Metal.

    PubMed

    Schmitz, Paulo; Jakelski, Rene; Pyschik, Marcelina; Jalkanen, Kirsi; Nowak, Sascha; Winter, Martin; Bieker, Peter

    2017-03-09

    Ionic liquids (ILs) are considered to be suitable electrolyte components for lithium-metal batteries. Imidazolium cation based ILs were previously found to be applicable for battery systems with a lithium-metal negative electrode. However, herein it is shown that, in contrast to the well-known IL N-butyl-N-methylpyrrolidinium bis[(trifluoromethyl)sulfonyl]imide ([Pyr 14 ][TFSI]), 1-ethyl-3-methylimidazolium bis[(trifluoromethyl)sulfonyl]imide ([C2MIm][TFSI]) and 1-butyl-3-methylimidazolium bis[(trifluoromethyl)sulfonyl]imide ([C4MIm][TFSI]) are chemically unstable versus metallic lithium. A lithium-metal sheet was immersed in pure imidazolium-based IL samples and aged at 60 °C for 28 days. Afterwards, the aged IL samples were investigated to deduce possible decomposition products of the imidazolium cation. The chemical instability of the ILs in contact with lithium metal and a possible decomposition starting point are shown for the first time. Furthermore, the investigated imidazolium-based ILs can be utilized for lithium-metal batteries through the addition of the solid-electrolyte interphase (SEI) film-forming additive fluoroethylene carbonate. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  10. Quantitative Analysis of Microstructural Constituents in Welded Transformation-Induced-Plasticity Steels

    NASA Astrophysics Data System (ADS)

    Amirthalingam, M.; Hermans, M. J. M.; Zhao, L.; Richardson, I. M.

    2010-02-01

    A quantitative analysis of retained austenite and nonmetallic inclusions in gas tungsten arc (GTA)-welded aluminum-containing transformation-induced-plasticity (TRIP) steels is presented. The amount of retained austenite in the heat-affected and fusion zones of welded aluminum-containing TRIP steel with different base metal austenite fractions has been measured by magnetic saturation measurements, to study the effect of weld thermal cycles on the stabilization of austenite. It is found that for base metals containing 3 to 14 pct of austenite, 4 to 13 pct of austenite is found in the heat-affected zones and 6 to 10 pct in the fusion zones. The decomposition kinetics of retained austenite in the base metal and welded samples was also studied by thermomagnetic measurements. The decomposition kinetics of the austenite in the fusion zone is found to be slower compared to that in the base metal. Thermomagnetic measurements indicated the formation of ferromagnetic ɛ carbides above 290 °C and paramagnetic η( ɛ') transient iron carbides at approximately 400 °C due to the decomposition of austenite during heating.

  11. Complete Decomposition of Li2CO3 in Li-O2 Batteries Using Ir/B4C as Noncarbon-Based Oxygen Electrode.

    PubMed

    Song, Shidong; Xu, Wu; Zheng, Jianming; Luo, Langli; Engelhard, Mark H; Bowden, Mark E; Liu, Bin; Wang, Chong-Min; Zhang, Ji-Guang

    2017-03-08

    Instability of carbon-based oxygen electrodes and incomplete decomposition of Li2CO3 during the charge process are critical barriers for rechargeable Li-O2 batteries. Here we report the complete decomposition of Li2CO3 in Li-O2 batteries using an ultrafine iridium-decorated boron carbide (Ir/B4C) nanocomposite as a noncarbon-based oxygen electrode. A systematic investigation of charging the Li2CO3-preloaded Ir/B4C electrode in an ether-based electrolyte demonstrates that the Ir/B4C electrode can decompose Li2CO3 with an efficiency close to 100% at a voltage below 4.37 V. In contrast, the bare B4C without the Ir electrocatalyst can only decompose 4.7% of the preloaded Li2CO3. Theoretical analysis indicates that the highly efficient decomposition of Li2CO3 can be attributed to the synergistic effects of Ir and B4C: Ir has a high affinity for oxygen species, which could lower the energy barrier for electrochemical oxidation of Li2CO3, while B4C exhibits much higher chemical and electrochemical stability than carbon-based electrodes and high catalytic activity for Li-O2 reactions. A Li-O2 battery using Ir/B4C as the oxygen electrode material shows greatly enhanced cycling stability compared with those using the bare B4C oxygen electrode. Further development of these stable oxygen electrodes could accelerate practical applications of Li-O2 batteries.

  12. Thermal decomposition of nano-enabled thermoplastics: Possible environmental health and safety implications

    PubMed Central

    Sotiriou, Georgios A.; Singh, Dilpreet; Zhang, Fang; Chalbot, Marie-Cecile G.; Spielman-Sun, Eleanor; Hoering, Lutz; Kavouras, Ilias G.; Lowry, Gregory V.; Wohlleben, Wendel; Demokritou, Philip

    2015-01-01

    Nano-enabled products (NEPs) are now part of everyday life, prompting detailed investigation of potential nano-release across their life cycle. Particularly interesting is their end-of-life thermal decomposition scenario. Here, we examine the thermal decomposition of a widely used NEP, namely thermoplastic nanocomposites, and assess the properties of the byproducts (released aerosol and residual ash) and possible environmental health and safety implications. We focus on establishing a fundamental understanding of the effect of thermal decomposition parameters, such as the polymer matrix, nanofiller properties, and decomposition temperature, on the properties of the byproducts using a recently developed lab-based experimental integrated platform. Our results indicate that the thermoplastic polymer matrix strongly influences the size and morphology of the released aerosol, while there was minimal but detectable nano-release, especially when inorganic nanofillers were used. The chemical composition of the released aerosol was found not to be strongly influenced by the presence of nanofiller, at least for the low, industry-relevant loadings assessed here. Furthermore, the morphology and composition of the residual ash were found to be strongly influenced by the presence of nanofiller. The findings presented here on thermal decomposition/incineration of NEPs raise important questions and concerns regarding the potential fate and transport of released engineered nanomaterials in environmental media and potential environmental health and safety implications. PMID:26642449

  13. On the classification of mixed floating pollutants on the Yellow Sea of China by using a quad-polarized SAR image

    NASA Astrophysics Data System (ADS)

    Wang, Xiaochen; Shao, Yun; Tian, Wei; Li, Kun

    2018-06-01

    This study explored different methodologies using a C-band RADARSAT-2 quad-polarized Synthetic Aperture Radar (SAR) image acquired over China's Yellow Sea to investigate polarization decomposition parameters for identifying mixed floating pollutants against a complex ocean background. It was found that a single polarization decomposition did not meet the demand for detecting and classifying multiple floating pollutants, even with a quad-polarized SAR image. Furthermore, considering that Yamaguchi decomposition is sensitive to vegetation and the algal variety Enteromorpha prolifera, while H/A/alpha decomposition is sensitive to oil spills, a combination of parameters deduced from these two decompositions was proposed for marine environmental monitoring of mixed floating sea-surface pollutants. A combination of volume scattering, surface scattering, and scattering entropy was the best indicator for classifying mixed floating pollutants against a complex ocean background. The Kappa coefficients for Enteromorpha prolifera and oil spills were 0.7514 and 0.8470, respectively, evidence that the composite polarized parameters based on quad-polarized SAR imagery proposed in this research provide an effective monitoring method for complex marine pollution.

  14. Density-dependent liquid nitromethane decomposition: molecular dynamics simulations based on ReaxFF.

    PubMed

    Rom, Naomi; Zybin, Sergey V; van Duin, Adri C T; Goddard, William A; Zeiri, Yehuda; Katz, Gil; Kosloff, Ronnie

    2011-09-15

    The decomposition mechanism of hot liquid nitromethane at various compressions was studied using reactive force field (ReaxFF) molecular dynamics simulations. A competition between two different initial thermal decomposition schemes is observed, depending on compression. At low densities, unimolecular C-N bond cleavage is the dominant route, producing CH(3) and NO(2) fragments. As density and pressure rise approaching the Chapman-Jouget detonation conditions (∼30% compression, >2500 K) the dominant mechanism switches to the formation of the CH(3)NO fragment via H-transfer and/or N-O bond rupture. The change in the decomposition mechanism of hot liquid NM leads to a different kinetic and energetic behavior, as well as products distribution. The calculated density dependence of the enthalpy change correlates with the change in initial decomposition reaction mechanism. It can be used as a convenient and useful global parameter for the detection of reaction dynamics. Atomic averaged local diffusion coefficients are shown to be sensitive to the reactions dynamics, and can be used to distinguish between time periods where chemical reactions occur and diffusion-dominated, nonreactive time periods. © 2011 American Chemical Society

  15. Nonlinear mode decomposition: A noise-robust, adaptive decomposition method

    NASA Astrophysics Data System (ADS)

    Iatsenko, Dmytro; McClintock, Peter V. E.; Stefanovska, Aneta

    2015-09-01

    The signals emanating from complex systems are usually composed of a mixture of different oscillations which, for a reliable analysis, should be separated from each other and from the inevitable background of noise. Here we introduce an adaptive decomposition tool—nonlinear mode decomposition (NMD)—which decomposes a given signal into a set of physically meaningful oscillations for any wave form, simultaneously removing the noise. NMD is based on the powerful combination of time-frequency analysis techniques—which, together with the adaptive choice of their parameters, make it extremely noise robust—and surrogate data tests used to identify interdependent oscillations and to distinguish deterministic from random activity. We illustrate the application of NMD to both simulated and real signals and demonstrate its qualitative and quantitative superiority over other approaches, such as (ensemble) empirical mode decomposition, Karhunen-Loève expansion, and independent component analysis. We point out that NMD is likely to be applicable and useful in many different areas of research, such as geophysics, finance, and the life sciences. The necessary MATLAB codes for running NMD are freely available for download.

  16. Efficient material decomposition method for dual-energy X-ray cargo inspection system

    NASA Astrophysics Data System (ADS)

    Lee, Donghyeon; Lee, Jiseoc; Min, Jonghwan; Lee, Byungcheol; Lee, Byeongno; Oh, Kyungmin; Kim, Jaehyun; Cho, Seungryong

    2018-03-01

    Dual-energy X-ray inspection systems are widely used today because they provide both X-ray attenuation contrast of the imaged object and its material information. The material decomposition capability allows a higher detection sensitivity for potential targets, for example purposely loaded impurities in agricultural product inspections and threats in security scans. Dual-energy X-ray transmission data can be transformed into two basis-material thickness data sets, and the accuracy of this transformation relies heavily on the calibration of the material decomposition process. The calibration process in general can be laborious and time consuming. Moreover, a conventional calibration method is often challenged by the nonuniform spectral characteristics of the X-ray beam across the entire field-of-view (FOV). In this work, we developed an efficient material decomposition calibration process for a linear accelerator (LINAC) based high-energy X-ray cargo inspection system. We also proposed a multi-spot calibration method to improve the decomposition performance throughout the entire FOV. Experimental validation of the proposed method has been demonstrated using a cargo inspection system that supports 6 MV and 9 MV dual-energy imaging.
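
    A highly simplified sketch of two-basis material decomposition: if each dual-energy log-transmission measurement is modelled as a linear combination of two basis-material thicknesses with effective attenuation coefficients, the thicknesses follow from a 2x2 inversion. Real LINAC spectra are polychromatic, which is why the paper relies on a multi-spot calibration instead; all coefficients below are invented.

```python
import numpy as np

# Assumed effective attenuation coefficients (1/cm) of the two basis materials
# at the low (6 MV) and high (9 MV) beams -- illustrative numbers only.
#               basis 1  basis 2
M = np.array([[0.060,   0.140],    # low-energy beam
              [0.045,   0.095]])   # high-energy beam

def decompose(log_low, log_high):
    """Recover basis-material thicknesses t (cm) from -ln(I/I0) at the two energies."""
    return np.linalg.solve(M, np.array([log_low, log_high]))

# Forward-simulate a pixel containing 3 cm of basis 1 and 5 cm of basis 2, then invert.
t_true = np.array([3.0, 5.0])
log_low, log_high = M @ t_true
print("recovered thicknesses (cm):", np.round(decompose(log_low, log_high), 3))
```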

  17. Amplitude-cyclic frequency decomposition of vibration signals for bearing fault diagnosis based on phase editing

    NASA Astrophysics Data System (ADS)

    Barbini, L.; Eltabach, M.; Hillis, A. J.; du Bois, J. L.

    2018-03-01

    In rotating machine diagnosis, different spectral tools are used to analyse vibration signals. Despite their good diagnostic performance, such tools are usually refined, computationally complex to implement and require oversight by an expert user. This paper introduces an intuitive and easy-to-implement method for vibration analysis: amplitude-cyclic frequency decomposition. This method first separates vibration signals according to their spectral amplitudes and then uses the squared envelope spectrum to reveal the presence of cyclostationarity at each amplitude level. The intuitive idea is that in a rotating machine different components contribute vibrations at different amplitudes; for instance, defective bearings contribute a very weak signal in contrast to gears. This paper also introduces a new quantity, the decomposition squared envelope spectrum, which enables separation between the components of a rotating machine. The amplitude-cyclic frequency decomposition and the decomposition squared envelope spectrum are tested on real-world signals, both at stationary and varying speeds, using data from a wind turbine gearbox and an aircraft engine. In addition, a benchmark comparison with the spectral correlation method is presented.
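
    As an illustration of the squared envelope spectrum used in this method, the sketch below computes it for a synthetic vibration signal containing weak repetitive impulses (a stand-in for a bearing fault) buried under a stronger tone; all signal parameters are made up, and the amplitude-level separation step of the paper is not reproduced.

```python
import numpy as np
from scipy.signal import hilbert

fs = 20000                                   # sampling rate, Hz
t = np.arange(0, 2.0, 1 / fs)

# Synthetic vibration: strong gear-mesh tone + weak impulses repeating at 87 Hz (fault).
fault_rate = 87.0
impulses = np.zeros_like(t)
impulses[(np.arange(0, 2.0, 1 / fault_rate) * fs).astype(int)] = 1.0
ring_t = np.arange(0, 0.005, 1 / fs)
ringing = np.exp(-800 * ring_t) * np.sin(2 * np.pi * 4000 * ring_t)
x = 2.0 * np.sin(2 * np.pi * 320 * t) + 0.5 * np.convolve(impulses, ringing, mode="same")
x += 0.05 * np.random.default_rng(3).normal(size=t.size)

# Squared envelope spectrum: magnitude spectrum of |analytic signal|^2 (mean removed).
envelope_sq = np.abs(hilbert(x)) ** 2
ses = np.abs(np.fft.rfft(envelope_sq - envelope_sq.mean()))
freqs = np.fft.rfftfreq(envelope_sq.size, 1 / fs)

band = (freqs > 20) & (freqs < 300)
print("dominant cyclic frequency (Hz):", freqs[band][np.argmax(ses[band])])
```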

  18. Decomposition Behavior of Curcumin during Solar Irradiation when Contact with Inorganic Particles

    NASA Astrophysics Data System (ADS)

    Nandiyanto, A. B. D.; Wiryani, A. S.; Rusli, A.; Purnamasari, A.; Abdullah, A. G.; Riza, L. S.

    2017-03-01

    Curcumin is a material that has been widely used in medicine, Asian cuisine, and traditional cosmetics; its stability has therefore been widely studied. The purpose of this study was to investigate the stability of a curcumin solution against solar irradiation when in contact with an inorganic material. As a model inorganic material, titanium dioxide (TiO2) was used. In the experimental method, the curcumin solution was exposed to solar irradiation. To examine the stability of curcumin in contact with the inorganic material, we added TiO2 microparticles at different concentrations. The results showed that the concentration of curcumin decreased during solar irradiation, and the lower the initial curcumin concentration, the higher the decomposition rate obtained. The decomposition rate increased greatly when TiO2 was added, with higher TiO2 concentrations leading to faster decomposition. Based on these results, we conclude that curcumin is relatively stable as long as a high curcumin concentration is used and no inorganic material is present, and that decomposition can be minimized by avoiding contact with inorganic materials.

  19. The deconvolution of complex spectra by artificial immune system

    NASA Astrophysics Data System (ADS)

    Galiakhmetova, D. I.; Sibgatullin, M. E.; Galimullin, D. Z.; Kamalova, D. I.

    2017-11-01

    An application of the artificial immune system method to the decomposition of complex spectra is presented. The results of decomposing a model contour consisting of three Gaussian components are demonstrated. The artificial immune system is an optimization method based on the behaviour of the biological immune system and belongs to the family of modern heuristic search optimization techniques.

  20. Application of composite dictionary multi-atom matching in gear fault diagnosis.

    PubMed

    Cui, Lingli; Kang, Chenhui; Wang, Huaqing; Chen, Peng

    2011-01-01

    The sparse decomposition based on matching pursuit is an adaptive sparse expression method for signals. This paper proposes an idea concerning a composite dictionary multi-atom matching decomposition and reconstruction algorithm, and the introduction of threshold de-noising in the reconstruction algorithm. Based on the structural characteristics of gear fault signals, a composite dictionary combining the impulse time-frequency dictionary and the Fourier dictionary was constituted, and a genetic algorithm was applied to search for the best matching atom. The analysis results of gear fault simulation signals indicated the effectiveness of the hard threshold, and the impulse or harmonic characteristic components could be separately extracted. Meanwhile, the robustness of the composite dictionary multi-atom matching algorithm at different noise levels was investigated. Aiming at the effects of data lengths on the calculation efficiency of the algorithm, an improved segmented decomposition and reconstruction algorithm was proposed, and the calculation efficiency of the decomposition algorithm was significantly enhanced. In addition it is shown that the multi-atom matching algorithm was superior to the single-atom matching algorithm in both calculation efficiency and algorithm robustness. Finally, the above algorithm was applied to gear fault engineering signals, and achieved good results.
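
    A bare-bones sketch of matching pursuit over a composite dictionary: here the dictionary simply concatenates harmonic (Fourier) atoms and unit impulse atoms, and the best atom at each step is found by exhaustive correlation rather than the genetic search used in the paper; the threshold de-noising and segmented reconstruction steps are omitted, and all signal parameters are made up.

```python
import numpy as np

def matching_pursuit(x, dictionary, n_atoms=5):
    """Greedy sparse decomposition: repeatedly pick the atom most correlated with the residual."""
    residual = x.astype(float).copy()
    coeffs = np.zeros(dictionary.shape[1])
    for _ in range(n_atoms):
        correlations = dictionary.T @ residual
        k = np.argmax(np.abs(correlations))
        coeffs[k] += correlations[k]
        residual -= correlations[k] * dictionary[:, k]
    return coeffs, residual

# Composite dictionary: unit-norm harmonic atoms plus unit impulse atoms.
n = 256
t = np.arange(n)
fourier = np.column_stack([np.cos(2 * np.pi * f * t / n) for f in range(1, 40)])
fourier /= np.linalg.norm(fourier, axis=0)
impulses = np.eye(n)                       # each column is a single impulse atom
dictionary = np.hstack([fourier, impulses])

# Test signal: one harmonic component plus two impulsive "fault" spikes.
x = 1.5 * np.cos(2 * np.pi * 8 * t / n)
x[60] += 2.0
x[180] += 1.5

coeffs, residual = matching_pursuit(x, dictionary, n_atoms=5)
print("selected atom indices:", np.nonzero(coeffs)[0],
      "residual norm:", round(np.linalg.norm(residual), 4))
```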

  1. Nonlinear QR code based optical image encryption using spiral phase transform, equal modulus decomposition and singular value decomposition

    NASA Astrophysics Data System (ADS)

    Kumar, Ravi; Bhaduri, Basanta; Nishchal, Naveen K.

    2018-01-01

    In this study, we propose a quick response (QR) code based nonlinear optical image encryption technique using the spiral phase transform (SPT), equal modulus decomposition (EMD) and singular value decomposition (SVD). First, the primary image is converted into a QR code and then multiplied with a spiral phase mask (SPM). Next, the product is spiral phase transformed with a particular spiral phase function, and further, EMD is performed on the output of the SPT, which results in two complex images, Z1 and Z2. Among these, Z1 is further Fresnel propagated with distance d, and Z2 is reserved as a decryption key. Afterwards, SVD is performed on the Fresnel propagated output to get three decomposed matrices, i.e. one diagonal matrix and two unitary matrices. The two unitary matrices are modulated with two different SPMs and then the inverse SVD is performed using the diagonal matrix and modulated unitary matrices to get the final encrypted image. Numerical simulation results confirm the validity and effectiveness of the proposed technique. The proposed technique is robust against noise attacks, specific attacks, and brute-force attacks. Simulation results are presented in support of the proposed idea.

  2. Computer implemented empirical mode decomposition method, apparatus and article of manufacture

    NASA Technical Reports Server (NTRS)

    Huang, Norden E. (Inventor)

    1999-01-01

    A computer implemented physical signal analysis method is invented. This method includes two essential steps and the associated presentation techniques of the results. All the steps exist only in a computer: there are no analytic expressions resulting from the method. The first step is a computer implemented Empirical Mode Decomposition to extract a collection of Intrinsic Mode Functions (IMF) from nonlinear, nonstationary physical signals. The decomposition is based on the direct extraction of the energy associated with various intrinsic time scales in the physical signal. Expressed in the IMF's, they have well-behaved Hilbert Transforms from which instantaneous frequencies can be calculated. The second step is the Hilbert Transform. The final result is the Hilbert Spectrum. Thus, the invention can localize any event on the time as well as the frequency axis. The decomposition can also be viewed as an expansion of the data in terms of the IMF's. Then, these IMF's, based on and derived from the data, can serve as the basis of that expansion. The local energy and the instantaneous frequency derived from the IMF's through the Hilbert transform give a full energy-frequency-time distribution of the data which is designated as the Hilbert Spectrum.
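
    The second step described above (the Hilbert transform of each IMF) can be sketched as follows: instantaneous amplitude and frequency are obtained from the analytic signal of a single mode. The EMD sifting itself is not implemented here; a clean linear chirp stands in for an IMF.

```python
import numpy as np
from scipy.signal import hilbert

fs = 1000                                    # sampling rate, Hz
t = np.arange(0, 2.0, 1 / fs)

# A linear chirp (5 -> 25 Hz) stands in for one intrinsic mode function (IMF).
imf = np.cos(2 * np.pi * (5 * t + 5 * t ** 2))

analytic = hilbert(imf)                      # analytic signal via the Hilbert transform
amplitude = np.abs(analytic)                 # instantaneous amplitude (envelope)
phase = np.unwrap(np.angle(analytic))        # unwrapped instantaneous phase
inst_freq = np.gradient(phase, t) / (2 * np.pi)   # instantaneous frequency, Hz

# The amplitude and frequency tracks over time form one row of the Hilbert spectrum.
mid = len(t) // 2
print(f"instantaneous frequency at t = 1.0 s: {inst_freq[mid]:.2f} Hz (expected ~15 Hz)")
```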

  3. Long-term litter decomposition controlled by manganese redox cycling

    PubMed Central

    Keiluweit, Marco; Nico, Peter; Harmon, Mark E.; Mao, Jingdong; Pett-Ridge, Jennifer; Kleber, Markus

    2015-01-01

    Litter decomposition is a keystone ecosystem process impacting nutrient cycling and productivity, soil properties, and the terrestrial carbon (C) balance, but the factors regulating decomposition rate are still poorly understood. Traditional models assume that the rate is controlled by litter quality, relying on parameters such as lignin content as predictors. However, a strong correlation has been observed between the manganese (Mn) content of litter and decomposition rates across a variety of forest ecosystems. Here, we show that long-term litter decomposition in forest ecosystems is tightly coupled to Mn redox cycling. Over 7 years of litter decomposition, microbial transformation of litter was paralleled by variations in Mn oxidation state and concentration. A detailed chemical imaging analysis of the litter revealed that fungi recruit and redistribute unreactive Mn2+ provided by fresh plant litter to produce oxidative Mn3+ species at sites of active decay, with Mn eventually accumulating as insoluble Mn3+/4+ oxides. Formation of reactive Mn3+ species coincided with the generation of aromatic oxidation products, providing direct proof of the previously posited role of Mn3+-based oxidizers in the breakdown of litter. Our results suggest that the litter-decomposing machinery at our coniferous forest site depends on the ability of plants and microbes to supply, accumulate, and regenerate short-lived Mn3+ species in the litter layer. This observation indicates that biogeochemical constraints on bioavailability, mobility, and reactivity of Mn in the plant–soil system may have a profound impact on litter decomposition rates. PMID:26372954

  4. Methanol decomposition reactions over a boron-doped graphene supported Ru-Pt catalyst.

    PubMed

    Damte, Jemal Yimer; Lyu, Shang-Lin; Leggesse, Ermias Girma; Jiang, Jyh Chiang

    2018-04-04

    The decomposition of methanol is currently attracting research attention due to the potential widespread applications of its end products. In this work, density functional theory (DFT) calculations have been performed to investigate the adsorption and decomposition of methanol on a Ru-Pt/boron doped graphene surface. We find that the most favorable reaction pathway is methanol (CH3OH) decomposition through O-H bond breaking to form methoxide (CH3O) as the initial step, followed by further dehydrogenation steps which generate formaldehyde (CH2O), formyl (CHO), and carbon monoxide (CO). The calculations illustrate that CH3OH and CO groups prefer to adsorb at the Ru-top sites, while CH2OH, CH3O, CH2O, CHO, and H2 groups favor the Ru-Pt bridge sites, indicating the preference of Ru atoms to adsorb the active intermediates or species having lone-pair electrons. Based on the results, it is found that the energy barrier for CH3OH decomposition through the initial O-H bond breaking is less than its desorption energy of 0.95 eV, showing that CH3OH prefers to undergo decomposition to CH3O rather than direct desorption. The study provides in-depth theoretical insights into the potentially enhanced catalytic activity of Ru-Pt/boron doped graphene surfaces for methanol decomposition reactions, thereby contributing to the understanding and designing of an efficient catalyst under optimum conditions.

  5. Long-term litter decomposition controlled by manganese redox cycling.

    PubMed

    Keiluweit, Marco; Nico, Peter; Harmon, Mark E; Mao, Jingdong; Pett-Ridge, Jennifer; Kleber, Markus

    2015-09-22

    Litter decomposition is a keystone ecosystem process impacting nutrient cycling and productivity, soil properties, and the terrestrial carbon (C) balance, but the factors regulating decomposition rate are still poorly understood. Traditional models assume that the rate is controlled by litter quality, relying on parameters such as lignin content as predictors. However, a strong correlation has been observed between the manganese (Mn) content of litter and decomposition rates across a variety of forest ecosystems. Here, we show that long-term litter decomposition in forest ecosystems is tightly coupled to Mn redox cycling. Over 7 years of litter decomposition, microbial transformation of litter was paralleled by variations in Mn oxidation state and concentration. A detailed chemical imaging analysis of the litter revealed that fungi recruit and redistribute unreactive Mn(2+) provided by fresh plant litter to produce oxidative Mn(3+) species at sites of active decay, with Mn eventually accumulating as insoluble Mn(3+/4+) oxides. Formation of reactive Mn(3+) species coincided with the generation of aromatic oxidation products, providing direct proof of the previously posited role of Mn(3+)-based oxidizers in the breakdown of litter. Our results suggest that the litter-decomposing machinery at our coniferous forest site depends on the ability of plants and microbes to supply, accumulate, and regenerate short-lived Mn(3+) species in the litter layer. This observation indicates that biogeochemical constraints on bioavailability, mobility, and reactivity of Mn in the plant-soil system may have a profound impact on litter decomposition rates.

  6. On the decomposition of synchronous state machines using sequence invariant state machines

    NASA Technical Reports Server (NTRS)

    Hebbalalu, K.; Whitaker, S.; Cameron, K.

    1992-01-01

    This paper presents a few techniques for the decomposition of Synchronous State Machines of medium to large sizes into smaller component machines. The methods are based on the nature of the transitions and sequences of states in the machine and on the number and variety of inputs to the machine. The results of the decomposition, and of using the Sequence Invariant State Machine (SISM) Design Technique for generating the component machines, include great ease and quickness in the design and implementation processes. Furthermore, there is increased flexibility in making modifications to the original design leading to negligible re-design time.

  7. Generalized neurofuzzy network modeling algorithms using Bézier-Bernstein polynomial functions and additive decomposition.

    PubMed

    Hong, X; Harris, C J

    2000-01-01

    This paper introduces a new neurofuzzy model construction algorithm for nonlinear dynamic systems based upon basis functions that are Bézier-Bernstein polynomial functions. The approach is general in that it copes with n-dimensional inputs by utilising an additive decomposition construction to overcome the curse of dimensionality associated with high n. This new construction algorithm also introduces univariate Bézier-Bernstein polynomial functions for the completeness of the generalized procedure. Like B-spline expansion based neurofuzzy systems, Bézier-Bernstein polynomial function based neurofuzzy networks hold desirable properties such as non-negativity of the basis functions, unity of support, and interpretability of the basis functions as fuzzy membership functions, with the additional advantages of structural parsimony and Delaunay input space partitioning, essentially overcoming the curse of dimensionality associated with conventional fuzzy and RBF networks. This new modelling network is based on an additive decomposition approach together with two separate basis function formation approaches for the univariate and bivariate Bézier-Bernstein polynomial functions used in model construction. The overall network weights are then learnt using conventional least squares methods. Numerical examples are included to demonstrate the effectiveness of this new data-based modelling approach.
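
    A small sketch of the univariate Bézier-Bernstein basis mentioned above: the degree-n Bernstein polynomials are non-negative and sum to one on [0, 1], which is what allows them to be read as fuzzy membership functions, and a one-dimensional model is then just a weighted sum of these basis functions. The tensor-product/additive-decomposition construction and the least-squares training are not reproduced here, and the weights below are illustrative.

```python
import numpy as np
from scipy.special import comb

def bernstein_basis(x, degree):
    """Evaluate the degree-n Bernstein basis functions B_{k,n}(x) for x in [0, 1]."""
    x = np.asarray(x, dtype=float)
    k = np.arange(degree + 1)
    # Shape (len(x), degree + 1): one column per basis function.
    return comb(degree, k) * x[:, None] ** k * (1.0 - x[:, None]) ** (degree - k)

x = np.linspace(0.0, 1.0, 201)
B = bernstein_basis(x, degree=4)

print("non-negative:", bool(np.all(B >= 0)))
print("partition of unity (max deviation):", float(np.max(np.abs(B.sum(axis=1) - 1.0))))

# A simple one-dimensional "model": a weighted sum of the basis functions.
weights = np.array([0.0, 0.5, 1.2, 0.4, -0.3])      # illustrative rule consequents
y_hat = B @ weights
print("model output at x = 0.5:", round(float(y_hat[100]), 4))
```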

  8. Organisational determinants of production and efficiency in general practice: a population-based study.

    PubMed

    Olsen, Kim Rose; Gyrd-Hansen, Dorte; Sørensen, Torben Højmark; Kristensen, Troels; Vedsted, Peter; Street, Andrew

    2013-04-01

    A shortage of general practitioners (GPs) and an increased political focus on primary care have reinforced the interest in efficiency analysis in the Danish primary care sector. This paper assesses the association between organisational factors of general practices and production and efficiency. We assume that production and efficiency can be modelled using a behavioural production function. We apply the Battese and Coelli (Empir Econ 20:325-332, 1995) estimator to decompose the exogenous variables into those that determine the production frontier and those that determine each GP's distance to this frontier. Two different measures of practice outputs (number of office visits and total production) were applied and the results compared. The results indicate that nurses do not substitute for GPs in production. The production function exhibited constant returns to scale. The mean level of efficiency was between 0.79 and 0.84, and list size was the most important determinant of variation in efficiency levels. Nurses currently undertake different tasks than GPs, and larger practices do not lead to increased production per GP. However, a relative increase in list size increased efficiency. This indicates that organisational changes aiming to increase capacity in general practice should be carefully designed and tested.

  9. A Perturbation Based Decomposition of Compound-Evoked Potentials for Characterization of Nerve Fiber Size Distributions.

    PubMed

    Szlavik, Robert B

    2016-02-01

    The characterization of peripheral nerve fiber distributions, in terms of diameter or velocity, is of clinical significance because information associated with these distributions can be utilized in the differential diagnosis of peripheral neuropathies. Electro-diagnostic techniques can be applied to the investigation of peripheral neuropathies and can yield valuable diagnostic information while being minimally invasive. Nerve conduction velocity studies are single parameter tests that yield no detailed information regarding the characteristics of the population of nerve fibers that contribute to the compound-evoked potential. Decomposition of the compound-evoked potential, such that the velocity or diameter distribution of the contributing nerve fibers may be determined, is necessary if information regarding the population of contributing nerve fibers is to be ascertained from the electro-diagnostic study. In this work, a perturbation-based decomposition of compound-evoked potentials is proposed that facilitates determination of the fiber diameter distribution associated with the compound-evoked potential. The decomposition is based on representing the single fiber-evoked potential, associated with each diameter class, as being perturbed by contributions, of varying degree, from all the other diameter class single fiber-evoked potentials. The resultant estimator of the contributing nerve fiber diameter distribution is valid for relatively large separations in diameter classes. It is also useful in situations where the separation between diameter classes is small and the concomitant single fiber-evoked potentials are not orthogonal.

  10. A Novel Framework Based on FastICA for High Density Surface EMG Decomposition

    PubMed Central

    Chen, Maoqi; Zhou, Ping

    2015-01-01

    This study presents a progressive FastICA peel-off (PFP) framework for high density surface electromyogram (EMG) decomposition. The novel framework is based on a shift-invariant model for describing surface EMG. The decomposition process can be viewed as progressively expanding the set of motor unit spike trains, which is primarily based on FastICA. To overcome the local convergence of FastICA, a “peel off” strategy (i.e. removal of the estimated motor unit action potential (MUAP) trains from the previous step) is used to mitigate the effects of the already identified motor units, so more motor units can be extracted. Moreover, a constrained FastICA is applied to assess the extracted spike trains and correct possible erroneous or missed spikes. These procedures work together to improve the decomposition performance. The proposed framework was validated using simulated surface EMG signals with different motor unit numbers (30, 70, 91) and signal to noise ratios (SNRs) (20, 10, 0 dB). The results demonstrated relatively large numbers of extracted motor units and high accuracies (high F1-scores). The framework was also tested with 111 trials of 64-channel electrode array experimental surface EMG signals during the first dorsal interosseous (FDI) muscle contraction at different intensities. On average 14.1 ± 5.0 motor units were identified from each trial of experimental surface EMG signals. PMID:25775496
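
    A minimal sketch of the progressive "peel-off" idea, assuming multi-channel surface EMG arranged as (samples × channels). It uses scikit-learn's FastICA for the source extraction step and a crude threshold/spike-triggered-averaging step in place of the paper's constrained FastICA refinement; the function name, window length, refractory rule, and thresholds are illustrative assumptions, not the authors' implementation.

    ```python
    import numpy as np
    from sklearn.decomposition import FastICA

    def peel_off_decompose(emg, n_iter=5, fs=2000, win=40):
        """Crude progressive FastICA peel-off sketch; emg has shape (n_samples, n_channels)."""
        residual = np.asarray(emg, dtype=float).copy()
        spike_trains = []
        for _ in range(n_iter):
            # 1) extract one independent component from the current residual
            ica = FastICA(n_components=1, random_state=0, max_iter=500)
            source = ica.fit_transform(residual)[:, 0]
            # 2) detect spikes of the putative motor unit (simple threshold rule)
            spikes = np.flatnonzero(np.abs(source) > 3.0 * np.std(source))
            if spikes.size < 5:
                break
            # keep only firings at least 20 ms apart (illustrative refractory assumption)
            spikes = spikes[np.insert(np.diff(spikes) > int(0.02 * fs), 0, True)]
            half = win // 2
            spikes = spikes[(spikes > half) & (spikes < residual.shape[0] - half)]
            if spikes.size < 2:
                break
            spike_trains.append(spikes)
            # 3) spike-triggered averaging -> MUAP template per channel
            segs = np.stack([residual[s - half:s + half, :] for s in spikes])
            template = segs.mean(axis=0)                  # shape (win, n_channels)
            # 4) "peel off": subtract the reconstructed MUAP train from the residual
            for s in spikes:
                residual[s - half:s + half, :] -= template
        return spike_trains, residual
    ```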

  11. Set-Based Discrete Particle Swarm Optimization Based on Decomposition for Permutation-Based Multiobjective Combinatorial Optimization Problems.

    PubMed

    Yu, Xue; Chen, Wei-Neng; Gu, Tianlong; Zhang, Huaxiang; Yuan, Huaqiang; Kwong, Sam; Zhang, Jun

    2018-07-01

    This paper studies a specific class of multiobjective combinatorial optimization problems (MOCOPs), namely the permutation-based MOCOPs. Many commonly seen MOCOPs, e.g., multiobjective traveling salesman problem (MOTSP), multiobjective project scheduling problem (MOPSP), belong to this problem class and they can be very different. However, as the permutation-based MOCOPs share the inherent similarity that the structure of their search space is usually in the shape of a permutation tree, this paper proposes a generic multiobjective set-based particle swarm optimization methodology based on decomposition, termed MS-PSO/D. In order to coordinate with the property of permutation-based MOCOPs, MS-PSO/D utilizes an element-based representation and a constructive approach. Through this, feasible solutions under constraints can be generated step by step following the permutation-tree-shaped structure. And problem-related heuristic information is introduced in the constructive approach for efficiency. In order to address the multiobjective optimization issues, the decomposition strategy is employed, in which the problem is converted into multiple single-objective subproblems according to a set of weight vectors. Besides, a flexible mechanism for diversity control is provided in MS-PSO/D. Extensive experiments have been conducted to study MS-PSO/D on two permutation-based MOCOPs, namely the MOTSP and the MOPSP. Experimental results validate that the proposed methodology is promising.
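
    The decomposition strategy referred to here (converting a multiobjective problem into scalar subproblems via weight vectors) can be illustrated independently of the PSO machinery. The sketch below builds evenly spread weight vectors for a bi-objective problem and evaluates the Tchebycheff scalarization used by many decomposition-based algorithms; whether MS-PSO/D uses Tchebycheff or weighted-sum aggregation is not stated in the abstract, so treat this as a generic example with made-up numbers.

    ```python
    import numpy as np

    def weight_vectors(n_sub):
        """Evenly spread weight vectors for two objectives: (w, 1 - w)."""
        w = np.linspace(0.0, 1.0, n_sub)
        return np.column_stack([w, 1.0 - w])

    def tchebycheff(f, lam, z_star):
        """Tchebycheff scalarization g(x | lambda, z*) = max_i lambda_i * |f_i(x) - z*_i|."""
        return np.max(lam * np.abs(np.asarray(f) - np.asarray(z_star)))

    lams = weight_vectors(5)              # 5 single-objective subproblems
    z_star = np.array([0.0, 0.0])         # ideal point (assumed known here)
    f_x = np.array([2.0, 1.0])            # objective vector of some candidate permutation
    scores = [tchebycheff(f_x, lam, z_star) for lam in lams]
    print(scores)  # one scalar per subproblem; each subproblem keeps its own best solution
    ```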

  12. Effect of decomposition and organic residues on resistivity of copper films fabricated via low-temperature sintering of complex particle mixed dispersions

    NASA Astrophysics Data System (ADS)

    Yong, Yingqiong; Nguyen, Mai Thanh; Tsukamoto, Hiroki; Matsubara, Masaki; Liao, Ying-Chih; Yonezawa, Tetsu

    2017-03-01

    Mixtures of a copper complex and fine copper particles used as copper-based metal-organic decomposition (MOD) dispersions have been demonstrated to be effective for low-temperature sintering of conductive copper films. However, the effect of copper particle size on the decomposition process of the dispersion during heating and the effect of organic residues on the resistivity have not been studied. In this study, the decomposition process of dispersions containing mixtures of a copper complex and copper particles of various sizes was studied. The effect of organic residues on the resistivity was also examined using thermogravimetric analysis, and the choice of copper salt in the copper complex was discussed. A low-resistivity sintered copper film (7 × 10⁻⁶ Ω·m) was achieved at a temperature as low as 100 °C without using any reductive gas.

  13. Exothermic or Endothermic Decomposition of Disubstituted Tetrazoles Tuned by Substitution Fashion and Substituents.

    PubMed

    Jia, Yu-Hui; Yang, Kai-Xiang; Chen, Shi-Lu; Huang, Mu-Hua

    2018-01-11

    Nitrogen-rich compounds such as tetrazoles are widely used as candidates in gas-generating agents. However, the differences between the two isomers of disubstituted tetrazoles are rarely studied, even though this information is very important for designing advanced tetrazole-based materials. In this article, pairs of 2,5- and 1,5-disubstituted tetrazoles were carefully designed and prepared to study their thermal decomposition behavior. Both the substitution pattern (2,5- versus 1,5-) and the substituents at the C-5 position were found to determine whether decomposition is endothermic or exothermic. To the best of our knowledge, this is the first time that the thermal decomposition properties of tetrazoles have been shown to be tunable by substitution pattern and substituent groups, which could serve as a useful platform for designing advanced materials for temperature-dependent rockets. An aza-Claisen rearrangement was proposed to explain the endothermic decomposition behavior.

  14. Predicting the decomposition of Scots pine, Norway spruce, and birch stems in Finland.

    PubMed

    Mäkinen, Harri; Hynynen, Jari; Siitonen, Juha; Sievänen, Risto

    2006-10-01

    Models were developed for predicting the decomposition of dead wood for the main tree species in Finland, based on data collected from long-term thinning experiments in southern and central Finland. The decomposition rates were strongly related to the number of years after tree death. In contrast to previous studies, which have used the first-order exponential model, we found that the decomposition rate was not constant. Therefore, the Gompertz and Chapman-Richards functions were fitted to the data. The slow initial decomposition period was mainly due to the fact that most dead trees remained standing as snags after their death. The initial period was followed by a period of rapid decomposition and, finally, by a period of moderately slow decomposition. Birch stems decomposed more rapidly than Scots pine and Norway spruce stems. Decomposition rates of Norway spruce stems were somewhat lower than those of Scots pine. Because the carbon concentration of decaying boles was relatively stable (about 50%), the rate of carbon loss followed that of mass loss. Models were also developed for the probability that a dead tree remains standing as a snag. During the first years after death, the probability was high. Thereafter, it decreased rapidly, the decrease being faster for birch stems than for Scots pine and Norway spruce stems. Almost all stems had fallen down within 40 years after their death. In Scots pine and Norway spruce, most snags remained hard and belonged to decay class 1. In birch, a higher proportion of snags belonged to the more advanced decay classes. The models provide a framework for predicting dead wood dynamics in managed as well as dense unthinned stands. The models can be incorporated into forest management planning systems, thereby facilitating estimates of carbon dynamics.
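
    To make the non-constant decomposition rate concrete, here is a sketch that fits a Gompertz-type curve to mass-remaining data with SciPy. The parameterization (remaining fraction = 1 − exp(−exp(b − c·t))) reproduces the slow-fast-slow pattern described above, but it is only one plausible form and is not necessarily the exact function fitted by the authors; the data points are made up.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def gompertz_remaining(t, b, c):
        """Fraction of initial stem mass remaining t years after death (illustrative form)."""
        return 1.0 - np.exp(-np.exp(b - c * t))

    # hypothetical mass-loss observations (years since death, fraction remaining)
    t_obs = np.array([0, 5, 10, 15, 20, 30, 40], dtype=float)
    m_obs = np.array([1.00, 0.97, 0.85, 0.60, 0.40, 0.18, 0.08])

    (b_hat, c_hat), _ = curve_fit(gompertz_remaining, t_obs, m_obs, p0=(2.0, 0.1))
    print(f"fitted b = {b_hat:.2f}, c = {c_hat:.2f}")
    print("predicted fraction remaining at 25 yr:", gompertz_remaining(25.0, b_hat, c_hat))
    ```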

  15. Experimental and DFT simulation study of a novel felodipine cocrystal: Characterization, dissolving properties and thermal decomposition kinetics.

    PubMed

    Yang, Caiqin; Guo, Wei; Lin, Yulong; Lin, Qianqian; Wang, Jiaojiao; Wang, Jing; Zeng, Yanli

    2018-05-30

    In this study, a new cocrystal of felodipine (Fel) and glutaric acid (Glu) with a high dissolution rate was developed using the solvent ultrasonic method. The prepared cocrystal was characterized using X-ray powder diffraction, differential scanning calorimetry, thermogravimetric (TG) analysis, and infrared (IR) spectroscopy. To provide basic information about the optimization of pharmaceutical preparations of Fel-based cocrystals, this work investigated the thermal decomposition kinetics of the Fel-Glu cocrystal through non-isothermal thermogravimetry. Density functional theory (DFT) simulations were also performed on the Fel monomer and the trimolecular cocrystal compound for exploring the mechanisms underlying hydrogen bonding formation and thermal decomposition. Combined results of IR spectroscopy and DFT simulation verified that the Fel-Glu cocrystal formed via the NH⋯OC and CO⋯HO hydrogen bonds between Fel and Glu at the ratio of 1:2. The TG/derivative TG curves indicated that the thermal decomposition of the Fel-Glu cocrystal underwent a two-step process. The apparent activation energy (E_a) and pre-exponential factor (A) of the thermal decomposition for the first stage were 84.90 kJ mol⁻¹ and 7.03 × 10⁷ min⁻¹, respectively. The mechanism underlying thermal decomposition possibly involved nucleation and growth, with the integral mechanism function G(α) = α^(3/2). DFT calculation revealed that the hydrogen bonding between Fel and Glu weakened the terminal methoxyl, methyl, and ethyl groups in the Fel molecule. As a result, these groups were lost along with the Glu molecule in the first thermal decomposition. In conclusion, the formed cocrystal exhibited different thermal decomposition kinetics and showed different E_a, A, and shelf life from the intact active pharmaceutical ingredient. Copyright © 2018 Elsevier B.V. All rights reserved.
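
    For readers who want to reproduce this kind of kinetic analysis, the sketch below applies the Coats-Redfern linearization with the reported mechanism function G(α) = α^(3/2) to hypothetical TG data: plotting ln[G(α)/T²] against 1/T gives a slope of −E_a/R and an intercept of ln(AR/(βE_a)). This is a generic, single-heating-rate method offered for illustration only; the paper's non-isothermal analysis may have used a different (e.g., isoconversional) procedure, and the heating rate and data below are invented.

    ```python
    import numpy as np

    R = 8.314          # gas constant, J mol^-1 K^-1
    beta = 10.0        # heating rate, K min^-1 (assumed)

    # hypothetical TG data for the first stage: temperature (K) and conversion alpha
    T = np.array([420.0, 430.0, 440.0, 450.0, 460.0, 470.0])
    alpha = np.array([0.05, 0.12, 0.25, 0.45, 0.68, 0.88])

    G = alpha ** 1.5                          # mechanism function G(alpha) = alpha^(3/2)
    y = np.log(G / T**2)                      # Coats-Redfern ordinate
    x = 1.0 / T

    slope, intercept = np.polyfit(x, y, 1)    # linear fit: y = slope * (1/T) + intercept
    Ea = -slope * R                           # apparent activation energy, J mol^-1
    A = beta * Ea / R * np.exp(intercept)     # pre-exponential factor, min^-1
    print(f"E_a = {Ea / 1000:.1f} kJ/mol, A = {A:.2e} min^-1")
    ```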

  16. Decomposition of heterogeneous organic matter and its long-term stabilization in soils

    USGS Publications Warehouse

    Sierra, Carlos A.; Harmon, Mark E.; Perakis, Steven S.

    2011-01-01

    Soil organic matter is a complex mixture of material with heterogeneous biological, physical, and chemical properties. Decomposition models represent this heterogeneity either as a set of discrete pools with different residence times or as a continuum of qualities. It is unclear, though, whether these two different approaches yield comparable predictions of organic matter dynamics. Here, we compare predictions from these two different approaches and propose an intermediate approach to study organic matter decomposition based on concepts from continuous models implemented numerically. We found that the disagreement between discrete and continuous approaches can be considerable depending on the degree of nonlinearity of the model and simulation time. The two approaches can diverge substantially for predicting long-term processes in soils. Based on our alternative approach, which is a modification of the continuous quality theory, we explored the temporal patterns that emerge by treating substrate heterogeneity explicitly. The analysis suggests that the pattern of carbon mineralization over time is highly dependent on the degree and form of nonlinearity in the model, mostly expressed as differences in microbial growth and efficiency for different substrates. Moreover, short-term stabilization and destabilization mechanisms operating simultaneously result in long-term accumulation of carbon characterized by low decomposition rates, independent of the characteristics of the incoming litter. We show that representation of heterogeneity in the decomposition process can lead to substantial improvements in our understanding of carbon mineralization and its long-term stability in soils.
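
    The contrast between a discrete-pool model and a continuous-quality model can be made explicit with a few lines of code. Below, remaining carbon under a two-pool model, C(t) = f1·exp(−k1·t) + f2·exp(−k2·t), is compared with a continuum in which decay rates follow a gamma distribution, for which the integral ∫ p(k) exp(−k·t) dk has the closed form (1 + b·t)^(−a). The pool fractions, rate constants, and gamma parameters are illustrative, not values from the paper.

    ```python
    import numpy as np

    t = np.linspace(0.0, 200.0, 201)   # years

    # discrete representation: two pools with fixed residence times
    f1, k1 = 0.7, 0.5    # fast pool: 70% of C, k = 0.5 yr^-1
    f2, k2 = 0.3, 0.01   # slow pool: 30% of C, k = 0.01 yr^-1
    C_discrete = f1 * np.exp(-k1 * t) + f2 * np.exp(-k2 * t)

    # continuous representation: decay rates k ~ Gamma(shape=a, scale=b)
    a, b = 0.3, 0.5
    C_continuum = (1.0 + b * t) ** (-a)   # Laplace transform of the gamma density

    for yr in (1, 10, 100):
        print(yr, C_discrete[t == yr][0].round(3), C_continuum[t == yr][0].round(3))
    ```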

  17. Variance decomposition in stochastic simulators.

    PubMed

    Le Maître, O P; Knio, O M; Moraes, A

    2015-06-28

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.
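
    A compact sketch of the variance-decomposition idea, using the Saltelli-style Monte Carlo estimator of first-order Sobol' indices on a toy additive model rather than a reaction network; the test function and sample size are arbitrary, and the full Sobol-Hoeffding machinery of the paper (per-reaction-channel Poisson processes) is not reproduced here.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def model(x):
        """Toy simulator output; analytic first-order indices are 1/5.25, 4/5.25, 0.25/5.25."""
        return x[:, 0] + 2.0 * x[:, 1] + 0.5 * x[:, 2]

    n, d = 100_000, 3
    A = rng.uniform(0.0, 1.0, size=(n, d))
    B = rng.uniform(0.0, 1.0, size=(n, d))
    fA, fB = model(A), model(B)
    var_y = np.var(np.concatenate([fA, fB]))

    for i in range(d):
        AB_i = A.copy()
        AB_i[:, i] = B[:, i]                                  # swap in column i from B
        S_i = np.mean(fB * (model(AB_i) - fA)) / var_y        # Saltelli (2010) first-order estimator
        print(f"S_{i + 1} ≈ {S_i:.3f}")
    ```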

  18. Computer implemented empirical mode decomposition method, apparatus, and article of manufacture for two-dimensional signals

    NASA Technical Reports Server (NTRS)

    Huang, Norden E. (Inventor)

    2001-01-01

    A computer implemented method of processing two-dimensional physical signals includes five basic components and the associated presentation techniques of the results. The first component decomposes the two-dimensional signal into one-dimensional profiles. The second component is a computer implemented Empirical Mode Decomposition that extracts a collection of Intrinsic Mode Functions (IMF's) from each profile based on local extrema and/or curvature extrema. The decomposition is based on the direct extraction of the energy associated with various intrinsic time scales in the profiles. In the third component, the IMF's of each profile are then subjected to a Hilbert Transform. The fourth component collates the Hilbert transformed IMF's of the profiles to form a two-dimensional Hilbert Spectrum. A fifth component manipulates the IMF's by, for example, filtering the two-dimensional signal by reconstructing the two-dimensional signal from selected IMF(s).
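
    To give a sense of what the one-dimensional Empirical Mode Decomposition step does before the Hilbert transform is applied, here is a deliberately simplified sifting sketch: cubic-spline envelopes through the local extrema are averaged and subtracted until an IMF-like component is obtained. It omits the boundary handling, curvature-based extrema, and stopping criteria of the patented method, and the fixed iteration counts are arbitrary assumptions.

    ```python
    import numpy as np
    from scipy.interpolate import CubicSpline
    from scipy.signal import argrelextrema

    def simple_emd(x, n_imfs=3, n_sift=10):
        """Crude EMD sketch: fixed number of sifting passes per IMF, no end-effect handling."""
        t = np.arange(len(x))
        residual = np.asarray(x, dtype=float).copy()
        imfs = []
        for _ in range(n_imfs):
            h = residual.copy()
            for _ in range(n_sift):
                maxima = argrelextrema(h, np.greater)[0]
                minima = argrelextrema(h, np.less)[0]
                if len(maxima) < 3 or len(minima) < 3:
                    break                                  # too few extrema to build envelopes
                upper = CubicSpline(maxima, h[maxima])(t)  # upper envelope
                lower = CubicSpline(minima, h[minima])(t)  # lower envelope
                h = h - 0.5 * (upper + lower)              # remove the local mean
            imfs.append(h)
            residual = residual - h
        return imfs, residual

    # usage: decompose a two-tone test profile
    t = np.linspace(0.0, 1.0, 1000)
    sig = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)
    imfs, trend = simple_emd(sig)
    ```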

  19. Photocatalytic activity of silicon-based nanoflakes for the decomposition of nitrogen monoxide.

    PubMed

    Itahara, Hiroshi; Wu, Xiaoyong; Imagawa, Haruo; Yin, Shu; Kojima, Kazunobu; Chichibu, Shigefusa F; Sato, Tsugio

    2017-07-04

    The photocatalytic decomposition of nitrogen monoxide (NO) was achieved for the first time using Si-based nanomaterials. Nanocomposite powders composed of Si nanoflakes and metallic particles (Ni and Ni3Si) were synthesized using a simple one-pot reaction of layered CaSi2 and NiCl2. The synthesized nanocomposites have a wide optical absorption band from the visible to the ultraviolet. Under the assumption of a direct transition, the photoabsorption behavior is well described and an absorption edge of ca. 1.8 eV is indicated. Conventional Si and SiO powders, with indirect absorption edges of 1.1 and 1.4 eV, respectively, exhibit very low photocatalytic activities for NO decomposition. In contrast, the synthesized nanocomposites exhibited photocatalytic activities under irradiation with light at wavelengths >290 nm (<4.28 eV). The photocatalytic activities of the nanocomposites were confirmed to be constant and did not degrade with light irradiation time.

  20. Use of the Morlet mother wavelet in the frequency-scale domain decomposition technique for the modal identification of ambient vibration responses

    NASA Astrophysics Data System (ADS)

    Le, Thien-Phu

    2017-10-01

    The frequency-scale domain decomposition technique has recently been proposed for operational modal analysis. The technique is based on the Cauchy mother wavelet. In this paper, the approach is extended to the Morlet mother wavelet, which is very popular in signal processing due to its superior time-frequency localization. Based on the regressive form and an appropriate norm of the Morlet mother wavelet, the continuous wavelet transform of the power spectral density of ambient responses enables modes in the frequency-scale domain to be highlighted. Analytical developments first demonstrate the link between modal parameters and the local maxima of the continuous wavelet transform modulus. The link formula is then used as the foundation of the proposed modal identification method. Its practical procedure, combined with the singular value decomposition algorithm, is presented step by step. The proposition is finally verified using numerical examples and a laboratory test.
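
    A bare-bones illustration of using the Morlet wavelet to highlight a dominant mode: the continuous wavelet transform of a noisy free-decay response is computed with PyWavelets and the frequency at which the modulus peaks is reported. The paper applies the transform to the power spectral density of ambient responses and extracts modal parameters analytically from the local maxima; the sketch below only shows the transform-and-peak-picking step, uses PyWavelets' real Morlet ('morl') rather than the regressive form analyzed in the paper, and all signal parameters are made up.

    ```python
    import numpy as np
    import pywt

    fs = 200.0                                    # sampling rate, Hz (assumed)
    t = np.arange(0.0, 10.0, 1.0 / fs)
    # damped 8 Hz oscillation plus noise, standing in for an ambient vibration response
    x = np.exp(-0.3 * t) * np.sin(2 * np.pi * 8.0 * t) \
        + 0.1 * np.random.default_rng(1).standard_normal(t.size)

    scales = np.arange(2, 128)                    # wavelet scales to examine
    coefs, freqs = pywt.cwt(x, scales, "morl", sampling_period=1.0 / fs)

    modulus = np.abs(coefs).mean(axis=1)          # average CWT modulus at each scale
    print("dominant frequency ≈", round(float(freqs[np.argmax(modulus)]), 2), "Hz")
    ```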

  1. Variance decomposition in stochastic simulators

    NASA Astrophysics Data System (ADS)

    Le Maître, O. P.; Knio, O. M.; Moraes, A.

    2015-06-01

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.

  2. Aircraft family design using enhanced collaborative optimization

    NASA Astrophysics Data System (ADS)

    Roth, Brian Douglas

    Significant progress has been made toward the development of multidisciplinary design optimization (MDO) methods that are well-suited to practical large-scale design problems. However, opportunities exist for further progress. This thesis describes the development of enhanced collaborative optimization (ECO), a new decomposition-based MDO method. To support the development effort, the thesis offers a detailed comparison of two existing MDO methods: collaborative optimization (CO) and analytical target cascading (ATC). This aids in clarifying their function and capabilities, and it provides inspiration for the development of ECO. The ECO method offers several significant contributions. First, it enhances communication between disciplinary design teams while retaining the low-order coupling between them. Second, it provides disciplinary design teams with more authority over the design process. Third, it resolves several troubling computational inefficiencies that are associated with CO. As a result, ECO provides significant computational savings (relative to CO) for the test cases and practical design problems described in this thesis. New aircraft development projects seldom focus on a single set of mission requirements. Rather, a family of aircraft is designed, with each family member tailored to a different set of requirements. This thesis illustrates the application of decomposition-based MDO methods to aircraft family design. This represents a new application area, since MDO methods have traditionally been applied to multidisciplinary problems. ECO offers aircraft family design the same benefits that it affords to multidisciplinary design problems. Namely, it simplifies analysis integration, it provides a means to manage problem complexity, and it enables concurrent design of all family members. In support of aircraft family design, this thesis introduces a new wing structural model with sufficient fidelity to capture the tradeoffs associated with component commonality, but of appropriate fidelity for aircraft conceptual design. The thesis also introduces a new aircraft family concept. Unlike most families, the intent is not necessarily to produce all family members. Rather, the family includes members for immediate production and members that address potential future market conditions and/or environmental regulations. The result is a set of designs that yield a small performance penalty today in return for significant future flexibility to produce family members that respond to new market conditions and environmental regulations.

  3. Multi-level basis selection of wavelet packet decomposition tree for heart sound classification.

    PubMed

    Safara, Fatemeh; Doraisamy, Shyamala; Azman, Azreen; Jantan, Azrul; Abdullah Ramaiah, Asri Ranga

    2013-10-01

    Wavelet packet transform decomposes a signal into a set of orthonormal bases (nodes) and provides opportunities to select an appropriate set of these bases for feature extraction. In this paper, multi-level basis selection (MLBS) is proposed to preserve the most informative bases of a wavelet packet decomposition tree through removing less informative bases by applying three exclusion criteria: frequency range, noise frequency, and energy threshold. MLBS achieved an accuracy of 97.56% for classifying normal heart sound, aortic stenosis, mitral regurgitation, and aortic regurgitation. MLBS is a promising basis selection to be suggested for signals with a small range of frequencies. Copyright © 2013 The Authors. Published by Elsevier Ltd. All rights reserved.
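
    A rough sketch of basis selection over a wavelet packet tree using PyWavelets: decompose to a fixed level, estimate each terminal node's frequency band and energy, and discard nodes outside a band of interest or below an energy threshold. The band-edge arithmetic, wavelet choice, and thresholds are illustrative assumptions, and the exclusion criteria are simplified relative to the MLBS method described above.

    ```python
    import numpy as np
    import pywt

    def select_nodes(x, fs, wavelet="db4", level=4, band=(20.0, 600.0), energy_frac=0.01):
        """Keep wavelet-packet nodes inside `band` (Hz) carrying at least `energy_frac` of total energy."""
        wp = pywt.WaveletPacket(data=x, wavelet=wavelet, mode="symmetric", maxlevel=level)
        nodes = wp.get_level(level, order="freq")       # terminal nodes ordered by frequency
        bw = fs / 2.0 / len(nodes)                      # nominal bandwidth of each node
        energies = np.array([np.sum(np.square(n.data)) for n in nodes])
        total = energies.sum()
        kept = []
        for i, (n, e) in enumerate(zip(nodes, energies)):
            f_lo, f_hi = i * bw, (i + 1) * bw           # approximate band covered by node i
            if f_hi >= band[0] and f_lo <= band[1] and e >= energy_frac * total:
                kept.append((n.path, (f_lo, f_hi), e / total))
        return kept

    # usage with a synthetic burst (purely illustrative, not a real heart sound)
    fs = 2000.0
    t = np.arange(0.0, 1.0, 1.0 / fs)
    x = np.sin(2 * np.pi * 60 * t) * np.exp(-10 * t) + 0.05 * np.random.default_rng(0).standard_normal(t.size)
    for path, (lo, hi), frac in select_nodes(x, fs):
        print(f"node {path}: {lo:.0f}-{hi:.0f} Hz, {100 * frac:.1f}% of energy")
    ```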

  4. Plants mediate soil organic matter decomposition in response to sea level rise.

    PubMed

    Mueller, Peter; Jensen, Kai; Megonigal, James Patrick

    2016-01-01

    Tidal marshes have a large capacity for producing and storing organic matter, making their role in the global carbon budget disproportionate to land area. Most of the organic matter stored in these systems is in soils where it contributes 2-5 times more to surface accretion than an equal mass of minerals. Soil organic matter (SOM) sequestration is the primary process by which tidal marshes become perched high in the tidal frame, decreasing their vulnerability to accelerated relative sea level rise (RSLR). Plant growth responses to RSLR are well understood and represented in century-scale forecast models of soil surface elevation change. We understand far less about the response of SOM decomposition to accelerated RSLR. Here we quantified the effects of flooding depth and duration on SOM decomposition by exposing planted and unplanted field-based mesocosms to experimentally manipulated relative sea level over two consecutive growing seasons. SOM decomposition was quantified as CO2 efflux, with plant- and SOM-derived CO2 separated via δ13CO2. Despite the dominant paradigm that decomposition rates are inversely related to flooding, SOM decomposition in the absence of plants was not sensitive to flooding depth and duration. The presence of plants had a dramatic effect on SOM decomposition, increasing SOM-derived CO2 flux by up to 267% and 125% (in 2012 and 2013, respectively) compared to unplanted controls in the two growing seasons. Furthermore, plant stimulation of SOM decomposition was strongly and positively related to plant biomass and in particular aboveground biomass. We conclude that SOM decomposition rates are not directly driven by relative sea level and its effect on oxygen diffusion through soil, but indirectly by plant responses to relative sea level. If this result applies more generally to tidal wetlands, it has important implications for models of SOM accumulation and surface elevation change in response to accelerated RSLR. © 2015 John Wiley & Sons Ltd.
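
    The partitioning of total CO2 efflux into plant-derived and SOM-derived components via δ13C typically relies on a two-end-member mixing model; a minimal version is sketched below. The end-member values are invented placeholders, and the actual study will have handled uncertainty and fractionation more carefully, so this only shows the basic arithmetic.

    ```python
    def partition_efflux(total_flux, delta_total, delta_plant, delta_som):
        """Two-end-member mixing: returns (SOM-derived flux, plant-derived flux)."""
        f_som = (delta_total - delta_plant) / (delta_som - delta_plant)
        f_som = min(max(f_som, 0.0), 1.0)          # clip to the physically meaningful range
        return f_som * total_flux, (1.0 - f_som) * total_flux

    # hypothetical numbers: total efflux in umol CO2 m^-2 s^-1 and delta 13C values in per mil
    som_co2, plant_co2 = partition_efflux(total_flux=4.0, delta_total=-21.0,
                                          delta_plant=-12.5, delta_som=-26.0)
    print(f"SOM-derived: {som_co2:.2f}, plant-derived: {plant_co2:.2f}")
    ```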

  5. Functional and Structural Succession of Soil Microbial Communities below Decomposing Human Cadavers

    PubMed Central

    Cobaugh, Kelly L.; Schaeffer, Sean M.; DeBruyn, Jennifer M.

    2015-01-01

    The ecological succession of microbes during cadaver decomposition has garnered interest in both basic and applied research contexts (e.g. community assembly and dynamics; forensic indicator of time since death). Yet current understanding of microbial ecology during decomposition is almost entirely based on plant litter. We know very little about microbes recycling carcass-derived organic matter despite the unique decomposition processes. Our objective was to quantify the taxonomic and functional succession of microbial populations in soils below decomposing cadavers, testing the hypotheses that a) periods of increased activity during decomposition are associated with particular taxa; and b) human-associated taxa are introduced to soils, but do not persist outside their host. We collected soils from beneath four cadavers throughout decomposition, and analyzed soil chemistry, microbial activity and bacterial community structure. As expected, decomposition resulted in pulses of soil C and nutrients (particularly ammonia) and stimulated microbial activity. There was no change in total bacterial abundances, however we observed distinct changes in both function and community composition. During active decay (7 - 12 days postmortem), respiration and biomass production rates were high: the community was dominated by Proteobacteria (increased from 15.0 to 26.1% relative abundance) and Firmicutes (increased from 1.0 to 29.0%), with reduced Acidobacteria abundances (decreased from 30.4 to 9.8%). Once decay rates slowed (10 - 23 d postmortem), respiration was elevated, but biomass production rates dropped dramatically; this community with low growth efficiency was dominated by Firmicutes (increased to 50.9%) and other anaerobic taxa. Human-associated bacteria, including the obligately anaerobic Bacteroides, were detected at high concentrations in soil throughout decomposition, up to 198 d postmortem. Our results revealed the pattern of functional and compositional succession in soil microbial communities during decomposition of human-derived organic matter, provided insight into decomposition processes, and identified putative predictor populations for time since death estimation. PMID:26067226

  6. Non-linear analytic and coanalytic problems (L_p-theory, Clifford analysis, examples)

    NASA Astrophysics Data System (ADS)

    Dubinskii, Yu A.; Osipenko, A. S.

    2000-02-01

    Two kinds of new mathematical model of variational type are put forward: non-linear analytic and coanalytic problems. The formulation of these non-linear boundary-value problems is based on a decomposition of the complete scale of Sobolev spaces into the "orthogonal" sum of analytic and coanalytic subspaces. A similar decomposition is considered in the framework of Clifford analysis. Explicit examples are presented.

  7. Loblolly pine needle decomposition and nutrient dynamics as affected by irrigation, fertilization, and substrate quality

    Treesearch

    Felipe G. Sanchez

    2001-01-01

    This study examined the effects of initial litter quality and irrigation and fertilization treatments on litter decomposition rates and nutrient dynamics (N, Ca, K, Mg, and P) of loblolly (Pinus taeda L.) pine needles in the North Carolina Sand Hills over 3 years. Litter quality was based on the initial C/N ratios, with the high-quality litter having...

  8. Protection from wintertime rainfall reduces nutrient losses and greenhouse gas emissions during the decomposition of poultry and horse manure-based amendments.

    PubMed

    Maltais-Landry, Gabriel; Neufeld, Katarina; Poon, David; Grant, Nicholas; Nesic, Zoran; Smukler, Sean

    2018-04-01

    Manure-based soil amendments (herein "amendments") are important fertility sources, but differences among amendment types and management can significantly affect their nutrient value and environmental impacts. A 6-month in situ decomposition experiment was conducted to determine how protection from wintertime rainfall affected nutrient losses and greenhouse gas (GHG) emissions in poultry (broiler chicken and turkey) and horse amendments. Changes in total nutrient concentration were measured every 3 months, changes in ammonium (NH4+) and nitrate (NO3-) concentrations every month, and GHG emissions of carbon dioxide (CO2), methane (CH4), and nitrous oxide (N2O) every 7-14 days. Poultry amendments maintained higher nutrient concentrations (except for K), higher emissions of CO2 and N2O, and lower CH4 emissions than horse amendments. Exposing amendments to rainfall increased total N and NH4+ losses in poultry amendments, P losses in turkey and horse amendments, and K losses and cumulative N2O emissions for all amendments. However, it did not affect CO2 or CH4 emissions. Overall, rainfall exposure would decrease total N inputs by 37% (horse), 59% (broiler chicken), or 74% (turkey) for a given application rate (wet weight basis) after 6 months of decomposition, with similar losses for NH4+ (69-96%), P (41-73%), and K (91-97%). This study confirms the benefits of facilities protected from rainfall to reduce nutrient losses and GHG emissions during amendment decomposition. The impact of rainfall protection on nutrient losses and GHG emissions was monitored during the decomposition of broiler chicken, turkey, and horse manure-based soil amendments. Amendments exposed to rainfall had large ammonium and potassium losses, resulting in a 37-74% decrease in N inputs when compared with amendments protected from rainfall. Nitrous oxide emissions were also higher with rainfall exposure, although it had no effect on carbon dioxide and methane emissions. Overall, this work highlights the benefits of rainfall protection during amendment decomposition to reduce nutrient losses and GHG emissions.

  9. Decorating graphene oxide with CuO nanoparticles in a water-isopropanol system.

    PubMed

    Zhu, Junwu; Zeng, Guiyu; Nie, Fude; Xu, Xiaoming; Chen, Sheng; Han, Qiaofeng; Wang, Xin

    2010-06-01

    A facile chemical procedure capable of aligning CuO nanoparticles on graphene oxide (GO) in a water-isopropanol system has been described. Scanning electron microscopy (SEM) and transmission electron microscopy (TEM) observations indicate that the exfoliated GO sheets are decorated randomly by spindly or spherical CuO nanoparticle aggregates, forming well-ordered CuO:GO nanocomposites. A formation mechanism of these interesting nanocomposites is proposed as intercalation and adsorption of Cu2+ ions onto the GO sheets, followed by the nucleation and growth of the CuO crystallites, which in turn resulted in the exfoliation of GO sheets. Moreover, the obtained nanocomposites exhibit a high catalytic activity for the thermal decomposition of ammonium perchlorate (AP), due to the concerted effect of CuO and GO.

  10. Complete description of all self-similar models driven by Lévy stable noise

    NASA Astrophysics Data System (ADS)

    Weron, Aleksander; Burnecki, Krzysztof; Mercik, Szymon; Weron, Karina

    2005-01-01

    A canonical decomposition of H-self-similar Lévy symmetric α-stable processes is presented. The resulting components, completely described by both deterministic kernels and the corresponding stochastic integral with respect to the Lévy symmetric α-stable motion, are shown to be related to the dissipative and conservative parts of the dynamics. This result provides stochastic analysis tools for studying anomalous diffusion phenomena in the Langevin equation framework. For example, a simple computer test for identifying the origins of self-similarity is implemented for four real empirical time series recorded from different physical systems: an ionic current flow through a single channel in a biological membrane, the energy of solar flares, a seismic electric signal recorded during seismic Earth activity, and foreign exchange rate daily returns.
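
    For a quick numerical companion, symmetric α-stable increments can be generated with SciPy and cumulated into a self-similar Lévy motion whose self-similarity index is H = 1/α; the sketch below does just that. The parameter values are arbitrary, the decomposition into dissipative and conservative parts from the paper is not reproduced, and scipy.stats.levy_stable sampling can be slow for large samples.

    ```python
    import numpy as np
    from scipy.stats import levy_stable

    alpha, beta = 1.7, 0.0                 # symmetric alpha-stable (beta = 0)
    n = 5000
    rng = np.random.default_rng(42)
    increments = levy_stable.rvs(alpha, beta, size=n, random_state=rng)
    X = np.cumsum(increments)              # Levy symmetric alpha-stable motion, H = 1/alpha

    # crude self-similarity check: scaling of the interquartile range of lag-m increments
    for m in (1, 4, 16):
        iqr = np.subtract(*np.percentile(X[m:] - X[:-m], [75, 25]))
        print(m, iqr / m ** (1.0 / alpha))  # roughly constant if X is (1/alpha)-self-similar
    ```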

  11. MO-FG-CAMPUS-IeP1-02: Dose Reduction in Contrast-Enhanced Digital Mammography Using a Photon-Counting Detector

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, S; Kang, S; Eom, J

    Purpose: Photon-counting detectors (PCDs) allow multi-energy X-ray imaging without additional exposures and spectral overlap. This capability results in improved accuracy of material decomposition for dual-energy X-ray imaging and a reduction of radiation dose. In this study, PCD-based contrast-enhanced dual-energy mammography (CEDM) was compared with conventional CEDM in terms of radiation dose, image quality and accuracy of material decomposition. Methods: A dual-energy model was designed by using Beer-Lambert's law and a rational inverse fitting function for decomposing materials from a polychromatic X-ray source. A cadmium zinc telluride (CZT)-based PCD, which has five energy thresholds, and iodine solutions included in a 3D half-cylindrical phantom, which was composed of 50% glandular and 50% adipose tissues, were simulated by using a Monte Carlo simulation tool. The low- and high-energy images were obtained in accordance with the clinical exposure conditions for the conventional CEDM. Energy bins of 20–33 and 34–50 keV were defined from X-ray energy spectra simulated at 50 kVp with different dose levels for implementing the PCD-based CEDM. The dual-energy mammographic techniques were compared by means of absorbed dose, noise property and normalized root-mean-square error (NRMSE). Results: Compared to the conventional CEDM, the iodine solutions were clearly decomposed for the PCD-based CEDM. Although the radiation dose for the PCD-based CEDM was lower than that for the conventional CEDM, the PCD-based CEDM improved the noise property and accuracy of decomposition images. Conclusion: This study demonstrates that the PCD-based CEDM allows quantitative material decomposition and reduces radiation dose in comparison with the conventional CEDM. Therefore, the PCD-based CEDM is able to provide useful information for detecting breast tumors and enhancing diagnostic accuracy in mammography.

  12. Sensor-Based Vibration Signal Feature Extraction Using an Improved Composite Dictionary Matching Pursuit Algorithm

    PubMed Central

    Cui, Lingli; Wu, Na; Wang, Wenjing; Kang, Chenhui

    2014-01-01

    This paper presents a new method for a composite dictionary matching pursuit algorithm, which is applied to vibration sensor signal feature extraction and fault diagnosis of a gearbox. Three advantages are highlighted in the new method. First, the composite dictionary in the algorithm has been changed from multi-atom matching to single-atom matching. Compared to non-composite dictionary single-atom matching, the original composite dictionary multi-atom matching pursuit (CD-MaMP) algorithm can achieve noise reduction in the reconstruction stage, but it cannot dramatically reduce the computational cost and improve the efficiency in the decomposition stage. Therefore, the optimized composite dictionary single-atom matching algorithm (CD-SaMP) is proposed. Second, the termination condition of iteration based on the attenuation coefficient is put forward to improve the sparsity and efficiency of the algorithm, which adjusts the parameters of the termination condition constantly in the process of decomposition to avoid noise. Third, composite dictionaries are enriched with the modulation dictionary, which is one of the important structural characteristics of gear fault signals. Meanwhile, the termination condition of iteration settings, sub-feature dictionary selections and operation efficiency between CD-MaMP and CD-SaMP are discussed, aiming at gear simulation vibration signals with noise. The simulation sensor-based vibration signal results show that the termination condition of iteration based on the attenuation coefficient enhances decomposition sparsity greatly and achieves a good effect of noise reduction. Furthermore, the modulation dictionary achieves a better matching effect compared to the Fourier dictionary, and CD-SaMP has a great advantage of sparsity and efficiency compared with the CD-MaMP. The sensor-based vibration signals measured from practical engineering gearbox analyses have further shown that the CD-SaMP decomposition and reconstruction algorithm is feasible and effective. PMID:25207870
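
    The single-atom matching-pursuit core that the CD-SaMP variant builds on can be written in a few lines: at each iteration the unit-norm dictionary atom with the largest correlation to the residual is selected, its contribution is subtracted, and iteration stops once the residual energy falls below a threshold. The dictionary here is a plain cosine one, not the composite/modulation dictionary of the paper, and the stopping rule is a simplification of the attenuation-coefficient criterion.

    ```python
    import numpy as np

    def matching_pursuit(x, D, energy_ratio=0.05, max_iter=50):
        """Greedy single-atom matching pursuit.
        x: signal of shape (n,); D: dictionary with unit-norm atoms as columns, shape (n, K)."""
        residual = np.asarray(x, dtype=float).copy()
        coeffs = np.zeros(D.shape[1])
        e0 = np.dot(x, x)
        for _ in range(max_iter):
            corr = D.T @ residual
            k = int(np.argmax(np.abs(corr)))      # best-matching atom
            coeffs[k] += corr[k]
            residual -= corr[k] * D[:, k]         # peel the atom's contribution off the residual
            if np.dot(residual, residual) < energy_ratio * e0:
                break                             # residual energy low enough: stop
        return coeffs, residual

    # usage: a toy dictionary of unit-norm cosine atoms
    n, K = 256, 64
    t = np.arange(n)
    D = np.cos(np.pi * np.outer(t + 0.5, np.arange(1, K + 1)) / n)
    D /= np.linalg.norm(D, axis=0)
    x = 3.0 * D[:, 7] + 1.5 * D[:, 30] + 0.1 * np.random.default_rng(0).standard_normal(n)
    coeffs, res = matching_pursuit(x, D)
    print("largest coefficients at atoms:", np.argsort(np.abs(coeffs))[-2:])
    ```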

  13. Sensor-based vibration signal feature extraction using an improved composite dictionary matching pursuit algorithm.

    PubMed

    Cui, Lingli; Wu, Na; Wang, Wenjing; Kang, Chenhui

    2014-09-09

    This paper presents a new method for a composite dictionary matching pursuit algorithm, which is applied to vibration sensor signal feature extraction and fault diagnosis of a gearbox. Three advantages are highlighted in the new method. First, the composite dictionary in the algorithm has been changed from multi-atom matching to single-atom matching. Compared to non-composite dictionary single-atom matching, the original composite dictionary multi-atom matching pursuit (CD-MaMP) algorithm can achieve noise reduction in the reconstruction stage, but it cannot dramatically reduce the computational cost and improve the efficiency in the decomposition stage. Therefore, the optimized composite dictionary single-atom matching algorithm (CD-SaMP) is proposed. Second, the termination condition of iteration based on the attenuation coefficient is put forward to improve the sparsity and efficiency of the algorithm, which adjusts the parameters of the termination condition constantly in the process of decomposition to avoid noise. Third, composite dictionaries are enriched with the modulation dictionary, which is one of the important structural characteristics of gear fault signals. Meanwhile, the termination condition of iteration settings, sub-feature dictionary selections and operation efficiency between CD-MaMP and CD-SaMP are discussed, aiming at gear simulation vibration signals with noise. The simulation sensor-based vibration signal results show that the termination condition of iteration based on the attenuation coefficient enhances decomposition sparsity greatly and achieves a good effect of noise reduction. Furthermore, the modulation dictionary achieves a better matching effect compared to the Fourier dictionary, and CD-SaMP has a great advantage of sparsity and efficiency compared with the CD-MaMP. The sensor-based vibration signals measured from practical engineering gearbox analyses have further shown that the CD-SaMP decomposition and reconstruction algorithm is feasible and effective.

  14. Tree decomposition based fast search of RNA structures including pseudoknots in genomes.

    PubMed

    Song, Yinglei; Liu, Chunmei; Malmberg, Russell; Pan, Fangfang; Cai, Liming

    2005-01-01

    Searching genomes for RNA secondary structure with computational methods has become an important approach to the annotation of non-coding RNAs. However, due to the lack of efficient algorithms for accurate RNA structure-sequence alignment, computer programs capable of quickly and effectively searching genomes for RNA secondary structures have not been available. In this paper, a novel RNA structure profiling model is introduced based on the notion of a conformational graph to specify the consensus structure of an RNA family. Tree decomposition yields a small tree width t for such conformational graphs (e.g., t = 2 for stem loops and only a slight increase for pseudoknots). Within this modelling framework, the optimal alignment of a sequence to the structure model corresponds to finding a maximum valued isomorphic subgraph and consequently can be accomplished through dynamic programming on the tree decomposition of the conformational graph in time O(k^t N^2), where k is a small parameter and N is the size of the profiled RNA structure. Experiments show that the application of the alignment algorithm to search in genomes yields the same search accuracy as methods based on a covariance model, with a significant reduction in computation time. In particular, very accurate searches for tmRNAs in bacterial genomes and for telomerase RNAs in yeast genomes can be accomplished in days, as opposed to the months required by other methods. The tree decomposition based searching tool is free upon request and can be downloaded at our site http://w.uga.edu/RNA-informatics/software/index.php.

  15. Long-term litter decomposition controlled by manganese redox cycling

    DOE PAGES

    Keiluweit, Marco; Nico, Peter S.; Harmon, Mark; ...

    2015-09-08

    Litter decomposition is a keystone ecosystem process impacting nutrient cycling and productivity, soil properties, and the terrestrial carbon (C) balance, but the factors regulating decomposition rate are still poorly understood. Traditional models assume that the rate is controlled by litter quality, relying on parameters such as lignin content as predictors. However, a strong correlation has been observed between the manganese (Mn) content of litter and decomposition rates across a variety of forest ecosystems. Here, we show that long-term litter decomposition in forest ecosystems is tightly coupled to Mn redox cycling. Over 7 years of litter decomposition, microbial transformation of litter was paralleled by variations in Mn oxidation state and concentration. A detailed chemical imaging analysis of the litter revealed that fungi recruit and redistribute unreactive Mn(2+) provided by fresh plant litter to produce oxidative Mn(3+) species at sites of active decay, with Mn eventually accumulating as insoluble Mn(3+/4+) oxides. Formation of reactive Mn(3+) species coincided with the generation of aromatic oxidation products, providing direct proof of the previously posited role of Mn(3+)-based oxidizers in the breakdown of litter. Our results suggest that the litter-decomposing machinery at our coniferous forest site depends on the ability of plants and microbes to supply, accumulate, and regenerate short-lived Mn(3+) species in the litter layer. As a result, this observation indicates that biogeochemical constraints on bioavailability, mobility, and reactivity of Mn in the plant–soil system may have a profound impact on litter decomposition rates.

  16. Controllable pneumatic generator based on the catalytic decomposition of hydrogen peroxide

    NASA Astrophysics Data System (ADS)

    Kim, Kyung-Rok; Kim, Kyung-Soo; Kim, Soohyun

    2014-07-01

    This paper presents a novel compact and controllable pneumatic generator that uses hydrogen peroxide decomposition. A fuel micro-injector using a piston-pump mechanism is devised and tested to control the chemical decomposition rate. By controlling the injection rate, the feedback controller maintains the pressure of the gas reservoir at a desired pressure level. Thermodynamic analysis and experiments are performed to demonstrate the feasibility of the proposed pneumatic generator. Using a prototype of the pneumatic generator, it takes 6 s to reach 3.5 bars with a reservoir volume of 200 ml at room temperature, which is sufficiently rapid and effective to maintain the repetitive lifting of a 1 kg mass.
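
    A back-of-the-envelope version of the thermodynamic estimate: each mole of injected hydrogen peroxide yields half a mole of O2 (2 H2O2 → 2 H2O + O2), and the reservoir pressure then follows from the ideal gas law. The sketch below ignores the water vapour produced, heat release, and leakage, and all numbers (solution strength, injected amount, temperature) are assumptions rather than the authors' operating conditions.

    ```python
    R = 8.314            # gas constant, J mol^-1 K^-1
    M_H2O2 = 0.03401     # molar mass of H2O2, kg/mol

    def reservoir_pressure(m_solution_kg, mass_fraction, V_m3, T_K, p0_Pa=101_325.0):
        """Absolute reservoir pressure after full catalytic decomposition of the injected H2O2."""
        n_h2o2 = m_solution_kg * mass_fraction / M_H2O2
        n_o2 = 0.5 * n_h2o2                      # 2 H2O2 -> 2 H2O + O2
        return p0_Pa + n_o2 * R * T_K / V_m3     # added O2 partial pressure on top of ambient

    # e.g. 5 g of 50 wt% H2O2 solution into a 200 ml reservoir at room temperature
    p = reservoir_pressure(0.005, 0.50, 200e-6, 298.0)
    print(f"≈ {p / 1e5:.2f} bar absolute")
    ```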

  17. Controllable pneumatic generator based on the catalytic decomposition of hydrogen peroxide.

    PubMed

    Kim, Kyung-Rok; Kim, Kyung-Soo; Kim, Soohyun

    2014-07-01

    This paper presents a novel compact and controllable pneumatic generator that uses hydrogen peroxide decomposition. A fuel micro-injector using a piston-pump mechanism is devised and tested to control the chemical decomposition rate. By controlling the injection rate, the feedback controller maintains the pressure of the gas reservoir at a desired pressure level. Thermodynamic analysis and experiments are performed to demonstrate the feasibility of the proposed pneumatic generator. Using a prototype of the pneumatic generator, it takes 6 s to reach 3.5 bars with a reservoir volume of 200 ml at room temperature, which is sufficiently rapid and effective to maintain the repetitive lifting of a 1 kg mass.

  18. Multitasking domain decomposition fast Poisson solvers on the Cray Y-MP

    NASA Technical Reports Server (NTRS)

    Chan, Tony F.; Fatoohi, Rod A.

    1990-01-01

    The results of multitasking implementation of a domain decomposition fast Poisson solver on eight processors of the Cray Y-MP are presented. The object of this research is to study the performance of domain decomposition methods on a Cray supercomputer and to analyze the performance of different multitasking techniques using highly parallel algorithms. Two implementations of multitasking are considered: macrotasking (parallelism at the subroutine level) and microtasking (parallelism at the do-loop level). A conventional FFT-based fast Poisson solver is also multitasked. The results of different implementations are compared and analyzed. A speedup of over 7.4 on the Cray Y-MP running in a dedicated environment is achieved for all cases.
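
    The conventional FFT-based fast Poisson solver mentioned here is easy to sketch on a single processor: for a periodic 2-D domain the Laplacian is diagonal in Fourier space, so solving ∇²u = f reduces to dividing Fourier coefficients by −(kx² + ky²). The multitasked, domain-decomposed Cray implementation is not reproduced; this is only the serial building block, with NumPy standing in for the vectorized FFT library.

    ```python
    import numpy as np

    def poisson_fft_periodic(f, Lx=2 * np.pi, Ly=2 * np.pi):
        """Solve laplacian(u) = f on a periodic rectangle; f must have zero mean."""
        ny, nx = f.shape
        kx = 2 * np.pi * np.fft.fftfreq(nx, d=Lx / nx)
        ky = 2 * np.pi * np.fft.fftfreq(ny, d=Ly / ny)
        KX, KY = np.meshgrid(kx, ky)
        k2 = KX**2 + KY**2
        k2[0, 0] = 1.0                      # avoid division by zero for the mean mode
        u_hat = -np.fft.fft2(f) / k2
        u_hat[0, 0] = 0.0                   # fix the arbitrary constant (zero-mean solution)
        return np.real(np.fft.ifft2(u_hat))

    # verification against a known solution u = sin(x) cos(2y), for which f = -5 u
    n = 128
    x = np.linspace(0, 2 * np.pi, n, endpoint=False)
    X, Y = np.meshgrid(x, x)
    u_exact = np.sin(X) * np.cos(2 * Y)
    f = -5.0 * u_exact
    print("max error:", np.abs(poisson_fft_periodic(f) - u_exact).max())
    ```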

  19. The Living Dead: Bacterial Community Structure of a Cadaver at the Onset and End of the Bloat Stage of Decomposition

    PubMed Central

    Hyde, Embriette R.; Haarmann, Daniel P.; Lynne, Aaron M.; Bucheli, Sibyl R.; Petrosino, Joseph F.

    2013-01-01

    Human decomposition is a mosaic system with an intimate association between biotic and abiotic factors. Despite the integral role of bacteria in the decomposition process, few studies have catalogued bacterial biodiversity for terrestrial scenarios. To explore the microbiome of decomposition, two cadavers were placed at the Southeast Texas Applied Forensic Science facility and allowed to decompose under natural conditions. The bloat stage of decomposition, a stage easily identified in taphonomy and readily attributed to microbial physiology, was targeted. Each cadaver was sampled at two time points, at the onset and end of the bloat stage, from various body sites including internal locations. Bacterial samples were analyzed by pyrosequencing of the 16S rRNA gene. Our data show a shift from aerobic bacteria to anaerobic bacteria in all body sites sampled and demonstrate variation in community structure between bodies, between sample sites within a body, and between initial and end points of the bloat stage within a sample site. These data are best not viewed as points of comparison but rather additive data sets. While some species recovered are the same as those observed in culture-based studies, many are novel. Our results are preliminary and add to a larger emerging data set; a more comprehensive study is needed to further dissect the role of bacteria in human decomposition. PMID:24204941

  20. Decomposition of toluene in a steady-state atmospheric-pressure glow discharge

    NASA Astrophysics Data System (ADS)

    Trushkin, A. N.; Grushin, M. E.; Kochetov, I. V.; Trushkin, N. I.; Akishev, Yu. S.

    2013-02-01

    Results are presented from experimental studies of the decomposition of toluene (C6H5CH3) in a polluted air flow by means of a steady-state atmospheric-pressure glow discharge at different water vapor contents in the working gas. The experimental results on the degree of C6H5CH3 removal are compared with the results of computer simulations conducted in the framework of the developed kinetic model of plasma chemical decomposition of toluene in the N2:O2:H2O gas mixture. A substantial influence of the gas flow humidity on toluene decomposition in the atmospheric-pressure glow discharge is demonstrated. The main mechanisms of the influence of humidity on C6H5CH3 decomposition are determined. The existence of two stages in the process of toluene removal, which differ in their duration and in the intensity of plasma chemical decomposition of C6H5CH3, is established. Based on the results of computer simulations, the composition of the products of plasma chemical reactions at the output of the reactor is analyzed as a function of the specific energy deposition and gas flow humidity. The existence of a catalytic cycle, in which the hydroxyl radical OH acts as a catalyst and which substantially accelerates the recombination of oxygen atoms and suppresses ozone generation when the plasma-forming gas contains water vapor, is established.

  1. Breast density evaluation using spectral mammography, radiologist reader assessment and segmentation techniques: a retrospective study based on left and right breast comparison

    PubMed Central

    Molloi, Sabee; Ding, Huanjun; Feig, Stephen

    2015-01-01

    Purpose The purpose of this study was to compare the precision of mammographic breast density measurement using radiologist reader assessment, histogram threshold segmentation, fuzzy C-mean segmentation and spectral material decomposition. Materials and Methods Spectral mammography images from a total of 92 consecutive asymptomatic women (50–69 years old) who presented for annual screening mammography were retrospectively analyzed for this study. Breast density was estimated using assessments from 10 radiologist readers, standard histogram thresholding, a fuzzy C-mean algorithm and spectral material decomposition. The breast density correlation between left and right breasts was used to assess the precision of these techniques to measure breast composition relative to dual-energy material decomposition. Results In comparison to the other techniques, the results of breast density measurements using dual-energy material decomposition showed the highest correlation. The relative standard error of estimate for breast density measurements from left and right breasts using radiologist reader assessment, standard histogram thresholding, the fuzzy C-mean algorithm and dual-energy material decomposition was calculated to be 1.95, 2.87, 2.07 and 1.00, respectively. Conclusion The results indicate that the precision of dual-energy material decomposition was approximately a factor of two higher than that of the other techniques, as reflected in the better correlation of breast density measurements from the right and left breasts. PMID:26031229

  2. Decomposition Mechanism of C5F10O: An Environmentally Friendly Insulation Medium.

    PubMed

    Zhang, Xiaoxing; Li, Yi; Xiao, Song; Tang, Ju; Tian, Shuangshuang; Deng, Zaitao

    2017-09-05

    SF6, the most widely used electrical-equipment-insulation gas, has serious greenhouse effects. C5F10O has attracted much attention as an alternative gas in the past two years, but the environmental impact of its decomposition products is unclear. In this work, the decomposition characteristics of C5F10O were studied based on gas chromatography-mass spectrometry and density functional theory. We found that the amounts of the decomposition products of C5F10O, namely CF4, C2F6, C3F6, C3F8, C4F10, and C6F14, increased with the number of discharges. Under a high-energy electric field, the C-C bond of C5F10O between the carbonyl carbon and the α-carbon atom was most likely to break, generating CF3CO•, C3F7• or C3F7CO•, CF3• free radicals. The CF3• and C3F7• free radicals produced by the breakage more easily recombined to form small-molecule products. By analyzing the ionization parameters, toxicity, and environmental effects of C5F10O and its decomposition products, we found that C5F10O gas mixtures exhibit favorable decomposition and environmental characteristics with low toxicity and have great potential to replace SF6.

  3. The living dead: bacterial community structure of a cadaver at the onset and end of the bloat stage of decomposition.

    PubMed

    Hyde, Embriette R; Haarmann, Daniel P; Lynne, Aaron M; Bucheli, Sibyl R; Petrosino, Joseph F

    2013-01-01

    Human decomposition is a mosaic system with an intimate association between biotic and abiotic factors. Despite the integral role of bacteria in the decomposition process, few studies have catalogued bacterial biodiversity for terrestrial scenarios. To explore the microbiome of decomposition, two cadavers were placed at the Southeast Texas Applied Forensic Science facility and allowed to decompose under natural conditions. The bloat stage of decomposition, a stage easily identified in taphonomy and readily attributed to microbial physiology, was targeted. Each cadaver was sampled at two time points, at the onset and end of the bloat stage, from various body sites including internal locations. Bacterial samples were analyzed by pyrosequencing of the 16S rRNA gene. Our data show a shift from aerobic bacteria to anaerobic bacteria in all body sites sampled and demonstrate variation in community structure between bodies, between sample sites within a body, and between initial and end points of the bloat stage within a sample site. These data are best not viewed as points of comparison but rather additive data sets. While some species recovered are the same as those observed in culture-based studies, many are novel. Our results are preliminary and add to a larger emerging data set; a more comprehensive study is needed to further dissect the role of bacteria in human decomposition.

  4. Implementation of material decomposition using an EMCCD and CMOS-based micro-CT system.

    PubMed

    Podgorsak, Alexander R; Nagesh, Sv Setlur; Bednarek, Daniel R; Rudin, Stephen; Ionita, Ciprian N

    2017-02-11

    This project assessed the effectiveness of using two different detectors to obtain dual-energy (DE) micro-CT data for carrying out material decomposition. A micro-CT system coupled to either a complementary metal-oxide semiconductor (CMOS) or an electron-multiplying CCD (EMCCD) detector was used to acquire image data of a 3D-printed phantom with channels filled with different materials. At any instance, materials such as iohexol contrast agent, water, and platinum were selected to make up the scanned object. DE micro-CT data were acquired, and slices of the scanned object were differentiated by material makeup. The success of the decomposition was assessed quantitatively through the computation of the percentage normalized root-mean-square error (%NRMSE). Our results indicate a successful decomposition of iohexol for both detectors (%NRMSE values of 1.8 for the EMCCD and 2.4 for the CMOS), as well as platinum (%NRMSE value of 4.7). The CMOS detector performed material decomposition on air and water with, on average, seven times higher %NRMSE, possibly due to the decreased sensitivity of the CMOS system. Material decomposition showed the potential to differentiate between materials such as iohexol and platinum, perhaps opening the door for its use in the neurovascular anatomical region. Work supported by Toshiba America Medical Systems, and partially supported by NIH grant 2R01EB002873.
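
    For reference, a percentage normalized root-mean-square error of the kind used to score the decomposition can be computed as in the sketch below; normalizing by the reference range is one common convention and is assumed here rather than taken from the paper, and the voxel values are made up.

    ```python
    import numpy as np

    def percent_nrmse(estimated, reference):
        """Percentage normalized RMSE between a decomposed material map and a
        known reference map (normalized by the reference range, an assumed
        convention)."""
        estimated = np.asarray(estimated, float)
        reference = np.asarray(reference, float)
        rmse = np.sqrt(np.mean((estimated - reference) ** 2))
        return 100.0 * rmse / (reference.max() - reference.min())

    # Hypothetical voxel values for an iohexol channel (1 = channel, 0 = background).
    ref = np.array([0.0, 0.0, 1.0, 1.0, 0.5, 0.5])
    est = np.array([0.02, -0.01, 0.97, 1.03, 0.52, 0.46])
    print(f"%NRMSE = {percent_nrmse(est, ref):.2f}")
    ```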

  5. Implementation of material decomposition using an EMCCD and CMOS-based micro-CT system

    NASA Astrophysics Data System (ADS)

    Podgorsak, Alexander R.; Nagesh, S. V. Setlur; Bednarek, Daniel R.; Rudin, Stephen; Ionita, Ciprian N.

    2017-03-01

    This project assessed the effectiveness of using two different detectors to obtain dual-energy (DE) micro-CT data for carrying out material decomposition. A micro-CT system coupled to either a complementary metal-oxide semiconductor (CMOS) or an electron-multiplying CCD (EMCCD) detector was used to acquire image data of a 3D-printed phantom with channels filled with different materials. At any instance, materials such as iohexol contrast agent, water, and platinum were selected to make up the scanned object. DE micro-CT data were acquired, and slices of the scanned object were differentiated by material makeup. The success of the decomposition was assessed quantitatively through the computation of the percentage normalized root-mean-square error (%NRMSE). Our results indicate a successful decomposition of iohexol for both detectors (%NRMSE values of 1.8 for the EMCCD and 2.4 for the CMOS), as well as platinum (%NRMSE value of 4.7). The CMOS detector performed material decomposition on air and water with, on average, seven times higher %NRMSE, possibly due to the decreased sensitivity of the CMOS system. Material decomposition showed the potential to differentiate between materials such as iohexol and platinum, perhaps opening the door for its use in the neurovascular anatomical region. Work supported by Toshiba America Medical Systems, and partially supported by NIH grant 2R01EB002873.

  6. Rapid habitability assessment of Mars samples by pyrolysis-FTIR

    NASA Astrophysics Data System (ADS)

    Gordon, Peter R.; Sephton, Mark A.

    2016-02-01

    Pyrolysis Fourier transform infrared spectroscopy (pyrolysis FTIR) is a potential sample selection method for Mars Sample Return missions. FTIR spectroscopy can be performed on solid and liquid samples but also on gases following preliminary thermal extraction, pyrolysis or gasification steps. The detection of hydrocarbon and non-hydrocarbon gases can reveal information on sample mineralogy and past habitability of the environment in which the sample was created. The absorption of IR radiation at specific wavenumbers by organic functional groups can indicate the presence and type of any organic matter present. Here we assess the utility of pyrolysis-FTIR to release water, carbon dioxide, sulfur dioxide and organic matter from Mars relevant materials to enable a rapid habitability assessment of target rocks for sample return. For our assessment a range of minerals were analyzed by attenuated total reflectance FTIR. Subsequently, the mineral samples were subjected to single step pyrolysis and multi step pyrolysis and the products characterised by gas phase FTIR. Data from both single step and multi step pyrolysis-FTIR provide the ability to identify minerals that reflect habitable environments through their water and carbon dioxide responses. Multi step pyrolysis-FTIR can be used to gain more detailed information on the sources of the liberated water and carbon dioxide owing to the characteristic decomposition temperatures of different mineral phases. Habitation can be suggested when pyrolysis-FTIR indicates the presence of organic matter within the sample. Pyrolysis-FTIR, therefore, represents an effective method to assess whether Mars Sample Return target rocks represent habitable conditions and potential records of habitation and can play an important role in sample triage operations.

  7. Thermogravimetric analysis and kinetic modeling of low-transition-temperature mixtures pretreated oil palm empty fruit bunch for possible maximum yield of pyrolysis oil.

    PubMed

    Yiin, Chung Loong; Yusup, Suzana; Quitain, Armando T; Uemura, Yoshimitsu; Sasaki, Mitsuru; Kida, Tetsuya

    2018-05-01

    The impacts of low-transition-temperature mixtures (LTTMs) pretreatment on the thermal decomposition and kinetics of empty fruit bunch (EFB) were investigated by thermogravimetric analysis. EFB was pretreated with the LTTMs for different durations, which enabled various degrees of alteration to its structure. The TG-DTG curves showed that LTTMs pretreatment of EFB shifted the temperature and rate of decomposition to higher values. The EFB pretreated with sucrose- and choline chloride-based LTTMs attained the highest mass loss of volatile matter (78.69% and 75.71%) after 18 h of pretreatment. For the monosodium glutamate-based LTTM, the 24 h pretreated EFB achieved the maximum mass loss (76.1%). Based on the Coats-Redfern integral method, the LTTMs pretreatment increased the activation energy of the thermal decomposition of EFB from 80.00 to 82.82-94.80 kJ/mol. The activation energy was mainly affected by demineralization and the alteration in cellulose crystallinity after LTTMs pretreatment. Copyright © 2018 Elsevier Ltd. All rights reserved.
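
    The Coats-Redfern treatment mentioned above amounts to a linear fit of ln[g(α)/T²] against 1/T, with the activation energy taken from the slope. The sketch below does this for a first-order model, g(α) = -ln(1 - α); the conversion values are synthetic stand-ins for measured TGA data.

    ```python
    import numpy as np

    R = 8.314  # gas constant, J mol^-1 K^-1

    def coats_redfern_Ea(T, alpha):
        """Activation energy (kJ/mol) from TGA conversion data via the
        Coats-Redfern method with a first-order model g(alpha) = -ln(1 - alpha)."""
        T = np.asarray(T, float)
        alpha = np.asarray(alpha, float)
        y = np.log(-np.log(1.0 - alpha) / T ** 2)
        slope, _ = np.polyfit(1.0 / T, y, 1)   # slope = -Ea / R
        return -slope * R / 1000.0

    # Synthetic conversion values over a hypothetical decomposition window (K).
    T = np.array([550.0, 570.0, 590.0, 610.0, 630.0])
    alpha = np.array([0.10, 0.22, 0.40, 0.62, 0.82])
    print(f"Ea ≈ {coats_redfern_Ea(T, alpha):.1f} kJ/mol")
    ```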

  8. Block matrix based LU decomposition to analyze kinetic damping in active plasma resonance spectroscopy

    NASA Astrophysics Data System (ADS)

    Roehl, Jan Hendrik; Oberrath, Jens

    2016-09-01

    ``Active plasma resonance spectroscopy'' (APRS) is a widely used diagnostic method to measure plasma parameters such as the electron density. Measurements with APRS probes in plasmas of a few Pa typically show a broadening of the spectrum due to kinetic effects. To analyze this broadening, a general kinetic model in the electrostatic approximation based on functional analytic methods has been presented [1]. One of the main results is that the system response function Y(ω) is given in terms of the matrix elements of the resolvent of the dynamic operator, evaluated for values on the imaginary axis. To determine the response function of a specific probe, the resolvent has to be approximated by a large matrix with a banded block structure. Owing to this structure, a block-based LU decomposition can be implemented, which leads to a solution for Y(ω) given only by products of matrices of the inner block size. This LU decomposition makes it possible to analyze the influence of kinetic effects on the broadening while saving memory and computation time. Gratitude is expressed to the internal funding of Leuphana University.

  9. Radiation noise of the bearing applied to the ceramic motorized spindle based on the sub-source decomposition method

    NASA Astrophysics Data System (ADS)

    Bai, X. T.; Wu, Y. H.; Zhang, K.; Chen, C. Z.; Yan, H. P.

    2017-12-01

    This paper focuses on the calculation and analysis of the radiation noise of the angular contact ball bearing applied to the ceramic motorized spindle. A dynamic model containing the main working conditions and structural parameters is established based on the dynamic theory of rolling bearings. The sub-source decomposition method is introduced for the calculation of the radiation noise of the bearing, and a comparative experiment is used to check the precision of the method. The contributions of the different components are then compared in the frequency domain based on the sub-source decomposition method. The spectra of the radiation noise of the different components under various rotation speeds are used to assess the contribution of different eigenfrequencies to the radiation noise of each component, and the proportions of friction noise and impact noise are evaluated as well. The results provide a theoretical basis for the calculation of bearing noise and a reference for assessing the impact of different components on the radiation noise of the bearing at different rotation speeds.

  10. Decomposition-Based Multiobjective Evolutionary Algorithm for Community Detection in Dynamic Social Networks

    PubMed Central

    Ma, Jingjing; Liu, Jie; Ma, Wenping; Gong, Maoguo; Jiao, Licheng

    2014-01-01

    Community structure is one of the most important properties in social networks. In dynamic networks, there are two conflicting criteria that need to be considered. One is the snapshot quality, which evaluates the quality of the community partitions at the current time step. The other is the temporal cost, which evaluates the difference between communities at different time steps. In this paper, we propose a decomposition-based multiobjective community detection algorithm to simultaneously optimize these two objectives to reveal community structure and its evolution in dynamic networks. It employs the framework of multiobjective evolutionary algorithm based on decomposition to simultaneously optimize the modularity and normalized mutual information, which quantitatively measure the quality of the community partitions and temporal cost, respectively. A local search strategy dealing with the problem-specific knowledge is incorporated to improve the effectiveness of the new algorithm. Experiments on computer-generated and real-world networks demonstrate that the proposed algorithm can not only find community structure and capture community evolution more accurately, but also be steadier than the two compared algorithms. PMID:24723806
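
    A minimal sketch of the decomposition idea behind such an algorithm: each weight vector turns the two conflicting criteria into one scalar subproblem via the Tchebycheff approach. The placeholder objective functions, weights and random "partitions" below are illustrative assumptions, not the paper's modularity and normalized-mutual-information definitions.

    ```python
    import numpy as np

    def tchebycheff(fvals, weights, ideal):
        """Tchebycheff scalarization used in decomposition-based multiobjective
        optimization: max_i w_i * |f_i - z*_i|."""
        return np.max(weights * np.abs(np.asarray(fvals) - ideal))

    # Placeholder objectives for a candidate community partition x (minimization form).
    def snapshot_cost(x):  return 1.0 - x[0]   # stands in for 1 - modularity
    def temporal_cost(x):  return 1.0 - x[1]   # stands in for 1 - NMI with previous step

    ideal = np.zeros(2)                        # assumed ideal point z*
    weight_vectors = [np.array([w, 1.0 - w]) for w in np.linspace(0.05, 0.95, 10)]
    candidates = [np.random.rand(2) for _ in range(50)]   # stand-ins for partitions

    for w in weight_vectors:
        best = min(candidates,
                   key=lambda x: tchebycheff([snapshot_cost(x), temporal_cost(x)], w, ideal))
        print(w.round(2), best.round(3))
    ```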

  11. Polar decomposition for attitude determination from vector observations

    NASA Technical Reports Server (NTRS)

    Bar-Itzhack, Itzhack Y.

    1993-01-01

    This work treats the problem of weighted least-squares fitting of a 3D Euclidean-coordinate transformation matrix to a set of unit vectors measured in the reference and transformed coordinates. A closed-form analytic solution to the problem is re-derived. The fact that the solution is the closest orthogonal matrix to a matrix defined by the measured vectors and their weights is clearly demonstrated. Several known algorithms for computing the analytic closed-form solution are considered. An algorithm is discussed which is based on the polar decomposition of a matrix into the closest unitary matrix and a Hermitian matrix. A somewhat longer, improved algorithm is suggested as well. A comparison of several algorithms is carried out using simulated data as well as real data from the Upper Atmosphere Research Satellite, based on accuracy and time consumption. It is concluded that the algorithms based on polar decomposition yield a simple although somewhat less accurate solution. The precision of the latter algorithms increases with the number of measured vectors and with the accuracy of their measurement.
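
    As a sketch of the core step (not the paper's full weighted least-squares algorithm), the rotation closest to the attitude-profile matrix built from weighted vector pairs can be obtained from its singular value decomposition, which also yields the orthogonal factor of the polar decomposition; the vectors and weights below are hypothetical.

    ```python
    import numpy as np

    def closest_orthogonal(B):
        """Orthogonal factor Q of the polar decomposition B = Q P:
        the proper rotation closest to B in the Frobenius norm."""
        U, _, Vt = np.linalg.svd(B)
        D = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])   # force det(Q) = +1
        return U @ D @ Vt

    # Hypothetical unit vectors in the reference (r) and body (b) frames, with weights.
    r = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
    b = np.array([[0.99, 0.12, 0.01], [-0.11, 0.99, 0.02], [0.01, -0.02, 1.00]], float)
    b /= np.linalg.norm(b, axis=1, keepdims=True)
    w = np.array([1.0, 1.0, 0.5])

    B = sum(wi * np.outer(bi, ri) for wi, bi, ri in zip(w, b, r))  # attitude-profile matrix
    A = closest_orthogonal(B)
    print(np.round(A, 3))
    print(np.round(A @ A.T, 3))   # ~identity: A is orthogonal
    ```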

  12. Decomposition-based multiobjective evolutionary algorithm for community detection in dynamic social networks.

    PubMed

    Ma, Jingjing; Liu, Jie; Ma, Wenping; Gong, Maoguo; Jiao, Licheng

    2014-01-01

    Community structure is one of the most important properties in social networks. In dynamic networks, there are two conflicting criteria that need to be considered. One is the snapshot quality, which evaluates the quality of the community partitions at the current time step. The other is the temporal cost, which evaluates the difference between communities at different time steps. In this paper, we propose a decomposition-based multiobjective community detection algorithm to simultaneously optimize these two objectives to reveal community structure and its evolution in dynamic networks. It employs the framework of multiobjective evolutionary algorithm based on decomposition to simultaneously optimize the modularity and normalized mutual information, which quantitatively measure the quality of the community partitions and temporal cost, respectively. A local search strategy dealing with the problem-specific knowledge is incorporated to improve the effectiveness of the new algorithm. Experiments on computer-generated and real-world networks demonstrate that the proposed algorithm can not only find community structure and capture community evolution more accurately, but also be steadier than the two compared algorithms.

  13. Synthesis, crystal structure and catalytic effect on thermal decomposition of RDX and AP: An energetic coordination polymer [Pb{sub 2}(C{sub 5}H{sub 3}N{sub 5}O{sub 5}){sub 2}(NMP)·NMP]{sub n}

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Jin-jian; Yancheng Teachers College, Yancheng 224002; Liu, Zu-Liang, E-mail: liuzl@mail.njust.edu.cn

    2013-04-15

    An energetic lead(II) coordination polymer based on the ligand ANPyO has been synthesized and its crystal structure obtained. The polymer was characterized by FT-IR spectroscopy, elemental analysis, DSC and TG-DTG techniques. Thermal analysis shows one endothermic process and two exothermic decomposition stages in the temperature range of 50–600 °C, with a final residue of 57.09%. The non-isothermal kinetics of the main exothermic decomposition have also been studied using Kissinger's and Ozawa–Doyle's methods; the apparent activation energy is calculated as 195.2 kJ/mol. Furthermore, DSC measurements show that the polymer has a significant catalytic effect on the thermal decomposition of ammonium perchlorate. - Graphical abstract: An energetic lead(II) coordination polymer of ANPyO has been synthesized, structurally characterized and its properties tested. Highlights: ► We have synthesized and characterized an energetic lead(II) coordination polymer. ► We have measured its molecular structure and thermal decomposition. ► It has a significant catalytic effect on the thermal decomposition of AP.

  14. Coarse-to-fine markerless gait analysis based on PCA and Gauss-Laguerre decomposition

    NASA Astrophysics Data System (ADS)

    Goffredo, Michela; Schmid, Maurizio; Conforto, Silvia; Carli, Marco; Neri, Alessandro; D'Alessio, Tommaso

    2005-04-01

    Human movement analysis is generally performed with marker-based systems, which allow the trajectories of markers placed on specific points of the human body to be reconstructed with high accuracy. Marker-based systems, however, have some drawbacks that can be overcome by the use of video systems applying markerless techniques. In this paper, a specifically designed computer vision technique for the detection and tracking of relevant body points is presented. It is based on the Gauss-Laguerre decomposition, and a principal component analysis (PCA) technique is used to circumscribe the region of interest. Results obtained on both synthetic and experimental tests show a significant reduction of the computational cost, with no significant reduction of the tracking accuracy.

  15. Automated analysis of biological oscillator models using mode decomposition.

    PubMed

    Konopka, Tomasz

    2011-04-01

    Oscillating signals produced by biological systems have shapes, described by their Fourier spectra, that can potentially reveal the mechanisms that generate them. Extracting this information from measured signals is interesting for the validation of theoretical models, discovery and classification of interaction types, and for optimal experiment design. An automated workflow is described for the analysis of oscillating signals. A software package is developed to match signal shapes to hundreds of a priori viable model structures defined by a class of first-order differential equations. The package computes parameter values for each model by exploiting the mode decomposition of oscillating signals and formulating the matching problem in terms of systems of simultaneous polynomial equations. On the basis of the computed parameter values, the software returns a list of models consistent with the data. In validation tests with synthetic datasets, it not only shortlists those model structures used to generate the data but also shows that excellent fits can sometimes be achieved with alternative equations. The listing of all consistent equations is indicative of how further invalidation might be achieved with additional information. When applied to data from a microarray experiment on mice, the procedure finds several candidate model structures to describe interactions related to the circadian rhythm. This shows that experimental data on oscillators is indeed rich in information about gene regulation mechanisms. The software package is available at http://babylone.ulb.ac.be/autoosc/.

  16. Soil physical, chemical and gas-flux characterization from Picea mariana stands near Erickson Creek, Alaska

    USGS Publications Warehouse

    O'Donnell, Jonathan A.; Harden, Jennifer W.; Manies, Kristen L.

    2011-01-01

    Fire is a particularly important control on the carbon (C) balance of the boreal forest, and fire-return intervals and fire severity appear to have increased since the late 1900s in North America. In addition to the immediate release of stored C to the atmosphere through organic-matter combustion, fire also modifies soil conditions, possibly affecting C exchange between terrestrial and atmospheric pools for decades after the burn. The effects of fire on ecosystem C dynamics vary across the landscape, with topographic position and soil drainage functioning as important controls. The data reported here contributed to a larger U.S. Geological Survey (USGS) study, published in the journal Ecosystems by O'Donnell and others (2009). To evaluate the effects of fire and drainage on ecosystem C dynamics, we selected sample sites within the 2003 Erickson Creek fire scar to measure CO2 fluxes and soil C inventories in burned and unburned (control) sites in both upland and lowland black spruce (Picea mariana) forests. The results of this study suggested that although fire can create soil climate conditions which are more conducive to rapid decomposition, rates of C release from soils may be constrained after fire by changes in moisture and (or) substrate quality that impede rates of decomposition. Here, we report detailed site information, methodology, and data (in spreadsheet files) from that study.

  17. Interactions between geomorphology and ecosystem processes in travertine streams: Implications for decommissioning a dam on Fossil Creek, Arizona

    NASA Astrophysics Data System (ADS)

    Marks, Jane C.; Parnell, Roderic; Carter, Cody; Dinger, Eric C.; Haden, G. Allen

    2006-07-01

    Travertine deposits of calcium carbonate can dominate channel geomorphology in streams where travertine deposition creates a distinct morphology characterized by travertine terraces, steep waterfalls, and large pools. Algae and microorganisms can facilitate travertine deposition, but how travertine affects material and energy flow in stream ecosystems is less well understood. Nearly a century of flow diversion for hydropower production has decimated the natural travertine formations in Fossil Creek, Arizona. The dam will be decommissioned in 2005. Returning carbonate-rich spring water to the natural stream channel should promote travertine deposition. How will the recovery of travertine affect the ecology of the creek? To address this question, we compared primary production, decomposition, and the abundance and diversity of invertebrates and fish in travertine and riffle/run reaches of Fossil Creek, Arizona. We found that travertine supports higher primary productivity, faster rates of leaf litter decomposition, and higher species richness of the native invertebrate assemblage. Observations from snorkeling in the stream indicate that fish density is also higher in the travertine reach. We postulate that restoring travertine to Fossil Creek will increase stream productivity, rates of litter processing, and energy flow up the food web. Higher aquatic productivity could fundamentally shift the nature of the stream from a sink to a source of energy for the surrounding terrestrial landscape.

  18. Long-term sensitivity of soil carbon turnover to warming.

    PubMed

    Knorr, W; Prentice, I C; House, J I; Holland, E A

    2005-01-20

    The sensitivity of soil carbon to warming is a major uncertainty in projections of carbon dioxide concentration and climate. Experimental studies overwhelmingly indicate increased soil organic carbon (SOC) decomposition at higher temperatures, resulting in increased carbon dioxide emissions from soils. However, recent findings have been cited as evidence against increased soil carbon emissions in a warmer world. In soil warming experiments, the initially increased carbon dioxide efflux returns to pre-warming rates within one to three years, and apparent carbon pool turnover times are insensitive to temperature. It has already been suggested that the apparent lack of temperature dependence could be an artefact due to neglecting the extreme heterogeneity of soil carbon, but no explicit model has yet been presented that can reconcile all the above findings. Here we present a simple three-pool model that partitions SOC into components with different intrinsic turnover rates. Using this model, we show that the results of all the soil-warming experiments are compatible with long-term temperature sensitivity of SOC turnover: they can be explained by rapid depletion of labile SOC combined with the negligible response of non-labile SOC on experimental timescales. Furthermore, we present evidence that non-labile SOC is more sensitive to temperature than labile SOC, implying that the long-term positive feedback of soil decomposition in a warming world may be even stronger than predicted by global models.
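
    The partitioning argument can be illustrated with a toy three-pool model in which each pool decays at its own rate and every rate responds to warming through a Q10 factor; the pool sizes, turnover times and Q10 value below are arbitrary assumptions, not the paper's fitted parameters.

    ```python
    import numpy as np

    def soil_co2_efflux(years, warming=0.0, q10=2.0,
                        pools=(1.0, 10.0, 100.0),       # initial C stocks (kg C m^-2), assumed
                        turnover=(1.0, 30.0, 1000.0)):  # turnover times at reference T (yr), assumed
        """Annual CO2 efflux from three first-order soil carbon pools under a
        step warming (degrees C); all numbers are illustrative."""
        k = np.array([1.0 / tau for tau in turnover]) * q10 ** (warming / 10.0)
        C = np.array(pools, float)
        flux = []
        for _ in range(years):
            out = C * (1.0 - np.exp(-k))   # exact one-year first-order decay
            C = C - out
            flux.append(out.sum())
        return np.array(flux)

    warmed, control = soil_co2_efflux(10, warming=4.0), soil_co2_efflux(10)
    # The warming-induced extra efflux shrinks within a few years as the labile
    # pool is depleted, even though every pool remains temperature sensitive.
    print(np.round(warmed - control, 3))
    ```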

  19. A novel hybrid model for air quality index forecasting based on two-phase decomposition technique and modified extreme learning machine.

    PubMed

    Wang, Deyun; Wei, Shuai; Luo, Hongyuan; Yue, Chenqiang; Grunder, Olivier

    2017-02-15

    The randomness, non-stationarity and irregularity of air quality index (AQI) series make AQI forecasting difficult. To enhance forecast accuracy, a novel hybrid forecasting model combining a two-phase decomposition technique and an extreme learning machine (ELM) optimized by the differential evolution (DE) algorithm is developed for AQI forecasting in this paper. In phase I, complementary ensemble empirical mode decomposition (CEEMD) is utilized to decompose the AQI series into a set of intrinsic mode functions (IMFs) with different frequencies; in phase II, in order to further handle the high-frequency IMFs, which increase the forecast difficulty, variational mode decomposition (VMD) is employed to decompose the high-frequency IMFs into a number of variational modes (VMs). Then, the ELM model optimized by the DE algorithm is applied to forecast all the IMFs and VMs. Finally, the forecast value of each high-frequency IMF is obtained by adding up the forecast results of all corresponding VMs, and the forecast series of the AQI is obtained by aggregating the forecast results of all IMFs. To verify and validate the proposed model, two daily AQI series from July 1, 2014 to June 30, 2016, collected from Beijing and Shanghai in China, are taken as test cases for the empirical study. The experimental results show that the proposed hybrid model based on the two-phase decomposition technique is remarkably superior to all other considered models in forecast accuracy. Copyright © 2016 Elsevier B.V. All rights reserved.

  20. Adaptive fault feature extraction from wayside acoustic signals from train bearings

    NASA Astrophysics Data System (ADS)

    Zhang, Dingcheng; Entezami, Mani; Stewart, Edward; Roberts, Clive; Yu, Dejie

    2018-07-01

    Wayside acoustic detection of train bearing faults plays a significant role in maintaining safety in the railway transport system. However, bearing fault information is normally masked by strong background noise and harmonic interference generated by other components (e.g. axles and gears). In order to extract the bearing fault feature information effectively, a novel method called improved singular value decomposition (ISVD) with resonance-based signal sparse decomposition (RSSD), namely the ISVD-RSSD method, is proposed in this paper. A Savitzky-Golay (S-G) smoothing filter is used to filter the singular vectors (SVs) in the ISVD method as an extension of the singular value decomposition (SVD) theorem. Hilbert spectrum entropy and a stepwise optimisation strategy are used to optimize the S-G filter's parameters. The RSSD method is able to nonlinearly decompose the wayside acoustic signal of a faulty train bearing into high- and low-resonance components, the latter of which contains the bearing fault information. However, a high level of noise usually results in poor decomposition results from the RSSD method. Hence, the collected wayside acoustic signal must first be de-noised using the ISVD component of the ISVD-RSSD method. Next, the de-noised signal is decomposed by the RSSD method. The obtained low-resonance component is then demodulated with a Hilbert transform such that the bearing fault can be detected by observing Hilbert envelope spectra. The effectiveness of the ISVD-RSSD method is verified through both laboratory and field-based experiments as described in the paper. The results indicate that the proposed method is superior to conventional spectrum analysis and ensemble empirical mode decomposition methods.

  1. Innovative PCDD/F-containing gas stream generating system applied in catalytic decomposition of gaseous dioxins over V2O5-WO3/TiO2-based catalysts.

    PubMed

    Yang, Chia Cheng; Chang, Shu Hao; Hong, Bao Zhen; Chi, Kai Hsien; Chang, Moo Been

    2008-10-01

    Development of effective PCDD/F (polychlorinated dibenzo-p-dioxin and dibenzofuran) control technologies is essential for environmental engineers and researchers. In this study, a PCDD/F-containing gas stream generating system was developed to investigate the efficiency and effectiveness of innovative PCDD/F control technologies. The system designed and constructed can stably generate a gas stream with PCDD/F concentrations ranging from 1.0 to 100 ng TEQ Nm⁻³, while reproducibility tests indicate that the PCDD/F recovery efficiencies are between 93% and 112%. This new PCDD/F-containing gas stream generating device is first applied in the investigation of catalytic PCDD/F control technology. The catalytic decomposition of PCDD/Fs was evaluated with two types of commercial V2O5-WO3/TiO2-based catalysts (catalyst A and catalyst B) at controlled temperature, water vapor content, and space velocity. PCDD/F destruction efficiencies of 84% and 91% are achieved with catalysts A and B, respectively, at 280 °C with a space velocity of 5000 h⁻¹. The results also indicate that the presence of water vapor inhibits PCDD/F decomposition due to its competition with PCDD/F molecules for adsorption on the active vanadia sites for both catalysts. In addition, this study combined an integral reaction model and the Mars-Van Krevelen model to calculate the activation energies of OCDD and OCDF decomposition. The activation energies of OCDD and OCDF decomposition via catalysis are calculated as 24.8 kJ mol⁻¹ and 25.2 kJ mol⁻¹, respectively.

  2. Vapor Pressure Data and Analysis for Selected HD Decomposition Products: 1,4-Thioxane, Divinyl Sulfoxide, Chloroethyl Acetylsulfide, and 1,4-Dithiane

    DTIC Science & Technology

    2018-06-01

    decomposition products of bis-(2-chloroethyl) sulfide (HD). These data were measured using an ASTM International method that is based on differential... The source and purity of the materials studied are listed in the report (Table 1, Sample Information for Title Compounds); only front-matter fragments of this record are available.

  3. Factors Affecting Regional Per-Capita Carbon Emissions in China Based on an LMDI Factor Decomposition Model

    PubMed Central

    Dong, Feng; Long, Ruyin; Chen, Hong; Li, Xiaohui; Yang, Qingliang

    2013-01-01

    China is considered to be the main carbon producer in the world. The per-capita carbon emissions indicator is an important measure of the regional carbon emissions situation. This study used the LMDI factor decomposition model–panel co-integration test two-step method to analyze the factors that affect per-capita carbon emissions. The main results are as follows. (1) During 1997, Eastern China, Central China, and Western China ranked first, second, and third in the per-capita carbon emissions, while in 2009 the pecking order changed to Eastern China, Western China, and Central China. (2) According to the LMDI decomposition results, the key driver boosting the per-capita carbon emissions in the three economic regions of China between 1997 and 2009 was economic development, and the energy efficiency was much greater than the energy structure after considering their effect on restraining increased per-capita carbon emissions. (3) Based on the decomposition, the factors that affected per-capita carbon emissions in the panel co-integration test showed that Central China had the best energy structure elasticity in its regional per-capita carbon emissions. Thus, Central China was ranked first for energy efficiency elasticity, while Western China was ranked first for economic development elasticity. PMID:24353753
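
    As a minimal illustration of the additive LMDI idea (for a single region and a simple three-factor identity, not the study's actual model or data), the change in per-capita emissions can be split into carbon-intensity, energy-intensity and economic-development effects that sum exactly to the total change:

    ```python
    import math

    def logmean(a, b):
        """Logarithmic mean used by LMDI."""
        return a if math.isclose(a, b) else (a - b) / (math.log(a) - math.log(b))

    def lmdi_additive(factors_0, factors_T):
        """Additive LMDI-I decomposition of V = product(factors).
        Returns each factor's contribution to V_T - V_0."""
        v0, vT = math.prod(factors_0), math.prod(factors_T)
        L = logmean(vT, v0)
        return [L * math.log(fT / f0) for f0, fT in zip(factors_0, factors_T)]

    # Per-capita emissions = (C/E) * (E/GDP) * (GDP/P); hypothetical values.
    base   = (2.5, 0.8, 3.0)    # carbon intensity, energy intensity, GDP per capita
    latest = (2.4, 0.6, 5.5)
    effects = lmdi_additive(base, latest)
    print([round(e, 3) for e in effects], round(sum(effects), 3))  # sums to V_T - V_0
    ```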

  4. Factors affecting regional per-capita carbon emissions in China based on an LMDI factor decomposition model.

    PubMed

    Dong, Feng; Long, Ruyin; Chen, Hong; Li, Xiaohui; Yang, Qingliang

    2013-01-01

    China is considered to be the main carbon producer in the world. The per-capita carbon emissions indicator is an important measure of the regional carbon emissions situation. This study used the LMDI factor decomposition model-panel co-integration test two-step method to analyze the factors that affect per-capita carbon emissions. The main results are as follows. (1) During 1997, Eastern China, Central China, and Western China ranked first, second, and third in the per-capita carbon emissions, while in 2009 the pecking order changed to Eastern China, Western China, and Central China. (2) According to the LMDI decomposition results, the key driver boosting the per-capita carbon emissions in the three economic regions of China between 1997 and 2009 was economic development, and the energy efficiency was much greater than the energy structure after considering their effect on restraining increased per-capita carbon emissions. (3) Based on the decomposition, the factors that affected per-capita carbon emissions in the panel co-integration test showed that Central China had the best energy structure elasticity in its regional per-capita carbon emissions. Thus, Central China was ranked first for energy efficiency elasticity, while Western China was ranked first for economic development elasticity.

  5. Empirical mode decomposition apparatus, method and article of manufacture for analyzing biological signals and performing curve fitting

    NASA Technical Reports Server (NTRS)

    Huang, Norden E. (Inventor)

    2004-01-01

    A computer implemented physical signal analysis method includes four basic steps and the associated presentation techniques of the results. The first step is a computer implemented Empirical Mode Decomposition that extracts a collection of Intrinsic Mode Functions (IMF) from nonlinear, nonstationary physical signals. The decomposition is based on the direct extraction of the energy associated with various intrinsic time scales in the physical signal. Expressed in the IMF's, they have well-behaved Hilbert Transforms from which instantaneous frequencies can be calculated. The second step is the Hilbert Transform which produces a Hilbert Spectrum. Thus, the invention can localize any event on the time as well as the frequency axis. The decomposition can also be viewed as an expansion of the data in terms of the IMF's. Then, these IMF's, based on and derived from the data, can serve as the basis of that expansion. The local energy and the instantaneous frequency derived from the IMF's through the Hilbert transform give a full energy-frequency-time distribution of the data which is designated as the Hilbert Spectrum. The third step filters the physical signal by combining a subset of the IMFs. In the fourth step, a curve may be fitted to the filtered signal which may not have been possible with the original, unfiltered signal.
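
    A heavily simplified sketch of the sifting idea behind this decomposition (a single IMF, a fixed number of sifting passes, no boundary handling or stopping criterion), assuming SciPy is available; it is not the patented implementation.

    ```python
    import numpy as np
    from scipy.signal import argrelextrema
    from scipy.interpolate import CubicSpline

    def sift_once(x, t):
        """One sifting pass: subtract the mean of the upper and lower envelopes
        built through the local extrema (no end-effect handling)."""
        maxima = argrelextrema(x, np.greater)[0]
        minima = argrelextrema(x, np.less)[0]
        if len(maxima) < 3 or len(minima) < 3:
            return None
        upper = CubicSpline(t[maxima], x[maxima])(t)
        lower = CubicSpline(t[minima], x[minima])(t)
        return x - 0.5 * (upper + lower)

    def extract_imf(x, t, n_sift=10):
        """Crude IMF extraction: a fixed number of sifting passes."""
        h = x.copy()
        for _ in range(n_sift):
            nxt = sift_once(h, t)
            if nxt is None:
                break
            h = nxt
        return h

    t = np.linspace(0.0, 1.0, 1000)
    signal = np.sin(2 * np.pi * 30 * t) + 0.5 * np.sin(2 * np.pi * 4 * t)
    imf1 = extract_imf(signal, t)       # should resemble the 30 Hz component
    residue = signal - imf1             # the slower oscillation remains here
    print(imf1[:3], residue[:3])
    ```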

  6. Multi-scale structure and topological anomaly detection via a new network statistic: The onion decomposition.

    PubMed

    Hébert-Dufresne, Laurent; Grochow, Joshua A; Allard, Antoine

    2016-08-18

    We introduce a network statistic that measures structural properties at the micro-, meso-, and macroscopic scales, while still being easy to compute and interpretable at a glance. Our statistic, the onion spectrum, is based on the onion decomposition, which refines the k-core decomposition, a standard network fingerprinting method. The onion spectrum is exactly as easy to compute as the k-cores: It is based on the stages at which each vertex gets removed from a graph in the standard algorithm for computing the k-cores. Yet, the onion spectrum reveals much more information about a network, and at multiple scales; for example, it can be used to quantify node heterogeneity, degree correlations, centrality, and tree- or lattice-likeness. Furthermore, unlike the k-core decomposition, the combined degree-onion spectrum immediately gives a clear local picture of the network around each node which allows the detection of interesting subgraphs whose topological structure differs from the global network organization. This local description can also be leveraged to easily generate samples from the ensemble of networks with a given joint degree-onion distribution. We demonstrate the utility of the onion spectrum for understanding both static and dynamic properties on several standard graph models and on many real-world networks.
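
    A plain-Python sketch of the peeling procedure as described above: while computing the k-cores, all vertices removed together in one pass share an onion layer, and the pass index is recorded per node. The exact bookkeeping here is a reconstruction from the description, not the authors' reference code.

    ```python
    def onion_decomposition(adj):
        """Return (coreness, layer) per node for an undirected graph given as
        {node: set(neighbors)}. layer = the peeling pass at which the node is
        removed while computing the k-cores."""
        adj = {v: set(nbrs) for v, nbrs in adj.items()}
        degree = {v: len(nbrs) for v, nbrs in adj.items()}
        coreness, layer = {}, {}
        k, current_layer = 0, 0
        while degree:
            k = max(k, min(degree.values()))
            # Peel, in passes, every node whose remaining degree is <= k.
            while degree and min(degree.values()) <= k:
                current_layer += 1
                this_pass = [v for v, d in degree.items() if d <= k]
                for v in this_pass:
                    coreness[v], layer[v] = k, current_layer
                    for u in adj[v]:
                        if u in degree and u != v:
                            degree[u] -= 1
                            adj[u].discard(v)
                    del degree[v], adj[v]
        return coreness, layer

    # Small example: a triangle (2-core) with one pendant vertex.
    g = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
    print(onion_decomposition(g))
    ```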

  7. Low-rank canonical-tensor decomposition of potential energy surfaces: application to grid-based diagrammatic vibrational Green's function theory

    NASA Astrophysics Data System (ADS)

    Rai, Prashant; Sargsyan, Khachik; Najm, Habib; Hermes, Matthew R.; Hirata, So

    2017-09-01

    A new method is proposed for a fast evaluation of high-dimensional integrals of potential energy surfaces (PES) that arise in many areas of quantum dynamics. It decomposes a PES into a canonical low-rank tensor format, reducing its integral into a relatively short sum of products of low-dimensional integrals. The decomposition is achieved by the alternating least squares (ALS) algorithm, requiring only a small number of single-point energy evaluations. Therefore, it eradicates a force-constant evaluation as the hotspot of many quantum dynamics simulations and also possibly lifts the curse of dimensionality. This general method is applied to the anharmonic vibrational zero-point and transition energy calculations of molecules using the second-order diagrammatic vibrational many-body Green's function (XVH2) theory with a harmonic-approximation reference. In this application, high dimensional PES and Green's functions are both subjected to a low-rank decomposition. Evaluating the molecular integrals over a low-rank PES and Green's functions as sums of low-dimensional integrals using the Gauss-Hermite quadrature, this canonical-tensor-decomposition-based XVH2 (CT-XVH2) achieves an accuracy of 0.1 cm-1 or higher and nearly an order of magnitude speedup as compared with the original algorithm using force constants for water and formaldehyde.

  8. Empirical mode decomposition apparatus, method and article of manufacture for analyzing biological signals and performing curve fitting

    NASA Technical Reports Server (NTRS)

    Huang, Norden E. (Inventor)

    2002-01-01

    A computer implemented physical signal analysis method includes four basic steps and the associated presentation techniques of the results. The first step is a computer implemented Empirical Mode Decomposition that extracts a collection of Intrinsic Mode Functions (IMF) from nonlinear, nonstationary physical signals. The decomposition is based on the direct extraction of the energy associated with various intrinsic time scales in the physical signal. Expressed in the IMF's, they have well-behaved Hilbert Transforms from which instantaneous frequencies can be calculated. The second step is the Hilbert Transform which produces a Hilbert Spectrum. Thus, the invention can localize any event on the time as well as the frequency axis. The decomposition can also be viewed as an expansion of the data in terms of the IMF's. Then, these IMF's, based on and derived from the data, can serve as the basis of that expansion. The local energy and the instantaneous frequency derived from the IMF's through the Hilbert transform give a full energy-frequency-time distribution of the data which is designated as the Hilbert Spectrum. The third step filters the physical signal by combining a subset of the IMFs. In the fourth step, a curve may be fitted to the filtered signal which may not have been possible with the original, unfiltered signal.

  9. Wavelet-bounded empirical mode decomposition for measured time series analysis

    NASA Astrophysics Data System (ADS)

    Moore, Keegan J.; Kurt, Mehmet; Eriten, Melih; McFarland, D. Michael; Bergman, Lawrence A.; Vakakis, Alexander F.

    2018-01-01

    Empirical mode decomposition (EMD) is a powerful technique for separating the transient responses of nonlinear and nonstationary systems into finite sets of nearly orthogonal components, called intrinsic mode functions (IMFs), which represent the dynamics on different characteristic time scales. However, a deficiency of EMD is the mixing of two or more components in a single IMF, which can drastically affect the physical meaning of the empirical decomposition results. In this paper, we present a new approach based on EMD, designated as wavelet-bounded empirical mode decomposition (WBEMD), which is a closed-loop, optimization-based solution to the problem of mode mixing. The optimization routine relies on maximizing the isolation of an IMF around a characteristic frequency. This isolation is measured by fitting a bounding function around the IMF in the frequency domain and computing the area under this function. It follows that a large (small) area corresponds to a poorly (well) separated IMF. An optimization routine is developed based on this result with the objective of minimizing the bounding-function area and with the masking-signal parameters serving as free parameters, such that a well-separated IMF is extracted. As examples of the application of WBEMD we apply the proposed method, first to a stationary, two-component signal, and then to the numerically simulated response of a cantilever beam with an essentially nonlinear end attachment. We find that WBEMD vastly improves upon EMD and that the extracted sets of IMFs provide insight into the underlying physics of the response of each system.

  10. Resonance-Based Sparse Signal Decomposition and its Application in Mechanical Fault Diagnosis: A Review.

    PubMed

    Huang, Wentao; Sun, Hongjian; Wang, Weijie

    2017-06-03

    Mechanical equipment is the heart of industry. For this reason, mechanical fault diagnosis has drawn considerable attention. In terms of the rich information hidden in fault vibration signals, the processing and analysis techniques of vibration signals have become a crucial research issue in the field of mechanical fault diagnosis. Based on the theory of sparse decomposition, Selesnick proposed a novel nonlinear signal processing method: resonance-based sparse signal decomposition (RSSD). Since being put forward, RSSD has become widely recognized, and many RSSD-based methods have been developed to guide mechanical fault diagnosis. This paper attempts to summarize and review the theoretical developments and application advances of RSSD in mechanical fault diagnosis, and to provide a more comprehensive reference for those interested in RSSD and mechanical fault diagnosis. Followed by a brief introduction of RSSD's theoretical foundation, based on different optimization directions, applications of RSSD in mechanical fault diagnosis are categorized into five aspects: original RSSD, parameter optimized RSSD, subband optimized RSSD, integrated optimized RSSD, and RSSD combined with other methods. On this basis, outstanding issues in current RSSD study are also pointed out, as well as corresponding instructional solutions. We hope this review will provide an insightful reference for researchers and readers who are interested in RSSD and mechanical fault diagnosis.

  11. Resonance-Based Sparse Signal Decomposition and Its Application in Mechanical Fault Diagnosis: A Review

    PubMed Central

    Huang, Wentao; Sun, Hongjian; Wang, Weijie

    2017-01-01

    Mechanical equipment is the heart of industry. For this reason, mechanical fault diagnosis has drawn considerable attention. In terms of the rich information hidden in fault vibration signals, the processing and analysis techniques of vibration signals have become a crucial research issue in the field of mechanical fault diagnosis. Based on the theory of sparse decomposition, Selesnick proposed a novel nonlinear signal processing method: resonance-based sparse signal decomposition (RSSD). Since being put forward, RSSD has become widely recognized, and many RSSD-based methods have been developed to guide mechanical fault diagnosis. This paper attempts to summarize and review the theoretical developments and application advances of RSSD in mechanical fault diagnosis, and to provide a more comprehensive reference for those interested in RSSD and mechanical fault diagnosis. Followed by a brief introduction of RSSD’s theoretical foundation, based on different optimization directions, applications of RSSD in mechanical fault diagnosis are categorized into five aspects: original RSSD, parameter optimized RSSD, subband optimized RSSD, integrated optimized RSSD, and RSSD combined with other methods. On this basis, outstanding issues in current RSSD study are also pointed out, as well as corresponding instructional solutions. We hope this review will provide an insightful reference for researchers and readers who are interested in RSSD and mechanical fault diagnosis. PMID:28587198

  12. An innovative approach for characteristic analysis and state-of-health diagnosis for a Li-ion cell based on the discrete wavelet transform

    NASA Astrophysics Data System (ADS)

    Kim, Jonghoon; Cho, B. H.

    2014-08-01

    This paper introduces an innovative approach to analyzing the electrochemical characteristics and diagnosing the state-of-health (SOH) of a Li-ion cell based on the discrete wavelet transform (DWT). In this approach, the DWT is applied as a powerful tool for the analysis of the discharging/charging voltage signal (DCVS), with its non-stationary and transient phenomena, of a Li-ion cell. Specifically, DWT-based multi-resolution analysis (MRA) is used to extract information on the electrochemical characteristics in the time and frequency domains simultaneously. Through the MRA, implemented via wavelet decomposition, information on the electrochemical characteristics of a Li-ion cell can be extracted from the DCVS over a wide frequency range. Wavelet decomposition is implemented with the order-3 Daubechies wavelet (db3) selected as the best wavelet function and scale 5 as the optimal decomposition scale. In particular, the present approach takes these investigations one step further by showing the low- and high-frequency components (approximation component An and detail component Dn, respectively) extracted from Li-ion cells with different electrochemical characteristics caused by aging. The experimental results demonstrate the suitability of the DWT-based approach for reliable SOH diagnosis of a Li-ion cell.
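
    Assuming the PyWavelets package is available, a five-level db3 decomposition along the lines described above can be sketched as follows; the synthetic voltage trace and the energy-per-level feature are stand-ins, not the paper's measured data or its exact SOH indicator.

    ```python
    import numpy as np
    import pywt  # PyWavelets, assumed available

    # Synthetic stand-in for a discharging-voltage signal: slow trend + ripple + noise.
    n = 2 ** 12
    t = np.linspace(0.0, 1.0, n)
    voltage = 4.1 - 0.8 * t + 0.01 * np.sin(2 * np.pi * 50 * t) + 0.005 * np.random.randn(n)

    # Five-level multi-resolution analysis with the order-3 Daubechies wavelet.
    coeffs = pywt.wavedec(voltage, 'db3', level=5)   # [A5, D5, D4, D3, D2, D1]

    # Reconstruct the low-frequency approximation A5 alone by zeroing the details.
    approx_only = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
    A5 = pywt.waverec(approx_only, 'db3')[:n]

    # Energy per detail level is one simple feature that changes with cell aging.
    detail_energy = {f"D{5 - i}": float(np.sum(c ** 2)) for i, c in enumerate(coeffs[1:])}
    print(detail_energy)
    ```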

  13. Impact of systemic risk in the real estate sector on banking return.

    PubMed

    Li, Shouwei; Pan, Qing; He, Jianmin

    2016-01-01

    In this paper, we measure systemic risk in the real estate sector based on contingent claims analysis, and then investigate its impact on banking return. Based on the data in China, we find that systemic risk in the real estate sector has a negative effect on banking return, but this effect is temporary; banking risk aversion and implicit interest expense have considerable impact on banking return.

  14. Constrained reduced-order models based on proper orthogonal decomposition

    DOE PAGES

    Reddy, Sohail R.; Freno, Brian Andrew; Cizmas, Paul G. A.; ...

    2017-04-09

    A novel approach is presented to constrain reduced-order models (ROM) based on proper orthogonal decomposition (POD). The Karush–Kuhn–Tucker (KKT) conditions were applied to the traditional reduced-order model to constrain the solution to user-defined bounds. The constrained reduced-order model (C-ROM) was applied and validated against the analytical solution to the first-order wave equation. C-ROM was also applied to the analysis of fluidized beds. Lastly, it was shown that the ROM and C-ROM produced accurate results and that C-ROM was less sensitive to error propagation through time than the ROM.
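
    The POD basis underlying such a reduced-order model is just the set of leading left singular vectors of a snapshot matrix. A bare-bones sketch (without the KKT-based constraint step) is given below, with a synthetic traveling-wave snapshot matrix standing in for simulation data.

    ```python
    import numpy as np

    def pod_basis(snapshots, energy=0.999):
        """Columns of `snapshots` are solution snapshots in time. Returns the POD
        modes retaining the requested fraction of the snapshot energy."""
        U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
        cum = np.cumsum(s ** 2) / np.sum(s ** 2)
        r = int(np.searchsorted(cum, energy)) + 1
        return U[:, :r], s[:r]

    # Synthetic snapshots of a traveling wave on 200 grid points, 60 time steps.
    x = np.linspace(0.0, 1.0, 200)[:, None]
    t = np.linspace(0.0, 1.0, 60)[None, :]
    Q = np.sin(2 * np.pi * (x - 0.3 * t)) + 0.1 * np.sin(6 * np.pi * x) * np.cos(4 * np.pi * t)

    modes, sing_vals = pod_basis(Q)
    coeffs = modes.T @ Q          # reduced coordinates of every snapshot
    print(modes.shape, np.round(sing_vals, 3))
    ```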

  15. Optimal pattern synthesis for speech recognition based on principal component analysis

    NASA Astrophysics Data System (ADS)

    Korsun, O. N.; Poliyev, A. V.

    2018-02-01

    An algorithm for building an optimal pattern for automatic speech recognition, which increases the probability of correct recognition, is developed and presented in this work. The optimal pattern is formed by decomposing an initial pattern into principal components, which makes it possible to reduce the dimension of the multi-parameter optimization problem. At the next step, training samples are introduced and optimal estimates of the principal-component decomposition coefficients are obtained by a numerical parameter optimization algorithm. Finally, we consider experimental results that show the improvement in speech recognition introduced by the proposed optimization algorithm.

  16. Electrochemical Energy Summit An International Summit in Support of Societal Energy Needs

    DTIC Science & Technology

    2015-03-31

    At 40 A/dm² and 80 °C, a Raney Ni alloy coating showed an advantage in oxygen overvoltage (a 100 mV – 200 mV saving relative to Ni metal). Cathode: a thermal-decomposition coating of mixed noble metals on a Ni base metal showed low hydrogen overvoltage. A membrane material showed thermal stability up to 210 °C and exhibited high proton conductivity (2.4×10⁻² S cm⁻¹ at 80 °C) and low methanol permeability (3.3×10⁻⁷ cm² s⁻¹...); only fragments of this record are available.

  17. Linear stability analysis of detonations via numerical computation and dynamic mode decomposition

    NASA Astrophysics Data System (ADS)

    Kabanov, Dmitry I.; Kasimov, Aslan R.

    2018-03-01

    We introduce a new method to investigate linear stability of gaseous detonations that is based on an accurate shock-fitting numerical integration of the linearized reactive Euler equations with a subsequent analysis of the computed solution via the dynamic mode decomposition. The method is applied to the detonation models based on both the standard one-step Arrhenius kinetics and two-step exothermic-endothermic reaction kinetics. Stability spectra for all cases are computed and analyzed. The new approach is shown to be a viable alternative to the traditional normal-mode analysis used in detonation theory.
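
    For reference, a plain SVD-based dynamic mode decomposition of a snapshot sequence, of the kind that can be applied to a computed linearized solution, is sketched below; the synthetic data and the truncation rank are assumptions, not the paper's detonation simulations.

    ```python
    import numpy as np

    def dmd(X, r, dt=1.0):
        """Standard SVD-based DMD. X holds snapshots as columns; returns
        continuous-time eigenvalues (growth rate + i*frequency) and modes."""
        X1, X2 = X[:, :-1], X[:, 1:]
        U, s, Vt = np.linalg.svd(X1, full_matrices=False)
        U, s, V = U[:, :r], s[:r], Vt[:r].conj().T
        Atilde = U.conj().T @ X2 @ V @ np.diag(1.0 / s)
        mu, W = np.linalg.eig(Atilde)                    # discrete-time eigenvalues
        modes = X2 @ V @ np.diag(1.0 / s) @ W
        eigs = np.log(mu.astype(complex)) / dt           # Re: growth/decay, Im: frequency
        return eigs, modes

    # Synthetic perturbation data: a decaying and a slowly growing traveling wave.
    t = np.arange(0.0, 20.0, 0.05)
    x = np.linspace(0.0, 1.0, 64)[:, None]
    mode1 = np.exp((-0.05 + 2.0j) * t) * np.exp(2j * np.pi * 1 * x)
    mode2 = np.exp((+0.02 + 5.0j) * t) * np.exp(2j * np.pi * 3 * x)
    X = (mode1 + mode2).real

    eigs, _ = dmd(X, r=4, dt=0.05)
    print(np.round(eigs, 3))   # expect roughly -0.05 ± 2i and 0.02 ± 5i
    ```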

  18. Range-based volatility, expected stock returns, and the low volatility anomaly

    PubMed Central

    2017-01-01

    One of the foundations of financial economics is the idea that rational investors will discount stocks with more risk (volatility), which will result in a positive relation between risk and future returns. However, the empirical evidence is mixed when determining how volatility is related to future returns. In this paper, we examine this relation using a range-based measure of volatility, which is shown to be theoretically, numerically, and empirically superior to other measures of volatility. In a variety of tests, we find that range-based volatility is negatively associated with expected stock returns. These results are robust to time-series multifactor models as well as cross-sectional tests. Our findings contribute to the debate about the direction of the relationship between risk and return and confirm the presence of the low volatility anomaly, or the anomalous finding that low volatility stocks outperform high volatility stocks. In other tests, we find that the lower returns associated with range-based volatility are driven by stocks with lottery-like characteristics. PMID:29190652
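
    One common range-based estimator is the Parkinson measure built from daily highs and lows; the sketch below computes it for two made-up price series, since the paper's exact estimator, portfolio construction and data are not reproduced here.

    ```python
    import numpy as np

    def parkinson_volatility(high, low):
        """Parkinson range-based daily volatility estimate from high/low prices."""
        high, low = np.asarray(high, float), np.asarray(low, float)
        hl = np.log(high / low) ** 2
        return np.sqrt(np.mean(hl) / (4.0 * np.log(2.0)))

    # Hypothetical month (21 trading days) of daily highs/lows for two stocks.
    rng = np.random.default_rng(0)
    calm_high, calm_low = 100.0 * np.exp(0.004 * rng.random(21)), 100.0 * np.ones(21)
    wild_high, wild_low = 100.0 * np.exp(0.040 * rng.random(21)), 100.0 * np.ones(21)

    print("low-range stock :", round(parkinson_volatility(calm_high, calm_low), 4))
    print("high-range stock:", round(parkinson_volatility(wild_high, wild_low), 4))
    # The anomaly discussed above is that, empirically, the low-volatility group
    # tends to earn the higher subsequent returns.
    ```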

  19. Range-based volatility, expected stock returns, and the low volatility anomaly.

    PubMed

    Blau, Benjamin M; Whitby, Ryan J

    2017-01-01

    One of the foundations of financial economics is the idea that rational investors will discount stocks with more risk (volatility), which will result in a positive relation between risk and future returns. However, the empirical evidence is mixed when determining how volatility is related to future returns. In this paper, we examine this relation using a range-based measure of volatility, which is shown to be theoretically, numerically, and empirically superior to other measures of volatility. In a variety of tests, we find that range-based volatility is negatively associated with expected stock returns. These results are robust to time-series multifactor models as well as cross-sectional tests. Our findings contribute to the debate about the direction of the relationship between risk and return and confirm the presence of the low volatility anomaly, or the anomalous finding that low volatility stocks outperform high volatility stocks. In other tests, we find that the lower returns associated with range-based volatility are driven by stocks with lottery-like characteristics.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dong, X; Petrongolo, M; Wang, T

    Purpose: A general problem of dual-energy CT (DECT) is that the decomposition is sensitive to noise in the two sets of dual-energy projection data, resulting in severely degraded quality of the decomposed images. We have previously proposed an iterative denoising method for DECT. Using a linear decomposition function, that method does not gain the full benefit of DECT for beam-hardening correction. In this work, we expand the framework of our iterative method to include non-linear decomposition models for noise suppression in DECT. Methods: We first obtain decomposed projections, which are free of beam-hardening artifacts, using a lookup table pre-measured on a calibration phantom. First-pass material images with high noise are reconstructed from the decomposed projections using standard filtered-backprojection reconstruction. Noise in the decomposed images is then suppressed by an iterative method, which is formulated in the form of least-squares estimation with smoothness regularization. Based on the design principles of a best linear unbiased estimator, we include the inverse of the estimated variance-covariance matrix of the decomposed images as the penalty weight in the least-squares term. Analytical formulae are derived to compute the variance-covariance matrix from the measured decomposition lookup table. Results: We have evaluated the proposed method via phantom studies. Using non-linear decomposition, our method effectively suppresses the streaking artifacts of beam-hardening and obtains more uniform images than our previous approach based on a linear model. The proposed method reduces the average noise standard deviation of the two basis materials by one order of magnitude without sacrificing spatial resolution. Conclusion: We propose a general framework of iterative denoising for material decomposition in DECT. Preliminary phantom studies have shown that the proposed method improves image uniformity and reduces the noise level without resolution loss. In the future, we will perform more phantom studies to further validate the performance of the proposed method. This work is supported by a Varian MRA grant.

  1. The thermal stability of sodium beta'-Alumina solid electrolyte ceramic in AMTEC cells

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, Roger M.; Ryan, Margaret A.; Homer, Margie L.

    1999-01-22

    A critical component of alkali metal thermal-to-electric converter (AMTEC) devices for long-duration space missions is the beta'-alumina solid electrolyte ceramic (BASE), for which there exists no substitute. The temperature and environmental conditions under which BASE remains stable control the operational parameters of AMTEC devices. We have used mass-loss experiments in vacuum to 1573 K to characterize the kinetics of BASE decomposition, and conductivity and exchange-current measurements in sodium-vapor-filled exposure cells to 1223 K to investigate changes in the BASE which affect its ionic conductivity. There is no clear evidence of direct thermal decomposition of BASE below 1273 K, although limited soda loss may occur. Reactive metals such as Mn or Cr can react with BASE at temperatures at least as low as 1223 K.

  2. Nano or micro? A mechanism on thermal decomposition of ammonium perchlorate catalyzed by cobalt oxalate.

    PubMed

    Zou, Min; Jiang, Xiaohong; Lu, Lude; Wang, Xin

    2012-07-30

    Micrometer-sized cobalt oxalates with different morphologies were prepared in the presence of surfactants. The effect of the catalysts' morphology on the thermal decomposition of ammonium perchlorate (AP) was evaluated by differential scanning calorimetry (DSC). Remarkably, contrary to well-accepted concepts, no direct relationship between the morphologies of the catalysts and their activities was observed. Based on the structural and morphological variation of the catalysts during the reaction, a catalytic mechanism for the thermal decomposition of ammonium perchlorate catalyzed by cobalt oxalate is proposed. We believe that it is the "self-crushing and self-distribution" occurring within the reaction that accounts for the improvement of the overall catalytic activity. In this process, both catalysts and reactants are crushed and distributed uniformly in an automatic way. This work provides an in-depth insight into the thermal decomposition mechanism of AP as catalyzed by oxalates. Copyright © 2012 Elsevier B.V. All rights reserved.

  3. Directional analysis of cardiac motion field from gated fluorodeoxyglucose PET images using the Discrete Helmholtz Hodge Decomposition.

    PubMed

    Sims, J A; Giorgi, M C; Oliveira, M A; Meneghetti, J C; Gutierrez, M A

    2018-04-01

    Extract directional information related to left ventricular (LV) rotation and torsion from a 4D PET motion field using the Discrete Helmholtz Hodge Decomposition (DHHD). Synthetic motion fields were created using superposition of rotational and radial field components and cardiac fields produced using optical flow from a control and patient image. These were decomposed into curl-free (CF) and divergence-free (DF) components using the DHHD. Synthetic radial components were present in the CF field and synthetic rotational components in the DF field, with each retaining its center position, direction of motion and diameter after decomposition. Direction of rotation at apex and base for the control field were in opposite directions during systole, reversing during diastole. The patient DF field had little overall rotation with several small rotators. The decomposition of the LV motion field into directional components could assist quantification of LV torsion, but further processing stages seem necessary. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. Kinetics of methane hydrate decomposition studied via in situ low temperature X-ray powder diffraction.

    PubMed

    Everett, S Michelle; Rawn, Claudia J; Keffer, David J; Mull, Derek L; Payzant, E Andrew; Phelps, Tommy J

    2013-05-02

    Gas hydrate is known to have a slowed decomposition rate at ambient pressure and temperatures below the melting point of ice. As hydrate exothermically decomposes, gas is released and water of the clathrate cages transforms into ice. Based on results from the decomposition of three nominally similar methane hydrate samples, the kinetics of two regions, 180-200 and 230-260 K, within the overall decomposition range 140-260 K, were studied by in situ low temperature X-ray powder diffraction. The kinetic rate constants, k(a), and the reaction mechanisms, n, for ice formation from methane hydrate were determined by the Avrami model within each region, and activation energies, E(a), were determined by the Arrhenius plot. E(a) determined from the data for 180-200 K was 42 kJ/mol and for 230-260 K was 22 kJ/mol. The higher E(a) in the colder temperature range was attributed to a difference in the microstructure of ice between the two regions.
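
    For readers unfamiliar with the two fits mentioned above, the sketch below (Python with NumPy, assumed) shows the generic recipe: linearize the Avrami form alpha = 1 - exp(-k t^n) to estimate k and n, then take the activation energy from the slope of ln(k) against 1/T. The numbers are synthetic and only meant to reproduce the 42 kJ/mol figure as a check; this is not the authors' fitting code.

      import numpy as np

      R = 8.314  # J/(mol K)

      def avrami_fit(t, alpha):
          # Fit ln(-ln(1 - alpha)) = n*ln(t) + ln(k) for the form alpha = 1 - exp(-k t^n).
          n, ln_k = np.polyfit(np.log(t), np.log(-np.log(1.0 - alpha)), 1)
          return np.exp(ln_k), n

      def arrhenius_Ea(temps_K, ks):
          # Activation energy (J/mol) from the slope of ln(k) versus 1/T.
          slope, _ = np.polyfit(1.0 / np.asarray(temps_K), np.log(ks), 1)
          return -slope * R

      # synthetic check: recover Ea = 42 kJ/mol from fabricated rate constants
      temps = np.array([180.0, 190.0, 200.0])
      ks_true = 1e3 * np.exp(-42e3 / (R * temps))
      t = np.linspace(1.0, 60.0, 30)                       # arbitrary time grid
      ks_fit = [avrami_fit(t, 1.0 - np.exp(-k * t**1.5))[0] for k in ks_true]
      print(round(arrhenius_Ea(temps, ks_fit) / 1e3, 1), "kJ/mol")   # ~42.0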

  5. Role of litter turnover in soil quality in tropical degraded lands of Colombia.

    PubMed

    León, Juan D; Osorio, Nelson W

    2014-01-01

    Land degradation is the result of soil mismanagement that reduces soil productivity and environmental services. An alternative for improving degraded soils through reactivation of biogeochemical nutrient cycles (via litter production and decomposition) is the establishment of active restoration models using new forestry plantations, agroforestry, and silvopastoral systems. On the other hand, passive restoration models consist of promoting natural successional processes with native plants. The objective of this review is to discuss the role of litter production and decomposition as a key strategy to reactivate biogeochemical nutrient cycles and thus improve soil quality in degraded lands of the tropics. For this purpose, the results of different land restoration projects in Colombia are presented, based on the dynamics of litter production, nutrient content, and decomposition. The results indicate that in only 6-13 years it is possible to detect improvements in soil properties due to litter fall and decomposition. Despite this, low soil nutrient availability, particularly of N and P, seems to be a major constraint to the reclamation of these fragile ecosystems.

  6. A Domain Decomposition Parallelization of the Fast Marching Method

    NASA Technical Reports Server (NTRS)

    Herrmann, M.

    2003-01-01

    In this paper, the first domain decomposition parallelization of the Fast Marching Method for level sets is presented. Parallel speedup has been demonstrated in both the optimal and non-optimal domain decomposition cases. The parallel performance of the proposed method depends strongly on separately load balancing the number of nodes on each side of the interface. A load imbalance of nodes on either side of the domain leads to an increase in communication and rollback operations. Furthermore, the amount of inter-domain communication can be reduced by aligning the inter-domain boundaries with the interface normal vectors. In the case of optimal load balancing and aligned inter-domain boundaries, the proposed parallel FMM algorithm is highly efficient, reaching efficiency factors of up to 0.98. Future work will focus on extending the proposed parallel algorithm to higher order accuracy. Also, to further enhance parallel performance, the coupling of the domain decomposition parallelization to the G(sub 0)-based parallelization will be investigated.

  7. Signal Separation of Helicopter Radar Returns Using Wavelet-Based Sparse Signal Optimisation

    DTIC Science & Technology

    2016-10-01

    A novel wavelet-based sparse signal representation technique is used to separate the main and tail rotor blade components of a helicopter from the composite radar returns. The received signal consists of returns from the rotating main and tail rotor blades, the helicopter body ... component signal comprising returns from the main body, the main and tail rotor hubs and blades. Temporal and Doppler characteristics of these ...

  8. Surpassing Humans and Computers with JellyBean: Crowd-Vision-Hybrid Counting Algorithms.

    PubMed

    Sarma, Akash Das; Jain, Ayush; Nandi, Arnab; Parameswaran, Aditya; Widom, Jennifer

    2015-11-01

    Counting objects is a fundamental image processing primitive, and has many scientific, health, surveillance, security, and military applications. Existing supervised computer vision techniques typically require large quantities of labeled training data, and even with that, fail to return accurate results in all but the most stylized settings. Using vanilla crowd-sourcing, on the other hand, can lead to significant errors, especially on images with many objects. In this paper, we present our JellyBean suite of algorithms, which combines the best of crowds and computer vision to count objects in images, and uses judicious decomposition of images to greatly improve accuracy at low cost. Our algorithms have several desirable properties: (i) they are theoretically optimal or near-optimal, in that they ask as few questions as possible of humans (under certain intuitively reasonable assumptions that we justify experimentally in the paper); (ii) they operate in stand-alone or hybrid modes, in that they can either work independently of computer vision algorithms or work in concert with them, depending on whether the computer vision techniques are available or useful for the given setting; (iii) they perform very well in practice, returning accurate counts on images that no individual worker or computer vision algorithm can count correctly, while not incurring a high cost.

  9. Understanding health-care access and utilization disparities among Latino children in the United States.

    PubMed

    Langellier, Brent A; Chen, Jie; Vargas-Bustamante, Arturo; Inkelas, Moira; Ortega, Alexander N

    2016-06-01

    It is important to understand the source of health-care disparities between Latinos and other children in the United States. We examine parent-reported health-care access and utilization among Latino, White, and Black children (≤17 years old) in the United States in the 2006-2011 National Health Interview Survey. Using Blinder-Oaxaca decomposition, we partition health-care disparities into two parts: (1) those attributable to differences in the levels of sociodemographic characteristics (e.g., income) and (2) those attributable to differences in the group-specific regression coefficients that measure the health-care 'return' Latino, White, and Black children receive on these characteristics. In the United States, Latino children are less likely than Whites to have a usual source of care, receive at least one preventive care visit, and visit a doctor, and are more likely to have delayed care. The return on sociodemographic characteristics explains 20-30% of the disparity between Latino and White children in the usual source of care, delayed care, and doctor visits, and 40-50% of the disparity between Latinos and Blacks in emergency department use and preventive care. Much of the health-care disadvantage experienced by Latino children would persist even if Latinos had the same sociodemographic characteristics as Whites and Blacks. © The Author(s) 2014.
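
    The decomposition referred to here is, at its core, a split of a mean outcome gap into a part explained by group differences in characteristics and a part attributed to differences in coefficients ("returns"). The sketch below (Python with NumPy, assumed) shows the standard two-fold Blinder-Oaxaca arithmetic on simulated data; it is a generic illustration, not the study's specification or survey weighting.

      import numpy as np

      def oaxaca_blinder(X_a, y_a, X_b, y_b):
          # Two-fold decomposition of mean(y_a) - mean(y_b), with group B's
          # coefficients as the reference for the "explained" part.
          def ols(X, y):
              Z = np.column_stack([np.ones(len(X)), X])
              beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
              return beta
          beta_a, beta_b = ols(X_a, y_a), ols(X_b, y_b)
          xbar_a = np.r_[1.0, X_a.mean(axis=0)]
          xbar_b = np.r_[1.0, X_b.mean(axis=0)]
          explained = (xbar_a - xbar_b) @ beta_b       # differences in characteristics
          unexplained = xbar_a @ (beta_a - beta_b)     # differences in "returns"
          return y_a.mean() - y_b.mean(), explained, unexplained

      rng = np.random.default_rng(0)
      X_a = rng.normal(1.0, 1.0, size=(500, 2))
      y_a = 2.0 + X_a @ np.array([1.0, 0.5]) + rng.normal(size=500)
      X_b = rng.normal(0.0, 1.0, size=(500, 2))
      y_b = 1.0 + X_b @ np.array([0.8, 0.5]) + rng.normal(size=500)
      gap, explained, unexplained = oaxaca_blinder(X_a, y_a, X_b, y_b)
      print(gap, explained + unexplained)              # the two parts sum to the gap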

  10. The Speech multi features fusion perceptual hash algorithm based on tensor decomposition

    NASA Astrophysics Data System (ADS)

    Huang, Y. B.; Fan, M. H.; Zhang, Q. Y.

    2018-03-01

    With constant progress in modern speech communication technologies, speech data are prone to be attacked by noise or malicious tampering. In order to give the speech perceptual hash algorithm strong robustness and high efficiency, this paper proposes a speech perceptual hash algorithm based on tensor decomposition and multiple features. The algorithm applies wavelet packet decomposition to obtain the perceptual components of the speech, and the LPCC, LSP and ISP features of each speech component are extracted to constitute the speech feature tensor. Speech authentication is performed by generating hash values through quantification of the feature matrix using its median value. Experimental results show that the proposed algorithm is robust to content-preserving operations compared with similar algorithms, and is able to resist attacks from common background noise. The algorithm is also computationally efficient, so it is able to meet the real-time requirements of speech communication and complete the speech authentication quickly.

  11. Solid-base loaded WO{sub 3} photocatalyst for decomposition of harmful organics under visible light irradiation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kako, Tetsuya, E-mail: kako.tetsuya@nims.go.jp; Meng, Xianguang; Ye, Jinhua

    A composite of NaBiO{sub 3}-loaded WO{sub 3} with a mixing ratio of 10:100 was prepared for photocatalytic decomposition of harmful organic contaminants. The composite properties were measured using X-ray diffraction, ultraviolet-visible spectrophotometry (UV-Vis), and valence-band X-ray photoelectron spectroscopy (VB-XPS). The results showed that the potentials of the top of the valence band and the bottom of the conduction band of NaBiO{sub 3} can be estimated as +2.5 V and -0.1 to 0 V, respectively. Furthermore, WO{sub 3}, NaBiO{sub 3}, and the composite all showed 2-propanol (IPA) oxidation under visible-light irradiation. The composite exhibited much higher photocatalytic activity for IPA decomposition into CO{sub 2} than individual WO{sub 3} or NaBiO{sub 3}, because of promoted charge separation and the base effect of NaBiO{sub 3}.

  12. Variance decomposition in stochastic simulators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Le Maître, O. P., E-mail: olm@limsi.fr; Knio, O. M., E-mail: knio@duke.edu; Moraes, A., E-mail: alvaro.moraesgutierrez@kaust.edu.sa

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.
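
    The "variance-based sensitivities" above are Sobol indices arising from the Sobol-Hoeffding decomposition. As a generic illustration only (Python with NumPy, assumed; a toy deterministic function rather than the authors' Poisson-process reformulation), the sketch below estimates a first-order index by brute-force double-loop Monte Carlo.

      import numpy as np

      def first_order_index(f, dim, i, n_outer=2000, n_inner=2000, rng=None):
          # S_i = Var(E[f | x_i]) / Var(f), estimated by conditioning on x_i in an
          # outer loop and averaging over the remaining inputs in an inner loop.
          # (The small upward bias from inner-loop noise is ignored here.)
          if rng is None:
              rng = np.random.default_rng(0)
          cond_means = np.empty(n_outer)
          for k, v in enumerate(rng.uniform(size=n_outer)):
              x = rng.uniform(size=(n_inner, dim))
              x[:, i] = v                              # freeze input i, average over the rest
              cond_means[k] = f(x).mean()
          total_var = f(rng.uniform(size=(200000, dim))).var()
          return cond_means.var() / total_var

      f = lambda x: x[:, 0] + 4.0 * x[:, 1] ** 2       # toy model on the unit square
      print(first_order_index(f, 2, 0), first_order_index(f, 2, 1))   # ~0.06 and ~0.94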

  13. Watermarking scheme based on singular value decomposition and homomorphic transform

    NASA Astrophysics Data System (ADS)

    Verma, Deval; Aggarwal, A. K.; Agarwal, Himanshu

    2017-10-01

    A semi-blind watermarking scheme based on singular-value-decomposition (SVD) and homomorphic transform is proposed. This scheme ensures the digital security of an eight bit gray scale image by inserting an invisible eight bit gray scale watermark into it. The key approach of the scheme is to apply the homomorphic transform on the host image to obtain its reflectance component. The watermark is embedded into the singular values that are obtained by applying the singular value decomposition on the reflectance component. Peak-signal-to-noise-ratio (PSNR), normalized-correlation-coefficient (NCC) and mean-structural-similarity-index-measure (MSSIM) are used to evaluate the performance of the scheme. Invisibility of watermark is ensured by visual inspection and high value of PSNR of watermarked images. Presence of watermark is ensured by visual inspection and high values of NCC and MSSIM of extracted watermarks. Robustness of the scheme is verified by high values of NCC and MSSIM for attacked watermarked images.
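
    The core embedding step in SVD-based schemes of this family is to perturb the host's singular values with the watermark and rebuild the image. The sketch below (Python with NumPy, assumed) shows a plain Liu-Tan style embed/extract pair on random matrices; the paper's scheme additionally applies the homomorphic transform and works on the reflectance component, which is omitted here.

      import numpy as np

      def embed(host, wm, alpha=0.05):
          U, S, Vt = np.linalg.svd(host, full_matrices=False)
          D = np.diag(S) + alpha * wm                 # perturb the singular-value matrix
          Uw, Sw, Vwt = np.linalg.svd(D, full_matrices=False)
          marked = U @ np.diag(Sw) @ Vt               # host rebuilt from modified singular values
          return marked, (Uw, Vwt, S)                 # side information for semi-blind extraction

      def extract(marked, keys, alpha=0.05):
          Uw, Vwt, S = keys
          Sm = np.linalg.svd(marked, compute_uv=False)
          D_hat = Uw @ np.diag(Sm) @ Vwt
          return (D_hat - np.diag(S)) / alpha

      rng = np.random.default_rng(1)
      host = rng.uniform(0, 255, size=(64, 64))
      wm = rng.uniform(0, 255, size=(64, 64))
      marked, keys = embed(host, wm)
      print(np.abs(extract(marked, keys) - wm).max())  # ~0: watermark recovered exactly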

  14. FCDECOMP: decomposition of metabolic networks based on flux coupling relations.

    PubMed

    Rezvan, Abolfazl; Marashi, Sayed-Amir; Eslahchi, Changiz

    2014-10-01

    A metabolic network model provides a computational framework to study the metabolism of a cell at the system level. Due to their large sizes and complexity, rational decomposition of these networks into subsystems is a strategy to obtain better insight into the metabolic functions. Additionally, decomposing metabolic networks paves the way to use computational methods that would otherwise be very slow when run on the original genome-scale network. In the present study, we propose the FCDECOMP decomposition method based on flux coupling relations (FCRs) between pairs of reaction fluxes. This approach utilizes a genetic algorithm (GA) to obtain subsystems that can be analyzed in isolation, i.e. without considering the reactions of the original network in the analysis. Therefore, we propose that our method is useful for discovering biologically meaningful modules in metabolic networks. As a case study, we show that when this method is applied to the metabolic networks of barley seeds and yeast, the modules are in good agreement with the biological compartments of these networks.

  15. Reactive Goal Decomposition Hierarchies for On-Board Autonomy

    NASA Astrophysics Data System (ADS)

    Hartmann, L.

    2002-01-01

    As our experience grows, space missions and systems are expected to address ever more complex and demanding requirements with fewer resources (e.g., mass, power, budget). One approach to accommodating these higher expectations is to increase the level of autonomy to improve the capabilities and robustness of on- board systems and to simplify operations. The goal decomposition hierarchies described here provide a simple but powerful form of goal-directed behavior that is relatively easy to implement for space systems. A goal corresponds to a state or condition that an operator of the space system would like to bring about. In the system described here goals are decomposed into simpler subgoals until the subgoals are simple enough to execute directly. For each goal there is an activation condition and a set of decompositions. The decompositions correspond to different ways of achieving the higher level goal. Each decomposition contains a gating condition and a set of subgoals to be "executed" sequentially or in parallel. The gating conditions are evaluated in order and for the first one that is true, the corresponding decomposition is executed in order to achieve the higher level goal. The activation condition specifies global conditions (i.e., for all decompositions of the goal) that need to hold in order for the goal to be achieved. In real-time, parameters and state information are passed between goals and subgoals in the decomposition; a termination indication (success, failure, degree) is passed up when a decomposition finishes executing. The lowest level decompositions include servo control loops and finite state machines for generating control signals and sequencing i/o. Semaphores and shared memory are used to synchronize and coordinate decompositions that execute in parallel. The goal decomposition hierarchy is reactive in that the generated behavior is sensitive to the real-time state of the system and the environment. That is, the system is able to react to state and environment and in general can terminate the execution of a decomposition and attempt a new decomposition at any level in the hierarchy. This goal decomposition system is suitable for workstation, microprocessor and fpga implementation and thus is able to support the full range of prototyping activities, from mission design in the laboratory to development of the fpga firmware for the flight system. This approach is based on previous artificial intelligence work including (1) Brooks' subsumption architecture for robot control, (2) Firby's Reactive Action Package System (RAPS) for mediating between high level automated planning and low level execution and (3) hierarchical task networks for automated planning. Reactive goal decomposition hierarchies can be used for a wide variety of on-board autonomy applications including automating low level operation sequences (such as scheduling prerequisite operations, e.g., heaters, warm-up periods, monitoring power constraints), coordinating multiple spacecraft as in formation flying and constellations, robot manipulator operations, rendez-vous, docking, servicing, assembly, on-orbit maintenance, planetary rover operations, solar system and interstellar probes, intelligent science data gathering and disaster early warning. Goal decomposition hierarchies can support high level fault tolerance. 
    Given models of on-board resources and goals to accomplish, the decomposition hierarchy could allocate resources to goals taking into account existing faults, reallocating resources in real time as new faults arise. Resources to be modeled include memory (e.g., ROM, FPGA configuration memory, processor memory, payload instrument memory), processors, on-board and interspacecraft network nodes and links, sensors, actuators (e.g., attitude determination and control, guidance and navigation) and payload instruments. A goal decomposition hierarchy could be defined to map mission goals and tasks to available on-board resources. As faults occur and are detected, the resource allocation is modified to avoid using the faulty resource. Goal decomposition hierarchies can implement variable autonomy (in which the operator chooses to command the system at a high or low level), mixed-initiative planning (in which the system is able to interact with the operator, e.g., to request operator intervention when a working envelope is exceeded) and distributed control (in which, for example, multiple spacecraft cooperate to accomplish a task without a fixed master). The full paper will describe in greater detail how goal decompositions work, how they can be implemented, techniques for implementing a candidate application and the current state of the FPGA implementation.
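
    As a toy illustration of the goal/decomposition structure described above (activation conditions, gating conditions, ordered decompositions, leaf-level actions), the sketch below uses Python dataclasses. The class and field names are invented for this example; the paper's system targets workstation, microprocessor and FPGA implementations rather than Python.

      from dataclasses import dataclass, field
      from typing import Callable, List, Optional

      @dataclass
      class Decomposition:
          gate: Callable[[dict], bool]            # gating condition on system state
          subgoals: List["Goal"]

      @dataclass
      class Goal:
          name: str
          activation: Callable[[dict], bool]      # global condition for achieving the goal
          decompositions: List[Decomposition] = field(default_factory=list)
          action: Optional[Callable[[dict], bool]] = None   # leaf-level behaviour

          def execute(self, state: dict) -> bool:
              if not self.activation(state):
                  return False                    # activation condition fails
              if self.action is not None:
                  return self.action(state)       # primitive goal: run the control action
              for d in self.decompositions:
                  if d.gate(state):               # first decomposition whose gate holds
                      return all(g.execute(state) for g in d.subgoals)
              return False                        # no applicable decomposition

      # toy usage: warm up a heater before taking a measurement
      warm_up = Goal("warm_up", lambda s: True,
                     action=lambda s: s.update(temp=s["temp"] + 20) or True)
      measure = Goal("measure", lambda s: s["temp"] >= 20, action=lambda s: True)
      observe = Goal("observe", lambda s: s["power_ok"],
                     [Decomposition(lambda s: s["temp"] < 20, [warm_up, measure]),
                      Decomposition(lambda s: True, [measure])])
      print(observe.execute({"temp": 0, "power_ok": True}))   # True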

  16. Assessment of skeletal changes after post-mortem exposure to fire as an indicator of decomposition stage.

    PubMed

    Keough, N; L'Abbé, E N; Steyn, M; Pretorius, S

    2015-01-01

    Forensic anthropologists are tasked with interpreting the sequence of events from death to the discovery of a body. Burned bone often evokes questions as to the timing of burning events. The purpose of this study was to assess the progression of thermal damage on bones with advancement in decomposition. Twenty-five pigs in various stages of decomposition (fresh, early, advanced, early and late skeletonisation) were exposed to fire for 30 min. The scored heat-related features on bone included colour change (unaltered, charred, calcined), brown and heat borders, heat lines, delineation, greasy bone, joint shielding, predictable and minimal cracking, delamination and heat-induced fractures. Colour changes were scored according to a ranked percentage scale (0-3) and the remaining traits as absent or present (0/1). Kappa statistics was used to evaluate intra- and inter-observer error. Transition analysis was used to formulate probability mass functions [P(X=j|i)] to predict decomposition stage from the scored features of thermal destruction. Nine traits displayed potential to predict decomposition stage from burned remains. An increase in calcined and charred bone occurred synchronously with advancement of decomposition with subsequent decrease in unaltered surfaces. Greasy bone appeared more often in the early/fresh stages (fleshed bone). Heat borders, heat lines, delineation, joint shielding, predictable and minimal cracking are associated with advanced decomposition, when bone remains wet but lacks extensive soft tissue protection. Brown burn/borders, delamination and other heat-induced fractures are associated with early and late skeletonisation, showing that organic composition of bone and percentage of flesh present affect the manner in which it burns. No statistically significant difference was noted among observers for the majority of the traits, indicating that they can be scored reliably. Based on the data analysis, the pattern of heat-induced changes may assist in estimating decomposition stage from unknown, burned remains. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  17. The environmental variables that impact human decomposition in terrestrially exposed contexts within Canada.

    PubMed

    Cockle, Diane Lyn; Bell, Lynne S

    2017-03-01

    Little is known about the nature and trajectory of human decomposition in Canada. This study involved the examination of 96 retrospective police death investigation cases selected using the Canadian ViCLAS (Violent Crime Linkage Analysis System) and sudden death police databases. A classification system was designed and applied based on the latest visible stages of autolysis (stages 1-2), putrefaction (3-5) and skeletonisation (6-8) observed. The analysis of the progression of decomposition using time (Post Mortem Interval (PMI) in days) and a temperature accumulated-degree-days (ADD) score found considerable variability during the putrefaction and skeletonisation phases, with poor predictability noted after stage 5 (post bloat). The visible progression of decomposition outdoors was characterized by a brown to black discolouration at stage 5 and remnant desiccated black tissue at stage 7. No bodies were totally skeletonised in under one year. Mummification of tissue was rare, with earlier onset in winter as opposed to summer, likely due to lower seasonal humidity. Neither ADD nor PMI was a reliable predictor of the decomposition score, with correlations of 53% for temperature and 41% for time. It took almost twice as much time and 1.5 times more temperature (ADD) for the set of cases exposed to cold and freezing temperatures (4°C or less) to reach putrefaction compared to the warm group. The amount of precipitation and/or clothing had a negligible impact on the advancement of decomposition, whereas the lack of sun exposure (full shade) had a small positive effect. This study found that the poor predictability of the onset and duration of late-stage decomposition, combined with our limited understanding of the full range of variables that influence the speed of decomposition, makes PMI estimations for exposed terrestrial cases in Canada unreliable, and also calls into question PMI estimations elsewhere. Copyright © 2016 The Chartered Society of Forensic Sciences. Published by Elsevier B.V. All rights reserved.

  18. Gold - A novel deconvolution algorithm with optimization for waveform LiDAR processing

    NASA Astrophysics Data System (ADS)

    Zhou, Tan; Popescu, Sorin C.; Krause, Keith; Sheridan, Ryan D.; Putman, Eric

    2017-07-01

    Waveform Light Detection and Ranging (LiDAR) data have advantages over discrete-return LiDAR data in accurately characterizing vegetation structure. However, we lack a comprehensive understanding of waveform data processing approaches under different topography and vegetation conditions. The objective of this paper is to highlight a novel deconvolution algorithm, the Gold algorithm, for processing waveform LiDAR data with optimal deconvolution parameters. Further, we present a comparative study of waveform processing methods to provide insight into selecting an approach for a given combination of vegetation and terrain characteristics. We employed two waveform processing methods: (1) direct decomposition, and (2) deconvolution and decomposition. In method two, we utilized two deconvolution algorithms - the Richardson-Lucy (RL) algorithm and the Gold algorithm. The comprehensive and quantitative comparisons were conducted in terms of the number of detected echoes, position accuracy, the bias of the end products (such as the digital terrain model (DTM) and canopy height model (CHM)) from the corresponding reference data, along with parameter uncertainty for these end products obtained from the different methods. This study was conducted at three study sites that include diverse ecological regions, vegetation and elevation gradients. Results demonstrate that the two deconvolution algorithms are sensitive to the pre-processing steps applied to the input data. The deconvolution and decomposition method is more capable of detecting hidden echoes with a lower false echo detection rate, especially for the Gold algorithm. Compared to the reference data, all approaches generate satisfactory accuracy assessment results with small mean spatial difference (<1.22 m for DTMs, <0.77 m for CHMs) and root mean square error (RMSE) (<1.26 m for DTMs, <1.93 m for CHMs). More specifically, the Gold algorithm is superior to the others, with smaller RMSE (<1.01 m), while the direct decomposition approach works better in terms of the percentage of spatial difference within 0.5 and 1 m. The parameter uncertainty analysis demonstrates that the Gold algorithm outperforms the other approaches in dense vegetation areas, with the smallest RMSE, and the RL algorithm performs better in sparse vegetation areas in terms of RMSE. Additionally, high levels of uncertainty occur mostly in areas with high slope and high vegetation. This study provides an alternative and innovative approach for waveform processing that will benefit high fidelity processing of waveform LiDAR data to characterize vegetation structures.
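
    Of the two deconvolution algorithms compared above, Richardson-Lucy is the simpler to write down; the Gold algorithm is a related iterative ratio method. The sketch below (Python with NumPy, assumed) applies a bare-bones 1-D Richardson-Lucy iteration to a synthetic two-echo waveform; real waveform-LiDAR pipelines add noise filtering, pre-processing and the subsequent decomposition step on top of this.

      import numpy as np

      def richardson_lucy_1d(observed, psf, n_iter=200):
          # Multiplicative RL update: estimate *= correlate(observed / conv(estimate, psf), psf).
          psf = psf / psf.sum()
          estimate = np.full_like(observed, observed.mean())
          for _ in range(n_iter):
              conv = np.convolve(estimate, psf, mode="same")
              ratio = observed / np.maximum(conv, 1e-12)
              estimate = estimate * np.convolve(ratio, psf[::-1], mode="same")
          return estimate

      # toy waveform: two overlapping echoes blurred by a Gaussian system response
      truth = np.zeros(200)
      truth[[80, 95]] = [10.0, 6.0]
      psf = np.exp(-0.5 * (np.arange(-20, 21) / 4.0) ** 2)
      waveform = np.convolve(truth, psf / psf.sum(), mode="same")
      restored = richardson_lucy_1d(waveform, psf)
      print(int(np.argmax(restored[:88])), int(88 + np.argmax(restored[88:])))   # near 80 and 95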

  19. Intramolecular distribution of stable nitrogen and oxygen isotopes of nitrous oxide emitted during coal combustion.

    PubMed

    Ogawa, Mitsuteru; Yoshida, Naohiro

    2005-11-01

    The intramolecular distribution of stable isotopes in nitrous oxide that is emitted during coal combustion was analyzed using an isotopic ratio mass spectrometer equipped with a modified ion collector system (IRMS). The coal was combusted in a test furnace fitted with a single burner and the flue gases were collected at the furnace exit following removal of SO(x), NO(x), and H2O in order to avoid the formation of artifact nitrous oxide. The nitrous oxide in the flue gases proved to be enriched in 15N relative to the fuel coal. In air-staged combustion experiments, the staged air ratio was controlled over a range of 0 (unstaged combustion), 20%, and 30%. As the staged air ratio increased, the delta15N and delta18O of the nitrous oxide in the flue gases became depleted. The central nitrogen of the nitrous oxide molecule, N(alpha), was enriched in 15N relative to that occupying the end position of the molecule, N(beta), but this preference, expressed as delta15N(alpha)-delta15N(beta), decreased with the increase in the staged air ratio. Thermal decomposition and hydrogen reduction experiments carried out using a tube reactor allowed qualitative estimates of the kinetic isotope effects that occurred during the decomposition of the nitrous oxide and quantitative estimates of the extent to which the nitrous oxide had decomposed. The site preference of nitrous oxide increased with the extent of the decomposition reactions. Assuming that no site preference exists in nitrous oxide before decomposition, the behavior of nitrous oxide in the test combustion furnace was analyzed using the Rayleigh equation based on a single distillation model. As a result, the extent of decomposition of nitrous oxide was estimated as 0.24-0.26 during the decomposition reaction governed by the thermal decomposition and as 0.35-0.38 during the decomposition reaction governed by the hydrogen reduction in staged combustion. The intramolecular distribution of nitrous oxide can be a valuable parameter to estimate the extent of decomposition reaction and to understand the reaction pathway of nitrous oxide at the high temperature.

  20. Decomposition dynamics of mixed litter in a seasonally flooded forest near the Orinoco river

    NASA Astrophysics Data System (ADS)

    Bastianoni, Alessia; Chacón, Noemí; Méndez, Carlos L.; Flores, Saúl

    2015-04-01

    We evaluated the decomposition of a litter mixture in the seasonally flooded forest of a tributary of the Orinoco river. This mixture was prepared using three litter species, based on the litter fall rate observed over a complete hydro-period (2012-2013). The mixture loading ratio was 0.46 of Pouteria orinocoensis (Sapotaceae), 0.38 of Alibertia latifolia (Rubiaceae) and 0.16 of Acosmium nitens (Fabaceae). The initial chemical composition of each single litter species was also determined. Litterbags (20 × 20 cm, 2 mm opening) containing either a single species or the mixture were deployed on the flooded forest soil and sampled after 30, 240, 270, 300 and 330 days. There were differences in initial total N and P concentrations, with A. nitens (AN) showing the highest nutrient concentrations (%N_AN = 1.86 ± 0.19; %P_AN = 0.058 ± 0.008) and P. orinocoensis (PO) and A. latifolia (AL) the lowest (%N_PO = 0.92 ± 0.06; %N_AL = 1.04 ± 0.04; %P_PO = 0.029 ± 0.005; %P_AL = 0.032 ± 0.001). Litter from AN showed the greatest mass loss (55%) and fastest decomposition rate (k = 0.00185 ± 0.00028), while litter from AL and the mixture showed the smallest mass loss (24% and 27%, respectively) and the slowest decomposition rates (k_AL = 0.00078 ± 0.00012 and k_MIX = 0.00077 ± 0.00006). Decomposition rates were significantly and positively correlated with initial N (r = 0.556, p < 0.05) and P concentrations (r = 0.482, p < 0.05). Nevertheless, there were no significant differences between the expected decomposition rate and the observed decomposition rate of the mixture (additive response). To test the nature of the additivity, an enhancement factor (f) on the decomposition rate of each single species was calculated. The species with the highest and lowest values of f were AN and AL, respectively. The fact that two out of the three species had values significantly different from 1 suggests that the additivity detected in our mixture was a consequence of the counterbalancing of the positive and negative effects of each species on the decomposition of the litter mixture.
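
    The "expected decomposition rate" of the mixture under additivity can be computed by weighting the single-species exponential decays by their loading ratios and refitting a single exponential. The sketch below (Python with NumPy, assumed) shows that arithmetic; the k value used for P. orinocoensis is a made-up placeholder, since the abstract reports k only for A. nitens, A. latifolia and the mixture.

      import numpy as np

      def expected_mixture_k(p, ks, t_days):
          # Mass remaining of the mixture under additivity, refit as a single exponential.
          mass_mix = sum(pi * np.exp(-ki * t_days) for pi, ki in zip(p, ks))
          k_fit, _ = np.polyfit(t_days, -np.log(mass_mix), 1)   # slope of -ln(mass) vs time
          return k_fit

      t = np.linspace(30, 330, 11)        # sampling window of the study (days)
      p = [0.46, 0.38, 0.16]              # loading ratios: P. orinocoensis, A. latifolia, A. nitens
      ks = [0.0009, 0.00078, 0.00185]     # first value (k_PO) is a placeholder assumption
      print(expected_mixture_k(p, ks, t)) # compare against the observed k_MIX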

  1. Mechanistic and Kinetic Analysis of Na2SO4-Modified Laterite Decomposition by Thermogravimetry Coupled with Mass Spectrometry

    PubMed Central

    Yang, Song; Du, Wenguang; Shi, Pengzheng; Shangguan, Ju; Liu, Shoujun; Zhou, Changhai; Chen, Peng; Zhang, Qian; Fan, Huiling

    2016-01-01

    Nickel laterites cannot be effectively processed by physical methods because of their poor crystallinity and fine grain size. Na2SO4 is the most efficient additive for grade enrichment and Ni recovery. However, how Na2SO4 affects the selective reduction of laterite ores has not been clearly investigated. This study investigated the decomposition of laterite with and without the addition of Na2SO4 in an argon atmosphere using thermogravimetry coupled with mass spectrometry (TG-MS). Approximately 25 mg of sample with 20 wt% Na2SO4 was pyrolyzed under a 100 ml/min Ar flow at a heating rate of 10°C/min from room temperature to 1300°C. The kinetic study was based on derivative thermogravimetric (DTG) curves. The evolution of the pyrolysis gas composition was detected by mass spectrometry, and the decomposition products were analyzed by X-ray diffraction (XRD). The decomposition behavior of laterite with the addition of Na2SO4 was similar to that of pure laterite below 800°C during the first three stages. However, in the fourth stage, the dolomite decomposed at 897°C, approximately 200°C lower than in pure laterite. In the last stage, the laterite decomposed and emitted SO2 in the presence of Na2SO4, with an activation energy of 91.37 kJ/mol. The decomposition of laterite with and without the addition of Na2SO4 can be described by a single first-order reaction. Moreover, the use of Na2SO4 as the modification agent can reduce the activation energy of laterite decomposition; thus, the reaction rate can be accelerated, and the reaction temperature can be markedly reduced. PMID:27333072

  2. Interacting Microbe and Litter Quality Controls on Litter Decomposition: A Modeling Analysis

    PubMed Central

    Moorhead, Daryl; Lashermes, Gwenaëlle; Recous, Sylvie; Bertrand, Isabelle

    2014-01-01

    The decomposition of plant litter in soil is a dynamic process during which substrate chemistry and microbial controls interact. We more clearly quantify these controls with a revised version of the Guild-based Decomposition Model (GDM) in which we used a reverse Michaelis-Menten approach to simulate short-term (112 days) decomposition of roots from four genotypes of Zea mays that differed primarily in lignin chemistry. A co-metabolic relationship between the degradation of lignin and holocellulose (cellulose+hemicellulose) fractions of litter showed that the reduction in decay rate with increasing lignin concentration (LCI) was related to the level of arabinan substitutions in arabinoxylan chains (i.e., arabinan to xylan or A∶X ratio) and the extent to which hemicellulose chains are cross-linked with lignin in plant cell walls. This pattern was consistent between genotypes and during progressive decomposition within each genotype. Moreover, decay rates were controlled by these cross-linkages from the start of decomposition. We also discovered it necessary to divide the Van Soest soluble (labile) fraction of litter C into two pools: one that rapidly decomposed and a second that was more persistent. Simulated microbial production was consistent with recent studies suggesting that more rapidly decomposing materials can generate greater amounts of potentially recalcitrant microbial products despite the rapid loss of litter mass. Sensitivity analyses failed to identify any model parameter that consistently explained a large proportion of model variation, suggesting that feedback controls between litter quality and microbial activity in the reverse Michaelis-Menten approach resulted in stable model behavior. Model extrapolations to an independent set of data, derived from the decomposition of 12 different genotypes of maize roots, averaged within <3% of observed respiration rates and total CO2 efflux over 112 days. PMID:25264895

  3. The implications of microbial and substrate limitation for the fates of carbon in different organic soil horizon types of boreal forest ecosystems: a mechanistically based model analysis

    USGS Publications Warehouse

    He, Y.; Zhuang, Q.; Harden, Jennifer W.; McGuire, A. David; Fan, Z.; Liu, Y.; Wickland, Kimberly P.

    2014-01-01

    The large amount of soil carbon in boreal forest ecosystems has the potential to influence the climate system if released in large quantities in response to warming. Thus, there is a need to better understand and represent the environmental sensitivity of soil carbon decomposition. Most soil carbon decomposition models rely on empirical relationships omitting key biogeochemical mechanisms and their response to climate change is highly uncertain. In this study, we developed a multi-layer microbial explicit soil decomposition model framework for boreal forest ecosystems. A thorough sensitivity analysis was conducted to identify dominating biogeochemical processes and to highlight structural limitations. Our results indicate that substrate availability (limited by soil water diffusion and substrate quality) is likely to be a major constraint on soil decomposition in the fibrous horizon (40–60% of soil organic carbon (SOC) pool size variation), while energy limited microbial activity in the amorphous horizon exerts a predominant control on soil decomposition (>70% of SOC pool size variation). Elevated temperature alleviated the energy constraint of microbial activity most notably in amorphous soils, whereas moisture only exhibited a marginal effect on dissolved substrate supply and microbial activity. Our study highlights the different decomposition properties and underlying mechanisms of soil dynamics between fibrous and amorphous soil horizons. Soil decomposition models should consider explicitly representing different boreal soil horizons and soil–microbial interactions to better characterize biogeochemical processes in boreal forest ecosystems. A more comprehensive representation of critical biogeochemical mechanisms of soil moisture effects may be required to improve the performance of the soil model we analyzed in this study.

  4. Plants Regulate Soil Organic Matter Decomposition in Response to Sea Level Rise

    NASA Astrophysics Data System (ADS)

    Megonigal, P.; Mueller, P.; Jensen, K.

    2014-12-01

    Tidal wetlands have a large capacity for producing and storing organic matter, making their role in the global carbon budget disproportionate to their land area. Most of the organic matter stored in these systems is in soils where it contributes 2-5 times more to surface accretion than an equal mass of minerals. Soil organic matter (SOM) sequestration is the primary process by which tidal wetlands become perched high in the tidal frame, decreasing their vulnerability to accelerated sea level rise. Plant growth responses to sea level rise are well understood and represented in century-scale forecast models of soil surface elevation change. We understand far less about the response of soil organic matter decomposition to rapid sea level rise. Here we quantified the effects of sea level on SOM decomposition rates by exposing planted and unplanted tidal marsh monoliths to experimentally manipulated flood duration. The study was performed in a field-based mesocosm facility at the Smithsonian's Global Change Research Wetland. SOM decomposition rate was quantified as CO2 efflux, with plant- and SOM-derived CO2 separated with a two end-member δ13C-CO2 model. Despite the dogma that decomposition rates are inversely related to flooding, SOM mineralization was not sensitive to flood duration over a 35 cm range in soil surface elevation. However, decomposition rates were strongly and positively related to aboveground biomass (R2≥0.59, p≤0.01). We conclude that soil carbon loss through decomposition is driven by plant responses to sea level in this intensively studied tidal marsh. If this result applies more generally to tidal wetlands, it has important implications for modeling soil organic matter and surface elevation change in response to accelerated sea level rise.
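
    The plant- versus SOM-derived CO2 split mentioned above comes from a standard two end-member isotope mass balance. The sketch below (Python, assumed) states that arithmetic; the delta 13C end-member values are illustrative placeholders, not the study's measurements.

      def som_fraction(delta_total, delta_plant, delta_som):
          # delta_total = f*delta_som + (1 - f)*delta_plant, solved for the SOM fraction f.
          return (delta_total - delta_plant) / (delta_som - delta_plant)

      # hypothetical end members for illustration only
      print(som_fraction(delta_total=-22.0, delta_plant=-13.0, delta_som=-27.0))   # ~0.64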

  5. A Review of Radiolysis Concerns for Water Shielding in Fission Surface Power Applications

    NASA Technical Reports Server (NTRS)

    Schoenfeld, Michael P.

    2008-01-01

    This paper presents an overview of radiolysis concerns with regard to water shields for fission surface power. A review of the radiolysis process is presented and key parameters and trends are identified. From this understanding of the radiolytic decomposition of water, shield pressurization and corrosion are identified as the primary concerns. Existing experimental and modeling data addressing these concerns are summarized. It was found that radiolysis of pure water in a closed volume results in minimal, if any, net decomposition, and therefore reduces the potential for shield pressurization and corrosion. With the space program's focus shifting to emphasize a permanent return to the Moon and, eventually, manned exploration of Mars, there has been a renewed look at fission power to meet the difficult technical and design challenges associated with this effort. This is due to the ability of fission power to provide a power-rich environment that is insensitive to solar intensity and related aspects such as duration of night, dusty environments, and distance from the sun. One critical aspect in the utilization of fission power for these applications of manned exploration is shielding. Although not typically considered for space applications, water shields have been identified as one potential option due to benefits in mass savings and reduced development cost and technical risk (Poston, 2006). However, the water shield option requires demonstration of its ability to meet key technical challenges, including adequate natural circulation for thermal management and capability for operational periods up to 8 years. Thermal management concerns have begun to be addressed and are not expected to be a problem (Pearson, 2007). One significant concern remaining is the ability to maintain the shield integrity through its operational lifetime. Shield integrity could be compromised through shield pressurization and corrosion resulting from the radiolytic decomposition of water.

  6. Rapid Return of Nitrogen but not Phosphorus to Ecosystem Nutrition During Decomposition of Quagga Mussel Tissue in Sand, Mud, or Water During Oxic or Anoxic Incubation: Implications for Phytoplankton Bioenergetics.

    NASA Astrophysics Data System (ADS)

    Cooney, E. M.; Cuhel, R. L.; Aguilar, C.

    2016-02-01

    In 2003, Quagga mussels were found to have invaded Lake Michigan. Their presence has changed the structure of the lake both ecologically (benthification) and chemically (oligotrophication). They consume large amounts of phytoplankton, which decreases the particulate nitrogen and phosphorus nutrients available to other consumers, including zooplankton. As a result, fisheries productivity has decreased nearly 95%. Having recently reached the end of their first life cycle, in death the mussels release a portion of these nutrients back into the freshwater system during decomposition. This work determined the amounts of phosphorus and nitrogen recycled under several relevant sediment-water interface conditions: oxic versus anoxic, in water, mud, or sand, over a weeklong period. Concentrations of ammonium, soluble reactive phosphorus (SRP), and nitrate were used to analyze nutrient release as decomposition took place. In a short time, up to 25% of tissue N was released as ammonia, and under oxic conditions in mud or sand, nitrification converted some of the ammonia to nitrate. Unexpectedly, mussels decaying under anoxic conditions released ammonium much more slowly. A slower rate of ammonium release was observed for the intact body with the shell (burial) compared to ground mussel tissue (detritivory). Nitrate was removed in anoxic incubations, indicating anaerobic denitrification. Phosphate release was initially higher under anoxic conditions than under aerobic decay. There was no significant difference in the amount or rate of release of SRP between ground mussel tissue and the whole body with the shell. The anoxic treatment showed similar patterns of release for both ground mussel tissue and the intact body with shell. Most importantly, phosphate was subsequently removed in all treatments and diffusible nutrient was minimal (<100 nM). The results link to nutrient assimilation patterns of deep phytoplankton communities, which can replace nitrate with ammonium as an N source.

  7. First-principles and thermodynamic analysis of trimethylgallium (TMG) decomposition during MOVPE growth of GaN

    NASA Astrophysics Data System (ADS)

    Sekiguchi, K.; Shirakawa, H.; Yamamoto, Y.; Araidai, M.; Kangawa, Y.; Kakimoto, K.; Shiraishi, K.

    2017-06-01

    We analyzed the decomposition mechanisms of trimethylgallium (TMG) used for the gallium source of GaN fabrication based on first-principles calculations and thermodynamic analysis. We considered two conditions. One condition is under the total pressure of 1 atm and the other one is under metal organic vapor phase epitaxy (MOVPE) growth of GaN. Our calculated results show that H2 is indispensable for TMG decomposition under both conditions. In GaN MOVPE, TMG with H2 spontaneously decomposes into Ga(CH3) and Ga(CH3) decomposes into Ga atom gas when temperature is higher than 440 K. From these calculations, we confirmed that TMG surely becomes Ga atom gas near the GaN substrate surfaces.

  8. Dual energy computed tomography for the head.

    PubMed

    Naruto, Norihito; Itoh, Toshihide; Noguchi, Kyo

    2018-02-01

    Dual energy CT (DECT) is a promising technology that provides better diagnostic accuracy in several brain diseases. DECT can generate various types of CT images from a single acquisition data set at high kV and low kV based on material decomposition algorithms. The two-material decomposition algorithm can separate bone/calcification from iodine accurately. The three-material decomposition algorithm can generate a virtual non-contrast image, which helps to identify conditions such as brain hemorrhage. A virtual monochromatic image has the potential to eliminate metal artifacts by reducing beam-hardening effects. DECT also enables exploration of advanced imaging to make diagnosis easier. One such novel application of DECT is the X-Map, which helps to visualize ischemic stroke in the brain without using iodine contrast medium.

  9. How long the singular value decomposed entropy predicts the stock market? - Evidence from the Dow Jones Industrial Average Index

    NASA Astrophysics Data System (ADS)

    Gu, Rongbao; Shao, Yanmin

    2016-07-01

    In this paper, a new concept of multi-scale singular value decomposition entropy based on DCCA cross-correlation analysis is proposed, and its predictive power for the Dow Jones Industrial Average Index is studied. Using Granger causality analysis with different time scales, it is found that the singular value decomposition entropy has predictive power for the Dow Jones Industrial Average Index for periods of less than one month, but not for periods of more than one month. This shows how long the singular value decomposition entropy predicts the stock market, extending the result obtained in Caraiani (2014). On the other hand, the result also reveals an essential characteristic of the stock market as a chaotic dynamic system.
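
    The singular value decomposition entropy itself is straightforward to compute; the sketch below (Python with NumPy, assumed) embeds a return series in an m-lag trajectory matrix, normalises its singular values and takes the Shannon entropy. The multi-scale, DCCA-based construction and the Granger causality tests of the paper are not reproduced here.

      import numpy as np

      def svd_entropy(series, m=10):
          # Shannon entropy of the normalised singular values of the m-lag trajectory matrix.
          traj = np.lib.stride_tricks.sliding_window_view(np.asarray(series, float), m)
          s = np.linalg.svd(traj, compute_uv=False)
          p = s / s.sum()
          p = p[p > 0]
          return -np.sum(p * np.log(p))

      rng = np.random.default_rng(0)
      returns = rng.normal(0.0, 0.01, 1000)        # placeholder for daily index returns
      print(svd_entropy(returns, m=10))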

  10. Task decomposition for a multilimbed robot to work in reachable but unorientable space

    NASA Technical Reports Server (NTRS)

    Su, Chau; Zheng, Yuan F.

    1991-01-01

    Robot manipulators installed on legged mobile platforms are suggested for enlarging robot workspace. To plan the motion of such a system, the arm-platform motion coordination problem is raised, and a task decomposition is proposed to solve the problem. A given task described by the destination position and orientation of the end effector is decomposed into subtasks for arm manipulation and for platform configuration, respectively. The former is defined as the end-effector position and orientation with respect to the platform, and the latter as the platform position and orientation in the base coordinates. Three approaches are proposed for the task decomposition. The approaches are also evaluated in terms of the displacements, from which an optimal approach can be selected.

  11. Investigation of induced unimolecular decomposition for development of visible chemical lasers. Quarterly progress report, 1 August 1976--30 October 1976

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Piper, L G; Taylor, R L

    This report summarizes progress during the second quarterly period of the subject contract. The methods available for the production of excited electronic states following azide decomposition are summarized. It is concluded that an experiment designed to study the kinetics of, and branching ratios for, electronically excited products from azide radical reactions will be most productive in elucidating excitation mechanisms for potential chemical lasers. A flow reactor is described in which these studies may be undertaken. The major feature of this apparatus is a clean azide radical source based upon the thermal decomposition of solid, ionic azides. The construction of the experimental apparatus has been started.

  12. Combining DCQGMP-Based Sparse Decomposition and MPDR Beamformer for Multi-Type Interferences Mitigation for GNSS Receivers.

    PubMed

    Guo, Qiang; Qi, Liangang

    2017-04-10

    When multiple types of interfering signals coexist, the performance of interference suppression methods based on the time and frequency domains degrades seriously, and techniques using an antenna array require a sufficiently large array and entail high hardware costs. To better combat multi-type interferences for GNSS receivers, this paper proposes a cascaded multi-type interference mitigation method combining improved double chain quantum genetic matching pursuit (DCQGMP)-based sparse decomposition and an MPDR beamformer. The key idea behind the proposed method is that the multiple types of interfering signals can be excised by taking advantage of their sparse features in different domains. In the first stage, the single-tone (multi-tone) and linear chirp interfering signals are canceled by sparse decomposition according to their sparsity in the over-complete dictionary. In order to improve the timeliness of matching pursuit (MP)-based sparse decomposition, a DCQGMP is introduced by combining an improved double chain quantum genetic algorithm (DCQGA) and the MP algorithm, and the DCQGMP algorithm is extended to handle multi-channel signals according to the correlation among the signals in different channels. In the second stage, the minimum power distortionless response (MPDR) beamformer is utilized to nullify the residual interferences (e.g., wideband Gaussian noise interferences). Several simulation results show that the proposed method can not only improve the interference mitigation degree of freedom (DoF) of the array antenna, but also effectively deal with interference arriving from the same direction as the GNSS signal, provided it can be sparsely represented in the over-complete dictionary. Moreover, it does not introduce serious distortions into the navigation signal.
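
    The second stage above is a conventional adaptive beamformer, for which the closed form is w = R^-1 a / (a^H R^-1 a). The sketch below (Python with NumPy, assumed) applies it to a toy uniform linear array with one strong interferer; the sparse-decomposition first stage and the GNSS signal model are not reproduced here.

      import numpy as np

      def mpdr_weights(R, a):
          # Minimum power distortionless response: w = R^-1 a / (a^H R^-1 a).
          Ri_a = np.linalg.solve(R, a)
          return Ri_a / (a.conj() @ Ri_a)

      def ula_steering(n_elems, theta_deg, spacing=0.5):
          # Steering vector of a uniform linear array with half-wavelength spacing.
          n = np.arange(n_elems)
          return np.exp(1j * 2 * np.pi * spacing * n * np.sin(np.deg2rad(theta_deg)))

      # toy scenario: weak desired signal at 0 degrees, strong interferer at 40 degrees
      n_elems, snapshots = 8, 2000
      rng = np.random.default_rng(2)
      a_sig, a_int = ula_steering(n_elems, 0.0), ula_steering(n_elems, 40.0)
      x = (0.1 * a_sig[:, None] * rng.normal(size=snapshots)
           + 10.0 * a_int[:, None] * rng.normal(size=snapshots)
           + (rng.normal(size=(n_elems, snapshots))
              + 1j * rng.normal(size=(n_elems, snapshots))) / np.sqrt(2))
      R = x @ x.conj().T / snapshots
      w = mpdr_weights(R, a_sig)
      print(abs(w.conj() @ a_sig), abs(w.conj() @ a_int))   # ~1 toward the signal, near 0 toward the interferer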

  14. Gas Sensing Analysis of Ag-Decorated Graphene for Sulfur Hexafluoride Decomposition Products Based on the Density Functional Theory

    PubMed Central

    Zhang, Xiaoxing; Huang, Rong; Gui, Yingang; Zeng, Hong

    2016-01-01

    Detection of decomposition products of sulfur hexafluoride (SF6) is one of the best ways to diagnose early latent insulation faults in gas-insulated equipment, and the occurrence of sudden accidents can be avoided effectively by finding early latent faults. Recently, functionalized graphene, a kind of gas sensing material, has been reported to show good application prospects in the gas sensor field. Therefore, calculations were performed to analyze the gas sensing properties of intrinsic graphene (Int-graphene) and a functionalized graphene-based material, Ag-decorated graphene (Ag-graphene), for decomposition products of SF6, including SO2F2, SOF2, and SO2, based on density functional theory (DFT). We thoroughly investigated a series of parameters characterizing the gas-sensing properties of the adsorption of single gas molecules (SO2F2, SOF2, SO2) and double gas molecules (2SO2F2, 2SOF2, 2SO2) on Ag-graphene, including adsorption energy, net charge transfer, electronic density of states, and the highest occupied and lowest unoccupied molecular orbitals. The results showed that the Ag atom significantly enhances the electrochemical reactivity of graphene, reflected in the change of conductivity during the adsorption process. SO2F2 and SO2 gas molecules on Ag-graphene presented chemisorption, and the adsorption strength was SO2F2 > SO2, while SOF2 adsorption on Ag-graphene was physical adsorption. Thus, we concluded that Ag-graphene showed good selectivity and high sensitivity to SO2F2. The results can provide a helpful guide in exploring Ag-graphene material in experiments for monitoring the insulation status of SF6-insulated equipment based on detecting decomposition products of SF6. PMID:27809269

  15. Performance impact of stop lists and morphological decomposition on word-word corpus-based semantic space models.

    PubMed

    Keith, Jeff; Westbury, Chris; Goldman, James

    2015-09-01

    Corpus-based semantic space models, which primarily rely on lexical co-occurrence statistics, have proven effective in modeling and predicting human behavior in a number of experimental paradigms that explore semantic memory representation. The most widely studied extant models, however, are strongly influenced by orthographic word frequency (e.g., Shaoul & Westbury, Behavior Research Methods, 38, 190-195, 2006). This has the implication that high-frequency closed-class words can potentially bias co-occurrence statistics. Because these closed-class words are purported to carry primarily syntactic, rather than semantic, information, the performance of corpus-based semantic space models may be improved by excluding closed-class words (using stop lists) from co-occurrence statistics, while retaining their syntactic information through other means (e.g., part-of-speech tagging and/or affixes from inflected word forms). Additionally, very little work has been done to explore the effect of employing morphological decomposition on the inflected forms of words in corpora prior to compiling co-occurrence statistics, despite (controversial) evidence that humans perform early morphological decomposition in semantic processing. In this study, we explored the impact of these factors on corpus-based semantic space models. From this study, morphological decomposition appears to significantly improve performance in word-word co-occurrence semantic space models, providing some support for the claim that sublexical information-specifically, word morphology-plays a role in lexical semantic processing. An overall decrease in performance was observed in models employing stop lists (e.g., excluding closed-class words). Furthermore, we found some evidence that weakens the claim that closed-class words supply primarily syntactic information in word-word co-occurrence semantic space models.
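
    As a toy illustration of the two manipulations studied above, the sketch below (Python, assumed) builds word-word co-occurrence counts with an optional stop list and a crude suffix-stripping stand-in for morphological decomposition. Both the stop list and the stripper are simplistic placeholders, not the corpora, taggers or models used in the paper.

      from collections import Counter, defaultdict

      STOP = {"the", "a", "of", "and", "to", "in", "is", "was"}   # tiny illustrative stop list

      def strip_suffix(word):
          # crude stand-in for morphological decomposition: strip a few inflectional suffixes
          for suf in ("ing", "ed", "es", "s"):
              if word.endswith(suf) and len(word) > len(suf) + 2:
                  return word[: -len(suf)]
          return word

      def cooccurrence(tokens, window=2, use_stoplist=False, decompose=False):
          if decompose:
              tokens = [strip_suffix(t) for t in tokens]
          if use_stoplist:
              tokens = [t for t in tokens if t not in STOP]
          counts = defaultdict(Counter)
          for i, w in enumerate(tokens):
              for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
                  if j != i:
                      counts[w][tokens[j]] += 1
          return counts

      text = "the cats chased the dogs and the dogs chased the cats".split()
      print(cooccurrence(text, use_stoplist=True, decompose=True)["cat"])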

  16. Basis material decomposition method for material discrimination with a new spectrometric X-ray imaging detector

    NASA Astrophysics Data System (ADS)

    Brambilla, A.; Gorecki, A.; Potop, A.; Paulus, C.; Verger, L.

    2017-08-01

    Energy sensitive photon counting X-ray detectors provide energy dependent information which can be exploited for material identification. The attenuation of an X-ray beam as a function of energy depends on the effective atomic number Zeff and the density. However, the measured attenuation is degraded by the imperfections of the detector response such as charge sharing or pile-up. These imperfections lead to non-linearities that limit the benefits of energy resolved imaging. This work aims to implement a basis material decomposition method which overcomes these problems. Basis material decomposition is based on the fact that the attenuation of any material or complex object can be accurately reproduced by a combination of equivalent thicknesses of basis materials. Our method is based on a calibration phase to learn the response of the detector for different combinations of thicknesses of the basis materials. The decomposition algorithm finds the thicknesses of basis material whose spectrum is closest to the measurement, using a maximum likelihood criterion assuming a Poisson law distribution of photon counts for each energy bin. The method was used with a ME100 linear array spectrometric X-ray imager to decompose different plastic materials on a Polyethylene and Polyvinyl Chloride base. The resulting equivalent thicknesses were used to estimate the effective atomic number Zeff. The results are in good agreement with the theoretical Zeff, regardless of the plastic sample thickness. The linear behaviour of the equivalent lengths makes it possible to process overlapped materials. Moreover, the method was tested with a 3 materials base by adding gadolinium, whose K-edge is not taken into account by the other two materials. The proposed method has the advantage that it can be used with any number of energy channels, taking full advantage of the high energy resolution of the ME100 detector. Although in principle two channels are sufficient, experimental measurements show that the use of a high number of channels significantly improves the accuracy of decomposition by reducing noise and systematic bias.
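
    The maximum-likelihood step described above can be written down compactly under a simplified forward model. The sketch below (Python with NumPy and SciPy, assumed) treats the expected counts in each energy bin as Beer-Lambert attenuation through two basis materials and minimises the Poisson negative log-likelihood over the two thicknesses; the attenuation values and blank-scan counts are placeholders, whereas the paper learns the detector response from calibration measurements instead.

      import numpy as np
      from scipy.optimize import minimize

      mu = np.array([[0.021, 0.019, 0.018, 0.017],     # polyethylene, cm^-1 per bin (assumed)
                     [0.090, 0.060, 0.045, 0.035]])    # PVC, cm^-1 per bin (assumed)
      blank = np.array([2.0e4, 3.0e4, 2.5e4, 1.5e4])   # counts with no object (assumed)

      def expected_counts(thicknesses):
          return blank * np.exp(-thicknesses @ mu)     # Beer-Lambert per energy bin

      def neg_log_likelihood(thicknesses, measured):
          lam = expected_counts(thicknesses)
          return np.sum(lam - measured * np.log(lam))  # Poisson NLL up to a constant

      rng = np.random.default_rng(3)
      true_t = np.array([4.0, 1.5])                    # cm of polyethylene and PVC
      measured = rng.poisson(expected_counts(true_t))
      res = minimize(neg_log_likelihood, x0=np.array([1.0, 1.0]), args=(measured,),
                     bounds=[(0.0, None), (0.0, None)])
      print(res.x)                                     # approximately [4.0, 1.5]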

  17. Effects of calibration methods on quantitative material decomposition in photon-counting spectral computed tomography using a maximum a posteriori estimator.

    PubMed

    Curtis, Tyler E; Roeder, Ryan K

    2017-10-01

    Advances in photon-counting detectors have enabled quantitative material decomposition using multi-energy or spectral computed tomography (CT). Supervised methods for material decomposition utilize an estimated attenuation for each material of interest at each photon energy level, which must be calibrated based upon calculated or measured values for known compositions. Measurements using a calibration phantom can advantageously account for system-specific noise, but the effect of calibration methods on the material basis matrix and subsequent quantitative material decomposition has not been experimentally investigated. Therefore, the objective of this study was to investigate the influence of the range and number of contrast agent concentrations within a modular calibration phantom on the accuracy of quantitative material decomposition in the image domain. Gadolinium was chosen as a model contrast agent in imaging phantoms, which also contained bone tissue and water as negative controls. The maximum gadolinium concentration (30, 60, and 90 mM) and total number of concentrations (2, 4, and 7) were independently varied to systematically investigate effects of the material basis matrix and scaling factor calibration on the quantitative (root mean squared error, RMSE) and spatial (sensitivity and specificity) accuracy of material decomposition. Images of calibration and sample phantoms were acquired using a commercially available photon-counting spectral micro-CT system with five energy bins selected to normalize photon counts and leverage the contrast agent k-edge. Material decomposition of gadolinium, calcium, and water was performed for each calibration method using a maximum a posteriori estimator. Both the quantitative and spatial accuracy of material decomposition were most improved by using an increased maximum gadolinium concentration (range) in the basis matrix calibration; the effects of using a greater number of concentrations were relatively small in magnitude by comparison. The material basis matrix calibration was more sensitive to changes in the calibration methods than the scaling factor calibration. The material basis matrix calibration significantly influenced both the quantitative and spatial accuracy of material decomposition, while the scaling factor calibration influenced quantitative but not spatial accuracy. Importantly, the median RMSE of material decomposition was as low as ~1.5 mM (~0.24 mg/mL gadolinium), which was similar in magnitude to that measured by optical spectroscopy on the same samples. The accuracy of quantitative material decomposition in photon-counting spectral CT was significantly influenced by calibration methods which must therefore be carefully considered for the intended diagnostic imaging application. © 2017 American Association of Physicists in Medicine.
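
    A minimal sketch of how a basis matrix might be calibrated from known gadolinium concentrations and then applied voxel by voxel; the concentrations, attenuation values, and the non-negative least-squares step are illustrative stand-ins for the study's maximum a posteriori estimator.

```python
import numpy as np
from scipy.optimize import nnls

# Calibration phantom: mean attenuation per energy bin at known Gd concentrations (mM)
cal_conc = np.array([0.0, 30.0, 60.0, 90.0])                  # maximum concentration 90 mM (assumed)
cal_atten = np.array([[0.20, 0.18, 0.15, 0.14, 0.13],
                      [0.35, 0.40, 0.33, 0.28, 0.25],
                      [0.50, 0.62, 0.51, 0.42, 0.37],
                      [0.65, 0.84, 0.69, 0.56, 0.49]])        # rows: concentrations, cols: 5 energy bins

# Least-squares calibration of the basis matrix: a background row and a per-mM gadolinium row
design = np.vstack([np.ones_like(cal_conc), cal_conc]).T      # (4 x 2)
basis, *_ = np.linalg.lstsq(design, cal_atten, rcond=None)    # (2 x 5)

def decompose_voxel(measured_atten):
    # coefficients: [background weight, gadolinium concentration in mM]
    coeffs, _ = nnls(basis.T, measured_atten)
    return coeffs

sample = cal_atten[2] + np.random.default_rng(1).normal(0.0, 0.005, 5)  # voxel near 60 mM
print("estimated Gd concentration (mM): %.1f" % decompose_voxel(sample)[1])
```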

  18. Decomposition of toluene in a steady-state atmospheric-pressure glow discharge

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trushkin, A. N.; Grushin, M. E.; Kochetov, I. V.

    Results are presented from experimental studies of decomposition of toluene (C6H5CH3) in a polluted air flow by means of a steady-state atmospheric pressure glow discharge at different water vapor contents in the working gas. The experimental results on the degree of C6H5CH3 removal are compared with the results of computer simulations conducted in the framework of the developed kinetic model of plasma chemical decomposition of toluene in the N2:O2:H2O gas mixture. A substantial influence of the gas flow humidity on toluene decomposition in the atmospheric pressure glow discharge is demonstrated. The main mechanisms of the influence of humidity on C6H5CH3 decomposition are determined. The existence of two stages in the process of toluene removal, which differ in their duration and the intensity of plasma chemical decomposition of C6H5CH3, is established. Based on the results of computer simulations, the composition of the products of plasma chemical reactions at the output of the reactor is analyzed as a function of the specific energy deposition and gas flow humidity. The existence of a catalytic cycle, in which the hydroxyl radical OH acts as a catalyst and which substantially accelerates the recombination of oxygen atoms and suppression of ozone generation when the plasma-forming gas contains water vapor, is established.

  19. Pointwise Partial Information Decomposition Using the Specificity and Ambiguity Lattices

    NASA Astrophysics Data System (ADS)

    Finn, Conor; Lizier, Joseph

    2018-04-01

    What are the distinct ways in which a set of predictor variables can provide information about a target variable? When does a variable provide unique information, when do variables share redundant information, and when do variables combine synergistically to provide complementary information? The redundancy lattice from the partial information decomposition of Williams and Beer provided a promising glimpse at the answer to these questions. However, this structure was constructed using a much criticised measure of redundant information, and despite sustained research, no completely satisfactory replacement measure has been proposed. In this paper, we take a different approach, applying the axiomatic derivation of the redundancy lattice to a single realisation from a set of discrete variables. To overcome the difficulty associated with signed pointwise mutual information, we apply this decomposition separately to the unsigned entropic components of pointwise mutual information which we refer to as the specificity and ambiguity. This yields a separate redundancy lattice for each component. Then based upon an operational interpretation of redundancy, we define measures of redundant specificity and ambiguity enabling us to evaluate the partial information atoms in each lattice. These atoms can be recombined to yield the sought-after multivariate information decomposition. We apply this framework to canonical examples from the literature and discuss the results and the various properties of the decomposition. In particular, the pointwise decomposition using specificity and ambiguity satisfies a chain rule over target variables, which provides new insights into the so-called two-bit-copy example.
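
    A minimal sketch of the split of pointwise mutual information into its two unsigned entropic parts, the specificity -log p(s) and the ambiguity -log p(s|t), so that i(s;t) is their difference; the small joint distribution below is an illustrative assumption.

```python
import numpy as np
from collections import Counter

# joint samples of (source s, target t)
samples = [(0, 0), (0, 0), (1, 1), (1, 1), (0, 0), (1, 1), (1, 0)]
n = len(samples)
p_joint = Counter(samples)
p_s = Counter(s for s, _ in samples)
p_t = Counter(t for _, t in samples)

for (s, t), count in sorted(p_joint.items()):
    specificity = -np.log2(p_s[s] / n)                # h(s), always non-negative
    ambiguity = -np.log2((count / n) / (p_t[t] / n))  # h(s|t), always non-negative
    pmi = specificity - ambiguity                     # i(s;t), may be negative
    print(f"s={s} t={t}: specificity={specificity:.3f} ambiguity={ambiguity:.3f} i={pmi:.3f}")
```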

  20. Plastic waste sacks alter the rate of decomposition of dismembered bodies within.

    PubMed

    Scholl, Kassra; Moffatt, Colin

    2017-07-01

    As a result of criminal activity, human bodies are sometimes dismembered and concealed within sealed, plastic waste sacks. Consequently, due to the inhibited ingress of insects and dismemberment, the rate of decomposition of the body parts within may be different to that of whole, exposed bodies. Correspondingly, once found, an estimation of the postmortem interval may be affected and lead to erroneous inferences. This study set out to determine whether insects were excluded and how rate of decomposition was affected inside such plastic sacks. The limbs, torsos and heads of 24 dismembered pigs were sealed using nylon cable ties within plastic garbage sacks, half of which were of a type claimed to repel insects. Using a body scoring scale to quantify decomposition, the body parts in the sacks were compared to those of ten exposed, whole pig carcasses. Insects were found to have entered both types of plastic sack. There was no difference in rate of decomposition in the two types of sack (F(1,65) = 1.78, p = 0.19), but this was considerably slower than those of whole carcasses (F(1,408) = 1453, p < 0.001), with heads showing the largest differences. As well as a slower decomposition, sacks resulted in formation of some adipocere tissue as a result of high humidity within. Based upon existing methods, postmortem intervals for body parts within sealed sacks would be significantly underestimated.

  1. Metagenomic analysis of antibiotic resistance genes (ARGs) during refuse decomposition.

    PubMed

    Liu, Xi; Yang, Shu; Wang, Yangqing; Zhao, He-Ping; Song, Liyan

    2018-04-12

    Landfills are important reservoirs of residual antibiotics and antibiotic resistance genes (ARGs), but the mechanisms by which landfilling influences antibiotic resistance remain unclear. Although refuse decomposition plays a crucial role in landfill stabilization, its impact on antibiotic resistance has not been well characterized. To better understand the impact, we studied the dynamics of ARGs and the bacterial community composition during refuse decomposition in a bench-scale bioreactor after long-term operation (265 d) based on metagenomic analysis. The total abundances of ARGs increased from 431.0 ppm in the initial aerobic phase (AP) to 643.9 ppm in the later methanogenic phase (MP) during refuse decomposition, suggesting that the use of landfills for municipal solid waste (MSW) treatment may elevate the level of ARGs. A shift from drug-specific (bacitracin, tetracycline and sulfonamide) resistance to multidrug resistance was observed during refuse decomposition and was driven by a shift of potential bacterial hosts. The elevated abundance of Pseudomonas mainly contributed to the increasing abundance of multidrug ARGs (mexF and mexW). Accordingly, the percentage of ARGs encoding an efflux pump increased during refuse decomposition, suggesting that potential bacterial hosts developed this mechanism to adapt to the carbon and energy shortage when biodegradable substances were depleted. Overall, our findings indicate that the use of landfills for MSW treatment increased antibiotic resistance, and demonstrate the need for a comprehensive investigation of antibiotic resistance in landfills. Copyright © 2018. Published by Elsevier B.V.

  2. Microbial Signatures of Cadaver Gravesoil During Decomposition.

    PubMed

    Finley, Sheree J; Pechal, Jennifer L; Benbow, M Eric; Robertson, B K; Javan, Gulnaz T

    2016-04-01

    Genomic studies have estimated there are approximately 10^3-10^6 bacterial species per gram of soil. The microbial species found in soil associated with decomposing human remains (gravesoil) have been investigated and recognized as potential molecular determinants for estimates of time since death. The nascent era of high-throughput amplicon sequencing of the conserved 16S ribosomal RNA (rRNA) gene region of gravesoil microbes is allowing research to expand beyond more subjective empirical methods used in forensic microbiology. The goal of the present study was to evaluate microbial communities and identify taxonomic signatures associated with gravesoil from human cadavers. Using 16S rRNA gene amplicon-based sequencing, soil microbial communities were surveyed from 18 cadavers placed on the surface or buried that were allowed to decompose over a range of decomposition time periods (3-303 days). Surface soil microbial communities showed a decreasing trend in taxon richness, diversity, and evenness over decomposition, while buried cadaver-soil microbial communities demonstrated increasing taxon richness, consistent diversity, and decreasing evenness. The results show that the ubiquitous Proteobacteria was the most abundant phylum in all gravesoil samples. Surface cadaver-soil communities demonstrated a decrease in Acidobacteria and an increase in Firmicutes relative abundance over decomposition, while buried soil communities were consistent in their community composition throughout decomposition. Better understanding of microbial community structure and its shifts over time may be important for advancing general knowledge of decomposition soil ecology and its potential use during forensic investigations.

  3. Human decomposition and the reliability of a 'Universal' model for post mortem interval estimations.

    PubMed

    Cockle, Diane L; Bell, Lynne S

    2015-08-01

    Human decomposition is a complex biological process driven by an array of variables which are not clearly understood. The medico-legal community has long been searching for a reliable method to establish the post-mortem interval (PMI) for those whose deaths have either been hidden or gone unnoticed. To date, attempts to develop a PMI estimation method based on the state of the body either at the scene or at autopsy have been unsuccessful. One recent study has proposed that two simple formulae, based on the level of decomposition, humidity and temperature, could be used to accurately calculate the PMI for bodies outside, on or under the surface worldwide. This study attempted to validate 'Formula I' [1] (for bodies on the surface) using 42 Canadian cases with known PMIs. The results indicated that, for bodies exposed to warm temperatures, Formula I estimations consistently overestimated the known PMI by a large and inconsistent margin. For bodies exposed to cold and freezing temperatures (less than 4°C), the PMI was dramatically underestimated. The ability of 'Formula II' to estimate the PMI for buried bodies was also examined using a set of 22 known Canadian burial cases. As the cases used in this study are retrospective, some of the data needed for Formula II were not available. The value of 4.6 used in Formula II to represent the standard factor by which burial decelerates the rate of decomposition was examined. The average time taken to achieve each stage of decomposition both on and under the surface was compared for the 118 known cases. It was found that the rate of decomposition was not consistent throughout all stages of decomposition. The rates of autolysis above and below the ground were equivalent, with the buried cases remaining in a state of putrefaction for a prolonged period of time. It is suggested that differences in temperature extremes and humidity levels between geographic regions may make it impractical to apply formulas developed in one region to any other region. These results also suggest that there are other variables, apart from temperature and humidity, that may impact the rate of human decomposition. These variables, or complex of variables, are considered regionally specific. Neither of the Universal Formulae performed well, and our results do not support the proposition of universality for PMI estimation. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  4. Accounting- versus economic-based rates of return: implications for profitability measures in the pharmaceutical industry.

    PubMed

    Skrepnek, Grant H

    2004-01-01

    Accounting-based profits have indicated that pharmaceutical firms have achieved greater returns relative to other sectors. However, partially due to the theoretically inappropriate reporting of research and development (R&D) expenditures according to generally accepted accounting principles, evidence suggests that a substantial and upward bias is present in accounting-based rates of return for corporations with high levels of intangible assets. Given the intensity of R&D in pharmaceutical firms, accounting-based profit metrics in the drug sector may be affected to a greater extent than other industries. The aim of this work was to address measurement issues associated with corporate performance and factors that contribute to the bias within accounting-based rates of return. Seminal and broadly cited works on the subject of accounting- versus economic-based rates of return were reviewed from the economic and finance literature, with an emphasis placed on issues and scientific evidence directly related to the drug development process and pharmaceutical industry. With international convergence and harmonization of accounting standards being imminent, stricter adherence to theoretically sound economic principles is advocated, particularly those based on discounted cash-flow methods. Researchers, financial analysts, and policy makers must be cognizant of the biases and limitations present within numerous corporate performance measures. Furthermore, the development of more robust and valid economic models of the pharmaceutical industry is required to capture the unique dimensions of risk and return of the drug development process. Empiric work has illustrated that estimates of economic-based rates of return range from approximately 2 to approximately 11 percentage points below various accounting-based rates of return for drug companies. Because differences in the nature of risk and uncertainty borne by drug manufacturers versus other sectors make comparative assessments of performance challenging and often inappropriate, continued work in this area is warranted.
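
    As a hedged illustration of the accounting- versus economic-return distinction, the sketch below contrasts a discounted-cash-flow (internal) rate of return with a naive accounting-style ratio on a stylized drug-development cash flow; all figures and the accounting proxy are illustrative assumptions, not data from the review.

```python
import numpy as np
from scipy.optimize import brentq

# Years 0-9: heavy R&D outlays followed by sales cash inflows (in $M, assumed)
cash_flows = np.array([-100, -80, -60, -40, 20, 60, 90, 110, 120, 100], dtype=float)

def npv(rate, cf=cash_flows):
    years = np.arange(len(cf))
    return np.sum(cf / (1.0 + rate) ** years)

# Economic rate of return: the discount rate at which the NPV is zero (IRR)
irr = brentq(npv, 1e-6, 1.0)

# Naive accounting-style return: average profit over average outlay,
# with R&D expensed immediately rather than capitalized (a crude proxy)
profits = cash_flows[cash_flows > 0]
outlays = -cash_flows[cash_flows < 0]
accounting_return = profits.mean() / outlays.mean()

print(f"economic (IRR): {irr:.1%}, naive accounting-style return: {accounting_return:.1%}")
```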

  5. Validation of Distributed Soil Moisture: Airborne Polarimetric SAR vs. Ground-based Sensor Networks

    NASA Astrophysics Data System (ADS)

    Jagdhuber, T.; Kohling, M.; Hajnsek, I.; Montzka, C.; Papathanassiou, K. P.

    2012-04-01

    The knowledge of spatially distributed soil moisture is highly desirable for an enhanced hydrological modeling in terms of flood prevention and for yield optimization in combination with precision farming. Especially in mid-latitudes, the growing agricultural vegetation results in an increasing soil coverage along the crop cycle. For a remote sensing approach, this vegetation influence has to be separated from the soil contribution within the resolution cell to extract the actual soil moisture. Therefore a hybrid decomposition was developed for estimation of soil moisture under vegetation cover using fully polarimetric SAR data. The novel polarimetric decomposition combines a model-based decomposition, separating the volume component from the ground components, with an eigen-based decomposition of the two ground components into a surface and a dihedral scattering contribution. Hence, this hybrid decomposition, which is based on [1,2], establishes an innovative way to retrieve soil moisture under vegetation. The developed inversion algorithm for soil moisture under vegetation cover is applied on fully polarimetric data of the TERENO campaign, conducted in May and June 2011 for the Rur catchment within the Eifel/Lower Rhine Valley Observatory. The fully polarimetric SAR data were acquired in high spatial resolution (range: 1.92m, azimuth: 0.6m) by DLR's novel F-SAR sensor at L-band. The inverted soil moisture product from the airborne SAR data is validated with corresponding distributed ground measurements for a quality assessment of the developed algorithm. The in situ measurements were obtained on the one hand by mobile FDR probes from agricultural fields near the towns of Merzenhausen and Selhausen incorporating different crop types and on the other hand by distributed wireless sensor networks (SoilNet clusters) from a grassland test site (near the town of Rollesbroich) and from a forest stand (within the Wüstebach sub-catchment). Each SoilNet cluster incorporates around 150 wireless measuring devices on a grid of approximately 30ha for distributed soil moisture sensing. Finally, the comparison of both distributed soil moisture products results in a discussion on potentials and limitations for obtaining soil moisture under vegetation cover with high resolution fully polarimetric SAR. [1] S.R. Cloude, Polarisation: applications in remote sensing. Oxford, Oxford University Press, 2010. [2] Jagdhuber, T., Hajnsek, I., Papathanassiou, K.P. and Bronstert, A.: A Hybrid Decomposition for Soil Moisture Estimation under Vegetation Cover Using Polarimetric SAR. Proc. of the 5th International Workshop on Science and Applications of SAR Polarimetry and Polarimetric Interferometry, ESA-ESRIN, Frascati, Italy, January 24-28, 2011, p.1-6.

  6. Identification of large geomorphological anomalies based on 2D discrete wavelet transform

    NASA Astrophysics Data System (ADS)

    Doglioni, A.; Simeone, V.

    2012-04-01

    The identification and analysis, based on quantitative evidence, of large geomorphological anomalies is an important stage in the study of large landslides. Numerical geomorphic analyses represent an interesting approach to this kind of study, allowing for a detailed and fairly accurate identification of hidden topographic anomalies that may be related to large landslides. Here a geomorphic numerical analysis of the Digital Terrain Model (DTM) is presented. The introduced approach is based on the 2D discrete wavelet transform (Antoine et al., 2003; Bruun and Nilsen, 2003; Booth et al., 2009). The 2D wavelet decomposition of the DTM, and in particular the analysis of the detail coefficients of the wavelet transform, can provide evidence of anomalies or singularities, i.e. discontinuities of the land surface. These discontinuities are not very evident from the DTM as it is, while the 2D wavelet transform allows for grid-based analysis of the DTM and for mapping the decomposition. In fact, the grid-based DTM can be treated as a matrix, on which a discrete wavelet transform (Daubechies, 1992) is performed column-wise and row-wise, corresponding to the horizontal and vertical directions. The outcomes of this analysis are low-frequency approximation coefficients and high-frequency detail coefficients. Detail coefficients are analyzed, since their variations are associated with discontinuities of the DTM. Detail coefficients are estimated by performing the 2D wavelet transform both in the horizontal (east-west) and vertical (north-south) directions. Detail coefficients are then mapped for both cases, making it possible to visualize and quantify potential anomalies of the land surface. Moreover, wavelet decomposition can be pushed to further levels, assuming a higher scale number of the transform. This may return further interesting results in terms of identification of land-surface anomalies. In this kind of approach, the choice of a proper mother wavelet function is a tricky point, since it conditions the analysis and thus its outcomes. Therefore, multiple decomposition levels as well as multiple mother wavelets are considered. Here the introduced approach is applied to some interesting case studies from southern Italy, in particular for the identification of large anomalies associated with large landslides at the transition between the Apennine chain domain and the foredeep domain. In particular, the lower Biferno valley and the Fortore valley are analyzed. Finally, the wavelet transforms are performed at multiple levels, in order to address the question of which decomposition level is appropriate for an accurate analysis of a specific problem. Antoine J.P., Carrette P., Murenzi R., and Piette B., (2003), Image analysis with two-dimensional continuous wavelet transform, Signal Processing, 31(3), pp. 241-272, doi:10.1016/0165-1684(93)90085-O. Booth A.M., Roering J.J., and Taylor Perron J., (2009), Automated landslide mapping using spectral analysis and high-resolution topographic data: Puget Sound lowlands, Washington, and Portland Hills, Oregon, Geomorphology, 109(3-4), pp. 132-147, doi:10.1016/j.geomorph.2009.02.027. Bruun B.T., and Nilsen S., (2003), Wavelet representation of large digital terrain models, Computers and Geoscience, 29(6), pp. 695-703, doi:10.1016/S0098-3004(03)00015-3. Daubechies, I. (1992), Ten lectures on wavelets, SIAM.
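
    A minimal sketch of the grid-based analysis described above, assuming a synthetic DTM with a step-like scarp and the db2 mother wavelet; the detail coefficients of a 2D discrete wavelet transform are mapped to flag candidate discontinuities of the land surface.

```python
import numpy as np
import pywt

# Synthetic DTM: a gentle slope with a step-like scarp (a crude "anomaly")
x, y = np.meshgrid(np.linspace(0, 1, 256), np.linspace(0, 1, 256))
dtm = 100.0 * x + 5.0 * (x > 0.6)

# Single-level 2D DWT: approximation plus horizontal/vertical/diagonal details
cA, (cH, cV, cD) = pywt.dwt2(dtm, "db2")

# Large detail magnitudes flag candidate discontinuities of the land surface
detail_energy = np.abs(cH) + np.abs(cV) + np.abs(cD)
row, col = np.unravel_index(np.argmax(detail_energy), detail_energy.shape)
print("strongest detail response near coefficient cell:", row, col)

# Multi-level decomposition for analysis at coarser scales
coeffs = pywt.wavedec2(dtm, "db2", level=3)
print("number of decomposition levels returned:", len(coeffs) - 1)
```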

  7. Leith diffusion model for homogeneous anisotropic turbulence

    DOE PAGES

    Rubinstein, Robert; Clark, Timothy T.; Kurien, Susan

    2017-06-01

    Here, a spectral closure model for homogeneous anisotropic turbulence is proposed. The systematic development begins by closing the third-order correlation describing nonlinear interactions by an anisotropic generalization of the Leith diffusion model for isotropic turbulence. The correlation tensor is then decomposed into a tensorially isotropic part, or directional anisotropy, and a trace-free remainder, or polarization anisotropy. The directional and polarization components are then decomposed using irreducible representations of the SO(3) symmetry group. Under the ansatz that the decomposition is truncated at quadratic order, evolution equations are derived for the directional and polarization pieces of the correlation tensor. Numerical simulation of the model equations for a freely decaying anisotropic flow illustrates the non-trivial effects of spectral dependencies on the different return-to-isotropy rates of the directional and polarization contributions.

  8. Assemblage composition of fungal wood-decay species has a major influence on how climate and wood quality modify decomposition.

    PubMed

    Venugopal, Parvathy; Junninen, Kaisa; Edman, Mattias; Kouki, Jari

    2017-03-01

    The interactions among saprotrophic fungal species, as well as their interactions with environmental factors, may have a major influence on wood decay and carbon release in ecosystems. We studied the effect that decomposer diversity (species richness and assemblage composition) has on wood decomposition when the climatic variables and substrate quality vary simultaneously. We used two temperatures (16 and 21°C) and two humidity levels (70% and 90%) with two wood qualities (wood from managed and old-growth forests) of Pinus sylvestris. In a 9-month experiment, the effects of fungal diversity were tested using four wood-decaying fungi (Antrodia xantha, Dichomitus squalens, Fomitopsis pinicola and Gloeophyllum protractum) at assemblage levels of one, two and four species. Wood quality and assemblage composition affected the influence of climatic factors on decomposition rates. Fungal assemblage composition was found to be more important than fungal species richness, indicating that species-specific fungal traits are of paramount importance in driving decomposition. We conclude that models containing fungal wood-decay species (and wood-based carbon) need to take into account species-specific and assemblage composition-specific properties to improve predictive capacity in regard to decomposition-related carbon dynamics. © FEMS 2017. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  9. Comparison of two interpolation methods for empirical mode decomposition based evaluation of radiographic femur bone images.

    PubMed

    Udhayakumar, Ganesan; Sujatha, Chinnaswamy Manoharan; Ramakrishnan, Swaminathan

    2013-01-01

    Analysis of bone strength in radiographic images is an important component of estimation of bone quality in diseases such as osteoporosis. Conventional radiographic femur bone images are used to analyze their trabecular architecture using the bi-dimensional empirical mode decomposition method. Surface interpolation of the local maxima and minima points of an image is a crucial part of the bi-dimensional empirical mode decomposition method, and the choice of an appropriate interpolation depends on the specific structure of the problem. In this work, two interpolation methods of bi-dimensional empirical mode decomposition are analyzed to characterize the trabecular femur bone architecture of radiographic images. The trabecular bone regions of normal and osteoporotic femur bone images (N = 40) recorded under standard conditions are used for this study. The compressive and tensile strength regions of the images are delineated using pre-processing procedures. The delineated images are decomposed into their corresponding intrinsic mode functions using interpolation methods such as radial basis function (multiquadric) and hierarchical b-spline techniques. Results show that bi-dimensional empirical mode decomposition analyses using both interpolations are able to represent architectural variations of femur bone radiographic images. As the strength of the bone depends on architectural variation in addition to bone mass, this study seems to be clinically useful.
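
    A minimal sketch of the interpolation step at the core of bi-dimensional empirical mode decomposition: local extrema are located and upper and lower envelope surfaces are fitted with a multiquadric radial basis function; the synthetic image is an illustrative assumption and the hierarchical b-spline alternative is not implemented here.

```python
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter
from scipy.interpolate import Rbf

rng = np.random.default_rng(7)
y, x = np.mgrid[0:64, 0:64]
image = np.sin(x / 4.0) * np.cos(y / 5.0) + 0.1 * rng.standard_normal((64, 64))  # toy texture

def envelope(img, extrema_mask):
    # interpolate a smooth surface through the detected extrema
    ys, xs = np.nonzero(extrema_mask)
    rbf = Rbf(xs, ys, img[ys, xs], function="multiquadric")
    return rbf(x, y)

maxima = image == maximum_filter(image, size=5)
minima = image == minimum_filter(image, size=5)
mean_envelope = 0.5 * (envelope(image, maxima) + envelope(image, minima))
proto_imf = image - mean_envelope   # one sifting step toward an intrinsic mode function
print("extrema used (max, min):", int(maxima.sum()), int(minima.sum()))
```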

  10. Laser augmented decomposition. II. D3BPF3 [deuterium effects]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chien, K.R.; Bauer, S.H.

    1976-06-17

    The study of the accelerated decomposition of H3BPF3 induced by laser radiation (930-950 cm^-1) was extended to the fully deuterated species. While in all essential respects the kinetics of the IR photolysis for the two compounds are identical, the few differences which were uncovered proved crucial in pointing to interesting features of the mechanism. These verified predictions were based on a normal mode analysis for the distribution of potential energy among the internal coordinates. For the laser augmented decomposition, Ea(L) = 3.5 ± 1 kcal/mol, compared with Ea(th) = 29.3 kcal/mol for the thermal process. The quantum efficiency is low, approximately 4 x 10^4 photons per molecule decomposed. The rates of decomposition depend on the isotopic content and are sensitively dependent on the frequency of the irradiating line. For example, with P(24) large fractionation ratios were found for D3BPF3 vs. H3BPF3, and small differences for D3(11B)PF3 vs. D3(10B)PF3. The levels of decomposition induced by the sequential three-photon absorption have been semiquantitatively accounted for.

  11. Enhanced development of a catalyst chamber for the decomposition of up to 1.0 kg/s hydrogen peroxide

    NASA Astrophysics Data System (ADS)

    Božić, Ognjan; Porrmann, Dennis; Lancelle, Daniel; May, Stefan

    2016-06-01

    An innovative hybrid rocket engine concept is developed within the AHRES program of the German Aerospace Center (DLR). This rocket engine is based on hydroxyl-terminated polybutadiene (HTPB) with metallic additives as the solid fuel and high test peroxide (HTP) as the liquid oxidizer. Instead of a conventional ignition system, a catalyst chamber with a silver mesh catalyst is designed to decompose the HTP. The newly modified catalyst chamber is able to decompose up to 1.0 kg/s of 87.5 wt% HTP. Used as a monopropellant thruster, this equals an average thrust of 1600 N. The catalyst chamber is designed using the self-developed software tool SHAKIRA. The applied kinetic law, which determines catalytic decomposition of HTP within the catalyst chamber, is given and discussed. Several calculations are carried out to determine the appropriate geometry for complete decomposition with a minimum of catalyst material. A number of tests under steady-state conditions are carried out, using 87.5 wt% HTP with different flow rates and a constant amount of catalyst material. To verify the decomposition, the temperature is measured and compared with the theoretical prediction. The experimental results show good agreement with the results generated by the design tool. The developed catalyst chamber provides a simple, reliable ignition system for hybrid rocket propulsion systems based on hydrogen peroxide as oxidizer. This system is capable of multiple reignitions. The developed hardware and software can be used to design full-scale monopropellant thrusters based on HTP and catalyst chambers for hybrid rocket engines.

  12. Efficient morse decompositions of vector fields.

    PubMed

    Chen, Guoning; Mischaikow, Konstantin; Laramee, Robert S; Zhang, Eugene

    2008-01-01

    Existing topology-based vector field analysis techniques rely on the ability to extract individual trajectories such as fixed points, periodic orbits, and separatrices, which are sensitive to noise and errors introduced by simulation and interpolation. This can make such vector field analysis unsuitable for rigorous interpretations. We advocate the use of Morse decompositions, which are robust with respect to perturbations, to encode the topological structures of a vector field in the form of a directed graph, called a Morse connection graph (MCG). While an MCG exists for every vector field, it need not be unique. Previous techniques for computing MCGs, while fast, are overly conservative and usually result in MCGs that are too coarse to be useful in applications. To address this issue, we present a new technique for performing Morse decomposition based on the concept of tau-maps, which typically provides finer MCGs than existing techniques. Furthermore, the choice of tau provides a natural tradeoff between the fineness of the MCGs and the computational costs. We provide efficient implementations of Morse decomposition based on tau-maps, which include the use of forward and backward mapping techniques and an adaptive approach in constructing better approximations of the images of the triangles in the meshes used for simulation. In addition, we propose the use of spatial tau-maps alongside the original temporal tau-maps. These techniques provide additional trade-offs between the quality of the MCGs and the speed of computation. We demonstrate the utility of our technique with various examples in the plane and on surfaces including engine simulation data sets.
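
    A minimal sketch of the Morse-decomposition idea using a crude grid discretization: each cell is mapped forward for a time tau, the induced cell-to-cell transitions form a directed graph, and its condensation (strongly connected components plus their connections) serves as a coarse Morse connection graph; the toy vector field and uniform grid are illustrative assumptions, not the paper's adaptive triangle-image construction.

```python
import numpy as np
import networkx as nx

def flow(point, tau=0.2, steps=20):
    # Euler integration of a toy planar field with a limit-cycle-like structure
    x, y = point
    for _ in range(steps):
        r2 = x * x + y * y
        dx, dy = x * (1 - r2) - y, y * (1 - r2) + x
        x, y = x + tau / steps * dx, y + tau / steps * dy
    return x, y

n = 20
cells = np.linspace(-2.0, 2.0, n + 1)
cell_of = lambda v: int(np.clip(np.searchsorted(cells, v) - 1, 0, n - 1))

edges = set()
for i in range(n):
    for j in range(n):
        cx, cy = (cells[i] + cells[i + 1]) / 2, (cells[j] + cells[j + 1]) / 2
        fx, fy = flow((cx, cy))                     # image of the cell center under the tau-map
        edges.add(((i, j), (cell_of(fx), cell_of(fy))))

G = nx.DiGraph(edges)
mcg = nx.condensation(G)                            # SCCs (candidate Morse sets) and connections
multi_cell_sets = [s for s in mcg.nodes if len(mcg.nodes[s]["members"]) > 1]
print("candidate Morse sets spanning more than one cell:", len(multi_cell_sets))
```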

  13. Synthesis, X-ray crystallography, thermal studies, spectroscopic and electrochemistry investigations of uranyl Schiff base complexes.

    PubMed

    Asadi, Zahra; Shorkaei, Mohammad Ranjkesh

    2013-03-15

    Some tetradentate salen-type Schiff bases and their uranyl complexes were synthesized and characterized by UV-Vis, NMR, IR, TG, CHN elemental analysis and X-ray crystallographic studies. From these investigations it is confirmed that a solvent molecule occupies the fifth position of the equatorial plane of the distorted pentagonal bipyramidal structure. Also, the kinetics of complex decomposition were studied using thermogravimetric (TG) methods. The thermal decomposition reactions are first order for the studied complexes. To examine how the properties of the uranyl complexes depend on the substituent groups, electrochemical studies were carried out. The electrochemical reactions of uranyl Schiff base complexes in acetonitrile were reversible. Copyright © 2012 Elsevier B.V. All rights reserved.

  14. Post-decomposition optimizations using pattern matching and rule-based clustering for multi-patterning technology

    NASA Astrophysics Data System (ADS)

    Wang, Lynn T.-N.; Madhavan, Sriram

    2018-03-01

    A pattern matching and rule-based polygon clustering methodology with DFM scoring is proposed to detect decomposition-induced manufacturability detractors and fix the layout designs prior to manufacturing. A pattern matcher scans the layout for pre-characterized patterns from a library. If a pattern is detected, rule-based clustering identifies the neighboring polygons that interact with those captured by the pattern. Then, DFM scores are computed for the possible layout fixes: the fix with the best score is applied. The proposed methodology was applied to two 20 nm products with a chip area of 11 mm2 on the metal 2 layer. All the hotspots were resolved. The number of DFM spacing violations decreased by 7-15%.

  15. Weak characteristic information extraction from early fault of wind turbine generator gearbox

    NASA Astrophysics Data System (ADS)

    Xu, Xiaoli; Liu, Xiuli

    2017-09-01

    Given the weak degradation characteristic information during early fault evolution in the gearbox of a wind turbine generator, traditional singular value decomposition (SVD)-based denoising may result in loss of useful information. A weak characteristic information extraction method based on μ-SVD and local mean decomposition (LMD) is developed to address this problem. The basic principle of the method is as follows: determine the denoising order based on the cumulative contribution rate, perform signal reconstruction, extract and subject the noisy part of the signal to LMD and μ-SVD denoising, and obtain the denoised signal through superposition. Experimental results show that this method can significantly weaken signal noise, effectively extract the weak characteristic information of early faults, and facilitate early fault warning and dynamic predictive maintenance.
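
    A minimal sketch of choosing an SVD denoising order from a cumulative contribution rate and reconstructing the signal by rank truncation and anti-diagonal averaging; the simulated signal, Hankel embedding length, and 90% threshold are illustrative assumptions, and the μ-SVD/LMD combination itself is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1024)
clean = np.sin(2 * np.pi * 30 * t) + 0.2 * np.sin(2 * np.pi * 180 * t)
noisy = clean + 0.5 * rng.standard_normal(t.size)

# Hankel (trajectory) matrix embedding of the 1-D signal
L = 128
K = noisy.size - L + 1
H = np.column_stack([noisy[i:i + L] for i in range(K)])

U, s, Vt = np.linalg.svd(H, full_matrices=False)
contribution = np.cumsum(s) / np.sum(s)
order = int(np.searchsorted(contribution, 0.90)) + 1   # denoising order from a 90% contribution rate

# Rank-truncated reconstruction followed by anti-diagonal averaging back to a 1-D signal
H_d = (U[:, :order] * s[:order]) @ Vt[:order]
denoised = np.zeros(noisy.size)
counts = np.zeros(noisy.size)
for r in range(L):
    for c in range(K):
        denoised[r + c] += H_d[r, c]
        counts[r + c] += 1
denoised /= counts
print("chosen denoising order:", order)
```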

  16. Automated Decomposition of Model-based Learning Problems

    NASA Technical Reports Server (NTRS)

    Williams, Brian C.; Millar, Bill

    1996-01-01

    A new generation of sensor-rich, massively distributed autonomous systems is being developed that has the potential for unprecedented performance, such as smart buildings, reconfigurable factories, adaptive traffic systems and remote earth ecosystem monitoring. To achieve high performance, these massive systems will need to accurately model themselves and their environment from sensor information. Accomplishing this on a grand scale requires automating the art of large-scale modeling. This paper presents a formalization of decompositional model-based learning (DML), a method developed by observing a modeler's expertise at decomposing large-scale model estimation tasks. The method exploits a striking analogy between learning and consistency-based diagnosis. Moriarty, an implementation of DML, has been applied to thermal modeling of a smart building, demonstrating a significant improvement in learning rate.

  17. Modeling Anaerobic Soil Organic Carbon Decomposition in Arctic Polygon Tundra: Insights into Soil Geochemical Influences on Carbon Mineralization: Modeling Archive

    DOE Data Explorer

    Zheng, Jianqiu; Thornton, Peter; Painter, Scott; Gu, Baohua; Wullschleger, Stan; Graham, David

    2018-06-13

    This anaerobic carbon decomposition model is developed with explicit representation of fermentation, methanogenesis and iron reduction by combining three well-known modeling approaches developed in different disciplines. A pool-based model to represent upstream carbon transformations and replenishment of DOC pool, a thermodynamically-based model to calculate rate kinetics and biomass growth for methanogenesis and Fe(III) reduction, and a humic ion-binding model for aqueous phase speciation and pH calculation are implemented into the open source geochemical model PHREEQC (V3.0). Installation of PHREEQC is required to run this model.

  18. Using AI Planning Techniques to Automatically Generate Image Processing Procedures: A Preliminary Report

    NASA Technical Reports Server (NTRS)

    Chien, S.

    1994-01-01

    This paper describes work on the Multimission VICAR Planner (MVP) system to automatically construct executable image processing procedures for custom image processing requests for the JPL Multimission Image Processing Lab (MIPL). This paper focuses on two issues. First, large search spaces caused by complex plans required the use of hand encoded control information. In order to address this in a manner similar to that used by human experts, MVP uses a decomposition-based planner to implement hierarchical/skeletal planning at the higher level and then uses a classical operator based planner to solve subproblems in contexts defined by the high-level decomposition.

  19. Decomposability and scalability in space-based observatory scheduling

    NASA Technical Reports Server (NTRS)

    Muscettola, Nicola; Smith, Stephen F.

    1992-01-01

    In this paper, we discuss issues of problem and model decomposition within the HSTS scheduling framework. HSTS was developed and originally applied in the context of the Hubble Space Telescope (HST) scheduling problem, motivated by the limitations of the current solution and, more generally, the insufficiency of classical planning and scheduling approaches in this problem context. We first summarize the salient architectural characteristics of HSTS and their relationship to previous scheduling and AI planning research. Then, we describe some key problem decomposition techniques supported by HSTS and underlying our integrated planning and scheduling approach, and we discuss the leverage they provide in solving space-based observatory scheduling problems.

  20. Least squares QR-based decomposition provides an efficient way of computing optimal regularization parameter in photoacoustic tomography.

    PubMed

    Shaw, Calvin B; Prakash, Jaya; Pramanik, Manojit; Yalavarthy, Phaneendra K

    2013-08-01

    A computationally efficient approach that computes the optimal regularization parameter for the Tikhonov-minimization scheme is developed for photoacoustic imaging. This approach is based on the least squares-QR decomposition which is a well-known dimensionality reduction technique for a large system of equations. It is shown that the proposed framework is effective in terms of quantitative and qualitative reconstructions of initial pressure distribution enabled via finding an optimal regularization parameter. The computational efficiency and performance of the proposed method are shown using a test case of numerical blood vessel phantom, where the initial pressure is exactly known for quantitative comparison.
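
    A minimal sketch of sweeping the damping (regularization) parameter with SciPy's LSQR solver and picking a value from the L-curve; the random system matrix and the corner heuristic (minimizing the product of residual and solution norms) are illustrative assumptions rather than the paper's selection criterion.

```python
import numpy as np
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(1)
A = rng.standard_normal((200, 100))              # stand-in for the photoacoustic system matrix
x_true = np.zeros(100)
x_true[40:60] = 1.0                              # stand-in initial pressure distribution
b = A @ x_true + 0.05 * rng.standard_normal(200)

lambdas = np.logspace(-3, 2, 30)
residual_norms, solution_norms = [], []
for lam in lambdas:
    x = lsqr(A, b, damp=lam)[0]                  # damped (Tikhonov-like) least squares via LSQR
    residual_norms.append(np.linalg.norm(A @ x - b))
    solution_norms.append(np.linalg.norm(x))

best = lambdas[np.argmin(np.array(residual_norms) * np.array(solution_norms))]
print("selected regularization parameter:", best)
```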

  1. Determination of knock characteristics in spark ignition engines: an approach based on ensemble empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Li, Ning; Yang, Jianguo; Zhou, Rui; Liang, Caiping

    2016-04-01

    Knock is one of the major constraints to improve the performance and thermal efficiency of spark ignition (SI) engines. It can also result in severe permanent engine damage under certain operating conditions. Based on the ensemble empirical mode decomposition (EEMD), this paper proposes a new approach to determine the knock characteristics in SI engines. By adding a uniformly distributed and finite white Gaussian noise, the EEMD can preserve signal continuity in different scales and therefore alleviates the mode-mixing problem occurring in the classic empirical mode decomposition (EMD). The feasibilities of applying the EEMD to detect the knock signatures of a test SI engine via the pressure signal measured from combustion chamber and the vibration signal measured from cylinder head are investigated. Experimental results show that the EEMD-based method is able to detect the knock signatures from both the pressure signal and vibration signal, even in initial stage of knock. Finally, by comparing the application results with those obtained by short-time Fourier transform (STFT), Wigner-Ville distribution (WVD) and discrete wavelet transform (DWT), the superiority of the EEMD method in determining knock characteristics is demonstrated.

  2. S-matrix decomposition, natural reaction channels, and the quantum transition state approach to reactive scattering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Manthe, Uwe, E-mail: uwe.manthe@uni-bielefeld.de; Ellerbrock, Roman, E-mail: roman.ellerbrock@uni-bielefeld.de

    2016-05-28

    A new approach for the quantum-state resolved analysis of polyatomic reactions is introduced. Based on the singular value decomposition of the S-matrix, energy-dependent natural reaction channels and natural reaction probabilities are defined. It is shown that the natural reaction probabilities are equal to the eigenvalues of the reaction probability operator [U. Manthe and W. H. Miller, J. Chem. Phys. 99, 3411 (1993)]. Consequently, the natural reaction channels can be interpreted as uniquely defined pathways through the transition state of the reaction. The analysis can efficiently be combined with reactive scattering calculations based on the propagation of thermal flux eigenstates. In contrast to a decomposition based straightforwardly on thermal flux eigenstates, it does not depend on the choice of the dividing surface separating reactants from products. The new approach is illustrated studying a prototypical example, the H + CH4 → H2 + CH3 reaction. The natural reaction probabilities and the contributions of the different vibrational states of the methyl product to the natural reaction channels are calculated and discussed. The relation between the thermal flux eigenstates and the natural reaction channels is studied in detail.
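
    A minimal sketch of the central construction: the singular value decomposition of a reactant-to-product S-matrix block, whose squared singular values equal the eigenvalues of the reaction probability operator S†S; the toy S-matrix below is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(2)
# toy reactant->product S-matrix block, scaled so probabilities stay below one
S_pr = (rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))) * 0.2

U, sigma, Vh = np.linalg.svd(S_pr)
natural_probabilities = sigma ** 2       # eigenvalues of S_pr^dagger S_pr
reactant_channels = Vh.conj().T          # columns: natural channels on the reactant side
product_channels = U                     # columns: corresponding product-side channels

# cross-check against the reaction probability operator S^dagger S
eigvals = np.linalg.eigvalsh(S_pr.conj().T @ S_pr)
print("probabilities match eigenvalues:", np.allclose(np.sort(natural_probabilities), np.sort(eigvals)))
```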

  3. A Compound Fault Diagnosis for Rolling Bearings Method Based on Blind Source Separation and Ensemble Empirical Mode Decomposition

    PubMed Central

    Wang, Huaqing; Li, Ruitong; Tang, Gang; Yuan, Hongfang; Zhao, Qingliang; Cao, Xi

    2014-01-01

    A compound fault signal usually contains multiple characteristic signals and strong confounding noise, which makes it difficult to separate weak fault signals using conventional methods, such as FFT-based envelope detection, wavelet transform or empirical mode decomposition individually. In order to improve the compound fault diagnosis of rolling bearings via signal separation, the present paper proposes a new method to identify compound faults from measured mixed signals, based on the ensemble empirical mode decomposition (EEMD) method and the independent component analysis (ICA) technique. With this approach, a vibration signal is first decomposed into intrinsic mode functions (IMFs) by the EEMD method to obtain multichannel signals. Then, according to a cross-correlation criterion, the corresponding IMFs are selected as the input matrix of ICA. Finally, the compound faults can be separated effectively by executing the ICA method, which makes the fault features more easily extracted and more clearly identified. Experimental results validate the effectiveness of the proposed method in compound fault separation, which works not only for the outer race defect, but also for the roller defect and the unbalance fault of the experimental system. PMID:25289644
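
    A minimal sketch of the EEMD-plus-ICA pipeline on a simulated two-component vibration signal, assuming the PyEMD package (EMD-signal on PyPI) for ensemble empirical mode decomposition and scikit-learn's FastICA; the signal, the IMF selection rule, and the component count are illustrative assumptions.

```python
import numpy as np
from PyEMD import EEMD                     # assumed available via "pip install EMD-signal"
from sklearn.decomposition import FastICA

fs = 2048
t = np.arange(0, 1, 1 / fs)
outer_race = np.sin(2 * np.pi * 60 * t) * (1 + 0.5 * np.sin(2 * np.pi * 5 * t))
roller = 0.6 * np.sign(np.sin(2 * np.pi * 13 * t))
mixed = outer_race + roller + 0.2 * np.random.default_rng(3).standard_normal(t.size)

imfs = EEMD(trials=20).eemd(mixed)         # intrinsic mode functions (multichannel input)

# keep the IMFs that correlate most strongly with the measured signal
corr = [abs(np.corrcoef(imf, mixed)[0, 1]) for imf in imfs]
selected = imfs[np.argsort(corr)[-3:]]

sources = FastICA(n_components=2, random_state=0).fit_transform(selected.T)
print("separated source matrix shape:", sources.shape)   # (n_samples, 2)
```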

  4. Forest height estimation from mountain forest areas using general model-based decomposition for polarimetric interferometric synthetic aperture radar images

    NASA Astrophysics Data System (ADS)

    Minh, Nghia Pham; Zou, Bin; Cai, Hongjun; Wang, Chengyi

    2014-01-01

    The estimation of forest parameters over mountain forest areas using polarimetric interferometric synthetic aperture radar (PolInSAR) images is of great interest in remote sensing applications. For mountain forest areas, scattering mechanisms are strongly affected by the ground topography variations. Most of the previous studies in modeling microwave backscattering signatures of forest areas have been carried out over relatively flat areas. Therefore, a new algorithm for forest height estimation over mountain forest areas using the general model-based decomposition (GMBD) for PolInSAR images is proposed. This algorithm enables the retrieval of not only the forest parameters, but also the magnitude associated with each mechanism. In addition, general double- and single-bounce scattering models are proposed to fit the cross-polarization and off-diagonal terms by separating their independent orientation angles, which previous model-based decompositions do not achieve. The efficiency of the proposed approach is demonstrated with simulated data from PolSARProSim software and ALOS-PALSAR spaceborne PolInSAR datasets over the Kalimantan areas, Indonesia. Experimental results indicate that forest height can be effectively estimated by GMBD.

  5. Pyrosequencing-based assessment of microbial community shifts in leachate from animal carcass burial lysimeter.

    PubMed

    Kim, Hyun Young; Seo, Jiyoung; Kim, Tae-Hun; Shim, Bomi; Cha, Seok Mun; Yu, Seungho

    2017-06-01

    This study examined the use of microbial community structure as a bio-indicator of decomposition levels. High-throughput pyrosequencing technology was used to assess shifts in the microbial community of leachate from an animal carcass lysimeter. The leachate samples were collected monthly for one year and a total of 164,639 pyrosequencing reads were obtained and used in the taxonomic classification and operational taxonomic unit (OTU) distribution analysis based on sequence similarity. Our results show considerable changes in the phylum-level bacterial composition, suggesting that the microbial community is a sensitive parameter affected by the burial environment. The phylum classification results showed that Proteobacteria (Pseudomonas) were the most influential taxa in the earlier decomposition stage, whereas Firmicutes (Clostridium, Sporanaerobacter, and Peptostreptococcus) were dominant in the later stage under anaerobic conditions. The results of this study provide useful information on the time series of microbial community structure in leachate and suggest patterns of microbial diversity in livestock burial sites. In addition, these results may be applicable for predicting the decomposition stages of livestock carcasses buried in clay loam soils. Copyright © 2017 Elsevier B.V. All rights reserved.

  6. Intelligent Diagnosis Method for Rotating Machinery Using Dictionary Learning and Singular Value Decomposition.

    PubMed

    Han, Te; Jiang, Dongxiang; Zhang, Xiaochen; Sun, Yankui

    2017-03-27

    Rotating machinery is widely used in industrial applications. With the trend towards more precise and more critical operating conditions, mechanical failures may easily occur. Condition monitoring and fault diagnosis (CMFD) technology is an effective tool to enhance the reliability and security of rotating machinery. In this paper, an intelligent fault diagnosis method based on dictionary learning and singular value decomposition (SVD) is proposed. First, the dictionary learning scheme is capable of generating an adaptive dictionary whose atoms reveal the underlying structure of raw signals. Essentially, dictionary learning is employed as an adaptive feature extraction method without requiring any prior knowledge. Second, the singular value sequence of the learned dictionary matrix serves as the feature vector. Generally, since the vector is of high dimensionality, a simple and practical principal component analysis (PCA) is applied to reduce dimensionality. Finally, the K-nearest neighbor (KNN) algorithm is adopted for automatic identification and classification of fault patterns. Two experimental case studies are investigated to corroborate the effectiveness of the proposed method in intelligent diagnosis of rotating machinery faults. The comparison analysis validates that the dictionary learning-based matrix construction approach outperforms the mode decomposition-based methods in terms of capacity and adaptability for feature extraction.
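
    A minimal sketch of the feature pipeline described above, with scikit-learn's DictionaryLearning, the singular value sequence of the learned dictionary as the feature vector, PCA for dimensionality reduction, and KNN for classification; the simulated signals and all parameter choices are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning, PCA
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(4)

def feature_vector(signal, n_atoms=8, seg_len=64):
    # learn an adaptive dictionary from signal segments, then take its singular values
    segments = signal[: len(signal) // seg_len * seg_len].reshape(-1, seg_len)
    dico = DictionaryLearning(n_components=n_atoms, max_iter=20, random_state=0).fit(segments)
    return np.linalg.svd(dico.components_, compute_uv=False)

def simulate(fault_freq):
    t = np.arange(0, 1, 1 / 2048)
    return np.sin(2 * np.pi * fault_freq * t) + 0.3 * rng.standard_normal(t.size)

X = np.array([feature_vector(simulate(f)) for f in [30, 30, 30, 90, 90, 90]])
y = [0, 0, 0, 1, 1, 1]                                     # two assumed fault classes

pca = PCA(n_components=2).fit(X)
clf = KNeighborsClassifier(n_neighbors=1).fit(pca.transform(X), y)
new_sample = feature_vector(simulate(90))
print("predicted class:", clf.predict(pca.transform(new_sample.reshape(1, -1)))[0])
```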

  7. Response of SOM Decomposition to Anthropogenic N Deposition: Simulations From the PnET-SOM Model.

    NASA Astrophysics Data System (ADS)

    Tonitto, C.; Goodale, C. L.; Ollinger, S. V.; Jenkins, J. P.

    2008-12-01

    Anthropogenic forcing of the C and N cycles has caused rapid change in atmospheric CO2 and N deposition, with complex and uncertain effects on forest C and N balance. With some exceptions, models of forest ecosystem response to anthropogenic perturbation have historically focused more on aboveground than belowground processes; the complexity of soil organic matter (SOM) is often represented with abstract or incomplete SOM pools, and remains difficult to quantify. We developed a model of SOM dynamics in northern hardwood forests with explicit feedbacks between C and N cycles. The soil model is linked to the aboveground dynamics of the PnET model to form PnET-SOM. The SOM model includes: 1) physically measurable SOM pools, including humic and mineral-associated SOM in O, A, and B soil horizons, 2) empirical soil turnover times based on 14C data, 3) alternative SOM decomposition algorithms with and without explicit microbial processing, and 4) soluble element transport explicitly linked to the hydrologic cycle. We tested model sensitivity to changes in litter decomposition rate (k) and completeness of decomposition (limit value) by altering these parameters based on experimental observations from long-term litter decomposition experiments with N fertilization treatments. After a 100 year simulation, the Oe+Oa horizon SOC pool was reduced by 15 % and the A-horizon humified SOC was reduced by 7 % for N deposition scenarios relative to forests without N fertilization. In contrast, predictions for slower time-scale pools showed negligible variation in response to variation in the limit values tested, with A-horizon mineral SOC pools reduced by < 3 % and B-horizon mineral SOC reduced by 0.1 % for N deposition scenarios relative to forests without N fertilization. The model was also used to test the effect of varying initial litter decomposition rate to simulate response to N deposition. In contrast to the effect of varying limit values, simulations in which only k-values were varied did not drastically alter the predicted SOC pool distribution throughout the soil profile, but did significantly alter the Oi SOC pool. These results suggest that describing soil response to N deposition via alteration of the limit value alone, or as a combined alteration of limit value and the initial decomposition rate, can lead to significant variation in predicted long-term C storage.
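
    A minimal sketch of the sensitivity test described above, assuming a simple asymptotic litter-decay curve in which mass loss approaches a limit value at an initial rate k; the functional form and the parameter shifts used to mimic an N-deposition treatment are illustrative assumptions, not PnET-SOM's algorithms.

```python
import numpy as np

def mass_remaining(t_years, k, limit_value):
    # percent of initial litter mass remaining; decomposition ceases at the limit value
    return 100.0 - limit_value * (1.0 - np.exp(-k * t_years))

t = np.linspace(0, 100, 101)
control = mass_remaining(t, k=0.30, limit_value=80.0)        # no added N (assumed parameters)
n_deposition = mass_remaining(t, k=0.35, limit_value=70.0)   # faster initial decay, less complete

extra_storage = n_deposition[-1] - control[-1]
print(f"extra mass retained after 100 yr under N deposition: {extra_storage:.1f}% of initial litter")
```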

  8. On the behavior of return stroke current and the remotely detected electric field change waveform

    NASA Astrophysics Data System (ADS)

    Shao, Xuan-Min; Lay, Erin; Jacobson, Abram R.

    2012-04-01

    After accumulating a large number of remotely recorded negative return stroke electric field change waveforms, a subtle but persistent kink was found following the main return stroke peak by several microseconds. To understand the corresponding return stroke current properties behind the kink and the general return stroke radiation waveform, we analyze strokes occurring in triggered lightning flashes for which both the channel base current and the simultaneous remote electric radiation field have been measured. In this study, the channel base current is assumed to propagate along the return stroke channel in a dispersive and lossy manner. The measured channel base current is band-pass filtered, and the higher-frequency component is assumed to attenuate faster than the lower-frequency component. The radiation electric field is computed for such a current behavior and is then propagated to distant sensors. It is found that such a return stroke model is capable of very closely reproducing the measured electric waveforms at multiple stations for the triggered return strokes, and such a model is considered applicable to the common behavior of the natural return stroke as well. On the basis of the analysis, a number of other observables are derived. The time-evolving current dispersion and attenuation compare well with previously reported optical observations. The observable speed tends to agree with optical and VHF observations. Line charge density that is removed or deposited by the return stroke is derived, and the implications of the charge density distribution for leader channel decay are discussed.
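
    A minimal sketch of the dispersive, lossy current-propagation idea: the channel-base current is split into low- and high-frequency bands and the high-frequency band is attenuated faster with height along the channel; the waveform, cutoff frequency, propagation speed, and attenuation lengths are illustrative assumptions, not the fitted model of the study.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 10e6                                   # 10 MHz sampling (assumed)
t = np.arange(0, 100e-6, 1 / fs)
i_base = 12e3 * (np.exp(-t / 50e-6) - np.exp(-t / 0.5e-6))   # double-exponential channel-base current

b, a = butter(4, 100e3 / (fs / 2))          # split at an assumed 100 kHz
i_low = filtfilt(b, a, i_base)
i_high = i_base - i_low

def current_at_height(z_m, v=1.3e8):
    # delay by the transit time and attenuate the high band on a shorter length scale
    delay = int(round(z_m / v * fs))
    current = (np.exp(-z_m / 3000.0) * np.roll(i_low, delay)
               + np.exp(-z_m / 800.0) * np.roll(i_high, delay))
    current[:delay] = 0.0                   # nothing has arrived ahead of the wavefront
    return current

print("peak current at 1 km (kA): %.1f" % (current_at_height(1000.0).max() / 1e3))
```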

  9. A Thermodynamically Consistent Approach to Phase-Separating Viscous Fluids

    NASA Astrophysics Data System (ADS)

    Anders, Denis; Weinberg, Kerstin

    2018-04-01

    The de-mixing properties of heterogeneous viscous fluids are determined by an interplay of diffusion, surface tension and a superposed velocity field. In this contribution a variational model of the decomposition, based on the Navier-Stokes equations for incompressible laminar flow and the extended Korteweg-Cahn-Hilliard equations, is formulated. An exemplary numerical simulation using C1-continuous finite elements demonstrates the capability of this model to compute phase decomposition and coarsening of the moving fluid.
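
    A minimal 1-D sketch of the Cahn-Hilliard part of the model, stepped explicitly with periodic boundaries to show spinodal decomposition and coarsening; the Navier-Stokes coupling and the Korteweg stress are omitted, and the grid, gamma, and time step are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)
n, gamma, dt = 256, 0.5, 0.01
c = 0.05 * rng.standard_normal(n)               # near-critical mixture, small random perturbation

def laplace(u):
    # periodic second difference with dx = 1
    return np.roll(u, 1) + np.roll(u, -1) - 2.0 * u

for _ in range(20000):
    mu = c ** 3 - c - gamma * laplace(c)        # chemical potential
    c = c + dt * laplace(mu)                    # c_t = Laplacian(mu)

# count the de-mixed domains (sign changes) after coarsening
domains = int(np.count_nonzero(np.sign(c) != np.sign(np.roll(c, 1))))
print("number of phase domains after coarsening:", domains)
```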

  10. Adiabatic Compression Sensitivity of AF-M315E

    DTIC Science & Technology

    2015-07-01

    the current work is to expand the knowledge base from previous experiments completed at AFRL for AF-M315E in stainless steel U-tubes at room ... addressed, to some degree, with the use of clamps and a large stainless steel plate to dissipate any major vibrations. A large preheated bath of 50:50 v/v ... autocatalytic chain decomposition in the propellant. This exothermic decomposition decreases the fume-off initiation temperature of the propellant and its

  11. Civil Engineering Corrosion Control. Volume 1. Corrosion Control - General

    DTIC Science & Technology

    1975-01-01

    is generated in the boiler by the decomposition of carbonates and bicarbonates of sodium, calcium, and magnesium. (c) The pH Range. Natural waters ... and products of decomposition acting as either anodic or cathodic depolarizers. 4.4.1 Forms of Microorganisms. In almost any soil or water, there are ... 1945. Based on field tests of the Iron and Steel Institute Corrosion Committee reported by J.C. Hudson (J. Iron Steel Inst., 11, 209, 1943), with

  12. Direct water decomposition on transition metal surfaces: Structural dependence and catalytic screening

    DOE PAGES

    Tsai, Charlie; Lee, Kyoungjin; Yoo, Jong Suk; ...

    2016-02-16

    Density functional theory calculations are used to investigate thermal water decomposition over the close-packed (111), stepped (211), and open (100) facets of transition metal surfaces. A descriptor-based approach is used to determine that the (211) facet leads to the highest possible rates. As a result, a range of 96 binary alloys were screened for their potential activity and a rate control analysis was performed to assess how the overall rate could be improved.

  13. Measuring and decomposing socioeconomic inequality in healthcare delivery: A microsimulation approach with application to the Palestinian conflict-affected fragile setting.

    PubMed

    Abu-Zaineh, Mohammad; Mataria, Awad; Moatti, Jean-Paul; Ventelou, Bruno

    2011-01-01

    Socioeconomic-related inequalities in healthcare delivery have been extensively studied in developed countries, using standard linear models of decomposition. This paper seeks to assess equity in healthcare delivery in the particular context of the occupied Palestinian territory: the West Bank and the Gaza Strip, using a new method of decomposition based on microsimulations. Besides avoiding the 'unavoidable price' of linearity restriction that is imposed by the standard methods of decomposition, the microsimulation-based decomposition makes it possible to circumvent the potentially contentious role of heterogeneity in behaviours and to better disentangle the various sources driving inequality in healthcare utilisation. Results suggest that the worse-off do have a disproportionately greater need for all levels of care. However, with the exception of primary-level care, utilisation of all levels of care appears to be significantly higher for the better-off. The microsimulation method has made it possible to identify the contributions of factors driving such pro-rich patterns. While much of the inequality in utilisation appears to be caused by the prevailing socioeconomic inequalities, detailed analysis attributes a non-trivial part (circa 30% of inequalities) to heterogeneity in healthcare-seeking behaviours across socioeconomic groups of the population. Several policy recommendations for improving equity in healthcare delivery in the occupied Palestinian territory are proposed. Copyright © 2010 Elsevier Ltd. All rights reserved.

  14. Complete Decomposition of Li 2 CO 3 in Li–O 2 Batteries Using Ir/B 4 C as Noncarbon-Based Oxygen Electrode

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Song, Shidong; Xu, Wu; Zheng, Jianming

    Incomplete decomposition of Li2CO3 during the charge process is a critical barrier for rechargeable Li-O2 batteries. Here we report complete decomposition of Li2CO3 in Li-O2 batteries using an ultrafine iridium-decorated boron carbide (Ir/B4C) nanocomposite as the oxygen electrode. The systematic investigation on charging the Li2CO3-preloaded Ir/B4C electrode in an ether-based electrolyte demonstrates that the Ir/B4C electrode can decompose Li2CO3 with an efficiency close to 100% at below 4.37 V. In contrast, the bare B4C without the Ir electrocatalyst can only decompose 4.7% of the preloaded Li2CO3. The reaction mechanism of Li2CO3 decomposition in the presence of the Ir/B4C electrocatalyst has been further investigated. A Li-O2 battery using Ir/B4C as the oxygen electrode material shows much better cycling stability than one using a bare B4C oxygen electrode. These results clearly demonstrate that Ir/B4C is an effective oxygen electrode material to completely decompose Li2CO3 at relatively low charge voltages and is of significant importance in improving the cycle performance of aprotic Li-O2 batteries.

  15. Low-rank canonical-tensor decomposition of potential energy surfaces: application to grid-based diagrammatic vibrational Green's function theory

    DOE PAGES

    Rai, Prashant; Sargsyan, Khachik; Najm, Habib; ...

    2017-03-07

    Here, a new method is proposed for a fast evaluation of high-dimensional integrals of potential energy surfaces (PES) that arise in many areas of quantum dynamics. It decomposes a PES into a canonical low-rank tensor format, reducing its integral into a relatively short sum of products of low-dimensional integrals. The decomposition is achieved by the alternating least squares (ALS) algorithm, requiring only a small number of single-point energy evaluations. Therefore, it eradicates a force-constant evaluation as the hotspot of many quantum dynamics simulations and also possibly lifts the curse of dimensionality. This general method is applied to the anharmonic vibrational zero-point and transition energy calculations of molecules using the second-order diagrammatic vibrational many-body Green's function (XVH2) theory with a harmonic-approximation reference. In this application, high dimensional PES and Green's functions are both subjected to a low-rank decomposition. Evaluating the molecular integrals over a low-rank PES and Green's functions as sums of low-dimensional integrals using the Gauss–Hermite quadrature, this canonical-tensor-decomposition-based XVH2 (CT-XVH2) achieves an accuracy of 0.1 cm-1 or higher and nearly an order of magnitude speedup as compared with the original algorithm using force constants for water and formaldehyde.

  16. Low-rank canonical-tensor decomposition of potential energy surfaces: application to grid-based diagrammatic vibrational Green's function theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rai, Prashant; Sargsyan, Khachik; Najm, Habib

    Here, a new method is proposed for a fast evaluation of high-dimensional integrals of potential energy surfaces (PES) that arise in many areas of quantum dynamics. It decomposes a PES into a canonical low-rank tensor format, reducing its integral into a relatively short sum of products of low-dimensional integrals. The decomposition is achieved by the alternating least squares (ALS) algorithm, requiring only a small number of single-point energy evaluations. Therefore, it eradicates a force-constant evaluation as the hotspot of many quantum dynamics simulations and also possibly lifts the curse of dimensionality. This general method is applied to the anharmonic vibrational zero-point and transition energy calculations of molecules using the second-order diagrammatic vibrational many-body Green's function (XVH2) theory with a harmonic-approximation reference. In this application, high dimensional PES and Green's functions are both subjected to a low-rank decomposition. Evaluating the molecular integrals over a low-rank PES and Green's functions as sums of low-dimensional integrals using the Gauss–Hermite quadrature, this canonical-tensor-decomposition-based XVH2 (CT-XVH2) achieves an accuracy of 0.1 cm-1 or higher and nearly an order of magnitude speedup as compared with the original algorithm using force constants for water and formaldehyde.
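
    As a minimal illustration of the canonical low-rank (CP) tensor format and the alternating-least-squares step described in the two records above, the following Python sketch fits a rank-3 CP decomposition to a small synthetic 3-way array. It is not the CT-XVH2 code, and the grid/PES setup and Gauss–Hermite integration are omitted.

      # Minimal CP/ALS sketch for a 3-way tensor T ~ sum_r a_r (x) b_r (x) c_r.
      # Illustrates the alternating-least-squares step used in canonical tensor
      # decomposition; it is not the CT-XVH2 implementation.
      import numpy as np

      def khatri_rao(A, B):
          """Column-wise Kronecker product of A (I x R) and B (J x R) -> (I*J x R)."""
          I, R = A.shape
          J, _ = B.shape
          return (A[:, None, :] * B[None, :, :]).reshape(I * J, R)

      def cp_als(T, rank, n_iter=200, seed=0):
          rng = np.random.default_rng(seed)
          I, J, K = T.shape
          A = rng.standard_normal((I, rank))
          B = rng.standard_normal((J, rank))
          C = rng.standard_normal((K, rank))
          T1 = T.reshape(I, J * K)                       # mode-1 unfolding
          T2 = np.moveaxis(T, 1, 0).reshape(J, I * K)    # mode-2 unfolding
          T3 = np.moveaxis(T, 2, 0).reshape(K, I * J)    # mode-3 unfolding
          for _ in range(n_iter):
              A = T1 @ np.linalg.pinv(khatri_rao(B, C).T)
              B = T2 @ np.linalg.pinv(khatri_rao(A, C).T)
              C = T3 @ np.linalg.pinv(khatri_rao(A, B).T)
          return A, B, C

      # Build a random rank-3 tensor and check the reconstruction error.
      rng = np.random.default_rng(1)
      A0, B0, C0 = (rng.standard_normal((n, 3)) for n in (6, 7, 8))
      T = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
      A, B, C = cp_als(T, rank=3)
      T_hat = np.einsum('ir,jr,kr->ijk', A, B, C)
      print("relative error:", np.linalg.norm(T - T_hat) / np.linalg.norm(T))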

  17. Prediction of the Maximum Temperature for Life Based on the Stability of Metabolites to Decomposition in Water

    PubMed Central

    Bains, William; Xiao, Yao; Yu, Changyong

    2015-01-01

    The components of life must survive in a cell long enough to perform their function in that cell. Because the rate of attack by water increases with temperature, we can, in principle, predict a maximum temperature above which an active terrestrial metabolism cannot function by analysis of the decomposition rates of the components of life, and comparison of those rates with the metabolites’ minimum metabolic half-lives. The present study is a first step in this direction, providing an analytical framework and method, and analyzing the stability of 63 small molecule metabolites based on literature data. Assuming that attack by water follows a first order rate equation, we extracted decomposition rate constants from literature data and estimated their statistical reliability. The resulting rate equations were then used to give a measure of confidence in the half-life of the metabolite concerned at different temperatures. There are few reliable data on metabolite decomposition or hydrolysis rates in the literature; the data are mostly confined to a small number of classes of chemicals, and the data available are sometimes mutually contradictory because of varying reaction conditions. However, a preliminary analysis suggests that terrestrial biochemistry is limited to environments below ~150–180 °C. We comment briefly on why pressure is likely to have a small effect on this. PMID:25821932
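
    A minimal sketch of the kind of calculation described above: assuming first-order attack by water with an Arrhenius temperature dependence, compute half-lives and find the temperature at which a metabolite's half-life drops below an assumed minimum metabolic half-life. The rate parameters and the 60 s threshold are illustrative placeholders, not the paper's fitted values.

      # Sketch of the half-life analysis described above, with made-up Arrhenius
      # parameters; the paper's fitted rate constants are not reproduced here.
      import numpy as np

      R = 8.314  # J/(mol K)

      def k_arrhenius(T_kelvin, A, Ea):
          """First-order rate constant for attack by water at temperature T."""
          return A * np.exp(-Ea / (R * T_kelvin))

      def half_life(T_kelvin, A, Ea):
          """Half-life (s) for first-order decomposition: t_1/2 = ln 2 / k."""
          return np.log(2.0) / k_arrhenius(T_kelvin, A, Ea)

      # Hypothetical metabolite: pre-exponential factor and activation energy assumed.
      A_pre, Ea = 1.0e10, 1.05e5             # 1/s, J/mol (illustrative only)
      t_min = 60.0                            # assumed minimum useful half-life, s

      temps = np.arange(300.0, 520.0, 1.0)    # roughly 27 C to 247 C
      ok = half_life(temps, A_pre, Ea) > t_min
      T_max = temps[ok].max()
      print("half-life at 100 C: %.1f s" % half_life(373.15, A_pre, Ea))
      print("max temperature with t1/2 > %.0f s: %.0f K" % (t_min, T_max))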

  18. My46: a web-based tool for self-guided management of genomic test results in research and clinical settings

    PubMed Central

    Tabor, Holly K.; Jamal, Seema M.; Yu, Joon-Ho; Crouch, Julia M.; Shankar, Aditi G.; Dent, Karin M.; Anderson, Nick; Miller, Damon A.; Futral, Brett T.; Bamshad, Michael J.

    2016-01-01

    A major challenge to implementing precision medicine is the need for an efficient and cost-effective strategy for returning individual genomic test results that is easily scalable and can be incorporated into multiple models of clinical practice. My46 is a web-based tool for managing the return of genetic results that was designed and developed to support a wide range of approaches to results disclosure, ranging from traditional face-to-face disclosure to self-guided models. My46 has five key functions: set and modify results return preferences, return results, educate, manage return of results, and assess return of results. These key functions are supported by six distinct modules and a suite of features that enhance the user experience, ease site navigation, facilitate knowledge sharing, and enable results return tracking. My46 is a potentially effective solution for returning results and supports current trends toward shared decision-making between patient and provider and patient-driven health management. PMID:27632689

  19. Reliable and Efficient Parallel Processing Algorithms and Architectures for Modern Signal Processing. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Liu, Kuojuey Ray

    1990-01-01

    Least-squares (LS) estimations and spectral decomposition algorithms constitute the heart of modern signal processing and communication problems. Implementations of recursive LS and spectral decomposition algorithms onto parallel processing architectures such as systolic arrays with efficient fault-tolerant schemes are the major concerns of this dissertation. There are four major results in this dissertation. First, we propose the systolic block Householder transformation with application to the recursive least-squares minimization. It is successfully implemented on a systolic array with a two-level pipelined implementation at the vector level as well as at the word level. Second, a real-time algorithm-based concurrent error detection scheme based on the residual method is proposed for the QRD RLS systolic array. The fault diagnosis, order degraded reconfiguration, and performance analysis are also considered. Third, the dynamic range, stability, error detection capability under finite-precision implementation, order degraded performance, and residual estimation under faulty situations for the QRD RLS systolic array are studied in detail. Finally, we propose the use of multi-phase systolic algorithms for spectral decomposition based on the QR algorithm. Two systolic architectures, one based on a triangular array and another based on a rectangular array, are presented for the multiphase operations with fault-tolerant considerations. Eigenvectors and singular vectors can be easily obtained by using the multi-phase operations. Performance issues are also considered.

  20. Multiscale Characterization of PM2.5 in Southern Taiwan based on Noise-assisted Multivariate Empirical Mode Decomposition and Time-dependent Intrinsic Correlation

    NASA Astrophysics Data System (ADS)

    Hsiao, Y. R.; Tsai, C.

    2017-12-01

    As the WHO Air Quality Guideline indicates, ambient air pollution places world populations under threat of fatal illnesses (e.g. heart disease, lung cancer, asthma), raising concerns about air pollution sources and related factors. This study presents a novel approach to investigating the multiscale variations of PM2.5 in southern Taiwan over the past decade, together with four meteorological influencing factors (temperature, relative humidity, precipitation and wind speed), based on the Noise-Assisted Multivariate Empirical Mode Decomposition (NAMEMD) algorithm, Hilbert Spectral Analysis (HSA) and the Time-Dependent Intrinsic Correlation (TDIC) method. The NAMEMD algorithm is a fully data-driven approach designed for nonlinear and nonstationary multivariate signals, and is performed to decompose multivariate signals into a collection of channels of Intrinsic Mode Functions (IMFs). The TDIC method is an EMD-based method using a set of sliding window sizes to quantify localized correlation coefficients for multiscale signals. With the alignment property and quasi-dyadic filter bank of the NAMEMD algorithm, one is able to produce the same number of IMFs for all variables and to estimate the cross-correlation more accurately. The performance of the spectral representation of the NAMEMD-HSA method is compared with Complementary Ensemble Empirical Mode Decomposition/Hilbert Spectral Analysis (CEEMD-HSA) and wavelet analysis. The NAMEMD-based TDIC analysis is then compared with the CEEMD-based TDIC analysis and traditional correlation analysis.

  1. Phosphate addition enhanced soil inorganic nutrients to a large extent in three tropical forests.

    PubMed

    Zhu, Feifei; Lu, Xiankai; Liu, Lei; Mo, Jiangming

    2015-01-21

    Elevated nitrogen (N) deposition may constrain soil phosphorus (P) and base cation availability in tropical forests, for which limited evidence has so far been available. In this study, we reported responses of soil inorganic nutrients to full factorial N and P treatments in three tropical forests differing in initial soil N status (an N-saturated old-growth forest and two less-N-rich younger forests). Responses of microbial biomass, annual litterfall production and nutrient input were also monitored. Results showed that N treatments decreased soil inorganic nutrients (except N) in all three forests, but the underlying mechanisms varied among forests: through inhibition of litter decomposition in the old-growth forest and through Al(3+) replacement of Ca(2+) in the two younger forests. In contrast, besides a large increase in soil available P, P treatments induced 60%, 50%, and 26% increases in the sum of exchangeable (K(+)+Ca(2+)+Mg(2+)) in the old-growth and the two younger forests, respectively. These positive effects of P were closely related to P-stimulated microbial biomass and litter nutrient input, implying possible stimulation of nutrient return. Our results suggest that N deposition may result in decreases in soil inorganic nutrients (except N) and that P addition can enhance soil inorganic nutrients to support ecosystem processes in these tropical forests.

  2. Phosphate addition enhanced soil inorganic nutrients to a large extent in three tropical forests

    PubMed Central

    Zhu, Feifei; Lu, Xiankai; Liu, Lei; Mo, Jiangming

    2015-01-01

    Elevated nitrogen (N) deposition may constrain soil phosphorus (P) and base cation availability in tropical forests, for which limited evidence has so far been available. In this study, we reported responses of soil inorganic nutrients to full factorial N and P treatments in three tropical forests differing in initial soil N status (an N-saturated old-growth forest and two less-N-rich younger forests). Responses of microbial biomass, annual litterfall production and nutrient input were also monitored. Results showed that N treatments decreased soil inorganic nutrients (except N) in all three forests, but the underlying mechanisms varied among forests: through inhibition of litter decomposition in the old-growth forest and through Al3+ replacement of Ca2+ in the two younger forests. In contrast, besides a large increase in soil available P, P treatments induced 60%, 50%, and 26% increases in the sum of exchangeable (K++Ca2++Mg2+) in the old-growth and the two younger forests, respectively. These positive effects of P were closely related to P-stimulated microbial biomass and litter nutrient input, implying possible stimulation of nutrient return. Our results suggest that N deposition may result in decreases in soil inorganic nutrients (except N) and that P addition can enhance soil inorganic nutrients to support ecosystem processes in these tropical forests. PMID:25605567

  3. Impact of biogenic very short-lived bromine on the Antarctic ozone hole during the 21st century

    NASA Astrophysics Data System (ADS)

    Fernandez, Rafael Pedro; Kinnison, Douglas E.; Lamarque, Jean-Francois; Tilmes, Simone; Saiz-Lopez, Alfonso

    2017-04-01

    Active bromine released from the photochemical decomposition of biogenic very short-lived bromocarbons (VSLBr) enhances stratospheric ozone depletion. Based on a dual set of 1960-2100 coupled chemistry-climate simulations (i.e. with and without VSLBr), we show that the maximum Antarctic ozone hole depletion increases by up to 14% when natural VSLBr are considered, in better agreement with ozone observations. The impact of the additional 5 pptv VSLBr on Antarctic ozone is most evident in the periphery of the ozone hole, producing an expansion of the ozone hole area of 5 million km2, which is equivalent in magnitude to the recently estimated Antarctic ozone healing due to the implementation of the Montreal Protocol. We find that the inclusion of VSLBr in CAM-Chem does not introduce a significant delay of the modelled ozone return date to October 1980 levels, but instead affects the depth and duration of the simulated ozone hole. Our analysis further shows that total bromine-catalysed ozone destruction in the lower stratosphere surpasses that of chlorine by year 2070, and indicates that natural VSLBr chemistry would dominate Antarctic ozone seasonality before the end of the 21st century. This work suggests a large influence of biogenic bromine on the future Antarctic ozone layer.

  4. Feature discrimination/identification based upon SAR return variations

    NASA Technical Reports Server (NTRS)

    Rasco, W. A., Sr.; Pietsch, R.

    1978-01-01

    The look-to-look variation statistics of the returns recorded in flight by a digital, real-time SAR system are analyzed. The determination that the look-to-look variations in returns from different classes carry information content unique to those classes was illustrated by a model based on four variants derived from the four-look in-flight SAR data under study. The model was limited to four classes of returns: mowed grass on an athletic field, rough unmowed grass and weeds on a large vacant field, young fruit trees in a large orchard, and metal mobile homes and storage buildings in a large mobile home park. The data population, in excess of 1000 returns, represented over 250 individual pixels from the four classes. The multivariate discriminant model operated on the set of returns for each pixel and assigned that pixel to one of the four classes, based on the target variants and the probability distribution functions of the four variants for each class.
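
    A small sketch of a four-class discriminant over per-pixel variant features, in the spirit of the multivariate discriminant model described above. The class means, noise level, and sample counts below are invented; the original model operated on in-flight SAR returns, not synthetic data.

      # Sketch: assign pixels to four terrain classes from four look-to-look
      # variant features using linear discriminant analysis on synthetic data.
      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

      rng = np.random.default_rng(0)
      classes = ["mowed grass", "rough grass/weeds", "orchard", "mobile homes"]
      means = np.array([[0.2, 0.1, 0.15, 0.1],
                        [0.5, 0.4, 0.45, 0.4],
                        [0.8, 0.7, 0.75, 0.6],
                        [1.5, 1.3, 1.40, 1.2]])   # per-class mean of the 4 variants

      X = np.vstack([m + 0.15 * rng.standard_normal((250, 4)) for m in means])
      y = np.repeat(np.arange(4), 250)

      clf = LinearDiscriminantAnalysis().fit(X, y)
      pixel = np.array([[0.78, 0.69, 0.71, 0.62]])   # one pixel's four variants
      print("assigned class:", classes[int(clf.predict(pixel)[0])])
      print("training accuracy: %.3f" % clf.score(X, y))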

  5. Decompositions of large-scale biological systems based on dynamical properties.

    PubMed

    Soranzo, Nicola; Ramezani, Fahimeh; Iacono, Giovanni; Altafini, Claudio

    2012-01-01

    Given a large-scale biological network represented as an influence graph, in this article we investigate possible decompositions of the network aimed at highlighting specific dynamical properties. The first decomposition we study consists in finding a maximal directed acyclic subgraph of the network, which dynamically corresponds to searching for a maximal open-loop subsystem of the given system. Another dynamical property investigated is strong monotonicity. We propose two methods to deal with this property, both aimed at decomposing the system into strongly monotone subsystems, but with different structural characteristics: one method tends to produce a single large strongly monotone component, while the other typically generates a set of smaller disjoint strongly monotone subsystems. Original heuristics for the methods investigated are described in the article.

  6. Dynamic Load Balancing Based on Constrained K-D Tree Decomposition for Parallel Particle Tracing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Jiang; Guo, Hanqi; Yuan, Xiaoru

    Particle tracing is a fundamental technique in flow field data visualization. In this work, we present a novel dynamic load balancing method for parallel particle tracing. Specifically, we employ a constrained k-d tree decomposition approach to dynamically redistribute tasks among processes. Each process is initially assigned a regularly partitioned block along with a duplicated ghost layer under the memory limit. During particle tracing, the k-d tree decomposition is dynamically performed by constraining the cutting planes to the overlap range of duplicated data. This ensures that each process is reassigned particles as evenly as possible, while the newly assigned particles for a process are always located in its block. Results show good load balance and high efficiency of our method.
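
    A simplified sketch of the k-d tree idea above: recursively split particle positions at coordinate medians so each sub-domain receives roughly the same number of particles. The paper's constraint that cutting planes stay inside duplicated ghost layers, and the dynamic MPI redistribution, are not modeled here; the data are random.

      # Sketch: recursive k-d tree decomposition of particle positions so that
      # each of 2^depth sub-domains receives (nearly) the same particle count.
      import numpy as np

      def kd_partition(points, depth, axis=0):
          """Return a list of index arrays, one per leaf of the k-d tree."""
          idx = np.arange(len(points))
          return _split(points, idx, depth, axis)

      def _split(points, idx, depth, axis):
          if depth == 0:
              return [idx]
          order = np.argsort(points[idx, axis])
          half = len(idx) // 2                         # median split
          left, right = idx[order[:half]], idx[order[half:]]
          next_axis = (axis + 1) % points.shape[1]
          return (_split(points, left, depth - 1, next_axis) +
                  _split(points, right, depth - 1, next_axis))

      rng = np.random.default_rng(0)
      particles = rng.random((100000, 3))              # positions in a unit box
      leaves = kd_partition(particles, depth=4)        # 16 sub-domains
      counts = [len(leaf) for leaf in leaves]
      print("particles per sub-domain: min %d, max %d" % (min(counts), max(counts)))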

  7. Numeric Modified Adomian Decomposition Method for Power System Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dimitrovski, Aleksandar D; Simunovic, Srdjan; Pannala, Sreekanth

    This paper investigates the applicability of the numeric Wazwaz El Sayed modified Adomian Decomposition Method (WES-ADM) for time domain simulation of power systems. WES-ADM is a numerical method based on a modified Adomian decomposition (ADM) technique. WES-ADM is a numerical approximation method for the solution of nonlinear ordinary differential equations. The nonlinear terms in the differential equations are approximated using Adomian polynomials. In this paper WES-ADM is applied to time domain simulations of multimachine power systems. The WECC 3-generator, 9-bus system and the IEEE 10-generator, 39-bus system have been used to test the applicability of the approach. Several fault scenarios have been tested. It has been found that the proposed approach is faster than the trapezoidal method with comparable accuracy.
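
    A minimal symbolic sketch of the standard Adomian decomposition method for a scalar nonlinear ODE, to illustrate the Adomian-polynomial approximation of the nonlinear term mentioned above. This is not the WES-ADM power-system code; the test equation is chosen only because its exact solution is known.

      # Standard ADM sketch for y' = -y^2, y(0) = 1 (exact solution 1/(1+t)).
      # The nonlinear term is expanded in Adomian polynomials A_n and each
      # correction is y_{n+1} = -int_0^t A_n dt'.
      import sympy as sp

      t, lam = sp.symbols('t lambda')
      N_terms = 6

      def adomian_polynomials(y_terms, nonlinearity):
          """A_n = 1/n! d^n/dlam^n N(sum_k lam^k y_k) evaluated at lam = 0."""
          series = sum(lam**k * yk for k, yk in enumerate(y_terms))
          return [sp.diff(nonlinearity(series), lam, n).subs(lam, 0) / sp.factorial(n)
                  for n in range(len(y_terms))]

      y_terms = [sp.Integer(1)]                 # y_0 from the initial condition
      for n in range(N_terms - 1):
          A = adomian_polynomials(y_terms, lambda u: u**2)
          y_next = -sp.integrate(A[n], (t, 0, t))
          y_terms.append(sp.expand(y_next))

      approx = sp.expand(sum(y_terms))
      print("ADM partial sum :", approx)         # 1 - t + t^2 - t^3 + ...
      print("exact series    :", sp.series(1/(1 + t), t, 0, N_terms).removeO())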

  8. A biorthogonal decomposition for the identification and simulation of non-stationary and non-Gaussian random fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zentner, I.; Ferré, G.; Poirion, F.

    2016-06-01

    In this paper, a new method for the identification and simulation of non-Gaussian and non-stationary stochastic fields given a database is proposed. It is based on two successive biorthogonal decompositions aiming at representing spatio–temporal stochastic fields. The proposed double expansion allows to build the model even in the case of large-size problems by separating the time, space and random parts of the field. A Gaussian kernel estimator is used to simulate the high dimensional set of random variables appearing in the decomposition. The capability of the method to reproduce the non-stationary and non-Gaussian features of random phenomena is illustrated by applications to earthquakes (seismic ground motion) and sea states (wave heights).

  9. Impact of litter quantity on the soil bacteria community during the decomposition of Quercus wutaishanica litter.

    PubMed

    Zeng, Quanchao; Liu, Yang; An, Shaoshan

    2017-01-01

    The forest ecosystem is the main component of terrestrial ecosystems. The global climate and the functions and processes of soil microbes in the ecosystem are all influenced by litter decomposition. The effects of litter decomposition on the abundance of soil microorganisms remain unknown. Here, we analyzed soil bacterial communities during the litter decomposition process in an incubation experiment under treatment with different litter quantities based on annual litterfall data (normal quantity, 200 g/(m2·yr); double quantity, 400 g/(m2·yr); and control, no litter). The results showed that litter quantity had significant effects on soil carbon fractions, nitrogen fractions, and bacterial community compositions, but significant differences were not found in the soil bacterial diversity. The normal litter quantity enhanced the relative abundance of Actinobacteria and Firmicutes and reduced the relative abundance of Bacteroidetes, Planctomycetes and Nitrospirae. The Beta-, Gamma-, and Deltaproteobacteria were significantly less abundant in the normal quantity litter addition treatment, and were subsequently more abundant in the double quantity litter addition treatment. The bacterial communities transitioned from Proteobacteria-dominant (Beta-, Gamma-, and Delta) to Actinobacteria-dominant during the decomposition of the normal quantity of litter. A cluster analysis showed that the double litter treatment and the control had similar bacterial community compositions. These results suggested that the double quantity litter limited the shift of the soil bacterial community. Our results indicate that litter decomposition alters bacterial dynamics under the accumulation of litter during the vegetation restoration process, which provides important guidelines for the management of forest ecosystems.

  10. Returning to the Moon: Building the Systems Engineering Base for Successful Science Missions

    NASA Astrophysics Data System (ADS)

    Eppler, D.; Young, K.; Bleacher, J.; Klaus, K.; Barker, D.; Evans, C.; Tewksbury, B.; Schmitt, H.; Hurtado, J.; Deans, M.; Yingst, A.; Spudis, P.; Bell, E.; Skinner, J.; Cohen, B.; Head, J.

    2018-04-01

    Enabling science return on future lunar missions will require coordination between the science community, design engineers, and mission operators. Our chapter is based on developing science-based systems engineering and operations requirements.

  11. Distributed Damage Estimation for Prognostics based on Structural Model Decomposition

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew; Bregon, Anibal; Roychoudhury, Indranil

    2011-01-01

    Model-based prognostics approaches capture system knowledge in the form of physics-based models of components, and how they fail. These methods consist of a damage estimation phase, in which the health state of a component is estimated, and a prediction phase, in which the health state is projected forward in time to determine end of life. However, the damage estimation problem is often multi-dimensional and computationally intensive. We propose a model decomposition approach adapted from the diagnosis community, called possible conflicts, in order to both improve the computational efficiency of damage estimation, and formulate a damage estimation approach that is inherently distributed. Local state estimates are combined into a global state estimate from which prediction is performed. Using a centrifugal pump as a case study, we perform a number of simulation-based experiments to demonstrate the approach.

  12. Genetic code, hamming distance and stochastic matrices.

    PubMed

    He, Matthew X; Petoukhov, Sergei V; Ricci, Paolo E

    2004-09-01

    In this paper we use the Gray code representation of the genetic code C=00, U=10, G=11 and A=01 (C pairs with G, A pairs with U) to generate a sequence of genetic code-based matrices. In connection with these code-based matrices, we use the Hamming distance to generate a sequence of numerical matrices. We then further investigate the properties of the numerical matrices and show that they are doubly stochastic and symmetric. We determine the frequency distributions of the Hamming distances, building blocks of the matrices, decomposition and iterations of matrices. We present an explicit decomposition formula for the genetic code-based matrix in terms of permutation matrices, which provides a hypercube representation of the genetic code. It is also observed that there is a Hamiltonian cycle in a genetic code-based hypercube.
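
    A small sketch reproducing the kind of construction described above: encode the four bases with the stated Gray code, build the 64-codon Hamming-distance matrix, and check that it is symmetric with equal row and column sums (doubly stochastic after normalization). This follows the description in spirit and is not the paper's exact sequence of matrices.

      # Sketch: Gray-code based Hamming-distance matrix for all 64 codons
      # (C=00, U=10, G=11, A=01), with a symmetry / doubly-stochastic check.
      import itertools
      import numpy as np

      code = {'C': '00', 'U': '10', 'G': '11', 'A': '01'}
      codons = [''.join(c) for c in itertools.product('CUGA', repeat=3)]
      bits = [''.join(code[b] for b in codon) for codon in codons]   # 6-bit strings

      def hamming(x, y):
          return sum(a != b for a, b in zip(x, y))

      H = np.array([[hamming(x, y) for y in bits] for x in bits], dtype=float)

      print("symmetric:", np.allclose(H, H.T))
      row_sums, col_sums = H.sum(axis=1), H.sum(axis=0)
      print("all row sums equal:", np.allclose(row_sums, row_sums[0]))
      print("all column sums equal:", np.allclose(col_sums, col_sums[0]))
      D = H / row_sums[0]            # normalized: rows and columns each sum to 1
      print("doubly stochastic:",
            np.allclose(D.sum(axis=0), 1) and np.allclose(D.sum(axis=1), 1))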

  13. Model-Based Speech Signal Coding Using Optimized Temporal Decomposition for Storage and Broadcasting Applications

    NASA Astrophysics Data System (ADS)

    Athaudage, Chandranath R. N.; Bradley, Alan B.; Lech, Margaret

    2003-12-01

    A dynamic programming-based optimization strategy for a temporal decomposition (TD) model of speech and its application to low-rate speech coding in storage and broadcasting is presented. In previous work with the spectral stability-based event localizing (SBEL) TD algorithm, the event localization was performed based on a spectral stability criterion. Although this approach gave reasonably good results, there was no assurance on the optimality of the event locations. In the present work, we have optimized the event localizing task using a dynamic programming-based optimization strategy. Simulation results show that an improved TD model accuracy can be achieved. A methodology of incorporating the optimized TD algorithm within the standard MELP speech coder for the efficient compression of speech spectral information is also presented. The performance evaluation results revealed that the proposed speech coding scheme achieves 50%-60% compression of speech spectral information with negligible degradation in the decoded speech quality.

  14. Trace Norm Regularized CANDECOMP/PARAFAC Decomposition With Missing Data.

    PubMed

    Liu, Yuanyuan; Shang, Fanhua; Jiao, Licheng; Cheng, James; Cheng, Hong

    2015-11-01

    In recent years, low-rank tensor completion (LRTC) problems have received a significant amount of attention in computer vision, data mining, and signal processing. The existing trace norm minimization algorithms for iteratively solving LRTC problems involve multiple singular value decompositions of very large matrices at each iteration. Therefore, they suffer from high computational cost. In this paper, we propose a novel trace norm regularized CANDECOMP/PARAFAC decomposition (TNCP) method for simultaneous tensor decomposition and completion. We first formulate a factor matrix rank minimization model by deducing the relation between the rank of each factor matrix and the mode-n rank of a tensor. Then, we introduce a tractable relaxation of our rank function, and then achieve a convex combination problem of much smaller-scale matrix trace norm minimization. Finally, we develop an efficient algorithm based on the alternating direction method of multipliers to solve our problem. The promising experimental results on synthetic and real-world data validate the effectiveness of our TNCP method. Moreover, TNCP is significantly faster than the state-of-the-art methods and scales to larger problems.

  15. Calculation of excitation energies from the CC2 linear response theory using Cholesky decomposition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baudin, Pablo; Marín, José Sánchez

    2014-03-14

    A new implementation of the approximate coupled cluster singles and doubles CC2 linear response model is reported. It employs a Cholesky decomposition of the two-electron integrals that significantly reduces the computational cost and the storage requirements of the method compared to standard implementations. Our algorithm also exploits a partitioning form of the CC2 equations which reduces the dimension of the problem and avoids the storage of doubles amplitudes. We present calculations of excitation energies of benzene using a hierarchy of basis sets and compare the results with conventional CC2 calculations. The reduction of the scaling is evaluated, as well as the effect of the Cholesky decomposition parameter on the quality of the results. The new algorithm is used to perform a complete-basis-set extrapolation investigation on the spectroscopically interesting benzylallene conformers. A set of calculations on medium-sized molecules is carried out to check the dependence of the accuracy of the results on the decomposition thresholds. Moreover, CC2 singlet excitation energies of the free base porphin are also presented.

  16. Parallel processing methods for space based power systems

    NASA Technical Reports Server (NTRS)

    Berry, F. C.

    1993-01-01

    This report presents a method for doing load-flow analysis of a power system by using a decomposition approach. The power system for the Space Shuttle is used as a basis to build a model for the load-flow analysis. To test the decomposition method for doing load-flow analysis, simulations were performed on power systems of 16, 25, 34, 43, 52, 61, 70, and 79 nodes. Each of the power systems was divided into subsystems and simulated under steady-state conditions. The results from these tests have been found to be as accurate as tests performed using a standard serial simulator. The division of the power systems into different subsystems was done by assigning a processor to each area. There were 13 transputers available; therefore, up to 13 different subsystems could be simulated at the same time. This report has preliminary results for a load-flow analysis using a decomposition principle. The report shows that the decomposition algorithm for load-flow analysis is well suited for parallel processing and provides increases in the speed of execution.

  17. Density functional theory study of HfCl4, ZrCl4, and Al(CH3)3 decomposition on hydroxylated SiO2: Initial stage of high-k atomic layer deposition

    NASA Astrophysics Data System (ADS)

    Jeloaica, L.; Estève, A.; Djafari Rouhani, M.; Estève, D.

    2003-07-01

    The initial stage of atomic layer deposition of HfO2, ZrO2, and Al2O3 high-k films, i.e., the decomposition of HfCl4, ZrCl4, and Al(CH3)3 precursor molecules on an OH-terminated SiO2 surface, is investigated within density functional theory. The energy barriers are determined using artificial activation of vibrational normal modes. For all precursors, reaction proceeds through the formation of intermediate complexes that have equivalent formation energies (~ -0.45 eV), and results in HCl and CH4 formation with activation energies of 0.88, 0.91, and 1.04 eV for Hf-, Zr-, and Al-based precursors, respectively. The reaction product of Al(CH3)3 decomposition is found to be more stable (by -1.45 eV) than the chemisorbed intermediate complex compared to the endothermic decomposition of HfCl4 and ZrCl4 chemisorbed precursors (0.26 and 0.29 eV, respectively).

  18. A structural model decomposition framework for systems health management

    NASA Astrophysics Data System (ADS)

    Roychoudhury, I.; Daigle, M.; Bregon, A.; Pulido, B.

    Systems health management (SHM) is an important set of technologies aimed at increasing system safety and reliability by detecting, isolating, and identifying faults; and predicting when the system reaches end of life (EOL), so that appropriate fault mitigation and recovery actions can be taken. Model-based SHM approaches typically make use of global, monolithic system models for online analysis, which results in a loss of scalability and efficiency for large-scale systems. Improvement in scalability and efficiency can be achieved by decomposing the system model into smaller local submodels and operating on these submodels instead. In this paper, the global system model is analyzed offline and structurally decomposed into local submodels. We define a common model decomposition framework for extracting submodels from the global model. This framework is then used to develop algorithms for solving model decomposition problems for the design of three separate SHM technologies, namely, estimation (which is useful for fault detection and identification), fault isolation, and EOL prediction. We solve these model decomposition problems using a three-tank system as a case study.

  19. A Structural Model Decomposition Framework for Systems Health Management

    NASA Technical Reports Server (NTRS)

    Roychoudhury, Indranil; Daigle, Matthew J.; Bregon, Anibal; Pulido, Belamino

    2013-01-01

    Systems health management (SHM) is an important set of technologies aimed at increasing system safety and reliability by detecting, isolating, and identifying faults; and predicting when the system reaches end of life (EOL), so that appropriate fault mitigation and recovery actions can be taken. Model-based SHM approaches typically make use of global, monolithic system models for online analysis, which results in a loss of scalability and efficiency for large-scale systems. Improvement in scalability and efficiency can be achieved by decomposing the system model into smaller local submodels and operating on these submodels instead. In this paper, the global system model is analyzed offline and structurally decomposed into local submodels. We define a common model decomposition framework for extracting submodels from the global model. This framework is then used to develop algorithms for solving model decomposition problems for the design of three separate SHM technologies, namely, estimation (which is useful for fault detection and identification), fault isolation, and EOL prediction. We solve these model decomposition problems using a three-tank system as a case study.

  20. Role of Litter Turnover in Soil Quality in Tropical Degraded Lands of Colombia

    PubMed Central

    León, Juan D.; Osorio, Nelson W.

    2014-01-01

    Land degradation is the result of soil mismanagement that reduces soil productivity and environmental services. An alternative for improving degraded soils through reactivation of biogeochemical nutrient cycles (via litter production and decomposition) is the establishment of active restoration models using new forestry plantations, agroforestry, and silvopastoral systems. On the other hand, passive models of restoration consist of promoting natural successional processes with native plants. The objective of this review is to discuss the role of litter production and decomposition as a key strategy to reactivate biogeochemical nutrient cycles and thus improve soil quality in degraded land of the tropics. For this purpose, the results of different land restoration projects in Colombia are presented based on the dynamics of litter production, nutrient content, and decomposition. The results indicate that in only 6–13 years it is possible to detect improvements in soil properties due to litter fall and decomposition. Despite that, low soil nutrient availability, particularly of N and P, seems to be a major constraint on the reclamation of these fragile ecosystems. PMID:24696656

  1. Initial decomposition of the condensed-phase β-HMX under shock waves: molecular dynamics simulations.

    PubMed

    Ge, Ni-Na; Wei, Yong-Kai; Ji, Guang-Fu; Chen, Xiang-Rong; Zhao, Feng; Wei, Dong-Qing

    2012-11-26

    We have performed quantum-based multiscale simulations to study the initial chemical processes of condensed-phase octahydro-1,3,5,7-tetranitro-1,3,5,7-tetrazocine (HMX) under shock wave loading. A self-consistent charge density-functional tight-binding (SCC-DFTB) method was employed. The results show that the initial decomposition of shocked HMX is triggered by N-NO(2) bond breaking under low-velocity impact (8 km/s). As the shock velocity increases (11 km/s), the homolytic cleavage of the N-NO(2) bond is suppressed under high pressure, and C-H bond dissociation becomes the primary pathway for HMX decomposition in its early stages. It is accompanied by five-membered ring formation and hydrogen transfer from the CH(2) group to the -NO(2) group. Our simulations suggest that the initial chemical processes of shocked HMX depend on the impact velocity, providing new insights into the initial decomposition mechanism of HMX under shock loading at the atomistic level and having important implications for the understanding and development of energetic materials.

  2. A decomposition model and voxel selection framework for fMRI analysis to predict neural response of visual stimuli.

    PubMed

    Raut, Savita V; Yadav, Dinkar M

    2018-03-28

    This paper presents an fMRI signal analysis methodology using geometric mean curve decomposition (GMCD) and a mutual information-based voxel selection framework. Previously, fMRI signal analysis has been conducted using an empirical mean curve decomposition (EMCD) model and voxel selection on the raw fMRI signal. The former methodology loses the frequency component, while the latter suffers from signal redundancy. Both challenges are addressed by our methodology, in which the frequency component is retained by decomposing the raw fMRI signal using the geometric mean rather than the arithmetic mean, and the voxels are selected from the EMCD signal using GMCD components rather than the raw fMRI signal. The proposed methodologies are adopted for predicting the neural response. Experiments are conducted on the openly available fMRI data of six subjects, and comparisons are made with existing decomposition models and voxel selection frameworks. Subsequently, the effects of the number of selected voxels and the selection constraints are analyzed. The comparative results and the analysis demonstrate the superiority and the reliability of the proposed methodology.

  3. Facile synthesis of a mesoporous Co3O4 network for Li-storage via thermal decomposition of an amorphous metal complex.

    PubMed

    Wen, Wei; Wu, Jin-Ming; Cao, Min-Hua

    2014-11-07

    A facile strategy is developed for mass fabrication of porous Co3O4 networks via the thermal decomposition of an amorphous cobalt-based complex. At a low mass loading, the achieved porous Co3O4 network exhibits excellent performance for lithium storage, with a high capacity of 587 mA h g(-1) after 500 cycles at a current density of 1000 mA g(-1).

  4. Synergies from using higher order symplectic decompositions both for ordinary differential equations and quantum Monte Carlo methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matuttis, Hans-Georg; Wang, Xiaoxing

    Decomposition methods of the Suzuki-Trotter type of various orders have been derived in different fields. Applying them to both classical ordinary differential equations (ODEs) and quantum systems allows one to judge their effectiveness and gives new insights into many-body quantum mechanics, where reference data are scarce. Further, based on data for a 6 × 6 system, we conclude that sampling with sign (the minus-sign problem) is probably detrimental to the accuracy of fermionic simulations with determinant algorithms.

  5. Application of reiteration of Hankel singular value decomposition in quality control

    NASA Astrophysics Data System (ADS)

    Staniszewski, Michał; Skorupa, Agnieszka; Boguszewicz, Łukasz; Michalczuk, Agnieszka; Wereszczyński, Kamil; Wicher, Magdalena; Konopka, Marek; Sokół, Maria; Polański, Andrzej

    2017-07-01

    Medical centres are obliged to store past medical records, including the results of quality assurance (QA) tests of the medical equipment, which is especially useful in checking the reproducibility of medical devices and procedures. Analysis of multivariate time series is an important part of quality control of NMR data. In this work we propose an anomaly detection tool based on a reiteration of the Hankel singular value decomposition method. The presented method was compared with external software, and the authors obtained comparable results.

  6. Application of thermogravimetric studies for optimization of lithium hexafluorophosphate production

    NASA Astrophysics Data System (ADS)

    Smagin, A. A.; Matyukha, V. A.; Korobtsev, V. P.

    Lithium hexafluorophosphate, isolated from anhydrous hydrogen fluoride solution by decanting and filtering, is an adduct of composition LiPF6·HF. The dynamics of HF removal from LiPF6 through thermal decomposition of LiPF6·HF was studied by thermogravimetric investigations. Based on the experimental data, the constants entering into equations of the form C = C0·exp(t·K0·exp(-E/RT)) were calculated, describing the thermal decomposition processes of LiPF6·HF and LiPF6.
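
    A short sketch evaluating a rate law of the form quoted above for the residual HF content. The parameter values are placeholders, not the study's fitted constants, and the decay sign convention (a negative K0, equivalent to C = C0·exp(-k·t) with an Arrhenius rate k) is an assumption made for illustration.

      # Sketch: evaluate C = C0*exp(t*K0*exp(-E/RT)) with placeholder constants.
      import numpy as np

      R = 8.314          # J/(mol K)

      def residual_fraction(t_s, T_K, K0=-5.0e6, E=8.0e4, C0=1.0):
          """C(t) = C0 * exp(t * K0 * exp(-E/(R*T))); K0 < 0 gives decay."""
          k_eff = K0 * np.exp(-E / (R * T_K))
          return C0 * np.exp(t_s * k_eff)

      t = np.linspace(0.0, 3600.0, 7)        # one hour, sampled every 10 minutes
      for T in (330.0, 360.0, 390.0):        # drying temperatures in K (illustrative)
          frac = residual_fraction(t, T)
          print("T = %.0f K -> residual HF fraction after 1 h: %.3f" % (T, frac[-1]))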

  7. SOI layout decomposition for double patterning lithography on high-performance computer platforms

    NASA Astrophysics Data System (ADS)

    Verstov, Vladimir; Zinchenko, Lyudmila; Makarchuk, Vladimir

    2014-12-01

    In this paper, silicon-on-insulator layout decomposition algorithms for double patterning lithography on high-performance computing platforms are discussed. Our approach is based on the use of a contradiction graph and a modified concurrent breadth-first search algorithm. We evaluate our technique on the 45 nm Nangate Open Cell Library, including non-Manhattan geometry. Experimental results show that our soft computing algorithms decompose the layout successfully and that the minimal distance between polygons in the layout is increased.

  8. Dynamic load balancing algorithm for molecular dynamics based on Voronoi cells domain decompositions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fattebert, J.-L.; Richards, D.F.; Glosli, J.N.

    2012-12-01

    We present a new algorithm for automatic parallel load balancing in classical molecular dynamics. It assumes a spatial domain decomposition of particles into Voronoi cells. It is a gradient method which attempts to minimize a cost function by displacing Voronoi sites associated with each processor/sub-domain along steepest descent directions. Excellent load balance has been obtained for quasi-2D and 3D practical applications, with up to 440·10^6 particles on 65,536 MPI tasks.

  9. Integration of progressive hedging and dual decomposition in stochastic integer programs

    DOE PAGES

    Watson, Jean-Paul; Guo, Ge; Hackebeil, Gabriel; ...

    2015-04-07

    We present a method for integrating the Progressive Hedging (PH) algorithm and the Dual Decomposition (DD) algorithm of Carøe and Schultz for stochastic mixed-integer programs. Based on the correspondence between lower bounds obtained with PH and DD, a method to transform weights from PH to Lagrange multipliers in DD is found. Fast progress in early iterations of PH speeds up convergence of DD to an exact solution. As a result, we report computational results on server location and unit commitment instances.

  10. Proceedings for the ICASE Workshop on Heterogeneous Boundary Conditions

    NASA Technical Reports Server (NTRS)

    Perkins, A. Louise; Scroggs, Jeffrey S.

    1991-01-01

    Domain Decomposition is a complex problem with many interesting aspects. The choice of decomposition can be made based on many different criteria, and the choices of interface and internal boundary conditions are numerous. The various regions under study may have different dynamical balances, indicating that different physical processes are dominating the flow in these regions. This conference was called in recognition of the need to more clearly define the nature of these complex problems. These proceedings are a collection of the presentations and the discussion groups.

  11. SU-F-J-138: An Extension of PCA-Based Respiratory Deformation Modeling Via Multi-Linear Decomposition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Iliopoulos, AS; Sun, X; Pitsianis, N

    Purpose: To address and lift the limited degree of freedom (DoF) of globally bilinear motion components such as those based on principal components analysis (PCA), for encoding and modeling volumetric deformation motion. Methods: We provide a systematic approach to obtaining a multi-linear decomposition (MLD) and associated motion model from deformation vector field (DVF) data. We had previously introduced MLD for capturing multi-way relationships between DVF variables, without being restricted by the bilinear component format of PCA-based models. PCA-based modeling is commonly used for encoding patient-specific deformation as per planning 4D-CT images, and aiding on-board motion estimation during radiotherapy. However, the bilinear space-time decomposition inherently limits the DoF of such models by the small number of respiratory phases. While this limit is not reached in model studies using analytical or digital phantoms with low-rank motion, it compromises modeling power in the presence of relative motion, asymmetries and hysteresis, etc, which are often observed in patient data. Specifically, a low-DoF model will spuriously couple incoherent motion components, compromising its adaptability to on-board deformation changes. By the multi-linear format of extracted motion components, MLD-based models can encode higher-DoF deformation structure. Results: We conduct mathematical and experimental comparisons between PCA- and MLD-based models. A set of temporally-sampled analytical trajectories provides a synthetic, high-rank DVF; trajectories correspond to respiratory and cardiac motion factors, including different relative frequencies and spatial variations. Additionally, a digital XCAT phantom is used to simulate a lung lesion deforming incoherently with respect to the body, which adheres to a simple respiratory trend. In both cases, coupling of incoherent motion components due to a low model DoF is clearly demonstrated. Conclusion: Multi-linear decomposition can enable decoupling of distinct motion factors in high-rank DVF measurements. This may improve motion model expressiveness and adaptability to on-board deformation, aiding model-based image reconstruction for target verification. NIH Grant No. R01-184173.

  12. A fast identification algorithm for Box-Cox transformation based radial basis function neural network.

    PubMed

    Hong, Xia

    2006-07-01

    In this letter, a Box-Cox transformation-based radial basis function (RBF) neural network is introduced using the RBF neural network to represent the transformed system output. Initially, a fixed and moderate-sized RBF model base is derived based on a rank-revealing orthogonal matrix triangularization (QR decomposition). Then a new fast identification algorithm is introduced using the Gauss-Newton algorithm to derive the required Box-Cox transformation, based on a maximum likelihood estimator. The main contribution of this letter is to explore the special structure of the proposed RBF neural network for computational efficiency by utilizing the inverse of matrix block decomposition lemma. Finally, the Box-Cox transformation-based RBF neural network, with good generalization and sparsity, is identified based on the derived optimal Box-Cox transformation and a D-optimality-based orthogonal forward regression algorithm. The proposed algorithm and its efficacy are demonstrated with an illustrative example in comparison with support vector machine regression.
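
    A small sketch of the Box-Cox step on which the model above rests: estimate the transformation parameter by maximum likelihood with scipy and fit a simple model to the transformed output. The RBF network, rank-revealing QR model-base selection, and D-optimality regression of the letter are not reproduced; the data are synthetic and an ordinary least-squares fit stands in for the RBF model.

      # Sketch of the Box-Cox step only, on synthetic data.
      import numpy as np
      from scipy import stats
      from scipy.special import inv_boxcox

      rng = np.random.default_rng(0)
      x = np.linspace(0.1, 3.0, 200)
      y = np.exp(1.5 * x + 0.05 * rng.standard_normal(x.size))   # positive, skewed output

      y_bc, lam = stats.boxcox(y)            # maximum-likelihood Box-Cox transform
      print("estimated Box-Cox lambda: %.3f" % lam)

      # The transformed output is (nearly) linear in x here, so an OLS fit
      # stands in for the RBF model of the transformed system output.
      coef = np.polyfit(x, y_bc, deg=1)
      y_bc_hat = np.polyval(coef, x)
      print("fit residual std on transformed scale: %.4f" % np.std(y_bc - y_bc_hat))

      # Inverse transform back to the original output scale.
      y_pred = inv_boxcox(y_bc_hat, lam)
      print("max relative error on original scale: %.3f" % np.max(np.abs(y_pred - y) / y))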

  13. Credit Where Credit Is Due: An Approach to Education Returns Based on Shapley Values

    ERIC Educational Resources Information Center

    Barakat, Bilal; Crespo Cuaresma, Jesus

    2017-01-01

    We propose the use of methods based on the Shapley value to assess the fact that private returns to lower levels of educational attainment should be credited with part of the returns from higher attainment levels, since achieving primary education is a necessary condition to enter secondary and tertiary educational levels. We apply the proposed…
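
    A toy sketch of the Shapley-value idea described above: treat completed education levels as "players" and split the total wage premium across primary, secondary, and tertiary so that lower levels are credited for being prerequisites. The wage premiums are invented, and the sequential-prerequisite value function is one possible formalization, not necessarily the authors'.

      # Toy Shapley-value sketch for crediting returns to education levels.
      # v(S) counts a level only if all lower levels are also completed.
      from itertools import permutations

      levels = ["primary", "secondary", "tertiary"]
      premium = {"primary": 200.0, "secondary": 350.0, "tertiary": 650.0}  # invented

      def v(coalition):
          """Premium realized by a set of completed levels (prerequisites enforced)."""
          total, ok = 0.0, True
          for lvl in levels:                      # walk levels in order
              ok = ok and (lvl in coalition)
              if ok:
                  total += premium[lvl]
          return total

      shapley = {lvl: 0.0 for lvl in levels}
      perms = list(permutations(levels))
      for order in perms:
          completed = set()
          for lvl in order:
              before = v(completed)
              completed.add(lvl)
              shapley[lvl] += (v(completed) - before) / len(perms)

      print("naive marginal premiums :", premium)
      print("Shapley-credited returns:", {k: round(val, 1) for k, val in shapley.items()})
      print("total preserved:", round(sum(shapley.values()), 1), "==", round(v(set(levels)), 1))

    With these invented numbers, the Shapley allocation credits primary education with more than its direct premium and tertiary education with less, which is exactly the crediting argument the abstract makes.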

  14. Analysis of a parallelized nonlinear elliptic boundary value problem solver with application to reacting flows

    NASA Technical Reports Server (NTRS)

    Keyes, David E.; Smooke, Mitchell D.

    1987-01-01

    A parallelized finite difference code based on the Newton method for systems of nonlinear elliptic boundary value problems in two dimensions is analyzed in terms of computational complexity and parallel efficiency. An approximate cost function depending on 15 dimensionless parameters is derived for algorithms based on stripwise and boxwise decompositions of the domain and a one-to-one assignment of the strip or box subdomains to processors. The sensitivity of the cost functions to the parameters is explored in regions of parameter space corresponding to model small-order systems with inexpensive function evaluations and also a coupled system of nineteen equations with very expensive function evaluations. The algorithm was implemented on the Intel Hypercube, and some experimental results for the model problems with stripwise decompositions are presented and compared with the theory. In the context of computational combustion problems, multiprocessors of either message-passing or shared-memory type may be employed with stripwise decompositions to realize speedup of O(n), where n is mesh resolution in one direction, for reasonable n.

  15. Speech rhythm analysis with decomposition of the amplitude envelope: characterizing rhythmic patterns within and across languages.

    PubMed

    Tilsen, Sam; Arvaniti, Amalia

    2013-07-01

    This study presents a method for analyzing speech rhythm using empirical mode decomposition of the speech amplitude envelope, which allows for extraction and quantification of syllabic- and supra-syllabic time-scale components of the envelope. The method of empirical mode decomposition of a vocalic energy amplitude envelope is illustrated in detail, and several types of rhythm metrics derived from this method are presented. Spontaneous speech extracted from the Buckeye Corpus is used to assess the effect of utterance length on metrics, and it is shown how metrics representing variability in the supra-syllabic time-scale components of the envelope can be used to identify stretches of speech with targeted rhythmic characteristics. Furthermore, the envelope-based metrics are used to characterize cross-linguistic differences in speech rhythm in the UC San Diego Speech Lab corpus of English, German, Greek, Italian, Korean, and Spanish speech elicited in read sentences, read passages, and spontaneous speech. The envelope-based metrics exhibit significant effects of language and elicitation method that argue for a nuanced view of cross-linguistic rhythm patterns.

  16. Integrating a Genetic Algorithm Into a Knowledge-Based System for Ordering Complex Design Processes

    NASA Technical Reports Server (NTRS)

    Rogers, James L.; McCulley, Collin M.; Bloebaum, Christina L.

    1996-01-01

    The design cycle associated with large engineering systems requires an initial decomposition of the complex system into design processes which are coupled through the transference of output data. Some of these design processes may be grouped into iterative subcycles. In analyzing or optimizing such a coupled system, it is essential to be able to determine the best ordering of the processes within these subcycles to reduce design cycle time and cost. Many decomposition approaches assume the capability is available to determine what design processes and couplings exist and what order of execution will be imposed during the design cycle. Unfortunately, this is often a complex problem and beyond the capabilities of a human design manager. A new feature, a genetic algorithm, has been added to DeMAID (Design Manager's Aid for Intelligent Decomposition) to allow the design manager to rapidly examine many different combinations of ordering processes in an iterative subcycle and to optimize the ordering based on cost, time, and iteration requirements. Two sample test cases are presented to show the effects of optimizing the ordering with a genetic algorithm.

  17. Regional income inequality model based on Theil index decomposition and weighted variance coefficient

    NASA Astrophysics Data System (ADS)

    Sitepu, H. R.; Darnius, O.; Tambunan, W. N.

    2018-03-01

    Regional income inequality is an important issue in the study of the economic development of a region. Rapid economic development may not be in accordance with people’s per capita income. Methods for measuring regional income inequality have been suggested by many experts. This research used the Theil index and the weighted variance coefficient to measure regional income inequality. Based on the Theil index, the decomposition of regional income into the productivity of the work force and its participation in regional income inequality can be presented as a linear relation. When the economic activity of sector j, the sectoral income value, and the work force rate are used, the work force productivity imbalance can be decomposed into sectoral and intra-sectoral components. Next, the weighted variation coefficient is defined for the revenue and productivity of the work force. From the square of the weighted variation coefficient, it was found that the decomposition of the regional revenue imbalance can be analyzed by determining how much each component contributes to the regional imbalance, which in this research was analyzed for nine sectors of economic activity.
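
    A small sketch of the Theil T index and its standard between/within-group decomposition over per-capita incomes, as one concrete instance of the decomposition discussed above. The regional data are synthetic, not the paper's sectoral figures.

      # Sketch: Theil T index and its between/within-region decomposition.
      import numpy as np

      def theil(y):
          mu = y.mean()
          return np.mean((y / mu) * np.log(y / mu))

      rng = np.random.default_rng(0)
      regions = {
          "region A": rng.lognormal(mean=1.0, sigma=0.3, size=400),
          "region B": rng.lognormal(mean=1.4, sigma=0.5, size=300),
          "region C": rng.lognormal(mean=0.8, sigma=0.2, size=300),
      }

      y_all = np.concatenate(list(regions.values()))
      mu = y_all.mean()

      within = between = 0.0
      for name, y in regions.items():
          share = y.sum() / y_all.sum()          # income share s_g of the region
          within += share * theil(y)             # s_g * T_g
          between += share * np.log(y.mean() / mu)

      print("overall Theil T : %.4f" % theil(y_all))
      print("within regions  : %.4f" % within)
      print("between regions : %.4f" % between)
      print("within + between: %.4f" % (within + between))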

  18. Coherent vorticity extraction in resistive drift-wave turbulence: Comparison of orthogonal wavelets versus proper orthogonal decomposition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Futatani, S.; Bos, W.J.T.; Del-Castillo-Negrete, Diego B.

    2011-01-01

    We assess two techniques for extracting coherent vortices out of turbulent flows: wavelet-based Coherent Vorticity Extraction (CVE) and Proper Orthogonal Decomposition (POD). The former decomposes the flow field into an orthogonal wavelet representation; subsequent thresholding of the coefficients splits the flow into organized coherent vortices with non-Gaussian statistics and a structureless, incoherent random part. POD is based on the singular value decomposition and decomposes the flow into basis functions that are optimal with respect to the energy retained in the ensemble average. Both techniques are applied to direct numerical simulation data of two-dimensional drift-wave turbulence governed by the Hasegawa-Wakatani equations, considering two limit cases: the quasi-hydrodynamic and quasi-adiabatic regimes. The results are compared in terms of compression rate, retained energy, retained enstrophy, and retained radial flux, together with the enstrophy spectrum and higher-order statistics.
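
    Both decompositions can be sketched in a few lines; the example below (Python, assuming NumPy and the PyWavelets package; wavelet choice, threshold rule, and array shapes are illustrative simplifications, not the paper's settings) computes POD modes by SVD of a snapshot matrix and performs a one-shot wavelet hard-thresholding split of a single vorticity field into coherent and incoherent parts. The actual CVE algorithm iterates the threshold estimate on the incoherent part, which is omitted here.

        import numpy as np
        import pywt  # PyWavelets, assumed available

        def pod_modes(omega, n_modes=10):
            """POD via SVD of snapshots omega with shape (n_snapshots, ny, nx)."""
            n_t = omega.shape[0]
            snapshots = omega.reshape(n_t, -1)        # one flattened field per row
            u, s, vt = np.linalg.svd(snapshots, full_matrices=False)
            modes = vt[:n_modes].reshape((n_modes,) + omega.shape[1:])
            energy = s ** 2 / np.sum(s ** 2)          # energy fraction per mode
            return modes, energy[:n_modes]

        def cve_split(field, wavelet="coif2", level=4):
            """One-shot wavelet hard-threshold split of a 2-D field into coherent
            and incoherent parts (the full CVE iterates the threshold estimate)."""
            coeffs = pywt.wavedec2(field, wavelet, level=level)
            arr, slices = pywt.coeffs_to_array(coeffs)
            thr = np.sqrt(2.0 * np.var(arr) * np.log(arr.size))  # Donoho-style threshold
            coherent_arr = pywt.threshold(arr, thr, mode="hard")
            coherent = pywt.waverec2(
                pywt.array_to_coeffs(coherent_arr, slices, output_format="wavedec2"),
                wavelet)
            coherent = coherent[: field.shape[0], : field.shape[1]]
            return coherent, field - coherent

    The fraction of wavelet coefficients surviving the threshold plays the role of the compression rate against which retained energy and enstrophy are judged.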

  19. Reactions catalyzed by haloporphyrins

    DOEpatents

    Ellis, P.E. Jr.; Lyons, J.E.

    1996-02-06

    The invention provides novel methods for the oxidation of hydrocarbons with oxygen-containing gas to form hydroxy-group containing compounds and for the decomposition of hydroperoxides to form hydroxy-group containing compounds. The catalysts used in the methods of the invention comprise transition metal complexes of a porphyrin ring having 1 to 12 halogen substituents on the porphyrin ring, at least one of said halogens being in a meso position and/or the catalyst containing no aryl group in a meso position. The catalyst compositions are prepared by halogenating a transition metal complex of a porphyrin. In one embodiment, a complex of a porphyrin with a metal whose porphyrin complexes are not active for oxidation of alkanes is halogenated, thereby to obtain a haloporphyrin complex of that metal, the metal is removed from the haloporphyrin complex to obtain the free base form of the haloporphyrin, and a metal such as iron whose porphyrin complexes are active for oxidation of alkanes and for the decomposition of alkyl hydroperoxides is complexed with the free base to obtain an active catalyst for oxidation of alkanes and decomposition of alkyl hydroperoxides.

  20. Haloporphyrins and their preparation and use as catalysts

    DOEpatents

    Ellis, Jr., Paul E.; Lyons, James E.

    1997-01-01

    The invention provides novel catalyst compositions, useful in the oxidation of hydrocarbons with air or oxygen to form hydroxy-group containing compounds and in the decomposition of hydroperoxides to form hydroxy-group containing compounds. The catalysts comprise transition metal complexes of a porphyrin ring having 1 to 12 halogen substituents on the porphyrin ring, at least one of said halogens being in a meso position and/or the catalyst containing no aryl group in a meso position. The compositions are prepared by halogenating a transition metal complex of a porphyrin. In one embodiment, a complex of a porphyrin with a metal whose porphyrin complexes are not active for oxidation of hydrocarbons is halogenated, thereby to obtain a haloporphyrin complex of that metal, the metal is removed from the haloporphyrin complex to obtain the free base form of the haloporphyrin, and a metal such as iron whose porphyrin complexes are active for oxidation of hydrocarbons and for the decomposition of alkyl hydroperoxides is complexed with the free base to obtain an active catalyst for oxidation of hydrocarbons and decomposition of alkyl hydroperoxides.
