Optimization of data analysis for the in vivo neutron activation analysis of aluminum in bone.
Mohseni, H K; Matysiak, W; Chettle, D R; Byun, S H; Priest, N; Atanackovic, J; Prestwich, W V
2016-10-01
An existing system at McMaster University has been used for the in vivo measurement of aluminum in human bone. Precise and detailed analysis approaches are necessary to determine the aluminum concentration because of the low levels of aluminum found in the bone and the challenges associated with its detection. Phantoms resembling the composition of the human hand with varying concentrations of aluminum were made for testing the system prior to the application to human studies. A spectral decomposition model and a photopeak fitting model involving the inverse-variance weighted mean and a time-dependent analysis were explored to analyze the results and determine the model with the best performance and lowest minimum detection limit. The results showed that the spectral decomposition and the photopeak fitting model with the inverse-variance weighted mean both provided better results compared to the other methods tested. The spectral decomposition method resulted in a marginally lower detection limit (5 μg Al/g Ca) compared to the inverse-variance weighted mean (5.2 μg Al/g Ca), rendering both equally applicable to human measurements. Copyright © 2016 Elsevier Ltd. All rights reserved.
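For readers unfamiliar with the weighting scheme mentioned above, the sketch below shows how repeated photopeak estimates can be pooled with an inverse-variance weighted mean. It is a minimal illustration with made-up numbers, not the authors' analysis code.

```python
# Minimal sketch (not the authors' code): pooling repeated aluminum photopeak
# estimates with an inverse-variance weighted mean. The values are hypothetical
# placeholders, not data from the study.
import numpy as np

def inverse_variance_mean(estimates, variances):
    """Return the inverse-variance weighted mean and its variance."""
    w = 1.0 / np.asarray(variances, dtype=float)
    mean = np.sum(w * estimates) / np.sum(w)
    var = 1.0 / np.sum(w)            # variance of the combined estimate
    return mean, var

# Hypothetical Al/Ca estimates from successive spectra, with their variances.
est = np.array([5.3, 4.8, 5.6, 5.1])
var = np.array([0.40, 0.55, 0.35, 0.50])
m, v = inverse_variance_mean(est, var)
print(f"combined estimate = {m:.2f} +/- {np.sqrt(v):.2f}")
```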
An analysis of scatter decomposition
NASA Technical Reports Server (NTRS)
Nicol, David M.; Saltz, Joel H.
1990-01-01
A formal analysis of a powerful mapping technique known as scatter decomposition is presented. Scatter decomposition divides an irregular computational domain into a large number of equal sized pieces, and distributes them modularly among processors. A probabilistic model of workload in one dimension is used to formally explain why, and when scatter decomposition works. The first result is that if correlation in workload is a convex function of distance, then scattering a more finely decomposed domain yields a lower average processor workload variance. The second result shows that if the workload process is stationary Gaussian and the correlation function decreases linearly in distance until becoming zero and then remains zero, scattering a more finely decomposed domain yields a lower expected maximum processor workload. Finally it is shown that if the correlation function decreases linearly across the entire domain, then among all mappings that assign an equal number of domain pieces to each processor, scatter decomposition minimizes the average processor workload variance. The dependence of these results on the assumption of decreasing correlation is illustrated with situations where a coarser granularity actually achieves better load balance.
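The workload-variance argument can be illustrated numerically. The sketch below assumes a stationary Gaussian workload whose correlation decays linearly to zero with distance (the setting of the second result) and compares coarse versus fine modular scattering; the domain size, processor count, and correlation length are arbitrary choices for illustration.

```python
# Sketch: scatter decomposition of a 1-D domain among P processors under a
# stationary Gaussian workload with linearly decaying correlation (illustrative
# parameters, not the paper's analysis).
import numpy as np

rng = np.random.default_rng(0)
n, P = 512, 8                        # domain cells, processors
corr_len = 32

# Correlation decreasing linearly with distance until it reaches zero (convex).
d = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
cov = np.clip(1.0 - d / corr_len, 0.0, None)
L = np.linalg.cholesky(cov + 1e-8 * np.eye(n))   # factor once, reuse for sampling

def avg_processor_load_variance(pieces_per_proc, n_trials=300):
    piece = n // (P * pieces_per_proc)           # cells per piece
    owner = (np.arange(n) // piece) % P          # modular (scatter) assignment
    var_sum = 0.0
    for _ in range(n_trials):
        w = L @ rng.standard_normal(n)           # correlated Gaussian workload
        loads = np.bincount(owner, weights=w, minlength=P)
        var_sum += loads.var()                   # variance across processors
    return var_sum / n_trials

print("coarse scatter (1 piece/processor):", avg_processor_load_variance(1))
print("fine scatter (16 pieces/processor):", avg_processor_load_variance(16))
```

With these assumptions the finer scattering yields the lower average processor workload variance, in line with the first result above.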
Corrected confidence bands for functional data using principal components.
Goldsmith, J; Greven, S; Crainiceanu, C
2013-03-01
Functional principal components (FPC) analysis is widely used to decompose and express functional observations. Curve estimates implicitly condition on basis functions and other quantities derived from FPC decompositions; however these objects are unknown in practice. In this article, we propose a method for obtaining correct curve estimates by accounting for uncertainty in FPC decompositions. Additionally, pointwise and simultaneous confidence intervals that account for both model- and decomposition-based variability are constructed. Standard mixed model representations of functional expansions are used to construct curve estimates and variances conditional on a specific decomposition. Iterated expectation and variance formulas combine model-based conditional estimates across the distribution of decompositions. A bootstrap procedure is implemented to understand the uncertainty in principal component decomposition quantities. Our method compares favorably to competing approaches in simulation studies that include both densely and sparsely observed functions. We apply our method to sparse observations of CD4 cell counts and to dense white-matter tract profiles. Code for the analyses and simulations is publicly available, and our method is implemented in the R package refund on CRAN. Copyright © 2013, The International Biometric Society.
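A minimal numerical sketch of the underlying idea follows: estimate functional principal components from densely observed curves, then bootstrap subjects to see how much the FPC decomposition itself varies. The simulation and two-component truth are invented for illustration; this is not the refund implementation or the article's mixed-model machinery.

```python
# Sketch: FPC decomposition of simulated dense curves plus a subject-level
# bootstrap to quantify uncertainty in the decomposition itself.
import numpy as np

rng = np.random.default_rng(1)
n_subj, n_grid = 80, 50
t = np.linspace(0, 1, n_grid)

# Simulate curves: two smooth components plus noise.
phi1, phi2 = np.sqrt(2) * np.sin(2 * np.pi * t), np.sqrt(2) * np.cos(2 * np.pi * t)
scores = rng.normal(0, [2.0, 1.0], size=(n_subj, 2))
Y = scores @ np.vstack([phi1, phi2]) + rng.normal(0, 0.3, size=(n_subj, n_grid))

def fpca(Y, npc=2):
    Yc = Y - Y.mean(axis=0)
    cov = Yc.T @ Yc / len(Y)
    vals, vecs = np.linalg.eigh(cov)
    order = np.argsort(vals)[::-1][:npc]
    return vals[order], vecs[:, order]          # eigenvalues, eigenfunctions (columns)

vals, vecs = fpca(Y)

# Bootstrap subjects; align signs before measuring eigenfunction variability.
boot_phi1 = []
for _ in range(200):
    Yb = Y[rng.integers(0, n_subj, n_subj)]
    _, vb = fpca(Yb)
    sign = np.sign(vb[:, 0] @ vecs[:, 0])
    boot_phi1.append(sign * vb[:, 0])
boot_phi1 = np.array(boot_phi1)
print("pointwise sd of first eigenfunction:", boot_phi1.std(axis=0).round(3)[:5], "...")
```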
Corrected Confidence Bands for Functional Data Using Principal Components
Goldsmith, J.; Greven, S.; Crainiceanu, C.
2014-01-01
Functional principal components (FPC) analysis is widely used to decompose and express functional observations. Curve estimates implicitly condition on basis functions and other quantities derived from FPC decompositions; however these objects are unknown in practice. In this article, we propose a method for obtaining correct curve estimates by accounting for uncertainty in FPC decompositions. Additionally, pointwise and simultaneous confidence intervals that account for both model- and decomposition-based variability are constructed. Standard mixed model representations of functional expansions are used to construct curve estimates and variances conditional on a specific decomposition. Iterated expectation and variance formulas combine model-based conditional estimates across the distribution of decompositions. A bootstrap procedure is implemented to understand the uncertainty in principal component decomposition quantities. Our method compares favorably to competing approaches in simulation studies that include both densely and sparsely observed functions. We apply our method to sparse observations of CD4 cell counts and to dense white-matter tract profiles. Code for the analyses and simulations is publicly available, and our method is implemented in the R package refund on CRAN. PMID:23003003
Influential Observations in Principal Factor Analysis.
ERIC Educational Resources Information Center
Tanaka, Yutaka; Odaka, Yoshimasa
1989-01-01
A method is proposed for detecting influential observations in iterative principal factor analysis. Theoretical influence functions are derived for two components of the common variance decomposition. The major mathematical tool is the influence function derived by Tanaka (1988). (SLD)
Lebigre, Christophe; Arcese, Peter; Reid, Jane M
2013-07-01
Age-specific variances and covariances in reproductive success shape the total variance in lifetime reproductive success (LRS), age-specific opportunities for selection, and population demographic variance and effective size. Age-specific (co)variances in reproductive success achieved through different reproductive routes must therefore be quantified to predict population, phenotypic and evolutionary dynamics in age-structured populations. While numerous studies have quantified age-specific variation in mean reproductive success, age-specific variances and covariances in reproductive success, and the contributions of different reproductive routes to these (co)variances, have not been comprehensively quantified in natural populations. We applied 'additive' and 'independent' methods of variance decomposition to complete data describing apparent (social) and realised (genetic) age-specific reproductive success across 11 cohorts of socially monogamous but genetically polygynandrous song sparrows (Melospiza melodia). We thereby quantified age-specific (co)variances in male within-pair and extra-pair reproductive success (WPRS and EPRS) and the contributions of these (co)variances to the total variances in age-specific reproductive success and LRS. 'Additive' decomposition showed that within-age and among-age (co)variances in WPRS across males aged 2-4 years contributed most to the total variance in LRS. Age-specific (co)variances in EPRS contributed relatively little. However, extra-pair reproduction altered age-specific variances in reproductive success relative to the social mating system, and hence altered the relative contributions of age-specific reproductive success to the total variance in LRS. 'Independent' decomposition showed that the (co)variances in age-specific WPRS, EPRS and total reproductive success, and the resulting opportunities for selection, varied substantially across males that survived to each age. Furthermore, extra-pair reproduction increased the variance in age-specific reproductive success relative to the social mating system to a degree that increased across successive age classes. This comprehensive decomposition of the total variances in age-specific reproductive success and LRS into age-specific (co)variances attributable to two reproductive routes showed that within-age and among-age covariances contributed substantially to the total variance and that extra-pair reproduction can alter the (co)variance structure of age-specific reproductive success. Such covariances and impacts should consequently be integrated into theoretical assessments of demographic and evolutionary processes in age-structured populations. © 2013 The Authors. Journal of Animal Ecology © 2013 British Ecological Society.
Efficient Scores, Variance Decompositions and Monte Carlo Swindles.
1984-08-28
… Then a version of Pythagoras' theorem gives the variance decomposition (6.1): var_P0(T) = var_P0(S) + var_P0(T − S). One way to see this is to note … complete sufficient statistics for (β, σ), and that the standardized residuals (y − Xβ̂)/σ̂ are ancillary. Basu's sufficiency-ancillarity theorem …
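The decomposition can be checked numerically in the classical normal-location setting the fragment alludes to: the sample mean S is complete sufficient, a location-equivariant estimator T minus S is ancillary, and Basu's theorem makes the two pieces independent. The toy Monte Carlo swindle below (with the median as T) is only an illustration of that identity, not the report's derivation.

```python
# Toy Monte Carlo swindle: for normal data the sample mean S is complete sufficient
# and T - S is ancillary for a location-equivariant T (here the median), so
# var(T) = var(S) + var(T - S), with var(S) = 1/n known exactly.
import numpy as np

rng = np.random.default_rng(2)
n, reps = 25, 20000
X = rng.normal(0.0, 1.0, size=(reps, n))

S = X.mean(axis=1)                   # sample mean: variance known exactly (1/n)
T = np.median(X, axis=1)             # estimator of interest
D = T - S                            # ancillary remainder

print("var(T)  direct      :", T.var())
print("var(S) + var(T - S) :", 1.0 / n + D.var())       # swindle: var(S) plugged in exactly
print("corr(S, T - S)      :", np.corrcoef(S, D)[0, 1])  # ~0, as Basu's theorem implies
```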
[Exploration of influencing factors of price of herbal based on VAR model].
Wang, Nuo; Liu, Shu-Zhen; Yang, Guang
2014-10-01
Based on a vector auto-regression (VAR) model, this paper uses Granger causality tests, variance decomposition, and impulse response analysis to carry out a comprehensive study of the factors influencing the price of Chinese herbal medicine, including cultivation costs, acreage, natural disasters, residents' needs, and inflation. The study found Granger causality relationships between inflation and herbal prices and between cultivation costs and herbal prices. In the variance decomposition of the Chinese herbal medicine price index, the largest contribution comes from its own fluctuations, followed by cultivation costs and inflation.
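The same toolchain (Granger causality tests, forecast-error variance decomposition, and impulse responses) is available in statsmodels; the sketch below runs it on synthetic series, with the variable names price, cost, and inflation as hypothetical stand-ins for the paper's data.

```python
# Sketch of the VAR workflow with statsmodels on synthetic, differenced series
# (column names and data are hypothetical stand-ins, not the paper's data).
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(3)
T = 200
cost = np.cumsum(rng.normal(0, 1, T))
inflation = np.cumsum(rng.normal(0, 1, T))
price = 0.5 * np.roll(cost, 1) + 0.3 * np.roll(inflation, 1) + rng.normal(0, 1, T)
df = pd.DataFrame({"price": price, "cost": cost, "inflation": inflation}).diff().dropna()

res = VAR(df).fit(maxlags=4, ic="aic")
print(res.test_causality("price", ["inflation"], kind="f").summary())  # Granger test
fevd = res.fevd(10)                   # forecast-error variance decomposition
print(fevd.decomp[0, -1])             # shares of 10-step price variance by shock source
irf = res.irf(10)                     # impulse responses (arrays in irf.irfs)
```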
McNamee, R L; Eddy, W F
2001-12-01
Analysis of variance (ANOVA) is widely used for the study of experimental data. Here, the reach of this tool is extended to cover the preprocessing of functional magnetic resonance imaging (fMRI) data. This technique, termed visual ANOVA (VANOVA), provides both numerical and pictorial information to aid the user in understanding the effects of various parts of the data analysis. Unlike a formal ANOVA, this method does not depend on the mathematics of orthogonal projections or strictly additive decompositions. An illustrative example is presented and the application of the method to a large number of fMRI experiments is discussed. Copyright 2001 Wiley-Liss, Inc.
Quantitative evaluation of muscle synergy models: a single-trial task decoding approach
Delis, Ioannis; Berret, Bastien; Pozzo, Thierry; Panzeri, Stefano
2013-01-01
Muscle synergies, i.e., invariant coordinated activations of groups of muscles, have been proposed as building blocks that the central nervous system (CNS) uses to construct the patterns of muscle activity utilized for executing movements. Several efficient dimensionality reduction algorithms that extract putative synergies from electromyographic (EMG) signals have been developed. Typically, the quality of synergy decompositions is assessed by computing the Variance Accounted For (VAF). Yet, little is known about the extent to which the combination of those synergies encodes task-discriminating variations of muscle activity in individual trials. To address this question, here we conceive and develop a novel computational framework to evaluate muscle synergy decompositions in task space. Unlike previous methods considering the total variance of muscle patterns (VAF based metrics), our approach focuses on variance discriminating execution of different tasks. The procedure is based on single-trial task decoding from muscle synergy activation features. The task decoding based metric evaluates quantitatively the mapping between synergy recruitment and task identification and automatically determines the minimal number of synergies that captures all the task-discriminating variability in the synergy activations. In this paper, we first validate the method on plausibly simulated EMG datasets. We then show that it can be applied to different types of muscle synergy decomposition and illustrate its applicability to real data by using it for the analysis of EMG recordings during an arm pointing task. We find that time-varying and synchronous synergies with similar number of parameters are equally efficient in task decoding, suggesting that in this experimental paradigm they are equally valid representations of muscle synergies. Overall, these findings stress the effectiveness of the decoding metric in systematically assessing muscle synergy decompositions in task space. PMID:23471195
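The sketch below illustrates the general idea of evaluating a synergy decomposition by task decoding, using simulated single-trial EMG, non-negative matrix factorization for the synergies, and cross-validated linear discriminant analysis as the decoder. It is not the authors' framework or metric; all sizes and signals are invented.

```python
# Sketch of task decoding from synergy activations (simulated EMG only).
import numpy as np
from sklearn.decomposition import NMF
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n_trials, n_muscles, n_tasks, n_true_syn = 120, 12, 4, 3
tasks = rng.integers(0, n_tasks, n_trials)

# Simulated single-trial muscle activity built from task-dependent synergy activations.
W_true = rng.random((n_true_syn, n_muscles))
task_coef = rng.random((n_tasks, n_true_syn)) * 2
H = task_coef[tasks] + 0.1 * rng.random((n_trials, n_true_syn))
emg = H @ W_true + 0.05 * rng.random((n_trials, n_muscles))

for k in range(1, 6):
    act = NMF(n_components=k, init="nndsvda", max_iter=1000,
              random_state=0).fit_transform(emg)          # synergy activations per trial
    acc = cross_val_score(LinearDiscriminantAnalysis(), act, tasks, cv=5).mean()
    print(f"{k} synergies: decoding accuracy = {acc:.2f}")
```

Sweeping the number of synergies and watching where decoding accuracy saturates mirrors the paper's criterion for the minimal number of task-discriminating synergies.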
Variance decomposition in stochastic simulators.
Le Maître, O P; Knio, O M; Moraes, A
2015-06-28
This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.
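A toy version of the idea can be written down for the birth-death model by giving each reaction channel its own random stream and applying a pick-freeze estimator; the covariance between runs that share the birth stream but redraw the death stream estimates Var(E[f | birth noise]). The tau-leaping discretization and all rate constants below are illustrative simplifications, not the paper's exact Poisson-process reformulation.

```python
# Sketch: variance-based sensitivity of a stochastic birth-death simulator to the
# noise of one reaction channel, via a pick-freeze estimator over random streams.
import numpy as np

def birth_death(seed_birth, seed_death, x0=20, birth=1.0, death=0.04, T=10.0, dt=0.02):
    rb, rd = np.random.default_rng(seed_birth), np.random.default_rng(seed_death)
    x = x0
    for _ in range(int(T / dt)):                 # crude tau-leaping
        x += rb.poisson(birth * dt) - rd.poisson(death * x * dt)
        x = max(x, 0)
    return x                                     # population at final time

rng = np.random.default_rng(5)
N = 1000
seeds = rng.integers(0, 2**31, size=(N, 3))      # (birth, death, resampled death)

fA = np.array([birth_death(s[0], s[1]) for s in seeds])
fB = np.array([birth_death(s[0], s[2]) for s in seeds])   # birth stream frozen, death redrawn

# Cov(f(W1, W2), f(W1, W2')) = Var(E[f | W1]) -> first-order index of the birth channel.
var_total = np.var(np.concatenate([fA, fB]))
S_birth = np.mean((fA - fA.mean()) * (fB - fB.mean())) / var_total
print("first-order sensitivity to the birth channel's noise ~", round(S_birth, 2))
```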
Variance decomposition in stochastic simulators
NASA Astrophysics Data System (ADS)
Le Maître, O. P.; Knio, O. M.; Moraes, A.
2015-06-01
This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.
Xu, Li; Jiang, Yong; Qiu, Rong
2018-01-01
In the present study, the co-pyrolysis behavior of rape straw, waste tire, and their various blends was investigated. TG-FTIR indicated that co-pyrolysis was characterized by a four-step reaction, and H2O, CH, OH, CO2, and CO groups were the main products evolved during the process. Additionally, using BBD-based experimental results, best-fit multiple regression models with high R^2-pred values (94.10% for mass loss and 95.37% for reaction heat), which correlated the explanatory variables with the responses, were presented. The derived models were analyzed by ANOVA at the 95% confidence level; the F-test, lack-of-fit test, and normal probability plots of the residuals implied that the models described the experimental data well. Finally, the model uncertainties as well as the interactive effects of these parameters were studied, and the total-, first-, and second-order sensitivity indices of the operating factors were proposed using Sobol' variance decomposition. To the authors' knowledge, this is the first time global parameter sensitivity analysis has been performed in the (co-)pyrolysis literature. Copyright © 2017 Elsevier Ltd. All rights reserved.
Regression-based adaptive sparse polynomial dimensional decomposition for sensitivity analysis
NASA Astrophysics Data System (ADS)
Tang, Kunkun; Congedo, Pietro; Abgrall, Remi
2014-11-01
Polynomial dimensional decomposition (PDD) is employed in this work for global sensitivity analysis and uncertainty quantification of stochastic systems subject to a large number of random input variables. Due to the intimate connection between PDD and analysis of variance, PDD provides a simpler and more direct evaluation of the Sobol' sensitivity indices than polynomial chaos (PC). Unfortunately, the number of PDD terms grows exponentially with the size of the input random vector, which makes the computational cost of the standard method unaffordable for real engineering applications. To address this curse of dimensionality, this work proposes a variance-based adaptive strategy that builds a cheap meta-model by sparse PDD with coefficients computed by regression. During this adaptive procedure, the PDD representation contains only a few terms, so the cost of repeatedly solving the linear system of the least-squares regression problem is negligible. The size of the final sparse-PDD representation is much smaller than the full PDD, since only significant terms are eventually retained. Consequently, far fewer calls to the deterministic model are required to compute the final PDD coefficients.
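The connection between an orthogonal polynomial representation and Sobol' indices can be seen in a minimal, non-adaptive two-variable example: fit a tensor-product Legendre surrogate by least-squares regression and read variance contributions off the squared coefficients. This is only the underlying principle, not the paper's adaptive sparse-PDD algorithm; the test function and degrees are arbitrary.

```python
# Sketch: least-squares Legendre surrogate on U(-1,1)^2 inputs; Sobol' indices
# come from the squared coefficients of the orthonormal basis.
import numpy as np
from numpy.polynomial.legendre import legval
from itertools import product

def model(x1, x2):                        # example "deterministic model"
    return x1 + 0.5 * x1 * x2 + 0.2 * x2**3

rng = np.random.default_rng(6)
n, deg = 400, 3
X = rng.uniform(-1, 1, size=(n, 2))
y = model(X[:, 0], X[:, 1])

# Orthonormal Legendre basis: sqrt(2i+1) * P_i(x) has unit variance on U(-1, 1).
def phi(i, x):
    c = np.zeros(i + 1); c[i] = 1.0
    return np.sqrt(2 * i + 1) * legval(x, c)

idx = list(product(range(deg + 1), repeat=2))
A = np.column_stack([phi(i, X[:, 0]) * phi(j, X[:, 1]) for i, j in idx])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

var_total = sum(c**2 for c, (i, j) in zip(coef, idx) if (i, j) != (0, 0))
S1 = sum(c**2 for c, (i, j) in zip(coef, idx) if i > 0 and j == 0) / var_total
S2 = sum(c**2 for c, (i, j) in zip(coef, idx) if i == 0 and j > 0) / var_total
S12 = sum(c**2 for c, (i, j) in zip(coef, idx) if i > 0 and j > 0) / var_total
print(f"S1={S1:.2f}  S2={S2:.2f}  S12={S12:.2f}  (sum={S1 + S2 + S12:.2f})")
```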
Seidelmann, Katrin N; Scherer-Lorenzen, Michael; Niklaus, Pascal A
2016-01-01
Effects of tree species diversity on decomposition can operate via a multitude of mechanisms, including alterations of microclimate by the forest canopy. Studying such effects in natural settings is complicated by the fact that topography also affects microclimate and thus decomposition, so that effects of diversity are more difficult to isolate. Here, we quantified decomposition rates of standard litter in young subtropical forest stands, separating effects of canopy tree species richness and topography, and quantifying their direct and micro-climate-mediated components. Our litterbag study was carried out at two experimental sites of a biodiversity-ecosystem functioning field experiment in south-east China (BEF-China). The field sites display strong topographical heterogeneity and were planted with tree communities ranging from monocultures to mixtures of 24 native subtropical tree species. Litter bags filled with senescent leaves of three native tree species were placed from Nov. 2011 to Oct. 2012 on 134 plots along the tree species diversity gradient. Topographic features were measured for all plots, and microclimate in a subset of plots. Stand species richness, topography and microclimate explained important fractions of the variations in litter decomposition rates, with diversity and topographic effects in part mediated by microclimatic changes. Tree stands were 2-3 years old, but nevertheless tree species diversity explained more variation (54.3%) in decomposition than topography (7.7%). Tree species richness slowed litter decomposition, an effect that slightly depended on litter species identity. A large part of the variance in decomposition was explained by tree species composition, with the presence of three tree species playing a significant role. Microclimate explained 31.4% of the variance in decomposition, and was related to lower soil moisture. Within this microclimate effect, species diversity (without composition) explained 8.9% and topography 34.4% of variance. Topography mainly affected diurnal temperature amplitudes by varying incident solar radiation.
Seidelmann, Katrin N.; Scherer-Lorenzen, Michael; Niklaus, Pascal A.
2016-01-01
Effects of tree species diversity on decomposition can operate via a multitude of mechanisms, including alterations of microclimate by the forest canopy. Studying such effects in natural settings is complicated by the fact that topography also affects microclimate and thus decomposition, so that effects of diversity are more difficult to isolate. Here, we quantified decomposition rates of standard litter in young subtropical forest stands, separating effects of canopy tree species richness and topography, and quantifying their direct and micro-climate-mediated components. Our litterbag study was carried out at two experimental sites of a biodiversity-ecosystem functioning field experiment in south-east China (BEF-China). The field sites display strong topographical heterogeneity and were planted with tree communities ranging from monocultures to mixtures of 24 native subtropical tree species. Litter bags filled with senescent leaves of three native tree species were placed from Nov. 2011 to Oct. 2012 on 134 plots along the tree species diversity gradient. Topographic features were measured for all plots, and microclimate in a subset of plots. Stand species richness, topography and microclimate explained important fractions of the variations in litter decomposition rates, with diversity and topographic effects in part mediated by microclimatic changes. Tree stands were 2–3 years old, but nevertheless tree species diversity explained more variation (54.3%) in decomposition than topography (7.7%). Tree species richness slowed litter decomposition, an effect that slightly depended on litter species identity. A large part of the variance in decomposition was explained by tree species composition, with the presence of three tree species playing a significant role. Microclimate explained 31.4% of the variance in decomposition, and was related to lower soil moisture. Within this microclimate effect, species diversity (without composition) explained 8.9% and topography 34.4% of variance. Topography mainly affected diurnal temperature amplitudes by varying incident solar radiation. PMID:27490180
ERIC Educational Resources Information Center
Bolt, Daniel M.; Ysseldyke, Jim; Patterson, Michael J.
2010-01-01
A three-level variance decomposition analysis was used to examine the sources of variability in implementation of a technology-enhanced progress monitoring system within each year of a 2-year study using a randomized-controlled design. We show that results of technology-enhanced progress monitoring are not necessarily a measure of student…
Variance decomposition in stochastic simulators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Le Maître, O. P., E-mail: olm@limsi.fr; Knio, O. M., E-mail: knio@duke.edu; Moraes, A., E-mail: alvaro.moraesgutierrez@kaust.edu.sa
This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.
Variability of ICA decomposition may impact EEG signals when used to remove eyeblink artifacts
PONTIFEX, MATTHEW B.; GWIZDALA, KATHRYN L.; PARKS, ANDREW C.; BILLINGER, MARTIN; BRUNNER, CLEMENS
2017-01-01
Despite the growing use of independent component analysis (ICA) algorithms for isolating and removing eyeblink-related activity from EEG data, we have limited understanding of how variability associated with ICA uncertainty may be influencing the reconstructed EEG signal after removing the eyeblink artifact components. To characterize the magnitude of this ICA uncertainty and to understand the extent to which it may influence findings within ERP and EEG investigations, ICA decompositions of EEG data from 32 college-aged young adults were repeated 30 times for three popular ICA algorithms. Following each decomposition, eyeblink components were identified and removed. The remaining components were back-projected, and the resulting clean EEG data were further used to analyze ERPs. Findings revealed that ICA uncertainty results in variation in P3 amplitude as well as variation across all EEG sampling points, but differs across ICA algorithms as a function of the spatial location of the EEG channel. This investigation highlights the potential of ICA uncertainty to introduce additional sources of variance when the data are back-projected without artifact components. Careful selection of ICA algorithms and parameters can reduce the extent to which ICA uncertainty may introduce an additional source of variance within ERP/EEG studies. PMID:28026876
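The sketch below reproduces the spirit of that check with scikit-learn's FastICA on simulated mixtures: repeat the decomposition with different random seeds, remove the putative artifact component (here picked crudely as the most kurtotic one, an assumption), back-project, and measure how much the cleaned signal varies across repeats. It is not the paper's EEG pipeline or its three specific algorithms.

```python
# Sketch of ICA-uncertainty assessment on simulated mixed signals.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(7)
n_samples, n_channels = 2000, 8
t = np.linspace(0, 10, n_samples)

blink = (np.sin(2 * np.pi * 0.3 * t) > 0.995).astype(float)        # spiky "eyeblink" source
neural = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.standard_normal(n_samples)
sources = np.vstack([blink, neural, rng.standard_normal((6, n_samples))])
A = rng.standard_normal((n_channels, sources.shape[0]))
X = (A @ sources).T                                                # channels in columns

cleaned = []
for seed in range(20):                                             # repeat the decomposition
    ica = FastICA(n_components=n_channels, random_state=seed, max_iter=1000)
    S = ica.fit_transform(X)
    # crude artifact pick: the most kurtotic component stands in for the eyeblink IC
    k = np.argmax(((S - S.mean(0)) ** 4).mean(0) / S.var(0) ** 2)
    S[:, k] = 0.0
    cleaned.append(S @ ica.mixing_.T + ica.mean_)                  # back-project without it
cleaned = np.array(cleaned)
print("mean across-decomposition s.d. of the cleaned signal:",
      cleaned.std(axis=0).mean().round(4))
```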
Karmakar, Bibha; Malkin, Ida; Kobyliansky, Eugene
2013-06-01
Dermatoglyphic asymmetry and diversity traits from a large number of twins (MZ and DZ) were analyzed based on principal factors to evaluate genetic effects and common familial environmental influences, using maximum likelihood-based variance decomposition analysis. The sample consists of monozygotic (MZ) twins of both sexes (102 male pairs and 138 female pairs) and 120 pairs of dizygotic (DZ) female twins. All asymmetry (DA and FA) and diversity dermatoglyphic traits were clearly separated into factors. These results perfectly corroborate earlier studies in different ethnic populations, which indicates that a common biological validity of the underlying component structures of dermatoglyphic characters may exist. Our heritability results in twins clearly showed that DA_F2 is inherited mostly in a dominant mode (28.0%) whereas FA_F1 is additive (60.7%), but no significant sex difference was observed for these factors. Inheritance is also very prominent in diversity Factor 1, which exactly corroborates our previous findings. The present results are similar to earlier results on finger ridge count diversity in twin data, which suggested that finger ridge count diversity is under genetic control.
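As a simpler stand-in for the maximum-likelihood variance decomposition used in the study, the sketch below applies Falconer's classical approximation to simulated MZ/DZ twin pairs, estimating additive genetic, common-environment, and unique-environment shares from the two twin correlations. All numbers are simulated; this is not the authors' model.

```python
# Quick illustration with simulated twin data and Falconer's approximation.
import numpy as np

rng = np.random.default_rng(8)
n_pairs = 300
a2, c2, e2 = 0.6, 0.1, 0.3                      # simulated ACE variance shares

def simulate_pairs(genetic_corr):
    g = rng.multivariate_normal([0, 0], [[1, genetic_corr], [genetic_corr, 1]], n_pairs)
    c = rng.standard_normal((n_pairs, 1))       # shared environment, identical within a pair
    e = rng.standard_normal((n_pairs, 2))
    return np.sqrt(a2) * g + np.sqrt(c2) * c + np.sqrt(e2) * e

mz, dz = simulate_pairs(1.0), simulate_pairs(0.5)
r_mz = np.corrcoef(mz[:, 0], mz[:, 1])[0, 1]
r_dz = np.corrcoef(dz[:, 0], dz[:, 1])[0, 1]

h2 = 2 * (r_mz - r_dz)                          # additive genetic share
c2_hat = 2 * r_dz - r_mz                        # common environment share
print(f"h2 ~ {h2:.2f}, c2 ~ {c2_hat:.2f}, e2 ~ {1 - r_mz:.2f}")
```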
Modelling health and output at business cycle horizons for the USA.
Narayan, Paresh Kumar
2010-07-01
In this paper we employ a theoretical framework - a simple macro model augmented with health - that draws guidance from the Keynesian view of business cycles to examine the relative importance of permanent and transitory shocks in explaining variations in health expenditure and output at business cycle horizons for the USA. The variance decomposition analysis of shocks reveals that at business cycle horizons permanent shocks explain the bulk of the variations in output, while transitory shocks explain the bulk of the variations in health expenditures. We undertake a shock decomposition analysis for private health expenditures versus public health expenditures and interestingly find that while transitory shocks are more important for private sector expenditures, permanent shocks dominate public health expenditures. Copyright (c) 2009 John Wiley & Sons, Ltd.
Once upon Multivariate Analyses: When They Tell Several Stories about Biological Evolution.
Renaud, Sabrina; Dufour, Anne-Béatrice; Hardouin, Emilie A; Ledevin, Ronan; Auffray, Jean-Christophe
2015-01-01
Geometric morphometrics aims to characterize the geometry of complex traits; it is therefore multivariate by essence. The most popular methods to investigate patterns of differentiation in this context are (1) the Principal Component Analysis (PCA), which is an eigenvalue decomposition of the total variance-covariance matrix among all specimens; (2) the Canonical Variate Analysis (CVA, a.k.a. linear discriminant analysis (LDA) for more than two groups), which aims at separating the groups by maximizing the between-group to within-group variance ratio; (3) the between-group PCA (bgPCA), which investigates patterns of between-group variation without standardizing by the within-group variance. Standardizing by the within-group variance, as performed in the CVA, distorts the relationships among groups, an effect that is particularly strong if the variance is oriented in a comparable way in all groups. Such a shared direction of main morphological variance may occur and have a biological meaning, for instance corresponding to the most frequent standing genetic variation in a population. Here we undertake a case study of the evolution of house mouse molar shape across various islands, based on a real dataset and simulations. We investigated how patterns of main variance influence the depiction of among-group differentiation according to the interpretation of the PCA, bgPCA and CVA. Without arguing about one method performing 'better' than another, it rather emerges that working on the total or between-group variance (PCA and bgPCA) will tend to put the focus on the role of the direction of main variance as a line of least resistance to evolution. Standardizing by the within-group variance (CVA), by dampening the expression of this line of least resistance, has the potential to reveal other relevant patterns of differentiation that may otherwise be blurred.
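The contrast among the three orderings can be reproduced on simulated data in which every group shares the same direction of main within-group variance. The sketch below uses scikit-learn's PCA for the total-variance and between-group analyses (bgPCA is implemented here simply as a PCA of the group means) and LinearDiscriminantAnalysis as the CVA; group sizes and dimensions are arbitrary.

```python
# Sketch: PCA vs. bgPCA vs. CVA on simulated "shape" data sharing one within-group axis.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(9)
n_groups, n_per, n_vars = 5, 40, 6
within_axis = rng.standard_normal(n_vars)
within_axis /= np.linalg.norm(within_axis)

means = rng.standard_normal((n_groups, n_vars)) * 1.5
X = np.vstack([m + 3.0 * rng.standard_normal((n_per, 1)) * within_axis
               + 0.5 * rng.standard_normal((n_per, n_vars)) for m in means])
g = np.repeat(np.arange(n_groups), n_per)

pca = PCA(n_components=2).fit(X)                        # total variance
group_means = np.array([X[g == k].mean(axis=0) for k in range(n_groups)])
bgpca = PCA(n_components=2).fit(group_means)            # between-group variance only
cva = LinearDiscriminantAnalysis(n_components=2).fit(X, g)  # standardized by within-group var

print("PCA PC1 alignment with within-group axis  :",
      abs(pca.components_[0] @ within_axis).round(2))
print("bgPCA PC1 alignment with within-group axis:",
      abs(bgpca.components_[0] @ within_axis).round(2))
print("CVA axis 1 alignment with within-group axis:",
      abs(cva.scalings_[:, 0] @ within_axis / np.linalg.norm(cva.scalings_[:, 0])).round(2))
```

On data simulated this way, the total-variance PCA axis tends to follow the shared within-group direction, while the between-group and within-standardized analyses do not.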
SU-E-QI-14: Quantitative Variogram Detection of Mild, Unilateral Disease in Elastase-Treated Rats
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jacob, R; Carson, J
2014-06-15
Purpose: Determining the presence of mild or early disease in the lungs can be challenging and subjective. We present a rapid and objective method for evaluating lung damage in a rat model of unilateral mild emphysema based on a new approach to heterogeneity assessment. We combined octree decomposition (used in three-dimensional (3D) computer graphics) with variograms (used in geostatistics to assess spatial relationships) to evaluate 3D computed tomography (CT) lung images for disease. Methods: Male, Sprague-Dawley rats (232 ± 7 g) were intratracheally dosed with 50 U/kg of elastase dissolved in 200 μL of saline to a single lobe (n=6) or with saline only (n=5). After four weeks, 3D micro-CT images were acquired at end expiration on mechanically ventilated rats using prospective gating. Images were masked, and lungs were decomposed to homogeneous blocks of 2×2×2, 4×4×4, and 8×8×8 voxels using octree decomposition. The spatial variance – the square of the difference of signal intensity – between all pairs of the 8×8×8 blocks was calculated. Variograms – graphs of distance vs. variance – were made, and data were fit to a power law and the exponent determined. The mean HU values, coefficient of variation (CoV), and the emphysema index (EI) were calculated and compared to the variograms. Results: The variogram analysis showed that significant differences between groups existed (p<0.01), whereas the mean HU (p=0.07), CoV (p=0.24), and EI (p=0.08) did not. Calculation time for the variogram for a typical 1000 block decomposition was ∼6 seconds, and octree decomposition took ∼2 minutes. Decomposing the images prior to variogram calculation resulted in a ∼700x decrease in time as compared to other published approaches. Conclusions: Our results suggest that the approach combining octree decomposition and variogram analysis may be a rapid, non-subjective, and sensitive imaging-based biomarker for quantitative characterization of lung disease.
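A stripped-down version of the variogram step is sketched below on a synthetic volume, with regular 8×8×8 blocks standing in for the octree decomposition: compute block means, form the variogram cloud of half squared differences versus distance, and fit a power law. The synthetic image, block size, and fitting choices are assumptions for illustration only.

```python
# Sketch of the variogram step on a synthetic volume (not CT data).
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(10)
vol = rng.normal(-700, 50, size=(64, 64, 64))              # synthetic "lung" HU values
vol[:, :, :32] += np.linspace(0, 80, 32)                   # mild gradient in one "lobe"

b = 8                                                       # block edge length (voxels)
blocks = vol.reshape(8, b, 8, b, 8, b).mean(axis=(1, 3, 5))  # block means
coords = np.argwhere(np.ones(blocks.shape, bool)) * b       # block positions (voxels)
vals = blocks.ravel()

# All pairwise distances and half squared differences (the empirical variogram cloud).
i, j = np.triu_indices(len(vals), k=1)
dist = np.linalg.norm(coords[i] - coords[j], axis=1)
gamma = 0.5 * (vals[i] - vals[j]) ** 2

power_law = lambda h, a, p: a * h ** p
(a, p), _ = curve_fit(power_law, dist, gamma, p0=[1.0, 1.0])
print(f"fitted variogram exponent p = {p:.2f}")
```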
The Variance of Intraclass Correlations in Three- and Four-Level Models
ERIC Educational Resources Information Center
Hedges, Larry V.; Hedberg, E. C.; Kuyper, Arend M.
2012-01-01
Intraclass correlations are used to summarize the variance decomposition in populations with multilevel hierarchical structure. There has recently been considerable interest in estimating intraclass correlations from surveys or designed experiments to provide design parameters for planning future large-scale randomized experiments. The large…
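For the simplest two-level case, the variance decomposition behind an intraclass correlation can be estimated with a random-intercept model; the sketch below uses statsmodels' MixedLM on simulated school/student data (the grouping names and variance components are hypothetical, and the paper's three- and four-level formulas are not reproduced here).

```python
# Two-level illustration of the variance decomposition behind an intraclass correlation.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(11)
n_schools, n_students = 40, 25
school_var, resid_var = 0.2, 0.8                      # true variance components

school_eff = rng.normal(0, np.sqrt(school_var), n_schools)
df = pd.DataFrame({"school": np.repeat(np.arange(n_schools), n_students)})
df["y"] = school_eff[df["school"].to_numpy()] + rng.normal(0, np.sqrt(resid_var), len(df))

fit = smf.mixedlm("y ~ 1", df, groups=df["school"]).fit()
tau2 = float(fit.cov_re.iloc[0, 0])                   # between-school variance
sigma2 = fit.scale                                    # within-school (residual) variance
print(f"ICC = {tau2 / (tau2 + sigma2):.3f}  "
      f"(true value {school_var / (school_var + resid_var):.3f})")
```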
The Variance of Intraclass Correlations in Three and Four Level
ERIC Educational Resources Information Center
Hedges, Larry V.; Hedberg, Eric C.; Kuyper, Arend M.
2012-01-01
Intraclass correlations are used to summarize the variance decomposition in populations with multilevel hierarchical structure. There has recently been considerable interest in estimating intraclass correlations from surveys or designed experiments to provide design parameters for planning future large-scale randomized experiments. The large…
Szczepankiewicz, Filip; van Westen, Danielle; Englund, Elisabet; Westin, Carl-Fredrik; Ståhlberg, Freddy; Lätt, Jimmy; Sundgren, Pia C; Nilsson, Markus
2016-11-15
The structural heterogeneity of tumor tissue can be probed by diffusion MRI (dMRI) in terms of the variance of apparent diffusivities within a voxel. However, the link between the diffusional variance and the tissue heterogeneity is not well-established. To investigate this link we test the hypothesis that diffusional variance, caused by microscopic anisotropy and isotropic heterogeneity, is associated with variable cell eccentricity and cell density in brain tumors. We performed dMRI using a novel encoding scheme for diffusional variance decomposition (DIVIDE) in 7 meningiomas and 8 gliomas prior to surgery. The diffusional variance was quantified from dMRI in terms of the total mean kurtosis (MK_T), and DIVIDE was used to decompose MK_T into components caused by microscopic anisotropy (MK_A) and isotropic heterogeneity (MK_I). Diffusion anisotropy was evaluated in terms of the fractional anisotropy (FA) and microscopic fractional anisotropy (μFA). Quantitative microscopy was performed on the excised tumor tissue, where structural anisotropy and cell density were quantified by structure tensor analysis and cell nuclei segmentation, respectively. In order to validate the DIVIDE parameters they were correlated to the corresponding parameters derived from microscopy. We found an excellent agreement between the DIVIDE parameters and corresponding microscopy parameters; MK_A correlated with cell eccentricity (r=0.95, p<10^-7) and MK_I with the cell density variance (r=0.83, p<10^-3). The diffusion anisotropy correlated with structure tensor anisotropy on the voxel-scale (FA, r=0.80, p<10^-3) and microscopic scale (μFA, r=0.93, p<10^-6). A multiple regression analysis showed that the conventional MK_T parameter reflects both variable cell eccentricity and cell density, and therefore lacks specificity in terms of microstructure characteristics. However, specificity was obtained by decomposing the two contributions; MK_A was associated only to cell eccentricity, and MK_I only to cell density variance. The variance in meningiomas was caused primarily by microscopic anisotropy (mean±s.d.) MK_A=1.11±0.33 vs MK_I=0.44±0.20 (p<10^-3), whereas in the gliomas, it was mostly caused by isotropic heterogeneity, MK_I=0.57±0.30 vs MK_A=0.26±0.11 (p<0.05). In conclusion, DIVIDE allows non-invasive mapping of parameters that reflect variable cell eccentricity and density. These results constitute convincing evidence that a link exists between specific aspects of tissue heterogeneity and parameters from dMRI. Decomposing effects of microscopic anisotropy and isotropic heterogeneity facilitates an improved interpretation of tumor heterogeneity as well as diffusion anisotropy on both the microscopic and macroscopic scale. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
Asumadu-Sarkodie, Samuel; Owusu, Phebe Asantewaa
2016-07-01
In this study, the relationship between carbon dioxide emissions, GDP, energy use, and population growth in Ghana was investigated from 1971 to 2013 by comparing the vector error correction model (VECM) and the autoregressive distributed lag (ARDL). Prior to testing for Granger causality based on VECM, the study tested for unit roots, Johansen's multivariate co-integration and performed a variance decomposition analysis using Cholesky's technique. Evidence from the variance decomposition shows that 21 % of future shocks in carbon dioxide emissions are due to fluctuations in energy use, 8 % of future shocks are due to fluctuations in GDP, and 6 % of future shocks are due to fluctuations in population. There was evidence of bidirectional causality running from energy use to GDP and a unidirectional causality running from carbon dioxide emissions to energy use, carbon dioxide emissions to GDP, carbon dioxide emissions to population, and population to energy use. Evidence from the long-run elasticities shows that a 1 % increase in population in Ghana will increase carbon dioxide emissions by 1.72 %. There was evidence of short-run equilibrium relationship running from energy use to carbon dioxide emissions and GDP to carbon dioxide emissions. As a policy implication, the addition of renewable energy and clean energy technologies into Ghana's energy mix can help mitigate climate change and its impact in the future.
NASA Astrophysics Data System (ADS)
Li, Jiqing; Duan, Zhipeng; Huang, Jing
2018-06-01
With the aggravation of the global climate change, the shortage of water resources in China is becoming more and more serious. Using reasonable methods to study changes in precipitation is very important for planning and management of water resources. Based on the time series of precipitation in Beijing from 1951 to 2015, the multi-scale features of precipitation are analyzed by the Extreme-point Symmetric Mode Decomposition (ESMD) method to forecast the precipitation shift. The results show that the precipitation series have periodic changes of 2.6, 4.3, 14 and 21.7 years, and the variance contribution rate of each modal component shows that the inter-annual variation dominates the precipitation in Beijing. It is predicted that precipitation in Beijing will continue to decrease in the near future.
Efficient scheme for parametric fitting of data in arbitrary dimensions.
Pang, Ning-Ning; Tzeng, Wen-Jer; Kao, Hisen-Ching
2008-07-01
We propose an efficient scheme for parametric fitting expressed in terms of the Legendre polynomials. For continuous systems, our scheme is exact and the derived explicit expression is very helpful for further analytical studies. For discrete systems, our scheme is almost as accurate as the method of singular value decomposition. Through a few numerical examples, we show that our algorithm costs much less CPU time and memory space than the method of singular value decomposition. Thus, our algorithm is very suitable for a large amount of data fitting. In addition, the proposed scheme can also be used to extract the global structure of fluctuating systems. We then derive the exact relation between the correlation function and the detrended variance function of fluctuating systems in arbitrary dimensions and give a general scaling analysis.
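For comparison purposes only, the standard Legendre least-squares fit that such schemes are typically benchmarked against can be run directly with NumPy's legendre module; the sketch below fits noisy synthetic data and inspects the coefficients and residual.

```python
# Baseline sketch: ordinary Legendre least-squares fit on noisy discrete data.
import numpy as np
from numpy.polynomial import legendre as L

rng = np.random.default_rng(12)
x = np.linspace(-1, 1, 200)
y = np.sin(2.5 * x) + 0.05 * rng.standard_normal(x.size)

coef = L.legfit(x, y, deg=7)            # coefficients in the Legendre basis
y_hat = L.legval(x, coef)
print("max abs coefficient beyond degree 5:", np.abs(coef[6:]).max().round(4))
print("rms residual:", np.sqrt(np.mean((y - y_hat) ** 2)).round(4))
```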
Feedback process responsible for intermodel diversity of ENSO variability
NASA Astrophysics Data System (ADS)
An, Soon-Il; Heo, Eun Sook; Kim, Seon Tae
2017-05-01
The origin of the intermodel diversity of the El Niño-Southern Oscillation (ENSO) variability is investigated by applying a singular value decomposition (SVD) analysis between the intermodel tropical Pacific sea surface temperature anomalies (SSTA) variance and the intermodel ENSO stability index (BJ index). The first SVD mode features an ENSO-like pattern for the intermodel SSTA variance (74% of total variance) and the dominant thermocline feedback (TH) for the BJ index (51%). Intermodel TH is mainly modified by the intermodel sensitivity of the zonal thermocline gradient response to zonal winds over the equatorial Pacific (βh), and the intermodel βh is correlated higher with the intermodel off-equatorial wind stress curl anomalies than the equatorial zonal wind stress anomalies. Finally, the intermodel off-equatorial wind stress curl is associated with the meridional shape and intensity of ENSO-related wind patterns, which may cause a model-to-model difference in ENSO variability by influencing the off-equatorial oceanic Rossby wave response.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Niu, T; Dong, X; Petrongolo, M
Purpose: Dual energy CT (DECT) imaging plays an important role in advanced imaging applications due to its material decomposition capability. Direct decomposition via matrix inversion suffers from significant degradation of image signal-to-noise ratios, which reduces clinical value. Existing de-noising algorithms achieve suboptimal performance since they suppress image noise either before or after the decomposition and do not fully explore the noise statistical properties of the decomposition process. We propose an iterative image-domain decomposition method for noise suppression in DECT, using the full variance-covariance matrix of the decomposed images. Methods: The proposed algorithm is formulated in the form of least-square estimation with smoothness regularization. It includes the inverse of the estimated variance-covariance matrix of the decomposed images as the penalty weight in the least-square term. Performance is evaluated using an evaluation phantom (Catphan 600) and an anthropomorphic head phantom. Results are compared to those generated using direct matrix inversion with no noise suppression, a de-noising method applied on the decomposed images, and an existing algorithm with similar formulation but with an edge-preserving regularization term. Results: On the Catphan phantom, our method retains the same spatial resolution as the CT images before decomposition while reducing the noise standard deviation of decomposed images by over 98%. The other methods either degrade spatial resolution or achieve less low-contrast detectability. Also, our method yields lower electron density measurement error than direct matrix inversion and reduces error variation by over 97%. On the head phantom, it reduces the noise standard deviation of decomposed images by over 97% without blurring the sinus structures. Conclusion: We propose an iterative image-domain decomposition method for DECT. The method combines noise suppression and material decomposition into an iterative process and achieves both goals simultaneously. The proposed algorithm shows superior performance on noise suppression with high image spatial resolution and low-contrast detectability. This work is supported by a Varian MRA grant.
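A one-dimensional toy comparison of direct matrix inversion against a noise-weighted, smoothness-regularized least-squares decomposition is sketched below. The two-material basis matrix, noise levels, and regularization weight are invented, and the quadratic first-difference penalty is a simplification of the abstract's covariance-weighted formulation.

```python
# Toy 1-D sketch: direct inversion vs. regularized weighted least-squares decomposition.
import numpy as np

rng = np.random.default_rng(13)
n = 120
A = np.array([[0.80, 0.50],           # effective attenuation of materials 1, 2 in the
              [0.70, 0.55]])          # low/high-energy channels (illustrative numbers)
x_true = np.zeros((2, n))
x_true[0, 30:70] = 1.0                # material-1 insert
x_true[1, 60:100] = 0.8               # material-2 insert
noise_sd = np.array([0.03, 0.03])     # per-channel measurement noise
m = A @ x_true + noise_sd[:, None] * rng.standard_normal((2, n))

# Direct per-pixel inversion: unbiased but noisy.
x_direct = np.linalg.inv(A) @ m

# Regularized weighted least squares over the whole line:
# minimize sum_i ||W^(1/2) (A x_i - m_i)||^2 + lam * ||x_i - x_(i-1)||^2
W = np.diag(1.0 / noise_sd**2)
lam = 10.0                                          # hand-tuned for this toy
AtWA, AtW = A.T @ W @ A, A.T @ W
H = np.kron(np.eye(n), AtWA)                        # block-diagonal data-fit term
D = np.diff(np.eye(n), axis=0)                      # first-difference operator
H += lam * np.kron(D.T @ D, np.eye(2))              # smoothness penalty on both materials
b = (AtW @ m).T.ravel()                             # unknowns ordered pixel-by-pixel
x_reg = np.linalg.solve(H, b).reshape(n, 2).T

for name, est in [("direct", x_direct), ("regularized", x_reg)]:
    print(f"{name:11s} rmse = {np.sqrt(np.mean((est - x_true) ** 2)):.3f}")
```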
Smoothing spline ANOVA frailty model for recurrent event data.
Du, Pang; Jiang, Yihua; Wang, Yuedong
2011-12-01
Gap time hazard estimation is of particular interest in recurrent event data. This article proposes a fully nonparametric approach for estimating the gap time hazard. Smoothing spline analysis of variance (ANOVA) decompositions are used to model the log gap time hazard as a joint function of gap time and covariates, and general frailty is introduced to account for between-subject heterogeneity and within-subject correlation. We estimate the nonparametric gap time hazard function and parameters in the frailty distribution using a combination of the Newton-Raphson procedure, the stochastic approximation algorithm (SAA), and the Markov chain Monte Carlo (MCMC) method. The convergence of the algorithm is guaranteed by decreasing the step size of parameter update and/or increasing the MCMC sample size along iterations. Model selection procedure is also developed to identify negligible components in a functional ANOVA decomposition of the log gap time hazard. We evaluate the proposed methods with simulation studies and illustrate its use through the analysis of bladder tumor data. © 2011, The International Biometric Society.
Video denoising using low rank tensor decomposition
NASA Astrophysics Data System (ADS)
Gui, Lihua; Cui, Gaochao; Zhao, Qibin; Wang, Dongsheng; Cichocki, Andrzej; Cao, Jianting
2017-03-01
Reducing noise in a video sequence is of vital importance in many real-world applications. One popular method is block-matching collaborative filtering. However, the main drawback of this method is that the noise standard deviation for the whole video sequence must be known in advance. In this paper, we present a tensor-based denoising framework that considers 3D patches instead of 2D patches. By collecting similar 3D patches non-locally, we employ a low-rank tensor decomposition for collaborative filtering. Since we specify a non-informative prior over the noise precision parameter, the noise variance can be inferred automatically from the observed video data. Therefore, our method is more practical, as it does not require knowing the noise variance. Experiments on video denoising demonstrate the effectiveness of the proposed method.
Component separation for cosmic microwave background radiation
NASA Astrophysics Data System (ADS)
Fernández-Cobos, R.; Vielva, P.; Barreiro, R. B.; Martínez-González, E.
2011-11-01
Cosmic microwave background (CMB) radiation data obtained by different experiments contains, besides the desired signal, a superposition of microwave sky contributions due mainly to, on the one hand, synchrotron radiation, free-free emission and re-emission from dust clouds in our galaxy and, on the other hand, extragalactic sources. We present an analytical method, using a wavelet decomposition on the sphere, to recover the CMB signal from microwave maps. Applied to both temperature and polarization data, it proves to be a particularly powerful tool in heavily polluted regions of the sky. The applied wavelet has the advantages of requiring little computing time, being adapted to the HEALPix pixelization scheme (the format in which the community reports CMB data), and offering the possibility of multi-resolution analysis. The decomposition is implemented as part of a template fitting method that minimizes the variance of the resulting map. The method was tested with simulations of WMAP data and the results have been positive, with improvements of up to 12% in the variance of the resulting full-sky map and about 3% in regions of low contamination. Finally, we also present some preliminary results with WMAP data in the form of an angular cross power spectrum C_ℓ^{TE}, consistent with the spectrum reported by the WMAP team.
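The variance-minimizing template fit itself reduces to a one-line estimate per template. The sketch below shows the single-template, pixel-domain version on simulated data (the real method works on wavelet coefficients with several foreground templates); amplitudes and noise levels are arbitrary.

```python
# Sketch: single-template, variance-minimizing foreground fit on simulated pixels.
import numpy as np

rng = np.random.default_rng(14)
npix = 5000
cmb = rng.normal(0, 100.0, npix)                 # "CMB" signal (arbitrary units)
template = rng.gamma(2.0, 1.0, npix)             # foreground template (arbitrary units)
data = cmb + 15.0 * template + rng.normal(0, 5.0, npix)

# Coefficient that minimizes the variance of (data - alpha * template).
alpha = np.cov(data, template)[0, 1] / template.var(ddof=1)
cleaned = data - alpha * template

print(f"fitted coefficient alpha = {alpha:.2f} (true 15.0)")
print(f"variance before = {data.var():.0f}, after = {cleaned.var():.0f}, CMB = {cmb.var():.0f}")
```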
A study on characteristics of retrospective optimal interpolation with WRF testbed
NASA Astrophysics Data System (ADS)
Kim, S.; Noh, N.; Lim, G.
2012-12-01
This study presents the application of retrospective optimal interpolation (ROI) with the Weather Research and Forecasting (WRF) model. Song et al. (2009) suggested the ROI method, an optimal interpolation (OI) scheme that gradually assimilates observations over the analysis window to obtain a variance-minimum estimate of the atmospheric state at the initial time of the window. Song and Lim (2011) improved the method by incorporating eigen-decomposition and covariance inflation. The ROI method assimilates data after the analysis time using a perturbation method (Errico and Raeder, 1999), without an adjoint model. In this study, the ROI method is applied to the WRF model to validate the algorithm and to investigate its capability. The computational cost of ROI can be reduced thanks to the eigen-decomposition of the background error covariance. Using the background error covariance in eigen-space, a single-profile assimilation experiment is performed. The difference between forecast errors with and without assimilation clearly grows with time, indicating that assimilation improves the forecast. The characteristics and strengths/weaknesses of the ROI method are investigated by conducting experiments with other data assimilation methods.
Bitzen, Alexander; Sternickel, Karsten; Lewalter, Thorsten; Schwab, Jörg Otto; Yang, Alexander; Schrickel, Jan Wilko; Linhart, Markus; Wolpert, Christian; Jung, Werner; David, Peter; Lüderitz, Berndt; Nickenig, Georg; Lickfett, Lars
2007-10-01
Patients with atrial fibrillation (AF) often exhibit abnormalities of P wave morphology during sinus rhythm. We examined a novel method for automatic P wave analysis in the 24-hour-Holter-ECG of 60 patients with paroxysmal or persistent AF and 12 healthy subjects. Recorded ECG signals were transferred to the analysis program where 5-10 P and R waves were manually marked. A wavelet transform performed a time-frequency decomposition to train neural networks. Afterwards, the detected P waves were described using a Gauss function optimized to fit the individual morphology and providing amplitude and duration at half P wave height. >96% of P waves were detected, 47.4 +/- 20.7% successfully analyzed afterwards. In the patient population, the mean amplitude was 0.073 +/- 0.028 mV (mean variance 0.020 +/- 0.008 mV^2), the mean duration at half height 23.5 +/- 2.7 ms (mean variance 4.2 +/- 1.6 ms^2). In the control group, the mean amplitude (0.105 +/- 0.020 mV) was significantly higher (P < 0.0005), the mean variance of duration at half height (2.9 +/- 0.6 ms^2) significantly lower (P < 0.0085). This method shows promise for identification of triggering factors of AF.
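The final fitting stage described above, a Gauss function providing amplitude and duration at half height, can be sketched with scipy's curve_fit on a simulated P wave; the detection stage (wavelet transform plus neural networks) is not reproduced, and the waveform parameters are invented.

```python
# Sketch of the Gauss-function fitting stage on a simulated P wave.
import numpy as np
from scipy.optimize import curve_fit

def gauss(t, amp, center, sigma, baseline):
    return baseline + amp * np.exp(-0.5 * ((t - center) / sigma) ** 2)

rng = np.random.default_rng(15)
t = np.arange(0, 120, 1.0)                               # ms
true = gauss(t, 0.08, 60.0, 14.0, 0.0)                   # ~0.08 mV P wave
signal = true + rng.normal(0, 0.005, t.size)             # additive noise

p0 = [signal.max(), t[np.argmax(signal)], 10.0, 0.0]
(amp, center, sigma, base), _ = curve_fit(gauss, t, signal, p0=p0)

fwhm = 2 * np.sqrt(2 * np.log(2)) * sigma                # duration at half height (ms)
print(f"amplitude = {amp:.3f} mV, duration at half height = {fwhm:.1f} ms")
```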
Regional income inequality model based on Theil index decomposition and weighted variance coefficient
NASA Astrophysics Data System (ADS)
Sitepu, H. R.; Darnius, O.; Tambunan, W. N.
2018-03-01
Regional income inequality is an important issue in the study of the economic development of a region. Rapid economic development may not be in accordance with people's per capita income. Methods for measuring regional income inequality have been suggested by many experts. This research used the Theil index and the weighted variance coefficient to measure regional income inequality. Based on the Theil index, the decomposition of regional income into work-force productivity and work-force participation in regional income inequality can be presented as a linear relation. When the economic assumption for sector j, the sectoral income value, and the work-force rate are used, the work-force productivity imbalance can be decomposed into between-sector and intra-sector components. Next, a weighted variation coefficient is defined for the revenue and productivity of the work force. From the square of the weighted variation coefficient, it was found that the decomposition of the regional revenue imbalance can be analyzed by determining how far each component contributes to the regional imbalance, which in this research was analyzed for nine sectors of economic activity.
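A compact sketch of the Theil-T index and its between-/within-region decomposition on simulated incomes is given below; the regions and income distributions are hypothetical, and the weighted variation coefficient used alongside it in the paper is not shown.

```python
# Sketch: Theil-T index with between-/within-region decomposition on simulated incomes.
import numpy as np

rng = np.random.default_rng(16)
regions = {r: rng.lognormal(mean=mu, sigma=0.5, size=500)
           for r, mu in zip("ABCD", [1.0, 1.2, 1.5, 2.0])}

def theil(y):
    y = np.asarray(y); m = y.mean()
    return np.mean((y / m) * np.log(y / m))

all_y = np.concatenate(list(regions.values()))
mu, N = all_y.mean(), all_y.size

# Income share of each region weights its within-region Theil and the between term.
within = sum((y.size * y.mean()) / (N * mu) * theil(y) for y in regions.values())
between = sum((y.size * y.mean()) / (N * mu) * np.log(y.mean() / mu) for y in regions.values())

print(f"total Theil = {theil(all_y):.4f}")
print(f"within + between = {within + between:.4f}  (within {within:.4f}, between {between:.4f})")
```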
Independent EEG Sources Are Dipolar
Delorme, Arnaud; Palmer, Jason; Onton, Julie; Oostenveld, Robert; Makeig, Scott
2012-01-01
Independent component analysis (ICA) and blind source separation (BSS) methods are increasingly used to separate individual brain and non-brain source signals mixed by volume conduction in electroencephalographic (EEG) and other electrophysiological recordings. We compared results of decomposing thirteen 71-channel human scalp EEG datasets by 22 ICA and BSS algorithms, assessing the pairwise mutual information (PMI) in scalp channel pairs, the remaining PMI in component pairs, the overall mutual information reduction (MIR) effected by each decomposition, and decomposition ‘dipolarity’ defined as the number of component scalp maps matching the projection of a single equivalent dipole with less than a given residual variance. The least well-performing algorithm was principal component analysis (PCA); best performing were AMICA and other likelihood/mutual information based ICA methods. Though these and other commonly-used decomposition methods returned many similar components, across 18 ICA/BSS algorithms mean dipolarity varied linearly with both MIR and with PMI remaining between the resulting component time courses, a result compatible with an interpretation of many maximally independent EEG components as being volume-conducted projections of partially-synchronous local cortical field activity within single compact cortical domains. To encourage further method comparisons, the data and software used to prepare the results have been made available (http://sccn.ucsd.edu/wiki/BSSComparison). PMID:22355308
NASA Astrophysics Data System (ADS)
Feigin, Alexander; Gavrilov, Andrey; Loskutov, Evgeny; Mukhin, Dmitry
2015-04-01
Proper decomposition of a complex system into well separated "modes" is a way to reveal and understand the mechanisms governing the system behaviour as well as to discover essential feedbacks and nonlinearities. The decomposition is also a natural procedure for constructing adequate and at the same time simplest models of both the corresponding sub-systems and the system as a whole. In recent works two new methods of decomposition of the Earth's climate system into well separated modes were discussed. The first method [1-3] is based on MSSA (Multichannel Singular Spectrum Analysis) [4], a linear expansion of vector (space-distributed) time series, and makes allowance for delayed correlations of the processes recorded at spatially separated points. The second one [5-7] allows construction of nonlinear dynamic modes, but neglects delayed correlations. It was demonstrated [1-3] that the first method provides effective separation of different time scales but prevents correct reduction of the data dimension: the slope of the variance spectrum of the spatio-temporal empirical orthogonal functions that are the "structural material" for linear spatio-temporal modes is too flat. The second method overcomes this problem: the variance spectrum of the nonlinear modes falls much more sharply [5-7]. However, neglecting time-lag correlations introduces a mode-selection error that is uncontrolled and increases with the mode time scale. In this report we combine the two methods in such a way that the developed algorithm allows construction of nonlinear spatio-temporal modes. The algorithm is applied to the decomposition of (i) several hundred years of globally distributed data generated by the INM RAS Coupled Climate Model [8], and (ii) a 156-year time series of SST anomalies distributed over the globe [9]. We compare the efficiency of the different decomposition methods and discuss the ability of nonlinear spatio-temporal modes to support construction of adequate and at the same time simplest ("optimal") models of climate systems.
1. Feigin A.M., Mukhin D., Gavrilov A., Volodin E.M., and Loskutov E.M. (2013) "Separation of spatial-temporal patterns ("climatic modes") by combined analysis of really measured and generated numerically vector time series", AGU 2013 Fall Meeting, Abstract NG33A-1574.
2. Alexander Feigin, Dmitry Mukhin, Andrey Gavrilov, Evgeny Volodin, and Evgeny Loskutov (2014) "Approach to analysis of multiscale space-distributed time series: separation of spatio-temporal modes with essentially different time scales", Geophysical Research Abstracts, Vol. 16, EGU2014-6877.
3. Dmitry Mukhin, Dmitri Kondrashov, Evgeny Loskutov, Andrey Gavrilov, Alexander Feigin, and Michael Ghil (2014) "Predicting critical transitions in ENSO models, Part II: Spatially dependent models", Journal of Climate (accepted, doi: 10.1175/JCLI-D-14-00240.1).
4. Ghil, M., R. M. Allen, M. D. Dettinger, K. Ide, D. Kondrashov, et al. (2002) "Advanced spectral methods for climatic time series", Rev. Geophys. 40(1), 3.1-3.41.
5. Dmitry Mukhin, Andrey Gavrilov, Evgeny M Loskutov and Alexander M Feigin (2014) "Nonlinear Decomposition of Climate Data: a New Method for Reconstruction of Dynamical Modes", AGU 2014 Fall Meeting, Abstract NG43A-3752.
6. Andrey Gavrilov, Dmitry Mukhin, Evgeny Loskutov, and Alexander Feigin (2015) "Empirical decomposition of climate data into nonlinear dynamic modes", Geophysical Research Abstracts, Vol. 17, EGU2015-627.
7. Dmitry Mukhin, Andrey Gavrilov, Evgeny Loskutov, Alexander Feigin, and Juergen Kurths (2015) "Reconstruction of principal dynamical modes from climatic variability: nonlinear approach", Geophysical Research Abstracts, Vol. 17, EGU2015-5729.
8. http://83.149.207.89/GCM_DATA_PLOTTING/GCM_INM_DATA_XY_en.htm.
9. http://iridl.ldeo.columbia.edu/SOURCES/.KAPLAN/.EXTENDED/.v2/.ssta/.
NASA Astrophysics Data System (ADS)
Dai, Heng; Chen, Xingyuan; Ye, Ming; Song, Xuehang; Zachara, John M.
2017-05-01
Sensitivity analysis is an important tool for development and improvement of mathematical models, especially for complex systems with a high dimension of spatially correlated parameters. Variance-based global sensitivity analysis has gained popularity because it can quantify the relative contribution of uncertainty from different sources. However, its computational cost increases dramatically with the complexity of the considered model and the dimension of model parameters. In this study, we developed a new sensitivity analysis method that integrates the concept of variance-based method with a hierarchical uncertainty quantification framework. Different uncertain inputs are grouped and organized into a multilayer framework based on their characteristics and dependency relationships to reduce the dimensionality of the sensitivity analysis. A set of new sensitivity indices are defined for the grouped inputs using the variance decomposition method. Using this methodology, we identified the most important uncertainty source for a dynamic groundwater flow and solute transport model at the Department of Energy (DOE) Hanford site. The results indicate that boundary conditions and permeability field contribute the most uncertainty to the simulated head field and tracer plume, respectively. The relative contribution from each source varied spatially and temporally. By using a geostatistical approach to reduce the number of realizations needed for the sensitivity analysis, the computational cost of implementing the developed method was reduced to a practically manageable level. The developed sensitivity analysis method is generally applicable to a wide range of hydrologic and environmental problems that deal with high-dimensional spatially distributed input variables.
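A minimal sketch of the grouped, variance-based idea described above: the first-order sensitivity of a group of inputs is estimated as Var(E[Y | group]) / Var(Y) from plain Monte Carlo samples, here by conditioning on a coarse binning of the group. The toy model, the two groups ("boundary condition" and "permeability") and the sample sizes are stand-ins, not the Hanford flow and transport model or the authors' hierarchical estimator.

```python
import numpy as np

rng = np.random.default_rng(1)

def toy_model(bc, perm):
    """Stand-in for the groundwater model: output depends on a 'boundary
    condition' input group and a 'permeability' input group."""
    return np.sin(bc[:, 0]) + 0.3 * bc[:, 1] + 2.0 * perm[:, 0] * perm[:, 1]

n = 100_000
bc = rng.uniform(-np.pi, np.pi, size=(n, 2))       # boundary-condition group
perm = rng.uniform(0.0, 1.0, size=(n, 2))          # permeability-field group
y = toy_model(bc, perm)

def grouped_first_order(y, group, bins=30):
    """Estimate Var(E[Y | group]) / Var(Y) by conditioning on a coarse
    binning of the group's inputs (workable for low-dimensional groups)."""
    idx = np.zeros(len(y), dtype=int)
    for col in group.T:
        edges = np.quantile(col, np.linspace(0, 1, bins + 1)[1:-1])
        idx = idx * bins + np.digitize(col, edges)
    counts = np.bincount(idx)
    sums = np.bincount(idx, weights=y)
    occupied = counts > 0
    cond_means = sums[occupied] / counts[occupied]
    between_var = np.average((cond_means - y.mean())**2, weights=counts[occupied])
    return between_var / y.var()

print("S_boundary     ≈", round(grouped_first_order(y, bc), 2))
print("S_permeability ≈", round(grouped_first_order(y, perm), 2))
```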
Iterative image-domain decomposition for dual-energy CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Niu, Tianye; Dong, Xue; Petrongolo, Michael
2014-04-15
Purpose: Dual energy CT (DECT) imaging plays an important role in advanced imaging applications due to its capability of material decomposition. Direct decomposition via matrix inversion suffers from significant degradation of image signal-to-noise ratios, which reduces clinical values of DECT. Existing denoising algorithms achieve suboptimal performance since they suppress image noise either before or after the decomposition and do not fully explore the noise statistical properties of the decomposition process. In this work, the authors propose an iterative image-domain decomposition method for noise suppression in DECT, using the full variance-covariance matrix of the decomposed images. Methods: The proposed algorithm is formulated in the form of least-square estimation with smoothness regularization. Based on the design principles of a best linear unbiased estimator, the authors include the inverse of the estimated variance-covariance matrix of the decomposed images as the penalty weight in the least-square term. The regularization term enforces the image smoothness by calculating the square sum of neighboring pixel value differences. To retain the boundary sharpness of the decomposed images, the authors detect the edges in the CT images before decomposition. These edge pixels have small weights in the calculation of the regularization term. Distinct from the existing denoising algorithms applied on the images before or after decomposition, the method has an iterative process for noise suppression, with decomposition performed in each iteration. The authors implement the proposed algorithm using a standard conjugate gradient algorithm. The method performance is evaluated using an evaluation phantom (Catphan©600) and an anthropomorphic head phantom. The results are compared with those generated using direct matrix inversion with no noise suppression, a denoising method applied on the decomposed images, and an existing algorithm with similar formulation as the proposed method but with an edge-preserving regularization term. Results: On the Catphan phantom, the method maintains the same spatial resolution on the decomposed images as that of the CT images before decomposition (8 pairs/cm) while significantly reducing their noise standard deviation. Compared to that obtained by the direct matrix inversion, the noise standard deviation in the images decomposed by the proposed algorithm is reduced by over 98%. Without considering the noise correlation properties in the formulation, the denoising scheme degrades the spatial resolution to 6 pairs/cm for the same level of noise suppression. Compared to the edge-preserving algorithm, the method achieves better low-contrast detectability. A quantitative study is performed on the contrast-rod slice of Catphan phantom. The proposed method achieves lower electron density measurement error as compared to that by the direct matrix inversion, and significantly reduces the error variation by over 97%. On the head phantom, the method reduces the noise standard deviation of decomposed images by over 97% without blurring the sinus structures. Conclusions: The authors propose an iterative image-domain decomposition method for DECT. The method combines noise suppression and material decomposition into an iterative process and achieves both goals simultaneously. By exploring the full variance-covariance properties of the decomposed images and utilizing the edge predetection, the proposed algorithm shows superior performance on noise suppression with high image spatial resolution and low-contrast detectability.
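A simplified sketch in the spirit of the formulation summarized above: the decomposed images are estimated by least squares with the inverse of their estimated variance-covariance matrix as the data-fit weight, a quadratic smoothness penalty that is downweighted at pre-detected edges, and a conjugate-gradient solver. The 2x2 decomposition matrix, noise level, edge mask and regularization weight are invented for illustration (numpy and scipy assumed); this is not the published algorithm or its parameters.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

# Illustrative sizes and values; the 2x2 decomposition matrix, noise level,
# edge mask and weights are placeholders, not calibrated DECT quantities.
H = W = 64
rng = np.random.default_rng(2)
A = np.array([[0.8, 0.3], [0.4, 0.9]])                    # material decomposition matrix
x_true = np.zeros((2, H, W))
x_true[0, 16:48, 16:48] = 1.0                             # material-1 insert
x_true[1, 24:40, 24:40] = 0.5                             # material-2 insert
sigma = 0.05
c = np.einsum('ij,jhw->ihw', A, x_true) + sigma * rng.standard_normal((2, H, W))

A_inv = np.linalg.inv(A)
x_direct = np.einsum('ij,jhw->ihw', A_inv, c)             # direct inversion: noisy decomposed images
Sigma_inv = np.linalg.inv(sigma**2 * (A_inv @ A_inv.T))   # inverse variance-covariance of x_direct

# Pre-detected edges (taken from the true image here, for simplicity) get a tiny
# smoothness weight so boundaries are not blurred.
grad0 = np.abs(np.diff(x_true.sum(0), axis=0, append=0)) + np.abs(np.diff(x_true.sum(0), axis=1, append=0))
wgt = np.where(grad0 > 0, 1e-3, 1.0)
beta = 200.0

def forward_diff(x):
    gx = np.diff(x, axis=1, append=x[:, -1:, :])
    gy = np.diff(x, axis=2, append=x[:, :, -1:])
    return gx, gy

def normal_op(xflat):
    """Apply (Sigma^-1 + beta * D' M D) to x for the conjugate-gradient solver."""
    x = xflat.reshape(2, H, W)
    data = np.einsum('ij,jhw->ihw', Sigma_inv, x)
    gx, gy = forward_diff(x)
    reg = (-np.diff(wgt * gx, axis=1, prepend=0.0)
           - np.diff(wgt * gy, axis=2, prepend=0.0))
    return (data + beta * reg).ravel()

b = np.einsum('ij,jhw->ihw', Sigma_inv, x_direct).ravel()
x_hat, info = cg(LinearOperator((2 * H * W, 2 * H * W), matvec=normal_op), b, maxiter=300)
x_hat = x_hat.reshape(2, H, W)
patch = (slice(0, 1), slice(20, 28), slice(40, 46))       # a uniform patch inside the material-1 insert
print("noise std, direct vs regularized:",
      round(float(x_direct[patch].std()), 3), round(float(x_hat[patch].std()), 3))
```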
NASA Astrophysics Data System (ADS)
Tang, Kunkun; Congedo, Pietro M.; Abgrall, Rémi
2016-06-01
The Polynomial Dimensional Decomposition (PDD) is employed in this work for the global sensitivity analysis and uncertainty quantification (UQ) of stochastic systems subject to a moderate to large number of input random variables. Due to the intimate connection between the PDD and the Analysis of Variance (ANOVA) approaches, PDD is able to provide a simpler and more direct evaluation of the Sobol' sensitivity indices, when compared to the Polynomial Chaos expansion (PC). Unfortunately, the number of PDD terms grows exponentially with respect to the size of the input random vector, which makes the computational cost of standard methods unaffordable for real engineering applications. In order to address the problem of the curse of dimensionality, this work proposes essentially variance-based adaptive strategies aiming to build a cheap meta-model (i.e. surrogate model) by employing the sparse PDD approach with its coefficients computed by regression. Three levels of adaptivity are carried out in this paper: 1) the truncated dimensionality for ANOVA component functions, 2) the active dimension technique especially for second- and higher-order parameter interactions, and 3) the stepwise regression approach designed to retain only the most influential polynomials in the PDD expansion. During this adaptive procedure featuring stepwise regressions, the surrogate model representation keeps containing few terms, so that the cost to resolve repeatedly the linear systems of the least-squares regression problem is negligible. The size of the finally obtained sparse PDD representation is much smaller than the one of the full expansion, since only significant terms are eventually retained. Consequently, a much smaller number of calls to the deterministic model is required to compute the final PDD coefficients.
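A rough sketch of the sparse, regression-based idea: an orthonormal polynomial basis is assembled for uniform inputs, a simple forward stepwise regression retains only influential terms, and the squared coefficients of the retained terms give Sobol-type variance shares. The toy model, candidate basis and stopping rule are illustrative simplifications of the three-level adaptivity described above, not the authors' PDD implementation.

```python
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(3)
d, n = 5, 400
X = rng.uniform(-1, 1, size=(n, d))                # uniform inputs on [-1, 1]
y = X[:, 0] + 0.7 * X[:, 1]**2 + 0.5 * X[:, 0] * X[:, 2] + 0.05 * rng.standard_normal(n)
yc = y - y.mean()

def leg(k, x):
    """Legendre polynomial of degree k, normalized to unit variance under U(-1, 1)."""
    c = np.zeros(k + 1)
    c[k] = 1.0
    return legendre.legval(x, c) * np.sqrt(2 * k + 1)

# Candidate terms: univariate polynomials up to degree 3 and pairwise
# degree-1 interactions (a truncated ANOVA-style dictionary).
candidates, labels = [], []
for i in range(d):
    for k in range(1, 4):
        candidates.append(leg(k, X[:, i])); labels.append(f"x{i} deg {k}")
for i in range(d):
    for j in range(i + 1, d):
        candidates.append(leg(1, X[:, i]) * leg(1, X[:, j])); labels.append(f"x{i}*x{j}")
Phi = np.column_stack(candidates)

# Forward stepwise regression: greedily add the term most correlated with the
# current residual; stop when the residual sum of squares barely improves.
selected, resid = [], yc.copy()
for _ in range(Phi.shape[1]):
    scores = np.abs(Phi.T @ resid)
    scores[selected] = -np.inf
    trial = selected + [int(np.argmax(scores))]
    coef_trial, *_ = np.linalg.lstsq(Phi[:, trial], yc, rcond=None)
    new_resid = yc - Phi[:, trial] @ coef_trial
    if resid @ resid - new_resid @ new_resid < 1e-3 * (yc @ yc):
        break
    selected, resid = trial, new_resid

# With a (near-)orthonormal basis, squared coefficients of the retained terms
# approximate their partial variances, i.e. Sobol-type contributions.
coef, *_ = np.linalg.lstsq(Phi[:, selected], yc, rcond=None)
share = coef**2 / np.sum(coef**2)
for k in np.argsort(-share):
    print(f"{labels[selected[k]]:10s} variance share ≈ {share[k]:.2f}")
```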
NASA Astrophysics Data System (ADS)
Lafare, Antoine E. A.; Peach, Denis W.; Hughes, Andrew G.
2016-02-01
The daily groundwater level (GWL) response in the Permo-Triassic Sandstone aquifers in the Eden Valley, England (UK), has been studied using the seasonal trend decomposition by LOESS (STL) technique. The hydrographs from 18 boreholes in the Permo-Triassic Sandstone were decomposed into three components: seasonality, general trend and remainder. The decomposition was analysed first visually, then using tools involving a variance ratio, time-series hierarchical clustering and correlation analysis. Differences and similarities in decomposition pattern were explained using the physical and hydrogeological information associated with each borehole. The Penrith Sandstone exhibits vertical and horizontal heterogeneity, whereas groundwater hydrographs in the more homogeneous St Bees Sandstone are characterized by a well-defined seasonality, although exceptions can be identified. A stronger trend component is obtained in the silicified parts of the northern Penrith Sandstone, while the southern Penrith, containing the Brockram (breccias) Formation, shows a greater relative variability of the seasonal component. Other boreholes drilled as shallow/deep pairs show differences in responses, revealing the potential vertical heterogeneities within the Penrith Sandstone. The differences in bedrock characteristics between and within the Penrith and St Bees Sandstone formations appear to influence the GWL response. The de-seasonalized and de-trended GWL time series were then used to characterize the response, for example in terms of memory effect (autocorrelation analysis). By applying the STL method, it is possible to analyse GWL hydrographs, leading to a better conceptual understanding of groundwater flow. Thus, variation in groundwater response can be used to gain insight into the aquifer physical properties and understand differences in groundwater behaviour.
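For readers unfamiliar with the STL step, here is a minimal sketch (assuming statsmodels) on a synthetic daily groundwater-level series: the series is split into trend, seasonal and remainder components, and simple variance shares of the components are computed as one possible "variance ratio" style summary. The synthetic series and the summary statistic are illustrative, not the Eden Valley data or the paper's exact metric.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import STL

# Synthetic daily groundwater-level series: slow trend, annual cycle, noise.
rng = np.random.default_rng(4)
idx = pd.date_range("2005-01-01", periods=6 * 365, freq="D")
t = np.arange(idx.size)
gwl = 0.0008 * t + 1.5 * np.sin(2 * np.pi * t / 365.25) + 0.4 * rng.standard_normal(idx.size)
series = pd.Series(gwl, index=idx)

res = STL(series, period=365, robust=True).fit()

# Simple variance shares of the three STL components, one way of building a
# "variance ratio" style summary for comparing borehole hydrographs.
total = series.var()
for name, comp in [("trend", res.trend), ("seasonal", res.seasonal), ("remainder", res.resid)]:
    print(f"{name:9s} variance share ≈ {comp.var() / total:.2f}")
```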
NASA Astrophysics Data System (ADS)
Kasprzyk, J. R.; Reed, P. M.; Characklis, G. W.; Kirsch, B. R.
2010-12-01
This paper proposes and demonstrates a new interactive framework for sensitivity-informed de Novo programming, in which a learning approach to formulating decision problems can confront the deep uncertainty within water management problems. The framework couples global sensitivity analysis using Sobol’ variance decomposition with multiobjective evolutionary algorithms (MOEAs) to generate planning alternatives and test their robustness to new modeling assumptions and scenarios. We explore these issues within the context of a risk-based water supply management problem, where a city seeks the most efficient use of a water market. The case study examines a single city’s water supply in the Lower Rio Grande Valley (LRGV) in Texas, using both a 10-year planning horizon and an extreme single-year drought scenario. The city’s water supply portfolio comprises a volume of permanent rights to reservoir inflows and use of a water market through anticipatory thresholds for acquiring transfers of water through optioning and spot leases. Diagnostic information from the Sobol’ variance decomposition is used to create a sensitivity-informed problem formulation testing different decision variable configurations, with tradeoffs for the formulation solved using a MOEA. Subsequent analysis uses the drought scenario to expose tradeoffs between long-term and short-term planning and illustrate the impact of deeply uncertain assumptions on water availability in droughts. The results demonstrate water supply portfolios’ efficiency, reliability, and utilization of transfers in the water supply market and show how to adaptively improve the value and robustness of our problem formulations by evolving our definition of optimality to discover key tradeoffs.
NASA Astrophysics Data System (ADS)
Qing, Zhou; Weili, Jiao; Tengfei, Long
2014-03-01
The Rational Function Model (RFM) is a new generalized sensor model. It does not need the physical parameters of the sensor, yet it achieves an accuracy comparable to that of rigorous sensor models. At present, the main method for solving the rational polynomial coefficients (RPCs) is least squares estimation. However, when the number of coefficients is large or the control points are unevenly distributed, the classical least squares method loses its advantage because the design matrix becomes ill-conditioned. The Condition Index and Variance Decomposition Proportion (CIVDP) is a reliable method for diagnosing multicollinearity in the design matrix: it not only detects the multicollinearity but also locates the parameters involved and identifies the corresponding columns of the design matrix. In this paper, the CIVDP method is used to diagnose the ill-conditioning of the RFM and to find the multicollinearity in the normal matrix.
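A minimal sketch of the diagnostic itself, which is standard Belsley-style collinearity analysis: the design matrix is column-scaled, its SVD gives condition indices, and variance-decomposition proportions locate the parameters involved in each near-dependency. The illustrative matrix below is not an RFM design matrix; it simply contains two nearly collinear columns.

```python
import numpy as np

def civdp(design):
    """Belsley-style diagnostics: condition indices and variance-decomposition
    proportions (CIVDP) of a design matrix."""
    Xs = design / np.linalg.norm(design, axis=0)          # scale columns to unit length
    _, s, Vt = np.linalg.svd(Xs, full_matrices=False)
    cond_idx = s.max() / s                                # one condition index per singular value
    phi = (Vt.T ** 2) / s**2                              # phi[j, k]: parameter j, singular value k
    vdp = phi / phi.sum(axis=1, keepdims=True)            # proportions sum to 1 for each parameter
    return cond_idx, vdp

# Illustrative design matrix (not real RPC terms) with two nearly collinear columns.
rng = np.random.default_rng(5)
X = rng.uniform(-1, 1, size=(300, 4))
X[:, 3] = X[:, 2] + 1e-4 * rng.standard_normal(300)       # near-duplicate column -> ill-conditioning
cond_idx, vdp = civdp(X)
print("condition indices:", np.round(cond_idx, 1))
# A large condition index combined with two or more variance-decomposition
# proportions above ~0.5 flags the parameters involved in the near-dependency.
print("proportions at the largest condition index:", np.round(vdp[:, np.argmax(cond_idx)], 2))
```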
Hill, Mary C.
2010-01-01
Doherty and Hunt (2009) present important ideas for first-order second-moment sensitivity analysis, but five issues are discussed in this comment. First, considering the composite-scaled sensitivity (CSS) jointly with parameter correlation coefficients (PCC) in a CSS/PCC analysis addresses the difficulties with CSS mentioned in the introduction. Second, their new parameter identifiability statistic is actually likely to do a poor job of assessing parameter identifiability in common situations. The statistic instead performs the very useful role of showing how model parameters are included in the estimated singular value decomposition (SVD) parameters. Its close relation to CSS is shown. Third, the idea from p. 125 that a suitable truncation point for SVD parameters can be identified using the prediction variance is challenged using results from Moore and Doherty (2005). Fourth, the relative error reduction statistic of Doherty and Hunt is shown to belong to an emerging set of statistics here named perturbed calculated variance statistics. Finally, the perturbed calculated variance statistics OPR and PPR mentioned on p. 121 are shown to explicitly include the parameter null-space component of uncertainty. Indeed, OPR and PPR results that account for null-space uncertainty have appeared in the literature since 2000.
Ghosh, Sudipta; Dosaev, Tasbulat; Prakash, Jai; Livshits, Gregory
2017-04-01
The major aim of this study was to conduct a comparative quantitative-genetic analysis of body composition (BCP) and somatotype (STP) variation, as well as their correlations with blood pressure (BP), in two ethnically, culturally and geographically different populations: the Santhal, an indigenous ethnic group from India, and the Chuvash, an indigenous population from Russia. Correspondingly, two pedigree-based samples were collected, comprising 1,262 Santhal and 1,558 Chuvash individuals, respectively. At the first stage of the study, descriptive statistics and a series of univariate regression analyses were calculated. Finally, multiple and multivariate regression (MMR) analyses, with BP measurements as dependent variables and age, sex, BCP and STP as independent variables, were carried out in each sample separately. The significant and independent covariates of BP were identified and used for re-examination in a pedigree-based variance decomposition analysis. Despite clear and significant differences between the populations in BCP/STP, both the Santhal and the Chuvash were found to be predominantly mesomorphic irrespective of sex. According to the MMR analyses, variation in BP depended significantly on age and the mesomorphic component in both samples, and additionally on sex, ectomorphy and fat mass index in the Santhal sample and on fat-free mass index in the Chuvash sample. An additive genetic component contributes a substantial proportion of the blood pressure and body composition variance. In addition to the above results, variance component analysis suggests that additive genetic factors significantly influence the associations between BP and BCP/STP. © 2017 Wiley Periodicals, Inc.
A variance-decomposition approach to investigating multiscale habitat associations
Lawler, J.J.; Edwards, T.C.
2006-01-01
The recognition of the importance of spatial scale in ecology has led many researchers to take multiscale approaches to studying habitat associations. However, few of the studies that investigate habitat associations at multiple spatial scales have considered the potential effects of cross-scale correlations in measured habitat variables. When cross-scale correlations in such studies are strong, conclusions drawn about the relative strength of habitat associations at different spatial scales may be inaccurate. Here we adapt and demonstrate an analytical technique based on variance decomposition for quantifying the influence of cross-scale correlations on multiscale habitat associations. We used the technique to quantify the variation in nest-site locations of Red-naped Sapsuckers (Sphyrapicus nuchalis) and Northern Flickers (Colaptes auratus) associated with habitat descriptors at three spatial scales. We demonstrate how the method can be used to identify components of variation that are associated only with factors at a single spatial scale as well as shared components of variation that represent cross-scale correlations. Despite the fact that no explanatory variables in our models were highly correlated (r < 0.60), we found that shared components of variation reflecting cross-scale correlations accounted for roughly half of the deviance explained by the models. These results highlight the importance of both conducting habitat analyses at multiple spatial scales and of quantifying the effects of cross-scale correlations in such analyses. Given the limits of conventional analytical techniques, we recommend alternative methods, such as the variance-decomposition technique demonstrated here, for analyzing habitat associations at multiple spatial scales. ?? The Cooper Ornithological Society 2006.
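A minimal sketch of the variance-decomposition (variation-partitioning) logic on synthetic data with two spatial scales (the study used three): unique components are obtained from differences of R² values of nested models, and the shared component reflecting cross-scale correlation is what remains. Variable names and effect sizes are invented.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 500
# Two habitat descriptors per scale; cross-scale correlation is induced on purpose.
local = rng.standard_normal((n, 2))
landscape = 0.6 * local + 0.8 * rng.standard_normal((n, 2))
y = 1.0 * local[:, 0] + 0.8 * landscape[:, 1] + rng.standard_normal(n)   # e.g. a nest-site index

def r2(X, y):
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

r2_local = r2(local, y)
r2_land = r2(landscape, y)
r2_both = r2(np.column_stack([local, landscape]), y)

unique_local = r2_both - r2_land                 # explained only by local-scale variables
unique_land = r2_both - r2_local                 # explained only by landscape-scale variables
shared = r2_both - unique_local - unique_land    # cross-scale (shared) component
print(f"unique local {unique_local:.2f}, unique landscape {unique_land:.2f}, shared {shared:.2f}")
```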
Mean-Variance Hedging on Uncertain Time Horizon in a Market with a Jump
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kharroubi, Idris, E-mail: kharroubi@ceremade.dauphine.fr; Lim, Thomas, E-mail: lim@ensiie.fr; Ngoupeyou, Armand, E-mail: armand.ngoupeyou@univ-paris-diderot.fr
2013-12-15
In this work, we study the problem of mean-variance hedging with a random horizon T∧τ, where T is a deterministic constant and τ is a jump time of the underlying asset price process. We first formulate this problem as a stochastic control problem and relate it to a system of BSDEs with a jump. We then provide a verification theorem which gives the optimal strategy for the mean-variance hedging using the solution of the previous system of BSDEs. Finally, we prove that this system of BSDEs admits a solution via a decomposition approach coming from filtration enlargement theory.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dong, X; Petrongolo, M; Wang, T
Purpose: A general problem of dual-energy CT (DECT) is that the decomposition is sensitive to noise in the two sets of dual-energy projection data, resulting in severely degraded quality of the decomposed images. We have previously proposed an iterative denoising method for DECT. Using a linear decomposition function, the method does not gain the full benefits of DECT on beam-hardening correction. In this work, we expand the framework of our iterative method to include non-linear decomposition models for noise suppression in DECT. Methods: We first obtain decomposed projections, which are free of beam-hardening artifacts, using a lookup table pre-measured on a calibration phantom. First-pass material images with high noise are reconstructed from the decomposed projections using standard filtered-backprojection reconstruction. Noise on the decomposed images is then suppressed by an iterative method, which is formulated in the form of least-square estimation with smoothness regularization. Based on the design principles of a best linear unbiased estimator, we include the inverse of the estimated variance-covariance matrix of the decomposed images as the penalty weight in the least-square term. Analytical formulae are derived to compute the variance-covariance matrix from the measured decomposition lookup table. Results: We have evaluated the proposed method via phantom studies. Using non-linear decomposition, our method effectively suppresses the streaking artifacts of beam-hardening and obtains more uniform images than our previous approach based on a linear model. The proposed method reduces the average noise standard deviation of two basis materials by one order of magnitude without sacrificing the spatial resolution. Conclusion: We propose a general framework of iterative denoising for material decomposition of DECT. Preliminary phantom studies have shown that the proposed method improves the image uniformity and reduces the noise level without resolution loss. In the future, we will perform more phantom studies to further validate the performance of the proposed method. This work is supported by a Varian MRA grant.
Global sensitivity analysis in stochastic simulators of uncertain reaction networks.
Navarro Jimenez, M; Le Maître, O P; Knio, O M
2016-12-28
Stochastic models of chemical systems are often subjected to uncertainties in kinetic parameters in addition to the inherent random nature of their dynamics. Uncertainty quantification in such systems is generally achieved by means of sensitivity analyses in which one characterizes the variability of the first statistical moments of model predictions with respect to the uncertain kinetic parameters. In this work, we propose an original global sensitivity analysis method where the parametric and inherent variability sources are both treated through Sobol's decomposition of the variance into contributions from arbitrary subsets of uncertain parameters and stochastic reaction channels. The conceptual development only assumes that the inherent and parametric sources are independent, and considers the Poisson processes in the random-time-change representation of the state dynamics as the fundamental objects governing the inherent stochasticity. A sampling algorithm is proposed to perform the global sensitivity analysis, and to estimate the partial variances and sensitivity indices characterizing the importance of the various sources of variability and their interactions. The birth-death and Schlögl models are used to illustrate both the implementation of the algorithm and the richness of the proposed analysis method. The output of the proposed sensitivity analysis is also contrasted with a local derivative-based sensitivity analysis method classically used for this type of system.
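As a simple illustration of splitting variance between parameter uncertainty and inherent stochasticity, the sketch below uses plain nested Monte Carlo on a birth-death process: the parametric share is the variance over parameters of the conditional mean divided by the total variance, and the remainder is attributed to the inherent noise and its interactions. This is not the authors' estimator based on the random-time-change representation; priors, rates and sample sizes are invented, and the nested estimate carries a small inner-loop bias.

```python
import numpy as np

rng = np.random.default_rng(7)

def birth_death_ssa(birth, death, x0=10, t_end=5.0):
    """Gillespie simulation of a birth-death process; returns the state at t_end."""
    x, t = x0, 0.0
    while True:
        total = birth + death * x
        if total == 0.0:
            return x
        t += rng.exponential(1.0 / total)
        if t > t_end:
            return x
        x += 1 if rng.random() < birth / total else -1

# Nested Monte Carlo: outer loop over uncertain kinetic parameters (illustrative
# uniform priors), inner loop over the inherent stochasticity of the SSA.
n_outer, n_inner = 150, 60
cond_means, all_draws = [], []
for _ in range(n_outer):
    birth = rng.uniform(5.0, 15.0)
    death = rng.uniform(0.5, 1.5)
    draws = [birth_death_ssa(birth, death) for _ in range(n_inner)]
    cond_means.append(np.mean(draws))
    all_draws.extend(draws)

s_param = np.var(cond_means) / np.var(all_draws)   # Var_theta(E[X|theta]) / Var(X)
print(f"parametric share ≈ {s_param:.2f}, inherent (plus interactions) ≈ {1 - s_param:.2f}")
```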
2015-12-01
The material flow account of Tangshan City was established by the material flow analysis (MFA) method to analyze the periodic characteristics of material input and output in the operation of the economy-environment system, and the impact of material input and output intensities on economic development. Using an econometric model, the long-term interaction mechanism and relationship among gross domestic product (GDP), direct material input (DMI) and domestic processed output (DPO) were investigated by means of unit root hypothesis tests, the Johansen cointegration test, a vector error correction model, impulse response functions and variance decomposition. The results showed that during 1992-2011, DMI and DPO both increased, and the growth rate of DMI was higher than that of DPO. The input intensity of DMI increased, while the intensity of DPO declined with fluctuations. A long-term stable cointegration relationship existed between GDP, DMI and DPO. Their interaction relationship showed a trend from fluctuation to gradual steadiness. DMI and DPO had strong, positive impacts on economic development in the short term, but the economy-environment system gradually weakened these effects by dynamically adjusting indicators inside and outside the system in the short term. Ultimately, the system showed a long-term equilibrium relationship. The effect of economic scale on the economy gradually increased. After decomposing the contribution of each index to GDP, it was found that DMI's contribution grew, GDP's own contribution declined, and DPO's contribution changed little. On the whole, the economic development of Tangshan City has followed the traditional production path of a resource-based city, depending mostly on material input, which caused high energy consumption and serious environmental pollution.
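For the variance-decomposition step mentioned above, a minimal sketch with statsmodels on synthetic growth-rate series standing in for GDP, DMI and DPO: a VAR is fitted and the forecast-error variance decomposition attributes each variable's forecast uncertainty to the three shocks. The study itself worked with a vector error correction model and real 1992-2011 data, so this only illustrates the mechanics.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

# Synthetic growth-rate series standing in for GDP, DMI and DPO.
rng = np.random.default_rng(8)
n = 60
e = rng.standard_normal((n, 3)) * [0.02, 0.03, 0.03]
g = np.zeros((n, 3))                                      # columns: GDP, DMI, DPO growth
for t in range(1, n):
    g[t, 1] = 0.03 + 0.3 * g[t - 1, 1] + e[t, 1]                         # DMI
    g[t, 0] = 0.04 + 0.2 * g[t - 1, 0] + 0.4 * g[t - 1, 1] + e[t, 0]     # GDP responds to lagged DMI
    g[t, 2] = 0.02 + 0.5 * g[t - 1, 1] + e[t, 2]                         # DPO responds to lagged DMI
growth = pd.DataFrame(g[1:], columns=["GDP", "DMI", "DPO"])

res = VAR(growth).fit(maxlags=2, ic="aic")
irf = res.irf(10)            # impulse responses over 10 periods (irf.plot() would display them)
fevd = res.fevd(10)          # forecast-error variance decomposition
# Share of each variable's 10-step forecast-error variance attributable to
# GDP, DMI and DPO shocks (rows: variables, columns: shock sources).
print(np.round(fevd.decomp[:, -1, :], 2))
```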
Young, Julia M; Morgan, Benjamin R; Mišić, Bratislav; Schweizer, Tom A; Ibrahim, George M; Macdonald, R Loch
2015-12-01
Individuals who have aneurysmal subarachnoid hemorrhages (SAHs) experience decreased health-related qualities of life (HRQoLs) that persist after the primary insult. We aimed to identify clinical variables that concurrently associate with HRQoL outcomes by using a partial least-squares approach, which has the distinct advantage of explaining multidimensional variance where predictor variables may be highly collinear. Data collected from the CONSCIOUS-1 trial were used to extract 29 clinical variables, including SAH presentation, hospital procedures, and demographic information, in addition to 5 HRQoL outcome variables for 256 individuals. A partial least-squares analysis was performed by calculating a heterogeneous correlation matrix and applying singular value decomposition to determine components that best represent the correlations between the 2 sets of variables. Bootstrapping was used to estimate statistical significance. The first 2 components, accounting for 81.6% and 7.8% of the total variance, revealed significant associations between clinical predictors and HRQoL outcomes. The first component identified associations of disability in self-care with longer durations of critical care stay, invasive intracranial monitoring, ventricular drain time, poorer clinical grade on presentation, greater amounts of cerebrospinal fluid drainage, and a history of hypertension. The second component identified associations of disability due to pain and discomfort, as well as anxiety and depression, with greater body mass index, abnormal heart rate, longer durations of deep sedation and critical care, and higher World Federation of Neurosurgical Societies and Hijdra scores. By applying a data-driven, multivariate approach, we identified robust associations between SAH clinical presentations and HRQoL outcomes. Abbreviations: EQ-VAS, EuroQoL visual analog scale; HRQoL, health-related quality of life; ICU, intensive care unit; IVH, intraventricular hemorrhage; PLS, partial least squares; SAH, subarachnoid hemorrhage; SVD, singular value decomposition; WFNS, World Federation of Neurosurgical Societies.
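A minimal sketch of the partial least-squares step described above, on synthetic stand-in data: the cross-correlation matrix between predictors and outcomes is decomposed by SVD, the squared singular values give the share of cross-block association per component, and the singular vectors (saliences) link predictors to outcomes. The study used a heterogeneous correlation matrix for mixed variable types and bootstrap significance testing; plain Pearson correlations and invented data are used here.

```python
import numpy as np

rng = np.random.default_rng(9)
n, p, q = 256, 29, 5
# Invented stand-ins: 29 clinical predictors and 5 HRQoL outcomes sharing one latent factor.
latent = rng.standard_normal(n)
X = np.outer(latent, rng.uniform(0.2, 0.8, p)) + rng.standard_normal((n, p))
Y = np.outer(latent, rng.uniform(0.3, 0.9, q)) + rng.standard_normal((n, q))

# PLS via SVD of the predictor-outcome correlation matrix (plain Pearson here;
# the study used a heterogeneous correlation matrix and bootstrap resampling).
Xz = (X - X.mean(0)) / X.std(0)
Yz = (Y - Y.mean(0)) / Y.std(0)
R = Xz.T @ Yz / n                            # p x q cross-correlation matrix
U, s, Vt = np.linalg.svd(R, full_matrices=False)

explained = s**2 / np.sum(s**2)              # share of cross-block association per component
print("component shares:", np.round(explained, 3))
print("strongest predictor saliences (indices):", np.argsort(-np.abs(U[:, 0]))[:5])
print("outcome saliences of component 1:", np.round(Vt[0], 2))
```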
Global-scale modes of surface temperature variability on interannual to century timescales
NASA Technical Reports Server (NTRS)
Mann, Michael E.; Park, Jeffrey
1994-01-01
Using 100 years of global temperature anomaly data, we have performed a singular value decomposition of temperature variations in narrow frequency bands to isolate coherent spatio-temporal modes of global climate variability. Statistical significance is determined from confidence limits obtained by Monte Carlo simulations. Secular variance is dominated by a globally coherent trend, with nearly all grid points warming in phase at varying amplitude. A smaller, but significant, share of the secular variance corresponds to a pattern dominated by warming and subsequent cooling in the high-latitude North Atlantic with a roughly centennial timescale. Spatial patterns associated with significant peaks in variance within a broad period range from 2.8 to 5.7 years exhibit characteristic El Nino-Southern Oscillation (ENSO) patterns. A recent transition to a regime of higher ENSO frequency is suggested by our analysis. An interdecadal mode with a 15-to-18-year period and a mode centered at a 7-to-8-year period both exhibit predominantly a North Atlantic Oscillation (NAO) temperature pattern. A potentially significant decadal mode centered on an 11-to-12-year period also exhibits an NAO temperature pattern and may be modulated by the century-scale North Atlantic variability.
Dedollarization in Turkey after decades of dollarization: A myth or reality?
NASA Astrophysics Data System (ADS)
Metin-Özcan, Kıvılcım; Us, Vuslat
2007-11-01
The paper analyzes dollarization in the Turkish economy given the evidence on dedollarization signals. On conducting a Vector Autoregression (VAR) model, the empirical evidence suggests that dollarization has mostly been shaped by macroeconomic imbalances as measured by exchange rate depreciation volatility, inflation volatility and expectations. Furthermore, the generalized impulse response function (IRF) analysis, in addition to the analysis of variance decomposition (VDC) gives support to the notion that dollarization seems to sustain its persistent nature, thus hysteresis still prevails. Hence, unfavorable macroeconomic conditions apparently contribute to dollarization while dollarization itself contains inertia. Furthermore, dedollarization that presumably started after 2001 has lost headway after May 2006. Thus, it seems too early to conclude that dollarization changed its route to dedollarization.
Tang, Jing; Yurova, Alla Y; Schurgers, Guy; Miller, Paul A; Olin, Stefan; Smith, Benjamin; Siewert, Matthias B; Olefeldt, David; Pilesjö, Petter; Poska, Anneli
2018-05-01
Tundra soils account for 50% of global stocks of soil organic carbon (SOC), and it is expected that the amplified climate warming at high latitudes could cause loss of this SOC through decomposition. Decomposed SOC could become hydrologically accessible, which would increase downstream dissolved organic carbon (DOC) export and subsequent carbon release to the atmosphere, constituting a positive feedback to climate warming. However, DOC export is often neglected in ecosystem models. In this paper, we incorporate processes related to DOC production, mineralization, diffusion, sorption-desorption, and leaching into a customized arctic version of the dynamic ecosystem model LPJ-GUESS in order to mechanistically model catchment DOC export, and to link this flux to other ecosystem processes. The extended LPJ-GUESS is compared to observed DOC export at Stordalen catchment in northern Sweden. Vegetation communities include flood-tolerant graminoids (Eriophorum) and Sphagnum moss, birch forest and dwarf shrub communities. The processes of sorption-desorption and microbial decomposition (DOC production and mineralization) are found to contribute most to the variance in DOC export, based on a detailed variance-based Sobol sensitivity analysis (SA) at the grid-cell level. Catchment-level SA shows that the highest mean DOC exports come from the Eriophorum peatland (fen). A comparison with observations shows that the model captures the seasonality of DOC fluxes. Two catchment simulations, one without water lateral routing and one without peatland processes, were compared with the catchment simulations with all processes. The comparison showed that the current implementation of catchment lateral flow and peatland processes in LPJ-GUESS is essential to capture catchment-level DOC dynamics and indicates that the model is at an appropriate level of complexity to represent the main mechanisms of DOC dynamics in soils. The extended model provides a new tool to investigate potential interactions among climate change, vegetation dynamics, soil hydrology and DOC dynamics at both stand-alone and catchment scales. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Efremova, T. T.; Avrova, A. F.; Efremov, S. P.
2016-09-01
The approaches of multivariate statistics have been used for the numerical classification of morphogenetic types of moss litters in swampy spruce forests according to their physicochemical properties (the ash content, decomposition degree, bulk density, pH, mass, and thickness). Three clusters of moss litters—peat, peaty, and high-ash peaty—have been specified. The functions of classification for identification of new objects have been calculated and evaluated. The degree of decomposition and the ash content are the main classification parameters of litters, though all other characteristics are also statistically significant. The final prediction accuracy of the assignment of a litter to a particular cluster is 86%. Two leading factors participating in the clustering of litters have been determined. The first factor—the degree of transformation of plant remains (quality)—specifies 49% of the total variance, and the second factor—the accumulation rate (quantity)—specifies 26% of the total variance. The morphogenetic structure and physicochemical properties of the clusters of moss litters are characterized.
Enhanced algorithms for stochastic programming
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krishna, Alamuru S.
1993-09-01
In this dissertation, we present some of the recent advances made in solving two-stage stochastic linear programming problems of large size and complexity. Decomposition and sampling are two fundamental components of techniques to solve stochastic optimization problems. We describe improvements to the current techniques in both these areas. We studied different ways of using importance sampling techniques in the context of stochastic programming, by varying the choice of approximation functions used in this method. We concluded that approximating the recourse function by a computationally inexpensive piecewise-linear function is highly efficient. This reduces the problem from finding the mean of a computationally expensive function to finding that of a computationally inexpensive one. We then implemented various variance reduction techniques to estimate the mean of the piecewise-linear function. This method achieved similar variance reductions in orders of magnitude less time than applying variance-reduction techniques directly to the given problem. In solving a stochastic linear program, the expected value problem is usually solved before the stochastic problem, and the information obtained from its solution is used to speed up the algorithm. We have devised a new decomposition scheme to improve the convergence of this algorithm.
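One standard variance-reduction device in the spirit described above is sketched below: a cheap piecewise-linear approximation of an expensive recourse function, whose mean is known in closed form, is used as a control variate. The dissertation itself used importance sampling built on such an approximation, so this is an illustration of the general idea rather than its algorithm; the recourse function and scenario distribution are invented.

```python
import numpy as np

rng = np.random.default_rng(10)

def recourse(omega):
    """Stand-in for an expensive second-stage (recourse) value function."""
    return np.maximum(0.0, omega - 1.0) ** 1.5 + 0.2 * np.sin(3.0 * omega)

def recourse_pl(omega):
    """Cheap piecewise-linear approximation of the recourse function."""
    return np.maximum(0.0, 0.9 * (omega - 1.0))

# For omega ~ N(1, 1), E[max(0, omega - 1)] = 1/sqrt(2*pi), so the mean of the
# piecewise-linear proxy is known exactly and it can serve as a control variate.
mu_pl = 0.9 / np.sqrt(2.0 * np.pi)

n = 20_000
omega = rng.normal(1.0, 1.0, n)
q, l = recourse(omega), recourse_pl(omega)

c = np.cov(q, l)[0, 1] / l.var()             # (near-)optimal control-variate coefficient
plain = q.mean()
cv = (q - c * (l - mu_pl)).mean()
print(f"plain MC        {plain:.4f}  (std err {q.std() / np.sqrt(n):.4f})")
print(f"control variate {cv:.4f}  (std err {(q - c * l).std() / np.sqrt(n):.4f})")
```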
Reactive control and reasoning assistance for scientific laboratory instruments
NASA Technical Reports Server (NTRS)
Thompson, David E.; Levinson, Richard; Robinson, Peter
1993-01-01
Scientific laboratory instruments that are involved in chemical or physical sample identification frequently require substantial human preparation, attention, and interactive control during their operation. Successful real-time analysis of incoming data that supports such interactive control requires: (1) a clear recognition of variance of the data from expected results; and (2) rapid diagnosis of possible alternative hypotheses which might explain the variance. Such analysis then aids in decisions about modifying the experiment protocol, as well as being a goal itself. This paper reports on a collaborative project at the NASA Ames Research Center between artificial intelligence researchers and planetary microbial ecologists. Our team is currently engaged in developing software that autonomously controls science laboratory instruments and that provides analysis of the real-time data in support of dynamic refinement of the experiment control. The first two instruments to which this technology has been applied are a differential thermal analyzer (DTA) and a gas chromatograph (GC). Coupled together, they form a new geochemistry and microbial analysis tool that is capable of rapid identification of the organic and mineralogical constituents in soils. The thermal decomposition of the minerals and organics, and the attendant release of evolved gases, provides data about the structural and molecular chemistry of the soil samples.
Performance of Language-Coordinated Collective Systems: A Study of Wine Recognition and Description
Zubek, Julian; Denkiewicz, Michał; Dębska, Agnieszka; Radkowska, Alicja; Komorowska-Mach, Joanna; Litwin, Piotr; Stępień, Magdalena; Kucińska, Adrianna; Sitarska, Ewa; Komorowska, Krystyna; Fusaroli, Riccardo; Tylén, Kristian; Rączaszek-Leonardi, Joanna
2016-01-01
Most of our perceptions of and engagements with the world are shaped by our immersion in social interactions, cultural traditions, tools and linguistic categories. In this study we experimentally investigate the impact of two types of language-based coordination on the recognition and description of complex sensory stimuli: that of red wine. Participants were asked to taste, remember and successively recognize samples of wines within a larger set in a two-by-two experimental design: (1) either individually or in pairs, and (2) with or without the support of a sommelier card—a cultural linguistic tool designed for wine description. Both the effectiveness of recognition and the kinds of errors in the four conditions were analyzed. While our experimental manipulations did not impact recognition accuracy, bias-variance decomposition of error revealed non-trivial differences in how participants solved the task. Pairs generally displayed reduced bias and increased variance compared to individuals; however, the variance dropped significantly when they used the sommelier card. The effect of the sommelier card in reducing variance was observed only in pairs; individuals did not seem to benefit from the cultural linguistic tool. Analysis of descriptions generated with the aid of sommelier cards shows that pairs were more coherent and discriminative than individuals. The findings are discussed in terms of global properties and dynamics of collective systems when constrained by different types of cultural practices. PMID:27729875
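For readers who want the arithmetic behind the bias-variance decomposition of error used above, a minimal sketch on invented recognition scores: the mean squared error of a set of responses splits into squared bias plus variance, and the two illustrative conditions differ in how that split falls. The numbers are not the study's data.

```python
import numpy as np

rng = np.random.default_rng(11)

def bias_variance(responses, truth):
    """Split the mean squared error of a set of responses into bias^2 and variance."""
    responses = np.asarray(responses, dtype=float)
    bias = responses.mean() - truth
    return bias**2, responses.var(), bias**2 + responses.var()

truth = 0.8                                  # notional true recognizable fraction
individuals = rng.normal(0.68, 0.10, 40)     # invented: more biased and more variable
pairs_card = rng.normal(0.74, 0.05, 40)      # invented: card reduces spread in pairs

for name, scores in [("individuals", individuals), ("pairs + sommelier card", pairs_card)]:
    b2, var, mse = bias_variance(scores, truth)
    print(f"{name:22s} bias^2 = {b2:.4f}  variance = {var:.4f}  mse = {mse:.4f}")
```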
NASA Astrophysics Data System (ADS)
Varghese, Bino; Hwang, Darryl; Mohamed, Passant; Cen, Steven; Deng, Christopher; Chang, Michael; Duddalwar, Vinay
2017-11-01
Purpose: To evaluate the potential use of wavelet analysis in discriminating benign from malignant renal masses (RM). Materials and Methods: Regions of interest of the whole lesion were manually segmented and co-registered from multiphase CT acquisitions of 144 patients (98 malignant RM: renal cell carcinoma (RCC); and 46 benign RM: oncocytoma, lipid-poor angiomyolipoma). Here, the Haar wavelet was used to analyze the grayscale images of the largest segmented tumor in the axial direction. Six metrics (energy, entropy, homogeneity, contrast, standard deviation (SD) and variance) derived from 3 levels of image decomposition in 3 directions (horizontal, vertical and diagonal) were used to quantify tumor texture. Independent t-tests or Wilcoxon rank sum tests, depending on data normality, were used for exploratory univariate analysis. Stepwise logistic regression and receiver operating characteristic (ROC) curve analysis were used to select predictors and assess prediction accuracy, respectively. Results: Consistently, 5 out of 6 wavelet-based texture measures (all except homogeneity) were higher for malignant tumors compared to benign tumors, when accounting for individual texture direction. Homogeneity was consistently lower in malignant than benign tumors irrespective of direction. SD and variance measured in the diagonal direction on the corticomedullary phase showed a significant (p<0.05) difference between benign and malignant tumors. The multivariate model with variance (3 directions) and SD (vertical direction), extracted from the excretory and pre-contrast phases, respectively, showed an area under the ROC curve (AUC) of 0.78 (p < 0.05) in discriminating malignant from benign masses. Conclusion: Wavelet analysis is a valuable texture evaluation tool to add to radiomics platforms geared at reliably characterizing and stratifying renal masses.
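A minimal sketch of the wavelet texture step (assuming the PyWavelets package): a three-level 2-D Haar decomposition of a stand-in ROI, with energy, a simple entropy, SD and variance computed per detail subband in the horizontal, vertical and diagonal directions. Homogeneity and contrast, which the study also used, are omitted here, and the ROI is synthetic.

```python
import numpy as np
import pywt

rng = np.random.default_rng(12)
# Stand-in for a segmented tumour ROI on a grayscale CT slice (values arbitrary).
roi = rng.normal(100.0, 20.0, size=(64, 64)) + 30.0 * np.sin(np.linspace(0, 6 * np.pi, 64))[None, :]

# Three-level 2-D Haar decomposition:
# coeffs = [approx, (H3, V3, D3), (H2, V2, D2), (H1, V1, D1)]
coeffs = pywt.wavedec2(roi, "haar", level=3)

def subband_stats(c):
    """Energy, entropy of the normalized coefficient-energy distribution, SD and variance."""
    energy = float(np.sum(c**2))
    p = c.ravel()**2 / energy
    entropy = float(-np.sum(p * np.log2(p + 1e-12)))
    return energy, entropy, float(c.std()), float(c.var())

for i, details in enumerate(coeffs[1:], start=1):
    for name, c in zip(("horizontal", "vertical", "diagonal"), details):
        e, h, sd, var = subband_stats(c)
        print(f"level {4 - i} {name:10s} energy={e:10.1f} entropy={h:5.2f} sd={sd:6.2f} var={var:9.2f}")
```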
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gallagher, Neal B.; Blake, Thomas A.; Gassman, Paul L.
2006-07-01
Multivariate curve resolution (MCR) is a powerful technique for extracting chemical information from measured spectra on complex mixtures. The difficulty with applying MCR to soil reflectance measurements is that light scattering artifacts can contribute much more variance to the measurements than the analyte(s) of interest. Two methods were integrated into an MCR decomposition to account for light scattering effects. First, an extended mixture model using pure analyte spectra augmented with scattering 'spectra' was used for the measured spectra. Second, second-derivative preprocessed spectra, which have higher selectivity than the unprocessed spectra, were included in a second block as part of the decomposition. The conventional alternating least squares (ALS) algorithm was modified to simultaneously decompose the measured and second-derivative spectra in a two-block decomposition. Equality constraints were also included to incorporate information about sampling conditions. The result was an MCR decomposition that provided interpretable spectra from soil reflectance measurements.
Kuesten, Carla; Bi, Jian
2018-06-03
Conventional drivers-of-liking analysis was extended with a time dimension into temporal drivers of liking (TDOL), based on functional data analysis methodology and non-additive models for multiple-attribute time-intensity (MATI) data. The non-additive models, which consider both direct effects and interaction effects of attributes on consumer overall liking, include the Choquet integral and fuzzy measures from multi-criteria decision-making, and linear regression based on variance decomposition. The dynamics of TDOL, i.e., the derivatives of the relative-importance functional curves, were also explored. The well-established R packages 'fda', 'kappalab' and 'relaimpo' were used in the paper for developing TDOL. Applied use of these methods shows that the relative importance of MATI curves offers insights for understanding the temporal aspects of consumer liking for fruit chews.
Amplification and dampening of soil respiration by changes in temperature variability
Sierra, C.A.; Harmon, M.E.; Thomann, E.; Perakis, S.S.; Loescher, H.W.
2011-01-01
Accelerated release of carbon from soils is one of the most important feedbacks related to anthropogenically induced climate change. Studies addressing the mechanisms for soil carbon release through organic matter decomposition have focused on the effect of changes in the average temperature, with little attention to changes in temperature variability. Anthropogenic activities are likely to modify both the average state and the variability of the climatic system; therefore, the effects of future warming on decomposition should focus not only on trends in the average temperature but also on variability, expressed as a change of the probability distribution of temperature. Using analytical and numerical analyses we tested common relationships between temperature and respiration and found that the variability of temperature plays an important role in determining respiration rates of soil organic matter. Changes in temperature variability, without changes in the average temperature, can affect the amount of carbon released through respiration over the long term. Furthermore, simultaneous changes in the average and variance of temperature can either amplify or dampen the release of carbon through soil respiration as climate regimes change. The effects depend on the degree of convexity of the relationship between temperature and respiration and the magnitude of the change in temperature variance. A potential consequence of this effect of variability would be higher respiration in regions where both the mean and variance of temperature are expected to increase, such as in some low-latitude regions, and lower amounts of respiration where the average temperature is expected to increase and the variance to decrease, such as in northern high latitudes.
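The convexity argument above can be checked numerically in a few lines: with a convex Q10-type temperature-respiration curve, increasing the temperature variance at a fixed mean raises mean respiration (Jensen's inequality). The Q10 value, mean temperature and variances below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(13)

def respiration(temp, r0=1.0, q10=2.0, t_ref=10.0):
    """Convex Q10-type temperature-respiration relationship (illustrative parameters)."""
    return r0 * q10 ** ((temp - t_ref) / 10.0)

mean_t = 10.0
for sd in (0.0, 2.0, 4.0, 6.0):
    temps = rng.normal(mean_t, sd, 1_000_000)
    print(f"temperature SD {sd:3.1f} C -> mean respiration {respiration(temps).mean():.3f}")
# Because the response is convex, larger temperature variance raises mean respiration
# even though the mean temperature stays at 10 C (Jensen's inequality).
```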
NASA Technical Reports Server (NTRS)
Bradshaw, G. A.
1995-01-01
There has been an increased interest in the quantification of pattern in ecological systems in recent years. This interest is motivated by the desire to construct valid models which extend across many scales. Spatial methods must quantify pattern, discriminate types of pattern, and relate hierarchical phenomena across scales. Wavelet analysis is introduced as a method to identify spatial structure in ecological transect data. The main advantage of the wavelet transform over other methods is its ability to preserve and display hierarchical information while allowing for pattern decomposition. Two applications of wavelet analysis are illustrated, as a means to: (1) quantify known spatial patterns in Douglas-fir forests at several scales, and (2) construct spatially-explicit hypotheses regarding pattern-generating mechanisms. Application of the wavelet variance, derived from the wavelet transform, is developed for forest ecosystem analysis to obtain additional insight into spatially-explicit data. Specifically, the resolution capabilities of the wavelet variance are compared to the semi-variogram and Fourier power spectra for the description of spatial data using a set of one-dimensional stationary and non-stationary processes. The wavelet cross-covariance function is derived from the wavelet transform and introduced as an alternative method for the analysis of multivariate spatial data of understory vegetation and canopy in Douglas-fir forests of the western Cascades of Oregon.
Wang, Yuanjia; Chen, Huaihou
2012-12-01
We examine a generalized F-test of a nonparametric function through penalized splines and a linear mixed effects model representation. With a mixed effects model representation of penalized splines, we imbed the test of an unspecified function into a test of some fixed effects and a variance component in a linear mixed effects model with nuisance variance components under the null. The procedure can be used to test a nonparametric function or varying-coefficient with clustered data, compare two spline functions, test the significance of an unspecified function in an additive model with multiple components, and test a row or a column effect in a two-way analysis of variance model. Through a spectral decomposition of the residual sum of squares, we provide a fast algorithm for computing the null distribution of the test, which significantly improves the computational efficiency over bootstrap. The spectral representation reveals a connection between the likelihood ratio test (LRT) in a multiple variance components model and a single component model. We examine our methods through simulations, where we show that the power of the generalized F-test may be higher than the LRT, depending on the hypothesis of interest and the true model under the alternative. We apply these methods to compute the genome-wide critical value and p-value of a genetic association test in a genome-wide association study (GWAS), where the usual bootstrap is computationally intensive (up to 10(8) simulations) and asymptotic approximation may be unreliable and conservative. © 2012, The International Biometric Society.
Decomposition of Some Well-Known Variance Reduction Techniques. Revision.
1985-05-01
"use a family of transformations to convert given samples into samples conditioned on a given characteristic (p. 04)." Dub and Horowitz (1979), Granovsky ... "Antithetic Variates Revisited," Commun. ACM 26, 11, 064-971. Granovsky, B.L. (1981), "Optimal Formulae of the Conditional Monte Carlo," SIAM J. Alg
ERIC Educational Resources Information Center
Hedges, Larry V.; Hedberg, E. C.
2013-01-01
Background: Cluster-randomized experiments that assign intact groups such as schools or school districts to treatment conditions are increasingly common in educational research. Such experiments are inherently multilevel designs whose sensitivity (statistical power and precision of estimates) depends on the variance decomposition across levels.…
Two biased estimation techniques in linear regression: Application to aircraft
NASA Technical Reports Server (NTRS)
Klein, Vladislav
1988-01-01
Several ways for detection and assessment of collinearity in measured data are discussed. Because data collinearity usually results in poor least squares estimates, two estimation techniques which can limit a damaging effect of collinearity are presented. These two techniques, the principal components regression and mixed estimation, belong to a class of biased estimation techniques. Detection and assessment of data collinearity and the two biased estimation techniques are demonstrated in two examples using flight test data from longitudinal maneuvers of an experimental aircraft. The eigensystem analysis and parameter variance decomposition appeared to be a promising tool for collinearity evaluation. The biased estimators had far better accuracy than the results from the ordinary least squares technique.
Analysis of information flows among individual companies in the KOSDAQ market
NASA Astrophysics Data System (ADS)
Kim, Ho-Yong; Oh, Gabjin
2016-08-01
In this paper, we employ the variance decomposition method to measure the strength and the direction of interconnections among companies in the KOSDAQ (Korean Securities Dealers Automated Quotation) stock market. We analyze the 200 companies listed on the KOSDAQ market from January 2001 to December 2015. We find that the systemic risk, measured by using the interconnections, increases substantially during periods of financial crisis such as the bankruptcy of Lehman brothers and the European financial crisis. In particular, we find that the increases in the aggregated information flows can be used to predict the increment of the market volatility that may occur during a sub-prime financial crisis period.
Isolating and Examining Sources of Suppression and Multicollinearity in Multiple Linear Regression.
Beckstead, Jason W
2012-03-30
The presence of suppression (and multicollinearity) in multiple regression analysis complicates interpretation of predictor-criterion relationships. The mathematical conditions that produce suppression in regression analysis have received considerable attention in the methodological literature but until now nothing in the way of an analytic strategy to isolate, examine, and remove suppression effects has been offered. In this article such an approach, rooted in confirmatory factor analysis theory and employing matrix algebra, is developed. Suppression is viewed as the result of criterion-irrelevant variance operating among predictors. Decomposition of predictor variables into criterion-relevant and criterion-irrelevant components using structural equation modeling permits derivation of regression weights with the effects of criterion-irrelevant variance omitted. Three examples with data from applied research are used to illustrate the approach: the first assesses child and parent characteristics to explain why some parents of children with obsessive-compulsive disorder accommodate their child's compulsions more so than do others, the second examines various dimensions of personal health to explain individual differences in global quality of life among patients following heart surgery, and the third deals with quantifying the relative importance of various aptitudes for explaining academic performance in a sample of nursing students. The approach is offered as an analytic tool for investigators interested in understanding predictor-criterion relationships when complex patterns of intercorrelation among predictors are present and is shown to augment dominance analysis.
Optimized Kernel Entropy Components.
Izquierdo-Verdiguier, Emma; Laparra, Valero; Jenssen, Robert; Gomez-Chova, Luis; Camps-Valls, Gustau
2017-06-01
This brief addresses two main issues of the standard kernel entropy component analysis (KECA) algorithm: the optimization of the kernel decomposition and the optimization of the Gaussian kernel parameter. KECA roughly reduces to a sorting of the importance of kernel eigenvectors by entropy instead of variance, as in kernel principal components analysis. In this brief, we propose an extension of the KECA method, named optimized KECA (OKECA), that directly extracts the optimal features retaining most of the data entropy by means of compacting the information in very few features (often in just one or two). The proposed method produces features which have higher expressive power. In particular, it is based on the independent component analysis framework, and introduces an extra rotation to the eigen decomposition, which is optimized via gradient-ascent search. This maximum entropy preservation suggests that OKECA features are more efficient than KECA features for density estimation. In addition, a critical issue in both methods is the selection of the kernel parameter, since it critically affects the resulting performance. Here, we analyze the most common kernel length-scale selection criteria. The results of both methods are illustrated in different synthetic and real problems. Results show that OKECA returns projections with more expressive power than KECA, the most successful rule for estimating the kernel parameter is based on maximum likelihood, and OKECA is more robust to the selection of the length-scale parameter in kernel density estimation.
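A minimal sketch of the plain KECA step described above, ranking kernel eigenpairs by their Renyi-entropy contribution rather than by eigenvalue (OKECA's extra ICA rotation and gradient-ascent optimization are not implemented here; the function name and toy data are assumptions):

```python
import numpy as np
from scipy.spatial.distance import cdist

def keca(X, sigma, n_components=2):
    """Kernel ECA sketch: rank kernel eigenpairs by their Renyi-entropy
    contribution (sqrt(lam_i) * 1^T e_i)^2 instead of by eigenvalue."""
    K = np.exp(-cdist(X, X, "sqeuclidean") / (2 * sigma**2))  # Gaussian kernel matrix
    lam, E = np.linalg.eigh(K)                                # eigenpairs (ascending)
    contrib = lam * E.sum(axis=0) ** 2                        # entropy contributions
    order = np.argsort(contrib)[::-1][:n_components]
    # projections onto the selected eigen-directions: sqrt(lam_i) * e_i
    return E[:, order] * np.sqrt(np.clip(lam[order], 0, None))

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 5))
Z = keca(X, sigma=1.0, n_components=2)
print(Z.shape)   # (200, 2)
```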
Cai, C; Rodet, T; Legoupil, S; Mohammad-Djafari, A
2013-11-01
Dual-energy computed tomography (DECT) makes it possible to get two fractions of basis materials without segmentation. One is the soft-tissue equivalent water fraction and the other is the hard-matter equivalent bone fraction. Practical DECT measurements are usually obtained with polychromatic x-ray beams. Existing reconstruction approaches based on linear forward models that do not account for the beam polychromaticity fail to estimate the correct decomposition fractions and result in beam-hardening artifacts (BHA). The existing BHA correction approaches either need to refer to calibration measurements or suffer from the noise amplification caused by the negative-log preprocessing and the ill-conditioned water and bone separation problem. To overcome these problems, statistical DECT reconstruction approaches based on nonlinear forward models that account for the beam polychromaticity show great potential for giving accurate fraction images. This work proposes a full-spectral Bayesian reconstruction approach which allows the reconstruction of high quality fraction images from ordinary polychromatic measurements. This approach is based on a Gaussian noise model with unknown variance assigned directly to the projections without taking the negative log. Referring to Bayesian inferences, the decomposition fractions and observation variance are estimated by using the joint maximum a posteriori (MAP) estimation method. Subject to an adaptive prior model assigned to the variance, the joint estimation problem is then simplified into a single estimation problem. It transforms the joint MAP estimation problem into a minimization problem with a nonquadratic cost function. To solve it, the use of a monotone conjugate gradient algorithm with suboptimal descent steps is proposed. The performance of the proposed approach is analyzed with both simulated and experimental data. The results show that the proposed Bayesian approach is robust to noise and materials, although accurate spectrum information about the source-detector system is also necessary. When dealing with experimental data, the spectrum can be predicted by a Monte Carlo simulator. For the materials between water and bone, less than 5% separation errors are observed on the estimated decomposition fractions. The proposed approach is a statistical reconstruction approach based on a nonlinear forward model that accounts for the full beam polychromaticity and is applied directly to the projections without taking the negative log. Compared to the approaches based on linear forward models and the BHA correction approaches, it has advantages in noise robustness and reconstruction accuracy.
Liu, Gaisheng; Lu, Zhiming; Zhang, Dongxiao
2007-01-01
A new approach has been developed for solving solute transport problems in randomly heterogeneous media using the Karhunen‐Loève‐based moment equation (KLME) technique proposed by Zhang and Lu (2004). The KLME approach combines the Karhunen‐Loève decomposition of the underlying random conductivity field and the perturbative and polynomial expansions of dependent variables including the hydraulic head, flow velocity, dispersion coefficient, and solute concentration. The equations obtained in this approach are sequential, and their structure is formulated in the same form as the original governing equations such that any existing simulator, such as Modular Three‐Dimensional Multispecies Transport Model for Simulation of Advection, Dispersion, and Chemical Reactions of Contaminants in Groundwater Systems (MT3DMS), can be directly applied as the solver. Through a series of two‐dimensional examples, the validity of the KLME approach is evaluated against the classical Monte Carlo simulations. Results indicate that under the flow and transport conditions examined in this work, the KLME approach provides an accurate representation of the mean concentration. For the concentration variance, the accuracy of the KLME approach is good when the conductivity variance is 0.5. As the conductivity variance increases up to 1.0, the mismatch on the concentration variance becomes large, although the mean concentration can still be accurately reproduced by the KLME approach. Our results also indicate that when the conductivity variance is relatively large, neglecting the effects of the cross terms between velocity fluctuations and local dispersivities, as done in some previous studies, can produce noticeable errors, and a rigorous treatment of the dispersion terms becomes more appropriate.
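A hedged sketch of the Karhunen-Loève building block underlying the KLME approach: one realization of a 1-D Gaussian log-conductivity field is generated from the eigenpairs of an assumed exponential covariance (grid size, correlation length, and variance are illustrative, not the paper's settings).

```python
import numpy as np

# 1-D Karhunen-Loeve expansion of a stationary Gaussian log-conductivity field
n, L, corr_len, var = 200, 100.0, 10.0, 0.5
x = np.linspace(0.0, L, n)
C = var * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)  # exponential covariance
lam, f = np.linalg.eigh(C)                   # eigenpairs of the covariance matrix
lam, f = lam[::-1], f[:, ::-1]               # reorder to descending eigenvalues

m = np.searchsorted(np.cumsum(lam) / lam.sum(), 0.95) + 1   # modes keeping ~95% of the variance
rng = np.random.default_rng(3)
xi = rng.standard_normal(m)                  # independent N(0,1) KL coefficients
logK = 0.0 + f[:, :m] @ (np.sqrt(lam[:m]) * xi)   # realization: mean + sum sqrt(lam_i) f_i xi_i
print(m, logK.shape)
```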
Matrix approach to uncertainty assessment and reduction for modeling terrestrial carbon cycle
NASA Astrophysics Data System (ADS)
Luo, Y.; Xia, J.; Ahlström, A.; Zhou, S.; Huang, Y.; Shi, Z.; Wang, Y.; Du, Z.; Lu, X.
2017-12-01
Terrestrial ecosystems absorb approximately 30% of the anthropogenic carbon dioxide emissions. This estimate has been deduced indirectly: combining analyses of atmospheric carbon dioxide concentrations with ocean observations to infer the net terrestrial carbon flux. In contrast, when knowledge about the terrestrial carbon cycle is integrated into different terrestrial carbon models, they make widely different predictions. To improve the terrestrial carbon models, we have recently developed a matrix approach to uncertainty assessment and reduction. Specifically, the terrestrial carbon cycle has been commonly represented by a series of carbon balance equations to track carbon influxes into and effluxes out of individual pools in Earth system models. This representation matches our understanding of carbon cycle processes well and can be reorganized into one matrix equation without changing any modeled carbon cycle processes and mechanisms. We have developed matrix equations for several global land C cycle models, including CLM3.5, 4.0 and 4.5, CABLE, LPJ-GUESS, and ORCHIDEE. Indeed, the matrix equation is generic and can be applied to other land carbon models. This matrix approach offers a suite of new diagnostic tools, such as the 3-dimensional (3-D) parameter space, traceability analysis, and variance decomposition, for uncertainty analysis. For example, predictions of carbon dynamics with complex land models can be placed in a 3-D parameter space (carbon input, residence time, and storage potential) as a common metric to measure how much model predictions differ. These differences can then be traced to their source components by decomposing model predictions into a hierarchy of traceable components. Then, variance decomposition can help attribute the spread in predictions among multiple models to precisely identify sources of uncertainty. The highly uncertain components can be constrained by data as the matrix equation makes data assimilation computationally possible. We will illustrate various applications of this matrix approach to uncertainty assessment and reduction for terrestrial carbon cycle models.
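A sketch of the generic matrix form alluded to above, written with symbols that are common in this literature but are assumptions here rather than a quotation from the abstract:

```latex
\frac{d\mathbf{X}(t)}{dt} \;=\; \mathbf{B}\,u(t) \;-\; \mathbf{A}\,\boldsymbol{\xi}(t)\,\mathbf{K}\,\mathbf{X}(t)
```

where X(t) collects the carbon pool sizes, u(t) is the carbon input (e.g., NPP), B the allocation fractions, K a diagonal matrix of baseline turnover rates, ξ(t) diagonal environmental scalars (temperature and moisture), and A the transfer matrix among pools; diagnostics such as residence time then follow algebraically, e.g., τ = (A ξ K)⁻¹ B.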
Differential decomposition of bacterial and viral fecal indicators in common human pollution types.
Wanjugi, Pauline; Sivaganesan, Mano; Korajkic, Asja; Kelty, Catherine A; McMinn, Brian; Ulrich, Robert; Harwood, Valerie J; Shanks, Orin C
2016-11-15
Understanding the decomposition of microorganisms associated with different human fecal pollution types is necessary for proper implementation of many water quality management practices, as well as predicting associated public health risks. Here, the decomposition of select cultivated and molecular indicators of fecal pollution originating from fresh human feces, septage, and primary effluent sewage in a subtropical marine environment was assessed over a six day period with an emphasis on the influence of ambient sunlight and indigenous microbiota. Ambient water mixed with each fecal pollution type was placed in dialysis bags and incubated in situ in a submersible aquatic mesocosm. Genetic and cultivated fecal indicators including fecal indicator bacteria (enterococci, E. coli, and Bacteroidales), coliphage (somatic and F+), Bacteroides fragilis phage (GB-124), and human-associated genetic indicators (HF183/BacR287 and HumM2) were measured in each sample. Simple linear regression assessing treatment trends in each pollution type over time showed significant decay (p ≤ 0.05) in most treatments for feces and sewage (27/28 and 32/40, respectively), compared to septage (6/26). A two-way analysis of variance of log10 reduction values for sewage and feces experiments indicated that treatments differentially impact survival of cultivated bacteria, cultivated phage, and genetic indicators. Findings suggest that sunlight is critical for phage decay, and indigenous microbiota play a lesser role. For bacterial cultivated and genetic indicators, the influence of indigenous microbiota varied by pollution type. This study offers new insights on the decomposition of common human fecal pollution types in a subtropical marine environment with important implications for water quality management applications. Published by Elsevier Ltd.
ERIC Educational Resources Information Center
Shiyko, Mariya P.; Ram, Nilam
2011-01-01
Researchers have been making use of ecological momentary assessment (EMA) and other study designs that sample feelings and behaviors in real time and in naturalistic settings to study temporal dynamics and contextual factors of a wide variety of psychological, physiological, and behavioral processes. As EMA designs become more widespread,…
Deconstructing risk: Separable encoding of variance and skewness in the brain
Symmonds, Mkael; Wright, Nicholas D.; Bach, Dominik R.; Dolan, Raymond J.
2011-01-01
Risky choice entails a need to appraise all possible outcomes and integrate this information with individual risk preference. Risk is frequently quantified solely by statistical variance of outcomes, but here we provide evidence that individuals’ choice behaviour is sensitive to both dispersion (variance) and asymmetry (skewness) of outcomes. Using a novel behavioural paradigm in humans, we independently manipulated these ‘summary statistics’ while scanning subjects with fMRI. We show that a behavioural sensitivity to variance and skewness is mirrored in neuroanatomically dissociable representations of these quantities, with parietal cortex showing sensitivity to the former and prefrontal cortex and ventral striatum to the latter. Furthermore, integration of these objective risk metrics with subjective risk preference is expressed in a subject-specific coupling between neural activity and choice behaviour in anterior insula. Our findings show that risk is neither monolithic from a behavioural nor neural perspective and its decomposition is evident both in distinct behavioural preferences and in segregated underlying brain representations. PMID:21763444
Decomposing the relation between Rapid Automatized Naming (RAN) and reading ability.
Arnell, Karen M; Joanisse, Marc F; Klein, Raymond M; Busseri, Michael A; Tannock, Rosemary
2009-09-01
The Rapid Automatized Naming (RAN) test involves rapidly naming sequences of items presented in a visual array. RAN has generated considerable interest because RAN performance predicts reading achievement. This study sought to determine what elements of RAN are responsible for the shared variance between RAN and reading performance using a series of cognitive tasks and a latent variable modelling approach. Participants performed RAN measures, a test of reading speed and comprehension, and six tasks, which tapped various hypothesised components of the RAN. RAN shared 10% of the variance with reading comprehension and 17% with reading rate. Together, the decomposition tasks explained 52% and 39% of the variance shared between RAN and reading comprehension and between RAN and reading rate, respectively. Significant predictors suggested that working memory encoding underlies part of the relationship between RAN and reading ability.
Denoising Medical Images using Calculus of Variations
Kohan, Mahdi Nakhaie; Behnam, Hamid
2011-01-01
We propose a method for medical image denoising using calculus of variations and local variance estimation by shaped windows. This method reduces any additive noise and preserves small patterns and edges of images. A pyramid structure-texture decomposition of images is used to separate noise and texture components based on local variance measures. The experimental results show that the proposed method yields visual improvement as well as better SNR, RMSE, and PSNR than common medical image denoising methods. Experimental results in denoising a sample Magnetic Resonance image show that SNR, PSNR and RMSE have been improved by 19, 9, and 21 percent, respectively. PMID:22606674
Variance-Based Cluster Selection Criteria in a K-Means Framework for One-Mode Dissimilarity Data.
Vera, J Fernando; Macías, Rodrigo
2017-06-01
One of the main problems in cluster analysis is that of determining the number of groups in the data. In general, the approach taken depends on the cluster method used. For K-means, some of the most widely employed criteria are formulated in terms of the decomposition of the total point scatter, regarding a two-mode data set of N points in p dimensions, which are optimally arranged into K classes. This paper addresses the formulation of criteria to determine the number of clusters, in the general situation in which the available information for clustering is a one-mode N × N dissimilarity matrix describing the objects. In this framework, p and the coordinates of points are usually unknown, and the application of criteria originally formulated for two-mode data sets is dependent on their possible reformulation in the one-mode situation. The decomposition of the variability of the clustered objects is proposed in terms of the corresponding block-shaped partition of the dissimilarity matrix. Within-block and between-block dispersion values for the partitioned dissimilarity matrix are derived, and variance-based criteria are subsequently formulated in order to determine the number of groups in the data. A Monte Carlo experiment was carried out to study the performance of the proposed criteria. For simulated clustered points in p dimensions, greater efficiency in recovering the number of clusters is obtained when the criteria are calculated from the related Euclidean distances instead of the known two-mode data set, in general, for unequal-sized clusters and for low dimensionality situations. For simulated dissimilarity data sets, the proposed criteria always outperform the results obtained when these criteria are calculated from their original formulation, using dissimilarities instead of distances.
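To make the within-/between-block dispersion idea concrete, here is a hedged numpy sketch computing within- and between-cluster dispersion directly from a one-mode dissimilarity matrix via the standard point-scatter identity; it illustrates the general construction, not the paper's specific criteria.

```python
import numpy as np

def block_dispersion(D, labels):
    """Within- and between-block dispersion from a one-mode dissimilarity matrix D.

    Uses the identity W_k = sum_{i<j in C_k} d_ij^2 / n_k, which equals the
    within-cluster point scatter when the d_ij are Euclidean distances.
    """
    D2 = D ** 2
    n = D.shape[0]
    total = D2[np.triu_indices(n, 1)].sum() / n            # total point scatter
    W = 0.0
    for k in np.unique(labels):
        idx = np.where(labels == k)[0]
        W += D2[np.ix_(idx, idx)][np.triu_indices(len(idx), 1)].sum() / len(idx)
    return W, total - W                                     # within, between

# tiny 1-D example: two well-separated pairs of points
pts = np.array([[0.0], [0.1], [5.0], [5.2]])
D = np.abs(pts - pts.T)
print(block_dispersion(D, np.array([0, 0, 1, 1])))
# a variance-based criterion such as Calinski-Harabasz then follows as
# ((total - W) / (K - 1)) / (W / (n - K)), evaluated over candidate K.
```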
NASA Astrophysics Data System (ADS)
Hosseinzadehtalaei, Parisa; Tabari, Hossein; Willems, Patrick
2018-02-01
An ensemble of 88 regional climate model (RCM) simulations at 0.11° and 0.44° spatial resolutions from the EURO-CORDEX project is analyzed for central Belgium to investigate the projected impact of climate change on precipitation intensity-duration-frequency (IDF) relationships and extreme precipitation quantiles typically used in water engineering designs. The contribution to uncertainty arising from the choice of RCM, driving GCM, and representative concentration pathway (RCP4.5 & RCP8.5) is quantified using a variance decomposition technique after reconstruction of missing data in GCM × RCM combinations. A comparative analysis between the historical simulations of the EURO-CORDEX 0.11° and 0.44° RCMs shows higher precipitation intensities by the finer resolution runs, leading to a larger overestimation of the observations-based IDFs by the 0.11° runs. The results reveal that making a temporal stationarity assumption for the climate system may lead to underestimation of precipitation quantiles by up to 70% by the end of this century. This projected increase is generally larger for the 0.11° RCMs compared with the 0.44° RCMs. The relative changes in extreme precipitation do depend on return period and duration, indicating an amplification for larger return periods and for smaller durations. The variance decomposition approach generally identifies RCM as the dominant component of uncertainty in changes of more extreme precipitation (return period of 10 years) for both 0.11° and 0.44° resolutions, followed by GCM and RCP scenario. The uncertainties associated with cross-contributions of RCMs, GCMs, and RCPs play a non-negligible role in the associated uncertainties of the changes.
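As a hedged sketch of the kind of variance decomposition used for such ensembles (the paper's reconstruction of missing GCM × RCM combinations and its exact estimator are not reproduced), an additive ANOVA-style split of an ensemble of change signals into GCM, RCM, and RCP main effects plus an interaction remainder:

```python
import numpy as np

def anova_decomposition(changes):
    """Additive ANOVA-style decomposition of an ensemble of projected changes.

    changes[g, r, s] holds the change signal for GCM g, RCM r, scenario s;
    a complete GCM x RCM x RCP array is assumed here.
    """
    mu = changes.mean()
    gcm = changes.mean(axis=(1, 2)) - mu
    rcm = changes.mean(axis=(0, 2)) - mu
    rcp = changes.mean(axis=(0, 1)) - mu
    parts = {"GCM": np.var(gcm), "RCM": np.var(rcm), "RCP": np.var(rcp)}
    parts["interactions"] = np.var(changes - mu
                                   - gcm[:, None, None] - rcm[None, :, None] - rcp[None, None, :])
    total = sum(parts.values())
    return {k: v / total for k, v in parts.items()}   # fractional contributions

rng = np.random.default_rng(4)
print(anova_decomposition(rng.normal(size=(5, 8, 2))))  # toy 5 GCMs x 8 RCMs x 2 RCPs
```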
NASA Astrophysics Data System (ADS)
Fiedler, Emma; Mao, Chongyuan; Good, Simon; Waters, Jennifer; Martin, Matthew
2017-04-01
OSTIA is the Met Office's Operational Sea Surface Temperature (SST) and Ice Analysis system, which produces L4 (globally complete, gridded) analyses on a daily basis. Work is currently being undertaken to replace the original OI (Optimal Interpolation) data assimilation scheme with NEMOVAR, a 3D-Var data assimilation method developed for use with the NEMO ocean model. A dual background error correlation length scale formulation is used for SST in OSTIA, as implemented in NEMOVAR. Short and long length scales are combined according to the ratio of the decomposition of the background error variances into short and long spatial correlations. The pre-defined background error variances vary spatially and seasonally, but not on shorter time-scales. If the derived length scales applied to the daily analysis are too long, SST features may be smoothed out. Therefore a flow-dependent component to determining the effective length scale has also been developed. The total horizontal gradient of the background SST field is used to identify regions where the length scale should be shortened. These methods together have led to an improvement in the resolution of SST features compared to the previous OI analysis system, without the introduction of spurious noise. This presentation will show validation results for feature resolution in OSTIA using the OI scheme, the dual length scale NEMOVAR scheme, and the flow-dependent implementation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dai, Heng; Chen, Xingyuan; Ye, Ming
Sensitivity analysis is an important tool for quantifying uncertainty in the outputs of mathematical models, especially for complex systems with a high dimension of spatially correlated parameters. Variance-based global sensitivity analysis has gained popularity because it can quantify the relative contribution of uncertainty from different sources. However, its computational cost increases dramatically with the complexity of the considered model and the dimension of model parameters. In this study we developed a hierarchical sensitivity analysis method that (1) constructs an uncertainty hierarchy by analyzing the input uncertainty sources, and (2) accounts for the spatial correlation among parameters at each level of the hierarchy using geostatistical tools. The contribution of the uncertainty source at each hierarchy level is measured by sensitivity indices calculated using the variance decomposition method. Using this methodology, we identified the most important uncertainty source for a dynamic groundwater flow and solute transport model at the Department of Energy (DOE) Hanford site. The results indicate that boundary conditions and permeability field contribute the most uncertainty to the simulated head field and tracer plume, respectively. The relative contribution from each source varied spatially and temporally as driven by the dynamic interaction between groundwater and river water at the site. By using a geostatistical approach to reduce the number of realizations needed for the sensitivity analysis, the computational cost of implementing the developed method was reduced to a practically manageable level. The developed sensitivity analysis method is generally applicable to a wide range of hydrologic and environmental problems that deal with high-dimensional spatially-distributed parameters.
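For orientation only, a minimal Monte Carlo sketch of the variance-decomposition (Sobol') sensitivity indices on which such methods build; the hierarchical and geostatistical extensions described above are not implemented, and the toy model and parameter bounds are assumptions.

```python
import numpy as np

def first_order_sobol(model, bounds, n=10000, seed=0):
    """Saltelli-style Monte Carlo estimate of first-order Sobol' indices.

    `model` maps an (n, d) array of inputs to an (n,) output; `bounds` is a
    list of (low, high) pairs for independent uniform inputs.
    """
    d = len(bounds)
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    A = lo + (hi - lo) * rng.random((n, d))
    B = lo + (hi - lo) * rng.random((n, d))
    yA, yB = model(A), model(B)
    var_y = np.var(np.concatenate([yA, yB]))
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                                 # swap column i with the B sample
        S[i] = np.mean(yB * (model(ABi) - yA)) / var_y      # Saltelli (2010) estimator
    return S

# toy additive model y = x0 + 2*x1 + 0.5*x2: expected indices ~ [0.19, 0.76, 0.05]
print(first_order_sobol(lambda X: X[:, 0] + 2 * X[:, 1] + 0.5 * X[:, 2],
                        [(0, 1)] * 3))
```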
Erhagen, Björn; Öquist, Mats; Sparrman, Tobias; Haei, Mahsa; Ilstedt, Ulrik; Hedenström, Mattias; Schleucher, Jürgen; Nilsson, Mats B
2013-12-01
The global soil carbon pool is approximately three times larger than the contemporary atmospheric pool; therefore, even minor changes to its integrity may have major implications for atmospheric CO2 concentrations. While theory predicts that the chemical composition of organic matter should constitute a master control on the temperature response of its decomposition, this relationship has not yet been fully demonstrated. We used laboratory incubations of forest soil organic matter (SOM) and fresh litter material together with NMR spectroscopy to make this connection between organic chemical composition and temperature sensitivity of decomposition. Temperature response of decomposition in both fresh litter and SOM was directly related to the chemical composition of the constituent organic matter, explaining 90% and 70% of the variance in Q10 in litter and SOM, respectively. The Q10 of litter decreased with increasing proportions of aromatic and O-aromatic compounds, and increased with increased contents of alkyl- and O-alkyl carbons. In contrast, in SOM, decomposition was affected only by carbonyl compounds. To reveal why a certain group of organic chemical compounds affected the temperature sensitivity of organic matter decomposition in litter and SOM, a more detailed characterization of the 13C aromatic region using Heteronuclear Single Quantum Coherence (HSQC) was conducted. The results revealed considerable differences in the aromatic region between litter and SOM. This suggests that the correlation between chemical composition of organic matter and the temperature response of decomposition differed between litter and SOM. The temperature response of soil decomposition processes can thus be described by the chemical composition of its constituent organic matter; this paves the way for improved ecosystem modeling of biosphere feedbacks under a changing climate. © 2013 John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Kim, Shin-Woo; Noh, Nam-Kyu; Lim, Gyu-Ho
2013-04-01
This study introduces retrospective optimal interpolation (ROI) and its application with the Weather Research and Forecasting (WRF) model. Song et al. (2009) proposed the ROI method, an optimal interpolation (OI) scheme that gradually assimilates observations over the analysis window to obtain a variance-minimum estimate of the atmospheric state at the initial time of the window. The assimilation window of the ROI algorithm is gradually increased, similar to that of quasi-static variational assimilation (QSVA; Pires et al., 1996). Unlike the QSVA method, however, the ROI method assimilates data after the analysis time using a perturbation method (Verlaan and Heemink, 1997) without an adjoint model. Song and Lim (2011) improved this method by incorporating eigen-decomposition and covariance inflation. The computational cost of ROI can be reduced through the eigen-decomposition of the background error covariance, which concentrates the ROI analyses on the error variances of the governing eigenmodes by transforming the control variables into eigenspace. A total energy norm is used to normalize each control variable. In this study, the ROI method is applied to the WRF model in an Observing System Simulation Experiment (OSSE) to validate the algorithm and investigate its capability. Horizontal wind, pressure, potential temperature, and water vapor mixing ratio are used as control variables and observations. First, a single-profile assimilation experiment is performed. Subsequently, OSSEs are performed using a virtual observing system consisting of synop, ship, and sonde data. The difference between forecast errors with and without assimilation grows with time, indicating that assimilation by ROI improves the forecast. The characteristics and strengths/weaknesses of the ROI method are also investigated by conducting experiments with the 3D-Var (3-dimensional variational) and 4D-Var (4-dimensional variational) methods. At the initial time, ROI produces a larger forecast error than 4D-Var. However, the difference between the two results decreases gradually with time, and ROI shows a clearly better result (i.e., a smaller forecast error) than 4D-Var after the 9-hour forecast.
Wen, Zaidao; Hou, Zaidao; Jiao, Licheng
2017-11-01
The discriminative dictionary learning (DDL) framework has been widely used in image classification; it aims to learn some class-specific feature vectors as well as a representative dictionary according to a set of labeled training samples. However, interclass similarities and intraclass variances among input samples and learned features will generally weaken the representability of the dictionary and the discrimination of feature vectors so as to degrade the classification performance. Therefore, how to explicitly represent them becomes an important issue. In this paper, we present a novel DDL framework with a two-level low-rank and group-sparse decomposition model. In the first level, we learn a class-shared and several class-specific dictionaries, where a low-rank and a group-sparse regularization are, respectively, imposed on the corresponding feature matrices. In the second level, the class-specific feature matrix will be further decomposed into a low-rank and a sparse matrix so that intraclass variances can be separated to concentrate the corresponding feature vectors. Extensive experimental results demonstrate the effectiveness of our model. Compared with other state-of-the-art methods on several popular image databases, our model achieves competitive or better classification accuracy.
Repeatability of circadian behavioural variation revealed in free-ranging marine fish.
Alós, Josep; Martorell-Barceló, Martina; Campos-Candela, Andrea
2017-02-01
Repeatable between-individual differences in the behavioural manifestation of underlying circadian rhythms determine chronotypes in humans and terrestrial animals. Here, we have repeatedly measured three circadian behaviours, awakening time, rest onset and rest duration, in the free-ranging pearly razorfish, Xyrithchys novacula, facilitated by acoustic tracking technology and hidden Markov models. In addition, daily travelled distance, a standard measure of daily activity used as a fish personality trait, was repeatedly assessed using a State-Space Model. We have decomposed the variance of these four behavioural traits using linear mixed models and estimated repeatability scores (R) while controlling for environmental co-variates: year of experimentation, spatial location of the activity, fish size and gender, and their interactions. Between- and within-individual variance decomposition revealed significant R values for all traits, suggesting high predictability of individual circadian behavioural variation and the existence of chronotypes. The decomposition of the correlations among chronotypes and the personality trait studied here into between- and within-individual correlations did not reveal any significant correlation at the between-individual level. We therefore propose circadian behavioural variation as an independent axis of the fish personality, and the study of chronotypes and their consequences as a novel dimension in understanding within-species fish behavioural diversity.
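A hedged sketch of the variance-decomposition step behind such repeatability estimates: a random-intercept mixed model yields between- and within-individual variance components, and R is their ratio. Trait names, sample sizes, and simulated values below are assumptions, not the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# simulate repeated measures of one circadian trait for 40 fish
rng = np.random.default_rng(5)
n_fish, n_rep = 40, 10
fish = np.repeat(np.arange(n_fish), n_rep)
between = rng.normal(0, 1.0, n_fish)                     # between-individual SD = 1.0
awakening = 6.0 + between[fish] + rng.normal(0, 0.7, n_fish * n_rep)
df = pd.DataFrame({"fish": fish, "awakening": awakening})

m = smf.mixedlm("awakening ~ 1", df, groups=df["fish"]).fit()
v_between = float(m.cov_re.iloc[0, 0])                   # between-individual variance
v_within = m.scale                                       # residual (within-individual) variance
R = v_between / (v_between + v_within)                   # repeatability
print(round(R, 2))   # expected near 1.0**2 / (1.0**2 + 0.7**2) ≈ 0.67
```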
Rose, Emily; Paczolt, Kimberly A; Jones, Adam G
2013-09-01
Empirical studies of sexual selection often focus on events occurring either before or after mating but rarely both and consequently may fail to discern the relative magnitudes and interactions of premating and postmating episodes of selection. Here, we simultaneously quantify premating and postmating selection in the sex-role-reversed Gulf pipefish by using a microsatellite-based analysis of parentage in experimental populations. Female pipefish exhibited an opportunity for selection (I) of 1.64, which was higher than that of males (0.35). Decompositions of I and the selection differential on body size showed that over 95% of the selection on females arose from the premating phase. We also found evidence for a trade-off between selection phases, where multiply mating females had significantly lower offspring survivorship compared to singly mated females. In males, variance in relative fitness arose mainly from the number of eggs received per copulation and a small number of males who failed to mate. Overall, our study exemplifies a general approach for the decomposition of total selection into premating and postmating phases to understand the interplay among components of natural and sexual selection that conspire to shape sexually selected traits.
Wu, Xiaodong; Zhao, Lin; Hu, Guojie; Liu, Guimin; Li, Wangping; Ding, Yongjian
2018-02-01
Permafrost degradation can stimulate the decomposition of organic soil matter and cause a large amount of greenhouse gas emissions into the atmosphere. The light fraction organic matter (LFOM) is a labile substrate for microbial decomposition and probably plays an important role in future permafrost carbon cycles. However, little is known about the distribution of LFOM and its relationship with permafrost and environmental factors. Here, we investigated the light fraction carbon (LFC) and nitrogen (LFN) contents and stocks under meadows and wet meadows with different permafrost conditions on the southern Qinghai-Tibetan Plateau. Our results showed that LFC and LFN were mainly distributed in the upper 30 cm of soils, and the sites with permafrost had significantly higher contents of LFC and LFN than those from the sites without existing permafrost. The LFC and LFN decreased sharply with depth, suggesting that the soil organic matter (SOM) in this area was highly decomposed in deep soils. Soil moisture and bulk density explained approximately 50% of the variances in LFC and LFN for all the sampling sites, while soil moisture explained approximately 30% of the variance in permafrost sites. Both the C:N ratios and LFC:LFN ratios in the sites with permafrost were higher than those in the sites without permafrost. The results suggested that the permafrost and land cover types are the main factors controlling LFOM content and stock, and that permafrost degradation would lead to a decrease of LFOM and soil C:N ratios, thus accelerating the decomposition of SOM. Copyright © 2017 Elsevier B.V. All rights reserved.
Statistical analysis of effective singular values in matrix rank determination
NASA Technical Reports Server (NTRS)
Konstantinides, Konstantinos; Yao, Kung
1988-01-01
A major problem in using SVD (singular-value decomposition) as a tool in determining the effective rank of a perturbed matrix is that of distinguishing between significantly small and significantly large singular values. To this end, confidence regions are derived for the perturbed singular values of matrices with noisy observation data. The analysis is based on the theories of perturbations of singular values and statistical significance testing. Threshold bounds for perturbation due to finite-precision and i.i.d. random models are evaluated. In random models, the threshold bounds depend on the dimension of the matrix, the noise variance, and a predefined statistical level of significance. The results are applied to the problem of determining the effective order of a linear autoregressive system from the approximate rank of a sample autocorrelation matrix. Various numerical examples illustrating the usefulness of these bounds and comparisons to other previously known approaches are given.
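A simple numerical sketch of the underlying idea of thresholding singular values against a noise-level-based bound (the paper derives confidence regions and significance tests rather than this fixed rule; the threshold constant here is an assumption):

```python
import numpy as np

def effective_rank(A, noise_std, alpha=1.1):
    """Estimate effective rank by comparing singular values with a simple
    perturbation threshold of order noise_std * (sqrt(m) + sqrt(n)), the
    approximate spectral norm of an m x n i.i.d. Gaussian noise matrix."""
    m, n = A.shape
    s = np.linalg.svd(A, compute_uv=False)
    tau = alpha * noise_std * (np.sqrt(m) + np.sqrt(n))
    return int(np.sum(s > tau)), s, tau

rng = np.random.default_rng(6)
low_rank = rng.normal(size=(100, 3)) @ rng.normal(size=(3, 50))   # true rank 3
noisy = low_rank + 0.05 * rng.normal(size=(100, 50))
r, s, tau = effective_rank(noisy, noise_std=0.05)
print(r)   # typically recovers 3
```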
Spectral decomposition of internal gravity wave sea surface height in global models
NASA Astrophysics Data System (ADS)
Savage, Anna C.; Arbic, Brian K.; Alford, Matthew H.; Ansong, Joseph K.; Farrar, J. Thomas; Menemenlis, Dimitris; O'Rourke, Amanda K.; Richman, James G.; Shriver, Jay F.; Voet, Gunnar; Wallcraft, Alan J.; Zamudio, Luis
2017-10-01
Two global ocean models ranging in horizontal resolution from 1/12° to 1/48° are used to study the space and time scales of sea surface height (SSH) signals associated with internal gravity waves (IGWs). Frequency-horizontal wavenumber SSH spectral densities are computed over seven regions of the world ocean from two simulations of the HYbrid Coordinate Ocean Model (HYCOM) and three simulations of the Massachusetts Institute of Technology general circulation model (MITgcm). High wavenumber, high-frequency SSH variance follows the predicted IGW linear dispersion curves. The realism of high-frequency motions (>0.87 cpd) in the models is tested through comparison of the frequency spectral density of dynamic height variance computed from the highest-resolution runs of each model (1/25° HYCOM and 1/48° MITgcm) with dynamic height variance frequency spectral density computed from nine in situ profiling instruments. These high-frequency motions are of particular interest because of their contributions to the small-scale SSH variability that will be observed on a global scale in the upcoming Surface Water and Ocean Topography (SWOT) satellite altimetry mission. The variance at supertidal frequencies can be comparable to the tidal and low-frequency variance for high wavenumbers (length scales smaller than ˜50 km), especially in the higher-resolution simulations. In the highest-resolution simulations, the high-frequency variance can be greater than the low-frequency variance at these scales.
Time-frequency analysis : mathematical analysis of the empirical mode decomposition.
DOT National Transportation Integrated Search
2009-01-01
Invented over 10 years ago, empirical mode decomposition (EMD) provides a nonlinear time-frequency analysis with the ability to successfully analyze nonstationary signals. Mathematical Analysis of the Empirical Mode Decomposition is a...
A TV-constrained decomposition method for spectral CT
NASA Astrophysics Data System (ADS)
Guo, Xiaoyue; Zhang, Li; Xing, Yuxiang
2017-03-01
Spectral CT is attracting more and more attention in medicine, industrial nondestructive testing, and security inspection. Material decomposition is an important step for a spectral CT to discriminate materials. Because of the spectrum overlap of energy channels, as well as the correlation of basis functions, it is well acknowledged that the decomposition step in spectral CT imaging causes noise amplification and artifacts in component coefficient images. In this work, we propose materials decomposition via an optimization method to improve the quality of decomposed coefficient images. On the basis of a general optimization problem, total variation (TV) minimization is imposed on the coefficient images in our overall objective function with adjustable weights. We solve this constrained optimization problem under the framework of ADMM. Validation is performed on both a numerical dental phantom in simulation and a real pig-leg phantom on a practical CT system using dual-energy imaging. Both numerical and physical experiments give visually better reconstructions than a general direct inverse method. SNR and SSIM are adopted to quantitatively evaluate the image quality of decomposed component coefficients. All results demonstrate that the TV-constrained decomposition method performs well in reducing noise without losing spatial resolution, thereby improving the image quality. The method can be easily incorporated into different types of spectral imaging modalities, as well as cases with more than two energy channels.
NASA Astrophysics Data System (ADS)
Gao, Jing; Burt, James E.
2017-12-01
This study investigates the usefulness of a per-pixel bias-variance error decomposition (BVD) for understanding and improving spatially-explicit data-driven models of continuous variables in environmental remote sensing (ERS). BVD is a model evaluation method that originated in machine learning and has not been examined for ERS applications. Demonstrated with a showcase regression tree model mapping land imperviousness (0-100%) using Landsat images, our results showed that BVD can reveal sources of estimation errors, map how these sources vary across space, reveal the effects of various model characteristics on estimation accuracy, and enable in-depth comparison of different error metrics. Specifically, BVD bias maps can help analysts identify and delineate model spatial non-stationarity; BVD variance maps can indicate potential effects of ensemble methods (e.g. bagging), and inform efficient training sample allocation - training samples should capture the full complexity of the modeled process, and more samples should be allocated to regions with more complex underlying processes rather than regions covering larger areas. Through examining the relationships between model characteristics and their effects on estimation accuracy revealed by BVD for both absolute and squared errors (i.e. error is the absolute or the squared value of the difference between observation and estimate), we found that the two error metrics embody different diagnostic emphases, can lead to different conclusions about the same model, and may suggest different solutions for performance improvement. We emphasize BVD's strength in revealing the connection between model characteristics and estimation accuracy, as understanding this relationship empowers analysts to effectively steer performance through model adjustments.
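A hedged sketch of a per-pixel bias-variance decomposition for a regression-tree estimator using bootstrap resampling; this illustrates the general BVD idea under squared error, not the paper's exact protocol, and all names and settings are assumptions.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def pixel_bias_variance(X, y, X_test, y_test, n_boot=50, seed=0):
    """Per-pixel (squared-error) bias-variance decomposition of a regression
    tree, approximated by bootstrap resampling of the training set."""
    rng = np.random.default_rng(seed)
    preds = np.empty((n_boot, len(X_test)))
    for b in range(n_boot):
        idx = rng.integers(0, len(X), len(X))              # bootstrap sample
        tree = DecisionTreeRegressor(max_depth=8).fit(X[idx], y[idx])
        preds[b] = tree.predict(X_test)
    mean_pred = preds.mean(axis=0)
    bias2 = (mean_pred - y_test) ** 2                      # per-pixel squared bias
    variance = preds.var(axis=0)                           # per-pixel variance
    return bias2, variance                                 # expected MSE ~ bias2 + variance (+ noise)

# usage idea: map bias2 and variance back to pixel locations to see where each
# error source dominates (bias hot-spots suggest spatial non-stationarity,
# variance hot-spots suggest where bagging or more training samples help).
```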
Variance decomposition shows the importance of human-climate feedbacks in the Earth system
NASA Astrophysics Data System (ADS)
Calvin, K. V.; Bond-Lamberty, B. P.; Jones, A. D.; Shi, X.; Di Vittorio, A. V.; Thornton, P. E.
2017-12-01
The human and Earth systems are intricately linked: climate influences agricultural production, renewable energy potential, and water availability, for example, while anthropogenic emissions from industry and land use change alter temperature and precipitation. Such feedbacks have the potential to significantly alter future climate change. Current climate change projections contain significant uncertainties, however, and because Earth System Models do not generally include dynamic human (demography, economy, energy, water, land use) components, little is known about how climate feedbacks contribute to that uncertainty. Here we use variance decomposition of a novel coupled human-earth system model to show that the influence of human-climate feedbacks can be as large as 17% of the total variance in the near term for global mean temperature rise, and 11% in the long term for cropland area. The near-term contribution of energy and land use feedbacks to the climate on global mean temperature rise is as large as that from model internal variability, a factor typically considered in modeling studies. Conversely, the contribution of climate feedbacks to cropland extent, while non-negligible, is less than that from socioeconomics, policy, or model. Previous assessments have largely excluded these feedbacks, with the climate community focusing on uncertainty due to internal variability, scenario, and model and the integrated assessment community focusing on uncertainty due to socioeconomics, technology, policy, and model. Our results set the stage for a new generation of models and hypothesis testing to determine when and how bidirectional feedbacks between human and Earth systems should be considered in future assessments of climate change.
Equity and length of lifespan are not the same.
Seligman, Benjamin; Greenberg, Gabi; Tuljapurkar, Shripad
2016-07-26
Efforts to understand the dramatic declines in mortality over the past century have focused on life expectancy. However, understanding changes in disparity in age of death is important to understanding mechanisms of mortality improvement and devising policy to promote health equity. We derive a novel decomposition of variance in age of death, a measure of inequality, and apply it to cause-specific contributions to the change in variance among the G7 countries (Canada, France, Germany, Italy, Japan, the United Kingdom, and the United States) from 1950 to 2010. We find that the causes of death that contributed most to declines in the variance are different from those that contributed most to increase in life expectancy; in particular, they affect mortality at younger ages. We also find that, for two leading causes of death [cancers and cardiovascular disease (CVD)], there are no consistent relationships between changes in life expectancy and variance either within countries over time or between countries. These results show that promoting health at younger ages is critical for health equity and that policies to control cancer and CVD may have differing implications for equity.
Jusot, Florence; Tubeuf, Sandy; Trannoy, Alain
2013-12-01
The way to treat the correlation between circumstances and effort is a central, yet largely neglected issue in the applied literature on inequality of opportunity. This paper adopts three alternative normative ways of treating this correlation championed by Roemer, Barry and Swift and assesses their empirical relevance using survey data. We combine regression analysis with the natural decomposition of the variance to compare the relative contributions of circumstances and efforts to overall health inequality according to the different normative principles. Our results suggest that, in practice, the normative principle on the way to treat the correlation between circumstances and effort makes little difference on the relative contributions of circumstances and efforts to explained health inequality. Copyright © 2013 John Wiley & Sons, Ltd.
Lidman, Johan; Jonsson, Micael; Burrows, Ryan M; Bundschuh, Mirco; Sponseller, Ryan A
2017-02-01
Although the importance of stream condition for leaf litter decomposition has been extensively studied, little is known about how processing rates change in response to altered riparian vegetation community composition. We investigated patterns of plant litter input and decomposition across 20 boreal headwater streams that varied in proportions of riparian deciduous and coniferous trees. We measured a suite of in-stream physical and chemical characteristics, as well as the amount and type of litter inputs from riparian vegetation, and related these to decomposition rates of native (alder, birch, and spruce) and introduced (lodgepole pine) litter species incubated in coarse- and fine-mesh bags. Total litter inputs ranged more than fivefold among sites and increased with the proportion of deciduous vegetation in the riparian zone. In line with differences in initial litter quality, mean decomposition rate was highest for alder, followed by birch, spruce, and lodgepole pine (12, 55, and 68% lower rates, respectively). Further, these rates were greater in coarse-mesh bags that allow colonization by macroinvertebrates. Variance in decomposition rate among sites for different species was best explained by different sets of environmental conditions, but litter-input composition (i.e., quality) was overall highly important. On average, native litter decomposed faster in sites with higher-quality litter input and (with the exception of spruce) higher concentrations of dissolved nutrients and open canopies. By contrast, lodgepole pine decomposed more rapidly in sites receiving lower-quality litter inputs. Birch litter decomposition rate in coarse-mesh bags was best predicted by the same environmental variables as in fine-mesh bags, with additional positive influences of macroinvertebrate species richness. Hence, to facilitate energy turnover in boreal headwaters, forest management with focus on conifer production should aim at increasing the presence of native deciduous trees along streams, as they promote conditions that favor higher decomposition rates of terrestrial plant litter.
Brandao, Livia M; Monhart, Matthias; Schötzau, Andreas; Ledolter, Anna A; Palmowski-Wolfe, Anja M
2017-08-01
To further improve analysis of the two-flash multifocal electroretinogram (2F-mfERG) in glaucoma with regard to structure-function analysis, using discrete wavelet transform (DWT) analysis. Sixty subjects [35 controls and 25 primary open-angle glaucoma (POAG)] underwent 2F-mfERG. Responses were analyzed with the DWT. The DWT level that could best separate POAG from controls was compared to the root-mean-square (RMS) calculations previously used in the analysis of the 2F-mfERG. In a subgroup analysis, structure-function correlation was assessed between DWT, optical coherence tomography and automated perimetry (mf103 customized pattern) for the central 15°. Frequency level 4 of the wavelet variance analysis (144 Hz, WVA-144) was most sensitive (p < 0.003). It correlated positively with RMS but had a better AUC. Positive relations were found between visual field, WVA-144 and GCIPL thickness. The highest predictive value for glaucoma diagnosis was seen in the GCIPL, but this improved further when adding the mean sensitivity and WVA-144. mfERG using WVA analysis improves glaucoma diagnosis, especially when combined with GCIPL and MS.
Boundary Conditions for Scalar (Co)Variances over Heterogeneous Surfaces
NASA Astrophysics Data System (ADS)
Machulskaya, Ekaterina; Mironov, Dmitrii
2018-05-01
The problem of boundary conditions for the variances and covariances of scalar quantities (e.g., temperature and humidity) at the underlying surface is considered. If the surface is treated as horizontally homogeneous, Monin-Obukhov similarity suggests the Neumann boundary conditions that set the surface fluxes of scalar variances and covariances to zero. Over heterogeneous surfaces, these boundary conditions are not a viable choice since the spatial variability of various surface and soil characteristics, such as the ground fluxes of heat and moisture and the surface radiation balance, is not accounted for. Boundary conditions are developed that are consistent with the tile approach used to compute scalar (and momentum) fluxes over heterogeneous surfaces. To this end, the third-order transport terms (fluxes of variances) are examined analytically using a triple decomposition of fluctuating velocity and scalars into the grid-box mean, the fluctuation of tile-mean quantity about the grid-box mean, and the sub-tile fluctuation. The effect of the proposed boundary conditions on mixing in an archetypical stably-stratified boundary layer is illustrated with a single-column numerical experiment. The proposed boundary conditions should be applied in atmospheric models that utilize turbulence parametrization schemes with transport equations for scalar variances and covariances including the third-order turbulent transport (diffusion) terms.
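For concreteness, the triple decomposition referred to above can be written as follows (notation assumed here, not quoted from the paper):

```latex
\phi \;=\; \langle \phi \rangle \;+\; \big(\overline{\phi}_i - \langle \phi \rangle\big) \;+\; \phi_i'' ,
\qquad
\langle \phi \rangle \;=\; \sum_i f_i\, \overline{\phi}_i
```

where ⟨φ⟩ is the grid-box mean, φ̄_i the mean over tile i with fractional area f_i, and φ_i″ the sub-tile fluctuation; the tile-resolved second term is what carries the surface heterogeneity into the variance and covariance budgets at the lower boundary.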
NASA Astrophysics Data System (ADS)
Forootan, Ehsan; Kusche, Jürgen
2016-04-01
Geodetic/geophysical observations, such as the time series of global terrestrial water storage change or sea level and temperature change, represent samples of physical processes and therefore contain information about complex physical interactions with many inherent time scales. Extracting relevant information from these samples, for example quantifying the seasonality of a physical process or its variability due to large-scale ocean-atmosphere interactions, is not possible with simple time series approaches. In the last decades, decomposition techniques have found increasing interest for extracting patterns from geophysical observations. Traditionally, principal component analysis (PCA) and more recently independent component analysis (ICA) are common techniques to extract statistical orthogonal (uncorrelated) and independent modes that represent the maximum variance of observations, respectively. PCA and ICA can be classified as stationary signal decomposition techniques since they are based on decomposing the auto-covariance matrix or diagonalizing higher (than two)-order statistical tensors from centered time series. However, the stationarity assumption is obviously not justifiable for many geophysical and climate variables even after removing cyclic components, e.g., the seasonal cycles. In this paper, we present a new decomposition method, the complex independent component analysis (CICA, Forootan, PhD-2014), which can be applied to extract non-stationary (changing in space and time) patterns from geophysical time series. Here, CICA is derived as an extension of real-valued ICA (Forootan and Kusche, JoG-2012), where we (i) define a new complex data set using a Hilbert transformation. The complex time series contain the observed values in their real part, and the temporal rate of variability in their imaginary part. (ii) An ICA algorithm based on diagonalization of fourth-order cumulants is then applied to decompose the new complex data set in (i). (iii) Dominant non-stationary patterns are recognized as independent complex patterns that can be used to represent the space and time amplitude and phase propagations. We present the results of CICA on simulated and real cases, e.g., for quantifying the impact of large-scale ocean-atmosphere interaction on global mass changes. Forootan (PhD-2014) Statistical signal decomposition techniques for analyzing time-variable satellite gravimetry data, PhD Thesis, University of Bonn, http://hss.ulb.uni-bonn.de/2014/3766/3766.htm Forootan and Kusche (JoG-2012) Separation of global time-variable gravity signals into maximally independent components, Journal of Geodesy 86 (7), 477-497, doi: 10.1007/s00190-011-0532-5
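A minimal sketch of step (i) above, building the complex data set from a multivariate time series via the analytic signal (scipy's Hilbert transform); the series, sampling, and dimensionality are assumptions, and steps (ii)-(iii) are only indicated in comments.

```python
import numpy as np
from scipy.signal import hilbert

# build a complex data set whose real part is the centered observation and whose
# imaginary part is its Hilbert transform (described in the abstract as carrying
# the temporal rate of variability)
rng = np.random.default_rng(7)
t = np.arange(240) / 12.0                                   # e.g. 20 years of monthly samples
obs = np.column_stack([np.sin(2 * np.pi * t + p) for p in (0.0, 0.7, 1.4)])
obs += 0.1 * rng.normal(size=obs.shape)

centered = obs - obs.mean(axis=0)
Z = hilbert(centered, axis=0)            # complex array: centered + i * Hilbert(centered)
print(Z.dtype, Z.shape)

# step (ii) would diagonalize fourth-order cumulants of Z (a complex ICA, e.g. a
# JADE-type algorithm); amplitude and phase of the resulting modes then give the
# propagating, non-stationary patterns described in step (iii).
```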
Ligmann-Zielinska, Arika; Kramer, Daniel B.; Spence Cheruvelil, Kendra; Soranno, Patricia A.
2014-01-01
Agent-based models (ABMs) have been widely used to study socioecological systems. They are useful for studying such systems because of their ability to incorporate micro-level behaviors among interacting agents, and to understand emergent phenomena due to these interactions. However, ABMs are inherently stochastic and require proper handling of uncertainty. We propose a simulation framework based on quantitative uncertainty and sensitivity analyses to build parsimonious ABMs that serve two purposes: exploration of the outcome space to simulate low-probability but high-consequence events that may have significant policy implications, and explanation of model behavior to describe the system with higher accuracy. The proposed framework is applied to the problem of modeling farmland conservation resulting in land use change. We employ output variance decomposition based on quasi-random sampling of the input space and perform three computational experiments. First, we perform uncertainty analysis to improve model legitimacy, where the distribution of results informs us about the expected value that can be validated against independent data, and provides information on the variance around this mean as well as the extreme results. In our last two computational experiments, we employ sensitivity analysis to produce two simpler versions of the ABM. First, input space is reduced only to inputs that produced the variance of the initial ABM, resulting in a model with output distribution similar to the initial model. Second, we refine the value of the most influential input, producing a model that maintains the mean of the output of initial ABM but with less spread. These simplifications can be used to 1) efficiently explore model outcomes, including outliers that may be important considerations in the design of robust policies, and 2) conduct explanatory analysis that exposes the smallest number of inputs influencing the steady state of the modeled system. PMID:25340764
Direct and Indirect Effects of UV-B Exposure on Litter Decomposition: A Meta-Analysis
Song, Xinzhang; Peng, Changhui; Jiang, Hong; Zhu, Qiuan; Wang, Weifeng
2013-01-01
Ultraviolet-B (UV-B) exposure in the course of litter decomposition may have a direct effect on decomposition rates via changing states of photodegradation or decomposer constitution in litter while UV-B exposure during growth periods may alter chemical compositions and physical properties of plants. Consequently, these changes will indirectly affect subsequent litter decomposition processes in soil. Although studies are available on both the positive and negative effects (including no observable effects) of UV-B exposure on litter decomposition, a comprehensive analysis leading to an adequate understanding remains unresolved. Using data from 93 studies across six biomes, this introductory meta-analysis found that elevated UV-B directly increased litter decomposition rates by 7% and indirectly by 12% while attenuated UV-B directly decreased litter decomposition rates by 23% and indirectly increased litter decomposition rates by 7%. However, neither positive nor negative effects were statistically significant. Woody plant litter decomposition seemed more sensitive to UV-B than herbaceous plant litter except under conditions of indirect effects of elevated UV-B. Furthermore, levels of UV-B intensity significantly affected litter decomposition response to UV-B (P<0.05). UV-B effects on litter decomposition were to a large degree compounded by climatic factors (e.g., MAP and MAT) (P<0.05) and litter chemistry (e.g., lignin content) (P<0.01). Results suggest these factors likely have a bearing on masking the important role of UV-B on litter decomposition. No significant differences in UV-B effects on litter decomposition were found between study types (field experiment vs. laboratory incubation), litter forms (leaf vs. needle), and decay duration. Indirect effects of elevated UV-B on litter decomposition significantly increased with decay duration (P<0.001). Additionally, relatively small changes in UV-B exposure intensity (30%) had significant direct effects on litter decomposition (P<0.05). The intent of this meta-analysis was to improve our understanding of the overall effects of UV-B on litter decomposition. PMID:23818993
Differential Decomposition of Bacterial and Viral Fecal ...
Understanding the decomposition of microorganisms associated with different human fecal pollution types is necessary for proper implementation of many water quality management practices, as well as predicting associated public health risks. Here, the decomposition of select cultivated and molecular indicators of fecal pollution originating from fresh human feces, septage, and primary effluent sewage in a subtropical marine environment was assessed over a six day period with an emphasis on the influence of ambient sunlight and indigenous microbiota. Ambient water mixed with each fecal pollution type was placed in dialysis bags and incubated in situ in a submersible aquatic mesocosm. Genetic and cultivated fecal indicators including fecal indicator bacteria (enterococci, E. coli, and Bacteroidales), coliphage (somatic and F+), Bacteroides fragilis phage (GB-124), and human-associated genetic indicators (HF183/BacR287 and HumM2) were measured in each sample. Simple linear regression assessing treatment trends in each pollution type over time showed significant decay (p ≤ 0.05) in most treatments for feces and sewage (27/28 and 32/40, respectively), compared to septage (6/26). A two-way analysis of variance of log10 reduction values for sewage and feces experiments indicated that treatments differentially impact survival of cultivated bacteria, cultivated phage, and genetic indicators. Findings suggest that sunlight is critical for phage decay, and indigenous microbiota
The wider determinants of inequalities in health: a decomposition analysis
2011-01-01
Background The common starting point of many studies scrutinizing the factors underlying health inequalities is that material, cultural-behavioural, and psycho-social factors affect the distribution of health systematically through income, education, occupation, wealth or similar indicators of socioeconomic structure. However, little is known regarding whether and to what extent these factors can exert systematic influence on the distribution of health of a population independent of the effects channelled through income, education, or wealth. Methods Using representative data from the German Socioeconomic Panel, we apply Fields' regression-based decomposition techniques to decompose variations in health into their sources. Controlling for income, education, occupation, and wealth, we assess the relative importance of the explanatory factors over and above their effect on the variation in health channelled through the commonly applied measures of socioeconomic status. Results The analysis suggests that three main factors persistently contribute to variance in health: the capability score, cultural-behavioural variables and, to a lesser extent, the materialist approach. Of the three, the capability score illustrates the explanatory power of interaction and compound effects as it captures the individual's socioeconomic, social, and psychological resources in relation to his/her exposure to life challenges. Conclusion Models that take a reductionist perspective and do not allow for the possibility that health inequalities are generated by factors over and above their effect on the variation in health channelled through one of the socioeconomic measures are underspecified and may fail to capture the determinants of health inequalities. PMID:21791075
Wavelets, ridgelets, and curvelets for Poisson noise removal.
Zhang, Bo; Fadili, Jalal M; Starck, Jean-Luc
2008-07-01
In order to denoise Poisson count data, we introduce a variance stabilizing transform (VST) applied to a filtered discrete Poisson process, yielding a near Gaussian process with asymptotically constant variance. This new transform, which can be seen as an extension of the Anscombe transform to filtered data, is simple, fast, and efficient in (very) low-count situations. We combine this VST with the filter banks of wavelets, ridgelets and curvelets, leading to multiscale VSTs (MS-VSTs) and nonlinear decomposition schemes. By doing so, the noise-contaminated coefficients of these MS-VST-modified transforms are asymptotically normally distributed with known variances. A classical hypothesis-testing framework is adopted to detect the significant coefficients, and a sparsity-driven iterative scheme properly reconstructs the final estimate. A range of examples shows the power of this MS-VST approach for recovering important structures of various morphologies in (very) low-count images. These results also demonstrate that the MS-VST approach is competitive relative to many existing denoising methods.
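A minimal sketch of the variance-stabilization idea behind this approach is given below: the classical Anscombe transform followed by soft-threshold wavelet shrinkage with PyWavelets. This is not the authors' filtered-process MS-VST; the wavelet choice, threshold rule, and synthetic signal are illustrative assumptions only.

```python
# Anscombe-style variance stabilization plus wavelet shrinkage for Poisson
# counts. A simplified stand-in for the MS-VST idea, not the paper's method.
import numpy as np
import pywt

rng = np.random.default_rng(0)
truth = 5 + 4 * np.sin(np.linspace(0, 4 * np.pi, 1024)) ** 2   # low-count intensity
counts = rng.poisson(truth)                                     # Poisson observations

# Anscombe transform: stabilizes the variance to ~1 for moderate counts
z = 2.0 * np.sqrt(counts + 3.0 / 8.0)

# Wavelet shrinkage on the stabilized signal
coeffs = pywt.wavedec(z, "db4", level=5)
thr = np.sqrt(2.0 * np.log(z.size))            # universal threshold, unit variance
coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
z_hat = pywt.waverec(coeffs, "db4")[: z.size]

# Approximate inverse of the Anscombe transform back to the count domain
estimate = (z_hat / 2.0) ** 2 - 3.0 / 8.0
```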
Yao, Shengnan; Zeng, Weiming; Wang, Nizhuan; Chen, Lei
2013-07-01
Independent component analysis (ICA) has been proven to be effective for functional magnetic resonance imaging (fMRI) data analysis. However, ICA decomposition requires iterative optimization of an unmixing matrix whose initial values are generated randomly, so the randomness of the initialization leads to different ICA decomposition results, and a single decomposition is not usually reliable for fMRI data analysis. Under this circumstance, several methods for repeated decompositions with ICA (RDICA) were proposed to reveal the stability of ICA decomposition. Although RDICA has achieved satisfying results in validating the performance of ICA decomposition, it costs much computing time. To mitigate this problem, we propose a method, named ATGP-ICA, for fMRI data analysis. This method generates fixed initial values with an automatic target generation process (ATGP) instead of producing them randomly. We performed experimental tests on both hybrid data and fMRI data to demonstrate the effectiveness of the new method and compared the performance of traditional one-time decomposition with ICA (ODICA), RDICA and ATGP-ICA. The proposed method not only eliminates the randomness of ICA decomposition but also saves much computing time compared to RDICA. Furthermore, ROC (receiver operating characteristic) power analysis also indicated better signal reconstruction performance for ATGP-ICA than for RDICA. Copyright © 2013 Elsevier Inc. All rights reserved.
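The key idea of removing initialization randomness can be illustrated with scikit-learn's FastICA, which accepts a user-supplied unmixing-matrix seed. ATGP itself is not implemented below; the identity initializer and the toy mixtures are assumptions standing in for any deterministic initialization scheme.

```python
# Fixed (non-random) initialization makes repeated ICA runs reproduce the
# same decomposition. Identity init is a stand-in for ATGP, not ATGP itself.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(1)
t = np.linspace(0, 8, 2000)
sources = np.c_[np.sin(2 * t), np.sign(np.sin(3 * t)), rng.laplace(size=t.size)]
X = sources @ rng.normal(size=(3, 3)).T        # observed mixtures

w_init = np.eye(3)                             # deterministic starting unmixing matrix
ica = FastICA(n_components=3, w_init=w_init, whiten="unit-variance", max_iter=1000)
S1 = ica.fit_transform(X)
S2 = FastICA(n_components=3, w_init=w_init,
             whiten="unit-variance", max_iter=1000).fit_transform(X)
print(np.allclose(S1, S2))                     # identical runs -> reproducible decomposition
```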
Tool Wear Feature Extraction Based on Hilbert Marginal Spectrum
NASA Astrophysics Data System (ADS)
Guan, Shan; Song, Weijie; Pang, Hongyang
2017-09-01
In the metal cutting process, the signal contains a wealth of tool wear state information. An analysis and feature extraction method for tool wear signals based on the Hilbert marginal spectrum is proposed. Firstly, the tool wear signal was decomposed by the empirical mode decomposition algorithm, and the intrinsic mode functions containing the main information were screened out by the correlation coefficient and the variance contribution rate. Secondly, the Hilbert transform was performed on the main intrinsic mode functions to obtain the Hilbert time-frequency spectrum and the Hilbert marginal spectrum. Finally, amplitude-domain indexes were extracted on the basis of the Hilbert marginal spectrum and used to construct the recognition feature vector of the tool wear state. The research results show that the extracted features can effectively characterize the different wear states of the tool, which provides a basis for monitoring tool wear condition.
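A rough sketch of this pipeline is shown below, using the PyEMD package and SciPy's Hilbert transform. The synthetic vibration signal, the correlation threshold, and the frequency binning are illustrative assumptions, not the authors' settings.

```python
# EMD -> IMF screening by correlation -> Hilbert marginal spectrum.
import numpy as np
from PyEMD import EMD
from scipy.signal import hilbert

fs = 2000.0
t = np.arange(0, 2.0, 1.0 / fs)
signal = (np.sin(2 * np.pi * 35 * t) + 0.5 * np.sin(2 * np.pi * 180 * t)
          + 0.2 * np.random.default_rng(0).normal(size=t.size))

imfs = EMD()(signal)                                   # empirical mode decomposition

# Screen IMFs by correlation with the raw signal (variance contribution works similarly)
keep = [imf for imf in imfs if abs(np.corrcoef(imf, signal)[0, 1]) > 0.2]

# Hilbert spectrum of the retained IMFs
analytic = hilbert(np.array(keep))
amplitude = np.abs(analytic)
inst_freq = np.diff(np.unwrap(np.angle(analytic)), axis=1) * fs / (2 * np.pi)

# Hilbert marginal spectrum: integrate amplitude over time in frequency bins
bins = np.linspace(0, fs / 2, 256)
marginal = np.zeros(bins.size - 1)
for a, f in zip(amplitude[:, 1:], inst_freq):
    idx = np.clip(np.digitize(f, bins) - 1, 0, marginal.size - 1)
    np.add.at(marginal, idx, a)
```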
Hong, Bonnie; Du, Yingzhou; Mukerji, Pushkor; Roper, Jason M; Appenzeller, Laura M
2017-07-12
Regulatory-compliant rodent subchronic feeding studies are compulsory regardless of whether there is a hypothesis to test, according to recent EU legislation for the safety assessment of whole food/feed produced from genetically modified (GM) crops containing a single genetic transformation event (European Union Commission Implementing Regulation No. 503/2013). The Implementing Regulation refers to guidelines set forth by the European Food Safety Authority (EFSA) for the design, conduct, and analysis of rodent subchronic feeding studies. The set of EFSA recommendations was rigorously applied to a 90-day feeding study in Sprague-Dawley rats. After study completion, the appropriateness and applicability of these recommendations were assessed using a battery of statistical analysis approaches including both retrospective and prospective statistical power analyses as well as variance-covariance decomposition. In the interest of animal welfare, alternative experimental designs were investigated and evaluated in the context of informing the health risk assessment of food/feed from GM crops.
An analysis of parameter sensitivities of preference-inspired co-evolutionary algorithms
NASA Astrophysics Data System (ADS)
Wang, Rui; Mansor, Maszatul M.; Purshouse, Robin C.; Fleming, Peter J.
2015-10-01
Many-objective optimisation problems remain challenging for many state-of-the-art multi-objective evolutionary algorithms. Preference-inspired co-evolutionary algorithms (PICEAs) which co-evolve the usual population of candidate solutions with a family of decision-maker preferences during the search have been demonstrated to be effective on such problems. However, it is unknown whether PICEAs are robust with respect to the parameter settings. This study aims to address this question. First, a global sensitivity analysis method - the Sobol' variance decomposition method - is employed to determine the relative importance of the parameters controlling the performance of PICEAs. Experimental results show that the performance of PICEAs is controlled for the most part by the number of function evaluations. Next, we investigate the effect of key parameters identified from the Sobol' test and the genetic operators employed in PICEAs. Experimental results show improved performance of the PICEAs as more preferences are co-evolved. Additionally, some suggestions for genetic operator settings are provided for non-expert users.
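A minimal sketch of a Sobol' variance decomposition of the kind described above is given below, using the SALib package on a stand-in objective. The parameter names, bounds, and test function are assumptions for illustration only; they do not correspond to the actual PICEA control parameters.

```python
# Sobol' global sensitivity analysis via a Saltelli quasi-random design.
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 3,
    "names": ["n_evaluations", "n_preferences", "crossover_rate"],   # hypothetical names
    "bounds": [[1e3, 1e5], [10, 200], [0.1, 1.0]],
}

X = saltelli.sample(problem, 1024)                 # quasi-random Saltelli design

def performance(x):                                # hypothetical performance metric
    return np.log(x[:, 0]) + 0.1 * x[:, 1] + 0.01 * x[:, 1] * x[:, 2]

Si = sobol.analyze(problem, performance(X))
print(dict(zip(problem["names"], Si["S1"])))       # first-order indices
print(dict(zip(problem["names"], Si["ST"])))       # total-order indices
```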
A data base and analysis program for shuttle main engine dynamic pressure measurements
NASA Technical Reports Server (NTRS)
Coffin, T.
1986-01-01
A dynamic pressure data base management system is described for measurements obtained from space shuttle main engine (SSME) hot firing tests. The data were provided in terms of engine power level and rms pressure time histories, and power spectra of the dynamic pressure measurements at selected times during each test. Test measurements and engine locations are defined along with a discussion of data acquisition and reduction procedures. A description of the data base management analysis system is provided and subroutines developed for obtaining selected measurement means, variances, ranges and other statistics of interest are discussed. A summary of pressure spectra obtained at SSME rated power level is provided for reference. Application of the singular value decomposition technique to spectrum interpolation is discussed and isoplots of interpolated spectra are presented to indicate measurement trends with engine power level. Program listings of the data base management and spectrum interpolation software are given. Appendices are included to document all data base measurements.
Parallel Computation of Flow in Heterogeneous Media Modelled by Mixed Finite Elements
NASA Astrophysics Data System (ADS)
Cliffe, K. A.; Graham, I. G.; Scheichl, R.; Stals, L.
2000-11-01
In this paper we describe a fast parallel method for solving highly ill-conditioned saddle-point systems arising from mixed finite element simulations of stochastic partial differential equations (PDEs) modelling flow in heterogeneous media. Each realisation of these stochastic PDEs requires the solution of the linear first-order velocity-pressure system comprising Darcy's law coupled with an incompressibility constraint. The chief difficulty is that the permeability may be highly variable, especially when the statistical model has a large variance and a small correlation length. For reasonable accuracy, the discretisation has to be extremely fine. We solve these problems by first reducing the saddle-point formulation to a symmetric positive definite (SPD) problem using a suitable basis for the space of divergence-free velocities. The reduced problem is solved using parallel conjugate gradients preconditioned with an algebraically determined additive Schwarz domain decomposition preconditioner. The result is a solver which exhibits a good degree of robustness with respect to the mesh size as well as to the variance and to physically relevant values of the correlation length of the underlying permeability field. Numerical experiments exhibit almost optimal levels of parallel efficiency. The domain decomposition solver (DOUG, http://www.maths.bath.ac.uk/~parsoft) used here not only is applicable to this problem but can be used to solve general unstructured finite element systems on a wide range of parallel architectures.
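The additive Schwarz idea admits a compact serial illustration: overlapping diagonal blocks of an SPD matrix are factorized locally and their corrections summed to precondition conjugate gradients. The sketch below uses a 1D Laplacian and an arbitrary subdomain layout as assumptions; it is neither the DOUG solver nor the mixed finite element saddle-point system described above.

```python
# One-level additive Schwarz (overlapping block-Jacobi) preconditioner for CG.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n, nsub, overlap = 400, 8, 4
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

size = n // nsub
subdomains, local_solves = [], []
for k in range(nsub):
    lo, hi = max(0, k * size - overlap), min(n, (k + 1) * size + overlap)
    idx = np.arange(lo, hi)
    subdomains.append(idx)
    local_solves.append(spla.factorized(sp.csc_matrix(A[idx, :][:, idx])))

def apply_schwarz(r):
    r = np.asarray(r).ravel()
    z = np.zeros_like(r)
    for idx, solve in zip(subdomains, local_solves):
        z[idx] += solve(r[idx])          # sum of local subdomain corrections
    return z

M = spla.LinearOperator((n, n), matvec=apply_schwarz)
x, info = spla.cg(A, b, M=M)
print(info, np.linalg.norm(A @ x - b))   # 0 means converged
```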
Sobol' sensitivity analysis for stressor impacts on honeybee ...
We employ Monte Carlo simulation and nonlinear sensitivity analysis techniques to describe the dynamics of a bee exposure model, VarroaPop. Daily simulations are performed of hive population trajectories, taking into account queen strength, foraging success, mite impacts, weather, colony resources, population structure, and other important variables. This allows us to test the effects of defined pesticide exposure scenarios versus controlled simulations that lack pesticide exposure. The daily resolution of the model also allows us to conditionally identify sensitivity metrics. We use the variance-based global decomposition sensitivity analysis method, Sobol', to assess first- and second-order parameter sensitivities within VarroaPop, allowing us to determine how variance in the output is attributed to each of the input variables across different exposure scenarios. Simulations with VarroaPop indicate queen strength, forager life span and pesticide toxicity parameters are consistent, critical inputs for colony dynamics. Further analysis also reveals that the relative importance of these parameters fluctuates throughout the simulation period according to the status of other inputs. Our preliminary results show that model variability is conditional and can be attributed to different parameters depending on different timescales. By using sensitivity analysis to assess model output and variability, calibrations of simulation models can be better informed to yield more
NASA Astrophysics Data System (ADS)
Agethen, Svenja; Knorr, Klaus-Holger
2017-04-01
More than 90% of peatlands in Europe are degraded by drainage and subsequent land use. However, the beneficial effects of functioning peatlands, above all carbon storage, have long been recognized but remain difficult to recover. Fragmentation and a surrounding of intensively used agricultural catchments, with excess nutrients in air and waters, further affect the recovery of sites. Under such conditions, highly competitive species such as Juncus effusus colonize restored peatlands instead of peat-forming Sphagnum. While its specific stoichiometry and chemical composition make Sphagnum litter recalcitrant to decomposition and hence effective in carbon sequestration, we know little about dynamics involving Juncus, although this species provides organic matter in high quantity and of rather labile quality. To better understand decomposition in the context of litter quality and nutrient availability, we incubated three peat types for 70 days: I) recent, II) weakly degraded fossil, and III) earthyfied nutrient-rich fossil peat, each amended with two 13C pulse-labelled Juncus litter types (excessively fertilized "F" plants, and nutrient-poor "NF" plants grown for three years watered with MilliQ only). We determined anaerobic decomposition rates, compared potential rates extrapolated from pure materials with measured rates of the mixtures, and tracked the 13C in the solid, liquid, and gaseous phases. To characterize the biogeochemical conditions, inorganic and organic electron acceptors, hydrogen and organic acids, and total enzyme activity were monitored. For characterization of dissolved organic matter we used UV-Vis and fluorescence spectroscopy (parallel factor analysis), and for solid organic matter elemental analysis and FTIR spectroscopy. There were two main structural differences between litter types: "F" litter and its leachates contained more proteinaceous components, and its C/N ratio was 20 in contrast to 60 for the "NF" litter; however, humic components and aromaticity were higher in "F" litter. Generally, decomposition rates of litter were 5-30 times higher than those of peat. Rates in batches amended with "F" were lower than those with "NF" for the respective peat, contrary to typically reported observations. Nevertheless, the 13C label suggested that for peats I and III it was preferentially the litter that was decomposed, whereas decomposition of peat II was apparently stimulated when "NF" was added, even though this litter was poor in nutrients. Multiple linear regression identified specific absorption at 254 nm (SUVA), a measure of aromaticity representative of an array of inter-correlating spectroscopic features, and enzyme activity as the most important predictors of C-mineralization rates; these two parameters explained 88% of the variance. Although enzyme activity and SUVA did not correlate in the mixed assays, they did for the pure materials (R2=0.95), suggesting an inhibitory effect of aromatic components on enzyme activity. This study confirms that litter quality is generally a major control on mineralization and hence on carbon storage in peatlands. Interestingly, in the case of Juncus effusus, high nutrient availability in peat and litter did not lead to enhanced degradation of the litter itself or to priming of decomposition of the surrounding peat. Furthermore, the results underline the substantial contribution of Juncus biomass to C-cycling and the potentially high C-emissions in restored peatlands.
Differential Decomposition Among Pig, Rabbit, and Human Remains.
Dautartas, Angela; Kenyhercz, Michael W; Vidoli, Giovanna M; Meadows Jantz, Lee; Mundorff, Amy; Steadman, Dawnie Wolfe
2018-03-30
While nonhuman animal remains are often utilized in forensic research to develop methods to estimate the postmortem interval, systematic studies that directly validate animals as proxies for human decomposition are lacking. The current project compared decomposition rates among pigs, rabbits, and humans at the University of Tennessee's Anthropology Research Facility across three seasonal trials that spanned nearly 2 years. The Total Body Score (TBS) method was applied to quantify decomposition changes and calculate the postmortem interval (PMI) in accumulated degree days (ADD). Decomposition trajectories were analyzed by comparing the estimated and actual ADD for each seasonal trial and by fuzzy cluster analysis. The cluster analysis demonstrated that the rabbits formed one group while pigs and humans, although more similar to each other than either to rabbits, still showed important differences in decomposition patterns. The decomposition trends show that neither nonhuman model captured the pattern, rate, and variability of human decomposition. © 2018 American Academy of Forensic Sciences.
O'Donnell, Jonathan A.; Turetsky, Merritt R.; Harden, Jennifer W.; Manies, Kristen L.; Pruett, L.E.; Shetler, Gordon; Neff, Jason C.
2009-01-01
Fire is an important control on the carbon (C) balance of the boreal forest region. Here, we present findings from two complementary studies that examine how fire modifies soil organic matter properties, and how these modifications influence rates of decomposition and C exchange in black spruce (Picea mariana) ecosystems of interior Alaska. First, we used laboratory incubations to explore soil temperature, moisture, and vegetation effects on CO2 and DOC production rates in burned and unburned soils from three study regions in interior Alaska. Second, at one of the study regions used in the incubation experiments, we conducted intensive field measurements of net ecosystem exchange (NEE) and ecosystem respiration (ER) across an unreplicated factorial design of burning (2 year post-fire versus unburned sites) and drainage class (upland forest versus peatland sites). Our laboratory study showed that burning reduced the sensitivity of decomposition to increased temperature, most likely by inducing moisture or substrate quality limitations on decomposition rates. Burning also reduced the decomposability of Sphagnum-derived organic matter, increased the hydrophobicity of feather moss-derived organic matter, and increased the ratio of dissolved organic carbon (DOC) to total dissolved nitrogen (TDN) in both the upland and peatland sites. At the ecosystem scale, our field measurements indicate that the surface organic soil was generally wetter in burned than in unburned sites, whereas soil temperature was not different between the burned and unburned sites. Analysis of variance results showed that ER varied with soil drainage class but not by burn status, averaging 0.9 ± 0.1 and 1.4 ± 0.1 g C m−2 d−1 in the upland and peatland sites, respectively. However, a more complex general linear model showed that ER was controlled by an interaction between soil temperature, moisture, and burn status, and in general was less variable over time in the burned than in the unburned sites. Together, findings from these studies across different spatial scales suggest that although fire can create some soil climate conditions more conducive to rapid decomposition, rates of C release from soils may be constrained following fire by changes in moisture and/or substrate quality that impede rates of decomposition.
Matuschek, Hannes; Kliegl, Reinhold; Holschneider, Matthias
2015-01-01
The Smoothing Spline ANOVA (SS-ANOVA) requires a specialized construction of basis and penalty terms in order to incorporate prior knowledge about the data to be fitted. Typically, one resorts to the most general approach using tensor product splines. This implies severe constraints on the correlation structure, i.e. the assumption of isotropy of smoothness cannot be incorporated in general. This may increase the variance of the spline fit, especially if only a relatively small set of observations is given. In this article, we propose an alternative method that allows prior knowledge to be incorporated without the need to construct specialized bases and penalties, allowing the researcher to choose the spline basis and penalty according to the prior knowledge of the observations rather than according to the analysis to be done. The two approaches are compared with an artificial example and with analyses of fixation durations during reading. PMID:25816246
Current variability and momentum balance in the along-shore flow for the Catalan inner-shelf.
NASA Astrophysics Data System (ADS)
Grifoll, M.; Aretxabaleta, A.; Espino, M.; Warner, J. C.
2012-04-01
This contribution examines the circulation of the inner-shelf of the Catalan Sea from an observational perspective. Measurements were obtained from a set of ADCPs deployed during March and April 2011 at 25 and 50 meters depth. Analysis reveals a strongly polarized low-frequency flow following the isobaths predominantly in the south-westward direction. The current variance is mostly explained by the two principal modes of an empirical orthogonal decomposition. The first mode represents almost 80% of the variability. Correlation values of 0.4 to 0.7 have been found between the depth-averaged along-shelf flow and the local wind and the Adjusted Sea-level Slope. The momentum balance in the along-shore direction reveals strong frictional effects and an influence of the barotropic pressure gradients. This research provides a physical framework for ongoing numerical modelling activities and climatological studies in the Catalan inner-shelf.
Temporal Associations between Weather and Headache: Analysis by Empirical Mode Decomposition
Yang, Albert C.; Fuh, Jong-Ling; Huang, Norden E.; Shia, Ben-Chang; Peng, Chung-Kang; Wang, Shuu-Jiun
2011-01-01
Background Patients frequently report that weather changes trigger headache or worsen existing headache symptoms. Recently, the method of empirical mode decomposition (EMD) has been used to delineate temporal relationships in certain diseases, and we applied this technique to identify intrinsic weather components associated with headache incidence data derived from a large-scale epidemiological survey of headache in the Greater Taipei area. Methodology/Principal Findings The study sample consisted of 52 randomly selected headache patients. The weather time-series parameters were detrended by the EMD method into a set of embedded oscillatory components, i.e. intrinsic mode functions (IMFs). Multiple linear regression models with forward stepwise methods were used to analyze the temporal associations between weather and headaches. We found no associations between the raw time series of weather variables and headache incidence. For decomposed intrinsic weather IMFs, temperature, sunshine duration, humidity, pressure, and maximal wind speed were associated with headache incidence during the cold period, whereas only maximal wind speed was associated during the warm period. In analyses examining all significant weather variables, IMFs derived from temperature and sunshine duration data accounted for up to 33.3% of the variance in headache incidence during the cold period. The association of headache incidence and weather IMFs in the cold period coincided with the cold fronts. Conclusions/Significance Using EMD analysis, we found a significant association between headache and intrinsic weather components, which was not detected by direct comparisons of raw weather data. Contributing weather parameters may vary in different geographic regions and different seasons. PMID:21297940
Computationally Efficient Adaptive Beamformer for Ultrasound Imaging Based on QR Decomposition.
Park, Jongin; Wi, Seok-Min; Lee, Jin S
2016-02-01
Adaptive beamforming methods for ultrasound imaging have been studied to improve image resolution and contrast. The most common approach is the minimum variance (MV) beamformer, which minimizes the power of the beamformed output while keeping the response from the direction of interest constant. The method achieves higher resolution and better contrast than the delay-and-sum (DAS) beamformer, but it suffers from high computational cost. This cost is mainly due to the computation of the spatial covariance matrix and its inverse, which requires O(L³) computations, where L denotes the subarray size. In this study, we propose a computationally efficient MV beamformer based on QR decomposition. The idea behind our approach is to transform the spatial covariance matrix into a scalar matrix σI and subsequently obtain the apodization weights and the beamformed output without computing the matrix inverse. To do this, a QR decomposition is used, which can be executed at low cost, and the computational complexity is therefore reduced to O(L²). In addition, our approach is mathematically equivalent to the conventional MV beamformer, and thus shows equivalent performance. The simulation and experimental results support the validity of our approach.
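A generic sketch of MV (Capon) apodization weights computed through a QR factorization of the snapshot matrix is shown below, so that the covariance matrix is never inverted explicitly. This illustrates the general QR route only; it is not the authors' specific scalar-matrix (σI) transformation, and the random snapshots and steering vector are assumptions.

```python
# MV weights w = R^{-1}a / (a^H R^{-1} a) via triangular solves from QR.
import numpy as np
from scipy.linalg import solve_triangular

rng = np.random.default_rng(0)
L, K = 16, 64                         # subarray size, number of snapshots
X = rng.normal(size=(L, K)) + 1j * rng.normal(size=(L, K))   # focused RF snapshots
a = np.ones(L, dtype=complex)         # steering vector after receive delays

# X @ X.conj().T = R_qr^H R_qr, with R_qr from the thin QR of X^H
_, R_qr = np.linalg.qr(X.conj().T, mode="reduced")

# R^{-1} a via two triangular solves (no explicit covariance inverse)
y = solve_triangular(R_qr.conj().T, a, lower=True)
u = solve_triangular(R_qr, y, lower=False)

w = u / (a.conj() @ u)                # MV apodization weights
beamformed = w.conj() @ X             # beamformed output for each snapshot
```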
NASA Technical Reports Server (NTRS)
Platt, M. E.; Lewis, E. E.; Boehm, F.
1991-01-01
A Monte Carlo Fortran computer program was developed that uses two variance reduction techniques for computing system reliability, applicable to solving very large, highly reliable fault-tolerant systems. The program is consistent with the hybrid automated reliability predictor (HARP) code, which employs behavioral decomposition and complex fault-error handling models. This new capability, called MC-HARP, efficiently solves reliability models with non-constant failure rates (Weibull). Common mode failure modeling is also a specialty.
Cerebrospinal fluid PCR analysis and biochemistry in bodies with severe decomposition.
Palmiere, Cristian; Vanhaebost, Jessica; Ventura, Francesco; Bonsignore, Alessandro; Bonetti, Luca Reggiani
2015-02-01
The aim of this study was to assess whether Neisseria meningitidis, Listeria monocytogenes, Streptococcus pneumoniae and Haemophilus influenzae can be identified using the polymerase chain reaction technique in the cerebrospinal fluid of severely decomposed bodies with known, noninfectious causes of death or whether postmortem changes can lead to false positive results and thus erroneous diagnostic information. Biochemical investigations, postmortem bacteriology and real-time polymerase chain reaction analysis in cerebrospinal fluid were performed in a series of medico-legal autopsies that included noninfectious causes of death with decomposition, bacterial meningitis without decomposition, bacterial meningitis with decomposition, low respiratory tract infections with decomposition and abdominal infections with decomposition. In noninfectious causes of death with decomposition, postmortem investigations failed to reveal results consistent with generalized inflammation or bacterial infections at the time of death. Real-time polymerase chain reaction analysis in cerebrospinal fluid did not identify the studied bacteria in any of these cases. The results of this study highlight the usefulness of molecular approaches in bacteriology as well as the use of alternative biological samples in postmortem biochemistry in order to obtain suitable information even in corpses with severe decompositional changes. Copyright © 2014 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
Nealis, Logan J; Thompson, Kara D; Krank, Marvin D; Stewart, Sherry H
2016-04-01
While average rates of change in adolescent alcohol consumption are frequently studied, variability arising from situational and dispositional influences on alcohol use has been comparatively neglected. We used variance decomposition to test differences in variability resulting from year-to-year fluctuations in use (i.e., state-like) and from stable individual differences (i.e., trait-like) using data from the Project on Adolescent Trajectories and Health (PATH), a cohort-sequential study spanning grades 7 to 11 using three cohorts starting in grades seven, eight, and nine, respectively. We tested variance components for alcohol volume, frequency, and quantity in the overall sample, and changes in components over time within each cohort. Sex differences were tested. Most variability in alcohol use reflected state-like variation (47-76%), with a relatively smaller proportion of trait-like variation (19-36%). These proportions shifted across cohorts as youth got older, with increases in trait-like variance from early adolescence (14-30%) to later adolescence (30-50%). Trends were similar for males and females, although females showed higher trait-like variance in alcohol frequency than males throughout development (26-43% vs. 11-25%). For alcohol volume and frequency, males showed the greatest increase in trait-like variance earlier in development (i.e., grades 8-10) compared to females (i.e., grades 9-11). The relative strength of situational and dispositional influences on adolescent alcohol use has important implications for preventative interventions. Interventions should ideally target problematic alcohol use before it becomes more ingrained and trait-like. Copyright © 2015 Elsevier Ltd. All rights reserved.
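A minimal sketch of this kind of trait/state variance decomposition is given below: a random-intercept mixed model splits repeated measurements into stable between-person (trait-like) and year-to-year within-person (state-like) variance. The simulated data and the variance split built into it are illustrative assumptions, not PATH estimates.

```python
# Trait-like vs state-like variance via a random-intercept model (statsmodels).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_people, n_waves = 300, 5
person = np.repeat(np.arange(n_people), n_waves)
trait = rng.normal(scale=np.sqrt(0.3), size=n_people)[person]   # stable component
state = rng.normal(scale=np.sqrt(0.7), size=person.size)        # yearly fluctuation
df = pd.DataFrame({"person": person, "alcohol": trait + state})

model = sm.MixedLM.from_formula("alcohol ~ 1", groups="person", data=df)
fit = model.fit()

between = float(fit.cov_re.iloc[0, 0])     # trait-like (between-person) variance
within = float(fit.scale)                  # state-like (residual) variance
print("trait-like share:", between / (between + within))
```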
Hydrograph variances over different timescales in hydropower production networks
NASA Astrophysics Data System (ADS)
Zmijewski, Nicholas; Wörman, Anders
2016-08-01
The operation of water reservoirs involves a spectrum of timescales based on the distribution of stream flow travel times between reservoirs, as well as the technical, environmental, and social constraints imposed on the operation. In this research, a hydrodynamically based description of the flow between hydropower stations was implemented to study the relative importance of wave diffusion on the spectrum of hydrograph variance in a regulated watershed. Using spectral decomposition of the effluence hydrograph of a watershed, an exact expression of the variance in the outflow response was derived as a function of the trends of hydraulic and geomorphologic dispersion and the management of production and reservoirs. We show that the power spectra of the involved time series follow nearly fractal patterns, which facilitates examination of the relative importance of wave diffusion and possible changes in production demand on the outflow spectrum. The exact spectral solution can also identify statistical bounds of future demand patterns due to limitations in storage capacity. The impact of the hydraulic description of the stream flow on the reservoir discharge was examined for a given power demand in the River Dalälven, Sweden, as a function of a stream flow Peclet number. The regulation of hydropower production on the River Dalälven generally increased the short-term variance in the effluence hydrograph, whereas wave diffusion decreased the short-term variance over periods of <1 week, depending on the Peclet number (Pe) of the stream reach. This implies that flow variance becomes more erratic (closer to white noise) as a result of current production objectives.
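A small sketch of attributing hydrograph variance to timescales from its power spectrum, in the spirit of the spectral decomposition above, is given below. The synthetic discharge series and the one-week split are illustrative assumptions, not River Dalälven data.

```python
# By Parseval, integrating the PSD over frequency recovers the series variance;
# partial integrals attribute variance to short or long timescales.
import numpy as np
from scipy.signal import welch
from scipy.integrate import trapezoid

rng = np.random.default_rng(0)
dt_hours = 1.0
t = np.arange(0, 2 * 365 * 24, dt_hours)                   # two years, hourly
discharge = (10
             + 3 * np.sin(2 * np.pi * t / (365 * 24))       # seasonal cycle
             + 1 * np.sin(2 * np.pi * t / 24)               # daily regulation
             + rng.normal(scale=0.5, size=t.size))          # short-term noise

f, psd = welch(discharge, fs=1.0 / dt_hours, nperseg=8192)  # cycles per hour

total_var = trapezoid(psd, f)                               # ~ series variance
short = f > 1.0 / (7 * 24)                                  # periods shorter than one week
short_var = trapezoid(psd[short], f[short])
print("variance share at periods < 1 week:", short_var / total_var)
```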
NASA Astrophysics Data System (ADS)
Biederman, J. A.; Scott, R. L.; Goulden, M.
2014-12-01
Climate change is predicted to increase the frequency and severity of water limitation, altering terrestrial ecosystems and their carbon exchange with the atmosphere. Here we compare site-level temporal sensitivity of annual carbon fluxes to interannual variations in water availability against cross-site spatial patterns over a network of 19 eddy covariance flux sites. This network represents one order of magnitude in mean annual productivity and includes western North American desert shrublands and grasslands, savannahs, woodlands, and forests with continuous records of 4 to 12 years. Our analysis reveals site-specific patterns not identifiable in prior syntheses that pooled sites. We interpret temporal variability as an indicator of ecosystem response to annual water availability due to fast-changing factors such as leaf stomatal response and microbial activity, while cross-site spatial patterns are used to infer ecosystem adjustment to climatic water availability through slow-changing factors such as plant community and organic carbon pools. Using variance decomposition, we directly quantify how terrestrial carbon balance depends on slow- and fast-changing components of gross ecosystem production (GEP) and total ecosystem respiration (TER). Slow factors explain the majority of variance in annual net ecosystem production (NEP) across the dataset, and their relative importance is greater at wetter, forest sites than desert ecosystems. Site-specific offsets from spatial patterns of GEP and TER explain one third of NEP variance, likely due to slow-changing factors not directly linked to water, such as disturbance. TER and GEP are correlated across sites as previously shown, but our site-level analysis reveals surprisingly consistent linear relationships between these fluxes in deserts and savannahs, indicating fast coupling of TER and GEP in more arid ecosystems. Based on the uncertainty associated with slow and fast factors, we suggest a framework for improved prediction of terrestrial carbon balance. We will also present results of ongoing work to quantify fast and slow contributions to the relationship between evapotranspiration and precipitation across a precipitation gradient.
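A toy sketch of splitting annual flux variability into a slow cross-site component (site means) and a fast interannual component (within-site anomalies), in the spirit of the decomposition described above, is given below. The site-by-year table is synthetic and the resulting shares do not reflect the actual flux network.

```python
# Slow (cross-site) vs fast (interannual anomaly) variance shares of annual NEP.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
records = []
for s in [f"site_{i}" for i in range(19)]:
    site_mean = rng.normal(200, 150)                  # slow: climate/community control
    for year in range(2004, 2012):
        records.append({"site": s, "year": year,
                        "NEP": site_mean + rng.normal(0, 60)})   # fast: annual anomaly
df = pd.DataFrame(records)

slow = df.groupby("site")["NEP"].transform("mean")    # cross-site (slow) component
fast = df["NEP"] - slow                               # site-level anomaly (fast)

total = df["NEP"].var(ddof=0)
print("slow share:", slow.var(ddof=0) / total)
print("fast share:", fast.var(ddof=0) / total)
```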
Xingyan Huang; Cornelis F. De Hoop; Jiulong Xie; Chung-Yun Hse; Jinqiu Qi; Yuzhu Chen; Feng Li
2017-01-01
The thermal decomposition characteristics of microwave liquefied rape straw residues with respect to liquefaction condition and pyrolysis conversion were investigated using a thermogravimetric (TG) analyzer at the heating rates of 5, 20, 50 °C min-1. The hemicellulose decomposition peak was absent at the derivative thermogravimetric analysis (DTG...
Domain decomposition for aerodynamic and aeroacoustic analyses, and optimization
NASA Technical Reports Server (NTRS)
Baysal, Oktay
1995-01-01
The overarching theme was domain decomposition, which was intended to improve the numerical solution technique for the partial differential equations at hand; in the present study, those that governed either the fluid flow, the aeroacoustic wave propagation, or the sensitivity analysis for a gradient-based optimization. The role of the domain decomposition extended beyond the original impetus of discretizing geometrically complex regions or writing modular software for distributed-hardware computers. It induced function-space decompositions and operator decompositions that offered the valuable property of near independence of operator evaluation tasks. The objectives gravitated around extensions and implementations of methodologies either previously developed or concurrently under development: (1) aerodynamic sensitivity analysis with domain decomposition (SADD); (2) computational aeroacoustics of cavities; and (3) dynamic, multibody computational fluid dynamics using unstructured meshes.
Hilt, Pauline M.; Delis, Ioannis; Pozzo, Thierry; Berret, Bastien
2018-01-01
The modular control hypothesis suggests that motor commands are built from precoded modules whose specific combined recruitment can allow the performance of virtually any motor task. Despite considerable experimental support, this hypothesis remains tentative as classical findings of reduced dimensionality in muscle activity may also result from other constraints (biomechanical couplings, data averaging or low dimensionality of motor tasks). Here we assessed the effectiveness of modularity in describing muscle activity in a comprehensive experiment comprising 72 distinct point-to-point whole-body movements during which the activity of 30 muscles was recorded. To identify invariant modules of a temporal and spatial nature, we used a space-by-time decomposition of muscle activity that has been shown to encompass classical modularity models. To examine the decompositions, we focused not only on the amount of variance they explained but also on whether the task performed on each trial could be decoded from the single-trial activations of modules. For the sake of comparison, we confronted these scores to the scores obtained from alternative non-modular descriptions of the muscle data. We found that the space-by-time decomposition was effective in terms of data approximation and task discrimination at comparable reduction of dimensionality. These findings show that few spatial and temporal modules give a compact yet approximate representation of muscle patterns carrying nearly all task-relevant information for a variety of whole-body reaching movements. PMID:29666576
NASA Astrophysics Data System (ADS)
Hipp, J. R.; Encarnacao, A.; Ballard, S.; Young, C. J.; Phillips, W. S.; Begnaud, M. L.
2011-12-01
Recently our combined SNL-LANL research team has succeeded in developing a global, seamless 3D tomographic P-velocity model (SALSA3D) that provides superior first P travel time predictions at both regional and teleseismic distances. However, given the variable data quality and uneven data sampling associated with this type of model, it is essential that there be a means to calculate high-quality estimates of the path-dependent variance and covariance associated with the predicted travel times of ray paths through the model. In this paper, we show a methodology for accomplishing this by exploiting the full model covariance matrix. Our model has on the order of 1/2 million nodes, so the challenge in calculating the covariance matrix is formidable: 0.9 TB of storage for half of a symmetric matrix, necessitating an out-of-core (OOC) blocked matrix solution technique. With our approach the tomography matrix (G, which includes Tikhonov regularization terms) is multiplied by its transpose (GᵀG) and written in a blocked sub-matrix fashion. We employ a distributed parallel solution paradigm that solves for (GᵀG)⁻¹ by assigning blocks to individual processing nodes for matrix decomposition update and scaling operations. We first find the Cholesky decomposition of GᵀG, which is subsequently inverted. Next, we employ OOC matrix multiply methods to calculate the model covariance matrix from (GᵀG)⁻¹ and an assumed data covariance matrix. Given the model covariance matrix, we solve for the travel-time covariance associated with arbitrary ray paths by integrating the model covariance along both ray paths; setting the paths equal gives the variance for that path. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
Oita, Azusa; Tsuboi, Yuuri; Date, Yasuhiro; Oshima, Takahiro; Sakata, Kenji; Yokoyama, Akiko; Moriya, Shigeharu; Kikuchi, Jun
2018-04-24
There is an increasing need for assessing aquatic ecosystems, which are globally endangered. Since aquatic ecosystems are complex, integrated consideration of multiple factors utilizing omics technologies can help us understand them better. An integrated strategy linking three analytical approaches (machine learning, factor mapping, and forecast-error-variance decomposition) for extracting the features of surface water from datasets comprising ions, metabolites, and microorganisms is proposed herein. The three approaches can be employed for datasets of diverse sample sizes and experimentally analyzed factors, and are applied to explore the features of bay water surrounding Odaiba, Tokyo, Japan, as a case study. Firstly, the machine learning approach separated 681 surface water samples within Japan into three clusters, categorizing Odaiba water as seawater with relatively low levels of inorganic ions, including Mg, Ba, and B. Secondly, based on seasonal dynamics, the factor mapping approach showed that Odaiba water samples from the summer are rich in multiple amino acids and some other metabolites and poor in inorganic ions relative to other seasons. Finally, forecast-error-variance decomposition using vector autoregressive models indicated that a type of microalgae (Raphidophyceae) grows in close correlation with alanine, succinic acid, and valine on filters, and with isobutyric acid and 4-hydroxybenzoic acid in filtrate, Ba, and average wind speed. Our integrated strategy can be used to examine many biological, chemical, and environmental physical factors to analyze surface water. Copyright © 2018. Published by Elsevier B.V.
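A minimal sketch of a forecast-error-variance decomposition (FEVD) from a vector autoregression, as used above to attribute variance in one series to the others, is given below. The three synthetic series, their dynamics, and the lag order are illustrative assumptions, not the Odaiba measurements.

```python
# Fit a VAR and decompose the forecast-error variance of one series.
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(0)
n = 200
alanine = np.zeros(n)
succinate = np.zeros(n)
raphido = np.zeros(n)                       # stand-in for Raphidophyceae abundance
for t in range(1, n):
    alanine[t] = 0.6 * alanine[t - 1] + rng.normal(scale=0.5)
    succinate[t] = 0.5 * succinate[t - 1] + rng.normal(scale=0.5)
    raphido[t] = (0.4 * raphido[t - 1] + 0.3 * alanine[t - 1]
                  + 0.2 * succinate[t - 1] + rng.normal(scale=0.5))

data = pd.DataFrame({"alanine": alanine, "succinate": succinate,
                     "raphidophyceae": raphido})
res = VAR(data).fit(maxlags=2)
fevd = res.fevd(10)                          # 10-step-ahead decomposition
print(fevd.decomp[2, -1])                    # variance shares for the raphidophyceae equation
```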
Analysis and Prediction of Sea Ice Evolution using Koopman Mode Decomposition Techniques
2018-04-30
Monthly progress report. The program goal is analysis of sea ice dynamical behavior using Koopman Mode Decomposition (KMD) techniques. The work in the program's first month consisted of improvements to data processing code and inclusion of additional Arctic sea ice
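A compact sketch of exact dynamic mode decomposition (DMD), a standard finite-dimensional approximation to Koopman modes, is shown below. The random snapshot matrix simply stands in for gridded sea-ice data; none of this reflects the program's actual processing code.

```python
# Exact DMD: eigen-decompose a reduced linear operator fitted to snapshot pairs.
import numpy as np

rng = np.random.default_rng(0)
n_space, n_time, r = 500, 60, 10
snapshots = rng.normal(size=(n_space, n_time))      # columns = states at successive times

X, Y = snapshots[:, :-1], snapshots[:, 1:]          # pairs (x_k, x_{k+1})
U, s, Vh = np.linalg.svd(X, full_matrices=False)
U, s, Vh = U[:, :r], s[:r], Vh[:r]                  # rank-r truncation

A_tilde = (U.conj().T @ Y @ Vh.conj().T) / s        # reduced linear operator
eigvals, W = np.linalg.eig(A_tilde)                 # DMD (approximate Koopman) eigenvalues
modes = (Y @ Vh.conj().T / s) @ W                   # exact DMD mode shapes
growth_rates = np.log(np.abs(eigvals))              # per-step growth/decay of each mode
```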
NASA Technical Reports Server (NTRS)
Thompson, James M.; Daniel, Janice D.
1989-01-01
The development of a mass spectrometer/thermal analyzer/computer (MS/TA/Computer) system capable of providing simultaneous thermogravimetry (TG), differential thermal analysis (DTA), derivative thermogravimetry (DTG) and evolved gas detection and analysis (EGD and EGA) under both atmospheric and high pressure conditions is described. The combined system was used to study the thermal decomposition of the nozzle material that constitutes the throat of the solid rocket boosters (SRB).
Source-space ICA for MEG source imaging.
Jonmohamadi, Yaqub; Jones, Richard D
2016-02-01
One of the most widely used approaches in electroencephalography/magnetoencephalography (MEG) source imaging is application of an inverse technique (such as dipole modelling or sLORETA) to the components extracted by independent component analysis (ICA) (sensor-space ICA + inverse technique). The advantage of this approach over an inverse technique alone is that it can identify and localize multiple concurrent sources. Among inverse techniques, the minimum-variance beamformers offer high spatial resolution. However, if the aim is both the high spatial resolution of the beamformer and the ability to handle multiple concurrent sources, sensor-space ICA + beamformer is not an ideal combination. We propose source-space ICA for MEG as a powerful alternative approach which can provide the high spatial resolution of the beamformer and handle multiple concurrent sources. The concept of source-space ICA for MEG is to apply the beamformer first and then singular value decomposition + ICA. In this paper we have compared source-space ICA with sensor-space ICA in both simulation and real MEG. The simulations included two challenging scenarios of correlated/concurrent cluster sources. Source-space ICA provided superior performance in spatial reconstruction of source maps, even though both techniques performed equally from a temporal perspective. Real MEG from two healthy subjects with visual stimuli was also used to compare the performance of sensor-space ICA and source-space ICA. We have also proposed a new variant of the minimum-variance beamformer called weight-normalized linearly-constrained minimum-variance with orthonormal lead-field. As sensor-space ICA-based source reconstruction is popular in EEG and MEG imaging, and given that source-space ICA has superior spatial performance, it is expected that source-space ICA will supersede its predecessor in many applications.
Rathouz, Paul J.; Van Hulle, Carol A.; Lee Rodgers, Joseph; Waldman, Irwin D.; Lahey, Benjamin B.
2009-01-01
Purcell (2002) proposed a bivariate biometric model for testing and quantifying the interaction between latent genetic influences and measured environments in the presence of gene-environment correlation. Purcell's model extends the Cholesky model to include gene-environment interaction. We examine a number of closely related alternative models that do not involve gene-environment interaction but which may fit the data as well as Purcell's model. Because failure to consider these alternatives could lead to spurious detection of gene-environment interaction, we propose alternative models for testing gene-environment interaction in the presence of gene-environment correlation, including one based on the correlated factors model. In addition, we note mathematical errors in the calculation of effect size via variance components in Purcell's model. We propose a statistical method for deriving and interpreting variance decompositions that are true to the fitted model. PMID:18293078
Duemichen, E; Braun, U; Senz, R; Fabian, G; Sturm, H
2014-08-08
For analysis of the gaseous thermal decomposition products of polymers, the common techniques are thermogravimetry combined with Fourier transform infrared spectroscopy (TGA-FTIR) and with mass spectrometry (TGA-MS). These methods offer a simple approach to the decomposition mechanism, especially for small decomposition molecules. Complex spectra of gaseous mixtures are very often hard to identify because of overlapping signals. In this paper a new method is described in which the decomposition products are adsorbed under controlled conditions in TGA on solid-phase extraction (SPE) material: twisters. Subsequently the twisters were analysed by thermal desorption gas chromatography mass spectrometry (TDS-GC-MS), which allows the decomposition products to be separated and identified using an MS library. The thermoplastics polyamide 66 (PA 66) and polybutylene terephthalate (PBT) were used as example polymers. The influence of the sample mass and of the purge gas flow during the decomposition process was investigated in TGA. The advantages and limitations of the method are presented in comparison to the common analysis techniques, TGA-FTIR and TGA-MS. Copyright © 2014 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tang, Kunkun; Congedo, Pietro M.
The Polynomial Dimensional Decomposition (PDD) is employed in this work for the global sensitivity analysis and uncertainty quantification (UQ) of stochastic systems subject to a moderate to large number of input random variables. Due to the intimate connection between the PDD and the Analysis of Variance (ANOVA) approaches, PDD is able to provide a simpler and more direct evaluation of the Sobol' sensitivity indices, when compared to the Polynomial Chaos expansion (PC). Unfortunately, the number of PDD terms grows exponentially with respect to the size of the input random vector, which makes the computational cost of standard methods unaffordable for real engineering applications. In order to address the problem of the curse of dimensionality, this work proposes essentially variance-based adaptive strategies aiming to build a cheap meta-model (i.e. surrogate model) by employing the sparse PDD approach with its coefficients computed by regression. Three levels of adaptivity are carried out in this paper: 1) the truncated dimensionality for ANOVA component functions, 2) the active dimension technique especially for second- and higher-order parameter interactions, and 3) the stepwise regression approach designed to retain only the most influential polynomials in the PDD expansion. During this adaptive procedure featuring stepwise regressions, the surrogate model representation contains only a few terms, so that the cost of repeatedly solving the linear systems of the least-squares regression problem is negligible. The size of the finally obtained sparse PDD representation is much smaller than that of the full expansion, since only significant terms are eventually retained. Consequently, a much smaller number of calls to the deterministic model is required to compute the final PDD coefficients.
Variability common to global sea surface temperatures and runoff in the conterminous United States
McCabe, Gregory J.; Wolock, David M.
2014-01-01
Singular value decomposition (SVD) is used to identify the variability common to global sea surface temperatures (SSTs) and water-balance-modeled water-year (WY) runoff in the conterminous United States (CONUS) for the 1900–2012 period. Two modes were identified from the SVD analysis; the two modes explain 25% of the variability in WY runoff and 33% of the variability in WY SSTs. The first SVD mode reflects the variability of the El Niño–Southern Oscillation (ENSO) in the SST data and the hydroclimatic effects of ENSO on WY runoff in the CONUS. The second SVD mode is related to variability of the Atlantic multidecadal oscillation (AMO). An interesting aspect of these results is that both ENSO and AMO appear to have nearly equivalent effects on runoff variability in the CONUS. However, the relatively small amount of variance explained by the SVD analysis indicates that there is little covariation between runoff and SSTs, suggesting that SSTs may not be a viable predictor of runoff variability for most of the conterminous United States.
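A small sketch of the coupled-field SVD (maximum covariance analysis) used above is given below: the cross-covariance between two anomaly fields is decomposed, and the leading modes give paired spatial patterns and expansion coefficients. The random fields simply stand in for gridded SSTs and CONUS runoff.

```python
# Maximum covariance analysis: SVD of the cross-covariance of two anomaly fields.
import numpy as np

rng = np.random.default_rng(0)
n_years, n_sst, n_runoff = 113, 400, 200             # e.g. 1900-2012 water years
sst = rng.normal(size=(n_years, n_sst))
runoff = rng.normal(size=(n_years, n_runoff))

sst_a = sst - sst.mean(axis=0)                        # anomalies about the long-term mean
run_a = runoff - runoff.mean(axis=0)

C = sst_a.T @ run_a / (n_years - 1)                   # cross-covariance matrix
U, s, Vh = np.linalg.svd(C, full_matrices=False)

scf = s**2 / np.sum(s**2)                             # squared covariance fraction per mode
pc_sst = sst_a @ U[:, :2]                             # expansion coefficients, leading modes
pc_runoff = run_a @ Vh[:2].T
print("squared covariance fraction of first two modes:", scf[:2])
```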
Premkumar, Thathan; Govindarajan, Subbiah; Coles, Andrew E; Wight, Charles A
2005-04-07
The thermal decomposition kinetics of N2H5[Ce(pyrazine-2,3-dicarboxylate)2(H2O)] (Ce-P) have been studied by thermogravimetric analysis (TGA) and differential scanning calorimetry (DSC) for the first time; TGA reveals an oxidative decomposition process yielding CeO2 as the final product, with an activation energy of approximately 160 kJ mol−1. This complex may be used as a precursor to fine-particle cerium oxides due to its low decomposition temperature.
NASA Astrophysics Data System (ADS)
Vaks, V. L.; Domracheva, E. G.; Chernyaeva, M. B.; Pripolzin, S. I.; Revin, L. S.; Tretyakov, I. V.; Anfertyev, V. A.; Yablokov, A. A.; Lukyanenko, I. A.; Sheikov, Yu. V.
2018-02-01
We show prospects for using the method of high-resolution terahertz spectroscopy for a continuous analysis of the decomposition products of energy substances in the gas phase (including short-lived ones) in a wide temperature range. The experimental setup, which includes a terahertz spectrometer for studying the thermal decomposition reactions, is described. The results of analysis of the gaseous decomposition products of energy substances by the example of ammonium nitrate heated from room temperature to 167°C are presented.
NASA Astrophysics Data System (ADS)
Zeng, R.; Cai, X.
2016-12-01
Irrigation has considerably interfered with hydrological processes in arid and semi-arid areas with heavily irrigated agriculture. With the increasing demand for food production and the rising evaporative demand due to climate change, irrigation water consumption is expected to increase, which would aggravate the interference with hydrologic processes. Current studies focus on the impact of irrigation on the mean value of evapotranspiration (ET) at either local or regional scale; however, how irrigation changes the variability of ET is not well understood. This study analyzes the impact of extensive irrigation on ET variability in the Northern High Plains. We apply an ET variance decomposition framework developed in our previous work to quantify the effects of both climate and irrigation on ET variance in Northern High Plains watersheds. Based on climate and water table observations, we assess the monthly ET variance and its components for two periods: the 1930s-1960s, with less irrigation development, and the 1970s-2010s, with more development. It is found that irrigation not only caused the well-recognized groundwater drawdown and stream depletion problems in the region, but also buffered ET variance from climatic fluctuations. In addition to increasing food productivity, irrigation also stabilizes crop yield by mitigating the impact of hydroclimatic variability. With complementary water supply from irrigation, ET often approaches potential ET, and thus the observed ET variance is attributed more to climatic variables, especially temperature; meanwhile, irrigation causes significant seasonal fluctuations in groundwater storage. For sustainable water resources management in the Northern High Plains, we argue that both the mean value and the variance of ET should be considered together in the regulation of irrigation in this region.
Beautemps, D; Badin, P; Bailly, G
2001-05-01
The following contribution addresses several issues concerning speech degrees of freedom in French oral vowels, stop, and fricative consonants based on an analysis of tongue and lip shapes extracted from cineradio- and labio-films. The midsagittal tongue shapes have been submitted to a linear decomposition where some of the loading factors were selected such as jaw and larynx position while four other components were derived from principal component analysis (PCA). For the lips, in addition to the more traditional protrusion and opening components, a supplementary component was extracted to explain the upward movement of both the upper and lower lips in [v] production. A linear articulatory model was developed; the six tongue degrees of freedom were used as the articulatory control parameters of the midsagittal tongue contours and explained 96% of the tongue data variance. These control parameters were also used to specify the frontal lip width dimension derived from the labio-film front views. Finally, this model was complemented by a conversion model going from the midsagittal to the area function, based on a fitting of the midsagittal distances and the formant frequencies for both vowels and consonants.
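A minimal sketch of the linear component extraction described above, reduced to plain PCA on a hypothetical matrix of midsagittal contour coordinates; the imposed jaw and larynx factors of the actual model are not reproduced here.

```python
# Sketch of extracting linear articulatory components with PCA.
# `contours` is a hypothetical array (observations x contour coordinates),
# already centred; the real analysis removes jaw/larynx factors first.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
contours = rng.standard_normal((500, 60))   # 500 frames, 30 (x, y) contour points

pca = PCA(n_components=4)                   # four data-driven components, as in the study
scores = pca.fit_transform(contours)

print("Variance explained per component:", pca.explained_variance_ratio_)
print("Cumulative variance explained:", pca.explained_variance_ratio_.cumsum()[-1])
```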
Camminatiello, Ida; D'Ambra, Antonello; Sarnacchiaro, Pasquale
2014-01-01
In this paper we propose a general framework for the analysis of the complete set of log Odds Ratios (ORs) generated by a two-way contingency table. Starting from the RC(M) association model and assuming a Poisson distribution for the counts of the two-way contingency table, we obtain the weighted Log Ratio Analysis, which we extend to the study of log ORs. In particular, we obtain an indirect representation of the log ORs and some synthesis measures. Then, to study the matrix of log ORs, we perform a generalized Singular Value Decomposition that allows us to obtain a direct representation of the log ORs, together with summary measures of association. We consider the matrix of the complete set of ORs because it is linked to the two-way contingency table in terms of variance and it allows us to represent all the ORs on a factorial plane. Finally, a two-way contingency table crossing pollution of the Sarno river with sampling points is analyzed to illustrate the proposed framework.
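A sketch of the basic object involved: the complete set of log odds ratios of a two-way table, decomposed here with a plain (unweighted) SVD. The paper uses a weighted generalized SVD; the table below is hypothetical and the plain SVD only illustrates the idea.

```python
# Build the complete set of log odds ratios from an I x J contingency table
# and decompose the resulting matrix with an SVD.
import numpy as np
from itertools import combinations

N = np.array([[30, 12,  8],
              [10, 25, 15],
              [ 5, 10, 40]], dtype=float)    # hypothetical I x J table of counts

I, J = N.shape
row_pairs = list(combinations(range(I), 2))
col_pairs = list(combinations(range(J), 2))

# Matrix of log ORs: one row per pair of rows, one column per pair of columns
log_or = np.empty((len(row_pairs), len(col_pairs)))
for a, (i, ip) in enumerate(row_pairs):
    for b, (j, jp) in enumerate(col_pairs):
        log_or[a, b] = np.log(N[i, j] * N[ip, jp] / (N[i, jp] * N[ip, j]))

U, s, Vt = np.linalg.svd(log_or, full_matrices=False)
print("Share of log-OR variability captured by each dimension:", s**2 / np.sum(s**2))
```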
Global sensitivity analysis for fuzzy inputs based on the decomposition of fuzzy output entropy
NASA Astrophysics Data System (ADS)
Shi, Yan; Lu, Zhenzhou; Zhou, Yicheng
2018-06-01
To analyse the component of fuzzy output entropy, a decomposition method of fuzzy output entropy is first presented. After the decomposition of fuzzy output entropy, the total fuzzy output entropy can be expressed as the sum of the component fuzzy entropy contributed by fuzzy inputs. Based on the decomposition of fuzzy output entropy, a new global sensitivity analysis model is established for measuring the effects of uncertainties of fuzzy inputs on the output. The global sensitivity analysis model can not only tell the importance of fuzzy inputs but also simultaneously reflect the structural composition of the response function to a certain degree. Several examples illustrate the validity of the proposed global sensitivity analysis, which is a significant reference in engineering design and optimization of structural systems.
Gholami, Somayeh; Kompany-Zareh, Mohsen
2013-07-01
Actinomycin D (Act D), an oncogenic c-Myc promoter binder, interferes with the action of RNA polymerase. There is great demand for high-throughput technology able to monitor the activity of DNA-binding drugs. To this end, binding of 7-aminoactinomycin D (7AAD) to the duplex c-Myc promoter was investigated by use of 2D-photoluminescence emission (2D-PLE), and the resulting data were subjected to analysis by use of convenient and powerful multi-way approaches. Fluorescence measurements were performed by use of the quantum dot (QD)-conjugated c-Myc promoter. Intercalation of 7AAD within duplex base pairs resulted in efficient energy transfer from drug to QD via fluorescence resonance energy transfer (FRET). Multi-way analysis of the three-way data array obtained from titration experiments was performed by use of restricted Tucker3 and hard trilinear decomposition (HTD). These techniques enable analysis of high-dimensional and complex data from nanobiological systems which include several spectrally overlapped structures. It was almost impossible to obtain robust and meaningful information about the FRET process for such high overlap data by use of classical analysis. The soft approach had the important advantage over univariate classical methods of enabling us to investigate the source of variance in the fluorescence signal of the DNA-drug complex. It was established that hard trilinear decomposition analysis of FRET-measured data overcomes the problem of rank deficiency, enabling calculation of concentration profiles and pure spectra for all species, including non-fluorophores. The hard modeling approach was also used for determination of equilibrium constants for the hybridization and intercalation equilibria, using nonlinear fit data analysis. The intercalation constant 3.6 × 10(6) mol(-1) L and hybridization stability 1.0 × 10(8) mol(-1) L obtained were in good agreement with values reported in the literature. The analytical concentration of the QD-labeled DNA was determined by use of nonlinear fitting, without using external standard calibration samples. This study was a successful application of multi-way chemometric methods to investigation of nano-biotechnological systems where several overlapped species coexist in solution.
Experimental game theory and behavior genetics.
Cesarini, David; Dawes, Christopher T; Johannesson, Magnus; Lichtenstein, Paul; Wallace, Björn
2009-06-01
We summarize the findings from a research program studying the heritability of behavior in a number of widely used economic games, including trust, dictator, and ultimatum games. Results from the standard behavior genetic variance decomposition suggest that strategies and fundamental economic preference parameters are moderately heritable, with estimates ranging from 18 to 42%. In addition, we also report new evidence on so-called "hyperfair" preferences in the ultimatum game. We discuss the implications of our findings with special reference to current efforts that seek to understand the molecular genetic architecture of complex social behaviors.
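The standard behavior genetic variance decomposition referred to above is usually a twin-based ACE decomposition; a minimal sketch using Falconer's formulas with illustrative twin correlations (not values from the study) follows.

```python
# Sketch of the classical twin-based variance decomposition (Falconer's formulas).
# The correlations below are illustrative numbers, not estimates from the paper.
r_mz = 0.45   # correlation of game behaviour between monozygotic twin pairs
r_dz = 0.25   # correlation between dizygotic twin pairs

h2 = 2 * (r_mz - r_dz)      # additive genetic variance share (heritability)
c2 = 2 * r_dz - r_mz        # shared (common) environment share
e2 = 1 - r_mz               # unique environment share (plus measurement error)

print(f"heritability h^2 = {h2:.2f}, shared environment c^2 = {c2:.2f}, unique e^2 = {e2:.2f}")
```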
Gustafsson, Per E.; Sebastián, Miguel San; Mosquera, Paola A.
2016-01-01
Background Intersectionality has received increased interest within population health research in recent years, as a concept and framework to understand entangled dimensions of health inequalities, such as gender and socioeconomic inequalities in health. However, little attention has been paid to the intersectional middle groups, referring to those occupying positions of mixed advantage and disadvantage. Objective This article aimed to 1) examine mental health inequalities between intersectional groups reflecting structural positions of gender and economic affluence and 2) decompose any observed health inequalities, among middle groups, into contributions from experiences and conditions representing processes of privilege and oppression. Design Participants (N=25,585) came from the cross-sectional ‘Health on Equal Terms’ survey covering 16- to 84-year-olds in the four northernmost counties of Sweden. Six intersectional positions were constructed from gender (woman vs. men) and tertiles (low vs. medium vs. high) of disposable income. Mental health was measured through the General Health Questionnaire-12. Explanatory variables covered areas of material conditions, job relations, violence, domestic burden, and healthcare contacts. Analysis of variance (Aim 1) and Blinder-Oaxaca decomposition analysis (Aim 2) were used. Results Significant mental health inequalities were found between dominant (high-income women and middle-income men) and subordinate (middle-income women and low-income men) middle groups. The health inequalities between adjacent middle groups were mostly explained by violence (mid-income women vs. men comparison); material conditions (mid- vs. low-income men comparison); and material needs, job relations, and unmet medical needs (high- vs. mid-income women comparison). Conclusions The study suggests complex processes whereby dominant middle groups in the intersectional space of economic affluence and gender can leverage strategic resources to gain mental health advantage relative to subordinate middle groups. PMID:27887668
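A minimal sketch of a twofold Blinder-Oaxaca decomposition of a mean outcome gap between two groups, the type of analysis used for Aim 2; the covariates, outcomes and group labels below are simulated stand-ins, not the survey data.

```python
# Twofold Blinder-Oaxaca decomposition of the mean gap between groups A and B,
# using hypothetical covariate matrices (with intercept) and outcomes.
import numpy as np

rng = np.random.default_rng(2)
n_a, n_b, k = 400, 400, 3
X_a = np.column_stack([np.ones(n_a), rng.standard_normal((n_a, k))])
X_b = np.column_stack([np.ones(n_b), rng.standard_normal((n_b, k)) + 0.3])
beta_true = np.array([1.0, 0.5, -0.2, 0.8])
y_a = X_a @ beta_true + rng.standard_normal(n_a)
y_b = X_b @ (beta_true + np.array([0.2, 0.1, 0.0, -0.1])) + rng.standard_normal(n_b)

beta_a, *_ = np.linalg.lstsq(X_a, y_a, rcond=None)
beta_b, *_ = np.linalg.lstsq(X_b, y_b, rcond=None)

gap = y_a.mean() - y_b.mean()
explained = (X_a.mean(axis=0) - X_b.mean(axis=0)) @ beta_b      # due to characteristics
unexplained = X_a.mean(axis=0) @ (beta_a - beta_b)              # due to coefficients

print(f"gap = {gap:.3f}, explained = {explained:.3f}, unexplained = {unexplained:.3f}")
# With OLS and intercepts, explained + unexplained reproduces the gap exactly.
```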
NASA Astrophysics Data System (ADS)
Yusa, Yasunori; Okada, Hiroshi; Yamada, Tomonori; Yoshimura, Shinobu
2018-04-01
A domain decomposition method for large-scale elastic-plastic problems is proposed. The proposed method is based on a quasi-Newton method in conjunction with a balancing domain decomposition preconditioner. The use of a quasi-Newton method overcomes two problems associated with the conventional domain decomposition method based on the Newton-Raphson method: (1) avoidance of a double-loop iteration algorithm, which generally has large computational complexity, and (2) consideration of the local concentration of nonlinear deformation, which is observed in elastic-plastic problems with stress concentration. Moreover, the application of a balancing domain decomposition preconditioner ensures scalability. Using the conventional and proposed domain decomposition methods, several numerical tests, including weak scaling tests, were performed. The convergence performance of the proposed method is comparable to that of the conventional method. In particular, in elastic-plastic analysis, the proposed method exhibits better convergence performance than the conventional method.
Machine Learning Techniques for Global Sensitivity Analysis in Climate Models
NASA Astrophysics Data System (ADS)
Safta, C.; Sargsyan, K.; Ricciuto, D. M.
2017-12-01
Climate model studies are challenged not only by the compute-intensive nature of these models but also by the high dimensionality of the input parameter space. In our previous work with the land model components (Sargsyan et al., 2014) we identified subsets of 10 to 20 parameters relevant for each QoI via Bayesian compressive sensing and variance-based decomposition. Nevertheless, the algorithms were challenged by the nonlinear input-output dependencies for some of the relevant QoIs. In this work we will explore a combination of techniques to extract relevant parameters for each QoI and subsequently construct surrogate models with quantified uncertainty, necessary for future developments, e.g., model calibration and prediction studies. In the first step, we will compare the skill of machine-learning models (e.g., neural networks, support vector machines) to identify the optimal number of classes in selected QoIs and construct robust multi-class classifiers that will partition the parameter space into regions with smooth input-output dependencies. These classifiers will be coupled with techniques aimed at building sparse and/or low-rank surrogate models tailored to each class. Specifically, we will explore and compare sparse learning techniques with low-rank tensor decompositions. These models will be used to identify parameters that are important for each QoI. Surrogate accuracy requirements are higher for subsequent model calibration studies, and we will ascertain the performance of this workflow for multi-site ALM simulation ensembles.
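A minimal sketch of the variance-based decomposition mentioned above, computing first-order Sobol indices with the pick-freeze (Saltelli-type) estimator on a cheap stand-in function; `model` and its inputs are hypothetical, not a climate-model QoI.

```python
# First-order Sobol sensitivity indices via the pick-freeze estimator.
import numpy as np

def model(x):
    # Hypothetical nonlinear test function of three inputs in [0, 1]
    return np.sin(2 * np.pi * x[:, 0]) + 5 * x[:, 1]**2 + 0.5 * x[:, 0] * x[:, 2]

rng = np.random.default_rng(3)
n, d = 20000, 3
A = rng.random((n, d))
B = rng.random((n, d))
fA, fB = model(A), model(B)
var_y = np.var(np.concatenate([fA, fB]))

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                         # A with column i taken from B
    fABi = model(ABi)
    S_i = np.mean(fB * (fABi - fA)) / var_y     # Saltelli-type first-order estimator
    print(f"first-order Sobol index S_{i} ~= {S_i:.3f}")
```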
Transportation Network Analysis and Decomposition Methods
DOT National Transportation Integrated Search
1978-03-01
The report outlines research in transportation network analysis using decomposition techniques as a basis for problem solutions. Two transportation network problems were considered in detail: a freight network flow problem and a scheduling problem fo...
NASA Astrophysics Data System (ADS)
Yang, Honggang; Lin, Huibin; Ding, Kang
2018-05-01
The performance of sparse feature extraction by the commonly used K-Singular Value Decomposition (K-SVD) method depends largely on the signal segment selected in rolling bearing diagnosis; furthermore, the calculation is relatively slow and the dictionary becomes highly redundant when the fault signal is long. A new sliding window denoising K-SVD (SWD-KSVD) method is proposed, which uses only one small segment of the time-domain signal containing impacts to perform sliding window dictionary learning and selects an optimal pattern carrying the oscillating information of the rolling bearing fault according to a maximum variance principle. An inner product operation between the optimal pattern and the whole fault signal is performed to enhance the characteristic of the impacts' occurrence moments. Lastly, the signal is reconstructed at the peak points of the inner product to realize the extraction of the rolling bearing fault features. Both simulation and experiments verify that the method extracts the fault features effectively.
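A rough sketch of the sliding-window idea, using scikit-learn's dictionary learner as a stand-in for K-SVD: learn atoms from overlapping windows of a short segment, pick the highest-variance atom, and correlate it with the full signal. The signal, window length and parameters are illustrative, not those of the paper.

```python
# Sliding-window dictionary learning on a synthetic bearing-like signal,
# followed by an inner product (cross-correlation) with the whole record.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(4)
fs = 12000                                       # Hz, hypothetical sampling rate
n = 12000                                        # one second of signal
tau = np.arange(0, 0.02, 1 / fs)                 # 20 ms decaying-oscillation template
impulse = np.exp(-400 * tau) * np.sin(2 * np.pi * 2000 * tau)

signal = 0.2 * rng.standard_normal(n)
for start in range(500, n - tau.size, 1200):     # periodic fault impacts
    signal[start:start + tau.size] += impulse

segment = signal[:3000]                          # short segment containing impacts
win = 240
windows = np.array([segment[i:i + win] for i in range(0, segment.size - win, 8)])

dico = MiniBatchDictionaryLearning(n_components=8, alpha=1.0, random_state=0)
dico.fit(windows)
atom = dico.components_[np.argmax(dico.components_.var(axis=1))]  # highest-variance atom

# Inner product of the selected atom with the whole signal highlights impact moments
match = np.correlate(signal, atom, mode="same")
print("strongest matches near samples:", np.sort(np.argsort(np.abs(match))[-5:]))
```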
Electrochemical and Infrared Absorption Spectroscopy Detection of SF₆ Decomposition Products.
Dong, Ming; Zhang, Chongxing; Ren, Ming; Albarracín, Ricardo; Ye, Rixin
2017-11-15
Sulfur hexafluoride (SF₆) gas-insulated electrical equipment is widely used in high-voltage (HV) and extra-high-voltage (EHV) power systems. Partial discharge (PD) and local heating can occur in the electrical equipment because of insulation faults, which results in SF₆ decomposition and ultimately generates several types of decomposition products. These SF₆ decomposition products can be qualitatively and quantitatively detected with relevant detection methods, and such detection contributes to diagnosing the internal faults and evaluating the security risks of the equipment. At present, multiple detection methods exist for analyzing the SF₆ decomposition products, and electrochemical sensing (ES) and infrared (IR) spectroscopy are well suited for application in online detection. In this study, the combination of ES with IR spectroscopy is used to detect SF₆ gas decomposition. First, the characteristics of these two detection methods are studied, and the data analysis matrix is established. Then, a qualitative and quantitative analysis ES-IR model is established by adopting a two-step approach. A SF₆ decomposition detector is designed and manufactured by combining an electrochemical sensor and IR spectroscopy technology. The detector is used to detect SF₆ gas decomposition and is verified to reliably and accurately detect the gas components and concentrations.
NASA Technical Reports Server (NTRS)
Schroeder, M. A.
1980-01-01
A summary of a literature review on the thermal decomposition of HMX and RDX is presented. The decomposition apparently fits first-order kinetics. Recommended values for Arrhenius parameters for HMX and RDX decomposition in the gaseous and liquid phases, and for decomposition of RDX in solution in TNT, are given. The apparent importance of autocatalysis is pointed out, as are some possible complications that may be encountered in interpreting, extending, or extrapolating kinetic data for these compounds from measurements carried out below their melting points to the higher temperatures and pressures characteristic of combustion.
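A minimal sketch of how Arrhenius parameters feed a first-order decomposition model of the kind summarized above; the values of A and Ea below are placeholders, not the review's recommended parameters for HMX or RDX.

```python
# First-order decomposition with an Arrhenius rate constant (illustrative values).
import numpy as np

A = 1.0e16          # pre-exponential factor, 1/s (placeholder)
Ea = 200e3          # activation energy, J/mol (placeholder)
R = 8.314           # gas constant, J/(mol K)

def rate_constant(T_kelvin):
    """First-order rate constant k(T) = A * exp(-Ea / (R T))."""
    return A * np.exp(-Ea / (R * T_kelvin))

T = 550.0                         # K
k = rate_constant(T)
t = np.linspace(0.0, 5.0 / k, 6)  # times spanning a few characteristic lifetimes
remaining = np.exp(-k * t)        # first-order decay: fraction of material left
print(f"k({T:.0f} K) = {k:.3e} 1/s")
print("fraction remaining:", np.round(remaining, 3))
```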
Chao, T.T.; Sanzolone, R.F.
1992-01-01
Sample decomposition is a fundamental and integral step in the procedure of geochemical analysis. It is often the limiting factor for sample throughput, especially with the recent application of fast, modern multi-element measurement instrumentation. The complexity of geological materials makes it necessary to choose a sample decomposition technique that is compatible with the specific objective of the analysis. When selecting a decomposition technique, consideration should be given to the chemical and mineralogical characteristics of the sample, the elements to be determined, precision and accuracy requirements, sample throughput, the technical capability of personnel, and time constraints. This paper addresses these concerns and discusses the attributes and limitations of many techniques of sample decomposition, along with examples of their application to geochemical analysis. The chemical properties of reagents, as they relate to their function as decomposition agents, are also reviewed. The section on acid dissolution techniques addresses the various inorganic acids that are used individually or in combination in both open and closed systems. Fluxes used in sample fusion are discussed. The promising microwave-oven technology and the emerging field of automation are also examined. A section on applications highlights the use of decomposition techniques for the determination of Au, platinum group elements (PGEs), Hg, U, hydride-forming elements, rare earth elements (REEs), and multi-elements in geological materials. Partial dissolution techniques used for geochemical exploration, which have been treated in detail elsewhere, are not discussed here; nor are fire assaying for noble metals and decomposition techniques for X-ray fluorescence or nuclear methods. © 1992.
Using Microwave Sample Decomposition in Undergraduate Analytical Chemistry
NASA Astrophysics Data System (ADS)
Griff Freeman, R.; McCurdy, David L.
1998-08-01
A shortcoming of many undergraduate classes in analytical chemistry is that students receive little exposure to sample preparation in chemical analysis. This paper reports the progress made in introducing microwave sample decomposition into several quantitative analysis experiments at Truman State University. Two experiments being performed in our current laboratory rotation include closed vessel microwave decomposition applied to the classical gravimetric determination of nickel and the determination of sodium in snack foods by flame atomic emission spectrometry. A third lab, using open-vessel microwave decomposition for the Kjeldahl nitrogen determination is now ready for student trial. Microwave decomposition reduces the time needed to complete these experiments and significantly increases the student awareness of the importance of sample preparation in quantitative chemical analyses, providing greater breadth and realism in the experiments.
NASA Astrophysics Data System (ADS)
Sridhar, J.
2015-12-01
The focus of this work is to examine polarimetric decomposition techniques, primarily Pauli decomposition and Sphere Di-Plane Helix (SDH) decomposition, for forest resource assessment. The data processing methods adopted are pre-processing (geometric correction and radiometric calibration), speckle reduction, image decomposition and image classification. Initially, to classify forest regions, unsupervised classification was applied to determine different unknown classes. It was observed that the K-means clustering method gave better results than the ISODATA method. Using the algorithm developed for Radar Tools, the code for the decomposition and classification techniques was implemented in Interactive Data Language (IDL) and applied to a RISAT-1 image of the Mysore-Mandya region of Karnataka, India. This region was chosen for studying forest vegetation and consists of agricultural lands, water and hilly regions. Polarimetric SAR data possess a high potential for classification of the earth surface. After applying the decomposition techniques, classification was performed by selecting regions of interest, and post-classification the overall accuracy was observed to be higher in the SDH-decomposed image, as SDH decomposition operates on individual pixels on a coherent basis and utilizes the complete intrinsic coherent nature of polarimetric SAR data. This makes SDH decomposition particularly suited for the analysis of high-resolution SAR data. The Pauli decomposition represents all the polarimetric information in a single SAR image; however, interpretation of the resulting image is difficult. The SDH decomposition technique appears to produce better results and interpretation than the Pauli decomposition; however, more quantification and further analysis are being done in this area of research. The comparison of polarimetric decomposition techniques and evolutionary classification techniques will be the scope of this work.
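A minimal sketch of the Pauli decomposition mentioned above, computed pixel-wise from hypothetical complex scattering channels; the SDH decomposition is not attempted here.

```python
# Pauli decomposition of a full-polarimetric scattering matrix (per pixel).
import numpy as np

rng = np.random.default_rng(5)
shape = (4, 4)                                   # tiny illustrative image
S_hh = rng.standard_normal(shape) + 1j * rng.standard_normal(shape)
S_vv = rng.standard_normal(shape) + 1j * rng.standard_normal(shape)
S_hv = rng.standard_normal(shape) + 1j * rng.standard_normal(shape)

# Pauli basis components (roughly: surface, double-bounce, volume scattering)
k1 = (S_hh + S_vv) / np.sqrt(2)
k2 = (S_hh - S_vv) / np.sqrt(2)
k3 = np.sqrt(2) * S_hv

# Intensities used for the usual RGB composite (R=|k2|^2, G=|k3|^2, B=|k1|^2)
pauli_rgb = np.stack([np.abs(k2)**2, np.abs(k3)**2, np.abs(k1)**2], axis=-1)
print(pauli_rgb.shape)
```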
Ferreira, Verónica; Koricheva, Julia; Duarte, Sofia; Niyogi, Dev K; Guérold, François
2016-03-01
Many streams worldwide are affected by heavy metal contamination, mostly due to past and present mining activities. Here we present a meta-analysis of 38 studies (reporting 133 cases) published between 1978 and 2014 that reported the effects of heavy metal contamination on the decomposition of terrestrial litter in running waters. Overall, heavy metal contamination significantly inhibited litter decomposition. The effect was stronger for laboratory than for field studies, likely due to better control of confounding variables in the former, antagonistic interactions between metals and other environmental variables in the latter or differences in metal identity and concentration between studies. For laboratory studies, only copper + zinc mixtures significantly inhibited litter decomposition, while no significant effects were found for silver, aluminum, cadmium or zinc considered individually. For field studies, coal and metal mine drainage strongly inhibited litter decomposition, while drainage from motorways had no significant effects. The effect of coal mine drainage did not depend on drainage pH. Coal mine drainage negatively affected leaf litter decomposition independently of leaf litter identity; no significant effect was found for wood decomposition, but sample size was low. Considering metal mine drainage, arsenic mines had a stronger negative effect on leaf litter decomposition than gold or pyrite mines. Metal mine drainage significantly inhibited leaf litter decomposition driven by both microbes and invertebrates, independently of leaf litter identity; no significant effect was found for microbially driven decomposition, but sample size was low. Overall, mine drainage negatively affects leaf litter decomposition, likely through negative effects on invertebrates. Copyright © 2015 Elsevier Ltd. All rights reserved.
Muravyev, Nikita V; Koga, Nobuyoshi; Meerov, Dmitry B; Pivkina, Alla N
2017-01-25
This study focused on kinetic modeling of a specific type of multistep heterogeneous reaction comprising exothermic and endothermic reaction steps, as exemplified by the practical kinetic analysis of the experimental kinetic curves for the thermal decomposition of molten ammonium dinitramide (ADN). It is known that the thermal decomposition of ADN occurs as a consecutive two step mass-loss process comprising the decomposition of ADN and subsequent evaporation/decomposition of in situ generated ammonium nitrate. These reaction steps provide exothermic and endothermic contributions, respectively, to the overall thermal effect. The overall reaction process was deconvoluted into two reaction steps using simultaneously recorded thermogravimetry and differential scanning calorimetry (TG-DSC) curves by considering the different physical meanings of the kinetic data derived from TG and DSC by P value analysis. The kinetic data thus separated into exothermic and endothermic reaction steps were kinetically characterized using kinetic computation methods including isoconversional method, combined kinetic analysis, and master plot method. The overall kinetic behavior was reproduced as the sum of the kinetic equations for each reaction step considering the contributions to the rate data derived from TG and DSC. During reproduction of the kinetic behavior, the kinetic parameters and contributions of each reaction step were optimized using kinetic deconvolution analysis. As a result, the thermal decomposition of ADN was successfully modeled as partially overlapping exothermic and endothermic reaction steps. The logic of the kinetic modeling was critically examined, and the practical usefulness of phenomenological modeling for the thermal decomposition of ADN was illustrated to demonstrate the validity of the methodology and its applicability to similar complex reaction processes.
Does energy consumption contribute to environmental pollutants? Evidence from SAARC countries.
Akhmat, Ghulam; Zaman, Khalid; Shukui, Tan; Irfan, Danish; Khan, Muhammad Mushtaq
2014-05-01
The objective of the study is to examine the causal relationship between energy consumption and environmental pollutants in selected South Asian Association for Regional Cooperation (SAARC) countries, namely, Bangladesh, India, Nepal, Pakistan, and Sri Lanka, over the period 1975-2011. The results indicate that energy consumption acts as an important driver of increased environmental pollutants in SAARC countries. Granger causality runs from energy consumption to environmental pollutants, but not vice versa, except for carbon dioxide (CO2) emissions in Nepal, where there is bidirectional causality between CO2 and energy consumption. Methane emissions in Bangladesh, Pakistan, and Sri Lanka and extreme temperature in India and Sri Lanka do not Granger-cause energy consumption via either route, which supports the neutrality hypothesis. Variance decomposition analysis shows that, among all the environmental indicators, CO2 in Bangladesh and Nepal exerts the largest contribution to changes in electric power consumption. Average precipitation in India, methane emissions in Pakistan, and extreme temperature in Sri Lanka exert the largest contributions.
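A minimal sketch of the forecast error variance decomposition (FEVD) from a vector autoregression, the kind of variance decomposition analysis referred to above; the two annual series are simulated stand-ins and statsmodels is assumed to be available.

```python
# VAR forecast error variance decomposition on hypothetical energy/CO2 series.
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(6)
n = 37                                           # e.g. 1975-2011
energy = np.cumsum(rng.standard_normal(n)) + 50
co2 = 0.6 * energy + np.cumsum(rng.standard_normal(n))
df = pd.DataFrame({"energy": np.diff(energy), "co2": np.diff(co2)})  # first differences

res = VAR(df).fit(maxlags=2, ic="aic")
fevd = res.fevd(10)          # 10-step-ahead forecast error variance decomposition
# Share of CO2 forecast error variance attributed to each variable at horizon 10
print(fevd.decomp[df.columns.get_loc("co2")][-1])
```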
Beyond Principal Component Analysis: A Trilinear Decomposition Model and Least Squares Estimation.
ERIC Educational Resources Information Center
Pham, Tuan Dinh; Mocks, Joachim
1992-01-01
Sufficient conditions are derived for the consistency and asymptotic normality of the least squares estimator of a trilinear decomposition model for multiway data analysis. The limiting covariance matrix is computed. (Author/SLD)
Automatic network coupling analysis for dynamical systems based on detailed kinetic models.
Lebiedz, Dirk; Kammerer, Julia; Brandt-Pollmann, Ulrich
2005-10-01
We introduce a numerical complexity reduction method for the automatic identification and analysis of dynamic network decompositions in (bio)chemical kinetics based on error-controlled computation of a minimal model dimension represented by the number of (locally) active dynamical modes. Our algorithm exploits a generalized sensitivity analysis along state trajectories and subsequent singular value decomposition of sensitivity matrices for the identification of these dominant dynamical modes. It allows for a dynamic coupling analysis of (bio)chemical species in kinetic models that can be exploited for the piecewise computation of a minimal model on small time intervals and offers valuable functional insight into highly nonlinear reaction mechanisms and network dynamics. We present results for the identification of network decompositions in a simple oscillatory chemical reaction, time scale separation based model reduction in a Michaelis-Menten enzyme system and network decomposition of a detailed model for the oscillatory peroxidase-oxidase enzyme system.
ERIC Educational Resources Information Center
Koga, Nobuyoshi; Goshi, Yuri; Yoshikawa, Masahiro; Tatsuoka, Tomoyuki
2014-01-01
An undergraduate kinetic experiment of the thermal decomposition of solids by microscopic observation and thermal analysis was developed by investigating a suitable reaction, applicable techniques of thermal analysis and microscopic observation, and a reliable kinetic calculation method. The thermal decomposition of sodium hydrogen carbonate is…
The Thermal Decomposition of Basic Copper(II) Sulfate.
ERIC Educational Resources Information Center
Tanaka, Haruhiko; Koga, Nobuyoshi
1990-01-01
Discussed is the preparation of synthetic brochantite from solution and a thermogravimetric-differential thermal analysis study of the thermal decomposition of this compound. Other analyses included are chemical analysis and IR spectroscopy. Experimental procedures and results are presented. (CW)
Zhao, Jiang Yan; Xie, Ping; Sang, Yan Fang; Xui, Qiang Qiang; Wu, Zi Yi
2018-04-01
Under the influence of both global climate change and frequent human activities, the variability of the second moment in hydrological time series becomes obvious, indicating changes in the consistency of hydrological data samples. Therefore, traditional hydrological series analysis methods, which only consider the variability of mean values, are not suitable for handling all hydrological non-consistency problems. Traditional synthetic duration curve methods for the design of the lowest navigable water level, based on the consistency of samples, would cause more risks to navigation, especially under low water levels in dry seasons. Here, we detected both mean variation and variance variation using the hydrological variation diagnosis system. Furthermore, combining the principle of decomposition and composition of time series, we proposed a synthetic duration curve method for designing the lowest navigable water level with inconsistent characteristics in dry seasons. With the Yunjinghong Station in the Lancang River Basin as an example, we analyzed its designed water levels in the present, the distant past and the recent past, as well as the differences among three situations (i.e., considering second-moment variation, only considering mean variation, not considering any variation). Results showed that variability of the second moment changed the trend of designed water level alteration at the Yunjinghong Station. When considering the first two moments rather than just the mean variation, the difference in designed water levels was as large as -1.11 m. When considering the first two moments or not considering any variation, the difference in designed water levels was as large as -1.01 m. Our results indicated the strong effects of variance variation on the designed water levels, and highlighted the importance of second-moment variation analysis for channel planning and design.
Optimizing Complexity Measures for fMRI Data: Algorithm, Artifact, and Sensitivity
Rubin, Denis; Fekete, Tomer; Mujica-Parodi, Lilianne R.
2013-01-01
Introduction Complexity in the brain has been well-documented at both neuronal and hemodynamic scales, with increasing evidence supporting its use in sensitively differentiating between mental states and disorders. However, application of complexity measures to fMRI time-series, which are short, sparse, and have low signal/noise, requires careful modality-specific optimization. Methods Here we use both simulated and real data to address two fundamental issues: choice of algorithm and degree/type of signal processing. Methods were evaluated with regard to resilience to acquisition artifacts common to fMRI as well as detection sensitivity. Detection sensitivity was quantified in terms of grey-white matter contrast and overlap with activation. We additionally investigated the variation of complexity with activation and emotional content, optimal task length, and the degree to which results scaled with scanner using the same paradigm with two 3T magnets made by different manufacturers. Methods for evaluating complexity were: power spectrum, structure function, wavelet decomposition, second derivative, rescaled range, Higuchi’s estimate of fractal dimension, aggregated variance, and detrended fluctuation analysis. To permit direct comparison across methods, all results were normalized to Hurst exponents. Results Power-spectrum, Higuchi’s fractal dimension, and generalized Hurst exponent based estimates were most successful by all criteria; the poorest-performing measures were wavelet, detrended fluctuation analysis, aggregated variance, and rescaled range. Conclusions Functional MRI data have artifacts that interact with complexity calculations in nontrivially distinct ways compared to other physiological data (such as EKG, EEG) for which these measures are typically used. Our results clearly demonstrate that decisions regarding choice of algorithm, signal processing, time-series length, and scanner have a significant impact on the reliability and sensitivity of complexity estimates. PMID:23700424
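A minimal sketch of one of the complexity measures compared above, detrended fluctuation analysis (DFA), returning a Hurst-like scaling exponent for a 1-D series; the window sizes and test signal are illustrative, not the fMRI pipeline of the paper.

```python
# Detrended fluctuation analysis (DFA) scaling exponent for a 1-D time series.
import numpy as np

def dfa_exponent(x, scales=(16, 32, 64, 128, 256)):
    x = np.asarray(x, dtype=float)
    profile = np.cumsum(x - x.mean())            # integrated, mean-removed series
    flucts = []
    for s in scales:
        n_win = profile.size // s
        rms = []
        for w in range(n_win):
            seg = profile[w * s:(w + 1) * s]
            t = np.arange(s)
            coeffs = np.polyfit(t, seg, 1)       # local linear detrending
            rms.append(np.sqrt(np.mean((seg - np.polyval(coeffs, t))**2)))
        flucts.append(np.mean(rms))
    # Scaling exponent = slope of log F(s) versus log s
    slope, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
    return slope

rng = np.random.default_rng(7)
white = rng.standard_normal(4096)
print("DFA exponent of white noise (expected ~0.5):", round(dfa_exponent(white), 2))
```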
A comparison between different error modeling of MEMS applied to GPS/INS integrated systems.
Quinchia, Alex G; Falco, Gianluca; Falletti, Emanuela; Dovis, Fabio; Ferrer, Carles
2013-07-24
Advances in the development of micro-electromechanical systems (MEMS) have made possible the fabrication of cheap and small-dimension accelerometers and gyroscopes, which are being used in many applications where global positioning system (GPS) and inertial navigation system (INS) integration is carried out, i.e., identifying track defects, terrestrial and pedestrian navigation, unmanned aerial vehicles (UAVs), stabilization of many platforms, etc. Although these MEMS sensors are low-cost, they present different errors, which degrade the accuracy of the navigation systems in a short period of time. Therefore, a suitable modeling of these errors is necessary in order to minimize them and, consequently, improve the system performance. In this work, the techniques currently most used to analyze the stochastic errors that affect these sensors are shown and compared: we examine in detail the autocorrelation, the Allan variance (AV) and the power spectral density (PSD) techniques. Subsequently, an analysis and modeling of the inertial sensors, which combines autoregressive (AR) filters and wavelet de-noising, is also carried out. Since a low-cost INS (MEMS grade) presents error sources with short-term (high-frequency) and long-term (low-frequency) components, we introduce a method that compensates for these error terms by doing a complete analysis of Allan variance, wavelet de-noising and the selection of the level of decomposition for a suitable combination between these techniques. Finally, in order to assess the stochastic models obtained with these techniques, the Extended Kalman Filter (EKF) of a loosely-coupled GPS/INS integration strategy is augmented with different states. Results show a comparison between the proposed method and the traditional sensor error models under GPS signal blockages using real data collected in urban roadways.
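A minimal sketch of the (non-overlapping) Allan variance, one of the stochastic-error analysis techniques compared above, applied to a synthetic white-noise-dominated sensor record; the sampling rate and cluster sizes are illustrative.

```python
# Non-overlapping Allan variance of an inertial sensor rate series.
import numpy as np

def allan_variance(rate, fs, m_list):
    """Allan variance at cluster sizes m (in samples); fs is the sampling rate in Hz."""
    rate = np.asarray(rate, dtype=float)
    taus, avars = [], []
    for m in m_list:
        n_clusters = rate.size // m
        if n_clusters < 2:
            continue
        means = rate[:n_clusters * m].reshape(n_clusters, m).mean(axis=1)
        avars.append(0.5 * np.mean(np.diff(means)**2))
        taus.append(m / fs)
    return np.array(taus), np.array(avars)

rng = np.random.default_rng(8)
fs = 100.0
gyro = 0.02 * rng.standard_normal(200_000)        # white-noise-dominated sensor output
taus, avars = allan_variance(gyro, fs, m_list=[1, 10, 100, 1000, 10000])
print(np.column_stack([taus, np.sqrt(avars)]))    # Allan deviation vs averaging time
```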
Gamal El-Dien, Omnia; Ratcliffe, Blaise; Klápště, Jaroslav; Porth, Ilga; Chen, Charles; El-Kassaby, Yousry A.
2016-01-01
The open-pollinated (OP) family testing combines the simplest known progeny evaluation and quantitative genetics analyses as candidates’ offspring are assumed to represent independent half-sib families. The accuracy of genetic parameter estimates is often questioned as the assumption of “half-sibling” in OP families may often be violated. We compared the pedigree- vs. marker-based genetic models by analysing 22-yr height and 30-yr wood density for 214 white spruce [Picea glauca (Moench) Voss] OP families represented by 1694 individuals growing on one site in Quebec, Canada. Assuming half-sibling, the pedigree-based model was limited to estimating the additive genetic variances which, in turn, were grossly overestimated as they were confounded by very minor dominance and major additive-by-additive epistatic genetic variances. In contrast, the implemented genomic pairwise realized relationship models allowed the disentanglement of additive from all nonadditive factors through genetic variance decomposition. The marker-based models produced more realistic narrow-sense heritability estimates and, for the first time, allowed estimating the dominance and epistatic genetic variances from OP testing. In addition, the genomic models showed better prediction accuracies compared to pedigree models and were able to predict individual breeding values for new individuals from untested families, which was not possible using the pedigree-based model. Clearly, the use of marker-based relationship approach is effective in estimating the quantitative genetic parameters of complex traits even under simple and shallow pedigree structure. PMID:26801647
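A minimal sketch of a marker-based (genomic) relationship matrix of the kind underlying the pairwise realized relationship models mentioned above, following VanRaden's first method; the genotypes are simulated 0/1/2 allele counts, not the spruce data.

```python
# Genomic relationship matrix (VanRaden method 1) from hypothetical SNP genotypes.
import numpy as np

rng = np.random.default_rng(9)
n_ind, n_snp = 100, 2000
p = rng.uniform(0.1, 0.9, n_snp)                       # allele frequencies
M = rng.binomial(2, p, size=(n_ind, n_snp)).astype(float)

Z = M - 2 * p                                          # centre by twice the allele frequency
G = Z @ Z.T / (2 * np.sum(p * (1 - p)))                # genomic relationship matrix

print("mean diagonal (expected ~1):", G.diagonal().mean().round(3))
```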
Automatic treatment of the variance estimation bias in TRIPOLI-4 criticality calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dumonteil, E.; Malvagi, F.
2012-07-01
The central limit theorem (CLT) states conditions under which the mean of a sufficiently large number of independent random variables, each with finite mean and variance, will be approximately normally distributed. The use of Monte Carlo transport codes, such as Tripoli4, relies on those conditions. While these are verified in radiation protection applications (the cycles provide independent measurements of fluxes and related quantities), the hypothesis of independent estimates/cycles is broken in criticality mode. Indeed, the power iteration technique used in this mode couples a generation to its progeny. Often, after what is called 'source convergence', this coupling almost disappears (the solution is close to equilibrium), but for loosely coupled systems, such as PWRs or large nuclear cores, the equilibrium is never found, or at least may take a long time to reach, and the variance estimation allowed by the CLT is under-evaluated. In this paper we first propose, by means of two different methods, to evaluate the typical correlation length, measured in number of cycles, and then use this information to diagnose correlation problems and to provide an improved variance estimation. The two methods are based on Fourier spectral decomposition and on the lag-k autocorrelation calculation. A theoretical modeling of the autocorrelation function, based on Gauss-Markov stochastic processes, will also be presented. Tests will be performed with Tripoli4 on a PWR pin cell. (authors)
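A minimal sketch of the lag-k autocorrelation diagnostic described above: estimate the autocorrelation of per-cycle tallies, read off a correlation length, and inflate the naive variance of the mean accordingly. The correlated "cycles" here are a synthetic AR(1) series, not Tripoli4 output.

```python
# Lag-k autocorrelation of per-cycle estimates and corrected variance of the mean.
import numpy as np

def autocorr(x, k):
    x = np.asarray(x, dtype=float)
    xm = x - x.mean()
    return np.dot(xm[:-k], xm[k:]) / np.dot(xm, xm)

rng = np.random.default_rng(10)
n, phi = 5000, 0.8                        # strongly correlated synthetic cycles
cycles = np.empty(n)
cycles[0] = rng.standard_normal()
for i in range(1, n):
    cycles[i] = phi * cycles[i - 1] + rng.standard_normal()

rho = np.array([autocorr(cycles, k) for k in range(1, 50)])
corr_length = np.argmax(rho < 0.05) + 1   # first lag where correlation is negligible

var_naive = cycles.var(ddof=1) / n
var_corrected = var_naive * (1 + 2 * np.sum(rho[:corr_length]))
print(f"correlation length ~ {corr_length} cycles")
print(f"naive variance of mean: {var_naive:.2e}, corrected: {var_corrected:.2e}")
```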
Baldrian, Petr; López-Mondéjar, Rubén
2014-02-01
Molecular methods for the analysis of biomolecules have undergone rapid technological development in the last decade. The advent of next-generation sequencing methods and improvements in instrumental resolution enabled the analysis of complex transcriptome, proteome and metabolome data, as well as a detailed annotation of microbial genomes. The mechanisms of decomposition by model fungi have been described in unprecedented detail by the combination of genome sequencing, transcriptomics and proteomics. The increasing number of available genomes for fungi and bacteria shows that the genetic potential for decomposition of organic matter is widespread among taxonomically diverse microbial taxa, while expression studies document the importance of the regulation of expression in decomposition efficiency. Importantly, high-throughput methods of nucleic acid analysis used for the analysis of metagenomes and metatranscriptomes indicate the high diversity of decomposer communities in natural habitats and their taxonomic composition. Today, the metaproteomics of natural habitats is of interest. In combination with advanced analytical techniques to explore the products of decomposition and the accumulation of information on the genomes of environmentally relevant microorganisms, advanced methods in microbial ecophysiology should increase our understanding of the complex processes of organic matter transformation.
Cruz-Ramírez, Nicandro; Acosta-Mesa, Héctor Gabriel; Mezura-Montes, Efrén; Guerra-Hernández, Alejandro; Hoyos-Rivera, Guillermo de Jesús; Barrientos-Martínez, Rocío Erandi; Gutiérrez-Fragoso, Karina; Nava-Fernández, Luis Alonso; González-Gaspar, Patricia; Novoa-del-Toro, Elva María; Aguilera-Rueda, Vicente Josué; Ameca-Alducin, María Yaneli
2014-01-01
The bias-variance dilemma is a well-known and important problem in Machine Learning. It basically relates the generalization capability (goodness of fit) of a learning method to its corresponding complexity. When we have enough data at hand, it is possible to use these data in such a way so as to minimize overfitting (the risk of selecting a complex model that generalizes poorly). Unfortunately, there are many situations where we simply do not have this required amount of data. Thus, we need to find methods capable of efficiently exploiting the available data while avoiding overfitting. Different metrics have been proposed to achieve this goal: the Minimum Description Length principle (MDL), Akaike's Information Criterion (AIC) and Bayesian Information Criterion (BIC), among others. In this paper, we focus on crude MDL and empirically evaluate its performance in selecting models with a good balance between goodness of fit and complexity: the so-called bias-variance dilemma, decomposition or tradeoff. Although the graphical interaction between these dimensions (bias and variance) is ubiquitous in the Machine Learning literature, few works present experimental evidence to recover such interaction. In our experiments, we argue that the resulting graphs allow us to gain insights that are difficult to unveil otherwise: that crude MDL naturally selects balanced models in terms of bias-variance, which not necessarily need be the gold-standard ones. We carry out these experiments using a specific model: a Bayesian network. In spite of these motivating results, we also should not overlook three other components that may significantly affect the final model selection: the search procedure, the noise rate and the sample size.
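A minimal sketch of an empirical bias-variance decomposition: refit a simple model on many simulated training sets and split its test-point error into bias squared, variance and irreducible noise. The polynomial-regression setup is illustrative only and is not the Bayesian-network experiment of the paper.

```python
# Empirical bias-variance decomposition for a polynomial regression model.
import numpy as np

rng = np.random.default_rng(11)
f = lambda x: np.sin(2 * np.pi * x)
sigma, n_train, n_rep, degree = 0.3, 30, 500, 3
x_test = np.linspace(0, 1, 50)

preds = np.empty((n_rep, x_test.size))
for r in range(n_rep):
    x = rng.random(n_train)
    y = f(x) + sigma * rng.standard_normal(n_train)
    coeffs = np.polyfit(x, y, degree)            # the "learning method" being assessed
    preds[r] = np.polyval(coeffs, x_test)

bias2 = np.mean((preds.mean(axis=0) - f(x_test))**2)
variance = np.mean(preds.var(axis=0))
print(f"bias^2 = {bias2:.4f}, variance = {variance:.4f}, noise = {sigma**2:.4f}")
print(f"expected MSE ~ {bias2 + variance + sigma**2:.4f}")
```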
1987-10-01
Proceedings of the 16th JANNAF Combustion Meeting, Sept. 1979, Vol. II, pp. 13-34. 44. Schroeder, M. A., "Critical Analysis of Nitramine Decomposition..." Proceedings of the 19th JANNAF Combustion Meeting, Oct. 1982. 47. Schroeder, M. A., "Critical Analysis of Nitramine Decomposition Data: Activation..." ...the surface of the propellant. This is consistent with the decomposition mechanism considered by Boggs [48] and Schroeder [43]. They concluded that the
Multi-Centrality Graph Spectral Decompositions and Their Application to Cyber Intrusion Detection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Pin-Yu; Choudhury, Sutanay; Hero, Alfred
Many modern datasets can be represented as graphs and hence spectral decompositions such as graph principal component analysis (PCA) can be useful. Distinct from previous graph decomposition approaches based on subspace projection of a single topological feature, e.g., the centered graph adjacency matrix (graph Laplacian), we propose spectral decomposition approaches to graph PCA and graph dictionary learning that integrate multiple features, including graph walk statistics, centrality measures and graph distances to reference nodes. In this paper we propose a new PCA method for single graph analysis, called multi-centrality graph PCA (MC-GPCA), and a new dictionary learning method for ensembles of graphs, called multi-centrality graph dictionary learning (MC-GDL), both based on spectral decomposition of multi-centrality matrices. As an application to cyber intrusion detection, MC-GPCA can be an effective indicator of anomalous connectivity patterns and MC-GDL can provide a discriminative basis for attack classification.
Nonlinear mode decomposition: A noise-robust, adaptive decomposition method
NASA Astrophysics Data System (ADS)
Iatsenko, Dmytro; McClintock, Peter V. E.; Stefanovska, Aneta
2015-09-01
The signals emanating from complex systems are usually composed of a mixture of different oscillations which, for a reliable analysis, should be separated from each other and from the inevitable background of noise. Here we introduce an adaptive decomposition tool—nonlinear mode decomposition (NMD)—which decomposes a given signal into a set of physically meaningful oscillations for any wave form, simultaneously removing the noise. NMD is based on the powerful combination of time-frequency analysis techniques—which, together with the adaptive choice of their parameters, make it extremely noise robust—and surrogate data tests used to identify interdependent oscillations and to distinguish deterministic from random activity. We illustrate the application of NMD to both simulated and real signals and demonstrate its qualitative and quantitative superiority over other approaches, such as (ensemble) empirical mode decomposition, Karhunen-Loève expansion, and independent component analysis. We point out that NMD is likely to be applicable and useful in many different areas of research, such as geophysics, finance, and the life sciences. The necessary matlab codes for running NMD are freely available for download.
Standing wave contributions to the linear interference effect in stratosphere-troposphere coupling
NASA Astrophysics Data System (ADS)
Watt-Meyer, Oliver; Kushner, Paul
2014-05-01
A body of literature by Hayashi and others [Hayashi 1973, 1977, 1979; Pratt, 1976] developed a decomposition of the wavenumber-frequency spectrum into standing and travelling waves. These techniques directly decompose the power spectrum—that is, the amplitudes squared—into standing and travelling parts. Decomposing the power in this way incorrectly omits a term representing the covariance between the standing and travelling waves. We propose a simple decomposition based on the 2D Fourier transform which allows one to directly compute the variance of the standing and travelling waves, as well as the covariance between them. Applying this decomposition to geopotential height anomalies in the Northern Hemisphere winter, we show the dominance of standing waves for planetary wavenumbers 1 through 3, especially in the stratosphere, and that wave-1 anomalies have a significant westward travelling component in the high-latitude (60N to 80N) troposphere. Variations in the relative zonal phasing between a wave anomaly and the background climatological wave pattern—the "linear interference" effect—are known to explain a large part of the planetary wave driving of the polar stratosphere in both hemispheres. While the linear interference effect is robust across observations, models of varying degrees of complexity, and in response to various types of perturbations, it is not well understood dynamically. We use the above-described decomposition into standing and travelling waves to investigate the drivers of linear interference. We find that the linear part of the wave activity flux is primarily driven by the standing waves, at all vertical levels. This can be understood by noting that the longitudinal positions of the antinodes of the standing waves are typically close to being aligned with the maximum and minimum of the background climatology. We discuss implications for predictability of wave activity flux, and hence polar vortex strength variability.
NASA Astrophysics Data System (ADS)
Pourmortazavi, Seied Mahdi; Rahimi-Nasrabadi, Mehdi; Aghazadeh, Mustafa; Ganjali, Mohammad Reza; Karimi, Meisam Sadeghpour; Norouzi, Parviz
2017-12-01
This work focuses on the application of an orthogonal array design to the optimization of a facile direct carbonization reaction for the synthesis of neodymium carbonate nanoparticles, where the product particles are prepared by direct precipitation of their ingredients. To optimize the method, the influences of the major operating conditions on the dimensions of the neodymium carbonate particles were quantitatively evaluated through analysis of variance (ANOVA). It was observed that crystals of the carbonate salt can be synthesized by controlling the neodymium concentration and flow rate, as well as the reactor temperature. Based on the ANOVA results, 0.03 M, 2.5 mL min-1 and 30 °C are the optimum values for the above-mentioned parameters, and controlling the parameters at these values yields nanoparticles with sizes of about 31 ± 2 nm. The product of this first stage was then used as the feed for a thermal decomposition procedure yielding neodymium oxide nanoparticles. The products were studied through X-ray diffraction (XRD), SEM, TEM, FT-IR and thermal analysis techniques. In addition, the photocatalytic activities of the neodymium carbonate and neodymium oxide nanoparticles were investigated using the degradation of methyl orange (MO) under ultraviolet light.
Poisson-Gaussian Noise Analysis and Estimation for Low-Dose X-ray Images in the NSCT Domain.
Lee, Sangyoon; Lee, Min Seok; Kang, Moon Gi
2018-03-29
The noise distribution of images obtained by X-ray sensors in low-dosage situations can be analyzed using the Poisson and Gaussian mixture model. Multiscale conversion is one of the most popular noise reduction methods used in recent years. Estimation of the noise distribution of each subband in the multiscale domain is the most important factor in performing noise reduction, with non-subsampled contourlet transform (NSCT) representing an effective method for scale and direction decomposition. In this study, we use artificially generated noise to analyze and estimate the Poisson-Gaussian noise of low-dose X-ray images in the NSCT domain. The noise distribution of the subband coefficients is analyzed using the noiseless low-band coefficients and the variance of the noisy subband coefficients. The noise-after-transform also follows a Poisson-Gaussian distribution, and the relationship between the noise parameters of the subband and the full-band image is identified. We then analyze noise of actual images to validate the theoretical analysis. Comparison of the proposed noise estimation method with an existing noise reduction method confirms that the proposed method outperforms traditional methods.
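A simplified sketch of Poisson-Gaussian noise parameter estimation by regressing local variance on local mean (variance approximately equals gain times mean plus the Gaussian variance); this illustrates the noise model only and does not reproduce the paper's NSCT subband analysis. The synthetic image and parameters are hypothetical.

```python
# Estimate Poisson-Gaussian noise parameters from local block statistics.
import numpy as np

rng = np.random.default_rng(12)
gain, sigma = 0.8, 4.0
clean = np.tile(np.linspace(20, 200, 256), (256, 1))            # smooth synthetic image
noisy = gain * rng.poisson(clean / gain) + sigma * rng.standard_normal(clean.shape)

# Local statistics over non-overlapping 8x8 blocks of the (nearly flat) image
b = 8
blocks = noisy.reshape(256 // b, b, 256 // b, b).swapaxes(1, 2).reshape(-1, b * b)
means, variances = blocks.mean(axis=1), blocks.var(axis=1, ddof=1)

# For Poisson-Gaussian noise: variance ~= gain * mean + sigma^2
slope, intercept = np.polyfit(means, variances, 1)
print(f"estimated gain ~ {slope:.2f} (true {gain}), "
      f"Gaussian sigma ~ {np.sqrt(max(intercept, 0)):.2f} (true {sigma})")
```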
[Glossary of terms used by radiologists in image processing].
Rolland, Y; Collorec, R; Bruno, A; Ramée, A; Morcet, N; Haigron, P
1995-01-01
We give the definition of 166 words used in image processing. Adaptivity, aliasing, analog-digital converter, analysis, approximation, arc, artifact, artificial intelligence, attribute, autocorrelation, bandwidth, boundary, brightness, calibration, class, classification, classify, centre, cluster, coding, color, compression, contrast, connectivity, convolution, correlation, data base, decision, decomposition, deconvolution, deduction, descriptor, detection, digitization, dilation, discontinuity, discretization, discrimination, disparity, display, distance, distortion, distribution, dynamic, edge, energy, enhancement, entropy, erosion, estimation, event, extrapolation, feature, file, filter, filter floaters, fitting, Fourier transform, frequency, fusion, fuzzy, Gaussian, gradient, graph, gray level, group, growing, histogram, Hough transform, Hounsfield, image, impulse response, inertia, intensity, interpolation, interpretation, invariance, isotropy, iterative, JPEG, knowledge base, label, Laplacian, learning, least squares, likelihood, matching, Markov field, mask, matching, mathematical morphology, merge (to), MIP, median, minimization, model, moiré, moment, MPEG, neural network, neuron, node, noise, norm, normal, operator, optical system, optimization, orthogonal, parametric, pattern recognition, periodicity, photometry, pixel, polygon, polynomial, prediction, pulsation, pyramidal, quantization, raster, reconstruction, recursive, region, rendering, representation space, resolution, restoration, robustness, ROC, thinning, transform, sampling, saturation, scene analysis, segmentation, separable function, sequential, smoothing, spline, split (to), shape, threshold, tree, signal, speckle, spectrum, spline, stationarity, statistical, stochastic, structuring element, support, syntactic, synthesis, texture, truncation, variance, vision, voxel, windowing.
Evaluation and error apportionment of an ensemble of ...
Through the comparison of several regional-scale chemistry transport modelling systems that simulate meteorology and air quality over the European and American continents, this study aims at i) apportioning the error to the responsible processes using time-scale analysis, ii) helping to detect causes of model errors, and iii) identifying the processes and scales most urgently requiring dedicated investigations. The analysis is conducted within the framework of the third phase of the Air Quality Model Evaluation International Initiative (AQMEII) and tackles model performance gauging through measurement-to-model comparison, error decomposition and time series analysis of the models' biases for several fields (ozone, CO, SO2, NO, NO2, PM10, PM2.5, wind speed, and temperature). The operational metrics (magnitude of the error, sign of the bias, associativity) provide an overall sense of model strengths and deficiencies, while apportioning the error to its constituent parts (bias, variance and covariance) can help to assess the nature and quality of the error. Each of the error components is analysed independently and apportioned to specific processes based on the corresponding timescale (long scale, synoptic, diurnal, and intra-day) using the error apportionment technique devised in the former phases of AQMEII. The application of the error apportionment method to the AQMEII Phase 3 simulations provides several key insights. In addition to reaffirming the strong impact
AQMEII3: the EU and NA regional scale program of the ...
The presentation builds on the work presented last year at the 14th CMAS meeting and is applied to the work performed in the context of the AQMEII-HTAP collaboration. The analysis is conducted within the framework of the third phase of AQMEII (Air Quality Model Evaluation International Initiative) and encompasses the gauging of model performance through measurement-to-model comparison, error decomposition and time series analysis of the models' biases. Through the comparison of several regional-scale chemistry transport modelling systems applied to simulate meteorology and air quality over two continental areas, this study aims at i) apportioning the error to the responsible processes through time-scale analysis, ii) helping to detect causes of model errors, and iii) identifying the processes and scales most urgently requiring dedicated investigations. The operational metrics (magnitude of the error, sign of the bias, associativity) provide an overall sense of model strengths and deficiencies, while the apportioning of the error into its constituent parts (bias, variance and covariance) can help assess the nature and quality of the error. Each of the error components is analysed independently and apportioned to specific processes based on the corresponding timescale (long scale, synoptic, diurnal, and intra-day) using the error apportionment technique devised in the previous phases of AQMEII. The National Exposure Research Laboratory (NERL) Computational Exposur
ERIC Educational Resources Information Center
Luh, Wei-Ming; Guo, Jiin-Huarng
2011-01-01
Sample size determination is an important issue in planning research. In the context of one-way fixed-effect analysis of variance, the conventional sample size formula cannot be applied for the heterogeneous variance cases. This study discusses the sample size requirement for the Welch test in the one-way fixed-effect analysis of variance with…
Yang, Lin; Deng, Chang-chun; Chen Ya-mei; He, Run-lian; Zhang, Jian; Liu, Yang
2015-12-01
The relationships between the litter decomposition rate and the initial litter quality of 14 representative plants in the alpine forest ecotone of western Sichuan were investigated in this paper. The decomposition rate k of the litter ranged from 0.16 to 1.70. Woody leaf litter and moss litter decomposed much more slowly, shrubby litter decomposed somewhat faster, and herbaceous litters decomposed fastest among all plant forms. There were significant linear regression relationships between the litter decomposition rate and the N content, lignin content, phenolics content, C/N, C/P and lignin/N. In the path analysis, lignin/N and hemicellulose content together explained 78.4% of the variation in the litter decomposition rate (k). Lignin/N alone explained 69.5% of the variation in k, and the direct path coefficient of lignin/N on k was -0.913. Principal component analysis (PCA) showed that the contribution rate of the first sort axis to k and the decomposition time (t) reached 99.2%. Significant positive correlations existed between lignin/N, lignin content, C/N, C/P and the first sort axis, with the closest relationship between lignin/N and the first sort axis (r = 0.923). Lignin/N was the key quality factor affecting the plant litter decomposition rate across the alpine timberline ecotone: the higher the initial lignin/N, the lower the decomposition rate of leaf litter.
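A small sketch of the single negative exponential decay model that defines a litter decomposition rate k, plus a linear regression of k on lignin/N; all numbers are hypothetical and only illustrate the calculation, not the study's data.

```python
import numpy as np

# Hypothetical litterbag data: fraction of initial mass remaining after t years.
t = np.array([0.25, 0.5, 1.0, 2.0])             # years
remaining = np.array([0.88, 0.78, 0.62, 0.40])  # hypothetical fractions

# Single negative exponential model: m(t)/m(0) = exp(-k t) -> ln(remaining) = -k t.
k = -np.polyfit(t, np.log(remaining), 1)[0]
print(f"decomposition rate k ~ {k:.2f} yr^-1")

# Hypothetical cross-species data relating k to initial lignin/N (a negative slope
# is consistent with the reported direct path coefficient of -0.913).
lignin_N = np.array([8, 12, 18, 25, 33])
k_obs = np.array([1.5, 1.1, 0.7, 0.4, 0.2])
slope, intercept = np.polyfit(lignin_N, k_obs, 1)
print(f"fitted relation: k = {intercept:.2f} {slope:+.3f} * (lignin/N)")
```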
A Dual Super-Element Domain Decomposition Approach for Parallel Nonlinear Finite Element Analysis
NASA Astrophysics Data System (ADS)
Jokhio, G. A.; Izzuddin, B. A.
2015-05-01
This article presents a new domain decomposition method for nonlinear finite element analysis introducing the concept of dual partition super-elements. The method extends ideas from the displacement frame method and is ideally suited for parallel nonlinear static/dynamic analysis of structural systems. In the new method, domain decomposition is realized by replacing one or more subdomains in a "parent system," each with a placeholder super-element, where the subdomains are processed separately as "child partitions," each wrapped by a dual super-element along the partition boundary. The analysis of the overall system, including the satisfaction of equilibrium and compatibility at all partition boundaries, is realized through direct communication between all pairs of placeholder and dual super-elements. The proposed method has particular advantages for matrix solution methods based on the frontal scheme, and can be readily implemented for existing finite element analysis programs to achieve parallelization on distributed memory systems with minimal intervention, thus overcoming memory bottlenecks typically faced in the analysis of large-scale problems. Several examples are presented in this article which demonstrate the computational benefits of the proposed parallel domain decomposition approach and its applicability to the nonlinear structural analysis of realistic structural systems.
Yamashita, Satoshi; Masuya, Hayato; Abe, Shin; Masaki, Takashi; Okabe, Kimiko
2015-01-01
We examined the relationship between the community structure of wood-decaying fungi, detected by high-throughput sequencing, and the decomposition rate using 13 years of data from a forest dynamics plot. For molecular analysis and wood density measurements, drill dust samples were collected from logs and stumps of Fagus and Quercus in the plot. Regression using a negative exponential model between wood density and time since death revealed that the decomposition rate of Fagus was greater than that of Quercus. The residual between the expected value obtained from the regression curve and the observed wood density was used as a decomposition rate index. Principal component analysis showed that the fungal community compositions of both Fagus and Quercus changed with time since death. Principal component analysis axis scores were used as an index of fungal community composition. A structural equation model for each wood genus was used to assess the effect of fungal community structure traits on the decomposition rate and how the fungal community structure was determined by the traits of coarse woody debris. Results of the structural equation model suggested that the decomposition rate of Fagus was affected by two fungal community composition components: one that was affected by time since death and another that was not affected by the traits of coarse woody debris. In contrast, the decomposition rate of Quercus was not affected by coarse woody debris traits or fungal community structure. These findings suggest that, in the case of Fagus coarse woody debris, the fungal community structure is related to the decomposition process of its host substrate. Because fungal community structure is affected partly by the decay stage and wood density of its substrate, these factors influence each other. Further research on interactive effects is needed to improve our understanding of the relationship between fungal community structure and the woody debris decomposition process. PMID:26110605
Parallel processing methods for space based power systems
NASA Technical Reports Server (NTRS)
Berry, F. C.
1993-01-01
This report presents a method for doing load-flow analysis of a power system by using a decomposition approach. The power system for the Space Shuttle is used as a basis to build a model for the load-flow analysis. To test the decomposition method for doing load-flow analysis, simulations were performed on power systems of 16, 25, 34, 43, 52, 61, 70, and 79 nodes. Each of the power systems was divided into subsystems and simulated under steady-state conditions. The results from these tests have been found to be as accurate as tests performed using a standard serial simulator. The division of the power systems into different subsystems was done by assigning a processor to each area. There were 13 transputers available; therefore, up to 13 different subsystems could be simulated at the same time. This report has preliminary results for a load-flow analysis using a decomposition principle. The report shows that the decomposition algorithm for load-flow analysis is well suited for parallel processing and provides increases in the speed of execution.
2009-01-01
Background The isotopic composition of generalist consumers may be expected to vary in space as a consequence of spatial heterogeneity in isotope ratios, the abundance of resources, and competition. We aim to account for the spatial variation in the carbon and nitrogen isotopic composition of a generalized predatory species across a 500 ha tropical rain forest landscape. We test competing models to account for the relative influence of resources and competitors on the carbon and nitrogen isotopic enrichment of gypsy ants (Aphaenogaster araneoides), taking into account site-specific differences in baseline isotope ratios. Results We found that 75% of the variance in the fraction of 15N in the tissue of A. araneoides was accounted for by one environmental parameter, the concentration of soil phosphorus. After taking into account landscape-scale variation in baseline resources, the most parsimonious model indicated that colony growth and leaf litter biomass accounted for nearly all of the variance in the δ15N discrimination factor, whereas the δ13C discrimination factor was most parsimoniously associated with colony size and the rate of leaf litter decomposition. There was no indication that competitor density or diversity accounted for spatial differences in the isotopic composition of gypsy ants. Conclusion Across a 500 ha landscape, soil phosphorus accounted for spatial variation in baseline nitrogen isotope ratios. The δ15N discrimination factor of a higher order consumer in this food web was structured by bottom-up influences - the quantity and decomposition rate of leaf litter. Stable isotope studies on the trophic biology of consumers may benefit from explicit spatial design to account for edaphic properties that alter the baseline at fine spatial grains. PMID:19930701
NASA Astrophysics Data System (ADS)
Ballard, S.; Hipp, J. R.; Encarnacao, A.; Young, C. J.; Begnaud, M. L.; Phillips, W. S.
2012-12-01
Seismic event locations can be made more accurate and precise by computing predictions of seismic travel time through high fidelity 3D models of the wave speed in the Earth's interior. Given the variable data quality and uneven data sampling associated with this type of model, it is essential that there be a means to calculate high-quality estimates of the path-dependent variance and covariance associated with the predicted travel times of ray paths through the model. In this paper, we describe a methodology for accomplishing this by exploiting the full model covariance matrix and show examples of path-dependent travel time prediction uncertainty computed from SALSA3D, our global, seamless 3D tomographic P-velocity model. Typical global 3D models have on the order of 1/2 million nodes, so the challenge in calculating the covariance matrix is formidable: 0.9 TB storage for 1/2 of a symmetric matrix, necessitating an Out-Of-Core (OOC) blocked matrix solution technique. With our approach the tomography matrix (G, which includes Tikhonov regularization terms) is multiplied by its transpose (G^T G) and written in a blocked sub-matrix fashion. We employ a distributed parallel solution paradigm that solves for (G^T G)^-1 by assigning blocks to individual processing nodes for matrix decomposition update and scaling operations. We first find the Cholesky decomposition of G^T G, which is subsequently inverted. Next, we employ OOC matrix multiplication methods to calculate the model covariance matrix from (G^T G)^-1 and an assumed data covariance matrix. Given the model covariance matrix, we solve for the travel-time covariance associated with arbitrary ray-paths by summing the model covariance along both ray paths. Setting the paths equal and taking the square root yields the travel time prediction uncertainty for a single path.
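A toy, in-memory sketch of the covariance workflow described above (Cholesky factorization of the regularized normal matrix, inversion, and a path-summed travel-time variance); the matrix sizes, the unit data covariance, and the example ray sensitivities are assumptions, and the out-of-core blocked parallel machinery of the paper is not reproduced.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

rng = np.random.default_rng(1)

# Toy tomography system: G maps slowness perturbations at n nodes to m travel times.
m, n = 200, 50
G = rng.normal(size=(m, n))
damping = 0.1                                   # Tikhonov regularization weight

# Regularized normal matrix G^T G + damping * I, then its Cholesky factorization
# and inverse (the paper does this out-of-core and in parallel; here it fits in RAM).
GtG = G.T @ G + damping * np.eye(n)
c, low = cho_factor(GtG)
GtG_inv = cho_solve((c, low), np.eye(n))

# One common construction of the model covariance, assuming unit data covariance.
C_d = np.eye(m)
C_m = GtG_inv @ G.T @ C_d @ G @ GtG_inv

# Travel-time prediction uncertainty for a ray path: sum the model covariance over
# the path sensitivities, sigma^2 = r^T C_m r (hypothetical path-length weights).
r = np.zeros(n)
r[[3, 4, 5, 10]] = [0.5, 1.2, 0.8, 0.3]
sigma = np.sqrt(r @ C_m @ r)
print(f"travel-time prediction uncertainty ~ {sigma:.3f}")
```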
2014-01-01
Background Assessing heterogeneity in lung images can be an important diagnosis tool. We present a novel and objective method for assessing lung damage in a rat model of emphysema. We combined a three-dimensional (3D) computer graphics method–octree decomposition–with a geostatistics-based approach for assessing spatial relationships–the variogram–to evaluate disease in 3D computed tomography (CT) image volumes. Methods Male, Sprague-Dawley rats were dosed intratracheally with saline (control), or with elastase dissolved in saline to either the whole lung (for mild, global disease) or a single lobe (for severe, local disease). Gated 3D micro-CT images were acquired on the lungs of all rats at end expiration. Images were masked, and octree decomposition was performed on the images to reduce the lungs to homogeneous blocks of 2 × 2 × 2, 4 × 4 × 4, and 8 × 8 × 8 voxels. To focus on lung parenchyma, small blocks were ignored because they primarily defined boundaries and vascular features, and the spatial variance between all pairs of the 8 × 8 × 8 blocks was calculated as the square of the difference of signal intensity. Variograms–graphs of distance vs. variance–were constructed, and results of a least-squares-fit were compared. The robustness of the approach was tested on images prepared with various filtering protocols. Statistical assessment of the similarity of the three control rats was made with a Kruskal-Wallis rank sum test. A Mann-Whitney-Wilcoxon rank sum test was used to measure statistical distinction between individuals. For comparison with the variogram results, the coefficient of variation and the emphysema index were also calculated for all rats. Results Variogram analysis showed that the control rats were statistically indistinct (p = 0.12), but there were significant differences between control, mild global disease, and severe local disease groups (p < 0.0001). A heterogeneity index was calculated to describe the difference of an individual variogram from the control average. This metric also showed clear separation between dose groups. The coefficient of variation and the emphysema index, on the other hand, did not separate groups. Conclusion These results suggest the octree decomposition and variogram analysis approach may be a rapid, non-subjective, and sensitive imaging-based biomarker for characterizing lung disease. PMID:24393332
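A compact sketch of the variogram step, using the abstract's definition of spatial variance as the squared intensity difference between block pairs; the block centroids and intensities below are synthetic stand-ins for the 8 x 8 x 8 octree blocks.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)

# Hypothetical block summaries after octree decomposition: each homogeneous block
# has a centroid (x, y, z) and a mean CT intensity (HU-like values, illustrative).
centroids = rng.uniform(0, 40, size=(300, 3))
intensity = rng.normal(-700, 60, size=300)

# For every block pair, compute the separation distance and the squared intensity
# difference (the classical semivariogram would use half this value).
d, gamma = [], []
for i, j in combinations(range(len(intensity)), 2):
    d.append(np.linalg.norm(centroids[i] - centroids[j]))
    gamma.append((intensity[i] - intensity[j]) ** 2)
d, gamma = np.asarray(d), np.asarray(gamma)

# Bin by distance to obtain the variogram curve (distance vs. average variance).
bins = np.linspace(0, d.max(), 15)
idx = np.digitize(d, bins)
variogram = [gamma[idx == k].mean() for k in range(1, len(bins)) if np.any(idx == k)]
print(np.round(variogram, 1))
```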
Riley, Richard D; Ensor, Joie; Jackson, Dan; Burke, Danielle L
2017-01-01
Many meta-analysis models contain multiple parameters, for example due to multiple outcomes, multiple treatments or multiple regression coefficients. In particular, meta-regression models may contain multiple study-level covariates, and one-stage individual participant data meta-analysis models may contain multiple patient-level covariates and interactions. Here, we propose how to derive percentage study weights for such situations, in order to reveal the (otherwise hidden) contribution of each study toward the parameter estimates of interest. We assume that studies are independent, and utilise a decomposition of Fisher's information matrix to decompose the total variance matrix of parameter estimates into study-specific contributions, from which percentage weights are derived. This approach generalises how percentage weights are calculated in a traditional, single parameter meta-analysis model. Application is made to one- and two-stage individual participant data meta-analyses, meta-regression and network (multivariate) meta-analysis of multiple treatments. These reveal percentage study weights toward clinically important estimates, such as summary treatment effects and treatment-covariate interactions, and are especially useful when some studies are potential outliers or at high risk of bias. We also derive percentage study weights toward methodologically interesting measures, such as the magnitude of ecological bias (difference between within-study and across-study associations) and the amount of inconsistency (difference between direct and indirect evidence in a network meta-analysis).
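One plausible construction of such percentage weights, assuming independent studies whose Fisher information matrices sum to the total information; the study-specific decomposition V I_i V used below is an illustration consistent with the description rather than the paper's exact formula, and the study data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy one-stage meta-regression with two parameters (intercept, covariate effect).
# Each study i contributes a Fisher information matrix I_i = X_i^T W_i X_i.
studies = []
for _ in range(4):
    n_i = rng.integers(30, 120)
    X = np.column_stack([np.ones(n_i), rng.normal(size=n_i)])
    w = np.full(n_i, 1.0)                       # unit within-study precisions (toy)
    studies.append(X.T @ np.diag(w) @ X)

I_total = sum(studies)
V = np.linalg.inv(I_total)                      # variance matrix of the estimates

# Decompose V into study-specific contributions V @ I_i @ V (these sum to V), then
# express each diagonal contribution as a percentage weight per parameter.
for i, I_i in enumerate(studies):
    contrib = np.diag(V @ I_i @ V)
    pct = 100 * contrib / np.diag(V)
    print(f"study {i + 1}: percentage weight per parameter = {np.round(pct, 1)}")
```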
NASA Astrophysics Data System (ADS)
Spencer, Todd J.; Chen, Yu-Chun; Saha, Rajarshi; Kohl, Paul A.
2011-06-01
Incorporation of copper ions into poly(propylene carbonate) (PPC) films cast from γ-butyrolactone (GBL), trichloroethylene (TCE) or methylene chloride (MeCl) solutions containing a photo-acid generator is shown to stabilize the PPC from thermal decomposition. Copper ions were introduced into the PPC mixtures by bringing the polymer mixture into contact with copper metal. The metal was oxidized and dissolved into the PPC mixture. The dissolved copper interferes with the decomposition mechanism of PPC, raising its decomposition temperature. Thermogravimetric analysis shows that copper ions make PPC more stable by up to 50°C. Spectroscopic analysis indicates that copper ions may stabilize terminal carboxylic acid groups, inhibiting PPC decomposition. The change in thermal stability based on PPC exposure to patterned copper substrates was used to provide a self-aligned patterning method for PPC on copper traces without the need for an additional photopatterning registration step. Thermal decomposition of PPC is then used to create air isolation regions around the copper traces. The spatial resolution of the self-patterning PPC process is limited by the lateral diffusion of the copper ions within the PPC. The concentration profiles of copper within the PPC, patterning resolution, and temperature effects on the PPC decomposition have been studied.
NASA Astrophysics Data System (ADS)
Boniecki, P.; Nowakowski, K.; Slosarz, P.; Dach, J.; Pilarski, K.
2012-04-01
The purpose of the project was to identify the degree of organic matter decomposition by means of a neural model based on graphical information derived from image analysis. Empirical data (photographs of compost content at various stages of maturation) were used to generate an optimal neural classifier (Boniecki et al. 2009, Nowakowski et al. 2009). The best classification properties were found in an RBF (Radial Basis Function) artificial neural network, which demonstrates that the process is non-linear.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Salo, Heikki; Laurikainen, Eija; Laine, Jarkko
The Spitzer Survey of Stellar Structure in Galaxies (S⁴G) is a deep 3.6 and 4.5 μm imaging survey of 2352 nearby (<40 Mpc) galaxies. We describe the S⁴G data analysis pipeline 4, which is dedicated to two-dimensional structural surface brightness decompositions of 3.6 μm images, using GALFIT3.0. Besides automatic 1-component Sérsic fits, and 2-component Sérsic bulge + exponential disk fits, we present human-supervised multi-component decompositions, which include, when judged appropriate, a central point source, bulge, disk, and bar components. Comparison of the fitted parameters indicates that multi-component models are needed to obtain reliable estimates for the bulge Sérsic index and bulge-to-total light ratio (B/T), confirming earlier results. Here, we describe the preparations of input data done for decompositions, give examples of our decomposition strategy, and describe the data products released via IRSA and via our web page (www.oulu.fi/astronomy/S4G-PIPELINE4/MAIN). These products include all the input data and decomposition files in electronic form, making it easy to extend the decompositions to suit specific science purposes. We also provide our IDL-based visualization tools (GALFIDL) developed for displaying/running GALFIT-decompositions, as well as our mask editing procedure (MASK-EDIT) used in data preparation. A detailed analysis of the bulge, disk, and bar parameters derived from multi-component decompositions will be published separately.
Multidisciplinary Optimization Methods for Aircraft Preliminary Design
NASA Technical Reports Server (NTRS)
Kroo, Ilan; Altus, Steve; Braun, Robert; Gage, Peter; Sobieski, Ian
1994-01-01
This paper describes a research program aimed at improved methods for multidisciplinary design and optimization of large-scale aeronautical systems. The research involves new approaches to system decomposition, interdisciplinary communication, and methods of exploiting coarse-grained parallelism for analysis and optimization. A new architecture, that involves a tight coupling between optimization and analysis, is intended to improve efficiency while simplifying the structure of multidisciplinary, computation-intensive design problems involving many analysis disciplines and perhaps hundreds of design variables. Work in two areas is described here: system decomposition using compatibility constraints to simplify the analysis structure and take advantage of coarse-grained parallelism; and collaborative optimization, a decomposition of the optimization process to permit parallel design and to simplify interdisciplinary communication requirements.
Decomposition of Copper (II) Sulfate Pentahydrate: A Sequential Gravimetric Analysis.
ERIC Educational Resources Information Center
Harris, Arlo D.; Kalbus, Lee H.
1979-01-01
Describes an improved experiment of the thermal dehydration of copper (II) sulfate pentahydrate. The improvements described here are control of the temperature environment and a quantitative study of the decomposition reaction to a thermally stable oxide. Data will suffice to show sequential gravimetric analysis. (Author/SA)
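A short worked calculation of the mass plateaus one would expect in such a sequential gravimetric analysis, assuming the usual CuSO4·5H2O -> CuSO4 -> CuO sequence and standard molar masses (the values below are approximate).

```python
# Expected residual mass at each plateau, as a fraction of the starting hydrate mass.
M_hydrate = 63.55 + 32.07 + 4 * 16.00 + 5 * 18.02   # CuSO4.5H2O ~ 249.7 g/mol
M_anhydrous = 63.55 + 32.07 + 4 * 16.00             # CuSO4      ~ 159.6 g/mol
M_oxide = 63.55 + 16.00                              # CuO        ~  79.6 g/mol

for label, M in [("CuSO4 (after dehydration)", M_anhydrous),
                 ("CuO (after decomposition to the oxide)", M_oxide)]:
    print(f"{label}: {100 * M / M_hydrate:.1f}% of the starting mass remains")
```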
Generalized decompositions of dynamic systems and vector Lyapunov functions
NASA Astrophysics Data System (ADS)
Ikeda, M.; Siljak, D. D.
1981-10-01
The notion of decomposition is generalized to provide more freedom in constructing vector Lyapunov functions for stability analysis of nonlinear dynamic systems. A generalized decomposition is defined as a disjoint decomposition of a system which is obtained by expanding the state-space of a given system. An inclusion principle is formulated for the solutions of the expansion to include the solutions of the original system, so that stability of the expansion implies stability of the original system. Stability of the expansion can then be established by standard disjoint decompositions and vector Lyapunov functions. The applicability of the new approach is demonstrated using the Lotka-Volterra equations.
Dadaser-Celik, Filiz; Azgin, Sukru Taner; Yildiz, Yalcin Sevki
2016-12-01
Biogas production from food waste has been used as an efficient waste treatment option for years. The methane yields from decomposition of waste are, however, highly variable under different operating conditions. In this study, a statistical experimental design method (Taguchi OA9) was implemented to investigate the effects of simultaneous variations of three parameters on methane production. The parameters investigated were solid content (SC), carbon/nitrogen ratio (C/N) and food/inoculum ratio (F/I). Two sets of experiments were conducted with nine anaerobic reactors operating under different conditions. Optimum conditions were determined using statistical analysis, such as analysis of variance (ANOVA). A confirmation experiment was carried out at optimum conditions to investigate the validity of the results. Statistical analysis showed that SC was the most important parameter for methane production with a 45% contribution, followed by the F/I ratio with a 35% contribution. The optimum methane yield of 151 l kg⁻¹ volatile solids (VS) was achieved after 24 days of digestion when SC was 4%, C/N was 28 and F/I was 0.3. The confirmation experiment provided a methane yield of 167 l kg⁻¹ VS after 24 days. The analysis showed that biogas production from food waste may be increased by optimization of operating conditions. © The Author(s) 2016.
Augmenting the decomposition of EMG signals using supervised feature extraction techniques.
Parsaei, Hossein; Gangeh, Mehrdad J; Stashuk, Daniel W; Kamel, Mohamed S
2012-01-01
Electromyographic (EMG) signal decomposition is the process of resolving an EMG signal into its constituent motor unit potential trains (MUPTs). In this work, the possibility of improving the decomposition results using two supervised feature extraction methods, i.e., Fisher discriminant analysis (FDA) and supervised principal component analysis (SPCA), is explored. Using the MUP labels provided by a decomposition-based quantitative EMG system as training data for FDA and SPCA, the MUPs are transformed into a new feature space such that the MUPs of a single MU become as close as possible to each other while those created by different MUs become as far apart as possible. The MUPs are then reclassified using a certainty-based classification algorithm. Evaluation results using 10 simulated EMG signals comprising 3-11 MUPTs demonstrate that FDA and SPCA on average improve the decomposition accuracy by 6%. The improvement for the most difficult-to-decompose signal is about 12%, which shows the proposed approach is most beneficial in the decomposition of more complex signals.
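A hedged sketch of the supervised re-projection idea, using scikit-learn's LinearDiscriminantAnalysis as a stand-in for FDA and a k-nearest-neighbour classifier as a stand-in for the certainty-based classifier; the MUP features and labels are simulated here, and the SPCA branch is not shown.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(4)

# Hypothetical MUP feature vectors with provisional labels from an initial
# decomposition pass (5 motor units, 40 features per MUP, 300 MUPs).
n_mups, n_features, n_units = 300, 40, 5
labels = rng.integers(0, n_units, n_mups)
templates = rng.normal(size=(n_units, n_features))
mups = templates[labels] + 0.6 * rng.normal(size=(n_mups, n_features))

# Supervised feature extraction: project the MUPs so that potentials of the same
# unit cluster together while different units separate.
fda = LinearDiscriminantAnalysis(n_components=n_units - 1)
mups_fda = fda.fit_transform(mups, labels)

# Re-classify in the new space (a simple stand-in for the certainty-based step).
clf = KNeighborsClassifier(n_neighbors=5).fit(mups_fda, labels)
print("training-set agreement:", clf.score(mups_fda, labels))
```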
TENSOR DECOMPOSITIONS AND SPARSE LOG-LINEAR MODELS
Johndrow, James E.; Bhattacharya, Anirban; Dunson, David B.
2017-01-01
Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. We derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions. PMID:29332971
NASA Astrophysics Data System (ADS)
Bonan, G. B.; Wieder, W. R.
2012-12-01
Decomposition is a large term in the global carbon budget, but models of the earth system that simulate carbon cycle-climate feedbacks are largely untested with respect to litter decomposition. Here, we demonstrate a protocol to document model performance with respect to both long-term (10 year) litter decomposition and steady-state soil carbon stocks. First, we test the soil organic matter parameterization of the Community Land Model version 4 (CLM4), the terrestrial component of the Community Earth System Model, with data from the Long-term Intersite Decomposition Experiment Team (LIDET). The LIDET dataset is a 10-year study of litter decomposition at multiple sites across North America and Central America. We show results for 10-year litter decomposition simulations compared with LIDET for 9 litter types and 20 sites in tundra, grassland, and boreal, conifer, deciduous, and tropical forest biomes. We show additional simulations with DAYCENT, a version of the CENTURY model, to ask how well an established ecosystem model matches the observations. The results reveal a large discrepancy between the laboratory microcosm studies used to parameterize the CLM4 litter decomposition and the LIDET field study. Simulated carbon loss is more rapid than the observations across all sites, despite using the LIDET-provided climatic decomposition index to constrain temperature and moisture effects on decomposition. Nitrogen immobilization is similarly biased high. Closer agreement with the observations requires much lower decomposition rates, obtained with the assumption that nitrogen severely limits decomposition. DAYCENT better replicates the observations, for both carbon mass remaining and nitrogen, without requirement for nitrogen limitation of decomposition. Second, we compare global observationally-based datasets of soil carbon with simulated steady-state soil carbon stocks for both models. The model simulations were forced with observationally-based estimates of annual litterfall and model-derived climatic decomposition index. While comparison with the LIDET 10-year litterbag study reveals sharp contrasts between CLM4 and DAYCENT, simulations of steady-state soil carbon show less difference between models. Both CLM4 and DAYCENT significantly underestimate soil carbon. Sensitivity analyses highlight causes of the low soil carbon bias. The terrestrial biogeochemistry of earth system models must be critically tested with observations, and the consequences of particular model choices must be documented. Long-term litter decomposition experiments such as LIDET provide a real-world process-oriented benchmark to evaluate models and can critically inform model development. Analysis of steady-state soil carbon estimates reveals additional, but here different, inferences about model performance.
ERIC Educational Resources Information Center
Schizas, Dimitrios; Katrana, Evagelia; Stamou, George
2013-01-01
In the present study we used the technique of word association tests to assess students' cognitive structures during the learning period. In particular, we tried to investigate what students living near a protected area in Greece (Dadia forest) knew about the phenomenon of decomposition. Decomposition was chosen as a stimulus word because it…
Microbial ecological succession during municipal solid waste decomposition.
Staley, Bryan F; de Los Reyes, Francis L; Wang, Ling; Barlaz, Morton A
2018-04-28
The decomposition of landfilled refuse proceeds through distinct phases, each defined by varying environmental factors such as volatile fatty acid concentration, pH, and substrate quality. The succession of microbial communities in response to these changing conditions was monitored in a laboratory-scale simulated landfill to minimize measurement difficulties experienced at field scale. 16S rRNA gene sequences retrieved at separate stages of decomposition showed significant succession in both Bacteria and methanogenic Archaea. A majority of Bacteria sequences in landfilled refuse belong to members of the phylum Firmicutes, while Proteobacteria levels fluctuated and Bacteroidetes levels increased as decomposition proceeded. Roughly 44% of archaeal sequences retrieved under conditions of low pH and high acetate were strictly hydrogenotrophic (Methanomicrobiales, Methanobacteriales). Methanosarcina was present at all stages of decomposition. Correspondence analysis showed bacterial population shifts were attributed to carboxylic acid concentration and solids hydrolysis, while archaeal populations were affected to a higher degree by pH. T-RFLP analysis showed specific taxonomic groups responded differently and exhibited unique responses during decomposition, suggesting that species composition and abundance within Bacteria and Archaea are highly dynamic. This study shows landfill microbial demographics are highly variable across both spatial and temporal transects.
Evaluating the Mechanism of Oil Price Shocks and Fiscal Policy Responses in the Malaysian Economy
NASA Astrophysics Data System (ADS)
Bekhet, Hussain A.; Yusoff, Nora Yusma Mohamed
2013-06-01
The paper aims to explore the symmetric impact of oil price shocks on the economy, to understand their mechanism channel and how fiscal policy responds to them. The Generalized Impulse Response Function and Variance Decomposition under the VAR methodology were employed. The empirical findings suggest that a symmetric oil price shock has a positive and direct impact on oil revenue and government expenditure. However, real GDP is vulnerable in the short term but not in the long term. These results confirm that fiscal policy is the main mechanism channel that mitigates the adverse effects of oil price shocks on the economy.
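A minimal VAR sketch with statsmodels showing impulse responses and forecast-error variance decomposition on synthetic series; the variable names are illustrative, and statsmodels' orthogonalized IRF is used here as a stand-in for the Generalized Impulse Response Function of the paper.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(5)

# Hypothetical annual series standing in for oil price, oil revenue, government
# expenditure and real GDP (names and data are illustrative only).
n = 60
data = pd.DataFrame(
    rng.normal(size=(n, 4)).cumsum(axis=0),
    columns=["oil_price", "oil_revenue", "gov_expenditure", "real_gdp"],
).diff().dropna()                               # difference to a stationary form

model = VAR(data)
res = model.fit(maxlags=2, ic="aic")

# Diagnostics of the kind used in the abstract over a 10-period horizon;
# irf.plot() would display the impulse responses.
irf = res.irf(10)
fevd = res.fevd(10)                             # forecast-error variance decomposition
print(fevd.summary())
```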
2014-04-01
A Posteriori Error Analysis and Uncertainty Quantification for Adaptive Multiscale Operator Decomposition Methods for Multiphysics Problems. Donald Estep and Michael. Report TR-14-33, HDTRA1-09-1-0036, April 2014. Approved for public release; distribution is unlimited. Related work cited: Barrier methods for critical exponent problems in geometric analysis and mathematical physics, J. Erway and M. Holst, submitted for publication.
NASA Astrophysics Data System (ADS)
Forootan, Ehsan; Kusche, Jürgen; Talpe, Matthieu; Shum, C. K.; Schmidt, Michael
2017-12-01
In recent decades, decomposition techniques have enabled increasingly more applications for dimension reduction, as well as extraction of additional information from geophysical time series. Traditionally, the principal component analysis (PCA)/empirical orthogonal function (EOF) method and more recently the independent component analysis (ICA) have been applied to extract statistically orthogonal (uncorrelated) and independent modes, respectively, that represent the maximum variance of the time series. PCA and ICA can be classified as stationary signal decomposition techniques since they are based on decomposing the autocovariance matrix and diagonalizing higher (than two) order statistical tensors from centered time series, respectively. However, the stationarity assumption in these techniques is not justified for many geophysical and climate variables even after removing cyclic components, e.g., the commonly removed dominant seasonal cycles. In this paper, we present a novel decomposition method, the complex independent component analysis (CICA), which can be applied to extract non-stationary (changing in space and time) patterns from geophysical time series. Here, CICA is derived as an extension of real-valued ICA, where (a) we first define a new complex dataset that contains the observed time series in its real part, and their Hilbert transformed series as its imaginary part, (b) an ICA algorithm based on diagonalization of fourth-order cumulants is then applied to decompose the new complex dataset in (a), and finally, (c) the dominant independent complex modes are extracted and used to represent the dominant space and time amplitudes and associated phase propagation patterns. The performance of CICA is examined by analyzing synthetic data constructed from multiple physically meaningful modes in a simulation framework, with known truth. Next, global terrestrial water storage (TWS) data from the Gravity Recovery And Climate Experiment (GRACE) gravimetry mission (2003-2016), and satellite radiometric sea surface temperature (SST) data (1982-2016) over the Atlantic and Pacific Oceans are used with the aim of demonstrating signal separations of the North Atlantic Oscillation (NAO) from the Atlantic Multi-decadal Oscillation (AMO), and the El Niño Southern Oscillation (ENSO) from the Pacific Decadal Oscillation (PDO). CICA results indicate that ENSO-related patterns can be extracted from the Gravity Recovery And Climate Experiment Terrestrial Water Storage (GRACE TWS) with an accuracy of 0.5-1 cm in terms of equivalent water height (EWH). The magnitude of errors in extracting NAO or AMO from SST data using the complex EOF (CEOF) approach reaches up to 50% of the signal itself, while it is reduced to 16% when applying CICA. Larger errors with magnitudes of 100% and 30% of the signal itself are found while separating ENSO from PDO using CEOF and CICA, respectively. We thus conclude that CICA is more effective than CEOF in separating non-stationary patterns.
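A sketch of step (a), building the complex data set from a series and its Hilbert transform via scipy's analytic signal; the fourth-order-cumulant complex ICA of step (b) is replaced here by a simpler complex EOF (eigendecomposition of the Hermitian covariance), so this only illustrates the amplitude/phase representation, not CICA itself. All data are synthetic.

```python
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(6)

# Hypothetical gridded time series: n_time epochs at n_loc grid points
# (e.g. TWS or SST anomalies after removing the seasonal cycle).
n_time, n_loc = 168, 500
series = rng.normal(size=(n_time, n_loc)).cumsum(axis=0)
series -= series.mean(axis=0)                   # center each grid point

# Step (a): hilbert() returns the analytic signal x + i*H(x), i.e. a complex data
# set with the observed series as real part and its Hilbert transform as imaginary part.
analytic = hilbert(series, axis=0)

# Stand-in for step (b): complex EOF via the Hermitian covariance matrix; the
# leading eigenvector gives spatial amplitude and phase-propagation information.
cov = analytic.conj().T @ analytic / n_time
eigvals, eigvecs = np.linalg.eigh(cov)
leading = eigvecs[:, -1]                        # dominant complex spatial mode
print("spatial amplitude (first 5 points):", np.abs(leading)[:5])
print("spatial phase, rad (first 5 points):", np.angle(leading)[:5])
```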
Raut, Savita V; Yadav, Dinkar M
2018-03-28
This paper presents an fMRI signal analysis methodology using geometric mean curve decomposition (GMCD) and a mutual information-based voxel selection framework. Previously, fMRI signal analysis has been conducted using the empirical mean curve decomposition (EMCD) model and voxel selection on the raw fMRI signal. The former methodology loses the frequency component, while the latter suffers from signal redundancy. Both challenges are addressed by our methodology, in which the frequency component is preserved by decomposing the raw fMRI signal using the geometric mean rather than the arithmetic mean, and the voxels are selected from the EMCD signal using GMCD components rather than the raw fMRI signal. The proposed methodologies are adopted for predicting the neural response. Experiments are conducted on openly available fMRI data from six subjects, and comparisons are made with existing decomposition models and voxel selection frameworks. Subsequently, the effects of the number of selected voxels and of the selection constraints are analyzed. The comparative results and the analysis demonstrate the superiority and reliability of the proposed methodology.
1985-09-01
Activation Energies and Frequency Factors for HMX and RDX Decomposition. Michael A. Schroeder. September 1985. Approved for public release; distribution unlimited. The activation energies, being larger than the net energies of reaction for the same transitions, represent energy needed for "freeing-up" of HMX or RDX molecules.
About decomposition approach for solving the classification problem
NASA Astrophysics Data System (ADS)
Andrianova, A. A.
2016-11-01
This article describes the application of an algorithm that uses decomposition methods to solve the binary classification problem of constructing a linear classifier based on the Support Vector Machine method. Applying decomposition reduces the volume of calculations, in particular because it opens the possibility of building parallel versions of the algorithm, which is a very important advantage for solving problems with big data. We analyze the results of computational experiments conducted using the decomposition approach. The experiments use a known data set for the binary classification problem.
Liu, Zhengyan; Mao, Xianqiang; Song, Peng
2017-01-01
Temporal index decomposition analysis and spatial index decomposition analysis were applied to understand the driving forces of the emissions embodied in China's exports and net exports during 2002-2011, respectively. The accumulated emissions embodied in exports accounted for approximately 30% of the total emissions in China; although the contribution of the sectoral total emissions intensity (technique effect) declined, the scale effect was largely responsible for the mounting emissions associated with export, and the composition effect played a largely insignificant role. Calculations of the emissions embodied in net exports suggest that China is generally in an environmentally inferior position compared with its major trade partners. The differences in the economy-wide emission intensities between China and its major trade partners were the biggest contribution to this reality, and the trade balance effect played a less important role. However, a lower degree of specialization in pollution intensive products in exports than in imports helped to reduce slightly the emissions embodied in net exports. The temporal index decomposition analysis results suggest that China should take effective measures to optimize export and supply-side structure and reduce the total emissions intensity. According to spatial index decomposition analysis, it is suggested that a more aggressive import policy was useful for curbing domestic and global emissions, and the transfer of advanced production technologies and emission control technologies from developed to developing countries should be a compulsory global environmental policy option to mitigate the possible leakage of pollution emissions caused by international trade.
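The abstract does not state which index decomposition form is used, so the sketch below assumes the additive LMDI variant, a common choice in index decomposition analysis, and two hypothetical sectors; it shows how a change in embodied emissions splits exactly into scale, composition and technique effects.

```python
import numpy as np

def logmean(a, b):
    """Logarithmic mean used by LMDI; falls back to a when a == b."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return np.where(np.isclose(a, b), a, (a - b) / (np.log(a) - np.log(b)))

# Hypothetical two-sector export data for a base year (0) and an end year (1).
Q0, Q1 = 100.0, 180.0                                   # total export volume (scale)
s0, s1 = np.array([0.6, 0.4]), np.array([0.55, 0.45])   # sector shares (composition)
I0, I1 = np.array([2.0, 0.8]), np.array([1.5, 0.7])     # emission intensities (technique)

E0, E1 = Q0 * s0 * I0, Q1 * s1 * I1                     # sectoral embodied emissions
w = logmean(E1, E0)

# Additive LMDI: the three effects sum exactly to the total emission change.
scale = np.sum(w * np.log(Q1 / Q0))
composition = np.sum(w * np.log(s1 / s0))
technique = np.sum(w * np.log(I1 / I0))
print(f"total change {E1.sum() - E0.sum():.1f} = "
      f"scale {scale:.1f} + composition {composition:.1f} + technique {technique:.1f}")
```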
Meta-analysis with missing study-level sample variance data.
Chowdhry, Amit K; Dworkin, Robert H; McDermott, Michael P
2016-07-30
We consider a study-level meta-analysis with a normally distributed outcome variable and possibly unequal study-level variances, where the object of inference is the difference in means between a treatment and control group. A common complication in such an analysis is missing sample variances for some studies. A frequently used approach is to impute the weighted (by sample size) mean of the observed variances (mean imputation). Another approach is to include only those studies with variances reported (complete case analysis). Both mean imputation and complete case analysis are only valid under the missing-completely-at-random assumption, and even then the inverse variance weights produced are not necessarily optimal. We propose a multiple imputation method employing gamma meta-regression to impute the missing sample variances. Our method takes advantage of study-level covariates that may be used to provide information about the missing data. Through simulation studies, we show that multiple imputation, when the imputation model is correctly specified, is superior to competing methods in terms of confidence interval coverage probability and type I error probability when testing a specified group difference. Finally, we describe a similar approach to handling missing variances in cross-over studies. Copyright © 2016 John Wiley & Sons, Ltd.
Bahri, A.; Bendersky, M.; Cohen, F. R.; Gitler, S.
2009-01-01
This article gives a natural decomposition of the suspension of a generalized moment-angle complex or partial product space which arises as the polyhedral product functor described below. The introduction and application of the smash product moment-angle complex provides a precise identification of the stable homotopy type of the values of the polyhedral product functor. One direct consequence is an analysis of the associated cohomology. For the special case of the complements of certain subspace arrangements, the geometrical decomposition implies the homological decomposition in earlier work of others as described below. Because the splitting is geometric, an analogous homological decomposition for a generalized moment-angle complex applies for any homology theory. Implied, therefore, is a decomposition for the Stanley–Reisner ring of a finite simplicial complex, and natural generalizations. PMID:19620727
Three geographic decomposition approaches in transportation network analysis
DOT National Transportation Integrated Search
1980-03-01
This document describes the results of research into the application of geographic decomposition techniques to practical transportation network problems. Three approaches are described for the solution of the traffic assignment problem. One approach ...
Application of Decomposition to Transportation Network Analysis
DOT National Transportation Integrated Search
1976-10-01
This document reports preliminary results of five potential applications of the decomposition techniques from mathematical programming to transportation network problems. The five application areas are (1) the traffic assignment problem with fixed de...
Yielding physically-interpretable emulators - A Sparse PCA approach
NASA Astrophysics Data System (ADS)
Galelli, S.; Alsahaf, A.; Giuliani, M.; Castelletti, A.
2015-12-01
Projection-based techniques, such as Proper Orthogonal Decomposition (POD), are a common approach to surrogating high-fidelity process-based models with lower-order dynamic emulators. With POD, the dimensionality reduction is achieved by using observations, or 'snapshots', generated with the high-fidelity model to project the entire set of input and state variables of this model onto a smaller set of basis functions that account for most of the variability in the data. While the reduction efficiency and variance control of POD techniques are usually very high, the resulting emulators are structurally highly complex and can hardly be given a physically meaningful interpretation, as each basis is a projection of the entire set of inputs and states. In this work, we propose a novel approach based on Sparse Principal Component Analysis (SPCA) that combines the several assets of POD methods with the potential for ex-post interpretation of the emulator structure. SPCA reduces the number of non-zero coefficients in the basis functions by identifying a sparse matrix of coefficients. While the resulting set of basis functions may retain less variance of the snapshots, the presence of a few non-zero coefficients assists in the interpretation of the underlying physical processes. The SPCA approach is tested on the reduction of a 1D hydro-ecological model (DYRESM-CAEDYM) used to describe the main ecological and hydrodynamic processes in Tono Dam, Japan. An experimental comparison against a standard POD approach shows that SPCA achieves the same accuracy in emulating a given output variable, for the same level of dimensionality reduction, while yielding better insights into the main process dynamics.
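A small comparison of dense PCA loadings with sparse ones using scikit-learn's SparsePCA on a synthetic snapshot matrix; the snapshot construction and the alpha value are assumptions, and no DYRESM-CAEDYM output is involved.

```python
import numpy as np
from sklearn.decomposition import PCA, SparsePCA

rng = np.random.default_rng(7)

# Hypothetical snapshot matrix from a high-fidelity model run: rows are snapshots
# in time, columns are state variables (e.g. temperature, oxygen, chlorophyll at
# several depths), generated from a few latent processes with sparse footprints.
n_snapshots, n_states = 400, 30
latent = rng.normal(size=(n_snapshots, 3))
mixing = rng.normal(size=(3, n_states)) * (rng.random((3, n_states)) > 0.7)
snapshots = latent @ mixing + 0.1 * rng.normal(size=(n_snapshots, n_states))

# Standard POD/PCA basis: dense loadings, hard to interpret physically.
pca = PCA(n_components=3).fit(snapshots)

# Sparse PCA basis: many loadings driven to zero, so each mode involves only a
# few state variables and can be inspected for physical meaning.
spca = SparsePCA(n_components=3, alpha=1.0, random_state=0).fit(snapshots)

print("non-zero loadings per PCA component:      ", np.count_nonzero(pca.components_, axis=1))
print("non-zero loadings per SparsePCA component:", np.count_nonzero(spca.components_, axis=1))
```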
Quantitative Comparison of the Variability in Observed and Simulated Shortwave Reflectance
NASA Technical Reports Server (NTRS)
Roberts, Yolanda, L.; Pilewskie, P.; Kindel, B. C.; Feldman, D. R.; Collins, W. D.
2013-01-01
The Climate Absolute Radiance and Refractivity Observatory (CLARREO) is a climate observation system that has been designed to monitor the Earth's climate with unprecedented absolute radiometric accuracy and SI traceability. Climate Observation System Simulation Experiments (OSSEs) have been generated to simulate CLARREO hyperspectral shortwave imager measurements to help define the measurement characteristics needed for CLARREO to achieve its objectives. To evaluate how well the OSSE-simulated reflectance spectra reproduce the Earth's climate variability at the beginning of the 21st century, we compared the variability of the OSSE reflectance spectra to that of the reflectance spectra measured by the Scanning Imaging Absorption Spectrometer for Atmospheric Cartography (SCIAMACHY). Principal component analysis (PCA) is a multivariate decomposition technique used to represent and study the variability of hyperspectral radiation measurements. Using PCA, between 99.7% and 99.9% of the total variance of the OSSE and SCIAMACHY data sets can be explained by subspaces defined by six principal components (PCs). To quantify how much information is shared between the simulated and observed data sets, we spectrally decomposed the intersection of the two data set subspaces. The results from four cases in 2004 showed that the two data sets share eight (January and October) and seven (April and July) dimensions, which correspond to about 99.9% of the total SCIAMACHY variance for each month. The spectral nature of these shared spaces, understood by examining the transformed eigenvectors calculated from the subspace intersections, exhibits similar physical characteristics to the original PCs calculated from each data set, such as water vapor absorption, vegetation reflectance, and cloud reflectance.
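A hedged sketch of comparing the variability subspaces of two hyperspectral data sets: PCA bases are computed for each synthetic set and the principal angles between them (scipy.linalg.subspace_angles) indicate how many dimensions are effectively shared; the paper's exact subspace-intersection decomposition may differ.

```python
import numpy as np
from scipy.linalg import subspace_angles
from sklearn.decomposition import PCA

rng = np.random.default_rng(8)

# Hypothetical reflectance spectra: two data sets built from overlapping sets of
# spectral signatures (6 shared, 2 unique to each) plus a small noise term.
n_bands, n_obs = 120, 2000
shared = rng.normal(size=(6, n_bands))
extra_a, extra_b = rng.normal(size=(2, n_bands)), rng.normal(size=(2, n_bands))
A = rng.normal(size=(n_obs, 6)) @ shared + rng.normal(size=(n_obs, 2)) @ extra_a
B = rng.normal(size=(n_obs, 6)) @ shared + rng.normal(size=(n_obs, 2)) @ extra_b
A += 0.05 * rng.normal(size=A.shape)
B += 0.05 * rng.normal(size=B.shape)

# Low-dimensional subspaces capturing nearly all of the variance in each data set.
pcs_a = PCA(n_components=8).fit(A).components_.T    # n_bands x 8 basis
pcs_b = PCA(n_components=8).fit(B).components_.T

# Principal angles near zero indicate shared (intersecting) directions, i.e.
# variability common to both data sets.
angles = np.degrees(subspace_angles(pcs_a, pcs_b))
print("principal angles (deg):", np.round(angles, 1))
print("approximately shared dimensions:", int(np.sum(angles < 5)))
```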
Chiew, Mark; Graedel, Nadine N; Miller, Karla L
2018-07-01
Recent developments in highly accelerated fMRI data acquisition have employed low-rank and/or sparsity constraints for image reconstruction, as an alternative to conventional, time-independent parallel imaging. When under-sampling factors are high or the signals of interest are low-variance, however, functional data recovery can be poor or incomplete. We introduce a method for improving reconstruction fidelity using external constraints, like an experimental design matrix, to partially orient the estimated fMRI temporal subspace. Combining these external constraints with low-rank constraints introduces a new image reconstruction model that is analogous to using a mixture of subspace-decomposition (PCA/ICA) and regression (GLM) models in fMRI analysis. We show that this approach improves fMRI reconstruction quality in simulations and experimental data, focusing on the model problem of detecting subtle 1-s latency shifts between brain regions in a block-design task-fMRI experiment. Successful latency discrimination is shown at acceleration factors up to R = 16 in a radial-Cartesian acquisition. We show that this approach works with approximate, or not perfectly informative constraints, where the derived benefit is commensurate with the information content contained in the constraints. The proposed method extends low-rank approximation methods for under-sampled fMRI data acquisition by leveraging knowledge of expected task-based variance in the data, enabling improvements in the speed and efficiency of fMRI data acquisition without the loss of subtle features. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Ying, Kairan; Frederiksen, Carsten S.; Zheng, Xiaogu; Lou, Jiale; Zhao, Tianbao
2018-02-01
The modes of variability that arise from the slow-decadal (potentially predictable) and intra-decadal (unpredictable) components of decadal mean temperature and precipitation over China are examined, in a 1000 year (850-1850 AD) experiment using the CCSM4 model. Solar variations, volcanic aerosols, orbital forcing, land use, and greenhouse gas concentrations provide the main forcing and boundary conditions. The analysis is done using a decadal variance decomposition method that identifies sources of potential decadal predictability and uncertainty. The average potential decadal predictabilities (ratio of slow-to-total decadal variance) are 0.62 and 0.37 for the temperature and rainfall over China, respectively, indicating that the (multi-)decadal variations of temperature are dominated by slow-decadal variability, while precipitation is dominated by unpredictable decadal noise. Possible sources of decadal predictability for the two leading predictable modes of temperature are the external radiative forcing, and the combined effects of slow-decadal variability of the Arctic oscillation (AO) and the Pacific decadal oscillation (PDO), respectively. Combined AO and PDO slow-decadal variability is associated also with the leading predictable mode of precipitation. External radiative forcing as well as the slow-decadal variability of PDO are associated with the second predictable rainfall mode; the slow-decadal variability of Atlantic multi-decadal oscillation (AMO) is associated with the third predictable precipitation mode. The dominant unpredictable decadal modes are associated with intra-decadal/inter-annual phenomena. In particular, the El Niño-Southern Oscillation and the intra-decadal variability of the AMO, PDO and AO are the most important sources of prediction uncertainty.
A Probabilistic Collocation Based Iterative Kalman Filter for Landfill Data Assimilation
NASA Astrophysics Data System (ADS)
Qiang, Z.; Zeng, L.; Wu, L.
2016-12-01
Due to the strong spatial heterogeneity of landfills, uncertainty is ubiquitous in the gas transport process in a landfill. To accurately characterize the landfill properties, the ensemble Kalman filter (EnKF) has been employed to assimilate measurements such as the gas pressure. As a Monte Carlo (MC) based method, the EnKF usually requires a large ensemble size, which poses a high computational cost for large-scale problems. In this work, we propose a probabilistic collocation based iterative Kalman filter (PCIKF) to estimate permeability in a liquid-gas coupling model. This method employs polynomial chaos expansion (PCE) to represent and propagate the uncertainties of model parameters and states, and an iterative form of the Kalman filter to assimilate the current gas pressure data. To further reduce the computational cost, a functional ANOVA (analysis of variance) decomposition is conducted, and only the first-order ANOVA components are retained for the PCE. Illustrated with numerical case studies, the proposed method shows significant superiority in computational efficiency compared with the traditional MC based iterative EnKF. The developed method has promising potential for reliable prediction and management of landfill gas production.
Population growth and development: the case of Bangladesh.
Nakibullah, A
1998-04-01
In a poor, overly populated country such as Bangladesh, some believe that a high rate of population growth is a cause of poverty which impedes economic development. Population growth would therefore be exogenous to economic development. However, others believe that rapid population growth is a consequence rather than a cause of poverty. Population growth is therefore endogenous to economic development. Findings are presented from an investigation of whether population growth has been exogenous or endogenous with respect to Bangladesh's development process during the past 3 decades. The increase in per capita real gross domestic product (GDP) is used as a measure of development. Data on population, real GDP per capita, and real investment share of GDP are drawn from the Penn World Table prepared by Summers and Heston in 1991. The data are annual and cover the period 1959-90. Analysis of the data indicate that population growth is endogenous to Bangladesh's development process. These findings are reflected both in the Granger causality tests and the decompositions of variances of detrended real GDP per capita and population growth.
NASA Astrophysics Data System (ADS)
Zhang, Dai; Hao, Shiqi; Zhao, Qingsong; Zhao, Qi; Wang, Lei; Wan, Xiongfeng
2018-03-01
Existing wavefront reconstruction methods are usually low in resolution, being limited by the structural characteristics of the Shack-Hartmann wavefront sensor (SH WFS) and the deformable mirror (DM) in the adaptive optics (AO) system, which results in weak homodyne detection efficiency for free space optical (FSO) communication. To solve this problem, we first validate the feasibility of using a liquid crystal spatial light modulator (LC SLM) in an AO system. Then, a wavefront reconstruction method based on wavelet fractal interpolation is proposed after a self-similarity analysis of the wavefront distortion caused by atmospheric turbulence. Fast wavelet decomposition is applied to perform a multiresolution analysis of the wavefront phase spectrum, during which soft-threshold denoising is carried out. The resolution of the estimated wavefront phase is then improved by fractal interpolation. Finally, fast wavelet reconstruction is used to recover the wavefront phase. Simulation results demonstrate the superiority of our method for homodyne detection. Compared with the minimum variance estimation (MVE) method based on interpolation techniques, the proposed method achieves superior homodyne detection efficiency with lower operational complexity. Our research findings have theoretical significance for the design of coherent FSO communication systems.
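A minimal sketch of the multiresolution decomposition and soft-threshold denoising steps using PyWavelets as a stand-in (the fractal-interpolation upsampling is omitted); the phase screen, wavelet, level and threshold are illustrative assumptions, not values from the paper.

```python
import numpy as np
import pywt

rng = np.random.default_rng(9)

# Hypothetical wavefront phase screen (radians) on a 64 x 64 grid, plus
# measurement noise; both are stand-ins for the SH WFS estimate.
phase = np.cumsum(np.cumsum(rng.normal(size=(64, 64)), axis=0), axis=1) * 1e-2
noise_std = 0.05
noisy = phase + noise_std * rng.normal(size=phase.shape)

# Fast 2-D wavelet decomposition (multiresolution analysis of the phase map).
coeffs = pywt.wavedec2(noisy, wavelet="db4", level=3)

# Soft-threshold the detail coefficients using the universal threshold for the
# assumed noise level; the approximation coefficients coeffs[0] are kept as-is.
thr = noise_std * np.sqrt(2 * np.log(noisy.size))
denoised = [coeffs[0]] + [
    tuple(pywt.threshold(band, thr, mode="soft") for band in detail)
    for detail in coeffs[1:]
]
recovered = pywt.waverec2(denoised, wavelet="db4")[:64, :64]

# The residual error simply illustrates the effect of the thresholding step.
print("rms error before:", np.sqrt(np.mean((noisy - phase) ** 2)))
print("rms error after: ", np.sqrt(np.mean((recovered - phase) ** 2)))
```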
Mohamed, Hala Sh; Dahy, AbdelRahman A; Mahfouz, Refaat M
2017-10-25
Kinetic analysis of the non-isothermal decomposition of un-irradiated and photon-beam-irradiated 5-fluorouracil (5-FU), an anti-cancer drug, was carried out in static air. Thermal decomposition of 5-FU proceeds in two steps: a minor step in the temperature range of 270-283°C followed by the major step in the range of 285-360°C. The non-isothermal data for un-irradiated and photon-irradiated 5-FU were analyzed using linear (Tang) and non-linear (Vyazovkin) isoconversional methods. The application of these model-free methods to the present kinetic data showed a clear dependence of the activation energy on the extent of conversion. For un-irradiated 5-FU, the non-isothermal data analysis indicates that the decomposition is generally described by the A3 and A4 models for the minor and major decomposition steps, respectively. For a photon-irradiated sample of 5-FU with a total absorbed dose of 10Gy, the decomposition is controlled by the A2 model throughout the conversion range. The activation energies calculated for photon-irradiated 5-FU were found to be lower than the values obtained from the thermal decomposition of the un-irradiated sample, probably due to the formation of additional nucleation sites created by photon irradiation. The decomposition path was investigated by intrinsic reaction coordinate (IRC) calculations at the B3LYP/6-311++G(d,p) level of DFT. Two transition states were involved in the process, corresponding to homolytic rupture of the NH bond and ring scission, respectively. Published by Elsevier B.V.
Tu, Jun-Ling; Yuan, Jiao-Jiao
2018-02-13
The thermal decomposition behavior of olive hydroxytyrosol (HT) was first studied using thermogravimetry (TG). Chemical bond cleavage and the evolved gases during the thermal decomposition of HT were also investigated using thermogravimetry coupled with infrared spectroscopy (TG-FTIR). Thermogravimetry-differential thermogravimetry (TG-DTG) curves revealed that the thermal decomposition of HT began at 262.8 °C and ended at 409.7 °C with a main mass loss. It was demonstrated that a high heating rate (over 20 K·min⁻¹) restrained the thermal decomposition of HT, resulting in an obvious thermal hysteresis. Furthermore, a thermal decomposition kinetics investigation of HT indicated that the non-isothermal decomposition mechanism was one-dimensional diffusion (D1), with integral form g(x) = x² and differential form f(x) = 1/(2x). Four combined approaches were employed to calculate the activation energy (E = 128.50 kJ·mol⁻¹) and the Arrhenius pre-exponential factor (ln A = 24.39, with A in min⁻¹). In addition, a tentative mechanism of HT thermal decomposition was further developed. The results provide a theoretical reference for the potential thermal stability of HT.
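As a worked illustration of how the reported kinetic triplet can be used, the sketch below evaluates the Arrhenius rate constant k(T) = A·exp(-E/RT) and the D1 model forms at an assumed temperature inside the reported decomposition window; the temperature and conversion values are arbitrary choices for illustration only.

```python
import numpy as np

R = 8.314          # J mol^-1 K^-1, gas constant
E = 128.50e3       # J mol^-1, activation energy reported above
lnA = 24.39        # ln of the pre-exponential factor, A in min^-1

T = 350.0 + 273.15            # K, an assumed temperature within the 262.8-409.7 degC window
k = np.exp(lnA - E / (R * T))  # Arrhenius: k = A*exp(-E/RT), in min^-1

# One-dimensional diffusion (D1) model: g(x) = x^2, f(x) = 1/(2x)
x = 0.5                        # example extent of conversion
g = x**2
f = 1.0 / (2.0 * x)

print(f"k(350 degC) ~ {k:.3e} min^-1, g(0.5) = {g}, f(0.5) = {f}")
```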
Decadal climate prediction in the large ensemble limit
NASA Astrophysics Data System (ADS)
Yeager, S. G.; Rosenbloom, N. A.; Strand, G.; Lindsay, K. T.; Danabasoglu, G.; Karspeck, A. R.; Bates, S. C.; Meehl, G. A.
2017-12-01
In order to quantify the benefits of initialization for climate prediction on decadal timescales, two parallel sets of historical simulations are required: one "initialized" ensemble that incorporates observations of past climate states and one "uninitialized" ensemble whose internal climate variations evolve freely and without synchronicity. In the large ensemble limit, ensemble averaging isolates potentially predictable forced and internal variance components in the "initialized" set, but only the forced variance remains after averaging the "uninitialized" set. The ensemble size needed to achieve this variance decomposition, and to robustly distinguish initialized from uninitialized decadal predictions, remains poorly constrained. We examine a large ensemble (LE) of initialized decadal prediction (DP) experiments carried out using the Community Earth System Model (CESM). This 40-member CESM-DP-LE set of experiments represents the "initialized" complement to the CESM large ensemble of 20th century runs (CESM-LE) documented in Kay et al. (2015). Both simulation sets share the same model configuration, historical radiative forcings, and large ensemble sizes. The twin experiments afford an unprecedented opportunity to explore the sensitivity of DP skill assessment, and in particular the skill enhancement associated with initialization, to ensemble size. This talk will highlight the benefits of a large ensemble size for initialized predictions of seasonal climate over land in the Atlantic sector as well as predictions of shifts in the likelihood of climate extremes that have large societal impact.
Polarimetric Decomposition Analysis of the Deepwater Horizon Oil Slick Using L-Band UAVSAR Data
NASA Technical Reports Server (NTRS)
Jones, Cathleen; Minchew, Brent; Holt, Benjamin
2011-01-01
We report here an analysis of the polarization dependence of L-band radar backscatter from the main slick of the Deepwater Horizon oil spill, with specific attention to the utility of polarimetric decomposition analysis for discrimination of oil from clean water and identification of variations in the oil characteristics. For this study we used data collected with the UAVSAR instrument from opposing look directions directly over the main oil slick. We find that both the Cloude-Pottier and Shannon entropy polarimetric decomposition methods offer promise for oil discrimination, with the Shannon entropy method yielding the same information as contained in the Cloude-Pottier entropy and averaged intensity parameters, but with significantly less computational complexity.
A data-driven method to enhance vibration signal decomposition for rolling bearing fault analysis
NASA Astrophysics Data System (ADS)
Grasso, M.; Chatterton, S.; Pennacchi, P.; Colosimo, B. M.
2016-12-01
Health condition analysis and diagnostics of rotating machinery require the capability of properly characterizing the information content of sensor signals in order to detect and identify possible fault features. Time-frequency analysis plays a fundamental role, as it allows determining both the existence and the causes of a fault. The separation of components belonging to different time-frequency scales, associated with either healthy or faulty conditions, represents a challenge that motivates the development of effective methodologies for multi-scale signal decomposition. In this framework, the Empirical Mode Decomposition (EMD) is a flexible tool, thanks to its data-driven and adaptive nature. However, the EMD usually yields an over-decomposition of the original signals into a large number of intrinsic mode functions (IMFs). The selection of the most relevant IMFs is a challenging task, and the reference literature lacks automated methods to achieve a synthetic decomposition into a few physically meaningful modes while avoiding the generation of spurious or meaningless modes. The paper proposes a novel automated approach aimed at generating a decomposition into a minimal number of relevant modes, called Combined Mode Functions (CMFs), each consisting of a sum of adjacent IMFs that share similar properties. The final number of CMFs is selected in a fully data-driven way, leading to an enhanced characterization of the signal content without any information loss. A novel criterion to assess the dissimilarity between adjacent CMFs is proposed, based on probability density functions of frequency spectra. The method is suitable for analyzing vibration signals that may be periodically acquired within the operating life of rotating machinery. A rolling element bearing fault analysis based on experimental data is presented to demonstrate the performance of the method and the benefits it provides.
NASA Astrophysics Data System (ADS)
Hu, Shujuan; Chou, Jifan; Cheng, Jianbo
2018-04-01
In order to study the interactions between the atmospheric circulations at the middle-high and low latitudes from a global perspective, the authors proposed a mathematical definition of three-pattern circulations, i.e., horizontal, meridional and zonal circulations, with which the actual atmospheric circulation is expanded. This novel decomposition method is proved to accurately describe the actual atmospheric circulation dynamics. The authors used the NCEP/NCAR reanalysis data to calculate the climate characteristics of these three-pattern circulations and found that the decomposition model agreed with the observed results. Further dynamical analysis indicates that the decomposition model captures the major features of global three-dimensional atmospheric motions more accurately than the traditional definitions of the Rossby wave, Hadley circulation and Walker circulation. The decomposition model realizes, for the first time, the decomposition of the global atmospheric circulation into three orthogonal circulations within the horizontal, meridional and zonal planes, offering new opportunities to study the large-scale interactions between the middle-high latitude and low latitude circulations.
Batakliev, Todor; Georgiev, Vladimir; Anachkov, Metody; Rakovsky, Slavcho
2014-01-01
Catalytic ozone decomposition is of great significance because ozone is a toxic substance commonly found or generated in human environments (aircraft cabins, offices with photocopiers, laser printers, sterilizers). Considerable work on ozone decomposition has been reported in the literature. This review provides a comprehensive summary of that literature, concentrating on analysis of the physico-chemical properties, synthesis and catalytic decomposition of ozone. This is supplemented by a review of kinetics and catalyst characterization which ties together the previously reported results. Noble metals and oxides of transition metals have been found to be the most active substances for ozone decomposition. The high price of precious metals has stimulated the use of metal oxide catalysts, particularly catalysts based on manganese oxide. It has been determined that the kinetics of ozone decomposition is first order. A mechanism of the reaction of catalytic ozone decomposition is discussed, based on detailed spectroscopic investigations of the catalytic surface, showing the existence of peroxide and superoxide surface intermediates. PMID:26109880
Pressure-dependent decomposition kinetics of the energetic material HMX up to 3.6 GPa.
Glascoe, Elizabeth A; Zaug, Joseph M; Burnham, Alan K
2009-12-03
The effect of pressure on the global thermal decomposition rate of the energetic material HMX was studied. HMX was precompressed in a diamond anvil cell (DAC) and heated at various rates. The parent species population was monitored as a function of time and temperature using Fourier transform infrared (FTIR) spectroscopy. Global decomposition rates were determined by fitting the fraction reacted to the extended-Prout-Tompkins nucleation-growth model and the Friedman isoconversional method. The results of these experiments and analysis indicate that pressure accelerates the decomposition at low-to-moderate pressures (i.e., between ambient pressure and 0.1 GPa) and decelerates the decomposition at higher pressures. The decomposition acceleration is attributed to pressure-enhanced autocatalysis, whereas the deceleration at high pressures is attributed to pressure-inhibiting bond homolysis step(s), which would result in an increase in volume. These results indicate that both the beta- and delta-polymorphs of HMX are sensitive to pressure in the thermally induced decomposition kinetics.
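A minimal sketch of the Friedman isoconversional idea mentioned above: at a fixed extent of conversion, ln(dα/dt) from runs at different heating rates is regressed against 1/T, and the slope gives -E/R. The numbers below are synthetic placeholders, not the paper's DAC data.

```python
import numpy as np

R = 8.314  # J mol^-1 K^-1

# Hypothetical (dalpha/dt, T) pairs at a fixed conversion alpha,
# taken from runs at different heating rates -- illustrative numbers only.
rate = np.array([1.2e-4, 4.8e-4, 1.6e-3])   # s^-1
T    = np.array([480.0, 500.0, 520.0])       # K

slope, intercept = np.polyfit(1.0 / T, np.log(rate), 1)
E_a = -slope * R                             # apparent activation energy, J/mol
print(f"Friedman estimate: E_a ~ {E_a / 1000:.1f} kJ/mol")
```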
Radiation Transport in Random Media With Large Fluctuations
NASA Astrophysics Data System (ADS)
Olson, Aaron; Prinja, Anil; Franke, Brian
2017-09-01
Neutral particle transport in media exhibiting large and complex material property spatial variation is modeled by representing cross sections as lognormal random functions of space, generated through a nonlinear memory-less transformation of a Gaussian process with covariance uniquely determined by the covariance of the cross section. A Karhunen-Loève decomposition of the Gaussian process is implemented to efficiently generate realizations of the random cross sections, and Woodcock Monte Carlo is used to transport particles on each realization and generate benchmark solutions for the mean and variance of the particle flux as well as probability densities of the particle reflectance and transmittance. A computationally efficient stochastic collocation method is implemented to directly compute statistical moments such as the mean and variance, while a polynomial chaos expansion in conjunction with stochastic collocation provides a convenient surrogate model that also produces probability densities of output quantities of interest. Extensive numerical testing demonstrates that the use of stochastic reduced-order modeling provides an accurate and cost-effective alternative to random sampling for particle transport in random media.
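A compact sketch of the Karhunen-Loève construction described above: discretize an assumed exponential covariance for the underlying Gaussian process on a 1-D grid, eigendecompose it, and exponentiate truncated realizations to obtain lognormal cross-section samples. Grid size, correlation length, truncation order, and the lognormal parameters are all illustrative assumptions.

```python
import numpy as np

n, L, corr_len = 200, 1.0, 0.2            # grid points, slab length, correlation length (assumed)
x = np.linspace(0.0, L, n)
# Covariance of the underlying Gaussian process (exponential kernel, unit variance)
C = np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)

# Karhunen-Loeve decomposition: keep the leading modes
eigvals, eigvecs = np.linalg.eigh(C)
order = np.argsort(eigvals)[::-1]
m = 30                                     # truncation order (assumed)
lam, phi = eigvals[order][:m], eigvecs[:, order][:, :m]

rng = np.random.default_rng(1)
xi = rng.standard_normal(m)
g = phi @ (np.sqrt(lam) * xi)              # one Gaussian realization on the grid

mu, sigma = np.log(1.0), 0.5               # lognormal parameters (assumed)
sigma_t = np.exp(mu + sigma * g)           # lognormal total cross-section realization
```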
Alvin H. Yu; Garry Chick
2010-01-01
This study compared the utility of two different post-hoc tests after detecting significant differences within factors on multiple dependent variables using multivariate analysis of variance (MANOVA). We compared the univariate F test (the Scheffé method) to descriptive discriminant analysis (DDA) using an educational-tour survey of university study-...
An optimization approach for fitting canonical tensor decompositions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dunlavy, Daniel M.; Acar, Evrim; Kolda, Tamara Gibson
Tensor decompositions are higher-order analogues of matrix decompositions and have proven to be powerful tools for data analysis. In particular, we are interested in the canonical tensor decomposition, otherwise known as the CANDECOMP/PARAFAC decomposition (CPD), which expresses a tensor as the sum of component rank-one tensors and is used in a multitude of applications such as chemometrics, signal processing, neuroscience, and web analysis. The task of computing the CPD, however, can be difficult. The typical approach is based on alternating least squares (ALS) optimization, which can be remarkably fast but is not very accurate. Previously, nonlinear least squares (NLS) methods have also been recommended; existing NLS methods are accurate but slow. In this paper, we propose the use of gradient-based optimization methods. We discuss the mathematical calculation of the derivatives and further show that they can be computed efficiently, at the same cost as one iteration of ALS. Computational experiments demonstrate that the gradient-based optimization methods are much more accurate than ALS and orders of magnitude faster than NLS.
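For orientation, the sketch below implements the baseline alternating least squares (ALS) scheme for a rank-R CP decomposition of a third-order tensor in NumPy; this is the method the abstract compares against, not the gradient-based approach it proposes, and the unfolding conventions follow NumPy's C-ordering.

```python
import numpy as np

def khatri_rao(A, B):
    """Column-wise Khatri-Rao product of A (I x R) and B (J x R) -> (I*J x R)."""
    I, R = A.shape
    J, _ = B.shape
    return (A[:, None, :] * B[None, :, :]).reshape(I * J, R)

def cp_als(X, R, n_iter=100, seed=0):
    """Rank-R CP decomposition of a 3-way tensor X by alternating least squares."""
    rng = np.random.default_rng(seed)
    I, J, K = X.shape
    A = rng.standard_normal((I, R))
    B = rng.standard_normal((J, R))
    C = rng.standard_normal((K, R))
    X1 = X.reshape(I, J * K)                      # mode-1 unfolding (k fastest)
    X2 = np.moveaxis(X, 1, 0).reshape(J, I * K)   # mode-2 unfolding
    X3 = np.moveaxis(X, 2, 0).reshape(K, I * J)   # mode-3 unfolding
    for _ in range(n_iter):
        A = X1 @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = X2 @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = X3 @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C

# Sanity check: recover a synthetic rank-2 tensor
rng = np.random.default_rng(1)
A0, B0, C0 = (rng.standard_normal((d, 2)) for d in (4, 5, 6))
X = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
A, B, C = cp_als(X, R=2)
print(np.linalg.norm(X - np.einsum('ir,jr,kr->ijk', A, B, C)))
```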
Perfluoropolyalkylether decomposition on catalytic aluminas
NASA Technical Reports Server (NTRS)
Morales, Wilfredo
1994-01-01
The decomposition of Fomblin Z25, a commercial perfluoropolyalkylether liquid lubricant, was studied using the Penn State Micro-oxidation Test, and a thermal gravimetric/differential scanning calorimetry unit. The micro-oxidation test was conducted using 440C stainless steel and pure iron metal catalyst specimens, whereas the thermal gravimetric/differential scanning calorimetry tests were conducted using catalytic alumina pellets. Analysis of the thermal data, high pressure liquid chromatography data, and x-ray photoelectron spectroscopy data support evidence that there are two different decomposition mechanisms for Fomblin Z25, and that reductive sites on the catalytic surfaces are responsible for the decomposition of Fomblin Z25.
Analysis of Self-Excited Combustion Instabilities Using Decomposition Techniques
2016-07-05
Proper orthogonal decomposition and dynamic mode decomposition are evaluated for the study of self-excited longitudinal combustion instabilities in laboratory-scale single-element gas turbine and rocket combustors (DOI: 10.2514/1.J054557). In addition, we also evaluate the capabilities of the methods to deal with data sets of different spatial extents and temporal resolutions.
Baskaran, Preetisri; Hyvönen, Riitta; Berglund, S Linnea; Clemmensen, Karina E; Ågren, Göran I; Lindahl, Björn D; Manzoni, Stefano
2017-02-01
Tree growth in boreal forests is limited by nitrogen (N) availability. Most boreal forest trees form symbiotic associations with ectomycorrhizal (ECM) fungi, which improve the uptake of inorganic N and also have the capacity to decompose soil organic matter (SOM) and to mobilize organic N ('ECM decomposition'). To study the effects of 'ECM decomposition' on ecosystem carbon (C) and N balances, we performed a sensitivity analysis on a model of C and N flows between plants, SOM, saprotrophs, ECM fungi, and inorganic N stores. The analysis indicates that C and N balances were sensitive to model parameters regulating ECM biomass and decomposition. Under low N availability, the optimal C allocation to ECM fungi, above which the symbiosis switches from mutualism to parasitism, increases with increasing relative involvement of ECM fungi in SOM decomposition. Under low N conditions, increased ECM organic N mining promotes tree growth but decreases soil C storage, leading to a negative correlation between C stores above- and below-ground. The interplay between plant production and soil C storage is sensitive to the partitioning of decomposition between ECM fungi and saprotrophs. Better understanding of interactions between functional guilds of soil fungi may significantly improve predictions of ecosystem responses to environmental change. © 2016 The Authors. New Phytologist © 2016 New Phytologist Trust.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Glascoe, E A; Zaug, J M; Burnham, A K
The effect of pressure on the thermal decomposition rate of the energetic material HMX was studied. HMX was precompressed in a diamond anvil cell (DAC) and heated at various rates. The parent species population was monitored as a function of time and temperature using Fourier transform infrared (FTIR) spectroscopy. Decomposition rates were determined by fitting the fraction reacted to the extended-Prout-Tompkins nucleation-growth model and the Friedman isoconversional method. The results of these experiments and analysis indicate that pressure accelerates the decomposition at low to moderate pressures (i.e. between ambient pressure and 1 GPa) and decelerates the decomposition at higher pressures. The decomposition acceleration is attributed to pressure-enhanced autocatalysis, whereas the deceleration at high pressures is attributed to pressure-inhibiting bond homolysis step(s), which would result in an increase in volume. These results indicate that both β- and δ-phase HMX are sensitive to pressure in their thermally induced decomposition kinetics.
Liu, Zhengyan; Mao, Xianqiang; Song, Peng
2017-01-01
Temporal index decomposition analysis and spatial index decomposition analysis were applied to understand the driving forces of the emissions embodied in China’s exports and net exports during 2002–2011, respectively. The accumulated emissions embodied in exports accounted for approximately 30% of the total emissions in China; although the contribution of the sectoral total emissions intensity (technique effect) declined, the scale effect was largely responsible for the mounting emissions associated with exports, and the composition effect played a largely insignificant role. Calculations of the emissions embodied in net exports suggest that China is generally in an environmentally inferior position compared with its major trade partners. The differences in the economy-wide emission intensities between China and its major trade partners were the biggest contributor to this situation, and the trade balance effect played a less important role. However, a lower degree of specialization in pollution-intensive products in exports than in imports helped to slightly reduce the emissions embodied in net exports. The temporal index decomposition analysis results suggest that China should take effective measures to optimize its export and supply-side structure and reduce the total emissions intensity. The spatial index decomposition analysis suggests that a more aggressive import policy would be useful for curbing domestic and global emissions, and that the transfer of advanced production technologies and emission control technologies from developed to developing countries should be a compulsory global environmental policy option to mitigate the possible leakage of pollution emissions caused by international trade. PMID:28441399
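The abstract does not state which index decomposition formula was used; as a generic illustration of how an emissions change can be split into scale, composition, and technique effects, the sketch below applies the commonly used additive LMDI-I decomposition to a hypothetical two-sector example.

```python
import numpy as np

def logmean(a, b):
    """Logarithmic mean L(a, b) used by LMDI (a, b > 0)."""
    return np.where(np.isclose(a, b), a, (a - b) / (np.log(a) - np.log(b)))

# Hypothetical two-sector example: export scale Q, sector shares s, emission intensities I
Q0, Q1 = 100.0, 160.0                                    # total export volume
s0, s1 = np.array([0.6, 0.4]), np.array([0.55, 0.45])    # sectoral composition
I0, I1 = np.array([2.0, 5.0]), np.array([1.4, 4.0])      # emissions per unit output

E0, E1 = Q0 * s0 * I0, Q1 * s1 * I1                      # sectoral embodied emissions
w = logmean(E1, E0)

scale = np.sum(w * np.log(Q1 / Q0))                      # scale effect
comp  = np.sum(w * np.log(s1 / s0))                      # composition effect
tech  = np.sum(w * np.log(I1 / I0))                      # technique (intensity) effect

# The additive LMDI-I decomposition reproduces the total change exactly
print(scale + comp + tech, E1.sum() - E0.sum())
```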
Iterative filtering decomposition based on local spectral evolution kernel
Wang, Yang; Wei, Guo-Wei; Yang, Siyang
2011-01-01
Synthesizing information, achieving understanding, and deriving insight from increasingly massive, time-varying, noisy and possibly conflicting data sets are some of the most challenging tasks in the present information age. Traditional technologies, such as the Fourier transform and wavelet multi-resolution analysis, are inadequate to handle all of the above-mentioned tasks. The empirical mode decomposition (EMD) has emerged as a new powerful tool for resolving many challenging problems in data processing and analysis. Recently, an iterative filtering decomposition (IFD) has been introduced to address the stability and efficiency problems of the EMD. Another data analysis technique is the local spectral evolution kernel (LSEK), which provides a near-perfect low-pass filter with desirable time-frequency localization. The present work utilizes the LSEK to further stabilize the IFD, and offers an efficient, flexible and robust scheme for information extraction, complexity reduction, and signal and image understanding. The performance of the present LSEK-based IFD is intensively validated over a wide range of data processing tasks, including mode decomposition, analysis of time-varying data, information extraction from nonlinear dynamic systems, etc. The utility, robustness and usefulness of the proposed LSEK-based IFD are demonstrated via a large number of applications, such as the analysis of stock market data, the decomposition of ocean wave magnitudes, the understanding of physiologic signals and information recovery from noisy images. The performance of the proposed method is compared with that of existing methods in the literature. Our results indicate that the LSEK-based IFD improves both the efficiency and the stability of conventional EMD algorithms. PMID:22350559
AQMEII3 evaluation of regional NA/EU simulations and ...
Through the comparison of several regional-scale chemistry transport modelling systems that simulate meteorology and air quality over the European and American continents, this study aims at i) apportioning the error to the responsible processes using time-scale analysis, ii) helping to detect causes of model error, and iii) identifying the processes and scales most urgently requiring dedicated investigations. The analysis is conducted within the framework of the third phase of the Air Quality Model Evaluation International Initiative (AQMEII) and tackles model performance gauging through measurement-to-model comparison, error decomposition and time series analysis of the models' biases for several fields (ozone, CO, SO2, NO, NO2, PM10, PM2.5, wind speed, and temperature). The operational metrics (magnitude of the error, sign of the bias, associativity) provide an overall sense of model strengths and deficiencies, while apportioning the error to its constituent parts (bias, variance and covariance) can help to assess the nature and quality of the error. Each of the error components is analysed independently and apportioned to specific processes based on the corresponding timescale (long scale, synoptic, diurnal, and intra-day) using the error apportionment technique devised in the earlier phases of AQMEII. The application of the error apportionment method to the AQMEII Phase 3 simulations provides several key insights. In addition to reaffirming the strong impac
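The bias/variance/covariance apportionment mentioned above follows the standard decomposition of mean squared error, MSE = (mean_m - mean_o)² + (σ_m - σ_o)² + 2·σ_m·σ_o·(1 - r). The sketch below checks this identity on synthetic model and observation series; the series are illustrative, not AQMEII data.

```python
import numpy as np

def mse_apportionment(model, obs):
    """Split mean squared error into bias, variance, and covariance components."""
    bias2 = (model.mean() - obs.mean()) ** 2
    var_term = (model.std() - obs.std()) ** 2
    r = np.corrcoef(model, obs)[0, 1]
    cov_term = 2.0 * model.std() * obs.std() * (1.0 - r)
    return bias2, var_term, cov_term

rng = np.random.default_rng(0)
obs = rng.normal(40.0, 10.0, 1000)                  # synthetic hourly ozone, ppb
model = 0.8 * obs + rng.normal(5.0, 6.0, 1000)      # synthetic model time series

components = mse_apportionment(model, obs)
print(components, sum(components), np.mean((model - obs) ** 2))  # the pieces sum to the MSE
```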
Risk prediction for myocardial infarction via generalized functional regression models.
Ieva, Francesca; Paganoni, Anna M
2016-08-01
In this paper, we propose a generalized functional linear regression model for a binary outcome indicating the presence/absence of a cardiac disease, with multivariate functional data among the relevant predictors. In particular, the motivating aim is the analysis of electrocardiographic traces of patients whose pre-hospital electrocardiogram (ECG) has been sent to the 118 Dispatch Center of Milan (118 is the Italian toll-free number for emergencies) by life support personnel of the basic rescue units. The statistical analysis starts with a preprocessing of the ECGs treated as multivariate functional data. The signals are reconstructed from noisy observations. The biological variability is then removed by a nonlinear registration procedure based on landmarks. Then, in order to perform a data-driven dimension reduction, a multivariate functional principal component analysis is carried out on the variance-covariance matrix of the reconstructed and registered ECGs and their first derivatives. We use the scores of the principal component decomposition as covariates in a generalized linear model to predict the presence of the disease in a new patient. Hence, a new semi-automatic diagnostic procedure is proposed to estimate the risk of infarction (in the case of interest, the probability of being affected by Left Bundle Branch Block). The performance of this classification method is evaluated and compared with other methods proposed in the literature. Finally, the robustness of the procedure is checked via leave-j-out techniques. © The Author(s) 2013.
NASA Technical Reports Server (NTRS)
Crosson, William L.; Smith, Eric A.
1992-01-01
The behavior of in situ measurements of surface fluxes obtained during FIFE 1987 is examined by using correlative and spectral techniques in order to assess the significance of fluctuations on various time scales, from subdiurnal up to synoptic, intraseasonal, and annual scales. The objectives of this analysis are: (1) to determine which temporal scales have a significant impact on areally averaged fluxes and (2) to design a procedure for filtering an extended flux time series that preserves the basic diurnal features and longer time scales while removing high-frequency noise that cannot be attributed to site-induced variation. These objectives are accomplished through the use of a two-dimensional cross-time Fourier transform, which serves to separate processes inherently related to diurnal and subdiurnal variability from those which impact flux variations on longer time scales. A filtering procedure is desirable before the measurements are utilized as input to an experimental biosphere model, to ensure that model-based intercomparisons at multiple sites are uncontaminated by input variance not related to true site behavior. Analysis of the spectral decomposition indicates that subdiurnal time scales having periods shorter than 6 hours have little site-to-site consistency and therefore little impact on areally integrated fluxes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khawli, Toufik Al; Eppelt, Urs; Hermanns, Torsten
2016-06-08
In production industries, parameter identification, sensitivity analysis and multi-dimensional visualization are vital steps in the planning process for achieving optimal designs and gaining valuable information. Sensitivity analysis and visualization can help in identifying the most-influential parameters and quantify their contribution to the model output, reduce the model complexity, and enhance the understanding of the model behavior. Typically, this requires a large number of simulations, which can be both very expensive and time consuming when the simulation models are numerically complex and the number of parameter inputs increases. There are three main constituent parts in this work. The first part is to substitute the numerical, physical model by an accurate surrogate model, the so-called metamodel. The second part includes a multi-dimensional visualization approach for the visual exploration of metamodels. In the third part, the metamodel is used to provide the two global sensitivity measures: i) the Elementary Effect for screening the parameters, and ii) the variance decomposition method for calculating the Sobol indices that quantify both the main and interaction effects. The application of the proposed approach is illustrated with an industrial application with the goal of optimizing a drilling process using a Gaussian laser beam.
NASA Astrophysics Data System (ADS)
Khawli, Toufik Al; Gebhardt, Sascha; Eppelt, Urs; Hermanns, Torsten; Kuhlen, Torsten; Schulz, Wolfgang
2016-06-01
In production industries, parameter identification, sensitivity analysis and multi-dimensional visualization are vital steps in the planning process for achieving optimal designs and gaining valuable information. Sensitivity analysis and visualization can help in identifying the most-influential parameters and quantify their contribution to the model output, reduce the model complexity, and enhance the understanding of the model behavior. Typically, this requires a large number of simulations, which can be both very expensive and time consuming when the simulation models are numerically complex and the number of parameter inputs increases. There are three main constituent parts in this work. The first part is to substitute the numerical, physical model by an accurate surrogate model, the so-called metamodel. The second part includes a multi-dimensional visualization approach for the visual exploration of metamodels. In the third part, the metamodel is used to provide the two global sensitivity measures: i) the Elementary Effect for screening the parameters, and ii) the variance decomposition method for calculating the Sobol indices that quantify both the main and interaction effects. The application of the proposed approach is illustrated with an industrial application with the goal of optimizing a drilling process using a Gaussian laser beam.
Uncertainty importance analysis using parametric moment ratio functions.
Wei, Pengfei; Lu, Zhenzhou; Song, Jingwen
2014-02-01
This article presents a new importance analysis framework, called the parametric moment ratio function, for measuring the reduction of model output uncertainty when the distribution parameters of inputs are changed; the emphasis is put on the mean and variance ratio functions with respect to the variances of model inputs. The proposed concepts efficiently guide the analyst to achieve a targeted reduction in the model output mean and variance by operating on the variances of model inputs. Unbiased and progressive unbiased Monte Carlo estimators are also derived for the parametric mean and variance ratio functions, respectively. Only a single set of samples is needed to implement the proposed importance analysis with these estimators, so the computational cost is independent of the input dimensionality. An analytical test example with highly nonlinear behavior is introduced to illustrate the engineering significance of the proposed importance analysis technique and to verify the efficiency and convergence of the derived Monte Carlo estimators. Finally, the moment ratio function is applied to a planar 10-bar structure for achieving a targeted 50% reduction of the model output variance. © 2013 Society for Risk Analysis.
Decomposition and particle release of a carbon nanotube/epoxy nanocomposite at elevated temperatures
NASA Astrophysics Data System (ADS)
Schlagenhauf, Lukas; Kuo, Yu-Ying; Bahk, Yeon Kyoung; Nüesch, Frank; Wang, Jing
2015-11-01
Carbon nanotubes (CNTs) as fillers in nanocomposites have attracted significant attention, and one of their applications is as flame retardants. For such nanocomposites, possible release of CNTs at elevated temperatures after decomposition of the polymer matrix poses potential health threats. We investigated the airborne particle release from a decomposing multi-walled carbon nanotube (MWCNT)/epoxy nanocomposite in order to measure a possible release of MWCNTs. An experimental set-up was established that allows the samples to be decomposed in a furnace by exposure to increasing temperatures at a constant heating rate under an ambient air or nitrogen atmosphere. The particle analysis was performed with aerosol measurement devices and by transmission electron microscopy (TEM) of collected particles. Further, by the application of a thermal denuder, it was also possible to measure non-volatile particles only. The tested samples and their decomposition kinetics were characterized by thermogravimetric analysis (TGA). Particle release was investigated for different samples: a neat epoxy, nanocomposites with 0.1 and 1 wt% MWCNTs, and nanocomposites with functionalized MWCNTs. The results showed that the added MWCNTs had little effect on the decomposition kinetics of the investigated samples, but the weight of the remaining residues after decomposition was influenced significantly. The measurements with decomposition in different atmospheres showed the release of a higher number of particles at temperatures below 300 °C when air was used. Analysis of collected particles by TEM revealed that no detectable amount of MWCNTs was released, but micrometer-sized fibrous particles were collected.
Ingredients of the Eddy Soup: A Geometric Decomposition of Eddy-Mean Flow Interactions
NASA Astrophysics Data System (ADS)
Waterman, S.; Lilly, J. M.
2014-12-01
Understanding eddy-mean flow interactions is a long-standing problem in geophysical fluid dynamics with modern relevance to the task of representing eddy effects in coarse-resolution models while preserving their dependence on the underlying dynamics of the flow field. Exploiting the recognition that the velocity covariance matrix (eddy stress tensor) that describes eddy fluxes also encodes information about eddy size, shape and orientation, through its geometric representation in the form of the so-called variance ellipse, suggests a potentially fruitful way forward. Here we present a new framework that describes eddy-mean flow interactions in terms of a geometric description of the eddy motion, and illustrate it with an application to an unstable jet. Specifically we show that the eddy vorticity flux divergence F, a key dynamical quantity describing the average effect of fluctuations on the time-mean flow, may be decomposed into two components with distinct geometric interpretations: 1. variations in variance ellipse orientation; and 2. variations in the anisotropic part of the eddy kinetic energy, a function of the variance ellipse size and shape. Application of the divergence theorem shows that F integrated over a region is explained entirely by variations in these two quantities around the region's periphery. This framework has the potential to offer new insights into eddy-mean flow interactions in a number of ways. It identifies the ingredients of the eddy motion that have a mean flow forcing effect, it links eddy effects to spatial patterns of variance ellipse geometry that can suggest the mechanisms underpinning these effects, and finally it illustrates the importance of resolving eddy shape and orientation, and not just eddy size/energy, to accurately represent eddy feedback effects. These concepts will be both discussed and illustrated.
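To make the geometric quantities concrete, the sketch below computes the variance ellipse orientation and the anisotropic part of the eddy kinetic energy from the horizontal velocity covariances, using the standard eigen-geometry of a 2×2 covariance matrix; the numerical values are illustrative.

```python
import numpy as np

def variance_ellipse(uu, vv, uv):
    """Geometry of the eddy variance ellipse from velocity covariances.

    uu, vv, uv : time-mean covariances <u'u'>, <v'v'>, <u'v'>
    Returns the eddy kinetic energy K, the anisotropic energy M, the ellipse
    orientation theta (radians), and the semi-major/semi-minor axis lengths.
    """
    K = 0.5 * (uu + vv)                          # (isotropic) eddy kinetic energy
    M = np.hypot(0.5 * (uu - vv), uv)            # anisotropic part of the eddy energy
    theta = 0.5 * np.arctan2(2.0 * uv, uu - vv)  # variance ellipse orientation
    a, b = np.sqrt(K + M), np.sqrt(K - M)        # ellipse semi-axes
    return K, M, theta, a, b

# Illustrative covariance values at a single grid point
print(variance_ellipse(uu=4.0, vv=1.0, uv=1.2))
```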
s-core network decomposition: A generalization of k-core analysis to weighted networks
NASA Astrophysics Data System (ADS)
Eidsaa, Marius; Almaas, Eivind
2013-12-01
A broad range of systems spanning biology, technology, and social phenomena may be represented and analyzed as complex networks. Recent studies of such networks using k-core decomposition have uncovered groups of nodes that play important roles. Here, we present s-core analysis, a generalization of k-core (or k-shell) analysis to complex networks where the links have different strengths or weights. We demonstrate the s-core decomposition approach on two random networks (ER and configuration model with scale-free degree distribution) where the link weights are (i) random, (ii) correlated, and (iii) anticorrelated with the node degrees. Finally, we apply the s-core decomposition approach to the protein-interaction network of the yeast Saccharomyces cerevisiae in the context of two gene-expression experiments: oxidative stress in response to cumene hydroperoxide (CHP), and fermentation stress response (FSR). We find that the innermost s-cores are (i) different from innermost k-cores, (ii) different for the two stress conditions CHP and FSR, and (iii) enriched with proteins whose biological functions give insight into how yeast manages these specific stresses.
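A minimal sketch of the s-core pruning rule for weighted graphs, mirroring k-core pruning but using node strength (the sum of incident link weights); it assumes the networkx package and is a generic illustration rather than the authors' exact implementation.

```python
import networkx as nx

def s_core(G, s, weight="weight"):
    """Return the s-core of a weighted graph: the maximal subgraph in which
    every node has strength (weighted degree) >= s."""
    H = G.copy()
    while True:
        weak = [n for n, strength in H.degree(weight=weight) if strength < s]
        if not weak:
            return H
        H.remove_nodes_from(weak)

# Small illustrative weighted graph
G = nx.Graph()
G.add_weighted_edges_from([("a", "b", 1.0), ("b", "c", 2.0),
                           ("c", "d", 2.5), ("d", "b", 1.5), ("a", "c", 0.5)])
print(sorted(s_core(G, s=3.0).nodes()))   # node "a" is pruned, leaving {b, c, d}
```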
Catalytic and inhibiting effects of lithium peroxide and hydroxide on sodium chlorate decomposition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cannon, J.C.; Zhang, Y.
1995-09-01
Chemical oxygen generators based on sodium chlorate and lithium perchlorate are used in airplanes, submarines, diving, and mine rescue. Catalytic decomposition of sodium chlorate in the presence of cobalt oxide, lithium peroxide, and lithium hydroxide is studied using thermal gravimetric analysis. Lithium peroxide and hydroxide are both moderately active catalysts for the decomposition of sodium chlorate when used alone, and inhibitors when used with the more active catalyst cobalt oxide.
Perrault, Katelynn A; Stefanuto, Pierre-Hugues; Stuart, Barbara H; Rai, Tapan; Focant, Jean-François; Forbes, Shari L
2015-09-01
Cadaver-detection dogs use volatile organic compounds (VOCs) to search for human remains including those deposited on or beneath soil. Soil can act as a sink for VOCs, causing loading of decomposition VOCs in the soil following soft tissue decomposition. The objective of this study was to chemically profile decomposition VOCs from surface decomposition sites after remains were removed from their primary location. Pig carcasses were used as human analogues and were deposited on a soil surface to decompose for 3 months. The remains were then removed from each site and VOCs were collected from the soil for 7 months thereafter and analyzed by comprehensive two-dimensional gas chromatography-time-of-flight mass spectrometry (GC×GC-TOFMS). Decomposition VOCs diminished within 6 weeks and hydrocarbons were the most persistent compound class. Decomposition VOCs could still be detected in the soil after 7 months using Principal Component Analysis. This study demonstrated that the decomposition VOC profile, while detectable by GC×GC-TOFMS in the soil, was considerably reduced and altered in composition upon removal of remains. Chemical reference data is provided by this study for future investigations of canine alert behavior in scenarios involving scattered or scavenged remains.
Implementation of a Parallel Kalman Filter for Stratospheric Chemical Tracer Assimilation
NASA Technical Reports Server (NTRS)
Chang, Lang-Ping; Lyster, Peter M.; Menard, R.; Cohn, S. E.
1998-01-01
A Kalman filter for the assimilation of long-lived atmospheric chemical constituents has been developed for two-dimensional transport models on isentropic surfaces over the globe. An important attribute of the Kalman filter is that it calculates error covariances of the constituent fields using the tracer dynamics. Consequently, the current Kalman-filter assimilation is a five-dimensional problem (coordinates of two points and time), and it can only be handled on computers with large memory and high floating point speed. In this paper, an implementation of the Kalman filter for distributed-memory, message-passing parallel computers is discussed. Two approaches were studied: an operator decomposition and a covariance decomposition. The latter was found to be more scalable than the former, and it possesses the property that the dynamical model does not need to be parallelized, which is of considerable practical advantage. This code is currently used to assimilate constituent data retrieved by limb sounders on the Upper Atmosphere Research Satellite. Tests of the code examined the variance transport and observability properties. Aspects of the parallel implementation, some timing results, and a brief discussion of the physical results will be presented.
A Mean variance analysis of arbitrage portfolios
NASA Astrophysics Data System (ADS)
Fang, Shuhong
2007-03-01
Based on the careful analysis of the definition of arbitrage portfolio and its return, the author presents a mean-variance analysis of the return of arbitrage portfolios, which implies that Korkie and Turtle's results ( B. Korkie, H.J. Turtle, A mean-variance analysis of self-financing portfolios, Manage. Sci. 48 (2002) 427-443) are misleading. A practical example is given to show the difference between the arbitrage portfolio frontier and the usual portfolio frontier.
The impact of oil price on Malaysian sector indices
NASA Astrophysics Data System (ADS)
Ismail, Mohd Tahir; Luan, Yeap Pei; Ee, Ong Joo
2015-12-01
In this paper, a vector error correction model (VECM) has been utilized to model the dynamic relationships between the world crude oil price and the sector indices of Malaysia. The sector indices collected cover the period January 1998 to December 2013. Surprisingly, our investigation shows that oil price changes do not Granger-cause any of the Malaysian sector indices. However, the Food Producer and Utilities sector indices are found to Granger-cause changes in the world crude oil price. Furthermore, the results of the variance decomposition show that a very high percentage of the variance of the world crude oil price is explained by the oil price itself over the 12-month horizon, with only a small impact from the other sector indices.
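The variance decomposition referred to above is a forecast error variance decomposition (FEVD). As a generic illustration, the sketch below computes an FEVD from a simple VAR fitted with statsmodels on synthetic return series; the study itself used a VECM, and the series names and lag order here are assumptions.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

# Synthetic stand-ins for monthly log-returns of the oil price and two sector indices
rng = np.random.default_rng(0)
data = pd.DataFrame(rng.standard_normal((192, 3)) * 0.03,
                    columns=["oil_price", "food_producer", "utilities"])

res = VAR(data).fit(2)      # VAR(2) on the synthetic series (lag order assumed)
fevd = res.fevd(12)         # 12-step-ahead forecast error variance decomposition
fevd.summary()              # shares of each variable's forecast error variance
```

In practice the level series would first be tested for cointegration and modeled jointly, which is what the VECM framework used in the study handles.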
Applications of non-parametric statistics and analysis of variance on sample variances
NASA Technical Reports Server (NTRS)
Myers, R. H.
1981-01-01
Nonparametric methods that are available for NASA-type applications are discussed. An attempt is made to survey what can be used, to offer recommendations as to when each method would be applicable, and to compare the methods, when possible, with the usual normal-theory procedures that are available for the Gaussian analog. It is important here to point out the hypotheses that are being tested, the assumptions that are being made, and the limitations of the nonparametric procedures. The appropriateness of performing analysis of variance on sample variances is also discussed and studied. This procedure is followed in several NASA simulation projects. On the surface this would appear to be a reasonably sound procedure. However, the difficulties involved center around the normality problem and the basic homogeneous-variance assumption that is made in usual analysis of variance problems. These difficulties are discussed and guidelines are given for using the methods.
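As a concrete illustration of the kinds of checks discussed here, the sketch below contrasts a classical one-way ANOVA with a nonparametric alternative and a test of the homogeneous-variance assumption, using SciPy on synthetic groups with unequal variances; it is not drawn from the NASA simulation projects mentioned.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Three synthetic groups with unequal variances
g1 = rng.normal(0.0, 1.0, 30)
g2 = rng.normal(0.3, 2.0, 30)
g3 = rng.normal(0.6, 4.0, 30)

print(stats.f_oneway(g1, g2, g3))   # classical one-way ANOVA (assumes equal variances)
print(stats.kruskal(g1, g2, g3))    # nonparametric Kruskal-Wallis alternative
print(stats.levene(g1, g2, g3))     # checks the homogeneous-variance assumption
```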
NASA Astrophysics Data System (ADS)
Tang, Kunkun; Massa, Luca; Wang, Jonathan; Freund, Jonathan B.
2018-05-01
We introduce an efficient non-intrusive surrogate-based methodology for global sensitivity analysis and uncertainty quantification. Modified covariance-based sensitivity indices (mCov-SI) are defined for outputs that reflect correlated effects. The overall approach is applied to simulations of a complex plasma-coupled combustion system with disparate uncertain parameters in sub-models for chemical kinetics and a laser-induced breakdown ignition seed. The surrogate is based on an analysis of variance (ANOVA) expansion, as widely used in statistics, with orthogonal polynomials representing the ANOVA subspaces and a polynomial dimensional decomposition (PDD) representing its multi-dimensional components. The coefficients of the PDD expansion are obtained using a least-squares regression, which both avoids the direct computation of high-dimensional integrals and affords an attractive flexibility in choosing sampling points. This facilitates importance sampling using a Bayesian calibrated posterior distribution, which is fast and thus particularly advantageous in common practical cases, such as our large-scale demonstration, for which the asymptotic convergence properties of polynomial expansions cannot be realized due to computational expense. Effort, instead, is focused on efficient finite-resolution sampling. Standard covariance-based sensitivity indices (Cov-SI) are employed to account for correlation of the uncertain parameters. The magnitude of the Cov-SI is unfortunately unbounded, which can produce extremely large indices that limit their utility. The mCov-SI are therefore proposed in order to bound this magnitude to [0, 1]. The polynomial expansion is coupled with an adaptive ANOVA strategy to provide an accurate surrogate as the union of several low-dimensional spaces, avoiding the typical computational cost of a high-dimensional expansion. It is also adaptively simplified according to the relative contribution of the different polynomials to the total variance. The approach is demonstrated for a laser-induced turbulent combustion simulation model, which includes parameters with correlated effects.
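A stripped-down version of the surrogate construction described above: polynomial chaos coefficients are fitted by least-squares regression on sampled model evaluations, and first-order Sobol-type indices are read off the squared coefficients. The sketch uses orthonormal Legendre polynomials for two independent uniform inputs, omits the adaptive ANOVA strategy and the covariance-based indices for correlated parameters, and every parameter choice and the toy model are illustrative assumptions.

```python
import numpy as np
from numpy.polynomial.legendre import Legendre

def psi(n, x):
    """Orthonormal Legendre polynomial of degree n for a uniform input on [-1, 1]."""
    return np.sqrt(2 * n + 1) * Legendre.basis(n)(x)

def model(x1, x2):
    """Toy model standing in for the expensive simulation (illustrative only)."""
    return x1 + 0.5 * x1 * x2 + 0.3 * x2**2

# Sample the two uniform inputs and evaluate the model
rng = np.random.default_rng(0)
N = 500
X = rng.uniform(-1.0, 1.0, size=(N, 2))
y = model(X[:, 0], X[:, 1])

# Tensor-product multi-indices up to degree 2 per input
alphas = [(i, j) for i in range(3) for j in range(3)]
Phi = np.column_stack([psi(i, X[:, 0]) * psi(j, X[:, 1]) for i, j in alphas])

# Least-squares regression for the PCE coefficients
c, *_ = np.linalg.lstsq(Phi, y, rcond=None)

# Variance contributions follow from the squared coefficients of the orthonormal basis
var_total = sum(c[k]**2 for k, a in enumerate(alphas) if a != (0, 0))
S1 = sum(c[k]**2 for k, a in enumerate(alphas) if a[0] > 0 and a[1] == 0) / var_total
S2 = sum(c[k]**2 for k, a in enumerate(alphas) if a[1] > 0 and a[0] == 0) / var_total
print(f"first-order Sobol indices: S1 ~ {S1:.2f}, S2 ~ {S2:.2f}")
```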
NASA Astrophysics Data System (ADS)
Maina, Fadji Zaouna; Guadagnini, Alberto
2018-01-01
We study the contribution of typically uncertain subsurface flow parameters to gravity changes that can be recorded during pumping tests in unconfined aquifers. We do so in the framework of a Global Sensitivity Analysis and quantify the effects of uncertainty of such parameters on the first four statistical moments of the probability distribution of gravimetric variations induced by the operation of the well. System parameters are grouped into two main categories, respectively, governing groundwater flow in the unsaturated and saturated portions of the domain. We ground our work on the three-dimensional analytical model proposed by Mishra and Neuman (2011), which fully takes into account the richness of the physical process taking place across the unsaturated and saturated zones and storage effects in a finite radius pumping well. The relative influence of model parameter uncertainties on drawdown, moisture content, and gravity changes are quantified through (a) the Sobol' indices, derived from a classical decomposition of variance and (b) recently developed indices quantifying the relative contribution of each uncertain model parameter to the (ensemble) mean, skewness, and kurtosis of the model output. Our results document (i) the importance of the effects of the parameters governing the unsaturated flow dynamics on the mean and variance of local drawdown and gravity changes; (ii) the marked sensitivity (as expressed in terms of the statistical moments analyzed) of gravity changes to the employed water retention curve model parameter, specific yield, and storage, and (iii) the influential role of hydraulic conductivity of the unsaturated and saturated zones to the skewness and kurtosis of gravimetric variation distributions. The observed temporal dynamics of the strength of the relative contribution of system parameters to gravimetric variations suggest that gravity data have a clear potential to provide useful information for estimating the key hydraulic parameters of the system.
Determining Sample Sizes for Precise Contrast Analysis with Heterogeneous Variances
ERIC Educational Resources Information Center
Jan, Show-Li; Shieh, Gwowen
2014-01-01
The analysis of variance (ANOVA) is one of the most frequently used statistical analyses in practical applications. Accordingly, the single and multiple comparison procedures are frequently applied to assess the differences among mean effects. However, the underlying assumption of homogeneous variances may not always be tenable. This study…
Non-linear analytic and coanalytic problems ( L_p-theory, Clifford analysis, examples)
NASA Astrophysics Data System (ADS)
Dubinskii, Yu A.; Osipenko, A. S.
2000-02-01
Two kinds of new mathematical model of variational type are put forward: non-linear analytic and coanalytic problems. The formulation of these non-linear boundary-value problems is based on a decomposition of the complete scale of Sobolev spaces into the "orthogonal" sum of analytic and coanalytic subspaces. A similar decomposition is considered in the framework of Clifford analysis. Explicit examples are presented.
Thermal decomposition behavior of nano/micro bimodal feedstock with different solids loading
NASA Astrophysics Data System (ADS)
Oh, Joo Won; Lee, Won Sik; Park, Seong Jin
2018-01-01
Debinding is one of the most critical processes in powder injection molding. Parts in the debinding process are vulnerable to defect formation, and the long processing time of debinding decreases the production rate of the whole process. In order to determine the optimal conditions for the debinding process, the decomposition behavior of the feedstock should be understood. Since nano powder affects the decomposition behavior of the feedstock, its effect needs to be investigated for nano/micro bimodal feedstocks. In this research, the effect of nano powder on the decomposition behavior of nano/micro bimodal feedstock has been studied. Bimodal powders were fabricated with different ratios of nano powder, and the critical solids loading of each powder was measured with a torque rheometer. Three different feedstocks were fabricated for each powder depending on the solids loading condition. Thermogravimetric analysis (TGA) was carried out to analyze the thermal decomposition behavior of the feedstocks, and the decomposition activation energy was calculated. The results indicated that the nano powder had a limited effect on feedstocks with solids loadings below the optimal range, whereas it strongly influenced the decomposition behavior at the optimal solids loading by causing polymer chain scission under high viscosity.
Buchanan, Piers; Soper, Alan K; Thompson, Helen; Westacott, Robin E; Creek, Jefferson L; Hobson, Greg; Koh, Carolyn A
2005-10-22
Neutron diffraction with HD isotope substitution has been used to study the formation and decomposition of the methane clathrate hydrate. Using this atomistic technique coupled with simultaneous gas consumption measurements, we have successfully tracked the formation of the sI methane hydrate from a water/gas mixture and then the subsequent decomposition of the hydrate from initiation to completion. These studies demonstrate that the application of neutron diffraction with simultaneous gas consumption measurements provides a powerful method for studying the clathrate hydrate crystal growth and decomposition. We have also used neutron diffraction to examine the water structure before the hydrate growth and after the hydrate decomposition. From the neutron-scattering curves and the empirical potential structure refinement analysis of the data, we find that there is no significant difference between the structure of water before the hydrate formation and the structure of water after the hydrate decomposition. Nor is there any significant change to the methane hydration shell. These results are discussed in the context of widely held views on the existence of memory effects after the hydrate decomposition.
The Importance of Variance in Statistical Analysis: Don't Throw Out the Baby with the Bathwater.
ERIC Educational Resources Information Center
Peet, Martha W.
This paper analyzes what happens to the effect size of a given dataset when the variance is removed by categorization for the purpose of applying "OVA" methods (analysis of variance, analysis of covariance). The dataset is from a classic study by Holzinger and Swineford (1939) in which more than 20 ability tests were administered to 301…
Genomic Analysis of Complex Microbial Communities in Wounds
2012-01-01
…thoroughly in the ecology literature. Permutation Multivariate Analysis of Variance (PerMANOVA): We used PerMANOVA to test the null hypothesis of no difference between the bacterial communities found within a single wound compared to those from different patients (α = 0.05). PerMANOVA is a permutation-based version of the multivariate analysis of variance (MANOVA). PerMANOVA uses the distances between samples to partition variance and…
NASA Astrophysics Data System (ADS)
Sugiura, Shinji; Ikeda, Hiroshi
2014-03-01
The decomposition of vertebrate carcasses is an important ecosystem function. Soft tissues of dead vertebrates are rapidly decomposed by diverse animals. However, decomposition of hard tissues such as hairs and feathers is much slower because only a few animals can digest keratin, a protein that is concentrated in hairs and feathers. Although beetles of the family Trogidae are considered keratin feeders, their ecological function has rarely been explored. Here, we investigated the keratin-decomposition function of trogid beetles in heron-breeding colonies where keratin was frequently supplied as feathers. Three trogid species were collected from the colonies and observed feeding on heron feathers under laboratory conditions. We also measured the nitrogen (δ15N) and carbon (δ13C) stable isotope ratios of two trogid species that were maintained on a constant diet (feathers from one heron individual) during 70 days under laboratory conditions. We compared the isotopic signatures of the trogids with the feathers to investigate isotopic shifts from the feathers to the consumers for δ15N and δ13C. We used mixing models (MixSIR and SIAR) to estimate the main diets of individual field-collected trogid beetles. The analysis indicated that heron feathers were more important as food for trogid beetles than were soft tissues under field conditions. Together, the feeding experiment and stable isotope analysis provided strong evidence of keratin decomposition by trogid beetles.
Decoding the Secrets of Carbon Preservation and GHG Flux in Lower-Latitude Peatlands
NASA Astrophysics Data System (ADS)
Richardson, C. J.; Flanagan, N. E.; Wang, H.; Ho, M.; Hodgkins, S. B.; Cooper, W. T.; Chanton, J.; Winton, S.
2017-12-01
The mechanisms regulating peat decomposition and carbon storage in peatlands are poorly understood, particularly with regard to the importance of the biochemical compounds produced by different plant species and, in turn, the controls that peat quality exerts on C storage and GHG flux. To examine the role of carbon quality in C accretion in northern compared to tropical peatlands, we completed field and lab studies on bog peats collected in Minnesota, North Carolina, Florida and Peru to answer three fundamental questions: 1) is tropical peat more recalcitrant than northern peat? 2) does the addition of aromatic and phenolic C compounds increase towards the tropics? 3) do differences in the chemical structure of organic matter explain variances in carbon storage and GHG flux in tropical versus northern peatlands? Our main hypothesis is that high concentrations of phenolics and aromatic C compounds produced in shrub and tree plant communities, coupled with the fire production of biochar aromatics in peatlands, may provide a dual biogeochemical latch mechanism controlling microbial decomposition of peat even under higher temperatures and seasonal drought. By comparing the peat soil cores collected from the MN peat bogs, NC Pocosins, FL Everglades and Peru palm swamps, we find that the soils in the shrub-dominant Pocosins contain the highest phenolics, which microbial studies indicate have the strongest resistance to microbial decomposition. A chemical comparison of plant-driven peat carbon quality along a north-to-south latitudinal gradient indicates that tropical peatlands have higher aromatic compounds, and enhanced phenolics, especially after light fires, which enhances C storage and affects GHG flux across the latitudinal gradient.
1974-06-17
Fragmentary report excerpt; only portions are recoverable: contents entries for "Burning Rate Modifiers" (D.R. Dillehay) and "Spectroscopic Analysis of Azide Decomposition Products", and text fragments noting that decomposition of sodium nitrate produces the gas to blow the …, that … decreases the thermal conductivity of the basic binary, and that Class 2 compounds, consisting of manganese oxides, catalyze the normal decomposition of …
Circular Mixture Modeling of Color Distribution for Blind Stain Separation in Pathology Images.
Li, Xingyu; Plataniotis, Konstantinos N
2017-01-01
In digital pathology, to address color variation and histological component colocalization in pathology images, stain decomposition is usually performed prior to spectral normalization and tissue component segmentation. This paper examines the problem of stain decomposition, which is naturally a nonnegative matrix factorization (NMF) problem in algebra, and introduces a systematic and analytical solution consisting of a circular color analysis module and an NMF-based computation module. Unlike the paradigm of existing stain decomposition algorithms, where stain proportions are computed from estimated stain spectra directly using a matrix inverse operation, the introduced solution estimates stain spectra and stain depths individually via probabilistic reasoning. Since the proposed method pays extra attention to achromatic pixels in color analysis and to stain co-occurrence in pixel clustering, it achieves consistent and reliable stain decomposition with minimal decomposition residue. In particular, aware of the periodic and angular nature of hue, we propose the use of a circular von Mises mixture model to analyze the hue distribution, and provide a complete color-based pixel soft-clustering solution to address color mixing introduced by stain overlap. This innovation, combined with saturation-weighted computation, makes our approach effective for weak stains and broad-spectrum stains. Extensive experimentation on multiple public pathology datasets suggests that our approach outperforms state-of-the-art blind stain separation methods in terms of decomposition effectiveness.
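For readers unfamiliar with the algebraic setting, the sketch below shows the generic NMF formulation of stain separation on optical-density-transformed RGB pixels using scikit-learn; it omits the paper's circular von Mises hue analysis and saturation weighting, and the synthetic pixel data are purely illustrative.

```python
import numpy as np
from sklearn.decomposition import NMF

# Synthetic RGB "pathology" pixels in [0, 255] (illustrative stand-in for a real image)
rng = np.random.default_rng(0)
rgb = rng.integers(30, 220, size=(64 * 64, 3)).astype(float)

# Beer-Lambert optical density transform: OD = -log10(I / I0), with I0 = 256
od = -np.log10((rgb + 1.0) / 256.0)

# Stain separation as nonnegative matrix factorization: OD ~ depths @ spectra
nmf = NMF(n_components=2, init="nndsvd", max_iter=500, random_state=0)
depths = nmf.fit_transform(od)      # per-pixel stain depths (concentrations)
spectra = nmf.components_           # stain color vectors in OD space
print(spectra)
```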
Data analysis using a combination of independent component analysis and empirical mode decomposition
NASA Astrophysics Data System (ADS)
Lin, Shih-Lin; Tung, Pi-Cheng; Huang, Norden E.
2009-06-01
A combination of independent component analysis and empirical mode decomposition (ICA-EMD) is proposed in this paper to analyze low signal-to-noise ratio data. The advantages of the ICA-EMD combination are as follows: ICA needs only a few sensor channels to separate the original source from unwanted noise, and EMD can effectively separate the data into its constituent parts. The case studies reported here involve original sources contaminated by white Gaussian noise. The simulation results show that the ICA-EMD combination is an effective data analysis tool.
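For readers who want to try the general ICA-EMD idea, here is a minimal sketch under stated assumptions: it uses scikit-learn's FastICA and the PyEMD package (the EMD-signal distribution) for empirical mode decomposition, and a synthetic two-sensor mixture rather than the authors' case studies; the mixing coefficients and signal forms are invented for illustration.

```python
import numpy as np
from sklearn.decomposition import FastICA
from PyEMD import EMD  # assumed dependency: pip install EMD-signal

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 2000)
source = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)
noise = rng.normal(scale=1.0, size=t.size)            # white Gaussian noise

# Two "sensor" observations: different mixtures of source and noise
X = np.c_[source + 0.8 * noise, 0.6 * source + noise]

# Step 1: ICA separates the low-SNR source from the noise component
ica = FastICA(n_components=2, random_state=0)
components = ica.fit_transform(X)                      # (n_samples, 2)
best = np.argmax([abs(np.corrcoef(c, source)[0, 1]) for c in components.T])
recovered = components[:, best]

# Step 2: EMD splits the recovered source into intrinsic mode functions
imfs = EMD().emd(recovered, t)                          # (n_imfs, n_samples)
```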
A New View of Earthquake Ground Motion Data: The Hilbert Spectral Analysis
NASA Technical Reports Server (NTRS)
Huang, Norden; Busalacchi, Antonio J. (Technical Monitor)
2000-01-01
A brief description of the newly developed Empirical Mode Decomposition (EMD) and Hilbert Spectral Analysis (HSA) method will be given. The decomposition is adaptive and can be applied to both nonlinear and nonstationary data. An example of the method applied to a sample earthquake record will be given. The results indicate that low-frequency components, totally missed by Fourier analysis, are clearly identified by the new method. Comparisons with wavelet and windowed Fourier analysis show that the new method offers much better temporal and frequency resolution.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Jin-jian; Yancheng Teachers College, Yancheng 224002; Liu, Zu-Liang, E-mail: liuzl@mail.njust.edu.cn
2013-04-15
An energetic lead(II) coordination polymer based on the ligand ANPyO has been synthesized and its crystal structure has been determined. The polymer was characterized by FT-IR spectroscopy, elemental analysis, DSC and TG-DTG techniques. Thermal analysis shows that there are one endothermic process and two exothermic decomposition stages in the temperature range of 50–600 °C, with a final residue of 57.09%. The non-isothermal kinetics of the main exothermic decomposition have also been studied using the Kissinger and Ozawa–Doyle methods, and the apparent activation energy is calculated as 195.2 kJ/mol. Furthermore, DSC measurements show that the polymer has a significant catalytic effect on the thermal decomposition of ammonium perchlorate. - Graphical abstract: An energetic lead(II) coordination polymer of ANPyO has been synthesized, structurally characterized and its properties tested. Highlights: ► We have synthesized and characterized an energetic lead(II) coordination polymer. ► We have measured its molecular structure and thermal decomposition. ► It has a significant catalytic effect on the thermal decomposition of AP.
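For context on how such apparent activation energies are commonly extracted (a standard textbook form, not taken from this paper's data), the Kissinger relation links the DSC peak temperature T_p observed at several heating rates β to the activation energy E_a:

```latex
\ln\!\left(\frac{\beta}{T_p^{2}}\right) \;=\; \ln\!\left(\frac{A R}{E_a}\right) \;-\; \frac{E_a}{R\,T_p}
```

Plotting ln(β/T_p²) against 1/T_p over several heating rates gives a line whose slope is −E_a/R; the Ozawa–Doyle method uses an analogous linear plot of ln β versus 1/T_p.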
NASA Astrophysics Data System (ADS)
Fujii, Hidemichi; Okamoto, Shunsuke; Kagawa, Shigemi; Managi, Shunsuke
2017-12-01
This study investigated the changes in the toxicity of chemical emissions from the US industrial sector over the 1998-2009 period. Specifically, we employed a multiregional input-output analysis framework and integrated a supply-side index decomposition analysis (IDA) with a demand-side structural decomposition analysis (SDA) to clarify the main drivers of changes in the toxicity of production- and consumption-based chemical emissions. The results showed that toxic emissions from the US industrial sector decreased by 83% over the studied period because of pollution abatement efforts adopted by US industries. A variety of pollution abatement efforts were used by different industries, and cleaner production in the mining sector and the use of alternative materials in the manufacture of transportation equipment represented the most important efforts.
A Three-way Decomposition of a Total Effect into Direct, Indirect, and Interactive Effects
VanderWeele, Tyler J.
2013-01-01
Recent theory in causal inference has provided concepts for mediation analysis and effect decomposition that allow one to decompose a total effect into a direct and an indirect effect. Here, it is shown that what is often taken as an indirect effect can in fact be further decomposed into a “pure” indirect effect and a mediated interactive effect, thus yielding a three-way decomposition of a total effect (direct, indirect, and interactive). This three-way decomposition applies to difference scales and also to additive ratio scales and additive hazard scales. Assumptions needed for the identification of each of these three effects are discussed and simple formulae are given for each when regression models allowing for interaction are used. The three-way decomposition is illustrated by examples from genetic and perinatal epidemiology, and discussion is given to what is gained over the traditional two-way decomposition into simply a direct and an indirect effect. PMID:23354283
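A compact way to state the identity on the difference scale, in the usual counterfactual notation (Y_{am} is the outcome under exposure a and mediator value m, and M_a is the mediator under exposure a), is sketched below; this is the standard formulation rather than a quotation from the paper:

```latex
\begin{aligned}
\mathrm{TE}  &= \mathbb{E}\!\left[Y_{1M_1} - Y_{0M_0}\right] \\
\mathrm{NDE} &= \mathbb{E}\!\left[Y_{1M_0} - Y_{0M_0}\right] && \text{(natural direct effect)} \\
\mathrm{PIE} &= \mathbb{E}\!\left[Y_{0M_1} - Y_{0M_0}\right] && \text{(pure indirect effect)} \\
\mathrm{INT}_{\mathrm{med}} &= \mathrm{TE} - \mathrm{NDE} - \mathrm{PIE} && \text{(mediated interaction)}
\end{aligned}
```

By construction, the mediated interaction is the remainder after the direct and pure indirect effects are subtracted from the total effect, so the three components sum exactly to the total effect.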
Stokes, Kathryn L; Forbes, Shari L; Tibbett, Mark
2013-05-01
Taphonomic studies regularly employ animal analogues for human decomposition due to ethical restrictions relating to the use of human tissue. However, the validity of using animal analogues in soil decomposition studies is still questioned. This study compared the decomposition of skeletal muscle tissues (SMTs) from human (Homo sapiens), pork (Sus scrofa), beef (Bos taurus), and lamb (Ovis aries) interred in soil microcosms. Fixed interval samples were collected from the SMT for microbial activity and mass tissue loss determination; samples were also taken from the underlying soil for pH, electrical conductivity, and nutrient (potassium, phosphate, ammonium, and nitrate) analysis. The overall patterns of nutrient fluxes and chemical changes in nonhuman SMT and the underlying soil followed that of human SMT. Ovine tissue was the most similar to human tissue in many of the measured parameters. Although no single analogue was a precise predictor of human decomposition in soil, all models offered close approximations in decomposition dynamics. © 2013 American Academy of Forensic Sciences.
An Analysis of Variance Framework for Matrix Sampling.
ERIC Educational Resources Information Center
Sirotnik, Kenneth
Significant cost savings can be achieved with the use of matrix sampling in estimating population parameters from psychometric data. The statistical design is intuitively simple, using the framework of the two-way classification analysis of variance technique. For example, the mean and variance are derived from the performance of a certain grade…
Data-driven process decomposition and robust online distributed modelling for large-scale processes
NASA Astrophysics Data System (ADS)
Shu, Zhang; Lijuan, Li; Lijuan, Yao; Shipin, Yang; Tao, Zou
2018-02-01
With the increasing attention to networked control, system decomposition and distributed models are of significant importance in the implementation of model-based control strategies. In this paper, a data-driven system decomposition and online distributed subsystem modelling algorithm is proposed for large-scale chemical processes. The key controlled variables are first partitioned into several clusters by the affinity propagation clustering algorithm; each cluster can be regarded as a subsystem. The inputs of each subsystem are then selected by offline canonical correlation analysis between all process variables and the subsystem's controlled variables. Process decomposition is thus realised after the screening of input and output variables. Once the system decomposition is finished, online subsystem modelling can be carried out by recursively renewing the samples block-wise. The proposed algorithm was applied to the Tennessee Eastman process and its validity was verified.
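As an illustration of the two screening steps described above, here is a minimal sketch using scikit-learn's AffinityPropagation and CCA on synthetic data; the similarity measure (Euclidean by default, not necessarily the paper's choice), the toy variables, and the number of retained inputs are all assumptions.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 12))                        # candidate input variables
Y = X[:, :4] @ rng.normal(size=(4, 6)) + 0.1 * rng.normal(size=(500, 6))  # controlled variables

# Step 1: cluster controlled variables by the similarity of their trajectories;
# each cluster of columns of Y is treated as one subsystem
ap = AffinityPropagation(random_state=0).fit(Y.T)     # samples = variables
subsystems = {k: np.where(ap.labels_ == k)[0] for k in np.unique(ap.labels_)}

# Step 2: for each subsystem, rank candidate inputs by canonical correlation
for k, cols in subsystems.items():
    cca = CCA(n_components=1).fit(X, Y[:, cols])
    scores = np.abs(cca.x_weights_[:, 0])             # crude input-relevance score
    selected = np.argsort(scores)[::-1][:4]           # keep the top few inputs
    print(f"subsystem {k}: controlled vars {cols.tolist()}, inputs {selected.tolist()}")
```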
Tanner-Smith, Emily E; Tipton, Elizabeth
2014-03-01
Methodologists have recently proposed robust variance estimation as one way to handle dependent effect sizes in meta-analysis. Software macros for robust variance estimation in meta-analysis are currently available for Stata (StataCorp LP, College Station, TX, USA) and spss (IBM, Armonk, NY, USA), yet there is little guidance for authors regarding the practical application and implementation of those macros. This paper provides a brief tutorial on the implementation of the Stata and spss macros and discusses practical issues meta-analysts should consider when estimating meta-regression models with robust variance estimates. Two example databases are used in the tutorial to illustrate the use of meta-analysis with robust variance estimates. Copyright © 2013 John Wiley & Sons, Ltd.
Excoffier, L; Smouse, P E; Quattro, J M
1992-06-01
We present here a framework for the study of molecular variation within a single species. Information on DNA haplotype divergence is incorporated into an analysis of variance format, derived from a matrix of squared-distances among all pairs of haplotypes. This analysis of molecular variance (AMOVA) produces estimates of variance components and F-statistic analogs, designated here as phi-statistics, reflecting the correlation of haplotypic diversity at different levels of hierarchical subdivision. The method is flexible enough to accommodate several alternative input matrices, corresponding to different types of molecular data, as well as different types of evolutionary assumptions, without modifying the basic structure of the analysis. The significance of the variance components and phi-statistics is tested using a permutational approach, eliminating the normality assumption that is conventional for analysis of variance but inappropriate for molecular data. Application of AMOVA to human mitochondrial DNA haplotype data shows that population subdivisions are better resolved when some measure of molecular differences among haplotypes is introduced into the analysis. At the intraspecific level, however, the additional information provided by knowing the exact phylogenetic relations among haplotypes or by a nonlinear translation of restriction-site change into nucleotide diversity does not significantly modify the inferred population genetic structure. Monte Carlo studies show that site sampling does not fundamentally affect the significance of the molecular variance components. The AMOVA treatment is easily extended in several different directions and it constitutes a coherent and flexible framework for the statistical analysis of molecular data.
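The core sum-of-squares partition behind AMOVA can be sketched in a few lines. The simplified version below computes the among-population sum of squared distances and its permutation p-value from a matrix of squared inter-haplotype distances, without the full variance-component and phi-statistic machinery of the method; the distance matrix and population labels are assumed inputs.

```python
import numpy as np

def ss_within(d2, labels):
    """Within-group sum of squares: for each group, the sum of pairwise
    squared distances divided by group size (full matrix counts each pair twice)."""
    ss = 0.0
    for g in np.unique(labels):
        idx = np.where(labels == g)[0]
        ss += d2[np.ix_(idx, idx)].sum() / (2 * idx.size)
    return ss

def amova_like_test(d2, labels, n_perm=999, seed=0):
    """Permutation test of among-population structure from an (n x n) matrix
    of squared inter-haplotype distances d2 and population labels."""
    rng = np.random.default_rng(seed)
    n = len(labels)
    ss_total = d2.sum() / (2 * n)
    obs_among = ss_total - ss_within(d2, labels)
    perm_among = np.empty(n_perm)
    for i in range(n_perm):
        perm_among[i] = ss_total - ss_within(d2, rng.permutation(labels))
    p_value = (1 + np.sum(perm_among >= obs_among)) / (n_perm + 1)
    return obs_among, p_value
```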
2018-06-01
decomposition products from bis-(2-chloroethyl) sulfide (HD). These data were measured using an ASTM International method that is based on differential... The source and purity of the materials studied are listed in Table 1 (Sample Information for Title Compounds).
Technical Note: Introduction of variance component analysis to setup error analysis in radiotherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matsuo, Yukinori, E-mail: ymatsuo@kuhp.kyoto-u.ac.
Purpose: The purpose of this technical note is to introduce variance component analysis to the estimation of systematic and random components in setup error of radiotherapy. Methods: Balanced data according to the one-factor random effect model were assumed. Results: Analysis-of-variance (ANOVA)-based computation was applied to estimate the values and their confidence intervals (CIs) for systematic and random errors and the population mean of setup errors. The conventional method overestimates systematic error, especially in hypofractionated settings. The CI for systematic error becomes much wider than that for random error. The ANOVA-based estimation can be extended to a multifactor model considering multiple causes of setup errors (e.g., interpatient, interfraction, and intrafraction). Conclusions: Variance component analysis may lead to novel applications to setup error analysis in radiotherapy.
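A minimal sketch of the balanced one-factor random-effects computation described here, assuming an array of setup errors with one row per patient and one column per fraction (the variable names and layout are illustrative):

```python
import numpy as np

def setup_error_components(errors):
    """errors: (n_patients, n_fractions) setup errors along one axis.
    Returns (Sigma, sigma): systematic (inter-patient) and random
    (intra-patient) standard deviations from one-way random-effects ANOVA."""
    errors = np.asarray(errors, dtype=float)
    p, n = errors.shape
    patient_means = errors.mean(axis=1)
    grand_mean = errors.mean()
    ms_between = n * np.sum((patient_means - grand_mean) ** 2) / (p - 1)
    ms_within = np.sum((errors - patient_means[:, None]) ** 2) / (p * (n - 1))
    sigma_random = np.sqrt(ms_within)
    sigma_systematic = np.sqrt(max(ms_between - ms_within, 0.0) / n)
    return sigma_systematic, sigma_random
```

The subtraction of the within-patient mean square before scaling is what removes the overestimation of the systematic component that the note attributes to the conventional method.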
Distribution of lod scores in oligogenic linkage analysis.
Williams, J T; North, K E; Martin, L J; Comuzzie, A G; Göring, H H; Blangero, J
2001-01-01
In variance component oligogenic linkage analysis it can happen that the residual additive genetic variance bounds to zero when estimating the effect of the ith quantitative trait locus. Using quantitative trait Q1 from the Genetic Analysis Workshop 12 simulated general population data, we compare the observed lod scores from oligogenic linkage analysis with the empirical lod score distribution under a null model of no linkage. We find that zero residual additive genetic variance in the null model alters the usual distribution of the likelihood-ratio statistic.
Variance-Based Sensitivity Analysis to Support Simulation-Based Design Under Uncertainty
Opgenoord, Max M. J.; Allaire, Douglas L.; Willcox, Karen E.
2016-09-12
Sensitivity analysis plays a critical role in quantifying uncertainty in the design of engineering systems. A variance-based global sensitivity analysis is often used to rank the importance of input factors, based on their contribution to the variance of the output quantity of interest. However, this analysis assumes that all input variability can be reduced to zero, which is typically not the case in a design setting. Distributional sensitivity analysis (DSA) instead treats the uncertainty reduction in the inputs as a random variable, and defines a variance-based sensitivity index function that characterizes the relative contribution to the output variance as a function of the amount of uncertainty reduction. This paper develops a computationally efficient implementation for the DSA formulation and extends it to include distributions commonly used in engineering design under uncertainty. Application of the DSA method to the conceptual design of a commercial jetliner demonstrates how the sensitivity analysis provides valuable information to designers and decision-makers on where and how to target uncertainty reduction efforts.
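For reference, the plain variance-based (Sobol) first-order index, i.e., the quantity that DSA generalizes, can be estimated with a simple pick-freeze scheme. The sketch below uses a toy model with independent uniform inputs and is not the paper's DSA implementation.

```python
import numpy as np

def first_order_sobol(model, n_inputs, n_samples=20000, seed=0):
    """First-order Sobol indices S_i = Var(E[Y|X_i]) / Var(Y) for
    independent U(0,1) inputs, via a Saltelli-style pick-freeze estimator."""
    rng = np.random.default_rng(seed)
    A = rng.random((n_samples, n_inputs))
    B = rng.random((n_samples, n_inputs))
    yA, yB = model(A), model(B)
    var_y = np.var(np.r_[yA, yB])
    S = np.empty(n_inputs)
    for i in range(n_inputs):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                      # resample only input i
        S[i] = np.mean(yB * (model(ABi) - yA)) / var_y
    return S

# Toy model in which the inputs have clearly unequal influence
S = first_order_sobol(
    lambda X: np.sin(2 * np.pi * X[:, 0]) + 0.3 * X[:, 1] ** 2 + 0.05 * X[:, 2],
    n_inputs=3)
```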
NASA Astrophysics Data System (ADS)
Hipp, J. R.; Ballard, S.; Begnaud, M. L.; Encarnacao, A. V.; Young, C. J.; Phillips, W. S.
2015-12-01
Recently our combined SNL-LANL research team has succeeded in developing a global, seamless 3D tomographic P- and S-velocity model (SALSA3D) that provides superior first P and first S travel time predictions at both regional and teleseismic distances. However, given the variable data quality and uneven data sampling associated with this type of model, it is essential that there be a means to calculate high-quality estimates of the path-dependent variance and covariance associated with the predicted travel times of ray paths through the model. In this paper, we describe a methodology for accomplishing this by exploiting the full model covariance matrix and show examples of path-dependent travel time prediction uncertainty computed from our latest tomographic model. Typical global 3D SALSA3D models have on the order of half a million nodes, so the challenge in calculating the covariance matrix is formidable: 0.9 TB of storage for half of a symmetric matrix, necessitating an out-of-core (OOC) blocked matrix solution technique. With our approach the tomography matrix (G, which includes a prior model covariance constraint) is multiplied by its transpose (GTG) and written in a blocked sub-matrix fashion. We employ a distributed parallel solution paradigm that solves for (GTG)-1 by assigning blocks to individual processing nodes for matrix decomposition, update and scaling operations. We first compute the Cholesky decomposition of GTG, which is subsequently inverted. Next, we employ OOC matrix multiplication methods to calculate the model covariance matrix from (GTG)-1 and an assumed data covariance matrix. Given the model covariance matrix, we solve for the travel-time covariance associated with arbitrary ray paths by summing the model covariance along both ray paths. Setting the paths equal and taking the square root yields the travel-time prediction uncertainty for a single path.
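The linear-algebra core of this workflow, shrunk to an in-memory toy problem, might look like the sketch below. The real system is out-of-core and distributed, and the matrices, prior weights, ray sensitivities, and the exact covariance formula are invented here for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_data, n_nodes = 200, 50

# Tomography matrix with a diagonal prior model-covariance constraint appended
G = rng.normal(size=(n_data, n_nodes))
prior = 10.0 * np.eye(n_nodes)
G_aug = np.vstack([G, prior])

GtG = G_aug.T @ G_aug
L = np.linalg.cholesky(GtG)                       # GtG = L L^T
GtG_inv = np.linalg.inv(L).T @ np.linalg.inv(L)   # (GtG)^-1 from the Cholesky factor

# One common choice of model covariance given a data covariance C_d
# (the exact formula used in SALSA3D may differ):
C_d = np.eye(n_data)
C_m = GtG_inv @ G.T @ C_d @ G @ GtG_inv

# Travel-time prediction variance for one ray: sum the model covariance over
# the nodes the ray touches, weighted by hypothetical path-length sensitivities
g = np.zeros(n_nodes)
g[[3, 4, 5, 12]] = [0.8, 1.2, 0.9, 0.4]
travel_time_sigma = np.sqrt(g @ C_m @ g)
```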
Including Effects of Water Stress on Dead Organic Matter Decay to a Forest Carbon Model
NASA Astrophysics Data System (ADS)
Kim, H.; Lee, J.; Han, S. H.; Kim, S.; Son, Y.
2017-12-01
Decay of dead organic matter is a key process of carbon (C) cycling in forest ecosystems. The change in decay rate depends on temperature sensitivity and moisture conditions. The Forest Biomass and Dead organic matter Carbon (FBDC) model includes a decay sub-model considering temperature sensitivity, yet does not consider moisture conditions as drivers of the decay rate change. This study aimed to improve the FBDC model by including a water stress function in the decay sub-model. Also, soil C sequestration under climate change was simulated with the FBDC model including the water stress function. The water stress functions were determined with data from a decomposition study on Quercus variabilis forests and Pinus densiflora forests of Korea, and adjustment parameters of the functions were determined for both species. The water stress functions were based on the ratio of precipitation to potential evapotranspiration. Including the water stress function increased the explained variance of the decay rate by 19% for the Q. variabilis forests and 7% for the P. densiflora forests, respectively. The increase of the explained variance resulted from the large difference in temperature range and precipitation range across the decomposition study plots. During the period of the experiment, the mean annual temperature range was less than 3°C, while the annual precipitation ranged from 720 mm to 1466 mm. Application of the water stress functions to the FBDC model constrained the increasing trend of temperature sensitivity under climate change, and thus increased the model-estimated soil C sequestration (Mg C ha-1) by 6.6 for the Q. variabilis forests and by 3.1 for the P. densiflora forests, respectively. The addition of water stress functions increased the reliability of the decay rate estimation and could contribute to reducing the bias in estimating soil C sequestration under varying moisture conditions. Acknowledgement: This study was supported by Korea Forest Service (2017044B10-1719-BB01)
Sparse Polynomial Chaos Surrogate for ACME Land Model via Iterative Bayesian Compressive Sensing
NASA Astrophysics Data System (ADS)
Sargsyan, K.; Ricciuto, D. M.; Safta, C.; Debusschere, B.; Najm, H. N.; Thornton, P. E.
2015-12-01
For computationally expensive climate models, Monte-Carlo approaches of exploring the input parameter space are often prohibitive due to slow convergence with respect to ensemble size. To alleviate this, we build inexpensive surrogates using uncertainty quantification (UQ) methods employing Polynomial Chaos (PC) expansions that approximate the input-output relationships using as few model evaluations as possible. However, when many uncertain input parameters are present, such UQ studies suffer from the curse of dimensionality. In particular, for 50-100 input parameters non-adaptive PC representations have infeasible numbers of basis terms. To this end, we develop and employ Weighted Iterative Bayesian Compressive Sensing to learn the most important input parameter relationships for efficient, sparse PC surrogate construction with posterior uncertainty quantified due to insufficient data. Besides drastic dimensionality reduction, the uncertain surrogate can efficiently replace the model in computationally intensive studies such as forward uncertainty propagation and variance-based sensitivity analysis, as well as design optimization and parameter estimation using observational data. We applied the surrogate construction and variance-based uncertainty decomposition to Accelerated Climate Model for Energy (ACME) Land Model for several output QoIs at nearly 100 FLUXNET sites covering multiple plant functional types and climates, varying 65 input parameters over broad ranges of possible values. This work is supported by the U.S. Department of Energy, Office of Science, Biological and Environmental Research, Accelerated Climate Modeling for Energy (ACME) project. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
NASA Astrophysics Data System (ADS)
Voit, E. I.; Didenko, N. A.; Gaivoronskaya, K. A.
2018-03-01
Thermal decomposition of (NH4)2ZrF6 resulting in ZrO2 formation within the temperature range of 20°-750°C has been investigated by means of thermal and X-ray diffraction analysis and IR and Raman spectroscopy. It has been established that thermolysis proceeds in six stages. The vibrational-spectroscopy data for the intermediate products of thermal decomposition have been obtained, systematized, and summarized.
Genetics of human body size and shape: body proportions and indices.
Livshits, Gregory; Roset, A; Yakovenko, K; Trofimov, S; Kobyliansky, E
2002-01-01
The study of the genetic component in morphological variables such as body height and weight, head and chest circumference, etc. has a rather long history. However, only a few studies have investigated body proportions and configuration. The major aim of the present study was to evaluate the extent of the possible genetic effects on the inter-individual variation of a number of body configuration indices amenable to clear functional interpretation. Two ethnically different pedigree samples were used in the study: (1) Turkmenians (805 individuals) from Central Asia, and (2) Chuvasha (732 individuals) from the Volga riverside, Russian Federation. To achieve the aim of the present study we proposed three new indices, which were subjected to a statistical-genetic analysis using a modified version of the "FISHER" software. The proposed indices were: (1) an integral index of torso volume (IND#1), (2) an index reflecting a predisposition of body proportions to maintain balance in a vertical position (IND#2), and (3) an index of skeletal extremities volume (IND#3). Additionally, the first two principal factors (PF1 and PF2) obtained from 19 measurements of body length and breadth were subjected to genetic analysis. Variance decomposition analysis, which simultaneously assesses the contributions of gender, age, additive genetic effects and effects of environment shared by nuclear family members, was applied to fit the variation of the above three indices and of PF1 and PF2. The raw familial correlations of all studied traits in both samples showed that: (1) all marital correlations did not differ significantly from zero; (2) parent-offspring and sibling correlations were all positive and statistically significant. The parameter estimates obtained in the variance analyses showed that from 40% to 75% of the inter-individual variation of the studied traits (adjusted for age and sex) was attributable to genetic effects. For PF1 and PF2 in both samples, and for IND#2 in the Chuvasha pedigrees, significant common sib environmental effects were also detectable. Genetic factors substantially influence inter-individual differences in body shape and configuration in the two studied samples. However, further studies are needed to clarify the extent of pleiotropy and epigenetic effects on various facets of the human physique.
Wavelet decomposition based principal component analysis for face recognition using MATLAB
NASA Astrophysics Data System (ADS)
Sharma, Mahesh Kumar; Sharma, Shashikant; Leeprechanon, Nopbhorn; Ranjan, Aashish
2016-03-01
For the realization of face recognition systems, in the static as well as the real-time frame, algorithms such as principal component analysis, independent component analysis, linear discriminant analysis, neural networks and genetic algorithms have been used for decades. This paper discusses a wavelet decomposition based principal component analysis approach for face recognition. Principal component analysis is chosen over other algorithms due to its relative simplicity, efficiency, and robustness. The term face recognition stands for identifying a person from his or her facial features, and it bears some resemblance to factor analysis, i.e. extraction of the principal components of an image. Principal component analysis is subject to some drawbacks, mainly poor discriminatory power and, in particular, a large computational load in finding eigenvectors. These drawbacks can be greatly reduced by combining wavelet transform decomposition for feature extraction with principal component analysis for pattern representation and classification, analyzing the facial images in both the spatial and frequency domains. The experimental results indicate that this face recognition method achieves a significant improvement in recognition rate as well as better computational efficiency.
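A minimal sketch of the combination described here, using PyWavelets for the single-level 2-D DWT and scikit-learn for PCA; the image sizes, wavelet choice, and number of components are assumptions, and this is MATLAB-free by design since the linear algebra is identical.

```python
import numpy as np
import pywt                                  # PyWavelets
from sklearn.decomposition import PCA

def wavelet_pca_features(images, n_components=20, wavelet="haar"):
    """images: iterable of equally sized 2-D grayscale arrays.
    A single-level 2-D DWT keeps only the approximation band (cA),
    which is flattened and projected onto principal components."""
    approx = []
    for img in images:
        cA, (cH, cV, cD) = pywt.dwt2(np.asarray(img, dtype=float), wavelet)
        approx.append(cA.ravel())
    X = np.vstack(approx)
    pca = PCA(n_components=n_components)
    features = pca.fit_transform(X)          # eigenface-style feature vectors
    return features, pca
```

Recognition can then be performed by, for example, nearest-neighbour matching of a probe image's feature vector against the gallery features, which is where the reduced dimensionality pays off computationally.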
Dangers in Using Analysis of Covariance Procedures.
ERIC Educational Resources Information Center
Campbell, Kathleen T.
Problems associated with the use of analysis of covariance (ANCOVA) as a statistical control technique are explained. Three problems relate to the use of "OVA" methods (analysis of variance, analysis of covariance, multivariate analysis of variance, and multivariate analysis of covariance) in general. These are: (1) the wasting of information when…
Relating the Hadamard Variance to MCS Kalman Filter Clock Estimation
NASA Technical Reports Server (NTRS)
Hutsell, Steven T.
1996-01-01
The Global Positioning System (GPS) Master Control Station (MCS) currently makes significant use of the Allan Variance. This two-sample variance equation has proven excellent as a handy, understandable tool, both for time domain analysis of GPS cesium frequency standards, and for fine tuning the MCS's state estimation of these atomic clocks. The Allan Variance does not explicitly converge for noise types of alpha less than or equal to minus 3 and can be greatly affected by frequency drift. Because GPS rubidium frequency standards exhibit non-trivial aging and aging noise characteristics, the basic Allan Variance analysis must be augmented in order to (a) compensate for a dynamic frequency drift, and (b) characterize two additional noise types, specifically alpha = minus 3 and alpha = minus 4. As the GPS program progresses, we will utilize a larger percentage of rubidium frequency standards than ever before. Hence, GPS rubidium clock characterization will require more attention than ever before. The three-sample variance, commonly referred to as a renormalized Hadamard Variance, is unaffected by linear frequency drift, converges for alpha greater than minus 5, and thus has utility for modeling noise in GPS rubidium frequency standards. This paper demonstrates the potential of Hadamard Variance analysis in GPS operations, and presents an equation that relates the Hadamard Variance to the MCS's Kalman filter process noises.
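For concreteness, the two- and three-sample variances at the basic sampling interval can be computed directly from fractional-frequency data. The short sketch below (with illustrative noise and drift levels) shows why the Hadamard form is insensitive to linear drift: the second difference removes any constant slope in the frequency series.

```python
import numpy as np

def allan_variance(y):
    """Two-sample (Allan) variance of fractional-frequency data at the basic tau."""
    d = np.diff(y)
    return 0.5 * np.mean(d ** 2)

def hadamard_variance(y):
    """Three-sample (Hadamard) variance; the second difference cancels a
    linear frequency drift, unlike the first difference in the Allan variance."""
    d2 = y[2:] - 2.0 * y[1:-1] + y[:-2]
    return np.mean(d2 ** 2) / 6.0

# A clock with linear drift: Allan variance is inflated, Hadamard is not
t = np.arange(10000)
y = 1e-13 * np.random.default_rng(0).normal(size=t.size) + 1e-16 * t
print(allan_variance(y), hadamard_variance(y))
```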
DOE Office of Scientific and Technical Information (OSTI.GOV)
O'Hara, Matthew J.; Kellogg, Cyndi M.; Parker, Cyrena M.
Ammonium bifluoride (ABF, NH4F·HF) is a well-known reagent for converting metal oxides to fluorides and for its applications in breaking down minerals and ores in order to extract useful components. It has more recently been applied to the decomposition of inorganic matrices prior to elemental analysis. Herein, a sample decomposition method that employs molten ABF sample treatment in the initial step is systematically evaluated across a range of inorganic sample types: glass, quartz, zircon, soil, and pitchblende ore. Method performance is evaluated across two variables: duration of molten ABF treatment and ABF reagent mass to sample mass ratio. The degree of solubilization of these sample classes is compared to the fluoride stoichiometry that is theoretically necessary to enact complete fluorination of the sample types. Finally, the sample decomposition method is performed on several soil and pitchblende ore standard reference materials, after which elemental constituent analysis is performed by ICP-OES and ICP-MS. Elemental recoveries are compared to the certified values; the results indicate good to excellent recoveries across a range of alkaline earth, rare earth, transition metal, and actinide elements.
NASA Astrophysics Data System (ADS)
Chu, J. G.; Zhang, C.; Fu, G. T.; Li, Y.; Zhou, H. C.
2015-04-01
This study investigates the effectiveness of a sensitivity-informed method for multi-objective operation of reservoir systems, which uses global sensitivity analysis as a screening tool to reduce the computational demands. Sobol's method is used to screen insensitive decision variables and guide the formulation of the optimization problems with a significantly reduced number of decision variables. This sensitivity-informed problem decomposition dramatically reduces the computational demands required for attaining high quality approximations of optimal tradeoff relationships between conflicting design objectives. The search results obtained from the reduced complexity multi-objective reservoir operation problems are then used to pre-condition the full search of the original optimization problem. In two case studies, the Dahuofang reservoir and the inter-basin multi-reservoir system in Liaoning province, China, sensitivity analysis results show that reservoir performance is strongly controlled by a small proportion of decision variables. Sensitivity-informed problem decomposition and pre-conditioning are evaluated in their ability to improve the efficiency and effectiveness of multi-objective evolutionary optimization. Overall, this study illustrates the efficiency and effectiveness of the sensitivity-informed method and the use of global sensitivity analysis to inform problem decomposition when solving the complex multi-objective reservoir operation problems.
Environmental Influences on Well-Being: A Dyadic Latent Panel Analysis of Spousal Similarity
ERIC Educational Resources Information Center
Schimmack, Ulrich; Lucas, Richard E.
2010-01-01
This article uses dyadic latent panel analysis (DLPA) to examine environmental influences on well-being. DLPA requires longitudinal dyadic data. It decomposes the observed variance of both members of a dyad into a trait, state, and an error component. Furthermore, state variance is decomposed into initial and new state variance. Total observed…
An Analysis of Variance Approach for the Estimation of Response Time Distributions in Tests
ERIC Educational Resources Information Center
Attali, Yigal
2010-01-01
Generalizability theory and analysis of variance methods are employed, together with the concept of objective time pressure, to estimate response time distributions and the degree of time pressure in timed tests. By estimating response time variance components due to person, item, and their interaction, and fixed effects due to item types and…
Comparison of the efficiency between two sampling plans for aflatoxins analysis in maize
Mallmann, Adriano Olnei; Marchioro, Alexandro; Oliveira, Maurício Schneider; Rauber, Ricardo Hummes; Dilkin, Paulo; Mallmann, Carlos Augusto
2014-01-01
Variance and performance of two sampling plans for aflatoxins quantification in maize were evaluated. Eight lots of maize were sampled using two plans: manual, using sampling spear for kernels; and automatic, using a continuous flow to collect milled maize. Total variance and sampling, preparation, and analysis variance were determined and compared between plans through multifactor analysis of variance. Four theoretical distribution models were used to compare aflatoxins quantification distributions in eight maize lots. The acceptance and rejection probabilities for a lot under certain aflatoxin concentration were determined using variance and the information on the selected distribution model to build the operational characteristic curves (OC). Sampling and total variance were lower at the automatic plan. The OC curve from the automatic plan reduced both consumer and producer risks in comparison to the manual plan. The automatic plan is more efficient than the manual one because it expresses more accurately the real aflatoxin contamination in maize. PMID:24948911
NASA Astrophysics Data System (ADS)
Sekiguchi, K.; Shirakawa, H.; Yamamoto, Y.; Araidai, M.; Kangawa, Y.; Kakimoto, K.; Shiraishi, K.
2017-06-01
We analyzed the decomposition mechanisms of trimethylgallium (TMG), used as the gallium source in GaN fabrication, based on first-principles calculations and thermodynamic analysis. We considered two conditions: one under a total pressure of 1 atm, and the other under metal organic vapor phase epitaxy (MOVPE) growth of GaN. Our calculated results show that H2 is indispensable for TMG decomposition under both conditions. In GaN MOVPE, TMG with H2 spontaneously decomposes into Ga(CH3), and Ga(CH3) decomposes into gaseous Ga atoms when the temperature is higher than 440 K. From these calculations, we confirmed that TMG indeed becomes gaseous Ga atoms near the GaN substrate surface.
NASA Astrophysics Data System (ADS)
Gu, Rongbao; Shao, Yanmin
2016-07-01
In this paper, a new concept of multi-scale singular value decomposition entropy based on DCCA cross-correlation analysis is proposed, and its predictive power for the Dow Jones Industrial Average Index is studied. Using Granger causality analysis with different time scales, it is found that the singular value decomposition entropy has predictive power for the Dow Jones Industrial Average Index for periods of less than one month, but not for longer periods. This shows the horizon over which the singular value decomposition entropy predicts the stock market, extending the result obtained in Caraiani (2014). The result also reveals an essential characteristic of the stock market as a chaotic dynamic system.
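A minimal sketch of singular value decomposition entropy for a single scale is given below; the embedding dimension, delay, and rolling window are assumptions, and the DCCA-based multi-scale extension of the paper is not reproduced here.

```python
import numpy as np

def svd_entropy(x, embed_dim=10, delay=1):
    """SVD entropy of a 1-D series: embed the series into a trajectory
    matrix, normalize its singular values, and take the Shannon entropy."""
    x = np.asarray(x, dtype=float)
    n_rows = len(x) - (embed_dim - 1) * delay
    traj = np.array([x[i : i + embed_dim * delay : delay] for i in range(n_rows)])
    s = np.linalg.svd(traj, compute_uv=False)
    p = s / s.sum()
    p = p[p > 0]                               # guard against zero singular values
    return -np.sum(p * np.log(p))

# Example: rolling SVD entropy over a simulated price series
prices = np.cumsum(np.random.default_rng(0).normal(size=1000)) + 100
window = 250
entropy_series = [svd_entropy(prices[i - window : i]) for i in range(window, len(prices))]
```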
Analysis of Wind Tunnel Polar Replicates Using the Modern Design of Experiments
NASA Technical Reports Server (NTRS)
Deloach, Richard; Micol, John R.
2010-01-01
The role of variance in a Modern Design of Experiments analysis of wind tunnel data is reviewed, with distinctions made between explained and unexplained variance. The partitioning of unexplained variance into systematic and random components is illustrated, with examples of the elusive systematic component provided for various types of real-world tests. The importance of detecting and defending against systematic unexplained variance in wind tunnel testing is discussed, and the random and systematic components of unexplained variance are examined for a representative wind tunnel data set acquired in a test in which a missile is used as a test article. The adverse impact of correlated (non-independent) experimental errors is described, and recommendations are offered for replication strategies that facilitate the quantification of random and systematic unexplained variance.
The Use of Decompositions in International Trade Textbooks.
ERIC Educational Resources Information Center
Highfill, Jannett K.; Weber, William V.
1994-01-01
Asserts that international trade, as compared with international finance or even international economics, is primarily an applied microeconomics field. Discusses decomposition analysis in relation to international trade and tariffs. Reports on an evaluation of the treatment of this topic in eight college-level economics textbooks. (CFR)
Effect of pre-heating on the thermal decomposition kinetics of cotton
USDA-ARS?s Scientific Manuscript database
The effect of pre-heating at low temperatures (160-280°C) on the thermal decomposition kinetics of scoured cotton fabrics was investigated by thermogravimetric analysis under nonisothermal conditions. Isoconversional methods were used to calculate the activation energies for the pyrolysis after one-...
Essays on the Determinants of Energy Related CO2 Emissions =
NASA Astrophysics Data System (ADS)
Moutinho, Victor Manuel Ferreira
Overall, amongst the most frequently cited factors driving greenhouse gas (GHG) emissions growth are economic growth and energy demand growth. To assess the determinants of GHG emissions, this thesis proposes and develops a new analysis which links emissions intensity to its main driving factors. In the first essay, we used the 'complete decomposition' technique to examine CO2 emissions intensity and its components, considering 36 economic sectors and the 1996-2009 period in Portugal. Industry (in particular 5 industrial sectors) contributes largely to the variation of CO2 emissions intensity. We concluded, among other findings, that emissions intensity reacts more significantly to shocks in the weight of fossil fuels in total energy consumption than to shocks in other variables. In the second essay, we conducted an analysis for 16 industrial sectors (Group A) and for the group of the 5 most polluting manufacturing sectors (Group B), based on a convergence examination of emissions intensity and its main drivers, as well as on an econometric analysis. We concluded that there is sigma convergence for all the effects except fossil fuel intensity, while gamma convergence was verified for all the effects, with the exception of CO2 emissions by fossil fuel and fossil fuel intensity in Group B. From the econometric approach we concluded that the considered variables have a significant importance in explaining CO2 emissions and CO2 emissions intensity. In the third essay, the tourism industry in Portugal over the 1996-2009 period was examined, specifically two groups of subsectors that affect CO2 emissions intensity. The generalized variance decomposition and the impulse response functions pointed to the sectors that affect tourism more directly, i.e. a bidirectional causality between emissions intensity and energy intensity. The effect of emissions intensity on energy intensity is positive, and the effect of energy intensity on emissions intensity is negative. The percentage of fossil fuels used reacts positively to the economic structure and to carbon intensity, i.e., the greater the economic importance of the sector, the more it uses fossil fuels, and when it raises its carbon intensity, its future use of fossil fuels may rise. On the other hand, positive shocks on energy intensity tend to reduce the percentage of fossil fuels used. In the fourth essay, we conducted an analysis to identify the effects that contribute to the intensity of GHG emissions (EI) in agriculture, as well as their development. With that aim, we used the 'complete decomposition' technique over the 1995-2008 period for a set of European countries. It is shown that the use of nitrogen per cultivated area is an important driver of emissions, and that in countries where labour productivity increases (i.e., the inverse of average labour productivity in agriculture decreases), emissions intensity tends to decrease. These results imply that a way to reduce emissions in agriculture would be to provide better training of agricultural workers to increase their productivity, which would lead to less need for energy and nitrogen use. The purpose of the last essay is to examine the long- and short-run causality of the share of renewable sources on the environmental relation between CO2 per kWh of electricity generation and real GDP for 20 European countries over the 2001-2010 period.
It is important to analyze how the percentage of renewable energy used for electricity production affects the relationship between economic growth and emissions from this sector. The study of these relationships is important from the point of view of environmental and energy policy, as it gives us information on the costs, in terms of economic growth, of applying restrictive emissions levels, and also on the effects of policies concerning the use of renewable energy in the electricity sector (see for instance European Commission Directive 2001/77/EC, [4]). For that purpose, in this study we use cointegration analysis on a set of cross-country panel data covering CO2 emissions from electricity generation (CO2 kWh), economic growth (GDP) and the share of renewable energy for 20 European countries. We estimated the long-run equilibrium to validate the EKC with a new specification. Additionally, we implemented the Innovative Accounting Approach (IAA), which includes forecast error variance decomposition and impulse response functions (IRFs), applied to those variables. This allows us, for example, to know (i) how CO2 kWh responds to an impulse in GDP and (ii) how CO2 kWh responds to an impulse in the share of renewable sources. The contributions of this thesis to the study of energy-related CO2 emissions at the sectoral level are threefold. First, it provides a new econometric decomposition approach for analysing CO2 emissions, developed in collaboration with scientific societies, that can serve as a starting point for future research. Second, it presents a hybrid energy-economy mathematical and econometric model which relates CO2 emissions in Portugal to economic theory. Third, it contributes to explaining the change in CO2 emissions in important economic sectors in Europe, and in particular in Portugal, taking normative considerations into account more openly and explicitly, with policy implications at the energy-environment level within the European commitment.
NASA Astrophysics Data System (ADS)
Benner, Ronald; Hatcher, Patrick G.; Hedges, John I.
1990-07-01
Changes in the chemical composition of mangrove ( Rhizophora mangle) leaves during decomposition in tropical estuarine waters were characterized using solid-state 13C nuclear magnetic resonance (NMR) and elemental (CHNO) analysis. Carbohydrates were the most abundant components of the leaves accounting for about 50 wt% of senescent tissues. Tannins were estimated to account for about 20 wt% of leaf tissues, and lipid components, cutin, and possibly other aliphatic biopolymers in leaf cuticles accounted for about 15 wt%. Carbohydrates were generally less resistant to decomposition than the other constituents and decreased in relative concentration during decomposition. Tannins were of intermediate resistance to decomposition and remained in fairly constant proportion during decomposition. Paraffinic components were very resistant to decomposition and increased in relative concentration as decomposition progressed. Lignin was a minor component of all leaf tissues. Standard methods for the colorimetric determination of tannins (Folin-Dennis reagent) and the gravimetric determination of lignin (Klason lignin) were highly inaccurate when applied to mangrove leaves. The N content of the leaves was particularly dynamic with values ranging from 1.27 wt% in green leaves to 0.65 wt% in senescent yellow leaves attached to trees. During decomposition in the water the N content initially decreased to 0.51 wt% due to leaching, but values steadily increased thereafter to 1.07 wt% in the most degraded leaf samples. The absolute mass of N in the leaves increased during decomposition indicating that N immobilization was occurring as decomposition progressed.
Muravyev, Nikita V; Monogarov, Konstantin A; Asachenko, Andrey F; Nechaev, Mikhail S; Ananyev, Ivan V; Fomenkov, Igor V; Kiselev, Vitaly G; Pivkina, Alla N
2016-12-21
Thermal decomposition of a novel promising high-performance explosive dihydroxylammonium 5,5'-bistetrazole-1,1'-diolate (TKX-50) was studied using a number of thermal analysis techniques (thermogravimetry, differential scanning calorimetry, and accelerating rate calorimetry, ARC). To obtain more comprehensive insight into the kinetics and mechanism of TKX-50 decomposition, a variety of complementary thermoanalytical experiments were performed under various conditions. Non-isothermal and isothermal kinetics were obtained at both atmospheric and low (up to 0.3 Torr) pressures. The gas products of thermolysis were detected in situ using IR spectroscopy, and the structure of solid-state decomposition products was determined by X-ray diffraction and scanning electron microscopy. Diammonium 5,5'-bistetrazole-1,1'-diolate (ABTOX) was directly identified to be the most important intermediate of the decomposition process. The important role of bistetrazole diol (BTO) in the mechanism of TKX-50 decomposition was also rationalized by thermolysis experiments with mixtures of TKX-50 and BTO. Several widely used thermoanalytical data processing techniques (Kissinger, isoconversional, formal kinetic approaches, etc.) were independently benchmarked against the ARC data, which are more germane to the real storage and application conditions of energetic materials. Our study revealed that none of the Arrhenius parameters reported before can properly describe the complex two-stage decomposition process of TKX-50. In contrast, we showed the superior performance of the isoconversional methods combined with isothermal measurements, which yielded the most reliable kinetic parameters of TKX-50 thermolysis. In contrast with the existing reports, the thermal stability of TKX-50 was determined in the ARC experiments to be lower than that of hexogen, but close to that of hexanitrohexaazaisowurtzitane (CL-20).
He, Y.; Zhuang, Q.; Harden, Jennifer W.; McGuire, A. David; Fan, Z.; Liu, Y.; Wickland, Kimberly P.
2014-01-01
The large amount of soil carbon in boreal forest ecosystems has the potential to influence the climate system if released in large quantities in response to warming. Thus, there is a need to better understand and represent the environmental sensitivity of soil carbon decomposition. Most soil carbon decomposition models rely on empirical relationships omitting key biogeochemical mechanisms and their response to climate change is highly uncertain. In this study, we developed a multi-layer microbial explicit soil decomposition model framework for boreal forest ecosystems. A thorough sensitivity analysis was conducted to identify dominating biogeochemical processes and to highlight structural limitations. Our results indicate that substrate availability (limited by soil water diffusion and substrate quality) is likely to be a major constraint on soil decomposition in the fibrous horizon (40–60% of soil organic carbon (SOC) pool size variation), while energy limited microbial activity in the amorphous horizon exerts a predominant control on soil decomposition (>70% of SOC pool size variation). Elevated temperature alleviated the energy constraint of microbial activity most notably in amorphous soils, whereas moisture only exhibited a marginal effect on dissolved substrate supply and microbial activity. Our study highlights the different decomposition properties and underlying mechanisms of soil dynamics between fibrous and amorphous soil horizons. Soil decomposition models should consider explicitly representing different boreal soil horizons and soil–microbial interactions to better characterize biogeochemical processes in boreal forest ecosystems. A more comprehensive representation of critical biogeochemical mechanisms of soil moisture effects may be required to improve the performance of the soil model we analyzed in this study.
Lossless and Sufficient - Invariant Decomposition of Deterministic Target
NASA Astrophysics Data System (ADS)
Paladini, Riccardo; Ferro Famil, Laurent; Pottier, Eric; Martorella, Marco; Berizzi, Fabrizio
2011-03-01
The symmetric radar scattering matrix of a reciprocal target is projected onto the circular polarization basis and decomposed into four orientation-invariant parameters, a relative phase, and a relative orientation. The physical interpretation of these results is found in the wave-particle nature of radar scattering, due to the circular polarization nature of elemental packets of energy. The proposed decomposition is based on a left orthogonal to left Special Unitary basis, providing the target description in terms of a unitary vector. A comparison between the proposed CTD and the Cameron, Kennaugh and Krogager decompositions is also given. Validation using both anechoic chamber data and airborne EMISAR data from DTU shows the effectiveness of this decomposition for the analysis of coherent targets. In a second paper we will show the application of the rotation group U(3) to the decomposition of distributed targets into nine meaningful parameters.
Nitrated graphene oxide and its catalytic activity in thermal decomposition of ammonium perchlorate
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Wenwen; Luo, Qingping; Duan, Xiaohui
2014-02-01
Highlights: • The NGO was synthesized by nitrifying homemade GO. • The N content of the resulting NGO is up to 1.45 wt.%. • The NGO can facilitate the decomposition of AP and release much heat. - Abstract: Nitrated graphene oxide (NGO) was synthesized by nitrifying homemade GO with nitro-sulfuric acid. Fourier transform infrared spectroscopy (FTIR), laser Raman spectroscopy, CP/MAS {sup 13}C NMR spectra and X-ray photoelectron spectroscopy (XPS) were used to characterize the structure of NGO. The thickness and compositions of GO and NGO were analyzed by atomic force microscopy (AFM) and elemental analysis (EA), respectively. The catalytic effect of the NGO on the thermal decomposition of ammonium perchlorate (AP) was investigated by differential scanning calorimetry (DSC). Adding 10% of NGO to AP decreases the decomposition temperature by 106 °C and increases the apparent decomposition heat from 875 to 3236 J/g.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Browning, Katie L; Baggetto, Loic; Unocic, Raymond R
This work reports a method to explore the catalytic reactivity of electrode surfaces towards the decomposition of carbonate solvents [ethylene carbonate (EC), dimethyl carbonate (DMC), and EC/DMC]. We show that the decomposition of a 1:1 wt% EC/DMC mixture is accelerated over certain commercially available LiCoO2 materials, resulting in the formation of CO2, while over pure EC or DMC the reaction is much slower or negligible. The solubility of the produced CO2 in carbonate solvents is high (0.025 grams/mL), which masks the effect of electrolyte decomposition during storage or use. The origin of this decomposition is not clear, but it is expected to be present on other cathode materials and may affect the analysis of SEI products as well as the safety of Li-ion batteries.
Optimal cost design of water distribution networks using a decomposition approach
NASA Astrophysics Data System (ADS)
Lee, Ho Min; Yoo, Do Guen; Sadollah, Ali; Kim, Joong Hoon
2016-12-01
Water distribution network decomposition, which is an engineering approach, is adopted to increase the efficiency of obtaining the optimal cost design of a water distribution network using an optimization algorithm. This study applied the source tracing tool in EPANET, which is a hydraulic and water quality analysis model, to the decomposition of a network to improve the efficiency of the optimal design process. The proposed approach was tested by carrying out the optimal cost design of two water distribution networks, and the results were compared with other optimal cost designs derived from previously proposed optimization algorithms. The proposed decomposition approach using the source tracing technique enables the efficient decomposition of an actual large-scale network, and the results can be combined with the optimal cost design process using an optimization algorithm. This proves that the final design in this study is better than those obtained with other previously proposed optimization algorithms.
Analysis of Decomposition for Structure I Methane Hydrate by Molecular Dynamics Simulation
NASA Astrophysics Data System (ADS)
Wei, Na; Sun, Wan-Tong; Meng, Ying-Feng; Liu, An-Qi; Zhou, Shou-Wei; Guo, Ping; Fu, Qiang; Lv, Xin
2018-05-01
Under multiple temperature and pressure conditions, the microscopic decomposition mechanisms of structure I methane hydrate in contact with bulk water molecules have been studied by molecular dynamics simulation using the LAMMPS software. The simulation system consists of 482 methane molecules in hydrate and 3027 randomly distributed bulk water molecules. From the simulation results, the number of decomposed hydrate cages, the density of methane molecules, the radial distribution function of oxygen atoms, and the mean square displacement and diffusion coefficient of methane molecules have been analyzed. A significant result shows that structure I methane hydrate decomposes from the hydrate-bulk water interface towards the hydrate interior. As temperature rises and pressure drops, the stability of the hydrate weakens, the extent of decomposition deepens, and the mean square displacement and diffusion coefficient of methane molecules increase. These studies provide important insight into the microscopic decomposition mechanisms of methane hydrate.
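The mean square displacement and diffusion coefficient post-processing mentioned here is a standard Einstein-relation calculation; a minimal sketch, assuming unwrapped methane coordinates exported from the simulation, is:

```python
import numpy as np

def mean_square_displacement(positions):
    """positions: (n_frames, n_atoms, 3) unwrapped methane coordinates.
    Returns MSD(t) averaged over atoms, referenced to the first frame."""
    disp = positions - positions[0]                  # displacement from t = 0
    return np.mean(np.sum(disp ** 2, axis=2), axis=1)

def diffusion_coefficient(msd, dt):
    """Einstein relation in 3-D: MSD(t) ~ 6 D t at long times.
    Fit the slope over the second half of the trajectory."""
    t = np.arange(len(msd)) * dt
    half = len(msd) // 2
    slope = np.polyfit(t[half:], msd[half:], 1)[0]
    return slope / 6.0                               # units: length^2 / time
```

A rising diffusion coefficient of the methane molecules, computed this way over successive temperature or pressure nodes, is the quantitative signature of the weakening cage structure that the abstract describes.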
Ozerov, Ivan V; Lezhnina, Ksenia V; Izumchenko, Evgeny; Artemov, Artem V; Medintsev, Sergey; Vanhaelen, Quentin; Aliper, Alexander; Vijg, Jan; Osipov, Andreyan N; Labat, Ivan; West, Michael D; Buzdin, Anton; Cantor, Charles R; Nikolsky, Yuri; Borisov, Nikolay; Irincheeva, Irina; Khokhlovich, Edward; Sidransky, David; Camargo, Miguel Luiz; Zhavoronkov, Alex
2016-11-16
Signalling pathway activation analysis is a powerful approach for extracting biologically relevant features from large-scale transcriptomic and proteomic data. However, modern pathway-based methods often fail to provide stable pathway signatures of a specific phenotype or reliable disease biomarkers. In the present study, we introduce the in silico Pathway Activation Network Decomposition Analysis (iPANDA) as a scalable robust method for biomarker identification using gene expression data. The iPANDA method combines precalculated gene coexpression data with gene importance factors based on the degree of differential gene expression and pathway topology decomposition for obtaining pathway activation scores. Using Microarray Analysis Quality Control (MAQC) data sets and pretreatment data on Taxol-based neoadjuvant breast cancer therapy from multiple sources, we demonstrate that iPANDA provides significant noise reduction in transcriptomic data and identifies highly robust sets of biologically relevant pathway signatures. We successfully apply iPANDA for stratifying breast cancer patients according to their sensitivity to neoadjuvant therapy.
Analysis of Developmental Data: Comparison Among Alternative Methods
ERIC Educational Resources Information Center
Wilson, Ronald S.
1975-01-01
To examine the ability of the correction factor epsilon to counteract statistical bias in univariate analysis, an analysis of variance (adjusted by epsilon) and a multivariate analysis of variance were performed on the same data. The results indicated that univariate analysis is a fully protected design when used with epsilon. (JMB)
Metagenomic analysis of antibiotic resistance genes (ARGs) during refuse decomposition.
Liu, Xi; Yang, Shu; Wang, Yangqing; Zhao, He-Ping; Song, Liyan
2018-04-12
Landfills are important reservoirs of residual antibiotics and antibiotic resistance genes (ARGs), but the mechanism by which landfilling influences antibiotic resistance remains unclear. Although refuse decomposition plays a crucial role in landfill stabilization, its impact on antibiotic resistance has not been well characterized. To better understand this impact, we studied the dynamics of ARGs and the bacterial community composition during refuse decomposition in a bench-scale bioreactor over long-term operation (265 d) based on metagenomic analysis. The total abundance of ARGs increased from 431.0 ppm in the initial aerobic phase (AP) to 643.9 ppm in the later methanogenic phase (MP) during refuse decomposition, suggesting that the application of landfills for municipal solid waste (MSW) treatment may elevate the level of ARGs. A shift from drug-specific (bacitracin, tetracycline and sulfonamide) resistance to multidrug resistance was observed during refuse decomposition and was driven by a shift of potential bacterial hosts. The elevated abundance of Pseudomonas mainly contributed to the increasing abundance of multidrug ARGs (mexF and mexW). Accordingly, the percentage of ARGs encoding efflux pumps increased during refuse decomposition, suggesting that potential bacterial hosts developed this mechanism to adapt to the carbon and energy shortage when biodegradable substances were depleted. Overall, our findings indicate that the use of landfills for MSW treatment increased antibiotic resistance, and demonstrate the need for a comprehensive investigation of antibiotic resistance in landfills. Copyright © 2018. Published by Elsevier B.V.
Keough, N; L'Abbé, E N; Steyn, M; Pretorius, S
2015-01-01
Forensic anthropologists are tasked with interpreting the sequence of events from death to the discovery of a body. Burned bone often evokes questions as to the timing of burning events. The purpose of this study was to assess the progression of thermal damage on bones with advancement in decomposition. Twenty-five pigs in various stages of decomposition (fresh, early, advanced, early and late skeletonisation) were exposed to fire for 30 min. The scored heat-related features on bone included colour change (unaltered, charred, calcined), brown and heat borders, heat lines, delineation, greasy bone, joint shielding, predictable and minimal cracking, delamination and heat-induced fractures. Colour changes were scored according to a ranked percentage scale (0-3) and the remaining traits as absent or present (0/1). Kappa statistics were used to evaluate intra- and inter-observer error. Transition analysis was used to formulate probability mass functions [P(X=j|i)] to predict decomposition stage from the scored features of thermal destruction. Nine traits displayed potential to predict decomposition stage from burned remains. An increase in calcined and charred bone occurred synchronously with the advancement of decomposition, with a subsequent decrease in unaltered surfaces. Greasy bone appeared more often in the early/fresh stages (fleshed bone). Heat borders, heat lines, delineation, joint shielding, and predictable and minimal cracking are associated with advanced decomposition, when bone remains wet but lacks extensive soft tissue protection. Brown burn/borders, delamination and other heat-induced fractures are associated with early and late skeletonisation, showing that the organic composition of bone and the percentage of flesh present affect the manner in which it burns. No statistically significant difference was noted among observers for the majority of the traits, indicating that they can be scored reliably. Based on the data analysis, the pattern of heat-induced changes may assist in estimating decomposition stage from unknown, burned remains. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Xuerun, E-mail: xuerunli@163.com; Zhang, Yu; Shen, Xiaodong, E-mail: xdshen@njut.edu.cn
The formation kinetics of tricalcium aluminate (C₃A) and calcium sulfate yielding calcium sulfoaluminate (C₄A₃$) and the decomposition kinetics of calcium sulfoaluminate were investigated by sintering a mixture of synthetic C₃A and gypsum. The quantitative analysis of the phase composition was performed by X-ray powder diffraction using the Rietveld method. The results showed that the formation reaction 3Ca₃Al₂O₆ + CaSO₄ → Ca₄Al₆O₁₂(SO₄) + 6CaO was the primary reaction below 1350 °C, with an activation energy of 231 ± 42 kJ/mol, while the decomposition reaction 2Ca₄Al₆O₁₂(SO₄) + 10CaO → 6Ca₃Al₂O₆ + 2SO₂↑ + O₂↑ occurred primarily above 1350 °C, with an activation energy of 792 ± 64 kJ/mol. The optimal formation region for C₄A₃$ was from 1150 °C to 1350 °C and from 6 h to 1 h, which could provide useful information on the formation of C₄A₃$-containing clinkers. The Jander diffusion model was feasible for both the formation and decomposition of calcium sulfoaluminate. Ca²⁺ and SO₄²⁻ were the diffusive species in both the formation and decomposition reactions. -- Highlights: • Formation and decomposition of calcium sulphoaluminate were studied. • Decomposition of calcium sulphoaluminate combined CaO and yielded C₃A. • Activation energy for formation was 231 ± 42 kJ/mol. • Activation energy for decomposition was 792 ± 64 kJ/mol. • Both the formation and decomposition were controlled by diffusion.
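As a brief illustration of how such activation energies are extracted once rate constants are available, the sketch below performs an Arrhenius fit on hypothetical Jander rate constants; the temperatures and k values are illustrative placeholders, not the measured data.

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

# Hypothetical Jander rate constants k(T) at several sintering temperatures (K).
T = np.array([1423.0, 1473.0, 1523.0, 1573.0])
k = np.array([2.1e-4, 4.9e-4, 1.1e-3, 2.3e-3])

# Arrhenius equation: ln k = ln A - Ea / (R T), i.e. a linear fit of ln k against 1/T.
slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
Ea = -slope * R          # activation energy in J/mol
A = np.exp(intercept)    # pre-exponential factor

print(f"Ea = {Ea / 1000:.0f} kJ/mol, A = {A:.3e}")
```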
Cockle, Diane Lyn; Bell, Lynne S
2017-03-01
Little is known about the nature and trajectory of human decomposition in Canada. This study involved the examination of 96 retrospective police death investigation cases selected using the Canadian ViCLAS (Violent Crime Linkage Analysis System) and sudden death police databases. A classification system was designed and applied based on the latest visible stages of autolysis (stages 1-2), putrefaction (3-5) and skeletonisation (6-8) observed. The analysis of the progression of decomposition using time (post mortem interval, PMI, in days) and temperature (accumulated degree-days, ADD) found considerable variability during the putrefaction and skeletonisation phases, with poor predictability noted after stage 5 (post bloat). The visible progression of decomposition outdoors was characterized by a brown to black discolouration at stage 5 and remnant desiccated black tissue at stage 7. No bodies were totally skeletonised in under one year. Mummification of tissue was rare, with earlier onset in winter as opposed to summer, likely due to lower seasonal humidity. Neither ADD nor PMI was a significant predictor of the decomposition score, with correlations of 53% for temperature and 41% for time. It took almost twice as much time and 1.5 times more temperature (ADD) for the set of cases exposed to cold and freezing temperatures (4°C or less) to reach putrefaction compared to the warm group. The amount of precipitation and/or clothing had a negligible impact on the advancement of decomposition, whereas the lack of sun exposure (full shade) had a small positive effect. This study found that the poor predictability of the onset and duration of late-stage decomposition, combined with our limited understanding of the full range of variables that influence the speed of decomposition, makes PMI estimations for exposed terrestrial cases in Canada unreliable, and also calls into question PMI estimations elsewhere. Copyright © 2016 The Chartered Society of Forensic Sciences. Published by Elsevier B.V. All rights reserved.
An Empirical Assessment of Defense Contractor Risk 1976-1984.
1986-06-01
A model to evaluate Department of Defense contract pricing, financing, and profit policies. An analysis of the defense contractor risk-return relationship is performed utilizing four methods: mean-variance analysis of rate of return, the Capital Asset Pricing Model, mean-variance analysis of total ...
Statistical analysis of Skylab 3. [endocrine/metabolic studies of astronauts]
NASA Technical Reports Server (NTRS)
Johnston, D. A.
1974-01-01
The results of endocrine/metabolic studies of astronauts on Skylab 3 are reported. One-way analysis of variance, contrasts, two-way unbalanced analysis of variance, and analysis of periodic changes in flight are included. Results for blood and urine tests are presented.
Ma, Hong-Wu; Zhao, Xue-Ming; Yuan, Ying-Jin; Zeng, An-Ping
2004-08-12
Metabolic networks are organized in a modular, hierarchical manner. Methods for a rational decomposition of the metabolic network into relatively independent functional subsets are essential to better understand the modularity and organization principle of a large-scale, genome-wide network. Network decomposition is also necessary for functional analysis of metabolism by pathway analysis methods, which are often hampered by the problem of combinatorial explosion due to the complexity of the metabolic network. Decomposition methods proposed in the literature are mainly based on the connection degree of metabolites. To obtain a more reasonable decomposition, the global connectivity structure of metabolic networks should be taken into account. In this work, we use a reaction graph representation of a metabolic network for the identification of its global connectivity structure and for decomposition. A bow-tie connectivity structure similar to that previously discovered for the metabolite graph is found also to exist in the reaction graph. Based on this bow-tie structure, a new decomposition method is proposed, which uses a distance definition derived from the path length between two reactions. A hierarchical classification tree is first constructed from the distance matrix among the reactions in the giant strong component of the bow-tie structure. These reactions are then grouped into different subsets based on the hierarchical tree. Reactions in the IN and OUT subsets of the bow-tie structure are subsequently placed in the corresponding subsets according to a 'majority rule'. Compared with the decomposition methods proposed in the literature, ours is based on combined properties of the global network structure and local reaction connectivity rather than, primarily, on the connection degree of metabolites. The method is applied to decompose the metabolic network of Escherichia coli. Eleven subsets are obtained. More detailed investigations of the subsets show that reactions in the same subset are indeed functionally related. The rational decomposition of metabolic networks, and subsequent studies of the subsets, make it easier to understand the inherent organization and functionality of metabolic networks at the modular level. http://genome.gbf.de/bioinformatics/
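A minimal sketch of the workflow just described, using NetworkX for the giant strong component of a toy reaction graph and SciPy for the hierarchical classification tree built from path-length distances; the edges and the cut height are assumptions, not the Escherichia coli network or the paper's exact distance definition.

```python
import networkx as nx
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# Hypothetical directed reaction graph: an edge r1 -> r2 means a product of r1
# is a substrate of r2.
G = nx.DiGraph([("r1", "r2"), ("r2", "r3"), ("r3", "r1"),
                ("r3", "r4"), ("r4", "r5"), ("r5", "r3"),
                ("r0", "r1"), ("r5", "r6")])

# Giant strong component: the core of the bow-tie structure.
gsc = max(nx.strongly_connected_components(G), key=len)
core = sorted(gsc)

# Distance between two reactions: undirected shortest path length.
U = G.to_undirected()
D = np.array([[nx.shortest_path_length(U, a, b) for b in core] for a in core], float)

# Hierarchical classification tree from the condensed distance matrix.
tree = linkage(squareform(D, checks=False), method="average")
labels = fcluster(tree, t=1.5, criterion="distance")   # assumed cut height
print(dict(zip(core, labels)))
```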
Robust-mode analysis of hydrodynamic flows
NASA Astrophysics Data System (ADS)
Roy, Sukesh; Gord, James R.; Hua, Jia-Chen; Gunaratne, Gemunu H.
2017-04-01
The emergence of techniques to extract high-frequency high-resolution data introduces a new avenue for modal decomposition to assess the underlying dynamics, especially of complex flows. However, this task requires the differentiation of robust, repeatable flow constituents from noise and other irregular features of a flow. Traditional approaches involving low-pass filtering and principal component analysis have shortcomings. The approach outlined here, referred to as robust-mode analysis, is based on Koopman decomposition. Three applications to (a) a counter-rotating cellular flame state, (b) variations in financial markets, and (c) turbulent injector flows are provided.
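Robust-mode analysis itself is not reproduced here, but the exact dynamic mode decomposition step that such Koopman-based modal methods build on can be sketched as follows; the synthetic snapshot matrix and the truncation rank are assumptions.

```python
import numpy as np

def dmd(X, rank=None):
    """Exact DMD: from snapshots X[:, k], estimate eigenvalues and modes of the
    best-fit linear operator A with X2 ~ A X1."""
    X1, X2 = X[:, :-1], X[:, 1:]
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    if rank is not None:
        U, s, Vh = U[:, :rank], s[:rank], Vh[:rank]
    Atilde = U.conj().T @ X2 @ Vh.conj().T @ np.diag(1.0 / s)   # reduced operator
    eigvals, W = np.linalg.eig(Atilde)
    modes = X2 @ Vh.conj().T @ np.diag(1.0 / s) @ W             # exact DMD modes
    return eigvals, modes

# Synthetic data: two oscillating spatial structures plus a little noise.
t = np.linspace(0, 4 * np.pi, 200)
x = np.linspace(0, 1, 64)[:, None]
X = np.sin(2 * np.pi * x) * np.cos(5 * t) + 0.5 * np.cos(np.pi * x) * np.sin(2 * t)
X += 0.01 * np.random.default_rng(1).normal(size=X.shape)

eigvals, modes = dmd(X, rank=4)
print("DMD eigenvalue magnitudes:", np.abs(eigvals))
```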
ERIC Educational Resources Information Center
Tanner-Smith, Emily E.; Tipton, Elizabeth
2014-01-01
Methodologists have recently proposed robust variance estimation as one way to handle dependent effect sizes in meta-analysis. Software macros for robust variance estimation in meta-analysis are currently available for Stata (StataCorp LP, College Station, TX, USA) and SPSS (IBM, Armonk, NY, USA), yet there is little guidance for authors regarding…
ERIC Educational Resources Information Center
Lix, Lisa M.; And Others
1996-01-01
Meta-analytic techniques were used to summarize the statistical robustness literature on Type I error properties of alternatives to the one-way analysis of variance "F" test. The James (1951) and Welch (1951) tests performed best under violations of the variance homogeneity assumption, although their use is not always appropriate. (SLD)
Tracking Hierarchical Processing in Morphological Decomposition with Brain Potentials
ERIC Educational Resources Information Center
Lavric, Aureliu; Elchlepp, Heike; Rastle, Kathleen
2012-01-01
One important debate in psycholinguistics concerns the nature of morphological decomposition processes in visual word recognition (e.g., darkness = {dark} + {-ness}). One theory claims that these processes arise during orthographic analysis and prior to accessing meaning (Rastle & Davis, 2008), and another argues that these processes arise through…
Structural applications of metal foams considering material and geometrical uncertainty
NASA Astrophysics Data System (ADS)
Moradi, Mohammadreza
Metal foam is a relatively new and potentially revolutionary material that allows components to be replaced with elements capable of large energy dissipation, or components to be stiffened with elements that generate significant supplementary energy dissipation when buckling occurs. Metal foams provide a means to explore reconfiguring steel structures to mitigate cross-section buckling in many cases and to dramatically increase energy dissipation in all cases. The microstructure of metal foams consists of solid and void phases, and the voids have random shape and size. The randomness introduced into metal foams during manufacturing creates more uncertainty in their behavior than in that of solid steel, so studying uncertainty in the performance metrics of structures containing metal foams is more crucial than for conventional structures. This study therefore presents structural applications of metal foams considering material and geometrical uncertainty. The Sobol' decomposition of a function of many random variables is applied to different problems in structural mechanics. First, the Sobol' decomposition itself is reviewed and extended to cover the case in which the input random variables have a Gaussian distribution. Two examples are then given: a polynomial function of three random variables and the collapse load of a two-story frame. In the structural example, the Sobol' decomposition is used to decompose the variance of the response, the collapse load, into contributions from the individual input variables. This decomposition reveals the relative importance of the individual member yield stresses in determining the collapse load of the frame. In applying the Sobol' decomposition to this structural problem, the following issues are addressed: calculation of the components of the Sobol' decomposition by Monte Carlo simulation; the effect of the input distribution on the Sobol' decomposition; convergence of estimates of the Sobol' decomposition with sample size under various sampling schemes; and the possibility of model reduction guided by the results of the Sobol' decomposition. The remainder of the study investigates different structural applications of metal foam. In the first application, it is shown that metal foams have the potential to serve as hysteretic dampers in the braces of braced building frames. Using metal foams in structural braces decreases dynamic responses such as roof drift, base shear and maximum column moment. Optimum metal foam strengths differ for different earthquakes. To be used in structural braces, metal foams need a stable cyclic response, which may be achievable for metal foams with high relative density. The second application is to improve the strength and ductility of a steel tube by filling it with steel foam. Steel tube beams and columns provide significant strength for structures; their efficient shape with a large second moment of inertia leads to light elements with high bending strength. Steel foam, with its high strength-to-weight ratio, is used to fill the steel tube to improve its mechanical behavior. Linear eigenvalue and plastic collapse finite element (FE) analyses are performed on steel foam filled tubes under pure compression and in three-point bending simulations. It is shown that the foam significantly improves the maximum strength and the energy absorption capacity of the steel tubes.
Different configurations with different volumes of steel foam and degrees of composite behavior are investigated, and some optimum configurations with more efficient behavior are identified. If the composite action between the steel foam and the steel increases, the strength of the element improves because the failure mode changes from local buckling to yielding. Moreover, the Sobol' decomposition is used to investigate uncertainty in the strength and ductility of the composite tube, including the sensitivity of the strength to input parameters such as the foam density, tube wall thickness and steel properties. Monte Carlo simulation with nonlinear finite element analysis is performed on aluminum foam filled tubes under three-point bending. The results show that the steel foam properties have a greater effect on the ductility of the steel foam filled tube than on its strength, and that the flexural strength is more sensitive to the steel properties than to the aluminum foam properties. Finally, the properties of hypothetical structural steel foam C-channels are investigated via simulations. In thin-walled structural members, stability of the walls is the primary driver of structural limit states, and low weight is one of their main advantages; thin-walled members made of steel foam can therefore exhibit improved strength while maintaining low weight. Linear eigenvalue, finite strip method (FSM) and plastic collapse FE analyses are used to evaluate the strength and ductility of steel foam C-channels under uniform compression and bending. It is found that replacing the steel walls of the C-channel with steel foam walls increases the local buckling resistance and decreases the global buckling resistance of the C-channel. By using the Sobol' decomposition, an optimum configuration for a variable-density steel foam C-channel can be found: for high relative density, replacing the solid steel of the lip and flange elements with steel foam increases the buckling strength, whereas for low relative density it decreases the buckling strength. Moreover, the buckling strength of the steel foam C-channel is shown to be sensitive to the second-order Sobol' indices. In summary, this research shows that metal foams have great potential to improve different types of structural response, and that there are many promising applications for metal foam in civil structures.
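A minimal Monte Carlo sketch of the first-order Sobol' indices discussed above, using a pick-and-freeze (Saltelli-type) estimator on a toy response; the response function, the independent standard-normal inputs and the sample size are illustrative assumptions, not the frame collapse-load model.

```python
import numpy as np

def first_order_sobol(model, n_inputs, n_samples=20000, seed=0):
    """Pick-and-freeze Monte Carlo estimate of first-order Sobol' indices,
    assuming independent standard-normal inputs (an illustrative choice)."""
    rng = np.random.default_rng(seed)
    A = rng.normal(size=(n_samples, n_inputs))
    B = rng.normal(size=(n_samples, n_inputs))
    yA, yB = model(A), model(B)
    var_y = np.var(np.concatenate([yA, yB]))
    indices = []
    for i in range(n_inputs):
        ABi = A.copy()
        ABi[:, i] = B[:, i]              # freeze all inputs except the i-th
        yABi = model(ABi)
        indices.append(np.mean(yB * (yABi - yA)) / var_y)  # Saltelli (2010) estimator
    return np.array(indices)

# Toy stand-in for a structural response: additive terms plus one interaction.
def toy_response(x):
    return 2.0 * x[:, 0] + 1.0 * x[:, 1] + 0.5 * x[:, 2] + 0.3 * x[:, 0] * x[:, 1]

print(first_order_sobol(toy_response, n_inputs=3))
```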
New insights into the crowd characteristics in Mina
NASA Astrophysics Data System (ADS)
Wang, J. Y.; Weng, W. G.; Zhang, X. L.
2014-11-01
The significance of studying the characteristics of crowd behavior for safely organizing mass activities is indubitable, yet little empirical material is available for such research. In this paper, the Mina crowd disaster is quantitatively re-investigated. Its instantaneous velocity field is extracted from video material based on the cross-correlation algorithm. The properties of the stop-and-go waves, including fluctuation frequencies, wave propagation speeds, characteristic speeds, and time- and space-averaged velocity variances, are analyzed in detail. The database of stop-and-go wave features is thus enriched, which is very important to crowd studies. The ‘turbulent’ flows are investigated with the proper orthogonal decomposition (POD) method, which is widely used in fluid mechanics, and time series and spatial analyses are conducted to investigate their characteristics. The coherent structures and the movement process are described by the POD method, and the relationship between the jamming point and the crowd path is analyzed. The pressure buffer recognized in this paper is consistent with Helbing's high-pressure region. The results revealed here may be helpful for facilities design, modeling crowded scenarios and the organization of large-scale mass activities.
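A minimal sketch of the snapshot POD step, computed via the SVD of a mean-subtracted snapshot matrix for a synthetic travelling-wave field; the data and the printed diagnostics are illustrative, not the extracted Mina velocity fields.

```python
import numpy as np

def snapshot_pod(snapshots):
    """snapshots: array (n_points, n_times) of one velocity component.
    Returns spatial modes, singular values and temporal coefficients of the
    mean-subtracted field (POD via the thin SVD)."""
    fluct = snapshots - snapshots.mean(axis=1, keepdims=True)
    U, s, Vh = np.linalg.svd(fluct, full_matrices=False)
    coeffs = np.diag(s) @ Vh          # temporal coefficients of each mode
    return U, s, coeffs

# Synthetic stop-and-go-like field: a travelling wave plus noise (illustrative).
x = np.linspace(0, 10, 300)[:, None]
t = np.linspace(0, 20, 400)[None, :]
field = np.sin(x - 0.8 * t) + 0.05 * np.random.default_rng(2).normal(size=(300, 400))

modes, s, coeffs = snapshot_pod(field)
energy = s**2 / np.sum(s**2)          # fraction of fluctuation energy per mode
print("energy captured by the first 3 modes:", energy[:3].sum())
```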
NASA Astrophysics Data System (ADS)
Bickel, Malte; Strack, Micha; Bögeholz, Susanne
2015-06-01
Modern knowledge-based societies, especially their younger members, have largely lost their bonds to farming. However, learning about agriculture and its interrelations with environmental issues may be facilitated by students' individual interests in agriculture. To date, an adequate instrument to investigate agricultural interests has been lacking, and research has rarely considered students' interest in specific agricultural content areas or the factors influencing students' agricultural interests. In this study, a factorial design of agricultural interests was developed combining five agricultural content areas and four components of individual interest. The instrument was validated with German fifth and sixth graders (N = 1,085) using a variance decomposition confirmatory factor analysis model. The results demonstrated a second-order factor of general agricultural interest, with animal husbandry, arable farming, vegetable and fruit cropping, primary food processing, and agricultural engineering as discrete content areas of agricultural interest. Multiple regression analyses demonstrated that prior knowledge, garden experience, and disgust sensitivity are predictors of general agricultural interest. In addition, gender influenced interest in four of the five agricultural content areas. Implications are directed at researchers, teachers, and environmental educators concerning how to trigger and develop pupils' agricultural interests.
Efficient least angle regression for identification of linear-in-the-parameters models
Beach, Thomas H.; Rezgui, Yacine
2017-01-01
Least angle regression, as a promising model selection method, differentiates itself from conventional stepwise and stagewise methods in that it is neither too greedy nor too slow. It is closely related to L1 norm optimization, which has the advantage of low prediction variance, sacrificing some model bias in order to enhance generalization capability. In this paper, we propose an efficient least angle regression algorithm for model selection for a large class of linear-in-the-parameters models with the purpose of accelerating the model selection process. The entire algorithm works completely in a recursive manner, where the correlations between model terms and residuals, the evolving directions and other pertinent variables are derived explicitly and updated successively at every subset selection step. The model coefficients are only computed when the algorithm finishes, so direct matrix inversions are avoided. A detailed computational complexity analysis indicates that the proposed algorithm possesses significant computational efficiency, compared with the original approach in which the well-known efficient Cholesky decomposition is involved in solving least angle regression. Three artificial and real-world examples are employed to demonstrate the effectiveness, efficiency and numerical stability of the proposed algorithm. PMID:28293140
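The recursive algorithm of the paper is not reproduced here, but a baseline least angle regression fit for a linear-in-the-parameters model can be sketched with scikit-learn's Lars estimator, against which such an implementation could be benchmarked; the synthetic data and the cap on selected terms are assumptions.

```python
import numpy as np
from sklearn.linear_model import Lars

rng = np.random.default_rng(0)

# Synthetic linear-in-the-parameters model: 100 candidate terms, 5 active ones.
n_samples, n_terms = 200, 100
X = rng.normal(size=(n_samples, n_terms))
true_coef = np.zeros(n_terms)
true_coef[:5] = [3.0, -2.0, 1.5, 1.0, -0.5]
y = X @ true_coef + 0.1 * rng.normal(size=n_samples)

# Least angle regression with a cap on the number of selected terms.
model = Lars(n_nonzero_coefs=5)
model.fit(X, y)

selected = np.nonzero(model.coef_)[0]
print("selected term indices:", selected)
print("estimated coefficients:", model.coef_[selected].round(2))
```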
Structured penalties for functional linear models-partially empirical eigenvectors for regression.
Randolph, Timothy W; Harezlak, Jaroslaw; Feng, Ziding
2012-01-01
One of the challenges with functional data is incorporating geometric structure, or local correlation, into the analysis. This structure is inherent in the output from an increasing number of biomedical technologies, and a functional linear model is often used to estimate the relationship between the predictor functions and scalar responses. Common approaches to the problem of estimating a coefficient function typically involve two stages: regularization and estimation. Regularization is usually done via dimension reduction, projecting onto a predefined span of basis functions or a reduced set of eigenvectors (principal components). In contrast, we present a unified approach that directly incorporates geometric structure into the estimation process by exploiting the joint eigenproperties of the predictors and a linear penalty operator. In this sense, the components in the regression are 'partially empirical' and the framework is provided by the generalized singular value decomposition (GSVD). The form of the penalized estimation is not new, but the GSVD clarifies the process and informs the choice of penalty by making explicit the joint influence of the penalty and predictors on the bias, variance and performance of the estimated coefficient function. Laboratory spectroscopy data and simulations are used to illustrate the concepts.
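As a sketch of the penalized estimation form referred to above (without the GSVD machinery), the following fits a functional linear model by ridge-type regression with a second-difference penalty operator; the simulated curves, the true coefficient function and the penalty weight are assumptions, not the spectroscopy data.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated spectra-like functional predictors observed on a grid of p points.
n, p = 80, 150
grid = np.linspace(0, 1, p)
X = np.array([np.sin(2 * np.pi * rng.uniform(0.5, 2.0) * grid) +
              0.1 * rng.normal(size=p) for _ in range(n)])
beta_true = np.exp(-(grid - 0.3) ** 2 / 0.01)      # smooth coefficient function
y = X @ beta_true + 0.05 * rng.normal(size=n)      # scalar responses

# Second-difference penalty operator: penalizes roughness of the estimate.
P = np.diff(np.eye(p), n=2, axis=0)

lam = 1.0                                          # assumed penalty weight
beta_hat = np.linalg.solve(X.T @ X + lam * P.T @ P, X.T @ y)
print("correlation with the true coefficient function:",
      round(np.corrcoef(beta_hat, beta_true)[0, 1], 3))
```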
Discrete Inverse and State Estimation Problems
NASA Astrophysics Data System (ADS)
Wunsch, Carl
2006-06-01
The problems of making inferences about the natural world from noisy observations and imperfect theories occur in almost all scientific disciplines. This book addresses these problems using examples taken from geophysical fluid dynamics. It focuses on discrete formulations, both static and time-varying, known variously as inverse, state estimation or data assimilation problems. Starting with fundamental algebraic and statistical ideas, the book guides the reader through a range of inference tools including the singular value decomposition, Gauss-Markov and minimum variance estimates, Kalman filters and related smoothers, and adjoint (Lagrange multiplier) methods. The final chapters discuss a variety of practical applications to geophysical flow problems. Discrete Inverse and State Estimation Problems is an ideal introduction to the topic for graduate students and researchers in oceanography, meteorology, climate dynamics, and geophysical fluid dynamics. It is also accessible to a wider scientific audience; the only prerequisite is an understanding of linear algebra. The book provides a comprehensive introduction to discrete methods of inference from incomplete information; is based upon 25 years of practical experience using real data and models; develops sequential and whole-domain analysis methods from simple least squares; and contains many examples and problems, with web-based support through MIT OpenCourseWare.
A sparse grid based method for generative dimensionality reduction of high-dimensional data
NASA Astrophysics Data System (ADS)
Bohn, Bastian; Garcke, Jochen; Griebel, Michael
2016-03-01
Generative dimensionality reduction methods play an important role in machine learning applications because they construct an explicit mapping from a low-dimensional space to the high-dimensional data space. We discuss a general framework to describe generative dimensionality reduction methods, where the main focus lies on a regularized principal manifold learning variant. Since most generative dimensionality reduction algorithms exploit the representer theorem for reproducing kernel Hilbert spaces, their computational costs grow at least quadratically in the number n of data. Instead, we introduce a grid-based discretization approach which automatically scales just linearly in n. To circumvent the curse of dimensionality of full tensor product grids, we use the concept of sparse grids. Furthermore, in real-world applications, some embedding directions are usually more important than others and it is reasonable to refine the underlying discretization space only in these directions. To this end, we employ a dimension-adaptive algorithm which is based on the ANOVA (analysis of variance) decomposition of a function. In particular, the reconstruction error is used to measure the quality of an embedding. As an application, the study of large simulation data from an engineering application in the automotive industry (car crash simulation) is performed.
Anomalous volatility scaling in high frequency financial data
NASA Astrophysics Data System (ADS)
Nava, Noemi; Di Matteo, T.; Aste, Tomaso
2016-04-01
Volatility of intra-day stock market indices computed at various time horizons exhibits a scaling behaviour that differs from what would be expected from fractional Brownian motion (fBm). We investigate this anomalous scaling by using empirical mode decomposition (EMD), a method which separates time series into a set of cyclical components at different time-scales. By applying the EMD to fBm, we retrieve a scaling law that relates the variance of the components to a power law of the oscillating period. In contrast, when analysing 22 different stock market indices, we observe deviations from the fBm and Brownian motion scaling behaviour. We discuss and quantify these deviations, associating them to the characteristics of financial markets, with larger deviations corresponding to less developed markets.
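A rough sketch of the scaling analysis, assuming the third-party PyEMD (EMD-signal) package for the decomposition; the Brownian-motion surrogate series and the zero-crossing estimate of the mean period are illustrative choices, not the intra-day index data of the study.

```python
import numpy as np
from PyEMD import EMD  # assumes the EMD-signal (PyEMD) package is installed

rng = np.random.default_rng(4)
signal = np.cumsum(rng.normal(size=4096))      # Brownian-motion surrogate series

imfs = EMD().emd(signal)                       # intrinsic mode functions

periods, variances = [], []
for imf in imfs[:-1]:                          # drop the final (trend-like) component
    zero_crossings = np.count_nonzero(np.diff(np.sign(imf)) != 0)
    if zero_crossings == 0:
        continue
    periods.append(2.0 * len(imf) / zero_crossings)   # rough mean oscillation period
    variances.append(np.var(imf))

# Fit the power law Var ~ T^alpha relating component variance to period.
alpha = np.polyfit(np.log(periods), np.log(variances), 1)[0]
print(f"estimated variance-period scaling exponent: {alpha:.2f}")
```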
NASA Astrophysics Data System (ADS)
Narayan, Paresh Kumar
2008-05-01
The goal of this paper is to examine the relative importance of permanent and transitory shocks in explaining variations in macroeconomic aggregates for the UK at business cycle horizons. Using the common trend-common cycle restrictions, we estimate a variance decomposition of shocks, and find that over short horizons the bulk of the variations in income and consumption were due to permanent shocks while transitory shocks explain the bulk of the variations in investment. Our findings for income and consumption are consistent with real business cycle models which emphasize the role of aggregate supply shocks, while our findings for investment are consistent with the Keynesian school of thought, which emphasizes the role of aggregate demand shocks in explaining business cycles.
Xu, Chonggang; Gertner, George
2013-01-01
Fourier Amplitude Sensitivity Test (FAST) is one of the most popular uncertainty and sensitivity analysis techniques. It uses a periodic sampling approach and a Fourier transformation to decompose the variance of a model output into partial variances contributed by different model parameters. Until now, the FAST analysis is mainly confined to the estimation of partial variances contributed by the main effects of model parameters, but does not allow for those contributed by specific interactions among parameters. In this paper, we theoretically show that FAST analysis can be used to estimate partial variances contributed by both main effects and interaction effects of model parameters using different sampling approaches (i.e., traditional search-curve based sampling, simple random sampling and random balance design sampling). We also analytically calculate the potential errors and biases in the estimation of partial variances. Hypothesis tests are constructed to reduce the effect of sampling errors on the estimation of partial variances. Our results show that compared to simple random sampling and random balance design sampling, sensitivity indices (ratios of partial variances to variance of a specific model output) estimated by search-curve based sampling generally have higher precision but larger underestimations. Compared to simple random sampling, random balance design sampling generally provides higher estimation precision for partial variances contributed by the main effects of parameters. The theoretical derivation of partial variances contributed by higher-order interactions and the calculation of their corresponding estimation errors in different sampling schemes can help us better understand the FAST method and provide a fundamental basis for FAST applications and further improvements. PMID:24143037
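A compact sketch of the classical FAST estimator for first-order (main-effect) indices, using the search-curve sampling and Fourier decomposition described above; the driving frequencies, harmonic order and toy model are illustrative choices, and the interaction (higher-order) estimators of the paper are not implemented.

```python
import numpy as np

def fast_first_order(model, omegas, n_samples=2049, n_harmonics=4):
    """Classical FAST: sample along a space-filling search curve, Fourier-analyse
    the model output, and read partial variances off the parameter frequencies."""
    k = np.arange(1, n_samples + 1)
    s = np.pi * (2.0 * k - n_samples - 1) / n_samples          # curve parameter in (-pi, pi)
    x = 0.5 + np.arcsin(np.sin(np.outer(s, omegas))) / np.pi   # inputs oscillate in [0, 1]
    y = model(x)
    j = np.arange(1, (n_samples - 1) // 2 + 1)
    A = np.array([np.mean(y * np.cos(jj * s)) for jj in j])
    B = np.array([np.mean(y * np.sin(jj * s)) for jj in j])
    spectrum = A**2 + B**2
    total_variance = 2.0 * spectrum.sum()
    indices = []
    for w in omegas:
        harmonics = w * np.arange(1, n_harmonics + 1)
        indices.append(2.0 * spectrum[harmonics - 1].sum() / total_variance)
    return np.array(indices)

def toy_model(x):
    # Additive toy response plus a small interaction (interactions do not appear
    # in the first-order indices estimated here).
    return np.sin(2 * np.pi * x[:, 0]) + 0.5 * x[:, 1]**2 + 0.1 * x[:, 0] * x[:, 2]

# Interference-free driving frequencies for three parameters (illustrative choice).
print(fast_first_order(toy_model, omegas=np.array([11, 35, 67])))
```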
Xu, Nan; Veesler, David; Doerschuk, Peter C; Johnson, John E
2018-05-01
The information content of cryo EM data sets exceeds that of the electron scattering potential (cryo EM) density initially derived for structure determination. Previously we demonstrated the power of data variance analysis for characterizing regions of cryo EM density that displayed functionally important variance anomalies associated with maturation cleavage events in Nudaurelia Omega Capensis Virus and the presence or absence of a maturation protease in bacteriophage HK97 procapsids. Here we extend the analysis in two ways. First, instead of imposing icosahedral symmetry on every particle in the data set during the variance analysis, we only assume that the data set as a whole has icosahedral symmetry. This change removes artifacts of high variance along icosahedral symmetry axes, but retains all of the features previously reported in the HK97 data set. Second we present a covariance analysis that reveals correlations in structural dynamics (variance) between the interior of the HK97 procapsid with the protease and regions of the exterior (not seen in the absence of the protease). The latter analysis corresponds well with hydrogen deuterium exchange studies previously published that reveal the same correlation. Copyright © 2018 Elsevier Inc. All rights reserved.
Isothermal Decomposition of Hydrogen Peroxide Dihydrate
NASA Technical Reports Server (NTRS)
Loeffler, M. J.; Baragiola, R. A.
2011-01-01
We present a new method of growing pure solid hydrogen peroxide in an ultra high vacuum environment and apply it to determine thermal stability of the dihydrate compound that forms when water and hydrogen peroxide are mixed at low temperatures. Using infrared spectroscopy and thermogravimetric analysis, we quantified the isothermal decomposition of the metastable dihydrate at 151.6 K. This decomposition occurs by fractional distillation through the preferential sublimation of water, which leads to the formation of pure hydrogen peroxide. The results imply that in an astronomical environment where condensed mixtures of H2O2 and H2O are shielded from radiolytic decomposition and warmed to temperatures where sublimation is significant, highly concentrated or even pure hydrogen peroxide may form.
HCOOH decomposition on Pt(111): A DFT study
Scaranto, Jessica; Mavrikakis, Manos
2015-10-13
Formic acid (HCOOH) decomposition on transition metal surfaces is important for hydrogen production and for its electro-oxidation in direct HCOOH fuel cells. HCOOH can decompose through dehydrogenation, leading to formation of CO₂ and H₂, or dehydration, leading to CO and H₂O; because CO can poison metal surfaces, dehydrogenation is typically the desirable decomposition path. Here we report a mechanistic analysis of HCOOH decomposition on Pt(111), obtained from a plane wave density functional theory (DFT-PW91) study. We analyzed the dehydrogenation mechanism by considering the two possible pathways involving the formate (HCOO) or the carboxyl (COOH) intermediate. We also considered several possible dehydration paths leading to CO formation. We studied HCOO and COOH decomposition both on the clean surface and in the presence of other relevant co-adsorbates. The results suggest that COOH formation is energetically more difficult than HCOO formation. In contrast, COOH dehydrogenation is easier than HCOO decomposition. We found that CO₂ is the main product through both pathways and that CO is produced mainly through the dehydroxylation of the COOH intermediate.
Domain decomposition: A bridge between nature and parallel computers
NASA Technical Reports Server (NTRS)
Keyes, David E.
1992-01-01
Domain decomposition is an intuitive organizing principle for a partial differential equation (PDE) computation, both physically and architecturally. However, its significance extends beyond the readily apparent issues of geometry and discretization, on one hand, and of modular software and distributed hardware, on the other. Engineering and computer science aspects are bridged by an old but recently enriched mathematical theory that offers the subject not only unity, but also tools for analysis and generalization. Domain decomposition induces function-space and operator decompositions with valuable properties. Function-space bases and operator splittings that are not derived from domain decompositions generally lack one or more of these properties. The evolution of domain decomposition methods for elliptically dominated problems has linked two major algorithmic developments of the last 15 years: multilevel and Krylov methods. Domain decomposition methods may be considered descendants of both classes with an inheritance from each: they are nearly optimal and at the same time efficiently parallelizable. Many computationally driven application areas are ripe for these developments. A progression is made from a mathematically informal motivation for domain decomposition methods to a specific focus on fluid dynamics applications. To be introductory rather than comprehensive, simple examples are provided while convergence proofs and algorithmic details are left to the original references; however, an attempt is made to convey their most salient features, especially where this leads to algorithmic insight.
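To make the idea concrete, here is a small sketch of the classical alternating (multiplicative) Schwarz method for a 1D Poisson problem with two overlapping subdomains; the model problem, grid and overlap are illustrative choices, not taken from the reference.

```python
import numpy as np

# -u'' = f on (0, 1), u(0) = u(1) = 0, discretized by central differences.
n = 101
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
f = np.ones(n)

def solve_subdomain(u, lo, hi):
    """Solve the tridiagonal Dirichlet problem on interior nodes lo..hi,
    using the current values of u at lo-1 and hi+1 as boundary data."""
    m = hi - lo + 1
    A = (np.diag(2.0 * np.ones(m)) - np.diag(np.ones(m - 1), 1)
         - np.diag(np.ones(m - 1), -1)) / h**2
    b = f[lo:hi + 1].copy()
    b[0] += u[lo - 1] / h**2
    b[-1] += u[hi + 1] / h**2
    u[lo:hi + 1] = np.linalg.solve(A, b)

u = np.zeros(n)
left, right = (1, 60), (40, n - 2)   # two overlapping index ranges of interior nodes

for _ in range(30):                  # alternating Schwarz sweeps
    solve_subdomain(u, *left)
    solve_subdomain(u, *right)

exact = 0.5 * x * (1.0 - x)          # exact solution for f = 1
print("max error after Schwarz iterations:", np.abs(u - exact).max())
```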
Challenges of including nitrogen effects on decomposition in earth system models
NASA Astrophysics Data System (ADS)
Hobbie, S. E.
2011-12-01
Despite the importance of litter decomposition for ecosystem fertility and carbon balance, key uncertainties remain about how this fundamental process is affected by nitrogen (N) availability. Nevertheless, resolving such uncertainties is critical for mechanistic inclusion of such processes in earth system models, towards predicting the ecosystem consequences of increased anthropogenic reactive N. Towards that end, we have conducted a series of experiments examining nitrogen effects on litter decomposition. We found that both substrate N and externally supplied N (regardless of form) accelerated the initial decomposition rate. Faster initial decomposition rates were linked to the higher activity of carbohydrate-degrading enzymes associated with externally supplied N and the greater relative abundances of Gram negative and Gram positive bacteria associated with green leaves and externally supplied organic N (assessed using phospholipid fatty acid analysis, PLFA). By contrast, later in decomposition, externally supplied N slowed decomposition, increasing the fraction of slowly decomposing litter and reducing lignin-degrading enzyme activity and relative abundances of Gram negative and Gram positive bacteria. Our results suggest that elevated atmospheric N deposition may have contrasting effects on the dynamics of different soil carbon pools, decreasing mean residence times of active fractions comprising very fresh litter, while increasing those of more slowly decomposing fractions including more processed litter. Incorporating these contrasting effects of N on decomposition processes into models is complicated by lingering uncertainties about how these effects generalize across ecosystems and substrates.
Self-similar pyramidal structures and signal reconstruction
NASA Astrophysics Data System (ADS)
Benedetto, John J.; Leon, Manuel; Saliani, Sandra
1998-03-01
Pyramidal structures are defined which are locally a combination of low and highpass filtering. The structures are analogous to but different from wavelet packet structures. In particular, new frequency decompositions are obtained; and these decompositions can be parameterized to establish a correspondence with a large class of Cantor sets. Further correspondences are then established to relate such frequency decompositions with more general self- similarities. The role of the filters in defining these pyramidal structures gives rise to signal reconstruction algorithms, and these, in turn, are used in the analysis of speech data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Caballero, F.G.; Yen, Hung-Wei; Australian Centre for Microscopy and Microanalysis, The University of Sydney, NSW 2006
2014-02-15
Interphase carbide precipitation due to austenite decomposition was investigated by high resolution transmission electron microscopy and atom probe tomography in tempered nanostructured bainitic steels. Results showed that cementite (θ) forms by a paraequilibrium transformation mechanism at the bainitic ferrite–austenite interface with a simultaneous three phase crystallographic orientation relationship. - Highlights: • Interphase carbide precipitation due to austenite decomposition • Tempered nanostructured bainitic steels • High resolution transmission electron microscopy and atom probe tomography • Paraequilibrium θ with three phase crystallographic orientation relationship.
Fleischhauer, Monika; Enge, Sören; Miller, Robert; Strobel, Alexander; Strobel, Anja
2013-01-01
Meta-analytic data highlight the value of the Implicit Association Test (IAT) as an indirect measure of personality. Based on evidence suggesting that confounding factors such as cognitive abilities contribute to the IAT effect, this study provides a first investigation of whether basic personality traits explain unwanted variance in the IAT. In a gender-balanced sample of 204 volunteers, the Big-Five dimensions were assessed via self-report, peer-report, and IAT. By means of structural equation modeling (SEM), latent Big-Five personality factors (based on self- and peer-report) were estimated and their predictive value for unwanted variance in the IAT was examined. In a first analysis, unwanted variance was defined in the sense of method-specific variance which may result from differences in task demands between the two IAT block conditions and which can be mirrored by the absolute size of the IAT effects. In a second analysis, unwanted variance was examined in a broader sense defined as those systematic variance components in the raw IAT scores that are not explained by the latent implicit personality factors. In contrast to the absolute IAT scores, this also considers biases associated with the direction of IAT effects (i.e., whether they are positive or negative in sign), biases that might result, for example, from the IAT's stimulus or category features. None of the explicit Big-Five factors was predictive for method-specific variance in the IATs (first analysis). However, when considering unwanted variance that goes beyond pure method-specific variance (second analysis), a substantial effect of neuroticism occurred that may have been driven by the affective valence of IAT attribute categories and the facilitated processing of negative stimuli, typically associated with neuroticism. The findings thus point to the necessity of using attribute category labels and stimuli of similar affective valence in personality IATs to avoid confounding due to recoding.
Aoki, Takatoshi; Yamaguchi, Shinpei; Kinoshita, Shunsuke; Hayashida, Yoshiko; Korogi, Yukunori
2016-09-01
To determine the reproducibility of the quantitative chemical shift-based water-fat separation method with a multiecho gradient echo sequence [iterative decomposition of water and fat with echo asymmetry and least-squares estimation quantitation sequence (IDEAL-IQ)] for assessing bone marrow fat fraction (FF); to evaluate variation of FF at different bone sites; and to investigate its association with age and menopause. 31 consecutive females who underwent pelvic iterative decomposition of water and fat with echo asymmetry and least-squares estimation at 3-T MRI were included in this study. Quantitative FFs of four bone sites obtained using IDEAL-IQ were analyzed. The coefficients of variation (CV) for each site were evaluated over 10 repeated measurements to assess reproducibility. Correlations between FF and age were evaluated for each site, and the FFs of pre- and post-menopausal groups were compared. The CV in the quantification of marrow FF ranged from 0.69% to 1.70%. A statistically significant correlation was established between FF and age in the lumbar vertebral body, ilium and intertrochanteric region of the femur (p < 0.001). The average FF of post-menopausal females was significantly higher than that of pre-menopausal females at these sites (p < 0.05). In the greater trochanter of the femur, there was no significant correlation between FF and age. In vivo IDEAL-IQ would provide reliable quantification of bone marrow fat. IDEAL-IQ is simple to perform in a short time and may be practical for providing information on bone quality in clinical settings.
A note on variance estimation in random effects meta-regression.
Sidik, Kurex; Jonkman, Jeffrey N
2005-01-01
For random effects meta-regression inference, variance estimation for the parameter estimates is discussed. Because estimated weights are used for meta-regression analysis in practice, the assumed or estimated covariance matrix used in meta-regression is not strictly correct, due to possible errors in estimating the weights. Therefore, this note investigates the use of a robust variance estimation approach for obtaining variances of the parameter estimates in random effects meta-regression inference. This method treats the assumed covariance matrix of the effect measure variables as a working covariance matrix. Using an example of meta-analysis data from clinical trials of a vaccine, the robust variance estimation approach is illustrated in comparison with two other methods of variance estimation. A simulation study is presented, comparing the three methods of variance estimation in terms of bias and coverage probability. We find that, despite the seeming suitability of the robust estimator for random effects meta-regression, the improved variance estimator of Knapp and Hartung (2003) yields the best performance among the three estimators, and thus may provide the best protection against errors in the estimated weights.
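A minimal sketch of the working-covariance idea for random-effects meta-regression: weighted least squares with inverse-variance weights, plus a robust (sandwich) variance estimate that treats those weights as a working covariance; the simulated effect sizes, the known between-study variance and the absence of small-sample corrections such as Knapp-Hartung are all simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulated meta-regression data: k studies, one moderator.
k = 25
x = rng.uniform(0, 1, size=k)
X = np.column_stack([np.ones(k), x])         # design matrix (intercept, moderator)
v = rng.uniform(0.02, 0.2, size=k)           # within-study variances
tau2 = 0.05                                  # between-study variance (assumed known here)
theta = 0.2 + 0.5 * x + rng.normal(scale=np.sqrt(v + tau2))

W = np.diag(1.0 / (v + tau2))                # inverse-variance (working) weights
XtWX_inv = np.linalg.inv(X.T @ W @ X)
beta = XtWX_inv @ X.T @ W @ theta            # weighted least-squares estimate

# Robust (sandwich) variance: squared residuals replace the assumed variances.
resid = theta - X @ beta
meat = X.T @ W @ np.diag(resid**2) @ W @ X
robust_cov = XtWX_inv @ meat @ XtWX_inv

# Model-based variance for comparison (assumes the working weights are correct).
model_cov = XtWX_inv

print("robust SEs:     ", np.sqrt(np.diag(robust_cov)).round(3))
print("model-based SEs:", np.sqrt(np.diag(model_cov)).round(3))
```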
Morais, Helena; Ramos, Cristina; Forgács, Esther; Cserháti, Tibor; Oliviera, José
2002-04-25
The effect of light, storage time and temperature on the decomposition rate of monomeric anthocyanin pigments extracted from skins of grape (Vitis vinifera var. Red globe) was determined by reversed-phase high-performance liquid chromatography (RP-HPLC). The impact of various storage conditions on the pigment stability was assessed by stepwise regression analysis. RP-HPLC separated the five identified anthocyanins well and revealed the presence of other unidentified pigments at lower concentrations. Stepwise regression analysis confirmed that the overall decomposition rate of monomeric anthocyanins, peonidin-3-glucoside and malvidin-3-glucoside significantly depended on the time and temperature of storage, the effect of storage time being the most important. The presence or absence of light exerted a negligible impact on the decomposition rate.
NASA Astrophysics Data System (ADS)
Tarai, Madhumita; Kumar, Keshav; Divya, O.; Bairi, Partha; Mishra, Kishor Kumar; Mishra, Ashok Kumar
2017-09-01
The present work compares dissimilarity-based and covariance-based unsupervised chemometric classification approaches using total synchronous fluorescence spectroscopy data sets acquired for cumin and non-cumin based herbal preparations. The conventional decomposition method involves eigenvalue-eigenvector analysis of the covariance of the data set and finds the factors that explain the overall major sources of variation present in the data set. The conventional approach does this irrespective of the fact that the samples belong to intrinsically different groups and hence leads to poor class separation. The present work shows that classification of such samples can be optimized by performing the eigenvalue-eigenvector decomposition on the pairwise dissimilarity matrix.
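The contrast can be sketched on synthetic two-group spectra: eigenvalue-eigenvector analysis of the covariance matrix (conventional PCA) versus eigendecomposition of a double-centred pairwise dissimilarity matrix; the synthetic data, the correlation-based dissimilarity and the double-centring step are illustrative assumptions, not the fluorescence data or the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic "spectra" for two intrinsically different groups of samples.
wav = np.linspace(0, 1, 200)
X = np.vstack([
    [np.exp(-(wav - 0.4)**2 / 0.01) + 0.05 * rng.normal(size=200) for _ in range(15)],
    [np.exp(-(wav - 0.6)**2 / 0.01) + 0.05 * rng.normal(size=200) for _ in range(15)],
])
Xc = X - X.mean(axis=0)

# Route 1: eigendecomposition of the covariance matrix (conventional PCA scores).
cov_vals, cov_vecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
pca_scores = Xc @ cov_vecs[:, ::-1][:, :2]

# Route 2: eigendecomposition of a double-centred pairwise dissimilarity matrix,
# here 1 - Pearson correlation between sample spectra (an illustrative choice).
D = 1.0 - np.corrcoef(X)
J = np.eye(len(X)) - np.ones((len(X), len(X))) / len(X)
B = -0.5 * J @ (D**2) @ J
dis_vals, dis_vecs = np.linalg.eigh(B)
dis_scores = dis_vecs[:, ::-1][:, :2] * np.sqrt(np.maximum(dis_vals[::-1][:2], 0.0))

print("PCA scores, first two samples:\n", pca_scores[:2].round(2))
print("Dissimilarity-based scores, first two samples:\n", dis_scores[:2].round(2))
```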
NASA Astrophysics Data System (ADS)
Lin, Yinwei
2018-06-01
A three-dimensional model of a fish school, computed with a modified Adomian decomposition method (ADM) discretized by the finite difference method, is proposed. To our knowledge, few studies of fish schools are documented, owing to the high cost of numerical computation and the tedious three-dimensional data analysis involved. Here, we propose a simple model based on the Adomian decomposition method to estimate the energy-saving efficiency of the flow motion of the fish school. First, analytic solutions of the Navier-Stokes equations are used for numerical validation. The influence of the distance between two side-by-side fish on the energy efficiency of the fish school is then studied. In addition, a complete error analysis for the method is presented.
Analysis of Variance: What Is Your Statistical Software Actually Doing?
ERIC Educational Resources Information Center
Li, Jian; Lomax, Richard G.
2011-01-01
Users assume statistical software packages produce accurate results. In this article, the authors systematically examined Statistical Package for the Social Sciences (SPSS) and Statistical Analysis System (SAS) for 3 analysis of variance (ANOVA) designs, mixed-effects ANOVA, fixed-effects analysis of covariance (ANCOVA), and nested ANOVA. For each…
Why risk is not variance: an expository note.
Cox, Louis Anthony Tony
2008-08-01
Variance (or standard deviation) of return is widely used as a measure of risk in financial investment risk analysis applications, where mean-variance analysis is applied to calculate efficient frontiers and undominated portfolios. Why, then, do health, safety, and environmental (HS&E) and reliability engineering risk analysts insist on defining risk more flexibly, as being determined by probabilities and consequences, rather than simply by variances? This note suggests an answer by providing a simple proof that mean-variance decision making violates the principle that a rational decisionmaker should prefer higher to lower probabilities of receiving a fixed gain, all else being equal. Indeed, simply hypothesizing a continuous increasing indifference curve for mean-variance combinations at the origin is enough to imply that a decisionmaker must find unacceptable some prospects that offer a positive probability of gain and zero probability of loss. Unlike some previous analyses of limitations of variance as a risk metric, this expository note uses only simple mathematics and does not require the additional framework of von Neumann Morgenstern utility theory.
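The argument can be checked with a tiny numerical example, assuming a simple mean-variance score U = mean - lambda * variance with an illustrative lambda and payoff: a higher probability of the same fixed gain can receive a lower score, and gambles with a positive probability of gain and no possibility of loss can score below the status quo.

```python
# Prospect: win a fixed gain g with probability p, otherwise win nothing (no loss possible).
def mean_variance_score(p, g=1.0, lam=3.0):
    mean = p * g
    var = p * (1.0 - p) * g**2
    return mean - lam * var

print(mean_variance_score(0.5))   # higher chance of the gain ...
print(mean_variance_score(0.1))   # ... yet a lower mean-variance score
print(mean_variance_score(0.0))   # the status quo (no gamble) scores highest here
```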
Thorlund, Kristian; Thabane, Lehana; Mills, Edward J
2013-01-11
Multiple treatment comparison (MTC) meta-analyses are commonly modeled in a Bayesian framework, and weakly informative priors are typically preferred to mirror familiar data-driven frequentist approaches. Random-effects MTCs have commonly modeled heterogeneity under the assumption that the between-trial variances for all involved treatment comparisons are equal (i.e., the 'common variance' assumption). This approach 'borrows strength' for heterogeneity estimation across treatment comparisons, and thus adds valuable precision when data are sparse. The homogeneous variance assumption, however, is unrealistic and can severely bias variance estimates. Consequently, 95% credible intervals may not retain nominal coverage, and treatment rank probabilities may become distorted. Relaxing the homogeneous variance assumption may be equally problematic due to reduced precision. To regain good precision, moderately informative variance priors or additional mathematical assumptions may be necessary. In this paper we describe four novel approaches to modeling heterogeneity variance - two novel model structures, and two approaches for use of moderately informative variance priors. We examine the relative performance of all approaches in two illustrative MTC data sets. We particularly compare between-study heterogeneity estimates and model fits, treatment effect estimates and 95% credible intervals, and treatment rank probabilities. In both data sets, use of moderately informative variance priors constructed from the pairwise meta-analysis data yielded the best model fit and narrower credible intervals. Imposing consistency equations on variance estimates, assuming variances to be exchangeable, or using empirically informed variance priors also yielded good model fits and narrow credible intervals. The homogeneous variance model yielded high precision at all times, but overall inadequate estimates of between-trial variances. Lastly, treatment rankings were similar among the novel approaches, but considerably different when compared with the homogeneous variance approach. MTC models using a homogeneous variance structure appear to perform sub-optimally when between-trial variances vary between comparisons. Using informative variance priors, assuming exchangeability or imposing consistency between heterogeneity variances can all ensure sufficiently reliable and realistic heterogeneity estimation, and thus more reliable MTC inferences. All four approaches should be viable candidates for replacing or supplementing the conventional homogeneous variance MTC model, which is currently the most widely used in practice.
Thermal Decomposition Model Development of EN-7 and EN-8 Polyurethane Elastomers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keedy, Ryan Michael; Harrison, Kale Warren; Cordaro, Joseph Gabriel
Thermogravimetric analysis-gas chromatography/mass spectrometry (TGA-GC/MS) experiments were performed on EN-7 and EN-8, analyzed, and reported in [1]. This SAND report derives and describes pyrolytic thermal decomposition models for use in predicting the responses of EN-7 and EN-8 in an abnormal thermal environment.
Perrault, Katelynn A; Stefanuto, Pierre-Hugues; Stuart, Barbara H; Rai, Tapan; Focant, Jean-François; Forbes, Shari L
2015-01-01
Challenges in decomposition odour profiling have led to variation in the documented odour profile by different research groups worldwide. Background subtraction and use of controls are important considerations given the variation introduced by decomposition studies conducted in different geographical environments. The collection of volatile organic compounds (VOCs) from soil beneath decomposing remains is challenging due to the high levels of inherent soil VOCs, further confounded by the use of highly sensitive instrumentation. This study presents a method that provides suitable chromatographic resolution for profiling decomposition odour in soil by comprehensive two-dimensional gas chromatography coupled with time-of-flight mass spectrometry using appropriate controls and field blanks. Logarithmic transformation and t-testing of compounds permitted the generation of a compound list of decomposition VOCs in soil. Principal component analysis demonstrated the improved discrimination between experimental and control soil, verifying the value of the data handling method. Data handling procedures have not been well documented in this field and standardisation would thereby reduce misidentification of VOCs present in the surrounding environment as decomposition byproducts. Uniformity of data handling and instrumental procedures will reduce analytical variation, increasing confidence in the future when investigating the effect of taphonomic variables on the decomposition VOC profile. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
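The following is a minimal sketch of the data-handling sequence described above (logarithmic transformation, per-compound t-tests against control soil, then PCA). The peak-area arrays, group sizes, and significance threshold are invented for illustration and do not reproduce the authors' processing pipeline:

```python
import numpy as np
from scipy import stats
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Hypothetical VOC peak areas: rows = samples, columns = compounds.
experimental = rng.lognormal(mean=3.5, sigma=0.5, size=(8, 20))   # soil beneath remains
control = rng.lognormal(mean=2.5, sigma=0.5, size=(8, 20))        # control soil

# Logarithmic transformation stabilises the spread of peak areas.
log_exp, log_ctl = np.log10(experimental), np.log10(control)

# Per-compound Welch t-tests retain compounds elevated above the control soil.
t_stat, p_val = stats.ttest_ind(log_exp, log_ctl, axis=0, equal_var=False)
retained = (p_val < 0.05) & (t_stat > 0)

# PCA on the retained compounds to check experimental/control discrimination.
X = np.vstack([log_exp[:, retained], log_ctl[:, retained]])
scores = PCA(n_components=2).fit_transform(X - X.mean(axis=0))
print("compounds retained:", int(retained.sum()))
print("PC1 separation:", round(float(scores[:8, 0].mean() - scores[8:, 0].mean()), 2))
```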
NASA Astrophysics Data System (ADS)
Vukicevic, T.; Uhlhorn, E.; Reasor, P.; Klotz, B.
2012-12-01
A significant potential for improving numerical model forecast skill of tropical cyclone (TC) intensity by assimilation of airborne inner-core observations in high-resolution models has been demonstrated in recent studies. Although encouraging, the results so far have not provided clear guidance on the critical information added by the inner-core data assimilation with respect to the intensity forecast skill. Better understanding of the relationship between the intensity forecast and the value added by the assimilation is required to further the progress, including the assimilation of satellite observations. One of the major difficulties in evaluating such a relationship is the forecast verification metric of TC intensity: the maximum one-minute sustained wind speed at 10 m above the surface. The difficulty results from two issues: 1) the metric refers to a practically unobservable quantity, since it is an extreme value in a highly turbulent and spatially extensive wind field, and 2) model- and observation-based estimates of this measure are not compatible in terms of spatial and temporal scales, even in high-resolution models. Although the need for predicting the extreme value of near-surface wind is well justified, and the observation-based estimates used in practice are well thought out, a revised intensity metric is proposed for the purpose of evaluating numerical forecasts and the impacts of assimilation on the forecast. The metric should enable a robust, observation- and model-resolvable, and phenomenologically based evaluation of the impacts. It is shown that the maximum intensity can be represented in terms of a decomposition into deterministic and stochastic components of the wind field. Using the vortex-centric cylindrical reference frame, the deterministic component is defined as the sum of amplitudes of azimuthal wave numbers 0 and 1 at the radius of maximum wind, whereas the stochastic component is represented by a non-Gaussian PDF. This decomposition is exact and fully independent of individual TC properties. The decomposition of the maximum wind intensity was first evaluated using several sources of data, including Step Frequency Microwave Radiometer surface wind speeds from NOAA and Air Force reconnaissance flights, NOAA P-3 Tail Doppler Radar measurements, and best-track maximum intensity estimates, as well as the simulations from Hurricane WRF Ensemble Data Assimilation System (HEDAS) experiments for 83 real data cases. The results confirmed the validity of the method: the stochastic component of the maximum exhibited a non-Gaussian PDF with small mean amplitude and a variance that was comparable to the known best-track error estimates. The results of the decomposition were then used to evaluate the impact of the improved initial conditions on the forecast. It was shown that the errors in the deterministic component of the intensity had the dominant effect on the forecast skill for the studied cases. This result suggests that the data assimilation of the inner-core observations could focus primarily on improving the analysis of the wave number 0 and 1 initial structure and on the mechanisms responsible for forcing the evolution of this low-wavenumber structure. For the latter analysis, the assimilation of airborne and satellite remote sensing observations could play a significant role.
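A minimal sketch of the proposed decomposition, assuming a synthetic ring of wind speeds at the radius of maximum wind: the deterministic part is taken as the sum of the azimuthal wavenumber 0 and 1 amplitudes, and the residual is attributed to the stochastic component. All values are invented:

```python
import numpy as np

# Synthetic tangential wind sampled around the radius of maximum wind (RMW).
theta = np.linspace(0.0, 2.0 * np.pi, 128, endpoint=False)
rng = np.random.default_rng(1)
wind = 55.0 + 8.0 * np.cos(theta - 0.7) + rng.normal(0.0, 3.0, theta.size)  # m/s, made-up field

# Azimuthal Fourier decomposition.
coeffs = np.fft.rfft(wind) / wind.size
wn0 = coeffs[0].real              # wavenumber-0 amplitude (azimuthal mean)
wn1 = 2.0 * np.abs(coeffs[1])     # wavenumber-1 amplitude

deterministic_max = wn0 + wn1                   # sum of WN0 and WN1 amplitudes at the RMW
stochastic_residual = wind.max() - deterministic_max
print(f"WN0 = {wn0:.1f} m/s, WN1 = {wn1:.1f} m/s, deterministic max = {deterministic_max:.1f} m/s")
print(f"observed max = {wind.max():.1f} m/s, stochastic residual = {stochastic_residual:.1f} m/s")
```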
Odor analysis of decomposing buried human remains
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vass, Arpad Alexander; Smith, Rob R; Thompson, Cyril V
2008-01-01
This study, conducted at the University of Tennessee's Anthropological Research Facility (ARF), lists and ranks the primary chemical constituents which define the odor of decomposition of human remains as detected at the soil surface of shallow burial sites. Triple sorbent traps were used to collect air samples in the field and revealed eight major classes of chemicals which now contain 478 specific volatile compounds associated with burial decomposition. Samples were analyzed using gas chromatography-mass spectrometry (GC-MS) and were collected below and above the body, and at the soil surface of 1.5-3.5 ft. (0.46-1.07 m) deep burial sites of four individuals over a 4-year time span. New data were incorporated into the previously established Decompositional Odor Analysis (DOA) Database providing identification, chemical trends, and semi-quantitation of chemicals for evaluation. This research identifies the 'odor signatures' unique to the decomposition of buried human remains with projected ramifications on human remains detection canine training procedures and in the development of field portable analytical instruments which can be used to locate human remains in shallow burial sites.
40 CFR 264.97 - General ground-water monitoring requirements.
Code of Federal Regulations, 2011 CFR
2011-07-01
... paragraph (i) of this section. (1) A parametric analysis of variance (ANOVA) followed by multiple... mean levels for each constituent. (2) An analysis of variance (ANOVA) based on ranks followed by...
NASA Technical Reports Server (NTRS)
Consoli, Robert David; Sobieszczanski-Sobieski, Jaroslaw
1990-01-01
Advanced multidisciplinary analysis and optimization methods, namely system sensitivity analysis and non-hierarchical system decomposition, are applied to reduce the cost and improve the visibility of an automated vehicle design synthesis process. This process is inherently complex due to the large number of functional disciplines and associated interdisciplinary couplings. Recent developments in system sensitivity analysis as applied to complex non-hierarchic multidisciplinary design optimization problems enable the decomposition of these complex interactions into sub-processes that can be evaluated in parallel. The application of these techniques results in significant cost, accuracy, and visibility benefits for the entire design synthesis process.
Harmonic analysis of traction power supply system based on wavelet decomposition
NASA Astrophysics Data System (ADS)
Dun, Xiaohong
2018-05-01
With the rapid development of high-speed railway and heavy-haul transport, AC-drive electric locomotives and EMUs operate on a large scale across the country, and the electrified railway has become the main harmonic source in China's power grid. In response, the power quality problems of the electrified railway need to be monitored, assessed, and managed in a timely manner. The wavelet transform was developed on the basis of Fourier analysis; its basic idea comes from harmonic analysis and rests on a rigorous theoretical model. It inherits and develops the localization idea of the Gabor transform while overcoming disadvantages such as a fixed window and the lack of discrete orthogonality, making it a widely studied spectral analysis tool. Wavelet analysis takes gradually finer time-domain steps in the high-frequency part so as to focus on any detail of the signal being analyzed, thereby comprehensively analyzing the harmonics of the traction power supply system, while the pyramid algorithm is used to increase the speed of the wavelet decomposition. A MATLAB simulation shows that using wavelet decomposition for harmonic spectrum analysis of the traction power supply system is effective.
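A minimal sketch of multi-level wavelet decomposition of a synthetic traction-current waveform, assuming the PyWavelets package is available; the sampling rate, harmonic content, and wavelet choice are illustrative, not taken from the paper:

```python
import numpy as np
import pywt   # PyWavelets, assumed available

fs = 3200.0                                # assumed sampling rate in Hz
t = np.arange(0.0, 0.2, 1.0 / fs)
# Hypothetical traction-current waveform: 50 Hz fundamental plus 3rd and 5th harmonics.
signal = (np.sin(2 * np.pi * 50 * t)
          + 0.25 * np.sin(2 * np.pi * 150 * t)
          + 0.10 * np.sin(2 * np.pi * 250 * t))

# Mallat's pyramid algorithm: the fast multi-level DWT splits the signal into frequency bands.
level = 5
coeffs = pywt.wavedec(signal, 'db4', level=level)
labels = [f"A{level}"] + [f"D{level - i}" for i in range(level)]
for name, c in zip(labels, coeffs):
    # Band energy is a simple proxy for harmonic content in that frequency range.
    print(f"{name}: energy = {np.sum(c**2):8.2f}")
```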
Fast flux module detection using matroid theory.
Reimers, Arne C; Bruggeman, Frank J; Olivier, Brett G; Stougie, Leen
2015-05-01
Flux balance analysis (FBA) is one of the most often applied methods on genome-scale metabolic networks. Although FBA uniquely determines the optimal yield, the pathway that achieves this is usually not unique. The analysis of the optimal-yield flux space has been an open challenge. Flux variability analysis is only capturing some properties of the flux space, while elementary mode analysis is intractable due to the enormous number of elementary modes. However, it has been found by Kelk et al. (2012) that the space of optimal-yield fluxes decomposes into flux modules. These decompositions allow a much easier but still comprehensive analysis of the optimal-yield flux space. Using the mathematical definition of module introduced by Müller and Bockmayr (2013b), we discovered useful connections to matroid theory, through which efficient algorithms enable us to compute the decomposition into modules in a few seconds for genome-scale networks. Using that every module can be represented by one reaction that represents its function, in this article, we also present a method that uses this decomposition to visualize the interplay of modules. We expect the new method to replace flux variability analysis in the pipelines for metabolic networks.
NASA Astrophysics Data System (ADS)
Hu, Xiaogang; Rymer, William Z.; Suresh, Nina L.
2014-04-01
Objective. The aim of this study is to assess the accuracy of a surface electromyogram (sEMG) motor unit (MU) decomposition algorithm during low levels of muscle contraction. Approach. A two-source method was used to verify the accuracy of the sEMG decomposition system, by utilizing simultaneous intramuscular and surface EMG recordings from the human first dorsal interosseous muscle recorded during isometric trapezoidal force contractions. Spike trains from each recording type were decomposed independently utilizing two different algorithms, EMGlab and dEMG decomposition algorithms. The degree of agreement of the decomposed spike timings was assessed for three different segments of the EMG signals, corresponding to specified regions in the force task. A regression analysis was performed to examine whether certain properties of the sEMG and force signal can predict the decomposition accuracy. Main results. The average accuracy of successful decomposition among the 119 MUs that were common to both intramuscular and surface records was approximately 95%, and the accuracy was comparable between the different segments of the sEMG signals (i.e., force ramp-up versus steady state force versus combined). The regression function between the accuracy and properties of sEMG and force signals revealed that the signal-to-noise ratio of the action potential and stability in the action potential records were significant predictors of the surface decomposition accuracy. Significance. The outcomes of our study confirm the accuracy of the sEMG decomposition algorithm during low muscle contraction levels and provide confidence in the overall validity of the surface dEMG decomposition algorithm.
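A minimal sketch of the two-source comparison idea: firing times of one motor unit from the two decompositions are matched within a tolerance and an agreement percentage is reported. The spike times and tolerance below are invented and do not correspond to the study's data:

```python
import numpy as np

def agreement(spikes_a, spikes_b, tol=0.005):
    """Fraction of spikes in spikes_a matched by a spike in spikes_b within tol seconds."""
    spikes_a = np.asarray(spikes_a)
    spikes_b = np.sort(np.asarray(spikes_b))
    idx = np.clip(np.searchsorted(spikes_b, spikes_a), 1, len(spikes_b) - 1)
    nearest = np.minimum(np.abs(spikes_b[idx] - spikes_a),
                         np.abs(spikes_b[idx - 1] - spikes_a))
    return np.mean(nearest <= tol)

# Hypothetical firing times (seconds) of one motor unit from the two decompositions.
surface = np.array([0.10, 0.21, 0.33, 0.45, 0.58, 0.70])
intramuscular = np.array([0.101, 0.208, 0.331, 0.470, 0.581, 0.699])
print(f"agreement = {100 * agreement(surface, intramuscular):.1f}%")
```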
von Thiele Schwarz, Ulrica; Sjöberg, Anders; Hasson, Henna; Tafvelin, Susanne
2014-12-01
To test the factor structure and variance components of the productivity subscales of the Health and Work Questionnaire (HWQ). A total of 272 individuals from one company answered the HWQ scale, including three dimensions (efficiency, quality, and quantity) that the respondent rated from three perspectives: their own, their supervisor's, and their coworkers'. A confirmatory factor analysis was performed, and common and unique variance components evaluated. A common factor explained 81% of the variance (reliability 0.95). All dimensions and rater perspectives contributed with unique variance. The final model provided a perfect fit to the data. Efficiency, quality, and quantity and three rater perspectives are valid parts of the self-rated productivity measurement model, but with a large common factor. Thus, the HWQ can be analyzed either as one factor or by extracting the unique variance for each subdimension.
Thermal decomposition of ammonium perchlorate in the presence of Al(OH)3·Cr(OH)3 nanoparticles.
Zhang, WenJing; Li, Ping; Xu, HongBin; Sun, Randi; Qing, Penghui; Zhang, Yi
2014-03-15
An Al(OH)3·Cr(OH)3 nanoparticle preparation procedure and its catalytic effect and mechanism on the thermal decomposition of ammonium perchlorate (AP) were investigated using transmission electron microscopy (TEM), X-ray diffraction (XRD), thermogravimetric analysis and differential scanning calorimetry (TG-DSC), X-ray photoelectron spectroscopy (XPS), and thermogravimetric analysis and mass spectroscopy (TG-MS). In the preparation procedure, TEM, SAED, and FT-IR showed that the Al(OH)3·Cr(OH)3 particles were amorphous particles with dimensions in the nanometer size regime, containing a large amount of surface hydroxyl under the controllable preparation conditions. When the Al(OH)3·Cr(OH)3 nanoparticles were used as additives for the thermal decomposition of AP, the TG-DSC results showed that the addition of Al(OH)3·Cr(OH)3 nanoparticles to AP remarkably decreased the onset temperature of AP decomposition from approximately 450°C to 245°C. The FT-IR, RS and XPS results confirmed that the surface hydroxyl content of the Al(OH)3·Cr(OH)3 nanoparticles decreased from 67.94% to 63.65%, and that the Al(OH)3·Cr(OH)3 nanoparticles were transformed to a limited extent from amorphous to crystalline after being used as additives for the thermal decomposition of AP. Such behavior of the Al(OH)3·Cr(OH)3 nanoparticles promoted the oxidation of the NH3 from AP so that it decomposed to N2O first, as indicated by the TG-MS results, accelerating the AP thermal decomposition. Copyright © 2014 Elsevier B.V. All rights reserved.
Sponge-like silver obtained by decomposition of silver nitrate hexamethylenetetramine complex
DOE Office of Scientific and Technical Information (OSTI.GOV)
Afanasiev, Pavel, E-mail: pavel.afanasiev@ircelyon.univ-lyon.fr
2016-07-15
Silver nitrate hexamethylenetetramine [Ag(NO3)·N4(CH2)6] coordination compound has been prepared via an aqueous route and characterized by chemical analysis, XRD and electron microscopy. Decomposition of [Ag(NO3)·N4(CH2)6] under hydrogen and under inert atmosphere has been studied by thermal analysis and mass spectrometry. Thermal decomposition of [Ag(NO3)·N4(CH2)6] proceeds in the range 200–250 °C as a self-propagating rapid redox process accompanied by the release of multiple gases. The decomposition leads to formation of sponge-like silver having a hierarchical open pore system with pore size spanning from 10 µm to 10 nm. The as-obtained silver sponges exhibited favorable activity toward H2O2 electrochemical reduction, making them potentially interesting as non-enzyme hydrogen peroxide sensors. - Graphical abstract: Thermal decomposition of the silver nitrate hexamethylenetetramine coordination compound [Ag(NO3)·N4(CH2)6] leads to sponge-like silver that possesses an open porous structure and demonstrates interesting properties as an electrochemical hydrogen peroxide sensor. - Highlights: • [Ag(NO3)·N4(CH2)6] orthorhombic phase prepared and characterized. • Decomposition of [Ag(NO3)·N4(CH2)6] leads to metallic silver sponge with open porosity. • Ag sponge showed promising properties as a material for hydrogen peroxide sensors.
NASA Astrophysics Data System (ADS)
Alkemade, R.; Van Rijswijk, P.
Large amounts of seaweed are deposited along the coast of Admiralty Bay, King George Island, Antarctica. The stranded seaweed partly decomposes on the beach and supports populations of meiofauna species, mostly nematodes. The factors determining the number of nematodes found in the seaweed packages were studied. Seaweed/sediment samples were collected from different locations, along the coast near Arctowski station, covering gradients of salinity, elevation and proximity of Penguin rookeries. On the same locations decomposition rate was determined by means of permeable containers with seaweed material. Models, including the relations between location, seaweed and sediment characteristics, number of nematodes and decomposition rates, were postulated and verified using path analysis. The most plausible and significant models are presented. The number of nematodes was directly correlated with the height of the location, the carbon-to-nitrogen ratio, and the salinity of the sample. Nematode numbers were apparently indirectly dependent on sediment composition and water content. We hypothesize that the different influences of melt water and tidal water, which affect both salinity and water content of the deposits, are important phenomena underlying these results. Analysis of the relation between decomposition rate and abiotic, location-related characteristics showed that decomposition rate was dependent on the water content of the stranded seaweed and sediment composition. Decomposition rates were high on locations where water content of the deposits was high. There the running water from melt water run-off or from the surf probably increased weight losses of seaweed.
Experimental Modal Analysis and Dynamic Component Synthesis. Volume 3. Modal Parameter Estimation
1987-12-01
...residues as well as poles is achieved. A singular value decomposition method has been used to develop a complex mode indicator function (CMIF) [70...which can be used to help determine the number of poles before the analysis. The CMIF is formed by performing a singular value decomposition of all of...servo systems which can include both low and high damping modes. CMIF can be used to indicate close or repeated eigenvalues before the parameter...
Lott, Michael J; Howa, John D; Chesson, Lesley A; Ehleringer, James R
2015-08-15
Elemental analyzer systems generate N2 and CO2 for elemental composition and isotope ratio measurements. As quantitative conversion of nitrogen in some materials (i.e., nitrate salts and nitro-organic compounds) is difficult, this study tests a recently published method - thermal decomposition without the addition of O2 - for the analysis of these materials. Elemental analyzer/isotope ratio mass spectrometry (EA/IRMS) was used to compare the traditional combustion method (CM) and the thermal decomposition method (TDM), where additional O2 is eliminated from the reaction. The comparisons used organic and inorganic materials with oxidized and/or reduced nitrogen and included ureas, nitrate salts, ammonium sulfate, nitro esters, and nitramines. Previous TDM applications were limited to nitrate salts and ammonium sulfate. The measurement precision and accuracy were compared to determine the effectiveness of converting materials containing different fractions of oxidized nitrogen into N2. The δ13C (VPDB) values were not meaningfully different when measured via CM or TDM, allowing for the analysis of multiple elements in one sample. For materials containing oxidized nitrogen, 15N measurements made using thermal decomposition were more precise than those made using combustion. The precision was similar between the methods for materials containing reduced nitrogen. The %N values were closer to theoretical when measured by TDM than by CM. The δ15N (AIR) values of purchased nitrate salts and ureas were nearer to the known values when analyzed using thermal decomposition than using combustion. The thermal decomposition method addresses insufficient recovery of nitrogen during elemental analysis in a variety of organic and inorganic materials. Its implementation requires relatively few changes to the elemental analyzer. Using TDM, it is possible to directly calibrate certain organic materials to international nitrate isotope reference materials without off-line preparation. Copyright © 2015 John Wiley & Sons, Ltd.
On the Relations among Regular, Equal Unique Variances, and Image Factor Analysis Models.
ERIC Educational Resources Information Center
Hayashi, Kentaro; Bentler, Peter M.
2000-01-01
Investigated the conditions under which the matrix of factor loadings from the factor analysis model with equal unique variances will give a good approximation to the matrix of factor loadings from the regular factor analysis model. Extends the results to the image factor analysis model. Discusses implications for practice. (SLD)
An operational modal analysis method in frequency and spatial domain
NASA Astrophysics Data System (ADS)
Wang, Tong; Zhang, Lingmi; Tamura, Yukio
2005-12-01
A frequency and spatial domain decomposition method (FSDD) for operational modal analysis (OMA) is presented in this paper, which is an extension of the complex mode indicator function (CMIF) method for experimental modal analysis (EMA). The theoretical background of the FSDD method is clarified. Singular value decomposition is adopted to separate the signal space from the noise space. Finally, an enhanced power spectrum density (PSD) is proposed to obtain more accurate modal parameters by curve fitting in the frequency domain. Moreover, a simulation case and an application case are used to validate this method.
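A minimal sketch of a CMIF-style calculation, assuming output-only data: the cross power spectral density matrix is built with scipy.signal.csd and its singular values are tracked over frequency. The synthetic responses, mode shapes, and parameters are invented, and the sketch omits the enhanced-PSD curve fitting described above:

```python
import numpy as np
from scipy.signal import csd

rng = np.random.default_rng(2)
fs = 256.0
t = np.arange(0.0, 30.0, 1.0 / fs)

# Hypothetical 3-channel output-only responses with two modes near 12 Hz and 31 Hz.
mode1, mode2 = np.sin(2 * np.pi * 12 * t), np.sin(2 * np.pi * 31 * t)
Y = (np.outer(mode1, [1.0, 0.6, -0.4]) + np.outer(mode2, [0.3, -0.8, 1.0])
     + 0.3 * rng.normal(size=(t.size, 3)))

# Output cross power spectral density matrix G(f)[i, j].
n_ch = Y.shape[1]
f, _ = csd(Y[:, 0], Y[:, 0], fs=fs, nperseg=512)
G = np.zeros((f.size, n_ch, n_ch), dtype=complex)
for i in range(n_ch):
    for j in range(n_ch):
        _, G[:, i, j] = csd(Y[:, i], Y[:, j], fs=fs, nperseg=512)

# CMIF: singular values of G(f) at every frequency line; peaks of the first curve indicate modes.
cmif = np.linalg.svd(G, compute_uv=False)
print(f"dominant CMIF peak near {f[np.argmax(cmif[:, 0])]:.1f} Hz")
```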
Linear stability analysis of detonations via numerical computation and dynamic mode decomposition
NASA Astrophysics Data System (ADS)
Kabanov, Dmitry I.; Kasimov, Aslan R.
2018-03-01
We introduce a new method to investigate linear stability of gaseous detonations that is based on an accurate shock-fitting numerical integration of the linearized reactive Euler equations with a subsequent analysis of the computed solution via the dynamic mode decomposition. The method is applied to the detonation models based on both the standard one-step Arrhenius kinetics and two-step exothermic-endothermic reaction kinetics. Stability spectra for all cases are computed and analyzed. The new approach is shown to be a viable alternative to the traditional normal-mode analysis used in detonation theory.
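A minimal sketch of the dynamic mode decomposition step on a synthetic space-time perturbation field (not the linearized reactive Euler solution used in the paper); eigenvalues of the reduced operator give growth rates and frequencies:

```python
import numpy as np

def dmd_eigs(X, rank):
    """Eigenvalues of the reduced DMD operator for a (state x time) snapshot matrix X."""
    X1, X2 = X[:, :-1], X[:, 1:]
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    U, s, V = U[:, :rank], s[:rank], Vh.conj().T[:, :rank]
    A_tilde = U.conj().T @ X2 @ V @ np.diag(1.0 / s)   # low-rank linear propagator
    return np.linalg.eigvals(A_tilde)

# Hypothetical perturbation field: one growing and one decaying oscillatory mode.
dt = 0.02
t = np.arange(0.0, 10.0, dt)
x = np.linspace(0.0, 1.0, 64)[:, None]
field = (np.exp(0.08 * t) * np.sin(2 * np.pi * 3 * t) * np.sin(np.pi * x)
         + np.exp(-0.30 * t) * np.cos(2 * np.pi * 7 * t) * np.sin(2 * np.pi * x))

lam = dmd_eigs(field, rank=4)
sigma = np.log(lam) / dt   # continuous-time eigenvalues: growth rate + i * angular frequency
print("growth rates:", np.round(sigma.real, 3))
print("frequencies (Hz):", np.round(sigma.imag / (2 * np.pi), 2))
```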
Mode decomposition and Lagrangian structures of the flow dynamics in orbitally shaken bioreactors
NASA Astrophysics Data System (ADS)
Weheliye, Weheliye Hashi; Cagney, Neil; Rodriguez, Gregorio; Micheletti, Martina; Ducci, Andrea
2018-03-01
In this study, two mode decomposition techniques were applied and compared to assess the flow dynamics in an orbital shaken bioreactor (OSB) of cylindrical geometry and flat bottom: proper orthogonal decomposition and dynamic mode decomposition. Particle Image Velocimetry (PIV) experiments were carried out for different operating conditions including fluid height, h, and shaker rotational speed, N. A detailed flow analysis is provided for conditions when the fluid and vessel motions are in-phase (Fr = 0.23) and out-of-phase (Fr = 0.47). PIV measurements in vertical and horizontal planes were combined to reconstruct low order models of the full 3D flow and to determine its Finite-Time Lyapunov Exponent (FTLE) within OSBs. The combined results from the mode decomposition and the FTLE fields provide a useful insight into the flow dynamics and Lagrangian coherent structures in OSBs and offer a valuable tool to optimise bioprocess design in terms of mixing and cell suspension.
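A minimal snapshot-POD sketch on a synthetic PIV-like field, assuming a (space x time) snapshot matrix: the temporal mean is removed and the SVD gives spatial modes, modal energies, and temporal coefficients. The field and noise level are invented:

```python
import numpy as np

def snapshot_pod(snapshots):
    """POD of a (space x time) snapshot matrix: spatial modes, energy fractions, temporal coefficients."""
    fluct = snapshots - snapshots.mean(axis=1, keepdims=True)   # remove the temporal mean field
    U, s, Vh = np.linalg.svd(fluct, full_matrices=False)
    energy = s**2 / np.sum(s**2)                                # fluctuation energy per mode
    return U, energy, np.diag(s) @ Vh

# Hypothetical PIV-like velocity field: two coherent structures plus measurement noise.
rng = np.random.default_rng(3)
x = np.linspace(0.0, 1.0, 50)[:, None]
t = np.linspace(0.0, 4.0, 200)[None, :]
field = np.sin(np.pi * x) * np.cos(2 * np.pi * t) + 0.4 * np.sin(3 * np.pi * x) * np.sin(5 * np.pi * t)
field += 0.05 * rng.normal(size=field.shape)

modes, energy, coeffs = snapshot_pod(field)
print("energy captured by the first two modes:", np.round(energy[:2], 3))
```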
Further insights into the kinetics of thermal decomposition during continuous cooling.
Liavitskaya, Tatsiana; Guigo, Nathanaël; Sbirrazzuoli, Nicolas; Vyazovkin, Sergey
2017-07-26
Following the previous work (Phys. Chem. Chem. Phys., 2016, 18, 32021), this study continues to investigate the intriguing phenomenon of thermal decomposition during continuous cooling. The phenomenon can be detected and its kinetics can be measured by means of thermogravimetric analysis (TGA). The kinetics of the thermal decomposition of ammonium nitrate (NH4NO3), nickel oxalate (NiC2O4), and lithium sulfate monohydrate (Li2SO4·H2O) have been measured upon heating and cooling and analyzed by means of the isoconversional methodology. The results have confirmed the hypothesis that the respective kinetics should be similar for single-step processes (NH4NO3 decomposition) but different for multi-step ones (NiC2O4 decomposition and Li2SO4·H2O dehydration). It has been discovered that the differences in the kinetics can be either quantitative or qualitative. Physical insights into the nature of the differences have been proposed.
3D quantitative analysis of early decomposition changes of the human face.
Caplova, Zuzana; Gibelli, Daniele Maria; Poppa, Pasquale; Cummaudo, Marco; Obertova, Zuzana; Sforza, Chiarella; Cattaneo, Cristina
2018-03-01
Decomposition of the human body and human face is influenced, among other things, by environmental conditions. The early decomposition changes that modify the appearance of the face may hamper the recognition and identification of the deceased. Quantitative assessment of those changes may provide important information for forensic identification. This report presents a pilot 3D quantitative approach of tracking early decomposition changes of a single cadaver in controlled environmental conditions by summarizing the change with weekly morphological descriptions. The root mean square (RMS) value was used to evaluate the changes of the face after death. The results showed a high correlation (r = 0.863) between the measured RMS and the time since death. RMS values of each scan are presented, as well as the average weekly RMS values. The quantification of decomposition changes could improve the accuracy of antemortem facial approximation and potentially could allow the direct comparisons of antemortem and postmortem 3D scans.
Kinetics of Thermal Decomposition of Ammonium Perchlorate by TG/DSC-MS-FTIR
NASA Astrophysics Data System (ADS)
Zhu, Yan-Li; Huang, Hao; Ren, Hui; Jiao, Qing-Jie
2014-01-01
The method of thermogravimetry/differential scanning calorimetry-mass spectrometry-Fourier transform infrared (TG/DSC-MS-FTIR) simultaneous analysis has been used to study thermal decomposition of ammonium perchlorate (AP). The processing of nonisothermal data at various heating rates was performed using NETZSCH Thermokinetics. The MS-FTIR spectra showed that N2O and NO2 were the main gaseous products of the thermal decomposition of AP, and there was a competition between the formation reaction of N2O and that of NO2 during the process with an iso-concentration point of N2O and NO2. The dependence of the activation energy calculated by Friedman's iso-conversional method on the degree of conversion indicated that the AP decomposition process can be divided into three stages, which are autocatalytic, low-temperature diffusion and high-temperature, stable-phase reaction. The corresponding kinetic parameters were determined by multivariate nonlinear regression and the mechanism of the AP decomposition process was proposed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Glascoe, E A; Hsu, P C; Springer, H K
PBXN-9, an HMX-formulation, is thermally damaged and thermally decomposed in order to determine the morphological changes and decomposition kinetics that occur in the material after mild to moderate heating. The material and its constituents were decomposed using standard thermal analysis techniques (DSC and TGA) and the decomposition kinetics are reported using different kinetic models. Pressed parts and prill were thermally damaged, i.e. heated to temperatures that resulted in material changes but did not result in significant decomposition or explosion, and analyzed. In general, the thermally damaged samples showed a significant increase in porosity and decrease in density and a small amount of weight loss. These PBXN-9 samples appear to sustain more thermal damage than similar HMX-Viton A formulations and the most likely reasons are the decomposition/evaporation of a volatile plasticizer and a polymorphic transition of the HMX from the β to the δ phase.
Hu, Pingsha; Maiti, Tapabrata
2011-01-01
Microarray is a powerful tool for genome-wide gene expression analysis. In microarray expression data, often mean and variance have certain relationships. We present a non-parametric mean-variance smoothing method (NPMVS) to analyze differentially expressed genes. In this method, a nonlinear smoothing curve is fitted to estimate the relationship between mean and variance. Inference is then made upon shrinkage estimation of posterior means assuming variances are known. Different methods have been applied to simulated datasets, in which a variety of mean and variance relationships were imposed. The simulation study showed that NPMVS outperformed the other two popular shrinkage estimation methods in some mean-variance relationships; and NPMVS was competitive with the two methods in other relationships. A real biological dataset, in which a cold stress transcription factor gene, CBF2, was overexpressed, has also been analyzed with the three methods. Gene ontology and cis-element analysis showed that NPMVS identified more cold and stress responsive genes than the other two methods did. The good performance of NPMVS is mainly due to its shrinkage estimation for both means and variances. In addition, NPMVS exploits a non-parametric regression between mean and variance, instead of assuming a specific parametric relationship between mean and variance. The source code written in R is available from the authors on request.
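A simplified, illustrative analogue of the mean-variance smoothing idea (not the authors' exact estimator): a lowess curve is fitted to the gene-wise variance versus mean, and each variance is shrunk toward the smoothed trend with an assumed weight. Requires numpy and statsmodels; all data are simulated:

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(4)
n_genes, n_rep = 2000, 4
true_mean = rng.uniform(4.0, 12.0, n_genes)
true_sd = 0.1 + 0.05 * true_mean                        # built-in mean-variance relationship
data = true_mean[:, None] + rng.normal(0.0, true_sd[:, None], (n_genes, n_rep))

gene_mean = data.mean(axis=1)
gene_var = data.var(axis=1, ddof=1)

# Non-parametric smooth of variance against mean (lowess), standing in for the
# nonlinear mean-variance curve described above.
trend = lowess(gene_var, gene_mean, frac=0.3, return_sorted=False)

# Shrink each gene's variance toward the smoothed trend with an assumed weight.
w = n_rep / (n_rep + 4.0)
moderated_var = w * gene_var + (1.0 - w) * trend
print("spread of raw variances:      ", np.round(gene_var.std(), 4))
print("spread of moderated variances:", np.round(moderated_var.std(), 4))
```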
Pi2 detection using Empirical Mode Decomposition (EMD)
NASA Astrophysics Data System (ADS)
Mieth, Johannes Z. D.; Frühauff, Dennis; Glassmeier, Karl-Heinz
2017-04-01
Empirical Mode Decomposition has been used as an alternative method to wavelet transformation to identify onset times of Pi2 pulsations in data sets of the Scandinavian Magnetometer Array (SMA). Pi2 pulsations are magnetohydrodynamic waves occurring during magnetospheric substorms. Pi2 are almost always observed at substorm onset at mid to low latitudes on Earth's nightside. They are fed by magnetic energy release caused by dipolarization processes. Their periods lie between 40 and 150 seconds. Usually, Pi2 are detected using wavelet transformation. Here, Empirical Mode Decomposition (EMD) is presented as an alternative approach to the traditional procedure. EMD is a relatively young signal decomposition method designed for nonlinear and non-stationary time series. It provides an adaptive, data-driven, and complete decomposition of time series into slow and fast oscillations. An optimized version using Monte-Carlo-type noise assistance is used here. By displaying the results in a time-frequency space, a characteristic frequency modulation is observed. This frequency modulation can be correlated with the onset of Pi2 pulsations. A basic algorithm to find the onset is presented. Finally, the results are compared to classical wavelet-based analysis. The use of different SMA stations furthermore allows the spatial analysis of Pi2 onset times. EMD mostly finds application in the fields of engineering and medicine. This work demonstrates the applicability of this method to geomagnetic time series.
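A minimal sketch of EMD-based Pi2 screening on a synthetic magnetometer trace, assuming the PyEMD ("EMD-signal") package is available: IMFs whose rough mean period falls in the 40-150 s band are flagged and a tentative onset is estimated from a simple amplitude threshold. The signal, thresholds, and period estimate are all illustrative:

```python
import numpy as np
from PyEMD import EMD   # PyEMD ("EMD-signal") package, assumed available

fs = 1.0                              # 1 sample per second, a typical magnetometer cadence
t = np.arange(0.0, 1800.0, 1.0 / fs)  # 30 minutes of synthetic nightside data
rng = np.random.default_rng(5)

# Hypothetical H-component: slow background variation, noise, and a Pi2-like
# damped oscillation (period ~80 s) starting at t = 900 s.
signal = 20.0 * np.sin(2 * np.pi * t / 3600.0) + rng.normal(0.0, 0.5, t.size)
signal += 5.0 * (t > 900) * np.exp(-(t - 900) / 300.0) * np.sin(2 * np.pi * (t - 900) / 80.0)

imfs = EMD()(signal)                  # adaptive decomposition into intrinsic mode functions
for k, imf in enumerate(imfs):
    # Rough mean period from zero crossings in the active window; Pi2 lie in the 40-150 s band.
    seg = imf[t > 900]
    zc = max(np.count_nonzero(np.diff(np.sign(seg))), 1)
    period = 2.0 * (t[-1] - 900.0) / zc
    if 40.0 <= period <= 150.0:
        quiet_std = np.std(imf[:300])                          # pre-onset baseline
        onset = t[np.argmax(np.abs(imf) > 3.0 * quiet_std)]
        print(f"IMF {k}: mean period ~{period:.0f} s, tentative onset at t = {onset:.0f} s")
```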
Stoichiometry, Microbial community composition and decomposition, a modelling analysis
NASA Astrophysics Data System (ADS)
Berninger, Frank; Zhou, Xuan; Aaltonen, Heidi; Köster, Kajar; Heinonsalo, Jussi; Pumpanen, Jukka
2017-04-01
Enzyme-activity-based litter decomposition models describe the decomposition of soil organic matter as a function of microbial biomass and its activity. In these models, decomposition depends largely on microbial and litter stoichiometry. We used the model of Schimel and Weintraub (Soil Biology & Biochemistry 35 (2003) 549-563), largely relying on the modification of Waring et al. (Ecology Letters (2013) 16: 887-894), and we modified the model to include bacteria, fungi and mycorrhizal fungi as decomposer groups, assuming different stoichiometries. The model was tested against previously published data from a fire chronosequence in northern Finland. The model reconstructed well the development of soil organic matter, microbial biomasses and enzyme activities with time after fire. In a theoretical model analysis we tried to understand how the exchange of carbon and nitrogen between mycorrhiza and the plant interacts with different litter stoichiometries. The results indicate that if a high percentage of fungal N uptake is transferred to the plant, mycorrhizal biomass will decrease drastically and, owing to the low mycorrhizal biomass, so will the N uptake of plants. If a lower proportion of the fungal N uptake is transferred to the plant, the N uptake of the plants is reasonably stable while the proportion of mycorrhiza in the total fungal biomass varies. The model is also able to simulate priming of soil organic matter decomposition.
Wang, Liqiong; Chen, Hongyan; Zhang, Tonglai; Zhang, Jianguo; Yang, Li
2007-08-17
Three different substituted potassium salts of trinitrophloroglucinol (H3TNPG) were prepared and characterized. The salts are all hydrates, and thermogravimetric analysis (TG) and elemental analysis confirmed that these salts contain crystal H2O: the amount of crystal H2O is 1.0 hydrate for the mono-substituted potassium salt of H3TNPG [K(H2TNPG)] and the di-substituted potassium salt of H3TNPG [K2(HTNPG)], and 2.0 hydrate for the tri-substituted potassium salt of H3TNPG [K3(TNPG)]. Their thermal decomposition mechanisms and kinetic parameters from 50 to 500 degrees C were studied under a linear heating rate by differential scanning calorimetry (DSC). Their thermal decomposition mechanisms comprise a dehydration stage and an intensive exothermic decomposition stage. FT-IR and TG studies verify that their final decomposition residues are potassium cyanide or potassium carbonate. According to the onset temperature of the first exothermic decomposition process of the dehydrated salts, the order of thermal stability from low to high is K(H2TNPG), K2(HTNPG), K3(TNPG), which conforms to the results for the apparent activation energy calculated by Kissinger's and Ozawa-Doyle's methods. Sensitivity test results showed that the potassium salts of H3TNPG demonstrated higher sensitivity and had greater explosive probabilities.
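A minimal sketch of a Kissinger-type activation energy estimate from peak temperatures at several heating rates; the heating rates and peak temperatures below are invented and are not the salts' measured values:

```python
import numpy as np

# Hypothetical DSC peak temperatures (K) of the first exothermic decomposition
# at several linear heating rates (K/min); values are made up for illustration.
beta = np.array([2.0, 5.0, 10.0, 20.0])      # heating rates
Tp = np.array([512.0, 523.0, 532.0, 541.0])  # peak temperatures

R = 8.314  # J/(mol K)
# Kissinger plot: ln(beta/Tp^2) versus 1/Tp is a straight line with slope -Ea/R.
y = np.log(beta / Tp**2)
x = 1.0 / Tp
slope, intercept = np.polyfit(x, y, 1)
Ea = -slope * R
print(f"apparent activation energy ~ {Ea / 1000:.0f} kJ/mol")
```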
NASA Astrophysics Data System (ADS)
Engelbrecht, Nicolaas; Chiuta, Steven; Bessarabov, Dmitri G.
2018-05-01
The experimental evaluation of an autothermal microchannel reactor for H2 production from NH3 decomposition is described. The reactor design incorporates an autothermal approach, with added NH3 oxidation, for coupled heat supply to the endothermic decomposition reaction. An alternating catalytic plate arrangement is used to accomplish this thermal coupling in a cocurrent flow strategy. Detailed analysis of the transient operating regime associated with reactor start-up and steady-state results is presented. The effects of operating parameters on reactor performance are investigated, specifically, the NH3 decomposition flow rate, NH3 oxidation flow rate, and fuel-oxygen equivalence ratio. Overall, the reactor exhibits rapid response time during start-up; within 60 min, H2 production is approximately 95% of steady-state values. The recommended operating point for steady-state H2 production corresponds to an NH3 decomposition flow rate of 6 NL min-1, NH3 oxidation flow rate of 4 NL min-1, and fuel-oxygen equivalence ratio of 1.4. Under these flows, NH3 conversion of 99.8% and H2 equivalent fuel cell power output of 0.71 kWe is achieved. The reactor shows good heat utilization with a thermal efficiency of 75.9%. An efficient autothermal reactor design is therefore demonstrated, which may be upscaled to a multi-kW H2 production system for commercial implementation.
Effect of Isomorphous Substitution on the Thermal Decomposition Mechanism of Hydrotalcites
Crosby, Sergio; Tran, Doanh; Cocke, David; Duraia, El-Shazly M.; Beall, Gary W.
2014-01-01
Hydrotalcites have many important applications in catalysis, wastewater treatment, gene delivery and polymer stabilization, all depending on preparation history and treatment scenarios. In catalysis and polymer stabilization, thermal decomposition is of great importance. Hydrotalcites form easily with atmospheric carbon dioxide and often interfere with the study of other anion containing systems, particularly if formed at room temperature. The dehydroxylation and decomposition of carbonate occurs simultaneously, making it difficult to distinguish the dehydroxylation mechanisms directly. To date, the majority of work on understanding the decomposition mechanism has utilized hydrotalcite precipitated at room temperature. In this study, evolved gas analysis combined with thermal analysis has been used to show that CO2 contamination is problematic in materials being formed at RT that are poorly crystalline. This has led to some dispute as to the nature of the dehydroxylation mechanism. In this paper, data for the thermal decomposition of the chloride form of hydrotalcite are reported. In addition, carbonate-free hydrotalcites have been synthesized with different charge densities and at different growth temperatures. This combination of parameters has allowed a better understanding of the mechanism of dehydroxylation and the role that isomorphous substitution plays in these mechanisms to be delineated. In addition, the effect of anion type on thermal stability is also reported. A stepwise dehydroxylation model is proposed that is mediated by the level of aluminum substitution. PMID:28788231
NASA Astrophysics Data System (ADS)
Haris, A.; Pradana, G. S.; Riyanto, A.
2017-07-01
The tectonic setting of the Bird Head of Papua Island is an important model for petroleum systems in the eastern part of Indonesia. Current exploration began with the discovery of oil seepage in the Bintuni and Salawati Basins. Biogenic gas in shallow layers has become an interesting issue in hydrocarbon exploration, and the appearance of shallow accumulations of dry-type gas makes biogenic gas appealing for further research. This paper aims at delineating the sweet spot hydrocarbon potential in shallow layers by applying the spectral decomposition technique. Spectral decomposition decomposes the seismic signal into individual frequencies, which have significant geological meaning. One spectral decomposition method is the Continuous Wavelet Transform (CWT), which represents the seismic signal in time and frequency simultaneously and makes time-frequency map analysis easier. When time resolution increases, the frequency resolution decreases, and vice versa. In this study, we perform low-frequency shadow zone analysis, in which an amplitude anomaly observed at a low frequency of 15 Hz is compared with the amplitude at mid (20 Hz) and high (30 Hz) frequencies. The amplitude anomaly that appears at low frequency disappears at high frequency. Spectral decomposition using the CWT algorithm has been successfully applied to delineate the sweet spot zone.
NASA Astrophysics Data System (ADS)
Gruszczynska, Marta; Rosat, Severine; Klos, Anna; Bogusz, Janusz
2017-04-01
Seasonal oscillations in GPS position time series can arise from real geophysical effects and from numerical artefacts. According to Dong et al. (2002), environmental loading effects can account for approximately 40% of the total variance of the annual signals in GPS time series; however, using generally acknowledged methods (e.g. Least Squares Estimation, Wavelet Decomposition, Singular Spectrum Analysis) to model seasonal signals, we are not able to separate real from spurious signals (effects of mismodelling aliased into the annual period, as well as draconitic signals). Therefore, we propose to use Multichannel Singular Spectrum Analysis (MSSA) to determine seasonal oscillations (with annual and semi-annual periods) from GPS position time series and environmental loading displacement models. The MSSA approach is an extension of the classical Karhunen-Loève method and is a special case of SSA for multivariate time series. The main advantages of MSSA are the possibility to extract common seasonal signals for stations from a selected area and to investigate the causality between a set of time series. In this research, we explored the ability of MSSA to separate real geophysical effects from spurious effects in GPS time series. For this purpose, we used GPS position changes and environmental loading models. We analysed the topocentric time series from 250 selected stations located worldwide, delivered from the Network Solution obtained by the International GNSS Service (IGS) as a contribution to the latest realization of the International Terrestrial Reference System (namely ITRF2014, Rebischung et al., 2016). We also analysed atmospheric, hydrological and non-tidal oceanic loading models provided by the EOST/IPGS Loading Service in the Centre-of-Figure (CF) reference frame. The analysed displacements were estimated from ERA-Interim (surface pressure), MERRA-land (soil moisture and snow) as well as ECCO2 ocean bottom pressure. We used Multichannel Singular Spectrum Analysis to determine common seasonal signals in two case studies, adopting a 3-year lag window as the optimal window size. We also inferred the statistical significance of the oscillations through the Monte Carlo MSSA method (Allen and Robertson, 1996). In the first case study, we investigated the common spatio-temporal seasonal signals for all stations. For this purpose, we divided the selected stations with respect to the continents. For instance, for stations located in Europe, seasonal oscillations account for approximately 45% of the variance of the GPS-derived data. Seasonal signals explain a much higher share of the variance, about 92%, in the hydrological loading series, while for the non-tidal oceanic loading they account for 31% of the total variance. In the second case study, we analysed the capability of the MSSA method to establish a causality between several time series. Each estimated Principal Component represents a pattern of the common signal for all analysed data. For the ZIMM station (Zimmerwald, Switzerland), the 1st, 2nd and 9th, 10th Principal Components, which account for 35% of the variance, correspond to the annual and semi-annual signals. In this part, we applied the non-parametric MSSA approach to extract the common seasonal signals from the GPS time series and environmental loadings for each of the 250 stations, showing clearly that some part of the seasonal signal reflects real geophysical effects. REFERENCES: 1. Allen, M. and Robertson, A.: 1996, Distinguishing modulated oscillations from coloured noise in multivariate datasets.
Climate Dynamics, 12, No. 11, 775-784. DOI: 10.1007/s003820050142. 2. Dong, D., Fang, P., Bock, Y., Cheng, M.K. and Miyazaki, S.: 2002, Anatomy of apparent seasonal variations from GPS-derived site position time series. Journal of Geophysical Research, 107, No. B4, 2075. DOI: 10.1029/2001JB000573. 3. Rebischung, P., Altamimi, Z., Ray, J. and Garayt, B.: 2016, The IGS contribution to ITRF2014. Journal of Geodesy, 90, No. 7, 611-630. DOI:10.1007/s00190-016-0897-6.
Modal decomposition of turbulent supersonic cavity
NASA Astrophysics Data System (ADS)
Soni, R. K.; Arya, N.; De, A.
2018-06-01
Self-sustained oscillations in a Mach 3 supersonic cavity with a length-to-depth ratio of three are investigated using wall-modeled large eddy simulation methodology for Re_D = 3.39 × 10^5. The unsteady data obtained through computation are utilized to investigate the spatial and temporal evolution of the flow field, especially the second invariant of the velocity tensor, while the phase-averaged data are analyzed over a feedback cycle to study the spatial structures. This analysis is accompanied by the proper orthogonal decomposition (POD) data, which reveals the presence of discrete vortices along the shear layer. The POD analysis is performed in both the spanwise and streamwise planes to extract the coherence in flow structures. Finally, dynamic mode decomposition is performed on the data sequence to obtain the dynamic information and deeper insight into the self-sustained mechanism.
Three dimensional empirical mode decomposition analysis apparatus, method and article of manufacture
NASA Technical Reports Server (NTRS)
Gloersen, Per (Inventor)
2004-01-01
An apparatus and method of analysis for three-dimensional (3D) physical phenomena. The physical phenomena may include any varying 3D phenomena such as time varying polar ice flows. A representation of the 3D phenomena is passed through a Hilbert transform to convert the data into complex form. A spatial variable is separated from the complex representation by producing a time based covariance matrix. The temporal parts of the principal components are produced by applying Singular Value Decomposition (SVD). Based on the rapidity with which the eigenvalues decay, the first 3-10 complex principal components (CPC) are selected for Empirical Mode Decomposition into intrinsic modes. The intrinsic modes produced are filtered in order to reconstruct the spatial part of the CPC. Finally, a filtered time series may be reconstructed from the first 3-10 filtered complex principal components.
Decomposition of Proteins into Dynamic Units from Atomic Cross-Correlation Functions.
Calligari, Paolo; Gerolin, Marco; Abergel, Daniel; Polimeno, Antonino
2017-01-10
In this article, we present a clustering method of atoms in proteins based on the analysis of the correlation times of interatomic distance correlation functions computed from MD simulations. The goal is to provide a coarse-grained description of the protein in terms of fewer elements that can be treated as dynamically independent subunits. Importantly, this domain decomposition method does not take into account structural properties of the protein. Instead, the clustering of protein residues in terms of networks of dynamically correlated domains is defined on the basis of the effective correlation times of the pair distance correlation functions. For these properties, our method stands as a complementary analysis to the customary protein decomposition in terms of quasi-rigid, structure-based domains. Results obtained for a prototypal protein structure illustrate the approach proposed.
NASA Astrophysics Data System (ADS)
Sekiguchi, Kazuki; Shirakawa, Hiroki; Chokawa, Kenta; Araidai, Masaaki; Kangawa, Yoshihiro; Kakimoto, Koichi; Shiraishi, Kenji
2018-04-01
We analyzed the decomposition of Ga(CH3)3 (TMG) during the metal organic vapor phase epitaxy (MOVPE) of GaN on the basis of first-principles calculations and thermodynamic analysis. We performed activation energy calculations of TMG decomposition and determined the main reaction processes of TMG during GaN MOVPE. We found that TMG reacts with the H2 carrier gas and that (CH3)2GaH is generated after the desorption of the methyl group. Next, (CH3)2GaH decomposes into (CH3)GaH2 and this decomposes into GaH3. Finally, GaH3 becomes GaH. In the MOVPE growth of GaN, TMG decomposes into GaH by the successive desorption of its methyl groups. The results presented here concur with recent high-resolution mass spectroscopy results.
Tarai, Madhumita; Kumar, Keshav; Divya, O; Bairi, Partha; Mishra, Kishor Kumar; Mishra, Ashok Kumar
2017-09-05
The present work compares the dissimilarity and covariance based unsupervised chemometric classification approaches by taking the total synchronous fluorescence spectroscopy data sets acquired for the cumin and non-cumin based herbal preparations. The conventional decomposition method involves eigenvalue-eigenvector analysis of the covariance of the data set and finds the factors that can explain the overall major sources of variation present in the data set. The conventional approach does this irrespective of the fact that the samples belong to intrinsically different groups and hence leads to poor class separation. The present work shows that classification of such samples can be optimized by performing the eigenvalue-eigenvector decomposition on the pair-wise dissimilarity matrix. Copyright © 2017 Elsevier B.V. All rights reserved.
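A minimal sketch contrasting the two routes described above: eigenvalue-eigenvector analysis of the covariance matrix versus eigendecomposition of a double-centred pairwise dissimilarity matrix (a generic classical-MDS-style construction standing in for the paper's dissimilarity measure). The two-class synthetic data are invented:

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(6)
# Hypothetical total-synchronous-fluorescence features for two sample classes.
class_a = rng.normal(0.0, 1.0, (10, 40)) + np.linspace(0, 2, 40)
class_b = rng.normal(0.0, 1.0, (10, 40)) - np.linspace(0, 2, 40)
X = np.vstack([class_a, class_b])

# (1) Conventional approach: eigendecomposition of the covariance matrix (PCA).
Xc = X - X.mean(axis=0)
cov_vals, cov_vecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
scores_cov = Xc @ cov_vecs[:, ::-1][:, :2]

# (2) Dissimilarity-based approach: eigendecomposition of the double-centred
#     squared pairwise dissimilarity matrix.
D = squareform(pdist(X, metric="euclidean")) ** 2
n = D.shape[0]
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ D @ J
dis_vals, dis_vecs = np.linalg.eigh(B)
scores_dis = dis_vecs[:, ::-1][:, :2] * np.sqrt(np.maximum(dis_vals[::-1][:2], 0.0))

sep_cov = abs(scores_cov[:10, 0].mean() - scores_cov[10:, 0].mean())
sep_dis = abs(scores_dis[:10, 0].mean() - scores_dis[10:, 0].mean())
print("class separation (covariance PCA):  ", round(float(sep_cov), 2))
print("class separation (dissimilarity):   ", round(float(sep_dis), 2))
```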
Artifact removal from EEG data with empirical mode decomposition
NASA Astrophysics Data System (ADS)
Grubov, Vadim V.; Runnova, Anastasiya E.; Efremova, Tatyana Yu.; Hramov, Alexander E.
2017-03-01
In this paper we propose a novel method for dealing with the physiological artifacts caused by intensive activity of facial and neck muscles and other movements in experimental human EEG recordings. The method is based on analysis of EEG signals with empirical mode decomposition (Hilbert-Huang transform). We introduce the mathematical algorithm of the method with the following steps: empirical mode decomposition of the EEG signal, selection of the empirical modes containing artifacts, removal of those empirical modes, and reconstruction of the initial EEG signal. We test the method on filtering experimental human EEG signals from movement artifacts and show its high efficiency.
NASA Astrophysics Data System (ADS)
Yamamoto, Takanori; Bannai, Hideo; Nagasaki, Masao; Miyano, Satoru
We present new decomposition heuristics for finding the optimal solution for the maximum-weight connected graph problem, which is known to be NP-hard. Previous optimal algorithms for solving the problem decompose the input graph into subgraphs using heuristics based on node degree. We propose new heuristics based on betweenness centrality measures, and show through computational experiments that our new heuristics tend to reduce the number of subgraphs in the decomposition, and therefore could lead to the reduction in computational time for finding the optimal solution. The method is further applied to analysis of biological pathway data.
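A minimal sketch of the betweenness-centrality idea on a toy graph, assuming the networkx package is available: the highest-betweenness node is removed and the resulting subgraphs are listed. This is an illustrative stand-in, not the paper's decomposition algorithm:

```python
import networkx as nx

# Hypothetical pathway-like graph.
G = nx.Graph()
edges = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "e"), ("c", "f"),
         ("f", "g"), ("g", "h"), ("f", "i"), ("i", "j")]
G.add_edges_from(edges)

# Betweenness centrality highlights "bridge" nodes whose removal splits the graph
# into smaller subproblems.
bc = nx.betweenness_centrality(G)
cut = max(bc, key=bc.get)
H = G.copy()
H.remove_node(cut)
components = [sorted(c) for c in nx.connected_components(H)]
print("cut node:", cut)
print("resulting subgraphs:", components)
```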
Mostafanezhad, Isar; Boric-Lubecke, Olga; Lubecke, Victor; Mandic, Danilo P
2009-01-01
Empirical Mode Decomposition has been shown to be effective in the analysis of non-stationary and non-linear signals. As an application to wireless life-signs monitoring, in this paper we use this method to condition the signals obtained from the Doppler device. Random physical movements (fidgeting) of the human subject during a measurement can fall at the same frequency as the heart or respiration rate and interfere with the measurement. It will be shown how Empirical Mode Decomposition can break the radar signal down into its components and help separate and remove the fidgeting interference.
Using dynamic mode decomposition for real-time background/foreground separation in video
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kutz, Jose Nathan; Grosek, Jacob; Brunton, Steven
The technique of dynamic mode decomposition (DMD) is disclosed herein for the purpose of robustly separating video frames into background (low-rank) and foreground (sparse) components in real-time. Foreground/background separation is achieved at the computational cost of just one singular value decomposition (SVD) and one linear equation solve, thus producing results orders of magnitude faster than robust principal component analysis (RPCA). Additional techniques, including techniques for analyzing the video for multi-resolution time-scale components, and techniques for reusing computations to allow processing of streaming video in real time, are also described herein.
NASA Technical Reports Server (NTRS)
Worstell, J. H.; Daniel, S. R.
1981-01-01
A method for the separation and analysis of tetralin hydroperoxide and its decomposition products by high pressure liquid chromatography has been developed. Elution with a single, mixed solvent from a μ-Porasil column was employed. Constant response factors (internal standard method) over large concentration ranges and reproducible retention parameters are reported.
Educational Outcomes and Socioeconomic Status: A Decomposition Analysis for Middle-Income Countries
ERIC Educational Resources Information Center
Nieto, Sandra; Ramos, Raúl
2015-01-01
This article analyzes the factors that explain the gap in educational outcomes between the top and bottom quartile of students in different countries, according to their socioeconomic status. To do so, it uses PISA microdata for 10 middle-income and 2 high-income countries, and applies the Oaxaca-Blinder decomposition method. Its results show that…
Absolute continuity for operator valued completely positive maps on C∗-algebras
NASA Astrophysics Data System (ADS)
Gheondea, Aurelian; Kavruk, Ali Şamil
2009-02-01
Motivated by applicability to quantum operations, quantum information, and quantum probability, we investigate the notion of absolute continuity for operator valued completely positive maps on C∗-algebras, previously introduced by Parthasarathy [in Athens Conference on Applied Probability and Time Series Analysis I (Springer-Verlag, Berlin, 1996), pp. 34-54]. We obtain an intrinsic definition of absolute continuity, we show that the Lebesgue decomposition defined by Parthasarathy is the maximal one among all other Lebesgue-type decompositions and that this maximal Lebesgue decomposition does not depend on the jointly dominating completely positive map, we obtain more flexible formulas for calculating the maximal Lebesgue decomposition, and we point out the nonuniqueness of the Lebesgue decomposition as well as a sufficient condition for uniqueness. In addition, we consider Radon-Nikodym derivatives for absolutely continuous completely positive maps that, in general, are unbounded positive self-adjoint operators affiliated to a certain von Neumann algebra, and we obtain a spectral approximation by bounded Radon-Nikodym derivatives. An application to the existence of the infimum of two completely positive maps is indicated, and formulas in terms of Choi's matrices for the Lebesgue decomposition of completely positive maps in matrix algebras are obtained.
Watterson, James H; Donohue, Joseph P
2011-09-01
Skeletal tissues (rat) were analyzed for ketamine (KET) and norketamine (NKET) following acute ketamine exposure (75 mg/kg i.p.) to examine the influence of bone type and decomposition period on drug levels. Following euthanasia, drug-free (n = 6) and drug-positive (n = 20) animals decomposed outdoors in rural Ontario for 0, 1, or 2 weeks. Skeletal remains were recovered and ground samples of various bones underwent passive methanolic extraction and analysis by GC-MS after solid-phase extraction. Drug levels, expressed as mass-normalized response ratios, were compared across tissue types and decomposition periods. Bone type was a main effect (p < 0.05) for drug level and drug/metabolite level ratio (DMLR) for all decomposition times, except for DMLR after 2 weeks of decomposition. Mean KET level, mean NKET level, and DMLR varied by up to 23-fold, 18-fold, and 5-fold, respectively, between tissue types. Decomposition time was significantly related to DMLR, KET level, and NKET level in 3/7, 4/7, and 1/7 tissue types, respectively. Although substantial site dependence may exist in measured bone drug levels, ratios of drug and metabolite levels should be investigated for utility in discrimination of drug administration patterns in forensic work.
Yuan, Jie; Zheng, Xiaofeng; Cheng, Fei; Zhu, Xian; Hou, Lin; Li, Jingxia; Zhang, Shuoxin
2017-10-24
Historically, intense forest hazards have resulted in an increase in the quantity of fallen wood in the Qinling Mountains. Fallen wood has a decisive influence on the nutrient cycling, carbon budget and ecosystem biodiversity of forests, and fungi are essential for the decomposition of fallen wood. Moreover, decaying dead wood alters fungal communities. The development of high-throughput sequencing methods has facilitated ongoing molecular investigation of forest ecosystems with a focus on fungal communities. In this study, fallen wood and its associated fungal communities were compared at different stages of decomposition to evaluate relative species abundance and species diversity. The physical and chemical factors that alter fungal communities were also compared by performing correspondence analysis according to host tree species across all stages of decomposition. Tree species were the major source of differences in fungal community diversity at all decomposition stages, and fungal communities achieved the highest levels of diversity at the intermediate and late decomposition stages. Interactions between various physical and chemical factors and fungal communities shared the same regulatory mechanisms, and there was no tree species-specific influence. Improving our knowledge of wood-inhabiting fungal communities is crucial for forest ecosystem conservation.
Analysis of Variance with Summary Statistics in Microsoft® Excel®
ERIC Educational Resources Information Center
Larson, David A.; Hsu, Ko-Cheng
2010-01-01
Students regularly are asked to solve Single Factor Analysis of Variance problems given only the sample summary statistics (number of observations per category, category means, and corresponding category standard deviations). Most undergraduate students today use Excel for data analysis of this type. However, Excel, like all other statistical…
Nunes, F P; Garcia, Q S
2015-05-01
The study of litter decomposition and nutrient cycling is essential to understanding the structure and functioning of native forests. Mathematical models can help in understanding local and temporal variations in litter fall and their relationships with environmental variables. The objective of this study was to test the adequacy of mathematical models for leaf litter decomposition in the Atlantic Forest of southeastern Brazil. We studied four native forest sites in Parque Estadual do Rio Doce, an Atlantic Forest Biosphere Reserve, where 200 litterbags (20 × 20 cm, 2 mm nylon mesh) containing 10 grams of litter each were installed. Monthly from 09/2007 to 04/2009, 10 litterbags were removed for determination of mass loss. We compared three nonlinear models: (1) the Olson (1963) exponential model, which considers K constant; (2) the model proposed by Fountain and Schowalter (2004); and (3) the model proposed by Coelho and Borges (2005), which considers K variable; the models were compared through QMR, SQR, SQTC, DMA and the F test. The Fountain and Schowalter (2004) model was inappropriate for this study because it overestimated the decomposition rate. The decay curve analysis showed that the variable-K model was more appropriate, although the values of QMR and DMA revealed no significant difference (p > 0.05) between the models. The analysis showed a better adjustment of DMA using variable K, reinforced by the values of the coefficient of determination (R2). However, convergence problems were observed in this model when estimating outliers in the study areas, which did not occur with the constant-K model. This problem may be related to the non-linear fit of the mass/time values from which the variable K is generated. The constant-K model was shown to be adequate to describe the decomposition curve for each area separately, with the best adjustability and no convergence problems. The results demonstrate the adequacy of the Olson model for estimating tropical forest litter decomposition. Despite using a reduced number of parameters to represent the steps of the decomposition process, the Olson model showed no convergence difficulties. Thus, this model can be used to describe decomposition curves in different types of environments, estimating K appropriately.
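For reference, the constant-K Olson model discussed above is X(t) = X0·exp(-K·t); the sketch below fits K to invented litterbag mass-loss data with scipy (the numbers are illustrative, not the study's measurements).

```python
# Fit the Olson (1963) single-exponential decay model X(t) = X0 * exp(-K * t)
# to hypothetical litterbag data (values are illustrative, not from the study).
import numpy as np
from scipy.optimize import curve_fit

months = np.array([0, 2, 4, 6, 8, 10, 12, 14, 16, 18])                  # time since burial
mass_g = np.array([10.0, 8.9, 7.9, 7.1, 6.4, 5.8, 5.2, 4.8, 4.3, 4.0])  # remaining litter

def olson(t, x0, k):
    return x0 * np.exp(-k * t)

(x0_hat, k_hat), cov = curve_fit(olson, months, mass_g, p0=(10.0, 0.05))
half_life = np.log(2) / k_hat
print(f"K = {k_hat:.3f} per month, estimated half-life = {half_life:.1f} months")
```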
Handling nonnormality and variance heterogeneity for quantitative sublethal toxicity tests.
Ritz, Christian; Van der Vliet, Leana
2009-09-01
The advantages of using regression-based techniques to derive endpoints from environmental toxicity data are clear, and slowly, this superior analytical technique is gaining acceptance. As use of regression-based analysis becomes more widespread, some of the associated nuances and potential problems come into sharper focus. Looking at data sets that cover a broad spectrum of standard test species, we noticed that some model fits to data failed to meet two key assumptions, variance homogeneity and normality, that are necessary for correct statistical analysis via regression-based techniques. Failure to meet these assumptions often is caused by reduced variance at the concentrations showing severe adverse effects. Although commonly used with linear regression analysis, transformation of the response variable only is not appropriate when fitting data using nonlinear regression techniques. Through analysis of sample data sets, including Lemna minor, Eisenia andrei (terrestrial earthworm), and algae, we show that both the so-called Box-Cox transformation and use of the Poisson distribution can help to correct variance heterogeneity and nonnormality and so allow nonlinear regression analysis to be implemented. Both the Box-Cox transformation and the Poisson distribution can be readily implemented into existing protocols for statistical analysis. By correcting for nonnormality and variance heterogeneity, these two statistical tools can be used to encourage the transition to regression-based analysis and the deprecation of less-desirable and less-flexible analytical techniques, such as linear interpolation.
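A hedged illustration of one of the two remedies discussed, the Box-Cox transformation applied on both sides of a nonlinear concentration-response fit, is sketched below; the data, the log-logistic model form and the parameter bounds are assumptions for illustration, not the authors' protocol.

```python
# Transform-both-sides Box-Cox fit of a log-logistic concentration-response curve.
# Data, model form and bounds are illustrative assumptions, not the authors' protocol.
import numpy as np
from scipy import stats, optimize

conc = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0, 16.0])          # test concentrations
resp = np.array([212.0, 205.0, 180.0, 120.0, 45.0, 9.0, 2.0])  # e.g. Lemna frond counts

def loglogistic(c, top, ec50, slope):
    return top / (1.0 + (np.maximum(c, 1e-9) / ec50) ** slope)

# Estimate the Box-Cox parameter from the responses, then fit on the transformed
# scale so that the error variance is (approximately) homogeneous.
_, lam = stats.boxcox(resp)
bc = lambda y: stats.boxcox(y, lmbda=lam)

popt, _ = optimize.curve_fit(
    lambda c, top, ec50, slope: bc(loglogistic(c, top, ec50, slope)),
    conc, bc(resp), p0=(210.0, 2.0, 2.0),
    bounds=([10.0, 0.1, 0.5], [1000.0, 50.0, 6.0]))
print(f"lambda = {lam:.2f}, EC50 = {popt[1]:.2f}")
```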
COMPADRE: an R and web resource for pathway activity analysis by component decompositions.
Ramos-Rodriguez, Roberto-Rafael; Cuevas-Diaz-Duran, Raquel; Falciani, Francesco; Tamez-Peña, Jose-Gerardo; Trevino, Victor
2012-10-15
The analysis of biological networks has become essential to study functional genomic data. Compadre is a tool to estimate pathway/gene set activity indexes using sub-matrix decompositions for biological network analyses. The Compadre pipeline also includes one of the direct uses of activity indexes to detect altered gene sets. For this, the gene expression sub-matrix of a gene set is decomposed into components, which are used to test differences between groups of samples. This procedure is performed with and without differentially expressed genes to decrease false calls. During this process, Compadre also performs an over-representation test. Compadre already implements four decomposition methods [principal component analysis (PCA), Isomaps, independent component analysis (ICA) and non-negative matrix factorization (NMF)], six statistical tests (t- and f-test, SAM, Kruskal-Wallis, Welch and Brown-Forsythe), several gene sets (KEGG, BioCarta, Reactome, GO and MsigDB) and can be easily expanded. Our simulation results shown in Supplementary Information suggest that Compadre detects more pathways than over-representation tools such as DAVID, Babelomics and WebGestalt, and fewer false positives than PLAGE. The output is composed of results from decomposition and over-representation analyses, providing a more complete biological picture. Examples provided in Supplementary Information show the utility, versatility and simplicity of Compadre for analyses of biological networks. Compadre is freely available at http://bioinformatica.mty.itesm.mx:8080/compadre. The R package is also available at https://sourceforge.net/p/compadre.
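The sketch below mimics the basic idea of a decomposition-based activity index: extract the first principal component of a gene-set expression sub-matrix as a per-sample score and test it between two groups. The gene-set size, group labels and the plain t-test are illustrative assumptions, not Compadre's exact pipeline.

```python
# Sketch of a decomposition-based pathway activity index (not Compadre's exact pipeline).
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_genes, n_samples = 25, 40                      # one gene set, two groups of 20 samples
groups = np.array([0] * 20 + [1] * 20)
expr = rng.normal(size=(n_genes, n_samples))
expr[:, groups == 1] += 0.8                      # a coordinated shift in group 1

# Activity index = projection of each sample onto the first principal component
# of the centred gene-set sub-matrix.
centred = expr - expr.mean(axis=1, keepdims=True)
U, s, Vh = np.linalg.svd(centred, full_matrices=False)
activity = s[0] * Vh[0]                          # one score per sample
explained = s[0] ** 2 / np.sum(s ** 2)

t, p = stats.ttest_ind(activity[groups == 0], activity[groups == 1])
print(f"PC1 explains {explained:.0%} of the gene-set variance; t = {t:.2f}, p = {p:.3g}")
```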
NASA Astrophysics Data System (ADS)
Zhi, Y.; Yang, Z. F.; Yin, X. A.
2014-05-01
Decomposition analysis of water footprint (WF) changes, or assessing the changes in WF and identifying the contributions of the factors leading to the changes, is important to water resource management. Instead of focusing on WF from the perspective of administrative regions, we built a framework in which the input-output (IO) model, the structural decomposition analysis (SDA) model and the generating regional IO tables (GRIT) method are combined to implement decomposition analysis of WF in a river basin. This framework is illustrated with the WF of the Haihe River basin (HRB), a typical water-limited river basin, from 2002 to 2007. The total WF in the HRB increased from 4.3 × 10¹⁰ m³ in 2002 to 5.6 × 10¹⁰ m³ in 2007, and the agriculture sector made the dominant contribution to the increase. Both the WF of domestic products (internal) and the WF of imported products (external) increased, and the proportion of external WF rose from 29.1% to 34.4%. The technological effect was the dominant contributor to offsetting the increase of WF. However, the growth of WF caused by the economic structural effect and the scale effect was greater, so the total WF increased. This study provides insights into water challenges in the HRB, proposes possible strategies for the future, and serves as a reference for WF management and policy-making in other water-limited river basins.
Decomposition rates and termite assemblage composition in semiarid Africa
Schuurman, G.
2005-01-01
Outside of the humid tropics, abiotic factors are generally considered the dominant regulators of decomposition, and biotic influences are frequently not considered in predicting decomposition rates. In this study, I examined the effect of termite assemblage composition and abundance on decomposition of wood litter of an indigenous species (Croton megalobotrys) in five terrestrial habitats of the highly seasonal semiarid Okavango Delta region of northern Botswana, to determine whether natural variation in decomposer community composition and abundance influences decomposition rates. I conducted the study in two areas, Xudum and Santawani, with the Xudum study preceding the Santawani study. I assessed termite assemblage composition and abundance using a grid of survey baits (rolls of toilet paper) placed on the soil surface and checked 2-4 times/month. I placed a billet (a section of wood litter) next to each survey bait and measured decomposition in a plot by averaging the mass loss of its billets. Decomposition rates varied up to sixfold among plots within the same habitat and locality, despite the fact that these plots experienced the same climate. In addition, billets decomposed significantly faster during the cooler and drier Santawani study, contradicting climate-based predictions. Because termite incidence was generally higher in Santawani plots, termite abundance initially seemed a likely determinant of decomposition in this system. However, no significant effect of termite incidence on billet mass loss rates was observed among the Xudum plots, where decomposition rates remained low even though termite incidence varied considerably. Considering the incidences of fungus-growing termites and non-fungus-growing termites separately resolves this apparent contradiction: in both Santawani and Xudum, only fungus-growing termites play a significant role in decomposition. This result is mirrored in an analysis of the full data set of combined Xudum and Santawani data. The determination that natural variation in the abundance of a single taxonomic group of soil fauna, a termite subfamily, determines almost all observed variation in decomposition rates supports the emerging view that biotic influences may be important in many biomes and that consideration of decomposer community composition and abundance may be critical for accurate prediction of decomposition rates. © 2005 by the Ecological Society of America.
NASA Astrophysics Data System (ADS)
Jacquin, A. P.; Shamseldin, A. Y.
2009-04-01
This study analyses the sensitivity of the parameters of Takagi-Sugeno-Kang rainfall-runoff fuzzy models previously developed by the authors. These models can be classified into two types, where the first type is intended to account for the effect of changes in catchment wetness and the second type incorporates seasonality as a source of non-linearity in the rainfall-runoff relationship. The sensitivity analysis is performed using two global sensitivity analysis methods, namely Regional Sensitivity Analysis (RSA) and Sobol's Variance Decomposition (SVD). In general, the RSA method has the disadvantage of not being able to detect sensitivities arising from parameter interactions. By contrast, the SVD method is suitable for analysing models where the model response surface is expected to be affected by interactions at a local scale and/or local optima, such as the case of the rainfall-runoff fuzzy models analysed in this study. Data from six catchments of different sizes and geographical locations are used in the sensitivity analysis. The sensitivity of the model parameters is analysed in terms of two measures of goodness of fit, assessing the model performance from different points of view. These measures are the Nash-Sutcliffe criterion and the index of volumetric fit. The results of the study show that the sensitivity of the model parameters depends on both the type of non-linear effects (i.e. changes in catchment wetness or seasonality) that dominates the catchment's rainfall-runoff relationship and the measure used to assess the model performance. Acknowledgements: This research was supported by FONDECYT, Research Grant 11070130. We would also like to express our gratitude to Prof. Kieran M. O'Connor from the National University of Ireland, Galway, for providing the data used in this study.
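For readers unfamiliar with Sobol's Variance Decomposition, the sketch below estimates first-order and total-order sensitivity indices with standard pick-freeze Monte Carlo estimators, using the Ishigami test function as a stand-in for the rainfall-runoff fuzzy model; the function, parameter ranges and sample size are illustrative assumptions.

```python
# Pick-freeze Monte Carlo estimates of Sobol first-order (S1) and total-order (ST)
# indices; the Ishigami function stands in for the rainfall-runoff fuzzy model.
import numpy as np

def model(x):                                   # Ishigami test function (illustrative)
    return np.sin(x[:, 0]) + 7.0 * np.sin(x[:, 1]) ** 2 + 0.1 * x[:, 2] ** 4 * np.sin(x[:, 0])

rng = np.random.default_rng(0)
n, d = 100_000, 3
A = rng.uniform(-np.pi, np.pi, (n, d))          # two independent sample matrices
B = rng.uniform(-np.pi, np.pi, (n, d))
fA, fB = model(A), model(B)
var = np.var(np.concatenate([fA, fB]))

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                         # replace column i of A by column i of B
    fABi = model(ABi)
    S1 = np.mean(fB * (fABi - fA)) / var        # Saltelli (2010) first-order estimator
    ST = 0.5 * np.mean((fA - fABi) ** 2) / var  # Jansen total-order estimator
    print(f"parameter {i}: S1 = {S1:.2f}, ST = {ST:.2f}")
```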
NASA Astrophysics Data System (ADS)
L'vov, Boris V.
2008-02-01
This paper sums up the evolution of the thermochemical approach to the interpretation of solid decompositions over the past 25 years. This period includes two stages related to decomposition studies by different techniques: by ET AAS and QMS in 1981-2001 and by TG in 2002-2007. As a result of the ET AAS and QMS investigations, a method for the determination of absolute rates of solid decompositions was developed and the mechanism of decomposition through congruent dissociative vaporization was discovered. On this basis, in the period from 1997 to 2001, the decomposition mechanisms of several classes of reactants were interpreted and some unusual effects observed in TA were explained. However, the thermochemical approach has not received any support from other TA researchers. One potential reason for this distrust was the unreliability of the E values measured by the traditional Arrhenius plot method. The theoretical analysis and comparison of the metrological features of the different methods used in determinations of thermochemical quantities permitted the conclusion that, in comparison with the Arrhenius plot and second-law methods, the third-law method is very much to be preferred. However, this method cannot be used in kinetic studies based on the Arrhenius approach because its use presupposes measurement of the equilibrium pressures of decomposition products. By contrast, the method of absolute rates is ideally suited for this purpose. As a result of the much higher precision of the third-law method, some quantitative conclusions that follow from the theory were confirmed, and several new effects, which were invisible in the framework of the Arrhenius approach, have been revealed. In spite of the great progress reached in developing a reliable methodology based on the third-law method, the thermochemical approach remains, as before, in little demand.
Aligning Event Logs to Task-Time Matrix Clinical Pathways in BPMN for Variance Analysis.
Yan, Hui; Van Gorp, Pieter; Kaymak, Uzay; Lu, Xudong; Ji, Lei; Chiau, Choo Chiap; Korsten, Hendrikus H M; Duan, Huilong
2018-03-01
Clinical pathways (CPs) are popular healthcare management tools to standardize care and ensure quality. Analyzing CP compliance levels and variances is known to be useful for training and CP redesign purposes. The flexible semantics of the business process model and notation (BPMN) language has been shown to be useful for the modeling and analysis of complex protocols. However, in practical cases one may want to exploit the fact that CPs often take the form of task-time matrices. This paper presents a new method that parses complex BPMN models and aligns traces to the models heuristically. A case study on variance analysis is undertaken, in which a CP from practice and two large sets of patient data from an electronic medical record (EMR) database are used. The results demonstrate that automated variance analysis between BPMN task-time models and real-life EMR data is feasible, whereas that was not the case for existing analysis techniques. We also provide meaningful insights for further improvement.
Mao, Lingai; Chen, Zhizong; Wu, Xinyue; Tang, Xiujuan; Yao, Shuiliang; Zhang, Xuming; Jiang, Boqiong; Han, Jingyi; Wu, Zuliang; Lu, Hao; Nozaki, Tomohiro
2018-04-05
A dielectric barrier discharge (DBD) catalyst hybrid reactor with CeO₂/γ-Al₂O₃ catalyst balls was investigated for benzene decomposition at atmospheric pressure and 30 °C. At an energy density of 37-40 J/L, benzene decomposition was as high as 92.5% when using the hybrid reactor with 5.0 wt% CeO₂/γ-Al₂O₃, while it was 10%-20% when using a normal DBD reactor without a catalyst. Benzene decomposition using the hybrid reactor was almost the same as that using an O₃ catalyst reactor with the same CeO₂/γ-Al₂O₃ catalyst, indicating that O₃ plays a key role in the benzene decomposition. Fourier transform infrared spectroscopy analysis showed that O₃ adsorption on CeO₂/γ-Al₂O₃ promotes the production of adsorbed O₂⁻ and O₂²⁻, which contribute to benzene decomposition over heterogeneous catalysts. Nanoparticles formed as by-products (phenol and 1,4-benzoquinone) of benzene decomposition can be significantly reduced using the CeO₂/γ-Al₂O₃ catalyst. H₂O inhibits benzene decomposition; however, it improves CO₂ selectivity. The deactivated CeO₂/γ-Al₂O₃ catalyst can be regenerated by performing discharges at 100 °C and 192-204 J/L. A decomposition mechanism of benzene over the CeO₂/γ-Al₂O₃ catalyst was proposed. Copyright © 2017 Elsevier B.V. All rights reserved.
Decomposition odour profiling in the air and soil surrounding vertebrate carrion.
Forbes, Shari L; Perrault, Katelynn A
2014-01-01
Chemical profiling of decomposition odour is conducted in the environmental sciences to detect malodourous target sources in air, water or soil. More recently decomposition odour profiling has been employed in the forensic sciences to generate a profile of the volatile organic compounds (VOCs) produced by decomposed remains. The chemical profile of decomposition odour is still being debated with variations in the VOC profile attributed to the sample collection technique, method of chemical analysis, and environment in which decomposition occurred. To date, little consideration has been given to the partitioning of odour between different matrices and the impact this has on developing an accurate VOC profile. The purpose of this research was to investigate the decomposition odour profile surrounding vertebrate carrion to determine how VOCs partition between soil and air. Four pig carcasses (Sus scrofa domesticus L.) were placed on a soil surface to decompose naturally and their odour profile monitored over a period of two months. Corresponding control sites were also monitored to determine the VOC profile of the surrounding environment. Samples were collected from the soil below and the air (headspace) above the decomposed remains using sorbent tubes and analysed using gas chromatography-mass spectrometry. A total of 249 compounds were identified but only 58 compounds were common to both air and soil samples. This study has demonstrated that soil and air samples produce distinct subsets of VOCs that contribute to the overall decomposition odour. Sample collection from only one matrix will reduce the likelihood of detecting the complete spectrum of VOCs, which further confounds the issue of determining a complete and accurate decomposition odour profile. Confirmation of this profile will enhance the performance of cadaver-detection dogs that are tasked with detecting decomposition odour in both soil and air to locate victim remains.
NASA Astrophysics Data System (ADS)
Asanuma, Jun
Variances of the velocity components and scalars are important as indicators of the turbulence intensity. They also can be utilized to estimate surface fluxes in several types of "variance methods", and the estimated fluxes can be regional values if the variances from which they are calculated are regionally representative measurements. Motivated by these considerations, variances measured by an aircraft in the unstable ABL over a flat pine forest during HAPEX-Mobilhy were analyzed within the context of similarity scaling arguments. The variances of temperature and vertical velocity within the atmospheric surface layer were found to follow closely the Monin-Obukhov similarity theory, and to yield reasonable estimates of the surface sensible heat fluxes when they are used in variance methods. This provides a validation of the variance methods with aircraft measurements. On the other hand, the specific humidity variances were influenced by the surface heterogeneity and clearly failed to obey MOS. A simple analysis based on the similarity law for free convection produced a comprehensible and quantitative picture regarding the effect of the surface flux heterogeneity on the statistical moments, and revealed that variances of the active and passive scalars become dissimilar because of their different roles in turbulence. The analysis also indicated that the mean quantities are likewise affected by the heterogeneity, but to a lesser extent than the variances. The temperature variances in the mixed layer (ML) were examined by using a generalized top-down bottom-up diffusion model with some combinations of velocity scales and inversion flux models. The results showed that the surface shear stress exerts considerable influence on the lower ML. ML variance methods were also tested with the temperature and vertical velocity variances, and their feasibility was investigated. Finally, the variances in the ML were analyzed in terms of the local similarity concept; the results confirmed the original hypothesis by Panofsky and McCormick that the local scaling in terms of the local buoyancy flux defines the lower bound of the moments.
NASA Astrophysics Data System (ADS)
Williams, E. K.; Rosenheim, B. E.
2011-12-01
Ramped pyrolysis methodology, such as that used in the programmed-temperature pyrolysis/combustion system (PTP/CS), improves radiocarbon analysis of geologic materials devoid of authigenic carbonate compounds and with low concentrations of extractable authochthonous organic molecules. The approach has improved sediment chronology in organic-rich sediments proximal to Antarctic ice shelves (Rosenheim et al., 2008) and constrained the carbon sequestration potential of suspended sediments in the lower Mississippi River (Roe et al., in review). Although ramped pyrolysis allows for separation of sedimentary organic material based upon relative reactivity, chemical information (i.e. chemical composition of pyrolysis products) is lost during the in-line combustion of pyrolysis products. A first order approximation of ramped pyrolysis/combustion system CO2 evolution, employing a simple Gaussian decomposition routine, has been useful (Rosenheim et al., 2008), but improvements may be possible. First, without prior compound-specific extractions, the molecular composition of sedimentary organic matter is unknown and/or unidentifiable. Second, even if determined as constituents of sedimentary organic material, many organic compounds have unknown or variable decomposition temperatures. Third, mixtures of organic compounds may result in significant chemistry within the pyrolysis reactor, prior to introduction of oxygen along the flow path. Gaussian decomposition of the reaction rate may be too simple to fully explain the combination of these factors. To relate both the radiocarbon age over different temperature intervals and the pyrolysis reaction thermograph (temperature (°C) vs. CO2 evolved (μmol)) obtained from PTP/CS to chemical composition of sedimentary organic material, we present a modeling framework developed based upon the ramped pyrolysis decomposition of simple mixtures of organic compounds (e.g. cellulose, lignin, plant fatty acids) often found in sedimentary organic material to account for changes in thermograph shape. The decompositions will be compositionally verified by 13C NMR analysis of pyrolysis residues from interrupted reactions. This will allow for constraint of decomposition temperatures of individual compounds as well as chemical reactions between volatilized moieties in mixtures of these compounds. We will apply this framework with 13C NMR analysis of interrupted pyrolysis residues and radiocarbon data from PTP/CS analysis of sedimentary organic material from a freshwater marsh wetland in Barataria Bay, Louisiana. We expect to characterize the bulk chemical composition during pyrolysis as well as diagenetic changes with depth. Most importantly, we expect to constrain the potential and the limitations of this modeling framework for application to other depositional environments.
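As a point of reference for the simple Gaussian decomposition routine mentioned above, the sketch below fits a two-component Gaussian mixture to a synthetic CO2-evolution thermograph; the peak positions, widths and amplitudes are invented and are not the authors' data.

```python
# Fit a two-component Gaussian decomposition to a synthetic pyrolysis thermograph
# (temperature vs. CO2 evolved). All numbers are illustrative, not measured data.
import numpy as np
from scipy.optimize import curve_fit

temp = np.linspace(100, 800, 200)                         # reactor temperature, deg C
def gaussians(T, a1, m1, s1, a2, m2, s2):
    return (a1 * np.exp(-0.5 * ((T - m1) / s1) ** 2) +
            a2 * np.exp(-0.5 * ((T - m2) / s2) ** 2))

rng = np.random.default_rng(3)
co2 = gaussians(temp, 4.0, 320.0, 45.0, 2.5, 520.0, 60.0) + 0.1 * rng.standard_normal(temp.size)

p0 = (3.0, 300.0, 50.0, 3.0, 550.0, 50.0)                 # rough initial guesses
popt, _ = curve_fit(gaussians, temp, co2, p0=p0)
for k, (a, m, s) in enumerate(np.reshape(popt, (2, 3)), start=1):
    print(f"component {k}: centre {m:.0f} deg C, width {s:.0f}, area {a * s * np.sqrt(2 * np.pi):.1f}")
```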
NASA Technical Reports Server (NTRS)
Alston, D. W.
1981-01-01
The objective of the research was to design a statistical model that could perform an error analysis of curve fits of wind tunnel test data using analysis of variance and regression analysis techniques. Four related subproblems were defined, and by solving each of these a solution to the general research problem was obtained. The capabilities of the resulting statistical model are considered. The least squares fit is used to determine the nature of the force, moment, and pressure data. The order of the curve fit is increased in order to remove the quadratic effect in the residuals. The analysis of variance is used to determine the magnitude and effect of the error factor associated with the experimental data.
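A compact illustration of the described procedure, increasing the order of a least-squares polynomial fit and using an F test on the reduction in residual sum of squares to judge whether the extra term is warranted, is sketched below with synthetic data (the data and the orders compared are assumptions, not the report's).

```python
# Increase the polynomial order of a least-squares curve fit and use an F test
# on the residual sum of squares to decide whether the extra term is justified.
import numpy as np
from scipy.stats import f as f_dist

rng = np.random.default_rng(7)
alpha = np.linspace(-10, 10, 25)                       # e.g. angle of attack (illustrative)
cl = 0.02 * alpha ** 2 + 0.5 * alpha + 1.0 + 0.3 * rng.standard_normal(alpha.size)

def rss(order):
    c = np.polynomial.polynomial.polyfit(alpha, cl, order)
    resid = cl - np.polynomial.polynomial.polyval(alpha, c)
    return np.sum(resid ** 2)

n = alpha.size
for low in (1, 2, 3):                                  # compare order 'low' against 'low + 1'
    rss0, rss1 = rss(low), rss(low + 1)
    df1, df2 = 1, n - (low + 2)                        # one extra parameter in the larger fit
    F = (rss0 - rss1) / df1 / (rss1 / df2)
    p = f_dist.sf(F, df1, df2)
    print(f"order {low} -> {low + 1}: F = {F:.2f}, p = {p:.3f}")
```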
NASA Astrophysics Data System (ADS)
Crossley, David; de Linage, Caroline; Hinderer, Jacques; Boy, Jean-Paul; Famiglietti, James
2012-05-01
We analyse data from seven superconducting gravimeter (SG) stations in Europe from 2002 to 2007 from the Global Geodynamics Project (GGP) and compare seasonal variations with data from GRACE and several global hydrological models (GLDAS, WGHM and ERA-Interim). Our technique is empirical orthogonal function (EOF) decomposition of the fields that allows for the inherent incompatibility of length scales between ground and satellite observations. GGP stations below the ground surface pose a problem because part of the attraction from soil moisture comes from above the gravimeter, and this gives rise to a complex (mixed) gravity response. The first principal component (PC) of the EOF decomposition is the main indicator for comparing the fields, although for some of the series it accounts for only about 50 per cent of the variance reduction. PCs for GRACE solutions RL04 from CSR and GFZ are filtered with a cosine taper (degrees 20-40) and a Gaussian window (350 km). Significant differences are evident between GRACE solutions from different groups and filters, though they all agree reasonably well with the global hydrological models for the predominantly seasonal signal. We estimate the first PC at 10-d sampling to be accurate to 1 μGal for GGP data, 1.5 μGal for GRACE data and 1 μGal between the three global hydrological models. Within these limits the CNES/GRGS solution and ground GGP data agree at the 79 per cent level, and better when the GGP solution is restricted to the three above-ground stations. The major limitation on the GGP side comes from the water mass distribution surrounding the underground instruments that leads to a complex gravity effect. To solve this we propose a method for correcting the SG residual gravity series for the effects of soil moisture above the station.
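The sketch below shows the bare-bones EOF computation underlying such a comparison: stack each station's series as a row of a space-time anomaly matrix, take the SVD, and read off the first principal component and its explained variance. The station count, sampling and synthetic seasonal signal are assumptions for illustration.

```python
# Bare-bones EOF decomposition of a station x time anomaly matrix via SVD.
# Seven stations and a synthetic seasonal signal are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(5)
n_stations, n_samples = 7, 180                       # ~5 years at 10-day sampling
t = np.arange(n_samples) * 10.0 / 365.25             # time in years
seasonal = np.cos(2 * np.pi * t)                     # common annual cycle
loading = rng.uniform(0.5, 2.0, n_stations)          # station-dependent amplitude (uGal)
field = loading[:, None] * seasonal + 0.5 * rng.standard_normal((n_stations, n_samples))

anomaly = field - field.mean(axis=1, keepdims=True)  # remove each station's mean
U, s, Vh = np.linalg.svd(anomaly, full_matrices=False)
pc1 = s[0] * Vh[0]                                   # first principal component (time series)
eof1 = U[:, 0]                                       # corresponding spatial pattern
explained = s[0] ** 2 / np.sum(s ** 2)
print(f"EOF1 explains {explained:.0%} of the variance; station loadings: {np.round(eof1, 2)}")
```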
Interannual Variability and Trends of Extratropical Ozone. Part 1; Northern Hemisphere
NASA Technical Reports Server (NTRS)
Yung, Yuk L.
2008-01-01
The authors apply principal component analysis (PCA) to the extratropical total column ozone from the combined merged ozone data product and the European Centre for Medium-Range Weather Forecasts assimilated ozone from January 1979 to August 2002. The interannual variability (IAV) of extratropical O₃ in the Northern Hemisphere (NH) is characterized by four main modes. Attributable to dominant dynamical effects, these four modes account for nearly 60% of the total ozone variance in the NH. The patterns of variability are distinctly different from those derived for total O₃ in the tropics. To relate the derived patterns of O₃ to atmospheric dynamics, similar decompositions are performed for the 30-100-hPa geopotential thickness. The results reveal intimate connections between the IAV of total ozone and the atmospheric circulation. The first two leading modes are nearly zonally symmetric and represent the connections to the annular modes and the quasi-biennial oscillation. The other two modes exhibit in-quadrature, wavenumber-1 structures that, when combined, describe the displacement of the polar vortices in response to planetary waves. In the NH, the extrema of these combined modes have preferred locations that suggest fixed topographical and land-sea thermal forcing of the involved planetary waves. Similar spatial patterns and trends in extratropical column ozone are simulated by the Goddard Earth Observing System chemistry-climate model (GEOS-CCM). The decreasing O₃ trend is captured in the first mode. The largest trend occurs at the North Pole, with values of approximately -1 Dobson Unit (DU) yr(-1). There is almost no trend in tropical O₃. The trends derived from PCA are confirmed using a completely independent method, empirical mode decomposition, for zonally averaged O₃ data. The O₃ trend is also captured by mode 1 in the GEOS-CCM, but the decrease is substantially larger than that in the real atmosphere.
Decomposing delta, theta, and alpha time–frequency ERP activity from a visual oddball task using PCA
Bernat, Edward M.; Malone, Stephen M.; Williams, William J.; Patrick, Christopher J.; Iacono, William G.
2008-01-01
Objective Time–frequency (TF) analysis has become an important tool for assessing electrical and magnetic brain activity from event-related paradigms. In electrical potential data, theta and delta activities have been shown to underlie P300 activity, and alpha has been shown to be inhibited during P300 activity. Measures of delta, theta, and alpha activity are commonly taken from TF surfaces. However, methods for extracting relevant activity do not commonly go beyond taking means of windows on the surface, analogous to measuring activity within a defined P300 window in time-only signal representations. The current objective was to use a data-driven method to derive relevant TF components from event-related potential data from a large number of participants in an oddball paradigm. Methods A recently developed PCA approach was employed to extract TF components [Bernat, E. M., Williams, W. J., and Gehring, W. J. (2005). Decomposing ERP time-frequency energy using PCA. Clin Neurophysiol, 116(6), 1314–1334] from an ERP dataset of 2068 17-year-olds (979 males). TF activity was taken from both individual trials and condition averages. Activity spanning frequencies from 0 to 14 Hz and times from stimulus onset to 1312.5 ms was decomposed. Results A coordinated set of time–frequency events was apparent across the decompositions. Similar TF components representing earlier theta followed by delta were extracted from both individual trials and averaged data. Alpha activity, as predicted, was apparent only when time–frequency surfaces were generated from trial-level data, and was characterized by a reduction during the P300. Conclusions Theta, delta, and alpha activities were extracted with predictable time-courses. Notably, this approach was effective at characterizing data from a single electrode. Finally, decomposition of TF data generated from individual trials and condition averages produced similar results, but with predictable differences. Specifically, trial-level data evidenced more, and more varied, theta measures, and accounted for less overall variance. PMID:17027110
Double Bounce Component in Cross-Polarimetric SAR from a New Scattering Target Decomposition
NASA Astrophysics Data System (ADS)
Hong, Sang-Hoon; Wdowinski, Shimon
2013-08-01
Common vegetation scattering theories assume that the Synthetic Aperture Radar (SAR) cross-polarization (cross-pol) signal represents solely volume scattering. We found this assumption incorrect based on SAR phase measurements acquired over the south Florida Everglades wetlands indicating that the cross-pol radar signal often samples the water surface beneath the vegetation. Based on these new observations, we propose that the cross-pol measurement consists of both volume scattering and double bounce components. The simplest multi-bounce scattering mechanism that generates a cross-pol signal occurs by rotated dihedrals. Thus, we use the rotated dihedral mechanism with a probability density function to revise some of the vegetation scattering theories and develop a three-component decomposition algorithm with single bounce, double bounce from both co-pol and cross-pol, and volume scattering components. We applied the new decomposition analysis to both urban and rural environments using Radarsat-2 quad-pol datasets. The decomposition of San Francisco's urban area shows higher double bounce scattering and reduced volume scattering compared to other common three-component decompositions. The decomposition of the rural Everglades area shows that the relations between volume and cross-pol double bounce depend on the vegetation density. The new decomposition can be useful to better understand vegetation scattering behavior over various surfaces and for the estimation of above-ground biomass using SAR observations.
Vega, Daniel R; Baggio, Ricardo; Roca, Mariana; Tombari, Dora
2011-04-01
The "aging-driven" decomposition of zolpidem hemitartrate hemihydrate (form A) has been followed by X-ray powder diffraction (XRPD), and the crystal and molecular structures of the decomposition products studied by single-crystal methods. The process is very similar to the "thermally driven" one, recently described in the literature for form E (Halasz and Dinnebier. 2010. J Pharm Sci 99(2): 871-874), resulting in a two-phase system: the neutral free base (common to both decomposition processes) and, in the present case, a novel zolpidem tartrate monohydrate, unique to the "aging-driven" decomposition. Our room-temperature single-crystal analysis gives for the free base comparable results as the high-temperature XRPD ones already reported by Halasz and Dinnebier: orthorhombic, Pcba, a = 9.6360(10) Å, b = 18.2690(5) Å, c = 18.4980(11) Å, and V = 3256.4(4) Å(3) . The unreported zolpidem tartrate monohydrate instead crystallizes in monoclinic P21 , which, for comparison purposes, we treated in the nonstandard setting P1121 with a = 20.7582(9) Å, b = 15.2331(5) Å, c = 7.2420(2) Å, γ = 90.826(2)°, and V = 2289.73(14) Å(3) . The structure presents two complete moieties in the asymmetric unit (z = 4, z' = 2). The different phases obtained in both decompositions are readily explained, considering the diverse genesis of both processes. Copyright © 2010 Wiley-Liss, Inc.
Wang, Xiaoyue; Wang, Feng; Jiang, Yuji
2013-01-01
Decomposition of plant residues is largely mediated by soil-dwelling microorganisms whose activities are influenced by both climate conditions and properties of the soil. However, a comprehensive understanding of their relative importance remains elusive, mainly because traditional methods, such as soil incubation and environmental surveys, have a limited ability to differentiate between the combined effects of climate and soil. Here, we performed a large-scale reciprocal soil transplantation experiment, whereby microbial communities associated with straw decomposition were examined in three initially identical soils placed in parallel in three climate regions of China (red soil, Chao soil, and black soil, located in midsubtropical, warm-temperate, and cold-temperate zones). Maize straws buried in mesh bags were sampled at 0.5, 1, and 2 years after the burial and subjected to chemical, physical, and microbiological analyses, e.g., phospholipid fatty acid analysis for microbial abundance, community-level physiological profiling, and 16S rRNA gene denaturing gradient gel electrophoresis, respectively, for functional and phylogenetic diversity. Results of aggregated boosted tree analysis show that location rather than soil is the primary determining factor for the rate of straw decomposition and the structures of the associated microbial communities. Principal component analysis indicates that the straw communities are primarily grouped by location at any of the three time points. In contrast, microbial communities in bulk soil remained closely related to one another for each soil. Together, our data suggest that climate (specifically, geographic location) has stronger effects than soil on straw decomposition; moreover, the succession of microbial communities in soil is slower than that in straw residues in response to climate change. PMID:23524671
Thermal decomposition of ammonium hexachloroosmate.
Asanova, T I; Kantor, I; Asanov, I P; Korenev, S V; Yusenko, K V
2016-12-07
Structural changes of (NH₄)₂[OsCl₆] occurring during thermal decomposition in a reducing atmosphere have been studied in situ using combined energy-dispersive X-ray absorption spectroscopy (ED-XAFS) and powder X-ray diffraction (PXRD). According to PXRD, (NH₄)₂[OsCl₆] transforms directly to metallic Os without the formation of any crystalline intermediates, but through a plateau where no reactions occur. XANES and EXAFS data analysed by means of Multivariate Curve Resolution (MCR) show that thermal decomposition occurs with the formation of an amorphous intermediate {OsCl₄}ₓ with a possible polymeric structure. This intermediate, revealed here for the first time, was examined to determine the local atomic structure around osmium. The thermal decomposition of hexachloroosmate is thus considerably more complex than previously assumed and occurs through at least a two-step process, which has never been observed before.
Applications of singular value analysis and partial-step algorithm for nonlinear orbit determination
NASA Technical Reports Server (NTRS)
Ryne, Mark S.; Wang, Tseng-Chan
1991-01-01
An adaptive method in which cruise and nonlinear orbit determination problems can be solved using a single program is presented. It involves singular value decomposition augmented with an extended partial step algorithm. The extended partial step algorithm constrains the size of the correction to the spacecraft state and other solve-for parameters. The correction is controlled by an a priori covariance and a user-supplied bounds parameter. The extended partial step method is an extension of the update portion of the singular value decomposition algorithm. It thus preserves the numerical stability of the singular value decomposition method, while extending the region over which it converges. In linear cases, this method reduces to the singular value decomposition algorithm with the full rank solution. Two examples are presented to illustrate the method's utility.
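A loose reading of the approach is sketched below: an SVD-based Gauss-Newton correction that is scaled back whenever any component exceeds a user-supplied bound times its a priori sigma. The toy exponential model, the bound and the scaling rule are illustrative assumptions, not the algorithm's actual details.

```python
# SVD-based least-squares correction with a crude "partial step" bound:
# each update is scaled back so that no component exceeds `bound` a priori sigmas.
# The toy problem, the bound, and the scaling rule are illustrative assumptions only.
import numpy as np

def bounded_svd_step(jacobian, residual, apriori_sigma, bound=1.0):
    U, s, Vt = np.linalg.svd(jacobian, full_matrices=False)
    s_inv = np.where(s > 1e-12 * s.max(), 1.0 / s, 0.0)   # drop tiny singular values
    dx = Vt.T @ (s_inv * (U.T @ residual))                # full Gauss-Newton correction
    scale = np.max(np.abs(dx) / (bound * apriori_sigma))
    return dx / scale if scale > 1.0 else dx              # shrink the step, keep its direction

# Toy nonlinear model y = a * exp(b * t), iterated from a poor initial state.
t = np.linspace(0.0, 2.0, 20)
y_obs = 3.0 * np.exp(0.7 * t)
x = np.array([2.0, 0.3])                                  # initial (a, b)
sigma = np.array([1.0, 0.5])                              # a priori uncertainties (assumed)
for _ in range(30):
    model = x[0] * np.exp(x[1] * t)
    J = np.column_stack([np.exp(x[1] * t), x[0] * t * np.exp(x[1] * t)])
    x = x + bounded_svd_step(J, y_obs - model, sigma)
print("converged state:", np.round(x, 3))                 # expect approximately [3.0, 0.7]
```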
May, I.; Rowe, J.J.
1965-01-01
A modified Morey bomb was designed which contains a removable nichrome-cased 3.5-ml platinum crucible. This bomb is particularly useful for decompositions of refractory samples for micro- and semimicro-analysis. Temperatures of 400-450 °C and pressures estimated as great as 6000 p.s.i. were maintained in the bomb for periods as long as 24 h. Complete decompositions of rocks, garnet, beryl, chrysoberyl, phenacite, sapphirine, and kyanite were obtained with hydrofluoric acid or a mixture of hydrofluoric and sulfuric acids; the decomposition of chrome refractory was accomplished with hydrochloric acid. Aluminum-rich samples formed difficultly soluble aluminum fluoride precipitates. Because no volatilization losses occur, silica can be determined on sample solutions by a molybdenum-blue procedure using aluminum(III) to complex interfering fluoride. © 1965.
2013-01-01
Background Multiple treatment comparison (MTC) meta-analyses are commonly modeled in a Bayesian framework, and weakly informative priors are typically preferred to mirror familiar data-driven frequentist approaches. Random-effects MTCs have commonly modeled heterogeneity under the assumption that the between-trial variances for all involved treatment comparisons are equal (i.e., the 'common variance' assumption). This approach 'borrows strength' for heterogeneity estimation across treatment comparisons, and thus adds valuable precision when data are sparse. The homogeneous variance assumption, however, is unrealistic and can severely bias variance estimates. Consequently, 95% credible intervals may not retain nominal coverage, and treatment rank probabilities may become distorted. Relaxing the homogeneous variance assumption may be equally problematic due to reduced precision. To regain good precision, moderately informative variance priors or additional mathematical assumptions may be necessary. Methods In this paper we describe four novel approaches to modeling heterogeneity variance: two novel model structures, and two approaches for the use of moderately informative variance priors. We examine the relative performance of all approaches in two illustrative MTC data sets. We particularly compare between-study heterogeneity estimates and model fits, treatment effect estimates and 95% credible intervals, and treatment rank probabilities. Results In both data sets, use of moderately informative variance priors constructed from the pairwise meta-analysis data yielded the best model fit and narrower credible intervals. Imposing consistency equations on variance estimates, assuming variances to be exchangeable, or using empirically informed variance priors also yielded good model fits and narrow credible intervals. The homogeneous variance model yielded high precision at all times, but overall inadequate estimates of between-trial variances. Lastly, treatment rankings were similar among the novel approaches, but considerably different when compared with the homogeneous variance approach. Conclusions MTC models using a homogeneous variance structure appear to perform sub-optimally when between-trial variances vary between comparisons. Using informative variance priors, assuming exchangeability or imposing consistency between heterogeneity variances can all ensure sufficiently reliable and realistic heterogeneity estimation, and thus more reliable MTC inferences. All four approaches should be viable candidates for replacing or supplementing the conventional homogeneous variance MTC model, which is currently the most widely used in practice. PMID:23311298
Association of Psoriasis With the Risk for Type 2 Diabetes Mellitus and Obesity.
Lønnberg, Ann Sophie; Skov, Lone; Skytthe, Axel; Kyvik, Kirsten Ohm; Pedersen, Ole Birger; Thomsen, Simon Francis
2016-07-01
Psoriasis has been shown to be associated with overweight and type 2 diabetes mellitus. The genetic association is unclear. To examine the association among psoriasis, type 2 diabetes mellitus, and body mass index (BMI) (calculated as weight in kilograms divided by height in meters squared) in twins. This cross-sectional, population-based twin study included 34 781 Danish twins, 20 to 71 years of age. Data from a questionnaire on psoriasis was validated against hospital discharge diagnoses of psoriasis and compared with hospital discharge diagnoses of type 2 diabetes mellitus and self-reported BMI. Data were collected in the spring of 2002. Data were analyzed from January 1 to October 31, 2014. Crude and adjusted odds ratios (ORs) were calculated for psoriasis in relation to type 2 diabetes mellitus, increasing BMI, and obesity in the whole population of twins and in 449 psoriasis-discordant twins. Variance component analysis was used to measure genetic and nongenetic effects on the associations. Among the 34 781 questionnaire respondents, 33 588 with complete data were included in the study (15 443 men [46.0%]; 18 145 women [54.0%]; mean [SD] age, 44.5 [7.6] years). After multivariable adjustment, a significant association was found between psoriasis and type 2 diabetes mellitus (odds ratio [OR], 1.53; 95% CI, 1.03-2.27; P = .04) and between psoriasis and increasing BMI (OR, 1.81; 95% CI, 1.28-2.55; P = .001 in individuals with a BMI>35.0). Among psoriasis-discordant twin pairs, the association between psoriasis and obesity was diluted in monozygotic twins (OR, 1.43; 95% CI, 0.50-4.07; P = .50) relative to dizygotic twins (OR, 2.13; 95% CI, 1.03-4.39; P = .04). Variance decomposition showed that additive genetic factors accounted for 68% (95% CI, 60%-75%) of the variance in the susceptibility to psoriasis, for 73% (95% CI, 58%-83%) of the variance in susceptibility to type 2 diabetes mellitus, and for 74% (95% CI, 72%-76%) of the variance in BMI. The genetic correlation between psoriasis and type 2 diabetes mellitus was 0.13 (-0.06 to 0.31; P = .17); between psoriasis and BMI, 0.12 (0.08 to 0.19; P < .001). The environmental correlation between psoriasis and type 2 diabetes mellitus was 0.10 (-0.71 to 0.17; P = .63); between psoriasis and BMI, -0.05 (-0.14 to 0.04; P = .44). This study determines the contribution of genetic and environmental factors to the interaction between obesity, type 2 diabetes mellitus, and psoriasis. Psoriasis, type 2 diabetes mellitus, and obesity are also strongly associated in adults after taking key confounding factors, such as sex, age, and smoking, into account. Results indicate a common genetic etiology for psoriasis and obesity.
Theodorsson-Norheim, E
1986-08-01
Multiple t tests at a fixed p level are frequently used to analyse biomedical data where analysis of variance followed by multiple comparisons or the adjustment of the p values according to Bonferroni would be more appropriate. The Kruskal-Wallis test is a nonparametric 'analysis of variance' which may be used to compare several independent samples. The present program is written in an elementary subset of BASIC and will perform the Kruskal-Wallis test followed by multiple comparisons between the groups on practically any computer programmable in BASIC.
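A present-day equivalent of such a program, using scipy for the Kruskal-Wallis test followed by Bonferroni-adjusted pairwise Mann-Whitney comparisons, might look like the sketch below; the sample data are invented, and this follow-up procedure is one reasonable choice rather than the program's own.

```python
# Kruskal-Wallis test followed by Bonferroni-corrected pairwise comparisons.
# Sample data are invented; the follow-up procedure is one reasonable choice.
from itertools import combinations
from scipy import stats

groups = {
    "control": [4.1, 5.0, 4.7, 5.3, 4.4, 5.1],
    "dose_1":  [5.9, 6.3, 5.5, 6.8, 6.1, 5.7],
    "dose_2":  [7.4, 8.1, 7.0, 8.6, 7.7, 7.9],
}

H, p = stats.kruskal(*groups.values())
print(f"Kruskal-Wallis: H = {H:.2f}, p = {p:.4f}")

pairs = list(combinations(groups, 2))
for a, b in pairs:
    U, p_pair = stats.mannwhitneyu(groups[a], groups[b], alternative="two-sided")
    p_adj = min(1.0, p_pair * len(pairs))          # Bonferroni adjustment
    print(f"{a} vs {b}: U = {U:.1f}, adjusted p = {p_adj:.4f}")
```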
Vortmann, Britta; Nowak, Sascha; Engelhard, Carsten
2013-03-19
Lithium ion batteries (LIBs) are key components for portable electronic devices that are used around the world. However, thermal decomposition products in the battery reduce its lifetime, and decomposition processes are still not understood. In this study, a rapid method for in situ analysis and reaction monitoring in LIB electrolytes is presented for the first time, based on high-resolution mass spectrometry (HR-MS) with low-temperature plasma probe (LTP) ambient desorption/ionization. This proof-of-principle study demonstrates the capabilities of ambient mass spectrometry in battery research. LTP-HR-MS is ideally suited for qualitative analysis in the ambient environment because it allows direct sample analysis independent of the sample size, geometry, and structure. Further, it is environmentally friendly because it eliminates the need for organic solvents that are typically used in separation techniques coupled to mass spectrometry. Accurate mass measurements were used to identify the time-/condition-dependent formation of electrolyte decomposition compounds. A LIB model electrolyte containing ethylene carbonate and dimethyl carbonate was analyzed before and after controlled thermal stress and over the course of several weeks. Major decomposition products identified include difluorophosphoric acid, monofluorophosphoric acid methyl ester, monofluorophosphoric acid dimethyl ester, and hexafluorophosphate. Solvents (i.e., dimethyl carbonate) were partly consumed via an esterification pathway. LTP-HR-MS is considered to be an attractive method for fundamental LIB studies.
Carbon dioxide emissions from the electricity sector in major countries: a decomposition analysis.
Li, Xiangzheng; Liao, Hua; Du, Yun-Fei; Wang, Ce; Wang, Jin-Wei; Liu, Yanan
2018-03-01
The electric power sector is one of the primary sources of CO₂ emissions. Analyzing the influential factors behind CO₂ emissions from the power sector would provide valuable information for reducing the world's CO₂ emissions. Herein, we applied the Divisia decomposition method to analyze the influential factors for CO₂ emissions from the power sector in 11 countries, which accounted for 67% of the world's emissions from 1990 to 2013. We decompose the influential factors for CO₂ emissions into seven areas: the emission coefficient, energy intensity, the share of electricity generation, the share of thermal power generation, electricity intensity, economic activity, and population. The decomposition analysis results show that economic activity, population, and the emission coefficient play positive roles in increasing CO₂ emissions, with contribution rates of 119, 23.9, and 0.5%, respectively. Energy intensity, electricity intensity, the share of electricity generation, and the share of thermal power generation curb CO₂ emissions, with contribution rates of 17.2, 15.7, 7.7, and 2.8%, respectively. The decomposition analysis for each country shows that economic activity and population are the major factors responsible for increasing CO₂ emissions from the power sector. However, the other factors in developed countries can offset the growth in CO₂ emissions due to economic activity.
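For a concrete sense of how a Divisia-type decomposition is computed, the sketch below applies the additive log-mean Divisia index (LMDI) to a reduced three-factor identity C = (C/E)·(E/G)·G (emission coefficient, energy intensity, economic activity); the two-period figures are invented and the factor set is smaller than the paper's seven.

```python
# Additive LMDI decomposition of emissions C = (C/E) * (E/G) * G,
# i.e. emission coefficient x energy intensity x economic activity.
# The two-period figures are invented and the factor set is reduced for brevity.
import math

def logmean(a, b):
    return a if math.isclose(a, b) else (a - b) / (math.log(a) - math.log(b))

# year: (emissions C, energy use E, GDP G) in arbitrary consistent units
data = {2005: (100.0, 250.0, 500.0), 2015: (130.0, 300.0, 800.0)}
(C0, E0, G0), (C1, E1, G1) = data[2005], data[2015]

factors0 = {"coefficient": C0 / E0, "intensity": E0 / G0, "activity": G0}
factors1 = {"coefficient": C1 / E1, "intensity": E1 / G1, "activity": G1}

L = logmean(C1, C0)
effects = {k: L * math.log(factors1[k] / factors0[k]) for k in factors0}
print("total change:", round(C1 - C0, 2))
print({k: round(v, 2) for k, v in effects.items()})   # the effects sum to the total change
```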
Flexible Mediation Analysis With Multiple Mediators.
Steen, Johan; Loeys, Tom; Moerkerke, Beatrijs; Vansteelandt, Stijn
2017-07-15
The advent of counterfactual-based mediation analysis has triggered enormous progress on how, and under what assumptions, one may disentangle path-specific effects upon combining arbitrary (possibly nonlinear) models for mediator and outcome. However, current developments have largely focused on single mediators because required identification assumptions prohibit simple extensions to settings with multiple mediators that may depend on one another. In this article, we propose a procedure for obtaining fine-grained decompositions that may still be recovered from observed data in such complex settings. We first show that existing analytical approaches target specific instances of a more general set of decompositions and may therefore fail to provide a comprehensive assessment of the processes that underpin cause-effect relationships between exposure and outcome. We then outline conditions for obtaining the remaining set of decompositions. Because the number of targeted decompositions increases rapidly with the number of mediators, we introduce natural effects models along with estimation methods that allow for flexible and parsimonious modeling. Our procedure can easily be implemented using off-the-shelf software and is illustrated using a reanalysis of the World Health Organization's Large Analysis and Review of European Housing and Health Status (WHO-LARES) study on the effect of mold exposure on mental health (2002-2003). © The Author(s) 2017. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
A two-stage linear discriminant analysis via QR-decomposition.
Ye, Jieping; Li, Qi
2005-06-01
Linear Discriminant Analysis (LDA) is a well-known method for feature extraction and dimension reduction. It has been used widely in many applications involving high-dimensional data, such as image and text classification. An intrinsic limitation of classical LDA is the so-called singularity problems; that is, it fails when all scatter matrices are singular. Many LDA extensions were proposed in the past to overcome the singularity problems. Among these extensions, PCA+LDA, a two-stage method, received relatively more attention. In PCA+LDA, the LDA stage is preceded by an intermediate dimension reduction stage using Principal Component Analysis (PCA). Most previous LDA extensions are computationally expensive, and not scalable, due to the use of Singular Value Decomposition or Generalized Singular Value Decomposition. In this paper, we propose a two-stage LDA method, namely LDA/QR, which aims to overcome the singularity problems of classical LDA, while achieving efficiency and scalability simultaneously. The key difference between LDA/QR and PCA+LDA lies in the first stage, where LDA/QR applies QR decomposition to a small matrix involving the class centroids, while PCA+LDA applies PCA to the total scatter matrix involving all training data points. We further justify the proposed algorithm by showing the relationship among LDA/QR and previous LDA methods. Extensive experiments on face images and text documents are presented to show the effectiveness of the proposed algorithm.
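A compact sketch of the two-stage idea, a QR decomposition of the class-centroid matrix followed by a classical LDA eigenproblem in the resulting small subspace, is given below; it follows the abstract's general description rather than the authors' exact algorithm, and the toy data are invented.

```python
# Two-stage LDA/QR sketch: (1) QR decomposition of the class-centroid matrix,
# (2) classical LDA on the data projected into that small subspace.
# Follows the abstract's general description, not necessarily the exact algorithm.
import numpy as np

rng = np.random.default_rng(0)
d, n_per_class, k = 500, 20, 3                   # high dimension, few samples, 3 classes
means = rng.normal(scale=3.0, size=(k, d))
X = np.vstack([m + rng.normal(size=(n_per_class, d)) for m in means])
y = np.repeat(np.arange(k), n_per_class)

# Stage 1: QR of the (d x k) centroid matrix gives an orthonormal basis Q.
centroids = np.stack([X[y == c].mean(axis=0) for c in range(k)], axis=1)
Q, _ = np.linalg.qr(centroids)                   # d x k, cheap compared with a full SVD
Z = X @ Q                                        # project data into the k-dimensional subspace

# Stage 2: classical LDA in the reduced space (scatter matrices are now only k x k).
overall = Z.mean(axis=0)
Sw = sum(np.cov(Z[y == c].T, bias=True) * np.sum(y == c) for c in range(k))
Sb = sum(np.sum(y == c) * np.outer(Z[y == c].mean(axis=0) - overall,
                                   Z[y == c].mean(axis=0) - overall) for c in range(k))
evals, evecs = np.linalg.eig(np.linalg.solve(Sw, Sb))
W = evecs[:, np.argsort(evals.real)[::-1][: k - 1]].real   # k-1 discriminant directions
projected = Z @ W
print("projected class means:\n", np.round(
    np.stack([projected[y == c].mean(axis=0) for c in range(k)]), 2))
```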
Merlo, J; Ohlsson, H; Lynch, K F; Chaix, B; Subramanian, S V
2009-12-01
Social epidemiology investigates both individuals and their collectives. Although the limits that define the individual bodies are very apparent, the collective body's geographical or cultural limits (eg "neighbourhood") are more difficult to discern. Also, epidemiologists normally investigate causation as changes in group means. However, many variables of interest in epidemiology may cause a change in the variance of the distribution of the dependent variable. In spite of that, variance is normally considered a measure of uncertainty or a nuisance rather than a source of substantive information. This reasoning is also true in many multilevel investigations, whereas understanding the distribution of variance across levels should be fundamental. This means-centric reductionism is mostly concerned with risk factors and creates a paradoxical situation, as social medicine is not only interested in increasing the (mean) health of the population, but also in understanding and decreasing inappropriate health and health care inequalities (variance). Critical essay and literature review. The present study promotes (a) the application of measures of variance and clustering to evaluate the boundaries one uses in defining collective levels of analysis (eg neighbourhoods), (b) the combined use of measures of variance and means-centric measures of association, and (c) the investigation of causes of health variation (variance-altering causation). Both measures of variance and means-centric measures of association need to be included when performing contextual analyses. The variance approach, a new aspect of contextual analysis that cannot be interpreted in means-centric terms, allows perspectives to be expanded.
Modular analysis of biological networks.
Kaltenbach, Hans-Michael; Stelling, Jörg
2012-01-01
The analysis of complex biological networks has traditionally relied on decomposition into smaller, semi-autonomous units such as individual signaling pathways. With the increased scope of systems biology (models), rational approaches to modularization have become an important topic. With increasing acceptance of de facto modularity in biology, widely different definitions of what constitutes a module have sparked controversies. Here, we therefore review prominent classes of modular approaches based on formal network representations. Despite some promising research directions, several important theoretical challenges remain open on the way to formal, function-centered modular decompositions for dynamic biological networks.
NASA Astrophysics Data System (ADS)
Tobler, M.; White, D. A.; Abbene, M. L.; Burst, S. L.; McCulley, R. L.; Barnes, P. W.
2016-02-01
Decomposition is a crucial component of global biogeochemical cycles that influences the fate and residence time of carbon and nutrients in organic matter pools, yet the processes controlling litter decomposition in coastal marshes are not fully understood. We conducted a series of field studies to examine what role photodegradation, a process driven in part by solar UV radiation (280-400 nm), plays in the decomposition of the standing dead litter of Sagittaria lancifolia and Spartina patens, two common species in marshes of intermediate salinity in southern Louisiana, USA. Results indicate that the exclusion of solar UV significantly altered litter mass loss, but the magnitude and direction of these effects varied depending on species, height of the litter above the water surface and the stage of decomposition. Over one growing season, S. lancifolia litter exposed to ambient solar UV had significantly less mass loss compared to litter exposed to attenuated UV over the initial phase of decomposition (0-5 months; ANOVA P=0.004) then treatment effects switched in the latter phase of the study (5-7 months; ANOVA P<0.001). Similar results were found in S. patens over an 11-month period. UV exposure reduced total C, N and lignin by 24-33% in remaining tissue with treatment differences most pronounced in S. patens. Phospholipid fatty-acid analysis (PFLA) indicated that UV also significantly altered microbial (bacterial) biomass and bacteria:fungi ratios of decomposing litter. These findings, and others, indicate that solar UV can have positive and negative net effects on litter decomposition in marsh plants with inhibition of biotic (microbial) processes occurring early in the decomposition process then shifting to enhancement of decomposition via abiotic (photodegradation) processes later in decomposition. Photodegradation of standing litter represents a potentially significant pathway of C and N loss from these coastal wetland ecosystems.
Variation of gene expression in Bacillus subtilis samples of fermentation replicates.
Zhou, Ying; Yu, Wen-Bang; Ye, Bang-Ce
2011-06-01
The application of comprehensive gene expression profiling technologies to compare wild-type and mutant microorganism samples, or to assess molecular differences between various treatments, is widespread. However, little is known about the normal variation of gene expression in microorganisms. In this study, an Agilent customized microarray representing 4,106 genes was used to quantify transcript levels in five replicate flasks to assess normal variation in Bacillus subtilis gene expression. CV analysis and analysis of variance were employed to investigate the normal variance of genes and the components of variance, respectively. The results showed that over 80% of the total variation was caused by biological variance. For the 12 replicates, 451 of 4,106 genes exhibited variance with CV values over 10%. The functional category enrichment analysis demonstrated that these variable genes were mainly involved in cell type differentiation, cell type localization, cell cycle and DNA processing, and spore or cyst coat. Using power analysis, the minimal biological replicate number for a B. subtilis microarray experiment was determined to be six. The results contribute to the definition of the baseline level of variability in B. subtilis gene expression and emphasize the importance of replicate microarray experiments.
Decomposition of the Total Effect in the Presence of Multiple Mediators and Interactions.
Bellavia, Andrea; Valeri, Linda
2018-06-01
Mediation analysis allows decomposing a total effect into a direct effect of the exposure on the outcome and an indirect effect operating through a number of possible hypothesized pathways. Recent studies have provided formal definitions of direct and indirect effects when multiple mediators are of interest and have described parametric and semiparametric methods for their estimation. Investigating direct and indirect effects with multiple mediators, however, can be challenging in the presence of multiple exposure-mediator and mediator-mediator interactions. In this paper we derive a decomposition of the total effect that unifies mediation and interaction when multiple mediators are present. We illustrate the properties of the proposed framework in a secondary analysis of a pragmatic trial for the treatment of schizophrenia. The decomposition is employed to investigate the interplay of side effects and psychiatric symptoms in explaining the effect of antipsychotic medication on quality of life in schizophrenia patients. Our result offers a valuable tool to identify the proportions of total effect due to mediation and interaction when more than one mediator is present, providing the finest decomposition of the total effect that unifies multiple mediators and interactions.
Zhang, Lisha; Zhang, Songhe; Lv, Xiaoyang; Qiu, Zheng; Zhang, Ziqiu; Yan, Liying
2018-08-15
This study investigated the alterations in biomass, nutrient and dissolved organic matter concentrations in overlying water and determined the bacterial 16S rRNA gene in biofilms attached to plant residues during the decomposition of Myriophyllum verticillatum. The 55-day decomposition experiment showed that the plant decay process can be well described by an exponential model, with an average decomposition rate of 0.037 d⁻¹. Total organic carbon, total nitrogen, and organic nitrogen concentrations increased significantly in overlying water during decomposition compared to the control within 35 d. Results from excitation emission matrix-parallel factor analysis showed that humic acid-like and tyrosine acid-like substances might originate from plant degradation processes. Tyrosine acid-like substances showed a clear correlation with organic nitrogen and total nitrogen (p<0.01). Decomposition rates were positively related to pH, total organic carbon, oxidation-reduction potential and dissolved oxygen, but negatively related to temperature in overlying water. Microbe densities attached to plant residues increased as decomposition proceeded. The most dominant phylum was Bacteroidetes (>46%) at 7 d, Chlorobi (20%-44%) or Proteobacteria (25%-34%) at 21 d, and Chlorobi (>40%) at 55 d. Among microbes attached to plant residues, sugar- and polysaccharide-degrading genera including Bacteroides, Blvii28, Fibrobacter, and Treponema dominated at 7 d, while Chlorobaculum, Rhodobacter, Methanobacterium, Thiobaca, Methanospirillum and Methanosarcina dominated at 21 d and 55 d. These results provide insight into dissolved organic matter release and bacterial community shifts during the decomposition of submerged macrophytes. Copyright © 2018 Elsevier B.V. All rights reserved.
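The single-exponential decay model mentioned above, m(t) = m0·exp(−kt), can be fitted to remaining-mass data by linear regression on the log-transformed mass; the sample points below are invented for illustration and are not the study's measurements.

```python
import numpy as np

# Illustrative remaining-mass fractions over a 55-day decomposition experiment
days = np.array([0, 7, 21, 35, 55], dtype=float)
mass_fraction = np.array([1.00, 0.78, 0.47, 0.29, 0.13])

# m(t) = m0 * exp(-k t)  =>  ln m(t) = ln m0 - k t, so k is minus the slope
slope, intercept = np.polyfit(days, np.log(mass_fraction), 1)
k = -slope
print(f"estimated decomposition rate k = {k:.3f} per day")
```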
Study on the decomposition of trace benzene over V2O5-WO3 ...
Commercial and laboratory-prepared V2O5–WO3/TiO2-based catalysts with different compositions were tested for catalytic decomposition of chlorobenzene (ClBz) in simulated flue gas. Resonance enhanced multiphoton ionization-time of flight mass spectrometry (REMPI-TOFMS) was employed to measure real-time, trace concentrations of ClBz contained in the flue gas before and after the catalyst. The effects of various parameters, including vanadium content of the catalyst, the catalyst support, as well as the reaction temperature on decomposition of ClBz were investigated. The results showed that the ClBz decomposition efficiency was significantly enhanced when nano-TiO2 instead of conventional TiO2 was used as the catalyst support. No promotion effects were found in the ClBz decomposition process when the catalysts were wet-impregnated with CuO and CeO2. Tests with different concentrations (1,000, 500, and 100 ppb) of ClBz showed that ClBz-decomposition efficiency decreased with increasing concentration, unless active sites were plentiful. A comparison between ClBz and benzene decomposition on the V2O5–WO3/TiO2-based catalyst and the relative kinetics analysis showed that two different active sites were likely involved in the decomposition mechanism and the V=O and V-O-Ti groups may only work for the degradation of the phenyl group and the benzene ring rather than the C-Cl bond. V2O5-WO3/TiO2 based catalysts, that have been used for destruction of a wide variet
Thermal properties of Bentonite Modified with 3-aminopropyltrimethoxysilane
NASA Astrophysics Data System (ADS)
Pramono, E.; Pratiwi, W.; Wahyuningrum, D.; Radiman, C. L.
2018-03-01
Chemical modifications of Bentonite (BNT) clay have been carried out using 3-aminopropyltrimethoxysilane (APS) in various solvent media. The degradation properties of the products (BNTAPS) were characterized by thermogravimetric analysis (TGA). Samples were heated from 30 to 700°C at a heating rate of 10°C/min, and the total grafted silane amount was determined from the weight loss between 200 and 600°C. The TGA thermograms showed three main decomposition regions, attributed to the elimination of physically adsorbed water, the decomposition of silane and the dehydroxylation of Bentonite. High weight loss attributed to the thermal decomposition of silane was observed between 200 and 550°C. Quantitative analysis of the grafted silane showed higher silane loading when a solvent with high surface energy was used, indicating that the type of solvent affects the interaction and adsorption of APS in the BNT platelets.
NASA Astrophysics Data System (ADS)
Dekavalla, Maria; Argialas, Demetre
2017-07-01
The analysis of undersea topography and geomorphological features provides necessary information to related disciplines and many applications. The development of an automated knowledge-based classification approach for undersea topography and geomorphological features is challenging due to their multi-scale nature. The aim of the study is to develop and evaluate an automated knowledge-based OBIA approach to: i) decompose the global undersea topography to multi-scale regions of distinct morphometric properties, and ii) assign the derived regions to characteristic geomorphological features. First, the global undersea topography was decomposed through the SRTM30_PLUS bathymetry data to the so-called morphometric objects of discrete morphometric properties and spatial scales defined by data-driven methods (local variance graphs and nested means) and multi-scale analysis. The derived morphometric objects were combined with additional relative topographic position information computed with a self-adaptive pattern recognition method (geomorphons), and auxiliary data, and were assigned to characteristic undersea geomorphological feature classes through a knowledge base developed from standard definitions. The decomposition of the SRTM30_PLUS data to morphometric objects was considered successful for the requirements of maximizing intra-object homogeneity and inter-object heterogeneity, based on the near-zero values of Moran's I and the low values of the weighted variance index. The knowledge-based classification approach was tested for its transferability in six case studies of various tectonic settings and achieved the efficient extraction of 11 undersea geomorphological feature classes. The classification results for the six case studies were compared with the digital global seafloor geomorphic features map (GSFM). The 11 undersea feature classes and their producer's accuracies with respect to the relevant GSFM areas were Basin (95%), Continental Shelf (94.9%), Trough (88.4%), Plateau (78.9%), Continental Slope (76.4%), Trench (71.2%), Abyssal Hill (62.9%), Abyssal Plain (62.4%), Ridge (49.8%), Seamount (48.8%) and Continental Rise (25.4%). The knowledge-based OBIA classification approach was considered transferable since the percentages of spatial and thematic agreement between most of the classified undersea feature classes and the GSFM exhibited low deviations across the six case studies.
The evolutionary stability of cross-sex, cross-trait genetic covariances.
Gosden, Thomas P; Chenoweth, Stephen F
2014-06-01
Although knowledge of the selective agents behind the evolution of sexual dimorphism has advanced considerably in recent years, we still lack a clear understanding of the evolutionary durability of cross-sex genetic covariances that often constrain its evolution. We tested the relative stability of cross-sex genetic covariances for a suite of homologous contact pheromones of the fruit fly Drosophila serrata, along a latitudinal gradient where these traits have diverged in mean. Using a Bayesian framework, which allowed us to account for uncertainty in all parameter estimates, we compared divergence in the total amount and orientation of genetic variance across populations, finding divergence in orientation but not total variance. We then statistically compared orientation divergence of within-sex (G) to cross-sex (B) covariance matrices. In line with a previous theoretical prediction, we find that the cross-sex covariance matrix, B, is more variable than either within-sex G matrix. Decomposition of B matrices into their symmetrical and nonsymmetrical components revealed that instability is linked to the degree of asymmetry. We also find that the degree of asymmetry correlates with latitude suggesting a role for spatially varying natural selection in shaping genetic constraints on the evolution of sexual dimorphism. © 2014 The Author(s). Evolution © 2014 The Society for the Study of Evolution.
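The decomposition of the cross-sex covariance matrix B into symmetric and nonsymmetric components referred to above follows the standard identity B = (B + Bᵀ)/2 + (B − Bᵀ)/2. A small numpy sketch with an arbitrary matrix and one simple asymmetry index (the asymmetry measure used by the authors may differ):

```python
import numpy as np

B = np.array([[1.0, 0.6, 0.2],
              [0.4, 1.2, 0.5],
              [0.3, 0.1, 0.9]])   # arbitrary cross-sex covariance matrix, for illustration

S = (B + B.T) / 2     # symmetric component
A = (B - B.T) / 2     # nonsymmetric (skew-symmetric) component
assert np.allclose(B, S + A)

# One simple index of asymmetry: the share of B's squared norm carried by A
asymmetry = np.linalg.norm(A, "fro") ** 2 / np.linalg.norm(B, "fro") ** 2
print(f"asymmetry share = {asymmetry:.3f}")
```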
A behavioral-genetic investigation of bulimia nervosa and its relationship with alcohol use disorder
Trace, Sara Elizabeth; Thornton, Laura Marie; Baker, Jessica Helen; Root, Tammy Lynn; Janson, Lauren Elizabeth; Lichtenstein, Paul; Pedersen, Nancy Lee; Bulik, Cynthia Marie
2013-01-01
Bulimia nervosa (BN) and alcohol use disorder (AUD) frequently co-occur and may share genetic factors; however, the nature of their association is not fully understood. We assessed the extent to which the same genetic and environmental factors contribute to liability to BN and AUD. A bivariate structural equation model using a Cholesky decomposition was fit to data from 7,241 women who participated in the Swedish Twin study of Adults: Genes and Environment. The proportion of variance accounted for by genetic and environmental factors for BN and AUD and the genetic and environmental correlations between these disorders were estimated. In the best-fitting model, the heritability estimates were 0.55 (95% CI: 0.37; 0.70) for BN and 0.62 (95% CI: 0.54; 0.70) for AUD. Unique environmental factors accounted for the remainder of variance for BN. The genetic correlation between BN and AUD was 0.23 (95% CI: 0.01; 0.44), and the correlation between the unique environmental factors for the two disorders was 0.35 (95% CI: 0.08; 0.61), suggesting moderate overlap in these factors. Findings from this investigation provide additional support that some of the same genetic factors may influence liability to both BN and AUD. PMID:23790978
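As background to the Cholesky parameterization used here, a bivariate additive genetic covariance matrix can be written as A = L·Lᵀ with lower-triangular path coefficients, from which the genetic correlation follows; the path values below are illustrative only, not the study's estimates.

```python
import numpy as np

# Illustrative lower-triangular genetic path coefficients (a11, a21, a22)
L = np.array([[0.74, 0.00],
              [0.17, 0.77]])

A = L @ L.T                                   # additive genetic covariance implied by the paths
rG = A[0, 1] / np.sqrt(A[0, 0] * A[1, 1])     # genetic correlation between the two traits
print(f"genetic correlation rG = {rG:.2f}")
```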
NASA Astrophysics Data System (ADS)
Zhang, Yi; Zhao, Yanxia; Wang, Chunyi; Chen, Sining
2017-11-01
Assessing the impact of climate change on crop production while accounting for uncertainties is essential for properly identifying sustainable agricultural practices and making decisions about them. In this study, we employed 24 climate projections consisting of the combinations of eight GCMs and three emission scenarios, representing climate projection uncertainty, and two statistical crop models with 100 parameter sets each, representing parameter uncertainty within the crop models. The goal of this study was to evaluate the impact of climate change on maize (Zea mays L.) yield at three locations (Benxi, Changling, and Hailun) across Northeast China (NEC) in the periods 2010-2039 and 2040-2069, taking 1976-2005 as the baseline period. The multi-model ensemble method is an effective way to deal with these uncertainties. The results of the ensemble simulations showed that maize yield reductions were less than 5% in both future periods relative to the baseline. To further understand the contributions of individual sources of uncertainty, such as climate projections and crop model parameters, to the ensemble yield simulations, variance decomposition was performed. The results indicated that the uncertainty from climate projections was much larger than that contributed by crop model parameters. The increased ensemble yield variance revealed increasing uncertainty in the yield simulations for the future periods.
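One simple way to perform such a variance decomposition is an ANOVA-style partition: group the simulated yields by each uncertainty source and compare the variance of the conditional means to the total variance. The sketch below uses an invented toy yield simulator, not the study's crop models.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
gcms = [f"gcm{i}" for i in range(8)]
scenarios = ["rcp26", "rcp45", "rcp85"]
params = range(100)

# Toy yield simulator: GCM and scenario shift yields more than crop-model parameters do
gcm_eff = {g: rng.normal(0, 0.6) for g in gcms}
scen_eff = {s: rng.normal(0, 0.3) for s in scenarios}
par_eff = {p: rng.normal(0, 0.1) for p in params}
runs = [(g, s, p, 6.0 + gcm_eff[g] + scen_eff[s] + par_eff[p])
        for g, s, p in itertools.product(gcms, scenarios, params)]
yields = np.array([y for *_, y in runs])
total_var = yields.var()

def main_effect_share(index):
    """Variance of conditional means over one factor / total variance (first-order share)."""
    levels = {}
    for run in runs:
        levels.setdefault(run[index], []).append(run[3])
    cond_means = np.array([np.mean(v) for v in levels.values()])
    return cond_means.var() / total_var

for name, idx in [("climate model", 0), ("emission scenario", 1), ("crop parameters", 2)]:
    print(f"{name}: {main_effect_share(idx):.2f} of ensemble yield variance")
```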
Vowel category dependence of the relationship between palate height, tongue height, and oral area.
Hasegawa-Johnson, Mark; Pizza, Shamala; Alwan, Abeer; Cha, Jul Setsu; Haker, Katherine
2003-06-01
This article evaluates intertalker variance of oral area, logarithm of the oral area, tongue height, and formant frequencies as a function of vowel category. The data consist of coronal magnetic resonance imaging (MRI) sequences and acoustic recordings of 5 talkers, each producing 11 different vowels. Tongue height (left, right, and midsagittal), palate height, and oral area were measured in 3 coronal sections anterior to the oropharyngeal bend and were subjected to multivariate analysis of variance, variance ratio analysis, and regression analysis. The primary finding of this article is that oral area (between palate and tongue) showed less intertalker variance during production of vowels with an oral place of articulation (palatal and velar vowels) than during production of vowels with a uvular or pharyngeal place of articulation. Although oral area variance is place dependent, percentage variance (log area variance) is not place dependent. Midsagittal tongue height in the molar region was positively correlated with palate height during production of palatal vowels, but not during production of nonpalatal vowels. Taken together, these results suggest that small oral areas are characterized by relatively talker-independent vowel targets and that meeting these talker-independent targets is important enough that each talker adjusts his or her own tongue height to compensate for talker-dependent differences in constriction anatomy. Computer simulation results are presented to demonstrate that these results may be explained by an acoustic control strategy: When talkers with very different anatomical characteristics try to match talker-independent formant targets, the resulting area variances are minimized near the primary vocal tract constriction.
WASP (Write a Scientific Paper) using Excel 9: Analysis of variance.
Grech, Victor
2018-06-01
Analysis of variance (ANOVA) may be required by researchers as an inferential statistical test when more than two means require comparison. This paper explains how to perform ANOVA in Microsoft Excel. Copyright © 2018 Elsevier B.V. All rights reserved.
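For readers who want to cross-check an Excel ANOVA result, the same one-way test can be run in a few lines of Python (this is not the Excel procedure the paper describes); the three groups below are invented.

```python
from scipy import stats

# Three invented treatment groups: more than two means to compare
group_a = [23.1, 24.8, 22.5, 25.0, 23.9]
group_b = [26.2, 27.1, 25.8, 26.9, 27.4]
group_c = [22.0, 21.5, 23.2, 22.8, 21.9]

f_stat, p_value = stats.f_oneway(group_a, group_b, group_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```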
Noise and drift analysis of non-equally spaced timing data
NASA Technical Reports Server (NTRS)
Vernotte, F.; Zalamansky, G.; Lantz, E.
1994-01-01
Generally, it is possible to obtain equally spaced timing data from oscillators. The measurement of the drifts and noises affecting oscillators is then performed by using a variance (Allan variance, modified Allan variance, or time variance) or a system of several variances (multivariance method). However, in some cases, several samples, or even several sets of samples, are missing. In the case of millisecond pulsar timing data, for instance, observations are quite irregularly spaced in time. Nevertheless, since some observations are very close together (one minute) and since the timing data sequence is very long (more than ten years), information on both short-term and long-term stability is available. Unfortunately, a direct variance analysis is not possible without interpolating missing data. Different interpolation algorithms (linear interpolation, cubic spline) are used to calculate variances in order to verify that they neither lose information nor add erroneous information. A comparison of the results of the different algorithms is given. Finally, the multivariance method was adapted to the measurement sequence of the millisecond pulsar timing data: the responses of each variance of the system are calculated for each type of noise and drift, with the same missing samples as in the pulsar timing sequence. An estimation of precision, dynamics, and separability of this method is given.
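For equally spaced fractional-frequency data, the non-overlapping Allan variance reduces to half the mean squared difference of consecutive cluster averages. The sketch below shows that baseline computation only and does not address the missing-data interpolation or multivariance issues studied in the paper.

```python
import numpy as np

def allan_variance(y, tau0, m):
    """Non-overlapping Allan variance of equally spaced fractional-frequency data y.

    tau0 : sampling interval; m : averaging factor, so the averaging time is tau = m * tau0.
    """
    n_clusters = len(y) // m
    cluster_means = y[: n_clusters * m].reshape(n_clusters, m).mean(axis=1)
    diffs = np.diff(cluster_means)
    return 0.5 * np.mean(diffs ** 2), m * tau0

rng = np.random.default_rng(0)
white_fm = rng.normal(0, 1e-11, size=100_000)      # white frequency noise, for illustration
for m in (1, 10, 100, 1000):
    avar, tau = allan_variance(white_fm, tau0=1.0, m=m)
    print(f"tau = {tau:7.0f} s   sigma_y(tau) = {np.sqrt(avar):.2e}")
```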
NASA Astrophysics Data System (ADS)
Makovetskii, A. N.; Tabatchikova, T. I.; Yakovleva, I. L.; Tereshchenko, N. A.; Mirzaev, D. A.
2013-06-01
The decomposition kinetics of austenite that forms in the 13KhFA low-alloyed pipe steel upon heating the samples into the intercritical temperature interval (ICI) and holding for 5 or 30 min has been studied by high-speed dilatometry. The dilatometry results are supplemented by microstructure analysis. Thermokinetic diagrams of the decomposition of the γ phase are presented. It is concluded that an increase in the duration of exposure in the intercritical interval leads to a significant increase in the stability of the γ phase.
Relation between SM-covers and SM-decompositions of Petri nets
NASA Astrophysics Data System (ADS)
Karatkevich, Andrei; Wiśniewski, Remigiusz
2015-12-01
The task of finding, for a given Petri net, a set of sequential components that together represent the behavior of the net arises often in the formal analysis of Petri nets and in applications of Petri nets to logic control. This task comes in two variants: obtaining a Petri net cover or a decomposition. A Petri net cover selects a set of subnets of the given net, whereas the sequential nets forming a decomposition may contain additional places that do not belong to the decomposed net. The paper discusses the differences and relations between the two tasks and their results.
NASA Astrophysics Data System (ADS)
Ladriere, J.
1992-04-01
The thermal decompositions of K3Fe(ox)3·3H2O and K2Fe(ox)2·2H2O in nitrogen have been studied using Mössbauer spectroscopy, X-ray diffraction and thermal analysis methods in order to determine the nature of the solid residues obtained after each stage of decomposition. In particular, after dehydration at 113°C, the ferric complex is reduced to a ferrous compound, with a quadrupole splitting of 3.89 mm/s, which corresponds to the anhydrous form of K2Fe(ox)2·2H2O.
Tan, Linghua; Xu, Jianhua; Li, Shiying; Li, Dongnan; Dai, Yuming; Kou, Bo; Chen, Yu
2017-05-02
Novel graphitic carbon nitride/CuO (g-C₃N₄/CuO) nanocomposite was synthesized through a facile precipitation method. Due to the strong ion-dipole interaction between copper ions and nitrogen atoms of g-C₃N₄, CuO nanorods (length 200-300 nm, diameter 5-10 nm) were directly grown on g-C₃N₄, forming a g-C₃N₄/CuO nanocomposite, which was confirmed via X-ray diffraction (XRD), transmission electron microscopy (TEM), field emission scanning electron microscopy (FESEM), and X-ray photoelectron spectroscopy (XPS). Finally, thermal decomposition of ammonium perchlorate (AP) in the absence and presence of the prepared g-C₃N₄/CuO nanocomposite was examined by differential thermal analysis (DTA), and thermal gravimetric analysis (TGA). The g-C₃N₄/CuO nanocomposite showed promising catalytic effects for the thermal decomposition of AP. Upon addition of 2 wt % nanocomposite with the best catalytic performance (g-C₃N₄/20 wt % CuO), the decomposition temperature of AP was decreased by up to 105.5 °C and only one decomposition step was found instead of the two steps commonly reported in other examples, demonstrating the synergistic catalytic activity of the as-synthesized nanocomposite. This study demonstrated a successful example regarding the direct growth of metal oxide on g-C₃N₄ by ion-dipole interaction between metallic ions, and the lone pair electrons on nitrogen atoms, which could provide a novel strategy for the preparation of g-C₃N₄-based nanocomposite.
Niang, Oumar; Thioune, Abdoulaye; El Gueirea, Mouhamed Cheikh; Deléchelle, Eric; Lemoine, Jacques
2012-09-01
The major problem with the empirical mode decomposition (EMD) algorithm is its lack of a theoretical framework. So, it is difficult to characterize and evaluate this approach. In this paper, we propose, in the 2-D case, the use of an alternative implementation to the algorithmic definition of the so-called "sifting process" used in the original Huang's EMD method. This approach, especially based on partial differential equations (PDEs), was presented by Niang in previous works, in 2005 and 2007, and relies on a nonlinear diffusion-based filtering process to solve the mean envelope estimation problem. In the 1-D case, the efficiency of the PDE-based method, compared to the original EMD algorithmic version, was also illustrated in a recent paper. Recently, several 2-D extensions of the EMD method have been proposed. Despite some effort, 2-D versions for EMD appear poorly performing and are very time consuming. So in this paper, an extension to the 2-D space of the PDE-based approach is extensively described. This approach has been applied in cases of both signal and image decomposition. The obtained results confirm the usefulness of the new PDE-based sifting process for the decomposition of various kinds of data. Some results have been provided in the case of image decomposition. The effectiveness of the approach encourages its use in a number of signal and image applications such as denoising, detrending, or texture analysis.
Mathew, Boby; Holand, Anna Marie; Koistinen, Petri; Léon, Jens; Sillanpää, Mikko J
2016-02-01
A novel reparametrization-based INLA approach as a fast alternative to MCMC for the Bayesian estimation of genetic parameters in multivariate animal model is presented. Multi-trait genetic parameter estimation is a relevant topic in animal and plant breeding programs because multi-trait analysis can take into account the genetic correlation between different traits and that significantly improves the accuracy of the genetic parameter estimates. Generally, multi-trait analysis is computationally demanding and requires initial estimates of genetic and residual correlations among the traits, while those are difficult to obtain. In this study, we illustrate how to reparametrize covariance matrices of a multivariate animal model/animal models using modified Cholesky decompositions. This reparametrization-based approach is used in the Integrated Nested Laplace Approximation (INLA) methodology to estimate genetic parameters of multivariate animal model. Immediate benefits are: (1) to avoid difficulties of finding good starting values for analysis which can be a problem, for example in Restricted Maximum Likelihood (REML); (2) Bayesian estimation of (co)variance components using INLA is faster to execute than using Markov Chain Monte Carlo (MCMC) especially when realized relationship matrices are dense. The slight drawback is that priors for covariance matrices are assigned for elements of the Cholesky factor but not directly to the covariance matrix elements as in MCMC. Additionally, we illustrate the concordance of the INLA results with the traditional methods like MCMC and REML approaches. We also present results obtained from simulated data sets with replicates and field data in rice.
Livshits, G; Yakovenko, K; Ginsburg, E; Kobyliansky, E
1998-01-01
The present study utilized pedigree data from three ethnically different populations of Kirghizstan, Turkmenia and Chuvasha. Principal component analysis was performed on a matrix of genetic correlations between 22 measures of adiposity, including skinfolds, circumferences and indices. Findings are summarized as follows: (1) All three genetic matrices were not positive definite, and the first four factors retained, even after exclusion of RG ≥ 1.0, explained from 88% to 97% of the total additive genetic variation in the 22 traits studied. This clearly emphasizes the massive involvement of pleiotropic gene effects in the variability of adiposity traits. (2) Despite the quite natural differences in pairwise correlations between the adiposity traits in the three ethnically different samples under study, factor analysis revealed a common basic pattern of covariability for the adiposity traits. In each of the three samples, four genetic factors were retained, namely, the amount of subcutaneous fat, the total body obesity, the pattern of distribution of subcutaneous fat and the central adiposity distribution. (3) Genetic correlations between the retained four factors were virtually non-existent, suggesting that several independent genetic sources may be governing the variation of adiposity traits. (4) Variance decomposition analysis on the obtained genetic factors leaves no doubt regarding the substantial familial and (most probably genetic) effects on variation of each factor in each studied population. The similarity of results in the three different samples indicates that the findings may be deemed valid and reliable descriptions of the genetic variation and covariation pattern of adiposity traits in the human species.
Vinson, Amanda; Prongay, Kamm; Ferguson, Betsy
2013-01-01
Complex diseases (e.g., cardiovascular disease and type 2 diabetes, among many others) pose the biggest threat to human health worldwide and are among the most challenging to investigate. Susceptibility to complex disease may be caused by multiple genetic variants (GVs) and their interaction, by environmental factors, and by interaction between GVs and environment, and large study cohorts with substantial analytical power are typically required to elucidate these individual contributions. Here, we discuss the advantages of both power and feasibility afforded by the use of extended pedigrees of rhesus macaques (Macaca mulatta) for genetic studies of complex human disease based on next-generation sequence data. We present these advantages in the context of previous research conducted in rhesus macaques for several representative complex diseases. We also describe a single, multigeneration pedigree of Indian-origin rhesus macaques and a sample biobank we have developed for genetic analysis of complex disease, including power of this pedigree to detect causal GVs using either genetic linkage or association methods in a variance decomposition approach. Finally, we summarize findings of significant heritability for a number of quantitative traits that demonstrate that genetic contributions to risk factors for complex disease can be detected and measured in this pedigree. We conclude that the development and application of an extended pedigree to analysis of complex disease traits in the rhesus macaque have shown promising early success and that genome-wide genetic and higher order -omics studies in this pedigree are likely to yield useful insights into the architecture of complex human disease. PMID:24174435
Grouping individual independent BOLD effects: a new way to ICA group analysis
NASA Astrophysics Data System (ADS)
Duann, Jeng-Ren; Jung, Tzyy-Ping; Sejnowski, Terrence J.; Makeig, Scott
2009-04-01
A new group analysis method for summarizing task-related BOLD responses based on independent component analysis (ICA) is presented. In contrast to the previously proposed group ICA (gICA) method, which first combines multi-subject fMRI data in either the temporal or the spatial domain and applies ICA decomposition only once to the combined data to extract the task-related BOLD effects, the method presented here applies ICA decomposition to each individual subject's fMRI data to first find the independent BOLD effects specific to that subject. The task-related independent BOLD component is then selected from the resulting components of the single-subject ICA decomposition and grouped across subjects to derive the group inference. In this new ICA group analysis (ICAga) method, one does not need to assume that the task-related BOLD time courses are identical across brain areas and subjects, as is done in the grand ICA decomposition of spatially concatenated fMRI data. Nor does one need to assume that, after spatial normalization, voxels at the same coordinates represent exactly the same functional or structural brain anatomy across subjects. Both assumptions have been problematic given recent BOLD activation evidence. Further, since the independent BOLD effects are obtained from each individual subject, the ICAga method can better account for individual differences in the task-related BOLD effects, unlike the gICA approach, in which the task-related BOLD effects can only be accounted for by a single unified BOLD model across multiple subjects. As a result, the newly proposed ICAga method is able to better fit the task-related BOLD effects at the individual level and thus allows grouping of more appropriate multi-subject BOLD effects in the group analysis.
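A minimal sketch of the per-subject ICA step in this spirit, using scikit-learn's FastICA rather than the specific ICA implementation of the paper: decompose each subject's time-by-voxel data, pick the component whose time course best matches the task regressor, and average the selected spatial maps. The simulated data and component count are illustrative.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n_time, n_voxels, n_subjects = 120, 500, 4
task = (np.arange(n_time) % 40 < 20).astype(float)     # simple block-design task regressor

selected_maps = []
for s in range(n_subjects):
    # Simulated single-subject data: one task-driven spatial pattern plus noise
    spatial = rng.normal(size=n_voxels) * (rng.random(n_voxels) < 0.1)
    data = np.outer(task, spatial) + rng.normal(size=(n_time, n_voxels))

    ica = FastICA(n_components=20, random_state=0, max_iter=1000)
    sources = ica.fit_transform(data)                   # temporal components, (n_time, 20)
    corrs = np.array([np.corrcoef(task, sources[:, j])[0, 1] for j in range(20)])
    best = int(np.argmax(np.abs(corrs)))
    spatial_map = ica.mixing_[:, best] * np.sign(corrs[best])   # fix ICA's sign ambiguity
    selected_maps.append(spatial_map)

group_map = np.mean(selected_maps, axis=0)              # crude group summary of subject maps
print("group map shape:", group_map.shape)
```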
NASA Astrophysics Data System (ADS)
Barbini, L.; Eltabach, M.; Hillis, A. J.; du Bois, J. L.
2018-03-01
In rotating machine diagnosis, different spectral tools are used to analyse vibration signals. Despite their good diagnostic performance, such tools are usually refined, computationally complex to implement, and require oversight of an expert user. This paper introduces an intuitive and easy-to-implement method for vibration analysis: amplitude cyclic frequency decomposition. This method first separates vibration signals according to their spectral amplitudes and then uses the squared envelope spectrum to reveal the presence of cyclostationarity in each amplitude level. The intuitive idea is that in a rotating machine different components contribute vibrations at different amplitudes; for instance, defective bearings contribute a very weak signal in contrast to gears. This paper also introduces a new quantity, the decomposition squared envelope spectrum, which enables separation between the components of a rotating machine. The amplitude cyclic frequency decomposition and the decomposition squared envelope spectrum are tested on real-world signals, both at stationary and varying speeds, using data from a wind turbine gearbox and an aircraft engine. In addition, a benchmark comparison to the spectral correlation method is presented.
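A minimal sketch of the squared envelope spectrum that the method builds on: take the analytic signal via the Hilbert transform, square its magnitude, and inspect the spectrum of that envelope. The simulated bearing-like signal and its fault frequency are invented for illustration.

```python
import numpy as np
from scipy.signal import hilbert

fs = 20_000                      # sampling rate, Hz
t = np.arange(0, 1.0, 1 / fs)
fault_rate = 87.0                # illustrative cyclic (fault) frequency, Hz
carrier = 3_000.0                # structural resonance excited by the impacts, Hz

# Simulated repetitive impacts: an amplitude-modulated resonance buried in noise
impacts = (1 + np.cos(2 * np.pi * fault_rate * t)) * np.cos(2 * np.pi * carrier * t)
x = impacts + 2.0 * np.random.default_rng(0).normal(size=t.size)

envelope_sq = np.abs(hilbert(x)) ** 2                           # squared envelope
ses = np.abs(np.fft.rfft(envelope_sq - envelope_sq.mean())) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)
print(f"peak of squared envelope spectrum at {freqs[np.argmax(ses)]:.1f} Hz")
```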
Bayesian Factor Analysis When Only a Sample Covariance Matrix Is Available
ERIC Educational Resources Information Center
Hayashi, Kentaro; Arav, Marina
2006-01-01
In traditional factor analysis, the variance-covariance matrix or the correlation matrix has often been a form of inputting data. In contrast, in Bayesian factor analysis, the entire data set is typically required to compute the posterior estimates, such as Bayes factor loadings and Bayes unique variances. We propose a simple method for computing…
Meta-analysis for explaining the variance in public transport demand elasticities in Europe
DOT National Transportation Integrated Search
1998-01-01
Results from past studies on transport demand elasticities show a large variance. This paper assesses key factors that influence the sensitivity of public transport users to transport costs in Europe, by carrying out a comparative analysis of the dif...
NASA Astrophysics Data System (ADS)
Williams, E. K.; Plante, A. F.
2017-12-01
The stability and cycling of natural organic matter depends on the input of energy needed to decompose it and the net energy gained from its decomposition. In soils, this relationship is complicated by microbial enzymatic activity which decreases the activation energies associated with soil organic matter (SOM) decomposition and by chemical and physical protection mechanisms which decreases the concentrations of the available organic matter substrate and also require additional energies to overcome for decomposition. In this study, we utilize differential scanning calorimetry and evolved CO2 gas analysis to characterize differences in the energetics (activation energy and energy density) in soils that have undergone degradation in natural (bare fallow), field (changes in land-use), chemical (acid hydrolysis), and laboratory (high temperature incubation) experimental conditions. We will present this data in a novel conceptual framework relating these energy dynamics to organic matter inputs, decomposition, and molecular complexity.
NASA Astrophysics Data System (ADS)
Chen, Dongyue; Lin, Jianhui; Li, Yanping
2018-06-01
Complementary ensemble empirical mode decomposition (CEEMD) has been developed to address the mode-mixing problem of the empirical mode decomposition (EMD) method. Compared to ensemble empirical mode decomposition (EEMD), the CEEMD method reduces residue noise in the signal reconstruction. Both CEEMD and EEMD need a sufficiently large ensemble number to reduce the residue noise, which entails a high computational cost. Moreover, the selection of intrinsic mode functions (IMFs) for further analysis usually depends on experience. A modified CEEMD method and an IMF evaluation index are proposed with the aim of reducing the computational cost and selecting IMFs automatically. A simulated signal and in-service high-speed train gearbox vibration signals are employed to validate the proposed method in this paper. The results demonstrate that the modified CEEMD can decompose the signal efficiently at a lower computational cost, and that the IMF evaluation index can select the meaningful IMFs automatically.
Kinetics of non-isothermal decomposition of cinnamic acid
NASA Astrophysics Data System (ADS)
Zhao, Ming-rui; Qi, Zhen-li; Chen, Fei-xiong; Yue, Xia-xin
2014-07-01
The thermal stability and decomposition kinetics of cinnamic acid were investigated by thermogravimetry and differential scanning calorimetry at four heating rates. The activation energies of this process were calculated from analysis of the TG curves by the methods of Flynn-Wall-Ozawa, Doyle, the Distributed Activation Energy Model, Šatava-Šesták and Kissinger, respectively. There is only one stage of thermal decomposition in the TG curves and two endothermic peaks in the DSC curves. For this decomposition process of cinnamic acid, E and log A [s-1] were determined to be 81.74 kJ mol-1 and 8.67, respectively. The mechanism was the Mampel power law (reaction order n = 1), with integral form G(α) = α (α = 0.1-0.9). Moreover, the thermodynamic properties ΔH‡, ΔS‡ and ΔG‡ were 77.96 kJ mol-1, -90.71 J mol-1 K-1 and 119.41 kJ mol-1, respectively.
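In the Flynn-Wall-Ozawa method mentioned above, the activation energy at a fixed conversion follows from the slope of log10(β) versus 1/T across heating rates, using the Doyle approximation log10(β) ≈ const − 0.4567·E/(R·T). The temperatures below are invented for illustration and are not the paper's data.

```python
import numpy as np

R = 8.314                                   # gas constant, J mol^-1 K^-1
beta = np.array([5.0, 10.0, 15.0, 20.0])    # heating rates, K/min
# Invented temperatures (K) at which a fixed conversion (say alpha = 0.5) is reached
T_alpha = np.array([450.0, 464.0, 472.5, 478.8])

slope, _ = np.polyfit(1.0 / T_alpha, np.log10(beta), 1)
E = -slope * R / 0.4567                     # Flynn-Wall-Ozawa estimate of activation energy
print(f"activation energy ~ {E / 1000:.1f} kJ/mol")
```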
NASA Astrophysics Data System (ADS)
Yong, Yingqiong; Nguyen, Mai Thanh; Tsukamoto, Hiroki; Matsubara, Masaki; Liao, Ying-Chih; Yonezawa, Tetsu
2017-03-01
Mixtures of a copper complex and copper fine particles used as copper-based metal-organic decomposition (MOD) dispersions have been demonstrated to be effective for the low-temperature sintering of conductive copper films. However, the effect of copper particle size on the decomposition process of the dispersion during heating and the effect of organic residues on the resistivity have not been studied. In this study, the decomposition process of dispersions containing mixtures of a copper complex and copper particles of various sizes was investigated. The effect of organic residues on the resistivity was also studied using thermogravimetric analysis. In addition, the choice of copper salts in the copper complex was also discussed. In this work, a low-resistivity sintered copper film (7 × 10⁻⁶ Ω·m) was achieved at a temperature as low as 100 °C without using any reductive gas.
Investigation of automated task learning, decomposition and scheduling
NASA Technical Reports Server (NTRS)
Livingston, David L.; Serpen, Gursel; Masti, Chandrashekar L.
1990-01-01
The details and results of research conducted in the application of neural networks to task planning and decomposition are presented. Task planning and decomposition are operations that humans perform in a reasonably efficient manner. Without the use of good heuristics and usually much human interaction, automatic planners and decomposers generally do not perform well due to the intractable nature of the problems under consideration. The human-like performance of neural networks has shown promise for generating acceptable solutions to intractable problems such as planning and decomposition. This was the primary reasoning behind attempting the study. The basis for the work is the use of state machines to model tasks. State machine models provide a useful means for examining the structure of tasks since many formal techniques have been developed for their analysis and synthesis. It is the approach to integrate the strong algebraic foundations of state machines with the heretofore trial-and-error approach to neural network synthesis.
Global Sensitivity Analysis for Large-scale Socio-hydrological Models using the Cloud
NASA Astrophysics Data System (ADS)
Hu, Y.; Garcia-Cabrejo, O.; Cai, X.; Valocchi, A. J.; Dupont, B.
2014-12-01
In the context of coupled human and natural systems (CHNS), incorporating human factors into water resource management provides an opportunity to understand the interactions between human and environmental systems. A multi-agent system (MAS) model is designed to couple with the physically based Republican River Compact Administration (RRCA) groundwater model, in an attempt to understand the declining water table and base flow in the heavily irrigated Republican River basin. For the MAS modelling, we defined five behavioral parameters (κ_pr, ν_pr, κ_prep, ν_prep and λ) to characterize the agents' pumping behavior given the uncertainties of future crop prices and precipitation. κ and ν describe the agents' beliefs in their prior knowledge of the mean and variance of crop prices (κ_pr, ν_pr) and precipitation (κ_prep, ν_prep), and λ describes the agents' attitude towards the fluctuation of crop profits. Note that these human behavioral parameters, as inputs to the MAS model, are highly uncertain and not even measurable. Thus, we estimate the influence of these behavioral parameters on the coupled models using Global Sensitivity Analysis (GSA). In this paper, we address two main challenges arising from GSA with such a large-scale socio-hydrological model by using Hadoop-based cloud computing techniques and a Polynomial Chaos Expansion (PCE)-based variance decomposition approach. As a result, 1,000 scenarios of the coupled models are completed within two hours with the Hadoop framework, rather than the roughly 28 days required to run those scenarios sequentially. Based on the model results, GSA using PCE is able to measure the impacts of the spatial and temporal variations of these behavioral parameters on crop profits and the water table, and thus identifies two influential parameters, κ_pr and λ. The major contribution of this work is a methodological framework for the application of GSA to large-scale socio-hydrological models. This framework attempts to balance the heavy computational burden of model execution against the number of model evaluations required in the GSA, through a combination of Hadoop-based cloud computing to efficiently evaluate the socio-hydrological model and PCE, whose coefficients provide efficient estimates of the sensitivity indices.
Universal Distribution of Litter Decay Rates
NASA Astrophysics Data System (ADS)
Forney, D. C.; Rothman, D. H.
2008-12-01
Degradation of litter is the result of many physical, chemical and biological processes. The high variability of these processes likely accounts for the progressive slowdown of decay with litter age. This age dependence is commonly thought to result from the superposition of processes with different decay rates k. Here we assume an underlying continuous yet unknown distribution p(k) of decay rates [1]. To seek its form, we analyze the mass-time history of 70 LIDET [2] litter data sets obtained under widely varying conditions. We construct a regularized inversion procedure to find the best fitting distribution p(k) with the least degrees of freedom. We find that the resulting p(k) is universally consistent with a lognormal distribution, i.e., a Gaussian distribution of log k, characterized by a dataset-dependent mean and variance of log k. This result is supported by a recurring observation that microbial populations on leaves are log-normally distributed [3]. Simple biological processes cause the frequent appearance of the log-normal distribution in ecology [4]. Environmental factors, such as soil nitrate, soil aggregate size, soil hydraulic conductivity, total soil nitrogen, soil denitrification, and soil respiration, have all been observed to be log-normally distributed [5]. Litter degradation rates depend on many coupled, multiplicative factors, which provides a fundamental basis for the lognormal distribution. Using this insight, we systematically estimated the mean and variance of log k for 512 data sets from the LIDET study. We find the mean strongly correlates with temperature and precipitation, while the variance appears to be uncorrelated with the main environmental factors and is thus likely more correlated with chemical composition and/or ecology. Results indicate the possibility that the distribution in rates reflects, at least in part, the distribution of microbial niches. [1] B. P. Boudreau, B. R. Ruddick, American Journal of Science, 291, 507 (1991). [2] M. Harmon, Forest Science Data Bank: TD023 [Database]. LTER Intersite Fine Litter Decomposition Experiment (LIDET): Long-Term Ecological Research, (2007). [3] G. A. Beattie, S. E. Lindow, Phytopathology 89, 353 (1999). [4] R. A. May, Ecology and Evolution of Communities, A Pattern of Species Abundance and Diversity, 81 (1975). [5] T. B. Parkin, J. A. Robinson, Advances in Soil Science 20, Analysis of Lognormal Data, 194 (1992).
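The assumed superposition can be written as m(t)/m0 = ∫ p(k) e^(−kt) dk with p(k) lognormal. A quick Monte Carlo sketch with an illustrative mean and spread of ln k shows the progressive slowdown relative to a single-rate model:

```python
import numpy as np

rng = np.random.default_rng(0)
mu_logk, sigma_logk = np.log(0.5), 1.0        # illustrative mean and spread of ln k (yr^-1)
k = rng.lognormal(mean=mu_logk, sigma=sigma_logk, size=100_000)

t = np.linspace(0, 10, 11)                    # years
mass_fraction = np.exp(-np.outer(t, k)).mean(axis=1)   # m(t)/m0 = E[e^{-kt}] over p(k)

single_k = np.exp(-np.median(k) * t)          # single-rate model using the median rate
for ti, m_mix, m_one in zip(t, mass_fraction, single_k):
    print(f"t = {ti:4.1f} yr   lognormal mixture: {m_mix:.3f}   single rate: {m_one:.3f}")
```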
Measuring Glial Metabolism in Repetitive Brain Trauma and Alzheimer’s Disease
2016-09-01
Six denoising methods for dynamic MRS were considered and compared: singular value decomposition (SVD), wavelet, sliding window, sliding window with Gaussian weighting, spline and spectral improvements... project by improving the software required for the data analysis by developing six different denoising methods. He also assisted with the testing
Decomposition of the Inequality of Income Distribution by Income Types—Application for Romania
NASA Astrophysics Data System (ADS)
Andrei, Tudorel; Oancea, Bogdan; Richmond, Peter; Dhesi, Gurjeet; Herteliu, Claudiu
2017-09-01
This paper identifies the salient factors that characterize the inequality income distribution for Romania. Data analysis is rigorously carried out using sophisticated techniques borrowed from classical statistics (Theil). Decomposition of the inequalities measured by the Theil index is also performed. This study relies on an exhaustive (11.1 million records for 2014) data-set for total personal gross income of Romanian citizens.
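A minimal numpy sketch of the Theil T index and its standard between/within decomposition by group (here by income type); the incomes and grouping are invented and are not the Romanian data set.

```python
import numpy as np

def theil_t(x):
    """Theil T inequality index of a vector of positive incomes."""
    s = x / x.mean()
    return np.mean(s * np.log(s))

rng = np.random.default_rng(0)
# Invented incomes for two groups (e.g., two income types)
groups = {"wages": rng.lognormal(8.0, 0.5, 5000),
          "self_employment": rng.lognormal(8.4, 0.9, 2000)}

x_all = np.concatenate(list(groups.values()))
T_total = theil_t(x_all)

# Decomposition: T = sum_g s_g * T_g  +  sum_g s_g * ln(mean_g / overall_mean)
shares = {g: v.sum() / x_all.sum() for g, v in groups.items()}      # group income shares
T_within = sum(shares[g] * theil_t(v) for g, v in groups.items())
T_between = sum(shares[g] * np.log(v.mean() / x_all.mean()) for g, v in groups.items())

print(f"T = {T_total:.4f}, within = {T_within:.4f}, between = {T_between:.4f}")
assert np.isclose(T_total, T_within + T_between)   # the decomposition is exact
```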
ERIC Educational Resources Information Center
Man, Yiu-Kwong; Leung, Allen
2012-01-01
In this paper, we introduce a new approach to compute the partial fraction decompositions of rational functions and describe the results of its trials at three secondary schools in Hong Kong. The data were collected via quizzes, questionnaire and interviews. In general, according to the responses from the teachers and students concerned, this new…
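For reference, the kind of decomposition being taught can be checked symbolically; a small sympy sketch with an example rational function (not taken from the paper):

```python
import sympy as sp

x = sp.symbols('x')
f = (3*x + 5) / ((x - 1) * (x + 2))
print(sp.apart(f, x))   # equivalent to 8/(3*(x - 1)) + 1/(3*(x + 2))
```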
Long-term litter decomposition controlled by manganese redox cycling
Keiluweit, Marco; Nico, Peter; Harmon, Mark E.; Mao, Jingdong; Pett-Ridge, Jennifer; Kleber, Markus
2015-01-01
Litter decomposition is a keystone ecosystem process impacting nutrient cycling and productivity, soil properties, and the terrestrial carbon (C) balance, but the factors regulating decomposition rate are still poorly understood. Traditional models assume that the rate is controlled by litter quality, relying on parameters such as lignin content as predictors. However, a strong correlation has been observed between the manganese (Mn) content of litter and decomposition rates across a variety of forest ecosystems. Here, we show that long-term litter decomposition in forest ecosystems is tightly coupled to Mn redox cycling. Over 7 years of litter decomposition, microbial transformation of litter was paralleled by variations in Mn oxidation state and concentration. A detailed chemical imaging analysis of the litter revealed that fungi recruit and redistribute unreactive Mn2+ provided by fresh plant litter to produce oxidative Mn3+ species at sites of active decay, with Mn eventually accumulating as insoluble Mn3+/4+ oxides. Formation of reactive Mn3+ species coincided with the generation of aromatic oxidation products, providing direct proof of the previously posited role of Mn3+-based oxidizers in the breakdown of litter. Our results suggest that the litter-decomposing machinery at our coniferous forest site depends on the ability of plants and microbes to supply, accumulate, and regenerate short-lived Mn3+ species in the litter layer. This observation indicates that biogeochemical constraints on bioavailability, mobility, and reactivity of Mn in the plant–soil system may have a profound impact on litter decomposition rates. PMID:26372954
[Progress in Raman spectroscopic measurement of methane hydrate].
Xu, Feng; Zhu, Li-hua; Wu, Qiang; Xu, Long-jun
2009-09-01
Complex thermodynamics and kinetics problems are involved in methane hydrate formation and decomposition, and these problems are crucial to understanding the mechanisms of hydrate formation and decomposition. However, such information has been difficult to obtain accurately because methane hydrate is only stable under low-temperature and high-pressure conditions; only in recent years has methane hydrate been measured in situ using Raman spectroscopy. Raman spectroscopy, a non-destructive and non-invasive technique, is used to study the vibrational modes of molecules. Studies of methane hydrate using Raman spectroscopy have developed over the last decade. The Raman spectra of CH4 in the vapor phase and in the hydrate phase are presented in this paper. Progress in research on methane hydrate formation thermodynamics, formation kinetics, decomposition kinetics and decomposition mechanisms based on Raman spectroscopic measurements in the laboratory and the deep sea is reviewed. Formation thermodynamic studies, including in situ observation of the formation conditions of methane hydrate, analysis of structure, and determination of hydrate cage occupancy and hydration numbers by Raman spectroscopy, are emphasized. Regarding formation kinetics, research on the variation in the number of hydrate cages and the methane concentration in water during hydrate growth, studied using Raman spectroscopy, is also introduced. For methane hydrate decomposition, investigations of the decomposition mechanism, the variation of the cage occupancy ratio and the formulation of the decomposition rate in porous media are described. Important aspects for future hydrate research based on Raman spectroscopy are discussed.
NASA Astrophysics Data System (ADS)
Sierra, Carlos A.; Trumbore, Susan E.; Davidson, Eric A.; Vicca, Sara; Janssens, I.
2015-03-01
The sensitivity of soil organic matter decomposition to global environmental change is a topic of prominent relevance for the global carbon cycle. Decomposition depends on multiple factors that are being altered simultaneously as a result of global environmental change; therefore, it is important to study the sensitivity of the rates of soil organic matter decomposition with respect to multiple and interacting drivers. In this manuscript, we present an analysis of the potential response of decomposition rates to simultaneous changes in temperature and moisture. To address this problem, we first present a theoretical framework to study the sensitivity of soil organic matter decomposition when multiple driving factors change simultaneously. We then apply this framework to models and data at different levels of abstraction: (1) to a mechanistic model that addresses the limitation of enzyme activity by simultaneous effects of temperature and soil water content, the latter controlling substrate supply and oxygen concentration for microbial activity; (2) to different mathematical functions used to represent temperature and moisture effects on decomposition in biogeochemical models. To contrast model predictions at these two levels of organization, we compiled different data sets of observed responses in field and laboratory studies. Then we applied our conceptual framework to: (3) observations of heterotrophic respiration at the ecosystem level; (4) laboratory experiments looking at the response of heterotrophic respiration to independent changes in moisture and temperature; and (5) ecosystem-level experiments manipulating soil temperature and water content simultaneously.
Long-term litter decomposition controlled by manganese redox cycling.
Keiluweit, Marco; Nico, Peter; Harmon, Mark E; Mao, Jingdong; Pett-Ridge, Jennifer; Kleber, Markus
2015-09-22
Litter decomposition is a keystone ecosystem process impacting nutrient cycling and productivity, soil properties, and the terrestrial carbon (C) balance, but the factors regulating decomposition rate are still poorly understood. Traditional models assume that the rate is controlled by litter quality, relying on parameters such as lignin content as predictors. However, a strong correlation has been observed between the manganese (Mn) content of litter and decomposition rates across a variety of forest ecosystems. Here, we show that long-term litter decomposition in forest ecosystems is tightly coupled to Mn redox cycling. Over 7 years of litter decomposition, microbial transformation of litter was paralleled by variations in Mn oxidation state and concentration. A detailed chemical imaging analysis of the litter revealed that fungi recruit and redistribute unreactive Mn(2+) provided by fresh plant litter to produce oxidative Mn(3+) species at sites of active decay, with Mn eventually accumulating as insoluble Mn(3+/4+) oxides. Formation of reactive Mn(3+) species coincided with the generation of aromatic oxidation products, providing direct proof of the previously posited role of Mn(3+)-based oxidizers in the breakdown of litter. Our results suggest that the litter-decomposing machinery at our coniferous forest site depends on the ability of plants and microbes to supply, accumulate, and regenerate short-lived Mn(3+) species in the litter layer. This observation indicates that biogeochemical constraints on bioavailability, mobility, and reactivity of Mn in the plant-soil system may have a profound impact on litter decomposition rates.
Saviane, Chiara; Silver, R Angus
2006-06-15
Synapses play a crucial role in information processing in the brain. Amplitude fluctuations of synaptic responses can be used to extract information about the mechanisms underlying synaptic transmission and its modulation. In particular, multiple-probability fluctuation analysis can be used to estimate the number of functional release sites, the mean probability of release and the amplitude of the mean quantal response from fits of the relationship between the variance and mean amplitude of postsynaptic responses, recorded at different probabilities. To determine these quantal parameters, calculate their uncertainties and the goodness-of-fit of the model, it is important to weight the contribution of each data point in the fitting procedure. We therefore investigated the errors associated with measuring the variance by determining the best estimators of the variance of the variance and have used simulations of synaptic transmission to test their accuracy and reliability under different experimental conditions. For central synapses, which generally have a low number of release sites, the amplitude distribution of synaptic responses is not normal, thus the use of a theoretical variance of the variance based on the normal assumption is not a good approximation. However, appropriate estimators can be derived for the population and for limited sample sizes using a more general expression that involves higher moments and introducing unbiased estimators based on the h-statistics. Our results are likely to be relevant for various applications of fluctuation analysis when few channels or release sites are present.
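Because the weighting scheme above rests on estimating the variance of the sample variance without assuming normality, a minimal numerical sketch may help. This is an illustration only, not the authors' h-statistics implementation: it evaluates the standard moment-based relation Var(s^2) = (mu4 - (n-3)/(n-1)*sigma^4)/n with plug-in sample moments, and the data and function name are assumptions.

```python
import numpy as np

def var_of_sample_variance(x):
    """Plug-in estimate of Var(s^2) for the unbiased sample variance s^2,
    using Var(s^2) = (mu4 - (n-3)/(n-1) * sigma^4) / n.
    Sample central moments replace the population moments, so this is an
    illustrative (slightly biased) estimator, not an h-statistics estimator."""
    x = np.asarray(x, dtype=float)
    n = x.size
    m2 = np.mean((x - x.mean()) ** 2)   # second central sample moment
    m4 = np.mean((x - x.mean()) ** 4)   # fourth central sample moment
    return (m4 - (n - 3) / (n - 1) * m2 ** 2) / n

# Example: weights for a variance-mean fit could be taken as 1 / Var(s^2).
rng = np.random.default_rng(0)
amplitudes = rng.gamma(shape=2.0, scale=1.0, size=200)  # skewed, non-normal
print(var_of_sample_variance(amplitudes))
```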
Motor unit number estimation based on high-density surface electromyography decomposition.
Peng, Yun; He, Jinbao; Yao, Bo; Li, Sheng; Zhou, Ping; Zhang, Yingchun
2016-09-01
The aim was to advance the motor unit number estimation (MUNE) technique using high-density surface electromyography (EMG) decomposition. The K-means clustering convolution kernel compensation algorithm was employed to detect single motor unit potentials (SMUPs) from high-density surface EMG recordings of the biceps brachii muscles in eight healthy subjects. Contraction forces were controlled at 10%, 20% and 30% of the maximal voluntary contraction (MVC). The MUNE results and the representativeness of the SMUP pools were evaluated using a high-density weighted-average method. Mean numbers of motor units were estimated as 288±132, 155±87, 107±99 and 132±61 with the new MUNE method at 10%, 20%, 30% and 10-30% MVCs, respectively. More than 20 SMUPs were obtained at each contraction level, and the mean residual variances were lower than 10%. The new MUNE method allows convenient, non-invasive collection of a large and representative SMUP pool, providing a useful tool for estimating the motor unit number of proximal muscles. Because it avoids the intramuscular electrodes or multiple electrical stimuli required by currently available MUNE techniques, the new method can minimize patient discomfort during MUNE tests. Copyright © 2016 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
Ceciliason, Ann-Sofie; Andersson, M Gunnar; Lindström, Anders; Sandler, Håkan
2018-02-01
The objective of this study was to assess the accuracy and precision of postmortem interval (PMI) estimation for decomposing human remains discovered in indoor settings. Data were collected prospectively from 140 forensic cases with a known date of death and scored according to the Total Body Score (TBS) scale at the postmortem examination. In our model setting, it is estimated that approximately 45% of the variance in TBS in cases with blowfly larvae, and 66% in cases without, can be derived from Accumulated Degree-Days (ADD). The precision in estimating ADD/PMI from TBS is, in our setting, moderate to low; however, dividing the cases into defined subgroups suggests that the precision of the model could be increased. Our findings also suggest a significant seasonal difference, with a concomitant influence on TBS in the complete data set, possibly driven by insect activity occurring mainly during summer. PMI may be underestimated in cases with desiccation; likewise, the effect of insect activity needs to be evaluated to avoid overestimating the PMI. Our data sample indicates that the scoring method might need to be slightly modified to better reflect indoor decomposition, especially in cases with insect infestations and/or extensive desiccation. When applying TBS in an indoor setting, the model requires distinct inclusion criteria and a defined population. Copyright © 2017 Elsevier B.V. All rights reserved.
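As a rough illustration of how a "variance in TBS derived from ADD" figure can be computed, the sketch below fits an ordinary least-squares line of TBS on ADD and reports R². The variable names and synthetic data are assumptions, not the study's data or its exact model.

```python
import numpy as np
from scipy import stats

# Hypothetical stand-ins for the study's variables (synthetic data only).
rng = np.random.default_rng(1)
add = rng.uniform(50, 2000, size=140)           # Accumulated Degree-Days
tbs = 5 + 0.008 * add + rng.normal(0, 3, 140)   # Total Body Score + noise

fit = stats.linregress(add, tbs)
r_squared = fit.rvalue ** 2   # share of TBS variance explained by ADD
print(f"R^2 = {r_squared:.2f}, slope = {fit.slope:.4f}")
```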
Chen, Bo; Chen, Minhua; Paisley, John; Zaas, Aimee; Woods, Christopher; Ginsburg, Geoffrey S; Hero, Alfred; Lucas, Joseph; Dunson, David; Carin, Lawrence
2010-11-09
Nonparametric Bayesian techniques have been developed recently to extend the sophistication of factor models, allowing one to infer the number of appropriate factors from the observed data. We consider such techniques for sparse factor analysis, with application to gene-expression data from three virus challenge studies. Particular attention is placed on employing the Beta Process (BP), the Indian Buffet Process (IBP), and related sparseness-promoting techniques to infer a proper number of factors. The posterior density function on the model parameters is computed using Gibbs sampling and variational Bayesian (VB) analysis. Time-evolving gene-expression data are considered for respiratory syncytial virus (RSV), Rhino virus, and influenza, using blood samples from healthy human subjects. These data were acquired in three challenge studies, each executed after receiving institutional review board (IRB) approval from Duke University. Comparisons are made between several alternative means of performing nonparametric factor analysis on these data, with comparisons as well to sparse-PCA and Penalized Matrix Decomposition (PMD), closely related non-Bayesian approaches. Applying the Beta Process to the factor scores, or to the singular values of a pseudo-SVD construction, the proposed algorithms infer the number of factors in gene-expression data. For real data the "true" number of factors is unknown; in our simulations we consider a range of noise variances, and the proposed Bayesian models inferred the number of factors accurately relative to other methods in the literature, such as sparse-PCA and PMD. We have also identified a "pan-viral" factor of importance for each of the three viruses considered in this study. We have identified a set of genes associated with this pan-viral factor, of interest for early detection of such viruses based upon the host response, as quantified via gene-expression data.
An empirical analysis of the distribution of overshoots in a stationary Gaussian stochastic process
NASA Technical Reports Server (NTRS)
Carter, M. C.; Madison, M. W.
1973-01-01
The frequency distribution of overshoots in a stationary Gaussian stochastic process is analyzed. The primary tools in this analysis are computer simulation and statistical estimation. Computer simulation is used to generate stationary Gaussian stochastic processes with selected autocorrelation functions. An analysis of the simulation results reveals a frequency distribution for overshoots with a functional dependence on the mean and variance of the process. Statistical estimation is then used to estimate the mean and variance of a process. It is shown that, for a given autocorrelation function, the mean and the variance of the number of overshoots, and hence a frequency distribution for overshoots, can be estimated.
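A minimal sketch of the simulation side of such an analysis: generate a stationary Gaussian process with a chosen autocorrelation (here an AR(1) surrogate, which is an assumption, not one of the report's autocorrelation functions) and count overshoots of a threshold as upward level crossings.

```python
import numpy as np

def simulate_ar1(n, phi, sigma, rng):
    """Stationary Gaussian AR(1) process with lag-1 autocorrelation phi."""
    x = np.empty(n)
    x[0] = rng.normal(0, sigma / np.sqrt(1 - phi ** 2))  # stationary start
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal(0, sigma)
    return x

def count_overshoots(x, level):
    """Number of upward crossings of `level` (start of each overshoot)."""
    above = x > level
    return int(np.sum(~above[:-1] & above[1:]))

rng = np.random.default_rng(42)
x = simulate_ar1(100_000, phi=0.9, sigma=1.0, rng=rng)
level = x.mean() + 2 * x.std()   # threshold tied to the process mean and variance
print(count_overshoots(x, level))
```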
NASA Astrophysics Data System (ADS)
Shao, Feng; Evanschitzky, Peter; Fühner, Tim; Erdmann, Andreas
2009-10-01
This paper employs the Waveguide decomposition method as an efficient rigorous electromagnetic field (EMF) solver to investigate three-dimensional mask-induced imaging artifacts in EUV lithography. The major mask-diffraction-induced imaging artifacts are first identified by applying a Zernike analysis to the mask near-field spectrum of 2D lines/spaces. Three-dimensional mask features such as 22nm semidense/dense contacts/posts, isolated elbows, and line-ends are then investigated in terms of lithographic results. After that, the 3D mask-induced imaging artifacts such as feature-orientation-dependent best focus shifts, process window asymmetries, and other aberration-like phenomena are explored for the studied mask features. The simulation results can help lithographers understand the origins of EUV-specific imaging artifacts and devise illumination- and feature-dependent strategies for their compensation in optical proximity correction (OPC) for EUV masks. Finally, an efficient approach combining the Zernike analysis with the Waveguide decomposition technique is proposed to characterize the impact of mask properties on the future OPC process.
Impact of Damping Uncertainty on SEA Model Response Variance
NASA Technical Reports Server (NTRS)
Schiller, Noah; Cabell, Randolph; Grosveld, Ferdinand
2010-01-01
Statistical Energy Analysis (SEA) is commonly used to predict high-frequency vibroacoustic levels. This statistical approach provides the mean response over an ensemble of random subsystems that share the same gross system properties, such as density, size, and damping. Recently, techniques have been developed to predict the ensemble variance as well as the mean response; however, these techniques do not account for uncertainties in the system properties. In the present paper, uncertainty in the damping loss factor is propagated through SEA to obtain more realistic prediction bounds that account for both ensemble and damping variance. The analysis is performed on a floor-equipped cylindrical test article that resembles an aircraft fuselage. Realistic bounds on the damping loss factor are determined from measurements acquired on the sidewall of the test article. The analysis demonstrates that uncertainties in damping have the potential to significantly impact the mean and variance of the predicted response.
NASA Astrophysics Data System (ADS)
Wang, Yani; Wang, Jun; Tao, Guiping
2017-12-01
Haze pollution has become a pressing problem accompanying modernization and one requiring urgent solution, especially in Beijing. PM2.5 is the main cause of haze and its harm. Although there has been research on the factors affecting PM2.5, little attention has been devoted to their microcosmic and dynamic effects. A vector auto-regression (VAR) model is applied in this study to explore the interactions between PM2.5, PM10, SO2, CO and NO2. Results of Granger causality tests indicate that causal relationships exist between PM10, SO2, CO, NO2 and PM2.5. Impulse response functions (IRFs) show that the response of PM2.5 to a shock in CO is positive and large in the short period, while the reaction of PM2.5 to a shock in SO2 increases over time. Meanwhile, the variance decomposition indicates that PM2.5 is more closely related to CO in the short term, while the influence of SO2 accounts for a higher proportion in the long run. The findings provide a novel perspective for analyzing the factors influencing PM2.5 dynamically and contribute to a better understanding of haze and its relationship with sustainable development.
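A minimal sketch of this kind of VAR workflow with statsmodels is shown below. It assumes a pandas DataFrame `df` holding columns named 'PM2.5', 'PM10', 'SO2', 'CO' and 'NO2'; the column names, lag selection and horizons are assumptions, not the paper's settings.

```python
import pandas as pd
from statsmodels.tsa.api import VAR

def analyze_pollutants(df: pd.DataFrame):
    """Fit a VAR, then run Granger causality, impulse responses and
    forecast-error variance decomposition, mirroring the abstract's workflow."""
    model = VAR(df)
    results = model.fit(maxlags=8, ic='aic')   # lag order chosen by AIC (assumed)

    # Does CO Granger-cause PM2.5?
    gc = results.test_causality('PM2.5', ['CO'], kind='f')
    print(gc.summary())

    irf = results.irf(10)     # impulse responses over 10 steps
    irf.plot(impulse='CO', response='PM2.5')

    fevd = results.fevd(10)   # forecast-error variance decomposition
    print(fevd.summary())
    return results
```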
Javan, Gulnaz T; Finley, Sheree J; Smith, Tasia; Miller, Joselyn; Wilkinson, Jeremy E
2017-01-01
Human thanatomicrobiome studies have established that abundant putrefactive bacteria within the internal organs of decaying bodies are obligate anaerobes (Clostridium spp.). These microorganisms have been implicated as etiological agents in potentially life-threatening infections; nevertheless, the scale and trajectory of these microbes after death have not been elucidated. We performed phylogenetic surveys of thanatomicrobiome signatures of cadavers' internal organs to compare the microbial diversity between the 16S rRNA gene V4 hypervariable region and the conjoined V3-4 regions from the livers and spleens of 45 cadavers undergoing forensic microbiological studies. Phylogenetic analyses of 16S rRNA gene sequences revealed that the V4 region had a significantly higher mean Chao1 richness within the total microbiome data. Permutational multivariate analysis of variance (PERMANOVA) tests, based on unweighted UniFrac distances, demonstrated that taxa compositions differed significantly between the V4 and V3-4 hypervariable regions (p < 0.001). Of note, this is the first study, using the largest cohort of criminal cases to date, to demonstrate that the two hypervariable regions show discriminatory power for human postmortem microbial diversity. In conclusion, we propose that hypervariable region selection for the 16S rRNA gene matters when differentiating thanatomicrobiomic profiles, and we provide empirical data to explain a unique concept, the Postmortem Clostridium Effect.
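For readers unfamiliar with the statistical step, a sketch of a PERMANOVA on a precomputed unweighted UniFrac distance matrix using scikit-bio is shown below. The file names, metadata column and the assumption that scikit-bio is the tool of choice are illustrative; this is not the authors' pipeline.

```python
import pandas as pd
from skbio import DistanceMatrix
from skbio.stats.distance import permanova

# Hypothetical inputs: a square unweighted-UniFrac distance matrix and
# per-sample metadata recording the hypervariable region ('V4' or 'V3-4').
dm = DistanceMatrix.read("unifrac_unweighted.tsv")            # assumed file
metadata = pd.read_csv("sample_metadata.tsv", sep="\t", index_col=0)

result = permanova(dm, metadata, column="region", permutations=999)
print(result)   # pseudo-F statistic and permutation p-value
```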
Apipattanavis, S.; McCabe, G.J.; Rajagopalan, B.; Gangopadhyay, S.
2009-01-01
Dominant modes of individual and joint variability in global sea surface temperatures (SST) and global Palmer drought severity index (PDSI) values for the twentieth century are identified through a multivariate frequency domain singular value decomposition. This analysis indicates that a secular trend and variability related to the El Niño–Southern Oscillation (ENSO) are the dominant modes of variance shared among the global datasets. For the SST data the secular trend corresponds to a positive trend in Indian Ocean and South Atlantic SSTs, and a negative trend in North Pacific and North Atlantic SSTs. The ENSO reconstruction shows a strong signal in the tropical Pacific, North Pacific, and Indian Ocean regions. For the PDSI data, the secular trend reconstruction shows high amplitudes over central Africa including the Sahel, whereas the regions with strong ENSO amplitudes in PDSI are the southwestern and northwestern United States, South Africa, northeastern Brazil, central Africa, the Indian subcontinent, and Australia. An additional significant frequency, multidecadal variability, is identified for the Northern Hemisphere. This multidecadal frequency appears to be related to the Atlantic multidecadal oscillation (AMO). The multidecadal frequency is statistically significant in the Northern Hemisphere SST data, but is statistically nonsignificant in the PDSI data.
The heritable basis of gene-environment interactions in cardiometabolic traits.
Poveda, Alaitz; Chen, Yan; Brändström, Anders; Engberg, Elisabeth; Hallmans, Göran; Johansson, Ingegerd; Renström, Frida; Kurbasic, Azra; Franks, Paul W
2017-03-01
Little is known about the heritable basis of gene-environment interactions in humans. We therefore screened multiple cardiometabolic traits to assess the probability that they are influenced by genotype-environment interactions. Fourteen established environmental risk exposures and 11 cardiometabolic traits were analysed in the VIKING study, a cohort of 16,430 Swedish adults from 1682 extended pedigrees with available detailed genealogical, phenotypic and demographic information, using a maximum likelihood variance decomposition method in Sequential Oligogenic Linkage Analysis Routines software. All cardiometabolic traits had statistically significant heritability estimates, with narrow-sense heritabilities (h2) ranging from 24% to 47%. Genotype-environment interactions were detected for age and sex (for the majority of traits), physical activity (for triacylglycerols, 2 h glucose and diastolic BP), smoking (for weight), alcohol intake (for weight, BMI and 2 h glucose) and diet pattern (for weight, BMI, glycaemic traits and systolic BP). Genotype-age interactions for weight and systolic BP, genotype-sex interactions for BMI and triacylglycerols and genotype-alcohol intake interactions for weight remained significant after multiple test correction. Age, sex and alcohol intake are likely to be major modifiers of genetic effects for a range of cardiometabolic traits. This information may prove valuable for studies that seek to identify specific loci that modify the effects of lifestyle in cardiometabolic disease.
Revisiting the emissions-energy-trade nexus: evidence from the newly industrializing countries.
Ahmed, Khalid; Shahbaz, Muhammad; Kyophilavong, Phouphet
2016-04-01
This paper applies Pedroni's panel cointegration approach to explore the causal relationship between trade openness, carbon dioxide emissions, energy consumption, and economic growth for a panel of newly industrialized economies (Brazil, India, China, and South Africa) over the period 1970-2013. The panel cointegration estimates show that most of the variables are cointegrated, confirming a long-run association among them. The Granger causality test indicates bidirectional causality between carbon dioxide emissions and energy consumption. Unidirectional causality is found running from trade openness to carbon dioxide emissions and energy consumption, and from economic growth to carbon dioxide emissions. The causality results suggest that trade liberalization in newly industrialized economies induces higher energy consumption and carbon dioxide emissions. Furthermore, the causality results are checked using an innovative accounting approach comprising the forecast-error variance decomposition and the impulse response function. The long-run coefficients are estimated using the fully modified ordinary least squares (FMOLS) method, and the results indicate that trade openness and economic growth reduce carbon dioxide emissions in the long run. The FMOLS results support the existence of the environmental Kuznets curve hypothesis: trade liberalization initially increases carbon dioxide emissions along with national output, but this impact is offset in the long run by reduced emission levels.
Modelling uncertainty in incompressible flow simulation using Galerkin based generalized ANOVA
NASA Astrophysics Data System (ADS)
Chakraborty, Souvik; Chowdhury, Rajib
2016-11-01
This paper presents a new algorithm, referred to here as Galerkin-based generalized analysis of variance decomposition (GG-ANOVA), for modelling input uncertainties and their propagation in incompressible fluid flow. The proposed approach utilizes ANOVA to represent the unknown stochastic response. Further, the unknown component functions of ANOVA are represented using the generalized polynomial chaos expansion (PCE). The resulting functional form obtained by coupling the ANOVA and PCE is substituted into the stochastic Navier-Stokes equation (NSE), and Galerkin projection is employed to decompose it into a set of coupled deterministic 'Navier-Stokes-like' equations. Temporal discretization of the coupled deterministic equations is performed with the Adams-Bashforth scheme for the convective term and the Crank-Nicolson scheme for the diffusion term. Spatial discretization is performed with a finite difference scheme. Implementation of the proposed approach is illustrated with two examples. In the first example, a stochastic ordinary differential equation is considered; this example illustrates the performance of the proposed approach as the nature of the random variable changes, and also demonstrates the convergence characteristics of GG-ANOVA. The second example investigates flow through a microchannel. Two case studies, namely the stochastic Kelvin-Helmholtz instability and the stochastic vortex dipole, are investigated. For all the problems, the results obtained using GG-ANOVA are in excellent agreement with benchmark solutions.
NASA Astrophysics Data System (ADS)
Ťupek, Boris; Launiainen, Samuli; Peltoniemi, Mikko; Heikkinen, Jukka; Lehtonen, Aleksi
2016-04-01
In most process-based soil carbon models, litter decomposition rates modified by environmental conditions are linked with soil heterotrophic CO2 emissions and serve to estimate soil carbon sequestration. Owing to the mass balance equation, the variation in measured litter inputs and measured heterotrophic soil CO2 effluxes should therefore indicate the soil carbon stock changes needed for soil carbon management aimed at mitigating anthropogenic CO2 emissions, provided that the sensitivity functions of the applied model suit the environmental conditions, e.g. soil temperature and moisture. We evaluated the response forms of autotrophic and heterotrophic forest floor respiration to soil temperature and moisture at four boreal forest sites of the International Cooperative Programme on Assessment and Monitoring of Air Pollution Effects on Forests (ICP Forests), using a soil trenching experiment during 2015 in southern Finland. As expected, both autotrophic and heterotrophic forest floor respiration components were primarily controlled by soil temperature, and exponential regression models generally explained more than 90% of the variance. Soil moisture regression models on average explained less than 10% of the variance, and the response forms varied between Gaussian for the autotrophic component and linear for the heterotrophic component. Although soil moisture explained only a small percentage of the variance in soil heterotrophic respiration, the observed reduction of CO2 emissions at higher moisture levels suggests that the soil moisture responses of soil carbon models that do not account for this reduction under excessive moisture should be re-evaluated in order to estimate correct levels of soil carbon stock changes. Our further work will include evaluating process-based soil carbon models against annual heterotrophic respiration and soil carbon stocks.
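A minimal sketch of the kind of exponential temperature-response regression described above: a generic R = a*exp(b*T) fit with an R² summary. The variable names, synthetic data and the derived Q10 are assumptions for illustration, not the ICP Forests measurements or the study's exact model.

```python
import numpy as np
from scipy.optimize import curve_fit

def exp_model(t_soil, a, b):
    """Exponential respiration response, R = a * exp(b * T)."""
    return a * np.exp(b * t_soil)

# Synthetic stand-ins for soil temperature (degC) and CO2 efflux.
rng = np.random.default_rng(3)
t_soil = rng.uniform(2, 20, 200)
resp = 0.8 * np.exp(0.10 * t_soil) * rng.lognormal(0, 0.1, 200)

params, _ = curve_fit(exp_model, t_soil, resp, p0=(1.0, 0.1))
pred = exp_model(t_soil, *params)
r2 = 1 - np.sum((resp - pred) ** 2) / np.sum((resp - resp.mean()) ** 2)
q10 = np.exp(10 * params[1])   # conventional Q10 implied by the fitted b
print(f"a={params[0]:.2f}, b={params[1]:.3f}, Q10={q10:.2f}, R^2={r2:.2f}")
```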
Testing Interaction Effects without Discarding Variance.
ERIC Educational Resources Information Center
Lopez, Kay A.
Analysis of variance (ANOVA) and multiple regression are two of the most commonly used methods of data analysis in behavioral science research. Although ANOVA was intended for use with experimental designs, educational researchers have used ANOVA extensively in aptitude-treatment interaction (ATI) research. This practice tends to make researchers…
NASA Technical Reports Server (NTRS)
Ploutz-Snyder, Robert
2011-01-01
This slide presentation is a series of educational presentations on the statistical method of analysis of variance (ANOVA). ANOVA examines variability between groups relative to variability within groups to determine whether there is evidence that the groups are not drawn from the same population. Another presentation reviews hypothesis testing.
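For concreteness, a minimal one-way ANOVA example of the between-versus-within-group comparison described above (synthetic groups, not data from the presentation):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
group_a = rng.normal(10.0, 2.0, 30)
group_b = rng.normal(11.5, 2.0, 30)
group_c = rng.normal(10.2, 2.0, 30)

# The F statistic compares between-group variability to within-group variability.
f_stat, p_value = stats.f_oneway(group_a, group_b, group_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```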
Multidecadal climate variability of global lands and oceans
McCabe, G.J.; Palecki, M.A.
2006-01-01
Principal components analysis (PCA) and singular value decomposition (SVD) are used to identify the primary modes of decadal and multidecadal variability in annual global Palmer Drought Severity Index (PDSI) values and sea-surface temperature (SSTs). The PDSI and SST data for 1925-2003 were detrended and smoothed (with a 10-year moving average) to isolate the decadal and multidecadal variability. The first two principal components (PCs) of the PDSI PCA explained almost 38% of the decadal and multidecadal variance in the detrended and smoothed global annual PDSI data. The first two PCs of detrended and smoothed global annual SSTs explained nearly 56% of the decadal variability in global SSTs. The PDSI PCs and the SST PCs are directly correlated in a pairwise fashion. The first PDSI and SST PCs reflect variability of the detrended and smoothed annual Pacific Decadal Oscillation (PDO), as well as detrended and smoothed annual Indian Ocean SSTs. The second set of PCs is strongly associated with the Atlantic Multidecadal Oscillation (AMO). The SVD analysis of the cross-covariance of the PDSI and SST data confirmed the close link between the PDSI and SST modes of decadal and multidecadal variation and provided a verification of the PCA results. These findings indicate that the major modes of multidecadal variations in SSTs and land-surface climate conditions are highly interrelated through a small number of spatially complex but slowly varying teleconnections. Therefore, these relations may be adaptable to providing improved baseline conditions for seasonal climate forecasting. Published in 2006 by John Wiley & Sons, Ltd.
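As an illustration of the detrend, smooth and decompose sequence described above, a minimal numpy sketch follows. The array shapes, synthetic data, moving-average implementation and function names are assumptions, not the PDSI/SST analysis itself.

```python
import numpy as np

def detrend_and_smooth(data, window=10):
    """Remove a linear trend from each column (grid cell), then apply a
    `window`-year moving average along the time axis."""
    t = np.arange(data.shape[0])
    coeffs = np.polyfit(t, data, deg=1)                 # per-column linear trend
    detrended = data - (np.outer(t, coeffs[0]) + coeffs[1])
    kernel = np.ones(window) / window
    return np.apply_along_axis(
        lambda col: np.convolve(col, kernel, mode='valid'), 0, detrended)

def leading_pcs(data, n_components=2):
    """PCA via SVD of the anomaly matrix; returns PCs and explained variance."""
    anomalies = data - data.mean(axis=0)
    u, s, vt = np.linalg.svd(anomalies, full_matrices=False)
    pcs = u[:, :n_components] * s[:n_components]
    explained = s ** 2 / np.sum(s ** 2)
    return pcs, explained[:n_components]

rng = np.random.default_rng(11)
annual_field = rng.normal(size=(79, 500))   # years x grid cells (synthetic)
pcs, frac = leading_pcs(detrend_and_smooth(annual_field))
print(frac)   # fraction of variance carried by the first two PCs
```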
A Four-Stage Hybrid Model for Hydrological Time Series Forecasting
Di, Chongli; Yang, Xiaohua; Wang, Xiaochao
2014-01-01
Hydrological time series forecasting remains a difficult task due to its complicated nonlinear, non-stationary and multi-scale characteristics. To solve this difficulty and improve the prediction accuracy, a novel four-stage hybrid model is proposed for hydrological time series forecasting based on the principle of ‘denoising, decomposition and ensemble’. The proposed model has four stages, i.e., denoising, decomposition, components prediction and ensemble. In the denoising stage, the empirical mode decomposition (EMD) method is utilized to reduce the noises in the hydrological time series. Then, an improved method of EMD, the ensemble empirical mode decomposition (EEMD), is applied to decompose the denoised series into a number of intrinsic mode function (IMF) components and one residual component. Next, the radial basis function neural network (RBFNN) is adopted to predict the trend of all of the components obtained in the decomposition stage. In the final ensemble prediction stage, the forecasting results of all of the IMF and residual components obtained in the third stage are combined to generate the final prediction results, using a linear neural network (LNN) model. For illustration and verification, six hydrological cases with different characteristics are used to test the effectiveness of the proposed model. The proposed hybrid model performs better than conventional single models, the hybrid models without denoising or decomposition and the hybrid models based on other methods, such as the wavelet analysis (WA)-based hybrid models. In addition, the denoising and decomposition strategies decrease the complexity of the series and reduce the difficulties of the forecasting. With its effective denoising and accurate decomposition ability, high prediction precision and wide applicability, the new model is very promising for complex time series forecasting. This new forecast model is an extension of nonlinear prediction models. PMID:25111782
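A compressed sketch of the decompose-predict-ensemble idea (stages 2-4) is given below. It assumes the PyEMD package for EEMD and substitutes a simple least-squares autoregressive forecaster for the paper's RBFNN and a plain summation for the LNN ensemble, so it illustrates the shape of the pipeline rather than the authors' model; the denoising stage is omitted.

```python
import numpy as np
from PyEMD import EEMD   # assumed dependency: the PyEMD (EMD-signal) package

def ar_forecast(series, lags=5, steps=1):
    """Least-squares AR(lags) forecaster, iterated `steps` ahead
    (a simple stand-in for the paper's RBFNN component predictor)."""
    x = np.asarray(series, dtype=float)
    rows = np.array([x[i:i + lags] for i in range(len(x) - lags)])
    targets = x[lags:]
    coef, *_ = np.linalg.lstsq(rows, targets, rcond=None)
    history = list(x)
    preds = []
    for _ in range(steps):
        nxt = float(np.dot(coef, history[-lags:]))
        preds.append(nxt)
        history.append(nxt)
    return np.array(preds)

def hybrid_forecast(signal, steps=12):
    """Decompose with EEMD, forecast each extracted component separately,
    then ensemble by summation (a stand-in for the paper's LNN combiner)."""
    components = EEMD().eemd(np.asarray(signal, dtype=float))
    return sum(ar_forecast(c, lags=5, steps=steps) for c in components)
```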
Yang, Caiqin; Guo, Wei; Lin, Yulong; Lin, Qianqian; Wang, Jiaojiao; Wang, Jing; Zeng, Yanli
2018-05-30
In this study, a new cocrystal of felodipine (Fel) and glutaric acid (Glu) with a high dissolution rate was developed using the solvent ultrasonic method. The prepared cocrystal was characterized using X-ray powder diffraction, differential scanning calorimetry, thermogravimetric (TG) analysis, and infrared (IR) spectroscopy. To provide basic information about the optimization of pharmaceutical preparations of Fel-based cocrystals, this work investigated the thermal decomposition kinetics of the Fel-Glu cocrystal through non-isothermal thermogravimetry. Density functional theory (DFT) simulations were also performed on the Fel monomer and the trimolecular cocrystal compound for exploring the mechanisms underlying hydrogen bonding formation and thermal decomposition. Combined results of IR spectroscopy and DFT simulation verified that the Fel-Glu cocrystal formed via the NH⋯OC and CO⋯HO hydrogen bonds between Fel and Glu at the ratio of 1:2. The TG/derivative TG curves indicated that the thermal decomposition of the Fel-Glu cocrystal underwent a two-step process. The apparent activation energy (Ea) and pre-exponential factor (A) of the thermal decomposition for the first stage were 84.90 kJ mol^-1 and 7.03 × 10^7 min^-1, respectively. The mechanism underlying thermal decomposition possibly involved nucleation and growth, with the integral mechanism function G(α) = α^(3/2). DFT calculation revealed that the hydrogen bonding between Fel and Glu weakened the terminal methoxyl, methyl, and ethyl groups in the Fel molecule. As a result, these groups were lost along with the Glu molecule in the first thermal decomposition. In conclusion, the formed cocrystal exhibited different thermal decomposition kinetics and showed different Ea, A, and shelf life from the intact active pharmaceutical ingredient. Copyright © 2018 Elsevier B.V. All rights reserved.
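As an illustration of how such kinetic parameters translate into time scales, the sketch below evaluates the Arrhenius rate constant k(T) = A*exp(-Ea/(R*T)) with the reported Ea and A, and inverts the stated integral mechanism function G(α) = α^(3/2) = k*t under an isothermal assumption. The chosen temperatures and conversion level are assumptions for illustration, not values from the paper.

```python
import numpy as np

R = 8.314          # J mol^-1 K^-1
E_A = 84.90e3      # J mol^-1, first decomposition stage (from the abstract)
A = 7.03e7         # min^-1, pre-exponential factor (from the abstract)

def rate_constant(temp_k):
    """Arrhenius rate constant k(T) = A * exp(-Ea / (R*T)), in min^-1."""
    return A * np.exp(-E_A / (R * temp_k))

def time_to_conversion(alpha, temp_k):
    """Isothermal time from G(alpha) = alpha**1.5 = k*t, in minutes."""
    return alpha ** 1.5 / rate_constant(temp_k)

for temp_c in (25.0, 60.0, 100.0):   # illustrative storage/stress temperatures
    t_min = time_to_conversion(0.05, temp_c + 273.15)
    print(f"{temp_c:5.1f} C: t(alpha=0.05) ~ {t_min:,.0f} min")
```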
Dynamics of Potassium Release and Adsorption on Rice Straw Residue
Li, Jifu; Lu, Jianwei; Li, Xiaokun; Ren, Tao; Cong, Rihuan; Zhou, Li
2014-01-01
Straw application can not only increase crop yields, improve soil structure and enrich soil fertility, but can also enhance water and nutrient retention. The aim of this study was to ascertain the relationships between straw decomposition and the release-adsorption processes of K+. This study increases the understanding of the roles played by agricultural crop residues in the soil environment, informs more effective straw recycling and provides a method for reducing potassium loss. The influence of straw decomposition on the K+ release rate in paddy soil under flooded conditions was studied using incubation experiments, which indicated that the decomposition process of rice straw could be divided into two main stages: (a) a rapid decomposition stage from 0 to 60 d and (b) a slow decomposition stage from 60 to 110 d. However, the characteristics of the straw potassium release were different from those of the overall straw decomposition, as 90% of total K was released by the third day of the study. The batch K sorption experiments showed that crop residues could adsorb K+ from the ambient environment, and that this adsorption depended on the decomposition period and the external K+ concentration. In addition, a number of materials or binding sites were observed on straw residues using IR analysis, indicating possible coupling sites for K+ ions. The aqueous solution experiments indicated that raw straw could absorb water at 3.88 g g−1, and that this capacity rose to its maximum after 15 d of incubation. All of the experiments demonstrated that crop residues could absorb a large amount of aqueous solution and thereby preserve K+ indirectly during the initial decomposition period. These crop residues could also directly adsorb K+ via physical and chemical adsorption in the later period, allowing part of this K+ to be absorbed by plants in the next growing season. PMID:24587364
2014-01-01
Background: Meta-regression is becoming increasingly used to model study level covariate effects. However this type of statistical analysis presents many difficulties and challenges. Here two methods for calculating confidence intervals for the magnitude of the residual between-study variance in random effects meta-regression models are developed. A further suggestion for calculating credible intervals using informative prior distributions for the residual between-study variance is presented. Methods: Two recently proposed and, under the assumptions of the random effects model, exact methods for constructing confidence intervals for the between-study variance in random effects meta-analyses are extended to the meta-regression setting. The use of Generalised Cochran heterogeneity statistics is extended to the meta-regression setting and a Newton-Raphson procedure is developed to implement the Q profile method for meta-analysis and meta-regression. WinBUGS is used to implement informative priors for the residual between-study variance in the context of Bayesian meta-regressions. Results: Results are obtained for two contrasting examples, where the first example involves a binary covariate and the second involves a continuous covariate. Intervals for the residual between-study variance are wide for both examples. Conclusions: Statistical methods, and R computer software, are available to compute exact confidence intervals for the residual between-study variance under the random effects model for meta-regression. These frequentist methods are almost as easily implemented as their established counterparts for meta-analysis. Bayesian meta-regressions are also easily performed by analysts who are comfortable using WinBUGS. Estimates of the residual between-study variance in random effects meta-regressions should be routinely reported and accompanied by some measure of their uncertainty. Confidence and/or credible intervals are well-suited to this purpose. PMID:25196829
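A minimal sketch of the Q-profile idea for the residual between-study variance in a meta-regression: a weighted least-squares fit inside a root search that matches the generalised Q statistic to chi-square quantiles (a root-bracketing search is used here instead of the paper's Newton-Raphson procedure). The effect sizes, within-study variances and covariate below are synthetic assumptions.

```python
import numpy as np
from scipy import stats, optimize

def generalised_q(tau2, y, v, X):
    """Generalised Cochran Q at candidate tau^2: WLS fit, then weighted RSS."""
    w = 1.0 / (v + tau2)
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    resid = y - X @ beta
    return float(np.sum(w * resid ** 2))

def q_profile_ci(y, v, X, level=0.95):
    """Q-profile confidence interval for the residual between-study variance."""
    k, p = X.shape
    upper_quantile = stats.chi2.ppf((1 + level) / 2, df=k - p)  # gives lower tau^2 bound
    lower_quantile = stats.chi2.ppf((1 - level) / 2, df=k - p)  # gives upper tau^2 bound

    def bound(target):
        f = lambda t2: generalised_q(t2, y, v, X) - target
        if f(0.0) < 0:            # Q already below the target at tau^2 = 0
            return 0.0
        return optimize.brentq(f, 0.0, 1e4)  # Q is decreasing in tau^2

    return bound(upper_quantile), bound(lower_quantile)

# Synthetic example: 12 studies, intercept plus one continuous covariate.
rng = np.random.default_rng(5)
x = rng.uniform(0, 1, 12)
X = np.column_stack([np.ones(12), x])
v = rng.uniform(0.02, 0.1, 12)                       # within-study variances
y = 0.3 + 0.5 * x + rng.normal(0, np.sqrt(v + 0.05))
print(q_profile_ci(y, v, X))
```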