Sample records for pairwise decomposition analysis

  1. Eigenvalue-eigenvector decomposition (EED) analysis of dissimilarity and covariance matrix obtained from total synchronous fluorescence spectral (TSFS) data sets of herbal preparations: Optimizing the classification approach

    NASA Astrophysics Data System (ADS)

    Tarai, Madhumita; Kumar, Keshav; Divya, O.; Bairi, Partha; Mishra, Kishor Kumar; Mishra, Ashok Kumar

    2017-09-01

    The present work compares the dissimilarity and covariance based unsupervised chemometric classification approaches by taking the total synchronous fluorescence spectroscopy data sets acquired for the cumin and non-cumin based herbal preparations. The conventional decomposition method involves eigenvalue-eigenvector analysis of the covariance of the data set and finds the factors that can explain the overall major sources of variation present in the data set. The conventional approach does this irrespective of the fact that the samples belong to intrinsically different groups and hence leads to poor class separation. The present work shows that classification of such samples can be optimized by performing the eigenvalue-eigenvector decomposition on the pair-wise dissimilarity matrix.
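
    A minimal sketch (Python/NumPy) of the two decompositions being compared: eigenvalue-eigenvector analysis of the covariance matrix (the conventional route) versus eigenvalue-eigenvector analysis of a double-centred pairwise dissimilarity matrix. The Euclidean dissimilarity, the matrix sizes and the variable names are illustrative assumptions, not the authors' exact protocol.

    ```python
    import numpy as np

    def covariance_scores(X, k=2):
        """Conventional route: EED of the covariance matrix of the data set."""
        Xc = X - X.mean(axis=0)
        evals, evecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
        order = np.argsort(evals)[::-1][:k]            # top-k factors by explained variance
        return Xc @ evecs[:, order]                    # sample scores

    def dissimilarity_scores(X, k=2):
        """Alternative route: EED of the double-centred pairwise dissimilarity matrix."""
        n = len(X)
        D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)   # pairwise dissimilarities
        J = np.eye(n) - np.ones((n, n)) / n
        B = -0.5 * J @ (D ** 2) @ J                    # Gower double-centring
        evals, evecs = np.linalg.eigh(B)
        order = np.argsort(evals)[::-1][:k]
        return evecs[:, order] * np.sqrt(np.maximum(evals[order], 0.0))

    # X stands for a (samples x unfolded TSFS intensities) matrix
    X = np.random.rand(30, 500)
    print(covariance_scores(X).shape, dissimilarity_scores(X).shape)
    ```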

  2. Eigenvalue-eigenvector decomposition (EED) analysis of dissimilarity and covariance matrix obtained from total synchronous fluorescence spectral (TSFS) data sets of herbal preparations: Optimizing the classification approach.

    PubMed

    Tarai, Madhumita; Kumar, Keshav; Divya, O; Bairi, Partha; Mishra, Kishor Kumar; Mishra, Ashok Kumar

    2017-09-05

    The present work compares the dissimilarity and covariance based unsupervised chemometric classification approaches by taking the total synchronous fluorescence spectroscopy data sets acquired for the cumin and non-cumin based herbal preparations. The conventional decomposition method involves eigenvalue-eigenvector analysis of the covariance of the data set and finds the factors that can explain the overall major sources of variation present in the data set. The conventional approach does this irrespective of the fact that the samples belong to intrinsically different groups and hence leads to poor class separation. The present work shows that classification of such samples can be optimized by performing the eigenvalue-eigenvector decomposition on the pair-wise dissimilarity matrix. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. Relationship of host recurrence in fungi to rates of tropical leaf decomposition

    Treesearch

    Mirna E. Santana; D. Jean Lodge; Patricia Lebow

    2004-01-01

    Here we explore the significance of fungal diversity for ecosystem processes by testing whether microfungal ‘preferences’ for different tropical leaf species (i.e., host recurrence) increase the rate of decomposition. We used pairwise combinations of γ-irradiated litter of five tree species with cultures of two dominant microfungi derived from each plant in a microcosm...

  4. Relationship of host recurrence in fungi to rates of tropical leaf decomposition

    Treesearch

    Mirna E. Santana; D. Jean Lodge; Patricia Lebow

    2005-01-01

    Here we explore the significance of fungal diversity for ecosystem processes by testing whether microfungal ‘preferences’ for different tropical leaf species (i.e., host recurrence) increase the rate of decomposition. We used pairwise combinations of γ-irradiated litter of five tree species with cultures of two dominant microfungi derived from each plant in a...

  5. Threesomes destabilise certain relationships: multispecies interactions between wood decay fungi in natural resources

    PubMed Central

    Savoury, Melanie; Toledo, Selin; Kingscott-Edmunds, James; Bettridge, Aimee; Waili, Nasra Al; Boddy, Lynne

    2017-01-01

    Abstract Understanding interspecific interactions is key to explaining and modelling community development and associated ecosystem function. Most interactions research has focused on pairwise combinations, overlooking the complexity of multispecies communities. This study investigated three-way interactions between saprotrophic fungi in wood and across soil, and indicated that pairwise combinations are often inaccurate predictors of the outcomes of multispecies competition in wood block interactions. This inconsistency was especially true of intransitive combinations, resulting in increased species coexistence within the resource. Furthermore, the addition of a third competitor frequently destabilised the otherwise consistent outcomes of pairwise combinations in wood blocks, which occasionally resulted in altered resource decomposition rates, depending on the relative decay abilities of the species involved. Conversely, interaction outcomes in soil microcosms were unaffected by the presence of a third combatant. Multispecies interactions promoted species diversity within natural resources, and made community dynamics less consistent than could be predicted from pairwise interaction studies. PMID:28175239

  6. Independent EEG Sources Are Dipolar

    PubMed Central

    Delorme, Arnaud; Palmer, Jason; Onton, Julie; Oostenveld, Robert; Makeig, Scott

    2012-01-01

    Independent component analysis (ICA) and blind source separation (BSS) methods are increasingly used to separate individual brain and non-brain source signals mixed by volume conduction in electroencephalographic (EEG) and other electrophysiological recordings. We compared results of decomposing thirteen 71-channel human scalp EEG datasets by 22 ICA and BSS algorithms, assessing the pairwise mutual information (PMI) in scalp channel pairs, the remaining PMI in component pairs, the overall mutual information reduction (MIR) effected by each decomposition, and decomposition ‘dipolarity’ defined as the number of component scalp maps matching the projection of a single equivalent dipole with less than a given residual variance. The least well-performing algorithm was principal component analysis (PCA); best performing were AMICA and other likelihood/mutual information based ICA methods. Though these and other commonly-used decomposition methods returned many similar components, across 18 ICA/BSS algorithms mean dipolarity varied linearly with both MIR and with PMI remaining between the resulting component time courses, a result compatible with an interpretation of many maximally independent EEG components as being volume-conducted projections of partially-synchronous local cortical field activity within single compact cortical domains. To encourage further method comparisons, the data and software used to prepare the results have been made available (http://sccn.ucsd.edu/wiki/BSSComparison). PMID:22355308

  7. Spectral simplicity of apparent complexity. II. Exact complexities and complexity spectra

    NASA Astrophysics Data System (ADS)

    Riechers, Paul M.; Crutchfield, James P.

    2018-03-01

    The meromorphic functional calculus developed in Part I overcomes the nondiagonalizability of linear operators that arises often in the temporal evolution of complex systems and is generic to the metadynamics of predicting their behavior. Using the resulting spectral decomposition, we derive closed-form expressions for correlation functions, finite-length Shannon entropy-rate approximates, asymptotic entropy rate, excess entropy, transient information, transient and asymptotic state uncertainties, and synchronization information of stochastic processes generated by finite-state hidden Markov models. This introduces analytical tractability to investigating information processing in discrete-event stochastic processes, symbolic dynamics, and chaotic dynamical systems. Comparisons reveal mathematical similarities between complexity measures originally thought to capture distinct informational and computational properties. We also introduce a new kind of spectral analysis via coronal spectrograms and the frequency-dependent spectra of past-future mutual information. We analyze a number of examples to illustrate the methods, emphasizing processes with multivariate dependencies beyond pairwise correlation. This includes spectral decomposition calculations for one representative example in full detail.

  8. Importance of Force Decomposition for Local Stress Calculations in Biomembrane Molecular Simulations.

    PubMed

    Vanegas, Juan M; Torres-Sánchez, Alejandro; Arroyo, Marino

    2014-02-11

    Local stress fields are routinely computed from molecular dynamics trajectories to understand the structure and mechanical properties of lipid bilayers. These calculations can be systematically understood with the Irving-Kirkwood-Noll theory. In identifying the stress tensor, a crucial step is the decomposition of the forces on the particles into pairwise contributions. However, such a decomposition is not unique in general, leading to an ambiguity in the definition of the stress tensor, particularly for multibody potentials. Furthermore, a theoretical treatment of constraints in local stress calculations has been lacking. Here, we present a new implementation of local stress calculations that systematically treats constraints and considers a privileged decomposition, the central force decomposition, that leads to a symmetric stress tensor by construction. We focus on biomembranes, although the methodology presented here is widely applicable. Our results show that some unphysical behavior obtained with previous implementations (e.g. nonconstant normal stress profiles along an isotropic bilayer in equilibrium) is a consequence of an improper treatment of constraints. Furthermore, other valid force decompositions produce significantly different stress profiles, particularly in the presence of dihedral potentials. Our methodology reveals the striking effect of unsaturations on the bilayer mechanics, missed by previous stress calculation implementations.
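
    For orientation, the simplest (homogeneous, whole-box) limit of the stress being localised: once the total force on each particle is decomposed into pairwise terms, the virial form below applies, and the Irving-Kirkwood-Noll fields spread the same pairwise contributions over space. Sign conventions vary between references; this is a generic textbook form, not the paper's full local-stress expression.

    ```latex
    \boldsymbol{\sigma} \;=\; \frac{1}{V}\Big(\sum_{i<j}\mathbf{f}_{ij}\otimes\mathbf{r}_{ij}
      \;-\;\sum_{i} m_i\,\mathbf{v}_i\otimes\mathbf{v}_i\Big),
    \qquad
    \mathbf{F}_i \;=\; \sum_{j\neq i}\mathbf{f}_{ij},
    ```

    where the central force decomposition takes each f_ij parallel to r_ij, which makes the stress tensor symmetric by construction.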

  9. ChIP-PIT: Enhancing the Analysis of ChIP-Seq Data Using Convex-Relaxed Pair-Wise Interaction Tensor Decomposition.

    PubMed

    Zhu, Lin; Guo, Wei-Li; Deng, Su-Ping; Huang, De-Shuang

    2016-01-01

    In recent years, thanks to the efforts of individual scientists and research consortiums, a huge amount of chromatin immunoprecipitation followed by high-throughput sequencing (ChIP-seq) experimental data have been accumulated. Instead of investigating them independently, several recent studies have convincingly demonstrated that a wealth of scientific insights can be gained by integrative analysis of these ChIP-seq data. However, when used for the purpose of integrative analysis, a serious drawback of the current ChIP-seq technique is that it is still expensive and time-consuming to generate ChIP-seq datasets of high standard. Most researchers are therefore unable to obtain complete ChIP-seq data for several TFs in a wide variety of cell lines, which considerably limits the understanding of transcriptional regulation patterns. In this paper, we propose a novel method called ChIP-PIT to overcome the aforementioned limitation. In ChIP-PIT, ChIP-seq data corresponding to a diverse collection of cell types, TFs and genes are fused together using the three-mode pair-wise interaction tensor (PIT) model, and the prediction of unperformed ChIP-seq experimental results is formulated as a tensor completion problem. Computationally, we propose an efficient first-order method based on extensions of the coordinate descent method to learn the optimal solution of ChIP-PIT, which makes it particularly suitable for the analysis of massive-scale ChIP-seq data. Experimental evaluation on the ENCODE data illustrates the usefulness of the proposed model.

  10. Statistical methods for change-point detection in surface temperature records

    NASA Astrophysics Data System (ADS)

    Pintar, A. L.; Possolo, A.; Zhang, N. F.

    2013-09-01

    We describe several statistical methods to detect possible change-points in a time series of values of surface temperature measured at a meteorological station, and to assess the statistical significance of such changes, taking into account the natural variability of the measured values, and the autocorrelations between them. These methods serve to determine whether the record may suffer from biases unrelated to the climate signal, hence whether there may be a need for adjustments as considered by M. J. Menne and C. N. Williams (2009) "Homogenization of Temperature Series via Pairwise Comparisons", Journal of Climate 22 (7), 1700-1717. We also review methods to characterize patterns of seasonality (seasonal decomposition using monthly medians or robust local regression), and explain the role they play in the imputation of missing values, and in enabling robust decompositions of the measured values into a seasonal component, a possible climate signal, and a station-specific remainder. The methods for change-point detection that we describe include statistical process control, wavelet multi-resolution analysis, adaptive weights smoothing, and a Bayesian procedure, all of which are applicable to single station records.
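
    A minimal Python sketch of one of the seasonality characterisations mentioned (monthly medians); the robust local-regression variant and the change-point detectors themselves are not reproduced, and the toy record is an assumption for illustration.

    ```python
    import numpy as np

    def monthly_median_decomposition(temps, months):
        """Split a temperature record into a seasonal component (median per calendar month)
        and a deseasonalised remainder (climate signal + station-specific residual)."""
        temps, months = np.asarray(temps, float), np.asarray(months)
        profile = {m: np.nanmedian(temps[months == m]) for m in range(1, 13)}
        seasonal = np.array([profile[m] for m in months])
        return seasonal, temps - seasonal

    rng = np.random.default_rng(0)
    months = np.tile(np.arange(1, 13), 10)                     # 10 years of monthly data
    temps = 10 + 8 * np.sin(2 * np.pi * (months - 1) / 12) + rng.normal(0, 1, months.size)
    seasonal, remainder = monthly_median_decomposition(temps, months)
    print(round(remainder.std(), 2))
    ```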

  11. When do correlations increase with firing rates in recurrent networks?

    PubMed Central

    2017-01-01

    A central question in neuroscience is to understand how noisy firing patterns are used to transmit information. Because neural spiking is noisy, spiking patterns are often quantified via pairwise correlations, or the probability that two cells will spike coincidentally, above and beyond their baseline firing rate. One observation frequently made in experiments, is that correlations can increase systematically with firing rate. Theoretical studies have determined that stimulus-dependent correlations that increase with firing rate can have beneficial effects on information coding; however, we still have an incomplete understanding of what circuit mechanisms do, or do not, produce this correlation-firing rate relationship. Here, we studied the relationship between pairwise correlations and firing rates in recurrently coupled excitatory-inhibitory spiking networks with conductance-based synapses. We found that with stronger excitatory coupling, a positive relationship emerged between pairwise correlations and firing rates. To explain these findings, we used linear response theory to predict the full correlation matrix and to decompose correlations in terms of graph motifs. We then used this decomposition to explain why covariation of correlations with firing rate—a relationship previously explained in feedforward networks driven by correlated input—emerges in some recurrent networks but not in others. Furthermore, when correlations covary with firing rate, this relationship is reflected in low-rank structure in the correlation matrix. PMID:28448499

  12. Pairwise additivity of energy components in protein-ligand binding: The HIV II protease-Indinavir case

    NASA Astrophysics Data System (ADS)

    Ucisik, Melek N.; Dashti, Danial S.; Faver, John C.; Merz, Kenneth M.

    2011-08-01

    An energy expansion (binding energy decomposition into n-body interaction terms for n ≥ 2) to express the receptor-ligand binding energy for the fragmented HIV II protease-Indinavir system is described to address the role of cooperativity in ligand binding. The outcome of this energy expansion is compared to the total receptor-ligand binding energy at the Hartree-Fock, density functional theory, and semiempirical levels of theory. We find that the sum of the pairwise interaction energies approximates the total binding energy to ˜82% for HF and to >95% for both the M06-L density functional and PM6-DH2 semiempirical method. The contribution of the three-body interactions amounts to 18.7%, 3.8%, and 1.4% for HF, M06-L, and PM6-DH2, respectively. We find that the expansion can be safely truncated after n = 3. That is, the contribution of the interactions involving more than three parties to the total binding energy of Indinavir to the HIV II protease receptor is negligible. Overall, we find that the two-body terms represent a good approximation to the total binding energy of the system, which points to pairwise additivity in the present case. This basic principle of pairwise additivity is utilized in fragment-based drug design approaches and our results support its continued use. The present results can also aid in the validation of non-bonded terms contained within common force fields and in the correction of systematic errors in physics-based score functions.
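
    The expansion referred to here is the standard many-body (n-body) expansion of the total interaction energy over fragments; pairwise additivity corresponds to truncating it after the two-body sum. Written generically (the symbols are standard notation, not the paper's):

    ```latex
    E_{1\cdots N} \;=\; \sum_i E_i \;+\; \sum_{i<j}\Delta E_{ij} \;+\; \sum_{i<j<k}\Delta E_{ijk} \;+\;\cdots,
    \qquad
    \Delta E_{ij} = E_{ij}-E_i-E_j,
    ```

    with the three-body term defined analogously by subtracting all lower-order contributions from E_{ijk}.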

  13. Gap locations influence the release of carbon, nitrogen and phosphorus in two shrub foliar litter in an alpine fir forest

    PubMed Central

    He, Wei; Wu, Fuzhong; Yang, Wanqin; Zhang, Danju; Xu, Zhenfeng; Tan, Bo; Zhao, Yeyi; Justine, Meta Francis

    2016-01-01

    Gap formation favors the growth of understory plants and affects the decomposition process of plant debris inside and outside of gaps. Little information is available regarding how bioelement release from shrub litter is affected by gap formation during critical periods. The release of carbon (C), nitrogen (N), and phosphorus (P) in the foliar litter of Fargesia nitida and Salix paraplesia in response to gap locations was determined in an alpine forest of the eastern Qinghai-Tibet Plateau via a 2-year litter decomposition experiment. The daily release rates of C, N, and P increased from the closed canopy to the gap centers during the two winters, the two later growing seasons and the entire 2 years, whereas this trend was reversed during the two early growing seasons. The pairwise ratios among C, N, and P converged as the litter decomposition proceeded. Compared with the closed canopy, the gap centers displayed higher C:P and N:P ratio but a lower C:N ratio as the decomposition proceeded. Alpine forest gaps accelerate the release of C, N, and P in decomposing shrub litter, implying that reduced snow cover resulting from vanishing gaps may inhibit the release of these elements in alpine forests. PMID:26906762

  14. Reaction mechanism and reaction coordinates from the viewpoint of energy flow

    PubMed Central

    2016-01-01

    Reaction coordinates are of central importance for correct understanding of reaction dynamics in complex systems, but their counter-intuitive nature made it a daunting challenge to identify them. Starting from an energetic view of a reaction process as stochastic energy flows biased towards preferred channels, which we deemed the reaction coordinates, we developed a rigorous scheme for decomposing energy changes of a system, both potential and kinetic, into pairwise components. The pairwise energy flows between different coordinates provide a concrete statistical mechanical language for depicting reaction mechanisms. Application of this scheme to the C7eq → C7ax transition of the alanine dipeptide in vacuum revealed novel and intriguing mechanisms that eluded previous investigations of this well studied prototype system for biomolecular conformational dynamics. Using a cost function developed from the energy decomposition components by proper averaging over the transition path ensemble, we were able to identify signatures of the reaction coordinates of this system without requiring any input from human intuition. PMID:27004858

  15. Reaction mechanism and reaction coordinates from the viewpoint of energy flow

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Wenjin; Ma, Ao, E-mail: aoma@uic.edu

    Reaction coordinates are of central importance for correct understanding of reaction dynamics in complex systems, but their counter-intuitive nature made it a daunting challenge to identify them. Starting from an energetic view of a reaction process as stochastic energy flows biased towards preferred channels, which we deemed the reaction coordinates, we developed a rigorous scheme for decomposing energy changes of a system, both potential and kinetic, into pairwise components. The pairwise energy flows between different coordinates provide a concrete statistical mechanical language for depicting reaction mechanisms. Application of this scheme to the C7eq → C7ax transition of the alanine dipeptide in vacuum revealed novel and intriguing mechanisms that eluded previous investigations of this well studied prototype system for biomolecular conformational dynamics. Using a cost function developed from the energy decomposition components by proper averaging over the transition path ensemble, we were able to identify signatures of the reaction coordinates of this system without requiring any input from human intuition.

  16. Compression of hyper-spectral images using an accelerated nonnegative tensor decomposition

    NASA Astrophysics Data System (ADS)

    Li, Jin; Liu, Zilong

    2017-12-01

    Nonnegative tensor Tucker decomposition (NTD) in a transform domain (e.g., 2D-DWT) has been used in the compression of hyper-spectral images because it can remove redundancies between spectral bands and also exploit spatial correlations within each band. However, NTD has a very high computational cost. In this paper, we propose a low-complexity NTD-based compression method for hyper-spectral images, based on a pair-wise multilevel grouping approach that overcomes the high computational cost of the NTD. The proposed method achieves this lower complexity with only a slight decrease in coding performance compared to conventional NTD. Experiments confirm that the method requires less processing time while still achieving better coding performance than compression without the NTD. The proposed approach has potential application in the lossy compression of hyper-spectral or multi-spectral images.

  17. In silico local structure approach: a case study on outer membrane proteins.

    PubMed

    Martin, Juliette; de Brevern, Alexandre G; Camproux, Anne-Claude

    2008-04-01

    The detection of Outer Membrane Proteins (OMP) in whole genomes is a topical question, and their sequence characteristics have thus been intensively studied. This class of protein displays a common beta-barrel architecture, formed by adjacent antiparallel strands. However, due to the lack of available structures, few structural studies have been made on this class of proteins. Here we propose a novel OMP local structure investigation, based on a structural alphabet approach, i.e., the decomposition of 3D structures using a library of four-residue protein fragments. The optimal decomposition of structures using a hidden Markov model results in a specific structural alphabet of 20 fragments, six of them dedicated to the decomposition of beta-strands. This optimal alphabet, called SA20-OMP, is analyzed in detail, in terms of local structures and transitions between fragments. It highlights a particular and strong organization of beta-strands as series of regular canonical structural fragments. The comparison with alphabets learned on globular structures indicates that the internal organization of OMP structures is more constrained than in globular structures. The analysis of OMP structures using SA20-OMP reveals some recurrent structural patterns. The preferred location of fragments in the distinct regions of the membrane is investigated. The study of pairwise specificity of fragments reveals that some contacts between structural fragments in beta-sheets are clearly favored whereas others are avoided. This contact specificity is stronger in OMP than in globular structures. Moreover, SA20-OMP also captures sequential information, which can be integrated into a scoring function for structural model ranking with very promising results. (c) 2007 Wiley-Liss, Inc.

  18. Pairwise-Comparison Software

    NASA Technical Reports Server (NTRS)

    Ricks, Wendell R.

    1995-01-01

    Pairwise comparison (PWC) is a computer program that collects data for psychometric scaling techniques now used in cognitive research. It applies the technique of pairwise comparisons, which is one of many techniques commonly used to acquire the data necessary for such analyses. PWC administers the task, collects data from the test subject, and formats the data for analysis. Written in Turbo Pascal v6.0.
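
    PWC itself is written in Turbo Pascal; the Python sketch below only illustrates the core of a pairwise-comparison task: enumerating every stimulus pair, presenting the pairs in random order, and tallying choices into a crude preference scale. The callback "subject" and the count-based scaling are assumptions for illustration, not the NASA program's design.

    ```python
    import itertools, random
    from collections import Counter

    def run_pairwise_comparison(stimuli, choose, seed=None):
        """Present every unordered pair once (shuffled, random left/right) and tally wins."""
        rng = random.Random(seed)
        pairs = list(itertools.combinations(stimuli, 2))
        rng.shuffle(pairs)
        wins = Counter()
        for a, b in pairs:
            left, right = rng.sample((a, b), 2)
            wins[choose(left, right)] += 1
        # fraction of its comparisons each stimulus won: a simple stand-in for a scale value
        return {s: wins[s] / (len(stimuli) - 1) for s in stimuli}

    # demo with an automatic "subject" that always prefers the longer label
    print(run_pairwise_comparison(["A", "BB", "CCC"], choose=lambda l, r: max(l, r, key=len), seed=1))
    ```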

  19. Fast Decentralized Averaging via Multi-scale Gossip

    NASA Astrophysics Data System (ADS)

    Tsianos, Konstantinos I.; Rabbat, Michael G.

    We are interested in the problem of computing the average consensus in a distributed fashion on random geometric graphs. We describe a new algorithm called Multi-scale Gossip which employs a hierarchical decomposition of the graph to partition the computation into tractable sub-problems. Using only pairwise messages of fixed size that travel at most O(n^{1/3}) hops, our algorithm is robust and has communication cost of O(n log log n log ε^{-1}) transmissions, which is order-optimal up to the logarithmic factor in n. Simulated experiments verify the good expected performance on graphs of many thousands of nodes.
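
    A Python sketch of the underlying primitive, plain randomized pairwise gossip on a random geometric graph; the hierarchical multi-scale decomposition of the paper is not reproduced, and the graph size, radius and round count are arbitrary assumptions (the graph is assumed connected).

    ```python
    import random
    import networkx as nx

    def pairwise_gossip(graph, values, rounds=20000, seed=0):
        """Each round, the two endpoints of a random edge replace their values by the pair average."""
        rng = random.Random(seed)
        x = dict(values)
        edges = list(graph.edges())
        for _ in range(rounds):
            i, j = rng.choice(edges)
            x[i] = x[j] = 0.5 * (x[i] + x[j])
        return x

    g = nx.random_geometric_graph(200, radius=0.2, seed=1)
    init = {n: random.Random(n).random() for n in g.nodes()}
    out = pairwise_gossip(g, init)
    true_avg = sum(init.values()) / len(init)
    print(max(abs(v - true_avg) for v in out.values()))   # should be small after enough rounds
    ```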

  20. Multiscale Simulations of Magnetic Island Coalescence

    NASA Technical Reports Server (NTRS)

    Dorelli, John C.

    2010-01-01

    We describe a new interactive parallel Adaptive Mesh Refinement (AMR) framework written in the Python programming language. This new framework, PyAMR, hides the details of parallel AMR data structures and algorithms (e.g., domain decomposition, grid partition, and inter-process communication), allowing the user to focus on the development of algorithms for advancing the solution of a systems of partial differential equations on a single uniform mesh. We demonstrate the use of PyAMR by simulating the pairwise coalescence of magnetic islands using the resistive Hall MHD equations. Techniques for coupling different physics models on different levels of the AMR grid hierarchy are discussed.

  21. Multivariate modelling of endophenotypes associated with the metabolic syndrome in Chinese twins.

    PubMed

    Pang, Z; Zhang, D; Li, S; Duan, H; Hjelmborg, J; Kruse, T A; Kyvik, K O; Christensen, K; Tan, Q

    2010-12-01

    The common genetic and environmental effects on endophenotypes related to the metabolic syndrome have been investigated using bivariate and multivariate twin models. This paper extends the pairwise analysis approach by introducing independent and common pathway models to Chinese twin data. The aim was to explore the common genetic architecture in the development of these phenotypes in the Chinese population. Three multivariate models including the full saturated Cholesky decomposition model, the common factor independent pathway model and the common factor common pathway model were fitted to 695 pairs of Chinese twins representing six phenotypes including BMI, total cholesterol, total triacylglycerol, fasting glucose, HDL and LDL. Performances of the nested models were compared with that of the full Cholesky model. Cross-phenotype correlation coefficients gave clear indication of common genetic or environmental backgrounds in the phenotypes. Decomposition of phenotypic correlation by the Cholesky model revealed that the observed phenotypic correlation among lipid phenotypes had genetic and unique environmental backgrounds. Both pathway models suggest a common genetic architecture for lipid phenotypes, which is distinct from that of the non-lipid phenotypes. The declining performance with model restriction indicates biological heterogeneity in development among some of these phenotypes. Our multivariate analyses revealed common genetic and environmental backgrounds for the studied lipid phenotypes in Chinese twins. Model performance showed that physiologically distinct endophenotypes may follow different genetic regulations.

  22. A three-way parallel ICA approach to analyze links among genetics, brain structure and brain function.

    PubMed

    Vergara, Victor M; Ulloa, Alvaro; Calhoun, Vince D; Boutte, David; Chen, Jiayu; Liu, Jingyu

    2014-09-01

    Multi-modal data analysis techniques, such as the Parallel Independent Component Analysis (pICA), are essential in neuroscience, medical imaging and genetic studies. The pICA algorithm allows the simultaneous decomposition of up to two data modalities, achieving better performance than separate ICA decompositions and enabling the discovery of links between modalities. However, advances in data acquisition techniques facilitate the collection of more than two data modalities from each subject. Examples of commonly measured modalities include genetic information, structural magnetic resonance imaging (MRI) and functional MRI. In order to take full advantage of the available data, this work extends the pICA approach to incorporate three modalities in one comprehensive analysis. Simulations demonstrate the three-way pICA performance in identifying pairwise links between modalities and estimating independent components which more closely resemble the true sources than components found by pICA or separate ICA analyses. In addition, the three-way pICA algorithm is applied to real experimental data obtained from a study that investigates genetic effects on alcohol dependence. Considered data modalities include functional MRI (contrast images during alcohol exposure paradigm), gray matter concentration images from structural MRI and genetic single nucleotide polymorphism (SNP). The three-way pICA approach identified links between a SNP component (pointing to brain function and mental disorder associated genes, including BDNF, GRIN2B and NRG1), a functional component related to increased activation in the precuneus area, and a gray matter component comprising part of the default mode network and the caudate. Although such findings need further verification, the simulation and in-vivo results validate the three-way pICA algorithm presented here as a useful tool in biomedical data fusion applications. Copyright © 2014 Elsevier Inc. All rights reserved.

  23. Sparse approximation of currents for statistics on curves and surfaces.

    PubMed

    Durrleman, Stanley; Pennec, Xavier; Trouvé, Alain; Ayache, Nicholas

    2008-01-01

    Computing, processing, and visualizing statistics on shapes like curves or surfaces is a real challenge, with many applications ranging from medical image analysis to computational geometry. Modelling such geometrical primitives with currents avoids the feature-based approach as well as the point-correspondence method. This framework has proved powerful for registering brain surfaces and for measuring geometrical invariants. However, although state-of-the-art methods perform pairwise registrations efficiently, new numerical schemes are required to process groupwise statistics, due to the increasing complexity as the size of the database grows. Statistics such as the mean and principal modes of a set of shapes often have a heavy and highly redundant representation. We therefore propose to find an adapted basis on which the mean and principal modes have a sparse decomposition. Besides the computational improvement, this sparse representation offers a way to visualize and interpret statistics on currents. Experiments show the relevance of the approach on 34 sets of 70 sulcal lines and on 50 sets of 10 meshes of deep brain structures.

  24. Nonlocal van der Waals functionals: The case of rare-gas dimers and solids

    NASA Astrophysics Data System (ADS)

    Tran, Fabien; Hutter, Jürg

    2013-05-01

    Recently, the nonlocal van der Waals (vdW) density functionals [M. Dion, H. Rydberg, E. Schröder, D. C. Langreth, and B. I. Lundqvist, Phys. Rev. Lett. 92, 246401 (2004), 10.1103/PhysRevLett.92.246401] have attracted considerable attention due to their good performance for systems where weak interactions are important. Since the physics of dispersion is included in these functionals, they are usually more accurate and show less erratic behavior than the semilocal and hybrid methods. In this work, several variants of the vdW functionals have been tested on rare-gas dimers (from He2 to Kr2) and solids (Ne, Ar, and Kr) and their accuracy compared to standard semilocal approximations, supplemented or not by an atom-pairwise dispersion correction [S. Grimme, J. Antony, S. Ehrlich, and H. Krieg, J. Chem. Phys. 132, 154104 (2010), 10.1063/1.3382344]. An analysis of the results in terms of energy decomposition is also provided.
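
    The atom-pairwise correction referenced (Grimme et al., 2010) adds, on top of the semilocal functional, a damped two-body dispersion sum of the generic form below (the optional three-body term is omitted here):

    ```latex
    E_{\mathrm{disp}} \;=\; -\sum_{A<B}\;\sum_{n=6,8} s_n\,\frac{C_n^{AB}}{R_{AB}^{\,n}}\,f_{d,n}(R_{AB}),
    ```

    where C_n^{AB} are pairwise dispersion coefficients, s_n are functional-dependent scaling factors, and f_{d,n} is a short-range damping function.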

  25. Precoded spatial multiplexing MIMO system with spatial component interleaver.

    PubMed

    Gao, Xiang; Wu, Zhanji

    In this paper, the performance of precoded bit-interleaved coded modulation (BICM) spatial multiplexing multiple-input multiple-output (MIMO) system with spatial component interleaver is investigated. For the ideal precoded spatial multiplexing MIMO system with spatial component interleaver based on singular value decomposition (SVD) of the MIMO channel, the average pairwise error probability (PEP) of coded bits is derived. Based on the PEP analysis, the optimum spatial Q-component interleaver design criterion is provided to achieve the minimum error probability. For the limited feedback precoded proposed scheme with linear zero forcing (ZF) receiver, in order to minimize a bound on the average probability of a symbol vector error, a novel effective signal-to-noise ratio (SNR)-based precoding matrix selection criterion and a simplified criterion are proposed. Based on the average mutual information (AMI)-maximization criterion, the optimal constellation rotation angles are investigated. Simulation results indicate that the optimized spatial multiplexing MIMO system with spatial component interleaver can achieve significant performance advantages compared to the conventional spatial multiplexing MIMO system.
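
    A NumPy sketch of the ideal SVD-based precoding the analysis builds on: precoding with the right singular vectors and combining with the left ones turns the MIMO channel into parallel eigen-subchannels. Antenna counts, the flat-fading channel, the QPSK alphabet and the noise level are illustrative assumptions; the spatial component interleaver and the coding chain are not modelled.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    nt = nr = 4                                           # transmit / receive antennas
    H = (rng.normal(size=(nr, nt)) + 1j * rng.normal(size=(nr, nt))) / np.sqrt(2)

    U, s, Vh = np.linalg.svd(H)                           # H = U @ diag(s) @ Vh
    qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
    symbols = rng.choice(qpsk, size=nt)                   # one symbol per spatial stream

    x = Vh.conj().T @ symbols                             # precode with V
    y = H @ x + 0.05 * (rng.normal(size=nr) + 1j * rng.normal(size=nr))
    r = U.conj().T @ y                                    # combine with U^H: r ≈ diag(s) @ symbols
    print(np.round(r / s, 2))                             # per-subchannel equalised estimates
    ```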

  26. Living network meta-analysis compared with pairwise meta-analysis in comparative effectiveness research: empirical study

    PubMed Central

    Nikolakopoulou, Adriani; Mavridis, Dimitris; Furukawa, Toshi A; Cipriani, Andrea; Tricco, Andrea C; Straus, Sharon E; Siontis, George C M; Egger, Matthias

    2018-01-01

    Abstract Objective To examine whether the continuous updating of networks of prospectively planned randomised controlled trials (RCTs) (“living” network meta-analysis) provides strong evidence against the null hypothesis in comparative effectiveness of medical interventions earlier than the updating of conventional, pairwise meta-analysis. Design Empirical study of the accumulating evidence about the comparative effectiveness of clinical interventions. Data sources Database of network meta-analyses of RCTs identified through searches of Medline, Embase, and the Cochrane Database of Systematic Reviews until 14 April 2015. Eligibility criteria for study selection Network meta-analyses published after January 2012 that compared at least five treatments and included at least 20 RCTs. Clinical experts were asked to identify in each network the treatment comparison of greatest clinical interest. Comparisons were excluded for which direct and indirect evidence disagreed, based on side, or node, splitting test (P<0.10). Outcomes and analysis Cumulative pairwise and network meta-analyses were performed for each selected comparison. Monitoring boundaries of statistical significance were constructed and the evidence against the null hypothesis was considered to be strong when the monitoring boundaries were crossed. A significance level was defined as α=5%, power of 90% (β=10%), and an anticipated treatment effect to detect equal to the final estimate from the network meta-analysis. The frequency and time to strong evidence was compared against the null hypothesis between pairwise and network meta-analyses. Results 49 comparisons of interest from 44 networks were included; most (n=39, 80%) were between active drugs, mainly from the specialties of cardiology, endocrinology, psychiatry, and rheumatology. 29 comparisons were informed by both direct and indirect evidence (59%), 13 by indirect evidence (27%), and 7 by direct evidence (14%). Both network and pairwise meta-analysis provided strong evidence against the null hypothesis for seven comparisons, but for an additional 10 comparisons only network meta-analysis provided strong evidence against the null hypothesis (P=0.002). The median time to strong evidence against the null hypothesis was 19 years with living network meta-analysis and 23 years with living pairwise meta-analysis (hazard ratio 2.78, 95% confidence interval 1.00 to 7.72, P=0.05). Studies directly comparing the treatments of interest continued to be published for eight comparisons after strong evidence had become evident in network meta-analysis. Conclusions In comparative effectiveness research, prospectively planned living network meta-analyses produced strong evidence against the null hypothesis more often and earlier than conventional, pairwise meta-analyses. PMID:29490922

  27. Living network meta-analysis compared with pairwise meta-analysis in comparative effectiveness research: empirical study.

    PubMed

    Nikolakopoulou, Adriani; Mavridis, Dimitris; Furukawa, Toshi A; Cipriani, Andrea; Tricco, Andrea C; Straus, Sharon E; Siontis, George C M; Egger, Matthias; Salanti, Georgia

    2018-02-28

    To examine whether the continuous updating of networks of prospectively planned randomised controlled trials (RCTs) ("living" network meta-analysis) provides strong evidence against the null hypothesis in comparative effectiveness of medical interventions earlier than the updating of conventional, pairwise meta-analysis. Empirical study of the accumulating evidence about the comparative effectiveness of clinical interventions. Database of network meta-analyses of RCTs identified through searches of Medline, Embase, and the Cochrane Database of Systematic Reviews until 14 April 2015. Network meta-analyses published after January 2012 that compared at least five treatments and included at least 20 RCTs. Clinical experts were asked to identify in each network the treatment comparison of greatest clinical interest. Comparisons were excluded for which direct and indirect evidence disagreed, based on side, or node, splitting test (P<0.10). Cumulative pairwise and network meta-analyses were performed for each selected comparison. Monitoring boundaries of statistical significance were constructed and the evidence against the null hypothesis was considered to be strong when the monitoring boundaries were crossed. A significance level was defined as α=5%, power of 90% (β=10%), and an anticipated treatment effect to detect equal to the final estimate from the network meta-analysis. The frequency and time to strong evidence was compared against the null hypothesis between pairwise and network meta-analyses. 49 comparisons of interest from 44 networks were included; most (n=39, 80%) were between active drugs, mainly from the specialties of cardiology, endocrinology, psychiatry, and rheumatology. 29 comparisons were informed by both direct and indirect evidence (59%), 13 by indirect evidence (27%), and 7 by direct evidence (14%). Both network and pairwise meta-analysis provided strong evidence against the null hypothesis for seven comparisons, but for an additional 10 comparisons only network meta-analysis provided strong evidence against the null hypothesis (P=0.002). The median time to strong evidence against the null hypothesis was 19 years with living network meta-analysis and 23 years with living pairwise meta-analysis (hazard ratio 2.78, 95% confidence interval 1.00 to 7.72, P=0.05). Studies directly comparing the treatments of interest continued to be published for eight comparisons after strong evidence had become evident in network meta-analysis. In comparative effectiveness research, prospectively planned living network meta-analyses produced strong evidence against the null hypothesis more often and earlier than conventional, pairwise meta-analyses. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  28. Assessing many-body contributions to intermolecular interactions of the AMOEBA force field using energy decomposition analysis of electronic structure calculations.

    PubMed

    Demerdash, Omar; Mao, Yuezhi; Liu, Tianyi; Head-Gordon, Martin; Head-Gordon, Teresa

    2017-10-28

    In this work, we evaluate the accuracy of the classical AMOEBA model for representing many-body interactions, such as polarization, charge transfer, and Pauli repulsion and dispersion, through comparison against an energy decomposition method based on absolutely localized molecular orbitals (ALMO-EDA) for the water trimer and a variety of ion-water systems. When the 2- and 3-body contributions according to the many-body expansion are analyzed for the ion-water trimer systems examined here, the 3-body contributions to Pauli repulsion and dispersion are found to be negligible under ALMO-EDA, thereby supporting the validity of the pairwise-additive approximation in AMOEBA's 14-7 van der Waals term. However AMOEBA shows imperfect cancellation of errors for the missing effects of charge transfer and incorrectness in the distance dependence for polarization when compared with the corresponding ALMO-EDA terms. We trace the larger 2-body followed by 3-body polarization errors to the Thole damping scheme used in AMOEBA, and although the width parameter in Thole damping can be changed to improve agreement with the ALMO-EDA polarization for points about equilibrium, the correct profile of polarization as a function of intermolecular distance cannot be reproduced. The results suggest that there is a need for re-examining the damping and polarization model used in the AMOEBA force field and provide further insights into the formulations of polarizable force fields in general.

  29. Assessing many-body contributions to intermolecular interactions of the AMOEBA force field using energy decomposition analysis of electronic structure calculations

    NASA Astrophysics Data System (ADS)

    Demerdash, Omar; Mao, Yuezhi; Liu, Tianyi; Head-Gordon, Martin; Head-Gordon, Teresa

    2017-10-01

    In this work, we evaluate the accuracy of the classical AMOEBA model for representing many-body interactions, such as polarization, charge transfer, and Pauli repulsion and dispersion, through comparison against an energy decomposition method based on absolutely localized molecular orbitals (ALMO-EDA) for the water trimer and a variety of ion-water systems. When the 2- and 3-body contributions according to the many-body expansion are analyzed for the ion-water trimer systems examined here, the 3-body contributions to Pauli repulsion and dispersion are found to be negligible under ALMO-EDA, thereby supporting the validity of the pairwise-additive approximation in AMOEBA's 14-7 van der Waals term. However AMOEBA shows imperfect cancellation of errors for the missing effects of charge transfer and incorrectness in the distance dependence for polarization when compared with the corresponding ALMO-EDA terms. We trace the larger 2-body followed by 3-body polarization errors to the Thole damping scheme used in AMOEBA, and although the width parameter in Thole damping can be changed to improve agreement with the ALMO-EDA polarization for points about equilibrium, the correct profile of polarization as a function of intermolecular distance cannot be reproduced. The results suggest that there is a need for re-examining the damping and polarization model used in the AMOEBA force field and provide further insights into the formulations of polarizable force fields in general.

  30. Learning Factors Transfer Analysis: Using Learning Curve Analysis to Automatically Generate Domain Models

    ERIC Educational Resources Information Center

    Pavlik, Philip I. Jr.; Cen, Hao; Koedinger, Kenneth R.

    2009-01-01

    This paper describes a novel method to create a quantitative model of an educational content domain of related practice item-types using learning curves. By using a pairwise test to search for the relationships between learning curves for these item-types, we show how the test results in a set of pairwise transfer relationships that can be…

  31. GetReal in network meta-analysis: a review of the methodology.

    PubMed

    Efthimiou, Orestis; Debray, Thomas P A; van Valkenhoef, Gert; Trelle, Sven; Panayidou, Klea; Moons, Karel G M; Reitsma, Johannes B; Shang, Aijing; Salanti, Georgia

    2016-09-01

    Pairwise meta-analysis is an established statistical tool for synthesizing evidence from multiple trials, but it is informative only about the relative efficacy of two specific interventions. The usefulness of pairwise meta-analysis is thus limited in real-life medical practice, where many competing interventions may be available for a certain condition and studies informing some of the pairwise comparisons may be lacking. This commonly encountered scenario has led to the development of network meta-analysis (NMA). In the last decade, several applications, methodological developments, and empirical studies in NMA have been published, and the area is thriving as its relevance to public health is increasingly recognized. This article presents a review of the relevant literature on NMA methodology aiming to pinpoint the developments that have appeared in the field. Copyright © 2016 John Wiley & Sons, Ltd.

  32. Computing the non-Markovian coarse-grained interactions derived from the Mori-Zwanzig formalism in molecular systems: Application to polymer melts

    NASA Astrophysics Data System (ADS)

    Li, Zhen; Lee, Hee Sun; Darve, Eric; Karniadakis, George Em

    2017-01-01

    Memory effects are often introduced during coarse-graining of a complex dynamical system. In particular, a generalized Langevin equation (GLE) for the coarse-grained (CG) system arises in the context of Mori-Zwanzig formalism. Upon a pairwise decomposition, GLE can be reformulated into its pairwise version, i.e., non-Markovian dissipative particle dynamics (DPD). GLE models the dynamics of a single coarse particle, while DPD considers the dynamics of many interacting CG particles, with both CG systems governed by non-Markovian interactions. We compare two different methods for the practical implementation of the non-Markovian interactions in GLE and DPD systems. More specifically, a direct evaluation of the non-Markovian (NM) terms is performed in LE-NM and DPD-NM models, which requires the storage of historical information that significantly increases computational complexity. Alternatively, we use a few auxiliary variables in LE-AUX and DPD-AUX models to replace the non-Markovian dynamics with a Markovian dynamics in a higher dimensional space, leading to a much reduced memory footprint and computational cost. In our numerical benchmarks, the GLE and non-Markovian DPD models are constructed from molecular dynamics (MD) simulations of star-polymer melts. Results show that a Markovian dynamics with auxiliary variables successfully generates equivalent non-Markovian dynamics consistent with the reference MD system, while maintaining a tractable computational cost. Also, transient subdiffusion of the star-polymers observed in the MD system can be reproduced by the coarse-grained models. The non-interacting particle models, LE-NM/AUX, are computationally much cheaper than the interacting particle models, DPD-NM/AUX. However, the pairwise models with momentum conservation are more appropriate for correctly reproducing the long-time hydrodynamics characterised by an algebraic decay in the velocity autocorrelation function.
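
    For orientation, the generalized Langevin equation referred to here can be written, in a common single-particle convention, as below; when the memory kernel is approximated by a sum of exponentials, each mode can be traded for an auxiliary Ornstein-Uhlenbeck variable, which is what gives the Markovian extended-state-space models (LE-AUX/DPD-AUX) their reduced memory footprint. The exponential-kernel form shown is an illustrative assumption, not the fitted kernel of the paper.

    ```latex
    M\,\dot{\mathbf{v}}(t) \;=\; \mathbf{F}^{C}(t)\;-\;\int_{0}^{t} K(t-s)\,\mathbf{v}(s)\,\mathrm{d}s\;+\;\mathbf{F}^{R}(t),
    \qquad
    \langle \mathbf{F}^{R}(t)\,\mathbf{F}^{R}(t')\rangle \;\propto\; k_B T\,K(t-t'),
    \qquad
    K(t)\;\approx\;\sum_k c_k\,e^{-t/\tau_k}.
    ```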

  33. A periodic energy decomposition analysis method for the investigation of chemical bonding in extended systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Raupach, Marc; Tonner, Ralf, E-mail: tonner@chemie.uni-marburg.de

    The development and first applications of a new periodic energy decomposition analysis (pEDA) scheme for extended systems based on the Kohn-Sham approach to density functional theory are described. The pEDA decomposes the bonding energy between two fragments (e.g., the adsorption energy of a molecule on a surface) into several well-defined terms: preparation, electrostatic, Pauli repulsion, and orbital relaxation energies. This is complemented by consideration of dispersion interactions via a pairwise scheme. One major extension toward a previous implementation [Philipsen and Baerends, J. Phys. Chem. B 110, 12470 (2006)] lies in the separate discussion of electrostatic and Pauli and the addition of a dispersion term. The pEDA presented here for an implementation based on atomic orbitals can handle restricted and unrestricted fragments for 0D to 3D systems considering periodic boundary conditions with and without the determination of fragment occupations. For the latter case, reciprocal space sampling is enabled. The new method gives comparable results to established schemes for molecular systems and shows good convergence with respect to the basis set (TZ2P), the integration accuracy, and k-space sampling. Four typical bonding scenarios for surface-adsorbate complexes were chosen to highlight the performance of the method representing insulating (CO on MgO(001)), metallic (H2 on M(001), M = Pd, Cu), and semiconducting (CO and C2H2 on Si(001)) substrates. These examples cover diverse substrates as well as bonding scenarios ranging from weakly interacting to covalent (shared electron and donor-acceptor) bonding. The results presented lend confidence that the pEDA will be a powerful tool for the analysis of surface-adsorbate bonding in the future, enabling the transfer of concepts like ionic and covalent bonding, donor-acceptor interaction, steric repulsion, and others to extended systems.
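
    In generic EDA notation (the symbols below are standard shorthand, not necessarily the paper's), the decomposition described amounts to:

    ```latex
    \Delta E_{\mathrm{bond}} \;=\; \Delta E_{\mathrm{prep}} \;+\; \Delta E_{\mathrm{int}},
    \qquad
    \Delta E_{\mathrm{int}} \;=\; \Delta E_{\mathrm{elstat}} \;+\; \Delta E_{\mathrm{Pauli}} \;+\; \Delta E_{\mathrm{orb}} \;+\; \Delta E_{\mathrm{disp}},
    ```

    with the dispersion term supplied by the pairwise scheme mentioned in the abstract.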

  34. Pairwise Classifier Ensemble with Adaptive Sub-Classifiers for fMRI Pattern Analysis.

    PubMed

    Kim, Eunwoo; Park, HyunWook

    2017-02-01

    The multi-voxel pattern analysis technique is applied to fMRI data for classification of high-level brain functions using pattern information distributed over multiple voxels. In this paper, we propose a classifier ensemble for multiclass classification in fMRI analysis, exploiting the fact that specific neighboring voxels can contain spatial pattern information. The proposed method converts the multiclass classification to a pairwise classifier ensemble, and each pairwise classifier consists of multiple sub-classifiers using an adaptive feature set for each class-pair. Simulated and real fMRI data were used to verify the proposed method. Intra- and inter-subject analyses were performed to compare the proposed method with several well-known classifiers, including single and ensemble classifiers. The comparison results showed that the proposed method can be generally applied to multiclass classification in both simulations and real fMRI analyses.
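
    A scikit-learn sketch of the general one-versus-one idea behind a pairwise classifier ensemble: one binary classifier per class pair, each with its own (adaptively selected) feature subset, combined by majority vote. The univariate voxel selector, the logistic-regression base classifier and the data shapes are assumptions for illustration, not the proposed method's exact sub-classifier design.

    ```python
    import itertools
    from collections import Counter
    import numpy as np
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    def fit_pairwise_ensemble(X, y, k_features=50):
        """One (feature selector + classifier) pipeline per class pair."""
        models = {}
        for a, b in itertools.combinations(np.unique(y), 2):
            mask = np.isin(y, [a, b])
            clf = make_pipeline(SelectKBest(f_classif, k=min(k_features, X.shape[1])),
                                LogisticRegression(max_iter=1000))
            models[(a, b)] = clf.fit(X[mask], y[mask])
        return models

    def predict_pairwise_ensemble(models, X):
        votes = np.stack([m.predict(X) for m in models.values()])   # (n_pairs, n_samples)
        return np.array([Counter(col).most_common(1)[0][0] for col in votes.T])

    rng = np.random.default_rng(0)
    X = rng.normal(size=(120, 200))                 # e.g. 120 scans x 200 voxels
    y = rng.integers(0, 4, size=120)                # 4 task classes
    models = fit_pairwise_ensemble(X, y)
    print((predict_pairwise_ensemble(models, X) == y).mean())
    ```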

  35. Power independent EMG based gesture recognition for robotics.

    PubMed

    Li, Ling; Looney, David; Park, Cheolsoo; Rehman, Naveed U; Mandic, Danilo P

    2011-01-01

    A novel method for detecting muscle contraction is presented. This method is further developed for identifying four different gestures to facilitate a hand-gesture-controlled robot system. The method is based on surface electromyograph (EMG) measurements of groups of arm muscles. The cross-information is preserved through simultaneous processing of the EMG channels using a recent multivariate extension of Empirical Mode Decomposition (EMD). Next, phase synchrony measures are employed to make the system robust to different power levels due to electrode placements and impedances. The multiple pairwise muscle synchronies are used as features of a discrete gesture space comprising four gestures (flexion, extension, pronation, supination). Simulations of real-time robot control illustrate the enhanced accuracy and robustness of the proposed methodology.

  36. Pairwise Multiple Comparisons in Single Group Repeated Measures Analysis.

    ERIC Educational Resources Information Center

    Barcikowski, Robert S.; Elliott, Ronald S.

    Research was conducted to provide educational researchers with a choice of pairwise multiple comparison procedures (P-MCPs) to use with single group repeated measures designs. The following were studied through two Monte Carlo (MC) simulations: (1) The T procedure of J. W. Tukey (1953); (2) a modification of Tukey's T (G. Keppel, 1973); (3) the…

  37. Propensity-score matching in economic analyses: comparison with regression models, instrumental variables, residual inclusion, differences-in-differences, and decomposition methods.

    PubMed

    Crown, William H

    2014-02-01

    This paper examines the use of propensity score matching in economic analyses of observational data. Several excellent papers have previously reviewed practical aspects of propensity score estimation and other aspects of the propensity score literature. The purpose of this paper is to compare the conceptual foundation of propensity score models with alternative estimators of treatment effects. References are provided to empirical comparisons among methods that have appeared in the literature. These comparisons are available for a subset of the methods considered in this paper. However, in some cases, no pairwise comparisons of particular methods are yet available, and there are no examples of comparisons across all of the methods surveyed here. Irrespective of the availability of empirical comparisons, the goal of this paper is to provide some intuition about the relative merits of alternative estimators in health economic evaluations where nonlinearity, sample size, availability of pre/post data, heterogeneity, and missing variables can have important implications for choice of methodology. Also considered is the potential combination of propensity score matching with alternative methods such as differences-in-differences and decomposition methods that have not yet appeared in the empirical literature.
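
    A minimal Python sketch of the propensity-score matching estimator discussed, under strong simplifying assumptions (logistic propensity model, 1:1 nearest-neighbour matching with replacement, no caliper, naive ATT estimate). It is meant only to make the method concrete, not to stand in for the comparisons surveyed in the paper.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.neighbors import NearestNeighbors

    def ps_match_att(X, treated, outcome):
        """Match each treated unit to its nearest control on the estimated propensity score."""
        ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
        t_idx, c_idx = np.where(treated == 1)[0], np.where(treated == 0)[0]
        nn = NearestNeighbors(n_neighbors=1).fit(ps[c_idx].reshape(-1, 1))
        _, match = nn.kneighbors(ps[t_idx].reshape(-1, 1))
        return (outcome[t_idx] - outcome[c_idx[match.ravel()]]).mean()

    rng = np.random.default_rng(1)
    X = rng.normal(size=(500, 3))
    treated = (rng.random(500) < 1 / (1 + np.exp(-X[:, 0]))).astype(int)
    outcome = 2.0 * treated + X @ np.array([1.0, 0.5, 0.0]) + rng.normal(size=500)
    print(round(ps_match_att(X, treated, outcome), 2))    # should land near the true effect of 2
    ```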

  38. Power and sample-size estimation for microbiome studies using pairwise distances and PERMANOVA.

    PubMed

    Kelly, Brendan J; Gross, Robert; Bittinger, Kyle; Sherrill-Mix, Scott; Lewis, James D; Collman, Ronald G; Bushman, Frederic D; Li, Hongzhe

    2015-08-01

    The variation in community composition between microbiome samples, termed beta diversity, can be measured by pairwise distance based on either presence-absence or quantitative species abundance data. PERMANOVA, a permutation-based extension of multivariate analysis of variance to a matrix of pairwise distances, partitions within-group and between-group distances to permit assessment of the effect of an exposure or intervention (grouping factor) upon the sampled microbiome. Within-group distance and exposure/intervention effect size must be accurately modeled to estimate statistical power for a microbiome study that will be analyzed with pairwise distances and PERMANOVA. We present a framework for PERMANOVA power estimation tailored to marker-gene microbiome studies that will be analyzed by pairwise distances, which includes: (i) a novel method for distance matrix simulation that permits modeling of within-group pairwise distances according to pre-specified population parameters; (ii) a method to incorporate effects of different sizes within the simulated distance matrix; (iii) a simulation-based method for estimating PERMANOVA power from simulated distance matrices; and (iv) an R statistical software package that implements the above. Matrices of pairwise distances can be efficiently simulated to satisfy the triangle inequality and incorporate group-level effects, which are quantified by the adjusted coefficient of determination, omega-squared (ω2). From simulated distance matrices, available PERMANOVA power or necessary sample size can be estimated for a planned microbiome study. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
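
    A self-contained NumPy sketch of the PERMANOVA pseudo-F statistic and its permutation p-value for a one-way grouping factor, to make the quantity being powered concrete; it is not a substitute for the authors' simulation framework or R package, and the toy data are assumptions.

    ```python
    import numpy as np

    def permanova_pseudo_f(D, groups):
        """One-way PERMANOVA pseudo-F from a symmetric pairwise distance matrix."""
        D, groups = np.asarray(D, float), np.asarray(groups)
        n, labels = len(groups), np.unique(groups)
        ss_total = (D[np.triu_indices(n, 1)] ** 2).sum() / n
        ss_within = 0.0
        for g in labels:
            idx = np.where(groups == g)[0]
            sub = D[np.ix_(idx, idx)]
            ss_within += (sub[np.triu_indices(len(idx), 1)] ** 2).sum() / len(idx)
        ss_between = ss_total - ss_within
        return (ss_between / (len(labels) - 1)) / (ss_within / (n - len(labels)))

    def permanova_pvalue(D, groups, n_perm=999, seed=0):
        rng = np.random.default_rng(seed)
        observed = permanova_pseudo_f(D, groups)
        perms = [permanova_pseudo_f(D, rng.permutation(groups)) for _ in range(n_perm)]
        return observed, (1 + sum(f >= observed for f in perms)) / (n_perm + 1)

    rng = np.random.default_rng(2)
    X = np.vstack([rng.normal(0, 1, (20, 5)), rng.normal(0.8, 1, (20, 5))])
    D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    print(permanova_pvalue(D, [0] * 20 + [1] * 20))
    ```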

  39. A Comparative Study of Pairwise Learning Methods Based on Kernel Ridge Regression.

    PubMed

    Stock, Michiel; Pahikkala, Tapio; Airola, Antti; De Baets, Bernard; Waegeman, Willem

    2018-06-12

    Many machine learning problems can be formulated as predicting labels for a pair of objects. Problems of that kind are often referred to as pairwise learning, dyadic prediction, or network inference problems. During the past decade, kernel methods have played a dominant role in pairwise learning. They still obtain a state-of-the-art predictive performance, but a theoretical analysis of their behavior has been underexplored in the machine learning literature. In this work we review and unify kernel-based algorithms that are commonly used in different pairwise learning settings, ranging from matrix filtering to zero-shot learning. To this end, we focus on closed-form efficient instantiations of Kronecker kernel ridge regression. We show that independent task kernel ridge regression, two-step kernel ridge regression, and a linear matrix filter arise naturally as a special case of Kronecker kernel ridge regression, implying that all these methods implicitly minimize a squared loss. In addition, we analyze universality, consistency, and spectral filtering properties. Our theoretical results provide valuable insights into assessing the advantages and limitations of existing pairwise learning methods.
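
    A NumPy sketch of the closed-form Kronecker kernel ridge regression solution that the reviewed methods build on: with eigendecompositions of the two object kernels, the dual coefficients follow from an element-wise shrinkage of the rotated label matrix, so the full (nm x nm) Kronecker system never has to be formed. Kernel choices and sizes are illustrative assumptions.

    ```python
    import numpy as np

    def kronecker_krr_fit(G, K, Y, lam=1.0):
        """Solve G @ A @ K + lam * A = Y, i.e. (K ⊗ G + lam*I) vec(A) = vec(Y),
        via the eigendecompositions of the two symmetric kernel matrices."""
        lg, Ug = np.linalg.eigh(G)                # kernel over "row" objects (n x n)
        lk, Uk = np.linalg.eigh(K)                # kernel over "column" objects (m x m)
        Yt = Ug.T @ Y @ Uk
        return Ug @ (Yt / (np.outer(lg, lk) + lam)) @ Uk.T

    rng = np.random.default_rng(0)
    Xr, Xc = rng.normal(size=(40, 6)), rng.normal(size=(25, 4))
    G, K = Xr @ Xr.T, Xc @ Xc.T                   # linear kernels for the two object sets
    Y = rng.normal(size=(40, 25))                 # labels for all 40 x 25 object pairs
    A = kronecker_krr_fit(G, K, Y, lam=0.5)
    print(np.allclose(G @ A @ K + 0.5 * A, Y))    # the linear system is solved exactly
    ```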

  20. Effect of interacting second- and third-order stimulus-dependent correlations on population-coding asymmetries.

    PubMed

    Montangie, Lisandro; Montani, Fernando

    2016-10-01

    Spike correlations among neurons are widely encountered in the brain. Although models accounting for pairwise interactions have proved able to capture some of the most important features of population activity at the level of the retina, the evidence shows that pairwise neuronal correlation analysis does not resolve cooperative population dynamics by itself. By means of a series expansion for short time scales of the mutual information conveyed by a population of neurons, the information transmission can be broken down into firing rate and correlational components. In a proposed extension of this framework, we investigate the information components considering both second- and higher-order correlations. We show that the existence of a mixed stimulus-dependent correlation term defines a new scenario for the interplay between pairwise and higher-than-pairwise interactions in noise and signal correlations that would lead either to redundancy or synergy in the information-theoretic sense.

  1. Power and sample-size estimation for microbiome studies using pairwise distances and PERMANOVA

    PubMed Central

    Kelly, Brendan J.; Gross, Robert; Bittinger, Kyle; Sherrill-Mix, Scott; Lewis, James D.; Collman, Ronald G.; Bushman, Frederic D.; Li, Hongzhe

    2015-01-01

    Motivation: The variation in community composition between microbiome samples, termed beta diversity, can be measured by pairwise distance based on either presence–absence or quantitative species abundance data. PERMANOVA, a permutation-based extension of multivariate analysis of variance to a matrix of pairwise distances, partitions within-group and between-group distances to permit assessment of the effect of an exposure or intervention (grouping factor) upon the sampled microbiome. Within-group distance and exposure/intervention effect size must be accurately modeled to estimate statistical power for a microbiome study that will be analyzed with pairwise distances and PERMANOVA. Results: We present a framework for PERMANOVA power estimation tailored to marker-gene microbiome studies that will be analyzed by pairwise distances, which includes: (i) a novel method for distance matrix simulation that permits modeling of within-group pairwise distances according to pre-specified population parameters; (ii) a method to incorporate effects of different sizes within the simulated distance matrix; (iii) a simulation-based method for estimating PERMANOVA power from simulated distance matrices; and (iv) an R statistical software package that implements the above. Matrices of pairwise distances can be efficiently simulated to satisfy the triangle inequality and incorporate group-level effects, which are quantified by the adjusted coefficient of determination, omega-squared (ω2). From simulated distance matrices, available PERMANOVA power or necessary sample size can be estimated for a planned microbiome study. Availability and implementation: http://github.com/brendankelly/micropower. Contact: brendank@mail.med.upenn.edu or hongzhe@upenn.edu PMID:25819674

  2. Decomposition-based transfer distance metric learning for image classification.

    PubMed

    Luo, Yong; Liu, Tongliang; Tao, Dacheng; Xu, Chao

    2014-09-01

    Distance metric learning (DML) is a critical factor for image analysis and pattern recognition. To learn a robust distance metric for a target task, we need abundant side information (i.e., the similarity/dissimilarity pairwise constraints over the labeled data), which is usually unavailable in practice due to the high labeling cost. This paper considers the transfer learning setting by exploiting the large quantity of side information from certain related, but different source tasks to help with target metric learning (with only a little side information). The state-of-the-art metric learning algorithms usually fail in this setting because the data distributions of the source task and target task are often quite different. We address this problem by assuming that the target distance metric lies in the space spanned by the eigenvectors of the source metrics (or other randomly generated bases). The target metric is represented as a combination of the base metrics, which are computed using the decomposed components of the source metrics (or simply a set of random bases); we call the proposed method decomposition-based transfer DML (DTDML). In particular, DTDML learns a sparse combination of the base metrics to construct the target metric by forcing the target metric to be close to an integration of the source metrics. The main advantage of the proposed method compared with existing transfer metric learning approaches is that we directly learn the base metric coefficients instead of the target metric. As a result, far fewer variables need to be learned, so we obtain more reliable solutions given the limited side information, and the optimization tends to be faster. Experiments on the popular handwritten image (digit, letter) classification and challenging natural image annotation tasks demonstrate the effectiveness of the proposed method.
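
    A simplified sketch of the core idea (not the exact DTDML objective): base metrics are built from eigenvectors of source metrics, and a sparse non-negative combination is fitted to the target task's pairwise constraints with a lasso-style regression. All data, source metrics and parameters below are synthetic assumptions.

```python
# Sketch: target metric as a sparse combination of base metrics from source eigenvectors.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(3)
d = 10                                   # feature dimension (assumed)

def random_psd(dim):
    A = rng.normal(size=(dim, dim))
    return A @ A.T / dim

# Pretend source metrics (PSD matrices) from two related source tasks.
sources = [random_psd(d), random_psd(d)]

# Base metrics: rank-one outer products of the top eigenvectors of each source metric.
bases = []
for M in sources:
    w, V = np.linalg.eigh(M)
    for k in range(1, 6):                # top-5 eigenvectors per source (assumed)
        v = V[:, -k]
        bases.append(np.outer(v, v))

# A little target-task side information: pairs labelled similar (0) / dissimilar (1).
n_pairs = 200
diffs = rng.normal(size=(n_pairs, d))
true_M = random_psd(d)
targets = (np.einsum("nd,de,ne->n", diffs, true_M, diffs) > d * 0.8).astype(float)

# Each pair's feature vector holds its squared distance under every base metric.
Phi = np.stack([np.einsum("nd,de,ne->n", diffs, B, diffs) for B in bases], axis=1)

# Sparse non-negative coefficients -> target metric as a combination of base metrics.
coef = Lasso(alpha=0.01, positive=True).fit(Phi, targets).coef_
M_target = sum(c * B for c, B in zip(coef, bases))
print("non-zero base metrics used:", int((coef > 0).sum()))
```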

  3. Dispersion- and Exchange-Corrected Density Functional Theory for Sodium Ion Hydration.

    PubMed

    Soniat, Marielle; Rogers, David M; Rempe, Susan B

    2015-07-14

    A challenge in density functional theory is developing functionals that simultaneously describe intermolecular electron correlation and electron delocalization. Recent exchange-correlation functionals address those two issues by adding corrections important at long ranges: an atom-centered pairwise dispersion term to account for correlation and a modified long-range component of the electron exchange term to correct for delocalization. Here we investigate how those corrections influence the accuracy of binding free energy predictions for sodium-water clusters. We find that the dual-corrected ωB97X-D functional gives cluster binding energies closest to high-level ab initio methods (CCSD(T)). Binding energy decomposition shows that the ωB97X-D functional predicts the smallest ion-water (pairwise) interaction energy and larger multibody contributions for a four-water cluster than most other functionals - a trend consistent with CCSD(T) results. Also, ωB97X-D produces the smallest amounts of charge transfer and the least polarizable waters of the density functionals studied, which mimics the lower polarizability of CCSD. When compared with experimental binding free energies, however, the exchange-corrected CAM-B3LYP functional performs best (error <1 kcal/mol), possibly because of its parametrization to experimental formation enthalpies. For clusters containing more than four waters, "split-shell" coordination must be considered to obtain accurate free energies in comparison with experiment.

  4. Dynamics of pairwise motions in the Cosmic Web

    NASA Astrophysics Data System (ADS)

    Hellwing, Wojciech A.

    2016-10-01

    We present results of an analysis of dark matter (DM) pairwise velocity statistics in different Cosmic Web environments. We use the DM velocity and density fields from the Millennium 2 simulation together with the NEXUS+ algorithm to segment the simulation volume into voxels, each uniquely identified with one of four possible environments: nodes, filaments, walls or cosmic voids. We show that the PDFs of the mean infall velocity v12, as well as its spatial dependence and the perpendicular and parallel velocity dispersions, bear a significant signal of the large-scale structure environment in which DM particle pairs are embedded. The pairwise flows are notably colder and have smaller mean magnitude in walls and voids when compared to the much denser environments of filaments and nodes. We discuss our results, indicating that they are consistent with simple theoretical predictions for pairwise motions induced by the gravitational instability mechanism. Our results indicate that the Cosmic Web elements are coherent dynamical entities rather than just temporary geometrical associations. In addition, it should be possible to observationally test various Cosmic Web finding algorithms by segmenting available peculiar velocity data and studying the resulting pairwise velocity statistics.
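
    The mean radial pairwise (infall) velocity referenced here has a simple direct estimator: average the relative velocity of particle pairs projected onto their separation vector, in bins of separation. The sketch below applies it to synthetic particles; the bin edges, box size and particle counts are assumptions.

```python
# Estimator sketch for the mean radial pairwise velocity v12(r) on synthetic particles.
import numpy as np

rng = np.random.default_rng(4)
n = 400
pos = rng.uniform(0.0, 100.0, size=(n, 3))      # toy positions, no periodic wrapping
vel = rng.normal(0.0, 300.0, size=(n, 3))       # toy velocities in km/s

edges = np.linspace(1.0, 20.0, 11)              # separation bins (assumed)
sums = np.zeros(len(edges) - 1)
counts = np.zeros(len(edges) - 1)

for i in range(n):
    dr = pos[i + 1:] - pos[i]                   # separations to all later particles
    dv = vel[i + 1:] - vel[i]
    r = np.linalg.norm(dr, axis=1)
    radial = np.einsum("ij,ij->i", dv, dr) / r  # (v_j - v_i) projected on r_hat
    which = np.digitize(r, edges) - 1
    ok = (which >= 0) & (which < len(sums))
    np.add.at(sums, which[ok], radial[ok])
    np.add.at(counts, which[ok], 1)

v12 = np.where(counts > 0, sums / counts, np.nan)
for lo, hi, v in zip(edges[:-1], edges[1:], v12):
    print(f"{lo:5.1f}-{hi:5.1f} : v12 = {v:7.1f} km/s")
```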

  5. Distance-Based Functional Diversity Measures and Their Decomposition: A Framework Based on Hill Numbers

    PubMed Central

    Chiu, Chun-Huo; Chao, Anne

    2014-01-01

    Hill numbers (or the “effective number of species”) are increasingly used to characterize species diversity of an assemblage. This work extends Hill numbers to incorporate species pairwise functional distances calculated from species traits. We derive a parametric class of functional Hill numbers, which quantify “the effective number of equally abundant and (functionally) equally distinct species” in an assemblage. We also propose a class of mean functional diversity (per species), which quantifies the effective sum of functional distances between a fixed species to all other species. The product of the functional Hill number and the mean functional diversity thus quantifies the (total) functional diversity, i.e., the effective total distance between species of the assemblage. The three measures (functional Hill numbers, mean functional diversity and total functional diversity) quantify different aspects of species trait space, and all are based on species abundance and species pairwise functional distances. When all species are equally distinct, our functional Hill numbers reduce to ordinary Hill numbers. When species abundances are not considered or species are equally abundant, our total functional diversity reduces to the sum of all pairwise distances between species of an assemblage. The functional Hill numbers and the mean functional diversity both satisfy a replication principle, implying the total functional diversity satisfies a quadratic replication principle. When there are multiple assemblages defined by the investigator, each of the three measures of the pooled assemblage (gamma) can be multiplicatively decomposed into alpha and beta components, and the two components are independent. The resulting beta component measures pure functional differentiation among assemblages and can be further transformed to obtain several classes of normalized functional similarity (or differentiation) measures, including N-assemblage functional generalizations of the classic Jaccard, Sørensen, Horn and Morisita-Horn similarity indices. The proposed measures are applied to artificial and real data for illustration. PMID:25000299
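
    A small numerical sketch in the spirit of this framework is given below: Rao's quadratic entropy Q and a functional Hill number of order q computed from species abundances and pairwise trait distances. The exact normalisation follows one reading of the paper's definitions and should be checked against the original; the trait data are synthetic.

```python
# Sketch: Rao's Q and a functional Hill number of order q from abundances + trait distances.
# The Hill-number normalisation follows one reading of the paper and should be verified;
# the trait data below are synthetic.
import numpy as np
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(5)
S = 8                                            # species
p = rng.dirichlet(np.ones(S))                    # relative abundances
traits = rng.normal(size=(S, 3))                 # synthetic trait values
d = squareform(pdist(traits))                    # pairwise functional distances

Q = float(p @ d @ p)                             # Rao's quadratic entropy

def functional_hill(p, d, Q, q):
    """Functional Hill number of order q (q != 1), as read from the framework."""
    pp = np.outer(p, p)
    s = np.sum((d / Q) * pp ** q)
    return s ** (1.0 / (2.0 * (1.0 - q)))

for q in (0.0, 0.5, 2.0):
    D = functional_hill(p, d, Q, q)
    MD = Q * D                                   # mean functional diversity (assumed relation)
    FD = D * MD                                  # total functional diversity = product, per abstract
    print(f"q={q}: D={D:.3f}  MD={MD:.3f}  FD={FD:.3f}")
```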

  6. Distance-based functional diversity measures and their decomposition: a framework based on Hill numbers.

    PubMed

    Chiu, Chun-Huo; Chao, Anne

    2014-01-01

    Hill numbers (or the "effective number of species") are increasingly used to characterize species diversity of an assemblage. This work extends Hill numbers to incorporate species pairwise functional distances calculated from species traits. We derive a parametric class of functional Hill numbers, which quantify "the effective number of equally abundant and (functionally) equally distinct species" in an assemblage. We also propose a class of mean functional diversity (per species), which quantifies the effective sum of functional distances between a fixed species to all other species. The product of the functional Hill number and the mean functional diversity thus quantifies the (total) functional diversity, i.e., the effective total distance between species of the assemblage. The three measures (functional Hill numbers, mean functional diversity and total functional diversity) quantify different aspects of species trait space, and all are based on species abundance and species pairwise functional distances. When all species are equally distinct, our functional Hill numbers reduce to ordinary Hill numbers. When species abundances are not considered or species are equally abundant, our total functional diversity reduces to the sum of all pairwise distances between species of an assemblage. The functional Hill numbers and the mean functional diversity both satisfy a replication principle, implying the total functional diversity satisfies a quadratic replication principle. When there are multiple assemblages defined by the investigator, each of the three measures of the pooled assemblage (gamma) can be multiplicatively decomposed into alpha and beta components, and the two components are independent. The resulting beta component measures pure functional differentiation among assemblages and can be further transformed to obtain several classes of normalized functional similarity (or differentiation) measures, including N-assemblage functional generalizations of the classic Jaccard, Sørensen, Horn and Morisita-Horn similarity indices. The proposed measures are applied to artificial and real data for illustration.

  7. Time-frequency analysis : mathematical analysis of the empirical mode decomposition.

    DOT National Transportation Integrated Search

    2009-01-01

    Invented over 10 years ago, empirical mode decomposition (EMD) provides a nonlinear time-frequency analysis with the ability to successfully analyze nonstationary signals. Mathematical Analysis of the Empirical Mode Decomposition is a...

  8. Evaluating the Quality of Evidence from a Network Meta-Analysis

    PubMed Central

    Salanti, Georgia; Del Giovane, Cinzia; Chaimani, Anna; Caldwell, Deborah M.; Higgins, Julian P. T.

    2014-01-01

    Systematic reviews that collate data about the relative effects of multiple interventions via network meta-analysis are highly informative for decision-making purposes. A network meta-analysis provides two types of findings for a specific outcome: the relative treatment effect for all pairwise comparisons, and a ranking of the treatments. It is important to consider the confidence with which these two types of results can enable clinicians, policy makers and patients to make informed decisions. We propose an approach to determining confidence in the output of a network meta-analysis. Our proposed approach is based on methodology developed by the Grading of Recommendations Assessment, Development and Evaluation (GRADE) Working Group for pairwise meta-analyses. The suggested framework for evaluating a network meta-analysis acknowledges (i) the key role of indirect comparisons (ii) the contributions of each piece of direct evidence to the network meta-analysis estimates of effect size; (iii) the importance of the transitivity assumption to the validity of network meta-analysis; and (iv) the possibility of disagreement between direct evidence and indirect evidence. We apply our proposed strategy to a systematic review comparing topical antibiotics without steroids for chronically discharging ears with underlying eardrum perforations. The proposed framework can be used to determine confidence in the results from a network meta-analysis. Judgements about evidence from a network meta-analysis can be different from those made about evidence from pairwise meta-analyses. PMID:24992266

  9. Spatial assignment of symmetry adapted perturbation theory interaction energy components: The atomic SAPT partition

    NASA Astrophysics Data System (ADS)

    Parrish, Robert M.; Sherrill, C. David

    2014-07-01

    We develop a physically-motivated assignment of symmetry adapted perturbation theory for intermolecular interactions (SAPT) into atom-pairwise contributions (the A-SAPT partition). The basic precept of A-SAPT is that the many-body interaction energy components are computed normally under the formalism of SAPT, following which a spatially-localized two-body quasiparticle interaction is extracted from the many-body interaction terms. For electrostatics and induction source terms, the relevant quasiparticles are atoms, which are obtained in this work through the iterative stockholder analysis (ISA) procedure. For the exchange, induction response, and dispersion terms, the relevant quasiparticles are local occupied orbitals, which are obtained in this work through the Pipek-Mezey procedure. The local orbital atomic charges obtained from ISA additionally allow the terms involving local orbitals to be assigned in an atom-pairwise manner. Further summation over the atoms of one or the other monomer allows for a chemically intuitive visualization of the contribution of each atom and interaction component to the overall noncovalent interaction strength. Herein, we present the intuitive development and mathematical form for A-SAPT applied in the SAPT0 approximation (the A-SAPT0 partition). We also provide an efficient series of algorithms for the computation of the A-SAPT0 partition with essentially the same computational cost as the corresponding SAPT0 decomposition. We probe the sensitivity of the A-SAPT0 partition to the ISA grid and convergence parameter, orbital localization metric, and induction coupling treatment, and recommend a set of practical choices which closes the definition of the A-SAPT0 partition. We demonstrate the utility and computational tractability of the A-SAPT0 partition in the context of side-on cation-π interactions and the intercalation of DNA by proflavine. A-SAPT0 clearly shows the key processes in these complicated noncovalent interactions, in systems with up to 220 atoms and 2845 basis functions.

  10. Spatial assignment of symmetry adapted perturbation theory interaction energy components: The atomic SAPT partition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parrish, Robert M.; Sherrill, C. David, E-mail: sherrill@gatech.edu

    2014-07-28

    We develop a physically-motivated assignment of symmetry adapted perturbation theory for intermolecular interactions (SAPT) into atom-pairwise contributions (the A-SAPT partition). The basic precept of A-SAPT is that the many-body interaction energy components are computed normally under the formalism of SAPT, following which a spatially-localized two-body quasiparticle interaction is extracted from the many-body interaction terms. For electrostatics and induction source terms, the relevant quasiparticles are atoms, which are obtained in this work through the iterative stockholder analysis (ISA) procedure. For the exchange, induction response, and dispersion terms, the relevant quasiparticles are local occupied orbitals, which are obtained in this work through the Pipek-Mezey procedure. The local orbital atomic charges obtained from ISA additionally allow the terms involving local orbitals to be assigned in an atom-pairwise manner. Further summation over the atoms of one or the other monomer allows for a chemically intuitive visualization of the contribution of each atom and interaction component to the overall noncovalent interaction strength. Herein, we present the intuitive development and mathematical form for A-SAPT applied in the SAPT0 approximation (the A-SAPT0 partition). We also provide an efficient series of algorithms for the computation of the A-SAPT0 partition with essentially the same computational cost as the corresponding SAPT0 decomposition. We probe the sensitivity of the A-SAPT0 partition to the ISA grid and convergence parameter, orbital localization metric, and induction coupling treatment, and recommend a set of practical choices which closes the definition of the A-SAPT0 partition. We demonstrate the utility and computational tractability of the A-SAPT0 partition in the context of side-on cation-π interactions and the intercalation of DNA by proflavine. A-SAPT0 clearly shows the key processes in these complicated noncovalent interactions, in systems with up to 220 atoms and 2845 basis functions.

  11. Sample size and power considerations in network meta-analysis

    PubMed Central

    2012-01-01

    Background Network meta-analysis is becoming increasingly popular for establishing comparative effectiveness among multiple interventions for the same disease. Network meta-analysis inherits all methodological challenges of standard pairwise meta-analysis, but with increased complexity due to the multitude of intervention comparisons. One issue that is now widely recognized in pairwise meta-analysis is the issue of sample size and statistical power. This issue, however, has so far only received little attention in network meta-analysis. To date, no approaches have been proposed for evaluating the adequacy of the sample size, and thus power, in a treatment network. Findings In this article, we develop easy-to-use flexible methods for estimating the ‘effective sample size’ in indirect comparison meta-analysis and network meta-analysis. The effective sample size for a particular treatment comparison can be interpreted as the number of patients in a pairwise meta-analysis that would provide the same degree and strength of evidence as that which is provided in the indirect comparison or network meta-analysis. We further develop methods for retrospectively estimating the statistical power for each comparison in a network meta-analysis. We illustrate the performance of the proposed methods for estimating effective sample size and statistical power using data from a network meta-analysis on interventions for smoking cessation including over 100 trials. Conclusion The proposed methods are easy to use and will be of high value to regulatory agencies and decision makers who must assess the strength of the evidence supporting comparative effectiveness estimates. PMID:22992327
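
    One way to make the 'effective sample size' idea concrete, under the simplifying assumption that the variance of a comparison scales as 1/n (a derivation for illustration, not necessarily the authors' exact formula): for an indirect A-versus-B comparison through a common comparator C, the variances add, so 1/n_eff = 1/n_AC + 1/n_BC.

```python
# Effective sample size for an indirect A-vs-B comparison through common comparator C,
# under the simplifying assumption var ~ 1/n (illustrative derivation; not necessarily
# the authors' exact formulation).
def effective_sample_size(n_ac: float, n_bc: float) -> float:
    # var_indirect = var_AC + var_BC ~ 1/n_AC + 1/n_BC = 1/n_eff
    return (n_ac * n_bc) / (n_ac + n_bc)

# Example: 600 patients informing A vs C and 400 informing B vs C.
print(effective_sample_size(600, 400))   # 240.0 "effective" patients for A vs B
```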

  12. Effectiveness of oral hydration in preventing contrast-induced acute kidney injury in patients undergoing coronary angiography or intervention: a pairwise and network meta-analysis.

    PubMed

    Zhang, Weidai; Zhang, Jiawei; Yang, Baojun; Wu, Kefei; Lin, Hanfei; Wang, Yanping; Zhou, Lihong; Wang, Huatao; Zeng, Chujuan; Chen, Xiao; Wang, Zhixing; Zhu, Junxing; Songming, Chen

    2018-06-01

    The effectiveness of oral hydration in preventing contrast-induced acute kidney injury (CI-AKI) in patients undergoing coronary angiography or intervention has not been well established. This study aims to evaluate the efficacy of oral hydration compared with intravenous hydration and other frequently used hydration strategies. PubMed, Embase, Web of Science, and the Cochrane central register of controlled trials were searched from inception to 8 October 2017. To be eligible for analysis, studies had to evaluate the relative efficacy of different prophylactic hydration strategies. We selected and assessed the studies that fulfilled the inclusion criteria and carried out a pairwise and network meta-analysis using RevMan5.2 and Aggregate Data Drug Information System 1.16.8 software. A total of four studies (538 participants) were included in our pairwise meta-analysis and 1754 participants from eight studies with four frequently used hydration strategies were included in a network meta-analysis. Pairwise meta-analysis indicated that oral hydration was as effective as intravenous hydration for the prevention of CI-AKI (5.88 vs. 8.43%; odds ratio: 0.73; 95% confidence interval: 0.36-1.47; P>0.05), with no significant heterogeneity between studies. Network meta-analysis showed that there was no significant difference in the prevention of CI-AKI. However, the rank probability plot suggested that oral plus intravenous hydration had a higher probability (51%) of being the best strategy, followed by diuretic plus intravenous hydration (39%) and oral hydration alone (10%). Intravenous hydration alone was the strategy with the highest probability (70%) of being the worst hydration strategy. Our study shows that oral hydration is not inferior to intravenous hydration for the prevention of CI-AKI in patients with normal or mild-to-moderate renal dysfunction undergoing coronary angiography or intervention.
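
    The pairwise step described here boils down to pooling study-level odds ratios; a minimal fixed-effect (inverse-variance) sketch follows, with invented 2x2 counts rather than the trial data.

```python
# Fixed-effect (inverse-variance) pooled odds ratio for a pairwise meta-analysis.
# The 2x2 counts below are invented for illustration only.
import math

# (events_treat, n_treat, events_ctrl, n_ctrl) per study
studies = [(4, 120, 7, 118), (3, 80, 5, 82), (6, 150, 9, 148), (2, 60, 3, 61)]

num = den = 0.0
for a, n1, c, n2 in studies:
    b, d = n1 - a, n2 - c
    log_or = math.log((a * d) / (b * c))
    var = 1 / a + 1 / b + 1 / c + 1 / d            # Woolf variance of the log odds ratio
    w = 1 / var
    num += w * log_or
    den += w

pooled = math.exp(num / den)
se = math.sqrt(1 / den)
lo, hi = math.exp(num / den - 1.96 * se), math.exp(num / den + 1.96 * se)
print(f"pooled OR = {pooled:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```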

  13. The influence of arene-ring size on stacking interaction with canonical base pairs

    NASA Astrophysics Data System (ADS)

    Formánek, Martin; Burda, Jaroslav V.

    2014-04-01

    Stacking interactions between aromatic molecules (benzene, p-cymene, biphenyl, and di- and tetra-hydrogen anthracene) and G.C and A.T canonical Watson-Crick (WC) base pairs are explored. Two functionals with dispersion corrections, ω-B97XD and B3LYP-D3, are used. For comparison, the MP2 and B3LYP-D3/PCM methods were also applied to the most stable p-cymene…WC geometries. It was found that the stacking interaction increases with the size of the π-conjugation system. Its extent is in agreement with experimental findings on the anticancer activity of Ru(II) piano-stool complexes, where intercalation of these aromatic molecules should play an important role. The explored structures are treated as ternary systems, so that decomposition of the interaction energy into pairwise and non-additive contributions is also examined.

  14. Hybrid pairwise likelihood analysis of animal behavior experiments.

    PubMed

    Cattelan, Manuela; Varin, Cristiano

    2013-12-01

    The study of the determinants of fights between animals is an important issue in understanding animal behavior. For this purpose, tournament experiments among a set of animals are often used by zoologists. The results of these tournament experiments are naturally analyzed by paired comparison models. Proper statistical analysis of these models is complicated by the presence of dependence between the outcomes of fights because the same animal is involved in different contests. This paper discusses two different model specifications to account for between-fights dependence. Models are fitted through the hybrid pairwise likelihood method that iterates between optimal estimating equations for the regression parameters and pairwise likelihood inference for the association parameters. This approach requires the specification of means and covariances only. For this reason, the method can be applied also when the computation of the joint distribution is difficult or inconvenient. The proposed methodology is investigated by simulation studies and applied to real data about adult male Cape Dwarf Chameleons. © 2013, The International Biometric Society.

  15. Pairwise velocities in the "Running FLRW" cosmological model

    NASA Astrophysics Data System (ADS)

    Bibiano, Antonio; Croton, Darren J.

    2017-05-01

    We present an analysis of the pairwise velocity statistics from a suite of cosmological N-body simulations describing the 'Running Friedmann-Lemaître-Robertson-Walker' (R-FLRW) cosmological model. This model is based on quantum field theory in a curved space-time and extends Λ cold dark matter (CDM) with a time-evolving vacuum energy density, ρ _Λ. To enforce local conservation of matter, a time-evolving gravitational coupling is also included. Our results constitute the first study of velocities in the R-FLRW cosmology, and we also compare with other dark energy simulations suites, repeating the same analysis. We find a strong degeneracy between the pairwise velocity and σ8 at z = 0 for almost all scenarios considered, which remains even when we look back to epochs as early as z = 2. We also investigate various coupled dark energy models, some of which show minimal degeneracy, and reveal interesting deviations from ΛCDM that could be readily exploited by future cosmological observations to test and further constrain our understanding of dark energy.

  16. Bispectral pairwise interacting source analysis for identifying systems of cross-frequency interacting brain sources from electroencephalographic or magnetoencephalographic signals

    NASA Astrophysics Data System (ADS)

    Chella, Federico; Pizzella, Vittorio; Zappasodi, Filippo; Nolte, Guido; Marzetti, Laura

    2016-05-01

    Brain cognitive functions arise through the coordinated activity of several brain regions, which actually form complex dynamical systems operating at multiple frequencies. These systems often consist of interacting subsystems, whose characterization is of importance for a complete understanding of the brain interaction processes. To address this issue, we present a technique, namely the bispectral pairwise interacting source analysis (biPISA), for analyzing systems of cross-frequency interacting brain sources when multichannel electroencephalographic (EEG) or magnetoencephalographic (MEG) data are available. Specifically, the biPISA makes it possible to identify one or many subsystems of cross-frequency interacting sources by decomposing the antisymmetric components of the cross-bispectra between EEG or MEG signals, based on the assumption that interactions are pairwise. Thanks to the properties of the antisymmetric components of the cross-bispectra, biPISA is also robust to spurious interactions arising from mixing artifacts, i.e., volume conduction or field spread, which always affect EEG or MEG functional connectivity estimates. This method is an extension of the pairwise interacting source analysis (PISA), which was originally introduced for investigating interactions at the same frequency, to the study of cross-frequency interactions. The effectiveness of this approach is demonstrated in simulations for up to three interacting source pairs and for real MEG recordings of spontaneous brain activity. Simulations show that the performances of biPISA in estimating the phase difference between the interacting sources are affected by the increasing level of noise rather than by the number of the interacting subsystems. The analysis of real MEG data reveals an interaction between two pairs of sources of central mu and beta rhythms, localizing in the proximity of the left and right central sulci.

  17. Direct and Indirect Effects of UV-B Exposure on Litter Decomposition: A Meta-Analysis

    PubMed Central

    Song, Xinzhang; Peng, Changhui; Jiang, Hong; Zhu, Qiuan; Wang, Weifeng

    2013-01-01

    Ultraviolet-B (UV-B) exposure in the course of litter decomposition may have a direct effect on decomposition rates via changing states of photodegradation or decomposer constitution in litter while UV-B exposure during growth periods may alter chemical compositions and physical properties of plants. Consequently, these changes will indirectly affect subsequent litter decomposition processes in soil. Although studies are available on both the positive and negative effects (including no observable effects) of UV-B exposure on litter decomposition, a comprehensive analysis leading to an adequate understanding remains unresolved. Using data from 93 studies across six biomes, this introductory meta-analysis found that elevated UV-B directly increased litter decomposition rates by 7% and indirectly by 12% while attenuated UV-B directly decreased litter decomposition rates by 23% and indirectly increased litter decomposition rates by 7%. However, neither positive nor negative effects were statistically significant. Woody plant litter decomposition seemed more sensitive to UV-B than herbaceous plant litter except under conditions of indirect effects of elevated UV-B. Furthermore, levels of UV-B intensity significantly affected litter decomposition response to UV-B (P<0.05). UV-B effects on litter decomposition were to a large degree compounded by climatic factors (e.g., MAP and MAT) (P<0.05) and litter chemistry (e.g., lignin content) (P<0.01). Results suggest these factors likely have a bearing on masking the important role of UV-B on litter decomposition. No significant differences in UV-B effects on litter decomposition were found between study types (field experiment vs. laboratory incubation), litter forms (leaf vs. needle), and decay duration. Indirect effects of elevated UV-B on litter decomposition significantly increased with decay duration (P<0.001). Additionally, relatively small changes in UV-B exposure intensity (30%) had significant direct effects on litter decomposition (P<0.05). The intent of this meta-analysis was to improve our understanding of the overall effects of UV-B on litter decomposition. PMID:23818993

  18. [Analysis of variance of repeated data measured by water maze with SPSS].

    PubMed

    Qiu, Hong; Jin, Guo-qin; Jin, Ru-feng; Zhao, Wei-kang

    2007-01-01

    To introduce a method for analyzing repeated-measures data from water maze experiments with SPSS 11.0, and to offer a reference statistical method to clinical and basic medicine researchers who use repeated-measures designs. The repeated-measures and multivariate analysis of variance (ANOVA) procedures of the general linear model in SPSS were used, with pairwise comparisons among different groups and among measurement times. First, Mauchly's test of sphericity should be used to judge whether there are correlations among the repeatedly measured data. If any (P

  19. Validating the performance of one-time decomposition for fMRI analysis using ICA with automatic target generation process.

    PubMed

    Yao, Shengnan; Zeng, Weiming; Wang, Nizhuan; Chen, Lei

    2013-07-01

    Independent component analysis (ICA) has been proven effective for functional magnetic resonance imaging (fMRI) data analysis. However, ICA decomposition requires iterative optimization of the unmixing matrix, whose initial values are generated randomly. The randomness of this initialization thus leads to different ICA decomposition results, so a single decomposition for fMRI data analysis is not usually reliable. Under this circumstance, several methods based on repeated decompositions with ICA (RDICA) were proposed to reveal the stability of ICA decomposition. Although RDICA has achieved satisfying results in validating the performance of ICA decomposition, it costs much computing time. To mitigate this problem, in this paper we propose a method, named ATGP-ICA, for fMRI data analysis. This method generates fixed initial values with the automatic target generation process (ATGP) instead of producing them randomly. We performed experimental tests on both hybrid data and fMRI data to demonstrate the effectiveness of the new method, and compared the performance of the traditional one-time decomposition with ICA (ODICA), RDICA and ATGP-ICA. The proposed method not only eliminates the randomness of ICA decomposition but also saves much computing time compared to RDICA. Furthermore, ROC (Receiver Operating Characteristic) power analysis also indicated better signal reconstruction performance for ATGP-ICA than for RDICA. Copyright © 2013 Elsevier Inc. All rights reserved.

  20. Pairwise Maximum Entropy Models for Studying Large Biological Systems: When They Can Work and When They Can't

    PubMed Central

    Roudi, Yasser; Nirenberg, Sheila; Latham, Peter E.

    2009-01-01

    One of the most critical problems we face in the study of biological systems is building accurate statistical descriptions of them. This problem has been particularly challenging because biological systems typically contain large numbers of interacting elements, which precludes the use of standard brute force approaches. Recently, though, several groups have reported that there may be an alternate strategy. The reports show that reliable statistical models can be built without knowledge of all the interactions in a system; instead, pairwise interactions can suffice. These findings, however, are based on the analysis of small subsystems. Here, we ask whether the observations will generalize to systems of realistic size, that is, whether pairwise models will provide reliable descriptions of true biological systems. Our results show that, in most cases, they will not. The reason is that there is a crossover in the predictive power of pairwise models: If the size of the subsystem is below the crossover point, then the results have no predictive power for large systems. If the size is above the crossover point, then the results may have predictive power. This work thus provides a general framework for determining the extent to which pairwise models can be used to predict the behavior of large biological systems. Applied to neural data, the size of most systems studied so far is below the crossover point. PMID:19424487
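
    A small sketch of fitting a pairwise maximum-entropy (Ising-like) model to binary activity patterns by exact enumeration and gradient ascent is shown below; this is only feasible for small subsystems of the kind the record discusses, and the data, learning rate and iteration count are synthetic assumptions.

```python
# Fit a pairwise maximum-entropy (Ising-like) model to binary activity patterns by
# exact enumeration and gradient ascent on the likelihood (small N only; synthetic data).
import itertools
import numpy as np

rng = np.random.default_rng(6)
N = 5                                               # number of "neurons" (assumed)
data = (rng.random((2000, N)) < 0.3).astype(float)  # synthetic binary activity patterns

states = np.array(list(itertools.product([0, 1], repeat=N)), dtype=float)

def model_moments(h, J):
    """<s_i> and <s_i s_j> under P(s) proportional to exp(h.s + 0.5 s^T J s)."""
    energy = states @ h + 0.5 * np.einsum("ki,ij,kj->k", states, J, states)
    p = np.exp(energy - energy.max())
    p /= p.sum()
    return p @ states, states.T @ (states * p[:, None])

emp1 = data.mean(0)                                 # empirical firing rates
emp2 = data.T @ data / len(data)                    # empirical pairwise moments

h, J = np.zeros(N), np.zeros((N, N))
lr = 0.1
for _ in range(5000):                               # plain gradient ascent on the moments
    m1, m2 = model_moments(h, J)
    h += lr * (emp1 - m1)
    J += lr * (emp2 - m2)
    np.fill_diagonal(J, 0.0)                        # diagonal terms are absorbed by the fields

m1, m2 = model_moments(h, J)
print("max moment mismatch:", max(abs(emp1 - m1).max(), abs(emp2 - m2).max()))
```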

  1. Analysis of laser printer and photocopier toners by spectral properties and chemometrics

    NASA Astrophysics Data System (ADS)

    Verma, Neha; Kumar, Raj; Sharma, Vishal

    2018-05-01

    The use of printers to generate falsified documents has become a common practice in today's world. The examination and identification of the printed matter in suspected documents (civil or criminal cases) may provide important information about the authenticity of the document. In the present study, a total of 100 black toner samples from both laser printers and photocopiers were examined using diffuse reflectance UV-Vis spectroscopy. The present research is divided into two parts: visual discrimination and discrimination using multivariate analysis. A comparison between qualitative and quantitative analysis showed that multivariate analysis (principal component analysis) provides 99.59% pair-wise discriminating power for laser printer toners and 99.84% pair-wise discriminating power for photocopier toners. The overall results confirm the applicability of UV-Vis spectroscopy and chemometrics in the nondestructive analysis of toner-printed documents while enhancing their evidential value for forensic applications.
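
    The quoted pair-wise discriminating power is, in essence, the fraction of sample pairs that can be told apart after dimensionality reduction. One simple operationalisation is sketched below; the non-overlap criterion, synthetic spectra and thresholds are assumptions, not the authors' procedure.

```python
# Sketch: pair-wise discriminating power after PCA on replicate spectra.
# The non-overlap criterion and synthetic "spectra" are assumptions for illustration.
import itertools
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(7)
n_samples, n_reps, n_wavelengths = 20, 3, 200

# Synthetic reflectance spectra: one base curve per toner sample plus replicate noise.
bases = rng.normal(size=(n_samples, n_wavelengths)).cumsum(axis=1)
spectra = np.repeat(bases, n_reps, axis=0) + 0.5 * rng.normal(
    size=(n_samples * n_reps, n_wavelengths))
labels = np.repeat(np.arange(n_samples), n_reps)

scores = PCA(n_components=3).fit_transform(spectra)

def discriminated(a, b):
    """Pair counts as discriminated if replicate score ranges are disjoint on any PC."""
    sa, sb = scores[labels == a], scores[labels == b]
    return any(sa[:, k].max() < sb[:, k].min() or sb[:, k].max() < sa[:, k].min()
               for k in range(scores.shape[1]))

pairs = list(itertools.combinations(range(n_samples), 2))
dp = sum(discriminated(a, b) for a, b in pairs) / len(pairs)
print(f"pair-wise discriminating power: {100 * dp:.2f}% over {len(pairs)} pairs")
```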

  2. Darwin v. 2.0: an interpreted computer language for the biosciences.

    PubMed

    Gonnet, G H; Hallett, M T; Korostensky, C; Bernardin, L

    2000-02-01

    We announce the availability of the second release of Darwin v. 2.0, an interpreted computer language especially tailored to researchers in the biosciences. The system is a general tool applicable to a wide range of problems. This second release improves Darwin version 1.6 in several ways: it now contains (1) a larger set of libraries touching most of the classical problems from computational biology (pairwise alignment, all versus all alignments, tree construction, multiple sequence alignment), (2) an expanded set of general purpose algorithms (search algorithms for discrete problems, matrix decomposition routines, complex/long integer arithmetic operations), (3) an improved language with a cleaner syntax, (4) better on-line help, and (5) a number of fixes to user-reported bugs. Darwin is made available for most operating systems free of charge from the Computational Biochemistry Research Group (CBRG), reachable at http://chrg.inf.ethz.ch. darwin@inf.ethz.ch

  3. Deciphering life history transcriptomes in different environments

    PubMed Central

    Etges, William J.; Trotter, Meredith V.; de Oliveira, Cássia C.; Rajpurohit, Subhash; Gibbs, Allen G.; Tuljapurkar, Shripad

    2014-01-01

    We compared whole transcriptome variation in six preadult stages and seven adult female ages in two populations of cactophilic Drosophila mojavensis reared on two host plants in order to understand how differences in gene expression influence standing life history variation. We used Singular Value Decomposition (SVD) to identify dominant trajectories of life cycle gene expression variation, performed pair-wise comparisons of stage and age differences in gene expression across the life cycle, identified when genes exhibited maximum levels of life cycle gene expression, and assessed population and host cactus effects on gene expression. Life cycle SVD analysis returned four significant components of transcriptional variation, revealing functional enrichment of genes responsible for growth, metabolic function, sensory perception, neural function, translation and aging. Host cactus effects on female gene expression revealed population and stage specific differences, including significant host plant effects on larval metabolism and development, as well as adult neurotransmitter binding and courtship behavior gene expression levels. In 3 - 6 day old virgin females, significant up-regulation of genes associated with meiosis and oogenesis was accompanied by down-regulation of genes associated with somatic maintenance, evidence for a life history tradeoff. The transcriptome of D. mojavensis reared in natural environments throughout its life cycle revealed core developmental transitions and genome wide influences on life history variation in natural populations. PMID:25442828
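
    A compact sketch of the decomposition step described here, applied to a synthetic stage-by-gene expression matrix (dimensions and signals are assumptions):

```python
# SVD sketch: dominant trajectories of gene expression across life-cycle stages.
# The stage-by-gene matrix below is synthetic; only the decomposition step is shown.
import numpy as np

rng = np.random.default_rng(8)
n_stages, n_genes = 13, 500                     # e.g. 6 preadult stages + 7 adult ages
stage_axis = np.linspace(0, 1, n_stages)

# Synthetic expression: a rising trajectory, a mid-life pulse, plus noise.
expr = (np.outer(stage_axis, rng.normal(size=n_genes)) +
        np.outer(np.exp(-((stage_axis - 0.5) ** 2) / 0.02), rng.normal(size=n_genes)) +
        0.3 * rng.normal(size=(n_stages, n_genes)))

centered = expr - expr.mean(axis=0)             # centre each gene across stages
U, s, Vt = np.linalg.svd(centered, full_matrices=False)

var_explained = s ** 2 / np.sum(s ** 2)
print("variance explained by first 4 components:", np.round(var_explained[:4], 3))
# Columns of U (scaled by s) give the dominant stage trajectories; rows of Vt give
# each gene's loading on those trajectories.
```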

  4. R-based Tool for a Pairwise Structure-activity Relationship Analysis.

    PubMed

    Klimenko, Kyrylo

    2018-04-01

    The Structure-Activity Relationship analysis is a complex process that can be enhanced by computational techniques. This article describes a simple tool for SAR analysis that has a graphic user interface and a flexible approach towards the input of molecular data. The application allows calculating molecular similarity represented by Tanimoto index & Euclid distance, as well as, determining activity cliffs by means of Structure-Activity Landscape Index. The calculation is performed in a pairwise manner either for the reference compound and other compounds or for all possible pairs in the data set. The results of SAR analysis are visualized using two types of plot. The application capability is demonstrated by the analysis of a set of COX2 inhibitors with respect to Isoxicam. This tool is available online: it includes manual and input file examples. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
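
    A minimal sketch of the two quantities named in this record, Tanimoto similarity on binary fingerprints and the Structure-Activity Landscape Index, follows; the fingerprints and activities are invented, and the SALI form used is the commonly cited |ΔA|/(1 − similarity) rather than a quotation from the tool.

```python
# Sketch: Tanimoto similarity and SALI activity cliffs on toy binary fingerprints.
import itertools
import numpy as np

rng = np.random.default_rng(9)
n_mol, n_bits = 6, 64
fps = rng.random((n_mol, n_bits)) < 0.2          # toy binary fingerprints
activity = rng.normal(6.0, 1.0, n_mol)           # invented pIC50-like values

def tanimoto(a, b):
    both = np.logical_and(a, b).sum()
    either = np.logical_or(a, b).sum()
    return both / either if either else 0.0

def sali(i, j, eps=1e-6):
    """Structure-Activity Landscape Index: large when similar structures differ in activity."""
    return abs(activity[i] - activity[j]) / (1.0 - tanimoto(fps[i], fps[j]) + eps)

for i, j in itertools.combinations(range(n_mol), 2):
    print(f"pair ({i},{j}): Tanimoto={tanimoto(fps[i], fps[j]):.2f}  SALI={sali(i, j):.2f}")
```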

  5. Genetics of human body size and shape: pleiotropic and independent genetic determinants of adiposity.

    PubMed

    Livshits, G; Yakovenko, K; Ginsburg, E; Kobyliansky, E

    1998-01-01

    The present study utilized pedigree data from three ethnically different populations of Kirghizstan, Turkmenia and Chuvasha. Principal component analysis was performed on a matrix of genetic correlations between 22 measures of adiposity, including skinfolds, circumferences and indices. Findings are summarized as follows: (1) All three genetic matrices were not positive definite, and the first four factors, retained even after exclusion of RG ≥ 1.0, explained from 88% to 97% of the total additive genetic variation in the 22 traits studied. This clearly emphasizes the massive involvement of pleiotropic gene effects in the variability of adiposity traits. (2) Despite the quite natural differences in pairwise correlations between the adiposity traits in the three ethnically different samples under study, factor analysis revealed a common basic pattern of covariability for the adiposity traits. In each of the three samples, four genetic factors were retained, namely, the amount of subcutaneous fat, the total body obesity, the pattern of distribution of subcutaneous fat and the central adiposity distribution. (3) Genetic correlations between the retained four factors were virtually non-existent, suggesting that several independent genetic sources may be governing the variation of adiposity traits. (4) Variance decomposition analysis on the obtained genetic factors leaves no doubt regarding the substantial familial and (most probably genetic) effects on variation of each factor in each studied population. The similarity of results in the three different samples indicates that the findings may be deemed valid and reliable descriptions of the genetic variation and covariation pattern of adiposity traits in the human species.

  6. Multidecadal climate variability of global lands and oceans

    USGS Publications Warehouse

    McCabe, G.J.; Palecki, M.A.

    2006-01-01

    Principal components analysis (PCA) and singular value decomposition (SVD) are used to identify the primary modes of decadal and multidecadal variability in annual global Palmer Drought Severity Index (PDSI) values and sea-surface temperature (SSTs). The PDSI and SST data for 1925-2003 were detrended and smoothed (with a 10-year moving average) to isolate the decadal and multidecadal variability. The first two principal components (PCs) of the PDSI PCA explained almost 38% of the decadal and multidecadal variance in the detrended and smoothed global annual PDSI data. The first two PCs of detrended and smoothed global annual SSTs explained nearly 56% of the decadal variability in global SSTs. The PDSI PCs and the SST PCs are directly correlated in a pairwise fashion. The first PDSI and SST PCs reflect variability of the detrended and smoothed annual Pacific Decadal Oscillation (PDO), as well as detrended and smoothed annual Indian Ocean SSTs. The second set of PCs is strongly associated with the Atlantic Multidecadal Oscillation (AMO). The SVD analysis of the cross-covariance of the PDSI and SST data confirmed the close link between the PDSI and SST modes of decadal and multidecadal variation and provided a verification of the PCA results. These findings indicate that the major modes of multidecadal variations in SSTs and land-surface climate conditions are highly interrelated through a small number of spatially complex but slowly varying teleconnections. Therefore, these relations may be adaptable to providing improved baseline conditions for seasonal climate forecasting. Published in 2006 by John Wiley & Sons, Ltd.
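
    A sketch of the preprocessing-plus-SVD pipeline described here (linear detrend, 10-year moving average, SVD of the cross-covariance between two fields) on synthetic grids; field sizes and the shared low-frequency signal are assumptions.

```python
# Sketch: detrend, 10-year moving average, then SVD of the cross-covariance between
# two climate fields (synthetic PDSI-like and SST-like data).
import numpy as np

rng = np.random.default_rng(10)
years = np.arange(1925, 2004)
n_yr, n_land, n_ocean = len(years), 60, 80

shared = np.sin(2 * np.pi * (years - 1925) / 60.0)        # a slow shared mode (assumed)
pdsi = np.outer(shared, rng.normal(size=n_land)) + rng.normal(size=(n_yr, n_land))
sst = np.outer(shared, rng.normal(size=n_ocean)) + rng.normal(size=(n_yr, n_ocean))

def detrend_smooth(x, window=10):
    t = np.arange(len(x))
    out = np.empty_like(x)
    for j in range(x.shape[1]):                           # remove linear trend per grid cell
        coeffs = np.polyfit(t, x[:, j], 1)
        out[:, j] = x[:, j] - np.polyval(coeffs, t)
    kernel = np.ones(window) / window                     # 10-year moving average
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="valid"), 0, out)

P, S = detrend_smooth(pdsi), detrend_smooth(sst)
P -= P.mean(0)
S -= S.mean(0)

cross_cov = P.T @ S / (len(P) - 1)                        # land x ocean cross-covariance
U, sv, Vt = np.linalg.svd(cross_cov, full_matrices=False)
frac = sv ** 2 / np.sum(sv ** 2)
print("squared covariance fraction of leading coupled modes:", np.round(frac[:2], 3))
```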

  7. Lotka-Volterra pairwise modeling fails to capture diverse pairwise microbial interactions

    PubMed Central

    Momeni, Babak; Xie, Li; Shou, Wenying

    2017-01-01

    Pairwise models are commonly used to describe many-species communities. In these models, an individual receives additive fitness effects from pairwise interactions with each species in the community ('additivity assumption'). All pairwise interactions are typically represented by a single equation where parameters reflect signs and strengths of fitness effects ('universality assumption'). Here, we show that a single equation fails to qualitatively capture diverse pairwise microbial interactions. We build mechanistic reference models for two microbial species engaging in commonly-found chemical-mediated interactions, and attempt to derive pairwise models. Different equations are appropriate depending on whether a mediator is consumable or reusable, whether an interaction is mediated by one or more mediators, and sometimes even on quantitative details of the community (e.g. relative fitness of the two species, initial conditions). Our results, combined with potential violation of the additivity assumption in many-species communities, suggest that pairwise modeling will often fail to predict microbial dynamics. DOI: http://dx.doi.org/10.7554/eLife.25051.001 PMID:28350295
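
    For reference, the canonical Lotka-Volterra pairwise model that the record evaluates, integrated for two species with arbitrary illustrative parameters:

```python
# Canonical Lotka-Volterra pairwise model for two interacting species (toy parameters).
import numpy as np
from scipy.integrate import solve_ivp

r = np.array([0.8, 0.5])                 # intrinsic growth rates (assumed)
A = np.array([[-1.0, 0.4],               # a_ij: additive fitness effect of species j on i
              [-0.6, -1.0]])

def glv(t, x):
    # dx_i/dt = x_i * (r_i + sum_j a_ij x_j): the additivity assumption in equation form
    return x * (r + A @ x)

sol = solve_ivp(glv, (0.0, 50.0), y0=[0.05, 0.05], dense_output=True)
times = np.linspace(0, 50, 6)
for ti, (x1, x2) in zip(times, sol.sol(times).T):
    print(f"t={ti:5.1f}  x1={x1:.3f}  x2={x2:.3f}")
```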

  8. Causal analysis of ordinal treatments and binary outcomes under truncation by death.

    PubMed

    Wang, Linbo; Richardson, Thomas S; Zhou, Xiao-Hua

    2017-06-01

    It is common that in multi-arm randomized trials, the outcome of interest is "truncated by death," meaning that it is only observed or well-defined conditioning on an intermediate outcome. In this case, in addition to pairwise contrasts, the joint inference for all treatment arms is also of interest. Under a monotonicity assumption we present methods for both pairwise and joint causal analyses of ordinal treatments and binary outcomes in presence of truncation by death. We illustrate via examples the appropriateness of our assumptions in different scientific contexts.

  9. Differential Decomposition Among Pig, Rabbit, and Human Remains.

    PubMed

    Dautartas, Angela; Kenyhercz, Michael W; Vidoli, Giovanna M; Meadows Jantz, Lee; Mundorff, Amy; Steadman, Dawnie Wolfe

    2018-03-30

    While nonhuman animal remains are often utilized in forensic research to develop methods to estimate the postmortem interval, systematic studies that directly validate animals as proxies for human decomposition are lacking. The current project compared decomposition rates among pigs, rabbits, and humans at the University of Tennessee's Anthropology Research Facility across three seasonal trials that spanned nearly 2 years. The Total Body Score (TBS) method was applied to quantify decomposition changes and calculate the postmortem interval (PMI) in accumulated degree days (ADD). Decomposition trajectories were analyzed by comparing the estimated and actual ADD for each seasonal trial and by fuzzy cluster analysis. The cluster analysis demonstrated that the rabbits formed one group while pigs and humans, although more similar to each other than either to rabbits, still showed important differences in decomposition patterns. The decomposition trends show that neither nonhuman model captured the pattern, rate, and variability of human decomposition. © 2018 American Academy of Forensic Sciences.

  10. Ultrahigh-Dimensional Multiclass Linear Discriminant Analysis by Pairwise Sure Independence Screening

    PubMed Central

    Pan, Rui; Wang, Hansheng; Li, Runze

    2016-01-01

    This paper is concerned with the problem of feature screening for multi-class linear discriminant analysis under ultrahigh dimensional setting. We allow the number of classes to be relatively large. As a result, the total number of relevant features is larger than usual. This makes the related classification problem much more challenging than the conventional one, where the number of classes is small (very often two). To solve the problem, we propose a novel pairwise sure independence screening method for linear discriminant analysis with an ultrahigh dimensional predictor. The proposed procedure is directly applicable to the situation with many classes. We further prove that the proposed method is screening consistent. Simulation studies are conducted to assess the finite sample performance of the new procedure. We also demonstrate the proposed methodology via an empirical analysis of a real life example on handwritten Chinese character recognition. PMID:28127109

  11. Cerebrospinal fluid PCR analysis and biochemistry in bodies with severe decomposition.

    PubMed

    Palmiere, Cristian; Vanhaebost, Jessica; Ventura, Francesco; Bonsignore, Alessandro; Bonetti, Luca Reggiani

    2015-02-01

    The aim of this study was to assess whether Neisseria meningitidis, Listeria monocytogenes, Streptococcus pneumoniae and Haemophilus influenzae can be identified using the polymerase chain reaction technique in the cerebrospinal fluid of severely decomposed bodies with known, noninfectious causes of death or whether postmortem changes can lead to false positive results and thus erroneous diagnostic information. Biochemical investigations, postmortem bacteriology and real-time polymerase chain reaction analysis in cerebrospinal fluid were performed in a series of medico-legal autopsies that included noninfectious causes of death with decomposition, bacterial meningitis without decomposition, bacterial meningitis with decomposition, low respiratory tract infections with decomposition and abdominal infections with decomposition. In noninfectious causes of death with decomposition, postmortem investigations failed to reveal results consistent with generalized inflammation or bacterial infections at the time of death. Real-time polymerase chain reaction analysis in cerebrospinal fluid did not identify the studied bacteria in any of these cases. The results of this study highlight the usefulness of molecular approaches in bacteriology as well as the use of alternative biological samples in postmortem biochemistry in order to obtain suitable information even in corpses with severe decompositional changes. Copyright © 2014 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.

  12. An Extension of Dominance Analysis to Canonical Correlation Analysis

    ERIC Educational Resources Information Center

    Huo, Yan; Budescu, David V.

    2009-01-01

    Dominance analysis (Budescu, 1993) offers a general framework for determination of relative importance of predictors in univariate and multivariate multiple regression models. This approach relies on pairwise comparisons of the contribution of predictors in all relevant subset models. In this article we extend dominance analysis to canonical…

  13. Time-Frequency Analysis Reveals Pairwise Interactions in Insect Swarms

    NASA Astrophysics Data System (ADS)

    Puckett, James G.; Ni, Rui; Ouellette, Nicholas T.

    2015-06-01

    The macroscopic emergent behavior of social animal groups is a classic example of dynamical self-organization, and is thought to arise from the local interactions between individuals. Determining these interactions from empirical data sets of real animal groups, however, is challenging. Using multicamera imaging and tracking, we studied the motion of individual flying midges in laboratory mating swarms. By performing a time-frequency analysis of the midge trajectories, we show that the midge behavior can be segmented into two distinct modes: one that is independent and composed of low-frequency maneuvers, and one that consists of higher-frequency nearly harmonic oscillations conducted in synchrony with another midge. We characterize these pairwise interactions, and make a hypothesis as to their biological function.

  14. Performance analysis of a dual-tree algorithm for computing spatial distance histograms

    PubMed Central

    Chen, Shaoping; Tu, Yi-Cheng; Xia, Yuni

    2011-01-01

    Many scientific and engineering fields produce large volume of spatiotemporal data. The storage, retrieval, and analysis of such data impose great challenges to database systems design. Analysis of scientific spatiotemporal data often involves computing functions of all point-to-point interactions. One such analytics, the Spatial Distance Histogram (SDH), is of vital importance to scientific discovery. Recently, algorithms for efficient SDH processing in large-scale scientific databases have been proposed. These algorithms adopt a recursive tree-traversing strategy to process point-to-point distances in the visited tree nodes in batches, thus require less time when compared to the brute-force approach where all pairwise distances have to be computed. Despite the promising experimental results, the complexity of such algorithms has not been thoroughly studied. In this paper, we present an analysis of such algorithms based on a geometric modeling approach. The main technique is to transform the analysis of point counts into a problem of quantifying the area of regions where pairwise distances can be processed in batches by the algorithm. From the analysis, we conclude that the number of pairwise distances that are left to be processed decreases exponentially with more levels of the tree visited. This leads to the proof of a time complexity lower than the quadratic time needed for a brute-force algorithm and builds the foundation for a constant-time approximate algorithm. Our model is also general in that it works for a wide range of point spatial distributions, histogram types, and space-partitioning options in building the tree. PMID:21804753
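
    For reference, the brute-force Spatial Distance Histogram that the dual-tree algorithm accelerates is easy to state: histogram all pairwise distances with a fixed bucket width. The sketch below uses synthetic points and an assumed bucket width; the dual-tree method avoids most of this work by resolving whole node pairs whose distance range falls inside a single bucket.

```python
# Brute-force Spatial Distance Histogram (SDH): the O(n^2) baseline that tree-based
# algorithms accelerate by processing whole node pairs at once. Synthetic points.
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(11)
points = rng.uniform(0.0, 100.0, size=(2000, 3))     # e.g. particle coordinates (assumed)

bucket_width = 5.0                                   # histogram resolution (assumed)
dists = pdist(points)                                # all n*(n-1)/2 pairwise distances
edges = np.arange(0.0, dists.max() + bucket_width, bucket_width)
hist, _ = np.histogram(dists, bins=edges)

for lo, hi, c in zip(edges[:-1], edges[1:], hist):
    print(f"[{lo:6.1f}, {hi:6.1f}) : {c}")
```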

  15. Thermal decomposition characteristics of microwave liquefied rape straw residues using thermogravimetric analysis

    Treesearch

    Xingyan Huang; Cornelis F. De Hoop; Jiulong Xie; Chung-Yun Hse; Jinqiu Qi; Yuzhu Chen; Feng Li

    2017-01-01

    The thermal decomposition characteristics of microwave liquefied rape straw residues with respect to liquefaction condition and pyrolysis conversion were investigated using a thermogravimetric (TG) analyzer at heating rates of 5, 20, and 50 °C min⁻¹. The hemicellulose decomposition peak was absent in the derivative thermogravimetric analysis (DTG...

  16. Ensemble survival tree models to reveal pairwise interactions of variables with time-to-events outcomes in low-dimensional setting

    PubMed Central

    Dazard, Jean-Eudes; Ishwaran, Hemant; Mehlotra, Rajeev; Weinberg, Aaron; Zimmerman, Peter

    2018-01-01

    Unraveling interactions among variables such as genetic, clinical, demographic and environmental factors is essential to understand the development of common and complex diseases. To increase the power to detect such variables interactions associated with clinical time-to-events outcomes, we borrowed established concepts from random survival forest (RSF) models. We introduce a novel RSF-based pairwise interaction estimator and derive a randomization method with bootstrap confidence intervals for inferring interaction significance. Using various linear and nonlinear time-to-events survival models in simulation studies, we first show the efficiency of our approach: true pairwise interaction-effects between variables are uncovered, while they may not be accompanied with their corresponding main-effects, and may not be detected by standard semi-parametric regression modeling and test statistics used in survival analysis. Moreover, using a RSF-based cross-validation scheme for generating prediction estimators, we show that informative predictors may be inferred. We applied our approach to an HIV cohort study recording key host gene polymorphisms and their association with HIV change of tropism or AIDS progression. Altogether, this shows how linear or nonlinear pairwise statistical interactions of variables may be efficiently detected with a predictive value in observational studies with time-to-event outcomes. PMID:29453930

  17. Ensemble survival tree models to reveal pairwise interactions of variables with time-to-events outcomes in low-dimensional setting.

    PubMed

    Dazard, Jean-Eudes; Ishwaran, Hemant; Mehlotra, Rajeev; Weinberg, Aaron; Zimmerman, Peter

    2018-02-17

    Unraveling interactions among variables such as genetic, clinical, demographic and environmental factors is essential to understand the development of common and complex diseases. To increase the power to detect such variable interactions associated with clinical time-to-event outcomes, we borrowed established concepts from random survival forest (RSF) models. We introduce a novel RSF-based pairwise interaction estimator and derive a randomization method with bootstrap confidence intervals for inferring interaction significance. Using various linear and nonlinear time-to-event survival models in simulation studies, we first show the efficiency of our approach: true pairwise interaction-effects between variables are uncovered, while they may not be accompanied by their corresponding main-effects, and may not be detected by standard semi-parametric regression modeling and test statistics used in survival analysis. Moreover, using a RSF-based cross-validation scheme for generating prediction estimators, we show that informative predictors may be inferred. We applied our approach to an HIV cohort study recording key host gene polymorphisms and their association with HIV change of tropism or AIDS progression. Altogether, this shows how linear or nonlinear pairwise statistical interactions of variables may be efficiently detected with a predictive value in observational studies with time-to-event outcomes.

  18. Domain decomposition for aerodynamic and aeroacoustic analyses, and optimization

    NASA Technical Reports Server (NTRS)

    Baysal, Oktay

    1995-01-01

    The overarching theme was the domain decomposition, which intended to improve the numerical solution technique for the partial differential equations at hand; in the present study, those that governed either the fluid flow, or the aeroacoustic wave propagation, or the sensitivity analysis for a gradient-based optimization. The role of the domain decomposition extended beyond the original impetus of discretizing geometrical complex regions or writing modular software for distributed-hardware computers. It induced function-space decompositions and operator decompositions that offered the valuable property of near independence of operator evaluation tasks. The objectives have gravitated about the extensions and implementations of either the previously developed or concurrently being developed methodologies: (1) aerodynamic sensitivity analysis with domain decomposition (SADD); (2) computational aeroacoustics of cavities; and (3) dynamic, multibody computational fluid dynamics using unstructured meshes.

  19. Adverse events and treatment failure leading to discontinuation of recently approved antipsychotic drugs in schizophrenia: A network meta-analysis.

    PubMed

    Tonin, Fernanda S; Piazza, Thais; Wiens, Astrid; Fernandez-Llimos, Fernando; Pontarolo, Roberto

    2015-12-01

    Objective: We aimed to gather evidence of the discontinuation rates owing to adverse events or treatment failure for four recently approved antipsychotics (asenapine, blonanserin, iloperidone, and lurasidone). Methods: A systematic review followed by pairwise meta-analysis and mixed treatment comparison meta-analysis (MTC) was performed, including randomized controlled trials (RCTs) that compared the use of the above-mentioned drugs versus placebo in patients with schizophrenia. An electronic search was conducted in PubMed, Scopus, Science Direct, Scielo, the Cochrane Library, and International Pharmaceutical Abstracts (January 2015). The included trials were at least single blinded. The main outcome measures extracted were discontinuation owing to adverse events and discontinuation owing to treatment failure. Results: Fifteen RCTs were identified (n = 5400 participants) and 13 of them were amenable for use in our meta-analyses. No significant differences were observed between any of the four drugs and placebo as regards discontinuation owing to adverse events, whether in pairwise meta-analysis or in MTC. All drugs presented a better profile than placebo on discontinuation owing to treatment failure, both in pairwise meta-analysis and MTC. Asenapine was found to be the best therapy in terms of tolerability owing to failure, while lurasidone was the worst treatment in terms of adverse events. The evidence around blonanserin is weak. Conclusion: MTCs allowed the creation of two different rank orders of these four antipsychotic drugs in two outcome measures. This evidence-generating method allows direct and indirect comparisons, supporting approval and pricing decisions when lacking sufficient, direct, head-to-head trials.
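
    For the pairwise portion of such an analysis, a minimal fixed-effect inverse-variance pooling sketch is given below; the per-trial log odds ratios and standard errors are hypothetical, and the Bayesian MTC model used in the record is not reproduced here.

        import numpy as np

        def fixed_effect_pool(log_or, se):
            # Inverse-variance pooling of per-trial log odds ratios (fixed-effect pairwise meta-analysis).
            log_or, se = np.asarray(log_or, float), np.asarray(se, float)
            w = 1.0 / se**2
            pooled = np.sum(w * log_or) / np.sum(w)
            pooled_se = np.sqrt(1.0 / np.sum(w))
            ci = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
            return np.exp(pooled), tuple(np.exp(ci))  # back-transform to the odds ratio scale

        # hypothetical discontinuation data from three placebo-controlled trials
        print(fixed_effect_pool([-0.22, 0.05, -0.40], [0.25, 0.30, 0.20]))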

  20. Methods for Mediation Analysis with Missing Data

    ERIC Educational Resources Information Center

    Zhang, Zhiyong; Wang, Lijuan

    2013-01-01

    Despite wide applications of both mediation models and missing data techniques, formal discussion of mediation analysis with missing data is still rare. We introduce and compare four approaches to dealing with missing data in mediation analysis including listwise deletion, pairwise deletion, multiple imputation (MI), and a two-stage maximum…
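
    The contrast between listwise and pairwise deletion mentioned in this record can be illustrated with a small hypothetical data frame; this sketch covers only the two deletion strategies, not the multiple imputation or two-stage maximum likelihood approaches.

        import numpy as np
        import pandas as pd

        # hypothetical mediation variables X (predictor), M (mediator), Y (outcome) with missing entries
        df = pd.DataFrame({
            "X": [1.0, 2.0, np.nan, 4.0, 5.0, 6.0],
            "M": [2.1, np.nan, 3.0, 4.2, 5.1, 6.3],
            "Y": [1.9, 2.5, 3.1, np.nan, 5.0, 6.1],
        })

        listwise = df.dropna().corr()      # listwise deletion: keep only complete rows
        pairwise = df.corr(min_periods=2)  # pairwise deletion: each correlation uses all rows complete for that pair
        print(listwise, pairwise, sep="\n\n")

    With listwise deletion every correlation is computed from the same complete-case subset, whereas with pairwise deletion each correlation uses all rows available for that particular pair of variables.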

  1. Analysis and Prediction of Sea Ice Evolution using Koopman Mode Decomposition Techniques

    DTIC Science & Technology

    2018-04-30

    Title: Analysis and Prediction of Sea Ice Evolution using Koopman Mode Decomposition Techniques Subject: Monthly Progress Report Period of...Resources: N/A TOTAL: $18,687 2 TECHNICAL STATUS REPORT Abstract The program goal is analysis of sea ice dynamical behavior using Koopman Mode Decompo...sition (KMD) techniques. The work in the program’s first month consisted of improvements to data processing code, inclusion of additional arctic sea ice

  2. Analysis of cured carbon-phenolic decomposition products to investigate the thermal decomposition of nozzle materials

    NASA Technical Reports Server (NTRS)

    Thompson, James M.; Daniel, Janice D.

    1989-01-01

    The development of a mass spectrometer/thermal analyzer/computer (MS/TA/Computer) system capable of providing simultaneous thermogravimetry (TG), differential thermal analysis (DTA), derivative thermogravimetry (DTG) and evolved gas detection and analysis (EGD and EGA) under both atmospheric and high pressure conditions is described. The combined system was used to study the thermal decomposition of the nozzle material that constitutes the throat of the solid rocket boosters (SRB).

  3. Application of the Interacting Quantum Atoms Approach to the S66 and Ionic-Hydrogen-Bond Datasets for Noncovalent Interactions.

    PubMed

    Suárez, Dimas; Díaz, Natalia; Francisco, Evelio; Martín Pendás, Angel

    2018-04-17

    The interacting quantum atoms (IQA) method can assess, systematically and in great detail, the strength and physics of both covalent and noncovalent interactions. The lack of a pair density in density functional theory (DFT), which precludes the direct IQA decomposition of the characteristic exchange-correlation energy, has been recently overcome by means of a scaling technique, which can largely expand the applicability of the method. To better assess the utility of the augmented IQA methodology to derive quantum chemical decompositions at the atomic and molecular levels, we report the results of Hartree-Fock (HF) and DFT calculations on the complexes included in the S66 and the ionic H-bond databases of benchmark geometry and binding energies. For all structures, we perform single-point and geometry optimizations using HF and selected DFT methods with triple-ζ basis sets followed by full IQA calculations. Pairwise dispersion energies are accounted for by the D3 method. We analyze the goodness of the HF-D3 and DFT-D3 binding energies, the magnitude of numerical errors, the fragment and atomic distribution of formation energies, etc. It is shown that fragment-based IQA decomposes the formation energies in comparable terms to those of perturbative approaches and that the atomic IQA energies hold the promise of rigorously quantifying atomic and group energy contributions in larger biomolecular systems. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
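
    As a rough illustration of the kind of pairwise dispersion sum that corrections such as D3 build on, the following sketch evaluates a generic damped -C6/r^6 term over atom pairs; the coordinates, C6 coefficients, and damping parameters are placeholders and do not reproduce the actual D3 parameterization.

        import numpy as np

        def pairwise_dispersion(coords, c6, s6=1.0, a=14.0, r0=3.0):
            # Sum of damped pairwise -C6/r^6 terms (generic Grimme-style form; coefficients are placeholders).
            e = 0.0
            n = len(coords)
            for i in range(n):
                for j in range(i + 1, n):
                    r = np.linalg.norm(coords[i] - coords[j])
                    damping = 1.0 / (1.0 + np.exp(-a * (r / r0 - 1.0)))  # simple Fermi-type damping
                    e -= s6 * np.sqrt(c6[i] * c6[j]) / r**6 * damping
            return e

        coords = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 3.2], [2.8, 0.0, 1.5]])  # hypothetical positions (a.u.)
        c6 = np.array([15.0, 15.0, 6.5])                                        # hypothetical per-atom C6 values
        print(pairwise_dispersion(coords, c6))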

  4. Assessment of a new method for the analysis of decomposition gases of polymers by a combining thermogravimetric solid-phase extraction and thermal desorption gas chromatography mass spectrometry.

    PubMed

    Duemichen, E; Braun, U; Senz, R; Fabian, G; Sturm, H

    2014-08-08

    For analysis of the gaseous thermal decomposition products of polymers, the common techniques are thermogravimetry, combined with Fourier transformed infrared spectroscopy (TGA-FTIR) and mass spectrometry (TGA-MS). These methods offer a simple approach to the decomposition mechanism, especially for small decomposition molecules. Complex spectra of gaseous mixtures are very often hard to identify because of overlapping signals. In this paper a new method is described to adsorb the decomposition products during controlled conditions in TGA on solid-phase extraction (SPE) material: twisters. Subsequently the twisters were analysed with thermal desorption gas chromatography mass spectrometry (TDS-GC-MS), which allows the decomposition products to be separated and identified using an MS library. The thermoplastics polyamide 66 (PA 66) and polybutylene terephthalate (PBT) were used as example polymers. The influence of the sample mass and of the purge gas flow during the decomposition process was investigated in TGA. The advantages and limitations of the method were presented in comparison to the common analysis techniques, TGA-FTIR and TGA-MS. Copyright © 2014 Elsevier B.V. All rights reserved.

  5. Experimental characterization of pairwise correlations from triple quantum correlated beams generated by cascaded four-wave mixing processes

    NASA Astrophysics Data System (ADS)

    Wang, Wei; Cao, Leiming; Lou, Yanbo; Du, Jinjian; Jing, Jietai

    2018-01-01

    We theoretically and experimentally characterize the performance of the pairwise correlations from triple quantum correlated beams based on the cascaded four-wave mixing (FWM) processes. The pairwise correlations between any two of the beams are theoretically calculated and experimentally measured. The experimental and theoretical results are in good agreement. We find that two of the three pairwise correlations can be in the quantum regime. The other pairwise correlation is always in the classical regime. In addition, we also measure the triple-beam correlation which is always in the quantum regime. Such unbalanced and controllable pairwise correlation structures may be taken as advantages in practical quantum communications, for example, hierarchical quantum secret sharing. Our results also open the way for the classification and application of quantum states generated from the cascaded FWM processes.

  6. Thermal decomposition kinetics of hydrazinium cerium 2,3-Pyrazinedicarboxylate hydrate: a new precursor for CeO2.

    PubMed

    Premkumar, Thathan; Govindarajan, Subbiah; Coles, Andrew E; Wight, Charles A

    2005-04-07

    The thermal decomposition kinetics of N(2)H(5)[Ce(pyrazine-2,3-dicarboxylate)(2)(H(2)O)] (Ce-P) have been studied by thermogravimetric analysis (TGA) and differential scanning calorimetry (DSC), for the first time; TGA analysis reveals an oxidative decomposition process yielding CeO(2) as the final product with an activation energy of approximately 160 kJ mol(-1). This complex may be used as a precursor to fine particle cerium oxides due to its low temperature of decomposition.

  7. On the Possibility of Studying the Reactions of the Thermal Decomposition of Energy Substances by the Methods of High-Resolution Terahertz Spectroscopy

    NASA Astrophysics Data System (ADS)

    Vaks, V. L.; Domracheva, E. G.; Chernyaeva, M. B.; Pripolzin, S. I.; Revin, L. S.; Tretyakov, I. V.; Anfertyev, V. A.; Yablokov, A. A.; Lukyanenko, I. A.; Sheikov, Yu. V.

    2018-02-01

    We show prospects for using the method of high-resolution terahertz spectroscopy for a continuous analysis of the decomposition products of energy substances in the gas phase (including short-lived ones) in a wide temperature range. The experimental setup, which includes a terahertz spectrometer for studying the thermal decomposition reactions, is described. The results of analysis of the gaseous decomposition products of energy substances by the example of ammonium nitrate heated from room temperature to 167°C are presented.

  8. Comparative safety and efficacy of vasopressors for mortality in septic shock: A network meta-analysis.

    PubMed

    Nagendran, Myura; Maruthappu, Mahiben; Gordon, Anthony C; Gurusamy, Kurinchi S

    2016-05-01

    Septic shock is a life-threatening condition requiring vasopressor agents to support the circulatory system. Several agents exist with choice typically guided by the specific clinical scenario. We used a network meta-analysis approach to rate the comparative efficacy and safety of vasopressors for mortality and arrhythmia incidence in septic shock patients. We performed a comprehensive electronic database search including Medline, Embase, Science Citation Index Expanded and the Cochrane database. Randomised trials investigating vasopressor agents in septic shock patients and specifically assessing 28-day mortality or arrhythmia incidence were included. A Bayesian network meta-analysis was performed using Markov chain Monte Carlo methods. Thirteen trials of low to moderate risk of bias in which 3146 patients were randomised were included. There was no pairwise evidence to suggest one agent was superior over another for mortality. In the network meta-analysis, vasopressin was significantly superior to dopamine (OR 0.68 (95% CI 0.5 to 0.94)) for mortality. For arrhythmia incidence, standard pairwise meta-analyses confirmed that dopamine led to a higher incidence of arrhythmias than norepinephrine (OR 2.69 (95% CI 2.08 to 3.47)). In the network meta-analysis, there was no evidence of superiority of one agent over another. In this network meta-analysis, vasopressin was superior to dopamine for 28-day mortality in septic shock. Existing pairwise information supports the use of norepinephrine over dopamine. Our findings suggest that dopamine should be avoided in patients with septic shock and that other vasopressor agents should continue to be based on existing guidelines and clinical judgement of the specific presentation of the patient.

  9. Weak Higher-Order Interactions in Macroscopic Functional Networks of the Resting Brain.

    PubMed

    Huang, Xuhui; Xu, Kaibin; Chu, Congying; Jiang, Tianzi; Yu, Shan

    2017-10-25

    Interactions among different brain regions are usually examined through functional connectivity (FC) analysis, which is exclusively based on measuring pairwise correlations in activities. However, interactions beyond the pairwise level, that is, higher-order interactions (HOIs), are vital in understanding the behavior of many complex systems. So far, whether HOIs exist among brain regions and how they can affect the brain's activities remains largely elusive. To address these issues, here, we analyzed blood oxygenation level-dependent (BOLD) signals recorded from six typical macroscopic functional networks of the brain in 100 human subjects (46 males and 54 females) during the resting state. Through examining the binarized BOLD signals, we found that HOIs within and across individual networks were both very weak regardless of the network size, topology, degree of spatial proximity, spatial scales, and whether the global signal was regressed. To investigate the potential mechanisms underlying the weak HOIs, we analyzed the dynamics of a network model and also found that HOIs were generally weak within a wide range of key parameters provided that the overall dynamic feature of the model was similar to the empirical data and it was operating close to a linear fluctuation regime. Our results suggest that weak HOI may be a general property of brain's macroscopic functional networks, which implies the dominance of pairwise interactions in shaping brain activities at such a scale and warrants the validity of widely used pairwise-based FC approaches. SIGNIFICANCE STATEMENT To explain how activities of different brain areas are coordinated through interactions is essential to revealing the mechanisms underlying various brain functions. Traditionally, such an interaction structure is commonly studied using pairwise-based functional network analyses. It is unclear whether the interactions beyond the pairwise level (higher-order interactions or HOIs) play any role in this process. Here, we show that HOIs are generally weak in macroscopic brain networks. We also suggest a possible dynamical mechanism that may underlie this phenomenon. These results provide plausible explanation for the effectiveness of widely used pairwise-based approaches in analyzing brain networks. More importantly, it reveals a previously unknown, simple organization of the brain's macroscopic functional systems. Copyright © 2017 the authors 0270-6474/17/3710481-17$15.00/0.
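
    A minimal sketch of the pairwise step described in this record, binarizing regional signals and computing a functional connectivity matrix, could read as follows; the BOLD data here are random and purely illustrative.

        import numpy as np

        rng = np.random.default_rng(1)
        bold = rng.standard_normal((6, 500))  # hypothetical BOLD signals: 6 regions x 500 time points

        # binarize each region's signal around its own median, as a simple stand-in for the binarization step
        binarized = (bold > np.median(bold, axis=1, keepdims=True)).astype(float)

        fc = np.corrcoef(binarized)           # pairwise functional connectivity matrix
        print(np.round(fc, 2))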

  10. Statistical mechanical foundation of the peridynamic nonlocal continuum theory: energy and momentum conservation laws.

    PubMed

    Lehoucq, R B; Sears, Mark P

    2011-09-01

    The purpose of this paper is to derive the energy and momentum conservation laws of the peridynamic nonlocal continuum theory using the principles of classical statistical mechanics. The peridynamic laws allow the consideration of discontinuous motion, or deformation, by relying on integral operators. These operators sum forces and power expenditures separated by a finite distance and so represent nonlocal interaction. The integral operators replace the differential divergence operators conventionally used, thereby obviating special treatment at points of discontinuity. The derivation presented employs a general multibody interatomic potential, avoiding the standard assumption of a pairwise decomposition. The integral operators are also expressed in terms of a stress tensor and heat flux vector under the assumption that these fields are differentiable, demonstrating that the classical continuum energy and momentum conservation laws are consequences of the more general peridynamic laws. An important conclusion is that nonlocal interaction is intrinsic to continuum conservation laws when derived using the principles of statistical mechanics.

  11. On orbital allotments for geostationary satellites

    NASA Technical Reports Server (NTRS)

    Gonsalvez, David J. A.; Reilly, Charles H.; Mount-Campbell, Clark A.

    1986-01-01

    The following satellite synthesis problem is addressed: communication satellites are to be allotted positions on the geostationary arc so that interference does not exceed a given acceptable level by enforcing conservative pairwise satellite separation. A desired location is specified for each satellite, and the objective is to minimize the sum of the deviations between the satellites' prescribed and desired locations. Two mixed integer programming models for the satellite synthesis problem are presented. Four solution strategies, branch-and-bound, Benders' decomposition, linear programming with restricted basis entry, and a switching heuristic, are used to find solutions to example synthesis problems. Computational results indicate the switching algorithm yields solutions of good quality in reasonable execution times when compared to the other solution methods. It is demonstrated that the switching algorithm can be applied to synthesis problems with the objective of minimizing the largest deviation between a prescribed location and the corresponding desired location. Furthermore, it is shown that the switching heuristic can use non-conservative, location-dependent satellite separations in order to satisfy interference criteria.

  12. Global sensitivity analysis for fuzzy inputs based on the decomposition of fuzzy output entropy

    NASA Astrophysics Data System (ADS)

    Shi, Yan; Lu, Zhenzhou; Zhou, Yicheng

    2018-06-01

    To analyse the component of fuzzy output entropy, a decomposition method of fuzzy output entropy is first presented. After the decomposition of fuzzy output entropy, the total fuzzy output entropy can be expressed as the sum of the component fuzzy entropy contributed by fuzzy inputs. Based on the decomposition of fuzzy output entropy, a new global sensitivity analysis model is established for measuring the effects of uncertainties of fuzzy inputs on the output. The global sensitivity analysis model can not only tell the importance of fuzzy inputs but also simultaneously reflect the structural composition of the response function to a certain degree. Several examples illustrate the validity of the proposed global sensitivity analysis, which is a significant reference in engineering design and optimization of structural systems.

  13. Synthesis of MCMC and Belief Propagation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahn, Sungsoo; Chertkov, Michael; Shin, Jinwoo

    Markov Chain Monte Carlo (MCMC) and Belief Propagation (BP) are the most popular algorithms for computational inference in Graphical Models (GM). In principle, MCMC is an exact probabilistic method which, however, often suffers from exponentially slow mixing. In contrast, BP is a deterministic method, which is typically fast, empirically very successful, however in general lacking control of accuracy over loopy graphs. In this paper, we introduce MCMC algorithms correcting the approximation error of BP, i.e., we provide a way to compensate for BP errors via a consecutive BP-aware MCMC. Our framework is based on the Loop Calculus (LC) approach, which allows the BP error to be expressed as a sum of weighted generalized loops. Although the full series is computationally intractable, it is known that a truncated series, summing up all 2-regular loops, is computable in polynomial-time for planar pair-wise binary GMs and it also provides a highly accurate approximation empirically. Motivated by this, we first propose a polynomial-time approximation MCMC scheme for the truncated series of general (non-planar) pair-wise binary models. Our main idea here is to use the Worm algorithm, known to provide fast mixing in other (related) problems, and then design an appropriate rejection scheme to sample 2-regular loops. Furthermore, we also design an efficient rejection-free MCMC scheme for approximating the full series. The main novelty underlying our design is in utilizing the concept of cycle basis, which provides an efficient decomposition of the generalized loops. In essence, the proposed MCMC schemes run on a transformed GM built upon the non-trivial BP solution, and our experiments show that this synthesis of BP and MCMC outperforms both direct MCMC and bare BP schemes.

  14. Affective Outcomes of Schooling: Full-Information Item Factor Analysis of a Student Questionnaire.

    ERIC Educational Resources Information Center

    Muraki, Eiji; Engelhard, George, Jr.

    Recent developments in dichotomous factor analysis based on multidimensional item response models (Bock and Aitkin, 1981; Muthen, 1978) provide an effective method for exploring the dimensionality of questionnaire items. Implemented in the TESTFACT program, this "full information" item factor analysis accounts not only for the pairwise joint…

  15. Manipulation of Karyotype in Caenorhabditis elegans Reveals Multiple Inputs Driving Pairwise Chromosome Synapsis During Meiosis

    PubMed Central

    Roelens, Baptiste; Schvarzstein, Mara; Villeneuve, Anne M.

    2015-01-01

    Meiotic chromosome segregation requires pairwise association between homologs, stabilized by the synaptonemal complex (SC). Here, we investigate factors contributing to pairwise synapsis by investigating meiosis in polyploid worms. We devised a strategy, based on transient inhibition of cohesin function, to generate polyploid derivatives of virtually any Caenorhabditis elegans strain. We exploited this strategy to investigate the contribution of recombination to pairwise synapsis in tetraploid and triploid worms. In otherwise wild-type polyploids, chromosomes first sort into homolog groups, then multipartner interactions mature into exclusive pairwise associations. Pairwise synapsis associations still form in recombination-deficient tetraploids, confirming a propensity for synapsis to occur in a strictly pairwise manner. However, the transition from multipartner to pairwise association was perturbed in recombination-deficient triploids, implying a role for recombination in promoting this transition when three partners compete for synapsis. To evaluate the basis of synapsis partner preference, we generated polyploid worms heterozygous for normal sequence and rearranged chromosomes sharing the same pairing center (PC). Tetraploid worms had no detectable preference for identical partners, indicating that PC-adjacent homology drives partner choice in this context. In contrast, triploid worms exhibited a clear preference for identical partners, indicating that homology outside the PC region can influence partner choice. Together, our findings suggest a two-phase model for C. elegans synapsis: an early phase, in which initial synapsis interactions are driven primarily by recombination-independent assessment of homology near PCs and by a propensity for pairwise SC assembly, and a later phase in which mature synaptic interactions are promoted by recombination. PMID:26500263

  16. Scalable parallel elastic-plastic finite element analysis using a quasi-Newton method with a balancing domain decomposition preconditioner

    NASA Astrophysics Data System (ADS)

    Yusa, Yasunori; Okada, Hiroshi; Yamada, Tomonori; Yoshimura, Shinobu

    2018-04-01

    A domain decomposition method for large-scale elastic-plastic problems is proposed. The proposed method is based on a quasi-Newton method in conjunction with a balancing domain decomposition preconditioner. The use of a quasi-Newton method overcomes two problems associated with the conventional domain decomposition method based on the Newton-Raphson method: (1) avoidance of a double-loop iteration algorithm, which generally has large computational complexity, and (2) consideration of the local concentration of nonlinear deformation, which is observed in elastic-plastic problems with stress concentration. Moreover, the application of a balancing domain decomposition preconditioner ensures scalability. Using the conventional and proposed domain decomposition methods, several numerical tests, including weak scaling tests, were performed. The convergence performance of the proposed method is comparable to that of the conventional method. In particular, in elastic-plastic analysis, the proposed method exhibits better convergence performance than the conventional method.

  17. Reporting of analyses from randomized controlled trials with multiple arms: a systematic review.

    PubMed

    Baron, Gabriel; Perrodeau, Elodie; Boutron, Isabelle; Ravaud, Philippe

    2013-03-27

    Multiple-arm randomized trials can be more complex in their design, data analysis, and result reporting than two-arm trials. We conducted a systematic review to assess the reporting of analyses in reports of randomized controlled trials (RCTs) with multiple arms. The literature in the MEDLINE database was searched for reports of RCTs with multiple arms published in 2009 in the core clinical journals. Two reviewers extracted data using a standardized extraction form. In total, 298 reports were identified. Descriptions of the baseline characteristics and outcomes per group were missing in 45 reports (15.1%) and 48 reports (16.1%), respectively. More than half of the articles (n = 171, 57.4%) reported that a planned global test comparison was used (that is, assessment of the global differences between all groups), but 67 (39.2%) of these 171 articles did not report details of the planned analysis. Of the 116 articles reporting a global comparison test, 12 (10.3%) did not report the analysis as planned. In all, 60% of publications (n = 180) described planned pairwise test comparisons (that is, assessment of the difference between two groups), but 20 of these 180 articles (11.1%) did not report the pairwise test comparisons. Of the 204 articles reporting pairwise test comparisons, the comparisons were not planned for 44 (21.6%) of them. Less than half the reports (n = 137; 46%) provided baseline and outcome data per arm and reported the analysis as planned. Our findings highlight discrepancies between the planning and reporting of analyses in reports of multiple-arm trials.

  18. Structure based alignment and clustering of proteins (STRALCP)

    DOEpatents

    Zemla, Adam T.; Zhou, Carol E.; Smith, Jason R.; Lam, Marisa W.

    2013-06-18

    Disclosed are computational methods of clustering a set of protein structures based on local and pair-wise global similarity values. Pair-wise local and global similarity values are generated based on pair-wise structural alignments for each protein in the set of protein structures. Initially, the protein structures are clustered based on pair-wise local similarity values. The protein structures are then clustered based on pair-wise global similarity values. For each given cluster both a representative structure and spans of conserved residues are identified. The representative protein structure is used to assign newly-solved protein structures to a group. The spans are used to characterize conservation and assign a "structural footprint" to the cluster.
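
    In the spirit of the clustering stages described above, a rough sketch that groups structures from a pairwise global similarity matrix using standard hierarchical clustering is shown below; the similarity values and distance cut-off are hypothetical, and this is not the patented STRALCP procedure itself.

        import numpy as np
        from scipy.cluster.hierarchy import linkage, fcluster
        from scipy.spatial.distance import squareform

        # hypothetical pairwise global similarity scores (0..1) for five protein structures
        sim = np.array([
            [1.00, 0.92, 0.35, 0.30, 0.88],
            [0.92, 1.00, 0.33, 0.28, 0.85],
            [0.35, 0.33, 1.00, 0.90, 0.31],
            [0.30, 0.28, 0.90, 1.00, 0.29],
            [0.88, 0.85, 0.31, 0.29, 1.00],
        ])

        dist = 1.0 - sim                                 # convert similarity to a dissimilarity
        z = linkage(squareform(dist), method="average")  # average-linkage hierarchical clustering
        print(fcluster(z, t=0.5, criterion="distance"))  # cluster labels at a chosen cut-off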

  19. Using structural equation modeling for network meta-analysis.

    PubMed

    Tu, Yu-Kang; Wu, Yun-Chun

    2017-07-14

    Network meta-analysis overcomes the limitations of traditional pair-wise meta-analysis by incorporating all available evidence into a general statistical framework for simultaneous comparisons of several treatments. Currently, network meta-analyses are undertaken either within the Bayesian hierarchical linear models or frequentist generalized linear mixed models. Structural equation modeling (SEM) is a statistical method originally developed for modeling causal relations among observed and latent variables. As random effect is explicitly modeled as a latent variable in SEM, it is very flexible for analysts to specify complex random effect structure and to make linear and nonlinear constraints on parameters. The aim of this article is to show how to undertake a network meta-analysis within the statistical framework of SEM. We used an example dataset to demonstrate the standard fixed and random effect network meta-analysis models can be easily implemented in SEM. It contains results of 26 studies that directly compared three treatment groups A, B and C for prevention of first bleeding in patients with liver cirrhosis. We also showed that a new approach to network meta-analysis based on the technique of unrestricted weighted least squares (UWLS) method can also be undertaken using SEM. For both the fixed and random effect network meta-analysis, SEM yielded similar coefficients and confidence intervals to those reported in the previous literature. The point estimates of two UWLS models were identical to those in the fixed effect model but the confidence intervals were greater. This is consistent with results from the traditional pairwise meta-analyses. Compared to the UWLS model with common variance adjusted factor, the UWLS model with unique variance adjusted factor has greater confidence intervals when the heterogeneity was larger in the pairwise comparison. The UWLS model with unique variance adjusted factor reflects the difference in heterogeneity within each comparison. SEM provides a very flexible framework for univariate and multivariate meta-analysis, and its potential as a powerful tool for advanced meta-analysis is still to be explored.

  20. Analysis of Geographic and Pairwise Distances among Chinese Cashmere Goat Populations

    PubMed Central

    Liu, Jian-Bin; Wang, Fan; Lang, Xia; Zha, Xi; Sun, Xiao-Ping; Yue, Yao-Jing; Feng, Rui-Lin; Yang, Bo-Hui; Guo, Jian

    2013-01-01

    This study investigated the geographic and pairwise distances of nine Chinese local Cashmere goat populations through the analysis of 20 microsatellite DNA markers. Fluorescence PCR was used to identify the markers, which were selected based on their significance as identified by the Food and Agriculture Organization of the United Nations (FAO) and the International Society for Animal Genetics (ISAG). In total, 206 alleles were detected; the average allele number was 10.30; the polymorphism information content of loci ranged from 0.5213 to 0.7582; the number of effective alleles ranged from 4.0484 to 4.6178; the observed heterozygosity ranged from 0.5023 to 0.5602 for the practical sample; the expected heterozygosity ranged from 0.5783 to 0.6464; and allelic richness ranged from 4.7551 to 8.0693. These results indicated that Chinese Cashmere goat populations exhibited rich genetic diversity. Further, Wright's F-statistic of subpopulation within total (FST) was 0.1184; the genetic differentiation coefficient (GST) was 0.0940; and the average gene flow (Nm) was 2.0415. All pairwise FST values among the populations were highly significant (p<0.01 or p<0.001), suggesting that the populations studied should all be considered to be separate breeds. Finally, the clustering analysis divided the Chinese Cashmere goat populations into at least four clusters, with the Hexi and Yashan goat populations alone in one cluster. These results have provided useful, practical, and important information for the future of Chinese Cashmere goat breeding. PMID:25049794
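
    A minimal sketch of two of the summary statistics reported above, expected heterozygosity and a simple Nei-style pairwise FST at a single locus, is given below; the allele frequencies are hypothetical and the estimators actually used in the study may differ.

        import numpy as np

        def expected_heterozygosity(freqs):
            # Nei's expected heterozygosity for one locus: 1 - sum of squared allele frequencies.
            return 1.0 - np.sum(np.asarray(freqs, float) ** 2)

        def pairwise_fst(freq_a, freq_b):
            # Simple Nei-style FST for two populations at one locus: (HT - HS) / HT.
            p_bar = (np.asarray(freq_a, float) + np.asarray(freq_b, float)) / 2.0
            h_t = expected_heterozygosity(p_bar)
            h_s = (expected_heterozygosity(freq_a) + expected_heterozygosity(freq_b)) / 2.0
            return (h_t - h_s) / h_t

        # hypothetical allele frequencies at one microsatellite locus in two goat populations
        pop1 = [0.50, 0.30, 0.20]
        pop2 = [0.20, 0.30, 0.50]
        print(expected_heterozygosity(pop1), pairwise_fst(pop1, pop2))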

  1. Transportation Network Analysis and Decomposition Methods

    DOT National Transportation Integrated Search

    1978-03-01

    The report outlines research in transportation network analysis using decomposition techniques as a basis for problem solutions. Two transportation network problems were considered in detail: a freight network flow problem and a scheduling problem fo...

  2. Electrochemical and Infrared Absorption Spectroscopy Detection of SF₆ Decomposition Products.

    PubMed

    Dong, Ming; Zhang, Chongxing; Ren, Ming; Albarracín, Ricardo; Ye, Rixin

    2017-11-15

    Sulfur hexafluoride (SF₆) gas-insulated electrical equipment is widely used in high-voltage (HV) and extra-high-voltage (EHV) power systems. Partial discharge (PD) and local heating can occur in the electrical equipment because of insulation faults, which results in SF₆ decomposition and ultimately generates several types of decomposition products. These SF₆ decomposition products can be qualitatively and quantitatively detected with relevant detection methods, and such detection contributes to diagnosing the internal faults and evaluating the security risks of the equipment. At present, multiple detection methods exist for analyzing the SF₆ decomposition products, and electrochemical sensing (ES) and infrared (IR) spectroscopy are well suited for application in online detection. In this study, the combination of ES with IR spectroscopy is used to detect SF₆ gas decomposition. First, the characteristics of these two detection methods are studied, and the data analysis matrix is established. Then, a qualitative and quantitative analysis ES-IR model is established by adopting a two-step approach. A SF₆ decomposition detector is designed and manufactured by combining an electrochemical sensor and IR spectroscopy technology. The detector is used to detect SF₆ gas decomposition and is verified to reliably and accurately detect the gas components and concentrations.

  3. Critical analysis of nitramine decomposition data: Activation energies and frequency factors for HMX and RDX decomposition

    NASA Technical Reports Server (NTRS)

    Schroeder, M. A.

    1980-01-01

    A summary of a literature review on thermal decomposition of HMX and RDX is presented. The decomposition apparently fits first order kinetics. Recommended values for Arrhenius parameters for HMX and RDX decomposition in the gaseous and liquid phases and for decomposition of RDX in solution in TNT are given. The apparent importance of autocatalysis is pointed out, as are some possible complications that may be encountered in interpreting, extending, or extrapolating kinetic data for these compounds from measurements carried out below their melting points to the higher temperatures and pressures characteristic of combustion.
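
    To illustrate how Arrhenius parameters of this kind would be used, a minimal first-order isothermal decomposition sketch follows; the pre-exponential factor and activation energy below are placeholders, not the recommended values from the review.

        import numpy as np

        R = 8.314  # gas constant, J mol^-1 K^-1

        def fraction_remaining(t, T, A, Ea):
            # First-order isothermal decomposition: fraction remaining = exp(-k t), with Arrhenius k = A exp(-Ea/RT).
            k = A * np.exp(-Ea / (R * T))
            return np.exp(-k * t)

        # illustrative parameters only: A in s^-1, Ea in J/mol, t in seconds
        for T in (500.0, 520.0, 540.0):
            print(T, fraction_remaining(t=60.0, T=T, A=1e16, Ea=2.0e5))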

  4. Potential linkage for schizophrenia on chromosome 22q12-q13: A replication study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schwab, S.G.; Bondy, B.; Wildenauer, D.B.

    1995-10-09

    In an attempt to replicate a potential linkage on chromosome 22q12-q13.1 reported by Pulver et al., we have analyzed 4 microsatellite markers which span this chromosomal region, including the IL2RB locus, for linkage with schizophrenia in 30 families from Israel and Germany. Linkage analysis by pairwise lod score analysis as well as by multipoint analysis did not provide evidence for a single major gene locus. However, a lod score of Z_max = 0.612 was obtained for a dominant model of inheritance with the marker D22S304 at recombination fraction 0.2 by pairwise analysis. In addition, using a nonparametric method, sib-pair analysis, a P value of 0.068 corresponding to a lod score of 0.48 was obtained for this marker. This finding, together with those of Pulver et al., is suggestive of a genetic factor in this region, predisposing for schizophrenia in a subset of families. Further studies using nonparametric methods should be conducted in order to clarify this point. 32 refs., 1 fig., 4 tabs.
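
    A minimal two-point lod score sketch for phase-known, fully informative meioses is shown below; the recombinant counts are hypothetical and the parametric models used in the study (dominant inheritance, penetrance assumptions) are not reproduced.

        import numpy as np

        def lod_score(recombinants, meioses, theta):
            # Two-point lod score for phase-known data: log10 of L(theta) / L(theta = 0.5).
            r, n = recombinants, meioses
            return (r * np.log10(theta) + (n - r) * np.log10(1.0 - theta)) - n * np.log10(0.5)

        # hypothetical counts: 4 recombinants in 16 informative meioses, evaluated over a grid of theta
        for theta in (0.05, 0.1, 0.2, 0.3, 0.4):
            print(theta, round(lod_score(4, 16, theta), 3))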

  5. Decomposition techniques

    USGS Publications Warehouse

    Chao, T.T.; Sanzolone, R.F.

    1992-01-01

    Sample decomposition is a fundamental and integral step in the procedure of geochemical analysis. It is often the limiting factor to sample throughput, especially with the recent application of the fast and modern multi-element measurement instrumentation. The complexity of geological materials makes it necessary to choose the sample decomposition technique that is compatible with the specific objective of the analysis. When selecting a decomposition technique, consideration should be given to the chemical and mineralogical characteristics of the sample, elements to be determined, precision and accuracy requirements, sample throughput, technical capability of personnel, and time constraints. This paper addresses these concerns and discusses the attributes and limitations of many techniques of sample decomposition along with examples of their application to geochemical analysis. The chemical properties of reagents as to their function as decomposition agents are also reviewed. The section on acid dissolution techniques addresses the various inorganic acids that are used individually or in combination in both open and closed systems. Fluxes used in sample fusion are discussed. The promising microwave-oven technology and the emerging field of automation are also examined. A section on applications highlights the use of decomposition techniques for the determination of Au, platinum group elements (PGEs), Hg, U, hydride-forming elements, rare earth elements (REEs), and multi-elements in geological materials. Partial dissolution techniques used for geochemical exploration, which have been treated in detail elsewhere, are not discussed here; nor are fire-assaying for noble metals or decomposition techniques for X-ray fluorescence or nuclear methods. © 1992.

  6. Using Microwave Sample Decomposition in Undergraduate Analytical Chemistry

    NASA Astrophysics Data System (ADS)

    Griff Freeman, R.; McCurdy, David L.

    1998-08-01

    A shortcoming of many undergraduate classes in analytical chemistry is that students receive little exposure to sample preparation in chemical analysis. This paper reports the progress made in introducing microwave sample decomposition into several quantitative analysis experiments at Truman State University. Two experiments being performed in our current laboratory rotation include closed vessel microwave decomposition applied to the classical gravimetric determination of nickel and the determination of sodium in snack foods by flame atomic emission spectrometry. A third lab, using open-vessel microwave decomposition for the Kjeldahl nitrogen determination is now ready for student trial. Microwave decomposition reduces the time needed to complete these experiments and significantly increases the student awareness of the importance of sample preparation in quantitative chemical analyses, providing greater breadth and realism in the experiments.

  7. A Scalar Product Model for the Multidimensional Scaling of Choice

    ERIC Educational Resources Information Center

    Bechtel, Gordon G.; And Others

    1971-01-01

    Contains a solution for the multidimensional scaling of pairwise choice when individuals are represented as dimensional weights. The analysis supplies an exact least squares solution and estimates of group unscalability parameters. (DG)

  8. Development Of Polarimetric Decomposition Techniques For Indian Forest Resource Assessment Using Radar Imaging Satellite (Risat-1) Images

    NASA Astrophysics Data System (ADS)

    Sridhar, J.

    2015-12-01

    The focus of this work is to examine polarimetric decomposition techniques, primarily Pauli decomposition and Sphere Di-Plane Helix (SDH) decomposition, for forest resource assessment. The data processing methods adopted are pre-processing (geometric correction and radiometric calibration), speckle reduction, image decomposition, and image classification. Initially, to classify forest regions, unsupervised classification was applied to determine different unknown classes. It was observed that the K-means clustering method gave better results than the ISODATA method. Using the algorithm developed for Radar Tools, the code for the decomposition and classification techniques was implemented in Interactive Data Language (IDL) and applied to a RISAT-1 image of the Mysore-Mandya region of Karnataka, India. This region was chosen for studying forest vegetation and consists of agricultural lands, water, and hilly regions. Polarimetric SAR data possess a high potential for classification of the earth's surface. After applying the decomposition techniques, classification was done by selecting regions of interest, and post-classification the overall accuracy was observed to be higher in the SDH-decomposed image, as it operates on individual pixels on a coherent basis and utilises the complete intrinsic coherent nature of polarimetric SAR data, thereby making SDH decomposition particularly suited for analysis of high-resolution SAR data. The Pauli decomposition represents all the polarimetric information in a single SAR image; however, interpretation of the resulting image is difficult. The SDH decomposition technique seems to produce better results and interpretation compared to Pauli decomposition, but more quantification and further analysis are being done in this area of research. The comparison of polarimetric decomposition techniques and evolutionary classification techniques will be the scope of this work.
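
    A minimal sketch of the Pauli decomposition mentioned above, for a single pixel of a reciprocal scattering matrix, is given below; the complex amplitudes are hypothetical, and the SDH (Sphere Di-Plane Helix) decomposition and classification steps are not reproduced.

        import numpy as np

        def pauli_components(shh, shv, svv):
            # Pauli target vector for a reciprocal scattering matrix; intensities map to the usual RGB composite.
            a = (shh + svv) / np.sqrt(2.0)  # odd-bounce (surface) term   -> blue channel
            b = (shh - svv) / np.sqrt(2.0)  # even-bounce (double bounce) -> red channel
            c = np.sqrt(2.0) * shv          # cross-pol (volume) term     -> green channel
            return np.abs(a) ** 2, np.abs(b) ** 2, np.abs(c) ** 2

        # hypothetical single-pixel complex scattering amplitudes
        print(pauli_components(shh=1.0 + 0.2j, shv=0.1 - 0.05j, svv=0.8 - 0.1j))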

  9. Effects of anthropogenic heavy metal contamination on litter decomposition in streams - A meta-analysis.

    PubMed

    Ferreira, Verónica; Koricheva, Julia; Duarte, Sofia; Niyogi, Dev K; Guérold, François

    2016-03-01

    Many streams worldwide are affected by heavy metal contamination, mostly due to past and present mining activities. Here we present a meta-analysis of 38 studies (reporting 133 cases) published between 1978 and 2014 that reported the effects of heavy metal contamination on the decomposition of terrestrial litter in running waters. Overall, heavy metal contamination significantly inhibited litter decomposition. The effect was stronger for laboratory than for field studies, likely due to better control of confounding variables in the former, antagonistic interactions between metals and other environmental variables in the latter or differences in metal identity and concentration between studies. For laboratory studies, only copper + zinc mixtures significantly inhibited litter decomposition, while no significant effects were found for silver, aluminum, cadmium or zinc considered individually. For field studies, coal and metal mine drainage strongly inhibited litter decomposition, while drainage from motorways had no significant effects. The effect of coal mine drainage did not depend on drainage pH. Coal mine drainage negatively affected leaf litter decomposition independently of leaf litter identity; no significant effect was found for wood decomposition, but sample size was low. Considering metal mine drainage, arsenic mines had a stronger negative effect on leaf litter decomposition than gold or pyrite mines. Metal mine drainage significantly inhibited leaf litter decomposition driven by both microbes and invertebrates, independently of leaf litter identity; no significant effect was found for microbially driven decomposition, but sample size was low. Overall, mine drainage negatively affects leaf litter decomposition, likely through negative effects on invertebrates. Copyright © 2015 Elsevier Ltd. All rights reserved.

  10. Kinetic analysis of overlapping multistep thermal decomposition comprising exothermic and endothermic processes: thermolysis of ammonium dinitramide.

    PubMed

    Muravyev, Nikita V; Koga, Nobuyoshi; Meerov, Dmitry B; Pivkina, Alla N

    2017-01-25

    This study focused on kinetic modeling of a specific type of multistep heterogeneous reaction comprising exothermic and endothermic reaction steps, as exemplified by the practical kinetic analysis of the experimental kinetic curves for the thermal decomposition of molten ammonium dinitramide (ADN). It is known that the thermal decomposition of ADN occurs as a consecutive two step mass-loss process comprising the decomposition of ADN and subsequent evaporation/decomposition of in situ generated ammonium nitrate. These reaction steps provide exothermic and endothermic contributions, respectively, to the overall thermal effect. The overall reaction process was deconvoluted into two reaction steps using simultaneously recorded thermogravimetry and differential scanning calorimetry (TG-DSC) curves by considering the different physical meanings of the kinetic data derived from TG and DSC by P value analysis. The kinetic data thus separated into exothermic and endothermic reaction steps were kinetically characterized using kinetic computation methods including isoconversional method, combined kinetic analysis, and master plot method. The overall kinetic behavior was reproduced as the sum of the kinetic equations for each reaction step considering the contributions to the rate data derived from TG and DSC. During reproduction of the kinetic behavior, the kinetic parameters and contributions of each reaction step were optimized using kinetic deconvolution analysis. As a result, the thermal decomposition of ADN was successfully modeled as partially overlapping exothermic and endothermic reaction steps. The logic of the kinetic modeling was critically examined, and the practical usefulness of phenomenological modeling for the thermal decomposition of ADN was illustrated to demonstrate the validity of the methodology and its applicability to similar complex reaction processes.

  11. Beyond Principal Component Analysis: A Trilinear Decomposition Model and Least Squares Estimation.

    ERIC Educational Resources Information Center

    Pham, Tuan Dinh; Mocks, Joachim

    1992-01-01

    Sufficient conditions are derived for the consistency and asymptotic normality of the least squares estimator of a trilinear decomposition model for multiway data analysis. The limiting covariance matrix is computed. (Author/SLD)

  12. Automatic network coupling analysis for dynamical systems based on detailed kinetic models.

    PubMed

    Lebiedz, Dirk; Kammerer, Julia; Brandt-Pollmann, Ulrich

    2005-10-01

    We introduce a numerical complexity reduction method for the automatic identification and analysis of dynamic network decompositions in (bio)chemical kinetics based on error-controlled computation of a minimal model dimension represented by the number of (locally) active dynamical modes. Our algorithm exploits a generalized sensitivity analysis along state trajectories and subsequent singular value decomposition of sensitivity matrices for the identification of these dominant dynamical modes. It allows for a dynamic coupling analysis of (bio)chemical species in kinetic models that can be exploited for the piecewise computation of a minimal model on small time intervals and offers valuable functional insight into highly nonlinear reaction mechanisms and network dynamics. We present results for the identification of network decompositions in a simple oscillatory chemical reaction, time scale separation based model reduction in a Michaelis-Menten enzyme system and network decomposition of a detailed model for the oscillatory peroxidase-oxidase enzyme system.
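
    A minimal sketch of the mode-counting idea described above, a singular value decomposition of a local sensitivity matrix followed by thresholding of the spectrum, is shown below; the matrix and tolerance are hypothetical.

        import numpy as np

        def active_mode_count(sensitivity, rel_tol=1e-3):
            # Count locally active dynamical modes as singular values above a relative threshold.
            s = np.linalg.svd(sensitivity, compute_uv=False)
            return int(np.sum(s > rel_tol * s[0])), s

        # hypothetical 5x5 local sensitivity matrix of a kinetic model, built to have two dominant modes
        rng = np.random.default_rng(2)
        low_rank = rng.standard_normal((5, 2)) @ rng.standard_normal((2, 5))
        count, s = active_mode_count(low_rank + 1e-6 * rng.standard_normal((5, 5)))
        print(count, np.round(s, 4))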

  13. Physico-Geometrical Kinetics of Solid-State Reactions in an Undergraduate Thermal Analysis Laboratory

    ERIC Educational Resources Information Center

    Koga, Nobuyoshi; Goshi, Yuri; Yoshikawa, Masahiro; Tatsuoka, Tomoyuki

    2014-01-01

    An undergraduate kinetic experiment of the thermal decomposition of solids by microscopic observation and thermal analysis was developed by investigating a suitable reaction, applicable techniques of thermal analysis and microscopic observation, and a reliable kinetic calculation method. The thermal decomposition of sodium hydrogen carbonate is…

  14. Broadband continuous wave source localization via pair-wise, cochleagram processing

    NASA Astrophysics Data System (ADS)

    Nosal, Eva-Marie; Frazer, L. Neil

    2005-04-01

    A pair-wise processor has been developed for the passive localization of broadband continuous-wave underwater sources. The algorithm uses sparse hydrophone arrays and does not require previous knowledge of the source signature. It is applicable in multiple source situations. A spectrogram/cochleagram version of the algorithm has been developed in order to utilize higher frequencies at longer ranges where signal incoherence, and limited computational resources, preclude the use of full waveforms. Simulations demonstrating the robustness of the algorithm with respect to noise and environmental mismatch will be presented, together with initial results from the analysis of humpback whale song recorded at the Pacific Missile Range Facility off Kauai. [Work supported by MHPCC and ONR.]

  15. The Thermal Decomposition of Basic Copper(II) Sulfate.

    ERIC Educational Resources Information Center

    Tanaka, Haruhiko; Koga, Nobuyoshi

    1990-01-01

    Discussed is the preparation of synthetic brochantite from solution and a thermogravimetric-differential thermal analysis study of the thermal decomposition of this compound. Other analyses included are chemical analysis and IR spectroscopy. Experimental procedures and results are presented. (CW)

  16. Untargeted analysis of chromatographic data for green and fermented rooibos: Problem with size effect removal.

    PubMed

    Tobin, Jade; Walach, Jan; de Beer, Dalene; Williams, Paul J; Filzmoser, Peter; Walczak, Beata

    2017-11-24

    When analyzing chromatographic data, it is necessary to preprocess the signals properly before exploration and/or supervised modeling. To make chromatographic signals comparable, it is crucial to remove the scaling effect caused by differences in overall sample concentrations. One of the efficient methods of signal scaling is Probabilistic Quotient Normalization (PQN) [1]. However, it can be applied only to data for which the majority of features do not vary systematically among the studied classes of signals. When studying the influence of the traditional "fermentation" (oxidation) process on the concentration of 56 individual peaks detected in rooibos plant material, this assumption is not fulfilled. In this case, the only possible solution is the analysis of pairwise log-ratios, which are not influenced by the scaling constant. To estimate significant features, i.e., peaks differentiating the studied classes of samples (green and fermented rooibos plant material), we propose the application of rPLR (robust pair-wise log-ratios) as proposed by Walach et al. [2]. It allows for fast computation and identification of the significant features in terms of the original variables (peaks), which is problematic when working with the unfolded pair-wise log-ratios. As demonstrated, it can be applied to designed data sets and, in the case of contaminated data, it allows proper conclusions. Copyright © 2017 Elsevier B.V. All rights reserved.
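
    A minimal sketch of the two preprocessing ideas contrasted in this record, PQN scaling and pairwise log-ratios, is given below; the peak-area table is synthetic and the robust rPLR machinery of Walach et al. is not reproduced.

        import numpy as np

        def pqn(X, reference=None):
            # Probabilistic Quotient Normalization: divide each signal by the median quotient to a reference spectrum.
            X = np.asarray(X, float)
            ref = np.median(X, axis=0) if reference is None else reference
            quotients = X / ref
            return X / np.median(quotients, axis=1, keepdims=True)

        def pairwise_log_ratios(x):
            # All pairwise log-ratios of one signal's peak areas; invariant to an overall scaling constant.
            x = np.asarray(x, float)
            i, j = np.triu_indices(len(x), k=1)
            return np.log(x[i] / x[j])

        # hypothetical peak-area table: 4 samples x 6 chromatographic peaks, each row scaled by a "concentration"
        rng = np.random.default_rng(3)
        peaks = rng.uniform(1.0, 10.0, size=(4, 6)) * rng.uniform(0.5, 2.0, size=(4, 1))
        print(pqn(peaks).round(2))
        print(pairwise_log_ratios(peaks[0]).round(2))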

  17. Electrochemical and Infrared Absorption Spectroscopy Detection of SF6 Decomposition Products

    PubMed Central

    Dong, Ming; Ren, Ming; Ye, Rixin

    2017-01-01

    Sulfur hexafluoride (SF6) gas-insulated electrical equipment is widely used in high-voltage (HV) and extra-high-voltage (EHV) power systems. Partial discharge (PD) and local heating can occur in the electrical equipment because of insulation faults, which results in SF6 decomposition and ultimately generates several types of decomposition products. These SF6 decomposition products can be qualitatively and quantitatively detected with relevant detection methods, and such detection contributes to diagnosing the internal faults and evaluating the security risks of the equipment. At present, multiple detection methods exist for analyzing the SF6 decomposition products, and electrochemical sensing (ES) and infrared (IR) spectroscopy are well suited for application in online detection. In this study, the combination of ES with IR spectroscopy is used to detect SF6 gas decomposition. First, the characteristics of these two detection methods are studied, and the data analysis matrix is established. Then, a qualitative and quantitative analysis ES-IR model is established by adopting a two-step approach. A SF6 decomposition detector is designed and manufactured by combining an electrochemical sensor and IR spectroscopy technology. The detector is used to detect SF6 gas decomposition and is verified to reliably and accurately detect the gas components and concentrations. PMID:29140268

  18. Additional considerations are required when preparing a protocol for a systematic review with multiple interventions.

    PubMed

    Chaimani, Anna; Caldwell, Deborah M; Li, Tianjing; Higgins, Julian P T; Salanti, Georgia

    2017-03-01

    The number of systematic reviews that aim to compare multiple interventions using network meta-analysis is increasing. In this study, we highlight aspects of a standard systematic review protocol that may need modification when multiple interventions are to be compared. We take the protocol format suggested by Cochrane for a standard systematic review as our reference and compare the considerations for a pairwise review with those required for a valid comparison of multiple interventions. We suggest new sections for protocols of systematic reviews including network meta-analyses with a focus on how to evaluate their assumptions. We provide example text from published protocols to exemplify the considerations. Standard systematic review protocols for pairwise meta-analyses need extensions to accommodate the increased complexity of network meta-analysis. Our suggested modifications are widely applicable to both Cochrane and non-Cochrane systematic reviews involving network meta-analyses. Copyright © 2017 Elsevier Inc. All rights reserved.

  19. Intrasubject multimodal groupwise registration with the conditional template entropy.

    PubMed

    Polfliet, Mathias; Klein, Stefan; Huizinga, Wyke; Paulides, Margarethus M; Niessen, Wiro J; Vandemeulebroucke, Jef

    2018-05-01

    Image registration is an important task in medical image analysis. Whereas most methods are designed for the registration of two images (pairwise registration), there is an increasing interest in simultaneously aligning more than two images using groupwise registration. Multimodal registration in a groupwise setting remains difficult, due to the lack of generally applicable similarity metrics. In this work, a novel similarity metric for such groupwise registration problems is proposed. The metric calculates the sum of the conditional entropy between each image in the group and a representative template image constructed iteratively using principal component analysis. The proposed metric is validated in extensive experiments on synthetic and intrasubject clinical image data. These experiments showed equivalent or improved registration accuracy compared to other state-of-the-art (dis)similarity metrics and improved transformation consistency compared to pairwise mutual information. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.

  20. Microbial genomics, transcriptomics and proteomics: new discoveries in decomposition research using complementary methods.

    PubMed

    Baldrian, Petr; López-Mondéjar, Rubén

    2014-02-01

    Molecular methods for the analysis of biomolecules have undergone rapid technological development in the last decade. The advent of next-generation sequencing methods and improvements in instrumental resolution enabled the analysis of complex transcriptome, proteome and metabolome data, as well as a detailed annotation of microbial genomes. The mechanisms of decomposition by model fungi have been described in unprecedented detail by the combination of genome sequencing, transcriptomics and proteomics. The increasing number of available genomes for fungi and bacteria shows that the genetic potential for decomposition of organic matter is widespread among taxonomically diverse microbial taxa, while expression studies document the importance of the regulation of expression in decomposition efficiency. Importantly, high-throughput methods of nucleic acid analysis used for the analysis of metagenomes and metatranscriptomes indicate the high diversity of decomposer communities in natural habitats and their taxonomic composition. Today, the metaproteomics of natural habitats is of interest. In combination with advanced analytical techniques to explore the products of decomposition and the accumulation of information on the genomes of environmentally relevant microorganisms, advanced methods in microbial ecophysiology should increase our understanding of the complex processes of organic matter transformation.

  1. Laser Ignition of Nitramine Composite Propellants and Crack Propagation and Branching in Burning Solid Propellants

    DTIC Science & Technology

    1987-10-01

    Proceedings of the 16th JANNAF Combustion Meeting, Sept. 1979, Vol. II, pp. 13-34. 44. Schroeder, M. A., "Critical Analysis of Nitramine Decomposition..." Proceedings of the 19th JANNAF Combustion Meeting, Oct. 1982. 47. Schroeder, M. A., "Critical Analysis of Nitramine Decomposition Data: Activation..." ...the surface of the propellant. This is consistent with the decomposition mechanism considered by Boggs [48] and Schroeder [43]. They concluded that the

  2. Comorbidities in the diseasome are more apparent than real: What Bayesian filtering reveals about the comorbidities of depression

    PubMed Central

    Bolgar, Bence; Deakin, Bill

    2017-01-01

    Comorbidity patterns have become a major source of information to explore shared mechanisms of pathogenesis between disorders. In hypothesis-free exploration of comorbid conditions, disease-disease networks are usually identified by pairwise methods. However, interpretation of the results is hindered by several confounders. In particular, a very large number of pairwise associations can arise indirectly through other comorbidity associations, and they increase exponentially with the increasing breadth of the investigated diseases. To investigate and filter this effect, we computed and compared pairwise approaches with a systems-based method, which constructs a sparse Bayesian direct multimorbidity map (BDMM) by systematically eliminating disease-mediated comorbidity relations. Additionally, focusing on depression-related parts of the BDMM, we evaluated correspondence with results from logistic regression, text-mining and molecular-level measures for comorbidities such as genetic overlap and the interactome-based association score. We used a subset of the UK Biobank Resource, a cross-sectional dataset including 247 diseases and 117,392 participants who filled out a detailed questionnaire about mental health. The sparse comorbidity map confirmed that depressed patients frequently suffer from both psychiatric and somatic comorbid disorders. Notably, anxiety and obesity show strong and direct relationships with depression. The BDMM identified further directly comorbid somatic disorders, e.g., irritable bowel syndrome, fibromyalgia, or migraine. Using the subnetwork of depression and metabolic disorders for functional analysis, the interactome-based system-level score showed the best agreement with the sparse disease network. This indicates that these epidemiologically strong disease-disease relations have improved correspondence with expected molecular-level mechanisms. The substantially smaller number of comorbidity relations in the BDMM compared with pairwise methods implies that biologically meaningful comorbid relations may be less frequent than earlier pairwise methods suggested. The computed interactive comprehensive multimorbidity views over the diseasome are available on the web at Co=MorNet: bioinformatics.mit.bme.hu/UKBNetworks. PMID:28644851

  3. Comparison of type 2 diabetes mellitus incidence in different phases of hepatitis B virus infection: A meta-analysis.

    PubMed

    Shen, Yi; Zhang, Sheng; Wang, Xulin; Wang, Yuanyuan; Zhang, Jian; Qin, Gang; Li, Wenchao; Ding, Kun; Zhang, Lei; Liang, Feng

    2017-10-01

    Because whether hepatitis B virus infection increases the risk of type 2 diabetes mellitus has been a controversial topic, pair-wise and network meta-analyses of published literature were carried out to accurately evaluate the association between different phases of hepatitis B virus infection and the risk of type 2 diabetes mellitus. A comprehensive literature retrieval was conducted from the PubMed, Embase, Cochrane Library and Chinese Database to identify epidemiological studies on the association between hepatitis B virus infection and the risk of type 2 diabetes mellitus that were published from 1999 to 2015. A pair-wise meta-analysis of direct evidence was performed to estimate the pooled odds ratios and 95% confidence intervals. A network meta-analysis was conducted, including the construction of a network plot, inconsistency plot, predictive interval plot, comparison-adjusted funnel plot and rank diagram, to graphically link the direct and indirect comparisons between different hepatitis B virus infective phases. Eighteen publications (n=113,639) describing 32 studies were included in this meta-analysis. In the pair-wise meta-analysis, the pooled odds ratio for type 2 diabetes mellitus in chronic hepatitis B cirrhosis patients was 1.76 (95% confidence interval: 1.44-2.14) when compared with non-cirrhotic chronic hepatitis B patients. In the network meta-analysis, six comparisons of four hepatitis B virus infectious states indicated the following descending order for the risk of type 2 diabetes mellitus: hepatitis B cirrhosis patients, non-cirrhotic chronic hepatitis B patients, hepatitis B virus carriers and non-hepatitis B virus controls. This study suggests that hepatitis B virus infection is not an independent risk factor for type 2 diabetes mellitus, but the development of cirrhosis may increase the incidence of type 2 diabetes mellitus. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
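
    For readers unfamiliar with the pair-wise pooling step mentioned above, a minimal inverse-variance (fixed-effect) combination of study-level odds ratios can be sketched as follows; the function name and the numerical inputs are illustrative and are not taken from the included studies.

        import numpy as np

        def pooled_odds_ratio(odds_ratios, ci_lower, ci_upper):
            """Inverse-variance fixed-effect pooling of odds ratios given their 95% CIs."""
            log_or = np.log(odds_ratios)
            se = (np.log(ci_upper) - np.log(ci_lower)) / (2 * 1.96)   # SE of each log-OR from its CI
            w = 1.0 / se ** 2                                         # inverse-variance weights
            pooled = np.sum(w * log_or) / np.sum(w)
            pooled_se = np.sqrt(1.0 / np.sum(w))
            ci = np.exp([pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se])
            return np.exp(pooled), ci

        # made-up study-level odds ratios and confidence limits
        or_hat, ci = pooled_odds_ratio([1.5, 1.9, 1.6], [1.1, 1.3, 1.0], [2.0, 2.8, 2.6])
        print(or_hat, ci)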

  4. Plus Disease in Retinopathy of Prematurity: Improving Diagnosis by Ranking Disease Severity and Using Quantitative Image Analysis.

    PubMed

    Kalpathy-Cramer, Jayashree; Campbell, J Peter; Erdogmus, Deniz; Tian, Peng; Kedarisetti, Dharanish; Moleta, Chace; Reynolds, James D; Hutcheson, Kelly; Shapiro, Michael J; Repka, Michael X; Ferrone, Philip; Drenser, Kimberly; Horowitz, Jason; Sonmez, Kemal; Swan, Ryan; Ostmo, Susan; Jonas, Karyn E; Chan, R V Paul; Chiang, Michael F

    2016-11-01

    To determine expert agreement on relative retinopathy of prematurity (ROP) disease severity and whether computer-based image analysis can model relative disease severity, and to propose consideration of a more continuous severity score for ROP. We developed 2 databases of clinical images of varying disease severity (100 images and 34 images) as part of the Imaging and Informatics in ROP (i-ROP) cohort study and recruited expert physician, nonexpert physician, and nonphysician graders to classify and perform pairwise comparisons on both databases. Six participating expert ROP clinician-scientists, each with a minimum of 10 years of clinical ROP experience and 5 ROP publications, and 5 image graders (3 physicians and 2 nonphysician graders) who analyzed images that were obtained during routine ROP screening in neonatal intensive care units. Images in both databases were ranked by average disease classification (classification ranking), by pairwise comparison using the Elo rating method (comparison ranking), and by correlation with the i-ROP computer-based image analysis system. Interexpert agreement (weighted κ statistic) compared with the correlation coefficient (CC) between experts on pairwise comparisons and correlation between expert rankings and computer-based image analysis modeling. There was variable interexpert agreement on diagnostic classification of disease (plus, preplus, or normal) among the 6 experts (mean weighted κ, 0.27; range, 0.06-0.63), but good correlation between experts on comparison ranking of disease severity (mean CC, 0.84; range, 0.74-0.93) on the set of 34 images. Comparison ranking provided a severity ranking that was in good agreement with ranking obtained by classification ranking (CC, 0.92). Comparison ranking on the larger dataset by both expert and nonexpert graders demonstrated good correlation (mean CC, 0.97; range, 0.95-0.98). The i-ROP system was able to model this continuous severity with good correlation (CC, 0.86). Experts diagnose plus disease on a continuum, with poor absolute agreement on classification but good relative agreement on disease severity. These results suggest that the use of pairwise rankings and a continuous severity score, such as that provided by the i-ROP system, may improve agreement on disease severity in the future. Copyright © 2016 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.

  5. Bilinearity in Spatiotemporal Integration of Synaptic Inputs

    PubMed Central

    Li, Songting; Liu, Nan; Zhang, Xiao-hui; Zhou, Douglas; Cai, David

    2014-01-01

    Neurons process information via integration of synaptic inputs from dendrites. Many experimental results demonstrate that dendritic integration could be highly nonlinear, yet few theoretical analyses have been performed to obtain a precise quantitative characterization analytically. Based on asymptotic analysis of a two-compartment passive cable model, given a pair of time-dependent synaptic conductance inputs, we derive a bilinear spatiotemporal dendritic integration rule. The summed somatic potential can be well approximated by the linear summation of the two postsynaptic potentials elicited separately, plus a third, bilinear term proportional to their product, scaled by a proportionality coefficient. The rule is valid for a pair of synaptic inputs of all types, including excitation-inhibition, excitation-excitation, and inhibition-inhibition. In addition, the rule is valid during the whole dendritic integration process for a pair of synaptic inputs with arbitrary input time differences and input locations. This proportionality coefficient is demonstrated to be nearly independent of the input strengths but dependent on the input times and locations. This rule is then verified through simulation of a realistic pyramidal neuron model and in electrophysiological experiments of rat hippocampal CA1 neurons. The rule is further generalized to describe the spatiotemporal dendritic integration of multiple excitatory and inhibitory synaptic inputs. The integration of multiple inputs can be decomposed into the sum of all possible pairwise integrations, where each paired integration obeys the bilinear rule. This decomposition leads to a graph representation of dendritic integration, which can be viewed as functionally sparse. PMID:25521832

  6. Multi-Centrality Graph Spectral Decompositions and Their Application to Cyber Intrusion Detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Pin-Yu; Choudhury, Sutanay; Hero, Alfred

    Many modern datasets can be represented as graphs and hence spectral decompositions such as graph principal component analysis (PCA) can be useful. Distinct from previous graph decomposition approaches based on subspace projection of a single topological feature, e.g., the centered graph adjacency matrix (graph Laplacian), we propose spectral decomposition approaches to graph PCA and graph dictionary learning that integrate multiple features, including graph walk statistics, centrality measures and graph distances to reference nodes. In this paper we propose a new PCA method for single graph analysis, called multi-centrality graph PCA (MC-GPCA), and a new dictionary learning method for ensembles of graphs, called multi-centrality graph dictionary learning (MC-GDL), both based on spectral decomposition of multi-centrality matrices. As an application to cyber intrusion detection, MC-GPCA can be an effective indicator of anomalous connectivity pattern and MC-GDL can provide discriminative basis for attack classification.
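
    As a rough illustration of combining several node centralities before a spectral decomposition, the sketch below stacks a few NetworkX centrality measures into a feature matrix and applies ordinary PCA. It is not the published MC-GPCA algorithm (which also integrates graph walk statistics and distances to reference nodes); the particular centralities and the test graph are assumptions.

        import numpy as np
        import networkx as nx

        def multi_centrality_pca(G, n_components=2):
            """Stack per-node centrality measures and project onto the top principal components."""
            nodes = list(G.nodes())
            feats = np.column_stack([
                [nx.degree_centrality(G)[n] for n in nodes],
                [nx.closeness_centrality(G)[n] for n in nodes],
                [nx.betweenness_centrality(G)[n] for n in nodes],
                [nx.pagerank(G)[n] for n in nodes],
            ])
            X = feats - feats.mean(axis=0)                 # center each centrality column
            cov = X.T @ X / (len(nodes) - 1)               # covariance of the multi-centrality features
            vals, vecs = np.linalg.eigh(cov)
            order = np.argsort(vals)[::-1][:n_components]  # leading eigenvectors
            return X @ vecs[:, order]                      # node scores in the PCA subspace

        scores = multi_centrality_pca(nx.karate_club_graph())
        print(scores.shape)                                # (34, 2)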

  7. Nonlinear mode decomposition: A noise-robust, adaptive decomposition method

    NASA Astrophysics Data System (ADS)

    Iatsenko, Dmytro; McClintock, Peter V. E.; Stefanovska, Aneta

    2015-09-01

    The signals emanating from complex systems are usually composed of a mixture of different oscillations which, for a reliable analysis, should be separated from each other and from the inevitable background of noise. Here we introduce an adaptive decomposition tool—nonlinear mode decomposition (NMD)—which decomposes a given signal into a set of physically meaningful oscillations for any waveform, simultaneously removing the noise. NMD is based on the powerful combination of time-frequency analysis techniques—which, together with the adaptive choice of their parameters, make it extremely noise robust—and surrogate data tests used to identify interdependent oscillations and to distinguish deterministic from random activity. We illustrate the application of NMD to both simulated and real signals and demonstrate its qualitative and quantitative superiority over other approaches, such as (ensemble) empirical mode decomposition, Karhunen-Loève expansion, and independent component analysis. We point out that NMD is likely to be applicable and useful in many different areas of research, such as geophysics, finance, and the life sciences. The necessary MATLAB codes for running NMD are freely available for download.

  8. Recent advances in the UltraScan SOlution MOdeller (US-SOMO) hydrodynamic and small-angle scattering data analysis and simulation suite.

    PubMed

    Brookes, Emre; Rocco, Mattia

    2018-03-28

    The UltraScan SOlution MOdeller (US-SOMO) is a comprehensive, public domain, open-source suite of computer programs centred on hydrodynamic modelling and small-angle scattering (SAS) data analysis and simulation. We describe here the advances that have been implemented since its last official release (#3087, 2017), which are available from release #3141 for Windows, Linux and Mac operating systems. A major effort has been the transition from the legacy Qt3 cross platform software development and user interface library to the modern Qt5 release. Apart from improved graphical support, this has allowed the direct implementation of the newest, almost two-orders of magnitude faster version of the ZENO hydrodynamic computation algorithm for all operating systems. Coupled with the SoMo-generated bead models with overlaps, ZENO provides the most accurate translational friction computations from atomic-level structures available (Rocco and Byron Eur Biophys J 44:417-431, 2015a), with computational times comparable with or faster than those of other methods. In addition, it has allowed us to introduce the direct representation of each atom in a structure as a (hydrated) bead, opening interesting new modelling possibilities. In the small-angle scattering (SAS) part of the suite, an indirect Fourier transform Bayesian algorithm has been implemented for the computation of the pairwise distance distribution function from SAS data. Finally, the SAS HPLC module, recently upgraded with improved baseline correction and Gaussian decomposition of not baseline-resolved peaks and with advanced statistical evaluation tools (Brookes et al. J Appl Cryst 49:1827-1841, 2016), now allows automatic top-peak frame selection and averaging.

  9. [Relationships between decomposition rate of leaf litter and initial quality across the alpine timberline ecotone in Western Sichuan, China].

    PubMed

    Yang, Lin; Deng, Chang-chun; Chen, Ya-mei; He, Run-lian; Zhang, Jian; Liu, Yang

    2015-12-01

    The relationships between litter decomposition rate and the initial litter quality of 14 representative plants in the alpine forest ecotone of western Sichuan were investigated in this paper. The decomposition rate k of the litter ranged from 0.16 to 1.70. Woody leaf litter and moss litter decomposed slowest, shrub litter decomposed somewhat faster, and herbaceous litter decomposed fastest among all plant forms. There were significant linear regression relationships between the litter decomposition rate and the N content, lignin content, phenolics content, C/N, C/P and lignin/N. Path analysis showed that lignin/N and hemicellulose content together could explain 78.4% of the variation in the litter decomposition rate (k). Lignin/N alone could explain 69.5% of the variation in k, and the direct path coefficient of lignin/N on k was -0.913. Principal component analysis (PCA) showed that the contribution rate of the first sort axis to k and the decomposition time (t) reached 99.2%. Significant positive correlations existed between lignin/N, lignin content, C/N, C/P and the first sort axis, and the closest relationship existed between lignin/N and the first sort axis (r = 0.923). Lignin/N was the key quality factor affecting plant litter decomposition rate across the alpine timberline ecotone; the higher the initial lignin/N, the lower the decomposition rate of the leaf litter.
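
    Decomposition constants such as the k values quoted above are conventionally obtained by fitting the single negative-exponential (Olson) model, mass remaining = exp(-k t), to litterbag data. A minimal sketch with made-up measurements (the data points and starting value are illustrative only):

        import numpy as np
        from scipy.optimize import curve_fit

        def olson(t, k):
            """Single negative-exponential decomposition model: fraction of mass remaining at time t."""
            return np.exp(-k * t)

        # illustrative litterbag data: time in years, fraction of initial mass remaining
        t = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
        mass_remaining = np.array([1.00, 0.78, 0.62, 0.50, 0.41])

        (k_hat,), _ = curve_fit(olson, t, mass_remaining, p0=[0.5])
        print(f"estimated k = {k_hat:.2f} per year")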

  10. A Dual Super-Element Domain Decomposition Approach for Parallel Nonlinear Finite Element Analysis

    NASA Astrophysics Data System (ADS)

    Jokhio, G. A.; Izzuddin, B. A.

    2015-05-01

    This article presents a new domain decomposition method for nonlinear finite element analysis introducing the concept of dual partition super-elements. The method extends ideas from the displacement frame method and is ideally suited for parallel nonlinear static/dynamic analysis of structural systems. In the new method, domain decomposition is realized by replacing one or more subdomains in a "parent system," each with a placeholder super-element, where the subdomains are processed separately as "child partitions," each wrapped by a dual super-element along the partition boundary. The analysis of the overall system, including the satisfaction of equilibrium and compatibility at all partition boundaries, is realized through direct communication between all pairs of placeholder and dual super-elements. The proposed method has particular advantages for matrix solution methods based on the frontal scheme, and can be readily implemented for existing finite element analysis programs to achieve parallelization on distributed memory systems with minimal intervention, thus overcoming memory bottlenecks typically faced in the analysis of large-scale problems. Several examples are presented in this article which demonstrate the computational benefits of the proposed parallel domain decomposition approach and its applicability to the nonlinear structural analysis of realistic structural systems.

  11. Dimensions of landscape preferences from pairwise comparisons

    Treesearch

    F. González Bernaldez; F. Parra

    1979-01-01

    Analysis of landscape preferences allows the detection of major dimensions such as: (1) the opposition between "natural" and "humanized" (comprising features like vegetation cover, cultivation, pattern of landscape elements, artifacts, excavations, etc.); (2) the polarity "precision/ambiguity" (involving the opposition between predominance of straight, vertical...

  12. Prediction of Spatiotemporal Patterns of Neural Activity from Pairwise Correlations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marre, O.; El Boustani, S.; Fregnac, Y.

    We designed a model-based analysis to predict the occurrence of population patterns in distributed spiking activity. Using a maximum entropy principle with a Markovian assumption, we obtain a model that accounts for both spatial and temporal pairwise correlations among neurons. This model is tested on data generated with a Glauber spin-glass system and is shown to correctly predict the occurrence probabilities of spatiotemporal patterns significantly better than Ising models only based on spatial correlations. This increase of predictability was also observed on experimental data recorded in parietal cortex during slow-wave sleep. This approach can also be used to generate surrogates that reproduce the spatial and temporal correlations of a given data set.

  13. Market Competitiveness Evaluation of Mechanical Equipment with a Pairwise Comparisons Hierarchical Model.

    PubMed

    Hou, Fujun

    2016-01-01

    This paper provides a description of how market competitiveness evaluations concerning mechanical equipment can be made in the context of multi-criteria decision environments. It is assumed that, when evaluating market competitiveness, there is a limited number of candidates with the required qualifications, and that the alternatives will be pairwise compared on a ratio scale. The qualifications are depicted as criteria in a hierarchical structure. A hierarchical decision model called PCbHDM was used in this study based on an analysis of its desirable traits. An illustration and comparison show that the PCbHDM provides a convenient and effective tool for evaluating the market competitiveness of mechanical equipment. Researchers and practitioners might use the findings of this paper when applying PCbHDM.

  14. Relationship between the Decomposition Process of Coarse Woody Debris and Fungal Community Structure as Detected by High-Throughput Sequencing in a Deciduous Broad-Leaved Forest in Japan

    PubMed Central

    Yamashita, Satoshi; Masuya, Hayato; Abe, Shin; Masaki, Takashi; Okabe, Kimiko

    2015-01-01

    We examined the relationship between the community structure of wood-decaying fungi, detected by high-throughput sequencing, and the decomposition rate using 13 years of data from a forest dynamics plot. For molecular analysis and wood density measurements, drill dust samples were collected from logs and stumps of Fagus and Quercus in the plot. Regression using a negative exponential model between wood density and time since death revealed that the decomposition rate of Fagus was greater than that of Quercus. The residual between the expected value obtained from the regression curve and the observed wood density was used as a decomposition rate index. Principal component analysis showed that the fungal community compositions of both Fagus and Quercus changed with time since death. Principal component analysis axis scores were used as an index of fungal community composition. A structural equation model for each wood genus was used to assess the effect of fungal community structure traits on the decomposition rate and how the fungal community structure was determined by the traits of coarse woody debris. Results of the structural equation model suggested that the decomposition rate of Fagus was affected by two fungal community composition components: one that was affected by time since death and another that was not affected by the traits of coarse woody debris. In contrast, the decomposition rate of Quercus was not affected by coarse woody debris traits or fungal community structure. These findings suggest that, in the case of Fagus coarse woody debris, the fungal community structure is related to the decomposition process of its host substrate. Because fungal community structure is affected partly by the decay stage and wood density of its substrate, these factors influence each other. Further research on interactive effects is needed to improve our understanding of the relationship between fungal community structure and the woody debris decomposition process. PMID:26110605

  15. Parallel processing methods for space based power systems

    NASA Technical Reports Server (NTRS)

    Berry, F. C.

    1993-01-01

    This report presents a method for doing load-flow analysis of a power system by using a decomposition approach. The power system for the Space Shuttle is used as a basis to build a model for the load-flow analysis. To test the decomposition method for doing load-flow analysis, simulations were performed on power systems of 16, 25, 34, 43, 52, 61, 70, and 79 nodes. Each of the power systems was divided into subsystems and simulated under steady-state conditions. The results from these tests have been found to be as accurate as tests performed using a standard serial simulator. The division of the power systems into different subsystems was done by assigning a processor to each area. There were 13 transputers available; therefore, up to 13 different subsystems could be simulated at the same time. This report presents preliminary results for a load-flow analysis using a decomposition principle. The report shows that the decomposition algorithm for load-flow analysis is well suited for parallel processing and provides increases in the speed of execution.

  16. Differential Item Functioning Detection across Two Methods of Defining Group Comparisons: Pairwise and Composite Group Comparisons

    ERIC Educational Resources Information Center

    Sari, Halil Ibrahim; Huggins, Anne Corinne

    2015-01-01

    This study compares two methods of defining groups for the detection of differential item functioning (DIF): (a) pairwise comparisons and (b) composite group comparisons. We aim to emphasize and empirically support the notion that the choice of pairwise versus composite group definitions in DIF is a reflection of how one defines fairness in DIF…

  17. Metabolic network prediction through pairwise rational kernels.

    PubMed

    Roche-Lima, Abiel; Domaratzki, Michael; Fristensky, Brian

    2014-09-26

    Metabolic networks are represented by the set of metabolic pathways. Metabolic pathways are a series of biochemical reactions, in which the product (output) from one reaction serves as the substrate (input) to another reaction. Many pathways remain incompletely characterized. One of the major challenges of computational biology is to obtain better models of metabolic pathways. Existing models are dependent on the annotation of the genes. This propagates error accumulation when the pathways are predicted by incorrectly annotated genes. Pairwise classification methods are supervised learning methods used to classify new pairs of entities. Some of these classification methods, e.g., Pairwise Support Vector Machines (SVMs), use pairwise kernels. Pairwise kernels describe similarity measures between two pairs of entities. Using pairwise kernels to handle sequence data requires long processing times and large storage. Rational kernels are kernels based on weighted finite-state transducers that represent similarity measures between sequences or automata. They have been effectively used in problems that handle large amounts of sequence information such as protein essentiality, natural language processing and machine translations. We create a new family of pairwise kernels using weighted finite-state transducers (called Pairwise Rational Kernel (PRK)) to predict metabolic pathways from a variety of biological data. PRKs take advantage of the simpler representations and faster algorithms of transducers. Because raw sequence data can be used, the predictor model avoids the errors introduced by incorrect gene annotations. We then developed several experiments with PRKs and Pairwise SVM to validate our methods using the metabolic network of Saccharomyces cerevisiae. As a result, when PRKs are used, our method executes faster in comparison with other pairwise kernels. Also, when we use PRKs combined with other simple kernels that include evolutionary information, the accuracy values have been improved, while maintaining lower construction and execution times. The power of using kernels is that almost any sort of data can be represented using kernels. Therefore, completely disparate types of data can be combined to add power to kernel-based machine learning methods. When we compared our proposal using PRKs with other similar kernels, the execution times were decreased, with no compromise of accuracy. We also proved that by combining PRKs with other kernels that include evolutionary information, the accuracy can also be improved. As our proposal can use any type of sequence data, genes do not need to be properly annotated, avoiding error accumulation due to incorrect previous annotations.
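
    The PRKs themselves are built from weighted finite-state transducers, which is beyond a short example, but the generic idea of a pairwise kernel assembled from a base kernel on single entities can be sketched as follows; the k-mer counting base kernel and the toy sequences are assumptions, not the kernels used in the paper.

        from collections import Counter

        def kmer_kernel(s1, s2, k=2):
            """Toy base kernel: inner product of k-mer count vectors of two sequences."""
            c1 = Counter(s1[i:i + k] for i in range(len(s1) - k + 1))
            c2 = Counter(s2[i:i + k] for i in range(len(s2) - k + 1))
            return sum(c1[w] * c2[w] for w in c1)

        def pairwise_kernel(pair_a, pair_b, base=kmer_kernel):
            """Symmetrised tensor-product pairwise kernel K((a1, a2), (b1, b2))."""
            (a1, a2), (b1, b2) = pair_a, pair_b
            return base(a1, b1) * base(a2, b2) + base(a1, b2) * base(a2, b1)

        # similarity between two candidate interacting pairs (toy sequences)
        print(pairwise_kernel(("MKTAYIAK", "MKLVINGK"), ("MKTAYLAK", "MKIVVNGK")))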

  18. When things don't add up: quantifying impacts of multiple stressors from individual metabolism to ecosystem processing.

    PubMed

    Galic, Nika; Sullivan, Lauren L; Grimm, Volker; Forbes, Valery E

    2018-04-01

    Ecosystems are exposed to multiple stressors which can compromise functioning and service delivery. These stressors often co-occur and interact in different ways which are not yet fully understood. Here, we applied a population model representing a freshwater amphipod feeding on leaf litter in forested streams. We simulated impacts of hypothetical stressors, individually and in pairwise combinations that target the individuals' feeding, maintenance, growth and reproduction. Impacts were quantified by examining responses at three levels of biological organisation: individual-level body sizes and cumulative reproduction, population-level abundance and biomass and ecosystem-level leaf litter decomposition. Interactive effects of multiple stressors at the individual level were mostly antagonistic, that is, less negative than expected. Most population- and ecosystem-level responses to multiple stressors were stronger than expected from an additive model, that is, synergistic. Our results suggest that across levels of biological organisation responses to multiple stressors are rarely only additive. We suggest methods for efficiently quantifying impacts of multiple stressors at different levels of biological organisation. © 2018 John Wiley & Sons Ltd/CNRS.

  19. Stabilization of the Thermal Decomposition of Poly(Propylene Carbonate) Through Copper Ion Incorporation and Use in Self-Patterning

    NASA Astrophysics Data System (ADS)

    Spencer, Todd J.; Chen, Yu-Chun; Saha, Rajarshi; Kohl, Paul A.

    2011-06-01

    Incorporation of copper ions into poly(propylene carbonate) (PPC) films cast from γ-butyrolactone (GBL), trichloroethylene (TCE) or methylene chloride (MeCl) solutions containing a photo-acid generator is shown to stabilize the PPC from thermal decomposition. Copper ions were introduced into the PPC mixtures by bringing the polymer mixture into contact with copper metal. The metal was oxidized and dissolved into the PPC mixture. The dissolved copper interferes with the decomposition mechanism of PPC, raising its decomposition temperature. Thermogravimetric analysis shows that copper ions make PPC more stable by up to 50°C. Spectroscopic analysis indicates that copper ions may stabilize terminal carboxylic acid groups, inhibiting PPC decomposition. The change in thermal stability based on PPC exposure to patterned copper substrates was used to provide a self-aligned patterning method for PPC on copper traces without the need for an additional photopatterning registration step. Thermal decomposition of PPC is then used to create air isolation regions around the copper traces. The spatial resolution of the self-patterning PPC process is limited by the lateral diffusion of the copper ions within the PPC. The concentration profiles of copper within the PPC, patterning resolution, and temperature effects on the PPC decomposition have been studied.

  20. Tools for Protecting the Privacy of Specific Individuals in Video

    NASA Astrophysics Data System (ADS)

    Chen, Datong; Chang, Yi; Yan, Rong; Yang, Jie

    2007-12-01

    This paper presents a system for protecting the privacy of specific individuals in video recordings. We address the following two problems: automatic people identification with limited labeled data, and human body obscuring with preserved structure and motion information. In order to address the first problem, we propose a new discriminative learning algorithm to improve people identification accuracy using limited training data labeled from the original video and imperfect pairwise constraints labeled from face obscured video data. We employ a robust face detection and tracking algorithm to obscure human faces in the video. Our experiments in a nursing home environment show that the system can obtain a high accuracy of people identification using limited labeled data and noisy pairwise constraints. The study result indicates that human subjects can perform reasonably well in labeling pairwise constraints with the face masked data. For the second problem, we propose a novel method of body obscuring, which removes the appearance information of the people while preserving rich structure and motion information. The proposed approach provides a way to minimize the risk of exposing the identities of the protected people while maximizing the use of the captured data for activity/behavior analysis.

  1. Measuring pair-wise molecular interactions in a complex mixture

    NASA Astrophysics Data System (ADS)

    Chakraborty, Krishnendu; Varma, Manoj M.; Venkatapathi, Murugesan

    2016-03-01

    Complex biological samples such as serum contain thousands of proteins and other molecules spanning up to 13 orders of magnitude in concentration. Present measurement techniques do not permit the analysis of all pair-wise interactions between the components of such a complex mixture to a given target molecule. In this work we explore the use of nanoparticle tags which encode the identity of the molecule to obtain the statistical distribution of pair-wise interactions using their Localized Surface Plasmon Resonance (LSPR) signals. The nanoparticle tags are chosen such that the binding between two molecules conjugated to the respective nanoparticle tags can be recognized by the coupling of their LSPR signals. We performed numerical simulations with the discrete dipole approximation (DDA) to investigate this approach using a reduced system consisting of three nanoparticles (a gold ellipsoid with aspect ratio 2.5 and short axis 16 nm, and two silver ellipsoids with aspect ratios 3 and 2 and short axes 8 nm and 10 nm, respectively) and the set of all possible dimers formed between them. The incident light was circularly polarized, and all possible particle and dimer orientations were considered. We observed that the minimum peak separation between two spectra is 5 nm, while the maximum is 184 nm.

  2. Process perspective on image quality evaluation

    NASA Astrophysics Data System (ADS)

    Leisti, Tuomas; Halonen, Raisa; Kokkonen, Anna; Weckman, Hanna; Mettänen, Marja; Lensu, Lasse; Ritala, Risto; Oittinen, Pirkko; Nyman, Göte

    2008-01-01

    The psychological complexity of multivariate image quality evaluation makes it difficult to develop general image quality metrics. Quality evaluation includes several mental processes, and ignoring these processes and using only a few test images can lead to biased results. By using a qualitative/quantitative (Interpretation Based Quality, IBQ) methodology, we examined the process of pair-wise comparison in a setting where the quality of images printed by a laser printer on different paper grades was evaluated. The test image consisted of a picture of a table covered with several objects. Three other images were also used: photographs of a woman, a cityscape and a countryside. In addition to the pair-wise comparisons, observers (N=10) were interviewed about the subjective quality attributes they used in making their quality decisions. An examination of the individual pair-wise comparisons revealed serious inconsistencies in observers' evaluations on the test image content, but not on the other image contents. The qualitative analysis showed that this inconsistency was due to the observers' focus of attention. The lack of easily recognizable context in the test image may have contributed to this inconsistency. To obtain reliable knowledge of the effect of image context or attention on subjective image quality, a qualitative methodology is needed.

  3. Neural image analysis for estimating aerobic and anaerobic decomposition of organic matter based on the example of straw decomposition

    NASA Astrophysics Data System (ADS)

    Boniecki, P.; Nowakowski, K.; Slosarz, P.; Dach, J.; Pilarski, K.

    2012-04-01

    The purpose of the project was to identify the degree of organic matter decomposition by means of a neural model based on graphical information derived from image analysis. Empirical data (photographs of compost content at various stages of maturation) were used to generate an optimal neural classifier (Boniecki et al. 2009, Nowakowski et al. 2009). The best classification properties were found in an RBF (Radial Basis Function) artificial neural network, which demonstrates that the process is non-linear.
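
    As a generic illustration of the kind of RBF classifier referred to above (not the authors' network, and with synthetic stand-ins for the image-derived features), an RBF network can be assembled from k-means centres, Gaussian hidden activations and a logistic readout:

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.linear_model import LogisticRegression

        def fit_rbf_classifier(X, y, n_centers=10, gamma=1.0):
            """Generic RBF network: k-means centres, Gaussian hidden layer, logistic readout."""
            centers = KMeans(n_clusters=n_centers, n_init=10, random_state=0).fit(X).cluster_centers_

            def hidden(Z):
                d2 = ((Z[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
                return np.exp(-gamma * d2)                 # Gaussian radial basis activations

            readout = LogisticRegression(max_iter=1000).fit(hidden(X), y)
            return lambda Z: readout.predict(hidden(Z))

        # synthetic stand-in for image features of compost at three maturity stages
        rng = np.random.default_rng(2)
        X = np.vstack([rng.normal(m, 0.5, (60, 4)) for m in (0.0, 1.5, 3.0)])
        y = np.repeat([0, 1, 2], 60)
        predict = fit_rbf_classifier(X, y)
        print((predict(X) == y).mean())                    # training accuracy on the toy data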

  4. THE SPITZER SURVEY OF STELLAR STRUCTURE IN GALAXIES (S^4G): MULTI-COMPONENT DECOMPOSITION STRATEGIES AND DATA RELEASE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Salo, Heikki; Laurikainen, Eija; Laine, Jarkko

    The Spitzer Survey of Stellar Structure in Galaxies (S^4G) is a deep 3.6 and 4.5 μm imaging survey of 2352 nearby (<40 Mpc) galaxies. We describe the S^4G data analysis pipeline 4, which is dedicated to two-dimensional structural surface brightness decompositions of 3.6 μm images, using GALFIT3.0. Besides automatic 1-component Sérsic fits, and 2-component Sérsic bulge + exponential disk fits, we present human-supervised multi-component decompositions, which include, when judged appropriate, a central point source, bulge, disk, and bar components. Comparison of the fitted parameters indicates that multi-component models are needed to obtain reliable estimates for the bulge Sérsic index and bulge-to-total light ratio (B/T), confirming earlier results. Here, we describe the preparations of input data done for decompositions, give examples of our decomposition strategy, and describe the data products released via IRSA and via our web page (www.oulu.fi/astronomy/S4G-PIPELINE4/MAIN). These products include all the input data and decomposition files in electronic form, making it easy to extend the decompositions to suit specific science purposes. We also provide our IDL-based visualization tools (GALFIDL) developed for displaying/running GALFIT-decompositions, as well as our mask editing procedure (MASK-EDIT) used in data preparation. A detailed analysis of the bulge, disk, and bar parameters derived from multi-component decompositions will be published separately.

  5. A Higher-Order Generalized Singular Value Decomposition for Comparison of Global mRNA Expression from Multiple Organisms

    PubMed Central

    Ponnapalli, Sri Priya; Saunders, Michael A.; Van Loan, Charles F.; Alter, Orly

    2011-01-01

    The number of high-dimensional datasets recording multiple aspects of a single phenomenon is increasing in many areas of science, accompanied by a need for mathematical frameworks that can compare multiple large-scale matrices with different row dimensions. The only such framework to date, the generalized singular value decomposition (GSVD), is limited to two matrices. We mathematically define a higher-order GSVD (HO GSVD) for N≥2 matrices Di, each with full column rank. Each matrix is exactly factored as Di = UiΣiV^T, where V, identical in all factorizations, is obtained from the eigensystem SV = VΛ of the arithmetic mean S of all pairwise quotients AiAj^-1 of the matrices Ai = Di^T Di, i≠j. We prove that this decomposition extends to higher orders almost all of the mathematical properties of the GSVD. The matrix S is nondefective with V and Λ real. Its eigenvalues satisfy λk≥1. Equality holds if and only if the corresponding eigenvector vk is a right basis vector of equal significance in all matrices Di and Dj, that is σi,k/σj,k = 1 for all i and j, and the corresponding left basis vector ui,k is orthogonal to all other vectors in Ui for all i. The eigenvalues λk = 1, therefore, define the “common HO GSVD subspace.” We illustrate the HO GSVD with a comparison of genome-scale cell-cycle mRNA expression from S. pombe, S. cerevisiae and human. Unlike existing algorithms, a mapping among the genes of these disparate organisms is not required. We find that the approximately common HO GSVD subspace represents the cell-cycle mRNA expression oscillations, which are similar among the datasets. Simultaneous reconstruction in the common subspace, therefore, removes the experimental artifacts, which are dissimilar, from the datasets. In the simultaneous sequence-independent classification of the genes of the three organisms in this common subspace, genes of highly conserved sequences but significantly different cell-cycle peak times are correctly classified. PMID:22216090
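
    A rough numerical sketch of the construction described above follows; the exact normalization of S and the random test matrices are assumptions made for illustration, and the original paper should be consulted for the precise definitions.

        import numpy as np

        def ho_gsvd(matrices):
            """HO GSVD sketch: shared right basis V from the mean of pairwise quotients of Ai = Di^T Di."""
            A = [D.T @ D for D in matrices]                # Gram matrices, assumed full rank
            N = len(A)
            S = np.zeros_like(A[0])
            for i in range(N):
                for j in range(N):
                    if i != j:
                        S += A[i] @ np.linalg.inv(A[j])
            S /= N * (N - 1)                               # arithmetic mean of the pairwise quotients
            _, V = np.linalg.eig(S)
            V = np.real(V)                                 # V and Lambda are real by the HO GSVD theorem
            factors = []
            for D in matrices:
                B = D @ np.linalg.inv(V).T                 # D = U Sigma V^T  =>  U Sigma = D V^{-T}
                sigma = np.linalg.norm(B, axis=0)
                factors.append((B / sigma, np.diag(sigma)))
            return factors, V

        rng = np.random.default_rng(1)
        Ds = [rng.standard_normal((m, 5)) for m in (30, 40, 50)]   # N = 3 matrices with full column rank
        factors, V = ho_gsvd(Ds)
        U0, S0 = factors[0]
        print(np.allclose(U0 @ S0 @ V.T, Ds[0]))           # the factorization is exact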

  6. Multidisciplinary Optimization Methods for Aircraft Preliminary Design

    NASA Technical Reports Server (NTRS)

    Kroo, Ilan; Altus, Steve; Braun, Robert; Gage, Peter; Sobieski, Ian

    1994-01-01

    This paper describes a research program aimed at improved methods for multidisciplinary design and optimization of large-scale aeronautical systems. The research involves new approaches to system decomposition, interdisciplinary communication, and methods of exploiting coarse-grained parallelism for analysis and optimization. A new architecture, that involves a tight coupling between optimization and analysis, is intended to improve efficiency while simplifying the structure of multidisciplinary, computation-intensive design problems involving many analysis disciplines and perhaps hundreds of design variables. Work in two areas is described here: system decomposition using compatibility constraints to simplify the analysis structure and take advantage of coarse-grained parallelism; and collaborative optimization, a decomposition of the optimization process to permit parallel design and to simplify interdisciplinary communication requirements.

  7. Decomposition of Copper (II) Sulfate Pentahydrate: A Sequential Gravimetric Analysis.

    ERIC Educational Resources Information Center

    Harris, Arlo D.; Kalbus, Lee H.

    1979-01-01

    Describes an improved experiment of the thermal dehydration of copper (II) sulfate pentahydrate. The improvements described here are control of the temperature environment and a quantitative study of the decomposition reaction to a thermally stable oxide. Data will suffice to show sequential gravimetric analysis. (Author/SA)

  8. Criterion Predictability: Identifying Differences Between r-squares

    ERIC Educational Resources Information Center

    Malgady, Robert G.

    1976-01-01

    An analysis of variance procedure for testing differences in r-squared, the coefficient of determination, across independent samples is proposed and briefly discussed. The principal advantage of the procedure is to minimize Type I error for follow-up tests of pairwise differences. (Author/JKS)

  9. Generalized decompositions of dynamic systems and vector Lyapunov functions

    NASA Astrophysics Data System (ADS)

    Ikeda, M.; Siljak, D. D.

    1981-10-01

    The notion of decomposition is generalized to provide more freedom in constructing vector Lyapunov functions for stability analysis of nonlinear dynamic systems. A generalized decomposition is defined as a disjoint decomposition of a system which is obtained by expanding the state-space of a given system. An inclusion principle is formulated for the solutions of the expansion to include the solutions of the original system, so that stability of the expansion implies stability of the original system. Stability of the expansion can then be established by standard disjoint decompositions and vector Lyapunov functions. The applicability of the new approach is demonstrated using the Lotka-Volterra equations.

  10. Augmenting the decomposition of EMG signals using supervised feature extraction techniques.

    PubMed

    Parsaei, Hossein; Gangeh, Mehrdad J; Stashuk, Daniel W; Kamel, Mohamed S

    2012-01-01

    Electromyographic (EMG) signal decomposition is the process of resolving an EMG signal into its constituent motor unit potential trains (MUPTs). In this work, the possibility of improving the decomposition results using two supervised feature extraction methods, i.e., Fisher discriminant analysis (FDA) and supervised principal component analysis (SPCA), is explored. Using the MUP labels provided by a decomposition-based quantitative EMG system as training data for FDA and SPCA, the MUPs are transformed into a new feature space such that the MUPs of a single MU become as close as possible to each other while those created by different MUs become as far apart as possible. The MUPs are then reclassified using a certainty-based classification algorithm. Evaluation results using 10 simulated EMG signals comprised of 3-11 MUPTs demonstrate that FDA and SPCA on average improve the decomposition accuracy by 6%. The improvement for the most difficult-to-decompose signal is about 12%, which shows the proposed approach is most beneficial in the decomposition of more complex signals.
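
    The supervised feature-extraction step can be illustrated schematically with scikit-learn's Fisher/linear discriminant analysis on synthetic motor unit potentials; the data, the nearest-centroid reclassifier and all parameter values are stand-ins for the certainty-based classifier and the real EMG signals used in the study.

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.neighbors import NearestCentroid

        rng = np.random.default_rng(3)
        templates = rng.standard_normal((3, 80))           # three motor units, 80-sample template MUPs
        labels = rng.integers(0, 3, size=300)              # provisional MU labels from a decomposition system
        mups = templates[labels] + 0.8 * rng.standard_normal((300, 80))

        # FDA/LDA transform: MUPs of the same unit move closer, different units move farther apart
        fda = LinearDiscriminantAnalysis(n_components=2).fit(mups, labels)
        features = fda.transform(mups)

        # reclassify in the new feature space (simple stand-in for the certainty-based classifier)
        reclassified = NearestCentroid().fit(features, labels).predict(features)
        print((reclassified == labels).mean())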

  11. TENSOR DECOMPOSITIONS AND SPARSE LOG-LINEAR MODELS

    PubMed Central

    Johndrow, James E.; Bhattacharya, Anirban; Dunson, David B.

    2017-01-01

    Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. We derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions. PMID:29332971

  12. Evaluating litter decomposition and soil organic matter dynamics in earth system models: contrasting analysis of long-term litter decomposition and steady-state soil carbon

    NASA Astrophysics Data System (ADS)

    Bonan, G. B.; Wieder, W. R.

    2012-12-01

    Decomposition is a large term in the global carbon budget, but models of the earth system that simulate carbon cycle-climate feedbacks are largely untested with respect to litter decomposition. Here, we demonstrate a protocol to document model performance with respect to both long-term (10 year) litter decomposition and steady-state soil carbon stocks. First, we test the soil organic matter parameterization of the Community Land Model version 4 (CLM4), the terrestrial component of the Community Earth System Model, with data from the Long-term Intersite Decomposition Experiment Team (LIDET). The LIDET dataset is a 10-year study of litter decomposition at multiple sites across North America and Central America. We show results for 10-year litter decomposition simulations compared with LIDET for 9 litter types and 20 sites in tundra, grassland, and boreal, conifer, deciduous, and tropical forest biomes. We show additional simulations with DAYCENT, a version of the CENTURY model, to ask how well an established ecosystem model matches the observations. The results reveal large discrepancy between the laboratory microcosm studies used to parameterize the CLM4 litter decomposition and the LIDET field study. Simulated carbon loss is more rapid than the observations across all sites, despite using the LIDET-provided climatic decomposition index to constrain temperature and moisture effects on decomposition. Nitrogen immobilization is similarly biased high. Closer agreement with the observations requires much lower decomposition rates, obtained with the assumption that nitrogen severely limits decomposition. DAYCENT better replicates the observations, for both carbon mass remaining and nitrogen, without requirement for nitrogen limitation of decomposition. Second, we compare global observationally-based datasets of soil carbon with simulated steady-state soil carbon stocks for both models. The models simulations were forced with observationally-based estimates of annual litterfall and model-derived climatic decomposition index. While comparison with the LIDET 10-year litterbag study reveals sharp contrasts between CLM4 and DAYCENT, simulations of steady-state soil carbon show less difference between models. Both CLM4 and DAYCENT significantly underestimate soil carbon. Sensitivity analyses highlight causes of the low soil carbon bias. The terrestrial biogeochemistry of earth system models must be critically tested with observations, and the consequences of particular model choices must be documented. Long-term litter decomposition experiments such as LIDET provide a real-world process-oriented benchmark to evaluate models and can critically inform model development. Analysis of steady-state soil carbon estimates reveal additional, but here different, inferences about model performance.

  13. Pair-Wise Trajectory Management-Oceanic (PTM-O) . [Concept of Operations—Version 3.9

    NASA Technical Reports Server (NTRS)

    Jones, Kenneth M.

    2014-01-01

    This document describes the Pair-wise Trajectory Management-Oceanic (PTM-O) Concept of Operations (ConOps). Pair-wise Trajectory Management (PTM) is a concept that includes airborne and ground-based capabilities designed to enable, and to benefit from, an airborne pair-wise distance-monitoring capability. PTM includes the capabilities needed for the controller to issue a PTM clearance that resolves a conflict for a specific pair of aircraft. PTM avionics include the capabilities needed for the flight crew to manage their trajectory relative to specific designated aircraft. Pair-wise Trajectory Management-Oceanic (PTM-O) is a region-specific application of the PTM concept. PTM is sponsored by the National Aeronautics and Space Administration (NASA) Concept and Technology Development Project (part of NASA's Airspace Systems Program). The goal of PTM is to use enhanced and distributed communications and surveillance along with airborne tools to permit reduced separation standards for given aircraft pairs, thereby increasing the capacity and efficiency of aircraft operations at a given altitude or volume of airspace.

  14. A pairwise maximum entropy model accurately describes resting-state human brain networks

    PubMed Central

    Watanabe, Takamitsu; Hirose, Satoshi; Wada, Hiroyuki; Imai, Yoshio; Machida, Toru; Shirouzu, Ichiro; Konishi, Seiki; Miyashita, Yasushi; Masuda, Naoki

    2013-01-01

    The resting-state human brain networks underlie fundamental cognitive functions and consist of complex interactions among brain regions. However, the level of complexity of the resting-state networks has not been quantified, which has prevented comprehensive descriptions of the brain activity as an integrative system. Here, we address this issue by demonstrating that a pairwise maximum entropy model, which takes into account region-specific activity rates and pairwise interactions, can be robustly and accurately fitted to resting-state human brain activities obtained by functional magnetic resonance imaging. Furthermore, to validate the approximation of the resting-state networks by the pairwise maximum entropy model, we show that the functional interactions estimated by the pairwise maximum entropy model reflect anatomical connexions more accurately than the conventional functional connectivity method. These findings indicate that a relatively simple statistical model not only captures the structure of the resting-state networks but also provides a possible method to derive physiological information about various large-scale brain networks. PMID:23340410
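
    A minimal sketch of fitting such a pairwise maximum entropy (Ising-like) model by exact gradient ascent on the log-likelihood is given below; the number of regions is kept small so that all 2^n activity patterns can be enumerated, and the binarized data, learning rate and iteration count are synthetic illustrations rather than values from the study.

        import numpy as np
        from itertools import product

        def fit_pairwise_mem(data, lr=0.1, steps=5000):
            """Fit h and J of P(x) proportional to exp(h.x + 0.5 x'Jx) for binary patterns x in {0,1}^n."""
            n = data.shape[1]
            states = np.array(list(product([0, 1], repeat=n)), dtype=float)  # all 2^n patterns
            emp_mean = data.mean(axis=0)                   # empirical activation rates <x_i>
            emp_corr = data.T @ data / len(data)           # empirical pairwise moments <x_i x_j>
            h, J = np.zeros(n), np.zeros((n, n))
            for _ in range(steps):
                energy = states @ h + 0.5 * np.einsum('si,ij,sj->s', states, J, states)
                p = np.exp(energy - energy.max())
                p /= p.sum()
                model_mean = p @ states
                model_corr = states.T @ (states * p[:, None])
                h += lr * (emp_mean - model_mean)          # moment matching = likelihood gradient ascent
                dJ = lr * (emp_corr - model_corr)
                np.fill_diagonal(dJ, 0.0)                  # self-couplings are handled by h
                J += dJ
            return h, J

        rng = np.random.default_rng(7)
        data = (rng.random((1000, 6)) < 0.3).astype(float) # synthetic binarized activity of 6 regions
        h, J = fit_pairwise_mem(data)
        print(np.round(h, 2))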

  15. Introducing Network Analysis into Science Education: Methodological Research Examining Secondary School Students' Understanding of "Decomposition"

    ERIC Educational Resources Information Center

    Schizas, Dimitrios; Katrana, Evagelia; Stamou, George

    2013-01-01

    In the present study we used the technique of word association tests to assess students' cognitive structures during the learning period. In particular, we tried to investigate what students living near a protected area in Greece (Dadia forest) knew about the phenomenon of decomposition. Decomposition was chosen as a stimulus word because it…

  16. Microbial ecological succession during municipal solid waste decomposition.

    PubMed

    Staley, Bryan F; de Los Reyes, Francis L; Wang, Ling; Barlaz, Morton A

    2018-04-28

    The decomposition of landfilled refuse proceeds through distinct phases, each defined by varying environmental factors such as volatile fatty acid concentration, pH, and substrate quality. The succession of microbial communities in response to these changing conditions was monitored in a laboratory-scale simulated landfill to minimize measurement difficulties experienced at field scale. 16S rRNA gene sequences retrieved at separate stages of decomposition showed significant succession in both Bacteria and methanogenic Archaea. A majority of Bacteria sequences in landfilled refuse belong to members of the phylum Firmicutes, while Proteobacteria levels fluctuated and Bacteroidetes levels increased as decomposition proceeded. Roughly 44% of archaeal sequences retrieved under conditions of low pH and high acetate were strictly hydrogenotrophic (Methanomicrobiales, Methanobacteriales). Methanosarcina was present at all stages of decomposition. Correspondence analysis showed bacterial population shifts were attributed to carboxylic acid concentration and solids hydrolysis, while archaeal populations were affected to a higher degree by pH. T-RFLP analysis showed specific taxonomic groups responded differently and exhibited unique responses during decomposition, suggesting that species composition and abundance within Bacteria and Archaea are highly dynamic. This study shows landfill microbial demographics are highly variable across both spatial and temporal transects.

  17. Multiple alignment analysis on phylogenetic tree of the spread of SARS epidemic using distance method

    NASA Astrophysics Data System (ADS)

    Amiroch, S.; Pradana, M. S.; Irawan, M. I.; Mukhlash, I.

    2017-09-01

    Multiple Alignment (MA) is a particularly important tool for studying the viral genome and determining the evolutionary process of a specific virus. Applying MA to the spread of the severe acute respiratory syndrome (SARS) epidemic is of particular interest because this viral epidemic spread so quickly a few years ago that it drew medical attention in many countries. Although much software exists for processing multiple sequences, the use of pairwise alignment to build the MA remains very important to consider. Previous research processed the MA by aligning the sequences with the Super Pairwise Alignment algorithm, whereas this study used the Needleman-Wunsch dynamic programming algorithm simulated in MATLAB. The MA analysis yielded the stable and unstable regions, which indicate the positions where mutations occur, as well as the network topology that produced the phylogenetic tree of the SARS epidemic by the distance method and the mutation network of the system.
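
    The Needleman-Wunsch algorithm used here is classical dynamic programming; a compact Python equivalent of the MATLAB simulation, with illustrative match/mismatch/gap scores and toy sequences rather than the study's scoring scheme, is:

        def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-2):
            """Global pairwise alignment by dynamic programming; returns the two aligned strings."""
            n, m = len(a), len(b)
            F = [[0] * (m + 1) for _ in range(n + 1)]      # score matrix with gap-initialised borders
            for i in range(1, n + 1):
                F[i][0] = i * gap
            for j in range(1, m + 1):
                F[0][j] = j * gap
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    s = match if a[i - 1] == b[j - 1] else mismatch
                    F[i][j] = max(F[i - 1][j - 1] + s, F[i - 1][j] + gap, F[i][j - 1] + gap)
            out_a, out_b, i, j = [], [], n, m              # traceback from the bottom-right corner
            while i > 0 or j > 0:
                s = match if i > 0 and j > 0 and a[i - 1] == b[j - 1] else mismatch
                if i > 0 and j > 0 and F[i][j] == F[i - 1][j - 1] + s:
                    out_a.append(a[i - 1]); out_b.append(b[j - 1]); i -= 1; j -= 1
                elif i > 0 and F[i][j] == F[i - 1][j] + gap:
                    out_a.append(a[i - 1]); out_b.append('-'); i -= 1
                else:
                    out_a.append('-'); out_b.append(b[j - 1]); j -= 1
            return ''.join(reversed(out_a)), ''.join(reversed(out_b))

        print(*needleman_wunsch("GATTACA", "GCATGCA"), sep="\n")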

  18. Why rate when you could compare? Using the "EloChoice" package to assess pairwise comparisons of perceived physical strength.

    PubMed

    Clark, Andrew P; Howard, Kate L; Woods, Andy T; Penton-Voak, Ian S; Neumann, Christof

    2018-01-01

    We introduce "EloChoice", a package for R which uses Elo rating to assess pairwise comparisons between stimuli in order to measure perceived stimulus characteristics. To demonstrate the package and compare results from forced choice pairwise comparisons to those from more standard single stimulus rating tasks using Likert (or Likert-type) items, we investigated perceptions of physical strength from images of male bodies. The stimulus set comprised images of 82 men standing on a raised platform with minimal clothing. Strength-related anthropometrics and grip strength measurements were available for each man in the set. UK laboratory participants (Study 1) and US online participants (Study 2) viewed all images in both a Likert rating task, to collect mean Likert scores, and a pairwise comparison task, to calculate Elo, mean Elo (mElo), and Bradley-Terry scores. Within both studies, Likert, Elo and Bradley-Terry scores were closely correlated to mElo scores (all rs > 0.95), and all measures were correlated with stimulus grip strength (all rs > 0.38) and body size (all rs > 0.59). However, mElo scores were less variable than Elo scores and were hundreds of times quicker to compute than Bradley-Terry scores. Responses in pairwise comparison trials were 2/3 quicker than in Likert tasks, indicating that participants found pairwise comparisons to be easier. In addition, mElo scores generated from a data set with half the participants randomly excluded produced very comparable results to those produced with Likert scores from the full participant set, indicating that researchers require fewer participants when using pairwise comparisons.

  19. A Posteriori Error Analysis and Uncertainty Quantification for Adaptive Multiscale Operator Decomposition Methods for Multiphysics Problems

    DTIC Science & Technology

    2014-04-01

    [Indexing excerpt; the record does not include a full abstract.] Technical report TR-14-33, April 2014, prepared under contract HDTRA1-09-1-0036 by Donald Estep and Michael (surname truncated in the source); approved for public release, distribution unlimited. The excerpt also cites: J. Erway and M. Holst, "Barrier methods for critical exponent problems in geometric analysis and mathematical physics," submitted for publication.

  20. A decomposition model and voxel selection framework for fMRI analysis to predict neural response of visual stimuli.

    PubMed

    Raut, Savita V; Yadav, Dinkar M

    2018-03-28

    This paper presents an fMRI signal analysis methodology using geometric mean curve decomposition (GMCD) and a mutual information-based voxel selection framework. Previously, fMRI signal analysis has been conducted using the empirical mean curve decomposition (EMCD) model and voxel selection on the raw fMRI signal. The former methodology loses frequency content, while the latter suffers from signal redundancy. Both challenges are addressed by our methodology, in which the frequency component is retained by decomposing the raw fMRI signal using the geometric mean rather than the arithmetic mean, and voxels are selected from the decomposed signal using the GMCD components rather than from the raw fMRI signal. The proposed methodology is adopted for predicting the neural response. Experiments are conducted on openly available fMRI data from six subjects, and comparisons are made with existing decomposition models and voxel selection frameworks. Subsequently, the effects of the number of selected voxels and of the selection constraints are analyzed. The comparative results and the analysis demonstrate the superiority and reliability of the proposed methodology.

  1. Critical Analysis of Nitramine Decomposition Data: Activation Energies and Frequency Factors for HMX and RDX Decomposition

    DTIC Science & Technology

    1985-09-01

    [Indexing excerpt; the record does not include a full abstract.] Report by Michael A. Schroeder, September 1985; approved for public release, distribution unlimited. The excerpt notes that activation energies larger than the net energies of reaction for the same transitions represent the energy needed for "freeing-up" of HMX or RDX molecules.

  2. About decomposition approach for solving the classification problem

    NASA Astrophysics Data System (ADS)

    Andrianova, A. A.

    2016-11-01

    This article describes the application of an algorithm that uses decomposition methods to solve the binary classification problem of constructing a linear classifier based on the Support Vector Machine method. The use of decomposition reduces the volume of calculations, in particular because it opens up the possibility of building parallel versions of the algorithm, which is a very important advantage for solving problems with big data. The results of computational experiments conducted using the decomposition approach are analyzed. The experiments use a well-known data set for the binary classification problem.

  3. GHGs and air pollutants embodied in China's international trade: Temporal and spatial index decomposition analysis.

    PubMed

    Liu, Zhengyan; Mao, Xianqiang; Song, Peng

    2017-01-01

    Temporal index decomposition analysis and spatial index decomposition analysis were applied to understand the driving forces of the emissions embodied in China's exports and net exports during 2002-2011, respectively. The accumulated emissions embodied in exports accounted for approximately 30% of the total emissions in China; although the contribution of the sectoral total emissions intensity (technique effect) declined, the scale effect was largely responsible for the mounting emissions associated with export, and the composition effect played a largely insignificant role. Calculations of the emissions embodied in net exports suggest that China is generally in an environmentally inferior position compared with its major trade partners. The differences in the economy-wide emission intensities between China and its major trade partners were the biggest contribution to this reality, and the trade balance effect played a less important role. However, a lower degree of specialization in pollution intensive products in exports than in imports helped to reduce slightly the emissions embodied in net exports. The temporal index decomposition analysis results suggest that China should take effective measures to optimize export and supply-side structure and reduce the total emissions intensity. According to spatial index decomposition analysis, it is suggested that a more aggressive import policy was useful for curbing domestic and global emissions, and the transfer of advanced production technologies and emission control technologies from developed to developing countries should be a compulsory global environmental policy option to mitigate the possible leakage of pollution emissions caused by international trade.
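
    The scale, composition and technique effects referred to above can be produced by several index decomposition formulas; the sketch below illustrates one common choice, an additive logarithmic mean Divisia index (LMDI) decomposition of E = Q · s_i · I_i over sectors. The sectoral export values and emission intensities are made-up numbers, and this is a generic illustration of the approach rather than the authors' exact specification.

      import numpy as np

      def logmean(a, b):
          """Logarithmic mean L(a, b) = (a - b) / (ln a - ln b), with L(a, a) = a."""
          a, b = np.asarray(a, float), np.asarray(b, float)
          with np.errstate(divide="ignore", invalid="ignore"):
              lm = (a - b) / np.log(a / b)
          return np.where(np.isclose(a, b), a, lm)

      def lmdi_decomposition(exports_0, exports_T, intensity_0, intensity_T):
          """Additive LMDI decomposition of the change in embodied emissions
          E = Q * s_i * I_i into scale, composition and technique effects."""
          q0, qT = np.asarray(exports_0, float), np.asarray(exports_T, float)
          i0, iT = np.asarray(intensity_0, float), np.asarray(intensity_T, float)
          Q0, QT = q0.sum(), qT.sum()
          s0, sT = q0 / Q0, qT / QT                 # sectoral composition (shares)
          E0, ET = q0 * i0, qT * iT                 # sectoral embodied emissions
          w = logmean(ET, E0)                       # LMDI weights
          scale = np.sum(w * np.log(QT / Q0))
          composition = np.sum(w * np.log(sT / s0))
          technique = np.sum(w * np.log(iT / i0))
          return scale, composition, technique

      # Toy example with three export sectors (arbitrary units).
      scale, comp, tech = lmdi_decomposition(
          exports_0=[100, 50, 30], exports_T=[180, 60, 40],
          intensity_0=[2.0, 1.0, 0.5], intensity_T=[1.6, 0.9, 0.5])
      print(scale, comp, tech, scale + comp + tech)   # the three effects sum to E_T - E_0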

  4. A spreadsheet template compatible with Microsoft Excel and iWork Numbers that returns the simultaneous confidence intervals for all pairwise differences between multiple sample means.

    PubMed

    Brown, Angus M

    2010-04-01

    The objective of the method described in this paper is to develop a spreadsheet template for the purpose of comparing multiple sample means. An initial analysis of variance (ANOVA) test on the data returns the test statistic F. If F is larger than the critical F value drawn from the F distribution at the appropriate degrees of freedom, convention dictates rejection of the null hypothesis and allows subsequent multiple comparison testing to determine where the inequalities between the sample means lie. A variety of multiple comparison methods are described that return the 95% confidence intervals for differences between means using an inclusive pairwise comparison of the sample means. 2009 Elsevier Ireland Ltd. All rights reserved.
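
    The same calculation the spreadsheet performs (a one-way ANOVA followed by simultaneous 95% confidence intervals for all pairwise differences between means) can be cross-checked in a few lines of Python. The sketch below uses the Tukey-Kramer procedure, one of the multiple-comparison methods of this kind, and assumes SciPy 1.7 or later for the studentized range distribution; it is not the spreadsheet template itself.

      import itertools
      import numpy as np
      from scipy import stats

      def anova_tukey_kramer(groups, alpha=0.05):
          """One-way ANOVA followed by Tukey-Kramer simultaneous confidence intervals
          for all pairwise differences between group means."""
          k = len(groups)
          n = np.array([len(g) for g in groups])
          means = np.array([np.mean(g) for g in groups])
          df_error = n.sum() - k
          # Pooled within-group (error) mean square from the ANOVA table.
          mse = sum(np.sum((np.asarray(g) - m) ** 2) for g, m in zip(groups, means)) / df_error
          F, p = stats.f_oneway(*groups)
          # Critical value of the studentized range distribution (SciPy >= 1.7).
          q_crit = stats.studentized_range.ppf(1 - alpha, k, df_error)
          intervals = []
          for i, j in itertools.combinations(range(k), 2):
              diff = means[i] - means[j]
              half_width = q_crit * np.sqrt(mse / 2.0 * (1.0 / n[i] + 1.0 / n[j]))
              intervals.append((i, j, diff, diff - half_width, diff + half_width))
          return F, p, intervals

      groups = [[24, 26, 25, 27], [31, 29, 30, 32], [25, 27, 26, 28]]
      F, p, intervals = anova_tukey_kramer(groups)
      print(f"F = {F:.2f}, p = {p:.4f}")
      for i, j, diff, lo, hi in intervals:
          print(f"mean {i} - mean {j}: {diff:+.2f}   95% CI [{lo:+.2f}, {hi:+.2f}]")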

  5. Decompositions of the polyhedral product functor with applications to moment-angle complexes and related spaces

    PubMed Central

    Bahri, A.; Bendersky, M.; Cohen, F. R.; Gitler, S.

    2009-01-01

    This article gives a natural decomposition of the suspension of a generalized moment-angle complex or partial product space which arises as the polyhedral product functor described below. The introduction and application of the smash product moment-angle complex provides a precise identification of the stable homotopy type of the values of the polyhedral product functor. One direct consequence is an analysis of the associated cohomology. For the special case of the complements of certain subspace arrangements, the geometrical decomposition implies the homological decomposition in earlier work of others as described below. Because the splitting is geometric, an analogous homological decomposition for a generalized moment-angle complex applies for any homology theory. Implied, therefore, is a decomposition for the Stanley–Reisner ring of a finite simplicial complex, and natural generalizations. PMID:19620727

  6. Decompositions of the polyhedral product functor with applications to moment-angle complexes and related spaces.

    PubMed

    Bahri, A; Bendersky, M; Cohen, F R; Gitler, S

    2009-07-28

    This article gives a natural decomposition of the suspension of a generalized moment-angle complex or partial product space which arises as the polyhedral product functor described below. The introduction and application of the smash product moment-angle complex provides a precise identification of the stable homotopy type of the values of the polyhedral product functor. One direct consequence is an analysis of the associated cohomology. For the special case of the complements of certain subspace arrangements, the geometrical decomposition implies the homological decomposition in earlier work of others as described below. Because the splitting is geometric, an analogous homological decomposition for a generalized moment-angle complex applies for any homology theory. Implied, therefore, is a decomposition for the Stanley-Reisner ring of a finite simplicial complex, and natural generalizations.

  7. Three geographic decomposition approaches in transportation network analysis

    DOT National Transportation Integrated Search

    1980-03-01

    This document describes the results of research into the application of geographic decomposition techniques to practical transportation network problems. Three approaches are described for the solution of the traffic assignment problem. One approach ...

  8. Application of Decomposition to Transportation Network Analysis

    DOT National Transportation Integrated Search

    1976-10-01

    This document reports preliminary results of five potential applications of the decomposition techniques from mathematical programming to transportation network problems. The five application areas are (1) the traffic assignment problem with fixed de...

  9. Enhance-Synergism and Suppression Effects in Multiple Regression

    ERIC Educational Resources Information Center

    Lipovetsky, Stan; Conklin, W. Michael

    2004-01-01

    Relations between pairwise correlations and the coefficient of multiple determination in regression analysis are considered. The conditions for the occurrence of enhance-synergism and suppression effects when multiple determination becomes bigger than the total of squared correlations of the dependent variable with the regressors are discussed. It…
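
    For two standardized regressors, the quantity discussed above can be computed directly from the correlation matrix as R² = r_yx' R_xx⁻¹ r_yx. The short sketch below builds a classic suppressor configuration (made-up correlations, not data from the article) and verifies numerically that R² exceeds the sum of the squared pairwise correlations with the dependent variable.

      import numpy as np

      # Population correlations: x1 predicts y (r = 0.5), x2 is uncorrelated with y
      # (r = 0.0) but correlated with x1 (r = 0.6) -- a classic suppressor set-up.
      r_y1, r_y2, r_12 = 0.5, 0.0, 0.6

      R_xx = np.array([[1.0, r_12],
                       [r_12, 1.0]])
      r_yx = np.array([r_y1, r_y2])

      # Coefficient of multiple determination for standardized variables:
      # R^2 = r_yx' * R_xx^{-1} * r_yx
      R2 = r_yx @ np.linalg.solve(R_xx, r_yx)
      sum_sq = r_y1 ** 2 + r_y2 ** 2

      print(f"R^2 = {R2:.3f}, sum of squared pairwise correlations = {sum_sq:.3f}")
      # R^2 (about 0.391) exceeds 0.250: the 'useless' regressor x2 suppresses the part
      # of x1 that is irrelevant to y, which is the suppression effect discussed above.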

  10. Breaking the computational barriers of pairwise genome comparison.

    PubMed

    Torreno, Oscar; Trelles, Oswaldo

    2015-08-11

    Conventional pairwise sequence comparison software algorithms are being used to process much larger datasets than they were originally designed for. This can result in processing bottlenecks that limit software capabilities or prevent full use of the available hardware resources. Overcoming the barriers that limit the efficient computational analysis of large biological sequence datasets by retrofitting existing algorithms or by creating new applications represents a major challenge for the bioinformatics community. We have developed C libraries for pairwise sequence comparison within diverse architectures, ranging from commodity systems to high performance and cloud computing environments. Exhaustive tests were performed using different datasets of closely- and distantly-related sequences that span from small viral genomes to large mammalian chromosomes. The tests demonstrated that our solution is capable of generating high quality results with a linear-time response and controlled memory consumption, being comparable or faster than the current state-of-the-art methods. We have addressed the problem of pairwise and all-versus-all comparison of large sequences in general, greatly increasing the limits on input data size. The approach described here is based on a modular out-of-core strategy that uses secondary storage to avoid reaching memory limits during the identification of High-scoring Segment Pairs (HSPs) between the sequences under comparison. Software engineering concepts were applied to avoid intermediate result re-calculation, to minimise the performance impact of input/output (I/O) operations and to modularise the process, thus enhancing application flexibility and extendibility. Our computationally-efficient approach allows tasks such as the massive comparison of complete genomes, evolutionary event detection, the identification of conserved synteny blocks and inter-genome distance calculations to be performed more effectively.

  11. Pairwise contact energy statistical potentials can help to find probability of point mutations.

    PubMed

    Saravanan, K M; Suvaithenamudhan, S; Parthasarathy, S; Selvaraj, S

    2017-01-01

    To adopt a particular fold, a protein requires several interactions between its amino acid residues. The energetic contribution of these residue-residue interactions can be approximated by extracting statistical potentials from known high resolution structures. Several methods based on statistical potentials extracted from unrelated proteins have been found to give good predictions of the probability of point mutations. We postulate that statistical potentials extracted from known structures of similar folds with varying sequence identity can be a powerful tool to examine the probability of point mutation. With this in mind, we have derived pairwise residue and atomic contact energy potentials for the different functional families that adopt the (α/β)8 TIM-barrel fold. We carried out computational point mutations at various conserved residue positions in the yeast triose phosphate isomerase enzyme, for which experimental results have already been reported. We have also performed molecular dynamics simulations on a subset of point mutants to make a comparative study. The difference in pairwise residue and atomic contact energy between the wildtype and various point mutations reveals the probability of mutation at a particular position. Interestingly, we found that our computational prediction agrees with the experimental studies of Silverman et al. (Proc Natl Acad Sci 2001;98:3092-3097) and performs better than I-Mutant and the Cologne University Protein Stability Analysis Tool. The present work thus suggests that deriving pairwise contact energy potentials and performing molecular dynamics simulations of functionally important folds could help predict the probability of point mutations, which may ultimately reduce the time and cost of mutation experiments. Proteins 2016; 85:54-64. © 2016 Wiley Periodicals, Inc.
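
    The statistical potentials described above are typically obtained by comparing observed contact counts with those expected by chance. The sketch below is a generic, much-simplified illustration of that idea using an inverse-Boltzmann (log-odds) formula over C-alpha contacts; the 8 Å cutoff, the chain-separation filter, the pseudocount and the synthetic example are all assumptions for illustration, not the derivation used in the paper.

      import numpy as np
      from collections import Counter
      from itertools import combinations, combinations_with_replacement

      AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

      def contact_potential(structures, cutoff=8.0, pseudocount=1.0):
          """Pairwise residue contact potential e(a, b) = -ln(N_obs(a, b) / N_exp(a, b)),
          estimated from (sequence, C-alpha coordinate array) pairs.  Negative values mark
          residue pairs observed in contact more often than expected by chance."""
          obs, composition, n_contacts = Counter(), Counter(), 0
          for seq, coords in structures:
              coords = np.asarray(coords, float)
              composition.update(seq)
              for i, j in combinations(range(len(seq)), 2):
                  if j - i < 3:   # ignore near neighbours along the chain
                      continue
                  if np.linalg.norm(coords[i] - coords[j]) <= cutoff:
                      obs[tuple(sorted((seq[i], seq[j])))] += 1
                      n_contacts += 1
          total = sum(composition.values())
          freq = {a: composition[a] / total for a in AMINO_ACIDS}
          energies = {}
          for a, b in combinations_with_replacement(AMINO_ACIDS, 2):
              p_ab = freq[a] * freq[b] * (1.0 if a == b else 2.0)   # chance contact probability
              energies[(a, b)] = -np.log((obs[(a, b)] + pseudocount) /
                                         (n_contacts * p_ab + pseudocount))
          return energies

      # Tiny synthetic usage: a random-walk "chain" standing in for real coordinates.
      rng = np.random.default_rng(0)
      fake = [(AMINO_ACIDS * 5, np.cumsum(rng.normal(0.0, 2.0, size=(100, 3)), axis=0))]
      potential = contact_potential(fake)
      print(sorted(potential.items(), key=lambda kv: kv[1])[:3])   # most 'favourable' pairs in this toy set

    A candidate point mutation can then be scored by the change in summed contact energies over the structure's contact map, which is the general logic behind potentials of this kind.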

  12. Isoconversional approach for non-isothermal decomposition of un-irradiated and photon-irradiated 5-fluorouracil.

    PubMed

    Mohamed, Hala Sh; Dahy, AbdelRahman A; Mahfouz, Refaat M

    2017-10-25

    Kinetic analysis of the non-isothermal decomposition of un-irradiated and photon-beam-irradiated 5-fluorouracil (5-FU), an anti-cancer drug, was carried out in static air. Thermal decomposition of 5-FU proceeds in two steps: a minor step in the temperature range 270-283°C, followed by the major step in the temperature range 285-360°C. The non-isothermal data for un-irradiated and photon-irradiated 5-FU were analyzed using the linear (Tang) and non-linear (Vyazovkin) isoconversional methods. The application of these model-free methods to the present kinetic data showed a marked dependence of the activation energy on the extent of conversion. For un-irradiated 5-FU, the non-isothermal data analysis indicates that the decomposition is generally described by the A3 and A4 models for the minor and major decomposition steps, respectively. For a photon-irradiated sample of 5-FU with a total absorbed dose of 10 Gy, the decomposition is controlled by the A2 model throughout the conversion range. The activation energies calculated for photon-irradiated 5-FU were found to be lower than the values obtained from the thermal decomposition of the un-irradiated sample, probably due to the formation of additional nucleation sites created by photon irradiation. The decomposition path was investigated by intrinsic reaction coordinate (IRC) calculations at the B3LYP/6-311++G(d,p) level of DFT. Two transition states were involved in the process, corresponding to homolytic rupture of the NH bond and ring scission, respectively. Published by Elsevier B.V.

  13. Thermal Decomposition Behavior of Hydroxytyrosol (HT) in Nitrogen Atmosphere Based on TG-FTIR Methods.

    PubMed

    Tu, Jun-Ling; Yuan, Jiao-Jiao

    2018-02-13

    The thermal decomposition behavior of olive hydroxytyrosol (HT) was first studied using thermogravimetry (TG). Chemical bond cleavage and evolved gas analysis during the thermal decomposition process of HT were also investigated using thermogravimetry coupled with infrared spectroscopy (TG-FTIR). Thermogravimetry-differential thermogravimetry (TG-DTG) curves revealed that the thermal decomposition of HT began at 262.8 °C and ended at 409.7 °C with a main mass loss. It was demonstrated that a high heating rate (over 20 K·min⁻¹) restrained the thermal decomposition of HT, resulting in an obvious thermal hysteresis. Furthermore, a thermal decomposition kinetics investigation of HT indicated that the non-isothermal decomposition mechanism was one-dimensional diffusion (D1), with integral form g(x) = x² and differential form f(x) = 1/(2x). Four combined approaches were employed to calculate the activation energy (E = 128.50 kJ·mol⁻¹) and the Arrhenius pre-exponential factor (ln A = 24.39, with A in min⁻¹). In addition, a tentative mechanism of HT thermal decomposition was further developed. The results provide a theoretical reference for the potential thermal stability of HT.
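
    Using the reported D1 mechanism and Arrhenius parameters, an isothermal conversion curve follows directly from g(α) = α² = kt. The snippet below is a back-of-the-envelope illustration at assumed hold temperatures; it extrapolates the non-isothermal parameters to isothermal conditions and is not a reproduction of the TG-FTIR analysis.

      import numpy as np

      R = 8.314           # gas constant, J mol^-1 K^-1
      E = 128.50e3        # activation energy reported above, J mol^-1
      LN_A = 24.39        # ln of the pre-exponential factor, with A in min^-1

      def d1_conversion(t_min, T_kelvin):
          """Conversion alpha(t) for the D1 (one-dimensional diffusion) model:
          g(alpha) = alpha**2 = k*t, so alpha = sqrt(k*t), capped at 1."""
          k = np.exp(LN_A - E / (R * T_kelvin))     # rate constant in min^-1
          return np.minimum(np.sqrt(k * np.asarray(t_min, float)), 1.0)

      t = np.linspace(0, 120, 7)                    # minutes
      for T_c in (260.0, 280.0):                    # assumed isothermal hold temperatures, deg C
          alpha = d1_conversion(t, T_c + 273.15)
          print(T_c, np.round(alpha, 3))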

  14. Polarimetric Decomposition Analysis of the Deepwater Horizon Oil Slick Using L-Band UAVSAR Data

    NASA Technical Reports Server (NTRS)

    Jones, Cathleen; Minchew, Brent; Holt, Benjamin

    2011-01-01

    We report here an analysis of the polarization dependence of L-band radar backscatter from the main slick of the Deepwater Horizon oil spill, with specific attention to the utility of polarimetric decomposition analysis for discrimination of oil from clean water and identification of variations in the oil characteristics. For this study we used data collected with the UAVSAR instrument from opposing look directions directly over the main oil slick. We find that both the Cloude-Pottier and Shannon entropy polarimetric decomposition methods offer promise for oil discrimination, with the Shannon entropy method yielding the same information as contained in the Cloude-Pottier entropy and averaged intensity parameters, but with significantly less computational complexity.
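
    As a reminder of what the Cloude-Pottier decomposition computes per pixel, the sketch below derives the entropy H (log base 3), the anisotropy A and the mean alpha angle from the eigenstructure of a 3x3 Hermitian coherency matrix. It follows the textbook definitions with a made-up matrix, not the UAVSAR processing chain used in the study.

      import numpy as np

      def cloude_pottier(T):
          """Entropy H (log base 3), anisotropy A and mean alpha angle (degrees) from a
          3x3 Hermitian polarimetric coherency matrix T."""
          eigval, eigvec = np.linalg.eigh(T)              # ascending eigenvalues
          eigval = np.clip(eigval[::-1], 0.0, None)       # sort descending, clamp noise
          eigvec = eigvec[:, ::-1]
          p = eigval / eigval.sum()                       # pseudo-probabilities
          with np.errstate(divide="ignore", invalid="ignore"):
              H = -np.nansum(p * np.log(p) / np.log(3.0))
          A = (eigval[1] - eigval[2]) / (eigval[1] + eigval[2])       # anisotropy
          alpha_i = np.degrees(np.arccos(np.abs(eigvec[0, :])))       # per-mechanism alpha
          return H, A, float(np.sum(p * alpha_i))

      # Made-up coherency matrix dominated by surface-like scattering.
      T = np.array([[2.0, 0.3 + 0.1j, 0.0],
                    [0.3 - 0.1j, 0.6, 0.0],
                    [0.0, 0.0, 0.2]])
      print(cloude_pottier(T))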

  15. A data-driven method to enhance vibration signal decomposition for rolling bearing fault analysis

    NASA Astrophysics Data System (ADS)

    Grasso, M.; Chatterton, S.; Pennacchi, P.; Colosimo, B. M.

    2016-12-01

    Health condition analysis and diagnostics of rotating machinery requires the capability of properly characterizing the information content of sensor signals in order to detect and identify possible fault features. Time-frequency analysis plays a fundamental role, as it allows determining both the existence and the causes of a fault. The separation of components belonging to different time-frequency scales, associated with either healthy or faulty conditions, represents a challenge that motivates the development of effective methodologies for multi-scale signal decomposition. In this framework, the Empirical Mode Decomposition (EMD) is a flexible tool, thanks to its data-driven and adaptive nature. However, the EMD usually yields an over-decomposition of the original signals into a large number of intrinsic mode functions (IMFs). The selection of the most relevant IMFs is a challenging task, and the reference literature lacks automated methods to achieve a synthetic decomposition into a few physically meaningful modes while avoiding the generation of spurious or meaningless modes. The paper proposes a novel automated approach aimed at generating a decomposition into a minimal number of relevant modes, called Combined Mode Functions (CMFs), each consisting of a sum of adjacent IMFs that share similar properties. The final number of CMFs is selected in a fully data-driven way, leading to an enhanced characterization of the signal content without any information loss. A novel criterion to assess the dissimilarity between adjacent CMFs is proposed, based on probability density functions of frequency spectra. The method is suitable for analyzing vibration signals that may be periodically acquired within the operating life of rotating machinery. A rolling element bearing fault analysis based on experimental data is presented to demonstrate the performance of the method and the benefits it provides.

  16. Why rate when you could compare? Using the “EloChoice” package to assess pairwise comparisons of perceived physical strength

    PubMed Central

    Howard, Kate L.; Woods, Andy T.; Penton-Voak, Ian S.; Neumann, Christof

    2018-01-01

    We introduce “EloChoice”, a package for R which uses Elo rating to assess pairwise comparisons between stimuli in order to measure perceived stimulus characteristics. To demonstrate the package and compare results from forced choice pairwise comparisons to those from more standard single stimulus rating tasks using Likert (or Likert-type) items, we investigated perceptions of physical strength from images of male bodies. The stimulus set comprised images of 82 men standing on a raised platform with minimal clothing. Strength-related anthropometrics and grip strength measurements were available for each man in the set. UK laboratory participants (Study 1) and US online participants (Study 2) viewed all images in both a Likert rating task, to collect mean Likert scores, and a pairwise comparison task, to calculate Elo, mean Elo (mElo), and Bradley-Terry scores. Within both studies, Likert, Elo and Bradley-Terry scores were closely correlated to mElo scores (all rs > 0.95), and all measures were correlated with stimulus grip strength (all rs > 0.38) and body size (all rs > 0.59). However, mElo scores were less variable than Elo scores and were hundreds of times quicker to compute than Bradley-Terry scores. Responses in pairwise comparison trials were 2/3 quicker than in Likert tasks, indicating that participants found pairwise comparisons to be easier. In addition, mElo scores generated from a data set with half the participants randomly excluded produced very comparable results to those produced with Likert scores from the full participant set, indicating that researchers require fewer participants when using pairwise comparisons. PMID:29293615

  17. TSP Symposium 2012 Proceedings

    DTIC Science & Technology

    2012-11-01

    [Indexing excerpt of the proceedings' front matter; the record does not include a full abstract.] The excerpt lists section and table headings, including "Results of Pairwise ANOVA Analysis, Highlighting Statistically Significant Differences", "Overall Statistics of the Experiment", "Threats to Validity and Limitations", and distribution statistics (mean with lower and upper confidence intervals) for the calculated percentage of defects injected.

  18. Three-pattern decomposition of global atmospheric circulation: part I—decomposition model and theorems

    NASA Astrophysics Data System (ADS)

    Hu, Shujuan; Chou, Jifan; Cheng, Jianbo

    2018-04-01

    In order to study the interactions between the atmospheric circulations at the middle-high and low latitudes from a global perspective, the authors proposed a mathematical definition of three-pattern circulations, i.e., horizontal, meridional and zonal circulations, in terms of which the actual atmospheric circulation is expanded. This novel decomposition method is proved to accurately describe the actual atmospheric circulation dynamics. The authors used the NCEP/NCAR reanalysis data to calculate the climate characteristics of these three-pattern circulations and found that the decomposition model agreed with the observed results. Further dynamical analysis indicates that the decomposition model captures the major features of global three-dimensional atmospheric motions more accurately than the traditional definitions of the Rossby wave, Hadley circulation and Walker circulation. The decomposition model realizes, for the first time, the decomposition of the global atmospheric circulation into three orthogonal circulations within the horizontal, meridional and zonal planes, offering new opportunities to study the large-scale interactions between the middle-high latitude and low latitude circulations.

  19. Ozone decomposition

    PubMed Central

    Batakliev, Todor; Georgiev, Vladimir; Anachkov, Metody; Rakovsky, Slavcho

    2014-01-01

    Catalytic ozone decomposition is of great significance because ozone is a toxic substance commonly found or generated in human environments (aircraft cabins, offices with photocopiers, laser printers, sterilizers). Considerable work on ozone decomposition has been reported in the literature. This review provides a comprehensive summary of that literature, concentrating on analysis of the physico-chemical properties, synthesis and catalytic decomposition of ozone. This is supplemented by a review of kinetics and catalyst characterization which ties together the previously reported results. Noble metals and oxides of transition metals have been found to be the most active substances for ozone decomposition. The high price of precious metals stimulated the use of metal oxide catalysts, particularly catalysts based on manganese oxide. It has been determined that the kinetics of ozone decomposition is first order. A mechanism of the reaction of catalytic ozone decomposition is discussed, based on detailed spectroscopic investigations of the catalytic surface, showing the existence of peroxide and superoxide surface intermediates. PMID:26109880

  20. Pressure-dependent decomposition kinetics of the energetic material HMX up to 3.6 GPa.

    PubMed

    Glascoe, Elizabeth A; Zaug, Joseph M; Burnham, Alan K

    2009-12-03

    The effect of pressure on the global thermal decomposition rate of the energetic material HMX was studied. HMX was precompressed in a diamond anvil cell (DAC) and heated at various rates. The parent species population was monitored as a function of time and temperature using Fourier transform infrared (FTIR) spectroscopy. Global decomposition rates were determined by fitting the fraction reacted to the extended-Prout-Tompkins nucleation-growth model and the Friedman isoconversional method. The results of these experiments and analysis indicate that pressure accelerates the decomposition at low-to-moderate pressures (i.e., between ambient pressure and 0.1 GPa) and decelerates the decomposition at higher pressures. The decomposition acceleration is attributed to pressure-enhanced autocatalysis, whereas the deceleration at high pressures is attributed to pressure-inhibiting bond homolysis step(s), which would result in an increase in volume. These results indicate that both the beta- and delta-polymorphs of HMX are sensitive to pressure in the thermally induced decomposition kinetics.
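
    The Friedman isoconversional method named above rests on the relation ln(dα/dt) = ln[A f(α)] − E_α/(R T) at fixed conversion α, so the slope of ln(dα/dt) against 1/T across runs gives the apparent activation energy. The sketch below is a generic implementation of that idea with a synthetic first-order data set; it is not the authors' analysis code, and the Euler-integrated test data are purely illustrative.

      import numpy as np

      R = 8.314   # gas constant, J mol^-1 K^-1

      def friedman_activation_energy(runs, alpha_levels):
          """Friedman differential isoconversional analysis.

          runs: list of (time [s], temperature [K], conversion alpha) arrays from
                experiments at different heating rates.
          Returns the apparent activation energy E_alpha (J/mol) at each requested
          conversion level, from the slope of ln(dalpha/dt) versus 1/T."""
          energies = []
          for a in alpha_levels:
              inv_T, ln_rate = [], []
              for t, T, alpha in runs:
                  rate = np.gradient(alpha, t)                  # dalpha/dt
                  inv_T.append(1.0 / np.interp(a, alpha, T))    # T at the chosen conversion
                  ln_rate.append(np.log(np.interp(a, alpha, rate)))
              slope, _ = np.polyfit(inv_T, ln_rate, 1)          # slope = -E_alpha / R
              energies.append(-slope * R)
          return np.array(energies)

      # Synthetic check: first-order reaction with E = 150 kJ/mol, A = 1e12 s^-1,
      # simulated (simple Euler integration) at three constant heating rates.
      E_true, A_pre = 150e3, 1e12
      runs = []
      for beta in (0.05, 0.10, 0.20):                           # heating rates, K/s
          T = np.linspace(450.0, 700.0, 20000)
          t = (T - T[0]) / beta
          alpha = np.zeros_like(T)
          for i in range(1, len(T)):
              dadt = A_pre * np.exp(-E_true / (R * T[i - 1])) * (1.0 - alpha[i - 1])
              alpha[i] = min(alpha[i - 1] + dadt * (t[i] - t[i - 1]), 1.0 - 1e-9)
          runs.append((t, T, alpha))
      print(friedman_activation_energy(runs, alpha_levels=[0.2, 0.5, 0.8]) / 1e3)  # approx. 150 kJ/mol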

  1. Performance analysis of model based iterative reconstruction with dictionary learning in transportation security CT

    NASA Astrophysics Data System (ADS)

    Haneda, Eri; Luo, Jiajia; Can, Ali; Ramani, Sathish; Fu, Lin; De Man, Bruno

    2016-05-01

    In this study, we implement and compare model-based iterative reconstruction (MBIR) with dictionary learning (DL) against MBIR with pairwise pixel-difference regularization, in the context of transportation security. DL is a technique of sparse signal representation using an overcomplete dictionary, which has provided promising results in image processing applications including denoising [1] as well as medical CT reconstruction [2]. It has been previously reported that DL produces promising results in terms of noise reduction and preservation of structural details, especially for low-dose and few-view CT acquisitions [2]. A distinguishing feature of transportation security CT is that scanned baggage may contain items with a wide range of material densities. While medical CT typically scans soft tissues, blood with and without contrast agents, and bones, luggage typically contains more high-density materials (e.g., metals and glass), which can produce severe distortions such as metal streaking artifacts. Important factors in security CT are the emphasis on image quality, such as resolution, contrast, noise level, and CT number accuracy, for target detection. While MBIR has shown exemplary performance in the trade-off between noise reduction and resolution preservation, we demonstrate that DL may further improve this trade-off. In this study, we used KSVD-based DL [3] combined with the MBIR cost-minimization framework and compared results to Filtered Back Projection (FBP) and MBIR with pairwise pixel-difference regularization. We performed a parameter analysis to show the image quality impact of each parameter. We also investigated few-view CT acquisitions, where DL can show an additional advantage relative to pairwise pixel-difference regularization.

  2. On the streaming model for redshift-space distortions

    NASA Astrophysics Data System (ADS)

    Kuruvilla, Joseph; Porciani, Cristiano

    2018-06-01

    The streaming model describes the mapping between real and redshift space for 2-point clustering statistics. Its key element is the probability density function (PDF) of line-of-sight pairwise peculiar velocities. Following a kinetic-theory approach, we derive the fundamental equations of the streaming model for ordered and unordered pairs. In the first case, we recover the classic equation while we demonstrate that modifications are necessary for unordered pairs. We then discuss several statistical properties of the pairwise velocities for DM particles and haloes by using a suite of high-resolution N-body simulations. We test the often used Gaussian ansatz for the PDF of pairwise velocities and discuss its limitations. Finally, we introduce a mixture of Gaussians which is known in statistics as the generalised hyperbolic distribution and show that it provides an accurate fit to the PDF. Once inserted in the streaming equation, the fit yields an excellent description of redshift-space correlations at all scales that vastly outperforms the Gaussian and exponential approximations. Using a principal-component analysis, we reduce the complexity of our model for large redshift-space separations. Our results increase the robustness of studies of anisotropic galaxy clustering and are useful for extending them towards smaller scales in order to test theories of gravity and interacting dark-energy models.
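
    For reference, the classic streaming-model relation referred to above (for ordered pairs, with velocities expressed in comoving-distance units) is usually written as follows; this is the standard textbook form rather than a formula copied from the paper.

      \[
        1 + \xi_s(s_\perp, s_\parallel)
          = \int_{-\infty}^{\infty} \mathrm{d}r_\parallel
            \left[ 1 + \xi(r) \right]
            \mathcal{P}\!\left( v_\parallel = s_\parallel - r_\parallel \mid \mathbf{r} \right),
        \qquad r = \sqrt{s_\perp^{2} + r_\parallel^{2}} ,
      \]

    where ξ(r) is the real-space correlation function and P(v_∥ | r) is the line-of-sight pairwise-velocity PDF; the Gaussian ansatz and the generalized hyperbolic fit discussed above are competing models for this PDF.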

  3. Will kinematic Sunyaev-Zel'dovich measurements enhance the science return from galaxy redshift surveys?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sugiyama, Naonori S.; Okumura, Teppei; Spergel, David N., E-mail: nao.s.sugiyama@gmail.com, E-mail: tokumura@asiaa.sinica.edu.tw, E-mail: dns@astro.princeton.edu

    2017-01-01

    Yes. Future CMB experiments such as Advanced ACTPol and CMB-S4 should achieve measurements with S/N of > 0.1 for the typical host halo of galaxies in redshift surveys. These measurements will provide complementary measurements of the growth rate of large scale structure f and the expansion rate of the Universe H to galaxy clustering measurements. This paper emphasizes that there is significant information in the anisotropy of the relative pairwise kSZ measurements. We expand the relative pairwise kSZ power spectrum in Legendre polynomials and consider up to its octopole. Assuming that the noise in the filtered maps is uncorrelated between the positions of galaxies in the survey, we derive a simple analytic form for the power spectrum covariance of the relative pairwise kSZ temperature in redshift space. While many previous studies have assumed optimistically that the optical depth of the galaxies τ_T in the survey is known, we marginalize over τ_T, to compute constraints on the growth rate f and the expansion rate H. For realistic survey parameters, we find that combining kSZ and galaxy redshift survey data reduces the marginalized 1-σ errors on H and f to ∼50-70% compared to the galaxy-only analysis.

  4. Will kinematic Sunyaev-Zel'dovich measurements enhance the science return from galaxy redshift surveys?

    NASA Astrophysics Data System (ADS)

    Sugiyama, Naonori S.; Okumura, Teppei; Spergel, David N.

    2017-01-01

    Yes. Future CMB experiments such as Advanced ACTPol and CMB-S4 should achieve measurements with S/N of > 0.1 for the typical host halo of galaxies in redshift surveys. These measurements will provide complementary measurements of the growth rate of large scale structure f and the expansion rate of the Universe H to galaxy clustering measurements. This paper emphasizes that there is significant information in the anisotropy of the relative pairwise kSZ measurements. We expand the relative pairwise kSZ power spectrum in Legendre polynomials and consider up to its octopole. Assuming that the noise in the filtered maps is uncorrelated between the positions of galaxies in the survey, we derive a simple analytic form for the power spectrum covariance of the relative pairwise kSZ temperature in redshift space. While many previous studies have assumed optimistically that the optical depth of the galaxies τT in the survey is known, we marginalize over τT, to compute constraints on the growth rate f and the expansion rate H. For realistic survey parameters, we find that combining kSZ and galaxy redshift survey data reduces the marginalized 1-σ errors on H and f to ~50-70% compared to the galaxy-only analysis.

  5. An analysis of scatter decomposition

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Saltz, Joel H.

    1990-01-01

    A formal analysis of a powerful mapping technique known as scatter decomposition is presented. Scatter decomposition divides an irregular computational domain into a large number of equal sized pieces, and distributes them modularly among processors. A probabilistic model of workload in one dimension is used to formally explain why, and when scatter decomposition works. The first result is that if correlation in workload is a convex function of distance, then scattering a more finely decomposed domain yields a lower average processor workload variance. The second result shows that if the workload process is stationary Gaussian and the correlation function decreases linearly in distance until becoming zero and then remains zero, scattering a more finely decomposed domain yields a lower expected maximum processor workload. Finally it is shown that if the correlation function decreases linearly across the entire domain, then among all mappings that assign an equal number of domain pieces to each processor, scatter decomposition minimizes the average processor workload variance. The dependence of these results on the assumption of decreasing correlation is illustrated with situations where a coarser granularity actually achieves better load balance.
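
    To make the mapping concrete, the sketch below deals equal-sized pieces of a synthetic, spatially correlated 1-D workload out to processors in the modular fashion described above and reports the spread of per-processor load as the granularity is refined. The workload model and the specific sizes are assumptions for illustration, not the probabilistic model analyzed in the paper.

      import numpy as np

      rng = np.random.default_rng(0)

      def correlated_workload(n_cells, corr_length=64):
          """Synthetic 1-D workload with smooth spatial correlation (moving average of noise)."""
          noise = rng.normal(size=n_cells + corr_length)
          kernel = np.ones(corr_length) / corr_length
          return 1.0 + np.convolve(noise, kernel, mode="valid")[:n_cells]

      def per_processor_load(work, n_proc, pieces_per_proc):
          """Scatter decomposition: cut the domain into n_proc * pieces_per_proc equal
          pieces and assign piece i to processor i mod n_proc."""
          pieces = np.array_split(work, n_proc * pieces_per_proc)
          loads = np.zeros(n_proc)
          for i, piece in enumerate(pieces):
              loads[i % n_proc] += piece.sum()
          return loads

      work = correlated_workload(8192)
      for pieces in (1, 4, 16, 64):                 # pieces per processor; 1 = plain block mapping
          loads = per_processor_load(work, n_proc=16, pieces_per_proc=pieces)
          print(pieces, round(loads.std(), 3), round(loads.max(), 1))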

  6. An optimization approach for fitting canonical tensor decompositions.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dunlavy, Daniel M.; Acar, Evrim; Kolda, Tamara Gibson

    Tensor decompositions are higher-order analogues of matrix decompositions and have proven to be powerful tools for data analysis. In particular, we are interested in the canonical tensor decomposition, otherwise known as the CANDECOMP/PARAFAC decomposition (CPD), which expresses a tensor as the sum of component rank-one tensors and is used in a multitude of applications such as chemometrics, signal processing, neuroscience, and web analysis. The task of computing the CPD, however, can be difficult. The typical approach is based on alternating least squares (ALS) optimization, which can be remarkably fast but is not very accurate. Previously, nonlinear least squares (NLS) methods have also been recommended; existing NLS methods are accurate but slow. In this paper, we propose the use of gradient-based optimization methods. We discuss the mathematical calculation of the derivatives and further show that they can be computed efficiently, at the same cost as one iteration of ALS. Computational experiments demonstrate that the gradient-based optimization methods are much more accurate than ALS and orders of magnitude faster than NLS.
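
    For orientation, the ALS baseline discussed above can be sketched in plain NumPy for a three-way tensor: each factor matrix is updated in turn by solving a linear least-squares problem against the matching unfolding. This is a bare, unregularized ALS loop written for illustration, not the gradient-based method proposed in the report.

      import numpy as np

      def unfold(X, mode):
          """Mode-n unfolding of a 3-way tensor (C-order over the remaining axes)."""
          return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

      def khatri_rao(U, V):
          """Column-wise Khatri-Rao product: row (i, j) -> U[i, :] * V[j, :]."""
          I, R = U.shape
          J, _ = V.shape
          return (U[:, None, :] * V[None, :, :]).reshape(I * J, R)

      def cp_als(X, rank, n_iter=200, seed=0):
          """CANDECOMP/PARAFAC by alternating least squares for a 3-way tensor X."""
          rng = np.random.default_rng(seed)
          A, B, C = (rng.standard_normal((dim, rank)) for dim in X.shape)
          for _ in range(n_iter):
              A = np.linalg.lstsq(khatri_rao(B, C), unfold(X, 0).T, rcond=None)[0].T
              B = np.linalg.lstsq(khatri_rao(A, C), unfold(X, 1).T, rcond=None)[0].T
              C = np.linalg.lstsq(khatri_rao(A, B), unfold(X, 2).T, rcond=None)[0].T
          X_hat = np.einsum("ir,jr,kr->ijk", A, B, C)
          return A, B, C, np.linalg.norm(X - X_hat) / np.linalg.norm(X)

      # Recover a synthetic rank-3 tensor plus a little noise.
      rng = np.random.default_rng(1)
      A0, B0, C0 = rng.standard_normal((20, 3)), rng.standard_normal((15, 3)), rng.standard_normal((10, 3))
      X = np.einsum("ir,jr,kr->ijk", A0, B0, C0) + 0.01 * rng.standard_normal((20, 15, 10))
      print(cp_als(X, rank=3)[-1])   # relative fit error, should end up close to the noise level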

  7. Comparative analysis of microarray data in Arabidopsis transcriptome during compatible interactions with plant viruses

    USDA-ARS?s Scientific Manuscript database

    To analyze transcriptome response to virus infection, we have assembled currently available microarray data on changes in gene expression levels in compatible Arabidopsis-virus interactions. We used the mean r (Pearson’s correlation coefficient) for neighboring pairs to estimate pairwise local simil...

  8. Near-road measurements for nitrogen dioxide and its association with traffic exposure zones

    EPA Science Inventory

    Near-road measurements for nitrogen dioxide (NO2) using passive air samplers were collected weekly in traffic exposure zones (TEZs) in the Research Triangle area of North Carolina (USA) during Fall 2014. Land use regression (LUR) analysis and pairwise comparisons of T...

  9. Heterogeneous fractionation profiles of meta-analytic coactivation networks.

    PubMed

    Laird, Angela R; Riedel, Michael C; Okoe, Mershack; Jianu, Radu; Ray, Kimberly L; Eickhoff, Simon B; Smith, Stephen M; Fox, Peter T; Sutherland, Matthew T

    2017-04-01

    Computational cognitive neuroimaging approaches can be leveraged to characterize the hierarchical organization of distributed, functionally specialized networks in the human brain. To this end, we performed large-scale mining across the BrainMap database of coordinate-based activation locations from over 10,000 task-based experiments. Meta-analytic coactivation networks were identified by jointly applying independent component analysis (ICA) and meta-analytic connectivity modeling (MACM) across a wide range of model orders (i.e., d=20-300). We then iteratively computed pairwise correlation coefficients for consecutive model orders to compare spatial network topologies, ultimately yielding fractionation profiles delineating how "parent" functional brain systems decompose into constituent "child" sub-networks. Fractionation profiles differed dramatically across canonical networks: some exhibited complex and extensive fractionation into a large number of sub-networks across the full range of model orders, whereas others exhibited little to no decomposition as model order increased. Hierarchical clustering was applied to evaluate this heterogeneity, yielding three distinct groups of network fractionation profiles: high, moderate, and low fractionation. BrainMap-based functional decoding of resultant coactivation networks revealed a multi-domain association regardless of fractionation complexity. Rather than emphasize a cognitive-motor-perceptual gradient, these outcomes suggest the importance of inter-lobar connectivity in functional brain organization. We conclude that high fractionation networks are complex and comprised of many constituent sub-networks reflecting long-range, inter-lobar connectivity, particularly in fronto-parietal regions. In contrast, low fractionation networks may reflect persistent and stable networks that are more internally coherent and exhibit reduced inter-lobar communication. Copyright © 2017 Elsevier Inc. All rights reserved.

  10. Heterogeneous fractionation profiles of meta-analytic coactivation networks

    PubMed Central

    Laird, Angela R.; Riedel, Michael C.; Okoe, Mershack; Jianu, Radu; Ray, Kimberly L.; Eickhoff, Simon B.; Smith, Stephen M.; Fox, Peter T.; Sutherland, Matthew T.

    2017-01-01

    Computational cognitive neuroimaging approaches can be leveraged to characterize the hierarchical organization of distributed, functionally specialized networks in the human brain. To this end, we performed large-scale mining across the BrainMap database of coordinate-based activation locations from over 10,000 task-based experiments. Meta-analytic coactivation networks were identified by jointly applying independent component analysis (ICA) and meta-analytic connectivity modeling (MACM) across a wide range of model orders (i.e., d = 20 to 300). We then iteratively computed pairwise correlation coefficients for consecutive model orders to compare spatial network topologies, ultimately yielding fractionation profiles delineating how “parent” functional brain systems decompose into constituent “child” sub-networks. Fractionation profiles differed dramatically across canonical networks: some exhibited complex and extensive fractionation into a large number of sub-networks across the full range of model orders, whereas others exhibited little to no decomposition as model order increased. Hierarchical clustering was applied to evaluate this heterogeneity, yielding three distinct groups of network fractionation profiles: high, moderate, and low fractionation. BrainMap-based functional decoding of resultant coactivation networks revealed a multi-domain association regardless of fractionation complexity. Rather than emphasize a cognitive-motor-perceptual gradient, these outcomes suggest the importance of inter-lobar connectivity in functional brain organization. We conclude that high fractionation networks are complex and comprised of many constituent sub-networks reflecting long-range, inter-lobar connectivity, particularly in fronto-parietal regions. In contrast, low fractionation networks may reflect persistent and stable networks that are more internally coherent and exhibit reduced inter-lobar communication. PMID:28222386

  11. Rhythmic Components in Extracranial Brain Signals Reveal Multifaceted Task Modulation of Overlapping Neuronal Activity

    PubMed Central

    van Ede, Freek; Maris, Eric

    2016-01-01

    Oscillatory neuronal activity is implicated in many cognitive functions, and its phase coupling between sensors may reflect networks of communicating neuronal populations. Oscillatory activity is often studied using extracranial recordings and compared between experimental conditions. This is challenging, because there is overlap between sensor-level activity generated by different sources, and this can obscure differential experimental modulations of these sources. Additionally, in extracranial data, sensor-level phase coupling not only reflects communicating populations, but can also be generated by a current dipole, whose sensor-level phase coupling does not reflect source-level interactions. We present a novel method, which is capable of separating and characterizing sources on the basis of their phase coupling patterns as a function of space, frequency and time (trials). Importantly, this method depends on a plausible model of a neurobiological rhythm. We present this model and an accompanying analysis pipeline. Next, we demonstrate our approach, using magnetoencephalographic (MEG) recordings during a cued tactile detection task as a case study. We show that the extracted components have overlapping spatial maps and frequency content, which are difficult to resolve using conventional pairwise measures. Because our decomposition also provides trial loadings, components can be readily contrasted between experimental conditions. Strikingly, we observed heterogeneity in alpha and beta sources with respect to whether their activity was suppressed or enhanced as a function of attention and performance, and this happened both in task relevant and irrelevant regions. This heterogeneity contrasts with the common view that alpha and beta amplitude over sensory areas are always negatively related to attention and performance. PMID:27336159

  12. Fast and accurate estimation of the covariance between pairwise maximum likelihood distances.

    PubMed

    Gil, Manuel

    2014-01-01

    Pairwise evolutionary distances are a model-based summary statistic for a set of molecular sequences. They represent the leaf-to-leaf path lengths of the underlying phylogenetic tree. Estimates of pairwise distances with overlapping paths covary because of shared mutation events. It is desirable to take this covariance structure into account to increase precision in any process that compares or combines distances. This paper introduces a fast estimator for the covariance of two pairwise maximum likelihood distances, estimated under general Markov models. The estimator is based on a conjecture (going back to Nei & Jin, 1989) which links the covariance to path lengths. It is proven here under a simple symmetric substitution model. A simulation shows that the estimator outperforms previously published ones in terms of the mean squared error.

  13. Fast and accurate estimation of the covariance between pairwise maximum likelihood distances

    PubMed Central

    2014-01-01

    Pairwise evolutionary distances are a model-based summary statistic for a set of molecular sequences. They represent the leaf-to-leaf path lengths of the underlying phylogenetic tree. Estimates of pairwise distances with overlapping paths covary because of shared mutation events. It is desirable to take this covariance structure into account to increase precision in any process that compares or combines distances. This paper introduces a fast estimator for the covariance of two pairwise maximum likelihood distances, estimated under general Markov models. The estimator is based on a conjecture (going back to Nei & Jin, 1989) which links the covariance to path lengths. It is proven here under a simple symmetric substitution model. A simulation shows that the estimator outperforms previously published ones in terms of the mean squared error. PMID:25279263

  14. Perfluoropolyalkylether decomposition on catalytic aluminas

    NASA Technical Reports Server (NTRS)

    Morales, Wilfredo

    1994-01-01

    The decomposition of Fomblin Z25, a commercial perfluoropolyalkylether liquid lubricant, was studied using the Penn State Micro-oxidation Test, and a thermal gravimetric/differential scanning calorimetry unit. The micro-oxidation test was conducted using 440C stainless steel and pure iron metal catalyst specimens, whereas the thermal gravimetric/differential scanning calorimetry tests were conducted using catalytic alumina pellets. Analysis of the thermal data, high pressure liquid chromatography data, and x-ray photoelectron spectroscopy data support evidence that there are two different decomposition mechanisms for Fomblin Z25, and that reductive sites on the catalytic surfaces are responsible for the decomposition of Fomblin Z25.

  15. Analysis of Self-Excited Combustion Instabilities Using Decomposition Techniques

    DTIC Science & Technology

    2016-07-05

    [Indexing excerpt; the record does not include a full abstract.] Proper orthogonal decomposition and dynamic mode decomposition are evaluated for the study of self-excited longitudinal combustion instabilities in laboratory-scaled single-element gas turbine and rocket … (truncated in the source). In addition, the capabilities of the methods to deal with data sets of different spatial extents and temporal resolution are evaluated. DOI: 10.2514/1.J054557.

  16. Modelling the influence of ectomycorrhizal decomposition on plant nutrition and soil carbon sequestration in boreal forest ecosystems.

    PubMed

    Baskaran, Preetisri; Hyvönen, Riitta; Berglund, S Linnea; Clemmensen, Karina E; Ågren, Göran I; Lindahl, Björn D; Manzoni, Stefano

    2017-02-01

    Tree growth in boreal forests is limited by nitrogen (N) availability. Most boreal forest trees form symbiotic associations with ectomycorrhizal (ECM) fungi, which improve the uptake of inorganic N and also have the capacity to decompose soil organic matter (SOM) and to mobilize organic N ('ECM decomposition'). To study the effects of 'ECM decomposition' on ecosystem carbon (C) and N balances, we performed a sensitivity analysis on a model of C and N flows between plants, SOM, saprotrophs, ECM fungi, and inorganic N stores. The analysis indicates that C and N balances were sensitive to model parameters regulating ECM biomass and decomposition. Under low N availability, the optimal C allocation to ECM fungi, above which the symbiosis switches from mutualism to parasitism, increases with increasing relative involvement of ECM fungi in SOM decomposition. Under low N conditions, increased ECM organic N mining promotes tree growth but decreases soil C storage, leading to a negative correlation between C stores above- and below-ground. The interplay between plant production and soil C storage is sensitive to the partitioning of decomposition between ECM fungi and saprotrophs. Better understanding of interactions between functional guilds of soil fungi may significantly improve predictions of ecosystem responses to environmental change. © 2016 The Authors. New Phytologist © 2016 New Phytologist Trust.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Glascoe, E A; Zaug, J M; Burnham, A K

    The effect of pressure on the thermal decomposition rate of the energetic material HMX was studied. HMX was precompressed in a diamond anvil cell (DAC) and heated at various rates. The parent species population was monitored as a function of time and temperature using Fourier transform infrared (FTIR) spectroscopy. Decomposition rates were determined by fitting the fraction reacted to the extended-Prout-Tompkins nucleation-growth model and the Friedman isoconversional method. The results of these experiments and analysis indicate that pressure accelerates the decomposition at low to moderate pressures (i.e. between ambient pressure and 1 GPa) and decelerates the decomposition at higher pressures. The decomposition acceleration is attributed to pressure-enhanced autocatalysis, whereas the deceleration at high pressures is attributed to pressure-inhibiting bond homolysis step(s), which would result in an increase in volume. These results indicate that both β- and δ-phase HMX are sensitive to pressure in their thermally induced decomposition kinetics.

  18. GHGs and air pollutants embodied in China’s international trade: Temporal and spatial index decomposition analysis

    PubMed Central

    Liu, Zhengyan; Mao, Xianqiang; Song, Peng

    2017-01-01

    Temporal index decomposition analysis and spatial index decomposition analysis were applied to understand the driving forces of the emissions embodied in China’s exports and net exports during 2002–2011, respectively. The accumulated emissions embodied in exports accounted for approximately 30% of the total emissions in China; although the contribution of the sectoral total emissions intensity (technique effect) declined, the scale effect was largely responsible for the mounting emissions associated with export, and the composition effect played a largely insignificant role. Calculations of the emissions embodied in net exports suggest that China is generally in an environmentally inferior position compared with its major trade partners. The differences in the economy-wide emission intensities between China and its major trade partners were the biggest contribution to this reality, and the trade balance effect played a less important role. However, a lower degree of specialization in pollution intensive products in exports than in imports helped to reduce slightly the emissions embodied in net exports. The temporal index decomposition analysis results suggest that China should take effective measures to optimize export and supply-side structure and reduce the total emissions intensity. According to spatial index decomposition analysis, it is suggested that a more aggressive import policy was useful for curbing domestic and global emissions, and the transfer of advanced production technologies and emission control technologies from developed to developing countries should be a compulsory global environmental policy option to mitigate the possible leakage of pollution emissions caused by international trade. PMID:28441399

  19. Iterative filtering decomposition based on local spectral evolution kernel

    PubMed Central

    Wang, Yang; Wei, Guo-Wei; Yang, Siyang

    2011-01-01

    Synthesizing information, achieving understanding, and deriving insight from increasingly massive, time-varying, noisy and possibly conflicting data sets are some of the most challenging tasks in the present information age. Traditional technologies, such as the Fourier transform and wavelet multi-resolution analysis, are inadequate to handle all of the above-mentioned tasks. The empirical mode decomposition (EMD) has emerged as a new powerful tool for resolving many challenging problems in data processing and analysis. Recently, an iterative filtering decomposition (IFD) has been introduced to address the stability and efficiency problems of the EMD. Another data analysis technique is the local spectral evolution kernel (LSEK), which provides a nearly perfect low-pass filter with desirable time-frequency localizations. The present work utilizes the LSEK to further stabilize the IFD, and offers an efficient, flexible and robust scheme for information extraction, complexity reduction, and signal and image understanding. The performance of the present LSEK-based IFD is intensively validated over a wide range of data processing tasks, including mode decomposition, analysis of time-varying data, information extraction from nonlinear dynamic systems, etc. The utility and robustness of the proposed LSEK-based IFD are demonstrated via a large number of applications, such as the analysis of stock market data, the decomposition of ocean wave magnitudes, the understanding of physiologic signals and information recovery from noisy images. The performance of the proposed method is compared with that of existing methods in the literature. Our results indicate that the LSEK-based IFD improves both the efficiency and the stability of conventional EMD algorithms. PMID:22350559
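
    The core iteration behind iterative filtering can be sketched compactly: repeatedly subtract a low-pass "moving average" of the signal until what remains oscillates around zero, call that a component, remove it from the signal and repeat. The sketch below uses a simple double moving-average (triangular) mask with hand-picked window lengths in place of the LSEK-based filter developed in the paper, so it illustrates the iteration scheme only.

      import numpy as np

      def lowpass(x, window):
          """Double moving average (triangular mask); its frequency response stays in
          [0, 1], which keeps the subtraction iteration below from diverging."""
          box = np.ones(window) / window
          kernel = np.convolve(box, box)            # triangular kernel, length 2*window - 1
          pad = len(kernel) // 2
          xp = np.pad(x, pad, mode="reflect")
          return np.convolve(xp, kernel, mode="valid")

      def iterative_filtering(x, windows, n_inner=30):
          """Decompose x into one oscillatory component per window length plus a residual.
          Each component is approximated by iterating s <- s - lowpass(s, window) on the
          running residual, which removes the local mean and keeps the oscillation."""
          residual = np.asarray(x, float).copy()
          components = []
          for w in windows:
              s = residual.copy()
              for _ in range(n_inner):
                  s = s - lowpass(s, w)
              components.append(s)
              residual = residual - s
          return components, residual

      # Toy signal: 50 Hz oscillation + 5 Hz oscillation + linear trend + noise.
      t = np.linspace(0.0, 1.0, 2000)
      rng = np.random.default_rng(0)
      x = (np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 5 * t)
           + 2.0 * t + 0.05 * rng.standard_normal(t.size))
      comps, trend = iterative_filtering(x, windows=[40, 400])
      print(len(comps), np.round(trend[:3], 2))     # two extracted components plus the trend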

  20. Preliminary Classification of Novel Hemorrhagic Fever-Causing Viruses Using Sequence-Based PAirwise Sequence Comparison (PASC) Analysis.

    PubMed

    Bào, Yīmíng; Kuhn, Jens H

    2018-01-01

    During the last decade, genome sequence-based classification of viruses has become increasingly prominent. Viruses can even be classified based on coding-complete genome sequence data alone. Nevertheless, classification remains arduous, as experts are required to establish phylogenetic trees to depict the evolutionary relationships of such sequences for preliminary taxonomic placement. Pairwise sequence comparison (PASC) of genomes is one of several novel methods for establishing relationships among viruses. This method, provided by the US National Center for Biotechnology Information as an open-access tool, circumvents phylogenetics, and yet PASC results are often in agreement with those of phylogenetic analyses. Computationally inexpensive, PASC can be easily performed by non-taxonomists. Here we describe how to use the PASC tool for the preliminary classification of novel viral hemorrhagic fever-causing viruses.

  1. Non-rigid multi-frame registration of cell nuclei in live cell fluorescence microscopy image data.

    PubMed

    Tektonidis, Marco; Kim, Il-Han; Chen, Yi-Chun M; Eils, Roland; Spector, David L; Rohr, Karl

    2015-01-01

    The analysis of the motion of subcellular particles in live cell microscopy images is essential for understanding biological processes within cells. For accurate quantification of the particle motion, compensation of the motion and deformation of the cell nucleus is required. We introduce a non-rigid multi-frame registration approach for live cell fluorescence microscopy image data. Compared to existing approaches using pairwise registration, our approach exploits information from multiple consecutive images simultaneously to improve the registration accuracy. We present three intensity-based variants of the multi-frame registration approach and we investigate two different temporal weighting schemes. The approach has been successfully applied to synthetic and live cell microscopy image sequences, and an experimental comparison with non-rigid pairwise registration has been carried out. Copyright © 2014 Elsevier B.V. All rights reserved.

  2. Decomposition and particle release of a carbon nanotube/epoxy nanocomposite at elevated temperatures

    NASA Astrophysics Data System (ADS)

    Schlagenhauf, Lukas; Kuo, Yu-Ying; Bahk, Yeon Kyoung; Nüesch, Frank; Wang, Jing

    2015-11-01

    Carbon nanotubes (CNTs) as fillers in nanocomposites have attracted significant attention, and one of the applications is to use the CNTs as flame retardants. For such nanocomposites, possible release of CNTs at elevated temperatures after decomposition of the polymer matrix poses potential health threats. We investigated the airborne particle release from a decomposing multi-walled carbon nanotube (MWCNT)/epoxy nanocomposite in order to measure any possible release of MWCNTs. An experimental set-up was established that allows the samples to be decomposed in a furnace by exposure to increasing temperature at a constant heating rate under an ambient air or nitrogen atmosphere. The particle analysis was performed by aerosol measurement devices and by transmission electron microscopy (TEM) of collected particles. Further, by applying a thermal denuder, it was also possible to measure only the non-volatile particles. The tested samples were characterized and the decomposition kinetics determined by thermogravimetric analysis (TGA). Particle release was investigated for different samples: a neat epoxy, nanocomposites with 0.1 and 1 wt% MWCNTs, and nanocomposites with functionalized MWCNTs. The results showed that the added MWCNTs had little effect on the decomposition kinetics of the investigated samples, but the weight of the remaining residues after decomposition was influenced significantly. The measurements with decomposition in different atmospheres showed a release of a higher number of particles at temperatures below 300 °C when air was used. Analysis of collected particles by TEM revealed that no detectable amount of MWCNTs was released, but micrometer-sized fibrous particles were collected.

  3. s-core network decomposition: A generalization of k-core analysis to weighted networks

    NASA Astrophysics Data System (ADS)

    Eidsaa, Marius; Almaas, Eivind

    2013-12-01

    A broad range of systems spanning biology, technology, and social phenomena may be represented and analyzed as complex networks. Recent studies of such networks using k-core decomposition have uncovered groups of nodes that play important roles. Here, we present s-core analysis, a generalization of k-core (or k-shell) analysis to complex networks where the links have different strengths or weights. We demonstrate the s-core decomposition approach on two random networks (ER and configuration model with scale-free degree distribution) where the link weights are (i) random, (ii) correlated, and (iii) anticorrelated with the node degrees. Finally, we apply the s-core decomposition approach to the protein-interaction network of the yeast Saccharomyces cerevisiae in the context of two gene-expression experiments: oxidative stress in response to cumene hydroperoxide (CHP), and fermentation stress response (FSR). We find that the innermost s-cores are (i) different from innermost k-cores, (ii) different for the two stress conditions CHP and FSR, and (iii) enriched with proteins whose biological functions give insight into how yeast manages these specific stresses.
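
    A minimal sketch of the s-core idea on a toy weighted graph, assuming node strength is the sum of incident edge weights as described above; this is an illustration, not the authors' reference implementation.

      import networkx as nx

      def s_core(G: nx.Graph, s: float) -> nx.Graph:
          """Return the maximal subgraph in which every node has strength >= s."""
          H = G.copy()
          while True:
              weak = [n for n, strength in H.degree(weight="weight") if strength < s]
              if not weak:
                  return H
              H.remove_nodes_from(weak)

      G = nx.Graph()
      G.add_weighted_edges_from([("a", "b", 0.5), ("b", "c", 2.0), ("c", "d", 1.5)])
      print(sorted(s_core(G, 2.0).nodes()))  # ['b', 'c']: the only nodes that keep strength >= 2.0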

  4. Catalytic and inhibiting effects of lithium peroxide and hydroxide on sodium chlorate decomposition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cannon, J.C.; Zhang, Y.

    1995-09-01

    Chemical oxygen generators based on sodium chlorate and lithium perchlorate are used in airplanes, submarines, diving, and mine rescue. Catalytic decomposition of sodium chlorate in the presence of cobalt oxide, lithium peroxide, and lithium hydroxide is studied using thermal gravimetric analysis. Lithium peroxide and hydroxide are both moderately active catalysts for the decomposition of sodium chlorate when used alone, and inhibitors when used with the more active catalyst cobalt oxide.

  5. Detection of decomposition volatile organic compounds in soil following removal of remains from a surface deposition site.

    PubMed

    Perrault, Katelynn A; Stefanuto, Pierre-Hugues; Stuart, Barbara H; Rai, Tapan; Focant, Jean-François; Forbes, Shari L

    2015-09-01

    Cadaver-detection dogs use volatile organic compounds (VOCs) to search for human remains, including those deposited on or beneath soil. Soil can act as a sink for VOCs, causing loading of decomposition VOCs in the soil following soft tissue decomposition. The objective of this study was to chemically profile decomposition VOCs from surface decomposition sites after remains were removed from their primary location. Pig carcasses were used as human analogues and were deposited on a soil surface to decompose for 3 months. The remains were then removed from each site and VOCs were collected from the soil for 7 months thereafter and analyzed by comprehensive two-dimensional gas chromatography-time-of-flight mass spectrometry (GC×GC-TOFMS). Decomposition VOCs diminished within 6 weeks, and hydrocarbons were the most persistent compound class. Decomposition VOCs could still be detected in the soil after 7 months using principal component analysis. This study demonstrated that the decomposition VOC profile, while detectable by GC×GC-TOFMS in the soil, was considerably reduced and altered in composition upon removal of remains. This study provides chemical reference data for future investigations of canine alert behavior in scenarios involving scattered or scavenged remains.

  6. MIDAS: software for analysis and visualisation of interallelic disequilibrium between multiallelic markers

    PubMed Central

    Gaunt, Tom R; Rodriguez, Santiago; Zapata, Carlos; Day, Ian NM

    2006-01-01

    Background Various software tools are available for the display of pairwise linkage disequilibrium across multiple single nucleotide polymorphisms. The HapMap project also presents these graphics within their website. However, these approaches are limited in their use of data from multiallelic markers and provide limited information in a graphical form. Results We have developed a software package (MIDAS – Multiallelic Interallelic Disequilibrium Analysis Software) for the estimation and graphical display of interallelic linkage disequilibrium. Linkage disequilibrium is analysed for each allelic combination (of one allele from each of two loci), between all pairwise combinations of any type of multiallelic loci in a contig (or any set) of many loci (including single nucleotide polymorphisms, microsatellites, minisatellites and haplotypes). Data are presented graphically in a novel and informative way, and can also be exported in tabular form for other analyses. This approach facilitates visualisation of patterns of linkage disequilibrium across genomic regions, analysis of the relationships between different alleles of multiallelic markers and inferences about patterns of evolution and selection. Conclusion MIDAS is a linkage disequilibrium analysis program with a comprehensive graphical user interface providing novel views of patterns of linkage disequilibrium between all types of multiallelic and biallelic markers. PMID:16643648

  7. Non-pairwise additivity of the leading-order dispersion energy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hollett, Joshua W., E-mail: j.hollett@uwinnipeg.ca

    2015-02-28

    The leading-order (i.e., dipole-dipole) dispersion energy is calculated for one-dimensional (1D) and two-dimensional (2D) infinite lattices, and an infinite 1D array of infinitely long lines, of doubly occupied locally harmonic wells. The dispersion energy is decomposed into pairwise and non-pairwise additive components. By varying the force constant and separation of the wells, the non-pairwise additive contribution to the dispersion energy is shown to depend on the overlap of density between neighboring wells. As well separation is increased, the non-pairwise additivity of the dispersion energy decays. The different rates of decay for 1D and 2D lattices of wells are explained in terms of a Jacobian effect that influences the number of nearest neighbors. For an array of infinitely long lines of wells spaced 5 bohrs apart, and an inter-well spacing of 3 bohrs within a line, the non-pairwise additive component of the leading-order dispersion energy is −0.11 kJ mol⁻¹ well⁻¹, which is 7% of the total. The polarizability of the wells and the density overlap between them are small in comparison to that of the atomic densities that arise from the molecular density partitioning used in post-density-functional theory (DFT) damped dispersion corrections, or DFT-D methods. Therefore, the nonadditivity of the leading-order dispersion observed here is a conservative estimate of that in molecular clusters.

  8. Non-linear analytic and coanalytic problems ( L_p-theory, Clifford analysis, examples)

    NASA Astrophysics Data System (ADS)

    Dubinskii, Yu A.; Osipenko, A. S.

    2000-02-01

    Two kinds of new mathematical model of variational type are put forward: non-linear analytic and coanalytic problems. The formulation of these non-linear boundary-value problems is based on a decomposition of the complete scale of Sobolev spaces into the "orthogonal" sum of analytic and coanalytic subspaces. A similar decomposition is considered in the framework of Clifford analysis. Explicit examples are presented.

  9. Thermal decomposition behavior of nano/micro bimodal feedstock with different solids loading

    NASA Astrophysics Data System (ADS)

    Oh, Joo Won; Lee, Won Sik; Park, Seong Jin

    2018-01-01

    Debinding is one of the most critical processes in powder injection molding. Parts are vulnerable to defect formation during debinding, and the long processing time of debinding reduces the production rate of the whole process. In order to determine the optimal debinding conditions, the decomposition behavior of the feedstock should be understood. Since nano powder affects the decomposition behavior of feedstock, its effect needs to be investigated for nano/micro bimodal feedstocks. In this research, the effect of nano powder on the decomposition behavior of nano/micro bimodal feedstock has been studied. Bimodal powders were fabricated with different ratios of nano powder, and the critical solids loading of each powder was measured by torque rheometer. Three different feedstocks were fabricated for each powder depending on the solids loading condition. Thermogravimetric analysis (TGA) was carried out to analyze the thermal decomposition behavior of the feedstocks, and the decomposition activation energy was calculated. The results indicated that nano powder had a limited effect on feedstocks at solids loadings below the optimal range. In contrast, it strongly influenced the decomposition behavior at the optimal solids loading by causing polymer chain scission as a result of the high viscosity.

  10. Search for memory effects in methane hydrate: structure of water before hydrate formation and after hydrate decomposition.

    PubMed

    Buchanan, Piers; Soper, Alan K; Thompson, Helen; Westacott, Robin E; Creek, Jefferson L; Hobson, Greg; Koh, Carolyn A

    2005-10-22

    Neutron diffraction with H/D isotope substitution has been used to study the formation and decomposition of the methane clathrate hydrate. Using this atomistic technique coupled with simultaneous gas consumption measurements, we have successfully tracked the formation of the sI methane hydrate from a water/gas mixture and then the subsequent decomposition of the hydrate from initiation to completion. These studies demonstrate that neutron diffraction with simultaneous gas consumption measurements provides a powerful method for studying clathrate hydrate crystal growth and decomposition. We have also used neutron diffraction to examine the water structure before the hydrate growth and after the hydrate decomposition. From the neutron-scattering curves and the empirical potential structure refinement analysis of the data, we find that there is no significant difference between the structure of water before the hydrate formation and the structure of water after the hydrate decomposition. Nor is there any significant change to the methane hydration shell. These results are discussed in the context of widely held views on the existence of memory effects after the hydrate decomposition.

  11. Intransitivity is infrequent and fails to promote annual plant coexistence without pairwise niche differences.

    PubMed

    Godoy, Oscar; Stouffer, Daniel B; Kraft, Nathan J B; Levine, Jonathan M

    2017-05-01

    Intransitive competition is often projected to be a widespread mechanism of species coexistence in ecological communities. However, it is unknown how much of the coexistence we observe in nature results from this mechanism when species interactions are also stabilized by pairwise niche differences. We combined field-parameterized models of competition among 18 annual plant species with tools from network theory to quantify the prevalence of intransitive competitive relationships. We then analyzed the predicted outcome of competitive interactions with and without pairwise niche differences. Intransitive competition was found for just 15-19% of the 816 possible triplets, and this mechanism was never sufficient to stabilize the coexistence of the triplet when the pairwise niche differences between competitors were removed. Of the transitive and intransitive triplets, only four were predicted to coexist and these were more similar in multidimensional trait space defined by 11 functional traits than non-coexisting triplets. Our results argue that intransitive competition may be less frequent than recently posed, and that even when it does operate, pairwise niche differences may be key to possible coexistence. © 2017 by the Ecological Society of America.
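
    For intuition, a toy check for intransitive (rock-paper-scissors) triplets in a matrix of pairwise competitive outcomes could look like the sketch below; the dominance relations are hypothetical, and this is not the authors' field-parameterized model.

      from itertools import combinations

      # beats[i][j] is True if species i excludes species j in pairwise competition.
      beats = {
          "A": {"B": True,  "C": False},
          "B": {"A": False, "C": True},
          "C": {"A": True,  "B": False},
      }

      def is_intransitive(i, j, k) -> bool:
          """A triplet is intransitive if its pairwise dominance relations form a cycle."""
          return (beats[i][j] and beats[j][k] and beats[k][i]) or \
                 (beats[j][i] and beats[k][j] and beats[i][k])

      print(sum(is_intransitive(*t) for t in combinations(beats, 3)))  # 1 for this toy community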

  12. Keratin decomposition by trogid beetles: evidence from a feeding experiment and stable isotope analysis

    NASA Astrophysics Data System (ADS)

    Sugiura, Shinji; Ikeda, Hiroshi

    2014-03-01

    The decomposition of vertebrate carcasses is an important ecosystem function. Soft tissues of dead vertebrates are rapidly decomposed by diverse animals. However, decomposition of hard tissues such as hairs and feathers is much slower because only a few animals can digest keratin, a protein that is concentrated in hairs and feathers. Although beetles of the family Trogidae are considered keratin feeders, their ecological function has rarely been explored. Here, we investigated the keratin-decomposition function of trogid beetles in heron-breeding colonies where keratin was frequently supplied as feathers. Three trogid species were collected from the colonies and observed feeding on heron feathers under laboratory conditions. We also measured the nitrogen (δ15N) and carbon (δ13C) stable isotope ratios of two trogid species that were maintained on a constant diet (feathers from one heron individual) during 70 days under laboratory conditions. We compared the isotopic signatures of the trogids with the feathers to investigate isotopic shifts from the feathers to the consumers for δ15N and δ13C. We used mixing models (MixSIR and SIAR) to estimate the main diets of individual field-collected trogid beetles. The analysis indicated that heron feathers were more important as food for trogid beetles than were soft tissues under field conditions. Together, the feeding experiment and stable isotope analysis provided strong evidence of keratin decomposition by trogid beetles.

  13. Development of Sizing Systems for Navy Women’s Uniforms

    DTIC Science & Technology

    1991-12-01

    sample. Table 1a indicates the distance (D²) between the racial groups. Group 1 is White, Group 2 is Black, and Group 4 is Hispanic. There were too... Discriminant analysis, pairwise squared generalized distances between groups: D²(i|j) = (X̄i − X̄j)′ COV⁻¹ (X̄i − X̄j).
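
    For reference, the pairwise squared generalized (Mahalanobis) distance quoted in the excerpt can be computed as in the following sketch; the group mean vectors and pooled covariance are hypothetical.

      import numpy as np

      def squared_generalized_distance(xi, xj, cov):
          """D^2(i|j) = (x_i - x_j)' COV^{-1} (x_i - x_j)."""
          diff = np.asarray(xi) - np.asarray(xj)
          return float(diff @ np.linalg.inv(cov) @ diff)

      pooled_cov = np.array([[4.0, 1.0], [1.0, 2.0]])
      print(squared_generalized_distance([170.0, 65.0],   # mean vector of group i
                                         [165.0, 60.0],   # mean vector of group j
                                         pooled_cov))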

  14. Saving the Best for Last? A Cross-Species Analysis of Choices between Reinforcer Sequences

    ERIC Educational Resources Information Center

    Andrade, Leonardo F.; Hackenberg, Timothy D.

    2012-01-01

    Two experiments were conducted to compare choices between sequences of reinforcers in pigeon (Experiment 1) and human (Experiment 2) subjects, using functionally analogous procedures. The subjects made pairwise choices among 3 sequence types, all of which provided the same overall reinforcement rate, but differed in their temporal patterning.…

  15. Comparison of Human and Latent Semantic Analysis (LSA) Judgements of Pairwise Document Similarities for a News Corpus

    DTIC Science & Technology

    2004-09-01

  16. A Study of Retention Trends of International Students at a Southwestern Public University

    ERIC Educational Resources Information Center

    Wong Davis, Kristina Marie

    2012-01-01

    Literature on factors contributing to the retention of international students remained limited. The purpose of this study was to examine the factors related to retention of international undergraduate degree seeking students through conducting pairwise correlational analysis to test the relationship between retention and age, gender, country of…

  17. Data Stream Mining Based Dynamic Link Anomaly Analysis Using Paired Sliding Time Window Data

    DTIC Science & Technology

    2014-11-01

  18. Effect of congenital blindness on the semantic representation of some everyday concepts.

    PubMed

    Connolly, Andrew C; Gleitman, Lila R; Thompson-Schill, Sharon L

    2007-05-15

    This study explores how the lack of first-hand experience with color, as a result of congenital blindness, affects implicit judgments about "higher-order" concepts, such as "fruits and vegetables" (FV), but not others, such as "household items" (HHI). We demonstrate how the differential diagnosticity of color across our test categories interacts with visual experience to produce, in effect, a category-specific difference in implicit similarity. Implicit pair-wise similarity judgments were collected by using an odd-man-out triad task. Pair-wise similarities for both FV and for HHI were derived from this task and were compared by using cluster analysis and regression analyses. Color was found to be a significant component in the structure of implicit similarity for FV for sighted participants but not for blind participants; and this pattern remained even when the analysis was restricted to blind participants who had good explicit color knowledge of the stimulus items. There was also no evidence that either subject group used color knowledge in making decisions about HHI, nor was there an indication of any qualitative differences between blind and sighted subjects' judgments on HHI.

  19. The demodulated band transform

    PubMed Central

    Kovach, Christopher K.; Gander, Phillip E.

    2016-01-01

    Background Windowed Fourier decompositions (WFD) are widely used in measuring stationary and non-stationary spectral phenomena and in describing pairwise relationships among multiple signals. Although a variety of WFDs see frequent application in electrophysiological research, including the short-time Fourier transform, continuous wavelets, band-pass filtering and multitaper-based approaches, each carries certain drawbacks related to computational efficiency and spectral leakage. This work surveys the advantages of a WFD not previously applied in electrophysiological settings. New Methods A computationally efficient form of complex demodulation, the demodulated band transform (DBT), is described. Results DBT is shown to provide an efficient approach to spectral estimation with minimal susceptibility to spectral leakage. In addition, it lends itself well to adaptive filtering of non-stationary narrowband noise. Comparison with existing methods A detailed comparison with alternative WFDs is offered, with an emphasis on the relationship between DBT and Thomson's multitaper. DBT is shown to perform favorably in combining computational efficiency with minimal introduction of spectral leakage. Conclusion DBT is ideally suited to efficient estimation of both stationary and non-stationary spectral and cross-spectral statistics with minimal susceptibility to spectral leakage. These qualities are broadly desirable in many settings. PMID:26711370
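
    A minimal sketch of the complex-demodulation step that band transforms of this kind rest on (a generic illustration, not the authors' DBT code; the band edges and test signal are hypothetical):

      import numpy as np
      from scipy.signal import butter, filtfilt

      def demodulate(x, fs, f0, bandwidth):
          """Shift the band centred on f0 to baseband and low-pass to +/- bandwidth/2."""
          t = np.arange(len(x)) / fs
          shifted = x * np.exp(-2j * np.pi * f0 * t)        # frequency shift to baseband
          b, a = butter(4, (bandwidth / 2) / (fs / 2))      # low-pass Butterworth filter
          return filtfilt(b, a, shifted.real) + 1j * filtfilt(b, a, shifted.imag)

      fs = 1000.0
      t = np.arange(0, 2.0, 1.0 / fs)
      x = np.sin(2 * np.pi * 40.0 * t) + 0.5 * np.random.randn(len(t))
      band_40hz = demodulate(x, fs, f0=40.0, bandwidth=8.0)
      print(np.abs(band_40hz).mean())   # mean envelope amplitude of the 40 Hz band (roughly 0.5 here)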

  20. Proceedings of International Pyrotechnics Seminar (4th), Held at Steamboat Village, Colorado, 22-26 July 1974

    DTIC Science & Technology

    1974-06-17

    Contents excerpts: 11. Burning Rate Modifiers, D.R. Dillehay; 12. Spectroscopic Analysis of Azide Decomposition Products for use... Text excerpts: ...solid, and that they ignite a short distance from the surface. Furthermore, decomposition of sodium nitrate, which produces the gas to blow the... decreasing the thermal conductivity of the basic binary. Class 2 compounds, consisting of manganese oxides, catalyze the normal decomposition of...

  1. Circular Mixture Modeling of Color Distribution for Blind Stain Separation in Pathology Images.

    PubMed

    Li, Xingyu; Plataniotis, Konstantinos N

    2017-01-01

    In digital pathology, to address color variation and histological component colocalization in pathology images, stain decomposition is usually performed preceding spectral normalization and tissue component segmentation. This paper examines the problem of stain decomposition, which is naturally a nonnegative matrix factorization (NMF) problem in algebra, and introduces a systematic and analytical solution consisting of a circular color analysis module and an NMF-based computation module. Unlike the paradigm of existing stain decomposition algorithms, where stain proportions are computed from estimated stain spectra directly using a matrix inverse operation, the introduced solution estimates stain spectra and stain depths individually via probabilistic reasoning. Since the proposed method pays extra attention to achromatic pixels in color analysis and stain co-occurrence in pixel clustering, it achieves consistent and reliable stain decomposition with minimal decomposition residue. In particular, aware of the periodic and angular nature of hue, we propose the use of a circular von Mises mixture model to analyze the hue distribution, and provide a complete color-based pixel soft-clustering solution to address color mixing introduced by stain overlap. This innovation, combined with saturation-weighted computation, makes our study effective for weak stains and broad-spectrum stains. Extensive experimentation on multiple public pathology datasets suggests that our approach outperforms state-of-the-art blind stain separation methods in terms of decomposition effectiveness.

  2. Data analysis using a combination of independent component analysis and empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Lin, Shih-Lin; Tung, Pi-Cheng; Huang, Norden E.

    2009-06-01

    A combination of independent component analysis and empirical mode decomposition (ICA-EMD) is proposed in this paper to analyze low signal-to-noise ratio data. The advantages of the ICA-EMD combination are that ICA needs few sensory clues to separate the original source from unwanted noise, and that EMD can effectively separate the data into its constituent parts. The case studies reported here involve original sources contaminated by white Gaussian noise. The simulation results show that the ICA-EMD combination is an effective data analysis tool.

  3. A New View of Earthquake Ground Motion Data: The Hilbert Spectral Analysis

    NASA Technical Reports Server (NTRS)

    Huang, Norden; Busalacchi, Antonio J. (Technical Monitor)

    2000-01-01

    A brief description of the newly developed Empirical Mode Decomposition (EMD) and Hilbert Spectral Analysis (HSA) method will be given. The decomposition is adaptive and can be applied to both nonlinear and nonstationary data. An example of the method applied to a sample earthquake record will be given. The results indicate that low-frequency components, totally missed by Fourier analysis, are clearly identified by the new method. Comparisons with wavelet and windowed Fourier analysis show that the new method offers much better temporal and frequency resolution.

  4. Synthesis, crystal structure and catalytic effect on thermal decomposition of RDX and AP: An energetic coordination polymer [Pb₂(C₅H₃N₅O₅)₂(NMP)·NMP]ₙ

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Jin-jian; Yancheng Teachers College, Yancheng 224002; Liu, Zu-Liang, E-mail: liuzl@mail.njust.edu.cn

    2013-04-15

    An energetic lead(II) coordination polymer based on the ligand ANPyO has been synthesized and its crystal structure has been determined. The polymer was characterized by FT-IR spectroscopy, elemental analysis, DSC, and TG-DTG techniques. Thermal analysis shows that there are one endothermic process and two exothermic decomposition stages in the temperature range of 50–600 °C, with a final residue of 57.09%. The non-isothermal kinetics of the main exothermic decomposition have also been studied using Kissinger's and Ozawa–Doyle's methods; the apparent activation energy is calculated as 195.2 kJ/mol. Furthermore, DSC measurements show that the polymer has a significant catalytic effect on the thermal decomposition of ammonium perchlorate. - Graphical abstract: An energetic lead(II) coordination polymer of ANPyO has been synthesized, structurally characterized, and its properties tested. Highlights: ► We have synthesized and characterized an energetic lead(II) coordination polymer. ► We have measured its molecular structure and thermal decomposition. ► It has a significant catalytic effect on the thermal decomposition of AP.

  5. Allowing for uncertainty due to missing continuous outcome data in pairwise and network meta-analysis.

    PubMed

    Mavridis, Dimitris; White, Ian R; Higgins, Julian P T; Cipriani, Andrea; Salanti, Georgia

    2015-02-28

    Missing outcome data are commonly encountered in randomized controlled trials and hence may need to be addressed in a meta-analysis of multiple trials. A common and simple approach to deal with missing data is to restrict analysis to individuals for whom the outcome was obtained (complete case analysis). However, estimated treatment effects from complete case analyses are potentially biased if informative missing data are ignored. We develop methods for estimating meta-analytic summary treatment effects for continuous outcomes in the presence of missing data for some of the individuals within the trials. We build on a method previously developed for binary outcomes, which quantifies the degree of departure from a missing at random assumption via the informative missingness odds ratio. Our new model quantifies the degree of departure from missing at random using either an informative missingness difference of means or an informative missingness ratio of means, both of which relate the mean value of the missing outcome data to that of the observed data. We propose estimating the treatment effects, adjusted for informative missingness, and their standard errors by a Taylor series approximation and by a Monte Carlo method. We apply the methodology to examples of both pairwise and network meta-analysis with multi-arm trials. © 2014 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.

  6. Decomposition of toxicity emission changes on the demand and supply sides: empirical study of the US industrial sector

    NASA Astrophysics Data System (ADS)

    Fujii, Hidemichi; Okamoto, Shunsuke; Kagawa, Shigemi; Managi, Shunsuke

    2017-12-01

    This study investigated the changes in the toxicity of chemical emissions from the US industrial sector over the 1998-2009 period. Specifically, we employed a multiregional input-output analysis framework and integrated a supply-side index decomposition analysis (IDA) with a demand-side structural decomposition analysis (SDA) to clarify the main drivers of changes in the toxicity of production- and consumption-based chemical emissions. The results showed that toxic emissions from the US industrial sector decreased by 83% over the studied period because of pollution abatement efforts adopted by US industries. A variety of pollution abatement efforts were used by different industries, and cleaner production in the mining sector and the use of alternative materials in the manufacture of transportation equipment represented the most important efforts.
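
    For intuition, a two-factor log-mean Divisia (LMDI) decomposition, one common form of the supply-side IDA referred to above, can be sketched as follows; the activity and emission-intensity numbers are hypothetical.

      import math

      def lmdi_two_factor(act0, int0, act1, int1):
          """Split the change in emissions (= activity * intensity) into two additive effects."""
          e0, e1 = act0 * int0, act1 * int1
          L = (e1 - e0) / math.log(e1 / e0) if e1 != e0 else e0   # logarithmic mean weight
          activity_effect = L * math.log(act1 / act0)
          intensity_effect = L * math.log(int1 / int0)
          return activity_effect, intensity_effect

      d_act, d_int = lmdi_two_factor(act0=100.0, int0=2.0, act1=120.0, int1=1.2)
      print(d_act, d_int, d_act + d_int)   # the two effects sum exactly to the total change (-56)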

  7. Automatic Camera Calibration Using Multiple Sets of Pairwise Correspondences.

    PubMed

    Vasconcelos, Francisco; Barreto, Joao P; Boyer, Edmond

    2018-04-01

    We propose a new method to add an uncalibrated node into a network of calibrated cameras using only pairwise point correspondences. While previous methods perform this task using triple correspondences, these are often difficult to establish when there is limited overlap between different views. In such challenging cases we must rely on pairwise correspondences and our solution becomes more advantageous. Our method includes an 11-point minimal solution for the intrinsic and extrinsic calibration of a camera from pairwise correspondences with other two calibrated cameras, and a new inlier selection framework that extends the traditional RANSAC family of algorithms to sampling across multiple datasets. Our method is validated on different application scenarios where a lack of triple correspondences might occur: addition of a new node to a camera network; calibration and motion estimation of a moving camera inside a camera network; and addition of views with limited overlap to a Structure-from-Motion model.

  8. Shaped Ceria Nanocrystals Catalyze Efficient and Selective Para-Hydrogen-Enhanced Polarization.

    PubMed

    Zhao, Evan W; Zheng, Haibin; Zhou, Ronghui; Hagelin-Weaver, Helena E; Bowers, Clifford R

    2015-11-23

    Intense para-hydrogen-enhanced NMR signals are observed in the hydrogenation of propene and propyne over ceria nanocubes, nano-octahedra, and nanorods. The well-defined ceria shapes, synthesized by a hydrothermal method, expose different crystalline facets with various oxygen vacancy densities, which are known to play a role in hydrogenation and oxidation catalysis. While the catalytic activity of the hydrogenation of propene over ceria is strongly facet-dependent, the pairwise selectivity is low (2.4% at 375 °C), which is consistent with stepwise H atom transfer, and it is the same for all three nanocrystal shapes. Selective semi-hydrogenation of propyne over ceria nanocubes yields hyperpolarized propene with a similar pairwise selectivity (2.7% at 300 °C), indicating product formation predominantly by non-pairwise addition. Ceria is also shown to be an efficient pairwise replacement catalyst for propene. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  9. Pairwise Force Smoothed Particle Hydrodynamics model for multiphase flow: Surface tension and contact line dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tartakovsky, Alexandre M.; Panchenko, Alexander

    2016-01-01

    We present a novel formulation of the Pairwise Force Smoothed Particle Hydrodynamics Model (PF-SPH) and use it to simulate two- and three-phase flows in bounded domains. In the PF-SPH model, the Navier-Stokes equations are discretized with the Smoothed Particle Hydrodynamics (SPH) method and the Young-Laplace boundary condition at the fluid-fluid interface and the Young boundary condition at the fluid-fluid-solid interface are replaced with pairwise forces added into the Navier-Stokes equations. We derive a relationship between the parameters in the pairwise forces and the surface tension and static contact angle. Next, we demonstrate the accuracy of the model under static and dynamic conditions. Finally, to demonstrate the capabilities and robustness of the model we use it to simulate flow of three fluids in a porous material.

  10. Leveraging CyVerse Resources for De Novo Comparative Transcriptomics of Underserved (Non-model) Organisms

    PubMed Central

    Joyce, Blake L.; Haug-Baltzell, Asher K.; Hulvey, Jonathan P.; McCarthy, Fiona; Devisetty, Upendra Kumar; Lyons, Eric

    2017-01-01

    This workflow allows novice researchers to leverage advanced computational resources such as cloud computing to carry out pairwise comparative transcriptomics. It also serves as a primer for biologists to develop data scientist computational skills, e.g. executing bash commands, visualization and management of large data sets. All command line code and further explanations of each command or step can be found on the wiki (https://wiki.cyverse.org/wiki/x/dgGtAQ). The Discovery Environment and Atmosphere platforms are connected together through the CyVerse Data Store. As such, once the initial raw sequencing data have been uploaded there is no further need to transfer large data files over an Internet connection, minimizing the amount of time needed to conduct analyses. This protocol is designed to analyze only two experimental treatments or conditions. Differential gene expression analysis is conducted through pairwise comparisons, and will not be suitable to test multiple factors. This workflow is also designed to be manual rather than automated. Each step must be executed and investigated by the user, yielding a better understanding of data and analytical outputs, and therefore better results for the user. Once complete, this protocol will yield de novo assembled transcriptome(s) for underserved (non-model) organisms without the need to map to previously assembled reference genomes (which are usually not available for underserved organisms). These de novo transcriptomes are further used in pairwise differential gene expression analysis to investigate genes differing between two experimental conditions. Differentially expressed genes are then functionally annotated to understand the genetic response organisms have to experimental conditions. In total, the data derived from this protocol are used to test hypotheses about biological responses of underserved organisms. PMID:28518075

  11. A Three-way Decomposition of a Total Effect into Direct, Indirect, and Interactive Effects

    PubMed Central

    VanderWeele, Tyler J.

    2013-01-01

    Recent theory in causal inference has provided concepts for mediation analysis and effect decomposition that allow one to decompose a total effect into a direct and an indirect effect. Here, it is shown that what is often taken as an indirect effect can in fact be further decomposed into a “pure” indirect effect and a mediated interactive effect, thus yielding a three-way decomposition of a total effect (direct, indirect, and interactive). This three-way decomposition applies to difference scales and also to additive ratio scales and additive hazard scales. Assumptions needed for the identification of each of these three effects are discussed and simple formulae are given for each when regression models allowing for interaction are used. The three-way decomposition is illustrated by examples from genetic and perinatal epidemiology, and discussion is given to what is gained over the traditional two-way decomposition into simply a direct and an indirect effect. PMID:23354283
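
    As a point of reference, one standard piece of counterfactual bookkeeping consistent with the three-way decomposition described above can be written as follows (notation hedged: Y_{a,m} and M_a denote potential outcomes under exposure a and mediator value m, with exposure levels 1 vs. 0):

      E[Y_{1,M_1} - Y_{0,M_0}]                                        (total effect)
        = E[Y_{1,M_0} - Y_{0,M_0}]                                    (direct effect)
        + E[Y_{0,M_1} - Y_{0,M_0}]                                    (pure indirect effect)
        + E[(Y_{1,M_1} - Y_{1,M_0}) - (Y_{0,M_1} - Y_{0,M_0})]        (mediated interactive effect),

    where the three right-hand terms telescope exactly to the left-hand side.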

  12. Human versus animal: contrasting decomposition dynamics of mammalian analogues in experimental taphonomy.

    PubMed

    Stokes, Kathryn L; Forbes, Shari L; Tibbett, Mark

    2013-05-01

    Taphonomic studies regularly employ animal analogues for human decomposition due to ethical restrictions relating to the use of human tissue. However, the validity of using animal analogues in soil decomposition studies is still questioned. This study compared the decomposition of skeletal muscle tissues (SMTs) from human (Homo sapiens), pork (Sus scrofa), beef (Bos taurus), and lamb (Ovis aries) interred in soil microcosms. Fixed interval samples were collected from the SMT for microbial activity and mass tissue loss determination; samples were also taken from the underlying soil for pH, electrical conductivity, and nutrient (potassium, phosphate, ammonium, and nitrate) analysis. The overall patterns of nutrient fluxes and chemical changes in nonhuman SMT and the underlying soil followed that of human SMT. Ovine tissue was the most similar to human tissue in many of the measured parameters. Although no single analogue was a precise predictor of human decomposition in soil, all models offered close approximations in decomposition dynamics. © 2013 American Academy of Forensic Sciences.

  13. Data-driven process decomposition and robust online distributed modelling for large-scale processes

    NASA Astrophysics Data System (ADS)

    Shu, Zhang; Lijuan, Li; Lijuan, Yao; Shipin, Yang; Tao, Zou

    2018-02-01

    With the increasing attention paid to networked control, system decomposition and distributed models are of significant importance in the implementation of model-based control strategies. In this paper, a data-driven system decomposition and online distributed subsystem modelling algorithm is proposed for large-scale chemical processes. The key controlled variables are first partitioned into several clusters by the affinity propagation clustering algorithm. Each cluster can be regarded as a subsystem. Then the inputs of each subsystem are selected by offline canonical correlation analysis between all process variables and its controlled variables. Process decomposition is thus realised after the screening of input and output variables. Once the system decomposition is finished, online subsystem modelling can be carried out by recursively renewing the samples block-wise. The proposed algorithm was applied to the Tennessee Eastman process and its validity was verified.

  14. Vapor Pressure Data and Analysis for Selected HD Decomposition Products: 1,4-Thioxane, Divinyl Sulfoxide, Chloroethyl Acetylsulfide, and 1,4-Dithiane

    DTIC Science & Technology

    2018-06-01

    ...decomposition products from bis-(2-chloroethyl) sulfide (HD). These data were measured using an ASTM International method that is based on differential... The source and purity of the materials studied are listed in Table 1 (Sample Information for Title Compounds).

  15. From micro-correlations to macro-correlations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eliazar, Iddo, E-mail: iddo.eliazar@intel.com

    2016-11-15

    Random vectors with a symmetric correlation structure share a common value of pair-wise correlation between their different components. The symmetric correlation structure appears in a multitude of settings, e.g. mixture models. In a mixture model the components of the random vector are drawn independently from a general probability distribution that is determined by an underlying parameter, and the parameter itself is randomized. In this paper we study the overall correlation of high-dimensional random vectors with a symmetric correlation structure. Considering such a random vector, and terming its pair-wise correlation “micro-correlation”, we use an asymptotic analysis to derive the random vector’s “macro-correlation”: a score that takes values in the unit interval, and that quantifies the random vector’s overall correlation. The method of obtaining macro-correlations from micro-correlations is then applied to a diverse collection of frameworks that demonstrate the method’s wide applicability.

  16. Ethanol modifies the effect of handling stress on gene expression: problems in the analysis of two-way gene expression studies in mouse brain.

    PubMed

    Rulten, Stuart L; Ripley, Tamzin L; Manerakis, Ektor; Stephens, David N; Mayne, Lynne V

    2006-08-02

    Studies analysing the effects of acute treatments on animal behaviour and brain biochemistry frequently use pairwise comparisons between sham-treated and -untreated animals. In this study, we analyse expression of tPA, Grik2, Smarca2 and the transcription factor, Sp1, in mouse cerebellum following acute ethanol treatment. Expression is compared to saline-injected and -untreated control animals. We demonstrate that acute i.p. injection of saline may alter gene expression in a gene-specific manner and that ethanol may modify the effects of sham treatment on gene expression, as well as inducing specific effects independent of any handling related stress. In addition to demonstrating the complexity of gene expression in response to physical and environmental stress, this work raises questions on the interpretation and validity of studies relying on pairwise comparisons.

  17. Calibration of Smartphone-Based Weather Measurements Using Pairwise Gossip.

    PubMed

    Zamora, Jane Louie Fresco; Kashihara, Shigeru; Yamaguchi, Suguru

    2015-01-01

    Accurate and reliable daily global weather reports are necessary for weather forecasting and climate analysis. However, the availability of these reports continues to decline due to the lack of economic support and policies in maintaining ground weather measurement systems from where these reports are obtained. Thus, to mitigate data scarcity, it is required to utilize weather information from existing sensors and built-in smartphone sensors. However, as smartphone usage often varies according to human activity, it is difficult to obtain accurate measurement data. In this paper, we present a heuristic-based pairwise gossip algorithm that will calibrate smartphone-based pressure sensors with respect to fixed weather stations as our referential ground truth. Based on actual measurements, we have verified that smartphone-based readings are unstable when observed during movement. Using our calibration algorithm on actual smartphone-based pressure readings, the updated values were significantly closer to the ground truth values.
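
    A minimal sketch of a pairwise-gossip style calibration toward a fixed reference station is given below; the readings, mixing weight, and re-anchoring rule are hypothetical simplifications rather than the authors' exact heuristic.

      import random

      station_truth = 1013.2                       # fixed weather-station pressure (hPa)
      phone_readings = {"p1": 1009.8, "p2": 1016.4, "p3": 1011.0}

      def gossip_round(readings, alpha=0.5):
          """Pick a random pair of phones and move both values toward their average."""
          a, b = random.sample(list(readings), 2)
          avg = (readings[a] + readings[b]) / 2
          readings[a] += alpha * (avg - readings[a])
          readings[b] += alpha * (avg - readings[b])

      # The phone nearest the station is periodically re-anchored to the ground truth.
      for _ in range(50):
          phone_readings["p1"] = station_truth
          gossip_round(phone_readings)

      print(phone_readings)                        # the readings drift toward the reference pressure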

  18. Calibration of Smartphone-Based Weather Measurements Using Pairwise Gossip

    PubMed Central

    Yamaguchi, Suguru

    2015-01-01

    Accurate and reliable daily global weather reports are necessary for weather forecasting and climate analysis. However, the availability of these reports continues to decline due to the lack of economic support and policies in maintaining ground weather measurement systems from where these reports are obtained. Thus, to mitigate data scarcity, it is required to utilize weather information from existing sensors and built-in smartphone sensors. However, as smartphone usage often varies according to human activity, it is difficult to obtain accurate measurement data. In this paper, we present a heuristic-based pairwise gossip algorithm that will calibrate smartphone-based pressure sensors with respect to fixed weather stations as our referential ground truth. Based on actual measurements, we have verified that smartphone-based readings are unstable when observed during movement. Using our calibration algorithm on actual smartphone-based pressure readings, the updated values were significantly closer to the ground truth values. PMID:26421312

  19. Beyond pairwise strategy updating in the prisoner's dilemma game

    NASA Astrophysics Data System (ADS)

    Wang, Xiaofeng; Perc, Matjaž; Liu, Yongkui; Chen, Xiaojie; Wang, Long

    2012-10-01

    In spatial games players typically alter their strategy by imitating the most successful or one randomly selected neighbor. Since a single neighbor is taken as reference, the information stemming from other neighbors is neglected, which motivates the consideration of alternative, possibly more realistic approaches. Here we show that strategy changes inspired not only by the performance of individual neighbors but rather by entire neighborhoods introduce a qualitatively different evolutionary dynamics that is able to support the stable existence of very small cooperative clusters. This leads to phase diagrams that differ significantly from those obtained by means of pairwise strategy updating. In particular, the survivability of cooperators is possible even at high temptations to defect and over a much wider uncertainty range. We support the simulation results by means of pair approximations and analysis of spatial patterns, which jointly highlight the importance of local information for the resolution of social dilemmas.

  20. Remarkable sequence conservation of the last intron in the PKD1 gene.

    PubMed

    Rodova, Marianna; Islam, M Rafiq; Peterson, Kenneth R; Calvet, James P

    2003-10-01

    The last intron of the PKD1 gene (intron 45) was found to have exceptionally high sequence conservation across four mammalian species: human, mouse, rat, and dog. This conservation did not extend to the comparable intron in pufferfish. Pairwise comparisons for intron 45 showed 91% identity (human vs. dog) to 100% identity (mouse vs. rat), with an average across all four species of 94% identity. In contrast, introns 43 and 44 of the PKD1 gene had average pairwise identities of 57% and 54%, and exons 43, 44, and 45 and the coding region of exon 46 had average pairwise identities of 80%, 84%, 82%, and 80%. Intron 45 is 90 to 95 bp in length, with the major region of sequence divergence being a central 4-bp to 9-bp variable region. RNA secondary structure analysis of intron 45 predicts a branching stem-loop structure in which the central variable region lies in one loop and the putative branch point sequence lies in another loop, suggesting that the intron adopts a specific stem-loop structure that may be important for its removal. Although intron 45 appears to conform to the class of small, G-triplet-containing introns that are spliced by a mechanism utilizing intron definition, its high sequence conservation may be a reflection of constraints imposed by a unique mechanism that coordinates splicing of this last PKD1 intron with polyadenylation.

  1. Benchmarking Inverse Statistical Approaches for Protein Structure and Design with Exactly Solvable Models.

    PubMed

    Jacquin, Hugo; Gilson, Amy; Shakhnovich, Eugene; Cocco, Simona; Monasson, Rémi

    2016-05-01

    Inverse statistical approaches to determine protein structure and function from Multiple Sequence Alignments (MSA) are emerging as powerful tools in computational biology. However, the underlying assumptions of the relationship between the inferred effective Potts Hamiltonian and real protein structure and energetics remain untested so far. Here we use a lattice protein (LP) model to benchmark these inverse statistical approaches. We build MSA of highly stable sequences in target LP structures, and infer the effective pairwise Potts Hamiltonians from those MSA. We find that inferred Potts Hamiltonians reproduce many important aspects of 'true' LP structures and energetics. Careful analysis reveals that effective pairwise couplings in inferred Potts Hamiltonians depend not only on the energetics of the native structure but also on competing folds; in particular, the coupling values reflect both positive design (stabilization of the native conformation) and negative design (destabilization of competing folds). In addition to providing detailed structural information, the inferred Potts models used as protein Hamiltonians for the design of new sequences are able to generate, with high probability, completely new sequences with the desired folds, which is not possible using independent-site models. These are remarkable results, as the effective LP Hamiltonians used to generate the MSA are not simple pairwise models, owing to the competition between the folds. Our findings elucidate the reasons for the success of inverse approaches to the modelling of proteins from sequence data, and their limitations.
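
    As a toy illustration of the inverse-statistics step only (not the authors' lattice-protein pipeline), a naive mean-field estimate of pairwise couplings from a small two-state alignment can be sketched as follows; the alignment and the regularization constant are hypothetical.

      import numpy as np

      msa = np.array([            # rows = sequences, columns = sites (two-state alphabet)
          [0, 1, 1, 0],
          [0, 1, 0, 1],
          [1, 0, 1, 0],
          [1, 0, 0, 1],
          [0, 1, 1, 1],
      ])

      C = np.cov(msa, rowvar=False) + 0.01 * np.eye(msa.shape[1])  # regularized site covariance
      J = -np.linalg.inv(C)                                        # mean-field coupling estimate
      np.fill_diagonal(J, 0.0)
      print(np.round(J, 2))   # the perfectly anticorrelated first two sites get the largest-magnitude coupling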

  2. Profiling cellular protein complexes by proximity ligation with dual tag microarray readout.

    PubMed

    Hammond, Maria; Nong, Rachel Yuan; Ericsson, Olle; Pardali, Katerina; Landegren, Ulf

    2012-01-01

    Patterns of protein interactions provide important insights in basic biology, and their analysis plays an increasing role in drug development and diagnostics of disease. We have established a scalable technique to compare two biological samples for the levels of all pairwise interactions among a set of targeted protein molecules. The technique is a combination of the proximity ligation assay with readout via dual tag microarrays. In the proximity ligation assay protein identities are encoded as DNA sequences by attaching DNA oligonucleotides to antibodies directed against the proteins of interest. Upon binding by pairs of antibodies to proteins present in the same molecular complexes, ligation reactions give rise to reporter DNA molecules that contain the combined sequence information from the two DNA strands. The ligation reactions also serve to incorporate a sample barcode in the reporter molecules to allow for direct comparison between pairs of samples. The samples are evaluated using a dual tag microarray where information is decoded, revealing which pairs of tags that have become joined. As a proof-of-concept we demonstrate that this approach can be used to detect a set of five proteins and their pairwise interactions both in cellular lysates and in fixed tissue culture cells. This paper provides a general strategy to analyze the extent of any pairwise interactions in large sets of molecules by decoding reporter DNA strands that identify the interacting molecules.

  3. The Composition of Intermediate Products of the Thermal Decomposition of (NH4)2ZrF6 to ZrO2 from Vibrational-Spectroscopy Data

    NASA Astrophysics Data System (ADS)

    Voit, E. I.; Didenko, N. A.; Gaivoronskaya, K. A.

    2018-03-01

    Thermal decomposition of (NH4)2ZrF6 resulting in ZrO2 formation within the temperature range of 20°-750°C has been investigated by means of thermal and X-ray diffraction analysis and IR and Raman spectroscopy. It has been established that thermolysis proceeds in six stages. The vibrational-spectroscopy data for the intermediate products of thermal decomposition have been obtained, systematized, and summarized.

  4. RNA-TVcurve: a Web server for RNA secondary structure comparison based on a multi-scale similarity of its triple vector curve representation.

    PubMed

    Li, Ying; Shi, Xiaohu; Liang, Yanchun; Xie, Juan; Zhang, Yu; Ma, Qin

    2017-01-21

    RNAs have been found to carry diverse functionalities in nature. Inferring the similarity between two given RNAs is a fundamental step to understand and interpret their functional relationship. The majority of functional RNAs show conserved secondary structures, rather than sequence conservation, so algorithms relying on sequence-based features alone usually have limited prediction performance. Hence, integrating RNA structure features is critical for RNA analysis. Existing algorithms mainly fall into two categories: alignment-based and alignment-free. Alignment-free algorithms for RNA comparison usually have lower time complexity than alignment-based algorithms. An alignment-free RNA comparison algorithm is proposed, based on a novel numerical representation, RNA-TVcurve (triple vector curve representation), of the RNA sequence and its corresponding secondary structure features. A multi-scale similarity score of two given RNAs is then designed based on wavelet decomposition of their numerical representations. In support of RNA mutation and phylogenetic analysis, a web server (RNA-TVcurve) was designed based on this alignment-free RNA comparison algorithm. It provides three functional modules: 1) visualization of the numerical representation of RNA secondary structure; 2) detection of single-point mutations based on secondary structure; and 3) comparison of pairwise and multiple RNA secondary structures. The inputs of the web server require RNA primary sequences, while corresponding secondary structures are optional. For primary sequences alone, the web server can compute the secondary structures using a free energy minimization algorithm via the RNAfold tool from the Vienna RNA package. RNA-TVcurve is the first integrated web server, based on an alignment-free method, to deliver a suite of RNA analysis functions, including visualization, mutation analysis and multiple RNA structure comparison. Comparison results with two popular RNA comparison tools, RNApdist and RNAdistance, showed that RNA-TVcurve can efficiently capture subtle relationships among RNAs for mutation detection and non-coding RNA classification. All the relevant results are shown in an intuitive graphical manner, and can be freely downloaded from this server. RNA-TVcurve, along with test examples and detailed documents, is available at: http://ml.jlu.edu.cn/tvcurve/ .

  5. A decomposition theory for phylogenetic networks and incompatible characters.

    PubMed

    Gusfield, Dan; Bansal, Vikas; Bafna, Vineet; Song, Yun S

    2007-12-01

    Phylogenetic networks are models of evolution that go beyond trees, incorporating non-tree-like biological events such as recombination (or more generally reticulation), which occur either in a single species (meiotic recombination) or between species (reticulation due to lateral gene transfer and hybrid speciation). The central algorithmic problems are to reconstruct a plausible history of mutations and non-tree-like events, or to determine the minimum number of such events needed to derive a given set of binary sequences, allowing one mutation per site. Meiotic recombination, reticulation and recurrent mutation can cause conflict or incompatibility between pairs of sites (or characters) of the input. Previously, we used "conflict graphs" and "incompatibility graphs" to compute lower bounds on the minimum number of recombination nodes needed, and to efficiently solve constrained cases of the minimization problem. Those results exposed the structural and algorithmic importance of the non-trivial connected components of those two graphs. In this paper, we more fully develop the structural importance of non-trivial connected components of the incompatibility and conflict graphs, proving a general decomposition theorem (Gusfield and Bansal, 2005) for phylogenetic networks. The decomposition theorem depends only on the incompatibilities in the input sequences, and hence applies to many types of phylogenetic networks, and to any biological phenomenon that causes pairwise incompatibilities. More generally, the proof of the decomposition theorem exposes a maximal embedded tree structure that exists in the network when the sequences cannot be derived on a perfect phylogenetic tree. This extends the theory of perfect phylogeny in a natural and important way. The proof is constructive and leads to a polynomial-time algorithm to find the unique underlying maximal tree structure. We next examine and fully solve the major open question from Gusfield and Bansal (2005): Is it true that for every input there must be a fully decomposed phylogenetic network that minimizes the number of recombination nodes used, over all phylogenetic networks for the input? We previously conjectured that the answer is yes. In this paper, we show that the answer is no, both for the case that only single-crossover recombination is allowed, and also for the case that unbounded multiple-crossover recombination is allowed. The latter case also resolves a conjecture recently stated in (Huson and Klopper, 2007) in the context of reticulation networks. Although the conjecture from Gusfield and Bansal (2005) is disproved in general, we show that the answer to the conjecture is yes in several natural special cases, and establish necessary combinatorial structure that counterexamples to the conjecture must possess. We also show that counterexamples to the conjecture are rare (for the case of single-crossover recombination) in simulated data.
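
    As a small illustration of the incompatibility structure that the decomposition theorem operates on, the sketch below builds the incompatibility graph of a set of binary sequences via the four-gamete test (two sites conflict when all of 00, 01, 10, 11 occur) and reports its non-trivial connected components; the toy sequences are hypothetical.

      from itertools import combinations
      import networkx as nx

      seqs = ["0011", "0101", "1100", "1110"]      # rows = binary sequences, columns = sites

      def incompatible(i, j) -> bool:
          """Four-gamete test between site columns i and j."""
          return len({(s[i], s[j]) for s in seqs}) == 4

      n_sites = len(seqs[0])
      G = nx.Graph()
      G.add_nodes_from(range(n_sites))
      G.add_edges_from((i, j) for i, j in combinations(range(n_sites), 2) if incompatible(i, j))

      print([sorted(c) for c in nx.connected_components(G) if len(c) > 1])  # [[0, 2, 3]]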

  6. Wavelet decomposition based principal component analysis for face recognition using MATLAB

    NASA Astrophysics Data System (ADS)

    Sharma, Mahesh Kumar; Sharma, Shashikant; Leeprechanon, Nopbhorn; Ranjan, Aashish

    2016-03-01

    For the realization of face recognition systems in the static as well as the real-time frame, algorithms such as principal component analysis, independent component analysis, linear discriminant analysis, neural networks and genetic algorithms have been used for decades. This paper discusses a wavelet-decomposition-based principal component analysis approach for face recognition. Principal component analysis is chosen over other algorithms due to its relative simplicity, efficiency, and robustness. Face recognition refers to identifying a person from facial features and resembles factor analysis in some sense, i.e. extraction of the principal components of an image. Principal component analysis is subject to some drawbacks, mainly poor discriminatory power and the large computational load involved in finding eigenvectors. These drawbacks can be greatly reduced by combining wavelet transform decomposition for feature extraction with principal component analysis for pattern representation and classification, analyzing the facial images in both the space and frequency domains. The experimental results indicate that this face recognition method achieves a significant improvement in recognition rate as well as better computational efficiency.
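
    A minimal sketch of the combination described above, assuming the PyWavelets (pywt) and scikit-learn packages; the random "face" images, the choice of wavelet, and the number of components are placeholders.

      import numpy as np
      import pywt
      from sklearn.decomposition import PCA

      rng = np.random.default_rng(0)
      faces = rng.random((20, 64, 64))                 # 20 placeholder 64x64 grayscale images

      def wavelet_features(img):
          approx, _details = pywt.dwt2(img, "haar")    # keep the low-frequency approximation sub-band
          return approx.ravel()                        # 32*32 = 1024 coefficients per image

      X = np.stack([wavelet_features(f) for f in faces])
      pca = PCA(n_components=10).fit(X)                # PCA on the wavelet coefficients
      descriptors = pca.transform(X)                   # compact descriptors for matching/classification
      print(descriptors.shape)                         # (20, 10)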

  7. Decomposition of diverse solid inorganic matrices with molten ammonium bifluoride salt for constituent elemental analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Hara, Matthew J.; Kellogg, Cyndi M.; Parker, Cyrena M.

    Ammonium bifluoride (ABF, NH4F·HF) is a well-known reagent for converting metal oxides to fluorides and for its applications in breaking down minerals and ores in order to extract useful components. It has been more recently applied to the decomposition of inorganic matrices prior to elemental analysis. Herein, a sample decomposition method that employs molten ABF sample treatment in the initial step is systematically evaluated across a range of inorganic sample types: glass, quartz, zircon, soil, and pitchblende ore. Method performance is evaluated across two variables: the duration of molten ABF treatment and the ratio of ABF reagent mass to sample mass. The degree of solubilization of these sample classes is compared to the fluoride stoichiometry that is theoretically necessary to achieve complete fluorination of the sample types. Finally, the sample decomposition method is performed on several soil and pitchblende ore standard reference materials, after which elemental constituent analysis is performed by ICP-OES and ICP-MS. Elemental recoveries are compared to the certified values; results indicate good to excellent recoveries across a range of alkaline earth, rare earth, transition metal, and actinide elements.

  8. Improving multi-objective reservoir operation optimization with sensitivity-informed problem decomposition

    NASA Astrophysics Data System (ADS)

    Chu, J. G.; Zhang, C.; Fu, G. T.; Li, Y.; Zhou, H. C.

    2015-04-01

    This study investigates the effectiveness of a sensitivity-informed method for multi-objective operation of reservoir systems, which uses global sensitivity analysis as a screening tool to reduce the computational demands. Sobol's method is used to screen insensitive decision variables and guide the formulation of the optimization problems with a significantly reduced number of decision variables. This sensitivity-informed problem decomposition dramatically reduces the computational demands required for attaining high quality approximations of optimal tradeoff relationships between conflicting design objectives. The search results obtained from the reduced complexity multi-objective reservoir operation problems are then used to pre-condition the full search of the original optimization problem. In two case studies, the Dahuofang reservoir and the inter-basin multi-reservoir system in Liaoning province, China, sensitivity analysis results show that reservoir performance is strongly controlled by a small proportion of decision variables. Sensitivity-informed problem decomposition and pre-conditioning are evaluated in their ability to improve the efficiency and effectiveness of multi-objective evolutionary optimization. Overall, this study illustrates the efficiency and effectiveness of the sensitivity-informed method and the use of global sensitivity analysis to inform problem decomposition when solving the complex multi-objective reservoir operation problems.
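
    As a rough illustration of the screening step, the sketch below uses the SALib package to compute Sobol total-order indices for a toy objective and retain only the sensitive variables; the toy function, bounds, sample size and 0.05 threshold are assumptions, not values from the reservoir case studies.

    ```python
    # Sobol sensitivity screening with SALib: rank decision variables by
    # total-order indices and keep only the sensitive ones for the full
    # optimization. The objective below is a placeholder.
    import numpy as np
    from SALib.sample import saltelli
    from SALib.analyze import sobol

    problem = {
        "num_vars": 4,
        "names": ["x1", "x2", "x3", "x4"],
        "bounds": [[0.0, 1.0]] * 4,
    }

    def toy_objective(x):
        # x1 and x2 dominate; x3 and x4 contribute little.
        return 5.0 * x[0] + 3.0 * x[1] ** 2 + 0.1 * x[2] + 0.01 * x[3]

    param_values = saltelli.sample(problem, 1024)
    Y = np.array([toy_objective(row) for row in param_values])
    Si = sobol.analyze(problem, Y)

    # Screen out variables whose total-order index falls below a threshold;
    # the reduced problem is optimized first and used to pre-condition the
    # full search, as described above.
    sensitive = [n for n, st in zip(problem["names"], Si["ST"]) if st > 0.05]
    print(sensitive)
    ```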

  9. First-principles and thermodynamic analysis of trimethylgallium (TMG) decomposition during MOVPE growth of GaN

    NASA Astrophysics Data System (ADS)

    Sekiguchi, K.; Shirakawa, H.; Yamamoto, Y.; Araidai, M.; Kangawa, Y.; Kakimoto, K.; Shiraishi, K.

    2017-06-01

    We analyzed the decomposition mechanisms of trimethylgallium (TMG), used as the gallium source for GaN fabrication, based on first-principles calculations and thermodynamic analysis. We considered two conditions: one under a total pressure of 1 atm and the other under metal organic vapor phase epitaxy (MOVPE) growth of GaN. Our calculated results show that H2 is indispensable for TMG decomposition under both conditions. In GaN MOVPE, TMG with H2 spontaneously decomposes into Ga(CH3), and Ga(CH3) decomposes into Ga atom gas when the temperature is higher than 440 K. From these calculations, we confirmed that TMG is converted to Ga atom gas near the GaN substrate surface.

  10. How long the singular value decomposed entropy predicts the stock market? - Evidence from the Dow Jones Industrial Average Index

    NASA Astrophysics Data System (ADS)

    Gu, Rongbao; Shao, Yanmin

    2016-07-01

    In this paper, a new concept of multi-scale singular value decomposition entropy based on DCCA cross-correlation analysis is proposed and its predictive power for the Dow Jones Industrial Average Index is studied. Using Granger causality analysis with different time scales, it is found that the singular value decomposition entropy has predictive power for the Dow Jones Industrial Average Index for periods of less than one month, but not for longer horizons. This characterizes how long the singular value decomposition entropy predicts the stock market and extends Caraiani's result obtained in Caraiani (2014). On the other hand, the result also reveals an essential characteristic of the stock market as a chaotic dynamical system.
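
    For reference, the basic singular value decomposition entropy that the multi-scale, DCCA-based variant above builds on can be sketched as follows; the delay-embedding parameters and the simulated return series are placeholders, not the Dow Jones data.

    ```python
    # Singular value decomposition entropy of a time series: delay-embed
    # the series, take the singular values of the embedding matrix, and
    # compute the Shannon entropy of the normalized spectrum.
    import numpy as np

    def svd_entropy(series, embed_dim=10, delay=1):
        n = len(series) - (embed_dim - 1) * delay
        # Delay-embedding matrix: each row is a lagged window of the series.
        emb = np.array([series[i : i + embed_dim * delay : delay] for i in range(n)])
        s = np.linalg.svd(emb, compute_uv=False)
        p = s / s.sum()
        return -np.sum(p * np.log(p))

    rng = np.random.default_rng(1)
    returns = rng.normal(0, 0.01, 500)   # placeholder daily returns
    print(svd_entropy(returns))
    ```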

  11. Online Pairwise Learning Algorithms.

    PubMed

    Ying, Yiming; Zhou, Ding-Xuan

    2016-04-01

    Pairwise learning usually refers to a learning task that involves a loss function depending on pairs of examples, among which the most notable ones are bipartite ranking, metric learning, and AUC maximization. In this letter we study an online algorithm for pairwise learning with a least-square loss function in an unconstrained setting of a reproducing kernel Hilbert space (RKHS) that we refer to as the Online Pairwise lEaRning Algorithm (OPERA). In contrast to existing works (Kar, Sriperumbudur, Jain, & Karnick, 2013; Wang, Khardon, Pechyony, & Jones, 2012), which require that the iterates be restricted to a bounded domain or the loss function be strongly convex, OPERA is associated with a non-strongly convex objective function and learns the target function in an unconstrained RKHS. Specifically, we establish a general theorem that guarantees the almost sure convergence of the last iterate of OPERA without any assumptions on the underlying distribution. Explicit convergence rates are derived under the condition of polynomially decaying step sizes. We also establish an interesting property for a family of widely used kernels in the setting of pairwise learning and illustrate the convergence results using such kernels. Our methodology mainly depends on the characterization of RKHSs using their associated integral operators and on probability inequalities for random variables with values in a Hilbert space.

  12. Cosmology with the pairwise kinematic SZ effect: Calibration and validation using hydrodynamical simulations

    NASA Astrophysics Data System (ADS)

    Soergel, Bjoern; Saro, Alexandro; Giannantonio, Tommaso; Efstathiou, George; Dolag, Klaus

    2018-05-01

    We study the potential of the kinematic SZ effect as a probe for cosmology, focusing on the pairwise method. The main challenge is disentangling the cosmologically interesting mean pairwise velocity from the cluster optical depth and the associated uncertainties on the baryonic physics in clusters. Furthermore, the pairwise kSZ signal might be affected by internal cluster motions or correlations between velocity and optical depth. We investigate these effects using the Magneticum cosmological hydrodynamical simulations, one of the largest simulations of this kind performed to date. We produce tSZ and kSZ maps with an area of ≃ 1600 deg2, and the corresponding cluster catalogues with M500c ≳ 3 × 1013 h-1M⊙ and z ≲ 2. From these data sets we calibrate a scaling relation between the average Compton-y parameter and optical depth. We show that this relation can be used to recover an accurate estimate of the mean pairwise velocity from the kSZ effect, and that this effect can be used as an important probe of cosmology. We discuss the impact of theoretical and observational systematic effects, and find that further work on feedback models is required to interpret future high-precision measurements of the kSZ effect.

  13. The Use of Decompositions in International Trade Textbooks.

    ERIC Educational Resources Information Center

    Highfill, Jannett K.; Weber, William V.

    1994-01-01

    Asserts that international trade, as compared with international finance or even international economics, is primarily an applied microeconomics field. Discusses decomposition analysis in relation to international trade and tariffs. Reports on an evaluation of the treatment of this topic in eight college-level economics textbooks. (CFR)

  14. Effect of pre-heating on the thermal decomposition kinetics of cotton

    USDA-ARS?s Scientific Manuscript database

    The effect of pre-heating at low temperatures (160-280°C) on the thermal decomposition kinetics of scoured cotton fabrics was investigated by thermogravimetric analysis under nonisothermal conditions. Isoconversional methods were used to calculate the activation energies for the pyrolysis after one-...

  15. Early diagenesis of mangrove leaves in a tropical estuary: Bulk chemical characterization using solid-state 13C NMR and elemental analyses

    NASA Astrophysics Data System (ADS)

    Benner, Ronald; Hatcher, Patrick G.; Hedges, John I.

    1990-07-01

    Changes in the chemical composition of mangrove ( Rhizophora mangle) leaves during decomposition in tropical estuarine waters were characterized using solid-state 13C nuclear magnetic resonance (NMR) and elemental (CHNO) analysis. Carbohydrates were the most abundant components of the leaves accounting for about 50 wt% of senescent tissues. Tannins were estimated to account for about 20 wt% of leaf tissues, and lipid components, cutin, and possibly other aliphatic biopolymers in leaf cuticles accounted for about 15 wt%. Carbohydrates were generally less resistant to decomposition than the other constituents and decreased in relative concentration during decomposition. Tannins were of intermediate resistance to decomposition and remained in fairly constant proportion during decomposition. Paraffinic components were very resistant to decomposition and increased in relative concentration as decomposition progressed. Lignin was a minor component of all leaf tissues. Standard methods for the colorimetric determination of tannins (Folin-Dennis reagent) and the gravimetric determination of lignin (Klason lignin) were highly inaccurate when applied to mangrove leaves. The N content of the leaves was particularly dynamic with values ranging from 1.27 wt% in green leaves to 0.65 wt% in senescent yellow leaves attached to trees. During decomposition in the water the N content initially decreased to 0.51 wt% due to leaching, but values steadily increased thereafter to 1.07 wt% in the most degraded leaf samples. The absolute mass of N in the leaves increased during decomposition indicating that N immobilization was occurring as decomposition progressed.

  16. Early diagenesis of mangrove leaves in a tropical estuary: Bulk chemical characterization using solid-state 13C NMR and elemental analyses

    USGS Publications Warehouse

    Benner, R.; Hatcher, P.G.; Hedges, J.I.

    1990-01-01

    Changes in the chemical composition of mangrove (Rhizophora mangle) leaves during decomposition in tropical estuarine waters were characterized using solid-state 13C nuclear magnetic resonance (NMR) and elemental (CHNO) analysis. Carbohydrates were the most abundant components of the leaves accounting for about 50 wt% of senescent tissues. Tannins were estimated to account for about 20 wt% of leaf tissues, and lipid components, cutin, and possibly other aliphatic biopolymers in leaf cuticles accounted for about 15 wt%. Carbohydrates were generally less resistant to decomposition than the other constituents and decreased in relative concentration during decomposition. Tannins were of intermediate resistance to decomposition and remained in fairly constant proportion during decomposition. Paraffinic components were very resistant to decomposition and increased in relative concentration as decomposition progressed. Lignin was a minor component of all leaf tissues. Standard methods for the colorimetric determination of tannins (Folin-Dennis reagent) and the gravimetric determination of lignin (Klason lignin) were highly inaccurate when applied to mangrove leaves. The N content of the leaves was particularly dynamic with values ranging from 1.27 wt% in green leaves to 0.65 wt% in senescent yellow leaves attached to trees. During decomposition in the water the N content initially decreased to 0.51 wt% due to leaching, but values steadily increased thereafter to 1.07 wt% in the most degraded leaf samples. The absolute mass of N in the leaves increased during decomposition indicating that N immobilization was occurring as decomposition progressed. ?? 1990.

  17. Three-dimensional analysis of the uniqueness of the anterior dentition in orthodontically treated patients and twins.

    PubMed

    Franco, A; Willems, G; Souza, P H C; Tanaka, O M; Coucke, W; Thevissen, P

    2017-04-01

    Dental uniqueness can be proven if no perfect match is detected in pair-wise morphological comparisons of human dentitions. Establishing these comparisons in a worldwide random population is practically unfeasible due to the need for a large and representative sample size. Sample stratification is an option to reduce sample size. The present study investigated the uniqueness of the human dentition in randomly selected subjects (Group 1), orthodontically treated patients (Group 2), twins (Group 3), and orthodontically treated twins (Group 4) in comparison with a threshold control sample of identical dentitions (Group 5). The samples consisted of digital cast files (DCF) obtained through extraoral 3D scanning. A total of 2,013 pair-wise morphological comparisons were performed (Group 1 n=110, Group 2 n=1,711, Group 3 n=172, Group 4 n=10, Group 5 n=10) with the Geomagic Studio® (3D Systems®, Rock Hill, SC, USA) software package. Comparisons within groups were performed by quantifying the morphological differences between DCF as Euclidean distances. Comparisons between groups were established applying one-way ANOVA. To ensure fair comparisons, a post-hoc power analysis was performed. ROC analysis was applied to distinguish unique from non-unique dentitions. Identical DCF were not detected within the experimental groups (Groups 1 to 4). The most similar DCF had a Euclidean distance of 5.19mm in Group 1, 2.06mm in Group 2, 2.03mm in Group 3, and 1.88mm in Group 4. Groups 2 and 3 were statistically different from Group 5 (p<0.05). A statistically significant difference between Groups 4 and 5 could potentially be demonstrated by including more pair-wise comparisons in both groups. The ROC analysis revealed a sensitivity of 80% and a specificity between 66.7% and 81.6%. Evidence to sustain the uniqueness of the human dentition in random and stratified populations was observed in the present study. Further studies testing the influence of the quantity of tooth material on the morphological difference between dentitions and its impact on uniqueness remain necessary. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. A conditional Granger causality model approach for group analysis in functional MRI

    PubMed Central

    Zhou, Zhenyu; Wang, Xunheng; Klahr, Nelson J.; Liu, Wei; Arias, Diana; Liu, Hongzhi; von Deneen, Karen M.; Wen, Ying; Lu, Zuhong; Xu, Dongrong; Liu, Yijun

    2011-01-01

    The Granger causality model (GCM), derived from multivariate vector autoregressive models of data, has been employed to identify effective connectivity in the human brain with functional MR imaging (fMRI) and to reveal complex temporal and spatial dynamics underlying a variety of cognitive processes. In most recent fMRI effective connectivity measures, pairwise GCM has commonly been applied based on single voxel values or average values from specific brain areas at the group level. Although a few novel conditional GCM methods have been proposed to quantify the connections between brain areas, our study is the first to propose a viable standardized approach for group analysis of fMRI data with GCM. To compare the effectiveness of our approach with traditional pairwise GCM models, we applied a well-established conditional GCM to pre-selected time series of brain regions resulting from a general linear model (GLM) and group spatial kernel independent component analysis (ICA) of an fMRI dataset in the temporal domain. Datasets consisting of one task-related and one resting-state fMRI experiment were used to investigate connections among brain areas with the conditional GCM method. Using the brain activation regions detected by the GLM in the emotion-related cortex during the block-design paradigm, the conditional GCM method was applied to study the causality of habituation between the left amygdala and pregenual cingulate cortex during emotion processing. For the resting-state dataset, it is possible to calculate not only the effective connectivity between networks but also the heterogeneity within a single network. Our results further show a particular interacting pattern of the default mode network (DMN) that can be characterized as both afferent and efferent influences on the medial prefrontal cortex (mPFC) and posterior cingulate cortex (PCC). These results suggest that the conditional GCM approach based on a linear multivariate vector autoregressive (MVAR) model can achieve greater accuracy in detecting network connectivity than the widely used pairwise GCM, and this group analysis methodology can be quite useful in extending the information obtainable from fMRI. PMID:21232892
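
    As a point of comparison, the widely used pairwise Granger test that the conditional GCM is contrasted with can be run with statsmodels as sketched below; the simulated two-region time series and the lag setting are placeholders, and the conditional (multivariate) model of the paper would additionally control for the remaining regions.

    ```python
    # Pairwise Granger causality on two simulated "ROI" time series.
    import numpy as np
    from statsmodels.tsa.stattools import grangercausalitytests

    rng = np.random.default_rng(0)
    n = 300
    x = np.zeros(n)
    y = np.zeros(n)
    for t in range(1, n):
        x[t] = 0.5 * x[t - 1] + rng.normal(scale=0.5)
        y[t] = 0.6 * y[t - 1] + 0.4 * x[t - 1] + rng.normal(scale=0.5)  # x drives y

    # Test whether the second column (x) Granger-causes the first column (y).
    results = grangercausalitytests(np.column_stack([y, x]), maxlag=2)
    p_value = results[1][0]["ssr_ftest"][1]   # p-value of the F-test at lag 1
    print(f"lag-1 p-value for x -> y: {p_value:.4f}")
    ```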

  19. Pursuing reliable thermal analysis techniques for energetic materials: decomposition kinetics and thermal stability of dihydroxylammonium 5,5'-bistetrazole-1,1'-diolate (TKX-50).

    PubMed

    Muravyev, Nikita V; Monogarov, Konstantin A; Asachenko, Andrey F; Nechaev, Mikhail S; Ananyev, Ivan V; Fomenkov, Igor V; Kiselev, Vitaly G; Pivkina, Alla N

    2016-12-21

    Thermal decomposition of a novel promising high-performance explosive dihydroxylammonium 5,5'-bistetrazole-1,1'-diolate (TKX-50) was studied using a number of thermal analysis techniques (thermogravimetry, differential scanning calorimetry, and accelerating rate calorimetry, ARC). To obtain more comprehensive insight into the kinetics and mechanism of TKX-50 decomposition, a variety of complementary thermoanalytical experiments were performed under various conditions. Non-isothermal and isothermal kinetics were obtained at both atmospheric and low (up to 0.3 Torr) pressures. The gas products of thermolysis were detected in situ using IR spectroscopy, and the structure of solid-state decomposition products was determined by X-ray diffraction and scanning electron microscopy. Diammonium 5,5'-bistetrazole-1,1'-diolate (ABTOX) was directly identified to be the most important intermediate of the decomposition process. The important role of bistetrazole diol (BTO) in the mechanism of TKX-50 decomposition was also rationalized by thermolysis experiments with mixtures of TKX-50 and BTO. Several widely used thermoanalytical data processing techniques (Kissinger, isoconversional, formal kinetic approaches, etc.) were independently benchmarked against the ARC data, which are more germane to the real storage and application conditions of energetic materials. Our study revealed that none of the Arrhenius parameters reported before can properly describe the complex two-stage decomposition process of TKX-50. In contrast, we showed the superior performance of the isoconversional methods combined with isothermal measurements, which yielded the most reliable kinetic parameters of TKX-50 thermolysis. In contrast with the existing reports, the thermal stability of TKX-50 was determined in the ARC experiments to be lower than that of hexogen, but close to that of hexanitrohexaazaisowurtzitane (CL-20).
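
    The Kissinger analysis benchmarked above amounts to a simple linear fit, sketched here with placeholder heating rates and peak temperatures (not the TKX-50 measurements): the activation energy follows from the slope of ln(beta/Tp^2) versus 1/Tp.

    ```python
    # Kissinger method: ln(beta/Tp^2) = ln(A*R/Ea) - Ea/(R*Tp).
    import numpy as np

    R = 8.314                                        # J/(mol K)
    beta = np.array([2.0, 5.0, 10.0, 20.0])          # heating rates, K/min (placeholder)
    Tp = np.array([503.0, 512.0, 520.0, 529.0])      # DSC peak temperatures, K (placeholder)

    slope, intercept = np.polyfit(1.0 / Tp, np.log(beta / Tp**2), 1)
    Ea = -slope * R                                   # activation energy, J/mol
    A = np.exp(intercept) * Ea / R                    # pre-exponential factor, 1/min
    print(f"Ea = {Ea / 1000:.1f} kJ/mol, A = {A:.3e} 1/min")
    ```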

  20. The implications of microbial and substrate limitation for the fates of carbon in different organic soil horizon types of boreal forest ecosystems: a mechanistically based model analysis

    USGS Publications Warehouse

    He, Y.; Zhuang, Q.; Harden, Jennifer W.; McGuire, A. David; Fan, Z.; Liu, Y.; Wickland, Kimberly P.

    2014-01-01

    The large amount of soil carbon in boreal forest ecosystems has the potential to influence the climate system if released in large quantities in response to warming. Thus, there is a need to better understand and represent the environmental sensitivity of soil carbon decomposition. Most soil carbon decomposition models rely on empirical relationships omitting key biogeochemical mechanisms and their response to climate change is highly uncertain. In this study, we developed a multi-layer microbial explicit soil decomposition model framework for boreal forest ecosystems. A thorough sensitivity analysis was conducted to identify dominating biogeochemical processes and to highlight structural limitations. Our results indicate that substrate availability (limited by soil water diffusion and substrate quality) is likely to be a major constraint on soil decomposition in the fibrous horizon (40–60% of soil organic carbon (SOC) pool size variation), while energy limited microbial activity in the amorphous horizon exerts a predominant control on soil decomposition (>70% of SOC pool size variation). Elevated temperature alleviated the energy constraint of microbial activity most notably in amorphous soils, whereas moisture only exhibited a marginal effect on dissolved substrate supply and microbial activity. Our study highlights the different decomposition properties and underlying mechanisms of soil dynamics between fibrous and amorphous soil horizons. Soil decomposition models should consider explicitly representing different boreal soil horizons and soil–microbial interactions to better characterize biogeochemical processes in boreal forest ecosystems. A more comprehensive representation of critical biogeochemical mechanisms of soil moisture effects may be required to improve the performance of the soil model we analyzed in this study.

  1. Pairwise comparisons of ten porcine tissues identify differential transcriptional regulation at the gene, isoform, promoter and transcription start site level

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Farajzadeh, Leila; Hornshøj, Henrik; Momeni, Jamal

    Highlights: •Transcriptome sequencing yielded 223 million porcine RNA-seq reads and 59,000 transcribed locations. •Establishment of unique transcription profiles for ten porcine tissues including four brain tissues. •Comparison of transcription profiles at gene, isoform, promoter and transcription start site level. •Highlights a high level of regulation of neuro-related genes at the gene, isoform, and TSS level. •Our results emphasize the pig as a valuable animal model with respect to human biological issues. -- Abstract: The transcriptome is the absolute set of transcripts in a tissue or cell at the time of sampling. In this study RNA-Seq is employed to enable the differential analysis of the transcriptome profile for ten porcine tissues in order to evaluate differences between the tissues at the gene and isoform expression level, together with an analysis of variation in transcription start sites, promoter usage, and splicing. In total, 223 million RNA fragments were sequenced, leading to the identification of 59,930 transcribed gene locations and 290,936 transcript variants using Cufflinks, with similarity to approximately 13,899 annotated human genes. Pairwise analysis of tissues for differential expression at the gene level showed that the smallest differences were between tissues originating from the porcine brain. Interestingly, the relative level of differential expression at the isoform level generally did not vary between tissue contrasts. Furthermore, analysis of differential promoter usage between tissues revealed a proportionally higher variation between cerebellum (CBE) versus frontal cortex and cerebellum versus hypothalamus (HYP) than in the remaining comparisons. In addition, the comparison of differential transcription start sites showed that the number of these sites is generally increased in comparisons including hypothalamus in contrast to other pairwise assessments. A comprehensive analysis of one of the tissue contrasts, i.e. cerebellum versus heart, for differential variation at the gene, isoform, transcription start site (TSS), and promoter level showed that several of the genes differed at all four levels. Interestingly, these genes were mainly annotated to the "electron transport chain" and neuronal differentiation, emphasizing that "tissue important" genes are regulated at several levels. Furthermore, our analysis shows that the "across tissue approach" has a promising potential when screening for possible explanations for variations, such as those observed at the gene expression levels.

  2. Lossless and Sufficient - Invariant Decomposition of Deterministic Target

    NASA Astrophysics Data System (ADS)

    Paladini, Riccardo; Ferro Famil, Laurent; Pottier, Eric; Martorella, Marco; Berizzi, Fabrizio

    2011-03-01

    The symmetric radar scattering matrix of a reciprocal target is projected on the circular polarization basis and is decomposed into four orientation-invariant parameters, a relative phase, and a relative orientation. The physical interpretation of these results is found in the wave-particle nature of radar scattering due to the circular polarization nature of elemental packets of energy. The proposed decomposition is based on a left orthogonal to left Special Unitary basis, providing the target description in terms of a unitary vector. A comparison between the proposed CTD and the Cameron, Kennaugh and Krogager decompositions is also presented. A validation using both anechoic chamber data and airborne EMISAR data of DTU is used to show the effectiveness of this decomposition for the analysis of coherent targets. In the second paper we will show the application of the rotation group U(3) to the decomposition of distributed targets into nine meaningful parameters.

  3. Nitrated graphene oxide and its catalytic activity in thermal decomposition of ammonium perchlorate

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Wenwen; Luo, Qingping; Duan, Xiaohui

    2014-02-01

    Highlights: • The NGO was synthesized by nitrifying homemade GO. • The N content of the resulting NGO is up to 1.45 wt.%. • The NGO can facilitate the decomposition of AP and release much heat. - Abstract: Nitrated graphene oxide (NGO) was synthesized by nitrifying homemade GO with nitro-sulfuric acid. Fourier transform infrared spectroscopy (FTIR), laser Raman spectroscopy, CP/MAS 13C NMR spectra and X-ray photoelectron spectroscopy (XPS) were used to characterize the structure of NGO. The thickness and the compositions of GO and NGO were analyzed by atomic force microscopy (AFM) and elemental analysis (EA), respectively. The catalytic effect of the NGO on the thermal decomposition of ammonium perchlorate (AP) was investigated by differential scanning calorimetry (DSC). Adding 10% of NGO to AP decreases the decomposition temperature by 106 °C and increases the apparent decomposition heat from 875 to 3236 J/g.

  4. Gas evolution from cathode materials: A pathway to solvent decomposition concomitant to SEI formation.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Browning, Katie L; Baggetto, Loic; Unocic, Raymond R

    This work reports a method to explore the catalytic reactivity of electrode surfaces towards the decomposition of carbonate solvents [ethylene carbonate (EC), dimethyl carbonate (DMC), and EC/DMC]. We show that the decomposition of a 1:1 wt% EC/DMC mixture is accelerated over certain commercially available LiCoO2 materials, resulting in the formation of CO2, while over pure EC or DMC the reaction is much slower or negligible. The solubility of the produced CO2 in carbonate solvents is high (0.025 grams/mL), which masks the effect of electrolyte decomposition during storage or use. The origin of this decomposition is not clear, but it is expected to be present on other cathode materials and may affect the analysis of SEI products as well as the safety of Li-ion batteries.

  5. Optimal cost design of water distribution networks using a decomposition approach

    NASA Astrophysics Data System (ADS)

    Lee, Ho Min; Yoo, Do Guen; Sadollah, Ali; Kim, Joong Hoon

    2016-12-01

    Water distribution network decomposition, which is an engineering approach, is adopted to increase the efficiency of obtaining the optimal cost design of a water distribution network using an optimization algorithm. This study applied the source tracing tool in EPANET, which is a hydraulic and water quality analysis model, to the decomposition of a network to improve the efficiency of the optimal design process. The proposed approach was tested by carrying out the optimal cost design of two water distribution networks, and the results were compared with other optimal cost designs derived from previously proposed optimization algorithms. The proposed decomposition approach using the source tracing technique enables the efficient decomposition of an actual large-scale network, and the results can be combined with the optimal cost design process using an optimization algorithm. This proves that the final design in this study is better than those obtained with other previously proposed optimization algorithms.

  6. Analysis of Decomposition for Structure I Methane Hydrate by Molecular Dynamics Simulation

    NASA Astrophysics Data System (ADS)

    Wei, Na; Sun, Wan-Tong; Meng, Ying-Feng; Liu, An-Qi; Zhou, Shou-Wei; Guo, Ping; Fu, Qiang; Lv, Xin

    2018-05-01

    At multiple temperature and pressure conditions, the microscopic decomposition mechanisms of structure I methane hydrate in contact with bulk water molecules have been studied by molecular dynamics simulation using the LAMMPS software. The simulation system consists of 482 methane molecules in hydrate and 3027 randomly distributed bulk water molecules. From the simulation results, the number of decomposed hydrate cages, the density of methane molecules, the radial distribution function for oxygen atoms, and the mean square displacement and diffusion coefficient of methane molecules have been analyzed. A significant result is that structure I methane hydrate decomposes from the hydrate-bulk water interface toward the hydrate interior. As temperature rises and pressure drops, the stability of the hydrate weakens, decomposition proceeds deeper into the hydrate, and the mean square displacement and diffusion coefficient of the methane molecules increase. These studies provide important insight into the microscopic decomposition mechanisms of methane hydrate.
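
    The diffusion-coefficient analysis mentioned above typically uses the Einstein relation; a minimal sketch on a synthetic random-walk trajectory (not the LAMMPS output of this study) is given below.

    ```python
    # Mean square displacement (MSD) and diffusion coefficient via the
    # Einstein relation D = MSD / (6 t) in the linear regime.
    import numpy as np

    rng = np.random.default_rng(0)
    n_steps, n_mol, dt = 2000, 100, 0.002          # dt in ps (assumed units)
    steps = rng.normal(0.0, 0.05, size=(n_steps, n_mol, 3))
    positions = np.cumsum(steps, axis=0)           # unwrapped coordinates, nm

    def msd(positions, max_lag):
        lags = np.arange(1, max_lag)
        out = np.empty(len(lags))
        for k, lag in enumerate(lags):
            disp = positions[lag:] - positions[:-lag]
            out[k] = np.mean(np.sum(disp**2, axis=-1))
        return lags * dt, out

    t, m = msd(positions, max_lag=500)
    D = np.polyfit(t, m, 1)[0] / 6.0               # slope of MSD(t) divided by 6
    print(f"D ~ {D:.4f} nm^2/ps")
    ```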

  7. Pairwise Trajectory Management (PTM): Concept Description and Documentation

    NASA Technical Reports Server (NTRS)

    Jones, Kenneth M.; Graff, Thomas J.; Carreno, Victor; Chartrand, Ryan C.; Kibler, Jennifer L.

    2018-01-01

    Pairwise Trajectory Management (PTM) is an Interval Management (IM) concept that utilizes airborne and ground-based capabilities to enable the implementation of airborne pairwise spacing capabilities in oceanic regions. The goal of PTM is to use airborne surveillance and tools to manage an "at or greater than" inter-aircraft spacing. Due to the accuracy of Automatic Dependent Surveillance-Broadcast (ADS-B) information and the use of airborne spacing guidance, the minimum PTM spacing distance will be less than distances a controller can support with current automation systems that support oceanic operations. Ground tools assist the controller in evaluating the traffic picture and determining appropriate PTM clearances to be issued. Avionics systems provide guidance information that allows the flight crew to conform to the PTM clearance issued by the controller. The combination of a reduced minimum distance and airborne spacing management will increase the capacity and efficiency of aircraft operations at a given altitude or volume of airspace. This document provides an overview of the proposed application, a description of several key scenarios, a high level discussion of expected air and ground equipment and procedure changes, a description of a NASA human-machine interface (HMI) prototype for the flight crew that would support PTM operations, and initial benefits analysis results. Additionally, included as appendices, are the following documents: the PTM Operational Services and Environment Definition (OSED) document and a companion "Future Considerations for the Pairwise Trajectory Management (PTM) Concept: Potential Future Updates for the PTM OSED" paper, a detailed description of the PTM algorithm and PTM Limit Mach rules, initial PTM safety requirements and safety assessment documents, a detailed description of the design, development, and initial evaluations of the proposed flight crew HMI, an overview of the methodology and results of PTM pilot training requirements focus group and human-in-the-loop testing activities, and the PTM Pilot Guide.

  8. PWC - PAIRWISE COMPARISON SOFTWARE: SOFTWARE PROGRAM FOR PAIRWISE COMPARISON TASK FOR PSYCHOMETRIC SCALING AND COGNITIVE RESEARCH

    NASA Technical Reports Server (NTRS)

    Ricks, W. R.

    1994-01-01

    PWC is used for pair-wise comparisons in both psychometric scaling techniques and cognitive research. The cognitive tasks and processes of a human operator of automated systems are now prominent considerations when defining system requirements. Recent developments in cognitive research have emphasized the potential utility of psychometric scaling techniques, such as multidimensional scaling, for representing human knowledge and cognitive processing structures. Such techniques involve collecting measurements of stimulus-relatedness from human observers. When data are analyzed using this scaling approach, an n-dimensional representation of the stimuli is produced. This resulting representation is said to describe the subject's cognitive or perceptual view of the stimuli. PWC applies one of the many techniques commonly used to acquire the data necessary for these types of analyses: pair-wise comparisons. PWC administers the task, collects the data from the test subject, and formats the data for analysis. It therefore addresses many of the limitations of the traditional "pen-and-paper" methods. By automating the data collection process, subjects are prevented from going back to check previous responses, the possibility of erroneous data transfer is eliminated, and the burden of the administration and taking of the test is eased. By using randomization, PWC ensures that subjects see the stimuli pairs presented in random order, and that each subject sees pairs in a different random order. PWC is written in Turbo Pascal v6.0 for IBM PC compatible computers running MS-DOS. The program has also been successfully compiled with Turbo Pascal v7.0. A sample executable is provided. PWC requires 30K of RAM for execution. The standard distribution medium for this program is a 5.25 inch 360K MS-DOS format diskette. Two electronic versions of the documentation are included on the diskette: one in ASCII format and one in MS Word for Windows format. PWC was developed in 1993.
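
    The core task PWC automates can be illustrated in a few lines of modern code; the sketch below is a generic Python illustration (the original program was written in Turbo Pascal), and the stimulus names, rating scale and CSV output format are arbitrary choices.

    ```python
    # Present every unordered pair of stimuli in a random order (a fresh
    # order per subject) and record the responses to a CSV file.
    import csv
    import random
    from itertools import combinations

    def run_pairwise_task(stimuli, subject_id, out_path, seed=None):
        pairs = list(combinations(stimuli, 2))
        random.Random(seed).shuffle(pairs)          # randomized presentation order
        with open(out_path, "w", newline="") as fh:
            writer = csv.writer(fh)
            writer.writerow(["subject", "stimulus_a", "stimulus_b", "rating"])
            for a, b in pairs:
                rating = input(f"How related are '{a}' and '{b}' (1-9)? ")
                writer.writerow([subject_id, a, b, rating])

    # Example call (stimulus names are hypothetical):
    # run_pairwise_task(["altimeter", "airspeed", "heading", "flap lever"],
    #                   subject_id="S01", out_path="s01_pairs.csv", seed=42)
    ```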

  9. Efficient pairwise RNA structure prediction using probabilistic alignment constraints in Dynalign

    PubMed Central

    2007-01-01

    Background Joint alignment and secondary structure prediction of two RNA sequences can significantly improve the accuracy of the structural predictions. Methods addressing this problem, however, are forced to employ constraints that reduce computation by restricting the alignments and/or structures (i.e. folds) that are permissible. In this paper, a new methodology is presented for the purpose of establishing alignment constraints based on nucleotide alignment and insertion posterior probabilities. Using a hidden Markov model, posterior probabilities of alignment and insertion are computed for all possible pairings of nucleotide positions from the two sequences. These alignment and insertion posterior probabilities are additively combined to obtain probabilities of co-incidence for nucleotide position pairs. A suitable alignment constraint is obtained by thresholding the co-incidence probabilities. The constraint is integrated with Dynalign, a free energy minimization algorithm for joint alignment and secondary structure prediction. The resulting method is benchmarked against the previous version of Dynalign and against other programs for pairwise RNA structure prediction. Results The proposed technique eliminates manual parameter selection in Dynalign and provides significant computational time savings in comparison to prior constraints in Dynalign, while also yielding a small improvement in structural prediction accuracy. Savings are also realized in memory. In experiments over a 5S RNA dataset with an average sequence length of approximately 120 nucleotides, the method reduces computation by a factor of 2. The method performs favorably in comparison to other programs for pairwise RNA structure prediction: yielding better accuracy, on average, and requiring significantly fewer computational resources. Conclusion Probabilistic analysis can be utilized in order to automate the determination of alignment constraints for pairwise RNA structure prediction methods in a principled fashion. These constraints can reduce the computational and memory requirements of these methods while maintaining or improving their accuracy of structural prediction. This extends the practical reach of these methods to longer sequences. The revised Dynalign code is freely available for download. PMID:17445273

  10. In silico Pathway Activation Network Decomposition Analysis (iPANDA) as a method for biomarker development.

    PubMed

    Ozerov, Ivan V; Lezhnina, Ksenia V; Izumchenko, Evgeny; Artemov, Artem V; Medintsev, Sergey; Vanhaelen, Quentin; Aliper, Alexander; Vijg, Jan; Osipov, Andreyan N; Labat, Ivan; West, Michael D; Buzdin, Anton; Cantor, Charles R; Nikolsky, Yuri; Borisov, Nikolay; Irincheeva, Irina; Khokhlovich, Edward; Sidransky, David; Camargo, Miguel Luiz; Zhavoronkov, Alex

    2016-11-16

    Signalling pathway activation analysis is a powerful approach for extracting biologically relevant features from large-scale transcriptomic and proteomic data. However, modern pathway-based methods often fail to provide stable pathway signatures of a specific phenotype or reliable disease biomarkers. In the present study, we introduce the in silico Pathway Activation Network Decomposition Analysis (iPANDA) as a scalable robust method for biomarker identification using gene expression data. The iPANDA method combines precalculated gene coexpression data with gene importance factors based on the degree of differential gene expression and pathway topology decomposition for obtaining pathway activation scores. Using Microarray Analysis Quality Control (MAQC) data sets and pretreatment data on Taxol-based neoadjuvant breast cancer therapy from multiple sources, we demonstrate that iPANDA provides significant noise reduction in transcriptomic data and identifies highly robust sets of biologically relevant pathway signatures. We successfully apply iPANDA for stratifying breast cancer patients according to their sensitivity to neoadjuvant therapy.

  11. In silico Pathway Activation Network Decomposition Analysis (iPANDA) as a method for biomarker development

    PubMed Central

    Ozerov, Ivan V.; Lezhnina, Ksenia V.; Izumchenko, Evgeny; Artemov, Artem V.; Medintsev, Sergey; Vanhaelen, Quentin; Aliper, Alexander; Vijg, Jan; Osipov, Andreyan N.; Labat, Ivan; West, Michael D.; Buzdin, Anton; Cantor, Charles R.; Nikolsky, Yuri; Borisov, Nikolay; Irincheeva, Irina; Khokhlovich, Edward; Sidransky, David; Camargo, Miguel Luiz; Zhavoronkov, Alex

    2016-01-01

    Signalling pathway activation analysis is a powerful approach for extracting biologically relevant features from large-scale transcriptomic and proteomic data. However, modern pathway-based methods often fail to provide stable pathway signatures of a specific phenotype or reliable disease biomarkers. In the present study, we introduce the in silico Pathway Activation Network Decomposition Analysis (iPANDA) as a scalable robust method for biomarker identification using gene expression data. The iPANDA method combines precalculated gene coexpression data with gene importance factors based on the degree of differential gene expression and pathway topology decomposition for obtaining pathway activation scores. Using Microarray Analysis Quality Control (MAQC) data sets and pretreatment data on Taxol-based neoadjuvant breast cancer therapy from multiple sources, we demonstrate that iPANDA provides significant noise reduction in transcriptomic data and identifies highly robust sets of biologically relevant pathway signatures. We successfully apply iPANDA for stratifying breast cancer patients according to their sensitivity to neoadjuvant therapy. PMID:27848968

  12. Metagenomic analysis of antibiotic resistance genes (ARGs) during refuse decomposition.

    PubMed

    Liu, Xi; Yang, Shu; Wang, Yangqing; Zhao, He-Ping; Song, Liyan

    2018-04-12

    Landfills are important reservoirs of residual antibiotics and antibiotic resistance genes (ARGs), but the mechanism by which landfilling influences antibiotic resistance remains unclear. Although refuse decomposition plays a crucial role in landfill stabilization, its impact on antibiotic resistance has not been well characterized. To better understand this impact, we studied the dynamics of ARGs and the bacterial community composition during refuse decomposition in a bench-scale bioreactor over long-term operation (265 d) based on metagenomic analysis. The total abundance of ARGs increased from 431.0 ppm in the initial aerobic phase (AP) to 643.9 ppm in the later methanogenic phase (MP) during refuse decomposition, suggesting that the application of landfills for municipal solid waste (MSW) treatment may elevate the level of ARGs. A shift from drug-specific (bacitracin, tetracycline and sulfonamide) resistance to multidrug resistance was observed during refuse decomposition and was driven by a shift of potential bacterial hosts. The elevated abundance of Pseudomonas mainly contributed to the increasing abundance of multidrug ARGs (mexF and mexW). Accordingly, the percentage of ARGs encoding an efflux pump increased during refuse decomposition, suggesting that potential bacterial hosts developed this mechanism to adapt to the carbon and energy shortage when biodegradable substances were depleted. Overall, our findings indicate that the use of landfills for MSW treatment increases antibiotic resistance and demonstrate the need for a comprehensive investigation of antibiotic resistance in landfills. Copyright © 2018. Published by Elsevier B.V.

  13. Assessment of skeletal changes after post-mortem exposure to fire as an indicator of decomposition stage.

    PubMed

    Keough, N; L'Abbé, E N; Steyn, M; Pretorius, S

    2015-01-01

    Forensic anthropologists are tasked with interpreting the sequence of events from death to the discovery of a body. Burned bone often evokes questions as to the timing of burning events. The purpose of this study was to assess the progression of thermal damage on bones with advancement in decomposition. Twenty-five pigs in various stages of decomposition (fresh, early, advanced, early and late skeletonisation) were exposed to fire for 30 min. The scored heat-related features on bone included colour change (unaltered, charred, calcined), brown and heat borders, heat lines, delineation, greasy bone, joint shielding, predictable and minimal cracking, delamination and heat-induced fractures. Colour changes were scored according to a ranked percentage scale (0-3) and the remaining traits as absent or present (0/1). Kappa statistics was used to evaluate intra- and inter-observer error. Transition analysis was used to formulate probability mass functions [P(X=j|i)] to predict decomposition stage from the scored features of thermal destruction. Nine traits displayed potential to predict decomposition stage from burned remains. An increase in calcined and charred bone occurred synchronously with advancement of decomposition with subsequent decrease in unaltered surfaces. Greasy bone appeared more often in the early/fresh stages (fleshed bone). Heat borders, heat lines, delineation, joint shielding, predictable and minimal cracking are associated with advanced decomposition, when bone remains wet but lacks extensive soft tissue protection. Brown burn/borders, delamination and other heat-induced fractures are associated with early and late skeletonisation, showing that organic composition of bone and percentage of flesh present affect the manner in which it burns. No statistically significant difference was noted among observers for the majority of the traits, indicating that they can be scored reliably. Based on the data analysis, the pattern of heat-induced changes may assist in estimating decomposition stage from unknown, burned remains. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  14. Kinetics of calcium sulfoaluminate formation from tricalcium aluminate, calcium sulfate and calcium oxide

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Xuerun, E-mail: xuerunli@163.com; Zhang, Yu; Shen, Xiaodong, E-mail: xdshen@njut.edu.cn

    The formation kinetics of tricalcium aluminate (C3A) and calcium sulfate yielding calcium sulfoaluminate (C4A3$), and the decomposition kinetics of calcium sulfoaluminate, were investigated by sintering a mixture of synthetic C3A and gypsum. The quantitative analysis of the phase composition was performed by X-ray powder diffraction analysis using the Rietveld method. The results showed that the formation reaction 3Ca3Al2O6 + CaSO4 → Ca4Al6O12(SO4) + 6CaO was the primary reaction below 1350 °C, with an activation energy of 231 ± 42 kJ/mol, while the decomposition reaction 2Ca4Al6O12(SO4) + 10CaO → 6Ca3Al2O6 + 2SO2 ↑ + O2 ↑ primarily occurred beyond 1350 °C, with an activation energy of 792 ± 64 kJ/mol. The optimal formation region for C4A3$ was from 1150 °C to 1350 °C and from 6 h to 1 h, which could provide useful information on the formation of C4A3$-containing clinkers. The Jander diffusion model was applicable to both the formation and decomposition of calcium sulfoaluminate. Ca2+ and SO4^2- were the diffusive species in both the formation and decomposition reactions. -- Highlights: • Formation and decomposition of calcium sulphoaluminate were studied. • Decomposition of calcium sulphoaluminate combined CaO and yielded C3A. • Activation energy for formation was 231 ± 42 kJ/mol. • Activation energy for decomposition was 792 ± 64 kJ/mol. • Both the formation and decomposition were controlled by diffusion.
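
    The Jander-model treatment referred to above amounts to fitting [1 - (1 - alpha)^(1/3)]^2 against time at each temperature and then fitting the resulting rate constants to an Arrhenius law; the sketch below uses synthetic conversion data, so its output is illustrative and not the 231 or 792 kJ/mol reported in the study.

    ```python
    # Jander diffusion model fit followed by an Arrhenius fit of the rate
    # constants. Conversion (alpha) values are synthetic placeholders.
    import numpy as np

    R = 8.314                                   # J/(mol K)
    t = np.array([1.0, 2.0, 3.0, 4.0, 6.0])     # sintering time, h

    # Placeholder conversion of C3A + CaSO4 -> C4A3$ at two temperatures.
    alpha = {
        1423.0: np.array([0.18, 0.30, 0.39, 0.46, 0.57]),   # 1150 C
        1573.0: np.array([0.35, 0.52, 0.63, 0.70, 0.80]),   # 1300 C
    }

    rate_constants = {}
    for T, a in alpha.items():
        g = (1.0 - (1.0 - a) ** (1.0 / 3.0)) ** 2            # Jander model g(alpha)
        k, _ = np.polyfit(t, g, 1)                            # slope = rate constant
        rate_constants[T] = k

    Ts = np.array(sorted(rate_constants))
    ks = np.array([rate_constants[T] for T in Ts])
    slope, _ = np.polyfit(1.0 / Ts, np.log(ks), 1)
    print(f"apparent Ea ~ {-slope * R / 1000:.0f} kJ/mol")
    ```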

  15. The environmental variables that impact human decomposition in terrestrially exposed contexts within Canada.

    PubMed

    Cockle, Diane Lyn; Bell, Lynne S

    2017-03-01

    Little is known about the nature and trajectory of human decomposition in Canada. This study involved the examination of 96 retrospective police death investigation cases selected using the Canadian ViCLAS (Violent Crime Linkage Analysis System) and sudden death police databases. A classification system was designed and applied based on the latest visible stages of autolysis (stages 1-2), putrefaction (3-5) and skeletonisation (6-8) observed. The analysis of the progression of decomposition using time (post mortem interval, PMI, in days) and temperature (accumulated degree-days, ADD) found considerable variability during the putrefaction and skeletonisation phases, with poor predictability noted after stage 5 (post bloat). The visible progression of decomposition outdoors was characterized by a brown to black discolouration at stage 5 and remnant desiccated black tissue at stage 7. No bodies were totally skeletonised in under one year. Mummification of tissue was rare, with earlier onset in winter as opposed to summer, likely due to lower seasonal humidity. It was found that neither ADD nor PMI was a strong explanatory variable for the decomposition score, with correlations of 53% for temperature and 41% for time. It took almost twice as much time and 1.5 times more temperature (ADD) for the set of cases exposed to cold and freezing temperatures (4°C or less) to reach putrefaction compared to the warm group. The amount of precipitation and/or clothing had a negligible impact on the advancement of decomposition, whereas the lack of sun exposure (full shade) had a small positive effect. This study found that the poor predictability of the onset and duration of late-stage decomposition, combined with our limited understanding of the full range of variables that influence the speed of decomposition, makes PMI estimations for exposed terrestrial cases in Canada unreliable, and also calls into question PMI estimations elsewhere. Copyright © 2016 The Chartered Society of Forensic Sciences. Published by Elsevier B.V. All rights reserved.

  16. Population structure and covariate analysis based on pairwise microsatellite allele matching frequencies.

    PubMed

    Givens, Geof H; Ozaksoy, Isin

    2007-01-01

    We describe a general model for pairwise microsatellite allele matching probabilities. The model can be used for analysis of population substructure, and is particularly focused on relating genetic correlation to measurable covariates. The approach is intended for cases when the existence of subpopulations is uncertain and a priori assignment of samples to hypothesized subpopulations is difficult. Such a situation arises, for example, with western Arctic bowhead whales, where genetic samples are available only from a possibly mixed migratory assemblage. We estimate genetic structure associated with spatial, temporal, or other variables that may confound the detection of population structure. In the bowhead case, the model permits detection of genetic patterns associated with a temporally pulsed multi-population assemblage in the annual migration. Hypothesis tests for population substructure and for covariate effects can be carried out using permutation methods. Simulated and real examples illustrate the effectiveness and reliability of the approach and enable comparisons with other familiar approaches. Analysis of the bowhead data finds no evidence for two temporally pulsed subpopulations using the best available data, although a significant pattern found by other researchers using preliminary data is also confirmed here. Code in the R language is available from www.stat.colostate.edu/~geof/gammmp.html.

  17. Functional connectivity in resting state as a phonemic fluency ability measure.

    PubMed

    Miró-Padilla, Anna; Bueichekú, Elisenda; Ventura-Campos, Noelia; Palomar-García, María-Ángeles; Ávila, César

    2017-03-01

    There is some evidence that functional connectivity (FC) measures obtained at rest may reflect individual differences in cognitive capabilities. We tested this possibility by using the FAS test as a measure of phonemic fluency. Seed regions of the main brain areas involved in this task were extracted from meta-analysis results (Wagner et al., 2014) and used for pairwise resting-state FC analysis. Ninety-three undergraduates completed the FAS test outside the scanner. A correlation analysis was conducted between the F-A-S scores (behavioral testing) and the pairwise FC pattern of verbal fluency regions of interest. Results showed that the higher FC between the thalamus and the cerebellum, and the lower FCs between the left inferior frontal gyrus and the right insula and between the supplementary motor area and the right insula were associated with better performance on the FAS test. Regression analyses revealed that the first two FCs contributed independently to this better phonemic fluency, reflecting a more general attentional factor (FC between thalamus and cerebellum) and a more specific fluency factor (FC between the left inferior frontal gyrus and the right insula). The results support the Spontaneous Trait Reactivation hypothesis, which explains how resting-state derived measures may reflect individual differences in cognitive abilities. Copyright © 2017 Elsevier Ltd. All rights reserved.
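
    A minimal sketch of the pairwise resting-state FC computation described above follows; the ROI labels, scan length and random time series are placeholders, not the study's seed regions or fMRI data.

    ```python
    # Pairwise functional connectivity: Pearson correlations between every
    # pair of ROI time series, Fisher z-transformed for later statistics.
    import numpy as np
    from itertools import combinations

    rng = np.random.default_rng(0)
    rois = ["L_IFG", "SMA", "R_insula", "thalamus", "cerebellum"]    # assumed labels
    timeseries = rng.normal(size=(len(rois), 240))                   # (roi, volumes)

    fc = {}
    for i, j in combinations(range(len(rois)), 2):
        r = np.corrcoef(timeseries[i], timeseries[j])[0, 1]
        fc[(rois[i], rois[j])] = np.arctanh(r)      # Fisher z-transform
    print(fc[("thalamus", "cerebellum")])
    ```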

  18. Describing the complexity of systems: multivariable "set complexity" and the information basis of systems biology.

    PubMed

    Galas, David J; Sakhanenko, Nikita A; Skupin, Alexander; Ignac, Tomasz

    2014-02-01

    Context dependence is central to the description of complexity. Keying on the pairwise definition of "set complexity," we use an information theory approach to formulate general measures of systems complexity. We examine the properties of multivariable dependency starting with the concept of interaction information. We then present a new measure for unbiased detection of multivariable dependency, "differential interaction information." This quantity for two variables reduces to the pairwise "set complexity" previously proposed as a context-dependent measure of information in biological systems. We generalize it here to an arbitrary number of variables. Critical limiting properties of the "differential interaction information" are key to the generalization. This measure extends previous ideas about biological information and provides a more sophisticated basis for the study of complexity. The properties of "differential interaction information" also suggest new approaches to data analysis. Given a data set of system measurements, differential interaction information can provide a measure of collective dependence, which can be represented in hypergraphs describing complex system interaction patterns. We investigate this kind of analysis using simulated data sets. The conjoining of a generalized set complexity measure, multivariable dependency analysis, and hypergraphs is our central result. While our focus is on complex biological systems, our results are applicable to any complex system.
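
    For orientation, the classical three-variable interaction information that these measures generalize can be computed directly from entropies; the sketch below uses random binary placeholders and is not an implementation of the authors' differential interaction information.

    ```python
    # Three-variable interaction information from joint entropies:
    #   I(X;Y;Z) = H(X)+H(Y)+H(Z) - H(X,Y)-H(X,Z)-H(Y,Z) + H(X,Y,Z)
    # (sign conventions differ between authors).
    import numpy as np
    from collections import Counter

    def entropy(columns):
        counts = Counter(zip(*columns))
        p = np.array(list(counts.values()), dtype=float)
        p /= p.sum()
        return -np.sum(p * np.log2(p))

    rng = np.random.default_rng(0)
    x = rng.integers(0, 2, 1000)
    y = rng.integers(0, 2, 1000)
    z = x ^ y                        # z depends on x and y jointly (XOR)

    ii = (entropy([x]) + entropy([y]) + entropy([z])
          - entropy([x, y]) - entropy([x, z]) - entropy([y, z])
          + entropy([x, y, z]))
    print(f"interaction information ~ {ii:.3f} bits")
    ```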

  19. Novel presentational approaches were developed for reporting network meta-analysis.

    PubMed

    Tan, Sze Huey; Cooper, Nicola J; Bujkiewicz, Sylwia; Welton, Nicky J; Caldwell, Deborah M; Sutton, Alexander J

    2014-06-01

    To present graphical tools for reporting network meta-analysis (NMA) results aiming to increase the accessibility, transparency, interpretability, and acceptability of NMA analyses. The key components of NMA results were identified based on recommendations by agencies such as the National Institute for Health and Care Excellence (United Kingdom). Three novel graphs were designed to amalgamate the identified components using familiar graphical tools such as the bar, line, or pie charts and adhering to good graphical design principles. Three key components for presentation of NMA results were identified, namely relative effects and their uncertainty, probability of an intervention being best, and between-study heterogeneity. Two of the three graphs developed present results (for each pairwise comparison of interventions in the network) obtained from both NMA and standard pairwise meta-analysis for easy comparison. They also include options to display the probability best, ranking statistics, heterogeneity, and prediction intervals. The third graph presents rankings of interventions in terms of their effectiveness to enable clinicians to easily identify "top-ranking" interventions. The graphical tools presented can display results tailored to the research question of interest, and targeted at a whole spectrum of users from the technical analyst to the nontechnical clinician. Copyright © 2014 Elsevier Inc. All rights reserved.

  20. Decomposition of metabolic network into functional modules based on the global connectivity structure of reaction graph.

    PubMed

    Ma, Hong-Wu; Zhao, Xue-Ming; Yuan, Ying-Jin; Zeng, An-Ping

    2004-08-12

    Metabolic networks are organized in a modular, hierarchical manner. Methods for a rational decomposition of the metabolic network into relatively independent functional subsets are essential to better understand the modularity and organization principles of a large-scale, genome-wide network. Network decomposition is also necessary for functional analysis of metabolism by pathway analysis methods, which are often hampered by the problem of combinatorial explosion due to the complexity of metabolic networks. Decomposition methods proposed in the literature are mainly based on the connection degree of metabolites. To obtain a more reasonable decomposition, the global connectivity structure of metabolic networks should be taken into account. In this work, we use a reaction graph representation of a metabolic network for the identification of its global connectivity structure and for decomposition. A bow-tie connectivity structure similar to that previously discovered for the metabolite graph is found to exist in the reaction graph as well. Based on this bow-tie structure, a new decomposition method is proposed, which uses a distance definition derived from the path length between two reactions. A hierarchical classification tree is first constructed from the distance matrix among the reactions in the giant strong component of the bow-tie structure. These reactions are then grouped into different subsets based on the hierarchical tree. Reactions in the IN and OUT subsets of the bow-tie structure are subsequently placed in the corresponding subsets according to a 'majority rule'. Compared with the decomposition methods proposed in the literature, ours is based on combined properties of the global network structure and local reaction connectivity rather than, primarily, on the connection degree of metabolites. The method is applied to decompose the metabolic network of Escherichia coli. Eleven subsets are obtained. More detailed investigations of the subsets show that reactions in the same subset are indeed functionally related. The rational decomposition of metabolic networks, and subsequent studies of the subsets, make the inherent organization and functionality of metabolic networks at the modular level easier to understand. http://genome.gbf.de/bioinformatics/
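
    A minimal sketch of this decomposition pipeline on a toy reaction set (not the E. coli network) is given below; the reaction list, the two-cluster cut and the use of networkx and SciPy are assumptions made for illustration.

    ```python
    # Build a directed reaction graph, take its giant strongly connected
    # component, and cluster the reactions hierarchically from their
    # pairwise path-length distances.
    import networkx as nx
    import numpy as np
    from scipy.cluster.hierarchy import fcluster, linkage
    from scipy.spatial.distance import squareform

    # Toy reactions: name -> (substrates, products)
    reactions = {
        "R1": ({"A"}, {"B"}), "R2": ({"B"}, {"C"}), "R3": ({"C"}, {"A", "D"}),
        "R4": ({"D"}, {"E"}), "R5": ({"E"}, {"D", "F"}), "R6": ({"F"}, {"G"}),
    }

    g = nx.DiGraph()
    g.add_nodes_from(reactions)
    for r1, (_, prods) in reactions.items():
        for r2, (subs, _) in reactions.items():
            if r1 != r2 and prods & subs:          # a product of r1 feeds r2
                g.add_edge(r1, r2)

    gsc = max(nx.strongly_connected_components(g), key=len)   # giant strong component
    nodes = sorted(gsc)
    # Symmetrized shortest-path distances between reactions in the GSC.
    dist = np.array([[nx.shortest_path_length(g, a, b) for b in nodes] for a in nodes])
    dist = (dist + dist.T) / 2.0
    Z = linkage(squareform(dist, checks=False), "average")
    clusters = fcluster(Z, t=2, criterion="maxclust")
    print(dict(zip(nodes, clusters)))
    ```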

  1. Robust-mode analysis of hydrodynamic flows

    NASA Astrophysics Data System (ADS)

    Roy, Sukesh; Gord, James R.; Hua, Jia-Chen; Gunaratne, Gemunu H.

    2017-04-01

    The emergence of techniques to extract high-frequency high-resolution data introduces a new avenue for modal decomposition to assess the underlying dynamics, especially of complex flows. However, this task requires the differentiation of robust, repeatable flow constituents from noise and other irregular features of a flow. Traditional approaches involving low-pass filtering and principal components analysis have shortcomings. The approach outlined here, referred to as robust-mode analysis, is based on Koopman decomposition. Three applications to (a) a counter-rotating cellular flame state, (b) variations in financial markets, and (c) turbulent injector flows are provided.

  2. Tracking Hierarchical Processing in Morphological Decomposition with Brain Potentials

    ERIC Educational Resources Information Center

    Lavric, Aureliu; Elchlepp, Heike; Rastle, Kathleen

    2012-01-01

    One important debate in psycholinguistics concerns the nature of morphological decomposition processes in visual word recognition (e.g., darkness = {dark} + {-ness}). One theory claims that these processes arise during orthographic analysis and prior to accessing meaning (Rastle & Davis, 2008), and another argues that these processes arise through…

  3. Demonstrating microbial co-occurrence pattern analyses within and between ecosystems

    PubMed Central

    Williams, Ryan J.; Howe, Adina; Hofmockel, Kirsten S.

    2014-01-01

    Co-occurrence patterns are used in ecology to explore interactions between organisms and environmental effects on coexistence within biological communities. Analysis of co-occurrence patterns among microbial communities has ranged from simple pairwise comparisons between all community members to direct hypothesis testing between focal species. However, co-occurrence patterns are rarely studied across multiple ecosystems or multiple scales of biological organization within the same study. Here we outline an approach to produce co-occurrence analyses that are focused at three different scales: co-occurrence patterns between ecosystems at the community scale, modules of co-occurring microorganisms within communities, and co-occurring pairs within modules that are nested within microbial communities. To demonstrate our co-occurrence analysis approach, we gathered publicly available 16S rRNA amplicon datasets to compare and contrast microbial co-occurrence at different taxonomic levels across different ecosystems. We found differences in community composition and co-occurrence that reflect environmental filtering at the community scale and consistent pairwise occurrences that may be used to infer ecological traits about poorly understood microbial taxa. However, we also found that conclusions derived from applying network statistics to microbial relationships can vary depending on the taxonomic level chosen and criteria used to build co-occurrence networks. We present our statistical analysis and code for public use in analysis of co-occurrence patterns across microbial communities. PMID:25101065
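
    To make the general workflow concrete, here is a minimal sketch (not the published analysis or its code; the simulated abundance table and the |rho| >= 0.6, p < 0.05 cutoffs are assumptions) that builds a co-occurrence network from pairwise Spearman correlations and reads off modules of co-occurring taxa as connected components.

    ```python
    # Hypothetical sketch: pairwise Spearman co-occurrence network, with modules
    # taken as connected components. Data, thresholds, and seed are assumptions.
    import numpy as np
    from scipy.stats import spearmanr
    import networkx as nx

    rng = np.random.default_rng(0)
    n_samples, n_taxa = 30, 12
    drivers = rng.normal(size=(n_samples, 3))                 # shared environmental drivers
    loadings = rng.choice([0, 1], size=(3, n_taxa), p=[0.6, 0.4])
    abundance = drivers @ loadings + 0.5 * rng.normal(size=(n_samples, n_taxa))
    taxa = [f"OTU_{i}" for i in range(n_taxa)]

    rho, pval = spearmanr(abundance)                          # taxa-by-taxa matrices

    G = nx.Graph()
    G.add_nodes_from(taxa)
    for i in range(n_taxa):
        for j in range(i + 1, n_taxa):
            if abs(rho[i, j]) >= 0.6 and pval[i, j] < 0.05:
                G.add_edge(taxa[i], taxa[j], weight=float(rho[i, j]))

    # Modules of co-occurring taxa nested within the community-scale network.
    for k, module in enumerate(nx.connected_components(G)):
        print(f"module {k}: {sorted(module)}")
    ```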

  4. Adaptive selection of diurnal minimum variation: a statistical strategy to obtain representative atmospheric CO2 data and its application to European elevated mountain stations

    NASA Astrophysics Data System (ADS)

    Yuan, Ye; Ries, Ludwig; Petermeier, Hannes; Steinbacher, Martin; Gómez-Peláez, Angel J.; Leuenberger, Markus C.; Schumacher, Marcus; Trickl, Thomas; Couret, Cedric; Meinhardt, Frank; Menzel, Annette

    2018-03-01

    Critical data selection is essential for determining representative baseline levels of atmospheric trace gases even at remote measurement sites. Different data selection techniques have been used around the world, which could potentially lead to reduced compatibility when comparing data from different stations. This paper presents a novel statistical data selection method named adaptive diurnal minimum variation selection (ADVS) based on CO2 diurnal patterns typically occurring at elevated mountain stations. Its capability and applicability were studied on records of atmospheric CO2 observations at six Global Atmosphere Watch stations in Europe, namely, Zugspitze-Schneefernerhaus (Germany), Sonnblick (Austria), Jungfraujoch (Switzerland), Izaña (Spain), Schauinsland (Germany), and Hohenpeissenberg (Germany). Three other frequently applied statistical data selection methods were included for comparison. Among the studied methods, our ADVS method resulted in a lower fraction of data selected as a baseline with lower maxima during winter and higher minima during summer in the selected data. The measured time series were analyzed for long-term trends and seasonality by a seasonal-trend decomposition technique. In contrast to unselected data, mean annual growth rates of all selected datasets were not significantly different among the sites, except for the data recorded at Schauinsland. However, clear differences were found in the annual amplitudes as well as the seasonal time structure. Based on a pairwise analysis of correlations between stations on the seasonal-trend decomposed components by statistical data selection, we conclude that the baseline identified by the ADVS method is a better representation of lower free tropospheric (LFT) conditions than baselines identified by the other methods.
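
    As a sketch of the seasonal-trend decomposition and pairwise comparison step (the synthetic monthly series, the STL implementation from statsmodels, and the use of plain correlation are assumptions standing in for the paper's exact procedure):

    ```python
    # Hypothetical sketch: seasonal-trend decomposition of baseline-selected CO2
    # records from two stations, followed by pairwise comparison of components.
    import numpy as np
    from statsmodels.tsa.seasonal import STL

    rng = np.random.default_rng(1)
    t = np.arange(120)                                  # 10 years of monthly values

    def synthetic_station(amplitude, noise):
        trend = 380 + 0.2 * t                           # ppm, slow growth
        season = amplitude * np.sin(2 * np.pi * t / 12)
        return trend + season + rng.normal(0, noise, t.size)

    station_a = synthetic_station(3.0, 0.3)
    station_b = synthetic_station(2.5, 0.3)

    res_a = STL(station_a, period=12).fit()
    res_b = STL(station_b, period=12).fit()

    # Pairwise correlation of decomposed components between the two stations.
    for name, xa, xb in [("seasonal", res_a.seasonal, res_b.seasonal),
                         ("trend", res_a.trend, res_b.trend)]:
        print(f"{name} component correlation: {np.corrcoef(xa, xb)[0, 1]:.3f}")
    ```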

  5. Large-scale compensation of errors in pairwise-additive empirical force fields: comparison of AMBER intermolecular terms with rigorous DFT-SAPT calculations.

    PubMed

    Zgarbová, Marie; Otyepka, Michal; Sponer, Jirí; Hobza, Pavel; Jurecka, Petr

    2010-09-21

    The intermolecular interaction energy components for several molecular complexes were calculated using force fields available in the AMBER suite of programs and compared with Density Functional Theory-Symmetry Adapted Perturbation Theory (DFT-SAPT) values. The extent to which such comparison is meaningful is discussed. The comparability is shown to depend strongly on the intermolecular distance, which means that comparisons made at one distance only are of limited value. At large distances the coulombic and van der Waals 1/r(6) empirical terms correspond fairly well with the DFT-SAPT electrostatics and dispersion terms, respectively. At the onset of electronic overlap the empirical values deviate from the reference values considerably. However, the errors in the force fields tend to cancel out in a systematic manner at equilibrium distances. Thus, the overall performance of the force fields displays errors an order of magnitude smaller than those of the individual interaction energy components. The repulsive 1/r(12) component of the van der Waals expression seems to be responsible for a significant part of the deviation of the force field results from the reference values. We suggest that further improvement of the force fields for intermolecular interactions would require replacement of the nonphysical 1/r(12) term by an exponential function. Dispersion anisotropy and its effects are discussed. Our analysis is intended to show that although comparing the empirical and non-empirical interaction energy components is in general problematic, it might bring insights useful for the construction of new force fields. Our results are relevant to often performed force-field-based interaction energy decompositions.
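
    For reference, the pairwise-additive empirical terms discussed above have simple closed forms; the sketch below evaluates a point-charge Coulomb term, a 12-6 Lennard-Jones term, and a Buckingham-type exponential repulsion as the suggested alternative to the 1/r(12) term. All parameter values are illustrative assumptions, not AMBER parameters.

    ```python
    # Hypothetical parameters; pairwise-additive nonbonded terms vs an exp-6 form.
    import numpy as np

    def coulomb(r, qi=0.4, qj=-0.4, coul_const=332.0637):    # kcal*A/(mol*e^2)
        return coul_const * qi * qj / r

    def lennard_jones(r, epsilon=0.15, sigma=3.4):           # kcal/mol, Angstrom
        sr6 = (sigma / r) ** 6
        return 4 * epsilon * (sr6 ** 2 - sr6)                # 1/r^12 repulsion - 1/r^6 dispersion

    def exp6(r, A=9e4, B=3.6, C=600.0):                      # Buckingham exp-6 form
        return A * np.exp(-B * r) - C / r ** 6

    for r in (3.0, 3.5, 4.0, 6.0, 10.0):
        print(f"r={r:4.1f} A  coul={coulomb(r):8.2f}  LJ={lennard_jones(r):8.3f}  exp-6={exp6(r):8.3f}")
    ```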

  6. Storage assignment optimization in a multi-tier shuttle warehousing system

    NASA Astrophysics Data System (ADS)

    Wang, Yanyan; Mou, Shandong; Wu, Yaohua

    2016-03-01

    The current mathematical models for the storage assignment problem are generally established based on the traveling salesman problem(TSP), which has been widely applied in the conventional automated storage and retrieval system(AS/RS). However, the previous mathematical models in conventional AS/RS do not match multi-tier shuttle warehousing systems(MSWS) because the characteristics of parallel retrieval in multiple tiers and progressive vertical movement destroy the foundation of TSP. In this study, a two-stage open queuing network model in which shuttles and a lift are regarded as servers at different stages is proposed to analyze system performance in the terms of shuttle waiting period (SWP) and lift idle period (LIP) during transaction cycle time. A mean arrival time difference matrix for pairwise stock keeping units(SKUs) is presented to determine the mean waiting time and queue length to optimize the storage assignment problem on the basis of SKU correlation. The decomposition method is applied to analyze the interactions among outbound task time, SWP, and LIP. The ant colony clustering algorithm is designed to determine storage partitions using clustering items. In addition, goods are assigned for storage according to the rearranging permutation and the combination of storage partitions in a 2D plane. This combination is derived based on the analysis results of the queuing network model and on three basic principles. The storage assignment method and its entire optimization algorithm method as applied in a MSWS are verified through a practical engineering project conducted in the tobacco industry. The applying results show that the total SWP and LIP can be reduced effectively to improve the utilization rates of all devices and to increase the throughput of the distribution center.

  7. Maximally informative pairwise interactions in networks

    PubMed Central

    Fitzgerald, Jeffrey D.; Sharpee, Tatyana O.

    2010-01-01

    Several types of biological networks have recently been shown to be accurately described by a maximum entropy model with pairwise interactions, also known as the Ising model. Here we present an approach for finding the optimal mappings between input signals and network states that allow the network to convey the maximal information about input signals drawn from a given distribution. This mapping also produces a set of linear equations for calculating the optimal Ising-model coupling constants, as well as geometric properties that indicate the applicability of the pairwise Ising model. We show that the optimal pairwise interactions are on average zero for Gaussian and uniformly distributed inputs, whereas they are nonzero for inputs approximating those in natural environments. These nonzero network interactions are predicted to increase in strength as the noise in the response functions of each network node increases. This approach also suggests ways for how interactions with unmeasured parts of the network can be inferred from the parameters of response functions for the measured network nodes. PMID:19905153
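
    The paper derives couplings from an information-maximization principle; as a simpler, generic point of comparison, the sketch below uses the standard naive mean-field inverse-Ising estimate, J approximately equal to the off-diagonal part of -C^-1, to recover approximate pairwise couplings from sampled binary network states. The random state matrix is an assumption, and this is not the authors' optimization procedure.

    ```python
    # Hypothetical data; naive mean-field inverse-Ising estimate of pairwise couplings.
    import numpy as np

    rng = np.random.default_rng(2)
    states = rng.choice([-1, 1], size=(5000, 8))     # samples x nodes, +/-1 spins

    C = np.cov(states, rowvar=False)                 # node-by-node covariance
    J_mf = -np.linalg.inv(C)                         # naive mean-field couplings
    np.fill_diagonal(J_mf, 0.0)                      # keep only the pairwise terms

    print(np.round(J_mf, 3))                         # ~0 for independent inputs
    ```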

  8. Isothermal Decomposition of Hydrogen Peroxide Dihydrate

    NASA Technical Reports Server (NTRS)

    Loeffler, M. J.; Baragiola, R. A.

    2011-01-01

    We present a new method of growing pure solid hydrogen peroxide in an ultra high vacuum environment and apply it to determine thermal stability of the dihydrate compound that forms when water and hydrogen peroxide are mixed at low temperatures. Using infrared spectroscopy and thermogravimetric analysis, we quantified the isothermal decomposition of the metastable dihydrate at 151.6 K. This decomposition occurs by fractional distillation through the preferential sublimation of water, which leads to the formation of pure hydrogen peroxide. The results imply that in an astronomical environment where condensed mixtures of H2O2 and H2O are shielded from radiolytic decomposition and warmed to temperatures where sublimation is significant, highly concentrated or even pure hydrogen peroxide may form.

  9. HCOOH decomposition on Pt(111): A DFT study

    DOE PAGES

    Scaranto, Jessica; Mavrikakis, Manos

    2015-10-13

    Formic acid (HCOOH) decomposition on transition metal surfaces is important for hydrogen production and for its electro-oxidation in direct HCOOH fuel cells. HCOOH can decompose through dehydrogenation leading to formation of CO2 and H2 or dehydration leading to CO and H2O; because CO can poison metal surfaces, dehydrogenation is typically the desirable decomposition path. Here we report a mechanistic analysis of HCOOH decomposition on Pt(111), obtained from a plane wave density functional theory (DFT-PW91) study. We analyzed the dehydrogenation mechanism by considering the two possible pathways involving the formate (HCOO) or the carboxyl (COOH) intermediate. We also considered several possible dehydration paths leading to CO formation. We studied HCOO and COOH decomposition both on the clean surface and in the presence of other relevant co-adsorbates. The results suggest that COOH formation is energetically more difficult than HCOO formation. In contrast, COOH dehydrogenation is easier than HCOO decomposition. Here, we found that CO2 is the main product through both pathways and that CO is produced mainly through the dehydroxylation of the COOH intermediate.

  10. HCOOH decomposition on Pt(111): A DFT study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scaranto, Jessica; Mavrikakis, Manos

    Formic acid (HCOOH) decomposition on transition metal surfaces is important for hydrogen production and for its electro-oxidation in direct HCOOH fuel cells. HCOOH can decompose through dehydrogenation leading to formation of CO2 and H2 or dehydration leading to CO and H2O; because CO can poison metal surfaces, dehydrogenation is typically the desirable decomposition path. Here we report a mechanistic analysis of HCOOH decomposition on Pt(111), obtained from a plane wave density functional theory (DFT-PW91) study. We analyzed the dehydrogenation mechanism by considering the two possible pathways involving the formate (HCOO) or the carboxyl (COOH) intermediate. We also considered several possible dehydration paths leading to CO formation. We studied HCOO and COOH decomposition both on the clean surface and in the presence of other relevant co-adsorbates. The results suggest that COOH formation is energetically more difficult than HCOO formation. In contrast, COOH dehydrogenation is easier than HCOO decomposition. Here, we found that CO2 is the main product through both pathways and that CO is produced mainly through the dehydroxylation of the COOH intermediate.

  11. Corrected confidence bands for functional data using principal components.

    PubMed

    Goldsmith, J; Greven, S; Crainiceanu, C

    2013-03-01

    Functional principal components (FPC) analysis is widely used to decompose and express functional observations. Curve estimates implicitly condition on basis functions and other quantities derived from FPC decompositions; however these objects are unknown in practice. In this article, we propose a method for obtaining correct curve estimates by accounting for uncertainty in FPC decompositions. Additionally, pointwise and simultaneous confidence intervals that account for both model- and decomposition-based variability are constructed. Standard mixed model representations of functional expansions are used to construct curve estimates and variances conditional on a specific decomposition. Iterated expectation and variance formulas combine model-based conditional estimates across the distribution of decompositions. A bootstrap procedure is implemented to understand the uncertainty in principal component decomposition quantities. Our method compares favorably to competing approaches in simulation studies that include both densely and sparsely observed functions. We apply our method to sparse observations of CD4 cell counts and to dense white-matter tract profiles. Code for the analyses and simulations is publicly available, and our method is implemented in the R package refund on CRAN. Copyright © 2013, The International Biometric Society.

  12. Corrected Confidence Bands for Functional Data Using Principal Components

    PubMed Central

    Goldsmith, J.; Greven, S.; Crainiceanu, C.

    2014-01-01

    Functional principal components (FPC) analysis is widely used to decompose and express functional observations. Curve estimates implicitly condition on basis functions and other quantities derived from FPC decompositions; however these objects are unknown in practice. In this article, we propose a method for obtaining correct curve estimates by accounting for uncertainty in FPC decompositions. Additionally, pointwise and simultaneous confidence intervals that account for both model- and decomposition-based variability are constructed. Standard mixed model representations of functional expansions are used to construct curve estimates and variances conditional on a specific decomposition. Iterated expectation and variance formulas combine model-based conditional estimates across the distribution of decompositions. A bootstrap procedure is implemented to understand the uncertainty in principal component decomposition quantities. Our method compares favorably to competing approaches in simulation studies that include both densely and sparsely observed functions. We apply our method to sparse observations of CD4 cell counts and to dense white-matter tract profiles. Code for the analyses and simulations is publicly available, and our method is implemented in the R package refund on CRAN. PMID:23003003

  13. Domain decomposition: A bridge between nature and parallel computers

    NASA Technical Reports Server (NTRS)

    Keyes, David E.

    1992-01-01

    Domain decomposition is an intuitive organizing principle for a partial differential equation (PDE) computation, both physically and architecturally. However, its significance extends beyond the readily apparent issues of geometry and discretization, on one hand, and of modular software and distributed hardware, on the other. Engineering and computer science aspects are bridged by an old but recently enriched mathematical theory that offers the subject not only unity, but also tools for analysis and generalization. Domain decomposition induces function-space and operator decompositions with valuable properties. Function-space bases and operator splittings that are not derived from domain decompositions generally lack one or more of these properties. The evolution of domain decomposition methods for elliptically dominated problems has linked two major algorithmic developments of the last 15 years: multilevel and Krylov methods. Domain decomposition methods may be considered descendants of both classes with an inheritance from each: they are nearly optimal and at the same time efficiently parallelizable. Many computationally driven application areas are ripe for these developments. A progression is made from a mathematically informal motivation for domain decomposition methods to a specific focus on fluid dynamics applications. To be introductory rather than comprehensive, simple examples are provided while convergence proofs and algorithmic details are left to the original references; however, an attempt is made to convey their most salient features, especially where this leads to algorithmic insight.

  14. Challenges of including nitrogen effects on decomposition in earth system models

    NASA Astrophysics Data System (ADS)

    Hobbie, S. E.

    2011-12-01

    Despite the importance of litter decomposition for ecosystem fertility and carbon balance, key uncertainties remain about how this fundamental process is affected by nitrogen (N) availability. Nevertheless, resolving such uncertainties is critical for mechanistic inclusion of such processes in earth system models, towards predicting the ecosystem consequences of increased anthropogenic reactive N. Towards that end, we have conducted a series of experiments examining nitrogen effects on litter decomposition. We found that both substrate N and externally supplied N (regardless of form) accelerated the initial decomposition rate. Faster initial decomposition rates were linked to the higher activity of carbohydrate-degrading enzymes associated with externally supplied N and the greater relative abundances of Gram negative and Gram positive bacteria associated with green leaves and externally supplied organic N (assessed using phospholipid fatty acid analysis, PLFA). By contrast, later in decomposition, externally supplied N slowed decomposition, increasing the fraction of slowly decomposing litter and reducing lignin-degrading enzyme activity and relative abundances of Gram negative and Gram positive bacteria. Our results suggest that elevated atmospheric N deposition may have contrasting effects on the dynamics of different soil carbon pools, decreasing mean residence times of active fractions comprising very fresh litter, while increasing those of more slowly decomposing fractions including more processed litter. Incorporating these contrasting effects of N on decomposition processes into models is complicated by lingering uncertainties about how these effects generalize across ecosystems and substrates.

  15. Self-similar pyramidal structures and signal reconstruction

    NASA Astrophysics Data System (ADS)

    Benedetto, John J.; Leon, Manuel; Saliani, Sandra

    1998-03-01

    Pyramidal structures are defined which are locally a combination of low and highpass filtering. The structures are analogous to but different from wavelet packet structures. In particular, new frequency decompositions are obtained; and these decompositions can be parameterized to establish a correspondence with a large class of Cantor sets. Further correspondences are then established to relate such frequency decompositions with more general self-similarities. The role of the filters in defining these pyramidal structures gives rise to signal reconstruction algorithms, and these, in turn, are used in the analysis of speech data.
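
    As a minimal illustration of the low/highpass building block such pyramidal structures are made of, here is a single Haar-type two-channel split with perfect reconstruction; the test signal is an assumption, and the paper's structures combine such stages differently from wavelet packets.

    ```python
    # Hypothetical test signal; one lowpass/highpass split stage and its inverse.
    import numpy as np

    def haar_split(x):
        x = np.asarray(x, dtype=float)
        a = (x[0::2] + x[1::2]) / np.sqrt(2)   # lowpass branch (approximation)
        d = (x[0::2] - x[1::2]) / np.sqrt(2)   # highpass branch (detail)
        return a, d

    def haar_merge(a, d):
        x = np.empty(2 * a.size)
        x[0::2] = (a + d) / np.sqrt(2)
        x[1::2] = (a - d) / np.sqrt(2)
        return x

    x = np.sin(np.linspace(0, 4 * np.pi, 16)) + 0.1 * np.arange(16)
    a, d = haar_split(x)
    x_rec = haar_merge(a, d)
    print("max reconstruction error:", np.max(np.abs(x - x_rec)))   # ~1e-16
    ```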

  16. Three phase crystallography and solute distribution analysis during residual austenite decomposition in tempered nanocrystalline bainitic steels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Caballero, F.G.; Yen, Hung-Wei; Australian Centre for Microscopy and Microanalysis, The University of Sydney, NSW 2006

    2014-02-15

    Interphase carbide precipitation due to austenite decomposition was investigated by high resolution transmission electron microscopy and atom probe tomography in tempered nanostructured bainitic steels. Results showed that cementite (θ) forms by a paraequilibrium transformation mechanism at the bainitic ferrite–austenite interface with a simultaneous three phase crystallographic orientation relationship. Highlights: interphase carbide precipitation due to austenite decomposition; tempered nanostructured bainitic steels; high resolution transmission electron microscopy and atom probe tomography; paraequilibrium θ with three phase crystallographic orientation relationship.

  17. Influence of storage conditions on the stability of monomeric anthocyanins studied by reversed-phase high-performance liquid chromatography.

    PubMed

    Morais, Helena; Ramos, Cristina; Forgács, Esther; Cserháti, Tibor; Oliviera, José

    2002-04-25

    The effect of light, storage time and temperature on the decomposition rate of monomeric anthocyanin pigments extracted from skins of grape (Vitis vinifera var. Red globe) was determined by reversed-phase high-performance liquid chromatography (RP-HPLC). The impact of various storage conditions on the pigment stability was assessed by stepwise regression analysis. RP-HPLC separated well the five anthocyanins identified and proved the presence of other unidentified pigments at lower concentrations. Stepwise regression analysis confirmed that the overall decomposition rate of monomeric anthocyanins, peonidin-3-glucoside and malvidin-3-glucoside significantly depended on the time and temperature of storage, the effect of storage time being the most important. The presence or absence of light exerted a negligible impact on the decomposition rate.

  18. Numerical prediction of the energy efficiency of the three-dimensional fish school using the discretized Adomian decomposition method

    NASA Astrophysics Data System (ADS)

    Lin, Yinwei

    2018-06-01

    A three-dimensional model of a fish school, computed with a modified Adomian decomposition method (ADM) discretized by the finite difference method, is proposed. To our knowledge, few studies of fish schools have been documented, owing to the expensive cost of numerical computation and the tedious three-dimensional data analysis involved. Here, we propose a simple model based on the Adomian decomposition method to estimate the energy-saving efficiency of the flow motion of the fish school. First, analytic solutions of the Navier-Stokes equations are used for numerical validation. The influence of the distance between two side-by-side fish on the energy efficiency of the fish school is then studied. In addition, a complete error analysis for this method is presented.

  19. Blazing Signature Filter: a library for fast pairwise similarity comparisons

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Joon-Yong; Fujimoto, Grant M.; Wilson, Ryan

    Identifying similarities between datasets is a fundamental task in data mining and has become an integral part of modern scientific investigation. Whether the task is to identify co-expressed genes in large-scale expression surveys or to predict combinations of gene knockouts which would elicit a similar phenotype, the underlying computational task is often a multi-dimensional similarity test. As datasets continue to grow, improvements to the efficiency, sensitivity or specificity of such computation will have broad impacts as it allows scientists to more completely explore the wealth of scientific data. A significant practical drawback of large-scale data mining is the vast majority of pairwise comparisons are unlikely to be relevant, meaning that they do not share a signature of interest. It is therefore essential to efficiently identify these unproductive comparisons as rapidly as possible and exclude them from more time-intensive similarity calculations. The Blazing Signature Filter (BSF) is a highly efficient pairwise similarity algorithm which enables extensive data mining within a reasonable amount of time. The algorithm transforms datasets into binary metrics, allowing it to utilize the computationally efficient bit operators and provide a coarse measure of similarity. As a result, the BSF can scale to high dimensionality and rapidly filter unproductive pairwise comparison. Two bioinformatics applications of the tool are presented to demonstrate the ability to scale to billions of pairwise comparisons and the usefulness of this approach.
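
    A toy version of the underlying idea (not the BSF library itself; the random data, the sign-based binarization, and the 100-bit cutoff are assumptions) is to pack binary signatures into bytes and use cheap bit operations as a coarse filter before any expensive similarity calculation:

    ```python
    # Hypothetical sketch of bit-signature prefiltering; data and cutoff are assumptions.
    import numpy as np

    rng = np.random.default_rng(3)
    data = rng.normal(size=(6, 256))                     # 6 datasets x 256 features
    data[1] = data[0] + 0.3 * rng.normal(size=256)       # make one pair genuinely similar

    signatures = np.packbits(data > 0, axis=1)           # 1 bit per feature, packed into uint8

    def shared_bits(sig_a, sig_b):
        """Number of features set in both signatures (a coarse similarity)."""
        return int(np.unpackbits(np.bitwise_and(sig_a, sig_b)).sum())

    for i in range(signatures.shape[0]):
        for j in range(i + 1, signatures.shape[0]):
            s = shared_bits(signatures[i], signatures[j])
            flag = "keep for full comparison" if s >= 100 else "filter out"
            print(f"pair ({i},{j}): {s} shared bits -> {flag}")
    ```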

  20. The structure of pairwise correlation in mouse primary visual cortex reveals functional organization in the absence of an orientation map.

    PubMed

    Denman, Daniel J; Contreras, Diego

    2014-10-01

    Neural responses to sensory stimuli are not independent. Pairwise correlation can reduce coding efficiency, occur independent of stimulus representation, or serve as an additional channel of information, depending on the timescale of correlation and the method of decoding. Any role for correlation depends on its magnitude and structure. In sensory areas with maps, like the orientation map in primary visual cortex (V1), correlation is strongly related to the underlying functional architecture, but it is unclear whether this correlation structure is an essential feature of the system or arises from the arrangement of cells in the map. We assessed the relationship between functional architecture and pairwise correlation by measuring both synchrony and correlated spike count variability in mouse V1, which lacks an orientation map. We observed significant pairwise synchrony, which was organized by distance and relative orientation preference between cells. We also observed nonzero correlated variability in both the anesthetized (0.16) and awake states (0.18). Our results indicate that the structure of pairwise correlation is maintained in the absence of an underlying anatomical organization and may be an organizing principle of the mammalian visual system preserved by nonrandom connectivity within local networks. © The Author 2013. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  1. Tissue Non-Specific Genes and Pathways Associated with Diabetes: An Expression Meta-Analysis.

    PubMed

    Mei, Hao; Li, Lianna; Liu, Shijian; Jiang, Fan; Griswold, Michael; Mosley, Thomas

    2017-01-21

    We performed expression studies to identify tissue non-specific genes and pathways of diabetes by meta-analysis. We searched curated datasets of the Gene Expression Omnibus (GEO) database and identified 13 and five expression studies of diabetes and insulin responses in various tissues, respectively. We tested differential gene expression with an empirical Bayes-based linear method and investigated gene set expression association by knowledge-based enrichment analysis. Meta-analysis by different methods was applied to identify tissue non-specific genes and gene sets. We also proposed a pathway mapping analysis to infer functions of the identified gene sets, and correlation and independence analyses to evaluate the expression association profiles of genes and gene sets between studies and tissues. Our analysis showed that PGRMC1 and HADH genes were significant over diabetes studies, while IRS1 and MPST genes were significant over insulin response studies, and joint analysis showed that HADH and MPST genes were significant over all combined data sets. The pathway analysis identified six significant gene sets over all studies. The KEGG pathway mapping indicated that the significant gene sets are related to diabetes pathogenesis. The results also showed that 12.8% and 59.0% of pairwise studies had significantly correlated expression associations for genes and gene sets, respectively; moreover, 12.8% of pairwise studies had independent expression associations for genes, whereas no pairs of studies differed significantly in their expression associations for gene sets. Our analysis indicated that there are both tissue specific and non-specific genes and pathways associated with diabetes pathogenesis. Compared to gene expression, pathway association tends to be tissue non-specific, and a common pathway influencing diabetes development is activated through different genes in different tissues.

  2. RCLUS, a new program for clustering associated species: A demonstration using a Mojave Desert plant community dataset

    Treesearch

    Stewart C. Sanderson; Jeffrey E. Ott; E. Durant McArthur; Kimball T. Harper

    2006-01-01

    This paper presents a new clustering program named RCLUS that was developed for species (R-mode) analysis of plant community data. RCLUS identifies clusters of co-occurring species that meet a user-specified cutoff level of positive association with each other. The "strict affinity" clustering algorithm in RCLUS builds clusters of species whose pairwise...

  3. Effect of congenital blindness on the semantic representation of some everyday concepts

    PubMed Central

    Connolly, Andrew C.; Gleitman, Lila R.; Thompson-Schill, Sharon L.

    2007-01-01

    This study explores how the lack of first-hand experience with color, as a result of congenital blindness, affects implicit judgments about “higher-order” concepts, such as “fruits and vegetables” (FV), but not others, such as “household items” (HHI). We demonstrate how the differential diagnosticity of color across our test categories interacts with visual experience to produce, in effect, a category-specific difference in implicit similarity. Implicit pair-wise similarity judgments were collected by using an odd-man-out triad task. Pair-wise similarities for both FV and for HHI were derived from this task and were compared by using cluster analysis and regression analyses. Color was found to be a significant component in the structure of implicit similarity for FV for sighted participants but not for blind participants; and this pattern remained even when the analysis was restricted to blind participants who had good explicit color knowledge of the stimulus items. There was also no evidence that either subject group used color knowledge in making decisions about HHI, nor was there an indication of any qualitative differences between blind and sighted subjects' judgments on HHI. PMID:17483447

  4. Eigenbeam analysis of the diversity in bat biosonar beampatterns.

    PubMed

    Caspers, Philip; Müller, Rolf

    2015-03-01

    A quantitative analysis of the interspecific variability in bat biosonar beampatterns has been carried out on 267 numerical predictions of emission and reception beampatterns from 98 different species. Since these beampatterns did not share a common orientation, an alignment was necessary to analyze the variability in the shape of the patterns. To achieve this, beampatterns were aligned using a pairwise optimization framework based on a rotation-dependent cost function. The sum of the p-norms between beam-gain functions across frequency served as a figure of merit. For a representative subset of the data, it was found that all pairwise beampattern alignments resulted in a unique global minimum. This minimum was found to be contained in a subset of all possible beampattern rotations that could be predicted by the overall beam orientation. Following alignment, the beampatterns were decomposed into principal components. The average beampattern consisted of a symmetric, positionally static single lobe that narrows and became progressively asymmetric with increasing frequency. The first three "eigenbeams" controlled the beam width of the beampattern across frequency while higher rank eigenbeams account for symmetry and lobe motion. Reception and emission beampatterns could be distinguished (85% correct classification) based on the first 14 eigenbeams.

  5. Thermal Decomposition Model Development of EN-7 and EN-8 Polyurethane Elastomers.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keedy, Ryan Michael; Harrison, Kale Warren; Cordaro, Joseph Gabriel

    Thermogravimetric analysis - gas chromatography/mass spectrometry (TGA-GC/MS) experiments were performed on EN-7 and EN-8, analyzed, and reported in [1]. This SAND report derives and describes pyrolytic thermal decomposition models for use in predicting the responses of EN-7 and EN-8 in an abnormal thermal environment.

  6. Reducing variation in decomposition odour profiling using comprehensive two-dimensional gas chromatography.

    PubMed

    Perrault, Katelynn A; Stefanuto, Pierre-Hugues; Stuart, Barbara H; Rai, Tapan; Focant, Jean-François; Forbes, Shari L

    2015-01-01

    Challenges in decomposition odour profiling have led to variation in the documented odour profile by different research groups worldwide. Background subtraction and use of controls are important considerations given the variation introduced by decomposition studies conducted in different geographical environments. The collection of volatile organic compounds (VOCs) from soil beneath decomposing remains is challenging due to the high levels of inherent soil VOCs, further confounded by the use of highly sensitive instrumentation. This study presents a method that provides suitable chromatographic resolution for profiling decomposition odour in soil by comprehensive two-dimensional gas chromatography coupled with time-of-flight mass spectrometry using appropriate controls and field blanks. Logarithmic transformation and t-testing of compounds permitted the generation of a compound list of decomposition VOCs in soil. Principal component analysis demonstrated the improved discrimination between experimental and control soil, verifying the value of the data handling method. Data handling procedures have not been well documented in this field and standardisation would thereby reduce misidentification of VOCs present in the surrounding environment as decomposition byproducts. Uniformity of data handling and instrumental procedures will reduce analytical variation, increasing confidence in the future when investigating the effect of taphonomic variables on the decomposition VOC profile. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
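
    A compact sketch of the data-handling sequence named above (log transformation, per-compound t-tests against control soil, then principal component analysis); the random peak-area table, group sizes, and the alpha = 0.05 cutoff are assumptions, and this is not the authors' published code.

    ```python
    # Hypothetical peak-area table; log-transform, t-test per compound, then PCA.
    import numpy as np
    from scipy.stats import ttest_ind
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(9)
    n_compounds = 40
    control = rng.lognormal(mean=2.0, sigma=0.5, size=(8, n_compounds))
    grave = rng.lognormal(mean=2.0, sigma=0.5, size=(8, n_compounds))
    grave[:, :10] *= 3.0                                 # 10 genuine decomposition VOCs

    log_control, log_grave = np.log10(control), np.log10(grave)
    _, p = ttest_ind(log_grave, log_control, axis=0)
    keep = p < 0.05                                      # retained decomposition VOCs
    print("compounds retained:", int(keep.sum()))

    X = np.vstack([log_control[:, keep], log_grave[:, keep]])
    pca = PCA(n_components=2)
    scores = pca.fit_transform(X)
    print("explained variance ratio:", pca.explained_variance_ratio_.round(2))
    print("control mean PC scores:   ", scores[:8].mean(axis=0).round(2))
    print("grave-soil mean PC scores:", scores[8:].mean(axis=0).round(2))
    ```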

  7. Odor analysis of decomposing buried human remains

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vass, Arpad Alexander; Smith, Rob R; Thompson, Cyril V

    2008-01-01

    This study, conducted at the University of Tennessee's Anthropological Research Facility (ARF), lists and ranks the primary chemical constituents which define the odor of decomposition of human remains as detected at the soil surface of shallow burial sites. Triple sorbent traps were used to collect air samples in the field and revealed eight major classes of chemicals which now contain 478 specific volatile compounds associated with burial decomposition. Samples were analyzed using gas chromatography-mass spectrometry (GC-MS) and were collected below and above the body, and at the soil surface of 1.5-3.5 ft. (0.46-1.07 m) deep burial sites of four individualsmore » over a 4-year time span. New data were incorporated into the previously established Decompositional Odor Analysis (DOA) Database providing identification, chemical trends, and semi-quantitation of chemicals for evaluation. This research identifies the 'odor signatures' unique to the decomposition of buried human remains with projected ramifications on human remains detection canine training procedures and in the development of field portable analytical instruments which can be used to locate human remains in shallow burial sites.« less

  8. Application of advanced multidisciplinary analysis and optimization methods to vehicle design synthesis

    NASA Technical Reports Server (NTRS)

    Consoli, Robert David; Sobieszczanski-Sobieski, Jaroslaw

    1990-01-01

    Advanced multidisciplinary analysis and optimization methods, namely system sensitivity analysis and non-hierarchical system decomposition, are applied to reduce the cost and improve the visibility of an automated vehicle design synthesis process. This process is inherently complex due to the large number of functional disciplines and associated interdisciplinary couplings. Recent developments in system sensitivity analysis as applied to complex non-hierarchic multidisciplinary design optimization problems enable the decomposition of these complex interactions into sub-processes that can be evaluated in parallel. The application of these techniques results in significant cost, accuracy, and visibility benefits for the entire design synthesis process.

  9. Harmonic analysis of traction power supply system based on wavelet decomposition

    NASA Astrophysics Data System (ADS)

    Dun, Xiaohong

    2018-05-01

    With the rapid development of high-speed rail and heavy-haul transport, and the large-scale operation of AC-drive electric locomotives and EMUs across the country, the electrified railway has become the main harmonic source in China's power grid. This creates a need for timely monitoring, assessment, and mitigation of the power quality problems of electrified railways. The wavelet transform was developed on the basis of Fourier analysis; its basic idea comes from harmonic analysis and rests on a rigorous theoretical model. It inherits and develops the localization idea of the Gabor transform while overcoming disadvantages such as the fixed window and the lack of discrete orthogonality, and has therefore become a widely studied spectral analysis tool. Wavelet analysis applies progressively finer time-domain steps in the high-frequency range so that it can focus on any detail of the signal being analyzed, which allows a comprehensive analysis of the harmonics of the traction power supply system; the pyramid algorithm is used to increase the speed of the wavelet decomposition. A MATLAB simulation shows that wavelet decomposition is effective for harmonic spectrum analysis of the traction power supply system.
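
    A minimal sketch of such a multilevel (pyramid-algorithm) wavelet decomposition, assuming the PyWavelets package; the synthetic 50 Hz traction current with 3rd and 5th harmonics, the 'db4' wavelet, and the five decomposition levels are assumptions.

    ```python
    # Hypothetical signal; multilevel wavelet decomposition of a distorted current.
    import numpy as np
    import pywt

    fs = 6400                                   # sampling rate, Hz
    t = np.arange(0, 0.2, 1 / fs)
    current = (np.sin(2 * np.pi * 50 * t)
               + 0.20 * np.sin(2 * np.pi * 150 * t)     # 3rd harmonic
               + 0.12 * np.sin(2 * np.pi * 250 * t))    # 5th harmonic

    level = 5
    coeffs = pywt.wavedec(current, "db4", level=level)  # [cA5, cD5, ..., cD1]

    # Each detail band cDk roughly covers fs/2^(k+1) .. fs/2^k.
    for k, d in zip(range(level, 0, -1), coeffs[1:]):
        band = (fs / 2 ** (k + 1), fs / 2 ** k)
        print(f"cD{k}: {band[0]:6.0f}-{band[1]:6.0f} Hz, energy {np.sum(d ** 2):8.3f}")
    ```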

  10. Fast flux module detection using matroid theory.

    PubMed

    Reimers, Arne C; Bruggeman, Frank J; Olivier, Brett G; Stougie, Leen

    2015-05-01

    Flux balance analysis (FBA) is one of the most often applied methods on genome-scale metabolic networks. Although FBA uniquely determines the optimal yield, the pathway that achieves this is usually not unique. The analysis of the optimal-yield flux space has been an open challenge. Flux variability analysis captures only some properties of the flux space, while elementary mode analysis is intractable due to the enormous number of elementary modes. However, it has been found by Kelk et al. (2012) that the space of optimal-yield fluxes decomposes into flux modules. These decompositions allow a much easier but still comprehensive analysis of the optimal-yield flux space. Using the mathematical definition of module introduced by Müller and Bockmayr (2013b), we discovered useful connections to matroid theory, through which efficient algorithms enable us to compute the decomposition into modules in a few seconds for genome-scale networks. Because every module can be represented by one reaction that captures its function, in this article, we also present a method that uses this decomposition to visualize the interplay of modules. We expect the new method to replace flux variability analysis in the pipelines for metabolic networks.
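
    For orientation, flux balance analysis itself (not the matroid-based module computation of the paper) can be written as a small linear program; the three-metabolite toy network, its bounds, and the objective below are assumptions.

    ```python
    # Hypothetical toy FBA: maximize biomass flux subject to S v = 0 and flux bounds.
    import numpy as np
    from scipy.optimize import linprog

    # Reactions: uptake (-> A), v1 (A -> B), v2 (A -> C), v3 (B + C -> biomass)
    # Rows are metabolites A, B, C.
    S = np.array([[ 1, -1, -1,  0],
                  [ 0,  1,  0, -1],
                  [ 0,  0,  1, -1]])
    bounds = [(0, 10), (0, 1000), (0, 1000), (0, 1000)]
    c = np.array([0, 0, 0, -1])          # maximize v3 == minimize -v3

    res = linprog(c, A_eq=S, b_eq=np.zeros(3), bounds=bounds, method="highs")
    print("optimal flux vector:", np.round(res.x, 3))
    print("optimal biomass flux:", round(-res.fun, 3))   # 5.0 for this toy network
    ```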

  11. Accuracy assessment of a surface electromyogram decomposition system in human first dorsal interosseus muscle

    NASA Astrophysics Data System (ADS)

    Hu, Xiaogang; Rymer, William Z.; Suresh, Nina L.

    2014-04-01

    Objective. The aim of this study is to assess the accuracy of a surface electromyogram (sEMG) motor unit (MU) decomposition algorithm during low levels of muscle contraction. Approach. A two-source method was used to verify the accuracy of the sEMG decomposition system, by utilizing simultaneous intramuscular and surface EMG recordings from the human first dorsal interosseous muscle recorded during isometric trapezoidal force contractions. Spike trains from each recording type were decomposed independently utilizing two different algorithms, EMGlab and dEMG decomposition algorithms. The degree of agreement of the decomposed spike timings was assessed for three different segments of the EMG signals, corresponding to specified regions in the force task. A regression analysis was performed to examine whether certain properties of the sEMG and force signal can predict the decomposition accuracy. Main results. The average accuracy of successful decomposition among the 119 MUs that were common to both intramuscular and surface records was approximately 95%, and the accuracy was comparable between the different segments of the sEMG signals (i.e., force ramp-up versus steady state force versus combined). The regression function between the accuracy and properties of sEMG and force signals revealed that the signal-to-noise ratio of the action potential and stability in the action potential records were significant predictors of the surface decomposition accuracy. Significance. The outcomes of our study confirm the accuracy of the sEMG decomposition algorithm during low muscle contraction levels and provide confidence in the overall validity of the surface dEMG decomposition algorithm.
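
    The two-source agreement computation can be pictured with a small sketch (not the EMGlab or dEMG algorithms): discharge times from the two decompositions of one motor unit are matched one-to-one within a tolerance window and the matched fraction is reported. The simulated spike times and the 5 ms tolerance are assumptions.

    ```python
    # Hypothetical spike trains; two-source agreement between decompositions.
    import numpy as np

    def agreement(times_a, times_b, tol=0.005):
        """Fraction of discharges matched one-to-one within +/- tol seconds."""
        remaining = list(times_b)
        matched = 0
        for t in times_a:
            if not remaining:
                break
            j = int(np.argmin(np.abs(np.asarray(remaining) - t)))
            if abs(remaining[j] - t) <= tol:
                matched += 1
                remaining.pop(j)            # enforce one-to-one matching
        return matched / max(len(times_a), len(times_b))

    intramuscular = np.arange(0.0, 10.0, 0.095)               # ~10.5 Hz discharge rate
    surface = intramuscular + np.random.default_rng(4).normal(0, 0.001, intramuscular.size)
    surface = np.delete(surface, [10, 40, 70])                # a few missed discharges

    print(f"agreement: {100 * agreement(intramuscular, surface):.1f}%")
    ```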

  12. Design, Implementation and Deployment of PAIRwise

    ERIC Educational Resources Information Center

    Knight, Allan; Almeroth, Kevin; Bimber, Bruce

    2008-01-01

    Increased access to the Internet has dramatically increased the sources from which students can deliberately or accidentally copy information. This article discusses our motivation to design, implement, and deploy an Internet based plagiarism detection system, called PAIRwise, to address this growing problem. We give details as to how we detect…

  13. Thermal decomposition of ammonium perchlorate in the presence of Al(OH)3·Cr(OH)3 nanoparticles.

    PubMed

    Zhang, WenJing; Li, Ping; Xu, HongBin; Sun, Randi; Qing, Penghui; Zhang, Yi

    2014-03-15

    An Al(OH)3·Cr(OH)3 nanoparticle preparation procedure and its catalytic effect and mechanism on the thermal decomposition of ammonium perchlorate (AP) were investigated using transmission electron microscopy (TEM), X-ray diffraction (XRD), thermogravimetric analysis and differential scanning calorimetry (TG-DSC), X-ray photoelectron spectroscopy (XPS), and thermogravimetric analysis and mass spectroscopy (TG-MS). In the preparation procedure, TEM, SAED, and FT-IR showed that the Al(OH)3·Cr(OH)3 particles were amorphous particles with dimensions in the nanometer size regime containing a large amount of surface hydroxyl groups under the controllable preparation conditions. When the Al(OH)3·Cr(OH)3 nanoparticles were used as additives for the thermal decomposition of AP, the TG-DSC results showed that the addition of Al(OH)3·Cr(OH)3 nanoparticles to AP remarkably decreased the onset temperature of AP decomposition from approximately 450°C to 245°C. The FT-IR, RS and XPS results confirmed that the surface hydroxyl content of the Al(OH)3·Cr(OH)3 nanoparticles decreased from 67.94% to 63.65%, and the Al(OH)3·Cr(OH)3 nanoparticles underwent only a limited transformation from amorphous to crystalline after being used as additives for the thermal decomposition of AP. Such behavior of the Al(OH)3·Cr(OH)3 nanoparticles promoted the oxidation of the NH3 released from AP to N2O first, as indicated by the TG-MS results, thereby accelerating the thermal decomposition of AP. Copyright © 2014 Elsevier B.V. All rights reserved.

  14. Sponge-like silver obtained by decomposition of silver nitrate hexamethylenetetramine complex

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Afanasiev, Pavel, E-mail: pavel.afanasiev@ircelyon.univ-lyon.fr

    2016-07-15

    The silver nitrate hexamethylenetetramine coordination compound [Ag(NO3)·N4(CH2)6] has been prepared via an aqueous route and characterized by chemical analysis, XRD and electron microscopy. Decomposition of [Ag(NO3)·N4(CH2)6] under hydrogen and under inert atmosphere has been studied by thermal analysis and mass spectrometry. Thermal decomposition of [Ag(NO3)·N4(CH2)6] proceeds in the range 200–250 °C as a self-propagating rapid redox process accompanied by the release of multiple gases. The decomposition leads to the formation of sponge-like silver having a hierarchical open pore system with pore sizes spanning from 10 µm to 10 nm. The as-obtained silver sponges exhibited favorable activity toward H2O2 electrochemical reduction, making them potentially interesting as non-enzyme hydrogen peroxide sensors. Graphical abstract: Thermal decomposition of the silver nitrate hexamethylenetetramine coordination compound [Ag(NO3)·N4(CH2)6] leads to sponge-like silver that possesses an open porous structure and demonstrates interesting properties as an electrochemical hydrogen peroxide sensor. Highlights: the [Ag(NO3)·N4(CH2)6] orthorhombic phase was prepared and characterized; decomposition of [Ag(NO3)·N4(CH2)6] leads to a metallic silver sponge with opened porosity; the Ag sponge showed promising properties as a material for hydrogen peroxide sensors.

  15. Path analyses of the influence of substrate composition on nematode numbers and on decomposition of stranded seaweed at an Antarctic coast

    NASA Astrophysics Data System (ADS)

    Alkemade, R.; Van Rijswijk, P.

    Large amounts of seaweed are deposited along the coast of Admiralty Bay, King George Island, Antarctica. The stranded seaweed partly decomposes on the beach and supports populations of meiofauna species, mostly nematodes. The factors determining the number of nematodes found in the seaweed packages were studied. Seaweed/sediment samples were collected from different locations, along the coast near Arctowski station, covering gradients of salinity, elevation and proximity of Penguin rookeries. On the same locations decomposition rate was determined by means of permeable containers with seaweed material. Models, including the relations between location, seaweed and sediment characteristics, number of nematodes and decomposition rates, were postulated and verified using path analysis. The most plausible and significant models are presented. The number of nematodes was directly correlated with the height of the location, the carbon-to-nitrogen ratio, and the salinity of the sample. Nematode numbers were apparently indirectly dependent on sediment composition and water content. We hypothesize that the different influences of melt water and tidal water, which affect both salinity and water content of the deposits, are important phenomena underlying these results. Analysis of the relation between decomposition rate and abiotic, location-related characteristics showed that decomposition rate was dependent on the water content of the stranded seaweed and sediment composition. Decomposition rates were high on locations where water content of the deposits was high. There the running water from melt water run-off or from the surf probably increased weight losses of seaweed.

  16. Information-geometric measures estimate neural interactions during oscillatory brain states

    PubMed Central

    Nie, Yimin; Fellous, Jean-Marc; Tatsuno, Masami

    2014-01-01

    The characterization of functional network structures among multiple neurons is essential to understanding neural information processing. Information geometry (IG), a theory developed for investigating a space of probability distributions has recently been applied to spike-train analysis and has provided robust estimations of neural interactions. Although neural firing in the equilibrium state is often assumed in these studies, in reality, neural activity is non-stationary. The brain exhibits various oscillations depending on cognitive demands or when an animal is asleep. Therefore, the investigation of the IG measures during oscillatory network states is important for testing how the IG method can be applied to real neural data. Using model networks of binary neurons or more realistic spiking neurons, we studied how the single- and pairwise-IG measures were influenced by oscillatory neural activity. Two general oscillatory mechanisms, externally driven oscillations and internally induced oscillations, were considered. In both mechanisms, we found that the single-IG measure was linearly related to the magnitude of the external input, and that the pairwise-IG measure was linearly related to the sum of connection strengths between two neurons. We also observed that the pairwise-IG measure was not dependent on the oscillation frequency. These results are consistent with the previous findings that were obtained under the equilibrium conditions. Therefore, we demonstrate that the IG method provides useful insights into neural interactions under the oscillatory condition that can often be observed in the real brain. PMID:24605089
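
    For two binary neurons, the pairwise information-geometric measure under the log-linear model reduces to theta_12 = log(p11*p00 / (p10*p01)); the sketch below estimates it from simulated binned spike trains (the simulation parameters and the binning are assumptions, not the model networks used in the paper).

    ```python
    # Hypothetical binned spike trains; estimate the pairwise IG measure theta_12.
    import numpy as np

    rng = np.random.default_rng(5)
    n_bins = 20000
    common = rng.random(n_bins) < 0.05              # shared drive creates correlation
    x = ((rng.random(n_bins) < 0.08) | common).astype(int)
    y = ((rng.random(n_bins) < 0.08) | common).astype(int)

    p11 = np.mean((x == 1) & (y == 1))
    p10 = np.mean((x == 1) & (y == 0))
    p01 = np.mean((x == 0) & (y == 1))
    p00 = np.mean((x == 0) & (y == 0))

    theta_12 = np.log(p11 * p00 / (p10 * p01))      # pairwise IG measure
    print(f"pairwise IG measure theta_12 = {theta_12:.3f}")
    ```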

  17. Information-geometric measures estimate neural interactions during oscillatory brain states.

    PubMed

    Nie, Yimin; Fellous, Jean-Marc; Tatsuno, Masami

    2014-01-01

    The characterization of functional network structures among multiple neurons is essential to understanding neural information processing. Information geometry (IG), a theory developed for investigating a space of probability distributions has recently been applied to spike-train analysis and has provided robust estimations of neural interactions. Although neural firing in the equilibrium state is often assumed in these studies, in reality, neural activity is non-stationary. The brain exhibits various oscillations depending on cognitive demands or when an animal is asleep. Therefore, the investigation of the IG measures during oscillatory network states is important for testing how the IG method can be applied to real neural data. Using model networks of binary neurons or more realistic spiking neurons, we studied how the single- and pairwise-IG measures were influenced by oscillatory neural activity. Two general oscillatory mechanisms, externally driven oscillations and internally induced oscillations, were considered. In both mechanisms, we found that the single-IG measure was linearly related to the magnitude of the external input, and that the pairwise-IG measure was linearly related to the sum of connection strengths between two neurons. We also observed that the pairwise-IG measure was not dependent on the oscillation frequency. These results are consistent with the previous findings that were obtained under the equilibrium conditions. Therefore, we demonstrate that the IG method provides useful insights into neural interactions under the oscillatory condition that can often be observed in the real brain.

  18. Experimental Modal Analysis and Dynamic Component Synthesis. Volume 3. Modal Parameter Estimation

    DTIC Science & Technology

    1987-12-01

    ... residues as well as poles is achieved. A singular value decomposition method has been used to develop a complex mode indicator function (CMIF) [70] ... which can be used to help determine the number of poles before the analysis. The CMIF is formed by performing a singular value decomposition of all of ... servo systems which can include both low and high damping modes. ... CMIF can be used to indicate close or repeated eigenvalues before the parameter ...

  19. Improved accuracy and precision in δ(15)N(AIR) measurements of explosives, urea, and inorganic nitrates by elemental analyzer/isotope ratio mass spectrometry using thermal decomposition.

    PubMed

    Lott, Michael J; Howa, John D; Chesson, Lesley A; Ehleringer, James R

    2015-08-15

    Elemental analyzer systems generate N(2) and CO(2) for elemental composition and isotope ratio measurements. As quantitative conversion of nitrogen in some materials (i.e., nitrate salts and nitro-organic compounds) is difficult, this study tests a recently published method - thermal decomposition without the addition of O(2) - for the analysis of these materials. Elemental analyzer/isotope ratio mass spectrometry (EA/IRMS) was used to compare the traditional combustion method (CM) and the thermal decomposition method (TDM), where additional O(2) is eliminated from the reaction. The comparisons used organic and inorganic materials with oxidized and/or reduced nitrogen and included ureas, nitrate salts, ammonium sulfate, nitro esters, and nitramines. Previous TDM applications were limited to nitrate salts and ammonium sulfate. The measurement precision and accuracy were compared to determine the effectiveness of converting materials containing different fractions of oxidized nitrogen into N(2). The δ(13)C(VPDB) values were not meaningfully different when measured via CM or TDM, allowing for the analysis of multiple elements in one sample. For materials containing oxidized nitrogen, (15)N measurements made using thermal decomposition were more precise than those made using combustion. The precision was similar between the methods for materials containing reduced nitrogen. The %N values were closer to theoretical when measured by TDM than by CM. The δ(15)N(AIR) values of purchased nitrate salts and ureas were nearer to the known values when analyzed using thermal decomposition than using combustion. The thermal decomposition method addresses insufficient recovery of nitrogen during elemental analysis in a variety of organic and inorganic materials. Its implementation requires relatively few changes to the elemental analyzer. Using TDM, it is possible to directly calibrate certain organic materials to international nitrate isotope reference materials without off-line preparation. Copyright © 2015 John Wiley & Sons, Ltd.

  20. An operational modal analysis method in frequency and spatial domain

    NASA Astrophysics Data System (ADS)

    Wang, Tong; Zhang, Lingmi; Tamura, Yukio

    2005-12-01

    A frequency and spatial domain decomposition method (FSDD) for operational modal analysis (OMA) is presented in this paper, which is an extension of the complex mode indicator function (CMIF) method for experimental modal analysis (EMA). The theoretical background of the FSDD method is clarified. Singular value decomposition is adopted to separate the signal space from the noise space. Finally, an enhanced power spectrum density (PSD) is proposed to obtain more accurate modal parameters by curve fitting in the frequency domain. Moreover, a simulation case and an application case are used to validate this method.
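
    A bare-bones CMIF-style computation (only the singular-value step, not the full FSDD method with enhanced PSD curve fitting) can be sketched as follows; the simulated three-channel, two-mode response and the spectral settings are assumptions.

    ```python
    # Hypothetical response data; SVD of the cross-power spectral density matrix
    # at each frequency, with peaks of the first singular value indicating modes.
    import numpy as np
    from scipy.signal import csd

    rng = np.random.default_rng(6)
    fs, T = 256.0, 60.0
    t = np.arange(0, T, 1 / fs)
    modes = [(12.0, [1.0, 0.6, -0.4]), (31.0, [0.3, -0.8, 1.0])]   # (freq Hz, mode shape)
    resp = np.zeros((3, t.size))
    for f0, shape in modes:
        drive = np.sin(2 * np.pi * f0 * t + rng.uniform(0, 2 * np.pi))
        resp += np.outer(shape, drive)
    resp += 0.2 * rng.normal(size=resp.shape)

    nperseg = 512
    n_ch = resp.shape[0]
    freqs, _ = csd(resp[0], resp[0], fs=fs, nperseg=nperseg)
    Gyy = np.zeros((freqs.size, n_ch, n_ch), dtype=complex)
    for i in range(n_ch):
        for j in range(n_ch):
            _, Gyy[:, i, j] = csd(resp[i], resp[j], fs=fs, nperseg=nperseg)

    cmif = np.array([np.linalg.svd(Gyy[k], compute_uv=False) for k in range(freqs.size)])
    peak_idx = np.argsort(cmif[:, 0])[-2:]
    print("dominant CMIF peaks near (Hz):", np.sort(freqs[peak_idx]))
    ```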

  1. Linear stability analysis of detonations via numerical computation and dynamic mode decomposition

    NASA Astrophysics Data System (ADS)

    Kabanov, Dmitry I.; Kasimov, Aslan R.

    2018-03-01

    We introduce a new method to investigate linear stability of gaseous detonations that is based on an accurate shock-fitting numerical integration of the linearized reactive Euler equations with a subsequent analysis of the computed solution via the dynamic mode decomposition. The method is applied to the detonation models based on both the standard one-step Arrhenius kinetics and two-step exothermic-endothermic reaction kinetics. Stability spectra for all cases are computed and analyzed. The new approach is shown to be a viable alternative to the traditional normal-mode analysis used in detonation theory.
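
    As a hedged illustration of the dynamic-mode-decomposition step (not the authors' shock-fitting solver), the following Python sketch applies exact DMD to uniformly sampled snapshots of a toy linear signal; the logarithm of the DMD eigenvalues divided by the time step approximates the growth rates and frequencies that make up a stability spectrum.

    ```python
    import numpy as np

    # Hedged exact-DMD sketch (illustrative; not the paper's shock-fitting code): given
    # snapshots of a linearized solution at uniform time step dt, the DMD eigenvalues
    # approximate exp(alpha*dt), so alpha = log(lambda)/dt gives growth rates (real
    # part) and oscillation frequencies (imaginary part) of the stability spectrum.

    def dmd_spectrum(snapshots, dt, rank=None):
        X, Y = snapshots[:, :-1], snapshots[:, 1:]
        U, s, Vh = np.linalg.svd(X, full_matrices=False)
        if rank is not None:
            U, s, Vh = U[:, :rank], s[:rank], Vh[:rank]
        A_tilde = U.conj().T @ Y @ Vh.conj().T @ np.diag(1.0 / s)   # low-rank operator
        return np.log(np.linalg.eigvals(A_tilde)) / dt              # continuous-time eigenvalues

    if __name__ == "__main__":
        # toy data: one decaying oscillation with true eigenvalues -0.2 +/- 3i
        dt = 0.01
        t = np.arange(0, 10, dt)
        x = np.vstack([np.exp(-0.2*t) * np.cos(3*t), np.exp(-0.2*t) * np.sin(3*t)])
        print(dmd_spectrum(x, dt, rank=2))
    ```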

  2. Mode decomposition and Lagrangian structures of the flow dynamics in orbitally shaken bioreactors

    NASA Astrophysics Data System (ADS)

    Weheliye, Weheliye Hashi; Cagney, Neil; Rodriguez, Gregorio; Micheletti, Martina; Ducci, Andrea

    2018-03-01

    In this study, two mode decomposition techniques were applied and compared to assess the flow dynamics in an orbital shaken bioreactor (OSB) of cylindrical geometry and flat bottom: proper orthogonal decomposition and dynamic mode decomposition. Particle Image Velocimetry (PIV) experiments were carried out for different operating conditions including fluid height, h, and shaker rotational speed, N. A detailed flow analysis is provided for conditions when the fluid and vessel motions are in-phase (Fr = 0.23) and out-of-phase (Fr = 0.47). PIV measurements in vertical and horizontal planes were combined to reconstruct low order models of the full 3D flow and to determine its Finite-Time Lyapunov Exponent (FTLE) within OSBs. The combined results from the mode decomposition and the FTLE fields provide a useful insight into the flow dynamics and Lagrangian coherent structures in OSBs and offer a valuable tool to optimise bioprocess design in terms of mixing and cell suspension.

  3. Further insights into the kinetics of thermal decomposition during continuous cooling.

    PubMed

    Liavitskaya, Tatsiana; Guigo, Nathanaël; Sbirrazzuoli, Nicolas; Vyazovkin, Sergey

    2017-07-26

    Following the previous work (Phys. Chem. Chem. Phys., 2016, 18, 32021), this study continues to investigate the intriguing phenomenon of thermal decomposition during continuous cooling. The phenomenon can be detected and its kinetics can be measured by means of thermogravimetric analysis (TGA). The kinetics of the thermal decomposition of ammonium nitrate (NH4NO3), nickel oxalate (NiC2O4), and lithium sulfate monohydrate (Li2SO4·H2O) have been measured upon heating and cooling and analyzed by means of the isoconversional methodology. The results have confirmed the hypothesis that the respective kinetics should be similar for single-step processes (NH4NO3 decomposition) but different for multi-step ones (NiC2O4 decomposition and Li2SO4·H2O dehydration). It has been discovered that the differences in the kinetics can be either quantitative or qualitative. Physical insights into the nature of the differences have been proposed.

  4. 3D quantitative analysis of early decomposition changes of the human face.

    PubMed

    Caplova, Zuzana; Gibelli, Daniele Maria; Poppa, Pasquale; Cummaudo, Marco; Obertova, Zuzana; Sforza, Chiarella; Cattaneo, Cristina

    2018-03-01

    Decomposition of the human body and human face is influenced, among other things, by environmental conditions. The early decomposition changes that modify the appearance of the face may hamper the recognition and identification of the deceased. Quantitative assessment of those changes may provide important information for forensic identification. This report presents a pilot 3D quantitative approach of tracking early decomposition changes of a single cadaver in controlled environmental conditions by summarizing the change with weekly morphological descriptions. The root mean square (RMS) value was used to evaluate the changes of the face after death. The results showed a high correlation (r = 0.863) between the measured RMS and the time since death. RMS values of each scan are presented, as well as the average weekly RMS values. The quantification of decomposition changes could improve the accuracy of antemortem facial approximation and potentially could allow the direct comparisons of antemortem and postmortem 3D scans.

  5. Kinetics of Thermal Decomposition of Ammonium Perchlorate by TG/DSC-MS-FTIR

    NASA Astrophysics Data System (ADS)

    Zhu, Yan-Li; Huang, Hao; Ren, Hui; Jiao, Qing-Jie

    2014-01-01

    The method of thermogravimetry/differential scanning calorimetry-mass spectrometry-Fourier transform infrared (TG/DSC-MS-FTIR) simultaneous analysis has been used to study thermal decomposition of ammonium perchlorate (AP). The processing of nonisothermal data at various heating rates was performed using NETZSCH Thermokinetics. The MS-FTIR spectra showed that N2O and NO2 were the main gaseous products of the thermal decomposition of AP, and there was a competition between the formation reaction of N2O and that of NO2 during the process with an iso-concentration point of N2O and NO2. The dependence of the activation energy calculated by Friedman's iso-conversional method on the degree of conversion indicated that the AP decomposition process can be divided into three stages, which are autocatalytic, low-temperature diffusion and high-temperature, stable-phase reaction. The corresponding kinetic parameters were determined by multivariate nonlinear regression and the mechanism of the AP decomposition process was proposed.
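
    Friedman's iso-conversional method mentioned above can be sketched in a few lines. The Python example below is a simplified illustration on synthetic first-order TG data (the kinetic parameters and heating rates are invented, and it is not the NETZSCH Thermokinetics workflow): for each fixed conversion level, ln(dα/dt) is regressed against 1/T across heating-rate runs and the slope gives -Ea/R.

    ```python
    import numpy as np

    # Hedged Friedman iso-conversional sketch on synthetic first-order TG runs (the
    # kinetic constants A and Ea below are invented; this is not the NETZSCH
    # Thermokinetics workflow).

    R = 8.314  # J/(mol K)

    def friedman_activation_energy(runs, alphas):
        """runs: list of dicts with keys 't' (s), 'T' (K), 'alpha' (monotone 0..1 arrays)."""
        Ea = []
        for a in alphas:
            inv_T, ln_rate = [], []
            for run in runs:
                i = np.searchsorted(run["alpha"], a)            # first point past conversion a
                rate = np.gradient(run["alpha"], run["t"])[i]   # d(alpha)/dt at that point
                inv_T.append(1.0 / run["T"][i])
                ln_rate.append(np.log(rate))
            slope, _ = np.polyfit(inv_T, ln_rate, 1)
            Ea.append(-slope * R)
        return np.array(Ea)                                     # J/mol at each conversion level

    if __name__ == "__main__":
        A, Ea_true = 1e12, 150e3                                # invented first-order kinetics
        runs = []
        for beta in (5/60.0, 20/60.0):                          # heating rates in K/s
            T = np.linspace(450.0, 650.0, 20000)
            t = (T - T[0]) / beta
            k = A * np.exp(-Ea_true / (R * T))
            alpha = 1.0 - np.exp(-np.cumsum(k) * (t[1] - t[0]))
            runs.append({"t": t, "T": T, "alpha": alpha})
        print(friedman_activation_energy(runs, [0.2, 0.5, 0.8]) / 1e3, "kJ/mol")
    ```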

  6. The response of the HMX-based material PBXN-9 to thermal insults: thermal decomposition kinetics and morphological changes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Glascoe, E A; Hsu, P C; Springer, H K

    PBXN-9, an HMX formulation, is thermally damaged and thermally decomposed in order to determine the morphological changes and decomposition kinetics that occur in the material after mild to moderate heating. The material and its constituents were decomposed using standard thermal analysis techniques (DSC and TGA) and the decomposition kinetics are reported using different kinetic models. Pressed parts and prill were thermally damaged, i.e. heated to temperatures that resulted in material changes but did not result in significant decomposition or explosion, and analyzed. In general, the thermally damaged samples showed a significant increase in porosity and decrease in density and a small amount of weight loss. These PBXN-9 samples appear to sustain more thermal damage than similar HMX-Viton A formulations and the most likely reasons are the decomposition/evaporation of a volatile plasticizer and a polymorphic transition of the HMX from the β to the δ phase.

  7. Generation of Synthetic Spike Trains with Defined Pairwise Correlations

    PubMed Central

    Niebur, Ernst

    2008-01-01

    Recent technological advances as well as progress in theoretical understanding of neural systems have created a need for synthetic spike trains with controlled mean rate and pairwise cross-correlation. This report introduces and analyzes a novel algorithm for the generation of discretized spike trains with arbitrary mean rates and controlled cross correlation. Pairs of spike trains with any pairwise correlation can be generated, and higher-order correlations are compatible with common synaptic input. Relations between allowable mean rates and correlations within a population are discussed. The algorithm is highly efficient, its complexity increasing linearly with the number of spike trains generated and therefore inversely with the number of cross-correlated pairs. PMID:17521277
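
    One standard construction for discretized spike trains with a prescribed rate and pairwise correlation, in the spirit of the report (though not necessarily its exact algorithm), is to let every train copy a common "mother" Bernoulli train with probability sqrt(c) and otherwise draw an independent bin; the marginal rate is unchanged and the pairwise correlation equals c. A minimal Python sketch:

    ```python
    import numpy as np

    # Hedged sketch of one standard construction for correlated binary spike trains
    # (in the spirit of the report, not necessarily its exact algorithm): each train
    # copies a common "mother" Bernoulli train with probability sqrt(c) and otherwise
    # draws an independent bin, which keeps the marginal rate and gives pairwise
    # correlation c between any two trains.

    def correlated_spike_trains(n_trains, n_bins, rate_hz, bin_s, c, seed=0):
        rng = np.random.default_rng(seed)
        p = rate_hz * bin_s                                  # spike probability per bin (p << 1)
        mother = rng.random(n_bins) < p
        copy = rng.random((n_trains, n_bins)) < np.sqrt(c)   # which bins are copied from mother
        indep = rng.random((n_trains, n_bins)) < p           # independent fallback bins
        return np.where(copy, mother, indep).astype(int)

    if __name__ == "__main__":
        trains = correlated_spike_trains(n_trains=50, n_bins=200_000,
                                         rate_hz=20.0, bin_s=0.001, c=0.1)
        print("empirical pairwise correlation:", np.corrcoef(trains[0], trains[1])[0, 1])
        print("empirical rates (Hz):", trains.mean(axis=1)[:3] / 0.001)
    ```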

  8. Pi2 detection using Empirical Mode Decomposition (EMD)

    NASA Astrophysics Data System (ADS)

    Mieth, Johannes Z. D.; Frühauff, Dennis; Glassmeier, Karl-Heinz

    2017-04-01

    Empirical Mode Decomposition has been used as an alternative method to wavelet transformation to identify onset times of Pi2 pulsations in data sets of the Scandinavian Magnetometer Array (SMA). Pi2 pulsations are magnetohydrodynamic waves occurring during magnetospheric substorms. Almost always Pi2 are observed at substorm onset in mid to low latitudes on Earth's nightside. They are fed by magnetic energy release caused by dipolarization processes. Their periods lie between 40 to 150 seconds. Usually, Pi2 are detected using wavelet transformation. Here, Empirical Mode Decomposition (EMD) is presented as an alternative approach to the traditional procedure. EMD is a young signal decomposition method designed for nonlinear and non-stationary time series. It provides an adaptive, data driven, and complete decomposition of time series into slow and fast oscillations. An optimized version using Monte-Carlo-type noise assistance is used here. By displaying the results in a time-frequency space a characteristic frequency modulation is observed. This frequency modulation can be correlated with the onset of Pi2 pulsations. A basic algorithm to find the onset is presented. Finally, the results are compared to classical wavelet-based analysis. The use of different SMA stations furthermore allows the spatial analysis of Pi2 onset times. EMD mostly finds application in the fields of engineering and medicine. This work demonstrates the applicability of this method to geomagnetic time series.
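
    A hedged sketch of the basic EMD step is shown below in Python, assuming the third-party PyEMD package (EMD-signal) is available; it decomposes a synthetic magnetometer-like series into intrinsic mode functions, keeps those whose mean period falls in the 40-150 s Pi2 band, and takes the first large excursion as a crude onset estimate. It is an illustration only, not the Monte-Carlo noise-assisted pipeline used in the study.

    ```python
    import numpy as np
    from PyEMD import EMD   # third-party package "EMD-signal" (assumed available)

    # Hedged illustration, not the authors' Monte-Carlo noise-assisted pipeline:
    # decompose a synthetic series into IMFs, keep the modes whose mean period lies
    # in the 40-150 s Pi2 band, and use the first large excursion as a crude onset.

    def pi2_band(signal, dt):
        imfs = EMD().emd(signal)
        band = []
        for imf in imfs:
            zc = np.count_nonzero(np.diff(np.sign(imf)) != 0)    # zero crossings
            mean_period = 2.0 * len(imf) * dt / max(zc, 1)
            if 40.0 <= mean_period <= 150.0:
                band.append(imf)
        return np.sum(band, axis=0) if band else np.zeros_like(signal)

    if __name__ == "__main__":
        dt = 1.0                                                  # 1-s cadence
        t = np.arange(0.0, 3600.0, dt)
        # synthetic "substorm": 80-s Pi2-like wave packet starting at t = 1800 s
        packet = np.where(t > 1800, np.exp(-(t - 1800) / 600) * np.sin(2*np.pi*(t - 1800)/80), 0.0)
        data = packet + 0.3*np.sin(2*np.pi*t/900) + 0.1*np.random.default_rng(1).standard_normal(t.size)
        env = np.abs(pi2_band(data, dt))
        idx = np.argmax(env > 0.2)
        print("estimated Pi2 onset (s):", t[idx] if env[idx] > 0.2 else "not detected")
    ```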

  9. Stoichiometry, microbial community composition and decomposition: a modelling analysis

    NASA Astrophysics Data System (ADS)

    Berninger, Frank; Zhou, Xuan; Aaltonen, Heidi; Köster, Kajar; Heinonsalo, Jussi; Pumpanen, Jukka

    2017-04-01

    Enzyme-activity-based litter decomposition models describe the decomposition of soil organic matter as a function of microbial biomass and its activity. In these models, decomposition depends largely on microbial and litter stoichiometry. We used the model of Schimel and Weintraub (Soil Biology & Biochemistry 35 (2003) 549-563), largely relying on the modification of Waring et al. (Ecology Letters (2013) 16: 887-894), and we modified the model to include bacteria, fungi and mycorrhizal fungi as decomposer groups with different assumed stoichiometries. The model was tested against previously published data from a fire chronosequence in northern Finland. The model reconstructed well the development of soil organic matter, microbial biomasses and enzyme activities with time after fire. In a theoretical model analysis we tried to understand how the exchange of carbon and nitrogen between mycorrhiza and the plant interacts with different litter stoichiometries. The results indicate that if a high percentage of fungal N uptake is transferred to the plant, mycorrhizal biomass decreases drastically and, owing to the low mycorrhizal biomass, so does the N uptake of the plants. If a lower proportion of the fungal N uptake is transferred to the plant, plant N uptake remains reasonably stable while the proportion of mycorrhiza in the total fungal biomass varies. The model is also able to simulate priming of soil organic matter decomposition.

  10. Synthesis, characterization, thermal and explosive properties of potassium salts of trinitrophloroglucinol.

    PubMed

    Wang, Liqiong; Chen, Hongyan; Zhang, Tonglai; Zhang, Jianguo; Yang, Li

    2007-08-17

    Three different substituted potassium salts of trinitrophloroglucinol (H3TNPG) were prepared and characterized. The salts are all hydrates; thermogravimetric (TG) and elemental analyses confirmed that the mono-substituted salt [K(H2TNPG)] and the di-substituted salt [K2(HTNPG)] contain 1.0 crystal H2O, while the tri-substituted salt [K3(TNPG)] contains 2.0 crystal H2O. Their thermal decomposition mechanisms and kinetic parameters from 50 to 500 °C were studied under a linear heating rate by differential scanning calorimetry (DSC). Thermal decomposition proceeds through a dehydration stage followed by an intensive exothermic decomposition stage. FT-IR and TG studies verify that the final decomposition residues are potassium cyanide or potassium carbonate. According to the onset temperature of the first exothermic decomposition process of the dehydrated salts, thermal stability increases from K(H2TNPG) and K2(HTNPG) to K3(TNPG), which conforms to the apparent activation energies calculated by Kissinger's and Ozawa-Doyle's methods. Sensitivity tests showed that the potassium salts of H3TNPG are highly sensitive and have a high probability of explosion.

  11. A highly efficient autothermal microchannel reactor for ammonia decomposition: Analysis of hydrogen production in transient and steady-state regimes

    NASA Astrophysics Data System (ADS)

    Engelbrecht, Nicolaas; Chiuta, Steven; Bessarabov, Dmitri G.

    2018-05-01

    The experimental evaluation of an autothermal microchannel reactor for H2 production from NH3 decomposition is described. The reactor design incorporates an autothermal approach, with added NH3 oxidation, for coupled heat supply to the endothermic decomposition reaction. An alternating catalytic plate arrangement is used to accomplish this thermal coupling in a cocurrent flow strategy. Detailed analysis of the transient operating regime associated with reactor start-up and steady-state results is presented. The effects of operating parameters on reactor performance are investigated, specifically, the NH3 decomposition flow rate, NH3 oxidation flow rate, and fuel-oxygen equivalence ratio. Overall, the reactor exhibits rapid response time during start-up; within 60 min, H2 production is approximately 95% of steady-state values. The recommended operating point for steady-state H2 production corresponds to an NH3 decomposition flow rate of 6 NL min⁻¹, an NH3 oxidation flow rate of 4 NL min⁻¹, and a fuel-oxygen equivalence ratio of 1.4. Under these flows, an NH3 conversion of 99.8% and an H2-equivalent fuel cell power output of 0.71 kWe are achieved. The reactor shows good heat utilization with a thermal efficiency of 75.9%. An efficient autothermal reactor design is therefore demonstrated, which may be upscaled to a multi-kW H2 production system for commercial implementation.

  12. Effect of Isomorphous Substitution on the Thermal Decomposition Mechanism of Hydrotalcites

    PubMed Central

    Crosby, Sergio; Tran, Doanh; Cocke, David; Duraia, El-Shazly M.; Beall, Gary W.

    2014-01-01

    Hydrotalcites have many important applications in catalysis, wastewater treatment, gene delivery and polymer stabilization, all depending on preparation history and treatment scenarios. In catalysis and polymer stabilization, thermal decomposition is of great importance. Hydrotalcites form easily with atmospheric carbon dioxide and often interfere with the study of other anion containing systems, particularly if formed at room temperature. The dehydroxylation and decomposition of carbonate occurs simultaneously, making it difficult to distinguish the dehydroxylation mechanisms directly. To date, the majority of work on understanding the decomposition mechanism has utilized hydrotalcite precipitated at room temperature. In this study, evolved gas analysis combined with thermal analysis has been used to show that CO2 contamination is problematic in materials being formed at RT that are poorly crystalline. This has led to some dispute as to the nature of the dehydroxylation mechanism. In this paper, data for the thermal decomposition of the chloride form of hydrotalcite are reported. In addition, carbonate-free hydrotalcites have been synthesized with different charge densities and at different growth temperatures. This combination of parameters has allowed a better understanding of the mechanism of dehydroxylation and the role that isomorphous substitution plays in these mechanisms to be delineated. In addition, the effect of anion type on thermal stability is also reported. A stepwise dehydroxylation model is proposed that is mediated by the level of aluminum substitution. PMID:28788231

  13. Delineating gas bearing reservoir by using spectral decomposition attribute: Case study of Steenkool formation, Bintuni Basin

    NASA Astrophysics Data System (ADS)

    Haris, A.; Pradana, G. S.; Riyanto, A.

    2017-07-01

    The tectonic setting of the Bird's Head region of Papua Island has become an important model for petroleum systems in the eastern part of Indonesia. Current exploration began with the oil seepage findings in the Bintuni and Salawati Basins. Biogenic gas in shallow layers has become an interesting issue in hydrocarbon exploration: the appearance of hydrocarbon accumulations of dry gas type in a shallow layer makes biogenic gas appealing for further research. This paper aims to delineate sweet spots of hydrocarbon potential in a shallow layer by applying the spectral decomposition technique. Spectral decomposition decomposes the seismic signal into individual frequencies that carry significant geological meaning. One spectral decomposition method is the Continuous Wavelet Transform (CWT), which transforms the seismic signal into time and frequency simultaneously and thereby simplifies time-frequency map analysis. When time resolution increases, frequency resolution decreases, and vice versa. In this study, we perform a low-frequency shadow zone analysis in which the amplitude anomaly observed at a low frequency of 15 Hz is compared with the amplitude at mid (20 Hz) and high (30 Hz) frequencies; the amplitude anomaly present at low frequency disappears at high frequency. The spectral decomposition using the CWT algorithm has been successfully applied to delineate the sweet spot zone.
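
    The basic operation behind a low-frequency-shadow analysis, extracting iso-frequency amplitude sections with a CWT, can be sketched as follows in Python, assuming the PyWavelets (pywt) package and a toy synthetic trace rather than any real survey data.

    ```python
    import numpy as np
    import pywt   # PyWavelets (assumed available)

    # Hedged illustration of iso-frequency amplitude sections from a CWT; the trace
    # below is a toy synthetic signal, not data from the survey described above.

    def iso_frequency_sections(trace, dt, freqs_hz, wavelet="morl"):
        fc = pywt.central_frequency(wavelet)                  # dimensionless centre frequency
        scales = fc / (np.asarray(freqs_hz, dtype=float) * dt)
        coeffs, _ = pywt.cwt(trace, scales, wavelet, sampling_period=dt)
        return np.abs(coeffs)                                 # amplitude vs (frequency, time)

    if __name__ == "__main__":
        dt = 0.002                                            # 2 ms sampling
        t = np.arange(0.0, 2.0, dt)
        # toy trace: a 15 Hz "shadow" arrival between 1.2 s and 1.5 s under a 30 Hz background
        trace = np.sin(2*np.pi*30*t) + np.where((t > 1.2) & (t < 1.5), np.sin(2*np.pi*15*t), 0.0)
        amps = iso_frequency_sections(trace, dt, freqs_hz=[15, 20, 30])
        print("15 Hz amplitude inside window :", amps[0, (t > 1.2) & (t < 1.5)].mean())
        print("15 Hz amplitude outside window:", amps[0, t < 1.0].mean())
    ```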

  14. Modal decomposition of turbulent supersonic cavity

    NASA Astrophysics Data System (ADS)

    Soni, R. K.; Arya, N.; De, A.

    2018-06-01

    Self-sustained oscillations in a Mach 3 supersonic cavity with a length-to-depth ratio of three are investigated using wall-modeled large eddy simulation methodology for Re_D = 3.39 × 10⁵. The unsteady data obtained through computation are utilized to investigate the spatial and temporal evolution of the flow field, especially the second invariant of the velocity tensor, while the phase-averaged data are analyzed over a feedback cycle to study the spatial structures. This analysis is accompanied by the proper orthogonal decomposition (POD) data, which reveals the presence of discrete vortices along the shear layer. The POD analysis is performed in both the spanwise and streamwise planes to extract the coherence in flow structures. Finally, dynamic mode decomposition is performed on the data sequence to obtain the dynamic information and deeper insight into the self-sustained mechanism.

  15. Three-dimensional empirical mode decomposition analysis apparatus, method and article of manufacture

    NASA Technical Reports Server (NTRS)

    Gloersen, Per (Inventor)

    2004-01-01

    An apparatus and method of analysis for three-dimensional (3D) physical phenomena. The physical phenomena may include any varying 3D phenomena such as time-varying polar ice flows. A representation of the 3D phenomena is passed through a Hilbert transform to convert the data into complex form. A spatial variable is separated from the complex representation by producing a time-based covariance matrix. The temporal parts of the principal components are produced by applying Singular Value Decomposition (SVD). Based on the rapidity with which the eigenvalues decay, the first 3-10 complex principal components (CPC) are selected for Empirical Mode Decomposition into intrinsic modes. The intrinsic modes produced are filtered in order to reconstruct the spatial part of the CPC. Finally, a filtered time series may be reconstructed from the first 3-10 filtered complex principal components.

  16. Decomposition of Proteins into Dynamic Units from Atomic Cross-Correlation Functions.

    PubMed

    Calligari, Paolo; Gerolin, Marco; Abergel, Daniel; Polimeno, Antonino

    2017-01-10

    In this article, we present a clustering method of atoms in proteins based on the analysis of the correlation times of interatomic distance correlation functions computed from MD simulations. The goal is to provide a coarse-grained description of the protein in terms of fewer elements that can be treated as dynamically independent subunits. Importantly, this domain decomposition method does not take into account structural properties of the protein. Instead, the clustering of protein residues in terms of networks of dynamically correlated domains is defined on the basis of the effective correlation times of the pair distance correlation functions. For these properties, our method stands as a complementary analysis to the customary protein decomposition in terms of quasi-rigid, structure-based domains. Results obtained for a prototypal protein structure illustrate the approach proposed.

  17. Thermodynamic analysis of trimethylgallium decomposition during GaN metal organic vapor phase epitaxy

    NASA Astrophysics Data System (ADS)

    Sekiguchi, Kazuki; Shirakawa, Hiroki; Chokawa, Kenta; Araidai, Masaaki; Kangawa, Yoshihiro; Kakimoto, Koichi; Shiraishi, Kenji

    2018-04-01

    We analyzed the decomposition of Ga(CH3)3 (TMG) during the metal organic vapor phase epitaxy (MOVPE) of GaN on the basis of first-principles calculations and thermodynamic analysis. We performed activation energy calculations of TMG decomposition and determined the main reaction processes of TMG during GaN MOVPE. We found that TMG reacts with the H2 carrier gas and that (CH3)2GaH is generated after the desorption of the methyl group. Next, (CH3)2GaH decomposes into (CH3)GaH2 and this decomposes into GaH3. Finally, GaH3 becomes GaH. In the MOVPE growth of GaN, TMG decomposes into GaH by the successive desorption of its methyl groups. The results presented here concur with recent high-resolution mass spectroscopy results.

  18. A molecular-field-based similarity study of non-nucleoside HIV-1 reverse transcriptase inhibitors

    NASA Astrophysics Data System (ADS)

    Mestres, Jordi; Rohrer, Douglas C.; Maggiora, Gerald M.

    1999-01-01

    This article describes a molecular-field-based similarity method for aligning molecules by matching their steric and electrostatic fields and an application of the method to the alignment of three structurally diverse non-nucleoside HIV-1 reverse transcriptase inhibitors. A brief description of the method, as implemented in the program MIMIC, is presented, including a discussion of pairwise and multi-molecule similarity-based matching. The application provides an example that illustrates how relative binding orientations of molecules can be determined in the absence of detailed structural information on their target protein. In the particular system studied here, availability of the X-ray crystal structures of the respective ligand-protein complexes provides a means for constructing an 'experimental model' of the relative binding orientations of the three inhibitors. The experimental model is derived by using MIMIC to align the steric fields of the three protein P66 subunit main chains, producing an overlay with a 1.41 Å average rms distance between the corresponding Cα's in the three chains. The inter-chain residue similarities for the backbone structures show that the main-chain conformations are conserved in the region of the inhibitor-binding site, with the major deviations located primarily in the 'finger' and RNase H regions. The resulting inhibitor structure overlay provides an experimental-based model that can be used to evaluate the quality of the direct a priori inhibitor alignment obtained using MIMIC. It is found that the 'best' pairwise alignments do not always correspond to the experimental model alignments. Therefore, simply combining the best pairwise alignments will not necessarily produce the optimal multi-molecule alignment. However, the best simultaneous three-molecule alignment was found to reproduce the experimental inhibitor alignment model. A pairwise consistency index has been derived which gauges the quality of combining the pairwise alignments and aids in efficiently forming the optimal multi-molecule alignment analysis. Two post-alignment procedures are described that provide information on feature-based and field-based pharmacophoric patterns. The former corresponds to traditional pharmacophore models and is derived from the contribution of individual atoms to the total similarity. The latter is based on molecular regions rather than atoms and is constructed by computing the percent contribution to the similarity of individual points in a regular lattice surrounding the molecules, which when contoured and colored visually depict regions of highly conserved similarity. A discussion of how the information provided by each of the procedures is useful in drug design is also presented.

  19. Artifact removal from EEG data with empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Grubov, Vadim V.; Runnova, Anastasiya E.; Efremova, Tatyana Yu.; Hramov, Alexander E.

    2017-03-01

    In this paper we propose a novel method for dealing with the physiological artifacts caused by intensive activity of facial and neck muscles and other movements in experimental human EEG recordings. The method is based on analysis of EEG signals with empirical mode decomposition (Hilbert-Huang transform). We introduce the mathematical algorithm of the method with the following steps: empirical mode decomposition of the EEG signal, identification of the empirical modes containing artifacts, removal of those modes, and reconstruction of the initial EEG signal. We test the method by filtering movement artifacts from experimental human EEG signals and show its high efficiency.

  20. Better Decomposition Heuristics for the Maximum-Weight Connected Graph Problem Using Betweenness Centrality

    NASA Astrophysics Data System (ADS)

    Yamamoto, Takanori; Bannai, Hideo; Nagasaki, Masao; Miyano, Satoru

    We present new decomposition heuristics for finding the optimal solution for the maximum-weight connected graph problem, which is known to be NP-hard. Previous optimal algorithms for solving the problem decompose the input graph into subgraphs using heuristics based on node degree. We propose new heuristics based on betweenness centrality measures, and show through computational experiments that our new heuristics tend to reduce the number of subgraphs in the decomposition, and therefore could lead to the reduction in computational time for finding the optimal solution. The method is further applied to analysis of biological pathway data.
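
    The flavour of a betweenness-based decomposition heuristic can be illustrated with the short Python/networkx sketch below (a hedged illustration, not the authors' exact heuristic): the highest-betweenness node is repeatedly removed until every connected component falls below a size threshold, yielding subgraphs that an exact maximum-weight connected-subgraph solver could then handle separately.

    ```python
    import networkx as nx

    # Hedged illustration (not the authors' exact heuristic): repeatedly remove the
    # highest-betweenness node until every connected component is below a size
    # threshold; the removed nodes are kept as separators.

    def betweenness_decomposition(G, max_size):
        work, separators = G.copy(), []
        while max((len(c) for c in nx.connected_components(work)), default=0) > max_size:
            bc = nx.betweenness_centrality(work)
            v = max(bc, key=bc.get)          # node lying on the most shortest paths
            separators.append(v)
            work.remove_node(v)
        parts = [G.subgraph(c).copy() for c in nx.connected_components(work)]
        return parts, separators

    if __name__ == "__main__":
        G = nx.barbell_graph(6, 2)           # two 6-cliques joined by a 2-node path
        parts, seps = betweenness_decomposition(G, max_size=6)
        print("component sizes:", [len(p) for p in parts], "separators:", seps)
    ```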

  1. Application of empirical mode decomposition in removing fidgeting interference in doppler radar life signs monitoring devices.

    PubMed

    Mostafanezhad, Isar; Boric-Lubecke, Olga; Lubecke, Victor; Mandic, Danilo P

    2009-01-01

    Empirical Mode Decomposition has been shown to be effective in the analysis of non-stationary and non-linear signals. As an application to wireless life signs monitoring, in this paper we use the method to condition the signals obtained from a Doppler radar device. Random physical movements (fidgeting) of the human subject during a measurement can fall at the same frequency as the heart or respiration rate and interfere with the measurement. It is shown how Empirical Mode Decomposition can break the radar signal down into its components and help separate and remove the fidgeting interference.

  2. Using dynamic mode decomposition for real-time background/foreground separation in video

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kutz, Jose Nathan; Grosek, Jacob; Brunton, Steven

    The technique of dynamic mode decomposition (DMD) is disclosed herein for the purpose of robustly separating video frames into background (low-rank) and foreground (sparse) components in real-time. Foreground/background separation is achieved at the computational cost of just one singular value decomposition (SVD) and one linear equation solve, thus producing results orders of magnitude faster than robust principal component analysis (RPCA). Additional techniques, including techniques for analyzing the video for multi-resolution time-scale components, and techniques for reusing computations to allow processing of streaming video in real time, are also described herein.

  3. Quantitative separation of tetralin hydroperoxide from its decomposition products by high performance liquid chromatography

    NASA Technical Reports Server (NTRS)

    Worstell, J. H.; Daniel, S. R.

    1981-01-01

    A method for the separation and analysis of tetralin hydroperoxide and its decomposition products by high pressure liquid chromatography has been developed. Elution with a single, mixed solvent from a μ-Porasil column was employed. Constant response factors (internal standard method) over large concentration ranges and reproducible retention parameters are reported.

  4. Educational Outcomes and Socioeconomic Status: A Decomposition Analysis for Middle-Income Countries

    ERIC Educational Resources Information Center

    Nieto, Sandra; Ramos, Raúl

    2015-01-01

    This article analyzes the factors that explain the gap in educational outcomes between the top and bottom quartile of students in different countries, according to their socioeconomic status. To do so, it uses PISA microdata for 10 middle-income and 2 high-income countries, and applies the Oaxaca-Blinder decomposition method. Its results show that…
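
    A minimal sketch of the Oaxaca-Blinder decomposition used in the article is given below in Python with invented synthetic data (the PISA microdata are not reproduced): the mean-outcome gap between a top and a bottom group is split into a part explained by differences in observed characteristics and an unexplained part attributable to differences in estimated returns.

    ```python
    import numpy as np

    # Hedged Oaxaca-Blinder sketch with invented synthetic data (the PISA microdata
    # are not reproduced): gap = explained (characteristics) + unexplained (coefficients).

    def ols(X, y):
        X1 = np.column_stack([np.ones(len(X)), X])
        return np.linalg.lstsq(X1, y, rcond=None)[0]            # [intercept, slopes...]

    def oaxaca_blinder(X_a, y_a, X_b, y_b):
        beta_a, beta_b = ols(X_a, y_a), ols(X_b, y_b)
        xbar_a = np.r_[1.0, X_a.mean(axis=0)]
        xbar_b = np.r_[1.0, X_b.mean(axis=0)]
        gap = y_a.mean() - y_b.mean()
        explained = (xbar_a - xbar_b) @ beta_b                  # characteristics effect
        unexplained = xbar_a @ (beta_a - beta_b)                # coefficients effect
        return gap, explained, unexplained

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        X_a = rng.normal(1.0, 1.0, size=(500, 2))               # "top quartile" characteristics
        X_b = rng.normal(0.0, 1.0, size=(500, 2))               # "bottom quartile" characteristics
        y_a = 10 + X_a @ np.array([2.0, 1.0]) + rng.normal(0, 1, 500)
        y_b = 10 + X_b @ np.array([1.5, 1.0]) + rng.normal(0, 1, 500)
        gap, expl, unexpl = oaxaca_blinder(X_a, y_a, X_b, y_b)
        print(f"gap={gap:.2f} explained={expl:.2f} unexplained={unexpl:.2f}")
    ```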

  5. Randomized Approaches for Nearest Neighbor Search in Metric Space When Computing the Pairwise Distance Is Extremely Expensive

    NASA Astrophysics Data System (ADS)

    Wang, Lusheng; Yang, Yong; Lin, Guohui

    Finding the closest object for a query in a database is a classical problem in computer science. For some modern biological applications, computing the similarity between two objects might be very time consuming. For example, it takes a long time to compute the edit distance between two whole chromosomes and the alignment cost of two 3D protein structures. In this paper, we study the nearest neighbor search problem in metric space, where the pair-wise distance between two objects in the database is known and we want to minimize the number of distances computed on-line between the query and objects in the database in order to find the closest object. We have designed two randomized approaches for indexing metric space databases, where objects are purely described by their distances with each other. Analysis and experiments show that our approaches only need to compute distances to O(log n) objects in order to find the closest object, where n is the total number of objects in the database.
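
    A generic pivot-based sketch of this idea is shown below in Python (it is not the paper's two specific randomized algorithms): distances from every database object to a few random pivots are precomputed off-line, and at query time the triangle-inequality bound |d(q,p) - d(x,p)| <= d(q,x) lets many objects be discarded without an expensive on-line distance computation.

    ```python
    import random

    # Hedged, generic pivot-based sketch (not the paper's two specific randomized
    # algorithms): prune candidates with the triangle-inequality lower bound before
    # paying for an expensive on-line distance computation.

    class PivotIndex:
        def __init__(self, objects, dist, n_pivots=8, seed=0):
            self.objects, self.dist = objects, dist
            rnd = random.Random(seed)
            self.pivots = rnd.sample(range(len(objects)), n_pivots)
            # off-line table: distance of every object to every pivot
            self.table = [[dist(o, objects[p]) for p in self.pivots] for o in objects]

        def nearest(self, query):
            dq = [self.dist(query, self.objects[p]) for p in self.pivots]
            best_i, best_d, computed = None, float("inf"), len(self.pivots)
            for i, row in enumerate(self.table):
                if max(abs(a - b) for a, b in zip(dq, row)) >= best_d:
                    continue                                  # pruned: no on-line distance needed
                d = self.dist(query, self.objects[i])
                computed += 1
                if d < best_d:
                    best_i, best_d = i, d
            return best_i, best_d, computed

    if __name__ == "__main__":
        rnd = random.Random(42)
        pts = [(rnd.random(), rnd.random()) for _ in range(2000)]
        euclid = lambda a, b: ((a[0]-b[0])**2 + (a[1]-b[1])**2) ** 0.5
        index = PivotIndex(pts, euclid, n_pivots=16)
        i, d, computed = index.nearest((0.5, 0.5))
        print("nearest:", i, "distance:", round(d, 4), "on-line distance computations:", computed)
    ```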

  6. Evaluating multiple determinants of the structure of plant-animal mutualistic networks.

    PubMed

    Vázquez, Diego P; Chacoff, Natacha P; Cagnolo, Luciano

    2009-08-01

    The structure of mutualistic networks is likely to result from the simultaneous influence of neutrality and the constraints imposed by complementarity in species phenotypes, phenologies, spatial distributions, phylogenetic relationships, and sampling artifacts. We develop a conceptual and methodological framework to evaluate the relative contributions of these potential determinants. Applying this approach to the analysis of a plant-pollinator network, we show that information on relative abundance and phenology suffices to predict several aggregate network properties (connectance, nestedness, interaction evenness, and interaction asymmetry). However, such information falls short of predicting the detailed network structure (the frequency of pairwise interactions), leaving a large amount of variation unexplained. Taken together, our results suggest that both relative species abundance and complementarity in spatiotemporal distribution contribute substantially to generating the observed network patterns, but that this information is by no means sufficient to predict the occurrence and frequency of pairwise interactions. Future studies could use our methodological framework to evaluate the generality of our findings in a representative sample of study systems with contrasting ecological conditions.

  7. Wiki surveys: open and quantifiable social data collection.

    PubMed

    Salganik, Matthew J; Levy, Karen E C

    2015-01-01

    In the social sciences, there is a longstanding tension between data collection methods that facilitate quantification and those that are open to unanticipated information. Advances in technology now enable new, hybrid methods that combine some of the benefits of both approaches. Drawing inspiration from online information aggregation systems like Wikipedia and from traditional survey research, we propose a new class of research instruments called wiki surveys. Just as Wikipedia evolves over time based on contributions from participants, we envision an evolving survey driven by contributions from respondents. We develop three general principles that underlie wiki surveys: they should be greedy, collaborative, and adaptive. Building on these principles, we develop methods for data collection and data analysis for one type of wiki survey, a pairwise wiki survey. Using two proof-of-concept case studies involving our free and open-source website www.allourideas.org, we show that pairwise wiki surveys can yield insights that would be difficult to obtain with other methods.

  8. Wiki Surveys: Open and Quantifiable Social Data Collection

    PubMed Central

    Salganik, Matthew J.; Levy, Karen E. C.

    2015-01-01

    In the social sciences, there is a longstanding tension between data collection methods that facilitate quantification and those that are open to unanticipated information. Advances in technology now enable new, hybrid methods that combine some of the benefits of both approaches. Drawing inspiration from online information aggregation systems like Wikipedia and from traditional survey research, we propose a new class of research instruments called wiki surveys. Just as Wikipedia evolves over time based on contributions from participants, we envision an evolving survey driven by contributions from respondents. We develop three general principles that underlie wiki surveys: they should be greedy, collaborative, and adaptive. Building on these principles, we develop methods for data collection and data analysis for one type of wiki survey, a pairwise wiki survey. Using two proof-of-concept case studies involving our free and open-source website www.allourideas.org, we show that pairwise wiki surveys can yield insights that would be difficult to obtain with other methods. PMID:25992565

  9. Absolute continuity for operator valued completely positive maps on C∗-algebras

    NASA Astrophysics Data System (ADS)

    Gheondea, Aurelian; Kavruk, Ali Şamil

    2009-02-01

    Motivated by applicability to quantum operations, quantum information, and quantum probability, we investigate the notion of absolute continuity for operator valued completely positive maps on C∗-algebras, previously introduced by Parthasarathy [in Athens Conference on Applied Probability and Time Series Analysis I (Springer-Verlag, Berlin, 1996), pp. 34-54]. We obtain an intrinsic definition of absolute continuity, we show that the Lebesgue decomposition defined by Parthasarathy is the maximal one among all other Lebesgue-type decompositions and that this maximal Lebesgue decomposition does not depend on the jointly dominating completely positive map, we obtain more flexible formulas for calculating the maximal Lebesgue decomposition, and we point out the nonuniqueness of the Lebesgue decomposition as well as a sufficient condition for uniqueness. In addition, we consider Radon-Nikodym derivatives for absolutely continuous completely positive maps that, in general, are unbounded positive self-adjoint operators affiliated to a certain von Neumann algebra, and we obtain a spectral approximation by bounded Radon-Nikodym derivatives. An application to the existence of the infimum of two completely positive maps is indicated, and formulas in terms of Choi's matrices for the Lebesgue decomposition of completely positive maps in matrix algebras are obtained.

  10. Relative distribution of ketamine and norketamine in skeletal tissues following various periods of decomposition.

    PubMed

    Watterson, James H; Donohue, Joseph P

    2011-09-01

    Skeletal tissues (rat) were analyzed for ketamine (KET) and norketamine (NKET) following acute ketamine exposure (75 mg/kg i.p.) to examine the influence of bone type and decomposition period on drug levels. Following euthanasia, drug-free (n = 6) and drug-positive (n = 20) animals decomposed outdoors in rural Ontario for 0, 1, or 2 weeks. Skeletal remains were recovered and ground samples of various bones underwent passive methanolic extraction and analysis by GC-MS after solid-phase extraction. Drug levels, expressed as mass-normalized response ratios, were compared across tissue types and decomposition periods. Bone type was a main effect (p < 0.05) for drug level and drug/metabolite level ratio (DMLR) for all decomposition times, except for DMLR after 2 weeks of decomposition. Mean drug level (KET and NKET) and DMLR varied by up to 23-fold, 18-fold, and 5-fold, respectively, between tissue types. Decomposition time was significantly related to DMLR, KET level, and NKET level in 3/7, 4/7, and 1/7 tissue types, respectively. Although substantial site-dependence may exist in measured bone drug levels, ratios of drug and metabolite levels should be investigated for utility in discriminating drug administration patterns in forensic work.

  11. Fungal community structure of fallen pine and oak wood at different stages of decomposition in the Qinling Mountains, China.

    PubMed

    Yuan, Jie; Zheng, Xiaofeng; Cheng, Fei; Zhu, Xian; Hou, Lin; Li, Jingxia; Zhang, Shuoxin

    2017-10-24

    Historically, intense forest hazards have resulted in an increase in the quantity of fallen wood in the Qinling Mountains. Fallen wood has a decisive influence on the nutrient cycling, carbon budget and ecosystem biodiversity of forests, and fungi are essential for the decomposition of fallen wood. Moreover, decaying dead wood alters fungal communities. The development of high-throughput sequencing methods has facilitated the ongoing investigation of relevant molecular forest ecosystems with a focus on fungal communities. In this study, fallen wood and its associated fungal communities were compared at different stages of decomposition to evaluate relative species abundance and species diversity. The physical and chemical factors that alter fungal communities were also compared by performing correspondence analysis according to host tree species across all stages of decomposition. Tree species were the major source of differences in fungal community diversity at all decomposition stages, and fungal communities achieved the highest levels of diversity at the intermediate and late decomposition stages. Interactions between various physical and chemical factors and fungal communities shared the same regulatory mechanisms, and there was no tree species-specific influence. Improving our knowledge of wood-inhabiting fungal communities is crucial for forest ecosystem conservation.

  12. Prospects for inferring pairwise relationships with single nucleotide polymorphisms

    Treesearch

    Jeffery C. Glaubitz; O. Eugene, Jr. Rhodes; J. Andrew DeWoody

    2003-01-01

    An extraordinarily large number of single nucleotide polymorphisms (SNPs) are now available in humans as well as in other model organisms. Technological advancements may soon make it feasible to assay hundreds of SNPs in virtually any organism of interest. One potential application of SNPs is the determination of pairwise genetic relationships in populations without...

  13. Hierarchical semi-numeric method for pairwise fuzzy group decision making.

    PubMed

    Marimin, M; Umano, M; Hatono, I; Tamura, H

    2002-01-01

    Gradual improvements to a single-level semi-numeric method, i.e., linguistic-label preference representation by fuzzy set computation for pairwise fuzzy group decision making, are summarized. The method is extended to solve multiple-criteria, hierarchically structured pairwise fuzzy group decision-making problems. The problems are hierarchically structured into focus, criteria, and alternatives. Decision makers express their evaluations of criteria, and of alternatives under each criterion, by using linguistic labels. The labels are converted into and processed as triangular fuzzy numbers (TFNs). Evaluations of criteria yield relative criteria weights. Evaluations of the alternatives, based on each criterion, yield a degree of preference for each alternative or a degree of satisfaction for each preference value. By using a neat ordered weighted average (OWA) or a fuzzy weighted average operator, solutions obtained based on each criterion are aggregated into final solutions. The hierarchical semi-numeric method is suitable for solving larger and more complex pairwise fuzzy group decision-making problems. The proposed method has been verified and applied to solve some real cases and is compared to Saaty's (1996) analytic hierarchy process (AHP) method.

  14. A new method of content based medical image retrieval and its applications to CT imaging sign retrieval.

    PubMed

    Ma, Ling; Liu, Xiabi; Gao, Yan; Zhao, Yanfeng; Zhao, Xinming; Zhou, Chunwu

    2017-02-01

    This paper proposes a new method of content based medical image retrieval through considering fused, context-sensitive similarity. Firstly, we fuse the semantic and visual similarities between the query image and each image in the database as their pairwise similarities. Then, we construct a weighted graph whose nodes represent the images and edges measure their pairwise similarities. By using the shortest path algorithm over the weighted graph, we obtain a new similarity measure, context-sensitive similarity measure, between the query image and each database image to complete the retrieval process. Actually, we use the fused pairwise similarity to narrow down the semantic gap for obtaining a more accurate pairwise similarity measure, and spread it on the intrinsic data manifold to achieve the context-sensitive similarity for a better retrieval performance. The proposed method has been evaluated on the retrieval of the Common CT Imaging Signs of Lung Diseases (CISLs) and achieved not only better retrieval results but also the satisfactory computation efficiency. Copyright © 2017 Elsevier Inc. All rights reserved.
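
    The context-sensitive step can be sketched as follows in Python (the fused semantic/visual similarity itself is application specific and is stubbed out with a toy similarity matrix): pairwise similarities become graph edge costs, shortest-path costs from the query node are computed with scipy's Dijkstra routine, and the path costs are mapped back to a similarity used for ranking, so that similarity spreads along the data manifold rather than relying only on direct query-image comparisons.

    ```python
    import numpy as np
    from scipy.sparse import csr_matrix
    from scipy.sparse.csgraph import dijkstra

    # Hedged sketch of the context-sensitive step only; the fused semantic/visual
    # similarity is application specific and is replaced here by a toy matrix.

    def context_sensitive_ranking(pairwise_sim, query_index, k_neighbors=10):
        n = pairwise_sim.shape[0]
        cost = -np.log(np.clip(pairwise_sim, 1e-12, 1.0 - 1e-6))   # high similarity -> low cost
        graph = np.zeros((n, n))
        for i in range(n):                                         # keep each node's strongest edges
            nbrs = np.argsort(cost[i])[:k_neighbors + 1]
            graph[i, nbrs] = cost[i, nbrs]
        np.fill_diagonal(graph, 0.0)                               # zero entries mean "no edge"
        dist = dijkstra(csr_matrix(graph), directed=False, indices=query_index)
        context_sim = np.exp(-dist)
        return np.argsort(-context_sim)                            # database indices, best first

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        feats = np.vstack([rng.normal(0, 1, (20, 5)), rng.normal(3, 1, (20, 5))])
        sim = np.exp(-np.linalg.norm(feats[:, None] - feats[None, :], axis=-1))
        print(context_sensitive_ranking(sim, query_index=0)[:5])   # mostly indices from the first cluster
    ```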

  15. SVM-dependent pairwise HMM: an application to protein pairwise alignments.

    PubMed

    Orlando, Gabriele; Raimondi, Daniele; Khan, Taushif; Lenaerts, Tom; Vranken, Wim F

    2017-12-15

    Methods able to provide reliable protein alignments are crucial for many bioinformatics applications. In recent years many different algorithms have been developed and various kinds of information, from sequence conservation to secondary structure, have been used to improve the alignment performance. This is especially relevant for proteins with highly divergent sequences. However, recent works suggest that different features may have different importance in diverse protein classes, and it would be an advantage to have more customizable approaches, capable of dealing with different alignment definitions. Here we present Rigapollo, a highly flexible pairwise alignment method based on a pairwise HMM-SVM that can use any type of information to build alignments. Rigapollo lets the user decide the optimal features to align their protein class of interest. It outperforms current state-of-the-art methods on two well-known benchmark datasets when aligning highly divergent sequences. A Python implementation of the algorithm is available at http://ibsquare.be/rigapollo. wim.vranken@vub.be. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com

  16. Frequency-Dependent Selection: The High Potential for Permanent Genetic Variation in the Diallelic, Pairwise Interaction Model

    PubMed Central

    Asmussen, M. A.; Basnayake, E.

    1990-01-01

    A detailed analytic and numerical study is made of the potential for permanent genetic variation in frequency-dependent models based on pairwise interactions among genotypes at a single diallelic locus. The full equilibrium structure and qualitative gene-frequency dynamics are derived analytically for a symmetric model, in which pairwise fitnesses are chiefly determined by the genetic similarity of the individuals involved. This is supplemented by an extensive numerical investigation of the general model, the symmetric model, and nine other special cases. Together the results show that there is a high potential for permanent genetic diversity in the pairwise interaction model, and provide insight into the extent to which various forms of genotypic interactions enhance or reduce this potential. Technically, although two stable polymorphic equilibria are possible, the increased likelihood of maintaining both alleles, and the poor performance of protected polymorphism conditions as a measure of this likelihood, are primarily due to a greater variety and frequency of equilibrium patterns with one stable polymorphic equilibrium, in conjunction with a disproportionately large domain of attraction for stable internal equilibria. PMID:2341034

  17. Adequacy assessment of mathematical models in the dynamics of litter decomposition in a tropical forest Mosaic Atlantic, in southeastern Brazil.

    PubMed

    Nunes, F P; Garcia, Q S

    2015-05-01

    The study of litter decomposition and nutrient cycling is essential to understanding the structure and functioning of native forests. Mathematical models can help to understand local and temporal variations in litter fall and their relationships with environmental variables. The objective of this study was to test the adequacy of mathematical models for leaf litter decomposition in the Atlantic Forest in southeastern Brazil. We studied four native forest sites in Parque Estadual do Rio Doce, a Biosphere Reserve of the Atlantic Forest, where 200 decomposition bags (20 × 20 cm nylon bags with 2 mm mesh, each containing 10 grams of litter) were installed. Monthly, from 09/2007 to 04/2009, 10 litterbags were removed to determine mass loss. We compared three nonlinear models: (1) the exponential model of Olson (1963), which assumes a constant K; (2) the model proposed by Fountain and Schowalter (2004); and (3) the model proposed by Coelho and Borges (2005), which treats K as variable. The models were evaluated through QMR, SQR, SQTC, DMA and the F test. The Fountain and Schowalter (2004) model was inappropriate for this study because it overestimated the decomposition rate. The decay curve analysis suggested that the model with variable K was more appropriate, although the QMR and DMA values revealed no significant difference (p > 0.05) between the models. The analysis showed a better DMA adjustment for the variable-K model, reinforced by the values of the adjustment coefficient (R2). However, convergence problems were observed in this model when estimating outliers in the study areas, which did not occur with the constant-K model; this problem may be related to the non-linear fit of the mass/time values used to generate the variable K. The constant-K model was shown to describe the decomposition curves adequately for the separate areas, with good adjustment and no convergence problems. The results demonstrated the adequacy of the Olson model for estimating tropical forest litter decomposition. Despite using a reduced number of parameters matching the steps of the decomposition process, no convergence difficulties were observed with the Olson model, so it can be used to describe decomposition curves in different types of environments, estimating K appropriately.
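
    Fitting the constant-K Olson model to litterbag data reduces to a one-parameter nonlinear least-squares fit of exp(-Kt) to the fraction of mass remaining. The Python sketch below uses scipy.optimize.curve_fit on invented numbers, purely for illustration; the Rio Doce measurements are not reproduced here.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Hedged sketch of fitting the constant-K Olson (1963) single-exponential model;
    # the numbers below are invented for illustration, not the Rio Doce data.

    def olson(t_months, k):
        return np.exp(-k * t_months)          # fraction of the initial mass remaining

    if __name__ == "__main__":
        t = np.array([0, 2, 4, 6, 8, 10, 12, 16, 19], dtype=float)   # months in the field
        remaining = np.array([1.00, 0.90, 0.80, 0.74, 0.66, 0.61, 0.55, 0.47, 0.42])
        (k_hat,), _ = curve_fit(olson, t, remaining, p0=[0.05])
        print(f"K = {k_hat:.3f} per month, mass half-life = {np.log(2) / k_hat:.1f} months")
    ```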

  18. COMPADRE: an R and web resource for pathway activity analysis by component decompositions.

    PubMed

    Ramos-Rodriguez, Roberto-Rafael; Cuevas-Diaz-Duran, Raquel; Falciani, Francesco; Tamez-Peña, Jose-Gerardo; Trevino, Victor

    2012-10-15

    The analysis of biological networks has become essential for the study of functional genomic data. Compadre is a tool to estimate pathway/gene-set activity indexes using sub-matrix decompositions for biological network analyses. The Compadre pipeline also includes one of the direct uses of activity indexes: the detection of altered gene sets. For this, the gene expression sub-matrix of a gene set is decomposed into components, which are used to test differences between groups of samples. This procedure is performed with and without differentially expressed genes to decrease false calls. During this process, Compadre also performs an over-representation test. Compadre already implements four decomposition methods [principal component analysis (PCA), Isomaps, independent component analysis (ICA) and non-negative matrix factorization (NMF)], six statistical tests (t- and f-test, SAM, Kruskal-Wallis, Welch and Brown-Forsythe), several gene sets (KEGG, BioCarta, Reactome, GO and MsigDB) and can be easily expanded. Our simulation results, shown in the Supplementary Information, suggest that Compadre detects more pathways than over-representation tools like David, Babelomics and Webgestalt and fewer false positives than PLAGE. The output is composed of results from decomposition and over-representation analyses, providing a more complete biological picture. Examples provided in the Supplementary Information show the utility, versatility and simplicity of Compadre for analyses of biological networks. Compadre is freely available at http://bioinformatica.mty.itesm.mx:8080/compadre. The R package is also available at https://sourceforge.net/p/compadre.

  19. Decomposition analysis of water footprint changes in a water-limited river basin: a case study of the Haihe River basin, China

    NASA Astrophysics Data System (ADS)

    Zhi, Y.; Yang, Z. F.; Yin, X. A.

    2014-05-01

    Decomposition analysis of water footprint (WF) changes, or assessing the changes in WF and identifying the contributions of factors leading to the changes, is important to water resource management. Instead of focusing on WF from the perspective of administrative regions, we built a framework in which the input-output (IO) model, the structural decomposition analysis (SDA) model and the generating regional IO tables (GRIT) method are combined to implement decomposition analysis for WF in a river basin. This framework is illustrated in the WF in Haihe River basin (HRB) from 2002 to 2007, which is a typical water-limited river basin. It shows that the total WF in the HRB increased from 4.3 × 10¹⁰ m³ in 2002 to 5.6 × 10¹⁰ m³ in 2007, and the agriculture sector makes the dominant contribution to the increase. Both the WF of domestic products (internal) and the WF of imported products (external) increased, and the proportion of external WF rose from 29.1 to 34.4%. The technological effect was the dominant contributor to offsetting the increase of WF. However, the growth of WF caused by the economic structural effect and the scale effect was greater, so the total WF increased. This study provides insights about water challenges in the HRB and proposes possible strategies for the future, and serves as a reference for WF management and policy-making in other water-limited river basins.
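
    A simplified three-factor structural decomposition in the spirit of the SDA used here can be sketched in Python. In the sketch the water footprint is written as the sum over sectors of intensity × structure × scale, and the change between two years is split into technology, structure and scale effects by averaging the two polar decompositions, which reproduces the total change exactly; the sector numbers are invented and are not the HRB input-output data.

    ```python
    import numpy as np

    # Hedged, simplified structural-decomposition sketch with invented sector numbers
    # (not the HRB input-output tables): WF = sum_i intensity_i * structure_i * scale.

    def sda_three_factor(e0, s0, y0, e1, s1, y1):
        def polar(a0, b0, c0, a1, b1, c1):
            return np.array([np.sum((a1 - a0) * b0 * c0),     # first-factor effect
                             np.sum(a1 * (b1 - b0) * c0),     # second-factor effect
                             np.sum(a1 * b1 * (c1 - c0))])    # third-factor effect
        forward = polar(e0, s0, y0, e1, s1, y1)
        reverse = polar(e1, s1, y1, e0, s0, y0)               # decomposition with roles swapped
        return (forward - reverse) / 2.0                      # average of the two polar forms

    if __name__ == "__main__":
        e0 = np.array([3.0, 0.8, 0.3]); e1 = np.array([2.4, 0.7, 0.25])   # water intensity per unit output
        s0 = np.array([0.5, 0.3, 0.2]); s1 = np.array([0.45, 0.33, 0.22]) # sectoral output shares
        y0, y1 = 100.0, 140.0                                             # total output (scale)
        tech, struct, scale = sda_three_factor(e0, s0, y0, e1, s1, y1)
        total = np.sum(e1 * s1 * y1) - np.sum(e0 * s0 * y0)
        print("technology:", tech, "structure:", struct, "scale:", scale,
              "sum:", tech + struct + scale, "total change:", total)
    ```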

  20. Marginal semi-supervised sub-manifold projections with informative constraints for dimensionality reduction and recognition.

    PubMed

    Zhang, Zhao; Zhao, Mingbo; Chow, Tommy W S

    2012-12-01

    In this work, the semi-supervised dimensionality reduction (DR) problem of learning from partially constrained data with sub-manifold projections is discussed. Two semi-supervised DR algorithms termed Marginal Semi-Supervised Sub-Manifold Projections (MS³MP) and orthogonal MS³MP (OMS³MP) are proposed. MS³MP in the singular case is also discussed. We also present the weighted least squares view of MS³MP. Based on specifying the types of neighborhoods with pairwise constraints (PC) and the defined manifold scatters, our methods can preserve the local properties of all points and the discriminant structures embedded in the localized PC. The sub-manifolds of different classes can also be separated. In PC-guided methods, exploring and selecting the informative constraints is challenging, and random constraint subsets significantly affect the performance of algorithms. This paper also introduces an effective technique to select the informative constraints for DR with consistent constraints. The analytic form of the projection axes can be obtained by eigen-decomposition. The connections between this work and other related work are also elaborated. The validity of the proposed constraint selection approach and DR algorithms is evaluated on benchmark problems. Extensive simulations show that our algorithms can deliver promising results over some widely used state-of-the-art semi-supervised DR techniques. Copyright © 2012 Elsevier Ltd. All rights reserved.

  1. Decomposition rates and termite assemblage composition in semiarid Africa

    USGS Publications Warehouse

    Schuurman, G.

    2005-01-01

    Outside of the humid tropics, abiotic factors are generally considered the dominant regulators of decomposition, and biotic influences are frequently not considered in predicting decomposition rates. In this study, I examined the effect of termite assemblage composition and abundance on decomposition of wood litter of an indigenous species (Croton megalobotrys) in five terrestrial habitats of the highly seasonal semiarid Okavango Delta region of northern Botswana, to determine whether natural variation in decomposer community composition and abundance influences decomposition rates. I conducted the study in two areas, Xudum and Santawani, with the Xudum study preceding the Santawani study. I assessed termite assemblage composition and abundance using a grid of survey baits (rolls of toilet paper) placed on the soil surface and checked 2-4 times/month. I placed a billet (a section of wood litter) next to each survey bait and measured decomposition in a plot by averaging the mass loss of its billets. Decomposition rates varied up to sixfold among plots within the same habitat and locality, despite the fact that these plots experienced the same climate. In addition, billets decomposed significantly faster during the cooler and drier Santawani study, contradicting climate-based predictions. Because termite incidence was generally higher in Santawani plots, termite abundance initially seemed a likely determinant of decomposition in this system. However, no significant effect of termite incidence on billet mass loss rates was observed among the Xudum plots, where decomposition rates remained low even though termite incidence varied considerably. Considering the incidences of fungus-growing termites and non-fungus-growing termites separately resolves this apparent contradiction: in both Santawani and Xudum, only fungus-growing termites play a significant role in decomposition. This result is mirrored in an analysis of the full data set of combined Xudum and Santawani data. The determination that natural variation in the abundance of a single taxonomic group of soil fauna, a termite subfamily, determines almost all observed variation in decomposition rates supports the emerging view that biotic influences may be important in many biomes and that consideration of decomposer community composition and abundance may be critical for accurate prediction of decomposition rates. © 2005 by the Ecological Society of America.

  2. Consistency-based rectification of nonrigid registrations

    PubMed Central

    Gass, Tobias; Székely, Gábor; Goksel, Orcun

    2015-01-01

    We present a technique to rectify nonrigid registrations by improving their group-wise consistency, which is a widely used unsupervised measure to assess pair-wise registration quality. While pair-wise registration methods cannot guarantee any group-wise consistency, group-wise approaches typically enforce perfect consistency by registering all images to a common reference. However, errors in individual registrations to the reference then propagate, distorting the mean and accumulating in the pair-wise registrations inferred via the reference. Furthermore, the assumption that perfect correspondences exist is not always true, e.g., for interpatient registration. The proposed consistency-based registration rectification (CBRR) method addresses these issues by minimizing the group-wise inconsistency of all pair-wise registrations using a regularized least-squares algorithm. The regularization controls the adherence to the original registration, which is additionally weighted by the local postregistration similarity. This allows CBRR to adaptively improve consistency while locally preserving accurate pair-wise registrations. We show that the resulting registrations are not only more consistent, but also have lower average transformation error when compared to known transformations in simulated data. On clinical data, we show improvements of up to 50% target registration error in breathing motion estimation from four-dimensional MRI and improvements in atlas-based segmentation quality of up to 65% in terms of mean surface distance in three-dimensional (3-D) CT. Such improvement was observed consistently using different registration algorithms, dimensionality (two-dimensional/3-D), and modalities (MRI/CT). PMID:26158083

  3. Kinetics and mechanism of solid decompositions — From basic discoveries by atomic absorption spectrometry and quadrupole mass spectroscopy to thorough thermogravimetric analysis

    NASA Astrophysics Data System (ADS)

    L'vov, Boris V.

    2008-02-01

    This paper sums up the evolution of the thermochemical approach to the interpretation of solid decompositions over the past 25 years. This period includes two stages related to decomposition studies by different techniques: by ET AAS and QMS in 1981-2001 and by TG in 2002-2007. As a result of the ET AAS and QMS investigations, a method for the determination of absolute rates of solid decompositions was developed and the mechanism of decomposition through congruent dissociative vaporization was discovered. On this basis, in the period from 1997 to 2001, the decomposition mechanisms of several classes of reactants were interpreted and some unusual effects observed in TA were explained. However, the thermochemical approach has not received any support from other TA researchers. One of the potential reasons for this distrust was the unreliability of the E values measured by the traditional Arrhenius plot method. Theoretical analysis and comparison of the metrological features of the different methods used to determine thermochemical quantities led to the conclusion that, in comparison with the Arrhenius plot and second-law methods, the third-law method is much to be preferred. However, the third-law method cannot be used in kinetic studies based on the Arrhenius approach, because it requires measurement of the equilibrium pressures of the decomposition products. By contrast, the method of absolute rates is ideally suited for this purpose. As a result of the much higher precision of the third-law method, some quantitative conclusions that follow from the theory were confirmed, and several new effects, which were invisible in the framework of the Arrhenius approach, have been revealed. In spite of the great progress made in developing a reliable methodology based on the third-law method, the thermochemical approach remains, as before, largely unadopted.
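
    For orientation, the two routes to the thermochemical quantities contrasted above can be written compactly; this is a sketch built from standard thermodynamic relations, not a reproduction of the author's derivations.

```latex
% Second-law / Arrhenius route: a slope fitted over a range of temperatures.
% Third-law route: a single measurement of K_P (or of the absolute decomposition rate)
% combined with tabulated absolute entropies.
\begin{align*}
  E_a &= -R\,\frac{\mathrm{d}\,\ln k}{\mathrm{d}\,(1/T)} \\
  \Delta_r H^{\circ}_{T} &= T\left(\Delta_r S^{\circ}_{T} - R\ln K_P\right)
\end{align*}
```

    The precision argument summarized above rests on the second relation depending on a single measured quantity at each temperature rather than on a fitted slope.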

  4. Microarray analysis of port wine stains before and after pulsed dye laser treatment.

    PubMed

    Laquer, Vivian T; Hevezi, Peter A; Albrecht, Huguette; Chen, Tina S; Zlotnik, Albert; Kelly, Kristen M

    2013-02-01

    Neither the pathogenesis of port wine stain (PWS) birthmarks nor the tissue effects of pulsed dye laser (PDL) treatment of these lesions is fully understood. There are few published reports utilizing gene expression analysis in human PWS skin. We aim to compare gene expression in PWS before and after PDL, using DNA microarrays that represent most, if not all, human genes to obtain comprehensive molecular profiles of PWS lesions and PDL-associated tissue effects. Five human subjects had PDL treatment of their PWS. One week later, three biopsies were taken from each subject: normal skin (N); untreated PWS (PWS); PWS post-PDL (PWS + PDL). Samples included two lower extremity lesions, two facial lesions, and one facial nodule. High-quality total RNA isolated from skin biopsies was processed and applied to Affymetrix Human Gene 1.0 ST microarrays for gene expression analysis. We performed 16 pair-wise comparisons identifying either up- or down-regulated genes between N versus PWS and PWS versus PWS + PDL for four of the donor samples. The PWS nodule (nPWS) was analyzed separately. There was significant variation in gene expression profiles between individuals. By doing pair-wise comparisons between samples taken from the same donor, we were able to identify genes that may participate in the formation of PWS lesions and PDL tissue effects. Genes associated with immune, epidermal, and lipid metabolism were up-regulated in PWS skin. The nPWS exhibited more profound differences in gene expression than the rest of the samples, with significant differential expression of genes associated with angiogenesis, tumorigenesis, and inflammation. In summary, gene expression profiles from N, PWS, and PWS + PDL demonstrated significant variation within samples from the same donor and between donors. By doing pair-wise comparisons between samples taken from the same donor and comparing these results between donors, we were able to identify genes that may participate in formation of PWS and PDL effects. Our preliminary results indicate changes in gene expression of angiogenesis-related genes, suggesting that dysregulation of angiogenic signals and/or components may contribute to PWS pathology. Copyright © 2012 Wiley Periodicals, Inc.

  5. Decomposition Odour Profiling in the Air and Soil Surrounding Vertebrate Carrion

    PubMed Central

    2014-01-01

    Chemical profiling of decomposition odour is conducted in the environmental sciences to detect malodourous target sources in air, water or soil. More recently decomposition odour profiling has been employed in the forensic sciences to generate a profile of the volatile organic compounds (VOCs) produced by decomposed remains. The chemical profile of decomposition odour is still being debated with variations in the VOC profile attributed to the sample collection technique, method of chemical analysis, and environment in which decomposition occurred. To date, little consideration has been given to the partitioning of odour between different matrices and the impact this has on developing an accurate VOC profile. The purpose of this research was to investigate the decomposition odour profile surrounding vertebrate carrion to determine how VOCs partition between soil and air. Four pig carcasses (Sus scrofa domesticus L.) were placed on a soil surface to decompose naturally and their odour profile monitored over a period of two months. Corresponding control sites were also monitored to determine the VOC profile of the surrounding environment. Samples were collected from the soil below and the air (headspace) above the decomposed remains using sorbent tubes and analysed using gas chromatography-mass spectrometry. A total of 249 compounds were identified but only 58 compounds were common to both air and soil samples. This study has demonstrated that soil and air samples produce distinct subsets of VOCs that contribute to the overall decomposition odour. Sample collection from only one matrix will reduce the likelihood of detecting the complete spectrum of VOCs, which further confounds the issue of determining a complete and accurate decomposition odour profile. Confirmation of this profile will enhance the performance of cadaver-detection dogs that are tasked with detecting decomposition odour in both soil and air to locate victim remains. PMID:24740412

  6. Plasma-catalyst hybrid reactor with CeO2/γ-Al2O3 for benzene decomposition with synergetic effect and nano particle by-product reduction.

    PubMed

    Mao, Lingai; Chen, Zhizong; Wu, Xinyue; Tang, Xiujuan; Yao, Shuiliang; Zhang, Xuming; Jiang, Boqiong; Han, Jingyi; Wu, Zuliang; Lu, Hao; Nozaki, Tomohiro

    2018-04-05

    A dielectric barrier discharge (DBD) catalyst hybrid reactor with CeO2/γ-Al2O3 catalyst balls was investigated for benzene decomposition at atmospheric pressure and 30 °C. At an energy density of 37-40 J/L, benzene decomposition was as high as 92.5% when using the hybrid reactor with 5.0 wt% CeO2/γ-Al2O3, while it was 10%-20% when using a normal DBD reactor without a catalyst. Benzene decomposition using the hybrid reactor was almost the same as that using an O3 catalyst reactor with the same CeO2/γ-Al2O3 catalyst, indicating that O3 plays a key role in the benzene decomposition. Fourier transform infrared spectroscopy analysis showed that O3 adsorption on CeO2/γ-Al2O3 promotes the production of adsorbed O₂⁻ and O₂²⁻, which contribute to benzene decomposition over the heterogeneous catalyst. Nanoparticle by-products (phenol and 1,4-benzoquinone) from benzene decomposition can be significantly reduced using the CeO2/γ-Al2O3 catalyst. H2O inhibits benzene decomposition; however, it improves CO2 selectivity. The deactivated CeO2/γ-Al2O3 catalyst can be regenerated by performing discharges at 100 °C and 192-204 J/L. A decomposition mechanism of benzene over the CeO2/γ-Al2O3 catalyst is proposed. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. Decomposition odour profiling in the air and soil surrounding vertebrate carrion.

    PubMed

    Forbes, Shari L; Perrault, Katelynn A

    2014-01-01

    Chemical profiling of decomposition odour is conducted in the environmental sciences to detect malodourous target sources in air, water or soil. More recently decomposition odour profiling has been employed in the forensic sciences to generate a profile of the volatile organic compounds (VOCs) produced by decomposed remains. The chemical profile of decomposition odour is still being debated with variations in the VOC profile attributed to the sample collection technique, method of chemical analysis, and environment in which decomposition occurred. To date, little consideration has been given to the partitioning of odour between different matrices and the impact this has on developing an accurate VOC profile. The purpose of this research was to investigate the decomposition odour profile surrounding vertebrate carrion to determine how VOCs partition between soil and air. Four pig carcasses (Sus scrofa domesticus L.) were placed on a soil surface to decompose naturally and their odour profile monitored over a period of two months. Corresponding control sites were also monitored to determine the VOC profile of the surrounding environment. Samples were collected from the soil below and the air (headspace) above the decomposed remains using sorbent tubes and analysed using gas chromatography-mass spectrometry. A total of 249 compounds were identified but only 58 compounds were common to both air and soil samples. This study has demonstrated that soil and air samples produce distinct subsets of VOCs that contribute to the overall decomposition odour. Sample collection from only one matrix will reduce the likelihood of detecting the complete spectrum of VOCs, which further confounds the issue of determining a complete and accurate decomposition odour profile. Confirmation of this profile will enhance the performance of cadaver-detection dogs that are tasked with detecting decomposition odour in both soil and air to locate victim remains.

  8. Chemistry of decomposition of freshwater wetland sedimentary organic material during ramped pyrolysis

    NASA Astrophysics Data System (ADS)

    Williams, E. K.; Rosenheim, B. E.

    2011-12-01

    Ramped pyrolysis methodology, such as that used in the programmed-temperature pyrolysis/combustion system (PTP/CS), improves radiocarbon analysis of geologic materials devoid of authigenic carbonate compounds and with low concentrations of extractable autochthonous organic molecules. The approach has improved sediment chronology in organic-rich sediments proximal to Antarctic ice shelves (Rosenheim et al., 2008) and constrained the carbon sequestration potential of suspended sediments in the lower Mississippi River (Roe et al., in review). Although ramped pyrolysis allows for separation of sedimentary organic material based upon relative reactivity, chemical information (i.e. chemical composition of pyrolysis products) is lost during the in-line combustion of pyrolysis products. A first order approximation of ramped pyrolysis/combustion system CO2 evolution, employing a simple Gaussian decomposition routine, has been useful (Rosenheim et al., 2008), but improvements may be possible. First, without prior compound-specific extractions, the molecular composition of sedimentary organic matter is unknown and/or unidentifiable. Second, even if determined as constituents of sedimentary organic material, many organic compounds have unknown or variable decomposition temperatures. Third, mixtures of organic compounds may result in significant chemistry within the pyrolysis reactor, prior to introduction of oxygen along the flow path. Gaussian decomposition of the reaction rate may be too simple to fully explain the combination of these factors. To relate both the radiocarbon age over different temperature intervals and the pyrolysis reaction thermograph (temperature (°C) vs. CO2 evolved (μmol)) obtained from the PTP/CS to the chemical composition of sedimentary organic material, we present a modeling framework based upon the ramped pyrolysis decomposition of simple mixtures of organic compounds (e.g. cellulose, lignin, plant fatty acids) often found in sedimentary organic material, to account for changes in thermograph shape. The decompositions will be compositionally verified by 13C NMR analysis of pyrolysis residues from interrupted reactions. This will allow for constraint of decomposition temperatures of individual compounds as well as chemical reactions between volatilized moieties in mixtures of these compounds. We will apply this framework with 13C NMR analysis of interrupted pyrolysis residues and radiocarbon data from PTP/CS analysis of sedimentary organic material from a freshwater marsh wetland in Barataria Bay, Louisiana. We expect to characterize the bulk chemical composition during pyrolysis as well as diagenetic changes with depth. Most importantly, we expect to constrain the potential and the limitations of this modeling framework for application to other depositional environments.
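
    The Gaussian decomposition of a thermograph mentioned above amounts to fitting a sum of Gaussian components to the CO2-evolution curve. A minimal sketch on synthetic data (the pool parameters and noise level are invented for illustration, not taken from the samples discussed here):

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussians(T, *p):
    """Sum of Gaussian components; p packs (amplitude, centre, width) triplets."""
    T = np.asarray(T)
    return sum(A * np.exp(-0.5 * ((T - mu) / s) ** 2)
               for A, mu, s in zip(p[0::3], p[1::3], p[2::3]))

# Synthetic thermograph: CO2 evolved (arbitrary units) vs. temperature (deg C) from two pools.
T = np.linspace(100, 900, 200)
rng = np.random.default_rng(0)
y = gaussians(T, 8.0, 330.0, 45.0, 5.0, 560.0, 70.0) + 0.1 * rng.normal(size=T.size)

p0 = [6, 300, 50, 4, 600, 60]          # rough initial guesses, one triplet per component
popt, _ = curve_fit(gaussians, T, y, p0=p0)
print(np.round(popt, 1))               # recovered (amplitude, centre, width) per component
```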

  9. Efficacy of Proton Pump Inhibitors for Patients with Duodenal Ulcers: A Pairwise and Network Meta-Analysis of Randomized Controlled Trials

    PubMed Central

    Hu, Zhan-Hong; Shi, Ai-Ming; Hu, Duan-Min; Bao, Jun-Jie

    2017-01-01

    Background/Aim: To compare the efficacy and tolerance of different proton pump inhibitors (PPIs) in different doses for patients with duodenal ulcers. Materials and Methods: An electronic database was searched to collect all randomized clinical trials (RCTs), and pairwise and network meta-analyses were performed. Results: A total of 24 RCTs involving 6188 patients were included. The network meta-analysis showed that there were no significant differences in the 4-week healing rate of duodenal ulcer treated with different PPI regimens except for pantoprazole 40 mg/d versus lansoprazole 15 mg/d [Relative risk (RR) = 3.57; 95% confidence interval (CI) = 1.36–10.31] and lansoprazole 30 mg/d versus lansoprazole 15 mg/d (RR = 2.45; 95% CI = 1.01–6.14). In comparison with H2 receptor antagonists (H2RA), pantoprazole 40 mg/d and lansoprazole 30 mg/d significantly increased the healing rate (RR = 2.96; 95% CI = 1.78–5.14 and RR = 2.04; 95% CI = 1.13–3.53, respectively). There was no significant difference in the rate of adverse events between the different regimens, including H2RA, over the 4-week follow-up. Conclusion: There was no significant difference in efficacy and tolerance between the ordinary doses of the different PPIs, with the exception of lansoprazole 15 mg/d. PMID:28139495
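
    As a worked illustration of how such pooled estimates are handled, the standard error of a relative risk can be recovered from its reported 95% CI on the log scale. The numbers below are the pantoprazole 40 mg/d versus lansoprazole 15 mg/d comparison quoted above; the calculation is a reader's cross-check, not part of the published analysis.

```python
import numpy as np
from scipy.stats import norm

# Reported comparison: RR = 3.57, 95% CI 1.36-10.31 (pantoprazole 40 mg/d vs lansoprazole 15 mg/d).
rr, lo, hi = 3.57, 1.36, 10.31
log_rr = np.log(rr)
se = (np.log(hi) - np.log(lo)) / (2 * 1.96)   # SE recovered from the CI on the log scale
z = log_rr / se
p = 2 * (1 - norm.cdf(abs(z)))
print(f"log RR = {log_rr:.2f}, SE = {se:.2f}, z = {z:.2f}, p ~ {p:.3f}")  # p ~ 0.014 < 0.05
```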

  10. Double Bounce Component in Cross-Polarimetric SAR from a New Scattering Target Decomposition

    NASA Astrophysics Data System (ADS)

    Hong, Sang-Hoon; Wdowinski, Shimon

    2013-08-01

    Common vegetation scattering theories assume that the Synthetic Aperture Radar (SAR) cross-polarization (cross-pol) signal represents solely volume scattering. We found this assumption incorrect based on SAR phase measurements acquired over the south Florida Everglades wetlands, indicating that the cross-pol radar signal often samples the water surface beneath the vegetation. Based on these new observations, we propose that the cross-pol measurement consists of both volume scattering and double bounce components. The simplest multi-bounce scattering mechanism that generates a cross-pol signal is scattering from rotated dihedrals. Thus, we use the rotated dihedral mechanism with a probability density function to revise some of the vegetation scattering theories and develop a three-component decomposition algorithm with single bounce, double bounce from both co-pol and cross-pol, and volume scattering components. We applied the new decomposition analysis to both urban and rural environments using Radarsat-2 quad-pol datasets. The decomposition of San Francisco's urban area shows higher double bounce scattering and reduced volume scattering compared to other common three-component decompositions. The decomposition of the rural Everglades area shows that the relations between volume and cross-pol double bounce scattering depend on the vegetation density. The new decomposition can be useful for better understanding vegetation scattering behavior over various surfaces and for estimating above-ground biomass using SAR observations.

  11. Aging-driven decomposition in zolpidem hemitartrate hemihydrate and the single-crystal structure of its decomposition products.

    PubMed

    Vega, Daniel R; Baggio, Ricardo; Roca, Mariana; Tombari, Dora

    2011-04-01

    The "aging-driven" decomposition of zolpidem hemitartrate hemihydrate (form A) has been followed by X-ray powder diffraction (XRPD), and the crystal and molecular structures of the decomposition products studied by single-crystal methods. The process is very similar to the "thermally driven" one, recently described in the literature for form E (Halasz and Dinnebier. 2010. J Pharm Sci 99(2): 871-874), resulting in a two-phase system: the neutral free base (common to both decomposition processes) and, in the present case, a novel zolpidem tartrate monohydrate, unique to the "aging-driven" decomposition. Our room-temperature single-crystal analysis gives, for the free base, results comparable to the high-temperature XRPD ones already reported by Halasz and Dinnebier: orthorhombic, Pcba, a = 9.6360(10) Å, b = 18.2690(5) Å, c = 18.4980(11) Å, and V = 3256.4(4) Å³. The previously unreported zolpidem tartrate monohydrate instead crystallizes in monoclinic P21, which, for comparison purposes, we treated in the nonstandard setting P1121 with a = 20.7582(9) Å, b = 15.2331(5) Å, c = 7.2420(2) Å, γ = 90.826(2)°, and V = 2289.73(14) Å³. The structure presents two complete moieties in the asymmetric unit (z = 4, z' = 2). The different phases obtained in both decompositions are readily explained, considering the diverse genesis of both processes. Copyright © 2010 Wiley-Liss, Inc.
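
    As a quick arithmetic cross-check of the monoclinic cell reported above (a reader's sketch, not part of the published refinement), the cell volume follows from V = a·b·c·sin γ:

```python
import numpy as np

# Cell parameters of the zolpidem tartrate monohydrate phase quoted above
# (a, b, c in angstroms, gamma in degrees; nonstandard setting P1121).
a, b, c, gamma = 20.7582, 15.2331, 7.2420, 90.826
V = a * b * c * np.sin(np.radians(gamma))
print(f"V = {V:.1f} A^3")   # ~2289.8 A^3, consistent with the reported 2289.73(14) A^3
```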

  12. Assessing the Relative Effects of Geographic Location and Soil Type on Microbial Communities Associated with Straw Decomposition

    PubMed Central

    Wang, Xiaoyue; Wang, Feng; Jiang, Yuji

    2013-01-01

    Decomposition of plant residues is largely mediated by soil-dwelling microorganisms whose activities are influenced by both climate conditions and properties of the soil. However, a comprehensive understanding of their relative importance remains elusive, mainly because traditional methods, such as soil incubation and environmental surveys, have a limited ability to differentiate between the combined effects of climate and soil. Here, we performed a large-scale reciprocal soil transplantation experiment, whereby microbial communities associated with straw decomposition were examined in three initially identical soils placed in parallel in three climate regions of China (red soil, Chao soil, and black soil, located in midsubtropical, warm-temperate, and cold-temperate zones). Maize straws buried in mesh bags were sampled at 0.5, 1, and 2 years after the burial and subjected to chemical, physical, and microbiological analyses, e.g., phospholipid fatty acid analysis for microbial abundance, and community-level physiological profiling and 16S rRNA gene denaturing gradient gel electrophoresis for functional and phylogenetic diversity, respectively. Results of aggregated boosted tree analysis show that location rather than soil is the primary determinant of the rate of straw decomposition and of the structure of the associated microbial communities. Principal component analysis indicates that the straw communities are primarily grouped by location at any of the three time points. In contrast, microbial communities in bulk soil remained closely related to one another for each soil. Together, our data suggest that climate (specifically, geographic location) has stronger effects than soil on straw decomposition; moreover, the succession of microbial communities in soil responds to climate change more slowly than that in straw residues. PMID:23524671

  13. Thermodynamics and Phase Behavior of Miscible Polymer Blends in the Presence of Supercritical Carbon Dioxide

    NASA Astrophysics Data System (ADS)

    Young, Nicholas Philip

    The design of environmentally benign polymer processing techniques is an area of growing interest, motivated by the desire to reduce the emission of volatile organic compounds. Recently, supercritical carbon dioxide (scCO2) has gained traction as a viable candidate to process polymers both as a solvent and as a diluent. The focus of this work was to elucidate the nature of the interactions between scCO2 and polymers in order to provide rational insight into the molecular interactions which result in the unexpected mixing thermodynamics in one such system. The work also provides insight into the nature of pairwise thermodynamic interactions in multicomponent polymer-polymer-diluent blends, and the effect of these interactions on the phase behavior of the mixture. In order to quantify the strength of interactions in the multicomponent system, the binary mixtures were characterized individually in addition to the ternary blend. Quantitative analysis was made tractable through the use of a model miscible polymer blend containing styrene-acrylonitrile copolymer (SAN) and poly(methyl methacrylate) (dPMMA), a mixture which has been considered for a variety of practical applications. In the case of both individual polymers, scCO2 is known to behave as a diluent, wherein the extent of polymer swelling depends on both temperature and pressure. The solubility of scCO2 in each polymer as a function of temperature and pressure was characterized elsewhere. The SAN-dPMMA blend clearly exhibited lower critical solution temperature behavior, forming homogeneous mixtures at low temperatures and phase separating at elevated temperature. These measurements allowed the determination of the Flory-Huggins interaction parameter chi23 for SAN (species 2) and dPMMA (species 3) as a function of temperature at ambient pressure, in the absence of scCO2 (species 1). Characterization of the phase behavior of the multicomponent (ternary) mixture was also carried out by small-angle neutron scattering (SANS). An in situ SANS environment was developed to allow measurement of blend miscibility in the presence of scCO2. The pressure-temperature phase behavior of the system could be mapped by approaching the point of phase separation by spinodal decomposition through pressure increases at constant temperature. For a roughly symmetric mixture of SAN and dPMMA, the temperature at which phase separation occurred could be decreased by over 125 °C. The extent to which the phase behavior of the multicomponent system could be tuned motivated further investigation into the interactions present within the homogeneous mixtures. Analysis of the SANS results for homogeneous mixtures was undertaken using a new multicomponent formalism of the random phase approximation theory. The scattering profiles obtained from the scCO2-SAN-dPMMA system could be predicted with reasonable success. The success of the theoretical predictions was facilitated by directly employing the interactions found in the binary experiments. Exploitation of the condition of homogeneity with respect to chemical potential allowed determination of interaction parameters for scCO2-SAN and scCO2-dPMMA within the multicomponent mixture (chi12 and chi13, respectively). Studying this system over a large range of the supercritical regime yielded insight into the nature of interactions in the system. Near the critical point of scCO2, chi12 and chi13 increase monotonically as a function of pressure. Conversely, at elevated temperature away from the critical point, the interaction parameters are found to go through a minimum as pressure increases. Analysis of the critical phenomenon associated with scCO2 suggests that the observed dependence of chi12 and chi13 on pressure is related to the magnitude of scCO2 density fluctuations and the proximity of the system to the so-called density fluctuation ridge. By tuning the system parameters of the multicomponent mixture, the phase behavior can be altered through the balance of pairwise interactions between the constituent species. The presence of scCO2 in the mixtures appears to eliminate the existence of the metastable state that epitomizes most polymer-polymer mixtures. Thus it is shown that the individual pairwise interactions in such multicomponent mixtures can greatly influence the resulting phase behavior, and knowledge of them can guide the design of improved functional materials with decreased environmental impacts.

  14. Thermal decomposition of ammonium hexachloroosmate.

    PubMed

    Asanova, T I; Kantor, I; Asanov, I P; Korenev, S V; Yusenko, K V

    2016-12-07

    Structural changes of (NH4)2[OsCl6] occurring during thermal decomposition in a reducing atmosphere have been studied in situ using combined energy-dispersive X-ray absorption spectroscopy (ED-XAFS) and powder X-ray diffraction (PXRD). According to PXRD, (NH4)2[OsCl6] transforms directly to metallic Os without the formation of any crystalline intermediates, but through a plateau where no reactions occur. Analysis of the XANES and EXAFS data by means of Multivariate Curve Resolution (MCR) shows that thermal decomposition proceeds through the formation of an amorphous intermediate {OsCl4}x with a possible polymeric structure. This intermediate, revealed here for the first time, was further analyzed to determine the local atomic structure around osmium. The thermal decomposition of hexachloroosmate is thus much more complex than previously thought, occurring in at least two steps that had not been observed before.

  15. Applications of singular value analysis and partial-step algorithm for nonlinear orbit determination

    NASA Technical Reports Server (NTRS)

    Ryne, Mark S.; Wang, Tseng-Chan

    1991-01-01

    An adaptive method in which cruise and nonlinear orbit determination problems can be solved using a single program is presented. It involves singular value decomposition augmented with an extended partial step algorithm. The extended partial step algorithm constrains the size of the correction to the spacecraft state and other solve-for parameters. The correction is controlled by an a priori covariance and a user-supplied bounds parameter. The extended partial step method is an extension of the update portion of the singular value decomposition algorithm. It thus preserves the numerical stability of the singular value decomposition method, while extending the region over which it converges. In linear cases, this method reduces to the singular value decomposition algorithm with the full rank solution. Two examples are presented to illustrate the method's utility.
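
    A minimal sketch of the idea (not the flight software): one SVD-based least-squares correction whose size is limited before it is applied, so that strongly nonlinear problems are approached in restrained steps. The `bound` argument below is an illustrative stand-in for the a priori covariance and user-supplied bounds parameter mentioned above.

```python
import numpy as np

def svd_partial_step(A, residual, bound):
    """One SVD least-squares correction for A @ dx ~= residual, with ||dx|| limited to `bound`."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    dx = Vt.T @ ((U.T @ residual) / s)      # full-rank SVD solution
    norm = np.linalg.norm(dx)
    if norm > bound:                        # take only a partial step toward the full correction
        dx *= bound / norm
    return dx

# Hypothetical usage: iterate bounded corrections on a mildly nonlinear model y = exp(H @ x).
rng = np.random.default_rng(0)
H = rng.normal(size=(20, 3))
x_true = np.array([0.3, -0.2, 0.1])
y = np.exp(H @ x_true)
x = np.zeros(3)
for _ in range(10):
    A = np.exp(H @ x)[:, None] * H          # Jacobian of exp(H @ x) with respect to x
    x = x + svd_partial_step(A, y - np.exp(H @ x), bound=0.1)
print(np.round(x, 3))                       # converges toward x_true in bounded steps
```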

  16. Solution of rocks and refractory minerals by acids at high temperatures and pressures. Determination of silica after decomposition with hydrofluoric acid

    USGS Publications Warehouse

    May, I.; Rowe, J.J.

    1965-01-01

    A modified Morey bomb was designed which contains a removable nichrome-cased 3.5-ml platinum crucible. This bomb is particularly useful for decompositions of refractory samples for micro- and semimicro-analysis. Temperatures of 400-450°C and pressures estimated to be as great as 6000 p.s.i. were maintained in the bomb for periods as long as 24 h. Complete decompositions of rocks, garnet, beryl, chrysoberyl, phenacite, sapphirine, and kyanite were obtained with hydrofluoric acid or a mixture of hydrofluoric and sulfuric acids; the decomposition of chrome refractory was made with hydrochloric acid. Aluminum-rich samples formed difficultly soluble aluminum fluoride precipitates. Because no volatilization losses occur, silica can be determined on sample solutions by a molybdenum-blue procedure using aluminum(III) to complex interfering fluoride. © 1965.

  17. Network meta-analysis: a technique to gather evidence from direct and indirect comparisons

    PubMed Central

    2017-01-01

    Systematic reviews and pairwise meta-analyses of randomized controlled trials, at the intersection of clinical medicine, epidemiology and statistics, are positioned at the top of the evidence-based practice hierarchy. They are important tools for informing drug approval, clinical protocols, guideline formulation and decision-making. However, this traditional technique only partially yields the information that clinicians, patients and policy-makers need to make informed decisions, since it usually compares only two interventions at a time. In practice, regardless of the clinical condition under evaluation, many interventions are usually available and few of them have been compared in head-to-head studies. This scenario precludes conclusions from being drawn about the profiles (e.g. efficacy and safety) of all interventions. The recent development and introduction of a new technique – usually referred to as network meta-analysis, indirect meta-analysis, multiple or mixed treatment comparisons – has allowed the estimation of metrics for all possible comparisons in the same model, simultaneously gathering direct and indirect evidence. Over recent years this statistical tool has matured as a technique, with models available for all types of raw data, producing different pooled effect measures, using both frequentist and Bayesian frameworks, with different software packages. However, the conduct, reporting and interpretation of network meta-analysis still pose multiple challenges that should be carefully considered, especially because this technique inherits all assumptions from pairwise meta-analysis but with increased complexity. Thus, we aim to provide a basic explanation of how to conduct a network meta-analysis, highlighting its risks and benefits for evidence-based practice, including information on the evolution of the statistical methods, their assumptions and the steps for performing the analysis. PMID:28503228
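
    The indirect-evidence idea at the heart of network meta-analysis can be illustrated with the classic adjusted indirect (Bucher) comparison through a common comparator. The numbers below are hypothetical, and the full network models described above generalize this calculation to all comparisons in a single model.

```python
import numpy as np
from scipy.stats import norm

def log_rr_and_se(rr, lo, hi):
    """Log relative risk and its standard error recovered from a reported 95% CI."""
    return np.log(rr), (np.log(hi) - np.log(lo)) / (2 * 1.96)

# Hypothetical direct estimates against a common comparator B:
d_ab, se_ab = log_rr_and_se(1.50, 1.10, 2.05)   # A versus B
d_cb, se_cb = log_rr_and_se(1.20, 0.90, 1.60)   # C versus B
# Adjusted indirect comparison of A versus C: subtract on the log scale, add variances.
d_ac = d_ab - d_cb
se_ac = np.sqrt(se_ab**2 + se_cb**2)
ci = np.exp(d_ac + np.array([-1.96, 1.96]) * se_ac)
p = 2 * (1 - norm.cdf(abs(d_ac) / se_ac))
print(f"indirect RR(A vs C) = {np.exp(d_ac):.2f}, 95% CI {ci[0]:.2f}-{ci[1]:.2f}, p = {p:.2f}")
```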

  18. Rapid characterization of lithium ion battery electrolytes and thermal aging products by low-temperature plasma ambient ionization high-resolution mass spectrometry.

    PubMed

    Vortmann, Britta; Nowak, Sascha; Engelhard, Carsten

    2013-03-19

    Lithium ion batteries (LIBs) are key components for portable electronic devices that are used around the world. However, thermal decomposition products in the battery reduce its lifetime, and the decomposition processes are still not understood. In this study, a rapid method for in situ analysis and reaction monitoring in LIB electrolytes is presented, based for the first time on high-resolution mass spectrometry (HR-MS) with low-temperature plasma probe (LTP) ambient desorption/ionization. This proof-of-principle study demonstrates the capabilities of ambient mass spectrometry in battery research. LTP-HR-MS is ideally suited for qualitative analysis in the ambient environment because it allows direct sample analysis independent of the sample size, geometry, and structure. Further, it is environmentally friendly because it eliminates the need for organic solvents that are typically used in separation techniques coupled to mass spectrometry. Accurate mass measurements were used to identify the time-/condition-dependent formation of electrolyte decomposition compounds. A LIB model electrolyte containing ethylene carbonate and dimethyl carbonate was analyzed before and after controlled thermal stress and over the course of several weeks. Major decomposition products identified include difluorophosphoric acid, monofluorophosphoric acid methyl ester, monofluorophosphoric acid dimethyl ester, and hexafluorophosphate. Solvents (i.e., dimethyl carbonate) were partly consumed via an esterification pathway. LTP-HR-MS is considered to be an attractive method for fundamental LIB studies.

  19. Carbon dioxide emissions from the electricity sector in major countries: a decomposition analysis.

    PubMed

    Li, Xiangzheng; Liao, Hua; Du, Yun-Fei; Wang, Ce; Wang, Jin-Wei; Liu, Yanan

    2018-03-01

    The electric power sector is one of the primary sources of CO2 emissions. Analyzing the influential factors that result in CO2 emissions from the power sector would provide valuable information for reducing the world's CO2 emissions. Herein, we applied the Divisia decomposition method to analyze the influential factors for CO2 emissions from the power sector in 11 countries, which accounted for 67% of the world's emissions from 1990 to 2013. We decompose the influences on CO2 emissions into seven factors: the emission coefficient, energy intensity, the share of electricity generation, the share of thermal power generation, electricity intensity, economic activity, and population. The decomposition analysis results show that economic activity, population, and the emission coefficient play positive roles in increasing CO2 emissions, and their contribution rates are 119, 23.9, and 0.5%, respectively. Energy intensity, electricity intensity, the share of electricity generation, and the share of thermal power generation curb CO2 emissions, and their contribution rates are 17.2, 15.7, 7.7, and 2.8%, respectively. The decomposition analysis for each country shows that economic activity and population are the major factors responsible for increasing CO2 emissions from the power sector. However, in developed countries the other factors can offset the growth in CO2 emissions due to economic activity.
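
    The additive (LMDI) form of the Divisia decomposition used in studies of this kind attributes the change in emissions to each factor through a logarithmic-mean weight, and the factor contributions sum exactly to the total change. Below is a minimal sketch with a hypothetical three-factor identity; the study above uses seven factors in the same way.

```python
import numpy as np

def logmean(a, b):
    """Logarithmic mean L(a, b), the weight used by the additive LMDI decomposition."""
    return a if np.isclose(a, b) else (a - b) / (np.log(a) - np.log(b))

# Hypothetical identity C = (emission coefficient) x (energy intensity) x (activity).
x0 = {"emission_coeff": 2.0, "energy_intensity": 5.0, "activity": 100.0}   # base year
xT = {"emission_coeff": 1.9, "energy_intensity": 4.5, "activity": 140.0}   # end year
C0, CT = np.prod(list(x0.values())), np.prod(list(xT.values()))

effects = {k: logmean(CT, C0) * np.log(xT[k] / x0[k]) for k in x0}
print({k: round(v, 1) for k, v in effects.items()})
print("sum of factor effects:", round(sum(effects.values()), 1), "| actual change:", round(CT - C0, 1))
```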

  20. Flexible Mediation Analysis With Multiple Mediators.

    PubMed

    Steen, Johan; Loeys, Tom; Moerkerke, Beatrijs; Vansteelandt, Stijn

    2017-07-15

    The advent of counterfactual-based mediation analysis has triggered enormous progress on how, and under what assumptions, one may disentangle path-specific effects upon combining arbitrary (possibly nonlinear) models for mediator and outcome. However, current developments have largely focused on single mediators because required identification assumptions prohibit simple extensions to settings with multiple mediators that may depend on one another. In this article, we propose a procedure for obtaining fine-grained decompositions that may still be recovered from observed data in such complex settings. We first show that existing analytical approaches target specific instances of a more general set of decompositions and may therefore fail to provide a comprehensive assessment of the processes that underpin cause-effect relationships between exposure and outcome. We then outline conditions for obtaining the remaining set of decompositions. Because the number of targeted decompositions increases rapidly with the number of mediators, we introduce natural effects models along with estimation methods that allow for flexible and parsimonious modeling. Our procedure can easily be implemented using off-the-shelf software and is illustrated using a reanalysis of the World Health Organization's Large Analysis and Review of European Housing and Health Status (WHO-LARES) study on the effect of mold exposure on mental health (2002-2003). © The Author(s) 2017. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  1. A two-stage linear discriminant analysis via QR-decomposition.

    PubMed

    Ye, Jieping; Li, Qi

    2005-06-01

    Linear Discriminant Analysis (LDA) is a well-known method for feature extraction and dimension reduction. It has been used widely in many applications involving high-dimensional data, such as image and text classification. An intrinsic limitation of classical LDA is the so-called singularity problem; that is, it fails when all scatter matrices are singular. Many LDA extensions were proposed in the past to overcome the singularity problem. Among these extensions, PCA+LDA, a two-stage method, received relatively more attention. In PCA+LDA, the LDA stage is preceded by an intermediate dimension reduction stage using Principal Component Analysis (PCA). Most previous LDA extensions are computationally expensive, and not scalable, due to the use of Singular Value Decomposition or Generalized Singular Value Decomposition. In this paper, we propose a two-stage LDA method, namely LDA/QR, which aims to overcome the singularity problem of classical LDA, while achieving efficiency and scalability simultaneously. The key difference between LDA/QR and PCA+LDA lies in the first stage, where LDA/QR applies QR decomposition to a small matrix involving the class centroids, while PCA+LDA applies PCA to the total scatter matrix involving all training data points. We further justify the proposed algorithm by showing the relationship between LDA/QR and previous LDA methods. Extensive experiments on face images and text documents are presented to show the effectiveness of the proposed algorithm.
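
    A compact sketch of the two-stage idea (illustrative, not the authors' code): stage one takes the QR decomposition of the small class-centroid matrix, stage two runs classical LDA inside the resulting low-dimensional subspace.

```python
import numpy as np

def lda_qr(X, y):
    """Two-stage LDA/QR sketch: QR of the centroid matrix, then LDA in the reduced space."""
    classes = np.unique(y)
    centroids = np.stack([X[y == c].mean(axis=0) for c in classes], axis=1)   # shape (d, k)
    Q, _ = np.linalg.qr(centroids)            # stage 1: orthonormal basis of the centroid span
    Z = X @ Q                                 # project the data into the k-dimensional subspace
    mu = Z.mean(axis=0)
    Sw = sum(np.cov(Z[y == c].T, bias=True) * (y == c).sum() for c in classes)   # within-class scatter
    Sb = sum((y == c).sum() * np.outer(Z[y == c].mean(0) - mu, Z[y == c].mean(0) - mu)
             for c in classes)                                                    # between-class scatter
    evals, evecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)   # stage 2: classical LDA eigenproblem
    order = np.argsort(evals.real)[::-1]
    return Q @ evecs[:, order].real           # (d, k) transformation expressed in the original space

# Usage on random data with three classes.
rng = np.random.default_rng(1)
X = rng.normal(size=(60, 10))
y = np.repeat([0, 1, 2], 20)
print(lda_qr(X, y).shape)                     # (10, 3)
```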

  2. Document Level Assessment of Document Retrieval Systems in a Pairwise System Evaluation

    ERIC Educational Resources Information Center

    Rajagopal, Prabha; Ravana, Sri Devi

    2017-01-01

    Introduction: The use of averaged topic-level scores can result in the loss of valuable data and can cause misinterpretation of the effectiveness of system performance. This study aims to use the scores of each document to evaluate document retrieval systems in a pairwise system evaluation. Method: The chosen evaluation metrics are document-level…

  3. Impaired Discrimination Learning in Mice Lacking the NMDA Receptor NR2A Subunit

    ERIC Educational Resources Information Center

    Brigman, Jonathan L.; Feyder, Michael; Saksida, Lisa M.; Bussey, Timothy J.; Mishina, Masayoshi; Holmes, Andrew

    2008-01-01

    N-Methyl-D-aspartate receptors (NMDARs) mediate certain forms of synaptic plasticity and learning. We used a touchscreen system to assess NR2A subunit knockout mice (KO) for (1) pairwise visual discrimination and reversal learning and (2) acquisition and extinction of an instrumental response requiring no pairwise discrimination. NR2A KO mice…

  4. Pairwise-additive hydrophobic effect for alkanes in water

    PubMed Central

    Wu, Jianzhong; Prausnitz, John M.

    2008-01-01

    Pairwise additivity of the hydrophobic effect is indicated by reliable experimental Henry's constants for a large number of linear and branched low-molecular-weight alkanes in water. Pairwise additivity suggests that the hydrophobic effect is primarily a local phenomenon and that the hydrophobic interaction may be represented by a semiempirical force field. By representing the hydrophobic potential between two methane molecules as a linear function of the overlap volume of the hydration layers, we find that the contact value of the hydrophobic potential (−0.72 kcal/mol) is smaller than that from quantum mechanics simulations (−2.8 kcal/mol) but is close to that from classical molecular dynamics (−0.5∼−0.9 kcal/mol). PMID:18599448
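
    The linear overlap-volume parameterization described above can be sketched with the standard sphere-sphere lens-volume formula. The hydration-shell radii below are illustrative placeholders, not values from the paper; only the -0.72 kcal/mol contact value is taken from the abstract.

```python
import numpy as np

def lens_volume(r1, r2, d):
    """Overlap (lens) volume of two spheres of radii r1 and r2 whose centres are d apart."""
    if d >= r1 + r2:
        return 0.0
    if d <= abs(r1 - r2):
        return 4.0 / 3.0 * np.pi * min(r1, r2) ** 3
    return (np.pi * (r1 + r2 - d) ** 2
            * (d ** 2 + 2 * d * (r1 + r2) - 3 * (r1 - r2) ** 2) / (12 * d))

R = 0.33 + 0.30          # hydration-shell radius in nm (illustrative: core + water layer)
d_contact = 2 * 0.33     # centre-centre distance at methane-methane contact, nm (illustrative)
w_contact = -0.72        # contact value of the hydrophobic potential, kcal/mol (from the abstract)
k = w_contact / lens_volume(R, R, d_contact)   # potential assumed linear in the overlap volume

for d in (0.66, 0.80, 1.00, 1.20):
    print(f"d = {d:.2f} nm   w(d) = {k * lens_volume(R, R, d):+.2f} kcal/mol")
```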

  5. Modular analysis of biological networks.

    PubMed

    Kaltenbach, Hans-Michael; Stelling, Jörg

    2012-01-01

    The analysis of complex biological networks has traditionally relied on decomposition into smaller, semi-autonomous units such as individual signaling pathways. With the increased scope of systems biology (models), rational approaches to modularization have become an important topic. With increasing acceptance of de facto modularity in biology, widely different definitions of what constitutes a module have sparked controversies. Here, we therefore review prominent classes of modular approaches based on formal network representations. Despite some promising research directions, several important theoretical challenges remain open on the way to formal, function-centered modular decompositions for dynamic biological networks.

  6. What Role Does Photodegradation Play in Influencing Plant Litter Decomposition and Biogeochemistry in Coastal Marsh Ecosystems?

    NASA Astrophysics Data System (ADS)

    Tobler, M.; White, D. A.; Abbene, M. L.; Burst, S. L.; McCulley, R. L.; Barnes, P. W.

    2016-02-01

    Decomposition is a crucial component of global biogeochemical cycles that influences the fate and residence time of carbon and nutrients in organic matter pools, yet the processes controlling litter decomposition in coastal marshes are not fully understood. We conducted a series of field studies to examine what role photodegradation, a process driven in part by solar UV radiation (280-400 nm), plays in the decomposition of the standing dead litter of Sagittaria lancifolia and Spartina patens, two common species in marshes of intermediate salinity in southern Louisiana, USA. Results indicate that the exclusion of solar UV significantly altered litter mass loss, but the magnitude and direction of these effects varied depending on species, height of the litter above the water surface and the stage of decomposition. Over one growing season, S. lancifolia litter exposed to ambient solar UV had significantly less mass loss compared to litter exposed to attenuated UV over the initial phase of decomposition (0-5 months; ANOVA P=0.004) then treatment effects switched in the latter phase of the study (5-7 months; ANOVA P<0.001). Similar results were found in S. patens over an 11-month period. UV exposure reduced total C, N and lignin by 24-33% in remaining tissue with treatment differences most pronounced in S. patens. Phospholipid fatty-acid analysis (PFLA) indicated that UV also significantly altered microbial (bacterial) biomass and bacteria:fungi ratios of decomposing litter. These findings, and others, indicate that solar UV can have positive and negative net effects on litter decomposition in marsh plants with inhibition of biotic (microbial) processes occurring early in the decomposition process then shifting to enhancement of decomposition via abiotic (photodegradation) processes later in decomposition. Photodegradation of standing litter represents a potentially significant pathway of C and N loss from these coastal wetland ecosystems.

  7. Decomposition of the Total Effect in the Presence of Multiple Mediators and Interactions.

    PubMed

    Bellavia, Andrea; Valeri, Linda

    2018-06-01

    Mediation analysis allows decomposing a total effect into a direct effect of the exposure on the outcome and an indirect effect operating through a number of possible hypothesized pathways. Recent studies have provided formal definitions of direct and indirect effects when multiple mediators are of interest and have described parametric and semiparametric methods for their estimation. Investigating direct and indirect effects with multiple mediators, however, can be challenging in the presence of multiple exposure-mediator and mediator-mediator interactions. In this paper we derive a decomposition of the total effect that unifies mediation and interaction when multiple mediators are present. We illustrate the properties of the proposed framework in a secondary analysis of a pragmatic trial for the treatment of schizophrenia. The decomposition is employed to investigate the interplay of side effects and psychiatric symptoms in explaining the effect of antipsychotic medication on quality of life in schizophrenia patients. Our result offers a valuable tool to identify the proportions of total effect due to mediation and interaction when more than one mediator is present, providing the finest decomposition of the total effect that unifies multiple mediators and interactions.
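
    For reference, the single-mediator identity that such decompositions generalize can be written in counterfactual notation; this is a textbook sketch of the baseline case, not the paper's multiple-mediator result. Here Y(a, m) denotes the outcome under exposure a with the mediator set to m, and M(a) the mediator value under exposure a.

```latex
\begin{align*}
  \mathrm{TE} &= E\big[Y\big(1, M(1)\big)\big] - E\big[Y\big(0, M(0)\big)\big] \\
              &= \underbrace{E\big[Y\big(1, M(0)\big)\big] - E\big[Y\big(0, M(0)\big)\big]}_{\text{natural direct effect}}
               \;+\; \underbrace{E\big[Y\big(1, M(1)\big)\big] - E\big[Y\big(1, M(0)\big)\big]}_{\text{natural indirect effect}}
\end{align*}
```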

  8. Dissolved organic matter release in overlying water and bacterial community shifts in biofilm during the decomposition of Myriophyllum verticillatum.

    PubMed

    Zhang, Lisha; Zhang, Songhe; Lv, Xiaoyang; Qiu, Zheng; Zhang, Ziqiu; Yan, Liying

    2018-08-15

    This study investigated the alterations in biomass, nutrient and dissolved organic matter concentrations in overlying water and analysed the bacterial 16S rRNA genes in biofilms attached to plant residues during the decomposition of Myriophyllum verticillatum. The results of the 55-day decomposition experiment show that the plant decay process can be well described by an exponential model, with an average decomposition rate of 0.037 d-1. Total organic carbon, total nitrogen, and organic nitrogen concentrations increased significantly in overlying water during decomposition compared to the control within 35 d. Results from excitation-emission matrix parallel factor analysis showed that humic acid-like and tyrosine-like substances might originate from plant degradation processes. Tyrosine-like substances showed a clear correlation with organic nitrogen and total nitrogen (p<0.01). Decomposition rates were positively related to pH, total organic carbon, oxidation-reduction potential and dissolved oxygen but negatively related to temperature in the overlying water. Microbial densities on plant residues increased as decomposition proceeded. The most dominant phylum was Bacteroidetes (>46%) at 7 d, Chlorobi (20%-44%) or Proteobacteria (25%-34%) at 21 d, and Chlorobi (>40%) at 55 d. Among the microbes attached to plant residues, sugar- and polysaccharide-degrading genera including Bacteroides, Blvii28, Fibrobacter, and Treponema dominated at 7 d, while Chlorobaculum, Rhodobacter, Methanobacterium, Thiobaca, Methanospirillum and Methanosarcina dominated at 21 d and 55 d. These results provide insight into dissolved organic matter release and bacterial community shifts during the decomposition of submerged macrophytes. Copyright © 2018 Elsevier B.V. All rights reserved.
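
    With the average rate constant reported above, the exponential decay model implies the following half-life and residual mass fraction at the end of the experiment (a back-of-the-envelope sketch based only on the quoted k):

```latex
\begin{align*}
  M_t &= M_0\, e^{-kt}, \qquad k = 0.037\ \mathrm{d^{-1}} \\
  t_{1/2} &= \frac{\ln 2}{k} \approx 18.7\ \mathrm{d}, \qquad
  \frac{M_{55}}{M_0} = e^{-0.037 \times 55} \approx 0.13
\end{align*}
```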

  9. Study on the decomposition of trace benzene over V2O5-WO3 ...

    EPA Pesticide Factsheets

    Commercial and laboratory-prepared V2O5–WO3/TiO2-based catalysts with different compositions were tested for catalytic decomposition of chlorobenzene (ClBz) in simulated flue gas. Resonance enhanced multiphoton ionization-time of flight mass spectrometry (REMPI-TOFMS) was employed to measure real-time, trace concentrations of ClBz contained in the flue gas before and after the catalyst. The effects of various parameters, including vanadium content of the catalyst, the catalyst support, as well as the reaction temperature on decomposition of ClBz were investigated. The results showed that the ClBz decomposition efficiency was significantly enhanced when nano-TiO2 instead of conventional TiO2 was used as the catalyst support. No promotion effects were found in the ClBz decomposition process when the catalysts were wet-impregnated with CuO and CeO2. Tests with different concentrations (1,000, 500, and 100 ppb) of ClBz showed that ClBz-decomposition efficiency decreased with increasing concentration, unless active sites were plentiful. A comparison between ClBz and benzene decomposition on the V2O5–WO3/TiO2-based catalyst and the relative kinetics analysis showed that two different active sites were likely involved in the decomposition mechanism and the V=O and V-O-Ti groups may only work for the degradation of the phenyl group and the benzene ring rather than the C-Cl bond. V2O5-WO3/TiO2 based catalysts, that have been used for destruction of a wide variet

  10. Thermal properties of Bentonite Modified with 3-aminopropyltrimethoxysilane

    NASA Astrophysics Data System (ADS)

    Pramono, E.; Pratiwi, W.; Wahyuningrum, D.; Radiman, C. L.

    2018-03-01

    Chemical modifications of Bentonite (BNT) clay have been carried out using 3-aminopropyltrimethoxysilane (APS) in various solvent media. The degradation properties of the products (BNTAPS) were characterized by thermogravimetric analysis (TGA). Samples were heated from 30 to 700°C at a heating rate of 10°C/min, and the total amount of grafted silane was determined from the weight loss between 200 and 600°C. The TGA thermograms showed three main decomposition regions, attributed to the elimination of physically adsorbed water, the decomposition of silane and the dehydroxylation of Bentonite. A high weight loss attributed to the thermal decomposition of silane was observed between 200 and 550°C. Quantitative analysis of the grafted silane showed higher silane loading when a solvent with high surface energy was used, indicating that the type of solvent affects the interaction and adsorption of APS within the BNT platelets.

  11. Gene ontology analysis of pairwise genetic associations in two genome-wide studies of sporadic ALS.

    PubMed

    Kim, Nora Chung; Andrews, Peter C; Asselbergs, Folkert W; Frost, H Robert; Williams, Scott M; Harris, Brent T; Read, Cynthia; Askland, Kathleen D; Moore, Jason H

    2012-07-28

    It is increasingly clear that common human diseases have a complex genetic architecture characterized by both additive and nonadditive genetic effects. The goal of the present study was to determine whether patterns of both additive and nonadditive genetic associations aggregate in specific functional groups as defined by the Gene Ontology (GO). We first estimated all pairwise additive and nonadditive genetic effects using the multifactor dimensionality reduction (MDR) method that makes few assumptions about the underlying genetic model. Statistical significance was evaluated using permutation testing in two genome-wide association studies of ALS. The detection data consisted of 276 subjects with ALS and 271 healthy controls while the replication data consisted of 221 subjects with ALS and 211 healthy controls. Both studies included genotypes from approximately 550,000 single-nucleotide polymorphisms (SNPs). Each SNP was mapped to a gene if it was within 500 kb of the start or end. Each SNP was assigned a p-value based on its strongest joint effect with the other SNPs. We then used the Exploratory Visual Analysis (EVA) method and software to assign a p-value to each gene based on the overabundance of significant SNPs at the α = 0.05 level in the gene. We also used EVA to assign p-values to each GO group based on the overabundance of significant genes at the α = 0.05 level. A GO category was determined to replicate if that category was significant at the α = 0.05 level in both studies. We found two GO categories that replicated in both studies. The first, 'Regulation of Cellular Component Organization and Biogenesis', a GO Biological Process, had p-values of 0.010 and 0.014 in the detection and replication studies, respectively. The second, 'Actin Cytoskeleton', a GO Cellular Component, had p-values of 0.040 and 0.046 in the detection and replication studies, respectively. Pathway analysis of pairwise genetic associations in two GWAS of sporadic ALS revealed a set of genes involved in cellular component organization and actin cytoskeleton, more specifically, that were not reported by prior GWAS. However, prior biological studies have implicated actin cytoskeleton in ALS and other motor neuron diseases. This study supports the idea that pathway-level analysis of GWAS data may discover important associations not revealed using conventional one-SNP-at-a-time approaches.
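
    The gene-level "overabundance of significant SNPs" idea can be sketched with a simple binomial test: under the null, each SNP in a gene is significant with probability alpha = 0.05, so a one-sided p-value follows from the count of significant SNPs. EVA's exact statistic may differ; the counts below are hypothetical.

```python
from scipy.stats import binomtest

# Hypothetical gene: 120 SNPs mapped to it, of which 12 are significant at alpha = 0.05.
n_snps, n_sig, alpha = 120, 12, 0.05
result = binomtest(n_sig, n_snps, p=alpha, alternative="greater")
print(f"gene-level overabundance p-value = {result.pvalue:.4f}")
```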

  12. Effect of preliminary thermal treatment on decomposition kinetics of austenite in low-alloyed pipe steel in intercritical temperature interval

    NASA Astrophysics Data System (ADS)

    Makovetskii, A. N.; Tabatchikova, T. I.; Yakovleva, I. L.; Tereshchenko, N. A.; Mirzaev, D. A.

    2013-06-01

    The decomposition kinetics of austenite that appears in the 13KhFA low-alloyed pipe steel upon heating the samples in an intercritical temperature interval (ICI) and exposure for 5 or 30 min has been studied by the method of high-speed dilatometry. The results of dilatometry are supplemented by the microstructure analysis. Thermokinetic diagrams of the decomposition of the γ phase are represented. The conclusion has been drawn that an increase in the duration of exposure in the intercritical interval leads to a significant increase in the stability of the γ phase.

  13. Relation between SM-covers and SM-decompositions of Petri nets

    NASA Astrophysics Data System (ADS)

    Karatkevich, Andrei; Wiśniewski, Remigiusz

    2015-12-01

    The task of finding, for a given Petri net, a set of sequential components that together represent the behavior of the net arises often in the formal analysis of Petri nets and in applications of Petri nets to logical control. This task comes in two variants: obtaining a Petri net cover or obtaining a decomposition. A Petri net cover selects a set of subnets of the given net, whereas the sequential nets forming a decomposition may have additional places which do not belong to the decomposed net. The paper discusses the differences and relations between the two tasks and their results.

  14. Mössbauer study on the thermal decomposition of potassium tris (oxalato) ferrate(III) trihydrate and bis (oxalato) ferrate(II) dihydrate

    NASA Astrophysics Data System (ADS)

    Ladriere, J.

    1992-04-01

    The thermal decompositions of K3Fe(ox)3·3H2O and K2Fe(ox)2·2H2O in nitrogen have been studied using Mössbauer spectroscopy, X-ray diffraction and thermal analysis methods in order to determine the nature of the solid residues obtained after each stage of decomposition. In particular, after dehydration at 113°C, the ferric complex is reduced to a ferrous compound, with a quadrupole splitting of 3.89 mm/s, which corresponds to the anhydrous form of K2Fe(ox)2·2H2O.

  15. Direct Growth of CuO Nanorods on Graphitic Carbon Nitride with Synergistic Effect on Thermal Decomposition of Ammonium Perchlorate.

    PubMed

    Tan, Linghua; Xu, Jianhua; Li, Shiying; Li, Dongnan; Dai, Yuming; Kou, Bo; Chen, Yu

    2017-05-02

    Novel graphitic carbon nitride/CuO (g-C₃N₄/CuO) nanocomposite was synthesized through a facile precipitation method. Due to the strong ion-dipole interaction between copper ions and nitrogen atoms of g-C₃N₄, CuO nanorods (length 200-300 nm, diameter 5-10 nm) were directly grown on g-C₃N₄, forming a g-C₃N₄/CuO nanocomposite, which was confirmed via X-ray diffraction (XRD), transmission electron microscopy (TEM), field emission scanning electron microscopy (FESEM), and X-ray photoelectron spectroscopy (XPS). Finally, thermal decomposition of ammonium perchlorate (AP) in the absence and presence of the prepared g-C₃N₄/CuO nanocomposite was examined by differential thermal analysis (DTA), and thermal gravimetric analysis (TGA). The g-C₃N₄/CuO nanocomposite showed promising catalytic effects for the thermal decomposition of AP. Upon addition of 2 wt % nanocomposite with the best catalytic performance (g-C₃N₄/20 wt % CuO), the decomposition temperature of AP was decreased by up to 105.5 °C and only one decomposition step was found instead of the two steps commonly reported in other examples, demonstrating the synergistic catalytic activity of the as-synthesized nanocomposite. This study demonstrated a successful example regarding the direct growth of metal oxide on g-C₃N₄ by ion-dipole interaction between metallic ions, and the lone pair electrons on nitrogen atoms, which could provide a novel strategy for the preparation of g-C₃N₄-based nanocomposite.

  16. Partial differential equation-based approach for empirical mode decomposition: application on image analysis.

    PubMed

    Niang, Oumar; Thioune, Abdoulaye; El Gueirea, Mouhamed Cheikh; Deléchelle, Eric; Lemoine, Jacques

    2012-09-01

    The major problem with the empirical mode decomposition (EMD) algorithm is its lack of a theoretical framework. So, it is difficult to characterize and evaluate this approach. In this paper, we propose, in the 2-D case, the use of an alternative implementation to the algorithmic definition of the so-called "sifting process" used in the original Huang's EMD method. This approach, especially based on partial differential equations (PDEs), was presented by Niang in previous works, in 2005 and 2007, and relies on a nonlinear diffusion-based filtering process to solve the mean envelope estimation problem. In the 1-D case, the efficiency of the PDE-based method, compared to the original EMD algorithmic version, was also illustrated in a recent paper. Recently, several 2-D extensions of the EMD method have been proposed. Despite some effort, 2-D versions for EMD appear poorly performing and are very time consuming. So in this paper, an extension to the 2-D space of the PDE-based approach is extensively described. This approach has been applied in cases of both signal and image decomposition. The obtained results confirm the usefulness of the new PDE-based sifting process for the decomposition of various kinds of data. Some results have been provided in the case of image decomposition. The effectiveness of the approach encourages its use in a number of signal and image applications such as denoising, detrending, or texture analysis.
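
    For readers unfamiliar with sifting, the algorithmic step that the PDE formulation replaces can be sketched in a few lines: subtract the mean of the upper and lower extremal envelopes from the signal. This 1-D toy ignores boundary handling and stopping criteria.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema

def sift_once(t, x):
    """One algorithmic sifting step: remove the mean of the cubic-spline extremal envelopes."""
    imax = argrelextrema(x, np.greater)[0]
    imin = argrelextrema(x, np.less)[0]
    upper = CubicSpline(t[imax], x[imax])(t)   # envelope through the local maxima
    lower = CubicSpline(t[imin], x[imin])(t)   # envelope through the local minima
    return x - 0.5 * (upper + lower)

t = np.linspace(0.0, 1.0, 1000)
x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)
h = sift_once(t, x)          # first candidate intrinsic mode function (the fast oscillation)
print(h.shape)
```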

  17. Improving pairwise comparison of protein sequences with domain co-occurrence

    PubMed Central

    Gascuel, Olivier

    2018-01-01

    Comparing and aligning protein sequences is an essential task in bioinformatics. More specifically, local alignment tools like BLAST are widely used for identifying conserved protein sub-sequences, which likely correspond to protein domains or functional motifs. However, to limit the number of false positives, these tools are used with stringent sequence-similarity thresholds and hence can miss several hits, especially for species that are phylogenetically distant from reference organisms. A solution to this problem is to integrate additional contextual information into the procedure. Here, we propose to use domain co-occurrence to increase the sensitivity of pairwise sequence comparisons. Domain co-occurrence is a strong feature of proteins, since most protein domains tend to appear with a limited number of other domains on the same protein. We propose a method to take this information into account in a typical BLAST analysis and to construct new domain families on the basis of these results. We used Plasmodium falciparum as a case study to evaluate our method. The experimental findings showed a 14% increase in the number of significant BLAST hits and a 25% increase in the proteome area that can be covered with a domain. Our method identified 2240 new domains to which, in most cases, no model of the Pfam database could be linked. Moreover, our study of the quality of the new domains, in terms of alignment and physicochemical properties, shows that they are close to standard Pfam domains. Source code of the proposed approach and supplementary data are available at: https://gite.lirmm.fr/menichelli/pairwise-comparison-with-cooccurrence PMID:29293498

  18. Constraints on the optical depth of galaxy groups and clusters

    DOE PAGES

    Flender, Samuel; Nagai, Daisuke; McDonald, Michael

    2017-03-10

    Here, future data from galaxy redshift surveys, combined with high-resolution maps of the cosmic microwave background, will enable measurements of the pairwise kinematic Sunyaev–Zel'dovich (kSZ) signal with unprecedented statistical significance. This signal probes the matter-velocity correlation function, scaled by the average optical depth (τ) of the galaxy groups and clusters in the sample, and is thus of fundamental importance for cosmology. However, in order to translate pairwise kSZ measurements into cosmological constraints, external constraints on τ are necessary. In this work, we present a new model for the intracluster medium, which takes into account star formation, feedback, non-thermal pressure, and gas cooling. Our semi-analytic model is computationally efficient and can reproduce results of recent hydrodynamical simulations of galaxy cluster formation. We calibrate the free parameters in the model using recent X-ray measurements of gas density profiles of clusters, and gas masses of groups and clusters. Our observationally calibrated model predicts the average $\tau_{500}$ (i.e., the integrated τ within a disk of size $R_{500}$) to better than 6% modeling uncertainty (at the 95% confidence level). If the remaining astrophysical uncertainties and X-ray selection effects can be better understood, our model for the optical depth should break the degeneracy between optical depth and cluster velocity in the analysis of future pairwise kSZ measurements and improve cosmological constraints from the combination of upcoming galaxy and CMB surveys, including constraints on the nature of dark energy, modified gravity, and neutrino mass.
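    The pairwise kSZ statistic that such τ constraints feed into can be written as a weighted sum of temperature differences over cluster pairs. Below is a minimal sketch of one commonly used density-weighted form of the estimator, p̂(r) = -Σ_pairs (T_i - T_j) c_ij / Σ_pairs c_ij², where c_ij projects the pair separation onto the mean line of sight; the binning, toy positions, and temperatures are placeholders, and this is not the analysis pipeline of the paper.

```python
import numpy as np

def pairwise_ksz(T, pos, r_edges):
    """Density-weighted pairwise kSZ estimator (a common form of the statistic).

    T       : CMB temperature fluctuations at N cluster positions [uK]
    pos     : (N, 3) comoving cluster positions [Mpc]
    r_edges : bin edges in pair separation [Mpc]

    For each pair, c_ij = r_hat_ij . (n_hat_i + n_hat_j)/2; the returned
    value per bin is the least-squares amplitude -sum((T_i-T_j)*c_ij)/sum(c_ij^2).
    """
    n_hat = pos / np.linalg.norm(pos, axis=1, keepdims=True)   # unit lines of sight
    nbin = len(r_edges) - 1
    num, den = np.zeros(nbin), np.zeros(nbin)
    N = len(T)
    for i in range(N):
        for j in range(i + 1, N):
            sep = pos[i] - pos[j]
            r = np.linalg.norm(sep)
            b = np.searchsorted(r_edges, r) - 1
            if b < 0 or b >= nbin:
                continue
            c = np.dot(sep / r, 0.5 * (n_hat[i] + n_hat[j]))
            num[b] += (T[i] - T[j]) * c
            den[b] += c * c
    return -num / np.maximum(den, 1e-30)

# toy usage with random positions and temperatures
rng = np.random.default_rng(0)
pos = rng.uniform(1000.0, 1500.0, size=(500, 3))
T = rng.normal(0.0, 10.0, size=500)
signal = pairwise_ksz(T, pos, r_edges=np.linspace(20, 300, 15))
```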

  19. Delineating slowly and rapidly evolving fractions of the Drosophila genome.

    PubMed

    Keith, Jonathan M; Adams, Peter; Stephen, Stuart; Mattick, John S

    2008-05-01

    Evolutionary conservation is an important indicator of function and a major component of bioinformatic methods to identify non-protein-coding genes. We present a new Bayesian method for segmenting pairwise alignments of eukaryotic genomes while simultaneously classifying segments into slowly and rapidly evolving fractions. We also describe an information criterion similar to the Akaike Information Criterion (AIC) for determining the number of classes. Working with pairwise alignments enables detection of differences in conservation patterns among closely related species. We analyzed three whole-genome and three partial-genome pairwise alignments among eight Drosophila species. Three distinct classes of conservation level were detected. Sequences comprising the most slowly evolving component were consistent across a range of species pairs, and constituted approximately 62-66% of the D. melanogaster genome. Almost all (>90%) of the aligned protein-coding sequence is in this fraction, suggesting much of it (comprising the majority of the Drosophila genome, including approximately 56% of non-protein-coding sequences) is functional. The size and content of the most rapidly evolving component was species dependent, and varied from 1.6% to 4.8%. This fraction is also enriched for protein-coding sequence (while containing significant amounts of non-protein-coding sequence), suggesting it is under positive selection. We also classified segments according to conservation and GC content simultaneously. This analysis identified numerous sub-classes of those identified on the basis of conservation alone, but was nevertheless consistent with that classification. Software, data, and results available at www.maths.qut.edu.au/-keithj/. Genomic segments comprising the conservation classes available in BED format.

  20. Constraints on the optical depth of galaxy groups and clusters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Flender, Samuel; Nagai, Daisuke; McDonald, Michael

    Here, future data from galaxy redshift surveys, combined with high-resolution maps of the cosmic microwave background, will enable measurements of the pairwise kinematic Sunyaev–Zel'dovich (kSZ) signal with unprecedented statistical significance. This signal probes the matter-velocity correlation function, scaled by the average optical depth (τ) of the galaxy groups and clusters in the sample, and is thus of fundamental importance for cosmology. However, in order to translate pairwise kSZ measurements into cosmological constraints, external constraints on τ are necessary. In this work, we present a new model for the intracluster medium, which takes into account star formation, feedback, non-thermal pressure, and gas cooling. Our semi-analytic model is computationally efficient and can reproduce results of recent hydrodynamical simulations of galaxy cluster formation. We calibrate the free parameters in the model using recent X-ray measurements of gas density profiles of clusters, and gas masses of groups and clusters. Our observationally calibrated model predicts the average $\tau_{500}$ (i.e., the integrated τ within a disk of size $R_{500}$) to better than 6% modeling uncertainty (at the 95% confidence level). If the remaining astrophysical uncertainties and X-ray selection effects can be better understood, our model for the optical depth should break the degeneracy between optical depth and cluster velocity in the analysis of future pairwise kSZ measurements and improve cosmological constraints from the combination of upcoming galaxy and CMB surveys, including constraints on the nature of dark energy, modified gravity, and neutrino mass.

  1. Grouping individual independent BOLD effects: a new way to ICA group analysis

    NASA Astrophysics Data System (ADS)

    Duann, Jeng-Ren; Jung, Tzyy-Ping; Sejnowski, Terrence J.; Makeig, Scott

    2009-04-01

    A new group analysis method for summarizing task-related BOLD responses based on independent component analysis (ICA) is presented. In contrast to the previously proposed group ICA (gICA) method, which first combines multi-subject fMRI data in either the temporal or spatial domain and applies ICA decomposition only once to the combined data to extract the task-related BOLD effects, the method presented here applies ICA decomposition to each individual subject's fMRI data to find the independent BOLD effects specific to that subject. The task-related independent BOLD component is then selected from the resulting components of the single-subject ICA decomposition and grouped across subjects to derive the group inference. In this new ICA group analysis (ICAga) method, one does not need to assume that the task-related BOLD time courses are identical across brain areas and subjects, as is assumed in the grand ICA decomposition of spatially concatenated fMRI data. Nor does one need to assume that, after spatial normalization, voxels at the same coordinates represent exactly the same functional or structural brain anatomy across subjects. Both assumptions are problematic given recent BOLD activation evidence. Further, because the independent BOLD effects are obtained from each individual subject, the ICAga method can better account for individual differences in the task-related BOLD effects, unlike the gICA approach, in which the task-related BOLD effects can only be accounted for by a single unified BOLD model across multiple subjects. As a result, the newly proposed ICAga method better fits the task-related BOLD effects at the individual level and thus allows more appropriate multisubject BOLD effects to be grouped in the group analysis.
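    A minimal sketch of the per-subject step described above is shown below: spatial ICA is run on each subject's BOLD data, the component whose time course correlates best with a task regressor is selected, and the selected maps are carried to a voxel-wise group test. The use of scikit-learn's FastICA, the correlation-based component selection, the toy block design, and the one-sample t-test are simplifying assumptions of this illustration rather than the authors' exact pipeline.

```python
import numpy as np
from sklearn.decomposition import FastICA
from scipy.stats import pearsonr, ttest_1samp

def task_component(bold, task_regressor, n_components=20, seed=0):
    """Single-subject spatial ICA; return the map whose time course best matches the task.

    bold           : (n_timepoints, n_voxels) preprocessed BOLD data
    task_regressor : (n_timepoints,) expected task time course (e.g. a boxcar)

    Spatial maps are the independent components, their time courses live in the
    mixing matrix, and the task-related component is picked by temporal correlation.
    """
    ica = FastICA(n_components=n_components, random_state=seed, max_iter=1000)
    maps = ica.fit_transform(bold.T)          # (n_voxels, n_components) spatial maps
    time_courses = ica.mixing_                # (n_timepoints, n_components)
    r = np.array([pearsonr(tc, task_regressor)[0] for tc in time_courses.T])
    best = int(np.argmax(np.abs(r)))
    return np.sign(r[best]) * maps[:, best]   # sign-align so task activation is positive

# group inference: stack one task-related map per subject, voxel-wise one-sample t-test
rng = np.random.default_rng(1)
task = np.tile(np.repeat([0.0, 1.0], 10), 5)                     # toy block design, 100 TRs
subject_maps = [task_component(rng.normal(size=(100, 300)) +
                               np.outer(task, rng.normal(size=300) > 2.0),
                               task) for _ in range(8)]
t_map, p_map = ttest_1samp(np.vstack(subject_maps), popmean=0.0, axis=0)
```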

  2. Amplitude-cyclic frequency decomposition of vibration signals for bearing fault diagnosis based on phase editing

    NASA Astrophysics Data System (ADS)

    Barbini, L.; Eltabach, M.; Hillis, A. J.; du Bois, J. L.

    2018-03-01

    In rotating machine diagnosis, different spectral tools are used to analyse vibration signals. Despite their good diagnostic performance, such tools are usually highly refined, computationally complex to implement, and require oversight by an expert user. This paper introduces an intuitive and easy-to-implement method for vibration analysis: amplitude-cyclic frequency decomposition. The method first separates vibration signals according to their spectral amplitudes and then uses the squared envelope spectrum to reveal the presence of cyclostationarity at each amplitude level. The intuitive idea is that in a rotating machine different components contribute vibrations at different amplitudes; for instance, defective bearings contribute a very weak signal in contrast to gears. This paper also introduces a new quantity, the decomposition squared envelope spectrum, which enables separation between the components of a rotating machine. The amplitude-cyclic frequency decomposition and the decomposition squared envelope spectrum are tested on real-world signals, at both stationary and varying speeds, using data from a wind turbine gearbox and an aircraft engine. In addition, a benchmark comparison with the spectral correlation method is presented.
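    The squared envelope spectrum that the method applies at each amplitude level is straightforward to compute from the analytic signal. The sketch below shows only this building block on a synthetic bearing-like signal; the amplitude-level separation and the decomposition squared envelope spectrum proposed in the paper are not reproduced, and the fault frequency and resonance parameters are invented.

```python
import numpy as np
from scipy.signal import hilbert
from numpy.fft import rfft, rfftfreq

def squared_envelope_spectrum(x, fs):
    """Squared envelope spectrum (SES) of a vibration signal.

    Peaks at a bearing fault's cyclic frequency reveal second-order
    cyclostationarity even when the fault signature is weak.
    """
    analytic = hilbert(x)                   # analytic signal
    env2 = np.abs(analytic) ** 2            # squared envelope
    env2 -= env2.mean()                     # drop the DC component
    ses = np.abs(rfft(env2)) / len(x)
    freqs = rfftfreq(len(x), d=1.0 / fs)
    return freqs, ses

# toy example: ~90 Hz fault impacts exciting a decaying 3 kHz resonance, plus noise
fs, T = 20_000, 2.0
t = np.arange(0.0, T, 1.0 / fs)
impacts = (np.sin(2 * np.pi * 90 * t) > 0.999).astype(float)     # sparse impulse train
burst = np.exp(-t[:200] * 400) * np.sin(2 * np.pi * 3000 * t[:200])
x = np.convolve(impacts, burst, mode="same") + \
    0.1 * np.random.default_rng(2).normal(size=len(t))
freqs, ses = squared_envelope_spectrum(x, fs)   # expect peaks near 90 Hz and harmonics
```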

  3. Time-resolved x-ray scattering instrumentation

    DOEpatents

    Borso, C.S.

    1985-11-21

    An apparatus and method for increased speed and efficiency of data compilation and analysis in real time is presented in this disclosure. Data is sensed and grouped in combinations in accordance with predetermined logic. The combinations are grouped so that a simplified reduced signal results, such as pairwise summing of data values having offsetting algebraic signs, thereby reducing the magnitude of the net pair sum. Bit storage requirements are reduced and speed of data compilation and analysis is increased by manipulation of shorter bit length data values, making real time evaluation possible.

  4. A configuration space of homologous proteins conserving mutual information and allowing a phylogeny inference based on pair-wise Z-score probabilities.

    PubMed

    Bastien, Olivier; Ortet, Philippe; Roy, Sylvaine; Maréchal, Eric

    2005-03-10

    Popular methods to reconstruct molecular phylogenies are based on multiple sequence alignments, in which addition or removal of data may change the resulting tree topology. We have sought a representation of homologous proteins that would conserve the information of pair-wise sequence alignments, respect probabilistic properties of Z-scores (Monte Carlo methods applied to pair-wise comparisons) and be the basis for a novel method of consistent and stable phylogenetic reconstruction. We have built up a spatial representation of protein sequences using concepts from particle physics (configuration space) and respecting a frame of constraints deduced from pair-wise alignment score properties in information theory. The obtained configuration space of homologous proteins (CSHP) allows the representation of real and shuffled sequences, and thereupon an expression of the TULIP theorem for Z-score probabilities. Based on the CSHP, we propose a phylogeny reconstruction using Z-scores. Deduced trees, called TULIP trees, are consistent with multiple-alignment-based trees. Furthermore, the TULIP tree reconstruction method provides a solution for some previously reported incongruent results, such as the apicomplexan enolase phylogeny. The CSHP is a unified model that conserves mutual information between proteins in the way physical models conserve energy. Applications include the reconstruction of evolutionarily consistent and robust trees, the topology of which is based on a spatial representation that is not reordered after addition or removal of sequences. The CSHP and its assigned phylogenetic topology provide a powerful and easily updated representation for massive pair-wise genome comparisons based on Z-score computations.
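    A minimal sketch of the Monte Carlo Z-score underlying such pair-wise comparisons is shown below: the observed score of a sequence pair is compared with the score distribution obtained after shuffling one of the sequences, giving Z = (S_obs - mean(S_shuffled)) / std(S_shuffled). The identity-count scorer is a toy stand-in for a real alignment scorer (e.g., Smith-Waterman), and the example sequences are invented.

```python
import random

def identity_score(a, b):
    """Toy pairwise score: number of identical positions (a stand-in for a
    real alignment score such as Smith-Waterman)."""
    return sum(x == y for x, y in zip(a, b))

def monte_carlo_z(a, b, n_shuffles=500, seed=0):
    """Z-score of the pairwise score of a and b against shuffled versions of b."""
    rng = random.Random(seed)
    s_obs = identity_score(a, b)
    scores = []
    for _ in range(n_shuffles):
        shuffled = list(b)
        rng.shuffle(shuffled)
        scores.append(identity_score(a, shuffled))
    mean = sum(scores) / n_shuffles
    var = sum((s - mean) ** 2 for s in scores) / (n_shuffles - 1)
    return (s_obs - mean) / (var ** 0.5 + 1e-12)

# invented example sequences
z = monte_carlo_z("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ",
                  "MKTAYIAKQRQISFVKSHFSRQLAERLGMIEVQ")
```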

  5. Simulations of the pairwise kinematic Sunyaev-Zel'dovich signal

    DOE PAGES

    Flender, Samuel; Bleem, Lindsey; Finkel, Hal; ...

    2016-05-26

    The pairwise kinematic Sunyaev–Zel'dovich (kSZ) signal from galaxy clusters is a probe of their line-of-sight momenta, and thus a potentially valuable source of cosmological information. In addition to the momenta, the amplitude of the measured signal depends on the properties of the intracluster gas and observational limitations such as errors in determining cluster centers and redshifts. In this work, we simulate the pairwise kSZ signal of clusters at $z < 1$, using the output from a cosmological N-body simulation and including the properties of the intracluster gas via a model that can be varied in post-processing. We find that modifications to the gas profile due to star formation and feedback reduce the pairwise kSZ amplitude of clusters by ~50%, relative to the naive "gas traces mass" assumption. We demonstrate that miscentering can reduce the overall amplitude of the pairwise kSZ signal by up to 10%, while redshift errors can lead to an almost complete suppression of the signal at small separations. We confirm that a high-significance detection is expected from the combination of data from current-generation, high-resolution cosmic microwave background experiments, such as the South Pole Telescope, and cluster samples from optical photometric surveys, such as the Dark Energy Survey. As a result, we forecast that future experiments such as Advanced ACTPol in conjunction with data from the Dark Energy Spectroscopic Instrument will yield detection significances of at least $20\sigma$, and up to $57\sigma$ in an optimistic scenario.

  6. Genetic and Antigenic Evidence Supports the Separation of Hepatozoon canis and Hepatozoon americanum at the Species Level

    PubMed Central

    Baneth, Gad; Barta, John R.; Shkap, Varda; Martin, Donald S.; Macintire, Douglass K.; Vincent-Johnson, Nancy

    2000-01-01

    Recognition of Hepatozoon canis and Hepatozoon americanum as distinct species was supported by the results of Western immunoblotting of canine anti-H. canis and anti-H. americanum sera against H. canis gamonts. Sequence analysis of 368 bases near the 3′ end of the 18S rRNA gene from each species revealed a pairwise difference of 13.59%. PMID:10699047

  7. Evaluation of advanced multiplex short tandem repeat systems in pairwise kinship analysis.

    PubMed

    Tamura, Tomonori; Osawa, Motoki; Ochiai, Eriko; Suzuki, Takanori; Nakamura, Takashi

    2015-09-01

    The AmpFLSTR Identifiler Kit, comprising 15 autosomal short tandem repeat (STR) loci, is commonly employed in forensic practice for calculating match probabilities and parentage testing. The conventional system provides insufficient power for kinship analyses such as sibship testing because of the small number of examined loci. This study evaluated the power of the PowerPlex Fusion System, GlobalFiler Kit, and PowerPlex 21 System, which comprise more than 20 autosomal STR loci, to estimate pairwise blood relatedness (i.e., parent-child, full siblings, second-degree relatives, and first cousins). The genotypes of all 24 STR loci in 10,000 putative pedigrees were constructed by simulation. The likelihood ratio for each locus was calculated from joint probabilities for relatives and non-relatives, and the combined likelihood ratio was calculated according to the product rule. The addition of STR loci improved separation between relatives and non-relatives; however, even these systems were less effective when extended to the inference of first-cousin relationships. In conclusion, these advanced systems will be useful in forensic personal identification, especially in the evaluation of full siblings and second-degree relatives. Moreover, the additional loci may give rise to two major issues: more frequent mutational events and several pairs of linked loci on the same chromosome. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
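    A sketch of the per-locus likelihood ratio and product-rule combination described above is given below, using standard IBD-sharing coefficients for the four relationships studied. The allele frequencies and genotypes are toy values, and mutation and linkage between loci (the two issues flagged in the abstract) are ignored; this illustrates the calculation, not the paper's simulation framework.

```python
import numpy as np

# IBD-sharing coefficients (k0, k1, k2) for the relationships in the study
KAPPA = {
    "parent-child":  (0.00, 1.00, 0.00),
    "full-siblings": (0.25, 0.50, 0.25),
    "second-degree": (0.50, 0.50, 0.00),
    "first-cousins": (0.75, 0.25, 0.00),
}

def geno_prob(g, p):
    """Hardy-Weinberg probability of an unordered genotype g = (a, b)."""
    a, b = g
    return p[a] ** 2 if a == b else 2 * p[a] * p[b]

def trans_prob(g1, g2, p):
    """P(genotype g2 | one allele shared IBD with g1), genotypes unordered."""
    shared = [g1[0]] if g1[0] == g1[1] else [g1[0], g1[1]]
    w = 1.0 / len(shared)
    c, d = g2
    total = 0.0
    for s in shared:
        if c == d:
            total += w * (p[c] if s == c else 0.0)
        elif s == c:
            total += w * p[d]
        elif s == d:
            total += w * p[c]
    return total

def locus_lr(g1, g2, p, relationship):
    """LR = P(g1, g2 | related) / P(g1, g2 | unrelated) at one locus."""
    k0, k1, k2 = KAPPA[relationship]
    same = 1.0 if tuple(sorted(g1)) == tuple(sorted(g2)) else 0.0
    return k0 + k1 * trans_prob(g1, g2, p) / geno_prob(g2, p) + k2 * same / geno_prob(g2, p)

# product rule over loci: toy allele frequencies and genotypes at 3 STR loci
freqs = [{11: 0.2, 12: 0.3, 13: 0.5}, {8: 0.1, 9: 0.6, 10: 0.3}, {15: 0.25, 16: 0.75}]
pair  = [((11, 12), (12, 13)), ((9, 9), (9, 10)), ((15, 16), (15, 16))]
combined = np.prod([locus_lr(g1, g2, p, "full-siblings") for (g1, g2), p in zip(pair, freqs)])
```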

  8. Kernel machine methods for integrative analysis of genome-wide methylation and genotyping studies.

    PubMed

    Zhao, Ni; Zhan, Xiang; Huang, Yen-Tsung; Almli, Lynn M; Smith, Alicia; Epstein, Michael P; Conneely, Karen; Wu, Michael C

    2018-03-01

    Many large GWAS consortia are expanding to simultaneously examine the joint role of DNA methylation in addition to genotype in the same subjects. However, integrating information from both data types is challenging. In this paper, we propose a composite kernel machine regression model to test the joint epigenetic and genetic effect. Our approach works at the gene level, which allows for a common unit of analysis across different data types. The model compares the pairwise similarities in the phenotype to the pairwise similarities in the genotype and methylation values; and high correspondence is suggestive of association. A composite kernel is constructed to measure the similarities in the genotype and methylation values between pairs of samples. We demonstrate through simulations and real data applications that the proposed approach can correctly control type I error, and is more robust and powerful than using only the genotype or methylation data in detecting trait-associated genes. We applied our method to investigate the genetic and epigenetic regulation of gene expression in response to stressful life events using data that are collected from the Grady Trauma Project. Within the kernel machine testing framework, our methods allow for heterogeneity in effect sizes, nonlinear, and interactive effects, as well as rapid P-value computation. © 2017 WILEY PERIODICALS, INC.
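    A minimal sketch in the spirit of the composite-kernel approach above: linear kernels summarize pairwise genotype and methylation similarity, they are mixed with a weight ρ, and a variance-component score statistic Q = r'Kr is computed on the phenotype residuals. Significance is assessed here by permutation rather than the analytic p-values used in kernel machine regression, and the data, the kernel choice, and ρ = 0.5 are illustrative assumptions.

```python
import numpy as np

def linear_kernel(X):
    """Pairwise sample similarity from standardized features."""
    Z = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-12)
    return Z @ Z.T

def composite_score_test(y, G, M, rho=0.5, n_perm=999, seed=0):
    """Score-type statistic Q = r' K r with K = rho*K_genotype + (1-rho)*K_methylation.

    The p-value is obtained by permuting the phenotype residuals, a
    simplification of the analytic approach used in kernel machine tests.
    """
    rng = np.random.default_rng(seed)
    K = rho * linear_kernel(G) + (1.0 - rho) * linear_kernel(M)
    r = y - y.mean()                      # residuals of an intercept-only null model
    q_obs = float(r @ K @ r)
    exceed = 0
    for _ in range(n_perm):
        rp = rng.permutation(r)
        if rp @ K @ rp >= q_obs:
            exceed += 1
    return q_obs, (exceed + 1) / (n_perm + 1)

# toy data: 120 samples, 30 SNPs and 20 CpG sites in one gene
rng = np.random.default_rng(3)
G = rng.integers(0, 3, size=(120, 30)).astype(float)    # additive genotype coding
M = rng.uniform(0.0, 1.0, size=(120, 20))               # methylation beta values
y = 0.4 * G[:, 0] + 0.6 * M[:, 0] + rng.normal(size=120)
q, p = composite_score_test(y, G, M)
```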

  9. Classification of forest-based ecotourism areas in Pocahontas County of West Virginia using GIS and pairwise comparison method

    Treesearch

    Ishwar Dhami; Jinyang. Deng

    2012-01-01

    Many previous studies have examined ecotourism primarily from the perspective of tourists while largely ignoring ecotourism destinations. This study used geographical information system (GIS) and pairwise comparison to identify forest-based ecotourism areas in Pocahontas County, West Virginia. The study adopted the criteria and scores developed by Boyd and Butler (1994...
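    The pairwise comparison step in such suitability analyses typically reduces to deriving criterion weights from a reciprocal judgment matrix. Below is a minimal sketch using the principal-eigenvector method with Saaty's consistency ratio; the three criteria and their judgments are hypothetical and do not reproduce the Boyd and Butler criteria or scores used in the study.

```python
import numpy as np

# Saaty's random consistency index by matrix size (n = 1..10)
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12,
      6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45, 10: 1.49}

def ahp_weights(A):
    """Criterion weights from a reciprocal pairwise comparison matrix.

    Weights are the normalized principal eigenvector; the consistency ratio
    CR = ((lambda_max - n) / (n - 1)) / RI[n] should be below ~0.1.
    """
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    k = int(np.argmax(eigvals.real))
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()
    lambda_max = eigvals[k].real
    cr = ((lambda_max - n) / (n - 1)) / RI[n]
    return w, cr

# hypothetical judgments for three suitability criteria on the 1-9 Saaty scale
A = [[1.0,   3.0, 5.0],
     [1/3.0, 1.0, 2.0],
     [1/5.0, 1/2.0, 1.0]]
weights, cr = ahp_weights(A)   # weights sum to 1; cr < 0.1 indicates acceptable consistency
```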

  10. Activation energy and energy density: a bioenergetic framework for assessing soil organic matter stability

    NASA Astrophysics Data System (ADS)

    Williams, E. K.; Plante, A. F.

    2017-12-01

    The stability and cycling of natural organic matter depend on the input of energy needed to decompose it and on the net energy gained from its decomposition. In soils, this relationship is complicated by microbial enzymatic activity, which decreases the activation energies associated with soil organic matter (SOM) decomposition, and by chemical and physical protection mechanisms, which decrease the concentration of available organic matter substrate and require additional energy to overcome before decomposition can occur. In this study, we utilize differential scanning calorimetry and evolved CO2 gas analysis to characterize differences in the energetics (activation energy and energy density) of soils that have undergone degradation under natural (bare fallow), field (changes in land use), chemical (acid hydrolysis), and laboratory (high-temperature incubation) experimental conditions. We present these data in a novel conceptual framework relating the energy dynamics to organic matter inputs, decomposition, and molecular complexity.

  11. Modified complementary ensemble empirical mode decomposition and intrinsic mode functions evaluation index for high-speed train gearbox fault diagnosis

    NASA Astrophysics Data System (ADS)

    Chen, Dongyue; Lin, Jianhui; Li, Yanping

    2018-06-01

    Complementary ensemble empirical mode decomposition (CEEMD) was developed to address the mode-mixing problem of the empirical mode decomposition (EMD) method. Compared with ensemble empirical mode decomposition (EEMD), the CEEMD method reduces residue noise in the signal reconstruction. However, both CEEMD and EEMD need a sufficiently large ensemble number to reduce the residue noise, and hence incur a high computational cost. Moreover, the selection of intrinsic mode functions (IMFs) for further analysis usually depends on experience. A modified CEEMD method and an IMF evaluation index are therefore proposed with the aim of reducing the computational cost and selecting IMFs automatically. A simulated signal and in-service high-speed train gearbox vibration signals are employed to validate the proposed method. The results demonstrate that the modified CEEMD can decompose the signal efficiently at a lower computational cost, and that the IMF evaluation index can select the meaningful IMFs automatically.

  12. Kinetics of non-isothermal decomposition of cinnamic acid

    NASA Astrophysics Data System (ADS)

    Zhao, Ming-rui; Qi, Zhen-li; Chen, Fei-xiong; Yue, Xia-xin

    2014-07-01

    The thermal stability and decomposition kinetics of cinnamic acid were investigated by thermogravimetry and differential scanning calorimetry at four heating rates. The activation energies of this process were calculated from the TG curves by the methods of Flynn-Wall-Ozawa, Doyle, the Distributed Activation Energy Model, Šatava-Šesták and Kissinger, respectively. There is only one stage of thermal decomposition in the TG curves and two endothermic peaks in the DSC curves. For this decomposition process of cinnamic acid, E and log A (A in s-1) were determined to be 81.74 kJ mol-1 and 8.67, respectively. The mechanism was the Mampel power law (reaction order n = 1), with integral form G(α) = α (α = 0.1-0.9). Moreover, the thermodynamic activation properties ΔH‡, ΔS‡ and ΔG‡ were 77.96 kJ mol-1, -90.71 J mol-1 K-1 and 119.41 kJ mol-1, respectively.
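    Of the kinetic methods listed above, the Kissinger evaluation is the most compact: ln(β/Tp²) plotted against 1/Tp is fit to a straight line, the slope gives -Ea/R and the intercept gives ln(A·R/Ea). The sketch below shows the arithmetic with invented peak temperatures, not the paper's data.

```python
import numpy as np

R = 8.314  # gas constant, J mol^-1 K^-1

def kissinger(beta, Tp):
    """Kissinger analysis:  ln(beta / Tp^2) = ln(A*R/Ea) - Ea / (R*Tp).

    beta : heating rates (K/min); Tp : DSC/DTG peak temperatures (K).
    Returns the activation energy Ea (kJ/mol) and pre-exponential factor A (min^-1).
    """
    x = 1.0 / np.asarray(Tp, dtype=float)
    y = np.log(np.asarray(beta, dtype=float) / np.asarray(Tp, dtype=float) ** 2)
    slope, intercept = np.polyfit(x, y, 1)
    Ea = -slope * R                        # J/mol
    A = (Ea / R) * np.exp(intercept)       # min^-1, since beta is in K/min
    return Ea / 1000.0, A

# illustrative (not the paper's) peak temperatures at four heating rates
beta = [5.0, 10.0, 15.0, 20.0]             # K/min
Tp = [490.0, 500.0, 507.0, 512.0]          # K
Ea_kJ, A = kissinger(beta, Tp)
```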

  13. Effect of decomposition and organic residues on resistivity of copper films fabricated via low-temperature sintering of complex particle mixed dispersions

    NASA Astrophysics Data System (ADS)

    Yong, Yingqiong; Nguyen, Mai Thanh; Tsukamoto, Hiroki; Matsubara, Masaki; Liao, Ying-Chih; Yonezawa, Tetsu

    2017-03-01

    Mixtures of a copper complex and copper fine particles used as copper-based metal-organic decomposition (MOD) dispersions have been demonstrated to be effective for low-temperature sintering of conductive copper films. However, the effect of copper particle size on the decomposition process of the dispersion during heating, and the effect of organic residues on the resistivity, have not been studied. In this study, the decomposition process of dispersions containing mixtures of a copper complex and copper particles of various sizes was investigated, and the effect of organic residues on the resistivity was examined using thermogravimetric analysis. The choice of copper salt in the copper complex is also discussed. In this work, a low-resistivity copper film (7 × 10-6 Ω·m) was obtained by sintering at a temperature as low as 100 °C without using any reductive gas.

  14. Investigation of automated task learning, decomposition and scheduling

    NASA Technical Reports Server (NTRS)

    Livingston, David L.; Serpen, Gursel; Masti, Chandrashekar L.

    1990-01-01

    The details and results of research on the application of neural networks to task planning and decomposition are presented. Task planning and decomposition are operations that humans perform reasonably efficiently. Without good heuristics and, usually, substantial human interaction, automatic planners and decomposers generally do not perform well due to the intractable nature of the problems under consideration. The human-like performance of neural networks has shown promise for generating acceptable solutions to intractable problems such as planning and decomposition, which was the primary motivation for the study. The basis for the work is the use of state machines to model tasks. State machine models provide a useful means for examining the structure of tasks, since many formal techniques have been developed for their analysis and synthesis. The approach integrates the strong algebraic foundations of state machines with the heretofore trial-and-error approach to neural network synthesis.

  15. Macrobenthic assemblages of the Changjiang River estuary (Yangtze River, China) and adjacent continental shelf relative to mild summer hypoxia

    NASA Astrophysics Data System (ADS)

    Liao, Yibo; Shou, Lu; Tang, Yanbin; Zeng, Jiangning; Gao, Aigen; Chen, Quanzhen; Yan, Xiaojun

    2017-05-01

    To assess the effects of hypoxia, macrobenthic communities along an estuarine gradient of the Changjiang estuary and adjacent continental shelf were analyzed. This revealed spatial variations in the communities and relationships with environmental variables during periods of reduced dissolved oxygen (DO) concentration in summer. Statistical analyses revealed significant differences in macrobenthic community composition among the three zones: estuarine zone (EZ), mildly hypoxic zone (MHZ) in the continental shelf, and normoxic zone (NZ) in the continental shelf (Global R = 0.206, P = 0.002). Pairwise tests showed that the macrobenthic community composition of the EZ was significantly different from the MHZ (pairwise test R = 0.305, P = 0.001) and the NZ (pairwise test R = 0.259, P = 0.001). There was no significant difference in macrobenthic communities between the MHZ and the NZ (pairwise test R = 0.062, P = 0.114). The taxa included small and typically opportunistic polychaetes, which made the greatest contribution to the dissimilarity between the zones. The effects of mild hypoxia on the macrobenthic communities are a result not only of reduced DO concentration but also of differences in environmental variables such as temperature, salinity, and nutrient concentrations caused by stratification.
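    The global and pairwise R values quoted above come from ANOSIM-type tests. A minimal sketch of the one-way statistic is given below: R = (mean rank of between-group dissimilarities - mean rank of within-group dissimilarities) / (M/2), with a permutation p-value. The Bray-Curtis metric, the random abundance data, and the absence of pairwise sub-tests are simplifications of this illustration.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import rankdata

def anosim(X, groups, metric="braycurtis", n_perm=999, seed=0):
    """One-way ANOSIM: R = (mean rank between - mean rank within) / (M / 2).

    X      : (n_samples, n_taxa) abundance matrix
    groups : group label per sample; p-value by permuting the labels.
    """
    rng = np.random.default_rng(seed)
    groups = np.asarray(groups)
    d = pdist(X, metric=metric)              # condensed dissimilarities, length M
    ranks = rankdata(d)
    M = d.size
    iu = np.triu_indices(X.shape[0], k=1)    # same pair ordering as pdist

    def r_stat(labels):
        within = labels[iu[0]] == labels[iu[1]]
        return (ranks[~within].mean() - ranks[within].mean()) / (M / 2.0)

    r_obs = r_stat(groups)
    exceed = sum(r_stat(rng.permutation(groups)) >= r_obs for _ in range(n_perm))
    return r_obs, (exceed + 1) / (n_perm + 1)

# toy data: 3 zones x 10 stations, 25 taxa
rng = np.random.default_rng(4)
X = np.vstack([rng.poisson(lam, size=(10, 25)) for lam in (2.0, 2.5, 3.0)]).astype(float)
labels = np.repeat(["EZ", "MHZ", "NZ"], 10)
R, p = anosim(X, labels)
```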

  16. From pairwise to group interactions in games of cyclic dominance.

    PubMed

    Szolnoki, Attila; Vukov, Jeromos; Perc, Matjaž

    2014-06-01

    We study the rock-paper-scissors game in structured populations, where the invasion rates determine individual payoffs that govern the process of strategy change. The traditional version of the game is recovered if the payoffs for each potential invasion stem from a single pairwise interaction. However, the transformation of invasion rates to payoffs also allows the usage of larger interaction ranges. In addition to the traditional pairwise interaction, we therefore consider simultaneous interactions with all nearest neighbors, as well as with all nearest and next-nearest neighbors, thus effectively going from single pair to group interactions in games of cyclic dominance. We show that differences in the interaction range affect not only the stationary fractions of strategies but also their relations of dominance. The transition from pairwise to group interactions can thus decelerate and even revert the direction of the invasion between the competing strategies. Like in evolutionary social dilemmas, in games of cyclic dominance, too, the indirect multipoint interactions that are due to group interactions hence play a pivotal role. Our results indicate that, in addition to the invasion rates, the interaction range is at least as important for the maintenance of biodiversity among cyclically competing strategies.
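    A minimal sketch of the traditional pairwise-interaction limit described above: on a square lattice with periodic boundaries, a random neighbor pair is chosen and the cyclically dominant strategy invades with its invasion rate. The lattice size, update count, and symmetric rates are illustrative choices, and the group-interaction variants studied in the paper are not implemented.

```python
import numpy as np

def rps_pairwise(L=64, steps=200_000, rates=(1.0, 1.0, 1.0), seed=0):
    """Rock-paper-scissors invasions between random neighbor pairs on an L x L torus.

    Convention used here: strategy s invades strategy (s + 1) % 3 with
    probability rates[s]. Returns the final fraction of each strategy.
    """
    rng = np.random.default_rng(seed)
    lattice = rng.integers(0, 3, size=(L, L))
    shifts = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    for _ in range(steps):
        x, y = rng.integers(0, L, size=2)
        dx, dy = shifts[rng.integers(0, 4)]
        nx, ny = (x + dx) % L, (y + dy) % L
        a, b = lattice[x, y], lattice[nx, ny]
        if a == b:
            continue
        if (a + 1) % 3 == b and rng.random() < rates[a]:     # a beats b
            lattice[nx, ny] = a
        elif (b + 1) % 3 == a and rng.random() < rates[b]:   # b beats a
            lattice[x, y] = b
    return np.bincount(lattice.ravel(), minlength=3) / (L * L)

fractions = rps_pairwise()   # near-equal fractions expected for symmetric rates
```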

  17. Detection of the kinematic Sunyaev–Zel'dovich effect with DES Year 1 and SPT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Soergel, B.; Flender, S.; Story, K. T.

    Here, we detect the kinematic Sunyaev-Zel'dovich (kSZ) effect with a statistical significance of $4.2\sigma$ by combining a cluster catalogue derived from the first year data of the Dark Energy Survey (DES) with CMB temperature maps from the South Pole Telescope Sunyaev-Zel'dovich (SPT-SZ) Survey. This measurement is performed with a differential statistic that isolates the pairwise kSZ signal, providing the first detection of the large-scale, pairwise motion of clusters using redshifts derived from photometric data. By fitting the pairwise kSZ signal to a theoretical template we measure the average central optical depth of the cluster sample, $\bar{\tau}_e = (3.75 \pm 0.89) \times 10^{-3}$. We compare the extracted signal to realistic simulations and find good agreement with respect to the signal-to-noise, the constraint on $\bar{\tau}_e$, and the corresponding gas fraction. High-precision measurements of the pairwise kSZ signal with future data will be able to place constraints on the baryonic physics of galaxy clusters, and could be used to probe gravity on scales $\gtrsim 100$ Mpc.

  18. Detection of the kinematic Sunyaev–Zel'dovich effect with DES Year 1 and SPT

    DOE PAGES

    Soergel, B.; Flender, S.; Story, K. T.; ...

    2016-06-17

    Here, we detect the kinematic Sunyaev-Zel'dovich (kSZ) effect with a statistical significance of $4.2\sigma$ by combining a cluster catalogue derived from the first year data of the Dark Energy Survey (DES) with CMB temperature maps from the South Pole Telescope Sunyaev-Zel'dovich (SPT-SZ) Survey. This measurement is performed with a differential statistic that isolates the pairwise kSZ signal, providing the first detection of the large-scale, pairwise motion of clusters using redshifts derived from photometric data. By fitting the pairwise kSZ signal to a theoretical template we measure the average central optical depth of the cluster sample, $\bar{\tau}_e = (3.75 \pm 0.89) \times 10^{-3}$. We compare the extracted signal to realistic simulations and find good agreement with respect to the signal-to-noise, the constraint on $\bar{\tau}_e$, and the corresponding gas fraction. High-precision measurements of the pairwise kSZ signal with future data will be able to place constraints on the baryonic physics of galaxy clusters, and could be used to probe gravity on scales $\gtrsim 100$ Mpc.

  19. Optimization of data analysis for the in vivo neutron activation analysis of aluminum in bone.

    PubMed

    Mohseni, H K; Matysiak, W; Chettle, D R; Byun, S H; Priest, N; Atanackovic, J; Prestwich, W V

    2016-10-01

    An existing system at McMaster University has been used for the in vivo measurement of aluminum in human bone. Precise and detailed analysis approaches are necessary to determine the aluminum concentration because of the low levels of aluminum found in the bone and the challenges associated with its detection. Phantoms resembling the composition of the human hand with varying concentrations of aluminum were made for testing the system prior to the application to human studies. A spectral decomposition model and a photopeak fitting model involving the inverse-variance weighted mean and a time-dependent analysis were explored to analyze the results and determine the model with the best performance and lowest minimum detection limit. The results showed that the spectral decomposition and the photopeak fitting model with the inverse-variance weighted mean both provided better results compared to the other methods tested. The spectral decomposition method resulted in a marginally lower detection limit (5 μg Al/g Ca) compared to the inverse-variance weighted mean (5.2 μg Al/g Ca), rendering both equally applicable to human measurements. Copyright © 2016 Elsevier Ltd. All rights reserved.
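    The inverse-variance weighted mean used in the photopeak-fitting model combines repeated measurements with weights w_i = 1/σ_i², so the combined value is Σw_i x_i / Σw_i with uncertainty 1/√(Σw_i). A minimal sketch with placeholder numbers (not the study's data):

```python
import numpy as np

def inverse_variance_mean(values, sigmas):
    """Inverse-variance weighted mean and its uncertainty.

    mean  = sum(w_i * x_i) / sum(w_i)   with   w_i = 1 / sigma_i^2
    sigma = 1 / sqrt(sum(w_i))
    """
    x = np.asarray(values, dtype=float)
    w = 1.0 / np.asarray(sigmas, dtype=float) ** 2
    mean = np.sum(w * x) / np.sum(w)
    return mean, 1.0 / np.sqrt(np.sum(w))

# placeholder aluminum signals (arbitrary units) from repeated counting periods
al_signal, al_err = inverse_variance_mean([4.8, 5.6, 5.1, 4.4], [0.9, 1.3, 0.7, 1.1])
```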

  20. Measuring Glial Metabolism in Repetitive Brain Trauma and Alzheimer’s Disease

    DTIC Science & Technology

    2016-09-01

    A range of denoising methods for dynamic MRS was compared. Six denoising methods were considered: singular value decomposition (SVD), wavelet, sliding window, sliding window with Gaussian weighting, spline, and spectral improvement methods. The project improved the software required for the data analysis by developing and testing these six denoising methods.
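    A minimal sketch of the first method listed above, SVD-based denoising: the dynamic series is arranged as a matrix (repetitions × spectral points) and reconstructed from its leading singular components only. The rank and the toy data are illustrative choices, not parameters from the report.

```python
import numpy as np

def svd_denoise(data, rank):
    """Denoise a (n_timepoints, n_spectral_points) matrix by keeping only
    the leading `rank` singular components (a low-rank approximation)."""
    U, s, Vt = np.linalg.svd(data, full_matrices=False)
    s[rank:] = 0.0
    return (U * s) @ Vt

# toy dynamic MRS series: one slowly decaying peak plus noise
rng = np.random.default_rng(5)
ppm = np.linspace(0.0, 4.0, 512)
peak = np.exp(-((ppm - 2.0) / 0.05) ** 2)
series = np.outer(np.exp(-np.arange(64) / 40.0), peak)     # 64 repetitions
noisy = series + 0.2 * rng.normal(size=series.shape)
denoised = svd_denoise(noisy, rank=2)
```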

  1. Decomposition of the Inequality of Income Distribution by Income Types—Application for Romania

    NASA Astrophysics Data System (ADS)

    Andrei, Tudorel; Oancea, Bogdan; Richmond, Peter; Dhesi, Gurjeet; Herteliu, Claudiu

    2017-09-01

    This paper identifies the salient factors that characterize the inequality of the income distribution in Romania. Data analysis is rigorously carried out using techniques borrowed from classical statistics (the Theil index), and a decomposition of the inequalities measured by the Theil index is performed. This study relies on an exhaustive data set (11.1 million records for 2014) of total personal gross income of Romanian citizens.
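    The Theil T index used above is T = (1/n) Σ (x_i/μ) ln(x_i/μ), and it decomposes additively. The sketch below shows the common between-/within-subgroup decomposition on synthetic incomes; note that the paper decomposes by income type (a source decomposition), which is a different split, and the log-normal samples here are invented.

```python
import numpy as np

def theil_t(x):
    """Theil T index: (1/n) * sum((x_i / mu) * ln(x_i / mu)); requires x > 0."""
    x = np.asarray(x, dtype=float)
    s = x / x.mean()
    return float(np.mean(s * np.log(s)))

def theil_decomposition(x, groups):
    """Classical subgroup decomposition T = T_between + T_within.

    T_between = sum_g (n_g/n) (mu_g/mu) ln(mu_g/mu)
    T_within  = sum_g (n_g/n) (mu_g/mu) T_g
    """
    x, groups = np.asarray(x, dtype=float), np.asarray(groups)
    n, mu = len(x), x.mean()
    t_between = t_within = 0.0
    for g in np.unique(groups):
        xg = x[groups == g]
        share, ratio = len(xg) / n, xg.mean() / mu
        t_between += share * ratio * np.log(ratio)
        t_within += share * ratio * theil_t(xg)
    return t_between, t_within

# toy incomes for two regions drawn from log-normal distributions
rng = np.random.default_rng(6)
income = np.concatenate([rng.lognormal(7.5, 0.5, 5000), rng.lognormal(8.0, 0.7, 5000)])
region = np.repeat(["A", "B"], 5000)
tb, tw = theil_decomposition(income, region)
assert np.isclose(tb + tw, theil_t(income))   # the decomposition is exact
```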

  2. Teaching a New Method of Partial Fraction Decomposition to Senior Secondary Students: Results and Analysis from a Pilot Study

    ERIC Educational Resources Information Center

    Man, Yiu-Kwong; Leung, Allen

    2012-01-01

    In this paper, we introduce a new approach to compute the partial fraction decompositions of rational functions and describe the results of its trials at three secondary schools in Hong Kong. The data were collected via quizzes, questionnaire and interviews. In general, according to the responses from the teachers and students concerned, this new…
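    For checking classroom results, the decomposition can also be computed symbolically; sympy's apart function implements the standard algorithm. The rational function below is an invented example, not one from the pilot study.

```python
from sympy import symbols, apart, together, simplify

x = symbols('x')

# An invented example: decompose (3x + 5) / ((x - 1)(x + 2)) into partial fractions.
expr = (3 * x + 5) / ((x - 1) * (x + 2))
decomposed = apart(expr, x)           # equals 8/(3*(x - 1)) + 1/(3*(x + 2))
assert simplify(together(decomposed) - expr) == 0   # recombining recovers the original
```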

  3. Long-term litter decomposition controlled by manganese redox cycling

    PubMed Central

    Keiluweit, Marco; Nico, Peter; Harmon, Mark E.; Mao, Jingdong; Pett-Ridge, Jennifer; Kleber, Markus

    2015-01-01

    Litter decomposition is a keystone ecosystem process impacting nutrient cycling and productivity, soil properties, and the terrestrial carbon (C) balance, but the factors regulating decomposition rate are still poorly understood. Traditional models assume that the rate is controlled by litter quality, relying on parameters such as lignin content as predictors. However, a strong correlation has been observed between the manganese (Mn) content of litter and decomposition rates across a variety of forest ecosystems. Here, we show that long-term litter decomposition in forest ecosystems is tightly coupled to Mn redox cycling. Over 7 years of litter decomposition, microbial transformation of litter was paralleled by variations in Mn oxidation state and concentration. A detailed chemical imaging analysis of the litter revealed that fungi recruit and redistribute unreactive Mn2+ provided by fresh plant litter to produce oxidative Mn3+ species at sites of active decay, with Mn eventually accumulating as insoluble Mn3+/4+ oxides. Formation of reactive Mn3+ species coincided with the generation of aromatic oxidation products, providing direct proof of the previously posited role of Mn3+-based oxidizers in the breakdown of litter. Our results suggest that the litter-decomposing machinery at our coniferous forest site depends on the ability of plants and microbes to supply, accumulate, and regenerate short-lived Mn3+ species in the litter layer. This observation indicates that biogeochemical constraints on bioavailability, mobility, and reactivity of Mn in the plant–soil system may have a profound impact on litter decomposition rates. PMID:26372954

  4. [Progress in Raman spectroscopic measurement of methane hydrate].

    PubMed

    Xu, Feng; Zhu, Li-hua; Wu, Qiang; Xu, Long-jun

    2009-09-01

    Complex thermodynamic and kinetic problems are involved in methane hydrate formation and decomposition, and these problems are crucial to understanding the mechanisms of hydrate formation and decomposition. Such information was difficult to obtain accurately because methane hydrate is only stable under low-temperature and high-pressure conditions; only in recent years has methane hydrate been measured in situ using Raman spectroscopy. Raman spectroscopy, a non-destructive and non-invasive technique, is used to study the vibrational modes of molecules, and studies of methane hydrate using Raman spectroscopy have developed over the last decade. The Raman spectra of CH4 in the vapor phase and in the hydrate phase are presented in this paper. The progress in research on methane hydrate formation thermodynamics, formation kinetics, decomposition kinetics and decomposition mechanisms based on Raman spectroscopic measurements in the laboratory and the deep sea is reviewed. Formation thermodynamic studies, including in situ observation of the formation conditions of methane hydrate, analysis of structure, and determination of hydrate cage occupancy and hydration numbers using Raman spectroscopy, are emphasized. Regarding formation kinetics, research on the variation in hydrate cage amounts and methane concentration in water during hydrate growth using Raman spectroscopy is also introduced. For methane hydrate decomposition, investigations of the decomposition mechanism, the variation of the cage occupancy ratio, and the formulation of the decomposition rate in porous media are described. Important directions for future hydrate research based on Raman spectroscopy are discussed.

  5. Sensitivity of decomposition rates of soil organic matter with respect to simultaneous changes in temperature and moisture

    NASA Astrophysics Data System (ADS)

    Sierra, Carlos A.; Trumbore, Susan E.; Davidson, Eric A.; Vicca, Sara; Janssens, I.

    2015-03-01

    The sensitivity of soil organic matter decomposition to global environmental change is a topic of prominent relevance for the global carbon cycle. Decomposition depends on multiple factors that are being altered simultaneously as a result of global environmental change; therefore, it is important to study the sensitivity of the rates of soil organic matter decomposition with respect to multiple and interacting drivers. In this manuscript, we present an analysis of the potential response of decomposition rates to simultaneous changes in temperature and moisture. To address this problem, we first present a theoretical framework to study the sensitivity of soil organic matter decomposition when multiple driving factors change simultaneously. We then apply this framework to models and data at different levels of abstraction: (1) to a mechanistic model that addresses the limitation of enzyme activity by simultaneous effects of temperature and soil water content, the latter controlling substrate supply and oxygen concentration for microbial activity; (2) to different mathematical functions used to represent temperature and moisture effects on decomposition in biogeochemical models. To contrast model predictions at these two levels of organization, we compiled different data sets of observed responses in field and laboratory studies. Then we applied our conceptual framework to: (3) observations of heterotrophic respiration at the ecosystem level; (4) laboratory experiments looking at the response of heterotrophic respiration to independent changes in moisture and temperature; and (5) ecosystem-level experiments manipulating soil temperature and water content simultaneously.

  6. Long-term litter decomposition controlled by manganese redox cycling.

    PubMed

    Keiluweit, Marco; Nico, Peter; Harmon, Mark E; Mao, Jingdong; Pett-Ridge, Jennifer; Kleber, Markus

    2015-09-22

    Litter decomposition is a keystone ecosystem process impacting nutrient cycling and productivity, soil properties, and the terrestrial carbon (C) balance, but the factors regulating decomposition rate are still poorly understood. Traditional models assume that the rate is controlled by litter quality, relying on parameters such as lignin content as predictors. However, a strong correlation has been observed between the manganese (Mn) content of litter and decomposition rates across a variety of forest ecosystems. Here, we show that long-term litter decomposition in forest ecosystems is tightly coupled to Mn redox cycling. Over 7 years of litter decomposition, microbial transformation of litter was paralleled by variations in Mn oxidation state and concentration. A detailed chemical imaging analysis of the litter revealed that fungi recruit and redistribute unreactive Mn(2+) provided by fresh plant litter to produce oxidative Mn(3+) species at sites of active decay, with Mn eventually accumulating as insoluble Mn(3+/4+) oxides. Formation of reactive Mn(3+) species coincided with the generation of aromatic oxidation products, providing direct proof of the previously posited role of Mn(3+)-based oxidizers in the breakdown of litter. Our results suggest that the litter-decomposing machinery at our coniferous forest site depends on the ability of plants and microbes to supply, accumulate, and regenerate short-lived Mn(3+) species in the litter layer. This observation indicates that biogeochemical constraints on bioavailability, mobility, and reactivity of Mn in the plant-soil system may have a profound impact on litter decomposition rates.

  7. Efficient analysis of three dimensional EUV mask induced imaging artifacts using the waveguide decomposition method

    NASA Astrophysics Data System (ADS)

    Shao, Feng; Evanschitzky, Peter; Fühner, Tim; Erdmann, Andreas

    2009-10-01

    This paper employs the waveguide decomposition method as an efficient rigorous electromagnetic field (EMF) solver to investigate three-dimensional mask-induced imaging artifacts in EUV lithography. The major mask-diffraction-induced imaging artifacts are first identified by applying a Zernike analysis to the mask near-field spectrum of 2D lines/spaces. Three-dimensional mask features such as 22 nm semidense/dense contacts/posts, isolated elbows and line-ends are then investigated in terms of lithographic results. After that, the 3D mask-induced imaging artifacts such as feature-orientation-dependent best focus shift, process window asymmetries, and other aberration-like phenomena are explored for the studied mask features. The simulation results can help lithographers to understand the origins of EUV-specific imaging artifacts and to devise illumination- and feature-dependent strategies for their compensation in the optical proximity correction (OPC) for EUV masks. Finally, an efficient approach using the Zernike analysis together with the waveguide decomposition technique is proposed to characterize the impact of mask properties on the future OPC process.

  8. A Four-Stage Hybrid Model for Hydrological Time Series Forecasting

    PubMed Central

    Di, Chongli; Yang, Xiaohua; Wang, Xiaochao

    2014-01-01

    Hydrological time series forecasting remains a difficult task due to its complicated nonlinear, non-stationary and multi-scale characteristics. To solve this difficulty and improve the prediction accuracy, a novel four-stage hybrid model is proposed for hydrological time series forecasting based on the principle of ‘denoising, decomposition and ensemble’. The proposed model has four stages, i.e., denoising, decomposition, components prediction and ensemble. In the denoising stage, the empirical mode decomposition (EMD) method is utilized to reduce the noises in the hydrological time series. Then, an improved method of EMD, the ensemble empirical mode decomposition (EEMD), is applied to decompose the denoised series into a number of intrinsic mode function (IMF) components and one residual component. Next, the radial basis function neural network (RBFNN) is adopted to predict the trend of all of the components obtained in the decomposition stage. In the final ensemble prediction stage, the forecasting results of all of the IMF and residual components obtained in the third stage are combined to generate the final prediction results, using a linear neural network (LNN) model. For illustration and verification, six hydrological cases with different characteristics are used to test the effectiveness of the proposed model. The proposed hybrid model performs better than conventional single models, the hybrid models without denoising or decomposition and the hybrid models based on other methods, such as the wavelet analysis (WA)-based hybrid models. In addition, the denoising and decomposition strategies decrease the complexity of the series and reduce the difficulties of the forecasting. With its effective denoising and accurate decomposition ability, high prediction precision and wide applicability, the new model is very promising for complex time series forecasting. This new forecast model is an extension of nonlinear prediction models. PMID:25111782

  9. Experimental and DFT simulation study of a novel felodipine cocrystal: Characterization, dissolving properties and thermal decomposition kinetics.

    PubMed

    Yang, Caiqin; Guo, Wei; Lin, Yulong; Lin, Qianqian; Wang, Jiaojiao; Wang, Jing; Zeng, Yanli

    2018-05-30

    In this study, a new cocrystal of felodipine (Fel) and glutaric acid (Glu) with a high dissolution rate was developed using the solvent ultrasonic method. The prepared cocrystal was characterized using X-ray powder diffraction, differential scanning calorimetry, thermogravimetric (TG) analysis, and infrared (IR) spectroscopy. To provide basic information about the optimization of pharmaceutical preparations of Fel-based cocrystals, this work investigated the thermal decomposition kinetics of the Fel-Glu cocrystal through non-isothermal thermogravimetry. Density functional theory (DFT) simulations were also performed on the Fel monomer and the trimolecular cocrystal compound for exploring the mechanisms underlying hydrogen bonding formation and thermal decomposition. Combined results of IR spectroscopy and DFT simulation verified that the Fel-Glu cocrystal formed via the NH⋯OC and CO⋯HO hydrogen bonds between Fel and Glu at the ratio of 1:2. The TG/derivative TG curves indicated that the thermal decomposition of the Fel-Glu cocrystal underwent a two-step process. The apparent activation energy (Ea) and pre-exponential factor (A) of the thermal decomposition for the first stage were 84.90 kJ mol-1 and 7.03 × 10^7 min-1, respectively. The mechanism underlying thermal decomposition possibly involved nucleation and growth, with the integral mechanism function G(α) = α^(3/2). DFT calculation revealed that the hydrogen bonding between Fel and Glu weakened the terminal methoxyl, methyl, and ethyl groups in the Fel molecule. As a result, these groups were lost along with the Glu molecule in the first thermal decomposition. In conclusion, the formed cocrystal exhibited different thermal decomposition kinetics and showed different Ea, A, and shelf life from the intact active pharmaceutical ingredient. Copyright © 2018 Elsevier B.V. All rights reserved.

  10. Dynamics of Potassium Release and Adsorption on Rice Straw Residue

    PubMed Central

    Li, Jifu; Lu, Jianwei; Li, Xiaokun; Ren, Tao; Cong, Rihuan; Zhou, Li

    2014-01-01

    Straw application can not only increase crop yields, improve soil structure and enrich soil fertility, but can also enhance water and nutrient retention. The aim of this study was to ascertain the relationships between straw decomposition and the release-adsorption processes of K+. This study increases the understanding of the roles played by agricultural crop residues in the soil environment, informs more effective straw recycling and provides a method for reducing potassium loss. The influence of straw decomposition on the K+ release rate in paddy soil under flooded conditions was studied using incubation experiments, which indicated the decomposition process of rice straw could be divided into two main stages: (a) a rapid decomposition stage from 0 to 60 d and (b) a slow decomposition stage from 60 to 110 d. However, the characteristics of the straw potassium release were different from those of the overall straw decomposition, as 90% of total K was released by the third day of the study. The batch K-sorption experiments showed that crop residues could adsorb K+ from the ambient environment, and that this adsorption depended on the decomposition period and the external K+ concentration. In addition, a number of materials or binding sites were observed on straw residues using IR analysis, indicating possible coupling sites for K+ ions. The aqueous solution experiments indicated that raw straw could absorb water at 3.88 g g−1, and this capacity rose to its maximum 15 d after the start of incubation. All of the experiments demonstrated that crop residues could absorb large amounts of aqueous solution to preserve K+ indirectly during the initial decomposition period. These crop residues could also directly adsorb K+ via physical and chemical adsorption in the later period, allowing part of this K+ to be absorbed by plants for the next growing season. PMID:24587364

  11. Dynamics of potassium release and adsorption on rice straw residue.

    PubMed

    Li, Jifu; Lu, Jianwei; Li, Xiaokun; Ren, Tao; Cong, Rihuan; Zhou, Li

    2014-01-01

    Straw application can not only increase crop yields, improve soil structure and enrich soil fertility, but can also enhance water and nutrient retention. The aim of this study was to ascertain the relationships between straw decomposition and the release-adsorption processes of K(+). This study increases the understanding of the roles played by agricultural crop residues in the soil environment, informs more effective straw recycling and provides a method for reducing potassium loss. The influence of straw decomposition on the K(+) release rate in paddy soil under flooded conditions was studied using incubation experiments, which indicated the decomposition process of rice straw could be divided into two main stages: (a) a rapid decomposition stage from 0 to 60 d and (b) a slow decomposition stage from 60 to 110 d. However, the characteristics of the straw potassium release were different from those of the overall straw decomposition, as 90% of total K was released by the third day of the study. The batch K-sorption experiments showed that crop residues could adsorb K(+) from the ambient environment, and that this adsorption depended on the decomposition period and the external K(+) concentration. In addition, a number of materials or binding sites were observed on straw residues using IR analysis, indicating possible coupling sites for K(+) ions. The aqueous solution experiments indicated that raw straw could absorb water at 3.88 g g(-1), and this capacity rose to its maximum 15 d after the start of incubation. All of the experiments demonstrated that crop residues could absorb large amounts of aqueous solution to preserve K(+) indirectly during the initial decomposition period. These crop residues could also directly adsorb K(+) via physical and chemical adsorption in the later period, allowing part of this K(+) to be absorbed by plants for the next growing season.

  12. A four-stage hybrid model for hydrological time series forecasting.

    PubMed

    Di, Chongli; Yang, Xiaohua; Wang, Xiaochao

    2014-01-01

    Hydrological time series forecasting remains a difficult task due to its complicated nonlinear, non-stationary and multi-scale characteristics. To solve this difficulty and improve the prediction accuracy, a novel four-stage hybrid model is proposed for hydrological time series forecasting based on the principle of 'denoising, decomposition and ensemble'. The proposed model has four stages, i.e., denoising, decomposition, components prediction and ensemble. In the denoising stage, the empirical mode decomposition (EMD) method is utilized to reduce the noises in the hydrological time series. Then, an improved method of EMD, the ensemble empirical mode decomposition (EEMD), is applied to decompose the denoised series into a number of intrinsic mode function (IMF) components and one residual component. Next, the radial basis function neural network (RBFNN) is adopted to predict the trend of all of the components obtained in the decomposition stage. In the final ensemble prediction stage, the forecasting results of all of the IMF and residual components obtained in the third stage are combined to generate the final prediction results, using a linear neural network (LNN) model. For illustration and verification, six hydrological cases with different characteristics are used to test the effectiveness of the proposed model. The proposed hybrid model performs better than conventional single models, the hybrid models without denoising or decomposition and the hybrid models based on other methods, such as the wavelet analysis (WA)-based hybrid models. In addition, the denoising and decomposition strategies decrease the complexity of the series and reduce the difficulties of the forecasting. With its effective denoising and accurate decomposition ability, high prediction precision and wide applicability, the new model is very promising for complex time series forecasting. This new forecast model is an extension of nonlinear prediction models.

  13. Resolving Some Paradoxes in the Thermal Decomposition Mechanism of Acetaldehyde

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sivaramakrishnan, Raghu; Michael, Joe V.; Harding, Lawrence B.

    2015-07-16

    The mechanism for the thermal decomposition of acetaldehyde has been revisited with an analysis of literature kinetics experiments using theoretical kinetics. The present modeling study was motivated by recent observations, with very sensitive diagnostics, of some unexpected products in high temperature micro-tubular reactor experiments on the thermal decomposition of CH3CHO and its deuterated analogs, CH3CDO, CD3CHO, and CD3CDO. The observations of these products prompted the authors of these studies to suggest that the enol tautomer, CH2CHOH (vinyl alcohol), is a primary intermediate in the thermal decomposition of acetaldehyde. The present modeling efforts on acetaldehyde decomposition incorporate a master equation re-analysis of the CH3CHO potential energy surface (PES). The lowest energy process on this PES is an isomerization of CH3CHO to CH2CHOH. However, the subsequent product channels for CH2CHOH are substantially higher in energy, and the only unimolecular process that can be thermally accessed is a re-isomerization to CH3CHO. The incorporation of these new theoretical kinetics predictions into models for selected literature experiments on CH3CHO thermal decomposition confirms our earlier experiment- and theory-based conclusions that the dominant decomposition process in CH3CHO at high temperatures is C-C bond fission with a minor contribution (~10-20%) from the roaming mechanism to form CH4 and CO. The present modeling efforts also incorporate a master-equation analysis of the H + CH2CHOH potential energy surface. This bimolecular reaction is the primary mechanism for removal of CH2CHOH, which can accumulate to minor amounts at high temperatures, T > 1000 K, in most lab-scale experiments that use large initial concentrations of CH3CHO. Our modeling efforts indicate that the observation of ketene, water and acetylene in the recent micro-tubular experiments is primarily due to bimolecular reactions of CH3CHO and CH2CHOH with H-atoms, and has no bearing on the unimolecular decomposition mechanism of CH3CHO. The present simulations also indicate that experiments using these micro-tubular reactors, when interpreted with the aid of high-level theoretical calculations and kinetics modeling, can offer insights into the chemistry of elusive intermediates in the high temperature pyrolysis of organic molecules.

  14. Organic and inorganic decomposition products from the thermal desorption of atmospheric particles

    NASA Astrophysics Data System (ADS)

    Williams, B. J.; Zhang, Y.; Zuo, X.; Martinez, R. E.; Walker, M. J.; Kreisberg, N. M.; Goldstein, A. H.; Docherty, K. S.; Jimenez, J. L.

    2015-12-01

    Atmospheric aerosol composition is often analyzed using thermal desorption techniques to evaporate samples and deliver organic or inorganic molecules to various designs of detectors for identification and quantification. The organic aerosol (OA) fraction is composed of thousands of individual compounds, some with nitrogen- and sulfur-containing functionality, and often contains oligomeric material, much of which may be susceptible to decomposition upon heating. Here we analyze thermal decomposition products as measured by a thermal desorption aerosol gas chromatograph (TAG) capable of separating thermal decomposition products from thermally stable molecules. The TAG impacts particles onto a collection and thermal desorption (CTD) cell, and upon completion of sample collection, heats and transfers the sample in a helium flow up to 310 °C. Desorbed molecules are refocused at the head of a GC column that is held at 45 °C, and any volatile decomposition products pass directly through the column and into an electron impact quadrupole mass spectrometer (MS). Analysis of the sample introduction (thermal decomposition) period reveals contributions of NO+ (m/z 30), NO2+ (m/z 46), SO+ (m/z 48), and SO2+ (m/z 64), derived from either inorganic or organic particle-phase nitrate and sulfate. CO2+ (m/z 44) makes up a major component of the decomposition signal, along with smaller contributions from other organic components that vary with the type of aerosol contributing to the signal (e.g., m/z 53, 82 observed here for isoprene-derived secondary OA). All of these ions are important for ambient aerosol analyzed with the aerosol mass spectrometer (AMS), suggesting similarity of the thermal desorption processes in both instruments. Ambient observations of these decomposition products compared to organic, nitrate, and sulfate mass concentrations measured by an AMS reveal good correlation, with improved correlations for OA when compared to the AMS oxygenated OA (OOA) component. TAG signal found in the traditional compound elution time period reveals higher correlations with AMS hydrocarbon-like OA (HOA) combined with the fraction of OOA that is less oxygenated. The potential to quantify nitrate and sulfate aerosol mass concentrations using the TAG system is explored through analysis of ammonium sulfate and ammonium nitrate standards. While chemical standards display a linear response in the TAG system, re-desorptions of the CTD cell following ambient sample analysis show some signal carryover on sulfate and organics, and new desorption methods should be developed to improve throughput. Future standards should be composed of complex organic/inorganic mixtures, similar to what is found in the atmosphere, and perhaps will more accurately account for any aerosol mixture effects on compositional quantification.

  15. Organic and inorganic decomposition products from the thermal desorption of atmospheric particles

    NASA Astrophysics Data System (ADS)

    Williams, Brent J.; Zhang, Yaping; Zuo, Xiaochen; Martinez, Raul E.; Walker, Michael J.; Kreisberg, Nathan M.; Goldstein, Allen H.; Docherty, Kenneth S.; Jimenez, Jose L.

    2016-04-01

    Atmospheric aerosol composition is often analyzed using thermal desorption techniques to evaporate samples and deliver organic or inorganic molecules to various designs of detectors for identification and quantification. The organic aerosol (OA) fraction is composed of thousands of individual compounds, some with nitrogen- and sulfur-containing functionality, and often contains oligomeric material, much of which may be susceptible to decomposition upon heating. Here we analyze thermal decomposition products as measured by a thermal desorption aerosol gas chromatograph (TAG) capable of separating thermal decomposition products from thermally stable molecules. The TAG impacts particles onto a collection and thermal desorption (CTD) cell, and upon completion of sample collection, heats and transfers the sample in a helium flow up to 310 °C. Desorbed molecules are refocused at the head of a gas chromatography column that is held at 45 °C, and any volatile decomposition products pass directly through the column and into an electron impact quadrupole mass spectrometer. Analysis of the sample introduction (thermal decomposition) period reveals contributions of NO+ (m/z 30), NO2+ (m/z 46), SO+ (m/z 48), and SO2+ (m/z 64), derived from either inorganic or organic particle-phase nitrate and sulfate. CO2+ (m/z 44) makes up a major component of the decomposition signal, along with smaller contributions from other organic components that vary with the type of aerosol contributing to the signal (e.g., m/z 53, 82 observed here for isoprene-derived secondary OA). All of these ions are important for ambient aerosol analyzed with the aerosol mass spectrometer (AMS), suggesting similarity of the thermal desorption processes in both instruments. Ambient observations of these decomposition products compared to organic, nitrate, and sulfate mass concentrations measured by an AMS reveal good correlation, with improved correlations for OA when compared to the AMS oxygenated OA (OOA) component. TAG signal found in the traditional compound elution time period reveals higher correlations with AMS hydrocarbon-like OA (HOA) combined with the fraction of OOA that is less oxygenated. The potential to quantify nitrate and sulfate aerosol mass concentrations using the TAG system is explored through analysis of ammonium sulfate and ammonium nitrate standards. While chemical standards display a linear response in the TAG system, re-desorptions of the CTD cell following ambient sample analysis show some signal carryover on sulfate and organics, and new desorption methods should be developed to improve throughput. Future standards should be composed of complex organic/inorganic mixtures, similar to what is found in the atmosphere, and perhaps will more accurately account for any aerosol mixture effects on compositional quantification.

  16. Organic and inorganic decomposition products from the thermal desorption of atmospheric particles

    DOE PAGES

    Williams, Brent J.; Zhang, Yaping; Zuo, Xiaochen; ...

    2016-04-11

    Here, atmospheric aerosol composition is often analyzed using thermal desorption techniques to evaporate samples and deliver organic or inorganic molecules to various designs of detectors for identification and quantification. The organic aerosol (OA) fraction is composed of thousands of individual compounds, some with nitrogen- and sulfur-containing functionality, and often contains oligomeric material, much of which may be susceptible to decomposition upon heating. Here we analyze thermal decomposition products as measured by a thermal desorption aerosol gas chromatograph (TAG) capable of separating thermal decomposition products from thermally stable molecules. The TAG impacts particles onto a collection and thermal desorption (CTD) cell, and upon completion of sample collection, heats and transfers the sample in a helium flow up to 310 °C. Desorbed molecules are refocused at the head of a gas chromatography column that is held at 45 °C, and any volatile decomposition products pass directly through the column and into an electron impact quadrupole mass spectrometer. Analysis of the sample introduction (thermal decomposition) period reveals contributions of NO+ (m/z 30), NO2+ (m/z 46), SO+ (m/z 48), and SO2+ (m/z 64), derived from either inorganic or organic particle-phase nitrate and sulfate. CO2+ (m/z 44) makes up a major component of the decomposition signal, along with smaller contributions from other organic components that vary with the type of aerosol contributing to the signal (e.g., m/z 53, 82 observed here for isoprene-derived secondary OA). All of these ions are important for ambient aerosol analyzed with the aerosol mass spectrometer (AMS), suggesting similarity of the thermal desorption processes in both instruments. Ambient observations of these decomposition products compared to organic, nitrate, and sulfate mass concentrations measured by an AMS reveal good correlation, with improved correlations for OA when compared to the AMS oxygenated OA (OOA) component. TAG signal found in the traditional compound elution time period reveals higher correlations with AMS hydrocarbon-like OA (HOA) combined with the fraction of OOA that is less oxygenated. The potential to quantify nitrate and sulfate aerosol mass concentrations using the TAG system is explored through analysis of ammonium sulfate and ammonium nitrate standards. While chemical standards display a linear response in the TAG system, re-desorptions of the CTD cell following ambient sample analysis show some signal carryover on sulfate and organics, and new desorption methods should be developed to improve throughput. Future standards should be composed of complex organic/inorganic mixtures, similar to what is found in the atmosphere, and perhaps will more accurately account for any aerosol mixture effects on compositional quantification.

  17. An asymptotic induced numerical method for the convection-diffusion-reaction equation

    NASA Technical Reports Server (NTRS)

    Scroggs, Jeffrey S.; Sorensen, Danny C.

    1988-01-01

    A parallel algorithm for the efficient solution of a time dependent reaction convection diffusion equation with small parameter on the diffusion term is presented. The method is based on a domain decomposition that is dictated by singular perturbation analysis. The analysis is used to determine regions where certain reduced equations may be solved in place of the full equation. Parallelism is evident at two levels. Domain decomposition provides parallelism at the highest level, and within each domain there is ample opportunity to exploit parallelism. Run time results demonstrate the viability of the method.

  18. Critical Analysis of Nitramine Decomposition Data: Product Distributions from HMX and RDX Decomposition

    DTIC Science & Technology

    1985-06-01

    12. It was stated that analysis of the gaseous products showed that they consisted of N2O, NO, N2, CO, CO2, H2CO and traces of N... The products of... IR, UV and mass spectrometry. These were (yields summarized in Table 1) as follows: No. 1: N2O, NO, CO2, CO, HCN, CH2O, and H2O. NO2 and a trace... Ramirez, "Reaction of Gem-Nitronitroso Compounds with Triethyl Phosphite," Tetrahedron, Vol. 29, p. 4195, 1973. J. Jappy and P.N. Preston

  19. Raman analysis of non stoichiometric Ni1-δO

    NASA Astrophysics Data System (ADS)

    Dubey, Paras; Choudhary, K. K.; Kaurav, Netram

    2018-04-01

    The thermal decomposition method was used to synthesize non-stoichiometric nickel oxide at different sintering temperatures up to 1100 °C. The structures of the synthesized compounds were analyzed by X-ray diffraction (XRD), and the magnetic ordering was studied with the help of Raman scattering spectroscopy for the samples sintered at different temperatures. It was found that the stoichiometry of the samples changes with sintering temperature, and hence the intensity of the two-magnon band changes. These results were interpreted as follows: as the decomposition temperature increases, the defects present in the non-stoichiometric nickel oxide are healed, and the antiferromagnetic spin correlations change accordingly.

  20. Empirical mode decomposition for analyzing acoustical signals

    NASA Technical Reports Server (NTRS)

    Huang, Norden E. (Inventor)

    2005-01-01

    The present invention discloses a computer-implemented signal analysis method based on the Hilbert-Huang Transformation (HHT) for analyzing acoustical signals, which are assumed to be nonlinear and nonstationary. The Empirical Mode Decomposition (EMD) and the Hilbert Spectral Analysis (HSA) are used to obtain the HHT. Essentially, the acoustical signal is decomposed into Intrinsic Mode Function components (IMFs). Once the invention decomposes the acoustic signal into its constituent components, all operations such as analyzing, identifying, and removing unwanted signals can be performed on these components. Upon transforming the IMFs into the Hilbert spectrum, the acoustical signal may be compared with other acoustical signals.
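
    A minimal sketch of one sifting step and the subsequent Hilbert step is given below; it uses cubic-spline envelopes of the local extrema, which is the usual textbook construction, and is not the patented implementation. Stopping criteria, boundary handling, and the full IMF extraction loop are omitted.

```python
# Sketch of the core HHT ingredients: one EMD sifting iteration (extrema ->
# spline envelopes -> mean removal) and the Hilbert step that yields the
# instantaneous amplitude and frequency of an IMF. Illustrative only.
import numpy as np
from scipy.signal import find_peaks, hilbert
from scipy.interpolate import CubicSpline

def sift_once(t, x):
    """One sifting iteration: subtract the mean of the upper and lower envelopes."""
    peaks, _ = find_peaks(x)
    troughs, _ = find_peaks(-x)
    if len(peaks) < 2 or len(troughs) < 2:
        return x                                   # too few extrema to build envelopes
    upper = CubicSpline(t[peaks], x[peaks])(t)     # upper envelope
    lower = CubicSpline(t[troughs], x[troughs])(t) # lower envelope
    return x - 0.5 * (upper + lower)

def hilbert_attributes(imf, dt):
    """Instantaneous amplitude and frequency of an IMF via the analytic signal."""
    analytic = hilbert(imf)
    amplitude = np.abs(analytic)
    phase = np.unwrap(np.angle(analytic))
    frequency = np.gradient(phase, dt) / (2 * np.pi)
    return amplitude, frequency

# Toy acoustic-like signal: two tones plus a slow drift.
dt = 1e-3
t = np.arange(0, 2, dt)
x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 5 * t) + 0.2 * t
candidate = sift_once(t, x)        # repeated sifting would yield the first IMF
amp, freq = hilbert_attributes(candidate, dt)
print(amp.mean(), freq.mean())
```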

  1. Using Decomposition Analysis to Identify Modifiable Racial Disparities in the Distribution of Blood Pressure in the United States.

    PubMed

    Basu, Sanjay; Hong, Anthony; Siddiqi, Arjumand

    2015-08-15

    To lower the prevalence of hypertension and racial disparities in hypertension, public health agencies have attempted to reduce modifiable risk factors for high blood pressure, such as excess sodium intake or high body mass index. In the present study, we used decomposition methods to identify how population-level reductions in key risk factors for hypertension could reshape entire population distributions of blood pressure and associated disparities among racial/ethnic groups. We compared blood pressure distributions among non-Hispanic white, non-Hispanic black, and Mexican-American persons using data from the US National Health and Nutrition Examination Survey (2003-2010). When using standard adjusted logistic regression analysis, we found that differences in body mass index were the only significant explanatory correlate to racial disparities in blood pressure. By contrast, our decomposition approach provided more nuanced revelations; we found that disparities in hypertension related to tobacco use might be masked by differences in body mass index that significantly increase the disparities between black and white participants. Analysis of disparities between white and Mexican-American participants also reveal hidden relationships between tobacco use, body mass index, and blood pressure. Decomposition offers an approach to understand how modifying risk factors might alter population-level health disparities in overall outcome distributions that can be obscured by standard regression analyses. © The Author 2015. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health.
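
    The study decomposes entire blood-pressure distributions, which is more involved than can be shown briefly; the sketch below instead illustrates the basic decomposition idea with a classic two-group Blinder-Oaxaca decomposition of a mean outcome gap. The covariate names (bmi, sodium, smoker) and the data are hypothetical placeholders, and this is not the authors' method.

```python
# Simplified two-group Blinder-Oaxaca decomposition of a mean outcome gap into an
# 'explained' part (differences in risk-factor levels) and an 'unexplained' part
# (differences in coefficients). Illustrative only; covariates are hypothetical.
import numpy as np

def ols(X, y):
    """Ordinary least squares with an intercept column added."""
    X1 = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta

def oaxaca(X_a, y_a, X_b, y_b):
    """Decompose mean(y_a) - mean(y_b) into explained and unexplained parts."""
    beta_a, beta_b = ols(X_a, y_a), ols(X_b, y_b)
    xbar_a = np.concatenate([[1.0], X_a.mean(axis=0)])
    xbar_b = np.concatenate([[1.0], X_b.mean(axis=0)])
    explained = (xbar_a - xbar_b) @ beta_b          # gap due to covariate levels
    unexplained = xbar_a @ (beta_a - beta_b)        # gap due to coefficients
    return explained, unexplained

# Toy data: columns = [bmi, sodium, smoker]; outcome = systolic blood pressure.
rng = np.random.default_rng(0)
X_a = rng.normal([29, 3.6, 0.25], [5, 1.0, 0.4], size=(500, 3))
X_b = rng.normal([27, 3.4, 0.20], [5, 1.0, 0.4], size=(500, 3))
y_a = 90 + X_a @ [1.2, 3.0, 4.0] + rng.normal(0, 8, 500)
y_b = 88 + X_b @ [1.0, 3.0, 3.0] + rng.normal(0, 8, 500)
explained, unexplained = oaxaca(X_a, y_a, X_b, y_b)
print(f"gap={y_a.mean()-y_b.mean():.2f}  explained={explained:.2f}  unexplained={unexplained:.2f}")
```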

  2. Multiset singular value decomposition for joint analysis of multi-modal data: application to fingerprint analysis

    NASA Astrophysics Data System (ADS)

    Emge, Darren K.; Adalı, Tülay

    2014-06-01

    As the availability and use of imaging methodologies continues to increase, there is a fundamental need to jointly analyze data that are collected from multiple modalities. This analysis is further complicated when the size or resolution of the images differs, implying that the observation lengths of the modalities can vary widely. To address this expanding landscape, we introduce the multiset singular value decomposition (MSVD), which can perform a joint analysis on any number of modalities regardless of their individual observation lengths. Simulations show the inter-modal relationships across the different modalities that are revealed by the MSVD. We apply the MSVD to forensic fingerprint analysis, showing that MSVD joint analysis successfully identifies relevant similarities for further analysis, significantly reducing the processing time required. This reduction takes the technique from a laboratory method to a useful forensic tool with applications across the law enforcement and security regimes.

  3. Controllable pneumatic generator based on the catalytic decomposition of hydrogen peroxide

    NASA Astrophysics Data System (ADS)

    Kim, Kyung-Rok; Kim, Kyung-Soo; Kim, Soohyun

    2014-07-01

    This paper presents a novel compact and controllable pneumatic generator that uses hydrogen peroxide decomposition. A fuel micro-injector using a piston-pump mechanism is devised and tested to control the chemical decomposition rate. By controlling the injection rate, the feedback controller maintains the pressure of the gas reservoir at a desired pressure level. Thermodynamic analysis and experiments are performed to demonstrate the feasibility of the proposed pneumatic generator. Using a prototype of the pneumatic generator, it takes 6 s to reach 3.5 bars with a reservoir volume of 200 ml at room temperature, which is sufficiently rapid and effective to maintain the repetitive lifting of a 1 kg mass.

  4. Controllable pneumatic generator based on the catalytic decomposition of hydrogen peroxide.

    PubMed

    Kim, Kyung-Rok; Kim, Kyung-Soo; Kim, Soohyun

    2014-07-01

    This paper presents a novel compact and controllable pneumatic generator that uses hydrogen peroxide decomposition. A fuel micro-injector using a piston-pump mechanism is devised and tested to control the chemical decomposition rate. By controlling the injection rate, the feedback controller maintains the pressure of the gas reservoir at a desired pressure level. Thermodynamic analysis and experiments are performed to demonstrate the feasibility of the proposed pneumatic generator. Using a prototype of the pneumatic generator, it takes 6 s to reach 3.5 bars with a reservoir volume of 200 ml at room temperature, which is sufficiently rapid and effective to maintain the repetitive lifting of a 1 kg mass.

  5. New spectrophotometric assay for pilocarpine.

    PubMed

    El-Masry, S; Soliman, R

    1980-07-01

    A quick method for the determination of pilocarpine in eye drops in the presence of decomposition products is described. The method involves complexation of the alkaloid with bromocresol purple at pH 6. After treatment with 0.1N NaOH, the liberated dye is measured at 580 nm. The method has a relative standard deviation of 1.99%, and has been successfully applied to the analysis of 2 batches of pilocarpine eye drops. The recommended method was also used to monitor the stability of a pilocarpine nitrate solution in 0.05N NaOH at 65 degrees C. The BPC method failed to detect any significant decomposition after 2 h incubation, but the recommended method revealed 87.5% decomposition.

  6. Solid-state reaction kinetics of neodymium doped magnesium hydrogen phosphate system

    NASA Astrophysics Data System (ADS)

    Gupta, Rashmi; Slathia, Goldy; Bamzai, K. K.

    2018-05-01

    Neodymium-doped magnesium hydrogen phosphate (NdMHP) crystals were grown by using the gel encapsulation technique. Structural characterization of the grown crystals was carried out by single-crystal X-ray diffraction (XRD), which revealed that NdMHP crystals crystallize in the orthorhombic crystal system with space group Pbca. The kinetics of the decomposition of the grown crystals has been studied by non-isothermal analysis. The decomposition temperatures and weight losses were estimated from thermogravimetric/differential thermal analysis (TG/DTA) in conjunction with DSC studies. The various steps involved in the thermal decomposition of the material have been analysed using the Horowitz-Metzger, Coats-Redfern and Piloyan-Novikova equations for evaluating the various kinetic parameters.
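
    As an illustration of the kind of kinetic analysis named here (not the authors' data or code), the Coats-Redfern linearization for a first-order model, ln[g(α)/T²] = ln[AR/(βEa)] − Ea/(RT) with g(α) = −ln(1−α), can be fit by linear regression of ln[g(α)/T²] against 1/T; the slope gives −Ea/R and the intercept the pre-exponential factor. The conversion data below are synthetic.

```python
# Coats-Redfern estimate of the activation energy from non-isothermal TG data,
# assuming a first-order model g(alpha) = -ln(1 - alpha). Synthetic data only;
# 'beta_heat' is the heating rate (K/min), so A comes out in min^-1.
import numpy as np

R = 8.314  # J mol^-1 K^-1

def coats_redfern(T, alpha, beta_heat):
    """Return (Ea in kJ/mol, pre-exponential A) from T (K) and conversion alpha."""
    g = -np.log(1.0 - alpha)                   # first-order integral model
    y = np.log(g / T**2)
    slope, intercept = np.polyfit(1.0 / T, y, 1)
    Ea = -slope * R                            # J/mol
    A = beta_heat * Ea / R * np.exp(intercept) # from intercept = ln(A R / (beta Ea))
    return Ea / 1000.0, A

# Toy conversion curve (not experimental data); the recovered Ea is approximate.
T = np.linspace(900, 1100, 50)
alpha = 1.0 - np.exp(-np.exp(21.0 - 175e3 / (R * T)))
alpha = np.clip(alpha, 1e-4, 0.999)
print(coats_redfern(T, alpha, beta_heat=10.0))
```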

  7. A multisite validation of whole slide imaging for primary diagnosis using standardized data collection and analysis.

    PubMed

    Wack, Katy; Drogowski, Laura; Treloar, Murray; Evans, Andrew; Ho, Jonhan; Parwani, Anil; Montalto, Michael C

    2016-01-01

    Text-based reporting and manual arbitration for whole slide imaging (WSI) validation studies are labor intensive and do not allow for consistent, scalable, and repeatable data collection or analysis. The objective of this study was to establish a method of data capture and analysis using standardized codified checklists and predetermined synoptic discordance tables and to use these methods in a pilot multisite validation study. Fifteen case report form checklists were generated from the College of American Pathology cancer protocols. Prior to data collection, all hypothetical pairwise comparisons were generated, and a level of harm was determined for each possible discordance. Four sites with four pathologists each generated 264 independent reads of 33 cases. Preestablished discordance tables were applied to determine site by site and pooled accuracy, intrareader/intramodality, and interreader intramodality error rates. Over 10,000 hypothetical pairwise comparisons were evaluated and assigned harm in discordance tables. The average difference in error rates between WSI and glass, as compared to ground truth, was 0.75% with a lower bound of 3.23% (95% confidence interval). Major discordances occurred on challenging cases, regardless of modality. The average inter-reader agreement across sites for glass was 76.5% (weighted kappa of 0.68) and for digital it was 79.1% (weighted kappa of 0.72). These results demonstrate the feasibility and utility of employing standardized synoptic checklists and predetermined discordance tables to gather consistent, comprehensive diagnostic data for WSI validation studies. This method of data capture and analysis can be applied in large-scale multisite WSI validations.
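
    Two of the computational ingredients mentioned here are easy to sketch: enumerating every hypothetical pairwise comparison of checklist responses so a harm level can be assigned before data collection, and scoring inter-reader agreement with a weighted kappa. The checklist options and harm rules below are hypothetical placeholders, and scikit-learn's cohen_kappa_score is used for the kappa; this is not the study's actual codified checklist.

```python
# Sketch of two ingredients of the standardized analysis: (1) pre-building a
# discordance table over all hypothetical pairwise comparisons of one checklist
# field, and (2) weighted kappa for inter-reader agreement on that field.
from itertools import combinations
from sklearn.metrics import cohen_kappa_score

# (1) Pre-build a discordance table for one (hypothetical) checklist field.
margin_options = ["negative", "close (<1 mm)", "positive"]
discordance_table = {}
for a, b in combinations(margin_options, 2):
    # Assign a harm level to each possible disagreement before any data are read.
    harm = "major" if "positive" in (a, b) else "minor"
    discordance_table[frozenset((a, b))] = harm

def harm_of(read1, read2):
    """Look up the pre-assigned harm level for a pair of reads (None if concordant)."""
    return None if read1 == read2 else discordance_table[frozenset((read1, read2))]

print(harm_of("negative", "positive"))          # -> 'major'

# (2) Weighted kappa between two readers over the same ordinal field.
reader_glass   = ["negative", "negative", "close (<1 mm)", "positive", "negative"]
reader_digital = ["negative", "close (<1 mm)", "close (<1 mm)", "positive", "negative"]
codes = {opt: i for i, opt in enumerate(margin_options)}          # ordinal coding
k = cohen_kappa_score([codes[r] for r in reader_glass],
                      [codes[r] for r in reader_digital],
                      weights="linear")
print(f"weighted kappa = {k:.2f}")
```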

  8. Multiscale Characterization of PM2.5 in Southern Taiwan based on Noise-assisted Multivariate Empirical Mode Decomposition and Time-dependent Intrinsic Correlation

    NASA Astrophysics Data System (ADS)

    Hsiao, Y. R.; Tsai, C.

    2017-12-01

    As the WHO Air Quality Guideline indicates, ambient air pollution exposes world populations to the threat of fatal symptoms (e.g. heart disease, lung cancer, asthma, etc.), raising concerns about air pollution sources and related factors. This study presents a novel approach to investigating the multiscale variations of PM2.5 in southern Taiwan over the past decade, together with four meteorological influencing factors (temperature, relative humidity, precipitation and wind speed), based on the Noise-assisted Multivariate Empirical Mode Decomposition (NAMEMD) algorithm, Hilbert Spectral Analysis (HSA) and the Time-dependent Intrinsic Correlation (TDIC) method. The NAMEMD algorithm is a fully data-driven approach designed for nonlinear and nonstationary multivariate signals, and is performed to decompose multivariate signals into a collection of channels of Intrinsic Mode Functions (IMFs). The TDIC method is an EMD-based method using a set of sliding window sizes to quantify localized correlation coefficients for multiscale signals. With the alignment property and quasi-dyadic filter bank of the NAMEMD algorithm, one is able to produce the same number of IMFs for all variables and estimate the cross-correlation in a more accurate way. The performance of the spectral representation of the NAMEMD-HSA method is compared with Complementary Ensemble Empirical Mode Decomposition/Hilbert Spectral Analysis (CEEMD-HSA) and wavelet analysis. The NAMEMD-based TDIC analysis is then compared with the CEEMD-based TDIC analysis and traditional correlation analysis.
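
    The time-dependent intrinsic correlation idea, i.e., correlating a pair of matched IMFs inside sliding windows whose length scales with a characteristic period of the IMFs, can be sketched as below. The window-sizing rule used here is a simplified assumption, not the exact TDIC formulation, and the IMFs are synthetic.

```python
# Simplified sketch of a TDIC-style calculation: the correlation between two
# matched IMFs is evaluated in sliding windows, with the window length tied to a
# characteristic period of the IMF pair. Simplification of the published method.
import numpy as np
from scipy.signal import hilbert

def mean_period(imf, dt):
    """Characteristic period from the mean instantaneous frequency of the IMF."""
    phase = np.unwrap(np.angle(hilbert(imf)))
    mean_freq = np.mean(np.gradient(phase, dt)) / (2 * np.pi)
    return 1.0 / abs(mean_freq)

def tdic(imf_x, imf_y, dt, n_periods=2):
    """Sliding-window Pearson correlation between two IMFs of equal length."""
    half = int(n_periods * max(mean_period(imf_x, dt), mean_period(imf_y, dt)) / dt) // 2
    corr = np.full(len(imf_x), np.nan)
    for i in range(half, len(imf_x) - half):
        xs, ys = imf_x[i - half:i + half + 1], imf_y[i - half:i + half + 1]
        corr[i] = np.corrcoef(xs, ys)[0, 1]
    return corr

# Toy example: a PM2.5-like IMF and a temperature-like IMF sharing a seasonal cycle.
dt = 1.0                                    # one sample per day
t = np.arange(0, 3 * 365, dt)
imf_pm = np.sin(2 * np.pi * t / 365) + 0.1 * np.random.default_rng(1).normal(size=len(t))
imf_temp = np.sin(2 * np.pi * t / 365 + 0.3)
print(np.nanmean(tdic(imf_pm, imf_temp, dt)))
```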

  9. Optimal Averages for Nonlinear Signal Decompositions - Another Alternative for Empirical Mode Decomposition

    DTIC Science & Technology

    2014-10-01

    ...nonlinear and non-stationary signals. It aims at decomposing a signal, via an iterative sifting procedure, into several intrinsic mode functions (IMFs), and each of the... function, optimization. 1 Introduction: It is well known that nonlinear and non-stationary signal analysis is important and difficult. Historically...

  10. Analysis of HEMCL Railgun Insulator Damage

    DTIC Science & Technology

    2006-06-01

    ...pyrolytic epoxy degradation and glass fiber softening and liquefaction in the insulator, it is determined that rail-to-rail plasmas are present behind... produces epoxy decomposition products in the form of gases, oils, waxes and solid chars (heavily cross-linked residues) [4]. The nature of the... pyrolytic decomposition product (wax) of the epoxy as in the fired specimens. Figures 6 and 7 are typical examples of glass fiber softening and...

  11. Kinetic study of the thermal decomposition of uranium metaphosphate, U(PO3)4, into uranium pyrophosphate, UP2O7

    NASA Astrophysics Data System (ADS)

    Yang, Hee-Chul; Kim, Hyung-Ju; Lee, Si-Young; Yang, In-Hwan; Chung, Dong-Yong

    2017-06-01

    The thermochemical properties of uranium compounds have attracted much interest in relation to thermochemical treatments and the safe disposal of radioactive waste bearing uranium compounds. The characteristics of the thermal decomposition of uranium metaphosphate, U(PO3)4, into uranium pyrophosphate, UP2O7, have been studied from the viewpoint of reaction kinetics and operative mechanisms. A mixture of U(PO3)4 and UP2O7 was prepared from the pyrolysis residue of uranium-bearing spent TBP. A kinetic analysis of the reaction of U(PO3)4 into UP2O7 was conducted using an isoconversional method and a master plot method on the basis of data from a non-isothermal thermogravimetric analysis. The thermal decomposition of U(PO3)4 into UP2O7 followed a single-step reaction with an activation energy of 175.29 ± 1.58 kJ mol-1. The most probable kinetic model was determined to be a nucleation and nuclei-growth model, the Avrami-Erofeev model (A3), which indicates that there are certain restrictions on the growth of UP2O7 nuclei during the solid-state decomposition of U(PO3)4.

  12. FACETS: multi-faceted functional decomposition of protein interaction networks.

    PubMed

    Seah, Boon-Siew; Bhowmick, Sourav S; Dewey, C Forbes

    2012-10-15

    The availability of large-scale curated protein interaction datasets has given rise to the opportunity to investigate higher level organization and modularity within the protein-protein interaction (PPI) network using graph theoretic analysis. Despite the recent progress, systems-level analysis of high-throughput PPIs remains a daunting task because of the amount of data they present. In this article, we propose a novel PPI network decomposition algorithm called FACETS in order to make sense of the deluge of interaction data using Gene Ontology (GO) annotations. FACETS finds not just a single functional decomposition of the PPI network, but a multi-faceted atlas of functional decompositions that portray alternative perspectives of the functional landscape of the underlying PPI network. Each facet in the atlas represents a distinct interpretation of how the network can be functionally decomposed and organized. Our algorithm maximizes the interpretative value of the atlas by optimizing inter-facet orthogonality and intra-facet cluster modularity. We tested our algorithm on the global networks from IntAct, compared it with gold standard datasets from MIPS and KEGG, and demonstrated the performance of FACETS. We also performed a case study that illustrates the utility of our approach. Supplementary data are available at Bioinformatics online. Our software is available freely for non-commercial purposes from: http://www.cais.ntu.edu.sg/~assourav/Facets/

  13. CombAlign: a code for generating a one-to-many sequence alignment from a set of pairwise structure-based sequence alignments.

    PubMed

    Zhou, Carol L Ecale

    2015-01-01

    In order to better define regions of similarity among related protein structures, it is useful to identify the residue-residue correspondences among proteins. Few codes exist for constructing a one-to-many multiple sequence alignment derived from a set of structure or sequence alignments, and a need was evident for creating such a tool for combining pairwise structure alignments that would allow for insertion of gaps in the reference structure. This report describes a new Python code, CombAlign, which takes as input a set of pairwise sequence alignments (which may be structure based) and generates a one-to-many, gapped, multiple structure- or sequence-based sequence alignment (MSSA). The use and utility of CombAlign was demonstrated by generating gapped MSSAs using sets of pairwise structure-based sequence alignments between structure models of the matrix protein (VP40) and pre-small/secreted glycoprotein (sGP) of Reston Ebolavirus and the corresponding proteins of several other filoviruses. The gapped MSSAs revealed structure-based residue-residue correspondences, which enabled identification of structurally similar versus differing regions in the Reston proteins compared to each of the other corresponding proteins. CombAlign is a new Python code that generates a one-to-many, gapped, multiple structure- or sequence-based sequence alignment (MSSA) given a set of pairwise sequence alignments (which may be structure based). CombAlign has utility in assisting the user in distinguishing structurally conserved versus divergent regions on a reference protein structure relative to other closely related proteins. CombAlign was developed in Python 2.6, and the source code is available for download from the GitHub code repository.
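
    A toy sketch of the underlying bookkeeping (not CombAlign's actual code) is shown below: each pairwise alignment is given as two equal-length gapped strings with the reference first, and the sketch builds a reference-indexed table of residue-residue correspondences. Unlike CombAlign, it does not insert gaps into the reference, and the sequences are hypothetical.

```python
# Toy sketch of merging pairwise (structure-based) sequence alignments into a
# one-to-many correspondence table keyed by reference residue position. This is
# not CombAlign itself: CombAlign also handles gaps inserted into the reference,
# which this simplified version does not.
def correspondence_table(reference, pairwise_alignments):
    """pairwise_alignments: {name: (ref_aligned, other_aligned)} with '-' for gaps."""
    table = {i: {} for i in range(len(reference))}       # ref position -> {name: residue}
    for name, (ref_aln, other_aln) in pairwise_alignments.items():
        assert len(ref_aln) == len(other_aln)
        ref_pos = 0
        for r, o in zip(ref_aln, other_aln):
            if r != "-":                                  # a real reference residue
                table[ref_pos][name] = o                  # aligned residue or '-'
                ref_pos += 1
    return table

# Hypothetical reference sequence and two pairwise alignments against it.
reference = "MKTAYI"
alignments = {
    "virusA": ("MKTAYI", "MRTA-I"),
    "virusB": ("MKTAYI", "MQT-YI"),
}
table = correspondence_table(reference, alignments)
for i, residue in enumerate(reference):
    row = " ".join(f"{name}:{table[i].get(name, '.')}" for name in alignments)
    print(f"{i:2d} {residue}  {row}")
```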

  14. Improving prediction of heterodimeric protein complexes using combination with pairwise kernel.

    PubMed

    Ruan, Peiying; Hayashida, Morihiro; Akutsu, Tatsuya; Vert, Jean-Philippe

    2018-02-19

    Since many proteins become functional only after they interact with their partner proteins and form protein complexes, it is essential to identify the sets of proteins that form complexes. Therefore, several computational methods have been proposed to predict complexes from the topology and structure of experimental protein-protein interaction (PPI) networks. These methods work well to predict complexes involving at least three proteins, but generally fail at identifying complexes involving only two different proteins, called heterodimeric complexes or heterodimers. There is however an urgent need for efficient methods to predict heterodimers, since the majority of known protein complexes are precisely heterodimers. In this paper, we use three promising kernel functions: the Min kernel and two pairwise kernels, the Metric Learning Pairwise Kernel (MLPK) and the Tensor Product Pairwise Kernel (TPPK). We also consider the normalization forms of the Min kernel. Then, we combine the Min kernel or its normalization form with one of the pairwise kernels by plugging the former into the latter as the base kernel. We applied kernels based on PPI, domain, phylogenetic profile, and subcellular localization properties to predicting heterodimers. Then, we evaluate our method by employing C-Support Vector Classification (C-SVC), carrying out 10-fold cross-validation, and calculating the average F-measures. The results suggest that the combination of the normalized Min kernel and MLPK leads to the best F-measure and improved on the performance of our previous work, which had been the best existing method so far. We propose new methods to predict heterodimers, using a machine learning-based approach. We train a support vector machine (SVM) to discriminate interacting vs non-interacting protein pairs, based on information extracted from PPI, domain, phylogenetic profiles and subcellular localization. We evaluate in detail new kernel functions to encode these data, and report prediction performance that outperforms the state-of-the-art.
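
    The two pairwise kernels named here have standard closed forms: for protein pairs (a,b) and (c,d) with a base kernel K over proteins, TPPK((a,b),(c,d)) = K(a,c)K(b,d) + K(a,d)K(b,c) and MLPK((a,b),(c,d)) = (K(a,c) − K(a,d) − K(b,c) + K(b,d))². The sketch below builds both from a precomputed base-kernel matrix and feeds the result to an SVM with a precomputed kernel; the protein features and labels are random placeholders, not the paper's data.

```python
# Sketch of the Tensor Product Pairwise Kernel (TPPK) and the Metric Learning
# Pairwise Kernel (MLPK) built from a precomputed base kernel over proteins, used
# with an SVM on precomputed kernels. Features and labels are placeholders.
import numpy as np
from sklearn.svm import SVC

def tppk(K, pairs_a, pairs_b):
    """TPPK((a,b),(c,d)) = K(a,c)K(b,d) + K(a,d)K(b,c) for two lists of protein pairs."""
    G = np.zeros((len(pairs_a), len(pairs_b)))
    for i, (a, b) in enumerate(pairs_a):
        for j, (c, d) in enumerate(pairs_b):
            G[i, j] = K[a, c] * K[b, d] + K[a, d] * K[b, c]
    return G

def mlpk(K, pairs_a, pairs_b):
    """MLPK((a,b),(c,d)) = (K(a,c) - K(a,d) - K(b,c) + K(b,d))**2."""
    G = np.zeros((len(pairs_a), len(pairs_b)))
    for i, (a, b) in enumerate(pairs_a):
        for j, (c, d) in enumerate(pairs_b):
            G[i, j] = (K[a, c] - K[a, d] - K[b, c] + K[b, d]) ** 2
    return G

# Placeholder protein features and a linear base kernel.
rng = np.random.default_rng(0)
X = rng.normal(size=(30, 8))                 # 30 'proteins', 8 features each
K = X @ X.T                                  # base kernel matrix

train_pairs = [(i, i + 1) for i in range(0, 20, 2)]          # 10 protein pairs
labels = np.array([0, 1] * 5)                                # heterodimer yes/no
G_train = tppk(K, train_pairs, train_pairs)                  # or mlpk(...)
clf = SVC(kernel="precomputed").fit(G_train, labels)

test_pairs = [(20, 21), (22, 23)]
G_test = tppk(K, test_pairs, train_pairs)                    # rows: test, cols: train
print(clf.predict(G_test))
```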

  15. KinLinks: Software Toolkit for Kinship Analysis and Pedigree Generation from NGS Datasets

    DTIC Science & Technology

    2015-04-21

    Retinitis pigmentosa families 2110 and 2111 of 52 individuals across 6 generations (Figure 5a), and 54 geographically diverse samples (Supplementary Table...relationships within the Retinitis pigmentosa family. Machine Learning Classifier for pairwise kinship prediction Ten features were identified for training...family (Figure 4b), and the Retinitis pigmentosa family (Figure 5b). The auto-generated pedigrees were graphed as well as in family-tree format using

  16. Analysis of Neuronal Sequences Using Pairwise Biases

    DTIC Science & Technology

    2015-08-27

    semantic memory (knowledge of facts) and implicit memory (e.g., how to ride a bike ). Evidence for the participation of the hippocampus in the formation of...hippocampal formation in an attempt to be cured of severe epileptic seizures. Although the surgery was successful in regards to reducing the frequency and...very different from each other in many ways including duration and number of spikes. Still, these sequences share a similar trend in the general order

  17. Refined genetic mapping of X-linked Charcot-Marie-Tooth neuropathy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fain, P.R.; Barker, D.F.; Chance, P.F.

    1994-02-01

    Genetic linkage studies were conducted in four multigenerational families with X-linked Charcot-Marie-Tooth disease (CMTX), using 12 highly polymorphic short-tandem-repeat markers for the pericentromeric region of the X Chromosome. Pairwise linkage analysis with individual markers confirmed tight linkage of CMTX to the pericentromeric region in each family. Multipoint analyses strongly support the order DXS337-CMTX-DXS441-(DXS56, PGK1). 38 refs., 2 figs., 1 tab.

  18. Active and total microbial communities in forest soil are largely different and highly stratified during decomposition.

    PubMed

    Baldrian, Petr; Kolařík, Miroslav; Stursová, Martina; Kopecký, Jan; Valášková, Vendula; Větrovský, Tomáš; Zifčáková, Lucia; Snajdr, Jaroslav; Rídl, Jakub; Vlček, Cestmír; Voříšková, Jana

    2012-02-01

    Soils of coniferous forest ecosystems are important for the global carbon cycle, and the identification of active microbial decomposers is essential for understanding organic matter transformation in these ecosystems. By the independent analysis of DNA and RNA, whole communities of bacteria and fungi and their active members were compared in the topsoil of a Picea abies forest during a period of organic matter decomposition. Fungi quantitatively dominate the microbial community in the litter horizon, while the organic horizon shows comparable amounts of fungal and bacterial biomass. Active microbial populations obtained by RNA analysis exhibit diversity similar to that of DNA-derived populations, but significantly differ in the composition of microbial taxa. Several highly active taxa, especially fungal ones, show low abundance or even absence in the DNA pool. Bacteria and especially fungi are often distinctly associated with a particular soil horizon. Fungal communities are less even than bacterial ones and show higher relative abundances of dominant species. While dominant bacterial species are distributed across the studied ecosystem, the distribution of dominant fungi is often spatially restricted, as they are only recovered at some locations. The sequences of the cbhI gene, encoding cellobiohydrolase (exocellulase), an essential enzyme for cellulose decomposition, were compared in the soil metagenome and metatranscriptome and assigned to their producers. The litter horizon exhibits higher diversity and a higher proportion of expressed sequences than the organic horizon. Cellulose decomposition is mediated by highly diverse fungal populations largely distinct between soil horizons. The results indicate that low-abundance species make an important contribution to decomposition processes in soils.

  19. Convergent cross-mapping and pairwise asymmetric inference.

    PubMed

    McCracken, James M; Weigel, Robert S

    2014-12-01

    Convergent cross-mapping (CCM) is a technique for computing specific kinds of correlations between sets of time series. It was introduced by Sugihara et al. [Science 338, 496 (2012)] and is reported to be "a necessary condition for causation" capable of distinguishing causality from standard correlation. We show that the relationships between CCM correlations proposed by Sugihara et al. do not, in general, agree with intuitive concepts of "driving" and as such should not be considered indicative of causality. For simple linear and nonlinear systems, whether the CCM algorithm implies causality is shown to be a function of the system parameters. For example, in a circuit containing a single resistor and inductor, both voltage and current can be identified as the driver depending on the frequency of the source voltage. We show, however, that the CCM algorithm can be modified to identify relationships between pairs of time series that are consistent with intuition for the example systems for which CCM causality analysis provided nonintuitive driver identifications. This modification of the CCM algorithm is introduced as "pairwise asymmetric inference" (PAI), and examples of its use are presented.
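
    A compact sketch of the cross-mapping step (shadow-manifold embedding of one series, nearest-neighbour weighting, and estimation of the other series) is given below. It is a simplified illustration of CCM as described by Sugihara et al., not the PAI modification introduced in this paper, and the coupled-map example is a standard toy system.

```python
# Simplified convergent cross-mapping (CCM) skill: embed series Y with time delays,
# find nearest neighbours on the shadow manifold, and use their time indices to
# estimate X; the correlation between estimated and true X is the cross-map skill.
import numpy as np

def delay_embed(y, E, tau):
    """Shadow manifold: rows are [y(t), y(t-tau), ..., y(t-(E-1)tau)]."""
    n = len(y) - (E - 1) * tau
    return np.column_stack([y[(E - 1) * tau - k * tau : (E - 1) * tau - k * tau + n]
                            for k in range(E)])

def ccm_skill(x, y, E=3, tau=1):
    """Cross-map x from the shadow manifold of y; return corr(x_est, x)."""
    M = delay_embed(y, E, tau)
    x_aligned = x[(E - 1) * tau:]
    x_est = np.empty(len(M))
    for i in range(len(M)):
        d = np.linalg.norm(M - M[i], axis=1)
        d[i] = np.inf                              # exclude the point itself
        nn = np.argsort(d)[:E + 1]                 # E+1 nearest neighbours
        w = np.exp(-d[nn] / max(d[nn][0], 1e-12))
        w /= w.sum()
        x_est[i] = w @ x_aligned[nn]
    return np.corrcoef(x_est, x_aligned)[0, 1]

# Toy coupled logistic maps: x drives y.
n = 500
x, y = np.empty(n), np.empty(n)
x[0], y[0] = 0.4, 0.2
for t in range(n - 1):
    x[t + 1] = x[t] * (3.8 - 3.8 * x[t])
    y[t + 1] = y[t] * (3.5 - 3.5 * y[t] - 0.1 * x[t])
print("x cross-mapped from y:", round(ccm_skill(x, y), 3))
print("y cross-mapped from x:", round(ccm_skill(y, x), 3))
```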

  20. Systematic chemical-genetic and chemical-chemical interaction datasets for prediction of compound synergism

    PubMed Central

    Wildenhain, Jan; Spitzer, Michaela; Dolma, Sonam; Jarvik, Nick; White, Rachel; Roy, Marcia; Griffiths, Emma; Bellows, David S.; Wright, Gerard D.; Tyers, Mike

    2016-01-01

    The network structure of biological systems suggests that effective therapeutic intervention may require combinations of agents that act synergistically. However, a dearth of systematic chemical combination datasets has limited the development of predictive algorithms for chemical synergism. Here, we report two large datasets of linked chemical-genetic and chemical-chemical interactions in the budding yeast Saccharomyces cerevisiae. We screened 5,518 unique compounds against 242 diverse yeast gene deletion strains to generate an extended chemical-genetic matrix (CGM) of 492,126 chemical-gene interaction measurements. This CGM dataset contained 1,434 genotype-specific inhibitors, termed cryptagens. We selected 128 structurally diverse cryptagens and tested all pairwise combinations to generate a benchmark dataset of 8,128 pairwise chemical-chemical interaction tests for synergy prediction, termed the cryptagen matrix (CM). An accompanying database resource called ChemGRID was developed to enable analysis, visualisation and downloads of all data. The CGM and CM datasets will facilitate the benchmarking of computational approaches for synergy prediction, as well as chemical structure-activity relationship models for anti-fungal drug discovery. PMID:27874849

  1. Characterization of demographic expansions from pairwise comparisons of linked microsatellite haplotypes.

    PubMed

    Navascués, Miguel; Hardy, Olivier J; Burgarella, Concetta

    2009-03-01

    This work extends the methods of demographic inference based on the distribution of pairwise genetic differences between individuals (mismatch distribution) to the case of linked microsatellite data. Population genetics theory describes the distribution of mutations among a sample of genes under different demographic scenarios. However, the actual number of mutations can rarely be deduced from DNA polymorphisms. The inclusion of mutation models in theoretical predictions can improve the performance of statistical methods. We have developed a maximum-pseudolikelihood estimator for the parameters that characterize a demographic expansion for a series of linked loci evolving under a stepwise mutation model. Those loci would correspond to DNA polymorphisms of linked microsatellites (such as those found on the Y chromosome or the chloroplast genome). The proposed method was evaluated with simulated data sets and with a data set of chloroplast microsatellites that showed signal for demographic expansion in a previous study. The results show that inclusion of a mutational model in the analysis improves the estimates of the age of expansion in the case of older expansions.
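
    The mismatch distribution itself, i.e., the distribution of pairwise genetic distances among sampled haplotypes, is straightforward to compute. The sketch below uses the summed absolute difference in repeat counts across linked microsatellite loci as the pairwise distance, which is one natural choice under a stepwise mutation model; the haplotypes are synthetic and the paper's maximum-pseudolikelihood estimator of expansion parameters is not reproduced.

```python
# Mismatch distribution for linked microsatellite haplotypes: pairwise distance =
# summed absolute difference in repeat counts over loci (a natural distance under
# a stepwise mutation model). Synthetic haplotypes, for illustration only.
from itertools import combinations
import numpy as np

def mismatch_distribution(haplotypes):
    """Histogram of pairwise distances among haplotypes (rows = haplotypes, cols = loci)."""
    dists = [int(np.abs(h1 - h2).sum()) for h1, h2 in combinations(haplotypes, 2)]
    counts = np.bincount(dists)
    return counts / counts.sum()                      # relative frequencies

# Synthetic chloroplast-like haplotypes: 40 individuals, 6 linked loci.
rng = np.random.default_rng(3)
ancestral = np.array([12, 9, 15, 11, 8, 10])
haplotypes = ancestral + rng.poisson(1.0, size=(40, 6)) * rng.choice([-1, 1], size=(40, 6))
freqs = mismatch_distribution(haplotypes)
for k, f in enumerate(freqs):
    print(f"{k:2d} differences: {f:.3f}")
```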

  2. Prediction of microsleeps using pairwise joint entropy and mutual information between EEG channels.

    PubMed

    Baseer, Abdul; Weddell, Stephen J; Jones, Richard D

    2017-07-01

    Microsleeps are involuntary and brief instances of complete loss of responsiveness, typically of 0.5-15 s duration. They adversely affect performance in extended attention-driven jobs and can be fatal. Our aim was to predict microsleeps from 16-channel EEG signals. Two information theoretic concepts - pairwise joint entropy and mutual information - were independently used to continuously extract features from EEG signals. k-nearest neighbor (kNN) with k = 3 was used to calculate both joint entropy and mutual information. Highly correlated features were discarded and the rest were ranked using the Fisher score followed by an average of 3-fold cross-validation area under the curve of the receiver operating characteristic (AUCROC). The leave-one-out method (LOOM) was performed to test the performance of the microsleep prediction system on independent data. The best prediction, 0.25 s ahead, achieved AUCROC, sensitivity, precision, geometric mean (GM), and φ of 0.93, 0.68, 0.33, 0.75, and 0.38, respectively, using joint entropy with a single linear discriminant analysis (LDA) classifier.
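
    A reduced sketch of the feature pipeline is given below: mutual information between every pair of EEG channels is estimated with a kNN-based estimator (scikit-learn's mutual_info_regression, with 3 neighbours as in the paper) per window, and the resulting pairwise features feed an LDA classifier. The joint-entropy features, feature ranking, and the full leave-one-out evaluation are omitted, and the 'EEG' data and microsleep labels below are simulated placeholders.

```python
# Reduced sketch of the prediction pipeline: kNN-based mutual information (k = 3)
# between every pair of EEG channels, computed per window, as features for an LDA
# classifier. The data are simulated noise with hypothetical labels.
import numpy as np
from itertools import combinations
from sklearn.feature_selection import mutual_info_regression
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def pairwise_mi_features(window):
    """window: (n_samples, n_channels) EEG segment -> vector of pairwise MI values."""
    n_ch = window.shape[1]
    feats = []
    for i, j in combinations(range(n_ch), 2):
        mi = mutual_info_regression(window[:, [i]], window[:, j],
                                    n_neighbors=3, random_state=0)[0]
        feats.append(mi)
    return np.array(feats)

# Simulated data: 40 short windows from 4 channels (16 in the study), with
# hypothetical microsleep labels for half of them.
rng = np.random.default_rng(0)
windows = rng.normal(size=(40, 256, 4))
labels = np.array([0, 1] * 20)
X = np.array([pairwise_mi_features(w) for w in windows])
clf = LinearDiscriminantAnalysis().fit(X[:30], labels[:30])
print("held-out accuracy:", clf.score(X[30:], labels[30:]))
```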

  3. Galaxy and Mass Assembly (GAMA): small-scale anisotropic galaxy clustering and the pairwise velocity dispersion of galaxies

    NASA Astrophysics Data System (ADS)

    Loveday, J.; Christodoulou, L.; Norberg, P.; Peacock, J. A.; Baldry, I. K.; Bland-Hawthorn, J.; Brown, M. J. I.; Colless, M.; Driver, S. P.; Holwerda, B. W.; Hopkins, A. M.; Kafle, P. R.; Liske, J.; Lopez-Sanchez, A. R.; Taylor, E. N.

    2018-03-01

    The galaxy pairwise velocity dispersion (PVD) can provide important tests of non-standard gravity and galaxy formation models. We describe measurements of the PVD of galaxies in the Galaxy and Mass Assembly (GAMA) survey as a function of projected separation and galaxy luminosity. Due to the faint magnitude limit (r < 19.8) and highly complete spectroscopic sampling of the GAMA survey, we are able to reliably measure the PVD to smaller scales (r⊥ = 0.01 h^-1 Mpc) than previous work. The measured PVD at projected separations r⊥ ≲ 1 h^-1 Mpc increases nearly monotonically with increasing luminosity from σ12 ≈ 200 km s^-1 at Mr = -17 mag to σ12 ≈ 600 km s^-1 at Mr ≈ -22 mag. Analysis of the Gonzalez-Perez et al. (2014) GALFORM semi-analytic model yields no such trend of PVD with luminosity: the model overpredicts the PVD for faint galaxies. This is most likely a result of the model placing too many low-luminosity galaxies in massive haloes.

  4. Genetic Diversity and Association Studies in US Hispanic/Latino Populations: Applications in the Hispanic Community Health Study/Study of Latinos

    PubMed Central

    Conomos, Matthew P.; Laurie, Cecelia A.; Stilp, Adrienne M.; Gogarten, Stephanie M.; McHugh, Caitlin P.; Nelson, Sarah C.; Sofer, Tamar; Fernández-Rhodes, Lindsay; Justice, Anne E.; Graff, Mariaelisa; Young, Kristin L.; Seyerle, Amanda A.; Avery, Christy L.; Taylor, Kent D.; Rotter, Jerome I.; Talavera, Gregory A.; Daviglus, Martha L.; Wassertheil-Smoller, Sylvia; Schneiderman, Neil; Heiss, Gerardo; Kaplan, Robert C.; Franceschini, Nora; Reiner, Alex P.; Shaffer, John R.; Barr, R. Graham; Kerr, Kathleen F.; Browning, Sharon R.; Browning, Brian L.; Weir, Bruce S.; Avilés-Santa, M. Larissa; Papanicolaou, George J.; Lumley, Thomas; Szpiro, Adam A.; North, Kari E.; Rice, Ken; Thornton, Timothy A.; Laurie, Cathy C.

    2016-01-01

    US Hispanic/Latino individuals are diverse in genetic ancestry, culture, and environmental exposures. Here, we characterized and controlled for this diversity in genome-wide association studies (GWASs) for the Hispanic Community Health Study/Study of Latinos (HCHS/SOL). We simultaneously estimated population-structure principal components (PCs) robust to familial relatedness and pairwise kinship coefficients (KCs) robust to population structure, admixture, and Hardy-Weinberg departures. The PCs revealed substantial genetic differentiation within and among six self-identified background groups (Cuban, Dominican, Puerto Rican, Mexican, and Central and South American). To control for variation among groups, we developed a multi-dimensional clustering method to define a “genetic-analysis group” variable that retains many properties of self-identified background while achieving substantially greater genetic homogeneity within groups and including participants with non-specific self-identification. In GWASs of 22 biomedical traits, we used a linear mixed model (LMM) including pairwise empirical KCs to account for familial relatedness, PCs for ancestry, and genetic-analysis groups for additional group-associated effects. Including the genetic-analysis group as a covariate accounted for significant trait variation in 8 of 22 traits, even after we fit 20 PCs. Additionally, genetic-analysis groups had significant heterogeneity of residual variance for 20 of 22 traits, and modeling this heteroscedasticity within the LMM reduced genomic inflation for 19 traits. Furthermore, fitting an LMM that utilized a genetic-analysis group rather than a self-identified background group achieved higher power to detect previously reported associations. We expect that the methods applied here will be useful in other studies with multiple ethnic groups, admixture, and relatedness. PMID:26748518

  5. APOLLO: a quality assessment service for single and multiple protein models.

    PubMed

    Wang, Zheng; Eickholt, Jesse; Cheng, Jianlin

    2011-06-15

    We built a web server named APOLLO, which can evaluate the absolute global and local qualities of a single protein model using machine learning methods or the global and local qualities of a pool of models using a pair-wise comparison approach. Based on our evaluations on 107 CASP9 (Critical Assessment of Techniques for Protein Structure Prediction) targets, the predicted quality scores generated from our machine learning and pair-wise methods have an average per-target correlation of 0.671 and 0.917, respectively, with the true model quality scores. Based on our test on 92 CASP9 targets, our predicted absolute local qualities have an average difference of 2.60 Å with the actual distances to native structure. http://sysbio.rnet.missouri.edu/apollo/. Single and pair-wise global quality assessment software is also available at the site.

  6. An optimized time varying filtering based empirical mode decomposition method with grey wolf optimizer for machinery fault diagnosis

    NASA Astrophysics Data System (ADS)

    Zhang, Xin; Liu, Zhiwen; Miao, Qiang; Wang, Lei

    2018-03-01

    A time-varying filtering based empirical mode decomposition (TVF-EMD) method was proposed recently to solve the mode mixing problem of the EMD method. Compared with the classical EMD, TVF-EMD was proven to improve the frequency separation performance and to be robust to noise interference. However, the decomposition parameters (i.e., bandwidth threshold and B-spline order) significantly affect the decomposition results of this method. In the original TVF-EMD method, the parameter values are assigned in advance, which makes it difficult to achieve satisfactory analysis results. To solve this problem, this paper develops an optimized TVF-EMD method based on the grey wolf optimizer (GWO) algorithm for fault diagnosis of rotating machinery. Firstly, a measurement index termed the weighted kurtosis index is constructed by using the kurtosis index and the correlation coefficient. Subsequently, the optimal TVF-EMD parameters that match the input signal can be obtained by the GWO algorithm using the maximum weighted kurtosis index as the objective function. Finally, fault features can be extracted by analyzing the sensitive intrinsic mode function (IMF) with the maximum weighted kurtosis index. Simulations and comparisons highlight the performance of the TVF-EMD method for signal decomposition and verify that the bandwidth threshold and B-spline order are critical to the decomposition results. Two case studies on rotating machinery fault diagnosis demonstrate the effectiveness and advantages of the proposed method.
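
    The selection of the sensitive IMF can be sketched directly. The index below is taken as kurtosis(IMF) × |corr(IMF, raw signal)|, which is an assumed simplification combining impulsiveness with similarity to the raw signal; the paper's exact weighting, and the TVF-EMD/GWO optimization itself, are not reproduced here.

```python
# Sketch of selecting the 'sensitive' IMF via a weighted kurtosis index. The index
# here is kurtosis(IMF) * |corr(IMF, raw signal)|, an assumed form; the 'IMFs' are
# hand-made stand-ins for a real TVF-EMD decomposition.
import numpy as np
from scipy.stats import kurtosis

def weighted_kurtosis_index(imf, signal):
    corr = abs(np.corrcoef(imf, signal)[0, 1])
    return kurtosis(imf, fisher=False) * corr          # plain (non-excess) kurtosis

def select_sensitive_imf(imfs, signal):
    """Return the index of the IMF with the largest weighted kurtosis index."""
    scores = [weighted_kurtosis_index(imf, signal) for imf in imfs]
    return int(np.argmax(scores)), scores

# Toy signal: periodic impulses (bearing-fault-like) plus a smooth tone.
t = np.linspace(0, 1, 4000)
impulses = (np.sin(2 * np.pi * 25 * t) > 0.995).astype(float)
tone = np.sin(2 * np.pi * 5 * t)
signal = impulses + tone + 0.05 * np.random.default_rng(0).normal(size=t.size)
imfs = [impulses, tone]
best, scores = select_sensitive_imf(imfs, signal)
print("sensitive IMF:", best, "scores:", np.round(scores, 2))
```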

  7. Long-term litter decomposition controlled by manganese redox cycling

    DOE PAGES

    Keiluweit, Marco; Nico, Peter S.; Harmon, Mark; ...

    2015-09-08

    Litter decomposition is a keystone ecosystem process impacting nutrient cycling and productivity, soil properties, and the terrestrial carbon (C) balance, but the factors regulating decomposition rate are still poorly understood. Traditional models assume that the rate is controlled by litter quality, relying on parameters such as lignin content as predictors. However, a strong correlation has been observed between the manganese (Mn) content of litter and decomposition rates across a variety of forest ecosystems. Here, we show that long-term litter decomposition in forest ecosystems is tightly coupled to Mn redox cycling. Over 7 years of litter decomposition, microbial transformation of litter was paralleled by variations in Mn oxidation state and concentration. A detailed chemical imaging analysis of the litter revealed that fungi recruit and redistribute unreactive Mn2+ provided by fresh plant litter to produce oxidative Mn3+ species at sites of active decay, with Mn eventually accumulating as insoluble Mn3+/4+ oxides. Formation of reactive Mn3+ species coincided with the generation of aromatic oxidation products, providing direct proof of the previously posited role of Mn3+-based oxidizers in the breakdown of litter. Our results suggest that the litter-decomposing machinery at our coniferous forest site depends on the ability of plants and microbes to supply, accumulate, and regenerate short-lived Mn3+ species in the litter layer. This observation indicates that biogeochemical constraints on the bioavailability, mobility, and reactivity of Mn in the plant–soil system may have a profound impact on litter decomposition rates.

  8. Mode Analyses of Gyrokinetic Simulations of Plasma Microturbulence

    NASA Astrophysics Data System (ADS)

    Hatch, David R.

    This thesis presents analysis of the excitation and role of damped modes in gyrokinetic simulations of plasma microturbulence. In order to address this question, mode decompositions are used to analyze gyrokinetic simulation data. A mode decomposition can be constructed by projecting a nonlinearly evolved gyrokinetic distribution function onto a set of linear eigenmodes, or alternatively by constructing a proper orthogonal decomposition of the distribution function. POD decompositions are used to examine the role of damped modes in saturating ion temperature gradient driven turbulence. In order to identify the contribution of different modes to the energy sources and sinks, numerical diagnostics for a gyrokinetic energy quantity were developed for the GENE code. The use of these energy diagnostics in conjunction with POD mode decompositions demonstrates that ITG turbulence saturates largely through dissipation by damped modes at the same perpendicular spatial scales as those of the driving instabilities. This defines a picture of turbulent saturation that is very different from both traditional hydrodynamic scenarios and also many common theories for the saturation of plasma turbulence. POD mode decompositions are also used to examine the role of subdominant modes in causing magnetic stochasticity in electromagnetic gyrokinetic simulations. It is shown that the magnetic stochasticity, which appears to be ubiquitous in electromagnetic microturbulence, is caused largely by subdominant modes with tearing parity. The application of higher-order singular value decomposition (HOSVD) to the full distribution function from gyrokinetic simulations is presented. This is an effort to demonstrate the ability to characterize and extract insight from a very large, complex, and high-dimensional data-set - the 5-D (plus time) gyrokinetic distribution function.
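
    The proper orthogonal decomposition step itself can be sketched compactly: stacking snapshots of a fluctuating field as columns and taking an SVD yields the POD modes (left singular vectors), their energy fractions (squared singular values), and time coefficients. This is the generic snapshot-POD construction on toy data, not the thesis's GENE diagnostics.

```python
# Generic snapshot POD via SVD: columns of the data matrix are snapshots of a
# fluctuating field; left singular vectors are the POD modes, squared singular
# values give each mode's share of the fluctuation energy, and the rows of
# s * V^T are the time coefficients.
import numpy as np

def pod(snapshots):
    """snapshots: (n_points, n_times). Returns modes, energy fractions, time coeffs."""
    fluct = snapshots - snapshots.mean(axis=1, keepdims=True)   # remove the mean field
    U, s, Vt = np.linalg.svd(fluct, full_matrices=False)
    energy = s**2 / np.sum(s**2)
    return U, energy, (s[:, None] * Vt)

# Toy 'turbulence' data: two spatial structures with different time behaviour + noise.
rng = np.random.default_rng(0)
x = np.linspace(0, 2 * np.pi, 128)
t = np.linspace(0, 10, 200)
field = (np.outer(np.sin(x), np.cos(2 * np.pi * t)) +
         0.3 * np.outer(np.sin(3 * x), np.sin(4 * np.pi * t)) +
         0.05 * rng.normal(size=(128, 200)))
modes, energy, coeffs = pod(field)
print("energy captured by first two modes:", round(energy[:2].sum(), 3))
```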

  9. Evidence from neglect dyslexia for morphological decomposition at the early stages of orthographic-visual analysis

    PubMed Central

    Reznick, Julia; Friedmann, Naama

    2015-01-01

    This study examined whether and how the morphological structure of written words affects reading in word-based neglect dyslexia (neglexia), and what can be learned about morphological decomposition in reading from the effect of morphology on neglexia. The oral reading of 7 Hebrew-speaking participants with acquired neglexia at the word level—6 with left neglexia and 1 with right neglexia—was evaluated. The main finding was that the morphological role of the letters on the neglected side of the word affected neglect errors: When an affix appeared on the neglected side, it was neglected significantly more often than when the neglected side was part of the root; root letters on the neglected side were never omitted, whereas affixes were. Perceptual effects of length and final letter form were found for words with an affix on the neglected side, but not for words in which a root letter appeared in the neglected side. Semantic and lexical factors did not affect the participants' reading and error pattern, and neglect errors did not preserve the morpho-lexical characteristics of the target words. These findings indicate that an early morphological decomposition of words to their root and affixes occurs before access to the lexicon and to semantics, at the orthographic-visual analysis stage, and that the effects did not result from lexical feedback. The same effects of morphological structure on reading were manifested by the participants with left- and right-sided neglexia. Since neglexia is a deficit at the orthographic-visual analysis level, the effect of morphology on reading patterns in neglexia further supports that morphological decomposition occurs in the orthographic-visual analysis stage, prelexically, and that the search for the three letters of the root in Hebrew is a trigger for attention shift in neglexia. PMID:26528159

  10. Comparison of three-way and four-way calibration for the real-time quantitative analysis of drug hydrolysis in complex dynamic samples by excitation-emission matrix fluorescence.

    PubMed

    Yin, Xiao-Li; Gu, Hui-Wen; Liu, Xiao-Lu; Zhang, Shan-Hui; Wu, Hai-Long

    2018-03-05

    Multiway calibration in combination with spectroscopic technique is an attractive tool for online or real-time monitoring of target analyte(s) in complex samples. However, how to choose a suitable multiway calibration method for the resolution of spectroscopic-kinetic data is a troubling problem in practical application. In this work, for the first time, three-way and four-way fluorescence-kinetic data arrays were generated during the real-time monitoring of the hydrolysis of irinotecan (CPT-11) in human plasma by excitation-emission matrix fluorescence. Alternating normalization-weighted error (ANWE) and alternating penalty trilinear decomposition (APTLD) were used as three-way calibration for the decomposition of the three-way kinetic data array, whereas alternating weighted residual constraint quadrilinear decomposition (AWRCQLD) and alternating penalty quadrilinear decomposition (APQLD) were applied as four-way calibration to the four-way kinetic data array. The quantitative results of the two kinds of calibration models were fully compared from the perspective of predicted real-time concentrations, spiked recoveries of initial concentration, and analytical figures of merit. The comparison study demonstrated that both three-way and four-way calibration models could achieve real-time quantitative analysis of the hydrolysis of CPT-11 in human plasma under certain conditions. However, it was also found that both of them possess some critical advantages and shortcomings during the process of dynamic analysis. The conclusions obtained in this paper can provide some helpful guidance for the reasonable selection of multiway calibration models to achieve the real-time quantitative analysis of target analyte(s) in complex dynamic systems. Copyright © 2017 Elsevier B.V. All rights reserved.

  11. Decomposition reactions of (hydroxyalkyl) nitrosoureas and related compounds: possible relationship to carcinogenicity.

    PubMed

    Singer, S S

    1985-08-01

    (Hydroxyalkyl)nitrosoureas and the related cyclic carbamates N-nitrosooxazolidones are potent carcinogens. The decompositions of four such compounds, 1-nitroso-1-(2-hydroxyethyl)urea (I), 3-nitrosooxazolid-2-one (II), 1-nitroso-1-(2-hydroxypropyl)urea (III), and 5-methyl-3-nitrosooxazolid-2-one (IV), in aqueous buffers at physiological pH were studied to determine if any obvious differences in decomposition pathways could account for the variety of tumors obtained from these four compounds. The products predicted by the literature mechanisms for nitrosourea and nitrosooxazolidone decompositions (which were derived from experiments at pH 10-12) were indeed the products formed, including glycols, active carbonyl compounds, epoxides, and, from the oxazolidones, cyclic carbonates. Furthermore, it was shown that in pH 6.4-7.4 buffer epoxides were stable reaction products. However, in the presence of hepatocytes, most of the epoxide was converted to glycol. The analytical methods developed were then applied to the analysis of the decomposition products of some related dialkylnitrosoureas, and similar results were obtained. The formation of chemically reactive secondary products and the possible relevance of these results to carcinogenesis studies are discussed.

  12. 3D tensor-based blind multispectral image decomposition for tumor demarcation

    NASA Astrophysics Data System (ADS)

    Kopriva, Ivica; Peršin, Antun

    2010-03-01

    Blind decomposition of a multi-spectral fluorescent image for tumor demarcation is formulated by exploiting the tensorial structure of the image. The first contribution of the paper is identification of the matrix of spectral responses and the 3D tensor of spatial distributions of the materials present in the image from Tucker3 or PARAFAC models of the 3D image tensor. The second contribution is clustering-based estimation of the number of materials present in the image as well as of the matrix of their spectral profiles. The 3D tensor of the spatial distributions of the materials is recovered through 3-mode multiplication of the multi-spectral image tensor and the inverse of the matrix of spectral profiles. The tensor representation of the multi-spectral image preserves its local spatial structure, which is lost to the vectorization process when matrix factorization-based decomposition methods (such as non-negative matrix factorization and independent component analysis) are used. Superior performance of the tensor-based image decomposition over matrix factorization-based decompositions is demonstrated on an experimental red-green-blue (RGB) image with known ground truth as well as on RGB fluorescent images of a skin tumor (basal cell carcinoma).
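
    A toy numpy illustration of the recovery step described above: 3-mode multiplication of the image tensor with the pseudoinverse of the spectral-profile matrix. Here the profile matrix is simply assumed known (random toy values) rather than estimated by clustering, and all sizes are hypothetical.

```python
import numpy as np

# X: multispectral image tensor (height x width x spectral bands)
# A: matrix of material spectral profiles (bands x materials), assumed known here
rng = np.random.default_rng(0)
H, W, S, M = 64, 64, 10, 3
A = rng.random((S, M))
D_true = rng.random((H, W, M))              # spatial distributions of the materials
X = np.einsum('hwm,sm->hws', D_true, A)     # forward model: mixing along the spectral mode

# 3-mode multiplication with the (pseudo)inverse of the spectral-profile matrix
D_hat = np.einsum('ms,hws->hwm', np.linalg.pinv(A), X)
print(np.max(np.abs(D_true - D_hat)))       # near zero for this noiseless toy case
```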

  13. Arbuscular mycorrhiza enhance the rate of litter decomposition while inhibiting soil microbial community development

    PubMed Central

    Gui, Heng; Hyde, Kevin; Xu, Jianchu; Mortimer, Peter

    2017-01-01

    Although there is a growing amount of evidence that arbuscular mycorrhizal fungi (AMF) influence the decomposition process, the extent of their involvement remains unclear. Therefore, given this knowledge gap, our aim was to test how AMF influence the soil decomposer communities. Dual compartment microcosms, where AMF (Glomus mosseae) were either allowed access (AM+) to or excluded (AM−) from forest soil compartments containing litterbags (leaf litter from Calophyllum polyanthum) were used. The experiment ran for six months, with destructive harvests at 0, 90, 120, 150, and 180 days. For each harvest we measured AMF colonization, soil nutrients, litter mass loss, and microbial biomass (using phospholipid fatty acid analysis (PLFA)). AMF significantly enhanced litter decomposition in the first 5 months, whilst delaying the development of total microbial biomass (represented by total PLFA) from T150 to T180. A significant decline in soil available N was observed through the course of the experiment for both treatments. This study shows that AMF have the capacity to interact with soil microbial communities and inhibit the development of fungal and bacterial groups in the soil at the later stage of the litter decomposition (180 days), whilst enhancing the rates of decomposition. PMID:28176855

  14. Vacancy-induced initial decomposition of condensed phase NTO via bimolecular hydrogen transfer mechanisms at high pressure: a DFT-D study.

    PubMed

    Liu, Zhichao; Wu, Qiong; Zhu, Weihua; Xiao, Heming

    2015-04-28

    Density functional theory with dispersion correction (DFT-D) was employed to study the effects of vacancies and pressure on the structure and initial decomposition of crystalline 5-nitro-2,4-dihydro-3H-1,2,4-triazol-3-one (β-NTO), a high-energy insensitive explosive. A comparative analysis of the chemical behavior of NTO in the ideal bulk crystal and in vacancy-containing crystals under applied hydrostatic compression was performed. Our calculated formation energy, vacancy interaction energy, electron density difference, and frontier orbitals reveal that the stability of NTO can be effectively manipulated by changing the molecular environment. Bimolecular hydrogen transfer is suggested to be a potential initial chemical reaction in the vacancy-containing NTO solid at 50 GPa, in contrast to the C-NO2 bond dissociation that initiates decomposition in the gas phase. The vacancy defects introduced into the ideal bulk NTO crystal can produce a localized site where the initial decomposition is preferentially accelerated and then promotes further decomposition. Our results may shed some light on the influence of the molecular environment on the initial decomposition pathways in molecular explosives.

  15. Defect Detection in Textures through the Use of Entropy as a Means for Automatically Selecting the Wavelet Decomposition Level.

    PubMed

    Navarro, Pedro J; Fernández-Isla, Carlos; Alcover, Pedro María; Suardíaz, Juan

    2016-07-27

    This paper presents a robust method for defect detection in textures, entropy-based automatic selection of the wavelet decomposition level (EADL), based on a wavelet reconstruction scheme, for detecting defects in a wide variety of structural and statistical textures. Two main features are presented. One of the new features is an original use of the normalized absolute function value (NABS) calculated from the wavelet coefficients derived at different decomposition levels in order to identify textures where the defect can be isolated by eliminating the texture pattern in the first decomposition level. The second is the use of Shannon's entropy, calculated over detail subimages, for automatic selection of the band for image reconstruction, which, unlike other techniques such as those based on the co-occurrence matrix or on energy calculation, provides a lower decomposition level, thus avoiding excessive degradation of the image and allowing more accurate defect segmentation. A metric analysis of the results of the proposed method with nine different thresholding algorithms determined that selecting the appropriate thresholding method is important to achieve optimum performance in defect detection. As a consequence, several thresholding algorithms are proposed, depending on the type of texture.
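
    A sketch of the mechanics (multi-level 2-D wavelet decomposition, Shannon entropy of the detail subimages, reconstruction from a single selected band) using PyWavelets; the selection rule used below (lowest summed detail entropy) is a placeholder assumption, not the EADL criterion itself, and the wavelet, level count and test image are illustrative.

```python
import numpy as np
import pywt

def shannon_entropy(a):
    """Shannon entropy of the normalized squared wavelet coefficients."""
    p = np.abs(a.ravel()) ** 2
    p = p / (p.sum() + 1e-12)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def select_level_and_reconstruct(image, wavelet="db2", max_level=4):
    """Pick a level from the detail-subimage entropies and rebuild only that band."""
    coeffs = pywt.wavedec2(image, wavelet, level=max_level)
    # coeffs[0] is the approximation; coeffs[i] (i >= 1) are (cH, cV, cD) tuples,
    # ordered from the coarsest level (max_level) down to level 1
    entropies = [sum(shannon_entropy(d) for d in details) for details in coeffs[1:]]
    best = int(np.argmin(entropies))
    kept = [np.zeros_like(coeffs[0])] + [
        tuple(d if i == best else np.zeros_like(d) for d in details)
        for i, details in enumerate(coeffs[1:])
    ]
    return pywt.waverec2(kept, wavelet), max_level - best

texture = np.random.default_rng(0).random((128, 128))   # stand-in texture image
reconstruction, chosen_level = select_level_and_reconstruct(texture)
```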

  16. Energy decomposition analysis for exciplexes using absolutely localized molecular orbitals

    NASA Astrophysics Data System (ADS)

    Ge, Qinghui; Mao, Yuezhi; Head-Gordon, Martin

    2018-02-01

    An energy decomposition analysis (EDA) scheme is developed for understanding the intermolecular interaction involving molecules in their excited states. The EDA utilizes absolutely localized molecular orbitals to define intermediate states and is compatible with excited state methods based on linear response theory such as configuration interaction singles and time-dependent density functional theory. The shift in excitation energy when an excited molecule interacts with the environment is decomposed into frozen, polarization, and charge transfer contributions, and the frozen term can be further separated into Pauli repulsion and electrostatics. These terms can be added to their counterparts obtained from the ground state EDA to form a decomposition of the total interaction energy. The EDA scheme is applied to study a variety of systems, including some model systems to demonstrate the correct behavior of all the proposed energy components as well as more realistic systems such as hydrogen-bonding complexes (e.g., formamide-water, pyridine/pyrimidine-water) and halide (F-, Cl-)-water clusters that involve charge-transfer-to-solvent excitations.
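
    The additive bookkeeping described in the abstract can be written compactly as below; the symbols simply name the terms listed there (frozen, electrostatics, Pauli repulsion, polarization, charge transfer) and are my own notation, not the paper's.

```latex
% Decomposition of the shift in excitation energy of the interacting chromophore
\Delta\omega_{\mathrm{int}} = \Delta\omega_{\mathrm{frz}} + \Delta\omega_{\mathrm{pol}} + \Delta\omega_{\mathrm{CT}},
\qquad
\Delta\omega_{\mathrm{frz}} = \Delta\omega_{\mathrm{elec}} + \Delta\omega_{\mathrm{Pauli}}

% Adding the ground-state EDA counterparts gives the excited-state interaction energy
\Delta E^{*}_{\mathrm{int}} =
 \bigl(\Delta E^{\mathrm{GS}}_{\mathrm{frz}} + \Delta\omega_{\mathrm{frz}}\bigr)
 + \bigl(\Delta E^{\mathrm{GS}}_{\mathrm{pol}} + \Delta\omega_{\mathrm{pol}}\bigr)
 + \bigl(\Delta E^{\mathrm{GS}}_{\mathrm{CT}} + \Delta\omega_{\mathrm{CT}}\bigr)
```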

  17. Theoretical investigation of HNgNH3+ ions (Ng = He, Ne, Ar, Kr, and Xe)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao, Kunqi; Sheng, Li, E-mail: shengli@hit.edu.cn

    2015-04-14

    The equilibrium geometries, harmonic frequencies, and dissociation energies of HNgNH3+ ions (Ng = He, Ne, Ar, Kr, and Xe) were investigated using the following methods: Becke-3-parameter-Lee-Yang-Parr (B3LYP), Boese-Martin for Kinetics (BMK), second-order Møller-Plesset perturbation theory (MP2), and coupled-cluster with single and double excitations as well as perturbative inclusion of triples (CCSD(T)). The results indicate that HHeNH3+, HArNH3+, HKrNH3+, and HXeNH3+ ions are metastable species that are protected from decomposition by high energy barriers, whereas the HNeNH3+ ion is unstable because of its relatively small energy barrier for decomposition. The bonding nature of the noble-gas atoms in HNgNH3+ was also analyzed using the atoms-in-molecules approach, natural energy decomposition analysis, and natural bond orbital analysis.

  18. Palm vein recognition based on directional empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Lee, Jen-Chun; Chang, Chien-Ping; Chen, Wei-Kuei

    2014-04-01

    Directional empirical mode decomposition (DEMD) has recently been proposed to make empirical mode decomposition suitable for texture analysis. Using DEMD, samples are decomposed into a series of images, referred to as two-dimensional intrinsic mode functions (2-D IMFs), from finer to coarser scales. A DEMD-based two-directional linear discriminant analysis (2LDA) method for palm vein recognition is proposed. The proposed method progresses through three steps: (i) a set of 2-D IMF features of various scales and orientations is extracted using DEMD, (ii) the 2LDA method is then applied to reduce the dimensionality of the feature space in both the row and column directions, and (iii) a nearest neighbor classifier is used for classification. We also propose two strategies for using the set of 2-D IMF features: ensemble DEMD vein representation (EDVR) and multichannel DEMD vein representation (MDVR). In experiments on palm vein databases, the proposed MDVR-based 2LDA method achieved a recognition accuracy of 99.73%, demonstrating its feasibility for palm vein recognition.

  19. An intelligent decomposition approach for efficient design of non-hierarchic systems

    NASA Technical Reports Server (NTRS)

    Bloebaum, Christina L.

    1992-01-01

    The design process associated with large engineering systems requires an initial decomposition of the complex systems into subsystem modules which are coupled through transference of output data. The implementation of such a decomposition approach assumes the ability exists to determine what subsystems and interactions exist and what order of execution will be imposed during the analysis process. Unfortunately, this is quite often an extremely complex task which may be beyond human ability to efficiently achieve. Further, in optimizing such a coupled system, it is essential to be able to determine which interactions figure prominently enough to significantly affect the accuracy of the optimal solution. The ability to determine 'weak' versus 'strong' coupling strengths would aid the designer in deciding which couplings could be permanently removed from consideration or which could be temporarily suspended so as to achieve computational savings with minimal loss in solution accuracy. An approach that uses normalized sensitivities to quantify coupling strengths is presented. The approach is applied to a coupled system composed of analysis equations for verification purposes.
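
    A generic finite-difference sketch of normalized (dimensionless) sensitivities that could be used to rank coupling strengths as weak or strong; the normalization, the threshold, and the toy subsystem below are assumptions for illustration, not the paper's specific formulation.

```python
import numpy as np

def normalized_sensitivities(f, x0, eps=1e-6):
    """Finite-difference Jacobian of y = f(x), scaled to the dimensionless form
    S[i, j] = (x_j / y_i) * dy_i/dx_j (one possible normalization)."""
    x0 = np.asarray(x0, dtype=float)
    y0 = np.asarray(f(x0), dtype=float)
    S = np.zeros((y0.size, x0.size))
    for j in range(x0.size):
        dx = np.zeros_like(x0)
        dx[j] = eps * max(1.0, abs(x0[j]))
        dy = (np.asarray(f(x0 + dx)) - y0) / dx[j]
        S[:, j] = dy * x0[j] / np.where(y0 == 0, 1.0, y0)
    return S

# toy coupled subsystem: output 1 depends strongly on x1 and only weakly on x2
f = lambda x: np.array([x[0] ** 2 + 0.01 * x[1], 3.0 * x[1]])
S = normalized_sensitivities(f, [2.0, 5.0])
weak = np.abs(S) < 0.1        # candidate couplings to suspend or remove
print(np.round(S, 3), weak, sep="\n")
```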

  20. Effects of biopretreatment of corn stover with white-rot fungus on low-temperature pyrolysis products.

    PubMed

    Yang, Xuewei; Ma, Fuying; Yu, Hongbo; Zhang, Xiaoyu; Chen, Shulin

    2011-02-01

    The low-temperature thermal decomposition of biopretreated corn stover was studied using Py-GC/MS analysis and thermogravimetric analysis with the distributed activation energy model (DAEM). Results showed that biopretreatment with the white-rot fungus Echinodontium taxodii 2538 can improve the low-temperature pyrolysis of biomass by increasing the pyrolysis products of cellulose, hemicellulose (furfural and sucrose increased up to 4.68-fold and 2.94-fold, respectively) and lignin (biphenyl and 3,7,11,15-tetramethyl-2-hexadecen-1-ol increased 2.45-fold and 4.22-fold, respectively). DAEM calculations showed that biopretreatment can decrease the activation energy over the low-temperature range, accelerate the reaction rate, and initiate thermal decomposition at a lower temperature. ATR-FTIR results showed that the deconstruction of lignin and the decomposition of the main linkages between hemicellulose and lignin could contribute to the improved pyrolysis at low temperature. Copyright © 2010 Elsevier Ltd. All rights reserved.
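
    The DAEM mentioned above expresses the unreacted fraction as an integral over a distribution of activation energies; below is a minimal sketch with a Gaussian distribution f(E), where every parameter value (A, E0, sigma, heating rate) is purely illustrative and not taken from the paper.

```python
import numpy as np

R = 8.314  # J/(mol*K)

def daem_conversion(T, beta, A, E0, sigma, nE=200):
    """Conversion alpha(T) for a Gaussian distributed-activation-energy model.
    beta: heating rate (K/s); A: pre-exponential factor (1/s);
    E0, sigma: mean and spread of the activation-energy distribution (J/mol)."""
    E = np.linspace(E0 - 4 * sigma, E0 + 4 * sigma, nE)
    fE = np.exp(-0.5 * ((E - E0) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    k = A * np.exp(-E[:, None] / (R * T[None, :]))          # rate constant, shape (nE, nT)
    # cumulative trapezoidal integral of k over temperature, divided by the heating rate
    inner = np.concatenate(
        [np.zeros((nE, 1)),
         np.cumsum(0.5 * (k[:, 1:] + k[:, :-1]) * np.diff(T)[None, :], axis=1)],
        axis=1) / beta
    survival = np.exp(-inner)                                # unreacted fraction for each E
    return 1.0 - (fE[:, None] * survival).sum(axis=0) * (E[1] - E[0])

T = np.linspace(400.0, 800.0, 400)                           # K
alpha = daem_conversion(T, beta=10.0 / 60.0, A=1e13, E0=180e3, sigma=15e3)
```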

  1. Thermal degradation of Shredded Oil Palm Empty Fruit Bunches (SOPEFB) embedded with Cobalt catalyst by Thermogravimetric Analysis (TGA)

    NASA Astrophysics Data System (ADS)

    Alias, R.; Hamid, N. H.; Jaapar, J.; Musa, M.; Alwi, H.; Halim, K. H. Ku

    2018-03-01

    Thermal behavior and decomposition kinetics of shredded oil palm empty fruit bunches (SOPEFB) were investigated in this study using thermogravimetric analysis (TGA). The SOPEFB were analyzed from 30 °C to 900 °C under a nitrogen flow of 50 ml/min. The SOPEFB were embedded with cobalt (II) nitrate solution at concentrations of 5%, 10%, 15% and 20%. The TG/DTG curves show the degradation behavior of the SOPEFB, followed by char production, for each heating rate and each concentration of cobalt catalyst. Thermal degradation occurred in three phases: a water drying phase, a hemicellulose and cellulose decomposition phase, and a lignin decomposition phase. A kinetic equation with the relevant parameters described the activation energy required for thermal degradation in the temperature region of 200 °C to 350 °C. The activation energies (E) obtained for different heating rates and different cobalt catalyst concentrations show that the lowest E was required for SOPEFB embedded with 20% cobalt catalyst.
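
    One widely used model-free way to extract an activation energy from TGA runs at several heating rates is the Kissinger relation, ln(beta/Tp^2) = ln(A R / E) - E/(R Tp); the abstract does not state which kinetic treatment was used, so the sketch below is only a generic illustration with made-up peak temperatures.

```python
import numpy as np

# hypothetical DTG peak temperatures (K) at several heating rates (K/min);
# the values are illustrative, not taken from the paper
beta = np.array([5.0, 10.0, 20.0, 40.0])        # heating rates, K/min
Tp = np.array([588.0, 597.0, 607.0, 618.0])     # peak decomposition temperatures, K

R = 8.314                                        # J/(mol*K)
slope, intercept = np.polyfit(1.0 / Tp, np.log(beta / Tp**2), 1)
E_a = -slope * R                                 # activation energy, J/mol
print(f"Kissinger activation energy ~ {E_a / 1000:.0f} kJ/mol")
```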

  2. Alphasatellitidae: a new family with two subfamilies for the classification of geminivirus- and nanovirus-associated alphasatellites.

    PubMed

    Briddon, Rob W; Martin, Darren P; Roumagnac, Philippe; Navas-Castillo, Jesús; Fiallo-Olivé, Elvira; Moriones, Enrique; Lett, Jean-Michel; Zerbini, F Murilo; Varsani, Arvind

    2018-05-09

    Nanoviruses and geminiviruses are circular, single-stranded DNA viruses that infect many plant species around the world. Nanoviruses and certain geminiviruses that belong to the Begomovirus and Mastrevirus genera are associated with additional circular, single-stranded DNA molecules (~ 1-1.4 kb) that encode a replication-associated protein (Rep). These Rep-encoding satellite molecules are commonly referred to as alphasatellites, and here we communicate the establishment of the family Alphasatellitidae to which these have been assigned. Within the family Alphasatellitidae, two subfamilies, Geminialphasatellitinae and Nanoalphasatellitinae, have been established to accommodate the geminivirus- and nanovirus-associated alphasatellites, respectively. Whereas the pairwise nucleotide sequence identity distribution of all the known geminialphasatellites (n = 628) displayed troughs at ~ 70% and ~ 88% pairwise identity, that of the known nanoalphasatellites (n = 54) had troughs at ~ 67% and ~ 80% pairwise identity. We use these pairwise identity values as thresholds, together with phylogenetic analyses, to establish four genera and 43 species of geminialphasatellites and seven genera and 19 species of nanoalphasatellites. Furthermore, a divergent alphasatellite associated with coconut foliar decay disease is assigned to a species but not a subfamily, as it likely represents a new alphasatellite subfamily that could be established once other closely related molecules are discovered.
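
    The demarcation thresholds above come from the distribution of pairwise identities. The toy sketch below only shows how such a distribution is tabulated from aligned sequences; real analyses use pairwise alignments of full-length molecules (for example with tools such as SDT), and the sequences here are invented placeholders.

```python
import numpy as np
from itertools import combinations

def pairwise_identity(a, b):
    """Percent identity between two aligned sequences of equal length
    (gap and ambiguity handling omitted for brevity)."""
    matches = sum(x == y for x, y in zip(a, b))
    return 100.0 * matches / len(a)

# hypothetical aligned alphasatellite fragments
seqs = {"sat1": "ATGGCTTACGATCGT", "sat2": "ATGGCATACGATCGA", "sat3": "TTGGCTAACGTTCGT"}
identities = [pairwise_identity(seqs[i], seqs[j]) for i, j in combinations(seqs, 2)]

# troughs in the histogram of all pairwise identities suggest demarcation thresholds
hist, edges = np.histogram(identities, bins=np.arange(50, 101, 5))
print(dict(zip(edges[:-1], hist)))
```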

  3. Estimating Seven Coefficients of Pairwise Relatedness Using Population-Genomic Data

    PubMed Central

    Ackerman, Matthew S.; Johri, Parul; Spitze, Ken; Xu, Sen; Doak, Thomas G.; Young, Kimberly; Lynch, Michael

    2017-01-01

    Population structure can be described by genotypic-correlation coefficients between groups of individuals, the most basic of which are the pairwise relatedness coefficients between any two individuals. There are nine pairwise relatedness coefficients in the most general model, and we show that these can be reduced to seven coefficients for biallelic loci. Although all nine coefficients can be estimated from pedigrees, six coefficients have been beyond empirical reach. We provide a numerical optimization procedure that estimates all seven reduced coefficients from population-genomic data. Simulations show that the procedure is nearly unbiased, even at 3× coverage, and errors in five of the seven coefficients are statistically uncorrelated. The remaining two coefficients have a negative correlation of errors, but their sum provides an unbiased assessment of the overall correlation of heterozygosity between two individuals. Application of these new methods to four populations of the freshwater crustacean Daphnia pulex reveals the occurrence of half siblings in our samples, as well as a number of identical individuals that are likely obligately asexual clone mates. Statistically significant negative estimates of these pairwise relatedness coefficients, including inbreeding coefficients that were typically negative, underscore the difficulties that arise when interpreting genotypic correlations as estimates of the probability that alleles are identical by descent. PMID:28341647
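
    For contrast with the seven-coefficient maximum-likelihood machinery described above, here is a deliberately simple single-coefficient, method-of-moments relatedness estimate from genotype dosages; it is a stand-in for intuition only, and the simulated parent-offspring pair is a toy construction, not the paper's procedure or data.

```python
import numpy as np

def moment_relatedness(g1, g2, p):
    """Single-coefficient method-of-moments relatedness from genotype dosages
    (0/1/2) and reference-allele frequencies p; expected ~0.5 for parent-offspring."""
    g1, g2, p = map(np.asarray, (g1, g2, p))
    num = np.mean((g1 - 2 * p) * (g2 - 2 * p))
    den = np.mean(2 * p * (1 - p))
    return num / den

rng = np.random.default_rng(0)
p = rng.uniform(0.1, 0.9, 5000)                 # allele frequencies at 5000 loci
parent = rng.binomial(2, p)
# child inherits one allele from the parent and one from the population (crude model)
child = rng.binomial(1, p) + rng.binomial(1, parent / 2)
print(moment_relatedness(parent, child, p))     # roughly 0.5
```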

  4. Detection of the pairwise kinematic Sunyaev-Zel'dovich effect with BOSS DR11 and the Atacama Cosmology Telescope

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bernardis, F. De; Aiola, S.; Vavagiakis, E. M.

    Here, we present a new measurement of the kinematic Sunyaev-Zel'dovich effect using data from the Atacama Cosmology Telescope (ACT) and the Baryon Oscillation Spectroscopic Survey (BOSS). Using 600 square degrees of overlapping sky area, we evaluate the mean pairwise baryon momentum associated with the positions of 50,000 bright galaxies in the BOSS DR11 Large Scale Structure catalog. A non-zero signal arises from the large-scale motions of halos containing the sample galaxies. The data fits an analytical signal model well, with the optical depth to microwave photon scattering as a free parameter determining the overall signal amplitude. We estimate the covariance matrix of the mean pairwise momentum as a function of galaxy separation, using microwave sky simulations, jackknife evaluation, and bootstrap estimates. The most conservative simulation-based errors give signal-to-noise estimates between 3.6 and 4.1 for varying galaxy luminosity cuts. We discuss how the other error determinations can lead to higher signal-to-noise values, and consider the impact of several possible systematic errors. Estimates of the optical depth from the average thermal Sunyaev-Zel'dovich signal at the sample galaxy positions are broadly consistent with those obtained from the mean pairwise momentum signal.
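
    For orientation, a sketch of the standard mean-pairwise-momentum estimator commonly used for this kind of measurement, with geometric pair weights; sign and weighting conventions vary in the literature, and the positions and temperatures below are random placeholders rather than ACT/BOSS data.

```python
import numpy as np

def mean_pairwise_momentum(pos, dT, r_bins):
    """p(r) = -sum_ij (dT_i - dT_j) c_ij / sum_ij c_ij^2, binned in pair separation,
    with c_ij = r_hat_ij . (r_hat_i + r_hat_j) / 2 (one common convention)."""
    num = np.zeros(len(r_bins) - 1)
    den = np.zeros(len(r_bins) - 1)
    rhat = pos / np.linalg.norm(pos, axis=1, keepdims=True)
    for i in range(len(dT)):
        for j in range(i + 1, len(dT)):
            sep = pos[i] - pos[j]
            r = np.linalg.norm(sep)
            k = np.searchsorted(r_bins, r) - 1
            if k < 0 or k >= len(num):
                continue
            c = np.dot(sep / r, 0.5 * (rhat[i] + rhat[j]))
            num[k] += (dT[i] - dT[j]) * c
            den[k] += c * c
    return -num / np.where(den == 0, 1.0, den)

rng = np.random.default_rng(0)
pos = rng.uniform(200.0, 600.0, size=(300, 3))     # toy comoving positions (Mpc)
dT = rng.normal(0.0, 1.0, size=300)                # toy aperture temperatures
p_hat = mean_pairwise_momentum(pos, dT, r_bins=np.linspace(0.0, 200.0, 11))
```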

  5. Detection of the pairwise kinematic Sunyaev-Zel'dovich effect with BOSS DR11 and the Atacama Cosmology Telescope

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bernardis, F. De; Vavagiakis, E.M.; Niemack, M.D.

    We present a new measurement of the kinematic Sunyaev-Zel'dovich effect using data from the Atacama Cosmology Telescope (ACT) and the Baryon Oscillation Spectroscopic Survey (BOSS). Using 600 square degrees of overlapping sky area, we evaluate the mean pairwise baryon momentum associated with the positions of 50,000 bright galaxies in the BOSS DR11 Large Scale Structure catalog. A non-zero signal arises from the large-scale motions of halos containing the sample galaxies. The data fits an analytical signal model well, with the optical depth to microwave photon scattering as a free parameter determining the overall signal amplitude. We estimate the covariance matrix of the mean pairwise momentum as a function of galaxy separation, using microwave sky simulations, jackknife evaluation, and bootstrap estimates. The most conservative simulation-based errors give signal-to-noise estimates between 3.6 and 4.1 for varying galaxy luminosity cuts. We discuss how the other error determinations can lead to higher signal-to-noise values, and consider the impact of several possible systematic errors. Estimates of the optical depth from the average thermal Sunyaev-Zel'dovich signal at the sample galaxy positions are broadly consistent with those obtained from the mean pairwise momentum signal.

  6. Detection of the Pairwise Kinematic Sunyaev-Zel'dovich Effect with BOSS DR11 and the Atacama Cosmology Telescope

    NASA Technical Reports Server (NTRS)

    De Bernardis, F.; Aiola, S.; Vavagiakis, E. M.; Battaglia, N.; Niemack, M. D.; Beall, J.; Becker, D. T.; Bond, J. R.; Calabrese, E.; Cho, H.; ...

    2017-01-01

    We present a new measurement of the kinematic Sunyaev-Zel'dovich effect using data from the Atacama Cosmology Telescope (ACT) and the Baryon Oscillation Spectroscopic Survey (BOSS). Using 600 square degrees of overlapping sky area, we evaluate the mean pairwise baryon momentum associated with the positions of 50,000 bright galaxies in the BOSS DR11 Large Scale Structure catalog. A non-zero signal arises from the large-scale motions of halos containing the sample galaxies. The data fits an analytical signal model well, with the optical depth to microwave photon scattering as a free parameter determining the overall signal amplitude. We estimate the covariance matrix of the mean pairwise momentum as a function of galaxy separation, using microwave sky simulations, jackknife evaluation, and bootstrap estimates. The most conservative simulation-based errors give signal-to-noise estimates between 3.6 and 4.1 for varying galaxy luminosity cuts. We discuss how the other error determinations can lead to higher signal-to-noise values, and consider the impact of several possible systematic errors. Estimates of the optical depth from the average thermal Sunyaev-Zel'dovich signal at the sample galaxy positions are broadly consistent with those obtained from the mean pairwise momentum signal.

  7. Detection of the pairwise kinematic Sunyaev-Zel'dovich effect with BOSS DR11 and the Atacama Cosmology Telescope

    NASA Astrophysics Data System (ADS)

    De Bernardis, F.; Aiola, S.; Vavagiakis, E. M.; Battaglia, N.; Niemack, M. D.; Beall, J.; Becker, D. T.; Bond, J. R.; Calabrese, E.; Cho, H.; Coughlin, K.; Datta, R.; Devlin, M.; Dunkley, J.; Dunner, R.; Ferraro, S.; Fox, A.; Gallardo, P. A.; Halpern, M.; Hand, N.; Hasselfield, M.; Henderson, S. W.; Hill, J. C.; Hilton, G. C.; Hilton, M.; Hincks, A. D.; Hlozek, R.; Hubmayr, J.; Huffenberger, K.; Hughes, J. P.; Irwin, K. D.; Koopman, B. J.; Kosowsky, A.; Li, D.; Louis, T.; Lungu, M.; Madhavacheril, M. S.; Maurin, L.; McMahon, J.; Moodley, K.; Naess, S.; Nati, F.; Newburgh, L.; Nibarger, J. P.; Page, L. A.; Partridge, B.; Schaan, E.; Schmitt, B. L.; Sehgal, N.; Sievers, J.; Simon, S. M.; Spergel, D. N.; Staggs, S. T.; Stevens, J. R.; Thornton, R. J.; van Engelen, A.; Van Lanen, J.; Wollack, E. J.

    2017-03-01

    We present a new measurement of the kinematic Sunyaev-Zel'dovich effect using data from the Atacama Cosmology Telescope (ACT) and the Baryon Oscillation Spectroscopic Survey (BOSS). Using 600 square degrees of overlapping sky area, we evaluate the mean pairwise baryon momentum associated with the positions of 50,000 bright galaxies in the BOSS DR11 Large Scale Structure catalog. A non-zero signal arises from the large-scale motions of halos containing the sample galaxies. The data fits an analytical signal model well, with the optical depth to microwave photon scattering as a free parameter determining the overall signal amplitude. We estimate the covariance matrix of the mean pairwise momentum as a function of galaxy separation, using microwave sky simulations, jackknife evaluation, and bootstrap estimates. The most conservative simulation-based errors give signal-to-noise estimates between 3.6 and 4.1 for varying galaxy luminosity cuts. We discuss how the other error determinations can lead to higher signal-to-noise values, and consider the impact of several possible systematic errors. Estimates of the optical depth from the average thermal Sunyaev-Zel'dovich signal at the sample galaxy positions are broadly consistent with those obtained from the mean pairwise momentum signal.

  8. Detection of the pairwise kinematic Sunyaev-Zel'dovich effect with BOSS DR11 and the Atacama Cosmology Telescope

    DOE PAGES

    Bernardis, F. De; Aiola, S.; Vavagiakis, E. M.; ...

    2017-03-07

    Here, we present a new measurement of the kinematic Sunyaev-Zel'dovich effect using data from the Atacama Cosmology Telescope (ACT) and the Baryon Oscillation Spectroscopic Survey (BOSS). Using 600 square degrees of overlapping sky area, we evaluate the mean pairwise baryon momentum associated with the positions of 50,000 bright galaxies in the BOSS DR11 Large Scale Structure catalog. A non-zero signal arises from the large-scale motions of halos containing the sample galaxies. The data fits an analytical signal model well, with the optical depth to microwave photon scattering as a free parameter determining the overall signal amplitude. We estimate the covariance matrix of the mean pairwise momentum as a function of galaxy separation, using microwave sky simulations, jackknife evaluation, and bootstrap estimates. The most conservative simulation-based errors give signal-to-noise estimates between 3.6 and 4.1 for varying galaxy luminosity cuts. We discuss how the other error determinations can lead to higher signal-to-noise values, and consider the impact of several possible systematic errors. Estimates of the optical depth from the average thermal Sunyaev-Zel'dovich signal at the sample galaxy positions are broadly consistent with those obtained from the mean pairwise momentum signal.

  9. Thermochemical and kinetic analysis of the thermal decomposition of monomethylhydrazine: an elementary reaction mechanism.

    PubMed

    Sun, Hongyan; Law, Chung K

    2007-05-17

    The reaction kinetics for the thermal decomposition of monomethylhydrazine (MMH) was studied with quantum Rice-Ramsperger-Kassel (QRRK) theory and a master equation analysis for pressure falloff. Thermochemical properties were determined by ab initio and density functional calculations. The entropies, S°(298.15 K), and heat capacities, Cp°(T) (0 ≤ T/K ≤ 1500), from vibrational, translational, and external rotational contributions were calculated using statistical mechanics based on the vibrational frequencies and structures obtained from the density functional study. Potential barriers for internal rotations were calculated at the B3LYP/6-311G(d,p) level, and hindered rotational contributions to S°(298.15 K) and Cp°(T) were calculated by solving the Schrödinger equation with free rotor wave functions, and the partition coefficients were treated by direct integration over energy levels of the internal rotation potentials. Enthalpies of formation, ΔfH°(298.15 K), for the parent MMH (CH3NHNH2) and its corresponding radicals CH3N*NH2, CH3NHN*H, and C*H2NHNH2 were determined to be 21.6, 48.5, 51.1, and 62.8 kcal/mol by use of isodesmic reaction analysis and various ab initio methods. The kinetic analysis of the thermal decomposition, abstraction, and substitution reactions of MMH was performed at the CBS-QB3 level, with those of N-N and C-N bond scissions determined by high level CCSD(T)/6-311++G(3df,2p)//MPWB1K/6-31+G(d,p) calculations. Rate constants of thermally activated MMH to dissociation products were calculated as functions of pressure and temperature. An elementary reaction mechanism based on the calculated rate constants, thermochemical properties, and literature data was developed to model the experimental data on the overall MMH thermal decomposition rate. The reactions of N-N and C-N bond scission were found to be the major reaction paths for the modeling of MMH homogeneous decomposition at atmospheric conditions.

  10. Tensorial extensions of independent component analysis for multisubject FMRI analysis.

    PubMed

    Beckmann, C F; Smith, S M

    2005-03-01

    We discuss model-free analysis of multisubject or multisession FMRI data by extending the single-session probabilistic independent component analysis model (PICA; Beckmann and Smith, 2004. IEEE Trans. on Medical Imaging, 23 (2) 137-152) to higher dimensions. This results in a three-way decomposition that represents the different signals and artefacts present in the data in terms of their temporal, spatial, and subject-dependent variations. The technique is derived from and compared with parallel factor analysis (PARAFAC; Harshman and Lundy, 1984. In Research methods for multimode data analysis, chapter 5, pages 122-215. Praeger, New York). Using simulated data as well as data from multisession and multisubject FMRI studies we demonstrate that the tensor PICA approach is able to efficiently and accurately extract signals of interest in the spatial, temporal, and subject/session domain. The final decompositions improve upon PARAFAC results in terms of greater accuracy, reduced interference between the different estimated sources (reduced cross-talk), robustness (against deviations of the data from modeling assumptions and against overfitting), and computational speed. On real FMRI 'activation' data, the tensor PICA approach is able to extract plausible activation maps, time courses, and session/subject modes as well as provide a rich description of additional processes of interest such as image artefacts or secondary activation patterns. The resulting data decomposition gives simple and useful representations of multisubject/multisession FMRI data that can aid the interpretation and optimization of group FMRI studies beyond what can be achieved using model-based analysis techniques.

  11. Improving Arterial Spin Labeling by Using Deep Learning.

    PubMed

    Kim, Ki Hwan; Choi, Seung Hong; Park, Sung-Hong

    2018-05-01

    Purpose: To develop a deep learning algorithm that generates arterial spin labeling (ASL) perfusion images with higher accuracy and robustness by using a smaller number of subtraction images. Materials and Methods: For ASL image generation from pair-wise subtraction, we used a convolutional neural network (CNN) as a deep learning algorithm. The ground truth perfusion images were generated by averaging six or seven pairwise subtraction images acquired with (a) conventional pseudocontinuous arterial spin labeling from seven healthy subjects or (b) Hadamard-encoded pseudocontinuous ASL from 114 patients with various diseases. CNNs were trained to generate perfusion images from a smaller number (two or three) of subtraction images and evaluated by means of cross-validation. CNNs from the patient data sets were also tested on 26 separate stroke data sets. CNNs were compared with the conventional averaging method in terms of mean square error and radiologic score by using a paired t test and/or Wilcoxon signed-rank test. Results: Mean square errors were approximately 40% lower than those of the conventional averaging method for the cross-validation with the healthy subjects and patients and the separate test with the patients who had experienced a stroke (P < .001). Region-of-interest analysis in stroke regions showed that cerebral blood flow maps from CNN (mean ± standard deviation: 19.7 ± 9.7 mL per 100 g/min) had smaller mean square errors than those determined with the conventional averaging method (43.2 ± 29.8) (P < .001). Radiologic scoring demonstrated that CNNs suppressed noise and motion and/or segmentation artifacts better than the conventional averaging method did (P < .001). Conclusion: CNNs provided superior perfusion image quality and more accurate perfusion measurement compared with those of the conventional averaging method for generation of ASL images from pair-wise subtraction images. © RSNA, 2017.
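
    A minimal PyTorch sketch of the kind of convolutional mapping described (a few pairwise-subtraction images stacked as input channels, one perfusion image out); the paper's actual architecture, loss, and training protocol are not specified here, and every layer size and tensor shape below is an illustrative assumption.

```python
import torch
import torch.nn as nn

class PerfusionCNN(nn.Module):
    """Map n_subtractions pairwise-subtraction images to a single perfusion image."""
    def __init__(self, n_subtractions=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(n_subtractions, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, x):                 # x: (batch, n_subtractions, H, W)
        return self.net(x)

model = PerfusionCNN(n_subtractions=2)
x = torch.randn(4, 2, 64, 64)             # 4 toy samples of 2 subtraction images
target = torch.randn(4, 1, 64, 64)        # stands in for the 6-7 image average
loss = nn.MSELoss()(model(x), target)
loss.backward()                           # one illustrative training step (no optimizer shown)
```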

  12. Highly viscous antibody solutions are a consequence of network formation caused by domain-domain electrostatic complementarities: insights from coarse-grained simulations.

    PubMed

    Buck, Patrick M; Chaudhri, Anuj; Kumar, Sandeep; Singh, Satish K

    2015-01-05

    Therapeutic monoclonal antibody (mAb) candidates that form highly viscous solutions at concentrations above 100 mg/mL can lead to challenges in bioprocessing, formulation development, and subcutaneous drug delivery. Earlier studies of mAbs with concentration-dependent high viscosity have indicated that mAbs with negatively charged Fv regions have a dipole-like quality that increases the likelihood of reversible self-association. This suggests that weak electrostatic intermolecular interactions can form transient antibody networks that participate in resistance to solution deformation under shear stress. Here this hypothesis is explored by parametrizing a coarse-grained (CG) model of an antibody using the domain charges from four different mAbs that have had their concentration-dependent viscosity behaviors previously determined. Multicopy molecular dynamics simulations were performed for these four CG mAbs at several concentrations to understand the effect of surface charge on mass diffusivity, pairwise interactions, and electrostatic network formation. Diffusion coefficients computed from simulations were in qualitative agreement with experimentally determined viscosities for all four mAbs. Contact analysis revealed an overall greater number of pairwise interactions for the two mAbs in this study with high concentration viscosity issues. Further, using equilibrated solution trajectories, the two mAbs with high concentration viscosity issues quantitatively formed more features of an electrostatic network than the other mAbs. The change in the number of these network features as a function of concentration is related to the number of pairwise interactions formed by electrostatic complementarities between antibody domains. Thus, transient antibody network formation caused by domain-domain electrostatic complementarities is the most probable origin of high concentration viscosity for mAbs in this study.
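
    Diffusion coefficients from coarse-grained trajectories of this kind are commonly obtained from the Einstein relation, MSD(t) ≈ 6Dt in three dimensions; below is a sketch with a toy random walk standing in for the mAb center-of-mass trajectories (a single time origin is used for brevity instead of the usual average over origins).

```python
import numpy as np

def diffusion_coefficient(traj, dt):
    """Translational diffusion coefficient from unwrapped trajectories via
    MSD(t) ~ 6 D t. traj: (n_frames, n_molecules, 3); dt: frame spacing."""
    disp = traj - traj[0]                              # displacement from frame 0
    msd = np.mean(np.sum(disp ** 2, axis=-1), axis=1)  # average over molecules
    t = np.arange(len(msd)) * dt
    half = len(t) // 2                                 # fit only the later, linear regime
    slope = np.polyfit(t[half:], msd[half:], 1)[0]
    return slope / 6.0

rng = np.random.default_rng(1)
steps = rng.normal(scale=0.1, size=(1000, 50, 3))      # toy random-walk "mAb" steps
traj = np.cumsum(steps, axis=0)
print(diffusion_coefficient(traj, dt=0.01))            # ~0.5 for these toy parameters
```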

  13. Characterizing the Propagation of Uterine Electrophysiological Signals Recorded with a Multi-Sensor Abdominal Array in Term Pregnancies.

    PubMed

    Escalona-Vargas, Diana; Govindan, Rathinaswamy B; Furdea, Adrian; Murphy, Pam; Lowery, Curtis L; Eswaran, Hari

    2015-01-01

    The objective of this study was to quantify the number of segments that have contractile activity and determine the propagation speed from uterine electrophysiological signals recorded over the abdomen. The uterine magnetomyographic (MMG) signals were recorded with a 151 channel SARA (SQUID Array for Reproductive Assessment) system from 36 pregnant women between 37 and 40 weeks of gestational age. The MMG signals were scored and segments were classified based on presence of uterine contractile burst activity. The sensor space was then split into four quadrants and in each quadrant signal strength at each sample was calculated using center-of-gravity (COG). To this end, the cross-correlation analysis of the COG was performed to calculate the delay between pairwise combinations of quadrants. The relationship in propagation across the quadrants was quantified and propagation speeds were calculated from the delays. MMG recordings were successfully processed from 25 subjects and the average values of propagation speeds ranged from 1.3-9.5 cm/s, which was within the physiological range. The propagation was observed between both vertical and horizontal quadrants confirming multidirectional propagation. After the multiple pairwise test (99% CI), significant differences in speeds can be observed between certain vertical or horizontal combinations and the crossed pair combinations. The number of segments containing contractile activity in any given quadrant pair with a detectable delay was significantly higher in the lower abdominal pairwise combination as compared to all others. The quadrant-based approach using MMG signals provided us with high spatial-temporal information of the uterine contractile activity and will help us in the future to optimize abdominal electromyographic (EMG) recordings that are practical in a clinical setting.
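
    A sketch of the delay-then-speed step described above: the lag between two quadrant center-of-gravity signals is taken from the peak of their cross-correlation and converted to a speed with an assumed inter-quadrant distance; the sampling rate, distance, and signals here are all hypothetical.

```python
import numpy as np

def delay_and_speed(sig_a, sig_b, fs, distance_cm):
    """Lag (s) between two signals from the cross-correlation peak, and the
    corresponding propagation speed (cm/s). Negative lag: sig_b lags sig_a."""
    a = sig_a - sig_a.mean()
    b = sig_b - sig_b.mean()
    xcorr = np.correlate(a, b, mode="full")
    lag = np.argmax(xcorr) - (len(b) - 1)          # in samples
    delay_s = lag / fs
    speed = np.inf if delay_s == 0 else abs(distance_cm / delay_s)
    return delay_s, speed

fs = 32.0                                           # Hz (illustrative)
t = np.arange(0, 60, 1 / fs)
burst = np.exp(-((t - 30) ** 2) / 20) * np.sin(2 * np.pi * 0.4 * t)
sig_a = burst
sig_b = np.roll(burst, int(2.0 * fs))               # copy delayed by 2 s
print(delay_and_speed(sig_a, sig_b, fs, distance_cm=10.0))   # ~(-2.0 s, 5.0 cm/s)
```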

  14. Characterizing the Propagation of Uterine Electrophysiological Signals Recorded with a Multi-Sensor Abdominal Array in Term Pregnancies

    PubMed Central

    Escalona-Vargas, Diana; Govindan, Rathinaswamy B.; Furdea, Adrian; Murphy, Pam; Lowery, Curtis L.; Eswaran, Hari

    2015-01-01

    The objective of this study was to quantify the number of segments that have contractile activity and determine the propagation speed from uterine electrophysiological signals recorded over the abdomen. The uterine magnetomyographic (MMG) signals were recorded with a 151 channel SARA (SQUID Array for Reproductive Assessment) system from 36 pregnant women between 37 and 40 weeks of gestational age. The MMG signals were scored and segments were classified based on presence of uterine contractile burst activity. The sensor space was then split into four quadrants and in each quadrant signal strength at each sample was calculated using center-of-gravity (COG). To this end, the cross-correlation analysis of the COG was performed to calculate the delay between pairwise combinations of quadrants. The relationship in propagation across the quadrants was quantified and propagation speeds were calculated from the delays. MMG recordings were successfully processed from 25 subjects and the average values of propagation speeds ranged from 1.3–9.5 cm/s, which was within the physiological range. The propagation was observed between both vertical and horizontal quadrants confirming multidirectional propagation. After the multiple pairwise test (99% CI), significant differences in speeds can be observed between certain vertical or horizontal combinations and the crossed pair combinations. The number of segments containing contractile activity in any given quadrant pair with a detectable delay was significantly higher in the lower abdominal pairwise combination as compared to all others. The quadrant-based approach using MMG signals provided us with high spatial-temporal information of the uterine contractile activity and will help us in the future to optimize abdominal electromyographic (EMG) recordings that are practical in a clinical setting. PMID:26505624

  15. Intercenter Differences in Bronchopulmonary Dysplasia or Death Among Very Low Birth Weight Infants

    PubMed Central

    Walsh, Michele; Bobashev, Georgiy; Das, Abhik; Levine, Burton; Carlo, Waldemar A.; Higgins, Rosemary D.

    2011-01-01

    OBJECTIVES: To determine (1) the magnitude of clustering of bronchopulmonary dysplasia (at 36 weeks) or death (the outcome) across centers of the Eunice Kennedy Shriver National Institute of Child Health and Human Development Neonatal Research Network, (2) the infant-level variables associated with the outcome and estimate their clustering, and (3) the center-specific practices associated with the differences and build predictive models. METHODS: Data on neonates with a birth weight of <1250 g from the cluster-randomized benchmarking trial were used to determine the magnitude of clustering of the outcome by alternating logistic regression using pairwise odds ratios and predictive modeling. Clinical variables associated with the outcome were identified by using multivariate analysis. The magnitude of clustering was then evaluated after correction for infant-level variables. Predictive models were developed by using center-specific and infant-level variables for data from 2001–2004 and projected to 2006. RESULTS: In 2001–2004, clustering of bronchopulmonary dysplasia/death was significant (pairwise odds ratio: 1.3; P < .001) and increased in 2006 (pairwise odds ratio: 1.6; overall incidence: 52%; range across centers: 32%–74%); center rates were relatively stable over time. Variables that varied according to center and were associated with increased risk of the outcome included lower body temperature at NICU admission, use of prophylactic indomethacin, specific drug therapy on day 1, and lack of endotracheal intubation. Center differences remained significant even after correction for clustered variables. CONCLUSION: Bronchopulmonary dysplasia/death rates demonstrated moderate clustering according to center. Clinical variables associated with the outcome were also clustered. Center differences after correction for clustered variables indicate the presence of as-yet unmeasured center variables. PMID:21149431
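
    For intuition about what a pairwise odds ratio measures here, a crude empirical version over within-center infant pairs (concordant vs. discordant outcomes) is sketched below; this descriptive count is not the alternating-logistic-regression estimator used in the study, and the center sizes and rates are invented.

```python
import numpy as np
from itertools import combinations

def empirical_pairwise_or(outcomes_by_center):
    """Empirical pairwise odds ratio for within-center clustering of a binary
    outcome, from counts of concordant and discordant within-center pairs."""
    n11 = n00 = ndis = 0
    for y in outcomes_by_center:
        for a, b in combinations(y, 2):
            if a == 1 and b == 1:
                n11 += 1
            elif a == 0 and b == 0:
                n00 += 1
            else:
                ndis += 1
    return (n11 * n00) / max((ndis / 2.0) ** 2, 1e-12)

rng = np.random.default_rng(0)
# hypothetical centers with different underlying outcome rates
centers = [rng.binomial(1, p, size=40) for p in (0.35, 0.50, 0.60, 0.70)]
print(round(empirical_pairwise_or(centers), 2))   # > 1 indicates clustering by center
```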

  16. The Fourier decomposition method for nonlinear and non-stationary time series analysis.

    PubMed

    Singh, Pushpendra; Joshi, Shiv Dutt; Patney, Rakesh Kumar; Saha, Kaushik

    2017-03-01

    For many decades, there has been a general perception in the literature that Fourier methods are not suitable for the analysis of nonlinear and non-stationary data. In this paper, we propose a novel and adaptive Fourier decomposition method (FDM), based on the Fourier theory, and demonstrate its efficacy for the analysis of nonlinear and non-stationary time series. The proposed FDM decomposes any data into a small number of 'Fourier intrinsic band functions' (FIBFs). The FDM presents a generalized Fourier expansion with variable amplitudes and variable frequencies of a time series by the Fourier method itself. We propose a zero-phase filter-bank-based multivariate FDM (MFDM) for the analysis of multivariate nonlinear and non-stationary time series, using the FDM. We also present an algorithm to obtain cut-off frequencies for the MFDM. The proposed MFDM generates a finite number of band-limited multivariate FIBFs (MFIBFs). The MFDM preserves some intrinsic physical properties of the multivariate data, such as scale alignment, trend and instantaneous frequency. The proposed methods provide a time-frequency-energy (TFE) distribution that reveals the intrinsic structure of the data. Numerical computations and simulations have been carried out and comparison is made with the empirical mode decomposition algorithms.
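
    A generic zero-phase filter-bank sketch using ideal FFT masks, whose bands sum back exactly to the original signal; the adaptive, data-driven choice of cut-off frequencies that defines the FIBFs in the FDM/MFDM is not implemented here, and the cut-off, sampling rate and test signal are assumptions.

```python
import numpy as np

def fft_band_decomposition(x, fs, cutoffs):
    """Split a signal into band-limited components with ideal zero-phase FFT masks;
    the bands partition the spectrum, so summing them reconstructs the signal."""
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    edges = [0.0, *cutoffs, fs / 2 + 1.0]
    bands = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (freqs >= lo) & (freqs < hi)
        bands.append(np.fft.irfft(X * mask, n=len(x)))
    return bands

fs = 100.0
t = np.arange(0, 10, 1 / fs)
x = np.sin(2 * np.pi * 1.5 * t) + 0.5 * np.sin(2 * np.pi * 12 * t)
bands = fft_band_decomposition(x, fs, cutoffs=[5.0])
print(np.allclose(sum(bands), x, atol=1e-8))       # bands sum back to the signal
```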

  17. The Fourier decomposition method for nonlinear and non-stationary time series analysis

    PubMed Central

    Singh, Pushpendra; Joshi, Shiv Dutt; Patney, Rakesh Kumar; Saha, Kaushik

    2017-01-01

    For many decades, there has been a general perception in the literature that Fourier methods are not suitable for the analysis of nonlinear and non-stationary data. In this paper, we propose a novel and adaptive Fourier decomposition method (FDM), based on the Fourier theory, and demonstrate its efficacy for the analysis of nonlinear and non-stationary time series. The proposed FDM decomposes any data into a small number of ‘Fourier intrinsic band functions’ (FIBFs). The FDM presents a generalized Fourier expansion with variable amplitudes and variable frequencies of a time series by the Fourier method itself. We propose a zero-phase filter-bank-based multivariate FDM (MFDM) for the analysis of multivariate nonlinear and non-stationary time series, using the FDM. We also present an algorithm to obtain cut-off frequencies for the MFDM. The proposed MFDM generates a finite number of band-limited multivariate FIBFs (MFIBFs). The MFDM preserves some intrinsic physical properties of the multivariate data, such as scale alignment, trend and instantaneous frequency. The proposed methods provide a time–frequency–energy (TFE) distribution that reveals the intrinsic structure of the data. Numerical computations and simulations have been carried out and comparison is made with the empirical mode decomposition algorithms. PMID:28413352

  18. In vitro analysis of rifampicin and its effect on quality control tests of rifampicin containing dosage forms.

    PubMed

    Agrawal, S; Panchagnula, R

    2004-10-01

    The chemical stability of rifampicin, both in the solid state and in various media, has been widely investigated. While rifampicin is appreciably stable in the solid state, its decomposition rate is very high in acidic as well as in alkaline medium, and a variety of decomposition products have been identified. The literature reports highly variable rifampicin decomposition in acidic medium. Hence, the objective of this investigation was to study the possible reasons responsible for this variability. For this purpose, filter validation and correlations between rifampicin and its degradation products were developed to account for the loss of rifampicin in acidic media. For the analysis of rifampicin with or without the presence of isoniazid, a simple and accurate method was developed using the high-performance liquid chromatography procedure recommended in the FDC monographs of the United States Pharmacopoeia. Using the equations developed in this investigation, the amount of rifampicin degraded in the acidic media was calculated from the area under the curve of the degradation products. Further, it was shown that in a dissolution study the colorimetric method of analysis recommended in the United States Pharmacopoeia provides accurate results regarding rifampicin release. Filter type, time of injection, and interpretation of data are important factors that affect the analysis results of rifampicin in in vitro studies and quality control.

  19. Heuristic decomposition for non-hierarchic systems

    NASA Technical Reports Server (NTRS)

    Bloebaum, Christina L.; Hajela, P.

    1991-01-01

    Design and optimization is substantially more complex in multidisciplinary and large-scale engineering applications due to the existing inherently coupled interactions. The paper introduces a quasi-procedural methodology for multidisciplinary optimization that is applicable for nonhierarchic systems. The necessary decision-making support for the design process is provided by means of an embedded expert systems capability. The method employs a decomposition approach whose modularity allows for implementation of specialized methods for analysis and optimization within disciplines.

  20. Decomposition Studies of Tetraphenylborate Slurries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crawford, C.L.

    1997-05-06

    This report details the decomposition of aqueous potassium/sodium tetraphenylborate ((K,Na)TPB) slurries in concentrated salt solutions using a more complete candidate catalyst recipe, extended testing temperatures (40-70 °C) and test durations of approximately 1500 hours (9 weeks). This study uses recently developed High-Pressure Liquid Chromatography (HPLC) methods for analysis of tetraphenylborate (TPB-), triphenylborane (3PB) and diphenylborinic acid (2PB). All of the present tests involve non-radioactive simulants and do not include investigations of radiolysis effects.
