Sample records for processing tensor decomposition

  1. Genten: Software for Generalized Tensor Decompositions v. 1.0.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Phipps, Eric T.; Kolda, Tamara G.; Dunlavy, Daniel

    Tensors, or multidimensional arrays, are a powerful mathematical means of describing multiway data. This software provides computational means for decomposing or approximating a given tensor in terms of smaller tensors of lower dimension, focusing on the decomposition of large, sparse tensors. These techniques have applications in many scientific areas, including signal processing, linear algebra, computer vision, numerical analysis, data mining, graph analysis, neuroscience, and more. The software is designed to take advantage of the parallelism present in emerging computer architectures, such as multi-core CPUs, many-core accelerators such as the Intel Xeon Phi, and computation-oriented GPUs, to enable efficient processing of large tensors.
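
    As a concrete illustration of the kind of computation Genten performs, a minimal dense CP alternating-least-squares (CP-ALS) loop can be sketched in numpy. This is an illustrative sketch only, not the Genten API (which is C++/Kokkos and handles sparse tensors); all function names here are invented.

```python
import numpy as np

def khatri_rao(B, C):
    # Column-wise Kronecker product: rows indexed (j, k) with k fastest (C-order).
    R = B.shape[1]
    return np.einsum('jr,kr->jkr', B, C).reshape(-1, R)

def cp_als(X, R, n_iter=300, seed=0):
    """Rank-R CP decomposition X ~ sum_r a_r o b_r o c_r by alternating least squares."""
    I, J, K = X.shape
    rng = np.random.default_rng(seed)
    A, B, C = (rng.standard_normal((n, R)) for n in (I, J, K))
    X1 = X.reshape(I, -1)                     # mode-1 unfolding
    X2 = X.transpose(1, 0, 2).reshape(J, -1)  # mode-2 unfolding
    X3 = X.transpose(2, 0, 1).reshape(K, -1)  # mode-3 unfolding
    for _ in range(n_iter):
        A = X1 @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = X2 @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = X3 @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C
```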

  2. 3D tensor-based blind multispectral image decomposition for tumor demarcation

    NASA Astrophysics Data System (ADS)

    Kopriva, Ivica; Peršin, Antun

    2010-03-01

    Blind decomposition of a multi-spectral fluorescent image for tumor demarcation is formulated by exploiting the tensorial structure of the image. The first contribution of the paper is identification of the matrix of spectral responses and the 3D tensor of spatial distributions of the materials present in the image from Tucker3 or PARAFAC models of the 3D image tensor. The second contribution is clustering-based estimation of the number of materials present in the image as well as the matrix of their spectral profiles. The 3D tensor of the spatial distributions of the materials is recovered through 3-mode multiplication of the multi-spectral image tensor and the inverse of the matrix of spectral profiles. Tensor representation of the multi-spectral image preserves its local spatial structure, which is lost, due to the vectorization process, when matrix factorization-based decomposition methods (such as non-negative matrix factorization and independent component analysis) are used. Superior performance of the tensor-based image decomposition over matrix factorization-based decompositions is demonstrated on an experimental red-green-blue (RGB) image with known ground truth as well as on RGB fluorescent images of a skin tumor (basal cell carcinoma).
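
    The recovery step described here, 3-mode multiplication of the image tensor by the inverse of the spectral matrix, can be sketched as follows. Sizes, the mixing matrix, and the variable names are all hypothetical toy data, not the paper's experiments.

```python
import numpy as np

# Hypothetical sizes: an H x W image with S spectral bands, mixing M materials.
rng = np.random.default_rng(1)
H, W, S, M = 8, 8, 4, 3
A = rng.uniform(0.1, 1.0, (S, M))      # spectral responses (S x M), assumed known
D = rng.uniform(0.0, 1.0, (H, W, M))   # spatial distributions of the M materials

# Forward model: X = D x_3 A (3-mode product mixes materials into spectral bands).
X = np.einsum('hwm,sm->hws', D, A)

# Unmixing: 3-mode multiplication by the (pseudo-)inverse of the spectral matrix.
D_hat = np.einsum('hws,ms->hwm', X, np.linalg.pinv(A))
```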

  3. An optimization approach for fitting canonical tensor decompositions.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dunlavy, Daniel M.; Acar, Evrim; Kolda, Tamara Gibson

    Tensor decompositions are higher-order analogues of matrix decompositions and have proven to be powerful tools for data analysis. In particular, we are interested in the canonical tensor decomposition, otherwise known as the CANDECOMP/PARAFAC decomposition (CPD), which expresses a tensor as the sum of component rank-one tensors and is used in a multitude of applications such as chemometrics, signal processing, neuroscience, and web analysis. The task of computing the CPD, however, can be difficult. The typical approach is based on alternating least squares (ALS) optimization, which can be remarkably fast but is not very accurate. Previously, nonlinear least squares (NLS) methods have also been recommended; existing NLS methods are accurate but slow. In this paper, we propose the use of gradient-based optimization methods. We discuss the mathematical calculation of the derivatives and further show that they can be computed efficiently, at the same cost as one iteration of ALS. Computational experiments demonstrate that the gradient-based optimization methods are much more accurate than ALS and orders of magnitude faster than NLS.
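
    The gradient referred to here has a closed form in terms of the factor Gram matrices and one matricized-tensor-times-Khatri-Rao product, which is why its cost is comparable to one ALS iteration. A hedged numpy sketch of the mode-1 gradient (our own naming, not the authors' code):

```python
import numpy as np

def khatri_rao(B, C):
    # Column-wise Kronecker product, rows ordered to match C-order unfoldings.
    R = B.shape[1]
    return np.einsum('jr,kr->jkr', B, C).reshape(-1, R)

def cp_grad_A(X, A, B, C):
    """Gradient of f = 0.5 * ||X - [[A, B, C]]||^2 with respect to factor A:
    A (B^T B * C^T C) minus the matricized-tensor-times-Khatri-Rao product."""
    X1 = X.reshape(X.shape[0], -1)   # mode-1 unfolding (C-order)
    return A @ ((B.T @ B) * (C.T @ C)) - X1 @ khatri_rao(B, C)
```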

  4. Dictionary-Based Tensor Canonical Polyadic Decomposition

    NASA Astrophysics Data System (ADS)

    Cohen, Jeremy Emile; Gillis, Nicolas

    2018-04-01

    To ensure interpretability of the extracted sources in tensor decomposition, we introduce in this paper a dictionary-based tensor canonical polyadic decomposition which enforces one factor to belong exactly to a known dictionary. A new formulation of sparse coding is proposed which enables dictionary-based canonical polyadic decomposition of high-dimensional tensors. The benefits of using a dictionary in tensor decomposition models are explored both in terms of parameter identifiability and estimation accuracy. The performance of the proposed algorithms is evaluated on the decomposition of simulated data and the unmixing of hyperspectral images.

  5. Geometric decomposition of the conformation tensor in viscoelastic turbulence

    NASA Astrophysics Data System (ADS)

    Hameduddin, Ismail; Meneveau, Charles; Zaki, Tamer A.; Gayme, Dennice F.

    2018-05-01

    This work introduces a mathematical approach to analysing the polymer dynamics in turbulent viscoelastic flows that uses a new geometric decomposition of the conformation tensor, along with associated scalar measures of the polymer fluctuations. The approach circumvents an inherent difficulty in traditional Reynolds decompositions of the conformation tensor: the fluctuating tensor fields are not positive-definite and so do not retain the physical meaning of the tensor. The geometric decomposition of the conformation tensor yields both mean and fluctuating tensor fields that are positive-definite. The fluctuating tensor in the present decomposition has a clear physical interpretation as a polymer deformation relative to the mean configuration. Scalar measures of this fluctuating conformation tensor are developed based on the non-Euclidean geometry of the set of positive-definite tensors. Drag-reduced viscoelastic turbulent channel flow is then used as an example case study. The conformation tensor field, obtained using direct numerical simulations, is analysed using the proposed framework.
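
    For a single conformation-tensor sample, the geometric idea can be sketched: the fluctuation relative to the mean is formed by congruence with the inverse square root of the mean tensor, so it stays positive-definite, and a scalar measure follows from the non-Euclidean geometry of SPD matrices. A toy 3x3 sketch under assumed data (not the paper's channel-flow computation):

```python
import numpy as np

def spd_sqrt_inv(M):
    # Inverse square root of a symmetric positive-definite matrix via eigendecomposition.
    w, V = np.linalg.eigh(M)
    return (V / np.sqrt(w)) @ V.T

# Hypothetical conformation tensors (SPD 3x3): instantaneous C and mean Cbar.
rng = np.random.default_rng(0)
F = rng.standard_normal((3, 3)); C = F @ F.T + 3 * np.eye(3)
F = rng.standard_normal((3, 3)); Cbar = F @ F.T + 3 * np.eye(3)

# Fluctuation relative to the mean: G = Cbar^{-1/2} C Cbar^{-1/2} is again SPD,
# and equals the identity when C == Cbar (no fluctuation).
S = spd_sqrt_inv(Cbar)
G = S @ C @ S

# A scalar measure from SPD geometry: the geodesic distance ||log G||_F.
w = np.linalg.eigvalsh(G)
dist = np.sqrt(np.sum(np.log(w) ** 2))
```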

  6. Tensor-based Dictionary Learning for Spectral CT Reconstruction

    PubMed Central

    Zhang, Yanbo; Wang, Ge

    2016-01-01

    Spectral computed tomography (CT) produces an energy-discriminative attenuation map of an object, extending a conventional image volume with a spectral dimension. In spectral CT, an image can be sparsely represented in each of multiple energy channels, and are highly correlated among energy channels. According to this characteristics, we propose a tensor-based dictionary learning method for spectral CT reconstruction. In our method, tensor patches are extracted from an image tensor, which is reconstructed using the filtered backprojection (FBP), to form a training dataset. With the Candecomp/Parafac decomposition, a tensor-based dictionary is trained, in which each atom is a rank-one tensor. Then, the trained dictionary is used to sparsely represent image tensor patches during an iterative reconstruction process, and the alternating minimization scheme is adapted for optimization. The effectiveness of our proposed method is validated with both numerically simulated and real preclinical mouse datasets. The results demonstrate that the proposed tensor-based method generally produces superior image quality, and leads to more accurate material decomposition than the currently popular popular methods. PMID:27541628

  7. Heterogeneous Tensor Decomposition for Clustering via Manifold Optimization.

    PubMed

    Sun, Yanfeng; Gao, Junbin; Hong, Xia; Mishra, Bamdev; Yin, Baocai

    2016-03-01

    Tensor clustering is an important tool that exploits intrinsically rich structures in real-world multiarray or Tensor datasets. Often in dealing with those datasets, standard practice is to use subspace clustering that is based on vectorizing multiarray data. However, vectorization of tensorial data does not exploit complete structure information. In this paper, we propose a subspace clustering algorithm without adopting any vectorization process. Our approach is based on a novel heterogeneous Tucker decomposition model taking into account cluster membership information. We propose a new clustering algorithm that alternates between different modes of the proposed heterogeneous tensor model. All but the last mode have closed-form updates. Updating the last mode reduces to optimizing over the multinomial manifold for which we investigate second order Riemannian geometry and propose a trust-region algorithm. Numerical experiments show that our proposed algorithm compete effectively with state-of-the-art clustering algorithms that are based on tensor factorization.

  8. Tensor gauge condition and tensor field decomposition

    NASA Astrophysics Data System (ADS)

    Zhu, Ben-Chao; Chen, Xiang-Song

    2015-10-01

    We discuss various proposals of separating a tensor field into pure-gauge and gauge-invariant components. Such tensor field decomposition is intimately related to the effort of identifying the real gravitational degrees of freedom out of the metric tensor in Einstein’s general relativity. We show that as for a vector field, the tensor field decomposition has exact correspondence to and can be derived from the gauge-fixing approach. The complication for the tensor field, however, is that there are infinitely many complete gauge conditions in contrast to the uniqueness of Coulomb gauge for a vector field. The cause of such complication, as we reveal, is the emergence of a peculiar gauge-invariant pure-gauge construction for any gauge field of spin ≥ 2. We make an extensive exploration of the complete tensor gauge conditions and their corresponding tensor field decompositions, regarding mathematical structures, equations of motion for the fields and nonlinear properties. Apparently, no single choice is superior in all aspects, due to an awkward fact that no gauge-fixing can reduce a tensor field to be purely dynamical (i.e. transverse and traceless), as can the Coulomb gauge in a vector case.

  9. Randomized interpolative decomposition of separated representations

    NASA Astrophysics Data System (ADS)

    Biagioni, David J.; Beylkin, Daniel; Beylkin, Gregory

    2015-01-01

    We introduce an algorithm to compute tensor interpolative decomposition (dubbed CTD-ID) for the reduction of the separation rank of Canonical Tensor Decompositions (CTDs). Tensor ID selects, for a user-defined accuracy ɛ, a near optimal subset of terms of a CTD to represent the remaining terms via a linear combination of the selected terms. CTD-ID can be used as an alternative to or in combination with the Alternating Least Squares (ALS) algorithm. We present examples of its use within a convergent iteration to compute inverse operators in high dimensions. We also briefly discuss the spectral norm as a computational alternative to the Frobenius norm in estimating approximation errors of tensor ID. We reduce the problem of finding tensor IDs to that of constructing interpolative decompositions of certain matrices. These matrices are generated via randomized projection of the terms of the given tensor. We provide cost estimates and several examples of the new approach to the reduction of separation rank.
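
    The reduction described here, computing an interpolative decomposition of a matrix obtained by randomized projection of the CTD terms, can be sketched with pivoted-QR column selection. This is an illustrative sketch under toy sizes; the paper's actual algorithm, error control, and tolerances differ.

```python
import numpy as np
from scipy.linalg import qr

def interp_decomp(M, k):
    """Rank-k interpolative decomposition via pivoted QR: choose k columns J of M
    and coefficients P such that M ~ M[:, J] @ P."""
    _, _, piv = qr(M, mode='economic', pivoting=True)
    J = piv[:k]
    P = np.linalg.lstsq(M[:, J], M, rcond=None)[0]
    return J, P

# 40 separable terms that really span only ~5 directions, sketched down by a
# random Gaussian projection before the column selection (sizes illustrative).
rng = np.random.default_rng(0)
terms = rng.standard_normal((1000, 5)) @ rng.standard_normal((5, 40))
sketch = rng.standard_normal((12, 1000)) @ terms     # randomized projection
J, P = interp_decomp(sketch, 5)
err = np.linalg.norm(terms - terms[:, J] @ P) / np.linalg.norm(terms)
```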

  10. Trace Norm Regularized CANDECOMP/PARAFAC Decomposition With Missing Data.

    PubMed

    Liu, Yuanyuan; Shang, Fanhua; Jiao, Licheng; Cheng, James; Cheng, Hong

    2015-11-01

    In recent years, low-rank tensor completion (LRTC) problems have received a significant amount of attention in computer vision, data mining, and signal processing. The existing trace norm minimization algorithms for iteratively solving LRTC problems involve multiple singular value decompositions of very large matrices at each iteration. Therefore, they suffer from high computational cost. In this paper, we propose a novel trace norm regularized CANDECOMP/PARAFAC decomposition (TNCP) method for simultaneous tensor decomposition and completion. We first formulate a factor matrix rank minimization model by deducing the relation between the rank of each factor matrix and the mode-n rank of a tensor. Then, we introduce a tractable relaxation of our rank function and obtain a convex combination problem of much smaller-scale matrix trace norm minimization. Finally, we develop an efficient algorithm based on the alternating direction method of multipliers to solve our problem. The promising experimental results on synthetic and real-world data validate the effectiveness of our TNCP method. Moreover, TNCP is significantly faster than the state-of-the-art methods and scales to larger problems.

  11. Decomposition of a symmetric second-order tensor

    NASA Astrophysics Data System (ADS)

    Heras, José A.

    2018-05-01

    In the three-dimensional space there are different definitions for the dot and cross products of a vector with a second-order tensor. In this paper we show how these products can uniquely be defined for the case of symmetric tensors. We then decompose a symmetric second-order tensor into its ‘dot’ part, which involves the dot product, and the ‘cross’ part, which involves the cross product. For some physical applications, this decomposition can be interpreted as one in which the dot part identifies with the ‘parallel’ part of the tensor and the cross part identifies with the ‘perpendicular’ part. This decomposition of a symmetric second-order tensor may be suitable for undergraduate courses of vector calculus, mechanics and electrodynamics.

  12. Highly Efficient and Scalable Compound Decomposition of Two-Electron Integral Tensor and Its Application in Coupled Cluster Calculations.

    PubMed

    Peng, Bo; Kowalski, Karol

    2017-09-12

    The representation and storage of two-electron integral tensors are vital in large-scale applications of accurate electronic structure methods. Low-rank representation and efficient storage strategies for integral tensors can significantly reduce the numerical overhead and consequently the time-to-solution of these methods. In this work, by combining pivoted incomplete Cholesky decomposition (CD) with a follow-up truncated singular value decomposition (SVD), we develop a decomposition strategy to approximately represent the two-electron integral tensor in terms of low-rank vectors. A systematic benchmark test on a series of 1-D, 2-D, and 3-D carbon-hydrogen systems demonstrates high efficiency and scalability of the compound two-step decomposition of the two-electron integral tensor in our implementation. For the size of the atomic basis set, N_b, ranging from ~100 up to ~2,000, the observed numerical scaling of our implementation shows O(N_b^{2.5-3}) versus the O(N_b^{3-4}) cost of performing a single CD on the two-electron integral tensor in most of the other implementations. More importantly, this decomposition strategy can significantly reduce the storage requirement of the atomic orbital (AO) two-electron integral tensor from O(N_b^4) to O(N_b^2 log10(N_b)) with moderate decomposition thresholds. The accuracy tests have been performed using ground- and excited-state formulations of coupled cluster formalism employing single and double excitations (CCSD) on several benchmark systems including the C_60 molecule described by nearly 1,400 basis functions. The results show that the decomposition thresholds can generally be set to 10^{-4} to 10^{-3} to give an acceptable compromise between efficiency and accuracy.
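
    The compound two-step scheme, a pivoted incomplete Cholesky decomposition followed by a truncated SVD of the resulting Cholesky factor, can be sketched on a small stand-in PSD matrix. This is a toy sketch of the idea only, not the authors' implementation; the matrix and thresholds are invented.

```python
import numpy as np

def pivoted_cholesky(A, tol=1e-8):
    """Pivoted incomplete Cholesky: A ~ L @ L.T, stopping once the trace of the
    residual drops below tol."""
    n = A.shape[0]
    d = np.diag(A).astype(float)      # residual diagonal
    cols = []
    while d.sum() > tol and len(cols) < n:
        i = int(np.argmax(d))
        col = A[:, i].astype(float).copy()
        for c in cols:                 # subtract contributions of earlier columns
            col -= c[i] * c
        col /= np.sqrt(d[i])
        cols.append(col)
        d = np.maximum(d - col * col, 0.0)   # clip round-off negatives
    return np.column_stack(cols)

# A small PSD stand-in for the integral matrix (rank 6 by construction).
rng = np.random.default_rng(0)
Bf = rng.standard_normal((60, 6))
A = Bf @ Bf.T

# Step 1: pivoted incomplete CD; step 2: truncated SVD of the Cholesky factor.
L = pivoted_cholesky(A)
U, s, _ = np.linalg.svd(L, full_matrices=False)
r = int(np.sum(s > 1e-6 * s[0]))
V = U[:, :r] * s[:r]                   # low-rank vectors: A ~ V @ V.T
```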

  13. C++ Tensor Toolbox user manual.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Plantenga, Todd D.; Kolda, Tamara Gibson

    2012-04-01

    The C++ Tensor Toolbox is a software package for computing tensor decompositions. It is based on the Matlab Tensor Toolbox, and is particularly optimized for sparse data sets. This user manual briefly overviews tensor decomposition mathematics, software capabilities, and installation of the package. Tensors (also known as multidimensional arrays or N-way arrays) are used in a variety of applications ranging from chemometrics to network analysis. The Tensor Toolbox provides classes for manipulating dense, sparse, and structured tensors in C++. The Toolbox compiles into libraries and is intended for use with custom applications written by users.

  14. TENSOR DECOMPOSITIONS AND SPARSE LOG-LINEAR MODELS

    PubMed Central

    Johndrow, James E.; Bhattacharya, Anirban; Dunson, David B.

    2017-01-01

    Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. We derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions. PMID:29332971

  15. Highly Efficient and Scalable Compound Decomposition of Two-Electron Integral Tensor and Its Application in Coupled Cluster Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peng, Bo; Kowalski, Karol

    The representation and storage of two-electron integral tensors are vital in large-scale applications of accurate electronic structure methods. Low-rank representation and efficient storage strategies for integral tensors can significantly reduce the numerical overhead and consequently the time-to-solution of these methods. In this paper, by combining pivoted incomplete Cholesky decomposition (CD) with a follow-up truncated singular value decomposition (SVD), we develop a decomposition strategy to approximately represent the two-electron integral tensor in terms of low-rank vectors. A systematic benchmark test on a series of 1-D, 2-D, and 3-D carbon-hydrogen systems demonstrates high efficiency and scalability of the compound two-step decomposition of the two-electron integral tensor in our implementation. For the size of the atomic basis set N_b ranging from ~100 up to ~2,000, the observed numerical scaling of our implementation shows O(N_b^{2.5-3}) versus the O(N_b^{3-4}) cost of a single CD in most other implementations. More importantly, this decomposition strategy can significantly reduce the storage requirement of the atomic-orbital (AO) two-electron integral tensor from O(N_b^4) to O(N_b^2 log_{10}(N_b)) with moderate decomposition thresholds. The accuracy tests have been performed using ground- and excited-state formulations of coupled-cluster formalism employing single and double excitations (CCSD) on several benchmark systems including the C_{60} molecule described by nearly 1,400 basis functions. The results show that the decomposition thresholds can generally be set to 10^{-4} to 10^{-3} to give an acceptable compromise between efficiency and accuracy.

  16. Hidden discriminative features extraction for supervised high-order time series modeling.

    PubMed

    Nguyen, Ngoc Anh Thi; Yang, Hyung-Jeong; Kim, Sunhee

    2016-11-01

    In this paper, an orthogonal Tucker-decomposition-based extraction of high-order discriminative subspaces from a tensor-based time series data structure is presented, named Tensor Discriminative Feature Extraction (TDFE). TDFE relies on the employment of category information for the maximization of the between-class scatter and the minimization of the within-class scatter to extract optimal hidden discriminative feature subspaces that are simultaneously spanned by every modality for supervised tensor modeling. In this context, the proposed tensor-decomposition method provides the following benefits: i) it reduces dimensionality while robustly mining the underlying discriminative features, ii) it results in effective interpretable features that lead to improved classification and visualization, and iii) it reduces the processing time during the training stage and the filtering of the projection by solving the generalized eigenvalue problem at each alternation step. Two real third-order tensor-structures of time series datasets (an epilepsy electroencephalogram (EEG) that is modeled as channel×frequency bin×time frame and a microarray dataset that is modeled as gene×sample×time) were used for the evaluation of the TDFE. The experimental results corroborate the advantages of the proposed method, with averages of 98.26% and 89.63% for the classification accuracies of the epilepsy dataset and the microarray dataset, respectively. These performance averages represent an improvement on those of the matrix-based algorithms and recent tensor-based, discriminant-decomposition approaches; this is especially the case considering the small number of samples that are used in practice. Copyright © 2016 Elsevier Ltd. All rights reserved.
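
    The projection step mentioned here, maximizing between-class against within-class scatter, reduces per mode to a generalized eigenproblem S_b v = λ S_w v. A toy vector-space sketch (names and data are illustrative, not the TDFE algorithm itself):

```python
import numpy as np
from scipy.linalg import eigh

# Toy two-class data separated along feature 0.
rng = np.random.default_rng(0)
X0 = rng.standard_normal((50, 6)) + np.array([2, 0, 0, 0, 0, 0])
X1 = rng.standard_normal((50, 6)) - np.array([2, 0, 0, 0, 0, 0])
mu0, mu1 = X0.mean(0), X1.mean(0)
mu = (mu0 + mu1) / 2

Sb = np.outer(mu0 - mu, mu0 - mu) + np.outer(mu1 - mu, mu1 - mu)   # between-class scatter
Sw = (X0 - mu0).T @ (X0 - mu0) + (X1 - mu1).T @ (X1 - mu1)         # within-class scatter

# Largest generalized eigenvectors span the discriminative subspace
# (eigh returns eigenvalues in ascending order, so take the last column).
w, V = eigh(Sb, Sw)
proj = V[:, -1]
```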

  17. Atomic-batched tensor decomposed two-electron repulsion integrals

    NASA Astrophysics Data System (ADS)

    Schmitz, Gunnar; Madsen, Niels Kristian; Christiansen, Ove

    2017-04-01

    We present a new integral format for 4-index electron repulsion integrals, in which several strategies like the Resolution-of-the-Identity (RI) approximation and other more general tensor-decomposition techniques are combined with an atomic batching scheme. The 3-index RI integral tensor is divided into sub-tensors defined by atom pairs on which we perform an accelerated decomposition to the canonical product (CP) format. In a first step, the RI integrals are decomposed to a high-rank CP-like format by repeated singular value decompositions followed by a rank reduction, which uses a Tucker decomposition as an intermediate step to lower the prefactor of the algorithm. After decomposing the RI sub-tensors (within the Coulomb metric), they can be reassembled to the full decomposed tensor (RC approach) or the atomic batched format can be maintained (ABC approach). In the first case, the integrals are very similar to the well-known tensor hypercontraction integral format, which has gained some attention in recent years since it allows for quartic-scaling implementations of MP2 and some coupled cluster methods. At the MP2 level, the RC and ABC approaches are compared concerning efficiency and storage requirements. Furthermore, the overall accuracy of this approach is assessed. Initial test calculations show good accuracy and that the method is not limited to small systems.

  18. Atomic-batched tensor decomposed two-electron repulsion integrals.

    PubMed

    Schmitz, Gunnar; Madsen, Niels Kristian; Christiansen, Ove

    2017-04-07

    We present a new integral format for 4-index electron repulsion integrals, in which several strategies like the Resolution-of-the-Identity (RI) approximation and other more general tensor-decomposition techniques are combined with an atomic batching scheme. The 3-index RI integral tensor is divided into sub-tensors defined by atom pairs on which we perform an accelerated decomposition to the canonical product (CP) format. In a first step, the RI integrals are decomposed to a high-rank CP-like format by repeated singular value decompositions followed by a rank reduction, which uses a Tucker decomposition as an intermediate step to lower the prefactor of the algorithm. After decomposing the RI sub-tensors (within the Coulomb metric), they can be reassembled to the full decomposed tensor (RC approach) or the atomic batched format can be maintained (ABC approach). In the first case, the integrals are very similar to the well-known tensor hypercontraction integral format, which has gained some attention in recent years since it allows for quartic-scaling implementations of MP2 and some coupled cluster methods. At the MP2 level, the RC and ABC approaches are compared concerning efficiency and storage requirements. Furthermore, the overall accuracy of this approach is assessed. Initial test calculations show good accuracy and that the method is not limited to small systems.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kolda, Tamara Gibson

    We propose two new multilinear operators for expressing the matrix compositions that are needed in the Tucker and PARAFAC (CANDECOMP) decompositions. The first operator, which we call the Tucker operator, is shorthand for performing an n-mode matrix multiplication for every mode of a given tensor and can be employed to concisely express the Tucker decomposition. The second operator, which we call the Kruskal operator, is shorthand for the sum of the outer-products of the columns of N matrices and allows a divorce from a matricized representation and a very concise expression of the PARAFAC decomposition. We explore the properties of the Tucker and Kruskal operators independently of the related decompositions. Additionally, we provide a review of the matrix and tensor operations that are frequently used in the context of tensor decompositions.
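
    The two operators can be written directly as einsum contractions for third-order tensors; a sketch in our own notation, not the report's. The Kruskal operator coincides with a Tucker operator whose core is superdiagonal, which is the PARAFAC/Tucker relationship the abstract alludes to.

```python
import numpy as np

def tucker_op(G, U1, U2, U3):
    """Tucker operator: n-mode multiply the core G by a matrix in every mode."""
    return np.einsum('abc,ia,jb,kc->ijk', G, U1, U2, U3)

def kruskal_op(A, B, C):
    """Kruskal operator: sum of outer products of matching columns of A, B, C."""
    return np.einsum('ir,jr,kr->ijk', A, B, C)

rng = np.random.default_rng(0)
A, B, C = rng.standard_normal((4, 2)), rng.standard_normal((5, 2)), rng.standard_normal((6, 2))
# A superdiagonal 2x2x2 core turns the Tucker operator into the Kruskal operator.
I2 = np.zeros((2, 2, 2)); I2[0, 0, 0] = I2[1, 1, 1] = 1.0
```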

  20. A Type-2 Block-Component-Decomposition Based 2D AOA Estimation Algorithm for an Electromagnetic Vector Sensor Array

    PubMed Central

    Gao, Yu-Fei; Gui, Guan; Xie, Wei; Zou, Yan-Bin; Yang, Yue; Wan, Qun

    2017-01-01

    This paper investigates a two-dimensional angle of arrival (2D AOA) estimation algorithm for the electromagnetic vector sensor (EMVS) array based on Type-2 block component decomposition (BCD) tensor modeling. Such a tensor decomposition method can take full advantage of the multidimensional structural information of electromagnetic signals to accomplish blind estimation for array parameters with higher resolution. However, existing tensor decomposition methods encounter many restrictions in applications of the EMVS array, such as the strict requirement for uniqueness conditions of decomposition, the inability to handle partially-polarized signals, etc. To solve these problems, this paper investigates tensor modeling for partially-polarized signals of an L-shaped EMVS array. The 2D AOA estimation algorithm based on rank-(L1,L2,·) BCD is developed, and the uniqueness condition of decomposition is analyzed. By means of the estimated steering matrix, the proposed algorithm can automatically achieve angle pair-matching. Numerical experiments demonstrate that the present algorithm has the advantages of both accuracy and robustness of parameter estimation. Even under the conditions of lower SNR, small angular separation and limited snapshots, the proposed algorithm still possesses better performance than subspace methods and the canonical polyadic decomposition (CPD) method. PMID:28448431

  21. A Type-2 Block-Component-Decomposition Based 2D AOA Estimation Algorithm for an Electromagnetic Vector Sensor Array.

    PubMed

    Gao, Yu-Fei; Gui, Guan; Xie, Wei; Zou, Yan-Bin; Yang, Yue; Wan, Qun

    2017-04-27

    This paper investigates a two-dimensional angle of arrival (2D AOA) estimation algorithm for the electromagnetic vector sensor (EMVS) array based on Type-2 block component decomposition (BCD) tensor modeling. Such a tensor decomposition method can take full advantage of the multidimensional structural information of electromagnetic signals to accomplish blind estimation for array parameters with higher resolution. However, existing tensor decomposition methods encounter many restrictions in applications of the EMVS array, such as the strict requirement for uniqueness conditions of decomposition, the inability to handle partially-polarized signals, etc. To solve these problems, this paper investigates tensor modeling for partially-polarized signals of an L-shaped EMVS array. The 2D AOA estimation algorithm based on rank-(L1, L2, ·) BCD is developed, and the uniqueness condition of decomposition is analyzed. By means of the estimated steering matrix, the proposed algorithm can automatically achieve angle pair-matching. Numerical experiments demonstrate that the present algorithm has the advantages of both accuracy and robustness of parameter estimation. Even under the conditions of lower SNR, small angular separation and limited snapshots, the proposed algorithm still possesses better performance than subspace methods and the canonical polyadic decomposition (CPD) method.

  22. Tensor Factorization for Low-Rank Tensor Completion.

    PubMed

    Zhou, Pan; Lu, Canyi; Lin, Zhouchen; Zhang, Chao

    2018-03-01

    Recently, a tensor nuclear norm (TNN) based method was proposed to solve the tensor completion problem, which has achieved state-of-the-art performance on image and video inpainting tasks. However, it requires computing the tensor singular value decomposition (t-SVD), which costs much computation and thus cannot efficiently handle tensor data, due to its naturally large scale. Motivated by TNN, we propose a novel low-rank tensor factorization method for efficiently solving the 3-way tensor completion problem. Our method preserves the low-rank structure of a tensor by factorizing it into the product of two tensors of smaller sizes. In the optimization process, our method only needs to update two smaller tensors, which can be conducted more efficiently than computing the t-SVD. Furthermore, we prove that the proposed alternating minimization algorithm can converge to a Karush-Kuhn-Tucker point. Experimental results on synthetic data recovery and on image and video inpainting tasks clearly demonstrate the superior performance and efficiency of our developed method over state-of-the-art methods, including the TNN and matricization methods.
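
    The factorization behind this approach keeps X ≈ U * V under the t-product, the tensor-tensor product underlying the t-SVD: FFT along the third mode, frontal-slice matrix products in the transform domain, inverse FFT. A minimal sketch of the t-product and the smaller factors (sizes hypothetical; this is the general t-product, not the paper's completion solver):

```python
import numpy as np

def t_product(A, B):
    """t-product of 3-way tensors: FFT along mode 3, frontal-slice matrix
    products in the transform domain, then inverse FFT."""
    Af = np.fft.fft(A, axis=2)
    Bf = np.fft.fft(B, axis=2)
    Cf = np.einsum('ijk,jlk->ilk', Af, Bf)
    return np.real(np.fft.ifft(Cf, axis=2))

# Factorized form: represent X ~ t_product(U, V) with two much smaller tensors,
# so an optimization loop updates U and V instead of computing a full t-SVD.
rng = np.random.default_rng(0)
U = rng.standard_normal((8, 2, 5))   # n1 x r x n3
V = rng.standard_normal((2, 9, 5))   # r  x n2 x n3
X = t_product(U, V)                  # n1 x n2 x n3, tubal rank at most r = 2
```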

  23. Near-lossless multichannel EEG compression based on matrix and tensor decompositions.

    PubMed

    Dauwels, Justin; Srinivasan, K; Reddy, M Ramasubba; Cichocki, Andrzej

    2013-05-01

    A novel near-lossless compression algorithm for multichannel electroencephalogram (MC-EEG) is proposed based on matrix/tensor decomposition models. MC-EEG is represented in suitable multiway (multidimensional) forms to efficiently exploit temporal and spatial correlations simultaneously. Several matrix/tensor decomposition models are analyzed in view of efficient decorrelation of the multiway forms of MC-EEG. A compression algorithm is built based on the principle of “lossy plus residual coding,” consisting of a matrix/tensor decomposition-based coder in the lossy layer followed by arithmetic coding in the residual layer. This approach guarantees a specifiable maximum absolute error between original and reconstructed signals. The compression algorithm is applied to three different scalp EEG datasets and an intracranial EEG dataset, each with different sampling rate and resolution. The proposed algorithm achieves attractive compression ratios compared to compressing individual channels separately. For similar compression ratios, the proposed algorithm achieves nearly fivefold lower average error compared to a similar wavelet-based volumetric MC-EEG compression algorithm.
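
    The "lossy plus residual coding" principle, a decomposition-based lossy layer plus a uniformly quantized residual that caps the maximum absolute error, can be sketched generically. Here a truncated SVD stands in for the paper's matrix/tensor decomposition coder, and the signal and bound are invented toy data.

```python
import numpy as np

# Toy multichannel signal standing in for MC-EEG (16 channels x 512 samples).
rng = np.random.default_rng(0)
eeg = np.cumsum(rng.standard_normal((16, 512)), axis=1)

# Lossy layer: a rank-4 truncated SVD stands in for the decomposition coder.
U, s, Vt = np.linalg.svd(eeg, full_matrices=False)
lossy = (U[:, :4] * s[:4]) @ Vt[:4]

# Residual layer: uniform quantization with step 2*eps guarantees that the
# reconstruction error never exceeds the user-specified bound eps.
eps = 0.05
q = np.round((eeg - lossy) / (2 * eps))   # integers, to be entropy-coded
recon = lossy + q * (2 * eps)
max_err = np.abs(eeg - recon).max()
```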

  24. Implementing the sine transform of fermionic modes as a tensor network

    NASA Astrophysics Data System (ADS)

    Epple, Hannes; Fries, Pascal; Hinrichsen, Haye

    2017-09-01

    Based on the algebraic theory of signal processing, we recursively decompose the discrete sine transform of the first kind (DST-I) into small orthogonal block operations. Using a diagrammatic language, we then second-quantize this decomposition to construct a tensor network implementing the DST-I for fermionic modes on a lattice. The complexity of the resulting network is shown to scale as (5/4) n log n (not counting swap gates), where n is the number of lattice sites. Our method provides a systematic approach of generalizing Ferris' spectral tensor network for nontrivial boundary conditions.
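
    The single-particle DST-I itself is available in scipy, and in orthonormal form it is a symmetric orthogonal transform, hence its own inverse; that is the unitarity any block decomposition of it has to preserve. A quick sketch on n = 16 toy "lattice sites":

```python
import numpy as np
from scipy.fft import dst

# DST-I with norm='ortho': the transform matrix is symmetric and orthogonal,
# so applying it twice returns the input.
x = np.random.default_rng(0).standard_normal(16)
y = dst(x, type=1, norm='ortho')
x_back = dst(y, type=1, norm='ortho')
```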

  5. Analysis of temporal-longitudinal-latitudinal characteristics in the global ionosphere based on tensor rank-1 decomposition

    NASA Astrophysics Data System (ADS)

    Lu, Shikun; Zhang, Hao; Li, Xihai; Li, Yihong; Niu, Chao; Yang, Xiaoyun; Liu, Daizhi

    2018-03-01

    Combining analyses of the spatial and temporal characteristics of the ionosphere is of great significance for scientific research and engineering applications. Tensor decomposition is performed to explore the temporal-longitudinal-latitudinal characteristics of the ionosphere. Three-dimensional tensors are established based on the time series of ionospheric vertical total electron content maps obtained from the Centre for Orbit Determination in Europe. To obtain large-scale characteristics of the ionosphere, rank-1 decomposition is used to obtain U^{(1)}, U^{(2)}, and U^{(3)}, the resulting vectors for the time, longitude, and latitude modes, respectively. Our initial finding is that the correspondence between the frequency spectrum of U^{(1)} and solar variation indicates that rank-1 decomposition primarily describes large-scale temporal variations in the global ionosphere caused by the Sun. Furthermore, the time lags between the maxima of the ionospheric U^{(2)} and solar irradiation range from 1 to 3.7 h without seasonal dependence. The differences in time lags may indicate different interactions between processes in the magnetosphere-ionosphere-thermosphere system. Based on the dataset displayed in geomagnetic coordinates, the position of the barycenter of U^{(3)} provides evidence for north-south asymmetry (NSA) in the large-scale ionospheric variations, and the daily variation in this asymmetry indicates the influence of solar ionization. The diurnal geomagnetic-coordinate variations in U^{(3)} show that the large-scale equatorial ionization anomaly (EIA) variations during the day and night have similar characteristics. Considering the influence of geomagnetic disturbances on ionospheric behavior, we select geomagnetically quiet global ionosphere maps (GIMs) to construct the ionospheric tensor. The results indicate that geomagnetic disturbances have little effect on large-scale ionospheric characteristics.
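
The rank-1 decomposition used above can be computed with the higher-order power method. The numpy sketch below is generic and illustrative, not the authors' processing chain; its normalized factors play the roles of U^{(1)}, U^{(2)}, and U^{(3)}.

```python
import numpy as np

def rank1_hopm(T, n_iter=100):
    """Higher-order power method: best rank-1 approximation
    lam * (u1 o u2 o u3) of a 3-way tensor T."""
    u = [np.ones(s) / np.sqrt(s) for s in T.shape]
    for _ in range(n_iter):
        # Alternately contract T against two factors, renormalize the third.
        u[0] = np.einsum('ijk,j,k->i', T, u[1], u[2]); u[0] /= np.linalg.norm(u[0])
        u[1] = np.einsum('ijk,i,k->j', T, u[0], u[2]); u[1] /= np.linalg.norm(u[1])
        u[2] = np.einsum('ijk,i,j->k', T, u[0], u[1]); u[2] /= np.linalg.norm(u[2])
    lam = np.einsum('ijk,i,j,k->', T, u[0], u[1], u[2])
    return lam, u

# Recover an exactly rank-1 tensor with positive factors.
a, b, c = np.array([1.0, 2.0]), np.array([3.0, 4.0, 5.0]), np.array([1.0, 1.0])
T = np.einsum('i,j,k->ijk', a, b, c)
lam, u = rank1_hopm(T)
assert np.allclose(lam * np.einsum('i,j,k->ijk', *u), T)
```

For a TEC tensor the three factors would respectively carry the dominant temporal, longitudinal, and latitudinal profiles.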

  6. Tensor Toolbox for MATLAB v. 3.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kolda, Tamara G.; Bader, Brett W.; Acar, Evrim

    Tensors (also known as multidimensional arrays or N-way arrays) are used in a variety of applications ranging from chemometrics to network analysis. The Tensor Toolbox provides classes for manipulating dense, sparse, and structured tensors using MATLAB's object-oriented features. It also provides algorithms for tensor decomposition and factorization, algorithms for computing tensor eigenvalues, and methods for visualization of results.

  7. Entanglement branching operator

    NASA Astrophysics Data System (ADS)

    Harada, Kenji

    2018-01-01

    We introduce an entanglement branching operator to split a composite entanglement flow in a tensor network, which is a promising theoretical tool for many-body systems. The entanglement branching operator can be optimized by solving a minimization problem based on squeezing operators. Entanglement branching is a new and useful operation for manipulating a tensor network. For example, by finding a particular entanglement structure with an entanglement branching operator, we can improve the higher-order tensor renormalization group method to capture a proper renormalization flow in tensor network space; this yields a new type of tensor network state. The second example is a many-body decomposition of a tensor using an entanglement branching operator, which enables perfect disentangling among tensors. Applying the many-body decomposition recursively, we conceptually derive projected entangled pair states from quantum states that satisfy the area law of entanglement entropy.

  8. Intrinsic Decomposition of The Stretch Tensor for Fibrous Media

    NASA Astrophysics Data System (ADS)

    Kellermann, David C.

    2010-05-01

    This paper presents a novel mechanism for describing fibre reorientation based on the decomposition of the stretch tensor according to a given material's intrinsic constitutive properties. This approach avoids the need for fibre directors, structural tensors or specialised models such as the ideal fibre-reinforced model, which are commonly applied to the analysis of fibre kinematics in the finite deformation of fibrous media for biomechanical problems. The proposed approach uses Intrinsic-Field Tensors (IFTs) that build upon the linear orthotropic theory presented in a previous paper entitled Strongly orthotropic continuum mechanics and finite element treatment. The intrinsic decomposition of the stretch tensor therein provides superior capacity to represent the intermediary kinematics driven by finite orthotropic ratios, with the benefits predominantly expressed in cases of large deformation, as is typical in biomechanical studies. Satisfaction of requirements such as Material Frame-Indifference (MFI) and Euclidean objectivity is demonstrated here, these being necessary for the proposed IFTs to be valid tensorial quantities. The resultant tensors, initially for the simplest case of linear elasticity, are able to describe the same fibre reorientation as contemporary approaches such as those based on structural tensors, while additionally being capable of showing results intermediate between classical isotropy and the infinitely orthotropic representations. This intermediate case is previously unreported.

  9. Quantifying polymer deformation in viscoelastic turbulence: the geometric decomposition and a Riemannian approach to scalar measures

    NASA Astrophysics Data System (ADS)

    Hameduddin, Ismail; Meneveau, Charles; Zaki, Tamer; Gayme, Dennice

    2017-11-01

    We develop a new framework to quantify the fluctuating behaviour of the conformation tensor in viscoelastic turbulent flows. This framework addresses two shortcomings of the classical approach based on Reynolds decomposition: the fluctuating part of the conformation tensor is not guaranteed to be positive definite and it does not consistently represent polymer expansions and contractions about the mean. Our approach employs a geometric decomposition that yields a positive-definite fluctuating conformation tensor with a clear physical interpretation as a deformation to the mean conformation. We propose three scalar measures of this fluctuating conformation tensor, which respect the non-Euclidean Riemannian geometry of the manifold of positive-definite tensors: fluctuating polymer volume, geodesic distance from the mean, and an anisotropy measure. We use these scalar quantities to investigate drag-reduced viscoelastic turbulent channel flow. Our approach establishes a systematic method to study viscoelastic turbulence. It also uncovers interesting phenomena that are not apparent using traditional analysis tools, including a logarithmic decrease in anisotropy of the mean conformation tensor away from the wall and polymer fluctuations peaking beyond the buffer layer. This work has been partially funded by the following NSF Grants: CBET-1652244, OCE-1633124, CBET-1511937.
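
The geodesic distance from the mean mentioned above comes from the affine-invariant metric on the manifold of positive-definite tensors. A minimal numpy sketch of that scalar measure (the function names are ours, not the authors'):

```python
import numpy as np

def _spd_pow(A, p):
    # Eigendecomposition-based matrix power for a symmetric positive-definite A.
    w, V = np.linalg.eigh(A)
    return (V * w**p) @ V.T

def _spd_log(A):
    # Matrix logarithm of a symmetric positive-definite A.
    w, V = np.linalg.eigh(A)
    return (V * np.log(w)) @ V.T

def spd_geodesic_distance(A, B):
    """Affine-invariant Riemannian distance on the SPD manifold:
    d(A, B) = ||log(A^{-1/2} B A^{-1/2})||_F."""
    Aih = _spd_pow(A, -0.5)
    return np.linalg.norm(_spd_log(Aih @ B @ Aih))

# Distance from the identity to exp(2)*I in 3D is ||2 I||_F = 2*sqrt(3).
I3 = np.eye(3)
assert np.isclose(spd_geodesic_distance(I3, np.exp(2.0) * I3), 2.0 * np.sqrt(3.0))
```

Unlike a Euclidean difference, this distance treats equal polymer expansions and contractions about the mean conformation symmetrically.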

  10. Performance of tensor decomposition-based modal identification under nonstationary vibration

    NASA Astrophysics Data System (ADS)

    Friesen, P.; Sadhu, A.

    2017-03-01

    Health monitoring of civil engineering structures is of paramount importance when they are subjected to natural hazards or extreme climatic events such as earthquakes, strong wind gusts or man-made excitations. Most traditional modal identification methods rely on a stationarity assumption for the vibration response and have difficulty analyzing nonstationary vibration (e.g. earthquake- or human-induced vibration). Recently, tensor decomposition based methods have emerged as powerful yet generic blind signal decomposition tools (i.e. requiring no knowledge of the input characteristics) for structural modal identification. In this paper, a tensor decomposition based system identification method is further explored to estimate modal parameters using nonstationary vibration generated by either earthquake or pedestrian-induced excitation in a structure. The effects of lag parameters and sensor densities on tensor decomposition are studied with respect to the extent of nonstationarity of the responses, characterized by the stationary duration and peak ground acceleration of the earthquake. A suite of more than 1400 earthquakes is used to investigate the performance of the proposed method under a wide variety of ground motions, utilizing both complete and partial measurements of a high-rise building model. Apart from earthquakes, human-induced nonstationary vibration of a real-life pedestrian bridge is also used to verify the accuracy of the proposed method.

  11. Real-time object recognition in multidimensional images based on joined extended structural tensor and higher-order tensor decomposition methods

    NASA Astrophysics Data System (ADS)

    Cyganek, Boguslaw; Smolka, Bogdan

    2015-02-01

    In this paper a system for real-time recognition of objects in multidimensional video signals is proposed. Object recognition is done by projecting patterns onto tensor subspaces obtained from the factorization of the signal tensors representing the input signal. However, instead of taking only the intensity signal, the novelty of this paper is to first build an Extended Structural Tensor representation from the intensity signal that conveys information on signal intensities as well as on higher-order statistics of the input signals. In this way the higher-order input pattern tensors are built from the training samples. Then, the tensor subspaces are built based on the Higher-Order Singular Value Decomposition of the prototype pattern tensors. Finally, recognition relies on measuring the distance of a test pattern projected into the tensor subspaces obtained from the training tensors. Due to the high dimensionality of the input data, tensor based methods require large memory and computational resources. However, recent advances in multi-core microprocessors and graphics cards allow real-time operation of multidimensional methods, as shown and analyzed in this paper on real examples of object detection in digital images.
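
A truncated Higher-Order Singular Value Decomposition of the kind used for building those tensor subspaces can be sketched with mode unfoldings and per-mode SVDs. This generic numpy version is illustrative, not the paper's implementation:

```python
import numpy as np

def unfold(T, mode):
    # Mode-n unfolding: mode-n fibers become columns of a matrix.
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_multiply(T, M, mode):
    # Mode-n product T x_n M.
    return np.moveaxis(np.tensordot(M, np.moveaxis(T, mode, 0), axes=1), 0, mode)

def hosvd(T, ranks):
    """Truncated HOSVD: per-mode factor matrices from the leading left
    singular vectors of each unfolding, plus the projected core."""
    U = [np.linalg.svd(unfold(T, n), full_matrices=False)[0][:, :r]
         for n, r in enumerate(ranks)]
    core = T
    for n, Un in enumerate(U):
        core = mode_multiply(core, Un.T, n)
    return core, U

# With full ranks the HOSVD reconstructs the tensor exactly.
rng = np.random.default_rng(0)
T = rng.normal(size=(3, 4, 5))
core, U = hosvd(T, T.shape)
recon = core
for n, Un in enumerate(U):
    recon = mode_multiply(recon, Un, n)
assert np.allclose(recon, T)
```

Recognition then amounts to projecting a test pattern with the stored factor matrices and measuring its residual distance to each class subspace.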

  12. Community ecology in 3D: Tensor decomposition reveals spatio-temporal dynamics of large ecological communities.

    PubMed

    Frelat, Romain; Lindegren, Martin; Denker, Tim Spaanheden; Floeter, Jens; Fock, Heino O; Sguotti, Camilla; Stäbler, Moritz; Otto, Saskia A; Möllmann, Christian

    2017-01-01

    Understanding spatio-temporal dynamics of biotic communities containing large numbers of species is crucial to guide ecosystem management and conservation efforts. However, traditional approaches usually focus on studying community dynamics either in space or in time, often failing to fully account for interlinked spatio-temporal changes. In this study, we demonstrate and promote the use of tensor decomposition for disentangling spatio-temporal community dynamics in long-term monitoring data. Tensor decomposition builds on traditional multivariate statistics (e.g. Principal Component Analysis) but extends it to multiple dimensions. This extension allows for the synchronized study of multiple ecological variables measured repeatedly in time and space. We applied this comprehensive approach to explore the spatio-temporal dynamics of 65 demersal fish species in the North Sea, a marine ecosystem strongly altered by human activities and climate change. Our case study demonstrates how tensor decomposition can successfully (i) characterize the main spatio-temporal patterns and trends in species abundances, (ii) identify sub-communities of species that share similar spatial distribution and temporal dynamics, and (iii) reveal external drivers of change. Our results revealed a strong spatial structure in fish assemblages persistent over time and linked to differences in depth, primary production and seasonality. Furthermore, we simultaneously characterized important temporal distribution changes related to the low frequency temperature variability inherent in the Atlantic Multidecadal Oscillation. Finally, we identified six major sub-communities composed of species sharing similar spatial distribution patterns and temporal dynamics. Our case study demonstrates the application and benefits of using tensor decomposition for studying complex community data sets usually derived from large-scale monitoring programs.

  13. Importance of Force Decomposition for Local Stress Calculations in Biomembrane Molecular Simulations.

    PubMed

    Vanegas, Juan M; Torres-Sánchez, Alejandro; Arroyo, Marino

    2014-02-11

    Local stress fields are routinely computed from molecular dynamics trajectories to understand the structure and mechanical properties of lipid bilayers. These calculations can be systematically understood with the Irving-Kirkwood-Noll theory. In identifying the stress tensor, a crucial step is the decomposition of the forces on the particles into pairwise contributions. However, such a decomposition is not unique in general, leading to an ambiguity in the definition of the stress tensor, particularly for multibody potentials. Furthermore, a theoretical treatment of constraints in local stress calculations has been lacking. Here, we present a new implementation of local stress calculations that systematically treats constraints and considers a privileged decomposition, the central force decomposition, that leads to a symmetric stress tensor by construction. We focus on biomembranes, although the methodology presented here is widely applicable. Our results show that some unphysical behavior obtained with previous implementations (e.g. nonconstant normal stress profiles along an isotropic bilayer in equilibrium) is a consequence of an improper treatment of constraints. Furthermore, other valid force decompositions produce significantly different stress profiles, particularly in the presence of dihedral potentials. Our methodology reveals the striking effect of unsaturations on the bilayer mechanics, missed by previous stress calculation implementations.
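
For the pairwise case, the potential part of the local stress reduces to a sum of r⊗f dyads over bonds. The toy sketch below (names ours, not the authors' implementation) shows the point of the central force decomposition: forces directed along r_ij yield a symmetric stress tensor by construction.

```python
import numpy as np

def virial_stress(positions, pairs, pair_forces, volume):
    """Potential part of the virial stress from a pairwise force
    decomposition: sigma = -(1/V) * sum_ij r_ij (outer) f_ij."""
    sigma = np.zeros((3, 3))
    for (i, j), f in zip(pairs, pair_forces):
        r = positions[i] - positions[j]
        sigma -= np.outer(r, f)
    return sigma / volume

# Central forces act along r_ij, so the stress is symmetric by construction.
rng = np.random.default_rng(1)
pos = rng.normal(size=(4, 3))
pairs = [(0, 1), (0, 2), (1, 3), (2, 3)]
forces = [2.0 * (pos[i] - pos[j]) for (i, j) in pairs]  # f_ij parallel to r_ij
sigma = virial_stress(pos, pairs, forces, volume=1.0)
assert np.allclose(sigma, sigma.T)
```

A non-central pairwise decomposition of the same total forces would generally break this symmetry, which is why the choice of decomposition matters for multibody potentials.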

  14. Generalized Higher Order Orthogonal Iteration for Tensor Learning and Decomposition.

    PubMed

    Liu, Yuanyuan; Shang, Fanhua; Fan, Wei; Cheng, James; Cheng, Hong

    2016-12-01

    Low-rank tensor completion (LRTC) has been successfully applied to a wide range of real-world problems. Despite these broad, successful applications, existing LRTC methods can become very slow or even inapplicable for large-scale problems. To address this issue, a novel core tensor trace-norm minimization (CTNM) method is proposed for simultaneous tensor learning and decomposition, with much lower computational complexity. In our solution, first, the equivalence relation between the trace norm of a low-rank tensor and that of its core tensor is derived. Second, the trace norm of the core tensor is used to replace that of the whole tensor, which leads to two much smaller-scale matrix trace-norm minimization (TNM) problems. Finally, an efficient alternating direction augmented Lagrangian method is developed to solve these problems. Our CTNM formulation needs only O((R^N + NRI) log(√(I^N))) observations to reliably recover an Nth-order I×I×…×I tensor of n-rank (r, r, …, r), compared with the O(rI^(N-1)) observations required by tensor TNM methods (I > R ≥ r). Extensive experimental results show that CTNM is usually more accurate than these methods, and is orders of magnitude faster.
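
The tensor trace norm that such methods minimize is commonly defined through the nuclear norms of the mode unfoldings. A minimal numpy sketch of that definition (not the CTNM algorithm itself):

```python
import numpy as np

def unfold(T, mode):
    # Mode-n unfolding: mode-n fibers become columns of a matrix.
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def tensor_trace_norm(T):
    """Overall tensor trace norm as the average of the nuclear norms
    of the mode unfoldings (the usual convex surrogate for n-rank)."""
    return np.mean([np.linalg.norm(unfold(T, n), 'nuc') for n in range(T.ndim)])

# For a rank-1 tensor of magnitude 5, every unfolding has nuclear norm 5.
T = np.zeros((2, 3, 4))
T[0, 0, 0] = 5.0
assert np.isclose(tensor_trace_norm(T), 5.0)
```

CTNM's contribution is to evaluate this surrogate on the much smaller Tucker core rather than on the full tensor.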

  15. Simultaneous Tensor Decomposition and Completion Using Factor Priors.

    PubMed

    Chen, Yi-Lei; Hsu, Chiou-Ting Candy; Liao, Hong-Yuan Mark

    2013-08-27

    Tensor completion, which is a high-order extension of matrix completion, has generated a great deal of research interest in recent years. Given a tensor with incomplete entries, existing methods use either factorization or completion schemes to recover the missing parts. However, as the number of missing entries increases, factorization schemes may overfit the model because of incorrectly predefined ranks, while completion schemes may fail to interpret the model factors. In this paper, we introduce a novel concept: complete the missing entries and simultaneously capture the underlying model structure. To this end, we propose a method called Simultaneous Tensor Decomposition and Completion (STDC) that combines a rank minimization technique with Tucker model decomposition. Moreover, as the model structure is implicitly included in the Tucker model, we use factor priors, which are usually known a priori in real-world tensor objects, to characterize the underlying joint-manifold drawn from the model factors. We conducted experiments to empirically verify the convergence of our algorithm on synthetic data, and evaluate its effectiveness on various kinds of real-world data. The results demonstrate the efficacy of the proposed method and its potential usage in tensor-based applications. It also outperforms state-of-the-art methods on multilinear model analysis and visual data completion tasks.

  16. Spherical Tensor Calculus for Local Adaptive Filtering

    NASA Astrophysics Data System (ADS)

    Reisert, Marco; Burkhardt, Hans

    In 3D image processing tensors play an important role. While rank-1 and rank-2 tensors are well understood and commonly used, higher-rank tensors are rare. This is probably due to their cumbersome rotation behavior, which prevents computationally efficient use. In this chapter we introduce the notion of a spherical tensor, which is based on the irreducible representations of the 3D rotation group. In fact, any ordinary Cartesian tensor can be decomposed into a sum of spherical tensors, while each spherical tensor has quite simple rotation behavior. We introduce so-called tensorial harmonics, which provide an orthogonal basis for spherical tensor fields of any rank; they are a generalization of the well-known spherical harmonics. Additionally, we propose a spherical derivative which connects spherical tensor fields of different degree by differentiation. Based on the proposed theory we present two applications. First, we propose an efficient algorithm for dense tensor voting in 3D, which makes use of the tensorial harmonics decomposition of the tensor-valued voting field. In this way it is possible to perform tensor voting by linear combinations of convolutions in an efficient way. Second, we propose an anisotropic smoothing filter that uses a local shape- and orientation-adaptive filter kernel which can be computed efficiently by the use of spherical derivatives.
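
For the familiar rank-2 case, the decomposition of a Cartesian tensor into spherical (irreducible) parts is the split into trace, antisymmetric, and symmetric-traceless components, corresponding to degrees l = 0, 1, 2. A small numpy illustration of that split:

```python
import numpy as np

def spherical_parts(T):
    """Split a Cartesian rank-2 tensor into its irreducible spherical
    parts: l=0 (isotropic/trace), l=1 (antisymmetric), and l=2
    (symmetric traceless)."""
    iso = np.trace(T) / 3.0 * np.eye(3)
    antisym = 0.5 * (T - T.T)
    sym_traceless = 0.5 * (T + T.T) - iso
    return iso, antisym, sym_traceless

T = np.arange(9.0).reshape(3, 3)
l0, l1, l2 = spherical_parts(T)
assert np.allclose(l0 + l1 + l2, T)        # parts sum back to T
assert np.isclose(np.trace(l2), 0.0)       # l=2 part is traceless
```

Each part transforms within itself under rotations, which is exactly the simple rotation behavior the chapter exploits for higher ranks.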

  17. Community ecology in 3D: Tensor decomposition reveals spatio-temporal dynamics of large ecological communities

    PubMed Central

    Lindegren, Martin; Denker, Tim Spaanheden; Floeter, Jens; Fock, Heino O.; Sguotti, Camilla; Stäbler, Moritz; Otto, Saskia A.; Möllmann, Christian

    2017-01-01

    Understanding spatio-temporal dynamics of biotic communities containing large numbers of species is crucial to guide ecosystem management and conservation efforts. However, traditional approaches usually focus on studying community dynamics either in space or in time, often failing to fully account for interlinked spatio-temporal changes. In this study, we demonstrate and promote the use of tensor decomposition for disentangling spatio-temporal community dynamics in long-term monitoring data. Tensor decomposition builds on traditional multivariate statistics (e.g. Principal Component Analysis) but extends it to multiple dimensions. This extension allows for the synchronized study of multiple ecological variables measured repeatedly in time and space. We applied this comprehensive approach to explore the spatio-temporal dynamics of 65 demersal fish species in the North Sea, a marine ecosystem strongly altered by human activities and climate change. Our case study demonstrates how tensor decomposition can successfully (i) characterize the main spatio-temporal patterns and trends in species abundances, (ii) identify sub-communities of species that share similar spatial distribution and temporal dynamics, and (iii) reveal external drivers of change. Our results revealed a strong spatial structure in fish assemblages persistent over time and linked to differences in depth, primary production and seasonality. Furthermore, we simultaneously characterized important temporal distribution changes related to the low frequency temperature variability inherent in the Atlantic Multidecadal Oscillation. Finally, we identified six major sub-communities composed of species sharing similar spatial distribution patterns and temporal dynamics. Our case study demonstrates the application and benefits of using tensor decomposition for studying complex community data sets usually derived from large-scale monitoring programs. PMID:29136658

  18. Compression of hyper-spectral images using an accelerated nonnegative tensor decomposition

    NASA Astrophysics Data System (ADS)

    Li, Jin; Liu, Zilong

    2017-12-01

    Nonnegative tensor Tucker decomposition (NTD) in a transform domain (e.g., the 2D-DWT) has been used in the compression of hyper-spectral images because it can remove redundancies between spectral bands and also exploit the spatial correlations of each band. However, NTD has a very high computational cost. In this paper, we propose a low-complexity NTD-based compression method for hyper-spectral images. The method is based on a pair-wise multilevel grouping approach for the NTD that overcomes its high computational cost. The proposed method achieves low complexity at the price of a slight decrease in coding performance compared to the conventional NTD. Our experiments confirm that the method requires less processing time while retaining better coding performance than the case where NTD is not used. The proposed approach has potential application in the lossy compression of hyper-spectral or multi-spectral images.

  19. The Exact Solution to Rank-1 L1-Norm TUCKER2 Decomposition

    NASA Astrophysics Data System (ADS)

    Markopoulos, Panos P.; Chachlakis, Dimitris G.; Papalexakis, Evangelos E.

    2018-04-01

    We study rank-1 L1-norm-based TUCKER2 (L1-TUCKER2) decomposition of 3-way tensors, treated as a collection of N D×M matrices that are to be jointly decomposed. Our contributions are as follows. (i) We prove that the problem is equivalent to combinatorial optimization over N antipodal-binary variables. (ii) We derive the first two algorithms in the literature for its exact solution. The first algorithm has cost exponential in N; the second one has cost polynomial in N (under a mild assumption). Our algorithms are accompanied by formal complexity analysis. (iii) We conduct numerical studies to compare the performance of exact L1-TUCKER2 (proposed) with standard HOSVD, HOOI, GLRAM, PCA, L1-PCA, and TPCA-L1. Our studies show that L1-TUCKER2 outperforms all the above counterparts in tensor approximation when the processed data are corrupted by outliers.
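
For small N, the combinatorial equivalence in contribution (i) can be checked directly: maximizing the sum of |u^T X_n v| over unit vectors equals the maximum over antipodal-binary signs b of the largest singular value of the signed sum of the matrices. The brute-force sketch below illustrates the exponential-cost idea only, not the authors' polynomial-time algorithm:

```python
import itertools
import numpy as np

def rank1_l1_tucker2(Xs):
    """Exact rank-1 L1-TUCKER2 by exhaustive combinatorial search:
    max over unit u, v of sum_n |u^T X_n v| equals the max over
    antipodal-binary b of sigma_max(sum_n b_n X_n)."""
    best_val, best_u, best_v = -np.inf, None, None
    for signs in itertools.product([1, -1], repeat=len(Xs) - 1):
        b = (1,) + signs  # fix b_1 = +1; the overall sign is immaterial
        S = sum(bn * X for bn, X in zip(b, Xs))
        U, s, Vt = np.linalg.svd(S)
        if s[0] > best_val:
            best_val, best_u, best_v = s[0], U[:, 0], Vt[0]
    return best_val, best_u, best_v

# Two copies of the same matrix: the optimum is sigma_max(2X).
X = np.array([[2.0, 0.0], [0.0, 1.0]])
val, u, v = rank1_l1_tucker2([X, X])
assert np.isclose(val, 4.0)
assert np.isclose(sum(abs(u @ Xn @ v) for Xn in [X, X]), 4.0)
```

Fixing the first sign halves the search to 2^(N-1) SVDs, which is still exponential; the paper's second algorithm avoids this blow-up.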

  20. ON THE DECOMPOSITION OF STRESS AND STRAIN TENSORS INTO SPHERICAL AND DEVIATORIC PARTS

    PubMed Central

    Augusti, G.; Martin, J. B.; Prager, W.

    1969-01-01

    It is well known that Hooke's law for a linearly elastic, isotropic solid may be written in the form of two relations that involve only the spherical or only the deviatoric parts of the tensors of stress and strain. The example of the linearly elastic, transversely isotropic solid is used to show that this decomposition is not, in general, feasible for linearly elastic, anisotropic solids. The discussion is extended to a large class of work-hardening rigid, plastic solids, and it is shown that the considered decomposition can only be achieved for the incompressible solids of this class. PMID:16591754
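
For the isotropic case, where the decomposition does decouple, the two relations of Hooke's law can be written as σ = 3K ε_sph + 2G ε_dev. The sketch below checks this against the equivalent Lamé form (the symbols K, G, λ are the usual bulk and shear moduli and Lamé constant, our notation):

```python
import numpy as np

def hooke_isotropic(eps, K, G):
    """Isotropic Hooke's law in decoupled spherical/deviatoric form:
    sigma = 3K*eps_sph + 2G*eps_dev, with bulk modulus K and shear
    modulus G."""
    eps_sph = np.trace(eps) / 3.0 * np.eye(3)
    eps_dev = eps - eps_sph
    return 3.0 * K * eps_sph + 2.0 * G * eps_dev

# Agrees with the Lame form sigma = lam*tr(eps)*I + 2*mu*eps,
# with lam = K - 2G/3 and mu = G.
eps = np.array([[1.0, 0.5, 0.0], [0.5, 2.0, 0.0], [0.0, 0.0, 3.0]])
K, G = 10.0, 4.0
lam = K - 2.0 * G / 3.0
assert np.allclose(hooke_isotropic(eps, K, G),
                   lam * np.trace(eps) * np.eye(3) + 2.0 * G * eps)
```

The abstract's point is that no analogous pair of decoupled relations exists for a transversely isotropic solid: the spherical part of stress then depends on the deviatoric part of strain and vice versa.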

  1. Simultaneous tensor decomposition and completion using factor priors.

    PubMed

    Chen, Yi-Lei; Hsu, Chiou-Ting; Liao, Hong-Yuan Mark

    2014-03-01

    The success of research on matrix completion is evident in a variety of real-world applications. Tensor completion, which is a high-order extension of matrix completion, has also generated a great deal of research interest in recent years. Given a tensor with incomplete entries, existing methods use either factorization or completion schemes to recover the missing parts. However, as the number of missing entries increases, factorization schemes may overfit the model because of incorrectly predefined ranks, while completion schemes may fail to interpret the model factors. In this paper, we introduce a novel concept: complete the missing entries and simultaneously capture the underlying model structure. To this end, we propose a method called simultaneous tensor decomposition and completion (STDC) that combines a rank minimization technique with Tucker model decomposition. Moreover, as the model structure is implicitly included in the Tucker model, we use factor priors, which are usually known a priori in real-world tensor objects, to characterize the underlying joint-manifold drawn from the model factors. By exploiting this auxiliary information, our method leverages two classic schemes and accurately estimates the model factors and missing entries. We conducted experiments to empirically verify the convergence of our algorithm on synthetic data and evaluate its effectiveness on various kinds of real-world data. The results demonstrate the efficacy of the proposed method and its potential usage in tensor-based applications. It also outperforms state-of-the-art methods on multilinear model analysis and visual data completion tasks.

  2. Comparative study of methods for recognition of an unknown person's action from a video sequence

    NASA Astrophysics Data System (ADS)

    Hori, Takayuki; Ohya, Jun; Kurumisawa, Jun

    2009-02-01

    This paper proposes a tensor-decomposition-based method that can recognize an unknown person's action from a video sequence, where the unknown person is not included in the database (tensor) used for the recognition. The tensor consists of persons, actions and time-series image features. For the observed unknown person's action, one of the actions stored in the tensor is assumed. Using the motion signature obtained from this assumption, the unknown person's actions are synthesized. The actions of one of the persons in the tensor are replaced by the synthesized actions, and the core tensor for the replaced tensor is computed. This process is repeated over the actions and persons, and for each iteration the difference between the replaced and original core tensors is computed. The assumption that gives the minimal difference is the action recognition result. For the time-series image features, stored in the tensor and extracted from the observed video sequence, a feature based on the contour shape of the human body silhouette is used. To show the validity of our proposed method, it is experimentally compared with a Nearest Neighbor rule and a Principal Component Analysis based method. Experiments on seven kinds of actions performed by 33 persons show that our proposed method achieves better recognition accuracies for the seven actions than the other methods.

  3. The tensor hierarchy algebra

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Palmkvist, Jakob, E-mail: palmkvist@ihes.fr

    We introduce an infinite-dimensional Lie superalgebra which is an extension of the U-duality Lie algebra of maximal supergravity in D dimensions, for 3 ⩽ D ⩽ 7. The level decomposition with respect to the U-duality Lie algebra gives exactly the tensor hierarchy of representations that arises in gauge deformations of the theory described by an embedding tensor, for all positive levels p. We prove that these representations are always contained in those coming from the associated Borcherds-Kac-Moody superalgebra, and we explain why some of the latter representations are not included in the tensor hierarchy. The most remarkable feature of our Lie superalgebra is that it does not admit a triangular decomposition like a (Borcherds-)Kac-Moody (super)algebra. Instead the Hodge duality relations between level p and D − 2 − p extend to negative p, relating the representations at the first two negative levels to the supersymmetry and closure constraints of the embedding tensor.

  4. Simplified moment tensor analysis and unified decomposition of acoustic emission source: Application to in situ hydrofracturing test

    NASA Astrophysics Data System (ADS)

    Ohtsu, Masayasu

    1991-04-01

    An application of moment tensor analysis to acoustic emission (AE) is studied to elucidate the crack types and orientations of AE sources. In the analysis, a simplified treatment is desirable, because hundreds of AE records are obtained from a single experiment, making sophisticated treatment realistically cumbersome. Consequently, a moment tensor inversion based on P wave amplitude is employed to determine the six independent tensor components. Selecting only the P wave portion of the full-space Green's function for a homogeneous and isotropic material, a computer code named SiGMA (simplified Green's functions for the moment tensor analysis) is developed for the AE inversion analysis. To classify crack type and determine crack orientation from the moment tensor components, a unified decomposition of the eigenvalues into a double-couple (DC) part, a compensated linear vector dipole (CLVD) part, and an isotropic part is proposed. The aim of the decomposition is to determine the proportions of the shear contribution (DC) and tensile contribution (CLVD + isotropic) in AE sources and to classify each crack according to its dominant motion. Crack orientations determined from the eigenvectors are presented as crack-opening vectors for tensile cracks and fault-motion vectors for shear cracks, instead of stereonets. The SiGMA inversion and the unified decomposition are applied to synthetic data and to AE waveforms detected during an in situ hydrofracturing test. To check the accuracy of the procedure, numerical experiments are performed on the synthetic waveforms, including cases with 10% random noise added. Results show reasonable agreement with the assumed crack configurations: although the maximum error is approximately 10% with respect to the ratios, the differences in crack orientations are less than 7°. AE waveforms detected by eight accelerometers deployed during the hydrofracturing test are then analyzed. The crack types and orientations determined are in reasonable agreement with the failure plane predicted from borehole TV observation. The results suggest that tensile cracks are generated first at weak seams and that shear cracks then follow on the opened joints.
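
A moment tensor's eigenvalue decomposition into isotropic, DC and CLVD parts can be illustrated with one common convention (Jost-Herrmann-style ratios; Ohtsu's unified decomposition uses its own ratio definitions, so treat this as a generic sketch rather than the SiGMA procedure):

```python
import numpy as np

def decompose_moment_tensor(M):
    """Eigenvalue decomposition of a symmetric moment tensor into
    isotropic, double-couple (DC) and CLVD parts, using the common
    ratio eps = -e_min/|e_max| on the deviatoric eigenvalues sorted
    by absolute value; %DC = 100(1 - 2 eps), %CLVD = 200 eps."""
    iso = np.trace(M) / 3.0
    dev_eigs = np.linalg.eigvalsh(M - iso * np.eye(3))
    e = sorted(dev_eigs, key=abs)  # smallest to largest magnitude
    eps = -e[0] / abs(e[2]) if e[2] != 0.0 else 0.0
    return iso, 100.0 * (1.0 - 2.0 * eps), 200.0 * eps

# A pure shear (double-couple) source: deviatoric eigenvalues (1, 0, -1).
iso, pct_dc, pct_clvd = decompose_moment_tensor(np.diag([1.0, 0.0, -1.0]))
assert np.isclose(iso, 0.0)
assert np.isclose(pct_dc, 100.0)
assert np.isclose(pct_clvd, 0.0)
```

A dominant DC percentage flags a shear crack, while a large CLVD + isotropic share flags a tensile crack, mirroring the classification used in the abstract.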

  5. The Speech multi features fusion perceptual hash algorithm based on tensor decomposition

    NASA Astrophysics Data System (ADS)

    Huang, Y. B.; Fan, M. H.; Zhang, Q. Y.

    2018-03-01

    With constant progress in modern speech communication technologies, speech data are prone to noise corruption or malicious tampering. To give the speech perceptual hash algorithm strong robustness and high efficiency, this paper puts forward a speech perceptual hash algorithm based on tensor decomposition and multiple features. The algorithm applies wavelet packet decomposition to obtain the speech components, and the LPCC, LSP and ISP features of each component are extracted to constitute the speech feature tensor. Speech authentication is performed by generating hash values through quantification of the feature matrix using the median. Experimental results show that the proposed algorithm is robust to content-preserving operations compared with similar algorithms, and is able to resist attacks by common background noise. The algorithm is also computationally efficient, meeting the real-time requirements of speech communication and completing speech authentication quickly.
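
Median-based quantification of a feature matrix into hash bits, and comparison by bit error rate, can be sketched as follows. This is a generic illustration, not the paper's exact LPCC/LSP/ISP pipeline; the feature matrix here is random stand-in data:

```python
import numpy as np

def perceptual_hash(features):
    """Quantify a feature matrix into hash bits against its median."""
    return (features > np.median(features)).astype(np.uint8).ravel()

def bit_error_rate(h1, h2):
    """Fraction of differing bits; a small BER means the content matches."""
    return float(np.mean(h1 != h2))

rng = np.random.default_rng(0)
feats = rng.normal(size=(12, 20))                    # stand-in feature matrix
noisy = feats + 0.01 * rng.normal(size=feats.shape)  # content-preserving perturbation
h, h_noisy = perceptual_hash(feats), perceptual_hash(noisy)
assert bit_error_rate(h, h_noisy) < 0.1              # robust to slight noise
assert bit_error_rate(h, perceptual_hash(rng.normal(size=(12, 20)))) > 0.2
```

Authentication then reduces to thresholding the BER: small values accept the content as unmodified, large values flag tampering or different content.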

  6. Killing-Yano tensors in spaces admitting a hypersurface orthogonal Killing vector

    NASA Astrophysics Data System (ADS)

    Garfinkle, David; Glass, E. N.

    2013-03-01

    Methods are presented for finding Killing-Yano tensors, conformal Killing-Yano tensors, and conformal Killing vectors in spacetimes with a hypersurface orthogonal Killing vector. These methods are similar to a method developed by the authors for finding Killing tensors. In all cases one decomposes both the tensor and the equation it satisfies into pieces along the Killing vector and pieces orthogonal to the Killing vector. Solving the separate equations that result from this decomposition requires less computing than integrating the original equation. In each case, examples are given to illustrate the method.

  7. Identifying key nodes in multilayer networks based on tensor decomposition.

    PubMed

    Wang, Dingjie; Wang, Haitao; Zou, Xiufen

    2017-06-01

    The identification of essential agents in multilayer networks characterized by different types of interactions is a crucial and challenging topic, one that is essential for understanding the topological structure and dynamic processes of multilayer networks. In this paper, we use a fourth-order tensor to represent multilayer networks and propose a novel method to identify essential nodes based on CANDECOMP/PARAFAC (CP) tensor decomposition, referred to as the EDCPTD centrality. This method is based on the perspective of multilayer networked structures, which integrate the information of edges among nodes and links between different layers to quantify the importance of nodes in multilayer networks. Three real-world multilayer biological networks are used to evaluate the performance of the EDCPTD centrality. The bar chart and ROC curves of these multilayer networks indicate that the proposed approach is a good alternative index to identify truly important nodes. Meanwhile, by comparing the behavior of both the proposed method and the aggregated single-layer methods, we demonstrate that neglecting the multiple relationships between nodes may lead to incorrect identification of the most versatile nodes. Furthermore, the Gene Ontology functional annotation demonstrates that the identified top nodes based on the proposed approach play a significant role in many vital biological processes. Finally, we have implemented many centrality methods of multilayer networks (including our method and the published methods) and created a visual software based on the MATLAB GUI, called ENMNFinder, which can be used by other researchers.
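
    The CANDECOMP/PARAFAC model underlying EDCPTD can be fitted, in the simplest third-order case, with an alternating least-squares loop like the sketch below (a generic numpy illustration; the paper's fourth-order network tensors, the EDCPTD centrality score itself, and the ENMNFinder tool are not reproduced):

```python
import numpy as np

def unfold(T, mode):
    # Mode-n matricization: move the given axis to the front, flatten the rest
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(A, B):
    # Column-wise Kronecker product of two factor matrices
    r = A.shape[1]
    return (A[:, None, :] * B[None, :, :]).reshape(-1, r)

def cp_als(T, rank, n_iter=200, seed=0):
    # Alternating least squares for the CP decomposition of a 3-way tensor:
    # T[i,j,k] ~ sum_r A[i,r] * B[j,r] * C[k,r]
    rng = np.random.default_rng(seed)
    factors = [rng.standard_normal((s, rank)) for s in T.shape]
    for _ in range(n_iter):
        for mode in range(3):
            a, b = [m for m in range(3) if m != mode]
            kr = khatri_rao(factors[a], factors[b])
            gram = (factors[a].T @ factors[a]) * (factors[b].T @ factors[b])
            factors[mode] = unfold(T, mode) @ kr @ np.linalg.pinv(gram)
    return factors
```

    In an EDCPTD-style analysis, a node's importance would then be read off from the magnitudes of its entries in the recovered node-mode factor matrices.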

  8. Identifying key nodes in multilayer networks based on tensor decomposition

    NASA Astrophysics Data System (ADS)

    Wang, Dingjie; Wang, Haitao; Zou, Xiufen

    2017-06-01

    The identification of essential agents in multilayer networks characterized by different types of interactions is a crucial and challenging topic, one that is essential for understanding the topological structure and dynamic processes of multilayer networks. In this paper, we use a fourth-order tensor to represent multilayer networks and propose a novel method to identify essential nodes based on CANDECOMP/PARAFAC (CP) tensor decomposition, referred to as the EDCPTD centrality. This method is based on the perspective of multilayer networked structures, which integrate the information of edges among nodes and links between different layers to quantify the importance of nodes in multilayer networks. Three real-world multilayer biological networks are used to evaluate the performance of the EDCPTD centrality. The bar chart and ROC curves of these multilayer networks indicate that the proposed approach is a good alternative index to identify truly important nodes. Meanwhile, by comparing the behavior of both the proposed method and the aggregated single-layer methods, we demonstrate that neglecting the multiple relationships between nodes may lead to incorrect identification of the most versatile nodes. Furthermore, the Gene Ontology functional annotation demonstrates that the identified top nodes based on the proposed approach play a significant role in many vital biological processes. Finally, we have implemented many centrality methods of multilayer networks (including our method and the published methods) and created a visual software based on the MATLAB GUI, called ENMNFinder, which can be used by other researchers.

  9. Bimodule structure of the mixed tensor product over Uq sℓ (2 | 1) and quantum walled Brauer algebra

    NASA Astrophysics Data System (ADS)

    Bulgakova, D. V.; Kiselev, A. M.; Tipunin, I. Yu.

    2018-03-01

    We study a mixed tensor product 3^⊗m ⊗ 3̄^⊗n of the three-dimensional fundamental representations of the Hopf algebra U_q sℓ(2|1), whenever q is not a root of unity. Formulas for the decomposition of tensor products of any simple and projective U_q sℓ(2|1)-module with the generating modules 3 and 3̄ are obtained. The centralizer of U_q sℓ(2|1) on the mixed tensor product is calculated. It is shown to be the quotient X_{m,n} of the quantum walled Brauer algebra qwB_{m,n}. The structure of projective modules over X_{m,n} is written down explicitly. It is known that the walled Brauer algebras form an infinite tower. We have calculated the corresponding restriction functors on simple and projective modules over X_{m,n}. This result forms a crucial step in the decomposition of the mixed tensor product as a bimodule over X_{m,n} ⊠ U_q sℓ(2|1). We give an explicit bimodule structure for all m, n.

  10. Crossing Fibers Detection with an Analytical High Order Tensor Decomposition

    PubMed Central

    Megherbi, T.; Kachouane, M.; Oulebsir-Boumghar, F.; Deriche, R.

    2014-01-01

    Diffusion magnetic resonance imaging (dMRI) is the only technique to probe in vivo and noninvasively the fiber structure of human brain white matter. Detecting the crossing of neuronal fibers remains an exciting challenge with an important impact in tractography. In this work, we tackle this challenging problem and propose an original and efficient technique to extract all crossing fibers from diffusion signals. To this end, we start by estimating, from the dMRI signal, the so-called Cartesian tensor fiber orientation distribution (CT-FOD) function, whose maxima correspond exactly to the orientations of the fibers. The fourth order symmetric positive definite tensor that represents the CT-FOD is then analytically decomposed via the application of a new theoretical approach and this decomposition is used to accurately extract all the fibers orientations. Our proposed high order tensor decomposition based approach is minimal and allows recovering all the crossing fibers without any a priori information on the total number of fibers. Various experiments performed on noisy synthetic data, on phantom diffusion data, and on human brain data validate our approach and clearly demonstrate that it is efficient, robust to noise and performs favorably in terms of angular resolution and accuracy when compared to some classical and state-of-the-art approaches. PMID:25246940

  11. Video denoising using low rank tensor decomposition

    NASA Astrophysics Data System (ADS)

    Gui, Lihua; Cui, Gaochao; Zhao, Qibin; Wang, Dongsheng; Cichocki, Andrzej; Cao, Jianting

    2017-03-01

    Reducing noise in a video sequence is of vital importance in many real-world applications. One popular method is block matching collaborative filtering. However, the main drawback of this method is that the noise standard deviation for the whole video sequence must be known in advance. In this paper, we present a tensor based denoising framework that considers 3D patches instead of 2D patches. By collecting the similar 3D patches non-locally, we employ the low-rank tensor decomposition for collaborative filtering. Since we specify a non-informative prior over the noise precision parameter, the noise variance can be inferred automatically from the observed video data. Therefore, our method is more practical, as it does not require knowing the noise variance. The experiments on video denoising demonstrate the effectiveness of our proposed method.

  12. Low-rank canonical-tensor decomposition of potential energy surfaces: application to grid-based diagrammatic vibrational Green's function theory

    NASA Astrophysics Data System (ADS)

    Rai, Prashant; Sargsyan, Khachik; Najm, Habib; Hermes, Matthew R.; Hirata, So

    2017-09-01

    A new method is proposed for a fast evaluation of high-dimensional integrals of potential energy surfaces (PES) that arise in many areas of quantum dynamics. It decomposes a PES into a canonical low-rank tensor format, reducing its integral into a relatively short sum of products of low-dimensional integrals. The decomposition is achieved by the alternating least squares (ALS) algorithm, requiring only a small number of single-point energy evaluations. Therefore, it eradicates a force-constant evaluation as the hotspot of many quantum dynamics simulations and also possibly lifts the curse of dimensionality. This general method is applied to the anharmonic vibrational zero-point and transition energy calculations of molecules using the second-order diagrammatic vibrational many-body Green's function (XVH2) theory with a harmonic-approximation reference. In this application, high dimensional PES and Green's functions are both subjected to a low-rank decomposition. Evaluating the molecular integrals over a low-rank PES and Green's functions as sums of low-dimensional integrals using the Gauss-Hermite quadrature, this canonical-tensor-decomposition-based XVH2 (CT-XVH2) achieves an accuracy of 0.1 cm^-1 or higher and nearly an order of magnitude speedup as compared with the original algorithm using force constants for water and formaldehyde.
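
    The key cost saving, replacing one high-dimensional quadrature with a short sum of products of one-dimensional quadratures, can be illustrated on a toy rank-2, three-dimensional integrand (a generic numpy sketch using Gauss-Hermite nodes; it does not reproduce the paper's ALS fitting of an actual PES):

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss

x, w = hermgauss(20)   # 1-D nodes/weights for integrals against exp(-x^2)

# Toy rank-2 integrand: f(x,y,z) = sum_r ax_r(x) * ay_r(y) * az_r(z)
terms = [
    (lambda t: t**2, lambda t: np.ones_like(t), lambda t: t**4),
    (lambda t: np.cos(t), lambda t: t**2, lambda t: np.ones_like(t)),
]

# Separable evaluation: one 1-D quadrature per factor, then a sum of products
sep = sum((w @ fx(x)) * (w @ fy(x)) * (w @ fz(x)) for fx, fy, fz in terms)

# Brute-force 3-D tensor-product quadrature for comparison (20^3 points)
X, Y, Z = np.meshgrid(x, x, x, indexing='ij')
W = np.einsum('i,j,k->ijk', w, w, w)
f = sum(fx(X) * fy(Y) * fz(Z) for fx, fy, fz in terms)
full = float(np.sum(W * f))
```

    For a rank-R decomposition of a D-dimensional surface on n quadrature points per axis, the cost drops from n^D integrand evaluations to R·D·n.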

  13. Tensor-based classification of an auditory mobile BCI without a subject-specific calibration phase

    NASA Astrophysics Data System (ADS)

    Zink, Rob; Hunyadi, Borbála; Van Huffel, Sabine; De Vos, Maarten

    2016-04-01

    Objective. One of the major drawbacks in EEG brain-computer interfaces (BCI) is the need for subject-specific training of the classifier. By removing the need for a supervised calibration phase, new users could potentially explore a BCI faster. In this work we aim to remove this subject-specific calibration phase and allow direct classification. Approach. We explore canonical polyadic decompositions and block term decompositions of the EEG. These methods exploit structure in higher dimensional data arrays called tensors. The BCI tensors are constructed by concatenating ERP templates from other subjects to a target and non-target trial and the inherent structure guides a decomposition that allows accurate classification. We illustrate the new method on data from a three-class auditory oddball paradigm. Main results. The presented approach leads to a fast and intuitive classification with accuracies competitive with a supervised and cross-validated LDA approach. Significance. The described methods are a promising new way of classifying BCI data with a forthright link to the original P300 ERP signal over the conventional and widely used supervised approaches.

  14. Tensor-based classification of an auditory mobile BCI without a subject-specific calibration phase.

    PubMed

    Zink, Rob; Hunyadi, Borbála; Huffel, Sabine Van; Vos, Maarten De

    2016-04-01

    One of the major drawbacks in EEG brain-computer interfaces (BCI) is the need for subject-specific training of the classifier. By removing the need for a supervised calibration phase, new users could potentially explore a BCI faster. In this work we aim to remove this subject-specific calibration phase and allow direct classification. We explore canonical polyadic decompositions and block term decompositions of the EEG. These methods exploit structure in higher dimensional data arrays called tensors. The BCI tensors are constructed by concatenating ERP templates from other subjects to a target and non-target trial and the inherent structure guides a decomposition that allows accurate classification. We illustrate the new method on data from a three-class auditory oddball paradigm. The presented approach leads to a fast and intuitive classification with accuracies competitive with a supervised and cross-validated LDA approach. The described methods are a promising new way of classifying BCI data with a forthright link to the original P300 ERP signal over the conventional and widely used supervised approaches.

  15. The 3 + 1 decomposition of conformal Yano-Killing tensors and ‘momentary’ charges for the spin-2 field

    NASA Astrophysics Data System (ADS)

    Jezierski, Jacek; Migacz, Szymon

    2015-02-01

    The ‘fully charged’ spin-2 field solution is presented. This is an analog of the Coulomb solution in electrodynamics and represents the ‘non-waving’ part of the spin-2 field theory. Basic facts and definitions of the spin-2 field and conformal Yano-Killing tensors are introduced. Application of those two objects provides a precise definition of quasi-local gravitational charge. Next, the 3 + 1 decomposition leads to the construction of the momentary gravitational charges on the initial surface, which is applicable for Schwarzschild-like spacetimes.

  16. gamAID: Greedy CP tensor decomposition for supervised EHR-based disease trajectory differentiation.

    PubMed

    Henderson, Jette; Ho, Joyce; Ghosh, Joydeep

    2017-07-01

    We propose gamAID, an exploratory, supervised nonnegative tensor factorization method that iteratively extracts phenotypes from tensors constructed from medical count data. Using data from diabetic patients who later on get diagnosed with chronic kidney disorder (CKD) as well as diabetic patients who do not receive a CKD diagnosis, we demonstrate the potential of gamAID to discover phenotypes that characterize patients who are at risk for developing a disease.

  17. Low-rank canonical-tensor decomposition of potential energy surfaces: application to grid-based diagrammatic vibrational Green's function theory

    DOE PAGES

    Rai, Prashant; Sargsyan, Khachik; Najm, Habib; ...

    2017-03-07

    Here, a new method is proposed for a fast evaluation of high-dimensional integrals of potential energy surfaces (PES) that arise in many areas of quantum dynamics. It decomposes a PES into a canonical low-rank tensor format, reducing its integral into a relatively short sum of products of low-dimensional integrals. The decomposition is achieved by the alternating least squares (ALS) algorithm, requiring only a small number of single-point energy evaluations. Therefore, it eradicates a force-constant evaluation as the hotspot of many quantum dynamics simulations and also possibly lifts the curse of dimensionality. This general method is applied to the anharmonic vibrational zero-point and transition energy calculations of molecules using the second-order diagrammatic vibrational many-body Green's function (XVH2) theory with a harmonic-approximation reference. In this application, high dimensional PES and Green's functions are both subjected to a low-rank decomposition. Evaluating the molecular integrals over a low-rank PES and Green's functions as sums of low-dimensional integrals using the Gauss–Hermite quadrature, this canonical-tensor-decomposition-based XVH2 (CT-XVH2) achieves an accuracy of 0.1 cm^-1 or higher and nearly an order of magnitude speedup as compared with the original algorithm using force constants for water and formaldehyde.

  18. Low-rank canonical-tensor decomposition of potential energy surfaces: application to grid-based diagrammatic vibrational Green's function theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rai, Prashant; Sargsyan, Khachik; Najm, Habib

    Here, a new method is proposed for a fast evaluation of high-dimensional integrals of potential energy surfaces (PES) that arise in many areas of quantum dynamics. It decomposes a PES into a canonical low-rank tensor format, reducing its integral into a relatively short sum of products of low-dimensional integrals. The decomposition is achieved by the alternating least squares (ALS) algorithm, requiring only a small number of single-point energy evaluations. Therefore, it eradicates a force-constant evaluation as the hotspot of many quantum dynamics simulations and also possibly lifts the curse of dimensionality. This general method is applied to the anharmonic vibrational zero-point and transition energy calculations of molecules using the second-order diagrammatic vibrational many-body Green's function (XVH2) theory with a harmonic-approximation reference. In this application, high dimensional PES and Green's functions are both subjected to a low-rank decomposition. Evaluating the molecular integrals over a low-rank PES and Green's functions as sums of low-dimensional integrals using the Gauss–Hermite quadrature, this canonical-tensor-decomposition-based XVH2 (CT-XVH2) achieves an accuracy of 0.1 cm^-1 or higher and nearly an order of magnitude speedup as compared with the original algorithm using force constants for water and formaldehyde.

  19. Hilbert complexes of nonlinear elasticity

    NASA Astrophysics Data System (ADS)

    Angoshtari, Arzhang; Yavari, Arash

    2016-12-01

    We introduce some Hilbert complexes involving second-order tensors on flat compact manifolds with boundary that describe the kinematics and the kinetics of motion in nonlinear elasticity. We then use the general framework of Hilbert complexes to write Hodge-type and Helmholtz-type orthogonal decompositions for second-order tensors. As some applications of these decompositions in nonlinear elasticity, we study the strain compatibility equations of linear and nonlinear elasticity in the presence of Dirichlet boundary conditions and the existence of stress functions on non-contractible bodies. As an application of these Hilbert complexes in computational mechanics, we briefly discuss the derivation of a new class of mixed finite element methods for nonlinear elasticity.

  20. A gravitational energy–momentum and the thermodynamic description of gravity

    NASA Astrophysics Data System (ADS)

    Acquaviva, G.; Kofroň, D.; Scholtz, M.

    2018-05-01

    A proposal for the gravitational energy–momentum tensor, known in the literature as the square root of Bel–Robinson tensor (SQBR), is analyzed in detail. Being constructed exclusively from the Weyl part of the Riemann tensor, such tensor encapsulates the geometric properties of free gravitational fields in terms of optical scalars of null congruences: making use of the general decomposition of any energy–momentum tensor, we explore the thermodynamic interpretation of such geometric quantities. While the matter energy–momentum is identically conserved due to Einstein’s field equations, the SQBR is not necessarily conserved and dissipative terms could arise in its vacuum continuity equation. We discuss the possible physical interpretations of such mathematical properties.

  1. Accurate calculation of the geometric measure of entanglement for multipartite quantum states

    NASA Astrophysics Data System (ADS)

    Teng, Peiyuan

    2017-07-01

    This article proposes an efficient way of calculating the geometric measure of entanglement using tensor decomposition methods. The connection between these two concepts is explored using the tensor representation of the wavefunction. Numerical examples are benchmarked and compared. Furthermore, we search for highly entangled qubit states to show the applicability of this method.
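
    The connection can be made concrete with a small sketch: for a pure state, the geometric measure is determined by the largest overlap with a product state, i.e. by the best rank-one approximation of the coefficient tensor. Below is a minimal higher-order power iteration for a real three-qubit state (an illustrative numpy implementation, not the authors' code; the GHZ state is used because its maximal product-state overlap, 1/√2, is known, and the convention GME = 1 - Λ² is one common choice):

```python
import numpy as np

def max_product_overlap(T, n_iter=500, seed=0):
    # Higher-order power iteration for the best rank-one approximation of a
    # real 3-way tensor: alternately contract out two modes and renormalize.
    rng = np.random.default_rng(seed)
    u, v, w = (rng.standard_normal(s) for s in T.shape)
    for _ in range(n_iter):
        u = np.einsum('ijk,j,k->i', T, v, w); u /= np.linalg.norm(u)
        v = np.einsum('ijk,i,k->j', T, u, w); v /= np.linalg.norm(v)
        w = np.einsum('ijk,i,j->k', T, u, v); w /= np.linalg.norm(w)
    return abs(np.einsum('ijk,i,j,k->', T, u, v, w))

# GHZ state (|000> + |111>)/sqrt(2) as a 2x2x2 coefficient tensor
ghz = np.zeros((2, 2, 2))
ghz[0, 0, 0] = ghz[1, 1, 1] = 1 / np.sqrt(2)
lam = max_product_overlap(ghz)   # maximal product-state overlap
geometric_measure = 1 - lam**2
```

    The iteration generically converges to one of the two closest product states, |000> or |111>, giving Λ = 1/√2 and a geometric measure of 1/2 for the GHZ state.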

  2. Rortex—A new vortex vector definition and vorticity tensor and vector decompositions

    NASA Astrophysics Data System (ADS)

    Liu, Chaoqun; Gao, Yisheng; Tian, Shuling; Dong, Xiangrui

    2018-03-01

    A vortex is intuitively recognized as the rotational/swirling motion of the fluids. However, an unambiguous and universally accepted definition for vortex is yet to be achieved in the field of fluid mechanics, which is probably one of the major obstacles causing considerable confusion and misunderstanding in turbulence research. In our previous work, a new vector quantity that is called vortex vector was proposed to accurately describe the local fluid rotation and clearly display vortical structures. In this paper, the definition of the vortex vector, named Rortex here, is revisited from the mathematical perspective. The existence of the possible rotational axis is proved through real Schur decomposition. Based on real Schur decomposition, a fast algorithm for calculating Rortex is also presented. In addition, new vorticity tensor and vector decompositions are introduced: the vorticity tensor is decomposed to a rigidly rotational part and a non-rotationally anti-symmetric part, and the vorticity vector is decomposed to a rigidly rotational vector which is called the Rortex vector and a non-rotational vector which is called the shear vector. Several cases, including the 2D Couette flow, 2D rigid rotational flow, and 3D boundary layer transition on a flat plate, are studied to justify the definition of Rortex. It can be observed that Rortex identifies both the precise swirling strength and the rotational axis, and thus it can reasonably represent the local fluid rotation and provide a new powerful tool for vortex dynamics and turbulence research.

  3. EEG Classification for Hybrid Brain-Computer Interface Using a Tensor Based Multiclass Multimodal Analysis Scheme

    PubMed Central

    Ji, Hongfei; Li, Jie; Lu, Rongrong; Gu, Rong; Cao, Lei; Gong, Xiaoliang

    2016-01-01

    Electroencephalogram- (EEG-) based brain-computer interface (BCI) systems usually utilize one type of changes in the dynamics of brain oscillations for control, such as event-related desynchronization/synchronization (ERD/ERS), steady state visual evoked potential (SSVEP), and P300 evoked potentials. There is a recent trend to detect more than one of these signals in one system to create a hybrid BCI. However, in this case, EEG data were always divided into groups and analyzed by the separate processing procedures. As a result, the interactive effects were ignored when different types of BCI tasks were executed simultaneously. In this work, we propose an improved tensor based multiclass multimodal scheme especially for hybrid BCI, in which EEG signals are denoted as multiway tensors, a nonredundant rank-one tensor decomposition model is proposed to obtain nonredundant tensor components, a weighted Fisher criterion is designed to select multimodal discriminative patterns without ignoring the interactive effects, and support vector machine (SVM) is extended to multiclass classification. Experimental results suggest that the proposed scheme can not only identify the different changes in the dynamics of brain oscillations induced by different types of tasks but also capture the interactive effects of simultaneous tasks properly. Therefore, it has great potential use for hybrid BCI. PMID:26880873

  4. EEG Classification for Hybrid Brain-Computer Interface Using a Tensor Based Multiclass Multimodal Analysis Scheme.

    PubMed

    Ji, Hongfei; Li, Jie; Lu, Rongrong; Gu, Rong; Cao, Lei; Gong, Xiaoliang

    2016-01-01

    Electroencephalogram- (EEG-) based brain-computer interface (BCI) systems usually utilize one type of changes in the dynamics of brain oscillations for control, such as event-related desynchronization/synchronization (ERD/ERS), steady state visual evoked potential (SSVEP), and P300 evoked potentials. There is a recent trend to detect more than one of these signals in one system to create a hybrid BCI. However, in this case, EEG data were always divided into groups and analyzed by the separate processing procedures. As a result, the interactive effects were ignored when different types of BCI tasks were executed simultaneously. In this work, we propose an improved tensor based multiclass multimodal scheme especially for hybrid BCI, in which EEG signals are denoted as multiway tensors, a nonredundant rank-one tensor decomposition model is proposed to obtain nonredundant tensor components, a weighted Fisher criterion is designed to select multimodal discriminative patterns without ignoring the interactive effects, and support vector machine (SVM) is extended to multiclass classification. Experimental results suggest that the proposed scheme can not only identify the different changes in the dynamics of brain oscillations induced by different types of tasks but also capture the interactive effects of simultaneous tasks properly. Therefore, it has great potential use for hybrid BCI.

  5. Higher-order stochastic differential equations and the positive Wigner function

    NASA Astrophysics Data System (ADS)

    Drummond, P. D.

    2017-12-01

    General higher-order stochastic processes that correspond to any diffusion-type tensor of higher than second order are obtained. The relationship of multivariate higher-order stochastic differential equations with tensor decomposition theory and tensor rank is explained. Techniques for generating the requisite complex higher-order noise are proved to exist either using polar coordinates and γ distributions, or from products of Gaussian variates. This method is shown to allow the calculation of the dynamics of the Wigner function, after it is extended to a complex phase space. The results are illustrated physically through dynamical calculations of the positive Wigner distribution for three-mode parametric downconversion, widely used in quantum optics. The approach eliminates paradoxes arising from truncation of the higher derivative terms in Wigner function time evolution. Anomalous results of negative populations and vacuum scattering found in truncated Wigner quantum simulations in quantum optics and Bose-Einstein condensate dynamics are shown not to occur with this type of stochastic theory.

  6. Parallel Tensor Compression for Large-Scale Scientific Data.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kolda, Tamara G.; Ballard, Grey; Austin, Woody Nathan

    As parallel computing trends towards the exascale, scientific data produced by high-fidelity simulations are growing increasingly massive. For instance, a simulation on a three-dimensional spatial grid with 512 points per dimension that tracks 64 variables per grid point for 128 time steps yields 8 TB of data. By viewing the data as a dense five-way tensor, we can compute a Tucker decomposition to find inherent low-dimensional multilinear structure, achieving compression ratios of up to 10000 on real-world data sets with negligible loss in accuracy. So that we can operate on such massive data, we present the first-ever distributed-memory parallel implementation for the Tucker decomposition, whose key computations correspond to parallel linear algebra operations, albeit with nonstandard data layouts. Our approach specifies a data distribution for tensors that avoids any tensor data redistribution, either locally or in parallel. We provide accompanying analysis of the computation and communication costs of the algorithms. To demonstrate the compression and accuracy of the method, we apply our approach to real-world data sets from combustion science simulations. We also provide detailed performance results, including parallel performance in both weak and strong scaling experiments.
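
    A truncated higher-order SVD gives a serial, small-scale version of the Tucker compression described above (a generic numpy sketch; the paper's distributed-memory implementation and data layouts are not reproduced): each factor matrix holds the leading left singular vectors of one mode's unfolding, and the core is the projection of the tensor onto those bases.

```python
import numpy as np

def unfold(T, mode):
    # Mode-n matricization of a 3-way tensor
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T, ranks):
    # Truncated higher-order SVD: leading left singular vectors of each
    # unfolding give the factor matrices; the core is the projection of T.
    factors = [np.linalg.svd(unfold(T, m), full_matrices=False)[0][:, :r]
               for m, r in enumerate(ranks)]
    core = np.einsum('ijk,ia,jb,kc->abc', T, *factors)
    return core, factors

def reconstruct(core, factors):
    # Multiply the core back out along each mode
    return np.einsum('abc,ia,jb,kc->ijk', core, *factors)
```

    The compression ratio is T.size divided by the storage of the core plus factor matrices; for data with genuine low multilinear rank the reconstruction is exact up to rounding.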

  7. Human action recognition based on point context tensor shape descriptor

    NASA Astrophysics Data System (ADS)

    Li, Jianjun; Mao, Xia; Chen, Lijiang; Wang, Lan

    2017-07-01

    Motion trajectory recognition is one of the most important means to determine the identity of a moving object. A compact and discriminative feature representation method can improve the trajectory recognition accuracy. This paper presents an efficient framework for action recognition using a three-dimensional skeleton kinematic joint model. First, we put forward a rotation-scale-translation-invariant shape descriptor based on point context (PC) and the normal vector of hypersurface to jointly characterize local motion and shape information. Meanwhile, an algorithm for extracting the key trajectory based on the confidence coefficient is proposed to reduce the randomness and computational complexity. Second, to decrease the eigenvalue decomposition time complexity, a tensor shape descriptor (TSD) based on PC that can globally capture the spatial layout and temporal order to preserve the spatial information of each frame is proposed. Then, a multilinear projection process is achieved by tensor dynamic time warping to map the TSD to a low-dimensional tensor subspace of the same size. Experimental results show that the proposed shape descriptor is effective and feasible, and the proposed approach obtains considerable performance improvement over the state-of-the-art approaches with respect to accuracy on a public action dataset.

  8. Tensor network method for reversible classical computation

    NASA Astrophysics Data System (ADS)

    Yang, Zhi-Cheng; Kourtis, Stefanos; Chamon, Claudio; Mucciolo, Eduardo R.; Ruckenstein, Andrei E.

    2018-03-01

    We develop a tensor network technique that can solve universal reversible classical computational problems, formulated as vertex models on a square lattice [Nat. Commun. 8, 15303 (2017), 10.1038/ncomms15303]. By encoding the truth table of each vertex constraint in a tensor, the total number of solutions compatible with partial inputs and outputs at the boundary can be represented as the full contraction of a tensor network. We introduce an iterative compression-decimation (ICD) scheme that performs this contraction efficiently. The ICD algorithm first propagates local constraints to longer ranges via repeated contraction-decomposition sweeps over all lattice bonds, thus achieving compression on a given length scale. It then decimates the lattice via coarse-graining tensor contractions. Repeated iterations of these two steps gradually collapse the tensor network and ultimately yield the exact tensor trace for large systems, without the need for manual control of tensor dimensions. Our protocol allows us to obtain the exact number of solutions for computations where a naive enumeration would take astronomically long times.
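
    The counting-by-contraction idea can be demonstrated on a single gate (a toy numpy example; the paper's vertex models, ICD sweeps, and lattice structure are not reproduced): encode the gate's truth table as a 0/1 tensor, clamp boundary variables with indicator vectors, sum over the free ones, and the full contraction returns the number of compatible assignments.

```python
import numpy as np

# Truth table of an AND gate as a 0/1 tensor: T[a, b, c] = 1 iff c == (a AND b)
T = np.zeros((2, 2, 2))
for a in range(2):
    for b in range(2):
        T[a, b, a & b] = 1.0

free = np.ones(2)              # sum over an unconstrained boundary variable
clamp0 = np.array([1.0, 0.0])  # fix a boundary variable to 0
clamp1 = np.array([0.0, 1.0])  # fix a boundary variable to 1

# Number of input pairs (a, b) whose AND equals the clamped output
n_out0 = np.einsum('abc,a,b,c->', T, free, free, clamp0)  # 3 inputs give 0
n_out1 = np.einsum('abc,a,b,c->', T, free, free, clamp1)  # 1 input gives 1
```

    Larger circuits correspond to networks of such tensors joined along shared indices; contracting the whole network, which is what the ICD scheme does efficiently, yields the total solution count.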

  9. Tensorial extensions of independent component analysis for multisubject FMRI analysis.

    PubMed

    Beckmann, C F; Smith, S M

    2005-03-01

    We discuss model-free analysis of multisubject or multisession FMRI data by extending the single-session probabilistic independent component analysis model (PICA; Beckmann and Smith, 2004. IEEE Trans. on Medical Imaging, 23 (2) 137-152) to higher dimensions. This results in a three-way decomposition that represents the different signals and artefacts present in the data in terms of their temporal, spatial, and subject-dependent variations. The technique is derived from and compared with parallel factor analysis (PARAFAC; Harshman and Lundy, 1984. In Research methods for multimode data analysis, chapter 5, pages 122-215. Praeger, New York). Using simulated data as well as data from multisession and multisubject FMRI studies we demonstrate that the tensor PICA approach is able to efficiently and accurately extract signals of interest in the spatial, temporal, and subject/session domain. The final decompositions improve upon PARAFAC results in terms of greater accuracy, reduced interference between the different estimated sources (reduced cross-talk), robustness (against deviations of the data from modeling assumptions and against overfitting), and computational speed. On real FMRI 'activation' data, the tensor PICA approach is able to extract plausible activation maps, time courses, and session/subject modes as well as provide a rich description of additional processes of interest such as image artefacts or secondary activation patterns. The resulting data decomposition gives simple and useful representations of multisubject/multisession FMRI data that can aid the interpretation and optimization of group FMRI studies beyond what can be achieved using model-based analysis techniques.

  10. Tensor hypercontraction. II. Least-squares renormalization

    NASA Astrophysics Data System (ADS)

    Parrish, Robert M.; Hohenstein, Edward G.; Martínez, Todd J.; Sherrill, C. David

    2012-12-01

    The least-squares tensor hypercontraction (LS-THC) representation for the electron repulsion integral (ERI) tensor is presented. Recently, we developed the generic tensor hypercontraction (THC) ansatz, which represents the fourth-order ERI tensor as a product of five second-order tensors [E. G. Hohenstein, R. M. Parrish, and T. J. Martínez, J. Chem. Phys. 137, 044103 (2012)], 10.1063/1.4732310. Our initial algorithm for the generation of the THC factors involved a two-sided invocation of overlap-metric density fitting, followed by a PARAFAC decomposition, and is denoted PARAFAC tensor hypercontraction (PF-THC). LS-THC supersedes PF-THC by producing the THC factors through a least-squares renormalization of a spatial quadrature over the otherwise singular 1/r12 operator. Remarkably, an analytical and simple formula for the LS-THC factors exists. Using this formula, the factors may be generated with O(N^5) effort if exact integrals are decomposed, or O(N^4) effort if the decomposition is applied to density-fitted integrals, using any choice of density fitting metric. The accuracy of LS-THC is explored for a range of systems using both conventional and density-fitted integrals in the context of MP2. The grid fitting error is found to be negligible even for extremely sparse spatial quadrature grids. For the case of density-fitted integrals, the additional error incurred by the grid fitting step is generally markedly smaller than the underlying Coulomb-metric density fitting error. The present results, coupled with our previously published factorizations of MP2 and MP3, provide an efficient, robust O(N^4) approach to both methods. Moreover, LS-THC is generally applicable to many other methods in quantum chemistry.
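The least-squares step can be illustrated on a toy 4-index tensor that is exactly THC-representable. The sizes and random factors below are hypothetical, and the closed-form fit shown is the generic normal-equations solution (with S the elementwise-squared overlap of the collocation factor), not the paper's production algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
n, P = 6, 5                      # orbitals, grid points (toy sizes)
X = rng.standard_normal((n, P))  # collocation factor X[p, P]
Z0 = rng.standard_normal((P, P))
Z0 = 0.5 * (Z0 + Z0.T)           # symmetric core

# A 4-index tensor that is exactly THC-representable:
# T[p,q,r,s] = sum_{P,Q} X[p,P] X[q,P] Z0[P,Q] X[r,Q] X[s,Q]
T = np.einsum('pP,qP,PQ,rQ,sQ->pqrs', X, X, Z0, X, X)

# Least-squares fit of the core: with S = (X^T X)**2 (elementwise square)
# and E the grid-projected tensor, the normal equations give Z = S^-1 E S^-1.
S = (X.T @ X) ** 2
E = np.einsum('pqrs,pP,qP,rQ,sQ->PQ', T, X, X, X, X)
Z = np.linalg.solve(S, np.linalg.solve(S, E.T).T)
```

Because the toy tensor lies exactly in the THC form with the same collocation factor, the fit recovers the core exactly; for a real ERI tensor the same formula yields the best least-squares core on the chosen quadrature grid.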

  11. Tensor hypercontraction. II. Least-squares renormalization.

    PubMed

    Parrish, Robert M; Hohenstein, Edward G; Martínez, Todd J; Sherrill, C David

    2012-12-14

    The least-squares tensor hypercontraction (LS-THC) representation for the electron repulsion integral (ERI) tensor is presented. Recently, we developed the generic tensor hypercontraction (THC) ansatz, which represents the fourth-order ERI tensor as a product of five second-order tensors [E. G. Hohenstein, R. M. Parrish, and T. J. Martínez, J. Chem. Phys. 137, 044103 (2012)]. Our initial algorithm for the generation of the THC factors involved a two-sided invocation of overlap-metric density fitting, followed by a PARAFAC decomposition, and is denoted PARAFAC tensor hypercontraction (PF-THC). LS-THC supersedes PF-THC by producing the THC factors through a least-squares renormalization of a spatial quadrature over the otherwise singular 1/r12 operator. Remarkably, an analytical and simple formula for the LS-THC factors exists. Using this formula, the factors may be generated with O(N^5) effort if exact integrals are decomposed, or O(N^4) effort if the decomposition is applied to density-fitted integrals, using any choice of density fitting metric. The accuracy of LS-THC is explored for a range of systems using both conventional and density-fitted integrals in the context of MP2. The grid fitting error is found to be negligible even for extremely sparse spatial quadrature grids. For the case of density-fitted integrals, the additional error incurred by the grid fitting step is generally markedly smaller than the underlying Coulomb-metric density fitting error. The present results, coupled with our previously published factorizations of MP2 and MP3, provide an efficient, robust O(N^4) approach to both methods. Moreover, LS-THC is generally applicable to many other methods in quantum chemistry.

  12. Low-rank factorization of electron integral tensors and its application in electronic structure theory

    DOE PAGES

    Peng, Bo; Kowalski, Karol

    2017-01-25

    In this paper, we apply the reverse Cuthill-McKee (RCM) algorithm to transform two-electron integral tensors to their block diagonal forms. By further applying Cholesky decomposition (CD) on each of the diagonal blocks, we are able to represent the high-dimensional two-electron integral tensors in terms of permutation matrices and low-rank Cholesky vectors. This representation facilitates low-rank factorizations of high-dimensional tensor contractions in post-Hartree-Fock calculations. Finally, we discuss the second-order Møller-Plesset (MP2) method and the linear coupled-cluster model with doubles (L-CCD) as examples to demonstrate the efficiency of this technique in representing the two-electron integrals in a compact form.
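The two ingredients can be sketched with scipy and numpy on a toy sparse symmetric positive-definite matrix (the matrix here is hypothetical, standing in for a diagonal block of the integral tensor):

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import reverse_cuthill_mckee

# Toy sparse symmetric positive-definite matrix with scattered couplings.
n = 8
A = np.eye(n) * 4.0
for i, j in [(0, 7), (1, 6), (2, 5)]:
    A[i, j] = A[j, i] = 1.0

def bandwidth(M):
    r, c = np.nonzero(M)
    return int(np.max(np.abs(r - c)))

# The RCM permutation clusters the nonzeros near the diagonal.
perm = reverse_cuthill_mckee(csr_matrix(A), symmetric_mode=True)
B = A[np.ix_(perm, perm)]
print(bandwidth(A), bandwidth(B))  # bandwidth shrinks after reordering

# Cholesky factorization of the reordered matrix (applied per diagonal
# block in the block-diagonal case): B = L @ L.T
L = np.linalg.cholesky(B)
```

Keeping only the leading Cholesky vectors of each block yields the low-rank representation used in the post-Hartree-Fock contractions.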

  13. Low-rank factorization of electron integral tensors and its application in electronic structure theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peng, Bo; Kowalski, Karol

    In this paper, we apply the reverse Cuthill-McKee (RCM) algorithm to transform two-electron integral tensors to their block diagonal forms. By further applying Cholesky decomposition (CD) on each of the diagonal blocks, we are able to represent the high-dimensional two-electron integral tensors in terms of permutation matrices and low-rank Cholesky vectors. This representation facilitates low-rank factorizations of high-dimensional tensor contractions in post-Hartree-Fock calculations. Finally, we discuss the second-order Møller-Plesset (MP2) method and the linear coupled-cluster model with doubles (L-CCD) as examples to demonstrate the efficiency of this technique in representing the two-electron integrals in a compact form.

  14. Using Matrix and Tensor Factorizations for the Single-Trial Analysis of Population Spike Trains.

    PubMed

    Onken, Arno; Liu, Jian K; Karunasekara, P P Chamanthi R; Delis, Ioannis; Gollisch, Tim; Panzeri, Stefano

    2016-11-01

    Advances in neuronal recording techniques are leading to ever larger numbers of simultaneously monitored neurons. This poses the important analytical challenge of how to capture compactly all sensory information that neural population codes carry in their spatial dimension (differences in stimulus tuning across neurons at different locations), in their temporal dimension (temporal neural response variations), or in their combination (temporally coordinated neural population firing). Here we investigate the utility of tensor factorizations of population spike trains along space and time. These factorizations decompose a dataset of single-trial population spike trains into spatial firing patterns (combinations of neurons firing together), temporal firing patterns (temporal activation of these groups of neurons) and trial-dependent activation coefficients (strength of recruitment of such neural patterns on each trial). We validated various factorization methods on simulated data and on populations of ganglion cells simultaneously recorded in the salamander retina. We found that single-trial tensor space-by-time decompositions provided low-dimensional data-robust representations of spike trains that capture efficiently both their spatial and temporal information about sensory stimuli. Tensor decompositions with orthogonality constraints were the most efficient in extracting sensory information, whereas non-negative tensor decompositions worked well even on non-independent and overlapping spike patterns, and retrieved informative firing patterns expressed by the same population in response to novel stimuli. Our method showed that populations of retinal ganglion cells carried information in their spike timing on the ten-milliseconds-scale about spatial details of natural images. This information could not be recovered from the spike counts of these cells. First-spike latencies carried the majority of information provided by the whole spike train about fine-scale image features, and supplied almost as much information about coarse natural image features as firing rates. Together, these results highlight the importance of spike timing, and particularly of first-spike latencies, in retinal coding.
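A space-by-time decomposition of a trials × neurons × time array can be sketched with plain SVDs of the mode unfoldings (a Tucker-2-style toy on synthetic data; the paper's orthogonal and non-negative variants impose additional constraints not shown here).

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_neurons, n_time, R = 20, 12, 30, 3

# Synthetic low-rank population activity: every trial mixes R spatial
# firing patterns with R temporal activation profiles.
W_space = rng.random((n_neurons, R))       # spatial modules
W_time = rng.random((n_time, R))           # temporal modules
H = rng.standard_normal((n_trials, R, R))  # trial-dependent coefficients
X = np.einsum('nr,krq,tq->knt', W_space, H, W_time)

# Estimate spatial and temporal modules from the SVDs of the neuron-mode
# and time-mode unfoldings, then read off single-trial coefficients.
Us = np.linalg.svd(X.transpose(1, 0, 2).reshape(n_neurons, -1),
                   full_matrices=False)[0][:, :R]
Ut = np.linalg.svd(X.transpose(2, 0, 1).reshape(n_time, -1),
                   full_matrices=False)[0][:, :R]
coeffs = np.einsum('nr,knt,tq->krq', Us, X, Ut)  # one R x R matrix per trial
X_hat = np.einsum('nr,krq,tq->knt', Us, coeffs, Ut)
```

For data of this exact multilinear rank the reconstruction is exact; on real spike trains the truncated modules give the low-dimensional single-trial representation used for decoding.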

  15. Using Matrix and Tensor Factorizations for the Single-Trial Analysis of Population Spike Trains

    PubMed Central

    Onken, Arno; Liu, Jian K.; Karunasekara, P. P. Chamanthi R.; Delis, Ioannis; Gollisch, Tim; Panzeri, Stefano

    2016-01-01

    Advances in neuronal recording techniques are leading to ever larger numbers of simultaneously monitored neurons. This poses the important analytical challenge of how to capture compactly all sensory information that neural population codes carry in their spatial dimension (differences in stimulus tuning across neurons at different locations), in their temporal dimension (temporal neural response variations), or in their combination (temporally coordinated neural population firing). Here we investigate the utility of tensor factorizations of population spike trains along space and time. These factorizations decompose a dataset of single-trial population spike trains into spatial firing patterns (combinations of neurons firing together), temporal firing patterns (temporal activation of these groups of neurons) and trial-dependent activation coefficients (strength of recruitment of such neural patterns on each trial). We validated various factorization methods on simulated data and on populations of ganglion cells simultaneously recorded in the salamander retina. We found that single-trial tensor space-by-time decompositions provided low-dimensional data-robust representations of spike trains that capture efficiently both their spatial and temporal information about sensory stimuli. Tensor decompositions with orthogonality constraints were the most efficient in extracting sensory information, whereas non-negative tensor decompositions worked well even on non-independent and overlapping spike patterns, and retrieved informative firing patterns expressed by the same population in response to novel stimuli. Our method showed that populations of retinal ganglion cells carried information in their spike timing on the ten-milliseconds-scale about spatial details of natural images. This information could not be recovered from the spike counts of these cells. First-spike latencies carried the majority of information provided by the whole spike train about fine-scale image features, and supplied almost as much information about coarse natural image features as firing rates. Together, these results highlight the importance of spike timing, and particularly of first-spike latencies, in retinal coding. PMID:27814363

  16. Combined Tensor Fitting and TV Regularization in Diffusion Tensor Imaging Based on a Riemannian Manifold Approach.

    PubMed

    Baust, Maximilian; Weinmann, Andreas; Wieczorek, Matthias; Lasser, Tobias; Storath, Martin; Navab, Nassir

    2016-08-01

    In this paper, we consider combined TV denoising and diffusion tensor fitting in DTI using the affine-invariant Riemannian metric on the space of diffusion tensors. Instead of first fitting the diffusion tensors, and then denoising them, we define a suitable TV type energy functional which incorporates the measured DWIs (using an inverse problem setup) and which measures the nearness of neighboring tensors in the manifold. To approach this functional, we propose generalized forward-backward splitting algorithms which combine an explicit and several implicit steps performed on a decomposition of the functional. We validate the performance of the derived algorithms on synthetic and real DTI data. In particular, we work on real 3D data. To our knowledge, the present paper describes the first approach to TV regularization in a combined manifold and inverse problem setup.
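A drastically simplified, Euclidean 1-D analogue conveys the flavor of the TV-type energy being minimized: gradient descent on a smoothed total-variation term plus a quadratic data term. The signal, noise level, and parameters below are hypothetical, and the paper's manifold-valued functional and generalized forward-backward splitting are not reproduced here.

```python
import numpy as np

def tv_denoise_1d(y, lam=0.5, step=0.01, eps=1e-3, n_iter=2000):
    """Gradient descent on 0.5*||x - y||^2 + lam * sum sqrt(dx^2 + eps),
    a smoothed 1-D total-variation energy (Euclidean toy, not the
    manifold-valued DTI functional of the paper)."""
    x = y.copy()
    for _ in range(n_iter):
        d = np.diff(x)
        w = d / np.sqrt(d ** 2 + eps)  # gradient of the smoothed TV term
        g = x - y                      # gradient of the data term
        g[:-1] -= lam * w
        g[1:] += lam * w
        x = x - step * g
    return x

rng = np.random.default_rng(0)
clean = np.repeat([0.0, 1.0, 0.0], 30)   # piecewise-constant signal
noisy = clean + 0.3 * rng.standard_normal(clean.size)
denoised = tv_denoise_1d(noisy)
```

The TV term suppresses noise while largely preserving the jumps, so the estimate ends up closer to the clean signal than the noisy input.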

  17. Representing Matrix Cracks Through Decomposition of the Deformation Gradient Tensor in Continuum Damage Mechanics Methods

    NASA Technical Reports Server (NTRS)

    Leone, Frank A., Jr.

    2015-01-01

    A method is presented to represent the large-deformation kinematics of intraply matrix cracks and delaminations in continuum damage mechanics (CDM) constitutive material models. The method involves the additive decomposition of the deformation gradient tensor into 'crack' and 'bulk material' components. The response of the intact bulk material is represented by a reduced deformation gradient tensor, and the opening of an embedded cohesive interface is represented by a normalized cohesive displacement-jump vector. The rotation of the embedded interface is tracked as the material deforms and as the crack opens. The distribution of the total local deformation between the bulk material and the cohesive interface components is determined by minimizing the difference between the cohesive stress and the bulk material stress projected onto the cohesive interface. The improvements to the accuracy of CDM models that incorporate the presented method over existing approaches are demonstrated for a single element subjected to simple shear deformation and for a finite element model of a unidirectional open-hole tension specimen. The material model is implemented as a VUMAT user subroutine for the Abaqus/Explicit finite element software. The presented deformation gradient decomposition method reduces the artificial load transfer across matrix cracks subjected to large shearing deformations, and avoids the spurious secondary failure modes that often occur in analyses based on conventional progressive damage models.

  18. Uncertainty propagation in orbital mechanics via tensor decomposition

    NASA Astrophysics Data System (ADS)

    Sun, Yifei; Kumar, Mrinal

    2016-03-01

    Uncertainty forecasting in orbital mechanics is an essential but difficult task, primarily because the underlying Fokker-Planck equation (FPE) is defined on a relatively high dimensional (6-D) state-space and is driven by the nonlinear perturbed Keplerian dynamics. In addition, an enormously large solution domain is required for numerical solution of this FPE (e.g. encompassing the entire orbit in the x-y-z subspace), of which the state probability density function (pdf) occupies a tiny fraction at any given time. This coupling of large size, high dimensionality and nonlinearity makes for a formidable computational task, and has caused the FPE for orbital uncertainty propagation to remain an unsolved problem. To the best of the authors' knowledge, this paper presents the first successful direct solution of the FPE for perturbed Keplerian mechanics. To tackle the dimensionality issue, the time-varying state pdf is approximated in the CANDECOMP/PARAFAC decomposition tensor form where all the six spatial dimensions as well as the time dimension are separated from one another. The pdf approximation for all times is obtained simultaneously via the alternating least squares algorithm. Chebyshev spectral differentiation is employed for discretization on account of its spectral ("super-fast") convergence rate. To facilitate the tensor decomposition and control the solution domain size, system dynamics is expressed using spherical coordinates in a noninertial reference frame. Numerical results obtained on a regular personal computer are compared with Monte Carlo simulations.
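The Chebyshev spectral differentiation ingredient can be sketched with the standard differentiation-matrix construction (Trefethen-style). The toy below differentiates a cubic; it is only one building block, not the paper's CP/ALS solver for the FPE.

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix D and points x: (D @ f(x))
    approximates f'(x), exactly for polynomials of degree <= N."""
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))  # diagonal entries via negative row sums
    return D, x

D, x = cheb(8)
# Spectral differentiation of f(x) = x**3 is exact at the Chebyshev points.
dfdx = D @ x ** 3
```

In the paper's setting one such matrix acts along each separated coordinate of the CP representation, which is what makes the high-dimensional FPE discretization tractable.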

  19. Sparse Tensor Decomposition for Haplotype Assembly of Diploids and Polyploids.

    PubMed

    Hashemi, Abolfazl; Zhu, Banghua; Vikalo, Haris

    2018-03-21

    Haplotype assembly is the task of reconstructing haplotypes of an individual from a mixture of sequenced chromosome fragments. Haplotype information enables studies of the effects of genetic variations on an organism's phenotype. Most of the mathematical formulations of haplotype assembly are known to be NP-hard and haplotype assembly becomes even more challenging as the sequencing technology advances and the length of the paired-end reads and inserts increases. Assembly of haplotypes of polyploid organisms is considerably more difficult than in the case of diploids. Hence, scalable and accurate schemes with provable performance are desired for haplotype assembly of both diploid and polyploid organisms. We propose a framework that formulates haplotype assembly from sequencing data as a sparse tensor decomposition. We cast the problem as that of decomposing a tensor having special structural constraints and missing a large fraction of its entries into a product of two factors, U and [Formula: see text]; tensor [Formula: see text] reveals haplotype information while U is a sparse matrix encoding the origin of erroneous sequencing reads. An algorithm, AltHap, which reconstructs haplotypes of either diploid or polyploid organisms by iteratively solving this decomposition problem is proposed. The performance and convergence properties of AltHap are theoretically analyzed and, in doing so, guarantees on the achievable minimum error correction scores and correct phasing rate are established. The developed framework is applicable to diploid, biallelic and polyallelic polyploid species. The code for AltHap is freely available from https://github.com/realabolfazl/AltHap . AltHap was tested in a number of different scenarios and was shown to compare favorably to state-of-the-art methods in applications to haplotype assembly of diploids, and significantly outperforms existing techniques when applied to haplotype assembly of polyploids.

  20. Communication: Acceleration of coupled cluster singles and doubles via orbital-weighted least-squares tensor hypercontraction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parrish, Robert M.; Sherrill, C. David, E-mail: sherrill@gatech.edu; Hohenstein, Edward G.

    2014-05-14

    We apply orbital-weighted least-squares tensor hypercontraction decomposition of the electron repulsion integrals to accelerate the coupled cluster singles and doubles (CCSD) method. Using accurate and flexible low-rank factorizations of the electron repulsion integral tensor, we are able to reduce the scaling of the most vexing particle-particle ladder term in CCSD from O(N^6) to O(N^5), with remarkably low error. Combined with a T1-transformed Hamiltonian, this leads to substantial practical accelerations against an optimized density-fitted CCSD implementation.

  1. Unsupervised Tensor Mining for Big Data Practitioners.

    PubMed

    Papalexakis, Evangelos E; Faloutsos, Christos

    2016-09-01

    Multiaspect data are ubiquitous in modern Big Data applications. For instance, different aspects of a social network are the different types of communication between people, the time stamp of each interaction, and the location associated with each individual. How can we jointly model all those aspects and leverage the additional information that they introduce to our analysis? Tensors, which are multidimensional extensions of matrices, are a principled and mathematically sound way of modeling such multiaspect data. In this article, our goal is to popularize tensors and tensor decompositions to Big Data practitioners by demonstrating their effectiveness, outlining challenges that pertain to their application in Big Data scenarios, and presenting our recent work that tackles those challenges. We view this work as a step toward a fully automated, unsupervised tensor mining tool that can be easily and broadly adopted by practitioners in academia and industry.

  2. Tensor products of U_q'(ŝl(2))-modules and the big q^2-Jacobi function transform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gade, R. M.

    2013-01-15

    Four tensor products of evaluation modules of the quantum affine algebra U_q'(ŝl(2)), obtained from the negative and positive series, the complementary series, and the strange series representations, are investigated. Linear operators R(z) satisfying the intertwining property on finite linear combinations of the canonical basis elements of the tensor products are described in terms of two sets of infinite sums {τ^(r,t)}_{r,t∈Z≥0} and {τ̃^(r,t)}_{r,t∈Z≥0} involving big q^2-Jacobi functions or related nonterminating basic hypergeometric series. Inhomogeneous recurrence relations can be derived for both sets. Evaluations of the simplest sums provide the corresponding initial conditions. For the first set of sums, the relations entail a big q^2-Jacobi function transform pair. An integral decomposition is obtained for the sum τ^(r,t). A partial description of the relation between the decompositions of the tensor products with respect to U_q(sl(2)), or with respect to its complement in U_q'(ŝl(2)), can be formulated in terms of Askey-Wilson function transforms. For a particular combination of two tensor products, the occurrence of proper U_q'(ŝl(2))-submodules is discussed.

  3. Anisotropic Developments for Homogeneous Shear Flows

    NASA Technical Reports Server (NTRS)

    Cambon, Claude; Rubinstein, Robert

    2006-01-01

    The general decomposition of the spectral correlation tensor R(sub ij)(k) by Cambon et al. (J. Fluid Mech., 202, 295; J. Fluid Mech., 337, 303) into directional and polarization components is applied to the representation of R(sub ij)(k) by spherically averaged quantities. The decomposition splits the deviatoric part H(sub ij)(k) of the spherical average of R(sub ij)(k) into directional and polarization components H(sub ij)(sup e)(k) and H(sub ij)(sup z)(k). A self-consistent representation of the spectral tensor in the limit of weak anisotropy is constructed in terms of these spherically averaged quantities. The directional and polarization components must be treated independently: models that attempt the same representation of the spectral tensor using the spherical average H(sub ij)(k) alone prove to be inconsistent with Navier-Stokes dynamics. In particular, a spectral tensor consistent with a prescribed Reynolds stress is not unique. The degree of anisotropy permitted by this theory is restricted by realizability requirements. Since these requirements will be less severe in a more accurate theory, a preliminary account is given of how to generalize the formalism of spherical averages to higher-order expansions of the spectral tensor. Directionality is described by a conventional expansion in spherical harmonics, but polarization requires an expansion in tensorial spherical harmonics generated by irreducible representations of the spatial rotation group SO(3). These expansions are considered in more detail in the special case of axial symmetry.

  4. A Tensor-Based Subspace Approach for Bistatic MIMO Radar in Spatial Colored Noise

    PubMed Central

    Wang, Xianpeng; Wang, Wei; Li, Xin; Wang, Junxiang

    2014-01-01

    In this paper, a new tensor-based subspace approach is proposed to estimate the direction of departure (DOD) and the direction of arrival (DOA) for bistatic multiple-input multiple-output (MIMO) radar in the presence of spatial colored noise. Firstly, the received signals can be packed into a third-order measurement tensor by exploiting the inherent structure of the matched filter. Then, the measurement tensor can be divided into two sub-tensors, and a cross-covariance tensor is formulated to eliminate the spatial colored noise. Finally, the signal subspace is constructed by utilizing the higher-order singular value decomposition (HOSVD) of the cross-covariance tensor, and the DOD and DOA can be obtained through the estimation of signal parameters via rotational invariance technique (ESPRIT) algorithm, which are paired automatically. Since the multidimensional inherent structure and the cross-covariance tensor technique are used, the proposed method provides better angle estimation performance than Chen's method, the ESPRIT algorithm and the multi-SVD method. Simulation results confirm the effectiveness and the advantage of the proposed method. PMID:24573313
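The HOSVD step can be sketched in numpy by taking SVDs of the mode unfoldings of a toy 3-way array (the random array stands in for the cross-covariance tensor; signal-subspace truncation is omitted).

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: move axis `mode` to the front, flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T):
    """Higher-order SVD of a 3-way array: orthonormal factor matrices from
    the SVDs of the mode unfoldings, plus the all-orthogonal core tensor."""
    U = [np.linalg.svd(unfold(T, n), full_matrices=False)[0]
         for n in range(T.ndim)]
    core = np.einsum('ijk,ia,jb,kc->abc', T, U[0], U[1], U[2])
    return core, U

rng = np.random.default_rng(0)
T = rng.standard_normal((4, 5, 6))
core, U = hosvd(T)
# Multiplying the core back by the factor matrices recovers T exactly.
T_rec = np.einsum('abc,ia,jb,kc->ijk', core, U[0], U[1], U[2])
```

In the radar application, the leading mode-n singular vectors of the cross-covariance tensor span the signal subspace from which ESPRIT extracts the paired DOD/DOA estimates.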

  5. A tensor-based subspace approach for bistatic MIMO radar in spatial colored noise.

    PubMed

    Wang, Xianpeng; Wang, Wei; Li, Xin; Wang, Junxiang

    2014-02-25

    In this paper, a new tensor-based subspace approach is proposed to estimate the direction of departure (DOD) and the direction of arrival (DOA) for bistatic multiple-input multiple-output (MIMO) radar in the presence of spatial colored noise. Firstly, the received signals can be packed into a third-order measurement tensor by exploiting the inherent structure of the matched filter. Then, the measurement tensor can be divided into two sub-tensors, and a cross-covariance tensor is formulated to eliminate the spatial colored noise. Finally, the signal subspace is constructed by utilizing the higher-order singular value decomposition (HOSVD) of the cross-covariance tensor, and the DOD and DOA can be obtained through the estimation of signal parameters via rotational invariance technique (ESPRIT) algorithm, which are paired automatically. Since the multidimensional inherent structure and the cross-covariance tensor technique are used, the proposed method provides better angle estimation performance than Chen's method, the ESPRIT algorithm and the multi-SVD method. Simulation results confirm the effectiveness and the advantage of the proposed method.

  6. Trends in biomedical informatics: automated topic analysis of JAMIA articles

    PubMed Central

    Wang, Shuang; Jiang, Chao; Jiang, Xiaoqian; Kim, Hyeon-Eui; Sun, Jimeng; Ohno-Machado, Lucila

    2015-01-01

    Biomedical Informatics is a growing interdisciplinary field in which research topics and citation trends have been evolving rapidly in recent years. To analyze these data in a fast, reproducible manner, automation of certain processes is needed. JAMIA is a “generalist” journal for biomedical informatics. Its articles reflect the wide range of topics in informatics. In this study, we retrieved Medical Subject Headings (MeSH) terms and citations of JAMIA articles published between 2009 and 2014. We use tensors (i.e., multidimensional arrays) to represent the interaction among topics, time and citations, and applied tensor decomposition to automate the analysis. The trends represented by tensors were then carefully interpreted and the results were compared with previous findings based on manual topic analysis. A list of most cited JAMIA articles, their topics, and publication trends over recent years is presented. The analyses confirmed previous studies and showed that, from 2012 to 2014, the number of articles related to MeSH terms Methods, Organization & Administration, and Algorithms increased significantly both in number of publications and citations. Citation trends varied widely by topic, with Natural Language Processing having a large number of citations in particular years, and Medical Record Systems, Computerized remaining a very popular topic in all years. PMID:26555018

  7. Low-rank factorization of electron integral tensors and its application in electronic structure theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peng, Bo; Kowalski, Karol

    In this letter, we introduce the reverse Cuthill-McKee (RCM) algorithm, which is often used for the bandwidth reduction of sparse matrices, to transform the two-electron integral tensors to their block diagonal forms. By further applying the pivoted Cholesky decomposition (CD) on each of the diagonal blocks, we are able to represent the high-dimensional two-electron integral tensors in terms of permutation matrices and low-rank Cholesky vectors. This representation facilitates the low-rank factorization of the high-dimensional tensor contractions that are usually encountered in post-Hartree-Fock calculations. In this letter, we discuss the second-order Møller-Plesset (MP2) method and the linear coupled-cluster model with doubles (L-CCD) as two simple examples to demonstrate the efficiency of the RCM-CD technique in representing two-electron integrals in a compact form.

  8. Combining Diffusion Tensor Metrics and DSC Perfusion Imaging: Can It Improve the Diagnostic Accuracy in Differentiating Tumefactive Demyelination from High-Grade Glioma?

    PubMed

    Hiremath, S B; Muraleedharan, A; Kumar, S; Nagesh, C; Kesavadas, C; Abraham, M; Kapilamoorthy, T R; Thomas, B

    2017-04-01

    Tumefactive demyelinating lesions with atypical features can mimic high-grade gliomas on conventional imaging sequences. The aim of this study was to assess the role of conventional imaging, DTI metrics (p:q tensor decomposition), and DSC perfusion in differentiating tumefactive demyelinating lesions and high-grade gliomas. Fourteen patients with tumefactive demyelinating lesions and 21 patients with high-grade gliomas underwent brain MR imaging with conventional, DTI, and DSC perfusion imaging. Imaging sequences were assessed for differentiation of the lesions. DTI metrics in the enhancing areas and perilesional hyperintensity were obtained by ROI analysis, and the relative CBV values in enhancing areas were calculated on DSC perfusion imaging. Conventional imaging sequences had a sensitivity of 80.9% and specificity of 57.1% in differentiating high-grade gliomas (P = .049) from tumefactive demyelinating lesions. DTI metrics (p:q tensor decomposition) and DSC perfusion demonstrated a statistically significant difference in the mean values of ADC, the isotropic component of the diffusion tensor, the anisotropic component of the diffusion tensor, the total magnitude of the diffusion tensor, and rCBV among enhancing portions in tumefactive demyelinating lesions and high-grade gliomas (P ≤ .02), with the highest specificity for ADC, the anisotropic component of the diffusion tensor, and relative CBV (92.9%). Mean fractional anisotropy values showed no significant statistical difference between tumefactive demyelinating lesions and high-grade gliomas. The combination of DTI and DSC parameters improved the diagnostic accuracy (area under the curve = 0.901). Addition of a heterogeneous enhancement pattern to DTI and DSC parameters improved it further (area under the curve = 0.966). The sensitivity increased from 71.4% to 85.7% after the addition of the enhancement pattern. DTI and DSC perfusion add profoundly to conventional imaging in differentiating tumefactive demyelinating lesions and high-grade gliomas. The combination of DTI metrics and DSC perfusion markedly improved diagnostic accuracy. © 2017 by American Journal of Neuroradiology.
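The p:q split of a diffusion tensor into isotropic and anisotropic magnitudes can be sketched as follows (a generic numpy sketch; the example tensor values are hypothetical, and conventions may differ in detail from the paper's).

```python
import numpy as np

def pq_decomposition(D):
    """Split a 3x3 diffusion tensor into isotropic magnitude p,
    anisotropic magnitude q, and total magnitude L, where
    p = |MD * I|_F, q = |D - MD * I|_F, and L**2 = p**2 + q**2."""
    eigs = np.linalg.eigvalsh(D)
    md = eigs.mean()                       # mean diffusivity
    p = np.sqrt(3.0) * md                  # isotropic component magnitude
    q = np.sqrt(np.sum((eigs - md) ** 2))  # anisotropic component magnitude
    L = np.sqrt(np.sum(eigs ** 2))         # total magnitude of the tensor
    return p, q, L

# A mildly anisotropic example tensor (values in mm^2/s are typical in DTI).
D = np.diag([1.7e-3, 0.4e-3, 0.3e-3])
p, q, L = pq_decomposition(D)
```

Because the isotropic and deviatoric parts are orthogonal in the Frobenius inner product, the three magnitudes satisfy L² = p² + q², which is why p and q together summarize the tensor more completely than fractional anisotropy alone.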

  9. Towards Interactive Construction of Topical Hierarchy: A Recursive Tensor Decomposition Approach

    PubMed Central

    Wang, Chi; Liu, Xueqing; Song, Yanglei; Han, Jiawei

    2015-01-01

    Automatic construction of user-desired topical hierarchies over large volumes of text data is a highly desirable but challenging task. This study proposes to give users freedom to construct topical hierarchies via interactive operations such as expanding a branch and merging several branches. Existing hierarchical topic modeling techniques are inadequate for this purpose because (1) they cannot consistently preserve the topics when the hierarchy structure is modified; and (2) the slow inference prevents swift response to user requests. In this study, we propose a novel method, called STROD, that allows efficient and consistent modification of topic hierarchies, based on a recursive generative model and a scalable tensor decomposition inference algorithm with theoretical performance guarantee. Empirical evaluation shows that STROD reduces the runtime of construction by several orders of magnitude, while generating consistent, high-quality hierarchies. PMID:26705505

  10. Towards Interactive Construction of Topical Hierarchy: A Recursive Tensor Decomposition Approach.

    PubMed

    Wang, Chi; Liu, Xueqing; Song, Yanglei; Han, Jiawei

    2015-08-01

    Automatic construction of user-desired topical hierarchies over large volumes of text data is a highly desirable but challenging task. This study proposes to give users freedom to construct topical hierarchies via interactive operations such as expanding a branch and merging several branches. Existing hierarchical topic modeling techniques are inadequate for this purpose because (1) they cannot consistently preserve the topics when the hierarchy structure is modified; and (2) their slow inference prevents swift response to user requests. In this study, we propose a novel method, called STROD, that allows efficient and consistent modification of topic hierarchies, based on a recursive generative model and a scalable tensor decomposition inference algorithm with theoretical performance guarantees. Empirical evaluation shows that STROD reduces the runtime of construction by several orders of magnitude, while generating consistent and high-quality hierarchies.
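    The scalable tensor decomposition at the heart of STROD can be illustrated with the standard symmetric tensor power iteration that moment-based topic recovery methods build on. This is a minimal numpy sketch under idealized assumptions (an exactly orthogonally decomposable third-order tensor), not the authors' implementation; all names are illustrative:

    ```python
    import numpy as np

    def tensor_power_method(T, n_iter=100, seed=0):
        """Recover one component of a symmetric tensor
        T = sum_i lam_i * a_i (x) a_i (x) a_i with orthonormal a_i."""
        rng = np.random.default_rng(seed)
        u = rng.standard_normal(T.shape[0])
        u /= np.linalg.norm(u)
        for _ in range(n_iter):
            # u <- T(I, u, u), then normalize
            u = np.einsum('ijk,j,k->i', T, u, u)
            u /= np.linalg.norm(u)
        lam = np.einsum('ijk,i,j,k->', T, u, u, u)  # eigenvalue estimate
        return lam, u

    # Synthetic orthogonally decomposable tensor with known factors.
    k = 3
    A = np.linalg.qr(np.random.default_rng(1).standard_normal((k, k)))[0]
    lams = np.array([3.0, 2.0, 1.0])
    T = np.einsum('r,ir,jr,kr->ijk', lams, A, A, A)

    lam, u = tensor_power_method(T)
    # u should align (up to sign) with one column of A
    best = np.max(np.abs(A.T @ u))
    ```

    In the full method, recovered components are deflated from the tensor and the procedure recurses down the hierarchy; that machinery is omitted here.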

  11. Multi-Fault Diagnosis of Rolling Bearings via Adaptive Projection Intrinsically Transformed Multivariate Empirical Mode Decomposition and High Order Singular Value Decomposition

    PubMed Central

    Yuan, Rui; Lv, Yong; Song, Gangbing

    2018-01-01

    Rolling bearings are important components in rotary machinery systems. In the field of multi-fault diagnosis of rolling bearings, the vibration signal collected from a single channel tends to miss some fault characteristic information. Using multiple sensors to collect signals at different locations on the machine to obtain a multivariate signal can remedy this problem. The adverse effect of a power imbalance between the various channels is inevitable, and unfavorable for multivariate signal processing. As a useful multivariate signal processing method, adaptive-projection intrinsically transformed multivariate empirical mode decomposition (APIT-MEMD) exhibits better performance than MEMD by adopting an adaptive projection strategy to alleviate power imbalances. The filter bank properties of APIT-MEMD are also adopted to obtain more accurate and stable intrinsic mode functions (IMFs), and to ease mode mixing problems in multi-fault frequency extraction. By aligning IMF sets into a third-order tensor, high order singular value decomposition (HOSVD) can be employed to estimate the fault number. Fault correlation factor (FCF) analysis is used to conduct correlation analysis and determine effective IMFs; the characteristic frequencies of multi-faults can then be extracted. Numerical simulations and application to a multi-fault situation demonstrate that the proposed method is promising for multi-fault diagnosis of multivariate rolling bearing signals. PMID:29659510

  12. Multi-Fault Diagnosis of Rolling Bearings via Adaptive Projection Intrinsically Transformed Multivariate Empirical Mode Decomposition and High Order Singular Value Decomposition.

    PubMed

    Yuan, Rui; Lv, Yong; Song, Gangbing

    2018-04-16

    Rolling bearings are important components in rotary machinery systems. In the field of multi-fault diagnosis of rolling bearings, the vibration signal collected from a single channel tends to miss some fault characteristic information. Using multiple sensors to collect signals at different locations on the machine to obtain a multivariate signal can remedy this problem. The adverse effect of a power imbalance between the various channels is inevitable, and unfavorable for multivariate signal processing. As a useful multivariate signal processing method, adaptive-projection intrinsically transformed multivariate empirical mode decomposition (APIT-MEMD) exhibits better performance than MEMD by adopting an adaptive projection strategy to alleviate power imbalances. The filter bank properties of APIT-MEMD are also adopted to obtain more accurate and stable intrinsic mode functions (IMFs), and to ease mode mixing problems in multi-fault frequency extraction. By aligning IMF sets into a third-order tensor, high order singular value decomposition (HOSVD) can be employed to estimate the fault number. Fault correlation factor (FCF) analysis is used to conduct correlation analysis and determine effective IMFs; the characteristic frequencies of multi-faults can then be extracted. Numerical simulations and application to a multi-fault situation demonstrate that the proposed method is promising for multi-fault diagnosis of multivariate rolling bearing signals.
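    The HOSVD step used above to estimate the fault number amounts to counting the significant singular values of each mode unfolding of the IMF tensor. A minimal numpy sketch on synthetic data (the shapes and tolerance are illustrative, not from the paper):

    ```python
    import numpy as np

    def hosvd_ranks(T, rel_tol=1e-8):
        """Estimate the multilinear rank of T by counting the significant
        singular values of each mode-n unfolding (the HOSVD spectra)."""
        ranks = []
        for n in range(T.ndim):
            unfolding = np.moveaxis(T, n, 0).reshape(T.shape[n], -1)
            s = np.linalg.svd(unfolding, compute_uv=False)
            ranks.append(int(np.sum(s > rel_tol * s[0])))
        return tuple(ranks)

    # Synthetic tensor with known multilinear rank (2, 3, 2).
    rng = np.random.default_rng(0)
    core = rng.standard_normal((2, 3, 2))
    U = [rng.standard_normal((8, 2)),
         rng.standard_normal((9, 3)),
         rng.standard_normal((7, 2))]
    T = np.einsum('abc,ia,jb,kc->ijk', core, *U)

    ranks = hosvd_ranks(T)  # -> (2, 3, 2)
    ```

    In practice the spectra of measured signals decay gradually rather than dropping to zero, so the threshold becomes a tuning parameter.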

  13. Micropolar continuum modelling of bi-dimensional tetrachiral lattices

    PubMed Central

    Chen, Y.; Liu, X. N.; Hu, G. K.; Sun, Q. P.; Zheng, Q. S.

    2014-01-01

    The in-plane behaviour of tetrachiral lattices should be characterized by bi-dimensional orthotropic material owing to the existence of two orthogonal axes of rotational symmetry. Moreover, the constitutive model must also represent the chirality inherent in the lattices. To this end, a bi-dimensional orthotropic chiral micropolar model is developed based on the theory of irreducible orthogonal tensor decomposition. The obtained constitutive tensors display a hierarchy structure depending on the symmetry of the underlying microstructure. Eight additional material constants, in addition to five for the hemitropic case, are introduced to characterize the anisotropy under Z2 invariance. The developed continuum model is then applied to a tetrachiral lattice, and the material constants of the continuum model are analytically derived by a homogenization process. By comparing with numerical simulations for the discrete lattice, it is found that the proposed continuum model can correctly characterize the static and wave properties of the tetrachiral lattice. PMID:24808754

  14. Effective metrics and a fully covariant description of constitutive tensors in electrodynamics

    NASA Astrophysics Data System (ADS)

    Schuster, Sebastian; Visser, Matt

    2017-12-01

    Using electromagnetism to study analogue space-times is tantamount to considering consistency conditions for when a given (meta-)material would provide an analogue space-time model or, vice versa, characterizing which given metric could be modeled with a (meta-)material. While the consistency conditions themselves are by now well known and studied, the form the metric takes once they are satisfied is not. This question is most easily answered by keeping the formalisms of the two research fields in contact here as close to each other as possible. While fully covariant formulations of the electrodynamics of media have been around for a long while, they are usually abandoned for (3+1)- or six-dimensional formalisms. Here we use the fully unified and fully covariant approach. This enables us even to generalize the consistency conditions for the existence of an effective metric to arbitrary background metrics beyond flat space-time electrodynamics. We also show how the familiar matrices for permittivity ε, permeability μ⁻¹, and magnetoelectric effects ζ can be seen as the three independent pieces of the Bel decomposition of the constitutive tensor Z^{abcd}, i.e., the components of an orthogonal decomposition with respect to a given observer with four-velocity V^a. Finally, we use the Moore-Penrose pseudoinverse and the closely related pseudodeterminant to gain the desired reconstruction of the effective metric in terms of the permittivity tensor ε^{ab}, the permeability tensor [μ⁻¹]^{ab}, and the magnetoelectric tensor ζ^{ab}, as an explicit function g_eff(ε, μ⁻¹, ζ).

  15. Virtual viewpoint generation for three-dimensional display based on the compressive light field

    NASA Astrophysics Data System (ADS)

    Meng, Qiao; Sang, Xinzhu; Chen, Duo; Guo, Nan; Yan, Binbin; Yu, Chongxiu; Dou, Wenhua; Xiao, Liquan

    2016-10-01

    Virtual viewpoint generation is one of the key technologies of three-dimensional (3D) display; it renders new scene perspectives from the existing viewpoints. The three-dimensional scene information can be effectively recovered at different viewing angles to allow users to switch between different views. However, in the process of matching multiple viewpoints, when N free viewpoints are received, each pair of viewpoints must be matched, namely C(N,2) = N(N-1)/2 matchings, and errors can occur when matching across different baselines. To address the great complexity of the traditional virtual viewpoint generation process, a novel and rapid virtual viewpoint generation algorithm is presented in this paper, using actual light field information rather than geometric information. Moreover, to give the decomposed data physical meaning, nonnegative tensor factorization (NTF) is used. A tensor representation is introduced for virtual multilayer displays. The light field emitted by an N-layer, M-frame display is represented by a sparse set of non-zero elements restricted to a plane within an Nth-order, rank-M tensor. The tensor representation allows for optimal decomposition of a light field into time-multiplexed, light-attenuating layers using NTF. Finally, the compressive light field synthesis of the multilayer display information is used to obtain virtual viewpoints by multiple multiplications. Experimental results show that the approach not only restores the original light field with high image quality (PSNR of 25.6 dB), but also remedies the deficiency of traditional matching: any viewpoint can be obtained from N free viewpoints.
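    A nonnegative tensor factorization of the kind used here can be sketched with classic multiplicative updates for a 3-way CP model. This is a generic illustration (Lee-Seung-style updates on synthetic data), not the authors' display-optimization code; the rank, shapes, and iteration count are illustrative:

    ```python
    import numpy as np

    def khatri_rao(B, C):
        """Column-wise Khatri-Rao product."""
        r = B.shape[1]
        return np.einsum('jr,kr->jkr', B, C).reshape(-1, r)

    def ntf(X, rank, n_iter=500, seed=0, eps=1e-12):
        """Nonnegative CP factorization of a 3-way tensor X via
        multiplicative updates (a generic sketch)."""
        rng = np.random.default_rng(seed)
        I, J, K = X.shape
        A, B, C = rng.random((I, rank)), rng.random((J, rank)), rng.random((K, rank))
        # Mode-n unfoldings, column-ordered to match khatri_rao above.
        X1 = X.reshape(I, -1)
        X2 = np.moveaxis(X, 1, 0).reshape(J, -1)
        X3 = np.moveaxis(X, 2, 0).reshape(K, -1)
        for _ in range(n_iter):
            A *= (X1 @ khatri_rao(B, C)) / (A @ ((B.T @ B) * (C.T @ C)) + eps)
            B *= (X2 @ khatri_rao(A, C)) / (B @ ((A.T @ A) * (C.T @ C)) + eps)
            C *= (X3 @ khatri_rao(A, B)) / (C @ ((A.T @ A) * (B.T @ B)) + eps)
        return A, B, C

    # Exactly nonnegative rank-2 data, so a good fit is attainable.
    rng = np.random.default_rng(1)
    X = np.einsum('ir,jr,kr->ijk',
                  rng.random((5, 2)), rng.random((6, 2)), rng.random((7, 2)))
    A, B, C = ntf(X, rank=2)
    err = np.linalg.norm(X - np.einsum('ir,jr,kr->ijk', A, B, C)) / np.linalg.norm(X)
    ```

    The multiplicative form keeps every factor entry nonnegative, which is what gives the decomposed layers a physical (light-attenuation) interpretation.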

  16. CP decomposition approach to blind separation for DS-CDMA system using a new performance index

    NASA Astrophysics Data System (ADS)

    Rouijel, Awatif; Minaoui, Khalid; Comon, Pierre; Aboutajdine, Driss

    2014-12-01

    In this paper, we present a canonical polyadic (CP) tensor decomposition isolating the scaling matrix. This has two major implications: (i) the problem conditioning shows up explicitly and could be controlled through a constraint on the so-called coherences and (ii) a performance criterion concerning the factor matrices can be exactly calculated and is more realistic than performance metrics used in the literature. Two new algorithms optimizing the CP decomposition based on gradient descent are proposed. This decomposition is illustrated by an application to direct-sequence code division multiplexing access (DS-CDMA) systems; computer simulations are provided and demonstrate the good behavior of these algorithms, compared to others in the literature.
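    The gradient-based CP optimization described can be sketched in simplified form: plain gradient descent on the squared reconstruction error of a third-order CP model. The step size, shapes, and iteration count are illustrative, and the coherence constraint and performance index of the paper are omitted:

    ```python
    import numpy as np

    def cp_gradient_step(T, A, B, C, lr):
        """One gradient-descent step on f(A,B,C) = ||T - [[A, B, C]]||_F^2."""
        R = T - np.einsum('ir,jr,kr->ijk', A, B, C)   # residual tensor
        gA = -2.0 * np.einsum('ijk,jr,kr->ir', R, B, C)
        gB = -2.0 * np.einsum('ijk,ir,kr->jr', R, A, C)
        gC = -2.0 * np.einsum('ijk,ir,jr->kr', R, A, B)
        return A - lr * gA, B - lr * gB, C - lr * gC

    rng = np.random.default_rng(0)
    shape, rank = (5, 6, 7), 3
    # Synthetic rank-3 tensor and a small random initialization.
    T = np.einsum('ir,jr,kr->ijk', *(rng.standard_normal((n, rank)) for n in shape))
    A, B, C = (0.1 * rng.standard_normal((n, rank)) for n in shape)

    loss0 = np.linalg.norm(T - np.einsum('ir,jr,kr->ijk', A, B, C)) ** 2
    for _ in range(2000):
        A, B, C = cp_gradient_step(T, A, B, C, lr=2e-4)
    loss = np.linalg.norm(T - np.einsum('ir,jr,kr->ijk', A, B, C)) ** 2
    ```

    With a suitably small step size the loss decreases from its initial value; controlling the conditioning through the factor coherences, as the paper proposes, is what makes such descent reliable in practice.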

  17. Constitutive equations of a tensorial model for strain-induced damage of metals based on three invariants

    NASA Astrophysics Data System (ADS)

    Tutyshkin, Nikolai D.; Lofink, Paul; Müller, Wolfgang H.; Wille, Ralf; Stahn, Oliver

    2017-01-01

    On the basis of the physical concepts of void formation, nucleation, and growth, generalized constitutive equations are formulated for a tensorial model of plastic damage in metals based on three invariants. The multiplicative decomposition of the metric transformation tensor and a thermodynamically consistent formulation of constitutive relations lead to a symmetric second-order damage tensor with a clear physical meaning. Its first invariant determines the damage related to plastic dilatation of the material due to growth of the voids. The second invariant of the deviatoric damage tensor is related to the change in void shape. The third invariant of the deviatoric tensor describes the impact of the stress state on damage (Lode angle), including the effect of rotating the principal axes of the stress tensor (Lode angle change). The introduction of three measures with related physical meaning allows for the description of kinetic processes of strain-induced damage with an equivalent parameter in a three-dimensional vector space, including the critical condition of ductile failure. Calculations were performed by using experimentally determined material functions for plastic dilatation and deviatoric strain at the mesoscale, as well as three-dimensional graphs for plastic damage of steel DC01. The constitutive parameter was determined from tests in tension, compression, and shear by using scanning electron microscopy, which allowed us to vary the Lode angle over the full range of its values [InlineEquation not available: see fulltext.]. In order to construct the three-dimensional plastic damage curve for a range of triaxiality parameters -1 ≤ ST ≤ 1 and of Lode angles [InlineEquation not available: see fulltext.], we used our own, as well as systematized published, experimental data. A comparison of calculations shows a significant effect of the third invariant (Lode angle) on equivalent damage.
The measure of plastic damage, based on three invariants, can be useful for assessing the quality of the metal mesostructure produced during metal forming processes. In many metal sheet forming processes, the material experiences non-proportional loading accompanied by rotation of the principal axes of the stress tensor and a corresponding change of Lode angle.

  18. Kronecker-Basis-Representation Based Tensor Sparsity and Its Applications to Tensor Recovery.

    PubMed

    Xie, Qi; Zhao, Qian; Meng, Deyu; Xu, Zongben

    2017-08-02

    It is well known that the sparsity of a vector and the low-rankness of a matrix can be rationally measured by the number of nonzero entries (the l0 norm) and the number of nonzero singular values (the rank), respectively. However, data from real applications are often generated by the interaction of multiple factors, which obviously cannot be sufficiently represented by a vector/matrix, while a high order tensor is expected to provide a more faithful representation to deliver the intrinsic structure underlying such data ensembles. Unlike the vector/matrix case, constructing a rational high order sparsity measure for a tensor is a relatively harder task. To this aim, in this paper we propose a measure for tensor sparsity, called the Kronecker-basis-representation based tensor sparsity measure (KBR for short), which encodes both sparsity insights delivered by Tucker and CANDECOMP/PARAFAC (CP) low-rank decompositions for a general tensor. Then we study the KBR regularization minimization (KBRM) problem, and design an effective ADMM algorithm for solving it, in which each involved parameter can be updated with closed-form equations. Such an efficient solver makes it possible to extend KBR to various tasks like tensor completion and tensor robust principal component analysis. A series of experiments, including multispectral image (MSI) denoising, MSI completion and background subtraction, substantiate the superiority of the proposed methods over the state of the art.

  19. Relaxations to Sparse Optimization Problems and Applications

    NASA Astrophysics Data System (ADS)

    Skau, Erik West

    Parsimony is a fundamental property that is applied to many characteristics in a variety of fields. Of particular interest are optimization problems that apply rank, dimensionality, or support in a parsimonious manner. In this thesis we study some optimization problems and their relaxations, and focus on properties and qualities of the solutions of these problems. The Gramian tensor decomposition problem attempts to decompose a symmetric tensor as a sum of rank one tensors. We approach the Gramian tensor decomposition problem with a relaxation to a semidefinite program. We study conditions which ensure that the solution of the relaxed semidefinite problem gives the minimal Gramian rank decomposition. Sparse representations with learned dictionaries are one of the leading image modeling techniques for image restoration. When learning these dictionaries from a set of training images, the sparsity parameter of the dictionary learning algorithm strongly influences the content of the dictionary atoms. We describe geometrically the content of trained dictionaries and how it changes with the sparsity parameter. We use statistical analysis to characterize how the different content is used in sparse representations. Finally, a method to control the structure of the dictionaries is demonstrated, allowing us to learn a dictionary which can later be tailored for specific applications. Variations of dictionary learning can be broadly applied to a variety of applications. We explore a pansharpening problem with a triple factorization variant of coupled dictionary learning. Another application of dictionary learning is computer vision. Computer vision relies heavily on object detection, which we explore with a hierarchical convolutional dictionary learning model. Data fusion of disparate modalities is a growing topic of interest. We present a case study to demonstrate the benefit of using social media data with satellite imagery to estimate hazard extents. In this case study we apply a maximum entropy model, guided by the social media data, to estimate the flooded regions during a 2013 flood in Boulder, CO, and show that the results are comparable to those obtained using expert information.

  20. Matthew Reynolds | NREL

    Science.gov Websites

    Matthew's research at NREL is focused on applying uncertainty quantification techniques. Research interests: uncertainty quantification; computational multilinear algebra; approximation theory. Related publication: "Randomized Alternating ... and the Canonical Tensor Decomposition," Journal of Computational Physics (2017).

  1. The study of Thai stock market across the 2008 financial crisis

    NASA Astrophysics Data System (ADS)

    Kanjamapornkul, K.; Pinčák, Richard; Bartoš, Erik

    2016-11-01

    The cohomology theory for financial markets allows us to deform the Kolmogorov space of time series data over a time period, with an explicit definition of eight market states in a grand unified theory. The anti-de Sitter space induced from a coupling behavior field among traders in the case of a financial market crash acts like a gravitational field in financial market spacetime. Under this hybrid mathematical superstructure, we redefine a behavior matrix by using Pauli matrices and a modified Wilson loop for time series data. We use it to detect the 2008 financial market crash by using the degree of the cohomology group of the sphere over a tensor field in the correlation matrix over all possible dominated stocks underlying Thai SET50 Index Futures. The empirical analysis of the financial tensor network was performed with the help of empirical mode decomposition and intrinsic time scale decomposition of the correlation matrix, and the calculation of the closeness centrality of a planar graph.

  2. Tensor-driven extraction of developmental features from varying paediatric EEG datasets.

    PubMed

    Kinney-Lang, Eli; Spyrou, Loukianos; Ebied, Ahmed; Chin, Richard Fm; Escudero, Javier

    2018-05-21

    Constant changes in developing children's brains can pose a challenge in EEG-dependent technologies. Advancing signal processing methods to identify developmental differences in paediatric populations could help improve the function and usability of such technologies. Taking advantage of the multi-dimensional structure of EEG data through tensor analysis may offer a framework for extracting relevant developmental features of paediatric datasets. A proof of concept is demonstrated through identifying latent developmental features in resting-state EEG. Approach. Three paediatric datasets (n = 50, 17, 44) were analyzed using a two-step constrained parallel factor (PARAFAC) tensor decomposition. Subject age was used as a proxy measure of development. Classification used support vector machines (SVM) to test if PARAFAC-identified features could predict subject age. The results were cross-validated within each dataset. Classification analysis was complemented by visualization of the high-dimensional feature structures using t-distributed Stochastic Neighbour Embedding (t-SNE) maps. Main Results. Development-related features were successfully identified for the developmental conditions of each dataset. SVM classification showed the identified features could predict subject age at a level significantly above chance for both healthy and impaired populations. t-SNE maps revealed that suitable tensor factorization was key in extracting the developmental features. Significance. The described methods are a promising tool for identifying latent developmental features occurring throughout childhood EEG. © 2018 IOP Publishing Ltd.

  3. Kubo-Greenwood electrical conductivity formulation and implementation for projector augmented wave datasets

    NASA Astrophysics Data System (ADS)

    Calderín, L.; Karasiev, V. V.; Trickey, S. B.

    2017-12-01

    As the foundation for a new computational implementation, we survey the calculation of the complex electrical conductivity tensor based on the Kubo-Greenwood (KG) formalism (Kubo, 1957; Greenwood, 1958), with emphasis on derivations and technical aspects pertinent to use of projector augmented wave datasets with plane wave basis sets (Blöchl, 1994). New analytical results and a full implementation of the KG approach in an open-source Fortran 90 post-processing code for use with Quantum Espresso (Giannozzi et al., 2009) are presented. Named KGEC ([K]ubo [G]reenwood [E]lectronic [C]onductivity), the code calculates the full complex conductivity tensor (not just the average trace). It supports use of either the original KG formula or the popular one approximated in terms of a Dirac delta function. It provides both Gaussian and Lorentzian representations of the Dirac delta function (though the Lorentzian is preferable on basic grounds). KGEC provides decomposition of the conductivity into intra- and inter-band contributions as well as degenerate state contributions. It calculates the dc conductivity tensor directly. It is MPI parallelized over k-points, bands, and plane waves, with an option to recover the plane wave processes for their use in band parallelization as well. It is designed to provide rapid convergence with respect to k-point density. Examples of its use are given.
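    The two representations of the Dirac delta function mentioned (Gaussian and Lorentzian broadening) are easy to state concretely. A numpy sketch independent of the KGEC code, with illustrative widths:

    ```python
    import numpy as np

    def delta_gauss(x, sigma):
        """Gaussian representation of the Dirac delta with width sigma."""
        return np.exp(-x**2 / (2.0 * sigma**2)) / (sigma * np.sqrt(2.0 * np.pi))

    def delta_lorentz(x, gamma):
        """Lorentzian representation of the Dirac delta with half-width gamma."""
        return (gamma / np.pi) / (x**2 + gamma**2)

    # Both integrate to ~1 and approach the delta as the width tends to zero;
    # the Lorentzian's heavy tails make its normalization converge more slowly.
    x = np.linspace(-50.0, 50.0, 200001)
    dx = x[1] - x[0]
    Ig = delta_gauss(x, 0.1).sum() * dx
    Il = delta_lorentz(x, 0.1).sum() * dx
    ```

    The slowly decaying Lorentzian tails, visible in the normalization error above, are one reason a given broadening choice affects the computed conductivity.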

  4. Trends in biomedical informatics: automated topic analysis of JAMIA articles.

    PubMed

    Han, Dong; Wang, Shuang; Jiang, Chao; Jiang, Xiaoqian; Kim, Hyeon-Eui; Sun, Jimeng; Ohno-Machado, Lucila

    2015-11-01

    Biomedical Informatics is a growing interdisciplinary field in which research topics and citation trends have been evolving rapidly in recent years. To analyze these data in a fast, reproducible manner, automation of certain processes is needed. JAMIA is a "generalist" journal for biomedical informatics. Its articles reflect the wide range of topics in informatics. In this study, we retrieved Medical Subject Headings (MeSH) terms and citations of JAMIA articles published between 2009 and 2014. We used tensors (i.e., multidimensional arrays) to represent the interaction among topics, time, and citations, and applied tensor decomposition to automate the analysis. The trends represented by tensors were then carefully interpreted and the results were compared with previous findings based on manual topic analysis. A list of the most cited JAMIA articles, their topics, and publication trends over recent years is presented. The analyses confirmed previous studies and showed that, from 2012 to 2014, the number of articles related to the MeSH terms Methods, Organization & Administration, and Algorithms increased significantly in both number of publications and citations. Citation trends varied widely by topic, with Natural Language Processing having a large number of citations in particular years, and Medical Record Systems, Computerized remaining a very popular topic in all years. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  5. On the Grothendieck rings of equivariant fusion categories

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burciu, Sebastian, E-mail: sebastian.burciu@imar.ro

    2015-07-15

    In this paper, we describe a Mackey type decomposition for group actions on abelian categories. This allows us to define new Mackey functors which associate to any subgroup the K-theory of the corresponding equivariantized abelian category. In the case of an action by tensor autoequivalences, the Mackey functor at the level of Grothendieck rings has a Green functor structure. As an application we give a description of the Grothendieck rings of equivariantized fusion categories under group actions by tensor autoequivalences on graded fusion categories. In this setting, a new formula for the tensor product of any two simple objects of an equivariantized fusion category is given, simplifying the fusion formula from Burciu and Natale [J. Math. Phys. 54, 013511 (2013)].

  6. Natural chemical shielding analysis of nuclear magnetic resonance shielding tensors from gauge-including atomic orbital calculations

    NASA Astrophysics Data System (ADS)

    Bohmann, Jonathan A.; Weinhold, Frank; Farrar, Thomas C.

    1997-07-01

    Nuclear magnetic shielding tensors computed by the gauge including atomic orbital (GIAO) method in the Hartree-Fock self-consistent-field (HF-SCF) framework are partitioned into magnetic contributions from chemical bonds and lone pairs by means of natural chemical shielding (NCS) analysis, an extension of natural bond orbital (NBO) analysis. NCS analysis complements the description provided by alternative localized orbital methods by directly calculating chemical shieldings due to delocalized features in the electronic structure, such as bond conjugation and hyperconjugation. Examples of NCS tensor decomposition are reported for CH4, CO, and H2CO, for which a graphical mnemonic due to Cornwell is used to illustrate the effect of hyperconjugative delocalization on the carbon shielding.

  7. Rapid acquisition of data dense solid-state CPMG NMR spectral sets using multi-dimensional statistical analysis

    DOE PAGES

    Mason, H. E.; Uribe, E. C.; Shusterman, J. A.

    2018-01-01

    Tensor-rank decomposition methods have been applied to variable contact time 29Si{1H} CP/CPMG NMR data sets to extract NMR dynamics information and dramatically decrease conventional NMR acquisition times.

  8. Rapid acquisition of data dense solid-state CPMG NMR spectral sets using multi-dimensional statistical analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mason, H. E.; Uribe, E. C.; Shusterman, J. A.

    Tensor-rank decomposition methods have been applied to variable contact time 29Si{1H} CP/CPMG NMR data sets to extract NMR dynamics information and dramatically decrease conventional NMR acquisition times.

  9. On squares of representations of compact Lie algebras

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zeier, Robert, E-mail: robert.zeier@ch.tum.de; Zimborás, Zoltán, E-mail: zimboras@gmail.com

    We study how tensor products of representations decompose when restricted from a compact Lie algebra to one of its subalgebras. In particular, we are interested in tensor squares, which are tensor products of a representation with itself. We show in a classification-free manner that the sum of multiplicities and the sum of squares of multiplicities in the corresponding decomposition of a tensor square into irreducible representations have to strictly grow when restricted from a compact semisimple Lie algebra to a proper subalgebra. For this purpose, relevant details on tensor products of representations are compiled from the literature. Since the sum of squares of multiplicities is equal to the dimension of the commutant of the tensor-square representation, it can be determined by linear-algebra computations in a scenario where an a priori unknown Lie algebra is given by a set of generators which might not be a linear basis. Hence, our results offer a test to decide if a subalgebra of a compact semisimple Lie algebra is a proper one without calculating the relevant Lie closures, which can be naturally applied in the field of controlled quantum systems.

  10. A Tensor-Train accelerated solver for integral equations in complex geometries

    NASA Astrophysics Data System (ADS)

    Corona, Eduardo; Rahimian, Abtin; Zorin, Denis

    2017-04-01

    We present a framework using the Quantized Tensor Train (QTT) decomposition to accurately and efficiently solve volume and boundary integral equations in three dimensions. We describe how the QTT decomposition can be used as a hierarchical compression and inversion scheme for matrices arising from the discretization of integral equations. For a broad range of problems, computational and storage costs of the inversion scheme are extremely modest, O(log N), and once the inverse is computed, it can be applied in O(N log N). We analyze the QTT ranks for hierarchically low rank matrices and discuss its relationship to commonly used hierarchical compression techniques such as FMM and HSS. We prove that the QTT ranks are bounded for translation-invariant systems and argue that this behavior extends to non-translation-invariant volume and boundary integrals. For volume integrals, the QTT decomposition provides an efficient direct solver requiring significantly less memory compared to other fast direct solvers. We present results demonstrating the remarkable performance of the QTT-based solver when applied to both translation and non-translation invariant volume integrals in 3D. For boundary integral equations, we demonstrate that using a QTT decomposition to construct preconditioners for a Krylov subspace method leads to an efficient and robust solver with a small memory footprint. We test the QTT preconditioners in the iterative solution of an exterior elliptic boundary value problem (Laplace) formulated as a boundary integral equation in complex, multiply connected geometries.
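    The QTT machinery rests on the basic tensor-train decomposition, computable by a sequence of truncated SVDs of successive unfoldings (the TT-SVD algorithm). A minimal numpy sketch, not the authors' solver; shapes and tolerance are illustrative:

    ```python
    import numpy as np

    def tt_svd(T, eps=1e-10):
        """Decompose T into tensor-train cores G_k of shape (r_{k-1}, n_k, r_k)
        via sequential truncated SVDs of successive unfoldings."""
        shape = T.shape
        d = len(shape)
        cores, r = [], 1
        M = T.reshape(shape[0], -1)
        for k in range(d - 1):
            U, s, Vt = np.linalg.svd(M, full_matrices=False)
            rk = max(1, int(np.sum(s > eps * s[0])))  # truncated TT rank
            cores.append(U[:, :rk].reshape(r, shape[k], rk))
            M = (s[:rk, None] * Vt[:rk]).reshape(rk * shape[k + 1], -1)
            r = rk
        cores.append(M.reshape(r, shape[-1], 1))
        return cores

    def tt_reconstruct(cores):
        """Contract the train back into a full tensor."""
        out = cores[0]
        for G in cores[1:]:
            out = np.tensordot(out, G, axes=([-1], [0]))
        return out.squeeze(axis=(0, -1))

    rng = np.random.default_rng(0)
    # A tensor with low TT ranks (built from a rank-2 CP model).
    T = np.einsum('ir,jr,kr,lr->ijkl',
                  *(rng.standard_normal((n, 2)) for n in (4, 5, 6, 7)))
    cores = tt_svd(T)
    err = np.linalg.norm(tt_reconstruct(cores) - T) / np.linalg.norm(T)
    ```

    The quantized (QTT) variant applies the same procedure after reshaping each long mode into many length-2 modes, which is what yields the logarithmic storage the abstract describes.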

  11. Bayesian inference and interpretation of centroid moment tensors of the 2016 Kumamoto earthquake sequence, Kyushu, Japan

    NASA Astrophysics Data System (ADS)

    Hallo, Miroslav; Asano, Kimiyuki; Gallovič, František

    2017-09-01

    On April 16, 2016, Kumamoto prefecture in the Kyushu region, Japan, was devastated by a shallow M_JMA 7.3 earthquake. The series of foreshocks started with an M_JMA 6.5 foreshock 28 h before the mainshock. They originated in the Hinagu fault zone intersecting the mainshock Futagawa fault zone; hence, the tectonic background for this earthquake sequence is rather complex. Here we infer centroid moment tensors (CMTs) for 11 events with M_JMA between 4.8 and 6.5, using strong motion records of the K-NET, KiK-net and F-net networks. We use the upgraded Bayesian full-waveform inversion code ISOLA-ObsPy, which takes into account uncertainty of the velocity model. Such an approach allows us to reliably assess uncertainty of the CMT parameters including the centroid position. The solutions show significant systematic spatial and temporal variations throughout the sequence. Foreshocks are right-lateral steeply dipping strike-slip events connected to the NE-SW shear zone. Those located close to the intersection of the Hinagu and Futagawa fault zones dip slightly to ESE, while those in the southern area dip to WNW. Contrarily, aftershocks are mostly normal dip-slip events, being related to the N-S extensional tectonic regime. Most of the deviatoric moment tensors contain only a minor CLVD component, which can be attributed to the velocity model uncertainty. Nevertheless, two of the CMTs involve a significant CLVD component, which may reflect a complex rupture process. Decomposition of those moment tensors into two pure shear moment tensors suggests combined right-lateral strike-slip and normal dip-slip mechanisms, consistent with the tectonic settings of the intersection of the Hinagu and Futagawa fault zones.

  12. Multidimensional Compressed Sensing MRI Using Tensor Decomposition-Based Sparsifying Transform

    PubMed Central

    Yu, Yeyang; Jin, Jin; Liu, Feng; Crozier, Stuart

    2014-01-01

    Compressed Sensing (CS) has been applied in dynamic Magnetic Resonance Imaging (MRI) to accelerate data acquisition without noticeably degrading the spatial-temporal resolution. A suitable sparsity basis is one of the key components of successful CS applications. Conventionally, a multidimensional dataset in dynamic MRI is treated as a series of two-dimensional matrices, and various matrix/vector transforms are then used to explore the image sparsity. Traditional methods typically sparsify the spatial and temporal information independently. In this work, we propose a novel concept of tensor sparsity for the application of CS in dynamic MRI, and present the Higher-order Singular Value Decomposition (HOSVD) as a practical example. Applications to three- and four-dimensional MRI data demonstrate that HOSVD simultaneously exploits the correlations within the spatial and temporal dimensions. Validations based on cardiac datasets indicate that the proposed method achieves reconstruction accuracy comparable to low-rank matrix recovery methods and outperforms conventional sparse recovery methods. PMID:24901331
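    A minimal numpy sketch of HOSVD used as a sparsifying transform: factor matrices come from SVDs of the mode unfoldings, and for correlated (here, exactly low-rank) data the resulting core tensor is highly compressible. The toy data stands in for an MRI series and is not the paper's pipeline.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding of a tensor into a matrix."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T):
    """HOSVD: factor matrices from each unfolding's SVD, plus the core tensor."""
    U = [np.linalg.svd(unfold(T, n), full_matrices=False)[0] for n in range(T.ndim)]
    S = T.copy()
    for n, Un in enumerate(U):
        # Mode-n product with Un^T: S <- S x_n Un^T
        S = np.moveaxis(np.tensordot(Un.T, np.moveaxis(S, n, 0), axes=1), 0, n)
    return S, U

# Toy 3D "dynamic image": a low-rank spatio-temporal pattern.
rng = np.random.default_rng(0)
X = np.einsum('i,j,k->ijk', rng.standard_normal(16),
              rng.standard_normal(16), rng.standard_normal(8))
S, U = hosvd(X)

# The core is sparse/compressible: energy concentrates in very few entries.
energy = np.sort(np.abs(S).ravel())[::-1]
```

Because the factor matrices are orthogonal, the transform preserves the Frobenius norm while concentrating the signal energy, which is what a CS reconstruction exploits.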

  13. Incompressible inelasticity as an essential ingredient for the validity of the kinematic decomposition F = FeFi

    NASA Astrophysics Data System (ADS)

    Reina, Celia; Conti, Sergio

    2017-10-01

    The multiplicative decomposition of the total deformation F = FeFi into an elastic (Fe) and an inelastic component (Fi) is standard in the modeling of many irreversible processes such as plasticity, growth, thermoelasticity, viscoelasticity or phase transformations. The heuristic argument for this kinematic assumption is based on the chain rule for the compatible scenario (Curl Fi = 0), where the individual deformation tensors are gradients of deformation mappings, i.e. F = Dφ = D(φe ∘ φi) = ((Dφe) ∘ φi)(Dφi) = FeFi. Yet, the conditions for its validity in the general incompatible case (Curl Fi ≠ 0) have so far remained uncertain. We show in this paper that det Fi = 1 and Curl Fi bounded are necessary and sufficient conditions for the validity of F = FeFi for a wide range of inelastic processes. In particular, in the context of crystal plasticity, we demonstrate via rigorous homogenization from discrete dislocations to the continuum level in two dimensions that the volume-preserving property of the mechanics of dislocation glide, combined with a finite dislocation density, is sufficient to deliver F = FeFp at the continuum scale. We then generalize this result to general two-dimensional inelastic processes that may be described at a lower-dimensional scale via a multiplicative decomposition while exhibiting a finite density of incompatibilities. The necessity of the conditions det Fi = 1 and Curl Fi bounded for such systems is demonstrated via suitable counterexamples.

  14. Solving a mixture of many random linear equations by tensor decomposition and alternating minimization.

    DOT National Transportation Integrated Search

    2016-09-01

    We consider the problem of solving mixed random linear equations with k components. This is the noiseless setting of mixed linear regression. The goal is to estimate multiple linear models from mixed samples in the case where the labels (which sample...

  15. Tensor Decompositions for Learning Latent Variable Models

    DTIC Science & Technology

    2012-12-08

    ... and eigenvectors of tensors is generally significantly more complicated than their matrix counterpart (both algebraically [Qi05, CS11, Lim05] and ... The reduction: First, let W ∈ R^(d×k) be a linear transformation such that M2(W, W) = W^T M2 W = I, where I is the k × k identity matrix (i.e., W whitens ... approximate the whitening matrix W ∈ R^(d×k) from the second-moment matrix M2 ∈ R^(d×d). To do this, one first multiplies M2 by a random matrix R ∈ R^(d×k′) for some k′ ≥ k ...
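    The whitening step quoted in the excerpt above can be sketched directly: given a rank-k second-moment matrix M2, a whitening matrix W with W^T M2 W = I is obtained from the top-k eigenpairs. The synthetic M2 below is an assumption for illustration, not data from the report.

```python
import numpy as np

rng = np.random.default_rng(1)
d, k = 10, 3

# Hypothetical rank-k second-moment matrix M2 = A diag(w) A^T.
A = rng.standard_normal((d, k))
w = np.array([2.0, 1.0, 0.5])
M2 = A @ np.diag(w) @ A.T

# Whitening: top-k eigenpairs, then W = U_k diag(s_k^{-1/2}),
# so that W^T M2 W = I_k.
s, U = np.linalg.eigh(M2)            # eigenvalues in ascending order
s_k, U_k = s[-k:], U[:, -k:]         # keep the k largest
W = U_k / np.sqrt(s_k)

I_k = W.T @ M2 @ W
```

After whitening, the third-order moment tensor can be contracted with W in each mode, reducing the latent-variable estimation problem to an orthogonal tensor decomposition.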

  16. Finite-width Laplacian sum rules for 2++ tensor glueball in the instanton vacuum model

    NASA Astrophysics Data System (ADS)

    Chen, Junlong; Liu, Jueping

    2017-01-01

    The more carefully defined and more appropriate 2++ tensor glueball current is an SU_c(3) gauge-invariant, symmetric, traceless, and conserved Lorentz-irreducible tensor. After Lorentz decomposition, the invariant amplitude of the correlation function is abstracted and calculated based on the semiclassical expansion for quantum chromodynamics (QCD) in the instanton liquid background. In addition to taking the perturbative contribution into account, we calculate the contribution arising from the interaction (or the interference) between instantons and the quantum gluon fields, which is infrared free. Instead of the usual zero-width approximation for the resonances, the Breit-Wigner form with a correct threshold behavior for the spectral function of the three finite-width resonances is adopted. The properties of the 2++ tensor glueball are investigated via a family of QCD Laplacian sum rules for the invariant amplitude. The values of the mass, decay width, and coupling constants for the 2++ resonance in which the glueball fraction is dominant are obtained.

  17. Low-Rank Tensor Subspace Learning for RGB-D Action Recognition.

    PubMed

    Jia, Chengcheng; Fu, Yun

    2016-07-09

    Since RGB-D action data inherently come with extra depth information compared with RGB data, many recent works employ RGB-D data in a third-order tensor representation containing spatio-temporal structure to find a subspace for action recognition. However, there are two main challenges with these methods. First, the dimension of the subspace is usually fixed manually. Second, preserving local information by finding intra-class and inter-class neighbors from a manifold is highly time-consuming. In this paper, we learn a tensor subspace, whose dimension is learned automatically by low-rank learning, for RGB-D action recognition. In particular, the tensor samples are factorized by Tucker Decomposition to obtain three Projection Matrices (PMs); the nuclear norm is applied to all the PMs in closed form to obtain the tensor ranks, which are used as the tensor subspace dimensions. Additionally, we extract the discriminant and local information from a manifold using a graph constraint. This graph preserves the local knowledge inherently, which is faster than previous approaches that calculate both the intra-class and inter-class neighbors of each sample. We evaluate the proposed method on four widely used RGB-D action datasets, including the MSRDailyActivity3D, MSRActionPairs, MSRActionPairs skeleton and UTKinect-Action3D datasets, and the experimental results show higher accuracy and efficiency of the proposed method.
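    A simplified stand-in for the automatic subspace-dimension selection described above: instead of the paper's nuclear-norm learning, the sketch below picks each mode's rank from the singular-value energy of the mode-n unfolding. The energy threshold and toy tensor are arbitrary assumptions.

```python
import numpy as np

def mode_ranks(T, energy=0.99):
    """Per mode, the smallest rank capturing `energy` of the unfolding spectrum."""
    ranks = []
    for n in range(T.ndim):
        M = np.moveaxis(T, n, 0).reshape(T.shape[n], -1)   # mode-n unfolding
        s = np.linalg.svd(M, compute_uv=False)
        c = np.cumsum(s**2) / np.sum(s**2)                 # cumulative energy
        ranks.append(int(np.searchsorted(c, energy) + 1))
    return ranks

# Toy action tensor (height x width x frames) built from two rank-1 terms,
# so every mode rank is at most 2.
rng = np.random.default_rng(2)
X = (np.einsum('i,j,k->ijk', rng.standard_normal(12),
               rng.standard_normal(12), rng.standard_normal(6))
     + np.einsum('i,j,k->ijk', rng.standard_normal(12),
                 rng.standard_normal(12), rng.standard_normal(6)))
ranks = mode_ranks(X)
```

The selected ranks would then size the projection matrices of a Tucker-style subspace.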

  18. Tensor numerical methods in quantum chemistry: from Hartree-Fock to excitation energies.

    PubMed

    Khoromskaia, Venera; Khoromskij, Boris N

    2015-12-21

    We review the recent successes of the grid-based tensor numerical methods and discuss their prospects in real-space electronic structure calculations. These methods, based on the low-rank representation of multidimensional functions and integral operators, first appeared as an accurate tensor calculus for the 3D Hartree potential using 1D-complexity operations, and have evolved into an entirely grid-based tensor-structured 3D Hartree-Fock eigenvalue solver. It benefits from tensor calculation of the core Hamiltonian and two-electron integrals (TEI) in O(n log n) complexity using the rank-structured approximation of basis functions, electron densities and convolution integral operators, all represented on 3D n × n × n Cartesian grids. The algorithm for calculating the TEI tensor in the form of a Cholesky decomposition is based on multiple factorizations using an algebraic 1D "density fitting" scheme, which yields an almost irreducible number of product basis functions involved in the 3D convolution integrals, depending on a threshold ε > 0. The basis functions are not restricted to separable Gaussians, since the analytical integration is substituted by high-precision tensor-structured numerical quadratures. The tensor approaches to post-Hartree-Fock calculations for the MP2 energy correction and for the Bethe-Salpeter excitation energies, based on low-rank factorizations and the reduced basis method, were recently introduced. Another direction is towards a tensor-based Hartree-Fock numerical scheme for finite lattices, where one of the numerical challenges is the summation of electrostatic potentials of a large number of nuclei. The 3D grid-based tensor method for calculating a potential sum on an L × L × L lattice manifests computational work linear in L, i.e. O(L), instead of the usual O(L^3 log L) scaling of Ewald-type approaches.
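    The 1D-complexity claim rests on separability: a rank-1 (separable) function on an n × n × n grid can be stored and integrated with three 1D arrays instead of one n^3 array. A minimal sketch with a separable Gaussian standing in for a rank-1 basis function:

```python
import numpy as np

n = 64
x = np.linspace(-5.0, 5.0, n)
hx = x[1] - x[0]
g = np.exp(-x**2)                    # 1D factor of a separable Gaussian

# Full 3D grid sum: O(n^3) storage and work.
G3 = g[:, None, None] * g[None, :, None] * g[None, None, :]
full = G3.sum() * hx**3

# Rank-1 (tensor-structured) evaluation: three O(n) sums.
sep = (g.sum() * hx) ** 3
```

Both evaluate the same integral (analytically pi^(3/2) for the Gaussian), but the separable route never forms the n^3 array; higher-rank functions are handled as short sums of such terms.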

  19. Probabilistic low-rank factorization accelerates tensor network simulations of critical quantum many-body ground states.

    PubMed

    Kohn, Lucas; Tschirsich, Ferdinand; Keck, Maximilian; Plenio, Martin B; Tamascelli, Dario; Montangero, Simone

    2018-01-01

    We provide evidence that randomized low-rank factorization is a powerful tool for the determination of the ground-state properties of low-dimensional lattice Hamiltonians through tensor network techniques. In particular, we show that randomized matrix factorization outperforms truncated singular value decomposition based on state-of-the-art deterministic routines in time-evolving block decimation (TEBD)- and density matrix renormalization group (DMRG)-style simulations, even when the system under study gets close to a phase transition: We report linear speedups in the bond or local dimension of up to 24 times in quasi-two-dimensional cylindrical systems.
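    The randomized factorization idea can be sketched in a few lines of numpy, following the well-known sketch-then-solve recipe (a randomized range finder plus a small exact SVD). This is a generic illustration, not the authors' TEBD/DMRG implementation.

```python
import numpy as np

def randomized_svd(A, k, oversample=10):
    """Randomized truncated SVD: sketch the range of A, then SVD a small matrix."""
    rng = np.random.default_rng(0)
    Omega = rng.standard_normal((A.shape[1], k + oversample))
    Q, _ = np.linalg.qr(A @ Omega)                 # orthonormal approximate range
    U_small, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return (Q @ U_small)[:, :k], s[:k], Vt[:k]

# Exactly rank-12 test matrix, as a stand-in for a truncated tensor-network bond.
rng = np.random.default_rng(3)
A = rng.standard_normal((200, 12)) @ rng.standard_normal((12, 200))
U, s, Vt = randomized_svd(A, 12)
err = np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A)
```

The dominant cost is the two tall-skinny matrix products, which is why the randomized route outpaces a full deterministic SVD when only a few singular triplets are kept.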

  20. Probabilistic low-rank factorization accelerates tensor network simulations of critical quantum many-body ground states

    NASA Astrophysics Data System (ADS)

    Kohn, Lucas; Tschirsich, Ferdinand; Keck, Maximilian; Plenio, Martin B.; Tamascelli, Dario; Montangero, Simone

    2018-01-01

    We provide evidence that randomized low-rank factorization is a powerful tool for the determination of the ground-state properties of low-dimensional lattice Hamiltonians through tensor network techniques. In particular, we show that randomized matrix factorization outperforms truncated singular value decomposition based on state-of-the-art deterministic routines in time-evolving block decimation (TEBD)- and density matrix renormalization group (DMRG)-style simulations, even when the system under study gets close to a phase transition: We report linear speedups in the bond or local dimension of up to 24 times in quasi-two-dimensional cylindrical systems.

  1. Using geometric algebra to represent curvature in shell theory with applications to Starling resistors

    PubMed Central

    Agarwal, A.; Lasenby, J.

    2017-01-01

    We present a novel application of rotors in geometric algebra to represent the change of curvature tensor that is used in shell theory as part of the constitutive law. We introduce a new decomposition of the change of curvature tensor, which has explicit terms for changes of curvature due to initial curvature combined with strain, and changes in rotation over the surface. We use this decomposition to perform a scaling analysis of the relative importance of bending and stretching in flexible tubes undergoing self-excited oscillations. These oscillations have relevance to the lung, in which it is believed that they are responsible for wheezing. The new analysis is necessitated by the fact that the working fluid is air, compared to water in most previous work. We use stereographic imaging to empirically measure the relative importance of bending and stretching energy in observed self-excited oscillations. This enables us to validate our scaling analysis. We show that bending energy is dominated by stretching energy, and the scaling analysis makes clear that this will remain true for tubes in the airways of the lung. PMID:29291106

  2. Moving mesh finite element simulation for phase-field modeling of brittle fracture and convergence of Newton's iteration

    NASA Astrophysics Data System (ADS)

    Zhang, Fei; Huang, Weizhang; Li, Xianping; Zhang, Shicheng

    2018-03-01

    A moving mesh finite element method is studied for the numerical solution of a phase-field model for brittle fracture. The moving mesh partial differential equation approach is employed to dynamically track crack propagation. Meanwhile, the decomposition of the strain tensor into tensile and compressive components is essential for the success of the phase-field modeling of brittle fracture but results in a non-smooth elastic energy and stronger nonlinearity in the governing equation. This makes the governing equation much more difficult to solve and, in particular, Newton's iteration often fails to converge. Three regularization methods are proposed to smooth out the decomposition of the strain tensor. Numerical examples of fracture propagation under quasi-static load demonstrate that all of the methods can effectively improve the convergence of Newton's iteration for relatively small values of the regularization parameter but without compromising the accuracy of the numerical solution. They also show that the moving mesh finite element method is able to adaptively concentrate the mesh elements around propagating cracks and handle multiple and complex crack systems.
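    A common way to realize the tensile/compressive split mentioned above is the spectral decomposition eps± = sum_i <lambda_i>± n_i ⊗ n_i, used in several phase-field formulations. The sketch below is that generic split, without the paper's regularization; the strain values are invented.

```python
import numpy as np

def split_strain(eps):
    """Spectral split of a symmetric strain tensor into tensile/compressive parts."""
    lam, N = np.linalg.eigh(eps)
    pos = sum(max(l, 0.0) * np.outer(n, n) for l, n in zip(lam, N.T))
    neg = sum(min(l, 0.0) * np.outer(n, n) for l, n in zip(lam, N.T))
    return pos, neg

# Hypothetical small-strain tensor (symmetric).
eps = np.array([[ 0.02,  0.01, 0.0  ],
                [ 0.01, -0.03, 0.0  ],
                [ 0.0,   0.0,  0.005]])
eps_p, eps_n = split_strain(eps)
```

The max/min on the eigenvalues is exactly the non-smoothness the paper's regularization methods are designed to smooth out.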

  3. Using geometric algebra to represent curvature in shell theory with applications to Starling resistors.

    PubMed

    Gregory, A L; Agarwal, A; Lasenby, J

    2017-11-01

    We present a novel application of rotors in geometric algebra to represent the change of curvature tensor that is used in shell theory as part of the constitutive law. We introduce a new decomposition of the change of curvature tensor, which has explicit terms for changes of curvature due to initial curvature combined with strain, and changes in rotation over the surface. We use this decomposition to perform a scaling analysis of the relative importance of bending and stretching in flexible tubes undergoing self-excited oscillations. These oscillations have relevance to the lung, in which it is believed that they are responsible for wheezing. The new analysis is necessitated by the fact that the working fluid is air, compared to water in most previous work. We use stereographic imaging to empirically measure the relative importance of bending and stretching energy in observed self-excited oscillations. This enables us to validate our scaling analysis. We show that bending energy is dominated by stretching energy, and the scaling analysis makes clear that this will remain true for tubes in the airways of the lung.

  4. Complete set of invariants of a 4th order tensor: the 12 tasks of HARDI from ternary quartics.

    PubMed

    Papadopoulo, Théo; Ghosh, Aurobrata; Deriche, Rachid

    2014-01-01

    Invariants play a crucial role in Diffusion MRI. In DTI (2nd order tensors), invariant scalars (FA, MD) have been successfully used in clinical applications. But DTI has limitations and HARDI models (e.g. 4th order tensors) have been proposed instead. These, however, lack invariant features and computing them systematically is challenging. We present a simple and systematic method to compute a functionally complete set of invariants of a non-negative 3D 4th order tensor with respect to SO3. Intuitively, this transforms the tensor's non-unique ternary quartic (TQ) decomposition (from Hilbert's theorem) to a unique canonical representation independent of orientation - the invariants. The method consists of two steps. In the first, we reduce the 18 degrees-of-freedom (DOF) of a TQ representation by 3-DOFs via an orthogonal transformation. This transformation is designed to enhance a rotation-invariant property of choice of the 3D 4th order tensor. In the second, we further reduce 3-DOFs via a 3D rotation transformation of coordinates to arrive at a canonical set of invariants to SO3 of the tensor. The resulting invariants are, by construction, (i) functionally complete, (ii) functionally irreducible (if desired), (iii) computationally efficient and (iv) reversible (mappable to the TQ coefficients or shape); which is the novelty of our contribution in comparison to prior work. Results from synthetic and real data experiments validate the method and indicate its importance.

  5. Extended vector-tensor theories

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kimura, Rampei; Naruko, Atsushi; Yoshida, Daisuke, E-mail: rampei@th.phys.titech.ac.jp, E-mail: naruko@th.phys.titech.ac.jp, E-mail: yoshida@th.phys.titech.ac.jp

    Recently, several extensions of massive vector theory in curved space-time have been proposed in the literature. In this paper, we consider the most general vector-tensor theories that contain up to two derivatives with respect to the metric and vector field. By imposing a degeneracy condition on the Lagrangian in the context of the ADM decomposition of space-time to eliminate an unwanted mode, we construct a new class of massive vector theories where five degrees of freedom can propagate, corresponding to three massive vector modes and two massless tensor modes. We find that the generalized Proca and the beyond-generalized-Proca theories up to the quartic Lagrangian, which should be included in this formulation, are degenerate theories even in curved space-time. Finally, introducing new metric and vector field transformations, we investigate the properties of the resulting theories under such transformations.

  6. Holistic approach for automated background EEG assessment in asphyxiated full-term infants

    NASA Astrophysics Data System (ADS)

    Matic, Vladimir; Cherian, Perumpillichira J.; Koolen, Ninah; Naulaers, Gunnar; Swarte, Renate M.; Govaert, Paul; Van Huffel, Sabine; De Vos, Maarten

    2014-12-01

    Objective. To develop an automated algorithm to quantify background EEG abnormalities in full-term neonates with hypoxic ischemic encephalopathy. Approach. The algorithm classifies 1 h of continuous neonatal EEG (cEEG) into a mild, moderate or severe background abnormality grade. These classes are well established in the literature and a clinical neurophysiologist labeled 272 1 h cEEG epochs selected from 34 neonates. The algorithm is based on adaptive EEG segmentation and mapping of the segments into the so-called segments’ feature space. Three features are suggested and further processing is obtained using a discretized three-dimensional distribution of the segments’ features represented as a 3-way data tensor. Further classification has been achieved using recently developed tensor decomposition/classification methods that reduce the size of the model and extract a significant and discriminative set of features. Main results. Effective parameterization of cEEG data has been achieved resulting in high classification accuracy (89%) to grade background EEG abnormalities. Significance. For the first time, the algorithm for the background EEG assessment has been validated on an extensive dataset which contained major artifacts and epileptic seizures. The demonstrated high robustness, while processing real-case EEGs, suggests that the algorithm can be used as an assistive tool to monitor the severity of hypoxic insults in newborns.
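    The discretized three-dimensional feature distribution described above is, in effect, a 3-way histogram over segment features. A minimal sketch with invented segment features (the feature values and bin counts are assumptions, not the paper's choices):

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical features for one 1 h epoch: (n_segments, 3 features per segment).
features = rng.standard_normal((500, 3))

# Discretize the 3D feature distribution into an 8 x 8 x 8 data tensor.
H, edges = np.histogramdd(features, bins=(8, 8, 8))
```

Stacking one such tensor per epoch yields the multi-way array to which tensor decomposition/classification methods can then be applied.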

  7. Direct Solution of the Chemical Master Equation Using Quantized Tensor Trains

    PubMed Central

    Kazeev, Vladimir; Khammash, Mustafa; Nip, Michael; Schwab, Christoph

    2014-01-01

    The Chemical Master Equation (CME) is a cornerstone of stochastic analysis and simulation of models of biochemical reaction networks. Yet direct solutions of the CME have remained elusive. Although several approaches overcome the infinite-dimensional nature of the CME through projections or other means, a common feature of proposed approaches is their susceptibility to the curse of dimensionality, i.e. the exponential growth in memory and computational requirements in the number of problem dimensions. We present a novel approach that has the potential to "lift" this curse of dimensionality. The approach is based on the use of the recently proposed Quantized Tensor Train (QTT) formatted numerical linear algebra for the low-parametric numerical representation of tensors. The QTT decomposition admits both algorithms for basic tensor arithmetic with complexity scaling linearly in the dimension (number of species) and sub-linearly in the mode size (maximum copy number), and a numerical tensor rounding procedure which is stable and quasi-optimal. We show how the CME can be represented in QTT format, then use the exponentially converging hp-discontinuous Galerkin discretization in time to reduce the CME evolution problem to a set of QTT-structured linear equations to be solved at each time step using an algorithm based on Density Matrix Renormalization Group (DMRG) methods from quantum chemistry. Our method automatically adapts the "basis" of the solution at every time step, guaranteeing that it is large enough to capture the dynamics of interest but no larger than necessary, as this would increase the computational complexity. Our approach is demonstrated by applying it to three different examples from systems biology: an independent birth-death process, an example of an enzymatic futile cycle, and a stochastic switch model. The numerical results on these examples demonstrate that the proposed QTT method achieves dramatic speedups and several orders of magnitude storage savings over direct approaches. PMID:24626049
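    The quantization idea can be sketched with a plain tensor-train (TT) decomposition of a "quantized" vector: 2^10 samples of a smooth function are reshaped into a 2 × 2 × ... × 2 tensor and factorized by sequential SVDs. For e^x the TT ranks collapse to 1, so 1024 values compress to 20 parameters. This is a generic TT-SVD sketch, not the authors' CME solver.

```python
import numpy as np

def tt_decompose(v, dims, tol=1e-12):
    """Sequential-SVD tensor-train decomposition of a vector reshaped to `dims`."""
    cores, r = [], 1
    C = np.asarray(v).reshape(1, -1)
    for d in dims[:-1]:
        C = C.reshape(r * d, -1)
        U, s, Vt = np.linalg.svd(C, full_matrices=False)
        keep = max(1, int(np.sum(s > tol * s[0])))     # truncate tiny singular values
        cores.append(U[:, :keep].reshape(r, d, keep))
        C = s[:keep, None] * Vt[:keep]
        r = keep
    cores.append(C.reshape(r, dims[-1], 1))
    return cores

# Quantized vector: 2^10 samples of e^x on [0, 1], reshaped to ten binary modes.
x = np.linspace(0.0, 1.0, 2**10)
v = np.exp(x)                         # separable across bits: all TT ranks are 1
cores = tt_decompose(v, [2] * 10)
ranks = [c.shape[2] for c in cores[:-1]]
n_params = sum(c.size for c in cores)
```

The exponential is exactly separable across the binary digits of the index, which is why the ranks (and hence storage) stay trivially small; smooth non-separable functions typically give small but rank-greater-than-one cores.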

  8. Tensor models, Kronecker coefficients and permutation centralizer algebras

    NASA Astrophysics Data System (ADS)

    Geloun, Joseph Ben; Ramgoolam, Sanjaye

    2017-11-01

    We show that the counting of observables and correlators for a 3-index tensor model are organized by the structure of a family of permutation centralizer algebras. These algebras are shown to be semi-simple and their Wedderburn-Artin decompositions into matrix blocks are given in terms of Clebsch-Gordan coefficients of symmetric groups. The matrix basis for the algebras also gives an orthogonal basis for the tensor observables which diagonalizes the Gaussian two-point functions. The centres of the algebras are associated with correlators which are expressible in terms of Kronecker coefficients (Clebsch-Gordan multiplicities of symmetric groups). The color-exchange symmetry present in the Gaussian model, as well as a large class of interacting models, is used to refine the description of the permutation centralizer algebras. This discussion is extended to a general number of colors d: it is used to prove the integrality of an infinite family of number sequences related to color-symmetrizations of colored graphs, and expressible in terms of symmetric group representation theory data. Generalizing a connection between matrix models and Belyi maps, correlators in Gaussian tensor models are interpreted in terms of covers of singular 2-complexes. There is an intriguing difference, between matrix and higher rank tensor models, in the computational complexity of superficially comparable correlators of observables parametrized by Young diagrams.

  9. Three-dimensional modelling and geothermal process simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burns, K.L.

    1990-01-01

    The subsurface geological model, or 3-D GIS, is constructed from three kinds of objects: a lithotope (in boundary representation), a number of fault systems, and volumetric textures (vector fields). The chief task of the model is to yield an estimate of the conductance tensors (fluid permeability and thermal conductivity) throughout an array of voxels. This is input as material properties to a FEHM numerical physical process model. The main task of the FEHM process model is to distinguish regions of convective from regions of conductive heat flow, and to estimate the fluid phase, pressure and flow paths. The temperature, geochemical, and seismic data provide the physical constraints on the process. The conductance tensors in the Franciscan Complex are to be derived by the addition of two components. The isotropic component is a stochastic spatial variable due to disruption of lithologies in melange. The deviatoric component is deterministic, due to smoothness and continuity in the textural vector fields. This decomposition probably also applies to the engineering hydrogeological properties of shallow terrestrial fluvial systems. However, there are differences in quantity. The isotropic component is much more variable in the Franciscan, to the point where volumetric averages are misleading, and it may be necessary to select that component from several discrete possible states. The deviatoric component is interpolated using a textural vector field. The Franciscan field is much more complicated, and contains internal singularities. 27 refs., 10 figs.
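    The isotropic/deviatoric split referred to above is elementary to compute: the isotropic part is (tr K / 3) I and the deviatoric part is the traceless remainder. The tensor values below are invented for illustration.

```python
import numpy as np

# Hypothetical symmetric conductance tensor (units arbitrary).
K = np.array([[3.0, 0.5, 0.0],
              [0.5, 2.0, 0.2],
              [0.0, 0.2, 1.0]])

iso = np.trace(K) / 3.0 * np.eye(3)   # isotropic component
dev = K - iso                         # deviatoric (traceless) component
```

In the paper's setting the isotropic part would be drawn stochastically per voxel, while the deviatoric part is interpolated from the textural vector field.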

  10. New non-naturally reductive Einstein metrics on exceptional simple Lie groups

    NASA Astrophysics Data System (ADS)

    Chen, Huibin; Chen, Zhiqi; Deng, Shaoqiang

    2018-01-01

    In this article, we construct several non-naturally reductive Einstein metrics on exceptional simple Lie groups, which are found through the decomposition arising from generalized Wallach spaces. Using the decomposition corresponding to the two involutions, we calculate the non-zero coefficients in the formulas of the components of Ricci tensor with respect to the given metrics. The Einstein metrics are obtained as solutions of a system of polynomial equations, which we manipulate by symbolic computations using Gröbner bases. In particular, we discuss the concrete numbers of non-naturally reductive Einstein metrics for each case up to isometry and homothety.

  11. Modal decomposition of turbulent supersonic cavity

    NASA Astrophysics Data System (ADS)

    Soni, R. K.; Arya, N.; De, A.

    2018-06-01

    Self-sustained oscillations in a Mach 3 supersonic cavity with a length-to-depth ratio of three are investigated using a wall-modeled large eddy simulation methodology for Re_D = 3.39 × 10^5. The unsteady data obtained through computation are utilized to investigate the spatial and temporal evolution of the flow field, especially the second invariant of the velocity gradient tensor, while the phase-averaged data are analyzed over a feedback cycle to study the spatial structures. This analysis is accompanied by proper orthogonal decomposition (POD) of the data, which reveals the presence of discrete vortices along the shear layer. The POD analysis is performed in both the spanwise and streamwise planes to extract the coherent flow structures. Finally, dynamic mode decomposition is performed on the data sequence to obtain the dynamic information and deeper insight into the self-sustaining mechanism.
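    POD of snapshot data reduces to an SVD of the snapshot matrix (space × time): the left singular vectors are the spatial modes and the squared singular values rank their energy content. A sketch on synthetic two-mode data (the flow field below is invented, not the cavity simulation):

```python
import numpy as np

nx, nt = 400, 60
xs = np.linspace(0.0, 2.0 * np.pi, nx)
t = np.linspace(0.0, 1.0, nt)

# Synthetic snapshot matrix: two coherent spatial modes oscillating in time.
X = (np.outer(np.sin(xs), np.cos(2 * np.pi * 5 * t))
     + 0.3 * np.outer(np.sin(2 * xs), np.sin(2 * np.pi * 9 * t)))

# POD via economy SVD: columns of U are the POD modes,
# s**2 gives each mode's energy content.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
energy_frac = s**2 / np.sum(s**2)
```

For real cavity data the leading modes typically capture the shear-layer vortices, with the energy spectrum guiding how many modes to retain.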

  12. Lagrangian theory of structure formation in relativistic cosmology. IV. Lagrangian approach to gravitational waves

    NASA Astrophysics Data System (ADS)

    Al Roumi, Fosca; Buchert, Thomas; Wiegand, Alexander

    2017-12-01

    The relativistic generalization of the Newtonian Lagrangian perturbation theory is investigated. In previous works, the perturbation and solution schemes that are generated by the spatially projected gravitoelectric part of the Weyl tensor were given to any order of the perturbations, together with extensions and applications for accessing the nonperturbative regime. Here we discuss in more detail the general first-order scheme within the Cartan formalism, including and concentrating on gravitational wave propagation in matter. We provide master equations for all parts of the Lagrangian-linearized perturbations propagating in the perturbed spacetime, and we outline the solution procedure that allows one to find general solutions. Particular emphasis is given to global properties of the Lagrangian perturbation fields by employing results of Hodge-de Rham theory. We discuss how the Hodge decomposition relates to the standard scalar-vector-tensor decomposition. Finally, we demonstrate that we obtain the known linear perturbation solutions of the standard relativistic perturbation scheme by performing two steps: first, by restricting our solutions to perturbations that propagate on a flat unperturbed background spacetime and, second, by transforming to Eulerian background coordinates with truncation of nonlinear terms.

  13. Application of modern tensor calculus to engineered domain structures. 1. Calculation of tensorial covariants.

    PubMed

    Kopský, Vojtech

    2006-03-01

    This article is a roadmap to a systematic calculation and tabulation of tensorial covariants for the point groups of material physics. The following are the essential steps in the described approach to tensor calculus. (i) An exact specification of the considered point groups by their embellished Hermann-Mauguin and Schoenflies symbols. (ii) Introduction of oriented Laue classes of magnetic point groups. (iii) An exact specification of matrix ireps (irreducible representations). (iv) Introduction of so-called typical (standard) bases and variables -- typical invariants, relative invariants or components of the typical covariants. (v) Introduction of Clebsch-Gordan products of the typical variables. (vi) Calculation of tensorial covariants of ascending ranks with consecutive use of tables of Clebsch-Gordan products. (vii) Opechowski's magic relations between tensorial decompositions. These steps are illustrated for groups of the tetragonal oriented Laue class D(4z) -- 4(z)2(x)2(xy) of magnetic point groups and for tensors up to fourth rank.

  14. Databases post-processing in Tensoral

    NASA Technical Reports Server (NTRS)

    Dresselhaus, Eliot

    1994-01-01

    The Center for Turbulence Research (CTR) post-processing effort aims to make turbulence simulations and data more readily and usefully available to the research and industrial communities. The Tensoral language, introduced in this document and currently existing in prototype form, is the foundation of this effort. Tensoral provides a convenient and powerful protocol to connect users who wish to analyze fluids databases with the authors who generate them. In this document we introduce Tensoral and its prototype implementation in the form of a user's guide. This guide focuses on the use of Tensoral for post-processing turbulence databases. The corresponding document - the Tensoral 'author's guide' - which focuses on how authors can make databases available to users via the Tensoral system - is currently unwritten. Section 1 of this user's guide defines Tensoral's basic notions: we explain the class of problems at hand and how Tensoral abstracts them. Section 2 defines Tensoral syntax for mathematical expressions. Section 3 shows how these expressions make up Tensoral statements. Section 4 shows how Tensoral statements and expressions are embedded into other computer languages (such as C or Vectoral) to make Tensoral programs. We conclude with a complete example program.

  15. n + 1 formalism of f(Lovelock) gravity

    NASA Astrophysics Data System (ADS)

    Lachaume, Xavier

    2018-06-01

    In this note we perform the n + 1 decomposition, or Arnowitt-Deser-Misner (ADM) formulation, of f(Lovelock) gravity theory. The Hamiltonian form of Lovelock gravity has been known since the work of Teitelboim and Zanelli in 1987, but this result had not yet been extended to f(Lovelock) gravity. Besides, the field equations of f(Lovelock) gravity have recently been computed by Bueno et al, though without ADM decomposition. We focus on the non-degenerate case, i.e. when the Hessian of f is invertible. Using the same Legendre transform as for f(R) theories, we can identify the partial derivatives of f as scalar fields, and consider the theory as a generalised scalar-tensor theory. We then derive the field equations and project them along an n + 1 decomposition. We obtain an original system of constraint equations for f(Lovelock) gravity, as well as dynamical equations. We give explicit formulas for the case.

  16. Tensor hypercontraction density fitting. I. Quartic scaling second- and third-order Møller-Plesset perturbation theory

    NASA Astrophysics Data System (ADS)

    Hohenstein, Edward G.; Parrish, Robert M.; Martínez, Todd J.

    2012-07-01

    Many approximations have been developed to help deal with the O(N^4) growth of the electron repulsion integral (ERI) tensor, where N is the number of one-electron basis functions used to represent the electronic wavefunction. Of these, the density fitting (DF) approximation is currently the most widely used despite the fact that it is often incapable of altering the underlying scaling of computational effort with respect to molecular size. We present a method for exploiting sparsity in three-center overlap integrals through tensor decomposition to obtain a low-rank approximation to density fitting (tensor hypercontraction density fitting or THC-DF). This new approximation reduces the 4th-order ERI tensor to a product of five matrices, simultaneously reducing the storage requirement as well as increasing the flexibility to regroup terms and reduce scaling behavior. As an example, we demonstrate such a scaling reduction for second- and third-order perturbation theory (MP2 and MP3), showing that both can be carried out in O(N^4) operations. This should be compared to the usual scaling behavior of O(N^5) and O(N^6) for MP2 and MP3, respectively. The THC-DF technique can also be applied to other methods in electronic structure theory, such as coupled-cluster and configuration interaction, promising significant gains in computational efficiency and storage reduction.
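    As a rough illustration of the five-matrix form (a sketch under invented dimensions, not the authors' implementation; the rank r and all array names are hypothetical), a THC-like factorization lets contractions be regrouped so the full fourth-order tensor is never needed:

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 6, 10  # basis size and THC rank (illustrative values only)
X = rng.standard_normal((n, r))
Z = rng.standard_normal((r, r))

# THC form: (pq|rs) ~ sum_{P,Q} X[p,P] X[q,P] Z[P,Q] X[r,Q] X[s,Q]
eri = np.einsum('pP,qP,PQ,rQ,sQ->pqrs', X, X, Z, X, X)

# A contraction such as sum_{rs} (pq|rs) T[r,s] can be regrouped so the
# 4-index tensor is never formed, reducing both storage and operation count.
T = rng.standard_normal((n, n))
direct = np.einsum('pqrs,rs->pq', eri, T)           # naive O(n^4) route

v = np.einsum('rQ,sQ,rs->Q', X, X, T)               # O(n^2 r)
regrouped = np.einsum('pP,qP,P->pq', X, X, Z @ v)   # O(n^2 r + r^2)
```

The two routes agree to machine precision; the regrouping is what allows the scaling reductions described in the abstract.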

  17. Calculation and Analysis of magnetic gradient tensor components of global magnetic models

    NASA Astrophysics Data System (ADS)

    Schiffler, Markus; Queitsch, Matthias; Schneider, Michael; Stolz, Ronny; Krech, Wolfram; Meyer, Hans-Georg; Kukowski, Nina

    2014-05-01

    Magnetic mapping missions like SWARM and its predecessors, e.g. the CHAMP and MAGSAT programs, offer high-resolution data on the Earth's magnetic field. These datasets are usually combined with magnetic observatory and survey data, and subjected to harmonic analysis. The derived spherical harmonic coefficients enable magnetic field modelling using a potential series expansion. Recently, new instruments like the JeSSY STAR Full Tensor Magnetic Gradiometry system, equipped with very sensitive sensors, can directly measure the components of the magnetic field gradient tensor. Fully understanding the quality of the measured data requires extending magnetic field models to the gradient tensor components. In this study, we extend the derivation of the magnetic field from the potential series to the magnetic field gradient tensor components and apply the new theoretical framework to the International Geomagnetic Reference Field (IGRF) and the High Definition Magnetic Model (HDGM). The gradient tensor component maps produced for the entire Earth's surface show low values and smooth variations for the IGRF, reflecting the core and mantle contributions, whereas those for the HDGM give a novel tool for unravelling crustal structure and deep-seated ore bodies. For example, the Thor Suture and the Sorgenfrei-Thornquist Zone in Europe are delineated by a strong northward gradient. Derived from an eigenvalue decomposition of the magnetic gradient tensor, the scaled magnetic moment, the normalized source strength (NSS) and the bearing of the lithospheric sources are presented. The NSS serves as a tool for estimating the lithosphere-asthenosphere boundary as well as the depth of plutons and ore bodies. Furthermore, changes in magnetization direction parallel to the mid-ocean ridges can be obtained from the scaled magnetic moment, and the normalized source strength discriminates the boundaries between the anomalies of major continental provinces like southern Africa or the Eastern European Craton.
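    One widely used definition of the normalized source strength (following the eigenvalue-based formula in the gradiometry literature; the tensor values below are synthetic, not from this study) computes it from the sorted eigenvalues of the symmetric, traceless gradient tensor:

```python
import numpy as np

# Synthetic symmetric, traceless magnetic gradient tensor (illustrative values)
G = np.array([[ 2.0,  0.5, -1.0],
              [ 0.5, -0.5,  0.3],
              [-1.0,  0.3, -1.5]])

# Eigenvalues in ascending order from eigvalsh; relabel so lam1 >= lam2 >= lam3
lam3, lam2, lam1 = np.linalg.eigvalsh(G)

# Normalized source strength mu = sqrt(-lam2^2 - lam1*lam3); the argument is
# non-negative for any traceless symmetric tensor, so mu is always real.
nss = np.sqrt(-lam2**2 - lam1 * lam3)
```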

  18. A non-statistical regularization approach and a tensor product decomposition method applied to complex flow data

    NASA Astrophysics Data System (ADS)

    von Larcher, Thomas; Blome, Therese; Klein, Rupert; Schneider, Reinhold; Wolf, Sebastian; Huber, Benjamin

    2016-04-01

    Handling high-dimensional data sets, such as those that occur e.g. in turbulent flows or in certain types of multiscale behaviour in the Geosciences, is one of the big challenges in numerical analysis and scientific computing. A suitable solution is to represent those large data sets in an appropriate compact form. In this context, tensor product decomposition methods currently emerge as an important tool. One reason is that these methods often enable one to attack high-dimensional problems successfully; another is that they allow for very compact representations of large data sets. We follow the novel Tensor-Train (TT) decomposition method to support the development of improved understanding of the multiscale behaviour and the development of compact storage schemes for solutions of such problems. One long-term goal of the project is the construction of a self-consistent closure for Large Eddy Simulations (LES) of turbulent flows that explicitly exploits the tensor product approach's capability of capturing self-similar structures. Secondly, we focus on a mixed deterministic-stochastic subgrid scale modelling strategy currently under development for application in Finite Volume LES codes. Advanced methods of time series analysis for the data-based construction of stochastic models with inherently non-stationary statistical properties, and concepts of information theory based on a modified Akaike information criterion and on the Bayesian information criterion for model discrimination, are used to construct surrogate models for the non-resolved flux fluctuations. Vector-valued auto-regressive models with external influences form the basis for the modelling approach [1], [2], [4]. Here, we present the reconstruction capabilities of the two modelling approaches tested against 3D turbulent channel flow data computed by direct numerical simulation (DNS) for an incompressible, isothermal fluid at Reynolds number Reτ = 590 (computed by [3]). References [1] I.
Horenko. On identification of nonstationary factor models and its application to atmospherical data analysis. J. Atm. Sci., 67:1559-1574, 2010. [2] P. Metzner, L. Putzig and I. Horenko. Analysis of persistent non-stationary time series and applications. CAMCoS, 7:175-229, 2012. [3] M. Uhlmann. Generation of a temporally well-resolved sequence of snapshots of the flow-field in turbulent plane channel flow. URL: http://www-turbul.ifh.unikarlsruhe.de/uhlmann/reports/produce.pdf, 2000. [4] Th. von Larcher, A. Beck, R. Klein, I. Horenko, P. Metzner, M. Waidmann, D. Igdalov, G. Gassner and C.-D. Munz. Towards a Framework for the Stochastic Modelling of Subgrid Scale Fluxes for Large Eddy Simulation. Meteorol. Z., 24:313-342, 2015.
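    A minimal sketch of the TT-SVD idea referred to above (illustrative only; the function names, toy tensor, and rank cap are invented for this example): the tensor is repeatedly unfolded and split by truncated SVDs into a train of 3-way cores.

```python
import numpy as np

def tt_svd(tensor, max_rank):
    """Decompose a full tensor into Tensor-Train cores via successive SVDs."""
    dims = tensor.shape
    cores, rank = [], 1
    mat = tensor.reshape(dims[0], -1)
    for k in range(len(dims) - 1):
        U, s, Vt = np.linalg.svd(mat, full_matrices=False)
        r_new = min(max_rank, len(s))                      # rank truncation
        cores.append(U[:, :r_new].reshape(rank, dims[k], r_new))
        mat = (s[:r_new, None] * Vt[:r_new]).reshape(r_new * dims[k + 1], -1)
        rank = r_new
    cores.append(mat.reshape(rank, dims[-1], 1))
    return cores

def tt_to_full(cores):
    """Contract the TT cores back into a full tensor."""
    full = cores[0]
    for core in cores[1:]:
        full = np.tensordot(full, core, axes=([-1], [0]))
    return full.reshape([c.shape[1] for c in cores])

rng = np.random.default_rng(0)
T = rng.standard_normal((4, 5, 6))
cores = tt_svd(T, max_rank=30)   # ranks large enough here for an exact split
err = np.linalg.norm(tt_to_full(cores) - T)
```

Capping `max_rank` below the exact TT ranks turns this into the compact, lossy representation the abstract describes.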

  19. Gordan—Capelli series in superalgebras

    PubMed Central

    Brini, Andrea; Palareti, Aldopaolo; Teolis, Antonio G. B.

    1988-01-01

    We derive two Gordan—Capelli series for the supersymmetric algebra of the tensor product of two Z2-graded vector spaces U and V over a field of characteristic zero. These expansions yield complete decompositions of the supersymmetric algebra regarded as a pl(U)- and a pl(V)-module, where pl(U) and pl(V) are the general linear Lie superalgebras of U and V, respectively. PMID:16593911

  20. Resting state networks in empirical and simulated dynamic functional connectivity.

    PubMed

    Glomb, Katharina; Ponce-Alvarez, Adrián; Gilson, Matthieu; Ritter, Petra; Deco, Gustavo

    2017-10-01

    It is well-established that patterns of functional connectivity (FC) - measures of correlated activity between pairs of voxels or regions observed in the human brain using neuroimaging - are robustly expressed in spontaneous activity during rest. These patterns are not static, but exhibit complex spatio-temporal dynamics. Over recent years, a multitude of methods have been proposed to reveal these dynamics at the level of the whole brain. One finding is that the brain transitions through different FC configurations over time, and substantial effort has been put into characterizing these configurations. However, the dynamics governing these transitions are more elusive; specifically, the contribution of stationary vs. non-stationary dynamics is an active field of inquiry. In this study, we use a whole-brain approach, considering FC dynamics between 66 ROIs covering the entire cortex. We combine an innovative dimensionality reduction technique, tensor decomposition, with a mean field model which possesses stationary dynamics. The model has been shown to explain resting state FC averaged over time and multiple subjects; however, this average FC summarizes the spatial distribution of correlations while hiding their temporal dynamics. First, we apply tensor decomposition to resting state scans from 24 healthy controls in order to characterize the spatio-temporal dynamics present in the data. We simultaneously utilize temporal and spatial information by creating tensors that are subsequently decomposed into sets of brain regions ("communities") that share similar temporal dynamics, and their associated time courses. The tensors contain pairwise FC computed within overlapping sliding windows. Communities are discovered by clustering features pooled from all subjects, thereby ensuring that they generalize. We find that, on the group level, the data give rise to four distinct communities that resemble known resting state networks (RSNs): default mode network, visual network, control networks, and somatomotor network. Second, we simulate data with our stationary mean field model, whose nodes are connected according to results from DTI and fiber tracking. In this model, all spatio-temporal structure is due to noisy fluctuations around the average FC. We analyze the simulated data in the same way as the empirical data in order to determine whether stationary dynamics can explain the emergence of distinct FC patterns (RSNs) which have their own time courses. We find that this is the case for all four networks using the spatio-temporal information revealed by tensor decomposition if nodes in the simulation are connected according to model-based effective connectivity. Furthermore, we find that these results require only a small part of the FC values, namely the highest values that occur across time and ROI pairs. Our findings show that stationary dynamics can account for the emergence of RSNs. We provide an innovative method that does not make strong assumptions about the underlying data and is generally applicable to resting state or task data from different subject populations. Copyright © 2017 Elsevier Inc. All rights reserved.
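    The sliding-window FC tensor described above can be sketched as follows (a toy construction on synthetic data; the window length, step size, and array names are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n_time, n_roi = 200, 10
ts = rng.standard_normal((n_time, n_roi))   # synthetic BOLD time courses

win, step = 50, 10                          # overlapping sliding windows
starts = range(0, n_time - win + 1, step)

# One ROI-by-ROI correlation matrix per window, stacked into a 3-way tensor;
# such a tensor can then be decomposed into communities and time courses.
fc_tensor = np.stack([np.corrcoef(ts[s:s + win].T) for s in starts])
```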

  1. Detecting brain dynamics during resting state: a tensor based evolutionary clustering approach

    NASA Astrophysics Data System (ADS)

    Al-sharoa, Esraa; Al-khassaweneh, Mahmood; Aviyente, Selin

    2017-08-01

    The human brain is a complex network with connections across different regions. Understanding the functional connectivity (FC) of the brain is important during both resting state and task performance, as disruptions in connectivity patterns are indicators of different psychopathological and neurological diseases. In this work, we study the resting state functional connectivity networks (FCNs) of the brain from fMRI BOLD signals. Recent studies have shown that FCNs are dynamic even during resting state, and understanding the temporal dynamics of FCNs is important for differentiating between different conditions. Therefore, it is important to develop algorithms to track the dynamic formation and dissociation of FCNs of the brain during resting state. In this paper, we propose a two-step tensor-based community detection algorithm to identify and track the brain network community structure across time. First, we introduce an information-theoretic function to reduce the dynamic FCN and identify the time points that are topologically similar, in order to combine them into a tensor. These time points will be used to identify the different FC states. Second, a tensor-based spectral clustering approach is developed to identify the community structure of the constructed tensors. The proposed algorithm applies Tucker decomposition to the constructed tensors and extracts the orthogonal factor matrices along the connectivity mode to determine the common subspace within each FC state. The detected community structure is summarized and described as FC states. The results illustrate the dynamic structure of resting state networks (RSNs), including the default mode network, somatomotor network, subcortical network and visual network.
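    A minimal numpy sketch of the Tucker/HOSVD step mentioned above (illustrative only; the toy tensor and chosen ranks are invented): each mode is unfolded, the leading left singular vectors give the factor matrices, and the core is the multilinear projection onto them.

```python
import numpy as np

def unfold(T, mode):
    """Mode-k unfolding: mode-k fibers become the columns of a matrix."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_mult(T, M, mode):
    """Mode-k product T x_k M (contract mode k of T with columns of M)."""
    return np.moveaxis(np.tensordot(M, np.moveaxis(T, mode, 0), axes=1), 0, mode)

def hosvd(T, ranks):
    """Tucker decomposition via higher-order SVD with given mode ranks."""
    factors = [np.linalg.svd(unfold(T, k), full_matrices=False)[0][:, :r]
               for k, r in enumerate(ranks)]
    core = T
    for k, U in enumerate(factors):
        core = mode_mult(core, U.T, k)   # project onto the factor subspaces
    return core, factors

rng = np.random.default_rng(0)
T = rng.standard_normal((5, 6, 7))
core, factors = hosvd(T, ranks=(5, 6, 7))   # full ranks -> exact decomposition

# Reconstruction: core multiplied back along each mode
recon = core
for k, U in enumerate(factors):
    recon = mode_mult(recon, U, k)
```

Choosing ranks smaller than the mode dimensions yields the compressed, approximate Tucker model these methods rely on.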

  2. Tensoral for post-processing users and simulation authors

    NASA Technical Reports Server (NTRS)

    Dresselhaus, Eliot

    1993-01-01

    The CTR post-processing effort aims to make turbulence simulations and data more readily and usefully available to the research and industrial communities. The Tensoral language, which provides the foundation for this effort, is introduced here in the form of a user's guide. The Tensoral user's guide is presented in two main sections. Section one acts as a general introduction and guides database users who wish to post-process simulation databases. Section two gives a brief description of how database authors and other advanced users can make simulation codes and/or the databases they generate available to the user community via Tensoral database back ends. The two-part structure of this document conforms to the two-level design structure of the Tensoral language. Tensoral has been designed to be a general computer language for performing tensor calculus and statistics on numerical data. Tensoral's generality allows it to be used for stand-alone native coding of high-level post-processing tasks (as described in section one of this guide). At the same time, Tensoral's specialization to a minute task (namely, to numerical tensor calculus and statistics) allows it to be easily embedded into applications written partly in Tensoral and partly in other computer languages (here, C and Vectoral). Embedded Tensoral, aimed at advanced users for more general coding (e.g. of efficient simulations, for interfacing with pre-existing software, for visualization, etc.), is described in section two of this guide.

  3. LiDAR point classification based on sparse representation

    NASA Astrophysics Data System (ADS)

    Li, Nan; Pfeifer, Norbert; Liu, Chun

    2017-04-01

    In order to combine the initial spatial structure and features of LiDAR data for accurate classification, the LiDAR data are represented as a 4-order tensor, and a sparse representation for classification (SRC) method is used to classify the LiDAR tensor. It turns out that SRC needs only a few training samples from each class while still achieving good classification results. Multiple features are extracted from the raw LiDAR points to generate a high-dimensional vector at each point. The LiDAR tensor is then built from the spatial distribution and feature vectors of the point neighborhood. The entries of the LiDAR tensor are accessed via four indexes. Each index is called a mode: three spatial modes in the directions X, Y, Z and one feature mode. The sparsity algorithm seeks the best representation of a test sample as a sparse linear combination of training samples from a dictionary. To exploit the sparsity of the LiDAR tensor, the Tucker decomposition is used. It decomposes a tensor into a core tensor multiplied by a matrix along each mode. Those matrices can be considered as the principal components in each mode. The entries of the core tensor show the level of interaction between the different components. Therefore, the LiDAR tensor can be approximately represented by a sparse tensor multiplied by a matrix selected from a dictionary along each mode. The matrices decomposed from the training samples are arranged as the initial elements of the dictionary. By dictionary learning, a reconstructive and discriminative structured dictionary along each mode is built. The overall structured dictionary is composed of class-specific sub-dictionaries. The sparse core tensor is then calculated by a tensor OMP (Orthogonal Matching Pursuit) method based on the dictionaries along each mode. It is expected that the original tensor is well recovered by the sub-dictionary associated with the relevant class, while entries in the sparse tensor associated with other classes are nearly zero. Therefore, SRC uses the reconstruction error associated with each class to classify the data. A section of airborne LiDAR points over the city of Vienna is used and classified into 6 classes: ground, roofs, vegetation, covered ground, walls and other points. Only 6 training samples from each class are taken. For the final classification result, ground and covered ground are merged into one class (ground). The classification accuracy for ground is 94.60%, roofs 95.47%, vegetation 85.55%, walls 76.17%, and other objects 20.39%.
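    A compact sketch of the Orthogonal Matching Pursuit step (plain matrix form rather than the paper's tensor OMP; the orthonormal toy dictionary and all names are invented): greedily pick the atom most correlated with the residual, then re-fit the coefficients on the selected support by least squares.

```python
import numpy as np

def omp(D, y, n_nonzero):
    """Greedy sparse coding: y ~ D @ x with at most n_nonzero active atoms."""
    idx, residual = [], y.copy()
    coef = np.zeros(0)
    for _ in range(n_nonzero):
        idx.append(int(np.argmax(np.abs(D.T @ residual))))   # best-matching atom
        coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)  # re-fit on support
        residual = y - D[:, idx] @ coef
    x = np.zeros(D.shape[1])
    x[idx] = coef
    return x

# Toy example: orthonormal dictionary and an exactly 2-sparse signal
rng = np.random.default_rng(0)
D, _ = np.linalg.qr(rng.standard_normal((20, 20)))
y = 2.0 * D[:, 3] - 1.0 * D[:, 7]
x = omp(D, y, n_nonzero=2)
```

In an SRC setting, the same pursuit would be run against each class-specific sub-dictionary and the class with the smallest reconstruction error selected.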

  4. Dilation and Hypertrophy: A Cell-Based Continuum Mechanics Approach Towards Ventricular Growth and Remodeling

    NASA Astrophysics Data System (ADS)

    Ulerich, J.; Göktepe, S.; Kuhl, E.

    This manuscript presents a continuum approach towards cardiac growth and remodeling that is capable of predicting chronic maladaptation of the heart in response to changes in mechanical loading. It is based on the multiplicative decomposition of the deformation gradient into an elastic and a growth part. Motivated by morphological changes in cardiomyocyte geometry, we introduce an anisotropic growth tensor that can capture both hypertrophic wall thickening and ventricular dilation within one generic concept. In agreement with clinical observations, we propose wall thickening to be a stress-driven phenomenon whereas dilation is introduced as a strain-driven process. The features of the proposed approach are illustrated in terms of the adaptation of thin heart slices and in terms of overload-induced dilation in a generic bi-ventricular heart model.
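    The multiplicative split F = Fe · Fg can be sketched numerically as follows (a generic illustration of the kinematics, not the authors' model; the fiber direction, growth factor, and numerical values are invented):

```python
import numpy as np

# Total deformation gradient (synthetic, slightly perturbed identity)
F = np.eye(3) + np.array([[0.10, 0.02, 0.00],
                          [0.00, 0.05, 0.01],
                          [0.00, 0.00, 0.08]])

# Anisotropic growth along a fiber direction f0: Fg = I + (theta - 1) f0 f0^T,
# so theta > 1 lengthens the material along f0 only.
f0 = np.array([1.0, 0.0, 0.0])
theta = 1.2
Fg = np.eye(3) + (theta - 1.0) * np.outer(f0, f0)

# Elastic part recovered from the multiplicative split F = Fe @ Fg;
# only Fe enters the stress response, while Fg encodes grown material.
Fe = F @ np.linalg.inv(Fg)
```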

  5. When are Overcomplete Topic Models Identifiable? Uniqueness of Tensor Tucker Decompositions with Structured Sparsity

    DTIC Science & Technology

    2013-08-14


  6. Husimi coordinates of multipartite separable states

    NASA Astrophysics Data System (ADS)

    Parfionov, Georges; Zapatrin, Romàn R.

    2010-12-01

    A parametrization of multipartite separable states in a finite-dimensional Hilbert space is suggested. It is proved to be a diffeomorphism between the set of zero-trace operators and the interior of the set of separable density operators. The result is applicable to any tensor product decomposition of the state space. An analytical criterion for separability of density operators is established in terms of the boundedness of a sequence of operators.

  7. On deformation of complex continuum immersed in a plane space

    NASA Astrophysics Data System (ADS)

    Kovalev, V. A.; Murashkin, E. V.; Radayev, Y. N.

    2018-05-01

    The present paper is devoted to the mathematical modelling of deformations of complex continua considered as immersed in an external plane space. The complex continuum is defined as a differential manifold supplied with metrics induced by the external space. A systematic derivation of strain tensors via the notion of an isometric immersion of the complex continuum into a plane space of higher dimension is proposed. The problem of establishing complete systems of irreducible objective strain and extra-strain tensors for a complex continuum immersed in an external plane space is resolved. The solution to the problem is obtained by methods of field theory and the theory of rational algebraic invariants. Strain tensors of the complex continuum are derived as irreducible algebraic invariants of contravariant vectors of the external space emerging as functional arguments in the complex continuum action density. The present analysis is restricted to rational algebraic invariants. Completeness of the considered systems of rational algebraic invariants is established for micropolar elastic continua. Rational syzygies for non-quadratic invariants are discussed. Objective strain tensors (indifferent to frame rotations in the external plane space) for the micropolar continuum are alternatively obtained by properly combining multipliers of the polar decompositions of the deformation and extra-deformation gradients. The latter is realized only for continua immersed in a plane space of equal mathematical dimension.

  8. Modelling Dynamic Behaviour and Spall Failure of Aluminium Alloy AA7010

    NASA Astrophysics Data System (ADS)

    Ma'at, N.; Nor, M. K. Mohd; Ismail, A. E.; Kamarudin, K. A.; Jamian, S.; Ibrahim, M. N.; Awang, M. K.

    2017-10-01

    A finite strain constitutive model to predict the dynamic deformation behaviour of Aluminium Alloy 7010, including shockwaves and spall failure, is developed in this work. The important feature of this new hyperelastic-plastic constitutive formulation is a new Mandel stress tensor formulated using a new generalized orthotropic pressure. This tensor is combined with a shock equation of state (EOS) and the Grady spall failure model. Hill's yield criterion is adopted to characterize plastic orthotropy by means of evolving structural tensors that are defined in the isoclinic configuration. The material model is decomposed into elastic and plastic parts. The elastic anisotropy is taken into account through the new stress tensor decomposition of a generalized orthotropic pressure. Plastic anisotropy is considered through the yield surface and an isotropic hardening defined in a unique alignment of the deviatoric plane within the stress space. To test its ability to describe shockwave propagation and spall failure, the new material model was implemented into UTHM's version of the LLNL-DYNA3D code. The capabilities of this new constitutive model were compared against published experimental data from plate impact tests at 234 m/s, 450 m/s and 895 m/s impact velocities. A good agreement between experiment and simulation is obtained in each test.

  9. Inversion of gravity gradient tensor data: does it provide better resolution?

    NASA Astrophysics Data System (ADS)

    Paoletti, V.; Fedi, M.; Italiano, F.; Florio, G.; Ialongo, S.

    2016-04-01

    The gravity gradient tensor (GGT) has been increasingly used in practical applications, but the advantages and the disadvantages of the analysis of GGT components versus the analysis of the vertical component of the gravity field are still debated. We analyse the performance of joint inversion of GGT components versus separate inversion of the gravity field alone, or of one tensor component. We perform our analysis by inspection of the Picard Plot, a Singular Value Decomposition tool, and analyse both synthetic data and gradiometer measurements carried out at the Vredefort structure, South Africa. We show that the main factors controlling the reliability of the inversion are algebraic ambiguity (the difference between the number of unknowns and the number of available data points) and signal-to-noise ratio. Provided that algebraic ambiguity is kept low and the noise level is small enough so that a sufficient number of SVD components can be included in the regularized solution, we find that: (i) the choice of tensor components involved in the inversion is not crucial to the overall reliability of the reconstructions; (ii) GGT inversion can yield the same resolution as inversion with a denser distribution of gravity data points, but with the advantage of using fewer measurement stations.
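    The SVD-based regularization discussed above can be illustrated with a truncated-SVD solver (a generic sketch, not the authors' code; matrix sizes and names are invented): the Picard condition amounts to the coefficients |u_i^T b| decaying faster than the singular values s_i, and truncation discards the components that noise would otherwise amplify.

```python
import numpy as np

def tsvd_solve(A, b, k):
    """Regularized least squares keeping only the k largest singular values."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt[:k].T @ ((U[:, :k].T @ b) / s[:k])

rng = np.random.default_rng(0)
A = rng.standard_normal((12, 5))   # overdetermined toy forward operator
x_true = rng.standard_normal(5)
b = A @ x_true                     # noise-free synthetic data

# Picard-plot ingredients: singular values vs. projected data coefficients
U, s, _ = np.linalg.svd(A, full_matrices=False)
picard_coeffs = np.abs(U.T @ b)

x_rec = tsvd_solve(A, b, k=5)      # full rank, noise-free: exact recovery
```

With noisy data one would choose k where the Picard coefficients level off, trading resolution against noise amplification.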

  10. Scalar, Axial, and Tensor Interactions of Light Nuclei from Lattice QCD

    NASA Astrophysics Data System (ADS)

    Chang, Emmanuel; Davoudi, Zohreh; Detmold, William; Gambhir, Arjun S.; Orginos, Kostas; Savage, Martin J.; Shanahan, Phiala E.; Wagman, Michael L.; Winter, Frank; Nplqcd Collaboration

    2018-04-01

    Complete flavor decompositions of the matrix elements of the scalar, axial, and tensor currents in the proton, deuteron, diproton, and 3He at SU(3)-symmetric values of the quark masses corresponding to a pion mass mπ ≈ 806 MeV are determined using lattice quantum chromodynamics. At the physical quark masses, the scalar interactions constrain mean-field models of nuclei and the low-energy interactions of nuclei with potential dark matter candidates. The axial and tensor interactions of nuclei constrain their spin content, integrated transversity, and the quark contributions to their electric dipole moments. External fields are used to directly access the quark-line connected matrix elements of quark bilinear operators, and a combination of stochastic estimation techniques is used to determine the disconnected sea-quark contributions. The calculated matrix elements differ from, and are typically smaller than, naive single-nucleon estimates. Given the particularly large, O(10%), size of nuclear effects in the scalar matrix elements, contributions from correlated multinucleon effects should be quantified in the analysis of dark matter direct-detection experiments using nuclear targets.

  11. Robust Angle Estimation for MIMO Radar with the Coexistence of Mutual Coupling and Colored Noise.

    PubMed

    Wang, Junxiang; Wang, Xianpeng; Xu, Dingjie; Bi, Guoan

    2018-03-09

    This paper deals with the joint estimation of direction-of-departure (DOD) and direction-of-arrival (DOA) in bistatic multiple-input multiple-output (MIMO) radar in the presence of unknown mutual coupling and spatial colored noise, by developing a novel robust covariance tensor-based angle estimation method. In the proposed method, a third-order tensor is first formulated to capture the multidimensional nature of the received data. Then, taking advantage of the temporally uncorrelated characteristic of the colored noise and the banded complex symmetric Toeplitz structure of the mutual coupling matrices, a novel fourth-order covariance tensor is constructed to eliminate the influence of both spatial colored noise and mutual coupling. After a robust signal subspace estimate is obtained using the higher-order singular value decomposition (HOSVD) technique, the rotational invariance technique is applied to obtain the DODs and DOAs. Compared with existing HOSVD-based subspace methods, the proposed method provides superior angle estimation performance and automatically pairs the DODs and DOAs. Results from numerical experiments are presented to verify the effectiveness of the proposed method.

  12. Scalar, Axial, and Tensor Interactions of Light Nuclei from Lattice QCD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, Emmanuel; Davoudi, Zohreh; Detmold, William

    Complete flavor decompositions of the matrix elements of the scalar, axial, and tensor currents in the proton, deuteron, diproton, and 3He at SU(3)-symmetric values of the quark masses corresponding to a pion mass mπ ≈ 806 MeV are determined using lattice quantum chromodynamics. At the physical quark masses, the scalar interactions constrain mean-field models of nuclei and the low-energy interactions of nuclei with potential dark matter candidates. The axial and tensor interactions of nuclei constrain their spin content, integrated transversity, and the quark contributions to their electric dipole moments. External fields are used to directly access the quark-line connected matrix elements of quark bilinear operators, and a combination of stochastic estimation techniques is used to determine the disconnected sea-quark contributions. The calculated matrix elements differ from, and are typically smaller than, naive single-nucleon estimates. Given the particularly large, O(10%), size of nuclear effects in the scalar matrix elements, contributions from correlated multinucleon effects should be quantified in the analysis of dark matter direct-detection experiments using nuclear targets.

  13. Scalar, Axial, and Tensor Interactions of Light Nuclei from Lattice QCD

    DOE PAGES

    Chang, Emmanuel; Davoudi, Zohreh; Detmold, William; ...

    2018-04-13

    Complete flavor decompositions of the matrix elements of the scalar, axial, and tensor currents in the proton, deuteron, diproton, and 3He at SU(3)-symmetric values of the quark masses corresponding to a pion mass mπ ≈ 806 MeV are determined using lattice quantum chromodynamics. At the physical quark masses, the scalar interactions constrain mean-field models of nuclei and the low-energy interactions of nuclei with potential dark matter candidates. The axial and tensor interactions of nuclei constrain their spin content, integrated transversity, and the quark contributions to their electric dipole moments. External fields are used to directly access the quark-line connected matrix elements of quark bilinear operators, and a combination of stochastic estimation techniques is used to determine the disconnected sea-quark contributions. The calculated matrix elements differ from, and are typically smaller than, naive single-nucleon estimates. Given the particularly large, O(10%), size of nuclear effects in the scalar matrix elements, contributions from correlated multinucleon effects should be quantified in the analysis of dark matter direct-detection experiments using nuclear targets.

  14. Scalar, Axial, and Tensor Interactions of Light Nuclei from Lattice QCD.

    PubMed

    Chang, Emmanuel; Davoudi, Zohreh; Detmold, William; Gambhir, Arjun S; Orginos, Kostas; Savage, Martin J; Shanahan, Phiala E; Wagman, Michael L; Winter, Frank

    2018-04-13

    Complete flavor decompositions of the matrix elements of the scalar, axial, and tensor currents in the proton, deuteron, diproton, and 3He at SU(3)-symmetric values of the quark masses corresponding to a pion mass mπ ≈ 806 MeV are determined using lattice quantum chromodynamics. At the physical quark masses, the scalar interactions constrain mean-field models of nuclei and the low-energy interactions of nuclei with potential dark matter candidates. The axial and tensor interactions of nuclei constrain their spin content, integrated transversity, and the quark contributions to their electric dipole moments. External fields are used to directly access the quark-line connected matrix elements of quark bilinear operators, and a combination of stochastic estimation techniques is used to determine the disconnected sea-quark contributions. The calculated matrix elements differ from, and are typically smaller than, naive single-nucleon estimates. Given the particularly large, O(10%), size of nuclear effects in the scalar matrix elements, contributions from correlated multinucleon effects should be quantified in the analysis of dark matter direct-detection experiments using nuclear targets.

  15. Robust Angle Estimation for MIMO Radar with the Coexistence of Mutual Coupling and Colored Noise

    PubMed Central

    Wang, Junxiang; Wang, Xianpeng; Xu, Dingjie; Bi, Guoan

    2018-01-01

    This paper deals with the joint estimation of direction-of-departure (DOD) and direction-of-arrival (DOA) in bistatic multiple-input multiple-output (MIMO) radar in the presence of unknown mutual coupling and spatial colored noise, by developing a novel robust covariance tensor-based angle estimation method. In the proposed method, a third-order tensor is first formulated to capture the multidimensional nature of the received data. Then, taking advantage of the temporally uncorrelated characteristic of the colored noise and the banded complex symmetric Toeplitz structure of the mutual coupling matrices, a novel fourth-order covariance tensor is constructed to eliminate the influence of both spatial colored noise and mutual coupling. After a robust signal subspace estimate is obtained using the higher-order singular value decomposition (HOSVD) technique, the rotational invariance technique is applied to obtain the DODs and DOAs. Compared with existing HOSVD-based subspace methods, the proposed method provides superior angle estimation performance and automatically pairs the DODs and DOAs. Results from numerical experiments are presented to verify the effectiveness of the proposed method. PMID:29522499

  16. Weyl geometry

    NASA Astrophysics Data System (ADS)

    Wheeler, James T.

    2018-07-01

    We develop the properties of Weyl geometry, beginning with a review of the conformal properties of Riemannian spacetimes. Decomposition of the Riemann curvature into trace and traceless parts allows an easy proof that the Weyl curvature tensor is the conformally invariant part of the Riemann curvature, and shows the explicit change in the Ricci and Schouten tensors required to insure conformal invariance. We include a proof of the well-known condition for the existence of a conformal transformation to a Ricci-flat spacetime. We generalize this to a derivation of the condition for the existence of a conformal transformation to a spacetime satisfying the Einstein equation with matter sources. Then, enlarging the symmetry from Poincaré to Weyl, we develop the Cartan structure equations of Weyl geometry, the form of the curvature tensor and its relationship to the Riemann curvature of the corresponding Riemannian geometry. We present a simple theory of Weyl-covariant gravity based on a curvature-linear action, and show that it is conformally equivalent to general relativity. This theory is invariant under local dilatations, but not the full conformal group.

  17. When are Overcomplete Representations Identifiable? Uniqueness of Tensor Decompositions Under Expansion Constraints

    DTIC Science & Technology

    2013-06-16

    Science Dept., University of California, Irvine, USA 92697. Email : a.anandkumar@uci.edu,mjanzami@uci.edu. Daniel Hsu and Sham Kakade are with...Microsoft Research New England, 1 Memorial Drive, Cambridge, MA 02142. Email : dahsu@microsoft.com, skakade@microsoft.com 1 a latent space dimensionality...Sparse coding for multitask and transfer learning. ArxXiv preprint, abs/1209.0738, 2012. [34] G.H. Golub and C.F. Van Loan. Matrix Computations. The

  18. Adjoint affine fusion and tadpoles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Urichuk, Andrew, E-mail: andrew.urichuk@uleth.ca; Walton, Mark A., E-mail: walton@uleth.ca; International School for Advanced Studies

    2016-06-15

    We study affine fusion with the adjoint representation. For simple Lie algebras, elementary and universal formulas determine the decomposition of a tensor product of an integrable highest-weight representation with the adjoint representation. Using the (refined) affine depth rule, we prove that equally striking results apply to adjoint affine fusion. For diagonal fusion, a coefficient equals the number of nonzero Dynkin labels of the relevant affine highest weight, minus 1. A nice lattice-polytope interpretation follows and allows the straightforward calculation of the genus-1 1-point adjoint Verlinde dimension, the adjoint affine fusion tadpole. Explicit formulas, (piecewise) polynomial in the level, are writtenmore » for the adjoint tadpoles of all classical Lie algebras. We show that off-diagonal adjoint affine fusion is obtained from the corresponding tensor product by simply dropping non-dominant representations.« less

  19. Leith diffusion model for homogeneous anisotropic turbulence

    DOE PAGES

    Rubinstein, Robert; Clark, Timothy T.; Kurien, Susan

    2017-06-01

    Here, a proposal for a spectral closure model for homogeneous anisotropic turbulence. The systematic development begins by closing the third-order correlation describing nonlinear interactions by an anisotropic generalization of the Leith diffusion model for isotropic turbulence. The correlation tensor is then decomposed into a tensorially isotropic part, or directional anisotropy, and a trace-free remainder, or polarization anisotropy. The directional and polarization components are then decomposed using irreducible representations of the SO(3) symmetry group. Under the ansatz that the decomposition is truncated at quadratic order, evolution equations are derived for the directional and polarization pieces of the correlation tensor. Here, numericalmore » simulation of the model equations for a freely decaying anisotropic flow illustrate the non-trivial effects of spectral dependencies on the different return-to-isotropy rates of the directional and polarization contributions.« less

  20. [An Improved Spectral Quaternion Interpolation Method of Diffusion Tensor Imaging].

    PubMed

    Xu, Yonghong; Gao, Shangce; Hao, Xiaofei

    2016-04-01

    Diffusion tensor imaging(DTI)is a rapid development technology in recent years of magnetic resonance imaging.The diffusion tensor interpolation is a very important procedure in DTI image processing.The traditional spectral quaternion interpolation method revises the direction of the interpolation tensor and can preserve tensors anisotropy,but the method does not revise the size of tensors.The present study puts forward an improved spectral quaternion interpolation method on the basis of traditional spectral quaternion interpolation.Firstly,we decomposed diffusion tensors with the direction of tensors being represented by quaternion.Then we revised the size and direction of the tensor respectively according to different situations.Finally,we acquired the tensor of interpolation point by calculating the weighted average.We compared the improved method with the spectral quaternion method and the Log-Euclidean method by the simulation data and the real data.The results showed that the improved method could not only keep the monotonicity of the fractional anisotropy(FA)and the determinant of tensors,but also preserve the tensor anisotropy at the same time.In conclusion,the improved method provides a kind of important interpolation method for diffusion tensor image processing.

  1. Definition of Contravariant Velocity Components

    NASA Technical Reports Server (NTRS)

    Hung, Ching-moa; Kwak, Dochan (Technical Monitor)

    2002-01-01

    In this paper we have reviewed the basics of tensor analysis in an attempt to clarify some misconceptions regarding contravariant and covariant vector components as used in fluid dynamics. We have indicated that contravariant components are components of a given vector expressed as a unique combination of the covariant base vector system and, vice versa, that the covariant components are components of a vector expressed with the contravariant base vector system. Mathematically, expressing a vector with a combination of base vector is a decomposition process for a specific base vector system. Hence, the contravariant velocity components are decomposed components of velocity vector along the directions of coordinate lines, with respect to the covariant base vector system. However, the contravariant (and covariant) components are not physical quantities. Their magnitudes and dimensions are controlled by their corresponding covariant (and contravariant) base vectors.

  2. Moment-tensor solutions estimated using optimal filter theory: Global seismicity, 2001

    USGS Publications Warehouse

    Sipkin, S.A.; Bufe, C.G.; Zirbes, M.D.

    2003-01-01

    This paper is the 12th in a series published yearly containing moment-tensor solutions computed at the US Geological Survey using an algorithm based on the theory of optimal filter design (Sipkin, 1982 and Sipkin, 1986b). An inversion has been attempted for all earthquakes with a magnitude, mb or MS, of 5.5 or greater. Previous listings include solutions for earthquakes that occurred from 1981 to 2000 (Sipkin, 1986b; Sipkin and Needham, 1989, Sipkin and Needham, 1991, Sipkin and Needham, 1992, Sipkin and Needham, 1993, Sipkin and Needham, 1994a and Sipkin and Needham, 1994b; Sipkin and Zirbes, 1996 and Sipkin and Zirbes, 1997; Sipkin et al., 1998, Sipkin et al., 1999, Sipkin et al., 2000a, Sipkin et al., 2000b and Sipkin et al., 2002).The entire USGS moment-tensor catalog can be obtained via anonymous FTP at ftp://ghtftp.cr.usgs.gov. After logging on, change directory to “momten”. This directory contains two compressed ASCII files that contain the finalized solutions, “mt.lis.Z” and “fmech.lis.Z”. “mt.lis.Z” contains the elements of the moment tensors along with detailed event information; “fmech.lis.Z” contains the decompositions into the principal axes and best double-couples. The fast moment-tensor solutions for more recent events that have not yet been finalized and added to the catalog, are gathered by month in the files “jan01.lis.Z”, etc. “fmech.doc.Z” describes the various fields.

  3. Introduction to Vector Field Visualization

    NASA Technical Reports Server (NTRS)

    Kao, David; Shen, Han-Wei

    2010-01-01

    Vector field visualization techniques are essential to help us understand the complex dynamics of flow fields. These can be found in a wide range of applications such as study of flows around an aircraft, the blood flow in our heart chambers, ocean circulation models, and severe weather predictions. The vector fields from these various applications can be visually depicted using a number of techniques such as particle traces and advecting textures. In this tutorial, we present several fundamental algorithms in flow visualization including particle integration, particle tracking in time-dependent flows, and seeding strategies. For flows near surfaces, a wide variety of synthetic texture-based algorithms have been developed to depict near-body flow features. The most common approach is based on the Line Integral Convolution (LIC) algorithm. There also exist extensions of LIC to support more flexible texture generations for 3D flow data. This tutorial reviews these algorithms. Tensor fields are found in several real-world applications and also require the aid of visualization to help users understand their data sets. Examples where one can find tensor fields include mechanics to see how material respond to external forces, civil engineering and geomechanics of roads and bridges, and the study of neural pathway via diffusion tensor imaging. This tutorial will provide an overview of the different tensor field visualization techniques, discuss basic tensor decompositions, and go into detail on glyph based methods, deformation based methods, and streamline based methods. Practical examples will be used when presenting the methods; and applications from some case studies will be used as part of the motivation.

  4. Cross-scale efficient tensor contractions for coupled cluster computations through multiple programming model backends

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ibrahim, Khaled Z.; Epifanovsky, Evgeny; Williams, Samuel

    Coupled-cluster methods provide highly accurate models of molecular structure through explicit numerical calculation of tensors representing the correlation between electrons. These calculations are dominated by a sequence of tensor contractions, motivating the development of numerical libraries for such operations. While based on matrix–matrix multiplication, these libraries are specialized to exploit symmetries in the molecular structure and in electronic interactions, and thus reduce the size of the tensor representation and the complexity of contractions. The resulting algorithms are irregular and their parallelization has been previously achieved via the use of dynamic scheduling or specialized data decompositions. We introduce our efforts tomore » extend the Libtensor framework to work in the distributed memory environment in a scalable and energy-efficient manner. We achieve up to 240× speedup compared with the optimized shared memory implementation of Libtensor. We attain scalability to hundreds of thousands of compute cores on three distributed-memory architectures (Cray XC30 and XC40, and IBM Blue Gene/Q), and on a heterogeneous GPU-CPU system (Cray XK7). As the bottlenecks shift from being compute-bound DGEMM's to communication-bound collectives as the size of the molecular system scales, we adopt two radically different parallelization approaches for handling load-imbalance, tasking and bulk synchronous models. Nevertheless, we preserve a unified interface to both programming models to maintain the productivity of computational quantum chemists.« less

  5. Cross-scale efficient tensor contractions for coupled cluster computations through multiple programming model backends

    DOE PAGES

    Ibrahim, Khaled Z.; Epifanovsky, Evgeny; Williams, Samuel; ...

    2017-03-08

    Coupled-cluster methods provide highly accurate models of molecular structure through explicit numerical calculation of tensors representing the correlation between electrons. These calculations are dominated by a sequence of tensor contractions, motivating the development of numerical libraries for such operations. While based on matrix–matrix multiplication, these libraries are specialized to exploit symmetries in the molecular structure and in electronic interactions, and thus reduce the size of the tensor representation and the complexity of contractions. The resulting algorithms are irregular and their parallelization has been previously achieved via the use of dynamic scheduling or specialized data decompositions. We introduce our efforts tomore » extend the Libtensor framework to work in the distributed memory environment in a scalable and energy-efficient manner. We achieve up to 240× speedup compared with the optimized shared memory implementation of Libtensor. We attain scalability to hundreds of thousands of compute cores on three distributed-memory architectures (Cray XC30 and XC40, and IBM Blue Gene/Q), and on a heterogeneous GPU-CPU system (Cray XK7). As the bottlenecks shift from being compute-bound DGEMM's to communication-bound collectives as the size of the molecular system scales, we adopt two radically different parallelization approaches for handling load-imbalance, tasking and bulk synchronous models. Nevertheless, we preserve a unified interface to both programming models to maintain the productivity of computational quantum chemists.« less

  6. A Review of Tensors and Tensor Signal Processing

    NASA Astrophysics Data System (ADS)

    Cammoun, L.; Castaño-Moraga, C. A.; Muñoz-Moreno, E.; Sosa-Cabrera, D.; Acar, B.; Rodriguez-Florido, M. A.; Brun, A.; Knutsson, H.; Thiran, J. P.

    Tensors have been broadly used in mathematics and physics, since they are a generalization of scalars or vectors and allow to represent more complex properties. In this chapter we present an overview of some tensor applications, especially those focused on the image processing field. From a mathematical point of view, a lot of work has been developed about tensor calculus, which obviously is more complex than scalar or vectorial calculus. Moreover, tensors can represent the metric of a vector space, which is very useful in the field of differential geometry. In physics, tensors have been used to describe several magnitudes, such as the strain or stress of materials. In solid mechanics, tensors are used to define the generalized Hooke’s law, where a fourth order tensor relates the strain and stress tensors. In fluid dynamics, the velocity gradient tensor provides information about the vorticity and the strain of the fluids. Also an electromagnetic tensor is defined, that simplifies the notation of the Maxwell equations. But tensors are not constrained to physics and mathematics. They have been used, for instance, in medical imaging, where we can highlight two applications: the diffusion tensor image, which represents how molecules diffuse inside the tissues and is broadly used for brain imaging; and the tensorial elastography, which computes the strain and vorticity tensor to analyze the tissues properties. Tensors have also been used in computer vision to provide information about the local structure or to define anisotropic image filters.

  7. A Generalized Framework for Reduced-Order Modeling of a Wind Turbine Wake

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hamilton, Nicholas; Viggiano, Bianca; Calaf, Marc

    A reduced-order model for a wind turbine wake is sought from large eddy simulation data. Fluctuating velocity fields are combined in the correlation tensor to form the kernel of the proper orthogonal decomposition (POD). Proper orthogonal decomposition modes resulting from the decomposition represent the spatially coherent turbulence structures in the wind turbine wake; eigenvalues delineate the relative amount of turbulent kinetic energy associated with each mode. Back-projecting the POD modes onto the velocity snapshots produces dynamic coefficients that express the amplitude of each mode in time. A reduced-order model of the wind turbine wake (wakeROM) is defined through a seriesmore » of polynomial parameters that quantify mode interaction and the evolution of each POD mode coefficients. The resulting system of ordinary differential equations models the wind turbine wake composed only of the large-scale turbulent dynamics identified by the POD. Tikhonov regularization is used to recalibrate the dynamical system by adding additional constraints to the minimization seeking polynomial parameters, reducing error in the modeled mode coefficients. The wakeROM is periodically reinitialized with new initial conditions found by relating the incoming turbulent velocity to the POD mode coefficients through a series of open-loop transfer functions. The wakeROM reproduces mode coefficients to within 25.2%, quantified through the normalized root-mean-square error. A high-level view of the modeling approach is provided as a platform to discuss promising research directions, alternate processes that could benefit stability and efficiency, and desired extensions of the wakeROM.« less

  8. A study of perturbations in scalar-tensor theory using 1 + 3 covariant approach

    NASA Astrophysics Data System (ADS)

    Ntahompagaze, Joseph; Abebe, Amare; Mbonye, Manasse

    This work discusses scalar-tensor theories of gravity, with a focus on the Brans-Dicke sub-class, and one that also takes note of the latter’s equivalence with f(R) gravitation theories. A 1 + 3 covariant formalism is used in this case to discuss covariant perturbations on a background Friedmann-Laimaître-Robertson-Walker (FLRW) spacetime. Linear perturbation equations are developed based on gauge-invariant gradient variables. Both scalar and harmonic decompositions are applied to obtain second-order equations. These equations can then be used for further analysis of the behavior of the perturbation quantities in such a scalar-tensor theory of gravitation. Energy density perturbations are studied for two systems, namely for a scalar fluid-radiation system and for a scalar fluid-dust system, for Rn models. For the matter-dominated era, it is shown that the dust energy density perturbations grow exponentially, a result which agrees with those already existing in the literatures. In the radiation-dominated era, it is found that the behavior of the radiation energy-density perturbations is oscillatory, with growing amplitudes for n > 1, and with decaying amplitudes for 0 < n < 1. This is a new result.

  9. Matrix- and tensor-based recommender systems for the discovery of currently unknown inorganic compounds

    NASA Astrophysics Data System (ADS)

    Seko, Atsuto; Hayashi, Hiroyuki; Kashima, Hisashi; Tanaka, Isao

    2018-01-01

    Chemically relevant compositions (CRCs) and atomic arrangements of inorganic compounds have been collected as inorganic crystal structure databases. Machine learning is a unique approach to search for currently unknown CRCs from vast candidates. Herein we propose matrix- and tensor-based recommender system approaches to predict currently unknown CRCs from database entries of CRCs. Firstly, the performance of the recommender system approaches to discover currently unknown CRCs is examined. A Tucker decomposition recommender system shows the best discovery rate of CRCs as the majority of the top 100 recommended ternary and quaternary compositions correspond to CRCs. Secondly, systematic density functional theory (DFT) calculations are performed to investigate the phase stability of the recommended compositions. The phase stability of the 27 compositions reveals that 23 currently unknown compounds are newly found to be stable. These results indicate that the recommender system has great potential to accelerate the discovery of new compounds.

  10. Low rank factorization of the Coulomb integrals for periodic coupled cluster theory.

    PubMed

    Hummel, Felix; Tsatsoulis, Theodoros; Grüneis, Andreas

    2017-03-28

    We study a tensor hypercontraction decomposition of the Coulomb integrals of periodic systems where the integrals are factorized into a contraction of six matrices of which only two are distinct. We find that the Coulomb integrals can be well approximated in this form already with small matrices compared to the number of real space grid points. The cost of computing the matrices scales as O(N 4 ) using a regularized form of the alternating least squares algorithm. The studied factorization of the Coulomb integrals can be exploited to reduce the scaling of the computational cost of expensive tensor contractions appearing in the amplitude equations of coupled cluster methods with respect to system size. We apply the developed methodologies to calculate the adsorption energy of a single water molecule on a hexagonal boron nitride monolayer in a plane wave basis set and periodic boundary conditions.

  11. Reducing tensor magnetic gradiometer data for unexploded ordnance detection

    USGS Publications Warehouse

    Bracken, Robert E.; Brown, Philip J.

    2005-01-01

    We performed a survey to demonstrate the effectiveness of a prototype tensor magnetic gradiometer system (TMGS) for detection of buried unexploded ordnance (UXO). In order to achieve a useful result, we designed a data-reduction procedure that resulted in a realistic magnetic gradient tensor and devised a simple way of viewing complicated tensor data, not only to assess the validity of the final resulting tensor, but also to preview the data at interim stages of processing. The final processed map of the surveyed area clearly shows a sharp anomaly that peaks almost directly over the target UXO. This map agrees well with a modeled map derived from dipolar sources near the known target locations. From this agreement, it can be deduced that the reduction process is valid, making the prototype TMGS a foundation for development of future systems and processes.

  12. Scalar/Vector potential formulation for compressible viscous unsteady flows

    NASA Technical Reports Server (NTRS)

    Morino, L.

    1985-01-01

    A scalar/vector potential formulation for unsteady viscous compressible flows is presented. The scalar/vector potential formulation is based on the classical Helmholtz decomposition of any vector field into the sum of an irrotational and a solenoidal field. The formulation is derived from fundamental principles of mechanics and thermodynamics. The governing equations for the scalar potential and vector potential are obtained, without restrictive assumptions on either the equation of state or the constitutive relations or the stress tensor and the heat flux vector.

  13. Magnetofluid dynamics in curved spacetime

    NASA Astrophysics Data System (ADS)

    Bhattacharjee, Chinmoy; Das, Rupam; Mahajan, S. M.

    2015-03-01

    A grand unified field Mμ ν is constructed from Maxwell's field tensor and an appropriately modified flow field, both nonminimally coupled to gravity, to analyze the dynamics of hot charged fluids in curved background space-time. With a suitable 3 +1 decomposition, this new formalism of the hot fluid is then applied to investigate the vortical dynamics of the system. Finally, the equilibrium state for plasma with nonminimal coupling through Ricci scalar R to gravity is investigated to derive a double Beltrami equation in curved space-time.

  14. Extracting the potential-well of a near-field optical trap using the Helmholtz-Hodge decomposition

    NASA Astrophysics Data System (ADS)

    Zaman, Mohammad Asif; Padhy, Punnag; Hansen, Paul C.; Hesselink, Lambertus

    2018-02-01

    The non-conservative nature of the force field generated by a near-field optical trap is analyzed. A plasmonic C-shaped engraving on a gold film is considered as the trap. The force field is calculated using the Maxwell stress tensor method. The Helmholtz-Hodge decomposition is used to extract the conservative and the non-conservative component of the force. Due to the non-negligible non-conservative component, it is found that the conventional approach of extracting the potential by direct integration of the force is not accurate. Despite the non-conservative nature of the force field, it is found that the statistical properties of a trapped nanoparticle can be estimated from the conservative component of the force field alone. Experimental and numerical results are presented to support the claims.

  15. Multiview robotic microscope reveals the in-plane kinematics of amphibian neurulation.

    PubMed

    Veldhuis, Jim H; Brodland, G Wayne; Wiebe, Colin J; Bootsma, Gregory J

    2005-06-01

    A new robotic microscope system, called the Frogatron 3000, was developed to collect time-lapse images from arbitrary viewing angles over the surface of live embryos. Embryos are mounted at the center of a horizontal, fluid-filled, cylindrical glass chamber around which a camera with special optics traverses. To hold them at the center of the chamber and revolve them about a vertical axis, the embryos are placed on the end of a small vertical glass tube that is rotated under computer control. To demonstrate operation of the system, it was used to capture time-lapse images of developing axolotl (amphibian) embryos from 63 viewing angles during the process of neurulation and the in-plane kinematics of the epithelia visible at the center of each view was calculated. The motions of points on the surface of the embryo were determined by digital tracking of their natural surface texture, and a least-squares algorithm was developed to calculate the deformation-rate tensor from the motions of these surface points. Principal strain rates and directions were extracted from this tensor using decomposition and eigenvector techniques. The highest observed principal true strain rate was 28 +/- 5% per hour, along the midline of the neural plate during developmental stage 14, while the greatest contractile true strain rate was--35 +/- 5% per hour, normal to the embryo midline during stage 15.

  16. Dynamic field theory and equations of motion in cosmology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kopeikin, Sergei M., E-mail: kopeikins@missouri.edu; Petrov, Alexander N., E-mail: alex.petrov55@gmail.com

    2014-11-15

    We discuss a field-theoretical approach based on general-relativistic variational principle to derive the covariant field equations and hydrodynamic equations of motion of baryonic matter governed by cosmological perturbations of dark matter and dark energy. The action depends on the gravitational and matter Lagrangian. The gravitational Lagrangian depends on the metric tensor and its first and second derivatives. The matter Lagrangian includes dark matter, dark energy and the ordinary baryonic matter which plays the role of a bare perturbation. The total Lagrangian is expanded in an asymptotic Taylor series around the background cosmological manifold defined as a solution of Einstein’s equationsmore » in the form of the Friedmann–Lemaître–Robertson–Walker (FLRW) metric tensor. The small parameter of the decomposition is the magnitude of the metric tensor perturbation. Each term of the series expansion is gauge-invariant and all of them together form a basis for the successive post-Friedmannian approximations around the background metric. The approximation scheme is covariant and the asymptotic nature of the Lagrangian decomposition does not require the post-Friedmannian perturbations to be small though computationally it works the most effectively when the perturbed metric is close enough to the background FLRW metric. The temporal evolution of the background metric is governed by dark matter and dark energy and we associate the large scale inhomogeneities in these two components as those generated by the primordial cosmological perturbations with an effective matter density contrast δρ/ρ≤1. The small scale inhomogeneities are generated by the condensations of baryonic matter considered as the bare perturbations of the background manifold that admits δρ/ρ≫1. 
Mathematically, the large scale perturbations are given by the homogeneous solution of the linearized field equations while the small scale perturbations are described by a particular solution of these equations with the bare stress–energy tensor of the baryonic matter. We explicitly work out the covariant field equations of the successive post-Friedmannian approximations of Einstein’s equations in cosmology and derive equations of motion of large and small scale inhomogeneities of dark matter and dark energy. We apply these equations to derive the post-Friedmannian equations of motion of baryonic matter comprising stars, galaxies and their clusters.« less

  17. Geodesic-loxodromes for diffusion tensor interpolation and difference measurement.

    PubMed

    Kindlmann, Gordon; Estépar, Raúl San José; Niethammer, Marc; Haker, Steven; Westin, Carl-Fredrik

    2007-01-01

    In algorithms for processing diffusion tensor images, two common ingredients are interpolating tensors, and measuring the distance between them. We propose a new class of interpolation paths for tensors, termed geodesic-loxodromes, which explicitly preserve clinically important tensor attributes, such as mean diffusivity or fractional anisotropy, while using basic differential geometry to interpolate tensor orientation. This contrasts with previous Riemannian and Log-Euclidean methods that preserve the determinant. Path integrals of tangents of geodesic-loxodromes generate novel measures of over-all difference between two tensors, and of difference in shape and in orientation.

  18. Generic, network schema agnostic sparse tensor factorization for single-pass clustering of heterogeneous information networks

    PubMed Central

    Meng, Qinggang; Deng, Su; Huang, Hongbin; Wu, Yahui; Badii, Atta

    2017-01-01

    Heterogeneous information networks (e.g. bibliographic networks and social media networks) that consist of multiple interconnected objects are ubiquitous. Clustering analysis is an effective method to understand the semantic information and interpretable structure of the heterogeneous information networks, and it has attracted the attention of many researchers in recent years. However, most studies assume that heterogeneous information networks usually follow some simple schemas, such as bi-typed networks or star network schema, and they can only cluster one type of object in the network each time. In this paper, a novel clustering framework is proposed based on sparse tensor factorization for heterogeneous information networks, which can cluster multiple types of objects simultaneously in a single pass without any network schema information. The types of objects and the relations between them in the heterogeneous information networks are modeled as a sparse tensor. The clustering issue is modeled as an optimization problem, which is similar to the well-known Tucker decomposition. Then, an Alternating Least Squares (ALS) algorithm and a feasible initialization method are proposed to solve the optimization problem. Based on the tensor factorization, we simultaneously partition different types of objects into different clusters. The experimental results on both synthetic and real-world datasets have demonstrated that our proposed clustering framework, STFClus, can model heterogeneous information networks efficiently and can outperform state-of-the-art clustering algorithms as a generally applicable single-pass clustering method for heterogeneous network which is network schema agnostic. PMID:28245222

  19. Irreducible structure, symmetry and average of Eshelby's tensor fields in isotropic elasticity

    NASA Astrophysics Data System (ADS)

    Zheng, Q.-S.; Zhao, Z.-H.; Du, D.-X.

    2006-02-01

    The strain field ɛ(x) in an infinitely large, homogeneous, and isotropic elastic medium induced by a uniform eigenstrain ɛ0 in a domain ω depends linearly upon ɛ0: ɛ_ij(x) = S^ω_ijkl(x) ɛ0_kl. It has been a long-standing conjecture that Eshelby's tensor field S^ω(x) is uniform inside ω if and only if ω is ellipsoidally shaped. Because of the minor index symmetry S^ω_ijkl = S^ω_jikl = S^ω_ijlk, S^ω has at most 36 or nine independent components in three or two dimensions, respectively. In this paper, using the irreducible decomposition of S^ω, we show that the isotropic part S of S^ω vanishes outside ω and is uniform inside ω, with the same value as the Eshelby tensor S0 for 3D spherical or 2D circular domains. We further show that the anisotropic part A^ω = S^ω - S of S^ω is characterized by a second-order and a fourth-order deviatoric tensor and therefore has at most 14 or four independent components as characteristics of ω's geometry. Remarkably, the above irreducible structure of S^ω is independent of ω's geometry (e.g., shape, orientation, connectedness, convexity, boundary smoothness, etc.). These results have interesting implications for a number of recent findings; for example, both the value of S^ω at the center of a 2D Cn (n⩾3, n≠4)-symmetric or 3D icosahedral ω and the average value of S^ω over such an ω are equal to S0.
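
    The tensor S0 for a spherical domain referred to above has a classical closed form, which can be checked numerically; a sketch (the formula is the standard textbook expression for a spherical inclusion, not taken from this paper):

```python
import numpy as np

def eshelby_sphere(nu):
    """Interior Eshelby tensor S0 for a spherical inclusion in an
    isotropic medium with Poisson ratio nu (classical closed form)."""
    d = np.eye(3)
    S = np.zeros((3, 3, 3, 3))
    a = (5.0 * nu - 1.0) / (15.0 * (1.0 - nu))
    b = (4.0 - 5.0 * nu) / (15.0 * (1.0 - nu))
    for i in range(3):
        for j in range(3):
            for k in range(3):
                for l in range(3):
                    S[i, j, k, l] = (a * d[i, j] * d[k, l]
                                     + b * (d[i, k] * d[j, l] + d[i, l] * d[j, k]))
    return S

nu = 0.3
S = eshelby_sphere(nu)
# Minor index symmetries hold by construction, and a hydrostatic
# eigenstrain maps to (1+nu)/(3(1-nu)) times itself (Eshelby's
# classical dilatation result).
strain = np.einsum('ijkl,kl->ij', S, np.eye(3))
```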

  20. Generic, network schema agnostic sparse tensor factorization for single-pass clustering of heterogeneous information networks.

    PubMed

    Wu, Jibing; Meng, Qinggang; Deng, Su; Huang, Hongbin; Wu, Yahui; Badii, Atta

    2017-01-01

    Heterogeneous information networks (e.g. bibliographic networks and social media networks) that consist of multiple interconnected objects are ubiquitous. Clustering analysis is an effective method to understand the semantic information and interpretable structure of the heterogeneous information networks, and it has attracted the attention of many researchers in recent years. However, most studies assume that heterogeneous information networks usually follow some simple schemas, such as bi-typed networks or star network schema, and they can only cluster one type of object in the network each time. In this paper, a novel clustering framework is proposed based on sparse tensor factorization for heterogeneous information networks, which can cluster multiple types of objects simultaneously in a single pass without any network schema information. The types of objects and the relations between them in the heterogeneous information networks are modeled as a sparse tensor. The clustering issue is modeled as an optimization problem, which is similar to the well-known Tucker decomposition. Then, an Alternating Least Squares (ALS) algorithm and a feasible initialization method are proposed to solve the optimization problem. Based on the tensor factorization, we simultaneously partition different types of objects into different clusters. The experimental results on both synthetic and real-world datasets have demonstrated that our proposed clustering framework, STFClus, can model heterogeneous information networks efficiently and can outperform state-of-the-art clustering algorithms as a generally applicable single-pass clustering method for heterogeneous network which is network schema agnostic.

  1. Conformal and Nearly Conformal Theories at Large N

    NASA Astrophysics Data System (ADS)

    Tarnoplskiy, Grigory M.

    In this thesis we present new results in conformal and nearly conformal field theories in various dimensions. In chapter two, we study different properties of conformal Quantum Electrodynamics (QED) in continuous dimension d. First we study conformal QED using large Nf methods, where Nf is the number of massless fermions. We compute its sphere free energy as a function of d, ignoring terms of order 1/Nf and higher. For finite Nf we use the epsilon expansion. Next we use a large Nf diagrammatic approach to calculate the leading corrections to CT, the coefficient of the two-point function of the stress-energy tensor, and CJ, the coefficient of the two-point function of the global symmetry current. We present explicit formulae as functions of d and check them against the expectations in 2 and 4 - epsilon dimensions. In chapter three, we discuss vacuum stability in 1 + 1 dimensional conformal field theories with external background fields. We show that the vacuum decay rate is given by a non-local two-form. This two-form is a boundary term that must be added to the effective in/out Lagrangian. The two-form is expressed in terms of a Riemann-Hilbert decomposition for background gauge fields, and is given by its novel "functional" version in the gravitational case. In chapter four, we explore tensor models. Such models possess a large N limit dominated by the melon diagrams. The quantum mechanics of a real anti-commuting rank-3 tensor has a large N limit similar to the Sachdev-Ye-Kitaev (SYK) model. We also discuss the quantum mechanics of a complex 3-index anti-commuting tensor and argue that it is equivalent in the large N limit to a version of the SYK model with complex fermions. Finally, we discuss models of a commuting tensor in dimension d. We study the spectrum of the large N quantum field theory of bosonic rank-3 tensors using the Schwinger-Dyson equations. We compare some of these results with the 4 - epsilon expansion, finding perfect agreement. We also study the spectra of bosonic theories of rank q - 1 tensors with φq interactions.

  2. Tensor decomposition-based and principal-component-analysis-based unsupervised feature extraction applied to the gene expression and methylation profiles in the brains of social insects with multiple castes.

    PubMed

    Taguchi, Y-H

    2018-05-08

    Even though coexistence of multiple phenotypes sharing the same genomic background is interesting, it remains incompletely understood. Epigenomic profiles may represent key factors, with unknown contributions to the development of multiple phenotypes, and social-insect castes are a good model for elucidation of the underlying mechanisms. Nonetheless, previous studies have failed to identify genes associated with aberrant gene expression and methylation profiles because of the lack of suitable methodology that can address this problem properly. A recently proposed principal component analysis (PCA)-based and tensor decomposition (TD)-based unsupervised feature extraction (FE) can solve this problem because these two approaches can deal with gene expression and methylation profiles even when a small number of samples is available. PCA-based and TD-based unsupervised FE methods were applied to the analysis of gene expression and methylation profiles in the brains of two social insects, Polistes canadensis and Dinoponera quadriceps. Genes associated with differential expression and methylation between castes were identified, and analysis of enrichment of Gene Ontology terms confirmed reliability of the obtained sets of genes from the biological standpoint. Biologically relevant genes, shown to be associated with significant differential gene expression and methylation between castes, were identified here for the first time. The identification of these genes may help understand the mechanisms underlying epigenetic control of development of multiple phenotypes under the same genomic conditions.

  3. Tensor Algebra Library for NVidia Graphics Processing Units

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liakh, Dmitry

    This is a general purpose math library implementing basic tensor algebra operations on NVidia GPU accelerators. The library can perform basic tensor algebra operations, including tensor contractions, tensor products, tensor additions, etc., on NVidia GPU accelerators, asynchronously with respect to the CPU host. It supports simultaneous use of multiple NVidia GPUs. Each asynchronous API function returns a handle which can later be used for querying the completion of the corresponding tensor algebra operation on a specific GPU. The tensors participating in a particular tensor operation are assumed to be stored in the local RAM of a node or in GPU RAM. The main research area where this library can be utilized is quantum many-body theory (e.g., electronic structure theory).
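
    The kind of operation such a library accelerates is a tensor contraction; a CPU-side sketch with NumPy's einsum shows how a contraction reduces to a reshaped matrix-matrix multiplication (the library's actual API is not shown here):

```python
import numpy as np

# Contract indices b, c of A with indices b, c of B, leaving a rank-2
# result. On a GPU library this maps to (batched) matrix multiplication;
# here a CPU sketch using einsum.
rng = np.random.default_rng(1)
A = rng.standard_normal((4, 5, 6))      # A[a, b, c]
B = rng.standard_normal((5, 6, 7))      # B[b, c, d]
C = np.einsum('abc,bcd->ad', A, B)      # C[a, d] = sum_{b,c} A[a,b,c] B[b,c,d]

# Equivalent reshaped matrix product (what such libraries do internally):
C_ref = A.reshape(4, 30) @ B.reshape(30, 7)
print(np.allclose(C, C_ref))  # True
```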

  4. A Continuum Damage Mechanics Model to Predict Kink-Band Propagation Using Deformation Gradient Tensor Decomposition

    NASA Technical Reports Server (NTRS)

    Bergan, Andrew C.; Leone, Frank A., Jr.

    2016-01-01

    A new model is proposed that represents the kinematics of kink-band formation and propagation within the framework of a mesoscale continuum damage mechanics (CDM) model. The model uses the recently proposed deformation gradient decomposition approach to represent a kink band as a displacement jump via a cohesive interface that is embedded in an elastic bulk material. The model is capable of representing the combination of matrix failure in the frame of a misaligned fiber and instability due to shear nonlinearity. In contrast to conventional linear or bilinear strain softening laws used in most mesoscale CDM models for longitudinal compression, the constitutive response of the proposed model includes features predicted by detailed micromechanical models. These features include: 1) the rotational kinematics of the kink band, 2) an instability when the peak load is reached, and 3) a nonzero plateau stress under large strains.

  5. On computing stress in polymer systems involving multi-body potentials from molecular dynamics simulation

    NASA Astrophysics Data System (ADS)

    Fu, Yao; Song, Jeong-Hoon

    2014-08-01

    Hardy stress definition has been restricted to pair potentials and embedded-atom method potentials due to the basic assumptions in the derivation of a symmetric microscopic stress tensor. Force decomposition required in the Hardy stress expression becomes obscure for multi-body potentials. In this work, we demonstrate the invariance of the Hardy stress expression for a polymer system modeled with multi-body interatomic potentials including up to four atoms interaction, by applying central force decomposition of the atomic force. The balance of momentum has been demonstrated to be valid theoretically and tested under various numerical simulation conditions. The validity of momentum conservation justifies the extension of Hardy stress expression to multi-body potential systems. Computed Hardy stress has been observed to converge to the virial stress of the system with increasing spatial averaging volume. This work provides a feasible and reliable linkage between the atomistic and continuum scales for multi-body potential systems.
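
    The virial stress that the Hardy estimate converges to can be sketched for the simplest case of a pair potential; this is only the potential (configurational) part under one common sign convention, not the paper's multi-body central-force decomposition:

```python
import numpy as np

def lj_force(rij, eps=1.0, sig=1.0):
    """Force on atom i due to atom j for a Lennard-Jones pair potential."""
    r = np.linalg.norm(rij)
    mag = 24.0 * eps * (2.0 * (sig / r) ** 12 - (sig / r) ** 6) / r
    return mag * rij / r

def virial_stress(positions, volume):
    """Potential part of the virial stress for a pair potential:
    sigma = -(1/2V) sum_{i != j} r_ij (x) f_ij."""
    n = len(positions)
    sigma = np.zeros((3, 3))
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            rij = positions[i] - positions[j]
            sigma -= 0.5 * np.outer(rij, lj_force(rij))
    return sigma / volume

pos = np.array([[0.0, 0.0, 0.0], [1.2, 0.0, 0.0], [0.0, 1.2, 0.0]])
sigma = virial_stress(pos, volume=8.0)
# For a central pair force, f_ij is parallel to r_ij, so the resulting
# stress tensor is symmetric -- the property that becomes nontrivial
# for the multi-body potentials treated in the paper.
```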

  6. Covariant Conformal Decomposition of Einstein Equations

    NASA Astrophysics Data System (ADS)

    Gourgoulhon, E.; Novak, J.

    It has been shown [1,2] that the usual 3+1 form of Einstein's equations may be ill-posed. This result has been previously observed in numerical simulations [3,4]. We present a 3+1 type formalism inspired by these works to decompose Einstein's equations. This decomposition is motivated by the aim of stable numerical implementation and resolution of the equations. We introduce the conformal 3-"metric" (scaled by the determinant of the usual 3-metric), which is a tensor density of weight -2/3. The Einstein equations are then derived in terms of this "metric", of the conformal extrinsic curvature, and in terms of the associated derivative. We also introduce a flat 3-metric (the asymptotic metric for isolated systems) and the associated derivative. Finally, the generalized Dirac gauge (introduced by Smarr and York [5]) is used in this formalism and some examples of formulation of Einstein's equations are shown.
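
    The scaling that defines the conformal 3-metric is easy to state concretely: dividing the physical 3-metric by the cube root of its determinant yields a unimodular "metric" (a tensor density of weight -2/3). A minimal numerical sketch (the example metric is illustrative):

```python
import numpy as np

def conformal_metric(gamma):
    """Conformal 3-metric scaled by the determinant of the physical
    3-metric so that det(conformal metric) = 1."""
    return gamma / np.linalg.det(gamma) ** (1.0 / 3.0)

# A physical 3-metric (symmetric positive definite)
gamma = np.array([[2.0, 0.1, 0.0],
                  [0.1, 1.5, 0.2],
                  [0.0, 0.2, 1.0]])
gt = conformal_metric(gamma)
# Scaling a 3x3 matrix by c scales its determinant by c**3, so det(gt) = 1.
```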

  7. Retrodictive determinism. [covariant and transformational behavior of tensor fields in hydrodynamics and thermodynamics

    NASA Technical Reports Server (NTRS)

    Kiehn, R. M.

    1976-01-01

    With respect to irreversible, non-homeomorphic maps, contravariant and covariant tensor fields have distinctly natural covariance and transformational behavior. For thermodynamic processes which are non-adiabatic, the fact that the process cannot be represented by a homeomorphic map emphasizes the logical arrow of time, an idea which encompasses a principle of retrodictive determinism for covariant tensor fields.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ibrahim, Khaled Z.; Epifanovsky, Evgeny; Williams, Samuel W.

    Coupled-cluster methods provide highly accurate models of molecular structure by explicit numerical calculation of tensors representing the correlation between electrons. These calculations are dominated by a sequence of tensor contractions, motivating the development of numerical libraries for such operations. While based on matrix-matrix multiplication, these libraries are specialized to exploit symmetries in the molecular structure and in electronic interactions, and thus reduce the size of the tensor representation and the complexity of contractions. The resulting algorithms are irregular and their parallelization has been previously achieved via the use of dynamic scheduling or specialized data decompositions. We introduce our efforts to extend the Libtensor framework to work in the distributed memory environment in a scalable and energy efficient manner. We achieve up to a 240× speedup compared with the best optimized shared memory implementation. We attain scalability to hundreds of thousands of compute cores on three distributed-memory architectures (Cray XC30 and XC40, BlueGene/Q) and on a heterogeneous GPU-CPU system (Cray XK7). As the bottlenecks shift from compute-bound DGEMMs to communication-bound collectives with growing molecular system size, we adopt two radically different parallelization approaches for handling load imbalance. Nevertheless, we preserve a unified interface to both programming models to maintain the productivity of computational quantum chemists.

  9. Molecular orbital analysis of the inverse halogen dependence of nuclear magnetic shielding in LaX₃, X = F, Cl, Br, I.

    PubMed

    Moncho, Salvador; Autschbach, Jochen

    2010-12-01

    The NMR nuclear shielding tensors for the series LaX(3), with X = F, Cl, Br and I, have been computed using two-component relativistic density functional theory based on the zeroth-order regular approximation (ZORA). A detailed analysis of the inverse halogen dependence (IHD) of the La shielding was performed via decomposition of the shielding tensor elements into contributions from localized and delocalized molecular orbitals. Both spin-orbit and paramagnetic shielding terms are important, with the paramagnetic terms being dominant. Major contributions to the IHD can be attributed to the La-X bonding orbitals, as well as to trends associated with the La core and halogen lone pair orbitals, the latter being related to X-La π donation. An 'orbital rotation' model for the in-plane π acceptor f orbital of La helps to rationalize the significant magnitude of deshielding associated with the in-plane π donation. The IHD goes along with a large increase in the shielding tensor anisotropy as X becomes heavier, which can be associated with trends for the covalency of the La-X bonds, with a particularly effective transfer of spin-orbit coupling induced spin density from iodine to La in LaI(3). Copyright © 2010 John Wiley & Sons, Ltd.

  10. Iterative image reconstruction for multienergy computed tomography via structure tensor total variation regularization

    NASA Astrophysics Data System (ADS)

    Zeng, Dong; Bian, Zhaoying; Gong, Changfei; Huang, Jing; He, Ji; Zhang, Hua; Lu, Lijun; Feng, Qianjin; Liang, Zhengrong; Ma, Jianhua

    2016-03-01

    Multienergy computed tomography (MECT) has the potential to simultaneously offer multiple sets of energy-selective data belonging to specific energy windows. However, because sufficient photon counts are not available in the specific energy windows compared with the whole energy window, MECT images reconstructed by analytical approaches often suffer from poor signal-to-noise ratio (SNR) and strong streak artifacts. To eliminate this drawback, in this work we present a penalized weighted least-squares (PWLS) scheme incorporating the new concept of structure tensor total variation (STV) regularization to improve MECT image quality from low-milliampere-seconds (low-mAs) data acquisitions. Henceforth the present scheme is referred to as 'PWLS-STV' for simplicity. Specifically, the STV regularization is derived by penalizing the eigenvalues of the structure tensor at every point in the MECT images. Thus it can provide more robust measures of image variation, which can eliminate the patchy artifacts often observed with total variation regularization. Subsequently, an alternating optimization algorithm was adopted to minimize the objective function. Experiments with a digital XCAT phantom clearly demonstrate that the present PWLS-STV algorithm achieves greater gains than existing TV-based algorithms and the conventional filtered backprojection (FBP) algorithm in terms of noise-induced artifact suppression, resolution preservation, and material decomposition assessment.
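
    The structure tensor whose eigenvalues an STV-type regularizer penalizes can be computed per pixel as a Gaussian-smoothed outer product of image gradients; a sketch (parameter choices illustrative, not from the paper):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def structure_tensor_eigvals(img, sigma=1.0):
    """Per-pixel eigenvalues of the 2x2 structure tensor
    J = G_sigma * (grad I grad I^T)."""
    gy, gx = np.gradient(img.astype(float))   # gradients along rows, cols
    Jxx = gaussian_filter(gx * gx, sigma)
    Jxy = gaussian_filter(gx * gy, sigma)
    Jyy = gaussian_filter(gy * gy, sigma)
    tr = Jxx + Jyy
    disc = np.sqrt(np.maximum((Jxx - Jyy) ** 2 + 4.0 * Jxy ** 2, 0.0))
    return 0.5 * (tr + disc), 0.5 * (tr - disc)

img = np.zeros((32, 32))
img[:, 16:] = 1.0                             # vertical step edge
lam1, lam2 = structure_tensor_eigvals(img)
# An STV penalty would sum a norm of (lam1, lam2) over pixels; J is a
# smoothed sum of rank-1 PSD matrices, so both eigenvalues are nonnegative.
```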

  11. Implicit constitutive models with a thermodynamic basis: a study of stress concentration

    NASA Astrophysics Data System (ADS)

    Bridges, C.; Rajagopal, K. R.

    2015-02-01

    Motivated by the recent generalization of the class of elastic bodies by Rajagopal (Appl Math 48:279-319, 2003), there have been several recent studies carried out within the context of this new class. Rajagopal and Srinivasa (Proc R Soc Ser A 463:357-367, 2007; Proc R Soc Ser A: Math Phys Eng Sci 465:493-500, 2009) provided a thermodynamic basis for such models and, appealing to the idea that the rate of entropy production ought to be maximized, developed nonlinear rate equations relating the Cauchy stress T and the stretching tensor D, as well as relations between the Piola-Kirchhoff stress tensor S and the Green-St. Venant strain tensor E. We follow a similar procedure by utilizing the Gibbs potential and the left stretch tensor V from the polar decomposition of the deformation gradient, and we show that when the displacement gradient is small one arrives at constitutive relations in which the linearized strain is a nonlinear function of the stress, ε = f(T). This is, of course, in stark contrast to traditional elasticity, wherein one obtains a single model, Hooke's law, when the displacement gradient is small. By solving a classical boundary value problem, with a particular form for f(T), we show that when the stresses are small, the strains are also small, in agreement with traditional elasticity. However, within the context of our model, when the stress blows up the strains remain small, unlike the implications of Hooke's law. We use this model to study boundary value problems in annular domains to illustrate its efficacy.

  12. Multilinear Graph Embedding: Representation and Regularization for Images.

    PubMed

    Chen, Yi-Lei; Hsu, Chiou-Ting

    2014-02-01

    Given a set of images, finding a compact and discriminative representation is still a big challenge especially when multiple latent factors are hidden in the way of data generation. To represent multifactor images, although multilinear models are widely used to parameterize the data, most methods are based on high-order singular value decomposition (HOSVD), which preserves global statistics but interprets local variations inadequately. To this end, we propose a novel method, called multilinear graph embedding (MGE), as well as its kernelization MKGE to leverage the manifold learning techniques into multilinear models. Our method theoretically links the linear, nonlinear, and multilinear dimensionality reduction. We also show that the supervised MGE encodes informative image priors for image regularization, provided that an image is represented as a high-order tensor. From our experiments on face and gait recognition, the superior performance demonstrates that MGE better represents multifactor images than classic methods, including HOSVD and its variants. In addition, the significant improvement in image (or tensor) completion validates the potential of MGE for image regularization.

  13. Infinite matter properties and zero-range limit of non-relativistic finite-range interactions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davesne, D.; Becker, P., E-mail: pbecker@ipnl.in2p3.fr; Pastore, A.

    2016-12-15

    We discuss some infinite matter properties of two finite-range interactions widely used for nuclear structure calculations, namely the Gogny and M3Y interactions. We show that some useful information can be deduced for the central, tensor and spin–orbit terms from the spin–isospin channels and the partial wave decomposition of the symmetric nuclear matter equation of state. We show in particular that the central part of the Gogny interaction should benefit from the introduction of a third Gaussian, and that the tensor parameters of both interactions can be deduced from special combinations of partial waves. We also discuss the fact that the spin–orbit term of the M3Y interaction is not compatible with local gauge invariance. Finally, we show that the zero-range limit of both families of interactions coincides with the specific form of the zero-range Skyrme interaction extended to higher momentum orders, and we emphasize from this analogy its benefits.

  14. Symmetry operators and decoupled equations for linear fields on black hole spacetimes

    NASA Astrophysics Data System (ADS)

    Araneda, Bernardo

    2017-02-01

    In the class of vacuum Petrov type D spacetimes with cosmological constant, which includes the Kerr-(A)dS black hole as a particular case, we find a set of four-dimensional operators that, when composed off shell with the Dirac, Maxwell and linearized gravity equations, give a system of equations for spin weighted scalars associated with the linear fields, that decouple on shell. Using these operator relations we give compact reconstruction formulae for solutions of the original spinor and tensor field equations in terms of solutions of the decoupled scalar equations. We also analyze the role of Killing spinors and Killing-Yano tensors in the spin weight zero equations and, in the case of spherical symmetry, we compare our four-dimensional formulation with the standard 2  +  2 decomposition and particularize to the Schwarzschild-(A)dS black hole. Our results uncover a pattern that generalizes a number of previous results on Teukolsky-like equations and Debye potentials for higher spin fields.

  15. Numerical simulation of a compressible homogeneous, turbulent shear flow. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Feiereisen, W. J.; Reynolds, W. C.; Ferziger, J. H.

    1981-01-01

    A direct, low Reynolds number, numerical simulation was performed on a homogeneous turbulent shear flow. The full compressible Navier-Stokes equations were used in a simulation on the ILLIAC IV computer with a 64,000 mesh. The flow fields generated by the code are used as an experimental data base to examine the behavior of the Reynolds stresses in this simple, compressible flow. The variation of the structure of the stresses and their dynamic equations as the character of the flow changed is emphasized. The structure of the stress tensor is more heavily dependent on the shear number and less on the fluctuating Mach number. The pressure-strain correlation tensor in the dynamic equations is directly calculated in this simulation. These correlations are decomposed into several parts, in contrast with the traditional incompressible decomposition into two parts. The performance of existing models for the conventional terms is examined, and a model is proposed for the 'mean fluctuating' part.
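
    The Reynolds stresses examined in the simulation are second moments of the fluctuating velocity; a minimal estimator from velocity samples (synthetic data, not the thesis database):

```python
import numpy as np

# Reynolds decomposition u = U + u' and the Reynolds stress tensor
# R_ij = <u'_i u'_j>, estimated from a set of velocity samples.
rng = np.random.default_rng(2)
samples = rng.standard_normal((10000, 3)) @ np.array([[1.0, 0.4, 0.0],
                                                      [0.0, 1.0, 0.0],
                                                      [0.0, 0.0, 0.5]])
mean = samples.mean(axis=0)            # mean velocity U
fluct = samples - mean                 # fluctuation u'
R = fluct.T @ fluct / len(fluct)       # Reynolds stress tensor <u'_i u'_j>
# By construction R is a symmetric positive semidefinite 3x3 tensor.
```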

  16. A magnetic and electronic circular dichroism study of azurin, plastocyanin, cucumber basic protein, and nitrite reductase based on time-dependent density functional theory calculations.

    PubMed

    Zhekova, Hristina R; Seth, Michael; Ziegler, Tom

    2010-06-03

    The excitation, circular dichroism, magnetic circular dichroism (MCD) and electron paramagnetic resonance (EPR) spectra of small models of four blue copper proteins are simulated at the TDDFT/BP86 level. X-ray diffraction geometries are used for the modeling of the blue copper sites in azurin, plastocyanin, cucumber basic protein, and nitrite reductase. Comparison with experimental data reveals that the calculations reproduce most of the qualitative trends of the observed experimental spectra, with some discrepancies in the orbital decompositions and in the values of the excitation energies, the g∥ components of the g tensor, and the components of the A tensor. These discrepancies are discussed relative to deficiencies in the time-dependent density functional theory (TDDFT) methodology, as opposed to previous studies, which attributed them to insufficient model size or poor performance of the BP86 functional. In addition, attempts are made to elucidate the correlation between the MCD and EPR signals.

  17. Initial data sets for the Schwarzschild spacetime

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gomez-Lobo, Alfonso Garcia-Parrado; Kroon, Juan A. Valiente; School of Mathematical Sciences, Queen Mary, University of London, Mile End Road, London E1 4NS

    2007-01-15

    A characterization of initial data sets for the Schwarzschild spacetime is provided. This characterization is obtained by performing a 3+1 decomposition of a certain invariant characterization of the Schwarzschild spacetime given in terms of concomitants of the Weyl tensor. This procedure renders a set of necessary conditions--which can be written in terms of the electric and magnetic parts of the Weyl tensor and their concomitants--for an initial data set to be a Schwarzschild initial data set. Our approach also provides a formula for a static Killing initial data set candidate--a KID candidate. Sufficient conditions for an initial data set to be a Schwarzschild initial data set are obtained by supplementing the necessary conditions with the requirement that the initial data set possesses a stationary Killing initial data set of the form given by our KID candidate. Thus, we obtain an algorithmic procedure for checking whether a given initial data set is Schwarzschildean or not.

  18. Full magnetic gradient tensor from triaxial aeromagnetic gradient measurements: Calculation and application

    NASA Astrophysics Data System (ADS)

    Luo, Yao; Wu, Mei-Ping; Wang, Ping; Duan, Shu-Ling; Liu, Hao-Jun; Wang, Jin-Long; An, Zhan-Feng

    2015-09-01

    The full magnetic gradient tensor (MGT) refers to the spatial rate of change of the three field components of the geomagnetic field vector along three mutually orthogonal axes. The tensor is of use in geological mapping, resources exploration, magnetic navigation, and other applications. However, it is very difficult to measure the full magnetic gradient tensor using existing engineering technology. We present a method that uses triaxial aeromagnetic gradient measurements to derive the full MGT. The method uses the triaxial gradient data and makes full use of the variation of the magnetic anomaly modulus in three dimensions to obtain a self-consistent magnetic gradient tensor. Numerical simulations show that the full MGT data obtained with the proposed method are of high precision and satisfy the requirements of data processing. We selected triaxial aeromagnetic gradient data from Hebei Province for calculating the full MGT. Data processing shows that using triaxial tensor gradient data makes it possible to take advantage of the spatial rate of change of the total field in three dimensions and suppresses part of the independent noise in the aeromagnetic gradient. The calculated tensor components have improved resolution, and the transformed full tensor gradient satisfies the requirements of geological mapping and interpretation.
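
    Two constraints that any full MGT must satisfy outside sources, symmetry and zero trace (since B is both curl-free and divergence-free there), give a quick consistency check for derived tensor data; a sketch using a point dipole and finite differences (units and geometry illustrative, not from the paper):

```python
import numpy as np

def dipole_B(r, m):
    """Magnetic field of a point dipole m at position r (mu0/4pi = 1 units)."""
    rn = np.linalg.norm(r)
    rh = r / rn
    return (3.0 * np.dot(m, rh) * rh - m) / rn ** 3

def gradient_tensor(r, m, h=1e-5):
    """Full magnetic gradient tensor dB_i/dx_j by central differences."""
    G = np.zeros((3, 3))
    for j in range(3):
        dr = np.zeros(3)
        dr[j] = h
        G[:, j] = (dipole_B(r + dr, m) - dipole_B(r - dr, m)) / (2.0 * h)
    return G

G = gradient_tensor(np.array([2.0, 1.0, 3.0]), np.array([0.0, 0.0, 1.0]))
# Symmetric and traceless: only 5 of the 9 components are independent.
```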

  19. Similar Tensor Arrays - A Framework for Storage of Tensor Array Data

    NASA Astrophysics Data System (ADS)

    Brun, Anders; Martin-Fernandez, Marcos; Acar, Burak; Munoz-Moreno, Emma; Cammoun, Leila; Sigfridsson, Andreas; Sosa-Cabrera, Dario; Svensson, Björn; Herberthson, Magnus; Knutsson, Hans

    This chapter describes a framework for storage of tensor array data, useful for describing regularly sampled tensor fields. The main component of the framework, called the Similar Tensor Array Core (STAC), is the result of a collaboration between research groups within the SIMILAR network of excellence. It aims to capture the essence of regularly sampled tensor fields using a minimal set of attributes and can therefore be used as a “greatest common divisor” and interface between tensor array processing algorithms. This is potentially useful in applied fields like medical image analysis, in particular in Diffusion Tensor MRI, where misinterpretation of tensor array data is a common source of errors. By promoting a strictly geometric perspective on tensor arrays, with a close resemblance to the terminology used in differential geometry, STAC removes ambiguities and guides the user to define all necessary information. In contrast to existing tensor array file formats, it is minimalistic and based on an intrinsic and geometric interpretation of the array itself, without references to other coordinate systems.

  20. Linked independent component analysis for multimodal data fusion.

    PubMed

    Groves, Adrian R; Beckmann, Christian F; Smith, Steve M; Woolrich, Mark W

    2011-02-01

    In recent years, neuroimaging studies have increasingly been acquiring multiple modalities of data and searching for task- or disease-related changes in each modality separately. A major challenge in analysis is to find systematic approaches for fusing these differing data types together to automatically find patterns of related changes across multiple modalities, when they exist. Independent Component Analysis (ICA) is a popular unsupervised learning method that can be used to find the modes of variation in neuroimaging data across a group of subjects. When multimodal data is acquired for the subjects, ICA is typically performed separately on each modality, leading to incompatible decompositions across modalities. Using a modular Bayesian framework, we develop a novel "Linked ICA" model for simultaneously modelling and discovering common features across multiple modalities, which can potentially have completely different units, signal- and contrast-to-noise ratios, voxel counts, spatial smoothnesses and intensity distributions. Furthermore, this general model can be configured to allow tensor ICA or spatially-concatenated ICA decompositions, or a combination of both at the same time. Linked ICA automatically determines the optimal weighting of each modality, and also can detect single-modality structured components when present. This is a fully probabilistic approach, implemented using Variational Bayes. We evaluate the method on simulated multimodal data sets, as well as on a real data set of Alzheimer's patients and age-matched controls that combines two very different types of structural MRI data: morphological data (grey matter density) and diffusion data (fractional anisotropy, mean diffusivity, and tensor mode). Copyright © 2010 Elsevier Inc. All rights reserved.

  1. Low-Dose Dynamic Cerebral Perfusion Computed Tomography Reconstruction via Kronecker-Basis Representation Tensor Sparsity Regularization

    PubMed Central

    Zeng, Dong; Xie, Qi; Cao, Wenfei; Lin, Jiahui; Zhang, Hao; Zhang, Shanli; Huang, Jing; Bian, Zhaoying; Meng, Deyu; Xu, Zongben; Liang, Zhengrong; Chen, Wufan

    2017-01-01

    Dynamic cerebral perfusion computed tomography (DCPCT) can evaluate hemodynamic information throughout the brain. However, because its protocol requires multiple 3-D image volume acquisitions, DCPCT scanning imposes a high radiation dose on patients, which is a growing concern. To address this issue, based on the robust principal component analysis (RPCA, or equivalently the low-rank and sparse decomposition) model and the DCPCT imaging procedure, we propose a new DCPCT image reconstruction algorithm that improves low-dose DCPCT and perfusion-map quality using a powerful measure, Kronecker-basis-representation tensor sparsity regularization, which quantifies the low-rankness of a tensor. For simplicity, the first proposed model is termed tensor-based RPCA (T-RPCA). Specifically, the T-RPCA model views the sequential DCPCT images as a mixture of low-rank, sparse, and noise components, intrinsically describing the maximum temporal coherence of spatial structure among phases in a tensor framework. The low-rank component corresponds to the “background” part with spatial–temporal correlations, e.g., the static anatomical contribution, whose structure is stationary over time, while the sparse component represents the time-varying part with spatial–temporal continuity, e.g., dynamic perfusion-enhanced information, which is approximately sparse over time. Furthermore, an improved nonlocal patch-based T-RPCA (NL-T-RPCA) model, which describes the 3-D block groups of the “background” in a tensor, is also proposed. The NL-T-RPCA model exploits the intrinsic characteristics underlying DCPCT images, i.e., nonlocal self-similarity and global correlation. Two efficient algorithms using the alternating direction method of multipliers are developed to solve the proposed T-RPCA and NL-T-RPCA models, respectively. 
Extensive experiments with a digital brain perfusion phantom, preclinical monkey data, and clinical patient data clearly demonstrate that the two proposed models can achieve more gains than the existing popular algorithms in terms of both quantitative and visual quality evaluations from low-dose acquisitions, especially as low as 20 mAs. PMID:28880164

  2. Tensor products of process matrices with indefinite causal structure

    NASA Astrophysics Data System (ADS)

    Jia, Ding; Sakharwade, Nitica

    2018-03-01

    Theories with indefinite causal structure have been studied from both the fundamental perspective of quantum gravity and the practical perspective of information processing. In this paper we point out a restriction in forming tensor products of objects with indefinite causal structure in certain models: there exist both classical and quantum objects the tensor products of which violate the normalization condition of probabilities, if all local operations are allowed. We obtain a necessary and sufficient condition for when such unrestricted tensor products of multipartite objects are (in)valid. This poses a challenge to extending communication theory to indefinite causal structures, as the tensor product is the fundamental ingredient in the asymptotic setting of communication theory. We discuss a few options to evade this issue. In particular, we show that the sequential asymptotic setting does not suffer the violation of normalization.

  3. Improved preconditioned conjugate gradient algorithm and application in 3D inversion of gravity-gradiometry data

    NASA Astrophysics Data System (ADS)

    Wang, Tai-Han; Huang, Da-Nian; Ma, Guo-Qing; Meng, Zhao-Hai; Li, Ye

    2017-06-01

    With the continuous development of full tensor gradiometer (FTG) measurement techniques, three-dimensional (3D) inversion of FTG data is increasingly used in oil and gas exploration. For fast processing and interpretation of large-scale, high-precision data, the graphics processing unit (GPU) and preconditioning methods are very important in the data inversion. In this paper, an improved preconditioned conjugate gradient algorithm is proposed by combining the symmetric successive over-relaxation (SSOR) technique and the incomplete Cholesky decomposition conjugate gradient algorithm (ICCG). Since preparing the preconditioner requires extra time, a parallel implementation based on the GPU is proposed. The improved method is then applied to the inversion of noise-contaminated synthetic data to prove its adaptability to the inversion of 3D FTG data. Results show that the parallel SSOR-ICCG algorithm based on an NVIDIA Tesla C2050 GPU achieves a speedup of approximately 25 times over a serial program using a 2.0 GHz central processing unit (CPU). Real airborne gravity-gradiometry data from the Vinton salt dome (southwest Louisiana, USA) are also considered. Good results are obtained, which verifies the efficiency and feasibility of the proposed parallel method for fast inversion of 3D FTG data.
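
    The SSOR-preconditioned conjugate gradient step can be sketched in a few lines of NumPy. This is a serial, dense illustration under my own naming; the paper's GPU implementation and its ICCG combination are not reproduced here.

```python
import numpy as np

def ssor_pcg(A, b, omega=1.0, tol=1e-10, max_iter=500):
    """Conjugate gradients on SPD A with an SSOR preconditioner.

    The preconditioner is M = (w/(2-w)) (D/w + L) D^-1 (D/w + L)^T,
    where D is the diagonal and L the strict lower triangle of A.
    """
    diag = np.diag(A)
    K = np.diag(diag) / omega + np.tril(A, -1)

    def apply_minv(r):
        t = np.linalg.solve(K, r) * ((2.0 - omega) / omega)
        return np.linalg.solve(K.T, diag * t)

    x = np.zeros_like(b)
    r = b - A @ x
    z = apply_minv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        z = apply_minv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x
```

    Using `np.linalg.solve` on the triangular factor K is wasteful (O(n^3)); a production code would use dedicated triangular solves, and the paper additionally parallelizes the preconditioner setup on the GPU.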

  4. Unifying neural-network quantum states and correlator product states via tensor networks

    NASA Astrophysics Data System (ADS)

    Clark, Stephen R.

    2018-04-01

    Correlator product states (CPS) are a powerful and very broad class of states for quantum lattice systems whose (unnormalised) amplitudes in a fixed basis can be sampled exactly and efficiently. They work by gluing together states of overlapping clusters of sites on the lattice, called correlators. Recently Carleo and Troyer (2017 Science 355 602) introduced a new type of sampleable ansatz called neural-network quantum states (NQS), inspired by the restricted Boltzmann machine used in machine learning. By employing the formalism of tensor networks we show that NQS are a special form of CPS with novel properties. Diagrammatically, a number of simple observations become transparent: namely, that NQS are CPS built from extensively sized GHZ-form correlators, making them uniquely unbiased geometrically. The appearance of GHZ correlators also relates NQS to canonical polyadic decompositions of tensors. Another immediate implication of the NQS equivalence to CPS is that we are able to formulate exact NQS representations for a wide range of paradigmatic states, including superpositions of weighted-graph states, the Laughlin state, toric code states, and the resonating valence bond state. These examples reveal the potential of using higher dimensional hidden units and a second hidden layer in NQS. The major outlook of this study is the elevation of NQS to correlator operators allowing them to enhance conventional well-established variational Monte Carlo approaches for strongly correlated fermions.

  5. New wrinkles on black hole perturbations: Numerical treatment of acoustic and gravitational waves

    NASA Astrophysics Data System (ADS)

    Tenyotkin, Valery

    2009-06-01

    This thesis develops two main topics. A full relativistic calculation of quasinormal modes of an acoustic black hole is carried out. The acoustic black hole is formed by a perfect, inviscid, relativistic, ideal gas that is spherically accreting onto a Schwarzschild black hole. The second major part is the calculation of sourceless vector (electromagnetic) and tensor (gravitational) covariant field evolution equations for perturbations on a Schwarzschild background using the relatively recent [Special characters omitted.] decomposition method. Scattering calculations are carried out in Schwarzschild coordinates for electromagnetic and gravitational cases as validation of the method and the derived equations.

  6. The inner topological structure and defect control of magnetic skyrmions

    NASA Astrophysics Data System (ADS)

    Ren, Ji-Rong; Yu, Zhong-Xi

    2017-10-01

    We prove that the integrand of magnetic skyrmions can be expressed as the curvature tensor of the Wu-Yang potential. Taking the projection of the normalized magnetization vector onto the 2-dimensional material surface, and following Duan's decomposition theory of gauge potential, we reveal that every single skyrmion is characterized by the Hopf index and Brouwer degree at the zero point of this vector field. Our theory agrees with results that experimental physicists have obtained using many techniques. The inner topological structure of the skyrmion, expressed in terms of the Hopf index and Brouwer degree, provides an indispensable mathematical basis for skyrmion logic gates.

  7. An anisotropic elastoplastic constitutive formulation generalised for orthotropic materials

    NASA Astrophysics Data System (ADS)

    Mohd Nor, M. K.; Ma'at, N.; Ho, C. S.

    2018-03-01

    This paper presents a finite strain constitutive model to predict the complex elastoplastic deformation behaviour that involves very high pressures and shockwaves in orthotropic materials, using an anisotropic Hill's yield criterion by means of evolving structural tensors. The yield surface of this hyperelastic-plastic constitutive model is aligned uniquely within the principal stress space due to the combination of the Mandel stress tensor and a new generalised orthotropic pressure. The formulation is developed in the isoclinic configuration and allows for a unique treatment of elastic and plastic orthotropy. An isotropic hardening is adopted to define the evolution of plastic orthotropy. An important feature of the proposed hyperelastic-plastic constitutive model is the introduction of anisotropic effects in the Mie-Gruneisen equation of state (EOS). The formulation is further combined with the Grady spall failure model to predict spall failure in the materials. The proposed constitutive model is implemented as a new material model, named Material Type 92 (Mat92), in UTHM's version of the Lawrence Livermore National Laboratory (LLNL) DYNA3D code. The combination of the proposed stress tensor decomposition and the Mie-Gruneisen EOS requires some modifications in the code to reflect the formulation of the generalised orthotropic pressure. The validation approach is also presented in this paper for guidance purposes. The ψ tensor used to define the alignment of the adopted yield surface is validated first. This is followed by an internal validation of the elastic isotropic, elastic orthotropic, and elastic-plastic orthotropic parts of the proposed formulation, before a comparison against a range of plate impact test data at impact velocities of 234, 450, and 895 m s⁻¹ is performed. A good agreement is obtained in each test.
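
    For context, the isotropic Mie-Gruneisen EOS that the paper generalises can be written down directly. The sketch below uses a common simplified linear shock-velocity/particle-velocity (Us-up) Hugoniot reference form of the kind found in hydrocodes such as DYNA3D, with illustrative copper-like parameters; the paper's anisotropic generalised-pressure formulation is not reproduced, and the energy convention here (per unit initial volume) is an assumption.

```python
def mie_gruneisen_pressure(rho, e, rho0, c0, s, gamma0):
    """Isotropic Mie-Gruneisen pressure (simplified hydrocode-style form).

    rho    : current density
    e      : internal energy per unit initial volume
    rho0   : reference density
    c0, s  : linear Us-up Hugoniot parameters (Us = c0 + s*up)
    gamma0 : Gruneisen parameter at the reference state
    """
    mu = rho / rho0 - 1.0  # compression measure
    if mu >= 0.0:          # compressed branch: Hugoniot reference curve
        p_ref = rho0 * c0**2 * mu * (1.0 + (1.0 - gamma0 / 2.0) * mu) \
                / (1.0 - (s - 1.0) * mu) ** 2
    else:                  # tension: linear elastic reference
        p_ref = rho0 * c0**2 * mu
    return p_ref + gamma0 * e
```

    The paper's contribution is precisely that this scalar pressure is replaced by a generalised orthotropic pressure consistent with the stress tensor decomposition.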

  8. Global Existence Results for Viscoplasticity at Finite Strain

    NASA Astrophysics Data System (ADS)

    Mielke, Alexander; Rossi, Riccarda; Savaré, Giuseppe

    2018-01-01

    We study a model for rate-dependent gradient plasticity at finite strain based on the multiplicative decomposition of the strain tensor, and investigate the existence of global-in-time solutions to the related PDE system. We reveal its underlying structure as a generalized gradient system, where the driving energy functional is highly nonconvex and features the geometric nonlinearities related to finite-strain elasticity as well as the multiplicative decomposition of finite-strain plasticity. Moreover, the dissipation potential depends on the left-invariant plastic rate, and thus depends on the plastic state variable. The existence theory is developed for a class of abstract, nonsmooth, and nonconvex gradient systems, for which we introduce suitable notions of solutions, namely energy-dissipation-balance and energy-dissipation-inequality solutions. Hence, we resort to the toolbox of the direct method of the calculus of variations to check that the specific energy and dissipation functionals for our viscoplastic models comply with the conditions of the general theory.

  9. ADM Analysis of gravity models within the framework of bimetric variational formalism

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Golovnev, Alexey; Karčiauskas, Mindaugas; Nyrhinen, Hannu J., E-mail: agolovnev@yandex.ru, E-mail: mindaugas.karciauskas@helsinki.fi, E-mail: hannu.nyrhinen@helsinki.fi

    2015-05-01

    Bimetric variational formalism was recently employed to construct novel bimetric gravity models. In these models an affine connection is generated by an additional tensor field which is independent of the physical metric. In this work we demonstrate how the ADM decomposition can be applied to study such models and provide some technical intermediate details. Using ADM decomposition we are able to prove that a linear model is unstable as has previously been indicated by perturbative analysis. Moreover, we show that it is also very difficult if not impossible to construct a non-linear model which is ghost-free within the framework of bimetric variational formalism. However, we demonstrate that viable models are possible along similar lines of thought. To this end, we consider a set up in which the affine connection is a variation of the Levi-Civita one. As a proof of principle we construct a gravity model with a massless scalar field obtained this way.

  10. On computing stress in polymer systems involving multi-body potentials from molecular dynamics simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fu, Yao, E-mail: fu5@mailbox.sc.edu, E-mail: jhsong@cec.sc.edu; Song, Jeong-Hoon, E-mail: fu5@mailbox.sc.edu, E-mail: jhsong@cec.sc.edu

    2014-08-07

    Hardy stress definition has been restricted to pair potentials and embedded-atom method potentials due to the basic assumptions in the derivation of a symmetric microscopic stress tensor. Force decomposition required in the Hardy stress expression becomes obscure for multi-body potentials. In this work, we demonstrate the invariance of the Hardy stress expression for a polymer system modeled with multi-body interatomic potentials including up to four atoms interaction, by applying central force decomposition of the atomic force. The balance of momentum has been demonstrated to be valid theoretically and tested under various numerical simulation conditions. The validity of momentum conservation justifies the extension of Hardy stress expression to multi-body potential systems. Computed Hardy stress has been observed to converge to the virial stress of the system with increasing spatial averaging volume. This work provides a feasible and reliable linkage between the atomistic and continuum scales for multi-body potential systems.

  11. Rank-based decompositions of morphological templates.

    PubMed

    Sussner, P; Ritter, G X

    2000-01-01

    Methods for matrix decomposition have found numerous applications in image processing, in particular for the problem of template decomposition. Since existing matrix decomposition techniques are mainly concerned with the linear domain, we consider it timely to investigate matrix decomposition techniques in the nonlinear domain with applications in image processing. The mathematical basis for these investigations is the new theory of rank within minimax algebra. Thus far, only minimax decompositions of rank 1 and rank 2 matrices into outer product expansions are known to the image processing community. We derive a heuristic algorithm for the decomposition of matrices having arbitrary rank.
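
    The payoff of such decompositions is separability: in the minimax (max-plus) algebra a rank-1 template is an outer *sum*, t(a, b) = r(a) + c(b), and dilating by the full template equals dilating by the column part and then the row part. The NumPy check below is my own illustration; `dilate` is a naive reference implementation, not the paper's heuristic algorithm.

```python
import numpy as np

def dilate(f, t):
    """Grayscale (max-plus) dilation of image f by structuring function t."""
    H, W = f.shape
    h, w = t.shape
    out = np.full((H, W), -np.inf)
    for i in range(H):
        for j in range(W):
            for a in range(h):
                for b in range(w):
                    y, x = i - a, j - b
                    if 0 <= y < H and 0 <= x < W:
                        out[i, j] = max(out[i, j], f[y, x] + t[a, b])
    return out

# A minimax rank-1 template is an outer sum of a column and a row vector.
r = np.array([0.0, 1.0, 0.0])
c = np.array([0.0, 2.0, 1.0])
t = r[:, None] + c[None, :]
```

    Higher-rank minimax decompositions replace the single outer sum with a maximum over several of them, which is what the heuristic algorithm in the paper searches for; each term again yields a separable, cheaper dilation.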

  12. Finding Imaging Patterns of Structural Covariance via Non-Negative Matrix Factorization

    PubMed Central

    Sotiras, Aristeidis; Resnick, Susan M.; Davatzikos, Christos

    2015-01-01

    In this paper, we investigate the use of Non-Negative Matrix Factorization (NNMF) for the analysis of structural neuroimaging data. The goal is to identify the brain regions that co-vary across individuals in a consistent way, hence potentially being part of underlying brain networks or otherwise influenced by underlying common mechanisms such as genetics and pathologies. NNMF offers a directly data-driven way of extracting relatively localized co-varying structural regions, thereby transcending limitations of Principal Component Analysis (PCA), Independent Component Analysis (ICA) and other related methods that tend to produce dispersed components of positive and negative loadings. In particular, leveraging upon the well known ability of NNMF to produce parts-based representations of image data, we derive decompositions that partition the brain into regions that vary in consistent ways across individuals. Importantly, these decompositions achieve dimensionality reduction via highly interpretable ways and generalize well to new data as shown via split-sample experiments. We empirically validate NNMF in two data sets: i) a Diffusion Tensor (DT) mouse brain development study, and ii) a structural Magnetic Resonance (sMR) study of human brain aging. We demonstrate the ability of NNMF to produce sparse parts-based representations of the data at various resolutions. These representations seem to follow what we know about the underlying functional organization of the brain and also capture some pathological processes. Moreover, we show that these low dimensional representations favorably compare to descriptions obtained with more commonly used matrix factorization methods like PCA and ICA. PMID:25497684
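
    A minimal NumPy sketch of NMF with the classical Lee-Seung multiplicative updates follows; this is my own illustrative code, not the authors' structural-covariance pipeline.

```python
import numpy as np

def nmf(X, k, n_iter=2000, eps=1e-9, seed=0):
    """Factor a non-negative matrix X ~= W @ H with W, H >= 0."""
    rng = np.random.default_rng(seed)
    W = rng.random((X.shape[0], k))
    H = rng.random((k, X.shape[1]))
    for _ in range(n_iter):
        # Multiplicative updates monotonically decrease ||X - WH||_F.
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

    In the neuroimaging setting the columns of W play the role of non-negative "parts" (co-varying regions) and H holds per-subject loadings, which is what makes the decomposition directly interpretable, unlike the signed loadings of PCA or ICA.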

  13. On Adapting the Tensor Voting Framework to Robust Color Image Denoising

    NASA Astrophysics Data System (ADS)

    Moreno, Rodrigo; Garcia, Miguel Angel; Puig, Domenec; Julià, Carme

    This paper presents an adaptation of the tensor voting framework for color image denoising, while preserving edges. Tensors are used in order to encode the CIELAB color channels, the uniformity and the edginess of image pixels. A specific voting process is proposed in order to propagate color from a pixel to its neighbors by considering the distance between pixels, the perceptual color difference (by using an optimized version of CIEDE2000), a uniformity measurement and the likelihood of the pixels being impulse noise. The original colors are corrected with those encoded by the tensors obtained after the voting process. Peak to noise ratios and visual inspection show that the proposed methodology has a better performance than state-of-the-art techniques.

  14. HOSVD-Based 3D Active Appearance Model: Segmentation of Lung Fields in CT Images.

    PubMed

    Wang, Qingzhu; Kang, Wanjun; Hu, Haihui; Wang, Bin

    2016-07-01

    An Active Appearance Model (AAM) is a computer vision model which can be used to effectively segment lung fields in CT images. However, the fitting result is often inadequate when the lungs are affected by high-density pathologies. To overcome this problem, we propose a Higher-order Singular Value Decomposition (HOSVD)-based three-dimensional (3D) AAM. An evaluation was performed on 310 diseased lungs from the Lung Image Database Consortium Image Collection. Other contemporary AAMs operate directly on patterns represented by vectors, i.e., before applying the AAM to a 3D lung volume, it must first be vectorized by some technique such as concatenation. However, some implicit structural or local contextual information may be lost in this transformation. Given the nature of the 3D lung volume, HOSVD is introduced to represent and process the lung in tensor space. Our method can not only operate directly on the original 3D tensor patterns, but also efficiently reduce computer memory usage. The evaluation resulted in an average Dice coefficient of 97.0 % ± 0.59 %, a mean absolute surface distance error of 1.0403 ± 0.5716 mm, a mean border positioning error of 0.9187 ± 0.5381 pixels, and a Hausdorff distance of 20.4064 ± 4.3855. Experimental results showed that our method delivered significantly better segmentation results compared with three other model-based lung segmentation approaches, namely 3D Snake, 3D ASM, and 3D AAM.
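
    The HOSVD at the heart of this approach is compact enough to sketch: each mode of the tensor is unfolded into a matrix, the left singular vectors of that unfolding give the mode's factor, and the core is obtained by projecting every mode. The code below is my own minimal illustration (`unfold`, `mode_multiply`, and the rank interface are assumed names), not the paper's segmentation pipeline.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n matricization: move `mode` to the front and flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_multiply(T, M, mode):
    """Multiply tensor T by matrix M along `mode`."""
    return np.moveaxis(np.tensordot(M, np.moveaxis(T, mode, 0), axes=1), 0, mode)

def hosvd(T, ranks):
    """Higher-order SVD: per-mode factors Us and the projected core."""
    Us = []
    for m, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(T, m), full_matrices=False)
        Us.append(U[:, :r])
    core = T
    for m, U in enumerate(Us):
        core = mode_multiply(core, U.T, m)
    return core, Us

def tucker_to_tensor(core, Us):
    """Reconstruct the (possibly truncated) tensor from core and factors."""
    T = core
    for m, U in enumerate(Us):
        T = mode_multiply(T, U, m)
    return T
```

    With full ranks the reconstruction is exact; truncating the ranks yields the compressed tensor-space representation that the 3D AAM operates on without ever vectorizing the volume.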

  15. High-order perturbations of a spherical collapsing star

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brizuela, David; Martin-Garcia, Jose M.; Sperhake, Ulrich

    2010-11-15

    A formalism to deal with high-order perturbations of a general spherical background was developed in earlier work [D. Brizuela, J. M. Martin-Garcia, and G. A. Mena Marugan, Phys. Rev. D 74, 044039 (2006); D. Brizuela, J. M. Martin-Garcia, and G. A. Mena Marugan, Phys. Rev. D 76, 024004 (2007)]. In this paper, we apply it to the particular case of a perfect fluid background. We have expressed the perturbations of the energy-momentum tensor at any order in terms of the perturbed fluid's pressure, density, and velocity. In general, these expressions are not linear and have sources depending on lower-order perturbations. For the second-order case we make the explicit decomposition of these sources in tensor spherical harmonics. Then, a general procedure is given to evolve the perturbative equations of motions of the perfect fluid for any value of the harmonic label. Finally, with the problem of a spherical collapsing star in mind, we discuss the high-order perturbative matching conditions across a timelike surface, in particular, the surface separating the perfect fluid interior from the exterior vacuum.

  16. Concepts and procedures required for successful reduction of tensor magnetic gradiometer data obtained from an unexploded ordnance detection demonstration at Yuma Proving Grounds, Arizona

    USGS Publications Warehouse

    Bracken, Robert E.; Brown, Philip J.

    2006-01-01

    On March 12, 2003, data were gathered at Yuma Proving Grounds, in Arizona, using a Tensor Magnetic Gradiometer System (TMGS). This report shows how these data were processed and explains concepts required for successful TMGS data reduction. Important concepts discussed include extreme attitudinal sensitivity of vector measurements, low attitudinal sensitivity of gradient measurements, leakage of the common-mode field into gradient measurements, consequences of thermal drift, and effects of field curvature. Spatial-data collection procedures and a spin-calibration method are addressed. Discussions of data-reduction procedures include tracking of axial data by mathematically matching transfer functions among the axes, derivation and application of calibration coefficients, calculation of sensor-pair gradients, thermal-drift corrections, and gradient collocation. For presentation, the magnetic tensor at each data station is converted to a scalar quantity, the I2 tensor invariant, which is easily found by calculating the determinant of the tensor. At important processing junctures, the determinants for all stations in the mapped area are shown in shaded relief map-view. Final processed results are compared to a mathematical model to show the validity of the assumptions made during processing and the reasonableness of the ultimate answer obtained.
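
    The report's presentation step, collapsing the measured tensor to the I2 invariant via the determinant, is easy to verify numerically: the determinant of a symmetric, traceless gradient tensor does not change under a rotation of the sensor frame. A small illustrative check:

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a symmetric, traceless 3x3 "gradient tensor".
M = rng.standard_normal((3, 3))
G = (M + M.T) / 2.0
G -= (np.trace(G) / 3.0) * np.eye(3)

# Random proper rotation via QR.
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
if np.linalg.det(Q) < 0:
    Q[:, 0] = -Q[:, 0]

i2 = np.linalg.det(G)                    # the I2 invariant used for map view
i2_rotated = np.linalg.det(Q @ G @ Q.T)  # same tensor in a rotated frame
```

    This frame-independence is what makes the invariant a convenient scalar for map-view presentation: it does not depend on the orientation of the measurement axes, unlike individual tensor components.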

  17. Developing a complex independent component analysis technique to extract non-stationary patterns from geophysical time-series

    NASA Astrophysics Data System (ADS)

    Forootan, Ehsan; Kusche, Jürgen

    2016-04-01

    Geodetic/geophysical observations, such as time series of global terrestrial water storage change or of sea level and temperature change, represent samples of physical processes and therefore contain information about complex physical interactions with many inherent time scales. Extracting relevant information from these samples, for example quantifying the seasonality of a physical process or its variability due to large-scale ocean-atmosphere interactions, is not possible with simple time-series approaches. In recent decades, decomposition techniques have attracted increasing interest for extracting patterns from geophysical observations. Traditionally, principal component analysis (PCA) and, more recently, independent component analysis (ICA) are common techniques for extracting statistically orthogonal (uncorrelated) and independent modes that represent the maximum variance of observations, respectively. PCA and ICA can be classified as stationary signal decomposition techniques since they are based on decomposing the auto-covariance matrix or diagonalizing higher (than two)-order statistical tensors from centered time series. However, the stationarity assumption is obviously not justifiable for many geophysical and climate variables, even after removing cyclic components, e.g., the seasonal cycles. In this paper, we present a new decomposition method, complex independent component analysis (CICA, Forootan, PhD-2014), which can be applied to extract non-stationary (changing in space and time) patterns from geophysical time series. Here, CICA is derived as an extension of real-valued ICA (Forootan and Kusche, JoG-2012), where we (i) define a new complex data set using a Hilbert transformation; the complex time series contain the observed values in their real part and the temporal rate of variability in their imaginary part. (ii) An ICA algorithm based on diagonalization of fourth-order cumulants is then applied to decompose the new complex data set in (i). 
(iii) Dominant non-stationary patterns are recognized as independent complex patterns that can be used to represent the space and time amplitude and phase propagations. We present the results of CICA on simulated and real cases e.g., for quantifying the impact of large-scale ocean-atmosphere interaction on global mass changes. Forootan (PhD-2014) Statistical signal decomposition techniques for analyzing time-variable satellite gravimetry data, PhD Thesis, University of Bonn, http://hss.ulb.uni-bonn.de/2014/3766/3766.htm Forootan and Kusche (JoG-2012) Separation of global time-variable gravity signals into maximally independent components, Journal of Geodesy 86 (7), 477-497, doi: 10.1007/s00190-011-0532-5
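
    Step (i), building a complex series whose real part is the observation and whose imaginary part carries its rate of variability, is the analytic signal given by the Hilbert transform. An FFT-based sketch (my own illustration; the cumulant-diagonalization ICA of step (ii) is not reproduced):

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via the FFT: the real part is x, the imaginary part
    its Hilbert transform (quadrature component)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0   # double the positive frequencies
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(X * h)
```

    For a pure cosine the result is the complex exponential, so amplitude and phase propagation can be read off directly, which is exactly what the complex ICA modes are meant to represent.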

  18. ChIP-PIT: Enhancing the Analysis of ChIP-Seq Data Using Convex-Relaxed Pair-Wise Interaction Tensor Decomposition.

    PubMed

    Zhu, Lin; Guo, Wei-Li; Deng, Su-Ping; Huang, De-Shuang

    2016-01-01

    In recent years, thanks to the efforts of individual scientists and research consortiums, a huge amount of chromatin immunoprecipitation followed by high-throughput sequencing (ChIP-seq) experimental data has been accumulated. Instead of investigating these data independently, several recent studies have convincingly demonstrated that a wealth of scientific insights can be gained by integrative analysis of ChIP-seq data. However, when used for the purpose of integrative analysis, a serious drawback of the current ChIP-seq technique is that it is still expensive and time-consuming to generate ChIP-seq datasets of high standard. Most researchers are therefore unable to obtain complete ChIP-seq data for several TFs in a wide variety of cell lines, which considerably limits the understanding of transcriptional regulation patterns. In this paper, we propose a novel method called ChIP-PIT to overcome this limitation. In ChIP-PIT, ChIP-seq data corresponding to a diverse collection of cell types, TFs, and genes are fused together using the three-mode pair-wise interaction tensor (PIT) model, and the prediction of unperformed ChIP-seq experiments is formulated as a tensor completion problem. Computationally, we propose an efficient first-order method based on extensions of the coordinate descent method to learn the optimal solution of ChIP-PIT, which makes it particularly suitable for the analysis of massive-scale ChIP-seq data. Experimental evaluation on the ENCODE data illustrates the usefulness of the proposed model.
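
    The pair-wise interaction tensor model is closely related to low-rank tensor factorization fitted over observed entries. As a hedged stand-in, here is a plain CP decomposition by alternating least squares on a fully observed tensor (my own sketch with assumed names; ChIP-PIT's convex-relaxed completion solver differs):

```python
import numpy as np

def unfold(T, mode):
    """Mode-n matricization of tensor T."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(mats):
    """Columnwise Kronecker product of a list of factor matrices."""
    out = mats[0]
    for M in mats[1:]:
        out = (out[:, None, :] * M[None, :, :]).reshape(-1, out.shape[1])
    return out

def cp_als(T, rank, n_iter=200, seed=0):
    """Fit T ~= sum_r a_r (outer) b_r (outer) c_r ... by alternating LS."""
    rng = np.random.default_rng(seed)
    factors = [rng.standard_normal((d, rank)) for d in T.shape]
    for _ in range(n_iter):
        for m in range(T.ndim):
            K = khatri_rao([f for i, f in enumerate(factors) if i != m])
            factors[m] = unfold(T, m) @ K @ np.linalg.pinv(K.T @ K)
    return factors
```

    Tensor completion, as in ChIP-PIT, replaces the full unfoldings above with least-squares fits restricted to the observed (cell type, TF, gene) entries, so that the fitted factors predict the unperformed experiments.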

  19. Reynolds and Maxwell stress measurements in the reversed field pinch experiment Extrap-T2R

    NASA Astrophysics Data System (ADS)

    Vianello, N.; Antoni, V.; Spada, E.; Spolaore, M.; Serianni, G.; Cavazzana, R.; Bergsåker, H.; Cecconello, M.; Drake, J. R.

    2005-08-01

    The complete Reynolds stress (RS) has been measured in the edge region of the Extrap-T2R reversed field pinch experiment. The RS exhibits a strong gradient in the region where a high E × B shear takes place. Experimental results show this gradient to be almost entirely due to the electrostatic contribution. This has been interpreted as experimental evidence of flow generation via turbulence mechanism. The scales involved in flow generation are deduced from the frequency decomposition of RS tensor. They are found related to magnetohydrodynamic activity but are different with respect to the scales responsible for turbulent transport.

  20. Bubble Divergences: Sorting out Topology from Cell Structure

    NASA Astrophysics Data System (ADS)

    Bonzom, Valentin; Smerlak, Matteo

    2012-02-01

    We conclude our analysis of bubble divergences in the flat spinfoam model. In [arXiv:1008.1476] we showed that the divergence degree of an arbitrary two-complex Gamma can be evaluated exactly by means of twisted cohomology. Here, we specialize this result to the case where Gamma is the two-skeleton of the cell decomposition of a pseudomanifold, and sharpen it with a careful analysis of the cellular and topological structures involved. Moreover, we explain in detail how this approach reproduces all the previous power-counting results for the Boulatov-Ooguri (colored) tensor models, and sheds light on algebraic-topological aspects of Gurau's 1/N expansion.

  1. Tensor sufficient dimension reduction

    PubMed Central

    Zhong, Wenxuan; Xing, Xin; Suslick, Kenneth

    2015-01-01

    A tensor is a multiway array. With the rapid development of science and technology in the past decades, large amounts of tensor observations are routinely collected, processed, and stored in many scientific and commercial activities. The colorimetric sensor array (CSA) data is one such example. Driven by the need to address data analysis challenges that arise in CSA data, we propose a tensor dimension reduction model, a model that assumes nonlinear dependence between a response and a projection of all the tensor predictors. The tensor dimension reduction models are estimated in a sequential iterative fashion. The proposed method is applied to CSA data collected for 150 pathogenic bacteria from 10 bacterial species and 14 bacteria from one control species. Empirical performance demonstrates that our proposed method can greatly improve the sensitivity and specificity of the CSA technique. PMID:26594304

  2. OPERATOR NORM INEQUALITIES BETWEEN TENSOR UNFOLDINGS ON THE PARTITION LATTICE

    PubMed Central

    Wang, Miaoyan; Duc, Khanh Dao; Fischer, Jonathan; Song, Yun S.

    2017-01-01

    Interest in higher-order tensors has recently surged in data-intensive fields, with a wide range of applications including image processing, blind source separation, community detection, and feature extraction. A common paradigm in tensor-related algorithms advocates unfolding (or flattening) the tensor into a matrix and applying classical methods developed for matrices. Despite the popularity of such techniques, how the functional properties of a tensor change upon unfolding is currently not well understood. In contrast to the body of existing work which has focused almost exclusively on matricizations, we here consider all possible unfoldings of an order-k tensor, which are in one-to-one correspondence with the set of partitions of {1, …, k}. We derive general inequalities between the lp-norms of arbitrary unfoldings defined on the partition lattice. In particular, we demonstrate how the spectral norm (p = 2) of a tensor is bounded by that of its unfoldings, and obtain an improved upper bound on the ratio of the Frobenius norm to the spectral norm of an arbitrary tensor. For specially-structured tensors satisfying a generalized definition of orthogonal decomposability, we prove that the spectral norm remains invariant under specific subsets of unfolding operations. PMID:28286347
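
    Two of the claims are easy to check numerically for p = 2: the Frobenius norm is the same for every unfolding, while the spectral norm depends on the partition and is bounded above by the Frobenius norm. An illustrative NumPy check (my own, order-3 only):

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.standard_normal((4, 5, 6))
fro = np.linalg.norm(T)  # Frobenius norm of the tensor (flattened 2-norm)

# The three single-mode matricizations, i.e. partitions {m} | {rest}.
unfoldings = [np.moveaxis(T, m, 0).reshape(T.shape[m], -1) for m in range(3)]

# Spectral norm (largest singular value) of each matricization.
spectral = [np.linalg.norm(M, 2) for M in unfoldings]

# Vectorizing the tensor (the one-block partition) gives a 1 x N matrix
# whose spectral norm coincides with the Frobenius norm.
vec_spectral = np.linalg.norm(T.reshape(1, -1), 2)
```

    The paper's contribution is a systematic set of such inequalities across the entire partition lattice, not just the single-mode matricizations shown here.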

  3. OPERATOR NORM INEQUALITIES BETWEEN TENSOR UNFOLDINGS ON THE PARTITION LATTICE.

    PubMed

    Wang, Miaoyan; Duc, Khanh Dao; Fischer, Jonathan; Song, Yun S

    2017-05-01

    Interest in higher-order tensors has recently surged in data-intensive fields, with a wide range of applications including image processing, blind source separation, community detection, and feature extraction. A common paradigm in tensor-related algorithms advocates unfolding (or flattening) the tensor into a matrix and applying classical methods developed for matrices. Despite the popularity of such techniques, how the functional properties of a tensor change upon unfolding is currently not well understood. In contrast to the body of existing work, which has focused almost exclusively on matricizations, we here consider all possible unfoldings of an order-k tensor, which are in one-to-one correspondence with the set of partitions of {1, …, k}. We derive general inequalities between the lp-norms of arbitrary unfoldings defined on the partition lattice. In particular, we demonstrate how the spectral norm (p = 2) of a tensor is bounded by that of its unfoldings, and obtain an improved upper bound on the ratio of the Frobenius norm to the spectral norm of an arbitrary tensor. For specially structured tensors satisfying a generalized definition of orthogonal decomposability, we prove that the spectral norm remains invariant under specific subsets of unfolding operations.

  4. Magnetotelluric imaging of anisotropic crust near Fort McMurray, Alberta: implications for engineered geothermal system development

    NASA Astrophysics Data System (ADS)

    Liddell, Mitch; Unsworth, Martyn; Pek, Josef

    2016-06-01

    Viability for the development of an engineered geothermal system (EGS) in the oilsands region near Fort McMurray, Alberta, is investigated by studying the structure of the Precambrian basement rocks with magnetotellurics (MT). MT data were collected at 94 broad-band stations on two east-west profiles. Apparent resistivity and phase data showed little variation along each profile. The short period MT data detected a 1-D resistivity structure that could be identified as the shallow sedimentary basin underlain by crystalline basement rocks to a depth of 4-5 km. At lower frequencies a strong directional dependence, large phase splits, and regions of out-of-quadrant (OOQ) phase were detected. 2-D isotropic inversions of these data failed to produce a realistic resistivity model. A detailed dimensionality analysis found links between large phase tensor skews (˜15°), azimuths, OOQ phases and tensor decomposition strike angles at periods greater than 1 s. Low magnitude induction vectors, as well as uniformity of phase splits and phase tensor character between the northern and southern profiles imply that a 3-D analysis is not necessary or appropriate. Therefore, 2-D anisotropic forward modelling was used to generate a resistivity model to interpret the MT data. The preferred model was based on geological observations of outcropping anisotropic mylonitic basement rocks of the Charles Lake shear zone, 150 km to the north, linked to the study area by aeromagnetic and core sample data. This model fits all four impedance tensor elements with an rms misfit of 2.82 on the southern profile, and 3.3 on the northern. The conductive phase causing the anisotropy is interpreted to be interconnected graphite films within the metamorphic basement rocks. Characterizing the anisotropy is important for understanding how artificial fractures, necessary for EGS development, would form. Features of MT data commonly interpreted to be 3-D (e.g. 
OOQ phase and large phase tensor skew) are shown to be interpretable with this 2-D anisotropic model.

  5. Obtaining orthotropic elasticity tensor using entries zeroing method.

    NASA Astrophysics Data System (ADS)

    Gierlach, Bartosz; Danek, Tomasz

    2017-04-01

    A generally anisotropic elasticity tensor obtained from measurements can be represented by a tensor belonging to one of eight material symmetry classes. Knowledge of the symmetry class and orientation is helpful for describing the physical properties of a medium. For every non-trivial symmetry class except the isotropic one, this problem is nonlinear. A common method of obtaining an effective tensor is to choose a non-trivial symmetry class and minimize the Frobenius norm between the measured and effective tensors in the same coordinate system. A global optimization algorithm has to be used to determine the best rotation of the tensor. In this contribution, we propose a new approach to obtaining the optimal tensor, under the assumption that it is orthotropic (or at least close in shape to an orthotropic one). In orthotropic form, 24 of the tensor's 36 entries are zero. The idea is to minimize the sum of squared entries that are supposed to equal zero, over rotations computed with an optimization algorithm - in this case the Particle Swarm Optimization (PSO) algorithm. Quaternions were used to parametrize rotations in 3D space to improve computational efficiency. To avoid local minima, we run PSO several times and accept a value as correct, finishing the computation, only once we obtain similar results three times. The Monte Carlo method was used to analyze the results. After thousands of single runs of PSO optimization, we obtained and plotted the values of the quaternion components. The points concentrate around several locations on the graph, following a regular pattern, which suggests the existence of a more complex symmetry in the analyzed tensor. Thousands of realizations of a generally anisotropic tensor were then generated - each tensor entry was replaced with a random value drawn from a normal distribution with mean equal to the measured entry and standard deviation equal to that of the measurement. 
Each of these tensors was subjected to PSO-based optimization, delivering a quaternion for the optimal rotation. Computations were parallelized with OpenMP to decrease computational time, enabling different tensors to be processed by different threads. As a result, the distributions of the rotated tensor entries were obtained. The entries that were to be zeroed exhibit nearly normal distributions with mean zero, or sums of two normal distributions with opposite means. The non-zero entries exhibit different distributions with two or three maxima. Analysis of the results shows that the described method produces consistent values of the quaternions used to rotate the tensors. Despite a less complex target function in the optimization than in the common approach, the entries-zeroing method provides results that can be used to obtain an orthotropic tensor with good reliability. A modification of the method could also yield a tool for obtaining effective tensors belonging to other symmetry classes. This research was supported by the Polish National Science Center under contract No. DEC-2013/11/B/ST10/0472.
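A much-simplified analogue of the entries-zeroing idea (my sketch, not the paper's method, which rotates the 6x6 elasticity tensor with PSO and quaternions): rotate a symmetric 2x2 second-order tensor so that its single "should-be-zero" off-diagonal entry is minimized in the least-squares sense, here with a coarse grid search standing in for PSO.

```python
import numpy as np

def rotate(T, theta):
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    return R @ T @ R.T

def zeroing_objective(T, theta):
    """Sum of squared entries that must vanish in the target symmetry
    (here: the single off-diagonal entry of a symmetric 2x2 tensor)."""
    return rotate(T, theta)[0, 1] ** 2

T = np.array([[2.0, 0.7], [0.7, 1.0]])
# Coarse global search standing in for the paper's PSO optimizer.
thetas = np.linspace(0.0, np.pi, 20001)
best = min(thetas, key=lambda th: zeroing_objective(T, th))
print(rotate(T, best))  # off-diagonal ≈ 0; diagonal ≈ eigenvalues of T
```

For this simple case the zeroing rotation diagonalizes the tensor, so the rotated diagonal entries coincide with the eigenvalues; the elasticity-tensor version zeros 24 Voigt entries rather than one.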

  6. Inference of segmented color and texture description by tensor voting.

    PubMed

    Jia, Jiaya; Tang, Chi-Keung

    2004-06-01

    A robust synthesis method is proposed to automatically infer missing color and texture information from a damaged 2D image by (N)D tensor voting (N > 3). The same approach is generalized to range and 3D data in the presence of occlusion, missing data and noise. Our method translates texture information into an adaptive (N)D tensor, followed by a voting process that infers noniteratively the optimal color values in the (N)D texture space. A two-step method is proposed. First, we perform segmentation based on insufficient geometry, color, and texture information in the input, and extrapolate partitioning boundaries by either 2D or 3D tensor voting to generate a complete segmentation for the input. Missing colors are synthesized using (N)D tensor voting in each segment. Different feature scales in the input are automatically adapted by our tensor scale analysis. Results on a variety of difficult inputs demonstrate the effectiveness of our tensor voting approach.
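A heavily simplified 2D sketch of the voting idea (my illustration, not the paper's (N)D formulation): each token accumulates a second-order tensor from outer products of unit directions to its neighbors, with Gaussian distance decay, and the eigen-gap of the accumulated tensor serves as a crude curve saliency.

```python
import numpy as np

def tensor_votes(points, sigma=1.0):
    """Minimal 2D voting sketch: each token receives, from every neighbor,
    a second-order tensor d d^T (d = unit direction to the neighbor),
    weighted by a Gaussian decay of the distance. The eigen-gap
    lambda_max - lambda_min is then a simple 'curveness' saliency."""
    n = len(points)
    saliency = np.zeros(n)
    for i in range(n):
        T = np.zeros((2, 2))
        for j in range(n):
            if i == j:
                continue
            d = points[j] - points[i]
            r = np.linalg.norm(d)
            T += np.exp(-r**2 / sigma**2) * np.outer(d / r, d / r)
        lam = np.linalg.eigvalsh(T)  # ascending order
        saliency[i] = lam[1] - lam[0]
    return saliency

# Collinear tokens vote coherently, so interior points get a large eigen-gap.
line = np.array([[x, 0.0] for x in np.linspace(0, 4, 9)])
s = tensor_votes(line)
print(s)
```

The full method additionally encodes color/texture in the higher-dimensional part of the tensor and infers missing values from the most salient directions.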

  7. Measuring Nematic Susceptibilities from the Elastoresistivity Tensor

    NASA Astrophysics Data System (ADS)

    Hristov, A. T.; Shapiro, M. C.; Hlobil, Patrick; Maharaj, Akash; Chu, Jiun-Haw; Fisher, Ian

    The elastoresistivity tensor mijkl relates changes in resistivity to the strain on a material. As a fourth-rank tensor, it contains considerably more information about the material than the simpler (second-rank) resistivity tensor; in particular, certain elastoresistivity coefficients can be related to thermodynamic susceptibilities and serve as a direct probe of symmetry breaking at a phase transition. The aim of this talk is twofold. First, we enumerate how symmetry both constrains the structure of the elastoresistivity tensor into an easy-to-understand form and connects tensor elements to thermodynamic susceptibilities. In the process, we generalize previous studies of elastoresistivity to include the effects of magnetic field. Second, we describe an approach to measuring quantities in the elastoresistivity tensor with a novel transverse measurement, which is immune to relative strain offsets. These techniques are then applied to BaFe2As2 in a proof of principle measurement. This work is supported by the Department of Energy, Office of Science, Basic Energy Sciences, Materials Sciences and Engineering Division, under Contract DE-AC02-76SF00515.

  8. Capturing molecular multimode relaxation processes in excitable gases based on decomposition of acoustic relaxation spectra

    NASA Astrophysics Data System (ADS)

    Zhu, Ming; Liu, Tingting; Wang, Shu; Zhang, Kesheng

    2017-08-01

    Existing two-frequency reconstructive methods can only capture primary (single) molecular relaxation processes in excitable gases. In this paper, we present a reconstructive method based on the novel decomposition of frequency-dependent acoustic relaxation spectra to capture the entire molecular multimode relaxation process. This decomposition of acoustic relaxation spectra is developed from the frequency-dependent effective specific heat, indicating that a multi-relaxation process is the sum of the interior single-relaxation processes. Based on this decomposition, we can reconstruct the entire multi-relaxation process by capturing the relaxation times and relaxation strengths of N interior single-relaxation processes, using the measurements of acoustic absorption and sound speed at 2N frequencies. Experimental data for the gas mixtures CO2-N2 and CO2-O2 validate our decomposition and reconstruction approach.
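The additive structure described here can be sketched numerically (my illustration with made-up strengths and relaxation frequencies): model the dimensionless absorption-per-wavelength of each single process as a Debye-type term that peaks at its relaxation frequency, and form the multi-relaxation spectrum as their sum.

```python
import numpy as np

def single_relaxation(f, eps, f_r):
    """Dimensionless absorption-per-wavelength of one Debye-type
    relaxation process with strength eps and relaxation frequency f_r;
    it peaks at f = f_r with height eps."""
    x = f / f_r
    return 2.0 * eps * x / (1.0 + x**2)

def multi_relaxation(f, params):
    """Multi-relaxation spectrum as the sum of single processes."""
    return sum(single_relaxation(f, eps, f_r) for eps, f_r in params)

f = np.logspace(1, 6, 500)            # frequency grid, Hz
params = [(0.02, 3e2), (0.05, 4e4)]   # two interior single processes
mu = multi_relaxation(f, params)
print(single_relaxation(3e2, 0.02, 3e2))  # → 0.02 (peak height = strength)
```

With N such interior processes there are 2N unknowns (eps_i, f_i), which matches the record's use of measurements at 2N frequencies.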

  9. Highly efficient all-dielectric optical tensor impedance metasurfaces for chiral polarization control.

    PubMed

    Kim, Minseok; Eleftheriades, George V

    2016-10-15

    We propose a highly efficient (nearly lossless and impedance-matched) all-dielectric optical tensor impedance metasurface that mimics chiral effects at optical wavelengths. By cascading an array of rotated crossed silicon nanoblocks, we realize chiral optical tensor impedance metasurfaces that operate as circular polarization selective surfaces. Their efficiencies are maximized through a nonlinear numerical optimization process in which the tensor impedance metasurfaces are modeled via multi-conductor transmission line theory. From rigorous full-wave simulations that include all material losses, we show field transmission efficiencies of 94% for right- and left-handed circular polarization selective surfaces at 800 nm.

  10. An EM System with Dynamic Multi-Axis Transmitter and Tensor Gradiometer Receiver

    DTIC Science & Technology

    2011-06-01

    main difference between the spatial behavior of target anomalies measured with a magnetometer and those we measured with an EM system is in the nature...environmental and UXO applications, current efforts include the development of tensor magnetic gradiometers based on triaxial fluxgate technology by the USGS...Superconducting gradiometer/ Magnetometer Arrays and a Novel Signal Processing Technique. IEEE Trans. on Magnetics, MAG-11(2), 701-707. EM Tensor

  11. An EM System With Dynamic Multi-Axis Transmitter and Tensor Gradiometer Receiver

    DTIC Science & Technology

    2011-06-01

    Thus, the main difference between the spatial behavior of target anomalies measured with a magnetometer and those we measured with an EM system is in...current efforts include the development of tensor magnetic gradiometers based on triaxial fluxgate technology by the USGS (Snyder & Bracken, Development...Superconducting gradiometer/ Magnetometer Arrays and a Novel Signal Processing Technique. IEEE Trans. on Magnetics, MAG-11(2), 701-707. EM Tensor Gradiometer

  12. Finding imaging patterns of structural covariance via Non-Negative Matrix Factorization.

    PubMed

    Sotiras, Aristeidis; Resnick, Susan M; Davatzikos, Christos

    2015-03-01

    In this paper, we investigate the use of Non-Negative Matrix Factorization (NNMF) for the analysis of structural neuroimaging data. The goal is to identify the brain regions that co-vary across individuals in a consistent way, hence potentially being part of underlying brain networks or otherwise influenced by underlying common mechanisms such as genetics and pathologies. NNMF offers a directly data-driven way of extracting relatively localized co-varying structural regions, thereby transcending limitations of Principal Component Analysis (PCA), Independent Component Analysis (ICA) and other related methods that tend to produce dispersed components of positive and negative loadings. In particular, leveraging upon the well-known ability of NNMF to produce parts-based representations of image data, we derive decompositions that partition the brain into regions that vary in consistent ways across individuals. Importantly, these decompositions achieve dimensionality reduction in a highly interpretable way and generalize well to new data, as shown via split-sample experiments. We empirically validate NNMF in two data sets: i) a Diffusion Tensor (DT) mouse brain development study, and ii) a structural Magnetic Resonance (sMR) study of human brain aging. We demonstrate the ability of NNMF to produce sparse parts-based representations of the data at various resolutions. These representations seem to follow what we know about the underlying functional organization of the brain and also capture some pathological processes. Moreover, we show that these low dimensional representations favorably compare to descriptions obtained with more commonly used matrix factorization methods like PCA and ICA. Copyright © 2014 Elsevier Inc. All rights reserved.
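A compact numpy sketch of NNMF itself (the classic Lee-Seung multiplicative updates for the Frobenius objective; the study's neuroimaging pipeline is of course far more involved). Data here are synthetic.

```python
import numpy as np

def nmf(X, r, iters=500, seed=0):
    """Lee-Seung multiplicative updates for X ≈ W H with W, H >= 0,
    minimizing the Frobenius reconstruction error. Nonnegativity of the
    factors is what yields parts-based (additive) representations."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    W = rng.random((n, r)) + 1e-3
    H = rng.random((r, m)) + 1e-3
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + 1e-12)
        W *= (X @ H.T) / (W @ H @ H.T + 1e-12)
    return W, H

# A nonnegative matrix built from 2 parts is recovered with small error.
rng = np.random.default_rng(1)
X = rng.random((20, 2)) @ rng.random((2, 30))
W, H = nmf(X, r=2)
err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
print(err)  # small relative reconstruction error
```

In the study's setting, rows of X would be voxels and columns subjects, and the columns of W would be the localized co-varying regions.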

  13. Geometric decompositions of collective motion

    NASA Astrophysics Data System (ADS)

    Mischiati, Matteo; Krishnaprasad, P. S.

    2017-04-01

    Collective motion in nature is a captivating phenomenon. Revealing the underlying mechanisms, which are of biological and theoretical interest, will require empirical data, modelling and analysis techniques. Here, we contribute a geometric viewpoint, yielding a novel method of analysing movement. Snapshots of collective motion are portrayed as tangent vectors on configuration space, with length determined by the total kinetic energy. Using the geometry of fibre bundles and connections, this portrait is split into orthogonal components each tangential to a lower dimensional manifold derived from configuration space. The resulting decomposition, when interleaved with classical shape space construction, is categorized into a family of kinematic modes-including rigid translations, rigid rotations, inertia tensor transformations, expansions and compressions. Snapshots of empirical data from natural collectives can be allocated to these modes and weighted by fractions of total kinetic energy. Such quantitative measures can provide insight into the variation of the driving goals of a collective, as illustrated by applying these methods to a publicly available dataset of pigeon flocking. The geometric framework may also be profitably employed in the control of artificial systems of interacting agents such as robots.
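The simplest of the kinematic modes mentioned above, rigid translation, can be sketched directly (my illustration, not the paper's fibre-bundle construction): split a velocity snapshot into the shared mean velocity and its residual, which are energy-orthogonal for unit masses, and report each mode's fraction of the total kinetic energy.

```python
import numpy as np

def translation_energy_fraction(V):
    """Split a velocity snapshot V (n agents x d dims, unit masses) into the
    rigid-translation mode (shared mean velocity) and its orthogonal residual,
    and return each mode's fraction of the total kinetic energy."""
    v_mean = V.mean(axis=0)
    V_trans = np.tile(v_mean, (len(V), 1))
    V_rest = V - V_trans          # residual sums to zero, so modes are orthogonal
    energy = lambda U: 0.5 * np.sum(U**2)
    total = energy(V)
    return energy(V_trans) / total, energy(V_rest) / total

# A flock moving coherently: nearly all kinetic energy sits in translation.
rng = np.random.default_rng(2)
V = np.array([3.0, 0.0]) + 0.3 * rng.standard_normal((50, 2))
ft, fr = translation_energy_fraction(V)
print(ft + fr)  # the two fractions sum to 1
```

The full decomposition refines the residual further into rotation, inertia-tensor transformation, and shape modes.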

  14. Geometric decompositions of collective motion

    PubMed Central

    Mischiati, Matteo; Krishnaprasad, P. S.

    2017-01-01

    Collective motion in nature is a captivating phenomenon. Revealing the underlying mechanisms, which are of biological and theoretical interest, will require empirical data, modelling and analysis techniques. Here, we contribute a geometric viewpoint, yielding a novel method of analysing movement. Snapshots of collective motion are portrayed as tangent vectors on configuration space, with length determined by the total kinetic energy. Using the geometry of fibre bundles and connections, this portrait is split into orthogonal components each tangential to a lower dimensional manifold derived from configuration space. The resulting decomposition, when interleaved with classical shape space construction, is categorized into a family of kinematic modes—including rigid translations, rigid rotations, inertia tensor transformations, expansions and compressions. Snapshots of empirical data from natural collectives can be allocated to these modes and weighted by fractions of total kinetic energy. Such quantitative measures can provide insight into the variation of the driving goals of a collective, as illustrated by applying these methods to a publicly available dataset of pigeon flocking. The geometric framework may also be profitably employed in the control of artificial systems of interacting agents such as robots. PMID:28484319

  15. A low dimensional dynamical system for the wall layer

    NASA Technical Reports Server (NTRS)

    Aubry, N.; Keefe, L. R.

    1987-01-01

    Low dimensional dynamical systems which model a fully developed turbulent wall layer were derived. The model is based on the optimally fast convergent proper orthogonal decomposition, or Karhunen-Loeve expansion. This decomposition provides a set of eigenfunctions which are derived from the autocorrelation tensor at zero time lag. Via Galerkin projection, low dimensional sets of ordinary differential equations in time, for the coefficients of the expansion, were derived from the Navier-Stokes equations. The energy loss to the unresolved modes was modeled by an eddy viscosity representation, analogous to Heisenberg's spectral model. A set of eigenfunctions and eigenvalues was obtained from direct numerical simulation of a plane channel at a Reynolds number of 6600, based on the mean centerline velocity and the channel width, and compared with previous work done by Herzog. Using the new eigenvalues and eigenfunctions, a new ten dimensional set of ordinary differential equations was derived using five non-zero cross-stream Fourier modes with a periodic length of 377 wall units. The dynamical system was integrated for a range of the eddy viscosity parameter alpha. This work is encouraging.
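The proper orthogonal decomposition step can be sketched in a few lines (my illustration on synthetic data): the empirical eigenfunctions are the left singular vectors of the snapshot matrix, and the squared singular values give each mode's share of the energy.

```python
import numpy as np

def pod(snapshots):
    """Proper orthogonal (Karhunen-Loeve) decomposition of a snapshot matrix
    (rows: spatial points, columns: time snapshots). Returns the empirical
    eigenfunctions and the fraction of energy captured by each mode."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U, s**2 / np.sum(s**2)

# A synthetic two-mode "flow": POD puts essentially all energy in 2 modes.
x = np.linspace(0, 2 * np.pi, 64)
t = np.linspace(0, 10, 200)
Q = np.outer(np.sin(x), np.cos(t)) + 0.3 * np.outer(np.sin(2 * x), np.sin(3 * t))
modes, energy = pod(Q)
print(energy[:2].sum())  # ≈ 1.0
```

Truncating to the leading modes and Galerkin-projecting the governing equations onto them is what produces the low-dimensional ODE systems described in the record.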

  16. Thermal decomposition behavior of nano/micro bimodal feedstock with different solids loading

    NASA Astrophysics Data System (ADS)

    Oh, Joo Won; Lee, Won Sik; Park, Seong Jin

    2018-01-01

    Debinding is one of the most critical processes in powder injection molding. Parts are vulnerable to defect formation during debinding, and the long processing time of debinding decreases the production rate of the whole process. To determine the optimal conditions for the debinding process, the decomposition behavior of the feedstock should be understood. Since nano powder affects the decomposition behavior of feedstock, its effect needs to be investigated for nano/micro bimodal feedstocks. In this research, the effect of nano powder on the decomposition behavior of nano/micro bimodal feedstock has been studied. Bimodal powders were fabricated with different ratios of nano powder, and the critical solids loading of each powder was measured with a torque rheometer. Three different feedstocks were fabricated for each powder, depending on the solids loading condition. Thermogravimetric analysis (TGA) was carried out to analyze the thermal decomposition behavior of the feedstocks, and the decomposition activation energy was calculated. The results indicated that nano powder had a limited effect on feedstocks at solids loadings below the optimal range, whereas it strongly influenced the decomposition behavior at the optimal solids loading by causing polymer chain scission under high viscosity.

  17. Moment tensor clustering: a tool to monitor mining induced seismicity

    NASA Astrophysics Data System (ADS)

    Cesca, Simone; Dahm, Torsten; Tolga Sen, Ali

    2013-04-01

    Automated moment tensor inversion routines have been set up in the last decades for the analysis of global and regional seismicity. Recent developments could be used to analyse smaller events and larger datasets. In particular, applications to microseismicity, e.g. in mining environments, have led to the generation of large moment tensor catalogues. Moment tensor catalogues provide valuable information about the earthquake source and details of the rupturing processes taking place in the seismogenic region. Earthquake focal mechanisms can be used to discuss the local stress field, possible orientations of the fault system, or to evaluate the presence of shear and/or tensile cracks. Focal mechanism and moment tensor solutions are typically analysed for selected events, and quick and robust tools for the automated analysis of larger catalogues are needed. We propose here a method to perform cluster analysis for large moment tensor catalogues and identify families of events which characterize the studied microseismicity. Clusters include events with similar focal mechanisms, which first requires the definition of a distance between focal mechanisms. Different metrics are proposed, for the cases of pure double couple, constrained moment tensor, and full moment tensor catalogues. Different clustering approaches are implemented and discussed. The method is applied to synthetic and real datasets from mining environments to demonstrate its potential: the proposed clustering techniques prove able to automatically recognise major clusters. An important application for mining monitoring concerns the early identification of anomalous rupture processes, which is relevant for hazard assessment. This study is funded by the project MINE, which is part of the R&D-Programme GEOTECHNOLOGIEN. The project MINE is funded by the German Ministry of Education and Research (BMBF), Grant of project BMBF03G0737.

  18. Data-driven process decomposition and robust online distributed modelling for large-scale processes

    NASA Astrophysics Data System (ADS)

    Shu, Zhang; Lijuan, Li; Lijuan, Yao; Shipin, Yang; Tao, Zou

    2018-02-01

    With the increasing attention of networked control, system decomposition and distributed models show significant importance in the implementation of model-based control strategy. In this paper, a data-driven system decomposition and online distributed subsystem modelling algorithm was proposed for large-scale chemical processes. The key controlled variables are first partitioned by affinity propagation clustering algorithm into several clusters. Each cluster can be regarded as a subsystem. Then the inputs of each subsystem are selected by offline canonical correlation analysis between all process variables and its controlled variables. Process decomposition is then realised after the screening of input and output variables. When the system decomposition is finished, the online subsystem modelling can be carried out by recursively block-wise renewing the samples. The proposed algorithm was applied in the Tennessee Eastman process and the validity was verified.

  19. On improving the efficiency of tensor voting.

    PubMed

    Moreno, Rodrigo; Garcia, Miguel Angel; Puig, Domenec; Pizarro, Luis; Burgeth, Bernhard; Weickert, Joachim

    2011-11-01

    This paper proposes two alternative formulations to reduce the high computational complexity of tensor voting, a robust perceptual grouping technique used to extract salient information from noisy data. The first scheme consists of numerical approximations of the votes, which have been derived from an in-depth analysis of the plate and ball voting processes. The second scheme simplifies the formulation while keeping the same perceptual meaning of the original tensor voting: The stick tensor voting and the stick component of the plate tensor voting must reinforce surfaceness, the plate components of both the plate and ball tensor voting must boost curveness, whereas junctionness must be strengthened by the ball component of the ball tensor voting. Two new parameters have been proposed for the second formulation in order to control the potentially conflictive influence of the stick component of the plate vote and the ball component of the ball vote. Results show that the proposed formulations can be used in applications where efficiency is an issue since they have a complexity of order O(1). Moreover, the second proposed formulation has been shown to be more appropriate than the original tensor voting for estimating saliencies by appropriately setting the two new parameters.

  20. The Chern-Simons Current in Systems of DNA-RNA Transcriptions

    NASA Astrophysics Data System (ADS)

    Capozziello, Salvatore; Pincak, Richard; Kanjamapornkul, Kabin; Saridakis, Emmanuel N.

    2018-04-01

    A Chern-Simons current, coming from ghost and anti-ghost fields of supersymmetry theory, can be used to define a spectrum of gene expression in new time series data, where a spinor field, as an alternative representation of a gene, is adopted instead of the standard alphabet sequence of bases $A, T, C, G, U$. After a general discussion on the use of supersymmetry in biological systems, we give examples of the use of supersymmetry for living organisms, discuss the codon and anti-codon ghost fields, and develop an algebraic construction for trash DNA, the region of DNA which does not seem active in biological systems. As a general result, all hidden states of codons can be computed by Chern-Simons 3-forms. Finally, we plot a time series of genetic variations of a viral glycoprotein gene and a host T-cell receptor gene by using a gene tensor correlation network related to the Chern-Simons current. An empirical analysis of genetic shift in host cell receptor genes, with a separated cluster of genes, and genetic drift in the viral gene is obtained by using a tensor correlation plot over time series data derived as the empirical mode decomposition of the Chern-Simons current.

  1. Sample-based synthesis of two-scale structures with anisotropy

    DOE PAGES

    Liu, Xingchen; Shapiro, Vadim

    2017-05-19

    A vast majority of natural or synthetic materials are characterized by their anisotropic properties, such as stiffness. Such anisotropy is effected by the spatial distribution of the fine-scale structure and/or the anisotropy of the constituent phases at a finer scale. In design, proper control of the anisotropy may greatly enhance the efficiency and performance of synthesized structures. In this paper, we propose a sample-based two-scale structure synthesis approach that explicitly controls the anisotropic effective material properties of the structure on the coarse scale by orienting sampled material neighborhoods at the fine scale. We first characterize the non-uniform orientation distribution of the sample structure by showing that the principal axes of an orthotropic material may be determined by the eigenvalue decomposition of its effective stiffness tensor. Such effective stiffness tensors can be efficiently estimated based on the two-point correlation functions of the fine-scale structures. Then we synthesize the two-scale structure by rotating fine-scale structures from the sample to follow a given target orientation field. Finally, the effectiveness of the proposed approach is demonstrated through examples in both 2D and 3D.
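One concrete way to realize "principal axes from an eigenvalue decomposition of the stiffness tensor" (my sketch with made-up moduli, not necessarily the paper's exact contraction): contract the fourth-order stiffness to the symmetric 3x3 dilatational tensor D_ij = C_ijkk, whose eigenvectors align with the orthotropic material axes, and recover a known rotation from it.

```python
import numpy as np

def voigt_to_tensor(Cv):
    """Expand a 6x6 Voigt stiffness matrix into the full 3x3x3x3 tensor."""
    idx = {0: (0, 0), 1: (1, 1), 2: (2, 2), 3: (1, 2), 4: (0, 2), 5: (0, 1)}
    C = np.zeros((3, 3, 3, 3))
    for I in range(6):
        for J in range(6):
            i, j = idx[I]; k, l = idx[J]
            for a, b in {(i, j), (j, i)}:
                for c, d in {(k, l), (l, k)}:
                    C[a, b, c, d] = Cv[I, J]
    return C

# Orthotropic stiffness in its natural frame (illustrative made-up values).
Cv = np.diag([10.0, 20.0, 30.0, 4.0, 5.0, 6.0])
Cv[0, 1] = Cv[1, 0] = 3.0; Cv[0, 2] = Cv[2, 0] = 2.0; Cv[1, 2] = Cv[2, 1] = 1.0
C = voigt_to_tensor(Cv)

# Rotate the tensor to mimic a measurement in an arbitrary frame.
th = 0.6; c, s = np.cos(th), np.sin(th)
R = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1.0]])
Crot = np.einsum('ia,jb,kc,ld,abcd->ijkl', R, R, R, R, C)

# The dilatational contraction D_ij = C_ijkk transforms as D' = R D R^T,
# so its eigenvectors recover the orthotropic principal axes.
D = np.einsum('ijkk->ij', Crot)
_, vecs = np.linalg.eigh(D)
print(np.abs(vecs))  # columns ≈ rotated coordinate axes (up to sign)
```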

  2. Sample-based synthesis of two-scale structures with anisotropy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Xingchen; Shapiro, Vadim

    A vast majority of natural or synthetic materials are characterized by their anisotropic properties, such as stiffness. Such anisotropy is effected by the spatial distribution of the fine-scale structure and/or the anisotropy of the constituent phases at a finer scale. In design, proper control of the anisotropy may greatly enhance the efficiency and performance of synthesized structures. In this paper, we propose a sample-based two-scale structure synthesis approach that explicitly controls the anisotropic effective material properties of the structure on the coarse scale by orienting sampled material neighborhoods at the fine scale. We first characterize the non-uniform orientation distribution of the sample structure by showing that the principal axes of an orthotropic material may be determined by the eigenvalue decomposition of its effective stiffness tensor. Such effective stiffness tensors can be efficiently estimated based on the two-point correlation functions of the fine-scale structures. Then we synthesize the two-scale structure by rotating fine-scale structures from the sample to follow a given target orientation field. Finally, the effectiveness of the proposed approach is demonstrated through examples in both 2D and 3D.

  3. A strain energy filter for 3D vessel enhancement with application to pulmonary CT images.

    PubMed

    Xiao, Changyan; Staring, Marius; Shamonin, Denis; Reiber, Johan H C; Stolk, Jan; Stoel, Berend C

    2011-02-01

    The traditional Hessian-related vessel filters often have difficulty detecting complex structures like bifurcations, due to an over-simplified cylindrical model. To solve this problem, we present a shape-tuned strain energy density function to measure vessel likelihood in 3D medical images. This method is initially inspired by established stress-strain principles in mechanics. By considering the Hessian matrix as a stress tensor, the three invariants from orthogonal tensor decomposition are used independently or combined to formulate distinctive functions for vascular shape discrimination, brightness contrast and structure strength measuring. Moreover, a mathematical description of Hessian eigenvalues for general vessel shapes is obtained, based on an intensity continuity assumption, and a relative Hessian strength term is presented to ensure the dominance of second-order derivatives as well as suppress undesired step-edges. Finally, we adopt the multi-scale scheme to find an optimal solution through scale space. The proposed method is validated in experiments with a digital phantom and non-contrast-enhanced pulmonary CT data. It is shown that our model performed more effectively in enhancing vessel bifurcations and preserving details, compared to three existing filters. Copyright © 2010 Elsevier B.V. All rights reserved.
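The "three invariants" of a symmetric 3x3 tensor mentioned in the record are the trace, the second principal invariant, and the determinant; being rotation invariant, they can discriminate local shape regardless of vessel orientation. A small numpy check (my illustration, with a made-up Hessian typical of a bright tubular voxel: two large negative eigenvalues across the vessel, one near zero along it):

```python
import numpy as np

def invariants(H):
    """The three rotation invariants of a symmetric 3x3 matrix:
    trace, second principal invariant, and determinant."""
    t = np.trace(H)
    i2 = 0.5 * (t**2 - np.trace(H @ H))
    return t, i2, np.linalg.det(H)

# Hessian of a bright tube in its natural frame (illustrative values).
H = np.diag([-4.0, -3.5, -0.1])

# The invariants do not change when the image axes are rotated.
rng = np.random.default_rng(3)
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
print(np.allclose(invariants(H), invariants(Q @ H @ Q.T)))  # → True
```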

  4. A statistical approach based on accumulated degree-days to predict decomposition-related processes in forensic studies.

    PubMed

    Michaud, Jean-Philippe; Moreau, Gaétan

    2011-01-01

    Using pig carcasses exposed over 3 years in rural fields during spring, summer, and fall, we studied the relationship between decomposition stages and degree-day accumulation (i) to verify the predictability of the decomposition stages used in forensic entomology to document carcass decomposition and (ii) to build a degree-day accumulation model applicable to various decomposition-related processes. Results indicate that the decomposition stages can be predicted with accuracy from temperature records and that a reliable degree-day index can be developed to study decomposition-related processes. The development of degree-day indices opens new doors for researchers and allows for the application of inferential tools unaffected by climatic variability, as well as for the inclusion of statistics in a science that is primarily descriptive and in need of validation methods in courtroom proceedings. © 2010 American Academy of Forensic Sciences.
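Accumulated degree-days (ADD) themselves are simple to compute; a minimal sketch (my illustration with made-up temperatures and a zero base threshold):

```python
def accumulated_degree_days(daily_mean_temps, base=0.0):
    """Accumulated degree-days: sum of daily mean temperatures above a base
    threshold (days at or below the base contribute nothing)."""
    return sum(max(t - base, 0.0) for t in daily_mean_temps)

# A warm week accumulates thermal time faster than a cool one, so a
# decomposition stage is predicted from ADD rather than calendar days.
warm = [18, 20, 22, 21, 19, 17, 20]
cool = [4, 6, 3, 5, 7, 2, 4]
print(accumulated_degree_days(warm), accumulated_degree_days(cool))  # 137.0 31.0
```

Indexing decomposition stages by ADD rather than elapsed days is what makes the model transferable across seasons with different temperature regimes.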

  5. Relativistic analysis of stochastic kinematics

    NASA Astrophysics Data System (ADS)

    Giona, Massimiliano

    2017-10-01

    The relativistic analysis of stochastic kinematics is developed in order to determine the transformation of the effective diffusivity tensor in inertial frames. Poisson-Kac stochastic processes are initially considered. For one-dimensional spatial models, the effective diffusion coefficient measured in a frame Σ moving with velocity w with respect to the rest frame of the stochastic process is inversely proportional to the third power of the Lorentz factor γ(w) = (1 - w^2/c^2)^(-1/2). Subsequently, higher-dimensional processes are analyzed and it is shown that the diffusivity tensor in a moving frame becomes nonisotropic: the diffusivities parallel and orthogonal to the velocity of the moving frame scale differently with respect to γ(w). The analysis of discrete space-time diffusion processes permits one to obtain a general transformation theory of the tensor diffusivity, confirmed by several different simulation experiments. Several implications of the theory are also addressed and discussed.
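    The one-dimensional result quoted above is easy to state in code. A sketch; only the 1D scaling D' = D / γ(w)^3 is taken from the abstract, and the numerical values are illustrative:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def lorentz_gamma(w):
    """Lorentz factor gamma(w) = (1 - w^2/c^2)^(-1/2)."""
    return 1.0 / math.sqrt(1.0 - (w / C) ** 2)

def moving_frame_diffusivity_1d(D_rest, w):
    """1D result from the abstract: the effective diffusion coefficient
    in a frame moving at speed w scales as D_rest / gamma(w)**3."""
    return D_rest / lorentz_gamma(w) ** 3

D = 1.0e-9                 # rest-frame diffusivity, m^2/s (illustrative)
w = 0.6 * C                # gamma(0.6c) = 1.25
print(moving_frame_diffusivity_1d(D, w))
```

    At everyday velocities γ(w) is indistinguishable from 1, so the correction only matters for relativistic frames.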

  6. Holographic spin networks from tensor network states

    NASA Astrophysics Data System (ADS)

    Singh, Sukhwinder; McMahon, Nathan A.; Brennen, Gavin K.

    2018-01-01

    In the holographic correspondence of quantum gravity, a global on-site symmetry at the boundary generally translates to a local gauge symmetry in the bulk. We describe one way how the global boundary on-site symmetries can be gauged within the formalism of the multiscale renormalization ansatz (MERA), in light of the ongoing discussion between tensor networks and holography. We describe how to "lift" the MERA representation of the ground state of a generic one dimensional (1D) local Hamiltonian, which has a global on-site symmetry, to a dual quantum state of a 2D "bulk" lattice on which the symmetry appears gauged. The 2D bulk state decomposes in terms of spin network states, which label a basis in the gauge-invariant sector of the bulk lattice. This decomposition is instrumental to obtain expectation values of gauge-invariant observables in the bulk, and also reveals that the bulk state is generally entangled between the gauge and the remaining ("gravitational") bulk degrees of freedom that are not fixed by the symmetry. We present numerical results for ground states of several 1D critical spin chains to illustrate that the bulk entanglement potentially depends on the central charge of the underlying conformal field theory. We also discuss the possibility of emergent topological order in the bulk using a simple example, and also of emergent symmetries in the nongauge (gravitational) sector in the bulk. More broadly, our holographic model translates the MERA, a tensor network state, to a superposition of spin network states, as they appear in lattice gauge theories in one higher dimension.

  7. Estimation of full moment tensors, including uncertainties, for earthquakes, volcanic events, and nuclear explosions

    NASA Astrophysics Data System (ADS)

    Alvizuri, Celso R.

    We present a catalog of full seismic moment tensors for 63 events from Uturuncu volcano in Bolivia. The events were recorded during 2011-2012 in the PLUTONS seismic array of 24 broadband stations. Most events had magnitudes between 0.5 and 2.0 and did not generate discernible surface waves; the largest event was Mw 2.8. For each event we computed the misfit between observed and synthetic waveforms, and we used first-motion polarity measurements to reduce the number of possible solutions. Each moment tensor solution was obtained using a grid search over the six-dimensional space of moment tensors. For each event we show the misfit function in eigenvalue space, represented by a lune. We identify three subsets of the catalog: (1) 6 isotropic events, (2) 5 tensional crack events, and (3) a swarm of 14 events southeast of the volcanic center that appear to be double couples. The occurrence of positively isotropic events is consistent with other published results from volcanic and geothermal regions. Several of these previous results, as well as our results, cannot be interpreted within the context of either an oblique opening crack or a crack-plus-double-couple model. Proper characterization of uncertainties for full moment tensors is critical for distinguishing among physical models of source processes. A seismic moment tensor is a 3x3 symmetric matrix that provides a compact representation of a seismic source. We develop an algorithm to estimate moment tensors and their uncertainties from observed seismic data. For a given event, the algorithm performs a grid search over the six-dimensional space of moment tensors by generating synthetic waveforms for each moment tensor and then evaluating a misfit function between the observed and synthetic waveforms. 'The' moment tensor M0 for the event is then the moment tensor with minimum misfit. To describe the uncertainty associated with M0, we first convert the misfit function to a probability function. 
The uncertainty, or rather the confidence, is then given by the 'confidence curve' P(V), where P(V) is the probability that the true moment tensor for the event lies within the neighborhood of M0 that has fractional volume V. The area under the confidence curve provides a single, abbreviated 'confidence parameter' for M0. We apply the method to data from events in different regions and tectonic settings: 63 small events at Uturuncu volcano in Bolivia, moderate (Mw > 4) earthquakes in the southern Alaska subduction zone, and 12 earthquakes and 17 nuclear explosions at the Nevada Test Site. Characterization of moment tensor uncertainties puts us in a better position to discriminate among moment tensor source types and to assign physical processes to the events.
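    The misfit-to-confidence-curve construction can be sketched on a toy grid. Below, a 1D parameter stands in for the six-dimensional moment-tensor space, and the exponential misfit-to-probability map is our assumption (the abstract does not specify the conversion):

```python
import numpy as np

def confidence_curve(misfit):
    """For each fractional volume V of a uniform search grid, return
    P(V): the probability that the true solution lies in the
    best-fitting fraction V of the grid.  Misfit is mapped to a
    probability via exp(-misfit) (our assumption, not the paper's)."""
    p = np.exp(-misfit)
    p /= p.sum()                            # misfit -> normalized probability
    p_sorted = np.sort(p)[::-1]             # grow the neighborhood of the best cell
    V = np.arange(1, p.size + 1) / p.size   # fractional volume
    P = np.cumsum(p_sorted)                 # confidence curve P(V)
    # area under P(V): the single 'confidence parameter' (trapezoid rule)
    area = float(np.sum(0.5 * (P[1:] + P[:-1]) * np.diff(V)))
    return V, P, area

# 1D stand-in for the 6D grid search: misfit minimized at x = 0.3
x = np.linspace(0.0, 1.0, 201)
V, P, area = confidence_curve(50.0 * (x - 0.3) ** 2)
```

    A sharply peaked probability gives a curve that rises quickly and an area near 1; a flat, uninformative misfit gives an area near 0.5.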

  8. Advanced Signal Processing & Classification: UXO Standardized Test Site Data

    DTIC Science & Technology

    2012-04-01

    magnetic polarizability tensor, and represent the response of the target along each of three principal axes. In order to reduce the number of fit...Oldenburg-Billings (POB) model – GPA version The full POB analysis assumes an axially symmetric (axial and transverse) tensor dipolar target response, and... tensor, and represent the response of the target along each of three principal axes. The β’s are in turn expressed in terms of an empirical five

  9. Tensor-Dictionary Learning with Deep Kruskal-Factor Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stevens, Andrew J.; Pu, Yunchen; Sun, Yannan

    We introduce new dictionary learning methods for tensor-variate data of any order. We represent each data item as a sum of Kruskal decomposed dictionary atoms within the framework of beta-process factor analysis (BPFA). Our model is nonparametric and can infer the tensor-rank of each dictionary atom. This Kruskal-Factor Analysis (KFA) is a natural generalization of BPFA. We also extend KFA to a deep convolutional setting and develop online learning methods. We test our approach on image processing and classification tasks, achieving state-of-the-art results for 2D & 3D inpainting and Caltech 101. The experiments also show that atom-rank impacts both overcompleteness and sparsity.
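    A Kruskal-decomposed atom is a weighted sum of rank-1 outer products. A minimal sketch of assembling such a tensor (names and shapes are illustrative; this is not the BPFA/KFA inference itself):

```python
import numpy as np

def kruskal_tensor(factors, weights=None):
    """Assemble T = sum_r w_r * a_r (x) b_r (x) c_r ... from a list of
    (I_n x R) factor matrices, one per tensor mode; R is the rank."""
    R = factors[0].shape[1]
    if weights is None:
        weights = np.ones(R)
    T = np.zeros(tuple(f.shape[0] for f in factors))
    for r in range(R):
        term = factors[0][:, r]
        for f in factors[1:]:
            term = np.multiply.outer(term, f[:, r])  # rank-1 outer product
        T += weights[r] * term
    return T

rng = np.random.default_rng(0)
A, B, C = (rng.standard_normal((d, 2)) for d in (4, 5, 6))  # a rank-2, order-3 atom
T = kruskal_tensor([A, B, C])
```

    Inferring the rank R per atom, rather than fixing it, is the nonparametric ingredient the abstract highlights.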

  10. A novel registration-based methodology for prediction of trabecular bone fabric from clinical QCT: A comprehensive analysis

    PubMed Central

    Reyes, Mauricio; Zysset, Philippe

    2017-01-01

    Osteoporosis leads to hip fractures in aging populations and is diagnosed by modern medical imaging techniques such as quantitative computed tomography (QCT). Hip fracture sites involve trabecular bone, whose strength is determined by volume fraction and orientation, known as fabric. However, bone fabric cannot be reliably assessed in clinical QCT images of the proximal femur. Accordingly, we propose a novel registration-based estimation of bone fabric designed to preserve the tensor properties of bone fabric and to map bone fabric by a global and local decomposition of the gradient of a non-rigid image registration transformation. Furthermore, no comprehensive analysis of the critical components of this methodology has previously been conducted. Hence, the aim of this work was to identify the best registration-based strategy for assigning bone fabric to the QCT image of a patient’s proximal femur. First, the normalized correlation coefficient and curvature-based regularization were used for image-based registration, and the Frobenius norm of the stretch tensor of the local gradient was selected to quantify the distance among the proximal femora in the population. Based on this distance, the closest, farthest and mean femora, distinguished by sex, were chosen as alternative atlases to evaluate their influence on bone fabric prediction. Second, we analyzed different tensor mapping schemes for bone fabric prediction: identity, rotation-only, and rotation and stretch tensor. Third, we investigated the use of a population average fabric atlas. A leave-one-out (LOO) evaluation study was performed with a dual QCT and HR-pQCT database of 36 pairs of human femora. The quality of the fabric prediction was assessed with three metrics: the tensor norm (TN) error, the degree of anisotropy (DA) error and the angular deviation of the principal tensor direction (PTD).
    The closest femur atlas (CTP) with a full rotation (CR) for fabric mapping delivered the best results, with a TN error of 7.3 ± 0.9%, a DA error of 6.6 ± 1.3% and a PTD error of 25 ± 2°. The closest to the population mean femur atlas (MTP) using the same mapping scheme yielded only slightly higher errors than CTP at substantially lower computational cost. The population average fabric atlas yielded substantially higher errors than the MTP with the CR mapping scheme. Accounting for sex did not bring any significant improvement. The identified fabric mapping methodology will be exploited in patient-specific QCT-based finite element analysis of the proximal femur to improve the prediction of hip fracture risk. PMID:29176881
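    The rotation-based mapping schemes compared above rest on the polar decomposition F = RU of the local gradient of the registration transform. A sketch of the rotation-only variant, assuming numpy; function names and the synthetic tensors are ours:

```python
import numpy as np

def polar_rotation(F):
    """Rotation part R of the polar decomposition F = R U, via SVD."""
    W, _, Vt = np.linalg.svd(F)
    if np.linalg.det(W @ Vt) < 0:   # guard against reflections
        W[:, -1] *= -1
    return W @ Vt

def map_fabric_rotation_only(M, F):
    """'Rotation-only' scheme: transport a symmetric fabric tensor M
    with the rotation extracted from F, discarding the stretch."""
    R = polar_rotation(F)
    return R @ M @ R.T

theta = 0.4
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
F = Rz @ np.diag([1.2, 0.9, 1.0])   # rotation composed with a pure stretch
M = np.diag([3.0, 2.0, 1.0])        # synthetic fabric tensor
M_mapped = map_fabric_rotation_only(M, F)
```

    Because only the rotation is applied, the mapped tensor keeps the eigenvalues (degree of anisotropy) of the atlas fabric while reorienting its principal directions.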

  11. Process for remediation of plastic waste

    DOEpatents

    Pol, Vilas G [Westmont, IL; Thiyagarajan, Pappannan [Germantown, MD

    2012-04-10

    A single step process for degrading plastic waste by converting the plastic waste into carbonaceous products via thermal decomposition of the plastic waste by placing the plastic waste into a reactor, heating the plastic waste under an inert or air atmosphere until a temperature of 700 °C is reached, allowing the reactor to cool down, and recovering the resulting decomposition products therefrom. The decomposition products that this process yields are carbonaceous materials, and more specifically egg-shaped and spherical-shaped solid carbons. Additionally, in the presence of a transition metal compound, this thermal decomposition process produces multi-walled carbon nanotubes.

  12. Estimation of full moment tensors, including uncertainties, for earthquakes, volcanic events, and nuclear explosions

    NASA Astrophysics Data System (ADS)

    Alvizuri, Celso; Silwal, Vipul; Krischer, Lion; Tape, Carl

    2017-04-01

    A seismic moment tensor is a 3 × 3 symmetric matrix that provides a compact representation of seismic events within Earth's crust. We develop an algorithm to estimate moment tensors and their uncertainties from observed seismic data. For a given event, the algorithm performs a grid search over the six-dimensional space of moment tensors by generating synthetic waveforms at each grid point and then evaluating a misfit function between the observed and synthetic waveforms. 'The' moment tensor M for the event is then the moment tensor with minimum misfit. To describe the uncertainty associated with M, we first convert the misfit function to a probability function. The uncertainty, or rather the confidence, is then given by the 'confidence curve' P(V), where P(V) is the probability that the true moment tensor for the event lies within the neighborhood of M that has fractional volume V. The area under the confidence curve provides a single, abbreviated 'confidence parameter' for M. We apply the method to data from events in different regions and tectonic settings: small (Mw < 2.5) events at Uturuncu volcano in Bolivia, moderate (Mw > 4) earthquakes in the southern Alaska subduction zone, and natural and man-made events at the Nevada Test Site. Moment tensor uncertainties allow us to better discriminate among moment tensor source types and to assign physical processes to the events.

  13. Waveform-based Bayesian full moment tensor inversion and uncertainty determination for the induced seismicity in an oil/gas field

    NASA Astrophysics Data System (ADS)

    Gu, Chen; Marzouk, Youssef M.; Toksöz, M. Nafi

    2018-03-01

    Small earthquakes occur due to natural tectonic motions and are induced by oil and gas production processes. In many oil/gas fields and hydrofracking processes, induced earthquakes result from fluid extraction or injection. The locations and source mechanisms of these earthquakes provide valuable information about the reservoirs. Analysis of induced seismic events has mostly assumed a double-couple source mechanism. However, recent studies have shown a non-negligible percentage of non-double-couple components of source moment tensors in hydraulic fracturing events, assuming a full moment tensor source mechanism. Without uncertainty quantification of the moment tensor solution, it is difficult to determine the reliability of these source models. This study develops a Bayesian method to perform waveform-based full moment tensor inversion and uncertainty quantification for induced seismic events, accounting for both location and velocity model uncertainties. We conduct tests with synthetic events to validate the method, and then apply our newly developed Bayesian inversion approach to real induced seismicity in an oil/gas field in the Sultanate of Oman, determining the uncertainties in the source mechanism and in the location of that event.

  14. Decomposition Mechanism and Decomposition Promoting Factors of Waste Hard Metal for Zinc Decomposition Process (ZDP)

    NASA Astrophysics Data System (ADS)

    Pee, J. H.; Kim, Y. J.; Kim, J. Y.; Seong, N. E.; Cho, W. S.; Kim, K. J.

    2011-10-01

    We evaluated the decomposition mechanism and the factors that promote decomposition in the zinc decomposition process (ZDP) of waste hard metals, which are composed mostly of tungsten carbide and cobalt. For ZDP, zinc volatilization was suppressed and zinc vapor pressure was maintained in a graphite reaction crucible inside an electric furnace. Reacting for 2 h at 650 °C completely decomposed waste hard metals more than 30 mm thick. During separation-decomposition, molten zinc alloy formed a liquid mixture of γ-β1 phases at the cobalt binder layer (the reaction interface); the reacted zone expanded in volume, and the reacted layer separated horizontally from the remaining hard metal. Zinc used in the ZDP process was almost completely recovered, by decantation and by volatilization and collection at 1000 °C. The small amount of zinc remaining in the fully decomposed tungsten carbide-cobalt powder was removed with a phosphate solution, which dissolves cobalt only slowly.

  15. Reaction behaviors of decomposition of monocrotophos in aqueous solution by UV and UV/O(3) processes.

    PubMed

    Ku, Y; Wang, W; Shen, Y S

    2000-02-01

    The decomposition of monocrotophos (cis-3-dimethoxyphosphinyloxy-N-methyl-crotonamide) in aqueous solution by UV and UV/O(3) processes was studied. The experiments were carried out under various solution pH values to investigate the decomposition efficiencies of the reactant and organic intermediates in order to determine the completeness of decomposition. The photolytic decomposition rate of monocrotophos was increased with increasing solution pH because the solution pH affects the distribution and light absorbance of monocrotophos species. The combination of O(3) with UV light apparently promoted the decomposition and mineralization of monocrotophos in aqueous solution. For the UV/O(3) process, the breakage of the >C=C< bond of monocrotophos by ozone molecules was found to occur first, followed by mineralization by hydroxyl radicals to generate CO(3)(2-), PO4(3-), and NO(3)(-) anions in sequence. The quasi-global kinetics based on a simplified consecutive-parallel reaction scheme was developed to describe the temporal behavior of monocrotophos decomposition in aqueous solution by the UV/O(3) process.
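    The quasi-global consecutive-parallel scheme can be illustrated with the simplest consecutive chain, parent compound → organic intermediates → mineral products, integrated explicitly. Rate constants and step sizes below are illustrative, not fitted values from the study:

```python
def consecutive_first_order(k1, k2, a0=1.0, t_end=60.0, dt=0.01):
    """Explicit-Euler integration of A -> B -> C with first-order
    steps, in the spirit of the quasi-global scheme in the abstract
    (parent compound -> organic intermediates -> mineral products)."""
    a, b, c = a0, 0.0, 0.0
    for _ in range(int(t_end / dt)):
        ra, rb = k1 * a, k2 * b   # first-order rates of each step
        a -= ra * dt
        b += (ra - rb) * dt
        c += rb * dt
    return a, b, c

a, b, c = consecutive_first_order(k1=0.1, k2=0.05)
print(a, b, c)   # parent nearly gone; mass split between intermediates and products
```

    Tracking the intermediate pool B separately is what lets such a scheme distinguish mere parent-compound loss from full mineralization.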

  16. Groupwise Registration and Atlas Construction of 4th-Order Tensor Fields Using the ℝ+ Riemannian Metric*

    PubMed Central

    Barmpoutis, Angelos

    2010-01-01

    Registration of Diffusion-Weighted MR Images (DW-MRI) can be achieved by registering the corresponding 2nd-order Diffusion Tensor Images (DTI). However, it has been shown that higher-order diffusion tensors (e.g. order-4) outperform the traditional DTI in approximating complex fiber structures such as fiber crossings. In this paper we present a novel method for unbiased group-wise non-rigid registration and atlas construction of 4th-order diffusion tensor fields. To the best of our knowledge there is no other existing method to achieve this task. First we define a metric on the space of positive-valued functions based on the Riemannian metric of real positive numbers (denoted by ℝ+). Then, we use this metric in a novel functional minimization method for non-rigid 4th-order tensor field registration. We define a cost function that accounts for the 4th-order tensor re-orientation during the registration process and has analytic derivatives with respect to the transformation parameters. Finally, the tensor field atlas is computed as the minimizer of the variance defined using the Riemannian metric. We quantitatively compare the proposed method with other techniques that register scalar-valued or diffusion tensor (rank-2) representations of the DW-MRI. PMID:20436782
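    On the positive reals, the metric in question gives logarithmic distances, and the variance-minimizing (Fréchet) mean becomes the geometric mean. This is a sketch of that scalar special case, not of the tensor-field registration itself:

```python
import math

def dist_rplus(a, b):
    """Geodesic distance on the positive reals under the Riemannian
    metric of R+: d(a, b) = |log a - log b|."""
    return abs(math.log(a) - math.log(b))

def mean_rplus(values):
    """The Frechet (variance-minimizing) mean under that metric is
    the geometric mean."""
    return math.exp(sum(math.log(v) for v in values) / len(values))

print(dist_rplus(2.0, 8.0))    # |log 2 - log 8| = log 4
print(mean_rplus([1.0, 4.0]))  # geometric mean, ~2.0
```

    Working in log space keeps the averaged quantities strictly positive, which is the property the paper needs when averaging positive-valued tensor functions.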

  17. Process for remediation of plastic waste

    DOEpatents

    Pol, Vilas G; Thiyagarajan, Pappannan

    2013-11-12

    A single step process for degrading plastic waste by converting the plastic waste into carbonaceous products via thermal decomposition of the plastic waste by placing the plastic waste into a reactor, heating the plastic waste under an inert or air atmosphere until the temperature of about 700.degree. C. is achieved, allowing the reactor to cool down, and recovering the resulting decomposition products therefrom. The decomposition products that this process yields are carbonaceous materials, and more specifically carbon nanotubes having a partially filled core (encapsulated) adjacent to one end of the nanotube. Additionally, in the presence of a transition metal compound, this thermal decomposition process produces multi-walled carbon nanotubes.

  18. The use of Stress Tensor Discriminator Faults in separating heterogeneous fault-slip data with best-fit stress inversion methods. II. Compressional stress regimes

    NASA Astrophysics Data System (ADS)

    Tranos, Markos D.

    2018-02-01

    Synthetic heterogeneous fault-slip data as driven by Andersonian compressional stress tensors were used to examine the efficiency of best-fit stress inversion methods in separating them. Heterogeneous fault-slip data are separated only if (a) they have been driven by stress tensors defining 'hybrid' compression (R < 0.375), and their σ1 axes differ in trend more than 30° (R = 0) or 50° (R = 0.25). Separation is not feasible if they have been driven by (b) 'real' (R ≥ 0.375) and 'hybrid' compressional tensors having their σ1 axes in similar trend, or (c) 'real' compressional tensors. In case (a), the Stress Tensor Discriminator Faults (STDF) exist in more than 50% of the activated fault slip data while in cases (b) and (c), they exist in percentages of much less than 50% or not at all. They constitute a necessary discriminatory tool for the establishment and comparison of two compressional stress tensors determined by a best-fit stress inversion method. The best-fit stress inversion methods are not able to determine more than one 'real' compressional stress tensor, as far as the thrust stacking in an orogeny is concerned. They can only possibly discern stress differences in the late-orogenic faulting processes, but not between the main- and late-orogenic stages.

  19. Steepest Ascent Low/Non-Low-Frequency Ratio in Empirical Mode Decomposition to Separate Deterministic and Stochastic Velocities From a Single Lagrangian Drifter

    NASA Astrophysics Data System (ADS)

    Chu, Peter C.

    2018-03-01

    SOund Fixing And Ranging (RAFOS) floats deployed by the Naval Postgraduate School (NPS) in the California Current system from 1992 to 2001 at depths between 150 and 600 m (http://www.oc.nps.edu/npsRAFOS/) are used to study 2-D turbulent characteristics. Each drifter trajectory is adaptively decomposed by empirical mode decomposition (EMD) into a series of intrinsic mode functions (IMFs), each with its own characteristic scale. A new steepest ascent low/non-low-frequency ratio is proposed in this paper to separate a Lagrangian trajectory into low-frequency (nondiffusive, i.e., deterministic) and high-frequency (diffusive, i.e., stochastic) components. The 2-D turbulent (eddy) diffusion coefficients are calculated on the basis of classical turbulent diffusion with mixing-length theory from the stochastic component of a single drifter. Statistical characteristics of the calculated 2-D turbulence length scale, strength, and diffusion coefficients from the NPS RAFOS data are presented, with the mean values (averaged over all drifters) of the 2-D diffusion coefficients comparable to those from the commonly used diffusivity tensor method.
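    The low/high-frequency split of the IMF series can be sketched generically. Here a simple zero-crossing-rate threshold stands in for the paper's steepest ascent low/non-low-frequency ratio, and the two synthetic "IMFs" are stand-ins for EMD output (all names and values are illustrative):

```python
import numpy as np

def zero_crossing_rate(x):
    """Fraction of adjacent sample pairs whose signs differ."""
    return float(np.mean(np.signbit(x[:-1]) != np.signbit(x[1:])))

def split_imfs(imfs, zcr_threshold=0.05):
    """Sum slowly varying modes into a 'deterministic' part and the
    rest into a 'stochastic' part.  The zero-crossing-rate threshold
    is our stand-in for the paper's frequency-ratio criterion."""
    z = np.zeros(imfs[0].shape)
    low = sum((m for m in imfs if zero_crossing_rate(m) <= zcr_threshold), z)
    high = sum((m for m in imfs if zero_crossing_rate(m) > zcr_threshold), z)
    return low, high

t = np.linspace(0.0, 100.0, 2001)
slow = np.sin(2 * np.pi * t / 100.0)   # mean-flow-like component
fast = 0.3 * np.sin(2 * np.pi * t)     # eddy-like component
det, sto = split_imfs(np.vstack([fast, slow]))
```

    Only the stochastic part then feeds the mixing-length estimate of the eddy diffusion coefficients.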

  20. Multispectral image fusion for illumination-invariant palmprint recognition

    PubMed Central

    Zhang, Xinman; Xu, Xuebin; Shang, Dongpeng

    2017-01-01

    Multispectral palmprint recognition has shown broad prospects for personal identification due to its high accuracy and great stability. In this paper, we develop a novel illumination-invariant multispectral palmprint recognition method. To combine the information from multiple spectral bands, an image-level fusion framework is constructed based on a fast and adaptive bidimensional empirical mode decomposition (FABEMD) and a weighted Fisher criterion. The FABEMD technique decomposes the multispectral images into their bidimensional intrinsic mode functions (BIMFs), on which an illumination compensation operation is performed. The weighted Fisher criterion is used to construct the fusion coefficients at the decomposition level, so that the images are correctly separated in the fusion space. The image fusion framework has shown strong robustness against illumination variation. In addition, a tensor-based extreme learning machine (TELM) mechanism is presented for feature extraction and classification of two-dimensional (2D) images. In general, this method has fast learning speed and satisfactory recognition accuracy. Comprehensive experiments conducted on the PolyU multispectral palmprint database illustrate that the proposed method can achieve favorable results. Under ideal illumination the recognition accuracy is as high as 99.93%, and it remains 99.50% under non-ideal illumination. PMID:28558064

  1. Multispectral image fusion for illumination-invariant palmprint recognition.

    PubMed

    Lu, Longbin; Zhang, Xinman; Xu, Xuebin; Shang, Dongpeng

    2017-01-01

    Multispectral palmprint recognition has shown broad prospects for personal identification due to its high accuracy and great stability. In this paper, we develop a novel illumination-invariant multispectral palmprint recognition method. To combine the information from multiple spectral bands, an image-level fusion framework is constructed based on a fast and adaptive bidimensional empirical mode decomposition (FABEMD) and a weighted Fisher criterion. The FABEMD technique decomposes the multispectral images into their bidimensional intrinsic mode functions (BIMFs), on which an illumination compensation operation is performed. The weighted Fisher criterion is used to construct the fusion coefficients at the decomposition level, so that the images are correctly separated in the fusion space. The image fusion framework has shown strong robustness against illumination variation. In addition, a tensor-based extreme learning machine (TELM) mechanism is presented for feature extraction and classification of two-dimensional (2D) images. In general, this method has fast learning speed and satisfactory recognition accuracy. Comprehensive experiments conducted on the PolyU multispectral palmprint database illustrate that the proposed method can achieve favorable results. Under ideal illumination the recognition accuracy is as high as 99.93%, and it remains 99.50% under non-ideal illumination.

  2. Application of decomposition techniques to the preliminary design of a transport aircraft

    NASA Technical Reports Server (NTRS)

    Rogan, J. E.; Mcelveen, R. P.; Kolb, M. A.

    1986-01-01

    A multifaceted decomposition of a nonlinear constrained optimization problem describing the preliminary design process for a transport aircraft has been made. Flight dynamics, flexible aircraft loads and deformations, and preliminary structural design subproblems appear prominently in the decomposition. The use of design process decomposition for scheduling design projects, a new system integration approach to configuration control, and the application of object-centered programming to a new generation of design tools are discussed.

  3. Challenges of including nitrogen effects on decomposition in earth system models

    NASA Astrophysics Data System (ADS)

    Hobbie, S. E.

    2011-12-01

    Despite the importance of litter decomposition for ecosystem fertility and carbon balance, key uncertainties remain about how this fundamental process is affected by nitrogen (N) availability. Nevertheless, resolving such uncertainties is critical for mechanistic inclusion of such processes in earth system models, towards predicting the ecosystem consequences of increased anthropogenic reactive N. Towards that end, we have conducted a series of experiments examining nitrogen effects on litter decomposition. We found that both substrate N and externally supplied N (regardless of form) accelerated the initial decomposition rate. Faster initial decomposition rates were linked to the higher activity of carbohydrate-degrading enzymes associated with externally supplied N and the greater relative abundances of Gram negative and Gram positive bacteria associated with green leaves and externally supplied organic N (assessed using phospholipid fatty acid analysis, PLFA). By contrast, later in decomposition, externally supplied N slowed decomposition, increasing the fraction of slowly decomposing litter and reducing lignin-degrading enzyme activity and relative abundances of Gram negative and Gram positive bacteria. Our results suggest that elevated atmospheric N deposition may have contrasting effects on the dynamics of different soil carbon pools, decreasing mean residence times of active fractions comprising very fresh litter, while increasing those of more slowly decomposing fractions including more processed litter. Incorporating these contrasting effects of N on decomposition processes into models is complicated by lingering uncertainties about how these effects generalize across ecosystems and substrates.

  4. An embedding of the universal Askey-Wilson algebra into Uq (sl2) ⊗Uq (sl2) ⊗Uq (sl2)

    NASA Astrophysics Data System (ADS)

    Huang, Hau-Wen

    2017-09-01

    The Askey-Wilson algebras were used to interpret the algebraic structure hidden in the Racah-Wigner coefficients of the quantum algebra Uq (sl2). In this paper, we display an injection of a universal analog Δq of the Askey-Wilson algebras into Uq (sl2) ⊗Uq (sl2) ⊗Uq (sl2) underlying this application. Moreover, we establish the decomposition rules for 3-fold tensor products of irreducible Verma Uq (sl2)-modules and of finite-dimensional irreducible Uq (sl2)-modules into direct sums of finite-dimensional irreducible Δq-modules. As an application, we derive a formula for the Racah-Wigner coefficients of Uq (sl2).

  5. Elliptic Relaxation of a Tensor Representation for the Redistribution Terms in a Reynolds Stress Turbulence Model

    NASA Technical Reports Server (NTRS)

    Carlson, J. R.; Gatski, T. B.

    2002-01-01

    A formulation to include the effects of wall proximity in a second-moment closure model that utilizes a tensor representation for the redistribution terms in the Reynolds stress equations is presented. The wall-proximity effects are modeled through an elliptic relaxation process of the tensor expansion coefficients that properly accounts for both correlation length and time scales as the wall is approached. Direct numerical simulation data and Reynolds stress solutions using a full differential approach are compared for the case of fully developed channel flow.

  6. Elliptic Relaxation of a Tensor Representation of the Pressure-Strain and Dissipation Rate

    NASA Technical Reports Server (NTRS)

    Carlson, John R.; Gatski, Thomas B.

    2002-01-01

    A formulation to include the effects of wall-proximity in a second moment closure model is presented that utilizes a tensor representation for the redistribution term in the Reynolds stress equations. The wall-proximity effects are modeled through an elliptic relaxation process of the tensor expansion coefficients that properly accounts for both correlation length and time scales as the wall is approached. DNS data and Reynolds stress solutions using a full differential approach at channel Reynolds number of 590 are compared to the new model.

  7. Tensor-based dynamic reconstruction method for electrical capacitance tomography

    NASA Astrophysics Data System (ADS)

    Lei, J.; Mu, H. P.; Liu, Q. B.; Li, Z. H.; Liu, S.; Wang, X. Y.

    2017-03-01

    Electrical capacitance tomography (ECT) is an attractive visualization measurement method, in which the acquisition of high-quality images is beneficial for the understanding of the underlying physical or chemical mechanisms of the dynamic behaviors of the measurement objects. In real-world measurement environments, imaging objects are often in a dynamic process, and the exploitation of the spatial-temporal correlations related to the dynamic nature will contribute to improving the imaging quality. Different from existing imaging methods that are often used in ECT measurements, in this paper a dynamic image sequence is stacked into a third-order tensor that consists of a low rank tensor and a sparse tensor within the framework of the multiple measurement vectors model and the multi-way data analysis method. The low rank tensor models the similar spatial distribution information among frames, which is slowly changing over time, and the sparse tensor captures the perturbations or differences introduced in each frame, which is rapidly changing over time. With the assistance of the Tikhonov regularization theory and the tensor-based multi-way data analysis method, a new cost function, with the considerations of the multi-frames measurement data, the dynamic evolution information of a time-varying imaging object and the characteristics of the low rank tensor and the sparse tensor, is proposed to convert the imaging task in the ECT measurement into a reconstruction problem of a third-order image tensor. An effective algorithm is developed to search for the optimal solution of the proposed cost function, and the images are reconstructed via a batching pattern. The feasibility and effectiveness of the developed reconstruction method are numerically validated.
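    The low-rank-plus-sparse split of a stacked frame sequence can be sketched with a simple alternating scheme: truncated SVD for the low-rank part, soft-thresholding for the sparse part. This is a generic robust-PCA-style sketch under our own assumptions, not the paper's cost function or algorithm; frames are matricized into columns of X:

```python
import numpy as np

def lowrank_plus_sparse(X, rank, lam, n_iter=50):
    """Alternating sketch of X ~ L + S: L low rank (slowly varying
    background shared across frames), S sparse (frame-to-frame
    perturbations).  L-step: truncated SVD; S-step: soft-threshold."""
    S = np.zeros_like(X)
    for _ in range(n_iter):
        U, sv, Vt = np.linalg.svd(X - S, full_matrices=False)
        L = (U[:, :rank] * sv[:rank]) @ Vt[:rank]          # best rank-`rank` fit
        R = X - L
        S = np.sign(R) * np.maximum(np.abs(R) - lam, 0.0)  # soft-threshold residual
    return L, S

rng = np.random.default_rng(1)
background = np.outer(rng.standard_normal(30), rng.standard_normal(20))  # rank-1 'slow' part
spikes = np.zeros((30, 20))
spikes[rng.integers(0, 30, 15), rng.integers(0, 20, 15)] = 5.0           # 'fast' perturbations
L, S = lowrank_plus_sparse(background + spikes, rank=1, lam=0.1)
```

    The paper works with a third-order tensor rather than a matricization, but the division of labor is the same: the low-rank factor captures what is shared across frames, the sparse factor what changes between them.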

  8. Tectonic analysis of mine tremor mechanisms from the Upper Silesian Coal Basin

    NASA Astrophysics Data System (ADS)

    Sagan, Grzegorz; Teper, Lesław; Zuberek, Waclaw M.

    1996-07-01

The fault network of the Upper Silesian Coal Basin (USCB) is built of sets of strike-slip, oblique-slip and dip-slip faults. It is a typical product of a force couple acting evenly along the parallel of latitude, causing horizontal, anti-clockwise movement of the rock mass. Earlier research on focal mechanisms of mine tremors, using standard fault-plane solutions, has shown that some events are related to tectonic directions in the main structural units of the USCB. An attempt was undertaken to analyze the records of mine tremors from the period 1992-1994 in selected coal fields. The digital records of about 200 mine tremors with energy larger than 1×10^4 J (M_L > 1.23) were analyzed with SMT software for seismic moment tensor inversion. The seismic moment tensors of the mine tremors were decomposed into an isotropic (I) part, a compensated linear vector dipole (CLVD) part and a double-couple (DC) part. The DC part is prevalent (up to 70%) in the majority of quakes from the central region of the USCB. A group of mine tremors with a large I component (up to 50%) can also be observed. The spatial orientations of the fault and auxiliary planes were obtained from the computations for the DC part of the seismic moment tensor. Study of the DC part of the seismic moment tensor made it possible to separate a group of events whose origin may be attributed to unstable energy release on the surfaces of faults forming the regional structural pattern. The possible influence of the Cainozoic tectonic history of the USCB on the recent shape of the stress field is discussed.
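The I/CLVD/DC decomposition used in studies like this can be computed from the eigenvalues of the moment tensor. The sketch below follows one common convention (the epsilon ratio of the deviatoric eigenvalues); other conventions exist, and the function name is invented for illustration, not taken from the SMT software mentioned above.

```python
import numpy as np

def decompose_moment_tensor(M):
    """Split a symmetric 3x3 moment tensor into isotropic (I),
    CLVD and double-couple (DC) percentages.
    One common convention: eps = m*_min / |m*_max| on the deviatoric
    eigenvalues (0 -> pure DC, +/-0.5 -> pure CLVD)."""
    m_iso = np.trace(M) / 3.0
    dev = M - m_iso * np.eye(3)
    d = np.linalg.eigvalsh(dev)
    d = d[np.argsort(np.abs(d))]          # |d[0]| <= |d[1]| <= |d[2]|
    if np.abs(d[2]) < 1e-12:              # purely isotropic source
        return 100.0, 0.0, 0.0
    eps = d[0] / abs(d[2])
    p_iso = 100.0 * abs(m_iso) / (abs(m_iso) + abs(d[2]))
    p_clvd = 2.0 * abs(eps) * (100.0 - p_iso)
    p_dc = 100.0 - p_iso - p_clvd
    return p_iso, p_clvd, p_dc
```

A pure shear source such as diag(1, -1, 0) comes out 100% DC, while diag(2, -1, -1) is a pure CLVD.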

  9. Discrete variable representation in electronic structure theory: quadrature grids for least-squares tensor hypercontraction.

    PubMed

    Parrish, Robert M; Hohenstein, Edward G; Martínez, Todd J; Sherrill, C David

    2013-05-21

We investigate the application of molecular quadratures obtained from either standard Becke-type grids or discrete variable representation (DVR) techniques to the recently developed least-squares tensor hypercontraction (LS-THC) representation of the electron repulsion integral (ERI) tensor. LS-THC uses least-squares fitting to renormalize a two-sided pseudospectral decomposition of the ERI, over a physical-space quadrature grid. While this procedure is technically applicable with any choice of grid, the best efficiency is obtained when the quadrature is tuned to accurately reproduce the overlap metric for quadratic products of the primary orbital basis. Properly selected Becke DFT grids can roughly attain this property. Additionally, we provide algorithms for adapting the DVR techniques of the dynamics community to produce two different classes of grids which approximately attain this property. The simplest algorithm is radial discrete variable representation (R-DVR), which diagonalizes the finite auxiliary-basis representation of the radial coordinate for each atom, and then combines Lebedev-Laikov spherical quadratures and Becke atomic partitioning to produce the full molecular quadrature grid. The other algorithm is full discrete variable representation (F-DVR), which uses approximate simultaneous diagonalization of the finite auxiliary-basis representation of the full position operator to produce non-direct-product quadrature grids. The qualitative features of all three grid classes are discussed, and then the relative efficiencies of these grids are compared in the context of LS-THC-DF-MP2. Coarse Becke grids are found to give essentially the same accuracy and efficiency as R-DVR grids; however, the latter are built from explicit knowledge of the basis set and may guide future development of atom-centered grids. F-DVR is found to provide reasonable accuracy with markedly fewer points than either Becke or R-DVR schemes.
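The DVR construction (diagonalizing a coordinate operator in a finite basis to obtain grid points) can be illustrated in one dimension: in a harmonic-oscillator basis the position operator is tridiagonal, and its eigenvalues are exactly the Gauss-Hermite quadrature nodes. This is a minimal 1D analogue of the radial R-DVR idea, not the authors' molecular-grid code.

```python
import numpy as np

def dvr_grid(n):
    """Diagonalize the position operator in the first n harmonic-
    oscillator basis functions; the eigenvalues are the n DVR grid
    points (here, the Gauss-Hermite quadrature nodes).
    <m|x|n> = sqrt((n+1)/2) on the first off-diagonal (hbar=m=omega=1)."""
    off = np.sqrt(np.arange(1, n) / 2.0)
    X = np.diag(off, 1) + np.diag(off, -1)
    grid, U = np.linalg.eigh(X)
    return grid, U          # U transforms between basis and grid representations
```

For n = 5 the resulting grid coincides with the 5-point Gauss-Hermite nodes returned by `np.polynomial.hermite.hermgauss`.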

  10. Broadband Magnetotelluric Investigations of Crustal Resistivity Structure in North-Eastern Alberta: Implications for Engineered Geothermal Systems

    NASA Astrophysics Data System (ADS)

    Liddell, M. V.; Unsworth, M. J.; Nieuwenhuis, G.

    2013-12-01

Greenhouse gas emissions from hydrocarbon consumption produce profound changes in the global climate, and the implementation of alternative energy sources is needed. The oilsands industry in Alberta (Canada) is a major producer of greenhouse gases, as natural gas is burnt to produce the heat required to extract and process bitumen. Geothermal energy could be utilized to provide this heat and has the potential to reduce both the financial costs and the environmental impacts of the oilsands industry. In order to determine the geothermal potential, the details of the reservoir must be understood. Conventional hydrothermal reservoirs have been detected using geophysical techniques such as magnetotellurics (MT), which measures the electrical conductivity of the Earth. However, in Northern Alberta the geothermal gradient is relatively low, and heat must be extracted from deep inside the basement rocks using Engineered Geothermal Systems (EGS); an alternative exploration technique is therefore required. MT can be useful in this context as it can detect fracture zones and regions of elevated porosity. MT data were recorded near Fort McMurray with the goal of determining the geothermal potential by understanding the crustal resistivity structure beneath the Athabasca Oilsands. The MT data are being used to locate targets of significance for geothermal exploration, such as regions of low resistivity in the basement rocks, which can relate to in situ fluids or fracture zones that can facilitate efficient heat extraction or heat transport. A total of 93 stations were collected ~500 m apart along two profiles, 30 km and 20 km long, respectively. Signals were recorded using Phoenix Geophysics V5-2000 systems over a frequency band from 1000 to 0.001 Hz, corresponding to penetration depths of approximately 50 m to 50 km. Groom-Bailey tensor decomposition and phase tensor analysis show a well-defined geoelectric strike direction varying along the profile from N60°E to N45°E. Inversion of the data reveals the low-resistivity sedimentary rocks of the Western Canadian Sedimentary Basin overlying a highly resistive Precambrian crystalline basement. The basement rocks show strong indications of being electrically anisotropic. Groom-Bailey and phase tensor azimuths are stable and consistent across both frequency and distance, but display large phase tensor skew values (indicating 3D structure) and small induction vectors (indicating a lack of lateral structure). This type of anisotropy is notable because of its apparently widespread nature and the number of sites available to constrain the anisotropic characteristics. These results can help guide future geothermal development in Alberta, as detailed information on the host-rock resistivity structure can aid any EGS development.
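The phase tensor analysis referred to above is, in one common formulation (Caldwell-style), Phi = X^-1 Y, where X and Y are the real and imaginary parts of the 2x2 MT impedance; the skew angle beta built from Phi indicates 3D structure when large. A minimal sketch under those assumptions, exercised on a synthetic 1D impedance:

```python
import numpy as np

def phase_tensor(Z):
    """Phase tensor Phi = X^-1 Y from a 2x2 complex MT impedance
    Z = X + iY, plus the skew angle beta in degrees.
    beta near zero is consistent with 1D/2D structure."""
    X, Y = Z.real, Z.imag
    Phi = np.linalg.solve(X, Y)          # solves X @ Phi = Y
    beta = 0.5 * np.degrees(np.arctan2(Phi[0, 1] - Phi[1, 0],
                                       Phi[0, 0] + Phi[1, 1]))
    return Phi, beta
```

For a 1D Earth, Z = [[0, z], [-z, 0]] with a single scalar impedance z, so Phi reduces to (Im z / Re z) times the identity and beta vanishes.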

  11. Thermal Decomposition of Nd3(+), Sr2(+) and Pb2(+) Exchanged Beta’’ Aluminas,

    DTIC Science & Technology

    1987-07-01

reconstructive recrystallization process is responsible for the formation of the MP phase; this perhaps is a surprising result. The decomposition processes of Nd3... eutectics may be present. A general trend for all decompositions of metastable substituted beta″ aluminas would therefore seem to be that when occurring

  12. What Role Does Photodegradation Play in Influencing Plant Litter Decomposition and Biogeochemistry in Coastal Marsh Ecosystems?

    NASA Astrophysics Data System (ADS)

    Tobler, M.; White, D. A.; Abbene, M. L.; Burst, S. L.; McCulley, R. L.; Barnes, P. W.

    2016-02-01

    Decomposition is a crucial component of global biogeochemical cycles that influences the fate and residence time of carbon and nutrients in organic matter pools, yet the processes controlling litter decomposition in coastal marshes are not fully understood. We conducted a series of field studies to examine what role photodegradation, a process driven in part by solar UV radiation (280-400 nm), plays in the decomposition of the standing dead litter of Sagittaria lancifolia and Spartina patens, two common species in marshes of intermediate salinity in southern Louisiana, USA. Results indicate that the exclusion of solar UV significantly altered litter mass loss, but the magnitude and direction of these effects varied depending on species, height of the litter above the water surface and the stage of decomposition. Over one growing season, S. lancifolia litter exposed to ambient solar UV had significantly less mass loss compared to litter exposed to attenuated UV over the initial phase of decomposition (0-5 months; ANOVA P=0.004) then treatment effects switched in the latter phase of the study (5-7 months; ANOVA P<0.001). Similar results were found in S. patens over an 11-month period. UV exposure reduced total C, N and lignin by 24-33% in remaining tissue with treatment differences most pronounced in S. patens. Phospholipid fatty-acid analysis (PFLA) indicated that UV also significantly altered microbial (bacterial) biomass and bacteria:fungi ratios of decomposing litter. These findings, and others, indicate that solar UV can have positive and negative net effects on litter decomposition in marsh plants with inhibition of biotic (microbial) processes occurring early in the decomposition process then shifting to enhancement of decomposition via abiotic (photodegradation) processes later in decomposition. Photodegradation of standing litter represents a potentially significant pathway of C and N loss from these coastal wetland ecosystems.

  13. Computational aeroelastic analysis of aircraft wings including geometry nonlinearity

    NASA Astrophysics Data System (ADS)

    Tian, Binyu

The objective of the present study is to show the ability to solve fluid-structure interaction problems more realistically by including the geometric nonlinearity of the structure, so that the aeroelastic analysis can be extended to the onset of flutter, or into the post-flutter regime. A nonlinear finite element analysis software package is developed based on the second Piola-Kirchhoff stress and the Green-Lagrange strain, a pair of energetically conjugate tensors that can accommodate arbitrarily large structural deformations and deflections, to study the flutter phenomenon. Since both of these tensors are objective, i.e., rigid-body motion makes no contribution to their components, the movement of the body, including maneuvers and deformation, can be included. The nonlinear finite element software developed in this study is verified against ANSYS, NASTRAN, ABAQUS, and IDEAS for linear static, nonlinear static, linear dynamic and nonlinear dynamic structural solutions. To solve the flow problems with the Euler/Navier-Stokes equations, the nonlinear structural software is then embedded into ENSAERO, an aeroelastic analysis software package developed at NASA Ames Research Center. The coupling of the two codes, both nonlinear in their own fields, is achieved by the domain decomposition method first proposed by Guruswamy. A procedure has been established for the aeroelastic analysis process. Aeroelastic analysis results have been obtained for a fighter wing in the transonic regime for various cases. The influence of dynamic pressure on flutter has been examined over a range of Mach numbers. Even though the current analysis matches the general aeroelastic characteristics, the numerical values do not match previous studies very well and need further investigation. The flutter aeroelastic analysis results have also been plotted at several time points. The influence of the deforming wing geometry can be clearly seen in those plots. The movement of the shock changes the aerodynamic load distribution on the wing. The effect of viscosity on the aeroelastic analysis is also discussed. Also compared are the flutter solutions with and without the structural nonlinearity. As can be seen, the linear structural solution grows without bound, which cannot be true in reality. The nonlinear solution is more realistic and can be used to understand fluid-structure interaction behavior, and to control or prevent disastrous events. (Abstract shortened by UMI.)

  14. Residue decomposition of submodel of WEPS

    USDA-ARS?s Scientific Manuscript database

    The Residue Decomposition submodel of the Wind Erosion Prediction System (WEPS) simulates the decrease in crop residue biomass due to microbial activity. The decomposition process is modeled as a first-order reaction with temperature and moisture as driving variables. Decomposition is a function of ...

  15. Non-double-couple earthquakes. 1. Theory

    USGS Publications Warehouse

    Julian, B.R.; Miller, A.D.; Foulger, G.R.

    1998-01-01

    Historically, most quantitative seismological analyses have been based on the assumption that earthquakes are caused by shear faulting, for which the equivalent force system in an isotropic medium is a pair of force couples with no net torque (a 'double couple,' or DC). Observations of increasing quality and coverage, however, now resolve departures from the DC model for many earthquakes and find some earthquakes, especially in volcanic and geothermal areas, that have strongly non-DC mechanisms. Understanding non-DC earthquakes is important both for studying the process of faulting in detail and for identifying nonshear-faulting processes that apparently occur in some earthquakes. This paper summarizes the theory of 'moment tensor' expansions of equivalent-force systems and analyzes many possible physical non-DC earthquake processes. Contrary to long-standing assumption, sources within the Earth can sometimes have net force and torque components, described by first-rank and asymmetric second-rank moment tensors, which must be included in analyses of landslides and some volcanic phenomena. Non-DC processes that lead to conventional (symmetric second-rank) moment tensors include geometrically complex shear faulting, tensile faulting, shear faulting in an anisotropic medium, shear faulting in a heterogeneous region (e.g., near an interface), and polymorphic phase transformations. Undoubtedly, many non-DC earthquake processes remain to be discovered. Progress will be facilitated by experimental studies that use wave amplitudes, amplitude ratios, and complete waveforms in addition to wave polarities and thus avoid arbitrary assumptions such as the absence of volume changes or the temporal similarity of different moment tensor components.

  16. Application of decomposition techniques to the preliminary design of a transport aircraft

    NASA Technical Reports Server (NTRS)

    Rogan, J. E.; Kolb, M. A.

    1987-01-01

    A nonlinear constrained optimization problem describing the preliminary design process for a transport aircraft has been formulated. A multifaceted decomposition of the optimization problem has been made. Flight dynamics, flexible aircraft loads and deformations, and preliminary structural design subproblems appear prominently in the decomposition. The use of design process decomposition for scheduling design projects, a new system integration approach to configuration control, and the application of object-centered programming to a new generation of design tools are discussed.

  17. Newton-based optimization for Kullback-Leibler nonnegative tensor factorizations

    DOE PAGES

    Plantenga, Todd; Kolda, Tamara G.; Hansen, Samantha

    2015-04-30

Tensor factorizations with nonnegativity constraints have found application in analysing data from cyber traffic, social networks, and other areas. We consider application data best described as being generated by a Poisson process (e.g. count data), which leads to sparse tensors that can be modelled by sparse factor matrices. In this paper, we investigate efficient techniques for computing an appropriate canonical polyadic tensor factorization based on the Kullback–Leibler divergence function. We propose novel subproblem solvers within the standard alternating block variable approach. Our new methods exploit structure and reformulate the optimization problem as small independent subproblems. We employ bound-constrained Newton and quasi-Newton methods. Finally, we compare our algorithms against other codes, demonstrating superior speed for high accuracy results and the ability to quickly find sparse solutions.
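As a concrete illustration of nonnegative factorization under the KL (Poisson) divergence, the sketch below implements the matrix special case with the classic multiplicative (Lee-Seung) updates; the paper's Newton and quasi-Newton subproblem solvers are not reproduced here, and the function name is invented.

```python
import numpy as np

def kl_nmf(X, rank, n_iter=500, eps=1e-9):
    """Nonnegative factorization X ~ W @ H under the KL (Poisson)
    divergence via multiplicative updates. Matrix special case of
    the CP factorization; illustrative, not the paper's solver."""
    rng = np.random.default_rng(0)
    m, n = X.shape
    W = rng.random((m, rank)) + eps
    H = rng.random((rank, n)) + eps
    for _ in range(n_iter):
        R = X / (W @ H + eps)                      # elementwise ratio
        W *= (R @ H.T) / (H.sum(axis=1) + eps)     # KL update for W
        R = X / (W @ H + eps)
        H *= (W.T @ R) / (W.sum(axis=0)[:, None] + eps)  # KL update for H
    return W, H
```

The updates keep both factors nonnegative and monotonically decrease the KL divergence, so an exactly low-rank positive matrix is recovered to small relative error.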

  18. Spatial Mapping of Translational Diffusion Coefficients Using Diffusion Tensor Imaging: A Mathematical Description

    PubMed Central

    SHETTY, ANIL N.; CHIANG, SHARON; MALETIC-SAVATIC, MIRJANA; KASPRIAN, GREGOR; VANNUCCI, MARINA; LEE, WESLEY

    2016-01-01

In this article, we discuss the theoretical background for diffusion weighted imaging and diffusion tensor imaging. Molecular diffusion is a random process involving thermal Brownian motion. In biological tissues, the underlying microstructures restrict the diffusion of water molecules, making diffusion directionally dependent. Water diffusion in tissue is mathematically characterized by the diffusion tensor, the elements of which contain information about the magnitude and direction of diffusion and are functions of the coordinate system. Thus, it is possible to generate contrast in tissue based primarily on diffusion effects. Expressing diffusion in terms of the measured diffusion coefficient (eigenvalue) in any one direction can lead to errors. Nowhere is this more evident than in white matter, due to the preferential orientation of myelin fibers. The directional dependency is removed by diagonalization of the diffusion tensor, which then yields a set of three eigenvalues and eigenvectors, representing the magnitude and direction of the three orthogonal axes of the diffusion ellipsoid, respectively. For example, the eigenvalue corresponding to the eigenvector along the long axis of the fiber corresponds qualitatively to diffusion with the least restriction. Determination of the principal values of the diffusion tensor and various anisotropic indices provides structural information. We review the use of diffusion measurements using the modified Stejskal–Tanner diffusion equation. The anisotropy is analyzed by decomposing the diffusion tensor based on symmetrical properties describing the geometry of the diffusion tensor. We further describe diffusion tensor properties in visualizing fiber tract organization of the human brain. PMID:27441031
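The diagonalization step described above, and the rotationally invariant indices built from the eigenvalues, can be sketched directly; fractional anisotropy (FA) and mean diffusivity (MD) below use their standard definitions, and the function name is ours.

```python
import numpy as np

def fa_md(D):
    """Eigen-decompose a symmetric 3x3 diffusion tensor and return
    fractional anisotropy (FA, dimensionless in [0, 1]) and mean
    diffusivity (MD, same units as D)."""
    lam = np.linalg.eigvalsh(D)
    md = lam.mean()
    # FA = sqrt(3/2 * sum((lam - md)^2) / sum(lam^2))
    fa = np.sqrt(1.5 * np.sum((lam - md) ** 2) / np.sum(lam ** 2))
    return fa, md
```

A perfectly isotropic tensor gives FA = 0, while a single-axis ("stick") tensor gives FA = 1, independent of the coordinate system.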

  19. Symmetric Positive 4th Order Tensors & Their Estimation from Diffusion Weighted MRI⋆

    PubMed Central

    Barmpoutis, Angelos; Jian, Bing; Vemuri, Baba C.; Shepherd, Timothy M.

    2009-01-01

In Diffusion Weighted Magnetic Resonance Image (DW-MRI) processing, a 2nd order tensor has been commonly used to approximate the diffusivity function at each lattice point of the DW-MRI data. It is now well known that this 2nd-order approximation fails to capture complex local tissue structures, such as fiber crossings. In this paper we employ a 4th order symmetric positive semi-definite (PSD) tensor approximation to represent the diffusivity function and present a novel technique to estimate these tensors from the DW-MRI data while guaranteeing the PSD property. Several articles have been published in the literature on higher order tensor approximations of the diffusivity function, but none of them guarantee the positive semi-definite constraint, which is a fundamental constraint since negative values of the diffusivity coefficients are not meaningful. In our method, we parameterize the 4th order tensors as a sum of squares of quadratic forms by using the so-called Gram matrix method from linear algebra and its relation to Hilbert's theorem on ternary quartics. This parametric representation is then used in a nonlinear least-squares formulation to estimate the PSD tensors of order 4 from the data. We define a metric for the higher-order tensors and employ it for regularization across the lattice. Finally, performance of this model is depicted on synthetic data as well as real DW-MRI from an isolated rat hippocampus. PMID:17633709
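The Gram-matrix (sum-of-squares) idea can be shown in a few lines: with v(g) a vector of degree-2 monomials in the gradient direction g, any positive semi-definite Gram matrix G yields a nonnegative quartic d(g) = v(g)^T G v(g), since d(g) = |A^T v(g)|^2 for G = A A^T. A minimal illustration under that assumption; the monomial ordering and names are arbitrary choices, not the paper's parameterization.

```python
import numpy as np

def quartic_diffusivity(G):
    """Return d(g) = v(g)^T G v(g), a ternary quartic in the gradient
    direction g = (x, y, z). If the 6x6 Gram matrix G is PSD, then
    d(g) >= 0 for every g, which is the positivity guarantee."""
    def d(g):
        x, y, z = g
        v = np.array([x * x, y * y, z * z, x * y, x * z, y * z])
        return v @ G @ v
    return d
```

Building G as A @ A.T for any real A makes it PSD by construction, so the resulting quartic is nonnegative for every direction sampled.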

  20. Variational optical flow estimation based on stick tensor voting.

    PubMed

    Rashwan, Hatem A; Garcia, Miguel A; Puig, Domenec

    2013-07-01

    Variational optical flow techniques allow the estimation of flow fields from spatio-temporal derivatives. They are based on minimizing a functional that contains a data term and a regularization term. Recently, numerous approaches have been presented for improving the accuracy of the estimated flow fields. Among them, tensor voting has been shown to be particularly effective in the preservation of flow discontinuities. This paper presents an adaptation of the data term by using anisotropic stick tensor voting in order to gain robustness against noise and outliers with significantly lower computational cost than (full) tensor voting. In addition, an anisotropic complementary smoothness term depending on directional information estimated through stick tensor voting is utilized in order to preserve discontinuity capabilities of the estimated flow fields. Finally, a weighted non-local term that depends on both the estimated directional information and the occlusion state of pixels is integrated during the optimization process in order to denoise the final flow field. The proposed approach yields state-of-the-art results on the Middlebury benchmark.

  1. Decomposition of 3,5-dinitrobenzamide in aqueous solution during UV/H2O2 and UV/TiO2 oxidation processes.

    PubMed

    Yan, Yingjie; Liao, Qi-Nan; Ji, Feng; Wang, Wei; Yuan, Shoujun; Hu, Zhen-Hu

    2017-02-01

3,5-Dinitrobenzamide has been widely used as a feed additive to control coccidiosis in poultry, and part of the added 3,5-dinitrobenzamide is excreted into wastewater and surface water. The removal of 3,5-dinitrobenzamide from wastewater and surface water has not been reported in previous studies. Highly reactive hydroxyl radicals from UV/hydrogen peroxide (H₂O₂) and UV/titanium dioxide (TiO₂) advanced oxidation processes (AOPs) can decompose organic contaminants efficiently. In this study, the decomposition of 3,5-dinitrobenzamide in aqueous solution during UV/H₂O₂ and UV/TiO₂ oxidation processes was investigated. The decomposition of 3,5-dinitrobenzamide fits well with a fluence-based pseudo-first-order kinetics model. The decomposition in both oxidation processes was affected by solution pH and was inhibited under alkaline conditions. Inorganic anions such as NO₃⁻, Cl⁻, SO₄²⁻, HCO₃⁻, and CO₃²⁻ inhibited the degradation of 3,5-dinitrobenzamide during the UV/H₂O₂ and UV/TiO₂ oxidation processes. After complete decomposition in both oxidation processes, approximately 50% of the 3,5-dinitrobenzamide was decomposed into organic intermediates, and the rest was mineralized to CO₂, H₂O, and other inorganic ions. Ions such as NH₄⁺, NO₃⁻, and NO₂⁻ were released into the aqueous solution during the degradation. The primary decomposition products of 3,5-dinitrobenzamide were identified using liquid chromatography ion-trap time-of-flight mass spectrometry (LCMS-IT-TOF). Based on these products and the ions released, a possible decomposition pathway of 3,5-dinitrobenzamide in both UV/H₂O₂ and UV/TiO₂ processes was proposed.
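The fluence-based pseudo-first-order model mentioned above is ln(C/C0) = -k'F, where F is the UV fluence and k' the fluence-based rate constant. A minimal fitting sketch; the rate constant and data points below are invented for illustration, not taken from the study.

```python
import numpy as np

def fluence_rate_constant(fluence, conc):
    """Fit the fluence-based pseudo-first-order model
    ln(C/C0) = -k' * F by linear regression on (F, ln(C/C0));
    returns k' (per unit UV fluence)."""
    conc = np.asarray(conc, dtype=float)
    slope, _ = np.polyfit(np.asarray(fluence, dtype=float),
                          np.log(conc / conc[0]), 1)
    return -slope
```

On noiseless synthetic data generated with a known k', the regression recovers the rate constant.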

  2. On the construction of a ground truth framework for evaluating voxel-based diffusion tensor MRI analysis methods.

    PubMed

    Van Hecke, Wim; Sijbers, Jan; De Backer, Steve; Poot, Dirk; Parizel, Paul M; Leemans, Alexander

    2009-07-01

    Although many studies are starting to use voxel-based analysis (VBA) methods to compare diffusion tensor images between healthy and diseased subjects, it has been demonstrated that VBA results depend heavily on parameter settings and implementation strategies, such as the applied coregistration technique, smoothing kernel width, statistical analysis, etc. In order to investigate the effect of different parameter settings and implementations on the accuracy and precision of the VBA results quantitatively, ground truth knowledge regarding the underlying microstructural alterations is required. To address the lack of such a gold standard, simulated diffusion tensor data sets are developed, which can model an array of anomalies in the diffusion properties of a predefined location. These data sets can be employed to evaluate the numerous parameters that characterize the pipeline of a VBA algorithm and to compare the accuracy, precision, and reproducibility of different post-processing approaches quantitatively. We are convinced that the use of these simulated data sets can improve the understanding of how different diffusion tensor image post-processing techniques affect the outcome of VBA. In turn, this may possibly lead to a more standardized and reliable evaluation of diffusion tensor data sets of large study groups with a wide range of white matter altering pathologies. The simulated DTI data sets will be made available online (http://www.dti.ua.ac.be).

  3. Decentralized Dimensionality Reduction for Distributed Tensor Data Across Sensor Networks.

    PubMed

    Liang, Junli; Yu, Guoyang; Chen, Badong; Zhao, Minghua

    2016-11-01

    This paper develops a novel decentralized dimensionality reduction algorithm for the distributed tensor data across sensor networks. The main contributions of this paper are as follows. First, conventional centralized methods, which utilize entire data to simultaneously determine all the vectors of the projection matrix along each tensor mode, are not suitable for the network environment. Here, we relax the simultaneous processing manner into the one-vector-by-one-vector (OVBOV) manner, i.e., determining the projection vectors (PVs) related to each tensor mode one by one. Second, we prove that in the OVBOV manner each PV can be determined without modifying any tensor data, which simplifies corresponding computations. Third, we cast the decentralized PV determination problem as a set of subproblems with consensus constraints, so that it can be solved in the network environment only by local computations and information communications among neighboring nodes. Fourth, we introduce the null space and transform the PV determination problem with complex orthogonality constraints into an equivalent hidden convex one without any orthogonality constraint, which can be solved by the Lagrange multiplier method. Finally, experimental results are given to show that the proposed algorithm is an effective dimensionality reduction scheme for the distributed tensor data across the sensor networks.

  4. Spatial distribution of F-net moment tensors for the 2005 West Off Fukuoka Prefecture Earthquake determined by the extended method of the NIED F-net routine

    NASA Astrophysics Data System (ADS)

    Matsumoto, Takumi; Ito, Yoshihiro; Matsubayashi, Hirotoshi; Sekiguchi, Shoji

    2006-01-01

The 2005 West Off Fukuoka Prefecture Earthquake, with a Japan Meteorological Agency (JMA) magnitude (MJMA) of 7.0, occurred on March 20, 2005. We determined moment tensor solutions using surface waves with an extended method of the NIED F-net routine processing. The horizontal distance to each station is rounded to the nearest 1 km, and the variance reduction approach is applied over focal depths from 2 km at 1 km intervals. We obtain the moment tensors of 101 events with MJMA exceeding 3.0 and the spatial distribution of these moment tensors. The focal mechanisms of the aftershocks are mainly of the strike-slip type. The alignment of the epicenters in the rupture zone of the main shock is oriented between N110°E and N130°E, which is close to the strike of the main shock's moment tensor solution (N122°E). These moment tensor solutions of intermediate-sized aftershocks around the focal region represent basic and important information concerning earthquakes for investigating regional tectonic stress fields, source mechanisms and so on.

  5. Kinetics of Thermal Decomposition of Ammonium Perchlorate by TG/DSC-MS-FTIR

    NASA Astrophysics Data System (ADS)

    Zhu, Yan-Li; Huang, Hao; Ren, Hui; Jiao, Qing-Jie

    2014-01-01

The method of thermogravimetry/differential scanning calorimetry-mass spectrometry-Fourier transform infrared (TG/DSC-MS-FTIR) simultaneous analysis has been used to study the thermal decomposition of ammonium perchlorate (AP). The nonisothermal data obtained at various heating rates were processed using NETZSCH Thermokinetics. The MS-FTIR spectra showed that N2O and NO2 were the main gaseous products of the thermal decomposition of AP, and that there was competition between the formation reactions of N2O and NO2 during the process, with an iso-concentration point of N2O and NO2. The dependence of the activation energy, calculated by Friedman's iso-conversional method, on the degree of conversion indicated that the AP decomposition process can be divided into three stages: autocatalytic reaction, low-temperature diffusion, and high-temperature stable-phase reaction. The corresponding kinetic parameters were determined by multivariate nonlinear regression, and a mechanism for the AP decomposition process was proposed.
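Friedman's iso-conversional method regresses ln(dα/dt) against 1/T at a fixed conversion α across heating rates; the slope is -Ea/R. A minimal sketch with synthetic values, not experimental AP data.

```python
import numpy as np

def friedman_ea(T, dadt):
    """Friedman iso-conversional estimate at a fixed conversion:
    ln(da/dt) = const - Ea/(R*T), so the slope of ln(da/dt)
    versus 1/T is -Ea/R. Returns Ea in J/mol."""
    R = 8.314  # gas constant, J/(mol K)
    slope, _ = np.polyfit(1.0 / np.asarray(T, dtype=float),
                          np.log(np.asarray(dadt, dtype=float)), 1)
    return -slope * R
```

Rates generated from a first-order Arrhenius model at fixed conversion return the activation energy used to build them.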

  6. Understanding Litter Input Controls on Soil Organic Matter Turnover and Formation are Essential for Improving Carbon-Climate Feedback Predictions for Arctic, Tundra Ecosystems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wallenstein, Matthew

The Arctic region has stored vast amounts of carbon (C) in soils over thousands of years because decomposition has been limited by cold, wet conditions. Arctic soils now contain roughly as much C as is contained in all other soils across the globe combined. However, climate warming could unlock this soil C as decomposition accelerates and permafrost thaws. In addition to temperature-driven acceleration of decomposition, several additional processes could either counteract or augment warming-induced SOM losses. For example, increased plant growth under a warmer climate will increase organic matter inputs to soils, which could fuel further soil decomposition by microbes, but will also increase the production of new SOM. Whether Arctic ecosystems store or release carbon in the future depends in part on the balance between these two counteracting processes. By differentiating SOM decomposition and formation and understanding the drivers of these processes, we will better understand how these systems function. We did not find evidence of priming under current conditions, defined as an increase in the decomposition of native SOM stocks. This suggests that decomposition is unlikely to be further accelerated through this mechanism. We did find that decomposition of native SOM occurred when nitrogen was added to these soils, suggesting that nitrogen limits decomposition in these systems. Our results highlight the resilience and extraordinary C storage capacity of these soils, and suggest shrub expansion may partially mitigate C losses from decomposition of old SOM as Arctic soils warm.

  7. An Improved Method for Seismic Event Depth and Moment Tensor Determination: CTBT Related Application

    NASA Astrophysics Data System (ADS)

    Stachnik, J.; Rozhkov, M.; Baker, B.

    2016-12-01

    According to the Protocol to CTBT, International Data Center is required to conduct expert technical analysis and special studies to improve event parameters and assist State Parties in identifying the source of specific event. Determination of seismic event source mechanism and its depth is a part of these tasks. It is typically done through a strategic linearized inversion of the waveforms for a complete or subset of source parameters, or similarly defined grid search through precomputed Greens Functions created for particular source models. We show preliminary results using the latter approach from an improved software design and applied on a moderately powered computer. In this development we tried to be compliant with different modes of CTBT monitoring regime and cover wide range of source-receiver distances (regional to teleseismic), resolve shallow source depths, provide full moment tensor solution based on body and surface waves recordings, be fast to satisfy both on-demand studies and automatic processing and properly incorporate observed waveforms and any uncertainties a priori as well as accurately estimate posteriori uncertainties. Implemented HDF5 based Green's Functions pre-packaging allows much greater flexibility in utilizing different software packages and methods for computation. Further additions will have the rapid use of Instaseis/AXISEM full waveform synthetics added to a pre-computed GF archive. Along with traditional post processing analysis of waveform misfits through several objective functions and variance reduction, we follow a probabilistic approach to assess the robustness of moment tensor solution. In a course of this project full moment tensor and depth estimates are determined for DPRK 2009, 2013 and 2016 events and shallow earthquakes using a new implementation of waveform fitting of teleseismic P waves. A full grid search over the entire moment tensor space is used to appropriately sample all possible solutions. 
A recent method by Tape & Tape (2012) for discretizing the complete moment tensor space from a geometric perspective is used. Moment tensors for the DPRK events show isotropic percentages greater than 50%. Depth estimates for the DPRK events range from 1.0 to 1.4 km. Probabilistic uncertainty estimates on the moment tensor parameters lend robustness to the solution.
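
    As a rough illustration of the isotropic/deviatoric bookkeeping behind such percentages, the sketch below splits a symmetric 3×3 moment tensor and reports the isotropic share of its norm. This is one common convention, chosen here for brevity; the study itself uses the Tape & Tape (2012) parameterization, which is not reproduced.

```python
import numpy as np

def isotropic_fraction(M):
    """Split a symmetric 3x3 moment tensor into isotropic and deviatoric
    parts and return the isotropic share of the total norm (one common
    convention; not the Tape & Tape parameterization)."""
    M = np.asarray(M, dtype=float)
    iso = np.trace(M) / 3.0 * np.eye(3)   # isotropic (volumetric) part
    dev = M - iso                         # deviatoric remainder
    n_iso, n_dev = np.linalg.norm(iso), np.linalg.norm(dev)
    return n_iso / (n_iso + n_dev)

# A purely explosive source is fully isotropic; a pure shear source is not.
explosion = np.eye(3)
shear = np.diag([1.0, -1.0, 0.0])
```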

  8. Analytical gradients for tensor hyper-contracted MP2 and SOS-MP2 on graphical processing units

    DOE PAGES

    Song, Chenchen; Martinez, Todd J.

    2017-08-29

    Analytic energy gradients for tensor hyper-contraction (THC) are derived and implemented for second-order Møller-Plesset perturbation theory (MP2), with and without the scaled-opposite-spin (SOS)-MP2 approximation. By exploiting the THC factorization, the formal scaling of MP2 and SOS-MP2 gradient calculations with respect to system size is reduced to quartic and cubic, respectively. An efficient implementation has been developed that utilizes both graphics processing units and sparse tensor techniques exploiting spatial sparsity of the atomic orbitals. THC-MP2 has been applied to both geometry optimization and ab initio molecular dynamics (AIMD) simulations. Furthermore, the resulting energy conservation in micro-canonical AIMD demonstrates that the implementation provides accurate nuclear gradients with respect to the THC-MP2 potential energy surfaces.

  9. Analytical gradients for tensor hyper-contracted MP2 and SOS-MP2 on graphical processing units

    NASA Astrophysics Data System (ADS)

    Song, Chenchen; Martínez, Todd J.

    2017-10-01

    Analytic energy gradients for tensor hyper-contraction (THC) are derived and implemented for second-order Møller-Plesset perturbation theory (MP2), with and without the scaled-opposite-spin (SOS)-MP2 approximation. By exploiting the THC factorization, the formal scaling of MP2 and SOS-MP2 gradient calculations with respect to system size is reduced to quartic and cubic, respectively. An efficient implementation has been developed that utilizes both graphics processing units and sparse tensor techniques exploiting spatial sparsity of the atomic orbitals. THC-MP2 has been applied to both geometry optimization and ab initio molecular dynamics (AIMD) simulations. The resulting energy conservation in micro-canonical AIMD demonstrates that the implementation provides accurate nuclear gradients with respect to the THC-MP2 potential energy surfaces.

  10. Analytical gradients for tensor hyper-contracted MP2 and SOS-MP2 on graphical processing units

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Song, Chenchen; Martinez, Todd J.

    Analytic energy gradients for tensor hyper-contraction (THC) are derived and implemented for second-order Møller-Plesset perturbation theory (MP2), with and without the scaled-opposite-spin (SOS)-MP2 approximation. By exploiting the THC factorization, the formal scaling of MP2 and SOS-MP2 gradient calculations with respect to system size is reduced to quartic and cubic, respectively. An efficient implementation has been developed that utilizes both graphics processing units and sparse tensor techniques exploiting spatial sparsity of the atomic orbitals. THC-MP2 has been applied to both geometry optimization and ab initio molecular dynamics (AIMD) simulations. Furthermore, the resulting energy conservation in micro-canonical AIMD demonstrates that the implementation provides accurate nuclear gradients with respect to the THC-MP2 potential energy surfaces.

  11. Decomposition of gas-phase trichloroethene by the UV/TiO2 process in the presence of ozone.

    PubMed

    Shen, Y S; Ku, Y

    2002-01-01

    The decomposition of gas-phase trichloroethene (TCE) in air streams by direct photolysis and by the UV/TiO2 and UV/O3 processes was studied. The experiments were carried out under various UV light intensities and wavelengths, ozone dosages, and initial concentrations of TCE to investigate and compare the removal efficiency of the pollutant. For the UV/TiO2 process, the individual contributions to the decomposition of TCE by direct photolysis and by hydroxyl radical attack were differentiated to discuss the quantum efficiency with 254 and 365 nm UV lamps. The removal of gaseous TCE by the UV/TiO2 process was found to be reduced in the presence of ozone, possibly because ozone molecules scavenge the hydroxyl radicals produced from the excitation of TiO2 by UV radiation, thereby inhibiting the decomposition of TCE. A photoreactor design equation for the decomposition of gaseous TCE by the UV/TiO2 process in air streams was developed by combining the continuity equation of the pollutant with the surface catalysis reaction rate expression. With the proposed design scheme, the temporal distribution of TCE under various operating conditions of the UV/TiO2 process can be well modeled.

  12. Soil fauna and plant litter decomposition in tropical and subalpine forests

    Treesearch

    G. Gonzalez; T.R. Seastedt

    2001-01-01

    The decomposition of plant residues is influenced by their chemical composition, the physical-chemical environment, and the decomposer organisms. Most studies interested in latitudinal gradients of decomposition have focused on substrate quality and climate effects on decomposition, and have excluded explicit recognition of the soil organisms involved in the process....

  13. Bayesian ISOLA: new tool for automated centroid moment tensor inversion

    NASA Astrophysics Data System (ADS)

    Vackář, Jiří; Burjánek, Jan; Gallovič, František; Zahradník, Jiří; Clinton, John

    2017-04-01

    Focal mechanisms are important for understanding the seismotectonics of a region, and they serve as a basic input for seismic hazard assessment. Usually, the point source approximation and the moment tensor (MT) are used. We have developed a new, fully automated tool for centroid moment tensor (CMT) inversion in a Bayesian framework. It includes automated data retrieval, data selection in which station components with instrumental disturbances or low signal-to-noise ratios are rejected, and full-waveform inversion in a space-time grid around a provided hypocenter. The method is innovative in the following aspects: (i) The CMT inversion is fully automated; no user interaction is required, although the details of the process can be visually inspected later in many automatically plotted figures. (ii) The automated process includes detection of disturbances based on the MouseTrap code, so disturbed recordings do not affect the inversion. (iii) A data covariance matrix calculated from pre-event noise yields an automated weighting of the station recordings according to their noise levels and also serves as an automated frequency filter suppressing noisy frequencies. (iv) A Bayesian approach is used, so not only the best solution is obtained but also the posterior probability density function. (v) A space-time grid search, effectively combined with a least-squares inversion of the moment tensor components, speeds up the inversion and yields more accurate results than stochastic methods. The method has been tested on synthetic and observed data, in particular by comparison with manually processed moment tensors of all events with M≥3 in the Swiss catalogue over 16 years, using data available at the Swiss data center (http://arclink.ethz.ch). The quality of the results of the presented automated process is comparable with careful manual processing of data.
The software package programmed in Python has been designed to be as versatile as possible in order to be applicable in various networks ranging from local to regional. The method can be applied either to the everyday network data flow, or to process large previously existing earthquake catalogues and data sets.
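
    The covariance-weighted least-squares step described in aspect (v) can be sketched as follows. This is a minimal, hypothetical illustration of linear inversion for moment tensor components, not the Bayesian ISOLA implementation; the design matrix, noise level, and dimensions are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_mt = 200, 6                 # waveform samples, independent MT components

G = rng.normal(size=(n_samples, n_mt))   # hypothetical Green's-function design matrix
m_true = rng.normal(size=n_mt)           # "true" moment tensor components
sigma = 0.1
d = G @ m_true + rng.normal(scale=sigma, size=n_samples)

# Diagonal noise covariance here; the paper estimates a full covariance
# matrix from pre-event noise, which also acts as a frequency filter.
Cinv = np.eye(n_samples) / sigma**2

# Weighted least squares: m = (G^T C^-1 G)^-1 G^T C^-1 d
m_hat = np.linalg.solve(G.T @ Cinv @ G, G.T @ Cinv @ d)
```

In the full method this solve is repeated at every node of the space-time grid, and the misfits feed the posterior probability density.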

  14. GC × GC-TOFMS and supervised multivariate approaches to study human cadaveric decomposition olfactive signatures.

    PubMed

    Stefanuto, Pierre-Hugues; Perrault, Katelynn A; Stadler, Sonja; Pesesse, Romain; LeBlanc, Helene N; Forbes, Shari L; Focant, Jean-François

    2015-06-01

    In forensic thanato-chemistry, the understanding of the process of soft tissue decomposition is still limited. A better understanding of the decomposition process and the characterization of the associated volatile organic compounds (VOC) can help to improve the training of victim recovery (VR) canines, which are used to search for trapped victims in natural disasters or to locate corpses during criminal investigations. The complexity of matrices and the dynamic nature of this process require the use of comprehensive analytical methods for investigation. Moreover, the variability of the environment and between individuals creates additional difficulties in terms of normalization. The resolution of the complex mixture of VOCs emitted by a decaying corpse can be improved using comprehensive two-dimensional gas chromatography (GC × GC), compared to classical single-dimensional gas chromatography (1DGC). This study combines the analytical advantages of GC × GC coupled to time-of-flight mass spectrometry (TOFMS) with the data handling robustness of supervised multivariate statistics to investigate the VOC profile of human remains during early stages of decomposition. Various supervised multivariate approaches are compared to interpret the large data set. Moreover, early decomposition stages of pig carcasses (typically used as human surrogates in field studies) are also monitored to obtain a direct comparison of the two VOC profiles and estimate the robustness of this human decomposition analog model. In this research, we demonstrate that pig and human decomposition processes can be described by the same trends for the major compounds produced during the early stages of soft tissue decomposition.

  15. Tracking Hierarchical Processing in Morphological Decomposition with Brain Potentials

    ERIC Educational Resources Information Center

    Lavric, Aureliu; Elchlepp, Heike; Rastle, Kathleen

    2012-01-01

    One important debate in psycholinguistics concerns the nature of morphological decomposition processes in visual word recognition (e.g., darkness = {dark} + {-ness}). One theory claims that these processes arise during orthographic analysis and prior to accessing meaning (Rastle & Davis, 2008), and another argues that these processes arise through…

  16. Variations in the expansion and shear scalars for dissipative fluids

    NASA Astrophysics Data System (ADS)

    Akram, A.; Ahmad, S.; Jami, A. Rehman; Sufyan, M.; Zahid, U.

    2018-04-01

    This work is devoted to the study of some dynamical features of spherical relativistic locally anisotropic stellar geometry in f(R) gravity. In this paper, a specific tanh f(R) cosmic model configuration has been taken into account. The mass function has been formulated through the technique introduced by Misner and Sharp, and with its help various fruitful relations are derived. After orthogonal decomposition of the Riemann tensor, the tanh-modified structure scalars are calculated. The role of these tanh-modified structure scalars (MSS) has been discussed through the shear, expansion, and Weyl scalar differential equations. The inhomogeneity factor has also been explored for the case of a radiating viscous locally anisotropic spherical system and a spherical dust cloud with and without constant Ricci scalar corrections.

  17. Efficient calculation of nuclear spin-rotation constants from auxiliary density functional theory.

    PubMed

    Zuniga-Gutierrez, Bernardo; Camacho-Gonzalez, Monica; Bendana-Castillo, Alfonso; Simon-Bastida, Patricia; Calaminici, Patrizia; Köster, Andreas M

    2015-09-14

    The computation of the spin-rotation tensor within the framework of auxiliary density functional theory (ADFT) in combination with the gauge including atomic orbital (GIAO) scheme, to treat the gauge origin problem, is presented. For the spin-rotation tensor, the calculation of the magnetic shielding tensor represents the most demanding computational task. Employing the ADFT-GIAO methodology, the central processing unit time for the magnetic shielding tensor calculation can be dramatically reduced. In this work, the quality of spin-rotation constants obtained with the ADFT-GIAO methodology is compared with available experimental data as well as with other theoretical results at the Hartree-Fock and coupled-cluster level of theory. It is found that the agreement between the ADFT-GIAO results and the experiment is good and very similar to the ones obtained by the coupled-cluster single-doubles-perturbative triples-GIAO methodology. With the improved computational performance achieved, the computation of the spin-rotation tensors of large systems or along Born-Oppenheimer molecular dynamics trajectories becomes feasible in reasonable times. Three models of carbon fullerenes containing hundreds of atoms and thousands of basis functions are used for benchmarking the performance. Furthermore, a theoretical study of temperature effects on the structure and spin-rotation tensor of the H(12)C-(12)CH-DF complex is presented. Here, the temperature dependency of the spin-rotation tensor of the fluorine nucleus can be used to identify experimentally the so far unknown bent isomer of this complex. To the best of our knowledge this is the first time that temperature effects on the spin-rotation tensor are investigated.

  18. Efficient calculation of nuclear spin-rotation constants from auxiliary density functional theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zuniga-Gutierrez, Bernardo, E-mail: bzuniga.51@gmail.com; Camacho-Gonzalez, Monica; Bendana-Castillo, Alfonso

    The computation of the spin-rotation tensor within the framework of auxiliary density functional theory (ADFT) in combination with the gauge including atomic orbital (GIAO) scheme, to treat the gauge origin problem, is presented. For the spin-rotation tensor, the calculation of the magnetic shielding tensor represents the most demanding computational task. Employing the ADFT-GIAO methodology, the central processing unit time for the magnetic shielding tensor calculation can be dramatically reduced. In this work, the quality of spin-rotation constants obtained with the ADFT-GIAO methodology is compared with available experimental data as well as with other theoretical results at the Hartree-Fock and coupled-cluster level of theory. It is found that the agreement between the ADFT-GIAO results and the experiment is good and very similar to the ones obtained by the coupled-cluster single-doubles-perturbative triples-GIAO methodology. With the improved computational performance achieved, the computation of the spin-rotation tensors of large systems or along Born-Oppenheimer molecular dynamics trajectories becomes feasible in reasonable times. Three models of carbon fullerenes containing hundreds of atoms and thousands of basis functions are used for benchmarking the performance. Furthermore, a theoretical study of temperature effects on the structure and spin-rotation tensor of the H(12)C-(12)CH-DF complex is presented. Here, the temperature dependency of the spin-rotation tensor of the fluorine nucleus can be used to identify experimentally the so far unknown bent isomer of this complex. To the best of our knowledge this is the first time that temperature effects on the spin-rotation tensor are investigated.

  19. Influence of Different Forest System Management Practices on Leaf Litter Decomposition Rates, Nutrient Dynamics and the Activity of Ligninolytic Enzymes: A Case Study from Central European Forests

    PubMed Central

    Schulz, Elke; Schloter, Michael; Buscot, François; Hofrichter, Martin; Krüger, Dirk

    2014-01-01

    Leaf litter decomposition is the key ecological process that determines the sustainability of managed forest ecosystems; however, very few studies have hitherto investigated this process with respect to silvicultural management practices. The aims of the present study were to investigate the effects of forest management practices on leaf litter decomposition rates, nutrient dynamics (C, N, Mg, K, Ca, P) and the activity of ligninolytic enzymes. We approached these questions using a 473-day litterbag experiment. We found that age-class beech and spruce forests (high forest management intensity) had significantly higher decomposition rates and nutrient release (most nutrients) than unmanaged deciduous forest reserves (P<0.05). The site with near-to-nature forest management (low forest management intensity) exhibited no significant differences in litter decomposition rate, C release, lignin decomposition, and C/N, lignin/N and ligninolytic enzyme patterns compared to the unmanaged deciduous forest reserves, but most nutrient dynamics examined in this study were significantly faster under such near-to-nature forest management practices. Analyzing the activities of ligninolytic enzymes provided evidence that different forest system management practices affect litter decomposition by changing microbial enzyme activities, at least over the investigated time frame of 473 days (laccase, P<0.0001; manganese peroxidase (MnP), P = 0.0260). Our results also indicate that lignin decomposition is the rate limiting step in leaf litter decomposition and that MnP is one of the key oxidative enzymes of litter degradation. We demonstrate here that forest system management practices can significantly affect important ecological processes and services such as decomposition and nutrient cycling. PMID:24699676

  20. Influence of different forest system management practices on leaf litter decomposition rates, nutrient dynamics and the activity of ligninolytic enzymes: a case study from central European forests.

    PubMed

    Purahong, Witoon; Kapturska, Danuta; Pecyna, Marek J; Schulz, Elke; Schloter, Michael; Buscot, François; Hofrichter, Martin; Krüger, Dirk

    2014-01-01

    Leaf litter decomposition is the key ecological process that determines the sustainability of managed forest ecosystems; however, very few studies have hitherto investigated this process with respect to silvicultural management practices. The aims of the present study were to investigate the effects of forest management practices on leaf litter decomposition rates, nutrient dynamics (C, N, Mg, K, Ca, P) and the activity of ligninolytic enzymes. We approached these questions using a 473-day litterbag experiment. We found that age-class beech and spruce forests (high forest management intensity) had significantly higher decomposition rates and nutrient release (most nutrients) than unmanaged deciduous forest reserves (P<0.05). The site with near-to-nature forest management (low forest management intensity) exhibited no significant differences in litter decomposition rate, C release, lignin decomposition, and C/N, lignin/N and ligninolytic enzyme patterns compared to the unmanaged deciduous forest reserves, but most nutrient dynamics examined in this study were significantly faster under such near-to-nature forest management practices. Analyzing the activities of ligninolytic enzymes provided evidence that different forest system management practices affect litter decomposition by changing microbial enzyme activities, at least over the investigated time frame of 473 days (laccase, P<0.0001; manganese peroxidase (MnP), P = 0.0260). Our results also indicate that lignin decomposition is the rate limiting step in leaf litter decomposition and that MnP is one of the key oxidative enzymes of litter degradation. We demonstrate here that forest system management practices can significantly affect important ecological processes and services such as decomposition and nutrient cycling.

  1. Tensor integrand reduction via Laurent expansion

    DOE PAGES

    Hirschi, Valentin; Peraro, Tiziano

    2016-06-09

    We introduce a new method for the application of one-loop integrand reduction via the Laurent expansion algorithm, as implemented in the public C++ library Ninja. We show how the coefficients of the Laurent expansion can be computed by suitable contractions of the loop numerator tensor with cut-dependent projectors, making it possible to interface Ninja to any one-loop matrix element generator that can provide the components of this tensor. We implemented this technique in the Ninja library and interfaced it to MadLoop, which is part of the public MadGraph5_aMC@NLO framework. We performed a detailed performance study, comparing against other public reduction tools, namely CutTools, Samurai, IREGI, PJFry++ and Golem95. We find that Ninja outperforms traditional integrand reduction in both speed and numerical stability, the latter being on par with that of the tensor integral reduction tool Golem95, which is however more limited and slower than Ninja. Lastly, we considered many benchmark multi-scale processes of increasing complexity, involving QCD and electro-weak corrections as well as effective non-renormalizable couplings, showing that Ninja's performance scales well with both the rank and multiplicity of the considered process.

  2. The tensor distribution function.

    PubMed

    Leow, A D; Zhu, S; Zhan, L; McMahon, K; de Zubicaray, G I; Meredith, M; Wright, M J; Toga, A W; Thompson, P M

    2009-01-01

    Diffusion weighted magnetic resonance imaging is a powerful tool that can be employed to study white matter microstructure by examining the 3D displacement profile of water molecules in brain tissue. By applying diffusion-sensitized gradients along a minimum of six directions, second-order tensors (represented by three-by-three positive definite matrices) can be computed to model dominant diffusion processes. However, conventional DTI is not sufficient to resolve more complicated white matter configurations, e.g., crossing fiber tracts. Recently, a number of high-angular resolution schemes with more than six gradient directions have been employed to address this issue. In this article, we introduce the tensor distribution function (TDF), a probability function defined on the space of symmetric positive definite matrices. Using the calculus of variations, we solve the TDF that optimally describes the observed data. Here, fiber crossing is modeled as an ensemble of Gaussian diffusion processes with weights specified by the TDF. Once this optimal TDF is determined, the orientation distribution function (ODF) can easily be computed by analytic integration of the resulting displacement probability function. Moreover, a tensor orientation distribution function (TOD) may also be derived from the TDF, allowing for the estimation of principal fiber directions and their corresponding eigenvalues.
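
    For contrast with the TDF, the conventional single-tensor DTI fit mentioned above can be sketched as a linear least-squares problem in the six unique tensor elements. The b-value, gradient directions, and tensor below are illustrative assumptions, and the fit is shown noiseless for clarity.

```python
import numpy as np

rng = np.random.default_rng(1)
b = 1000.0                                   # b-value in s/mm^2 (illustrative)
D_true = np.diag([1.7e-3, 0.3e-3, 0.3e-3])   # prolate tensor (mm^2/s), like a single fiber

# Twelve unit gradient directions (six is the theoretical minimum)
g = rng.normal(size=(12, 3))
g /= np.linalg.norm(g, axis=1, keepdims=True)

# Noiseless signal model: S = S0 * exp(-b * g^T D g)
S0 = 1.0
S = S0 * np.exp(-b * np.einsum('ij,jk,ik->i', g, D_true, g))

# Each direction yields one linear equation in the 6 unique tensor elements:
# row = [gx^2, gy^2, gz^2, 2*gx*gy, 2*gx*gz, 2*gy*gz]
A = np.column_stack([g[:, 0]**2, g[:, 1]**2, g[:, 2]**2,
                     2*g[:, 0]*g[:, 1], 2*g[:, 0]*g[:, 2], 2*g[:, 1]*g[:, 2]])
y = -np.log(S / S0) / b
x, *_ = np.linalg.lstsq(A, y, rcond=None)
D_fit = np.array([[x[0], x[3], x[4]],
                  [x[3], x[1], x[5]],
                  [x[4], x[5], x[2]]])
```

A single tensor of this kind is exactly what fails at fiber crossings, which is what motivates the TDF's ensemble of Gaussian diffusion processes.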

  3. Single and double diffractive dissociation and the problem of extraction of the proton-Pomeron cross-section

    NASA Astrophysics Data System (ADS)

    Petrov, V. A.; Ryutin, R. A.

    2016-04-01

    Diffractive dissociation processes are analyzed in the framework of covariant reggeization. We have considered the general form of the hadronic tensor and its asymptotic behavior for t → 0 in the case of conserved tensor currents before reggeization. Resulting expressions for the differential cross-sections of the single dissociation (SD) process (pp → pM), the double dissociation (DD) process (pp → M1M2), and for the proton-Pomeron cross-section are given in detail, and the corresponding problems of the approach are discussed.

  4. Utilization of a balanced steady state free precession signal model for improved fat/water decomposition.

    PubMed

    Henze Bancroft, Leah C; Strigel, Roberta M; Hernando, Diego; Johnson, Kevin M; Kelcz, Frederick; Kijowski, Richard; Block, Walter F

    2016-03-01

    Chemical shift based fat/water decomposition methods such as IDEAL are frequently used in challenging imaging environments with large B0 inhomogeneity. However, they do not account for the signal modulations introduced by a balanced steady state free precession (bSSFP) acquisition. Here we demonstrate improved performance when the bSSFP frequency response is properly incorporated into the multipeak spectral fat model used in the decomposition process. Balanced SSFP allows for rapid imaging but also introduces a characteristic frequency response featuring periodic nulls and pass bands. Fat spectral components in adjacent pass bands experience bulk phase offsets and magnitude modulations that change the expected constructive and destructive interference between the fat spectral components. A bSSFP signal model was incorporated into the fat/water decomposition process and used to generate images of a fat phantom as well as bilateral breast and knee images in four normal volunteers at 1.5 Tesla. Incorporating the bSSFP signal model improved the performance of the fat/water decomposition and allows rapid bSSFP imaging sequences to use robust fat/water decomposition methods such as IDEAL. While only one set of imaging parameters was presented, the method is compatible with any field strength or repetition time. © 2015 Wiley Periodicals, Inc.
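
    The linear core of chemical-shift-based decomposition can be sketched as follows. This toy model assumes a zero field map and ignores the bSSFP frequency response that the paper incorporates; the fat peak frequencies and amplitudes are illustrative placeholders, not a validated spectral model, and this is not the IDEAL implementation.

```python
import numpy as np

# Hypothetical multipeak fat spectral model (Hz offsets and relative
# amplitudes are illustrative only).
freqs = np.array([-420.0, -318.0, -94.0])
amps = np.array([0.62, 0.31, 0.07])

TE = np.array([1.2e-3, 2.4e-3, 3.6e-3, 4.8e-3])  # echo times in seconds

# Signal model with the field map set to zero for clarity:
#   s(TE) = W + F * sum_p amps[p] * exp(2*pi*i*freqs[p]*TE)
c = (amps * np.exp(2j * np.pi * np.outer(TE, freqs))).sum(axis=1)
A = np.column_stack([np.ones_like(c), c])

W_true, F_true = 0.7, 0.3                 # water and fat content of one voxel
s = A @ np.array([W_true, F_true])        # simulated multi-echo signal

# Least-squares fat/water decomposition of the multi-echo signal
wf, *_ = np.linalg.lstsq(A, s, rcond=None)
```

Incorporating the bSSFP response would amount to modulating each fat peak's entry in `c` by the complex bSSFP profile at that peak's frequency.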

  5. Termites promote resistance of decomposition to spatiotemporal variability in rainfall.

    PubMed

    Veldhuis, Michiel P; Laso, Francisco J; Olff, Han; Berg, Matty P

    2017-02-01

    The ecological impact of rapid environmental change will depend on the resistance of key ecosystem processes, which may be promoted by species that exert strong control over local environmental conditions. Recent theoretical work suggests that macrodetritivores increase the resistance of African savanna ecosystems to changing climatic conditions, but experimental evidence is lacking. We empirically examined the effect of large fungus-growing termites and other non-fungus-growing macrodetritivores on decomposition rates under strong spatiotemporal variability in rainfall and temperature. Larger non-fungus-growing macrodetritivores (earthworms, woodlice, millipedes) promoted decomposition rates relative to microbes and small soil fauna (+34%), but both groups reduced their activities with decreasing rainfall. However, fungus-growing termites increased decomposition rates the most (+123%) under the most water-limited conditions, making overall decomposition rates largely independent of rainfall. We conclude that fungus-growing termites are of special importance in decoupling decomposition rates from spatiotemporal variability in rainfall due to the buffered environment they create within their extended phenotype (mounds), which allows decomposition to continue when abiotic conditions outside are less favorable. This points at a wider class of possibly important ecological processes, where soil-plant-animal interactions decouple ecosystem processes from large-scale climatic gradients. This may strongly alter predictions from current climate change models. © 2016 by the Ecological Society of America.

  6. Dilational processes accompanying earthquakes in the Long Valley Caldera

    USGS Publications Warehouse

    Dreger, Douglas S.; Tkalcic, Hrvoje; Johnston, M.

    2000-01-01

    Regional distance seismic moment tensor determinations and broadband waveforms of moment magnitude 4.6 to 4.9 earthquakes from a November 1997 Long Valley Caldera swarm, during an inflation episode, display evidence of anomalous seismic radiation characterized by non-double couple (NDC) moment tensors with significant volumetric components. Observed coseismic dilation suggests that hydrothermal or magmatic processes are directly triggering some of the seismicity in the region. Similarity in the NDC solutions implies a common source process, and the anomalous events may have been triggered by net fault-normal stress reduction due to high-pressure fluid injection or pressurization of fluid-saturated faults due to magmatic heating.

  7. Gaussian mixtures on tensor fields for segmentation: applications to medical imaging.

    PubMed

    de Luis-García, Rodrigo; Westin, Carl-Fredrik; Alberola-López, Carlos

    2011-01-01

    In this paper, we introduce a new approach for tensor field segmentation based on the definition of mixtures of Gaussians on tensors as a statistical model. Working over the well-known Geodesic Active Regions segmentation framework, this scheme presents several interesting advantages. First, it yields a more flexible model than the use of a single Gaussian distribution, which enables the method to better adapt to the complexity of the data. Second, it can work directly on tensor-valued images or, through a parallel scheme that processes independently the intensity and the local structure tensor, on scalar textured images. Two different applications have been considered to show the suitability of the proposed method for medical imaging segmentation. First, we address DT-MRI segmentation on a dataset of 32 volumes, showing a successful segmentation of the corpus callosum and favourable comparisons with related approaches in the literature. Second, the segmentation of bones from hand radiographs is studied, and a complete automatic-semiautomatic approach has been developed that makes use of anatomical prior knowledge to produce accurate segmentation results. Copyright © 2010 Elsevier Ltd. All rights reserved.
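
    Gaussian statistics on tensor-valued data require care because symmetric positive definite (SPD) matrices do not form a vector space. One common workaround, shown below as a simplification rather than the paper's formulation (which defines Gaussian mixtures on tensors directly), is to map each tensor to log-Euclidean coordinates first and do ordinary statistics there.

```python
import numpy as np

def spd_log_vec(T):
    """Map a symmetric positive definite tensor to log-Euclidean coordinates:
    matrix logarithm via the eigendecomposition, flattened to the 6 unique
    entries. In these coordinates ordinary Gaussian statistics apply."""
    w, V = np.linalg.eigh(T)
    L = (V * np.log(w)) @ V.T          # V diag(log w) V^T
    i, j = np.triu_indices(3)
    return L[i, j]

# Sample mean and covariance of a tiny, contrived set of tensors
tensors = [np.diag([2.0, 1.0, 1.0]),
           np.diag([1.0, 2.0, 1.0]),
           np.diag([1.0, 1.0, 2.0])]
X = np.array([spd_log_vec(T) for T in tensors])
mu, cov = X.mean(axis=0), np.cov(X.T)
```

A Gaussian mixture fitted to vectors like `X` would then play the role of the per-region statistical model in an active-region segmentation framework.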

  8. Identification of the full anisotropic flow resistivity tensor for multiple glass wool and melamine foam samples.

    PubMed

    Van der Kelen, Christophe; Göransson, Peter

    2013-12-01

    The flow resistivity tensor, which is the inverse of the viscous permeability tensor, is one of the most important material properties for the acoustic performance of porous materials used in acoustic treatments. Due to the manufacturing processes involved, these porous materials are most often geometrically anisotropic on a microscopic scale, and for demanding applications, there is a need for improved characterization methods. This paper discusses recent refinements of a method for the identification of the anisotropic flow resistivity tensor. The inverse estimation is verified for three fictitious materials with different degrees of anisotropy. Measurements are performed on nine glass wool samples and seven melamine foam samples, and the anisotropic flow resistivity tensors obtained are validated by comparison to measurements performed on uni-directional cylindrical samples, extracted from the same, previously measured cubic samples. The variability of flow resistivity in the batch of material from which the glass wool is extracted is discussed. The results for the melamine foam suggest that there is a relation between the direction of highest flow resistivity, and the rise direction of the material.

  9. Modeling the evolution of lithium-ion particle contact distributions using a fabric tensor approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stershic, A. J.; Simunovic, S.; Nanda, J.

    2015-08-25

    Electrode microstructure and processing can strongly influence lithium-ion battery performance such as capacity retention, power, and rate. Battery electrodes are multi-phase composite structures wherein conductive diluents and binder bond active material to a current collector. The structure and response of this composite network during repeated electrochemical cycling directly affects battery performance characteristics. We propose the fabric tensor formalism for describing the structure and evolution of the electrode microstructure. Fabric tensors are directional measures of particulate assemblies based on inter-particle connectivity, relating to the structural and transport properties of the electrode. Fabric tensor analysis is applied to experimental data sets for positive electrodes made of lithium nickel manganese cobalt oxide, captured by X-ray tomography for several compositions and consolidation pressures. We show that fabric tensors capture the evolution of the inter-particle contact distribution and are therefore good measures for the internal state of and electronic transport within the electrode. The fabric tensor analysis is also applied to Discrete Element Method (DEM) simulations of electrode microstructures using spherical particles with size distributions from the tomography. However, these results do not follow the experimental trends, which indicates that the particle size distribution alone is not a sufficient measure for the electrode microstructures in DEM simulations.
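
    A second-order fabric tensor is simple to compute from a list of inter-particle contact normals; the sketch below uses a contrived set of normals for illustration (the paper builds the connectivity from tomography and DEM data).

```python
import numpy as np

def fabric_tensor(normals):
    """Second-order fabric tensor of a set of contact normals:
    F = (1/N) * sum_k n_k (outer) n_k, with trace(F) == 1 by construction.
    Anisotropy of the contact distribution shows up in the eigenvalues."""
    n = np.array(normals, dtype=float)             # copy, so the input is untouched
    n /= np.linalg.norm(n, axis=1, keepdims=True)  # normalize each contact normal
    return n.T @ n / len(n)

# A contrived anisotropic contact set: contacts concentrated along x
normals = [[1, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]]
F = fabric_tensor(normals)
```

An isotropic contact distribution would give F close to I/3; deviations from that encode the directional structure discussed in the abstract.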

  10. Reversible and dissipative macroscopic contributions to the stress tensor: active or passive?

    PubMed

    Brand, H R; Pleiner, H; Svenšek, D

    2014-09-01

    The issue of dynamic contributions to the macroscopic stress tensor has been of high interest in the field of bio-inspired active systems over the last few years. Of particular interest is a direct coupling ("active term") of the stress tensor with the order parameter, the latter describing orientational order induced by active processes. Here we analyze more generally the possible reversible and irreversible dynamic contributions to the stress tensor for various passive and active macroscopic systems. This includes systems with tetrahedral/octupolar order, polar and non-polar (chiral) nematic and smectic liquid crystals, as well as active fluids with a dynamic preferred (polar or non-polar) direction. We show that whether the system studied is active or passive cannot be determined a priori, either from the symmetry properties of the macroscopic variables involved or from the structure of the cross-coupling contributions to the stress tensor. Rather, it depends on whether the variables that give rise to those cross-couplings in the stress tensor are driven or not. We demonstrate that several simplified descriptions of active systems in the literature that neglect the necessary counter term to the active term violate linear irreversible thermodynamics and lead to an unphysical contribution to the entropy production.

  11. New algorithm for tensor contractions on multi-core CPUs, GPUs, and accelerators enables CCSD and EOM-CCSD calculations with over 1000 basis functions on a single compute node.

    PubMed

    Kaliman, Ilya A; Krylov, Anna I

    2017-04-30

    A new hardware-agnostic contraction algorithm for tensors of arbitrary symmetry and sparsity is presented. The algorithm is implemented as a stand-alone open-source code, libxm. This code is also integrated with the general tensor library libtensor and with the Q-Chem quantum-chemistry package. An overview of the algorithm, its implementation, and benchmarks are presented. Similarly to other tensor software, the algorithm exploits efficient matrix multiplication libraries and assumes that tensors are stored in a block-tensor form. The distinguishing features of the algorithm are: (i) efficient repackaging of the individual blocks into large matrices and back, which affords efficient graphics processing unit (GPU)-enabled calculations without modifications of higher-level codes; and (ii) fully asynchronous data transfer between disk storage and fast memory. The algorithm enables canonical all-electron coupled-cluster and equation-of-motion coupled-cluster calculations with single and double substitutions (CCSD and EOM-CCSD) with over 1000 basis functions on a single quad-GPU machine. We show that the algorithm exhibits the predicted theoretical scaling for canonical CCSD calculations, O(N^6), irrespective of the data size on disk. © 2017 Wiley Periodicals, Inc.
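    The core trick described here, repackaging tensor blocks into large matrices so that a contraction becomes a single matrix-matrix multiply (GEMM), can be illustrated in a few lines. This is a generic sketch with plain NumPy standing in for the BLAS/GPU GEMM, not libxm's actual block machinery; the function name is illustrative:

```python
import numpy as np

def contract_as_gemm(A, B, n_contracted):
    """Contract the last `n_contracted` axes of A with the first
    `n_contracted` axes of B by flattening both operands into matrices,
    so the entire contraction is one large GEMM call."""
    keep_a = A.shape[:A.ndim - n_contracted]
    keep_b = B.shape[n_contracted:]
    k = int(np.prod(A.shape[A.ndim - n_contracted:]))  # contracted size
    C = A.reshape(-1, k) @ B.reshape(k, -1)            # the single GEMM
    return C.reshape(keep_a + keep_b)

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 5, 6, 7))
B = rng.standard_normal((6, 7, 3))
C = contract_as_gemm(A, B, 2)   # C[a,b,c] = sum_{i,j} A[a,b,i,j] * B[i,j,c]
```

    Funneling the work into one big GEMM is what lets vendor-optimized matrix libraries (CPU or GPU) do the heavy lifting without the higher-level code knowing about the hardware.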

  12. Influence of density and environmental factors on decomposition kinetics of amorphous polylactide - Reactive molecular dynamics studies.

    PubMed

    Mlyniec, A; Ekiert, M; Morawska-Chochol, A; Uhl, T

    2016-06-01

    In this work, we investigate the influence of the surrounding environment and the initial density on the decomposition kinetics of polylactide (PLA). The decomposition of amorphous PLA was investigated by means of reactive molecular dynamics simulations. The computational model simulates decomposition of the PLA polymer inside the bulk, owing to the assumed lack of removal of reaction products from the polymer matrix. We tracked the temperature dependency of water and carbon monoxide production to extract the activation energy of thermal decomposition of PLA. We found that an increased density decreases the activation energy of decomposition by about 50%. Moreover, initiation of decomposition of the amorphous PLA is followed by a rapid decline in activation energy caused by reaction products, which accelerate the hydrolysis of esters. The addition of water molecules decreases the initial activation energy as well as accelerating the decomposition process. Additionally, we investigated the dependency of density on external loading. Comparison of the pressures needed to obtain the assumed densities shows that this relationship is bilinear, with the slope changing around a density of 1.3 g/cm³. The conducted analyses provide insight into the thermal decomposition process of the amorphous phase of PLA, which is particularly susceptible to decomposition in amorphous and semi-crystalline PLA polymers. Copyright © 2016 Elsevier Inc. All rights reserved.
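    Extracting an activation energy from the temperature dependence of a rate, as done here for the water and CO production rates, typically amounts to a linear Arrhenius fit, ln k = ln A - Ea/(R*T). A generic sketch with synthetic rate constants (the numbers are illustrative, not from the simulations):

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def arrhenius_fit(T, k):
    """Fit ln k = ln A - Ea/(R*T) by linear least squares.
    Returns (Ea in J/mol, pre-exponential factor A)."""
    slope, intercept = np.polyfit(1.0 / np.asarray(T, float), np.log(k), 1)
    return -slope * R, np.exp(intercept)

# Synthetic rate constants generated with a known Ea of 150 kJ/mol
T = np.array([1400.0, 1600.0, 1800.0, 2000.0])
k = 1e13 * np.exp(-150e3 / (R * T))
Ea, A = arrhenius_fit(T, k)
```

    A density- or product-induced drop in the fitted slope of ln k versus 1/T is exactly the kind of activation-energy decrease the abstract reports.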

  13. Numerical Approximation of Elasticity Tensor Associated With Green-Naghdi Rate.

    PubMed

    Liu, Haofei; Sun, Wei

    2017-08-01

    Objective stress rates are often used in commercial finite element (FE) programs. However, deriving a consistent tangent modulus tensor (also known as the elasticity tensor or material Jacobian) associated with an objective stress rate is challenging when complex material models are utilized. In this paper, an approximation method for the tangent modulus tensor associated with the Green-Naghdi rate of the Kirchhoff stress is employed to simplify the evaluation process. The effectiveness of the approach is demonstrated through the implementation of two user-defined fiber-reinforced hyperelastic material models. Comparisons between the approximation method and the closed-form analytical method demonstrate that the former can simplify the material Jacobian evaluation with satisfactory accuracy while retaining computational efficiency. Moreover, since the approximation method is independent of the material model, it can facilitate the implementation of complex material models in FE analysis using shell/membrane elements in Abaqus.
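    The general idea behind such approximation methods is to replace the analytical material Jacobian with finite-difference perturbations of the kinematic input (the paper perturbs the deformation gradient to stay consistent with the Green-Naghdi rate; the small-strain sketch below is a simplified stand-in, and the stress function and names are illustrative):

```python
import numpy as np

def numerical_tangent(stress, eps, h=1e-7):
    """Central-difference approximation of C_ijkl = d sigma_ij / d eps_kl,
    using symmetrized strain perturbations so C has the minor symmetries."""
    C = np.zeros((3, 3, 3, 3))
    for k in range(3):
        for l in range(3):
            dE = np.zeros((3, 3))
            dE[k, l] += 0.5
            dE[l, k] += 0.5          # keep the perturbed strain symmetric
            C[:, :, k, l] = (stress(eps + h * dE) - stress(eps - h * dE)) / (2 * h)
    return C

# Illustrative material: isotropic linear elasticity with Lame constants
lam, mu = 100.0, 80.0
def sigma(eps):
    return lam * np.trace(eps) * np.eye(3) + 2 * mu * eps

C = numerical_tangent(sigma, np.zeros((3, 3)))
```

    The appeal, as in the paper, is that only the stress routine is needed: swapping in a complicated hyperelastic stress function requires no new derivation of the tangent.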

  14. A density functional theory study of the decomposition mechanism of nitroglycerin.

    PubMed

    Pei, Liguan; Dong, Kehai; Tang, Yanhui; Zhang, Bo; Yu, Chang; Li, Wenzuo

    2017-08-21

    The detailed decomposition mechanism of nitroglycerin (NG) in the gas phase was studied by examining reaction pathways using density functional theory (DFT) and canonical variational transition state theory combined with a small-curvature tunneling correction (CVT/SCT). The mechanism of NG autocatalytic decomposition was investigated at the B3LYP/6-31G(d,p) level of theory. Five possible decomposition pathways involving NG were identified and the rate constants for the pathways at temperatures ranging from 200 to 1000 K were calculated using CVT/SCT. There was found to be a lower energy barrier to the β-H abstraction reaction than to the α-H abstraction reaction during the initial step in the autocatalytic decomposition of NG. The decomposition pathways for CHOCOCHONO2 (a product obtained following the abstraction of three H atoms from NG by NO2) include O-NO2 cleavage or isomer production, meaning that the autocatalytic decomposition of NG has two reaction pathways, both of which are exothermic. The rate constants for these two reaction pathways are greater than the rate constants for the three pathways corresponding to unimolecular NG decomposition. The overall process of NG decomposition can be divided into two stages based on the NO2 concentration, which affects the decomposition products and reactions. In the first stage, the reaction pathway corresponding to O-NO2 cleavage is the main pathway, but the rates of the two autocatalytic decomposition pathways increase with increasing NO2 concentration. However, when a threshold NO2 concentration is reached, the NG decomposition process enters its second stage, with the two pathways for NG autocatalytic decomposition becoming the main and secondary reaction pathways.

  15. Ab initio optimization principle for the ground states of translationally invariant strongly correlated quantum lattice models.

    PubMed

    Ran, Shi-Ju

    2016-05-01

    In this work, a simple and fundamental numerical scheme dubbed the ab initio optimization principle (AOP) is proposed for the ground states of translationally invariant strongly correlated quantum lattice models. The idea is to transform a nondeterministic-polynomial-hard ground-state simulation with infinite degrees of freedom into a single optimization problem of a local function with a finite number of physical and ancillary degrees of freedom. This work contributes mainly in the following aspects: (1) AOP provides a simple and efficient scheme to simulate the ground state by solving a local optimization problem. Its solution contains two kinds of boundary states, one of which plays the role of the entanglement bath that mimics the interactions between a supercell and the infinite environment, while the other gives the ground state in a tensor network (TN) form. (2) In the TN setting, a novel decomposition named the tensor ring decomposition (TRD) is proposed to implement AOP. Instead of following the contraction-truncation scheme used by many existing TN-based algorithms, TRD solves the contraction of a uniform TN in the opposite way, by encoding the contraction in a set of self-consistent equations that automatically reconstruct the whole TN, making the simulation simple and unified. (3) AOP inherits and develops the ideas of several well-established methods, including the density matrix renormalization group (DMRG), infinite time-evolving block decimation (iTEBD), network contractor dynamics, and density matrix embedding theory, providing a unified perspective that was previously missing in this field. (4) AOP and TRD also carry implications for existing TN-based algorithms: a modified iTEBD is suggested, and the two-dimensional (2D) AOP is argued to be an intrinsic 2D extension of DMRG based on the infinite projected entangled pair state. This paper focuses on one-dimensional quantum models to present AOP. 
Benchmarks are given on a transverse Ising chain and the 2D classical Ising model, showing the remarkable efficiency and accuracy of the AOP.
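    A tensor ring, as the name in point (2) suggests, represents a tensor as a closed loop of three-index cores whose shared bond indices are contracted, with the last bond traced against the first. A minimal reconstruction sketch (just the generic ring contraction, not the paper's self-consistent AOP equations; names are illustrative):

```python
import numpy as np

def tr_reconstruct(cores):
    """Contract tensor-ring cores back into the full tensor.

    cores[k] has shape (r_k, n_k, r_{k+1}); the ring closes with
    r_d = r_0, so the final bond is traced against the first."""
    out = cores[0]                                       # (r0, n0, r1)
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=([-1], [0]))  # join shared bond
    return np.trace(out, axis1=0, axis2=-1)              # close the ring

# Rank-1 ring (all bond dimensions 1): reconstruction is an outer product
a = np.array([1.0, 2.0]).reshape(1, 2, 1)
b = np.array([3.0, 4.0, 5.0]).reshape(1, 3, 1)
T = tr_reconstruct([a, b])
```

    In an actual TRD/AOP calculation the cores would be determined self-consistently rather than contracted naively, but the ring structure of the ansatz is the same.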

  16. A review of plutonium oxalate decomposition reactions and effects of decomposition temperature on the surface area of the plutonium dioxide product

    NASA Astrophysics Data System (ADS)

    Orr, R. M.; Sims, H. E.; Taylor, R. J.

    2015-10-01

    Plutonium (IV) and (III) ions in nitric acid solution readily form insoluble precipitates with oxalic acid. The plutonium oxalates are then easily thermally decomposed to form plutonium dioxide powder. This simple process forms the basis of current industrial conversion or 'finishing' processes that are used in commercial scale reprocessing plants. It is also widely used in analytical or laboratory scale operations and for waste residues treatment. However, the mechanisms of the thermal decompositions in both air and inert atmospheres have been the subject of various studies over several decades. The nature of intermediate phases is of fundamental interest whilst understanding the evolution of gases at different temperatures is relevant to process control. The thermal decomposition is also used to control a number of powder properties of the PuO2 product that are important to either long term storage or mixed oxide fuel manufacturing. These properties are the surface area, residual carbon impurities and adsorbed volatile species whereas the morphology and particle size distribution are functions of the precipitation process. Available data and experience regarding the thermal and radiation-induced decompositions of plutonium oxalate to oxide are reviewed. The mechanisms of the thermal decompositions are considered with a particular focus on the likely redox chemistry involved. Also, whilst it is well known that the surface area is dependent on calcination temperature, there is a wide variation in the published data and so new correlations have been derived. Better understanding of plutonium (III) and (IV) oxalate decompositions will assist the development of more proliferation resistant actinide co-conversion processes that are needed for advanced reprocessing in future closed nuclear fuel cycles.

  17. New Techniques for Deep Learning with Geospatial Data using TensorFlow, Earth Engine, and Google Cloud Platform

    NASA Astrophysics Data System (ADS)

    Hancher, M.

    2017-12-01

    Recent years have seen promising results from many research teams applying deep learning techniques to geospatial data processing. In that same timeframe, TensorFlow has emerged as the most popular framework for deep learning in general, and Google has assembled petabytes of Earth observation data from a wide variety of sources and made them available in analysis-ready form in the cloud through Google Earth Engine. Nevertheless, developing and applying deep learning to geospatial data at scale has been somewhat cumbersome to date. We present a new set of tools and techniques that simplify this process. Our approach combines the strengths of several underlying tools: TensorFlow for its expressive deep learning framework; Earth Engine for data management, preprocessing, postprocessing, and visualization; and other tools in Google Cloud Platform to train TensorFlow models at scale, perform additional custom parallel data processing, and drive the entire process from a single familiar Python development environment. These tools can be used to easily apply standard deep neural networks, convolutional neural networks, and other custom model architectures to a variety of geospatial data structures. We discuss our experiences applying these and related tools to a range of machine learning problems, including classic problems like cloud detection, building detection, land cover classification, as well as more novel problems like illegal fishing detection. Our improved tools will make it easier for geospatial data scientists to apply modern deep learning techniques to their own problems, and will also make it easier for machine learning researchers to advance the state of the art of those techniques.

  18. Entanglement entropy from tensor network states for stabilizer codes

    NASA Astrophysics Data System (ADS)

    He, Huan; Zheng, Yunqin; Bernevig, B. Andrei; Regnault, Nicolas

    2018-03-01

    In this paper, we present the construction of tensor network states (TNS) for some of the degenerate ground states of three-dimensional (3D) stabilizer codes. We then use the TNS formalism to obtain the entanglement spectrum and entropy of these ground states for some special cuts. In particular, we work out examples of the 3D toric code, the X-cube model, and the Haah code. The latter two models belong to the category of "fracton" models proposed recently, while the first one belongs to the conventional topological phases. We mention the cases for which the entanglement entropy and spectrum can be calculated exactly: For these, the constructed TNS is a singular value decomposition (SVD) of the ground states with respect to particular entanglement cuts. Apart from the area law, the entanglement entropies also have constant and linear corrections for the fracton models, while the entanglement entropies for the toric code models only have constant corrections. For the cuts we consider, the entanglement spectra of these three models are completely flat. We also conjecture that the negative linear correction to the area law is a signature of extensive ground-state degeneracy. Moreover, the transfer matrices of these TNSs can be constructed. We show that the transfer matrices are projectors whose eigenvalues are either 1 or 0. The number of nonzero eigenvalues is tightly related to the ground-state degeneracy.
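    The SVD construction mentioned here is the standard route from a pure state to an entanglement spectrum: reshape the state vector across the cut into a matrix and take its singular values. A minimal sketch for a generic state vector (not the 3D stabilizer-code TNS itself; names are illustrative):

```python
import numpy as np

def entanglement_entropy(psi, dim_a):
    """Von Neumann entanglement entropy of a pure state across a cut.

    Reshape |psi> into a dim_a x dim_b matrix and SVD it; the squared
    singular values form the entanglement spectrum {p_i}, and
    S = -sum_i p_i ln p_i."""
    m = np.asarray(psi, dtype=complex).reshape(dim_a, -1)
    s = np.linalg.svd(m, compute_uv=False)
    p = s**2
    p = p[p > 1e-12]                     # drop numerical zeros
    return float(-np.sum(p * np.log(p)))

# Bell state: maximally entangled two-qubit state, S = ln 2 (flat spectrum)
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
S = entanglement_entropy(bell, 2)
```

    A completely flat entanglement spectrum, as found for the cuts considered in the paper, corresponds to all retained p_i being equal, so S is just the log of the number of nonzero singular values.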

  19. FormTracer. A Mathematica tracing package using FORM

    NASA Astrophysics Data System (ADS)

    Cyrol, Anton K.; Mitter, Mario; Strodthoff, Nils

    2017-10-01

    We present FormTracer, a high-performance, general-purpose, easy-to-use Mathematica tracing package that uses FORM. It supports arbitrary space and spinor dimensions as well as an arbitrary number of simple compact Lie groups. While keeping the usability of the Mathematica interface, it relies on the efficiency of FORM. An additional performance gain is achieved by a decomposition algorithm that avoids redundant traces in the product tensor spaces. FormTracer supports a wide range of syntaxes, which endows it with high flexibility. Mathematica notebooks that automatically install the package and guide the user through performing standard traces in space-time, spinor and gauge-group spaces are provided. Program Files doi: http://dx.doi.org/10.17632/7rd29h4p3m.1. Licensing provisions: GPLv3. Programming language: Mathematica and FORM. Nature of problem: Efficiently compute traces of large expressions. Solution method: The expression to be traced is decomposed into its subspaces by a recursive Mathematica expansion algorithm. The result is subsequently translated to a FORM script that takes the traces. After FORM is executed, the final result is either imported into Mathematica or exported as optimized C/C++/Fortran code. Unusual features: The outstanding features of FormTracer are the simple interface, the capability to efficiently handle an arbitrary number of Lie groups in addition to Dirac and Lorentz tensors, and a customizable input syntax.

  20. Gauge-invariant formalism of cosmological weak lensing

    NASA Astrophysics Data System (ADS)

    Yoo, Jaiyul; Grimm, Nastassia; Mitsou, Ermis; Amara, Adam; Refregier, Alexandre

    2018-04-01

    We present the gauge-invariant formalism of cosmological weak lensing, accounting for all the relativistic effects due to the scalar, vector, and tensor perturbations at the linear order. While the light propagation is fully described by the geodesic equation, relating the photon wavevector to physical quantities requires specifying the frames in which they are defined. By constructing the local tetrad bases at the observer and the source positions, we clarify the relation of the weak lensing observables such as the convergence, the shear, and the rotation to the physical size and shape defined in the source rest-frame and the observed angle and redshift measured in the observer rest-frame. Compared to the standard lensing formalism, additional relativistic effects contribute to all the lensing observables. We explicitly verify the gauge-invariance of the lensing observables and compare our results to previous work. In particular, we demonstrate that even in the presence of the vector and tensor perturbations, the physical rotation of the lensing observables vanishes at the linear order, while the tetrad basis rotates along the light propagation relative to an FRW coordinate frame. Though the latter is often used as a probe of primordial gravitational waves, the rotation of the tetrad basis is not a physical observable. We further clarify its relation to the E-B decomposition in weak lensing. Our formalism provides a transparent and comprehensive perspective of cosmological weak lensing.

  1. Long-Lived Inverse Chirp Signals from Core-Collapse in Massive Scalar-Tensor Gravity

    NASA Astrophysics Data System (ADS)

    Sperhake, Ulrich; Moore, Christopher J.; Rosca, Roxana; Agathos, Michalis; Gerosa, Davide; Ott, Christian D.

    2017-11-01

    This Letter considers stellar core collapse in massive scalar-tensor theories of gravity. The presence of a mass term for the scalar field allows for dramatic increases in the radiated gravitational wave signal. There are several potential smoking gun signatures of a departure from general relativity associated with this process. These signatures could show up within existing LIGO-Virgo searches.

  2. Heterotic reduction of Courant algebroid connections and Einstein-Hilbert actions

    NASA Astrophysics Data System (ADS)

    Jurčo, Branislav; Vysoký, Jan

    2016-08-01

    We discuss Levi-Civita connections on Courant algebroids. We define an appropriate generalization of the curvature tensor and compute the corresponding scalar curvatures in the exact and heterotic case, leading to generalized (bosonic) Einstein-Hilbert type of actions known from supergravity. In particular, we carefully analyze the process of the reduction for the generalized metric, connection, curvature tensor and the scalar curvature.

  3. Epigenetic Age Acceleration Assessed with Human White-Matter Images.

    PubMed

    Hodgson, Karen; Carless, Melanie A; Kulkarni, Hemant; Curran, Joanne E; Sprooten, Emma; Knowles, Emma E; Mathias, Samuel; Göring, Harald H H; Yao, Nailin; Olvera, Rene L; Fox, Peter T; Almasy, Laura; Duggirala, Ravi; Blangero, John; Glahn, David C

    2017-05-03

    The accurate estimation of age using methylation data has proved to be a useful and heritable biomarker, with acceleration in epigenetic age predicting a number of age-related phenotypes. Measures of white matter integrity in the brain are also heritable and highly sensitive to both normal and pathological aging processes across adulthood. We consider the phenotypic and genetic interrelationships between epigenetic age acceleration and white matter integrity in humans. Our goal was to investigate processes that underlie interindividual variability in age-related changes in the brain. Using blood taken from a Mexican-American extended pedigree sample (n = 628; ages 23.28-93.11 years), epigenetic age was estimated using the method developed by Horvath (2013). For n = 376 individuals, diffusion tensor imaging scans were also available. The interrelationship between epigenetic age acceleration and global white matter integrity was investigated with variance decomposition methods. To test for neuroanatomical specificity, 16 specific tracts were additionally considered. We observed negative phenotypic correlations between epigenetic age acceleration and global white matter tract integrity (ρ_pheno = -0.119, p = 0.028), with evidence of shared genetic (ρ_gene = -0.463, p = 0.013) but not environmental influences. Negative phenotypic and genetic correlations with age acceleration were also seen for a number of specific white matter tracts, along with additional negative phenotypic correlations between granulocyte abundance and white matter integrity. These findings (i.e., that increased acceleration in epigenetic age in peripheral blood correlates with reduced white matter integrity in the brain and shares common genetic influences) provide a window into the neurobiology of aging processes within the brain and a potential biomarker of normal and pathological brain aging. 
SIGNIFICANCE STATEMENT Epigenetic measures can be used to predict age with a high degree of accuracy and so capture acceleration in biological age relative to chronological age. The white matter tracts within the brain are also highly sensitive to aging processes. Using data from a large sample of Mexican-American families, we show that increased biological aging (measured using epigenetic data from blood samples) is correlated with reduced integrity of white matter tracts within the human brain (measured using diffusion tensor imaging). Given the family design of the sample, we are also able to demonstrate that epigenetic aging and white matter tract integrity share common genetic influences. Therefore, epigenetic age may be a potential, and accessible, biomarker of brain aging. Copyright © 2017 the authors.

  4. Cryptic diversity and ecosystem functioning: a complex tale of differential effects on decomposition.

    PubMed

    De Meester, N; Gingold, R; Rigaux, A; Derycke, S; Moens, T

    2016-10-01

    Marine ecosystems are experiencing accelerating population and species loss. Some ecosystem functions are decreasing and there is growing interest in the link between biodiversity and ecosystem functioning. The role of cryptic (morphologically identical but genetically distinct) species in this biodiversity-ecosystem functioning link is unclear and has not yet been formally tested. We tested if there is a differential effect of four cryptic species of the bacterivorous nematode Litoditis marina on the decomposition process of macroalgae. Bacterivorous nematodes can stimulate or slow down bacterial activity and modify the bacterial assemblage composition. Moreover, we tested if interspecific interactions among the four cryptic species influence the decomposition process. A laboratory experiment with both mono- and multispecific nematode cultures was conducted, and loss of organic matter and the activity of two key extracellular enzymes for the degradation of phytodetritus were assessed. L. marina mainly influenced qualitative aspects of the decomposition process rather than its overall rate: an effect of the nematodes on the enzymatic activities became manifest, although no clear nematode effect on bulk organic matter weight loss was found. We also demonstrated that species-specific effects on the decomposition process existed. Combining the four cryptic species resulted in high competition, with one dominant species, but without complete exclusion of other species. These interspecific interactions translated into different effects on the decomposition process. The species-specific differences indicated that each cryptic species may play an important and distinct role in ecosystem functioning. Functional differences may result in coexistence among very similar species.

  5. Ordered rate constitutive theories for thermoviscoelastic solids with memory in Lagrangian description using Gibbs potential

    NASA Astrophysics Data System (ADS)

    Surana, K. S.; Reddy, J. N.; Nunez, Daniel

    2015-11-01

    This paper presents ordered rate constitutive theories of orders m and n, i.e., (m, n), for finite deformation of homogeneous, isotropic, compressible and incompressible thermoviscoelastic solids with memory, in Lagrangian description, using the entropy inequality expressed in terms of the Gibbs potential Ψ as an alternative to deriving constitutive theories using the entropy inequality in terms of the Helmholtz free energy density Φ. The second Piola-Kirchhoff stress σ[0] and Green's strain tensor ε[0] are used as the conjugate pair. We consider Ψ, the heat vector q, the entropy density η, and rates of up to orders m and n of σ[0] and ε[0], i.e., σ[i]; i = 0, 1, ..., m and ε[j]; j = 0, 1, ..., n. We choose Ψ, ε[n], q and η as dependent variables in the constitutive theories, with ε[j]; j = 0, 1, ..., n - 1, σ[i]; i = 0, 1, ..., m, the temperature gradient g and the temperature θ as their argument tensors. The rationale for this choice is explained in the paper. The entropy inequality, the decomposition of σ[0] into equilibrium and deviatoric stresses, the conditions resulting from the entropy inequality, and the theory of generators and invariants are used in the derivations of the ordered rate constitutive theories of orders m and n in stress and strain tensors. Constitutive theories for the heat vector q (of up to orders m and n - 1) that are consistent (in terms of the argument tensors) with the constitutive theories for ε[n] (of up to orders m and n) are also derived. Many simplified forms of the rate theories of orders (m, n) are presented. Material coefficients are derived by considering Taylor series expansions of the coefficients in the linear combinations representing ε[n] and q, using the combined generators of the argument tensors about a known configuration Ω in the combined invariants of the argument tensors and temperature. 
It is shown that the rate constitutive theories of order one (m = 1, n = 1), when further simplified, result in constitutive theories that resemble currently used theories but are in fact different. The solid continua characterized by these theories have mechanisms of elasticity, dissipation and memory, i.e., relaxation behavior or rheology. The Fourier heat conduction law is shown to be an oversimplified case of the rate theory of order one (m = 1, n = 1) for q. The paper establishes when there is equivalence between the constitutive theories derived here using Ψ and those presented in Surana et al. (Acta Mech. doi:10.1007/s00707-014-1173-6, 2014) that are derived using the Helmholtz free energy density Φ. The fundamental differences between the two constitutive theories in terms of physics, and their explicit forms using Φ and Ψ, are difficult to distinguish in the ordered theories of orders (m, n) due to the complexity of the expressions. However, by choosing lower ordered theories, the difference between the two approaches can be clearly seen.

  6. Aridity and decomposition processes in complex landscapes

    NASA Astrophysics Data System (ADS)

    Ossola, Alessandro; Nyman, Petter

    2015-04-01

    Decomposition of organic matter is a key biogeochemical process contributing to nutrient cycles, carbon fluxes and soil development. The activity of decomposers depends on microclimate, with temperature and rainfall being major drivers. In complex terrain, fine-scale variation in microclimate (and hence water availability) arises from differences in incoming radiation and surface temperature with slope orientation. Aridity, measured as the long-term balance between net radiation and rainfall, is a metric that can be used to represent variations in water availability within the landscape. Since aridity metrics can be obtained at fine spatial scales, they could theoretically be used to investigate how decomposition processes vary across complex landscapes. In this study, four research sites were selected in tall open sclerophyll forest along an aridity gradient (Budyko dryness index ranging from 1.56 to 2.22) where microclimate, litter moisture and soil moisture were monitored continuously for one year. Litter bags were packed to estimate decomposition rates (k) using leaves of a tree species not present in the study area (Eucalyptus globulus) in order to avoid home-field advantage effects. Litter mass loss was measured to assess the activity of macro-decomposers (6 mm litter bag mesh size), meso-decomposers (1 mm mesh), microbes above ground (0.2 mm mesh) and microbes below ground (2 cm depth, 0.2 mm mesh). Four replicates of each set of bags were installed at each site, and bags were collected 1, 2, 4, 7 and 12 months after installation. We first tested whether differences in microclimate due to slope orientation have significant effects on decomposition processes. The dryness index was then related to decomposition rates to evaluate whether small-scale variation in decomposition can be predicted using readily available information on rainfall and radiation. 
Decomposition rates (k), calculated by fitting single-pool negative exponential models, generally decreased with increasing aridity, with k ranging from 0.0025 day⁻¹ on equatorial (dry) facing slopes to 0.0040 day⁻¹ on polar (wet) facing slopes. However, differences in temperature due to morning versus afternoon sun on east and west aspects, respectively (not captured in the aridity metric), resulted in poor prediction of decomposition for the sites in the intermediate aridity range. Overall, the results highlight that relatively small differences in microclimate due to slope orientation can have large effects on decomposition. Future research will aim to refine the aridity metric to better resolve small-scale variation in surface temperature, which is important when up-scaling decomposition processes to landscapes.
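    The single-pool negative exponential model used to obtain k is m(t) = m0 * exp(-k t); with litter-bag mass-loss data, k can be fit by linear regression on the log of the remaining mass fraction. A sketch with illustrative numbers (not the study's data):

```python
import numpy as np

def fit_decay_rate(t_days, mass_fraction):
    """Fit the single-pool model m(t)/m0 = exp(-k t) by linear least
    squares on ln(m/m0); returns k in day^-1."""
    slope, _ = np.polyfit(np.asarray(t_days, dtype=float),
                          np.log(mass_fraction), 1)
    return -slope

# Collection times roughly matching the litter-bag schedule (days)
t = np.array([30.0, 60.0, 120.0, 210.0, 365.0])
frac = np.exp(-0.0030 * t)       # synthetic data with k = 0.0030 day^-1
k = fit_decay_rate(t, frac)
```

    Fitting k separately per mesh size and slope aspect is what allows rates like the 0.0025 versus 0.0040 day⁻¹ contrast above to be compared against the aridity metric.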

  7. The processing of aluminum gasarites via thermal decomposition of interstitial hydrides

    NASA Astrophysics Data System (ADS)

    Licavoli, Joseph J.

    Gasarite structures are a unique type of metallic foam containing tubular pores. The original methods for their production limited them to laboratory study despite appealing foam properties. Thermal decomposition processing of gasarites holds the potential to broaden the application of gasarite foams in engineering design by removing several barriers to their industrial-scale production. The following study characterized thermal decomposition gasarite processing both experimentally and theoretically. It was found that significant variation was inherent to this process; therefore, several modifications were necessary to produce gasarites using this method. Conventional means to increase porosity and enhance pore morphology were studied. Pore morphology was found to be more easily replicated if pores were stabilized by alumina additions and powders were dispersed evenly. In order to better characterize processing, high-temperature and high-ramp-rate thermal decomposition data were gathered. It was found that the high-ramp-rate thermal decomposition behavior of several hydrides was more rapid than hydride kinetics at low ramp rates. These data were then used to estimate the contribution of several pore formation mechanisms to the development of pore structure. It was found that gas-metal eutectic growth can only be a viable pore formation mode if non-equilibrium conditions persist. Bubble capture cannot be a dominant pore growth mode due to high bubble terminal velocities. Direct gas evolution appears to be the most likely pore formation mode, given the high gas evolution rate from the decomposing particulate and microstructural pore growth trends. The overall process was evaluated for its economic viability. It was found that thermal decomposition has potential for industrialization, but further refinements are necessary for the process to be viable.

  8. Decomposition of energetic chemicals contaminated with iron or stainless steel.

    PubMed

    Chervin, Sima; Bodman, Glenn T; Barnhart, Richard W

    2006-03-17

    Contamination of chemicals or reaction mixtures with iron or stainless steel is likely to take place during chemical processing. If energetic, thermally unstable chemicals are involved in a manufacturing process, contamination with iron or stainless steel can alter the decomposition characteristics of these chemicals and, consequently, the safety of the process, and should be investigated. The goal of this project was to undertake a systematic study of the impact of iron or stainless steel contamination on the decomposition characteristics of different chemical classes. Differential scanning calorimetry (DSC) was used to study the decomposition reaction by testing each chemical in pure form and in mixtures with iron and stainless steel. The following classes of energetic chemicals were investigated: nitrobenzenes, tetrazoles, hydrazines, hydroxylamines and oximes, sulfonic acid derivatives and monomers. The following non-energetic groups were investigated for contributing effects: halogens, hydroxyls, amines, amides, nitriles, sulfonic acid esters, carbonyl halides and salts of hydrochloric acid. Based on the results obtained, conclusions were drawn regarding the sensitivity of the decomposition reaction to contamination with iron and stainless steel for the chemical classes listed above. The most sensitive classes proved to be hydrazines and hydroxylamines/oximes. Contamination of these chemicals with iron or stainless steel not only destabilizes them, leading to decomposition at significantly lower temperatures, but sometimes also increases the severity of the decomposition. The sensitivity of nitrobenzenes to contamination with iron or stainless steel depended on the presence of other contributing groups: groups such as acid chlorides or chlorine/fluorine significantly increased the effect of contamination on the decomposition characteristics of nitrobenzenes. The decomposition of sulfonic acid derivatives and tetrazoles was not impacted by the presence of iron or stainless steel.

  9. Long-term patterns of mass loss during the decomposition of leaf and fine root litter: an intersite comparison

    Treesearch

    Mark E. Harmon; Whendee L. Silver; Becky Fasth; Hua Chen; Ingrid C. Burke; William J. Parton; Stephen C. Hart; William S. Currie; Ariel E. Lugo

    2009-01-01

    Decomposition is a critical process in global carbon cycling. During decomposition, leaf and fine root litter may undergo a later, relatively slow phase; past long-term experiments indicate this phase occurs, but whether it is a general phenomenon has not been examined. Data from the Long-term Intersite Decomposition Experiment Team, representing 27 sites and nine litter...

  10. Aging-driven decomposition in zolpidem hemitartrate hemihydrate and the single-crystal structure of its decomposition products.

    PubMed

    Vega, Daniel R; Baggio, Ricardo; Roca, Mariana; Tombari, Dora

    2011-04-01

    The "aging-driven" decomposition of zolpidem hemitartrate hemihydrate (form A) has been followed by X-ray powder diffraction (XRPD), and the crystal and molecular structures of the decomposition products studied by single-crystal methods. The process is very similar to the "thermally driven" one recently described in the literature for form E (Halasz and Dinnebier. 2010. J Pharm Sci 99(2): 871-874), resulting in a two-phase system: the neutral free base (common to both decomposition processes) and, in the present case, a novel zolpidem tartrate monohydrate, unique to the "aging-driven" decomposition. Our room-temperature single-crystal analysis gives, for the free base, results comparable to the high-temperature XRPD ones already reported by Halasz and Dinnebier: orthorhombic, Pcba, a = 9.6360(10) Å, b = 18.2690(5) Å, c = 18.4980(11) Å, and V = 3256.4(4) Å³. The unreported zolpidem tartrate monohydrate instead crystallizes in monoclinic P21, which, for comparison purposes, we treated in the nonstandard setting P1121 with a = 20.7582(9) Å, b = 15.2331(5) Å, c = 7.2420(2) Å, γ = 90.826(2)°, and V = 2289.73(14) Å³. The structure presents two complete moieties in the asymmetric unit (Z = 4, Z' = 2). The different phases obtained in the two decompositions are readily explained by the diverse genesis of the processes. Copyright © 2010 Wiley-Liss, Inc.
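
    The reported cell constants are easy to cross-check arithmetically: in the nonstandard setting P1121 the unique angle is γ, so the monoclinic cell volume is V = a·b·c·sin γ. A short check using the values quoted in the abstract:

```python
import math

# Monoclinic cell volume in the nonstandard setting P1121 (unique angle
# gamma): V = a * b * c * sin(gamma). Constants are those quoted in the
# abstract for the new zolpidem tartrate monohydrate phase.
a, b, c = 20.7582, 15.2331, 7.2420   # Angstrom
gamma = math.radians(90.826)
V = a * b * c * math.sin(gamma)      # agrees with the reported 2289.73(14)
```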

  11. Multidisciplinary optimization for engineering systems - Achievements and potential

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw

    1989-01-01

    The currently common sequential design process for engineering systems is likely to lead to suboptimal designs. Recently developed decomposition methods offer an alternative for coming closer to the optimum by breaking the large task of system optimization into smaller, concurrently executed, yet coupled tasks identified with engineering disciplines or subsystems. Hierarchic and non-hierarchic decompositions are discussed and illustrated by examples. An organization of the design process centered on non-hierarchic decomposition is proposed.

  13. Development Of Polarimetric Decomposition Techniques For Indian Forest Resource Assessment Using Radar Imaging Satellite (Risat-1) Images

    NASA Astrophysics Data System (ADS)

    Sridhar, J.

    2015-12-01

    The focus of this work is to examine polarimetric decomposition techniques, primarily Pauli decomposition and Sphere Di-Plane Helix (SDH) decomposition, for forest resource assessment. The data processing steps adopted are pre-processing (geometric correction and radiometric calibration), speckle reduction, image decomposition and image classification. Initially, to classify forest regions, unsupervised classification was applied to determine the unknown classes; the K-means clustering method gave better results than the ISODATA method. Using the algorithm developed for Radar Tools, the decomposition and classification techniques were coded in Interactive Data Language (IDL) and applied to a RISAT-1 image of the Mysore-Mandya region of Karnataka, India. This region was chosen for studying forest vegetation and consists of agricultural lands, water and hilly regions. Polarimetric SAR data possess high potential for classification of the earth's surface. After applying the decomposition techniques, classification was done by selecting regions of interest; post-classification, the overall accuracy was higher for the SDH-decomposed image, as SDH decomposition operates on individual pixels on a coherent basis and utilizes the complete intrinsic coherent nature of polarimetric SAR data, making it particularly suited to the analysis of high-resolution SAR data. The Pauli decomposition represents all the polarimetric information in a single SAR image; however, interpretation of the resulting image is difficult. The SDH decomposition technique appears to produce better results and interpretation than the Pauli decomposition, although further quantification and analysis are under way. The comparison of polarimetric decomposition techniques and evolutionary classification techniques is the scope of this work.
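
    The Pauli decomposition mentioned above projects each pixel's 2x2 complex scattering matrix onto the Pauli basis. A minimal per-pixel sketch (the toy scattering values are illustrative, and reciprocity S_hv = S_vh is assumed):

```python
import numpy as np

# Pauli decomposition of a quad-pol scattering matrix S = [[S_hh, S_hv],
# [S_vh, S_vv]]. |alpha|, |beta|, |gamma| are commonly displayed as an
# RGB composite; channel conventions vary between tools.
def pauli_components(S_hh, S_hv, S_vv):
    alpha = (S_hh + S_vv) / np.sqrt(2)   # odd-bounce (surface) scattering
    beta  = (S_hh - S_vv) / np.sqrt(2)   # even-bounce (double-bounce)
    gamma = np.sqrt(2) * S_hv            # volume / 45-degree dihedral
    return alpha, beta, gamma

# Toy single-pixel example.
a, b, g = pauli_components(1.0 + 0.2j, 0.1j, 0.8 - 0.1j)
span = abs(a)**2 + abs(b)**2 + abs(g)**2
# The Pauli basis is unitary, so span = |S_hh|^2 + 2|S_hv|^2 + |S_vv|^2.
```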

  14. A simple method for decomposition of peracetic acid in a microalgal cultivation system.

    PubMed

    Sung, Min-Gyu; Lee, Hansol; Nam, Kibok; Rexroth, Sascha; Rögner, Matthias; Kwon, Jong-Hee; Yang, Ji-Won

    2015-03-01

    A cost-efficient process that avoids several washing steps was developed, based on direct cultivation following decomposition of the sterilizer. Peracetic acid (PAA) is known to be an efficient antimicrobial agent due to its high oxidizing potential. Sterilization by 2 mM PAA demands at least 1 h of incubation for effective disinfection. Direct degradation of PAA was demonstrated using components of conventional algal medium: ferric ion and a pH buffer (HEPES) showed a synergetic effect, decomposing PAA within 6 h. In contrast, NaNO3, one of the main components of algal media, inhibits the decomposition of PAA. Improved growth of Chlorella vulgaris and Synechocystis PCC6803 was observed in BG11 medium prepared by decomposition of PAA. This process of sterilization followed by decomposition of PAA should enable cost-efficient management of large-scale photobioreactors for the production of value-added products and biofuels from microalgal biomass.

  15. A global experiment suggests climate warming will not accelerate litter decomposition in streams but might reduce carbon sequestration.

    PubMed

    Boyero, Luz; Pearson, Richard G; Gessner, Mark O; Barmuta, Leon A; Ferreira, Verónica; Graça, Manuel A S; Dudgeon, David; Boulton, Andrew J; Callisto, Marcos; Chauvet, Eric; Helson, Julie E; Bruder, Andreas; Albariño, Ricardo J; Yule, Catherine M; Arunachalam, Muthukumarasamy; Davies, Judy N; Figueroa, Ricardo; Flecker, Alexander S; Ramírez, Alonso; Death, Russell G; Iwata, Tomoya; Mathooko, Jude M; Mathuriau, Catherine; Gonçalves, José F; Moretti, Marcelo S; Jinggut, Tajang; Lamothe, Sylvain; M'Erimba, Charles; Ratnarajah, Lavenia; Schindler, Markus H; Castela, José; Buria, Leonardo M; Cornejo, Aydeé; Villanueva, Verónica D; West, Derek C

    2011-03-01

    The decomposition of plant litter is one of the most important ecosystem processes in the biosphere and is particularly sensitive to climate warming. Aquatic ecosystems are well suited to studying warming effects on decomposition because the otherwise confounding influence of moisture is constant. By using a latitudinal temperature gradient in an unprecedented global experiment in streams, we found that climate warming will likely hasten microbial litter decomposition and produce an equivalent decline in detritivore-mediated decomposition rates. As a result, overall decomposition rates should remain unchanged. Nevertheless, the process would be profoundly altered, because the shift in importance from detritivores to microbes in warm climates would likely increase CO2 production and decrease the generation and sequestration of recalcitrant organic particles. In view of recent estimates showing that inland waters are a significant component of the global carbon cycle, this implies consequences for global biogeochemistry and a possible positive climate feedback. © 2011 Blackwell Publishing Ltd/CNRS.

  16. Tensor completion for estimating missing values in visual data.

    PubMed

    Liu, Ji; Musialski, Przemyslaw; Wonka, Peter; Ye, Jieping

    2013-01-01

    In this paper, we propose an algorithm to estimate missing values in tensors of visual data. The values can be missing due to problems in the acquisition process or because the user manually identified unwanted outliers. Our algorithm works even with a small number of samples and it can propagate structure to fill larger missing regions. Our methodology is built on recent studies about matrix completion using the matrix trace norm. The contribution of our paper is to extend the matrix case to the tensor case by proposing the first definition of the trace norm for tensors and then by building a working algorithm. First, we propose a definition for the tensor trace norm that generalizes the established definition of the matrix trace norm. Second, similarly to matrix completion, the tensor completion is formulated as a convex optimization problem. Unfortunately, the straightforward problem extension is significantly harder to solve than the matrix case because of the dependency among multiple constraints. To tackle this problem, we developed three algorithms: simple low rank tensor completion (SiLRTC), fast low rank tensor completion (FaLRTC), and high accuracy low rank tensor completion (HaLRTC). The SiLRTC algorithm is simple to implement and employs a relaxation technique to separate the dependent relationships and uses the block coordinate descent (BCD) method to achieve a globally optimal solution; the FaLRTC algorithm utilizes a smoothing scheme to transform the original nonsmooth problem into a smooth one and can be used to solve a general tensor trace norm minimization problem; the HaLRTC algorithm applies the alternating direction method of multipliers (ADMM) to our problem. Our experiments show potential applications of our algorithms, and the quantitative evaluation indicates that our methods are more accurate and robust than heuristic approaches. The efficiency comparison indicates that FaLRTC and HaLRTC are more efficient than SiLRTC; between FaLRTC and HaLRTC, the former is more efficient for obtaining a low-accuracy solution and the latter is preferred if a high-accuracy solution is desired.
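
    The tensor trace norm proposed in this record is defined as a weighted sum of the nuclear norms of the mode-n unfoldings. A minimal numpy sketch of that definition (the uniform weights are an assumption; the SiLRTC/FaLRTC/HaLRTC solvers themselves are not reproduced here):

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: move axis `mode` first, then flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def tensor_trace_norm(T, alphas):
    """Weighted sum of nuclear norms of the mode-n unfoldings."""
    return sum(a * np.linalg.norm(unfold(T, i), 'nuc')
               for i, a in enumerate(alphas))

# Sanity check: for a rank-1 tensor built from unit vectors, every
# unfolding has a single unit singular value, so the norm equals 1.
u = np.array([1.0, 0.0])
T1 = np.einsum('i,j,k->ijk', u, u, u)
tn = tensor_trace_norm(T1, alphas=[1/3, 1/3, 1/3])
```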

  17. Data driven discrete-time parsimonious identification of a nonlinear state-space model for a weakly nonlinear system with short data record

    NASA Astrophysics Data System (ADS)

    Relan, Rishi; Tiels, Koen; Marconato, Anna; Dreesen, Philippe; Schoukens, Johan

    2018-05-01

    Many real-world systems exhibit quasi-linear or weakly nonlinear behavior during normal operation, and a hard saturation effect for high peaks of the input signal. In this paper, a methodology to identify a parsimonious discrete-time nonlinear state-space (NLSS) model of a nonlinear dynamical system from a relatively short data record is proposed. The capability of the NLSS model structure is demonstrated by introducing two different initialisation schemes, one of them using multivariate polynomials. In addition, a method using first-order information of the multivariate polynomials and tensor decomposition is employed to obtain a parsimonious decoupled representation of the set of multivariate real polynomials estimated during identification of the NLSS model. Finally, the model structure is verified experimentally on the cascaded water tanks benchmark identification problem.
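
    The decoupled representation produced by that tensor-decomposition step replaces a coupled multivariate polynomial map f(u) with f(u) = W g(Vᵀu), where each g_i is a univariate polynomial. A small sketch of evaluating such a structure (V, W and the polynomial coefficients below are illustrative assumptions, not identified values):

```python
import numpy as np

def decoupled_eval(u, V, W, coeffs):
    """Evaluate f(u) = W g(V^T u) with univariate polynomials g_i."""
    x = V.T @ u                                   # internal branch inputs
    g = np.array([np.polyval(c, xi) for c, xi in zip(coeffs, x)])
    return W @ g                                  # recombine into outputs

V = np.array([[1.0, 0.5],
              [0.2, -1.0]])                       # 2 inputs -> 2 branches
W = np.array([[0.7, 0.3],
              [0.1, 0.9]])                        # 2 branches -> 2 outputs
coeffs = [np.array([1.0, 0.0, 0.0]),              # g_1(x) = x^2
          np.array([0.5, 1.0, 0.0])]              # g_2(x) = 0.5 x^2 + x
y = decoupled_eval(np.array([1.0, 1.0]), V, W, coeffs)
```

    The appeal of the decoupled form is parsimony: each internal branch is a single-variable polynomial, so the number of parameters grows linearly in the number of branches rather than combinatorially in the input dimension.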

  18. Matrix pentagons

    NASA Astrophysics Data System (ADS)

    Belitsky, A. V.

    2017-10-01

    The Operator Product Expansion for null polygonal Wilson loop in planar maximally supersymmetric Yang-Mills theory runs systematically in terms of multi-particle pentagon transitions which encode the physics of excitations propagating on the color flux tube ending on the sides of the four-dimensional contour. Their dynamics was unraveled in the past several years and culminated in a complete description of pentagons as an exact function of the 't Hooft coupling. In this paper we provide a solution for the last building block in this program, the SU(4) matrix structure arising from internal symmetry indices of scalars and fermions. This is achieved by a recursive solution of the Mirror and Watson equations obeyed by the so-called singlet pentagons and fixing the form of the twisted component in their tensor decomposition. The non-singlet, or charged, pentagons are deduced from these by a limiting procedure.

  19. sl(1|2) Super-Toda Fields

    NASA Astrophysics Data System (ADS)

    Yang, Zhan-Ying; Xue, Pan-Pan; Zhao, Liu; Shi, Kang-Jie

    2008-11-01

    An explicit exact solution of supersymmetric Toda fields associated with the Lie superalgebra sl(2|1) is constructed. The approach used is a super extension of the Leznov-Saveliev algebraic analysis, which is based on a pair of chiral and antichiral Drinfeld-Sokolov systems. Though this approach is well understood for Toda field theories associated with ordinary Lie algebras, its super analogue had only been successful in the super Liouville case with the underlying Lie superalgebra osp(1|2). The problem lies in the fact that a key step in the construction makes use of the tensor product decomposition of the highest weight representations of the underlying Lie superalgebra, which was not clarified until recently. The construction made in this paper thus presents the first explicit example of Leznov-Saveliev analysis for super Toda systems associated with underlying Lie superalgebras of rank higher than 1.

  20. Traumatic Brain Injury Diffusion Magnetic Resonance Imaging Research Roadmap Development Project

    DTIC Science & Technology

    2011-10-01

    promising technology on the horizon is Diffusion Tensor Imaging (DTI). Diffusion tensor imaging (DTI) is a magnetic resonance imaging (MRI)-based...in the brain. The potential for DTI to improve our understanding of TBI has not been fully explored and challenges associated with non-existent...processing tools, quality control standards, and a shared image repository. The recommendations will be disseminated and pilot tested. A DTI of TBI

  1. Mathematical Modeling of Diverse Phenomena

    NASA Technical Reports Server (NTRS)

    Howard, J. C.

    1979-01-01

    Tensor calculus is applied to the formulation of mathematical models of diverse phenomena. Aeronautics, fluid dynamics, and cosmology are among the areas of application. The feasibility of combining tensor methods and computer capability to formulate problems is demonstrated. The techniques described are an attempt to simplify the formulation of mathematical models by reducing the modeling process to a series of routine operations, which can be performed either manually or by computer.

  2. Tensor integrand reduction via Laurent expansion

    NASA Astrophysics Data System (ADS)

    Hirschi, Valentin; Peraro, Tiziano

    2016-06-01

    We introduce a new method for the application of one-loop integrand reduction via the Laurent expansion algorithm, as implemented in the public C++ library Ninja. We show how the coefficients of the Laurent expansion can be computed by suitable contractions of the loop numerator tensor with cut-dependent projectors, making it possible to interface Ninja to any one-loop matrix element generator that can provide the components of this tensor. We implemented this technique in the Ninja library and interfaced it to MadLoop, which is part of the public MadGraph5_aMC@NLO framework. We performed a detailed performance study, comparing against other public reduction tools, namely CutTools, Samurai, IREGI, PJFry++ and Golem95. We find that Ninja outperforms traditional integrand reduction in both speed and numerical stability, the latter being on par with that of the tensor integral reduction tool Golem95, which is however more limited and slower than Ninja. We considered many benchmark multi-scale processes of increasing complexity, involving QCD and electro-weak corrections as well as effective non-renormalizable couplings, showing that Ninja's performance scales well with both the rank and multiplicity of the considered process.

  3. Dissolved organic matter release in overlying water and bacterial community shifts in biofilm during the decomposition of Myriophyllum verticillatum.

    PubMed

    Zhang, Lisha; Zhang, Songhe; Lv, Xiaoyang; Qiu, Zheng; Zhang, Ziqiu; Yan, Liying

    2018-08-15

    This study investigated the alterations in biomass, nutrients and dissolved organic matter concentration in overlying water, and characterized the bacterial 16S rRNA gene in biofilms attached to plant residue, during the decomposition of Myriophyllum verticillatum. The 55-day decomposition experiment showed that the plant decay process is well described by the exponential model, with an average decomposition rate of 0.037 d-1. Total organic carbon, total nitrogen, and organic nitrogen concentrations increased significantly in overlying water during decomposition compared to the control within 35 d. Results from excitation-emission matrix-parallel factor analysis showed that humic acid-like and tyrosine acid-like substances might originate from plant degradation processes. Tyrosine acid-like substances correlated strongly with organic nitrogen and total nitrogen (p<0.01). Decomposition rates were positively related to pH, total organic carbon, oxidation-reduction potential and dissolved oxygen, but negatively related to temperature in overlying water. Microbe densities attached to plant residues increased as decomposition progressed. The most dominant phylum was Bacteroidetes (>46%) at 7 d, Chlorobi (20%-44%) or Proteobacteria (25%-34%) at 21 d, and Chlorobi (>40%) at 55 d. Among microbes attached to plant residues, sugar- and polysaccharide-degrading genera including Bacteroides, Blvii28, Fibrobacter, and Treponema dominated at 7 d, while Chlorobaculum, Rhodobacter, Methanobacterium, Thiobaca, Methanospirillum and Methanosarcina dominated at 21 d and 55 d. These results provide insight into dissolved organic matter release and bacterial community shifts during submerged macrophyte decomposition. Copyright © 2018 Elsevier B.V. All rights reserved.
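
    The exponential model cited above is the single negative-exponential decay m(t) = m0·exp(-kt); the rate k is typically estimated by log-linear regression of mass remaining on time. A minimal sketch, using synthetic data points generated at the reported average rate of 0.037 d-1 rather than the study's measurements:

```python
import numpy as np

# Fit the first-order litter decay model m(t) = m0 * exp(-k t) by
# regressing log(mass remaining) on time. The data are synthetic,
# generated with k = 0.037 1/d to mirror the reported average rate.
t = np.array([0.0, 7.0, 21.0, 35.0, 55.0])    # sampling days
m = 100.0 * np.exp(-0.037 * t)                # % mass remaining (noise-free)
slope, intercept = np.polyfit(t, np.log(m), 1)
k_hat = -slope
print(f"estimated k = {k_hat:.3f} per day")
```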

  4. Nuclear driven water decomposition plant for hydrogen production

    NASA Technical Reports Server (NTRS)

    Parker, G. H.; Brecher, L. E.; Farbman, G. H.

    1976-01-01

    The conceptual design of a hydrogen production plant using a very-high-temperature nuclear reactor (VHTR) to energize a hybrid electrolytic-thermochemical system for water decomposition has been prepared. A graphite-moderated helium-cooled VHTR is used to produce 1850 F gas for electric power generation and 1600 F process heat for the water-decomposition process which uses sulfur compounds and promises performance superior to normal water electrolysis or other published thermochemical processes. The combined cycle operates at an overall thermal efficiency in excess of 45%, and the overall economics of hydrogen production by this plant have been evaluated predicated on a consistent set of economic ground rules. The conceptual design and evaluation efforts have indicated that development of this type of nuclear-driven water-decomposition plant will permit large-scale economic generation of hydrogen in the 1990s.

  5. Divergence correction schemes in finite difference method for 3D tensor CSAMT in axial anisotropic media

    NASA Astrophysics Data System (ADS)

    Wang, Kunpeng; Tan, Handong; Zhang, Zhiyong; Li, Zhiqiang; Cao, Meng

    2017-05-01

    Resistivity anisotropy and full-tensor controlled-source audio-frequency magnetotellurics (CSAMT) have gradually become hot research topics. However, much of the current anisotropy research for tensor CSAMT only focuses on the one-dimensional (1D) solution. As the subsurface is rarely 1D, it is necessary to study three-dimensional (3D) model response. The staggered-grid finite difference method is an effective simulation method for 3D electromagnetic forward modelling. Previous studies have suggested using the divergence correction to constrain the iterative process when using a staggered-grid finite difference model so as to accelerate the 3D forward speed and enhance the computational accuracy. However, the traditional divergence correction method was developed assuming an isotropic medium. This paper improves the traditional isotropic divergence correction method and derivation process to meet the tensor CSAMT requirements for anisotropy using the volume integral of the divergence equation. This method is more intuitive, enabling a simple derivation of a discrete equation and then calculation of coefficients related to the anisotropic divergence correction equation. We validate the result of our 3D computational results by comparing them to the results computed using an anisotropic, controlled-source 2.5D program. The 3D resistivity anisotropy model allows us to evaluate the consequences of using the divergence correction at different frequencies and for two orthogonal finite length sources. Our results show that the divergence correction plays an important role in 3D tensor CSAMT resistivity anisotropy research and offers a solid foundation for inversion of CSAMT data collected over an anisotropic body.

  6. General tensor discriminant analysis and gabor features for gait recognition.

    PubMed

    Tao, Dacheng; Li, Xuelong; Wu, Xindong; Maybank, Stephen J

    2007-10-01

    The traditional image representations are not suited to conventional classification methods, such as linear discriminant analysis (LDA), because of the undersample problem (USP): the dimensionality of the feature space is much higher than the number of training samples. Motivated by the successes of two-dimensional LDA (2DLDA) for face recognition, we develop a general tensor discriminant analysis (GTDA) as a preprocessing step for LDA. The benefits of GTDA compared with existing preprocessing methods, e.g., principal component analysis (PCA) and 2DLDA, include 1) the USP is reduced in subsequent classification by, for example, LDA; 2) the discriminative information in the training tensors is preserved; and 3) GTDA provides stable recognition rates because the alternating projection optimization algorithm used to obtain a solution of GTDA converges, while that of 2DLDA does not. We use human gait recognition to validate the proposed GTDA. Averaged gait images are utilized for gait representation. Given the popularity of Gabor-function-based image decompositions for image understanding and object recognition, we develop three different Gabor-function-based image representations: 1) the GaborD representation is the sum of Gabor filter responses over directions, 2) GaborS is the sum of Gabor filter responses over scales, and 3) GaborSD is the sum of Gabor filter responses over scales and directions. The GaborD, GaborS and GaborSD representations are applied to the problem of recognizing people from their averaged gait images. A large number of experiments were carried out to evaluate the effectiveness (recognition rate) of gait recognition based on first obtaining a Gabor, GaborD, GaborS or GaborSD image representation, then using GTDA to extract features, and finally using LDA for classification. The proposed methods achieved good performance for gait recognition based on image sequences from the USF HumanID Database. Experimental comparisons are made with nine state-of-the-art classification methods in gait recognition.
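
    The GaborD representation described above, the sum of Gabor filter responses over directions, can be sketched in a few lines of numpy (circular FFT convolution; the kernel size, scale and wavelength are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def gabor_kernel(size, sigma, wavelength, theta):
    """Real part of a Gabor filter for one (scale, direction) pair."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)     # rotated coordinate
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * xr / wavelength)

def gabor_d(image, n_directions=4, size=9, sigma=2.0, wavelength=4.0):
    """Sum of Gabor response magnitudes over directions at one scale."""
    F = np.fft.fft2(image)
    total = np.zeros_like(image, dtype=float)
    for k in range(n_directions):
        kern = gabor_kernel(size, sigma, wavelength, np.pi * k / n_directions)
        resp = np.fft.ifft2(F * np.fft.fft2(kern, s=image.shape)).real
        total += np.abs(resp)
    return total

img = np.zeros((32, 32)); img[:, 16] = 1.0   # toy image: vertical edge
feat = gabor_d(img)                          # GaborD feature map
```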

  7. Source Parameters from Full Moment Tensor Inversions of Potentially Induced Earthquakes in Western Canada

    NASA Astrophysics Data System (ADS)

    Wang, R.; Gu, Y. J.; Schultz, R.; Kim, A.; Chen, Y.

    2015-12-01

    During the past four years, the number of earthquakes with magnitudes greater than three has substantially increased in the southern section of the Western Canada Sedimentary Basin (WCSB). While some of these events are likely associated with tectonic forces, especially along the foothills of the Canadian Rockies, a significant fraction occurred in previously quiescent regions and has been linked to wastewater disposal or hydraulic fracturing. A proper assessment of the origin and source properties of these 'induced earthquakes' requires careful analyses and modeling of regional broadband data, which steadily improved during the past 8 years due to the recent establishment of regional broadband seismic networks such as CRANE, RAVEN and TD. Several earthquakes, especially those close to fracking activities (e.g., near the town of Fox Creek, Alberta), are analyzed. Our preliminary full moment tensor inversion results show maximum horizontal compressional orientations (P-axis) along the northeast-southwest direction, which agree with the regional stress directions from borehole breakout data and the P-axes of historical events. The decomposition of those moment tensors shows evidence of strike-slip mechanisms with near-vertical fault plane solutions, comparable to the focal mechanisms of injection-induced earthquakes in Oklahoma. Minimal isotropic components have been observed, while a modest percentage of compensated-linear-vector-dipole (CLVD) components, which have been linked to fluid migration, may be required to match the waveforms. To further evaluate the non-double-couple components, we compare the outcomes of full, deviatoric and pure double-couple (DC) inversions using multiple frequency ranges and phases. Improved location and depth information from a novel grid search greatly assists the identification and classification of earthquakes in potential connection with fluid injection or extraction. Overall, a systematic comparison of the source attributes of intermediate-sized earthquakes presents a new window into the nature of potentially induced earthquakes in the WCSB.
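
    The isotropic/DC/CLVD bookkeeping behind statements like "minimal isotropic components, modest CLVD percentage" follows a standard eigenvalue decomposition of the moment tensor. A compact sketch (the input tensor is a synthetic pure strike-slip example, not one of the WCSB events):

```python
import numpy as np

# Standard isotropic / double-couple / CLVD decomposition of a symmetric
# 3x3 moment tensor. eps = 0 for a pure DC, |eps| = 0.5 for a pure CLVD;
# the CLVD percentage convention used here is 200*|eps|.
def decompose(M):
    iso = np.trace(M) / 3.0                  # isotropic part
    dev = M - iso * np.eye(3)                # deviatoric part
    lam = np.linalg.eigvalsh(dev)            # deviatoric eigenvalues
    lam = lam[np.argsort(np.abs(lam))]       # sort by absolute magnitude
    eps = -lam[0] / abs(lam[2])              # Dziewonski-style epsilon
    pct_clvd = 200.0 * abs(eps)              # percent of deviatoric moment
    pct_dc = 100.0 - pct_clvd
    return iso, eps, pct_dc, pct_clvd

# Pure vertical strike-slip double couple: expect eps ~ 0, DC ~ 100%.
M_dc = np.array([[0.0, 1.0, 0.0],
                 [1.0, 0.0, 0.0],
                 [0.0, 0.0, 0.0]])
iso, eps, pct_dc, pct_clvd = decompose(M_dc)
```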

  8. Predicting the reference evapotranspiration based on tensor decomposition

    NASA Astrophysics Data System (ADS)

    Misaghian, Negin; Shamshirband, Shahaboddin; Petković, Dalibor; Gocic, Milan; Mohammadi, Kasra

    2017-11-01

    Most of the available models for reference evapotranspiration (ET0) estimation are based on a single empirical equation for ET0. Thus, one of the main issues in ET0 estimation is the appropriate integration of time information and different empirical ET0 equations to determine ET0 and boost the precision. The FAO-56 Penman-Monteith, adjusted Hargreaves, Blaney-Criddle, Priestley-Taylor, and Jensen-Haise equations were utilized in this study for estimating ET0 at the two stations of Belgrade and Nis in Serbia, using data collected for the period 1980 to 2010. A third-order tensor is used to capture three-way correlations among months, years, and ET0 information. Afterward, the latent correlations among ET0 parameters were found by multiway analysis to enhance the quality of the prediction. The suggested method is valuable as it takes into account simultaneous relations between elements, boosts the prediction precision, and determines latent associations. Models are compared with respect to the coefficient of determination (R2), mean absolute error (MAE), and root-mean-square error (RMSE). The proposed tensor approach has an R2 value greater than 0.9 for all selected ET0 methods at both stations, which is acceptable for ET0 prediction. RMSE ranged between 0.247 and 0.485 mm day-1 at Nis station and between 0.277 and 0.451 mm day-1 at Belgrade station, while MAE is between 0.140 and 0.337 mm day-1 at Nis and between 0.208 and 0.360 mm day-1 at Belgrade station. The best performances are achieved by the Priestley-Taylor model at Nis station (R2 = 0.985, MAE = 0.140 mm day-1, RMSE = 0.247 mm day-1) and the FAO-56 Penman-Monteith model at Belgrade station (MAE = 0.208 mm day-1, RMSE = 0.277 mm day-1, R2 = 0.975).
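
    The three skill scores used in the comparison (R2, MAE, RMSE) are standard. A minimal sketch of computing them for a pair of observed/predicted ET0 series (the numbers are synthetic, for illustration only):

```python
import numpy as np

# Coefficient of determination, mean absolute error and root-mean-square
# error, as used to rank the ET0 models in the abstract.
def scores(obs, pred):
    resid = obs - pred
    rmse = np.sqrt(np.mean(resid**2))
    mae = np.mean(np.abs(resid))
    r2 = 1.0 - np.sum(resid**2) / np.sum((obs - obs.mean())**2)
    return r2, mae, rmse

# Synthetic daily ET0 values in mm/day (illustrative only).
obs  = np.array([2.1, 3.4, 4.8, 5.6, 4.2, 3.0])
pred = np.array([2.3, 3.2, 4.9, 5.3, 4.4, 2.8])
r2, mae, rmse = scores(obs, pred)
```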

  9. Early stage litter decomposition across biomes

    Treesearch

    Ika Djukic; Sebastian Kepfer-Rojas; Inger Kappel Schmidt; Klaus Steenberg Larsen; Claus Beier; Björn Berg; Kris Verheyen; Adriano Caliman; Alain Paquette; Alba Gutiérrez-Girón; Alberto Humber; Alejandro Valdecantos; Alessandro Petraglia; Heather Alexander; Algirdas Augustaitis; Amélie Saillard; Ana Carolina Ruiz Fernández; Ana I. Sousa; Ana I. Lillebø; Anderson da Rocha Gripp; André-Jean Francez; Andrea Fischer; Andreas Bohner; Andrey Malyshev; Andrijana Andrić; Andy Smith; Angela Stanisci; Anikó Seres; Anja Schmidt; Anna Avila; Anne Probst; Annie Ouin; Anzar A. Khuroo; Arne Verstraeten; Arely N. Palabral-Aguilera; Artur Stefanski; Aurora Gaxiola; Bart Muys; Bernard Bosman; Bernd Ahrends; Bill Parker; Birgit Sattler; Bo Yang; Bohdan Juráni; Brigitta Erschbamer; Carmen Eugenia Rodriguez Ortiz; Casper T. Christiansen; E. Carol Adair; Céline Meredieu; Cendrine Mony; Charles A. Nock; Chi-Ling Chen; Chiao-Ping Wang; Christel Baum; Christian Rixen; Christine Delire; Christophe Piscart; Christopher Andrews; Corinna Rebmann; Cristina Branquinho; Dana Polyanskaya; David Fuentes Delgado; Dirk Wundram; Diyaa Radeideh; Eduardo Ordóñez-Regil; Edward Crawford; Elena Preda; Elena Tropina; Elli Groner; Eric Lucot; Erzsébet Hornung; Esperança Gacia; Esther Lévesque; Evanilde Benedito; Evgeny A. Davydov; Evy Ampoorter; Fabio Padilha Bolzan; Felipe Varela; Ferdinand Kristöfel; Fernando T. Maestre; Florence Maunoury-Danger; Florian Hofhansl; Florian Kitz; Flurin Sutter; Francisco Cuesta; Francisco de Almeida Lobo; Franco Leandro de Souza; Frank Berninger; Franz Zehetner; Georg Wohlfahrt; George Vourlitis; Geovana Carreño-Rocabado; Gina Arena; Gisele Daiane Pinha; Grizelle González; Guylaine Canut; Hanna Lee; Hans Verbeeck; Harald Auge; Harald Pauli; Hassan Bismarck Nacro; Héctor A. Bahamonde; Heike Feldhaar; Heinke Jäger; Helena C. 
Serrano; Hélène Verheyden; Helge Bruelheide; Henning Meesenburg; Hermann Jungkunst; Hervé Jactel; Hideaki Shibata; Hiroko Kurokawa; Hugo López Rosas; Hugo L. Rojas Villalobos; Ian Yesilonis; Inara Melece; Inge Van Halder; Inmaculada García Quirós; Isaac Makelele; Issaka Senou; István Fekete; Ivan Mihal; Ivika Ostonen; Jana Borovská; Javier Roales; Jawad Shoqeir; Jean-Christophe Lata; Jean-Paul Theurillat; Jean-Luc Probst; Jess Zimmerman; Jeyanny Vijayanathan; Jianwu Tang; Jill Thompson; Jiří Doležal; Joan-Albert Sanchez-Cabeza; Joël Merlet; Joh Henschel; Johan Neirynck; Johannes Knops; John Loehr; Jonathan von Oppen; Jónína Sigríður Þorláksdóttir; Jörg Löffler; José-Gilberto Cardoso-Mohedano; José-Luis Benito-Alonso; Jose Marcelo Torezan; Joseph C. Morina; Juan J. Jiménez; Juan Dario Quinde; Juha Alatalo; Julia Seeber; Jutta Stadler; Kaie Kriiska; Kalifa Coulibaly; Karibu Fukuzawa; Katalin Szlavecz; Katarína Gerhátová; Kate Lajtha; Kathrin Käppeler; Katie A. Jennings; Katja Tielbörger; Kazuhiko Hoshizaki; Ken Green; Lambiénou Yé; Laryssa Helena Ribeiro Pazianoto; Laura Dienstbach; Laura Williams; Laura Yahdjian; Laurel M. Brigham; Liesbeth van den Brink; Lindsey Rustad; et al.

    2018-01-01

    Through litter decomposition, enormous amounts of carbon are emitted to the atmosphere. Numerous large-scale decomposition experiments have been conducted on this fundamental soil process in order to understand the controls on the transfer of terrestrial carbon to the atmosphere. However, previous studies were mostly based on site-specific litter and methodologies...

  10. Compositional aspects of herbaceous litter decomposition in the freshwater marshes of the Florida Everglades

    USDA-ARS?s Scientific Manuscript database

    Litter decomposition in wetlands is an important component of ecosystem function in these detrital systems. In oligotrophic wetlands, such as the Florida Everglades, litter decomposition processes are dependent on nutrient availability and litter quality. However, not much is known about how the che...

  11. Nonequilibrium adiabatic molecular dynamics simulations of methane clathrate hydrate decomposition

    NASA Astrophysics Data System (ADS)

    Alavi, Saman; Ripmeester, J. A.

    2010-04-01

    Nonequilibrium, constant energy, constant volume (NVE) molecular dynamics simulations are used to study the decomposition of methane clathrate hydrate in contact with water. Under adiabatic conditions, the rate of methane clathrate decomposition is affected by heat and mass transfer arising from the breakup of the clathrate hydrate framework, release of the methane gas at the solid-liquid interface, and diffusion of methane through water. We observe that temperature gradients are established between the clathrate and solution phases as a result of the endothermic clathrate decomposition process, and this factor must be considered when modeling the decomposition process. Additionally, we observe that clathrate decomposition does not occur gradually with breakup of individual cages, but rather in a concerted fashion, with rows of structure I cages parallel to the interface decomposing simultaneously. Due to the concerted breakup of layers of the hydrate, large amounts of methane gas are released near the surface, which can form bubbles that greatly affect the rate of mass transfer near the surface of the clathrate phase. The effects of these phenomena on the rate of methane hydrate decomposition are determined and implications for hydrate dissociation in natural methane hydrate reservoirs are discussed.

  12. Nonequilibrium adiabatic molecular dynamics simulations of methane clathrate hydrate decomposition.

    PubMed

    Alavi, Saman; Ripmeester, J A

    2010-04-14

    Nonequilibrium, constant energy, constant volume (NVE) molecular dynamics simulations are used to study the decomposition of methane clathrate hydrate in contact with water. Under adiabatic conditions, the rate of methane clathrate decomposition is affected by heat and mass transfer arising from the breakup of the clathrate hydrate framework, release of the methane gas at the solid-liquid interface, and diffusion of methane through water. We observe that temperature gradients are established between the clathrate and solution phases as a result of the endothermic clathrate decomposition process, and this factor must be considered when modeling the decomposition process. Additionally, we observe that clathrate decomposition does not occur gradually with breakup of individual cages, but rather in a concerted fashion, with rows of structure I cages parallel to the interface decomposing simultaneously. Due to the concerted breakup of layers of the hydrate, large amounts of methane gas are released near the surface, which can form bubbles that greatly affect the rate of mass transfer near the surface of the clathrate phase. The effects of these phenomena on the rate of methane hydrate decomposition are determined and implications for hydrate dissociation in natural methane hydrate reservoirs are discussed.

  13. Efficient material decomposition method for dual-energy X-ray cargo inspection system

    NASA Astrophysics Data System (ADS)

    Lee, Donghyeon; Lee, Jiseoc; Min, Jonghwan; Lee, Byungcheol; Lee, Byeongno; Oh, Kyungmin; Kim, Jaehyun; Cho, Seungryong

    2018-03-01

    Dual-energy X-ray inspection systems are widely used today because they provide both the X-ray attenuation contrast of the imaged object and its material information. Material decomposition capability allows higher detection sensitivity for potential targets, including, for example, purposely loaded impurities in agricultural product inspections and threats in security scans. Dual-energy X-ray transmission data can be transformed into two basis-material thickness data, and the accuracy of this transformation relies heavily on the calibration of the material decomposition process. The calibration process in general can be laborious and time-consuming. Moreover, a conventional calibration method is often challenged by the nonuniform spectral characteristics of the X-ray beam across the entire field-of-view (FOV). In this work, we developed an efficient material decomposition calibration process for a linear accelerator (LINAC) based high-energy X-ray cargo inspection system. We also proposed a multi-spot calibration method to improve the decomposition performance throughout the entire FOV. Experimental validation of the proposed method has been demonstrated using a cargo inspection system that supports 6 MV and 9 MV dual-energy imaging.
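
    The basis-material transformation can be sketched as inverting a 2x2 system of log-transmissions; the attenuation coefficients below are hypothetical placeholders for the calibrated values the paper describes:

```python
import math

# Hypothetical effective attenuation coefficients (1/cm) for the two
# basis materials at the low (6 MV) and high (9 MV) beam energies.
MU = {"lo": (0.22, 0.45),   # (material 1, material 2) at low energy
      "hi": (0.18, 0.28)}   # at high energy

def decompose(t_lo, t_hi):
    """Invert -ln(transmission) = mu1*x1 + mu2*x2 at two energies
    for the two basis-material thicknesses (cm), via Cramer's rule."""
    a1, b1 = MU["lo"]
    a2, b2 = MU["hi"]
    y1, y2 = -math.log(t_lo), -math.log(t_hi)
    det = a1 * b2 - b1 * a2
    x1 = (y1 * b2 - b1 * y2) / det
    x2 = (a1 * y2 - y1 * a2) / det
    return x1, x2

# Forward-simulate a known object, then recover its thicknesses.
x1_true, x2_true = 2.0, 5.0
t_lo = math.exp(-(0.22 * x1_true + 0.45 * x2_true))
t_hi = math.exp(-(0.18 * x1_true + 0.28 * x2_true))
x1, x2 = decompose(t_lo, t_hi)
```

    In practice the coefficients vary across the FOV, which is exactly why the paper's multi-spot calibration replaces a single global pair of (mu1, mu2) values.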

  14. Detecting the Extent of Cellular Decomposition after Sub-Eutectoid Annealing in Rolled UMo Foils

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kautz, Elizabeth J.; Jana, Saumyadeep; Devaraj, Arun

    2017-07-31

    This report presents an automated image processing approach to quantifying microstructure image data, specifically the extent of eutectoid (cellular) decomposition in rolled U-10Mo foils. An image processing approach is used here to be able to quantitatively describe microstructure image data in order to relate microstructure to processing parameters (time, temperature, deformation).
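
    The kind of quantification described can be sketched as thresholding plus an area-fraction count (toy array and hypothetical threshold; the report's actual pipeline involves more elaborate segmentation):

```python
def decomposed_fraction(image, threshold):
    """Fraction of pixels at or above `threshold` -- a toy stand-in for
    segmenting the cellular-decomposition product in a micrograph."""
    total = sum(len(row) for row in image)
    hits = sum(1 for row in image for px in row if px >= threshold)
    return hits / total

# Toy 4x4 "micrograph": bright pixels mark the decomposition product.
img = [[200, 30, 40, 210],
       [190, 25, 35, 205],
       [ 20, 22, 33,  44],
       [201, 202, 28,  30]]
frac = decomposed_fraction(img, 128)
```

    The resulting fraction can then be tabulated against annealing time and temperature, which is the microstructure-to-processing link the report targets.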

  15. Changes in bacterial and eukaryotic communities during sewage decomposition in Mississippi River water

    EPA Science Inventory

    Microbial decay processes are one of the mechanisms whereby sewage contamination is reduced in the environment. This decomposition process involves a highly complex array of bacterial and eukaryotic communities from both sewage and ambient waters. However, relatively little is kn...

  16. Optimal cost design of water distribution networks using a decomposition approach

    NASA Astrophysics Data System (ADS)

    Lee, Ho Min; Yoo, Do Guen; Sadollah, Ali; Kim, Joong Hoon

    2016-12-01

    Water distribution network decomposition, which is an engineering approach, is adopted to increase the efficiency of obtaining the optimal cost design of a water distribution network using an optimization algorithm. This study applied the source tracing tool in EPANET, which is a hydraulic and water quality analysis model, to the decomposition of a network to improve the efficiency of the optimal design process. The proposed approach was tested by carrying out the optimal cost design of two water distribution networks, and the results were compared with other optimal cost designs derived from previously proposed optimization algorithms. The proposed decomposition approach using the source tracing technique enables the efficient decomposition of an actual large-scale network, and the results can be combined with the optimal cost design process using an optimization algorithm. This proves that the final design in this study is better than those obtained with other previously proposed optimization algorithms.

  17. Conception of discrete systems decomposition algorithm using p-invariants and hypergraphs

    NASA Astrophysics Data System (ADS)

    Stefanowicz, Ł.

    2016-09-01

    In this article, the author presents a decomposition algorithm for discrete systems described by Petri nets using p-invariants. The decomposition process is significant from the point of view of discrete system design, because it allows separation into smaller sequential parts. The proposed algorithm uses a modified Martinez-Silva method as well as the author's selection algorithm. The developed method is a good complement to classical decomposition algorithms using graphs and hypergraphs.
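
    A p-invariant of a Petri net with incidence matrix C is a nonnegative, nonzero integer vector x satisfying x^T C = 0; the selection step builds on such invariants. A minimal membership check on a hypothetical two-place net:

```python
def is_p_invariant(x, C):
    """True if x is nonnegative, nonzero, and x^T * C == 0 for the
    incidence matrix C (rows = places, columns = transitions)."""
    places, transitions = len(C), len(C[0])
    balanced = all(
        sum(x[p] * C[p][t] for p in range(places)) == 0
        for t in range(transitions)
    )
    return balanced and all(v >= 0 for v in x) and any(v > 0 for v in x)

# Toy net: two places exchanging a single token over two transitions.
C = [[-1, 1],
     [ 1, -1]]
assert is_p_invariant([1, 1], C)      # total token count is conserved
assert not is_p_invariant([1, 0], C)  # not balanced over transition t0
```

    Each p-invariant identifies a conserved set of places, which is what lets the algorithm carve the net into sequential components.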

  18. Kinetic analysis of overlapping multistep thermal decomposition comprising exothermic and endothermic processes: thermolysis of ammonium dinitramide.

    PubMed

    Muravyev, Nikita V; Koga, Nobuyoshi; Meerov, Dmitry B; Pivkina, Alla N

    2017-01-25

    This study focused on kinetic modeling of a specific type of multistep heterogeneous reaction comprising exothermic and endothermic reaction steps, as exemplified by the practical kinetic analysis of the experimental kinetic curves for the thermal decomposition of molten ammonium dinitramide (ADN). It is known that the thermal decomposition of ADN occurs as a consecutive two step mass-loss process comprising the decomposition of ADN and subsequent evaporation/decomposition of in situ generated ammonium nitrate. These reaction steps provide exothermic and endothermic contributions, respectively, to the overall thermal effect. The overall reaction process was deconvoluted into two reaction steps using simultaneously recorded thermogravimetry and differential scanning calorimetry (TG-DSC) curves by considering the different physical meanings of the kinetic data derived from TG and DSC by P value analysis. The kinetic data thus separated into exothermic and endothermic reaction steps were kinetically characterized using kinetic computation methods including isoconversional method, combined kinetic analysis, and master plot method. The overall kinetic behavior was reproduced as the sum of the kinetic equations for each reaction step considering the contributions to the rate data derived from TG and DSC. During reproduction of the kinetic behavior, the kinetic parameters and contributions of each reaction step were optimized using kinetic deconvolution analysis. As a result, the thermal decomposition of ADN was successfully modeled as partially overlapping exothermic and endothermic reaction steps. The logic of the kinetic modeling was critically examined, and the practical usefulness of phenomenological modeling for the thermal decomposition of ADN was illustrated to demonstrate the validity of the methodology and its applicability to similar complex reaction processes.

  19. Placement-aware decomposition of a digital standard cells library for double patterning lithography

    NASA Astrophysics Data System (ADS)

    Wassal, Amr G.; Sharaf, Heba; Hammouda, Sherif

    2012-11-01

    To continue scaling the circuit features down, Double Patterning (DP) technology is needed in 22nm technologies and lower. DP requires decomposing the layout features into two masks for pitch relaxation, such that the spacing between any two features on each mask is greater than the minimum allowed mask spacing. The relaxed pitches of each mask are then processed on two separate exposure steps. In many cases, post-layout decomposition fails to decompose the layout into two masks due to the presence of conflicts. Post-layout decomposition of a standard cells block can result in native conflicts inside the cells (internal conflict), or native conflicts on the boundary between two cells (boundary conflict). Resolving native conflicts requires a redesign and/or multiple iterations for the placement and routing phases to get a clean decomposition. Therefore, DP compliance must be considered in earlier phases, before getting the final placed cell block. The main focus of this paper is generating a library of decomposed standard cells to be used in a DP-aware placer. This library should contain all possible decompositions for each standard cell, i.e., these decompositions consider all possible combinations of boundary conditions. However, the large number of combinations of boundary conditions for each standard cell will significantly increase the processing time and effort required to obtain all possible decompositions. Therefore, an efficient methodology is required to reduce this large number of combinations. In this paper, three different reduction methodologies are proposed to reduce the number of different combinations processed to get the decomposed library. Experimental results show a significant reduction in the number of combinations and decompositions needed for the library processing. To generate and verify the proposed flow and methodologies, a prototype for a placement-aware DP-ready cell-library is developed with an optimized number of cell views.

  20. A Three-Dimensional Eulerian Code for Simulation of High-Speed Multimaterial Interactions

    DTIC Science & Technology

    2011-08-01

    PDE-based extension. The extension process is done on only the host cells on a particular processor. After extension the parallel communication is...condensation shocks, explosive debris transport, detonation in heterogeneous media and so on. In these flows complex interactions occur between the...A.22] and Ω_ij is the spin tensor. The Jaumann derivative is used to ensure objectivity of the stress tensor with respect to rotation

  1. A comparative study of the decomposition of pig carcasses in a methyl methacrylate box and open air conditions.

    PubMed

    Li, Liangliang; Wang, Jiangfeng; Wang, Yu

    2016-08-01

    Analysis of the process of decomposition is essential in establishing the postmortem interval. However, despite the fact that insects are important players in body decomposition, their exact function within the decay process is still unclear. There is also limited knowledge as to how the decomposition process occurs in the absence of insects. In the present study, we compared the decomposition of a pig carcass in open air with that of one placed in a methyl methacrylate box to prevent insect contact. The pig carcass in the methyl methacrylate box was in the fresh stage for 1 d, the bloated stage from 2 d to 11 d, and underwent deflated decay from 12 d onward. In contrast, the pig carcass in open air went through the fresh, bloated, active decay and post-decay stages at 22.3 h (0.93 d), 62.47 h (2.60 d), 123.63 h (5.15 d) and 246.5 h (10.27 d) following the start of the experiment, respectively, prior to entering the skeletonization stage. A large amount of soft tissue remained on the pig carcass in the methyl methacrylate box at 26 d, while only scattered bones remained of the carcass in open air. The results indicate that insects greatly accelerate the decomposition process. Copyright © 2016 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.

  2. Biomass is the main driver of changes in ecosystem process rates during tropical forest succession.

    PubMed

    Lohbeck, Madelon; Poorter, Lourens; Martínez-Ramos, Miguel; Bongers, Frans

    2015-05-01

    Over half of the world's forests are disturbed, and the rate at which ecosystem processes recover after disturbance is important for the services these forests can provide. We analyze the drivers underlying changes in rates of key ecosystem processes (biomass productivity, litter productivity, actual litter decomposition, and potential litter decomposition) during secondary succession after shifting cultivation in wet tropical forest of Mexico. We test the importance of three alternative drivers of ecosystem processes: vegetation biomass (vegetation quantity hypothesis), community-weighted trait mean (mass ratio hypothesis), and functional diversity (niche complementarity hypothesis), using structural equation modeling. This allows us to infer the relative importance of different mechanisms underlying ecosystem process recovery. Ecosystem process rates changed during succession, and the strongest driver of each process was aboveground biomass. Productivity of aboveground stem biomass and leaf litter, as well as actual litter decomposition, increased with initial standing vegetation biomass, whereas potential litter decomposition decreased with standing biomass. Additionally, biomass productivity was positively affected by the community-weighted mean of specific leaf area, and potential decomposition was positively affected by functional divergence and negatively by the community-weighted mean of leaf dry matter content. Our empirical results show that functional diversity and community-weighted means are of secondary importance for explaining changes in ecosystem process rates during tropical forest succession. Instead, simply the amount of vegetation in a site is the major driver of changes, perhaps because the steep biomass buildup during succession overrides more subtle effects of community functional properties on ecosystem processes. 
We recommend future studies in the field of biodiversity and ecosystem functioning to separate the effects of vegetation quality (community-weighted mean trait values and functional diversity) from those of vegetation quantity (biomass) on ecosystem processes and services.

  3. Kinetics of non-isothermal decomposition of cinnamic acid

    NASA Astrophysics Data System (ADS)

    Zhao, Ming-rui; Qi, Zhen-li; Chen, Fei-xiong; Yue, Xia-xin

    2014-07-01

    The thermal stability and decomposition kinetics of cinnamic acid were investigated by thermogravimetry (TG) and differential scanning calorimetry (DSC) at four heating rates. The activation energies of this process were calculated from analysis of the TG curves by the methods of Flynn-Wall-Ozawa, Doyle, the Distributed Activation Energy Model, Šatava-Šesták, and Kissinger. There is only one stage of thermal decomposition in TG and two endothermic peaks in DSC. For this decomposition process of cinnamic acid, E and log A[s-1] were determined to be 81.74 kJ mol-1 and 8.67, respectively. The mechanism was the Mampel power law (reaction order n = 1), with integral form G(α) = α (α = 0.1-0.9). Moreover, the thermodynamic properties ΔH≠, ΔS≠, and ΔG≠ were 77.96 kJ mol-1, -90.71 J mol-1 K-1, and 119.41 kJ mol-1, respectively.
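
    The Kissinger step named above can be sketched as a straight-line fit of ln(β/Tp²) against 1/Tp, whose slope is -Ea/R. The peak temperatures below are synthetic, generated to be exactly consistent with the reported 81.74 kJ mol-1; they are not the measured data:

```python
import math

R = 8.314  # gas constant, J/(mol K)

def kissinger_ea(betas, peak_temps):
    """Least-squares slope of ln(beta/Tp^2) vs 1/Tp; slope = -Ea/R."""
    xs = [1.0 / tp for tp in peak_temps]
    ys = [math.log(b / tp ** 2) for b, tp in zip(betas, peak_temps)]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope * R  # activation energy, J/mol

# Synthetic DSC peak temperatures consistent with Ea = 81.74 kJ/mol.
ea_true = 81740.0
peak_temps = [430.0, 440.0, 450.0, 460.0]   # K
betas = [1e5 * tp ** 2 * math.exp(-ea_true / (R * tp))
         for tp in peak_temps]               # heating rates (arbitrary units)
ea = kissinger_ea(betas, peak_temps)
```

    Because the synthetic data lie exactly on the Kissinger line, the fit recovers the input Ea; real TG/DSC data scatter about the line, and the residuals indicate how well a single-step model holds.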

  4. Validating the performance of one-time decomposition for fMRI analysis using ICA with automatic target generation process.

    PubMed

    Yao, Shengnan; Zeng, Weiming; Wang, Nizhuan; Chen, Lei

    2013-07-01

    Independent component analysis (ICA) has been proven effective for functional magnetic resonance imaging (fMRI) data analysis. However, ICA decomposition requires iterative optimization of the unmixing matrix, whose initial values are generated randomly. This randomness of initialization leads to different ICA decomposition results, so a single one-time decomposition for fMRI data analysis is not usually reliable. Under this circumstance, several methods of repeated decomposition with ICA (RDICA) were proposed to reveal the stability of ICA decomposition. Although RDICA has achieved satisfying results in validating the performance of ICA decomposition, it costs considerable computing time. To mitigate this problem, in this paper we propose a method, named ATGP-ICA, for fMRI data analysis. This method generates fixed initial values with an automatic target generation process (ATGP) instead of producing them randomly. We performed experimental tests on both hybrid data and fMRI data to show the effectiveness of the new method, and compared the performance of traditional one-time decomposition with ICA (ODICA), RDICA, and ATGP-ICA. The proposed method not only eliminates the randomness of ICA decomposition but also saves much computing time compared to RDICA. Furthermore, ROC (receiver operating characteristic) power analysis denoted the better signal reconstruction performance of ATGP-ICA over RDICA. Copyright © 2013 Elsevier Inc. All rights reserved.
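
    A hedged sketch of the ATGP idea used for initialization: repeatedly select the sample with the largest residual after orthogonally projecting out the targets already found. The 2-D toy data below are hypothetical; real ATGP runs on high-dimensional fMRI vectors:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def project_out(v, basis):
    """Remove from v its components along an orthonormal basis."""
    r = list(v)
    for b in basis:
        c = dot(r, b)
        r = [ri - c * bi for ri, bi in zip(r, b)]
    return r

def atgp(samples, n_targets):
    """Indices of n_targets samples with maximal residual norm after
    projecting out previously selected targets (ATGP sketch)."""
    basis, picked = [], []
    for _ in range(n_targets):
        residuals = [project_out(s, basis) for s in samples]
        norms = [dot(r, r) for r in residuals]
        i = max(range(len(samples)), key=norms.__getitem__)
        picked.append(i)
        r = residuals[i]
        nrm = dot(r, r) ** 0.5
        basis.append([x / nrm for x in r])
    return picked

# Toy 2-D "samples": two extreme directions plus mixtures of them.
samples = [[10, 0], [0, 8], [5, 4], [1, 1]]
targets = atgp(samples, 2)
```

    Because the selection is deterministic, the chosen targets give ICA a fixed, reproducible starting point, which is the property the paper exploits.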

  5. Image Corruption Detection in Diffusion Tensor Imaging for Post-Processing and Real-Time Monitoring

    PubMed Central

    Li, Yue; Shea, Steven M.; Lorenz, Christine H.; Jiang, Hangyi; Chou, Ming-Chung; Mori, Susumu

    2013-01-01

    Due to the high sensitivity of diffusion tensor imaging (DTI) to physiological motion, clinical DTI scans often suffer a significant amount of artifacts. Tensor-fitting-based, post-processing outlier rejection is often used to reduce the influence of motion artifacts. Although it is an effective approach, when there are multiple corrupted data, this method may no longer correctly identify and reject the corrupted data. In this paper, we introduce a new criterion called “corrected Inter-Slice Intensity Discontinuity” (cISID) to detect motion-induced artifacts. We compared the performance of algorithms using cISID and other existing methods with regard to artifact detection. The experimental results show that the integration of cISID into fitting-based methods significantly improves the retrospective detection performance at post-processing analysis. The performance of the cISID criterion, if used alone, was inferior to the fitting-based methods, but cISID could effectively identify severely corrupted images with a rapid calculation time. In the second part of this paper, an outlier rejection scheme was implemented on a scanner for real-time monitoring of image quality and reacquisition of the corrupted data. The real-time monitoring, based on cISID and followed by post-processing, fitting-based outlier rejection, could provide a robust environment for routine DTI studies. PMID:24204551
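
    A much-simplified sketch of slice-wise corruption screening in this spirit (not the published cISID formula, which is more refined): flag slices whose mean intensity deviates strongly from the volume's median.

```python
def corrupted_slices(slice_means, tol):
    """Indices whose mean intensity deviates from the median of all
    slices by more than `tol` (simplified screen; a toy stand-in for
    the inter-slice discontinuity criterion)."""
    s = sorted(slice_means)
    n = len(s)
    median = s[n // 2] if n % 2 else 0.5 * (s[n // 2 - 1] + s[n // 2])
    return [i for i, m in enumerate(slice_means) if abs(m - median) > tol]

# Toy per-slice means for one diffusion-weighted volume; slice 3
# simulates a motion-induced signal dropout.
means = [100.0, 101.0, 99.5, 60.0, 100.5, 100.0]
bad = corrupted_slices(means, tol=15.0)
```

    A cheap per-slice statistic like this is what makes real-time flagging and reacquisition feasible, with the slower fitting-based outlier rejection reserved for post-processing.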

  6. Paving the way to a full chip gate level double patterning application

    NASA Astrophysics Data System (ADS)

    Haffner, Henning; Meiring, Jason; Baum, Zachary; Halle, Scott

    2007-10-01

    Double patterning lithography processes can offer significant yield enhancement for challenging circuit designs. Many decomposition (i.e. the process of dividing the layout design into first and second exposures) techniques are possible, but the focus of this paper is on the use of a secondary "cut" mask to trim away extraneous features left from the first exposure. This approach has the advantage that each exposure only needs to support a subset of critical features (e.g. dense lines with the first exposure, isolated spaces with the second one). The extraneous features ("printing assist features" or PrAFs) are designed to support the process window of critical features much like the role of the subresolution assist features (SRAFs) in conventional processes. However, the printing nature of PrAFs leads to many more design options, and hence a greater process and decomposition parameter exploration space, than are available for SRAFs. A decomposition scheme using PRAFs was developed for a gate level process. A critical driver of the work was to deliver improved across-chip linewidth variation (ACLV) performance versus an optimized single exposure process while providing support for a larger range of critical features. A variety of PRAF techniques were investigated by simulation, with a PrAF scheme similar to standard SRAF rules being chosen as the optimal solution [1]. This paper discusses aspects of the code development for an automated PrAF generation and placement scheme and the subsequent decomposition of a layout into two mask levels. While PrAF placement and decomposition is straightforward for layouts with pitch and orientation restrictions, it becomes rather complex for unrestricted layout styles. Because this higher complexity yields more irregularly shaped PrAFs, mask making becomes another critical driver of the optimum placement and clean-up strategies. Examples are given of how those challenges are met or can be successfully circumvented. 
During subsequent decomposition of the PrAF-enhanced layout into two independent mask levels, various geometric decomposition parameters have to be considered. As an example, the removal of PrAFs has to be guaranteed by a minimum required overlap of the cut mask opening past any PrAF edge. It is discussed that process assumptions such as CD tolerances and overlay as well as inter-level relationship ground rules need to be considered to successfully optimize the final decomposition scheme. Furthermore, simulation and experimental results regarding not only ACLV but also across-device linewidth variation (ADLV) are analyzed.

  7. Preliminary application of Structure from Motion and GIS to document decomposition and taphonomic processes.

    PubMed

    Carlton, Connor D; Mitchell, Samantha; Lewis, Patrick

    2018-01-01

    Over the past decade, Structure from Motion (SfM) has increasingly been used as a means of digital preservation and for documenting archaeological excavations, architecture, and cultural material. However, few studies have tapped the potential of using SfM to document and analyze taphonomic processes affecting burials for forensic science purposes. This project utilizes SfM models to elucidate specific post-depositional events that affected a series of three human cadavers deposited at the South East Texas Applied Forensic Science Facility (STAFS). The aim of this research was to test the ability of untrained researchers to employ spatial software and photogrammetry for data collection purposes. Over a period of three months, a single-lens reflex (SLR) camera was used to capture a series of overlapping images at periodic stages in the decomposition process of each cadaver. These images were processed through photogrammetric software that creates a 3D model that can be measured, manipulated, and viewed. This project used photogrammetric and geospatial software to map changes in decomposition and movement of the body from original deposition points. Project results indicate that SfM and GIS are useful tools for documenting decomposition and taphonomic processes, and that photogrammetry is an efficient, relatively simple, and affordable tool for the documentation of decomposition. Copyright © 2017 Elsevier B.V. All rights reserved.

  8. Ultrasound elastic tensor imaging: comparison with MR diffusion tensor imaging in the myocardium

    NASA Astrophysics Data System (ADS)

    Lee, Wei-Ning; Larrat, Benoît; Pernot, Mathieu; Tanter, Mickaël

    2012-08-01

    We have previously proven the feasibility of ultrasound-based shear wave imaging (SWI) to non-invasively characterize myocardial fiber orientation in both in vitro porcine and in vivo ovine hearts. The SWI-estimated results were in good correlation with histology. In this study, we proposed a new and robust fiber angle estimation method through a tensor-based approach for SWI, coined together as elastic tensor imaging (ETI), and compared it with magnetic resonance diffusion tensor imaging (DTI), a current gold standard and extensively reported non-invasive imaging technique for mapping fiber architecture. Fresh porcine (n = 5) and ovine (n = 5) myocardial samples (20 × 20 × 30 mm3) were studied. ETI was firstly performed to generate shear waves and to acquire the wave events at ultrafast frame rate (8000 fps). A 2.8 MHz phased array probe (pitch = 0.28 mm), connected to a prototype ultrasound scanner, was mounted on a customized MRI-compatible rotation device, which allowed both the rotation of the probe from -90° to 90° at 5° increments and co-registration between two imaging modalities. Transmural shear wave speed at all propagation directions realized was firstly estimated. The fiber angles were determined from the shear wave speed map using the least-squares method and eigen decomposition. The test myocardial sample together with the rotation device was then placed inside a 7T MRI scanner. Diffusion was encoded in six directions. A total of 270 diffusion-weighted images (b = 1000 s mm-2, FOV = 30 mm, matrix size = 60 × 64, TR = 6 s, TE = 19 ms, 24 averages) and 45 B0 images were acquired in 14 h 30 min. The fiber structure was analyzed by the fiber-tracking module in software, MedINRIA. The fiber orientation in the overlapped myocardial region which both ETI and DTI accessed was therefore compared, thanks to the co-registered imaging system. 
Results from all ten samples showed good correlation (r2 = 0.81, p < 0.0001) and good agreement (3.05° bias) between ETI and DTI fiber angle estimates. The average ETI-estimated fractional anisotropy (FA) values decreased from subendocardium to subepicardium (p < 0.05, unpaired, one-tailed t-test, N = 10) by 33%, whereas the corresponding DTI-estimated FA values presented a change of -10% (p > 0.05, unpaired, one-tailed t-test, N = 10). In conclusion, we have demonstrated that the fiber orientation estimated by ETI, which assesses the shear wave speed (and thus the stiffness), was comparable to that measured by DTI, which evaluates the preferred direction of water diffusion, and have validated this concept within the myocardium. Moreover, ETI was shown capable of mapping the transmural fiber angles with as few as seven shear wave propagation directions.
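
    The tensor-based angle estimate can be illustrated with a hedged sketch: accumulate an orientation tensor weighted by squared shear-wave speed over the probe directions and take its principal eigenvector. The speed profile below is synthetic, and the weighting is an illustrative choice, not the paper's exact least-squares formulation:

```python
import math

def fiber_angle(angles_deg, speeds):
    """Principal-axis angle (degrees) of the orientation tensor
    M = sum_i v_i^2 * d_i d_i^T, i.e. the fastest-propagation
    direction, via the closed form for a symmetric 2x2 matrix."""
    mxx = mxy = myy = 0.0
    for a, v in zip(angles_deg, speeds):
        t = math.radians(a)
        dx, dy = math.cos(t), math.sin(t)
        w = v * v
        mxx += w * dx * dx
        mxy += w * dx * dy
        myy += w * dy * dy
    return math.degrees(0.5 * math.atan2(2.0 * mxy, mxx - myy))

# Synthetic transmural speed profile peaking along +30 degrees,
# sampled at 5-degree increments as in the rotation protocol.
angles = list(range(-90, 90, 5))
speeds = [3.0 + 1.5 * math.cos(math.radians(a - 30)) ** 2 for a in angles]
est = fiber_angle(angles, speeds)
```

    The eigen decomposition view explains why relatively few propagation directions suffice: a smooth, single-peaked speed profile is already well captured by a rank-deficient second-order tensor.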

  9. Parallel processing for pitch splitting decomposition

    NASA Astrophysics Data System (ADS)

    Barnes, Levi; Li, Yong; Wadkins, David; Biederman, Steve; Miloslavsky, Alex; Cork, Chris

    2009-10-01

    Decomposition of an input pattern in preparation for a double patterning process is an inherently global problem in which the influence of a local decomposition decision can be felt across an entire pattern. In spite of this, a large portion of the work can be massively distributed. Here, we discuss the advantages of geometric distribution for polygon operations with limited range of influence. Further, we have found that even the naturally global "coloring" step can, in large part, be handled in a geometrically local manner. In some practical cases, up to 70% of the work can be distributed geometrically. We also describe the methods for partitioning the problem into local pieces and present scaling data up to 100 CPUs. These techniques reduce DPT decomposition runtime by orders of magnitude.
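
    The naturally global "coloring" step can be sketched as graph 2-coloring: features are nodes, sub-minimum spacings are edges, and a legal two-mask assignment exists iff the conflict graph is bipartite. A minimal sketch, not the production decomposer:

```python
from collections import deque

def two_mask_assignment(n, conflicts):
    """BFS 2-coloring of the conflict graph on n features; returns a
    mask list (0/1 per feature) or None if an odd cycle (a native
    double-patterning conflict) makes decomposition fail."""
    adj = [[] for _ in range(n)]
    for a, b in conflicts:
        adj[a].append(b)
        adj[b].append(a)
    mask = [-1] * n
    for s in range(n):
        if mask[s] != -1:
            continue
        mask[s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if mask[v] == -1:
                    mask[v] = 1 - mask[u]
                    q.append(v)
                elif mask[v] == mask[u]:
                    return None  # odd cycle: irresolvable conflict
    return mask

assert two_mask_assignment(4, [(0, 1), (1, 2), (2, 3)]) == [0, 1, 0, 1]
assert two_mask_assignment(3, [(0, 1), (1, 2), (2, 0)]) is None
```

    Each connected component of the conflict graph can be colored independently, which is the property that lets most of the coloring work be distributed geometrically as the paper describes.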

  10. Decomposition of sulfamethoxazole and trimethoprim by continuous UVA/LED/TiO2 photocatalysis: Decomposition pathways, residual antibacterial activity and toxicity.

    PubMed

    Cai, Qinqing; Hu, Jiangyong

    2017-02-05

    In this study, continuous LED/UVA/TiO2 photocatalytic decomposition of sulfamethoxazole (SMX) and trimethoprim (TMP) was investigated. More than 90% of SMX and TMP was removed within 20 min by the continuous photoreactor (with an initial concentration of 400 ppb for each). The removal rates of SMX and TMP decreased with higher initial antibiotic loadings. SMX was much more easily decomposed under acidic conditions, while pH had little effect on TMP decomposition. An H2O2 dosage of 0.003% was found to be optimal for enhancing SMX photocatalytic decomposition. Decomposition pathways of SMX and TMP were proposed based on the intermediates identified using LC-MS-MS and GC-MS. Aniline was identified as a new intermediate generated during SMX photocatalytic decomposition. An antibacterial activity study with a reference Escherichia coli strain was also conducted during the photocatalytic process. Results indicated that with every portion of TMP removed, the residual antibacterial activity decreased by one portion. However, the synergistic effect between SMX and TMP tended to slow down the antibacterial activity removal of the SMX and TMP mixture. Chronic toxicity studies conducted with Vibrio fischeri exhibited 13-20% bioluminescence inhibition during the decomposition of 1 ppm SMX and 1 ppm TMP; no acute toxicity to V. fischeri was observed during the photocatalytic process. Copyright © 2016 Elsevier B.V. All rights reserved.
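
    Photocatalytic removal of this kind is commonly summarized by a pseudo-first-order fit, ln(C0/C) = kt. A hedged sketch with synthetic points chosen to mimic >90% removal in 20 min, not the measured SMX/TMP data:

```python
import math

def first_order_k(times, concs):
    """Least-squares slope of ln(C0/C) vs t gives the pseudo-first-order
    rate constant k (per unit time)."""
    c0 = concs[0]
    xs, ys = times, [math.log(c0 / c) for c in concs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# Synthetic decay with k = 0.12 /min, starting from 400 ppb.
times = [0, 5, 10, 15, 20]                      # min
concs = [400.0 * math.exp(-0.12 * t) for t in times]
k = first_order_k(times, concs)
removal_20min = 1.0 - concs[-1] / concs[0]       # fraction removed
```

    Fitting k for each initial loading, pH, or H2O2 dose is what makes statements like "removal rates decreased with higher loadings" quantitative.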

  11. Decomposition characteristics of three different kinds of aquatic macrophytes and their potential application as carbon resource in constructed wetland.

    PubMed

    Wu, Suqing; He, Shengbing; Zhou, Weili; Gu, Jianya; Huang, Jungchen; Gao, Lei; Zhang, Xu

    2017-12-01

    Decomposition of aquatic macrophytes usually exerts a significant influence on the aquatic environment. Studying aquatic macrophyte decomposition may help in reusing macrophyte litter, as well as in controlling the water pollution caused by the decomposition process. This study verified that the decomposition of three different kinds of aquatic macrophytes (water hyacinth, hydrilla and cattail) could significantly influence the water quality of the receiving water, including the extent of change in pH, dissolved oxygen (DO), and the contents of carbon, nitrogen and phosphorus. Both the influence of decomposition on water quality and the concentrations of the released chemical materials followed the order water hyacinth > hydrilla > cattail. Greater influence was obtained with a higher dosage of added plant litter, and the influence also varied with sediment addition. Moreover, the nitrogen released from the decomposition of water hyacinth and hydrilla was mainly NH3-N and organic nitrogen, while that from cattail litter comprised organic nitrogen and NO3⁻-N. After decomposition, the average carbon-to-nitrogen ratio (C/N) in the receiving water was about 2.6 (water hyacinth), 5.3 (hydrilla) and 20.3 (cattail). Therefore, cattail litter might be a potential plant carbon source for denitrification in the ecological system of a constructed wetland. Copyright © 2017 Elsevier Ltd. All rights reserved.

  12. A fast and robust method for moment tensor and depth determination of shallow seismic events in CTBT related studies.

    NASA Astrophysics Data System (ADS)

    Baker, Ben; Stachnik, Joshua; Rozhkov, Mikhail

    2017-04-01

    The International Data Centre is required to conduct expert technical analysis and special studies to improve event parameters and assist States Parties in identifying the source of specific events, in accordance with the Protocol to the Comprehensive Nuclear-Test-Ban Treaty. Determination of the source mechanism and depth of a seismic event is closely related to these tasks. It is typically done through a strategic linearized inversion of the waveforms for a complete or partial set of source parameters, or through a similarly defined grid search over precomputed Green's functions created for particular source models. In this presentation we demonstrate preliminary results obtained with the latter approach from an improved software design. In this development we aimed to be compliant with the different modes of the CTBT monitoring regime: to cover a wide range of source-receiver distances (regional to teleseismic), resolve shallow source depths, provide full moment tensor solutions based on body- and surface-wave recordings, be fast enough for both on-demand studies and automatic processing, properly incorporate observed waveforms and any a priori uncertainties, and accurately estimate posterior uncertainties. Posterior distributions of moment tensor parameters show narrow peaks where a significant number of reliable surface-wave observations are available. For earthquake examples, the posterior distributions of fault orientation (strike, dip, and rake) also provide results consistent with published catalogues. Inclusion of observations on horizontal components will provide further constraints. In addition, the calculation of teleseismic P-wave Green's functions is improved through prior analysis to determine an appropriate attenuation parameter for each source-receiver path. An implemented HDF5-based pre-packaging of Green's functions allows much greater flexibility in utilizing different software packages and methods for computation. Future additions will allow rapid use of Instaseis/AxiSEM full-waveform synthetics alongside the precomputed Green's function archive. Along with traditional post-processing analysis of waveform misfits through several objective functions and variance reduction, we follow a probabilistic approach to assess the robustness of the moment tensor solution. In the course of this project, full moment tensor and depth estimates were determined for DPRK events and shallow earthquakes using a new implementation of teleseismic P-wave waveform fitting. A full grid search over the entire moment tensor space is used to appropriately sample all possible solutions, employing a recent method by Tape & Tape (2012) that discretizes the complete moment tensor space from a geometric perspective. Probabilistic uncertainty estimates on the moment tensor parameters lend robustness to the solution.
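    The variance-reduction misfit mentioned above, VR = 1 − ‖d − s‖²/‖d‖², can drive a simple grid search over candidate sources. The sketch below is purely illustrative: it uses a toy one-parameter forward model in place of real Green's functions and moment tensor parameterizations:

    ```python
    import numpy as np

    def variance_reduction(observed, synthetic):
        """VR = 1 - ||d - s||^2 / ||d||^2, a standard waveform-fit measure."""
        d = np.asarray(observed, dtype=float)
        s = np.asarray(synthetic, dtype=float)
        return 1.0 - np.sum((d - s) ** 2) / np.sum(d ** 2)

    def grid_search(observed, candidates, forward):
        """Pick the candidate whose synthetics maximize variance reduction.

        `forward` maps a candidate parameter set to a synthetic trace."""
        return max(candidates, key=lambda m: variance_reduction(observed, forward(m)))

    # Toy forward model: a scaled sine "waveform" parameterized by amplitude.
    t = np.linspace(0.0, 1.0, 100)
    obs = 2.0 * np.sin(2 * np.pi * 5 * t)
    best = grid_search(obs, [0.5, 1.0, 2.0, 4.0],
                       lambda a: a * np.sin(2 * np.pi * 5 * t))
    print(best)  # 2.0
    ```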

  13. Relationship between the Decomposition Process of Coarse Woody Debris and Fungal Community Structure as Detected by High-Throughput Sequencing in a Deciduous Broad-Leaved Forest in Japan

    PubMed Central

    Yamashita, Satoshi; Masuya, Hayato; Abe, Shin; Masaki, Takashi; Okabe, Kimiko

    2015-01-01

    We examined the relationship between the community structure of wood-decaying fungi, detected by high-throughput sequencing, and the decomposition rate using 13 years of data from a forest dynamics plot. For molecular analysis and wood density measurements, drill dust samples were collected from logs and stumps of Fagus and Quercus in the plot. Regression using a negative exponential model between wood density and time since death revealed that the decomposition rate of Fagus was greater than that of Quercus. The residual between the expected value obtained from the regression curve and the observed wood density was used as a decomposition rate index. Principal component analysis showed that the fungal community compositions of both Fagus and Quercus changed with time since death. Principal component analysis axis scores were used as an index of fungal community composition. A structural equation model for each wood genus was used to assess the effect of fungal community structure traits on the decomposition rate and how the fungal community structure was determined by the traits of coarse woody debris. Results of the structural equation model suggested that the decomposition rate of Fagus was affected by two fungal community composition components: one that was affected by time since death and another that was not affected by the traits of coarse woody debris. In contrast, the decomposition rate of Quercus was not affected by coarse woody debris traits or fungal community structure. These findings suggest that, in the case of Fagus coarse woody debris, the fungal community structure is related to the decomposition process of its host substrate. Because fungal community structure is affected partly by the decay stage and wood density of its substrate, these factors influence each other. Further research on interactive effects is needed to improve our understanding of the relationship between fungal community structure and the woody debris decomposition process. 
PMID:26110605
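    The negative exponential model used above, ρ(t) = ρ₀·exp(−kt), can be fitted by log-linear least squares; the residual between the fitted curve and an observed density then serves as the decomposition-rate index. A sketch with hypothetical density data (not the study's measurements):

    ```python
    import numpy as np

    # Hypothetical (not the paper's) series: years since death vs. wood
    # density in g/cm^3, roughly following rho(t) = rho0 * exp(-k * t).
    t = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0, 13.0])
    rho = np.array([0.62, 0.55, 0.47, 0.42, 0.36, 0.31, 0.25])

    # Log-linear least squares: ln(rho) = ln(rho0) - k * t.
    slope, intercept = np.polyfit(t, np.log(rho), 1)
    k = -slope
    rho0 = np.exp(intercept)
    print(f"k = {k:.3f} /yr, rho0 = {rho0:.2f} g/cm^3")

    # A decomposition-rate index analogous to the paper's residual:
    # observed density minus the value expected from the fitted curve.
    residual_index = rho - rho0 * np.exp(-k * t)
    ```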

  14. Assessing the Uncertainties on Seismic Source Parameters: Towards Realistic Estimates of Moment Tensor Determinations

    NASA Astrophysics Data System (ADS)

    Magnoni, F.; Scognamiglio, L.; Tinti, E.; Casarotti, E.

    2014-12-01

    The seismic moment tensor is one of the most important source parameters, defining the earthquake size and the style of the activated fault. Moment tensor catalogues are routinely used by geoscientists; however, few attempts have been made to assess the possible impacts of moment magnitude uncertainties on their analyses. The 2012 May 20 Emilia mainshock is a representative event, since it is assigned moment magnitude (Mw) values in the literature spanning from 5.63 to 6.12. An uncertainty of ~0.5 magnitude units leads to a controversial knowledge of the real size of the event. The uncertainty associated with this estimate could be critical for the inference of other seismological parameters, suggesting caution in seismic hazard assessment, Coulomb stress transfer determination, and other analyses where self-consistency is important. In this work, we focus on the variability of the moment tensor solution, highlighting the effects of four different velocity models, different types and ranges of filtering, and two different methodologies. Using a larger dataset, to better quantify the source parameter uncertainty, we also analyze the variability of the moment tensor solutions with the number, epicentral distance, and azimuth of the stations used. We stress that the estimate of seismic moment from moment tensor solutions, as well as the estimates of the other kinematic source parameters, cannot be considered absolute values and must be reported with their related uncertainties, in a reproducible framework characterized by disclosed assumptions and explicit processing workflows.
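    The practical weight of a ~0.5-unit spread in Mw follows from the standard Hanks-Kanamori relation Mw = (2/3)·(log₁₀ M₀ − 9.1), with M₀ in N·m: the quoted 5.63-6.12 range corresponds to a factor of about 5.4 in seismic moment. A quick check:

    ```python
    def moment_from_mw(mw):
        """Seismic moment M0 in N*m from moment magnitude (Hanks-Kanamori)."""
        return 10.0 ** (1.5 * mw + 9.1)

    m_low = moment_from_mw(5.63)
    m_high = moment_from_mw(6.12)
    # The 5.63-6.12 Mw range spans roughly a 5.4x spread in seismic moment.
    print(round(m_high / m_low, 1))  # 5.4
    ```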

  15. Comparison of the Decomposition VOC Profile during Winter and Summer in a Moist, Mid-Latitude (Cfb) Climate

    PubMed Central

    Forbes, Shari L.; Perrault, Katelynn A.; Stefanuto, Pierre-Hugues; Nizio, Katie D.; Focant, Jean-François

    2014-01-01

    The investigation of volatile organic compounds (VOCs) associated with decomposition is an emerging field in forensic taphonomy due to their importance in locating human remains using biological detectors such as insects and canines. A consistent decomposition VOC profile has not yet been elucidated due to the intrinsic impact of the environment on the decomposition process in different climatic zones. The study of decomposition VOCs has typically occurred during the warmer months to enable chemical profiling of all decomposition stages. The present study investigated the decomposition VOC profile in air during both warmer and cooler months in a moist, mid-latitude (Cfb) climate as decomposition occurs year-round in this environment. Pig carcasses (Sus scrofa domesticus L.) were placed on a soil surface to decompose naturally and their VOC profile was monitored during the winter and summer months. Corresponding control sites were also monitored to determine the natural VOC profile of the surrounding soil and vegetation. VOC samples were collected onto sorbent tubes and analyzed using comprehensive two-dimensional gas chromatography – time-of-flight mass spectrometry (GC×GC-TOFMS). The summer months were characterized by higher temperatures and solar radiation, greater rainfall accumulation, and comparable humidity when compared to the winter months. The rate of decomposition was faster and the number and abundance of VOCs was proportionally higher in summer. However, a similar trend was observed in winter and summer demonstrating a rapid increase in VOC abundance during active decay with a second increase in abundance occurring later in the decomposition process. Sulfur-containing compounds, alcohols and ketones represented the most abundant classes of compounds in both seasons, although almost all 10 compound classes identified contributed to discriminating the stages of decomposition throughout both seasons. 
The advantages of GC×GC-TOFMS were demonstrated for detecting and identifying trace levels of VOCs, particularly ethers, which are rarely reported as decomposition VOCs. PMID:25412504

  16. Comparison of the decomposition VOC profile during winter and summer in a moist, mid-latitude (Cfb) climate.

    PubMed

    Forbes, Shari L; Perrault, Katelynn A; Stefanuto, Pierre-Hugues; Nizio, Katie D; Focant, Jean-François

    2014-01-01

    The investigation of volatile organic compounds (VOCs) associated with decomposition is an emerging field in forensic taphonomy due to their importance in locating human remains using biological detectors such as insects and canines. A consistent decomposition VOC profile has not yet been elucidated due to the intrinsic impact of the environment on the decomposition process in different climatic zones. The study of decomposition VOCs has typically occurred during the warmer months to enable chemical profiling of all decomposition stages. The present study investigated the decomposition VOC profile in air during both warmer and cooler months in a moist, mid-latitude (Cfb) climate as decomposition occurs year-round in this environment. Pig carcasses (Sus scrofa domesticus L.) were placed on a soil surface to decompose naturally and their VOC profile was monitored during the winter and summer months. Corresponding control sites were also monitored to determine the natural VOC profile of the surrounding soil and vegetation. VOC samples were collected onto sorbent tubes and analyzed using comprehensive two-dimensional gas chromatography – time-of-flight mass spectrometry (GC×GC-TOFMS). The summer months were characterized by higher temperatures and solar radiation, greater rainfall accumulation, and comparable humidity when compared to the winter months. The rate of decomposition was faster and the number and abundance of VOCs was proportionally higher in summer. However, a similar trend was observed in winter and summer demonstrating a rapid increase in VOC abundance during active decay with a second increase in abundance occurring later in the decomposition process. Sulfur-containing compounds, alcohols and ketones represented the most abundant classes of compounds in both seasons, although almost all 10 compound classes identified contributed to discriminating the stages of decomposition throughout both seasons. The advantages of GC×GC-TOFMS were demonstrated for detecting and identifying trace levels of VOCs, particularly ethers, which are rarely reported as decomposition VOCs.

  17. Factors controlling bark decomposition and its role in wood decomposition in five tropical tree species

    PubMed Central

    Dossa, Gbadamassi G. O.; Paudel, Ekananda; Cao, Kunfang; Schaefer, Douglas; Harrison, Rhett D.

    2016-01-01

    Organic matter decomposition represents a vital ecosystem process by which nutrients are made available for plant uptake and is a major flux in the global carbon cycle. Previous studies have investigated decomposition of different plant parts, but few considered bark decomposition or its role in decomposition of wood. However, bark can comprise a large fraction of tree biomass. We used a common litter-bed approach to investigate factors affecting bark decomposition and its role in wood decomposition for five tree species in a secondary seasonal tropical rain forest in SW China. For bark, we implemented a litter bag experiment over 12 mo, using different mesh sizes to investigate effects of litter meso- and macro-fauna. For wood, we compared the decomposition of branches with and without bark over 24 mo. Bark in coarse mesh bags decomposed 1.11–1.76 times faster than bark in fine mesh bags. For wood decomposition, responses to bark removal were species dependent. Three species with slow wood decomposition rates showed significant negative effects of bark-removal, but there was no significant effect in the other two species. Future research should also separately examine bark and wood decomposition, and consider bark-removal experiments to better understand roles of bark in wood decomposition. PMID:27698461

  18. Factors controlling bark decomposition and its role in wood decomposition in five tropical tree species.

    PubMed

    Dossa, Gbadamassi G O; Paudel, Ekananda; Cao, Kunfang; Schaefer, Douglas; Harrison, Rhett D

    2016-10-04

    Organic matter decomposition represents a vital ecosystem process by which nutrients are made available for plant uptake and is a major flux in the global carbon cycle. Previous studies have investigated decomposition of different plant parts, but few considered bark decomposition or its role in decomposition of wood. However, bark can comprise a large fraction of tree biomass. We used a common litter-bed approach to investigate factors affecting bark decomposition and its role in wood decomposition for five tree species in a secondary seasonal tropical rain forest in SW China. For bark, we implemented a litter bag experiment over 12 mo, using different mesh sizes to investigate effects of litter meso- and macro-fauna. For wood, we compared the decomposition of branches with and without bark over 24 mo. Bark in coarse mesh bags decomposed 1.11-1.76 times faster than bark in fine mesh bags. For wood decomposition, responses to bark removal were species dependent. Three species with slow wood decomposition rates showed significant negative effects of bark-removal, but there was no significant effect in the other two species. Future research should also separately examine bark and wood decomposition, and consider bark-removal experiments to better understand roles of bark in wood decomposition.

  19. Kinetic Analysis of Isothermal Decomposition Process of Sodium Bicarbonate Using the Weibull Probability Function—Estimation of Density Distribution Functions of the Apparent Activation Energies

    NASA Astrophysics Data System (ADS)

    Janković, Bojan

    2009-10-01

    The decomposition process of sodium bicarbonate (NaHCO3) has been studied by thermogravimetry under isothermal conditions at four different operating temperatures (380 K, 400 K, 420 K, and 440 K). It was found that the experimental integral and differential conversion curves at the different operating temperatures can be successfully described by the isothermal Weibull distribution function with a unique value of the shape parameter (β = 1.07). It was also established that the Weibull distribution parameters (β and η) are independent of the operating temperature. Using the integral and differential (Friedman) isoconversional methods, in the conversion (α) range 0.20 ≤ α ≤ 0.80, the apparent activation energy (Ea) was approximately constant (Ea,int = 95.2 kJ mol⁻¹ and Ea,diff = 96.6 kJ mol⁻¹, respectively). The values of Ea calculated by both isoconversional methods are in good agreement with the value of Ea evaluated from the Arrhenius equation (94.3 kJ mol⁻¹), which was expressed through the scale distribution parameter (η). The Málek isothermal procedure was used to estimate the kinetic model of the investigated decomposition process. It was found that the two-parameter Šesták-Berggren (SB) autocatalytic model best describes the NaHCO3 decomposition process, with the conversion function f(α) = α^0.18 (1−α)^1.19. It was also concluded that the calculated density distribution functions of the apparent activation energies (ddfEa's) do not depend on the operating temperature and exhibit highly symmetrical behavior (shape factor = 1.00). The obtained isothermal decomposition results were compared with the corresponding results for the nonisothermal decomposition process of NaHCO3.
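    The quantities named above can be sketched directly: the isothermal Weibull conversion curve α(t) = 1 − exp[−(t/η)^β], the Šesták-Berggren conversion function f(α) = α^m·(1−α)^n, and an Arrhenius rate-constant ratio for the reported Ea. The scale parameter η and the prefactor A below are placeholder values, not from the paper:

    ```python
    import numpy as np

    def weibull_conversion(t, beta=1.07, eta=100.0):
        """Isothermal Weibull conversion, alpha(t) = 1 - exp(-(t/eta)^beta).

        eta is a placeholder scale (time units); the paper's fitted values
        per temperature are not reproduced here."""
        return 1.0 - np.exp(-((t / eta) ** beta))

    def sestak_berggren(alpha, m=0.18, n=1.19):
        """Two-parameter Sestak-Berggren model f(alpha) = alpha^m * (1-alpha)^n."""
        return alpha ** m * (1.0 - alpha) ** n

    def arrhenius_k(Ea_J_mol, T_K, A=1.0):
        """Rate constant k = A * exp(-Ea / (R*T)); A is a placeholder prefactor."""
        R = 8.314  # J mol^-1 K^-1
        return A * np.exp(-Ea_J_mol / (R * T_K))

    # With Ea = 94.3 kJ/mol, the rate constant at 440 K exceeds the one
    # at 380 K by roughly a factor of 59.
    ratio = arrhenius_k(94.3e3, 440.0) / arrhenius_k(94.3e3, 380.0)
    print(round(float(ratio)))
    ```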

  20. Convergence analysis of the alternating RGLS algorithm for the identification of the reduced complexity Volterra model.

    PubMed

    Laamiri, Imen; Khouaja, Anis; Messaoud, Hassani

    2015-03-01

    In this paper we provide a convergence analysis of the alternating RGLS (Recursive Generalized Least Squares) algorithm used for the identification of the reduced-complexity Volterra model describing stochastic non-linear systems. The reduced Volterra model used is the 3rd-order SVD-PARAFAC-Volterra model, obtained by applying the Singular Value Decomposition (SVD) and the Parallel Factor (PARAFAC) tensor decomposition to the quadratic and cubic kernels, respectively, of the classical Volterra model. The Alternating RGLS (ARGLS) algorithm consists of executing the classical RGLS algorithm in an alternating manner. The convergence of ARGLS was proved using the Ordinary Differential Equation (ODE) method. It is noted that the algorithm's convergence cannot be ensured when the disturbance acting on the system to be identified has specific features. The ARGLS algorithm is tested in simulations on a numerical example satisfying the determined convergence conditions. To highlight the merits of the proposed algorithm, we compare it with the classical Alternating Recursive Least Squares (ARLS) algorithm presented in the literature. The comparison is based on a non-linear satellite channel and the benchmark CSTR (Continuous Stirred Tank Reactor) system. Moreover, the efficiency of the proposed identification approach is demonstrated on an experimental Communicating Two-Tank System (CTTS). Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
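    The building block of such alternating schemes is the recursive least squares update, executed alternately over separated parameter sets. A generic (plain, non-generalized, non-alternating) RLS sketch on a linear toy system, not the paper's Volterra implementation:

    ```python
    import numpy as np

    def rls_update(theta, P, phi, y, lam=1.0):
        """One standard recursive least squares step.

        theta : current parameter estimate, shape (n,)
        P     : covariance-like matrix, shape (n, n)
        phi   : regressor vector, shape (n,)
        y     : new scalar observation
        lam   : forgetting factor (1.0 = ordinary RLS)
        """
        Pphi = P @ phi
        gain = Pphi / (lam + phi @ Pphi)
        err = y - phi @ theta
        theta = theta + gain * err
        P = (P - np.outer(gain, Pphi)) / lam
        return theta, P

    # Identify y = 2*x1 - 3*x2 from noiseless samples.
    rng = np.random.default_rng(0)
    theta = np.zeros(2)
    P = 1e6 * np.eye(2)
    for _ in range(50):
        phi = rng.standard_normal(2)
        y = 2.0 * phi[0] - 3.0 * phi[1]
        theta, P = rls_update(theta, P, phi, y)
    print(np.round(theta, 3))  # close to [2, -3]
    ```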

  1. Dressing the post-Newtonian two-body problem and classical effective field theory

    NASA Astrophysics Data System (ADS)

    Kol, Barak; Smolkin, Michael

    2009-12-01

    We apply a dressed perturbation theory to better organize and economize the computation of high orders of the 2-body effective action of an inspiralling post-Newtonian (PN) gravitating binary. We use the effective field theory approach with the nonrelativistic field decomposition (NRG fields). For that purpose we develop quite generally the dressing theory of a nonlinear classical field theory coupled to pointlike sources. We introduce dressed charges and propagators, but unlike the quantum theory there are no dressed bulk vertices. The dressed quantities are found to obey recursive integral equations which succinctly encode parts of the diagrammatic expansion, and are the classical version of the Schwinger-Dyson equations. Actually, the classical equations are somewhat stronger since they involve only finitely many quantities, unlike the quantum theory. Classical diagrams are shown to factorize exactly when they contain nonlinear worldline vertices, and we classify all the possible topologies of irreducible diagrams for low loop numbers. We apply the dressing program to our post-Newtonian case of interest. The dressed charges consist of the dressed energy-momentum tensor after a nonrelativistic decomposition, and we compute all dressed charges (in the harmonic gauge) appearing up to 2PN in the 2-body effective action (and more). We determine the irreducible skeleton diagrams up to 3PN and we employ the dressed charges to compute several terms beyond 2PN.

  2. An efficient and general approach for implementing thermodynamic phase equilibria information in geophysical and geodynamic studies

    NASA Astrophysics Data System (ADS)

    Afonso, Juan Carlos; Zlotnik, Sergio; Díez, Pedro

    2015-10-01

    We present a flexible, general, and efficient approach for implementing thermodynamic phase equilibria information (in the form of sets of physical parameters) into geophysical and geodynamic studies. The approach is based on Tensor Rank Decomposition methods, which transform the original multidimensional discrete information into a separated representation that contains significantly fewer terms, thus drastically reducing the amount of information to be stored in memory during a numerical simulation or geophysical inversion. Accordingly, the amount and resolution of the thermodynamic information that can be used in a simulation or inversion increases substantially. In addition, the method is independent of the actual software used to obtain the primary thermodynamic information, and therefore, it can be used in conjunction with any thermodynamic modeling program and/or database. Also, the errors associated with the decomposition procedure are readily controlled by the user, depending on her/his actual needs (e.g., preliminary runs versus full resolution runs). We illustrate the benefits, generality, and applicability of our approach with several examples of practical interest for both geodynamic modeling and geophysical inversion/modeling. Our results demonstrate that the proposed method is a competitive and attractive candidate for implementing thermodynamic constraints into a broad range of geophysical and geodynamic studies. MATLAB implementations of the method and examples are provided as supporting information and can be downloaded from the journal's website.
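    The separated representation stores a sum of outer products of one vector per dimension instead of the full multidimensional grid, which is where the memory savings come from. An illustrative rank-2 example with made-up factor vectors (the paper's thermodynamic tables and decomposition software are not reproduced here):

    ```python
    import numpy as np

    # Illustrative only: a rank-2 separated representation of a 3-D lookup
    # table T[p, t, x] = sum_r a_r ⊗ b_r ⊗ c_r, with random factor vectors.
    nP, nT, nX, rank = 200, 200, 100, 2
    a = np.random.default_rng(1).standard_normal((rank, nP))
    b = np.random.default_rng(2).standard_normal((rank, nT))
    c = np.random.default_rng(3).standard_normal((rank, nX))

    # Full table: 200*200*100 = 4,000,000 values to store ...
    full = np.einsum('rp,rt,rx->ptx', a, b, c)
    # ... versus the separated form: 2*(200+200+100) = 1,000 values.
    separated_size = rank * (nP + nT + nX)
    print(full.size, separated_size)  # 4000000 1000

    # Evaluating one grid point needs only the factor vectors:
    i, j, k = 10, 20, 30
    point = np.sum(a[:, i] * b[:, j] * c[:, k])
    assert np.isclose(point, full[i, j, k])
    ```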

  3. Monitoring the Earthquake source process in North America

    USGS Publications Warehouse

    Herrmann, Robert B.; Benz, H.; Ammon, C.J.

    2011-01-01

    With the implementation of the USGS National Earthquake Information Center Prompt Assessment of Global Earthquakes for Response system (PAGER), rapid determination of earthquake moment magnitude is essential, especially for earthquakes that are felt within the contiguous United States. We report an implementation of moment tensor processing for application to broad, seismically active areas of North America. This effort focuses on the selection of regional crustal velocity models, codification of data quality tests, and the development of procedures for rapid computation of the seismic moment tensor. We systematically apply these techniques to earthquakes with reported magnitude greater than 3.5 in continental North America that are not associated with a tectonic plate boundary. Using the 0.02-0.10 Hz passband, we can usually determine moment tensor solutions for earthquakes with Mw as small as 3.7. The threshold is significantly influenced by the density of stations, the location of the earthquake relative to the seismic stations and, of course, the signal-to-noise ratio. With the existing permanent broadband stations in North America operated for rapid earthquake response, the seismic moment tensor of most earthquakes of Mw 4 or larger can be routinely computed. As expected, the nonuniform spatial pattern of these solutions reflects the seismicity pattern. However, the orientation of the maximum compressive stress direction and the predominant style of faulting are spatially coherent across large regions of the continent.

  4. Complete Moment Tensor Determination of Induced Seismicity in Unconventional and Conventional Oil/Gas Fields

    NASA Astrophysics Data System (ADS)

    Gu, C.; Li, J.; Toksoz, M. N.

    2013-12-01

    Induced seismicity occurs both in conventional oil/gas fields due to production and water injection and in unconventional oil/gas fields due to hydraulic fracturing. Source mechanisms of these induced earthquakes are of great importance for understanding their causes and the physics of the seismic processes in reservoirs. Previous research on the analysis of induced seismic events in conventional oil/gas fields assumed a double couple (DC) source mechanism. However, recent studies have shown a non-negligible percentage of a non-double-couple (non-DC) component of source moment tensor in hydraulic fracturing events (Šílený et al., 2009; Warpinski and Du, 2010; Song and Toksöz, 2011). In this study, we determine the full moment tensor of the induced seismicity data in a conventional oil/gas field and for hydrofrac events in an unconventional oil/gas field. Song and Toksöz (2011) developed a full waveform based complete moment tensor inversion method to investigate a non-DC source mechanism. We apply this approach to the induced seismicity data from a conventional gas field in Oman. In addition, this approach is also applied to hydrofrac microseismicity data monitored by downhole geophones in four wells in US. We compare the source mechanisms of induced seismicity in the two different types of gas fields and explain the differences in terms of physical processes.

  5. SPIN CORRELATIONS OF THE FINAL LEPTONS IN THE TWO-PHOTON PROCESSES γγ → e+e-, μ+μ-, τ+τ-

    NASA Astrophysics Data System (ADS)

    Lyuboshitz, Valery V.; Lyuboshitz, Vladimir L.

    2014-12-01

    The spin structure of the process γγ → e+e- is theoretically investigated. It is shown that, if the primary photons are unpolarized, the final electron and positron are unpolarized as well but their spins are strongly correlated. For the final (e+e-) system, explicit expressions for the components of the correlation tensor are derived, and the relative fractions of singlet and triplet states are found. It is demonstrated that in the process γγ → e+e- one of the Bell-type incoherence inequalities for the correlation tensor components is always violated and, thus, spin correlations of the electron and positron in this process have the strongly pronounced quantum character. Analogous consideration can be wholly applied as well to the two-photon processes γγ → μ+μ- and γγ → τ+τ-, which become possible at considerably higher energies.

  6. Layout compliance for triple patterning lithography: an iterative approach

    NASA Astrophysics Data System (ADS)

    Yu, Bei; Garreton, Gilda; Pan, David Z.

    2014-10-01

    As the semiconductor process scales further down, the industry encounters many lithography-related issues. At the 14 nm logic node and beyond, triple patterning lithography (TPL) is one of the most promising techniques for the Metal1 layer and possibly the Via0 layer. Layout decomposition, one of the most challenging problems in TPL, has recently received increasing attention from both industry and academia. Ideally the decomposer should point out locations in the layout that are not triple-patterning decomposable and therefore require manual intervention by designers. A traditional decomposition flow is an iterative process, where each iteration consists of an automatic layout decomposition step and a manual layout modification task. However, due to the NP-hardness of triple patterning layout decomposition, automatic full-chip layout decomposition requires long computational times, and design closure issues therefore linger in the traditional flow. Challenged by this issue, we present a novel incremental layout decomposition framework to facilitate accelerated iterative decomposition. In the first iteration, our decomposer not only points out all conflicts but also suggests how to fix them. After the layout modification, instead of solving the full-chip problem from scratch, our decomposer can provide a quick solution for a selected portion of the layout. We believe this framework is efficient in terms of performance and is designer friendly.
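    TPL layout decomposition corresponds to 3-coloring the spacing-conflict graph, which is NP-hard in general. A brute-force backtracking sketch (workable only for small conflict clusters, nothing like a full-chip solver) illustrates how a non-decomposable cluster is detected:

    ```python
    def three_color(n, edges):
        """Backtracking 3-coloring of a TPL spacing-conflict graph.

        Returns a list of mask assignments (0/1/2 per feature), or None
        when the cluster is not triple-patterning decomposable."""
        adj = [set() for _ in range(n)]
        for u, v in edges:
            adj[u].add(v)
            adj[v].add(u)
        color = [-1] * n

        def assign(u):
            if u == n:
                return True
            for c in range(3):
                if all(color[v] != c for v in adj[u]):
                    color[u] = c
                    if assign(u + 1):
                        return True
            color[u] = -1
            return False

        return color if assign(0) else None

    # K4: four mutually conflicting features cannot be split across 3 masks.
    print(three_color(4, [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]))  # None
    # A 5-cycle of conflicts can.
    print(three_color(5, [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]))  # [0, 1, 0, 1, 2]
    ```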

  7. Decoupling the direct and indirect effects of climate on plant litter decomposition: Accounting for stress-induced modifications in plant chemistry.

    PubMed

    Suseela, Vidya; Tharayil, Nishanth

    2018-04-01

    Decomposition of plant litter is a fundamental ecosystem process that can act as a feedback to climate change by simultaneously influencing both the productivity of ecosystems and the flux of carbon dioxide from the soil. The influence of climate on decomposition from a postsenescence perspective is relatively well known; in particular, climate is known to regulate the rate of litter decomposition via its direct influence on the reaction kinetics and microbial physiology on processes downstream of tissue senescence. Climate can alter plant metabolism during the formative stage of tissues and could shape the final chemical composition of plant litter that is available for decomposition, and thus indirectly influence decomposition; however, these indirect effects are relatively poorly understood. Climatic stress disrupts cellular homeostasis in plants and results in the reprogramming of primary and secondary metabolic pathways, which leads to changes in the quantity, composition, and organization of small molecules and recalcitrant heteropolymers, including lignins, tannins, suberins, and cuticle within the plant tissue matrix. Furthermore, by regulating metabolism during tissue senescence, climate influences the resorption of nutrients from senescing tissues. Thus, the final chemical composition of plant litter that forms the substrate of decomposition is a combined product of presenescence physiological processes through the production and resorption of metabolites. The changes in quantity, composition, and localization of the molecular construct of the litter could enhance or hinder tissue decomposition and soil nutrient cycling by altering the recalcitrance of the lignocellulose matrix, the composition of microbial communities, and the activity of microbial exo-enzymes via various complexation reactions. Also, the climate-induced changes in the molecular composition of litter could differentially influence litter decomposition and soil nutrient cycling. 
Compared with temperate ecosystems, the indirect effects of climate on litter decomposition in the tropics are not well understood, which underscores the need for additional studies in tropical biomes. We also emphasize the need to focus on how climatic stress affects root chemistry, as roots contribute significantly to biogeochemical cycling, and on employing more robust analytical approaches to capture the molecular composition of the tissue matrix that fuels microbial metabolism. © 2017 John Wiley & Sons Ltd.

  8. Comparison of multi-fluid moment models with particle-in-cell simulations of collisionless magnetic reconnection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Liang, E-mail: liang.wang@unh.edu; Germaschewski, K.; Hakim, Ammar H.

    2015-01-15

    We introduce an extensible multi-fluid moment model in the context of collisionless magnetic reconnection. This model evolves the full Maxwell equations and, simultaneously, moments of the Vlasov-Maxwell equation for each species in the plasma. Effects such as electron inertia and the pressure gradient are self-consistently embedded in the resulting multi-fluid moment equations, without the need to explicitly solve a generalized Ohm's law. Two limits of the multi-fluid moment model are discussed, namely, the five-moment limit, which evolves a scalar pressure for each species, and the ten-moment limit, which evolves the full anisotropic, non-gyrotropic pressure tensor for each species. We first demonstrate analytically and numerically that the five-moment model reduces to the widely used Hall magnetohydrodynamics (Hall MHD) model under the assumptions of vanishing electron inertia, infinite speed of light, and quasi-neutrality. Then, we compare ten-moment and fully kinetic particle-in-cell (PIC) simulations of a large-scale Harris sheet reconnection problem, where the ten-moment equations are closed with a local linear collisionless approximation for the heat flux. The ten-moment simulation gives reasonable agreement with the PIC results regarding the structures and magnitudes of the electron flows, the polarities and magnitudes of the elements of the electron pressure tensor, and the decomposition of the generalized Ohm's law. Possible ways to improve the simple local closure towards a nonlocal, fully three-dimensional closure are also discussed.
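    For reference, the generalized Ohm's law whose decomposition is discussed above takes the standard form (standard notation, assumed here rather than quoted from the paper):

```latex
\mathbf{E} = -\,\mathbf{v}\times\mathbf{B}
           + \frac{1}{ne}\,\mathbf{J}\times\mathbf{B}
           - \frac{1}{ne}\,\nabla\cdot\mathsf{P}_e
           + \frac{m_e}{n e^{2}}\,\frac{d\mathbf{J}}{dt}
```

    In the multi-fluid moment formulation, the Hall, electron pressure-tensor, and electron-inertia terms are carried implicitly by the species moment equations rather than evaluated term by term.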

  9. Two-way and three-way approaches to ultra high performance liquid chromatography-photodiode array dataset for the quantitative resolution of a two-component mixture containing ciprofloxacin and ornidazole.

    PubMed

    Dinç, Erdal; Ertekin, Zehra Ceren; Büker, Eda

    2016-09-01

    Two-way and three-way calibration models were applied to ultra high performance liquid chromatography with photodiode array detection data with coeluted peaks in the same wavelength and time regions for the simultaneous quantitation of ciprofloxacin and ornidazole in tablets. The chromatographic data cube (tensor) was obtained by recording chromatographic spectra of the standard and sample solutions containing ciprofloxacin and ornidazole, with sulfadiazine as an internal standard, as a function of time and wavelength. Parallel factor analysis and trilinear partial least squares were used as three-way calibrations for the decomposition of the tensor, whereas unfolded partial least squares was applied as a two-way calibration to the unfolded dataset obtained from the data array of ultra high performance liquid chromatography with photodiode array detection. The validity and ability of the two-way and three-way analysis methods were tested by analyzing validation samples: a synthetic mixture, interday and intraday samples, and standard addition samples. Results obtained from the two-way and three-way calibrations were compared to those provided by traditional ultra high performance liquid chromatography. The proposed methods, parallel factor analysis, trilinear partial least squares, unfolded partial least squares, and traditional ultra high performance liquid chromatography, were successfully applied to the quantitative estimation of the solid dosage form containing ciprofloxacin and ornidazole. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
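    To illustrate the kind of three-way decomposition applied to such a data cube, the following is a minimal CP/PARAFAC sketch fitted by alternating least squares with NumPy. It is a generic, unregularized illustration (function names and the plain ALS update are my assumptions), not the authors' implementation:

```python
import numpy as np

def khatri_rao(A, B):
    # Column-wise Khatri-Rao product: row (i*J + j) holds A[i, :] * B[j, :].
    R = A.shape[1]
    return np.einsum('ir,jr->ijr', A, B).reshape(-1, R)

def unfold(X, mode):
    # Mode-n unfolding of a 3-way tensor (C-ordered remaining modes).
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def parafac(X, rank, n_iter=200, seed=0):
    """Rank-R CP/PARAFAC decomposition of a 3-way array via alternating
    least squares: X[i,j,k] ~ sum_r A[i,r] * B[j,r] * C[k,r]."""
    rng = np.random.default_rng(seed)
    I, J, K = X.shape
    A = rng.standard_normal((I, rank))
    B = rng.standard_normal((J, rank))
    C = rng.standard_normal((K, rank))
    for _ in range(n_iter):
        # Each factor update is an exact linear least-squares solve.
        A = unfold(X, 0) @ np.linalg.pinv(khatri_rao(B, C)).T
        B = unfold(X, 1) @ np.linalg.pinv(khatri_rao(A, C)).T
        C = unfold(X, 2) @ np.linalg.pinv(khatri_rao(A, B)).T
    return A, B, C
```

    In a chromatographic setting the three factor matrices would correspond to elution profiles, spectral profiles, and relative concentrations across samples.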

  10. Anomalous Polarized Raman Scattering and Large Circular Intensity Differential in Layered Triclinic ReS2.

    PubMed

    Zhang, Shishu; Mao, Nannan; Zhang, Na; Wu, Juanxia; Tong, Lianming; Zhang, Jin

    2017-10-24

    The Raman tensor of a crystal is the derivative of its polarizability tensor and depends on the symmetries of the crystal and the Raman-active vibrational mode. The intensity of a particular mode is determined by the Raman selection rule, which involves the Raman tensor and the polarization configuration. For anisotropic two-dimensional (2D) layered crystals, polarized Raman scattering has been used to reveal crystalline orientations. However, owing to their complicated Raman tensors and optical birefringence, the polarized Raman scattering of triclinic 2D crystals has not been well studied. Herein, we report the anomalous polarized Raman scattering of 2D layered triclinic rhenium disulfide (ReS2) and show a large circular intensity differential (CID) of Raman scattering in ReS2 of different thicknesses. The origin of the CID and the anomalous behavior in polarized Raman scattering is attributed to the appearance of nonzero off-diagonal Raman tensor elements and a phase factor owing to optical birefringence. This provides a method to identify the vertical orientation of triclinic layered materials. These findings may help to further understand the Raman scattering process in 2D materials of low symmetry and may indicate important applications in chiral recognition using 2D materials.
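    The Raman selection rule invoked above is conventionally written (standard notation, assumed rather than quoted from the paper) as:

```latex
I \;\propto\; \left|\, \hat{\mathbf{e}}_s^{\,T} \, R \, \hat{\mathbf{e}}_i \,\right|^{2}
```

    where $R$ is the Raman tensor of the mode and $\hat{\mathbf{e}}_i$, $\hat{\mathbf{e}}_s$ are the incident and scattered polarization vectors. A circular intensity differential appears when complex off-diagonal elements of $R$ (here induced in part by birefringence phase factors) make this expression differ for left- and right-circularly polarized $\hat{\mathbf{e}}_i$.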

  11. Full moment tensors for small events (Mw < 3) at Uturuncu volcano, Bolivia

    NASA Astrophysics Data System (ADS)

    Alvizuri, Celso; Tape, Carl

    2016-09-01

    We present a catalogue of full seismic moment tensors for 63 events at Uturuncu volcano in Bolivia. The events were recorded during 2011-2012 by the PLUTONS seismic array of 24 broad-band stations. Most events had magnitudes between 0.5 and 2.0 and did not generate discernible surface waves; the largest event was Mw 2.8. Each moment tensor solution was obtained using a grid search over the 6-D space of moment tensors: for each event we computed the misfit between observed and synthetic waveforms, and we used first-motion polarity measurements to reduce the number of possible solutions. For each event, we show the misfit function in eigenvalue space, represented by a lune. We identify three subsets of the catalogue: (1) six isotropic events, (2) five tensional crack events, and (3) a swarm of 14 events southeast of the volcanic centre that appear to be double couples. The occurrence of positive isotropic events is consistent with other published results from volcanic and geothermal regions. Several of these previous results, as well as our own, cannot be interpreted within the context of either an oblique opening crack or a crack-plus-double-couple model. Proper characterization of uncertainties for full moment tensors is critical for distinguishing among physical models of source processes.
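    A grid search of this kind reduces to evaluating a waveform misfit for every candidate source and keeping the minimum. The sketch below uses a toy linear forward model; the function names and the plain L2 misfit are illustrative assumptions, not the authors' code:

```python
import numpy as np

def waveform_misfit(d_obs, d_syn):
    # L2 norm of the residual between observed and synthetic waveforms.
    return np.linalg.norm(d_obs - d_syn)

def grid_search(d_obs, forward, candidates):
    """Return the candidate source model with minimum waveform misfit.
    `forward` maps a model vector to its synthetic waveforms."""
    return min(candidates, key=lambda m: waveform_misfit(d_obs, forward(m)))
```

    In practice the candidates would sample the 6-D moment tensor space (e.g. eigenvalue triples on the lune plus orientation angles), and polarity measurements would prune the candidate set first.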

  12. Multilevel decomposition of complete vehicle configuration in a parallel computing environment

    NASA Technical Reports Server (NTRS)

    Bhatt, Vinay; Ragsdell, K. M.

    1989-01-01

    This research summarizes various approaches to multilevel decomposition to solve large structural problems. A linear decomposition scheme based on the Sobieski algorithm is selected as a vehicle for automated synthesis of a complete vehicle configuration in a parallel processing environment. The research is in a developmental state. Preliminary numerical results are presented for several example problems.

  13. Interacting effects of insects and flooding on wood decomposition.

    Treesearch

    Michael Ulyshen

    2014-01-01

    Saproxylic arthropods are thought to play an important role in wood decomposition but very few efforts have been made to quantify their contributions to the process and the factors controlling their activities are not well understood. In the current study, mesh exclusion bags were used to quantify how arthropods affect loblolly pine (Pinus taeda L.) decomposition rates...

  14. Characterizing dielectric tensors of anisotropic materials from a single measurement

    NASA Astrophysics Data System (ADS)

    Smith, Paula Kay

    Ellipsometry techniques measure changes in polarization state to determine the optical properties of thin film materials. A beam reflected from a substrate measures the real and imaginary parts of the index of the material, represented as n and k, respectively. Measuring the substrate at several angles gives additional information that can be used to measure multilayer thin film stacks. However, the outstanding problem in standard ellipsometry is that it uses a limited number of incident polarization states (s and p), which restricts the technique to isotropic materials. The technique discussed in this paper extends the standard process to anisotropic materials by using a larger set of incident polarization states. By using a polarimeter to generate several incident polarization states and measure the polarization properties of the sample, ellipsometry can be performed on biaxial materials. Use of an optimization algorithm in conjunction with biaxial ellipsometry can more accurately determine the dielectric tensor of individual layers in multilayer structures. Biaxial ellipsometry is a technique that measures the dielectric tensors of a biaxial substrate, single-layer thin film, or multilayer structure. The dielectric tensor of a biaxial material consists of the real and imaginary parts of the three orthogonal principal indices (nx + ikx, ny + iky, and nz + ikz) as well as three Euler angles (alpha, beta, and gamma) that describe its orientation. The method utilized in this work measures an angle-of-incidence Mueller matrix with a Mueller matrix imaging polarimeter equipped with a pair of microscope objectives that have low polarization properties. To accurately determine the dielectric tensors of multilayer samples, the angle-of-incidence Mueller matrix images are collected at multiple wavelengths. This is done in either a transmission mode or a reflection mode, each incorporating an appropriate dispersion model.
Given approximate a priori knowledge of the dielectric tensor and film thickness, a Jones reflectivity matrix is calculated by solving Maxwell's equations at each surface. Converting the Jones matrix into a Mueller matrix provides a starting point for optimization. An optimization algorithm then finds the best fit dielectric tensor based on the measured angle-of-incidence Mueller matrix image. This process can be applied to polarizing materials, birefringent crystals and the multilayer structures of liquid crystal displays. In particular, the need for such accuracy in liquid crystal displays is growing as their applications in industry evolve.
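    The Jones-to-Mueller conversion step described above has a standard closed form for non-depolarizing elements, M = A (J ⊗ J*) A⁻¹. A minimal sketch (standard formula, not the author's code):

```python
import numpy as np

# Standard transformation matrix between Jones and Stokes/Mueller calculus.
A = np.array([[1, 0, 0, 1],
              [1, 0, 0, -1],
              [0, 1, 1, 0],
              [0, 1j, -1j, 0]], dtype=complex)

def jones_to_mueller(J):
    """Mueller matrix of a non-depolarizing element with Jones matrix J.
    The result is real up to numerical noise, so the real part is returned."""
    M = A @ np.kron(J, J.conj()) @ np.linalg.inv(A)
    return M.real
```

    Converting the computed Jones reflectivity matrix this way yields the starting point that the optimization then fits against the measured angle-of-incidence Mueller matrix image.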

  15. Enhance the Quality of Crowdsensing for Fine-Grained Urban Environment Monitoring via Data Correlation

    PubMed Central

    Kang, Xu; Liu, Liang; Ma, Huadong

    2017-01-01

    Monitoring the status of urban environments, which provides fundamental information for a city, yields crucial insights into various fields of urban research. Recently, with the popularity of smartphones and vehicles equipped with onboard sensors, a people-centric scheme for city-scale environment monitoring, namely “crowdsensing”, is emerging. This paper proposes a data-correlation-based crowdsensing approach for fine-grained urban environment monitoring. To capture urban status, we generate sensing images via a crowdsensing network and then enhance the quality of the sensing images via data correlation. Specifically, to achieve higher-quality sensing images, we not only exploit the temporal correlation of mobile sensing nodes but also fuse the sensory data with correlated environment data by introducing a collective tensor decomposition approach. Finally, we conduct a series of numerical simulations and a case study based on a real dataset. The results validate that our approach outperforms the traditional spatial interpolation-based method. PMID:28054968

  16. Filtering techniques for efficient inversion of two-dimensional Nuclear Magnetic Resonance data

    NASA Astrophysics Data System (ADS)

    Bortolotti, V.; Brizi, L.; Fantazzini, P.; Landi, G.; Zama, F.

    2017-10-01

    The inversion of two-dimensional Nuclear Magnetic Resonance (NMR) data requires the solution of a first kind Fredholm integral equation with a two-dimensional tensor product kernel and lower bound constraints. For the solution of this ill-posed inverse problem, the recently presented 2DUPEN algorithm [V. Bortolotti et al., Inverse Problems, 33(1), 2016] uses multiparameter Tikhonov regularization with automatic choice of the regularization parameters. In this work, I2DUPEN, an improved version of 2DUPEN that implements Mean Windowing and Singular Value Decomposition filters, is tested in depth. The reconstruction problem with filtered data is formulated as a compressed weighted least squares problem with multiparameter Tikhonov regularization. Results on synthetic and real 2D NMR data are presented, with the main purpose of analyzing in greater depth the separate and combined effects of these filtering techniques on the reconstructed 2D distribution.
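    A Singular Value Decomposition filter of the kind mentioned above amounts to projecting the data matrix onto its dominant singular subspace, which both denoises and compresses the inversion. A minimal sketch (illustrative, not the I2DUPEN implementation):

```python
import numpy as np

def svd_filter(data, rank):
    """Truncated-SVD filter: keep only the `rank` largest singular
    components of a 2D data matrix, discarding the noise-dominated rest."""
    U, s, Vt = np.linalg.svd(data, full_matrices=False)
    return U[:, :rank] * s[:rank] @ Vt[:rank]
```

    The retained rank trades off noise suppression against fidelity; in the paper's setting the filtered data then feed the compressed weighted least squares problem.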

  17. Smoothing Spline ANOVA Decomposition of Arbitrary Splines: An Application to Eye Movements in Reading

    PubMed Central

    Matuschek, Hannes; Kliegl, Reinhold; Holschneider, Matthias

    2015-01-01

    The Smoothing Spline ANOVA (SS-ANOVA) requires a specialized construction of basis and penalty terms in order to incorporate prior knowledge about the data to be fitted. Typically, one resorts to the most general approach using tensor product splines. This implies severe constraints on the correlation structure; i.e., the assumption of isotropy of smoothness cannot be incorporated in general. This may increase the variance of the spline fit, especially if only a relatively small set of observations is available. In this article, we propose an alternative method that allows prior knowledge to be incorporated without the need to construct specialized bases and penalties, allowing the researcher to choose the spline basis and penalty according to prior knowledge of the observations rather than according to the analysis to be done. The two approaches are compared on an artificial example and on analyses of fixation durations during reading. PMID:25816246

  18. z-Weyl gravity in higher dimensions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moon, Taeyoon; Oh, Phillial, E-mail: dpproject@skku.edu, E-mail: ploh@skku.edu

    We consider higher dimensional gravity in which the four-dimensional spacetime and extra dimensions are not treated on an equal footing. The anisotropy is implemented in the ADM decomposition of the higher dimensional metric by requiring foliation-preserving diffeomorphism invariance adapted to the extra dimensions, thus keeping general covariance only for the four-dimensional spacetime. A conformally invariant gravity can be constructed with an extra (Weyl) scalar field and a real parameter z that describes the degree of anisotropy of the conformal transformation between the spacetime and extra-dimensional metrics. In the zero-mode effective 4D action, it reduces to a four-dimensional scalar-tensor theory coupled with a nonlinear sigma model described by the extra-dimensional metrics. There are no restrictions on the value of z at the classical level, and possible applications to the cosmological constant problem with a specific choice of z are discussed.

  19. The notion of a plastic material spin in atomistic simulations

    NASA Astrophysics Data System (ADS)

    Dickel, D.; Tenev, T. G.; Gullett, P.; Horstemeyer, M. F.

    2016-12-01

    A kinematic algorithm is proposed to extend existing constructions of strain tensors from atomistic data to decouple elastic and plastic contributions to the strain. Elastic and plastic deformation and ultimately the plastic spin, useful quantities in continuum mechanics and finite element simulations, are computed from the full, discrete deformation gradient and an algorithm for the local elastic deformation gradient. This elastic deformation gradient algorithm identifies a crystal type using bond angle analysis (Ackland and Jones 2006 Phys. Rev. B 73 054104) and further exploits the relationship between bond angles to determine the local deformation from an ideal crystal lattice. Full definitions of plastic deformation follow directly using a multiplicative decomposition of the deformation gradient. The results of molecular dynamics simulations of copper in simple shear and torsion are presented to demonstrate the ability of these new discrete measures to describe plastic material spin in atomistic simulation and to compare them with continuum theory.
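    The multiplicative decomposition mentioned above, F = Fe·Fp, yields the plastic part directly once the local elastic deformation gradient is known. A minimal sketch (illustrative helper, not the authors' algorithm):

```python
import numpy as np

def plastic_deformation(F, Fe):
    """Plastic part of the deformation gradient from the multiplicative
    decomposition F = Fe @ Fp, given the full gradient F and the elastic
    part Fe recovered from the local lattice analysis."""
    return np.linalg.inv(Fe) @ F
```

    In the paper's setting, F comes from the discrete atomistic deformation measure and Fe from the bond-angle-based comparison against the ideal crystal lattice; time differentiating Fp then gives access to the plastic spin.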

  20. Tea polyphenols dominate the short-term tea (Camellia sinensis) leaf litter decomposition*

    PubMed Central

    Fan, Dong-mei; Fan, Kai; Yu, Cui-ping; Lu, Ya-ting; Wang, Xiao-chang

    2017-01-01

    Polyphenols are among the most important secondary metabolites and affect the decomposition of litter and soil organic matter. This study aims to monitor the mass loss rate and nutrient release pattern of tea leaf litter and to investigate the role tea polyphenols play in this process. High-performance liquid chromatography (HPLC) and the classical litter bag method were used to follow the decomposition of tea leaf litter and track the changes in major polyphenols over eight months. The release patterns of nitrogen, potassium, calcium, and magnesium were also determined. The decomposition of tea leaf litter could be described by a two-phase decomposition model, and the polyphenol/N ratio effectively regulated the degradation process. Most of the catechins decreased dramatically within two months; gallic acid (GA), catechin gallate (CG), and gallocatechin (GC) were faintly detected, while the others were below detection limits by the end of the experiment. These results demonstrate that tea polyphenols transformed quickly and that catechins affected the individual conversion rates. The nutrient release pattern differed from that of other plants, which might be due to the presence of tea polyphenols. PMID:28124839
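    Two-phase litter decay is commonly modeled as a double-exponential mass-loss curve, with a labile fraction decaying quickly and the remainder slowly. The sketch below is a generic illustration of that model (parameter names and the double-exponential form are my assumptions, not the authors' fitted model):

```python
import numpy as np

def mass_remaining(t, fast_frac, k_fast, k_slow):
    """Two-phase (double-exponential) litter decay: a labile fraction
    `fast_frac` decays at rate k_fast, the rest at rate k_slow."""
    return fast_frac * np.exp(-k_fast * t) + (1 - fast_frac) * np.exp(-k_slow * t)
```

    Fitting such a curve to litter-bag mass-loss data yields the two rate constants that characterize the fast and slow phases.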

  2. Characterization of nanosized TiO2 synthesized inside a porous glass ceramic monolith by metallo-organic decomposition process

    NASA Astrophysics Data System (ADS)

    Mazali, Italo Odone; Alves, Oswaldo Luiz

    2005-01-01

    This work reports the preparation of TiO2 by decomposition of a metallo-organic precursor (MOD process) in the pores of an α-NbPO5 glass-ceramic monolith (PGC-NbP) and a study of the TiO2 anatase-rutile phase transition. The impregnation of titanium di-(propoxy)-di-(2-ethylhexanoate) into the PGC-NbP was confirmed by diffuse reflectance infrared spectroscopy. In the restrictive porous environment the decomposition of the metallo-organic compound exhibits a lower initial decomposition temperature but a higher final decomposition temperature than the free precursor. The pure TiO2 rutile phase is formed only above 700 °C when the titanium precursor is decomposed outside the pores. The TiO2 anatase obtained inside the PGC-NbP was stabilized up to 750 °C and exhibits a smaller average crystallite size than that from the MOD process performed without PGC-NbP. Furthermore, the temperature of the TiO2 anatase-rutile transformation depends on crystallite size, as determined by XRD and Raman spectroscopy. The precursor impregnation-decomposition cycle produced a linear mass increment inside the PGC-NbP. Micro-Raman spectroscopy shows a concentration gradient of TiO2 inside the PGC-NbP. The use of the MOD process in the PGC-NbP pores has several advantages: control of the amount and nature of the phase formed and preservation of the pore structure of the PGC-NbP for subsequent treatments and reactions.

  3. A comparison between decomposition rates of buried and surface remains in a temperate region of South Africa.

    PubMed

    Marais-Werner, Anátulie; Myburgh, J; Becker, P J; Steyn, M

    2018-01-01

    Several studies have been conducted on the decomposition patterns and rates of surface remains; however, much less is known about this process for buried remains. Understanding the process of decomposition in buried remains is extremely important and aids criminal investigations, especially when estimating the post mortem interval (PMI). The aim of this study was to compare the rates of decomposition between buried and surface remains. For this purpose, 25 pigs (Sus scrofa; 45-80 kg) were buried and excavated at different post mortem intervals (7, 14, 33, 92, and 183 days). The observed total body scores were then compared to those of surface remains decomposing at the same location. Stages of decomposition were scored according to separate categories for different anatomical regions based on standardised methods. Variation in the degree of decomposition was considerable, especially among the buried 7-day-interval pigs, which displayed different degrees of discolouration in the lower abdomen and trunk. At 14 and 33 days, buried pigs displayed features commonly associated with the early stages of decomposition, but with less variation. A state of advanced decomposition was then reached, with little change observed over the next ±90-183 days after interment. Although the patterns of decomposition for buried and surface remains were very similar, the rates differed considerably. Based on the observations made in this study, guidelines for the estimation of PMI are proposed. These pertain to buried remains found at a depth of approximately 0.75 m in the Central Highveld of South Africa.

  4. Microbial community assembly and metabolic function during mammalian corpse decomposition

    USGS Publications Warehouse

    Metcalf, Jessica L; Xu, Zhenjiang Zech; Weiss, Sophie; Lax, Simon; Van Treuren, Will; Hyde, Embriette R.; Song, Se Jin; Amir, Amnon; Larsen, Peter; Sangwan, Naseer; Haarmann, Daniel; Humphrey, Greg C; Ackermann, Gail; Thompson, Luke R; Lauber, Christian; Bibat, Alexander; Nicholas, Catherine; Gebert, Matthew J; Petrosino, Joseph F; Reed, Sasha C.; Gilbert, Jack A; Lynne, Aaron M; Bucheli, Sibyl R; Carter, David O; Knight, Rob

    2016-01-01

    Vertebrate corpse decomposition provides an important stage in nutrient cycling in most terrestrial habitats, yet microbially mediated processes are poorly understood. Here we combine deep microbial community characterization, community-level metabolic reconstruction, and soil biogeochemical assessment to understand the principles governing microbial community assembly during decomposition of mouse and human corpses on different soil substrates. We find a suite of bacterial and fungal groups that contribute to nitrogen cycling and a reproducible network of decomposers that emerge on predictable time scales. Our results show that this decomposer community is derived primarily from bulk soil, but key decomposers are ubiquitous in low abundance. Soil type was not a dominant factor driving community development, and the process of decomposition is sufficiently reproducible to offer new opportunities for forensic investigations.

  6. Dynamic Load Balancing Based on Constrained K-D Tree Decomposition for Parallel Particle Tracing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Jiang; Guo, Hanqi; Yuan, Xiaoru

    Particle tracing is a fundamental technique in flow field data visualization. In this work, we present a novel dynamic load balancing method for parallel particle tracing. Specifically, we employ a constrained k-d tree decomposition approach to dynamically redistribute tasks among processes. Each process is initially assigned a regularly partitioned block along with a duplicated ghost layer, within the memory limit. During particle tracing, the k-d tree decomposition is performed dynamically by constraining the cutting planes to the overlap range of the duplicated data. This ensures that particles are redistributed among processes as evenly as possible, while the newly assigned particles for a process always lie within its block. Results show the good load balance and high efficiency of our method.
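    The core of a k-d tree decomposition is a recursive median split along alternating axes, so that every leaf ends up with a balanced share of the particles. The sketch below shows that balancing step only; the constraint on cutting planes and the parallel redistribution from the paper are omitted, and all names are illustrative:

```python
import numpy as np

def kd_partition(points, depth=0, max_points=2):
    """Recursively split an (N, d) array of particle positions at the
    median along alternating axes; returns a list of balanced leaves."""
    if len(points) <= max_points:
        return [points]
    axis = depth % points.shape[1]          # cycle through the d axes
    order = np.argsort(points[:, axis])     # sort along the cutting axis
    mid = len(points) // 2                  # median split -> equal halves
    left, right = points[order[:mid]], points[order[mid:]]
    return (kd_partition(left, depth + 1, max_points)
            + kd_partition(right, depth + 1, max_points))
```

    In the paper's method, each leaf would be assigned to a process, with the cutting planes additionally constrained to lie in the ghost-layer overlap so reassigned particles stay inside the process's duplicated block.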

  7. PROCESS OF COATING WITH NICKEL BY THE DECOMPOSITION OF NICKEL CARBONYL

    DOEpatents

    Hoover, T.B.

    1959-04-01

    An improved process is presented for the deposition of nickel coatings by the thermal decomposition of nickel carbonyl vapor. The improvement consists in incorporating a small amount of hydrogen sulfide gas in the nickel carbonyl plating gas. It is postulated that the hydrogen sulfide functions as a catalyst.

  8. A characterization of the two-step reaction mechanism of phenol decomposition by a Fenton reaction

    NASA Astrophysics Data System (ADS)

    Valdés, Cristian; Alzate-Morales, Jans; Osorio, Edison; Villaseñor, Jorge; Navarro-Retamal, Carlos

    2015-11-01

    Phenol is one of the worst contaminants known to date, and its degradation has been a crucial task over the years. Here, the decomposition process of phenol in a Fenton reaction is described. Using scavengers, it was observed that the decomposition of phenol was mainly driven by the production of hydroxyl radicals. Experimental and theoretical activation energies (Ea) for the phenol oxidation intermediates were calculated. According to these Ea, phenol decomposition is a two-step reaction mechanism mediated predominantly by hydroxyl radicals, producing a decomposition yield order of hydroquinone > catechol > resorcinol. Furthermore, traces of reaction-derived acids were detected by HPLC and GC-MS.
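    Activation energies enter kinetics through the Arrhenius relation, k = A·exp(−Ea/RT). A small illustrative helper (the numerical values in the usage are hypothetical, not the paper's data):

```python
import numpy as np

R = 8.314  # gas constant, J/(mol*K)

def arrhenius_k(A, Ea, T):
    """Rate constant from the Arrhenius equation, with pre-exponential
    factor A (1/s), activation energy Ea (J/mol), and temperature T (K)."""
    return A * np.exp(-Ea / (R * T))
```

    Conversely, measuring k at two temperatures recovers Ea from the ratio, Ea = R·ln(k2/k1)/(1/T1 − 1/T2), which is how experimental activation energies are typically extracted for comparison with theory.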

  9. Decomposition

    USGS Publications Warehouse

    Middleton, Beth A.

    2014-01-01

    A cornerstone of ecosystem ecology, decomposition was recognized as a fundamental process driving the exchange of energy in ecosystems by early ecologists such as Lindeman (1942) and Odum (1960). In the history of ecology, studies of decomposition were incorporated into the International Biological Program in the 1960s to compare the nature of organic matter breakdown across ecosystem types. Such studies still play an important role in ecological research today. More recent refinements have brought debates on the relative roles of microbes, invertebrates, and the environment in the breakdown and release of carbon into the atmosphere, as well as on how nutrient cycling, production, and other ecosystem processes regulated by decomposition may shift with climate change. Therefore, this bibliography examines the primary literature related to organic matter breakdown, but it also explores topics in which decomposition plays a key supporting role, including vegetation composition, latitudinal gradients, altered ecosystems, anthropogenic impacts, carbon storage, and climate change models. Knowledge of these topics is relevant both to the study of ecosystem ecology and to projections of future conditions for human societies.

  10. Possibility of H2O2 decomposition in thin liquid films on Mars

    NASA Astrophysics Data System (ADS)

    Kereszturi, Akos; Gobi, Sandor

    2014-11-01

    In this work the pathways and possibilities of H2O2 decomposition on Mars in microscopic liquid interfacial water were analyzed by kinetic calculations. Thermal and photochemical driven decomposition, just like processes catalyzed by various metal oxides, is too slow compared to the annual duration while such microscopic liquid layers exist on Mars today, to produce substantial decomposition. The most effective analyzed process is catalyzed by Fe ions, which could decompose H2O2 under pH<4.5 with a half life of 1-2 days. This process might be important during volcanically influenced periods when sulfur release produces acidic pH, and rotational axis tilt change driven climatic changes also influence the volatile circulation and spatial occurrence just like the duration of thin liquid layer. Under current conditions, using the value of 200 K as the temperature in interfacial water (at the southern hemisphere), and applying Phoenix lander's wet chemistry laboratory results, the pH is not favorable for Fe mobility and this kind of decomposition. Despite current conditions (especially pH) being unfavorable for H2O2 decomposition, microscopic scale interfacial liquid water still might support the process. By the reaction called heterogeneous catalysis, without acidic pH and mobile Fe, but with minerals surfaces containing Fe decomposition of H2O2 with half life of 20 days can happen. This duration is still longer but not several orders than the existence of springtime interfacial liquid water on Mars today. This estimation is relevant for activation energy controlled reaction rates. The other main parameter that may influence the reaction rate is the diffusion speed. Although the available tests and theoretical calculations do not provide firm values for the diffusion speed in such a “2-dimensional” environment, using relevant estimations this parameter in the interfacial liquid layer is smaller than in bulk water. 
    But the 20-day half-life mentioned above remains the relevant figure, as the activation-energy-driven reaction rate, not the diffusion speed, is the main limiting factor in the decomposition. A duration of dozens of days is longer than, but not by orders of magnitude, the expected lifetime of springtime interfacial liquid water on Mars today. The results suggest such decomposition may happen today; however, because of our limited knowledge of chemical processes in thin interfacial liquid layers, this possibility awaits confirmation, which points to the importance of laboratory tests to validate the proposed process. Although some tests have been carried out for diffusion in a nearly 2-dimensional liquid, the same is not true for the activation energy, for which only the value from "normal" bulk measurements was applied. Even if H2O2 decomposition is too slow today, analyzing such a process is important, because under volcanic influence more effective decomposition might take place in thin interfacial liquids, in a climate close to today's, if released sulfur produces pH < 4.5. Large quantities and widespread occurrence of bulk liquid water are not expected in the Amazonian period, but interfacial liquid water probably appeared regularly, and its locations, especially during volcanically active periods, might make certain sites more interesting for astrobiology than others owing to their lower concentration of oxidizing H2O2.
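    The half-life figures in this record translate directly into surviving fractions under first-order kinetics. A minimal sketch of that arithmetic (the 30-day window for springtime interfacial water is our illustrative assumption, not a value from the abstract):

```python
import math

def remaining_fraction(half_life_days, elapsed_days):
    """Fraction of H2O2 surviving first-order decay after elapsed_days."""
    k = math.log(2) / half_life_days  # first-order rate constant
    return math.exp(-k * elapsed_days)

# Fe-ion catalysis (half-life ~1.5 days) vs. heterogeneous mineral-surface
# catalysis (~20 days) over a nominal 30-day liquid-water window.
print(remaining_fraction(1.5, 30))   # ~1e-06: essentially complete decomposition
print(remaining_fraction(20.0, 30))  # ~0.35: about a third of the H2O2 survives
```

    The contrast illustrates the abstract's point: the 20-day process is marginal but not hopeless on the timescale of seasonal interfacial water, while the Fe-ion pathway would be decisive.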

  11. Biogeochemistry of Decomposition and Detrital Processing

    NASA Astrophysics Data System (ADS)

    Sanderman, J.; Amundson, R.

    2003-12-01

    Decomposition is a key ecological process that roughly balances net primary production in terrestrial ecosystems and is essential for resupplying nutrients to the plant community. Decomposition consists of three concurrent processes: comminution or fragmentation, leaching of water-soluble compounds, and microbial catabolism. Decomposition can also be viewed as a sequential process, which Eijsackers and Zehnder (1990) compare to a Russian matryoshka doll. Soil macrofauna fragment and partially solubilize plant residues, facilitating establishment of a community of decomposer microorganisms. This decomposer community gradually shifts as the most easily degraded plant compounds are utilized and the more recalcitrant materials begin to accumulate. Given enough time and the proper environmental conditions, most naturally occurring compounds can be completely mineralized to inorganic forms. Simultaneously with mineralization, the process of humification transforms a fraction of the plant residues into stable soil organic matter (SOM), or humus. For reference, Schlesinger (1990) estimated that only ~0.7% of detritus eventually becomes stabilized into humus. Decomposition plays a key role in the cycling of most plant macro- and micronutrients and in the formation of humus. Figure 1 places the roles of detrital processing and mineralization within the context of the biogeochemical cycling of essential plant nutrients. Chapin (1991) found that while the atmosphere supplied 4% and mineral weathering supplied no nitrogen and <1% of phosphorus, internal nutrient recycling is the source of >95% of all the nitrogen and phosphorus taken up by tundra species in Barrow, Alaska. In a cool temperate forest, nutrient recycling accounted for 93%, 89%, 88%, and 65% of total sources of nitrogen, phosphorus, potassium, and calcium, respectively (Chapin, 1991). Figure 1. A decomposition-centric biogeochemical model of nutrient cycling.
    Although there is significant external input (1) and output (2) from neighboring ecosystems (such as erosion), weathering of primary minerals (3), loss of secondary minerals (4), atmospheric deposition and N-fixation (5) and volatilization (6), the majority of plant-available nutrients are supplied by internal recycling through decomposition. Nutrients that are taken up by plants (7) are either consumed by fauna (8) and returned to the soil through defecation and mortality (10) or returned to the soil through litterfall and mortality (9). Detritus and humus can be immobilized into microbial biomass (11 and 13). Humus is formed by the transformation and stabilization of detrital (12) and microbial (14) compounds. During these transformations, SOM is being continually mineralized by the microorganisms (15) replenishing the inorganic nutrient pool (after Swift et al., 1979). The second major ecosystem role of decomposition is in the formation and stabilization of humus. The cycling and stabilization of SOM in the litter-soil system is presented in a conceptual model in Figure 2. Parallel with litterfall and most root turnover, detrital processing is concentrated at or near the soil surface. As labile SOM is preferentially degraded, there is a progressive shift from labile to passive SOM with increasing depth. There are three basic mechanisms for SOM accumulation in the mineral soil: bioturbation or physical mixing of the soil by burrowing animals (e.g., earthworms, gophers, etc.), in situ decomposition of roots and root exudates, and the leaching of soluble organic compounds. In the absence of bioturbation, distinct litter layers often accumulate above the mineral soil. In grasslands where the majority of net primary productivity (NPP) is allocated belowground, root inputs will dominate. In sandy soils with ample rainfall, leaching may be the major process incorporating carbon into the soil. Figure 2. Conceptual model of carbon cycling in the litter-soil system.
    In each horizon or depth increment, SOM is represented by three pools: labile SOM, slow SOM, and passive SOM. Inputs include aboveground litterfall and belowground root turnover and exudates, which will be distributed among the pools based on the biochemical nature of the material. Outputs from each pool include mineralization to CO2 (dashed lines), humification (labile→slow→passive), and downward transport due to leaching and physical mixing. Comminution by soil fauna will accelerate the decomposition process and reveal previously inaccessible materials. Soil mixing and other disturbances can also make physically protected passive SOM available to microbial attack (passive→slow). There exists an extensive body of literature on the subject of decomposition that draws from many disciplines - including ecology, soil science, microbiology, plant physiology, biochemistry, and zoology. In this chapter, we have attempted to draw information from all of these fields to present an integrated analysis of decomposition in a biogeochemical context. We begin by reviewing the composition of detrital resources and SOM (Section 8.07.2), the organisms responsible for decomposition (Section 8.07.3), and some methods for quantifying decomposition rates (Section 8.07.4). This is followed by a discussion of the mechanisms behind decomposition (Section 8.07.5), humification (Section 8.07.6), and the controls on these processes (Section 8.07.7). We conclude the chapter with a brief discussion of how current biogeochemical models incorporate this information (Section 8.07.8).

  12. ENVIRONMENTAL ASSESSMENT OF THE BASE CATALYZED DECOMPOSITION (BCD) PROCESS

    EPA Science Inventory

    This report summarizes laboratory-scale, pilot-scale, and field performance data on the BCD (Base Catalyzed Decomposition) technology collected to date by various governmental, academic, and private organizations.

  13. Aligning observed and modelled behaviour based on workflow decomposition

    NASA Astrophysics Data System (ADS)

    Wang, Lu; Du, YuYue; Liu, Wei

    2017-09-01

    When business processes are mostly supported by information systems, the availability of event logs generated from these systems, as well as the requirement for appropriate process models, is increasing. Business processes can be discovered, monitored, and enhanced by extracting process-related information. However, some events cannot be correctly identified because of the explosive growth in the volume of event logs. Therefore, a new process mining technique based on a workflow decomposition method is proposed in this paper. Petri nets (PNs) are used to describe business processes, and conformance checking of event logs and process models is investigated. A decomposition approach is proposed to divide large process models and event logs into several separate parts that can be analysed independently, while an alignment approach based on a state equation method in PN theory enhances the performance of conformance checking. Both approaches are implemented in the process mining framework ProM. The correctness and effectiveness of the proposed methods are illustrated through experiments.
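    As a rough illustration of the conformance-checking idea (a toy token replay, not ProM's actual alignment machinery, which uses cost-based search and the state equation), the sketch below replays traces against a two-transition workflow net and counts missing tokens as deviations. The net layout and trace contents are illustrative assumptions:

```python
# Toy workflow net: p_start --t_a--> p1 --t_b--> p_end.
# Each transition maps to (consumed tokens, produced tokens) per place.
NET = {
    "a": ({"p_start": 1}, {"p1": 1}),
    "b": ({"p1": 1}, {"p_end": 1}),
}

def replay(trace, net, initial={"p_start": 1}):
    """Replay an event trace; return the number of missing tokens (0 = fits)."""
    marking = dict(initial)
    missing = 0
    for event in trace:
        consume, produce = net[event]
        for place, n in consume.items():
            have = marking.get(place, 0)
            if have < n:            # token missing: the log deviates from the model
                missing += n - have
                have = n            # insert artificial tokens and continue replaying
            marking[place] = have - n
        for place, n in produce.items():
            marking[place] = marking.get(place, 0) + n
    return missing

print(replay(["a", "b"], NET))  # 0 -> trace fits the model
print(replay(["b"], NET))       # 1 -> "b" fired without "a": one deviation
```

    Decomposed conformance checking applies this kind of replay independently to sub-nets and sub-logs, which is what makes the paper's divide-and-conquer approach pay off on large models.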

  14. Integrability conditions for Killing-Yano tensors and conformal Killing-Yano tensors

    NASA Astrophysics Data System (ADS)

    Batista, Carlos

    2015-01-01

    The integrability conditions for the existence of a conformal Killing-Yano tensor of arbitrary order are worked out in all dimensions and expressed in terms of the Weyl tensor. As a consequence, the integrability conditions for the existence of a Killing-Yano tensor are also obtained. By means of such conditions, it is shown that in certain Einstein spaces one can use a conformal Killing-Yano tensor of order p to generate a Killing-Yano tensor of order (p − 1). Finally, it is proved that in maximally symmetric spaces the covariant derivative of a Killing-Yano tensor is a closed conformal Killing-Yano tensor and that every conformal Killing-Yano tensor is uniquely decomposed as the sum of a Killing-Yano tensor and a closed conformal Killing-Yano tensor.
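    For reference, the objects discussed here can be sketched in standard notation (a sketch only; conventions for the numerical factors vary between authors):

```latex
% Conformal Killing-Yano (CKY) equation for a p-form Y in n dimensions:
\nabla_X Y \;=\; \frac{1}{p+1}\, i_X \,\mathrm{d}Y \;-\; \frac{1}{n-p+1}\, X^\flat \wedge \delta Y ,
% where i_X is interior contraction, X^\flat the metric dual of X,
% and \delta the codifferential.
% Y is a Killing-Yano (KY) tensor when \delta Y = 0,
% and a closed CKY tensor when \mathrm{d}Y = 0.
% The unique decomposition stated in the abstract:
Y \;=\; Y_{\mathrm{KY}} \;+\; Y_{\mathrm{closed\;CKY}} .
```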

  15. Leaf Litter Mixtures Alter Microbial Community Development: Mechanisms for Non-Additive Effects in Litter Decomposition

    PubMed Central

    Chapman, Samantha K.; Newman, Gregory S.; Hart, Stephen C.; Schweitzer, Jennifer A.; Koch, George W.

    2013-01-01

    To what extent microbial community composition can explain variability in ecosystem processes remains an open question in ecology. Microbial decomposer communities can change during litter decomposition due to biotic interactions and shifting substrate availability. Though the relative abundance of decomposers may change when leaf litters are mixed, linking these shifts to the non-additive patterns often recorded in mixed-species litter decomposition rates has been elusive; such links would tie community composition to ecosystem function. We extracted phospholipid fatty acids (PLFAs) from single-species and mixed-species leaf litterbags after 10 and 27 months of decomposition in a mixed conifer forest. Total PLFA concentrations were 70% higher on litter mixtures than single litter types after 10 months, but were only 20% higher after 27 months. Similarly, fungal-to-bacterial ratios differed between mixed and single litter types after 10 months of decomposition, but equalized over time. Microbial community composition, as indicated by principal components analyses, differed due to both litter mixing and stage of litter decomposition. The PLFA biomarkers a15∶0 and cy17∶0, which indicate gram-positive and gram-negative bacteria respectively, drove these shifts in particular. Total PLFA correlated significantly with single-litter mass loss early in decomposition but not at later stages. We conclude that litter mixing alters microbial community development, which can contribute to synergisms in litter decomposition. These findings advance our understanding of how changing forest biodiversity can alter microbial communities and the ecosystem processes they mediate. PMID:23658639

  16. Characteristics of root decomposition in a tropical rainforest in Sarawak, Malaysia

    NASA Astrophysics Data System (ADS)

    Ohashi, Mizue; Makita, Naoki; Katayam, Ayumi; Kume, Tomonori; Matsumoto, Kazuho; Khoon Kho, L.

    2016-04-01

    Woody roots play a significant role in forest carbon cycling, as up to 60 percent of tree photosynthetic production can be allocated belowground. Root decay is one of the main processes of soil C dynamics and potentially relates to soil C sequestration. However, much less attention has been paid to root litter decomposition than to leaf litter, because roots are hidden from view. Previous studies have revealed that the physico-chemical quality of roots, climate, and soil organisms significantly affect root decomposition. However, patterns and mechanisms of root decomposition are still poorly understood because of the high variability of root properties, field environments, and potential decomposers. For example, root size is likely a factor controlling decomposition rates, but a general understanding of the difference between coarse and fine root decomposition is still lacking. Also, root decomposition is known to be performed by soil animals, fungi, and bacteria, but their relative importance is poorly understood. In this study, therefore, we aimed to characterize root decomposition in a tropical rainforest in Sarawak, Malaysia, and to clarify the impact of soil organisms and root size on root litter decomposition. We buried soil cores containing fine and coarse root litter bags in Lambir Hills National Park. Three types of soil cores were prepared, covered by 1.5 cm plastic mesh, root-impermeable sheet (50 µm), or fungus-impermeable sheet (1 µm). The soil cores were buried in February 2013 and collected four times: 134, 226, 786, and 1151 days after installation. We found that nearly 80 percent of the coarse root litter was decomposed after two years, whereas only 60 percent of the fine root litter was decomposed. Our results also showed significantly different decomposition ratios among the core types, suggesting different contributions of soil organisms to the decomposition process.

  17. Impacts Of Long-Term Prescribed Fire On Decomposition And Litter Quality In Uneven-Aged Loblolly Pine Stands

    Treesearch

    Michele L. Renschin; Hal O. Liechty; Michael G. Shelton

    2002-01-01

    Abstract - Although fire has long been an important forest management tool in the southern United States, little is known concerning the effects of long-term fire use on nutrient cycling and decomposition. To better understand the effects of fire on these processes, decomposition rates, and foliage litter quality were quantified in a study...

  18. Laboratory Tests to Determine the Chemical and Physical Characteristics of Propellant-Solvent-Fuel Oil Mixtures

    DTIC Science & Technology

    1990-02-01

    The available portion of this record consists of fragments of the report's table of contents and figure list: Decomposition (p. 165); Part IV, Thermal Decomposition - Analytical Methodologies (p. 167); Part V, Miscellaneous; a differential scanning calorimetry curve for the decomposition of a smokeless-grade nitrocellulose (Figure 12); and a note that nitration of the cellulose backbone with nitrating acids of high water content resulted in hydrolysis of the pentosans without the desired result of nitration.

  19. Fungal colonization and decomposition of leaves and stems of Salix arctica on deglaciated moraines in high-Arctic Canada

    NASA Astrophysics Data System (ADS)

    Osono, Takashi; Matsuoka, Shunsuke; Hirose, Dai; Uchida, Masaki; Kanda, Hiroshi

    2014-06-01

    Fungal colonization, succession, and decomposition of leaves and stems of Salix arctica were studied to estimate the roles of fungi in decomposition processes in the high Arctic. Samples were collected from five moraines with different periods of development since deglaciation to investigate the effects of ecosystem development on decomposition during primary succession. The total hyphal lengths and the length of darkly pigmented hyphae increased during decomposition of leaves and stems and did not vary among the moraines. Four fungal morphotaxa were frequently isolated from both leaves and stems. The frequencies of occurrence of two morphotaxa varied with the decay class of leaves and/or stems. The hyphal lengths and the frequencies of occurrence of fungal morphotaxa were positively or negatively correlated with the contents of organic chemical components and nutrients in leaves and stems, suggesting roles for fungi in the chemical changes observed in the field. Pure-culture decomposition tests demonstrated that the fungal morphotaxa were cellulose decomposers. Our results suggest that fungi took part in the chemical changes in decomposing leaves and stems even under the harsh environment of the high Arctic.

  20. Scoring of Decomposition: A Proposed Amendment to the Method When Using a Pig Model for Human Studies.

    PubMed

    Keough, Natalie; Myburgh, Jolandie; Steyn, Maryna

    2017-07-01

    Decomposition studies often use pigs as proxies for human cadavers. However, differences in decomposition sequences/rates relative to humans have not been scientifically examined. Descriptions of five main decomposition stages (humans) were developed and refined by Galloway and later by Megyesi. However, whether these changes/processes are alike in pigs is unclear. Any differences can have significant effects when pig models are used for human PMI estimation. This study compared human decomposition models to the changes observed in pigs. Twenty pigs (50-90 kg) were decomposed over five months and decompositional features recorded. Total body scores (TBS) were calculated. Significant differences were observed during early decomposition between pigs and humans. An amended scoring system to be used in future studies was developed. Standards for PMI estimation derived from porcine models may not directly apply to humans and may need adjustment. Porcine models, however, remain valuable to study variables influencing decomposition. © 2016 American Academy of Forensic Sciences.

  1. Theoretical studies of the decomposition mechanisms of 1,2,4-butanetriol trinitrate.

    PubMed

    Pei, Liguan; Dong, Kehai; Tang, Yanhui; Zhang, Bo; Yu, Chang; Li, Wenzuo

    2017-12-06

    Density functional theory (DFT) and canonical variational transition-state theory combined with a small-curvature tunneling correction (CVT/SCT) were used to explore the decomposition mechanisms of 1,2,4-butanetriol trinitrate (BTTN) in detail. The results showed that the γ-H abstraction reaction is the initial pathway for autocatalytic BTTN decomposition. The three possible hydrogen atom abstraction reactions are all exothermic. The rate constants for autocatalytic BTTN decomposition are 3 to 10^40 times greater than the rate constants for the two unimolecular decomposition reactions (O-NO2 cleavage and HONO elimination). The process of BTTN decomposition can be divided into two stages according to whether the NO2 concentration is above a threshold value. HONO elimination is the main reaction channel during the first stage because autocatalytic decomposition requires NO2 and the concentration of NO2 is initially low. As the reaction proceeds, the concentration of NO2 gradually increases; when it exceeds the threshold value, the second stage begins, with autocatalytic decomposition becoming the main reaction channel.

  2. Numerical Estimation of the Elastic Properties of Thin-Walled Structures Manufactured from Short-Fiber-Reinforced Thermoplastics

    NASA Astrophysics Data System (ADS)

    Altenbach, H.; Naumenko, K.; L'vov, G. I.; Pilipenko, S. N.

    2003-05-01

    A model which allows us to estimate the elastic properties of thin-walled structures manufactured by injection molding is presented. The starting step is the numerical prediction of the microstructure of a short-fiber-reinforced composite developed during the filling stage of the manufacturing process. For this purpose, the Moldflow Plastic Insight® commercial program is used. As a result of simulating the filling process, a second-rank orientation tensor characterizing the microstructure of the material is obtained. The elastic properties of the prepared material locally depend on the orientational distribution of fibers. The constitutive equation is formulated by means of orientational averaging for a given orientation tensor. The tensor of elastic material properties is computed and translated into the format for a stress-strain analysis based on the ANSYS® finite-element code. The numerical procedure and the convergence of results are discussed for a thin strip, a rectangular plate, and a shell of revolution. The influence of manufacturing conditions on the stress-strain state of statically loaded thin-walled elements is illustrated.
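    The second-rank orientation tensor mentioned here is the average of outer products of unit fiber directions. A minimal sketch (2-D for brevity; the direction samples are made-up illustrative data, not Moldflow output):

```python
import numpy as np

def orientation_tensor(directions):
    """Second-rank orientation tensor a_ij = <p_i p_j>, averaged over unit fiber directions."""
    p = np.asarray(directions, dtype=float)
    p /= np.linalg.norm(p, axis=1, keepdims=True)  # enforce unit vectors
    return np.einsum("ki,kj->ij", p, p) / len(p)

# Perfectly aligned fibers along x (note p and -p give the same outer product,
# so fiber sense does not matter): a -> [[1, 0], [0, 0]].
aligned = orientation_tensor([[1, 0], [1, 0], [-1, 0]])
print(aligned)  # ~[[1, 0], [0, 0]]; the trace of an orientation tensor is always 1
```

    Orientational averaging of the stiffness then weights the unidirectional elastic constants by this tensor at each material point, which is the step the paper's constitutive equation formalizes.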

  3. Decomposition Rate and Pattern in Hanging Pigs.

    PubMed

    Lynch-Aird, Jeanne; Moffatt, Colin; Simmons, Tal

    2015-09-01

    Accurate prediction of the postmortem interval requires an understanding of the decomposition process and the factors acting upon it. A controlled experiment, over 60 days at an outdoor site in the northwest of England, used 20 freshly killed pigs (Sus scrofa) as human analogues to study decomposition rate and pattern. Ten pigs were hung off the ground and ten placed on the surface. Observed differences in the decomposition pattern required a new decomposition scoring scale to be produced for the hanging pigs to enable comparisons with the surface pigs. The difference in the rate of decomposition between hanging and surface pigs was statistically significant (p=0.001). Hanging pigs reached advanced decomposition stages sooner, but lagged behind during the early stages. This delay is believed to result from lower variety and quantity of insects, due to restricted beetle access to the aerial carcass, and/or writhing maggots falling from the carcass. © 2015 American Academy of Forensic Sciences.

  4. Insight into litter decomposition driven by nutrient demands of symbiosis system through the hypha bridge of arbuscular mycorrhizal fungi.

    PubMed

    Kong, Xiangshi; Jia, Yanyan; Song, Fuqiang; Tian, Kai; Lin, Hong; Bei, Zhanlin; Jia, Xiuqin; Yao, Bei; Guo, Peng; Tian, Xingjun

    2018-02-01

    Arbuscular mycorrhizal fungi (AMF) play an important role in litter decomposition. This study investigated how soil nutrient level affects the process. Results showed that AMF colonization had no significant effect on litter decomposition under normal soil nutrient conditions. However, litter decomposition was accelerated significantly under lower nutrient conditions, and soil microbial biomass in the decomposition system increased significantly. In particular, under the moderately reduced nutrient treatment (half the normal soil nutrient level), litter exhibited the highest decomposition rate, AMF hyphae reached the greatest density, and enzymes (especially nitrate reductase) showed the highest activities. Meanwhile, the immobilization of nitrogen (N) in the decomposing litter decreased remarkably. Our results suggest that the roles AMF play in the ecosystem are largely affected by soil nutrient levels. At normal soil nutrient levels, AMF had limited effects in promoting decomposition. When the soil nutrient level decreased, the promoting effect of AMF on litter decomposition began to appear, especially on N mobilization. However, under extremely low nutrient conditions, AMF showed less influence on decomposition and may even compete with decomposer microorganisms for nutrients.

  5. Effects of motion and b-matrix correction for high resolution DTI with short-axis PROPELLER-EPI

    PubMed Central

    Aksoy, Murat; Skare, Stefan; Holdsworth, Samantha; Bammer, Roland

    2010-01-01

    Short-axis PROPELLER-EPI (SAP-EPI) has proven very effective in providing high-resolution diffusion-weighted and diffusion tensor data. The self-navigation capabilities of SAP-EPI allow one to correct for motion, phase errors, and geometric distortion. However, in the presence of patient motion, the change in the effective diffusion-encoding direction (i.e. the b-matrix) between successive PROPELLER 'blades' can decrease the accuracy of the estimated diffusion tensors, which might result in erroneous reconstruction of white matter tracts in the brain. In this study, we investigate the effects of alterations in the b-matrix caused by patient motion, using SAP-EPI DTI as an example, and eliminate these effects by incorporating our novel single-step non-linear diffusion tensor estimation scheme into the SAP-EPI post-processing procedure. Our simulations and in-vivo studies showed that, in the presence of patient motion, correcting the b-matrix is necessary to obtain more accurate diffusion tensor and white matter pathway reconstructions. PMID:20222149
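    The b-matrix correction described here amounts to conjugating each blade's b-matrix by the rotation estimated for that blade. A hedged sketch of that one step (the 30-degree rotation and the single-gradient b-matrix are illustrative assumptions, not values from the paper):

```python
import numpy as np

def rotate_b_matrix(b, R):
    """Rotate the diffusion b-matrix to follow subject motion: b' = R b R^T."""
    return R @ b @ R.T

# b-matrix for a single gradient along x with b-value 1000 s/mm^2: b = b_val * g g^T
g = np.array([1.0, 0.0, 0.0])
b = 1000.0 * np.outer(g, g)

theta = np.deg2rad(30)  # hypothetical 30-degree head rotation about z
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
b_rot = rotate_b_matrix(b, R)

# The b-value (trace) is preserved, but the encoding direction has moved,
# which is exactly why per-blade tensor fits go wrong if the rotation is ignored.
print(np.trace(b_rot))  # 1000.0
```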

  6. Robotic Online Path Planning on Point Cloud.

    PubMed

    Liu, Ming

    2016-05-01

    This paper deals with the path-planning problem for mobile wheeled or tracked robots that drive in 2.5-D environments, where the traversable surface is usually considered a 2-D manifold embedded in a 3-D ambient space. Specifically, we aim at solving the 2.5-D navigation problem using a raw point cloud as input. The proposed method is independent of traditional surface parametrization or reconstruction methods, such as meshing, which generally have high computational complexity. Instead, we utilize the output of a 3-D tensor voting framework on the raw point clouds. The computation of tensor voting is accelerated by an optimized implementation on a graphics processing unit. Based on the tensor voting results, a novel local Riemannian metric is defined using the saliency components, which helps model the latent traversable surface. Using the proposed metric, we show experimentally that geodesics in the 3-D tensor space lead to rational path-planning results. Compared to traditional methods, the results reveal the advantages of the proposed method in terms of smoothing the robot maneuver while considering the minimum travel distance.

  7. Probabilistic-driven oriented Speckle reducing anisotropic diffusion with application to cardiac ultrasonic images.

    PubMed

    Vegas-Sanchez-Ferrero, G; Aja-Fernandez, S; Martin-Fernandez, M; Frangi, A F; Palencia, C

    2010-01-01

    A novel anisotropic diffusion filter with application to cardiac ultrasonic images is proposed in this work. It includes probabilistic models which describe the probability density function (PDF) of tissues and adapts the diffusion tensor to the image iteratively. For this purpose, a preliminary study is performed to select the probability models that best fit the statistical behavior of each tissue class in cardiac ultrasonic images. Then, the parameters of the diffusion tensor are defined taking into account the statistical properties of the image at each voxel. When the structure tensor of the probability of belonging to each tissue is included in the diffusion tensor definition, better boundary estimates can be obtained than by calculating the boundaries directly from the image. This is the main contribution of this work. Additionally, the proposed method follows the statistical properties of the image in each iteration. This is a second contribution, since state-of-the-art methods assume that the noise and statistical properties of the image do not change during the filtering process.

  8. Anisotropic mesoscale eddy transport in ocean general circulation models

    NASA Astrophysics Data System (ADS)

    Reckinger, Scott; Fox-Kemper, Baylor; Bachman, Scott; Bryan, Frank; Dennis, John; Danabasoglu, Gokhan

    2014-11-01

    In modern climate models, the effects of oceanic mesoscale eddies are introduced by relating subgrid eddy fluxes to the resolved gradients of buoyancy or other tracers, where the proportionality is, in general, governed by an eddy transport tensor. The symmetric part of the tensor, which represents the diffusive effects of mesoscale eddies, is universally treated isotropically. However, the diffusive processes that the parameterization approximates, such as shear dispersion and potential vorticity barriers, typically have strongly anisotropic characteristics. Generalizing the eddy diffusivity tensor for anisotropy extends the number of parameters from one to three: major diffusivity, minor diffusivity, and alignment. The Community Earth System Model (CESM) with the anisotropic eddy parameterization is used to test various choices for the parameters, which are motivated by observations and the eddy transport tensor diagnosed from high resolution simulations. Simply setting the ratio of major to minor diffusivities to a value of five globally, while aligning the major axis along the flow direction, improves biogeochemical tracer ventilation and reduces temperature and salinity biases. These effects can be improved by parameterizing the oceanic anisotropic transport mechanisms.
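    The three-parameter anisotropic diffusivity tensor (major diffusivity, minor diffusivity, alignment) can be assembled as a rotated diagonal matrix. A 2-D sketch using the ratio of five from the abstract (the absolute magnitudes and flow direction are illustrative assumptions, not CESM settings):

```python
import numpy as np

def anisotropic_K(kappa_major, kappa_minor, flow_dir):
    """Symmetric 2x2 eddy diffusivity tensor with its major axis aligned to the flow."""
    u = np.asarray(flow_dir, dtype=float)
    u = u / np.linalg.norm(u)        # major axis: along-flow unit vector
    v = np.array([-u[1], u[0]])      # minor axis: perpendicular to the flow
    R = np.column_stack([u, v])      # rotation into the flow-aligned frame
    return R @ np.diag([kappa_major, kappa_minor]) @ R.T

# Major-to-minor ratio of 5, flow toward the northeast (values in m^2/s, assumed)
K = anisotropic_K(5.0e3, 1.0e3, [1.0, 1.0])
print(np.linalg.eigvalsh(K))  # eigenvalues recover kappa_minor and kappa_major
```

    The isotropic case is recovered when the two diffusivities are equal, at which point the alignment parameter drops out; that is why generalizing from one parameter to three leaves existing isotropic configurations as a special case.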

  9. Optimizing Tensor Contraction Expressions for Hybrid CPU-GPU Execution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, Wenjing; Krishnamoorthy, Sriram; Villa, Oreste

    2013-03-01

    Tensor contractions are generalized multidimensional matrix multiplication operations that widely occur in quantum chemistry. Efficient execution of tensor contractions on Graphics Processing Units (GPUs) requires several challenges to be addressed, including index permutation and small dimension-sizes reducing thread block utilization. Moreover, to apply the same optimizations to various expressions, we need a code generation tool. In this paper, we present our approach to automatically generate CUDA code to execute tensor contractions on GPUs, including management of data movement between CPU and GPU. To evaluate our tool, GPU-enabled code is generated for the most expensive contractions in CCSD(T), a key coupled cluster method, and incorporated into NWChem, a popular computational chemistry suite. For this method, we demonstrate speedup over a factor of 8.4 using one GPU (instead of one core per node) and over 2.6 when utilizing the entire system using a hybrid CPU+GPU solution with 2 GPUs and 5 cores (instead of 7 cores per node). Finally, we analyze the implementation behavior on future GPU systems.
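    A tensor contraction of the kind discussed is a generalized matrix multiplication. The sketch below (with illustrative index labels and sizes, not an actual CCSD(T) term) also shows the permute-reshape-matmul pipeline that a code generator would emit, which is why index permutation cost matters:

```python
import numpy as np

# Representative contraction: R[a,b,i,j] = sum_c T[a,c,i,j] * V[b,c]
a, b, c, i, j = 8, 8, 8, 4, 4
T = np.random.rand(a, c, i, j)
V = np.random.rand(b, c)

# Direct form via einsum:
R = np.einsum("acij,bc->abij", T, V)

# Equivalent explicit pipeline: permute T so the contracted index c leads,
# flatten the rest, do one matrix multiply, then permute the result back.
Tm = T.reshape(a, c, i * j).transpose(1, 0, 2).reshape(c, a * i * j)
R2 = (V @ Tm).reshape(b, a, i, j).transpose(1, 0, 2, 3)

print(np.allclose(R, R2))  # True
```

    On a GPU the transpose steps become explicit data-movement kernels, and fusing or avoiding them is one of the optimizations this kind of code generator targets.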

  10. Voting based object boundary reconstruction

    NASA Astrophysics Data System (ADS)

    Tian, Qi; Zhang, Like; Ma, Jingsheng

    2005-07-01

    A voting-based object boundary reconstruction approach is proposed in this paper. Morphological techniques have been adopted in many video object extraction applications to reconstruct missing pixels. However, when the missing areas become large, morphological processing cannot produce good results. Recently, tensor voting has attracted attention, and it can be used for boundary estimation on curves or irregular trajectories. However, the complexity of saliency tensor creation limits its applications in real-time systems. An alternative approach based on tensor voting is introduced in this paper. Rather than creating saliency tensors, we use a '2-pass' method for orientation estimation. In the first pass, a Sobel detector is applied to a coarse boundary image to get the gradient map. In the second pass, each pixel casts decreasing weights based on its gradient information, and the direction with the maximum weight sum is selected as the correct orientation of the pixel. After the orientation map is obtained, pixels begin linking edges or intersections along their directions. The approach is applied to various video surveillance clips under different conditions, and the experimental results demonstrate significant improvement in the accuracy of the final extracted objects.
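    The '2-pass' idea, gradients first, then magnitude-weighted voting for an orientation, can be sketched as follows. This is a simplification (a single dominant orientation for the whole image rather than the per-pixel orientation map the paper computes), with an illustrative synthetic edge image:

```python
import numpy as np

def sobel_orientation(img):
    """Pass 1: Sobel gradients. Pass 2: pixels vote for orientation bins,
    weighted by gradient magnitude; the heaviest bin wins (radians, mod pi)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    pad = np.pad(img.astype(float), 1, mode="edge")
    H, W = img.shape
    gx = np.zeros((H, W))
    gy = np.zeros((H, W))
    for y in range(H):                     # naive convolution, clarity over speed
        for x in range(W):
            win = pad[y:y + 3, x:x + 3]
            gx[y, x] = (win * kx).sum()
            gy[y, x] = (win * ky).sum()
    mag = np.hypot(gx, gy)                 # voting weight for each pixel
    ang = np.arctan2(gy, gx) % np.pi       # edge orientations are mod 180 degrees
    bins = np.linspace(0, np.pi, 9)        # eight orientation bins
    hist, _ = np.histogram(ang, bins=bins, weights=mag)
    return bins[np.argmax(hist)]

# A vertical step edge has a purely horizontal gradient, so the dominant
# gradient orientation is ~0 radians.
img = np.zeros((16, 16))
img[:, 8:] = 1.0
print(sobel_orientation(img))  # 0.0
```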

  11. The role of electron heat flux in guide-field magnetic reconnection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hesse, Michael; Kuznetsova, Masha; Birn, Joachim

    2004-12-01

    A combination of analytical theory and particle-in-cell simulations is employed to investigate the electron dynamics near and at the site of guide-field magnetic reconnection. A detailed analysis of the contributions to the reconnection electric field shows that both bulk inertia and pressure-based quasiviscous processes are important for the electrons. Analytic scaling demonstrates that conventional approximations for the electron pressure tensor behavior in the dissipation region fail, and that heat flux contributions need to be accounted for. Based on the evolution equation of the heat flux three-tensor, which is derived in this paper, an approximate form of the relevant heat flux contributions to the pressure tensor is developed, which reproduces the numerical modeling result reasonably well. Based on this approximation, it is possible to develop a scaling of the electron current layer in the central dissipation region. It is shown that the pressure tensor contributions become important at the scale length defined by the electron Larmor radius in the guide magnetic field.

  12. A Catalog of Moment Tensors and Source-type Characterization for Small Events at Uturuncu Volcano, Bolivia

    NASA Astrophysics Data System (ADS)

    Alvizuri, C. R.; Tape, C.

    2015-12-01

    We present a catalog of full seismic moment tensors for 63 events from Uturuncu volcano in Bolivia. The events were recorded during 2011-2012 in the PLUTONS seismic array of 24 broadband stations. Most events had magnitudes between 0.5 and 2.0 and did not generate discernible surface waves; the largest event was Mw 2.8. For each event we computed the misfit between observed and synthetic waveforms, and we also used first-motion polarity measurements to reduce the number of possible solutions. Each moment tensor solution was obtained using a grid search over the six-dimensional space of moment tensors. For each event we characterize the variation of moment tensor source type by plotting the misfit function in eigenvalue space, represented by a lune. We plot the optimal solutions for the 63 events on the lune in order to identify three subsets of the catalog: (1) a set of isotropic events, (2) a set of tensional crack events, and (3) a swarm of events southeast of the volcanic center that appear to be double couples. The occurrence of positive isotropic events is consistent with other published results from volcanic and geothermal regions. Several of these previous results, as well as our results, cannot be interpreted within the context of either an oblique opening crack or a crack-plus-double-couple model; instead they require a multiple-process source model. Our study emphasizes the importance of characterizing uncertainties for full moment tensors, and it provides strong support for isotropic events at Uturuncu volcano.
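The lune representation used above maps a moment tensor's sorted eigenvalues to two angles (lune longitude gamma and latitude delta). A minimal sketch of that mapping, assuming the standard Tape & Tape eigenvalue parameterization (the function name is ours, not from the paper):

```python
import numpy as np

def lune_coordinates(M):
    """Map a symmetric 3x3 moment tensor to lune coordinates
    (gamma, delta) in degrees from its sorted eigenvalues."""
    lam = np.sort(np.linalg.eigvalsh(M))[::-1]  # lam1 >= lam2 >= lam3
    l1, l2, l3 = lam
    gamma = np.degrees(np.arctan2(-l1 + 2.0 * l2 - l3,
                                  np.sqrt(3.0) * (l1 - l3)))
    # cos(beta) clipped to guard against floating-point overshoot
    cos_beta = np.clip((l1 + l2 + l3) / (np.sqrt(3.0) * np.linalg.norm(lam)),
                       -1.0, 1.0)
    delta = 90.0 - np.degrees(np.arccos(cos_beta))  # lune latitude
    return gamma, delta
```

A pure explosion sits at the top of the lune (delta = 90) and a double couple at its center (gamma = delta = 0), which gives a quick sanity check.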

  13. A compositional approach to building applications in a computational environment

    NASA Astrophysics Data System (ADS)

    Roslovtsev, V. V.; Shumsky, L. D.; Wolfengagen, V. E.

    2014-04-01

    The paper presents an approach to creating an applicative computational environment that features the decomposition of computational processes and data, and a compositional approach to application building. The approach is based on the notion of a combinator, both in systems with variable binding (such as λ-calculi) and in those allowing programming without variables (combinatory logic style). We present a computation decomposition technique based on the structural decomposition of objects, with the focus on decomposing computations. The computational environment's architecture is based on a network whose nodes play several roles simultaneously.
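The combinator notion invoked above can be made concrete in a few lines. This is a generic combinatory-logic sketch (the classic S and K combinators in curried form), not code from the paper:

```python
# S and K, the two primitive combinators of combinatory logic,
# written as curried Python lambdas.
S = lambda x: lambda y: lambda z: x(z)(y(z))
K = lambda x: lambda y: x

# Programming without variables: other combinators are built purely
# by composition of S and K.
I = S(K)(K)        # identity: I x = x
B = S(K(S))(K)     # composition: B f g x = f(g(x))
```

The point of the style is that `I` and `B` are never written with bound variables; they are assembled from `S` and `K` alone, mirroring the variable-free decomposition the paper describes.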

  14. Thermal decomposition kinetics of hydrazinium cerium 2,3-Pyrazinedicarboxylate hydrate: a new precursor for CeO2.

    PubMed

    Premkumar, Thathan; Govindarajan, Subbiah; Coles, Andrew E; Wight, Charles A

    2005-04-07

    The thermal decomposition kinetics of N(2)H(5)[Ce(pyrazine-2,3-dicarboxylate)(2)(H(2)O)] (Ce-P) have been studied by thermogravimetric analysis (TGA) and differential scanning calorimetry (DSC), for the first time; TGA analysis reveals an oxidative decomposition process yielding CeO(2) as the final product with an activation energy of approximately 160 kJ mol(-1). This complex may be used as a precursor to fine particle cerium oxides due to its low temperature of decomposition.

  15. Pursuing reliable thermal analysis techniques for energetic materials: decomposition kinetics and thermal stability of dihydroxylammonium 5,5'-bistetrazole-1,1'-diolate (TKX-50).

    PubMed

    Muravyev, Nikita V; Monogarov, Konstantin A; Asachenko, Andrey F; Nechaev, Mikhail S; Ananyev, Ivan V; Fomenkov, Igor V; Kiselev, Vitaly G; Pivkina, Alla N

    2016-12-21

    Thermal decomposition of a novel promising high-performance explosive dihydroxylammonium 5,5'-bistetrazole-1,1'-diolate (TKX-50) was studied using a number of thermal analysis techniques (thermogravimetry, differential scanning calorimetry, and accelerating rate calorimetry, ARC). To obtain more comprehensive insight into the kinetics and mechanism of TKX-50 decomposition, a variety of complementary thermoanalytical experiments were performed under various conditions. Non-isothermal and isothermal kinetics were obtained at both atmospheric and low (up to 0.3 Torr) pressures. The gas products of thermolysis were detected in situ using IR spectroscopy, and the structure of solid-state decomposition products was determined by X-ray diffraction and scanning electron microscopy. Diammonium 5,5'-bistetrazole-1,1'-diolate (ABTOX) was directly identified to be the most important intermediate of the decomposition process. The important role of bistetrazole diol (BTO) in the mechanism of TKX-50 decomposition was also rationalized by thermolysis experiments with mixtures of TKX-50 and BTO. Several widely used thermoanalytical data processing techniques (Kissinger, isoconversional, formal kinetic approaches, etc.) were independently benchmarked against the ARC data, which are more germane to the real storage and application conditions of energetic materials. Our study revealed that none of the Arrhenius parameters reported before can properly describe the complex two-stage decomposition process of TKX-50. In contrast, we showed the superior performance of the isoconversional methods combined with isothermal measurements, which yielded the most reliable kinetic parameters of TKX-50 thermolysis. In contrast with the existing reports, the thermal stability of TKX-50 was determined in the ARC experiments to be lower than that of hexogen, but close to that of hexanitrohexaazaisowurtzitane (CL-20).
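The Kissinger method benchmarked above extracts an activation energy from the shift of the DSC peak temperature Tp with heating rate beta, via ln(beta/Tp^2) = const - Ea/(R*Tp). A minimal sketch with synthetic numbers (not data from the paper; the 160 kJ/mol value is illustrative):

```python
import numpy as np

R = 8.314  # gas constant, J mol^-1 K^-1

def kissinger_ea(betas, peak_temps):
    """Estimate activation energy (J/mol) from DSC peak temperatures
    at several heating rates using the Kissinger relation:
    ln(beta/Tp^2) = C - Ea/(R*Tp), so the slope vs 1/Tp is -Ea/R."""
    x = 1.0 / np.asarray(peak_temps)
    y = np.log(np.asarray(betas) / np.asarray(peak_temps) ** 2)
    slope, _ = np.polyfit(x, y, 1)
    return -slope * R
```

As the abstract notes, single-line fits like this can fail for multi-stage decompositions such as TKX-50's, which is why isoconversional methods were preferred there.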

  16. Soil respiration in the cold desert environment of the Colorado Plateau (USA): Abiotic regulators and thresholds

    USGS Publications Warehouse

    Fernandez, D.P.; Neff, J.C.; Belnap, J.; Reynolds, R.L.

    2006-01-01

    Decomposition is central to understanding ecosystem carbon exchange and nutrient-release processes. Unlike mesic ecosystems, which have been extensively studied, xeric landscapes have received little attention; as a result, abiotic soil-respiration regulatory processes are poorly understood in xeric environments. To provide a more complete and quantitative understanding about how abiotic factors influence soil respiration in xeric ecosystems, we conducted soil-respiration and decomposition-cloth measurements in the cold desert of southeast Utah. Our study evaluated when and to what extent soil texture, moisture, temperature, organic carbon, and nitrogen influence soil respiration and examined whether the inverse-texture hypothesis applies to decomposition. Within our study site, the effect of texture on moisture, as described by the inverse texture hypothesis, was evident, but its effect on decomposition was not. Our results show temperature and moisture to be the dominant abiotic controls of soil respiration. Specifically, temporal offsets in temperature and moisture conditions appear to have a strong control on soil respiration, with the highest fluxes occurring in spring when temperature and moisture were favorable. These temporal offsets resulted in decomposition rates that were controlled by soil moisture and temperature thresholds. The highest fluxes of CO2 occurred when soil temperature was between 10 and 16 °C and volumetric soil moisture was greater than 10%. Decomposition-cloth results, which integrate decomposition processes across several months, support the soil-respiration results and further illustrate the seasonal patterns of high respiration rates during spring and low rates during summer and fall. Results from this study suggest that the parameters used to predict soil respiration in mesic ecosystems likely do not apply in cold-desert environments. © Springer 2006.

  17. Coupling experimental data and a prototype model to probe the physical and chemical processes of 2,4-dinitroimidazole solid-phase thermal decomposition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Behrens, R.; Minier, L.; Bulusu, S.

    1998-12-31

    The time-dependent, solid-phase thermal decomposition behavior of 2,4-dinitroimidazole (2,4-DNI) has been measured utilizing simultaneous thermogravimetric modulated beam mass spectrometry (STMBMS) methods. The decomposition products consist of gaseous and non-volatile polymeric products. The temporal behavior of the gas formation rates of the identified products indicates that the overall thermal decomposition process is complex. In isothermal experiments with 2,4-DNI in the solid phase, four distinguishing features are observed: (1) elevated rates of gas formation are observed during the early stages of the decomposition, which appear to be correlated to the presence of exogenous water in the sample; (2) this is followed by a period of relatively constant rates of gas formation; (3) next, the rates of gas formation accelerate, characteristic of an autocatalytic reaction; (4) finally, the 2,4-DNI is depleted and gaseous decomposition products continue to evolve at a decreasing rate. A physicochemical and mathematical model of the decomposition of 2,4-DNI has been developed and applied to the experimental results. The first generation of this model is described in this paper. Differences between the first generation of the model and the experimental data collected under different conditions suggest refinements for the next generation of the model.

  18. Young Children's Thinking About Decomposition: Early Modeling Entrees to Complex Ideas in Science

    NASA Astrophysics Data System (ADS)

    Ero-Tolliver, Isi; Lucas, Deborah; Schauble, Leona

    2013-10-01

    This study was part of a multi-year project on the development of elementary students' modeling approaches to understanding the life sciences. Twenty-three first grade students conducted a series of coordinated observations and investigations on decomposition, a topic that is rarely addressed in the early grades. The instruction included in-class observations of different types of soil and soil profiling, visits to the school's compost bin, structured observations of decaying organic matter of various kinds, study of organisms that live in the soil, and models of environmental conditions that affect rates of decomposition. Both before and after instruction, students completed a written performance assessment that asked them to reason about the process of decomposition. Additional information was gathered through one-on-one interviews with six focus students who represented variability of performance across the class. During instruction, researchers collected video of classroom activity, student science journal entries, and charts and illustrations produced by the teacher. After instruction, the first-grade students showed a more nuanced understanding of the composition and variability of soils, the role of visible organisms in decomposition, and environmental factors that influence rates of decomposition. Through a variety of representational devices, including drawings, narrative records, and physical models, students came to regard decomposition as a process, rather than simply as an end state that does not require explanation.

  19. Effect of pressure on rate of burning /decomposition with flame/ of liquid hydrazine.

    NASA Technical Reports Server (NTRS)

    Antoine, A. C.

    1966-01-01

    The liquid hydrazine decomposition process was studied to determine what chemical or physical changes may be occurring that cause breaks in burning rate/pressure curves, by measuring flame temperature and light emission.

  20. Ab initio kinetics of gas phase decomposition reactions.

    PubMed

    Sharia, Onise; Kuklja, Maija M

    2010-12-09

    The thermal and kinetic aspects of gas phase decomposition reactions can be extremely complex due to a large number of parameters, a variety of possible intermediates, and an overlap in thermal decomposition traces. The experimental determination of the activation energies is particularly difficult when several possible reaction pathways coexist in the thermal decomposition. Ab initio calculations intended to provide an interpretation of the experiment are often of little help if they produce only the activation barriers and ignore the kinetics of the decomposition process. To overcome this ambiguity, a theoretical study of a complete picture of gas phase thermo-decomposition, including reaction energies, activation barriers, and reaction rates, is illustrated with the example of the β-octahydro-1,3,5,7-tetranitro-1,3,5,7-tetrazocine (HMX) molecule by means of quantum-chemical calculations. We study three types of major decomposition reactions characteristic of nitramines: the HONO elimination, the NONO rearrangement, and the N-NO(2) homolysis. The reaction rates were determined using the conventional transition state theory for the HONO and NONO decompositions and the variational transition state theory for the N-NO(2) homolysis. Our calculations show that the HMX decomposition process is more complex than it was previously believed to be and is defined by a combination of reactions at any given temperature. At all temperatures, the direct N-NO(2) homolysis prevails with the activation barrier at 38.1 kcal/mol. The nitro-nitrite isomerization and the HONO elimination, with the activation barriers at 46.3 and 39.4 kcal/mol, respectively, are slow reactions at all temperatures. The obtained conclusions provide a consistent interpretation for the reported experimental data.
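The competition between the three channels can be illustrated with the conventional transition-state-theory rate k(T) = (kB*T/h)*exp(-Ea/RT). The sketch below uses only the barrier heights quoted in the abstract and omits partition-function prefactors and tunneling, so the absolute rates are rough; only the ordering of the channels is meaningful:

```python
import math

KB = 1.380649e-23    # Boltzmann constant, J/K
H = 6.62607015e-34   # Planck constant, J*s
R = 8.314            # gas constant, J mol^-1 K^-1
KCAL = 4184.0        # J per kcal

def tst_rate(ea_kcal_per_mol, T):
    """Conventional TST rate with a bare kB*T/h prefactor (a rough
    sketch; real prefactors involve partition-function ratios)."""
    return (KB * T / H) * math.exp(-ea_kcal_per_mol * KCAL / (R * T))

# Barriers from the abstract: N-NO2 homolysis 38.1, HONO elimination
# 39.4, nitro-nitrite (NONO) rearrangement 46.3 kcal/mol.
```

Even with the crude prefactor, the 38.1 kcal/mol homolysis channel outruns the other two at a given temperature, consistent with the abstract's conclusion that direct N-NO2 homolysis prevails.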

  1. Microbial community assembly and metabolic function during mammalian corpse decomposition.

    PubMed

    Metcalf, Jessica L; Xu, Zhenjiang Zech; Weiss, Sophie; Lax, Simon; Van Treuren, Will; Hyde, Embriette R; Song, Se Jin; Amir, Amnon; Larsen, Peter; Sangwan, Naseer; Haarmann, Daniel; Humphrey, Greg C; Ackermann, Gail; Thompson, Luke R; Lauber, Christian; Bibat, Alexander; Nicholas, Catherine; Gebert, Matthew J; Petrosino, Joseph F; Reed, Sasha C; Gilbert, Jack A; Lynne, Aaron M; Bucheli, Sibyl R; Carter, David O; Knight, Rob

    2016-01-08

    Vertebrate corpse decomposition provides an important stage in nutrient cycling in most terrestrial habitats, yet microbially mediated processes are poorly understood. Here we combine deep microbial community characterization, community-level metabolic reconstruction, and soil biogeochemical assessment to understand the principles governing microbial community assembly during decomposition of mouse and human corpses on different soil substrates. We find a suite of bacterial and fungal groups that contribute to nitrogen cycling and a reproducible network of decomposers that emerge on predictable time scales. Our results show that this decomposer community is derived primarily from bulk soil, but key decomposers are ubiquitous in low abundance. Soil type was not a dominant factor driving community development, and the process of decomposition is sufficiently reproducible to offer new opportunities for forensic investigations. Copyright © 2016, American Association for the Advancement of Science.

  2. Basic dye decomposition kinetics in a photocatalytic slurry reactor.

    PubMed

    Wu, Chun-Hsing; Chang, Hung-Wei; Chern, Jia-Ming

    2006-09-01

    Wastewater effluent from textile plants using various dyes is one of the major water pollutants to the environment. Traditional chemical, physical and biological processes for treating textile dye wastewaters have disadvantages such as high cost, energy waste and generating secondary pollution during the treatment process. The photocatalytic process using TiO2 semiconductor particles under UV light illumination has been shown to be potentially advantageous and applicable in the treatment of wastewater pollutants. In this study, the dye decomposition kinetics by nano-size TiO2 suspension at natural solution pH was experimentally studied by varying the agitation speed (50-200 rpm), TiO2 suspension concentration (0.25-1.71 g/L), initial dye concentration (10-50 ppm), temperature (10-50 degrees C), and UV power intensity (0-96 W). The experimental results show the agitation speed, varying from 50 to 200 rpm, has a slight influence on the dye decomposition rate and the pH history; the dye decomposition rate increases with the TiO2 suspension concentration up to 0.98 g/L, then decreases with increasing TiO2 suspension concentration; the initial dye decomposition rate increases with the initial dye concentration up to a certain value depending upon the temperature, then decreases with increasing initial dye concentration; the dye decomposition rate increases with the UV power intensity up to 64 W to reach a plateau. Kinetic models have been developed to fit the experimental kinetic data well.
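The abstract does not name its kinetic model, but photocatalytic dye degradation over TiO2 is commonly fitted with a Langmuir-Hinshelwood rate law; the following is a generic sketch of that form, not the authors' fitted equation:

```python
def lh_rate(C, k, K):
    """Langmuir-Hinshelwood rate law r = k*K*C / (1 + K*C), the form
    commonly fitted to TiO2 photocatalytic degradation data.
    C: dye concentration; k: surface rate constant; K: adsorption
    equilibrium constant (all units assumed consistent)."""
    return k * K * C / (1.0 + K * C)
```

The form reproduces the saturation behavior reported above: the rate grows nearly linearly at low concentration and flattens toward k as the catalyst surface saturates.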

  3. Enzyme Activities at Different Stages of Plant Biomass Decomposition in Three Species of Fungus-Growing Termites

    PubMed Central

    Pedersen, Kristine S. K.; Aanen, Duur K.

    2017-01-01

    Fungus-growing termites rely on mutualistic fungi of the genus Termitomyces and gut microbes for plant biomass degradation. Due to a certain degree of symbiont complementarity, this tripartite symbiosis has evolved as a complex bioreactor, enabling decomposition of nearly any plant polymer, likely contributing to the success of the termites as one of the main plant decomposers in the Old World. In this study, we evaluated which plant polymers are decomposed and which enzymes are active during the decomposition process in two major genera of fungus-growing termites. We found a diversity of active enzymes at different stages of decomposition and a consistent decrease in plant components during the decomposition process. Furthermore, our findings are consistent with the hypothesis that termites transport enzymes from the older mature parts of the fungus comb through young worker guts to freshly inoculated plant substrate. However, preliminary fungal RNA sequencing (RNA-seq) analyses suggest that this likely transport is supplemented with enzymes produced in situ. Our findings support that the maintenance of an external fungus comb, inoculated with an optimal mixture of plant material, fungal spores, and enzymes, is likely the key to the extraordinarily efficient plant decomposition in fungus-growing termites. IMPORTANCE Fungus-growing termites have a substantial ecological footprint in the Old World (sub)tropics due to their ability to decompose dead plant material. Through the establishment of an elaborate plant biomass inoculation strategy and through fungal and bacterial enzyme contributions, this farming symbiosis has become an efficient and versatile aerobic bioreactor for plant substrate conversion.
Since little is known about what enzymes are expressed and where they are active at different stages of the decomposition process, we used enzyme assays, transcriptomics, and plant content measurements to shed light on how this decomposition of plant substrate is so effectively accomplished. PMID:29269491

  4. Enzyme Activities at Different Stages of Plant Biomass Decomposition in Three Species of Fungus-Growing Termites.

    PubMed

    da Costa, Rafael R; Hu, Haofu; Pilgaard, Bo; Vreeburg, Sabine M E; Schückel, Julia; Pedersen, Kristine S K; Kračun, Stjepan K; Busk, Peter K; Harholt, Jesper; Sapountzis, Panagiotis; Lange, Lene; Aanen, Duur K; Poulsen, Michael

    2018-03-01

    Fungus-growing termites rely on mutualistic fungi of the genus Termitomyces and gut microbes for plant biomass degradation. Due to a certain degree of symbiont complementarity, this tripartite symbiosis has evolved as a complex bioreactor, enabling decomposition of nearly any plant polymer, likely contributing to the success of the termites as one of the main plant decomposers in the Old World. In this study, we evaluated which plant polymers are decomposed and which enzymes are active during the decomposition process in two major genera of fungus-growing termites. We found a diversity of active enzymes at different stages of decomposition and a consistent decrease in plant components during the decomposition process. Furthermore, our findings are consistent with the hypothesis that termites transport enzymes from the older mature parts of the fungus comb through young worker guts to freshly inoculated plant substrate. However, preliminary fungal RNA sequencing (RNA-seq) analyses suggest that this likely transport is supplemented with enzymes produced in situ. Our findings support that the maintenance of an external fungus comb, inoculated with an optimal mixture of plant material, fungal spores, and enzymes, is likely the key to the extraordinarily efficient plant decomposition in fungus-growing termites. IMPORTANCE Fungus-growing termites have a substantial ecological footprint in the Old World (sub)tropics due to their ability to decompose dead plant material. Through the establishment of an elaborate plant biomass inoculation strategy and through fungal and bacterial enzyme contributions, this farming symbiosis has become an efficient and versatile aerobic bioreactor for plant substrate conversion. 
Since little is known about what enzymes are expressed and where they are active at different stages of the decomposition process, we used enzyme assays, transcriptomics, and plant content measurements to shed light on how this decomposition of plant substrate is so effectively accomplished. Copyright © 2018 da Costa et al.

  5. Detritus Quality Controls Macrophyte Decomposition under Different Nutrient Concentrations in a Eutrophic Shallow Lake, North China

    PubMed Central

    Li, Xia; Cui, Baoshan; Yang, Qichun; Tian, Hanqin; Lan, Yan; Wang, Tingting; Han, Zhen

    2012-01-01

    Macrophyte decomposition is important for carbon and nutrient cycling in lake ecosystems. Currently, little is known about how this process responds to detritus quality and water nutrient conditions in eutrophic shallow lakes in which incomplete decomposition of detritus accelerates the lake terrestrialization process. In this study, we investigated the effects of detritus quality and water nutrient concentrations on macrophyte decomposition in Lake Baiyangdian, China, by analyzing the decomposition of three major aquatic plants at three sites with different pollution intensities (low, medium, and high pollution sites). Detritus quality refers to detritus nutrient contents as well as C∶N, C∶P, and N∶P mass ratios in this study. Effects of detritus mixtures were tested by combining pairs of representative macrophytes at ratios of 75∶25, 50∶50 and 25∶75 (mass basis). The results indicate that the influence of species types on decomposition was stronger than that of site conditions. Correlation analysis showed that mass losses at the end of the experimental period were significantly controlled by initial detritus chemistry, especially by the initial phosphorus (P) content, carbon to nitrogen (C∶N), and carbon to phosphorus (C∶P) mass ratios in the detritus. The decomposition processes were also influenced by water chemistry. The NO3-N and NH4-N concentrations in the lake water retarded detritus mass loss at the low and high pollution sites, respectively. Net P mineralization in detritus was observed at all sites and detritus P release at the high pollution site was slower than at the other two sites. Nonadditive effects of mixtures tended to be species specific due to the different nutrient contents in each species. 
Results suggest that the nonadditive effects varied significantly among different sites, indicating that interactions between the detritus quality in species mixtures and site water chemistry may be another driver controlling decomposition in eutrophic shallow lakes. PMID:22848699

  6. The neural basis of novelty and appropriateness in processing of creative chunk decomposition.

    PubMed

    Huang, Furong; Fan, Jin; Luo, Jing

    2015-06-01

    Novelty and appropriateness have been recognized as the fundamental features of creative thinking. However, the brain mechanisms underlying these features remain largely unknown. In this study, we used event-related functional magnetic resonance imaging (fMRI) to dissociate these mechanisms in a revised creative chunk decomposition task in which participants were required to perform different types of chunk decomposition that systematically varied in novelty and appropriateness. We found that novelty processing involved functional areas for procedural memory (caudate), mental rewarding (substantia nigra, SN), and visual-spatial processing, whereas appropriateness processing was mediated by areas for declarative memory (hippocampus), emotional arousal (amygdala), and orthography recognition. These results indicate that non-declarative and declarative memory systems may jointly contribute to the two fundamental features of creative thinking. Copyright © 2015 Elsevier Inc. All rights reserved.

  7. A new Weyl-like tensor of geometric origin

    NASA Astrophysics Data System (ADS)

    Vishwakarma, Ram Gopal

    2018-04-01

    A set of new tensors of purely geometric origin has been investigated; the tensors form a hierarchy in which a tensor of lower rank plays the role of the potential for the tensor of one rank higher. The tensors have interesting mathematical and physical properties. The highest-rank tensor of the hierarchy possesses all the geometrical properties of the Weyl tensor.

  8. Mechano-regulation of mesenchymal stem cell differentiation and collagen organisation during skeletal tissue repair.

    PubMed

    Nagel, Thomas; Kelly, Daniel J

    2010-06-01

    A number of mechano-regulation theories have been proposed that relate the differentiation pathway of mesenchymal stem cells (MSCs) to their local biomechanical environment. During spontaneous repair processes in skeletal tissues, the organisation of the extracellular matrix is a key determinant of its mechanical fitness. In this paper, we extend the mechano-regulation theory proposed by Prendergast et al. (J Biomech 30(6):539-548, 1997) to include the role of the mechanical environment on the collagen architecture in regenerating soft tissues. A large strain anisotropic poroelastic material model is used in a simulation of tissue differentiation in a fracture subject to cyclic bending (Cullinane et al. in J Orthop Res 20(3):579-586, 2002). The model predicts non-union with cartilage and fibrous tissue formation in the defect. Predicted collagen fibre angles, as determined by the principal decomposition of strain- and stress-type tensors, are similar to the architecture seen in native articular cartilage and neoarthroses induced by bending of mid-femoral defects in rats. Both stress and strain-based remodelling stimuli successfully predicted the general patterns of collagen fibre organisation observed in vivo. This provides further evidence that collagen organisation during tissue differentiation is determined by the mechanical environment. It is envisioned that such predictive models can play a key role in optimising MSC-based skeletal repair therapies where recapitulation of the normal tissue architecture is critical to successful repair.

  9. Development of the Tensoral Computer Language

    NASA Technical Reports Server (NTRS)

    Ferziger, Joel; Dresselhaus, Eliot

    1996-01-01

    The research scientist or engineer wishing to perform large scale simulations or to extract useful information from existing databases is required to have expertise in the details of the particular database, the numerical methods and the computer architecture to be used. This poses a significant practical barrier to the use of simulation data. The goal of this research was to develop a high-level computer language called Tensoral, designed to remove this barrier. The Tensoral language provides a framework in which efficient generic data manipulations can be easily coded and implemented. First of all, Tensoral is general. The fundamental objects in Tensoral represent tensor fields and the operators that act on them. The numerical implementation of these tensors and operators is completely and flexibly programmable. New mathematical constructs and operators can be easily added to the Tensoral system. Tensoral is compatible with existing languages. Tensoral tensor operations co-exist in a natural way with a host language, which may be any sufficiently powerful computer language such as Fortran, C, or Vectoral. Tensoral is very-high-level. Tensor operations in Tensoral typically act on entire databases (i.e., arrays) at one time and may, therefore, correspond to many lines of code in a conventional language. Tensoral is efficient. Tensoral is a compiled language. Database manipulations are simplified, optimized, and scheduled by the compiler, eventually resulting in efficient machine code to implement them.

  10. Wood decomposition following clearcutting at Coweeta Hydrologic Laboratory

    Treesearch

    Kim G. Mattson; Wayne T. Swank

    2014-01-01

    Most of the forest on Watershed (WS) 7 was cut and left on site to decompose. This chapter describes the rate and manner of wood decomposition and also quantifies the fluxes from decaying wood to the forest floor on WS 7. In doing so, we make the case that wood and its process of decomposition contribute to ecosystem stability. We also review some of the history of...

  11. Decomposition rate comparisons between frequently burned and unburned areas of uneven-aged loblolly pine stands in southeastern Arkansas

    Treesearch

    Michele Renschin; Hal O. Leichty; Michael G. Shelton

    2001-01-01

    Although fire has been used extensively over long periods of time in loblolly pine (Pinus taeda L.) ecosystems, little is known concerning the effects of frequent fire use on nutrient cycling and decomposition. To better understand the long-term effects of fire on these processes, foliar litter decomposition rates were quantified in a study...

  12. New Methods For Interpretation Of Magnetic Gradient Tensor Data Using Eigenanalysis And The Normalized Source Strength

    NASA Astrophysics Data System (ADS)

    Clark, D.

    2012-12-01

    In the future, acquisition of magnetic gradient tensor data is likely to become routine. New methods developed for analysis of magnetic gradient tensor data can also be applied to high quality conventional TMI surveys that have been processed using Fourier filtering techniques, or otherwise, to calculate magnetic vector and tensor components. This approach is, in fact, the only practical way at present to analyze vector component data, as measurements of vector components are seriously afflicted by motion noise, which is not as serious a problem for gradient components. In many circumstances, an optimal approach to extracting maximum information from magnetic surveys would be to combine analysis of measured gradient tensor data with vector components calculated from TMI measurements. New methods for inverting gradient tensor surveys to obtain source parameters have been developed for a number of elementary, but useful, models. These include point dipole (sphere), vertical line of dipoles (narrow vertical pipe), line of dipoles (horizontal cylinder), thin dipping sheet, horizontal line current and contact models. A key simplification is the use of eigenvalues and associated eigenvectors of the tensor. The normalized source strength (NSS), calculated from the eigenvalues, is a particularly useful rotational invariant that peaks directly over 3D compact sources, 2D compact sources, thin sheets and contacts, and is independent of magnetization direction for these sources (and only very weakly dependent on magnetization direction in general). In combination the NSS and its vector gradient enable estimation of the Euler structural index, thereby constraining source geometry, and determine source locations uniquely. NSS analysis can be extended to other useful models, such as vertical pipes, by calculating eigenvalues of the vertical derivative of the gradient tensor. 
Once source locations are determined, information on source magnetizations can be obtained by simple linear inversion of measured or calculated vector and/or tensor data. Inversions based on the vector gradient of the NSS over the Tallawang magnetite deposit in central New South Wales obtained good agreement between the inferred geometry of the tabular magnetite skarn body and drill hole intersections. Inverted magnetizations are consistent with magnetic property measurements on drill core samples from this deposit. Similarly, inversions of calculated tensor data over the Mount Leyshon gold-mineralized porphyry system in Queensland yield good estimates of the centroid location, total magnetic moment and magnetization direction of the magnetite-bearing potassic alteration zone that are consistent with geological and petrophysical information.
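
    The eigenvalue-based NSS described above can be sketched in a few lines. This is a minimal sketch assuming the common point-dipole convention μ = sqrt(−λ2² − λ1λ3) with eigenvalues ordered λ1 ≥ λ2 ≥ λ3; the paper's exact conventions may differ in detail.

```python
import numpy as np

def normalized_source_strength(G):
    """NSS rotational invariant of a symmetric 3x3 magnetic gradient tensor.

    Assumes the common point-dipole formula mu = sqrt(-l2**2 - l1*l3)
    with eigenvalues ordered l1 >= l2 >= l3 (an illustrative convention)."""
    G = 0.5 * (G + G.T)                  # enforce symmetry
    l3, l2, l1 = np.linalg.eigvalsh(G)   # eigvalsh returns ascending order
    val = -l2**2 - l1 * l3
    return np.sqrt(max(val, 0.0))        # clip tiny negatives from noise

# Synthetic gradient tensor: symmetric and traceless, as for a dipole field
B = np.array([[2.0, 0.5, 0.3],
              [0.5, -1.0, 0.2],
              [0.3, 0.2, -1.0]])
nss = normalized_source_strength(B)
```

    Because the NSS depends only on the eigenvalues, it is unchanged under any rotation of the tensor, which is the rotational invariance the abstract relies on.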

  13. Atomic orbital-based SOS-MP2 with tensor hypercontraction. I. GPU-based tensor construction and exploiting sparsity

    NASA Astrophysics Data System (ADS)

    Song, Chenchen; Martínez, Todd J.

    2016-05-01

    We present a tensor hypercontracted (THC) scaled opposite spin second order Møller-Plesset perturbation theory (SOS-MP2) method. By using THC, we reduce the formal scaling of SOS-MP2 with respect to molecular size from quartic to cubic. We achieve further efficiency by exploiting sparsity in the atomic orbitals and using graphical processing units (GPUs) to accelerate integral construction and matrix multiplication. The practical scaling of GPU-accelerated atomic orbital-based THC-SOS-MP2 calculations is found to be N^2.6 for reference data sets of water clusters and alanine polypeptides containing up to 1600 basis functions. The errors in correlation energy with respect to density-fitting-SOS-MP2 are less than 0.5 kcal/mol for all systems tested (up to 162 atoms).
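
    The quoted N^2.6 is an empirical exponent. As a hedged sketch, such a practical scaling exponent is typically extracted as the slope of a log-log fit to timing data; the timings below are synthetic, not from the paper.

```python
import numpy as np

# Hypothetical (made-up) wall-times following t ~ c * N**2.6
N = np.array([200, 400, 800, 1600], dtype=float)  # basis-set sizes
t = 3e-7 * N**2.6                                 # synthetic timings

# Empirical scaling exponent = slope of log(t) versus log(N)
slope, intercept = np.polyfit(np.log(N), np.log(t), 1)
```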

  14. Atomic orbital-based SOS-MP2 with tensor hypercontraction. I. GPU-based tensor construction and exploiting sparsity.

    PubMed

    Song, Chenchen; Martínez, Todd J

    2016-05-07

    We present a tensor hypercontracted (THC) scaled opposite spin second order Møller-Plesset perturbation theory (SOS-MP2) method. By using THC, we reduce the formal scaling of SOS-MP2 with respect to molecular size from quartic to cubic. We achieve further efficiency by exploiting sparsity in the atomic orbitals and using graphical processing units (GPUs) to accelerate integral construction and matrix multiplication. The practical scaling of GPU-accelerated atomic orbital-based THC-SOS-MP2 calculations is found to be N^2.6 for reference data sets of water clusters and alanine polypeptides containing up to 1600 basis functions. The errors in correlation energy with respect to density-fitting-SOS-MP2 are less than 0.5 kcal/mol for all systems tested (up to 162 atoms).

  15. What is the right formalism to search for resonances?

    NASA Astrophysics Data System (ADS)

    Mikhasenko, M.; Pilloni, A.; Nys, J.; Albaladejo, M.; Fernández-Ramírez, C.; Jackura, A.; Mathieu, V.; Sherrill, N.; Skwarnicki, T.; Szczepaniak, A. P.

    2018-03-01

    Hadron decay chains constitute one of the main sources of information on the QCD spectrum. We discuss the differences between several partial wave analysis formalisms used in the literature to build the amplitudes. We match the helicity amplitudes to the covariant tensor basis. In doing so, we pay attention to the analytical properties of the amplitudes and separate singularities of kinematical and dynamical nature. We study the analytical properties of the spin-orbit (LS) formalism and some of the covariant tensor approaches. In particular, we explicitly build the amplitudes for the B → ψπK and B → D̄ππ decays, and show that the energy dependence of the covariant approach is model dependent. We also show that the usual recursive construction of covariant tensors explicitly violates crossing symmetry, which would lead to different resonance parameters extracted from scattering and decay processes.

  16. A procedure for the assessment of the toxicity of intermediates and products formed during the accidental thermal decomposition of a chemical species.

    PubMed

    Di Somma, Ilaria; Pollio, Antonino; Pinto, Gabriele; De Falco, Maria; Pizzo, Elio; Andreozzi, Roberto

    2010-04-15

    Knowledge of the substances formed when a molecule undergoes chemical reactions under unusual conditions is required by European legislation to evaluate the risks associated with an industrial chemical process. Thermal decomposition is often the result of a loss of control of the process and leads to the formation of many substances that are in some cases not easily predictable. Evaluating the change in overall toxicity in passing from the parent compound to the mixture of its thermal decomposition products has already been proposed as a practical approach to this problem when preliminary indications of the temperature range in which the molecule decomposes are available. A new procedure is proposed in this work for obtaining the mixtures of thermal decomposition products even when there is no prior information about the thermal behaviour of the investigated molecules. A scanning calorimetric run aimed at identifying the onset temperature of the decomposition process is coupled to an isoperibolic one in order to obtain and collect the products. An algal strain is adopted for toxicological assessments of chemical compounds and mixtures. An extension of the toxicological investigations to human cells is also attempted. 2009 Elsevier B.V. All rights reserved.

  17. Time-Optimized High-Resolution Readout-Segmented Diffusion Tensor Imaging

    PubMed Central

    Reishofer, Gernot; Koschutnig, Karl; Langkammer, Christian; Porter, David; Jehna, Margit; Enzinger, Christian; Keeling, Stephen; Ebner, Franz

    2013-01-01

    Readout-segmented echo planar imaging with 2D navigator-based reacquisition is an emerging technique enabling the sampling of high-resolution diffusion images with reduced susceptibility artifacts. However, low signal from the small voxels and long scan times hamper its clinical applicability. Therefore, we introduce a regularization algorithm based on total variation that is applied directly to the entire diffusion tensor. The spatially varying regularization parameter is determined automatically depending on spatial variations in the signal-to-noise ratio, thus avoiding over- or under-regularization. Information about the noise distribution in the diffusion tensor is extracted from the diffusion-weighted images by means of complex independent component analysis. Moreover, the combination of these features makes the processing of the diffusion data fully user-independent. Tractography from in vivo data and from a software phantom demonstrates the advantage of the spatially varying regularization over un-regularized data with respect to parameters relevant for fiber tracking, such as mean fiber length, track count, volume, and voxel count. Specifically, for in vivo data, the findings suggest that tractography from the regularized diffusion tensor based on one measurement (16 min) generates results comparable to the un-regularized data with three averages (48 min). This significant reduction in scan time renders high-resolution (1×1×2.5 mm³) diffusion tensor imaging of the entire brain applicable in a clinical context. PMID:24019951

  18. Moment tensor inversion with three-dimensional sensor configuration of mining induced seismicity (Kiruna mine, Sweden)

    NASA Astrophysics Data System (ADS)

    Ma, Ju; Dineva, Savka; Cesca, Simone; Heimann, Sebastian

    2018-06-01

    Mining induced seismicity is an undesired consequence of mining operations, which poses significant hazard to miners and infrastructures and requires an accurate analysis of the rupture process. Seismic moment tensors of mining-induced events help to understand the nature of mining-induced seismicity by providing information about the relationship between the mining, stress redistribution and instabilities in the rock mass. In this work, we adapt and test a waveform-based inversion method on high frequency data recorded by a dense underground seismic system in one of the largest underground mines in the world (Kiruna mine, Sweden). A stable algorithm for moment tensor inversion for comparatively small mining induced earthquakes, resolving both the double-couple and full moment tensor with high frequency data, is very challenging. Moreover, the application to an underground mining system requires accounting for the 3-D geometry of the monitoring system. We construct a Green's function database using a homogeneous velocity model, but assuming a 3-D distribution of potential sources and receivers. We first perform a set of moment tensor inversions using synthetic data to test the effects of different factors on moment tensor inversion stability and source parameters accuracy, including the network spatial coverage, the number of sensors and the signal-to-noise ratio. The influence of the accuracy of the input source parameters on the inversion results is also tested. Those tests show that an accurate selection of the inversion parameters allows resolving the moment tensor also in the presence of realistic seismic noise conditions. Finally, the moment tensor inversion methodology is applied to eight events chosen from mining block #33/34 at Kiruna mine. 
Source parameters including scalar moment, magnitude, double-couple, compensated linear vector dipole and isotropic contributions as well as the strike, dip and rake configurations of the double-couple term were obtained. The orientations of the nodal planes of the double-couple component in most cases vary from NNW to NNE with a dip along the ore body or in the opposite direction.
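
    The source-type quantities listed above (isotropic, double-couple and CLVD contributions) follow from an eigenvalue decomposition of the moment tensor. A minimal sketch using one common convention (ε defined from the deviatoric eigenvalues; conventions vary between papers, so this is illustrative rather than the authors' exact recipe):

```python
import numpy as np

def decompose_mt(M):
    """Split a symmetric 3x3 moment tensor into ISO / DC / CLVD measures.

    Uses one common convention: eps = -d_small / |d_big| over the
    deviatoric eigenvalues, %DC = 100*(1 - 2|eps|), %CLVD = 100*2|eps|.
    Illustrative choice; papers differ in their exact definitions."""
    M = 0.5 * (M + M.T)
    iso = np.trace(M) / 3.0                # isotropic part (scalar)
    dev = M - iso * np.eye(3)              # deviatoric part
    d = np.linalg.eigvalsh(dev)
    d_small = d[np.argmin(np.abs(d))]      # smallest-magnitude deviatoric eig
    d_big = d[np.argmax(np.abs(d))]        # largest-magnitude deviatoric eig
    eps = -d_small / abs(d_big) if d_big != 0 else 0.0
    pct_dc = 100.0 * (1.0 - 2.0 * abs(eps))
    pct_clvd = 100.0 * 2.0 * abs(eps)
    return iso, pct_dc, pct_clvd

# A pure strike-slip double-couple source: no ISO, no CLVD expected
M_dc = np.array([[0.0, 1.0, 0.0],
                 [1.0, 0.0, 0.0],
                 [0.0, 0.0, 0.0]])
iso, pct_dc, pct_clvd = decompose_mt(M_dc)
```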

  19. Moment Tensor Inversion with 3D sensor configuration of Mining Induced Seismicity (Kiruna mine, Sweden)

    NASA Astrophysics Data System (ADS)

    Ma, Ju; Dineva, Savka; Cesca, Simone; Heimann, Sebastian

    2018-03-01

    Mining induced seismicity is an undesired consequence of mining operations, which poses significant hazard to miners and infrastructures and requires an accurate analysis of the rupture process. Seismic moment tensors of mining-induced events help to understand the nature of mining-induced seismicity by providing information about the relationship between the mining, stress redistribution and instabilities in the rock mass. In this work, we adapt and test a waveform-based inversion method on high frequency data recorded by a dense underground seismic system in one of the largest underground mines in the world (Kiruna mine, Sweden). A stable algorithm for moment tensor inversion for comparatively small mining induced earthquakes, resolving both the double couple and full moment tensor with high frequency data, is very challenging. Moreover, the application to an underground mining system requires accounting for the 3D geometry of the monitoring system. We construct a Green's function database using a homogeneous velocity model, but assuming a 3D distribution of potential sources and receivers. We first perform a set of moment tensor inversions using synthetic data to test the effects of different factors on moment tensor inversion stability and source parameters accuracy, including the network spatial coverage, the number of sensors and the signal-to-noise ratio. The influence of the accuracy of the input source parameters on the inversion results is also tested. Those tests show that an accurate selection of the inversion parameters allows resolving the moment tensor also in the presence of realistic seismic noise conditions. Finally, the moment tensor inversion methodology is applied to eight events chosen from mining block #33/34 at Kiruna mine. Source parameters including scalar moment, magnitude, double couple, compensated linear vector dipole and isotropic contributions as well as the strike, dip and rake configurations of the double couple term were obtained. 
The orientations of the nodal planes of the double-couple component in most cases vary from NNW to NNE with a dip along the ore body or in the opposite direction.

  20. The 1/N Expansion of Tensor Models with Two Symmetric Tensors

    NASA Astrophysics Data System (ADS)

    Gurau, Razvan

    2018-06-01

    It is well known that tensor models for a tensor with no symmetry admit a 1/N expansion dominated by melonic graphs. This result relies crucially on identifying jackets, which are globally defined ribbon graphs embedded in the tensor graph. In contrast, no result of this kind has so far been established for symmetric tensors, because global jackets do not exist. In this paper we introduce a new approach to the 1/N expansion in tensor models, adapted to symmetric tensors. In particular we do not use any global structure like the jackets. We prove that, for any rank D, a tensor model with two symmetric tensors and interactions given by the complete graph K_{D+1} admits a 1/N expansion dominated by melonic graphs.

  1. Ab initio molecular dynamics study on the initial chemical events in nitramines: thermal decomposition of CL-20.

    PubMed

    Isayev, Olexandr; Gorb, Leonid; Qasim, Mo; Leszczynski, Jerzy

    2008-09-04

    CL-20 (2,4,6,8,10,12-hexanitro-2,4,6,8,10,12-hexaazaisowurtzitane or HNIW) is a high-energy nitramine explosive. To improve the atomistic understanding of the thermal decomposition of CL-20 in the gas and solid phases, we performed a series of ab initio molecular dynamics simulations. We found that during unimolecular decomposition, unlike other nitramines (e.g., RDX, HMX), CL-20 has only one distinct initial reaction channel: homolysis of the N-NO2 bond. We did not observe any HONO elimination reaction during unimolecular decomposition, whereas the ring-breaking reaction was followed by NO2 fission. Therefore, in spite of limited sampling, which provides a mostly qualitative picture, we propose here a scheme for the unimolecular decomposition of CL-20. The averaged product population over all trajectories was estimated at four HCN, two to four NO2, two to four NO, one CO, and one OH molecule per CL-20 molecule. Our simulations provide a detailed description of the chemical processes in the initial stages of thermal decomposition of condensed CL-20, allowing elucidation of key features of such processes as the composition of primary reaction products, reaction timing, and Arrhenius behavior of the system. The primary reactions leading to NO2, NO, N2O, and N2 occur at very early stages. We also estimated potential activation barriers for the formation of NO2, which essentially determines the overall decomposition kinetics, and effective rate constants for NO2 and N2. The calculated solid-phase decomposition pathways correlate with available condensed-phase experimental data.

  2. The Weyl curvature tensor, Cotton-York tensor and gravitational waves: A covariant consideration

    NASA Astrophysics Data System (ADS)

    Osano, Bob

    The 1 + 3 covariant approach to cosmological perturbation theory often employs the electric part (Eab) or the magnetic part (Hab) of the Weyl tensor, or the shear tensor (σab), in a phenomenological description of gravitational waves. The Cotton-York tensor is rarely mentioned in connection with gravitational waves in this approach. This tensor acts as a source for the magnetic part of the Weyl tensor and should not be neglected in studies of gravitational waves in the 1 + 3 formalism. The tensor is mentioned only in connection with studies of "silent models", but even there the connection with gravitational waves is not exhaustively explored. In this study, we demonstrate that the Cotton-York tensor encodes contributions from both the electric and magnetic parts of the Weyl tensor, and indirectly from the shear tensor. In our opinion, this makes the Cotton-York tensor arguably the natural choice for linear gravitational waves in the 1 + 3 covariant formalism. The tensor is cumbersome to work with, but that should not negate its usefulness. It is conceivable that the tensor would be equally useful in the metric approach, although we have not demonstrated this in this study. We contend that the use of only one of the Weyl tensor or the shear tensor, although phenomenologically correct, leads to loss of information. Such information is vital, particularly when examining the contribution of gravitational waves to the anisotropy of an almost-Friedmann-Lemaître-Robertson-Walker (FLRW) universe. The recourse to this loss is the use of the Cotton-York tensor.

  3. Analytical separations of mammalian decomposition products for forensic science: a review.

    PubMed

    Swann, L M; Forbes, S L; Lewis, S W

    2010-12-03

    The study of mammalian soft tissue decomposition is an emerging area in forensic science, with a major focus of the research being the use of various chemical and biological methods to study the fate of human remains in the environment. Decomposition of mammalian soft tissue is a postmortem process that, depending on environmental conditions and physiological factors, will proceed until complete disintegration of the tissue. The major stages of decomposition involve complex reactions which result in the chemical breakdown of the body's main constituents: lipids, proteins, and carbohydrates. The first step to understanding this chemistry is identifying the compounds present in decomposition fluids and determining when they are produced. This paper provides an overview of decomposition chemistry and reviews recent advances in this area utilising analytical separation science. Copyright © 2010 Elsevier B.V. All rights reserved.

  4. Efficient Tensor Completion for Color Image and Video Recovery: Low-Rank Tensor Train.

    PubMed

    Bengua, Johann A; Phien, Ho N; Tuan, Hoang Duong; Do, Minh N

    2017-05-01

    This paper proposes a novel approach to tensor completion, which recovers missing entries of data represented by tensors. The approach is based on the tensor train (TT) rank, which is able to capture hidden information from tensors thanks to its definition from a well-balanced matricization scheme. Accordingly, new optimization formulations for tensor completion are proposed as well as two new algorithms for their solution. The first one called simple low-rank tensor completion via TT (SiLRTC-TT) is intimately related to minimizing a nuclear norm based on TT rank. The second one is from a multilinear matrix factorization model to approximate the TT rank of a tensor, and is called tensor completion by parallel matrix factorization via TT (TMac-TT). A tensor augmentation scheme of transforming a low-order tensor to higher orders is also proposed to enhance the effectiveness of SiLRTC-TT and TMac-TT. Simulation results for color image and video recovery show the clear advantage of our method over all other methods.
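
    The TT rank underlying both algorithms is defined through sequential, well-balanced matricizations of the tensor. A small sketch of that definition follows; it is illustrative only and is not the paper's SiLRTC-TT or TMac-TT completion algorithm.

```python
import numpy as np

def tt_ranks(T):
    """TT-ranks of a tensor via the ranks of its sequential unfoldings T_[k].

    T_[k] reshapes the first k modes into rows and the remaining modes
    into columns, the matricization scheme the TT rank is built on."""
    dims = T.shape
    ranks = []
    for k in range(1, len(dims)):
        mat = T.reshape(int(np.prod(dims[:k])), int(np.prod(dims[k:])))
        ranks.append(np.linalg.matrix_rank(mat))
    return ranks

# A rank-1 (fully separable) 4th-order tensor: every TT-rank should be 1
a, b, c, d = (np.arange(1, n + 1, dtype=float) for n in (2, 3, 4, 5))
T = np.einsum('i,j,k,l->ijkl', a, b, c, d)
ranks = tt_ranks(T)
```

    Balanced unfoldings of this kind can capture correlations between groups of modes that single-mode matricizations miss, which is the motivation the abstract gives for the TT rank.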

  5. Biological decomposition efficiency in different woodland soils.

    PubMed

    Herlitzius, H

    1983-03-01

    The decomposition (meaning disappearance) of different leaf types and artificial leaves made from cellulose hydrate foil was studied in three forests - an alluvial forest (Ulmetum), a beech forest on limestone soil (Melico-Fagetum), and a spruce forest on soil overlying limestone bedrock. Fine, medium, and coarse mesh litter bags of special design were used to investigate the roles of abiotic factors, microorganisms, and meso- and macrofauna in effecting decomposition in the three habitats. Additionally, the experimental design was carefully arranged so as to provide information about the effects on decomposition processes of the duration of exposure and the date or moment of exposure. 1. Exposure of litter samples for 12 months showed: a) Litter enclosed in fine mesh bags decomposed to some 40-44% of the initial amount placed in each of the three forests. Most of this decomposition can be attributed to abiotic factors and microorganisms. b) Litter placed in medium mesh litter bags was reduced by ca. 60% in alluvial forest, ca. 50% in beech forest and ca. 44% in spruce forest. c) Litter enclosed in coarse mesh litter bags was reduced by 71% of the initial weights exposed in alluvial and beech forests; in the spruce forest decomposition was no greater than observed with fine and medium mesh litter bags. Clearly, in spruce forest the macrofauna has little or no part to play in effecting decomposition. 2. Sequential month by month exposure of hazel leaves and cellulose hydrate foil in coarse mesh litter bags in all three forests showed that one month of exposure led to only slight material losses; these were smallest between March and May and largest between June and October/November. 3. Coarse mesh litter bags containing either hazel or artificial leaves of cellulose hydrate foil were exposed to natural decomposition processes in December 1977 and subsampled monthly over a period of one year; this series constituted the From-sequence of experiments. 
Each of the From-sequence samples removed was immediately replaced by a fresh litter bag which was left in place until December 1978; this series constituted the To-sequence of experiments. The results arising from the designated From- and To-sequences showed: a) During the course of one year hazel leaves decomposed completely in alluvial forest, almost completely in beech forest, but to only 50% of the initial value in spruce forest. b) Duration of exposure and not the date of exposure is the major controlling influence on decomposition in alluvial forest, a characteristic reflected in the mirror-image courses of the From- and To-sequence curves with respect to the abscissa or time axis. Conversely, the date of exposure and not the duration of exposure is the major controlling influence on decomposition in the spruce forest, a characteristic reflected in the mirror-image courses of the From- and To-sequences with respect to the ordinate or axis of percentage decomposition. c) Leaf powder amendment increased the decomposition rate of the hazel and cellulose hydrate leaves in the spruce forest but had no significant effect on their decomposition rate in alluvial and beech forests. It is concluded from this, and other evidence, that litter amendment by leaf fragments of phytophage frass in sites of low biological decomposition activity (e.g., spruce) enhances decomposition processes. d) The time course of hazel leaf decomposition in both alluvial and beech forest is sigmoidal. Three s-phases are distinguished and correspond to the activity of microflora/microfauna, mesofauna/macrofauna, and then microflora/microfauna again. In general, the sigmoidal pattern of the curve can be considered valid for all decomposition processes occurring in terrestrial situations. It is contended that no decomposition (=disappearance) curve actually follows an e-type exponential function. 
A logarithmic linear regression can be constructed from the sigmoid curve data and although this facilitates inter-system comparisons it does not clearly express the dynamics of decomposition. 4. The course of the curve constructed from information about the standard deviations of means derived from the From- and To-sequence data does reflect the dynamics of litter decomposition. The three s-phases can be recognised and by comparing the actual From-sequence deviation curve with a mirror inversion representation of the To-sequence curve it is possible to determine whether decomposition is primarily controlled by the duration of exposure or the date of exposure. As is the case for hazel leaf decomposition in beech forest, intermediate conditions can be readily recognised.

  6. Reactivity continuum modeling of leaf, root, and wood decomposition across biomes

    NASA Astrophysics Data System (ADS)

    Koehler, Birgit; Tranvik, Lars J.

    2015-07-01

    Large carbon dioxide amounts are released to the atmosphere during organic matter decomposition. Yet the large-scale and long-term regulation of this critical process in global carbon cycling by litter chemistry and climate remains poorly understood. We used reactivity continuum (RC) modeling to analyze the decadal data set of the "Long-term Intersite Decomposition Experiment," in which fine litter and wood decomposition was studied in eight biome types (224 time series). In 32 and 46% of all sites the litter content of the acid-unhydrolyzable residue (AUR, formerly referred to as lignin) and the AUR/nitrogen ratio, respectively, retarded initial decomposition rates. This initial rate-retarding effect generally disappeared within the first year of decomposition, and rate-stimulating effects of nutrients and a rate-retarding effect of the carbon/nitrogen ratio became more prevalent. For needles and leaves/grasses, the influence of climate on decomposition decreased over time. For fine roots, the climatic influence was initially smaller but increased toward later-stage decomposition. The climate decomposition index was the strongest climatic predictor of decomposition. The similar variability in initial decomposition rates across litter categories as across biome types suggested that future changes in decomposition may be dominated by warming-induced changes in plant community composition. In general, the RC model parameters successfully predicted independent decomposition data for the different litter-biome combinations (196 time series). We argue that parameterization of large-scale decomposition models with RC model parameters, as opposed to the currently common discrete multiexponential models, could significantly improve their mechanistic foundation and predictive accuracy across climate zones and litter categories.
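
    The reactivity continuum model referred to here is commonly written as a gamma-distribution mixture of first-order decay rates, which has the closed form m(t)/m(0) = (β/(β+t))^α. A brief sketch with made-up parameters (not values from the study):

```python
import numpy as np

def rc_fraction_remaining(t, alpha, beta):
    """Reactivity continuum (gamma-mixture) model: fraction of the initial
    litter mass remaining at time t, m(t)/m(0) = (beta/(beta+t))**alpha."""
    return (beta / (beta + t)) ** alpha

# Illustrative, made-up parameters; t in years
t = np.linspace(0.0, 10.0, 101)
m = rc_fraction_remaining(t, alpha=1.2, beta=2.5)
```

    Unlike a single-exponential model, this form lets the apparent decay rate slow down over time as the most reactive material is consumed first, which is the mechanistic advantage the authors argue for.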

  7. Aerogel composites and method of manufacture

    DOEpatents

    Cao, Wanqing; Hunt, Arlon Jason

    1999-01-01

    Disclosed herewith is a process of forming an aerogel composite which comprises introducing a gaseous material into a formed aerogel monolith or powder, and causing decomposition of said gaseous material in said aerogel in amounts sufficient to cause deposition of the decomposition products of the gas on the surfaces of the pores of the said aerogel. Also disclosed are the composites made by the process.

  8. An enhanced structure tensor method for sea ice ridge detection from GF-3 SAR imagery

    NASA Astrophysics Data System (ADS)

    Zhu, T.; Li, F.; Zhang, Y.; Zhang, S.; Spreen, G.; Dierking, W.; Heygster, G.

    2017-12-01

    In SAR imagery, ridges and leads appear as curvilinear features, and the proposed ridge detection method is facilitated by their curvilinear shapes. Bright curvilinear features are recognized as ridges, while dark curvilinear features are classified as leads. In the dual-polarization HH or HV channels of C-band SAR imagery, a bright curvilinear feature may be a false alarm, because frost flowers on young leads may appear as bright pixels associated with changes in surface salinity under calm surface conditions. Wind-roughened leads also cause an increase in backscatter that can be misclassified as ridges [1]. A width limitation is therefore included in the proposed structure tensor method [2], since a method based on shape features alone is not sufficient for detecting ridges. The ridge detection algorithm is based on the hypothesis that the bright pixels are ridges with curvilinear shapes and that the ridge width is less than 30 meters. Benefiting from the high spatial resolution of GF-3 (3 meters), we provide an enhanced structure tensor method for detecting significant ridges. Preprocessing procedures, including calibration and incidence angle normalization, are also investigated. Bright pixels have a strong response to bandpass filtering. Ridge training samples are delineated from the SAR imagery, and Log-Gabor filters are used to construct the structure tensor. From the tensor, the dominant orientation of a pixel representing a ridge is determined by the dominant eigenvector. In the post-processing of the structure tensor, an elongated kernel is used to enhance the curvilinear shape of the ridge. Since a ridge extends along a certain direction, the ratio of the eigenvalues is used to measure the intensity of local anisotropy. A convolution filter applied to the constructed structure tensor is used to model spatial contextual information. Ridge detection results from GF-3 show that the proposed method performs better than the direct threshold method.
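
    The core structure tensor idea (local gradient outer products followed by eigen-analysis of anisotropy) can be sketched as follows; the Log-Gabor filtering and the elongated smoothing kernel used in the paper are omitted from this minimal version.

```python
import numpy as np

def patch_structure_tensor(patch):
    """2x2 structure tensor of an image patch, plus an anisotropy measure.

    J accumulates gradient outer products over the patch; the eigenvalue
    ratio measures how strongly one orientation dominates (curvilinear
    features such as ridges give high anisotropy)."""
    Iy, Ix = np.gradient(patch.astype(float))
    J = np.array([[np.mean(Ix * Ix), np.mean(Ix * Iy)],
                  [np.mean(Ix * Iy), np.mean(Iy * Iy)]])
    lam = np.linalg.eigvalsh(J)          # ascending: lam[0] <= lam[1]
    aniso = (lam[1] - lam[0]) / (lam[1] + lam[0] + 1e-12)
    return J, aniso

# A patch containing a vertical bright line (ridge-like): high anisotropy
line = np.zeros((16, 16))
line[:, 8] = 1.0
_, aniso_line = patch_structure_tensor(line)

# A flat patch has no dominant orientation: anisotropy near zero
flat = np.ones((16, 16))
_, aniso_flat = patch_structure_tensor(flat)
```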

  9. Two Dimensional Finite Element Based Magnetotelluric Inversion using Singular Value Decomposition Method on Transverse Electric Mode

    NASA Astrophysics Data System (ADS)

    Tjong, Tiffany; Yihaa’ Roodhiyah, Lisa; Nurhasan; Sutarno, Doddy

    2018-04-01

    In this work, an inversion scheme was performed using a vector finite element (VFE) based 2-D magnetotelluric (MT) forward modelling. We use an inversion scheme with the singular value decomposition (SVD) method to improve the accuracy of MT inversion. The inversion scheme was applied to the transverse electric (TE) mode of MT. The SVD method was used in this inversion to decompose the Jacobian matrices. The singular values obtained from the decomposition were analyzed. This enabled us to determine the importance of the data and therefore to define a threshold for the truncation process. The truncation of singular values in the inversion process could improve the resulting model.
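
    The truncated-SVD step described above amounts to discarding small singular values of the Jacobian before forming a pseudo-inverse. A minimal sketch follows; the threshold choice and the tiny Jacobian are illustrative, not the paper's values.

```python
import numpy as np

def truncated_svd_solve(J, d, threshold=1e-8):
    """Solve J m = d with a truncated SVD pseudo-inverse.

    Singular values below `threshold` (relative to the largest) are
    discarded, mimicking the truncation step in the abstract."""
    U, s, Vt = np.linalg.svd(J, full_matrices=False)
    keep = s > threshold * s[0]
    # invert only the retained singular values, zero out the rest
    s_inv = np.where(keep, 1.0 / np.where(keep, s, 1.0), 0.0)
    return Vt.T @ (s_inv * (U.T @ d))

# Rank-deficient (ill-conditioned) Jacobian: naive inversion would blow up
J = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [1.0, 2.0]])          # rank 1
d = J @ np.array([1.0, 2.0])        # consistent synthetic data
m = truncated_svd_solve(J, d)
```

    Truncation yields the minimum-norm solution on the retained subspace, which stabilizes the inversion at the cost of resolution in the discarded directions.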

  10. An effective hierarchical model for the biomolecular covalent bond: an approach integrating artificial chemistry and an actual terrestrial life system.

    PubMed

    Oohashi, Tsutomu; Ueno, Osamu; Maekawa, Tadao; Kawai, Norie; Nishina, Emi; Honda, Manabu

    2009-01-01

    Under the AChem paradigm and the programmed self-decomposition (PSD) model, we propose a hierarchical model for the biomolecular covalent bond (HBCB model). This model assumes that terrestrial organisms arrange their biomolecules in a hierarchical structure according to the energy strength of their covalent bonds. It also assumes that they have evolutionarily selected the PSD mechanism of turning biological polymers (BPs) into biological monomers (BMs) as an efficient biomolecular recycling strategy. We have examined the validity and effectiveness of the HBCB model by coordinating two complementary approaches: biological experiments using existent terrestrial life, and simulation experiments using an AChem system. Biological experiments have shown that terrestrial life possesses a PSD mechanism as an endergonic, genetically regulated process and that hydrolysis, which decomposes a BP into BMs, is one of the main processes of such a mechanism. In simulation experiments, we compared different virtual self-decomposition processes. The virtual species in which the self-decomposition process mainly involved covalent bond cleavage from a BP to BMs showed evolutionary superiority over other species in which the self-decomposition process involved cleavage from BP to classes lower than BM. These converging findings strongly support the existence of PSD and the validity and effectiveness of the HBCB model.

  11. Retrospective Correction of Physiological Noise in DTI Using an Extended Tensor Model and Peripheral Measurements

    PubMed Central

    Mohammadi, Siawoosh; Hutton, Chloe; Nagy, Zoltan; Josephs, Oliver; Weiskopf, Nikolaus

    2013-01-01

    Diffusion tensor imaging is widely used in research and clinical applications, but this modality is highly sensitive to artefacts. We developed an easy-to-implement extension of the original diffusion tensor model to account for physiological noise in diffusion tensor imaging using measures of peripheral physiology (pulse and respiration), the so-called extended tensor model. Within the framework of the extended tensor model, two types of regressors, which respectively modeled small (linear) and strong (nonlinear) variations in the diffusion signal, were derived from peripheral measures. We tested the performance of four extended tensor models with different physiological noise regressors on nongated and gated diffusion tensor imaging data, and compared it to an established data-driven robust fitting method. In the brainstem and cerebellum the extended tensor models reduced the noise in the tensor fit by up to 23% in accordance with previous studies on physiological noise. The extended tensor model addresses both large-amplitude outliers and small-amplitude signal changes. The framework of the extended tensor model also facilitates further investigation into physiological noise in diffusion tensor imaging. The proposed extended tensor model can be readily combined with other artefact correction methods such as robust fitting and eddy current correction. PMID:22936599
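
    The extended tensor model augments the standard log-linear diffusion tensor fit with regressors built from peripheral measurements. A toy single-voxel sketch follows; the cardiac-phase regressor, gradient scheme, and all numbers are synthetic assumptions, not the paper's actual regressors.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic single-voxel DTI experiment: 30 gradient directions, b = 1000
n = 30
g = rng.normal(size=(n, 3))
g /= np.linalg.norm(g, axis=1, keepdims=True)
b = 1000.0
D_true = np.diag([1.7e-3, 0.4e-3, 0.4e-3])   # prolate diffusion tensor

def dti_design(g, b):
    """Design matrix of the standard log-linear diffusion tensor fit."""
    gx, gy, gz = g.T
    return np.column_stack([np.ones(len(g)),
                            -b * gx**2, -b * gy**2, -b * gz**2,
                            -2 * b * gx * gy, -2 * b * gx * gz,
                            -2 * b * gy * gz])

# Extend the design with one hypothetical physiological nuisance regressor,
# e.g. a cardiac-phase signal that modulates the measured log-signal
physio = np.sin(np.linspace(0, 6 * np.pi, n))
X = np.column_stack([dti_design(g, b), physio])

# Simulate log-signal data with the physiological modulation, then fit
params = np.array([0.0,                      # log(S0) with S0 = 1
                   D_true[0, 0], D_true[1, 1], D_true[2, 2],
                   D_true[0, 1], D_true[0, 2], D_true[1, 2]])
logS = dti_design(g, b) @ params + 0.05 * physio
coef, *_ = np.linalg.lstsq(X, logS, rcond=None)
D_fit = np.array([[coef[1], coef[4], coef[5]],
                  [coef[4], coef[2], coef[6]],
                  [coef[5], coef[6], coef[3]]])
```

    Including the nuisance regressor lets the least-squares fit absorb the physiological signal variation instead of biasing the tensor estimate, which is the essence of the extended tensor model.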

  12. Thermal Decomposition Behavior of Hydroxytyrosol (HT) in Nitrogen Atmosphere Based on TG-FTIR Methods.

    PubMed

    Tu, Jun-Ling; Yuan, Jiao-Jiao

    2018-02-13

    The thermal decomposition behavior of olive hydroxytyrosol (HT) was first studied using thermogravimetry (TG). The chemical bonds cleaved and the gases evolved during the thermal decomposition of HT were also investigated using thermogravimetry coupled with infrared spectroscopy (TG-FTIR). Thermogravimetry-differential thermogravimetry (TG-DTG) curves revealed that the thermal decomposition of HT began at 262.8 °C and ended at 409.7 °C with a main mass loss. It was demonstrated that a high heating rate (over 20 K·min⁻¹) restrained the thermal decomposition of HT, resulting in an obvious thermal hysteresis. Furthermore, a thermal decomposition kinetics investigation of HT indicated that the non-isothermal decomposition mechanism was one-dimensional diffusion (D1), with integral form g(x) = x² and differential form f(x) = 1/(2x). Four combined approaches were employed to calculate the activation energy (E = 128.50 kJ·mol⁻¹) and the Arrhenius pre-exponential factor (ln A = 24.39, A in min⁻¹). In addition, a tentative mechanism of HT thermal decomposition was further developed. The results provide a theoretical reference for the potential thermal stability of HT.
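    The reported kinetic parameters can be turned into a small numerical sketch. The Arrhenius relation and the D1 model forms are standard; the function names and the direct use of the reported E and ln A values are illustrative:

    ```python
    import numpy as np

    # Reported values: D1 mechanism, g(x) = x^2, f(x) = 1/(2x),
    # E = 128.50 kJ/mol, ln A = 24.39 (A in min^-1).
    R = 8.314          # gas constant, J/(mol K)
    E = 128.50e3       # activation energy, J/mol
    lnA = 24.39        # log pre-exponential factor, A in min^-1

    def k(T):
        """Arrhenius rate constant (min^-1) at absolute temperature T (K)."""
        return np.exp(lnA - E / (R * T))

    def g(x):
        """Integral form of the D1 (one-dimensional diffusion) model."""
        return x**2

    def conversion(T, t):
        """Conversion x after t minutes at T, from g(x) = k(T) * t."""
        return np.sqrt(np.clip(k(T) * t, 0.0, 1.0))
    ```

    At the reported onset temperature (262.8 °C, about 536 K) this gives a rate constant on the order of 10⁻² min⁻¹, consistent with decomposition becoming appreciable there.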

  13. Conceptual design optimization study

    NASA Technical Reports Server (NTRS)

    Hollowell, S. J.; Beeman, E. R., II; Hiyama, R. M.

    1990-01-01

    The feasibility of applying multilevel functional decomposition and optimization techniques to conceptual design of advanced fighter aircraft was investigated. Applying the functional decomposition techniques to the conceptual design phase appears to be feasible. The initial implementation of the modified design process will optimize wing design variables. A hybrid approach, combining functional decomposition techniques for generation of aerodynamic and mass properties linear sensitivity derivatives with existing techniques for sizing mission performance and optimization, is proposed.

  14. Effect of decomposition and organic residues on resistivity of copper films fabricated via low-temperature sintering of complex particle mixed dispersions

    NASA Astrophysics Data System (ADS)

    Yong, Yingqiong; Nguyen, Mai Thanh; Tsukamoto, Hiroki; Matsubara, Masaki; Liao, Ying-Chih; Yonezawa, Tetsu

    2017-03-01

    Mixtures of a copper complex and copper fine particles as copper-based metal-organic decomposition (MOD) dispersions have been demonstrated to be effective for low-temperature sintering of conductive copper films. However, the effect of copper particle size on the decomposition process of the dispersion during heating, and the effect of organic residues on the resistivity, have not been studied. In this study, the decomposition process of dispersions containing mixtures of a copper complex and copper particles of various sizes was studied. The effect of organic residues on the resistivity was also studied using thermogravimetric analysis. In addition, the choice of copper salts in the copper complex was discussed. In this work, a low-resistivity sintered copper film (7 × 10⁻⁶ Ω·m) at a temperature as low as 100 °C was achieved without using any reductive gas.

  15. Understanding the critical challenges of self-aligned octuple patterning

    NASA Astrophysics Data System (ADS)

    Yu, Ji; Xiao, Wei; Kang, Weiling; Chen, Yijian

    2014-03-01

    In this paper, we present a thorough investigation of self-aligned octuple patterning (SAOP) process characteristics, cost structure, integration challenges, and layout decomposition. The statistical characteristics of SAOP CD variations, such as multi-modality, are analyzed, and the contributions of various features to the CDU and MTT (mean-to-target) budgets are estimated. The gap space is found to have the worst CDU+MTT performance and is used to determine the required overlay accuracy to ensure a satisfactory edge-placement yield of a cut process. Moreover, we propose a 5-mask positive-tone SAOP (pSAOP) process for memory FEOL patterning and a 3-mask negative-tone SAOP (nSAOP) process for logic BEOL patterning. The potential challenges of 2-D SAOP layout decomposition for BEOL applications are identified. Possible decomposition approaches are explored, and the functionality of several developed algorithms is verified using 2-D layout examples from the Open Cell Library.

  16. Vector- and tensor-meson production and the Pomeron-f identity hypothesis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, S.T.

    Within the context of a model introduced some time ago, the differential and total production cross sections for vector and tensor mesons are shown to be compatible with the hypothesis that the Pomeron and f are a single Regge trajectory. The model incorporates both cylinder and flavoring renormalizations of the Pomeron-f trajectory. The processes K±p → K*(892)±p, K±p → K₂*(1430)±p, and π±p → A₂(1320)±p are analyzed in some detail.

  17. Computer transformation of partial differential equations into any coordinate system

    NASA Technical Reports Server (NTRS)

    Sullivan, R. D.

    1977-01-01

    The use of tensors to provide a compact way of writing partial differential equations in a form valid in all coordinate systems is discussed. In order to find solutions to the equations with their boundary conditions they must be expressed in terms of the coordinate system under consideration. The process of arriving at these expressions from the tensor formulation was automated by a software system, TENSR. An allied system that analyzes the resulting expressions term by term and drops those that are negligible is also described.

  18. The Tensor and the Scalar Charges of the Nucleon from Hadron Phenomenology

    NASA Astrophysics Data System (ADS)

    Courtoy, A.

    2018-01-01

    We discuss the impact of the determination of the nucleon tensor charge on searches for physics Beyond the Standard Model. We also comment on the future extraction of the subleading-twist PDF e(x) from Jefferson Lab soon-to-be-released Beam Spin Asymmetry data as well as from the expected data of CLAS12 and SoLID, as the latter is related to the scalar charge. These analyses are possible through the phenomenology of Dihadron Fragmentation Functions related processes, which we report on here as well.

  19. Investigating Musical Disorders with Diffusion Tensor Imaging: a Comparison of Imaging Parameters

    PubMed Central

    Loui, Psyche; Schlaug, Gottfried

    2009-01-01

    The Arcuate Fasciculus (AF) is a bundle of white matter traditionally thought to be responsible for language function. However, its role in music is not known. Here we investigate the connectivity of the AF using Diffusion Tensor Imaging (DTI) and show that musically tone-deaf individuals, who show impairments in pitch discrimination, have reduced connectivity in the AF relative to musically normal-functioning control subjects. Results were robust to variations in imaging parameters and emphasize the importance of brain connectivity in para-linguistic processes such as music. PMID:19673766

  20. Ionization-Enhanced Decomposition of 2,4,6-Trinitrotoluene (TNT) Molecules

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Bin; Wright, David; Cliffel, David

    2011-01-01

    The unimolecular decomposition reaction of TNT can in principle be used to design ways to either detect or remove TNT from the environment. Here, we report the results of a density functional theory study of possible ways to lower the reaction barrier for this decomposition process by ionization, so that decomposition and/or detection can occur at room temperature. We find that ionizing TNT lowers the reaction barrier for the initial step of this decomposition. We further show that a similar effect can occur if a positive moiety is bound to the TNT molecule. The positive charge produces a pronounced electron redistribution and dipole formation in TNT with minimal charge transfer from TNT to the positive moiety.

  1. Scare Tactics: Evaluating Problem Decompositions Using Failure Scenarios

    NASA Technical Reports Server (NTRS)

    Helm, B. Robert; Fickas, Stephen

    1992-01-01

    Our interest is in the design of multi-agent problem-solving systems, which we refer to as composite systems. We have proposed an approach to composite system design by decomposition of problem statements. An automated assistant called Critter provides a library of reusable design transformations which allow a human analyst to search the space of decompositions for a problem. In this paper we describe a method for evaluating and critiquing problem decompositions generated by this search process. The method uses knowledge stored in the form of failure decompositions attached to design transformations. We suggest the benefits of our critiquing method by showing how it could re-derive steps of a published development example. We then identify several open issues for the method.

  2. Uncertainty estimations for moment tensor inversions: the issue of the 2012 May 20 Emilia earthquake

    NASA Astrophysics Data System (ADS)

    Scognamiglio, Laura; Magnoni, Federica; Tinti, Elisa; Casarotti, Emanuele

    2016-08-01

    The seismic moment tensor is one of the most important source parameters, defining the earthquake size and the style of the activated fault. Geoscientists routinely use moment tensor catalogues; however, few attempts have been made to assess the possible impacts of moment magnitude uncertainties upon their analyses. The 2012 May 20 Emilia main shock is a representative event, since it is assigned moment magnitude (Mw) values in the literature spanning between 5.63 and 6.12. A variability of ˜0.5 units in magnitude leads to a controversial knowledge of the real size of the event and reveals how poorly constrained the solutions can be. In this work, we investigate the stability of the moment tensor solution for this earthquake, studying the effect of five different 1-D velocity models and of the number and distribution of the stations used in the inversion procedure. We also introduce a 3-D velocity model to account for structural heterogeneity. We finally estimate the uncertainties associated with the computed focal planes and the obtained Mw. We conclude that our reliable source solutions provide a moment magnitude that ranges from 5.87 (1-D model) to 5.96 (3-D model), reducing the variability in the literature to ˜0.1. We stress that estimating the seismic moment from moment tensor solutions, as well as the other kinematic source parameters, requires disclosed assumptions and explicit processing workflows. Finally, and probably more importantly, when a moment tensor solution is used for secondary analyses it has to be combined with the same main boundary conditions (e.g. the wave-velocity propagation model) to avoid conflicting results.
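    The magnitude spread discussed above translates into a much larger spread in seismic moment via the standard Hanks-Kanamori relation Mw = (2/3)(log₁₀ M₀ − 9.1), with M₀ in N·m; a minimal sketch:

    ```python
    import math

    # Standard moment magnitude relation (Hanks & Kanamori), M0 in N*m.
    def moment_to_mw(M0):
        return (2.0 / 3.0) * (math.log10(M0) - 9.1)

    def mw_to_moment(Mw):
        return 10 ** (1.5 * Mw + 9.1)

    # The literature spread Mw 5.63-6.12 spans a factor of ~5.4 in seismic
    # moment, while the preferred range 5.87-5.96 spans only ~1.4:
    ratio_lit = mw_to_moment(6.12) / mw_to_moment(5.63)
    ratio_pref = mw_to_moment(5.96) / mw_to_moment(5.87)
    ```

    This is why a 0.5-unit magnitude disagreement matters: the implied fault size and slip differ by a factor of several, not by a few percent.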

  3. Plant Diversity Impacts Decomposition and Herbivory via Changes in Aboveground Arthropods

    PubMed Central

    Ebeling, Anne; Meyer, Sebastian T.; Abbas, Maike; Eisenhauer, Nico; Hillebrand, Helmut; Lange, Markus; Scherber, Christoph; Vogel, Anja; Weigelt, Alexandra; Weisser, Wolfgang W.

    2014-01-01

    Loss of plant diversity influences essential ecosystem processes such as aboveground productivity, and can have cascading effects on the arthropod communities in adjacent trophic levels. However, few studies have examined how those changes in arthropod communities can, in turn, affect the ecosystem processes they drive (e.g. pollination, bioturbation, predation, decomposition, herbivory). Therefore, including arthropod effects in predictions of the impact of plant diversity loss on such ecosystem processes is an important but little-studied piece of information. In a grassland biodiversity experiment, we addressed this gap by assessing aboveground decomposer and herbivore communities and linking their abundance and diversity to rates of decomposition and herbivory. Path analyses showed that increasing plant diversity led to higher abundance and diversity of decomposing arthropods through higher plant biomass. Higher species richness of decomposers, in turn, enhanced decomposition. Similarly, species-rich plant communities hosted a higher abundance and diversity of herbivores through elevated plant biomass and C:N ratio, leading to higher herbivory rates. Integrating trophic interactions into the study of biodiversity effects is required to understand the multiple pathways by which biodiversity affects ecosystem functioning. PMID:25226237

  4. Scale-free crystallization of two-dimensional complex plasmas: Domain analysis using Minkowski tensors

    NASA Astrophysics Data System (ADS)

    Böbel, A.; Knapek, C. A.; Räth, C.

    2018-05-01

    Experiments of the recrystallization processes in two-dimensional complex plasmas are analyzed to rigorously test a recently developed scale-free phase transition theory. The "fractal-domain-structure" (FDS) theory is based on the kinetic theory of Frenkel. It assumes the formation of homogeneous domains, separated by defect lines, during crystallization and a fractal relationship between domain area and boundary length. For the defect number fraction and system energy a scale-free power-law relation is predicted. The long-range scaling behavior of the bond-order correlation function shows clearly that the complex plasma phase transitions are not of the Kosterlitz, Thouless, Halperin, Nelson, and Young type. Previous preliminary results obtained by counting the number of dislocations and applying a bond-order metric for structural analysis are reproduced. These findings are supplemented by extending the use of the bond-order metric to measure the defect number fraction and furthermore applying state-of-the-art analysis methods, allowing a systematic testing of the FDS theory with unprecedented scrutiny: A morphological analysis of lattice structure is performed via Minkowski tensor methods. Minkowski tensors form a complete family of additive, motion covariant and continuous morphological measures that are sensitive to nonlinear properties. The FDS theory is rigorously confirmed and predictions of the theory are reproduced extremely well. The predicted scale-free power-law relation between defect fraction number and system energy is verified for one more order of magnitude at high energies compared to the inherently discontinuous bond-order metric. It is found that the fractal relation between crystalline domain area and circumference is independent of the experiment, the particular Minkowski tensor method, and the particular choice of parameters. Thus, the fractal relationship seems to be inherent to two-dimensional phase transitions in complex plasmas. 
Minkowski tensor analysis turns out to be a powerful tool for investigations of crystallization processes. It is capable of revealing nonlinear local topological properties, however, still provides easily interpretable results founded on a solid mathematical framework.
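    The scale-free power-law relations tested in this study are commonly checked by linear regression in log-log space; a minimal numpy sketch with synthetic data (not the paper's pipeline or parameters):

    ```python
    import numpy as np

    # Fit a power law y = c * x**alpha by linear regression on log(y) vs log(x).
    rng = np.random.default_rng(1)
    x = np.logspace(0, 3, 50)                # e.g. crystalline domain area
    alpha_true, c_true = 0.62, 2.0           # illustrative exponent and prefactor
    y = c_true * x**alpha_true * rng.lognormal(0.0, 0.05, size=x.size)

    # In log-log space the power law is a straight line with slope alpha
    slope, intercept = np.polyfit(np.log(x), np.log(y), 1)
    alpha_hat, c_hat = slope, np.exp(intercept)
    ```

    A fractal area-circumference relation or a defect-fraction vs. energy power law appears as a straight line over the scaling range; deviations from linearity at the extremes signal where the scale-free regime breaks down.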

  5. Theoretical study of the reaction mechanism of CH₃NO₂ with NO₂, NO and CO: the bimolecular reactions that cannot be ignored.

    PubMed

    Zhang, Ji-Dong; Kang, Li-Hua; Cheng, Xin-Lu

    2015-01-01

    The intriguing decompositions of nitro-containing explosives have been attracting interest. While theoretical investigations have long concentrated mainly on unimolecular decompositions, bimolecular reactions have received little theoretical attention. In this paper, we investigate theoretically the bimolecular reactions between nitromethane (CH3NO2), the simplest nitro-containing explosive, and its decomposition products, such as NO2, NO and CO, which are abundant during the decomposition process of CH3NO2. The structures and potential energy surface (PES) were explored at the B3LYP/6-31G(d), B3P86/6-31G(d) and MP2/6-311+G(d,p) levels, and energies were refined using CCSD(T)/cc-pVTZ methods. Quantum chemistry calculations revealed that the title reactions possess small barriers that can be comparable to, or smaller than, those of the initial decomposition reactions of CH3NO2. Considering that their reactants are abundant in the decomposition process of CH3NO2, we consider bimolecular reactions also to be of great importance, and worthy of further investigation. Moreover, our calculations show that NO2 can be oxidized by CH3NO2 to the NO3 radical, which confirms the conclusion reached formerly by Irikura and Johnson [(2006) J Phys Chem A 110:13974-13978] that the NO3 radical can be formed during the decomposition of nitramine explosives.

  6. Fast multi-scale feature fusion for ECG heartbeat classification

    NASA Astrophysics Data System (ADS)

    Ai, Danni; Yang, Jian; Wang, Zeyu; Fan, Jingfan; Ai, Changbin; Wang, Yongtian

    2015-12-01

    Electrocardiography (ECG) is conducted to monitor the electrical activity of the heart by recording signals of small amplitude and duration; as a result, hidden information present in ECG data is difficult to determine. However, this concealed information can be used to detect abnormalities. In our study, a fast feature-fusion method for ECG heartbeat classification based on multi-linear subspace learning is proposed. The method consists of four stages. First, baseline wander and high frequencies are removed to segment the heartbeat. Second, as an extension of wavelets, wavelet-packet decomposition is conducted to extract features; wavelet-packet decomposition provides good time and frequency resolution simultaneously. Third, the decomposed coefficients are arranged as a two-way tensor, on which feature fusion is directly implemented with generalized N-dimensional ICA (GND-ICA). In this method, the co-relationship among different data information is considered, the drawbacks of high dimensionality are avoided, and computation is reduced compared with linear subspace-learning methods such as PCA. Finally, a support vector machine (SVM) is used as the classifier for heartbeat classification. In this study, ECG records are obtained from the MIT-BIH arrhythmia database. Four main heartbeat classes are used to examine the proposed algorithm. Based on the results of five measurements (sensitivity, positive predictivity, accuracy, average accuracy, and t-test), our conclusion is that a GND-ICA-based strategy can be used to provide enhanced ECG heartbeat classification. Furthermore, largely redundant features are eliminated, and classification time is reduced.
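    The wavelet-packet stage can be illustrated with a self-contained Haar version, in which every node of the tree is split (unlike the plain wavelet transform, which splits only the approximation branch). The Haar choice, the toy signal, and the energy features below are illustrative simplifications; real work would use a richer wavelet, e.g. via PyWavelets:

    ```python
    import numpy as np

    def haar_step(x):
        """One level of orthonormal Haar analysis: approximation and detail."""
        x = np.asarray(x, dtype=float)
        approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)
        detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
        return approx, detail

    def wavelet_packet(x, levels):
        """Full wavelet-packet tree: every node (not just approximations) is split."""
        nodes = [np.asarray(x, dtype=float)]
        for _ in range(levels):
            nxt = []
            for node in nodes:
                a, d = haar_step(node)
                nxt.extend([a, d])
            nodes = nxt
        return nodes        # 2**levels subbands, each len(x) / 2**levels long

    beat = np.sin(np.linspace(0, 2 * np.pi, 64))        # toy "heartbeat" segment
    subbands = wavelet_packet(beat, levels=3)           # 8 subbands of 8 samples
    features = [float(np.sum(s**2)) for s in subbands]  # per-subband energies
    ```

    Because the normalized Haar steps are orthonormal, the subband energies sum to the energy of the input segment, which makes them convenient, well-scaled classification features.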

  7. Machine Learning Techniques for Global Sensitivity Analysis in Climate Models

    NASA Astrophysics Data System (ADS)

    Safta, C.; Sargsyan, K.; Ricciuto, D. M.

    2017-12-01

    Climate model studies are challenged not only by the compute-intensive nature of these models but also by the high dimensionality of the input parameter space. In our previous work with the land model components (Sargsyan et al., 2014), we identified subsets of 10 to 20 parameters relevant for each QoI via Bayesian compressive sensing and variance-based decomposition. Nevertheless, the algorithms were challenged by the nonlinear input-output dependencies for some of the relevant QoIs. In this work we will explore a combination of techniques to extract relevant parameters for each QoI and subsequently construct surrogate models with quantified uncertainty, necessary for future developments, e.g. model calibration and prediction studies. In the first step, we will compare the skill of machine-learning models (e.g. neural networks, support vector machines) to identify the optimal number of classes in selected QoIs and construct robust multi-class classifiers that will partition the parameter space into regions with smooth input-output dependencies. These classifiers will be coupled with techniques aimed at building sparse and/or low-rank surrogate models tailored to each class. Specifically, we will explore and compare sparse learning techniques with low-rank tensor decompositions. These models will be used to identify parameters that are important for each QoI. Surrogate accuracy requirements are higher for subsequent model calibration studies, and we will ascertain the performance of this workflow for multi-site ALM simulation ensembles.

  8. Impact of litter quantity on the soil bacteria community during the decomposition of Quercus wutaishanica litter.

    PubMed

    Zeng, Quanchao; Liu, Yang; An, Shaoshan

    2017-01-01

    The forest ecosystem is the main component of terrestrial ecosystems. The global climate and the functions and processes of soil microbes in the ecosystem are all influenced by litter decomposition. The effects of litter decomposition on the abundance of soil microorganisms remain unknown. Here, we analyzed soil bacterial communities during the litter decomposition process in an incubation experiment under treatment with different litter quantities based on annual litterfall data (normal quantity, 200 g/(m²·yr); double quantity, 400 g/(m²·yr); and control, no litter). The results showed that litter quantity had significant effects on soil carbon fractions, nitrogen fractions, and bacterial community composition, but significant differences were not found in soil bacterial diversity. The normal litter quantity enhanced the relative abundance of Actinobacteria and Firmicutes and reduced the relative abundance of Bacteroidetes, Planctomycetes and Nitrospirae. The Beta-, Gamma-, and Deltaproteobacteria were significantly less abundant in the normal-quantity litter addition treatment, and correspondingly more abundant in the double-quantity litter addition treatment. The bacterial communities transitioned from Proteobacteria-dominant (Beta-, Gamma-, and Delta) to Actinobacteria-dominant during the decomposition of the normal quantity of litter. A cluster analysis showed that the double litter treatment and the control had similar bacterial community compositions. These results suggested that the double quantity of litter limited the shift of the soil bacterial community. Our results indicate that litter decomposition alters bacterial dynamics under the accumulation of litter during the vegetation restoration process, which provides important guidelines for the management of forest ecosystems.

  9. Real-time and rapid GNSS solutions from the M8.2 September 2017 Tehuantepec Earthquake and implications for Earthquake and Tsunami Early Warning Systems

    NASA Astrophysics Data System (ADS)

    Mencin, D.; Hodgkinson, K. M.; Mattioli, G. S.

    2017-12-01

    In support of hazard research and Earthquake Early Warning (EEW) systems, UNAVCO operates approximately 800 RT-GNSS stations throughout western North America and Alaska (EarthScope Plate Boundary Observatory), Mexico (TLALOCNet), and the pan-Caribbean region (COCONet). Our system produces and distributes raw data (BINEX and RTCM3) and real-time Precise Point Positions via the Trimble PIVOT Platform (RTX). The 2017-09-08 M8.2 earthquake located 98 km SSW of Tres Picos, Mexico, is the first great earthquake to occur within the UNAVCO RT-GNSS footprint, which allows for a rigorous analysis of our dynamic and static processing methods. The need for rapid geodetic solutions ranges from seconds (EEW systems) to several minutes (tsunami warning and NEIC moment tensor and finite fault models). Here, we compare and quantify the relative processing strategies for producing static offsets, moment tensors and geodetically determined finite fault models using data recorded during this event. We also compare the geodetic solutions with the USGS NEIC seismically derived moment tensors and finite fault models, including displacement waveforms generated from these models. We define kinematic post-processed solutions from GIPSY-OASIS II (v6.4) with final orbits and clocks as a "best case" reference to evaluate the performance of our different processing strategies. We find that static displacements of a few centimeters or less are difficult to resolve in the real-time GNSS position estimates. The standard daily 24-hour solutions provide the highest-quality data set to determine coseismic offsets, but these solutions are delayed by at least 48 hours after the event.
Dynamic displacements, estimated in real time, however, show reasonable agreement with final, post-processed position estimates, and while individual position estimates have large errors, the real-time solutions offer an excellent operational option for EEW systems, including the use of estimated peak ground displacements or directly inverting for finite-fault solutions. In the near field, we find that the geodetically derived moment tensors and finite fault models differ significantly from seismically derived models, highlighting the utility of using geodetic data in hazard applications.

  10. Characterisation of the turbulent electromotive force and its magnetically-mediated quenching in a global EULAG-MHD simulation of solar convection

    NASA Astrophysics Data System (ADS)

    Simard, Corinne; Charbonneau, Paul; Dubé, Caroline

    2016-10-01

    We perform a mean-field analysis of the EULAG-MHD millennium simulation of global magnetohydrodynamical convection presented in Passos and Charbonneau (2014). The turbulent electromotive force (emf) operating in the simulation is assumed to be linearly related to the cyclic axisymmetric mean magnetic field and its first spatial derivatives. At every grid point in the simulation's meridional plane, this assumed relationship involves 27 independent tensorial coefficients. Expanding on Racine et al. (2011), we extract these coefficients from the simulation data through a least-squares minimization procedure based on singular value decomposition. The reconstructed α-tensor shows good agreement with that obtained by Racine et al. (2011), who did not include derivatives of the mean field in their fit, as well as with the α-tensor extracted by Augustson et al. (2015) from a distinct ASH MHD simulation. The isotropic part of the turbulent magnetic diffusivity tensor β is positive definite and reaches values of 5.0 × 10⁷ m² s⁻¹ in the middle of the convecting fluid layers. The spatial variations of both the αϕϕ and βϕϕ components are well reproduced by expressions obtained under the Second Order Correlation Approximation, with a good match in amplitude requiring a turbulent correlation time about five times smaller than the estimated turnover time of the small-scale turbulent flow. By segmenting the simulation data into epochs of magnetic cycle minima and maxima, we also measure α- and β-quenching. We find the magnetic quenching of the α-effect to be driven primarily by a reduction of the small-scale flow's kinetic helicity, with variations of the current helicity playing a lesser role in most locations in the simulation domain.
Our measurements of turbulent diffusivity quenching are restricted to the βϕϕ component, but indicate a weaker quenching, by a factor of ≃1.36, than that of the α-effect, which in our simulation drops by a factor of three between the minimum and maximum phases of the magnetic cycle.
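    The SVD-based least-squares extraction of emf coefficients can be sketched in reduced form. The toy model below fits one emf component against the mean field and its derivatives at a single grid point; all sizes, names, and values are illustrative, not the simulation's:

    ```python
    import numpy as np

    # Reduced sketch of a mean-field fit: emf ~ a . B + b . dB, solved via SVD.
    rng = np.random.default_rng(2)
    nt = 200                                 # time samples (e.g. cycle phases)
    B = rng.normal(size=(nt, 3))             # mean magnetic field components (toy)
    dB = rng.normal(size=(nt, 3))            # their spatial derivatives (toy)

    a_true = np.array([0.5, -0.2, 0.1])      # alpha-tensor row (illustrative)
    b_true = np.array([0.0, 0.3, -0.4])      # diffusivity-related row (illustrative)
    emf = B @ a_true + dB @ b_true + 0.01 * rng.normal(size=nt)

    # Over-determined linear system solved through the pseudo-inverse (SVD)
    X = np.hstack([B, dB])                   # nt x 6 design matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    coef = Vt.T @ ((U.T @ emf) / s)          # least-squares solution
    a_fit, b_fit = coef[:3], coef[3:]
    ```

    The SVD route makes it easy to inspect the singular values and truncate nearly degenerate directions, which is why it is preferred over a direct normal-equations solve when the regressors (field components and derivatives) are strongly correlated.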

  11. Three-Component Decomposition of Polarimetric SAR Data Integrating Eigen-Decomposition Results

    NASA Astrophysics Data System (ADS)

    Lu, Da; He, Zhihua; Zhang, Huan

    2018-01-01

    This paper presents a novel three-component scattering power decomposition of polarimetric SAR data. There are two problems with the three-component decomposition method: overestimation of the volume scattering component in urban areas, and a parameter that is artificially set to a fixed value. Although the volume scattering overestimation can be partly solved by a deorientation process, volume scattering still dominates some oriented urban areas. The speckle-like decomposition results introduced by the artificially set value are not conducive to further image interpretation. This paper integrates the results of eigen-decomposition to solve the aforementioned problems. The two principal eigenvectors are used to substitute for the surface scattering model and the double-bounce scattering model. The decomposed scattering powers are obtained using a constrained linear least-squares method. The proposed method has been verified using an ESAR PolSAR image, and the results show that it has better performance in urban areas.

  12. Electromagnetic stress tensor for an amorphous metamaterial medium

    NASA Astrophysics Data System (ADS)

    Wang, Neng; Wang, Shubo; Ng, Jack

    2018-03-01

    We analytically and numerically investigated the internal optical forces exerted by an electromagnetic wave inside an amorphous metamaterial medium. We derived, by using the principle of virtual work, the Helmholtz stress tensor, which takes into account the electrostriction effect. Several examples of amorphous media are considered, and different electromagnetic stress tensors, such as the Einstein-Laub tensor and Minkowski tensor, are also compared. It is concluded that the Helmholtz stress tensor is the appropriate tensor for such systems.

  13. Photogeneration of active formate decomposition catalysts to produce hydrogen from formate and water

    DOEpatents

    King, Jr., Allen D.; King, Robert B.; Sailers, III, Earl L.

    1983-02-08

    A process for producing hydrogen from formate and water by photogenerating an active formate decomposition catalyst from transition metal carbonyl precursor catalysts at relatively low temperatures and otherwise mild conditions is disclosed. Additionally, this process may be expanded to include the generation of formate from carbon monoxide and hydroxide such that the result is the water gas shift reaction.

  14. Quantification of diffusion tensor imaging in normal white matter maturation of early childhood using an automated processing pipeline.

    PubMed

    Loh, K B; Ramli, N; Tan, L K; Roziah, M; Rahmat, K; Ariffin, H

    2012-07-01

    The degree and status of white matter myelination can be sensitively monitored using diffusion tensor imaging (DTI). This study looks at the measurement of fractional anisotropy (FA) and mean diffusivity (MD) using automated ROIs with an existing DTI atlas. Anatomical MRI and structural DTI were performed cross-sectionally on 26 normal children (newborn to 48 months old), using 1.5-T MRI. An automated processing pipeline was implemented to convert diffusion-weighted images into the NIfTI format. DTI-TK software was used to register the processed images to the ICBM DTI-81 atlas, while AFNI software was used for automated atlas-based volumes of interest (VOIs) and statistical value extraction. DTI exhibited consistent grey-white matter contrast. Triphasic temporal variation of the FA and MD values was noted, with FA increasing and MD decreasing rapidly early in the first 12 months. The second phase lasted from 12 to 24 months, during which the rate of FA and MD changes was reduced. After 24 months, the FA and MD values plateaued. DTI is a superior technique to conventional MR imaging in depicting WM maturation. The use of the automated processing pipeline provides a reliable environment for quantitative analysis of high-throughput DTI data. Diffusion tensor imaging outperforms conventional MRI in depicting white matter maturation. • DTI will become an important clinical tool for diagnosing paediatric neurological diseases. • DTI appears especially helpful for developmental abnormalities, tumours and white matter disease. • An automated processing pipeline assists quantitative analysis of high-throughput DTI data.
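    The two scalars tracked in this study, MD and FA, are standard functions of the diffusion tensor's eigenvalues; a minimal sketch (the example tensors are illustrative values):

    ```python
    import numpy as np

    def md_fa(D):
        """Mean diffusivity and fractional anisotropy of a 3x3 diffusion tensor."""
        lam = np.linalg.eigvalsh(np.asarray(D, dtype=float))
        md = lam.mean()
        num = np.sqrt(((lam - md) ** 2).sum())
        den = np.sqrt((lam ** 2).sum())
        fa = np.sqrt(1.5) * num / den if den > 0 else 0.0
        return md, fa

    # Isotropic diffusion -> FA = 0; strongly prolate ("stick-like") diffusion,
    # typical of coherent white matter tracts, -> FA close to 1.
    md_iso, fa_iso = md_fa(np.diag([1e-3, 1e-3, 1e-3]))
    md_wm, fa_wm = md_fa(np.diag([1.7e-3, 0.2e-3, 0.2e-3]))
    ```

    The developmental trend reported above (FA rising, MD falling in the first year) corresponds to the eigenvalue spectrum becoming more anisotropic and its mean shrinking as myelination proceeds.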

  15. What is the right formalism to search for resonances?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mikhasenko, M.; Pilloni, A.; Nys, J.

Hadron decay chains constitute one of the main sources of information on the QCD spectrum. We discuss the differences between several partial wave analysis formalisms used in the literature to build the amplitudes. We match the helicity amplitudes to the covariant tensor basis. Thereby, we pay attention to the analytical properties of the amplitudes and separate singularities of kinematical and dynamical nature. We study the analytical properties of the spin-orbit (LS) formalism, and some of the covariant tensor approaches. In particular, we explicitly build the amplitudes for the B → ψ π K and B → D ¯ π π decays, and show that the energy dependence of the covariant approach is model dependent. We also show that the usual recursive construction of covariant tensors explicitly violates crossing symmetry, which would lead to different resonance parameters extracted from scattering and decay processes.

  16. What is the right formalism to search for resonances?

    DOE PAGES

    Mikhasenko, M.; Pilloni, A.; Nys, J.; ...

    2018-03-17

Hadron decay chains constitute one of the main sources of information on the QCD spectrum. We discuss the differences between several partial wave analysis formalisms used in the literature to build the amplitudes. We match the helicity amplitudes to the covariant tensor basis. Thereby, we pay attention to the analytical properties of the amplitudes and separate singularities of kinematical and dynamical nature. We study the analytical properties of the spin-orbit (LS) formalism, and some of the covariant tensor approaches. In particular, we explicitly build the amplitudes for the B → ψ π K and B → D ¯ π π decays, and show that the energy dependence of the covariant approach is model dependent. We also show that the usual recursive construction of covariant tensors explicitly violates crossing symmetry, which would lead to different resonance parameters extracted from scattering and decay processes.

  17. Atomic orbital-based SOS-MP2 with tensor hypercontraction. I. GPU-based tensor construction and exploiting sparsity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Song, Chenchen; Martínez, Todd J.; SLAC National Accelerator Laboratory, Menlo Park, California 94025

We present a tensor hypercontracted (THC) scaled opposite spin second order Møller-Plesset perturbation theory (SOS-MP2) method. By using THC, we reduce the formal scaling of SOS-MP2 with respect to molecular size from quartic to cubic. We achieve further efficiency by exploiting sparsity in the atomic orbitals and using graphical processing units (GPUs) to accelerate integral construction and matrix multiplication. The practical scaling of GPU-accelerated atomic orbital-based THC-SOS-MP2 calculations is found to be N{sup 2.6} for reference data sets of water clusters and alanine polypeptides containing up to 1600 basis functions. The errors in correlation energy with respect to density-fitting-SOS-MP2 are less than 0.5 kcal/mol for all systems tested (up to 162 atoms).
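The empirical N^2.6 scaling quoted here is the kind of exponent one recovers as the slope of a log-log fit of wall-clock time against basis-set size. A minimal sketch with synthetic timings (the constant prefactor and sizes are illustrative, not the paper's data):

```python
import numpy as np

# Synthetic wall-clock timings (seconds) for increasing basis-set sizes,
# generated with a known N^2.6 power law; a real benchmark would measure these.
n = np.array([200, 400, 800, 1600], dtype=float)
t = 1e-4 * n ** 2.6

# The practical scaling exponent is the slope of log(t) vs log(n).
slope, intercept = np.polyfit(np.log(n), np.log(t), 1)
```

On measured timings the fitted slope would sit below the formal cubic bound whenever sparsity screening is effective.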

  18. High-grade glioma diffusive modeling using statistical tissue information and diffusion tensors extracted from atlases.

    PubMed

    Roniotis, Alexandros; Manikis, Georgios C; Sakkalis, Vangelis; Zervakis, Michalis E; Karatzanis, Ioannis; Marias, Kostas

    2012-03-01

Glioma, especially glioblastoma, is a leading cause of brain cancer fatality involving highly invasive and neoplastic growth. Diffusive models of glioma growth use variations of the diffusion-reaction equation in order to simulate the invasive patterns of glioma cells by approximating the spatiotemporal change of glioma cell concentration. The most advanced diffusive models take into consideration the heterogeneous velocity of glioma in gray and white matter, by using two different discrete diffusion coefficients in these areas. Moreover, by using diffusion tensor imaging (DTI), they simulate the anisotropic migration of glioma cells, which is facilitated along white fibers, assuming diffusion tensors with different diffusion coefficients along each candidate direction of growth. Our study extends this concept by fully exploiting the proportions of white and gray matter extracted by normal brain atlases, rather than discretizing diffusion coefficients. Moreover, the proportions of white and gray matter, as well as the diffusion tensors, are extracted from the respective atlases; thus, no DTI processing is needed. Finally, we applied this novel glioma growth model to real data, and the results indicate that prognostication rates can be improved. © 2012 IEEE
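The diffusion-reaction equation underlying these glioma models is typically the Fisher-KPP form, du/dt = ∇·(D∇u) + ρu(1−u). A minimal 1-D explicit finite-difference sketch, with a diffusion coefficient that varies continuously in space in the spirit of atlas-derived tissue proportions (grid size, ρ, and the D profile are illustrative assumptions, not the paper's parameters):

```python
import numpy as np

nx, dx, dt, rho = 200, 0.1, 0.001, 0.012
x = np.arange(nx) * dx
# Continuously varying diffusivity, a stand-in for white-matter proportion
# taken from an atlas (higher D = faster migration along white matter).
D = 0.05 + 0.10 * x / x.max()
u = np.exp(-((x - 10.0) ** 2))          # initial tumour-cell concentration

for _ in range(1000):
    flux = D[:-1] * np.diff(u) / dx     # approximate D du/dx at cell faces
    # interior update: divergence of the flux plus logistic proliferation
    u[1:-1] += dt * (np.diff(flux) / dx + rho * u[1:-1] * (1.0 - u[1:-1]))
```

The explicit scheme is stable here because dt·max(D)/dx² ≈ 0.015 is well below 0.5; an anisotropic 3-D version would replace the scalar D with the atlas-derived tensor field.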

  19. Diffusion Tensor Image Registration Using Hybrid Connectivity and Tensor Features

    PubMed Central

    Wang, Qian; Yap, Pew-Thian; Wu, Guorong; Shen, Dinggang

    2014-01-01

    Most existing diffusion tensor imaging (DTI) registration methods estimate structural correspondences based on voxelwise matching of tensors. The rich connectivity information that is given by DTI, however, is often neglected. In this article, we propose to integrate complementary information given by connectivity features and tensor features for improved registration accuracy. To utilize connectivity information, we place multiple anchors representing different brain anatomies in the image space, and define the connectivity features for each voxel as the geodesic distances from all anchors to the voxel under consideration. The geodesic distance, which is computed in relation to the tensor field, encapsulates information of brain connectivity. We also extract tensor features for every voxel to reflect the local statistics of tensors in its neighborhood. We then combine both connectivity features and tensor features for registration of tensor images. From the images, landmarks are selected automatically and their correspondences are determined based on their connectivity and tensor feature vectors. The deformation field that deforms one tensor image to the other is iteratively estimated and optimized according to the landmarks and their associated correspondences. Experimental results show that, by using connectivity features and tensor features simultaneously, registration accuracy is increased substantially compared with the cases using either type of features alone. PMID:24293159
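The connectivity features described here are geodesic distances from a set of anchors, computed with respect to the tensor field. A hedged toy sketch on a 2-D grid, using Dijkstra's algorithm with edge costs set to the reciprocal of a scalar diffusivity as a stand-in for the tensor-derived metric (the anchors, grid size, and `diffusivity` field are all synthetic):

```python
import heapq
import numpy as np

def geodesic_from(anchor, diffusivity):
    """Dijkstra distances from one anchor on a 4-connected grid."""
    h, w = diffusivity.shape
    dist = np.full((h, w), np.inf)
    dist[anchor] = 0.0
    heap = [(0.0, anchor)]
    while heap:
        d, (i, j) = heapq.heappop(heap)
        if d > dist[i, j]:
            continue                     # stale entry
        for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
            if 0 <= ni < h and 0 <= nj < w:
                nd = d + 1.0 / diffusivity[ni, nj]  # cheap travel where diffusivity is high
                if nd < dist[ni, nj]:
                    dist[ni, nj] = nd
                    heapq.heappush(heap, (nd, (ni, nj)))
    return dist

diffusivity = np.ones((8, 8))            # uniform field -> Manhattan distances
anchors = [(0, 0), (7, 7)]
# features[i, j] is the connectivity-feature vector of voxel (i, j):
# one geodesic distance per anchor.
features = np.stack([geodesic_from(a, diffusivity) for a in anchors], axis=-1)
```

In the paper's setting the per-voxel feature vector would be concatenated with local tensor statistics before landmark matching.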

  20. Analysis and Prediction of Sea Ice Evolution using Koopman Mode Decomposition Techniques

    DTIC Science & Technology

    2018-04-30

Title: Analysis and Prediction of Sea Ice Evolution using Koopman Mode Decomposition Techniques Subject: Monthly Progress Report Period of...Resources: N/A TOTAL: $18,687 TECHNICAL STATUS REPORT Abstract The program goal is analysis of sea ice dynamical behavior using Koopman Mode Decomposition (KMD) techniques. The work in the program's first month consisted of improvements to data processing code, inclusion of additional arctic sea ice
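Koopman mode decomposition is commonly computed from snapshot data via dynamic mode decomposition (DMD): fit the linear operator A ≈ Y X⁺ between consecutive snapshots in a reduced SVD basis; its eigenvalues and eigenvectors approximate Koopman eigenvalues and modes. A minimal exact-DMD sketch on synthetic oscillatory data (the data, rank, and sizes are illustrative, not the report's sea-ice fields):

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 4 * np.pi, 100)
space = rng.standard_normal((50, 2))
X_full = space @ np.array([np.cos(t), np.sin(t)])   # rank-2 oscillating field

X, Y = X_full[:, :-1], X_full[:, 1:]                # snapshot pairs x_k -> x_{k+1}
U, s, Vh = np.linalg.svd(X, full_matrices=False)
r = 2                                               # truncation rank
U, s, Vh = U[:, :r], s[:r], Vh[:r]
A_tilde = U.T @ Y @ Vh.T @ np.diag(1.0 / s)         # reduced linear operator
eigvals, W = np.linalg.eig(A_tilde)                 # Koopman eigenvalues
modes = Y @ Vh.T @ np.diag(1.0 / s) @ W             # exact DMD (Koopman) modes
```

For purely oscillatory dynamics like this the eigenvalues sit on the unit circle; growth or decay of a physical mode would show up as |λ| above or below 1.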
