Science.gov

Sample records for processing tensor decomposition

  1. Orthogonal tensor decompositions

    SciTech Connect

    Tamara G. Kolda

    2000-03-01

    The authors explore the orthogonal decomposition of tensors (also known as multi-dimensional arrays or n-way arrays) using two different definitions of orthogonality. They present numerous examples to illustrate the difficulties in understanding such decompositions. They conclude with a counterexample to a tensor extension of the Eckart-Young SVD approximation theorem by Leibovici and Sabatier [Linear Algebra Appl. 269(1998):307--329].
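For context, the matrix (order-2) case of the Eckart-Young theorem, whose tensor extension the cited counterexample refutes, can be illustrated with a minimal numpy sketch (the sizes and data below are arbitrary, not the paper's construction):

```python
import numpy as np

# Matrix (order-2) Eckart-Young: the best rank-k approximation in the
# Frobenius norm is obtained by truncating the SVD.
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 5))

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# The error equals the root-sum-square of the discarded singular values --
# exactly the property that fails to extend to higher-order tensors.
err = np.linalg.norm(A - A_k)
assert np.isclose(err, np.sqrt(np.sum(s[k:] ** 2)))
```

The counterexample shows that no analogous "truncate an orthogonal decomposition" recipe yields the best low-rank approximation of a general tensor.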

  2. Tensor gauge condition and tensor field decomposition

    NASA Astrophysics Data System (ADS)

    Zhu, Ben-Chao; Chen, Xiang-Song

    2015-10-01

We discuss various proposals for separating a tensor field into pure-gauge and gauge-invariant components. Such tensor field decomposition is intimately related to the effort of identifying the real gravitational degrees of freedom in the metric tensor of Einstein's general relativity. We show that, as for a vector field, the tensor field decomposition corresponds exactly to, and can be derived from, the gauge-fixing approach. The complication for the tensor field, however, is that there are infinitely many complete gauge conditions, in contrast to the unique Coulomb gauge for a vector field. The cause of this complication, as we reveal, is the emergence of a peculiar gauge-invariant pure-gauge construction for any gauge field of spin ≥ 2. We make an extensive exploration of the complete tensor gauge conditions and their corresponding tensor field decompositions, with regard to mathematical structure, the equations of motion for the fields, and nonlinear properties. Apparently, no single choice is superior in all respects, owing to the awkward fact that no gauge-fixing can reduce a tensor field to a purely dynamical (i.e. transverse and traceless) one, as the Coulomb gauge can for a vector field.

  3. An Iterative Reweighted Method for Tucker Decomposition of Incomplete Tensors

    NASA Astrophysics Data System (ADS)

    Yang, Linxiao; Fang, Jun; Li, Hongbin; Zeng, Bing

    2016-09-01

We consider the problem of low-rank decomposition of incomplete multiway tensors. Since many real-world data lie on an intrinsically low-dimensional subspace, tensor low-rank decomposition with missing entries has applications in many data analysis problems such as recommender systems and image inpainting. In this paper, we focus on Tucker decomposition, which represents an Nth-order tensor in terms of N factor matrices and a core tensor via multilinear operations. To exploit the underlying multilinear low-rank structure in high-dimensional datasets, we propose a group-based log-sum penalty functional that places structural sparsity over the core tensor, leading to a compact representation with the smallest possible core tensor. The method for Tucker decomposition is developed by iteratively minimizing a surrogate function that majorizes the original objective, which results in an iterative reweighted process. In addition, to reduce the computational complexity, an over-relaxed monotone fast iterative shrinkage-thresholding technique is adapted and embedded in the iterative reweighted process. The proposed method determines the model complexity (i.e. the multilinear rank) automatically. Simulation results show that the proposed algorithm offers competitive performance compared with other existing algorithms.
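The Tucker format described above (a core tensor multiplied by a factor matrix along each mode) can be sketched as a single einsum contraction; the sizes and random factors below are arbitrary illustrative choices, not the paper's reweighted algorithm:

```python
import numpy as np

# Tucker format for an order-3 tensor: X ≈ G x1 U1 x2 U2 x3 U3,
# with a small core G and one factor matrix per mode.
rng = np.random.default_rng(1)
G = rng.standard_normal((2, 3, 2))   # core tensor (multilinear rank 2x3x2)
U1 = rng.standard_normal((10, 2))    # mode-1 factor
U2 = rng.standard_normal((12, 3))    # mode-2 factor
U3 = rng.standard_normal((8, 2))     # mode-3 factor

# All three mode-n products written as one contraction.
X = np.einsum('abc,ia,jb,kc->ijk', G, U1, U2, U3)
print(X.shape)  # → (10, 12, 8)
```

Shrinking the core (here 2x3x2) relative to the full tensor (10x12x8) is what makes the representation compact.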

  4. Tensor decomposition of EEG signals: a brief review.

    PubMed

    Cong, Fengyu; Lin, Qiu-Hua; Kuang, Li-Dan; Gong, Xiao-Feng; Astikainen, Piia; Ristaniemi, Tapani

    2015-06-15

Electroencephalography (EEG) is a fundamental tool for functional brain imaging. EEG signals tend to be represented by a vector or a matrix to facilitate data processing and analysis with generally understood methodologies such as time-series analysis, spectral analysis, and matrix decomposition. Indeed, EEG signals often naturally possess more than two modes, including time and space, and can be represented by a multi-way array called a tensor. This review summarizes the current progress of tensor decomposition of EEG signals in three respects. The first concerns the existing modes and tensors of EEG signals. Second, two fundamental tensor decomposition models, canonical polyadic decomposition (CPD, also called parallel factor analysis, PARAFAC) and Tucker decomposition, are introduced and compared, and the applications of the two models to EEG signals are addressed. In particular, the determination of the number of components for each mode is discussed. Finally, the N-way partial least squares and higher-order partial least squares are described as a potential trend for processing and analyzing brain signals of two modalities simultaneously.
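The CPD model mentioned above can be sketched with a plain alternating-least-squares fit; this is a generic numpy illustration on a synthetic tensor (a stand-in for, say, a channels x frequencies x time EEG array), not the specific pipeline of the review:

```python
import numpy as np

def khatri_rao(B, C):
    """Column-wise Kronecker product of B (J x R) and C (K x R) -> (J*K, R)."""
    J, R = B.shape
    K, _ = C.shape
    return (B[:, None, :] * C[None, :, :]).reshape(J * K, R)

def cp_als(X, R, n_iter=500, seed=0):
    """Fit X ≈ sum_r a_r ∘ b_r ∘ c_r by alternating least squares."""
    I, J, K = X.shape
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((I, R))
    B = rng.standard_normal((J, R))
    C = rng.standard_normal((K, R))
    X1 = X.reshape(I, J * K)                     # mode-1 unfolding
    X2 = np.moveaxis(X, 1, 0).reshape(J, I * K)  # mode-2 unfolding
    X3 = np.moveaxis(X, 2, 0).reshape(K, I * J)  # mode-3 unfolding
    for _ in range(n_iter):
        A = X1 @ np.linalg.pinv(khatri_rao(B, C).T)
        B = X2 @ np.linalg.pinv(khatri_rao(A, C).T)
        C = X3 @ np.linalg.pinv(khatri_rao(A, B).T)
    return A, B, C

# Recover a synthetic rank-2 tensor.
rng = np.random.default_rng(2)
A0, B0, C0 = (rng.standard_normal((n, 2)) for n in (5, 6, 7))
X = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
A, B, C = cp_als(X, R=2)
X_hat = np.einsum('ir,jr,kr->ijk', A, B, C)
rel_err = np.linalg.norm(X - X_hat) / np.linalg.norm(X)
```

Each update solves a linear least-squares problem for one factor with the other two held fixed, which is why the number of components R must be chosen per model rather than per mode, unlike Tucker.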

  5. Smooth PARAFAC Decomposition for Tensor Completion

    NASA Astrophysics Data System (ADS)

    Yokota, Tatsuya; Zhao, Qibin; Cichocki, Andrzej

    2016-10-01

In recent years, low-rank tensor completion, a higher-order extension of matrix completion, has received considerable attention. However, the low-rank assumption is not sufficient for recovering visual data, such as color and 3D images, when the ratio of missing data is extremely high. In this paper, we consider "smoothness" constraints as well as low-rank approximations, and propose an efficient algorithm for tensor completion that is particularly powerful for visual data. The proposed method admits significant advantages, owing to the integration of smooth PARAFAC decomposition for incomplete tensors and the efficient selection of models in order to minimize the tensor rank. Our method is therefore termed "smooth PARAFAC tensor completion" (SPC). To impose the smoothness constraints, we employ two strategies, total variation (SPC-TV) and quadratic variation (SPC-QV), and derive the corresponding algorithms for model learning. Extensive experimental evaluations on both synthetic and real-world visual data illustrate the significant improvements of our method, in terms of both prediction performance and efficiency, compared with many state-of-the-art tensor completion methods.

  6. Robust Face Clustering Via Tensor Decomposition.

    PubMed

    Cao, Xiaochun; Wei, Xingxing; Han, Yahong; Lin, Dongdai

    2015-11-01

Face clustering is a key component in both image management and video analysis. Wild human faces vary with pose, expression, and illumination changes. All kinds of noise, such as block occlusions, random pixel corruptions, and various disguises, may also destroy the consistency of faces referring to the same person. This motivates us to develop a robust face clustering algorithm that is less sensitive to these noises. To retain the underlying structured information within facial images, we use tensors to represent faces, and then accomplish the clustering task on the tensor data. The proposed algorithm, called robust tensor clustering (RTC), first finds a lower-rank approximation of the original tensor data using an L1-norm optimization function. Because the L1 norm, unlike the L2 norm, does not exaggerate the effect of noise, minimizing the L1-norm approximation function makes RTC robust. We then compute the higher-order singular value decomposition of this approximate tensor to obtain the final clustering results. Unlike traditional algorithms that solve the approximation function with a greedy strategy, we utilize a nongreedy strategy to obtain a better solution. Experiments conducted on benchmark facial datasets and gait sequences demonstrate that RTC performs better than state-of-the-art clustering algorithms and is more robust to noise. PMID:25546869

  7. An optimization approach for fitting canonical tensor decompositions.

    SciTech Connect

    Dunlavy, Daniel M.; Acar, Evrim; Kolda, Tamara Gibson

    2009-02-01

    Tensor decompositions are higher-order analogues of matrix decompositions and have proven to be powerful tools for data analysis. In particular, we are interested in the canonical tensor decomposition, otherwise known as the CANDECOMP/PARAFAC decomposition (CPD), which expresses a tensor as the sum of component rank-one tensors and is used in a multitude of applications such as chemometrics, signal processing, neuroscience, and web analysis. The task of computing the CPD, however, can be difficult. The typical approach is based on alternating least squares (ALS) optimization, which can be remarkably fast but is not very accurate. Previously, nonlinear least squares (NLS) methods have also been recommended; existing NLS methods are accurate but slow. In this paper, we propose the use of gradient-based optimization methods. We discuss the mathematical calculation of the derivatives and further show that they can be computed efficiently, at the same cost as one iteration of ALS. Computational experiments demonstrate that the gradient-based optimization methods are much more accurate than ALS and orders of magnitude faster than NLS.

  8. Tensor decomposition in potential energy surface representations.

    PubMed

    Ostrowski, Lukas; Ziegler, Benjamin; Rauhut, Guntram

    2016-09-14

In order to reduce the operation count in vibration correlation methods, e.g., vibrational configuration interaction (VCI) theory, a tensor decomposition approach has been applied to the analytical representations of multidimensional potential energy surfaces (PESs). It is shown that a decomposition of the coefficients within the individual n-mode coupling terms in a multimode expansion of the PES is feasible and allows for convenient contractions of one-dimensional integrals with these newly determined factor matrices. Deviations in the final VCI frequencies of a set of small molecules were found to be negligible once the rank of the factor matrices is chosen appropriately. Recommendations for meaningful ranks are provided and different algorithms are discussed. PMID:27634247

  9. Tensor network decompositions in the presence of a global symmetry

    SciTech Connect

    Singh, Sukhwinder; Pfeifer, Robert N. C.; Vidal, Guifre

    2010-11-15

    Tensor network decompositions offer an efficient description of certain many-body states of a lattice system and are the basis of a wealth of numerical simulation algorithms. We discuss how to incorporate a global symmetry, given by a compact, completely reducible group G, in tensor network decompositions and algorithms. This is achieved by considering tensors that are invariant under the action of the group G. Each symmetric tensor decomposes into two types of tensors: degeneracy tensors, containing all the degrees of freedom, and structural tensors, which only depend on the symmetry group. In numerical calculations, the use of symmetric tensors ensures the preservation of the symmetry, allows selection of a specific symmetry sector, and significantly reduces computational costs. On the other hand, the resulting tensor network can be interpreted as a superposition of exponentially many spin networks. Spin networks are used extensively in loop quantum gravity, where they represent states of quantum geometry. Our work highlights their importance in the context of tensor network algorithms as well, thus setting the stage for cross-fertilization between these two areas of research.

  10. Tensor product decomposition methods for plasma physics computations

    NASA Astrophysics Data System (ADS)

    Del-Castillo-Negrete, D.

    2012-03-01

    Tensor product decomposition (TPD) methods are a powerful linear algebra technique for the efficient representation of high-dimensional data sets. In the simplest 2-dimensional case, TPD reduces to the singular value decomposition (SVD) of matrices. These methods, which are closely related to proper orthogonal decomposition techniques, have been extensively applied in signal and image processing, and to some fluid mechanics problems. However, their use in plasma physics computation is relatively new. Some recent applications include: data compression of 6-dimensional gyrokinetic plasma turbulence data sets [D. R. Hatch, D. del-Castillo-Negrete, and P. W. Terry, submitted to J. Comp. Phys. (2011)], noise reduction in particle methods [R. Nguyen, D. del-Castillo-Negrete, K. Schneider, M. Farge, and G. Chen, J. Comp. Phys. 229, 2821-2839 (2010)], and multiscale analysis of plasma turbulence [S. Futatani, S. Benkadda, and D. del-Castillo-Negrete, Phys. Plasmas 16, 042506 (2009)]. The goal of this presentation is to discuss a novel application of TPD methods to projective integration of particle-based collisional plasma transport computations.

  11. 3D extension of Tensorial Polar Decomposition. Application to (photo-)elasticity tensors

    NASA Astrophysics Data System (ADS)

    Desmorat, Rodrigue; Desmorat, Boris

    2016-06-01

    The orthogonalized harmonic decomposition of symmetric fourth-order tensors (i.e. having major and minor indicial symmetries, such as elasticity tensors) is completed by a representation of harmonic fourth-order tensors H by means of two second-order harmonic (symmetric deviatoric) tensors only. A similar decomposition is obtained for non-symmetric tensors (i.e. having minor indicial symmetry only, such as photo-elasticity tensors or elasto-plasticity tangent operators) introducing a fourth-order major antisymmetric traceless tensor Z. The tensor Z is represented by means of one harmonic second-order tensor and one antisymmetric second-order tensor only. Representations of totally symmetric (rari-constant), symmetric and major antisymmetric fourth-order tensors are simple particular cases of the proposed general representation. Closed-form expressions for tensor decomposition are given in the monoclinic case. Practical applications to elasticity and photo-elasticity monoclinic tensors are finally presented.

  12. Calculating vibrational spectra of molecules using tensor train decomposition

    NASA Astrophysics Data System (ADS)

    Rakhuba, Maxim; Oseledets, Ivan

    2016-09-01

    We propose a new algorithm for calculating vibrational spectra of molecules using tensor train decomposition. Under the assumption that the eigenfunctions lie on a low-parametric manifold of low-rank tensors, we suggest using well-known iterative methods that utilize matrix inversion (the locally optimal block preconditioned conjugate gradient method, inverse iteration) and solving the corresponding linear systems inexactly along this manifold. As an application, we accurately compute the vibrational spectrum (84 states) of the acetonitrile molecule CH3CN on a laptop in one hour, using only 100 MB of memory to represent all computed eigenfunctions.
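As background, the tensor-train format itself can be computed by sequential truncated SVDs (the standard TT-SVD construction); the sketch below is a generic illustration on a tiny rank-1 tensor, not the authors' eigensolver:

```python
import numpy as np

def tt_svd(X, eps=1e-10):
    """Decompose an ndarray into TT cores G_k of shape (r_{k-1}, n_k, r_k)."""
    dims = X.shape
    cores, r = [], 1
    M = np.asarray(X)
    for n in dims[:-1]:
        M = M.reshape(r * n, -1)
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        rank = max(1, int(np.sum(s > eps * s[0])))  # truncate tiny singular values
        cores.append(U[:, :rank].reshape(r, n, rank))
        M = np.diag(s[:rank]) @ Vt[:rank, :]        # carry the remainder forward
        r = rank
    cores.append(M.reshape(r, dims[-1], 1))
    return cores

# A rank-1 tensor, so every TT rank comes out as 1.
X = np.einsum('i,j,k->ijk', *(np.arange(1, n + 1.0) for n in (3, 4, 5)))
cores = tt_svd(X)
print([G.shape for G in cores])  # → [(1, 3, 1), (1, 4, 1), (1, 5, 1)]

# Contract the train back together and check the reconstruction.
Y = cores[0]
for G in cores[1:]:
    Y = np.tensordot(Y, G, axes=([Y.ndim - 1], [0]))
Y = Y.reshape(X.shape)
assert np.allclose(X, Y)
```

Storage scales linearly in the order of the tensor for fixed TT ranks, which is what makes representing many eigenfunctions in 100 MB plausible.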

  13. Uncertainty propagation in orbital mechanics via tensor decomposition

    NASA Astrophysics Data System (ADS)

    Sun, Yifei; Kumar, Mrinal

    2016-03-01

    Uncertainty forecasting in orbital mechanics is an essential but difficult task, primarily because the underlying Fokker-Planck equation (FPE) is defined on a relatively high dimensional (6-D) state-space and is driven by the nonlinear perturbed Keplerian dynamics. In addition, an enormously large solution domain is required for numerical solution of this FPE (e.g. encompassing the entire orbit in the x-y-z subspace), of which the state probability density function (pdf) occupies a tiny fraction at any given time. This coupling of large size, high dimensionality and nonlinearity makes for a formidable computational task, and has caused the FPE for orbital uncertainty propagation to remain an unsolved problem. To the best of the authors' knowledge, this paper presents the first successful direct solution of the FPE for perturbed Keplerian mechanics. To tackle the dimensionality issue, the time-varying state pdf is approximated in the CANDECOMP/PARAFAC decomposition tensor form, where all six spatial dimensions as well as the time dimension are separated from one another. The pdf approximation for all times is obtained simultaneously via the alternating least squares algorithm. Chebyshev spectral differentiation is employed for discretization on account of its spectral ("super-fast") convergence rate. To facilitate the tensor decomposition and control the solution domain size, the system dynamics is expressed using spherical coordinates in a noninertial reference frame. Numerical results obtained on a regular personal computer are compared with Monte Carlo simulations.
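The Chebyshev spectral differentiation mentioned above can be sketched with the standard differentiation matrix on Chebyshev points (as in Trefethen's classic cheb routine); this generic illustration is not the paper's FPE solver:

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix D and points x on [-1, 1]."""
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)          # Chebyshev extreme points
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))   # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                       # diagonal by negative row sums
    return D, x

# Differentiate f(x) = exp(x): spectral ("super-fast") accuracy on smooth data.
D, x = cheb(16)
err = np.max(np.abs(D @ np.exp(x) - np.exp(x)))
print(err)  # tiny: the error decays geometrically with N for analytic f
```

For a smooth function, 17 points already give close to machine-precision derivatives, which is the "super-fast" convergence the abstract refers to.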

  14. Databases post-processing in Tensoral

    NASA Technical Reports Server (NTRS)

    Dresselhaus, Eliot

    1994-01-01

    The Center for Turbulence Research (CTR) post-processing effort aims to make turbulence simulations and data more readily and usefully available to the research and industrial communities. The Tensoral language, introduced in this document and currently existing in prototype form, is the foundation of this effort. Tensoral provides a convenient and powerful protocol to connect users who wish to analyze fluids databases with the authors who generate them. In this document we introduce Tensoral and its prototype implementation in the form of a user's guide. This guide focuses on use of Tensoral for post-processing turbulence databases. The corresponding document - the Tensoral 'author's guide' - which focuses on how authors can make databases available to users via the Tensoral system - is currently unwritten. Section 1 of this user's guide defines Tensoral's basic notions: we explain the class of problems at hand and how Tensoral abstracts them. Section 2 defines Tensoral syntax for mathematical expressions. Section 3 shows how these expressions make up Tensoral statements. Section 4 shows how Tensoral statements and expressions are embedded into other computer languages (such as C or Vectoral) to make Tensoral programs. We conclude with a complete example program.

  15. Study of recognizing human motion observed from an arbitrary viewpoint based on decomposition of a tensor containing multiple view motions

    NASA Astrophysics Data System (ADS)

    Hori, Takayuki; Ohya, Jun; Kurumisawa, Jun

    2011-03-01

    We propose a tensor-decomposition-based algorithm that recognizes an observed action performed by an unknown person from an unknown viewpoint, neither of which is included in the database. Our previous research addressed motion recognition from a single viewpoint; in this paper, we extend the approach to human motion recognition from an arbitrary viewpoint. To this end, we construct a tensor database: a multi-dimensional array whose dimensions correspond to human models, viewpoint angles, and action classes. The value of the tensor for a given combination of human silhouette model, viewpoint angle, and action class is the series of mesh feature vectors calculated for each frame of the sequence. To recognize human motion, the actions of one of the persons in the tensor are replaced by the synthesized actions, and the core tensor of the modified tensor is computed. This process is repeated for each combination of action, person, and viewpoint, and for each iteration the difference between the modified and original core tensors is computed. The combination that gives the minimal difference is taken as the recognition result. The recognition results show the validity of the proposed method, which is experimentally compared with the nearest-neighbor rule. The proposed method is very stable: each action was recognized with over 75% accuracy.

  16. Symmetric tensor decomposition description of fermionic many-body wave functions.

    PubMed

    Uemura, Wataru; Sugino, Osamu

    2012-12-21

    The configuration interaction (CI) is a versatile wave function theory for interacting fermions, but it involves an extremely long CI series. Using a symmetric tensor decomposition method, we convert the CI series into a compact and numerically tractable form. The converted series encompasses the Hartree-Fock state in the first term and rapidly converges to the full-CI state, as numerically tested by using small molecules. Provided that the length of the symmetric tensor decomposition CI series grows only moderately with the increasing complexity of the system, the new method will serve as one of the alternative variational methods to achieve full CI with enhanced practicability. PMID:23368456

  17. Tensor decomposition techniques in the solution of vibrational coupled cluster response theory eigenvalue equations

    NASA Astrophysics Data System (ADS)

    Godtliebsen, Ian H.; Hansen, Mads Bøttger; Christiansen, Ove

    2015-01-01

    We show how the eigenvalue equations of vibrational coupled cluster response theory can be solved using a subspace projection method with Davidson update, where the basis vectors are stacked tensors decomposed into canonical (CP, CANDECOMP/PARAFAC) form. In each update step, new vectors are first orthogonalized against old vectors, followed by a tensor decomposition to a prescribed threshold T_CP. The algorithm can provide excitation energies and eigenvectors of similar accuracy as a full-vector approach, with only a very modest increase in the number of vectors required for convergence. The algorithm is illustrated with sample calculations for formaldehyde, 1,2,5-thiadiazole, and water. Analysis of the formaldehyde and thiadiazole calculations illustrates a number of interesting features of the algorithm. For example, the tensor decomposition threshold is optimally set to rather loose values, such as T_CP = 10^-2. Even with such thresholds for the tensor decompositions, the original eigenvalue equations can still be solved accurately. It is thus possible to directly calculate vibrational wave functions in tensor-decomposed format.

  18. Tensor decomposition in post-Hartree–Fock methods. II. CCD implementation

    SciTech Connect

    Benedikt, Udo; Böhm, Karl-Heinz; Auer, Alexander A.

    2013-12-14

    In a previous publication, we have discussed the usage of tensor decomposition in the canonical polyadic (CP) tensor format for electronic structure methods. There, we focused on two-electron integrals and second order Møller-Plesset perturbation theory (MP2). In this work, we discuss the CP format for Coupled Cluster (CC) theory and present a pilot implementation for the Coupled Cluster Doubles method. We discuss the iterative solution of the CC amplitude equations using tensors in CP representation and present a tensor contraction scheme that minimizes the effort necessary for the rank reductions during the iterations. Furthermore, several details concerning the reduction of complexity of the algorithm, convergence of the CC iterations, truncation errors, and the choice of threshold for chemical accuracy are discussed.

  19. Thermochemical water decomposition processes

    NASA Technical Reports Server (NTRS)

    Chao, R. E.

    1974-01-01

    Thermochemical processes which lead to the production of hydrogen and oxygen from water without the consumption of any other material have a number of advantages when compared to other processes such as water electrolysis. It is possible to operate a sequence of chemical steps with net work requirements equal to zero at temperatures well below the temperature required for water dissociation in a single step. Various types of procedures are discussed, giving attention to halide processes, reverse Deacon processes, iron oxide and carbon oxide processes, and metal and alkali metal processes. Economic aspects are also considered.

  20. Higher order singular value decomposition of tensors for fusion of registered images

    NASA Astrophysics Data System (ADS)

    Thomason, Michael G.; Gregor, Jens

    2011-01-01

    This paper describes a computational method using tensor math for higher order singular value decomposition (HOSVD) of registered images. Tensor decomposition is a rigorous way to expose structure embedded in multidimensional datasets. Given a dataset of registered 2-D images, the dataset is represented in tensor format and the HOSVD of the tensor is computed to obtain a set of 2-D basis images. The basis images constitute a linear decomposition of the original dataset. HOSVD is data-driven and does not require the user to select parameters or assign thresholds. A specific application uses the basis images for pixel-level fusion of registered images into a single image for visualization. The fusion is optimized with respect to a measure of mean squared error. HOSVD and image fusion are illustrated empirically with four real datasets: (1) visible and infrared data of a natural scene; (2) MRI and x-ray CT brain images; and, in nondestructive testing, (3) x-ray, ultrasound, and eddy current images, and (4) x-ray, ultrasound, and shearography images.
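The HOSVD itself can be sketched in a few lines: one SVD per mode unfolding gives the factor matrices, and multilinear products with their transposes give the core. This is a generic illustration of the decomposition, not the authors' fusion pipeline:

```python
import numpy as np

def unfold(X, mode):
    """Mode-n unfolding: move axis `mode` first, then flatten the rest."""
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def hosvd(X):
    """Return core G and per-mode orthogonal factor matrices U."""
    U = [np.linalg.svd(unfold(X, n), full_matrices=False)[0]
         for n in range(X.ndim)]
    G = X
    for n, Un in enumerate(U):
        # Mode-n product with Un^T, restoring the axis order afterwards.
        G = np.moveaxis(np.tensordot(Un.T, G, axes=([1], [n])), 0, n)
    return G, U

rng = np.random.default_rng(3)
X = rng.standard_normal((4, 5, 6))
G, U = hosvd(X)

# With no truncation the reconstruction is exact.
Y = G
for n, Un in enumerate(U):
    Y = np.moveaxis(np.tensordot(Un, Y, axes=([1], [n])), 0, n)
assert np.allclose(X, Y)
```

Truncating the columns of each U_n (and the core accordingly) yields the reduced set of basis images used for fusion.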

  21. Tensor decomposition in electronic structure calculations on 3D Cartesian grids

    SciTech Connect

    Khoromskij, B.N.; Khoromskaia, V.; Chinnamsetty, S.R.; Flad, H.-J.

    2009-09-01

    In this paper, we investigate a novel approach based on the combination of Tucker-type and canonical tensor decomposition techniques for the efficient numerical approximation of functions and operators in electronic structure calculations. In particular, we study the applicability of tensor approximations for the numerical solution of the Hartree-Fock and Kohn-Sham equations on 3D Cartesian grids. We show that the orthogonal Tucker-type tensor approximation of the electron density and Hartree potential of simple molecules leads to low-rank tensor representations. This enables an efficient tensor-product convolution scheme for the computation of the Hartree potential using a collocation-type approximation via piecewise constant basis functions on a uniform n×n×n grid. Combined with Richardson extrapolation, our approach exhibits O(h^3) convergence in the grid-size h = O(n^-1). Moreover, this requires O(3rn + r^3) storage, where r denotes the Tucker rank of the electron density, with r = O(log n) almost uniformly in n. For example, calculations of the Coulomb matrix and the Hartree-Fock energy for the CH4 molecule, with a pseudopotential on the C atom, achieved accuracies of the order of 10^-6 hartree with a grid-size n of several hundred. Since the tensor-product convolution in 3D is performed via 1D convolution transforms, our scheme markedly outperforms the 3D FFT in both computing time and storage requirements.

  22. Exploiting multi-lead electrocardiogram correlations using robust third-order tensor decomposition.

    PubMed

    Padhy, Sibasankar; Dandapat, Samarendra

    2015-10-01

    In this Letter, a robust third-order tensor decomposition of the 12-lead multi-lead electrocardiogram (MECG) is proposed to reduce the dimension of the stored data. An order-3 tensor structure is employed to represent the MECG data by rearranging the MECG information in three dimensions, which represent the leads, the beats, and the samples of some fixed ECG duration. Dimension reduction of such an arrangement exploits the correlations present among successive beats (intra-beat and inter-beat) and across the leads (inter-lead). The higher-order singular value decomposition is used to decompose the tensor data. In addition, multiscale analysis has been added for more effective handling of the ECG information: it coarsely segments the ECG characteristic waves (P-wave, QRS-complex, ST-segment, T-wave, etc.) into different sub-bands, while separating high-frequency noise components into lower-order sub-bands, which helps in removing noise from the original data. For evaluation purposes, we have used the publicly available PTB diagnostic database. The proposed method outperforms existing algorithms, whose compression ratios are under 10 for MECG data. Results show that the original MECG data volume can be reduced by more than 45 times with an acceptable level of diagnostic distortion. PMID:26609416

  23. Multipole theory and the Hehl-Obukhov decomposition of the electromagnetic constitutive tensor

    NASA Astrophysics Data System (ADS)

    de Lange, O. L.; Raab, R. E.

    2015-05-01

    The Hehl-Obukhov decomposition expresses the 36 independent components of the electromagnetic constitutive tensor for a local linear anisotropic medium in a useful general form comprising seven macroscopic property tensors: four of second rank, two vectors, and a four-dimensional (pseudo)scalar. We consider homogeneous media and show that in semi-classical multipole theory, the first full realization of this formulation is obtained (in terms of molecular polarizability tensors) at third order (electric octupole-magnetic quadrupole order). The calculations are an extension of a direct method previously used at second order (electric quadrupole-magnetic dipole order). We consider in what sense this theory is independent of the choice of molecular coordinate origins relative to which polarizabilities are evaluated. The pseudoscalar (axion) observable is expressed relative to the crystallographic origin. The other six property tensors are invariant (with respect to an arbitrary choice of each molecular coordinate origin), or zero, at first and second orders. At third order, this invariance has to be imposed by transformation of the response fields, an aspect that is required by consideration of isotropic fluids and is consistent with the invariance of transmission phenomena in dielectrics. Alternative derivations of the property tensors are reviewed, with emphasis on the pseudoscalar, constraint-breaking, translational invariance, and uniqueness.

  24. Real-time object recognition in multidimensional images based on joined extended structural tensor and higher-order tensor decomposition methods

    NASA Astrophysics Data System (ADS)

    Cyganek, Boguslaw; Smolka, Bogdan

    2015-02-01

    In this paper a system for real-time recognition of objects in multidimensional video signals is proposed. Object recognition is performed by projecting patterns into tensor subspaces obtained from the factorization of the signal tensors representing the input signal. Instead of taking only the intensity signal, the novelty of this paper is to first build an Extended Structural Tensor representation from the intensity signal, which conveys information on signal intensities as well as on higher-order statistics of the input signals. In this way the higher-order input pattern tensors are built from the training samples. The tensor subspaces are then built based on the Higher-Order Singular Value Decomposition of the prototype pattern tensors. Finally, recognition relies on measuring the distance of a test pattern projected into the tensor subspaces obtained from the training tensors. Owing to the high dimensionality of the input data, tensor-based methods require substantial memory and computational resources. However, recent advances in multi-core microprocessors and graphics cards allow real-time operation of multidimensional methods, as is shown and analyzed in this paper with real examples of object detection in digital images.

  25. Tensoral for post-processing users and simulation authors

    NASA Technical Reports Server (NTRS)

    Dresselhaus, Eliot

    1993-01-01

    The CTR post-processing effort aims to make turbulence simulations and data more readily and usefully available to the research and industrial communities. The Tensoral language, which provides the foundation for this effort, is introduced here in the form of a user's guide. The Tensoral user's guide is presented in two main sections. Section one acts as a general introduction and guides database users who wish to post-process simulation databases. Section two gives a brief description of how database authors and other advanced users can make simulation codes and/or the databases they generate available to the user community via Tensoral database back ends. The two-part structure of this document conforms to the two-level design structure of the Tensoral language. Tensoral has been designed to be a general computer language for performing tensor calculus and statistics on numerical data. Tensoral's generality allows it to be used for stand-alone native coding of high-level post-processing tasks (as described in section one of this guide). At the same time, Tensoral's specialization to a minute task (namely, to numerical tensor calculus and statistics) allows it to be easily embedded into applications written partly in Tensoral and partly in other computer languages (here, C and Vectoral). Embedded Tensoral, aimed at advanced users for more general coding (e.g. of efficient simulations, for interfacing with pre-existing software, for visualization, etc.), is described in section two of this guide.

  6. [Application of three-way data analysis (second-order tensor decomposition) algorithms in analysis of liquid chromatography].

    PubMed

    Zhang, Jin; Peng, Qianrong; Xu, Longquan; Yang, Min; Wu, Aijing; Ye, Shizhu

    2014-11-01

    Using dropline separation, tangent skimming, or triangulation to estimate the area of overlapping chromatographic peaks can introduce large deviations. These errors caused by geometric segmentation are easy to eliminate, however, using three-way data analysis (second-order tensor decomposition) algorithms. This method of chromatographic analysis has many advantages: automation, resistance to interference, and high accuracy in the resolution of overlapping chromatographic peaks. It even makes the final goal of analytical chemistry achievable without the aid of complicated separation procedures. The core of this method is the process of exploiting useful information and building models through chemometric algorithms. Three-way chromatographic data sets can be divided into trilinear and nontrilinear data sets; correspondingly, three-way data analysis (second-order tensor decomposition) algorithms can be divided into trilinear and nontrilinear algorithms. In this paper, three-way calibration as used in liquid chromatography for complex chemical systems over the last decade is reviewed, with a focus on sample pretreatment, auxiliary algorithms, and the combination and comparison of correction algorithms. PMID:25764649
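
    As a minimal illustration of the trilinear case, a rank-2 PARAFAC/CP model fitted by alternating least squares can resolve two overlapping elution peaks in a synthetic three-way (time x wavelength x sample) array. All data below are simulated stand-ins, not measurements from the reviewed studies:

    ```python
    import numpy as np

    def khatri_rao(A, B):
        """Column-wise Khatri-Rao product of (I,r) and (J,r) -> (I*J, r)."""
        return (A[:, None, :] * B[None, :, :]).reshape(-1, A.shape[1])

    def unfold(T, mode):
        return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

    def cp_als(T, rank, n_iter=500, seed=0):
        """Alternating least squares for a trilinear (CP) model of a 3-way array."""
        rng = np.random.default_rng(seed)
        A, B, C = [rng.standard_normal((d, rank)) for d in T.shape]
        for _ in range(n_iter):
            A = unfold(T, 0) @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
            B = unfold(T, 1) @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
            C = unfold(T, 2) @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
        return A, B, C

    # Synthetic HPLC-DAD data: two overlapping elution peaks x spectra x 3 samples
    t = np.linspace(0, 1, 60)
    elu = np.stack([np.exp(-((t - c) / 0.08) ** 2) for c in (0.45, 0.55)], axis=1)
    spec = np.abs(np.random.default_rng(1).standard_normal((40, 2)))
    conc = np.array([[1.0, 0.2], [0.5, 0.8], [0.1, 1.0]])
    X = np.einsum('ir,jr,kr->ijk', elu, spec, conc)

    A, B, C = cp_als(X, rank=2)
    Xhat = np.einsum('ir,jr,kr->ijk', A, B, C)
    err = np.linalg.norm(Xhat - X) / np.linalg.norm(X)
    ```

    The recovered mode-3 factor plays the role of relative analyte concentrations per sample, which is what makes quantification possible without physically separating the overlapped peaks.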

  7. Representing Matrix Cracks Through Decomposition of the Deformation Gradient Tensor in Continuum Damage Mechanics Methods

    NASA Technical Reports Server (NTRS)

    Leone, Frank A., Jr.

    2015-01-01

    A method is presented to represent the large-deformation kinematics of intraply matrix cracks and delaminations in continuum damage mechanics (CDM) constitutive material models. The method involves the additive decomposition of the deformation gradient tensor into 'crack' and 'bulk material' components. The response of the intact bulk material is represented by a reduced deformation gradient tensor, and the opening of an embedded cohesive interface is represented by a normalized cohesive displacement-jump vector. The rotation of the embedded interface is tracked as the material deforms and as the crack opens. The distribution of the total local deformation between the bulk material and the cohesive interface components is determined by minimizing the difference between the cohesive stress and the bulk material stress projected onto the cohesive interface. The improvements to the accuracy of CDM models that incorporate the presented method over existing approaches are demonstrated for a single element subjected to simple shear deformation and for a finite element model of a unidirectional open-hole tension specimen. The material model is implemented as a VUMAT user subroutine for the Abaqus/Explicit finite element software. The presented deformation gradient decomposition method reduces the artificial load transfer across matrix cracks subjected to large shearing deformations, and avoids the spurious secondary failure modes that often occur in analyses based on conventional progressive damage models.

  8. Towards Interactive Construction of Topical Hierarchy: A Recursive Tensor Decomposition Approach

    PubMed Central

    Wang, Chi; Liu, Xueqing; Song, Yanglei; Han, Jiawei

    2015-01-01

    Automatic construction of user-desired topical hierarchies over large volumes of text data is a highly desirable but challenging task. This study proposes to give users freedom to construct topical hierarchies via interactive operations such as expanding a branch and merging several branches. Existing hierarchical topic modeling techniques are inadequate for this purpose because (1) they cannot consistently preserve the topics when the hierarchy structure is modified; and (2) the slow inference prevents swift response to user requests. In this study, we propose a novel method, called STROD, that allows efficient and consistent modification of topic hierarchies, based on a recursive generative model and a scalable tensor decomposition inference algorithm with theoretical performance guarantee. Empirical evaluation shows that STROD reduces the runtime of construction by several orders of magnitude, while generating consistent and quality hierarchies. PMID:26705505

  9. Tensor-multi-scalar theories: relativistic stars and 3 + 1 decomposition

    NASA Astrophysics Data System (ADS)

    Horbatsch, Michael; Silva, Hector O.; Gerosa, Davide; Pani, Paolo; Berti, Emanuele; Gualtieri, Leonardo; Sperhake, Ulrich

    2015-10-01

    Gravitational theories with multiple scalar fields coupled to the metric and each other—a natural extension of the well studied single-scalar-tensor theories—are interesting phenomenological frameworks to describe deviations from general relativity in the strong-field regime. In these theories, the N-tuple of scalar fields takes values in a coordinate patch of an N-dimensional Riemannian target-space manifold whose properties are poorly constrained by weak-field observations. Here we introduce for simplicity a non-trivial model with two scalar fields and a maximally symmetric target-space manifold. Within this model we present a preliminary investigation of spontaneous scalarization for relativistic, perfect fluid stellar models in spherical symmetry. We find that the scalarization threshold is determined by the eigenvalues of a symmetric scalar-matter coupling matrix, and that the properties of strongly scalarized stellar configurations additionally depend on the target-space curvature radius. In preparation for numerical relativity simulations, we also write down the 3 + 1 decomposition of the field equations for generic tensor-multi-scalar theories.
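
    For reference, the standard 3 + 1 (ADM) split used in such numerical-relativity formulations writes the spacetime metric in terms of a lapse \(\alpha\), a shift \(\beta^i\), and a spatial metric \(\gamma_{ij}\); with one common sign convention for the extrinsic curvature \(K_{ij}\):

    ```latex
    ds^2 = -\alpha^2\,dt^2 + \gamma_{ij}\,(dx^i + \beta^i\,dt)(dx^j + \beta^j\,dt),
    \qquad
    K_{ij} = -\frac{1}{2\alpha}\left(\partial_t \gamma_{ij} - D_i \beta_j - D_j \beta_i\right),
    ```

    where \(D_i\) is the covariant derivative compatible with \(\gamma_{ij}\). In tensor-multi-scalar theories the field equations additionally carry evolution equations for the N-tuple of scalar fields on the target-space manifold.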

  10. Enabling High-Dimensional Hierarchical Uncertainty Quantification by ANOVA and Tensor-Train Decomposition

    SciTech Connect

    Zhang, Zheng; Yang, Xiu; Oseledets, Ivan; Karniadakis, George E.; Daniel, Luca

    2015-01-31

    Hierarchical uncertainty quantification can reduce the computational cost of stochastic circuit simulation by employing spectral methods at different levels. This paper presents an efficient framework to simulate hierarchically some challenging stochastic circuits/systems that include high-dimensional subsystems. Due to the high parameter dimensionality, it is challenging to both extract surrogate models at the low level of the design hierarchy and to handle them in the high-level simulation. In this paper, we develop an efficient analysis of variance-based stochastic circuit/microelectromechanical systems simulator to efficiently extract the surrogate models at the low level. In order to avoid the curse of dimensionality, we employ tensor-train decomposition at the high level to construct the basis functions and Gauss quadrature points. As a demonstration, we verify our algorithm on a stochastic oscillator with four MEMS capacitors and 184 random parameters. This challenging example is efficiently simulated by our simulator at a cost of only 10 min in MATLAB on a regular personal computer.
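
    As a sketch of the tensor-train idea used at the high level, the classical TT-SVD sweep compresses a tensor one mode at a time into a chain of small three-way cores. This illustrative NumPy version is generic, not the authors' simulator:

    ```python
    import numpy as np

    def tt_svd(T, eps=1e-12):
        """TT-SVD: a sweep of truncated SVDs turns a full tensor into train cores."""
        shape = T.shape
        cores, r = [], 1
        M = T.reshape(r * shape[0], -1)
        for k in range(len(shape) - 1):
            U, s, Vt = np.linalg.svd(M, full_matrices=False)
            rank = max(1, int(np.sum(s > eps * s[0])))   # relative truncation
            cores.append(U[:, :rank].reshape(r, shape[k], rank))
            r = rank
            M = (s[:rank, None] * Vt[:rank]).reshape(r * shape[k + 1], -1)
        cores.append(M.reshape(r, shape[-1], 1))
        return cores

    def tt_reconstruct(cores):
        X = cores[0]
        for G in cores[1:]:
            X = np.tensordot(X, G, axes=(X.ndim - 1, 0))  # contract bond indices
        return X.reshape([G.shape[1] for G in cores])

    # A separable function is rank-1 across every TT bond, so the cores stay tiny
    a, b, c = (np.linspace(1, 2, n) for n in (6, 7, 8))
    T = np.einsum('i,j,k->ijk', a, b, c)
    cores = tt_svd(T)
    bond_ranks = [G.shape[2] for G in cores[:-1]]
    ```

    Low bond ranks are exactly what makes TT storage scale linearly rather than exponentially in the number of dimensions, which is the point of using it against the curse of dimensionality.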

  11. Aridity and decomposition processes in complex landscapes

    NASA Astrophysics Data System (ADS)

    Ossola, Alessandro; Nyman, Petter

    2015-04-01

    Decomposition of organic matter is a key biogeochemical process contributing to nutrient cycles, carbon fluxes and soil development. The activity of decomposers depends on microclimate, with temperature and rainfall being major drivers. In complex terrain the fine-scale variation in microclimate (and hence water availability) as a result of slope orientation is caused by differences in incoming radiation and surface temperature. Aridity, measured as the long-term balance between net radiation and rainfall, is a metric that can be used to represent variations in water availability within the landscape. Since aridity metrics can be obtained at fine spatial scales, they could theoretically be used to investigate how decomposition processes vary across complex landscapes. In this study, four research sites were selected in tall open sclerophyll forest along an aridity gradient (Budyko dryness index ranging from 1.56 to 2.22) where microclimate, litter moisture and soil moisture were monitored continuously for one year. Litter bags were packed to estimate decomposition rates (k) using leaves of a tree species not present in the study area (Eucalyptus globulus) in order to avoid home-field advantage effects. Litter mass loss was measured to assess the activity of macro-decomposers (6 mm litter-bag mesh size), meso-decomposers (1 mm mesh), microbes above-ground (0.2 mm mesh) and microbes below-ground (2 cm depth, 0.2 mm mesh). Four replicates for each set of bags were installed at each site and bags were collected at 1, 2, 4, 7 and 12 months after installation. We first tested whether differences in microclimate due to slope orientation have significant effects on decomposition processes. Then the dryness index was related to decomposition rates to evaluate whether small-scale variation in decomposition can be predicted using readily available information on rainfall and radiation. Decomposition rates (k), calculated by fitting single-pool negative exponential models, generally
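
    The decomposition rate k mentioned above comes from the standard single-pool model M_t / M_0 = exp(-k t); a log-linear least-squares fit recovers it from litter-bag mass-loss data. The numbers below are hypothetical stand-ins, not the study's measurements:

    ```python
    import numpy as np

    # Collection times (months) and remaining mass fraction (hypothetical litter-bag data)
    t = np.array([1, 2, 4, 7, 12], dtype=float)
    frac = np.array([0.92, 0.85, 0.72, 0.58, 0.41])

    # Single-pool negative exponential model: M_t / M_0 = exp(-k t).
    # Log-linearise and fit the slope by least squares; -slope is k (per month).
    k = -np.polyfit(t, np.log(frac), 1)[0]
    half_life = np.log(2) / k   # months until half the litter mass is lost
    ```

    Comparing k across mesh sizes then isolates the contribution of each decomposer group, since coarser mesh admits progressively larger organisms.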

  12. Tensor based geology preserving reservoir parameterization with Higher Order Singular Value Decomposition (HOSVD)

    NASA Astrophysics Data System (ADS)

    Afra, Sardar; Gildin, Eduardo

    2016-09-01

    Parameter estimation through robust parameterization techniques has been addressed in many works associated with history matching and inverse problems. Reservoir models are in general complex, nonlinear, and large-scale with respect to the large number of states and unknown parameters. Thus, a practical approach that replaces the original set of highly correlated unknown parameters with a non-correlated set of lower dimensionality, one that still captures the most significant features of the original set, is of high importance. Furthermore, de-correlating the system's parameters while keeping the geological description intact is critical to controlling the ill-posed nature of such problems. We introduce the advantages of a new low-dimensional parameterization approach for reservoir characterization applications utilizing multilinear-algebra-based techniques such as higher order singular value decomposition (HOSVD). In tensor-based approaches like HOSVD, 2D permeability images are treated as they are, i.e., the data structure is kept intact, whereas in conventional dimensionality reduction algorithms like SVD the data have to be vectorized. Hence, compared to classical methods, higher redundancy reduction with less information loss can be achieved by decreasing the redundancies present in all dimensions. In other words, HOSVD approximation yields a more compact data representation, in the least-squares sense, and better geological consistency in comparison with classical algorithms. We examined the performance of the proposed parameterization technique against the SVD approach on the SPE10 benchmark reservoir model as well as on synthetic channelized permeability maps to demonstrate the capability of the proposed method. Moreover, to acquire statistical consistency, we repeat all experiments for a set of 1000 unknown geological samples and provide comparison using RMSE analysis. Results prove that, for a fixed compression ratio, the performance of the proposed approach
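
    The SVD-versus-HOSVD comparison described above can be sketched as follows: compress a stack of 2D maps once by vectorizing each map and truncating the ensemble SVD, and once by truncating each spatial mode HOSVD/Tucker-style, then compare RMSE. The smooth synthetic maps below are illustrative stand-ins for the SPE10 permeability fields:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    # Hypothetical ensemble: 50 smooth 32x32 "permeability" maps
    x = np.linspace(0, 1, 32)
    maps = np.stack([np.outer(np.sin(2 * np.pi * f1 * x), np.cos(2 * np.pi * f2 * x))
                     for f1, f2 in rng.uniform(0.5, 2.0, size=(50, 2))])

    # Baseline: vectorize each map and truncate the ensemble SVD to r components
    X = maps.reshape(50, -1)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    r = 10
    X_svd = U[:, :r] * s[:r] @ Vt[:r]
    rmse_svd = np.sqrt(np.mean((X_svd - X) ** 2))

    # HOSVD/Tucker: truncate each spatial mode instead, keeping the 2D structure
    def mode_basis(T, mode, r):
        M = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)
        return np.linalg.svd(M, full_matrices=False)[0][:, :r]

    U1 = mode_basis(maps, 1, 8)
    U2 = mode_basis(maps, 2, 8)
    core = np.einsum('nij,ia,jb->nab', maps, U1, U2)
    maps_hosvd = np.einsum('nab,ia,jb->nij', core, U1, U2)
    rmse_hosvd = np.sqrt(np.mean((maps_hosvd - maps) ** 2))
    ```

    Which method wins at a fixed compression ratio depends on the data; the paper's claim is that keeping the spatial structure lets HOSVD exploit redundancy in both image dimensions at once.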

  13. Parallel processing for pitch splitting decomposition

    NASA Astrophysics Data System (ADS)

    Barnes, Levi; Li, Yong; Wadkins, David; Biederman, Steve; Miloslavsky, Alex; Cork, Chris

    2009-10-01

    Decomposition of an input pattern in preparation for a double patterning process is an inherently global problem in which the influence of a local decomposition decision can be felt across an entire pattern. In spite of this, a large portion of the work can be massively distributed. Here, we discuss the advantages of geometric distribution for polygon operations with limited range of influence. Further, we have found that even the naturally global "coloring" step can, in large part, be handled in a geometrically local manner. In some practical cases, up to 70% of the work can be distributed geometrically. We also describe the methods for partitioning the problem into local pieces and present scaling data up to 100 CPUs. These techniques reduce DPT decomposition runtime by orders of magnitude.

  14. Biogeochemistry of Decomposition and Detrital Processing

    NASA Astrophysics Data System (ADS)

    Sanderman, J.; Amundson, R.

    2003-12-01

    Decomposition is a key ecological process that roughly balances net primary production in terrestrial ecosystems and is an essential process in resupplying nutrients to the plant community. Decomposition consists of three concurrent processes: comminution or fragmentation, leaching of water-soluble compounds, and microbial catabolism. Decomposition can also be viewed as a sequential process, what Eijsackers and Zehnder (1990) compare to a Russian matryoshka doll. Soil macrofauna fragment and partially solubilize plant residues, facilitating establishment of a community of decomposer microorganisms. This decomposer community will gradually shift as the most easily degraded plant compounds are utilized and the more recalcitrant materials begin to accumulate. Given enough time and the proper environmental conditions, most naturally occurring compounds can be completely mineralized to inorganic forms. Simultaneously with mineralization, the process of humification acts to transform a fraction of the plant residues into stable soil organic matter (SOM) or humus. For reference, Schlesinger (1990) estimated that only ~0.7% of detritus eventually becomes stabilized into humus. Decomposition plays a key role in the cycling of most plant macro- and micronutrients and in the formation of humus. Figure 1 places the roles of detrital processing and mineralization within the context of the biogeochemical cycling of essential plant nutrients. Chapin (1991) found that while the atmosphere supplied 4% and mineral weathering supplied no nitrogen and <1% of phosphorus, internal nutrient recycling is the source for >95% of all the nitrogen and phosphorus uptake by tundra species in Barrow, Alaska. In a cool temperate forest, nutrient recycling accounted for 93%, 89%, 88%, and 65% of total sources for nitrogen, phosphorus, potassium, and calcium, respectively (Chapin, 1991). Figure 1. A decomposition-centric biogeochemical model of nutrient cycling. 
Although there is significant

  16. Tensoral: A system for post-processing turbulence simulation data

    NASA Technical Reports Server (NTRS)

    Dresselhaus, Eliot

    1993-01-01

    Many computer simulations in engineering and science -- and especially in computational fluid dynamics (CFD) -- produce huge quantities of numerical data. These data are often so large as to make even relatively simple post-processing of this data unwieldy. The data, once computed and quality-assured, is most likely analyzed by only a few people. As a result, much useful numerical data is under-utilized. Since future state-of-the-art simulations will produce even larger datasets, will use more complex flow geometries, and will be performed on more complex supercomputers, data management issues will become increasingly cumbersome. My goal is to provide software which will automate the present and future task of managing and post-processing large turbulence datasets. My research has focused on the development of these software tools -- specifically, through the development of a very high-level language called 'Tensoral'. The ultimate goal of Tensoral is to convert high-level mathematical expressions (tensor algebra, calculus, and statistics) into efficient low-level programs which numerically calculate these expressions given simulation datasets. This approach to the database and post-processing problem has several advantages. Using Tensoral the numerical and data management details of a simulation are shielded from the concerns of the end user. This shielding is carried out without sacrificing post-processor efficiency and robustness. Another advantage of Tensoral is that its very high-level nature lends itself to portability across a wide variety of computing (and supercomputing) platforms. This is especially important considering the rapidity of changes in supercomputing hardware.

  17. Tensor Algebra Library for NVidia Graphics Processing Units

    SciTech Connect

    Liakh, Dmitry

    2015-03-16

    This is a general purpose math library implementing basic tensor algebra operations on NVidia GPU accelerators. This software is a tensor algebra library that can perform basic tensor algebra operations, including tensor contractions, tensor products, tensor additions, etc., on NVidia GPU accelerators, asynchronously with respect to the CPU host. It supports a simultaneous use of multiple NVidia GPUs. Each asynchronous API function returns a handle which can later be used for querying the completion of the corresponding tensor algebra operation on a specific GPU. The tensors participating in a particular tensor operation are assumed to be stored in local RAM of a node or GPU RAM. The main research area where this library can be utilized is the quantum many-body theory (e.g., in electronic structure theory).
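
    The basic operation such a library accelerates is a tensor contraction over shared indices. The record does not show the library's own GPU API, so here is a CPU stand-in using numpy.einsum, with the same contraction rewritten as a fused-index matrix product for comparison:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    # A typical many-body contraction: C[a,b] = sum_{i,j} A[a,i,j] * B[i,j,b]
    A = rng.standard_normal((4, 6, 6))
    B = rng.standard_normal((6, 6, 5))
    C = np.einsum('aij,ijb->ab', A, B)

    # The same contraction as a matrix product over the fused (i, j) index --
    # which is how tensor libraries typically map contractions onto GEMM kernels
    C_ref = A.reshape(4, 36) @ B.reshape(36, 5)
    ```

    A GPU tensor library performs the same index fusion and dispatches the resulting matrix multiply to the accelerator, asynchronously with respect to the host in the library described here.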

  19. Decomposition

    USGS Publications Warehouse

    Middleton, Beth A.

    2014-01-01

    A cornerstone of ecosystem ecology, decomposition was recognized as a fundamental process driving the exchange of energy in ecosystems by early ecologists such as Lindeman (1942) and Odum (1960). In the history of ecology, studies of decomposition were incorporated into the International Biological Program in the 1960s to compare the nature of organic matter breakdown in various ecosystem types. Such studies still have an important role in ecology today. More recent refinements have brought debates on the relative roles of microbes, invertebrates and environment in the breakdown and release of carbon into the atmosphere, as well as on how nutrient cycling, production and other ecosystem processes regulated by decomposition may shift with climate change. Therefore, this bibliography examines the primary literature related to organic matter breakdown, but it also explores topics in which decomposition plays a key supporting role, including vegetation composition, latitudinal gradients, altered ecosystems, anthropogenic impacts, carbon storage, and climate change models. Knowledge of these topics is relevant both to the study of ecosystem ecology and to projections of future conditions for human societies.

  20. A non-statistical regularization approach and a tensor product decomposition method applied to complex flow data

    NASA Astrophysics Data System (ADS)

    von Larcher, Thomas; Blome, Therese; Klein, Rupert; Schneider, Reinhold; Wolf, Sebastian; Huber, Benjamin

    2016-04-01

    Handling high-dimensional data sets such as those that occur, e.g., in turbulent flows or in certain types of multiscale behaviour in the Geosciences is one of the big challenges in numerical analysis and scientific computing. A suitable solution is to represent those large data sets in an appropriate compact form. In this context, tensor product decomposition methods are currently emerging as an important tool. One reason is that these methods often enable one to attack high-dimensional problems successfully; another is that they allow for very compact representations of large data sets. We follow the novel Tensor-Train (TT) decomposition method to support the development of improved understanding of the multiscale behavior and the development of compact storage schemes for solutions of such problems. One long-term goal of the project is the construction of a self-consistent closure for Large Eddy Simulations (LES) of turbulent flows that explicitly exploits the tensor product approach's capability of capturing self-similar structures. Secondly, we focus on a mixed deterministic-stochastic subgrid-scale modelling strategy currently under development for application in Finite Volume Large Eddy Simulation (LES) codes. Advanced methods of time series analysis for the data-based construction of stochastic models with inherently non-stationary statistical properties, and concepts of information theory based on a modified Akaike information criterion and on the Bayesian information criterion for model discrimination, are used to construct surrogate models for the non-resolved flux fluctuations. Vector-valued auto-regressive models with external influences form the basis for the modelling approach [1], [2], [4]. Here, we present the reconstruction capabilities of the two modeling approaches tested against 3D turbulent channel flow data computed by direct numerical simulation (DNS) for an incompressible, isothermal fluid at Reynolds number Reτ = 590 (computed by [3]). References [1] I
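
    The vector-valued auto-regressive surrogate models mentioned above can be sketched with a first-order example: simulate x_t = A x_{t-1} + noise, then recover the propagator A by least squares on lagged data. The matrix below is illustrative, not taken from the cited work:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    # A stable 3-component VAR(1) propagator (spectral radius < 1)
    A_true = np.array([[0.5, 0.1, 0.0],
                       [0.0, 0.4, 0.2],
                       [0.1, 0.0, 0.3]])
    T = 20000
    x = np.zeros((T, 3))
    for t in range(1, T):
        x[t] = A_true @ x[t - 1] + 0.1 * rng.standard_normal(3)

    # Least-squares estimate of the propagator from lagged data:
    # rows satisfy x_t^T = x_{t-1}^T A^T, so lstsq returns A^T
    X0, X1 = x[:-1], x[1:]
    A_hat = np.linalg.lstsq(X0, X1, rcond=None)[0].T
    ```

    Model order and the inclusion of external influences would then be selected with an information criterion (AIC/BIC), as the abstract describes.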

  1. Adaptation of motor imagery EEG classification model based on tensor decomposition

    NASA Astrophysics Data System (ADS)

    Li, Xinyang; Guan, Cuntai; Zhang, Haihong; Keng Ang, Kai; Ong, Sim Heng

    2014-10-01

    Objective. Session-to-session nonstationarity is inherent in brain-computer interfaces based on electroencephalography. The objective of this paper is to quantify the mismatch between the training model and test data caused by nonstationarity and to adapt the model towards minimizing the mismatch. Approach. We employ a tensor model to estimate the mismatch in a semi-supervised manner, and the estimate is regularized in the discriminative objective function. Main results. The performance of the proposed adaptation method was evaluated on a dataset recorded from 16 subjects performing motor imagery tasks on different days. The classification results validated the advantage of the proposed method in comparison with other regularization-based or spatial filter adaptation approaches. Experimental results also showed that there is a significant correlation between the quantified mismatch and the classification accuracy. Significance. The proposed method approached the nonstationarity issue from the perspective of data-model mismatch, which is more direct than data variation measurement. The results also demonstrated that the proposed method is effective in enhancing the performance of the feature extraction model.

  2. A Continuum Damage Mechanics Model to Predict Kink-Band Propagation Using Deformation Gradient Tensor Decomposition

    NASA Technical Reports Server (NTRS)

    Bergan, Andrew C.; Leone, Frank A., Jr.

    2016-01-01

    A new model is proposed that represents the kinematics of kink-band formation and propagation within the framework of a mesoscale continuum damage mechanics (CDM) model. The model uses the recently proposed deformation gradient decomposition approach to represent a kink band as a displacement jump via a cohesive interface that is embedded in an elastic bulk material. The model is capable of representing the combination of matrix failure in the frame of a misaligned fiber and instability due to shear nonlinearity. In contrast to conventional linear or bilinear strain softening laws used in most mesoscale CDM models for longitudinal compression, the constitutive response of the proposed model includes features predicted by detailed micromechanical models. These features include: 1) the rotational kinematics of the kink band, 2) an instability when the peak load is reached, and 3) a nonzero plateau stress under large strains.

  3. Tracking Hierarchical Processing in Morphological Decomposition with Brain Potentials

    ERIC Educational Resources Information Center

    Lavric, Aureliu; Elchlepp, Heike; Rastle, Kathleen

    2012-01-01

    One important debate in psycholinguistics concerns the nature of morphological decomposition processes in visual word recognition (e.g., darkness = {dark} + {-ness}). One theory claims that these processes arise during orthographic analysis and prior to accessing meaning (Rastle & Davis, 2008), and another argues that these processes arise through…

  4. Theoretical estimate on tensor-polarization asymmetry in proton-deuteron Drell-Yan process

    NASA Astrophysics Data System (ADS)

    Kumano, S.; Song, Qin-Tao

    2016-09-01

    Tensor-polarized parton distribution functions are new quantities in spin-1 hadrons such as the deuteron, and they could probe new quark-gluon dynamics in hadron and nuclear physics. In charged-lepton deep inelastic scattering, they are studied by the twist-2 structure functions b1 and b2. The HERMES Collaboration found unexpectedly large b1 values compared to a naive theoretical expectation based on the standard deuteron model. The situation should be significantly improved in the near future by an approved experiment to measure b1 at Thomas Jefferson National Accelerator Facility (JLab). There is also an interesting indication in the HERMES result that finite antiquark tensor polarization exists. It could play an important role in solving a mechanism on tensor structure in the quark-gluon level. The tensor-polarized antiquark distributions are not easily determined from the charged-lepton deep inelastic scattering; however, they can be measured in a proton-deuteron Drell-Yan process with a tensor-polarized deuteron target. In this article, we estimate the tensor-polarization asymmetry for a possible Fermilab Main-Injector experiment by using optimum tensor-polarized parton distribution functions to explain the HERMES measurement. We find that the asymmetry is typically a few percent. If it is measured, it could probe new hadron physics, and such studies could create an interesting field of high-energy spin physics. In addition, we find that a significant tensor-polarized gluon distribution should exist due to Q2 evolution, even if it were zero at a low Q2 scale. The tensor-polarized gluon distribution has never been observed, so it is an interesting future project.

  5. The ergodic decomposition of stationary discrete random processes

    NASA Technical Reports Server (NTRS)

    Gray, R. M.; Davisson, L. D.

    1974-01-01

    The ergodic decomposition is discussed, and a version focusing on the structure of individual sample functions of stationary processes is proved for the special case of discrete-time random processes with discrete alphabets. The result is stronger in this case than the usual theorem, and the proof is both intuitive and simple. Estimation-theoretic and information-theoretic interpretations are developed and applied to prove existence theorems for universal source codes, both noiseless and with a fidelity criterion.
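
    In symbols, the theorem discussed here states that any stationary process distribution \(P\) is a mixture of stationary ergodic distributions:

    ```latex
    P = \int_{\mathcal{E}} P_\theta \, d\mu(\theta),
    ```

    where \(\mathcal{E}\) indexes the ergodic components \(P_\theta\) and \(\mu\) is the mixing measure. For discrete-time, discrete-alphabet processes, almost every sample path behaves like a realization of a single ergodic component, which is the sample-function structure the paper exploits.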

  6. Capturing exponential variance using polynomial resources: applying tensor networks to nonequilibrium stochastic processes.

    PubMed

    Johnson, T H; Elliott, T J; Clark, S R; Jaksch, D

    2015-03-01

    Estimating the expected value of an observable appearing in a nonequilibrium stochastic process usually involves sampling. If the observable's variance is high, many samples are required. In contrast, we show that performing the same task without sampling, using tensor network compression, efficiently captures high variances in systems of various geometries and dimensions. We provide examples for which matching the accuracy of our efficient method would require a sample size scaling exponentially with system size. In particular, the high-variance observable e^{-βW}, motivated by Jarzynski's equality, with W the work done quenching from equilibrium at inverse temperature β, is exactly and efficiently captured by tensor networks.

  7. Analysis of benzoquinone decomposition in solution plasma process

    NASA Astrophysics Data System (ADS)

    Bratescu, M. A.; Saito, N.

    2016-01-01

    The decomposition of p-benzoquinone (p-BQ) in Solution Plasma Processing (SPP) was analyzed by Coherent Anti-Stokes Raman Spectroscopy (CARS) by monitoring the change of the anti-Stokes signal intensity of the vibrational transitions of the molecule, during and after SPP. Just in the beginning of the SPP treatment, the CARS signal intensities of the ring vibrational molecular transitions increased under the influence of the electric field of plasma. The results show that plasma influences the p-BQ molecules in two ways: (i) plasma produces a polarization and an orientation of the molecules in the local electric field of plasma and (ii) the gas phase plasma supplies, in the liquid phase, hydrogen and hydroxyl radicals, which reduce or oxidize the molecules, respectively, generating different carboxylic acids. The decomposition of p-BQ after SPP was confirmed by UV-visible absorption spectroscopy and liquid chromatography.

  8. Matrix decomposition graphics processing unit solver for Poisson image editing

    NASA Astrophysics Data System (ADS)

    Lei, Zhao; Wei, Li

    2012-10-01

In recent years, gradient-domain methods have been widely discussed in the image processing field, including seamless cloning and image stitching. These algorithms are commonly carried out by solving a large sparse linear system: the Poisson equation. However, solving the Poisson equation is a computationally and memory-intensive task, which makes it unsuitable for real-time image editing. A new matrix decomposition graphics processing unit (GPU) solver (MDGS) is proposed to address this problem. A matrix decomposition method is used to distribute the work among GPU threads, so that MDGS takes full advantage of the computing power of current GPUs. Additionally, MDGS is a hybrid solver (combining both direct and iterative techniques) and has a two-level architecture. These features enable MDGS to generate solutions identical to those of the common Poisson methods and to achieve a high convergence rate in most cases. The approach is advantageous in terms of parallelizability, low memory consumption, and breadth of application, enabling real-time image editing.
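The Poisson system at the heart of these methods can be illustrated at toy scale. The following sketch (plain NumPy with a dense direct solve, not the paper's GPU solver) performs seamless cloning on a masked region: the right-hand side carries the source image's gradients and the target image supplies the Dirichlet boundary. The mask is assumed not to touch the image border.

```python
import numpy as np

def poisson_clone(target, source, mask):
    """Seamless cloning: solve the discrete Poisson equation on the masked
    region using the source's gradients, with target values as boundary."""
    idx = {p: k for k, p in enumerate(zip(*np.nonzero(mask)))}
    n = len(idx)
    A = np.zeros((n, n))
    b = np.zeros(n)
    for (i, j), k in idx.items():
        A[k, k] = 4.0
        # divergence of the source gradient field (discrete Laplacian of source)
        b[k] = (4.0 * source[i, j] - source[i - 1, j] - source[i + 1, j]
                - source[i, j - 1] - source[i, j + 1])
        for (ni, nj) in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
            if (ni, nj) in idx:
                A[k, idx[(ni, nj)]] = -1.0       # interior neighbour: unknown
            else:
                b[k] += target[ni, nj]           # Dirichlet boundary from target
    f = np.linalg.solve(A, b)
    out = target.copy()
    for (i, j), k in idx.items():
        out[i, j] = f[k]
    return out
```

When the source gradients equal the target gradients inside the mask, the solution reproduces the target exactly, which makes a convenient sanity check.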

  9. Canonical polyadic decomposition of third-order semi-nonnegative semi-symmetric tensors using LU and QR matrix factorizations

    NASA Astrophysics Data System (ADS)

    Wang, Lu; Albera, Laurent; Kachenoura, Amar; Shu, Huazhong; Senhadji, Lotfi

    2014-12-01

Semi-symmetric three-way arrays are essential tools in blind source separation (BSS), particularly in independent component analysis (ICA). These arrays can be built by resorting to higher-order statistics of the data. The canonical polyadic (CP) decomposition of such semi-symmetric three-way arrays allows us to identify the so-called mixing matrix, which contains the information about the intensities of some latent source signals present in the observation channels. In addition, in many applications, such as magnetic resonance spectroscopy (MRS), the columns of the mixing matrix are viewed as relative concentrations of the spectra of the chemical components. Therefore, the two loading matrices of the three-way array, which are equal to the mixing matrix, are nonnegative. Most existing CP algorithms handle the symmetry and the nonnegativity separately. Up to now, very few of them consider both the semi-nonnegativity and the semi-symmetry structure of the three-way array. Moreover, like all methods based on line search, trust-region strategies, and alternating optimization, they appear to be dependent on initialization, requiring in practice a multi-initialization procedure. In order to overcome this drawback, we propose two new methods, called [InlineEquation not available: see fulltext.] and [InlineEquation not available: see fulltext.], to solve the problem of CP decomposition of semi-nonnegative semi-symmetric three-way arrays. Firstly, we rewrite the constrained optimization problem as an unconstrained one: the nonnegativity constraint on the two symmetric modes is ensured by means of a square change of variable. Secondly, a Jacobi-like optimization procedure is adopted because of its good convergence properties. More precisely, the two new methods use LU and QR matrix factorizations, respectively, which reformulate the high-dimensional optimization problem as several sequential polynomial and rational subproblems. By using both LU
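The LU- and QR-based Jacobi-like algorithms themselves are not reproduced here; as a baseline, the CP model being fitted can be sketched with a generic unconstrained alternating-least-squares routine for a third-order array:

```python
import numpy as np

def khatri_rao(A, B):
    """Column-wise Kronecker (Khatri-Rao) product of two factor matrices."""
    r = A.shape[1]
    return (A[:, None, :] * B[None, :, :]).reshape(-1, r)

def cp_als(X, rank, iters=300, seed=0):
    """Rank-`rank` CP decomposition X ~ [[A, B, C]] by alternating least squares."""
    rng = np.random.default_rng(seed)
    I, J, K = X.shape
    A = rng.standard_normal((I, rank))
    B = rng.standard_normal((J, rank))
    C = rng.standard_normal((K, rank))
    X0 = X.reshape(I, -1)                      # mode-1 unfolding, rows indexed by i
    X1 = np.moveaxis(X, 1, 0).reshape(J, -1)   # mode-2 unfolding
    X2 = np.moveaxis(X, 2, 0).reshape(K, -1)   # mode-3 unfolding
    for _ in range(iters):
        # each update is a linear least-squares solve against the other factors
        A = X0 @ np.linalg.pinv(khatri_rao(B, C).T)
        B = X1 @ np.linalg.pinv(khatri_rao(A, C).T)
        C = X2 @ np.linalg.pinv(khatri_rao(A, B).T)
    return A, B, C
```

As the abstract notes, such alternating schemes depend on initialization; the fixed seed here is purely for reproducibility of the illustration.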

  10. Thermochemical processes for hydrogen production by water decomposition. Final report

    SciTech Connect

    Perlmutter, D.D.

    1980-08-01

The principal contributions of the research are in the area of gas-solid reactions, ranging from models and data interpretation for fundamental kinetics and mixing of solids to simulations of engineering-scale reactors. Models were derived for simulating the heat and mass transfer processes inside the reactor and were tested by experiments. The effects of surface renewal of solids on the mass transfer phenomena were studied and related to the mixing of solids. Catalysis by selected additives was studied experimentally. The separate results were combined in a simulation study of industrial-scale rotary reactor performance. A study was made of the controlled decompositions of a series of inorganic sulfates and their common hydrates, carried out in a Thermogravimetric Analyzer (TGA), a Differential Scanning Calorimeter (DSC), and a Differential Thermal Analyzer (DTA). Various sample sizes, heating rates, and ambient atmospheres were used to demonstrate their influence on the results. The purposes of this study were to: (i) reveal intermediate compounds, (ii) determine the stable temperature range of each compound, and (iii) measure reaction kinetics. In addition, several solid additives (carbon, metal oxides, and sodium chloride) were demonstrated to have catalytic effects, to varying degrees, for the different salts.

  11. CO2 decomposition using electrochemical process in molten salts

    NASA Astrophysics Data System (ADS)

    Otake, Koya; Kinoshita, Hiroshi; Kikuchi, Tatsuya; Suzuki, Ryosuke O.

    2012-08-01

The electrochemical decomposition of CO2 gas into carbon and oxygen gas in LiCl-Li2O and CaCl2-CaO molten salts was studied. The process consists of the electrochemical reduction of Li2O and CaO, together with the thermal reduction of CO2 gas by the resulting metallic Li and Ca. Two kinds of ZrO2 solid electrolytes were tested as oxygen ion conductors, and the electrolytes removed oxygen ions from the molten salts to the outside of the reactor. After electrolysis in both salts, aggregations of nanometer-scale amorphous carbon and rod-like graphite crystals were observed by transmission electron microscopy. When a 9.7% CO2-Ar mixed gas was blown into the LiCl-Li2O and CaCl2-CaO molten salts, the current efficiency was evaluated from the exhaust gas analysis and the supplied charge to be 89.7% and 78.5%, respectively. When a solid electrolyte with higher ionic conductivity was used, the current and the carbon production became larger. The rate-determining step was found to be the diffusion of oxygen ions into the ZrO2 solid electrolyte.

  12. Image corruption detection in diffusion tensor imaging for post-processing and real-time monitoring.

    PubMed

    Li, Yue; Shea, Steven M; Lorenz, Christine H; Jiang, Hangyi; Chou, Ming-Chung; Mori, Susumu

    2013-01-01

Due to the high sensitivity of diffusion tensor imaging (DTI) to physiological motion, clinical DTI scans often suffer from a significant number of artifacts. Tensor-fitting-based, post-processing outlier rejection is often used to reduce the influence of motion artifacts. Although it is an effective approach, when multiple data points are corrupted, this method may no longer correctly identify and reject them. In this paper, we introduce a new criterion called "corrected Inter-Slice Intensity Discontinuity" (cISID) to detect motion-induced artifacts. We compared the artifact-detection performance of algorithms using cISID with that of other existing methods. The experimental results show that the integration of cISID into fitting-based methods significantly improves retrospective detection performance in post-processing analysis. The performance of the cISID criterion used alone was inferior to that of the fitting-based methods, but cISID could effectively identify severely corrupted images with a rapid calculation time. In the second part of this paper, an outlier rejection scheme was implemented on a scanner for real-time monitoring of image quality and reacquisition of corrupted data. The real-time monitoring, based on cISID and followed by post-processing, fitting-based outlier rejection, provides a robust environment for routine DTI studies.
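The exact cISID formula is not given in this abstract, so the sketch below is only a hypothetical inter-slice intensity discontinuity score in the same spirit: each slice's mean intensity is compared with the average of its two neighbours, and slices with large relative jumps are flagged.

```python
import numpy as np

def slice_discontinuity(volume):
    """Hypothetical per-slice discontinuity score (not the paper's exact cISID):
    relative jump of each slice's mean intensity against the average of its
    two neighbouring slices. `volume` is a (slices, rows, cols) array."""
    means = volume.reshape(volume.shape[0], -1).mean(axis=1)
    scores = np.zeros_like(means)
    for z in range(1, len(means) - 1):
        expected = 0.5 * (means[z - 1] + means[z + 1])
        scores[z] = abs(means[z] - expected) / (abs(expected) + 1e-12)
    return scores

def flag_corrupted(volume, threshold=0.2):
    """Indices of slices whose discontinuity score exceeds the threshold."""
    return np.nonzero(slice_discontinuity(volume) > threshold)[0]
```

A slice with a strong signal dropout produces the largest score, so it can be flagged cheaply before any tensor fitting, which mirrors the "rapid calculation" role cISID plays in the paper.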

  13. Decomposition and hydrocarbon growth processes for hexadienes in nonpremixed flames

    SciTech Connect

    McEnally, Charles S.; Pfefferle, Lisa D.

    2008-03-15

Alkadienes are formed during the decomposition of alkanes and play a key role in the formation of aromatics due to their degree of unsaturation. The experiments in this paper examined the decomposition and hydrocarbon growth mechanisms of a wide range of hexadiene isomers in soot-forming nonpremixed flames. Specifically, C3 to C12 hydrocarbon concentrations were measured on the centerlines of atmospheric-pressure methane/air coflowing nonpremixed flames doped with 2000 ppm of 1,3-, 1,4-, 1,5-, and 2,4-hexadiene and 2-methyl-1,3-, 3-methyl-1,3-, 2-methyl-1,4-, 3-methyl-1,4-pentadiene, and 2,3-dimethyl-1,3-butadiene. The hexadiene decomposition rates and hydrocarbon product concentrations showed that the primary decomposition mechanism was unimolecular fission of C-C single bonds, whose fission produced allyl and other resonantly stabilized products. The one isomer that does not contain any of these bonds, 2,4-hexadiene, isomerized by a six-center mechanism to 1,3-hexadiene. These decomposition pathways differ from those that have been observed previously for propadiene and 1,3-butadiene, and these differences affect aromatic hydrocarbon formation. 1,5-Hexadiene and 2,3-dimethyl-1,3-butadiene produced significantly more C3H4 and C4H4 than the other isomers, but less benzene, which suggests that benzene formation pathways other than the conventional C3 + C3 and C4 + C2 pathways were important in most of the hexadiene-doped flames. The most likely additional mechanism is cyclization of highly unsaturated C5 decomposition products, followed by methyl addition to cyclopentadienyl radicals.

  14. EMT - Empirical-mode-decomposition-based Magneto-Telluric Processing

    NASA Astrophysics Data System (ADS)

    Neukirch, M.; Garcia, X.

    2012-04-01

We present a new Magneto-Telluric (MT) data processing scheme based on an emerging nonlinear, nonstationary time series analysis tool called the Empirical Mode Decomposition (EMD), or Hilbert-Huang Transform (HHT), which transforms the data into a nonstationary frequency domain, together with a robust principal component regression that estimates the most likely MT transfer functions from the data, with 2-σ confidence intervals computed by a bootstrap algorithm. Optionally, data quality can be controlled by a physical coherence filter and a signal power filter. MT sources are assumed to be quasi-stationary, and therefore a (windowed) Fourier transform is usually applied to transform the time series into the frequency domain, in which transfer functions (TF) are defined between the electromagnetic field components. This assumption can break down in the presence of noise or when the sources are nonstationary, and TF estimates can then become unreliable when obtained through a stationary transform such as the Fourier transform. Our TF estimation scheme naturally deals with nonstationarity without introducing artifacts and can therefore potentially distinguish quasi-stationary sources from nonstationary noise. In contrast to previous work on using the HHT for MT processing, we argue for the necessity of a multivariate EMD to model the MT problem in a physically correct way, and we highlight the resulting possibility of using instantaneous parameters as independent and identically distributed variables. Furthermore, we define a homogenization across data channels of the frequency discrepancies caused by nonstationarity and noise. The TF estimation in the frequency domain is based on a robust principal component analysis that finds the two source polarizations. These two principal components are used as predictors to robustly regress the data channels within a bootstrap algorithm, estimating the Earth's transfer function with 2-σ confidence intervals supplied by the measured data. The scheme can be used with and without
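The instantaneous parameters used by HHT-style processing come from the analytic signal of each decomposed mode. The sketch below shows only that step, a NumPy-only FFT-based Hilbert transform (the multivariate EMD sifting itself is omitted):

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via the frequency-domain Hilbert transform:
    zero the negative frequencies and double the positive ones."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

def instantaneous_frequency(x, fs):
    """Instantaneous frequency (Hz) from the unwrapped analytic phase."""
    phase = np.unwrap(np.angle(analytic_signal(x)))
    return np.diff(phase) * fs / (2 * np.pi)
```

For a pure tone sampled over an integer number of cycles, the recovered instantaneous frequency is constant and equal to the tone frequency, which makes a convenient check.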

15. C++ Tensor Toolbox user manual.

    SciTech Connect

    Plantenga, Todd D.; Kolda, Tamara Gibson

    2012-04-01

    The C++ Tensor Toolbox is a software package for computing tensor decompositions. It is based on the Matlab Tensor Toolbox, and is particularly optimized for sparse data sets. This user manual briefly overviews tensor decomposition mathematics, software capabilities, and installation of the package. Tensors (also known as multidimensional arrays or N-way arrays) are used in a variety of applications ranging from chemometrics to network analysis. The Tensor Toolbox provides classes for manipulating dense, sparse, and structured tensors in C++. The Toolbox compiles into libraries and is intended for use with custom applications written by users.

16. Reaction behaviors of decomposition of monocrotophos in aqueous solution by UV and UV/O3 processes.

    PubMed

    Ku, Y; Wang, W; Shen, Y S

    2000-02-01

The decomposition of monocrotophos (cis-3-dimethoxyphosphinyloxy-N-methyl-crotonamide) in aqueous solution by UV and UV/O3 processes was studied. The experiments were carried out at various solution pH values to investigate the decomposition efficiencies of the reactant and organic intermediates in order to determine the completeness of decomposition. The photolytic decomposition rate of monocrotophos increased with increasing solution pH, because the solution pH affects the distribution and light absorbance of monocrotophos species. The combination of O3 with UV light apparently promoted the decomposition and mineralization of monocrotophos in aqueous solution. For the UV/O3 process, breakage of the >C=C< bond of monocrotophos by ozone molecules was found to occur first, followed by mineralization by hydroxyl radicals to generate CO3(2-), PO4(3-), and NO3(-) anions in sequence. A quasi-global kinetic model based on a simplified consecutive-parallel reaction scheme was developed to describe the temporal behavior of monocrotophos decomposition in aqueous solution by the UV/O3 process. PMID:10648946

  17. Impact of heavy metals on mass and energy flux within the decomposition process in deciduous forests.

    PubMed

    Köhler, H R; Wein, C; Reiss, S; Storch, V; Alberti, G

    1995-04-01

Laboratory experiments on microbial decomposition and on the contribution of diplopods to organic matter decomposition in soil were combined with field studies to reveal the major points at which heavy metals affect the leaf litter decomposition process. The study focused on the accumulation of organic litter material in heavy metal-contaminated soils. Microbial decomposition of freshly fallen leaves remained quantitatively unaffected by artificial lead contamination (1000 mg kg(-1)). The same was true for further decomposed leaf litter material, provided that the breakdown of this material was not influenced by faunal components. Although nutrient absorption in diplopods is affected by high lead contents in their food, this effect alone was shown to be insufficient to explain the massive deceleration of the decomposition process under heavy metal influence, which was observed not only in the field but in microcosm studies as well. Reduced reproduction and lower activity of the diplopods were most likely responsible for the observation that lead-influenced diplopods enhanced microbial activity in soil to a lesser degree than uncontaminated animals did. This effect is considered the main reason for decreased decomposition rates and the subsequent accumulation of organic material in heavy metal-contaminated soils.

  18. Developmental process of the arcuate fasciculus from infancy to adolescence: a diffusion tensor imaging study

    PubMed Central

    Tak, Hyeong Jun; Kim, Jin Hyun; Son, Su Min

    2016-01-01

We investigated the radiologic developmental process of the arcuate fasciculus (AF) using subcomponent diffusion tensor imaging (DTI) analysis in typically developing volunteers. DTI data were acquired from 96 consecutive typically developing children, aged 0–14 years. AF subcomponents, including the posterior, anterior, and direct AF tracts, were analyzed. Success rates of analysis (AR) and fractional anisotropy (FA) values of each subcomponent tract were measured and compared. The AR of all subcomponent tracts except the posterior showed a significant increase with age (P < 0.05). Subcomponent tracts had a specific developmental sequence: first the posterior AF tract, then the anterior AF tract, and last the direct AF tract in the same hemisphere. FA values of all subcomponent tracts, except the right direct AF tract, correlated with subject age (P < 0.05). Increased AR and FA values were observed in female subjects compared with males in the young age (0–2 years) group (P < 0.05). The direct AF tract showed leftward hemispheric asymmetry, and this tendency was more consolidated in the older age (3–14 years) groups (P < 0.05). These findings demonstrate the radiologic developmental patterns of the AF from infancy to adolescence using subcomponent DTI analysis. The AF showed a specific developmental sequence, a sex difference at younger ages, and hemispheric asymmetry at older ages. PMID:27482222

  19. The Dynamics of Cognition and Action: Mental Processes Inferred from Speed-Accuracy Decomposition.

    ERIC Educational Resources Information Center

    Meyer, David E.; And Others

    1988-01-01

    Theoretical/empirical foundations on which reaction times are measured and interpreted are discussed. Models of human information processing are reviewed. A hybrid procedure and analytical framework are introduced, using a speed-accuracy decomposition technique to analyze the intermediate products of rapid mental processes. Results invalidate many…

  20. Decomposition reactions as general Poisson processes: Theory and an experimental example

    NASA Astrophysics Data System (ADS)

    Rydén, Tobias; Wernersson, Mikael

    1995-10-01

The classical theory of decomposition reaction kinetics depends on a "large scale" assumption. In this paper we show how this assumption can be replaced by the assumption that the nucleation process is a space-time Poisson process. This framework is unifying in the sense that it includes many earlier formulas as special cases, and it naturally takes boundary effects into account. We consider the conversion of a sphere in detail and fit the parameters of this model to experimental data on gypsum decomposition. The resulting model shows, for this particular reaction, that the boundary effects decrease with temperature.
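The space-time Poisson assumption can be checked numerically in a simple 1D setting (an illustration, not the authors' spherical model): nuclei appear at rate J per unit length and time and grow at speed v in both directions, so a nucleus born at (x, s) has covered the origin by time t iff |x| <= v(t - s). The expected number of such covering nuclei is the integral of J * 2v(t - s) over s from 0 to t, i.e. J v t^2, giving the KJMA-type conversion X(t) = 1 - exp(-J v t^2).

```python
import numpy as np

def transformed_fraction_mc(J, v, t, L=20.0, trials=2000, seed=1):
    """Monte Carlo estimate of the probability that the origin has transformed
    by time t: nuclei are a Poisson process on [-L/2, L/2] x [0, t] with
    intensity J, each growing at speed v in both directions."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(trials):
        n = rng.poisson(J * L * t)            # number of nuclei in the window
        x = rng.uniform(-L / 2, L / 2, n)     # nucleation positions
        s = rng.uniform(0.0, t, n)            # nucleation times
        if np.any(np.abs(x) <= v * (t - s)):  # any nucleus reaches the origin?
            hits += 1
    return hits / trials

def transformed_fraction_kjma(J, v, t):
    """Analytic KJMA-type result for the same 1D model."""
    return 1.0 - np.exp(-J * v * t * t)
```

L only needs to exceed 2 v t so that no nucleus outside the window could reach the origin; the simulation then matches the closed form to within Monte Carlo error.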

  1. Stage efficiency in the analysis of thermochemical water decomposition processes

    NASA Technical Reports Server (NTRS)

    Conger, W. L.; Funk, J. E.; Carty, R. H.; Soliman, M. A.; Cox, K. E.

    1976-01-01

    The procedure for analyzing thermochemical water-splitting processes using the figure of merit is expanded to include individual stage efficiencies and loss coefficients. The use of these quantities to establish the thermodynamic insufficiencies of each stage is shown. A number of processes are used to illustrate these concepts and procedures and to demonstrate the facility with which process steps contributing most to the cycle efficiency are found. The procedure allows attention to be directed to those steps of the process where the greatest increase in total cycle efficiency can be obtained.

  2. Exothermic Behavior of Thermal Decomposition of Sodium Percarbonate: Kinetic Deconvolution of Successive Endothermic and Exothermic Processes.

    PubMed

    Nakano, Masayoshi; Wada, Takeshi; Koga, Nobuyoshi

    2015-09-24

This study focused on the kinetic modeling of the thermal decomposition of sodium percarbonate (SPC, sodium carbonate-hydrogen peroxide (2/3)). The reaction is characterized by apparently different kinetic profiles of the mass-loss and exothermic behavior as recorded by thermogravimetry and differential scanning calorimetry, respectively. This phenomenon results from a combination of different kinetic features of the reaction: two overlapping mass-loss steps controlled by the physico-geometry of the reaction, and successive endothermic and exothermic processes caused by the detachment and decomposition of H2O2(g). For kinetic modeling, the overall reaction was first separated into endothermic and exothermic processes using kinetic deconvolution analysis. Both the endothermic and exothermic processes were then further separated into two reaction steps, accounting for the physico-geometrically controlled reaction that occurs in two steps. Kinetic modeling through kinetic deconvolution analysis clearly shows that the apparent net exothermic effect results from a slight delay of the exothermic process relative to the endothermic process in each physico-geometrically controlled reaction step. This demonstrates that the kinetic modeling attempted in this study is useful for interpreting the exothermic behavior of solid-state reactions such as the oxidative decomposition of solids and the thermal decomposition of oxidizing agents. PMID:26371394

  4. Tensor sufficient dimension reduction

    PubMed Central

    Zhong, Wenxuan; Xing, Xin; Suslick, Kenneth

    2015-01-01

A tensor is a multiway array. With the rapid development of science and technology in the past decades, large numbers of tensor observations are now routinely collected, processed, and stored in scientific research and commercial activities. Colorimetric sensor array (CSA) data are one such example. Driven by the need to address the data analysis challenges that arise in CSA data, we propose a tensor dimension reduction model, a model assuming nonlinear dependence between a response and a projection of all the tensor predictors. The tensor dimension reduction models are estimated in a sequential iterative fashion. The proposed method is applied to CSA data collected for 150 pathogenic bacteria from 10 bacterial species and 14 bacteria from one control species. Empirical performance demonstrates that our proposed method can greatly improve the sensitivity and specificity of the CSA technique. PMID:26594304
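The mode-wise projection underlying such tensor dimension reduction can be sketched with the standard mode-n product in NumPy (a generic illustration, not the authors' estimator):

```python
import numpy as np

def mode_n_product(X, M, n):
    """Multiply tensor X by matrix M along mode n: (X x_n M)."""
    Xn = np.moveaxis(X, n, 0)                  # bring mode n to the front
    shape = Xn.shape
    Yn = M @ Xn.reshape(shape[0], -1)          # multiply the mode-n unfolding
    return np.moveaxis(Yn.reshape((M.shape[0],) + shape[1:]), 0, n)

def tucker_project(X, factors):
    """Project X along every mode: core = X x_1 U1' x_2 U2' x_3 U3' ...
    `factors` is a list of (dim_n, k_n) matrices with orthonormal-style columns."""
    for n, U in enumerate(factors):
        X = mode_n_product(X, U.T, n)
    return X
```

Projecting each mode to a few directions reduces a high-dimensional tensor predictor to a small core on which a (possibly nonlinear) regression can then be fitted.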

  5. Morphological Decomposition and Semantic Integration in Word Processing

    ERIC Educational Resources Information Center

    Meunier, Fanny; Longtin, Catherine-Marie

    2007-01-01

    In the present study, we looked at cross-modal priming effects produced by auditory presentation of morphologically complex pseudowords in order to investigate semantic integration during the processing of French morphologically complex items. In Experiment 1, we used as primes pseudowords consisting of a non-interpretable combination of roots and…

  6. PROCESS OF COATING WITH NICKEL BY THE DECOMPOSITION OF NICKEL CARBONYL

    DOEpatents

    Hoover, T.B.

    1959-04-01

An improved process is presented for the deposition of nickel coatings by the thermal decomposition of nickel carbonyl vapor. The improvement consists in incorporating a small amount of hydrogen sulfide gas into the nickel carbonyl plating gas. It is postulated that the hydrogen sulfide functions as a catalyst.

  7. A study of the process of nonisothermal decomposition of phenolformaldehyde polymers by differential thermal analysis

    SciTech Connect

    Petrova, O.M.; Fedoseev, S.D.; Komarova, T.V.

    1984-01-01

A calculation has been made of the activation energy of the thermal decomposition of phenol-formaldehyde polymers. It has been established that, under nonisothermal conditions, the rate at which the process is carried out does not affect the effective activation energy calculated by means of Piloyan's equation.

  8. Method for increasing steam decomposition in a coal gasification process

    DOEpatents

    Wilson, M.W.

    1987-03-23

The gasification of coal in the presence of steam and oxygen is significantly enhanced by introducing a thermochemical water-splitting agent, such as sulfuric acid, into the gasifier for decomposing the steam to provide additional oxygen and hydrogen usable in the gasification process for the combustion of the coal and enrichment of the gaseous gasification products. The addition of the water-splitting agent into the gasifier also allows for the operation of the reactor at a lower temperature.

  9. Method for increasing steam decomposition in a coal gasification process

    DOEpatents

    Wilson, Marvin W.

    1988-01-01

    The gasification of coal in the presence of steam and oxygen is significantly enhanced by introducing a thermochemical water-splitting agent such as sulfuric acid, into the gasifier for decomposing the steam to provide additional oxygen and hydrogen usable in the gasification process for the combustion of the coal and enrichment of the gaseous gasification products. The addition of the water-splitting agent into the gasifier also allows for the operation of the reactor at a lower temperature.

  10. Decomposition of acetone by hydrogen peroxide/ozone process in a rotating packed contactor.

    PubMed

    Ku, Young; Huang, Yun-Jen; Chen, Hua-Wei; Hou, Wei-Ming

    2011-07-01

The direct use of ozone (O3) in water and wastewater treatment processes is inefficient and incomplete, and it is limited by ozone transfer across the gas-liquid interface because of ozone's low solubility and instability in aqueous solutions. Therefore, rotating packed contactors were introduced to improve the transfer of ozone from the gaseous phase to the solution phase, and the effects of several reaction parameters on the temporal variation of the acetone concentration in aqueous solution were investigated. The decomposition rate constant of acetone was enhanced by increasing the rotor speed from 450 to 1800 rpm. Increasing the hydrogen peroxide (H2O2)/O3 molar ratio accelerated the decomposition rate until a certain optimum H2O2/O3 molar ratio was reached; further addition of H2O2 inhibited the decomposition of acetone, possibly because the excess H2O2 served as a scavenger that depleted hydroxyl free radicals.

  11. The processing of aluminum gasarites via thermal decomposition of interstitial hydrides

    NASA Astrophysics Data System (ADS)

    Licavoli, Joseph J.

    Gasarite structures are a unique type of metallic foam containing tubular pores. The original methods for their production limited them to laboratory study despite appealing foam properties. Thermal decomposition processing of gasarites holds the potential to increase the application of gasarite foams in engineering design by removing several barriers to their industrial scale production. The following study characterized thermal decomposition gasarite processing both experimentally and theoretically. It was found that significant variation was inherent to this process therefore several modifications were necessary to produce gasarites using this method. Conventional means to increase porosity and enhance pore morphology were studied. Pore morphology was determined to be more easily replicated if pores were stabilized by alumina additions and powders were dispersed evenly. In order to better characterize processing, high temperature and high ramp rate thermal decomposition data were gathered. It was found that the high ramp rate thermal decomposition behavior of several hydrides was more rapid than hydride kinetics at low ramp rates. This data was then used to estimate the contribution of several pore formation mechanisms to the development of pore structure. It was found that gas-metal eutectic growth can only be a viable pore formation mode if non-equilibrium conditions persist. Bubble capture cannot be a dominant pore growth mode due to high bubble terminal velocities. Direct gas evolution appears to be the most likely pore formation mode due to high gas evolution rate from the decomposing particulate and microstructural pore growth trends. The overall process was evaluated for its economic viability. It was found that thermal decomposition has potential for industrialization, but further refinements are necessary in order for the process to be viable.

  12. ChIP-PIT: Enhancing the Analysis of ChIP-Seq Data Using Convex-Relaxed Pair-Wise Interaction Tensor Decomposition.

    PubMed

    Zhu, Lin; Guo, Wei-Li; Deng, Su-Ping; Huang, De-Shuang

    2016-01-01

In recent years, thanks to the efforts of individual scientists and research consortiums, a huge amount of chromatin immunoprecipitation followed by high-throughput sequencing (ChIP-seq) experimental data has been accumulated. Instead of investigating these data independently, several recent studies have convincingly demonstrated that a wealth of scientific insight can be gained by their integrative analysis. However, when used for the purpose of integrative analysis, a serious drawback of the current ChIP-seq technique is that it is still expensive and time-consuming to generate ChIP-seq datasets of a high standard. Most researchers are therefore unable to obtain complete ChIP-seq data for several TFs in a wide variety of cell lines, which considerably limits the understanding of transcriptional regulation patterns. In this paper, we propose a novel method called ChIP-PIT to overcome this limitation. In ChIP-PIT, ChIP-seq data corresponding to a diverse collection of cell types, TFs, and genes are fused together using the three-mode pair-wise interaction tensor (PIT) model, and the prediction of unperformed ChIP-seq experimental results is formulated as a tensor completion problem. Computationally, we propose an efficient first-order method based on extensions of the coordinate descent method to learn the optimal solution of ChIP-PIT, which makes it particularly suitable for the analysis of massive-scale ChIP-seq data. Experimental evaluation on the ENCODE data illustrates the usefulness of the proposed model.
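ChIP-PIT's convex-relaxed PIT model and first-order solver are not reproduced here; as a generic illustration of casting prediction as tensor completion, the sketch below fits low-rank CP factors to the observed entries only (by row-wise alternating least squares) and uses them to fill in the missing ones:

```python
import numpy as np

def cp_complete(X, mask, rank=2, iters=100, seed=0):
    """Low-rank tensor completion for a third-order array: fit CP factors to
    the observed entries (mask == True) and return the full reconstruction."""
    rng = np.random.default_rng(seed)
    dims = X.shape
    F = [rng.standard_normal((d, rank)) for d in dims]

    def update(mode):
        other = [m for m in range(3) if m != mode]
        for i in range(dims[mode]):
            sl = [slice(None)] * 3
            sl[mode] = i
            m = mask[tuple(sl)]                  # observed entries in this slice
            if not m.any():
                continue
            j, k = np.nonzero(m)
            D = F[other[0]][j] * F[other[1]][k]  # least-squares design rows
            y = X[tuple(sl)][m]
            F[mode][i] = np.linalg.lstsq(D, y, rcond=None)[0]

    for _ in range(iters):
        for mode in range(3):
            update(mode)
    A, B, C = F
    return np.einsum('ir,jr,kr->ijk', A, B, C)
```

On a noiseless low-rank tensor with most entries observed, the recovered values at the unobserved positions closely match the ground truth, which is the completion analogy to predicting unperformed experiments.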

  13. Chemical dehalogenation treatment: Base-catalyzed decomposition process (BCDP). Tech data sheet

    SciTech Connect

    Not Available

    1992-07-01

    The Base-Catalyzed Decomposition Process (BCDP) is an efficient, relatively inexpensive treatment process for polychlorinated biphenyls (PCBs). It is also effective on other halogenated contaminants such as insecticides, herbicides, pentachlorophenol (PCP), lindane, and chlorinated dibenzodioxins and furans. The heart of BCDP is the rotary reactor in which most of the decomposition takes place. The contaminated soil is first screened, processed with a crusher and pug mill, and stockpiled. Next, in the main treatment step, this stockpile is mixed with sodium bicarbonate (in the amount of 10% of the weight of the stockpile) and heated for about one hour at 630 F in the rotary reactor. Most (about 60% to 90%) of the PCBs in the soil are decomposed in this step. The remainder are volatilized, captured, and decomposed.

  14. Face recognition based on tensor structure

    NASA Astrophysics Data System (ADS)

    Yang, De-qiang; Ye, Zhi-xia; Zhao, Yang; Liu, Li-mei

    2012-01-01

    Face recognition has broad applications, but it is a difficult problem because face images change with photographic conditions such as illumination, pose and camera angle. Obtaining invariant features from a face image is therefore the key issue for a face recognition algorithm. In this paper, a novel tensor structure of the face image is proposed to represent image features with eight directions for each pixel value. The invariant features of the face image are then obtained from a gradient decomposition used to build the tensor structure. The singular value decomposition (SVD) and principal component analysis (PCA) of this tensor structure are then used for face recognition. The experimental results of this study show that many samples that are otherwise difficult to recognize can be correctly recognized, and that the recognition rate is increased by 9%-11% in comparison with algorithms of the same type.

  15. Structure and process in semantic memory: new evidence based on speed-accuracy decomposition.

    PubMed

    Kounios, J; Osman, A M; Meyer, D E

    1987-03-01

    Reaction-time and accuracy data obtained from studies of sentence verification have not been rich enough to answer certain important theoretical questions about structures and processes in human semantic memory. However, a new technique called speed-accuracy decomposition (Meyer, Irwin, Osman, & Kounios, 1986) may help solve this problem. The technique allows intermediate products of sentence verification to be analyzed more precisely. Three experiments with speed-accuracy decomposition indicate that verification processes produce useful partial information before they are completed. Such information appears to accumulate continuously at a rate whose magnitude depends on the degree of relatedness between semantic categories. This outcome is consistent with continuous computational (e.g., semantic-feature comparison) models of semantic memory. An analysis of reaction-time minima suggests that a discrete all-or-none search process may also contribute at least occasionally to sentence verification. Further details regarding the nature of these processes and the memory structures on which they operate can be inferred from additional results obtained through speed-accuracy decomposition.

  16. Pentachlorophenol decomposition by electron beam process enhanced in the presence of Fe(III)-EDTA.

    PubMed

    Kwon, Bum Gun; Kim, Eunjung; Lee, Jai H

    2009-03-01

    This study focuses on the enhanced decomposition of pentachlorophenol (PCP) in an electron beam (E-beam) process. To attain this objective, we investigated the synergistic effect of ferric ethylenediaminetetraacetate (Fe(III)-EDTA) and H(2)O(2) as additives to produce additional hydroxyl radical (*OH) at low dose. In this process, aqueous electrons and hydrogen atoms rapidly react with O(2) molecules, thereby forming the hydroperoxyl/superoxide anion radical (HO2*/O(2)(-)), which reduces Fe(III)-EDTA to Fe(II)-EDTA. Further *OH is produced by the well-known Fenton-like reaction of Fe(II)-EDTA with the H(2)O(2) newly formed in the E-beam process. The complete decomposition of the initial 0.1 mM PCP was achieved even at very low dose (<10 kGy) with 20 microM Fe(III)-EDTA and less than 1 mM H(2)O(2). This observation was supported by the increased amount of Cl(-) produced by the decomposition of PCP. Thus, in the presence of Fe(III)-EDTA during E-beam irradiation, the HO2*/O(2)(-)-driven Fenton-like reaction produces much more *OH, which is significant for the complete degradation of PCP. PMID:19117591

  17. Decomposition of gaseous organic contaminants by surface discharge induced plasma chemical processing -- SPCP

    SciTech Connect

    Oda, Tetsuji; Yamashita, Ryuichi; Haga, Ichiro; Takahashi, Tadashi; Masuda, Senichi

    1996-01-01

    The decomposition performance of surface discharge induced plasma chemical processing (SPCP) for chlorofluorocarbon (83 ppm CFC-113 in air), acetone, trichloroethylene, and isopropyl alcohol was experimentally examined. In every case, very high decomposition performance, with removal rates above 90 or even 99%, is realized when the residence time is about 1 second and the input electric power for a 16 cm^3 reactor is about 10 W. Acetone is the most stable compound, and the alcohol is the most easily decomposed. Analysis of the decomposition products by gas chromatography-mass spectrometry has only just started, and only limited results have been obtained so far. In fact, some portion of the isopropyl alcohol may change to acetone, which is more resistant to decomposition than the alcohol. The energy necessary to decompose one mol of gas diluted in air is calculated from the experiments. The necessary energy level for acetone and trichloroethylene is about one-tenth to one-fiftieth of that for chlorofluorocarbon.

  18. Decomposition and Precipitation Process During Thermo-mechanical Fatigue of Duplex Stainless Steel

    NASA Astrophysics Data System (ADS)

    Weidner, Anja; Kolmorgen, Roman; Kubena, Ivo; Kulawinski, Dirk; Kruml, Tomas; Biermann, Horst

    2016-05-01

    The so-called 748 K (475 °C) embrittlement, caused by spinodal decomposition of the ferritic phase, is one of the main drawbacks for the application of ferritic-austenitic duplex stainless steels (DSS) at higher temperatures. Thermo-mechanical fatigue (TMF) tests performed on a DSS in the temperature range between 623 K and 873 K (350 °C and 600 °C) revealed no negative influence on the fatigue lifetime. However, intensive subgrain formation occurred in the ferritic phase, accompanied by the formation of fine precipitates. In order to study the decomposition process of the ferritic grains due to TMF testing, detailed investigations using scanning and transmission electron microscopy are presented. The precipitates were identified as the face-centered cubic G-phase, which is characterized by an enrichment of Si, Mo, and Ni. Furthermore, the formation of secondary austenite within ferritic grains was observed.

  19. Controlled decomposition and oxidation: A treatment method for gaseous process effluents

    NASA Technical Reports Server (NTRS)

    Mckinley, Roger J. B., Sr.

    1990-01-01

    The safe disposal of effluent gases produced by the electronics industry deserves special attention. Due to the hazardous nature of many of the materials used, it is essential to control and treat the reactants and reactant by-products as they are exhausted from the process tool and prior to their release into the manufacturing facility's exhaust system and the atmosphere. Controlled decomposition and oxidation (CDO) is one method of treating effluent gases from thin film deposition processes. CDO equipment applications, field experience, and results of the use of CDO equipment and technological advances gained from the field experiences are discussed.

  20. A quantitative acoustic emission study on fracture processes in ceramics based on wavelet packet decomposition

    SciTech Connect

    Ning, J. G.; Chu, L.; Ren, H. L.

    2014-08-28

    We base a quantitative acoustic emission (AE) study on fracture processes in alumina ceramics on wavelet packet decomposition and AE source location. According to the frequency characteristics, as well as the energy and ringdown counts of AE, the fracture process is divided into four stages: crack closure, nucleation, development, and critical failure. Each AE signal is decomposed by a 2-level wavelet packet decomposition into four frequency bands, from low to high: AA2, AD2, DA2, and DD2. The energy eigenvalues P0, P1, P2, and P3 corresponding to these four frequency bands are calculated. By analyzing changes in P0 and P3 over the four stages, we determine the inverse relationship between AE frequency and crack source size during ceramic fracture. AE signals associated with crack nucleation occur when P0 is less than 5 and P3 is more than 60, whereas AE signals associated with dangerous crack propagation occur when more than 92% of P0 values are greater than 4 and more than 95% of P3 values are less than 45. The Geiger location algorithm is used to locate AE sources and cracks in the sample. The results of this location algorithm are consistent with the positions of fractures observed in the sample under a scanning electron microscope; thus the fracture locations obtained with Geiger's method can reflect the fracture process. The stage division based on location results is in good agreement with the division based on AE frequency characteristics. We find that both wavelet packet decomposition and Geiger's AE source location are suitable for identifying the evolutionary process of cracks in alumina ceramics.
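    As a rough illustration of the band-energy computation described above, the sketch below implements a 2-level wavelet packet split with the Haar wavelet in plain NumPy and reports the percentage energy in each of the four bands. The paper does not specify its mother wavelet, and the bands here follow the natural AA2/AD2/DA2/DD2 ordering (which, for wavelet packets, is not strictly frequency-ordered); the test signals are invented.

```python
import numpy as np

def haar_step(x):
    """One Haar analysis step: half-length approximation and detail signals."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return a, d

def wp2_band_energies(x):
    """2-level wavelet packet split; percentage energy in AA2, AD2, DA2, DD2."""
    a1, d1 = haar_step(x)
    aa2, ad2 = haar_step(a1)
    da2, dd2 = haar_step(d1)
    e = np.array([np.sum(b ** 2) for b in (aa2, ad2, da2, dd2)])
    return 100.0 * e / e.sum()

# A slow oscillation concentrates its energy in the lowest band (P0),
# while a sample-rate alternation lands in a detail-derived band.
t = np.arange(256)
P_slow = wp2_band_energies(np.sin(2 * np.pi * t / 64.0))
P_fast = wp2_band_energies(np.cos(np.pi * t))
```

    The four returned percentages play the role of the energy eigenvalues P0-P3 used in the stage classification above.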

  1. ERP and Adaptive Autoregressive identification with spectral power decomposition to study rapid auditory processing in infants.

    PubMed

    Piazza, C; Cantiani, C; Tacchino, G; Molteni, M; Reni, G; Bianchi, A M

    2014-01-01

    The ability to process rapidly-occurring auditory stimuli plays an important role in the mechanisms of language acquisition. For this reason, the research community has begun to investigate infant auditory processing, particularly using the Event Related Potentials (ERP) technique. In this paper we approach this issue by means of time domain and time-frequency domain analysis. For the latter, we propose the use of Adaptive Autoregressive (AAR) identification with spectral power decomposition. Results show EEG delta-theta oscillation enhancement related to the processing of acoustic frequency and duration changes, suggesting that, as expected, power modulation encodes rapid auditory processing (RAP) in infants and that the time-frequency analysis method proposed is able to identify this modulation.
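    The time-frequency method above builds on autoregressive (AR) spectral estimation. As background, the sketch below fits a static AR model via the Yule-Walker equations and evaluates its parametric power spectrum in plain NumPy. The adaptive (AAR) variant used in the paper updates such coefficients sample by sample, which this minimal example does not attempt; the simulated process and its parameters are invented.

```python
import numpy as np

def yule_walker(x, order):
    """Fit x[n] = a[1]x[n-1] + ... + a[p]x[n-p] + e[n] via the Yule-Walker equations."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    # Biased autocorrelation estimates r[0..p].
    r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(order + 1)])
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:])
    sigma2 = r[0] - np.dot(a, r[1:])   # innovation variance
    return a, sigma2

def ar_psd(a, sigma2, freqs):
    """Parametric power spectrum of the fitted AR model at normalized frequencies."""
    z = np.exp(-2j * np.pi * np.outer(freqs, np.arange(1, len(a) + 1)))
    return sigma2 / np.abs(1.0 - z @ a) ** 2

# Simulate a stationary AR(2) process with a spectral peak and recover its parameters.
rng = np.random.default_rng(1)
a_true = np.array([1.0, -0.8])
n = 50000
x = np.zeros(n)
e = rng.normal(size=n)
for i in range(2, n):
    x[i] = a_true[0] * x[i - 1] + a_true[1] * x[i - 2] + e[i]
a_est, s2 = yule_walker(x, 2)
psd = ar_psd(a_est, s2, np.linspace(0.0, 0.5, 256))
```

    Decomposing the AR spectrum into band powers (e.g. delta-theta) at each time step is what yields the power modulations analyzed in the study.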

  2. An observation on the decomposition process of gasoline-ingested monkey carcasses in a secondary forest in Malaysia.

    PubMed

    Rumiza, A R; Khairul, O; Zuha, R M; Heo, C C

    2010-12-01

    This study was designed to mimic homicide or suicide cases involving gasoline. Six adult long-tailed macaques (Macaca fascicularis), weighing between 2.5 and 4.0 kg, were equally divided into control and test groups. The control group was sacrificed by an intracardiac lethal dose of phenobarbital, while the test group was force-fed two doses of gasoline LD50 (37.7 ml/kg) after sedation with phenobarbital. All carcasses were then placed at a decomposition site to observe the decomposition process and the invasion of cadaveric fauna. A total of five decomposition stages were recognized during this study, which was performed during July 2007. The fresh stage of the control and test carcasses lasted from 0 to 15 and 0 to 39 hours of exposure, respectively. The subsequent decomposition stages exhibited a similar pattern, whereby the control carcasses decomposed faster than the test carcasses. The first larvae were found on control carcasses 9 hours after death, while the test carcasses received their first blowfly eggs only after 15 hours of exposure. The blow flies Achoetandrus rufifacies and Chrysomya megacephala were the most dominant invaders of both sets of carcasses throughout the decaying process. Diptera collected from control carcasses also comprised the scuttle fly Megaselia scalaris and a flesh fly (Sarcophagidae). We conclude that the presence of gasoline and its odor on a carcass delays the arrival of insects, thereby slowing down the decomposition process by about 6 hours.

  3. MATLAB Tensor Toolbox

    SciTech Connect

    Kolda, Tamara G.; Bader, Brett W.

    2006-08-03

    This software provides a collection of MATLAB classes for tensor manipulations that can be used for fast algorithm prototyping. The tensor class extends the functionality of MATLAB's multidimensional arrays by supporting additional operations such as tensor multiplication. We have also added support for sparse tensors, tensors in Kruskal or Tucker format, and tensors stored as matrices (both dense and sparse).

  4. Tensor Modeling Based for Airborne LiDAR Data Classification

    NASA Astrophysics Data System (ADS)

    Li, N.; Liu, C.; Pfeifer, N.; Yin, J. F.; Liao, Z. Y.; Zhou, Y.

    2016-06-01

    Feature selection and description is a key factor in the classification of Earth observation data. In this paper a classification method based on tensor decomposition is proposed. First, multiple features are extracted from the raw LiDAR point cloud, and raster LiDAR images are derived by accumulating features or the "raw" data attributes. Then, the feature rasters of the LiDAR data are stored as a tensor, and tensor decomposition is used to select component features. This tensor representation preserves the initial spatial structure and ensures that the neighborhood is taken into account. Based on a small number of component features, a k-nearest-neighbor classification is applied.

  5. Denoising NMR time-domain signal by singular-value decomposition accelerated by graphics processing units.

    PubMed

    Man, Pascal P; Bonhomme, Christian; Babonneau, Florence

    2014-01-01

    We present a post-processing method that decreases NMR spectrum noise without line shape distortion, thereby increasing the signal-to-noise (S/N) ratio of a spectrum. This method, called the Cadzow enhancement procedure, is based on the singular-value decomposition of the time-domain signal. We also provide software whose execution takes only a few seconds for typical data when run on a modern graphics processing unit. We tested this procedure not only on the low-sensitivity nucleus (29)Si in hybrid materials but also on the low-gyromagnetic-ratio quadrupolar nucleus (87)Sr in the reference sample Sr(NO3)2. Improving the spectrum S/N ratio facilitates the determination of the T/Q ratio of hybrid materials. The procedure is also applicable to simulated spectra, resulting in shorter simulation durations for powder averaging. An estimate of the number of singular values needed for denoising is also provided. PMID:24880899
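    In one common formulation, the Cadzow enhancement procedure named above alternates between a rank-truncated SVD of a Hankel matrix built from the time-domain signal and re-averaging of its anti-diagonals. The sketch below is a minimal CPU version of that idea in NumPy; it is not the paper's GPU implementation, and the synthetic FID and all parameter choices are invented.

```python
import numpy as np

def cadzow_denoise(fid, rank, n_iter=5):
    """Denoise a 1-D time-domain signal by iterated Hankel SVD truncation."""
    n = len(fid)
    L = n // 2 + 1
    x = np.asarray(fid, dtype=complex)
    for _ in range(n_iter):
        # Hankel embedding: row i is x[i : i+L].
        H = np.array([x[i:i + L] for i in range(n - L + 1)])
        U, s, Vh = np.linalg.svd(H, full_matrices=False)
        Hr = (U[:, :rank] * s[:rank]) @ Vh[:rank]
        # Average each anti-diagonal back into a 1-D signal.
        y = np.zeros(n, dtype=complex)
        cnt = np.zeros(n)
        for i in range(Hr.shape[0]):
            y[i:i + L] += Hr[i]
            cnt[i:i + L] += 1
        x = y / cnt
    return x

# Two-component decaying FID plus noise; a noiseless sum of two exponentials
# has Hankel rank 2, so rank-2 truncation suppresses most of the noise.
rng = np.random.default_rng(2)
t = np.arange(512)
clean = np.exp((-0.01 + 2j * np.pi * 0.11) * t) + 0.7 * np.exp((-0.02 + 2j * np.pi * 0.23) * t)
noisy = clean + 0.2 * (rng.normal(size=512) + 1j * rng.normal(size=512))
den = cadzow_denoise(noisy, rank=2)
err_before = np.linalg.norm(noisy - clean)
err_after = np.linalg.norm(den - clean)
```

    The rank parameter corresponds to the number of singular values retained, the quantity the paper provides an estimate for.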

  6. Surface modification processes during methane decomposition on Cu-promoted Ni–ZrO2 catalysts

    PubMed Central

    Wolfbeisser, Astrid; Klötzer, Bernhard; Mayr, Lukas; Rameshan, Raffael; Zemlyanov, Dmitry; Bernardi, Johannes; Rupprechter, Günther

    2015-01-01

    The surface chemistry of methane on Ni–ZrO2 and bimetallic CuNi–ZrO2 catalysts and the stability of the CuNi alloy under reaction conditions of methane decomposition were investigated by combining reactivity measurements and in situ synchrotron-based near-ambient pressure XPS. Cu was selected as an exemplary promoter for modifying the reactivity of Ni and enhancing the resistance against coke formation. We observed an activation process occurring in methane between 650 and 735 K with the exact temperature depending on the composition which resulted in an irreversible modification of the catalytic performance of the bimetallic catalysts towards a Ni-like behaviour. The sudden increase in catalytic activity could be explained by an increase in the concentration of reduced Ni atoms at the catalyst surface in the active state, likely as a consequence of the interaction with methane. Cu addition to Ni improved the desired resistance against carbon deposition by lowering the amount of coke formed. As a key conclusion, the CuNi alloy shows limited stability under relevant reaction conditions. This system is stable only in a limited range of temperature up to ~700 K in methane. Beyond this temperature, segregation of Ni species causes a fast increase in methane decomposition rate. In view of the applicability of this system, a detailed understanding of the stability and surface composition of the bimetallic phases present and the influence of the Cu promoter on the surface chemistry under relevant reaction conditions are essential. PMID:25815163

  7. Linear friction weld process monitoring of fixture cassette deformations using empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Bakker, O. J.; Gibson, C.; Wilson, P.; Lohse, N.; Popov, A. A.

    2015-10-01

    Due to its inherent advantages, linear friction welding is a solid-state joining process of increasing importance to the aerospace, automotive, medical and power generation equipment industries. Tangential oscillations and forge stroke during the burn-off phase of the joining process introduce essential dynamic forces, which can also be detrimental to the welding process. Since burn-off is a critical phase in the manufacturing stage, process monitoring is fundamental for quality and stability control purposes. This study aims to improve workholding stability through the analysis of fixture cassette deformations. Methods and procedures for process monitoring are developed and implemented in a fail-or-pass assessment system for fixture cassette deformations during the burn-off phase. Additionally, the de-noised signals are compared to results from previous production runs. The observed deformations as a consequence of the forces acting on the fixture cassette are measured directly during the welding process. Data on the linear friction-welding machine are acquired and de-noised using empirical mode decomposition, before the burn-off phase is extracted. This approach enables a direct, objective comparison of the signal features with trends from previous successful welds. The capacity of the whole process monitoring system is validated and demonstrated through the analysis of a large number of signals obtained from welding experiments.
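    Empirical mode decomposition, used above to de-noise the machine signals, extracts intrinsic mode functions (IMFs) by repeatedly subtracting the mean of the upper and lower extrema envelopes. The sketch below shows one IMF extraction in plain NumPy, using linear-interpolation envelopes instead of the customary cubic splines to stay minimal; the signals and sift count are invented for illustration.

```python
import numpy as np

def envelope(t, x, idx):
    """Envelope through the extrema at indices idx (linear interpolation)."""
    if len(idx) < 2:
        return np.full_like(x, x.mean())
    return np.interp(t, t[idx], x[idx])

def sift_imf(x, n_sift=10):
    """Extract one intrinsic mode function by repeated sifting."""
    t = np.arange(len(x), dtype=float)
    h = np.asarray(x, dtype=float).copy()
    for _ in range(n_sift):
        maxima = np.where((h[1:-1] > h[:-2]) & (h[1:-1] > h[2:]))[0] + 1
        minima = np.where((h[1:-1] < h[:-2]) & (h[1:-1] < h[2:]))[0] + 1
        if len(maxima) < 2 or len(minima) < 2:
            break
        # Subtract the local mean of the upper and lower envelopes.
        mean_env = 0.5 * (envelope(t, h, maxima) + envelope(t, h, minima))
        h = h - mean_env
    return h

# A fast oscillation riding on a slow trend: the first IMF should capture
# the fast part, the residual the slow part.
t = np.linspace(0.0, 1.0, 1000)
fast = 0.5 * np.sin(2 * np.pi * 40 * t)
slow = np.sin(2 * np.pi * 2 * t)
signal = fast + slow
imf1 = sift_imf(signal)
residual = signal - imf1
```

    Repeating the extraction on the residual yields the remaining IMFs; de-noising then amounts to discarding or thresholding the modes dominated by noise.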

  8. Decomposition of aqueous diphenyloxide by ozonolysis and by combined gamma-ray-ozone processing.

    PubMed

    Popov, Petar; Getoff, Nikola

    2004-01-01

    Diphenyloxide (DPO) is one of many rather toxic pollutants produced by the combustion of fossil fuels; it is emitted to the atmosphere with flue gases and brought to ground water by rain and snow. Its decomposition by ozonolysis at room temperature is investigated here, and the major products, such as phenol, resorcinol, hydroquinone and dihydroxybenzoic acid, as well as the total yield of aldehydes and carboxylic acids, were determined as a function of the applied ozone concentration. In addition, DPO degradation was studied under the combined action of gamma-rays and continuous bubbling of ozone at a known concentration. In this case the same products are formed, but their yields differ from the above ones. Owing to the synergistic action of ozone and gamma-rays, the DPO radiolysis is rather efficient, with an initial G-value of 11.3. Some probable reaction mechanisms are presented to explain the degradation process.

  9. Demonstration of base catalyzed decomposition process, Navy Public Works Center, Guam, Mariana Islands

    SciTech Connect

    Schmidt, A.J.; Freeman, H.D.; Brown, M.D.; Zacher, A.H.; Neuenschwander, G.N.; Wilcox, W.A.; Gano, S.R.; Kim, B.C.; Gavaskar, A.R.

    1996-02-01

    Base Catalyzed Decomposition (BCD) is a chemical dehalogenation process designed for treating soils and other substrates contaminated with polychlorinated biphenyls (PCB), pesticides, dioxins, furans, and other hazardous organic substances. PCBs are heavy organic liquids once widely used in industry as lubricants, heat transfer oils, and transformer dielectric fluids. In 1976, production was banned when PCBs were recognized as carcinogenic substances. It was estimated that significant quantities (one billion tons) of U.S. soils, including areas on U.S. military bases outside the country, were contaminated by PCB leaks and spills, and cleanup activities began. The BCD technology was developed in response to these activities. This report details the evolution of the process, from inception to deployment in Guam, and describes the process and system components provided to the Navy to meet the remediation requirements. The report is divided into several sections to cover the range of development and demonstration activities. Section 2.0 gives an overview of the project history. Section 3.0 describes the process chemistry and remediation steps involved. Section 4.0 provides a detailed description of each component and specific development activities. Section 5.0 details the testing and deployment operations and provides the results of the individual demonstration campaigns. Section 6.0 gives an economic assessment of the process. Section 7.0 presents the conclusions and recommendations from this project. The appendices contain equipment and instrument lists, equipment drawings, and detailed run and analytical data.

  10. Decomposition of aniline in aqueous solution by UV/TiO2 process with applying bias potential.

    PubMed

    Ku, Young; Chiu, Ping-Chin; Chou, Yiang-Chen

    2010-11-15

    The application of a bias potential to the photocatalytic decomposition of aniline in aqueous solution was studied under various solution pH values, bias potentials and potassium chloride concentrations. The decomposition of aniline by the UV/TiO(2) process was enhanced by the application of a bias potential at lower voltages; however, the electrolysis of aniline became more dominant as the applied bias potential exceeded 1.0 V. Based on the experimental results and calculated synergetic factors, the application of a bias potential improved the decomposition of aniline more noticeably in acidic solutions than in alkaline solutions. The decomposition of aniline by the UV/bias/TiO(2) process in alkaline solutions increased to a certain extent with the concentration of potassium chloride present in aqueous solution. Experimental results also indicated that the energy consumed by applying a bias potential for aniline decomposition by the UV/bias/TiO(2) process can be much lower than that consumed by increasing the light intensity for photocatalysis.

  11. A data-driven multidimensional signal-noise decomposition approach for GPR data processing

    NASA Astrophysics Data System (ADS)

    Chen, Chih-Sung; Jeng, Yih

    2015-12-01

    We demonstrate the possibility of applying a data-driven nonlinear filtering scheme to the processing of ground penetrating radar (GPR) data. The algorithm is based on the recently developed multidimensional ensemble empirical mode decomposition (MDEEMD) method, which provides a framework for developing a variety of data analysis approaches. GPR data processing is very challenging due to the large data volume, special format, and geometrically sensitive attributes, which are easily affected by various noises. Approaches that work in other fields of data processing may not be equally applicable to GPR data. Therefore, the MDEEMD has to be modified to fit the special needs of GPR data processing. In this study, we first give a brief review of the MDEEMD, and then provide the detailed procedure for implementing a 2D GPR filter by exploiting the modified MDEEMD. A complete synthetic model study shows the details of the algorithm implementation. To assess the performance of the proposed approach, models of various signal-to-noise (S/N) ratios are discussed, and the results of a conventional filtering method are also provided for comparison. Two real GPR field examples and onsite excavations indicate that the proposed approach is feasible for practical use.

  12. Investigating the ventral-lexical, dorsal-sublexical model of basic reading processes using diffusion tensor imaging.

    PubMed

    Cummine, Jacqueline; Dai, Wenjun; Borowsky, Ron; Gould, Layla; Rollans, Claire; Boliek, Carol

    2015-01-01

    Recent results from diffusion tensor imaging (DTI) studies provide evidence of a ventral-lexical stream and a dorsal-sublexical stream associated with reading processing. We investigated the relationship between behavioural reading speed for stimuli thought to rely on either the ventral-lexical, dorsal-sublexical, or both streams and white matter via fractional anisotropy (FA) and mean diffusivity (MD) using DTI tractography. Participants (N = 32) overtly named exception words (e.g., 'one', ventral-lexical), regular words (e.g., 'won', both streams), nonwords ('wum', dorsal-sublexical) and pseudohomophones ('wun', dorsal-sublexical) in a behavioural lab. Each participant then underwent a brain scan that included a 30-directional DTI sequence. Tractography was used to extract FA and MD values from four tracts of interest: inferior longitudinal fasciculus, uncinate fasciculus, arcuate fasciculus, and inferior fronto-occipital fasciculus. Median reaction times (RTs) for reading exception words and regular words both showed a significant correlation with the FA of the uncinate fasciculus thought to underlie the ventral processing stream, such that response time decreased as FA increased. In addition, RT for exception and regular words showed a relationship with MD of the uncinate fasciculus, such that response time increased as MD increased. Multiple regression analyses revealed that exception word RT accounted for unique variability in FA of the uncinate over and above regular words. There were no robust relationships found between pseudohomophones, or nonwords, and tracts thought to underlie the dorsal processing stream. These results support the notion that word recognition, in general, and exception word reading in particular, rely on ventral-lexical brain regions.
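    The FA and MD values extracted by tractography in this study are standard scalar summaries of the diffusion tensor's eigenvalues. The sketch below implements the textbook formulas in NumPy; the example tensors are invented (diffusivities in mm^2/s).

```python
import numpy as np

def fa_md(D):
    """Fractional anisotropy and mean diffusivity of a 3x3 diffusion tensor."""
    lam = np.linalg.eigvalsh(D)          # eigenvalues (principal diffusivities)
    md = lam.mean()                      # mean diffusivity
    num = np.sqrt(((lam - md) ** 2).sum())
    den = np.sqrt((lam ** 2).sum())
    fa = np.sqrt(1.5) * num / den        # FA = sqrt(3/2) ||lam - MD|| / ||lam||
    return fa, md

# Isotropic diffusion -> FA = 0; a strongly prolate tensor (as in a coherent
# white-matter tract) -> FA close to 1.
fa_iso, md_iso = fa_md(np.diag([1e-3, 1e-3, 1e-3]))
fa_fib, md_fib = fa_md(np.diag([1.7e-3, 0.2e-3, 0.2e-3]))
```

    Averaging these scalars along a reconstructed tract gives the per-tract FA and MD values correlated with reaction times above.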

  13. Character Decomposition and Transposition Processes in Chinese Compound Words Modulates Attentional Blink

    PubMed Central

    Cao, Hongwen; Gao, Min; Yan, Hongmei

    2016-01-01

    The attentional blink (AB) is the phenomenon in which the identification of the second of two targets (T2) is attenuated if it is presented less than 500 ms after the first target (T1). Although the AB is eliminated in canonical word conditions, it remains unclear whether the character order in compound words affects the magnitude of the AB. Morpheme decomposition and transposition of Chinese two-character compound words can provide an effective means to examine AB priming and to assess combinations of the component representations inherent to visual word identification. In the present study, we examined the processing of consecutive targets in a rapid serial visual presentation (RSVP) paradigm using Chinese two-character compound words in which the two characters were transposed to form meaningful words or meaningless combinations (reversible, transposed, or canonical words). We found that when two Chinese characters that form a compound word, regardless of their order, are presented in an RSVP sequence, the likelihood of an AB for the second character is greatly reduced or eliminated compared to when the two characters constitute separate words rather than a compound word. Moreover, the order of the report for the two characters is more likely to be reversed when the normal order of the two characters in a compound word is reversed, especially when the interval between the presentation of the two characters is extremely short. These findings are more consistent with the cognitive strategy hypothesis than the resource-limited hypothesis during character decomposition and transposition of Chinese two-character compound words. These results suggest that compound characters are perceived as a unit, rather than two separate words. The data further suggest that readers could easily understand the text with character transpositions in compound words during Chinese reading. PMID:27379003

  14. Interactive multiscale tensor reconstruction for multiresolution volume visualization.

    PubMed

    Suter, Susanne K; Guitián, José A Iglesias; Marton, Fabio; Agus, Marco; Elsener, Andreas; Zollikofer, Christoph P E; Gopi, M; Gobbetti, Enrico; Pajarola, Renato

    2011-12-01

    Large scale and structurally complex volume datasets from high-resolution 3D imaging devices or computational simulations pose a number of technical challenges for interactive visual analysis. In this paper, we present the first integration of a multiscale volume representation based on tensor approximation within a GPU-accelerated out-of-core multiresolution rendering framework. Specific contributions include (a) a hierarchical brick-tensor decomposition approach for pre-processing large volume data, (b) a GPU accelerated tensor reconstruction implementation exploiting CUDA capabilities, and (c) an effective tensor-specific quantization strategy for reducing data transfer bandwidth and out-of-core memory footprint. Our multiscale representation allows for the extraction, analysis and display of structural features at variable spatial scales, while adaptive level-of-detail rendering methods make it possible to interactively explore large datasets within a constrained memory footprint. The quality and performance of our prototype system is evaluated on large structurally complex datasets, including gigabyte-sized micro-tomographic volumes.
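    The tensor-approximation representation described above can be illustrated with the simplest Tucker construction, a truncated higher-order SVD (HOSVD). The sketch below is a generic NumPy version, not the paper's hierarchical brick decomposition or its CUDA reconstruction; the synthetic volume and the chosen ranks are invented.

```python
import numpy as np

def unfold(T, mode):
    """Matricize tensor T along the given mode."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_multiply(T, M, mode):
    """n-mode product: multiply matrix M onto the given mode of tensor T."""
    Tm = np.moveaxis(T, mode, 0)
    return np.moveaxis(np.tensordot(M, Tm, axes=(1, 0)), 0, mode)

def truncated_hosvd(T, ranks):
    """Rank-(r1, r2, r3) Tucker approximation via truncated HOSVD."""
    Us = []
    for mode, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
        Us.append(U[:, :r])
    core = T
    for mode, U in enumerate(Us):
        core = mode_multiply(core, U.T, mode)
    return core, Us

def tucker_reconstruct(core, Us):
    T = core
    for mode, U in enumerate(Us):
        T = mode_multiply(T, U, mode)
    return T

# A smooth "volume brick" of low multilinear rank compresses essentially losslessly.
x = np.linspace(0.0, 1.0, 16)
T = np.einsum('i,j,k->ijk', np.sin(3 * x), np.cos(2 * x), x) \
    + 0.5 * np.einsum('i,j,k->ijk', x, x ** 2, np.ones_like(x))
core, Us = truncated_hosvd(T, (2, 2, 2))
T_hat = tucker_reconstruct(core, Us)
rel_err = np.linalg.norm(T_hat - T) / np.linalg.norm(T)
```

    Storing only the small core and the factor matrices is what makes the reduced memory footprint and GPU-side reconstruction in the paper possible.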

  15. Fundamental phenomena on fuel decomposition and boundary layer combustion processes with applications to hybrid rocket motors

    NASA Astrophysics Data System (ADS)

    Kuo, Kenneth K.; Lu, Y. C.; Chiaverini, Martin J.; Harting, George C.

    1994-11-01

    An experimental study on the fundamental processes involved in fuel decomposition and boundary layer combustion in hybrid rocket motors is being conducted at the High Pressure Combustion Laboratory of the Pennsylvania State University. This research should provide a useful engineering technology base in the development of hybrid rocket motors as well as a fundamental understanding of the complex processes involved in hybrid propulsion. A high pressure slab motor has been designed and manufactured for conducting experimental investigations. Oxidizer (LOX or GOX) supply and control systems have been designed and partly constructed for the head-end injection into the test chamber. Experiments using HTPB fuel, as well as fuels supplied by NASA designated industrial companies will be conducted. Design and construction of fuel casting molds and sample holders have been completed. The portion of these items for industrial company fuel casting will be sent to the McDonnell Douglas Aerospace Corporation in the near future. The study focuses on the following areas: observation of solid fuel burning processes with LOX or GOX, measurement and correlation of solid fuel regression rate with operating conditions, measurement of flame temperature and radical species concentrations, determination of the solid fuel subsurface temperature profile, and utilization of experimental data for validation of a companion theoretical study (Part 2) also being conducted at PSU.

  16. Fundamental phenomena on fuel decomposition and boundary layer combustion processes with applications to hybrid rocket motors

    NASA Technical Reports Server (NTRS)

    Kuo, Kenneth K.; Lu, Y. C.; Chiaverini, Martin J.; Harting, George C.

    1994-01-01

    An experimental study on the fundamental processes involved in fuel decomposition and boundary layer combustion in hybrid rocket motors is being conducted at the High Pressure Combustion Laboratory of the Pennsylvania State University. This research should provide a useful engineering technology base in the development of hybrid rocket motors as well as a fundamental understanding of the complex processes involved in hybrid propulsion. A high pressure slab motor has been designed and manufactured for conducting experimental investigations. Oxidizer (LOX or GOX) supply and control systems have been designed and partly constructed for the head-end injection into the test chamber. Experiments using HTPB fuel, as well as fuels supplied by NASA designated industrial companies will be conducted. Design and construction of fuel casting molds and sample holders have been completed. The portion of these items for industrial company fuel casting will be sent to the McDonnell Douglas Aerospace Corporation in the near future. The study focuses on the following areas: observation of solid fuel burning processes with LOX or GOX, measurement and correlation of solid fuel regression rate with operating conditions, measurement of flame temperature and radical species concentrations, determination of the solid fuel subsurface temperature profile, and utilization of experimental data for validation of a companion theoretical study (Part 2) also being conducted at PSU.

  17. The classical model for moment tensors

    NASA Astrophysics Data System (ADS)

    Tape, W.; Tape, C.

    2013-12-01

    A seismic moment tensor is a description of an earthquake source, but the description is indirect. The moment tensor describes seismic radiation rather than the actual physical process that initiates the radiation. A moment tensor 'model' then ties the physical process to the moment tensor. The model is not unique, and the physical process is therefore not unique. In the classical moment tensor model (Aki and Richards, 1980), an earthquake arises from slip along a planar fault, but with the slip not necessarily in the plane of the fault. The model specifies the resulting moment tensor in terms of the slip vector, the fault normal vector, and the Lame elastic parameters, assuming isotropy. We review the classical model in the context of the fundamental lune. The lune is closely related to the space of moment tensors, and it provides a setting that is conceptually natural as well as pictorial. In addition to the classical model, we consider a crack plus double couple model (CDC model) in which a moment tensor is regarded as the sum of a crack tensor and a double couple. A compilation of full moment tensors from the literature reveals large deviations in Poisson's ratio as implied by the classical model. Either the classical model is inadequate or the published full moment tensors have very large uncertainties. We question the common interpretation of the isotropic component as a volume change in the source region.
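    The classical model described above has a standard closed form: up to a scalar factor of fault area times slip magnitude, M = λ(s·n)I + μ(snᵀ + nsᵀ) for slip direction s, fault normal n, and Lamé parameters λ and μ. A minimal NumPy sketch (the vectors and parameter values are illustrative, not from the paper):

    ```python
    import numpy as np

    def classical_moment_tensor(s, n, lam, mu):
        """M = lam*(s.n)*I + mu*(s n^T + n s^T), up to the seismic moment scale factor."""
        s, n = np.asarray(s, float), np.asarray(n, float)
        return lam * np.dot(s, n) * np.eye(3) + mu * (np.outer(s, n) + np.outer(n, s))

    # Pure double couple: slip in the fault plane (s perpendicular to n)
    # gives a traceless, purely deviatoric moment tensor.
    M_dc = classical_moment_tensor([1, 0, 0], [0, 0, 1], lam=1.0, mu=1.0)

    # Opening crack: slip along the normal (s parallel to n) adds an isotropic
    # part, which is why the isotropic component is often read as a volume change.
    M_crack = classical_moment_tensor([0, 0, 1], [0, 0, 1], lam=1.0, mu=1.0)
    ```

    For the crack case the result is diag(λ, λ, λ + 2μ), whose trace 3λ + 2μ ties the isotropic component to the elastic parameters, which is how published full moment tensors imply a Poisson's ratio.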

  18. Multilinear operators for higher-order decompositions.

    SciTech Connect

    Kolda, Tamara Gibson

    2006-04-01

    We propose two new multilinear operators for expressing the matrix compositions that are needed in the Tucker and PARAFAC (CANDECOMP) decompositions. The first operator, which we call the Tucker operator, is shorthand for performing an n-mode matrix multiplication for every mode of a given tensor and can be employed to concisely express the Tucker decomposition. The second operator, which we call the Kruskal operator, is shorthand for the sum of the outer-products of the columns of N matrices and allows a divorce from a matricized representation and a very concise expression of the PARAFAC decomposition. We explore the properties of the Tucker and Kruskal operators independently of the related decompositions. Additionally, we provide a review of the matrix and tensor operations that are frequently used in the context of tensor decompositions.
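    Both operators translate directly into array code. A minimal NumPy sketch (function names and shapes are illustrative, not the paper's notation): the Tucker operator chains an n-mode matrix product over every mode of a core tensor, and the Kruskal operator sums outer products of corresponding columns of N matrices.

    ```python
    import numpy as np

    def mode_n_multiply(T, M, n):
        """n-mode product: multiply tensor T by matrix M along mode n."""
        # Move mode n to the front, flatten the remaining modes, multiply, restore.
        Tn = np.moveaxis(T, n, 0).reshape(T.shape[n], -1)
        out = M @ Tn
        new_shape = (M.shape[0],) + tuple(np.delete(T.shape, n))
        return np.moveaxis(out.reshape(new_shape), 0, n)

    def tucker_operator(G, matrices):
        """[[G; A1, ..., AN]]: n-mode multiply the core G by one matrix per mode."""
        T = G
        for n, M in enumerate(matrices):
            T = mode_n_multiply(T, M, n)
        return T

    def kruskal_operator(matrices):
        """[[A1, ..., AN]]: sum of outer products of corresponding columns."""
        R = matrices[0].shape[1]
        T = np.zeros(tuple(M.shape[0] for M in matrices))
        for r in range(R):
            outer = matrices[0][:, r]
            for M in matrices[1:]:
                outer = np.multiply.outer(outer, M[:, r])
            T += outer
        return T

    # For two modes, [[A, B]] = A @ B.T, and an identity core reproduces it.
    A = np.array([[1.0, 0.0], [2.0, 1.0]])
    B = np.array([[1.0, 1.0], [0.0, 2.0]])
    T_kruskal = kruskal_operator([A, B])
    T_tucker = tucker_operator(np.eye(2), [A, B])
    ```

    The equality of the two results for a superdiagonal core illustrates the well-known fact that PARAFAC is a Tucker decomposition with a diagonal core.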

  19. [Rates of decomposition processes in mountain soils of the Sudeten as a function of edaphic-climatic and biotic factors].

    PubMed

    Striganova, B R; Bienkowski, P

    2000-01-01

    The rate of grass litter decomposition was studied in soils of the Karkonosze Mountains of the Sudeten at different altitudes. Parallel structural-functional investigations of the soil animal population, using soil macrofauna as an example, were carried out, and heavy metals were assayed in the soil at stationary plots to reveal the effects of both natural and anthropogenic factors on soil biological activity. The recent contamination of soil in the Sudeten by heavy metals and sulfur does not affect the spatial distribution and abundance of the soil-dwelling invertebrates or the decomposition rates. The latter correlated with a high level of soil saprotroph activity. The activity of the decomposition processes depends on the soil content of organic matter, conditions of soil drainage, and the temperature of the upper soil horizon. PMID:11149317

  1. Mathematical simulation of thermal decomposition processes in coking polymers during intense heating

    SciTech Connect

    Shlenskii, O.F.; Polyakov, A.A.

    1994-12-01

    Description of nonstationary heat transfer in heat-shielding materials based on cross-linked polymers, mathematical simulation of chemical engineering processes of treating coking and fiery coals, and design calculations all require taking thermal destruction kinetics into account. The kinetics of chemical transformations affects the substance density change depending on the temperature, the time, the heat-release function, and other properties of materials. The traditionally accepted description of the thermal destruction kinetics of coking materials is based on formulating a set of kinetic equations, in which only chemical transformations are taken into account. However, such an approach does not necessarily agree with the obtained experimental data for the case of intense heating. The authors propose including the parameters characterizing the decrease of intermolecular interaction in a comparatively narrow temperature interval (20-40 K) into the set of kinetic equations. In the neighborhood of a certain temperature T1, which is called the limiting temperature of thermal decomposition, a decrease in intermolecular interaction causes an increase in the rates of chemical and phase transformations. The effect of the enhancement of destruction processes has been found experimentally by the contact thermal analysis method.
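    The paper's modified kinetic equations are not reproduced here, but the baseline they extend is first-order Arrhenius decomposition under a heating program. A minimal sketch with purely illustrative parameter values (not taken from the paper):

    ```python
    import numpy as np

    # First-order Arrhenius decomposition: d(rho)/dt = -k(T) * rho.
    # All parameter values below are illustrative, not from the paper.
    A = 1.0e10   # pre-exponential factor, 1/s
    E = 1.5e5    # activation energy, J/mol
    R = 8.314    # gas constant, J/(mol K)

    def rate_constant(T):
        return A * np.exp(-E / (R * T))

    def integrate_density(T_of_t, t_end, dt=1e-4):
        """Explicit Euler integration of the relative density under a heating program."""
        rho, t = 1.0, 0.0
        while t < t_end:
            rho -= rate_constant(T_of_t(t)) * rho * dt
            t += dt
        return rho

    # Linear heating at 1000 K/s starting from 300 K.
    heating = lambda t: 300.0 + 1000.0 * t
    residual = integrate_density(heating, t_end=0.5)
    ```

    The authors' proposal amounts to making k(T) grow anomalously fast within a narrow interval below the limiting temperature T1, which a purely chemical Arrhenius law like the one above cannot capture.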

  2. Feedback processes in cellulose thermal decomposition: implications for fire-retarding strategies and treatments

    NASA Astrophysics Data System (ADS)

    Ball, R.; McIntosh, A. C.; Brindley, J.

    2004-06-01

    A simple dynamical system that models the competitive thermokinetics and chemistry of cellulose decomposition is examined, with reference to evidence from experimental studies indicating that char formation is a low activation energy exothermal process and volatilization is a high activation energy endothermal process. The thermohydrolysis chemistry at the core of the primary competition is described. Essentially, the competition is between two nucleophiles, a molecule of water and an -OH group on C6 of an end glucosyl cation, to form either a reducing chain fragment with the propensity to undergo the bond-forming reactions that ultimately form char, or a levoglucosan end-fragment that depolymerizes to volatile products. The results of this analysis suggest that promotion of char formation under thermal stress can actually increase the production of flammable volatiles. Thus, we would like to convey an important safety message in this paper: in some situations where heat and mass transfer is restricted in cellulosic materials, such as furnishings, insulation, and stockpiles, the use of char-promoting treatments for fire retardation may have the effect of increasing the risk of flaming combustion.
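    The core of the competition described above, a low activation energy char channel against a high activation energy volatilization channel, implies that the volatile branching fraction grows with temperature. A minimal sketch with illustrative rate parameters (not the paper's values):

    ```python
    import numpy as np

    R = 8.314  # gas constant, J/(mol K)

    # Two competing first-order channels (illustrative parameters, not the paper's):
    # char formation has the low activation energy, volatilization the high one.
    def branching_fraction_volatiles(T, A_char=1e8, E_char=1.0e5,
                                     A_vol=1e13, E_vol=2.0e5):
        k_char = A_char * np.exp(-E_char / (R * T))
        k_vol = A_vol * np.exp(-E_vol / (R * T))
        return k_vol / (k_char + k_vol)

    # The volatile fraction rises steeply with temperature: the high-activation-
    # energy channel wins when the sample runs hot, e.g. under exothermal char
    # formation with restricted heat transfer.
    low = branching_fraction_volatiles(600.0)
    high = branching_fraction_volatiles(1200.0)
    ```

    This is one way to see the safety message: if exothermic char formation heats a poorly ventilated sample, the temperature rise itself shifts the branching toward flammable volatiles.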

  3. Empirical mode decomposition as a time-varying multirate signal processing system

    NASA Astrophysics Data System (ADS)

    Yang, Yanli

    2016-08-01

    Empirical mode decomposition (EMD) can adaptively split composite signals into narrow subbands termed intrinsic mode functions (IMFs). Although an analytical expression of IMFs extracted by EMD from signals is introduced in Yang et al. (2013) [1], it is only used for the case of extrema spaced uniformly. In this paper, the EMD algorithm is analyzed from a digital signal processing perspective for the case of extrema spaced nonuniformly. Firstly, the extrema extraction is represented by a time-varying extrema decimator. The nonuniform extrema extraction is analyzed through modeling the time-varying extrema decimation at a fixed time point as a time-invariant decimation. Secondly, by using the impulse/summation approach, spline interpolation for knots spaced nonuniformly is shown as two basic operations, time-varying interpolation and filtering by a time-varying spline filter. Thirdly, envelopes of signals are written as the output of the time-varying spline filter. An expression of envelopes of signals in both the time and frequency domains is presented. The EMD algorithm is then described as a time-varying multirate signal processing system. Finally, an equation to model IMFs is derived by using a matrix formulation in the time domain for the general case of extrema spaced nonuniformly.
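    The building blocks analyzed above, extrema extraction followed by spline-envelope filtering, make up the sifting step of EMD. A minimal one-IMF sketch using SciPy cubic splines (this is the textbook sifting loop, not the paper's multirate formulation; stopping criteria and boundary handling are deliberately simplified):

    ```python
    import numpy as np
    from scipy.interpolate import CubicSpline
    from scipy.signal import argrelextrema

    def sift_once(t, x):
        """One sifting step: subtract the mean of the upper/lower spline envelopes."""
        maxima = argrelextrema(x, np.greater)[0]
        minima = argrelextrema(x, np.less)[0]
        if len(maxima) < 2 or len(minima) < 2:
            return x  # not enough extrema to build envelopes
        upper = CubicSpline(t[maxima], x[maxima])(t)
        lower = CubicSpline(t[minima], x[minima])(t)
        return x - 0.5 * (upper + lower)

    def extract_imf(t, x, n_sifts=10):
        """Repeated sifting isolates the fastest oscillatory component."""
        h = x.copy()
        for _ in range(n_sifts):
            h = sift_once(t, h)
        return h

    # Two-tone test signal: the first IMF should approximate the 40 Hz component.
    t = np.linspace(0.0, 1.0, 2000)
    x = np.sin(2 * np.pi * 40 * t) + 0.5 * np.sin(2 * np.pi * 3 * t)
    imf = extract_imf(t, x)
    ```

    Note how the extrema of `x` act as the nonuniformly spaced "decimated" samples from which the spline filter rebuilds the envelopes, which is exactly the structure the paper formalizes.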

  4. Fundamental phenomena on fuel decomposition and boundary layer combustion processes with applications to hybrid rocket motors

    NASA Technical Reports Server (NTRS)

    Kuo, Kenneth K.; Lu, Y. C.; Chiaverini, Martin J.; Harting, George C.

    1994-01-01

    An experimental study on the fundamental processes involved in fuel decomposition and boundary layer combustion in hybrid rocket motors is being conducted at the High Pressure Combustion Laboratory of the Pennsylvania State University. This research should provide an engineering technology base for development of large scale hybrid rocket motors as well as a fundamental understanding of the complex processes involved in hybrid propulsion. A high pressure slab motor has been designed for conducting experimental investigations. Oxidizer (LOX or GOX) is injected through the head-end over a solid fuel (HTPB) surface. Experiments using fuels supplied by NASA designated industrial companies will also be conducted. The study focuses on the following areas: measurement and observation of solid fuel burning with LOX or GOX, correlation of solid fuel regression rate with operating conditions, measurement of flame temperature and radical species concentrations, determination of the solid fuel subsurface temperature profile, and utilization of experimental data for validation of a companion theoretical study also being conducted at PSU.

  5. Age-Related Modifications of Diffusion Tensor Imaging Parameters and White Matter Hyperintensities as Inter-Dependent Processes

    PubMed Central

    Pelletier, Amandine; Periot, Olivier; Dilharreguy, Bixente; Hiba, Bassem; Bordessoules, Martine; Chanraud, Sandra; Pérès, Karine; Amieva, Hélène; Dartigues, Jean-François; Allard, Michèle; Catheline, Gwénaëlle

    2016-01-01

    Microstructural changes of White Matter (WM) associated with aging have been widely described through Diffusion Tensor Imaging (DTI) parameters. In parallel, White Matter Hyperintensities (WMH) as observed on a T2-weighted MRI are extremely common in older individuals. However, few studies have investigated both phenomena conjointly. The present study investigates aging effects on DTI parameters in the absence and in the presence of WMH. Diffusion maps were constructed based on 21-direction DTI scans of young adults (n = 19, mean age = 33, SD = 7.4) and two age-matched groups of older adults, one presenting low-level-WMH (n = 20, mean age = 78, SD = 3.2) and one presenting high-level-WMH (n = 20, mean age = 79, SD = 5.4). Older subjects with low-level-WMH presented modifications of DTI parameters in comparison to younger subjects, fitting with the DTI pattern classically described in aging, i.e., Fractional Anisotropy (FA) decrease/Radial Diffusivity (RD) increase. Furthermore, older subjects with high-level-WMH showed greater DTI modifications in Normal Appearing White Matter (NAWM) in comparison to those with low-level-WMH. Finally, in older subjects with high-level-WMH, FA and RD values of NAWM were associated with WMH burden. Therefore, our findings suggest that DTI modifications and the presence of WMH may be two inter-dependent processes occurring within different temporal windows. DTI changes would reflect the early phase of white matter changes and WMH would appear as a consequence of those changes. PMID:26834625
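    The DTI scalars discussed above are standard functions of the diffusion tensor's eigenvalues. A small NumPy sketch (the eigenvalue sets are illustrative numbers in mm²/s chosen to mimic the reported FA-decrease/RD-increase pattern, not the study's data):

    ```python
    import numpy as np

    def dti_scalars(eigvals):
        """FA, MD, RD, AD from the three eigenvalues of a diffusion tensor."""
        lam = np.sort(np.asarray(eigvals, float))[::-1]  # lambda1 >= lambda2 >= lambda3
        md = lam.mean()                                  # mean diffusivity
        fa = np.sqrt(1.5 * np.sum((lam - md) ** 2) / np.sum(lam ** 2))
        rd = (lam[1] + lam[2]) / 2.0                     # radial diffusivity
        ad = lam[0]                                      # axial diffusivity
        return fa, md, rd, ad

    # A strongly anisotropic tensor vs a more isotropic one: FA falls, RD rises,
    # qualitatively matching the aging pattern described in the abstract.
    fa_young, _, rd_young, _ = dti_scalars([1.7e-3, 0.3e-3, 0.3e-3])
    fa_old, _, rd_old, _ = dti_scalars([1.5e-3, 0.6e-3, 0.6e-3])
    ```

    FA is bounded in [0, 1] and is invariant to overall scaling of the eigenvalues, which is why it is the usual summary of microstructural anisotropy.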

  6. Combined effects of leaf litter and soil microsite on decomposition process in arid rangelands.

    PubMed

    Carrera, Analía Lorena; Bertiller, Mónica Beatriz

    2013-01-15

    The objective of this study was to analyze the combined effects of leaf litter quality and soil properties on litter decomposition and soil nitrogen (N) mineralization at conserved (C) and disturbed by sheep grazing (D) vegetation states in arid rangelands of the Patagonian Monte. It was hypothesized that spatial differences in soil inorganic-N levels have larger impact on decomposition processes of non-recalcitrant than recalcitrant leaf litter (low and high concentration of secondary compounds, respectively). Leaf litter and upper soil were extracted from modal size plant patches (patch microsite) and the associated inter-patch area (inter-patch microsite) in C and D. Leaf litter was pooled per vegetation state and soil was pooled combining vegetation state and microsite. Concentrations of N and secondary compounds in leaf litter and total and inorganic-N in soil were assessed at each pooled sample. Leaf litter decay and soil N mineralization at microsites of C and D were estimated in 160 microcosms incubated at field capacity (16 months). C soils had higher total N than D soils (0.58 and 0.41 mg/g, respectively). Patch soil of C and inter-patch soil of D exhibited the highest values of inorganic-N (8.8 and 8.4 μg/g, respectively). Leaf litter of C was less recalcitrant and decomposed faster than that of D. Non-recalcitrant leaf litter decay and induced soil N mineralization had larger variation among microsites (coefficients of variation = 25 and 41%, respectively) than recalcitrant leaf litter (coefficients of variation = 12 and 32%, respectively). Changes in the canopy structure induced by grazing disturbance increased leaf litter recalcitrance, and reduced litter decay and soil N mineralization, independently of soil N levels. This highlights the importance of the combined effects of soil and leaf litter properties on N cycling, probably with consequences for vegetation reestablishment and dynamics, rangeland resistance and resilience with implications

  8. KOALA: A program for the processing and decomposition of transient spectra

    NASA Astrophysics Data System (ADS)

    Grubb, Michael P.; Orr-Ewing, Andrew J.; Ashfold, Michael N. R.

    2014-06-01

    Extracting meaningful kinetic traces from time-resolved absorption spectra is a non-trivial task, particularly for solution phase spectra where solvent interactions can substantially broaden and shift the transition frequencies. Typically, each spectrum is composed of signal from a number of molecular species (e.g., excited states, intermediate complexes, product species) with overlapping spectral features. Additionally, the profiles of these spectral features may evolve in time (i.e., signal nonlinearity), further complicating the decomposition process. Here, we present a new program for decomposing mixed transient spectra into their individual component spectra and extracting the corresponding kinetic traces: KOALA (Kinetics Observed After Light Absorption). The software combines spectral target analysis with brute-force linear least squares fitting, which is computationally efficient because of the small nonlinear parameter space of most spectral features. Within, we demonstrate the application of KOALA to two sets of experimental transient absorption spectra with multiple mixed spectral components. Although designed for decomposing solution-phase transient absorption data, KOALA may in principle be applied to any time-evolving spectra with multiple components.
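    The fitting strategy described above, known component spectra combined by linear least squares at every time delay, can be sketched in a few lines of NumPy. Everything below is synthetic and illustrative (Gaussian basis bands and exponential kinetics standing in for real target-analysis spectra), not KOALA's implementation:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Two known component spectra on a common wavelength grid.
    wl = np.linspace(300.0, 600.0, 400)
    gauss = lambda c, w: np.exp(-0.5 * ((wl - c) / w) ** 2)
    basis = np.column_stack([gauss(380.0, 20.0), gauss(480.0, 30.0)])  # (400, 2)

    # Synthetic transient spectra: a decaying reactant, a growing product, noise.
    t = np.linspace(0.0, 5.0, 50)
    true_amps = np.column_stack([np.exp(-t), 1.0 - np.exp(-t)])        # (50, 2)
    spectra = true_amps @ basis.T + 0.01 * rng.standard_normal((50, 400))

    # Linear least squares at every delay recovers the kinetic traces:
    # this is the cheap inner step; only the spectral shape parameters
    # would need a (small) nonlinear search.
    fit_amps, *_ = np.linalg.lstsq(basis, spectra.T, rcond=None)
    fit_amps = fit_amps.T                                              # (50, 2)
    ```

    Because the amplitudes enter linearly, the brute-force search mentioned in the abstract only has to cover the few nonlinear shape parameters, with `lstsq` solving the rest in one shot per candidate.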

  9. KOALA: a program for the processing and decomposition of transient spectra.

    PubMed

    Grubb, Michael P; Orr-Ewing, Andrew J; Ashfold, Michael N R

    2014-06-01

    Extracting meaningful kinetic traces from time-resolved absorption spectra is a non-trivial task, particularly for solution phase spectra where solvent interactions can substantially broaden and shift the transition frequencies. Typically, each spectrum is composed of signal from a number of molecular species (e.g., excited states, intermediate complexes, product species) with overlapping spectral features. Additionally, the profiles of these spectral features may evolve in time (i.e., signal nonlinearity), further complicating the decomposition process. Here, we present a new program for decomposing mixed transient spectra into their individual component spectra and extracting the corresponding kinetic traces: KOALA (Kinetics Observed After Light Absorption). The software combines spectral target analysis with brute-force linear least squares fitting, which is computationally efficient because of the small nonlinear parameter space of most spectral features. Within, we demonstrate the application of KOALA to two sets of experimental transient absorption spectra with multiple mixed spectral components. Although designed for decomposing solution-phase transient absorption data, KOALA may in principle be applied to any time-evolving spectra with multiple components.

  10. Efficient MATLAB computations with sparse and factored tensors.

    SciTech Connect

    Bader, Brett William; Kolda, Tamara Gibson (Sandia National Lab, Livermore, CA)

    2006-12-01

    In this paper, the term tensor refers simply to a multidimensional or N-way array, and we consider how specially structured tensors allow for efficient storage and computation. First, we study sparse tensors, which have the property that the vast majority of the elements are zero. We propose storing sparse tensors using coordinate format and describe the computational efficiency of this scheme for various mathematical operations, including those typical to tensor decomposition algorithms. Second, we study factored tensors, which have the property that they can be assembled from more basic components. We consider two specific types: a Tucker tensor can be expressed as the product of a core tensor (which itself may be dense, sparse, or factored) and a matrix along each mode, and a Kruskal tensor can be expressed as the sum of rank-1 tensors. We are interested in the case where the storage of the components is less than the storage of the full tensor, and we demonstrate that many elementary operations can be computed using only the components. All of the efficiencies described in this paper are implemented in the Tensor Toolbox for MATLAB.
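    The coordinate storage scheme described above is easy to sketch: keep one (index, value) pair per nonzero, and write operations that touch only the stored entries. A minimal Python illustration (class and method names are ours, not the Tensor Toolbox API):

    ```python
    import numpy as np

    class COOTensor:
        """Sparse tensor in coordinate format: one (index, value) pair per nonzero."""
        def __init__(self, shape, indices, values):
            self.shape = shape
            self.indices = np.asarray(indices)          # (nnz, ndim)
            self.values = np.asarray(values, float)     # (nnz,)

        def norm(self):
            # Frobenius-type norm touches only the stored nonzeros.
            return np.sqrt(np.sum(self.values ** 2))

        def ttv(self, v, mode):
            """Multiply by vector v along `mode`; returns a dense (ndim-1)-way array."""
            out_shape = tuple(s for m, s in enumerate(self.shape) if m != mode)
            out = np.zeros(out_shape)
            for idx, val in zip(self.indices, self.values):
                rest = tuple(i for m, i in enumerate(idx) if m != mode)
                out[rest] += val * v[idx[mode]]
            return out

    # A 3x3x3 tensor with 3 nonzeros: storage is O(nnz), not O(27).
    T = COOTensor((3, 3, 3),
                  indices=[(0, 0, 0), (1, 2, 1), (2, 1, 2)],
                  values=[1.0, 2.0, -3.0])
    x = T.ttv(np.ones(3), mode=2)   # contract the last mode
    ```

    Tensor-times-vector products like `ttv` are exactly the kernels that decomposition algorithms such as ALS call repeatedly, which is why making them nnz-proportional matters.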

  11. Growth of lanthanum manganate buffer layers for coated conductors via a metal-organic decomposition process

    NASA Astrophysics Data System (ADS)

    Venkataraman, Kartik

    LaMnO3 (LMO) was identified as a possible buffer material for YBa2Cu3O7-x conductors due to its diffusion barrier properties and close lattice match with YBa2Cu 3O7-x. Growth of LMO films via a metal-organic decomposition (MOD) process on Ni, Ni-5at.%W (Ni-5W), and single crystal SrTiO3 substrates was investigated. Phase-pure LMO was grown via MOD on Ni and SrTiO 3 substrates at temperatures and oxygen pressures within a thermodynamic "process window" wherein LMO, Ni, Ni-5W, and SrTiO3 are all stable components. LMO could not be grown on Ni-5W in the "process window" because tungsten diffused from the substrate into the overlying film, where it reacted to form La and Mn tungstates. The kinetics of tungstate formation and crystallization of phase-pure LMO from the La and Mn acetate precursors are competitive in the temperature range explored (850--1100°C). Temperatures <850°C might mitigate tungsten diffusion from the substrate to the film sufficiently to obviate tungstate formation, but LMO films deposited via MOD require temperatures ≥850°C for nucleation and grain growth. Using a Y2O3 seed layer on Ni-5W to block tungsten from diffusing into the LMO film was explored; however, Y2O3 reacts with tungsten in the "process window" at 850--1100°C. Tungsten diffusion into Y2O3 can be blocked if epitaxial, crack-free NiWO4 and NiO layers are formed at the interface between Ni-5W and Y2O3. NiWO 4 only grows epitaxially if the overlying NiO and buffer layers are thick enough to mechanically suppress (011)-oriented NiWO4 grain growth. This is not the case when a bare 75 nm-thick Y2O3 film on Ni-5W is processed at 850°C. These studies show that the Ni-5W substrate must be at a low temperature to prevent tungsten diffusion, whereas the LMO precursor film must be at elevated temperature to crystallize. An excimer laser-assisted MOD process was used where a Y2O 3-coated Ni-5W substrate was held at 500°C in air and the pulsed laser photo-thermally heated the Y2O3 and LMO

  12. Unsupervised Tensor Mining for Big Data Practitioners.

    PubMed

    Papalexakis, Evangelos E; Faloutsos, Christos

    2016-09-01

    Multiaspect data are ubiquitous in modern Big Data applications. For instance, different aspects of a social network are the different types of communication between people, the time stamp of each interaction, and the location associated to each individual. How can we jointly model all those aspects and leverage the additional information that they introduce to our analysis? Tensors, which are multidimensional extensions of matrices, are a principled and mathematically sound way of modeling such multiaspect data. In this article, our goal is to popularize tensors and tensor decompositions to Big Data practitioners by demonstrating their effectiveness, outlining challenges that pertain to their application in Big Data scenarios, and presenting our recent work that tackles those challenges. We view this work as a step toward a fully automated, unsupervised tensor mining tool that can be easily and broadly adopted by practitioners in academia and industry. PMID:27642720
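    The social-network example above maps directly onto a 3-way array. A toy sketch of building such a multiaspect tensor (the event log is fabricated for illustration):

    ```python
    import numpy as np

    # A toy interaction log with three aspects: sender, receiver, and day.
    events = [(0, 1, 0), (0, 1, 1), (1, 2, 0), (2, 0, 2), (0, 1, 2), (1, 2, 2)]

    n_people, n_days = 3, 3
    X = np.zeros((n_people, n_people, n_days))
    for sender, receiver, day in events:
        X[sender, receiver, day] += 1.0    # who talked to whom, and when

    # Each slice X[:, :, d] is the communication graph on day d; the full 3-way
    # array is what a tensor decomposition (e.g. CP or Tucker) would factorise
    # into interpretable (community, community, time-profile) components.
    messages_sent = X.sum(axis=(1, 2))     # per-person totals, one aspect summed out
    ```

    Flattening the time aspect away (as a matrix method would) loses exactly the information that makes the temporal patterns recoverable, which is the practical argument for tensor models.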

  14. Mathematical modeling of frontal process in thermal decomposition of a substance with allowance for the finite velocity of heat propagation

    SciTech Connect

    Shlenskii, O.F.; Murashov, G.G.

    1982-05-01

    In describing frontal processes of thermal decomposition of high-energy condensed substances, for example detonation, it is common practice to write the equation for the conservation of energy without any limitations on the heat propagation velocity (HPV). At the same time, it is known that in calculating fast processes of heat conduction, the assumption of an infinitely high HPV is not always justified. In order to evaluate the influence of the HPV on the results from calculations of the heat conduction process under conditions of a short-term exothermic decomposition of a condensed substance, the solution of the problem of heating a semi-infinite, thermally unstable solid body with boundary conditions of the third kind on the surface has been examined.
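    The standard way to impose a finite heat propagation velocity is the Cattaneo (hyperbolic) heat equation, τ∂²T/∂t² + ∂T/∂t = α∂²T/∂x², in which disturbances travel at c = √(α/τ) rather than instantaneously. A minimal explicit finite-difference sketch (parameter values are illustrative, and the third-kind surface condition of the paper is simplified here to a fixed surface temperature):

    ```python
    import numpy as np

    # Cattaneo conduction: tau*T_tt + T_t = alpha*T_xx, finite front speed c.
    alpha, tau = 1.0e-3, 0.1          # diffusivity (m^2/s), relaxation time (s)
    c = np.sqrt(alpha / tau)          # finite heat propagation velocity: 0.1 m/s

    nx, dx = 201, 1.0e-3              # a 0.2 m bar
    dt = 0.5 * dx / c                 # CFL-limited explicit time step
    T_now = np.zeros(nx)
    T_prev = np.zeros(nx)
    T_now[0] = T_prev[0] = 1.0        # suddenly heated surface

    for _ in range(100):
        lap = np.zeros(nx)
        lap[1:-1] = (T_now[2:] - 2.0 * T_now[1:-1] + T_now[:-2]) / dx**2
        # central difference for T_tt, centered difference for the damping term T_t
        T_next = (2.0 * tau * T_now - (tau - 0.5 * dt) * T_prev
                  + dt**2 * alpha * lap) / (tau + 0.5 * dt)
        T_next[0] = 1.0
        T_prev, T_now = T_now, T_next

    # After t = 0.5 s the thermal front has reached only x = c*t = 0.05 m;
    # points well beyond it remain essentially undisturbed, unlike the
    # parabolic equation, which heats the whole bar instantly (if weakly).
    ```

    The contrast with the parabolic limit (τ → 0) is exactly the effect the paper evaluates for short-term exothermic decomposition.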

  15. Empirical mode decomposition analysis of random processes in the solar atmosphere

    NASA Astrophysics Data System (ADS)

    Kolotkov, D. Y.; Anfinogentov, S. A.; Nakariakov, V. M.

    2016-08-01

    Context. Coloured noisy components with a power law spectral energy distribution are often shown to appear in solar signals of various types. Such a frequency-dependent noise may indicate the operation of various randomly distributed dynamical processes in the solar atmosphere. Aims: We develop a recipe for the correct usage of the empirical mode decomposition (EMD) technique in the presence of coloured noise, allowing for a clear distinction between quasi-periodic oscillatory phenomena in the solar atmosphere and superimposed random background processes. For illustration, we statistically investigate extreme ultraviolet (EUV) emission intensity variations observed with SDO/AIA in the coronal (171 Å), chromospheric (304 Å), and upper photospheric (1600 Å) layers of the solar atmosphere, from a quiet sun and a sunspot umbrae region. Methods: EMD has been used for analysis because of its adaptive nature and its applicability to processing non-stationary and amplitude-modulated time series. For the comparison of the results obtained with EMD, we use the Fourier transform technique as a reference. Results: We empirically revealed statistical properties of synthetic coloured noises in EMD, and suggested a scheme that allows for the detection of noisy components among the intrinsic modes obtained with EMD in real signals. Application of the method to the solar EUV signals showed that they indeed behave randomly and could be represented as a combination of different coloured noises characterised by a specific value of the power law indices in their spectral energy distributions. On the other hand, 3-min oscillations in the analysed sunspot were detected to have energies significantly above the corresponding noise level. Conclusions: The correct accounting for the background frequency-dependent random processes is essential when using EMD for analysis of oscillations in the solar atmosphere. For the quiet sun region the power law index was found to increase
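    Synthetic coloured noise of the kind used for such calibration can be generated by spectrally shaping white noise: scale the Fourier amplitudes by f^(−α/2) so that the power spectral density follows f^(−α). A minimal sketch (this is a generic recipe, not the paper's procedure):

    ```python
    import numpy as np

    def coloured_noise(n, alpha, rng):
        """Gaussian noise with power spectral density S(f) ~ f^(-alpha)."""
        white = rng.standard_normal(n)
        spectrum = np.fft.rfft(white)
        f = np.fft.rfftfreq(n)
        f[0] = f[1]                          # avoid dividing by zero at DC
        spectrum *= f ** (-alpha / 2.0)      # amplitude shaping: |X(f)| ~ f^(-alpha/2)
        return np.fft.irfft(spectrum, n)

    rng = np.random.default_rng(1)
    red = coloured_noise(4096, alpha=2.0, rng=rng)   # Brownian-like "red" noise

    # A log-log fit to the periodogram recovers the power-law index.
    f = np.fft.rfftfreq(4096)[1:]
    psd = np.abs(np.fft.rfft(red))[1:] ** 2
    slope = np.polyfit(np.log(f), np.log(psd), 1)[0]  # close to -alpha
    ```

    Feeding such synthetic realisations through EMD is how one calibrates the expected energy of the intrinsic modes of pure noise, the baseline against which real oscillations (like the sunspot 3-min signal) must stand out.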

  16. Joint Tensor Feature Analysis For Visual Object Recognition.

    PubMed

    Wong, Wai Keung; Lai, Zhihui; Xu, Yong; Wen, Jiajun; Ho, Chu Po

    2015-11-01

    Tensor-based object recognition has been widely studied in the past several years. This paper focuses on the issue of joint feature selection from the tensor data and proposes a novel method called joint tensor feature analysis (JTFA) for tensor feature extraction and recognition. In order to obtain a set of jointly sparse projections for tensor feature extraction, we define the modified within-class tensor scatter value and the modified between-class tensor scatter value for regression. The k-mode optimization technique and the L(2,1)-norm jointly sparse regression are combined together to compute the optimal solutions. The convergent analysis, computational complexity analysis and the essence of the proposed method/model are also presented. It is interesting to show that the proposed method is very similar to singular value decomposition on the scatter matrix but with sparsity constraint on the right singular value matrix or eigen-decomposition on the scatter matrix with sparse manner. Experimental results on some tensor datasets indicate that JTFA outperforms some well-known tensor feature extraction and selection algorithms. PMID:26470058
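    The L(2,1) norm that drives the joint sparsity above is simply the sum of row-wise Euclidean norms. A tiny illustration of why minimising it selects whole rows (i.e. features) rather than scattered entries (the matrices are fabricated examples):

    ```python
    import numpy as np

    def l21_norm(W):
        """L(2,1) norm: the sum of the Euclidean norms of the rows of W."""
        return float(np.sum(np.linalg.norm(W, axis=1)))

    # Two projection matrices with identical Frobenius norm (5.0):
    W_joint = np.array([[3.0, 4.0], [0.0, 0.0]])    # energy concentrated in one row
    W_spread = np.array([[3.0, 0.0], [0.0, 4.0]])   # energy spread across rows

    # The L(2,1) penalty is smaller for the row-sparse matrix, which is why
    # minimising it drives entire rows to zero -- joint feature selection.
    ```

    For fixed Frobenius norm, the penalty is minimised when the energy sits in as few rows as possible, so every retained row (feature) is shared jointly by all projection directions.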

  17. Modelling regulation of decomposition and related root/mycorrhizal processes in arctic tundra soils. Final report

    SciTech Connect

    Linkins, A.E.

    1992-09-01

    Since this was the final year of the project, principal activities were directed towards either collecting the data needed to complete existing incomplete data sets or writing manuscripts. Data sets for the Imnaviat Creek watershed basin are functionally complete, and data were finalized on cellulose mineralization and the impact of dust on soil organic carbon and phosphorus decomposition. Seven manuscripts were prepared and are briefly outlined.

  18. Modelling regulation of decomposition and related root/mycorrhizal processes in arctic tundra soils

    SciTech Connect

    Linkins, A.E.

    1992-01-01

    Since this was the final year of the project, principal activities were directed towards either collecting the data needed to complete existing incomplete data sets or writing manuscripts. Data sets for the Imnaviat Creek watershed basin are functionally complete, and data were finalized on cellulose mineralization and the impact of dust on soil organic carbon and phosphorus decomposition. Seven manuscripts were prepared and are briefly outlined.

  19. The classical model for moment tensors

    NASA Astrophysics Data System (ADS)

    Tape, Walter; Tape, Carl

    2013-12-01

    A seismic moment tensor is a description of an earthquake source, but the description is indirect. The moment tensor describes seismic radiation rather than the actual physical process that initiates the radiation. A moment tensor `model' then ties the physical process to the moment tensor. The model is not unique, and the physical process is therefore not unique. In the classical moment tensor model, an earthquake arises from slip along a planar fault, but with the slip not necessarily in the plane of the fault. The model specifies the resulting moment tensor in terms of the slip vector, the fault normal vector and the Lamé elastic parameters, assuming isotropy. We review the classical model in the context of the fundamental lune. The lune is closely related to the space of moment tensors, and it provides a setting that is conceptually natural as well as pictorial. In addition to the classical model, we consider a crack plus double-couple model (CDC model) in which a moment tensor is regarded as the sum of a crack tensor and a double couple.

  20. Image processing using proper orthogonal and dynamic mode decompositions for the study of cavitation developing on a NACA0015 foil

    NASA Astrophysics Data System (ADS)

    Prothin, Sebastien; Billard, Jean-Yves; Djeridi, Henda

    2016-10-01

    The purpose of the present study is to gain a better understanding of the hydrodynamic instabilities of sheet cavities which develop along solid walls. The main objective is to highlight the spatial and temporal behavior of such a cavity when it develops on a NACA0015 foil at high Reynolds number. Experimental results show quasi-steady, periodic, bifurcation and aperiodic cavity behaviors corresponding to σ/(2α) values of 5.75, 5, 4.3 and 3.58. Robust mathematical methods of signal post-processing (proper orthogonal decomposition and dynamic mode decomposition) were applied in order to emphasize the spatio-temporal nature of the flow. These techniques reveal the 3D effects due to re-entrant jet instabilities or to a propagating shock-wave mechanism at the origin of the shedding process of the cavitation cloud.
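
    The proper orthogonal decomposition applied above can be sketched generically as the SVD of a mean-subtracted snapshot matrix (our own synthetic data, not the authors' cavitation images): each column is one flattened snapshot of the field.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 2.0 * np.pi, 200)          # 200 snapshots in time
    x = np.linspace(0.0, 1.0, 64)                   # 64 spatial points
    snapshots = (np.outer(np.sin(2 * np.pi * x), np.cos(3 * t))      # structure 1
                 + 0.5 * np.outer(np.cos(np.pi * x), np.sin(7 * t))  # structure 2
                 + 0.01 * rng.standard_normal((64, 200)))            # noise

    # POD: SVD of the fluctuation (mean-subtracted) snapshot matrix.
    fluct = snapshots - snapshots.mean(axis=1, keepdims=True)
    U, s, Vt = np.linalg.svd(fluct, full_matrices=False)
    energy = s**2 / np.sum(s**2)    # fraction of fluctuation energy per POD mode
    ```

    The columns of U are the spatial POD modes and the rows of Vt their temporal coefficients; for this two-structure field the first two modes carry nearly all the energy.
    
    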

  1. Multi-Zone Liquid Thrust Chamber Performance Code with Domain Decomposition for Parallel Processing

    NASA Technical Reports Server (NTRS)

    Navaz, Homayun K.

    2002-01-01

    -equation turbulence model, and two-phase flow. To overcome these limitations, the LTCP code is rewritten to include the multi-zone capability with domain decomposition that makes it suitable for parallel processing, i.e., enabling the code to run every zone or sub-domain on a separate processor. This can reduce the run time by a factor of 6 to 8, depending on the problem.

  2. Numerical modelling of thermal decomposition processes and associated damage in carbon fibre composites

    NASA Astrophysics Data System (ADS)

    Chippendale, R. D.; Golosnoy, I. O.; Lewin, P. L.

    2014-09-01

    Thermo-chemical degradation of carbon fibre composite (CFC) materials under intensive heat fluxes is modelled. The model couples together heat diffusion, polymer pyrolysis with associated gas production and convection through partially decomposed CFCs, and changes in the transport properties of the material due to the damage. The model is verified by laser ablation experiments with controlled heat input. The numerical predictions indicate that the thermal gas transport has a minimal effect on the decomposition extent. On the other hand, the model shows that the internal gas pressure is large enough to cause fracture and delamination, and that the damage extent may go far beyond the decomposition region, as confirmed by experimental verification of the model.

  3. Composition of bacterial and archaeal communities during landfill refuse decomposition processes.

    PubMed

    Song, Liyan; Wang, Yangqing; Zhao, Heping; Long, David T

    2015-12-01

    Little is known about the archaeal and bacterial diversities in a landfill during different phases of decomposition. In this study, the archaeal and bacterial diversities of the Laogang landfill (Shanghai, China) at two decomposition phases, the initial methanogenic phase (IMP) and the stable methanogenic phase (SMP), were examined culture-independently using PCR-based 454 pyrosequencing. A total of 47,753 sequences of 16S rRNA genes were retrieved from 69,954 reads and analyzed to evaluate the diversities of the archaeal and bacterial communities. The most predominant types of archaea were hydrogenotrophic Methanomicrobiales, and of bacteria were Proteobacteria, Firmicutes, and Bacteroidetes. As might be expected, their abundances varied between decomposition phases. Methanomicrobiales accounted for 97.6% of the total archaeal population in IMP and about 57.6% in SMP. The abundance of the archaeal group Halobacteriales was 0.1% in IMP and 20.3% in SMP. The abundance of Firmicutes was 21.3% in IMP and 4.3% in SMP. Bacteroidetes represented 11.5% of total bacteria in IMP and was dominant (49.4%) in SMP. The IMP and SMP each had unique cellulolytic bacterial compositions: IMP contained members of Bacillus, Fibrobacter, and Eubacterium, while SMP harbored groups of Microbacterium. Both phases contained Clostridium, with abundance 4-5-fold higher in SMP.

  4. Genotypic diversity of an invasive plant species promotes litter decomposition and associated processes.

    PubMed

    Wang, Xiao-Yan; Miao, Yuan; Yu, Shuo; Chen, Xiao-Yong; Schmid, Bernhard

    2014-03-01

    Following studies that showed negative effects of species loss on ecosystem functioning, newer studies have started to investigate if similar consequences could result from reductions of genetic diversity within species. We tested the influence of genotypic richness and dissimilarity (plots containing one, three, six or 12 genotypes) in stands of the invasive plant Solidago canadensis in China on the decomposition of its leaf litter and associated soil animals over five monthly time intervals. We found that the logarithm of genotypic richness was positively linearly related to mass loss of C, N and P from the litter and to richness and abundance of soil animals on the litter samples. The mixing proportion of litter from two sites, but not genotypic dissimilarity of mixtures, had additional effects on measured variables. The litter diversity effects on soil animals were particularly strong under the most stressful conditions of hot weather in July: at this time richness and abundance of soil animals were higher in 12-genotype litter mixtures than even in the highest corresponding one-genotype litter. The litter diversity effects on decomposition were in part mediated by soil animals: the abundance of Acarina, when used as covariate in the analysis, fully explained the litter diversity effects on mass loss of N and P. Overall, our study shows that high genotypic richness of S. canadensis leaf litter positively affects richness and abundance of soil animals, which in turn accelerate litter decomposition and P release from litter. PMID:24276771

  5. In Vivo Generalized Diffusion Tensor Imaging (GDTI) Using Higher-Order Tensors (HOT)

    PubMed Central

    Liu, Chunlei; Mang, Sarah C.; Moseley, Michael E.

    2009-01-01

    Generalized diffusion tensor imaging (GDTI) using higher-order tensor statistics (HOT) generalizes diffusion tensor imaging (DTI) by including the effect of non-Gaussian diffusion on the magnetic resonance imaging (MRI) signal. In GDTI-HOT, the effect of non-Gaussian diffusion is characterized by higher-order tensor statistics (i.e. the cumulant or moment tensors), such as the covariance matrix (the second-order cumulant tensor), the skewness tensor (the third-order cumulant tensor) and the kurtosis tensor (the fourth-order cumulant tensor). Previously, Monte Carlo simulations have been applied to verify the validity of this technique in reconstructing complicated fiber structures. However, no in vivo implementation of GDTI-HOT has been reported. The primary goal of this study is to establish GDTI-HOT as a feasible in vivo technique for imaging non-Gaussian diffusion. We show that the probability distribution function (PDF) of the molecular diffusion process can be measured in vivo with GDTI-HOT and visualized with 3D glyphs. By comparing GDTI-HOT to fiber structures revealed by the highest-resolution DWI possible in vivo, we show that GDTI-HOT can accurately predict multiple fiber orientations within one white-matter voxel. Furthermore, through bootstrap analysis we demonstrate that in vivo measurement of HOT elements is reproducible, with a small statistical variation similar to that of DTI. PMID:19953513
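
    The hierarchy of cumulant tensors listed above comes from the standard cumulant expansion of the diffusion signal's characteristic function; schematically (our transcription, with q the diffusion wave vector and Q^(n) the nth-order cumulant tensor):

    ```latex
    \ln E(\mathbf{q}) = \sum_{n=1}^{\infty} \frac{i^{n}}{n!}\, Q^{(n)}_{j_1 \cdots j_n}\, q_{j_1} \cdots q_{j_n}
    ```

    The n = 2 term recovers the conventional diffusion tensor of DTI, while n = 3 and n = 4 give the skewness and kurtosis tensors.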

  6. Block term decomposition for modelling epileptic seizures

    NASA Astrophysics Data System (ADS)

    Hunyadi, Borbála; Camps, Daan; Sorber, Laurent; Paesschen, Wim Van; Vos, Maarten De; Huffel, Sabine Van; Lathauwer, Lieven De

    2014-12-01

    Recordings of neural activity, such as EEG, are an inherent mixture of different ongoing brain processes as well as artefacts, and are typically characterised by low signal-to-noise ratio. Moreover, EEG datasets are often inherently multidimensional, comprising information in time, along different channels, subjects, trials, etc. Additional information may be conveyed by expanding the signal into even more dimensions, e.g. incorporating spectral features by applying a wavelet transform. The underlying sources might show differences in each of these modes. Therefore, tensor-based blind source separation techniques, which can extract the sources of interest from such multiway arrays while simultaneously exploiting the signal characteristics in all dimensions, have gained increasing interest. Canonical polyadic decomposition (CPD) has been successfully used to extract epileptic seizure activity from wavelet-transformed EEG data (Bioinformatics 23(13):i10-i18, 2007; NeuroImage 37:844-854, 2007), where each source is described by a rank-1 tensor, i.e. by the combination of one particular temporal, spectral and spatial signature. However, in certain scenarios, where the seizure pattern is nonstationary, such a trilinear signal model is insufficient. Here, we present the application of a recently introduced technique, called block term decomposition (BTD), to separate EEG tensors into rank-(Lr, Lr, 1) terms, allowing more variability in the data to be modelled than is possible with CPD. In a simulation study, we investigate the robustness of BTD against noise and different choices of model parameters. Furthermore, we show various real EEG recordings where BTD outperforms CPD in capturing complex seizure characteristics.
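
    The trilinear CPD signal model described above, with each source an outer product of one temporal, one spectral and one spatial signature, can be sketched with synthetic data (a minimal illustration with shapes of our own choosing, not the authors' pipeline):

    ```python
    import numpy as np

    # Hypothetical shapes: 100 time samples, 20 frequency bins, 8 channels.
    rng = np.random.default_rng(0)
    time_sigs = rng.standard_normal((2, 100))   # temporal signatures
    spec_sigs = rng.standard_normal((2, 20))    # spectral signatures
    spat_sigs = rng.standard_normal((2, 8))     # spatial signatures

    # CPD model: the data tensor is a sum of rank-1 (outer-product) terms.
    tensor = sum(np.einsum('t,f,c->tfc', time_sigs[r], spec_sigs[r], spat_sigs[r])
                 for r in range(2))

    # A 2-term CPD model has rank at most 2 in every matricization.
    rank = np.linalg.matrix_rank(tensor.reshape(100, -1))
    ```

    BTD relaxes exactly this constraint: each term may be rank-(Lr, Lr, 1) rather than rank-1, so a nonstationary seizure source need not be captured by a single temporal-spectral signature pair.
    
    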

  7. Geodesic-loxodromes for diffusion tensor interpolation and difference measurement.

    PubMed

    Kindlmann, Gordon; Estépar, Raúl San José; Niethammer, Marc; Haker, Steven; Westin, Carl-Fredrik

    2007-01-01

    In algorithms for processing diffusion tensor images, two common ingredients are interpolating tensors and measuring the distance between them. We propose a new class of interpolation paths for tensors, termed geodesic-loxodromes, which explicitly preserve clinically important tensor attributes, such as mean diffusivity or fractional anisotropy, while using basic differential geometry to interpolate tensor orientation. This contrasts with previous Riemannian and Log-Euclidean methods that preserve the determinant. Path integrals of tangents of geodesic-loxodromes generate novel measures of overall difference between two tensors, and of difference in shape and in orientation. PMID:18051037
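
    The contrast drawn above can be made concrete with a minimal numpy sketch of the Log-Euclidean interpolation that geodesic-loxodromes improve upon (our own illustration, not the paper's code): the determinant interpolates geometrically along the path, but the mean diffusivity drifts from the arithmetic midpoint.

    ```python
    import numpy as np

    def spd_log(A):
        """Matrix logarithm of a symmetric positive-definite tensor via eigh."""
        w, V = np.linalg.eigh(A)
        return (V * np.log(w)) @ V.T

    def spd_exp(S):
        """Matrix exponential of a symmetric tensor via eigh."""
        w, V = np.linalg.eigh(S)
        return (V * np.exp(w)) @ V.T

    def log_euclidean(A, B, t):
        """Log-Euclidean interpolation between SPD diffusion tensors A and B."""
        return spd_exp((1.0 - t) * spd_log(A) + t * spd_log(B))

    A = np.diag([3.0, 1.0, 1.0])    # anisotropic diffusion tensor
    B = np.eye(3)                   # isotropic diffusion tensor
    M = log_euclidean(A, B, 0.5)    # midpoint of the Log-Euclidean path
    ```

    Here det(M) = √3, the geometric mean of the endpoint determinants, while the mean diffusivity trace(M)/3 ≈ 1.24 undershoots the arithmetic midpoint 4/3; this kind of attribute drift is what geodesic-loxodromes are designed to avoid.
    
    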

  8. A uniform parameterization of moment tensors

    NASA Astrophysics Data System (ADS)

    Tape, C.; Tape, W.

    2015-12-01

    A moment tensor is a 3 × 3 symmetric matrix that expresses an earthquake source. We construct a parameterization of the five-dimensional space of all moment tensors of unit norm. The coordinates associated with the parameterization are closely related to moment tensor orientations and source types. The parameterization is uniform, in the sense that equal volumes in the coordinate domain of the parameterization correspond to equal volumes of moment tensors. Uniformly distributed points in the coordinate domain therefore give uniformly distributed moment tensors. A Cartesian grid in the coordinate domain can be used to search efficiently over moment tensors. We find that uniformly distributed moment tensors have uniformly distributed orientations (eigenframes), but that their source types (eigenvalue triples) are distributed so as to favor double couples. An appropriate choice of a priori moment tensor probability is a prerequisite for parameter estimation. As a seemingly sensible choice, we consider the homogeneous probability, in which equal volumes of moment tensors are equally likely. We believe that it will lead to improved characterization of source processes.
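
    The notion of uniformly distributed unit-norm moment tensors can be illustrated without the paper's coordinates (a sketch of our own, using the standard Gaussian-on-the-sphere construction in the 6-dimensional space of symmetric matrices):

    ```python
    import numpy as np

    def random_unit_moment_tensor(rng):
        """Draw a 3 x 3 symmetric matrix uniformly from the unit sphere
        (Frobenius norm) in the 6-dimensional space of moment tensors."""
        v = rng.standard_normal(6)       # isotropic Gaussian in R^6 ...
        v /= np.linalg.norm(v)           # ... normalized to the unit sphere
        d = v[:3]                        # diagonal entries
        o = v[3:] / np.sqrt(2.0)         # off-diagonals count twice in the norm
        M = np.diag(d)
        M[0, 1] = M[1, 0] = o[0]
        M[0, 2] = M[2, 0] = o[1]
        M[1, 2] = M[2, 1] = o[2]
        return M

    rng = np.random.default_rng(1)
    M = random_unit_moment_tensor(rng)
    ```

    Samples drawn this way have uniformly distributed eigenframes, matching the paper's observation, while their eigenvalue triples concentrate near double couples.
    
    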

  9. Compressive sensing of sparse tensors.

    PubMed

    Friedland, Shmuel; Li, Qun; Schonfeld, Dan

    2014-10-01

    Compressive sensing (CS) has triggered enormous research activity since its first appearance. CS exploits the signal's sparsity or compressibility in a particular domain and integrates data compression and acquisition, thus allowing exact reconstruction through relatively few nonadaptive linear measurements. While conventional CS theory relies on data representation in the form of vectors, many data types in various applications, such as color imaging, video sequences, and multisensor networks, are intrinsically represented by higher-order tensors. Application of CS to higher-order data representation is typically performed by conversion of the data to very long vectors that must be measured using very large sampling matrices, thus imposing a huge computational and memory burden. In this paper, we propose generalized tensor compressive sensing (GTCS), a unified framework for CS of higher-order tensors, which preserves the intrinsic structure of tensor data with reduced computational complexity at reconstruction. GTCS offers an efficient means for representation of multidimensional data by providing simultaneous acquisition and compression from all tensor modes. In addition, we propound two reconstruction procedures, a serial method and a parallelizable method. We then compare the performance of the proposed method with Kronecker compressive sensing (KCS) and multiway compressive sensing (MWCS). We demonstrate experimentally that GTCS outperforms KCS and MWCS in terms of both reconstruction accuracy (within a range of compression ratios) and processing speed. The major disadvantage of our methods (and of MWCS as well) is that the compression ratios may be worse than those offered by KCS.
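
    The structural contrast drawn above, measuring mode-by-mode versus vectorizing and applying one huge Kronecker-structured matrix, can be sketched as follows (sizes are ours, purely illustrative):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    X = np.zeros((16, 16, 16))
    X[1, 2, 3], X[7, 7, 7] = 5.0, -2.0          # a sparse 3rd-order tensor

    # One small measurement matrix per mode (8 x 16 each) ...
    A1, A2, A3 = (rng.standard_normal((8, 16)) for _ in range(3))

    # ... applied along each mode, keeping the tensor structure intact:
    Y = np.einsum('ia,jb,kc,abc->ijk', A1, A2, A3, X)

    # The vectorized route needs the full 512 x 4096 Kronecker product instead,
    # applied to vec(X) -- the same measurements at far greater cost.
    K = np.kron(np.kron(A1, A2), A3)
    ```

    The two routes yield identical measurements (K @ X.ravel() equals Y.ravel() under C-order vectorization), which is why mode-wise acquisition can trade a single huge sampling matrix for three small ones.
    
    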

  10. Decomposition of intact chicken feathers by a thermophile in combination with an acidulocomposting garbage-treatment process.

    PubMed

    Shigeri, Yasushi; Matsui, Tatsunobu; Watanabe, Kunihiko

    2009-11-01

    In order to develop a practical method for the decomposition of intact chicken feathers, a moderate thermophile strain, Meiothermus ruber H328, having strong keratinolytic activity, was used in a bio-type garbage-treatment machine working with an acidulocomposting process. The addition of strain H328 cells (15 g) combined with acidulocomposting in the garbage machine resulted in 70% degradation of intact chicken feathers (30 g) within 14 d. This degradation efficiency is comparable to a previous result employing the strain as a single bacterium in flask culture, and it indicates that strain H328 can promote intact feather degradation activity in a garbage machine currently on the market. PMID:19897897

  12. Toluene decomposition performance and NOx by-product formation during a DBD-catalyst process.

    PubMed

    Guo, Yufang; Liao, Xiaobin; Fu, Mingli; Huang, Haibao; Ye, Daiqi

    2015-02-01

    Characteristics of toluene decomposition and formation of nitrogen oxide (NOx) by-products were investigated in a dielectric barrier discharge (DBD) reactor with and without catalyst at room temperature and atmospheric pressure. Four metal oxides, i.e., manganese oxide (MnOx), iron oxide (FeOx), cobalt oxide (CoOx) and copper oxide (CuO), supported on Al2O3/nickel foam, were used as catalysts. It was found that introducing catalysts could improve toluene removal efficiency, promote decomposition of by-product ozone and enhance CO2 selectivity. In addition, NOx formation was suppressed by decreasing the specific energy density (SED), by increasing the humidity, gas flow rate or toluene concentration, or by introducing a catalyst. Among the four catalysts, CuO showed the best performance in NOx suppression. The MnOx catalyst exhibited the lowest O3 concentration and the highest CO2 selectivity, but also the highest NOx concentration. A possible pathway for NOx production in DBD is discussed. The contributions of oxygen active species and hydroxyl radicals are dominant in NOx suppression. PMID:25662254

  13. Solar radiation influence on the decomposition process of diclofenac in surface waters.

    PubMed

    Bartels, Peter; von Tümpling, Wolf

    2007-03-01

    Diclofenac can be detected in the surface water of many rivers with human impacts worldwide. The observed decrease of the diclofenac concentration in waters and the formation of its photochemical transformation products under natural irradiation over one to 16 days are explained in this article. Semi-natural laboratory tests and a field experiment showed that sunlight stimulates the decomposition of diclofenac in surface waters. During one day of intensive solar radiation in a central European summer, up to 83% of the diclofenac in the surface layer of the water (0 to 5 cm) decomposed, as determined in laboratory exposure experiments. After two weeks in a field experiment, diclofenac was no longer detectable in the water surface layer (limit of quantification: 5 ng/L). At a water depth of 50 cm, 96% of the initial concentration was degraded within two weeks, while at 100 cm depth two-thirds of the initial diclofenac concentration remained. With the decomposition, stable and meta-stable photolysis products were formed and observed by UV detection. In addition, the chemical structures of these products were determined. Three transformation products not previously described in the literature were identified and quantified by GC-MS.

  15. Tensor hypercontraction. II. Least-squares renormalization

    NASA Astrophysics Data System (ADS)

    Parrish, Robert M.; Hohenstein, Edward G.; Martínez, Todd J.; Sherrill, C. David

    2012-12-01

    The least-squares tensor hypercontraction (LS-THC) representation for the electron repulsion integral (ERI) tensor is presented. Recently, we developed the generic tensor hypercontraction (THC) ansatz, which represents the fourth-order ERI tensor as a product of five second-order tensors [E. G. Hohenstein, R. M. Parrish, and T. J. Martínez, J. Chem. Phys. 137, 044103 (2012); doi:10.1063/1.4732310]. Our initial algorithm for the generation of the THC factors involved a two-sided invocation of overlap-metric density fitting, followed by a PARAFAC decomposition, and is denoted PARAFAC tensor hypercontraction (PF-THC). LS-THC supersedes PF-THC by producing the THC factors through a least-squares renormalization of a spatial quadrature over the otherwise singular 1/r12 operator. Remarkably, an analytical and simple formula for the LS-THC factors exists. Using this formula, the factors may be generated with O(N^5) effort if exact integrals are decomposed, or O(N^4) effort if the decomposition is applied to density-fitted integrals, using any choice of density fitting metric. The accuracy of LS-THC is explored for a range of systems using both conventional and density-fitted integrals in the context of MP2. The grid fitting error is found to be negligible even for extremely sparse spatial quadrature grids. For the case of density-fitted integrals, the additional error incurred by the grid fitting step is generally markedly smaller than the underlying Coulomb-metric density fitting error. The present results, coupled with our previously published factorizations of MP2 and MP3, provide an efficient, robust O(N^4) approach to both methods. Moreover, LS-THC is generally applicable to many other methods in quantum chemistry. PMID:23248986
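
    The THC ansatz referenced above factors the fourth-order ERI tensor into five second-order factors; schematically (our transcription of the cited form, with P, Q indexing grid points):

    ```latex
    (pq|rs) \approx \sum_{P,Q} x_p^{P}\, x_q^{P}\, Z^{PQ}\, x_r^{Q}\, x_s^{Q}
    ```

    LS-THC keeps this form but determines Z by a least-squares fit of the quadrature to the exact (or density-fitted) integrals.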

  17. 3D reconstruction of tensors and vectors

    SciTech Connect

    Defrise, Michel; Gullberg, Grant T.

    2005-02-17

    Here we have developed formulations for the reconstruction of 3D tensor fields from planar (Radon) and line-integral (X-ray) projections of 3D vector and tensor fields. Much of the motivation for this work is the potential application of MRI to perform diffusion tensor tomography. The goal is to develop a theory for the reconstruction of both Radon planar and X-ray or line-integral projections because of the flexibility of MRI to obtain both of these types of projections in 3D. The development presented here for the linear tensor tomography problem provides insight into the structure of the nonlinear MRI diffusion tensor inverse problem. A particular application of tensor imaging in MRI is cardiac diffusion tensor tomography for determining in vivo cardiac fiber structure. One difficulty in the cardiac application is the motion of the heart. This presents a need to develop a future theory for tensor tomography in a motion field, which means developing a better understanding of the MRI signal for diffusion processes in a deforming medium. The techniques developed may allow the application of MRI tensor tomography to the study of the structure of fiber tracts in the brain, atherosclerotic plaque, and spine, in addition to fiber structure in the heart. However, the relations presented are also applicable to other fields in medical imaging, such as diffraction tomography using ultrasound. The mathematics presented can also be extended to the exponential Radon transform of tensor fields and to other geometric acquisitions such as cone-beam tomography of tensor fields.

  18. Seismically Inferred Rupture Process of the 2011 Tohoku-Oki Earthquake by Using Data-Validated 3D and 2.5D Green's Tensor Waveforms

    NASA Astrophysics Data System (ADS)

    Okamoto, T.; Takenaka, H.; Hara, T.; Nakamura, T.; Aoki, T.

    2014-12-01

    We analyze the "seismic" rupture process of the March 11, 2011 Tohoku-Oki earthquake (GCMT Mw 9.1) by using a non-linear multi-time-window waveform inversion method. We incorporate the effect of the near-source laterally heterogeneous structure on the synthetic Green's tensor waveforms; otherwise the analysis may result in erroneous solutions [1]. To increase the resolution we use teleseismic and strong-motion seismograms jointly, because the one-sided distribution of strong-motion stations may cause reduced resolution near the trench axis [2]. We use a 2.5D FDM [3] for teleseismic P-waves and a full 3D FDM that incorporates topography, the oceanic water layer, 3D heterogeneity and attenuation for strong motions [4]. We apply multi-GPU acceleration by using the TSUBAME supercomputer at the Tokyo Institute of Technology [5]. We "validated" the Green's tensor waveforms with a point-source moment tensor inversion analysis for a small (Mw 5.8) shallow event: we confirm that the observed waveforms are reproduced well by the synthetics. The inferred slip distribution using the 2.5D and 3D Green's functions has large slips (max. 37 m) near the hypocenter and small slips near the trench (figure). An isolated slip region is also identified close to Fukushima prefecture. These features are similar to those obtained in our preliminary study [4]. The landward large slips and trenchward small slips have also been reported by [2]. It is remarkable that we confirmed these features by using data-validated Green's functions. On the other hand, very large slips are inferred close to the trench when we apply "1D" Green's functions that do not incorporate the lateral heterogeneity. Our result suggests that the trenchward large deformation that caused the large tsunamis did not radiate strong seismic waves. Very slow slips (e.g., the tsunami earthquake), delayed slips and anelastic deformation are among the candidate physical processes for this deformation. [1] Okamoto and Takenaka, EPS, 61, e17-e20, 2009

  19. Multiple seismogenic processes for high-frequency earthquakes at Katmai National Park, Alaska: Evidence from stress tensor inversions of fault-plane solutions

    USGS Publications Warehouse

    Moran, S.C.

    2003-01-01

    The volcanological significance of seismicity within Katmai National Park has been debated since the first seismograph was installed in 1963, in part because Katmai seismicity consists almost entirely of high-frequency earthquakes that can be caused by a wide range of processes. I investigate this issue by determining 140 well-constrained first-motion fault-plane solutions for shallow (depth < 9 km) earthquakes occurring between 1995 and 2001 and inverting these solutions for the stress tensor in different regions within the park. Earthquakes removed by several kilometers from the volcanic axis occur in a stress field characterized by horizontally oriented σ1 and σ3 axes, with σ1 rotated slightly (12°) relative to the NUVEL-1A subduction vector, indicating that these earthquakes are occurring in response to regional tectonic forces. On the other hand, stress tensors for earthquake clusters beneath several Katmai cluster volcanoes have vertically oriented σ1 axes, indicating that these events are occurring in response to local, not regional, processes. At Martin-Mageik, vertically oriented σ1 is most consistent with failure under edifice-loading conditions in conjunction with localized pore-pressure increases associated with hydrothermal circulation cells. At Trident-Novarupta, it is consistent with a number of possible models, including occurrence along fractures formed during the 1912 eruption that now serve as horizontal conduits for fluids and/or volatiles migrating from nearby degassing and cooling magma bodies. At Mount Katmai, it is most consistent with continued seismicity along ring-fracture systems created in the 1912 eruption, perhaps enhanced by circulating hydrothermal fluids and/or seepage from the caldera-filling lake.

  20. A Domain Decomposition Approach for Large-Scale Simulations of Flow Processes in Hydrate-Bearing Geologic Media

    SciTech Connect

    Zhang, Keni; Moridis, G.J.; Wu, Y.-S.; Pruess, K.

    2008-07-01

    Simulation of the system behavior of hydrate-bearing geologic media involves solving fully coupled mass- and heat-balance equations. In this study, we develop a domain decomposition approach for large-scale gas hydrate simulations with coarse-granularity parallel computation. This approach partitions a simulation domain into small subdomains. The full model domain, consisting of discrete subdomains, is still simulated simultaneously by using multiple processes/processors. Each processor is dedicated to the following tasks for its partitioned subdomain: updating thermophysical properties, assembling mass- and energy-balance equations, solving linear equation systems, and performing various other local computations. The linearized equation systems are solved in parallel with a parallel linear solver, using an efficient interprocess communication scheme. This new domain decomposition approach has been implemented into the TOUGH+HYDRATE code and has demonstrated excellent speedup and good scalability. In this paper, we demonstrate applications of the new approach in simulating field-scale models for gas production from gas-hydrate deposits.
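
    The subdomain-plus-communication scheme described above can be sketched in miniature (our own toy example, not the TOUGH+HYDRATE code): two subdomains of a 1D explicit heat-conduction problem exchange one ghost cell per step and reproduce the single-domain result exactly.

    ```python
    import numpy as np

    def step(u, alpha=0.25):
        """One explicit finite-difference step of 1D heat conduction;
        the endpoints are held fixed (boundary or ghost cells)."""
        new = u.copy()
        new[1:-1] = u[1:-1] + alpha * (u[2:] - 2.0 * u[1:-1] + u[:-2])
        return new

    n, steps = 64, 50
    u0 = np.zeros(n)
    u0[n // 2] = 1.0                  # initial heat pulse mid-domain

    ref = u0.copy()                   # reference: one processor, whole domain

    # Two "subdomains", each padded with one ghost cell at the interface
    # (a serial stand-in for the interprocess communication).
    left = u0[: n // 2 + 1].copy()    # global cells 0..32; cell 32 is a ghost
    right = u0[n // 2 - 1 :].copy()   # global cells 31..63; cell 31 is a ghost

    for _ in range(steps):
        ref = step(ref)
        # ghost-cell exchange: each side receives the neighbour's edge value
        left[-1], right[0] = right[1], left[-2]
        left, right = step(left), step(right)

    # Stitch the subdomain interiors back together.
    stitched = np.concatenate([left[:-1], right[1:]])
    ```

    Because each subdomain sees its neighbour's current edge value before every step, the stitched result matches the single-domain solution; in the real code, the exchange is an MPI-style message rather than an array assignment.
    
    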

  1. When Policy Structures Technology: Balancing upfront decomposition and in-process coordination in Europe's decentralized space technology ecosystem

    NASA Astrophysics Data System (ADS)

    Vrolijk, Ademir; Szajnfarber, Zoe

    2015-01-01

    This paper examines the decentralization of European space technology research and development through the joint lenses of policy, systems architecture, and innovation contexts. It uses a detailed longitudinal case history of the development of a novel astrophysics instrument to explore the link between policy-imposed institutional decomposition and the architecture of the technical system. The analysis focuses on five instances of collaborative design decision-making and finds that matching between the technical and institutional architectures is a predictor of project success, consistent with the mirroring hypothesis in extant literature. Examined over time, the instances reveal stability in the loosely coupled nature of institutional arrangements and a trend towards more integral, or tightly coupled, technical systems. The stability of the institutional arrangements is explained as an artifact of the European Hultqvist policy and the trend towards integral technical systems is related to the increasing complexity of modern space systems. If these trends persist, the scale of the mismatch will continue to grow. As a first step towards mitigating this challenge, the paper develops a framework for balancing upfront decomposition and in-process coordination in collaborative development projects. The astrophysics instrument case history is used to illustrate how collaborations should be defined for a given inherent system complexity.

  2. Unraveling the Decomposition Process of Lead(II) Acetate: Anhydrous Polymorphs, Hydrates, and Byproducts and Room Temperature Phosphorescence.

    PubMed

    Martínez-Casado, Francisco J; Ramos-Riesco, Miguel; Rodríguez-Cheda, José A; Cucinotta, Fabio; Matesanz, Emilio; Miletto, Ivana; Gianotti, Enrica; Marchese, Leonardo; Matěj, Zdeněk

    2016-09-01

    Lead(II) acetate [Pb(Ac)2, where Ac = the acetate group, CH3-COO(-)] is a very common salt with many and varied uses throughout history. However, only lead(II) acetate trihydrate [Pb(Ac)2·3H2O] has been characterized to date. In this paper, two enantiotropic polymorphs of the anhydrous salt, a novel hydrate [lead(II) acetate hemihydrate: Pb(Ac)2·1/2H2O], and two decomposition products [corresponding to two different basic lead(II) acetates: Pb4O(Ac)6 and Pb2O(Ac)2] are reported, with their structures solved for the first time. The compounds present a variety of molecular arrangements, being 2D or 1D coordination polymers. A thorough thermal analysis, by differential scanning calorimetry (DSC) and thermogravimetric analysis (TGA), was also carried out to characterize the thermal behavior of the salt and its decomposition process in inert and oxygenated atmospheres, identifying the phases and byproducts that appear. The complex thermal behavior of lead(II) acetate is thereby resolved, establishing the existence of another hydrate, two anhydrous enantiotropic polymorphs, and several byproducts. Moreover, some of them are phosphorescent at room temperature. The compounds were studied by TGA, DSC, X-ray diffraction, and UV-vis spectroscopy. PMID:27548299

  3. Photocatalytic decomposition of bromate ion by the UV/P25-Graphene processes.

    PubMed

    Huang, Xin; Wang, Longyong; Zhou, Jizhi; Gao, Naiyun

    2014-06-15

    The photocatalysis of bromate (BrO3(-)) attracts much attention because BrO3(-) is a carcinogenic and genotoxic contaminant in drinking water. In this work, a TiO2-graphene composite (P25-GR) photocatalyst for BrO3(-) reduction was prepared by a facile one-step hydrothermal method and exhibited a higher capacity for BrO3(-) removal than either P25 or GR alone. The maximum removal of BrO3(-) was observed under the optimal conditions of 1% GR doping at pH 6.8. Compared with the case without UV, the greater decrease of BrO3(-) on the composite indicates that BrO3(-) decomposition was predominantly due to photo-reduction under UV rather than to adsorption. This hypothesis was supported by the decrease of [BrO3(-)] with a synchronous increase of [Br(-)] at a nearly constant total bromine concentration ([BrO3(-)] + [Br(-)]). Furthermore, improved BrO3(-) reduction on P25-GR was also observed in the treatment of tap water. However, the efficiency of BrO3(-) removal was lower than that in deionized water, probably due to the consumption of photo-generated electrons and the adsorption of natural organic matter (NOM) on graphene.

  4. The Search for a Volatile Human Specific Marker in the Decomposition Process

    PubMed Central

    Rosier, E.; Loix, S.; Develter, W.; Van de Voorde, W.; Tytgat, J.; Cuypers, E.

    2015-01-01

    In this study, a validated method using a thermal desorber combined with a gas chromatograph coupled to mass spectrometry was used to identify the volatile organic compounds released during the decomposition of 6 human and 26 animal remains in a laboratory environment over a period of 6 months. In total, 452 compounds were identified. Among them, a human specific marker was sought using principal component analysis. We found a combination of 8 compounds (ethyl propionate, propyl propionate, propyl butyrate, ethyl pentanoate, pyridine, diethyl disulfide, methyl(methylthio)ethyl disulfide and 3-methylthio-1-propanol) that distinguished human and pig remains from the other animal remains. Furthermore, it was possible to separate the pig remains from the human remains based on 5 esters (3-methylbutyl pentanoate, 3-methylbutyl 3-methylbutyrate, 3-methylbutyl 2-methylbutyrate, butyl pentanoate and propyl hexanoate). Further research in the field with full bodies is needed to corroborate these results and to search for one or more human specific markers. Such markers would allow more efficient training of cadaver dogs, or the development of portable detection devices. PMID:26375029
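
    The principal component analysis step described above can be sketched on toy data: rows are remains, columns are hypothetical VOC abundances, and the leading principal component is inspected for the compounds that separate one group from another. All data values here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two groups of 10 "remains" with 6 "compounds"; group B is shifted in
# the last two compounds (a stand-in for group-specific markers).
group_a = rng.normal(loc=0.0, scale=0.3, size=(10, 6))
group_b = rng.normal(loc=0.0, scale=0.3, size=(10, 6))
group_b[:, 4:] += 2.0

X = np.vstack([group_a, group_b])
Xc = X - X.mean(axis=0)          # center before PCA

# PCA via SVD of the centered data matrix.
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt.T               # sample coordinates in PC space
loadings = Vt                    # rows are principal directions

pc1_a, pc1_b = scores[:10, 0], scores[10:, 0]   # PC1 separates the groups
```

    Inspecting the PC1 loadings then points back to which compounds drive the separation, which is the logic behind the 8-compound marker combination reported above.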

  5. Applying matching pursuit decomposition time-frequency processing to UGS footstep classification

    NASA Astrophysics Data System (ADS)

    Larsen, Brett W.; Chung, Hugh; Dominguez, Alfonso; Sciacca, Jacob; Kovvali, Narayan; Papandreou-Suppappola, Antonia; Allee, David R.

    2013-06-01

    The challenge of rapid footstep detection and classification in remote locations has long been an important area of study for defense technology and national security. Also, as the military seeks to create effective and disposable unattended ground sensors (UGS), computational complexity and power consumption have become essential considerations in the development of classification techniques. In response to these issues, a research project at the Flexible Display Center at Arizona State University (ASU) has experimented with footstep classification using the matching pursuit decomposition (MPD) time-frequency analysis method. The MPD provides a parsimonious signal representation by iteratively selecting matched signal components from a pre-determined dictionary. The resulting time-frequency representation of the decomposed signal provides distinctive features for different types of footsteps, including footsteps during walking or running activities. The MPD features were used in a Bayesian classification method to successfully distinguish between the different activities. The computational cost of the iterative MPD algorithm was reduced, without significant loss in performance, using a modified MPD with a dictionary consisting of signals matched to cadence temporal gait patterns obtained from real seismic measurements. The classification results were demonstrated with real data from footsteps under various conditions recorded using a low-cost seismic sensor.
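
    The iterative atom-selection step of matching pursuit described above can be sketched with a toy dictionary. For clarity the atoms here are orthonormal (from a QR factorization), not the cadence-matched seismic dictionary used in the paper; all sizes and indices are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n, n_atoms = 64, 32
q_mat, _ = np.linalg.qr(rng.normal(size=(n, n_atoms)))
dictionary = q_mat.T                     # 32 orthonormal atoms of length 64

# A "signal" built from two known atoms plus mild noise.
signal = 3.0 * dictionary[5] + 2.0 * dictionary[17] + 0.01 * rng.normal(size=n)

def matching_pursuit(x, atoms, n_iter=5):
    """Greedily pick the best-correlated atom and subtract its part."""
    residual = x.copy()
    picks = []
    for _ in range(n_iter):
        corr = atoms @ residual              # correlation with every atom
        k = int(np.argmax(np.abs(corr)))     # best-matching atom
        picks.append((k, float(corr[k])))
        residual = residual - corr[k] * atoms[k]
    return picks, residual

picks, residual = matching_pursuit(signal, dictionary)
# picks recovers atoms 5 and 17 first; the residual shrinks each step.
```

    The selected (atom, coefficient) pairs are exactly the parsimonious time-frequency features the abstract refers to; restricting the dictionary to cadence-matched atoms is what reduces the computational cost.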

  6. Interaural cross correlation of event-related potentials and diffusion tensor imaging in the evaluation of auditory processing disorder: a case study.

    PubMed

    Jerger, James; Martin, Jeffrey; McColl, Roderick

    2004-01-01

    In a previous publication (Jerger et al, 2002), we presented event-related potential (ERP) data on a pair of 10-year-old twin girls (Twins C and E), one of whom (Twin E) showed strong evidence of auditory processing disorder. For the present paper, we analyzed cross-correlation functions of ERP waveforms generated in response to the presentation of target stimuli to either the right or left ears in a dichotic paradigm. There were four conditions; three involved the processing of real words for either phonemic, semantic, or spectral targets; one involved the processing of a nonword acoustic signal. Marked differences in the cross-correlation functions were observed. In the case of Twin C, cross-correlation functions were uniformly normal across both hemispheres. The functions for Twin E, however, suggest poorly correlated neural activity over the left parietal region during the three word processing conditions, and over the right parietal area in the nonword acoustic condition. Differences between the twins' brains were evaluated using diffusion tensor magnetic resonance imaging (DTI). For Twin E, results showed reduced anisotropy over the length of the midline corpus callosum and adjacent lateral structures, implying reduced myelin integrity. Taken together, these findings suggest that failure to achieve appropriate temporally correlated bihemispheric brain activity in response to auditory stimulation, perhaps as a result of faulty interhemispheric communication via corpus callosum, may be a factor in at least some children with auditory processing disorder. PMID:15030103
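
    The cross-correlation analysis described above amounts to computing a normalized cross-correlation function between two ERP waveforms and inspecting its peak value and lag. A minimal sketch on synthetic waveforms (the sampling rate and delay are assumed, not the authors' values):

```python
import numpy as np

fs = 250.0                                   # assumed sampling rate, Hz
t = np.arange(0, 1.0, 1.0 / fs)
wave = np.sin(2 * np.pi * 5 * t) * np.exp(-3 * t)   # toy ERP-like burst

lag_samples = 10
other = np.roll(wave, lag_samples)           # same waveform delayed 40 ms

def norm_xcorr(x, y):
    """Normalized cross-correlation; returns (lags, correlation)."""
    x = (x - x.mean()) / (x.std() * len(x))
    y = (y - y.mean()) / y.std()
    c = np.correlate(x, y, mode="full")
    lags = np.arange(-len(x) + 1, len(x))
    return lags, c

lags, c = norm_xcorr(wave, other)
peak_lag = int(lags[np.argmax(c)])   # |peak_lag| recovers the 10-sample delay
```

    A high, well-aligned peak corresponds to strongly correlated bihemispheric activity; a low or shifted peak is the kind of poorly correlated activity reported for Twin E.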

  7. Feasibility study: Application of the geopressured-geothermal resource to pyrolytic conversion or decomposition/detoxification processes

    SciTech Connect

    Propp, W.A.; Grey, A.E.; Negus-de Wys, J.; Plum, M.M.; Haefner, D.R.

    1991-09-01

    This study presents a preliminary evaluation of the technical and economic feasibility of selected conceptual processes for pyrolytic conversion of organic feedstocks or the decomposition/detoxification of hazardous wastes by coupling the process to the geopressured-geothermal resource. The report presents a detailed discussion of the resource and of each process selected for evaluation, including the technical evaluation of each. A separate section presents the economic methodology used and the evaluation of the technically viable process. A final section presents conclusions and recommendations. Three separate processes were selected for evaluation. These are pyrolytic conversion of biomass to petroleum-like fluids, wet air oxidation (WAO) at subcritical conditions for destruction of hazardous waste, and supercritical water oxidation (SCWO), also for the destruction of hazardous waste. The scientific feasibility of all three processes has been previously established by various bench-scale and pilot-scale studies. For a variety of reasons detailed in the report, the SCWO process is the only one deemed to be technically feasible, although the effects of the high solids content of the geothermal brine need further study. This technology shows tremendous promise for contributing to solving the nation's energy and hazardous waste problems. However, the current economic analysis suggests that it is uneconomical at this time. 50 refs., 5 figs., 7 tabs.

  8. EEG Classification for Hybrid Brain-Computer Interface Using a Tensor Based Multiclass Multimodal Analysis Scheme

    PubMed Central

    Ji, Hongfei; Li, Jie; Lu, Rongrong; Gu, Rong; Cao, Lei; Gong, Xiaoliang

    2016-01-01

    Electroencephalogram- (EEG-) based brain-computer interface (BCI) systems usually utilize one type of change in the dynamics of brain oscillations for control, such as event-related desynchronization/synchronization (ERD/ERS), steady state visual evoked potential (SSVEP), and P300 evoked potentials. There is a recent trend to detect more than one of these signals in one system to create a hybrid BCI. However, in this case, the EEG data have typically been divided into groups and analyzed by separate processing procedures, so the interactive effects are ignored when different types of BCI tasks are executed simultaneously. In this work, we propose an improved tensor based multiclass multimodal scheme especially for hybrid BCI, in which EEG signals are denoted as multiway tensors, a nonredundant rank-one tensor decomposition model is proposed to obtain nonredundant tensor components, a weighted Fisher criterion is designed to select multimodal discriminative patterns without ignoring the interactive effects, and support vector machine (SVM) is extended to multiclass classification. Experiment results suggest that the proposed scheme can not only identify the different changes in the dynamics of brain oscillations induced by different types of tasks but also capture the interactive effects of simultaneous tasks properly. Therefore, it has great potential for use in hybrid BCI. PMID:26880873
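
    The rank-one tensor decomposition that the scheme builds on can be sketched generically (this is not the paper's nonredundant model): a 3-way array, e.g. channel × frequency × time, is approximated by λ·(a ∘ b ∘ c) using the alternating higher-order power method on synthetic data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Build a tensor that is exactly rank one plus a little noise.
a_true = rng.normal(size=4)
b_true = rng.normal(size=5)
c_true = rng.normal(size=6)
T = np.einsum("i,j,k->ijk", a_true, b_true, c_true)
T += 0.01 * rng.normal(size=T.shape)

a = rng.normal(size=4)
b = rng.normal(size=5)
c = rng.normal(size=6)
for _ in range(50):                       # alternating (ALS) updates
    a = np.einsum("ijk,j,k->i", T, b, c); a /= np.linalg.norm(a)
    b = np.einsum("ijk,i,k->j", T, a, c); b /= np.linalg.norm(b)
    c = np.einsum("ijk,i,j->k", T, a, b)
    lam = np.linalg.norm(c); c /= lam     # component strength

approx = lam * np.einsum("i,j,k->ijk", a, b, c)
rel_err = float(np.linalg.norm(approx - T) / np.linalg.norm(T))
```

    Each recovered rank-one component plays the role of one "tensor component" whose factors (spatial, spectral, temporal) feed the downstream discriminative-pattern selection.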

  10. Using Regional Moment Tensors to Constrain Earthquake Processes following the 2010 Darfield and 2011 Canterbury New Zealand Earthquake Sequences

    NASA Astrophysics Data System (ADS)

    Herman, M. W.; Furlong, K. P.; Herrmann, R. B.; Benz, H.

    2011-12-01

    We model regional broadband data from the South Island of New Zealand to determine regional moment tensor (RMT) solutions for the mainshock and selected aftershocks of the M7.0 3 September 2010, M6.1 21 February 2011, and M6.0 13 June 2011 earthquakes that occurred near Christchurch, New Zealand. Arrival time picks from both the local and regional strong motion and broadband data were used to determine preliminary earthquake locations using a previously published South Island velocity model. Rayleigh and Love surface wave dispersion measurements were then made from selected events to refine the velocity model in order to better match the predominantly large regional surface waves. RMT solutions were computed using the procedures of Herrmann et al. (2011). In total, we computed RMT solutions for 82 events in the magnitude range Mw 3.5-7.0. Although the crustal faulting behavior in the region has been argued to reflect a complex interaction of strike-slip and thrust faulting, the dominant faulting style in the sequence is right-lateral strike-slip (75 events), with nodal planes striking west-east to southwest-northeast. There are only five purely reverse mechanisms, at the western end of the sequence, in the vicinity of the Harper Hills blind thrust. The main Mw 7.0 rupture shows both local small-scale stepovers and one larger (~5-10 km width) right stepover near 172.40°E. Although we expect normal faulting associated with this larger stepover, during the first month after the main shock we observed only two normal fault mechanisms and 13 strike-slip (inferred E-W right-lateral) events in the stepover region; since that time, the sense of faulting has been dominated by right-lateral strike-slip events, perhaps indicating a sequence of short E-W fault segments in the region. The February and June 2011 events occurred along the same trend at the eastern end of the sequence, and show similar strike-slip mechanisms to the majority of events to the west, but the

  11. Scattering studies of self-assembling processes of polymer blends in spinodal decomposition. II. Temperature dependence

    NASA Astrophysics Data System (ADS)

    Takenaka, Mikihito; Hashimoto, Takeji

    1992-04-01

    Our previous work on the time evolution of the interfacial structure for a near-critical mixture of polybutadiene and polyisoprene undergoing spinodal decomposition (SD) [T. Hashimoto, M. Takenaka, and H. Jinnai, J. Appl. Crystallogr. 24, 457 (1991)] was extended to explore the behavior as a function of temperature T, again using the time-resolved light scattering method. The study involved the investigation of the time evolutions of various characteristic parameters such as the wave number qm(t;T) of the dominant mode of the concentration fluctuations, the maximum scattered intensity Im(t;T), the scaled structure factor F(x;T), the interfacial area density Σ(t;T), and the characteristic interfacial thickness tI(t;T) from the early-to-late stage SD, where t refers to time after the onset of SD and x refers to the reduced scattering vector defined by x = q/qm(t;T); q is the magnitude of the scattering vector. The results confirm the model previously proposed at a given T over a wider temperature range corresponding to the quench depth ΔT = T - Ts = 5.5-34.5 K, or εT = (χ - χs)/χs = 4.50×10^-2 to 2.79×10^-1, where Ts is the spinodal temperature, and χ and χs are the Flory interaction parameters at T and Ts, respectively. This blend is noted to have a phase diagram of the lower critical solution temperature type.
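
    The scaling step described above can be sketched directly: given a scattered-intensity profile I(q), locate the peak wave number qm and re-express the profile in the reduced variable x = q/qm. The profile below is a synthetic peak, not light-scattering data.

```python
import numpy as np

q = np.linspace(0.05, 2.0, 400)            # scattering vector magnitude
qm_true = 0.6
I_q = np.exp(-((q - qm_true) / 0.2) ** 2)  # toy peaked intensity profile

qm = float(q[np.argmax(I_q)])              # dominant-mode wave number q_m
x = q / qm                                 # reduced scattering vector
# At late times, profiles taken at different t collapse onto one scaled
# structure factor F(x;T) when the intensity is plotted against x.
```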

  12. Algorithms for sparse nonnegative Tucker decompositions.

    PubMed

    Mørup, Morten; Hansen, Lars Kai; Arnfred, Sidse M

    2008-08-01

    There is an increasing interest in the analysis of large-scale multiway data. The concept of multiway data refers to arrays of data with more than two dimensions, that is, taking the form of tensors. To analyze such data, decomposition techniques are widely used. The two most common decompositions for tensors are the Tucker model and the more restricted PARAFAC model. Both models can be viewed as generalizations of regular factor analysis to data of more than two modalities. Nonnegative matrix factorization (NMF), in conjunction with sparse coding, has recently been given much attention due to its part-based and easily interpretable representation. While NMF has been extended to the PARAFAC model, no such attempt has been made to extend NMF to the Tucker model. However, if the tensor data analyzed are nonnegative, it may well be relevant to consider purely additive (i.e., nonnegative) Tucker decompositions. To reduce the ambiguities of this type of decomposition, we develop updates that can impose sparseness in any combination of modalities; hence, we propose algorithms for sparse nonnegative Tucker decomposition (SN-TUCKER). We demonstrate how the proposed algorithms are superior to existing algorithms for Tucker decompositions when the data and interactions can be considered nonnegative. We further illustrate how sparse coding can help identify which model (PARAFAC or Tucker) is more appropriate for the data, as well as how to select the number of components by turning off excess components. The algorithms for SN-TUCKER can be downloaded from Mørup (2007).
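
    The building block that SN-TUCKER generalizes to tensors is the NMF multiplicative update. A minimal matrix-case sketch (not the paper's algorithm): Lee-Seung updates with an optional L1 (sparsity) term in the denominator of the H update; the Tucker case applies the same idea to each factor matrix and the core.

```python
import numpy as np

rng = np.random.default_rng(3)

def nmf(V, rank, n_iter=1000, sparsity=0.0, eps=1e-9):
    """NMF by multiplicative updates; `sparsity` adds an L1 penalty on H."""
    m, n = V.shape
    W = rng.random((m, rank)) + 0.1
    H = rng.random((rank, n)) + 0.1
    for _ in range(n_iter):
        # Updates keep W, H nonnegative because all factors are nonnegative.
        H *= (W.T @ V) / (W.T @ W @ H + sparsity + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Nonnegative data with exact nonnegative rank 2.
V = np.abs(rng.normal(size=(8, 2))) @ np.abs(rng.normal(size=(2, 10)))
W, H = nmf(V, rank=2)
rel_err = float(np.linalg.norm(W @ H - V) / np.linalg.norm(V))
```

    Raising `sparsity` drives rows of H toward zero, which is the mechanism the abstract describes for turning off excess components.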

  13. Coupling experimental data and a prototype model to probe the physical and chemical processes of 2,4-dinitroimidazole solid-phase thermal decomposition

    SciTech Connect

    Behrens, R.; Minier, L.; Bulusu, S.

    1998-12-31

    The time-dependent, solid-phase thermal decomposition behavior of 2,4-dinitroimidazole (2,4-DNI) has been measured utilizing simultaneous thermogravimetric modulated beam mass spectrometry (STMBMS) methods. The decomposition products consist of gaseous and non-volatile polymeric products. The temporal behavior of the gas formation rates of the identified products indicates that the overall thermal decomposition process is complex. In isothermal experiments with 2,4-DNI in the solid phase, four distinguishing features are observed: (1) elevated rates of gas formation are observed during the early stages of the decomposition, which appear to be correlated with the presence of exogenous water in the sample; (2) this is followed by a period of relatively constant rates of gas formation; (3) next, the rates of gas formation accelerate, characteristic of an autocatalytic reaction; (4) finally, the 2,4-DNI is depleted and gaseous decomposition products continue to evolve at a decreasing rate. A physicochemical and mathematical model of the decomposition of 2,4-DNI has been developed and applied to the experimental results. The first generation of this model is described in this paper. Differences between the first generation of the model and the experimental data collected under different conditions suggest refinements for the next generation of the model.
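
    The staged behavior described above (roughly constant rate, autocatalytic acceleration, then depletion) can be reproduced qualitatively by a minimal kinetic sketch: a first-order channel plus an autocatalytic channel, dA/dt = -(k1 + k2·P)·A, integrated with forward Euler. The rate constants and time units are invented for illustration, not fitted to the 2,4-DNI data.

```python
# A (reactant) decomposes to P (product), and P catalyzes the reaction.
k1, k2 = 0.02, 1.0       # slow first-order step, strong autocatalysis
A, P = 1.0, 0.0          # reactant and product fractions
dt = 0.01
rates = []
for _ in range(5000):    # integrate to t = 50
    r = (k1 + k2 * P) * A        # instantaneous gas-formation rate
    A -= r * dt
    P += r * dt
    rates.append(r)

t_peak = rates.index(max(rates)) * dt   # time of autocatalytic peak rate
```

    The rate history starts near k1, rises to a sigmoid-like maximum as P accumulates, and decays as A is depleted, mirroring features (2)-(4) in the abstract.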

  14. Photocatalytic Decomposition of Methylene Blue Over MIL-53(Fe) Prepared Using Microwave-Assisted Process Under Visible Light Irradiation.

    PubMed

    Trinh, Nguyen Duy; Hong, Seong-Soo

    2015-07-01

    Iron-based MIL-53 crystals with uniform size were successfully synthesized using a microwave-assisted solvothermal method and characterized by XRD, FE-SEM, and DRS. We also investigated the photocatalytic activity of MIL-53(Fe) for the decomposition of methylene blue using H2O2 as an electron acceptor. The XRD and SEM results show that fully crystallized MIL-53(Fe) materials were obtained regardless of the preparation method. The DRS results show that MIL-53(Fe) samples prepared using the microwave-assisted process absorb up to the visible region, and they accordingly showed high photocatalytic activity under visible light irradiation. The MIL-53(Fe) catalyst prepared by two rounds of microwave irradiation showed the highest activity.

  15. Regeneration of glass nanofluidic chips through a multiple-step sequential thermochemical decomposition process at high temperatures.

    PubMed

    Xu, Yan; Wu, Qian; Shimatani, Yuji; Yamaguchi, Koji

    2015-10-01

    Due to the lack of regeneration methods, the reusability of nanofluidic chips is a significant technical challenge impeding the efficient and economic promotion of both fundamental research and practical applications on nanofluidics. Herein, a simple method for the total regeneration of glass nanofluidic chips was described. The method consists of sequential thermal treatment with six well-designed steps, which correspond to four sequential thermal and thermochemical decomposition processes, namely, dehydration, high-temperature redox chemical reaction, high-temperature gasification, and cooling. The method enabled the total regeneration of typical 'dead' glass nanofluidic chips by eliminating physically clogged nanoparticles in the nanochannels, removing chemically reacted organic matter on the glass surface and regenerating permanent functional surfaces of dissimilar materials localized in the nanochannels. The method provides a technical solution to significantly improve the reusability of glass nanofluidic chips and will be useful for the promotion and acceleration of research and applications on nanofluidics.

  16. Fundamental phenomena on fuel decomposition and boundary-layer combustion processes with applications to hybrid rocket motors

    NASA Technical Reports Server (NTRS)

    Kuo, Kenneth K.; Lu, Yeu-Cherng; Chiaverini, Martin J.; Harting, George C.; Johnson, David K.; Serin, Nadir

    1995-01-01

    The experimental study on the fundamental processes involved in fuel decomposition and boundary-layer combustion in hybrid rocket motors is continuously being conducted at the High Pressure Combustion Laboratory of The Pennsylvania State University. This research will provide a useful engineering technology base for the development of hybrid rocket motors, as well as a fundamental understanding of the complex processes involved in hybrid propulsion. A high-pressure, 2-D slab motor has been designed, manufactured, and utilized for conducting seven test firings using HTPB fuel processed at PSU. A total of 20 fuel slabs have been received from the McDonnell Douglas Aerospace Corporation. Ten of these fuel slabs contain an array of fine-wire thermocouples for measuring solid fuel surface and subsurface temperatures. Diagnostic instrumentation used in the tests includes high-frequency pressure transducers for measuring static and dynamic motor pressures and fine-wire thermocouples for measuring solid fuel surface and subsurface temperatures. The ultrasonic pulse-echo technique as well as a real-time x-ray radiography system have been used to obtain independent measurements of instantaneous solid fuel regression rates.

  18. Achieving Low Overpotential Li-O₂ Battery Operations by Li₂O₂ Decomposition through One-Electron Processes.

    PubMed

    Xie, Jin; Dong, Qi; Madden, Ian; Yao, Xiahui; Cheng, Qingmei; Dornath, Paul; Fan, Wei; Wang, Dunwei

    2015-12-01

    As a promising high-capacity energy storage technology, Li-O2 batteries face two critical challenges, poor cycle lifetime and low round-trip efficiency, both of which are connected to their high overpotentials. The problem is particularly acute during recharge, where the reactions typically follow two-electron mechanisms that are inherently slow. Here we present a strategy that can significantly reduce recharge overpotentials. Our approach seeks to promote Li2O2 decomposition by one-electron processes, and the key is to stabilize the important intermediate, the superoxide species. With the introduction of a highly polarizing electrolyte, we observe that recharge processes are successfully switched from a two-electron pathway to a single-electron one. While a similar one-electron route has been reported for discharge processes, it has rarely been described for recharge except for the initial stage, owing to the poor mobility of surface-bound superoxide ions (O2(-)), a necessary intermediate for the mechanism. Key to our observation is the solvation of O2(-) by an ionic liquid electrolyte (PYR14TFSI). Recharge overpotentials as low as 0.19 V at 100 mA/g(carbon) are measured.

  19. On the Decomposition of Martensite During Bake Hardening of Thermomechanically Processed TRIP Steels

    SciTech Connect

    Pereloma, E. V.; Miller, Michael K; Timokhina, I. B.

    2008-01-01

    Thermomechanically processed (TMP) CMnSi transformation-induced plasticity (TRIP) steels with and without additions of Nb, Mo, or Al were subjected to prestraining and bake hardening. Atom probe tomography (APT) revealed the presence of fine C-rich clusters in the martensite of all studied steels after the thermomechanical processing. After bake hardening, the formation of iron carbides, containing from 25 to 90 at. pct C, was observed. The evolution of iron carbide compositions was independent of steel composition and was a function of carbide size.

  1. Towards a physical understanding of stratospheric cooling under global warming through a process-based decomposition method

    NASA Astrophysics Data System (ADS)

    Yang, Yang; Ren, R.-C.; Cai, Ming

    2016-02-01

    The stratosphere has been cooling under global warming, the causes of which are not yet well understood. This study applied a process-based decomposition method (CFRAM; Coupled Surface-Atmosphere Climate Feedback Response Analysis Method) to the simulation results of a Coupled Model Intercomparison Project, phase 5 (CMIP5) model (CCSM4; Community Climate System Model, version 4) to identify the radiative and non-radiative processes responsible for the stratospheric cooling. By focusing on the long-term stratospheric temperature changes between the "historical" run and the 8.5 W m-2 Representative Concentration Pathway (RCP8.5) scenario, this study demonstrates that changes in radiation due to CO2, ozone, and water vapor are the main drivers of stratospheric cooling in both winter and summer. They contribute to the cooling by reducing the net radiative energy (mainly downward radiation) received by the stratospheric layer. In terms of the global average, their contributions are around -5, -1.5, and -1 K, respectively. However, the observed stratospheric cooling is much weaker than the cooling implied by radiative processes alone, because changes in atmospheric dynamic processes act to strongly mitigate the radiative cooling, yielding roughly 4 K of warming in the global average. In particular, the much stronger/weaker dynamic warming in the northern/southern winter extratropics is associated with an increase of planetary-wave activity in the northern winter hemisphere, but a slight decrease in the southern winter hemisphere, under global warming. More importantly, although radiative processes dominate the stratospheric cooling, the spatial patterns are largely determined by the non-radiative effects of dynamic processes.
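
    The additive bookkeeping behind a CFRAM-style decomposition is simple: the total temperature change is the sum of partial temperature changes attributable to individual radiative and non-radiative processes. Using the rough global-average values quoted above (in K):

```python
# Partial temperature changes per process, global average, in kelvin.
contributions_K = {
    "CO2": -5.0,
    "ozone": -1.5,
    "water_vapor": -1.0,
    "dynamics": +4.0,    # non-radiative mitigation of the cooling
}
radiative_K = sum(v for k, v in contributions_K.items() if k != "dynamics")
net_K = sum(contributions_K.values())   # net cooling, much weaker than the
                                        # radiative terms alone
```

    The arithmetic makes the abstract's point explicit: radiative terms alone give about -7.5 K, while the dynamic warming offsets more than half of it, leaving a net cooling near -3.5 K.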

  2. Tensor Network Renormalization.

    PubMed

    Evenbly, G; Vidal, G

    2015-10-30

    We introduce a coarse-graining transformation for tensor networks that can be applied to study both the partition function of a classical statistical system and the Euclidean path integral of a quantum many-body system. The scheme is based upon the insertion of optimized unitary and isometric tensors (disentanglers and isometries) into the tensor network and has, as its key feature, the ability to remove short-range entanglement or correlations at each coarse-graining step. Removal of short-range entanglement results in scale invariance being explicitly recovered at criticality. In this way we obtain a proper renormalization group flow (in the space of tensors), one that in particular (i) is computationally sustainable, even for critical systems, and (ii) has the correct structure of fixed points, both at criticality and away from it. We demonstrate the proposed approach in the context of the 2D classical Ising model.

  3. Ozone decomposition

    PubMed Central

    Batakliev, Todor; Georgiev, Vladimir; Anachkov, Metody; Rakovsky, Slavcho

    2014-01-01

    Catalytic ozone decomposition is of great significance because ozone is a toxic substance commonly found or generated in human environments (aircraft cabins, offices with photocopiers, laser printers, sterilizers). Considerable work on ozone decomposition has been reported in the literature. This review provides a comprehensive summary of that literature, concentrating on analysis of the physico-chemical properties, synthesis and catalytic decomposition of ozone. This is supplemented by a review of kinetics and catalyst characterization which ties together the previously reported results. Noble metals and oxides of transition metals have been found to be the most active substances for ozone decomposition. The high price of precious metals stimulated the use of metal oxide catalysts, particularly catalysts based on manganese oxide. It has been determined that the kinetics of ozone decomposition is first order. A mechanism of the reaction of catalytic ozone decomposition is discussed, based on detailed spectroscopic investigations of the catalytic surface, showing the existence of peroxide and superoxide surface intermediates. PMID:26109880
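
    First-order kinetics, as described above, means the decay rate is proportional to the ozone concentration, so c(t) = c0·exp(-k·t) with a half-life independent of the starting concentration. A minimal sketch with an assumed, illustrative rate constant:

```python
import math

k = 0.15            # rate constant, 1/min (assumed, not a measured value)
c0 = 1.0            # initial ozone concentration, arbitrary units

def conc(t):
    """Ozone concentration after time t (units matching 1/k)."""
    return c0 * math.exp(-k * t)

half_life = math.log(2.0) / k   # first-order half-life, independent of c0
```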

  4. Ozone decomposition.

    PubMed

    Batakliev, Todor; Georgiev, Vladimir; Anachkov, Metody; Rakovsky, Slavcho; Zaikov, Gennadi E

    2014-06-01

    Catalytic ozone decomposition is of great significance because ozone is a toxic substance commonly found or generated in human environments (aircraft cabins, offices with photocopiers, laser printers, sterilizers). Considerable work on ozone decomposition has been reported in the literature. This review provides a comprehensive summary of that literature, concentrating on analysis of the physico-chemical properties, synthesis and catalytic decomposition of ozone. This is supplemented by a review of kinetics and catalyst characterization which ties together the previously reported results. Noble metals and oxides of transition metals have been found to be the most active substances for ozone decomposition. The high price of precious metals has stimulated the use of metal oxide catalysts, particularly catalysts based on manganese oxide. It has been determined that the decomposition of ozone follows first-order kinetics. A mechanism of the reaction of catalytic ozone decomposition is discussed, based on detailed spectroscopic investigations of the catalytic surface showing the existence of peroxide and superoxide surface intermediates. PMID:26109880

  6. Decomposition of cyclohexanoic acid by the UV/H2O2 process under various conditions.

    PubMed

    Afzal, Atefeh; Drzewicz, Przemysław; Martin, Jonathan W; Gamal El-Din, Mohamed

    2012-06-01

    Naphthenic acids (NAs) are a broad range of alicyclic and aliphatic compounds that are persistent and contribute to the toxicity of oil sands process-affected water (OSPW). In this investigation, cyclohexanoic acid (CHA) was selected as a model naphthenic acid, and its oxidation was investigated using advanced oxidation employing low-pressure ultraviolet light in the presence of hydrogen peroxide (the UV/H(2)O(2) process). The effects of two pH values and of common OSPW constituents, such as chloride (Cl(-)) and carbonate (CO(3)(2-)), were investigated in ultrapure water. The optimal molar ratio of H(2)O(2) to CHA in the treatment process was also investigated. The pH had no significant effect on the degradation, nor on the formation and degradation of byproducts, in ultrapure water. The presence of CO(3)(2-) or Cl(-) significantly decreased the CHA degradation rate: 700 mg/L CO(3)(2-) or 500 mg/L Cl(-), typical concentrations in OSPW, caused a 55% and 23% decrease in the pseudo-first-order degradation rate constant for CHA, respectively. However, no change in the byproducts, or in their degradation trend, was observed in the presence of scavengers. A real OSPW matrix also had a significant impact, decreasing the CHA degradation rate: with CHA spiked into OSPW, the degradation rate decreased by up to 82% relative to that in ultrapure water. The results of this study show that the UV/H(2)O(2) AOP is capable of degrading CHA as a model NA in ultrapure water. However, in real applications the effect of radical scavengers should be taken into consideration to achieve the best performance of the process. PMID:22521165
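
    Pseudo-first-order rate constants like those reported here are typically extracted by linear regression of -ln(C/C0) against time. The sketch below does this on synthetic data (the concentrations and rate constant are invented for illustration, not taken from the study) and then applies the reported 55% carbonate-scavenging reduction.

```python
import math

def pseudo_first_order_k(times, concentrations):
    """Least-squares slope of -ln(C/C0) vs t gives k' (units 1/time)."""
    c0 = concentrations[0]
    ys = [-math.log(c / c0) for c in concentrations]
    n = len(times)
    xbar = sum(times) / n
    ybar = sum(ys) / n
    slope = sum((x - xbar) * (y - ybar) for x, y in zip(times, ys)) \
            / sum((x - xbar) ** 2 for x in times)
    return slope

# Synthetic decay generated with k' = 0.02 min^-1
t = [0, 10, 20, 30, 40]
c = [100 * math.exp(-0.02 * ti) for ti in t]
k_prime = pseudo_first_order_k(t, c)
print(round(k_prime, 4))              # -> 0.02
print(round(k_prime * (1 - 0.55), 4)) # rate after a 55% scavenger decrease
```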

  7. [Effect of natural and hydrothermal synthetic goethite on the release of methane in the anaerobic decomposition process of organic matter].

    PubMed

    Yao, Dun-Fan; Chen, Tian-Hu; Wang, Jin; Zhou, Yue-Fei; Yue, Zheng-Bo

    2013-02-01

    The effects of natural goethite (NGt) and synthetic goethite (SGt) on the release of methane in an anaerobic biochemical system consisting of dissimilatory iron-reducing bacteria (DIRB) and methane-producing bacteria (MPB) were investigated through batch tests with sodium acetate as the carbon source. To explore the effects and mechanisms of both mineral materials on the release of methane in the anaerobic decomposition of organic matter in the presence of DIRB, the main gas components and the total organic carbon (TOC), total inorganic carbon (TIC) and Fe2+ in the aqueous phase were determined, and XRD analyses were conducted on the solid-phase product. Moreover, the minerals were characterized by specific surface area (BET), X-ray diffraction (XRD) and X-ray fluorescence (XRF). The modified Gompertz equation was used to fit the cumulative methane and carbon dioxide production. Results showed that the maximum cumulative production of methane was brought forward by 60-78 days by the addition of goethite, and CO2 emission was effectively reduced by 30%-67% compared with the control samples. SGt was more effective than NGt in promoting the release of CH4 and reducing the CO2 emission. Furthermore, analysis of the solid product showed that the addition of goethite can fix part of the CO2 through the formation of siderite.
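
    The modified Gompertz equation used above for cumulative gas production is commonly written as M(t) = P exp(-exp(Rm e / P (lambda - t) + 1)), with methane potential P, maximum production rate Rm and lag time lambda. The sketch below evaluates it with hypothetical parameter values (not those fitted in the study).

```python
import math

def gompertz(t, P, Rm, lag):
    """Modified Gompertz cumulative production at time t."""
    return P * math.exp(-math.exp(Rm * math.e / P * (lag - t) + 1.0))

P, Rm, lag = 250.0, 8.0, 5.0   # mL CH4, mL/d, d (illustrative only)
for day in (0, 10, 30, 60, 120):
    print(day, round(gompertz(day, P, Rm, lag), 1))
```

The curve rises sigmoidally after the lag phase and saturates at the potential P, which is what makes the model convenient for reading off the time at which maximum cumulative production is reached.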

  8. Physical and chemical processes of low-temperature plasma decomposition of liquids under ultrasonic treatment

    NASA Astrophysics Data System (ADS)

    Bulychev, N. A.; Kazaryan, M. A.

    2015-12-01

    In this work, a low-temperature plasma initiated in liquid media between electrodes has been shown to be able to decompose hydrogen-containing organic molecules, yielding gaseous products with a hydrogen volume fraction above 90% (according to gas chromatography data). Preliminary evaluations of the energy efficiency, calculated from the combustion energy of hydrogen and of the initial liquids and from the electrical energy consumption, have demonstrated an efficiency of about 60-70%, depending on the initial liquid composition. Theoretical calculations of the voltage and current values for this process have been performed and are in good agreement with the experimental data.

  9. General route for the decomposition of InAs quantum dots during the capping process

    NASA Astrophysics Data System (ADS)

    González, D.; Reyes, D. F.; Utrilla, A. D.; Ben, T.; Braza, V.; Guzman, A.; Hierro, A.; Ulloa, J. M.

    2016-03-01

    The effect of the capping process on the morphology of InAs/GaAs quantum dots (QDs) by using different GaAs-based capping layers (CLs), ranging from strain reduction layers to strain compensating layers, has been studied by transmission microscopy techniques. For this, we have measured simultaneously the height and diameter of buried and uncapped QDs, covering populations of hundreds of QDs that are statistically reliable. First, the uncapped QD population evolves in all cases from a pyramidal shape into a more homogeneous distribution of buried QDs with a spherical-dome shape, despite the different mechanisms implicated in the QD capping. Second, the shape of the buried QDs depends only on the final QD size, where the radius of curvature is a function of the base diameter, independently of the CL composition and growth conditions. An asymmetric evolution of the QD morphology takes place, in which the QD height and base diameter are modified by the amount required to adopt a similar stable shape characterized by an average aspect ratio of 0.21. Our results contradict the traditional model of QD material redistribution from the apex to the base and point to a different, universal behavior of the overgrowth processes in self-organized InAs QDs.

  10. Temperature Adaptations in the Terminal Processes of Anaerobic Decomposition of Yellowstone National Park and Icelandic Hot Spring Microbial Mats

    PubMed Central

    Sandbeck, Kenneth A.; Ward, David M.

    1982-01-01

    The optimum temperatures for methanogenesis in microbial mats of four neutral to alkaline, low-sulfate hot springs in Yellowstone National Park were between 50 and 60°C, which was 13 to 23°C lower than the upper temperature for mat development. Significant methanogenesis at 65°C was only observed in one of the springs. Methane production in samples collected at a 51 or 62°C site in Octopus Spring was increased by incubation at higher temperatures and was maximal at 70°C. Strains of Methanobacterium thermoautotrophicum were isolated from 50, 55, 60, and 65°C sites in Octopus Spring at the temperatures of the collection sites. The optimum temperature for growth and methanogenesis of each isolate was 65°C. Similar results were found for the potential rate of sulfate reduction in an Icelandic hot spring microbial mat in which sulfate reduction dominated methane production as a terminal process in anaerobic decomposition. The potential rate of sulfate reduction along the thermal gradient of the mat was greatest at 50°C, but incubation at 60°C of the samples obtained at 50°C increased the rate. Adaptation to different mat temperatures, common among various microorganisms and processes in the mats, did not appear to occur in the processes and microorganisms which terminate the anaerobic food chain. Other factors must explain why the maximal rates of these processes are restricted to moderate temperatures of the mat ecosystem. PMID:16346109

  11. Microscopic Approaches to Decomposition and Burning Processes of a Micro Plastic Resin Particle under Abrupt Heating

    NASA Astrophysics Data System (ADS)

    Ohiwa, Norio; Ishino, Yojiro; Yamamoto, Atsunori; Yamakita, Ryuji

    To elucidate the possibility and availability of thermal recycling of waste plastic resin from a basic and microscopic viewpoint, a series of abrupt heating processes of a spherical micro plastic particle having a diameter of about 200 μm is observed, when it is abruptly exposed to hot oxidizing combustion gas. Three ingenious devices are introduced and two typical plastic resins of polyethylene terephthalate and polyethylene are used. In this paper the dependency of internal and external appearances of residual plastic embers on the heating time and the ingredients of plastic resins is optically analyzed, along with appearances of internal micro bubbling, multiple micro explosions and jets, and micro diffusion flames during abrupt heating. Based on temporal variations of the surface area of a micro plastic particle, the apparent burning rate constant is also evaluated and compared with those of well-known volatile liquid fuels.
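
    The paper evaluates an apparent burning rate constant from the temporal variation of the particle surface area. A standard way to sketch this is the d-squared law of droplet combustion, d(t)^2 = d0^2 - K t, fitted here by least squares to synthetic diameter data for a ~200 um particle (our illustration with invented numbers, not the paper's measurement).

```python
def burning_rate_constant(times_s, diameters_um):
    """Least-squares slope of d^2 vs t; the negated slope is K (um^2/s)."""
    ys = [d * d for d in diameters_um]
    n = len(times_s)
    xbar = sum(times_s) / n
    ybar = sum(ys) / n
    slope = sum((x - xbar) * (y - ybar) for x, y in zip(times_s, ys)) \
            / sum((x - xbar) ** 2 for x in times_s)
    return -slope

# Synthetic data generated with K = 2.0e3 um^2/s
t = [0.0, 0.005, 0.010, 0.015]
d = [(200.0 ** 2 - 2.0e3 * ti) ** 0.5 for ti in t]
print(round(burning_rate_constant(t, d), 1))  # -> 2000.0
```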

  12. Decomposition of lignin from sugar cane bagasse during ozonation process monitored by optical and mass spectrometries.

    PubMed

    Souza-Corrêa, J A; Ridenti, M A; Oliveira, C; Araújo, S R; Amorim, J

    2013-03-21

    Mass spectrometry was used to monitor neutral chemical species from sugar cane bagasse that could volatilize during the bagasse ozonation process. Lignin fragments and some radicals liberated by direct ozone reaction with the biomass structure were detected. Ozone density was monitored during the ozonation by optical absorption spectroscopy. The optical results indicated that the ozone interaction with the bagasse material was better for bagasse particle sizes less than or equal to 0.5 mm. Both techniques have shown that the best condition for the ozone diffusion in the bagasse was at 50% of its moisture content. In addition, Fourier transform infrared spectroscopy (FTIR) and scanning electron microscopy (SEM) were employed to analyze the lignin bond disruptions and morphology changes of the bagasse surface that occurred due to the ozonolysis reactions as well. Appropriate chemical characterization of the lignin content in bagasse before and after its ozonation was also carried out.

  13. Decomposition techniques

    USGS Publications Warehouse

    Chao, T.T.; Sanzolone, R.F.

    1992-01-01

    Sample decomposition is a fundamental and integral step in the procedure of geochemical analysis. It is often the limiting factor to sample throughput, especially with the recent application of the fast and modern multi-element measurement instrumentation. The complexity of geological materials makes it necessary to choose the sample decomposition technique that is compatible with the specific objective of the analysis. When selecting a decomposition technique, consideration should be given to the chemical and mineralogical characteristics of the sample, elements to be determined, precision and accuracy requirements, sample throughput, technical capability of personnel, and time constraints. This paper addresses these concerns and discusses the attributes and limitations of many techniques of sample decomposition along with examples of their application to geochemical analysis. The chemical properties of reagents as to their function as decomposition agents are also reviewed. The section on acid dissolution techniques addresses the various inorganic acids that are used individually or in combination in both open and closed systems. Fluxes used in sample fusion are discussed. The promising microwave-oven technology and the emerging field of automation are also examined. A section on applications highlights the use of decomposition techniques for the determination of Au, platinum group elements (PGEs), Hg, U, hydride-forming elements, rare earth elements (REEs), and multi-elements in geological materials. Partial dissolution techniques used for geochemical exploration which have been treated in detail elsewhere are not discussed here; nor are fire-assaying for noble metals and decomposition techniques for X-ray fluorescence or nuclear methods be discussed. ?? 1992.

  14. Joint application of a statistical optimization process and Empirical Mode Decomposition to Magnetic Resonance Sounding Noise Cancelation

    NASA Astrophysics Data System (ADS)

    Ghanati, Reza; Fallahsafari, Mahdi; Hafizi, Mohammad Kazem

    2014-12-01

    The signal quality of Magnetic Resonance Sounding (MRS) measurements is a crucial criterion. The accuracy of the estimation of the signal parameters (i.e. E0 and T2*) strongly depends on the amplitude and conditions of ambient electromagnetic interference at the site of investigation. In this paper, in order to enhance performance in noisy environments, a two-step noise cancelation approach based on Empirical Mode Decomposition (EMD) and a statistical method is proposed. In the first stage, the noisy signal is adaptively decomposed into intrinsic oscillatory components called intrinsic mode functions (IMFs) by means of the EMD algorithm. Afterwards, the noisy IMFs are detected by an automatic procedure, and the partly de-noised signal is reconstructed from the noise-free IMFs. In the second stage, the signal obtained from the initial section enters an optimization process to cancel the remnant noise and, consequently, estimate the signal parameters. The strategy is tested on a synthetic MRS signal contaminated with Gaussian noise, spiky events and harmonic noise, and on real data. By successively applying the proposed steps, we can remove the noise from the signal to a high extent, and the performance indexes, particularly the signal-to-noise ratio, increase significantly.
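
    Once the record is de-noised, the MRS envelope E(t) = E0 exp(-t/T2*) can be fit for the two parameters named in the abstract. As a stand-in for the paper's optimization step, the sketch below recovers E0 and T2* by a log-linear least-squares fit on synthetic, noise-free data (all numbers invented for illustration).

```python
import math

def fit_envelope(times, envelope):
    """Fit ln E(t) = ln E0 - t/T2*; return (E0, T2*)."""
    ys = [math.log(e) for e in envelope]
    n = len(times)
    xbar = sum(times) / n
    ybar = sum(ys) / n
    slope = sum((x - xbar) * (y - ybar) for x, y in zip(times, ys)) \
            / sum((x - xbar) ** 2 for x in times)
    intercept = ybar - slope * xbar
    return math.exp(intercept), -1.0 / slope

t = [i * 0.02 for i in range(20)]                  # seconds
env = [150.0 * math.exp(-ti / 0.18) for ti in t]   # E0 = 150 nV, T2* = 180 ms
E0, T2s = fit_envelope(t, env)
print(round(E0, 1), round(T2s * 1000))  # -> 150.0 180
```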

  15. Phenol Decomposition Process by Pulsed-discharge Plasma above a Water Surface in Oxygen and Argon Atmosphere

    NASA Astrophysics Data System (ADS)

    Shiota, Haruki; Itabashi, Hideyuki; Satoh, Kohki; Itoh, Hidenori

    By-products from phenol by the exposure of pulsed-discharge plasma above a phenol aqueous solution are investigated by gas chromatography mass spectrometry, and the decomposition process of phenol is deduced. When Ar is used as a background gas, catechol, hydroquinone and 4-hydroxy-2-cyclohexene-1-on are produced, and no O3 is detected; therefore, active species such as OH, O, HO2, H2O2, which are produced from H2O in the discharge, can convert phenol into those by-products. When O2 is used as a background gas, formic acid, maleic acid, succinic acid and 4,6-dihydroxy-2,4-hexadienoic acid are produced in addition to catechol and hydroquinone. O3 is produced in the discharge plasma, so that phenol is probably decomposed into 4,6-dihydroxy-2,4-hexadienoic acid by 1,3-dipolar addition reaction with O3, and then 4,6-dihydroxy-2,4-hexadienoic acid can be decomposed into formic acid, maleic acid and succinic acid by 1,3-dipolar addition reaction with O3.

  16. Comparison of the thermal decomposition processes of several aminoalcohol-based ZnO inks with one containing ethanolamine

    NASA Astrophysics Data System (ADS)

    Gómez-Núñez, Alberto; Roura, Pere; López, Concepción; Vilà, Anna

    2016-09-01

    Four inks for the production of ZnO semiconducting films have been prepared with zinc acetate dihydrate as the precursor salt and one of the following aminoalcohols: aminopropanol (APr), aminomethyl butanol (AMB), aminophenol (APh) and aminobenzyl alcohol (AB) as stabilizing agent. Their thermal decomposition process has been analyzed in situ by thermogravimetric analysis (TGA), differential scanning calorimetry (DSC) and evolved gas analysis (EGA), whereas the solid product has been analyzed ex situ by X-ray diffraction (XRD) and infrared spectroscopy (IR). Although crystalline ZnO is already obtained at 300 °C (except for the APh ink), the films contain an organic residue that evolves at higher temperature in the form of a large variety of nitrogen-containing cyclic compounds. The results indicate that APr can be a better stabilizing agent than ethanolamine (EA): it gives larger ZnO crystal sizes with similar carbon content. However, a common drawback of all the amino stabilizers (EA included) is that nitrogen atoms are not completely removed from the ZnO film at the highest temperature of our experiments (600 °C).

  17. Application of Contois, Tessier, and first-order kinetics for modeling and simulation of a composting decomposition process.

    PubMed

    Wang, Yongjiang; Witarsa, Freddy

    2016-11-01

    An integrated model was developed by associating separate degradation kinetics with each of an array of degradations during a decomposition process, which was considered a novelty of this study. The raw composting material was divided into soluble, hemi-/cellulose, lignin, NBVS, ash, water and free air-space fractions. Considering their specific capabilities of expressing certain degradation phenomena, Contois, Tessier (an extension of the Monod kinetic) and first-order kinetics were employed to calculate the biochemical rates. It was found that the degradation of soluble substrate was relatively fast, reaching a maximum rate of about 0.4 per hour. The hydrolysis of lignin was rate-limiting, with a maximum rate of about 0.04 per hour. The dry-based peak concentrations of soluble, hemi-/cellulose and lignin degraders were about 0.9, 0.2 and 0.3 kg m(-3), respectively. The model developed, as a platform, allows degradation simulation of composting material that can be separated into the different components used in this study. PMID:27595704
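
    A minimal sketch of the three rate laws combined in the model (parameter values are hypothetical; the study only reports maxima of about 0.4 per hour for soluble substrate and 0.04 per hour for lignin hydrolysis): Contois, mu = mu_max S / (Ks X + S), makes saturation depend on degrader biomass X; Tessier, mu = mu_max (1 - exp(-S/Ks)), saturates exponentially; first-order is simply r = k S.

```python
import math

def contois(mu_max, S, Ks, X):
    """Contois kinetics: biomass-dependent saturation."""
    return mu_max * S / (Ks * X + S)

def tessier(mu_max, S, Ks):
    """Tessier kinetics: exponential saturation in substrate."""
    return mu_max * (1.0 - math.exp(-S / Ks))

def first_order(k, S):
    """First-order hydrolysis rate."""
    return k * S

S, X = 5.0, 0.9  # kg/m^3 substrate and degrader biomass (illustrative)
print(round(contois(0.4, S, Ks=2.0, X=X), 3))
print(round(tessier(0.4, S, Ks=2.0), 3))
print(round(first_order(0.04, S), 3))
```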

  18. Morphology and phase modifications of MoO{sub 3} obtained by metallo-organic decomposition processes

    SciTech Connect

    Barros Santos, Elias de; Martins de Souza e Silva, Juliana; Odone Mazali, Italo

    2010-11-15

    Molybdenum oxide samples were prepared using different temperatures and atmospheric conditions by metallo-organic decomposition processes and were characterized by XRD, SEM and DRS UV/Vis and Raman spectroscopies. Variation in the synthesis conditions resulted in solids with different morphologies and oxygen vacancy concentrations. Intense characteristic Raman bands of crystalline orthorhombic {alpha}-MoO{sub 3}, occurring at 992 cm{sup -1} and 820 cm{sup -1}, are observed and their shifts can be related to the differences in the structure of the solids obtained. The sample obtained under nitrogen flow at 1073 K is a phase mixture of orthorhombic {alpha}-MoO{sub 3} and monoclinic {beta}-MoO{sub 3}. The characterization results suggest that the molybdenum oxide samples are non-stoichiometric and are described as MoO{sub x} with x < 2.94. Variations in the reaction conditions make it possible to tune the number of oxygen defects and the band gap of the final material.

  19. Fuel decomposition and boundary-layer combustion processes of hybrid rocket motors

    NASA Technical Reports Server (NTRS)

    Chiaverini, Martin J.; Harting, George C.; Lu, Yeu-Cherng; Kuo, Kenneth K.; Serin, Nadir; Johnson, David K.

    1995-01-01

    Using a high-pressure, two-dimensional hybrid motor, an experimental investigation was conducted on fundamental processes involved in hybrid rocket combustion. HTPB (hydroxyl-terminated polybutadiene) fuel cross-linked with diisocyanate was burned with GOX under various operating conditions. Large-amplitude pressure oscillations were encountered in earlier test runs. After identifying the source of instability and decoupling the GOX feed-line system and combustion chamber, the pressure oscillations were drastically reduced from +/-20% of the localized mean pressure to an acceptable range of +/-1.5%. Embedded fine-wire thermocouples indicated that the surface temperature of the burning fuel was around 1000 K, depending upon axial location and operating conditions. Also, except near the leading-edge region, the subsurface thermal wave profiles in the upstream locations are thicker than those in the downstream locations, since the solid-fuel regression rate, in general, increases with distance along the fuel slab. The recovered solid-fuel slabs in the laminar portion of the boundary layer exhibited smooth surfaces, indicating the existence of a liquid melt layer on the burning fuel surface in the upstream region. After the transition section, which displayed distinct transverse striations, the surface roughness pattern became quite random and very pronounced in the downstream turbulent boundary-layer region. Both real-time X-ray radiography and ultrasonic pulse-echo techniques were used to determine the instantaneous web thickness burned and instantaneous solid-fuel regression rates over certain portions of the fuel slabs. Globally averaged and axially dependent but time-averaged regression rates were also obtained and presented.

  20. Exotic species as modifiers of ecosystem processes: Litter decomposition in native and invaded secondary forests of NW Argentina

    NASA Astrophysics Data System (ADS)

    Aragón, Roxana; Montti, Lia; Ayup, María Marta; Fernández, Romina

    2014-01-01

    Invasions of exotic tree species can cause profound changes in community composition and structure, and may even have legacy effects on nutrient cycling via litter production. In this study, we compared leaf litter decomposition of two invasive exotic trees (Ligustrum lucidum and Morus sp.) and two dominant native trees (Cinnamomum porphyria and Cupania vernalis) in native and invaded (Ligustrum-dominated) forest stands in NW Argentina. We measured leaf attributes and environmental characteristics in invaded and native stands to isolate the effects of litter quality and habitat characteristics. Species differed in their decomposition rates and, as predicted by the different species colonization status (pioneer vs. late successional), exotic species decayed more rapidly than native ones. Invasion by L. lucidum modified environmental attributes by reducing soil humidity. Decomposition constants (k) tended to be slightly lower (-5%) for all species in invaded stands. The high SLA, low tensile strength and low C:N of Morus sp. distinguish this species from the native ones and explain its higher decomposition rate. Contrary to our expectations, L. lucidum leaf attributes were similar to those of native species. Decomposition rates also differed between the two exotic species (35% higher in Morus sp.), presumably due to leaf attributes and colonization status. Given the high decomposition rate of L. lucidum litter (more than 6 times that of natives), we expect an acceleration of nutrient circulation at the ecosystem level in Ligustrum-dominated stands. This may occur in spite of the modified environmental conditions associated with L. lucidum invasion.
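
    Litter decomposition constants k like those compared above are commonly obtained from the single-exponential (Olson) model, m(t)/m0 = exp(-k t), so k = -ln(m_t/m_0)/t. The masses below are invented for illustration; the -5% invaded-stand adjustment mirrors the trend reported in the abstract.

```python
import math

def olson_k(m0_g, mt_g, t_years):
    """Olson decay constant from initial and remaining litter mass."""
    return -math.log(mt_g / m0_g) / t_years

k_native = olson_k(10.0, 6.0, 1.0)   # hypothetical: 40% mass loss in 1 yr
k_invaded = k_native * 0.95          # ~5% lower in invaded stands
print(round(k_native, 3), round(k_invaded, 3))  # -> 0.511 0.485
```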

  1. Measuring Nematic Susceptibilities from the Elastoresistivity Tensor

    NASA Astrophysics Data System (ADS)

    Hristov, A. T.; Shapiro, M. C.; Hlobil, Patrick; Maharaj, Akash; Chu, Jiun-Haw; Fisher, Ian

    The elastoresistivity tensor mijkl relates changes in resistivity to the strain on a material. As a fourth-rank tensor, it contains considerably more information about the material than the simpler (second-rank) resistivity tensor; in particular, certain elastoresistivity coefficients can be related to thermodynamic susceptibilities and serve as a direct probe of symmetry breaking at a phase transition. The aim of this talk is twofold. First, we enumerate how symmetry both constrains the structure of the elastoresistivity tensor into an easy-to-understand form and connects tensor elements to thermodynamic susceptibilities. In the process, we generalize previous studies of elastoresistivity to include the effects of magnetic field. Second, we describe an approach to measuring quantities in the elastoresistivity tensor with a novel transverse measurement, which is immune to relative strain offsets. These techniques are then applied to BaFe2As2 in a proof of principle measurement. This work is supported by the Department of Energy, Office of Science, Basic Energy Sciences, Materials Sciences and Engineering Division, under Contract DE-AC02-76SF00515.

  2. Thermal decomposition of energetic materials. 5. reaction processes of 1,3,5-trinitrohexahydro-s-triazine below its melting point.

    PubMed

    Maharrey, Sean; Behrens, Richard

    2005-12-15

    Through the use of simultaneous thermogravimetry modulated beam mass spectrometry, optical microscopy, hot-stage time-lapsed microscopy, and scanning electron microscopy measurements, the physical and chemical processes that control the thermal decomposition of 1,3,5-trinitrohexahydro-s-triazine (RDX) below its melting point (160-189 degrees C) have been identified. Two gas-phase reactions of RDX are predominant during the early stages of an experiment. One involves the loss of HONO and HNO and leads to the formation of H2O, NO, NO2, and oxy-s-triazine (OST) or s-triazine. The other involves the reaction of NO with RDX to form NO2 and 1-nitroso-3,5-dinitrohexahydro-s-triazine (ONDNTA), which subsequently decomposes to form a set of products of which CH2O and N2O are the most abundant. Products from the gas-phase RDX decomposition reactions, such as ONDNTA, deposit on the surface of the RDX particles and lead to the development of a new set of reaction pathways that occur on the surface of the RDX particles. The initial surface reactions occur on surfaces of those RDX particles in the sample that can accumulate the greatest amount of products from the gas-phase reactions. Initial surface reactions are characterized by the formation of islands of reactivity on the RDX surface and lead to the development of an orange-colored nonvolatile residue (NVR) film on the surface of the RDX particles. The NVR film is most likely formed via the decomposition of ONDNTA on the surface of the RDX particles. The NVR film is a nonstoichiometric and dynamic material, which reacts directly with RDX and ONDNTA, and is composed of remnants from RDX and ONDNTA molecules that have reacted with the NVR. Reactions involving the NVR become dominant during the later stage of the decomposition process. The NVR reacts with RDX to form ONDNTA via abstraction of an oxygen atom from an NO2 group. ONDNTA may undergo rapid loss of N2 and NO2 with the remaining portion of the molecule being

  3. The non-uniqueness of the atomistic stress tensor and its relationship to the generalized Beltrami representation

    NASA Astrophysics Data System (ADS)

    Admal, Nikhil Chandra; Tadmor, E. B.

    2016-08-01

    The non-uniqueness of the atomistic stress tensor is a well-known issue when defining continuum fields for atomistic systems. In this paper, we study the non-uniqueness of the atomistic stress tensor stemming from the non-uniqueness of the potential energy representation. In particular, we show using rigidity theory that the distribution associated with the potential part of the atomistic stress tensor can be decomposed into an irrotational part that is independent of the potential energy representation, and a traction-free solenoidal part. Therefore, we have identified for the atomistic stress tensor a discrete analog of the continuum generalized Beltrami representation (a version of the vector Helmholtz decomposition for symmetric tensors). We demonstrate the validity of these analogies using a numerical test. A program for performing the decomposition of the atomistic stress tensor called MDStressLab is available online at

  4. Woodland Decomposition.

    ERIC Educational Resources Information Center

    Napier, J.

    1988-01-01

    Outlines the role of the main organisms involved in woodland decomposition and discusses some of the variables affecting the rate of nutrient cycling. Suggests practical work that may be of value to high school students either as standard practice or long-term projects. (CW)

  5. Singular value decomposition for photon-processing nuclear imaging systems and applications for reconstruction and computing null functions

    NASA Astrophysics Data System (ADS)

    Jha, Abhinav K.; Barrett, Harrison H.; Frey, Eric C.; Clarkson, Eric; Caucci, Luca; Kupinski, Matthew A.

    2015-09-01

    Recent advances in technology are enabling a new class of nuclear imaging systems consisting of detectors that use real-time maximum-likelihood (ML) methods to estimate the interaction position, deposited energy, and other attributes of each photon-interaction event and store these attributes in a list format. This class of systems, which we refer to as photon-processing (PP) nuclear imaging systems, can be described by a fundamentally different mathematical imaging operator that allows processing of the continuous-valued photon attributes on a per-photon basis. Unlike conventional photon-counting (PC) systems that bin the data into images, PP systems do not have any binning-related information loss. Mathematically, while PC systems have an infinite-dimensional null space due to dimensionality considerations, PP systems do not necessarily suffer from this issue. Therefore, PP systems have the potential to provide improved performance in comparison to PC systems. To study these advantages, we propose a framework to perform the singular-value decomposition (SVD) of the PP imaging operator. We use this framework to perform the SVD of operators that describe a general two-dimensional (2D) planar linear shift-invariant (LSIV) PP system and a hypothetical continuously rotating 2D single-photon emission computed tomography (SPECT) PP system. We then discuss two applications of the SVD framework. The first application is to decompose the object being imaged by the PP imaging system into measurement and null components. We compare these components to the measurement and null components obtained with PC systems. In the process, we also present a procedure to compute the null functions for a PC system. The second application is designing analytical reconstruction algorithms for PP systems. The proposed analytical approach exploits the fact that PP systems acquire data in a continuous domain to estimate a continuous object function. The approach is parallelizable and
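
    For a toy discrete imaging operator H (the paper works with continuous-to-discrete and continuous-to-continuous operators, so this is only an analogy), the SVD splits any object f into a measurement component visible to the system and a null component that H maps to zero, mirroring the object decomposition described above. All sizes below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
H = rng.standard_normal((4, 10))   # 4 measurements, 10-pixel object
f = rng.standard_normal(10)        # object to be decomposed

U, s, Vt = np.linalg.svd(H, full_matrices=False)
V = Vt.T                           # columns span the measurement space
f_meas = V @ (V.T @ f)             # projection onto the row space of H
f_null = f - f_meas                # component invisible to the system

print(np.allclose(H @ f_null, 0, atol=1e-10))  # -> True
print(np.allclose(H @ f, H @ f_meas))          # -> True
```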

  6. Singular value decomposition for photon-processing nuclear imaging systems and applications for reconstruction and computing null functions.

    PubMed

    Jha, Abhinav K; Barrett, Harrison H; Frey, Eric C; Clarkson, Eric; Caucci, Luca; Kupinski, Matthew A

    2015-09-21

    Recent advances in technology are enabling a new class of nuclear imaging systems consisting of detectors that use real-time maximum-likelihood (ML) methods to estimate the interaction position, deposited energy, and other attributes of each photon-interaction event and store these attributes in a list format. This class of systems, which we refer to as photon-processing (PP) nuclear imaging systems, can be described by a fundamentally different mathematical imaging operator that allows processing of the continuous-valued photon attributes on a per-photon basis. Unlike conventional photon-counting (PC) systems that bin the data into images, PP systems do not have any binning-related information loss. Mathematically, while PC systems have an infinite-dimensional null space due to dimensionality considerations, PP systems do not necessarily suffer from this issue. Therefore, PP systems have the potential to provide improved performance in comparison to PC systems. To study these advantages, we propose a framework to perform the singular-value decomposition (SVD) of the PP imaging operator. We use this framework to perform the SVD of operators that describe a general two-dimensional (2D) planar linear shift-invariant (LSIV) PP system and a hypothetical continuously rotating 2D single-photon emission computed tomography (SPECT) PP system. We then discuss two applications of the SVD framework. The first application is to decompose the object being imaged by the PP imaging system into measurement and null components. We compare these components to the measurement and null components obtained with PC systems. In the process, we also present a procedure to compute the null functions for a PC system. The second application is designing analytical reconstruction algorithms for PP systems. The proposed analytical approach exploits the fact that PP systems acquire data in a continuous domain to estimate a continuous object function. The approach is parallelizable and

  8. Estimating missing tensor data by face synthesis for expression recognition

    NASA Astrophysics Data System (ADS)

    Tan, Huachun; Chen, Hao; Zhang, Jie

    2009-01-01

    In this paper, a new method of facial expression recognition is proposed for the case of missing tensor data. The missing tensor data are estimated by facial expression synthesis in order to construct the full tensor, which is then used for multi-factor face analysis. The full tensor allows the information in a given database to be used completely, and hence improves the performance of face analysis. Compared with the EM algorithm for missing-data estimation, the proposed method avoids an iterative process and reduces the estimation complexity. The proposed missing-tensor-data estimation is applied to expression recognition. The experimental results show that the proposed method performs better than using only the original, smaller tensor.

  9. Effect of mountain climatic elevation gradient and litter origin on decomposition processes: long-term experiment with litter-bags

    NASA Astrophysics Data System (ADS)

    Klimek, Beata; Niklińska, Maria; Chodak, Marcin

    2013-04-01

    Temperature is one of the most important factors affecting soil organic matter decomposition. Mountain areas, with their vertical gradients of temperature and precipitation, make it possible to observe changes similar to those seen across latitudes and may serve as an approximation of climatic change. The aim of the study was to compare the effects of climatic conditions and initial litter properties on decomposition processes and on the thermal sensitivity of forest litter. The litter was collected at three altitudes (600, 900 and 1200 m a.s.l.) in the Beskidy Mts (southern Poland), put into litter-bags and exposed in the field from autumn 2011. Litter collected at a given altitude was exposed both at the altitude from which it was taken and at the two other altitudes. The litter-bags were laid out on five mountains, treated as replicates. Starting in April 2012, single sets of litter-bags were collected every five weeks. The laboratory measurements included determination of dry-mass loss and of the chemical composition (Corg, Nt, St, Mg, Ca, Na, K, Cu, Zn) of the litter. In additional litter-bag sets, taken in spring and autumn 2012, microbial properties were measured. To determine the effect of litter properties and of the climatic conditions at the elevation sites on the thermal sensitivity of decomposing litter, the respiration rate of the litter was measured at 5°C, 15°C and 25°C and expressed as Q10 L and Q10 H (the ratios of respiration rate between 5°C and 15°C and between 15°C and 25°C, respectively). The functional diversity of soil microbes was measured with Biolog® ECO plates, and the structural diversity with phospholipid fatty acids (PLFA). Litter mass lost during the first year of incubation was highly variable, with mean mass loss of up to 30% of the initial mass. The autumn sampling showed that the mean respiration rate (per unit dry mass) of litter from the 600 m a.s.l. site exposed at 600 m a.s.l. was the highest at each tested temperature. In turn, the lowest mean
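
    The Q10 L and Q10 H indices described above are simple quotients of respiration rates measured 10°C apart. A minimal sketch; the rates below are illustrative placeholders, not values from the study.

```python
# Hypothetical respiration rates (e.g. µg CO2-C per g dry litter per hour)
# measured at the three incubation temperatures used in the study.
resp = {5: 1.2, 15: 2.9, 25: 6.1}

# Q10_L: thermal sensitivity over the low interval (5-15 °C);
# Q10_H: thermal sensitivity over the high interval (15-25 °C).
q10_low = resp[15] / resp[5]
q10_high = resp[25] / resp[15]

print(f"Q10_L = {q10_low:.2f}, Q10_H = {q10_high:.2f}")
# prints: Q10_L = 2.42, Q10_H = 2.10
```

    A Q10 near 2 (as in this illustrative data) means the respiration rate roughly doubles for each 10°C increase.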

  10. In-situ and self-distributed: A new understanding on catalyzed thermal decomposition process of ammonium perchlorate over Nd{sub 2}O{sub 3}

    SciTech Connect

    Zou, Min Wang, Xin Jiang, Xiaohong Lu, Lude

    2014-05-01

    The catalyzed thermal decomposition of ammonium perchlorate (AP) over neodymium oxide (Nd2O3) was investigated. The catalytic performances of nanometer-sized and micrometer-sized Nd2O3 were evaluated by differential scanning calorimetry (DSC). Contrary to common expectation, the catalysts of different sizes showed nearly identical catalytic activities. Based on the structural and morphological changes of the catalysts during the reaction, combined with mass spectrometry analyses and studies of unmixed samples, a new understanding of this catalytic process was proposed. We believe that newly formed neodymium oxychloride (NdOCl) is the real catalytic species in the overall thermal decomposition of AP over Nd2O3. In addition, a “self-distributed” process occurring within the reaction also contributes to the improvement of the overall catalytic activity. This work is of great value in understanding the roles of micrometer-sized catalysts in heterogeneous reactions, especially solid–solid reactions that generate large quantities of gaseous species. - Graphical abstract: In-situ and self-distributed reaction process in the thermal decomposition of AP catalyzed by Nd2O3. - Highlights: • Micro- and nano-Nd2O3 for the catalytic thermal decomposition of AP. • No essential difference in their catalytic performances. • Structural and morphological changes of the catalysts reveal the catalytic mechanism. • The catalytic process is an “in-situ and self-distributed” one.

  11. Entanglement, tensor networks and black hole horizons

    NASA Astrophysics Data System (ADS)

    Molina-Vilaplana, J.; Prior, J.

    2014-11-01

    We elaborate on a previous proposal by Hartman and Maldacena of a tensor network which accounts for the scaling of the entanglement entropy in a system at finite temperature. In this construction, the ordinary entanglement renormalization flow given by the class of tensor networks known as the Multi-Scale Entanglement Renormalization Ansatz (MERA) is supplemented by an additional entanglement structure at the length scale fixed by the temperature. The network comprises two copies of a MERA circuit with a fixed number of layers and a pure matrix product state which joins both copies by entangling the infrared degrees of freedom of the two MERA networks. The entanglement distribution within this bridge state defines reduced density operators on both sides which produce effects analogous to the presence of a black hole horizon when computing the entanglement entropy at finite temperature in the AdS/CFT correspondence. The entanglement and correlations during the thermalization process of a system after a quantum quench are also analyzed. To this end, a full tensor network representation of the action of local unitary operations on the bridge state is proposed; this amounts to a tensor network which grows in size by the addition of successive layers of bridge states. Finally, we discuss the holographic interpretation of the tensor network through a notion of distance within the network which emerges from its entanglement distribution.

  12. A LOW-COST PROCESS FOR THE SYNTHESIS OF NANOSIZE YTTRIA-STABILIZED ZIRCONIA (YSZ) BY MOLECULAR DECOMPOSITION

    SciTech Connect

    Anil V. Virkar

    2004-05-06

    This report summarizes the results of work done during the performance period of this project, between October 1, 2002 and December 31, 2003, with a three-month no-cost extension. The principal objective of this work was to develop a low-cost process for the synthesis of sinterable, fine YSZ powder. The process is based on molecular decomposition (MD), wherein very fine particles of YSZ are formed by: (1) mixing raw materials in powder form, (2) synthesizing a compound containing YSZ and a fugitive constituent by a conventional process, and (3) selectively leaching (decomposing) the fugitive constituent, leaving behind insoluble YSZ of very fine particle size. While there are many possible compounds that can be used as precursors, the one selected for the present work was Y-doped Na2ZrO3, in which the fugitive constituent is Na2O. It can readily be demonstrated that the potential cost of the MD process for the synthesis of very fine (or nanosize) YSZ is considerably lower than that of the commonly used processes, namely chemical co-precipitation and combustion synthesis. Based on materials cost alone, for a 100 kg batch, the cost of YSZ made by chemical co-precipitation is >$50/kg, while that of the MD process should be <$10/kg. Significant progress was made during the performance period; the highlights are given here in bullet form. (1) Of the two precursors listed in the Phase I proposal, namely Y-doped BaZrO3 and Y-doped Na2ZrO3, Y-doped Na2ZrO3 was selected for the synthesis of nanosize (or fine) YSZ, based on the potential cost of the precursor, the need to use only water for leaching, and the short time required for the process. (2) For the synthesis of calcia-stabilized zirconia (CSZ), which has the potential for use in place of YSZ in the anode of SOFCs, Ca-doped Na2ZrO3 was demonstrated to be a suitable precursor. (3) Synthesis of Y

  13. Laser beam direct writing of fine lines of alpha-Fe2O3 from metalorganic spin-coated films and transient behavior study of laser decomposition process

    NASA Astrophysics Data System (ADS)

    Xue, Songsheng; Ousi-Benomar, Wahib; Lessard, Roger A.

    1994-07-01

    Fine lines of α-Fe2O3 have been formed on quartz substrates by laser-beam direct writing on metalorganic spin-coated films. A modulated krypton-ion writing laser beam and a He-Ne probing laser beam were colinearly focused onto the films with a spot size of about 10 to 50 micrometers in diameter. A series of characterizations of the written lines has been conducted employing techniques ranging from thermogravimetric analysis, Fourier-transform infrared spectroscopy, scanning electron microscopy, and x-ray diffraction to transmission electron microscopy. In this way, a better understanding has been achieved of the metalorganic decomposition mechanism and of the structure and morphology of the laser-written lines. From the time-resolved transmittance change induced by krypton-ion laser pulse irradiation, the transient behavior of the laser decomposition process of metalorganic materials has also been studied.

  14. Superconducting tensor gravity gradiometer

    NASA Technical Reports Server (NTRS)

    Paik, H. J.

    1981-01-01

    The employment of superconductivity and other material properties at cryogenic temperatures to fabricate a sensitive, low-drift gravity gradiometer is described. The device yields a reduction in noise of four orders of magnitude over room-temperature gradiometers, and direct summation and subtraction of signals from accelerometers in varying orientations are possible with superconducting circuitry. Additional circuits permit determination of the linear and angular acceleration vectors independently of the measurement of the gravity gradient tensor. A dewar flask capable of maintaining helium in a liquid state for a year's duration is under development by NASA, and a superconducting tensor gravity gradiometer for the NASA Geodynamics Program is intended for a LEO polar trajectory to measure the harmonic expansion coefficients of the earth's gravity field up to order 300.

  15. Grid-based electronic structure calculations: The tensor decomposition approach

    NASA Astrophysics Data System (ADS)

    Rakhuba, M. V.; Oseledets, I. V.

    2016-05-01

    We present a fully grid-based approach for solving Hartree-Fock and all-electron Kohn-Sham equations based on a low-rank approximation of the three-dimensional electron orbitals. Due to the low-rank structure, the total complexity of the algorithm depends linearly on the one-dimensional grid size. This linear complexity allows the use of fine grids, e.g. 8192³ points, and thus a cheap extrapolation procedure. We test the proposed approach on closed-shell atoms up to argon, on several molecules, and on clusters of hydrogen atoms. All tests show systematic convergence to the required accuracy.
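
    The linear scaling with the one-dimensional grid size comes from never forming the full n³ array: orbitals are stored as (sums of) products of one-dimensional factors. A toy sketch of the simplest, rank-1 case (a Gaussian, chosen only for illustration; the paper's solvers use higher-rank formats):

```python
import numpy as np

n = 256                      # 1D grid size; the full 3D grid would hold n**3 points
x = np.linspace(-8.0, 8.0, n)

# A separable (rank-1) orbital-like function f(x,y,z) = g(x) g(y) g(z):
# storing three 1D factors costs 3n numbers instead of n**3.
g = np.exp(-x**2 / 2.0)
factors = (g, g, g)

def eval_rank1(factors, i, j, k):
    """Evaluate the rank-1 3D function at grid indices (i, j, k)."""
    gx, gy, gz = factors
    return gx[i] * gy[j] * gz[k]

# Cross-check against the explicit 3D value at one grid point.
i, j, k = 10, 100, 200
full_val = np.exp(-(x[i]**2 + x[j]**2 + x[k]**2) / 2.0)
assert np.isclose(eval_rank1(factors, i, j, k), full_val)
```

    A rank-R canonical format generalizes this to a sum of R such products, costing 3Rn numbers, which is why grids as fine as 8192³ become affordable when R stays small.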

  16. Direct solution of the Chemical Master Equation using quantized tensor trains.

    PubMed

    Kazeev, Vladimir; Khammash, Mustafa; Nip, Michael; Schwab, Christoph

    2014-03-01

    The Chemical Master Equation (CME) is a cornerstone of the stochastic analysis and simulation of models of biochemical reaction networks, yet direct solutions of the CME have remained elusive. Although several approaches overcome the infinite-dimensional nature of the CME through projections or other means, a common feature of these approaches is their susceptibility to the curse of dimensionality, i.e. the exponential growth of memory and computational requirements with the number of problem dimensions. We present a novel approach that has the potential to "lift" this curse of dimensionality. The approach is based on the recently proposed Quantized Tensor Train (QTT) formatted numerical linear algebra for the low-parametric numerical representation of tensors. The QTT decomposition admits both algorithms for basic tensor arithmetic, with complexity scaling linearly in the dimension (number of species) and sub-linearly in the mode size (maximum copy number), and a stable, quasi-optimal numerical tensor-rounding procedure. We show how the CME can be represented in QTT format, then use the exponentially convergent hp-discontinuous Galerkin discretization in time to reduce the CME evolution problem to a set of QTT-structured linear equations, solved at each time step using an algorithm based on Density Matrix Renormalization Group (DMRG) methods from quantum chemistry. Our method automatically adapts the "basis" of the solution at every time step, guaranteeing that it is large enough to capture the dynamics of interest but no larger than necessary, as this would increase the computational complexity. The approach is demonstrated on three examples from systems biology: an independent birth-death process, an enzymatic futile cycle, and a stochastic switch model. The numerical results on these examples demonstrate that the proposed QTT method achieves dramatic speedups and several orders of magnitude storage
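
    The quantization idea can be illustrated on an ordinary vector: a length-2^d vector is reshaped into a d-dimensional binary tensor and factored into a tensor train by successive truncated SVDs. The TT-SVD below is a generic textbook construction sketched for illustration, not the authors' implementation.

```python
import numpy as np

def tt_svd(vec, tol=1e-12):
    """Decompose a length-2**d vector into a quantized tensor train
    (one core per binary 'digit') via successive truncated SVDs."""
    d = int(np.log2(vec.size))
    assert 2**d == vec.size
    cores, rank = [], 1
    mat = vec.reshape(rank * 2, -1)
    for _ in range(d - 1):
        U, s, Vt = np.linalg.svd(mat, full_matrices=False)
        r_new = max(1, int(np.sum(s > tol * s[0])))
        cores.append(U[:, :r_new].reshape(rank, 2, r_new))
        rank = r_new
        mat = (np.diag(s[:r_new]) @ Vt[:r_new]).reshape(rank * 2, -1)
    cores.append(mat.reshape(rank, 2, 1))
    return cores

def tt_to_vec(cores):
    """Contract the train back into a full vector."""
    out = cores[0].reshape(2, -1)
    for core in cores[1:]:
        r, _, r2 = core.shape
        out = (out @ core.reshape(r, 2 * r2)).reshape(-1, r2)
    return out.reshape(-1)

# A smooth vector (samples of an exponential) has QTT ranks equal to 1,
# so 2**10 entries compress to 10 tiny cores.
x = np.exp(0.01 * np.arange(2**10))
cores = tt_svd(x)
assert np.allclose(tt_to_vec(cores), x)
```

    Structured vectors like this compress to rank-1 trains, which is the mechanism behind the sub-linear dependence on mode size claimed above.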

  18. Catalyst for sodium chlorate decomposition

    NASA Technical Reports Server (NTRS)

    Wydeven, T.

    1972-01-01

    The production of oxygen by rapid decomposition of a cobalt oxide and sodium chlorate mixture is discussed. Cobalt oxide serves as a catalyst to accelerate the reaction. The temperature conditions and chemical processes involved are described.

  19. Gauge- and frame-independent decomposition of nucleon spin

    SciTech Connect

    Wakamatsu, M.

    2011-01-01

    In a recent paper, we showed that the gauge-invariant decomposition of the nucleon spin is not necessarily unique, but that a preferable decomposition nevertheless exists from the observational viewpoint. What was not complete in that argument was a fully satisfactory answer to the following questions. Does the proposed gauge-invariant decomposition, especially the decomposition of the gluon total angular momentum into its spin and orbital parts, correspond to observables which can be extracted from high-energy deep-inelastic-scattering measurements? Is this decomposition not only gauge invariant but also Lorentz-frame independent, so that it can legitimately be thought to reflect an intrinsic property of the nucleon? We show that both questions can be answered affirmatively by making full use of a gauge-invariant decomposition of the covariant angular-momentum tensor of QCD in an arbitrary Lorentz frame.

  20. Carbon decomposition process of the residual biomass in the paddy soil of a single-crop rice field

    NASA Astrophysics Data System (ADS)

    Okada, K.; Iwata, T.

    2014-12-01

    In cultivated fields, residual organic matter is plowed into the soil after harvest and decays during the fallow season. Greenhouse gases such as CO2 and CH4 are generated by the decomposition of this organic matter and released into the atmosphere. In some fields open burning is traditionally carried out, in which the carbon in the residual matter is released to the atmosphere as CO2. However, the effect of burning on the carbon budget between croplands and the atmosphere has not been fully assessed. In this study, coarse organic matter (COM) in the paddy soil of a single-crop rice field was sampled at regular intervals between January 2011 and August 2014. The amount of carbon released from residual matter was estimated by analyzing the variation in the carbon content of COM, and the effects of soil temperature (Ts) and soil water content (SWC) at the paddy field on the rate of carbon decomposition were investigated. Although the rate of COM decrease was much smaller in the winter season, it accelerated in the warming season between April and June of every year. Decomposition then slowed during the following rice cultivation season despite the highest soil temperatures. In addition, the observational field was divided into two areas, and open burning experiments were conducted three times, in November 2011, 2012 and 2013. In each year three sampling surveys were carried out: of plants before harvest, and of residuals before and after the burning experiment. These surveys suggested that about 48±2% of the carbon content of the above-ground plants was removed as grain at harvest, and that about 27±2% of the carbon was emitted as CO2 by burning. The carbon content of the residuals plowed into the soil after harvest was estimated at 293±1 and 220±36 gC/m2 in the unburned and burned areas, respectively, based on the three-year average. It is estimated that 70% and 60% of the initial input of COM was decomposed after one year in the unburned and burned areas, respectively.

  1. Projectors and seed conformal blocks for traceless mixed-symmetry tensors

    NASA Astrophysics Data System (ADS)

    Costa, Miguel S.; Hansen, Tobias; Penedones, João; Trevisani, Emilio

    2016-07-01

    In this paper we derive the projectors onto all irreducible SO(d) representations (traceless mixed-symmetry tensors) that appear in the partial wave decomposition of a conformal correlator of four stress tensors in d dimensions. These projectors are given in closed form for arbitrary length l1 of the first row of the Young diagram. The appearance of Gegenbauer polynomials leads directly to recursion relations in l1 for seed conformal blocks. Further results include a differential operator that generates the projectors onto traceless mixed-symmetry tensors and the general normalization constant of the shadow operator.

  2. FaRe: A Mathematica package for tensor reduction of Feynman integrals

    NASA Astrophysics Data System (ADS)

    Re Fiorentin, Michele

    2016-08-01

    In this paper, we present FaRe, a package for Mathematica that implements the decomposition of a generic tensor Feynman integral, with arbitrary loop number, into scalar integrals in higher dimension. In order for FaRe to work, the package FeynCalc is needed, so that the tensor structure of the different contributions is preserved and the obtained scalar integrals are grouped accordingly. FaRe can prove particularly useful when it is preferable to handle Feynman integrals with free Lorentz indices and tensor reduction of high-order integrals is needed. This can then be achieved with several powerful existing tools.

  3. Killing and conformal Killing tensors

    NASA Astrophysics Data System (ADS)

    Heil, Konstantin; Moroianu, Andrei; Semmelmann, Uwe

    2016-08-01

    We introduce an appropriate formalism in order to study conformal Killing (symmetric) tensors on Riemannian manifolds. We reprove in a simple way some known results in the field and obtain several new results, like the classification of conformal Killing 2-tensors on Riemannian products of compact manifolds, Weitzenböck formulas leading to non-existence results, and construct various examples of manifolds with conformal Killing tensors.

  4. Thermal decomposition of [Co(en)3][Fe(CN)6]∙ 2H2O: Topotactic dehydration process, valence and spin exchange mechanism elucidation

    PubMed Central

    2013-01-01

    Background: The Prussian blue analogues are a well-known and extensively studied group of coordination compounds with many remarkable applications arising from their ion-exchange, electron-transfer and magnetic properties. Among them, Co-Fe Prussian blue analogues have been extensively studied for their photoinduced magnetization. Surprisingly, their suitability as precursors for the solid-state synthesis of magnetic nanoparticles is almost unexplored. In this paper, the mechanism of thermal decomposition of [Co(en)3][Fe(CN)6]∙2H2O (1a) is elucidated, including the topotactic dehydration, suggested valence- and spin-exchange mechanisms, and the formation of a CoFe2O4-Co3O4 (3:1) mixture as the final product of thermal degradation. Results: The course of the thermal decomposition of 1a in an air atmosphere up to 600°C was monitored by TG/DSC techniques, 57Fe Mössbauer and IR spectroscopy. First, topotactic dehydration of 1a to the hemihydrate [Co(en)3][Fe(CN)6]∙1/2H2O (1b) occurred with preservation of the single-crystal character, as confirmed by X-ray diffraction analysis. The subsequent thermal decomposition proceeded in four further stages involving intermediates varying in the valence and spin states of both transition metal ions, i.e. [FeII(en)2(μ-NC)CoIII(CN)4], [FeIII(NH2CH2CH3)2(μ-NC)2CoII(CN)3] and FeIII[CoII(CN)5], which were identified mainly from 57Fe Mössbauer, IR spectral and elemental analyses. Thermal decomposition was complete at 400°C, when superparamagnetic phases of CoFe2O4 and Co3O4 in a molar ratio of 3:1 were formed. During further temperature increase (450 and 600°C), the ongoing crystallization process gave a new ferromagnetic phase attributed to CoFe2O4-Co3O4 nanocomposite particles, whose formation was confirmed by XRD and TEM analyses. The in-field (5 K / 5 T) Mössbauer spectrum revealed canting of the Fe(III) spins in the almost fully inverse spinel structure of CoFe2O4. Conclusions: It has been found

  5. Notes on super Killing tensors

    NASA Astrophysics Data System (ADS)

    Howe, P. S.; Lindström, U.

    2016-03-01

    The notion of a Killing tensor is generalised to a superspace setting. Conserved quantities associated with these are defined for superparticles and Poisson brackets are used to define a supersymmetric version of the even Schouten-Nijenhuis bracket. Superconformal Killing tensors in flat superspaces are studied for spacetime dimensions 3,4,5,6 and 10. These tensors are also presented in analytic superspaces and super-twistor spaces for 3,4 and 6 dimensions. Algebraic structures associated with superconformal Killing tensors are also briefly discussed.

  6. On Endomorphisms of Quantum Tensor Space

    NASA Astrophysics Data System (ADS)

    Lehrer, Gustav Isaac; Zhang, Ruibin

    2008-12-01

    We give a presentation of the endomorphism algebra End_{U_q(sl_2)}(V^{⊗r}), where V is the three-dimensional irreducible module for quantum sl_2 over the function field C(q^{1/2}). This will be as a quotient of the Birman-Wenzl-Murakami algebra BMW_r(q) := BMW_r(q^{-4}, q^2 - q^{-2}) by an ideal generated by a single idempotent Φ_q. Our presentation is in analogy with the case where V is replaced by the two-dimensional irreducible U_q(sl_2)-module, the BMW algebra is replaced by the Hecke algebra H_r(q) of type A_{r-1}, Φ_q is replaced by the quantum alternator in H_3(q), and the endomorphism algebra is the classical realisation of the Temperley-Lieb algebra on tensor space. In particular, we show that all relations among the endomorphisms defined by the R-matrices on V^{⊗r} are consequences of relations among the three R-matrices acting on V^{⊗4}. The proof makes extensive use of the theory of cellular algebras. Potential applications include the decomposition of tensor powers when q is a root of unity.

  7. Reducing tensor magnetic gradiometer data for unexploded ordnance detection

    USGS Publications Warehouse

    Bracken, Robert E.; Brown, Philip J.

    2005-01-01

    We performed a survey to demonstrate the effectiveness of a prototype tensor magnetic gradiometer system (TMGS) for detection of buried unexploded ordnance (UXO). In order to achieve a useful result, we designed a data-reduction procedure that resulted in a realistic magnetic gradient tensor and devised a simple way of viewing complicated tensor data, not only to assess the validity of the final resulting tensor, but also to preview the data at interim stages of processing. The final processed map of the surveyed area clearly shows a sharp anomaly that peaks almost directly over the target UXO. This map agrees well with a modeled map derived from dipolar sources near the known target locations. From this agreement, it can be deduced that the reduction process is valid, making the prototype TMGS a foundation for development of future systems and processes.
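
    The magnetic gradient tensor a TMGS measures is the 3×3 matrix of spatial derivatives of the field components; in a source-free region it is symmetric and traceless, so only five of its nine components are independent. A toy check on a dipole source (a crude stand-in for a compact UXO target; units and the constant prefactor are dropped for illustration):

```python
import numpy as np

def dipole_field(r, m):
    """Magnetic field of a point dipole with moment m at offset r
    (constant prefactor dropped for illustration)."""
    rn = np.linalg.norm(r)
    return 3.0 * r * (m @ r) / rn**5 - m / rn**3

def gradient_tensor(r, m, h=1e-5):
    """3x3 magnetic gradient tensor dB_i/dx_j by central differences."""
    G = np.zeros((3, 3))
    for j in range(3):
        dr = np.zeros(3)
        dr[j] = h
        G[:, j] = (dipole_field(r + dr, m) - dipole_field(r - dr, m)) / (2 * h)
    return G

m = np.array([0.0, 0.0, 1.0])   # hypothetical dipole moment
r = np.array([2.0, 1.0, 3.0])   # observation point relative to the source

G = gradient_tensor(r, m)
# Outside the source region, div B = 0 and curl B = 0, so the
# gradient tensor is traceless and symmetric.
assert abs(np.trace(G)) < 1e-8
assert np.allclose(G, G.T, atol=1e-8)
```

    Checking symmetry and tracelessness of the reduced tensor is one simple validity test of the kind the data-reduction procedure above relies on.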

  8. Full Moment Tensor Inversion as a Practical Tool in Case of Discrimination of Tectonic and Anthropogenic Seismicity in Poland

    NASA Astrophysics Data System (ADS)

    Lizurek, Grzegorz

    2016-08-01

    Tectonic seismicity in Poland is sparse; the largest known event, of magnitude 5.6, occurred near Myślenice in the 17th century. On the other hand, anthropogenic seismicity is among the highest in Europe, related, for example, to underground mining in the Upper Silesian Coal Basin (USCB) and the Legnica-Głogów Copper District (LGCD), open-pit mining in the "Bełchatów" brown coal mine, and the reservoir impoundment of the Czorsztyn artificial lake. The level of seismic activity in these areas varies from tens to thousands of events per year. Focal mechanisms and full moment tensor (MT) decomposition allow a deeper understanding of the seismogenic processes leading to tectonic, induced and triggered seismic events. The non-DC components of moment tensors are considered an indicator of induced seismicity. In this work, MT inversion and decomposition are shown to be robust tools for identifying collapse-type events as well as other induced events in Polish underground mining areas. The robustness and limitations of the presented method are illustrated by synthetic tests and by analyzing weak tectonic earthquakes. The spurious non-DC components of full MT solutions due to noise and poor focal coverage are discussed. The results of the MT inversions of human-related and tectonic earthquakes from Poland indicate that this method is a useful part of a workflow for discriminating tectonic and anthropogenic seismicity.
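
    The DC/non-DC split mentioned above comes from an eigenvalue decomposition of the moment tensor into isotropic (ISO), double-couple (DC) and compensated-linear-vector-dipole (CLVD) parts. A sketch using one widely used percentage convention (ISO from the trace, CLVD from the deviatoric eigenvalue ratio); this is a generic illustration, not the paper's inversion code.

```python
import numpy as np

def decompose_mt(M):
    """Decompose a symmetric moment tensor into ISO, CLVD and DC
    percentages (one common convention; others exist)."""
    iso = np.trace(M) / 3.0
    dev = M - iso * np.eye(3)
    eig = np.linalg.eigvalsh(dev)
    # Order deviatoric eigenvalues by absolute value: |m1| <= |m2| <= |m3|.
    m = eig[np.argsort(np.abs(eig))]
    eps = -m[0] / abs(m[2]) if m[2] != 0 else 0.0
    p_iso = 100.0 * iso / (abs(iso) + abs(m[2]))
    p_clvd = 2.0 * eps * (100.0 - abs(p_iso))
    p_dc = 100.0 - abs(p_iso) - abs(p_clvd)
    return p_iso, p_clvd, p_dc

# A pure double-couple source (strike-slip): no ISO, no CLVD component.
M_dc = np.array([[0.0, 1.0, 0.0],
                 [1.0, 0.0, 0.0],
                 [0.0, 0.0, 0.0]])
iso, clvd, dc = decompose_mt(M_dc)
assert abs(iso) < 1e-9 and abs(clvd) < 1e-9 and abs(dc - 100.0) < 1e-6

# A pure explosion (isotropic tensor) is 100% ISO; collapse-type mining
# events show up with large negative ISO components.
iso, _, _ = decompose_mt(np.eye(3))
assert abs(iso - 100.0) < 1e-9
```

    Large non-DC percentages from such a decomposition are exactly the signature used in the discrimination workflow, with the caveat (discussed in the abstract) that noise and poor focal coverage can produce spurious non-DC components.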

  9. A self-documenting source-independent data format for computer processing of tensor time series. [for filing satellite geophysical data

    NASA Technical Reports Server (NTRS)

    Mcpherron, R. L.

    1976-01-01

    The UCLA Space Science Group has developed a fixed format intermediate data set called a block data set, which is designed to hold multiple segments of multicomponent sampled data series. The format is sufficiently general so that tensor functions of one or more independent variables can be stored in the form of virtual data. This makes it possible for the unit data records of the block data set to be arrays of a single dependent variable rather than discrete samples. The format is self-documenting with parameter, label and header records completely characterizing the contents of the file. The block data set has been applied to the filing of satellite data (of ATS-6 among others).

  10. Tensor Target Polarization at TRIUMF

    NASA Astrophysics Data System (ADS)

    Smith, G.

    2014-10-01

The first measurements of tensor observables in π d⃗ scattering experiments were performed in the mid-1980s at TRIUMF, and later at SIN/PSI. The full suite of tensor observables accessible in π d⃗ elastic scattering was measured: T20, T21, and T22. The vector analyzing power iT11 was also measured. These results led to a better understanding of the three-body theory used to describe this reaction. A direct measurement of the target tensor polarization was also made, independent of the usual NMR techniques, by exploiting the (nearly) model-independent result for the tensor analyzing power at 90° (c.m.) in the π d⃗ → 2p reaction. This method was also used to check efforts to enhance the tensor polarization by RF burning of the NMR spectrum. A brief description of the methods developed to measure and analyze these experiments is provided.

  11. Catalytic performance of limonite in the decomposition of ammonia in the coexistence of typical fuel gas components produced in an air-blown coal gasification process

    SciTech Connect

    Naoto Tsubouchi; Hiroyuki Hashimoto; Yasuo Ohtsuka

    2007-12-15

Catalytic decomposition of 2000 ppm NH₃ in different atmospheres over an Australian α-FeOOH-rich limonite ore at 750-950°C under a high space velocity of 45,000 h⁻¹ has been studied with a cylindrical quartz reactor to develop a novel hot-gas cleanup method of removing NH₃ from fuel gas produced in an air-blown coal gasification process for integrated gasification combined cycle (IGCC) technology. The limonite shows very high catalytic activity for the decomposition of NH₃ diluted with inert gas at 750°C, regardless of whether the catalyst material is subjected to H₂ reduction before the reaction. Conversion of NH₃ to N₂ over the reduced limonite reaches ≥99% at 750-950°C, and the catalyst maintains this high performance for about 40 h at 750°C. When the decomposition reaction is carried out in the presence of fuel gas components, the coexistence of syngas (20% CO/10% H₂) causes not only serious deactivation of the limonite catalyst but also appreciable formation of deposited carbon and CO₂. On the other hand, the addition of 10% CO₂ or 3% H₂O to the syngas improves the catalytic performance and almost completely suppresses carbon deposition; the NH₃ conversion in the 3% H₂O-containing syngas reaches about 90% and almost 100% at 750 and 850°C, respectively. Influential factors controlling the catalytic activity of the limonite ore in the coexistence of fuel gas components are discussed on the basis of powder X-ray diffraction measurements, thermodynamic calculations, and some model experiments. 16 refs., 11 figs., 1 tab.

  12. Extracting the diffusion tensor from molecular dynamics simulation with Milestoning

    SciTech Connect

    Mugnai, Mauro L.; Elber, Ron

    2015-01-07

    We propose an algorithm to extract the diffusion tensor from Molecular Dynamics simulations with Milestoning. A Kramers-Moyal expansion of a discrete master equation, which is the Markovian limit of the Milestoning theory, determines the diffusion tensor. To test the algorithm, we analyze overdamped Langevin trajectories and recover a multidimensional Fokker-Planck equation. The recovery process determines the flux through a mesh and estimates local kinetic parameters. Rate coefficients are converted to the derivatives of the potential of mean force and to coordinate dependent diffusion tensor. We illustrate the computation on simple models and on an atomically detailed system—the diffusion along the backbone torsions of a solvated alanine dipeptide.
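The core idea of reading a diffusion coefficient off a trajectory via the second Kramers-Moyal coefficient can be sketched in a one-dimensional, constant-diffusion toy case (this is the elementary estimator, not the Milestoning machinery itself; all parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
D_true, dt, n_steps = 0.7, 1e-3, 200_000

# Overdamped Langevin trajectory on a flat potential: dx = sqrt(2 D dt) * xi
steps = np.sqrt(2.0 * D_true * dt) * rng.standard_normal(n_steps)
x = np.cumsum(steps)

# Second Kramers-Moyal coefficient: D ~ <dx^2> / (2 dt)
dx = np.diff(x)
D_est = np.mean(dx**2) / (2.0 * dt)
print(D_est)  # close to 0.7
```

In the paper this local estimate generalizes to a position-dependent tensor, with fluxes through milestones replacing the naive displacement statistics.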

  13. Extracting the diffusion tensor from molecular dynamics simulation with Milestoning.

    PubMed

    Mugnai, Mauro L; Elber, Ron

    2015-01-01

We propose an algorithm to extract the diffusion tensor from Molecular Dynamics simulations with Milestoning. A Kramers-Moyal expansion of a discrete master equation, which is the Markovian limit of the Milestoning theory, determines the diffusion tensor. To test the algorithm, we analyze overdamped Langevin trajectories and recover a multidimensional Fokker-Planck equation. The recovery process determines the flux through a mesh and estimates local kinetic parameters. Rate coefficients are converted to the derivatives of the potential of mean force and to coordinate dependent diffusion tensor. We illustrate the computation on simple models and on an atomically detailed system: the diffusion along the backbone torsions of a solvated alanine dipeptide.

  14. Skyrme tensor force in heavy ion collisions

    NASA Astrophysics Data System (ADS)

    Stevenson, P. D.; Suckling, E. B.; Fracasso, S.; Barton, M. C.; Umar, A. S.

    2016-05-01

Background: It is generally acknowledged that the time-dependent Hartree-Fock (TDHF) method provides a useful foundation for a fully microscopic many-body theory of low-energy heavy ion reactions. The TDHF method is also used in nuclear physics in the small-amplitude domain, where it provides a useful description of collective states, and is based on the mean-field formalism, which has been a relatively successful approximation to the nuclear many-body problem. Currently, the TDHF theory is being widely used in the study of fusion excitation functions, fission, and deep-inelastic scattering of heavy mass systems, while providing a natural foundation for many other studies. Purpose: With the advancement of computational power it is now possible to undertake TDHF calculations without any symmetry assumptions and to incorporate the major strides made by the nuclear structure community in improving the energy density functionals used in these calculations. In particular, time-odd and tensor terms in these functionals are naturally present during the dynamical evolution, while being absent or minimally important for most static calculations. The parameters of these terms are determined by the requirement of Galilean invariance or local gauge invariance, but their significance for the reaction dynamics has not been fully studied. This work addresses this question with emphasis on the tensor force. Method: The full version of the Skyrme force, including terms arising only from the Skyrme tensor force, is applied to the study of collisions within a completely symmetry-unrestricted TDHF implementation. Results: We examine upper fusion thresholds with and without the tensor force terms and find an effect on the fusion threshold energy on the order of several MeV. Details of the distribution of the energy among terms in the energy density functional are also discussed. Conclusions: Terms in the energy density functional linked to the tensor force can play a non-negligible role in heavy ion reaction dynamics.

  15. Efficient Nonnegative Tucker Decompositions: Algorithms and Uniqueness.

    PubMed

    Zhou, Guoxu; Cichocki, Andrzej; Zhao, Qibin; Xie, Shengli

    2015-12-01

    Nonnegative Tucker decomposition (NTD) is a powerful tool for the extraction of nonnegative parts-based and physically meaningful latent components from high-dimensional tensor data while preserving the natural multilinear structure of data. However, as the data tensor often has multiple modes and is large scale, the existing NTD algorithms suffer from a very high computational complexity in terms of both storage and computation time, which has been one major obstacle for practical applications of NTD. To overcome these disadvantages, we show how low (multilinear) rank approximation (LRA) of tensors is able to significantly simplify the computation of the gradients of the cost function, upon which a family of efficient first-order NTD algorithms are developed. Besides dramatically reducing the storage complexity and running time, the new algorithms are quite flexible and robust to noise, because any well-established LRA approaches can be applied. We also show how nonnegativity incorporating sparsity substantially improves the uniqueness property and partially alleviates the curse of dimensionality of the Tucker decompositions. Simulation results on synthetic and real-world data justify the validity and high efficiency of the proposed NTD algorithms.
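For reference, a plain multiplicative-update NTD in numpy serves as the simple baseline that LRA-based algorithms like those in the paper accelerate (this is a Lee-Seung-style sketch on a tiny 3-way tensor, not the paper's method):

```python
import numpy as np

def ntd(X, ranks, n_iter=500, eps=1e-12, seed=0):
    """Nonnegative Tucker decomposition X ~ G x1 A x2 B x3 C via
    multiplicative updates (a simple reference version)."""
    rng = np.random.default_rng(seed)
    I, J, K = X.shape
    A = rng.random((I, ranks[0]))
    B = rng.random((J, ranks[1]))
    C = rng.random((K, ranks[2]))
    G = rng.random(ranks)
    for _ in range(n_iter):
        Xh = np.einsum('abc,ia,jb,kc->ijk', G, A, B, C)
        A *= np.einsum('ijk,jb,kc,abc->ia', X, B, C, G) / \
             (np.einsum('ijk,jb,kc,abc->ia', Xh, B, C, G) + eps)
        Xh = np.einsum('abc,ia,jb,kc->ijk', G, A, B, C)
        B *= np.einsum('ijk,ia,kc,abc->jb', X, A, C, G) / \
             (np.einsum('ijk,ia,kc,abc->jb', Xh, A, C, G) + eps)
        Xh = np.einsum('abc,ia,jb,kc->ijk', G, A, B, C)
        C *= np.einsum('ijk,ia,jb,abc->kc', X, A, B, G) / \
             (np.einsum('ijk,ia,jb,abc->kc', Xh, A, B, G) + eps)
        Xh = np.einsum('abc,ia,jb,kc->ijk', G, A, B, C)
        G *= np.einsum('ijk,ia,jb,kc->abc', X, A, B, C) / \
             (np.einsum('ijk,ia,jb,kc->abc', Xh, A, B, C) + eps)
    return G, A, B, C

# Recover an exactly low-rank nonnegative tensor
rng = np.random.default_rng(1)
X = np.einsum('abc,ia,jb,kc->ijk', rng.random((2, 2, 2)),
              rng.random((5, 2)), rng.random((6, 2)), rng.random((4, 2)))
G, A, B, C = ntd(X, (2, 2, 2))
Xh = np.einsum('abc,ia,jb,kc->ijk', G, A, B, C)
err = np.linalg.norm(X - Xh) / np.linalg.norm(X)
print(err)  # small relative error on exactly low-rank data
```

Every update here touches the full tensor X, which is exactly the cost the paper's low-rank-approximation trick avoids.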

  16. Modified foreground segmentation for object tracking using wavelets in a tensor framework

    NASA Astrophysics Data System (ADS)

    Kapoor, Rajiv; Rohilla, Rajesh

    2015-09-01

    Subspace-based techniques have become important in behaviour analysis, appearance modelling and tracking. Various vector and tensor subspace learning techniques are already known that perform their operations in offline as well as in an online manner. In this work, we have improved upon a tensor-based subspace learning by using fourth-order decomposition and wavelets so as to have an advanced adaptive algorithm for robust and efficient background modelling and tracking in coloured video sequences. The proposed algorithm known as fourth-order incremental tensor subspace learning algorithm uses the spatio-colour-temporal information by adaptive online update of the means and the eigen basis for each unfolding matrix using tensor decomposition to fourth-order image tensors. The proposed method employs the wavelet transformation to an optimum decomposition level in order to reduce the computational complexity by working on the approximate counterpart of the original scenes and also reduces noise in the given scene. Our tracking method is an unscented particle filter that utilises appearance knowledge and estimates the new state of the intended object. Various experiments have been performed to demonstrate the promising and convincing nature of the proposed method and the method works better than existing methods.

  17. Link prediction on evolving graphs using matrix and tensor factorizations.

    SciTech Connect

    Dunlavy, Daniel M.; Acar, Evrim; Kolda, Tamara Gibson

    2010-06-01

    The data in many disciplines such as social networks, web analysis, etc. is link-based, and the link structure can be exploited for many different data mining tasks. In this paper, we consider the problem of temporal link prediction: Given link data for time periods 1 through T, can we predict the links in time period T + 1? Specifically, we look at bipartite graphs changing over time and consider matrix- and tensor-based methods for predicting links. We present a weight-based method for collapsing multi-year data into a single matrix. We show how the well-known Katz method for link prediction can be extended to bipartite graphs and, moreover, approximated in a scalable way using a truncated singular value decomposition. Using a CANDECOMP/PARAFAC tensor decomposition of the data, we illustrate the usefulness of exploiting the natural three-dimensional structure of temporal link data. Through several numerical experiments, we demonstrate that both matrix- and tensor-based techniques are effective for temporal link prediction despite the inherent difficulty of the problem.
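The low-rank Katz idea can be sketched for a symmetric adjacency matrix, where a truncated eigendecomposition plays the role of the truncated SVD used in the paper (the bipartite extension replaces eigenpairs with singular triplets):

```python
import numpy as np

def katz_exact(A, beta):
    """Katz scores: sum_{k>=1} beta^k A^k = (I - beta*A)^(-1) - I."""
    n = A.shape[0]
    return np.linalg.inv(np.eye(n) - beta * A) - np.eye(n)

def katz_spectral(A, beta, rank):
    """Low-rank Katz approximation from the top eigenpairs of a symmetric
    adjacency matrix: Q diag(beta*l / (1 - beta*l)) Q^T."""
    w, Q = np.linalg.eigh(A)
    idx = np.argsort(np.abs(w))[::-1][:rank]   # keep largest-magnitude eigenvalues
    w, Q = w[idx], Q[:, idx]
    return Q @ np.diag(beta * w / (1.0 - beta * w)) @ Q.T

# 5-node undirected path graph
A = np.zeros((5, 5))
for i in range(4):
    A[i, i + 1] = A[i + 1, i] = 1.0
beta = 0.1                                   # needs beta < 1/spectral_radius
full = katz_spectral(A, beta, rank=5)
print(np.allclose(full, katz_exact(A, beta)))  # True at full rank
```

At full rank the spectral form reproduces the matrix-inverse definition exactly; truncating the rank gives the scalable approximation used for prediction.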

  18. Parallel Tensor Compression for Large-Scale Scientific Data.

    SciTech Connect

    Kolda, Tamara G.; Ballard, Grey; Austin, Woody Nathan

    2015-10-01

As parallel computing trends towards the exascale, scientific data produced by high-fidelity simulations are growing increasingly massive. For instance, a simulation on a three-dimensional spatial grid with 512 points per dimension that tracks 64 variables per grid point for 128 time steps yields 8 TB of data. By viewing the data as a dense five-way tensor, we can compute a Tucker decomposition to find inherent low-dimensional multilinear structure, achieving compression ratios of up to 10000 on real-world data sets with negligible loss in accuracy. So that we can operate on such massive data, we present the first-ever distributed-memory parallel implementation of the Tucker decomposition, whose key computations correspond to parallel linear algebra operations, albeit with nonstandard data layouts. Our approach specifies a data distribution for tensors that avoids any tensor data redistribution, either locally or in parallel. We provide accompanying analysis of the computation and communication costs of the algorithms. To demonstrate the compression and accuracy of the method, we apply our approach to real-world data sets from combustion science simulations. We also provide detailed performance results, including parallel performance in both weak and strong scaling experiments.
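A minimal truncated higher-order SVD, one common way to compute a Tucker decomposition, illustrates where the compression comes from (the paper parallelizes these same linear-algebra kernels; sizes below are toy values):

```python
import numpy as np

def hosvd(X, ranks):
    """Truncated HOSVD: factor matrices from the leading left singular
    vectors of each mode unfolding, core tensor by projection."""
    factors = []
    for n, r in enumerate(ranks):
        Xn = np.moveaxis(X, n, 0).reshape(X.shape[n], -1)   # mode-n unfolding
        U, _, _ = np.linalg.svd(Xn, full_matrices=False)
        factors.append(U[:, :r])
    core = np.einsum('ijk,ia,jb,kc->abc', X, *factors)
    return core, factors

# Compress an exactly low-multilinear-rank tensor
rng = np.random.default_rng(0)
X = np.einsum('abc,ia,jb,kc->ijk', rng.standard_normal((3, 3, 3)),
              rng.standard_normal((20, 3)), rng.standard_normal((20, 3)),
              rng.standard_normal((20, 3)))
core, (U1, U2, U3) = hosvd(X, (3, 3, 3))
Xh = np.einsum('abc,ia,jb,kc->ijk', core, U1, U2, U3)
ratio = X.size / (core.size + U1.size + U2.size + U3.size)
err = np.linalg.norm(X - Xh) / np.linalg.norm(X)
print(ratio, err)   # ~38x compression with negligible error
```

The storage drops from the full tensor to a small core plus thin factor matrices, which is why compression ratios grow so quickly with tensor size.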

  19. The Hy-C process (thermal decomposition of natural gas): Potentially the lowest cost source of hydrogen with the least CO{sub 2} emission

    SciTech Connect

    Steinberg, M.

    1994-12-01

    The abundance of natural gas as a natural resource and its high hydrogen content make it a prime candidate for a low cost supply of hydrogen. The thermal decomposition of natural gas by methane pyrolysis produces carbon and hydrogen. The process energy required to produce one mol of hydrogen is only 5.3% of the higher heating value of methane. The thermal efficiency for hydrogen production as a fuel without the use of carbon as a fuel, can be as high as 60%. Conventional steam reforming of methane requires 8.9% process energy per mole of hydrogen even though 4 moles of hydrogen can be produced per mole of methane, compared to 2 moles by methane pyrolysis. When considering greenhouse global gas warming, methane pyrolysis produces the least amount of CO{sub 2} emissions per unit of hydrogen and can be totally eliminated when the carbon produced is either sequestered or sold as a materials commodity, and hydrogen is used to fuel the process. Conventional steam reforming of natural gas and CO shifting produces large amounts of CO{sub 2} emissions. The energy requirement for non-fossil, solar, nuclear, and hydropower production of hydrogen, mainly through electrolysis, is much greater than that from natural gas. From the resource available energy and environmental points of view, production of hydrogen by methane pyrolysis is most attractive. The by-product carbon black, when credited as a saleable material, makes hydrogen by thermal decomposition of natural gas (the Hy-C process) potentially the lowest cost source of large amounts of hydrogen.

  20. Relationship between the Decomposition Process of Coarse Woody Debris and Fungal Community Structure as Detected by High-Throughput Sequencing in a Deciduous Broad-Leaved Forest in Japan

    PubMed Central

    Yamashita, Satoshi; Masuya, Hayato; Abe, Shin; Masaki, Takashi; Okabe, Kimiko

    2015-01-01

We examined the relationship between the community structure of wood-decaying fungi, detected by high-throughput sequencing, and the decomposition rate using 13 years of data from a forest dynamics plot. For molecular analysis and wood density measurements, drill dust samples were collected from logs and stumps of Fagus and Quercus in the plot. Regression using a negative exponential model between wood density and time since death revealed that the decomposition rate of Fagus was greater than that of Quercus. The residual between the expected value obtained from the regression curve and the observed wood density was used as a decomposition rate index. Principal component analysis showed that the fungal community compositions of both Fagus and Quercus changed with time since death. Principal component analysis axis scores were used as an index of fungal community composition. A structural equation model for each wood genus was used to assess the effect of fungal community structure traits on the decomposition rate and how the fungal community structure was determined by the traits of coarse woody debris. Results of the structural equation model suggested that the decomposition rate of Fagus was affected by two fungal community composition components: one that was affected by time since death and another that was not affected by the traits of coarse woody debris. In contrast, the decomposition rate of Quercus was not affected by coarse woody debris traits or fungal community structure. These findings suggest that, in the case of Fagus coarse woody debris, the fungal community structure is related to the decomposition process of its host substrate. Because fungal community structure is affected partly by the decay stage and wood density of its substrate, these factors influence each other. Further research on interactive effects is needed to improve our understanding of the relationship between fungal community structure and the woody debris decomposition process.
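The negative exponential regression and the residual-based decomposition rate index can be sketched on synthetic data (the densities, rates, and noise level below are invented for illustration, not the study's values):

```python
import numpy as np

# Synthetic wood-density series: rho(t) = rho0 * exp(-k t) with scatter
rng = np.random.default_rng(0)
t = rng.uniform(0.0, 13.0, 60)            # years since death
rho0, k = 0.60, 0.12                      # g/cm^3 and 1/yr (illustrative)
rho = rho0 * np.exp(-k * t) * np.exp(0.05 * rng.standard_normal(60))

# Log-linear least squares: ln(rho) = ln(rho0) - k t
slope, intercept = np.polyfit(t, np.log(rho), 1)
k_hat, rho0_hat = -slope, np.exp(intercept)

# Decomposition-rate index: residual between observed and expected density
# (a negative residual marks faster-than-average decay)
index = rho - rho0_hat * np.exp(-k_hat * t)
print(k_hat, rho0_hat)   # near the generating values 0.12 and 0.60
```

The fitted curve gives the genus-level decay rate, while the per-sample residuals provide the index that is then related to fungal community composition.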

  1. Relationship between the decomposition process of coarse woody debris and fungal community structure as detected by high-throughput sequencing in a deciduous broad-leaved forest in Japan.

    PubMed

    Yamashita, Satoshi; Masuya, Hayato; Abe, Shin; Masaki, Takashi; Okabe, Kimiko

    2015-01-01

    We examined the relationship between the community structure of wood-decaying fungi, detected by high-throughput sequencing, and the decomposition rate using 13 years of data from a forest dynamics plot. For molecular analysis and wood density measurements, drill dust samples were collected from logs and stumps of Fagus and Quercus in the plot. Regression using a negative exponential model between wood density and time since death revealed that the decomposition rate of Fagus was greater than that of Quercus. The residual between the expected value obtained from the regression curve and the observed wood density was used as a decomposition rate index. Principal component analysis showed that the fungal community compositions of both Fagus and Quercus changed with time since death. Principal component analysis axis scores were used as an index of fungal community composition. A structural equation model for each wood genus was used to assess the effect of fungal community structure traits on the decomposition rate and how the fungal community structure was determined by the traits of coarse woody debris. Results of the structural equation model suggested that the decomposition rate of Fagus was affected by two fungal community composition components: one that was affected by time since death and another that was not affected by the traits of coarse woody debris. In contrast, the decomposition rate of Quercus was not affected by coarse woody debris traits or fungal community structure. These findings suggest that, in the case of Fagus coarse woody debris, the fungal community structure is related to the decomposition process of its host substrate. Because fungal community structure is affected partly by the decay stage and wood density of its substrate, these factors influence each other. Further research on interactive effects is needed to improve our understanding of the relationship between fungal community structure and the woody debris decomposition process.

  2. Tensor Network Contractions for #SAT

    NASA Astrophysics Data System (ADS)

    Biamonte, Jacob D.; Morton, Jason; Turner, Jacob

    2015-09-01

The computational cost of counting the number of solutions satisfying a Boolean formula, a problem instance of #SAT, has proven subtle to quantify. Even when finding individual satisfying solutions is computationally easy (e.g. 2-SAT, which is in P), determining the number of solutions can be #P-hard. Recently, computational methods for simulating quantum systems have advanced due to the development of tensor network algorithms and associated quantum-physics-inspired techniques. Using these methods, we give an algorithm based on an axiomatic tensor contraction language for n-variable #SAT instances with complexity O((g + cd)^{O(1)} 2^c), where c is the number of COPY-tensors, g is the number of gates, and d is the maximal degree of any COPY-tensor. Thus, n-variable counting problems can be solved efficiently when their tensor network expression has at most O(log n) COPY-tensors and polynomial fan-out. This framework also admits an intuitive proof of a variant of the Tovey conjecture (the r,1-SAT instance of the Dubois-Tovey theorem). This study increases the theory, expressiveness and application of tensor-based algorithmic tools and provides an alternative insight into these problems, which have a long history in statistical physics and computer science.
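The counting-by-contraction idea can be shown on a toy formula: each clause becomes a 0/1 indicator tensor, and reusing an index across clauses (as `x` is below) plays the role of a COPY-tensor that forces a consistent assignment. Contracting everything to a scalar sums over all assignments, i.e. counts models.

```python
import numpy as np

# Count models of (x OR y) AND (NOT x OR z) by tensor contraction
OR  = np.array([[0, 1], [1, 1]])          # OR[x, y] = x or y
IMP = np.array([[1, 1], [0, 1]])          # IMP[x, z] = (not x) or z

count = np.einsum('xy,xz->', OR, IMP)     # shared index x = COPY-tensor
print(count)  # 4

# Brute-force check over all 8 assignments
brute = sum((x or y) and ((not x) or z)
            for x in (0, 1) for y in (0, 1) for z in (0, 1))
assert count == brute
```

The exponential cost hides in the shared (COPY) indices, which is why the complexity bound is governed by their number c rather than by n.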

  3. Tensor classification of structure in smoothed particle hydrodynamics density fields

    NASA Astrophysics Data System (ADS)

    Forgan, Duncan; Bonnell, Ian; Lucas, William; Rice, Ken

    2016-04-01

    As hydrodynamic simulations increase in scale and resolution, identifying structures with non-trivial geometries or regions of general interest becomes increasingly challenging. There is a growing need for algorithms that identify a variety of different features in a simulation without requiring a `by eye' search. We present tensor classification as such a technique for smoothed particle hydrodynamics (SPH). These methods have already been used to great effect in N-Body cosmological simulations, which require smoothing defined as an input free parameter. We show that tensor classification successfully identifies a wide range of structures in SPH density fields using its native smoothing, removing a free parameter from the analysis and preventing the need for tessellation of the density field, as required by some classification algorithms. As examples, we show that tensor classification using the tidal tensor and the velocity shear tensor successfully identifies filaments, shells and sheet structures in giant molecular cloud simulations, as well as spiral arms in discs. The relationship between structures identified using different tensors illustrates how different forces compete and co-operate to produce the observed density field. We therefore advocate the use of multiple tensors to classify structure in SPH simulations, to shed light on the interplay of multiple physical processes.
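A sketch of eigenvalue-count classification in the style of tidal-tensor web classifiers: count how many eigenvalues of the symmetric tensor exceed a threshold and map the count to a structure type. The threshold and labels are illustrative; the paper adapts such schemes to SPH's native smoothing.

```python
import numpy as np

def classify(T, threshold=0.0):
    """Classify structure by the number of eigenvalues of a symmetric
    (tidal or velocity-shear) tensor above a threshold:
    3 -> knot/cluster, 2 -> filament, 1 -> sheet, 0 -> void."""
    n_collapsing = int(np.sum(np.linalg.eigvalsh(T) > threshold))
    return ['void', 'sheet', 'filament', 'knot'][n_collapsing]

print(classify(np.diag([0.5, 0.4, 0.3])))    # knot
print(classify(np.diag([0.5, 0.4, -0.3])))   # filament
print(classify(np.diag([0.5, -0.4, -0.3])))  # sheet
print(classify(np.diag([-0.5, -0.4, -0.3]))) # void
```

Applying this per particle with different tensors (tidal vs. velocity shear) and comparing the resulting labels is what reveals how gravity and flow cooperate in shaping the density field.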

  4. MATLAB tensor classes for fast algorithm prototyping.

    SciTech Connect

    Bader, Brett William; Kolda, Tamara Gibson

    2004-10-01

Tensors (also known as multidimensional arrays or N-way arrays) are used in a variety of applications ranging from chemometrics to psychometrics. We describe four MATLAB classes for tensor manipulations that can be used for fast algorithm prototyping. The tensor class extends the functionality of MATLAB's multidimensional arrays by supporting additional operations such as tensor multiplication. The tensor_as_matrix class supports the 'matricization' of a tensor, i.e., the conversion of a tensor to a matrix (and vice versa), a commonly used operation in many algorithms. Two additional classes represent tensors stored in decomposed formats: cp_tensor and tucker_tensor. We describe all of these classes and then demonstrate their use by showing how to implement several tensor algorithms that have appeared in the literature.
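Matricization itself is easy to sketch in numpy: move the chosen mode to the front and reshape, then invert the two steps to fold back. (The column ordering here follows numpy's C-order, which need not match any particular toolbox's convention, but unfold and fold are mutually consistent.)

```python
import numpy as np

def unfold(X, mode):
    """Mode-n matricization: mode-n fibers become the columns."""
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def fold(M, mode, shape):
    """Inverse of unfold for a tensor of the given original shape."""
    full = [shape[mode]] + [s for i, s in enumerate(shape) if i != mode]
    return np.moveaxis(M.reshape(full), 0, mode)

X = np.arange(24).reshape(2, 3, 4)
print(unfold(X, 1).shape)                        # (3, 8)
assert np.array_equal(fold(unfold(X, 1), 1, X.shape), X)
```

Most tensor algorithms (Tucker, CP alternating least squares) reduce to ordinary matrix operations on such unfoldings, which is why a dedicated matricization class pays off.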

  5. Mode decomposition evolution equations

    PubMed Central

    Wang, Yang; Wei, Guo-Wei; Yang, Siyang

    2011-01-01

Partial differential equation (PDE) based methods have become some of the most powerful tools for exploring fundamental problems in signal processing, image processing, computer vision, machine vision and artificial intelligence over the past two decades. The advantages of PDE-based approaches are that they can be made fully automatic and robust for the analysis of images, videos and high-dimensional data. A fundamental question is whether one can use PDEs to perform all the basic tasks in image processing. If one can devise PDEs to perform full-scale mode decomposition for signals and images, the modes thus generated would be very useful for secondary processing to meet the needs of various types of signal and image processing. Despite great progress in PDE-based image analysis in the past two decades, the basic roles of PDEs in image/signal analysis have been limited to PDE-based low-pass filters and their applications to noise removal, edge detection, segmentation, etc. At present, it is not clear how to construct PDE-based methods for full-scale mode decomposition. The above-mentioned limitation of most current PDE-based image/signal processing methods is addressed in the proposed work, in which we introduce a family of mode decomposition evolution equations (MoDEEs) for a vast variety of applications. The MoDEEs are constructed as an extension of a PDE-based high-pass filter (Europhys. Lett., 59(6): 814, 2002) by using arbitrarily high-order PDE-based low-pass filters introduced by Wei (IEEE Signal Process. Lett., 6(7): 165, 1999). The use of arbitrarily high-order PDEs is essential to the frequency localization in the mode decomposition. Similar to the wavelet transform, the present MoDEEs have a controllable time-frequency localization and allow a perfect reconstruction of the original function. Therefore, the MoDEE operation is also called a PDE transform. However, modes generated from the present approach are in the spatial or time domain and can be
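The high-order low-pass idea can be sketched in Fourier space: evolving u_t = -(-Δ)^m u for time t damps mode k by exp(-k^{2m} t), and the sharper spectral cutoff at large m is what gives the frequency localization. A band (mode) is then a difference of low-pass outputs. Cutoff parameters below are illustrative only; the actual MoDEE construction is more general.

```python
import numpy as np

def pde_lowpass(u, t, m):
    """Spectral solution of u_t = -(-Laplacian)^m u after time t:
    each Fourier mode k is damped by exp(-k**(2m) * t)."""
    k = np.fft.fftfreq(u.size, d=1.0 / u.size)   # integer wavenumbers
    return np.real(np.fft.ifft(np.fft.fft(u) * np.exp(-(k ** (2 * m)) * t)))

x = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
u = np.sin(2 * x) + np.sin(30 * x)      # slow + fast component

low = pde_lowpass(u, t=1e-4, m=2)       # low-pass keeps the slow mode
high = u - low                          # complementary high-pass mode
err_low = np.max(np.abs(low - np.sin(2 * x)))
err_high = np.max(np.abs(high - np.sin(30 * x)))
print(err_low, err_high)                # both small: the modes separate
```

Because the filter and its complement sum to the identity, the original signal is reconstructed exactly from its modes, mirroring the perfect-reconstruction property claimed for the MoDEEs.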

  6. Tensor-network algorithm for nonequilibrium relaxation in the thermodynamic limit.

    PubMed

    Hotta, Yoshihito

    2016-06-01

    We propose a tensor-network algorithm for discrete-time stochastic dynamics of a homogeneous system in the thermodynamic limit. We map a d-dimensional nonequilibrium Markov process to a (d+1)-dimensional infinite tensor network by using a higher-order singular-value decomposition. As an application of the algorithm, we compute the nonequilibrium relaxation from a fully magnetized state to equilibrium of the one- and two-dimensional Ising models with periodic boundary conditions. Utilizing the translational invariance of the systems, we analyze the behavior in the thermodynamic limit directly. We estimated the dynamical critical exponent z=2.16(5) for the two-dimensional Ising model. Our approach fits well with the framework of the nonequilibrium-relaxation method. Our algorithm can compute time evolution of the magnetization of a large system precisely for a relatively short period. In the nonequilibrium-relaxation method, one needs to simulate dynamics of a large system for a short time. The combination of the two provides a different approach to the study of critical phenomena.

  7. Tensor-network algorithm for nonequilibrium relaxation in the thermodynamic limit

    NASA Astrophysics Data System (ADS)

    Hotta, Yoshihito

    2016-06-01

We propose a tensor-network algorithm for discrete-time stochastic dynamics of a homogeneous system in the thermodynamic limit. We map a d-dimensional nonequilibrium Markov process to a (d+1)-dimensional infinite tensor network by using a higher-order singular-value decomposition. As an application of the algorithm, we compute the nonequilibrium relaxation from a fully magnetized state to equilibrium of the one- and two-dimensional Ising models with periodic boundary conditions. Utilizing the translational invariance of the systems, we analyze the behavior in the thermodynamic limit directly. We estimated the dynamical critical exponent z=2.16(5) for the two-dimensional Ising model. Our approach fits well with the framework of the nonequilibrium-relaxation method. Our algorithm can compute time evolution of the magnetization of a large system precisely for a relatively short period. In the nonequilibrium-relaxation method, one needs to simulate dynamics of a large system for a short time. The combination of the two provides a different approach to the study of critical phenomena.

  8. A viscoelastic Unitary Crack-Opening strain tensor for crack width assessment in fractured concrete structures

    NASA Astrophysics Data System (ADS)

    Sciumè, Giuseppe; Benboudjema, Farid

    2016-09-01

A post-processing technique which allows computing crack width in concrete is proposed for a viscoelastic damage model. Concrete creep is modeled by means of a Kelvin-Voigt cell, while the damage model is that of Mazars in its local form. Due to the local damage approach, the constitutive model is regularized with respect to the finite element mesh to avoid mesh dependency of the computed solution (regularization is based on fracture energy). The presented method is an extension to viscoelasticity of the approach proposed by Matallah et al. (Int. J. Numer. Anal. Methods Geomech. 34(15):1615-1633, 2010) for a purely elastic damage model. The viscoelastic Unitary Crack-Opening (UCO) strain tensor is computed accounting for the evolution in time of the surplus of stress related to damage; this stress is obtained from decomposition of the effective stress tensor. From the UCO the normal crack width is then derived, accounting for the finite element characteristic length in the direction orthogonal to the crack. This extension is quite natural and allows accounting for the impact of creep on the opening and closing of cracks in time-dependent problems. A graphical interpretation of the viscoelastic UCO using Mohr's circles is proposed, and application cases together with a theoretical validation are presented to show the physical consistency of the computed viscoelastic UCO.
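A one-dimensional, purely elastic caricature conveys the unitary crack-opening idea: the inelastic part of the strain, ε − σ/E, is assumed to localize into a single crack over the element's characteristic length h. The paper's version is tensorial and viscoelastic; all numbers below are illustrative.

```python
# 1-D elastic-damage sketch of crack width from the crack-opening strain
E = 30e9          # Pa, concrete Young's modulus (illustrative)
h = 0.05          # m, finite element characteristic length
d = 0.8           # scalar damage variable
eps = 2e-4        # total strain across the damaged element

sigma = (1.0 - d) * E * eps        # nominal stress of the damage model
eps_uco = eps - sigma / E          # crack-opening strain = d * eps
w = h * eps_uco                    # crack width: 8e-6 m (8 micrometers)
print(w)
```

In the viscoelastic extension the elastic strain σ/E is replaced by the strain of the Kelvin-Voigt chain under the effective stress history, so the recoverable part evolves in time and cracks can close as creep proceeds.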

  9. Real-time framework for tensor-based image enhancement for object classification

    NASA Astrophysics Data System (ADS)

    Cyganek, Bogusław; Smołka, Bogdan

    2016-04-01

In many practical situations visual pattern recognition is vastly burdened by the low quality of input images due to noise, geometrical distortions, and low-quality acquisition hardware. However, although there are techniques for image quality improvement, such as nonlinear filtering, only a few attempts reported in the literature try to build these enhancement methods into a complete chain for multi-dimensional object recognition on data such as color video or hyperspectral images. In this work we propose a joint multilinear signal filtering and classification system built upon the multi-dimensional (tensor) approach. Tensor filtering is performed by projecting the multi-dimensional input signal into the tensor subspace spanned by the best-rank tensor decomposition method. Object classification, in turn, is done by constructing a tensor subspace based on the Higher-Order Singular Value Decomposition method applied to the prototype patterns. In the experiments we show that the proposed chain allows high object recognition accuracy in real time, even from poor-quality prototypes. Even more importantly, the proposed framework allows unified classification of signals of any dimension, such as color images or video sequences, which are exemplars of 3D and 4D tensors, respectively. The paper also discusses some practical issues related to the implementation of the key components of the proposed system.

  10. Tensor-polarized structure functions: Tensor structure of deuteron in 2020's

    NASA Astrophysics Data System (ADS)

    Kumano, S.

    2014-10-01

    We explain the spin structure of a spin-one hadron, in which there are new structure functions, associated with its tensor structure, in addition to the ones (F1, F2, g1, g2) which exist for the spin-1/2 nucleon. The new structure functions are b1, b2, b3, and b4 in deep inelastic scattering of a charged lepton from a spin-one hadron such as the deuteron. Among them, the twist-two functions are related by the Callan-Gross type relation b2 = 2xb1 in the Bjorken scaling limit. First, these new structure functions are introduced, and useful formulae are derived for projection operators of b1-4 from a hadron tensor Wμν. Second, a sum rule is explained for b1, and possible tensor-polarized distributions are discussed by using HERMES data in order to propose future experimental measurements and to compare them with theoretical models. A proposal was approved to measure b1 at the Thomas Jefferson National Accelerator Facility (JLab), so much progress is expected for b1 in the near future. Third, formalisms of polarized proton-deuteron Drell-Yan processes are explained for probing especially the tensor-polarized antiquark distributions, which were suggested by the HERMES data. The studies of the tensor-polarized structure functions will open a new era in the 2020s for tensor-structure studies in terms of quark and gluon degrees of freedom, which are very different from ordinary descriptions in terms of nucleons and mesons.

  11. Tensor based singular spectrum analysis for automatic scoring of sleep EEG.

    PubMed

    Kouchaki, Samaneh; Sanei, Saeid; Arbon, Emma L; Dijk, Derk-Jan

    2015-01-01

    A new supervised approach for decomposition of single channel signal mixtures is introduced in this paper. The performance of the traditional singular spectrum analysis algorithm is significantly improved by applying tensor decomposition instead of traditional singular value decomposition. As another contribution to this subspace analysis method, the inherent frequency diversity of the data has been effectively exploited to highlight the subspace of interest. As an important application, sleep electroencephalogram has been analyzed and the stages of sleep for the subjects in normal condition, with sleep restriction, and with sleep extension have been accurately estimated and compared with the results of sleep scoring by clinical experts.
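
For reference, basic (unsupervised, matrix-based) singular spectrum analysis, the baseline this paper improves on with tensor decomposition, can be sketched as follows; the window length and component count below are illustrative choices, not the paper's settings:

```python
import numpy as np

def ssa_denoise(x, L, keep):
    """Basic SSA: embed the series in a Hankel (trajectory) matrix,
    truncate its SVD, and diagonal-average back to a series."""
    N = len(x)
    K = N - L + 1
    X = np.column_stack([x[i:i + L] for i in range(K)])   # L x K trajectory matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xr = (U[:, :keep] * s[:keep]) @ Vt[:keep]             # keep leading components
    y = np.zeros(N)
    cnt = np.zeros(N)
    for j in range(K):                                    # Hankelization (diagonal averaging)
        y[j:j + L] += Xr[:, j]
        cnt[j:j + L] += 1
    return y / cnt

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 200, endpoint=False)
clean = np.sin(2 * np.pi * 5 * t)
x = clean + 0.3 * rng.standard_normal(200)    # noisy single-channel "EEG-like" mixture
y = ssa_denoise(x, L=40, keep=2)              # a single sinusoid spans ~2 components
```

The paper replaces the plain SVD step with a tensor decomposition and steers the retained subspace toward frequency bands of interest.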

  12. Tensor integrand reduction via Laurent expansion

    NASA Astrophysics Data System (ADS)

    Hirschi, Valentin; Peraro, Tiziano

    2016-06-01

    We introduce a new method for the application of one-loop integrand reduction via the Laurent expansion algorithm, as implemented in the public C++ library Ninja. We show how the coefficients of the Laurent expansion can be computed by suitable contractions of the loop numerator tensor with cut-dependent projectors, making it possible to interface Ninja to any one-loop matrix element generator that can provide the components of this tensor. We implemented this technique in the Ninja library and interfaced it to MadLoop, which is part of the public MadGraph5_aMC@NLO framework. We performed a detailed performance study, comparing against other public reduction tools, namely CutTools, Samurai, IREGI, PJFry++ and Golem95. We find that Ninja outperforms traditional integrand reduction in both speed and numerical stability, the latter being on par with that of the tensor integral reduction tool Golem95, which is however more limited and slower than Ninja. We considered many benchmark multi-scale processes of increasing complexity, involving QCD and electro-weak corrections as well as effective non-renormalizable couplings, showing that Ninja's performance scales well with both the rank and multiplicity of the considered process.

  13. Tensor integrand reduction via Laurent expansion

    DOE PAGES

    Hirschi, Valentin; Peraro, Tiziano

    2016-06-09

    We introduce a new method for the application of one-loop integrand reduction via the Laurent expansion algorithm, as implemented in the public C++ library Ninja. We show how the coefficients of the Laurent expansion can be computed by suitable contractions of the loop numerator tensor with cut-dependent projectors, making it possible to interface Ninja to any one-loop matrix element generator that can provide the components of this tensor. We implemented this technique in the Ninja library and interfaced it to MadLoop, which is part of the public MadGraph5_aMC@NLO framework. We performed a detailed performance study, comparing against other public reduction tools, namely CutTools, Samurai, IREGI, PJFry++ and Golem95. We find that Ninja outperforms traditional integrand reduction in both speed and numerical stability, the latter being on par with that of the tensor integral reduction tool Golem95, which is however more limited and slower than Ninja. Lastly, we considered many benchmark multi-scale processes of increasing complexity, involving QCD and electro-weak corrections as well as effective non-renormalizable couplings, showing that Ninja's performance scales well with both the rank and multiplicity of the considered process.

  14. Metallo-Organic Decomposition (MOD) film development

    NASA Technical Reports Server (NTRS)

    Parker, J.

    1986-01-01

    The processing techniques and problems encountered in formulating metallo-organic decomposition (MOD) films used in contacting structures for thin solar cells are described. The use of thermogravimetric analysis (TGA) and differential scanning calorimetry (DSC) techniques performed at the Jet Propulsion Laboratory (JPL) to understand the decomposition reactions led to improvements in process procedures. The characteristics of the available MOD films are described in detail.

  15. Hydrogen peroxide catalytic decomposition

    NASA Technical Reports Server (NTRS)

    Parrish, Clyde F. (Inventor)

    2010-01-01

    Nitric oxide in a gaseous stream is converted to nitrogen dioxide using oxidizing species generated through the use of concentrated hydrogen peroxide fed as a monopropellant into a catalyzed thruster assembly. The hydrogen peroxide is preferably stored at stable concentration levels, i.e., approximately 50%-70% by volume, and may be increased in concentration in a continuous process preceding decomposition in the thruster assembly. The exhaust of the thruster assembly, rich in hydroxyl and/or hydroperoxy radicals, may be fed into a stream containing oxidizable components, such as nitric oxide, to facilitate their oxidation.

  16. Benefits and Costs of Lexical Decomposition and Semantic Integration during the Processing of Transparent and Opaque English Compounds

    ERIC Educational Resources Information Center

    Ji, Hongbo; Gagne, Christina L.; Spalding, Thomas L.

    2011-01-01

    Six lexical decision experiments were conducted to examine the influence of complex structure on the processing speed of English compounds. All experiments revealed that semantically transparent compounds (e.g., "rosebud") were processed more quickly than matched monomorphemic words (e.g., "giraffe"). Opaque compounds (e.g., "hogwash") were also…

  17. Development of the Tensoral Computer Language

    NASA Technical Reports Server (NTRS)

    Ferziger, Joel; Dresselhaus, Eliot

    1996-01-01

    The research scientist or engineer wishing to perform large scale simulations or to extract useful information from existing databases is required to have expertise in the details of the particular database, the numerical methods and the computer architecture to be used. This poses a significant practical barrier to the use of simulation data. The goal of this research was to develop a high-level computer language called Tensoral, designed to remove this barrier. The Tensoral language provides a framework in which efficient generic data manipulations can be easily coded and implemented. First of all, Tensoral is general. The fundamental objects in Tensoral represent tensor fields and the operators that act on them. The numerical implementation of these tensors and operators is completely and flexibly programmable. New mathematical constructs and operators can be easily added to the Tensoral system. Tensoral is compatible with existing languages. Tensoral tensor operations co-exist in a natural way with a host language, which may be any sufficiently powerful computer language such as Fortran, C, or Vectoral. Tensoral is very high-level. Tensor operations in Tensoral typically act on entire databases (i.e., arrays) at one time and may, therefore, correspond to many lines of code in a conventional language. Tensoral is efficient. Tensoral is a compiled language. Database manipulations are simplified, optimized, and scheduled by the compiler, eventually resulting in efficient machine code to implement them.

  18. Adaptive Multilinear Tensor Product Wavelets.

    PubMed

    Weiss, Kenneth; Lindstrom, Peter

    2016-01-01

    Many foundational visualization techniques including isosurfacing, direct volume rendering and texture mapping rely on piecewise multilinear interpolation over the cells of a mesh. However, there has not been much focus within the visualization community on techniques that efficiently generate and encode globally continuous functions defined by the union of multilinear cells. Wavelets provide a rich context for analyzing and processing complicated datasets. In this paper, we exploit adaptive regular refinement as a means of representing and evaluating functions described by a subset of their nonzero wavelet coefficients. We analyze the dependencies involved in the wavelet transform and describe how to generate and represent the coarsest adaptive mesh with nodal function values such that the inverse wavelet transform is exactly reproduced via simple interpolation (subdivision) over the mesh elements. This allows for an adaptive, sparse representation of the function with on-demand evaluation at any point in the domain. We focus on the popular wavelets formed by tensor products of linear B-splines, resulting in an adaptive, nonconforming but crack-free quadtree (2D) or octree (3D) mesh that allows reproducing globally continuous functions via multilinear interpolation over its cells.
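
The piecewise multilinear interpolation these techniques are built on is, per 2D cell, ordinary bilinear interpolation from the four corner values (the tensor product of two 1D linear interpolations); a minimal sketch:

```python
def bilinear(f00, f10, f01, f11, x, y):
    """Bilinear interpolation inside a unit cell from its four corner values.
    (x, y) are local coordinates in [0, 1]^2; f10 is the corner at x=1, y=0.
    This is the tensor product of two 1D linear B-spline (hat) interpolations."""
    return (f00 * (1 - x) * (1 - y) + f10 * x * (1 - y)
            + f01 * (1 - x) * y + f11 * x * y)

# at the cell center the interpolant is the average of the four corners
assert bilinear(0.0, 1.0, 2.0, 3.0, 0.5, 0.5) == 1.5
```

The adaptive meshes in the paper ensure that evaluating this formula over each (possibly nonconforming) cell still yields a globally continuous, crack-free function.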

  19. Tensor analysis methods for activity characterization in spatiotemporal data

    SciTech Connect

    Haass, Michael Joseph; Van Benthem, Mark Hilary; Ochoa, Edward M.

    2014-03-01

    Tensor (multiway array) factorization and decomposition offer unique advantages for activity characterization in spatiotemporal datasets because these methods are compatible with sparse matrices and maintain the multiway structure that is otherwise lost in collapsing for regular matrix factorization. This report describes our research as part of the PANTHER LDRD Grand Challenge to develop a foundational basis of mathematical techniques and visualizations that enable unsophisticated users (e.g. users who are not steeped in the mathematical details of matrix algebra and multiway computations) to discover hidden patterns in large spatiotemporal data sets.

  20. A Tensor-Based Subspace Approach for Bistatic MIMO Radar in Spatial Colored Noise

    PubMed Central

    Wang, Xianpeng; Wang, Wei; Li, Xin; Wang, Junxiang

    2014-01-01

    In this paper, a new tensor-based subspace approach is proposed to estimate the direction of departure (DOD) and the direction of arrival (DOA) for bistatic multiple-input multiple-output (MIMO) radar in the presence of spatial colored noise. Firstly, the received signals can be packed into a third-order measurement tensor by exploiting the inherent structure of the matched filter. Then, the measurement tensor can be divided into two sub-tensors, and a cross-covariance tensor is formulated to eliminate the spatial colored noise. Finally, the signal subspace is constructed by utilizing the higher-order singular value decomposition (HOSVD) of the cross-covariance tensor, and the DOD and DOA can be obtained through the estimation of signal parameters via rotational invariance technique (ESPRIT) algorithm, which are paired automatically. Since the multidimensional inherent structure and the cross-covariance tensor technique are used, the proposed method provides better angle estimation performance than Chen's method, the ESPRIT algorithm and the multi-SVD method. Simulation results confirm the effectiveness and the advantage of the proposed method. PMID:24573313
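
The noise-elimination step can be illustrated in a simplified matrix (non-tensor) setting: the cross-covariance between two sub-arrays retains the common signal subspace while the independent noise terms average out. The steering vectors, source count and noise level below are arbitrary assumptions for the sketch:

```python
import numpy as np

rng = np.random.default_rng(2)
M, N = 8, 2000                       # sensors per sub-array, snapshots
# two 8-element sub-arrays observing the same two sources
# (spatial frequencies 0.5/1.9 and 0.8/2.4 rad are illustrative values)
A1 = np.exp(1j * np.outer(np.arange(M), [0.5, 1.9]))
A2 = np.exp(1j * np.outer(np.arange(M), [0.8, 2.4]))
S = rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N))
noise = lambda: 0.5 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
X1 = A1 @ S + noise()                # sub-array 1 snapshots
X2 = A2 @ S + noise()                # sub-array 2 snapshots (independent noise)

R = X1 @ X2.conj().T / N             # cross-covariance: noise terms average out
s = np.linalg.svd(R, compute_uv=False)
# only as many dominant singular values as there are common sources survive
```

In the paper the same idea is applied to a cross-covariance *tensor*, whose HOSVD supplies the signal subspace fed to ESPRIT.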

  1. A tensor-based subspace approach for bistatic MIMO radar in spatial colored noise.

    PubMed

    Wang, Xianpeng; Wang, Wei; Li, Xin; Wang, Junxiang

    2014-02-25

    In this paper, a new tensor-based subspace approach is proposed to estimate the direction of departure (DOD) and the direction of arrival (DOA) for bistatic multiple-input multiple-output (MIMO) radar in the presence of spatial colored noise. Firstly, the received signals can be packed into a third-order measurement tensor by exploiting the inherent structure of the matched filter. Then, the measurement tensor can be divided into two sub-tensors, and a cross-covariance tensor is formulated to eliminate the spatial colored noise. Finally, the signal subspace is constructed by utilizing the higher-order singular value decomposition (HOSVD) of the cross-covariance tensor, and the DOD and DOA can be obtained through the estimation of signal parameters via rotational invariance technique (ESPRIT) algorithm, which are paired automatically. Since the multidimensional inherent structure and the cross-covariance tensor technique are used, the proposed method provides better angle estimation performance than Chen's method, the ESPRIT algorithm and the multi-SVD method. Simulation results confirm the effectiveness and the advantage of the proposed method.

  2. Diffusion tensor image registration using polynomial expansion

    NASA Astrophysics Data System (ADS)

    Wang, Yuanjun; Chen, Zengai; Nie, Shengdong; Westin, Carl-Fredrik

    2013-09-01

    In this paper, we present a deformable registration framework for diffusion tensor images (DTI) using polynomial expansion. The use of polynomial expansion in image registration has previously been shown to be beneficial due to fast convergence and high accuracy. However, earlier work was developed only for 3D scalar medical image registration. In this work, we show how polynomial expansion can be applied to DTI registration. A new measurement is proposed for DTI registration evaluation, which appears to be robust and sensitive in evaluating the results of DTI registration. We present the algorithms for DTI registration using polynomial expansion on the fractional anisotropy image, with an explicit tensor reorientation strategy inherent to the registration process. Analytic transforms with high accuracy are derived from the polynomial expansion and used for transforming the tensor's orientation. Three measurements for DTI registration evaluation are presented and compared in experimental results. The experiments for algorithm validation range from simple affine deformation to nonlinear deformation cases, and the algorithms using polynomial expansion perform well in both cases. Inter-subject DTI registration results are presented, showing the utility of the proposed method.

  3. Integrated calibration of magnetic gradient tensor system

    NASA Astrophysics Data System (ADS)

    Gang, Yin; Yingtang, Zhang; Hongbo, Fan; GuoQuan, Ren; Zhining, Li

    2015-01-01

    The measurement precision of a magnetic gradient tensor system depends not only on the imperfect performance of the magnetometers, such as bias, scale factor, non-orthogonality and misalignment errors, but also on the external soft-iron and hard-iron magnetic distortion fields when the system is used as a strapdown device. Therefore, an integrated scalar calibration method is proposed in this paper. In the first step, a mathematical model for scalar calibration of a single three-axis magnetometer is established, and a least squares ellipsoid fitting algorithm is proposed to estimate the detailed error parameters. For the misalignment errors existing between different magnetometers caused by the installation process and the misalignment errors arising from the ellipsoid fitting estimation, a calibration method for the combined misalignment errors is proposed in the second step to transform the outputs of the different magnetometers into the ideal reference orthogonal coordinate system. To verify the effectiveness of the proposed method, simulation and experiment with a cross-magnetic gradient tensor system are performed, and the results show that the proposed method estimates the error parameters and greatly improves the measurement accuracy of the magnetic gradient tensor.
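
A stripped-down version of the first calibration step can be sketched as follows, assuming only per-axis scale factors and biases (no cross-coupling, unlike the full ellipsoid model in the paper). Because the true field magnitude is constant, the raw readings lie on an axis-aligned ellipsoid whose parameters follow from linear least squares:

```python
import numpy as np

def calibrate_axis_aligned(meas, H):
    """Scalar calibration of a three-axis magnetometer with per-axis scale
    factors s and biases b, assuming a constant true field magnitude H:
    sum_i ((m_i - b_i)/s_i)^2 = H^2 for every sample.
    meas: (N, 3) raw readings. Returns (s, b)."""
    D = np.column_stack([meas**2, meas])          # N x 6 linear design matrix
    p, *_ = np.linalg.lstsq(D, np.ones(len(meas)), rcond=None)
    a, c = p[:3], p[3:]                           # a_i = u_i/d, c_i = -2 u_i b_i/d
    b = -c / (2 * a)                              # biases
    d = H**2 / (1 + np.sum(a * b**2))             # resolve the scale ambiguity via H
    s = 1 / np.sqrt(a * d)                        # scale factors (u_i = 1/s_i^2)
    return s, b

# synthetic check with a known distortion
rng = np.random.default_rng(3)
h = rng.standard_normal((500, 3))
h = 50 * h / np.linalg.norm(h, axis=1, keepdims=True)    # |field| = 50 (e.g. uT)
s_true = np.array([1.10, 0.90, 1.05])
b_true = np.array([3.0, -2.0, 1.0])
meas = h * s_true + b_true
s_est, b_est = calibrate_axis_aligned(meas, 50.0)
```

The paper's general ellipsoid fit additionally estimates the off-diagonal (non-orthogonality, soft-iron) terms, but the normalization-by-known-magnitude trick is the same.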

  4. Decomposition Rate and Pattern in Hanging Pigs.

    PubMed

    Lynch-Aird, Jeanne; Moffatt, Colin; Simmons, Tal

    2015-09-01

    Accurate prediction of the postmortem interval requires an understanding of the decomposition process and the factors acting upon it. A controlled experiment, over 60 days at an outdoor site in the northwest of England, used 20 freshly killed pigs (Sus scrofa) as human analogues to study decomposition rate and pattern. Ten pigs were hung off the ground and ten placed on the surface. Observed differences in the decomposition pattern required a new decomposition scoring scale to be produced for the hanging pigs to enable comparisons with the surface pigs. The difference in the rate of decomposition between hanging and surface pigs was statistically significant (p=0.001). Hanging pigs reached advanced decomposition stages sooner, but lagged behind during the early stages. This delay is believed to result from lower variety and quantity of insects, due to restricted beetle access to the aerial carcass, and/or writhing maggots falling from the carcass.

  5. Tensor Target Polarization at TRIUMF

    SciTech Connect

    Smith, G

    2014-10-27

    The first measurements of tensor observables in $\pi \vec{d}$ scattering experiments were performed in the mid-1980s at TRIUMF, and later at SIN/PSI. The full suite of tensor observables accessible in $\pi \vec{d}$ elastic scattering was measured: $T_{20}$, $T_{21}$, and $T_{22}$. The vector analyzing power $iT_{11}$ was also measured. These results led to a better understanding of the three-body theory used to describe this reaction. A direct measurement of the target tensor polarization was also made, independent of the usual NMR techniques, by exploiting the (nearly) model-independent result for the tensor analyzing power at $90^\circ_{cm}$ in the $\pi \vec{d} \rightarrow 2p$ reaction. This method was also used to check efforts to enhance the tensor polarization by RF burning of the NMR spectrum. A brief description of the methods developed to measure and analyze these experiments is provided.

  6. Conceptualizing and Estimating Process Speed in Studies Employing Ecological Momentary Assessment Designs: A Multilevel Variance Decomposition Approach

    ERIC Educational Resources Information Center

    Shiyko, Mariya P.; Ram, Nilam

    2011-01-01

    Researchers have been making use of ecological momentary assessment (EMA) and other study designs that sample feelings and behaviors in real time and in naturalistic settings to study temporal dynamics and contextual factors of a wide variety of psychological, physiological, and behavioral processes. As EMA designs become more widespread,…

  7. Total Variation Regularized Tensor RPCA for Background Subtraction From Compressive Measurements.

    PubMed

    Cao, Wenfei; Wang, Yao; Sun, Jian; Meng, Deyu; Yang, Can; Cichocki, Andrzej; Xu, Zongben

    2016-09-01

    Background subtraction has been a fundamental and widely studied task in video analysis, with a wide range of applications in video surveillance, teleconferencing, and 3D modeling. Recently, motivated by compressive imaging, background subtraction from compressive measurements (BSCM) is becoming an active research task in video surveillance. In this paper, we propose a novel tensor-based robust principal component analysis (TenRPCA) approach for BSCM by decomposing video frames into backgrounds with spatio-temporal correlations and foregrounds with spatio-temporal continuity in a tensor framework. In this approach, we use 3D total variation to enhance the spatio-temporal continuity of the foregrounds, and Tucker decomposition to model the spatio-temporal correlations of the video background. Based on this idea, we design a basic tensor RPCA model over the video frames, dubbed the holistic TenRPCA model. To characterize the correlations among groups of similar 3D patches of the video background, we further design a patch-group-based tensor RPCA model by joint tensor Tucker decompositions of 3D patch groups for modeling the video background. Efficient algorithms using the alternating direction method of multipliers are developed to solve the proposed models. Extensive experiments on simulated and real-world videos demonstrate the superiority of the proposed approaches over the existing state-of-the-art approaches. PMID:27305675
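
The low-rank-plus-sparse idea can be sketched with a naive alternation between a multilinear rank truncation (background) and soft-thresholding of the residual (foreground). This is a toy stand-in for the ADMM-based TenRPCA models, with illustrative ranks, threshold and data:

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def mode_project(T, mode, rank):
    """Project one mode of T onto its `rank` leading singular directions."""
    A = np.moveaxis(T, mode, 0)
    unf = A.reshape(A.shape[0], -1)
    U = np.linalg.svd(unf, full_matrices=False)[0][:, :rank]
    return np.moveaxis((U @ (U.T @ unf)).reshape(A.shape), 0, mode)

def separate(Y, ranks, lam=0.1, iters=30):
    """Alternate a multilinear low-rank fit (background B) with
    soft-thresholding of the residual (sparse foreground S)."""
    S = np.zeros_like(Y)
    for _ in range(iters):
        B = Y - S
        for m, r in enumerate(ranks):
            B = mode_project(B, m, r)
        S = soft(Y - B, lam)
    return B, S

rng = np.random.default_rng(4)
u = rng.standard_normal(10); u /= np.linalg.norm(u)
v = rng.standard_normal(10); v /= np.linalg.norm(v)
w = rng.standard_normal(20); w /= np.linalg.norm(w)
B0 = 10.0 * u[:, None, None] * v[None, :, None] * w[None, None, :]  # rank-(1,1,1) "video"
Y = B0.copy()
Y[3, 4, 7] += 5.0                    # a sparse foreground event in one frame
B, S = separate(Y, ranks=(1, 1, 1))
```

The foreground estimate S picks up the spike while B stays close to the multilinear background; the paper additionally imposes 3D total variation on S and works from compressive measurements.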

  8. Locally extracting scalar, vector and tensor modes in cosmological perturbation theory

    NASA Astrophysics Data System (ADS)

    Clarkson, Chris; Osano, Bob

    2011-11-01

    Cosmological perturbation theory relies on the decomposition of perturbations into so-called scalar, vector and tensor modes. This decomposition is non-local and depends on unknowable boundary conditions. The non-locality is particularly important at second and higher order because perturbative modes are sourced by products of lower order modes, which must be integrated over all space in order to isolate each mode. However, given a trace-free rank-2 tensor, a locally defined scalar mode may be trivially derived by taking two divergences, which knocks out the vector and tensor degrees of freedom. A similar local differential operation will return a pure vector mode. This means that scalar and vector degrees of freedom have local descriptions. The corresponding local extraction of the tensor mode is unknown however. We give it here. The operators we define are useful for defining gauge-invariant quantities at second order. We perform much of our analysis using an index-free ‘vector-calculus’ approach which makes manipulating tensor equations considerably simpler.
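
The divergence argument can be made explicit. Writing a trace-free rank-2 tensor in a standard scalar-vector-tensor parametrization (notation assumed),

```latex
T_{ij} = \left(\partial_i\partial_j - \tfrac{1}{3}\,\delta_{ij}\nabla^2\right)\Phi
       + \partial_{(i}B_{j)} + h_{ij},
\qquad \partial^i B_i = 0, \quad \partial^i h_{ij} = 0, \quad h^{i}{}_{i} = 0,
```

two divergences annihilate the transverse vector and transverse-traceless tensor parts, leaving a purely local (differential) equation for the scalar mode:

```latex
\partial^i \partial^j T_{ij} = \tfrac{2}{3}\,\nabla^4 \Phi .
```

Inverting $\nabla^4$ is what reintroduces non-locality; the point of the paper is that an analogous local differential extraction exists for the tensor mode as well.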

  9. The Riegeom package: abstract tensor calculation

    NASA Astrophysics Data System (ADS)

    Portugal, R.

    2000-04-01

    This paper describes a new package for abstract tensor calculation. Riegeom can efficiently simplify generic tensor expressions written in the indicial format. It addresses the problem of the cyclic symmetry and the dimension dependent relations of Riemann tensor polynomials. There are tools to manipulate tensors such as substitution and symmetrization functions. The main tensors of the Riemannian geometry have been implemented. The underlying algorithms are based on a precise mathematical formulation of canonical form of tensor expressions described elsewhere. Riegeom is implemented over the Maple system.

  10. Scalar-tensor cosmological models

    NASA Astrophysics Data System (ADS)

    Serna, A.; Alimi, J. M.

    1996-03-01

    We analyze the qualitative behavior of scalar-tensor cosmologies with an arbitrary monotonic $\omega(\Phi)$ function. In particular, we are interested in scalar-tensor theories distinguishable at early epochs from general relativity (GR) but leading to predictions compatible with solar-system experiments. After extending the method developed by Lorentz-Petzold and Barrow, we establish the conditions required for convergence towards GR at $t \rightarrow \infty$. Then, we obtain all the asymptotic analytical solutions at early times which are possible in the framework of these theories. The subsequent qualitative evolution, from these asymptotic solutions until their later convergence towards GR, is analyzed by means of numerical computations. From this analysis, we are able to establish a classification of the different qualitative behaviors of scalar-tensor cosmological models with an arbitrary monotonic $\omega(\Phi)$ function.

  11. O(N) Random Tensor Models

    NASA Astrophysics Data System (ADS)

    Carrozza, Sylvain; Tanasa, Adrian

    2016-11-01

    We define in this paper a class of three-index tensor models, endowed with O(N)^{⊗3} invariance (N being the size of the tensor). This allows one to generate, via the usual QFT perturbative expansion, a class of Feynman tensor graphs which is strictly larger than the class of Feynman graphs of both the multi-orientable model (and hence of the colored model) and the U(N) invariant models. We first exhibit the existence of a large N expansion for such a model with general interactions. We then focus on the quartic model and identify the leading and next-to-leading order (NLO) graphs of the large N expansion. Finally, we prove the existence of a critical regime and compute the critical exponents, both at leading order and at NLO. This is achieved through the use of various analytic combinatorics techniques.

  12. Factors controlling bark decomposition and its role in wood decomposition in five tropical tree species

    PubMed Central

    Dossa, Gbadamassi G. O.; Paudel, Ekananda; Cao, Kunfang; Schaefer, Douglas; Harrison, Rhett D.

    2016-01-01

    Organic matter decomposition represents a vital ecosystem process by which nutrients are made available for plant uptake and is a major flux in the global carbon cycle. Previous studies have investigated decomposition of different plant parts, but few considered bark decomposition or its role in decomposition of wood. However, bark can comprise a large fraction of tree biomass. We used a common litter-bed approach to investigate factors affecting bark decomposition and its role in wood decomposition for five tree species in a secondary seasonal tropical rain forest in SW China. For bark, we implemented a litter bag experiment over 12 mo, using different mesh sizes to investigate effects of litter meso- and macro-fauna. For wood, we compared the decomposition of branches with and without bark over 24 mo. Bark in coarse mesh bags decomposed 1.11–1.76 times faster than bark in fine mesh bags. For wood decomposition, responses to bark removal were species dependent. Three species with slow wood decomposition rates showed significant negative effects of bark-removal, but there was no significant effect in the other two species. Future research should also separately examine bark and wood decomposition, and consider bark-removal experiments to better understand roles of bark in wood decomposition. PMID:27698461
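
Decomposition rates of the kind compared above are commonly summarized with the single-pool exponential (Olson) decay model; the following is a minimal sketch with made-up numbers, not the paper's data:

```python
import numpy as np

def decay_constant(m0, mt, t_years):
    """Single-pool exponential litter decay (Olson model): m(t) = m0 * exp(-k t).
    Returns k (per year) from initial and remaining mass in a litter bag."""
    return -np.log(mt / m0) / t_years

# hypothetical example: bark losing 40% of its mass over one year
k = decay_constant(100.0, 60.0, 1.0)       # k ~ 0.51 per year
# a coarse-mesh bag decomposing 1.5x faster, expressed as a rate ratio
k_coarse = 1.5 * k
```

Ratios of fitted k values between mesh sizes (here a hypothetical 1.5) are how the paper's "1.11-1.76 times faster" comparison is typically expressed.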

  13. Total angular momentum waves for scalar, vector, and tensor fields

    NASA Astrophysics Data System (ADS)

    Dai, Liang; Kamionkowski, Marc; Jeong, Donghui

    2012-12-01

    Most calculations in cosmological perturbation theory, including those dealing with the inflationary generation of perturbations, their time evolution, and their observational consequences, decompose those perturbations into plane waves (Fourier modes). However, for some calculations, particularly those involving observations performed on a spherical sky, a decomposition into waves of fixed total angular momentum (TAM) may be more appropriate. Here we introduce TAM waves—solutions of fixed total angular momentum to the Helmholtz equation—for three-dimensional scalar, vector, and tensor fields. The vector TAM waves of given total angular momentum can be decomposed further into a set of three basis functions of fixed orbital angular momentum, a set of fixed helicity, or a basis consisting of a longitudinal (L) and two transverse (E and B) TAM waves. The symmetric traceless rank-2 tensor TAM waves can be similarly decomposed into a basis of fixed orbital angular momentum or fixed helicity, or a basis that consists of a longitudinal (L), two vector (VE and VB, of opposite parity), and two tensor (TE and TB, of opposite parity) waves. We show how all of the vector and tensor TAM waves can be obtained by applying derivative operators to scalar TAM waves. This operator approach then allows one to decompose a vector field into three covariant scalar fields for the L, E, and B components and symmetric-traceless-tensor fields into five covariant scalar fields for the L, VE, VB, TE, and TB components. We provide projections of the vector and tensor TAM waves onto vector and tensor spherical harmonics. We provide calculational detail to facilitate the assimilation of this formalism into cosmological calculations. As an example, we calculate the power spectra of the deflection angle for gravitational lensing by density perturbations and by gravitational waves. We comment on an alternative approach to cosmic microwave background fluctuations based on TAM waves.

  14. Tensor numerical methods in quantum chemistry: from Hartree-Fock to excitation energies.

    PubMed

    Khoromskaia, Venera; Khoromskij, Boris N

    2015-12-21

    We resume the recent successes of the grid-based tensor numerical methods and discuss their prospects in real-space electronic structure calculations. These methods, based on the low-rank representation of the multidimensional functions and integral operators, first appeared as an accurate tensor calculus for the 3D Hartree potential using 1D complexity operations, and have evolved into an entirely grid-based tensor-structured 3D Hartree-Fock eigenvalue solver. It benefits from tensor calculation of the core Hamiltonian and two-electron integrals (TEI) in O(n log n) complexity using the rank-structured approximation of basis functions, electron densities and convolution integral operators all represented on 3D n × n × n Cartesian grids. The algorithm for calculating the TEI tensor in a form of the Cholesky decomposition is based on multiple factorizations using an algebraic 1D "density fitting" scheme, which yields an almost irreducible number of product basis functions involved in the 3D convolution integrals, depending on a threshold ε > 0. The basis functions are not restricted to separable Gaussians, since the analytical integration is substituted by high-precision tensor-structured numerical quadratures. The tensor approaches to post-Hartree-Fock calculations for the MP2 energy correction and for the Bethe-Salpeter excitation energies, based on using low-rank factorizations and the reduced basis method, were recently introduced. Another direction is towards the tensor-based Hartree-Fock numerical scheme for finite lattices, where one of the numerical challenges is the summation of electrostatic potentials of a large number of nuclei. The 3D grid-based tensor method for calculation of a potential sum on an L × L × L lattice exhibits computational work linear in L, O(L), instead of the usual O(L^3 log L) scaling of the Ewald-type approaches. PMID:26016539
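
The grid-based tensor methods rest on separable low-rank representations of 3D functions. As a minimal illustration (not the paper's algorithm), a Gaussian basis function is exactly rank-1, so its values on an n × n × n grid follow from a single 1D factor of length n — O(n) storage per factor instead of O(n^3):

```python
import numpy as np

n = 64
x = np.linspace(-5.0, 5.0, n)        # one 1D Cartesian grid, reused for all axes
g = np.exp(-x**2)                    # 1D factor: O(n) numbers

# rank-1 separable representation of exp(-(x^2 + y^2 + z^2)) on the n^3 grid,
# assembled as an outer product of the 1D factor with itself
G3 = g[:, None, None] * g[None, :, None] * g[None, None, :]

# direct O(n^3) evaluation for comparison
full = np.exp(-(x[:, None, None]**2 + x[None, :, None]**2 + x[None, None, :]**2))
assert np.allclose(G3, full)
```

General densities and operators are not exactly rank-1, but the methods above approximate them by short sums of such separable terms, which is what reduces 3D work to 1D complexity.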

  15. Co-composting of rose oil processing waste with caged layer manure and straw or sawdust: effects of carbon source and C/N ratio on decomposition.

    PubMed

    Onursal, Emrah; Ekinci, Kamil

    2015-04-01

    Rose oil is a specific essential oil that is produced mainly for the cosmetics industry in a few selected locations around the world. Rose oil production is a water distillation process from petals of Rosa damascena Mill. Since the oil content of the rose petals of this variety is between 0.3-0.4% (w/w), almost 4000 to 3000 kg of rose petals are needed to produce 1 kg of rose oil. Rose oil production is a seasonal activity and takes place during the relatively short period where the roses are blooming. As a result, large quantities of solid waste are produced over a limited time interval. This research aims: (i) to determine the possibilities of aerobic co-composting as a waste management option for rose oil processing waste with caged layer manure; (ii) to identify effects of different carbon sources - straw or sawdust on co-composting of rose oil processing waste and caged layer manure, which are both readily available in Isparta, where significant rose oil production also takes place; (iii) to determine the effects of different C/N ratios on co-composting by the means of organic matter decomposition and dry matter loss. Composting experiments were carried out by 12 identical laboratory-scale composting reactors (60 L) simultaneously. The results of the study showed that the best results were obtained with a mixture consisting of 50% rose oil processing waste, 64% caged layer manure and 15% straw wet weight in terms of organic matter loss (66%) and dry matter loss (38%).

  17. Highlighting earthworm contribution in uplifting biochemical response for organic matter decomposition during vermifiltration processing sewage sludge: Insights from proteomics.

    PubMed

    Xing, Meiyan; Wang, Yin; Xu, Ting; Yang, Jian

    2016-09-01

    A vermifilter (VF) was operated at steady state to explore the mechanism behind the lower microbial biomass and higher enzymatic activities due to the presence of earthworms, with a conventional biofilter (BF) as a control. Two-dimensional gel electrophoresis (2-DE) clearly detected 432 and 488 spots in the VF and BF biofilms, respectively. Furthermore, MALDI-TOF/TOF MS revealed that six differentially up-regulated proteins, namely aldehyde dehydrogenase, molecular chaperone GroEL, ATP synthase subunit alpha, flagellin, chaperone protein HtpG and ATP synthase subunit beta, changed progressively. Based on Gene Ontology annotation, these differential proteins mainly performed ATP-binding (71.38%) and response-to-stress (16.23%) functions. Taking the performance merits of the VF process into consideration, it is concluded that earthworm activities biochemically strengthened the energy release of microbial metabolism in an uncoupled manner.

  19. Thermal decomposition of crystalline Ni(II)-Cr(III) layered double hydroxide: a structural study of the segregation process.

    PubMed

    Sileo, Elsa E; Jobbagy, Matías; Paiva-Santos, Carlos O; Regazzoni, Alberto E

    2005-05-26

    A structural study of the thermal evolution of Ni0.69Cr0.31(OH)2(CO3)0.155·nH2O into NiO and tetragonal NiCr2O4 is reported. The characteristic structural parameters of the two coexisting crystalline phases, as well as their relative abundance, were determined by Rietveld refinement of powder X-ray diffraction (PXRD) patterns. The results of the simulations allowed us to elucidate the mechanism of the demixing process of the oxides. It is demonstrated that nucleation of a metastable nickel chromite within the common oxygen framework of the parent Cr(III)-doped bunsenite is the initial step of the cationic redistribution. The role that trivalent cations play in the segregation of crystalline spinels is also discussed. PMID:16852228

  20. Decomposition of dinitrotoluene isomers and 2,4,6-trinitrotoluene in spent acid from toluene nitration process by ozonation and photo-ozonation.

    PubMed

    Chen, Wen-Shing; Juan, Chien-Neng; Wei, Kuo-Ming

    2007-08-17

    Ozone and UV/O3 were employed to mineralize dinitrotoluene (DNT) isomers and 2,4,6-trinitrotoluene (TNT) in spent acid from the toluene nitration process. The oxidative degradation tests were carried out to elucidate the influence of various operating variables on the performance of mineralization of total organic compounds (TOC) in spent acid, including reaction temperature, intensity of UV (254 nm) irradiation, dosage of ozone and concentration of sulfuric acid. Notably, nearly complete mineralization of the organic compounds can be achieved by ozonation combined with UV irradiation. Nevertheless, hydroxyl radicals (•OH) would not be generated by either ozone decomposition or photolysis of ozone under the experimental conditions of this study. According to the spectra identified by gas chromatography/mass spectrometry (GC/MS) and further confirmed by gas chromatography/flame ionization detection (GC/FID), the multiple oxidation pathways of the DNT isomers are given, which include o-, m-, p-mononitrotoluene (MNT) and 1,3-dinitrobenzene, respectively. In addition, oxidative degradation of 2,4,6-TNT leads to a 1,3,5-trinitrobenzene intermediate. PMID:17257749

  2. Space-time with a fluctuating metric tensor model

    NASA Astrophysics Data System (ADS)

    Morozov, A. N.

    2016-07-01

    The physical time model presented here is based on the assumption that time is a random Poisson process whose intensity depends on natural irreversible processes. The introduction of metric tensor fluctuations of space-time is shown to allow a description of the impact of the stochastic gravitational background. The use of spectral-line broadening measurements for the registration of relic gravitational waves is suggested.

  3. Growth and barium zirconium oxide doping study on superconducting M-barium copper oxide (M = yttrium, samarium) films using a fluorine-free metal organic decomposition process

    NASA Astrophysics Data System (ADS)

    Lu, Feng

    We present a fluorine-free metal organic deposition (F-free MOD) process - potentially a rapid and economical alternative to commercial trifluoroacetate metal organic deposition (TFA-MOD) and metal organic chemical vapor deposition (MOCVD) processes - for the fabrication of high-quality epitaxial high-temperature superconducting YBa2Cu3O7-x (YBCO) films on both Rolling-Assisted Biaxially Textured Substrates (RABiTS) and single crystal substrates. We first studied the growth of YBCO and SmBCO films and their resulting microstructure and superconducting properties. We produced epitaxial c-axis YBCO films with a high critical current density (Jc) in excess of 10⁶ A/cm² at 77 K in self-field at a thickness of ~1 μm. Because industrial applications demand high-quality YBCO films with very high Jc, we investigated introducing BaZrO3 (BZO) nano-pinning sites into HTS thin films by our F-free MOD technique to improve Jc and the global pinning force (Fp). BZO-doped YBCO films were fabricated by adding extra Ba and Zr to the precursor solutions, according to the molar formula 1 YBCO + x BZO. We found that the BZO content affects the growth of YBCO films and determined the optimum BZO content, which leads to the most effective pinning enhancement and the least YBCO degradation. We achieved a maximum pinning force of ~10 GN/m³ for an x = 0.10 BZO-doped, 200 nm thick YBCO film on SrTiO3 single crystal substrates by modifying the pyrolysis from a one-step to a two-plateau decomposition during the F-free MOD process. For growing optimum BZO-doped YBCO films on RABiTS substrates, the F-free MOD process was also optimized by adjusting the maximum growth temperature and growth time to achieve stronger pinning forces. Through-process quenching studies indicate that BZO forms 10-25 nm nanoparticles at an early stage of the process that remain stable during the subsequent YBCO growth, demonstrating that chemically doping YBCO films with BZO using the F-free MOD process is a very effective

  4. Characteristics of the residual stress tensor as a function of length scale in simulations of stably stratified turbulence

    NASA Astrophysics Data System (ADS)

    de Braganca Alves, Felipe Augusto; de Bruyn Kops, Stephen

    2015-11-01

    A priori analysis of the relationships between the deviatoric residual stress tensor τr and kinematic tensors is made for stably stratified Boussinesq turbulence. Two data sets from direct numerical simulation are used for the analyses: the decaying Taylor-Green simulations of Riley and de Bruyn Kops (2003), and the forced homogeneous stratified turbulence simulations of Almalkie and de Bruyn Kops (2012) resolved on up to 8192 × 8192 × 4096 grid points. The data sets are filtered using a Gaussian kernel with filter widths up to the buoyancy scale. Through tensor decomposition theorems described in Thompson et al. (2010), the relationship between the strain rate tensor and the residual stress is quantified for each filter width and case. This is also done for the tensor formed by the Lie product between the strain rate and rate of rotation tensors. The role of each tensor, seen as a part of the residual stress tensor, is analyzed, in particular with respect to the filtered kinetic-energy budget equation. The authors acknowledge the support from CAPES grant BEX 13649/13-2, DoD HPCMP Frontier Project FPCFD-FY14-007 and ONR grant N00014-15-1-2248.
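    The deviatoric residual stress referred to in this abstract can be formed a priori from filtered velocity fields: τr_ij = bar(u_i u_j) - bar(u_i) bar(u_j), with the trace removed. A minimal sketch in 2D for brevity (the study uses 3D DNS data; the periodic spectral filter, grid and field sizes here are assumptions):

```python
import numpy as np

def gaussian_filter(f, sigma):
    # Spectral Gaussian filter on a periodic grid (spacing 1).
    kx = 2 * np.pi * np.fft.fftfreq(f.shape[0])[:, None]
    ky = 2 * np.pi * np.fft.fftfreq(f.shape[1])[None, :]
    return np.fft.ifft2(np.fft.fft2(f)
                        * np.exp(-0.5 * sigma**2 * (kx**2 + ky**2))).real

def deviatoric_residual_stress(u, v, sigma):
    """tau_ij = bar(u_i u_j) - bar(u_i) bar(u_j), trace removed."""
    vel = [u, v]
    tau = np.empty((2, 2) + u.shape)
    for i in range(2):
        for j in range(2):
            tau[i, j] = (gaussian_filter(vel[i] * vel[j], sigma)
                         - gaussian_filter(vel[i], sigma)
                         * gaussian_filter(vel[j], sigma))
    trace = tau[0, 0] + tau[1, 1]
    for i in range(2):
        tau[i, i] = tau[i, i] - trace / 2.0   # deviatoric part (2D)
    return tau
```

By construction the result is symmetric and traceless at every grid point, which is what makes comparisons against other deviatoric kinematic tensors well posed.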

  5. Elucidating effects of atmospheric deposition and peat decomposition processes on mercury accumulation rates in a northern Minnesota peatland over last 10,000 cal years

    NASA Astrophysics Data System (ADS)

    Nater, E. A.; Furman, O.; Toner, B. M.; Sebestyen, S. D.; Tfaily, M. M.; Chanton, J.; Fissore, C.; McFarlane, K. J.; Hanson, P. J.; Iversen, C. M.; Kolka, R. K.

    2014-12-01

    Climate change has the potential to affect mercury (Hg), sulfur (S) and carbon (C) stores and cycling in northern peatland ecosystems (NPEs). SPRUCE (Spruce and Peatland Responses Under Climate and Environmental change) is an interdisciplinary study of the effects of elevated temperature and CO2 enrichment on NPEs. Peat cores (0-3.0 m) were collected from 16 large plots located on the S1 peatland (an ombrotrophic bog treed with Picea mariana and Larix laricina) in August 2012 for baseline characterization before the experiment began. Peat samples were analyzed at depth increments for total Hg, bulk density, humification indices, and elemental composition. Net Hg accumulation rates over the last 10,000 years were derived from Hg concentrations and peat accumulation rates based on peat depth chronology established using 14C and 13C dating of peat cores. Historic Hg deposition rates are being modeled from pre-industrial deposition rates in S1 scaled by regional lake sediment records. Effects of peatland processes and factors (hydrology, decomposition, redox chemistry, vegetative changes, microtopography) on the biogeochemistry of Hg, S, and other elements are being assessed by comparing observed elemental depth profiles with accumulation profiles predicted solely from atmospheric deposition. We are using principal component analyses and cluster analyses to elucidate relationships between humification indices, peat physical properties, and inorganic and organic geochemistry data to interpret the main processes controlling net Hg accumulation and elemental concentrations in surface and subsurface peat layers. These findings are critical to predicting how climate change will affect future accumulation of Hg as well as existing Hg stores in NPEs, and for providing reference baselines for future SPRUCE investigations.

  6. Collaborative Research: Process-resolving Decomposition of the Global Temperature Response to Modes of Low Frequency Variability in a Changing Climate

    SciTech Connect

    Cai, Ming; Deng, Yi

    2015-02-06

    El Niño-Southern Oscillation (ENSO) and Annular Modes (AMs) represent respectively the most important modes of low frequency variability in the tropical and extratropical circulations. The future projection of the ENSO and AM variability, however, remains highly uncertain with the state-of-the-art coupled general circulation models. A comprehensive understanding of the factors responsible for the inter-model discrepancies in projecting future changes in the ENSO and AM variability, in terms of multiple feedback processes involved, has yet to be achieved. The proposed research aims to identify sources of such uncertainty and establish a set of process-resolving quantitative evaluations of the existing predictions of the future ENSO and AM variability. The proposed process-resolving evaluations are based on a feedback analysis method formulated in Lu and Cai (2009), which is capable of partitioning 3D temperature anomalies/perturbations into components linked to 1) radiation-related thermodynamic processes such as cloud and water vapor feedbacks, 2) local dynamical processes including convection and turbulent/diffusive energy transfer and 3) non-local dynamical processes such as the horizontal energy transport in the oceans and atmosphere. Taking advantage of the high-resolution, multi-model ensemble products from the Coupled Model Intercomparison Project Phase 5 (CMIP5) soon to be available at the Lawrence Livermore National Lab, we will conduct a process-resolving decomposition of the global three-dimensional (3D) temperature (including SST) response to the ENSO and AM variability in the preindustrial, historical and future climate simulated by these models. Specific research tasks include 1) identifying the model-observation discrepancies in the global temperature response to ENSO and AM variability and attributing such discrepancies to specific feedback processes, 2) delineating the influence of anthropogenic radiative forcing on the key feedback processes

  7. Scalable tensor factorizations with missing data.

    SciTech Connect

    Morup, Morten; Dunlavy, Daniel M.; Acar, Evrim; Kolda, Tamara Gibson

    2010-04-01

    The problem of missing data is ubiquitous in domains such as biomedical signal processing, network traffic analysis, bibliometrics, social network analysis, chemometrics, computer vision, and communication networks: all domains in which data collection is subject to occasional errors. Moreover, these data sets can be quite large and have more than two axes of variation, e.g., sender, receiver, time. Many applications in those domains aim to capture the underlying latent structure of the data; in other words, they need to factorize data sets with missing entries. If we cannot address the problem of missing data, many important data sets will be discarded or improperly analyzed. Therefore, we need a robust and scalable approach for factorizing multi-way arrays (i.e., tensors) in the presence of missing data. We focus on one of the most well-known tensor factorizations, CANDECOMP/PARAFAC (CP), and formulate the CP model as a weighted least squares problem that models only the known entries. We develop an algorithm called CP-WOPT (CP Weighted OPTimization) using a first-order optimization approach to solve the weighted least squares problem. Based on extensive numerical experiments, our algorithm is shown to successfully factor tensors with noise and up to 70% missing data. Moreover, our approach is significantly faster than the leading alternative and scales to larger problems. To show the real-world usefulness of CP-WOPT, we illustrate its applicability on a novel EEG (electroencephalogram) application where missing data is frequently encountered due to disconnections of electrodes.
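    The CP-WOPT idea, fitting only the observed entries by weighting the least-squares objective with a binary mask, can be sketched with plain gradient descent on the factor matrices (the paper uses more sophisticated first-order methods; step size, iteration count and the synthetic data below are assumptions):

```python
import numpy as np

def cp_wopt(X, W, rank, iters=2000, lr=0.01, seed=0):
    """Gradient-descent sketch of weighted CP factorization: the mask W
    (1 = observed, 0 = missing) confines the fit to known entries."""
    rng = np.random.default_rng(seed)
    I, J, K = X.shape
    A = 0.5 * rng.standard_normal((I, rank))
    B = 0.5 * rng.standard_normal((J, rank))
    C = 0.5 * rng.standard_normal((K, rank))
    for _ in range(iters):
        R = W * (X - np.einsum('ir,jr,kr->ijk', A, B, C))  # masked residual
        A = A + lr * np.einsum('ijk,jr,kr->ir', R, B, C)
        B = B + lr * np.einsum('ijk,ir,kr->jr', R, A, C)
        C = C + lr * np.einsum('ijk,ir,jr->kr', R, A, B)
    return A, B, C

# Synthetic demo: a rank-2 tensor with roughly 30% of entries missing.
rng = np.random.default_rng(3)
A0 = rng.standard_normal((6, 2))
B0 = rng.standard_normal((6, 2))
C0 = rng.standard_normal((6, 2))
X = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
W = (rng.random(X.shape) > 0.3).astype(float)
A, B, C = cp_wopt(X, W, rank=2)
err = (np.linalg.norm(W * (X - np.einsum('ir,jr,kr->ijk', A, B, C)))
       / np.linalg.norm(W * X))
```

Setting a residual to zero on the missing entries is exactly what distinguishes this from ordinary CP fitting: the missing positions exert no pull on the factors.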

  8. Tensorially consistent microleveling of high resolution full tensor gradiometry data

    NASA Astrophysics Data System (ADS)

    Schiffler, M.; Queitsch, M.; Schneider, M.; Stolz, R.; Krech, W.; Meyer, H.; Kukowski, N.

    2013-12-01

    Full Tensor Magnetic Gradiometry (FTMG) data obtained with Superconducting Quantum Interference Device (SQUID) sensors offer high resolution at a low noise level. In airborne operation, processing steps for leveling of flight lines using tie-lines and subsequent micro-leveling become important. Airborne SQUID-FTMG surveys show that in magnetically calm regions the overall measurement-system noise level of ≈10 pT/m RMS is the main contribution to the magnetograms, and line-dependent artifacts become visible. Both tie-line leveling and micro-leveling are used to remove these artifacts (corrugations). However, when these standard leveling routines - originally designed for total magnetic intensity measurements - are applied to the tensor components independently, the tracelessness and symmetry of the resulting corrected tensor are not preserved. We show that tie-line leveling for airborne SQUID-FTMG data can be superseded by the presented micro-leveling algorithm and discuss how the algorithm is designed to preserve the tensor properties. The micro-leveling is performed via a moving median filter using a geometric median, which preserves the properties of the tensor, applied either to the entire tensor at once or to its structural part (eigenvalues) and rotational part (eigenvectors or idempotents) independently. We discuss the impact of the different micro-leveling methods on data quality. At each observation point, the median along the flight line is subtracted and the median within a specific footprint radius is added. For application of this filter to the rotational states, we use quaternions and quaternion interpolation. Examples of the new processing methods on data acquired with the FTMG system are presented in this work.
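    A moving geometric-median filter of the kind described can be built on the classic Weiszfeld iteration. Because symmetry and tracelessness are linear constraints, filtering the vector of independent tensor components (e.g. Bxx, Bxy, Bxz, Byy, Byz) automatically returns a symmetric, traceless tensor. A minimal sketch; iteration counts and tolerances are assumptions:

```python
import numpy as np

def geometric_median(points, iters=100, eps=1e-9):
    """Weiszfeld iteration for the geometric median of the rows of `points`:
    the point minimizing the sum of Euclidean distances to all samples."""
    y = points.mean(axis=0)                       # start from the centroid
    for _ in range(iters):
        d = np.maximum(np.linalg.norm(points - y, axis=1), eps)
        w = 1.0 / d                               # inverse-distance weights
        y_new = (w[:, None] * points).sum(axis=0) / w.sum()
        if np.linalg.norm(y_new - y) < eps:
            return y_new
        y = y_new
    return y
```

Unlike the component-wise median, the geometric median treats the component vector as one object, which is what lets the filter respect the tensor structure; it is also robust to outliers, as a single distant sample barely shifts the result.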

  9. From Second to Higher Order Tensors in Diffusion-MRI

    NASA Astrophysics Data System (ADS)

    Ghosh, Aurobrata; Deriche, Rachid

    Diffusion MRI, which is sensitive to the Brownian motion of molecules, has become today an excellent medical tool for probing the tissue micro-structure of cerebral white matter in vivo and non-invasively. It makes it possible to reconstruct fiber pathways and segment major fiber bundles that reflect the structures in the brain which are not visible to other non-invasive imaging modalities. Since this is possible without operating on the subject, but by integrating partial information from Diffusion Weighted Images into a reconstructed 'complete' image of diffusion, Diffusion MRI opens a whole new domain of image processing. Here we shall explore the role that tensors play in the mathematical model. We shall primarily deal with Cartesian tensors and begin with 2nd order tensors, since these are at the core of Diffusion Tensor Imaging. We shall then explore higher and even ordered symmetric tensors, that can take into account more complex micro-geometries of biological tissues such as axonal crossings in the white matter.
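    The 2nd-order model at the core of Diffusion Tensor Imaging reduces to linear least squares: taking logs of the Stejskal-Tanner signal equation gives ln(S/S0) = -b gᵀDg, which is linear in the six independent components of the symmetric tensor D. A hedged sketch with noiseless synthetic data (the b-value, directions and tensor below are illustrative assumptions):

```python
import numpy as np

def fit_diffusion_tensor(bvals, bvecs, S, S0):
    """Linear least-squares fit of a symmetric 2nd-order diffusion tensor
    from ln(S/S0) = -b g^T D g (one row per measurement)."""
    rows, y = [], np.log(np.asarray(S, float) / S0)
    for b, (gx, gy, gz) in zip(bvals, bvecs):
        rows.append(-b * np.array([gx*gx, gy*gy, gz*gz,
                                   2*gx*gy, 2*gx*gz, 2*gy*gz]))
    d, *_ = np.linalg.lstsq(np.array(rows), y, rcond=None)
    return np.array([[d[0], d[3], d[4]],
                     [d[3], d[1], d[5]],
                     [d[4], d[5], d[2]]])

# Synthetic check: anisotropic tensor, six noiseless measurements.
D_true = np.diag([1.7e-3, 0.3e-3, 0.3e-3])      # mm^2/s, illustrative
dirs = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1],
                 [1, 1, 0], [1, 0, 1], [0, 1, 1]], float)
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
b, S0 = 1000.0, 1.0
S = [S0 * np.exp(-b * g @ D_true @ g) for g in dirs]
D_est = fit_diffusion_tensor([b] * 6, dirs, S, S0)
```

Six non-collinear gradient directions are the minimum needed to determine the six unknowns; clinical protocols use more directions and solve the same overdetermined system.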

  10. Tensor numerical methods in quantum chemistry: from Hartree-Fock to excitation energies

    NASA Astrophysics Data System (ADS)

    Khoromskaia, Venera; Khoromskij, Boris N.

    We resume the recent successes of the grid-based tensor numerical methods and discuss their prospects in real-space electronic structure calculations. These methods, based on the low-rank representation of the multidimensional functions and integral operators, led to an entirely grid-based tensor-structured 3D Hartree-Fock eigenvalue solver. It benefits from tensor calculation of the core Hamiltonian and two-electron integrals (TEI) in O(n log n) complexity using the rank-structured approximation of basis functions, electron densities and convolution integral operators all represented on 3D n × n × n Cartesian grids. The algorithm for calculating the TEI tensor in the form of a Cholesky decomposition is based on multiple factorizations using an algebraic 1D "density fitting" scheme. The basis functions are not restricted to separable Gaussians, since the analytical integration is substituted by high-precision tensor-structured numerical quadratures. The tensor approaches to post-Hartree-Fock calculations for the MP2 energy correction and for the Bethe-Salpeter excited states, based on using low-rank factorizations and the reduced basis method, were recently introduced. Another direction is related to the recent attempts to develop a tensor-based Hartree-Fock numerical scheme for finite lattice-structured systems, where one of the numerical challenges is the summation of electrostatic potentials of a large number of nuclei. The 3D grid-based tensor method for calculation of a potential sum on an L × L × L lattice manifests computational work linear in L, O(L), instead of the usual O(L³ log L) scaling of the Ewald-type approaches.
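    The 1D-complexity trick behind these methods rests on a simple identity: the (circular) convolution of two rank-1, separable 3D functions factorizes into three 1D convolutions of the factors, reducing O(n³ log n) work to O(n log n) per rank-1 term. A small numerical check of the identity (grid size and fields are arbitrary):

```python
import numpy as np

def conv1(a, b):
    # Circular 1D convolution via FFT.
    return np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)).real

n = 16
rng = np.random.default_rng(1)
f = [rng.standard_normal(n) for _ in range(3)]
g = [rng.standard_normal(n) for _ in range(3)]

# Full 3D circular convolution of the rank-1 tensors
# F = f1 (x) f2 (x) f3 and G = g1 (x) g2 (x) g3: O(n^3 log n) work.
F = np.einsum('i,j,k->ijk', *f)
G = np.einsum('i,j,k->ijk', *g)
full = np.fft.ifftn(np.fft.fftn(F) * np.fft.fftn(G)).real

# 1D-complexity route: convolve the factors pairwise, O(n log n) work,
# then form the rank-1 result.
sep = np.einsum('i,j,k->ijk',
                conv1(f[0], g[0]), conv1(f[1], g[1]), conv1(f[2], g[2]))
```

For a function given as a sum of R rank-1 terms, the same factorization applies term by term, which is why keeping the rank R low is the central concern of these tensor-structured solvers.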

  11. A uniform parametrization of moment tensors

    NASA Astrophysics Data System (ADS)

    Tape, Walter; Tape, Carl

    2015-09-01

    A moment tensor is a 3 × 3 symmetric matrix that expresses an earthquake source. We construct a parametrization of the 5-D space of all moment tensors of unit norm. The coordinates associated with the parametrization are closely related to moment tensor orientations and source types. The parametrization is uniform, in the sense that equal volumes in the coordinate domain of the parametrization correspond to equal volumes of moment tensors. Uniformly distributed points in the coordinate domain therefore give uniformly distributed moment tensors. A Cartesian grid in the coordinate domain can be used to search efficiently over moment tensors. We find that uniformly distributed moment tensors have uniformly distributed orientations (eigenframes), but that their source types (eigenvalue triples) are distributed so as to favour double couples.
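    One way to see the norm bookkeeping behind "uniform on the space of unit-norm moment tensors" is to sample an isotropic Gaussian in a 6-vector coordinate system whose Euclidean norm equals the tensor (Frobenius) norm, then normalize. This is not the authors' parametrization, only an illustrative sketch of the same uniformity notion:

```python
import numpy as np

def random_unit_moment_tensor(rng):
    """Uniform draw from the 5-sphere of unit-Frobenius-norm moment tensors.
    The 1/sqrt(2) weighting on off-diagonals makes the 6-vector 2-norm
    equal the tensor norm, so normalizing a Gaussian 6-vector is uniform."""
    v = rng.standard_normal(6)
    v /= np.linalg.norm(v)
    m11, m22, m33, a, b, c = v
    s = 1.0 / np.sqrt(2.0)
    return np.array([[m11,  a*s,  b*s],
                     [a*s,  m22,  c*s],
                     [b*s,  c*s,  m33]])
```

Because the Gaussian is rotationally invariant in this metric, the sampled tensors have uniformly distributed eigenframes, consistent with the orientation result quoted in the abstract.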

  12. Killing(-Yano) tensors in string theory

    NASA Astrophysics Data System (ADS)

    Chervonyi, Yuri; Lunin, Oleg

    2015-09-01

    We construct the Killing(-Yano) tensors for a large class of charged black holes in higher dimensions and study general properties of such tensors, in particular, their behavior under string dualities. Killing(-Yano) tensors encode the symmetries beyond isometries, which lead to insights into dynamics of particles and fields on a given geometry by providing a set of conserved quantities. By analyzing the eigenvalues of the Killing tensor, we provide a prescription for constructing several conserved quantities starting from a single object, and we demonstrate that Killing tensors in higher dimensions are always associated with ellipsoidal coordinates. We also determine the transformations of the Killing(-Yano) tensors under string dualities, and find the unique modification of the Killing-Yano equation consistent with these symmetries. These results are used to construct the explicit form of the Killing(-Yano) tensors for the Myers-Perry black hole in arbitrary number of dimensions and for its charged version.
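    For reference, the standard defining equations behind the conserved quantities mentioned above, in common notation (parentheses denote symmetrization over the enclosed indices, p^μ is the geodesic momentum):

```latex
% Killing tensor: yields a scalar conserved along geodesics.
\nabla_{(\mu} K_{\nu\rho)} = 0
\quad\Longrightarrow\quad
Q = K_{\mu\nu}\, p^{\mu} p^{\nu} \ \text{is conserved.}

% Killing-Yano tensor (antisymmetric): its square is a Killing tensor,
% so it acts as a "square root" of the hidden symmetry.
\nabla_{(\mu} Y_{\nu)\rho} = 0,
\qquad
K_{\mu\nu} = Y_{\mu\lambda}\, Y_{\nu}{}^{\lambda}.
```

The second relation is why a single Killing-Yano tensor can generate several conserved quantities, the prescription the abstract alludes to.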

  13. Tensor based tumor tissue type differentiation using magnetic resonance spectroscopic imaging.

    PubMed

    Bharath, H N; Sima, D M; Sauwen, N; Himmelreich, U; De Lathauwer, L; Van Huffel, S

    2015-08-01

    Magnetic resonance spectroscopic imaging (MRSI) has the potential to characterise different tissue types in brain tumors. Blind source separation techniques are used to extract the specific tissue profiles and their corresponding distribution from the MRSI data. A 3-dimensional MRSI tensor is constructed from in vivo 2D-MRSI data of individual tumor patients. Non-negative canonical polyadic decomposition (NCPD) with a common factor in modes 1 and 2 and ℓ1 regularization on mode 3 is applied to the MRSI tensor to differentiate various tissue types. An initial in vivo study shows that NCPD has better performance in identifying tumor and necrotic tissue types in high grade glioma patients compared to previous matrix-based decompositions, such as non-negative matrix factorization and hierarchical non-negative matrix factorization. PMID:26737904
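    A generic nonnegative CP decomposition can be sketched with multiplicative updates; note this omits the paper's common-factor coupling between modes 1 and 2 and the ℓ1 term on mode 3, and the rank, iteration count and synthetic data are assumptions:

```python
import numpy as np

def ncpd(X, rank, iters=500, eps=1e-12, seed=0):
    """Nonnegative CP via multiplicative updates: positive factors stay
    positive because each update multiplies by a nonnegative ratio."""
    rng = np.random.default_rng(seed)
    A = rng.random((X.shape[0], rank))
    B = rng.random((X.shape[1], rank))
    C = rng.random((X.shape[2], rank))
    for _ in range(iters):
        A *= np.einsum('ijk,jr,kr->ir', X, B, C) / (A @ ((B.T @ B) * (C.T @ C)) + eps)
        B *= np.einsum('ijk,ir,kr->jr', X, A, C) / (B @ ((A.T @ A) * (C.T @ C)) + eps)
        C *= np.einsum('ijk,ir,jr->kr', X, A, B) / (C @ ((A.T @ A) * (B.T @ B)) + eps)
    return A, B, C

# Demo: recover a nonnegative rank-2 tensor (spectra x voxels analogy).
rng = np.random.default_rng(1)
A0, B0, C0 = (rng.random((5, 2)) for _ in range(3))
X = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
A, B, C = ncpd(X, rank=2)
err = (np.linalg.norm(X - np.einsum('ir,jr,kr->ijk', A, B, C))
       / np.linalg.norm(X))
```

The nonnegativity constraint is what makes the recovered factors interpretable as tissue spectral profiles and spatial abundances rather than arbitrary signed components.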

  14. Collaborative Research: Process-Resolving Decomposition of the Global Temperature Response to Modes of Low Frequency Variability in a Changing Climate

    SciTech Connect

    Deng, Yi

    2014-11-24

    DOE-GTRC-05596 11/24/2014 Collaborative Research: Process-Resolving Decomposition of the Global Temperature Response to Modes of Low Frequency Variability in a Changing Climate PI: Dr. Yi Deng (PI) School of Earth and Atmospheric Sciences Georgia Institute of Technology 404-385-1821, yi.deng@eas.gatech.edu El Niño-Southern Oscillation (ENSO) and Annular Modes (AMs) represent respectively the most important modes of low frequency variability in the tropical and extratropical circulations. The projection of future changes in the ENSO and AM variability, however, remains highly uncertain with state-of-the-science climate models. This project conducted process-resolving, quantitative evaluations of the ENSO and AM variability in the modern reanalysis observations and in climate model simulations. The goal is to identify and understand the sources of uncertainty and biases in models' representation of ENSO and AM variability. Using a feedback analysis method originally formulated by one of the collaborative PIs, we partitioned the 3D atmospheric temperature anomalies and surface temperature anomalies associated with ENSO and AM variability into components linked to 1) radiation-related thermodynamic processes such as cloud and water vapor feedbacks, 2) local dynamical processes including convection and turbulent/diffusive energy transfer and 3) non-local dynamical processes such as the horizontal energy transport in the oceans and atmosphere. In the past 4 years, the research conducted at Georgia Tech under the support of this project has led to 15 peer-reviewed publications and 9 conference/workshop presentations. Two graduate students and one postdoctoral fellow also received research training through participating in the project activities. This final technical report summarizes key scientific discoveries we made and provides also a list of all publications and conference presentations resulting from research activities at Georgia Tech. The main findings include

  15. Competition between the tensor light shift and nonlinear Zeeman effect

    SciTech Connect

    Chalupczak, W.; Wojciechowski, A.; Pustelny, S.; Gawlik, W.

    2010-08-15

    Many precision measurements (e.g., in spectroscopy, atomic clocks, quantum-information processing, etc.) suffer from systematic errors introduced by the light shift. In our experimental configuration, however, the tensor light shift plays a positive role enabling the observation of spectral features otherwise masked by the cancellation of the transition amplitudes and creating resonances at a frequency unperturbed either by laser power or beam inhomogeneity. These phenomena occur thanks to the special relation between the nonlinear Zeeman and light shift effects. The interplay between these two perturbations is systematically studied and the cancellation of the nonlinear Zeeman effect by the tensor light shift is demonstrated.

  16. Cadaver decomposition in terrestrial ecosystems

    NASA Astrophysics Data System (ADS)

    Carter, David O.; Yellowlees, David; Tibbett, Mark

    2007-01-01

    A dead mammal (i.e. cadaver) is a high quality resource (narrow carbon:nitrogen ratio, high water content) that releases an intense, localised pulse of carbon and nutrients into the soil upon decomposition. Despite the fact that as much as 5,000 kg of cadaver can be introduced to a square kilometre of terrestrial ecosystem each year, cadaver decomposition remains a neglected microsere. Here we review the processes associated with the introduction of cadaver-derived carbon and nutrients into soil from forensic and ecological settings to show that cadaver decomposition can have a greater, albeit localised, effect on belowground ecology than plant and faecal resources. Cadaveric materials are rapidly introduced to belowground floral and faunal communities, which results in the formation of a highly concentrated island of fertility, or cadaver decomposition island (CDI). CDIs are associated with increased soil microbial biomass, microbial activity (C mineralisation) and nematode abundance. Each CDI is an ephemeral natural disturbance that, in addition to releasing energy and nutrients to the wider ecosystem, acts as a hub by receiving these materials in the form of dead insects, exuvia and puparia, faecal matter (from scavengers, grazers and predators) and feathers (from avian scavengers and predators). As such, CDIs contribute to landscape heterogeneity. Furthermore, CDIs are a specialised habitat for a number of flies, beetles and pioneer vegetation, which enhances biodiversity in terrestrial ecosystems.

  17. Tensor completion for estimating missing values in visual data.

    PubMed

    Liu, Ji; Musialski, Przemyslaw; Wonka, Peter; Ye, Jieping

    2013-01-01

    In this paper, we propose an algorithm to estimate missing values in tensors of visual data. The values can be missing due to problems in the acquisition process or because the user manually identified unwanted outliers. Our algorithm works even with a small amount of samples and it can propagate structure to fill larger missing regions. Our methodology is built on recent studies about matrix completion using the matrix trace norm. The contribution of our paper is to extend the matrix case to the tensor case by proposing the first definition of the trace norm for tensors and then by building a working algorithm. First, we propose a definition for the tensor trace norm that generalizes the established definition of the matrix trace norm. Second, similarly to matrix completion, the tensor completion is formulated as a convex optimization problem. Unfortunately, the straightforward problem extension is significantly harder to solve than the matrix case because of the dependency among multiple constraints. To tackle this problem, we developed three algorithms: simple low rank tensor completion (SiLRTC), fast low rank tensor completion (FaLRTC), and high accuracy low rank tensor completion (HaLRTC). The SiLRTC algorithm is simple to implement and employs a relaxation technique to separate the dependent relationships and uses the block coordinate descent (BCD) method to achieve a globally optimal solution; the FaLRTC algorithm utilizes a smoothing scheme to transform the original nonsmooth problem into a smooth one and can be used to solve a general tensor trace norm minimization problem; the HaLRTC algorithm applies the alternating direction method of multipliers (ADMMs) to our problem. Our experiments show potential applications of our algorithms and the quantitative evaluation indicates that our methods are more accurate and robust than heuristic approaches. 
The efficiency comparison indicates that FaLRTC and HaLRTC are more efficient than SiLRTC and between FaLRTC an
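
    The tensor trace norm defined above is a weighted sum of the nuclear norms of the mode-n unfoldings. A minimal NumPy sketch of that definition (the uniform weights are an assumption; the SiLRTC/FaLRTC/HaLRTC solvers themselves are not reproduced here):

```python
import numpy as np

def unfold(T, mode):
    # Mode-n unfolding: move axis `mode` to the front, flatten the rest.
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def tensor_trace_norm(T, alphas=None):
    # Weighted sum of nuclear norms of all mode-n unfoldings; with the
    # weights summing to one this generalizes the matrix trace norm.
    n = T.ndim
    alphas = alphas if alphas is not None else [1.0 / n] * n
    return sum(a * np.linalg.norm(unfold(T, k), 'nuc')
               for k, a in enumerate(alphas))
```

    For a matrix viewed as a 2-way tensor, both unfoldings share the same singular values, so the definition reduces to the ordinary matrix trace (nuclear) norm.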

  18. Local virial and tensor theorems.

    PubMed

    Cohen, Leon

    2011-11-17

    We show that for any wave function and potential the local virial theorem 2K(r) = r·∇V can always be satisfied by choosing a particular expression for the local kinetic energy. In addition, we show that for each choice of local kinetic energy there are an infinite number of quasi-probability distributions which will generate the same expression. We also consider the local tensor virial theorem. PMID:21863837

  20. Multiview face recognition: from TensorFace to V-TensorFace and K-TensorFace.

    PubMed

    Tian, Chunna; Fan, Guoliang; Gao, Xinbo; Tian, Qi

    2012-04-01

    Face images under uncontrolled environments suffer from the changes of multiple factors such as camera view, illumination, expression, etc. Tensor analysis provides a way of analyzing the influence of different factors on facial variation. However, the TensorFace model creates a difficulty in representing the nonlinearity of view subspace. In this paper, to break this limitation, we present a view-manifold-based TensorFace (V-TensorFace), in which the latent view manifold preserves the local distances in the multiview face space. Moreover, a kernelized TensorFace (K-TensorFace) for multiview face recognition is proposed to preserve the structure of the latent manifold in the image space. Both methods provide a generative model that involves a continuous view manifold for unseen view representation. Most importantly, we propose a unified framework to generalize TensorFace, V-TensorFace, and K-TensorFace. Finally, an expectation-maximization like algorithm is developed to estimate the identity and view parameters iteratively for a face image of an unknown/unseen view. The experiment on the PIE database shows the effectiveness of the manifold construction method. Extensive comparison experiments on Weizmann and Oriental Face databases for multiview face recognition demonstrate the superiority of the proposed V- and K-TensorFace methods over the view-based principal component analysis and other state-of-the-art approaches for such purpose. PMID:22318490

  1. Decomposition of Sodium Tetraphenylborate

    SciTech Connect

    Barnes, M.J.

    1998-11-20

    The chemical decomposition of aqueous alkaline solutions of sodium tetraphenylborate (NaTPB) has been investigated. The focus of the investigation is on the determination of additives and/or variables which influence NaTPB decomposition. This document describes work aimed at providing a better understanding of the relationship of copper (II), solution temperature, and solution pH to NaTPB stability.

  2. Generalised tensor fluctuations and inflation

    SciTech Connect

    Cannone, Dario; Tasinato, Gianmassimo; Wands, David E-mail: g.tasinato@swansea.ac.uk

    2015-01-01

    Using an effective field theory approach to inflation, we examine novel properties of the spectrum of inflationary tensor fluctuations, that arise when breaking some of the symmetries or requirements usually imposed on the dynamics of perturbations. During single-clock inflation, time-reparameterization invariance is broken by a time-dependent cosmological background. In order to explore more general scenarios, we consider the possibility that spatial diffeomorphism invariance is also broken by effective mass terms or by derivative operators for the metric fluctuations in the Lagrangian. We investigate the cosmological consequences of the breaking of spatial diffeomorphisms, focussing on operators that affect the power spectrum of fluctuations. We identify the operators for tensor fluctuations that can provide a blue spectrum without violating the null energy condition, and operators for scalar fluctuations that lead to non-conservation of the comoving curvature perturbation on superhorizon scales even in single-clock inflation. In the last part of our work, we also examine the consequences of operators containing more than two spatial derivatives, discussing how they affect the sound speed of tensor fluctuations, and showing that they can mimic some of the interesting effects of symmetry breaking operators, even in scenarios that preserve spatial diffeomorphism invariance.

  3. Sparse alignment for robust tensor learning.

    PubMed

    Lai, Zhihui; Wong, Wai Keung; Xu, Yong; Zhao, Cairong; Sun, Mingming

    2014-10-01

    Multilinear/tensor extensions of manifold learning based algorithms have been widely used in computer vision and pattern recognition. This paper first provides a systematic analysis of the multilinear extensions for the most popular methods by using alignment techniques, thereby obtaining a general tensor alignment framework. From this framework, it is easy to show that the manifold learning based tensor learning methods are intrinsically different from the alignment techniques. Based on the alignment framework, a robust tensor learning method called sparse tensor alignment (STA) is then proposed for unsupervised tensor feature extraction. Different from the existing tensor learning methods, L1- and L2-norms are introduced to enhance the robustness in the alignment step of the STA. The advantage of the proposed technique is that the difficulty in selecting the size of the local neighborhood can be avoided in the manifold learning based tensor feature extraction algorithms. Although STA is an unsupervised learning method, the sparsity encodes the discriminative information in the alignment step and provides the robustness of STA. Extensive experiments on the well-known image databases as well as action and hand gesture databases by encoding object images as tensors demonstrate that the proposed STA algorithm gives the most competitive performance when compared with the tensor-based unsupervised learning methods. PMID:25291733

  5. Gravitoelectromagnetic analogy based on tidal tensors

    SciTech Connect

    Costa, L. Filipe O.; Herdeiro, Carlos A. R.

    2008-07-15

    We propose a new approach to a physical analogy between general relativity and electromagnetism, based on tidal tensors of both theories. Using this approach we write a covariant form for the gravitational analogues of the Maxwell equations, which makes transparent both the similarities and key differences between the two interactions. The following realizations of the analogy are given. The first one matches linearized gravitational tidal tensors to exact electromagnetic tidal tensors in Minkowski spacetime. The second one matches exact magnetic gravitational tidal tensors for ultrastationary metrics to exact magnetic tidal tensors of electromagnetism in curved spaces. In the third we show that our approach leads to a two-step exact derivation of Papapetrou's equation describing the force exerted on a spinning test particle. Analogous scalar invariants built from tidal tensors of both theories are also discussed.

  6. Tensor coupling effect on relativistic symmetries

    NASA Astrophysics Data System (ADS)

    Chen, ShouWan; Li, DongPeng; Guo, JianYou

    2016-08-01

    The similarity renormalization group is used to transform the Dirac Hamiltonian with tensor coupling into a diagonal form. The upper (lower) diagonal element becomes a Schrödinger-like operator with the tensor component separated from the original Hamiltonian. Based on the operator, the tensor effect on the relativistic symmetries is explored with a focus on the single-particle energy contributed by the tensor coupling. The results show that the tensor coupling destroying (improving) the spin (pseudospin) symmetry is mainly attributed to the coupling of the spin-orbit and the tensor term, which plays an opposite role in the single-particle energy for the (pseudo-)spin-aligned and spin-unaligned states and has an important influence on the shell structure and its evolution.

  7. Inflationary tensor perturbations after BICEP2.

    PubMed

    Caligiuri, Jerod; Kosowsky, Arthur

    2014-05-16

    The measurement of B-mode polarization of the cosmic microwave background at large angular scales by the BICEP experiment suggests a stochastic gravitational wave background from early-Universe inflation with a surprisingly large amplitude. The power spectrum of these tensor perturbations can be probed both with further measurements of the microwave background polarization at smaller scales and also directly via interferometry in space. We show that sufficiently sensitive high-resolution B-mode measurements will ultimately have the ability to test the inflationary consistency relation between the amplitude and spectrum of the tensor perturbations, confirming their inflationary origin. Additionally, a precise B-mode measurement of the tensor spectrum will predict the tensor amplitude on solar system scales to 20% accuracy for an exact power-law tensor spectrum, so a direct detection will then measure the running of the tensor spectral index to high precision. PMID:24877926

  8. The Invar tensor package: Differential invariants of Riemann

    NASA Astrophysics Data System (ADS)

    Martín-García, J. M.; Yllanes, D.; Portugal, R.

    2008-10-01

    The long-standing problem of the relations among the scalar invariants of the Riemann tensor is computationally solved for all 6·10 objects with up to 12 derivatives of the metric. This covers cases ranging from products of up to 6 undifferentiated Riemann tensors to cases with up to 10 covariant derivatives of a single Riemann. We extend our computer algebra system Invar to produce within seconds a canonical form for any of those objects in terms of a basis. The process is as follows: (1) an invariant is converted in real time into a canonical form with respect to the permutation symmetries of the Riemann tensor; (2) Invar reads a database of more than 6·10 relations and applies those coming from the cyclic symmetry of the Riemann tensor; (3) then applies the relations coming from the Bianchi identity, (4) the relations coming from commutations of covariant derivatives, (5) the dimensionally-dependent identities for dimension 4, and finally (6) simplifies invariants that can be expressed as product of dual invariants. Invar runs on top of the tensor computer algebra systems xTensor (for Mathematica) and Canon (for Maple). Program summary Program title:Invar Tensor Package v2.0 Catalogue identifier:ADZK_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADZK_v2_0.html Program obtainable from:CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions:Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.:3 243 249 No. of bytes in distributed program, including test data, etc.:939 Distribution format:tar.gz Programming language:Mathematica and Maple Computer:Any computer running Mathematica versions 5.0 to 6.0 or Maple versions 9 and 11 Operating system:Linux, Unix, Windows XP, MacOS RAM:100 Mb Word size:64 or 32 bits Supplementary material:The new database of relations is much larger than that for the previous version and therefore has not been included in

  9. Modeling individual HRTF tensor using high-order partial least squares

    NASA Astrophysics Data System (ADS)

    Huang, Qinghua; Li, Lin

    2014-12-01

    A tensor is used to describe head-related transfer functions (HRTFs) depending on frequencies, sound directions, and anthropometric parameters. It keeps the multi-dimensional structure of measured HRTFs. To construct a multi-linear HRTF personalization model, an individual core tensor is extracted from the original HRTFs using high-order singular value decomposition (HOSVD). The individual core tensor in lower-dimensional space acts as the output of the multi-linear model. Some key anthropometric parameters as the inputs of the model are selected by Laplacian scores and correlation analyses between all the measured parameters and the individual core tensor. Then, the multi-linear regression model is constructed by high-order partial least squares (HOPLS), aiming to seek a joint subspace approximation for both the selected parameters and the individual core tensor. The numbers of latent variables and loadings are used to control the complexity of the model and to prevent overfitting. Compared with the partial least squares regression (PLSR) method, objective simulations demonstrate better performance for predicting individual HRTFs, especially for the sound directions ipsilateral to the concerned ear. The subjective listening tests show that the predicted individual HRTFs are close to the measured HRTFs for sound localization.
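
    The core-tensor extraction step rests on the higher-order SVD. A compact NumPy sketch of truncated HOSVD (illustrative only; the HOPLS regression on anthropometric inputs is not shown):

```python
import numpy as np

def unfold(T, mode):
    # Mode-n unfolding of a tensor into a matrix.
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T, ranks):
    # Truncated HOSVD: factor matrices are the leading left singular
    # vectors of each mode-n unfolding; the core is the projection of
    # T onto those factors.
    U = []
    for k, r in enumerate(ranks):
        u, _, _ = np.linalg.svd(unfold(T, k), full_matrices=False)
        U.append(u[:, :r])
    core = T
    for k, u in enumerate(U):
        core = np.moveaxis(np.tensordot(u.T, np.moveaxis(core, k, 0), axes=1), 0, k)
    return core, U
```

    With full ranks the decomposition is exact; truncating the ranks yields the lower-dimensional core tensor used as the model output.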

  10. Denoising of hyperspectral images by best multilinear rank approximation of a tensor

    NASA Astrophysics Data System (ADS)

    Marin-McGee, Maider; Velez-Reyes, Miguel

    2010-04-01

    The hyperspectral image cube can be modeled as a three dimensional array. Tensors and the tools of multilinear algebra provide a natural framework to deal with this type of mathematical object. Singular value decomposition (SVD) and its variants have been used by the HSI community for denoising of hyperspectral imagery. Denoising of HSI using SVD is achieved by finding a low rank approximation of a matrix representation of the hyperspectral image cube. This paper investigates similar concepts in hyperspectral denoising by using a low multilinear rank approximation of the given HSI tensor representation. The Best Multilinear Rank Approximation (BMRA) of a given tensor A is to find a lower multilinear rank tensor B that is as close as possible to A in the Frobenius norm. Different numerical methods to compute the BMRA using the Alternating Least Squares (ALS) method and Newton's method over products of Grassmann manifolds are presented. The effect of the multilinear rank, the numerical method used to compute the BMRA, and different parameter choices in those methods are studied. Results show that comparable results are achievable with both ALS and Newton type methods. Also, classification results using the filtered tensor are better than those obtained with denoising using either SVD or MNF.
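
    The ALS route to the best multilinear rank approximation alternates over modes, at each step taking the leading singular vectors of the tensor projected onto the other factors (higher-order orthogonal iteration). A sketch under the usual HOOI formulation (parameter names are my own, not the paper's):

```python
import numpy as np

def unfold(T, mode):
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hooi(A, ranks, iters=20):
    # Higher-order orthogonal iteration: an ALS scheme for the best
    # multilinear rank (Tucker) approximation of A, initialized by HOSVD.
    U = [np.linalg.svd(unfold(A, k), full_matrices=False)[0][:, :r]
         for k, r in enumerate(ranks)]
    for _ in range(iters):
        for m in range(A.ndim):
            # Project A onto every factor except mode m ...
            Y = A
            for k in range(A.ndim):
                if k == m:
                    continue
                Y = np.moveaxis(np.tensordot(U[k].T, np.moveaxis(Y, k, 0), axes=1), 0, k)
            # ... then refresh the mode-m factor from the leading singular vectors.
            U[m] = np.linalg.svd(unfold(Y, m), full_matrices=False)[0][:, :ranks[m]]
    core = A
    for k in range(A.ndim):
        core = np.moveaxis(np.tensordot(U[k].T, np.moveaxis(core, k, 0), axes=1), 0, k)
    return core, U
```

    When A genuinely has the requested multilinear rank, the approximation B reconstructed from the core and factors recovers A exactly.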

  11. The Topology of Symmetric Tensor Fields

    NASA Technical Reports Server (NTRS)

    Levin, Yingmei; Batra, Rajesh; Hesselink, Lambertus; Levy, Yuval

    1997-01-01

    Combinatorial topology, also known as "rubber sheet geometry", has extensive applications in geometry and analysis, many of which result from connections with the theory of differential equations. A link between topology and differential equations is vector fields. Recent developments in scientific visualization have shown that vector fields also play an important role in the analysis of second-order tensor fields. A second-order tensor field can be transformed into its eigensystem, namely, eigenvalues and their associated eigenvectors, without loss of information content. Eigenvectors behave in a similar fashion to ordinary vectors, with even simpler topological structures due to their sign indeterminacy. Incorporating information about eigenvectors and eigenvalues in a display technique known as hyperstreamlines reveals the structure of a tensor field. To simplify an often complex tensor field and to capture its important features, the tensor is decomposed into an isotropic tensor and a deviator. A tensor field and its deviator share the same set of eigenvectors, and therefore they have a similar topological structure. The deviator determines the properties of a tensor field, while the isotropic part provides a uniform bias. Degenerate points are basic constituents of tensor fields. In 2-D tensor fields, there are only two types of degenerate points, while in 3-D the degenerate points can be characterized in a Q'-R' plane. Compressible and incompressible flows share similar topological features due to the similarity of their deviators. In the case of the deformation tensor, the singularities of its deviator represent the area of the vortex core in the field. In turbulent flows, the similarities and differences of the topology of the deformation and the Reynolds stress tensors reveal that the basic eddy-viscosity assumptions have their validity in turbulence modeling under certain conditions.
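
    The isotropic/deviator split described above is elementary to compute; a small NumPy sketch for the 2-D case, where a degenerate point is one whose eigenvalues coincide, i.e. where the deviator vanishes:

```python
import numpy as np

def iso_deviator(S):
    # Split a symmetric tensor into an isotropic part (trace/n times the
    # identity) and a trace-free deviator.
    n = S.shape[0]
    iso = (np.trace(S) / n) * np.eye(n)
    return iso, S - iso

def is_degenerate(S, tol=1e-9):
    # In 2-D a point is degenerate exactly when both eigenvalues are
    # equal, which happens iff the deviator is (numerically) zero.
    _, dev = iso_deviator(S)
    return np.linalg.norm(dev) < tol
```

    Since a tensor and its deviator differ only by a multiple of the identity, they share eigenvectors, which is why the deviator carries the field's topological structure.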

  12. Some classes of renormalizable tensor models

    NASA Astrophysics Data System (ADS)

    Geloun, Joseph Ben; Livine, Etera R.

    2013-08-01

    We identify new families of renormalizable tensor models from anterior renormalizable tensor models via a mapping capable of reducing or increasing the rank of the theory without having an effect on the renormalizability property. Mainly, a version of the rank 3 tensor model as defined by Ben Geloun and Samary [Ann. Henri Poincare 14, 1599 (2013); e-print arXiv:1201.0176 [hep-th

  13. Curvature tensors and unified field equations on SEXn

    NASA Astrophysics Data System (ADS)

    Chung, Kyung Tae; Lee, Il Young

    1988-09-01

    We study the curvature tensors and field equations in the n-dimensional SE manifold SEXn. We obtain several basic properties of the vectors S_λ and U_λ and then of the SE curvature tensor and its contractions, such as a generalized Ricci identity, a generalized Bianchi identity, and two variations of the Bianchi identity satisfied by the SE Einstein tensor. Finally, a system of field equations is discussed in SEXn and one of its particular solutions is constructed and displayed.

  14. Hardware Implementation of Singular Value Decomposition

    NASA Astrophysics Data System (ADS)

    Majumder, Swanirbhar; Shaw, Anil Kumar; Sarkar, Subir Kumar

    2016-06-01

    Singular value decomposition (SVD) is a useful decomposition technique which plays an important role in various engineering fields such as image compression, watermarking, signal processing, and numerous others. Unlike the most popular transforms, SVD does not involve a convolution operation, which makes it more suitable for hardware implementation. This paper reviews the various methods of hardware implementation for SVD computation and studies the time complexity and hardware complexity of each.
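
    As one concrete example of the rotation-based schemes that dominate hardware SVD designs, here is a minimal sketch of Hestenes' one-sided Jacobi method (an illustration of the general technique, not code from the paper): columns are orthogonalized by plane rotations, and the singular values emerge as the final column norms.

```python
import numpy as np

def one_sided_jacobi_svd(A, sweeps=30, tol=1e-12):
    # One-sided Jacobi SVD (assumes A has full column rank):
    # rotate column pairs until all pairs are orthogonal.
    U = A.astype(float).copy()
    n = U.shape[1]
    V = np.eye(n)
    for _ in range(sweeps):
        off = 0.0
        for p in range(n - 1):
            for q in range(p + 1, n):
                alpha = U[:, p] @ U[:, p]
                beta = U[:, q] @ U[:, q]
                gamma = U[:, p] @ U[:, q]
                off = max(off, abs(gamma))
                if abs(gamma) < tol:
                    continue
                # Rotation angle chosen to zero the (p, q) inner product.
                zeta = (beta - alpha) / (2.0 * gamma)
                t = np.sign(zeta) / (abs(zeta) + np.sqrt(1 + zeta**2)) if zeta != 0 else 1.0
                c = 1.0 / np.sqrt(1 + t**2)
                s = c * t
                Up, Uq = U[:, p].copy(), U[:, q].copy()
                U[:, p], U[:, q] = c * Up - s * Uq, s * Up + c * Uq
                Vp, Vq = V[:, p].copy(), V[:, q].copy()
                V[:, p], V[:, q] = c * Vp - s * Vq, s * Vp + c * Vq
        if off < tol:
            break
    sig = np.linalg.norm(U, axis=0)     # singular values = column norms
    return U / sig, sig, V              # A = (U/sig * sig) @ V.T
```

    The inner loop uses only dot products and 2x2 rotations, which is precisely what makes Jacobi-type schemes attractive for systolic-array hardware.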

  15. Asbestos-induced decomposition of hydrogen peroxide

    SciTech Connect

    Eberhardt, M.K.; Roman-Franco, A.A.; Quiles, M.R.

    1985-08-01

    Decomposition of H2O2 by chrysotile asbestos was demonstrated employing titration with KMnO4. The participation of OH radicals in this process was delineated employing the OH radical scavenger dimethyl sulfoxide (DMSO). A mechanism involving the Fenton and Haber-Weiss reactions as the pathway for the H2O2 decomposition and OH radical production is postulated.

  16. Diffusion Tensor Estimation by Maximizing Rician Likelihood

    PubMed Central

    Landman, Bennett; Bazin, Pierre-Louis; Prince, Jerry

    2012-01-01

    Diffusion tensor imaging (DTI) is widely used to characterize white matter in health and disease. Previous approaches to the estimation of diffusion tensors have either been statistically suboptimal or have used Gaussian approximations of the underlying noise structure, which is Rician in reality. This can cause quantities derived from these tensors — e.g., fractional anisotropy and apparent diffusion coefficient — to diverge from their true values, potentially leading to artifactual changes that confound clinically significant ones. This paper presents a novel maximum likelihood approach to tensor estimation, denoted Diffusion Tensor Estimation by Maximizing Rician Likelihood (DTEMRL). In contrast to previous approaches, DTEMRL considers the joint distribution of all observed data in the context of an augmented tensor model to account for variable levels of Rician noise. To improve numeric stability and prevent non-physical solutions, DTEMRL incorporates a robust characterization of positive definite tensors and a new estimator of underlying noise variance. In simulated and clinical data, mean squared error metrics show consistent and significant improvements from low clinical SNR to high SNR. DTEMRL may be readily supplemented with spatial regularization or a priori tensor distributions for Bayesian tensor estimation. PMID:23132746
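
    The Rician likelihood underlying such estimators has the closed form p(m | ν, σ) = (m/σ²) exp(−(m² + ν²)/(2σ²)) I₀(mν/σ²) for observed magnitude m and noise-free signal ν. A numerically stable negative log-likelihood sketch (the full DTEMRL tensor model, positive-definiteness constraint, and noise-variance estimator are not reproduced):

```python
import numpy as np
from scipy.special import ive  # exponentially scaled Bessel I, for stability

def rician_nll(m, nu, sigma):
    # Negative log-likelihood of magnitudes m under Rician noise with
    # noise-free signal nu and noise level sigma.
    s2 = sigma ** 2
    x = m * nu / s2
    # log I0(x) = log(ive(0, x)) + x, avoiding overflow of I0 for large x.
    log_i0 = np.log(ive(0, x)) + x
    ll = np.log(m / s2) - (m**2 + nu**2) / (2.0 * s2) + log_i0
    return -np.sum(ll)
```

    Maximizing this likelihood (rather than a Gaussian approximation) is what corrects the bias in derived quantities such as fractional anisotropy at low SNR.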

  18. Fast stray field computation on tensor grids

    PubMed Central

    Exl, L.; Auzinger, W.; Bance, S.; Gusenbauer, M.; Reichel, F.; Schrefl, T.

    2012-01-01

    A direct integration algorithm is described to compute the magnetostatic field and energy for given magnetization distributions on not necessarily uniform tensor grids. We use an analytically-based tensor approximation approach for function-related tensors, which reduces calculations to multilinear algebra operations. The algorithm scales with N^(4/3) for N computational cells used and with N^(2/3) (sublinear) when magnetization is given in canonical tensor format. In the final section we confirm our theoretical results concerning computing times and accuracy by means of numerical examples. PMID:24910469

  19. Variance decomposition in stochastic simulators

    NASA Astrophysics Data System (ADS)

    Le Maître, O. P.; Knio, O. M.; Moraes, A.

    2015-06-01

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.

  1. Variance decomposition in stochastic simulators

    SciTech Connect

    Le Maître, O. P.; Knio, O. M.; Moraes, A.

    2015-06-28

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.
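
    The Sobol-Hoeffding machinery used here specializes the classical variance decomposition to Poisson channel realizations. For a generic model of independent random inputs, first-order Sobol indices can be estimated with a standard pick-freeze scheme; the sketch below is that generic estimator (an illustration, not the paper's Poisson-channel construction):

```python
import numpy as np

rng = np.random.default_rng(0)

def sobol_first_order(f, d, n=100_000):
    # Pick-freeze estimator of first-order Sobol indices for a model
    # f(X) with d independent standard-uniform inputs:
    # S_i = Cov(f(A), f(B with column i taken from A)) / Var(f).
    A = rng.random((n, d))
    B = rng.random((n, d))
    fA, fB = f(A), f(B)
    var = fA.var()
    S = []
    for i in range(d):
        Ci = B.copy()
        Ci[:, i] = A[:, i]          # "freeze" input i from sample A
        S.append(np.mean(fA * (f(Ci) - fB)) / var)
    return np.array(S)
```

    For an additive toy model Y = X0 + 0.1*X1, nearly all of the variance is attributable to X0, and the indices sum to one up to estimator noise.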

  2. Covariant Conformal Decomposition of Einstein Equations

    NASA Astrophysics Data System (ADS)

    Gourgoulhon, E.; Novak, J.

    It has been shown [1,2] that the usual 3+1 form of Einstein's equations may be ill-posed. This result has been previously observed in numerical simulations [3,4]. We present a 3+1 type formalism inspired by these works to decompose Einstein's equations. This decomposition is motivated by the aim of stable numerical implementation and resolution of the equations. We introduce the conformal 3-"metric" (scaled by the determinant of the usual 3-metric), which is a tensor density of weight -2/3. The Einstein equations are then derived in terms of this "metric", of the conformal extrinsic curvature, and in terms of the associated derivative. We also introduce a flat 3-metric (the asymptotic metric for isolated systems) and the associated derivative. Finally, the generalized Dirac gauge (introduced by Smarr and York [5]) is used in this formalism and some examples of formulation of Einstein's equations are shown.

  3. Electroproduction of tensor mesons in QCD

    NASA Astrophysics Data System (ADS)

    Braun, V. M.; Kivel, N.; Strohmaier, M.; Vladimirov, A. A.

    2016-06-01

    Due to multiple possible polarizations, hard exclusive production of tensor mesons by virtual photons or in heavy meson decays offers interesting possibilities to study the helicity structure of the underlying short-distance process. Motivated by the first measurement of the transition form factor γ*γ → f_2(1270) at large momentum transfers by the BELLE collaboration, we present an improved QCD analysis of this reaction in the framework of collinear factorization including contributions of twist-three quark-antiquark-gluon operators and an estimate of soft end-point corrections using light-cone sum rules. The results appear to be in good agreement with the data; in particular, the predicted scaling behavior is reproduced in all cases.

  4. Adaptive Fourier decomposition based ECG denoising.

    PubMed

    Wang, Ze; Wan, Feng; Wong, Chi Man; Zhang, Liming

    2016-10-01

    A novel ECG denoising method is proposed based on the adaptive Fourier decomposition (AFD). The AFD decomposes a signal according to its energy distribution, thereby making this algorithm suitable for separating pure ECG signal and noise with overlapping frequency ranges but different energy distributions. A stop criterion for the iterative decomposition process in the AFD is calculated on the basis of the estimated signal-to-noise ratio (SNR) of the noisy signal. The proposed AFD-based method is validated on a synthetic ECG signal generated from an ECG model and on real ECG signals from the MIT-BIH Arrhythmia Database, both with additive Gaussian white noise. Simulation results show that the proposed method performs better in denoising and QRS detection compared with major ECG denoising schemes based on the wavelet transform, the Stockwell transform, the empirical mode decomposition, and the ensemble empirical mode decomposition.

  5. Temporal dynamics of biotic and abiotic drivers of litter decomposition.

    PubMed

    García-Palacios, Pablo; Shaw, E Ashley; Wall, Diana H; Hättenschwiler, Stephan

    2016-05-01

    Climate, litter quality and decomposers drive litter decomposition. However, little is known about whether their relative contribution changes at different decomposition stages. To fill this gap, we evaluated the relative importance of leaf litter polyphenols, decomposer communities and soil moisture for litter C and N loss at different stages throughout the decomposition process. Although both microbial and nematode communities regulated litter C and N loss in the early decomposition stages, soil moisture and legacy effects of initial differences in litter quality played a major role in the late stages of the process. Our results provide strong evidence for substantial shifts in how biotic and abiotic factors control litter C and N dynamics during decomposition. Taking into account such temporal dynamics will increase the predictive power of decomposition models that are currently limited by a single-pool approach applying control variables uniformly to the entire decay process.

  6. Communication: Acceleration of coupled cluster singles and doubles via orbital-weighted least-squares tensor hypercontraction

    SciTech Connect

    Parrish, Robert M.; Sherrill, C. David; Hohenstein, Edward G.; Kokkila, Sara I. L.; Martínez, Todd J.

    2014-05-14

    We apply orbital-weighted least-squares tensor hypercontraction decomposition of the electron repulsion integrals to accelerate the coupled cluster singles and doubles (CCSD) method. Using accurate and flexible low-rank factorizations of the electron repulsion integral tensor, we are able to reduce the scaling of the most vexing particle-particle ladder term in CCSD from O(N^6) to O(N^5), with remarkably low error. Combined with a T_1-transformed Hamiltonian, this leads to substantial practical accelerations against an optimized density-fitted CCSD implementation.

  7. Detecting the community structure and activity patterns of temporal networks: a non-negative tensor factorization approach.

    PubMed

    Gauvin, Laetitia; Panisson, André; Cattuto, Ciro

    2014-01-01

    The increasing availability of temporal network data is calling for more research on extracting and characterizing mesoscopic structures in temporal networks and on relating such structure to specific functions or properties of the system. An outstanding challenge is the extension of the results achieved for static networks to time-varying networks, where the topological structure of the system and the temporal activity patterns of its components are intertwined. Here we investigate the use of a latent factor decomposition technique, non-negative tensor factorization, to extract the community-activity structure of temporal networks. The method is intrinsically temporal and allows one to simultaneously identify communities and track their activity over time. We represent the time-varying adjacency matrix of a temporal network as a three-way tensor and approximate this tensor as a sum of terms that can be interpreted as communities of nodes with an associated activity time series. We summarize known computational techniques for tensor decomposition and discuss some quality metrics that can be used to tune the complexity of the factorized representation. We subsequently apply tensor factorization to a temporal network for which a ground truth is available for both the community structure and the temporal activity patterns. The data we use describe the social interactions of students in a school, the associations between students and school classes, and the spatio-temporal trajectories of students over time. We show that non-negative tensor factorization is capable of recovering the class structure with high accuracy. In particular, the extracted tensor components can be validated either as known school classes, or in terms of correlated activity patterns, i.e., of spatial and temporal coincidences that are determined by the known school activity schedule.
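
As a rough illustration of the technique described above, the following numpy sketch factorizes a nonnegative 3-way tensor into rank-one components. The multiplicative-update scheme is a standard Lee-Seung-style choice and the helper names are assumptions, not necessarily the authors' solver.

```python
import numpy as np

def khatri_rao(B, C):
    """Column-wise Khatri-Rao product: column r is kron(B[:, r], C[:, r])."""
    return (B[:, None, :] * C[None, :, :]).reshape(-1, B.shape[1])

def ntf_cp(X, rank, n_iter=500, eps=1e-12):
    """Nonnegative CP factorization of a 3-way tensor via multiplicative
    updates. A minimal sketch, not an optimized or regularized solver."""
    I, J, K = X.shape
    rng = np.random.default_rng(1)
    A = rng.random((I, rank)); B = rng.random((J, rank)); C = rng.random((K, rank))
    X1 = X.reshape(I, J * K)                      # mode-1 unfolding
    X2 = X.transpose(1, 0, 2).reshape(J, I * K)   # mode-2 unfolding
    X3 = X.transpose(2, 0, 1).reshape(K, I * J)   # mode-3 unfolding
    for _ in range(n_iter):
        A *= (X1 @ khatri_rao(B, C)) / (A @ ((B.T @ B) * (C.T @ C)) + eps)
        B *= (X2 @ khatri_rao(A, C)) / (B @ ((A.T @ A) * (C.T @ C)) + eps)
        C *= (X3 @ khatri_rao(A, B)) / (C @ ((A.T @ A) * (B.T @ B)) + eps)
    return A, B, C

# Synthetic "network": nodes x nodes x time, built from 3 planted components.
rng = np.random.default_rng(0)
A0, B0, C0 = rng.random((10, 3)), rng.random((8, 3)), rng.random((6, 3))
X = np.einsum('ir,jr,kr->ijk', A0, B0, C0)

A, B, C = ntf_cp(X, rank=3)
X_hat = (A @ khatri_rao(B, C).T).reshape(X.shape)
```

In the temporal-network setting, the columns of the first two factors play the role of community memberships and the columns of the third factor are the associated activity time series.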

  8. CAST: Contraction Algorithm for Symmetric Tensors

    SciTech Connect

    Rajbhandari, Samyam; Nikam, Akshay; Lai, Pai-Wei; Stock, Kevin; Krishnamoorthy, Sriram; Sadayappan, Ponnuswamy

    2014-09-22

    Tensor contractions represent the most compute-intensive core kernels in ab initio computational quantum chemistry and nuclear physics. Symmetries in these tensor contractions make them difficult to load balance and scale to large distributed systems. In this paper, we develop an efficient and scalable algorithm to contract symmetric tensors. We introduce a novel approach that avoids data redistribution in contracting symmetric tensors while also avoiding redundant storage and maintaining load balance. We present experimental results on two parallel supercomputers for several symmetric contractions that appear in the CCSD quantum chemistry method. We also present a novel approach to tensor redistribution that can take advantage of parallel hyperplanes when the initial distribution has replicated dimensions, and use collective broadcast when the final distribution has replicated dimensions, making the algorithm very efficient.
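
The storage and work savings that symmetry permits can be seen already in the simplest case. The toy numpy sketch below is an illustration only, not the CAST distributed algorithm: for a symmetric operand, the contraction result is itself symmetric, so only the upper triangle needs to be computed and the rest mirrored.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))
A = 0.5 * (A + A.T)                  # symmetrize the input

S_dense = A @ A                      # full dense contraction for reference

# Exploit symmetry of the output: compute only entries with i <= j,
# then mirror -- roughly half the flops and unique storage.
S_packed = np.zeros((n, n))
for i in range(n):
    for j in range(i, n):
        S_packed[i, j] = A[i, :] @ A[:, j]
S_packed = S_packed + np.triu(S_packed, 1).T   # mirror upper onto lower

print(np.allclose(S_dense, S_packed))  # True
```

The hard part the paper addresses is doing this across a distributed machine without redistributing data or storing redundant symmetric copies.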

  9. Inflation and alternatives with blue tensor spectra

    SciTech Connect

    Wang, Yi; Xue, Wei E-mail: wei.xue@sissa.it

    2014-10-01

    We study the tilt of the primordial gravitational wave spectrum. A hint of blue tilt is found from analyzing the BICEP2 and POLARBEAR data. Motivated by this, we explore the possibilities of blue tensor spectra in very early universe cosmology models, including null energy condition violating inflation, inflation with general initial conditions, and string gas cosmology. For the simplest G-inflation, a blue tensor spectrum also implies a blue scalar spectrum. In general, inflation models with blue tensor spectra indicate large non-Gaussianities. On the other hand, string gas cosmology predicts a blue tensor spectrum with highly Gaussian fluctuations. If further experiments confirm the blue tensor spectrum, non-Gaussianity becomes a distinguishing test between inflation and its alternatives.

  10. Tensor dissimilarity based adaptive seeding algorithm for DT-MRI visualization with streamtubes

    NASA Astrophysics Data System (ADS)

    Weldeselassie, Yonas T.; Hamarneh, Ghassan; Weiskopf, Daniel

    2007-03-01

    In this paper, we propose an adaptive seeding strategy for visualization of diffusion tensor magnetic resonance imaging (DT-MRI) data using streamtubes. DT-MRI is a medical imaging modality that captures unique water diffusion properties and fiber orientation information of the imaged tissues. Visualizing DT-MRI data using streamtubes has the advantage that not only the anisotropic nature of the diffusion is visualized but also the underlying anatomy of biological structures is revealed. This makes streamtubes significant for the analysis of fibrous tissues in medical images. In order to avoid rendering multiple similar streamtubes, an adaptive seeding strategy is employed which takes into account similarity of tensors in a given region. The goal is to automate the process of generating seed points such that regions with dissimilar tensors are assigned more seed points compared to regions with similar tensors. The algorithm is based on tensor dissimilarity metrics that take into account both diffusion magnitudes and directions to optimize the seeding positions and density of streamtubes in order to reduce the visual clutter. Two recent advances in tensor calculus and tensor dissimilarity metrics are utilized: the Log-Euclidean and the J-divergence. Results show that adaptive seeding not only helps to cull unnecessary streamtubes that would obscure visualization but also does so without having to compute the culled streamtubes, which makes the visualization process faster.
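
The two dissimilarity metrics named above have compact closed forms for symmetric positive-definite tensors. A minimal numpy sketch follows; the conventions (especially the J-divergence normalization) are commonly quoted forms and should be treated as assumptions rather than the paper's exact definitions.

```python
import numpy as np

def logm_spd(T):
    """Matrix logarithm of a symmetric positive-definite tensor
    via eigendecomposition: V diag(log w) V^T."""
    w, V = np.linalg.eigh(T)
    return (V * np.log(w)) @ V.T

def log_euclidean_dist(T1, T2):
    """Log-Euclidean distance: Frobenius norm of the difference of logs."""
    return np.linalg.norm(logm_spd(T1) - logm_spd(T2))

def j_divergence(T1, T2):
    """Symmetrized KL (J-)divergence between zero-mean Gaussians with
    covariances T1, T2; assumed form sqrt(tr(T1^-1 T2 + T2^-1 T1) - 2n)/2."""
    n = T1.shape[0]
    tr = np.trace(np.linalg.solve(T1, T2) + np.linalg.solve(T2, T1))
    return 0.5 * np.sqrt(tr - 2 * n)

T_iso   = np.eye(3)                    # isotropic diffusion
T_fiber = np.diag([3.0, 1.0, 1.0])     # prolate, fiber-like tensor
print(log_euclidean_dist(T_iso, T_fiber))
print(j_divergence(T_iso, T_fiber))
```

An adaptive seeder would place extra seeds wherever either distance between neighboring voxel tensors exceeds a threshold.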

  11. X-ray tensor tomography

    NASA Astrophysics Data System (ADS)

    Malecki, A.; Potdevin, G.; Biernath, T.; Eggl, E.; Willer, K.; Lasser, T.; Maisenbacher, J.; Gibmeier, J.; Wanner, A.; Pfeiffer, F.

    2014-02-01

    Here we introduce a new concept for x-ray computed tomography that yields information about the local micro-morphology and its orientation in each voxel of the reconstructed 3D tomogram. Contrary to conventional x-ray CT, which only reconstructs a single scalar value for each point in the 3D image, our approach provides a full scattering tensor with multiple independent structural parameters in each volume element. In the application example shown in this study, we highlight that our method can visualize sub-pixel fiber orientations in a carbon composite sample, hence demonstrating its value for non-destructive testing applications. Moreover, as the method is based on the use of a conventional x-ray tube, we believe that it will also have a great impact in a wide range of materials science investigations and in future medical diagnostics.

  12. Global decomposition experiment shows soil animal impacts on decomposition are climate-dependent

    PubMed Central

    WALL, DIANA H; BRADFORD, MARK A; ST JOHN, MARK G; TROFYMOW, JOHN A; BEHAN-PELLETIER, VALERIE; BIGNELL, DAVID E; DANGERFIELD, J MARK; PARTON, WILLIAM J; RUSEK, JOSEF; VOIGT, WINFRIED; WOLTERS, VOLKMAR; GARDEL, HOLLEY ZADEH; AYUKE, FRED O; BASHFORD, RICHARD; BELJAKOVA, OLGA I; BOHLEN, PATRICK J; BRAUMAN, ALAIN; FLEMMING, STEPHEN; HENSCHEL, JOH R; JOHNSON, DAN L; JONES, T HEFIN; KOVAROVA, MARCELA; KRANABETTER, J MARTY; KUTNY, LES; LIN, KUO-CHUAN; MARYATI, MOHAMED; MASSE, DOMINIQUE; POKARZHEVSKII, ANDREI; RAHMAN, HOMATHEVI; SABARÁ, MILLOR G; SALAMON, JOERG-ALFRED; SWIFT, MICHAEL J; VARELA, AMANDA; VASCONCELOS, HERALDO L; WHITE, DON; ZOU, XIAOMING

    2008-01-01

    Climate and litter quality are primary drivers of terrestrial decomposition and, based on evidence from multisite experiments at regional and global scales, are universally factored into global decomposition models. In contrast, soil animals are considered key regulators of decomposition at local scales but their role at larger scales is unresolved. Soil animals are consequently excluded from global models of organic mineralization processes. Incomplete assessment of the roles of soil animals stems from the difficulties of manipulating invertebrate animals experimentally across large geographic gradients. This is compounded by deficient or inconsistent taxonomy. We report a global decomposition experiment to assess the importance of soil animals in C mineralization, in which a common grass litter substrate was exposed to natural decomposition in either control or reduced animal treatments across 30 sites distributed from 43°S to 68°N on six continents. Animals in the mesofaunal size range were recovered from the litter by Tullgren extraction and identified to common specifications, mostly at the ordinal level. The design of the trials enabled faunal contribution to be evaluated against abiotic parameters between sites. Soil animals increase decomposition rates in temperate and wet tropical climates, but have neutral effects where temperature or moisture constrain biological activity. Our findings highlight that faunal influences on decomposition are dependent on prevailing climatic conditions. We conclude that (1) inclusion of soil animals will improve the predictive capabilities of region- or biome-scale decomposition models, (2) soil animal influences on decomposition are important at the regional scale when attempting to predict global change scenarios, and (3) the statistical relationship between decomposition rates and climate, at the global scale, is robust against changes in soil faunal abundance and diversity.

  13. Decomposing Nekrasov decomposition

    NASA Astrophysics Data System (ADS)

    Morozov, A.; Zenkevich, Y.

    2016-02-01

    AGT relations imply that the four-point conformal block admits a decomposition into a sum over pairs of Young diagrams of essentially rational Nekrasov functions — this is immediately seen when conformal block is represented in the form of a matrix model. However, the q-deformation of the same block has a deeper decomposition — into a sum over a quadruple of Young diagrams of a product of four topological vertices. We analyze the interplay between these two decompositions, their properties and their generalization to multi-point conformal blocks. In the latter case we explain how Dotsenko-Fateev all-with-all (star) pair "interaction" is reduced to the quiver model nearest-neighbor (chain) one. We give new identities for q-Selberg averages of pairs of generalized Macdonald polynomials. We also translate the slicing invariance of refined topological strings into the language of conformal blocks and interpret it as abelianization of generalized Macdonald polynomials.

  14. Novel techniques for data decomposition and load balancing for parallel processing of vision systems: Implementation and evaluation using a motion estimation system

    NASA Technical Reports Server (NTRS)

    Choudhary, Alok Nidhi; Leung, Mun K.; Huang, Thomas S.; Patel, Janak H.

    1989-01-01

    Computer vision systems employ a sequence of vision algorithms in which the output of one algorithm is the input of the next algorithm in the sequence. Algorithms that constitute such systems exhibit vastly different computational characteristics and therefore require different data decomposition techniques and efficient load balancing techniques for parallel implementation. However, since the input data for a task is produced as the output data of the previous task, this information can be exploited to perform knowledge-based data decomposition and load balancing. Presented here are algorithms for a motion estimation system. The motion estimation is based on the point correspondence between the involved images, which are a sequence of stereo image pairs. Researchers propose algorithms to obtain point correspondences by matching feature points among stereo image pairs at any two consecutive time instants. Furthermore, the proposed algorithms employ non-iterative procedures, which saves considerable computation time. The system consists of the following steps: (1) extraction of features; (2) stereo match of images in one time instant; (3) time match of images from consecutive time instants; (4) stereo match to compute final unambiguous points; and (5) computation of motion parameters.
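
Knowledge-based load balancing of the kind described, where per-block feature counts produced by the previous pipeline stage guide the assignment of work, can be sketched with a greedy longest-processing-time heuristic. The function name and the heuristic itself are assumptions for illustration, not the paper's scheme.

```python
import heapq

def balance_blocks(feature_counts, n_procs):
    """Assign image blocks to processors: heaviest blocks first, each to
    the currently least-loaded processor (greedy LPT heuristic)."""
    heap = [(0, p, []) for p in range(n_procs)]   # (load, proc id, blocks)
    heapq.heapify(heap)
    for block, count in sorted(enumerate(feature_counts), key=lambda bc: -bc[1]):
        load, p, blocks = heapq.heappop(heap)     # least-loaded processor
        blocks.append(block)
        heapq.heappush(heap, (load + count, p, blocks))
    return {p: (load, blocks) for load, p, blocks in heap}

# Hypothetical feature counts per image block from the feature-extraction stage.
assignment = balance_blocks([30, 10, 20, 40, 5, 15], 3)
for p, (load, blocks) in sorted(assignment.items()):
    print(p, load, blocks)
```

Because the counts are known before the matching stage runs, the partition can be computed up front instead of rebalancing reactively.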

  15. Mueller matrix differential decomposition.

    PubMed

    Ortega-Quijano, Noé; Arce-Diego, José Luis

    2011-05-15

    We present a Mueller matrix decomposition based on the differential formulation of the Mueller calculus. The differential Mueller matrix is obtained from the macroscopic matrix through an eigenanalysis. It is subsequently resolved into the complete set of 16 differential matrices that correspond to the basic types of optical behavior for depolarizing anisotropic media. The method is successfully applied to the polarimetric analysis of several samples. The differential parameters enable one to perform an exhaustive characterization of anisotropy and depolarization. This decomposition is particularly appropriate for studying media in which several polarization effects take place simultaneously. PMID:21593943
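
The core of the differential formulation is that a homogeneous medium's macroscopic Mueller matrix is the exponential of a differential matrix (dM/dz = mM), so the decomposition reduces to an eigenanalysis/matrix logarithm. A scipy sketch with an assumed toy generator follows; the matrix values are illustrative, not data or matrices from the paper.

```python
import numpy as np
from scipy.linalg import expm, logm

# Assumed toy differential Mueller matrix mixing weak diattenuation,
# retardance, and depolarization-like terms (illustrative values only).
m_true = np.array([[ 0.00, 0.05,  0.00, 0.00],
                   [ 0.05,-0.10,  0.00, 0.00],
                   [ 0.00, 0.00, -0.10, 0.30],
                   [ 0.00, 0.00, -0.30,-0.20]])

M = expm(m_true)        # macroscopic Mueller matrix after unit path length
m_rec = logm(M).real    # differential matrix recovered via matrix logarithm

print(np.allclose(m_rec, m_true, atol=1e-6))  # True
```

In practice the recovered differential matrix is then split into its 16 basis components to read off the elementary polarization effects that act simultaneously.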

  16. Tensor representation techniques for full configuration interaction: A Fock space approach using the canonical product format

    NASA Astrophysics Data System (ADS)

    Böhm, Karl-Heinz; Auer, Alexander A.; Espig, Mike

    2016-06-01

    In this proof-of-principle study, we apply tensor decomposition techniques to the Full Configuration Interaction (FCI) wavefunction in order to approximate the wavefunction parameters efficiently and to reduce the overall computational effort. For this purpose, the wavefunction ansatz is formulated in an occupation number vector representation that ensures antisymmetry. If the canonical product format tensor decomposition is then applied, the Hamiltonian and the wavefunction can be cast into a multilinear product form. As a consequence, the number of wavefunction parameters does not scale to the power of the number of particles (or orbitals) but depends on the rank of the approximation and linearly on the number of particles. The degree of approximation can be controlled by a single threshold for the rank reduction procedure required in the algorithm. We demonstrate that using this approximation, the FCI Hamiltonian matrix can be stored with N^5 scaling. The error of the approximation that is introduced is below one millihartree for a threshold of ε = 10^-4, and no convergence problems are observed solving the FCI equations iteratively in the new format. While promising conceptually, all effort of the algorithm is shifted to the required rank reduction procedure after the contraction of the Hamiltonian with the coefficient tensor. At the current state, this crucial step is the bottleneck of our approach, and even for an optimistic estimate the algorithm scales beyond N^10; future work has to be directed towards reduction-free algorithms.

  17. Tensor representation techniques for full configuration interaction: A Fock space approach using the canonical product format.

    PubMed

    Böhm, Karl-Heinz; Auer, Alexander A; Espig, Mike

    2016-06-28

    In this proof-of-principle study, we apply tensor decomposition techniques to the Full Configuration Interaction (FCI) wavefunction in order to approximate the wavefunction parameters efficiently and to reduce the overall computational effort. For this purpose, the wavefunction ansatz is formulated in an occupation number vector representation that ensures antisymmetry. If the canonical product format tensor decomposition is then applied, the Hamiltonian and the wavefunction can be cast into a multilinear product form. As a consequence, the number of wavefunction parameters does not scale to the power of the number of particles (or orbitals) but depends on the rank of the approximation and linearly on the number of particles. The degree of approximation can be controlled by a single threshold for the rank reduction procedure required in the algorithm. We demonstrate that using this approximation, the FCI Hamiltonian matrix can be stored with N(5) scaling. The error of the approximation that is introduced is below one millihartree for a threshold of ϵ = 10(-4) and no convergence problems are observed solving the FCI equations iteratively in the new format. While promising conceptually, all effort of the algorithm is shifted to the required rank reduction procedure after the contraction of the Hamiltonian with the coefficient tensor. At the current state, this crucial step is the bottleneck of our approach and even for an optimistic estimate, the algorithm scales beyond N(10) and future work has to be directed towards reduction-free algorithms. PMID:27369492

  18. Calculation and Analysis of magnetic gradient tensor components of global magnetic models

    NASA Astrophysics Data System (ADS)

    Schiffler, Markus; Queitsch, Matthias; Schneider, Michael; Stolz, Ronny; Krech, Wolfram; Meyer, Hans-Georg; Kukowski, Nina

    2014-05-01

    Magnetic mapping missions like SWARM and its predecessors, e.g. the CHAMP and MAGSAT programs, offer high-resolution data on the Earth's magnetic field. These datasets are usually combined with magnetic observatory and survey data and subjected to harmonic analysis. The derived spherical harmonic coefficients enable magnetic field modelling using a potential series expansion. Recently, new instruments like the JeSSY STAR Full Tensor Magnetic Gradiometry system, equipped with highly sensitive sensors, can directly measure the magnetic field gradient tensor components. Full understanding of the quality of the measured data requires the extension of magnetic field models to gradient tensor components. In this study, we extend the derivation of the magnetic field from the potential series expansion to the magnetic field gradient tensor components and apply the new theoretical framework to the International Geomagnetic Reference Field (IGRF) and the High Definition Magnetic Model (HDGM). The gradient tensor component maps for the entire Earth's surface produced for the IGRF show low values and smooth variations reflecting the core and mantle contributions, whereas those for the HDGM give a novel tool to unravel crustal structure and deep-situated ore bodies. For example, the Thor Suture and the Sorgenfrei-Tornquist Zone in Europe are delineated by a strong northward gradient. Derived from eigenvalue decomposition of the magnetic gradient tensor, the scaled magnetic moment, normalized source strength (NSS) and the bearing of the lithospheric sources are presented. The NSS serves as a tool for estimating the lithosphere-asthenosphere boundary as well as the depth of plutons and ore bodies. Furthermore, changes in magnetization direction parallel to the mid-ocean ridges can be obtained from the scaled magnetic moment, and the normalized source strength discriminates the boundaries between the anomalies of major continental provinces like southern Africa or the Eastern European
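
The normalized source strength mentioned above is computed from the eigenvalues of the measured gradient tensor. A minimal numpy sketch follows, using the commonly quoted rotation-invariant formula NSS = sqrt(-λ2² - λ1λ3); treat the exact convention as an assumption rather than the paper's derivation.

```python
import numpy as np

def normalized_source_strength(G):
    """NSS from the eigenvalues (lambda1 >= lambda2 >= lambda3) of a
    symmetric, traceless magnetic gradient tensor G. The quantity under
    the square root is nonnegative for any real symmetric traceless G."""
    lam = np.sort(np.linalg.eigvalsh(G))[::-1]   # descending eigenvalues
    return np.sqrt(-lam[1] ** 2 - lam[0] * lam[2])

# Synthetic symmetric traceless tensor standing in for measured gradients
# at one station/voxel (illustrative values only).
G = np.array([[ 2.0, 0.5, 0.1],
              [ 0.5,-1.2, 0.3],
              [ 0.1, 0.3,-0.8]])
nss = normalized_source_strength(G)
print(nss)   # a single nonnegative scalar per measurement point
```

Because the NSS is built from eigenvalues only, it is invariant under rotations of the measurement frame, which is what makes it useful as a source-depth diagnostic.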

  19. Spatial Mapping of Translational Diffusion Coefficients Using Diffusion Tensor Imaging: A Mathematical Description

    PubMed Central

    SHETTY, ANIL N.; CHIANG, SHARON; MALETIC-SAVATIC, MIRJANA; KASPRIAN, GREGOR; VANNUCCI, MARINA; LEE, WESLEY

    2016-01-01

    In this article, we discuss the theoretical background for diffusion-weighted imaging and diffusion tensor imaging. Molecular diffusion is a random process involving thermal Brownian motion. In biological tissues, the underlying microstructures restrict the diffusion of water molecules, making diffusion directionally dependent. Water diffusion in tissue is mathematically characterized by the diffusion tensor, the elements of which contain information about the magnitude and direction of diffusion and is a function of the coordinate system. Thus, it is possible to generate contrast in tissue based primarily on diffusion effects. Expressing diffusion in terms of the measured diffusion coefficient (eigenvalue) in any one direction can lead to errors. Nowhere is this more evident than in white matter, due to the preferential orientation of myelin fibers. The directional dependency is removed by diagonalization of the diffusion tensor, which then yields a set of three eigenvalues and eigenvectors, representing the magnitude and direction of the three orthogonal axes of the diffusion ellipsoid, respectively. For example, the eigenvalue corresponding to the eigenvector along the long axis of the fiber corresponds qualitatively to diffusion with least restriction. Determination of the principal values of the diffusion tensor and various anisotropic indices provides structural information. We review the use of diffusion measurements using the modified Stejskal–Tanner diffusion equation. The anisotropy is analyzed by decomposing the diffusion tensor based on symmetrical properties describing the geometry of the diffusion tensor. We further describe diffusion tensor properties in visualizing fiber tract organization of the human brain. PMID:27441031
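
The diagonalization and anisotropy index described above amount to a few lines of linear algebra. A minimal numpy sketch using the standard fractional anisotropy definition FA = sqrt(3/2) · ||λ − mean(λ)|| / ||λ||:

```python
import numpy as np

def fractional_anisotropy(D):
    """Fractional anisotropy from the eigenvalues of a 3x3 diffusion tensor.
    The principal eigenvector (np.linalg.eigh(D)[1][:, -1]) gives the
    dominant fiber direction."""
    lam = np.linalg.eigvalsh(D)
    return np.sqrt(1.5) * np.linalg.norm(lam - lam.mean()) / np.linalg.norm(lam)

D_iso   = np.diag([1.0, 1.0, 1.0])    # unrestricted, isotropic diffusion
D_fiber = np.diag([1.7, 0.2, 0.2])    # strongly oriented, white-matter-like
print(fractional_anisotropy(D_iso))    # 0.0
print(fractional_anisotropy(D_fiber))  # close to 1
```

FA of 0 corresponds to a spherical diffusion ellipsoid and values approaching 1 to a needle-like one, which is why FA maps highlight coherent fiber bundles.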

  20. Ultrasound elastic tensor imaging: comparison with MR diffusion tensor imaging in the myocardium

    NASA Astrophysics Data System (ADS)

    Lee, Wei-Ning; Larrat, Benoît; Pernot, Mathieu; Tanter, Mickaël

    2012-08-01

    We have previously proven the feasibility of ultrasound-based shear wave imaging (SWI) to non-invasively characterize myocardial fiber orientation in both in vitro porcine and in vivo ovine hearts. The SWI-estimated results were in good correlation with histology. In this study, we proposed a new and robust fiber angle estimation method through a tensor-based approach for SWI, coined together as elastic tensor imaging (ETI), and compared it with magnetic resonance diffusion tensor imaging (DTI), a current gold standard and extensively reported non-invasive imaging technique for mapping fiber architecture. Fresh porcine (n = 5) and ovine (n = 5) myocardial samples (20 × 20 × 30 mm3) were studied. ETI was first performed to generate shear waves and to acquire the wave events at an ultrafast frame rate (8000 fps). A 2.8 MHz phased array probe (pitch = 0.28 mm), connected to a prototype ultrasound scanner, was mounted on a customized MRI-compatible rotation device, which allowed both the rotation of the probe from -90° to 90° at 5° increments and co-registration between the two imaging modalities. Transmural shear wave speed was first estimated for all realized propagation directions. The fiber angles were determined from the shear wave speed map using the least-squares method and eigen decomposition. The test myocardial sample together with the rotation device was then placed inside a 7T MRI scanner. Diffusion was encoded in six directions. A total of 270 diffusion-weighted images (b = 1000 s mm-2, FOV = 30 mm, matrix size = 60 × 64, TR = 6 s, TE = 19 ms, 24 averages) and 45 B0 images were acquired in 14 h 30 min. The fiber structure was analyzed by the fiber-tracking module in the software MedINRIA. The fiber orientation in the overlapping myocardial region accessed by both ETI and DTI was then compared, thanks to the co-registered imaging system. Results from all ten samples showed good correlation (r2 = 0.81, p < 0.0001) and good agreement (3.05° bias

  1. Incremental Discriminant Analysis in Tensor Space

    PubMed Central

    Chang, Liu; Weidong, Zhao; Tao, Yan; Qiang, Pu; Xiaodan, Du

    2015-01-01

    To study incremental machine learning in tensor space, this paper proposes incremental tensor discriminant analysis. The algorithm employs tensor representation to carry out discriminant analysis and combines incremental learning to alleviate the computational cost. This paper proves that the algorithm can be unified into the graph framework theoretically and analyzes the time and space complexity in detail. The experiments on facial image detection have shown that the algorithm not only achieves sound performance compared with other algorithms, but also appreciably reduces the computational cost. PMID:26339229
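
The incremental-learning ingredient can be illustrated in the plain vector case: the statistics needed for discriminant analysis (means and scatter matrices) can be updated one sample at a time instead of being recomputed from scratch. A hedged numpy sketch with Welford-style updates, not the paper's tensor algorithm:

```python
import numpy as np

class IncrementalScatter:
    """Running mean and scatter matrix S = sum (x - mean)(x - mean)^T,
    updated one sample at a time (multivariate Welford recurrence)."""
    def __init__(self, dim):
        self.n = 0
        self.mean = np.zeros(dim)
        self.S = np.zeros((dim, dim))
    def update(self, x):
        self.n += 1
        delta = x - self.mean              # deviation from the old mean
        self.mean += delta / self.n
        self.S += np.outer(delta, x - self.mean)  # old and new mean together

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 4))

inc = IncrementalScatter(4)
for x in X:                                # one sample at a time
    inc.update(x)

Xc = X - X.mean(axis=0)
S_batch = Xc.T @ Xc                        # batch scatter for comparison
print(np.allclose(inc.S, S_batch))  # True
```

Per-class instances of such an accumulator give the within- and between-class scatter matrices incrementally, which is the cost saving incremental discriminant methods exploit.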

  2. Low uncertainty method for inertia tensor identification

    NASA Astrophysics Data System (ADS)

    Barreto, J. P.; Muñoz, L. E.

    2016-02-01

    The uncertainty associated with the experimental identification of the inertia tensor can be reduced by implementing adequate rotational and translational motions in the experiment. This paper proposes a particular 3D trajectory that improves the experimental measurement of the inertia tensor of rigid bodies. Such a trajectory corresponds to a motion in which the object is rotated around a large number of instantaneous axes, while the center of gravity remains static. The uncertainty in the inertia tensor components obtained with this practice is reduced by 45% on average, compared with those calculated using simple rotations around three perpendicular axes (Roll, Pitch, Yaw).
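
For reference, the quantity being identified is the classical inertia tensor, I = Σᵢ mᵢ(||rᵢ||² Id − rᵢrᵢᵀ). The numpy sketch below shows the definition for point masses and the extraction of principal moments; it illustrates the quantity only, not the paper's trajectory-based identification experiment.

```python
import numpy as np

def inertia_tensor(masses, positions):
    """Inertia tensor of point masses about the origin:
    I = sum_i m_i * (||r_i||^2 * Id - r_i r_i^T)."""
    I = np.zeros((3, 3))
    for m, r in zip(masses, positions):
        I += m * (np.dot(r, r) * np.eye(3) - np.outer(r, r))
    return I

# A unit-mass dumbbell along the z axis.
masses = [1.0, 1.0]
positions = np.array([[0.0, 0.0, 1.0],
                      [0.0, 0.0, -1.0]])

I_body = inertia_tensor(masses, positions)
principal_moments = np.linalg.eigvalsh(I_body)  # diagonalize to principal axes
print(I_body)              # diag(2, 2, 0) for this dumbbell
print(principal_moments)
```

Experimental identification amounts to recovering the six independent components of this symmetric tensor from measured torques and angular motions, which is why richer rotational trajectories reduce the uncertainty.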

  3. Tensor methods for large, sparse unconstrained optimization

    SciTech Connect

    Bouaricha, A.

    1996-11-01

    Tensor methods for unconstrained optimization were first introduced by Schnabel and Chow [SIAM J. Optimization, 1 (1991), pp. 293-315], who describe these methods for small to moderate size problems. This paper extends these methods to large, sparse unconstrained optimization problems. This requires an entirely new way of solving the tensor model that makes the methods suitable for solving large, sparse optimization problems efficiently. We present test results for sets of problems where the Hessian at the minimizer is nonsingular and where it is singular. These results show that tensor methods are significantly more efficient and more reliable than standard methods based on Newton's method.

  4. Tensor network and a black hole

    NASA Astrophysics Data System (ADS)

    Matsueda, Hiroaki; Ishihara, Masafumi; Hashizume, Yoichiro

    2013-03-01

    A tensor-network variational formalism of thermofield dynamics is introduced. The formalism relates the original Hilbert space with its tilde space by a product of two copies of a tensor network. Then, their interface becomes an event horizon, and the logarithm of the tensor rank corresponds to the black hole entropy. Eventually, a multiscale entanglement renormalization ansatz reproduces an anti-de Sitter black hole at finite temperature. Our finding shows the rich functionality of the multiscale entanglement renormalization ansatz as an efficient graphical representation of the AdS/CFT correspondence.

  5. Killing tensors, warped products and the orthogonal separation of the Hamilton-Jacobi equation

    SciTech Connect

    Rajaratnam, Krishan McLenaghan, Raymond G.

    2014-01-15

    We study Killing tensors in the context of warped products and apply the results to the problem of orthogonal separation of the Hamilton-Jacobi equation. This work is motivated primarily by the case of spaces of constant curvature where warped products are abundant. We first characterize Killing tensors which have a natural algebraic decomposition in warped products. We then apply this result to show how one can obtain the Killing-Stäckel space (KS-space) for separable coordinate systems decomposable in warped products. This result in combination with Benenti's theory for constructing the KS-space of certain special separable coordinates can be used to obtain the KS-space for all orthogonal separable coordinates found by Kalnins and Miller in Riemannian spaces of constant curvature. Next we characterize when a natural Hamiltonian is separable in coordinates decomposable in a warped product by showing that the conditions originally given by Benenti can be reduced. Finally, we use this characterization and concircular tensors (a special type of torsionless conformal Killing tensor) to develop a general algorithm to determine when a natural Hamiltonian is separable in a special class of separable coordinates which include all orthogonal separable coordinates in spaces of constant curvature.

  6. Hydrazine decomposition and other reactions

    NASA Technical Reports Server (NTRS)

    Armstrong, Warren E. (Inventor); La France, Donald S. (Inventor); Voge, Hervey H. (Inventor)

    1978-01-01

    This invention relates to the catalytic decomposition of hydrazine, catalysts useful for this decomposition and other reactions, and to reactions in hydrogen atmospheres generally using carbon-containing catalysts.

  7. Tensor-based classification of an auditory mobile BCI without a subject-specific calibration phase

    NASA Astrophysics Data System (ADS)

    Zink, Rob; Hunyadi, Borbála; Van Huffel, Sabine; De Vos, Maarten

    2016-04-01

    Objective. One of the major drawbacks in EEG brain-computer interfaces (BCI) is the need for subject-specific training of the classifier. By removing the need for a supervised calibration phase, new users could potentially explore a BCI faster. In this work we aim to remove this subject-specific calibration phase and allow direct classification. Approach. We explore canonical polyadic decompositions and block term decompositions of the EEG. These methods exploit structure in higher dimensional data arrays called tensors. The BCI tensors are constructed by concatenating ERP templates from other subjects to a target and non-target trial, and the inherent structure guides a decomposition that allows accurate classification. We illustrate the new method on data from a three-class auditory oddball paradigm. Main results. The presented approach leads to a fast and intuitive classification with accuracies competitive with a supervised and cross-validated LDA approach. Significance. The described methods are a promising new way of classifying BCI data, with a direct link to the original P300 ERP signal, in contrast to the conventional and widely used supervised approaches.

  8. Evolution of tensor perturbations in scalar-tensor theories of gravity

    SciTech Connect

    Carloni, Sante; Dunsby, Peter K. S.

    2007-03-15

    The evolution equations for tensor perturbations in a generic scalar-tensor theory of gravity are presented. Exact solutions are given for a specific class of theories and Friedmann-Lemaitre-Robertson-Walker backgrounds. In these cases it is shown that, although the evolution of tensor modes depends on the choice of parameters of the theory, no amplification is possible if the gravitational interaction is attractive.

  9. Interpretation of the Weyl tensor

    NASA Astrophysics Data System (ADS)

    Hofmann, Stefan; Niedermann, Florian; Schneider, Robert

    2013-09-01

    According to folklore in general relativity, the Weyl tensor can be decomposed into parts corresponding to Newton-like, incoming and outgoing wavelike field components. It is shown here that this one-to-one correspondence does not hold for space-time geometries with cylindrical isometries. This is done by investigating some well-known exact solutions of Einstein’s field equations with whole-cylindrical symmetry, for which the physical interpretation is very clear, but for which the standard Weyl interpretation would give contradictory results. For planar or spherical geometries, however, the standard interpretation works for both static and dynamical space-times. It is argued that one reason for the failure in the cylindrical case is that for waves spreading in two spatial dimensions there is no local criterion to distinguish incoming and outgoing waves already at the linear level. It turns out that Thorne’s local energy notion, subject to certain qualifications, provides an efficient diagnostic tool to extract the proper physical interpretation of the space-time geometry in the case of cylindrical configurations.

  10. Kinetic-energy-momentum tensor in electrodynamics

    NASA Astrophysics Data System (ADS)

    Sheppard, Cheyenne J.; Kemp, Brandon A.

    2016-01-01

    We show that the Einstein-Laub formulation of electrodynamics is invalid since it yields a stress-energy-momentum (SEM) tensor that is not frame invariant. Two leading hypotheses for the kinetic formulation of electrodynamics (Chu and Einstein-Laub) are studied by use of the relativistic principle of virtual power, mathematical modeling, Lagrangian methods, and SEM transformations. The relativistic principle of virtual power is used to demonstrate the field dynamics associated with energy relations within a relativistic framework. Lorentz transformations of the respective SEM tensors demonstrate the relativistic frameworks for each studied formulation. Mathematical modeling of stationary and moving media is used to illustrate the differences and discrepancies of specific proposed kinetic formulations, where energy relations and conservation theorems are employed. Lagrangian methods are utilized to derive the field kinetic Maxwell's equations, which are studied with respect to SEM tensor transforms. Within each analysis, the Einstein-Laub formulation violates special relativity, which invalidates the Einstein-Laub SEM tensor.

  11. Quantum integrability of quadratic Killing tensors

    SciTech Connect

    Duval, C.; Valent, G.

    2005-05-01

    Quantum integrability of classical integrable systems given by quadratic Killing tensors on curved configuration spaces is investigated. It is proven that, using a 'minimal' quantization scheme, quantum integrability is ensured for a large class of classic examples.

  12. The Weyl tensor correlator in cosmological spacetimes

    SciTech Connect

    Fröb, Markus B.

    2014-12-05

    We give a general expression for the Weyl tensor two-point function in a general Friedmann-Lemaître-Robertson-Walker spacetime. We work in reduced phase space for the perturbations, i.e., quantize only the dynamical degrees of freedom without adding any gauge-fixing term. The general formula is illustrated by a calculation in slow-roll single-field inflation to first order in the slow-roll parameters ϵ and δ, and the result is shown to have the correct de Sitter limit as ϵ,δ→0. Furthermore, it is seen that the Weyl tensor correlation function in slow-roll does not suffer from infrared divergences, unlike the two-point functions of the metric and scalar field perturbations. Lastly, we show how to recover the usual tensor power spectrum from the Weyl tensor correlation function.

  13. Shifted power method for computing tensor eigenpairs.

    SciTech Connect

    Mayo, Jackson R.; Kolda, Tamara Gibson

    2010-10-01

    Recent work on eigenvalues and eigenvectors for tensors of order m ≥ 3 has been motivated by applications in blind source separation, magnetic resonance imaging, molecular conformation, and more. In this paper, we consider methods for computing real symmetric-tensor eigenpairs of the form Ax^(m-1) = λx subject to ||x|| = 1, which is closely related to optimal rank-1 approximation of a symmetric tensor. Our contribution is a novel shifted symmetric higher-order power method (SS-HOPM), which we show is guaranteed to converge to a tensor eigenpair. SS-HOPM can be viewed as a generalization of the power iteration method for matrices or of the symmetric higher-order power method. Additionally, using fixed point analysis, we can characterize exactly which eigenpairs can and cannot be found by the method. Numerical examples are presented, including examples from an extension of the method to finding complex eigenpairs.
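    The SS-HOPM iteration itself is compact. The following sketch, for an order-3 symmetric tensor, applies the shifted update x ← normalize(Ax² + αx); the shift value and fixed iteration count are illustrative choices, not the paper's analysis of how large α must be.

```python
import numpy as np

def ss_hopm(A, alpha=2.0, n_iter=500, seed=0):
    """Shifted symmetric higher-order power method (SS-HOPM) sketch for a
    symmetric order-3 tensor A: iterate x <- normalize(A x^2 + alpha*x).
    For a sufficiently large positive shift alpha, the iteration converges
    to an eigenpair A x^2 = lambda x with ||x|| = 1."""
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n)
    x /= np.linalg.norm(x)
    for _ in range(n_iter):
        y = np.einsum('ijk,j,k->i', A, x, x) + alpha * x  # A x^2 + alpha x
        x = y / np.linalg.norm(y)
    lam = np.einsum('ijk,i,j,k->', A, x, x, x)            # Rayleigh-like value A x^3
    return lam, x
```

    A quick sanity check is that the returned pair satisfies the eigenpair equation to small residual.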

  14. Shifted power method for computing tensor eigenvalues.

    SciTech Connect

    Mayo, Jackson R.; Kolda, Tamara Gibson

    2010-07-01

    Recent work on eigenvalues and eigenvectors for tensors of order m >= 3 has been motivated by applications in blind source separation, magnetic resonance imaging, molecular conformation, and more. In this paper, we consider methods for computing real symmetric-tensor eigenpairs of the form Ax{sup m-1} = lambda x subject to ||x||=1, which is closely related to optimal rank-1 approximation of a symmetric tensor. Our contribution is a shifted symmetric higher-order power method (SS-HOPM), which we show is guaranteed to converge to a tensor eigenpair. SS-HOPM can be viewed as a generalization of the power iteration method for matrices or of the symmetric higher-order power method. Additionally, using fixed point analysis, we can characterize exactly which eigenpairs can and cannot be found by the method. Numerical examples are presented, including examples from an extension of the method to finding complex eigenpairs.

  15. Quantum theory with bold operator tensors.

    PubMed

    Hardy, Lucien

    2015-08-01

    In this paper, we present a formulation of quantum theory in terms of bold operator tensors. A circuit is built up of operations where an operation corresponds to a use of an apparatus. We associate collections of operator tensors (which together comprise a bold operator) with these apparatus uses. We give rules for combining bold operator tensors such that, for a circuit, they give a probability distribution over the possible outcomes. If we impose certain physicality constraints on the bold operator tensors, then we get exactly the quantum formalism. We provide both symbolic and diagrammatic ways to represent these calculations. This approach is manifestly covariant in that it does not require us to foliate the circuit into time steps and then evolve a state. Thus, the approach forms a natural starting point for an operational approach to quantum field theory.

  16. The energy-momentum tensor(s) in classical gauge theories

    DOE PAGES

    Gieres, Francois; Blaschke, Daniel N.; Reboud, Meril; Schweda, Manfred

    2016-07-01

    We give an introduction to, and review of, the energy–momentum tensors in classical gauge field theories in Minkowski space, and to some extent also in curved space–time. For the canonical energy–momentum tensor of non-Abelian gauge fields and of matter fields coupled to such fields, we present a new and simple improvement procedure based on gauge invariance for constructing a gauge invariant, symmetric energy–momentum tensor. Here, the relationship with the Einstein–Hilbert tensor following from the coupling to a gravitational field is also discussed.

  17. Assessment of bias for MRI diffusion tensor imaging using SIMEX.

    PubMed

    Lauzon, Carolyn B; Asman, Andrew J; Crainiceanu, Ciprian; Caffo, Brian C; Landman, Bennett A

    2011-01-01

    Diffusion Tensor Imaging (DTI) is a Magnetic Resonance Imaging method for measuring water diffusion in vivo. One powerful DTI contrast is fractional anisotropy (FA). FA reflects the strength of water's diffusion directional preference and is a primary metric for neuronal fiber tracking. As with other DTI contrasts, FA measurements are obscured by the well-established presence of bias. DTI bias has been challenging to assess because it is a multivariable problem involving SNR, six tensor parameters, and the DTI collection and processing method used. SIMEX is a modern statistical technique that estimates bias by tracking measurement error as a function of added noise. Here, we use SIMEX to assess bias in FA measurements and show the method provides: i) accurate FA bias estimates, ii) representation of FA bias that is data set specific and accessible to non-statisticians, and iii) a first-time possibility for incorporation of bias into DTI data analysis. PMID:21995019
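    The SIMEX idea of "tracking measurement error as a function of added noise" can be illustrated on the classic errors-in-variables regression problem (not the paper's DTI setting): re-add measurement noise at increasing levels λ, record the naive estimate at each level, fit a quadratic in λ, and extrapolate back to λ = −1, the notional "no measurement error" point. Function names and parameter choices below are illustrative.

```python
import numpy as np

def simex_slope(w, y, sigma_u, lambdas=(0.5, 1.0, 1.5, 2.0), n_rep=200, seed=0):
    """SIMEX sketch: correct the attenuation bias of the naive OLS slope
    when the regressor w = x + u is observed with noise of known sd sigma_u."""
    rng = np.random.default_rng(seed)

    def slope(x):
        xc = x - x.mean()
        return (xc @ (y - y.mean())) / (xc @ xc)

    lams, est = [0.0], [slope(w)]
    for lam in lambdas:
        # add extra noise of variance lam * sigma_u^2, average over replicates
        reps = [slope(w + np.sqrt(lam) * sigma_u * rng.standard_normal(w.size))
                for _ in range(n_rep)]
        lams.append(lam)
        est.append(np.mean(reps))
    coef = np.polyfit(lams, est, 2)   # quadratic extrapolant in lambda
    return np.polyval(coef, -1.0)     # bias-corrected estimate at lambda = -1
```

    With measurement-error variance equal to a quarter of the signal variance, the naive slope is attenuated by a factor of 0.8, and the SIMEX extrapolation recovers most of the lost magnitude.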

  18. Adaptive registration of diffusion tensor images on lie groups

    NASA Astrophysics Data System (ADS)

    Liu, Wei; Chen, LeiTing; Cai, HongBin; Qiu, Hang; Fei, Nanxi

    2016-08-01

    Diffusion tensor imaging (DTI) provides detailed information on tissue microstructure for medical image processing. In this paper, we present a locally adaptive topology preserving method for DTI registration on Lie groups. The method aims to obtain more plausible diffeomorphisms for spatial transformations via accurate approximation of the local tangent space on the Lie group manifold. In order to capture the exact geometric structure of the Lie group, the local linear approximation is efficiently optimized by adaptively selecting the local neighborhood sizes on the given set of data points. Furthermore, numerical comparative experiments are conducted on both synthetic data and real DTI data to demonstrate that the proposed method yields a higher degree of topology preservation on a dense deformation tensor field while improving the registration accuracy.

  19. Anisotropy of Local Stress Tensor Leads to Line Tension

    NASA Astrophysics Data System (ADS)

    Shao, Mingzhe; Wang, Jianjun; Zhou, Xin

    2015-04-01

    Line tension of three-phase contact lines is an important physical quantity for understanding many physical processes, such as heterogeneous nucleation, soft lithography, and behaviours in biomembranes such as budding, fission and fusion. Although the concept of line tension was proposed a century ago as the excess free energy of the three-phase coexistence region, its microscopic origin is subtle and has attracted long-standing attention. In this paper, we correlate line tension with the anisotropy of the diagonal components of the stress tensor and give a general formula for line tension. By performing molecular dynamics simulations, we validate the proposed formula in Lennard-Jones gas/liquid/liquid and gas/liquid/solid systems, and find that the spatial distribution of line tension is well revealed when the local distribution of the stress tensor is considered.

  20. Temperature-polarization correlations from tensor fluctuations

    SciTech Connect

    Crittenden, R.G.; Coulson, D.; Turok, N.G. |

    1995-11-15

    We study the polarization-temperature correlations on the cosmic microwave sky resulting from an initial scale-invariant spectrum of tensor (gravity wave) fluctuations, such as those which might arise during inflation. The correlation function has the opposite sign to that for scalar fluctuations on large scales, raising the possibility of a direct determination of whether the microwave anisotropies have a significant tensor component. We briefly discuss the important problem of estimating the expected foreground contamination.

  1. Novel Physics with Tensor Polarized Deuteron Targets

    SciTech Connect

    Slifer, Karl J.; Long, Elena A.

    2013-09-01

    Development of solid spin-1 polarized targets will open the study of tensor structure functions to precise measurement, and holds the promise to enable a new generation of polarized scattering experiments. In this talk we will discuss a measurement of the leading twist tensor structure function b1, along with prospects for future experiments with a solid tensor polarized target. The recently approved JLab experiment E12-13-011 will measure the leading twist tensor structure function b1, which provides a unique tool to study partonic effects, while also being sensitive to coherent nuclear properties in the simplest nuclear system. At low x, shadowing effects are expected to dominate b1, while at larger values, b1 provides a clean probe of exotic QCD effects, such as hidden color due to six-quark configurations. Since the deuteron wave function is relatively well known, any non-standard effects are expected to be readily observable. All available models predict a small or vanishing value of b1 at moderate x. However, the first pioneering measurement of b1 at HERMES revealed a crossover to an anomalously large negative value in the region 0.2 < x < 0.5, albeit with relatively large experimental uncertainty. E12-13-011 will perform an inclusive measurement of the deuteron tensor asymmetry in the region 0.16 < x < 0.49, for 0.8 < Q² < 5.0 GeV². The UVa solid polarized ND3 target will be used, along with the Hall C spectrometers, and an unpolarized 115 nA beam. This measurement will provide access to the tensor quark polarization, and allow a test of the Close-Kumano sum rule, which vanishes in the absence of tensor polarization in the quark sea. Until now, tensor structure has been largely unexplored, so the study of these quantities holds the potential of initiating a new field of spin physics at Jefferson Lab.

  2. Calibration of SQUID vector magnetometers in full tensor gradiometry systems

    NASA Astrophysics Data System (ADS)

    Schiffler, M.; Queitsch, M.; Stolz, R.; Chwala, A.; Krech, W.; Meyer, H.-G.; Kukowski, N.

    2014-08-01

    Measurement of magnetic vector or tensor quantities, namely of the field or the field gradient, delivers more detail of the underlying geological setting in geomagnetic prospection than a scalar measurement of a single component or of the scalar total magnetic intensity. Currently, the highest measurement resolutions are achievable with superconducting quantum interference device (SQUID)-based systems. Due to technological limitations, it is necessary to suppress the parasitic magnetic field response from the SQUID gradiometer signals, which are a superposition of one tensor component and all three orthogonal magnetic field components. This in turn requires an accurate estimation of the local magnetic field. Such a measurement can itself be achieved via three additional orthogonal SQUID reference magnetometers. It is the calibration of such a SQUID reference vector magnetometer system that is the subject of this paper. A number of vector magnetometer calibration methods are described in the literature. We present two methods that we have implemented and compared for their suitability for rapid data processing and integration into a SQUID-based full tensor magnetic gradiometry system. We conclude that the calibration routines must necessarily model fabrication misalignments, field offsets and scale factors, and include comparison with a reference magnetic field. In order to enable fast processing on site, the software must be able to function as a stand-alone toolbox.
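    A linear calibration model of the kind described (scale factors, axis misalignment, and field offset, fitted against a reference field) can be sketched as an ordinary least-squares problem. The affine model and function names below are illustrative, not the calibration routines compared in the paper.

```python
import numpy as np

def calibrate(B_meas, B_ref):
    """Fit an affine sensor model B_meas ~= B_ref @ A.T + o, where the
    3x3 matrix A lumps together scale factors and axis misalignments and
    o is the field offset.  Rows of B_meas/B_ref are samples; solved as
    ordinary least squares."""
    n = B_meas.shape[0]
    X = np.hstack([B_ref, np.ones((n, 1))])        # design matrix [B_ref | 1]
    Theta, *_ = np.linalg.lstsq(X, B_meas, rcond=None)
    A, o = Theta[:3].T, Theta[3]                   # first 3 rows = A^T, last = offset
    return A, o

def apply_calibration(B_meas, A, o):
    """Invert the fitted model to recover the reference-frame field."""
    return (B_meas - o) @ np.linalg.inv(A).T
```

    Given enough independent field orientations, the fit recovers the misalignment matrix and offset exactly in the noise-free case.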

  3. Particle creation from the quantum stress tensor

    NASA Astrophysics Data System (ADS)

    Firouzjaee, Javad T.; Ellis, George F. R.

    2015-05-01

    Among the different methods to derive particle creation, finding the quantum stress tensor expectation value gives a covariant quantity which can be used for examining the backreaction issue. However this tensor also includes vacuum polarization in a way that depends on the vacuum chosen. Here we review different aspects of particle creation by looking at energy conservation and at the quantum stress tensor. We show that in the case of general spherically symmetric black holes that have a dynamical horizon, as occurs in a cosmological context, one cannot have pair creation on the horizon because this violates energy conservation. This confirms the results obtained in other ways in a previous paper [J. T. Firouzjaee and G. F. R. Ellis, Gen. Relativ. Gravit. 47, 6 (2015)]. Looking at the expectation value of the quantum stress tensor with three different definitions of the vacuum state, we study the nature of particle creation and vacuum polarization in black hole and cosmological models, and the associated stress-energy tensors. We show that the thermal temperature that is calculated from the particle flux given by the quantum stress tensor is compatible with the temperature determined by the affine null parameter approach. Finally, we show that in the spherically symmetric dynamic case, we can neglect the backscattering term and only consider the s-wave term near the future apparent horizon.

  4. Visualization of Tensor Fields Using Superquadric Glyphs

    PubMed Central

    Ennis, Daniel B.; Kindlman, Gordon; Rodriguez, Ignacio; Helm, Patrick A.; McVeigh, Elliot R.

    2007-01-01

    The spatially varying tensor fields that arise in magnetic resonance imaging are difficult to visualize due to the multivariate nature of the data. To improve the understanding of myocardial structure and function, a family of objects called glyphs, derived from superquadric parametric functions, is used to create informative and intuitive visualizations of the tensor fields. The superquadric glyphs are used to visualize both diffusion and strain tensors obtained in canine myocardium. The eigensystem of each tensor defines the glyph shape and orientation. Superquadric functions provide a continuum of shapes across four distinct eigensystems (λi, sorted eigenvalues): λ1 = λ2 = λ3 (spherical), λ1 = λ2 > λ3 (oblate), λ1 > λ2 = λ3 (prolate), and λ1 > λ2 > λ3 (cuboid). The superquadric glyphs are especially useful for identifying regions of anisotropic structure and function. Diffusion tensor renderings exhibit fiber angle trends and orthotropy (three distinct eigenvalues). Visualization of strain tensors with superquadric glyphs compactly exhibits radial thickening gradients, circumferential and longitudinal shortening, and torsion. The orthotropic nature of many biologic tissues and their DTMRI and strain data require visualization strategies that clearly exhibit the anisotropy of the data if it is to be interpreted properly. Superquadric glyphs improve the ability to distinguish fiber orientation and tissue orthotropy compared to ellipsoids. PMID:15690516
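    The four eigensystem regimes listed above map directly to a shape classification. A toy numpy version with an illustrative equality tolerance might look as follows (the paper's superquadric shape parameters are continuous, not thresholded):

```python
import numpy as np

def glyph_class(D, tol=0.1):
    """Classify a symmetric 3x3 tensor into the four superquadric regimes
    by its sorted eigenvalues l1 >= l2 >= l3.  `tol` is an illustrative
    relative threshold for treating eigenvalues as equal."""
    l1, l2, l3 = np.sort(np.linalg.eigvalsh(D))[::-1]
    eq = lambda a, b: abs(a - b) <= tol * max(abs(a), abs(b), 1e-12)
    if eq(l1, l2) and eq(l2, l3):
        return 'spherical'   # l1 = l2 = l3
    if eq(l1, l2):
        return 'oblate'      # l1 = l2 > l3
    if eq(l2, l3):
        return 'prolate'     # l1 > l2 = l3
    return 'cuboid'          # l1 > l2 > l3
```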

  5. Visualization of 3-D tensor fields

    NASA Technical Reports Server (NTRS)

    Hesselink, L.

    1996-01-01

    Second-order tensor fields have applications in many different areas of physics, such as general relativity and fluid mechanics. The wealth of multivariate information in tensor fields makes them more complex and abstract than scalar and vector fields. Visualization is a good technique for scientists to gain new insights from them. Visualizing a 3-D continuous tensor field is equivalent to simultaneously visualizing its three eigenvector fields. In the past, research has been conducted in the area of two-dimensional tensor fields. It was shown that degenerate points, defined as points where eigenvalues are equal to each other, are the basic singularities underlying the topology of tensor fields. Moreover, it was shown that eigenvectors never cross each other except at degenerate points. Since we live in a three-dimensional world, it is important for us to understand the underlying physics of this world. In this report, we describe a new method for locating degenerate points along with the conditions for classifying them in three-dimensional space. Finally, we discuss some topological features of three-dimensional tensor fields, and interpret topological patterns in terms of physical properties.

  6. Catalytic Decomposition of PH3 on Heated Tungsten Wire Surfaces

    NASA Astrophysics Data System (ADS)

    Umemoto, Hironobu; Nishihara, Yushin; Ishikawa, Takuma; Yamamoto, Shingo

    2012-08-01

    The catalytic decomposition processes of PH3 on heated tungsten surfaces were studied to clarify the mechanisms governing phosphorus doping into silicon substrates. Mass spectrometric measurements show that PH3 can be decomposed by more than 50% at temperatures above 2000 K. H, P, PH, and PH2 radicals were identified by laser spectroscopic techniques. Absolute density measurements of these radical species, as well as their PH3 flow rate dependence, show that the major products on the catalyst surfaces are P and H atoms, while PH and PH2 are produced in secondary processes in the gas phase. In other words, catalytic decomposition, unlike plasma decomposition processes, can be a clean source of P atoms, which can then be the only major dopant precursors. In the presence of an excess amount of H2, the apparent decomposition efficiency is small. This can be explained by rapid cyclic reactions of decomposition, deposition, and etching that reproduce PH3.

  7. Ecotoxicity effects triggered in aquatic organisms by invasive Acer negundo and native Alnus glutinosa leaf leachates obtained in the process of aerobic decomposition.

    PubMed

    Manusadžianas, Levonas; Darginavičienė, Jūratė; Gylytė, Brigita; Jurkonienė, Sigita; Krevš, Alina; Kučinskienė, Alė; Mačkinaitė, Rimutė; Pakalnis, Romas; Sadauskas, Kazys; Sendžikaitė, Jūratė; Vitkus, Rimantas

    2014-10-15

    The replacement of autochthonous tree species by invasive ones in coastal zones of freshwater bodies induces additional alteration of hydrochemical and microbiological characteristics due to decomposition of fallen leaves of non-indigenous species, which can lead to an ecotoxic response of the littoral biota. Leaves of boxelder maple (Acer negundo), invasive to Lithuania, and of the autochthonous black alder (Alnus glutinosa) lost more than half of their biomass and released a stable amount of DOC (60-70 mg/L) throughout a 90-day mesocosm experiment under aerobic conditions. This, along with the relatively small BOD7 values detected after some variation within the first month, confirms effective biodegradation by fungi and bacteria. The ambient water was more enriched with different forms of N and P by decomposing boxelder maple than by alder leaves. During the first month, both leachates were more toxic to the charophyte (Nitellopsis obtusa) at the mortality and membrane depolarization levels, and later to two crustacean species. The biomarker response, H(+)-ATPase activity in membrane preparations from N. obtusa, was stronger for A. negundo. Generally, boxelder maple leaf leachates were more toxic to the tested hydrobionts, which coincides with a previous study on leaves of the same pair of tree species conducted under microaerobic conditions (Krevš et al., 2013). PMID:25058932

  9. Effect of the annealing process on the microstructure of La2Zr2O7 thin layers epitaxially grown on LaAlO3 by metalorganic decomposition

    NASA Astrophysics Data System (ADS)

    Jiménez, C.; Caroff, T.; Rapenne, L.; Morlens, S.; Santos, E.; Odier, P.; Weiss, F.

    2009-05-01

    La2Zr2O7 (LZO) films have been grown by metalorganic decomposition (MOD) to be used as buffer layers for coated conductors. A characteristic feature of LZO thin films deposited by MOD is the formation of nanovoids in an almost single-crystal structure of the LZO pyrochlore phase. Annealing parameters (heating ramp, temperature, pressure, etc.) were varied to establish their influence on the microstructure of the LZO layers. X-ray diffraction (XRD) and transmission electron microscopy (TEM) were used for sample characterization. The epitaxial pyrochlore phase was obtained for annealing temperatures higher than 850 °C regardless of the other annealing conditions. However, the film microstructure, in particular the shape and size of the nanovoids, is strongly dependent on the heating ramp and pressure during annealing. When using a low heating ramp, percolation of voids creates oxygen diffusion channels that are detrimental to substrate protection during coated conductor fabrication. From this point of view, high heating rates are better adapted to the growth of LZO layers.

  10. Morphological Decomposition in Reading Hebrew Homographs

    ERIC Educational Resources Information Center

    Miller, Paul; Liran-Hazan, Batel; Vaknin, Vered

    2016-01-01

    The present work investigates whether and how morphological decomposition processes bias the reading of Hebrew heterophonic homographs, i.e., unique orthographic patterns that are associated with two separate phonological, semantic entities depicted by means of two morphological structures (linear and nonlinear). In order to reveal the nature of…

  11. Moment tensors of a dislocation in a porous medium

    NASA Astrophysics Data System (ADS)

    Wang, Zhi; Hu, Hengshan

    2016-06-01

    A dislocation can be represented by a moment tensor for calculating seismic waves. However, the moment tensor expression was derived in an elastic medium and cannot completely describe a dislocation in a porous medium. In this paper, effective moment tensors of a dislocation in a porous medium are derived. It is found that the dislocation is equivalent to two independent moment tensors, i.e., the bulk moment tensor acting on the bulk of the porous medium and the isotropic fluid moment tensor acting on the pore fluid. Both of them are caused by the solid dislocation as well as the fluid-solid relative motion corresponding to fluid injection towards the surrounding rocks (or fluid outflow) through the fault plane. For a shear dislocation, the fluid moment tensor is zero, and the dislocation is equivalent to a double couple acting on the bulk; for an opening dislocation or fluid injection, the two moment tensors are needed to describe the source. The fluid moment tensor only affects the radiated compressional waves. By calculating the ratio of the radiation fields generated by unit fluid moment tensor and bulk moment tensor, it is found that the fast compressional wave radiated by the bulk moment tensor is much stronger than that radiated by the fluid moment tensor, while the slow compressional wave radiated by the fluid moment tensor is several times stronger than that radiated by the bulk moment tensor.

  12. General tensor discriminant analysis and gabor features for gait recognition.

    PubMed

    Tao, Dacheng; Li, Xuelong; Wu, Xindong; Maybank, Stephen J

    2007-10-01

    The traditional image representations are not suited to conventional classification methods, such as the linear discriminant analysis (LDA), because of the under sample problem (USP): the dimensionality of the feature space is much higher than the number of training samples. Motivated by the successes of the two-dimensional LDA (2DLDA) for face recognition, we develop a general tensor discriminant analysis (GTDA) as a preprocessing step for LDA. The benefits of GTDA compared with existing preprocessing methods, e.g., principal component analysis (PCA) and 2DLDA, include 1) the USP is reduced in subsequent classification by, for example, LDA; 2) the discriminative information in the training tensors is preserved; and 3) GTDA provides stable recognition rates because the alternating projection optimization algorithm to obtain a solution of GTDA converges, while that of 2DLDA does not. We use human gait recognition to validate the proposed GTDA. The averaged gait images are utilized for gait representation. Given the popularity of Gabor function based image decompositions for image understanding and object recognition, we develop three different Gabor function based image representations: 1) the GaborD representation is the sum of Gabor filter responses over directions, 2) GaborS is the sum of Gabor filter responses over scales, and 3) GaborSD is the sum of Gabor filter responses over scales and directions. The GaborD, GaborS and GaborSD representations are applied to the problem of recognizing people from their averaged gait images. A large number of experiments were carried out to evaluate the effectiveness (recognition rate) of gait recognition based on first obtaining a Gabor, GaborD, GaborS or GaborSD image representation, then using GTDA to extract features and finally using LDA for classification. The proposed methods achieved good performance for gait recognition based on image sequences from the USF HumanID Database. Experimental comparisons are made with nine …
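    As an illustration of the "sum of Gabor filter responses over directions" (GaborD) idea, a single-scale numpy sketch is given below; the kernel size, wavelength and number of directions are made-up parameters, and the paper's representation uses a full multi-scale filter bank.

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma):
    """Real (even-symmetric) Gabor kernel: a cosine grating at orientation
    theta under a Gaussian envelope."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    g = np.exp(-(xr**2 + yr**2) / (2.0 * sigma**2))
    return g * np.cos(2.0 * np.pi * xr / wavelength)

def gabor_d(img, wavelength=8.0, sigma=3.0, n_dirs=8, size=15):
    """GaborD sketch: sum of Gabor filter responses over n_dirs equally
    spaced orientations at one fixed scale, using circular convolution
    via the FFT for brevity."""
    out = np.zeros_like(img, dtype=float)
    F = np.fft.fft2(img)
    for k in range(n_dirs):
        kern = gabor_kernel(size, wavelength, np.pi * k / n_dirs, sigma)
        kpad = np.zeros_like(img, dtype=float)
        kpad[:size, :size] = kern          # zero-pad kernel to image size
        out += np.real(np.fft.ifft2(F * np.fft.fft2(kpad)))
    return out
```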

  14. Computation of streaming potential in porous media: Modified permeability tensor

    NASA Astrophysics Data System (ADS)

    Bandopadhyay, Aditya; DasGupta, Debabrata; Mitra, Sushanta K.; Chakraborty, Suman

    2015-11-01

    We quantify the pressure-driven electrokinetic transport of electrolytes in porous media through a matched asymptotic expansion based method to obtain a homogenized description of the upscaled transport. The pressure-driven flow of aqueous electrolytes over charged surfaces leads to the generation of an induced electric potential, commonly termed the streaming potential. We derive an expression for the modified permeability tensor, K_eff, which is analogous to the Darcy permeability tensor with due accounting for the induced streaming potential. The porous media herein are modeled as spatially periodic. The modified permeability tensor is obtained for both topographically simple and complex domains by enforcing a zero net global current. Towards resolving the complicated details of the porous medium in a computationally efficient framework, the domain identification and reconstruction of the geometries are performed using adaptive quadtree (in 2D) and octree (in 3D) algorithms, which allows one to resolve the solid-liquid interface at the desired level of resolution. We discuss the influence of the induced streaming potential on the modification of the Darcy law in connection with transport processes through porous plugs, clays and soils by considering a case study on Berea sandstone.

  15. Optimizing Tensor Contraction Expressions for Hybrid CPU-GPU Execution

    SciTech Connect

    Ma, Wenjing; Krishnamoorthy, Sriram; Villa, Oreste; Kowalski, Karol; Agrawal, Gagan

    2013-03-01

    Tensor contractions are generalized multidimensional matrix multiplication operations that occur widely in quantum chemistry. Efficient execution of tensor contractions on Graphics Processing Units (GPUs) requires several challenges to be addressed, including index permutation and small dimension sizes that reduce thread-block utilization. Moreover, to apply the same optimizations to various expressions, a code generation tool is needed. In this paper, we present our approach to automatically generate CUDA code to execute tensor contractions on GPUs, including management of data movement between CPU and GPU. To evaluate our tool, GPU-enabled code is generated for the most expensive contractions in CCSD(T), a key coupled cluster method, and incorporated into NWChem, a popular computational chemistry suite. For this method, we demonstrate a speedup of more than 8.4 using one GPU (instead of one core per node) and of more than 2.6 when utilizing the entire system with a hybrid CPU+GPU solution with 2 GPUs and 5 cores (instead of 7 cores per node). Finally, we analyze the implementation behavior on future GPU systems.
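
    A contraction of the kind discussed (the indices and dimensions below are purely illustrative, not taken from CCSD(T)) and its lowering to a flatten-and-multiply form, the pattern a code generator targets, can be sketched as:

```python
import numpy as np

# A representative coupled-cluster-style contraction (illustrative indices):
#   R[i,j,a,b] = sum_{c,d} T[i,j,c,d] * V[c,d,a,b]
rng = np.random.default_rng(0)
o, v = 4, 6                        # assumed occupied / virtual dimension sizes
T = rng.standard_normal((o, o, v, v))
V = rng.standard_normal((v, v, v, v))

R_einsum = np.einsum("ijcd,cdab->ijab", T, V)

# The same contraction as a matrix multiply: group (i,j) and (a,b) into
# composite indices and contract over the composite (c,d). Contractions whose
# indices are not already contiguous additionally need an index permutation
# before this step.
R_gemm = (T.reshape(o * o, v * v) @ V.reshape(v * v, v * v)).reshape(o, o, v, v)

assert np.allclose(R_einsum, R_gemm)
```

    On a GPU the permute and multiply steps map onto transpose kernels and batched GEMMs, which is where the small-dimension utilization issue the abstract mentions arises.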

  16. Diffusion tensor smoothing through weighted Karcher means.

    PubMed

    Carmichael, Owen; Chen, Jun; Paul, Debashis; Peng, Jie

    2013-01-01

    Diffusion tensor magnetic resonance imaging (DTI) quantifies the spatial distribution of water diffusion at each voxel on a regular grid of locations in a biological specimen by diffusion tensors: 3 × 3 positive definite matrices. Removal of noise from DTI is an important problem due to the high scientific relevance of DTI and the relatively low signal-to-noise ratio it provides. Leading approaches to this problem amount to estimation of weighted Karcher means of diffusion tensors within spatial neighborhoods, under various metrics imposed on the space of tensors. However, it is unclear how the behavior of these estimators varies with the magnitude of DTI sensor noise (the noise resulting from the thermal effects of MRI scanning) as well as the geometric structure of the underlying diffusion tensor neighborhoods. In this paper, we combine theoretical analysis, empirical analysis of simulated DTI data, and empirical analysis of real DTI scans to compare the noise removal performance of three kernel-based DTI smoothers that are based on Euclidean, log-Euclidean, and affine-invariant metrics. The results suggest, contrary to conventional wisdom, that imposing a simplistic Euclidean metric may in fact provide comparable or superior noise removal, especially in relatively unstructured regions and/or in the presence of moderate to high levels of sensor noise. In contrast, log-Euclidean and affine-invariant metrics may lead to better noise removal in highly structured anatomical regions, especially when the sensor noise is of low magnitude. These findings emphasize the importance of considering the interplay of sensor noise magnitude and tensor field geometric structure when assessing diffusion tensor smoothing options. They also point to the necessity for continued development of smoothing methods that perform well across a large range of scenarios. PMID:25419264
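
    Under the log-Euclidean metric, the weighted Karcher mean of diffusion tensors has the closed form exp(Σᵢ wᵢ log Dᵢ), with the matrix log and exp taken through the eigendecomposition. A minimal sketch (not the paper's implementation):

```python
import numpy as np

def spd_log(A):
    """Matrix logarithm of a symmetric positive definite matrix via eigh."""
    w, Q = np.linalg.eigh(A)
    return (Q * np.log(w)) @ Q.T

def spd_exp(S):
    """Matrix exponential of a symmetric matrix via eigh."""
    w, Q = np.linalg.eigh(S)
    return (Q * np.exp(w)) @ Q.T

def log_euclidean_mean(tensors, weights):
    """Weighted Karcher mean under the log-Euclidean metric:
    exp(sum_i w_i log(D_i)). Closed form, no iteration needed."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    S = sum(w * spd_log(D) for w, D in zip(weights, tensors))
    return spd_exp(S)
```

    A smoother applies this within each spatial neighborhood with kernel weights; the Euclidean smoother replaces log/exp with identity maps, and the affine-invariant mean requires an iterative fixed-point scheme instead.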

  17. An Adaptive Spectrally Weighted Structure Tensor Applied to Tensor Anisotropic Nonlinear Diffusion for Hyperspectral Images

    ERIC Educational Resources Information Center

    Marin Quintero, Maider J.

    2013-01-01

    The structure tensor for vector valued images is most often defined as the average of the scalar structure tensors in each band. The problem with this definition is the assumption that all bands provide the same amount of edge information giving them the same weights. As a result non-edge pixels can be reinforced and edges can be weakened…

  18. Photodegradation at day, microbial decomposition at night - decomposition in arid lands

    NASA Astrophysics Data System (ADS)

    Gliksman, Daniel; Gruenzweig, Jose

    2014-05-01

    Our current knowledge of decomposition in dry seasons and its role in carbon turnover is fragmentary. So far, decomposition during dry seasons was mostly attributed to abiotic mechanisms, mainly photochemical and thermal degradation, while the contribution of microorganisms to the decay process was excluded. We asked whether microbial decomposition occurs during the dry season and explored its interaction with photochemical degradation under Mediterranean climate. We conducted a litter bag experiment with local plant litter and manipulated litter exposure to radiation using radiation filters. We found notable rates of CO2 fluxes from litter which were related to microbial activity mainly during night-time throughout the dry season. This activity was correlated with litter moisture content and high levels of air humidity and dew. Day-time CO2 fluxes were related to solar radiation, and radiation manipulation suggested photodegradation as the underlying mechanism. In addition, a decline in microbial activity was followed by a reduction in photodegradation-related CO2 fluxes. The levels of microbial decomposition and photodegradation in the dry season were likely the factors influencing carbon mineralization during the subsequent wet season. This study showed that microbial decomposition can be a dominant contributor to CO2 emissions and mass loss in the dry season and it suggests a regulating effect of microbial activity on photodegradation. Microbial decomposition is an important contributor to the dry season decomposition and impacts the annual litter turn-over rates in dry regions. Global warming may lead to reduced moisture availability and dew deposition, which may greatly influence not only microbial decomposition of plant litter, but also photodegradation.

  19. Factors influencing leaf litter decomposition: An intersite decomposition experiment across China

    USGS Publications Warehouse

    Zhou, G.; Guan, L.; Wei, X.; Tang, X.; Liu, S.; Liu, J.; Zhang, Dongxiao; Yan, J.

    2008-01-01

    The Long-Term Intersite Decomposition Experiment in China (hereafter referred to as LTIDE-China) was established in 2002 to study how substrate quality and macroclimate factors affect leaf litter decomposition. The LTIDE-China includes a wide variety of natural and managed ecosystems, consisting of 12 forest types (eight regional broadleaf forests, three needle-leaf plantations and one broadleaf plantation) at eight locations across China. Samples of mixed leaf litter from the south subtropical evergreen broadleaf forest in Dinghushan (referred to as the DHS sample) were translocated to all 12 forest types. The leaf litter from each of the other 11 forest types was placed in its original forest to enable comparison of decomposition rates of DHS and local litters. The experiment lasted for 30 months, involving collection of litterbags from each site every 3 months. Our results show that annual decomposition rate constants, as represented by regression-fitted k-values, ranged from 0.169 to 1.454/year. Climatic factors control the decomposition rate, in which mean annual temperature and annual actual evapotranspiration are dominant and mean annual precipitation is subordinate. Initial C/N and N/P ratios were demonstrated to be important factors regulating litter decomposition rate. The decomposition process may be divided into two phases controlled by different factors; in our study, 0.75 years is believed to be the dividing line between the two phases. The fact that decomposition rates of DHS litters were slower than those of local litters may have resulted from the acclimation of local decomposer communities to an extraneous substrate. © 2008 Springer Science+Business Media B.V.
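
    The regression-fitted k-values referred to above come from the standard single-exponential decay model m(t) = m₀·exp(−kt), fitted here by log-linear regression. The litterbag data below are synthetic and purely illustrative, not the study's measurements.

```python
import numpy as np

def fit_k(t_years, mass_fraction):
    """Fit the annual decomposition rate constant k in the single-exponential
    model m(t) = m0 * exp(-k t) by linear regression on log mass remaining."""
    t = np.asarray(t_years, dtype=float)
    y = np.log(np.asarray(mass_fraction, dtype=float))
    slope, _ = np.polyfit(t, y, 1)      # log m = log m0 - k t
    return -slope

# Synthetic litterbag collections every 3 months for 30 months, true k = 0.8/yr
t = np.arange(0.25, 2.51, 0.25)
m = np.exp(-0.8 * t)
```

    Real litterbag series are noisy and often show the two-phase behavior the abstract describes, in which case a single k is fitted per phase.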

  20. Tensor network algorithm by coarse-graining tensor renormalization on finite periodic lattices

    NASA Astrophysics Data System (ADS)

    Zhao, Hui-Hai; Xie, Zhi-Yuan; Xiang, Tao; Imada, Masatoshi

    2016-03-01

    We develop coarse-graining tensor renormalization group algorithms to compute physical properties of two-dimensional lattice models on finite periodic lattices. Two different coarse-graining strategies, one based on the tensor renormalization group and the other based on the higher-order tensor renormalization group, are introduced. In order to optimize the tensor network model globally, a sweeping scheme is proposed to account for the renormalization effect from the environment tensors within the framework of the second renormalization group. We demonstrate the algorithms on the classical Ising model on the square lattice and the Kitaev model on the honeycomb lattice, and show that the finite-size algorithms achieve substantially more accurate results than the corresponding infinite-size ones.
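
    The elementary move in tensor-renormalization-group coarse graining is a truncated SVD that splits a rank-4 site tensor into two rank-3 tensors, keeping at most χ singular values. A minimal sketch of that single step (the full algorithm, sweeping, and environment tensors are not reproduced here):

```python
import numpy as np

def split_tensor(T, chi):
    """One TRG splitting: factor the rank-4 tensor T[u,l,d,r] into two rank-3
    tensors by a truncated SVD, keeping at most `chi` singular values."""
    u, l, d, r = T.shape
    M = T.reshape(u * l, d * r)                    # group (u,l) vs (d,r) legs
    U, s, Vh = np.linalg.svd(M, full_matrices=False)
    keep = min(chi, len(s))
    sq = np.sqrt(s[:keep])
    S1 = (U[:, :keep] * sq).reshape(u, l, keep)    # T ≈ Σ_k S1[u,l,k] S2[k,d,r]
    S2 = (sq[:, None] * Vh[:keep]).reshape(keep, d, r)
    return S1, S2
```

    Contracting the resulting rank-3 tensors on a coarser lattice and repeating gives the renormalization flow; the truncation to χ values is where the approximation, and hence the need for environment corrections, enters.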

  1. [Tensor Feature Extraction Using Multi-linear Principal Component Analysis for Brain Computer Interface].

    PubMed

    Wang, Jinjia; Yang, Liang

    2015-06-01

    The brain computer interface (BCI) can be used to control external devices directly through electroencephalogram (EEG) information. A multi-linear principal component analysis (MPCA) framework was used to address the limitations of processing multichannel EEG signals in tensor form with traditional principal component analysis (PCA) and two-dimensional principal component analysis (2DPCA). Based on MPCA, we used tensor-to-matrix projection to achieve dimensionality reduction and feature extraction. We then used the Fisher linear classifier to classify the features. Furthermore, we applied this method to BCI competition II dataset 4 and BCI competition IV dataset 3 in the experiment. The second-order tensor representation of time-space EEG data and the third-order tensor representation of time-space-frequency EEG data were used. The best results, superior to those from other dimensionality reduction methods, were obtained by extensive tuning of the parameters P and Q. For the second-order tensor, the highest accuracy rates achieved were 81.0% and 40.1%, and for the third-order tensor, the highest accuracy rates were 76.0% and 43.5%, respectively.
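
    The core MPCA operation is the mode-n product, which projects a data tensor with one matrix per mode. A minimal sketch; the dimensions and the random matrices standing in for the learned projections are assumptions for illustration.

```python
import numpy as np

def mode_n_product(T, U, n):
    """Multiply tensor T by matrix U along mode n: (T x_n U)."""
    T = np.moveaxis(T, n, 0)
    shape = T.shape
    out = U @ T.reshape(shape[0], -1)
    return np.moveaxis(out.reshape((U.shape[0],) + shape[1:]), 0, n)

def mpca_project(T, factors):
    """Project a data tensor onto an MPCA subspace given one projection
    matrix per mode (each of shape (reduced_dim, full_dim))."""
    for n, U in enumerate(factors):
        T = mode_n_product(T, U, n)
    return T

# An assumed time x channel EEG trial reduced from (64, 32) to (8, 4):
rng = np.random.default_rng(1)
trial = rng.standard_normal((64, 32))
U1, U2 = rng.standard_normal((8, 64)), rng.standard_normal((4, 32))
features = mpca_project(trial, [U1, U2])
```

    For a second-order tensor this reduces to U1 @ trial @ U2.T; a third-order time-space-frequency trial simply adds one more factor matrix.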

  2. Velocity gradient dynamics in compressible turbulence: Characterization of pressure-Hessian tensor

    NASA Astrophysics Data System (ADS)

    Suman, Sawan; Girimaji, Sharath S.

    2013-12-01

    The pressure-Hessian tensor produces the most significant difference between incompressible and compressible velocity gradient dynamics in turbulent flows. Characterization of the pressure-Hessian tensor as a function of the level of compressibility is therefore of much interest. Using direct numerical simulation results, we demonstrate that the pressure-Hessian tensor behavior can be characterized almost exclusively in terms of the compressibility parameter δ, defined as the growth rate of the dilatation rate. A key compressibility effect is the distinct change in the alignment between the pressure-Hessian and velocity gradient tensors with increasing δ. In incompressible turbulence, the pressure-Hessian eigenvectors exhibit a mild tendency to align at a 45° angle with the principal directions of the strain rate. With increasing δ, the pressure-Hessian tensor shows a progressively stronger tendency to align along the principal directions of the local strain-rate tensor. We show that this change in pressure-Hessian orientation causes the compressible velocity gradient dynamics to differ significantly from its incompressible counterpart. In incompressible turbulence, pressure mildly moderates the inherent gradient-steepening tendencies of the nonlinear inertial term. On the other hand, in highly compressible turbulence (extreme values of δ), pressure effects can lead to intense gradient steepening or smoothing depending upon the growth rate of the dilatation rate, thereby profoundly altering the cascade process.
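
    The alignment tendency described above can be quantified by the cosines of the angles between corresponding eigenvectors of the two symmetric tensors. A minimal sketch (illustrative, not the DNS post-processing used in the paper):

```python
import numpy as np

def alignment_cosines(P, S):
    """Cosines of the angles between corresponding eigenvectors of the
    pressure-Hessian tensor P and the strain-rate tensor S (both symmetric;
    eigenvectors are paired by ascending eigenvalue)."""
    _, Vp = np.linalg.eigh(P)
    _, Vs = np.linalg.eigh(S)
    # column-wise dot products; |.| because eigenvector signs are arbitrary
    return np.abs(np.sum(Vp * Vs, axis=0))
```

    In a DNS study these cosines would be accumulated into PDFs over all grid points, conditioned on δ, to expose the trend from 45° alignment toward full alignment.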

  3. Tensor subspace analysis for spatial-spectral classification of hyperspectral data

    NASA Astrophysics Data System (ADS)

    Fan, Lei; Messinger, David W.

    2016-05-01

    Remotely sensed data fusion aims to integrate multi-source information generated from different perspectives, acquired with different sensors, or captured at different times in order to produce fused data that contains more information than any individual data source. Recently, extended morphological attribute profiles (EMAPs) were proposed to embed contextual information, such as texture, shape, and size, into a high dimensional feature space as an alternative data source to the hyperspectral image (HSI). Although EMAPs provide greater capabilities in modeling both spatial and spectral information, they increase the dimensionality of the extracted features. Conventionally, a data point in a high dimensional feature space is represented by a vector. For HSI, this data representation has one obvious shortcoming: only spectral knowledge is utilized, without contextual relationships being exploited. Tensors provide a natural representation for HSI data by incorporating both spatial neighborhood awareness and spectral information. Moreover, tensors can be conveniently incorporated into a superpixel-based HSI image processing framework. In this paper, three tensor-based dimensionality reduction (DR) approaches were generalized for high dimensional imagery, with promising results reported. Among the tensor-based DR approaches, the Tensor Locality Preserving Projection (TLPP) algorithm utilizes a graph Laplacian to model the pairwise relationships among the tensor data points. It also demonstrated excellent performance for both pixel-wise and superpixel-wise classification on the Pavia University dataset.

  4. A 3 omega method to measure an arbitrary anisotropic thermal conductivity tensor.

    PubMed

    Mishra, Vivek; Hardin, Corey L; Garay, Javier E; Dames, Chris

    2015-05-01

    Previous use of the 3 omega method has been limited to materials with thermal conductivity tensors that are either isotropic or have their principal axes aligned with the natural Cartesian coordinate system defined by the heater line and sample surface. Here, we consider the more general case of an anisotropic thermal conductivity tensor with finite off-diagonal terms in this coordinate system. An exact closed-form solution for surface temperature has been found for the case of an ideal 3 omega heater line of finite width and infinite length, and verified numerically. We find that the common slope method of data processing yields the determinant of the thermal conductivity tensor, which is invariant upon rotation about the heater line's axis. Following this analytic result, an experimental scheme is proposed to isolate the thermal conductivity tensor elements. Using two heater lines and a known volumetric heat capacity, the arbitrary 2-dimensional anisotropic thermal conductivity tensor can be measured with a low frequency sweep. Four heater lines would be required to extend this method to measure all 6 unknown tensor elements in 3 dimensions. Experiments with anisotropic layered mica are carried out to demonstrate the analytical results.
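
    The rotational invariance of the determinant that underlies the slope method can be checked directly. The conductivity values and tilt angle below are illustrative, not measurements from the paper.

```python
import numpy as np

# A 2-D anisotropic conductivity tensor whose principal axes are tilted with
# respect to the heater-line coordinate system (values assumed, W/(m K)).
k_principal = np.diag([5.0, 1.0])
theta = np.deg2rad(30.0)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
k_rotated = R @ k_principal @ R.T     # finite off-diagonal terms appear

# The slope method senses only det(k), which rotation leaves unchanged:
assert abs(np.linalg.det(k_rotated) - np.linalg.det(k_principal)) < 1e-12
```

    This is why a single heater line cannot separate the tensor elements, and why the proposed scheme needs a second line (and a known heat capacity) to break the degeneracy.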

  5. Reliability of methods to separate stress tensors from heterogeneous fault-slip data

    NASA Astrophysics Data System (ADS)

    Liesa, Carlos L.; Lisle, Richard J.

    2004-03-01

    The reliability of methods for separating palaeostress tensors from heterogeneous fault-slip data is evaluated. The methods of Etchecopar et al. (1981), Yamaji (2000), and the cluster procedure of Nemcok and Lisle (1995) are assessed but the results can probably be extrapolated to other methods based on similar assumptions. Heterogeneous fault-slip data sets, artificially generated by mixing two natural homogeneous data sets, have been used to evaluate both the role of the relative dominance (in number of faults taken from each tensor) and the difference between the parent tensors. The results obtained from a natural heterogeneous data set were compared with additional field data to evaluate and constrain the tensor separation process as well. Results suggest that attempts to devise a fully automatic separation procedure for distinguishing homogeneous data sets from heterogeneous ones will be unsuccessful because the researcher will always be required to take some part in the correct choice of the tensors. In this sense, additional structural data such as geometrical characteristics of the faults (e.g. conjugate or quasi-conjugate Andersonian systems), stylolites or tension gashes will be very useful for the correct separation of stress tensors from fault-slip data.

  6. A 3 omega method to measure an arbitrary anisotropic thermal conductivity tensor

    NASA Astrophysics Data System (ADS)

    Mishra, Vivek; Hardin, Corey L.; Garay, Javier E.; Dames, Chris

    2015-05-01

    Previous use of the 3 omega method has been limited to materials with thermal conductivity tensors that are either isotropic or have their principal axes aligned with the natural Cartesian coordinate system defined by the heater line and sample surface. Here, we consider the more general case of an anisotropic thermal conductivity tensor with finite off-diagonal terms in this coordinate system. An exact closed-form solution for surface temperature has been found for the case of an ideal 3 omega heater line of finite width and infinite length, and verified numerically. We find that the common slope method of data processing yields the determinant of the thermal conductivity tensor, which is invariant upon rotation about the heater line's axis. Following this analytic result, an experimental scheme is proposed to isolate the thermal conductivity tensor elements. Using two heater lines and a known volumetric heat capacity, the arbitrary 2-dimensional anisotropic thermal conductivity tensor can be measured with a low frequency sweep. Four heater lines would be required to extend this method to measure all 6 unknown tensor elements in 3 dimensions. Experiments with anisotropic layered mica are carried out to demonstrate the analytical results.

  7. Tensor scale-based image registration

    NASA Astrophysics Data System (ADS)

    Saha, Punam K.; Zhang, Hui; Udupa, Jayaram K.; Gee, James C.

    2003-05-01

    Tangible solutions to image registration are paramount in longitudinal as well as multi-modal medical imaging studies. In this paper, we introduce tensor scale, a recently developed local morphometric parameter, into rigid image registration. A tensor scale-based registration method incorporates local structure size, orientation and anisotropy into the matching criterion, and therefore allows efficient multi-modal image registration and holds potential to overcome the effects of intensity inhomogeneity in MRI. Two classes of two-dimensional image registration methods are proposed: (1) one that computes the angular shift between two images by correlating their tensor scale orientation histograms, and (2) one that registers two images by maximizing the similarity of tensor scale features. Results of applying the proposed methods to proton density and T2-weighted MR brain images of (1) the same slice of the same subject, and (2) different slices of the same subject are presented. The basic superiority of tensor scale-based registration over intensity-based registration is that it may allow the use of local Gestalts formed by the intensity patterns over the image instead of simply considering intensities as isolated events at the pixel level. This would be helpful in dealing with the effects of intensity inhomogeneity and noise in MRI.
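
    Method (1), recovering an angular shift by correlating orientation histograms, can be sketched as a circular cross-correlation. The bin count and bin width below are assumptions, and the histograms are synthetic rather than computed from tensor scale.

```python
import numpy as np

def angular_shift(hist_a, hist_b):
    """Estimate the rotation between two images from their orientation
    histograms: the circular shift of hist_b that maximizes its correlation
    with hist_a. Bins are assumed to span 0..180 degrees."""
    n = len(hist_a)
    scores = [np.dot(hist_a, np.roll(hist_b, s)) for s in range(n)]
    return int(np.argmax(scores)) * (180.0 / n)

# Synthetic check: hist_b is hist_a rotated by 3 bins (30 degrees at 18 bins).
rng = np.random.default_rng(2)
hist_a = rng.random(18)
hist_b = np.roll(hist_a, -3)
```

    After the angular shift is removed, method (2) refines the match by comparing the full tensor scale feature fields rather than only their orientation marginals.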

  8. Primordial tensor modes of the early Universe

    NASA Astrophysics Data System (ADS)

    Martínez, Florencia Benítez; Olmedo, Javier

    2016-06-01

    We study cosmological tensor perturbations on a quantized background within the hybrid quantization approach. In particular, we consider a flat, homogeneous and isotropic spacetime and small tensor inhomogeneities on it. We truncate the action to second order in the perturbations. The dynamics is ruled by a homogeneous scalar constraint. We carry out a canonical transformation in the system where the Hamiltonian for the tensor perturbations takes a canonical form. The new tensor modes now admit a standard Fock quantization with a unitary dynamics. We then combine this representation with a generic quantum scheme for the homogeneous sector. We adopt a Born-Oppenheimer ansatz for the solutions to the constraint operator, previously employed to study the dynamics of scalar inhomogeneities. We analyze the approximations that allow us to recover, on the one hand, a Schrödinger equation similar to the one emerging in the dressed metric approach and, on the other hand, the ones necessary for the effective evolution equations of these primordial tensor modes within the hybrid approach to be valid. Finally, we consider loop quantum cosmology as an example where these quantization techniques can be applied and compare with other approaches.

  9. MULTILINEAR TENSOR REGRESSION FOR LONGITUDINAL RELATIONAL DATA

    PubMed Central

    Hoff, Peter D.

    2016-01-01

    A fundamental aspect of relational data, such as from a social network, is the possibility of dependence among the relations. In particular, the relations between members of one pair of nodes may have an effect on the relations between members of another pair. This article develops a type of regression model to estimate such effects in the context of longitudinal and multivariate relational data, or other data that can be represented in the form of a tensor. The model is based on a general multilinear tensor regression model, a special case of which is a tensor autoregression model in which the tensor of relations at one time point is parsimoniously regressed on relations from previous time points. This is done via a separable, or Kronecker-structured, regression parameter along with a separable covariance model. In the context of an analysis of longitudinal multivariate relational data, it is shown how the multilinear tensor regression model can represent patterns that often appear in relational and network data, such as reciprocity and transitivity. PMID:27458495
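
    The separable (Kronecker-structured) parameterization rests on the identity vec(A X Bᵀ) = (B ⊗ A) vec(X), which lets a small pair of row and column coefficient matrices stand in for a full pq × pq coefficient matrix. A quick numerical check of the identity (the matrices are random stand-ins):

```python
import numpy as np

rng = np.random.default_rng(3)
p, q = 3, 4
A = rng.standard_normal((p, p))    # row (e.g. sender) effects
B = rng.standard_normal((q, q))    # column (e.g. receiver) effects
X = rng.standard_normal((p, q))    # relations at the previous time point

lhs = (A @ X @ B.T).flatten(order="F")        # vec(A X B^T), column stacking
rhs = np.kron(B, A) @ X.flatten(order="F")    # (B kron A) vec(X)
assert np.allclose(lhs, rhs)
```

    The separable model thus needs p² + q² parameters instead of (pq)², which is what makes estimation feasible for network-sized tensors.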

  10. Non-standard symmetries and Killing tensors

    NASA Astrophysics Data System (ADS)

    Visinescu, Mihai

    2009-10-01

    Higher order symmetries corresponding to Killing tensors are investigated. The intimate relation between Killing-Yano tensors and non-standard supersymmetries is pointed out. The gravitational anomalies are absent if the hidden symmetry is associated with a Killing-Yano tensor. In the Dirac theory on curved spaces, Killing-Yano tensors generate Dirac type operators involved in interesting algebraic structures as dynamical algebras or even infinite dimensional algebras or superalgebras. The general results are applied to space-times which appear in modern studies. The 4-dimensional Euclidean Taub-NUT space and its generalizations introduced by Iwai and Katayama are analyzed from the point of view of hidden symmetries. One presents the infinite dimensional superalgebra of Dirac type operators on Taub-NUT space that can be seen as a twisted loop algebra. The axial anomaly, interpreted as the index of the Dirac operator, is computed for the generalized Taub-NUT metrics. The existence of the conformal Killing-Yano tensors is investigated for some spaces with mixed Sasakian structures.

  11. Spectral line polarization with angle-dependent partial frequency redistribution. I. A Stokes parameters decomposition for Rayleigh scattering

    NASA Astrophysics Data System (ADS)

    Frisch, H.

    2010-11-01

    Context. The linear polarization of strong resonance lines observed near the solar limb is created by a multiple-scattering process. Partial frequency redistribution (PRD) effects must be accounted for to explain the polarization profiles. The redistribution matrix describing the scattering process is a sum of terms, each containing a PRD function multiplied by a Rayleigh-type phase matrix. A standard approximation made in calculating the polarization is to average the PRD functions over all the scattering angles, because the numerical work needed to take the angle-dependence of the PRD functions into account is large and not always needed for reasonable evaluations of the polarization. Aims: This paper describes a Stokes parameters decomposition method, applicable in plane-parallel cylindrically symmetrical media, which aims at simplifying the numerical work needed to overcome the angle-average approximation. Methods: The decomposition method relies on an azimuthal Fourier expansion of the PRD functions associated with a decomposition of the phase matrices in terms of the Landi Degl'Innocenti irreducible spherical tensors for polarimetry T^K_Q(i, Ω) (i the Stokes parameter index, Ω the ray direction). The terms that depend on the azimuth of the scattering angle are retained in the phase matrices. Results: It is shown that the Stokes parameters I and Q, which have the same cylindrical symmetry as the medium, can be expressed in terms of four cylindrically symmetrical components I_Q^K (K = Q = 0; K = 2, Q = 0, 1, 2). The components with Q = 1, 2 are created by the angular dependence of the PRD functions. They go to zero at disk center, ensuring that Stokes Q also goes to zero. Each component I_Q^K is a solution to a standard radiative transfer equation. The source terms S_Q^K are significantly simpler than the source terms corresponding to I and Q. They satisfy a set of integral equations that can be solved by an accelerated lambda iteration (ALI) method.

  12. Error reduction in EMG signal decomposition.

    PubMed

    Kline, Joshua C; De Luca, Carlo J

    2014-12-01

    Decomposition of the electromyographic (EMG) signal into constituent action potentials and the identification of individual firing instances of each motor unit in the presence of ambient noise are inherently probabilistic processes, whether performed manually or with automated algorithms. Consequently, they are subject to errors. We set out to classify and reduce these errors by analyzing 1,061 motor-unit action-potential trains (MUAPTs), obtained by decomposing surface EMG (sEMG) signals recorded during human voluntary contractions. Decomposition errors were classified into two general categories: location errors representing variability in the temporal localization of each motor-unit firing instance and identification errors consisting of falsely detected or missed firing instances. To mitigate these errors, we developed an error-reduction algorithm that combines multiple decomposition estimates to determine a more probable estimate of motor-unit firing instances with fewer errors. The performance of the algorithm is governed by a trade-off between the yield of MUAPTs obtained above a given accuracy level and the time required to perform the decomposition. When applied to a set of sEMG signals synthesized from real MUAPTs, the identification error was reduced by an average of 1.78%, improving the accuracy to 97.0%, and the location error was reduced by an average of 1.66 ms. The error-reduction algorithm in this study is not limited to any specific decomposition strategy. Rather, we propose it be used for other decomposition methods, especially when analyzing precise motor-unit firing instances, as occurs when measuring synchronization.
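
    The combination of multiple decomposition estimates can be sketched as a voting scheme: firing instances that agree across estimates within a small tolerance are clustered and kept when enough estimates support them. This is an assumed, simplified stand-in for the authors' error-reduction algorithm, not a reproduction of it.

```python
import numpy as np

def merge_estimates(estimates, tol=0.002, min_votes=2):
    """Combine several decomposition estimates of one motor unit's firing
    times (in seconds): cluster firings that agree within `tol`, keep clusters
    supported by at least `min_votes` estimates, and return cluster means."""
    events = sorted((t, i) for i, est in enumerate(estimates) for t in est)
    merged, cluster, voters = [], [], set()
    for t, i in events:
        if cluster and t - cluster[-1] > tol:
            if len(voters) >= min_votes:
                merged.append(float(np.mean(cluster)))
            cluster, voters = [], set()
        cluster.append(t)
        voters.add(i)
    if cluster and len(voters) >= min_votes:
        merged.append(float(np.mean(cluster)))
    return merged
```

    Averaging within a cluster reduces location error, while the vote threshold rejects falsely detected firings that appear in only one estimate; raising `min_votes` trades yield for accuracy, mirroring the trade-off the abstract describes.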

  13. Domain decomposition methods in aerodynamics

    NASA Technical Reports Server (NTRS)

    Venkatakrishnan, V.; Saltz, Joel

    1990-01-01

    Compressible Euler equations are solved for two-dimensional problems by a preconditioned conjugate gradient-like technique. An approximate Riemann solver is used to compute the numerical fluxes to second-order accuracy in space. Two ways to achieve parallelism are tested: one makes use of the parallelism inherent in triangular solves, and the other employs domain decomposition techniques. The vectorization/parallelism in triangular solves is realized by the use of a reordering technique called wavefront ordering. This process involves the interpretation of the triangular matrix as a directed graph and the analysis of the data dependencies. It is noted that the factorization can also be done in parallel with the wavefront ordering. The performances of two ways of partitioning the domain, strips and slabs, are compared. Results on the Cray Y-MP are reported for an inviscid transonic test case. The performances of linear algebra kernels are also reported.
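
    Wavefront ordering of a triangular solve can be sketched by assigning each row of the lower-triangular matrix a dependency level: row i depends on row j when L[i, j] ≠ 0 with j < i, and all rows in the same level can be solved in parallel. A minimal dense-matrix illustration (production codes work on sparse structures):

```python
import numpy as np

def wavefront_levels(L):
    """Level scheduling for a lower-triangular solve: interpret L as a
    directed graph (edge j -> i when L[i, j] != 0, j < i) and assign each
    row the longest-path depth of its dependencies."""
    n = L.shape[0]
    level = np.zeros(n, dtype=int)
    for i in range(n):
        deps = [j for j in range(i) if L[i, j] != 0]
        if deps:
            level[i] = 1 + max(level[j] for j in deps)
    return level

# A 4x4 lower-triangular pattern: row 1 depends on 0; row 3 on rows 1 and 2.
L = np.array([[1, 0, 0, 0],
              [1, 1, 0, 0],
              [0, 0, 1, 0],
              [0, 1, 1, 1]], dtype=float)
```

    Here rows 0 and 2 form the first wavefront, row 1 the second, and row 3 the third; the solver then sweeps the levels in order, vectorizing within each.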

  14. TensorPack: a Maple-based software package for the manipulation of algebraic expressions of tensors in general relativity

    NASA Astrophysics Data System (ADS)

    Huf, P. A.; Carminati, J.

    2015-09-01

    In this paper we: (1) introduce TensorPack, a software package for the algebraic manipulation of tensors in covariant index format in Maple; (2) briefly demonstrate the use of the package with an orthonormal tensor proof of the shearfree conjecture for dust. TensorPack is based on the Riemann and Canon tensor software packages and uses their functions to express tensors in an indexed covariant format. TensorPack uses a string representation as input and provides functions for output in index form. It extends the functionality to basic algebra of tensors, substitution, covariant differentiation, contraction, raising/lowering indices, symmetry functions and other accessory functions. The output can be merged with text in the Maple environment to create a full working document with embedded dynamic functionality. The package offers potential for manipulation of indexed algebraic tensor expressions in a flexible software environment.

  15. O(N) Random Tensor Models

    NASA Astrophysics Data System (ADS)

    Carrozza, Sylvain; Tanasa, Adrian

    2016-08-01

    We define in this paper a class of three-index tensor models, endowed with {O(N)^{⊗ 3}} invariance (N being the size of the tensor). This allows one to generate, via the usual QFT perturbative expansion, a class of Feynman tensor graphs which is strictly larger than the class of Feynman graphs of both the multi-orientable model (and hence of the colored model) and the U(N) invariant models. We first exhibit the existence of a large N expansion for such a model with general interactions. We then focus on the quartic model and we identify the leading and next-to-leading order (NLO) graphs of the large N expansion. Finally, we prove the existence of a critical regime and we compute the critical exponents, both at leading order and at NLO. This is achieved through the use of various analytic combinatorics techniques.

  16. Spectral analysis of the full gravity tensor

    NASA Astrophysics Data System (ADS)

    Rummel, R.; van Gelderen, M.

    1992-10-01

    It is shown that, when the five independent components of the gravity tensor are grouped into the sets (Γzz), (Γxz, Γyz), and (Γxx − Γyy, 2Γxy) and expanded into an infinite series of pure-spin spherical harmonic tensors, it is possible to derive simple eigenvalue connections between these three sets and the spherical harmonic expansion of the gravity potential. The three eigenvalues are (n + 1)(n + 2), −(n + 2)√(n(n + 1)), and √((n − 1)n(n + 1)(n + 2)). The joint ESA and NASA Aristoteles mission is designed to measure with high precision the tensor components Γzz, Γyz, and Γyy, which will make it possible to determine the global gravity field in six months' time.

  17. Low Temperature Decomposition Rates for Tetraphenylborate Ion

    SciTech Connect

    Walker, D.D.

    1998-11-18

    Previous studies indicated that palladium catalyzes rapid decomposition of alkaline tetraphenylborate slurries. Additional evidence suggests that Pd(II) reduces to Pd(0) during catalyst activation. Further use of tetraphenylborate (TPB) ion in the decontamination of radioactive waste may require removal of the catalyst or cooling to temperatures at which the decomposition reaction proceeds slowly and does not adversely affect processing. Recent tests showed that tetraphenylborate did not react appreciably at 25 degrees Celsius over six months, suggesting the potential to avoid the decomposition at low temperatures. The lack of reaction at low temperature could reflect very slow kinetics at the lower temperature, or may indicate a catalyst "deactivation" process. Previous tests in the temperature range 35 to 70 degrees Celsius provided a low-precision estimate of the activation energy of the reaction with which to predict the rate of reaction at 25 degrees Celsius. To understand the observations at 25 degrees Celsius, experiments must separate the catalyst activation step and the subsequent reaction with TPB. Tests described in this report represent an initial attempt to separate the two steps and determine the rate and activation energy of the reaction between activated catalyst and TPB. The results of these tests indicate that the absence of reaction at 25 degrees Celsius was caused by failure to activate the catalyst or the presence of a deactivating mechanism. In the presence of activated catalyst, the decomposition reaction rate is significant.
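
    Extrapolating a rate from the 35-70 degrees Celsius range down to 25 degrees Celsius, as the abstract discusses, uses the Arrhenius law k = A·exp(−Ea/(RT)). A minimal sketch with illustrative (not measured) rate constants and activation energy:

```python
import numpy as np

R_GAS = 8.314  # J/(mol K)

def arrhenius_extrapolate(k1, T1, k2, T2, T_target):
    """Estimate the rate at T_target (K) from rates measured at T1 and T2
    by fitting the Arrhenius law k = A * exp(-Ea / (R T))."""
    Ea = R_GAS * np.log(k1 / k2) / (1.0 / T2 - 1.0 / T1)
    A = k1 * np.exp(Ea / (R_GAS * T1))
    return A * np.exp(-Ea / (R_GAS * T_target))
```

    A large discrepancy between the extrapolated rate and the observed (near-zero) rate at 25 degrees Celsius is what points to catalyst deactivation rather than simple slow kinetics.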

  18. Role of the tensor interaction in He isotopes with a tensor-optimized shell model

    SciTech Connect

    Myo, Takayuki; Umeya, Atsushi; Toki, Hiroshi; Ikeda, Kiyomi

    2011-09-15

    We studied the role of the tensor interaction in He isotopes systematically on the basis of the tensor-optimized shell model (TOSM). We use a bare nucleon-nucleon interaction AV8' obtained from nucleon-nucleon scattering data. The short-range correlation is treated in the unitary correlation operator method (UCOM). Using the TOSM + UCOM approach, we investigate the role of the tensor interaction in each spectrum of the He isotopes. It is found that the tensor interaction enhances the LS splitting energy observed in ⁵He, in which the p1/2 and p3/2 orbits play different roles in the tensor correlation. In ⁶,⁷,⁸He, the low-lying states containing extra neutrons in the p3/2 orbit gain the tensor contribution. On the other hand, the excited states containing extra neutrons in the p1/2 orbit lose the tensor contribution due to the Pauli-blocking effect with the 2p2h states in the ⁴He core configuration.

  19. Hydrogen iodide decomposition

    DOEpatents

    O'Keefe, Dennis R.; Norman, John H.

    1983-01-01

    Liquid hydrogen iodide is decomposed to form hydrogen and iodine in the presence of water using a soluble catalyst. Decomposition is carried out at a temperature between about 350 K and about 525 K and at a corresponding pressure between about 25 and about 300 atmospheres in the presence of an aqueous solution which acts as a carrier for the homogeneous catalyst. Various halides of the platinum group metals, particularly Pd, Rh and Pt, are used, particularly the chlorides and iodides, which exhibit good solubility. After separation of the H2, the stream from the decomposer is countercurrently extracted with nearly dry HI to remove I2. The wet phase contains most of the catalyst and is recycled directly to the decomposition step. The catalyst in the remaining almost dry HI-I2 phase is then extracted into a wet phase which is also recycled. The catalyst-free HI-I2 phase is finally distilled to separate the HI and I2. The HI is recycled to the reactor; the I2 is returned to a reactor operating in accordance with the Bunsen equation to create more HI.

  20. Blue running of the primordial tensor spectrum

    SciTech Connect

    Gong, Jinn-Ouk

    2014-07-01

    We examine the possibility of a positive spectral index of the power spectrum of the primordial tensor perturbation produced during inflation, in the light of the detection of the B-mode polarization by the BICEP2 collaboration. We find that a blue tilt is in general possible when the slow-roll parameter decays rapidly. We present two known examples in which a positive spectral index for the tensor power spectrum can be obtained. We also briefly discuss other consistency tests for further studies of inflationary dynamics.

  1. Layout decomposition and synthesis for a modular technology to solve the edge-placement challenges by combining selective etching, direct stitching, and alternating-material self-aligned multiple patterning processes

    NASA Astrophysics Data System (ADS)

    Liu, Hongyi; Han, Ting; Zhou, Jun; Chen, Yijian

    2016-03-01

    To overcome the prohibitive barriers of edge-placement errors (EPE) in the cut/block/via step of complementary lithography, we propose a modular patterning approach combining layout stitching, selective etching, and alternating-material self-aligned multiple patterning (altSAMP) processes. In this approach, altSAMP is used to create line arrays of two alternating materials, which allows a highly selective etching process to remove one material without attacking the other; a larger EPE can therefore be tolerated in the line-cutting step. Without the need for connecting vias, the stitching process can form 2-D features by directly stitching two pattern components together, creating 2-D design freedom as well as multiple-CD/pitch capability. By adopting this novel approach, we can potentially achieve higher processing yield and more 2-D design freedom for continued IC scaling down to 5 nm. We developed layout decomposition and synthesis algorithms for critical layers, and the fin/gate/metal layers from the NCSU open cell library are used to test the proposed algorithms.

  2. Fast Density Inversion Solution for Full Tensor Gravity Gradiometry Data

    NASA Astrophysics Data System (ADS)

    Hou, Zhenlong; Wei, Xiaohui; Huang, Danian

    2016-02-01

    We modify the classical preconditioned conjugate gradient method for full tensor gravity gradiometry data. The resulting parallelized algorithm is implemented on a cluster to achieve rapid density inversions for various scenarios, overcoming the computation-time and memory bottlenecks caused by the many iterations required. The proposed approach is mainly based on parallel programming using the Message Passing Interface, supplemented by Open Multi-Processing. Our implementation is efficient and scalable, enabling its use with large-scale data. We consider two synthetic models and real survey data from Vinton Dome, US, and demonstrate that our solutions are reliable and feasible.
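    The core iteration can be sketched compactly. This is a serial toy version of preconditioned conjugate gradients with a Jacobi (diagonal) preconditioner — our assumption; the paper's cluster implementation distributes these same matrix-vector products with MPI and OpenMP.

    ```python
    # Serial sketch of the preconditioned conjugate gradient (PCG) iteration
    # underlying the density inversion. Jacobi preconditioning is assumed
    # here for illustration; the paper parallelizes the algorithm with MPI
    # and OpenMP on a cluster.
    import numpy as np

    def pcg(A, b, tol=1e-10, max_iter=500):
        M_inv = 1.0 / np.diag(A)          # Jacobi (diagonal) preconditioner
        x = np.zeros_like(b)
        r = b - A @ x
        z = M_inv * r
        p = z.copy()
        rz = r @ z
        for _ in range(max_iter):
            Ap = A @ p
            alpha = rz / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            if np.linalg.norm(r) < tol:
                break
            z = M_inv * r
            rz_new = r @ z
            p = z + (rz_new / rz) * p
            rz = rz_new
        return x

    # Small SPD test system standing in for the (much larger) normal equations.
    rng = np.random.default_rng(0)
    G = rng.normal(size=(50, 30))
    A = G.T @ G + 1e-3 * np.eye(30)       # damped normal-equations matrix
    b = G.T @ rng.normal(size=50)
    x = pcg(A, b)
    print("residual norm:", np.linalg.norm(A @ x - b))
    ```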

  3. Complex rupture process of the March 19, 2013, Rudna mine (Poland) seismic event - local and regional view

    NASA Astrophysics Data System (ADS)

    Rudzinski, Lukasz; Cesca, Simone; Lizurek, Grzegorz

    2015-04-01

    On March 19th, 2013, a strong shallow induced seismic event struck a mining panel in the room-and-pillar Rudna copper mine, SE Poland. The event caused significant damage to the mining tunnel and trapped 19 miners, who were safely rescued a few hours later. Although mining-induced seismicity is frequent at this mine, the March 19 event was unusual because of its larger magnitude, its occurrence far from the mining stopes, and because it was accompanied by a strongly hazardous rockburst. The mining inspections following the event verified the occurrence of a rockfall with tunnel floor uplift, but also recognized the presence of a faulting structure at the hypocentral location. The availability of three monitoring networks, including local and regional data, short-period and broadband seismometers, and surface and in-mine installations, gives an optimal setup to determine rupture parameters and to compare the performance and results from different installations. We perform waveform- and spectral-based analyses to infer source properties, with particular interest in the determination of the rupture processes, using different moment tensor inversion techniques. Our results are surprisingly different, ranging from a dominant thrust mechanism, resolved at the closest distances, to a collapse-type rupture, resolved at regional distances. We show that a complex rupture model is needed to explain all observations and justify these discrepancies. The final scenario indicates that the rupture nucleated as a weaker thrust mechanism, along a pre-existing weakened surface, and continued in a more energetic collapse event. The local surface LUMINEOS network has the potential to resolve both subevents, but not using a standard moment tensor decomposition. We propose here a new moment tensor decomposition and an alternative moment tensor fitting procedure, which can be used to analyze the moment tensors of collapse sources.

  4. Non-isothermal decomposition kinetics of diosgenin

    NASA Astrophysics Data System (ADS)

    Chen, Fei-xiong; Fu, Li; Feng, Lu; Liu, Chuo-chuo; Ren, Bao-zeng

    2013-10-01

    The thermal stability and kinetics of non-isothermal decomposition of diosgenin were studied by thermogravimetry (TG) and differential scanning calorimetry (DSC). The activation energy of the thermal decomposition process was determined from the analysis of TG curves by the methods of Flynn-Wall-Ozawa, Doyle, Šatava-Šesták and Kissinger, respectively. The mechanism of thermal decomposition was determined to be the Avrami-Erofeev equation (n = 1/3, where n is the reaction order) with integral form G(α) = [-ln(1 - α)]^(1/3) (α = 0.10-0.80). E_a and log A [s⁻¹] were determined to be 44.10 kJ·mol⁻¹ and 3.12, respectively. Moreover, the thermodynamic activation properties ΔH‡, ΔS‡, and ΔG‡ of this reaction were 38.18 kJ·mol⁻¹, -199.76 J·mol⁻¹·K⁻¹, and 164.36 kJ·mol⁻¹ in the stage of thermal decomposition.
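    The reported activation thermodynamics follow from the Arrhenius parameters via transition-state theory. A hedged check: E_a and log A are from the abstract, but the evaluation temperature (about 630 K) is our assumption, since the abstract does not state it; the computed values land close to, not exactly on, the reported numbers.

    ```python
    # Transition-state-theory estimate of the activation thermodynamics from
    # the reported Arrhenius parameters. E_a = 44.10 kJ/mol and
    # log10(A / s^-1) = 3.12 come from the abstract; T ~ 630 K is assumed.
    import math

    kB, h, R = 1.380649e-23, 6.62607015e-34, 8.314  # SI units
    T = 630.0            # K, assumed decomposition temperature
    Ea = 44.10e3         # J/mol
    A = 10 ** 3.12       # s^-1

    dH = Ea - R * T                               # dH‡ = Ea - RT (unimolecular)
    dS = R * (math.log(A * h / (kB * T)) - 1.0)   # from A = (e kB T / h) exp(dS‡/R)
    dG = dH - T * dS                              # dG‡ = dH‡ - T dS‡
    print(f"dH = {dH/1e3:.2f} kJ/mol, dS = {dS:.1f} J/(mol K), dG = {dG/1e3:.1f} kJ/mol")
    ```

    The strongly negative ΔS‡ simply reflects the unusually small prefactor (log A = 3.12 versus a "normal" ~13).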

  5. 2D Spinodal Decomposition in Forced Turbulence

    NASA Astrophysics Data System (ADS)

    Fan, Xiang; Diamond, Patrick; Chacon, Luis; Li, Hui

    2015-11-01

    Spinodal decomposition is a second-order phase transition of a binary fluid mixture, in which a single thermodynamic phase separates into two coexisting phases. The governing equation for this coarsening process below the critical temperature, the Cahn-Hilliard equation, is very similar to the 2D MHD equations; in particular, the conserved quantities of the two systems correspond closely, so theories developed for MHD turbulence can be used to study spinodal decomposition in forced turbulence. The domain size increases with time along with the inverse cascade, and the length scale can be arrested by forced turbulence with a direct cascade. The two competing mechanisms lead to a stabilized domain-size length scale, which can be characterized by the Hinze scale. The 2D spinodal decomposition in forced turbulence is studied by both theory and simulation with ``pixie2d.'' This work focuses on the relation between the Hinze scale and the spectra and cascades. Similarities and differences between spinodal decomposition and MHD are investigated, and some transport properties are studied following MHD theories. This work is supported by the Department of Energy under Award Number DE-FG02-04ER54738.
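    The coarsening dynamics described above can be illustrated with a minimal pseudo-spectral integration of the Cahn-Hilliard equation. This 1-D toy sketch (no turbulent forcing, parameters chosen for illustration) is not the pixie2d setup used in the work.

    ```python
    # Minimal 1-D pseudo-spectral Cahn-Hilliard integrator, illustrating the
    # spinodal coarsening discussed above. Semi-implicit in the stiff
    # kappa*k^4 term; no forcing. A toy sketch, not the pixie2d simulation.
    import numpy as np

    N, L, kappa, dt, steps = 256, 2 * np.pi, 0.01, 0.005, 400
    k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)    # angular wavenumbers
    k2, k4 = k**2, k**4

    rng = np.random.default_rng(1)
    c = 0.01 * rng.standard_normal(N)             # small fluctuation about c = 0
    c -= c.mean()                                 # zero-mean initial condition

    for _ in range(steps):
        # dc/dt = lap(c^3 - c - kappa * lap c); kappa*k^4 handled implicitly
        nonlin_hat = np.fft.fft(c**3 - c)
        c_hat = (np.fft.fft(c) - dt * k2 * nonlin_hat) / (1.0 + dt * kappa * k4)
        c = np.real(np.fft.ifft(c_hat))

    print("mean (conserved):", c.mean(), " std (grows, then saturates):", c.std())
    ```

    The k = 0 mode is untouched by the update, so the mean concentration — the conserved order parameter — is preserved exactly, mirroring the conservation structure the abstract exploits.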

  6. Positivity of linear maps under tensor powers

    NASA Astrophysics Data System (ADS)

    Müller-Hermes, Alexander; Reeb, David; Wolf, Michael M.

    2016-01-01

    We investigate linear maps between matrix algebras that remain positive under tensor powers, i.e., under tensoring with n copies of themselves. Completely positive and completely co-positive maps are trivial examples of this kind. We show that for every n ∈ ℕ, there exist non-trivial maps with this property and that for two-dimensional Hilbert spaces there is no non-trivial map for which this holds for all n. For higher dimensions, we reduce the existence question of such non-trivial "tensor-stable positive maps" to a one-parameter family of maps and show that an affirmative answer would imply the existence of non-positive partial transpose bound entanglement. As an application, we show that any tensor-stable positive map that is not completely positive yields an upper bound on the quantum channel capacity, which for the transposition map gives the well-known cb-norm bound. We, furthermore, show that the latter is an upper bound even for the local operations and classical communications-assisted quantum capacity, and that moreover it is a strong converse rate for this task.

  7. Spacetime encodings. III. Second order Killing tensors

    SciTech Connect

    Brink, Jeandrew

    2010-01-15

    This paper explores the Petrov type D, stationary axisymmetric vacuum (SAV) spacetimes that were found by Carter to have separable Hamilton-Jacobi equations, and thus admit a second-order Killing tensor. The derivation of the spacetimes presented in this paper borrows from ideas about dynamical systems, and illustrates concepts that can be generalized to higher-order Killing tensors. The relationship between the components of the Killing equations and metric functions are given explicitly. The origin of the four separable coordinate systems found by Carter is explained and classified in terms of the analytic structure associated with the Killing equations. A geometric picture of what the orbital invariants may represent is built. Requiring that a SAV spacetime admits a second-order Killing tensor is very restrictive, selecting very few candidates from the group of all possible SAV spacetimes. This restriction arises due to the fact that the consistency conditions associated with the Killing equations require that the field variables obey a second-order differential equation, as opposed to a fourth-order differential equation that imposes the weaker condition that the spacetime be SAV. This paper introduces ideas that could lead to the explicit computation of more general orbital invariants in the form of higher-order Killing tensors.

  8. An integral relation for tensor polynomials

    NASA Astrophysics Data System (ADS)

    Vshivtseva, P. A.; Denisov, V. I.; Denisova, I. P.

    2011-02-01

    We prove two lemmas and one theorem that allow integrating the product of an arbitrary number of unit vectors and the Legendre polynomials over a sphere of arbitrary radius. Such integral tensor products appear in solving inhomogeneous Helmholtz equations whose right-hand side is proportional to the product of a nonfixed number of unit vectors.

  9. Tensor squeezed limits and the Higuchi bound

    NASA Astrophysics Data System (ADS)

    Bordin, Lorenzo; Creminelli, Paolo; Mirbabayi, Mehrdad; Noreña, Jorge

    2016-09-01

    We point out that tensor consistency relations—i.e. the behavior of primordial correlation functions in the limit a tensor mode has a small momentum—are more universal than scalar consistency relations. They hold in the presence of multiple scalar fields and as long as anisotropies are diluted exponentially fast. When de Sitter isometries are approximately respected during inflation this is guaranteed by the Higuchi bound, which forbids the existence of light particles with spin: de Sitter space can support scalar hair but no curly hair. We discuss two indirect ways to look for the violation of tensor consistency relations in observations, as a signature of models in which inflation is not a strong isotropic attractor, such as solid inflation: (a) graviton exchange contribution to the scalar four-point function; (b) quadrupolar anisotropy of the scalar power spectrum due to super-horizon tensor modes. This anisotropy has a well-defined statistics which can be distinguished from cases in which the background has a privileged direction.

  10. Radiation Forces and Torques without Stress (Tensors)

    ERIC Educational Resources Information Center

    Bohren, Craig F.

    2011-01-01

    To understand radiation forces and torques or to calculate them does not require invoking photon or electromagnetic field momentum transfer or stress tensors. According to continuum electromagnetic theory, forces and torques exerted by radiation are a consequence of electric and magnetic fields acting on charges and currents that the fields induce…

  11. Quantum tensor product structures are observable induced.

    PubMed

    Zanardi, Paolo; Lidar, Daniel A; Lloyd, Seth

    2004-02-13

    It is argued that the partition of a quantum system into subsystems is dictated by the set of operationally accessible interactions and measurements. The emergence of a multipartite tensor product structure of the state space and the associated notion of quantum entanglement are then relative and observable induced. We develop a general algebraic framework aimed to formalize this concept.

  12. Black holes in scalar-tensor gravity.

    PubMed

    Sotiriou, Thomas P; Faraoni, Valerio

    2012-02-24

    Hawking has proven that black holes which are stationary as the end point of gravitational collapse in Brans-Dicke theory (without a potential) are no different from those in general relativity. We extend this proof to the much more general class of scalar-tensor and f(R) gravity theories, without assuming any symmetries apart from stationarity.

  13. Cosmic Ray Diffusion Tensor Throughout the Heliosphere

    NASA Astrophysics Data System (ADS)

    Pei, C.; Bieber, J. W.; Breech, B.; Burger, R. A.; Clem, J.; Matthaeus, W. H.

    2008-12-01

    We calculate the cosmic ray diffusion tensor based on a recently developed model of magnetohydrodynamic (MHD) turbulence in the expanding solar wind [Breech et al., 2008]. Parameters of this MHD model are tuned by using published observations from Helios, Voyager 2, and Ulysses. We present solutions of two turbulence parameter sets and derive the characteristics of the cosmic ray diffusion tensor for each. We determine the parallel diffusion coefficient of the cosmic ray following the method presented in Bieber et al. [1995]. We use the nonlinear guiding center (NLGC) theory to obtain the perpendicular diffusion coefficient of the cosmic ray [Matthaeus et al., 2003]. We find that (1) the radial mean free path decreases from 1 AU to 20 AU for both turbulence scenarios; (2) after 40 AU the radial mean free path is nearly constant; (3) the radial mean free path is dominated by the parallel component before 20 AU, after which the perpendicular component becomes important; (4) the rigidity P dependence of the parallel component of the diffusion tensor is proportional to P^0.404 for one turbulence scenario and P^0.374 for the other at 1 AU from 0.1 GV to 10 GV, but in the outer heliosphere its dependence becomes stronger above 4 GV; (5) the rigidity P dependence of the perpendicular component of the diffusion tensor is very weak. Supported by NASA Heliophysics Guest Investigator grant NNX07AH73G and by NASA Heliophysics Theory grant NNX08AI47G.

  14. Nonlinear symmetries on spaces admitting Killing tensors

    NASA Astrophysics Data System (ADS)

    Visinescu, Mihai

    2010-04-01

    Nonlinear symmetries corresponding to Killing tensors are investigated. The intimate relation between Killing-Yano tensors and non-standard supersymmetries is pointed out. The gravitational anomalies are absent if the hidden symmetry is associated with a Killing-Yano tensor. In the case of nonlinear symmetries, the dynamical algebras of the Dirac-type operators are more involved and can be organized as infinite-dimensional algebras or superalgebras. The general results are applied to some concrete spaces involved in theories of modern physics. As a first example, we consider the 4-dimensional Euclidean Taub-NUT space and its generalizations introduced by Iwai and Katayama. We present the infinite-dimensional superalgebra of Dirac-type operators on Taub-NUT space, which can be seen as a graded loop superalgebra of the Kac-Moody type. The axial anomaly, interpreted as the index of the Dirac operator, is computed for the generalized Taub-NUT metrics. Finally, the existence of conformal Killing-Yano tensors is investigated for some spaces with mixed Sasakian structures.

  15. Erbium hydride decomposition kinetics.

    SciTech Connect

    Ferrizz, Robert Matthew

    2006-11-01

    Thermal desorption spectroscopy (TDS) is used to study the decomposition kinetics of erbium hydride thin films. The TDS results presented in this report are analyzed quantitatively using Redhead's method to yield kinetic parameters (E_A ≈ 54.2 kcal/mol), which are then utilized to predict hydrogen outgassing in vacuum for a variety of thermal treatments. Interestingly, it was found that the activation energy for desorption can vary by more than 7 kcal/mol (0.30 eV) for seemingly similar samples. In addition, small amounts of less-stable hydrogen were observed for all erbium dihydride films. A detailed explanation of several approaches for analyzing thermal desorption spectra to obtain kinetic information is included as an appendix.
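    Redhead's first-order formula relates the TDS peak temperature to the desorption activation energy. A hedged sketch: the peak temperature, attempt frequency, and heating rate below are illustrative values chosen to land near the reported E_A, not the report's measured data.

    ```python
    # Redhead first-order estimate of the desorption activation energy from
    # a TDS peak temperature: E = R * Tp * (ln(nu * Tp / beta) - 3.46).
    # Tp, nu, and beta below are illustrative, not the report's values.
    import math

    R = 1.987e-3  # gas constant, kcal/(mol K)

    def redhead_energy(Tp, nu=1e13, beta=1.0):
        """Activation energy (kcal/mol) for a first-order desorption peak
        at temperature Tp (K), attempt frequency nu (1/s), ramp beta (K/s)."""
        return R * Tp * (math.log(nu * Tp / beta) - 3.46)

    Tp = 820.0  # K, hypothetical peak temperature
    print(f"E_A ~ {redhead_energy(Tp):.1f} kcal/mol")
    ```

    Because E depends only logarithmically on the assumed prefactor, modest uncertainty in nu shifts E by a few kcal/mol — consistent with the sample-to-sample spread noted in the abstract.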

  16. Direct Sum Decomposition of Groups

    ERIC Educational Resources Information Center

    Thaheem, A. B.

    2005-01-01

    Direct sum decomposition of Abelian groups appears in almost all textbooks on algebra for undergraduate students. This concept plays an important role in group theory. One simple example of this decomposition is obtained by using the kernel and range of a projection map on an Abelian group. The aim in this pedagogical note is to establish a direct…

  17. Role of Tensor Force in Light Nuclei Based on the Tensor Optimized Shell Model

    SciTech Connect

    Myo, Takayuki; Umeya, Atsushi; Ikeda, Kiyomi; Valverde, Manuel; Toki, Hiroshi

    2011-10-21

    We propose a new theoretical approach to describe nuclei using a bare nuclear interaction, in which the tensor and short-range correlations are described with the tensor-optimized shell model (TOSM) and the unitary correlation operator method (UCOM), respectively. We show the results obtained for He isotopes using TOSM+UCOM, such as the importance of the pn pair correlated by the tensor force, and the structure differences in the LS partners, the 3/2⁻ and 1/2⁻ states of ⁵He. We also apply TOSM to the analysis of the two-neutron halo nucleus ¹¹Li, on the basis of a "core described in TOSM"+n+n model. The halo formation of ¹¹Li is naturally explained, in which the tensor correlation in the ⁹Li core is Pauli-blocked by the p-wave neutrons in ¹¹Li and the s-wave component of the halo structure is enhanced.

  18. Application of the base catalyzed decomposition process to treatment of PCB-contaminated insulation and other materials associated with US Navy vessels. Final report

    SciTech Connect

    Schmidt, A.J.; Zacher, A.H.; Gano, S.R.

    1996-09-01

    The BCD process was applied to dechlorination of two types of PCB-contaminated materials generated from Navy vessel decommissioning activities at Puget Sound Naval Shipyard: insulation of wool felt impregnated with PCB, and PCB-containing paint chips/debris from removal of paint from metal surfaces. The BCD process is a two-stage, low-temperature chemical dehalogenation process. In Stage 1, the materials are mixed with sodium bicarbonate and heated to 350 C. The volatilized halogenated contaminants (e.g., PCBs, dioxins, furans), which are collected in a small volume of particulates and granular activated carbon, are decomposed by the liquid-phase reaction (Stage 2) in a stirred-tank reactor, using a high-boiling-point hydrocarbon oil as the reaction medium, with addition of a hydrogen donor, a base (NaOH), and a catalyst. The tests showed that treating wool felt insulation and paint chip wastes with Stage 2 on a large scale is feasible, but compared with current disposal costs for PCB-contaminated materials, using Stage 2 would not be economical at this time. For paint chips generated from shot/sand blasting, the solid-phase BCD process (Stage 1) should be considered, if paint removal activities are accelerated in the future.

  19. The operator tensor formulation of quantum theory.

    PubMed

    Hardy, Lucien

    2012-07-28

    In this paper, we provide what might be regarded as a manifestly covariant presentation of discrete quantum theory. A typical quantum experiment has a bunch of apparatuses placed so that quantum systems can pass between them. We regard each use of an apparatus, along with some given outcome on the apparatus (a certain detector click or a certain meter reading, for example), as an operation. An operation (e.g. B(b(2)a(3))(a(1))) can have zero or more quantum systems inputted into it and zero or more quantum systems outputted from it. The operation B(b(2)a(3))(a(1)) has one system of type a inputted, and one system of type b and one system of type a outputted. We can wire together operations to form circuits, for example, A(a(1))B(b(2)a(3))(a(1))C(b(2)a(3)). Each repeated integer label here denotes a wire connecting an output to an input of the same type. As each operation in a circuit has an outcome associated with it, a circuit represents a set of outcomes that can happen in a run of the experiment. In the operator tensor formulation of quantum theory, each operation corresponds to an operator tensor. For example, the operation B(b(2)a(3))(a(1)) corresponds to the operator tensor B̂(b(2)a(3))(a(1)). Further, the probability for a general circuit is given by replacing operations with the corresponding operator tensors, as in Prob(A(a(1))B(b(2)a(3))(a(1))C(b(2)a(3))) = Â(a(1))B̂(b(2)a(3))(a(1))Ĉ(b(2)a(3)). Repeated integer labels indicate that we multiply in the associated subspace and then take the partial trace over that subspace. Operator tensors must be physical (namely, they must have positive input transpose and satisfy a certain normalization condition).

  20. A preliminary report on the development of MATLAB tensor classes for fast algorithm prototyping.

    SciTech Connect

    Bader, Brett William; Kolda, Tamara Gibson

    2004-07-01

    We describe three MATLAB classes for manipulating tensors in order to allow fast algorithm prototyping. A tensor is a multidimensional or N-way array. We present a tensor class for manipulating tensors which allows for tensor multiplication and 'matricization.' We have further added two classes for representing tensors in decomposed format: cp_tensor and tucker_tensor. We demonstrate the use of these classes by implementing several algorithms that have appeared in the literature.
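    Matricization — arranging the mode-n fibers of a tensor as the columns of a matrix — is the core operation such a tensor class provides. A NumPy sketch of the idea (not the MATLAB toolbox API; the function name is ours):

    ```python
    # Mode-n matricization ("unfolding") of a dense tensor: the mode-n
    # fibers become the columns of a matrix. A NumPy illustration of the
    # operation the MATLAB tensor class calls 'matricization'.
    import numpy as np

    def unfold(X, mode):
        """Matrix whose rows index the chosen mode and whose columns
        enumerate all remaining modes."""
        return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

    X = np.arange(24).reshape(2, 3, 4)
    print(unfold(X, 0).shape, unfold(X, 1).shape, unfold(X, 2).shape)
    ```

    Each unfolding has shape (size of that mode, product of the other mode sizes), which is exactly what CP and Tucker algorithms operate on.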

  1. Hierarchical decomposition model for reconfigurable architecture

    NASA Astrophysics Data System (ADS)

    Erdogan, Simsek; Wahab, Abdul

    1996-10-01

    This paper introduces a systematic approach for abstract modeling of VLSI digital systems using a hierarchical decomposition process and HDL. In particular, the modeling of the back propagation neural network on a massively parallel reconfigurable hardware is used to illustrate the design process rather than toy examples. Based on the design specification of the algorithm, a functional model is developed through successive refinement and decomposition for execution on the reconfigurable machine. First, a top-level block diagram of the system is derived. Then, a schematic sheet of the corresponding structural model is developed to show the interconnections of the main functional building blocks. Next, the functional blocks are decomposed iteratively as required. Finally, the blocks are modeled using HDL and verified against the block specifications.

  2. Wood decomposition as influenced by invertebrates.

    PubMed

    Ulyshen, Michael D

    2016-02-01

    The diversity and habitat requirements of invertebrates associated with dead wood have been the subjects of hundreds of studies in recent years but we still know very little about the ecological or economic importance of these organisms. The purpose of this review is to examine whether, how and to what extent invertebrates affect wood decomposition in terrestrial ecosystems. Three broad conclusions can be reached from the available literature. First, wood decomposition is largely driven by microbial activity but invertebrates also play a significant role in both temperate and tropical environments. Primary mechanisms include enzymatic digestion (involving both endogenous enzymes and those produced by endo- and ectosymbionts), substrate alteration (tunnelling and fragmentation), biotic interactions and nitrogen fertilization (i.e. promoting nitrogen fixation by endosymbiotic and free-living bacteria). Second, the effects of individual invertebrate taxa or functional groups can be accelerative or inhibitory but the cumulative effect of the entire community is generally to accelerate wood decomposition, at least during the early stages of the process (most studies are limited to the first 2-3 years). Although methodological differences and design limitations preclude meta-analysis, studies aimed at quantifying the contributions of invertebrates to wood decomposition commonly attribute 10-20% of wood loss to these organisms. Finally, some taxa appear to be particularly influential with respect to promoting wood decomposition. These include large wood-boring beetles (Coleoptera) and termites (Termitoidae), especially fungus-farming macrotermitines. The presence or absence of these species may be more consequential than species richness and the influence of invertebrates is likely to vary biogeographically.

  3. Decomposition of Amino Diazeniumdiolates (NONOates): Molecular Mechanisms

    SciTech Connect

    Shaikh, Nizamuddin; Valiev, Marat; Lymar, Sergei V.

    2014-08-23

    Although diazeniumdiolates (X[N(O)NO]⁻) are extensively used in biochemical, physiological, and pharmacological studies due to their ability to slowly release NO and/or its congeneric nitroxyl, the mechanisms of these processes remain obscure. In this work, we used a combination of spectroscopic, kinetic, and computational techniques to arrive at a qualitatively consistent molecular mechanism for decomposition of amino diazeniumdiolates (amino NONOates: R2N[N(O)NO]⁻, where R = -N(C2H5)2 (1), -N(C3H4NH2)2 (2), or -N(C2H4NH2)2 (3)). Decomposition of these NONOates is triggered by protonation of their [NN(O)NO]⁻ group, with apparent pKa and decomposition rate constants of 4.6 and 1 s⁻¹ for 1-H, 3.5 and 83 × 10⁻³ s⁻¹ for 2-H, and 3.8 and 3.3 × 10⁻³ s⁻¹ for 3-H. Although protonation occurs mainly on the O atoms of the functional group, only the minor R2N(H)N(O)NO tautomer (population ~0.01% for 1) undergoes the N-N heterolytic bond cleavage (k ~10² s⁻¹ for 1) leading to amine and NO. Decompositions of protonated amino NONOates are strongly temperature-dependent; activation enthalpies are 20.4 and 19.4 kcal/mol for 1 and 2, respectively, which includes contributions from both the tautomerization and the bond cleavage. The bond cleavage rates exhibit exceptional sensitivity to the nature of the R substituents, which strongly modulate the activation entropy. At pH < 2, decompositions of all these NONOates are subject to additional acid catalysis that occurs through di-protonation of the [NN(O)NO]⁻ group.
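    The protonation-triggered kinetics imply a simple pH dependence of the observed rate. A hedged sketch assuming a one-proton pre-equilibrium (and ignoring the additional acid catalysis reported below pH 2); the pKa and limiting rate for compound 1 are taken from the abstract.

    ```python
    # Observed first-order decomposition rate vs pH for a protonation-
    # triggered pathway, assuming a simple one-proton pre-equilibrium.
    # The extra acid catalysis reported below pH 2 is deliberately ignored.
    # pKa = 4.6 and k_max = 1 s^-1 are the abstract's values for NONOate 1.
    def k_obs(pH, pKa=4.6, k_max=1.0):
        """Observed decomposition rate constant (1/s) at a given pH."""
        H = 10.0 ** (-pH)      # proton concentration
        Ka = 10.0 ** (-pKa)    # apparent acid dissociation constant
        return k_max * H / (H + Ka)

    for pH in (3.0, 4.6, 7.4):
        print(f"pH {pH}: k_obs = {k_obs(pH):.3g} 1/s")
    ```

    At pH = pKa the rate is half-maximal, and at physiological pH 7.4 the release is nearly three orders of magnitude slower — the slow-release behavior the abstract describes.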

  4. Decomposition of amino diazeniumdiolates (NONOates): Molecular mechanisms

    DOE PAGES

    Shaikh, Nizamuddin; Valiev, Marat; Lymar, Sergei V.

    2014-08-23

    Although diazeniumdiolates (X[N(O)NO]⁻) are extensively used in biochemical, physiological, and pharmacological studies due to their ability to release NO and/or its congeneric nitroxyl, the mechanisms of these processes remain obscure. In this work, we used a combination of spectroscopic, kinetic, and computational techniques to arrive at a quantitatively consistent molecular mechanism for decomposition of amino diazeniumdiolates (amino NONOates: R2N[N(O)NO]⁻, where R = -N(C2H5)2 (1), -N(C3H4NH2)2 (2), or -N(C2H4NH2)2 (3)). Decomposition of these NONOates is triggered by protonation of their [NN(O)NO]⁻ group, with apparent pKa and decomposition rate constants of 4.6 and 1 s⁻¹ for 1; 3.5 and 0.083 s⁻¹ for 2; and 3.8 and 0.0033 s⁻¹ for 3. Although protonation occurs mainly on the O atoms of the functional group, only the minor R2N(H)N(O)NO tautomer (population ~10⁻⁷ for 1) undergoes the N-N heterolytic bond cleavage (k_d ~10⁷ s⁻¹ for 1) leading to amine and NO. Decompositions of protonated amino NONOates are strongly temperature-dependent; activation enthalpies are 20.4 and 19.4 kcal/mol for 1 and 2, respectively, which includes contributions from both the tautomerization and the bond cleavage. Thus, the bond cleavage rates exhibit exceptional sensitivity to the nature of the R substituents, which strongly modulate the activation entropy. At pH < 2, decompositions of all three NONOates investigated are subject to additional acid catalysis that occurs through di-protonation of the [NN(O)NO]⁻ group.

  5. Identifying Multi-Dimensional Co-Clusters in Tensors Based on Hyperplane Detection in Singular Vector Spaces

    PubMed Central

    Liu, Xinyu; Yan, Hong

    2016-01-01

    Co-clustering, often called biclustering for two-dimensional data, has found many applications, such as gene expression data analysis and text mining. Nowadays, a variety of multi-dimensional arrays (tensors) frequently occur in data analysis tasks, and co-clustering techniques play a key role in dealing with such datasets. Co-clusters represent coherent patterns and exhibit important properties along all the modes. Development of robust co-clustering techniques is important for the detection and analysis of these patterns. In this paper, a co-clustering method based on hyperplane detection in singular vector spaces (HDSVS) is proposed. Specifically, in this method, higher-order singular value decomposition (HOSVD) transforms a tensor into a core part and a singular vector matrix along each mode, whose row vectors can be clustered by a linear grouping algorithm (LGA). Meanwhile, hyperplanar patterns are extracted and successfully support the identification of multi-dimensional co-clusters. To validate HDSVS, a number of synthetic and biological tensors were adopted. Tests on the synthetic tensors attested to the favorable performance of this algorithm on noisy or overlapping data. Experiments with gene expression data and lineage data of embryonic cells further verified the reliability of HDSVS on practical problems. Moreover, the detected co-clusters are consistent with important genetic pathways and gene ontology annotations. Finally, a series of comparisons between HDSVS and state-of-the-art methods on synthetic tensors and a yeast gene expression tensor were implemented, verifying the robust and stable performance of our method. PMID:27598575

  6. Identifying Multi-Dimensional Co-Clusters in Tensors Based on Hyperplane Detection in Singular Vector Spaces.

    PubMed

    Zhao, Hongya; Wang, Debby D; Chen, Long; Liu, Xinyu; Yan, Hong

    2016-01-01

    Co-clustering, often called biclustering for two-dimensional data, has found many applications, such as gene expression data analysis and text mining. Nowadays, a variety of multi-dimensional arrays (tensors) frequently occur in data analysis tasks, and co-clustering techniques play a key role in dealing with such datasets. Co-clusters represent coherent patterns and exhibit important properties along all the modes. Development of robust co-clustering techniques is important for the detection and analysis of these patterns. In this paper, a co-clustering method based on hyperplane detection in singular vector spaces (HDSVS) is proposed. Specifically, higher-order singular value decomposition (HOSVD) transforms a tensor into a core part and a singular vector matrix along each mode, whose row vectors can be clustered by a linear grouping algorithm (LGA). Meanwhile, hyperplanar patterns are extracted to support the identification of multi-dimensional co-clusters. To validate HDSVS, a number of synthetic and biological tensors were adopted. Experiments on the synthetic tensors confirmed the algorithm's favorable performance on noisy or overlapping data. Experiments with gene expression data and lineage data of embryonic cells further verified the reliability of HDSVS on practical problems. Moreover, the detected co-clusters are consistent with important genetic pathways and gene ontology annotations. Finally, a series of comparisons between HDSVS and state-of-the-art methods on synthetic tensors and a yeast gene expression tensor was carried out, verifying the robust and stable performance of our method. PMID:27598575

  7. Uncertainty Quantification via Random Domain Decomposition and Probabilistic Collocation on Sparse Grids

    SciTech Connect

    Lin, Guang; Tartakovsky, Alexandre M.; Tartakovsky, Daniel M.

    2010-09-01

    Due to lack of knowledge or insufficient data, many physical systems are subject to uncertainty. Such uncertainty occurs on a multiplicity of scales. In this study, we conduct an uncertainty analysis of diffusion in random composites with two dominant scales of uncertainty: large-scale uncertainty in the spatial arrangement of materials and small-scale uncertainty in the parameters within each material. A general two-scale framework is presented that combines random domain decomposition (RDD) with the probabilistic collocation method (PCM) on sparse grids to quantify the large and small scales of uncertainty, respectively. Using sparse grid points instead of standard grids based on full tensor products for both scales of uncertainty greatly reduces the overall computational cost, especially for random processes with small correlation lengths (a large number of random dimensions). For a one-dimensional random contact point problem and a random inclusion problem, analytical solutions and Monte Carlo simulations, respectively, were used to verify the accuracy of the combined RDD-PCM approach. Additionally, we applied the combined RDD-PCM approach to two- and three-dimensional examples to demonstrate that it provides efficient, robust and nonintrusive approximations for the statistics of diffusion in random composites.
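The curse of dimensionality that motivates sparse grids is easy to demonstrate: a full tensor-product collocation grid grows as n^d in the number of random dimensions. A minimal sketch (the sparse Smolyak-type construction itself is not reproduced here):

```python
import numpy as np

def full_tensor_grid(nodes_1d, dim):
    """Full tensor-product collocation grid: every combination of the 1-D
    nodes across `dim` random dimensions, so the point count is n**dim."""
    grids = np.meshgrid(*([nodes_1d] * dim), indexing="ij")
    return np.stack([g.ravel() for g in grids], axis=1)

nodes = np.polynomial.legendre.leggauss(5)[0]  # 5 Gauss-Legendre points per dimension
for d in (2, 4, 8):
    print(d, len(full_tensor_grid(nodes, d)))  # 25, 625, 390625
```

A sparse grid keeps only a carefully chosen subset of these combinations, which is what makes collocation affordable when a short correlation length forces a large number of random dimensions.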

  8. Heuristic decomposition for non-hierarchic systems

    NASA Technical Reports Server (NTRS)

    Bloebaum, Christina L.; Hajela, P.

    1991-01-01

    Design and optimization are substantially more complex in multidisciplinary and large-scale engineering applications because of the inherently coupled interactions among disciplines. The paper introduces a quasi-procedural methodology for multidisciplinary optimization that is applicable to nonhierarchic systems. The necessary decision-making support for the design process is provided by means of an embedded expert-systems capability. The method employs a decomposition approach whose modularity allows for implementation of specialized methods for analysis and optimization within disciplines.

  9. A multidimensional signal processing approach for classification of microwave measurements with application to stroke type diagnosis.

    PubMed

    Mesri, Hamed Yousefi; Najafabadi, Masoud Khazaeli; McKelvey, Tomas

    2011-01-01

    A multidimensional signal processing method is described for detection of bleeding stroke based on microwave measurements from an antenna array placed around the head of the patient. The method is data driven, and the algorithm uses samples from a healthy control group to calculate the feature used for classification. The feature is derived using a tensor approach in which the higher-order singular value decomposition is a key component. A leave-one-out validation scheme is used to evaluate the method on clinical data.

  10. Thermal Decomposition of Copper (II) Calcium (II) Formate

    NASA Astrophysics Data System (ADS)

    Leyva, A. G.; Polla, G.; de Perazzo, P. K.; Lanza, H.; de Benyacar, M. A. R.

    1996-05-01

    The presence of different stages in the thermal decomposition process of CuCa(HCOO)4 has been established by means of TGA at different heating rates, X-ray powder diffraction of quenched samples, and DSC methods. During the first stage, decomposition of one of the two copper formate structural units contained in the unit cell takes place. The presence of CuCa2(HCOO)6 has been detected. Calcium formate structural units break down at higher temperatures; the last decomposition peak corresponds to the appearance of different calcium-copper oxides.

  11. Modeling of the Process of Filling a Dome Separator with the Decomposition of a Gas Hydrate Formed During the Mounting of the Installation

    NASA Astrophysics Data System (ADS)

    Chiglintsev, I. A.; Nasyrov, A. A.

    2016-07-01

    Consideration is given to the theoretical foundations of operation of a dome separator designed to collect and subsequently ship gas and oil emissions in the case of fracturing of a well near deep-water reservoirs, where thermobaric conditions are favorable for the formation of a gas hydrate. A mathematical model has been constructed that describes the process of filling the dome with hydrocarbons and pumping them out of it under hydrate-formation conditions. The dynamics of the change in the phase temperature in the dome has been described.

  12. Thermal Decomposition Kinetics of HMX

    SciTech Connect

    Burnham, A K; Weese, R K

    2004-11-18

    Nucleation-growth kinetic expressions are derived for thermal decomposition of HMX from a variety of thermal analysis data types, including mass loss for isothermal and constant rate heating in an open pan and heat flow for isothermal and constant rate heating in open and closed pans. Conditions are identified in which thermal runaway is small to nonexistent, which typically means temperatures less than 255 C and heating rates less than 1 C/min. Activation energies are typically in the 140 to 165 kJ/mol range for open pan experiments and about 150 to 165 kJ/mol for sealed pan experiments. Our activation energies tend to be slightly lower than those derived from data supplied by the University of Utah, which we consider the best previous thermal analysis work. The reaction clearly displays more than one process, and most likely three processes, which are most clearly evident in open pan experiments. The reaction is accelerated in closed pan experiments, and one global reaction appears to fit the data well. Comparison of our rate measurements with additional literature sources for open and closed low temperature pyrolysis from Sandia gives a likely activation energy of 165 kJ/mol at 10% conversion.
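The practical meaning of an activation energy near 165 kJ/mol can be illustrated with the Arrhenius relation k = A * exp(-Ea / RT); the prefactor A below is purely illustrative (it cancels in the ratio), not a fitted HMX value:

```python
import math

R = 8.314  # gas constant, J/(mol K)

def arrhenius(A, Ea, T):
    """Arrhenius rate constant k = A * exp(-Ea / (R*T))."""
    return A * math.exp(-Ea / (R * T))

Ea = 165e3  # J/mol, the ~165 kJ/mol reported at 10% conversion
T1, T2 = 500.0, 510.0  # kelvin (~227 C and ~237 C)
ratio = arrhenius(1.0, Ea, T2) / arrhenius(1.0, Ea, T1)
print(round(ratio, 2))  # a 10 K rise roughly doubles the rate at this Ea
```

This steep temperature sensitivity is why the abstract's thermal-runaway conditions (temperature and heating-rate limits) matter for obtaining clean kinetic data.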

  13. Thermal Decomposition Kinetics of HMX

    SciTech Connect

    Burnham, A K; Weese, R K

    2005-03-17

    Nucleation-growth kinetic expressions are derived for thermal decomposition of HMX from a variety of types of data, including mass loss for isothermal and constant rate heating in an open pan, and heat flow for isothermal and constant rate heating in open and closed pans. Conditions are identified in which thermal runaway is small to nonexistent, which typically means temperatures less than 255 C and heating rates less than 1 C/min. Activation energies are typically in the 140 to 165 kJ/mol regime for open pan experiments and about 150-165 kJ/mol for sealed-pan experiments. The reaction clearly displays more than one process, and most likely three processes, which are most clearly evident in open pan experiments. The reaction is accelerated for closed pan experiments, and one global reaction fits the data fairly well. Our A-E values lie in the middle of the values given in a compensation-law plot by Brill et al. (1994). Comparison with additional open and closed low temperature pyrolysis experiments support an activation energy of 165 kJ/mol at 10% conversion.

  14. Decomposition in northern Minnesota peatlands

    SciTech Connect

    Farrish, K.W.

    1985-01-01

    Decomposition in peatlands was investigated in northern Minnesota. Four sites, an ombrotrophic raised bog, an ombrotrophic perched bog and two groundwater minerotrophic fens, were studied. Decomposition rates of peat and paper were estimated using mass-loss techniques. Environmental and substrate factors that were most likely to be responsible for limiting decomposition were monitored. Laboratory incubation experiments complemented the field work. Mass-loss over one year in one of the bogs ranged from 11 percent in the upper 10 cm of hummocks to 1 percent at 60 to 100 cm depth in hollows. Regression analysis of the data for that bog predicted no mass-loss below 87 cm. Decomposition estimates on an area basis were 2720 and 6460 kg/ha yr for the two bogs, and 17,000 and 5900 kg/ha yr for the two fens. Environmental factors found to limit decomposition in these peatlands were reducing/anaerobic conditions below the water table and cool peat temperatures. Substrate factors found to limit decomposition were low pH, high content of resistant organics such as lignin, and shortages of available N and K. Greater groundwater influence was found to favor decomposition by raising the pH and perhaps by introducing limited amounts of dissolved oxygen.
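Mass-loss percentages like those above are commonly converted to first-order decomposition constants via m/m0 = exp(-k t). The study reports percent mass loss rather than rate constants, so the k values below are an illustrative re-expression, not the authors' numbers:

```python
import math

def decay_constant(mass_remaining_frac, years):
    """Single-exponential litter-decomposition constant from mass-loss data:
    m/m0 = exp(-k t)  =>  k = -ln(m/m0) / t  (per year)."""
    return -math.log(mass_remaining_frac) / years

# 11% mass loss in one year (upper hummock peat) vs 1% (deep hollow peat)
k_fast = decay_constant(0.89, 1.0)
k_slow = decay_constant(0.99, 1.0)
print(round(k_fast, 4), round(k_slow, 4))  # roughly an order of magnitude apart
```

For small losses k is close to the fractional loss itself, but the exponential form is what allows rates measured over different incubation periods to be compared.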

  15. [Leaf litter decomposition in six Cloud Forest streams of the upper La Antigua watershed, Veracruz, Mexico].

    PubMed

    Astudillo, Manuel R; Ramírez, Alonso; Novelo-Gutiérrez, Rodolfo; Vázquez, Gabriela

    2014-04-01

    Leaf litter decomposition is an important stream ecosystem process. To understand factors controlling leaf decomposition in cloud forest in Mexico, we incubated leaf packs in different streams along a land use cover gradient for 35 days during the dry and wet seasons. We assessed relations between leaf decomposition rates (k), stream physicochemistry, and macroinvertebrates colonizing leaf packs. Physicochemical parameters showed a clear seasonal difference at all study streams. Leaves were colonized by collector-gatherer insects, followed by shredders. Assessment of factors related to k indicated that only forest cover was negatively related to leaf decomposition rates. Thus stream physicochemistry and seasonality had no impact on decomposition rates. We concluded that leaf litter decomposition at our study streams is a stable process over the year. However, it is possible that this stability is the result of factors regulating decomposition during the different seasons and streams.

  16. Boundary interpretation of gravity gradient tensor data by enhanced directional total horizontal derivatives

    NASA Astrophysics Data System (ADS)

    Yuan, Y.

    2015-12-01

    Boundary identification is a routine task in the interpretation of potential-field data, which is widely used as a tool in exploration for mineral resources. The main geological edges are fault lines and the borders of geological or rock bodies that differ in density, magnetic properties, and so on. Gravity gradient tensor data have been widely used in geophysical exploration because of their large information content; they contain higher-frequency signals than gravity data and can therefore delineate small-scale anomalies. Combining multiple components of the gradient tensor in interpretation, however, is a challenge that calls for new edge detectors designed for gravity gradient tensor data. To make use of the information in multiple components, we first define directional total horizontal derivatives and enhanced directional total horizontal derivatives, and use them to build new edge detectors. To display the edges of anomalies of different amplitudes simultaneously, we present a normalization method. These methods have been tested on synthetic data to verify that they can clearly delineate the edges of anomalies of different amplitudes and avoid introducing false edges when the data contain both positive and negative anomalies. Finally, we apply these methods to real full gravity gradient tensor data from St. Georges Bay, Canada, with good results.
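As background, the classic single-component total horizontal derivative (THDR) edge detector, which the directional variants above generalize, can be sketched as follows; the enhanced directional detectors themselves are not reproduced here:

```python
import numpy as np

def total_horizontal_derivative(field, dx=1.0, dy=1.0):
    """Classic THDR edge detector: sqrt((dT/dx)^2 + (dT/dy)^2).
    Peaks of this quantity track the horizontal edges of the source."""
    gy, gx = np.gradient(field, dy, dx)  # axis 0 = y, axis 1 = x
    return np.hypot(gx, gy)

# Synthetic anomaly: a step edge in x; THDR peaks over the boundary.
x = np.linspace(-1, 1, 101)
field = np.tile((x > 0).astype(float), (101, 1))
thdr = total_horizontal_derivative(field)
print(int(np.argmax(thdr[50])))  # maximum sits at the step, near column 50
```

A detector built this way from a single component is the baseline; the paper's contribution is combining all measured tensor components and normalizing so that weak and strong edges appear together.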

  17. Assessing the Uncertainties on Seismic Source Parameters: Towards Realistic Estimates of Moment Tensor Determinations

    NASA Astrophysics Data System (ADS)

    Magnoni, F.; Scognamiglio, L.; Tinti, E.; Casarotti, E.

    2014-12-01

    The seismic moment tensor is one of the most important source parameters, defining the earthquake size and the style of the activated fault. Moment tensor catalogues are routinely used by geoscientists; however, few attempts have been made to assess the possible impact of moment magnitude uncertainties on their analyses. The 2012 May 20 Emilia mainshock is a representative event, since the moment magnitude (Mw) values reported in the literature span between 5.63 and 6.12. An uncertainty of ~0.5 magnitude units leads to a controversial knowledge of the real size of the event. The uncertainty associated with this estimate could be critical for the inference of other seismological parameters, suggesting caution for seismic hazard assessment, Coulomb stress transfer determination and other analyses where self-consistency is important. In this work, we focus on the variability of the moment tensor solution, highlighting the effect of four different velocity models, different types and ranges of filtering, and two different methodologies. Using a larger dataset, to better quantify the source parameter uncertainty, we also analyze the variability of the moment tensor solutions depending on the number, epicentral distance and azimuth of the stations used. We stress that the estimate of seismic moment from moment tensor solutions, like the estimates of the other kinematic source parameters, cannot be considered an absolute value: it must be reported with its uncertainties, in a reproducible framework characterized by disclosed assumptions and explicit processing workflows.

  18. Towards metal detection and identification for humanitarian demining using magnetic polarizability tensor spectroscopy

    NASA Astrophysics Data System (ADS)

    Dekdouk, B.; Ktistis, C.; Marsh, L. A.; Armitage, D. W.; Peyton, A. J.

    2015-11-01

    This paper presents an inversion procedure to estimate the location and magnetic polarizability tensor of metal targets from broadband electromagnetic induction (EMI) data. The solution of this inversion produces a spectral target signature, which may be used to distinguish metal targets in landmines from harmless clutter. In this process, the response of the metal target is modelled with a dipole moment and fitted to planar EMI data by solving a least-squares minimization problem. A computer simulation platform has been developed using a modelled EMI sensor to produce synthetic data for inversion. The reconstructed tensor is compared with an assumed true solution estimated using a modelled tri-axial Helmholtz coil array. For test examples including a sphere, which has a known analytical solution, results show the inversion routine produces tensors accurate to within 12% of the true tensor. A good convergence rate is also demonstrated even when the target location is mis-estimated by a few centimeters. Having verified the inversion routine using finite element modelling, a swept-frequency EMI experimental setup is used to compute tensors for a set of test samples representing examples of metallic landmine components and clutter over a broadband range of frequencies (kHz to tens of kHz). Results show the reconstructed spectral target signatures are very distinctive and hence potentially offer an efficient physical approach to landmine identification. The accuracy of the evaluated spectra is similarly verified using a uniform-field-forming sensor.

  19. Tensor hypercontraction density fitting. I. Quartic scaling second- and third-order Møller-Plesset perturbation theory

    NASA Astrophysics Data System (ADS)

    Hohenstein, Edward G.; Parrish, Robert M.; Martínez, Todd J.

    2012-07-01

    Many approximations have been developed to help deal with the O(N^4) growth of the electron repulsion integral (ERI) tensor, where N is the number of one-electron basis functions used to represent the electronic wavefunction. Of these, the density fitting (DF) approximation is currently the most widely used despite the fact that it is often incapable of altering the underlying scaling of computational effort with respect to molecular size. We present a method for exploiting sparsity in three-center overlap integrals through tensor decomposition to obtain a low-rank approximation to density fitting (tensor hypercontraction density fitting or THC-DF). This new approximation reduces the 4th-order ERI tensor to a product of five matrices, simultaneously reducing the storage requirement as well as increasing the flexibility to regroup terms and reduce scaling behavior. As an example, we demonstrate such a scaling reduction for second- and third-order perturbation theory (MP2 and MP3), showing that both can be carried out in O(N^4) operations. This should be compared to the usual scaling behavior of O(N^5) and O(N^6) for MP2 and MP3, respectively. The THC-DF technique can also be applied to other methods in electronic structure theory, such as coupled-cluster and configuration interaction, promising significant gains in computational efficiency and storage reduction.
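The THC format itself is easy to illustrate with einsum. The sketch below shows only the factored form and its contraction back to a four-index tensor, using random matrices (and a shared collocation factor X for brevity); it does not show how THC-DF actually fits real integrals:

```python
import numpy as np

rng = np.random.default_rng(1)
N, P = 6, 20  # basis functions and THC grid points (illustrative sizes)

# Hypothetical THC-form factors:
#   ERI[p,q,r,s] ~ sum_{IJ} X[p,I] X[q,I] Z[I,J] X[r,J] X[s,J]
X = rng.standard_normal((N, P))
Z = rng.standard_normal((P, P))

# Contract the factored form back into a dense 4-index tensor.
eri = np.einsum("pI,qI,IJ,rJ,sJ->pqrs", X, X, Z, X, X)
print(eri.shape)  # (6, 6, 6, 6)

# The factored form stores O(N*P + P^2) numbers instead of O(N^4):
print(2 * N * P + P * P, N**4)
```

The scaling benefit comes precisely from never forming `eri` explicitly: downstream contractions can be regrouped over the small matrices instead of the 4th-order tensor.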

  20. Perfluoropolyalkylether decomposition on catalytic aluminas

    NASA Technical Reports Server (NTRS)

    Morales, Wilfredo

    1994-01-01

    The decomposition of Fomblin Z25, a commercial perfluoropolyalkylether liquid lubricant, was studied using the Penn State Micro-oxidation Test, and a thermal gravimetric/differential scanning calorimetry unit. The micro-oxidation test was conducted using 440C stainless steel and pure iron metal catalyst specimens, whereas the thermal gravimetric/differential scanning calorimetry tests were conducted using catalytic alumina pellets. Analysis of the thermal data, high pressure liquid chromatography data, and x-ray photoelectron spectroscopy data support evidence that there are two different decomposition mechanisms for Fomblin Z25, and that reductive sites on the catalytic surfaces are responsible for the decomposition of Fomblin Z25.

  1. Structural optimization by multilevel decomposition

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, J.; James, B.; Dovi, A.

    1983-01-01

    A method is described for decomposing an optimization problem into a set of subproblems and a coordination problem which preserves coupling between the subproblems. The method is introduced as a special case of multilevel, multidisciplinary system optimization and its algorithm is fully described for two level optimization for structures assembled of finite elements of arbitrary type. Numerical results are given for an example of a framework to show that the decomposition method converges and yields results comparable to those obtained without decomposition. It is pointed out that optimization by decomposition should reduce the design time by allowing groups of engineers, using different computers to work concurrently on the same large problem.

  2. Autonomous Gaussian Decomposition

    NASA Astrophysics Data System (ADS)

    Lindner, Robert R.; Vera-Ciro, Carlos; Murray, Claire E.; Stanimirović, Snežana; Babler, Brian; Heiles, Carl; Hennebelle, Patrick; Goss, W. M.; Dickey, John

    2015-04-01

    We present a new algorithm, named Autonomous Gaussian Decomposition (AGD), for automatically decomposing spectra into Gaussian components. AGD uses derivative spectroscopy and machine learning to provide optimized guesses for the number of Gaussian components in the data, and also their locations, widths, and amplitudes. We test AGD and find that it produces results comparable to human-derived solutions on 21 cm absorption spectra from the 21 cm SPectral line Observations of Neutral Gas with the EVLA (21-SPONGE) survey. We use AGD with Monte Carlo methods to derive the H i line completeness as a function of peak optical depth and velocity width for the 21-SPONGE data, and also show that the results of AGD are stable against varying observational noise intensity. The autonomy and computational efficiency of the method over traditional manual Gaussian fits allow for truly unbiased comparisons between observations and simulations, and for the ability to scale up and interpret the very large data volumes from the upcoming Square Kilometer Array and pathfinder telescopes.
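A conventional least-squares Gaussian decomposition, the baseline that AGD automates with derivative-based initial guesses, can be sketched with scipy; the component count and starting values here are supplied by hand, which is exactly the step AGD replaces:

```python
import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(x, a1, m1, s1, a2, m2, s2):
    """Sum of two Gaussian components: the model family being decomposed."""
    return (a1 * np.exp(-0.5 * ((x - m1) / s1) ** 2)
            + a2 * np.exp(-0.5 * ((x - m2) / s2) ** 2))

# Synthetic 'spectrum': two blended components plus noise.
rng = np.random.default_rng(2)
x = np.linspace(-10, 10, 400)
truth = (1.0, -2.0, 1.5, 0.6, 3.0, 2.0)  # amplitudes, centers, widths
y = two_gaussians(x, *truth) + 0.01 * rng.standard_normal(x.size)

# Hand-picked initial guesses; AGD's contribution is producing these
# (and the number of components) automatically.
popt, _ = curve_fit(two_gaussians, x, y, p0=(0.8, -1.0, 1.0, 0.5, 2.0, 1.0))
print(np.round(popt, 1))  # recovered amplitudes, centers and widths
```

With poor initial guesses a fit like this can converge to a wrong decomposition, which is the failure mode AGD's derivative-spectroscopy guesses are designed to avoid.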

  3. AUTONOMOUS GAUSSIAN DECOMPOSITION

    SciTech Connect

    Lindner, Robert R.; Vera-Ciro, Carlos; Murray, Claire E.; Stanimirović, Snežana; Babler, Brian; Heiles, Carl; Hennebelle, Patrick; Dickey, John

    2015-04-15

    We present a new algorithm, named Autonomous Gaussian Decomposition (AGD), for automatically decomposing spectra into Gaussian components. AGD uses derivative spectroscopy and machine learning to provide optimized guesses for the number of Gaussian components in the data, and also their locations, widths, and amplitudes. We test AGD and find that it produces results comparable to human-derived solutions on 21 cm absorption spectra from the 21 cm SPectral line Observations of Neutral Gas with the EVLA (21-SPONGE) survey. We use AGD with Monte Carlo methods to derive the H i line completeness as a function of peak optical depth and velocity width for the 21-SPONGE data, and also show that the results of AGD are stable against varying observational noise intensity. The autonomy and computational efficiency of the method over traditional manual Gaussian fits allow for truly unbiased comparisons between observations and simulations, and for the ability to scale up and interpret the very large data volumes from the upcoming Square Kilometer Array and pathfinder telescopes.

  4. Visualization of second order tensor fields and matrix data

    NASA Technical Reports Server (NTRS)

    Delmarcelle, Thierry; Hesselink, Lambertus

    1992-01-01

    We present a study of the visualization of 3-D second order tensor fields and matrix data. The general problem of visualizing unsymmetric real or complex Hermitian second order tensor fields can be reduced to the simultaneous visualization of a real and symmetric second order tensor field and a real vector field. As opposed to the discrete iconic techniques commonly used in multivariate data visualization, the emphasis is on exploiting the mathematical properties of tensor fields in order to facilitate their visualization and to produce a continuous representation of the data. We focus on interactively sensing and exploring real and symmetric second order tensor data by generalizing the vector notion of streamline to the tensor concept of hyperstreamline. We stress the importance of a structural analysis of the data field analogous to the techniques of vector field topology extraction in order to obtain a unique and objective representation of second order tensor fields.
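The reduction mentioned above, from a general real second-order tensor to a symmetric tensor plus a vector, is elementary to implement: the symmetric part is (A + A^T)/2, and the antisymmetric remainder is equivalent to its axial (dual) vector:

```python
import numpy as np

def split_tensor(A):
    """Split a real 3x3 second-order tensor into its symmetric part and the
    axial vector of its antisymmetric part (the vector-field component)."""
    S = 0.5 * (A + A.T)
    W = 0.5 * (A - A.T)
    w = np.array([W[2, 1], W[0, 2], W[1, 0]])  # dual vector of W
    return S, w

A = np.array([[1.0, 2.0, 0.0],
              [0.0, 3.0, 4.0],
              [5.0, 0.0, 6.0]])
S, w = split_tensor(A)
print(np.allclose(S, S.T), w)  # a symmetric tensor plus a 3-vector
```

Visualizing `S` (e.g. with hyperstreamlines) together with the vector field `w` recovers all the information in the original unsymmetric field.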

  5. Charmless hadronic B decays into a tensor meson

    SciTech Connect

    Cheng, Hai-Yang; Yang, Kwei-Chou

    2011-02-01

    Two-body charmless hadronic B decays involving a tensor meson in the final state are studied within the framework of QCD factorization (QCDF). Because of the G-parity of the tensor meson, both the chiral-even and chiral-odd two-parton light-cone distribution amplitudes of the tensor meson are antisymmetric under the interchange of momentum fractions of the quark and antiquark in the SU(3) limit. Our main results are: (i) In the naive factorization approach, the decays such as B{sup -}{yields}K{sub 2}*{sup 0}{pi}{sup -} and B{sup 0}{yields}K{sub 2}*{sup -}{pi}{sup +} with a tensor meson emitted are prohibited because a tensor meson cannot be created from the local V-A or tensor current. Nevertheless, the decays receive nonfactorizable contributions in QCDF from vertex, penguin and hard spectator corrections. The experimental observation of B{sup -}{yields}K{sub 2}*{sup 0}{pi}{sup -} indicates the importance of nonfactorizable effects. (ii) For penguin-dominated B{yields}TP and TV decays, the predicted rates in naive factorization are usually too small by 1 to 2 orders of magnitude. In QCDF, they are enhanced by power corrections from penguin annihilation and nonfactorizable contributions. (iii) The dominant penguin contributions to B{yields}K{sub 2}*{eta}{sup (')} arise from the processes: (a) b{yields}sss{yields}s{eta}{sub s} and (b) b{yields}sqq{yields}qK{sub 2}* with {eta}{sub q}=(uu+dd)/{radical}(2) and {eta}{sub s}=ss. The interference, constructive for K{sub 2}*{eta}{sup '} and destructive for K{sub 2}*{eta}, explains why {Gamma}(B{yields}K{sub 2}*{eta}{sup '})>>{Gamma}(B{yields}K{sub 2}*{eta}). (iv) We use the measured rates of B{yields}K{sub 2}*({omega},{phi}) to extract the penguin-annihilation parameters {rho}{sub A}{sup TV} and {rho}{sub A}{sup VT} and the observed longitudinal polarization fractions f{sub L}(K{sub 2}*{omega}) and f{sub L}(K{sub 2}*{phi}) to fix the phases {phi}{sub A}{sup VT} and {phi}{sub A}{sup TV}. (v) The experimental observation

  6. Odd tensor modes from inflation

    NASA Astrophysics Data System (ADS)

    Sorbo, Lorenzo

    2016-07-01

    The existence of a primordial spectrum of gravitational waves is a generic prediction of inflation. Here, I will discuss under what conditions the coupling of a pseudoscalar inflaton to a U(1) gauge field can induce, in a two-step process, gravitational waves with unusual properties such as: (i) a net chirality, (ii) a blue spectrum, (iii) large non-Gaussianities even if the scalar perturbations are approximately Gaussian and (iv) being detectable in the (relatively) near future by ground-based gravitational interferometers.

  7. Elliptic Relaxation of a Tensor Representation for the Redistribution Terms in a Reynolds Stress Turbulence Model

    NASA Technical Reports Server (NTRS)

    Carlson, J. R.; Gatski, T. B.

    2002-01-01

    A formulation to include the effects of wall proximity in a second-moment closure model that utilizes a tensor representation for the redistribution terms in the Reynolds stress equations is presented. The wall-proximity effects are modeled through an elliptic relaxation process of the tensor expansion coefficients that properly accounts for both correlation length and time scales as the wall is approached. Direct numerical simulation data and Reynolds stress solutions using a full differential approach are compared for the case of fully developed channel flow.

  8. FAST TRACK COMMUNICATION Algebraic classification of the Weyl tensor in higher dimensions based on its 'superenergy' tensor

    NASA Astrophysics Data System (ADS)

    Senovilla, José M. M.

    2010-11-01

    The algebraic classification of the Weyl tensor in the arbitrary dimension n is recovered by means of the principal directions of its 'superenergy' tensor. This point of view can be helpful in order to compute the Weyl aligned null directions explicitly, and permits one to obtain the algebraic type of the Weyl tensor by computing the principal eigenvalue of rank-2 symmetric future tensors. The algebraic types compatible with states of intrinsic gravitational radiation can then be explored. The underlying ideas are general, so that a classification of arbitrary tensors in the general dimension can be achieved.

  9. Mechanistic insight into the chemiluminescent decomposition of firefly dioxetanone.

    PubMed

    Yue, Ling; Liu, Ya-Jun; Fang, Wei-Hai

    2012-07-18

    The peroxide decomposition that generates the excited-state carbonyl compound is the key step in most organic chemiluminescence, and chemically initiated electron exchange luminescence (CIEEL) has been widely accepted for decades as the general mechanism for this decomposition. Firefly dioxetanone, which is a peroxide, is the intermediate in firefly bioluminescence, and its decomposition is the most important step leading to the emission of visible light by a firefly. However, the firefly dioxetanone decomposition mechanism has never been explored at a reliable theoretical level, because the decomposition process involves biradical, charge-transfer (CT) and several nearly degenerate states. Herein, we have investigated the thermolysis of firefly dioxetanone in its neutral (FDOH) and anionic (FDO(-)) forms using second-order multiconfigurational perturbation theory along the ground-state intrinsic reaction coordinate calculated with a Coulomb-attenuated hybrid exchange-correlation functional, and have considered the solvent effect on the ground-state reaction path at the same level of theory. The calculated results indicate that the chemiluminescent decomposition of FDOH or FDO(-) does not take place via the CIEEL mechanism. An entropic trap was found to lead to an excited-state carbonyl compound for FDOH, and a gradually reversible CT-initiated luminescence (GRCTIL) was proposed as a new mechanism for the decomposition of FDO(-).

  10. A sensitive, high resolution magic angle turning experiment for measuring chemical shift tensor principal values

    NASA Astrophysics Data System (ADS)

    Alderman, D. W.

    1998-12-01

    A sensitive, high-resolution 'FIREMAT' two-dimensional (2D) magic-angle-turning experiment is described that measures chemical shift tensor principal values in powdered solids. The spectra display spinning-sideband patterns separated by their isotropic shifts. The new method's sensitivity and high resolution in the isotropic-shift dimension result from combining the 5π magic-angle-turning pulse sequence, an extension of the pseudo-2D sideband-suppression data rearrangement, and the TIGER protocol for processing 2D data. TPPM decoupling is used to enhance resolution. The method requires precise synchronization of the pulses and sampling to the rotor position. It is shown that the technique obtains 35 natural-abundance 13C tensors from erythromycin in 19 hours, and high quality natural-abundance 15N tensors from eight sites in potassium penicillin V in three days on a 400 MHz spectrometer.

  11. Newton-based optimization for Kullback-Leibler nonnegative tensor factorizations

    SciTech Connect

    Plantenga, Todd; Kolda, Tamara G.; Hansen, Samantha

    2015-04-30

    Tensor factorizations with nonnegativity constraints have found application in analysing data from cyber traffic, social networks, and other areas. We consider application data best described as being generated by a Poisson process (e.g. count data), which leads to sparse tensors that can be modelled by sparse factor matrices. In this paper, we investigate efficient techniques for computing an appropriate canonical polyadic tensor factorization based on the Kullback–Leibler divergence function. We propose novel subproblem solvers within the standard alternating block variable approach. Our new methods exploit structure and reformulate the optimization problem as small independent subproblems. We employ bound-constrained Newton and quasi-Newton methods. Finally, we compare our algorithms against other codes, demonstrating superior speed for high accuracy results and the ability to quickly find sparse solutions.
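For contrast with the paper's Newton methods, a minimal multiplicative-update solver for the same KL-divergence nonnegative CP objective (a Lee-Seung-style baseline, not the authors' algorithm) can be sketched as:

```python
import numpy as np

def khatri_rao(B, C):
    """Column-wise Kronecker product, shape (B.rows * C.rows, rank)."""
    return np.einsum("ir,jr->ijr", B, C).reshape(-1, B.shape[1])

def kl_ncp(X, rank, iters=300, seed=0):
    """Nonnegative CP of a 3-way tensor via multiplicative KL updates.
    Simple and always feasible, but much slower to converge than the
    bound-constrained Newton solvers the paper proposes."""
    rng = np.random.default_rng(seed)
    F = [rng.random((n, rank)) + 0.1 for n in X.shape]
    eps = 1e-12
    for _ in range(iters):
        for mode in range(3):
            Xm = np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)
            KR = khatri_rao(*[F[m] for m in range(3) if m != mode])
            M = F[mode] @ KR.T + eps  # current model, unfolded along `mode`
            F[mode] *= (Xm / M) @ KR / (KR.sum(axis=0) + eps)
    return F

# Recover an exact nonnegative rank-2 tensor.
rng = np.random.default_rng(1)
A, B, C = (rng.random((5, 2)) for _ in range(3))
X = np.einsum("ir,jr,kr->ijk", A, B, C)
F = kl_ncp(X, rank=2)
R = np.einsum("ir,jr,kr->ijk", *F)
print(round(float(np.abs(R - X).mean()), 3))  # small reconstruction error
```

Multiplicative updates keep the factors nonnegative by construction; the paper's Newton and quasi-Newton subproblem solvers instead enforce the bounds explicitly and reach high-accuracy solutions in far fewer iterations.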

  12. Newton-based optimization for Kullback-Leibler nonnegative tensor factorizations

    DOE PAGES

    Plantenga, Todd; Kolda, Tamara G.; Hansen, Samantha

    2015-04-30

    Tensor factorizations with nonnegativity constraints have found application in analysing data from cyber traffic, social networks, and other areas. We consider application data best described as being generated by a Poisson process (e.g. count data), which leads to sparse tensors that can be modelled by sparse factor matrices. In this paper, we investigate efficient techniques for computing an appropriate canonical polyadic tensor factorization based on the Kullback–Leibler divergence function. We propose novel subproblem solvers within the standard alternating block variable approach. Our new methods exploit structure and reformulate the optimization problem as small independent subproblems. We employ bound-constrained Newton and quasi-Newton methods. Finally, we compare our algorithms against other codes, demonstrating superior speed for high accuracy results and the ability to quickly find sparse solutions.

  13. Advantages of horizontal directional Theta method to detect the edges of full tensor gravity gradient data

    NASA Astrophysics Data System (ADS)

    Yuan, Yuan; Gao, Jin-Yao; Chen, Ling-Na

    2016-07-01

    Full tensor gravity gradient data contain nine signal components. Because they include higher-frequency signals than traditional gravity data, they can resolve small-scale features of the sources. Edge detection plays an important role in the interpretation of potential-field data, and many methods have been proposed to detect and enhance the edges of geological bodies based on horizontal and vertical derivatives of potential-field data. To make full use of all the measured gradient components, we develop a new edge detector for full tensor gravity gradient data: we first define the directional Theta and then use the horizontal directional Theta to define the new detector. The method was tested on synthetic and real full tensor gravity gradient data to validate its feasibility. Compared with other balanced detectors, the new detector effectively delineates the edges and does not produce additional false edges.
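
    The paper's horizontal directional Theta generalizes the Theta-map idea to the tensor components; it is not reproduced here. For reference, the conventional Theta map, which normalizes the total horizontal derivative by the analytic-signal amplitude (the normalization is what makes such detectors "balanced"), can be sketched from the three gradient components; array names are illustrative.

```python
import numpy as np

def theta_map(gx, gy, gz, eps=1e-12):
    """Conventional Theta map: the angle whose cosine is the total horizontal
    derivative normalized by the analytic-signal amplitude."""
    thdr = np.hypot(gx, gy)                       # total horizontal derivative
    asig = np.sqrt(gx**2 + gy**2 + gz**2) + eps   # analytic-signal amplitude
    return np.arccos(np.clip(thdr / asig, 0.0, 1.0))
```

    A purely horizontal field gradient gives Theta near 0, a purely vertical one gives pi/2, independent of the gradient amplitude, which is the balancing property.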

  14. Tensors: A guide for undergraduate students

    NASA Astrophysics Data System (ADS)

    Battaglia, Franco; George, Thomas F.

    2013-07-01

    A guide to tensors, tied directly to vector calculus in orthonormal coordinate systems, is proposed for undergraduate students in physics or engineering. We show that once orthonormality is relaxed, a dual basis, together with the contravariant and covariant components, naturally emerges. Manipulating these components requires some skill that can be acquired more easily and quickly once a new notation is adopted. This notation distinguishes multi-component quantities in different coordinate systems by a differentiating sign on the index labelling the component rather than on the label of the quantity itself. This tiny stratagem, together with simple rules openly stated at the beginning of this guide, allows an almost automatic, easy-to-pursue procedure for what is otherwise a cumbersome algebra. By the end of the paper, the reader will be skillful enough to tackle many applications involving tensors of any rank in any coordinate system, without index-manipulation obstacles standing in the way.
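
    The emergence of the dual basis and of contravariant versus covariant components once orthonormality is relaxed can be checked numerically; a minimal sketch with a hypothetical non-orthonormal 2D basis:

```python
import numpy as np

# Hypothetical non-orthonormal 2D basis: vectors e_1, e_2 as the columns of E.
E = np.array([[1.0, 1.0],
              [0.0, 1.0]])
E_dual = np.linalg.inv(E).T     # dual basis e^i as columns: e^i . e_j = delta^i_j
v = np.array([3.0, 2.0])        # a vector given in the standard basis
v_contra = E_dual.T @ v         # contravariant components v^i = e^i . v
v_cov = E.T @ v                 # covariant components     v_i = e_i . v
g = E.T @ E                     # metric g_ij = e_i . e_j
```

    The metric lowers indices, so `g @ v_contra` reproduces `v_cov`, and `E @ v_contra` reconstructs `v`; for an orthonormal basis `E_dual` coincides with `E` and the two kinds of components collapse into one.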

  15. Extended scalar-tensor theories of gravity

    NASA Astrophysics Data System (ADS)

    Crisostomi, Marco; Koyama, Kazuya; Tasinato, Gianmassimo

    2016-04-01

    We study new consistent scalar-tensor theories of gravity recently introduced by Langlois and Noui with potentially interesting cosmological applications. We derive the conditions for the existence of a primary constraint that prevents the propagation of an additional dangerous mode associated with higher order equations of motion. We then classify the most general, consistent scalar-tensor theories that are at most quadratic in the second derivatives of the scalar field. In addition, we investigate the possible connection between these theories and (beyond) Horndeski through conformal and disformal transformations. Finally, we point out that these theories can be associated with new operators in the effective field theory of dark energy, which might open up new possibilities to test dark energy models in future surveys.

  16. Lifted tensors and Hamilton-Jacobi separability

    NASA Astrophysics Data System (ADS)

    Waeyaert, G.; Sarlet, W.

    2014-12-01

    Starting from a bundle τ : E → R, the bundle π : J1τ* → E, which is the dual of the first jet bundle J1τ and a sub-bundle of T*E, is the appropriate manifold for the geometric description of time-dependent Hamiltonian systems. Based on previous work, we recall properties of the complete lifts of a type (1,1) tensor R on E to both T*E and J1τ*. We discuss how an interplay between both lifted tensors leads to the identification of related distributions on both manifolds. The integrability of these distributions, a coordinate-free condition, is shown to produce exactly Forbat's conditions for separability of the time-dependent Hamilton-Jacobi equation in appropriate coordinates.

  17. Conformal killing tensors and covariant Hamiltonian dynamics

    SciTech Connect

    Cariglia, M.; Gibbons, G. W.; Holten, J.-W. van; Horvathy, P. A.; Zhang, P.-M.

    2014-12-15

    A covariant algorithm for deriving the conserved quantities for natural Hamiltonian systems is combined with the non-relativistic framework of Eisenhart, and of Duval, in which the classical trajectories arise as geodesics in a higher dimensional space-time, realized by Brinkmann manifolds. Conserved quantities which are polynomial in the momenta can be built using time-dependent conformal Killing tensors with flux. The latter are associated with terms proportional to the Hamiltonian in the lower dimensional theory and with spectrum generating algebras for higher dimensional quantities of order 1 and 2 in the momenta. Illustrations of the general theory include the Runge-Lenz vector for planetary motion with a time-dependent gravitational constant G(t), motion in a time-dependent electromagnetic field of a certain form, quantum dots, the Hénon-Heiles and Holt systems, respectively, providing us with Killing tensors of rank that ranges from one to six.

  18. Tensor modes on the string theory landscape

    NASA Astrophysics Data System (ADS)

    Westphal, Alexander

    2013-04-01

    We attempt an estimate for the distribution of the tensor mode fraction r over the landscape of vacua in string theory. The dynamics of eternal inflation and quantum tunneling lead to a kind of democracy on the landscape, providing no bias towards large-field or small-field inflation regardless of the class of measure. The tensor mode fraction then follows the number frequency distributions of inflationary mechanisms of string theory over the landscape. We show that an estimate of the relative number frequencies for small-field vs large-field inflation, while unattainable on the whole landscape, may be within reach as a regional answer for warped Calabi-Yau flux compactifications of type IIB string theory.

  19. Saliency Mapping Enhanced by Structure Tensor

    PubMed Central

    He, Zhiyong; Chen, Xin; Sun, Lining

    2015-01-01

    We propose a novel, efficient algorithm for computing visual saliency based on the computation architecture of the Itti model. As one of the best-known bottom-up visual saliency models, the Itti model evaluates three low-level features (color, intensity, and orientation), generates multiscale activation maps, and aggregates them into a saliency map by multiscale fusion. In our method, the orientation feature is replaced by edge and corner features extracted by a linear structure tensor. These features are used to generate a contour activation map, and all activation maps are then combined directly into a saliency map. Our method is more computationally efficient than the Itti model because the structure tensor is cheaper to compute than the Gabor filters used for the orientation feature, and because our aggregation is direct rather than multiscale. Experiments on Bruce's dataset show that our method is a strong contender for the state of the art. PMID:26788050
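
    A linear structure tensor and the edge/corner measures derived from its eigenvalues can be sketched in a few lines. This is a generic formulation, with a simple 3x3 box average standing in for the smoothing window, not the authors' implementation:

```python
import numpy as np

def smooth3(a):
    """3x3 box average with edge padding (stand-in for a smoothing window)."""
    p = np.pad(a, 1, mode='edge')
    return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
               for i in range(3) for j in range(3)) / 9.0

def structure_tensor_features(img):
    """Per-pixel edge and corner measures from the 2x2 structure tensor."""
    gy, gx = np.gradient(img.astype(float))
    # Smoothed outer products of the gradient -> structure tensor components
    Jxx, Jxy, Jyy = smooth3(gx * gx), smooth3(gx * gy), smooth3(gy * gy)
    tr = Jxx + Jyy
    disc = np.sqrt((Jxx - Jyy) ** 2 + 4 * Jxy ** 2)
    lam1, lam2 = 0.5 * (tr + disc), 0.5 * (tr - disc)   # eigenvalues, lam1 >= lam2
    return lam1 - lam2, lam2    # edge measure (one dominant direction), corner measure
```

    On a step edge only one eigenvalue is large, so the edge measure fires while the corner measure stays near zero; at a corner both eigenvalues are large.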

  20. Oscillating chiral tensor spectrum from axionic inflation

    NASA Astrophysics Data System (ADS)

    Obata, Ippei; Soda, Jiro

    2016-08-01

    We study axionic inflation with a modulated potential and examine if the primordial tensor power spectrum exhibits oscillatory feature, which is testable with future space-based gravitational-wave experiments such as DECIGO and BBO. In the case of single-field axion monodromy inflation, it turns out that it is difficult to detect an oscillation in the spectrum due to the suppression of the sub-Planckian decay constant of the axion. On the other hand, in the case of aligned chromo-natural inflation where the axion is coupled to a SU(2) gauge field, it turns out that a sizable oscillation in the tensor spectrum can occur due to the enhancement of chiral gravitational waves sourced by the gauge field. We expect that this feature will be a new probe for axion phenomenologies in the early Universe through chiral gravitational waves.

  1. ADVANCED OXIDATION: OXALATE DECOMPOSITION TESTING WITH OZONE

    SciTech Connect

    Ketusky, E.; Subramanian, K.

    2012-02-29

    At the Savannah River Site (SRS), oxalic acid is currently considered the preferred agent for chemically cleaning the large underground Liquid Radioactive Waste Tanks. It is applied only in the final stages of emptying a tank, when generally less than 5,000 kg of waste solids remain and slurrying-based removal methods are no longer effective. Oxalic acid is preferred because of its combined dissolution and chelating properties, and because corrosion of the carbon steel tank walls can be controlled. Although oxalic acid is the preferred agent, there are significant potential downstream impacts: (1) degraded evaporator operation; (2) oxalate precipitates consuming critically needed operating volume; and (3) the eventual creation of significant volumes of additional feed to salt processing. As an alternative to dealing with these downstream impacts, oxalate decomposition using variations of the ozone-based Advanced Oxidation Process (AOP) was investigated. In general, AOPs use ozone or peroxide and a catalyst to create hydroxyl radicals. Hydroxyl radicals have among the highest oxidation potentials and are commonly used to decompose organics. Although oxalate is considered among the most difficult organics to decompose, the ability of hydroxyl radicals to decompose oxalate is considered well demonstrated. In addition, because AOPs are considered 'green', their use minimizes any net chemical additions to the waste. In order to test the ability to decompose the oxalate and determine the decomposition rates, a test rig was designed in which 10 vol% ozone would be educted into a spent oxalic acid decomposition loop, with the loop maintained at 70 °C and recirculated at 40 L/min.
Each of the spent oxalic acid streams would be created from three oxalic acid strikes of an F-area simulant (i.e., Purex = high Fe/Al concentration) and H-area simulant (i.e., H area modified Purex = high Al/Fe concentration) after nearing

  2. 3D structure tensor analysis of light microscopy data for validating diffusion MRI

    PubMed Central

    Khan, Ahmad Raza; Cornea, Anda; Leigland, Lindsey A.; Kohama, Steven G.; Jespersen, Sune Nørhøj; Kroenke, Christopher D.

    2015-01-01

    Diffusion magnetic resonance imaging (d-MRI) is a powerful non-invasive and non-destructive technique for characterizing brain tissue on the microscopic scale. However, the lack of validation of d-MRI by independent experimental means poses an obstacle to accurate interpretation of data acquired using this method. Recently, structure tensor analysis has been applied to light microscopy images, and this technique holds promise to be a powerful validation strategy for d-MRI. Advantages of this approach include its similarity to d-MRI in terms of averaging the effects of a large number of cellular structures, and its simplicity, which enables it to be implemented in a high-throughput manner. However, a drawback of previous implementations of this technique arises from it being restricted to 2D. As a result, structure tensor analyses have been limited to tissue sectioned in a direction orthogonal to the direction of interest. Here we describe the analytical framework for extending structure tensor analysis to 3D, and utilize the results to analyze serial image “stacks” acquired with confocal microscopy of rhesus macaque hippocampal tissue. Implementation of 3D structure tensor procedures requires removal of sources of anisotropy introduced in tissue preparation and confocal imaging. This is accomplished with image processing steps to mitigate the effects of anisotropic tissue shrinkage, and the effects of anisotropy in the point spread function (PSF). In order to address the latter confound, we describe procedures for measuring the dependence of PSF anisotropy on distance from the microscope objective within tissue. Prior to microscopy, ex vivo d-MRI measurements performed on the hippocampal tissue revealed three regions of tissue with mutually orthogonal directions of least restricted diffusion that correspond to CA1, alveus and inferior longitudinal fasciculus. We demonstrate the ability of 3D structure tensor analysis to identify structure tensor orientations
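
    The core of a 3D structure tensor analysis, averaging outer products of intensity gradients and reading the dominant direction off an eigendecomposition, can be sketched as follows. This minimal version omits the smoothing, anisotropic-shrinkage, and PSF corrections the paper describes:

```python
import numpy as np

def structure_tensor_3d(vol):
    """Volume-averaged 3D structure tensor and its dominant gradient direction.

    `vol` is indexed [z, y, x]; structures (e.g. fibers) run perpendicular
    to the dominant gradient direction.
    """
    gz, gy, gx = np.gradient(vol.astype(float))          # derivatives along z, y, x
    g = np.stack([gx.ravel(), gy.ravel(), gz.ravel()])   # 3 x Nvoxels gradient matrix
    J = (g @ g.T) / g.shape[1]                           # 3x3 mean outer product
    evals, evecs = np.linalg.eigh(J)                     # ascending eigenvalues
    return J, evals, evecs[:, -1]                        # dominant gradient direction
```

    In practice the averaging would be done over local windows rather than the whole volume, giving a tensor field analogous to the diffusion tensor of d-MRI.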

  3. Inflation in anisotropic scalar-tensor theories

    NASA Technical Reports Server (NTRS)

    Pimentel, Luis O.; Stein-Schabes, Jaime

    1988-01-01

    The existence of an inflationary phase in anisotropic Scalar-Tensor Theories is investigated by means of a conformal transformation that allows these theories to be rewritten as gravity minimally coupled to a scalar field with a nontrivial potential. Using the explicit form of the potential together with the No Hair Theorem, it is concluded that there is an inflationary phase in all open or flat anisotropic spacetimes in these theories. Several examples are constructed where the effect becomes manifest.

  4. Stress tensor correlators in three dimensional gravity

    NASA Astrophysics Data System (ADS)

    Bagchi, Arjun; Grumiller, Daniel; Merbis, Wout

    2016-03-01

    We calculate holographically arbitrary n -point correlators of the boundary stress tensor in three-dimensional Einstein gravity with negative or vanishing cosmological constant. We provide explicit expressions up to 5-point (connected) correlators and show consistency with the Galilean conformal field theory Ward identities and recursion relations of correlators, which we derive. This provides a novel check of flat space holography in three dimensions.

  5. Tensor Networks and Quantum Error Correction

    NASA Astrophysics Data System (ADS)

    Ferris, Andrew J.; Poulin, David

    2014-07-01

    We establish several relations between quantum error correction (QEC) and tensor network (TN) methods of quantum many-body physics. We exhibit correspondences between well-known families of QEC codes and TNs, and demonstrate a formal equivalence between decoding a QEC code and contracting a TN. We build on this equivalence to propose a new family of quantum codes and decoding algorithms that generalize and improve upon quantum polar codes and successive cancellation decoding in a natural way.

  6. Monte Carlo Volcano Seismic Moment Tensors

    NASA Astrophysics Data System (ADS)

    Waite, G. P.; Brill, K. A.; Lanza, F.

    2015-12-01

    Inverse modeling of volcano seismic sources can provide insight into the geometry and dynamics of volcanic conduits. But given the logistical challenges of working on an active volcano, seismic networks are typically deficient in spatial and temporal coverage; this potentially leads to large errors in source models. In addition, uncertainties in the centroid location and moment-tensor components, including volumetric components, are difficult to constrain from the linear inversion results, which leads to a poor understanding of the model space. In this study, we employ a nonlinear inversion using a Monte Carlo scheme with the objective of defining robustly resolved elements of model space. The model space is randomized by centroid location and moment tensor eigenvectors. Point sources densely sample the summit area, and moment tensors are constrained to a randomly chosen geometry within the inversion; Green's functions for the random moment tensors are all calculated from modeled single forces, making the nonlinear inversion computationally reasonable. We apply this method to very-long-period (VLP) seismic events that accompany minor eruptions at Fuego volcano, Guatemala. The library of single-force Green's functions is computed with a 3D finite-difference modeling algorithm through a homogeneous velocity-density model that includes topography, for a 3D grid of nodes, spaced 40 m apart, within the summit region. The homogeneous velocity and density model is justified by the long wavelengths of the VLP data. The nonlinear inversion reveals well-resolved model features and informs the interpretation through a better understanding of the possible models. This approach can also be used to evaluate possible station geometries in order to optimize networks prior to deployment.
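
    One generic way to randomize moment tensors, drawing a random orthonormal eigenvector frame and random eigenvalues and recomposing, can be sketched as below; this is an illustrative Monte Carlo construction, not necessarily the authors' sampling scheme:

```python
import numpy as np

def random_moment_tensor(rng, iso_max=1.0, dev_max=1.0):
    """Random symmetric moment tensor: a random orthonormal eigenvector frame
    (QR of a Gaussian matrix) combined with random eigenvalues, which may
    include an isotropic (volumetric) part."""
    Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))       # random eigenvectors
    lam = rng.uniform(-dev_max, dev_max, 3) + rng.uniform(-iso_max, iso_max)
    return Q @ np.diag(lam) @ Q.T                          # M = R Lambda R^T
```

    Because the tensor is built from an eigendecomposition, every draw is symmetric by construction, and the isotropic offset lets the sampler explore volumetric source components.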

  7. Gamma-ray decomposition of PCBs

    SciTech Connect

    Mincher, B.J.; Meikrantz, D.H.; Arbon, R.E.; Murphy, R.J.

    1991-12-01

    This program is the Idaho National Engineering Laboratory (INEL) component of a joint collaborative effort with Lawrence Livermore National Laboratory (LLNL). The purpose of this effort is to demonstrate a viable process for breaking down hazardous halogenated organic wastes to simpler, non-hazardous wastes using high energy ionizing radiation. The INEL effort focuses on the use of spent reactor fuel gamma radiation sources to decompose complex wastes such as PCBs. At LLNL, halogenated solvents such as carbon tetrachloride and trichloroethylene are being studied using accelerator radiation sources. The INEL irradiation experiments concentrated on a single PCB congener so that a limited set of decomposition reactions could be studied. The congener 2,2',3,3',4,5',6,6'-octachlorobiphenyl was examined following exposure to various gamma doses at the Advanced Test Reactor (ATR) spent fuel pool. The decomposition rates and products in several solvents are discussed. 7 refs., 13 figs., 1 tab.

  8. Gamma-ray decomposition of PCBs

    SciTech Connect

    Mincher, B.J.; Meikrantz, D.H.; Arbon, R.E.; Murphy, R.J.

    1991-01-01

    This program is the Idaho National Engineering Laboratory (INEL) component of a joint collaborative effort with Lawrence Livermore National Laboratory (LLNL). The purpose of this effort is to demonstrate a viable process for breaking down hazardous halogenated organic wastes to simpler, non-hazardous wastes using high energy ionizing radiation. The INEL effort focuses on the use of spent reactor fuel gamma radiation sources to decompose complex wastes such as PCBs. At LLNL, halogenated solvents such as carbon tetrachloride and trichloroethylene are being studied using accelerator radiation sources. The INEL irradiation experiments concentrated on a single PCB congener so that a limited set of decomposition reactions could be studied. The congener 2,2',3,3',4,5',6,6'-octachlorobiphenyl was examined following exposure to various gamma doses at the Advanced Test Reactor (ATR) spent fuel pool. The decomposition rates and products in several solvents are discussed. 7 refs., 13 figs., 1 tab.

  9. Full Three-Dimensional Reconstruction of the Dyadic Green Tensor from Electron Energy Loss Spectroscopy of Plasmonic Nanoparticles

    PubMed Central

    2015-01-01

    Electron energy loss spectroscopy (EELS) has emerged as a powerful tool for the investigation of plasmonic nanoparticles, but the interpretation of EELS results in terms of optical quantities, such as the photonic local density of states, remains challenging. Recent work has demonstrated that, under restrictive assumptions, including the applicability of the quasistatic approximation and a plasmonic response governed by a single mode, one can rephrase EELS as a tomography scheme for the reconstruction of plasmonic eigenmodes. In this paper we lift these restrictions by formulating EELS as an inverse problem and show that the complete dyadic Green tensor can be reconstructed for plasmonic particles of arbitrary shape. The key steps underlying our approach are a generic singular value decomposition of the dyadic Green tensor and a compressed sensing optimization for the determination of the expansion coefficients. We demonstrate the applicability of our scheme for prototypical nanorod, bowtie, and cube geometries. PMID:26523284
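
    The compressed-sensing step, determining sparse expansion coefficients, is commonly posed as L1-regularized least squares. A minimal iterative soft-thresholding (ISTA) sketch, not the authors' solver, with illustrative problem sizes:

```python
import numpy as np

def ista(A, y, lam=0.01, iters=3000):
    """Iterative soft-thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - A.T @ (A @ x - y) / L                            # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)    # soft threshold
    return x
```

    With fewer measurements than unknowns, the L1 penalty selects a sparse coefficient vector consistent with the data, which is the mechanism that makes the reconstruction of many expansion coefficients from limited EELS data feasible.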

  10. Some nonlinear space decomposition algorithms

    SciTech Connect

    Tai, Xue-Cheng; Espedal, M.

    1996-12-31

    Convergence of a space decomposition method is proved for a general convex programming problem. The space decomposition refers to methods that decompose a space into sums of subspaces, which could be a domain decomposition or a multigrid method for partial differential equations. Two algorithms are proposed. Both can be used for linear as well as nonlinear elliptic problems and they reduce to the standard additive and multiplicative Schwarz methods for linear elliptic problems. Two "hybrid" algorithms are also presented. They converge faster than the additive one and have better parallelism than the multiplicative method. Numerical tests with a two level domain decomposition for linear, nonlinear and interface elliptic problems are presented for the proposed algorithms.
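
    A two-subdomain additive Schwarz iteration for a linear 1D elliptic problem, the simplest instance of the space decomposition idea, can be sketched as follows; the problem size, overlap, and damping are illustrative:

```python
import numpy as np

def additive_schwarz_1d(n=40, overlap=4, sweeps=500):
    """Damped two-subdomain additive Schwarz for -u'' = 1, u(0) = u(1) = 0."""
    h = 1.0 / (n + 1)
    A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2   # 1D Laplacian
    f = np.ones(n)
    u = np.zeros(n)
    mid = n // 2
    dom1 = slice(0, mid + overlap)       # left subdomain (overlapping)
    dom2 = slice(mid - overlap, n)       # right subdomain (overlapping)
    for _ in range(sweeps):
        r = f - A @ u                    # global residual
        du = np.zeros(n)
        du[dom1] += np.linalg.solve(A[dom1, dom1], r[dom1])   # local solves,
        du[dom2] += np.linalg.solve(A[dom2, dom2], r[dom2])   # done in parallel
        u += 0.5 * du                    # damped additive combination
    return u, np.linalg.solve(A, f)      # iterate and direct solution
```

    The local corrections are independent (hence the method's parallelism); damping keeps the additively combined overlap contributions from overshooting.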

  11. Lignocellulose decomposition by microbial secretions

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Carbon storage in terrestrial ecosystems is contingent upon the natural resistance of plant cell wall polymers to rapid biological degradation. Nevertheless, certain microorganisms have evolved remarkable means to overcome this natural resistance. Lignocellulose decomposition by microorganisms com...

  12. Tree Tensor Network State with Variable Tensor Order: An Efficient Multireference Method for Strongly Correlated Systems.

    PubMed

    Murg, V; Verstraete, F; Schneider, R; Nagy, P R; Legeza, Ö

    2015-03-10

    We study the tree-tensor-network-state (TTNS) method with variable tensor orders for quantum chemistry. TTNS is a variational method to efficiently approximate complete active space (CAS) configuration interaction (CI) wave functions in a tensor product form. TTNS can be considered as a higher order generalization of the matrix product state (MPS) method. The MPS wave function is formulated as products of matrices in a multiparticle basis spanning a truncated Hilbert space of the original CAS-CI problem. These matrices belong to active orbitals organized in a one-dimensional array, while tensors in TTNS are defined upon a tree-like arrangement of the same orbitals. The tree-structure is advantageous since the distance between two arbitrary orbitals in the tree scales only logarithmically with the number of orbitals N, whereas the scaling is linear in the MPS array. It is found to be beneficial from the computational costs point of view to keep strongly correlated orbitals in close vicinity in both arrangements; therefore, the TTNS ansatz is better suited for multireference problems with numerous highly correlated orbitals. To exploit the advantages of TTNS a novel algorithm is designed to optimize the tree tensor network topology based on quantum information theory and entanglement. The superior performance of the TTNS method is illustrated on the ionic-neutral avoided crossing of LiF. It is also shown that the avoided crossing of LiF can be localized using only ground state properties, namely one-orbital entanglement.
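
    The claimed logarithmic-versus-linear scaling of orbital distance is easy to verify on the underlying graphs; a small stdlib-only sketch comparing the diameter of a chain (the MPS arrangement) with a perfect binary tree (one possible TTNS arrangement):

```python
from collections import deque

def diameter(adj):
    """Largest pairwise graph distance, via BFS from every node."""
    best = 0
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        best = max(best, max(dist.values()))
    return best

def chain(n):
    """MPS-style arrangement: orbitals on a line."""
    return {i: {j for j in (i - 1, i + 1) if 0 <= j < n} for i in range(n)}

def binary_tree(depth):
    """TTNS-style arrangement: orbitals on a perfect binary tree."""
    n = 2 ** (depth + 1) - 1
    adj = {i: set() for i in range(n)}
    for i in range(1, n):
        adj[i].add((i - 1) // 2)
        adj[(i - 1) // 2].add(i)
    return adj
```

    For 63 orbitals, the chain has diameter 62 while the depth-5 tree has diameter 10 = 2·depth, i.e. O(log N), which is why placing strongly correlated orbitals near each other is easier on a tree.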

  13. To Learn Production of Scalars VS. Tensors in e+e- Collisions

    NASA Astrophysics Data System (ADS)

    Achasov, N. N.; Goncharenko, A. I.; Kiselev, A. V.; Rogozina, E. V.

    2014-12-01

    The production intensity of the scalar a0(980), f0(980) and tensor a2(1320), f2(1270) mesons at VEPP-2000 (BINP, Novosibirsk) and the upgraded DAΦNE (Frascati, Italy) in the processes e+e- → a0(f0, a2, f2)γ is calculated.

  14. Decomposition of Rare Earth Loaded Resin Particles

    SciTech Connect

    Voit, Stewart L; Rawn, Claudia J

    2010-09-01

    resin is made of sulfonic acid functional groups attached to a styrene-divinylbenzene copolymer lattice (a long-chained hydrocarbon). The metal cation binds to the sulfonate group; during thermal decomposition in air the hydrocarbons form gaseous species, leaving behind a spherical metal-oxide particle. Process development for resin applications with radioactive materials is typically performed using surrogates; for americium and curium, a trivalent metal like neodymium can be used. Thermal decomposition of Nd-loaded resin in air has been studied by Hale, who established process conditions for resin decomposition and the formation of Nd2O3 particles and described the intermediate product compounds using X-ray diffraction (XRD) and wet chemistry. Leskela and Niinisto studied the decomposition of rare earth (RE) elements and found results consistent with Hale. Picart et al. demonstrated the viability of using a resin loading process for the fabrication of uranium-actinide mixed oxide microspheres for transmutation of minor actinides in a fast reactor. For effective transmutation of actinides, it is desirable to extend the in-reactor burnup and minimize the number of recycles of used actinide materials. Longer burn times increase the chance of Fuel Clad Chemical or Mechanical Interaction (FCCI, FCMI). Sulfur is suspected of contributing to Irradiation Assisted Stress Corrosion Cracking (IASCC), so it is necessary to maximize the removal of sulfur during decomposition of the resin. The present effort extends the previous work by quantifying the removal of sulfur during the decomposition process. Neodymium was selected as a surrogate for trivalent actinide metal cations. As described above, Nd was dissolved in nitric acid solution and then contacted with the AG-50W resin column. After washing the column, the Nd-resin particles are removed and dried.
The Nd-resin, seen in Figure 1 prior to decomposition, is ready to be converted to Nd oxide microspheres.

  15. Decomposition of energetic chemicals contaminated with iron or stainless steel.

    PubMed

    Chervin, Sima; Bodman, Glenn T; Barnhart, Richard W

    2006-03-17

    Contamination of chemicals or reaction mixtures with iron or stainless steel is likely to take place during chemical processing. If energetic and thermally unstable chemicals are involved in a manufacturing process, contamination with iron or stainless steel can affect the decomposition characteristics of these chemicals and, subsequently, the safety of the process, and should be investigated. The goal of this project was to undertake a systematic study of the impact of iron or stainless steel contamination on the decomposition characteristics of different chemical classes. Differential scanning calorimetry (DSC) was used to study the decomposition reaction by testing each chemical pure and in mixtures with iron and stainless steel. The following classes of energetic chemicals were investigated: nitrobenzenes, tetrazoles, hydrazines, hydroxylamines and oximes, sulfonic acid derivatives, and monomers. The following non-energetic groups were investigated for contributing effects: halogens, hydroxyls, amines, amides, nitriles, sulfonic acid esters, carbonyl halides, and salts of hydrochloric acid. Based on the results obtained, conclusions were drawn regarding the sensitivity of the decomposition reaction to contamination with iron and stainless steel for the chemical classes listed above. It was demonstrated that the most sensitive classes are hydrazines and hydroxylamines/oximes. Contamination of these chemicals with iron or stainless steel not only destabilizes them, leading to decomposition at significantly lower temperatures, but also sometimes increases the severity of the decomposition. The sensitivity of nitrobenzenes to such contamination depended upon the presence of other contributing groups: groups such as acid chlorides or chlorine/fluorine significantly increased the effect of contamination on the decomposition characteristics of nitrobenzenes.
The decomposition of sulfonic acid derivatives and tetrazoles

  16. Decomposition of indwelling EMG signals

    PubMed Central

    Nawab, S. Hamid; Wotiz, Robert P.; De Luca, Carlo J.

    2008-01-01

    Decomposition of indwelling electromyographic (EMG) signals is challenging in view of the complex and often unpredictable behaviors and interactions of the action potential trains of different motor units that constitute the indwelling EMG signal. These phenomena create a myriad of problem situations that a decomposition technique needs to address to attain completeness and accuracy levels required for various scientific and clinical applications. Starting with the maximum a posteriori probability classifier adapted from the original precision decomposition system (PD I) of LeFever and De Luca (25, 26), an artificial intelligence approach has been used to develop a multiclassifier system (PD II) for addressing some of the experimentally identified problem situations. On a database of indwelling EMG signals reflecting such conditions, the fully automatic PD II system is found to achieve a decomposition accuracy of 86.0% despite the fact that its results include low-amplitude action potential trains that are not decomposable at all via systems such as PD I. Accuracy was established by comparing the decompositions of indwelling EMG signals obtained from two sensors. At the end of the automatic PD II decomposition procedure, the accuracy may be enhanced to nearly 100% via an interactive editor, a particularly significant fact for the previously indecomposable trains. PMID:18483170

  17. Application of decomposition techniques to the preliminary design of a transport aircraft

    NASA Technical Reports Server (NTRS)

    Rogan, J. E.; Mcelveen, R. P.; Kolb, M. A.

    1986-01-01

    A multifaceted decomposition of a nonlinear constrained optimization problem describing the preliminary design process for a transport aircraft has been made. Flight dynamics, flexible aircraft loads and deformations, and preliminary structural design subproblems appear prominently in the decomposition. The use of design process decomposition for scheduling design projects, a new system integration approach to configuration control, and the application of object-centered programming to a new generation of design tools are discussed.

  18. Application of decomposition techniques to the preliminary design of a transport aircraft

    NASA Technical Reports Server (NTRS)

    Rogan, J. E.; Kolb, M. A.

    1987-01-01

    A nonlinear constrained optimization problem describing the preliminary design process for a transport aircraft has been formulated. A multifaceted decomposition of the optimization problem has been made. Flight dynamics, flexible aircraft loads and deformations, and preliminary structural design subproblems appear prominently in the decomposition. The use of design process decomposition for scheduling design projects, a new system integration approach to configuration control, and the application of object-centered programming to a new generation of design tools are discussed.

  19. Thermal decomposition of ethylpentaborane in gas phase

    NASA Technical Reports Server (NTRS)

    Mcdonald, Glen E

    1956-01-01

    The thermal decomposition of ethylpentaborane at temperatures of 185 degrees to 244 degrees C is approximately a 1.5-order reaction. The products of the decomposition were hydrogen, methane, a nonvolatile boron hydride, and traces of decaborane. Measurements of the rate of decomposition of pentaborane showed that ethylpentaborane has a greater rate of decomposition than pentaborane.

  20. Tribochemical Decomposition of Light Ionic Hydrides at Room Temperature.

    PubMed

    Nevshupa, Roman; Ares, Jose Ramón; Fernández, Jose Francisco; Del Campo, Adolfo; Roman, Elisa

    2015-07-16

    Tribochemical decomposition of magnesium hydride (MgH2) induced by deformation at room temperature was studied on a micrometric scale, in situ and in real time. During deformation, a near-full depletion of hydrogen in the micrometric affected zone is observed through an instantaneous (t < 1 s) and huge release of hydrogen (3-50 nmol/s). H release is related to a nonthermal decomposition process. After deformation, the remaining hydride is thermally decomposed at room temperature, exhibiting a much slower rate than during deformation. Confocal-microRaman spectroscopy of the mechanically affected zone was used to characterize the decomposition products. Decomposition was enhanced through the formation of the distorted structure of MgH2 with reduced crystal size by mechanical deformation.

  1. Scare Tactics: Evaluating Problem Decompositions Using Failure Scenarios

    NASA Technical Reports Server (NTRS)

    Helm, B. Robert; Fickas, Stephen

    1992-01-01

    Our interest is in the design of multi-agent problem-solving systems, which we refer to as composite systems. We have proposed an approach to composite system design by decomposition of problem statements. An automated assistant called Critter provides a library of reusable design transformations which allow a human analyst to search the space of decompositions for a problem. In this paper we describe a method for evaluating and critiquing problem decompositions generated by this search process. The method uses knowledge stored in the form of failure decompositions attached to design transformations. We suggest the benefits of our critiquing method by showing how it could re-derive steps of a published development example. We then identify several open issues for the method.

  2. Associational Patterns of Scavenger Beetles to Decomposition Stages.

    PubMed

    Zanetti, Noelia I; Visciarelli, Elena C; Centeno, Nestor D

    2015-07-01

    Beetles associated with carrion play an important role in recycling organic matter in an ecosystem. Four experiments on decomposition, one per season, were conducted in a semirural area in Bahía Blanca, Argentina. Melyridae are reported for the first time of forensic interest. Apart from adults and larvae of Scarabaeidae, thirteen species and two genera of other coleopteran families are new forensic records in Argentina. Diversity, abundance, and species composition of beetles showed differences between stages and seasons. Our results differed from other studies conducted in temperate regions. Four guilds and succession patterns were established in relation to decomposition stages and seasons. Dermestidae (necrophages) predominated in winter during the decomposition process; Staphylinidae (necrophiles) in Fresh and Bloat stages during spring, summer, and autumn; and Histeridae (necrophiles) and Cleridae (omnivores) in the following stages during those seasons. Finally, coleopteran activity, diversity and abundance, and decomposition rate change with biogeoclimatic characteristics, which is of significance in forensics. PMID:26174466

  4. Ab initio kinetics of gas phase decomposition reactions.

    PubMed

    Sharia, Onise; Kuklja, Maija M

    2010-12-01

    The thermal and kinetic aspects of gas phase decomposition reactions can be extremely complex due to a large number of parameters, a variety of possible intermediates, and an overlap in thermal decomposition traces. The experimental determination of the activation energies is particularly difficult when several possible reaction pathways coexist in the thermal decomposition. Ab initio calculations intended to provide an interpretation of the experiment are often of little help if they produce only the activation barriers and ignore the kinetics of the decomposition process. To overcome this ambiguity, a theoretical study of a complete picture of gas phase thermo-decomposition, including reaction energies, activation barriers, and reaction rates, is illustrated with the example of the β-octahydro-1,3,5,7-tetranitro-1,3,5,7-tetrazocine (HMX) molecule by means of quantum-chemical calculations. We study three types of major decomposition reactions characteristic of nitramines: the HONO elimination, the NONO rearrangement, and the N-NO(2) homolysis. The reaction rates were determined using the conventional transition state theory for the HONO and NONO decompositions and the variational transition state theory for the N-NO(2) homolysis. Our calculations show that the HMX decomposition process is more complex than it was previously believed to be and is defined by a combination of reactions at any given temperature. At all temperatures, the direct N-NO(2) homolysis prevails with the activation barrier at 38.1 kcal/mol. The nitro-nitrite isomerization and the HONO elimination, with the activation barriers at 46.3 and 39.4 kcal/mol, respectively, are slow reactions at all temperatures. The obtained conclusions provide a consistent interpretation for the reported experimental data. PMID:21077597
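
    The relative ordering of these channels can be reproduced with a back-of-the-envelope transition-state-theory estimate. The sketch below uses the simple Eyring prefactor kB*T/h with the barriers quoted in the abstract, neglecting the entropic and variational corrections the paper actually computes:

```python
import math

KB = 1.380649e-23    # Boltzmann constant, J/K
H = 6.62607015e-34   # Planck constant, J*s
R = 1.987204e-3      # gas constant, kcal/(mol*K)

def tst_rate(barrier_kcal_mol, temperature_k):
    """Simple TST estimate: k = (kB*T/h) * exp(-Ea / (R*T))."""
    return (KB * temperature_k / H) * math.exp(-barrier_kcal_mol / (R * temperature_k))

barriers = {
    "N-NO2 homolysis": 38.1,          # kcal/mol, fastest channel
    "HONO elimination": 39.4,
    "nitro-nitrite isomerization": 46.3,
}

for name, ea in sorted(barriers.items(), key=lambda kv: kv[1]):
    print(f"{name}: k(500 K) = {tst_rate(ea, 500.0):.3e} s^-1")
```

Even at this crude level of estimate, the 1.3 kcal/mol gap between homolysis and HONO elimination translates into roughly a factor-of-four rate difference at 500 K, consistent with the homolysis channel prevailing.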

  5. Vertebrate Decomposition Is Accelerated by Soil Microbes

    PubMed Central

    Lauber, Christian L.; Metcalf, Jessica L.; Keepers, Kyle; Ackermann, Gail; Carter, David O.

    2014-01-01

    Carrion decomposition is an ecologically important natural phenomenon influenced by a complex set of factors, including temperature, moisture, and the activity of microorganisms, invertebrates, and scavengers. The role of soil microbes as decomposers in this process is essential but not well understood and represents a knowledge gap in carrion ecology. To better define the role and sources of microbes in carrion decomposition, lab-reared mice were decomposed on either (i) soil with an intact microbial community or (ii) soil that was sterilized. We characterized the microbial community (16S rRNA gene for bacteria and archaea, and the 18S rRNA gene for fungi and microbial eukaryotes) for three body sites along with the underlying soil (i.e., gravesoils) at time intervals coinciding with visible changes in carrion morphology. Our results indicate that mice placed on soil with intact microbial communities reach advanced stages of decomposition 2 to 3 times faster than those placed on sterile soil. Microbial communities associated with skin and gravesoils of carrion in stages of active and advanced decay were significantly different between soil types (sterile versus untreated), suggesting that substrates on which carrion decompose may partially determine the microbial decomposer community. However, the source of the decomposer community (soil- versus carcass-associated microbes) was not clear in our data set, suggesting that greater sequencing depth needs to be employed to identify the origin of the decomposer communities in carrion decomposition. Overall, our data show that soil microbial communities have a significant impact on the rate at which carrion decomposes and have important implications for understanding carrion ecology. PMID:24907317

  6. Vertebrate decomposition is accelerated by soil microbes.

    PubMed

    Lauber, Christian L; Metcalf, Jessica L; Keepers, Kyle; Ackermann, Gail; Carter, David O; Knight, Rob

    2014-08-01

    Carrion decomposition is an ecologically important natural phenomenon influenced by a complex set of factors, including temperature, moisture, and the activity of microorganisms, invertebrates, and scavengers. The role of soil microbes as decomposers in this process is essential but not well understood and represents a knowledge gap in carrion ecology. To better define the role and sources of microbes in carrion decomposition, lab-reared mice were decomposed on either (i) soil with an intact microbial community or (ii) soil that was sterilized. We characterized the microbial community (16S rRNA gene for bacteria and archaea, and the 18S rRNA gene for fungi and microbial eukaryotes) for three body sites along with the underlying soil (i.e., gravesoils) at time intervals coinciding with visible changes in carrion morphology. Our results indicate that mice placed on soil with intact microbial communities reach advanced stages of decomposition 2 to 3 times faster than those placed on sterile soil. Microbial communities associated with skin and gravesoils of carrion in stages of active and advanced decay were significantly different between soil types (sterile versus untreated), suggesting that substrates on which carrion decompose may partially determine the microbial decomposer community. However, the source of the decomposer community (soil- versus carcass-associated microbes) was not clear in our data set, suggesting that greater sequencing depth needs to be employed to identify the origin of the decomposer communities in carrion decomposition. Overall, our data show that soil microbial communities have a significant impact on the rate at which carrion decomposes and have important implications for understanding carrion ecology.

  7. Geometric derivation of the microscopic stress: A covariant central force decomposition

    NASA Astrophysics Data System (ADS)

    Torres-Sánchez, Alejandro; Vanegas, Juan M.; Arroyo, Marino

    2016-08-01

    We revisit the derivation of the microscopic stress, linking the statistical mechanics of particle systems and continuum mechanics. The starting point in our geometric derivation is the Doyle-Ericksen formula, which states that the Cauchy stress tensor is the derivative of the free-energy with respect to the ambient metric tensor and which follows from a covariance argument. Thus, our approach to define the microscopic stress tensor does not rely on the statement of balance of linear momentum as in the classical Irving-Kirkwood-Noll approach. Nevertheless, the resulting stress tensor satisfies balance of linear and angular momentum. Furthermore, our approach removes the ambiguity in the definition of the microscopic stress in the presence of multibody interactions by naturally suggesting a canonical and physically motivated force decomposition into pairwise terms, a key ingredient in this theory. As a result, our approach provides objective expressions to compute a microscopic stress for a system in equilibrium and for force-fields expanded into multibody interactions of arbitrarily high order. We illustrate the proposed methodology with molecular dynamics simulations of a fibrous protein using a force-field involving up to 5-body interactions.
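
    In one common continuum-mechanics form (stated here from general knowledge rather than from the paper), the Doyle-Ericksen formula referenced above reads:

```latex
\sigma^{ab} \;=\; 2\rho \,\frac{\partial \psi}{\partial g_{ab}},
```

with \(\rho\) the mass density, \(\psi\) the free energy per unit mass, and \(g_{ab}\) the ambient (spatial) metric; the microscopic stress of the paper is obtained by applying this identity to the statistical-mechanical free energy.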

  8. Reduction of noise in diffusion tensor images using anisotropic smoothing.

    PubMed

    Ding, Zhaohua; Gore, John C; Anderson, Adam W

    2005-02-01

    To improve the accuracy of tissue structural and architectural characterization with diffusion tensor imaging, a novel smoothing technique is developed for reducing noise in diffusion tensor images. The technique extends the traditional anisotropic diffusion filtering method by allowing isotropic smoothing within homogeneous regions and anisotropic smoothing along structure boundaries. This is particularly useful for smoothing diffusion tensor images in which direction information contained in the tensor needs to be restored following noise corruption and preserved around tissue boundaries. The effectiveness of this technique is quantitatively studied with experiments on simulated and human in vivo diffusion tensor data. Illustrative results demonstrate that the anisotropic smoothing technique developed can significantly reduce the impact of noise on the direction as well as anisotropy measures of the diffusion tensor images.
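
    For contrast with the tensor-valued scheme described above, a minimal scalar sketch of the underlying idea (the classic Perona-Malik scheme with invented parameters, not the authors' method) shows how an edge-stopping function suppresses smoothing across boundaries while averaging within homogeneous regions:

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=0.1, dt=0.2):
    """Scalar anisotropic diffusion: isotropic smoothing where the image
    is flat, little smoothing across strong gradients (edges)."""
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)   # edge-stopping function
    for _ in range(n_iter):
        # differences to the four nearest neighbours (periodic wrap)
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

# noisy step edge: noise is suppressed, the edge survives
rng = np.random.default_rng(0)
img = np.zeros((32, 32))
img[:, 16:] = 1.0
noisy = img + 0.05 * rng.standard_normal(img.shape)
smoothed = perona_malik(noisy)
```

The tensor-image case adds the complication that the quantity being smoothed is a full diffusion tensor per voxel, so direction information must be restored and preserved, not just intensities.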

  9. A Communication-Optimal Framework for Contracting Distributed Tensors

    SciTech Connect

    Rajbhandari, Samyam; NIkam, Akshay; Lai, Pai-Wei; Stock, Kevin; Krishnamoorthy, Sriram; Sadayappan, Ponnuswamy

    2014-11-16

    Tensor contractions are extremely compute intensive generalized matrix multiplication operations encountered in many computational science fields, such as quantum chemistry and nuclear physics. Unlike distributed matrix multiplication, which has been extensively studied, limited work has been done in understanding distributed tensor contractions. In this paper, we characterize distributed tensor contraction algorithms on torus networks. We develop a framework with three fundamental communication operators to generate communication-efficient contraction algorithms for arbitrary tensor contractions. We show that for a given amount of memory per processor, our framework is communication optimal for all tensor contractions. We demonstrate performance and scalability of our framework on up to 262,144 cores of BG/Q supercomputer using five tensor contraction examples.
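
    To make the operation concrete (a generic example, unrelated to the paper's distributed framework), contracting two 4-index tensors over shared indices k, l is a generalized matrix multiplication over flattened index groups:

```python
import numpy as np

# The kind of kernel common in coupled-cluster quantum chemistry codes:
# C[a, b, i, j] = sum over k, l of A[a, b, k, l] * B[k, l, i, j]
rng = np.random.default_rng(1)
A = rng.standard_normal((3, 4, 5, 6))   # A[a, b, k, l]
B = rng.standard_normal((5, 6, 7, 2))   # B[k, l, i, j]

C = np.einsum("abkl,klij->abij", A, B)  # C[a, b, i, j]

# Equivalent view: reshape to matrices and multiply, showing that a
# contraction is a matrix product over the flattened index groups.
C_mat = (A.reshape(3 * 4, 5 * 6) @ B.reshape(5 * 6, 7 * 2)).reshape(3, 4, 7, 2)
assert np.allclose(C, C_mat)
```

Distributing this operation is what makes the problem hard: the communication pattern depends on how the index groups are laid out across the torus network.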

  10. Emergent classical geometries on boundaries of randomly connected tensor networks

    NASA Astrophysics Data System (ADS)

    Chen, Hua; Sasakura, Naoki; Sato, Yuki

    2016-03-01

    It is shown that classical spaces with geometries emerge on boundaries of randomly connected tensor networks with appropriately chosen tensors in the thermodynamic limit. With variation of the tensors the dimensions of the spaces can be freely chosen, and the geometries—which are curved in general—can be varied. We give the explicit solvable examples of emergent flat tori in arbitrary dimensions, and the correspondence from the tensors to the geometries for general curved cases. The perturbative dynamics in the emergent space is shown to be described by an effective action which is invariant under the spatial diffeomorphism due to the underlying orthogonal group symmetry of the randomly connected tensor network. It is also shown that there are various phase transitions among spaces, including extended and point-like ones, under continuous change of the tensors.

  11. Irreducible Cartesian tensors of highest weight, for arbitrary order

    NASA Astrophysics Data System (ADS)

    Mane, S. R.

    2016-03-01

    A closed form expression is presented for the irreducible Cartesian tensor of highest weight, for arbitrary order. Two proofs are offered, one employing bookkeeping of indices and, after establishing the connection with the so-called natural tensors and their projection operators, the other one employing purely coordinate-free tensor manipulations. Some theorems and formulas in the published literature are generalized from SO(3) to SO(n), for dimensions n ≥ 3.
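
    The rank-2 case, which the paper generalizes to arbitrary order, is easy to sketch: the highest-weight irreducible part of a Cartesian tensor in n >= 3 dimensions is its symmetric traceless component (a minimal illustration, not the paper's closed-form expression for general order):

```python
import numpy as np

def highest_weight_rank2(T):
    """Symmetric traceless part of a rank-2 Cartesian tensor in n dims."""
    n = T.shape[0]
    sym = 0.5 * (T + T.T)
    return sym - np.trace(T) / n * np.eye(n)

rng = np.random.default_rng(2)
T = rng.standard_normal((4, 4))          # n = 4 >= 3
hw = highest_weight_rank2(T)
antisym = 0.5 * (T - T.T)
trace_part = np.trace(T) / 4 * np.eye(4)

# the three pieces reassemble the original tensor exactly
assert np.allclose(T, hw + antisym + trace_part)
```

For higher orders the traceless projection involves nested subtraction of traces over all index pairs, which is exactly where a closed-form highest-weight expression becomes valuable.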

  12. Construction of energy-momentum tensor of gravitation

    NASA Astrophysics Data System (ADS)

    Bamba, Kazuharu; Shimizu, Katsutaro

    2016-10-01

    We construct the gravitational energy-momentum tensor in general relativity through the Noether theorem. In particular, we explicitly demonstrate that the constructed quantity can vary as a tensor under the general coordinate transformation. Furthermore, we verify that the energy-momentum conservation is satisfied because one of the two indices of the energy-momentum tensor should be in the local Lorentz frame. It is also shown that the gravitational energy and the matter one cancel out in certain space-times.

  13. Simultaneous Analysis and Quality Assurance for Diffusion Tensor Imaging

    PubMed Central

    Lauzon, Carolyn B.; Asman, Andrew J.; Esparza, Michael L.; Burns, Scott S.; Fan, Qiuyun; Gao, Yurui; Anderson, Adam W.; Davis, Nicole; Cutting, Laurie E.; Landman, Bennett A.

    2013-01-01

    Diffusion tensor imaging (DTI) enables non-invasive, cyto-architectural mapping of in vivo tissue microarchitecture through voxel-wise mathematical modeling of multiple magnetic resonance imaging (MRI) acquisitions, each differently sensitized to water diffusion. DTI computations are fundamentally estimation processes and are sensitive to noise and artifacts. Despite widespread adoption in the neuroimaging community, maintaining consistent DTI data quality remains challenging given the propensity for patient motion, artifacts associated with fast imaging techniques, and the possibility of hardware changes/failures. Furthermore, the quantity of data acquired per voxel, the non-linear estimation process, and numerous potential use cases complicate traditional visual data inspection approaches. Currently, quality inspection of DTI data has relied on visual inspection and individual processing in DTI analysis software programs (e.g. DTIPrep, DTI-studio). However, recent advances in applied statistical methods have yielded several different metrics to assess noise level, artifact propensity, quality of tensor fit, variance of estimated measures, and bias in estimated measures. To date, these metrics have been largely studied in isolation. Herein, we select complementary metrics for integration into an automatic DTI analysis and quality assurance pipeline. The pipeline completes in 24 hours, stores statistical outputs, and produces a graphical summary quality analysis (QA) report. We assess the utility of this streamlined approach for empirical quality assessment on 608 DTI datasets from pediatric neuroimaging studies. The efficiency and accuracy of quality analysis using the proposed pipeline is compared with quality analysis based on visual inspection. The unified pipeline is found to save a statistically significant amount of time (over 70%) while improving the consistency of QA between a DTI expert and a pool of research associates. Projection of QA metrics to a low

  14. 3D structure tensor analysis of light microscopy data for validating diffusion MRI.

    PubMed

    Khan, Ahmad Raza; Cornea, Anda; Leigland, Lindsey A; Kohama, Steven G; Jespersen, Sune Nørhøj; Kroenke, Christopher D

    2015-05-01

    Diffusion magnetic resonance imaging (d-MRI) is a powerful non-invasive and non-destructive technique for characterizing brain tissue on the microscopic scale. However, the lack of validation of d-MRI by independent experimental means poses an obstacle to accurate interpretation of data acquired using this method. Recently, structure tensor analysis has been applied to light microscopy images, and this technique holds promise to be a powerful validation strategy for d-MRI. Advantages of this approach include its similarity to d-MRI in terms of averaging the effects of a large number of cellular structures, and its simplicity, which enables it to be implemented in a high-throughput manner. However, a drawback of previous implementations of this technique arises from it being restricted to 2D. As a result, structure tensor analyses have been limited to tissue sectioned in a direction orthogonal to the direction of interest. Here we describe the analytical framework for extending structure tensor analysis to 3D, and utilize the results to analyze serial image "stacks" acquired with confocal microscopy of rhesus macaque hippocampal tissue. Implementation of 3D structure tensor procedures requires removal of sources of anisotropy introduced in tissue preparation and confocal imaging. This is accomplished with image processing steps to mitigate the effects of anisotropic tissue shrinkage, and the effects of anisotropy in the point spread function (PSF). In order to address the latter confound, we describe procedures for measuring the dependence of PSF anisotropy on distance from the microscope objective within tissue. Prior to microscopy, ex vivo d-MRI measurements performed on the hippocampal tissue revealed three regions of tissue with mutually orthogonal directions of least restricted diffusion that correspond to CA1, alveus and inferior longitudinal fasciculus. We demonstrate the ability of 3D structure tensor analysis to identify structure tensor orientations that
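
    The 2-D version of the method is compact enough to sketch (illustration only; the paper's contribution is the 3-D extension with PSF and shrinkage corrections). Averaged outer products of image gradients form the structure tensor, whose dominant eigenvector gives the mean gradient direction, perpendicular to fiber-like structure:

```python
import numpy as np

def structure_tensor_orientation(img):
    """Dominant gradient direction (x, y) from the averaged 2-D structure
    tensor J = [[<gx*gx>, <gx*gy>], [<gx*gy>, <gy*gy>]]."""
    gy, gx = np.gradient(img.astype(float))   # derivatives along rows, cols
    J = np.array([[np.mean(gx * gx), np.mean(gx * gy)],
                  [np.mean(gx * gy), np.mean(gy * gy)]])
    w, v = np.linalg.eigh(J)                  # eigenvalues in ascending order
    return v[:, -1]                           # eigenvector of largest eigenvalue

# vertical stripes: intensity varies along x only, so the dominant
# gradient direction is the x axis (perpendicular to the stripes)
img = np.tile(np.sin(2 * np.pi * np.arange(32) / 8.0), (32, 1))
d = structure_tensor_orientation(img)
```

In practice the averaging window is local rather than global, yielding a tensor field whose anisotropy can be compared voxel-by-voxel against the d-MRI diffusion tensor.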

  15. Tensor network characterization of superconducting circuits

    NASA Astrophysics Data System (ADS)

    Duclos-Cianci, Guillaume; Poulin, David; Najafi-Yazdi, Alireza

    Superconducting circuits are promising candidates in the development of reliable quantum computing devices. In principle, one can obtain the Hamiltonian of a generic superconducting circuit and solve for its eigenvalues to obtain its energy spectrum. In practice, however, the computational cost of calculating eigenvalues of a complex device with many degrees of freedom can become prohibitively expensive. In the present work, we investigate the application of tensor network algorithms to enable efficient and accurate characterization of superconducting circuits comprised of many components. Suitable validation test cases are performed to study the accuracy, computational efficiency and limitations of the proposed approach.

  16. Ground state fidelity from tensor network representations.

    PubMed

    Zhou, Huan-Qiang; Orús, Roman; Vidal, Guifre

    2008-02-29

    For any D-dimensional quantum lattice system, the fidelity between two ground state many-body wave functions is mapped onto the partition function of a D-dimensional classical statistical vertex lattice model with the same lattice geometry. The fidelity per lattice site, analogous to the free energy per site, is well defined in the thermodynamic limit and can be used to characterize the phase diagram of the model. We explain how to compute the fidelity per site in the context of tensor network algorithms, and demonstrate the approach by analyzing the two-dimensional quantum Ising model with transverse and parallel magnetic fields. PMID:18352611
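
    For small systems, the fidelity per site can be checked directly by exact diagonalization rather than tensor networks (a brute-force sketch with invented field values, not the paper's algorithm):

```python
import numpy as np
from functools import reduce

SX = np.array([[0.0, 1.0], [1.0, 0.0]])
SZ = np.array([[1.0, 0.0], [0.0, -1.0]])
I2 = np.eye(2)

def op_at(op, site, n):
    """Embed a single-site operator at `site` in an n-site chain."""
    return reduce(np.kron, [op if k == site else I2 for k in range(n)])

def ising_ground_state(n, h):
    """Ground state of the open transverse-field Ising chain
    H = -sum sz.sz - h * sum sx."""
    H = sum(-op_at(SZ, i, n) @ op_at(SZ, i + 1, n) for i in range(n - 1))
    H = H + sum(-h * op_at(SX, i, n) for i in range(n))
    vals, vecs = np.linalg.eigh(H)
    return vecs[:, 0]

n = 8
psi1 = ising_ground_state(n, 0.5)
psi2 = ising_ground_state(n, 0.6)
fidelity = abs(psi1 @ psi2)           # overlap of the two ground states
fid_per_site = fidelity ** (1.0 / n)  # stays well defined as n grows
```

The raw fidelity decays exponentially with system size (the orthogonality catastrophe), which is exactly why the per-site quantity is the one with a sensible thermodynamic limit.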

  17. On the dynamic viscous permeability tensor symmetry.

    PubMed

    Perrot, Camille; Chevillotte, Fabien; Panneton, Raymond; Allard, Jean-François; Lafarge, Denis

    2008-10-01

    Based on a direct generalization of a proof given by Torquato for symmetry property in static regime, this express letter clarifies the reasons why the dynamic permeability tensor is symmetric for spatially periodic structures having symmetrical axes which do not coincide with orthogonal pairs being perpendicular to the axis of three-, four-, and sixfold symmetry. This somewhat nonintuitive property is illustrated by providing detailed numerical examples for a hexagonal lattice of solid cylinders in the asymptotic and frequency dependent regimes. It may be practically useful for numerical implementation validation and/or convergence assessment.

  18. Beam-plasma dielectric tensor with Mathematica

    NASA Astrophysics Data System (ADS)

    Bret, A.

    2007-03-01

    We present a Mathematica notebook allowing for the symbolic calculation of the 3×3 dielectric tensor of an electron-beam plasma system in the fluid approximation. Calculation is detailed for a cold relativistic electron beam entering a cold magnetized plasma, and for arbitrarily oriented wave vectors. We show how one can elaborate on this example to account for temperatures, arbitrarily oriented magnetic field or a different kind of plasma.

    Program summary
    Title of program: Tensor
    Catalog identifier: ADYT_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADYT_v1_0
    Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
    Computer for which the program is designed and others on which it has been tested: Any computer running Mathematica 4.1. Tested on DELL Dimension 5100 and IBM ThinkPad T42.
    Installations: ETSI Industriales, Universidad Castilla la Mancha, Ciudad Real, Spain
    Operating system under which the program has been tested: Windows XP Pro
    Programming language used: Mathematica 4.1
    Memory required to execute with typical data: 7.17 Mbytes
    No. of bytes in distributed program, including test data, etc.: 33 439
    No. of lines in distributed program, including test data, etc.: 3169
    Distribution format: tar.gz
    Nature of the physical problem: The dielectric tensor of a relativistic beam plasma system may be quite involved to calculate symbolically when considering a magnetized plasma, kinetic pressure, collisions between species, and so on. The present Mathematica notebook performs the symbolic computation in terms of some usual dimensionless variables.
    Method of solution: The linearized relativistic fluid equations are directly entered and solved by Mathematica to express the first-order expression of the current. This expression is then introduced into a combination of Faraday and Ampère-Maxwell's equations to give the dielectric tensor. Some additional manipulations are needed to express the result in terms of the
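
    A far simpler relative of what the notebook computes symbolically can be written down numerically: the cold magnetized-plasma dielectric tensor in Stix form for a single electron species (this sketch is an illustrative assumption, not part of the published notebook, which treats the full beam-plasma system):

```python
import numpy as np

def stix_tensor(omega, omega_p, omega_c):
    """Cold magnetized-plasma dielectric tensor (Stix form), B along z:
    eps = [[S, -iD, 0], [iD, S, 0], [0, 0, P]] for one electron species."""
    S = 1.0 - omega_p ** 2 / (omega ** 2 - omega_c ** 2)
    D = omega_c * omega_p ** 2 / (omega * (omega ** 2 - omega_c ** 2))
    P = 1.0 - omega_p ** 2 / omega ** 2
    return np.array([[S, -1j * D, 0.0],
                     [1j * D, S, 0.0],
                     [0.0, 0.0, P]])

# frequencies expressed in units of the plasma frequency
eps = stix_tensor(omega=2.0, omega_p=1.0, omega_c=0.5)
```

In the unmagnetized limit omega_c -> 0 the tensor collapses to the scalar 1 - omega_p^2/omega^2 times the identity, a quick sanity check on the implementation.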

  19. A Tensor Hyperviscosity Model in Kull

    SciTech Connect

    Ulitsky, M

    2005-06-28

    A tensor artificial hyper-viscosity model has recently been added to the available list of artificial viscosities that one can chose from when running KULL. This model is based on the theoretical work of A. Cook and B. Cabot, and the numerical results of running the model in the high-order spectral/compact finite difference framework of the Eulerian MIRANDA code. The viscosity model is based on filtering a Laplacian or bi-Laplacian of the strain rate magnitude, and it was desired to investigate whether the formalism that worked so well for the MIRANDA research code could be carried over to an unstructured ALE code like KULL.

  20. The Topology of Three-Dimensional Symmetric Tensor Fields

    NASA Technical Reports Server (NTRS)

    Lavin, Yingmei; Levy, Yuval; Hesselink, Lambertus

    1994-01-01

    We study the topology of 3-D symmetric tensor fields. The goal is to represent their complex structure by a simple set of carefully chosen points and lines analogous to vector field topology. The basic constituents of tensor topology are the degenerate points, or points where eigenvalues are equal to each other. First, we introduce a new method for locating 3-D degenerate points. We then extract the topological skeletons of the eigenvector fields and use them for a compact, comprehensive description of the tensor field. Finally, we demonstrate the use of tensor field topology for the interpretation of the two-force Boussinesq problem.
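
    In two dimensions the idea reduces to a few lines (a toy analogue; the paper's method treats the substantially harder 3-D case): a degenerate point is where the eigenvalue gap of the symmetric tensor vanishes, which for a 2x2 tensor means T11 = T22 and T12 = 0 simultaneously. The linear field below is hypothetical, chosen to place a degenerate point at the origin:

```python
import numpy as np

def eig_gap(T):
    """Eigenvalue separation of a symmetric 2x2 tensor:
    sqrt((T11 - T22)^2 + 4*T12^2)."""
    return np.sqrt((T[0, 0] - T[1, 1]) ** 2 + 4.0 * T[0, 1] ** 2)

def field(x, y):
    # hypothetical linear symmetric tensor field, degenerate at the origin
    return np.array([[x, y], [y, -x]])

# scan a grid for the point of minimum eigenvalue gap
xs = np.linspace(-1.0, 1.0, 201)
gaps = np.array([[eig_gap(field(x, y)) for x in xs] for y in xs])
iy, ix = np.unravel_index(np.argmin(gaps), gaps.shape)
degenerate_at = (xs[ix], xs[iy])
```

In 3-D the degenerate set is characterized by a discriminant of the cubic characteristic polynomial rather than this simple quadratic gap, which is why locating such points is a contribution of the paper.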

  1. Low-tubal-rank tensor completion using alternating minimization

    NASA Astrophysics Data System (ADS)

    Liu, Xiao-Yang; Aeron, Shuchin; Aggarwal, Vaneet; Wang, Xiaodong

    2016-05-01

    Low-tubal-rank tensors have recently been proposed to model real-world multidimensional data. In this paper, we study the low-tubal-rank tensor completion problem, i.e., recovering a third-order tensor from a subset of its elements selected uniformly at random. We propose a fast iterative algorithm, called Tubal-Alt-Min, inspired by a similar approach to low-rank matrix completion. The unknown low-tubal-rank tensor is parameterized as the product of two much smaller tensors, so that the low-tubal-rank property is automatically incorporated, and Tubal-Alt-Min alternates between estimating those two factors using tensor least-squares minimization. We note that tensor least-squares minimization differs from its matrix counterpart and is nontrivial; this paper gives a routine to carry out this operation. Further, on both synthetic data and real-world video data, evaluation results show that, compared with tensor nuclear-norm minimization, the proposed algorithm reduces the recovery error by orders of magnitude with smaller running time at higher sampling rates.
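
    The algebra underlying the tubal-rank model is the t-product, which multiplies third-order tensors slice-wise in the Fourier domain along the third (tube) dimension. The sketch below illustrates that product and the two-factor parameterization (an illustration of the model only, not the Tubal-Alt-Min algorithm):

```python
import numpy as np

def t_product(A, B):
    """t-product of third-order tensors: FFT along the third axis,
    frontal-slice matrix products in the Fourier domain, inverse FFT."""
    n3 = A.shape[2]
    Af = np.fft.fft(A, axis=2)
    Bf = np.fft.fft(B, axis=2)
    Cf = np.stack([Af[:, :, k] @ Bf[:, :, k] for k in range(n3)], axis=2)
    return np.real(np.fft.ifft(Cf, axis=2))

# a low-tubal-rank tensor expressed as the product of two smaller tensors,
# mirroring the parameterization used by alternating minimization
rng = np.random.default_rng(3)
X = rng.standard_normal((6, 2, 4))  # 6 x 2 x 4
Y = rng.standard_normal((2, 5, 4))  # 2 x 5 x 4
M = t_product(X, Y)                 # 6 x 5 x 4, tubal rank at most 2
```

With a third dimension of size 1 the t-product reduces to ordinary matrix multiplication, which is a convenient correctness check.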

  2. A general theory of linear cosmological perturbations: scalar-tensor and vector-tensor theories

    NASA Astrophysics Data System (ADS)

    Lagos, Macarena; Baker, Tessa; Ferreira, Pedro G.; Noller, Johannes

    2016-08-01

    We present a method for parametrizing linear cosmological perturbations of theories of gravity, around homogeneous and isotropic backgrounds. The method is sufficiently general and systematic that it can be applied to theories with any degrees of freedom (DoFs) and arbitrary gauge symmetries. In this paper, we focus on scalar-tensor and vector-tensor theories, invariant under linear coordinate transformations. In the case of scalar-tensor theories, we use our framework to recover the simple parametrizations of linearized Horndeski and ``Beyond Horndeski'' theories, and also find higher-derivative corrections. In the case of vector-tensor theories, we first construct the most general quadratic action for perturbations that leads to second-order equations of motion, which propagates two scalar DoFs. Then we specialize to the case in which the vector field is time-like (à la Einstein-Aether gravity), where the theory only propagates one scalar DoF. As a result, we identify the complete forms of the quadratic actions for perturbations, and the number of free parameters that need to be defined, to cosmologically characterize these two broad classes of theories.

  3. Physical states in the canonical tensor model from the perspective of random tensor networks

    NASA Astrophysics Data System (ADS)

    Narain, Gaurav; Sasakura, Naoki; Sato, Yuki

    2015-01-01

    Tensor models, generalizations of matrix models, are studied with the aim of describing quantum gravity in dimensions larger than two. Among them, the canonical tensor model is formulated as a totally constrained system with first-class constraints, whose algebra resembles the Dirac algebra of general relativity. When quantized, the physical states are defined to be those annihilated by the quantized constraints. In explicit representations, the constraint equations are a set of partial differential equations for the physical wave functions, which do not seem straightforward to solve due to their non-linear character. In this paper, after providing some explicit solutions for N = 2, 3, we show that certain scale-free integration of partition functions of statistical systems on random networks (or, more generally, random tensor networks) provides a series of solutions for general N. Then, by generalizing this form, we also obtain various solutions for general N. Moreover, we show that the solutions for the cases with a cosmological constant can be obtained from those with no cosmological constant for increased N. This would imply the interesting possibility that a cosmological constant can always be absorbed into the dynamics and is not an input parameter in the canonical tensor model. We also observe the possibility of symmetry enhancement at N = 3, and comment on an extension of the Airy function related to the solutions.

  4. Application of modern tensor calculus to engineered domain structures. 2. Tensor distinction of domain states.

    PubMed

    Kopský, Vojtech

    2006-03-01

    The theory of domain states is reviewed as a prerequisite for consideration of the tensorial distinction of domain states. It is then shown that the parameters of the first domain in a ferroic phase transition from a set of isomorphic groups of the same oriented Laue class can be systematically and suitably represented in terms of typical variables. On replacing these variables by actual tensor components according to the previous paper, we can reveal the tensorial parameters associated with each particular symmetry descent. Parameters are distinguished by the ireps to which they belong, and this can be used to determine which of them are the principal parameters that distinguish all domain states, in contrast to secondary parameters, which are common to several domain states. In general, the parameters are expressed as the covariant components of the tensors. A general procedure is described that is designed to transform the results to Cartesian components; it consists of two parts, the first called the labelling of covariants and its inverse called the conversion equations. Transformation of parameters from the first domain state to other states is now reduced to irreducible subspaces whose maximal dimension is three, in contrast with the higher dimensions of tensor spaces. With this method, we can explicitly calculate tensor parameters for all domain states. To find the distinction of pairs of domain states, it is suitable to use the concept of the twinning group, which is briefly described. PMID:16489243

  5. [Effects of aquatic plants during their decay and decomposition on water quality].

    PubMed

    Tang, Jin-Yan; Cao, Pei-Pei; Xu, Chi; Liu, Mao-Song

    2013-01-01

    Taking 6 aquatic plant species as test objects, a 64-day decomposition experiment was conducted to study the temporal variation patterns of nutrient concentrations in the water body during aquatic plant decomposition. Decomposition rates differed considerably among the 6 species: floating-leaved plants had the highest decomposition rate, followed by submerged plants and then emergent plants. The effects of the species on water quality during decomposition also differed and were related to plant biomass density. During the decomposition of Phragmites australis, the water body had the lowest concentrations of chemical oxygen demand, total nitrogen, and total phosphorus. In the late decomposition period of Zizania latifolia, chemical oxygen demand and total nitrogen concentrations increased, resulting in deterioration of water quality. During the decomposition of Nymphoides peltatum and Nelumbo nucifera, chemical oxygen demand and total nitrogen concentrations were higher than during the decomposition of the other test plants, whereas during the decomposition of Potamogeton crispus and Myriophyllum verticillatum, the water body had the highest concentrations of ammonium, nitrate, and total phosphorus. For a given species, the main water quality indices showed similar variation trends under different biomass densities. These results suggest that moderate amounts of plant residues could effectively promote nitrogen and phosphorus cycling in the water body, reduce its nitrate concentration to some extent, and decrease its nitrogen load. PMID:23717994
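The single-exponential (Olson) decay model is the standard way to turn litter-bag mass-loss data like these into a comparable decomposition rate constant. A minimal sketch; the model choice and the sample numbers are illustrative assumptions, not values from the study:

```python
import math

def decay_constant(m0, mt, t_days):
    """Single-exponential (Olson) decay constant k (per day) from
    initial mass m0 and remaining mass mt after t_days."""
    return -math.log(mt / m0) / t_days

def mass_remaining(m0, k, t_days):
    """Mass remaining after t_days under exponential decay."""
    return m0 * math.exp(-k * t_days)

# Hypothetical numbers: a floating-leaved species losing 60% of its
# mass over the 64-day experiment decays faster than an emergent
# species losing only 20%.
k_floating = decay_constant(10.0, 4.0, 64)   # 60% mass loss
k_emergent = decay_constant(10.0, 8.0, 64)   # 20% mass loss
print(k_floating > k_emergent)  # True
```

Fitting k per species makes rates comparable even when retrieval times differ between experiments.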

  7. Exploring Multimodal Data Fusion Through Joint Decompositions with Flexible Couplings

    NASA Astrophysics Data System (ADS)

    Cabral Farias, Rodrigo; Cohen, Jeremy Emile; Comon, Pierre

    2016-09-01

    A Bayesian framework is proposed to define flexible coupling models for joint tensor decompositions of multiple data sets. Under this framework, a natural formulation of the data fusion problem is to cast it as joint maximum a posteriori (MAP) estimation. Data-driven scenarios of joint posterior distributions are provided, including general Gaussian priors and non-Gaussian coupling priors. We present and discuss implementation issues of algorithms used to obtain the joint MAP estimator, and we also show how this framework can be adapted to tackle the problem of joint decompositions of large data sets. In the case of a conditional Gaussian coupling with a linear transformation, we give theoretical bounds on the data fusion performance using the Bayesian Cramér-Rao bound. Simulations are reported for hybrid coupling models ranging from simple additive Gaussian models, to Gamma-type models with positive variables, to the coupling of data sets that are inherently of different sizes due to the different resolutions of the measurement devices.
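To make the coupling idea concrete, here is a minimal sketch (our construction, not the authors' algorithm) of a MAP-style coupled factorization of two matrices sharing a latent factor: the Gaussian coupling prior on ||B1 - B2||^2 shows up as a ridge term in alternating least squares.

```python
import numpy as np

rng = np.random.default_rng(0)

def coupled_als(X1, X2, rank, lam=1.0, iters=50):
    """MAP-style coupled factorization X1 ~ A @ B1.T, X2 ~ C @ B2.T
    with a Gaussian coupling prior lam * ||B1 - B2||^2.
    A simplified matrix (2-way) sketch of the joint-decomposition idea."""
    m1, n = X1.shape
    m2, _ = X2.shape
    A = rng.standard_normal((m1, rank))
    C = rng.standard_normal((m2, rank))
    B1 = rng.standard_normal((n, rank))
    B2 = rng.standard_normal((n, rank))
    I = np.eye(rank)
    for _ in range(iters):
        # Each update exactly minimizes the joint objective in one block.
        A = X1 @ B1 @ np.linalg.inv(B1.T @ B1 + 1e-9 * I)
        C = X2 @ B2 @ np.linalg.inv(B2.T @ B2 + 1e-9 * I)
        B1 = (X1.T @ A + lam * B2) @ np.linalg.inv(A.T @ A + lam * I)
        B2 = (X2.T @ C + lam * B1) @ np.linalg.inv(C.T @ C + lam * I)
    loss = (np.linalg.norm(X1 - A @ B1.T) ** 2
            + np.linalg.norm(X2 - C @ B2.T) ** 2
            + lam * np.linalg.norm(B1 - B2) ** 2)
    return A, B1, C, B2, loss

# Two synthetic data sets of different sizes sharing one latent factor.
B = rng.standard_normal((20, 3))
X1 = rng.standard_normal((15, 3)) @ B.T
X2 = rng.standard_normal((12, 3)) @ B.T
*_, loss = coupled_als(X1, X2, rank=3)
print(round(float(loss), 4))
```

Because each block update is an exact minimizer, the joint objective decreases monotonically, which is the practical appeal of this formulation.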

  8. Efficient elastic reverse-time migration for the decomposed P-wavefield using stress tensor in the time domain

    NASA Astrophysics Data System (ADS)

    Ha, Jiho; Shin, Sungryul; Shin, Changsoo; Chung, Wookeen

    2015-05-01

    Because complex mixed waves are typically generated in elastic media, wavefield decomposition is required for such media to obtain accurate migration images. In isotropic media, this is achieved according to the Helmholtz decomposition theorem; in particular, the divergence operator is commonly applied to P-wavefield decomposition. In this study, two types of elastic reverse-time migration algorithms are proposed for decomposition of the P-wavefield without requiring the divergence operator. The first algorithm formulates the stress tensor from the spatially differentiated displacement according to the stress-strain relationship and uses it to construct an imaging condition for the decomposed P-wavefield; we demonstrate this approach through numerical testing. The second algorithm obtains emphasized interfaces by applying the absolute value function to the decomposed wavefield in the imaging condition. Because reverse-time migration can be defined by a zero-lag cross-correlation between the partial-derivative wavefield and the observed wavefield data, we derive the virtual source to construct the partial-derivative wavefield based on a 2D staggered-grid finite-difference modeling method in the time domain. The partial-derivative wavefield explicitly computed from virtual sources with the stress tensor agrees with the partial-derivative wavefield computed directly from the residual between wavefields modeled with and without a perturbation point in the subsurface. Moreover, the back-propagation technique is used to enhance computational efficiency. Numerical tests validate the two imaging conditions: the migration images created with them represent the subsurface structure accurately. Thus, we confirm the feasibility of obtaining migration images of the decomposed P-wavefield without applying the divergence operator.
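The equivalence this approach leans on can be checked numerically: in a 2D isotropic medium, the trace of the stress tensor is proportional to the divergence of the displacement, so the P-wavefield usually extracted with a divergence operator can also be read off the stress components. A small sketch with assumed Lamé parameters and a synthetic displacement field (our illustration, not the paper's algorithm):

```python
import numpy as np

lam, mu = 2.0, 1.0          # hypothetical Lame parameters
n, h = 64, 0.1              # grid size and spacing
x = np.arange(n) * h
X, Z = np.meshgrid(x, x, indexing="ij")

# A smooth synthetic displacement field (ux, uz).
ux = np.sin(X) * np.cos(Z)
uz = np.cos(X) * np.sin(Z)

# Central-difference spatial derivatives.
dux_dx = np.gradient(ux, h, axis=0)
duz_dz = np.gradient(uz, h, axis=1)

# Divergence-based P-wavefield proxy.
div_u = dux_dx + duz_dz

# Stress components from the isotropic stress-strain relationship.
s_xx = (lam + 2 * mu) * dux_dx + lam * duz_dz
s_zz = (lam + 2 * mu) * duz_dz + lam * dux_dx
stress_trace = s_xx + s_zz

# The stress trace equals 2*(lam + mu) times the divergence, so no
# explicit divergence operator is needed to isolate the P-wavefield.
print(np.allclose(stress_trace, 2 * (lam + mu) * div_u))  # True
```

The identity is exact term by term: s_xx + s_zz = (2*lam + 2*mu) * (dux/dx + duz/dz).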

  9. Long-term litter decomposition controlled by manganese redox cycling.

    PubMed

    Keiluweit, Marco; Nico, Peter; Harmon, Mark E; Mao, Jingdong; Pett-Ridge, Jennifer; Kleber, Markus

    2015-09-22

    Litter decomposition is a keystone ecosystem process impacting nutrient cycling and productivity, soil properties, and the terrestrial carbon (C) balance, but the factors regulating decomposition rate are still poorly understood. Traditional models assume that the rate is controlled by litter quality, relying on parameters such as lignin content as predictors. However, a strong correlation has been observed between the manganese (Mn) content of litter and decomposition rates across a variety of forest ecosystems. Here, we show that long-term litter decomposition in forest ecosystems is tightly coupled to Mn redox cycling. Over 7 years of litter decomposition, microbial transformation of litter was paralleled by variations in Mn oxidation state and concentration. A detailed chemical imaging analysis of the litter revealed that fungi recruit and redistribute unreactive Mn(2+) provided by fresh plant litter to produce oxidative Mn(3+) species at sites of active decay, with Mn eventually accumulating as insoluble Mn(3+/4+) oxides. Formation of reactive Mn(3+) species coincided with the generation of aromatic oxidation products, providing direct proof of the previously posited role of Mn(3+)-based oxidizers in the breakdown of litter. Our results suggest that the litter-decomposing machinery at our coniferous forest site depends on the ability of plants and microbes to supply, accumulate, and regenerate short-lived Mn(3+) species in the litter layer. This observation indicates that biogeochemical constraints on bioavailability, mobility, and reactivity of Mn in the plant-soil system may have a profound impact on litter decomposition rates.

  10. Understanding coal using thermal decomposition and fourier transform infrared spectroscopy

    NASA Astrophysics Data System (ADS)

    Solomon, P. R.; Hamblen, D. G.

    1981-02-01

    Fourier Transform Infrared Spectroscopy (FTIR) is being used to provide understanding of the organic structure of coals and coal thermal decomposition products. The research has developed a relationship between the coal organic structure and the products of thermal decomposition. The work has also led to the discovery that many of the coal structural elements are preserved in the heavy molecular weight products (tar) released in thermal decomposition, and that careful analysis of these products in relation to the parent coal can supply clues to the original structure. Quantitative FTIR spectra for coals, tars and chars are used to determine concentrations of hydroxyl, aliphatic and aromatic hydrogen. Concentrations of aliphatic carbon are computed using an assumed aliphatic stoichiometry; aromatic carbon concentrations are determined by difference. The values are in good agreement with data determined by 13C and proton NMR. Analysis of the solid products produced by successive stages in the thermal decomposition provides information on the changes in the chemical bonds occurring during the process. Time-resolved infrared scans (129 msec/scan) taken during the thermal decomposition provide data on the amount, composition and rate of evolution of light gas species. The relationship between the evolved light species and their sources in the coal is developed by comparing the rate of evolution with the rate of change in the chemical bonds. With the application of these techniques, a general kinetic model has been developed which relates the products of thermal decomposition to the organic structure of the parent coal.
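The "aromatic carbon by difference" bookkeeping described above can be sketched as follows; the CH2 stoichiometry (two aliphatic hydrogens per aliphatic carbon) and the sample numbers are illustrative assumptions, not values from the paper.

```python
# Atomic masses (g/mol).
M_C, M_H = 12.011, 1.008

def aromatic_carbon(total_c_wt, aliphatic_h_wt, h_per_c=2.0):
    """Aromatic carbon (wt%) = total carbon minus aliphatic carbon,
    where aliphatic carbon is inferred from the FTIR aliphatic-hydrogen
    measurement via an assumed CH_x stoichiometry (default x = 2)."""
    aliphatic_c = aliphatic_h_wt * M_C / (h_per_c * M_H)
    return total_c_wt - aliphatic_c

# Illustrative coal: 80 wt% total C, 3.0 wt% aliphatic H from FTIR.
print(round(aromatic_carbon(80.0, 3.0), 1))  # 62.1
```

Varying `h_per_c` shows how sensitive the by-difference aromatic carbon is to the assumed aliphatic stoichiometry.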

  12. Induced representations of tensors and spinors of any rank in the Stueckelberg-Horwitz-Piron theory

    SciTech Connect

    Horwitz, Lawrence P.; Zeilig-Hess, Meir

    2015-09-15

    We show that a modification of Wigner’s induced representation for the description of a relativistic particle with spin can be used to construct spinors and tensors of arbitrary rank, with invariant decomposition over angular momentum. In particular, scalar and vector fields, as well as the representations of their transformations, are constructed. The method that is developed here admits the construction of wave packets and states of a many body relativistic system with definite total angular momentum. Furthermore, a Pauli-Lubanski operator is constructed on the orbit of the induced representation which provides a Casimir operator for the Poincaré group and which contains the physical intrinsic angular momentum of the particle covariantly.

  13. Gauge-invariant decomposition of nucleon spin

    SciTech Connect

    Wakamatsu, M.

    2010-06-01

    We investigate the relation between the known decompositions of the nucleon spin into its constituents, thereby clarifying in what respects they are common and in what respects they differ essentially. The decomposition recently proposed by Chen et al. can be thought of as a nontrivial generalization of the gauge-variant Jaffe-Manohar decomposition so as to meet the gauge-invariance requirement for each term of the decomposition. We point out, however, that there is another gauge-invariant decomposition of the nucleon spin, which is closer to the Ji decomposition, while still allowing the decomposition of the gluon total angular momentum into spin and orbital parts. After clarifying the reason why the gauge-invariant decomposition of the nucleon spin is not unique, we discuss which decomposition is more preferable from an experimental viewpoint.

  14. Tensor-based dictionary learning for dynamic tomographic reconstruction

    NASA Astrophysics Data System (ADS)

    Tan, Shengqi; Zhang, Yanbo; Wang, Ge; Mou, Xuanqin; Cao, Guohua; Wu, Zhifang; Yu, Hengyong

    2015-04-01

    In dynamic computed tomography (CT) reconstruction, the data acquisition speed limits the spatio-temporal resolution. Recently, compressed sensing theory has been instrumental in improving CT reconstruction from few-view projections. In this paper, we present an adaptive method to train a tensor-based spatio-temporal dictionary for sparse representation of an image sequence during the reconstruction process. The correlations among atoms and across phases are considered to capture the characteristics of an object. The reconstruction problem is solved by the alternating direction method of multipliers. To recover fine or sharp structures such as edges, nonlocal total variation is incorporated into the algorithmic framework. Preclinical examples, including a sheep lung perfusion study and dynamic mouse cardiac imaging, demonstrate that the proposed approach outperforms vectorized dictionary-based CT reconstruction in the case of few-view reconstruction.
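The sparse-representation step that dictionary-based reconstruction relies on can be illustrated with a minimal sketch: ordinary ISTA sparse coding over a random dictionary (our simplification, not the paper's tensor dictionary or ADMM solver).

```python
import numpy as np

rng = np.random.default_rng(1)

def ista(D, y, lam=0.1, iters=500):
    """Solve min_x 0.5*||D x - y||^2 + lam*||x||_1 with the iterative
    shrinkage-thresholding algorithm, yielding a sparse code for y."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(iters):
        g = D.T @ (D @ x - y)              # gradient of the data term
        z = x - g / L                      # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return x

# Random dictionary with unit-norm atoms and a 3-sparse ground truth.
D = rng.standard_normal((30, 60))
D /= np.linalg.norm(D, axis=0)
x_true = np.zeros(60)
x_true[[3, 17, 42]] = [1.5, -2.0, 1.0]
y = D @ x_true

x_hat = ista(D, y, lam=0.05)
print(np.count_nonzero(np.abs(x_hat) > 0.1))  # only a few active atoms
```

In the paper's setting the signal is a spatio-temporal patch tensor rather than a vector, but the same sparse-coding principle applies mode by mode.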

  16. Precision and accuracy in diffusion tensor magnetic resonance imaging.

    PubMed

    Jones, Derek K

    2010-04-01

    This article reviews some of the key factors influencing the accuracy and precision of quantitative metrics derived from diffusion magnetic resonance imaging data. It focuses on the study pipeline, beginning with the choice of imaging protocol and proceeding through preprocessing and model fitting up to the point of extracting quantitative estimates for subsequent analysis. The aim is to provide newcomers to the field with sufficient knowledge of how their decisions at each stage of this process might impact precision and accuracy, so that they can design their study or approach accordingly and use diffusion tensor magnetic resonance imaging in the clinic. More specifically, emphasis is placed on improving accuracy and precision, and I illustrate how careful choices along the way can substantially affect the sample size needed to make an inference from the data.
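As one concrete example of a quantitative metric whose precision these pipeline choices affect, fractional anisotropy (FA) is computed from the eigenvalues of the fitted diffusion tensor, so noise in the tensor fit propagates directly into FA. A minimal sketch:

```python
import numpy as np

def fractional_anisotropy(D):
    """FA of a 3x3 symmetric diffusion tensor: the normalized standard
    deviation of its eigenvalues, ranging from 0 (isotropic) to 1."""
    ev = np.linalg.eigvalsh(D)
    md = ev.mean()                              # mean diffusivity
    num = np.sqrt(((ev - md) ** 2).sum())
    den = np.sqrt((ev ** 2).sum())
    return np.sqrt(1.5) * num / den

iso = np.diag([1.0, 1.0, 1.0])      # isotropic diffusion: FA = 0
stick = np.diag([1.0, 0.0, 0.0])    # diffusion along one axis: FA = 1
print(fractional_anisotropy(iso), fractional_anisotropy(stick))  # 0.0 1.0
```

Because FA is a nonlinear function of noisy eigenvalues, it carries a noise-dependent bias, which is one reason protocol and fitting choices matter for the sample sizes discussed above.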

  17. Thermal decomposition products of butyraldehyde.

    PubMed

    Hatten, Courtney D; Kaskey, Kevin R; Warner, Brian J; Wright, Emily M; McCunn, Laura R

    2013-12-01

    The thermal decomposition of gas-phase butyraldehyde, CH3CH2CH2CHO, was studied in the 1300-1600 K range with a hyperthermal nozzle. Products were identified via matrix-isolation Fourier transform infrared spectroscopy and photoionization mass spectrometry in separate experiments. There are at least six major initial reactions contributing to the decomposition of butyraldehyde: a radical decomposition channel leading to propyl radical + CO + H; molecular elimination to form H2 + ethylketene; a keto-enol tautomerism followed by elimination of H2O producing 1-butyne; an intramolecular hydrogen shift and elimination producing vinyl alcohol and ethylene; a β-C-C bond scission yielding ethyl and vinoxy radicals; and a γ-C-C bond scission yielding methyl and CH2CH2CHO radicals. The first three reactions are analogous to those observed in the thermal decomposition of acetaldehyde, but the latter three are made possible by the longer alkyl chain of butyraldehyde. The products identified following thermal decomposition of butyraldehyde are CO, HCO, CH3CH2CH2, CH3CH2CH=C=O, H2O, CH3CH2C≡CH, CH2CH2, CH2=CHOH, CH2CHO, CH3, HC≡CH, CH2CCH, CH3C≡CH, CH3CH=CH2, H2C=C=O, CH3CH2CH3, CH2=CHCHO, C4H2, C4H4, and C4H8. The first ten products listed are direct products of the six reactions listed above. The remaining products can be attributed to further decomposition reactions or bimolecular reactions in the nozzle.

  18. Experimental study of MgB{sub 2} decomposition

    SciTech Connect

    Fan, Z. Y.; Hinks, D. G.; Newman, N.; Rowell, J. M.

    2001-07-02

    The thermal stability of MgB{sub 2} has been studied experimentally to determine the role of thermodynamic and kinetic barriers in the decomposition process. The MgB{sub 2} decomposition rate approaches one monolayer per second at 650 C and has an activation energy of 2.0 eV. The evaporation coefficient is inferred to be {approx}10{sup -4}, indicating that this process is kinetically limited. These values were inferred from in situ measurements using a quartz crystal microbalance and a residual gas analyzer, in conjunction with ex situ measurements of redeposited material by Rutherford backscattering spectroscopy and secondary ion mass spectroscopy. The presence of a large kinetic barrier to decomposition indicates that the synthesis of MgB{sub 2} thin films may be possible with vacuum processing, albeit within a narrow window of reactive growth conditions.
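The two numbers quoted in the abstract (about one monolayer per second at 650 C, activation energy 2.0 eV) define an Arrhenius rate law; extrapolating it to other temperatures, as sketched below, is our illustration rather than a result from the paper.

```python
import math

K_B = 8.617e-5          # Boltzmann constant, eV/K
E_A = 2.0               # activation energy from the abstract, eV
T_REF = 650.0 + 273.15  # reference temperature, K
RATE_REF = 1.0          # monolayers per second at T_REF

def rate(t_celsius):
    """Decomposition rate (monolayers/s) assuming Arrhenius behavior
    anchored at the measured reference point."""
    t = t_celsius + 273.15
    return RATE_REF * math.exp(-E_A / K_B * (1.0 / t - 1.0 / T_REF))

print(rate(650.0))               # 1.0 by construction
print(rate(550.0) < rate(650.0)) # True: markedly slower when cooler
```

The steep temperature dependence implied by the 2.0 eV barrier is what opens the narrow processing window the authors mention.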

  19. Nuclear driven water decomposition plant for hydrogen production

    NASA Technical Reports Server (NTRS)

    Parker, G. H.; Brecher, L. E.; Farbman, G. H.

    1976-01-01

    The conceptual design of a hydrogen production plant using a very-high-temperature nuclear reactor (VHTR) to energize a hybrid electrolytic-thermochemical system for water decomposition has been prepared. A graphite-moderated helium-cooled VHTR is used to produce 1850 F gas for electric power generation and 1600 F process heat for the water-decomposition process which uses sulfur compounds and promises performance superior to normal water electrolysis or other published thermochemical processes. The combined cycle operates at an overall thermal efficiency in excess of 45%, and the overall economics of hydrogen production by this plant have been evaluated predicated on a consistent set of economic ground rules. The conceptual design and evaluation efforts have indicated that development of this type of nuclear-driven water-decomposition plant will permit large-scale economic generation of hydrogen in the 1990s.

  20. Plant diversity effects on root decomposition in grasslands

    NASA Astrophysics Data System (ADS)

    Chen, Hongmei; Mommer, Liesje; van Ruijven, Jasper; de Kroon, Hans; Gessler, Arthur; Scherer-Lorenzen, Michael; Wirth, Christian; Weigelt, Alexandra

    2016-04-01

    Loss of plant diversity impairs ecosystem functioning. Compared to other well-studied processes, we know little about whether and how plant diversity affects root decomposition, which is limiting our knowledge on biodiversity-carbon cycling relationships in the soil. Plant diversity potentially affects root decomposition via two non-exclusive mechanisms: by providing roots of different substrate quality and/or by altering the soil decomposition environment. To disentangle these two mechanisms, three decomposition experiments using a litter-bag approach were conducted on experimental grassland plots differing in plant species richness, functional group richness and functional group composition (e.g. presence/absence of grasses, legumes, small herbs and tall herbs, the Jena Experiment). We studied: 1) root substrate quality effects by decomposing roots collected from the different experimental plant communities in one common plot; 2) soil decomposition environment effects by decomposing standard roots in all experimental plots; and 3) the overall plant diversity effects by decomposing community roots in their 'home' plots. Litter bags were installed in April 2014 and retrieved after 1, 2 and 4 months to determine the mass loss. We found that mass loss decreased with increasing plant species richness, but not with functional group richness in the three experiments. However, functional group presence significantly affected mass loss with primarily negative effects of the presence of grasses and positive effects of the presence of legumes and small herbs. Our results thus provide clear evidence that species richness has a strong negative effect on root decomposition via effects on both root substrate quality and soil decomposition environment. This negative plant diversity-root decomposition relationship may partly account for the positive effect of plant diversity on soil C stocks by reducing C loss in addition to increasing primary root productivity. 
However, to fully