Science.gov

Sample records for processing tensor decomposition

  1. Orthogonal tensor decompositions

    SciTech Connect

    Tamara G. Kolda

    2000-03-01

    The authors explore the orthogonal decomposition of tensors (also known as multi-dimensional arrays or n-way arrays) using two different definitions of orthogonality. They present numerous examples to illustrate the difficulties in understanding such decompositions. They conclude with a counterexample to a tensor extension of the Eckart-Young SVD approximation theorem by Leibovici and Sabatier [Linear Algebra Appl. 269(1998):307--329].

  2. Nontraditional tensor decompositions and applications.

    SciTech Connect

    Bader, Brett William

    2010-07-01

    This presentation will discuss two tensor decompositions that are not as well known as PARAFAC (parallel factors) and Tucker, but have proven useful in informatics applications. Three-way DEDICOM (decomposition into directional components) is an algebraic model for the analysis of 3-way arrays with nonsymmetric slices. PARAFAC2 is a related model that is less constrained than PARAFAC and allows for different objects in one mode. Applications of both models to informatics problems will be shown.

  3. An Iterative Reweighted Method for Tucker Decomposition of Incomplete Tensors

    NASA Astrophysics Data System (ADS)

    Yang, Linxiao; Fang, Jun; Li, Hongbin; Zeng, Bing

    2016-09-01

    We consider the problem of low-rank decomposition of incomplete multiway tensors. Since many real-world data lie on an intrinsically low-dimensional subspace, tensor low-rank decomposition with missing entries has applications in many data analysis problems such as recommender systems and image inpainting. In this paper, we focus on the Tucker decomposition, which represents an Nth-order tensor in terms of N factor matrices and a core tensor via multilinear operations. To exploit the underlying multilinear low-rank structure in high-dimensional datasets, we propose a group-based log-sum penalty functional to place structural sparsity over the core tensor, which leads to a compact representation with the smallest core tensor. The method for Tucker decomposition is developed by iteratively minimizing a surrogate function that majorizes the original objective function, which results in an iterative reweighted process. In addition, to reduce the computational complexity, an over-relaxed monotone fast iterative shrinkage-thresholding technique is adapted and embedded in the iterative reweighted process. The proposed method is able to determine the model complexity (i.e., the multilinear rank) in an automatic way. Simulation results show that the proposed algorithm offers competitive performance compared with other existing algorithms.
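
    The paper's iteratively reweighted method is specialized, but the underlying Tucker format is easy to sketch. Below is a minimal truncated higher-order SVD (HOSVD) in plain NumPy, a standard non-iterative way to compute a Tucker decomposition; the function names (`unfold`, `hosvd`) are illustrative, not from the paper.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: bring `mode` to the front, then flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_dot(T, M, mode):
    """Multiply tensor T by matrix M along `mode` (multilinear product)."""
    out = np.tensordot(M, np.moveaxis(T, mode, 0), axes=1)
    return np.moveaxis(out, 0, mode)

def hosvd(T, ranks):
    """Truncated HOSVD: a Tucker decomposition T ~ core x1 U1 x2 U2 x3 U3
    with the given multilinear ranks."""
    factors = [np.linalg.svd(unfold(T, m), full_matrices=False)[0][:, :r]
               for m, r in enumerate(ranks)]
    core = T
    for m, U in enumerate(factors):
        core = mode_dot(core, U.T, m)   # project onto the factor bases
    return core, factors

def tucker_to_tensor(core, factors):
    T = core
    for m, U in enumerate(factors):
        T = mode_dot(T, U, m)
    return T

# A tensor with exact multilinear rank (2, 2, 2) is recovered exactly.
rng = np.random.default_rng(0)
G = rng.standard_normal((2, 2, 2))
Us = [np.linalg.qr(rng.standard_normal((n, 2)))[0] for n in (5, 6, 7)]
T = tucker_to_tensor(G, Us)
core, factors = hosvd(T, (2, 2, 2))
rel_err = np.linalg.norm(T - tucker_to_tensor(core, factors)) / np.linalg.norm(T)
```

    For incomplete tensors, as in the abstract, the SVDs above are no longer directly available and one must resort to iterative schemes such as the one the paper proposes.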

  4. Dynamic rotation and stretch tensors from a dynamic polar decomposition

    NASA Astrophysics Data System (ADS)

    Haller, George

    2016-01-01

    The local rigid-body component of continuum deformation is typically characterized by the rotation tensor, obtained from the polar decomposition of the deformation gradient. Beyond its well-known merits, the polar rotation tensor also has a lesser-known dynamical inconsistency: it does not satisfy the fundamental superposition principle of rigid-body rotations over adjacent time intervals. As a consequence, the polar rotation deviates from the observed mean material rotation of fibers in fluids, and introduces a purely kinematic memory effect into computed material rotation. Here we derive a generalized polar decomposition for linear processes that yields a unique, dynamically consistent rotation component, the dynamic rotation tensor, for the deformation gradient. The left dynamic stretch tensor is objective, and shares the principal strain values and axes with its classic polar counterpart. Unlike its classic polar counterpart, however, the dynamic stretch tensor evolves in time without spin. The dynamic rotation tensor further decomposes into a spatially constant mean rotation tensor and a dynamically consistent relative rotation tensor that is objective for planar deformations. We also obtain simple expressions for dynamic analogues of Cauchy's mean rotation angle that characterize a deforming body objectively.
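
    The dynamic decomposition derived in the paper is new, but the classic polar decomposition F = RU that it generalizes can be computed directly from an SVD. A minimal NumPy sketch, assuming det F > 0 as for a physical deformation gradient (the example F is invented for illustration):

```python
import numpy as np

def polar(F):
    """Classic polar decomposition F = R @ U of a deformation gradient:
    R is a proper rotation (when det F > 0) and U is the symmetric
    positive-definite right stretch tensor."""
    W, s, Vt = np.linalg.svd(F)
    R = W @ Vt                      # rotation part
    U = Vt.T @ (s[:, None] * Vt)    # right stretch: V diag(s) V^T
    return R, U

# Simple shear combined with stretch (det F = 1.08 > 0).
F = np.array([[1.2, 0.3, 0.0],
              [0.0, 1.0, 0.1],
              [0.0, 0.0, 0.9]])
R, U = polar(F)
```

    The paper's point is that composing such R's over adjacent time intervals does not reproduce the rotation over the union of the intervals, which is what the dynamic rotation tensor repairs.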

  5. Tensor decomposition of EEG signals: a brief review.

    PubMed

    Cong, Fengyu; Lin, Qiu-Hua; Kuang, Li-Dan; Gong, Xiao-Feng; Astikainen, Piia; Ristaniemi, Tapani

    2015-06-15

    Electroencephalography (EEG) is a fundamental tool for functional brain imaging. EEG signals tend to be represented by a vector or a matrix to facilitate data processing and analysis with generally understood methodologies like time-series analysis, spectral analysis, and matrix decomposition. Indeed, EEG signals often naturally possess more than two modes, such as time and space, and can be represented by a multi-way array called a tensor. This review summarizes the current progress of tensor decomposition of EEG signals in three respects. The first concerns the existing modes and tensors of EEG signals. Second, two fundamental tensor decomposition models, canonical polyadic decomposition (CPD, also called parallel factor analysis, PARAFAC) and Tucker decomposition, are introduced and compared. Moreover, the applications of the two models to EEG signals are addressed. In particular, the determination of the number of components for each mode is discussed. Finally, the N-way partial least squares and higher-order partial least squares are described as a potential trend for processing and analyzing brain signals of two modalities simultaneously. PMID:25840362

  6. Robust Face Clustering Via Tensor Decomposition.

    PubMed

    Cao, Xiaochun; Wei, Xingxing; Han, Yahong; Lin, Dongdai

    2015-11-01

    Face clustering is a key component in both image management and video analysis. Wild human faces vary with pose, expression, and illumination changes. All kinds of noise, such as block occlusions, random pixel corruptions, and various disguises, may also destroy the consistency of faces referring to the same person. This motivates us to develop a robust face clustering algorithm that is less sensitive to these noises. To retain the underlying structured information within facial images, we use tensors to represent faces, and then accomplish the clustering task based on the tensor data. The proposed algorithm is called robust tensor clustering (RTC), which first finds a lower-rank approximation of the original tensor data using an L1-norm optimization function. Because the L1 norm, unlike the L2 norm, does not exaggerate the effect of noise, minimizing the L1-norm approximation function makes RTC robust. Then, we compute the high-order singular value decomposition of this approximate tensor to obtain the final clustering results. Different from traditional algorithms that solve the approximation function with a greedy strategy, we utilize a nongreedy strategy to obtain a better solution. Experiments conducted on benchmark facial datasets and gait sequences demonstrate that RTC performs better than state-of-the-art clustering algorithms and is more robust to noise. PMID:25546869

  7. An optimization approach for fitting canonical tensor decompositions.

    SciTech Connect

    Dunlavy, Daniel M.; Acar, Evrim; Kolda, Tamara Gibson

    2009-02-01

    Tensor decompositions are higher-order analogues of matrix decompositions and have proven to be powerful tools for data analysis. In particular, we are interested in the canonical tensor decomposition, otherwise known as the CANDECOMP/PARAFAC decomposition (CPD), which expresses a tensor as the sum of component rank-one tensors and is used in a multitude of applications such as chemometrics, signal processing, neuroscience, and web analysis. The task of computing the CPD, however, can be difficult. The typical approach is based on alternating least squares (ALS) optimization, which can be remarkably fast but is not very accurate. Previously, nonlinear least squares (NLS) methods have also been recommended; existing NLS methods are accurate but slow. In this paper, we propose the use of gradient-based optimization methods. We discuss the mathematical calculation of the derivatives and further show that they can be computed efficiently, at the same cost as one iteration of ALS. Computational experiments demonstrate that the gradient-based optimization methods are much more accurate than ALS and orders of magnitude faster than NLS.
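
    For reference, the ALS baseline that the paper's gradient-based methods are compared against is short to write down. A minimal NumPy sketch for a third-order tensor (function names are illustrative; a production code would add factor normalization and convergence checks):

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding of a tensor into a matrix."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(A, B):
    """Column-wise Kronecker product of A (J x R) and B (K x R)."""
    return (A[:, None, :] * B[None, :, :]).reshape(-1, A.shape[1])

def cp_als(T, rank, n_iter=500, seed=0):
    """Fit a CP (CANDECOMP/PARAFAC) decomposition by alternating least
    squares: each sweep solves three linear least-squares problems."""
    rng = np.random.default_rng(seed)
    A, B, C = (rng.standard_normal((n, rank)) for n in T.shape)
    for _ in range(n_iter):
        A = unfold(T, 0) @ np.linalg.pinv(khatri_rao(B, C)).T
        B = unfold(T, 1) @ np.linalg.pinv(khatri_rao(A, C)).T
        C = unfold(T, 2) @ np.linalg.pinv(khatri_rao(A, B)).T
    return A, B, C

# Recover an exact rank-2 tensor.
rng = np.random.default_rng(1)
A0, B0, C0 = (rng.standard_normal((n, 2)) for n in (4, 5, 6))
T = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
A, B, C = cp_als(T, rank=2)
rel_err = np.linalg.norm(T - np.einsum('ir,jr,kr->ijk', A, B, C)) / np.linalg.norm(T)
```

    The "swamps" where this simple scheme stalls are precisely the situations that motivate the paper's all-at-once gradient-based optimization.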

  8. Heterogeneous Tensor Decomposition for Clustering via Manifold Optimization.

    PubMed

    Sun, Yanfeng; Gao, Junbin; Hong, Xia; Mishra, Bamdev; Yin, Baocai

    2016-03-01

    Tensor clustering is an important tool that exploits the intrinsically rich structures in real-world multiarray or tensor datasets. In dealing with those datasets, standard practice is to use subspace clustering based on vectorizing the multiarray data. However, vectorization of tensorial data does not exploit the complete structural information. In this paper, we propose a subspace clustering algorithm without adopting any vectorization process. Our approach is based on a novel heterogeneous Tucker decomposition model that takes cluster membership information into account. We propose a new clustering algorithm that alternates between different modes of the proposed heterogeneous tensor model. All but the last mode have closed-form updates. Updating the last mode reduces to optimizing over the multinomial manifold, for which we investigate second-order Riemannian geometry and propose a trust-region algorithm. Numerical experiments show that our proposed algorithm competes effectively with state-of-the-art clustering algorithms that are based on tensor factorization. PMID:27046492

  9. Analysis of Social Networks by Tensor Decomposition

    NASA Astrophysics Data System (ADS)

    Sizov, Sergej; Staab, Steffen; Franz, Thomas

    The Social Web fosters novel applications targeting a more efficient and satisfying user guidance in modern social networks, e.g., for identifying thematically focused communities, or finding users with similar interests. The large scale and high diversity of users in social networks pose the challenging question of appropriate relevance/authority ranking, for producing fine-grained and rich descriptions of available partners, e.g., to guide the user along the most promising groups of interest. Existing methods for graph-based authority ranking lack support for fine-grained latent coherence between user relations and content (i.e., support for edge semantics in graph-based social network models). We present TweetRank, a novel approach for faceted authority ranking in the context of social networks. TweetRank captures the additional latent semantics of social networks by means of statistical methods in order to produce richer descriptions of user relations. We model the social network by a 3-dimensional tensor that enables the seamless representation of arbitrary semantic relations. For the analysis of that model, we apply the PARAFAC decomposition, which can be seen as a multi-modal counterpart to common Web authority ranking with HITS. The results are groupings of users and terms, characterized by authority and navigational (hub) scores with respect to the identified latent topics. Sample experiments with live data from the Twitter community demonstrate the ability of TweetRank to produce richer and more comprehensive contact recommendations than other existing methods for social authority ranking.

  10. Tensor network decompositions in the presence of a global symmetry

    SciTech Connect

    Singh, Sukhwinder; Pfeifer, Robert N. C.; Vidal, Guifre

    2010-11-15

    Tensor network decompositions offer an efficient description of certain many-body states of a lattice system and are the basis of a wealth of numerical simulation algorithms. We discuss how to incorporate a global symmetry, given by a compact, completely reducible group G, in tensor network decompositions and algorithms. This is achieved by considering tensors that are invariant under the action of the group G. Each symmetric tensor decomposes into two types of tensors: degeneracy tensors, containing all the degrees of freedom, and structural tensors, which only depend on the symmetry group. In numerical calculations, the use of symmetric tensors ensures the preservation of the symmetry, allows selection of a specific symmetry sector, and significantly reduces computational costs. On the other hand, the resulting tensor network can be interpreted as a superposition of exponentially many spin networks. Spin networks are used extensively in loop quantum gravity, where they represent states of quantum geometry. Our work highlights their importance in the context of tensor network algorithms as well, thus setting the stage for cross-fertilization between these two areas of research.

  11. Tensor product decomposition methods for plasma physics computations

    NASA Astrophysics Data System (ADS)

    Del-Castillo-Negrete, D.

    2012-03-01

    Tensor product decomposition (TPD) methods are a powerful linear algebra technique for the efficient representation of high-dimensional data sets. In the simplest 2-dimensional case, TPD reduces to the singular value decomposition (SVD) of matrices. These methods, which are closely related to proper orthogonal decomposition techniques, have been extensively applied in signal and image processing, and to some fluid mechanics problems. However, their use in plasma physics computation is relatively new. Some recent applications include: data compression of 6-dimensional gyrokinetic plasma turbulence data sets [D. R. Hatch, D. del-Castillo-Negrete, and P. W. Terry, submitted to J. Comp. Phys. (2011)], noise reduction in particle methods [R. Nguyen, D. del-Castillo-Negrete, K. Schneider, M. Farge, and G. Chen, J. Comp. Phys. 229, 2821-2839 (2010)], and multiscale analysis of plasma turbulence [S. Futatani, S. Benkadda, and D. del-Castillo-Negrete, Phys. Plasmas 16, 042506 (2009)]. The goal of this presentation is to discuss a novel application of TPD methods to projective integration of particle-based collisional plasma transport computations.
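
    In the 2-dimensional case mentioned above, TPD is just the SVD, and the compression mechanism is easy to demonstrate (a generic sketch, not the presentation's code):

```python
import numpy as np

# A 200 x 100 matrix of exact rank 5: the truncated SVD stores it
# exactly with a small fraction of the entries -- the compression
# principle behind tensor product decompositions in higher dimensions.
rng = np.random.default_rng(2)
X = rng.standard_normal((200, 5)) @ rng.standard_normal((5, 100))
U, s, Vt = np.linalg.svd(X, full_matrices=False)
r = int((s > 1e-10 * s[0]).sum())           # numerical rank
Xr = (U[:, :r] * s[:r]) @ Vt[:r]            # rank-r reconstruction
storage_ratio = (200 * r + r + r * 100) / (200 * 100)
```

    For a 6-dimensional gyrokinetic data set the same idea, applied mode by mode, yields far larger compression factors than this matrix example.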

  12. 3D tensor-based blind multispectral image decomposition for tumor demarcation

    NASA Astrophysics Data System (ADS)

    Kopriva, Ivica; Peršin, Antun

    2010-03-01

    Blind decomposition of a multi-spectral fluorescent image for tumor demarcation is formulated by exploiting the tensorial structure of the image. The first contribution of the paper is the identification of the matrix of spectral responses and the 3D tensor of spatial distributions of the materials present in the image from Tucker3 or PARAFAC models of the 3D image tensor. The second contribution is a clustering-based estimation of the number of materials present in the image, as well as of the matrix of their spectral profiles. The 3D tensor of the spatial distributions of the materials is recovered through 3-mode multiplication of the multi-spectral image tensor and the inverse of the matrix of spectral profiles. The tensor representation of the multi-spectral image preserves its local spatial structure, which is lost, due to the vectorization process, when matrix factorization-based decomposition methods (such as non-negative matrix factorization and independent component analysis) are used. Superior performance of the tensor-based image decomposition over matrix factorization-based decompositions is demonstrated on an experimental red-green-blue (RGB) image with known ground truth, as well as on RGB fluorescent images of skin tumor (basal cell carcinoma).

  13. 3D extension of Tensorial Polar Decomposition. Application to (photo-)elasticity tensors

    NASA Astrophysics Data System (ADS)

    Desmorat, Rodrigue; Desmorat, Boris

    2016-06-01

    The orthogonalized harmonic decomposition of symmetric fourth-order tensors (i.e. having major and minor indicial symmetries, such as elasticity tensors) is completed by a representation of harmonic fourth-order tensors H by means of two second-order harmonic (symmetric deviatoric) tensors only. A similar decomposition is obtained for non-symmetric tensors (i.e. having minor indicial symmetry only, such as photo-elasticity tensors or elasto-plasticity tangent operators) introducing a fourth-order major antisymmetric traceless tensor Z. The tensor Z is represented by means of one harmonic second-order tensor and one antisymmetric second-order tensor only. Representations of totally symmetric (rari-constant), symmetric and major antisymmetric fourth-order tensors are simple particular cases of the proposed general representation. Closed-form expressions for tensor decomposition are given in the monoclinic case. Practical applications to elasticity and photo-elasticity monoclinic tensors are finally presented.

  14. A full variational calculation based on a tensor product decomposition

    NASA Astrophysics Data System (ADS)

    Senese, Frederick A.; Beattie, Christopher A.; Schug, John C.; Viers, Jimmy W.; Watson, Layne T.

    1989-08-01

    A new direct full variational approach exploits a tensor (Kronecker) product decomposition of the Hamiltonian. Explicit assembly and storage of the Hamiltonian matrix is avoided by using the Kronecker product structure to form matrix-vector products directly from the molecular integrals. Computation-intensive integral transformations and formula tapes are unnecessary. The wavefunction is expanded in terms of spin-free primitive kets rather than Slater determinants or configuration state functions, and the expansion is equivalent to a full configuration interaction expansion. The approach suggests compact storage schemes and algorithms which are naturally suited to parallel and pipelined machines.
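
    The key trick above, forming matrix-vector products from the Kronecker structure without ever assembling the full matrix, can be illustrated with the standard vec identity (a generic NumPy sketch, not the paper's implementation):

```python
import numpy as np

def kron_matvec(A, B, x):
    """Compute (A kron B) @ x without forming the Kronecker product,
    using the row-major vec identity (A kron B) vec(X) = vec(A X B^T).
    Cost: O(m^2 n + m n^2) instead of O(m^2 n^2)."""
    m, n = A.shape[0], B.shape[0]
    X = x.reshape(m, n)
    return (A @ X @ B.T).ravel()

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((3, 3))
x = rng.standard_normal(4 * 3)
y = kron_matvec(A, B, x)
y_ref = np.kron(A, B) @ x   # explicit assembly, for checking only
```

    Iterative eigensolvers need only such matrix-vector products, which is why explicit assembly and storage of the Hamiltonian can be avoided entirely.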

  15. Uncertainty propagation in orbital mechanics via tensor decomposition

    NASA Astrophysics Data System (ADS)

    Sun, Yifei; Kumar, Mrinal

    2016-03-01

    Uncertainty forecasting in orbital mechanics is an essential but difficult task, primarily because the underlying Fokker-Planck equation (FPE) is defined on a relatively high dimensional (6-D) state-space and is driven by the nonlinear perturbed Keplerian dynamics. In addition, an enormously large solution domain is required for numerical solution of this FPE (e.g. encompassing the entire orbit in the x-y-z subspace), of which the state probability density function (pdf) occupies a tiny fraction at any given time. This coupling of large size, high dimensionality and nonlinearity makes for a formidable computational task, and has caused the FPE for orbital uncertainty propagation to remain an unsolved problem. To the best of the authors' knowledge, this paper presents the first successful direct solution of the FPE for perturbed Keplerian mechanics. To tackle the dimensionality issue, the time-varying state pdf is approximated in the CANDECOMP/PARAFAC decomposition tensor form where all six spatial dimensions as well as the time dimension are separated from one another. The pdf approximation for all times is obtained simultaneously via the alternating least squares algorithm. Chebyshev spectral differentiation is employed for discretization on account of its spectral ("super-fast") convergence rate. To facilitate the tensor decomposition and control the solution domain size, the system dynamics is expressed using spherical coordinates in a noninertial reference frame. Numerical results obtained on a regular personal computer are compared with Monte Carlo simulations.

  16. Databases post-processing in Tensoral

    NASA Technical Reports Server (NTRS)

    Dresselhaus, Eliot

    1994-01-01

    The Center for Turbulence Research (CTR) post-processing effort aims to make turbulence simulations and data more readily and usefully available to the research and industrial communities. The Tensoral language, introduced in this document and currently existing in prototype form, is the foundation of this effort. Tensoral provides a convenient and powerful protocol to connect users who wish to analyze fluids databases with the authors who generate them. In this document we introduce Tensoral and its prototype implementation in the form of a user's guide. This guide focuses on the use of Tensoral for post-processing turbulence databases. The corresponding document, the Tensoral "author's guide", which focuses on how authors can make databases available to users via the Tensoral system, is currently unwritten. Section 1 of this user's guide defines Tensoral's basic notions: we explain the class of problems at hand and how Tensoral abstracts them. Section 2 defines Tensoral syntax for mathematical expressions. Section 3 shows how these expressions make up Tensoral statements. Section 4 shows how Tensoral statements and expressions are embedded into other computer languages (such as C or Vectoral) to make Tensoral programs. We conclude with a complete example program.

  17. TripleRank: Ranking Semantic Web Data by Tensor Decomposition

    NASA Astrophysics Data System (ADS)

    Franz, Thomas; Schultz, Antje; Sizov, Sergej; Staab, Steffen

    The Semantic Web fosters novel applications targeting a more efficient and satisfying exploitation of the data available on the web, e.g. faceted browsing of linked open data. Large amounts and high diversity of knowledge in the Semantic Web pose the challenging question of appropriate relevance ranking for producing fine-grained and rich descriptions of the available data, e.g. to guide the user along the most promising knowledge aspects. Existing methods for graph-based authority ranking lack support for fine-grained latent coherence between resources and predicates (i.e. support for link semantics in the linked data model). In this paper, we present TripleRank, a novel approach for faceted authority ranking in the context of RDF knowledge bases. TripleRank captures the additional latent semantics of Semantic Web data by means of statistical methods in order to produce richer descriptions of the available data. We model the Semantic Web by a 3-dimensional tensor that enables the seamless representation of arbitrary semantic links. For the analysis of that model, we apply the PARAFAC decomposition, which can be seen as a multi-modal counterpart to Web authority ranking with HITS. The results are groupings of resources and predicates that characterize their authority and navigational (hub) properties with respect to the identified topics. We have applied TripleRank to multiple data sets from the linked open data community and gathered encouraging feedback in a user evaluation where TripleRank results have been exploited in a faceted browsing scenario.
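
    Since the abstract positions PARAFAC as a multi-modal counterpart of HITS, the matrix HITS baseline is worth recalling. A minimal power-iteration sketch in NumPy (the tiny link graph is invented for illustration):

```python
import numpy as np

def hits(adj, n_iter=100):
    """Classic HITS: authority and hub scores by alternating power
    iteration on the adjacency matrix. PARAFAC on a 3-way
    (resource, resource, predicate) tensor yields analogous scores
    per latent topic instead of a single global pair."""
    n = adj.shape[0]
    auth = np.ones(n)
    hub = np.ones(n)
    for _ in range(n_iter):
        auth = adj.T @ hub              # good authorities are linked by good hubs
        auth /= np.linalg.norm(auth)
        hub = adj @ auth                # good hubs link to good authorities
        hub /= np.linalg.norm(hub)
    return auth, hub

# Tiny link graph: node 0 links to nodes 1 and 2; node 3 links to node 1.
adj = np.array([[0., 1., 1., 0.],
                [0., 0., 0., 0.],
                [0., 0., 0., 0.],
                [0., 1., 0., 0.]])
auth, hub = hits(adj)
```

    Node 1, with two in-links, receives the top authority score, and node 0, linking to both authorities, the top hub score; TripleRank produces such rankings separately for each latent topic found by the decomposition.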

  18. The Amplitude Phase Decomposition for the Magnetotelluric Impedance Tensor and Galvanic Electric Distortion

    NASA Astrophysics Data System (ADS)

    Neukirch, Maik; Rudolf, Daniel; Garcia, Xavier

    2016-04-01

    The introduction of the phase tensor marked a major breakthrough in understanding, analysing, and dealing with galvanic distortion of the electric field in the magnetotelluric method. The phase tensor itself can be used for (distortion-free) dimensionality analysis, for distortion analysis where applicable, and even to invert for subsurface models. However, impedance amplitude information is not stored in the phase tensor, so an impedance corrected by distortion analysis (or alternative remedies) may yield better results. We formulate an impedance tensor decomposition into the known phase tensor and an amplitude tensor that is shown to be complementary to and independent of the phase tensor. The rotationally invariant amplitude tensor contains galvanic and inductive amplitudes, of which the latter are physically related to the inductive phase information present in the phase tensor. We show that, for the special cases of 1D and 2D subsurfaces, the geometric amplitude tensor parameters (strike and skew) converge to the phase tensor parameters, and the singular values are the amplitudes of the impedance in the TE and TM modes. Further, the physical similarity between inductive phase and amplitude is used to approximate the galvanic amplitude for a general subsurface, which leads to a qualitative interpretation of 3D galvanic distortion: (i) the (purely) galvanic part of the subsurface (as sensed at a given period) may have a changing impact on the impedance (over a period range), and (ii) only the purely galvanic response at the lowest available period should be termed galvanic distortion. The approximation of the galvanic amplitude (and therewith of galvanic distortion), though not exact, offers a new perspective on galvanic distortion, which breaks with the general belief that one must assume a 1D or 2D regional structure for the impedance. The amplitude tensor itself is complementary to the phase tensor, containing integrated (galvanic and inductive) subsurface information.

  19. Tensor decomposition techniques in the solution of vibrational coupled cluster response theory eigenvalue equations

    NASA Astrophysics Data System (ADS)

    Godtliebsen, Ian H.; Hansen, Mads Bøttger; Christiansen, Ove

    2015-01-01

    We show how the eigenvalue equations of vibrational coupled cluster response theory can be solved using a subspace projection method with Davidson update, where basis vectors are stacked tensors decomposed into canonical (CP, Candecomp/Parafac) form. In each update step, new vectors are first orthogonalized to old vectors, followed by a tensor decomposition to a prescribed threshold TCP. The algorithm can provide excitation energies and eigenvectors of accuracy similar to that of a full-vector approach, with only a very modest increase in the number of vectors required for convergence. The algorithm is illustrated with sample calculations for formaldehyde, 1,2,5-thiadiazole, and water. Analysis of the formaldehyde and thiadiazole calculations illustrates a number of interesting features of the algorithm. For example, the tensor decomposition threshold is optimally set to rather loose values, such as TCP = 10^-2. With such thresholds for the tensor decompositions, the original eigenvalue equations can still be solved accurately. It is thus possible to directly calculate vibrational wave functions in tensor decomposed format.

  20. Thermochemical water decomposition processes

    NASA Technical Reports Server (NTRS)

    Chao, R. E.

    1974-01-01

    Thermochemical processes which lead to the production of hydrogen and oxygen from water without the consumption of any other material have a number of advantages when compared to other processes such as water electrolysis. It is possible to operate a sequence of chemical steps with net work requirements equal to zero at temperatures well below the temperature required for water dissociation in a single step. Various types of procedures are discussed, giving attention to halide processes, reverse Deacon processes, iron oxide and carbon oxide processes, and metal and alkali metal processes. Economic questions are also considered.

  1. Nonlinear Beam Kinematics by Decomposition of the Rotation Tensor

    NASA Technical Reports Server (NTRS)

    Danielson, D. A.; Hodges, D. H.

    1987-01-01

    A simple matrix expression is obtained for the strain components of a beam in which the displacements and rotations are large. The only restrictions are on the magnitudes of the strain and of the local rotation, a newly identified kinematical quantity. The local rotation is defined as the change of orientation of material elements relative to the change of orientation of the beam reference triad. The vectors and tensors in the theory are resolved along orthogonal triads of base vectors centered along the undeformed and deformed beam reference axes, so Cartesian tensor notation is used. Although a curvilinear coordinate system is natural to the beam problem, the complications usually associated with its use are circumvented. Local rotations appear explicitly in the resulting strain expressions, facilitating the treatment of beams with both open and closed cross sections in applications of the theory. The theory is used to obtain the kinematical relations for coupled bending, torsion, extension, shear deformation, and warping of an initially curved and twisted beam.

  2. Tensor decomposition in post-Hartree–Fock methods. II. CCD implementation

    SciTech Connect

    Benedikt, Udo; Böhm, Karl-Heinz; Auer, Alexander A.

    2013-12-14

    In a previous publication, we have discussed the usage of tensor decomposition in the canonical polyadic (CP) tensor format for electronic structure methods. There, we focused on two-electron integrals and second order Møller-Plesset perturbation theory (MP2). In this work, we discuss the CP format for Coupled Cluster (CC) theory and present a pilot implementation for the Coupled Cluster Doubles method. We discuss the iterative solution of the CC amplitude equations using tensors in CP representation and present a tensor contraction scheme that minimizes the effort necessary for the rank reductions during the iterations. Furthermore, several details concerning the reduction of complexity of the algorithm, convergence of the CC iterations, truncation errors, and the choice of threshold for chemical accuracy are discussed.

  3. Tensor decomposition in electronic structure calculations on 3D Cartesian grids

    SciTech Connect

    Khoromskij, B.N.; Khoromskaia, V.; Chinnamsetty, S.R.; Flad, H.-J.

    2009-09-01

    In this paper, we investigate a novel approach based on the combination of Tucker-type and canonical tensor decomposition techniques for the efficient numerical approximation of functions and operators in electronic structure calculations. In particular, we study applicability of tensor approximations for the numerical solution of Hartree-Fock and Kohn-Sham equations on 3D Cartesian grids. We show that the orthogonal Tucker-type tensor approximation of electron density and Hartree potential of simple molecules leads to low tensor rank representations. This enables an efficient tensor-product convolution scheme for the computation of the Hartree potential using a collocation-type approximation via piecewise constant basis functions on a uniform n×n×n grid. Combined with the Richardson extrapolation, our approach exhibits O(h³) convergence in the grid-size h = O(n⁻¹). Moreover, this requires O(3rn + r³) storage, where r denotes the Tucker rank of the electron density with r = O(log n), almost uniformly in n. For example, calculations of the Coulomb matrix and the Hartree-Fock energy for the CH₄ molecule, with a pseudopotential on the C atom, achieved accuracies of the order of 10⁻⁶ hartree with a grid-size n of several hundreds. Since the tensor-product convolution in 3D is performed via 1D convolution transforms, our scheme markedly outperforms the 3D-FFT in both the computing time and storage requirements.

  4. On the decomposition of stress and strain tensors into spherical and deviatoric parts.

    PubMed

    Augusti, G; Martin, J B; Prager, W

    1969-06-01

    It is well known that Hooke's law for a linearly elastic, isotropic solid may be written in the form of two relations that involve only the spherical or only the deviatoric parts of the tensors of stress and strain. The example of the linearly elastic, transversely isotropic solid is used to show that this decomposition is not, in general, feasible for linearly elastic, anisotropic solids. The discussion is extended to a large class of work-hardening rigid, plastic solids, and it is shown that the considered decomposition can only be achieved for the incompressible solids of this class. PMID:16591754
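
    For the isotropic case the abstract starts from, the decomposition itself is elementary; a NumPy sketch (the stress values are invented for illustration):

```python
import numpy as np

def spherical_deviatoric(sigma):
    """Split a symmetric stress (or strain) tensor into its spherical
    (hydrostatic) and deviatoric parts: sigma = p I + s with
    p = tr(sigma)/3 and tr(s) = 0."""
    p = np.trace(sigma) / 3.0
    spherical = p * np.eye(3)
    deviatoric = sigma - spherical
    return spherical, deviatoric

# Hypothetical stress state (units arbitrary).
sigma = np.array([[10., 2., 0.],
                  [ 2., 6., 1.],
                  [ 0., 1., 8.]])
sph, dev = spherical_deviatoric(sigma)
```

    For an isotropic elastic solid, Hooke's law then splits into two uncoupled relations, the spherical part governed by the bulk modulus and the deviatoric part by the shear modulus; the paper's point is that no such uncoupling exists for general anisotropic solids.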

  5. Partial-wave decomposition of the finite-range effective tensor interaction

    NASA Astrophysics Data System (ADS)

    Davesne, D.; Becker, P.; Pastore, A.; Navarro, J.

    2016-06-01

    We perform a detailed analysis of the properties of the finite-range tensor term associated with the Gogny and M3Y effective interactions. In particular, by using a partial-wave decomposition of the equation of state of symmetric nuclear matter, we show how their tensor parameters can be extracted directly from microscopic results based on bare nucleon-nucleon interactions. Furthermore, we show that the zero-range limit of both finite-range interactions has the form of the next-to-next-to-next-to-leading-order (N3LO) Skyrme pseudopotential, which thus constitutes a reliable approximation in the density range relevant for finite nuclei. Finally, we use Brueckner-Hartree-Fock results to fix the tensor parameters for the three effective interactions.

  6. Multipole theory and the Hehl-Obukhov decomposition of the electromagnetic constitutive tensor

    NASA Astrophysics Data System (ADS)

    de Lange, O. L.; Raab, R. E.

    2015-05-01

    The Hehl-Obukhov decomposition expresses the 36 independent components of the electromagnetic constitutive tensor for a local linear anisotropic medium in a useful general form comprising seven macroscopic property tensors: four of second rank, two vectors, and a four-dimensional (pseudo)scalar. We consider homogeneous media and show that in semi-classical multipole theory, the first full realization of this formulation is obtained (in terms of molecular polarizability tensors) at third order (electric octopole-magnetic quadrupole order). The calculations are an extension of a direct method previously used at second order (electric quadrupole-magnetic dipole order). We consider in what sense this theory is independent of the choice of molecular coordinate origins relative to which polarizabilities are evaluated. The pseudoscalar (axion) observable is expressed relative to the crystallographic origin. The other six property tensors are invariant (with respect to an arbitrary choice of each molecular coordinate origin), or zero, at first and second orders. At third order, this invariance has to be imposed (by transformation of the response fields)—an aspect that is required by consideration of isotropic fluids and is consistent with the invariance of transmission phenomena in dielectrics. Alternative derivations of the property tensors are reviewed, with emphasis on the pseudoscalar, constraint-breaking, translational invariance, and uniqueness.

  7. Real-time object recognition in multidimensional images based on joined extended structural tensor and higher-order tensor decomposition methods

    NASA Astrophysics Data System (ADS)

    Cyganek, Boguslaw; Smolka, Bogdan

    2015-02-01

    In this paper a system for real-time recognition of objects in multidimensional video signals is proposed. Object recognition is done by projecting patterns into the tensor subspaces obtained from the factorization of the signal tensors representing the input signal. However, instead of taking only the intensity signal, the novelty of this paper is to first build the Extended Structural Tensor representation from the intensity signal, which conveys information on signal intensities as well as on higher-order statistics of the input signals. In this way the higher-order input pattern tensors are built from the training samples. Then, the tensor subspaces are built based on the Higher-Order Singular Value Decomposition of the prototype pattern tensors. Finally, recognition relies on measuring the distance of a test pattern projected into the tensor subspaces obtained from the training tensors. Due to the high dimensionality of the input data, tensor-based methods require substantial memory and computational resources. However, recent advances in multi-core microprocessors and graphics cards allow real-time operation of these multidimensional methods, as is shown and analyzed in this paper on real examples of object detection in digital images.

  8. Detection of crossing white matter fibers with high-order tensors and rank-k decompositions

    PubMed Central

    Jiao, Fangxiang; Gur, Yaniv; Johnson, Chris R.; Joshi, Sarang

    2011-01-01

    Fundamental to high angular resolution diffusion imaging (HARDI) is the estimation of a positive-semidefinite orientation distribution function (ODF) and the extraction of the diffusion properties (e.g., fiber directions). In this work we show that these two goals can be achieved efficiently by using homogeneous polynomials to represent the ODF in the spherical deconvolution approach, as was proposed in the Cartesian Tensor-ODF (CT-ODF) formulation. Based on this formulation we first suggest an estimation method for a positive-semidefinite ODF by solving a linear programming problem that does not require special parameterization of the ODF. We also propose a rank-k tensor decomposition, known as CP decomposition, to extract the fiber information from the estimated ODF. We show that this decomposition is superior to fiber direction estimation via ODF maxima detection, as it enables one to reach the full fiber separation resolution of the estimation technique. We assess the accuracy of this new framework by applying it to synthetic and experimentally obtained HARDI data. PMID:21761684
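A rank-k CP decomposition of the kind used here to separate crossing-fiber contributions is commonly computed with alternating least squares (ALS). The sketch below is a generic CP-ALS in NumPy, not the authors' implementation; the tensor sizes, rank, and random factors are arbitrary, and it recovers an exact rank-2 tensor:

```python
import numpy as np

rng = np.random.default_rng(1)

def kr(A, B):
    # column-wise Khatri-Rao product of factor matrices
    return np.einsum('ir,jr->ijr', A, B).reshape(-1, A.shape[1])

def cp_als(X, rank, n_iter=500):
    # alternating least squares for the 3-way CP model
    # X[i,j,k] ~= sum_r A[i,r] * B[j,r] * C[k,r]
    I, J, K = X.shape
    A = rng.standard_normal((I, rank))
    B = rng.standard_normal((J, rank))
    C = rng.standard_normal((K, rank))
    for _ in range(n_iter):
        A = X.reshape(I, J * K) @ kr(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = X.transpose(1, 0, 2).reshape(J, I * K) @ kr(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = X.transpose(2, 0, 1).reshape(K, I * J) @ kr(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C

# synthetic tensor built from two rank-1 terms (two "fiber" components)
u1, u2 = rng.standard_normal((2, 5))
v1, v2 = rng.standard_normal((2, 6))
w1, w2 = rng.standard_normal((2, 7))
X = np.einsum('i,j,k->ijk', u1, v1, w1) + np.einsum('i,j,k->ijk', u2, v2, w2)

A, B, C = cp_als(X, rank=2)
X_hat = np.einsum('ir,jr,kr->ijk', A, B, C)
assert np.linalg.norm(X - X_hat) / np.linalg.norm(X) < 1e-3
```

In the paper's setting the columns of the recovered factors would correspond to individual fiber directions, which is why CP resolves crossings that ODF maxima detection can miss.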

  10. Tensoral for post-processing users and simulation authors

    NASA Technical Reports Server (NTRS)

    Dresselhaus, Eliot

    1993-01-01

    The CTR post-processing effort aims to make turbulence simulations and data more readily and usefully available to the research and industrial communities. The Tensoral language, which provides the foundation for this effort, is introduced here in the form of a user's guide. The Tensoral user's guide is presented in two main sections. Section one acts as a general introduction and guides database users who wish to post-process simulation databases. Section two gives a brief description of how database authors and other advanced users can make simulation codes and/or the databases they generate available to the user community via Tensoral database back ends. The two-part structure of this document conforms to the two-level design structure of the Tensoral language. Tensoral has been designed to be a general computer language for performing tensor calculus and statistics on numerical data. Tensoral's generality allows it to be used for stand-alone native coding of high-level post-processing tasks (as described in section one of this guide). At the same time, Tensoral's specialization to a minute task (namely, to numerical tensor calculus and statistics) allows it to be easily embedded into applications written partly in Tensoral and partly in other computer languages (here, C and Vectoral). Embedded Tensoral, aimed at advanced users for more general coding (e.g. of efficient simulations, for interfacing with pre-existing software, for visualization, etc.), is described in section two of this guide.

  11. Representing Matrix Cracks Through Decomposition of the Deformation Gradient Tensor in Continuum Damage Mechanics Methods

    NASA Technical Reports Server (NTRS)

    Leone, Frank A., Jr.

    2015-01-01

    A method is presented to represent the large-deformation kinematics of intraply matrix cracks and delaminations in continuum damage mechanics (CDM) constitutive material models. The method involves the additive decomposition of the deformation gradient tensor into 'crack' and 'bulk material' components. The response of the intact bulk material is represented by a reduced deformation gradient tensor, and the opening of an embedded cohesive interface is represented by a normalized cohesive displacement-jump vector. The rotation of the embedded interface is tracked as the material deforms and as the crack opens. The distribution of the total local deformation between the bulk material and the cohesive interface components is determined by minimizing the difference between the cohesive stress and the bulk material stress projected onto the cohesive interface. The improvements to the accuracy of CDM models that incorporate the presented method over existing approaches are demonstrated for a single element subjected to simple shear deformation and for a finite element model of a unidirectional open-hole tension specimen. The material model is implemented as a VUMAT user subroutine for the Abaqus/Explicit finite element software. The presented deformation gradient decomposition method reduces the artificial load transfer across matrix cracks subjected to large shearing deformations, and avoids the spurious secondary failure modes that often occur in analyses based on conventional progressive damage models.

  12. Symmetric tensor decomposition-configuration interaction study of BeH2

    NASA Astrophysics Data System (ADS)

    Kasamatsu, Shusuke; Uemura, Wataru; Sugino, Osamu

    2014-03-01

    The configuration interaction (CI) method is a straightforward approach to describing interacting fermions. However, its application is hampered by computational time and memory requirements that increase non-polynomially with the system size. To overcome this problem, we have been developing a variational method based on the canonical decomposition of the full-CI coefficients, which we call the symmetric tensor decomposition (STD)-CI. The applicability of STD-CI was previously tested on simple molecular systems, but here we test it using a stringent benchmark system, i.e., the insertion of Be into H2. The Be + H2 system is known for strong configurational degeneracy along the insertion pathway, and has been used for assessing a method's capability to treat correlated systems. Using a rank-2 decomposition of the full-CI coefficients, we obtained errors of ~10 mHartree relative to full-CI results. This is a huge improvement over Hartree-Fock results, whose errors reach ~100 mHartree in the worst cases, although not as good as, e.g., CAS-CCSD, with errors of less than 1 mHartree.

  13. Towards Interactive Construction of Topical Hierarchy: A Recursive Tensor Decomposition Approach

    PubMed Central

    Wang, Chi; Liu, Xueqing; Song, Yanglei; Han, Jiawei

    2015-01-01

    Automatic construction of user-desired topical hierarchies over large volumes of text data is a highly desirable but challenging task. This study proposes to give users freedom to construct topical hierarchies via interactive operations such as expanding a branch and merging several branches. Existing hierarchical topic modeling techniques are inadequate for this purpose because (1) they cannot consistently preserve the topics when the hierarchy structure is modified; and (2) the slow inference prevents swift response to user requests. In this study, we propose a novel method, called STROD, that allows efficient and consistent modification of topic hierarchies, based on a recursive generative model and a scalable tensor decomposition inference algorithm with theoretical performance guarantee. Empirical evaluation shows that STROD reduces the runtime of construction by several orders of magnitude, while generating consistent and quality hierarchies. PMID:26705505

  14. Tensor-multi-scalar theories: relativistic stars and 3 + 1 decomposition

    NASA Astrophysics Data System (ADS)

    Horbatsch, Michael; Silva, Hector O.; Gerosa, Davide; Pani, Paolo; Berti, Emanuele; Gualtieri, Leonardo; Sperhake, Ulrich

    2015-10-01

    Gravitational theories with multiple scalar fields coupled to the metric and each other—a natural extension of the well studied single-scalar-tensor theories—are interesting phenomenological frameworks to describe deviations from general relativity in the strong-field regime. In these theories, the N-tuple of scalar fields takes values in a coordinate patch of an N-dimensional Riemannian target-space manifold whose properties are poorly constrained by weak-field observations. Here we introduce for simplicity a non-trivial model with two scalar fields and a maximally symmetric target-space manifold. Within this model we present a preliminary investigation of spontaneous scalarization for relativistic, perfect fluid stellar models in spherical symmetry. We find that the scalarization threshold is determined by the eigenvalues of a symmetric scalar-matter coupling matrix, and that the properties of strongly scalarized stellar configurations additionally depend on the target-space curvature radius. In preparation for numerical relativity simulations, we also write down the 3 + 1 decomposition of the field equations for generic tensor-multi-scalar theories.

  15. Aridity and decomposition processes in complex landscapes

    NASA Astrophysics Data System (ADS)

    Ossola, Alessandro; Nyman, Petter

    2015-04-01

    Decomposition of organic matter is a key biogeochemical process contributing to nutrient cycles, carbon fluxes and soil development. The activity of decomposers depends on microclimate, with temperature and rainfall being major drivers. In complex terrain, the fine-scale variation in microclimate (and hence water availability) as a result of slope orientation is caused by differences in incoming radiation and surface temperature. Aridity, measured as the long-term balance between net radiation and rainfall, is a metric that can be used to represent variations in water availability within the landscape. Since aridity metrics can be obtained at fine spatial scales, they could theoretically be used to investigate how decomposition processes vary across complex landscapes. In this study, four research sites were selected in tall open sclerophyll forest along an aridity gradient (Budyko dryness index ranging from 1.56 to 2.22) where microclimate, litter moisture and soil moisture were monitored continuously for one year. Litter bags were packed to estimate decomposition rates (k) using leaves of a tree species not present in the study area (Eucalyptus globulus) in order to avoid home-field advantage effects. Litter mass loss was measured to assess the activity of macro-decomposers (6 mm litter bag mesh size), meso-decomposers (1 mm mesh), microbes above ground (0.2 mm mesh) and microbes below ground (2 cm depth, 0.2 mm mesh). Four replicates for each set of bags were installed at each site, and bags were collected at 1, 2, 4, 7 and 12 months after installation. We first tested whether differences in microclimate due to slope orientation have significant effects on decomposition processes. Then the dryness index was related to decomposition rates to evaluate whether small-scale variation in decomposition can be predicted using readily available information on rainfall and radiation. Decomposition rates (k), calculated by fitting single-pool negative exponential models, generally

  16. Enabling High-Dimensional Hierarchical Uncertainty Quantification by ANOVA and Tensor-Train Decomposition

    SciTech Connect

    Zhang, Zheng; Yang, Xiu; Oseledets, Ivan; Karniadakis, George E.; Daniel, Luca

    2015-01-31

    Hierarchical uncertainty quantification can reduce the computational cost of stochastic circuit simulation by employing spectral methods at different levels. This paper presents an efficient framework to hierarchically simulate some challenging stochastic circuits/systems that include high-dimensional subsystems. Due to the high parameter dimensionality, it is challenging both to extract surrogate models at the low level of the design hierarchy and to handle them in the high-level simulation. In this paper, we develop an efficient analysis-of-variance-based stochastic circuit/microelectromechanical-systems simulator to extract the surrogate models at the low level. In order to avoid the curse of dimensionality, we employ tensor-train decomposition at the high level to construct the basis functions and Gauss quadrature points. As a demonstration, we verify our algorithm on a stochastic oscillator with four MEMS capacitors and 184 random parameters. This challenging example is efficiently simulated by our simulator at the cost of only 10 min in MATLAB on a regular personal computer.
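Tensor-train (TT) decomposition itself is typically computed by the TT-SVD sweep: repeatedly reshape the remaining data into a matrix and take a truncated SVD, producing one 3D core per mode. The NumPy sketch below is a generic illustration of that sweep (unrelated to the authors' simulator; sizes and the test tensor are arbitrary):

```python
import numpy as np

def tt_svd(X, eps=1e-12):
    # TT-SVD sweep: sequential reshapes + truncated SVDs, one 3D core per mode
    dims, cores, r = X.shape, [], 1
    M = X.reshape(dims[0], -1)
    for k in range(len(dims) - 1):
        U, S, Vt = np.linalg.svd(M, full_matrices=False)
        rk = int(np.sum(S > eps * S[0]))                 # numerical truncation rank
        cores.append(U[:, :rk].reshape(r, dims[k], rk))  # core G_k: (r_{k-1}, n_k, r_k)
        M = (S[:rk, None] * Vt[:rk]).reshape(rk * dims[k + 1], -1)
        r = rk
    cores.append(M.reshape(r, dims[-1], 1))
    return cores

def tt_full(cores):
    # contract the chain of cores back into a full tensor
    G = cores[0]
    for C in cores[1:]:
        G = np.einsum('...a,aib->...ib', G, C)
    return G[0, ..., 0]

rng = np.random.default_rng(3)
# a 4-way tensor that is a sum of two rank-1 terms -> TT ranks at most 2
vecs = [rng.standard_normal((2, n)) for n in (4, 5, 6, 7)]
X = (np.einsum('i,j,k,l->ijkl', *(v[0] for v in vecs))
     + np.einsum('i,j,k,l->ijkl', *(v[1] for v in vecs)))

cores = tt_svd(X)
assert max(c.shape[2] for c in cores) <= 2   # small TT ranks found
assert np.allclose(X, tt_full(cores))        # exact reconstruction
```

Because storage grows with the TT ranks rather than exponentially in the number of modes, this is the mechanism by which the paper sidesteps the curse of dimensionality for its 184 random parameters.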

  17. Optical Acquisition and Polar Decomposition of the Full-Field Deformation Gradient Tensor Within a Fracture Callus

    PubMed Central

    Kim, Wangdo; Kohles, Sean S.

    2009-01-01

    Tracking tissue deformation is often hampered by material inhomogeneity, so local measurements tend to be insufficient, leading to the necessity of full-field optical measurements. This study presents a novel approach to factoring heterogeneous deformation of soft and hard tissues in a fracture callus by introducing an anisotropic metric derived from the deformation gradient tensor (F). The deformation gradient tensor contains all the information available in a Green-Lagrange strain tensor, plus the rigid-body rotational components. A recent study [Bottlang et al., J. Biomech. 41(3), 2008] produced full-field strains within ovine fracture calluses acquired through the application of electronic speckle pattern interferometry (ESPI). That technique is based on the infinitesimal strain approximation (engineering strain), whose scheme is not independent of rigid-body rotation. In this work, for rotation extraction, the stretch and rotation tensors were separately determined from F by the polar decomposition theorem. Interfragmentary motions in a fracture gap were characterized by the two distinct mechanical factors (stretch and rotation) at each material point through full-field mapping. In the composite nature of bone and soft tissue, collagen arrangements are hypothesized such that fibers locally aligned with principal directions will stretch, and fibers not aligned with the principal direction will rotate and stretch. This approach has revealed the deformation gradient tensor as an appropriate quantification of strain within callus bony and fibrous tissue via optical measurements. PMID:19647826
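The polar split F = R U used above (a rigid-body rotation R times a symmetric right stretch tensor U) follows directly from the SVD: if F = W S Vᵀ, then R = W Vᵀ and U = V S Vᵀ. A NumPy sketch for a simple-shear deformation gradient (the shear value is illustrative):

```python
import numpy as np

# deformation gradient for simple shear with shear amount gamma
gamma = 0.5
F = np.array([[1.0, gamma, 0.0],
              [0.0, 1.0,   0.0],
              [0.0, 0.0,   1.0]])

# polar decomposition F = R @ U via the SVD F = W S Vt
W, S, Vt = np.linalg.svd(F)
R = W @ Vt                    # rotation: orthogonal, det = +1 here since det(F) > 0
U = Vt.T @ np.diag(S) @ Vt    # right stretch tensor: symmetric positive-definite

assert np.allclose(F, R @ U)
assert np.allclose(R @ R.T, np.eye(3))
assert np.allclose(U, U.T)
```

Applied pointwise to a measured full-field F, this separates how much material at each point stretches from how much it merely rotates, which is exactly the factorization exploited in the callus study.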

  18. Tensor based geology preserving reservoir parameterization with Higher Order Singular Value Decomposition (HOSVD)

    NASA Astrophysics Data System (ADS)

    Afra, Sardar; Gildin, Eduardo

    2016-09-01

    Parameter estimation through robust parameterization techniques has been addressed in many works on history matching and inverse problems. Reservoir models are in general complex, nonlinear, and large-scale with respect to the large number of states and unknown parameters. Thus, a practical approach that replaces the original set of highly correlated unknown parameters with a non-correlated set of lower dimensionality, one that captures the most significant features of the original set, is of high importance. Furthermore, de-correlating the system's parameters while keeping the geological description intact is critical to controlling the ill-posed nature of such problems. We introduce the advantages of a new low-dimensional parameterization approach for reservoir characterization applications utilizing multilinear-algebra-based techniques such as higher order singular value decomposition (HOSVD). In tensor-based approaches like HOSVD, 2D permeability images are treated as they are, i.e., the data structure is kept intact, whereas in conventional dimensionality reduction algorithms like SVD the data have to be vectorized. Hence, compared to classical methods, higher redundancy reduction with less information loss can be achieved by decreasing the redundancies present in all dimensions. In other words, the HOSVD approximation yields a more compact data representation in the least-squares sense, and better geological consistency, than classical algorithms. We examined the performance of the proposed parameterization technique against the SVD approach on the SPE10 benchmark reservoir model as well as on synthetic channelized permeability maps to demonstrate the capability of the proposed method. Moreover, to acquire statistical consistency, we repeat all experiments for a set of 1000 unknown geological samples and provide a comparison using RMSE analysis. Results prove that, for a fixed compression ratio, the performance of the proposed approach
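The HOSVD referred to here computes one orthonormal factor matrix per mode (from the SVD of each mode unfolding) plus a small core tensor. A minimal 3-way NumPy sketch (illustrative only; sizes and the synthetic test tensor are arbitrary, not SPE10 data), which is exact when the truncation ranks equal the multilinear rank:

```python
import numpy as np

def hosvd(X, ranks):
    # truncated HOSVD for a 3-way tensor: per-mode factors + core tensor
    factors = []
    for mode, r in enumerate(ranks):
        unf = np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)  # mode-k unfolding
        U, _, _ = np.linalg.svd(unf, full_matrices=False)
        factors.append(U[:, :r])                                  # leading left singular vectors
    U1, U2, U3 = factors
    core = np.einsum('ijk,ia,jb,kc->abc', X, U1, U2, U3)          # project onto the subspaces
    return core, factors

rng = np.random.default_rng(4)
# synthetic tensor with multilinear rank (2, 2, 2)
G = rng.standard_normal((2, 2, 2))
A1, A2, A3 = (rng.standard_normal((n, 2)) for n in (8, 9, 10))
X = np.einsum('abc,ia,jb,kc->ijk', G, A1, A2, A3)

core, (U1, U2, U3) = hosvd(X, ranks=(2, 2, 2))
X_hat = np.einsum('abc,ia,jb,kc->ijk', core, U1, U2, U3)
assert np.allclose(X, X_hat)   # exact: no multilinear rank was truncated
```

The contrast with plain SVD is visible in the code: X is never vectorized, so redundancy is removed along every mode separately, which is the paper's argument for better compression at fixed geological fidelity.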

  19. Diffusion tensors for processing sheared and rotated rectangles.

    PubMed

    Steidl, Gabriele; Teuber, Tanja

    2009-12-01

    Image restoration and simplification methods that respect important features such as edges play a fundamental role in digital image processing. However, known edge-preserving methods like common nonlinear diffusion methods tend to round vertices for large diffusion times. In this paper, we adapt the diffusion tensor for anisotropic diffusion to avoid these effects in images containing rotated and sheared rectangles, respectively. In this context, we propose a new method for estimating rotation angles and shear parameters based on the so-called structure tensor. Further, we show how the knowledge of appropriate diffusion tensors can be used in variational models. Numerical examples including orientation estimation, denoising and segmentation demonstrate the good performance of our methods. PMID:19651552
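The structure tensor used for the orientation estimation above is the (smoothed or averaged) outer product of the image gradient with itself; its dominant eigenvector points along the local gradient direction. A simplified NumPy sketch (not the paper's method; it averages over the whole image of a synthetic stripe pattern with a known angle rather than smoothing locally):

```python
import numpy as np

def structure_tensor(img):
    # image gradients via finite differences (np.gradient: axis 0 = y, axis 1 = x)
    gy, gx = np.gradient(img.astype(float))
    # 2x2 structure tensor, averaged over the whole image
    return np.array([[(gx * gx).mean(), (gx * gy).mean()],
                     [(gx * gy).mean(), (gy * gy).mean()]])

# synthetic stripe pattern rotated by a known angle
theta = np.deg2rad(30.0)
y, x = np.mgrid[0:128, 0:128]
img = np.sin(0.3 * (np.cos(theta) * x + np.sin(theta) * y))

J = structure_tensor(img)
evals, evecs = np.linalg.eigh(J)
n = evecs[:, np.argmax(evals)]          # dominant eigenvector = stripe normal
est = np.arctan2(n[1], n[0]) % np.pi    # orientation, modulo 180 degrees

assert abs(est - theta) < np.deg2rad(1.0)
```

In the paper this orientation (and an analogous shear estimate) is then used to steer the diffusion tensor so that diffusion acts along, not across, the rectangle edges.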

  20. Biogeochemistry of Decomposition and Detrital Processing

    NASA Astrophysics Data System (ADS)

    Sanderman, J.; Amundson, R.

    2003-12-01

    Decomposition is a key ecological process that roughly balances net primary production in terrestrial ecosystems and is an essential process in resupplying nutrients to the plant community. Decomposition consists of three concurrent processes: comminution or fragmentation, leaching of water-soluble compounds, and microbial catabolism. Decomposition can also be viewed as a sequential process, which Eijsackers and Zehnder (1990) compare to a Russian matryoshka doll. Soil macrofauna fragment and partially solubilize plant residues, facilitating establishment of a community of decomposer microorganisms. This decomposer community will gradually shift as the most easily degraded plant compounds are utilized and the more recalcitrant materials begin to accumulate. Given enough time and the proper environmental conditions, most naturally occurring compounds can be completely mineralized to inorganic forms. Simultaneously with mineralization, the process of humification acts to transform a fraction of the plant residues into stable soil organic matter (SOM) or humus. For reference, Schlesinger (1990) estimated that only ~0.7% of detritus eventually becomes stabilized into humus. Decomposition plays a key role in the cycling of most plant macro- and micronutrients and in the formation of humus. Figure 1 places the roles of detrital processing and mineralization within the context of the biogeochemical cycling of essential plant nutrients. Chapin (1991) found that while the atmosphere supplied 4% and mineral weathering supplied no nitrogen and <1% of phosphorus, internal nutrient recycling is the source for >95% of all the nitrogen and phosphorus uptake by tundra species in Barrow, Alaska. In a cool temperate forest, nutrient recycling accounted for 93%, 89%, 88%, and 65% of total sources for nitrogen, phosphorus, potassium, and calcium, respectively (Chapin, 1991). Figure 1. A decomposition-centric biogeochemical model of nutrient cycling.
Although there is significant

  2. Decomposition of Variance for Spatial Cox Processes

    PubMed Central

    Jalilian, Abdollah; Guan, Yongtao; Waagepetersen, Rasmus

    2012-01-01

    Spatial Cox point processes provide a natural framework for quantifying the various sources of variation governing the spatial distribution of rain forest trees. We introduce a general criterion for variance decomposition for spatial Cox processes and apply it to specific Cox process models with additive or log-linear random intensity functions. We moreover consider a new and flexible class of pair correlation function models given in terms of normal variance mixture covariance functions. The proposed methodology is applied to point pattern data sets of locations of tropical rain forest trees. PMID:23599558

  3. Tensoral: A system for post-processing turbulence simulation data

    NASA Technical Reports Server (NTRS)

    Dresselhaus, Eliot

    1993-01-01

    Many computer simulations in engineering and science -- and especially in computational fluid dynamics (CFD) -- produce huge quantities of numerical data. These data are often so large as to make even relatively simple post-processing of this data unwieldy. The data, once computed and quality-assured, is most likely analyzed by only a few people. As a result, much useful numerical data is under-utilized. Since future state-of-the-art simulations will produce even larger datasets, will use more complex flow geometries, and will be performed on more complex supercomputers, data management issues will become increasingly cumbersome. My goal is to provide software which will automate the present and future task of managing and post-processing large turbulence datasets. My research has focused on the development of these software tools -- specifically, through the development of a very high-level language called 'Tensoral'. The ultimate goal of Tensoral is to convert high-level mathematical expressions (tensor algebra, calculus, and statistics) into efficient low-level programs which numerically calculate these expressions given simulation datasets. This approach to the database and post-processing problem has several advantages. Using Tensoral the numerical and data management details of a simulation are shielded from the concerns of the end user. This shielding is carried out without sacrificing post-processor efficiency and robustness. Another advantage of Tensoral is that its very high-level nature lends itself to portability across a wide variety of computing (and supercomputing) platforms. This is especially important considering the rapidity of changes in supercomputing hardware.

  4. Using empirical mode decomposition to process marine magnetotelluric data

    NASA Astrophysics Data System (ADS)

    Chen, Jin; Heincke, Bjoern; Jegen, Marion; Moorkamp, Max

    2012-07-01

    A major step in processing magnetotelluric (MT) data is the calculation of an impedance tensor as function of frequency from recorded time-varying electromagnetic fields. Common signal processing techniques such as Fourier transform based procedures assume that the signals are stationary over the record length, which is not necessarily the case in MT, due to the possibility of sudden spatial and temporal variations in the naturally occurring source fields. In addition, noise in the recorded electric and magnetic field data may also be non-stationary. Many modern MT processing techniques can handle such non-stationarities through strategies such as windowing of the time-series. However, it is not completely clear how extreme non-stationarity may affect the resulting impedances. As a possible alternative, we examine a heuristic method called empirical mode decomposition (EMD) that is developed to handle arbitrary non-stationary time-series. EMD is a dynamic time series analysis method, in which complicated data sets can be decomposed into a finite number of simple intrinsic mode functions. In this paper, we use the EMD method on real and synthetic MT data. To determine impedance tensor estimates we first calculate instantaneous frequencies and spectra from the intrinsic mode functions and apply the impedance formula proposed by Berdichevsky to the instantaneous spectra. We first conduct synthetic tests where we compare the results from our EMD method to analytically determined apparent resistivities and phases. Next, we compare our strategy to a simple Fourier derived impedance formula and the frequently used robust processing technique bounded-influence remote reference processing (BIRRP) for different levels of stochastic noise. 
All results show that apparent resistivities and phases which are calculated from EMD derived impedance tensors are generally more stable than those determined from simple Fourier analysis and only slightly worse than those from the robust

  5. Tensor Algebra Library for NVidia Graphics Processing Units

    SciTech Connect

    Liakh, Dmitry

    2015-03-16

    This is a general purpose math library implementing basic tensor algebra operations on NVidia GPU accelerators. This software is a tensor algebra library that can perform basic tensor algebra operations, including tensor contractions, tensor products, tensor additions, etc., on NVidia GPU accelerators, asynchronously with respect to the CPU host. It supports a simultaneous use of multiple NVidia GPUs. Each asynchronous API function returns a handle which can later be used for querying the completion of the corresponding tensor algebra operation on a specific GPU. The tensors participating in a particular tensor operation are assumed to be stored in local RAM of a node or GPU RAM. The main research area where this library can be utilized is the quantum many-body theory (e.g., in electronic structure theory).
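The basic operation such a library accelerates, a tensor contraction over shared indices, is mathematically a matrix product over flattened index groups. A CPU-side NumPy sketch of that equivalence (shapes chosen arbitrarily; this is not the GPU library's API):

```python
import numpy as np

rng = np.random.default_rng(2)
# two 4th-order tensors, as appear in many-body (e.g. coupled-cluster) contractions
T = rng.standard_normal((4, 5, 6, 7))
V = rng.standard_normal((6, 7, 8, 9))

# contract over the shared index pair (c, d):
# R[a,b,e,f] = sum_{c,d} T[a,b,c,d] * V[c,d,e,f]
R = np.einsum('abcd,cdef->abef', T, V)

# the same contraction as one matrix product over flattened index groups,
# which is exactly what a GPU library can dispatch to a GEMM kernel
R2 = (T.reshape(4 * 5, 6 * 7) @ V.reshape(6 * 7, 8 * 9)).reshape(4, 5, 8, 9)
assert np.allclose(R, R2)
```

Reducing contractions to large matrix multiplications is what makes GPU acceleration effective for the quantum many-body workloads the record mentions.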

  6. Tensor Algebra Library for NVidia Graphics Processing Units

    Energy Science and Technology Software Center (ESTSC)

    2015-03-16

    This is a general purpose math library implementing basic tensor algebra operations on NVidia GPU accelerators. This software is a tensor algebra library that can perform basic tensor algebra operations, including tensor contractions, tensor products, tensor additions, etc., on NVidia GPU accelerators, asynchronously with respect to the CPU host. It supports a simultaneous use of multiple NVidia GPUs. Each asynchronous API function returns a handle which can later be used for querying the completion of the corresponding tensor algebra operation on a specific GPU. The tensors participating in a particular tensor operation are assumed to be stored in local RAM of a node or GPU RAM. The main research area where this library can be utilized is the quantum many-body theory (e.g., in electronic structure theory).

  7. Decomposition

    USGS Publications Warehouse

    Middleton, Beth A.

    2014-01-01

    A cornerstone of ecosystem ecology, decomposition was recognized as a fundamental process driving the exchange of energy in ecosystems by early ecologists such as Lindeman (1942) and Odum (1960). In the history of ecology, studies of decomposition were incorporated into the International Biological Program in the 1960s to compare the nature of organic matter breakdown in various ecosystem types. Such studies still have an important role in ecological studies today. More recent refinements have brought debates on the relative roles of microbes, invertebrates, and the environment in the breakdown and release of carbon into the atmosphere, as well as on how nutrient cycling, production, and other ecosystem processes regulated by decomposition may shift with climate change. Therefore, this bibliography examines the primary literature related to organic matter breakdown, but it also explores topics in which decomposition plays a key supporting role, including vegetation composition, latitudinal gradients, altered ecosystems, anthropogenic impacts, carbon storage, and climate change models. Knowledge of these topics is relevant both to the study of ecosystem ecology and to projections of future conditions for human societies.

  8. A non-statistical regularization approach and a tensor product decomposition method applied to complex flow data

    NASA Astrophysics Data System (ADS)

    von Larcher, Thomas; Blome, Therese; Klein, Rupert; Schneider, Reinhold; Wolf, Sebastian; Huber, Benjamin

    2016-04-01

    Handling high-dimensional data sets, such as those occurring in turbulent flows or in certain types of multiscale behaviour in the Geosciences, is one of the big challenges in numerical analysis and scientific computing. A suitable solution is to represent those large data sets in an appropriate compact form. In this context, tensor product decomposition methods currently emerge as an important tool. One reason is that these methods often enable one to attack high-dimensional problems successfully; another is that they allow for very compact representations of large data sets. We follow the novel Tensor-Train (TT) decomposition method to support the development of improved understanding of the multiscale behavior and the development of compact storage schemes for solutions of such problems. One long-term goal of the project is the construction of a self-consistent closure for Large Eddy Simulations (LES) of turbulent flows that explicitly exploits the tensor product approach's capability of capturing self-similar structures. Secondly, we focus on a mixed deterministic-stochastic subgrid scale modelling strategy currently under development for application in Finite Volume Large Eddy Simulation (LES) codes. Advanced methods of time series analysis for the data-based construction of stochastic models with inherently non-stationary statistical properties, and concepts of information theory based on a modified Akaike information criterion and on the Bayesian information criterion for model discrimination, are used to construct surrogate models for the non-resolved flux fluctuations. Vector-valued auto-regressive models with external influences form the basis for the modelling approach [1], [2], [4]. Here, we present the reconstruction capabilities of the two modelling approaches tested against 3D turbulent channel flow data computed by direct numerical simulation (DNS) for an incompressible, isothermal fluid at Reynolds number Reτ = 590 (computed by [3]).
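
As an illustration of the Tensor-Train format mentioned above, here is a minimal TT-SVD sketch that factors a d-way array into a chain of 3-way cores by sequential SVDs. This is the generic textbook TT-SVD algorithm, not the authors' code:

```python
import numpy as np

def tt_svd(T, eps=1e-12):
    """Tensor-Train decomposition by sequential SVDs: returns cores G_k
    of shape (r_{k-1}, n_k, r_k) whose chain contraction reproduces T."""
    shape = T.shape
    cores, r = [], 1
    M = T.reshape(shape[0], -1)
    for k in range(len(shape) - 1):
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        rank = max(1, int(np.sum(s > eps * s[0])))   # truncation threshold
        cores.append(U[:, :rank].reshape(r, shape[k], rank))
        M = (s[:rank, None] * Vt[:rank]).reshape(rank * shape[k + 1], -1)
        r = rank
    cores.append(M.reshape(r, shape[-1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract the TT cores back into a full array."""
    out = cores[0]
    for G in cores[1:]:
        out = np.tensordot(out, G, axes=(-1, 0))
    return np.squeeze(out, axis=(0, -1))

T = np.random.default_rng(0).standard_normal((3, 4, 5, 6))
cores = tt_svd(T)
recon = tt_reconstruct(cores)
```

With a negligible truncation threshold the reconstruction is exact; compression comes from choosing eps larger so that small singular values are dropped.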

  9. An image-processing toolset for diffusion tensor tractography

    PubMed Central

    Mishra, Arabinda; Lu, Yonggang; Choe, Ann S.; Aldroubi, Akram; Gore, John C.; Anderson, Adam W.; Ding, Zhaohua

    2009-01-01

    Diffusion tensor imaging (DTI)-based fiber tractography holds great promise in delineating neuronal fiber tracts and, hence, providing connectivity maps of the neural networks in the human brain. An array of image-processing techniques has to be developed to turn DTI tractography into a practically useful tool. To this end, we have developed a suite of image-processing tools for fiber tractography with improved reliability. This article summarizes the main technical developments we have made to date, which include anisotropic smoothing, anisotropic interpolation, Bayesian fiber tracking and automatic fiber bundling. A primary focus of these techniques is the robustness to noise and partial volume averaging, the two major hurdles to reliable fiber tractography. Performance of these techniques has been comprehensively examined with simulated and in vivo DTI data, demonstrating improvements in the robustness and reliability of DTI tractography. PMID:17371726
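
One quantity that recurs throughout DTI post-processing of this kind is the fractional anisotropy (FA) of the fitted tensor. A small sketch of the standard eigenvalue-based FA computation (the tensor values below are illustrative, not from the article):

```python
import numpy as np

def fractional_anisotropy(D):
    """FA of a 3x3 symmetric diffusion tensor from its eigenvalues."""
    lam = np.linalg.eigvalsh(D)
    md = lam.mean()                        # mean diffusivity
    num = np.sqrt(np.sum((lam - md) ** 2))
    den = np.sqrt(np.sum(lam ** 2))
    return np.sqrt(1.5) * num / den

# an isotropic tensor has FA = 0; a stick-like tensor approaches FA = 1
iso = 0.7e-3 * np.eye(3)
stick = np.diag([1.7e-3, 0.2e-3, 0.2e-3])
```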

  10. A tensor-based population value decomposition to explain rectal toxicity after prostate cancer radiotherapy

    PubMed Central

    Ospina, Juan David; Commandeur, Frédéric; Ríos, Richard; Dréan, Gaël; Correa, Juan Carlos; Simon, Antoine; Haigron, Pascal; De Crevoisier, Renaud; Acosta, Oscar

    2013-01-01

    In prostate cancer radiotherapy, the association between the dose distribution and the occurrence of undesirable side-effects is yet to be revealed. In this work, a method to perform population analysis by comparing dose distributions is proposed. The method is a tensor-based approach that generalises an existing method for 2D images and allows for the highlighting of over-irradiated zones correlated with rectal bleeding after prostate cancer radiotherapy. Thus, the aim is to contribute to the elucidation of the dose patterns correlated with rectal toxicity. The method was applied to a cohort of 63 patients, and it was able to build a dose pattern characterizing the difference between patients presenting rectal bleeding after prostate cancer radiotherapy and those who did not. PMID:24579164

  11. Advanced Insights into Functional Brain Connectivity by Combining Tensor Decomposition and Partial Directed Coherence

    PubMed Central

    Leistritz, Lutz; Witte, Herbert; Schiecke, Karin

    2015-01-01

    Quantification of functional connectivity in physiological networks is frequently performed by means of time-variant partial directed coherence (tvPDC), based on time-variant multivariate autoregressive models. The principal advantage of tvPDC lies in its simultaneous combination of directionality, time variance, and frequency selectivity, offering a more differentiated view into complex brain networks. Yet the advantages specific to tvPDC also produce a large number of results, leading to serious problems in interpretability. To counter this issue, we propose the decomposition of multi-dimensional tvPDC results into a sum of rank-1 outer products. This leads to a data condensation which enables an advanced interpretation of results. Furthermore, it is thereby possible to uncover inherent interaction patterns of induced neuronal subsystems by limiting the decomposition to several relevant channels, while retaining the global influence determined by the preceding multivariate AR estimation and tvPDC calculation of the entire scalp. Finally, a comparison between several subjects is considerably easier, as individual tvPDC results are summarized within a comprehensive model equipped with subject-specific loading coefficients. A proof-of-principle of the approach is provided by means of simulated data; EEG data from an experiment concerning visual evoked potentials are used to demonstrate the applicability to real data. PMID:26046537
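
The decomposition into a sum of rank-1 outer products that the authors apply to tvPDC arrays is, in generic form, a canonical polyadic (CP) decomposition. A minimal alternating-least-squares sketch for a 3-way array follows; this is textbook CP-ALS, not the authors' implementation:

```python
import numpy as np

def khatri_rao(X, Y):
    """Column-wise Khatri-Rao product: row (i * len(Y) + j) = X[i] * Y[j]."""
    return (X[:, None, :] * Y[None, :, :]).reshape(-1, X.shape[1])

def cp_als(T, rank, iters=500, seed=0):
    """Rank-R canonical polyadic decomposition of a 3-way array by
    alternating least squares: T ~ sum_r a_r (outer) b_r (outer) c_r."""
    I, J, K = T.shape
    rng = np.random.default_rng(seed)
    A, B, C = (rng.standard_normal((n, rank)) for n in (I, J, K))
    T1 = T.reshape(I, J * K)                      # mode-1 unfolding
    T2 = np.moveaxis(T, 1, 0).reshape(J, I * K)   # mode-2 unfolding
    T3 = np.moveaxis(T, 2, 0).reshape(K, I * J)   # mode-3 unfolding
    for _ in range(iters):
        A = T1 @ np.linalg.pinv(khatri_rao(B, C)).T
        B = T2 @ np.linalg.pinv(khatri_rao(A, C)).T
        C = T3 @ np.linalg.pinv(khatri_rao(A, B)).T
    return A, B, C

# recover an exactly rank-2 synthetic tensor
rng = np.random.default_rng(1)
At, Bt, Ct = (rng.standard_normal((n, 2)) for n in (6, 5, 4))
T = np.einsum('ir,jr,kr->ijk', At, Bt, Ct)
A, B, C = cp_als(T, rank=2)
recon = np.einsum('ir,jr,kr->ijk', A, B, C)
rel_err = np.linalg.norm(recon - T) / np.linalg.norm(T)
```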

  12. On feature extraction and classification in prostate cancer radiotherapy using tensor decompositions.

    PubMed

    Fargeas, Auréline; Albera, Laurent; Kachenoura, Amar; Dréan, Gaël; Ospina, Juan-David; Coloigner, Julie; Lafond, Caroline; Delobel, Jean-Bernard; De Crevoisier, Renaud; Acosta, Oscar

    2015-01-01

    External beam radiotherapy is commonly prescribed for prostate cancer. Although new radiation techniques allow high doses to be delivered to the target, the surrounding healthy organs (rectum and bladder) may suffer from irradiation, which might produce undesirable side-effects. Hence, understanding the complex toxicity dose-volume effect relationships is crucial to adapting the treatment, thereby decreasing the risk of toxicity. In this paper, we introduce a novel method to classify patients at risk of presenting rectal bleeding based on a Deterministic Multi-way Analysis (DMA) of three-dimensional planned dose distributions across a population. After a non-rigid spatial alignment of the anatomies applied to the dose distributions, the proposed method seeks two bases of vectors representing bleeding and non-bleeding patients by using the Canonical Polyadic (CP) decomposition of two fourth-order arrays of the planned doses. A patient is then classified according to the distances to the subspaces spanned by both bases. A total of 99 patients treated for prostate cancer were used to analyze and test the performance of the proposed approach, named CP-DMA, in a leave-one-out cross-validation scheme. Results were compared with supervised (linear discriminant analysis, support vector machine, K-means, K-nearest neighbor) and unsupervised (a recent principal component analysis-based algorithm, and a multidimensional classification method) approaches based on the registered dose distribution. Moreover, CP-DMA was also compared with the Normal Tissue Complication Probability (NTCP) model. The CP-DMA method allowed rectal bleeding patients to be classified with good specificity and sensitivity values, outperforming the classical approaches. PMID:25443534
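
The final classification step described above, assigning a patient to the nearer of two subspaces, can be sketched as a generic projection-residual classifier. The toy bases and vector below are hypothetical, not the CP-DMA pipeline itself:

```python
import numpy as np

def subspace_distance(x, basis):
    """Distance from x to the column span of `basis` (QR projection)."""
    Q, _ = np.linalg.qr(basis)
    return np.linalg.norm(x - Q @ (Q.T @ x))

def classify(x, basis_a, basis_b, labels=('bleeding', 'non-bleeding')):
    """Assign x to whichever subspace it lies closer to."""
    da = subspace_distance(x, basis_a)
    db = subspace_distance(x, basis_b)
    return labels[0] if da < db else labels[1]

# toy bases in R^4: one spanned by e1, e2, the other by e3, e4
basis_a = np.eye(4)[:, :2]
basis_b = np.eye(4)[:, 2:]
x = np.array([1.0, 0.9, 0.1, 0.0])   # nearly inside span(e1, e2)
```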

  13. Tracking Hierarchical Processing in Morphological Decomposition with Brain Potentials

    ERIC Educational Resources Information Center

    Lavric, Aureliu; Elchlepp, Heike; Rastle, Kathleen

    2012-01-01

    One important debate in psycholinguistics concerns the nature of morphological decomposition processes in visual word recognition (e.g., darkness = {dark} + {-ness}). One theory claims that these processes arise during orthographic analysis and prior to accessing meaning (Rastle & Davis, 2008), and another argues that these processes arise through…

  14. Decomposition: A Strategy for Query Processing.

    ERIC Educational Resources Information Center

    Wong, Eugene; Youssefi, Karel

    Multivariable queries can be processed in the data base management system INGRES. The general procedure is to decompose the query into a sequence of one-variable queries using two processes. One process is reduction which requires breaking off components of the query which are joined to it by a single variable. The other process,…

  15. Substrate heterogeneity and environmental variability in the decomposition process

    NASA Astrophysics Data System (ADS)

    Sierra, Carlos; Harmon, Mark; Perakis, Steven

    2010-05-01

    Soil organic matter is a complex mixture of material with heterogeneous biological, physical, and chemical properties. However, traditional analyses of organic matter decomposition assume that a single decomposition rate constant can represent the dynamics of this heterogeneous mix. Terrestrial decomposition models approach this heterogeneity by representing organic matter as a substrate with three to six pools with different susceptibilities to decomposition. Even though it is well recognized that this representation of organic matter in models is less than ideal, there is little work analyzing the effects of assuming substrate homogeneity or simple discrete representations on the mineralization of carbon and nutrients. Using concepts from the continuous quality theory developed by Göran I. Ågren and Ernesto Bosatta, we performed a systematic analysis to explore the consequences of ignoring substrate heterogeneity in modeling decomposition. We found that the compartmentalization of organic matter in a few pools introduces approximation error when both the distribution of carbon and the decomposition rate are continuous functions of quality. This error is generally large for models that use three or four pools. We also found that the pattern of carbon and nitrogen mineralization over time is highly dependent on differences in microbial growth and efficiency for different qualities. In the long-term, stabilization and destabilization processes operating simultaneously result in the accumulation of carbon in lower qualities, independent of the quality of the incoming litter. This large amount of carbon accumulated in lower qualities would produce a major response to temperature change even when its temperature sensitivity is low. The interaction of substrate heterogeneity and temperature variability produces behaviors of carbon accumulation that cannot be predicted by simple decomposition models. Responses of soil organic matter to temperature change would depend
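
The effect of ignoring substrate heterogeneity can be illustrated with a toy calculation: if decay rates k follow a Gamma distribution over substrate quality, the carbon remaining is the Laplace transform of that distribution and decays much more slowly at long times than a single-pool model matched to the same mean rate. The parameters below are hypothetical, not from the study:

```python
import numpy as np

# k ~ Gamma(a, b): remaining carbon is the Laplace transform of the rate
# distribution, C(t) = (1 + b t)^(-a); the matched one-pool model uses
# the mean rate k_mean = a * b
a, b = 1.0, 0.5            # hypothetical quality-distribution parameters
t = np.linspace(0.0, 50.0, 501)
c_hetero = (1.0 + b * t) ** (-a)     # heterogeneous substrate
c_onepool = np.exp(-a * b * t)       # single-pool approximation
```

At long times the heterogeneous substrate retains orders of magnitude more carbon, which is the kind of behavior the abstract says simple pool models cannot capture.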

  16. Statistical Modeling of the Industrial Sodium Aluminate Solutions Decomposition Process

    NASA Astrophysics Data System (ADS)

    Živković, Živan; Mihajlović, Ivan; Djurić, Isidora; Štrbac, Nada

    2010-10-01

    This article presents the results of the statistical modeling of industrial sodium aluminate solution decomposition as part of the Bayer alumina production process. The aim of this study was to define the correlation dependence of the degree of aluminate solution decomposition on the following technological process parameters: concentration of Na2O (caustic), caustic ratio, crystallization ratio, starting temperature, final temperature, average diameter of the crystallization seed, and duration of the decomposition process. Multiple linear regression analysis (MLRA) and artificial neural networks (ANNs) were used as the tools for the mathematical analysis of the indicated problem. On the one hand, modeling the process using MLRA resulted in a linear model whose correlation coefficient was equal to R² = 0.731. On the other hand, ANNs enabled, to some extent, better process modeling, with a correlation coefficient equal to R² = 0.895. Both models can be used for the efficient prediction of the degree of sodium aluminate solution decomposition as a function of the input parameters under industrial conditions of the Bayer alumina production process.
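
The MLRA step can be sketched with ordinary least squares on synthetic data. The two inputs below are hypothetical stand-ins for process parameters such as starting temperature and decomposition duration; the R² values reported in the article come from the industrial data, not from this toy:

```python
import numpy as np

def fit_linear_model(X, y):
    """Ordinary least squares with an intercept; returns (beta, R^2)."""
    Xd = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    resid = y - Xd @ beta
    r2 = 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
    return beta, r2

# synthetic "degree of decomposition" as a noisy linear function of two
# hypothetical process inputs
rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, size=(200, 2))
y = 0.3 + 0.5 * X[:, 0] + 0.2 * X[:, 1] + 0.01 * rng.standard_normal(200)
beta, r2 = fit_linear_model(X, y)
```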

  17. The ergodic decomposition of stationary discrete random processes

    NASA Technical Reports Server (NTRS)

    Gray, R. M.; Davisson, L. D.

    1974-01-01

    The ergodic decomposition is discussed, and a version focusing on the structure of individual sample functions of stationary processes is proved for the special case of discrete-time random processes with discrete alphabets. The result is stronger in this case than the usual theorem, and the proof is both intuitive and simple. Estimation-theoretic and information-theoretic interpretations are developed and applied to prove existence theorems for universal source codes, both noiseless and with a fidelity criterion.

  18. Analysis of benzoquinone decomposition in solution plasma process

    NASA Astrophysics Data System (ADS)

    Bratescu, M. A.; Saito, N.

    2016-01-01

    The decomposition of p-benzoquinone (p-BQ) in Solution Plasma Processing (SPP) was analyzed by Coherent Anti-Stokes Raman Spectroscopy (CARS) by monitoring the change of the anti-Stokes signal intensity of the vibrational transitions of the molecule, during and after SPP. Just in the beginning of the SPP treatment, the CARS signal intensities of the ring vibrational molecular transitions increased under the influence of the electric field of plasma. The results show that plasma influences the p-BQ molecules in two ways: (i) plasma produces a polarization and an orientation of the molecules in the local electric field of plasma and (ii) the gas phase plasma supplies, in the liquid phase, hydrogen and hydroxyl radicals, which reduce or oxidize the molecules, respectively, generating different carboxylic acids. The decomposition of p-BQ after SPP was confirmed by UV-visible absorption spectroscopy and liquid chromatography.

  19. Singular value decomposition in magnetotelluric sounding data processing

    SciTech Connect

    Shengjie, S.

    1991-01-01

    In this paper, the singular value decomposition (SVD) method for magnetotelluric sounding data processing is described in detail, and its real-number computational procedure is derived. For analysis, this method decomposes the data matrix into a signal matrix and a noise matrix. It gives a least-squares estimate of the response function, performs quantitative analysis of signal and noise to calculate the S/N ratio, and provides the estimated variance of the response function. Theoretical calculation shows that this method is reliable and effective in suppressing noise, estimating the response function, and analyzing noise and variance.
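
The signal/noise split described above can be sketched with a truncated SVD. This is a generic rank-k separation with the S/N ratio taken as a simple norm ratio; the paper's exact estimator may differ:

```python
import numpy as np

def svd_split(D, k):
    """Split a data matrix into a rank-k 'signal' matrix and a residual
    'noise' matrix via truncated SVD, with a norm-ratio S/N estimate."""
    U, s, Vt = np.linalg.svd(D, full_matrices=False)
    signal = (U[:, :k] * s[:k]) @ Vt[:k]
    noise = D - signal
    snr = np.linalg.norm(signal) / np.linalg.norm(noise)
    return signal, noise, snr

# a rank-2 "response" matrix buried in weak noise
rng = np.random.default_rng(3)
clean = rng.standard_normal((20, 2)) @ rng.standard_normal((2, 15))
D = clean + 0.01 * rng.standard_normal((20, 15))
signal, noise, snr = svd_split(D, k=2)
```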

  20. The impact of post-processing on spinal cord diffusion tensor imaging

    PubMed Central

    Mohammadi, Siawoosh; Freund, Patrick; Feiweier, Thorsten; Curt, Armin; Weiskopf, Nikolaus

    2013-01-01

    Diffusion tensor imaging (DTI) provides information about the microstructure in the brain and spinal cord. While new neuroimaging techniques have significantly advanced the accuracy and sensitivity of DTI of the brain, the quality of spinal cord DTI data has improved less. This is in part due to the small size of the spinal cord (ca. 1 cm diameter) and the more severe instrumental (e.g. eddy current) and physiological (e.g. cardiac pulsation) artefacts present in spinal cord DTI. So far, the improvements in image quality and resolution have resulted from cardiac gating and new acquisition approaches (e.g. reduced field-of-view techniques). The use of retrospective correction methods is not well established for spinal cord DTI. The aim of this paper is to develop an improved post-processing pipeline tailored to spinal cord DTI data that increases data quality. For this purpose, we compared two eddy current and motion correction approaches using three-dimensional affine (3D-affine) and slice-wise registrations. We also introduced a new robust-tensor-fitting method that controls for whole-volume outliers. Although 3D-affine registration generally improves data quality, occasionally it can lead to misregistrations and biased tensor estimates. The proposed robust tensor fitting reduced misregistration-related bias and yielded more reliable tensor estimates. Overall, the combination of slice-wise motion correction, eddy current correction, and robust tensor fitting yielded the best results. It increased the contrast-to-noise ratio (CNR) in fractional anisotropy (FA) maps by about 30% and reduced intra-subject variation in FA maps by 18%. The higher quality of FA maps allows for a better distinction between grey and white matter without increasing scan time and is compatible with any multi-directional DTI acquisition scheme. PMID:23298752

  1. Matrix decomposition graphics processing unit solver for Poisson image editing

    NASA Astrophysics Data System (ADS)

    Lei, Zhao; Wei, Li

    2012-10-01

    In recent years, gradient-domain methods have been widely discussed in the image processing field, including seamless cloning and image stitching. These algorithms are commonly carried out by solving a large sparse linear system: the Poisson equation. However, solving the Poisson equation is a computationally and memory-intensive task, which makes it unsuitable for real-time image editing. A new matrix decomposition graphics processing unit (GPU) solver (MDGS) is proposed to settle this problem. A matrix decomposition method is used to distribute the work among GPU threads, so that MDGS takes full advantage of the computing power of current GPUs. Additionally, MDGS is a hybrid solver (combining both direct and iterative techniques) and has a two-level architecture. These features enable MDGS to generate solutions identical to those of the common Poisson methods and to achieve a high convergence rate in most cases. This approach is advantageous in terms of parallelizability, enabling real-time image processing, low memory consumption, and extensive applications.
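
The sparse linear system at the heart of Poisson image editing can be sketched on the CPU with scipy: a 5-point Laplacian with zero Dirichlet boundaries and a manufactured right-hand side. MDGS itself targets the GPU; this only illustrates the system being solved:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def laplacian2d(n, m):
    """Sparse 5-point Laplacian on an n-by-m interior grid with zero
    Dirichlet boundary conditions (Kronecker-sum construction)."""
    Tn = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
    Tm = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(m, m))
    return sp.kron(sp.identity(m), Tn) + sp.kron(Tm, sp.identity(n))

# manufactured problem: pick a "true" field, form the right-hand side,
# and check that the sparse direct solve recovers it
n, m = 12, 10
L = laplacian2d(n, m).tocsc()
u_true = np.random.default_rng(2).standard_normal(n * m)
b = L @ u_true
u = spla.spsolve(L, b)
```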

  2. Catalytic hydrothermal processing of microalgae: decomposition and upgrading of lipids.

    PubMed

    Biller, P; Riley, R; Ross, A B

    2011-04-01

    Hydrothermal processing of high lipid feedstock such as microalgae is an alternative method of oil extraction which has obvious benefits for high moisture containing biomass. A range of microalgae and lipids extracted from terrestrial oil seed have been processed at 350 °C, at pressures of 150-200 bar in water. Hydrothermal liquefaction is shown to convert the triglycerides to fatty acids and alkanes in the presence of certain heterogeneous catalysts. This investigation has compared the composition of lipids and free fatty acids from solvent extraction to those from hydrothermal processing. The initial decomposition products include free fatty acids and glycerol, and the potential for de-oxygenation using heterogeneous catalysts has been investigated. The results indicate that the bio-crude yields from the liquefaction of microalgae were increased slightly with the use of heterogeneous catalysts but the higher heating value (HHV) and the level of de-oxygenation increased, by up to 10%. PMID:21295976

  3. A decomposition of irreversible diffusion processes without detailed balance

    NASA Astrophysics Data System (ADS)

    Qian, Hong

    2013-05-01

    As a generalization of deterministic, nonlinear conservative dynamical systems, a notion of canonical conservative dynamics with respect to a positive, differentiable stationary density ρ(x) is introduced: ẋ = j(x), in which ∇·(ρ(x)j(x)) = 0. Such systems have a conserved "generalized free energy function" F[u] = ∫ u(x,t) ln(u(x,t)/ρ(x)) dx in phase space, with a density flow u(x,t) satisfying ∂u/∂t = −∇·(ju). Any general stochastic diffusion process without detailed balance, in terms of its Fokker-Planck equation, can be decomposed into a reversible diffusion process with detailed balance and a canonical conservative dynamics. This decomposition can be rigorously established in a function space with inner product defined as ⟨ϕ, ψ⟩ = ∫ ρ⁻¹(x) ϕ(x) ψ(x) dx. Furthermore, a law for balancing F[u] can be obtained: the non-positive dF[u(x,t)]/dt = Ein(t) − ep(t), where the "source" Ein(t) ⩾ 0 and the "sink" ep(t) ⩾ 0 are known as house-keeping heat and entropy production, respectively. A reversible diffusion has Ein(t) = 0. For a linear (Ornstein-Uhlenbeck) diffusion process, our decomposition is equivalent to the previous approaches developed by Graham and Ao, as well as to the theory of large deviations. In terms of two different formulations of time reversal for the same stochastic process, the meanings of dissipative and conservative stationary dynamics are discussed.

  4. Canonical polyadic decomposition of third-order semi-nonnegative semi-symmetric tensors using LU and QR matrix factorizations

    NASA Astrophysics Data System (ADS)

    Wang, Lu; Albera, Laurent; Kachenoura, Amar; Shu, Huazhong; Senhadji, Lotfi

    2014-12-01

    Semi-symmetric three-way arrays are essential tools in blind source separation (BSS) particularly in independent component analysis (ICA). These arrays can be built by resorting to higher order statistics of the data. The canonical polyadic (CP) decomposition of such semi-symmetric three-way arrays allows us to identify the so-called mixing matrix, which contains the information about the intensities of some latent source signals present in the observation channels. In addition, in many applications, such as the magnetic resonance spectroscopy (MRS), the columns of the mixing matrix are viewed as relative concentrations of the spectra of the chemical components. Therefore, the two loading matrices of the three-way array, which are equal to the mixing matrix, are nonnegative. Most existing CP algorithms handle the symmetry and the nonnegativity separately. Up to now, very few of them consider both the semi-nonnegativity and the semi-symmetry structure of the three-way array. Nevertheless, like all the methods based on line search, trust region strategies, and alternating optimization, they appear to be dependent on initialization, requiring in practice a multi-initialization procedure. In order to overcome this drawback, we propose two new methods, called [InlineEquation not available: see fulltext.] and [InlineEquation not available: see fulltext.], to solve the problem of CP decomposition of semi-nonnegative semi-symmetric three-way arrays. Firstly, we rewrite the constrained optimization problem as an unconstrained one. In fact, the nonnegativity constraint of the two symmetric modes is ensured by means of a square change of variable. Secondly, a Jacobi-like optimization procedure is adopted because of its good convergence property. More precisely, the two new methods use LU and QR matrix factorizations, respectively, which consist in formulating high-dimensional optimization problems into several sequential polynomial and rational subproblems. 
By using both LU

  5. ENVIRONMENTAL ASSESSMENT OF THE BASE CATALYZED DECOMPOSITION (BCD) PROCESS

    EPA Science Inventory

    This report summarizes laboratory-scale, pilot-scale, and field performance data on BCD (Base Catalyzed Decomposition) and technology, collected to date by various governmental, academic, and private organizations.

  6. CO2 decomposition using electrochemical process in molten salts

    NASA Astrophysics Data System (ADS)

    Otake, Koya; Kinoshita, Hiroshi; Kikuchi, Tatsuya; Suzuki, Ryosuke O.

    2012-08-01

    The electrochemical decomposition of CO2 gas to carbon and oxygen gas in LiCl-Li2O and CaCl2-CaO molten salts was studied. This process consists of the electrochemical reduction of Li2O and CaO, as well as the thermal reduction of CO2 gas by the respective metallic Li and Ca. Two kinds of ZrO2 solid electrolytes were tested as oxygen ion conductors, and the electrolytes removed oxygen ions from the molten salts to the outside of the reactor. After electrolysis in both salts, aggregations of nanometer-scale amorphous carbon and rod-like graphite crystals were observed by transmission electron microscopy. When a 9.7% CO2-Ar mixed gas was blown into the LiCl-Li2O and CaCl2-CaO molten salts, the current efficiency was evaluated to be 89.7% and 78.5%, respectively, from the exhaust gas analysis and the supplied charge. When a solid electrolyte with higher ionic conductivity was used, the current and carbon production became larger. It was found that the rate-determining step is the diffusion of oxygen ions into the ZrO2 solid electrolyte.
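
A current-efficiency figure of this kind can be sketched from Faraday's law, assuming the overall 4-electron reduction CO2 + 4e⁻ → C + 2O²⁻. The charge and gas-analysis numbers below are illustrative, not the paper's measurements:

```python
# current efficiency = (charge theoretically required for the measured
# CO2 consumption) / (charge actually supplied)
F = 96485.0        # Faraday constant, C/mol
n_e = 4            # electrons per CO2 molecule decomposed (assumed reaction)
mol_co2 = 2.5e-3   # mol CO2 consumed, from exhaust-gas analysis (hypothetical)
charge = 1200.0    # total supplied charge in coulombs (hypothetical)
efficiency = n_e * F * mol_co2 / charge
```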

  7. Thermochemical processes for hydrogen production by water decomposition. Final report

    SciTech Connect

    Perlmutter, D.D.

    1980-08-01

    The principal contributions of the research are in the area of gas-solid reactions, ranging from models and data interpretation for fundamental kinetics and mixing of solids to simulations of engineering-scale reactors. Models were derived for simulating the heat and mass transfer processes inside the reactor and tested by experiments. The effects of surface renewal of solids on the mass transfer phenomena were studied and related to the solid mixing. Catalysis by selected additives was studied experimentally. The separate results were combined in a simulation study of industrial-scale rotary reactor performance. A study was made of the controlled decompositions of a series of inorganic sulfates and their common hydrates, carried out in a Thermogravimetric Analyzer (TGA), a Differential Scanning Calorimeter (DSC), and a Differential Thermal Analyzer (DTA). Various sample sizes, heating rates, and ambient atmospheres were used to demonstrate their influence on the results. The purposes of this study were to: (i) reveal intermediate compounds, (ii) determine the stable temperature range of each compound, and (iii) measure reaction kinetics. In addition, several solid additives (carbon, metal oxides, and sodium chloride) were demonstrated to have catalytic effects to varying degrees for the different salts.

  8. Using Empirical Mode Decomposition to process Marine Magnetotelluric Data

    NASA Astrophysics Data System (ADS)

    Chen, J.; Jegen, M. D.; Heincke, B. H.; Moorkamp, M.

    2014-12-01

    Magnetotelluric (MT) data always exhibit nonstationarities due to variations in source mechanisms, which cause MT variations on different temporal and spatial scales. An additional non-stationary component is introduced through noise, which is particularly pronounced in marine MT data through motion-induced noise caused by time-varying wave motion and currents. We present a new heuristic method for dealing with the non-stationarity of MT time series based on Empirical Mode Decomposition (EMD). The EMD method is used in combination with the derived instantaneous spectra to determine impedance estimates. The procedure is tested on synthetic and field MT data. In synthetic tests, the reliability of impedance estimates from the EMD-based method is compared to the synthetic responses of a 1D layered model. To examine how estimates are affected by noise, stochastic stationary and non-stationary noise are added to the time series. Comparisons reveal that estimates from the EMD-based method are generally more stable than those from simple Fourier analysis. Furthermore, the results are compared to those derived by a commonly used Fourier-based MT data processing software package (BIRRP), which incorporates additional sophisticated robust estimation to deal with noise issues. It turns out that the results from both methods are already comparable, even though no robust estimation procedures are implemented in the EMD approach at the present stage. The processing scheme is then applied to marine MT field data. Testing is performed on short, relatively quiet segments of several data sets, as well as on long segments of data with many non-stationary noise packages. Compared to BIRRP, the new method gives comparable or better impedance estimates; furthermore, the estimates are extended to lower frequencies, and less noise-biased estimates with smaller error bars are obtained at high frequencies. 
The new processing methodology represents an important step towards deriving a better resolved Earth model to

  9. Image Corruption Detection in Diffusion Tensor Imaging for Post-Processing and Real-Time Monitoring

    PubMed Central

    Li, Yue; Shea, Steven M.; Lorenz, Christine H.; Jiang, Hangyi; Chou, Ming-Chung; Mori, Susumu

    2013-01-01

    Due to the high sensitivity of diffusion tensor imaging (DTI) to physiological motion, clinical DTI scans often suffer a significant amount of artifacts. Tensor-fitting-based, post-processing outlier rejection is often used to reduce the influence of motion artifacts. Although it is an effective approach, when there are multiple corrupted data, this method may no longer correctly identify and reject the corrupted data. In this paper, we introduce a new criterion called “corrected Inter-Slice Intensity Discontinuity” (cISID) to detect motion-induced artifacts. We compared the performance of algorithms using cISID and other existing methods with regard to artifact detection. The experimental results show that the integration of cISID into fitting-based methods significantly improves the retrospective detection performance at post-processing analysis. The performance of the cISID criterion, if used alone, was inferior to the fitting-based methods, but cISID could effectively identify severely corrupted images with a rapid calculation time. In the second part of this paper, an outlier rejection scheme was implemented on a scanner for real-time monitoring of image quality and reacquisition of the corrupted data. The real-time monitoring, based on cISID and followed by post-processing, fitting-based outlier rejection, could provide a robust environment for routine DTI studies. PMID:24204551
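
A much-simplified inter-slice intensity discontinuity score can be sketched as below, scoring each slice by how far its mean intensity deviates from its neighbours. The paper's cISID applies corrections this toy omits:

```python
import numpy as np

def slice_discontinuity(vol):
    """Per-slice score: deviation of each slice's mean intensity from the
    average of its two neighbours (simplified; not the exact cISID)."""
    means = vol.reshape(vol.shape[0], -1).mean(axis=1)
    score = np.zeros_like(means)
    score[1:-1] = np.abs(means[1:-1] - 0.5 * (means[:-2] + means[2:]))
    return score

# a volume with one signal-dropout slice should score highest there
vol = np.ones((8, 16, 16))
vol[4] *= 0.2                      # simulated motion-corrupted slice
score = slice_discontinuity(vol)
```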

  10. A stable elemental decomposition for dynamic process optimization

    NASA Astrophysics Data System (ADS)

    Cervantes, Arturo M.; Biegler, Lorenz T.

    2000-08-01

    In Cervantes and Biegler (A.I.Ch.E.J. 44 (1998) 1038), we presented a simultaneous nonlinear programming (NLP) formulation for the solution of DAE optimization problems. Here, by applying collocation on finite elements, the DAE system is transformed into a nonlinear system. The resulting optimization problem, in which the element placement is fixed, is solved using a reduced-space successive quadratic programming (rSQP) algorithm. The space is partitioned into range and null spaces. This partitioning is performed by choosing a pivot sequence for an LU factorization with partial pivoting, which allows us to detect unstable modes in the DAE system. The system is stabilized without imposing new boundary conditions. The decomposition of the range space can be performed in a single step by exploiting the overall sparsity of the collocation matrix, but not its almost block diagonal structure. In order to solve larger problems, a new decomposition approach and a new method for constructing the quadratic programming (QP) subproblem are presented in this work. The decomposition of the collocation matrix is now performed element by element, thus reducing the storage requirements and the computational effort. Under this scheme, the unstable modes are considered in each element and a range-space move is constructed sequentially, based on the decomposition in each element. This new decomposition improves the efficiency of our previous approach and at the same time preserves its stability. The performance of the algorithm is tested on several examples. Finally, some future directions for research are discussed.

  11. Decomposition and hydrocarbon growth processes for hexadienes in nonpremixed flames

    SciTech Connect

    McEnally, Charles S.; Pfefferle, Lisa D.

    2008-03-15

    Alkadienes are formed during the decomposition of alkanes and play a key role in the formation of aromatics due to their degree of unsaturation. The experiments in this paper examined the decomposition and hydrocarbon growth mechanisms of a wide range of hexadiene isomers in soot-forming nonpremixed flames. Specifically, C3 to C12 hydrocarbon concentrations were measured on the centerlines of atmospheric-pressure methane/air coflowing nonpremixed flames doped with 2000 ppm of 1,3-, 1,4-, 1,5-, and 2,4-hexadiene and 2-methyl-1,3-, 3-methyl-1,3-, 2-methyl-1,4-, and 3-methyl-1,4-pentadiene, and 2,3-dimethyl-1,3-butadiene. The hexadiene decomposition rates and hydrocarbon product concentrations showed that the primary decomposition mechanism was unimolecular fission of C-C single bonds that produces allyl and other resonantly stabilized products. The one isomer that does not contain any of these bonds, 2,4-hexadiene, isomerized by a six-center mechanism to 1,3-hexadiene. These decomposition pathways differ from those observed previously for propadiene and 1,3-butadiene, and these differences affect aromatic hydrocarbon formation. 1,5-Hexadiene and 2,3-dimethyl-1,3-butadiene produced significantly more C{sub 3}H{sub 4} and C{sub 4}H{sub 4} than the other isomers, but less benzene, which suggests that benzene formation pathways other than the conventional C3 + C3 and C4 + C2 pathways were important in most of the hexadiene-doped flames. The most likely additional mechanism is cyclization of highly unsaturated C5 decomposition products, followed by methyl addition to cyclopentadienyl radicals.

  12. C++ tensor toolbox user manual.

    SciTech Connect

    Plantenga, Todd D.; Kolda, Tamara Gibson

    2012-04-01

    The C++ Tensor Toolbox is a software package for computing tensor decompositions. It is based on the Matlab Tensor Toolbox, and is particularly optimized for sparse data sets. This user manual briefly overviews tensor decomposition mathematics, software capabilities, and installation of the package. Tensors (also known as multidimensional arrays or N-way arrays) are used in a variety of applications ranging from chemometrics to network analysis. The Tensor Toolbox provides classes for manipulating dense, sparse, and structured tensors in C++. The Toolbox compiles into libraries and is intended for use with custom applications written by users.

  13. Developmental process of the arcuate fasciculus from infancy to adolescence: a diffusion tensor imaging study

    PubMed Central

    Tak, Hyeong Jun; Kim, Jin Hyun; Son, Su Min

    2016-01-01

    We investigated the radiologic developmental process of the arcuate fasciculus (AF) using subcomponent diffusion tensor imaging (DTI) analysis in typically developing volunteers. DTI data were acquired from 96 consecutive typically developing children, aged 0–14 years. AF subcomponents, including the posterior, anterior, and direct AF tracts, were analyzed. Success rates of analysis (AR) and fractional anisotropy (FA) values of each subcomponent tract were measured and compared. The AR of all subcomponent tracts, except the posterior, showed a significant increase with age (P < 0.05). The subcomponent tracts had a specific developmental sequence: first the posterior AF tract, then the anterior AF tract, and last the direct AF tract within the same hemisphere. FA values of all subcomponent tracts, except the right direct AF tract, correlated with subjects' age (P < 0.05). Increased AR and FA values were observed in female subjects in the young age (0–2 years) group compared with males (P < 0.05). The direct AF tract showed leftward hemispheric asymmetry, and this tendency showed greater consolidation in the older age (3–14 years) group (P < 0.05). These findings demonstrate the radiologic developmental patterns of the AF from infancy to adolescence using subcomponent DTI analysis. The AF showed a specific developmental sequence, a sex difference at younger ages, and hemispheric asymmetry at older ages. PMID:27482222

  14. Developmental process of the arcuate fasciculus from infancy to adolescence: a diffusion tensor imaging study.

    PubMed

    Tak, Hyeong Jun; Kim, Jin Hyun; Son, Su Min

    2016-06-01

    We investigated the radiologic developmental process of the arcuate fasciculus (AF) using subcomponent diffusion tensor imaging (DTI) analysis in typically developing volunteers. DTI data were acquired from 96 consecutive typically developing children, aged 0-14 years. AF subcomponents, including the posterior, anterior, and direct AF tracts, were analyzed. Success rates of analysis (AR) and fractional anisotropy (FA) values of each subcomponent tract were measured and compared. The AR of all subcomponent tracts, except the posterior, showed a significant increase with age (P < 0.05). The subcomponent tracts had a specific developmental sequence: first the posterior AF tract, then the anterior AF tract, and last the direct AF tract within the same hemisphere. FA values of all subcomponent tracts, except the right direct AF tract, correlated with subjects' age (P < 0.05). Increased AR and FA values were observed in female subjects in the young age (0-2 years) group compared with males (P < 0.05). The direct AF tract showed leftward hemispheric asymmetry, and this tendency showed greater consolidation in the older age (3-14 years) group (P < 0.05). These findings demonstrate the radiologic developmental patterns of the AF from infancy to adolescence using subcomponent DTI analysis. The AF showed a specific developmental sequence, a sex difference at younger ages, and hemispheric asymmetry at older ages. PMID:27482222

  15. The Dynamics of Cognition and Action: Mental Processes Inferred from Speed-Accuracy Decomposition.

    ERIC Educational Resources Information Center

    Meyer, David E.; And Others

    1988-01-01

    Theoretical/empirical foundations on which reaction times are measured and interpreted are discussed. Models of human information processing are reviewed. A hybrid procedure and analytical framework are introduced, using a speed-accuracy decomposition technique to analyze the intermediate products of rapid mental processes. Results invalidate many…

  16. Exothermic Behavior of Thermal Decomposition of Sodium Percarbonate: Kinetic Deconvolution of Successive Endothermic and Exothermic Processes.

    PubMed

    Nakano, Masayoshi; Wada, Takeshi; Koga, Nobuyoshi

    2015-09-24

    This study focused on the kinetic modeling of the thermal decomposition of sodium percarbonate (SPC, sodium carbonate-hydrogen peroxide (2/3)). The reaction is characterized by apparently different kinetic profiles of the mass loss and the exothermic behavior, as recorded by thermogravimetry and differential scanning calorimetry, respectively. This phenomenon results from a combination of different kinetic features of the reaction: two overlapping mass-loss steps controlled by the physico-geometry of the reaction, and successive endothermic and exothermic processes caused by the detachment and decomposition of H2O2(g). For kinetic modeling, the overall reaction was initially separated into endothermic and exothermic processes using kinetic deconvolution analysis. Both processes were then further separated into two reaction steps, accounting for the physico-geometrically controlled reaction that occurs in two steps. Kinetic modeling through kinetic deconvolution analysis clearly illustrates that the appearance of the net exothermic effect results from a slight delay of the exothermic process relative to the endothermic process in each physico-geometrically controlled reaction step. This demonstrates that the kinetic modeling attempted in this study is useful for interpreting the exothermic behavior of solid-state reactions such as the oxidative decomposition of solids and the thermal decomposition of oxidizing agents. PMID:26371394

  17. Decomposition of repetition priming processes in word translation.

    PubMed

    Francis, Wendy S; Durán, Gabriela; Augustini, Beatriz K; Luévano, Genoveva; Arzate, José C; Sáenz, Silvia P

    2011-01-01

    Translation in fluent bilinguals requires comprehension of a stimulus word and subsequent production, or retrieval and articulation, of the response word. Four repetition-priming experiments with Spanish–English bilinguals (N = 274) decomposed these processes using selective facilitation to evaluate their unique priming contributions and factorial combination to evaluate the degree of process overlap or dependence. In Experiment 1, symmetric priming between semantic classification and translation tasks indicated that bilinguals do not covertly translate words during semantic classification. In Experiments 2 and 3, semantic classification of words and word-cued picture drawing facilitated word-comprehension processes of translation, and picture naming facilitated word-production processes. These effects were independent, consistent with a sequential model and with the conclusion that neither semantic classification nor word-cued picture drawing elicits covert translation. Experiment 4 showed that 2 tasks involving word-retrieval processes--written word translation and picture naming--had subadditive effects on later translation. Incomplete transfer from written translation to spoken translation indicated that preparation for articulation also benefited from repetition in the less-fluent language. PMID:21058875

  18. The tensor hierarchy algebra

    NASA Astrophysics Data System (ADS)

    Palmkvist, Jakob

    2014-01-01

    We introduce an infinite-dimensional Lie superalgebra which is an extension of the U-duality Lie algebra of maximal supergravity in D dimensions, for 3 ⩽ D ⩽ 7. The level decomposition with respect to the U-duality Lie algebra gives exactly the tensor hierarchy of representations that arises in gauge deformations of the theory described by an embedding tensor, for all positive levels p. We prove that these representations are always contained in those coming from the associated Borcherds-Kac-Moody superalgebra, and we explain why some of the latter representations are not included in the tensor hierarchy. The most remarkable feature of our Lie superalgebra is that it does not admit a triangular decomposition like a (Borcherds-)Kac-Moody (super)algebra. Instead the Hodge duality relations between level p and D - 2 - p extend to negative p, relating the representations at the first two negative levels to the supersymmetry and closure constraints of the embedding tensor.

  19. The tensor hierarchy algebra

    SciTech Connect

    Palmkvist, Jakob

    2014-01-15

    We introduce an infinite-dimensional Lie superalgebra which is an extension of the U-duality Lie algebra of maximal supergravity in D dimensions, for 3 ⩽ D ⩽ 7. The level decomposition with respect to the U-duality Lie algebra gives exactly the tensor hierarchy of representations that arises in gauge deformations of the theory described by an embedding tensor, for all positive levels p. We prove that these representations are always contained in those coming from the associated Borcherds-Kac-Moody superalgebra, and we explain why some of the latter representations are not included in the tensor hierarchy. The most remarkable feature of our Lie superalgebra is that it does not admit a triangular decomposition like a (Borcherds-)Kac-Moody (super)algebra. Instead the Hodge duality relations between level p and D − 2 − p extend to negative p, relating the representations at the first two negative levels to the supersymmetry and closure constraints of the embedding tensor.

  20. Decomposition of Repetition Priming Processes in Word Translation

    ERIC Educational Resources Information Center

    Francis, Wendy S.; Duran, Gabriela; Augustini, Beatriz K.; Luevano, Genoveva; Arzate, Jose C.; Saenz, Silvia P.

    2011-01-01

    Translation in fluent bilinguals requires comprehension of a stimulus word and subsequent production, or retrieval and articulation, of the response word. Four repetition-priming experiments with Spanish-English bilinguals (N = 274) decomposed these processes using selective facilitation to evaluate their unique priming contributions and factorial…

  1. PROCESS OF COATING WITH NICKEL BY THE DECOMPOSITION OF NICKEL CARBONYL

    DOEpatents

    Hoover, T.B.

    1959-04-01

    An improved process is presented for the deposition of nickel coatings by the thermal decomposition of nickel carbonyl vapor. The improvement consists in incorporating a small amount of hydrogen sulfide gas in the nickel carbonyl plating gas. It is postulated that the hydrogen sulfide functions as a catalyst.

  2. Method for increasing steam decomposition in a coal gasification process

    DOEpatents

    Wilson, Marvin W.

    1988-01-01

    The gasification of coal in the presence of steam and oxygen is significantly enhanced by introducing a thermochemical water-splitting agent, such as sulfuric acid, into the gasifier for decomposing the steam to provide additional oxygen and hydrogen usable in the gasification process for the combustion of the coal and enrichment of the gaseous gasification products. The addition of the water-splitting agent into the gasifier also allows for the operation of the reactor at a lower temperature.

  3. Method for increasing steam decomposition in a coal gasification process

    DOEpatents

    Wilson, M.W.

    1987-03-23

    The gasification of coal in the presence of steam and oxygen is significantly enhanced by introducing a thermochemical water-splitting agent, such as sulfuric acid, into the gasifier for decomposing the steam to provide additional oxygen and hydrogen usable in the gasification process for the combustion of the coal and enrichment of the gaseous gasification products. The addition of the water-splitting agent into the gasifier also allows for the operation of the reactor at a lower temperature.

  4. A statistical approach based on accumulated degree-days to predict decomposition-related processes in forensic studies.

    PubMed

    Michaud, Jean-Philippe; Moreau, Gaétan

    2011-01-01

    Using pig carcasses exposed over 3 years in rural fields during spring, summer, and fall, we studied the relationship between decomposition stages and degree-day accumulation (i) to verify the predictability of the decomposition stages used in forensic entomology to document carcass decomposition and (ii) to build a degree-day accumulation model applicable to various decomposition-related processes. Results indicate that the decomposition stages can be predicted with accuracy from temperature records and that a reliable degree-day index can be developed to study decomposition-related processes. The development of degree-day indices opens new doors for researchers and allows for the application of inferential tools unaffected by climatic variability, as well as for the inclusion of statistics in a science that is primarily descriptive and in need of validation methods in courtroom proceedings. PMID:21198596
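
Accumulated degree-days are computed by summing daily mean temperatures above a base threshold, so the index depends only on temperature records. A minimal sketch (the base temperature of 0 °C and the daily values are assumptions for illustration, not the study's data):

```python
def accumulated_degree_days(daily_mean_temps, base_temp=0.0):
    """Accumulated degree-days (ADD): the sum of daily mean temperatures
    above a base threshold; days below the base contribute nothing."""
    return sum(max(t - base_temp, 0.0) for t in daily_mean_temps)

# five days of mean temperatures in degrees C (made-up values)
temps = [12.0, 15.5, 9.0, -2.0, 20.0]
print(accumulated_degree_days(temps))   # -> 56.5
```

Because the index accumulates thermal input rather than elapsed time, the same ADD value can be reached in different calendar spans, which is what makes it robust to climatic variability.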

  5. The neural basis of novelty and appropriateness in processing of creative chunk decomposition.

    PubMed

    Huang, Furong; Fan, Jin; Luo, Jing

    2015-06-01

    Novelty and appropriateness have been recognized as the fundamental features of creative thinking. However, the brain mechanisms underlying these features remain largely unknown. In this study, we used event-related functional magnetic resonance imaging (fMRI) to dissociate these mechanisms in a revised creative chunk decomposition task in which participants were required to perform different types of chunk decomposition that systematically varied in novelty and appropriateness. We found that novelty processing involved functional areas for procedural memory (caudate), mental rewarding (substantia nigra, SN), and visual-spatial processing, whereas appropriateness processing was mediated by areas for declarative memory (hippocampus), emotional arousal (amygdala), and orthography recognition. These results indicate that non-declarative and declarative memory systems may jointly contribute to the two fundamental features of creative thinking. PMID:25797834

  6. Decomposition of gaseous organic contaminants by surface discharge induced plasma chemical processing -- SPCP

    SciTech Connect

    Oda, Tetsuji; Yamashita, Ryuichi; Haga, Ichiro; Takahashi, Tadashi; Masuda, Senichi

    1996-01-01

    The decomposition performance of surface-discharge-induced plasma chemical processing (SPCP) for chlorofluorocarbon (83 ppm CFC-113 in air), acetone, trichloroethylene, and isopropyl alcohol was experimentally examined. In every case, very high decomposition performance, with a removal rate above 90 or even 99%, is realized when the residence time is about 1 second and the input electric power for a 16 cm{sup 3} reactor is about 10 W. Acetone is the most stable compound and the alcohol is the most easily decomposed. Analysis of the decomposition products by gas chromatography-mass spectrometry has only just started, and so far the results are poor. In fact, some portion of the isopropyl alcohol may be converted to acetone, which is harder to decompose than the alcohol. The energy necessary to decompose one mole of gas diluted in air is calculated from the experiments. The energy required for acetone and trichloroethylene is about one-tenth to one-fiftieth of that for the chlorofluorocarbon.
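
The per-mole energy figure can be reproduced as a back-of-envelope calculation from the quantities stated above (10 W input power, about 1 s residence time, 83 ppm CFC-113, 16 cm³ reactor); the ideal-gas molar volume at 25 °C is an added assumption:

```python
# Back-of-envelope: energy per mole of dilute contaminant decomposed.
MOLAR_VOLUME_L = 24.45   # L/mol for an ideal gas at 25 degC, 1 atm (assumed)
power_w = 10.0           # input electric power from the abstract
residence_s = 1.0        # gas residence time from the abstract
reactor_l = 16e-3        # 16 cm^3 reactor volume, in litres
mole_fraction = 83e-6    # 83 ppm CFC-113

# moles of contaminant present in the reactor during one residence time
mol_contaminant = mole_fraction * reactor_l / MOLAR_VOLUME_L
energy_per_mol = power_w * residence_s / mol_contaminant
print(f"{energy_per_mol:.2e} J/mol")    # on the order of 1e8 J/mol
```

This assumes complete decomposition of the CFC in one pass; partial removal would raise the per-mole figure proportionally.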

  7. Decomposition and Precipitation Process During Thermo-mechanical Fatigue of Duplex Stainless Steel

    NASA Astrophysics Data System (ADS)

    Weidner, Anja; Kolmorgen, Roman; Kubena, Ivo; Kulawinski, Dirk; Kruml, Tomas; Biermann, Horst

    2016-05-01

    The so-called 748 K (475 °C) embrittlement, caused by spinodal decomposition of the ferritic phase, is one of the main drawbacks for the application of ferritic-austenitic duplex stainless steels (DSS) at higher temperatures. Thermo-mechanical fatigue (TMF) tests performed on a DSS in the temperature range between 623 K and 873 K (350 °C and 600 °C) revealed no negative influence on the fatigue lifetime. However, intensive subgrain formation occurred in the ferritic phase, accompanied by the formation of fine precipitates. In order to study the decomposition process of the ferritic grains due to TMF testing, detailed investigations using scanning and transmission electron microscopy are presented. The precipitates were identified as the face-centered cubic G-phase, which is characterized by an enrichment of Si, Mo, and Ni. Furthermore, the formation of secondary austenite within ferritic grains was observed.

  8. Controlled decomposition and oxidation: A treatment method for gaseous process effluents

    NASA Technical Reports Server (NTRS)

    Mckinley, Roger J. B., Sr.

    1990-01-01

    The safe disposal of effluent gases produced by the electronics industry deserves special attention. Due to the hazardous nature of many of the materials used, it is essential to control and treat the reactants and reactant by-products as they are exhausted from the process tool and prior to their release into the manufacturing facility's exhaust system and the atmosphere. Controlled decomposition and oxidation (CDO) is one method of treating effluent gases from thin film deposition processes. CDO equipment applications, field experience, and results of the use of CDO equipment and technological advances gained from the field experiences are discussed.

  9. Chlorine/UV Process for Decomposition and Detoxification of Microcystin-LR.

    PubMed

    Zhang, Xinran; Li, Jing; Yang, Jer-Yen; Wood, Karl V; Rothwell, Arlene P; Li, Weiguang; Blatchley III, Ernest R

    2016-07-19

    Microcystin-LR (MC-LR) is a potent hepatotoxin that is often associated with blooms of cyanobacteria. Experiments were conducted to evaluate the efficiency of the chlorine/UV process for MC-LR decomposition and detoxification. Chlorinated MC-LR was observed to be more photoactive than MC-LR. LC/MS analyses confirmed that the arginine moiety represented an important reaction site within the MC-LR molecule for conditions of chlorination below the chlorine demand of the molecule. Prechlorination activated MC-LR toward UV254 exposure by increasing the product of the molar absorption coefficient and the quantum yield of chloro-MC-LR, relative to the unchlorinated molecule. This mechanism of decay is fundamentally different from the conventional view of chlorine/UV as an advanced oxidation process. A toxicity assay based on human liver cells indicated that MC-LR degradation byproducts in the chlorine/UV process possessed less cytotoxicity than those that resulted from chlorination or UV254 irradiation applied separately. MC-LR decomposition and detoxification in this combined process were more effective at pH 8.5 than at pH 7.5 or 6.5. These results suggest that the chlorine/UV process could represent an effective strategy for the control of microcystins and their associated toxicity in drinking water supplies. PMID:27338715

  10. A quantitative acoustic emission study on fracture processes in ceramics based on wavelet packet decomposition

    SciTech Connect

    Ning, J. G.; Chu, L.; Ren, H. L.

    2014-08-28

    We present a quantitative acoustic emission (AE) study of fracture processes in alumina ceramics based on wavelet packet decomposition and AE source location. According to the frequency characteristics, as well as the energy and ringdown counts of the AE, the fracture process is divided into four stages: crack closure, nucleation, development, and critical failure. Each AE signal is decomposed by a 2-level wavelet packet decomposition into four (from-low-to-high) frequency bands (AA{sub 2}, AD{sub 2}, DA{sub 2}, and DD{sub 2}). The energy eigenvalues P{sub 0}, P{sub 1}, P{sub 2}, and P{sub 3} corresponding to these four frequency bands are calculated. By analyzing changes in P{sub 0} and P{sub 3} across the four stages, we determine the inverse relationship between AE frequency and crack source size during ceramic fracture. AE signals associated with crack nucleation can be identified when P{sub 0} is less than 5 and P{sub 3} is more than 60, whereas AE signals associated with dangerous crack propagation can be identified when more than 92% of the P{sub 0} values are greater than 4 and more than 95% of the P{sub 3} values are less than 45. The Geiger location algorithm is used to locate AE sources and cracks in the sample. The results of this location algorithm are consistent with the positions of fractures observed in the sample under a scanning electron microscope; thus the fracture locations obtained with Geiger's method reflect the fracture process. The stage division by location results is in good agreement with the division based on AE frequency characteristics. We find that both wavelet packet decomposition and Geiger's AE source location are suitable for identifying the evolutionary process of cracks in alumina ceramics.
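
A 2-level wavelet packet decomposition into four frequency bands, with per-band percentage energies playing the role of P0 to P3, can be sketched as follows. Haar analysis filters are used here purely for brevity; the abstract does not specify the mother wavelet, and the toy signal is an assumption.

```python
import numpy as np

def haar_step(x):
    """One Haar analysis step: low-pass (approximation) and high-pass (detail)."""
    x = x[: len(x) // 2 * 2]            # drop a trailing odd sample if any
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return a, d

def wp2_band_energies(x):
    """Percentage energies of the 2-level packet bands AA2, AD2, DA2, DD2."""
    a, d = haar_step(np.asarray(x, dtype=float))
    bands = [*haar_step(a), *haar_step(d)]          # AA2, AD2, DA2, DD2
    e = np.array([np.sum(b ** 2) for b in bands])
    return 100.0 * e / e.sum()

# toy AE-like signal dominated by a low-frequency component
n = np.arange(1024)
sig = np.sin(2 * np.pi * n / 64) + 0.1 * np.sin(2 * np.pi * n / 4)
P = wp2_band_energies(sig)                          # P[0] is the AA2 share
```

Because the Haar filter bank is orthonormal, the four band energies sum to the signal energy, so the percentages are directly comparable across AE events of different amplitude.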

  11. ERP and Adaptive Autoregressive identification with spectral power decomposition to study rapid auditory processing in infants.

    PubMed

    Piazza, C; Cantiani, C; Tacchino, G; Molteni, M; Reni, G; Bianchi, A M

    2014-01-01

    The ability to process rapidly-occurring auditory stimuli plays an important role in the mechanisms of language acquisition. For this reason, the research community has begun to investigate infant auditory processing, particularly using the Event Related Potentials (ERP) technique. In this paper we approach this issue by means of time domain and time-frequency domain analysis. For the latter, we propose the use of Adaptive Autoregressive (AAR) identification with spectral power decomposition. Results show EEG delta-theta oscillation enhancement related to the processing of acoustic frequency and duration changes, suggesting that, as expected, power modulation encodes rapid auditory processing (RAP) in infants and that the time-frequency analysis method proposed is able to identify this modulation. PMID:25571014

  12. Contribution of free radicals to chlorophenols decomposition by several advanced oxidation processes.

    PubMed

    Benitez, F J; Beltran-Heredia, J; Acero, J L; Rubio, F J

    2000-10-01

    The chemical decomposition of aqueous solutions of various chlorophenols (4-chlorophenol (4-CP), 2,4-dichlorophenol (2,4-DCP), 2,4,6-trichlorophenol (2,4,6-TCP) and 2,3,4,6-tetrachlorophenol (2,3,4,6-TeCP)), which are environmental priority pollutants, is studied by means of single oxidants (hydrogen peroxide, UV radiation, Fenton's reagent, and ozone at pH 2 and 9) and by the Advanced Oxidation Processes (AOPs) constituted by combinations of these oxidants (UV/H2O2, UV/Fenton's reagent, and O3/UV). For all these reactions the degradation rates are evaluated by determining their first-order rate constants and half-life times. Ozone is more reactive with more highly substituted CPs, while OH* radicals react faster with the chlorophenols having a lower number of chlorine atoms. The improvement in the decomposition levels reached by the combined processes relative to the single oxidants, due to the generation of the very reactive hydroxyl radicals, is clearly demonstrated and evaluated by kinetic modeling. PMID:10901258
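
Estimating a first-order rate constant and half-life from concentration-time data reduces to a straight-line fit of ln(C/C0) against t. A small sketch with synthetic data (the value k = 0.05 min⁻¹ is illustrative, not taken from the paper):

```python
import numpy as np

def first_order_fit(t, c):
    """Fit ln(C/C0) = -k t by least squares; return k and t_half = ln2/k."""
    y = np.log(np.asarray(c, dtype=float) / c[0])
    k = -np.polyfit(t, y, 1)[0]     # negated slope of ln C versus t
    return k, np.log(2.0) / k

# synthetic noiseless decay with k = 0.05 min^-1 (illustrative only)
t = np.arange(0.0, 60.0, 5.0)
c = np.exp(-0.05 * t)
k, t_half = first_order_fit(t, c)
print(round(k, 4), round(t_half, 2))
```

With real measurements the same fit applies; scatter around the regression line then indicates how well the first-order assumption holds.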

  13. A detailed kinetic model for the hydrothermal decomposition process of sewage sludge.

    PubMed

    Yin, Fengjun; Chen, Hongzhen; Xu, Guihua; Wang, Guangwei; Xu, Yuanjian

    2015-12-01

    A detailed kinetic model for the hydrothermal decomposition (HTD) of sewage sludge was developed based on an explicit reaction scheme considering the exact intermediates, including protein, saccharide, NH4(+)-N, and acetic acid. The parameters were estimated from a series of kinetic data over a temperature range of 180-300°C. This modeling framework is capable of revealing stoichiometric relationships between different components by determining the conversion coefficients, and of identifying the reaction behaviors by determining rate constants and activation energies. The modeling work shows that protein and saccharide are the primary intermediates in the initial stage of HTD, resulting from the fast reduction of biomass. The oxidation of macromolecular products to acetic acid is highly dependent on reaction temperature and is dramatically restrained when the temperature is below 220°C. Overall, this detailed model is useful for process simulation and kinetic analysis. PMID:26409104

  14. Surface modification processes during methane decomposition on Cu-promoted Ni–ZrO2 catalysts

    PubMed Central

    Wolfbeisser, Astrid; Klötzer, Bernhard; Mayr, Lukas; Rameshan, Raffael; Zemlyanov, Dmitry; Bernardi, Johannes; Rupprechter, Günther

    2015-01-01

    The surface chemistry of methane on Ni–ZrO2 and bimetallic CuNi–ZrO2 catalysts and the stability of the CuNi alloy under the reaction conditions of methane decomposition were investigated by combining reactivity measurements and in situ synchrotron-based near-ambient-pressure XPS. Cu was selected as an exemplary promoter for modifying the reactivity of Ni and enhancing the resistance against coke formation. We observed an activation process occurring in methane between 650 and 735 K, with the exact temperature depending on the composition, which resulted in an irreversible modification of the catalytic performance of the bimetallic catalysts towards Ni-like behaviour. The sudden increase in catalytic activity can be explained by an increase in the concentration of reduced Ni atoms at the catalyst surface in the active state, likely as a consequence of the interaction with methane. Cu addition to Ni improved the desired resistance against carbon deposition by lowering the amount of coke formed. As a key conclusion, the CuNi alloy shows limited stability under the relevant reaction conditions: the system is stable only up to ~700 K in methane. Beyond this temperature, segregation of Ni species causes a fast increase in the methane decomposition rate. In view of the applicability of this system, a detailed understanding of the stability and surface composition of the bimetallic phases present, and of the influence of the Cu promoter on the surface chemistry under relevant reaction conditions, is essential. PMID:25815163

  15. MATLAB Tensor Toolbox

    Energy Science and Technology Software Center (ESTSC)

    2006-08-03

    This software provides a collection of MATLAB classes for tensor manipulations that can be used for fast algorithm prototyping. The tensor class extends the functionality of MATLAB's multidimensional arrays by supporting additional operations such as tensor multiplication. We have also added support for sparse tensors, tensors in Kruskal or Tucker format, and tensors stored as matrices (both dense and sparse).
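
As a rough NumPy analogue of the Kruskal-format support described above: a rank-R Kruskal (CP) representation stores a weight vector and one factor matrix per mode, and expands to a dense tensor as a sum of weighted outer products. The function name below is hypothetical, chosen for illustration.

```python
import numpy as np

def kruskal_to_dense(weights, factors):
    """Expand a 3-way Kruskal (CP) representation into a dense tensor:
    T[i,j,k] = sum_r w[r] * A[i,r] * B[j,r] * C[k,r]."""
    A, B, C = factors
    return np.einsum('r,ir,jr,kr->ijk', weights, A, B, C)

rng = np.random.default_rng(1)
A, B, C = (rng.normal(size=(n, 2)) for n in (4, 5, 6))   # rank-2 factors
w = np.array([1.0, 0.5])                                 # component weights
T = kruskal_to_dense(w, (A, B, C))                       # dense (4, 5, 6)
```

Storing only the weights and factors is what makes the format economical: the example above keeps 2 + 2·(4+5+6) = 32 numbers instead of the 120 entries of the dense tensor.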

  16. Linear friction weld process monitoring of fixture cassette deformations using empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Bakker, O. J.; Gibson, C.; Wilson, P.; Lohse, N.; Popov, A. A.

    2015-10-01

    Due to its inherent advantages, linear friction welding is a solid-state joining process of increasing importance to the aerospace, automotive, medical and power generation equipment industries. Tangential oscillations and forge stroke during the burn-off phase of the joining process introduce essential dynamic forces, which can also be detrimental to the welding process. Since burn-off is a critical phase in the manufacturing stage, process monitoring is fundamental for quality and stability control purposes. This study aims to improve workholding stability through the analysis of fixture cassette deformations. Methods and procedures for process monitoring are developed and implemented in a fail-or-pass assessment system for fixture cassette deformations during the burn-off phase. Additionally, the de-noised signals are compared to results from previous production runs. The observed deformations as a consequence of the forces acting on the fixture cassette are measured directly during the welding process. Data on the linear friction-welding machine are acquired and de-noised using empirical mode decomposition, before the burn-off phase is extracted. This approach enables a direct, objective comparison of the signal features with trends from previous successful welds. The capacity of the whole process monitoring system is validated and demonstrated through the analysis of a large number of signals obtained from welding experiments.

  17. Quantitative evaluation and visualization of cracking process in reinforced concrete by a moment tensor analysis of acoustic emission

    SciTech Connect

    Yuyama, Shigenori; Okamoto, Takahisa; Shigeishi, Mitsuhiro; Ohtsu, Masayasu

    1995-06-01

    Fracture tests are conducted on two types of reinforced concrete specimens under cyclic loading. The cracking process is quantitatively evaluated and visualized by applying a moment tensor analysis to the AE waveforms detected during fracture. First, bending tests are performed on reinforced concrete beams. It is found that both tensile and shear cracks are generated around the reinforcement in the low loading stages; however, shear cracks become dominant as the cracking process progresses. In the final stages, shear cracks are generated near the interface between the reinforcement and the concrete even during unloading. A bond-strength test, performed second, shows that tensile cracks are produced around the reinforcement in the early stages and spread from the reinforcement to wider areas in the later stages. An intense AE cluster due to shear cracks is observed along the interface between the reinforcement and the concrete. A previous result from an engineering structure is also presented for comparison. All these results demonstrate the great promise of the analysis for quantitative evaluation and visualization of the cracking process in reinforced concrete. The relationship between the opening width of surface cracks and the Kaiser effect is also studied intensively. It is shown that a breakdown of the Kaiser effect and high AE activity during unloading can be effective indices for estimating the level of deterioration in concrete structures.

  18. Tensor Modeling Based for Airborne LiDAR Data Classification

    NASA Astrophysics Data System (ADS)

    Li, N.; Liu, C.; Pfeifer, N.; Yin, J. F.; Liao, Z. Y.; Zhou, Y.

    2016-06-01

    Feature selection and description is a key factor in the classification of Earth observation data. In this paper, a classification method based on tensor decomposition is proposed. First, multiple features are extracted from the raw LiDAR point cloud, and raster LiDAR images are derived by accumulating the features or the "raw" data attributes. Then, the feature rasters of the LiDAR data are stored as a tensor, and tensor decomposition is used to select component features. This tensor representation preserves the initial spatial structure and ensures that the neighborhood of each cell is taken into account. Based on a small number of component features, a k-nearest-neighbor classification is applied.
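
As an illustration of the pipeline described above, here is a toy Python sketch: feature rasters are unfolded so each grid cell is a row of features, a truncated SVD stands in for the paper's tensor decomposition step (the actual method additionally preserves the spatial neighborhood), and a plain k-nearest-neighbor rule classifies in component-feature space. All data and labels are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy stand-in for feature rasters: an 8x8 grid with 5 features per cell.
rasters = rng.normal(size=(8, 8, 5))

# Unfold the feature tensor so each grid cell is a row of raw features,
# then derive a few component features from a truncated SVD
# (a simple surrogate for the tensor-decomposition step).
X = rasters.reshape(-1, 5)
U, s, Vt = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
k = 2
components = X @ Vt[:k].T        # 64 cells x 2 component features

def knn_predict(train_X, train_y, query, k=3):
    """Plain k-nearest-neighbor vote in component-feature space."""
    d = np.linalg.norm(train_X - query, axis=1)
    nearest = train_y[np.argsort(d)[:k]]
    vals, counts = np.unique(nearest, return_counts=True)
    return vals[np.argmax(counts)]

labels = (components[:, 0] > 0).astype(int)     # toy labels
pred = knn_predict(components, labels, components[7], k=1)
```

With k=1 and a query that is itself a training cell, the prediction trivially returns that cell's label; in practice one would hold out test cells and tune k.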

  19. A low-rank approximation-based transductive support tensor machine for semisupervised classification.

    PubMed

    Liu, Xiaolan; Guo, Tengjiao; He, Lifang; Yang, Xiaowei

    2015-06-01

    In the fields of machine learning, pattern recognition, image processing, and computer vision, data are usually represented by tensors. For semisupervised tensor classification, the existing transductive support tensor machine (TSTM) must resort to an iterative technique, which is very time-consuming. To overcome this shortcoming, in this paper we extend the concave-convex procedure-based transductive support vector machine (CCCP-TSVM) to tensor patterns and propose a low-rank approximation-based TSTM, in which the tensor rank-one decomposition is used to compute the inner product of tensors. Theoretically, the concave-convex procedure-based TSTM (CCCP-TSTM) is an extension of the linear CCCP-TSVM to tensor patterns; when the input patterns are vectors, CCCP-TSTM degenerates into the linear CCCP-TSVM. A set of experiments is conducted on 23 semisupervised classification tasks, generated from seven second-order face data sets, three third-order gait data sets, and two third-order image data sets, to illustrate the performance of CCCP-TSTM. The results show that, compared with CCCP-TSVM and TSTM, CCCP-TSTM provides significant performance gains in terms of test accuracy and training speed. PMID:25700447
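
The computational trick of using rank-one decompositions for tensor inner products rests on the identity that the inner product of two rank-one tensors factorizes: ⟨a∘b∘c, x∘y∘z⟩ = (aᵀx)(bᵀy)(cᵀz). A small pure-Python check with illustrative vectors:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def outer3(a, b, c):
    """Explicit rank-one third-order tensor a outer b outer c."""
    return [[[ai * bj * ck for ck in c] for bj in b] for ai in a]

def tensor_inner(T, S):
    """Entrywise inner product of two third-order tensors."""
    return sum(t * s
               for T2, S2 in zip(T, S)
               for t1, s1 in zip(T2, S2)
               for t, s in zip(t1, s1))

a, b, c = [1.0, 2.0], [0.5, -1.0, 2.0], [3.0, 1.0]
x, y, z = [2.0, 0.0], [1.0, 1.0, 1.0], [0.0, 4.0]

lhs = tensor_inner(outer3(a, b, c), outer3(x, y, z))
rhs = dot(a, x) * dot(b, y) * dot(c, z)
print(abs(lhs - rhs) < 1e-12)   # True
```

This is why a rank-one (or low-rank) representation makes kernel evaluations on tensor data cheap: the full entrywise sum collapses into a few vector dot products.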

  20. Highly entangled tensor networks

    NASA Astrophysics Data System (ADS)

    Gu, Yingfei; Bulmash, Daniel; Qi, Xiao-Liang

    Tensor network states are used to represent many-body quantum states, e.g., the ground state of a local Hamiltonian. In this talk, we will provide a systematic way to produce a family of highly entangled tensor network states. These states are entangled in a special way such that the entanglement entropy of a subsystem follows the Ryu-Takayanagi formula, i.e., the entropy is proportional to the area of the minimal geodesic surface bounding the boundary region. Our construction also provides an intuitive understanding of the Ryu-Takayanagi formula by relating it to a wave propagation process. We will present examples in various geometries.

  1. Demonstration of base catalyzed decomposition process, Navy Public Works Center, Guam, Mariana Islands

    SciTech Connect

    Schmidt, A.J.; Freeman, H.D.; Brown, M.D.; Zacher, A.H.; Neuenschwander, G.N.; Wilcox, W.A.; Gano, S.R.; Kim, B.C.; Gavaskar, A.R.

    1996-02-01

    Base Catalyzed Decomposition (BCD) is a chemical dehalogenation process designed for treating soils and other substrates contaminated with polychlorinated biphenyls (PCBs), pesticides, dioxins, furans, and other hazardous organic substances. PCBs are heavy organic liquids once widely used in industry as lubricants, heat transfer oils, and transformer dielectric fluids. In 1976, production was banned when PCBs were recognized as carcinogenic substances. It was estimated that significant quantities (one billion tons) of U.S. soils, including areas on U.S. military bases outside the country, were contaminated by PCB leaks and spills, and cleanup activities began. The BCD technology was developed in response to these activities. This report details the evolution of the process, from inception to deployment in Guam, and describes the process and system components provided to the Navy to meet the remediation requirements. The report is divided into several sections to cover the range of development and demonstration activities. Section 2.0 gives an overview of the project history. Section 3.0 describes the process chemistry and remediation steps involved. Section 4.0 provides a detailed description of each component and specific development activities. Section 5.0 details the testing and deployment operations and provides the results of the individual demonstration campaigns. Section 6.0 gives an economic assessment of the process. Section 7.0 presents the conclusions and recommendations from this project. The appendices contain equipment and instrument lists, equipment drawings, and detailed run and analytical data.

  2. Decomposition of phenylarsonic acid by AOP processes: degradation rate constants and by-products.

    PubMed

    Jaworek, K; Czaplicka, M; Bratek, Ł

    2014-10-01

    The paper presents results of studies of the photodegradation, photooxidation, and oxidation of phenylarsonic acid (PAA) in aqueous solution. Water solutions containing 2.7 g dm^-3 of phenylarsonic acid were subjected to advanced oxidation processes (AOPs) in UV, UV/H2O2, UV/O3, H2O2, and O3 systems under two pH conditions. Kinetic rate constants and half-lives of the phenylarsonic acid decomposition reaction are presented. The results indicate that at pH 2 and 7, PAA degradation follows pseudo-first-order kinetics. The highest rate constants (10.45 × 10^-3 and 20.12 × 10^-3) and degradation efficiencies at pH 2 and 7 were obtained with the UV/O3 process. In the solution after the processes, benzene, phenol, acetophenone, o-hydroxybiphenyl, p-hydroxybiphenyl, benzoic acid, benzaldehyde, and biphenyl were identified. PMID:24824504
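
Under the pseudo-first-order model stated above, half-lives follow directly from t1/2 = ln 2 / k. The abstract does not state the time units of its rate constants, so they are left generic in this small check:

```python
import math

# Pseudo-first-order decay: C(t) = C0 * exp(-k t), so t_half = ln 2 / k.
# Rate constants from the abstract for the UV/O3 process at pH 2 and pH 7
# (time units not stated in the excerpt, so they are left generic).
for label, k in [("pH 2", 10.45e-3), ("pH 7", 20.12e-3)]:
    t_half = math.log(2) / k
    print(f"{label}: k = {k:.2e}, t1/2 = {t_half:.1f} (in units of 1/k)")
```

The roughly twofold higher rate constant at pH 7 thus corresponds to a roughly halved half-life (about 34 vs. 66 in the same time units).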

  3. A data-driven multidimensional signal-noise decomposition approach for GPR data processing

    NASA Astrophysics Data System (ADS)

    Chen, Chih-Sung; Jeng, Yih

    2015-12-01

    We demonstrate the possibility of applying a data-driven nonlinear filtering scheme to the processing of ground penetrating radar (GPR) data. The algorithm is based on the recently developed multidimensional ensemble empirical mode decomposition (MDEEMD) method, which provides a framework for developing a variety of approaches to data analysis. GPR data processing is very challenging due to the large data volume, special format, and geometrically sensitive attributes, which are easily affected by various kinds of noise. Approaches that work in other fields of data processing may not be equally applicable to GPR data; the MDEEMD therefore has to be modified to fit the special needs of GPR data processing. In this study, we first give a brief review of the MDEEMD and then provide the detailed procedure for implementing a 2D GPR filter by exploiting the modified MDEEMD. A complete synthetic model study shows the details of the algorithm implementation. To assess the performance of the proposed approach, models with various signal-to-noise (S/N) ratios are discussed, and the results of a conventional filtering method are provided for comparison. Two real GPR field examples and onsite excavations indicate that the proposed approach is feasible for practical use.

  4. Probabilistic Round Trip Contamination Analysis of a Mars Sample Acquisition and Handling Process Using Markovian Decompositions

    NASA Technical Reports Server (NTRS)

    Hudson, Nicolas; Lin, Ying; Barengoltz, Jack

    2010-01-01

    A method for evaluating the probability of a Viable Earth Microorganism (VEM) contaminating a sample during the sample acquisition and handling (SAH) process of a potential future Mars Sample Return mission is developed. A scenario in which multiple core samples would be acquired using a rotary percussive coring tool, deployed from an arm on a MER-class rover, is analyzed. The analysis is conducted in a structured way by decomposing the sample acquisition and handling process into a series of discrete time steps and breaking the physical system into a set of relevant components. At each discrete time step, two key functions are defined: the probability of a VEM being released from each component, and the transport matrix, which represents the probability of VEM transport from one component to another. By defining the expected number of VEMs on each component at the start of the sampling process, these decompositions allow the expected number of VEMs on each component at each sampling step to be represented as a Markov chain. This formalism provides a rigorous mathematical framework in which to analyze the probability of a VEM entering the sample chain, and makes the analysis tractable by breaking the process down into small analyzable steps.
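
The Markov-chain propagation described above can be sketched with toy numbers; the component names, initial counts, and transport probabilities below are all invented for illustration:

```python
# Expected VEM counts per component evolve as v_{t+1} = v_t applied to T_t,
# where T_t is the transport matrix for sampling step t.
components = ["tool", "arm", "sample"]
v = [100.0, 10.0, 0.0]           # assumed expected VEMs at process start

# One illustrative transport matrix: row i gives where component i's
# VEMs go in a step (rows need not sum to 1 if VEMs can be lost).
T = [[0.90, 0.05, 0.01],
     [0.00, 0.95, 0.02],
     [0.00, 0.00, 0.99]]

def step(v, T):
    """One Markov step: expected counts redistributed by the transport matrix."""
    n = len(v)
    return [sum(v[i] * T[i][j] for i in range(n)) for j in range(n)]

for _ in range(3):               # three discrete sampling steps
    v = step(v, T)

expected_in_sample = v[2]        # expected VEMs in the sample chain
```

Because the analysis tracks expected counts rather than full distributions, each step is a single matrix-vector product, which is what keeps the decomposition tractable.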

  5. Character Decomposition and Transposition Processes in Chinese Compound Words Modulates Attentional Blink

    PubMed Central

    Cao, Hongwen; Gao, Min; Yan, Hongmei

    2016-01-01

    The attentional blink (AB) is the phenomenon in which the identification of the second of two targets (T2) is attenuated if it is presented less than 500 ms after the first target (T1). Although the AB is eliminated in canonical word conditions, it remains unclear whether the character order in compound words affects the magnitude of the AB. Morpheme decomposition and transposition of Chinese two-character compound words can provide an effective means to examine AB priming and to assess combinations of the component representations inherent to visual word identification. In the present study, we examined the processing of consecutive targets in a rapid serial visual presentation (RSVP) paradigm using Chinese two-character compound words in which the two characters were transposed to form meaningful words or meaningless combinations (reversible, transposed, or canonical words). We found that when two Chinese characters that form a compound word, regardless of their order, are presented in an RSVP sequence, the likelihood of an AB for the second character is greatly reduced or eliminated compared to when the two characters constitute separate words rather than a compound word. Moreover, the order of the report for the two characters is more likely to be reversed when the normal order of the two characters in a compound word is reversed, especially when the interval between the presentation of the two characters is extremely short. These findings are more consistent with the cognitive strategy hypothesis than the resource-limited hypothesis during character decomposition and transposition of Chinese two-character compound words. These results suggest that compound characters are perceived as a unit, rather than two separate words. The data further suggest that readers could easily understand the text with character transpositions in compound words during Chinese reading. PMID:27379003

  6. Character Decomposition and Transposition Processes in Chinese Compound Words Modulates Attentional Blink.

    PubMed

    Cao, Hongwen; Gao, Min; Yan, Hongmei

    2016-01-01

    The attentional blink (AB) is the phenomenon in which the identification of the second of two targets (T2) is attenuated if it is presented less than 500 ms after the first target (T1). Although the AB is eliminated in canonical word conditions, it remains unclear whether the character order in compound words affects the magnitude of the AB. Morpheme decomposition and transposition of Chinese two-character compound words can provide an effective means to examine AB priming and to assess combinations of the component representations inherent to visual word identification. In the present study, we examined the processing of consecutive targets in a rapid serial visual presentation (RSVP) paradigm using Chinese two-character compound words in which the two characters were transposed to form meaningful words or meaningless combinations (reversible, transposed, or canonical words). We found that when two Chinese characters that form a compound word, regardless of their order, are presented in an RSVP sequence, the likelihood of an AB for the second character is greatly reduced or eliminated compared to when the two characters constitute separate words rather than a compound word. Moreover, the order of the report for the two characters is more likely to be reversed when the normal order of the two characters in a compound word is reversed, especially when the interval between the presentation of the two characters is extremely short. These findings are more consistent with the cognitive strategy hypothesis than the resource-limited hypothesis during character decomposition and transposition of Chinese two-character compound words. These results suggest that compound characters are perceived as a unit, rather than two separate words. The data further suggest that readers could easily understand the text with character transpositions in compound words during Chinese reading. PMID:27379003

  7. Fundamental phenomena on fuel decomposition and boundary layer combustion processes with applications to hybrid rocket motors

    NASA Astrophysics Data System (ADS)

    Kuo, Kenneth K.; Lu, Y. C.; Chiaverini, Martin J.; Harting, George C.

    1994-11-01

    An experimental study on the fundamental processes involved in fuel decomposition and boundary layer combustion in hybrid rocket motors is being conducted at the High Pressure Combustion Laboratory of the Pennsylvania State University. This research should provide a useful engineering technology base in the development of hybrid rocket motors as well as a fundamental understanding of the complex processes involved in hybrid propulsion. A high pressure slab motor has been designed and manufactured for conducting experimental investigations. Oxidizer (LOX or GOX) supply and control systems have been designed and partly constructed for the head-end injection into the test chamber. Experiments using HTPB fuel, as well as fuels supplied by NASA designated industrial companies will be conducted. Design and construction of fuel casting molds and sample holders have been completed. The portion of these items for industrial company fuel casting will be sent to the McDonnell Douglas Aerospace Corporation in the near future. The study focuses on the following areas: observation of solid fuel burning processes with LOX or GOX, measurement and correlation of solid fuel regression rate with operating conditions, measurement of flame temperature and radical species concentrations, determination of the solid fuel subsurface temperature profile, and utilization of experimental data for validation of a companion theoretical study (Part 2) also being conducted at PSU.

  8. Fundamental phenomena on fuel decomposition and boundary layer combustion processes with applications to hybrid rocket motors

    NASA Technical Reports Server (NTRS)

    Kuo, Kenneth K.; Lu, Y. C.; Chiaverini, Martin J.; Harting, George C.

    1994-01-01

    An experimental study on the fundamental processes involved in fuel decomposition and boundary layer combustion in hybrid rocket motors is being conducted at the High Pressure Combustion Laboratory of the Pennsylvania State University. This research should provide a useful engineering technology base in the development of hybrid rocket motors as well as a fundamental understanding of the complex processes involved in hybrid propulsion. A high pressure slab motor has been designed and manufactured for conducting experimental investigations. Oxidizer (LOX or GOX) supply and control systems have been designed and partly constructed for the head-end injection into the test chamber. Experiments using HTPB fuel, as well as fuels supplied by NASA designated industrial companies will be conducted. Design and construction of fuel casting molds and sample holders have been completed. The portion of these items for industrial company fuel casting will be sent to the McDonnell Douglas Aerospace Corporation in the near future. The study focuses on the following areas: observation of solid fuel burning processes with LOX or GOX, measurement and correlation of solid fuel regression rate with operating conditions, measurement of flame temperature and radical species concentrations, determination of the solid fuel subsurface temperature profile, and utilization of experimental data for validation of a companion theoretical study (Part 2) also being conducted at PSU.

  9. Thermal decomposition of potassium and sodium ethylxanthates and the influence of nitrobenzene on this process

    SciTech Connect

    Gorbatov, V.V.; Gerega, V.F.; Bordzilovskii, V.Ya.; Borovoi, A.A.; Dergunov, Yu.I.

    1988-02-10

    The thermal decomposition of the alkylxanthates was described by a first-order kinetic equation up to a degree of conversion of 50%. Thermal decomposition studies of potassium alkylxanthates indicated that the rate constants for the decomposition of ROCS2K in isopentyl alcohol increased, and the activation energies decreased, as the group R changed along the series CH3, C2H5, C3H7, C4H9, (CH3)3CCH2, and iso-C3H7. In the study of the influence of nitrobenzene additions on the decomposition of potassium and sodium alkylxanthates, these additions had an accelerating effect on the thermal decomposition of ROCS2M in isopentyl alcohol.

  10. Tensor representation techniques in post-Hartree-Fock methods: matrix product state tensor format

    NASA Astrophysics Data System (ADS)

    Benedikt, Udo; Auer, Henry; Espig, Mike; Hackbusch, Wolfgang; Auer, Alexander A.

    2013-09-01

    In this proof-of-principle study, we discuss the application of various tensor representation formats and their implications for memory requirements and computational effort for tensor manipulations as they occur in typical post-Hartree-Fock (post-HF) methods. A successive tensor decomposition/rank reduction scheme in the matrix product state (MPS) format for the two-electron integrals in the AO and MO bases and an estimate of the t2 amplitudes as obtained from second-order many-body perturbation theory (MP2) are described. Furthermore, the AO-MO integral transformation, the calculation of the MP2 energy and the potential usage of tensors in low-rank MPS representation for the tensor contractions in coupled cluster theory are discussed in detail. We are able to show that the overall scaling of the memory requirements is reduced from the conventional N^4 scaling to approximately N^3, and the scaling of computational effort for tensor contractions in post-HF methods can be reduced to roughly N^4, while the decomposition itself scales as N^5. While efficient algorithms with low prefactor for the tensor decomposition have yet to be devised, this ansatz offers the possibility of finding a robust approximation with low-scaling behaviour with system and basis-set size for post-HF ab initio methods.
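
A successive-SVD construction of the MPS (tensor-train) format, in the spirit of the scheme discussed above but as a generic NumPy sketch rather than the authors' implementation, might look like:

```python
import numpy as np

def mps_decompose(T, eps=1e-12):
    """Decompose a dense tensor into MPS/TT cores by successive SVDs,
    truncating singular values below eps relative to the largest one."""
    dims = T.shape
    cores, rank = [], 1
    M = T.reshape(rank * dims[0], -1)
    for n, d in enumerate(dims[:-1]):
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        keep = max(1, int(np.sum(s > eps * s[0])))
        cores.append(U[:, :keep].reshape(rank, d, keep))
        rank = keep
        M = (s[:keep, None] * Vt[:keep]).reshape(rank * dims[n + 1], -1)
    cores.append(M.reshape(rank, dims[-1], 1))
    return cores

def mps_reconstruct(cores):
    """Contract the cores back into a dense tensor."""
    out = cores[0]
    for G in cores[1:]:
        out = np.tensordot(out, G, axes=([-1], [0]))
    return out.reshape([G.shape[1] for G in cores])

rng = np.random.default_rng(1)
T = rng.normal(size=(3, 4, 4, 3))
cores = mps_decompose(T)
err = np.linalg.norm(mps_reconstruct(cores) - T)
```

With the tolerance left tight the decomposition is exact up to floating-point error; raising eps truncates the bond ranks, which is where the memory savings for the two-electron integrals come from.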

  11. Empirical mode decomposition as a time-varying multirate signal processing system

    NASA Astrophysics Data System (ADS)

    Yang, Yanli

    2016-08-01

    Empirical mode decomposition (EMD) can adaptively split composite signals into narrow subbands termed intrinsic mode functions (IMFs). Although an analytical expression of IMFs extracted by EMD from signals is introduced in Yang et al. (2013) [1], it is only used for the case of extrema spaced uniformly. In this paper, the EMD algorithm is analyzed from digital signal processing perspective for the case of extrema spaced nonuniformly. Firstly, the extrema extraction is represented by a time-varying extrema decimator. The nonuniform extrema extraction is analyzed through modeling the time-varying extrema decimation at a fixed time point as a time-invariant decimation. Secondly, by using the impulse/summation approach, spline interpolation for knots spaced nonuniformly is shown as two basic operations, time-varying interpolation and filtering by a time-varying spline filter. Thirdly, envelopes of signals are written as the output of the time-varying spline filter. An expression of envelopes of signals in both time and frequency domain is presented. The EMD algorithm is then described as a time-varying multirate signal processing system. Finally, an equation to model IMFs is derived by using a matrix formulation in time domain for the general case of extrema spaced nonuniformly.
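
A heavily simplified sifting loop conveys the structure of the EMD algorithm analyzed above. Real EMD uses cubic-spline envelopes and a data-driven stopping criterion; this stdlib-only sketch substitutes piecewise-linear envelopes and a fixed iteration count:

```python
import math

def local_extrema(x):
    """Indices of interior local maxima and minima."""
    maxs, mins = [], []
    for i in range(1, len(x) - 1):
        if x[i] > x[i - 1] and x[i] > x[i + 1]:
            maxs.append(i)
        elif x[i] < x[i - 1] and x[i] < x[i + 1]:
            mins.append(i)
    return maxs, mins

def envelope(x, knots):
    """Piecewise-linear envelope through (knot, x[knot]) pairs; real EMD
    uses cubic splines, so this is only a structural sketch."""
    pts = [(0, x[0])] + [(i, x[i]) for i in knots] + [(len(x) - 1, x[-1])]
    env, j = [], 0
    for i in range(len(x)):
        while j < len(pts) - 2 and i > pts[j + 1][0]:
            j += 1
        (i0, y0), (i1, y1) = pts[j], pts[j + 1]
        env.append(y0 + (y1 - y0) * (i - i0) / (i1 - i0))
    return env

def sift_once(x):
    """One sifting step: subtract the mean of the two envelopes."""
    maxs, mins = local_extrema(x)
    upper, lower = envelope(x, maxs), envelope(x, mins)
    return [xi - (u + l) / 2 for xi, u, l in zip(x, upper, lower)]

n = 200
t = [i / n for i in range(n)]
signal = [math.sin(2 * math.pi * 10 * ti) + 0.5 * math.sin(2 * math.pi * ti)
          for ti in t]
imf1 = signal
for _ in range(8):               # fixed number of sifting iterations
    imf1 = sift_once(imf1)
residue = [s - m for s, m in zip(signal, imf1)]
```

The envelope construction at the (nonuniformly spaced) extrema is exactly the step the paper models as time-varying decimation followed by time-varying interpolation and filtering.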

  12. Fundamental phenomena on fuel decomposition and boundary layer combustion processes with applications to hybrid rocket motors

    NASA Technical Reports Server (NTRS)

    Kuo, Kenneth K.; Lu, Y. C.; Chiaverini, Martin J.; Harting, George C.

    1994-01-01

    An experimental study on the fundamental processes involved in fuel decomposition and boundary layer combustion in hybrid rocket motors is being conducted at the High Pressure Combustion Laboratory of the Pennsylvania State University. This research should provide an engineering technology base for development of large scale hybrid rocket motors as well as a fundamental understanding of the complex processes involved in hybrid propulsion. A high pressure slab motor has been designed for conducting experimental investigations. Oxidizer (LOX or GOX) is injected through the head-end over a solid fuel (HTPB) surface. Experiments using fuels supplied by NASA designated industrial companies will also be conducted. The study focuses on the following areas: measurement and observation of solid fuel burning with LOX or GOX, correlation of solid fuel regression rate with operating conditions, measurement of flame temperature and radical species concentrations, determination of the solid fuel subsurface temperature profile, and utilization of experimental data for validation of a companion theoretical study also being conducted at PSU.

  13. [Rates of decomposition processes in mountain soils of the Sudeten as a function of edaphic-climatic and biotic factors].

    PubMed

    Striganova, B R; Bienkowski, P

    2000-01-01

    The rate of grass litter decomposition was studied in soils of the Karkonosze Mountains of the Sudeten at different altitudes. Parallel structural-functional investigations of the soil animal population on the example of soil macrofauna were carried out and heavy metals were assayed in the soil at stationary plots to reveal the effects of both natural and anthropogenic factors on the soil biological activity. The recent contamination of soil in the Sudeten by heavy metals and sulfur does not affect the spatial distribution and abundance of the soil-dwelling invertebrates and the decomposition rates. The latter correlated to a high level of soil saprotroph activity. The activity of the decomposition processes depends on the soil content of organic matter, conditions of soil drainage, and the temperature of upper soil horizon. PMID:11149317

  14. Multilinear operators for higher-order decompositions.

    SciTech Connect

    Kolda, Tamara Gibson

    2006-04-01

    We propose two new multilinear operators for expressing the matrix compositions that are needed in the Tucker and PARAFAC (CANDECOMP) decompositions. The first operator, which we call the Tucker operator, is shorthand for performing an n-mode matrix multiplication for every mode of a given tensor and can be employed to concisely express the Tucker decomposition. The second operator, which we call the Kruskal operator, is shorthand for the sum of the outer-products of the columns of N matrices and allows a divorce from a matricized representation and a very concise expression of the PARAFAC decomposition. We explore the properties of the Tucker and Kruskal operators independently of the related decompositions. Additionally, we provide a review of the matrix and tensor operations that are frequently used in the context of tensor decompositions.
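
Under the definitions above, both operators can be prototyped directly. A NumPy sketch (not the authors' code) in which PARAFAC appears as the Tucker operator applied to a superdiagonal core:

```python
import numpy as np

def tucker_operator(G, matrices):
    """[[G; A, B, C]]: n-mode multiply the core G by each factor matrix."""
    T = G
    for mode, A in enumerate(matrices):
        T = np.moveaxis(np.tensordot(A, T, axes=(1, mode)), 0, mode)
    return T

def kruskal_operator(matrices):
    """[[A, B, C]]: sum of outer products of corresponding columns."""
    r = matrices[0].shape[1]
    shape = tuple(A.shape[0] for A in matrices)
    T = np.zeros(shape)
    for j in range(r):
        outer = matrices[0][:, j]
        for A in matrices[1:]:
            outer = np.multiply.outer(outer, A[:, j])
        T += outer
    return T

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 2))
B = rng.normal(size=(5, 2))
C = rng.normal(size=(3, 2))

# PARAFAC equals the Tucker operator with a superdiagonal core:
G = np.zeros((2, 2, 2))
G[0, 0, 0] = G[1, 1, 1] = 1.0
same = np.allclose(tucker_operator(G, [A, B, C]), kruskal_operator([A, B, C]))
```

The check at the end illustrates the relationship the paper exploits: the Kruskal operator is the Tucker operator specialized to a superdiagonal core, without any matricized intermediate.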

  15. Environmental assessment of the base catalyzed decomposition (BCD) process. Research report, June--July 1998

    SciTech Connect

    1998-08-01

    The report summarizes laboratory-scale, pilot-scale, and field performance data on BCD (Base Catalyzed Decomposition) technology, collected to date by various governmental, academic, and private organizations.

  16. Age-Related Modifications of Diffusion Tensor Imaging Parameters and White Matter Hyperintensities as Inter-Dependent Processes

    PubMed Central

    Pelletier, Amandine; Periot, Olivier; Dilharreguy, Bixente; Hiba, Bassem; Bordessoules, Martine; Chanraud, Sandra; Pérès, Karine; Amieva, Hélène; Dartigues, Jean-François; Allard, Michèle; Catheline, Gwénaëlle

    2016-01-01

    Microstructural changes of White Matter (WM) associated with aging have been widely described through Diffusion Tensor Imaging (DTI) parameters. In parallel, White Matter Hyperintensities (WMH) as observed on a T2-weighted MRI are extremely common in older individuals. However, few studies have investigated both phenomena conjointly. The present study investigates aging effects on DTI parameters in the absence and in the presence of WMH. Diffusion maps were constructed based on 21-direction DTI scans of young adults (n = 19, mean age = 33, SD = 7.4) and two age-matched groups of older adults, one presenting low-level-WMH (n = 20, mean age = 78, SD = 3.2) and one presenting high-level-WMH (n = 20, mean age = 79, SD = 5.4). Older subjects with low-level-WMH presented modifications of DTI parameters in comparison to younger subjects, fitting with the DTI pattern classically described in aging, i.e., Fractional Anisotropy (FA) decrease/Radial Diffusivity (RD) increase. Furthermore, older subjects with high-level-WMH showed greater DTI modifications in Normal Appearing White Matter (NAWM) in comparison to those with low-level-WMH. Finally, in older subjects with high-level-WMH, FA and RD values of NAWM were associated with WMH burden. Therefore, our findings suggest that DTI modifications and the presence of WMH are two inter-dependent processes occurring within different temporal windows. DTI changes would reflect the early phase of white matter changes, and WMH would appear as a consequence of those changes. PMID:26834625

  17. Combined effects of leaf litter and soil microsite on decomposition process in arid rangelands.

    PubMed

    Carrera, Analía Lorena; Bertiller, Mónica Beatriz

    2013-01-15

    The objective of this study was to analyze the combined effects of leaf litter quality and soil properties on litter decomposition and soil nitrogen (N) mineralization at conserved (C) and sheep-grazing-disturbed (D) vegetation states in arid rangelands of the Patagonian Monte. It was hypothesized that spatial differences in soil inorganic-N levels have a larger impact on the decomposition of non-recalcitrant than of recalcitrant leaf litter (low and high concentrations of secondary compounds, respectively). Leaf litter and upper soil were extracted from modal-size plant patches (patch microsite) and the associated inter-patch area (inter-patch microsite) in C and D. Leaf litter was pooled per vegetation state, and soil was pooled combining vegetation state and microsite. Concentrations of N and secondary compounds in leaf litter, and of total and inorganic N in soil, were assessed for each pooled sample. Leaf litter decay and soil N mineralization at microsites of C and D were estimated in 160 microcosms incubated at field capacity (16 months). C soils had higher total N than D soils (0.58 and 0.41 mg/g, respectively). Patch soil of C and inter-patch soil of D exhibited the highest values of inorganic N (8.8 and 8.4 μg/g, respectively). Leaf litter of C was less recalcitrant and decomposed faster than that of D. Non-recalcitrant leaf litter decay and induced soil N mineralization showed larger variation among microsites (coefficients of variation = 25 and 41%, respectively) than recalcitrant leaf litter (coefficients of variation = 12 and 32%, respectively). Changes in canopy structure induced by grazing disturbance increased leaf litter recalcitrance and reduced litter decay and soil N mineralization, independently of soil N levels. This highlights the importance of the combined effects of soil and leaf litter properties on N cycling, probably with consequences for vegetation reestablishment and dynamics, rangeland resistance and resilience with implications

  18. KOALA: A program for the processing and decomposition of transient spectra

    NASA Astrophysics Data System (ADS)

    Grubb, Michael P.; Orr-Ewing, Andrew J.; Ashfold, Michael N. R.

    2014-06-01

    Extracting meaningful kinetic traces from time-resolved absorption spectra is a non-trivial task, particularly for solution phase spectra where solvent interactions can substantially broaden and shift the transition frequencies. Typically, each spectrum is composed of signal from a number of molecular species (e.g., excited states, intermediate complexes, product species) with overlapping spectral features. Additionally, the profiles of these spectral features may evolve in time (i.e., signal nonlinearity), further complicating the decomposition process. Here, we present a new program for decomposing mixed transient spectra into their individual component spectra and extracting the corresponding kinetic traces: KOALA (Kinetics Observed After Light Absorption). The software combines spectral target analysis with brute-force linear least squares fitting, which is computationally efficient because of the small nonlinear parameter space of most spectral features. Within, we demonstrate the application of KOALA to two sets of experimental transient absorption spectra with multiple mixed spectral components. Although designed for decomposing solution-phase transient absorption data, KOALA may in principle be applied to any time-evolving spectra with multiple components.
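
The linear least-squares core of this kind of spectral target analysis can be illustrated as follows: given assumed component spectra, the concentration (kinetic trace) of each component at every time delay is recovered by a linear fit. The spectra, kinetics, and noise level below are entirely synthetic, not data from the program:

```python
import numpy as np

rng = np.random.default_rng(2)
wav = np.linspace(0, 1, 120)

def band(center, width):
    """A Gaussian absorption band on the wavelength axis."""
    return np.exp(-((wav - center) / width) ** 2)

# Two assumed component spectra and their true kinetic traces
# (a decaying species converting into a product).
S = np.stack([band(0.3, 0.05), band(0.7, 0.08)], axis=1)    # 120 wav x 2
t = np.linspace(0, 5, 40)
true_c = np.stack([np.exp(-t), 1 - np.exp(-t)], axis=1)     # 40 times x 2

D = true_c @ S.T + 0.005 * rng.normal(size=(40, 120))       # mixed spectra

# Spectral target analysis: for each time delay, linear least squares
# recovers the contribution of each component spectrum.
fit_c, *_ = np.linalg.lstsq(S, D.T, rcond=None)
fit_c = fit_c.T                                             # 40 x 2 traces
err = np.max(np.abs(fit_c - true_c))
```

Because the fit in the component amplitudes is linear, only the few nonlinear spectral-shape parameters need brute-force searching, which is the efficiency argument made in the abstract.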

  19. Design studies of the sulfur trioxide decomposition reactor for the sulfur-cycle hydrogen-production process

    SciTech Connect

    Lin, S.S.; Flaherty, R.

    1982-01-01

    The Sulfur Cycle is a two-step hybrid electrochemical/thermochemical process for decomposing water into hydrogen and oxygen. Integration of a complex chemical process with a solar heat source poses unique challenges with regard to process and equipment design. The conceptual design for a developmental test unit demonstrating the sulfur cycle was prepared in 1980. The test unit design is compatible with the power level of a large parabolic solar collector. One of the key components in the process is the sulfur trioxide decomposition reactor. The design studies of the sulfur trioxide decomposition reactor encompassing the thermodynamics, reaction kinetics, heat transfer, and mechanical considerations, are described along with a brief description of the test unit.

  20. Growth of lanthanum manganate buffer layers for coated conductors via a metal-organic decomposition process

    NASA Astrophysics Data System (ADS)

    Venkataraman, Kartik

    LaMnO3 (LMO) was identified as a possible buffer material for YBa2Cu3O7-x conductors due to its diffusion barrier properties and close lattice match with YBa2Cu3O7-x. Growth of LMO films via a metal-organic decomposition (MOD) process on Ni, Ni-5at.%W (Ni-5W), and single crystal SrTiO3 substrates was investigated. Phase-pure LMO was grown via MOD on Ni and SrTiO3 substrates at temperatures and oxygen pressures within a thermodynamic "process window" wherein LMO, Ni, Ni-5W, and SrTiO3 are all stable components. LMO could not be grown on Ni-5W in the "process window" because tungsten diffused from the substrate into the overlying film, where it reacted to form La and Mn tungstates. The kinetics of tungstate formation and crystallization of phase-pure LMO from the La and Mn acetate precursors are competitive in the temperature range explored (850--1100°C). Temperatures <850°C might mitigate tungsten diffusion from the substrate to the film sufficiently to obviate tungstate formation, but LMO films deposited via MOD require temperatures ≥850°C for nucleation and grain growth. Using a Y2O3 seed layer on Ni-5W to block tungsten from diffusing into the LMO film was explored; however, Y2O3 reacts with tungsten in the "process window" at 850--1100°C. Tungsten diffusion into Y2O3 can be blocked if epitaxial, crack-free NiWO4 and NiO layers are formed at the interface between Ni-5W and Y2O3. NiWO4 only grows epitaxially if the overlying NiO and buffer layers are thick enough to mechanically suppress (011)-oriented NiWO4 grain growth. This is not the case when a bare 75 nm-thick Y2O3 film on Ni-5W is processed at 850°C. These studies show that the Ni-5W substrate must be at a low temperature to prevent tungsten diffusion, whereas the LMO precursor film must be at elevated temperature to crystallize. An excimer laser-assisted MOD process was used where a Y2O3-coated Ni-5W substrate was held at 500°C in air and the pulsed laser photo-thermally heated the Y2O3 and LMO

  1. Mathematical modeling of frontal process in thermal decomposition of a substance with allowance for the finite velocity of heat propagation

    SciTech Connect

    Shlenskii, O.F.; Murashov, G.G.

    1982-05-01

    In describing frontal processes of thermal decomposition of high-energy condensed substances, for example detonation, it is common practice to write the equation for the conservation of energy without any limitations on the heat propagation velocity (HPV). At the same time, it is known that in calculating fast processes of heat conduction, the assumption of an infinitely high HPV is not always justified. In order to evaluate the influence of the HPV on the results from calculations of heat conduction process under conditions of a short-term exothermic decomposition of a condensed substance, the solution of the problem of heating a semiinfinite, thermally unstable solid body with boundary conditions of the third kind on the surface has been examined.

  2. Empirical mode decomposition analysis of random processes in the solar atmosphere

    NASA Astrophysics Data System (ADS)

    Kolotkov, D. Y.; Anfinogentov, S. A.; Nakariakov, V. M.

    2016-08-01

    Context. Coloured noisy components with a power law spectral energy distribution are often shown to appear in solar signals of various types. Such frequency-dependent noise may indicate the operation of various randomly distributed dynamical processes in the solar atmosphere. Aims: We develop a recipe for the correct usage of the empirical mode decomposition (EMD) technique in the presence of coloured noise, allowing for a clear distinction between quasi-periodic oscillatory phenomena in the solar atmosphere and superimposed random background processes. For illustration, we statistically investigate extreme ultraviolet (EUV) emission intensity variations observed with SDO/AIA in the coronal (171 Å), chromospheric (304 Å), and upper photospheric (1600 Å) layers of the solar atmosphere, from a quiet-Sun region and a sunspot umbra. Methods: EMD has been used for analysis because of its adaptive nature and essential applicability to the processing of non-stationary and amplitude-modulated time series. For comparison with the results obtained with EMD, we use the Fourier transform technique as an etalon. Results: We empirically revealed the statistical properties of synthetic coloured noises in EMD, and suggested a scheme that allows for the detection of noisy components among the intrinsic modes obtained with EMD in real signals. Application of the method to the solar EUV signals showed that they indeed behave randomly and could be represented as a combination of different coloured noises characterised by specific values of the power law indices in their spectral energy distributions. On the other hand, 3-min oscillations in the analysed sunspot were detected to have energies significantly above the corresponding noise level. Conclusions: Correct accounting for the background frequency-dependent random processes is essential when using EMD for analysis of oscillations in the solar atmosphere. For the quiet sun region the power law index was found to increase

  3. Decomposition Process of Alane and Gallane Compounds in Metal-Organic Chemical Vapor Deposition Studied by Surface Photo-Absorption

    NASA Astrophysics Data System (ADS)

    Yamauchi, Yoshiharu; Kobayashi, Naoki

    1992-09-01

    We used surface photo-absorption (SPA) to study trimethylamine alane (TMAA) and dimethylamine gallane (DMAG) decomposition processes on a substrate surface in metal-organic chemical vapor deposition. The decomposition onset temperatures of these group III hydride sources correspond to the substrate temperature at which the SPA reflectivity starts to increase during the supply of the group III source onto a group V stabilized surface. It was found that TMAA and DMAG start to decompose at about 150°C on an As-stabilized surface, which is much lower than the decomposition onsets of trialkyl Al and Ga compounds. Low-temperature photoluminescence spectra exhibit dominant excitonic emissions for GaAs layers grown with DMAG at substrate temperatures above 400°C, indicating that carbon incorporation and crystal-quality deterioration due to incomplete decomposition on the surface are much suppressed by using DMAG. A comparison of AlGaAs photoluminescence between layers grown by TMAA/triethylgallium and by triethylaluminum/triethylgallium shows that the band-to-carbon-acceptor transition is greatly reduced by using TMAA. TMAA and DMAG were verified to be promising group III sources for low-temperature, high-purity growth with low carbon incorporation.

  4. Hyperbolicity of scalar-tensor theories of gravity

    SciTech Connect

    Salgado, Marcelo; Martinez del Rio, David; Alcubierre, Miguel; Nunez, Dario

    2008-05-15

    Two first order strongly hyperbolic formulations of scalar-tensor theories of gravity allowing nonminimal couplings (Jordan frame) are presented along the lines of the 3+1 decomposition of spacetime. One is based on the Bona-Masso formulation, while the other one employs a conformal decomposition similar to that of Baumgarte-Shapiro-Shibata-Nakamura. A modified Bona-Masso slicing condition adapted to the scalar-tensor theory is proposed for the analysis. This study confirms that the scalar-tensor theory has a well-posed Cauchy problem even when formulated in the Jordan frame.

  5. Ab initio molecular-dynamics study of EC decomposition process on Li2O2 surfaces

    NASA Astrophysics Data System (ADS)

    Ando, Yasunobu; Ikeshoji, Tamio; Otani, Minoru

    2015-03-01

    We have simulated electrochemical reactions of EC molecule decomposition on a Li2O2 substrate by ab initio molecular dynamics combined with the effective screening medium method. EC molecules adsorb onto the peroxide spontaneously. We find through analysis of the density of states that the adsorption state is stabilized by hybridization of the sp2 orbital with the surface states of the Li2O2. After adsorption, the EC ring opens, which leads to the decomposition of the peroxide and the formation of a carboxyl group. Such alkyl carbonates formed on the Li2O2 substrate have indeed been observed in experiments.

  6. Efficient MATLAB computations with sparse and factored tensors.

    SciTech Connect

    Bader, Brett William; Kolda, Tamara Gibson (Sandia National Lab, Livermore, CA)

    2006-12-01

    In this paper, the term tensor refers simply to a multidimensional or N-way array, and we consider how specially structured tensors allow for efficient storage and computation. First, we study sparse tensors, which have the property that the vast majority of the elements are zero. We propose storing sparse tensors using coordinate format and describe the computational efficiency of this scheme for various mathematical operations, including those typical to tensor decomposition algorithms. Second, we study factored tensors, which have the property that they can be assembled from more basic components. We consider two specific types: a Tucker tensor can be expressed as the product of a core tensor (which itself may be dense, sparse, or factored) and a matrix along each mode, and a Kruskal tensor can be expressed as the sum of rank-1 tensors. We are interested in the case where the storage of the components is less than the storage of the full tensor, and we demonstrate that many elementary operations can be computed using only the components. All of the efficiencies described in this paper are implemented in the Tensor Toolbox for MATLAB.
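
    The two structures described above are easy to sketch concretely. Below is a minimal numpy illustration of a coordinate-format sparse tensor and a Kruskal (sum of rank-1 terms) tensor; it mirrors the ideas, not the Tensor Toolbox API:

```python
import numpy as np

# Sparse tensor in coordinate (COO) format: one row of `coords` per nonzero entry.
coords = np.array([[0, 1, 2], [1, 0, 0], [2, 3, 1]])   # (nnz, 3) index triples
values = np.array([1.5, -2.0, 0.75])                   # (nnz,) nonzero values
shape = (3, 4, 3)

def coo_to_dense(coords, values, shape):
    """Assemble the dense tensor (only sensible for small examples)."""
    dense = np.zeros(shape)
    dense[tuple(coords.T)] = values
    return dense

# Kruskal tensor: sum of R rank-1 tensors built from factor matrices A, B, C.
def kruskal_to_dense(A, B, C):
    # T[i, j, k] = sum_r A[i, r] * B[j, r] * C[k, r]
    return np.einsum('ir,jr,kr->ijk', A, B, C)

rng = np.random.default_rng(1)
A, B, C = (rng.standard_normal((n, 2)) for n in shape)  # rank R = 2 factors
T = kruskal_to_dense(A, B, C)
```

The storage advantage is visible here: the COO form keeps nnz indices and values instead of prod(shape) entries, and the Kruskal form keeps R*(I+J+K) factor entries; many operations (norms, inner products, tensor-times-matrix) can be computed directly from these components.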

  7. Effect of water vapor on the thermal decomposition process of zinc hydroxide chloride and crystal growth of zinc oxide

    SciTech Connect

    Kozawa, Takahiro; Onda, Ayumu; Yanagisawa, Kazumichi; Kishi, Akira; Masuda, Yasuaki

    2011-03-15

    The thermal decomposition process of zinc hydroxide chloride (ZHC), Zn5(OH)8Cl2·H2O, prepared by a hydrothermal slow-cooling method has been investigated by simultaneous X-ray diffractometry and differential scanning calorimetry (XRD-DSC) and thermogravimetric-differential thermal analysis (TG-DTA) in a humidity-controlled atmosphere. ZHC decomposed to ZnO through β-Zn(OH)Cl as the intermediate phase, leaving amorphous hydrated ZnCl2. In humid N2 with P(H2O) = 4.5 and 10 kPa, the hydrolysis of residual ZnCl2 was accelerated and the theoretical amount of ZnO was obtained at lower temperatures than in dry N2, whereas in dry N2 significant weight loss was caused by vaporization of residual ZnCl2. ZnO formed by calcination in a stagnant air atmosphere had the same morphology as the original ZHC crystals and consisted of c-axis-oriented column-like particle arrays. On the other hand, the preferred orientation of ZnO was inhibited in the case of calcination in 100% water vapor. A detailed thermal decomposition process of ZHC and the effect of water vapor on the crystal growth of ZnO are discussed. -- Graphical abstract: The thermal decomposition process of ZHC has been investigated by novel thermal analyses at three different water vapor partial pressures; in the water vapor atmosphere, the formation of ZnO was completed at lower temperatures than in dry N2. Highlights: > We examine the thermal decomposition of zinc hydroxide chloride in water vapor. > Water vapor had no effect on its thermal decomposition up to 230 °C. > Water vapor accelerated the decomposition of the residual ZnCl2 in ZnO. > Without water vapor, a large amount of ZnCl2 evaporated to form c-axis-oriented ZnO.

  8. Modelling regulation of decomposition and related root/mycorrhizal processes in arctic tundra soils. Final report

    SciTech Connect

    Linkins, A.E.

    1992-09-01

    Since this was the final year of the project, principal activities were directed toward either collecting data needed to complete existing incomplete data sets or writing manuscripts. Data sets for the Imnaviat Creek watershed basin are functionally complete, and data were finalized on cellulose mineralization and the dust impact on soil organic carbon and phosphorus decomposition. Seven manuscripts were prepared and are briefly outlined.

  9. Modelling regulation of decomposition and related root/mycorrhizal processes in arctic tundra soils

    SciTech Connect

    Linkins, A.E.

    1992-01-01

    Since this was the final year of the project, principal activities were directed toward either collecting data needed to complete existing incomplete data sets or writing manuscripts. Data sets for the Imnaviat Creek watershed basin are functionally complete, and data were finalized on cellulose mineralization and the dust impact on soil organic carbon and phosphorus decomposition. Seven manuscripts were prepared and are briefly outlined.

  10. Dynamics of crop residue composition-decomposition: Temporal modeling of multivariate carbon sources and processes [abstract

    Technology Transfer Automated Retrieval System (TEKTRAN)

    We examined multivariate relationships in structural carbohydrates plus lignin (STC) and non-structural (NSC) carbohydrates and their impact on C:N ratio and the dynamics of active (ka) and passive (kp) residue decomposition of alfalfa, corn, soybean, cuphea and switchgrass as candidates in diverse ...

  11. Multi-Zone Liquid Thrust Chamber Performance Code with Domain Decomposition for Parallel Processing

    NASA Technical Reports Server (NTRS)

    Navaz, Homayun K.

    2002-01-01

    -equation turbulence model, and two-phase flow. To overcome these limitations, the LTCP code is rewritten to include the multi-zone capability with domain decomposition that makes it suitable for parallel processing, i.e., enabling the code to run every zone or sub-domain on a separate processor. This can reduce the run time by a factor of 6 to 8, depending on the problem.

  12. Kinetic analysis of spinodal decomposition process in Fe-Cr alloys by small angle neutron scattering

    SciTech Connect

    Ujihara, T.; Osamura, K.

    2000-04-19

    The rate of spinodal decomposition depends on the spatial composition distribution. In order to estimate the time dependence of this rate experimentally, the structure change was investigated in Fe-30 at.% Cr and Fe-50 at.% Cr alloys aged at 748, 773, 798, and 823 K via small angle neutron scattering, and a kinetic analysis of the experimental data was carried out using the Langer-Bar-on-Miller (LBM) theory. The theory contains a rate term with a physical meaning similar to the diffusion coefficient. The analysis makes clear that this rate term decreases as decomposition advances, a fact that can be explained by the modified LBM theory, which considers composition-dependent mobility.

  13. Conceptualizing and Estimating Process Speed in Studies Employing Ecological Momentary Assessment Designs: A Multilevel Variance Decomposition Approach

    PubMed Central

    Shiyko, Mariya P.; Ram, Nilam

    2012-01-01

    Researchers have been making use of ecological momentary assessment (EMA) and other study designs that sample feelings and behaviors in real time and in naturalistic settings to study temporal dynamics and contextual factors of a wide variety of psychological, physiological, and behavioral processes. As EMA designs become more widespread, questions are arising about the frequency of data sampling, with direct implications for participants’ burden and researchers’ ability to capture and study dynamic processes. Traditionally, spectral analytic techniques are used for time series data to identify process speed. However, the nature of EMA data, often collected with fewer than 100 measurements per person, sampled at randomly spaced intervals, and replete with planned and unplanned missingness, precludes application of traditional spectral analytic techniques. Building on principles of variance partitioning used in the generalizability theory of measurement and spectral analysis, we illustrate the utility of multilevel variance decompositions for isolating process speed in EMA-type data. Simulation and empirical data from a smoking-cessation study are used to demonstrate the method and to evaluate the process speed of smoking urges and quitting self-efficacy. Results of the multilevel variance decomposition approach can inform process-oriented theory and future EMA study designs. PMID:22707796
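
    The variance-partitioning idea underlying the approach can be illustrated with a toy numpy example: simulate two-level EMA-like data and split the total variance into between-person and within-person components. This is a simplified ICC-style split on hypothetical data, not the authors' full multilevel model:

```python
import numpy as np

rng = np.random.default_rng(2)
n_people, n_obs = 50, 40

# Simulate EMA-like data: stable person-level means plus momentary fluctuation.
person_mean = rng.normal(0.0, 2.0, size=(n_people, 1))    # between-person signal
momentary = rng.normal(0.0, 1.0, size=(n_people, n_obs))  # within-person signal
data = person_mean + momentary

grand_mean = data.mean()
between_var = ((data.mean(axis=1) - grand_mean) ** 2).mean()  # variance of person means
within_var = data.var(axis=1).mean()                          # mean within-person variance
total_var = data.var()

# Share of total variance attributable to stable between-person differences.
icc = between_var / (between_var + within_var)
```

With equal numbers of observations per person, the two components sum exactly to the total variance (law of total variance); extending the split across multiple time bands is what lets the multilevel approach isolate process speed.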

  14. Mathematical modeling and investigations of the processes of heat conduction of ammonium perchlorate with phase transitions in thermal decomposition and gasification

    NASA Astrophysics Data System (ADS)

    Mikhailov, A. V.; Lagun, I. M.; Polyakov, E. P.

    2013-01-01

    Transient heat-conduction processes occurring in the period of thermal decomposition and gasification of a crystalline oxidant — ammonium perchlorate — have been investigated and analyzed on the basis of the developed mathematical model.

  15. Block term decomposition for modelling epileptic seizures

    NASA Astrophysics Data System (ADS)

    Hunyadi, Borbála; Camps, Daan; Sorber, Laurent; Paesschen, Wim Van; Vos, Maarten De; Huffel, Sabine Van; Lathauwer, Lieven De

    2014-12-01

    Recordings of neural activity, such as EEG, are an inherent mixture of different ongoing brain processes as well as artefacts, and are typically characterised by a low signal-to-noise ratio. Moreover, EEG datasets are often inherently multidimensional, comprising information in time, along different channels, subjects, trials, etc. Additional information may be conveyed by expanding the signal into even more dimensions, e.g. incorporating spectral features by applying a wavelet transform. The underlying sources might show differences in each of these modes. Therefore, tensor-based blind source separation techniques, which can extract the sources of interest from such multiway arrays while simultaneously exploiting the signal characteristics in all dimensions, have gained increasing interest. Canonical polyadic decomposition (CPD) has been successfully used to extract epileptic seizure activity from wavelet-transformed EEG data (Bioinformatics 23(13):i10-i18, 2007; NeuroImage 37:844-854, 2007), where each source is described by a rank-1 tensor, i.e. by the combination of one particular temporal, spectral and spatial signature. However, in certain scenarios, where the seizure pattern is nonstationary, such a trilinear signal model is insufficient. Here, we present the application of a recently introduced technique, called block term decomposition (BTD), to separate EEG tensors into rank-(Lr, Lr, 1) terms, allowing more variability in the data to be modelled than would be possible with CPD. In a simulation study, we investigate the robustness of BTD against noise and different choices of model parameters. Furthermore, we show various real EEG recordings where BTD outperforms CPD in capturing complex seizure characteristics.
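
    BTD itself is implemented in specialised toolboxes (e.g. Tensorlab); as a point of reference, the simpler trilinear CPD model that the abstract contrasts it with can be sketched in plain numpy via alternating least squares. This is an illustrative minimal implementation, not the algorithm used in the paper:

```python
import numpy as np

def khatri_rao(X, Y):
    """Column-wise Khatri-Rao product: (I*J, R) from (I, R) and (J, R)."""
    R = X.shape[1]
    return np.einsum('ir,jr->ijr', X, Y).reshape(-1, R)

def cp_als(T, rank, n_iter=200, seed=0):
    """Fit T[i,j,k] ~ sum_r A[i,r] B[j,r] C[k,r] by alternating least squares."""
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    A, B, C = (rng.standard_normal((n, rank)) for n in (I, J, K))
    for _ in range(n_iter):
        A = T.reshape(I, J * K) @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = T.transpose(1, 0, 2).reshape(J, I * K) @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = T.transpose(2, 0, 1).reshape(K, I * J) @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C

# Recover an exactly rank-2 synthetic 3-way tensor (time x channel x scale, say).
rng = np.random.default_rng(3)
A0, B0, C0 = (rng.standard_normal((n, 2)) for n in (6, 5, 4))
T = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
A, B, C = cp_als(T, rank=2)
rel_err = np.linalg.norm(np.einsum('ir,jr,kr->ijk', A, B, C) - T) / np.linalg.norm(T)
```

BTD generalises this by replacing each rank-1 term with a rank-(Lr, Lr, 1) block, so a single source can carry a nonstationary time-frequency signature instead of a fixed one.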

  16. Toluene decomposition performance and NOx by-product formation during a DBD-catalyst process.

    PubMed

    Guo, Yufang; Liao, Xiaobin; Fu, Mingli; Huang, Haibao; Ye, Daiqi

    2015-02-01

    Characteristics of toluene decomposition and formation of nitrogen oxide (NOx) by-products were investigated in a dielectric barrier discharge (DBD) reactor with/without catalyst at room temperature and atmospheric pressure. Four kinds of metal oxides, i.e., manganese oxide (MnOx), iron oxide (FeOx), cobalt oxide (CoOx) and copper oxide (CuO), supported on Al2O3/nickel foam, were used as catalysts. It was found that introducing catalysts could improve toluene removal efficiency, promote decomposition of by-product ozone and enhance CO2 selectivity. In addition, NOx was suppressed with the decrease of specific energy density (SED) and the increase of humidity, gas flow rate and toluene concentration, or catalyst introduction. Among the four kinds of catalysts, the CuO catalyst showed the best performance in NOx suppression. The MnOx catalyst exhibited the lowest concentration of O3 and highest CO2 selectivity but the highest concentration of NOx. A possible pathway for NOx production in DBD was discussed. The contributions of oxygen active species and hydroxyl radicals are dominant in NOx suppression. PMID:25662254

  17. Automatic pitch decomposition for improved process window when printing dense features at k1eff < 0.20

    NASA Astrophysics Data System (ADS)

    Huckabay, Judy; Staud, Wolf; Naber, Robert; Dusa, Mircea; Flagello, Donis; Socha, Robert

    2006-05-01

    In conventional IC processes, the smallest size of any features that can be created on a wafer is severely limited by the pitch of the processing system. This approach is a key enabler of printing mask features on wafers without requiring new manufacturing equipment and with minor changes to existing manufacturing processes. The approach also does not require restrictions on the design of the chip. This paper will discuss the method and full-chip decomposition tool used to determine locations to split the layout. It will demonstrate examples of over-constrained layouts and how these configurations are mitigated. It will also show the reticle enhancement techniques used to process the split layouts and the Lithographic Checking used to verify the lithographic results.

  18. A uniform parameterization of moment tensors

    NASA Astrophysics Data System (ADS)

    Tape, C.; Tape, W.

    2015-12-01

    A moment tensor is a 3 x 3 symmetric matrix that expresses an earthquake source. We construct a parameterization of the five-dimensional space of all moment tensors of unit norm. The coordinates associated with the parameterization are closely related to moment tensor orientations and source types. The parameterization is uniform, in the sense that equal volumes in the coordinate domain of the parameterization correspond to equal volumes of moment tensors. Uniformly distributed points in the coordinate domain therefore give uniformly distributed moment tensors. A Cartesian grid in the coordinate domain can be used to search efficiently over moment tensors. We find that uniformly distributed moment tensors have uniformly distributed orientations (eigenframes), but that their source types (eigenvalue triples) are distributed so as to favor double couples. An appropriate choice of a priori moment tensor probability is a prerequisite for parameter estimation. As a seemingly sensible choice, we consider the homogeneous probability, in which equal volumes of moment tensors are equally likely. We believe that it will lead to improved characterization of source processes.

  19. Singular value decomposition for genome-wide expression data processing and modeling

    PubMed Central

    Alter, Orly; Brown, Patrick O.; Botstein, David

    2000-01-01

    We describe the use of singular value decomposition in transforming genome-wide expression data from genes × arrays space to reduced diagonalized “eigengenes” × “eigenarrays” space, where the eigengenes (or eigenarrays) are unique orthonormal superpositions of the genes (or arrays). Normalizing the data by filtering out the eigengenes (and eigenarrays) that are inferred to represent noise or experimental artifacts enables meaningful comparison of the expression of different genes across different arrays in different experiments. Sorting the data according to the eigengenes and eigenarrays gives a global picture of the dynamics of gene expression, in which individual genes and arrays appear to be classified into groups of similar regulation and function, or similar cellular state and biological phenotype, respectively. After normalization and sorting, the significant eigengenes and eigenarrays can be associated with observed genome-wide effects of regulators, or with measured samples, in which these regulators are overactive or underactive, respectively. PMID:10963673
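
    The eigengene/eigenarray construction is a plain SVD of the expression matrix. A minimal numpy sketch on synthetic data (the names `eigengenes`/`eigenarrays` follow the paper's terminology; the data are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(4)
n_genes, n_arrays = 200, 12

# Synthetic expression matrix dominated by two temporal "modes" plus noise.
t = np.linspace(0, 2 * np.pi, n_arrays)
pattern1 = np.outer(rng.standard_normal(n_genes), np.sin(t))
pattern2 = np.outer(rng.standard_normal(n_genes), np.cos(t))
data = pattern1 + 0.5 * pattern2 + 0.05 * rng.standard_normal((n_genes, n_arrays))

U, s, Vt = np.linalg.svd(data, full_matrices=False)
eigengenes = Vt              # rows: orthonormal expression patterns across arrays
eigenarrays = U              # columns: orthonormal patterns across genes
fraction = s**2 / np.sum(s**2)  # "eigenexpression" captured by each mode

# Filtering step: keep only the leading modes to remove noise/artifact eigengenes.
denoised = U[:, :2] @ np.diag(s[:2]) @ Vt[:2, :]
```

Sorting genes and arrays by their projections onto the leading eigengenes and eigenarrays is what produces the global picture of regulation described above.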

  20. Tensor hypercontraction. II. Least-squares renormalization.

    PubMed

    Parrish, Robert M; Hohenstein, Edward G; Martínez, Todd J; Sherrill, C David

    2012-12-14

    The least-squares tensor hypercontraction (LS-THC) representation for the electron repulsion integral (ERI) tensor is presented. Recently, we developed the generic tensor hypercontraction (THC) ansatz, which represents the fourth-order ERI tensor as a product of five second-order tensors [E. G. Hohenstein, R. M. Parrish, and T. J. Martínez, J. Chem. Phys. 137, 044103 (2012)]. Our initial algorithm for the generation of the THC factors involved a two-sided invocation of overlap-metric density fitting, followed by a PARAFAC decomposition, and is denoted PARAFAC tensor hypercontraction (PF-THC). LS-THC supersedes PF-THC by producing the THC factors through a least-squares renormalization of a spatial quadrature over the otherwise singular 1/r12 operator. Remarkably, an analytical and simple formula for the LS-THC factors exists. Using this formula, the factors may be generated with O(N^5) effort if exact integrals are decomposed, or O(N^4) effort if the decomposition is applied to density-fitted integrals, using any choice of density fitting metric. The accuracy of LS-THC is explored for a range of systems using both conventional and density-fitted integrals in the context of MP2. The grid fitting error is found to be negligible even for extremely sparse spatial quadrature grids. For the case of density-fitted integrals, the additional error incurred by the grid fitting step is generally markedly smaller than the underlying Coulomb-metric density fitting error. The present results, coupled with our previously published factorizations of MP2 and MP3, provide an efficient, robust O(N^4) approach to both methods. Moreover, LS-THC is generally applicable to many other methods in quantum chemistry. PMID:23248986

  1. Tensor hypercontraction. II. Least-squares renormalization

    NASA Astrophysics Data System (ADS)

    Parrish, Robert M.; Hohenstein, Edward G.; Martínez, Todd J.; Sherrill, C. David

    2012-12-01

    The least-squares tensor hypercontraction (LS-THC) representation for the electron repulsion integral (ERI) tensor is presented. Recently, we developed the generic tensor hypercontraction (THC) ansatz, which represents the fourth-order ERI tensor as a product of five second-order tensors [E. G. Hohenstein, R. M. Parrish, and T. J. Martínez, J. Chem. Phys. 137, 044103 (2012)], 10.1063/1.4732310. Our initial algorithm for the generation of the THC factors involved a two-sided invocation of overlap-metric density fitting, followed by a PARAFAC decomposition, and is denoted PARAFAC tensor hypercontraction (PF-THC). LS-THC supersedes PF-THC by producing the THC factors through a least-squares renormalization of a spatial quadrature over the otherwise singular 1/r12 operator. Remarkably, an analytical and simple formula for the LS-THC factors exists. Using this formula, the factors may be generated with O(N^5) effort if exact integrals are decomposed, or O(N^4) effort if the decomposition is applied to density-fitted integrals, using any choice of density fitting metric. The accuracy of LS-THC is explored for a range of systems using both conventional and density-fitted integrals in the context of MP2. The grid fitting error is found to be negligible even for extremely sparse spatial quadrature grids. For the case of density-fitted integrals, the additional error incurred by the grid fitting step is generally markedly smaller than the underlying Coulomb-metric density fitting error. The present results, coupled with our previously published factorizations of MP2 and MP3, provide an efficient, robust O(N^4) approach to both methods. Moreover, LS-THC is generally applicable to many other methods in quantum chemistry.
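
    Spelled out, the THC ansatz factorizes the fourth-order ERI tensor into five second-order tensors: collocation matrices X (orbital index by grid point) and a core matrix Z. In common notation (a sketch of the published form, with P, Q indexing quadrature grid points):

```latex
(pq|rs) \;\approx\; \sum_{P,Q} X_{p}^{P}\, X_{q}^{P}\; Z^{PQ}\; X_{r}^{Q}\, X_{s}^{Q}
```

Because each orbital pair is collapsed onto a single grid index, contractions over the ERI tensor can be reorganised into sequences of matrix multiplications, which is the source of the reduced O(N^4) scaling quoted above.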

  2. Randomized interpolative decomposition of separated representations

    NASA Astrophysics Data System (ADS)

    Biagioni, David J.; Beylkin, Daniel; Beylkin, Gregory

    2015-01-01

    We introduce an algorithm to compute tensor interpolative decomposition (dubbed CTD-ID) for the reduction of the separation rank of Canonical Tensor Decompositions (CTDs). Tensor ID selects, for a user-defined accuracy ɛ, a near optimal subset of terms of a CTD to represent the remaining terms via a linear combination of the selected terms. CTD-ID can be used as an alternative to or in combination with the Alternating Least Squares (ALS) algorithm. We present examples of its use within a convergent iteration to compute inverse operators in high dimensions. We also briefly discuss the spectral norm as a computational alternative to the Frobenius norm in estimating approximation errors of tensor ID. We reduce the problem of finding tensor IDs to that of constructing interpolative decompositions of certain matrices. These matrices are generated via randomized projection of the terms of the given tensor. We provide cost estimates and several examples of the new approach to the reduction of separation rank.
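
    The matrix interpolative decomposition that tensor ID reduces to is available in SciPy's `scipy.linalg.interpolative` module. A minimal sketch on a synthetic low-rank matrix (this illustrates matrix ID only, not the CTD-ID algorithm itself):

```python
import numpy as np
import scipy.linalg.interpolative as sli

# Build a numerically rank-3 matrix from random rank-3 factors.
rng = np.random.default_rng(5)
A = rng.standard_normal((60, 3)) @ rng.standard_normal((3, 40))

k = 3
# Deterministic ID: choose k "skeleton" columns (idx[:k]) and coefficients (proj)
# expressing the remaining columns as linear combinations of the skeleton.
idx, proj = sli.interp_decomp(A, k, rand=False)
B = sli.reconstruct_skel_matrix(A, k, idx)          # the k selected columns of A
A_approx = sli.reconstruct_matrix_from_id(B, idx, proj)
rel_err = np.linalg.norm(A_approx - A) / np.linalg.norm(A)
```

Passing `rand=True` (the default) instead uses the randomized projection variant, which is the regime the abstract describes: the ID is computed from a random sketch of the terms rather than from the full matrix.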

  3. Seismically Inferred Rupture Process of the 2011 Tohoku-Oki Earthquake by Using Data-Validated 3D and 2.5D Green's Tensor Waveforms

    NASA Astrophysics Data System (ADS)

    Okamoto, T.; Takenaka, H.; Hara, T.; Nakamura, T.; Aoki, T.

    2014-12-01

    We analyze "seismic" rupture process of the March 11, 2011 Tohoku-Oki earthquake (GCMT Mw9.1) by using a non-linear multi-time-window waveform inversion method. We incorporate the effect of the near-source laterally heterogeneous structure on the synthetic Green's tensor waveforms; otherwise the analysis may result in erroneous solutions [1]. To increase the resolution we use teleseismic and strong-motion seismograms jointly because the one-sided distribution of strong-motion station may cause reduced resolution near the trench axis [2]. We use a 2.5D FDM [3] for teleseismic P-waves and a full 3D FDM that incorporates topography, oceanic water layer, 3D heterogeneity and attenuation for strong-motions [4]. We apply multi-GPU acceleration by using the TSUBAME supercomputer in Tokyo Institute of Technology [5]. We "validated" the Green's tensor waveforms with a point-source moment tensor inversion analysis for a small (Mw5.8) shallow event: we confirm the observed waveforms are reproduced well with the synthetics.The inferred slip distribution using the 2.5D and 3D Green's functions has large slips (max. 37 m) near the hypocenter and small slips near the trench (figure). Also an isolated slip region is identified close to Fukushima prefecture. These features are similar to those obtained by our preliminary study [4]. The land-ward large slips and trench-ward small slips have also been reported by [2]. It is remarkable that we confirmed these features by using data-validated Green's functions. On the other hand very large slips are inferred close to the trench when we apply "1D" Green's functions that do not incorporate the lateral heterogeneity. Our result suggests the trench-ward large deformation that caused large tsunamis did not radiate strong seismic waves. 
Very slow slips (e.g., the tsunami earthquake), delayed slips and anelastic deformation are among the candidates of the physical processes of the deformation.[1] Okamoto and Takenaka, EPS, 61, e17-e20, 2009
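
Although the study above uses a non-linear multi-time-window inversion, the core step of fitting observed waveforms with Green's-function synthetics can be sketched as a damped least-squares problem. This is a minimal linearized illustration with hypothetical matrix sizes; `invert_slip` is an illustrative name, not the authors' code:

```python
import numpy as np

def invert_slip(G, d, damping=1e-3):
    """Damped least-squares solve of d ~ G @ m: columns of G hold
    Green's-function waveforms for unit slip on each subfault/time
    window, and m is the slip vector to recover."""
    GtG = G.T @ G + damping * np.eye(G.shape[1])
    return np.linalg.solve(GtG, G.T @ d)

# Toy demo with synthetic "waveforms": 120 data samples, 6 subfaults.
rng = np.random.default_rng(0)
G = rng.standard_normal((120, 6))
m_true = np.array([0.0, 2.0, 37.0, 5.0, 1.0, 0.0])  # large slip on one patch
d = G @ m_true
m_est = invert_slip(G, d, damping=1e-8)
```

With noise-free data and negligible damping the toy inversion recovers the slip vector exactly; in practice the damping term stabilizes the solution against noise and poor resolution near the trench.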

  4. Process versus product in social learning: comparative diffusion tensor imaging of neural systems for action execution-observation matching in macaques, chimpanzees, and humans.

    PubMed

    Hecht, Erin E; Gutman, David A; Preuss, Todd M; Sanchez, Mar M; Parr, Lisa A; Rilling, James K

    2013-05-01

    Social learning varies among primate species. Macaques only copy the product of observed actions, or emulate, while humans and chimpanzees also copy the process, or imitate. In humans, imitation is linked to the mirror system. Here we compare mirror system connectivity across these species using diffusion tensor imaging. In macaques and chimpanzees, the preponderance of this circuitry consists of frontal-temporal connections via the extreme/external capsules. In contrast, humans have more substantial temporal-parietal and frontal-parietal connections via the middle/inferior longitudinal fasciculi and the third branch of the superior longitudinal fasciculus. In chimpanzees and humans, but not in macaques, this circuitry includes connections with inferior temporal cortex. In humans alone, connections with superior parietal cortex were also detected. We suggest a model linking species differences in mirror system connectivity and responsivity with species differences in behavior, including adaptations for imitation and social learning of tool use. PMID:22539611

  5. When Policy Structures Technology: Balancing upfront decomposition and in-process coordination in Europe's decentralized space technology ecosystem

    NASA Astrophysics Data System (ADS)

    Vrolijk, Ademir; Szajnfarber, Zoe

    2015-01-01

    This paper examines the decentralization of European space technology research and development through the joint lenses of policy, systems architecture, and innovation contexts. It uses a detailed longitudinal case history of the development of a novel astrophysics instrument to explore the link between policy-imposed institutional decomposition and the architecture of the technical system. The analysis focuses on five instances of collaborative design decision-making and finds that matching between the technical and institutional architectures is a predictor of project success, consistent with the mirroring hypothesis in extant literature. Examined over time, the instances reveal stability in the loosely coupled nature of institutional arrangements and a trend towards more integral, or tightly coupled, technical systems. The stability of the institutional arrangements is explained as an artifact of the European Hultqvist policy and the trend towards integral technical systems is related to the increasing complexity of modern space systems. If these trends persist, the scale of the mismatch will continue to grow. As a first step towards mitigating this challenge, the paper develops a framework for balancing upfront decomposition and in-process coordination in collaborative development projects. The astrophysics instrument case history is used to illustrate how collaborations should be defined for a given inherent system complexity.

  6. Unraveling the Decomposition Process of Lead(II) Acetate: Anhydrous Polymorphs, Hydrates, and Byproducts and Room Temperature Phosphorescence.

    PubMed

    Martínez-Casado, Francisco J; Ramos-Riesco, Miguel; Rodríguez-Cheda, José A; Cucinotta, Fabio; Matesanz, Emilio; Miletto, Ivana; Gianotti, Enrica; Marchese, Leonardo; Matěj, Zdeněk

    2016-09-01

    Lead(II) acetate [Pb(Ac)2, where Ac = the acetate group (CH3COO−)] is a very common salt with many and varied uses throughout history. However, only lead(II) acetate trihydrate [Pb(Ac)2·3H2O] has been characterized to date. In this paper, two enantiotropic polymorphs of the anhydrous salt, a novel hydrate [lead(II) acetate hemihydrate: Pb(Ac)2·½H2O], and two decomposition products [corresponding to two different basic lead(II) acetates: Pb4O(Ac)6 and Pb2O(Ac)2] are reported, with their structures solved for the first time. The compounds present a variety of molecular arrangements, being 2D or 1D coordination polymers. A thorough thermal analysis, by differential scanning calorimetry (DSC) and thermogravimetric analysis (TGA), was also carried out to study the thermal behavior of the salt and its decomposition process, in inert and oxygenated atmospheres, identifying the phases and byproducts that appear. The complex thermal behavior of lead(II) acetate is thus resolved, establishing the existence of a further hydrate, two anhydrous enantiotropic polymorphs, and several byproducts. Moreover, some of them are phosphorescent at room temperature. The compounds were studied by TGA, DSC, X-ray diffraction, and UV-vis spectroscopy. PMID:27548299

  7. A Domain Decomposition Approach for Large-Scale Simulations of Flow Processes in Hydrate-Bearing Geologic Media

    SciTech Connect

    Zhang, Keni; Moridis, G.J.; Wu, Y.-S.; Pruess, K.

    2008-07-01

    Simulation of the system behavior of hydrate-bearing geologic media involves solving fully coupled mass- and heat-balance equations. In this study, we develop a domain decomposition approach for large-scale gas hydrate simulations with coarse-granularity parallel computation. This approach partitions a simulation domain into small subdomains. The full model domain, consisting of discrete subdomains, is still simulated simultaneously by using multiple processes/processors. Each processor is dedicated to the following tasks for its partitioned subdomain: updating thermophysical properties, assembling mass- and energy-balance equations, solving linear equation systems, and performing various other local computations. The linearized equation systems are solved in parallel with a parallel linear solver, using an efficient interprocess communication scheme. This new domain decomposition approach has been implemented into the TOUGH+HYDRATE code and has demonstrated excellent speedup and good scalability. In this paper, we demonstrate applications of the new approach in simulating field-scale models for gas production from gas-hydrate deposits.
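
The subdomain partitioning idea can be illustrated with a toy 1-D Jacobi iteration in which each "process" updates only its own slice and reads one ghost value from its neighbours per sweep. This is a minimal serial sketch of the concept, not the TOUGH+HYDRATE implementation:

```python
import numpy as np

def jacobi_decomposed(u, n_sub, n_iter):
    """1-D Jacobi smoothing with the grid split into n_sub subdomains.
    Each 'process' p updates only its own slice, reading neighbour
    (ghost) values from the previous iterate u."""
    u = u.astype(float).copy()
    edges = np.linspace(0, u.size, n_sub + 1, dtype=int)
    for _ in range(n_iter):
        new = u.copy()
        for p in range(n_sub):                    # in MPI, each p is a rank
            lo, hi = edges[p], edges[p + 1]
            for i in range(max(lo, 1), min(hi, u.size - 1)):
                new[i] = 0.5 * (u[i - 1] + u[i + 1])
        u = new
    return u

# Partitioning does not change the answer: each sweep reads only old
# values, so 1 subdomain and 4 subdomains give identical results.
u0 = np.sin(np.linspace(0.0, np.pi, 16))
u_serial = jacobi_decomposed(u0, n_sub=1, n_iter=20)
u_split = jacobi_decomposed(u0, n_sub=4, n_iter=20)
```

The key property the sketch demonstrates is that the decomposed update reproduces the single-domain result, which is what allows the full model domain to be simulated simultaneously across processors.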

  8. 3D reconstruction of tensors and vectors

    SciTech Connect

    Defrise, Michel; Gullberg, Grant T.

    2005-02-17

    Here we have developed formulations for the reconstruction of 3D tensor fields from planar (Radon) and line-integral (X-ray) projections of 3D vector and tensor fields. Much of the motivation for this work is the potential application of MRI to perform diffusion tensor tomography. The goal is to develop a theory for the reconstruction of both Radon planar and X-ray or line-integral projections because of the flexibility of MRI to obtain both of these types of projections in 3D. The development presented here for the linear tensor tomography problem provides insight into the structure of the nonlinear MRI diffusion tensor inverse problem. A particular application of tensor imaging in MRI is the potential application of cardiac diffusion tensor tomography for determining in vivo cardiac fiber structure. One difficulty in the cardiac application is the motion of the heart. This presents a need for developing future theory for tensor tomography in a motion field. This means developing a better understanding of the MRI signal for diffusion processes in a deforming medium. The techniques developed may allow the application of MRI tensor tomography for the study of the structure of fiber tracts in the brain, atherosclerotic plaque, and spine in addition to fiber structure in the heart. However, the relations presented are also applicable to other fields in medical imaging such as diffraction tomography using ultrasound. The mathematics presented can also be extended to the exponential Radon transform of tensor fields and to other geometric acquisitions such as cone beam tomography of tensor fields.
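
The linearity that makes this a linear inverse problem can be shown with a trivially discretized sketch: the X-ray (line-integral) transform of a sampled tensor field acts componentwise and linearly. The function name and grid layout here are illustrative assumptions, not the paper's formulation:

```python
import numpy as np

def xray_project(field, axis=0):
    """Discrete line-integral (X-ray) projection of a sampled tensor field.
    `field` has shape (nx, ny, 3): the independent components
    (Txx, Txy, Tyy) of a 2-D symmetric tensor on a grid; summing along a
    grid axis approximates the line integral of each component."""
    return field.sum(axis=axis)

# Linearity in the tensor field is what distinguishes the linear tensor
# tomography problem from the nonlinear MRI diffusion inverse problem.
rng = np.random.default_rng(0)
T1 = rng.standard_normal((8, 8, 3))
T2 = rng.standard_normal((8, 8, 3))
lhs = xray_project(T1 + 2.0 * T2)
rhs = xray_project(T1) + 2.0 * xray_project(T2)
```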

  9. Applying matching pursuit decomposition time-frequency processing to UGS footstep classification

    NASA Astrophysics Data System (ADS)

    Larsen, Brett W.; Chung, Hugh; Dominguez, Alfonso; Sciacca, Jacob; Kovvali, Narayan; Papandreou-Suppappola, Antonia; Allee, David R.

    2013-06-01

    The challenge of rapid footstep detection and classification in remote locations has long been an important area of study for defense technology and national security. Also, as the military seeks to create effective and disposable unattended ground sensors (UGS), computational complexity and power consumption have become essential considerations in the development of classification techniques. In response to these issues, a research project at the Flexible Display Center at Arizona State University (ASU) has experimented with footstep classification using the matching pursuit decomposition (MPD) time-frequency analysis method. The MPD provides a parsimonious signal representation by iteratively selecting matched signal components from a pre-determined dictionary. The resulting time-frequency representation of the decomposed signal provides distinctive features for different types of footsteps, including footsteps during walking or running activities. The MPD features were used in a Bayesian classification method to successfully distinguish between the different activities. The computational cost of the iterative MPD algorithm was reduced, without significant loss in performance, using a modified MPD with a dictionary consisting of signals matched to cadence temporal gait patterns obtained from real seismic measurements. The classification results were demonstrated with real data from footsteps under various conditions recorded using a low-cost seismic sensor.
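
The core MPD iteration, greedily matching dictionary atoms to the residual, can be sketched as follows. This is a generic matching pursuit with an orthonormal toy dictionary, not the modified cadence-matched dictionary described in the record:

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_iter=10):
    """Greedy matching pursuit: at each step pick the dictionary atom
    (a unit-norm column of `dictionary`) most correlated with the
    residual, record its coefficient, and subtract its contribution."""
    residual = signal.astype(float).copy()
    coeffs, atoms = [], []
    for _ in range(n_iter):
        corr = dictionary.T @ residual          # correlation with every atom
        k = int(np.argmax(np.abs(corr)))        # best-matching atom
        coeffs.append(corr[k])
        atoms.append(k)
        residual -= corr[k] * dictionary[:, k]  # remove that component
    return np.array(coeffs), np.array(atoms), residual

# Toy demo: with an orthonormal dictionary, a 2-atom signal is recovered
# exactly in 2 iterations.
rng = np.random.default_rng(0)
D = np.linalg.qr(rng.standard_normal((64, 8)))[0]   # 8 orthonormal atoms
x = 3.0 * D[:, 2] - 1.5 * D[:, 5]
c, idx, r = matching_pursuit(x, D, n_iter=2)
```

The selected atom indices and coefficients play the role of the time-frequency features fed to the Bayesian classifier; restricting the dictionary (as the authors do with cadence-matched atoms) reduces the cost of the correlation step.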

  10. The Search for a Volatile Human Specific Marker in the Decomposition Process

    PubMed Central

    Rosier, E.; Loix, S.; Develter, W.; Van de Voorde, W.; Tytgat, J.; Cuypers, E.

    2015-01-01

    In this study, a validated method using a thermal desorber combined with a gas chromatograph coupled to mass spectrometry was used to identify the volatile organic compounds released during the decomposition of 6 human and 26 animal remains in a laboratory environment over a period of 6 months. 452 compounds were identified. Among them, a human-specific marker was sought using principal component analysis. We found a combination of 8 compounds (ethyl propionate, propyl propionate, propyl butyrate, ethyl pentanoate, pyridine, diethyl disulfide, methyl(methylthio)ethyl disulfide and 3-methylthio-1-propanol) that distinguished human and pig remains from the other animal remains. Furthermore, it was possible to separate the pig remains from the human remains based on 5 esters (3-methylbutyl pentanoate, 3-methylbutyl 3-methylbutyrate, 3-methylbutyl 2-methylbutyrate, butyl pentanoate and propyl hexanoate). Further research in the field with full bodies is needed to corroborate these results and to search for one or more human-specific markers. Such markers would allow more efficient training of cadaver dogs, or the development of portable detection devices. PMID:26375029
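
The marker search rests on principal component analysis of a samples-by-compounds abundance matrix. A minimal PCA-via-SVD sketch with hypothetical abundance data (the group structure and compound counts below are illustrative, not the study's data):

```python
import numpy as np

def pca_scores(X, n_components=2):
    """PCA via SVD of the mean-centered data matrix: rows are samples
    (remains), columns are compound abundances; returns the sample
    scores on the leading principal components."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

# Hypothetical data: two groups of profiles differing mainly in one
# compound separate cleanly along PC1.
rng = np.random.default_rng(1)
A = rng.normal(0.0, 0.1, size=(4, 5)); A[:, 0] += 5.0
B = rng.normal(0.0, 0.1, size=(4, 5))
scores = pca_scores(np.vstack([A, B]), n_components=1)
```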

  11. Multiple seismogenic processes for high-frequency earthquakes at Katmai National Park, Alaska: Evidence from stress tensor inversions of fault-plane solutions

    USGS Publications Warehouse

    Moran, S.C.

    2003-01-01

    The volcanological significance of seismicity within Katmai National Park has been debated since the first seismograph was installed in 1963, in part because Katmai seismicity consists almost entirely of high-frequency earthquakes that can be caused by a wide range of processes. I investigate this issue by determining 140 well-constrained first-motion fault-plane solutions for shallow (depth < 9 km) earthquakes occurring between 1995 and 2001 and inverting these solutions for the stress tensor in different regions within the park. Earthquakes removed by several kilometers from the volcanic axis occur in a stress field characterized by horizontally oriented σ1 and σ3 axes, with σ1 rotated slightly (12°) relative to the NUVEL-1A subduction vector, indicating that these earthquakes are occurring in response to regional tectonic forces. On the other hand, stress tensors for earthquake clusters beneath several volcanoes in the Katmai cluster have vertically oriented σ1 axes, indicating that these events are occurring in response to local, not regional, processes. At Martin-Mageik, the vertically oriented σ1 is most consistent with failure under edifice loading conditions in conjunction with localized pore pressure increases associated with hydrothermal circulation cells. At Trident-Novarupta, it is consistent with a number of possible models, including occurrence along fractures formed during the 1912 eruption that now serve as horizontal conduits for fluids and/or volatiles migrating from nearby degassing and cooling magma bodies. At Mount Katmai, it is most consistent with continued seismicity along ring-fracture systems created in the 1912 eruption, perhaps enhanced by circulating hydrothermal fluids and/or seepage from the caldera-filling lake.
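
Stress tensor inversions of fault-plane solutions rest on resolving a candidate stress tensor into shear and normal tractions on each fault plane (the Wallace-Bott step). A minimal sketch with hypothetical principal stresses, not the author's full inversion:

```python
import numpy as np

def resolved_traction(stress, normal):
    """Traction on a plane with unit normal `normal` under the stress
    tensor `stress`, split into shear and normal parts. Inversions seek
    the tensor whose resolved shear direction best matches observed slip."""
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)
    t = stress @ n                # traction vector on the plane
    t_norm = (t @ n) * n          # component normal to the plane
    return t - t_norm, t_norm     # (shear part, normal part)

# Hypothetical principal stresses sigma1=2, sigma2=1, sigma3=0 along the
# coordinate axes; a plane at 45 degrees to sigma1 carries maximal shear.
sigma = np.diag([2.0, 1.0, 0.0])
shear, normal_part = resolved_traction(sigma, [1.0, 1.0, 0.0])
```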

  12. Feasibility study: Application of the geopressured-geothermal resource to pyrolytic conversion or decomposition/detoxification processes

    SciTech Connect

    Propp, W.A.; Grey, A.E.; Negus-de Wys, J.; Plum, M.M.; Haefner, D.R.

    1991-09-01

    This study presents a preliminary evaluation of the technical and economic feasibility of selected conceptual processes for pyrolytic conversion of organic feedstocks or the decomposition/detoxification of hazardous wastes by coupling the process to the geopressured-geothermal resource. The report presents a detailed discussion of the resource and of each process selected for evaluation, including a technical evaluation of each. A separate section presents the economic methodology used and the evaluation of the technically viable process. A final section presents conclusions and recommendations. Three separate processes were selected for evaluation. These are pyrolytic conversion of biomass to petroleum-like fluids, wet air oxidation (WAO) at subcritical conditions for destruction of hazardous waste, and supercritical water oxidation (SCWO), also for the destruction of hazardous waste. The scientific feasibility of all three processes has been previously established by various bench-scale and pilot-scale studies. For a variety of reasons detailed in the report, the SCWO process is the only one deemed to be technically feasible, although the effects of the high solids content of the geothermal brine need further study. This technology shows tremendous promise for contributing to solving the nation's energy and hazardous waste problems. However, the current economic analysis suggests that it is uneconomical at this time. 50 refs., 5 figs., 7 tabs.

  13. Peatland Microbial Communities and Decomposition Processes in the James Bay Lowlands, Canada

    PubMed Central

    Preston, Michael D.; Smemo, Kurt A.; McLaughlin, James W.; Basiliko, Nathan

    2012-01-01

    Northern peatlands are a large repository of atmospheric carbon due to an imbalance between primary production by plants and microbial decomposition. The James Bay Lowlands (JBL) of northern Ontario are a large peatland-complex but remain relatively unstudied. Climate change models predict the region will experience warmer and drier conditions, potentially altering plant community composition, and shifting the region from a long-term carbon sink to a source. We collected a peat core from two geographically separated (ca. 200 km) ombrotrophic peatlands (Victor and Kinoje Bogs) and one minerotrophic peatland (Victor Fen) located near Victor Bog within the JBL. We characterized (i) archaeal, bacterial, and fungal community structure with terminal restriction fragment length polymorphism of ribosomal DNA, (ii) estimated microbial activity using community level physiological profiling and extracellular enzymes activities, and (iii) the aeration and temperature dependence of carbon mineralization at three depths (0–10, 50–60, and 100–110 cm) from each site. Similar dominant microbial taxa were observed at all three peatlands despite differences in nutrient content and substrate quality. In contrast, we observed differences in basal respiration, enzyme activity, and the magnitude of substrate utilization, which were all generally higher at Victor Fen and similar between the two bogs. However, there was no preferential mineralization of carbon substrates between the bogs and fens. Microbial community composition did not correlate with measures of microbial activity but pH was a strong predictor of activity across all sites and depths. Increased peat temperature and aeration stimulated CO2 production but this did not correlate with a change in enzyme activities. Potential microbial activity in the JBL appears to be influenced by the quality of the peat substrate and the presence of microbial inhibitors, which suggests the existing peat substrate will have a large

  14. Interaural cross correlation of event-related potentials and diffusion tensor imaging in the evaluation of auditory processing disorder: a case study.

    PubMed

    Jerger, James; Martin, Jeffrey; McColl, Roderick

    2004-01-01

    In a previous publication (Jerger et al, 2002), we presented event-related potential (ERP) data on a pair of 10-year-old twin girls (Twins C and E), one of whom (Twin E) showed strong evidence of auditory processing disorder. For the present paper, we analyzed cross-correlation functions of ERP waveforms generated in response to the presentation of target stimuli to either the right or left ears in a dichotic paradigm. There were four conditions; three involved the processing of real words for either phonemic, semantic, or spectral targets; one involved the processing of a nonword acoustic signal. Marked differences in the cross-correlation functions were observed. In the case of Twin C, cross-correlation functions were uniformly normal across both hemispheres. The functions for Twin E, however, suggest poorly correlated neural activity over the left parietal region during the three word processing conditions, and over the right parietal area in the nonword acoustic condition. Differences between the twins' brains were evaluated using diffusion tensor magnetic resonance imaging (DTI). For Twin E, results showed reduced anisotropy over the length of the midline corpus callosum and adjacent lateral structures, implying reduced myelin integrity. Taken together, these findings suggest that failure to achieve appropriate temporally correlated bihemispheric brain activity in response to auditory stimulation, perhaps as a result of faulty interhemispheric communication via corpus callosum, may be a factor in at least some children with auditory processing disorder. PMID:15030103
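
A minimal sketch of the normalized cross-correlation peak used to compare waveforms between recording sites: values near 1 indicate well-correlated activity, small peaks indicate poorly correlated activity. The toy waveforms below are illustrative, not the twins' ERP data:

```python
import numpy as np

def max_xcorr(a, b):
    """Peak of the normalized cross-correlation between two waveforms.
    Normalization makes the zero-lag value the Pearson correlation and
    bounds the peak by 1."""
    a = (a - a.mean()) / (a.std() * len(a))
    b = (b - b.mean()) / b.std()
    return float(np.max(np.correlate(a, b, mode="full")))

t = np.linspace(0.0, 1.0, 500)
erp = np.sin(2 * np.pi * 8 * t) * np.exp(-3 * t)   # toy ERP-like waveform
noise = np.random.default_rng(0).standard_normal(500)
```

An identical pair of waveforms yields a peak of 1, while an unrelated noise record yields a markedly lower peak, mirroring the normal-versus-reduced correlation contrast reported between the twins.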

  15. Coupling experimental data and a prototype model to probe the physical and chemical processes of 2,4-dinitroimidazole solid-phase thermal decomposition

    SciTech Connect

    Behrens, R.; Minier, L.; Bulusu, S.

    1998-12-31

    The time-dependent, solid-phase thermal decomposition behavior of 2,4-dinitroimidazole (2,4-DNI) has been measured utilizing simultaneous thermogravimetric modulated beam mass spectrometry (STMBMS) methods. The decomposition products consist of gaseous and non-volatile polymeric products. The temporal behavior of the gas formation rates of the identified products indicate that the overall thermal decomposition process is complex. In isothermal experiments with 2,4-DNI in the solid phase, four distinguishing features are observed: (1) elevated rates of gas formation are observed during the early stages of the decomposition, which appear to be correlated to the presence of exogenous water in the sample; (2) this is followed by a period of relatively constant rates of gas formation; (3) next, the rates of gas formation accelerate, characteristic of an autocatalytic reaction; (4) finally, the 2,4-DNI is depleted and gaseous decomposition products continue to evolve at a decreasing rate. A physicochemical and mathematical model of the decomposition of 2,4-DNI has been developed and applied to the experimental results. The first generation of this model is described in this paper. Differences between the first generation of the model and the experimental data collected under different conditions suggest refinements for the next generation of the model.

  16. Photocatalytic Decomposition of Methylene Blue Over MIL-53(Fe) Prepared Using Microwave-Assisted Process Under Visible Light Irradiation.

    PubMed

    Trinh, Nguyen Duy; Hong, Seong-Soo

    2015-07-01

    Iron-based MIL-53 crystals with uniform size were successfully synthesized using a microwave-assisted solvothermal method and characterized by XRD, FE-SEM and DRS. We also investigated the photocatalytic activity of MIL-53(Fe) for the decomposition of methylene blue using H2O2 as an electron acceptor. From the XRD and SEM results, fully crystallized MIL-53(Fe) materials were obtained regardless of the preparation method. From the DRS results, the MIL-53(Fe) samples prepared using the microwave-assisted process displayed absorption spectra extending into the visible region, and accordingly they showed high photocatalytic activity under visible light irradiation. The MIL-53(Fe) catalyst prepared with two rounds of microwave irradiation showed the highest activity. PMID:26373158

  17. EEG Classification for Hybrid Brain-Computer Interface Using a Tensor Based Multiclass Multimodal Analysis Scheme

    PubMed Central

    Ji, Hongfei; Li, Jie; Lu, Rongrong; Gu, Rong; Cao, Lei; Gong, Xiaoliang

    2016-01-01

    Electroencephalogram- (EEG-) based brain-computer interface (BCI) systems usually utilize one type of changes in the dynamics of brain oscillations for control, such as event-related desynchronization/synchronization (ERD/ERS), steady state visual evoked potential (SSVEP), and P300 evoked potentials. There is a recent trend to detect more than one of these signals in one system to create a hybrid BCI. However, in this case, EEG data were always divided into groups and analyzed by the separate processing procedures. As a result, the interactive effects were ignored when different types of BCI tasks were executed simultaneously. In this work, we propose an improved tensor based multiclass multimodal scheme especially for hybrid BCI, in which EEG signals are denoted as multiway tensors, a nonredundant rank-one tensor decomposition model is proposed to obtain nonredundant tensor components, a weighted fisher criterion is designed to select multimodal discriminative patterns without ignoring the interactive effects, and support vector machine (SVM) is extended to multiclass classification. Experiment results suggest that the proposed scheme can not only identify the different changes in the dynamics of brain oscillations induced by different types of tasks but also capture the interactive effects of simultaneous tasks properly. Therefore, it has great potential use for hybrid BCI. PMID:26880873
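
The rank-one building block of such tensor decompositions can be sketched with a generic higher-order power iteration. This is the standard alternating method for a best rank-one fit of a 3-way tensor, not the authors' nonredundant decomposition model:

```python
import numpy as np

def rank1_tensor(T, n_iter=50):
    """Best rank-one approximation of a 3-way tensor by alternating
    (higher-order power) iteration: T ~ s * a (outer) b (outer) c,
    with unit-norm factor vectors a, b, c and scale s."""
    I, J, K = T.shape
    rng = np.random.default_rng(0)
    b = rng.standard_normal(J); b /= np.linalg.norm(b)
    c = rng.standard_normal(K); c /= np.linalg.norm(c)
    for _ in range(n_iter):
        a = np.einsum('ijk,j,k->i', T, b, c); a /= np.linalg.norm(a)
        b = np.einsum('ijk,i,k->j', T, a, c); b /= np.linalg.norm(b)
        c = np.einsum('ijk,i,j->k', T, a, b); c /= np.linalg.norm(c)
    s = np.einsum('ijk,i,j,k->', T, a, b, c)
    return s, a, b, c

# Demo: an exact rank-one tensor T = 2 * a0 (outer) b0 (outer) c0 is
# recovered up to the usual sign ambiguity of the factors.
a0 = np.array([1.0, 0.0, 0.0])
b0 = np.array([0.0, 1.0, 0.0, 0.0])
c0 = np.array([0.6, 0.8])
T = 2.0 * np.einsum('i,j,k->ijk', a0, b0, c0)
s, a, b, c = rank1_tensor(T)
recon = s * np.einsum('i,j,k->ijk', a, b, c)
```

In an EEG setting the three modes would correspond to, e.g., channels, time, and frequency, and each extracted rank-one component serves as a candidate multimodal pattern.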

  19. Bowen-York tensors

    NASA Astrophysics Data System (ADS)

    Beig, Robert; Krammer, Werner

    2004-02-01

    For a conformally flat 3-space, we derive a family of linear second-order partial differential operators which send vectors into trace-free, symmetric 2-tensors. These maps, which are parametrized by conformal Killing vectors on the 3-space, are such that the divergence of the resulting tensor field depends only on the divergence of the original vector field. In particular, these maps send source-free electric fields into TT tensors. Moreover, if the original vector field is the Coulomb field on ℝ³\{0}, the resulting tensor fields on ℝ³\{0} are nothing but the family of TT tensors originally written down by Bowen and York.
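
For reference, the flat-space Bowen-York tensor associated with a linear momentum can be written down explicitly. The form below is the standard one from the numerical-relativity literature, given here as context for the family the abstract refers to (not reproduced from the paper itself):

```latex
% Bowen-York extrinsic curvature for linear momentum P^i on flat
% \mathbb{R}^3 \setminus \{0\}, with n^i = x^i/r the radial unit vector:
A^{ij}_{P} \;=\; \frac{3}{2r^{2}}\Bigl( P^{i}n^{j} + P^{j}n^{i}
      - \bigl(\delta^{ij} - n^{i}n^{j}\bigr)\,P_{k}n^{k} \Bigr),
% which is symmetric, trace-free, and divergence-free:
% \delta_{ij}A^{ij}_{P} = 0, \qquad \partial_{j}A^{ij}_{P} = 0,
% i.e. a TT tensor with respect to the flat metric.
```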

  20. A low-cost polysilicon process based on the synthesis and decomposition of dichlorosilane

    NASA Technical Reports Server (NTRS)

    Mccormick, J. R.; Plahutnik, F.; Sawyer, D.; Arvidson, A.; Goldfarb, S.

    1982-01-01

    Major process steps of a dichlorosilane based chemical vapor deposition (CVD) process for the production of polycrystalline silicon have been evaluated. While an economic analysis of the process indicates that it is not capable of meeting JPL/DOE price objectives ($14.00/kg in 1980 dollars), product price in the $19.00/kg to $25.00/kg range may be achieved. Product quality has been evaluated and ascertained to be comparable to semiconductor-grade polycrystalline silicon. Solar cells fabricated from the material are also equivalent to those fabricated from semiconductor-grade polycrystalline silicon.

  1. A low-cost polysilicon process based on the synthesis and decomposition of dichlorosilane

    SciTech Connect

    McCormick, J.R.; Arvidson, A.; Goldfarb, S.; Plahutnik, F.; Sayer, D.

    1982-09-01

    Major process steps of a dichlorosilane based chemical vapor deposition (CVD) process for the production of polycrystalline silicon have been evaluated. While an economic analysis of the process indicates that it is not capable of meeting JPL/DOE price objectives ($14.00/kg in 1980 dollars), product price in the $19.00/kg to $25.00/kg range may be achieved. Product quality has been evaluated and ascertained to be comparable to semiconductor-grade polycrystalline silicon. Solar cells fabricated from the material are also equivalent to those fabricated from semiconductor-grade polycrystalline silicon.

  2. Fundamental phenomena on fuel decomposition and boundary-layer combustion processes with applications to hybrid rocket motors

    NASA Astrophysics Data System (ADS)

    Kuo, Kenneth K.; Lu, Yeu-Cherng; Chiaverini, Martin J.; Harting, George C.; Johnson, David K.; Serin, Nadir

    The experimental study on the fundamental processes involved in fuel decomposition and boundary-layer combustion in hybrid rocket motors is continuously being conducted at the High Pressure Combustion Laboratory of The Pennsylvania State University. This research will provide a useful engineering technology base in the development of hybrid rocket motors as well as a fundamental understanding of the complex processes involved in hybrid propulsion. A high-pressure, 2-D slab motor has been designed, manufactured, and utilized for conducting seven test firings using HTPB fuel processed at PSU. A total of 20 fuel slabs have been received from the McDonnell Douglas Aerospace Corporation. Ten of these fuel slabs contain an array of fine-wire thermocouples for measuring solid fuel surface and subsurface temperatures. Diagnostic instrumentation used in the tests includes high-frequency pressure transducers for measuring static and dynamic motor pressures and fine-wire thermocouples for measuring solid fuel surface and subsurface temperatures. The ultrasonic pulse-echo technique as well as a real-time x-ray radiography system have been used to obtain independent measurements of instantaneous solid fuel regression rates.

  3. Achieving Low Overpotential Li-O₂ Battery Operations by Li₂O₂ Decomposition through One-Electron Processes.

    PubMed

    Xie, Jin; Dong, Qi; Madden, Ian; Yao, Xiahui; Cheng, Qingmei; Dornath, Paul; Fan, Wei; Wang, Dunwei

    2015-12-01

    As a promising high-capacity energy storage technology, Li-O2 batteries face two critical challenges, poor cycle lifetime and low round-trip efficiencies, both of which are connected to the high overpotentials. The problem is particularly acute during recharge, where the reactions typically follow two-electron mechanisms that are inherently slow. Here we present a strategy that can significantly reduce recharge overpotentials. Our approach seeks to promote Li2O2 decomposition by one-electron processes, and the key is to stabilize the important intermediate of superoxide species. With the introduction of a highly polarizing electrolyte, we observe that recharge processes are successfully switched from a two-electron pathway to a single-electron one. While a similar one-electron route has been reported for the discharge processes, it has rarely been described for recharge except for the initial stage, due to the poor mobilities of surface-bound superoxide ions (O2(-)), a necessary intermediate for the mechanism. Key to our observation is the solvation of O2(-) by an ionic liquid electrolyte (PYR14TFSI). Recharge overpotentials as low as 0.19 V at 100 mA/g(carbon) are measured. PMID:26583874

  4. Fundamental phenomena on fuel decomposition and boundary-layer combustion processes with applications to hybrid rocket motors

    NASA Technical Reports Server (NTRS)

    Kuo, Kenneth K.; Lu, Yeu-Cherng; Chiaverini, Martin J.; Harting, George C.; Johnson, David K.; Serin, Nadir

    1995-01-01

    The experimental study on the fundamental processes involved in fuel decomposition and boundary-layer combustion in hybrid rocket motors is continuously being conducted at the High Pressure Combustion Laboratory of The Pennsylvania State University. This research will provide a useful engineering technology base in the development of hybrid rocket motors as well as a fundamental understanding of the complex processes involved in hybrid propulsion. A high-pressure, 2-D slab motor has been designed, manufactured, and utilized for conducting seven test firings using HTPB fuel processed at PSU. A total of 20 fuel slabs have been received from the McDonnell Douglas Aerospace Corporation. Ten of these fuel slabs contain an array of fine-wire thermocouples for measuring solid fuel surface and subsurface temperatures. Diagnostic instrumentation used in the tests includes high-frequency pressure transducers for measuring static and dynamic motor pressures and fine-wire thermocouples for measuring solid fuel surface and subsurface temperatures. The ultrasonic pulse-echo technique as well as a real-time x-ray radiography system have been used to obtain independent measurements of instantaneous solid fuel regression rates.

  5. On the Decomposition of Martensite During Bake Hardening of Thermomechanically Processed TRIP Steels

    SciTech Connect

    Pereloma, E. V.; Miller, Michael K; Timokhina, I. B.

    2008-01-01

    Thermomechanically processed (TMP) CMnSi transformation-induced plasticity (TRIP) steels with and without additions of Nb, Mo, or Al were subjected to prestraining and bake hardening. Atom probe tomography (APT) revealed the presence of fine C-rich clusters in the martensite of all studied steels after the thermomechanical processing. After bake hardening, the formation of iron carbides, containing from 25 to 90 at. pct C, was observed. The evolution of iron carbide compositions was independent of steel composition and was a function of carbide size.

  6. Towards a physical understanding of stratospheric cooling under global warming through a process-based decomposition method

    NASA Astrophysics Data System (ADS)

    Yang, Yang; Ren, R.-C.; Cai, Ming

    2016-02-01

    The stratosphere has been cooling under global warming, the causes of which are not yet well understood. This study applied a process-based decomposition method (CFRAM; Coupled Surface-Atmosphere Climate Feedback Response Analysis Method) to the simulation results of a Coupled Model Intercomparison Project, phase 5 (CMIP5) model (CCSM4; Community Climate System Model, version 4) to identify the radiative and non-radiative processes responsible for the stratospheric cooling. By focusing on the long-term stratospheric temperature changes between the "historical run" and the 8.5 W m-2 Representative Concentration Pathway (RCP8.5) scenario, this study demonstrates that radiative changes due to CO2, ozone and water vapor are the main drivers of stratospheric cooling in both winter and summer. They contribute to the cooling by reducing the net radiative energy (mainly downward radiation) received by the stratospheric layer. In terms of the global average, their contributions are around -5, -1.5, and -1 K, respectively. However, the observed stratospheric cooling is much weaker than the cooling by radiative processes, because changes in atmospheric dynamic processes act to strongly mitigate the radiative cooling, yielding roughly 4 K of warming in the global average. In particular, the much stronger/weaker dynamic warming in the northern/southern winter extratropics is associated with an increase of planetary-wave activity in the northern winter hemisphere, but a slight decrease in the southern winter hemisphere, under global warming. More importantly, although radiative processes dominate the stratospheric cooling, the spatial patterns are largely determined by the non-radiative effects of dynamic processes.

  7. Decomposition of cyclohexanoic acid by the UV/H2O2 process under various conditions.

    PubMed

    Afzal, Atefeh; Drzewicz, Przemysław; Martin, Jonathan W; Gamal El-Din, Mohamed

    2012-06-01

    Naphthenic acids (NAs) are a broad range of alicyclic and aliphatic compounds that are persistent and contribute to the toxicity of oil sands process affected water (OSPW). In this investigation, cyclohexanoic acid (CHA) was selected as a model naphthenic acid, and its oxidation was investigated using advanced oxidation employing a low-pressure ultraviolet light in the presence of hydrogen peroxide (the UV/H(2)O(2) process). The effects of two pHs and of common OSPW constituents, such as chloride (Cl(-)) and carbonate (CO(3)(2-)), were investigated in ultrapure water. The optimal molar ratio of H(2)O(2) to CHA in the treatment process was also investigated. The pH had no significant effect on the degradation, nor on the formation and degradation of byproducts in ultrapure water. The presence of CO(3)(2-) or Cl(-) significantly decreased the CHA degradation rate. The presence of 700 mg/L CO(3)(2-) or 500 mg/L Cl(-), typical concentrations in OSPW, caused a 55% and 23% decrease, respectively, in the pseudo-first-order degradation rate constant for CHA. However, no change in byproducts or in their degradation trend was observed in the presence of scavengers. A real OSPW matrix also significantly decreased the CHA degradation rate: when CHA was spiked into OSPW, the degradation rate decreased by up to 82% relative to that in ultrapure water. The results of this study show that the UV/H(2)O(2) AOP is capable of degrading CHA as a model NA in ultrapure water. However, in real applications, the effect of radical scavengers should be taken into consideration to achieve the best performance of the process. PMID:22521165
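    Pseudo-first-order rate constants like those discussed above are typically obtained by fitting ln(C/C0) against time. A minimal sketch on synthetic, noiseless concentration data (all values hypothetical, not data from the study):

```python
import numpy as np

# Synthetic CHA concentration decay (hypothetical values, mg/L)
t = np.array([0.0, 5.0, 10.0, 20.0, 30.0])  # minutes
k_true = 0.05                               # 1/min, assumed
C = 40.0 * np.exp(-k_true * t)

# Pseudo-first-order model: ln(C/C0) = -k * t, so -k is the slope
slope, _ = np.polyfit(t, np.log(C / C[0]), 1)
k_fit = -slope
print(f"fitted k = {k_fit:.3f} 1/min")

# A 55% drop in k (as reported for carbonate scavenging) would correspond to:
print(f"with scavenger: k = {0.45 * k_fit:.4f} 1/min")
```

    With real measurements, the same linear fit is applied to noisy data, and the quality of the fit (linearity of ln C vs. t) is itself a check that the pseudo-first-order assumption holds.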

  8. Ozone decomposition

    PubMed Central

    Batakliev, Todor; Georgiev, Vladimir; Anachkov, Metody; Rakovsky, Slavcho

    2014-01-01

    Catalytic ozone decomposition is of great significance because ozone is a toxic substance commonly found or generated in human environments (aircraft cabins, offices with photocopiers, laser printers, sterilizers). Considerable work on ozone decomposition has been reported in the literature. This review provides a comprehensive summary of that literature, concentrating on analysis of the physico-chemical properties, synthesis and catalytic decomposition of ozone. This is supplemented by a review of kinetics and catalyst characterization which ties together the previously reported results. Noble metals and oxides of transition metals have been found to be the most active substances for ozone decomposition. The high price of precious metals has stimulated the use of metal oxide catalysts, particularly catalysts based on manganese oxide. The kinetics of ozone decomposition has been determined to be first order. A mechanism of the reaction of catalytic ozone decomposition is discussed, based on detailed spectroscopic investigations of the catalytic surface, showing the existence of peroxide and superoxide surface intermediates. PMID:26109880

  9. Ozone decomposition.

    PubMed

    Batakliev, Todor; Georgiev, Vladimir; Anachkov, Metody; Rakovsky, Slavcho; Zaikov, Gennadi E

    2014-06-01

    Catalytic ozone decomposition is of great significance because ozone is a toxic substance commonly found or generated in human environments (aircraft cabins, offices with photocopiers, laser printers, sterilizers). Considerable work on ozone decomposition has been reported in the literature. This review provides a comprehensive summary of that literature, concentrating on analysis of the physico-chemical properties, synthesis and catalytic decomposition of ozone. This is supplemented by a review of kinetics and catalyst characterization which ties together the previously reported results. Noble metals and oxides of transition metals have been found to be the most active substances for ozone decomposition. The high price of precious metals has stimulated the use of metal oxide catalysts, particularly catalysts based on manganese oxide. The kinetics of ozone decomposition has been determined to be first order. A mechanism of the reaction of catalytic ozone decomposition is discussed, based on detailed spectroscopic investigations of the catalytic surface, showing the existence of peroxide and superoxide surface intermediates. PMID:26109880

  10. Physical and chemical processes of low-temperature plasma decomposition of liquids under ultrasonic treatment

    NASA Astrophysics Data System (ADS)

    Bulychev, N. A.; Kazaryan, M. A.

    2015-12-01

    In this work, a low-temperature plasma initiated in liquid media between electrodes has been shown to decompose hydrogen-containing organic molecules, yielding gaseous products with a hydrogen volume fraction higher than 90% (according to gas chromatography data). Preliminary evaluations of the energy efficiency, calculated from the combustion energies of hydrogen and of the initial liquids and from the electrical energy consumption, give an efficiency of about 60-70%, depending on the initial liquid composition. Theoretical calculations of the voltage and current for this process have been performed and are in good agreement with experimental data.

  11. General route for the decomposition of InAs quantum dots during the capping process

    NASA Astrophysics Data System (ADS)

    González, D.; Reyes, D. F.; Utrilla, A. D.; Ben, T.; Braza, V.; Guzman, A.; Hierro, A.; Ulloa, J. M.

    2016-03-01

    The effect of the capping process on the morphology of InAs/GaAs quantum dots (QDs) with different GaAs-based capping layers (CLs), ranging from strain reduction layers to strain compensating layers, has been studied by transmission microscopy techniques. For this, we have measured simultaneously the height and diameter in buried and uncapped QDs, covering populations of hundreds of QDs that are statistically reliable. First, the uncapped QD population evolves in all cases from a pyramidal shape into a more homogeneous distribution of buried QDs with a spherical-dome shape, despite the different mechanisms implicated in the QD capping. Second, the shape of the buried QDs depends only on the final QD size, where the radius of curvature is a function of the base diameter, independently of the CL composition and growth conditions. An asymmetric evolution of the QDs’ morphology takes place, in which the QD height and base diameter are modified in the amount required to adopt a similar stable shape characterized by an average aspect ratio of 0.21. Our results contradict the traditional model of QD material redistribution from the apex to the base and point to a different, universal behavior of the overgrowth processes in self-organized InAs QDs.

  12. General route for the decomposition of InAs quantum dots during the capping process.

    PubMed

    González, D; Reyes, D F; Utrilla, A D; Ben, T; Braza, V; Guzman, A; Hierro, A; Ulloa, J M

    2016-03-29

    The effect of the capping process on the morphology of InAs/GaAs quantum dots (QDs) with different GaAs-based capping layers (CLs), ranging from strain reduction layers to strain compensating layers, has been studied by transmission microscopy techniques. For this, we have measured simultaneously the height and diameter in buried and uncapped QDs, covering populations of hundreds of QDs that are statistically reliable. First, the uncapped QD population evolves in all cases from a pyramidal shape into a more homogeneous distribution of buried QDs with a spherical-dome shape, despite the different mechanisms implicated in the QD capping. Second, the shape of the buried QDs depends only on the final QD size, where the radius of curvature is a function of the base diameter, independently of the CL composition and growth conditions. An asymmetric evolution of the QDs' morphology takes place, in which the QD height and base diameter are modified in the amount required to adopt a similar stable shape characterized by an average aspect ratio of 0.21. Our results contradict the traditional model of QD material redistribution from the apex to the base and point to a different, universal behavior of the overgrowth processes in self-organized InAs QDs. PMID:26891164

  13. Temperature Adaptations in the Terminal Processes of Anaerobic Decomposition of Yellowstone National Park and Icelandic Hot Spring Microbial Mats

    PubMed Central

    Sandbeck, Kenneth A.; Ward, David M.

    1982-01-01

    The optimum temperatures for methanogenesis in microbial mats of four neutral to alkaline, low-sulfate hot springs in Yellowstone National Park were between 50 and 60°C, which was 13 to 23°C lower than the upper temperature for mat development. Significant methanogenesis at 65°C was only observed in one of the springs. Methane production in samples collected at a 51 or 62°C site in Octopus Spring was increased by incubation at higher temperatures and was maximal at 70°C. Strains of Methanobacterium thermoautotrophicum were isolated from 50, 55, 60, and 65°C sites in Octopus Spring at the temperatures of the collection sites. The optimum temperature for growth and methanogenesis of each isolate was 65°C. Similar results were found for the potential rate of sulfate reduction in an Icelandic hot spring microbial mat in which sulfate reduction dominated methane production as a terminal process in anaerobic decomposition. The potential rate of sulfate reduction along the thermal gradient of the mat was greatest at 50°C, but incubation at 60°C of the samples obtained at 50°C increased the rate. Adaptation to different mat temperatures, common among various microorganisms and processes in the mats, did not appear to occur in the processes and microorganisms which terminate the anaerobic food chain. Other factors must explain why the maximal rates of these processes are restricted to moderate temperatures of the mat ecosystem. PMID:16346109

  14. Microscopic Approaches to Decomposition and Burning Processes of a Micro Plastic Resin Particle under Abrupt Heating

    NASA Astrophysics Data System (ADS)

    Ohiwa, Norio; Ishino, Yojiro; Yamamoto, Atsunori; Yamakita, Ryuji

    To elucidate the feasibility of thermal recycling of waste plastic resin from a basic, microscopic viewpoint, a series of abrupt heating processes of a spherical micro plastic particle having a diameter of about 200 μm is observed when it is abruptly exposed to hot oxidizing combustion gas. Three ingenious devices are introduced, and two typical plastic resins, polyethylene terephthalate and polyethylene, are used. In this paper, the dependence of the internal and external appearances of residual plastic embers on the heating time and the ingredients of the plastic resins is optically analyzed, along with the appearances of internal micro bubbling, multiple micro explosions and jets, and micro diffusion flames during abrupt heating. Based on temporal variations of the surface area of a micro plastic particle, the apparent burning rate constant is also evaluated and compared with those of well-known volatile liquid fuels.

  15. Decomposition of lignin from sugar cane bagasse during ozonation process monitored by optical and mass spectrometries.

    PubMed

    Souza-Corrêa, J A; Ridenti, M A; Oliveira, C; Araújo, S R; Amorim, J

    2013-03-21

    Mass spectrometry was used to monitor neutral chemical species from sugar cane bagasse that could volatilize during the bagasse ozonation process. Lignin fragments and some radicals liberated by direct ozone reaction with the biomass structure were detected. Ozone density was monitored during the ozonation by optical absorption spectroscopy. The optical results indicated that the ozone interaction with the bagasse material was better for bagasse particle sizes less than or equal to 0.5 mm. Both techniques showed that the best condition for ozone diffusion in the bagasse was at 50% moisture content. In addition, Fourier transform infrared spectroscopy (FTIR) and scanning electron microscopy (SEM) were employed to analyze the lignin bond disruption and the morphological changes of the bagasse surface caused by the ozonolysis reactions. Appropriate chemical characterization of the lignin content in the bagasse before and after ozonation was also carried out. PMID:23441875

  16. Decomposition techniques

    USGS Publications Warehouse

    Chao, T.T.; Sanzolone, R.F.

    1992-01-01

    Sample decomposition is a fundamental and integral step in the procedure of geochemical analysis. It is often the limiting factor to sample throughput, especially with the recent application of fast, modern multi-element measurement instrumentation. The complexity of geological materials makes it necessary to choose a sample decomposition technique that is compatible with the specific objective of the analysis. When selecting a decomposition technique, consideration should be given to the chemical and mineralogical characteristics of the sample, the elements to be determined, precision and accuracy requirements, sample throughput, technical capability of personnel, and time constraints. This paper addresses these concerns and discusses the attributes and limitations of many techniques of sample decomposition, along with examples of their application to geochemical analysis. The chemical properties of reagents in their function as decomposition agents are also reviewed. The section on acid dissolution techniques addresses the various inorganic acids that are used individually or in combination in both open and closed systems. Fluxes used in sample fusion are discussed. The promising microwave-oven technology and the emerging field of automation are also examined. A section on applications highlights the use of decomposition techniques for the determination of Au, platinum group elements (PGEs), Hg, U, hydride-forming elements, rare earth elements (REEs), and multi-elements in geological materials. Partial dissolution techniques used for geochemical exploration, which have been treated in detail elsewhere, are not discussed here; nor are fire-assaying for noble metals and decomposition techniques for X-ray fluorescence or nuclear methods. © 1992.

  17. Kinetic Analysis of Isothermal Decomposition Process of Sodium Bicarbonate Using the Weibull Probability Function—Estimation of Density Distribution Functions of the Apparent Activation Energies

    NASA Astrophysics Data System (ADS)

    Janković, Bojan

    2009-10-01

    The decomposition process of sodium bicarbonate (NaHCO3) has been studied by thermogravimetry under isothermal conditions at four operating temperatures (380 K, 400 K, 420 K, and 440 K). It was found that the experimental integral and differential conversion curves at the different operating temperatures can be successfully described by the isothermal Weibull distribution function with a unique value of the shape parameter (β = 1.07). It was also established that the Weibull distribution parameters (β and η) are independent of the operating temperature. Using the integral and differential (Friedman) isoconversional methods, in the conversion (α) range of 0.20 ≤ α ≤ 0.80, the apparent activation energy (Ea) was approximately constant (Ea,int = 95.2 kJ mol-1 and Ea,diff = 96.6 kJ mol-1, respectively). The values of Ea calculated by both isoconversional methods are in good agreement with the value of Ea evaluated from the Arrhenius equation (94.3 kJ mol-1), which was expressed through the scale parameter (η). The Málek isothermal procedure was used to estimate the kinetic model for the investigated decomposition process. It was found that the two-parameter Šesták-Berggren (SB) autocatalytic model best describes the NaHCO3 decomposition process, with the conversion function f(α) = α^0.18 (1-α)^1.19. It was also concluded that the calculated density distribution functions of the apparent activation energies (ddfEa's) do not depend on the operating temperature and exhibit highly symmetrical behavior (shape factor = 1.00). The obtained isothermal decomposition results were compared with the corresponding results for the nonisothermal decomposition process of NaHCO3.
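    The isothermal Weibull conversion model used above, α(t) = 1 − exp[−(t/η)^β], can be checked numerically. A minimal sketch using the reported shape parameter β = 1.07 and a hypothetical scale parameter η, recovering both from the linearized form ln[−ln(1−α)] = β·ln t − β·ln η:

```python
import numpy as np

# Isothermal Weibull conversion model: alpha(t) = 1 - exp(-(t/eta)**beta)
beta, eta = 1.07, 30.0   # beta from the study; eta (in minutes) is hypothetical
t = np.linspace(1.0, 120.0, 200)
alpha = 1.0 - np.exp(-(t / eta) ** beta)

# Linearized form: ln(-ln(1 - alpha)) = beta * ln(t) - beta * ln(eta),
# so a straight-line fit recovers both Weibull parameters
y = np.log(-np.log(1.0 - alpha))
slope, intercept = np.polyfit(np.log(t), y, 1)
eta_fit = np.exp(-intercept / slope)
print(f"beta = {slope:.2f}, eta = {eta_fit:.1f} min")
```

    On experimental conversion curves, the same fit is restricted to a conversion window (the study used 0.20 ≤ α ≤ 0.80), and the temperature dependence of η then yields the Arrhenius activation energy.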

  18. Comparison of the thermal decomposition processes of several aminoalcohol-based ZnO inks with one containing ethanolamine

    NASA Astrophysics Data System (ADS)

    Gómez-Núñez, Alberto; Roura, Pere; López, Concepción; Vilà, Anna

    2016-09-01

    Four inks for the production of ZnO semiconducting films have been prepared with zinc acetate dihydrate as precursor salt and one among the following aminoalcohols: aminopropanol (APr), aminomethyl butanol (AMB), aminophenol (APh) and aminobenzyl alcohol (AB) as stabilizing agent. Their thermal decomposition process has been analyzed in situ by thermogravimetric analysis (TGA), differential scanning calorimetry (DSC) and evolved gas analysis (EGA), whereas the solid product has been analysed ex-situ by X-ray diffraction (XRD) and infrared spectroscopy (IR). Although, except for the APh ink, crystalline ZnO is already obtained at 300 °C, the films contain an organic residue that evolves at higher temperature in the form of a large variety of nitrogen-containing cyclic compounds. The results indicate that APr can be a better stabilizing agent than ethanolamine (EA). It gives larger ZnO crystal sizes with similar carbon content. However, a common drawback of all the amino stabilizers (EA included) is that nitrogen atoms have not been completely removed from the ZnO film at the highest temperature of our experiments (600 °C).

  19. Phenol Decomposition Process by Pulsed-discharge Plasma above a Water Surface in Oxygen and Argon Atmosphere

    NASA Astrophysics Data System (ADS)

    Shiota, Haruki; Itabashi, Hideyuki; Satoh, Kohki; Itoh, Hidenori

    By-products from phenol generated by the exposure of pulsed-discharge plasma above a phenol aqueous solution are investigated by gas chromatography mass spectrometry, and the decomposition process of phenol is deduced. When Ar is used as a background gas, catechol, hydroquinone and 4-hydroxy-2-cyclohexen-1-one are produced, and no O3 is detected; therefore, active species such as OH, O, HO2 and H2O2, which are produced from H2O in the discharge, can convert phenol into those by-products. When O2 is used as a background gas, formic acid, maleic acid, succinic acid and 4,6-dihydroxy-2,4-hexadienoic acid are produced in addition to catechol and hydroquinone. O3 is produced in the discharge plasma, so phenol is probably decomposed into 4,6-dihydroxy-2,4-hexadienoic acid by 1,3-dipolar addition reaction with O3, and then 4,6-dihydroxy-2,4-hexadienoic acid can be decomposed into formic acid, maleic acid and succinic acid by 1,3-dipolar addition reaction with O3.

  20. Joint application of a statistical optimization process and Empirical Mode Decomposition to Magnetic Resonance Sounding Noise Cancelation

    NASA Astrophysics Data System (ADS)

    Ghanati, Reza; Fallahsafari, Mahdi; Hafizi, Mohammad Kazem

    2014-12-01

    The signal quality of Magnetic Resonance Sounding (MRS) measurements is a crucial criterion. The accuracy of the estimation of the signal parameters (i.e., E0 and T2*) strongly depends on the amplitude and conditions of ambient electromagnetic interference at the site of investigation. In this paper, in order to enhance performance in noisy environments, a two-step noise cancelation approach based on Empirical Mode Decomposition (EMD) and a statistical method is proposed. In the first stage, the noisy signal is adaptively decomposed into intrinsic oscillatory components called intrinsic mode functions (IMFs) by means of the EMD algorithm. Afterwards, the noisy IMFs are detected by an automatic procedure, and the partly de-noised signal is reconstructed from the noise-free IMFs. In the second stage, the signal obtained from the first stage enters an optimization process to cancel the remaining noise and, consequently, estimate the signal parameters. The strategy is tested on a synthetic MRS signal contaminated with Gaussian noise, spiky events and harmonic noise, and on real data. By applying the proposed steps successively, we can remove the noise from the signal to a large extent, and the performance indexes, particularly the signal-to-noise ratio, increase significantly.
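    The final parameter-estimation stage described above amounts to a least-squares fit of the exponential MRS envelope E(t) = E0·exp(−t/T2*). A minimal sketch on synthetic noisy data (E0, T2*, and the noise level are hypothetical; the EMD de-noising stage is omitted):

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

# Synthetic MRS envelope: E(t) = E0 * exp(-t / T2star)  (hypothetical values)
E0_true, T2_true = 200.0, 0.15        # nV, s
t = np.linspace(0.0, 0.5, 500)        # s
signal = E0_true * np.exp(-t / T2_true) + rng.normal(0.0, 2.0, t.size)

def envelope(t, E0, T2):
    """Mono-exponential MRS relaxation envelope."""
    return E0 * np.exp(-t / T2)

popt, _ = curve_fit(envelope, t, signal, p0=(100.0, 0.1))
E0_fit, T2_fit = popt
print(f"E0 = {E0_fit:.1f} nV, T2* = {T2_fit * 1000:.0f} ms")
```

    In practice this fit is only reliable after the interference has been suppressed, which is why the paper places the EMD-based de-noising stage before the optimization.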

  1. Correlation of Fe/Cr phase decomposition process and age-hardening in Fe-15Cr ferritic alloys

    NASA Astrophysics Data System (ADS)

    Chen, Dongsheng; Kimura, Akihiko; Han, Wentuo

    2014-12-01

    The effects of thermal aging on the microstructure and mechanical properties of Fe-15Cr ferritic model alloys were investigated by TEM examinations, micro-hardness measurements and tensile tests. The materials used in this work were Fe-15Cr, Fe-15Cr-C and Fe-15Cr-X alloys, where X refers to Si, Mn and Ni, added to simulate a pressure vessel steel. Specimens were isothermally aged at 475 °C for up to 5000 h. Thermal aging causes a significant increase in hardness and strength. Almost twice as much hardening is required for embrittlement of Fe-15Cr-X relative to Fe-15Cr. The age-hardening is mainly due to the formation of Cr-rich α′ precipitates, while the addition of minor elements has a small effect on the saturation level of age-hardening. The correlation of the phase decomposition process and age-hardening in the Fe-15Cr alloy was interpreted using dispersion-strengthening models.

  2. Morphology and phase modifications of MoO3 obtained by metallo-organic decomposition processes

    SciTech Connect

    Barros Santos, Elias de; Martins de Souza e Silva, Juliana; Odone Mazali, Italo

    2010-11-15

    Molybdenum oxide samples were prepared at different temperatures and under different atmospheric conditions by metallo-organic decomposition processes and were characterized by XRD, SEM, and DRS UV/Vis and Raman spectroscopies. Variation in the synthesis conditions resulted in solids with different morphologies and oxygen vacancy concentrations. Intense Raman bands characteristic of crystalline orthorhombic α-MoO3, occurring at 992 cm-1 and 820 cm-1, are observed, and their shifts can be related to differences in the structure of the solids obtained. The sample obtained under nitrogen flow at 1073 K is a phase mixture of orthorhombic α-MoO3 and monoclinic β-MoO3. The characterization results suggest that the molybdenum oxide samples are non-stoichiometric and are described as MoOx with x < 2.94. Variations in the reaction conditions make it possible to tune the number of oxygen defects and the band gap of the final material.

  3. The structure of correlation tensors in homogeneous anisotropic turbulence

    NASA Technical Reports Server (NTRS)

    Matthaeus, W. H.; Smith, C.

    1980-01-01

    The study of turbulence with spatially homogeneous but anisotropic statistical properties has applications in space physics and laboratory plasma physics. The first step in the systematic study of such fluctuations is the elucidation of the kinematic properties of the relevant statistical objects, which are the correlation tensors. The theory of isotropic tensors, developed by Robertson, Chandrasekhar and others, is reviewed and extended to cover the general case of turbulence with a pseudo-vector preferred direction, without assuming mirror reflection invariance. Attention is focused on two point correlation functions and it is shown that the form of the decomposition into proper and pseudo-tensor contributions is restricted by the homogeneity requirement. It is also shown that the vector and pseudo-vector preferred direction cases yield different results. An explicit form of the two point correlation tensor is presented which is appropriate for analyzing interplanetary magnetic fluctuations. A procedure for determining the magnetic helicity from experimental data is presented.
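    As a numerical illustration of the two-point correlation tensors discussed above, the sketch below estimates R_ij(r) = ⟨b_i(x) b_j(x+r)⟩ for a synthetic, statistically homogeneous field with one preferred direction (the field and its anisotropy are entirely hypothetical, not the paper's analytic decomposition):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic homogeneous 1-D series of 3-component fluctuations b(x)
n = 4096
b = rng.normal(size=(n, 3))
b[:, 2] *= 2.0  # impose a preferred direction by amplifying the z component

def correlation_tensor(b, lag):
    """Two-point correlation R_ij(r) = <b_i(x) b_j(x+r)>, periodic in x."""
    shifted = np.roll(b, -lag, axis=0)
    return b.T @ shifted / b.shape[0]

R0 = correlation_tensor(b, 0)
# The anisotropy shows up on the diagonal: R_zz exceeds R_xx and R_yy
print(np.round(np.diag(R0), 2))
```

    In the isotropic case the diagonal entries would be statistically equal; decomposing the estimated tensor into its symmetric and antisymmetric parts is the starting point for extracting quantities such as the magnetic helicity mentioned above.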

  4. Fuel decomposition and boundary-layer combustion processes of hybrid rocket motors

    NASA Technical Reports Server (NTRS)

    Chiaverini, Martin J.; Harting, George C.; Lu, Yeu-Cherng; Kuo, Kenneth K.; Serin, Nadir; Johnson, David K.

    1995-01-01

    Using a high-pressure, two-dimensional hybrid motor, an experimental investigation was conducted on fundamental processes involved in hybrid rocket combustion. HTPB (Hydroxyl-terminated Polybutadiene) fuel cross-linked with diisocyanate was burned with GOX under various operating conditions. Large-amplitude pressure oscillations were encountered in earlier test runs. After identifying the source of instability and decoupling the GOX feed-line system and combustion chamber, the pressure oscillations were drastically reduced from +/-20% of the localized mean pressure to an acceptable range of +/-1.5%. Embedded fine-wire thermocouples indicated that the surface temperature of the burning fuel was around 1000 K, depending upon axial location and operating conditions. Also, except near the leading-edge region, the subsurface thermal wave profiles in the upstream locations are thicker than those in the downstream locations, since the solid-fuel regression rate, in general, increases with distance along the fuel slab. The recovered solid fuel slabs in the laminar portion of the boundary layer exhibited smooth surfaces, indicating the existence of a liquid melt layer on the burning fuel surface in the upstream region. After the transition section, which displayed distinct transverse striations, the surface roughness pattern became quite random and very pronounced in the downstream turbulent boundary-layer region. Both real-time X-ray radiography and ultrasonic pulse-echo techniques were used to determine the instantaneous web thickness burned and instantaneous solid-fuel regression rates over certain portions of the fuel slabs. Globally averaged and axially dependent but time-averaged regression rates were also obtained and presented.
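    The instantaneous and time-averaged regression rates mentioned above follow directly from the measured web-thickness history. A minimal sketch of the arithmetic, with hypothetical thickness samples (not data from the study):

```python
import numpy as np

# Hypothetical remaining web-thickness history w(t) from pulse-echo measurements
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])       # s
w = np.array([20.0, 18.9, 17.9, 17.0, 16.2])  # mm

# Instantaneous regression rate r(t) = -dw/dt (finite differences)
r_inst = -np.gradient(w, t)

# Time-averaged regression rate over the whole burn
r_avg = (w[0] - w[-1]) / (t[-1] - t[0])
print(f"instantaneous: {np.round(r_inst, 2)} mm/s, average: {r_avg:.2f} mm/s")
```

    Repeating this at several axial stations gives the axially dependent, time-averaged regression rates the abstract refers to.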

  5. Exotic species as modifiers of ecosystem processes: Litter decomposition in native and invaded secondary forests of NW Argentina

    NASA Astrophysics Data System (ADS)

    Aragón, Roxana; Montti, Lia; Ayup, María Marta; Fernández, Romina

    2014-01-01

    Invasions of exotic tree species can cause profound changes in community composition and structure, and may even have legacy effects on nutrient cycling via litter production. In this study, we compared leaf litter decomposition of two invasive exotic trees (Ligustrum lucidum and Morus sp.) and two dominant native trees (Cinnamomum porphyria and Cupania vernalis) in native and invaded (Ligustrum-dominated) forest stands in NW Argentina. We measured leaf attributes and environmental characteristics in invaded and native stands to isolate the effects of litter quality and habitat characteristics. Species differed in their decomposition rates and, as predicted by their different colonization status (pioneer vs. late successional), exotic species decayed more rapidly than native ones. Invasion by L. lucidum modified environmental attributes by reducing soil humidity. Decomposition constants (k) tended to be slightly lower (-5%) for all species in invaded stands. The high SLA, low tensile strength, and low C:N of Morus sp. distinguish this species from the native ones and explain its higher decomposition rate. Contrary to our expectations, L. lucidum leaf attributes were similar to those of native species. Decomposition rates also differed between the two exotic species (35% higher in Morus sp.), presumably due to leaf attributes and colonization status. Given the high decomposition rate of L. lucidum litter (more than 6 times that of the natives), we expect an acceleration of nutrient circulation at the ecosystem level in Ligustrum-dominated stands. This may occur in spite of the modified environmental conditions associated with L. lucidum invasion.

  6. Investigation of thermal decomposition as the kinetic process that causes the loss of crystalline structure in sucrose using a chemical analysis approach (part II).

    PubMed

    Lee, Joo Won; Thomas, Leonard C; Jerrell, John; Feng, Hao; Cadwallader, Keith R; Schmidt, Shelly J

    2011-01-26

    High performance liquid chromatography (HPLC) on a calcium-form cation exchange column with refractive index and photodiode array detection was used to investigate thermal decomposition as the cause of the loss of crystalline structure in sucrose. Crystalline sucrose structure was removed using a standard differential scanning calorimetry (SDSC) method (fast heating method) and a quasi-isothermal modulated differential scanning calorimetry (MDSC) method (slow heating method). In the fast heating method, the initial decomposition components, glucose (0.365%) and 5-HMF (0.003%), were found in the sucrose sample coincident with the onset temperature of the first endothermic peak. In the slow heating method, glucose (0.411%) and 5-HMF (0.003%) were found in the sucrose sample coincident with the holding time (50 min) at which the reversing heat capacity began to increase. In both methods, even before the crystalline structure in sucrose was completely removed, unidentified thermal decomposition components were formed. These results prove not only that the loss of crystalline structure in sucrose is caused by thermal decomposition, but also that it occurs via a time-temperature combination process. This knowledge is important for quality assurance purposes and for developing new sugar-based food and pharmaceutical products. In addition, this research provides new insights into the caramelization process, showing that caramelization can occur at low temperatures (significantly below the melting temperature reported in the literature), albeit over longer times. PMID:21175200

  7. Prediction of apparent trabecular bone stiffness through fourth-order fabric tensors.

    PubMed

    Moreno, Rodrigo; Smedby, Örjan; Pahr, Dieter H

    2016-08-01

    The apparent stiffness tensor is an important mechanical parameter for characterizing trabecular bone. Previous studies have modeled this parameter as a function of the mechanical properties of the tissue, bone density, and a second-order fabric tensor, which encodes both the anisotropy and orientation of trabecular bone. Although these models yield strong correlations between observed and predicted stiffness tensors, there is still room for reducing accuracy errors. In this paper, we propose a model that uses fourth-order instead of second-order fabric tensors. First, the totally symmetric part of the stiffness tensor is assumed proportional to the fourth-order fabric tensor in the logarithmic scale. Second, the asymmetric part of the stiffness tensor is derived from relationships among components of the harmonic tensor decomposition of the stiffness tensor. The mean intercept length (MIL), generalized MIL (GMIL), and fourth-order global structure tensor were computed from images acquired through microcomputed tomography of 264 specimens of the femur. The predicted tensors were compared to the stiffness tensors computed by using the micro-finite element method (μFE), which was considered the gold standard, yielding strong correlations (above 0.962). The GMIL tensor yielded the best results among the tested fabric tensors. The Frobenius error, geodesic error, and the error of the norm were reduced by applying the proposed model by 3.75%, 0.07%, and 3.16%, respectively, compared to the model by Zysset and Curnier (Mech Mater 21(4):243-250, 1995) with the second-order MIL tensor. From the results, fourth-order fabric tensors are a good alternative to the more expensive μFE stiffness predictions. PMID:26341838
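The harmonic decomposition referenced above starts by splitting the stiffness tensor into its totally symmetric part and an asymmetric remainder. A minimal numpy sketch of extracting the totally symmetric part by averaging over all 24 index permutations (the isotropic test tensor with made-up Lamé constants is illustrative, not from the paper):

```python
import itertools
import numpy as np

def totally_symmetric_part(C):
    """Average a fourth-order tensor over all 24 index permutations."""
    S = np.zeros_like(C)
    for p in itertools.permutations(range(4)):
        S += np.transpose(C, p)
    return S / 24.0

# Illustrative isotropic stiffness: C_ijkl = lam*d_ij*d_kl + mu*(d_ik*d_jl + d_il*d_jk)
d = np.eye(3)
lam, mu = 2.0, 1.0
C = (lam * np.einsum('ij,kl->ijkl', d, d)
     + mu * (np.einsum('ik,jl->ijkl', d, d) + np.einsum('il,jk->ijkl', d, d)))

S = totally_symmetric_part(C)
A = C - S   # asymmetric remainder, modeled separately in the paper
```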

  8. Measuring Nematic Susceptibilities from the Elastoresistivity Tensor

    NASA Astrophysics Data System (ADS)

    Hristov, A. T.; Shapiro, M. C.; Hlobil, Patrick; Maharaj, Akash; Chu, Jiun-Haw; Fisher, Ian

    The elastoresistivity tensor m_ijkl relates changes in resistivity to the strain on a material. As a fourth-rank tensor, it contains considerably more information about the material than the simpler (second-rank) resistivity tensor; in particular, certain elastoresistivity coefficients can be related to thermodynamic susceptibilities and serve as a direct probe of symmetry breaking at a phase transition. The aim of this talk is twofold. First, we enumerate how symmetry both constrains the structure of the elastoresistivity tensor into an easy-to-understand form and connects tensor elements to thermodynamic susceptibilities. In the process, we generalize previous studies of elastoresistivity to include the effects of magnetic field. Second, we describe an approach to measuring quantities in the elastoresistivity tensor with a novel transverse measurement, which is immune to relative strain offsets. These techniques are then applied to BaFe2As2 in a proof-of-principle measurement. This work is supported by the Department of Energy, Office of Science, Basic Energy Sciences, Materials Sciences and Engineering Division, under Contract DE-AC02-76SF00515.
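The defining relation (Δρ/ρ)_ij = m_ijkl ε_kl is a plain double contraction. A hedged numpy sketch with a toy isotropic elastoresistivity tensor (the coefficients g1 and g2 are invented for illustration and have nothing to do with BaFe2As2):

```python
import numpy as np

def resistivity_change(m, strain):
    """Fractional resistivity change: (drho/rho)_ij = m_ijkl * eps_kl."""
    return np.einsum('ijkl,kl->ij', m, strain)

# Toy isotropic elastoresistivity tensor (g1, g2 are illustrative numbers)
d = np.eye(3)
g1, g2 = 0.5, 1.5
m = (g1 * np.einsum('ij,kl->ijkl', d, d)
     + g2 * 0.5 * (np.einsum('ik,jl->ijkl', d, d) + np.einsum('il,jk->ijkl', d, d)))

eps = np.diag([1e-3, 0.0, 0.0])     # uniaxial strain along x
drho = resistivity_change(m, eps)   # = g1*tr(eps)*I + g2*eps for this toy m
```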

  9. Singular value decomposition for photon-processing nuclear imaging systems and applications for reconstruction and computing null functions.

    PubMed

    Jha, Abhinav K; Barrett, Harrison H; Frey, Eric C; Clarkson, Eric; Caucci, Luca; Kupinski, Matthew A

    2015-09-21

    Recent advances in technology are enabling a new class of nuclear imaging systems consisting of detectors that use real-time maximum-likelihood (ML) methods to estimate the interaction position, deposited energy, and other attributes of each photon-interaction event and store these attributes in a list format. This class of systems, which we refer to as photon-processing (PP) nuclear imaging systems, can be described by a fundamentally different mathematical imaging operator that allows processing of the continuous-valued photon attributes on a per-photon basis. Unlike conventional photon-counting (PC) systems that bin the data into images, PP systems do not have any binning-related information loss. Mathematically, while PC systems have an infinite-dimensional null space due to dimensionality considerations, PP systems do not necessarily suffer from this issue. Therefore, PP systems have the potential to provide improved performance in comparison to PC systems. To study these advantages, we propose a framework to perform the singular-value decomposition (SVD) of the PP imaging operator. We use this framework to perform the SVD of operators that describe a general two-dimensional (2D) planar linear shift-invariant (LSIV) PP system and a hypothetical continuously rotating 2D single-photon emission computed tomography (SPECT) PP system. We then discuss two applications of the SVD framework. The first application is to decompose the object being imaged by the PP imaging system into measurement and null components. We compare these components to the measurement and null components obtained with PC systems. In the process, we also present a procedure to compute the null functions for a PC system. The second application is designing analytical reconstruction algorithms for PP systems. The proposed analytical approach exploits the fact that PP systems acquire data in a continuous domain to estimate a continuous object function. The approach is parallelizable and

  10. Singular value decomposition for photon-processing nuclear imaging systems and applications for reconstruction and computing null functions

    NASA Astrophysics Data System (ADS)

    Jha, Abhinav K.; Barrett, Harrison H.; Frey, Eric C.; Clarkson, Eric; Caucci, Luca; Kupinski, Matthew A.

    2015-09-01

    Recent advances in technology are enabling a new class of nuclear imaging systems consisting of detectors that use real-time maximum-likelihood (ML) methods to estimate the interaction position, deposited energy, and other attributes of each photon-interaction event and store these attributes in a list format. This class of systems, which we refer to as photon-processing (PP) nuclear imaging systems, can be described by a fundamentally different mathematical imaging operator that allows processing of the continuous-valued photon attributes on a per-photon basis. Unlike conventional photon-counting (PC) systems that bin the data into images, PP systems do not have any binning-related information loss. Mathematically, while PC systems have an infinite-dimensional null space due to dimensionality considerations, PP systems do not necessarily suffer from this issue. Therefore, PP systems have the potential to provide improved performance in comparison to PC systems. To study these advantages, we propose a framework to perform the singular-value decomposition (SVD) of the PP imaging operator. We use this framework to perform the SVD of operators that describe a general two-dimensional (2D) planar linear shift-invariant (LSIV) PP system and a hypothetical continuously rotating 2D single-photon emission computed tomography (SPECT) PP system. We then discuss two applications of the SVD framework. The first application is to decompose the object being imaged by the PP imaging system into measurement and null components. We compare these components to the measurement and null components obtained with PC systems. In the process, we also present a procedure to compute the null functions for a PC system. The second application is designing analytical reconstruction algorithms for PP systems. The proposed analytical approach exploits the fact that PP systems acquire data in a continuous domain to estimate a continuous object function. The approach is parallelizable and
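The measurement/null split described in both records above can be illustrated with a small discrete stand-in for the continuous PP operator: the SVD's right singular vectors span the measurement space, and everything orthogonal to them is invisible to the system. A minimal sketch (the toy 4x10 operator is hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
H = rng.standard_normal((4, 10))   # toy underdetermined imaging operator
f = rng.standard_normal(10)        # toy object

# SVD of H; the rows of Vt span the row space (measurement space)
U, s, Vt = np.linalg.svd(H, full_matrices=False)
V = Vt.T
f_meas = V @ (V.T @ f)   # component visible to the system
f_null = f - f_meas      # invisible (null) component

# The null component produces no data; the data depend only on f_meas
assert np.allclose(H @ f_null, 0, atol=1e-10)
assert np.allclose(H @ f, H @ f_meas)
```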

  11. Woodland Decomposition.

    ERIC Educational Resources Information Center

    Napier, J.

    1988-01-01

    Outlines the role of the main organisms involved in woodland decomposition and discusses some of the variables affecting the rate of nutrient cycling. Suggests practical work that may be of value to high school students either as standard practice or long-term projects. (CW)

  12. Effect of mountain climatic elevation gradient and litter origin on decomposition processes: long-term experiment with litter-bags

    NASA Astrophysics Data System (ADS)

    Klimek, Beata; Niklińska, Maria; Chodak, Marcin

    2013-04-01

    Temperature is one of the most important factors affecting soil organic matter decomposition. Mountain areas, with their vertical gradients of temperature and precipitation, allow climate effects similar to those observed across latitudes to be studied and may serve as an approximation for climatic change. The aim of the study was to compare the effects of climatic conditions and initial litter properties on decomposition processes and on the thermal sensitivity of forest litter. The litter was collected at three altitudes (600, 900, 1200 m a.s.l.) in the Beskidy Mts (southern Poland), put into litter-bags and exposed in the field in autumn 2011. Litter collected at each altitude was exposed both at its altitude of origin and at the two other altitudes. The litter-bags were laid out on five mountains, treated as replicates. Starting in April 2012, single sets of litter-bags were collected every five weeks. The laboratory measurements included determination of dry mass loss and chemical composition (Corg, Nt, St, Mg, Ca, Na, K, Cu, Zn) of the litter. In additional litter-bag sets, taken in spring and autumn 2012, microbial properties were measured. To determine the effects of litter properties and the climatic conditions of the exposure sites on the thermal sensitivity of decomposing litter, the respiration rate of litter was measured at 5°C, 15°C and 25°C and expressed as Q10 L and Q10 H (the ratios of respiration rate between 5°C and 15°C and between 15°C and 25°C, respectively). The functional diversity of soil microbes was measured with Biolog® ECO plates, and structural diversity with phospholipid fatty acids (PLFA). Litter mass loss during the first year of incubation was highly variable, with mean mass loss reaching up to 30% of the initial mass. After the autumn sampling, the mean respiration rate of litter (per dry mass) from the 600 m a.s.l. site exposed at 600 m a.s.l. was the highest at each tested temperature. In turn, the lowest mean
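The Q10 indices used above follow the standard temperature-sensitivity formula Q10 = (R2/R1)^(10/(T2−T1)). A short sketch with hypothetical respiration rates (the numbers are made up, not from the study):

```python
def q10(r_low, r_high, t_low, t_high):
    """Factor by which the respiration rate increases per 10 degrees C."""
    return (r_high / r_low) ** (10.0 / (t_high - t_low))

# Hypothetical respiration rates (same units) at 5, 15 and 25 degrees C
q10_L = q10(1.0, 2.2, 5.0, 15.0)   # between 5 and 15 degrees C
q10_H = q10(2.2, 4.0, 15.0, 25.0)  # between 15 and 25 degrees C
print(round(q10_L, 2), round(q10_H, 2))  # -> 2.2 1.82
```

With a 10-degree interval the exponent is 1, so Q10 reduces to the simple rate ratio, which is why the study's two indices are directly comparable.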

  13. In-situ and self-distributed: A new understanding on catalyzed thermal decomposition process of ammonium perchlorate over Nd{sub 2}O{sub 3}

    SciTech Connect

    Zou, Min Wang, Xin Jiang, Xiaohong Lu, Lude

    2014-05-01

    Catalyzed thermal decomposition process of ammonium perchlorate (AP) over neodymium oxide (Nd{sub 2}O{sub 3}) was investigated. Catalytic performances of nanometer-sized Nd{sub 2}O{sub 3} and micrometer-sized Nd{sub 2}O{sub 3} were evaluated by differential scanning calorimetry (DSC). Contrary to common expectations, catalysts of different sizes showed similar catalytic activities. Based on the structural and morphological variation of the catalysts during the reaction, combined with mass spectrometry analyses and unmixed-style experiments, a new understanding of this catalytic process was proposed. We believe that the newly formed neodymium oxychloride (NdOCl) is the real catalytic species in the overall thermal decomposition of AP over Nd{sub 2}O{sub 3}. Meanwhile, the “self-distributed” process occurring within the reaction also contributed to the improvement of the overall catalytic activity. This work is of great value in understanding the roles of micrometer-sized catalysts used in heterogeneous reactions, especially solid–solid reactions that can generate a large quantity of gaseous species. - Graphical abstract: In-situ and self-distributed reaction process in thermal decomposition of AP catalyzed by Nd{sub 2}O{sub 3}. - Highlights: • Micro- and nano-Nd{sub 2}O{sub 3} for catalytic thermal decomposition of AP. • No essential differences in their catalytic performances. • Structural and morphological variation of catalysts digs out catalytic mechanism. • This catalytic process is an “in-situ and self-distributed” one.

  14. The non-uniqueness of the atomistic stress tensor and its relationship to the generalized Beltrami representation

    NASA Astrophysics Data System (ADS)

    Admal, Nikhil Chandra; Tadmor, E. B.

    2016-08-01

    The non-uniqueness of the atomistic stress tensor is a well-known issue when defining continuum fields for atomistic systems. In this paper, we study the non-uniqueness of the atomistic stress tensor stemming from the non-uniqueness of the potential energy representation. In particular, we show using rigidity theory that the distribution associated with the potential part of the atomistic stress tensor can be decomposed into an irrotational part that is independent of the potential energy representation, and a traction-free solenoidal part. Therefore, we have identified for the atomistic stress tensor a discrete analog of the continuum generalized Beltrami representation (a version of the vector Helmholtz decomposition for symmetric tensors). We demonstrate the validity of these analogies using a numerical test. A program for performing the decomposition of the atomistic stress tensor called MDStressLab is available online at
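The continuum analogy invoked above can be illustrated with an ordinary Helmholtz split: a periodic 2D vector field separates in Fourier space into a curl-free (longitudinal) part and a divergence-free (solenoidal) part. A minimal numpy sketch of this decomposition on a square periodic grid (this is a generic illustration, not the authors' MDStressLab code):

```python
import numpy as np

def helmholtz_2d(ux, uy):
    """Split a periodic 2D field (ux, uy) into irrotational + solenoidal parts."""
    n = ux.shape[0]
    kx = np.fft.fftfreq(n).reshape(-1, 1)
    ky = np.fft.fftfreq(n).reshape(1, -1)
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0                         # avoid division by zero (mean mode)
    fx, fy = np.fft.fft2(ux), np.fft.fft2(uy)
    coef = (fx * kx + fy * ky) / k2        # longitudinal projection coefficient
    ix, iy = coef * kx, coef * ky          # irrotational (curl-free) part
    ix[0, 0], iy[0, 0] = fx[0, 0], fy[0, 0]  # convention: mean kept in this part
    irro = (np.fft.ifft2(ix).real, np.fft.ifft2(iy).real)
    sol = (ux - irro[0], uy - irro[1])
    return irro, sol
```

A gradient field comes back entirely in the irrotational part, and a divergence-free field entirely in the solenoidal part, mirroring how the potential part of the atomistic stress splits into a representation-independent irrotational piece and a traction-free solenoidal piece.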

  15. Re-examination of Chinese semantic processing and syntactic processing: evidence from conventional ERPs and reconstructed ERPs by residue iteration decomposition (RIDE).

    PubMed

    Wang, Fang; Ouyang, Guang; Zhou, Changsong; Wang, Suiping

    2015-01-01

    A number of studies have explored the time course of Chinese semantic and syntactic processing. However, whether syntactic processing occurs earlier than semantics during Chinese sentence reading is still under debate. To further explore this issue, an event-related potentials (ERPs) experiment was conducted on 21 native Chinese speakers who read individually-presented Chinese simple sentences (NP1+VP+NP2) word-by-word for comprehension and made semantic plausibility judgments. The transitivity of the verbs was manipulated to form three types of stimuli: congruent sentences (CON), sentences with a semantically violated NP2 following a transitive verb (semantic violation, SEM), and sentences with a semantically violated NP2 following an intransitive verb (combined semantic and syntactic violation, SEM+SYN). The ERPs evoked from the target NP2 were analyzed by using the Residue Iteration Decomposition (RIDE) method to reconstruct the ERP waveform blurred by trial-to-trial variability, as well as by using the conventional ERP method based on stimulus-locked averaging. The conventional ERP analysis showed that, compared with the critical words in CON, those in SEM and SEM+SYN elicited an N400-P600 biphasic pattern. The N400 effects in both violation conditions were of similar size and distribution, but the P600 in SEM+SYN was bigger than that in SEM. Compared with the conventional ERP analysis, RIDE analysis revealed a larger N400 effect and an earlier P600 effect (in the time window of 500-800 ms instead of 570-810 ms). Overall, the combination of conventional ERP analysis and the RIDE method for compensating for trial-to-trial variability confirmed the non-significant difference between SEM and SEM+SYN in the earlier N400 time window. Converging with previous findings on other Chinese structures, the current study provides further precise evidence that syntactic processing in Chinese does not occur earlier than semantic processing. PMID:25615600

  16. A LOW-COST PROCESS FOR THE SYNTHESIS OF NANOSIZE YTTRIA-STABILIZED ZIRCONIA (YSZ) BY MOLECULAR DECOMPOSITION

    SciTech Connect

    Anil V. Virkar

    2004-05-06

    This report summarizes the results of work done during the performance period on this project, between October 1, 2002 and December 31, 2003, with a three month no-cost extension. The principal objective of this work was to develop a low-cost process for the synthesis of sinterable, fine powder of YSZ. The process is based on molecular decomposition (MD) wherein very fine particles of YSZ are formed by: (1) Mixing raw materials in a powder form, (2) Synthesizing compound containing YSZ and a fugitive constituent by a conventional process, and (3) Selectively leaching (decomposing) the fugitive constituent, thus leaving behind insoluble YSZ of a very fine particle size. While there are many possible compounds, which can be used as precursors, the one selected for the present work was Y-doped Na{sub 2}ZrO{sub 3}, where the fugitive constituent is Na{sub 2}O. It can be readily demonstrated that the potential cost of the MD process for the synthesis of very fine (or nanosize) YSZ is considerably lower than the commonly used processes, namely chemical co-precipitation and combustion synthesis. Based on the materials cost alone, for a 100 kg batch, the cost of YSZ made by chemical co-precipitation is >$50/kg, while that of the MD process should be <$10/kg. Significant progress was made during the performance period on this project. The highlights of the progress are given here in a bullet form. (1) From the two selected precursors listed in Phase I proposal, namely Y-doped BaZrO{sub 3} and Y-doped Na{sub 2}ZrO{sub 3}, selection of Y-doped Na{sub 2}ZrO{sub 3} was made for the synthesis of nanosize (or fine) YSZ. This was based on the potential cost of the precursor, the need to use only water for leaching, and the short time required for the process. (2) For the synthesis of calcia-stabilized zirconia (CSZ), which has the potential for use in place of YSZ in the anode of SOFC, Ca-doped Na{sub 2}ZrO{sub 3} was demonstrated as a suitable precursor. (3) Synthesis of Y

  17. Evaluation of Bayesian tensor estimation using tensor coherence

    NASA Astrophysics Data System (ADS)

    Kim, Dae-Jin; Kim, In-Young; Jeong, Seok-Oh; Park, Hae-Jeong

    2009-06-01

    Fiber tractography, a unique and non-invasive method to estimate axonal fibers within white matter, constructs the putative streamlines from diffusion tensor MRI by interconnecting voxels according to the propagation direction defined by the diffusion tensor. This direction has uncertainties due to the properties of the underlying fiber bundles, neighboring structures and image noise. Therefore, robust estimation of the diffusion direction is essential to reconstruct reliable fiber pathways. For this purpose, we propose a tensor estimation method using a Bayesian framework, which includes an a priori probability distribution based on tensor coherence indices, to utilize both the neighborhood direction information and the inertia moment as regularization terms. The reliability of the proposed tensor estimation was evaluated using Monte Carlo simulations in terms of accuracy and precision with four synthetic tensor fields at various SNRs and in vivo human data of brain and calf muscle. The proposed Bayesian estimation demonstrated relative robustness to noise and higher reliability compared to simple tensor regression.
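The propagation direction mentioned above is the eigenvector of the largest eigenvalue of the estimated diffusion tensor, and its reliability is commonly summarized by fractional anisotropy (FA). A minimal sketch of both quantities for a single voxel (the prolate tensor values are illustrative, in mm²/s):

```python
import numpy as np

def principal_direction_and_fa(D):
    """Principal diffusion direction and fractional anisotropy of a 3x3 tensor."""
    w, v = np.linalg.eigh(D)            # eigenvalues in ascending order
    md = w.mean()                       # mean diffusivity
    fa = np.sqrt(1.5 * np.sum((w - md) ** 2) / np.sum(w ** 2))
    return v[:, -1], fa                 # eigenvector of the largest eigenvalue

# Illustrative prolate tensor aligned with x: strongly anisotropic voxel
D = np.diag([1.7e-3, 0.3e-3, 0.3e-3])
direction, fa = principal_direction_and_fa(D)   # direction ~ +/-x, fa ~ 0.8
```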

  18. Grid-based electronic structure calculations: The tensor decomposition approach

    NASA Astrophysics Data System (ADS)

    Rakhuba, M. V.; Oseledets, I. V.

    2016-05-01

    We present a fully grid-based approach for solving Hartree-Fock and all-electron Kohn-Sham equations based on a low-rank approximation of three-dimensional electron orbitals. Due to the low-rank structure, the total complexity of the algorithm scales linearly with the one-dimensional grid size. Linear complexity allows for the use of fine grids, e.g. 8192^3, and thus a cheap extrapolation procedure. We test the proposed approach on closed-shell atoms up to argon, several molecules, and clusters of hydrogen atoms. All tests show systematic convergence to the required accuracy.
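The storage argument behind the linear scaling can be seen already in 2D: a grid-sampled separable function has low (here, exactly rank-1) matrix rank, so two length-n factors replace the full n×n grid. A small numpy sketch (2D analogue of the 3D orbital compression, not the authors' solver):

```python
import numpy as np

# A separable function sampled on an n x n grid compresses to rank 1:
# exp(-x^2 - y^2) = exp(-x^2) * exp(-y^2)
n = 256
x = np.linspace(-5, 5, n)
F = np.exp(-np.add.outer(x**2, x**2))

U, s, Vt = np.linalg.svd(F)
r = int(np.sum(s > 1e-8 * s[0]))      # numerical rank

full_storage = n * n                  # 65536 values on the full grid
low_rank_storage = 2 * n * r          # 512 values for the rank-r factors
```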

  19. Gogny interactions with tensor terms

    NASA Astrophysics Data System (ADS)

    Anguiano, M.; Lallena, A. M.; Co', G.; De Donno, V.; Grasso, M.; Bernard, R. N.

    2016-07-01

    We present a perturbative approach to include tensor terms in the Gogny interaction. We do not change the values of the usual parameterisations, with the sole exception of the spin-orbit term, and we add tensor terms whose only free parameters are the strengths of the interactions. We identify observables sensitive to the presence of the tensor force in Hartree-Fock, Hartree-Fock-Bogoliubov and random phase approximation calculations. We show the need to include at least two tensor contributions: a pure tensor term and a tensor-isospin term. We show results relevant to the inclusion of the tensor term for single-particle energies, charge-conserving magnetic excitations and Gamow-Teller excitations.

  20. An Alternative to Tensors

    NASA Astrophysics Data System (ADS)

    Brown, Eric

    2008-10-01

    Some of the most beautiful and complex theories in physics are formulated in the language of tensors. While powerful, these methods are sometimes daunting to the uninitiated. I will introduce the use of Clifford Algebra as a practical alternative to the use of tensors. Many physical quantities can be represented in an indexless form. The boundary between the classical and the quantum worlds becomes a little more transparent. I will review some key concepts, and then talk about some of the things that I am doing with this interesting and powerful tool. Of note to some will be the development of rigid body dynamics for a game engine. Others may be interested in expressing the connection on a spin bundle. My intent is to prove to the audience that there exists an accessible mathematical tool that can be employed to probe the most difficult of topics in physics.
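The indexless style advocated above can be made concrete in the smallest nontrivial case: the Clifford algebra of the plane, where a rotation is the rotor sandwich v' = R v R~ with no indices in sight. A self-contained sketch of the geometric product on multivectors (scalar, e1, e2, e12); this is a generic textbook construction, not code from the talk:

```python
import math

def gp(a, b):
    """Geometric product of 2D multivectors (scalar, e1, e2, e12),
    using e1*e1 = e2*e2 = 1 and e12*e12 = -1."""
    a0, a1, a2, a3 = a
    b0, b1, b2, b3 = b
    return (a0*b0 + a1*b1 + a2*b2 - a3*b3,
            a0*b1 + a1*b0 - a2*b3 + a3*b2,
            a0*b2 + a2*b0 + a1*b3 - a3*b1,
            a0*b3 + a3*b0 + a1*b2 - a2*b1)

def rotate(v, theta):
    """Rotate the vector part of v by theta via the rotor sandwich R v R~."""
    c, s = math.cos(theta / 2), math.sin(theta / 2)
    R  = (c, 0.0, 0.0, -s)    # rotor: cos(t/2) - sin(t/2) e12
    Rt = (c, 0.0, 0.0,  s)    # its reverse
    return gp(gp(R, v), Rt)

# Rotating e1 by 90 degrees gives e2
v = rotate((0.0, 1.0, 0.0, 0.0), math.pi / 2)
```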

  1. Superconducting tensor gravity gradiometer

    NASA Technical Reports Server (NTRS)

    Paik, H. J.

    1981-01-01

    The employment of superconductivity and other material properties at cryogenic temperatures to fabricate sensitive, low-drift, gravity gradiometer is described. The device yields a reduction of noise of four orders of magnitude over room temperature gradiometers, and direct summation and subtraction of signals from accelerometers in varying orientations are possible with superconducting circuitry. Additional circuits permit determination of the linear and angular acceleration vectors independent of the measurement of the gravity gradient tensor. A dewar flask capable of maintaining helium in a liquid state for a year's duration is under development by NASA, and a superconducting tensor gravity gradiometer for the NASA Geodynamics Program is intended for a LEO polar trajectory to measure the harmonic expansion coefficients of the earth's gravity field up to order 300.

  2. The atomic strain tensor

    SciTech Connect

    Mott, P.H.; Argon, A.S. ); Suter, U.W. Massachusetts Institute of Technology, Cambridge, MA )

    1992-07-01

    A definition of the local atomic strain increments in three dimensions and an algorithm for computing them is presented. An arbitrary arrangement of atoms is tessellated into Delaunay tetrahedra, identifying interstices, and Voronoi polyhedra, identifying atomic domains. The deformation gradient increment tensor for interstitial space is obtained from the displacement increments of the corner atoms of Delaunay tetrahedra. The atomic site strain increment tensor is then obtained by finding the intersection of the Delaunay tetrahedra with the Voronoi polyhedra, accumulating the individual deformation gradient contributions of the intersected Delaunay tetrahedra into the Voronoi polyhedra. An example application is discussed, showing how the atomic strain clarifies the relative local atomic movement for a polymeric glass treated at the atomic level. 6 refs. 10 figs.
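The per-tetrahedron step above is simple linear algebra: the three edge vectors of a Delaunay tetrahedron before and after deformation determine its deformation gradient F. A minimal sketch (the tetrahedron and shear are invented for illustration):

```python
import numpy as np

def deformation_gradient(X, x):
    """F mapping a reference tetrahedron X (4x3 corner positions)
    to its deformed positions x (4x3), via the three edge vectors."""
    dX = (X[1:] - X[0]).T     # 3x3 reference edge matrix (edges as columns)
    dx = (x[1:] - x[0]).T     # 3x3 deformed edge matrix
    return dx @ np.linalg.inv(dX)

# Illustrative unit tetrahedron under simple shear x_i = F_true @ X_i
X = np.array([[0., 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
F_true = np.array([[1., 0.1, 0], [0, 1, 0], [0, 0, 1]])
x = X @ F_true.T

F = deformation_gradient(X, x)
eps = 0.5 * (F + F.T) - np.eye(3)   # small-strain tensor of the tetrahedron
```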

  3. Local anisotropy of fluids using Minkowski tensors

    NASA Astrophysics Data System (ADS)

    Kapfer, S. C.; Mickel, W.; Schaller, F. M.; Spanner, M.; Goll, C.; Nogawa, T.; Ito, N.; Mecke, K.; Schröder-Turk, G. E.

    2010-11-01

    Statistics of the free volume available to individual particles have previously been studied for simple and complex fluids, granular matter, amorphous solids, and structural glasses. Minkowski tensors provide a set of shape measures that are based on strong mathematical theorems and easily computed for polygonal and polyhedral bodies such as free volume cells (Voronoi cells). They characterize the local structure beyond the two-point correlation function and are suitable to define indices 0 ≤ β_ν^{a,b} ≤ 1 of local anisotropy. Here, we analyze the statistics of Minkowski tensors for configurations of simple liquid models, including the ideal gas (Poisson point process), the hard disks and hard spheres ensemble, and the Lennard-Jones fluid. We show that Minkowski tensors provide a robust characterization of local anisotropy, which ranges from β_ν^{a,b} ≈ 0.3 for vapor phases to β_ν^{a,b}
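For a 2D Voronoi cell, one member of the Minkowski tensor family is the boundary normal tensor W1^{0,2} ~ Σ_edges length · n n^T, and the anisotropy index is the ratio of its extremal eigenvalues. A minimal sketch for convex polygons (a simplified illustration of the paper's construction, not the authors' code):

```python
import numpy as np

def beta_w102(vertices):
    """Anisotropy index from W1^{0,2} = sum over edges of length * n n^T,
    for a convex polygon given as counter-clockwise vertices (k x 2)."""
    W = np.zeros((2, 2))
    k = len(vertices)
    for i in range(k):
        e = vertices[(i + 1) % k] - vertices[i]
        length = np.linalg.norm(e)
        n = np.array([e[1], -e[0]]) / length   # outward normal (ccw polygon)
        W += length * np.outer(n, n)
    lam = np.linalg.eigvalsh(W)
    return lam[0] / lam[-1]   # 1 = isotropic cell, -> 0 strongly anisotropic

square = np.array([[0., 0], [1, 0], [1, 1], [0, 1]])      # beta = 1
rect   = np.array([[0., 0], [4, 0], [4, 1], [0, 1]])      # beta = 1/4
```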

  4. Direct Solution of the Chemical Master Equation Using Quantized Tensor Trains

    PubMed Central

    Kazeev, Vladimir; Khammash, Mustafa; Nip, Michael; Schwab, Christoph

    2014-01-01

    The Chemical Master Equation (CME) is a cornerstone of stochastic analysis and simulation of models of biochemical reaction networks. Yet direct solutions of the CME have remained elusive. Although several approaches overcome the infinite dimensional nature of the CME through projections or other means, a common feature of proposed approaches is their susceptibility to the curse of dimensionality, i.e. the exponential growth in memory and computational requirements in the number of problem dimensions. We present a novel approach that has the potential to “lift” this curse of dimensionality. The approach is based on the use of the recently proposed Quantized Tensor Train (QTT) formatted numerical linear algebra for the low parametric, numerical representation of tensors. The QTT decomposition admits both algorithms for basic tensor arithmetic with complexity scaling linearly in the dimension (number of species) and sub-linearly in the mode size (maximum copy number), and a numerical tensor rounding procedure which is stable and quasi-optimal. We show how the CME can be represented in QTT format, then use the exponentially-converging hp-discontinuous Galerkin discretization in time to reduce the CME evolution problem to a set of QTT-structured linear equations to be solved at each time step using an algorithm based on Density Matrix Renormalization Group (DMRG) methods from quantum chemistry. Our method automatically adapts the “basis” of the solution at every time step guaranteeing that it is large enough to capture the dynamics of interest but no larger than necessary, as this would increase the computational complexity. Our approach is demonstrated by applying it to three different examples from systems biology: independent birth-death process, an example of enzymatic futile cycle, and a stochastic switch model. The numerical results on these examples demonstrate that the proposed QTT method achieves dramatic speedups and several orders of magnitude

  5. Direct solution of the Chemical Master Equation using quantized tensor trains.

    PubMed

    Kazeev, Vladimir; Khammash, Mustafa; Nip, Michael; Schwab, Christoph

    2014-03-01

    The Chemical Master Equation (CME) is a cornerstone of stochastic analysis and simulation of models of biochemical reaction networks. Yet direct solutions of the CME have remained elusive. Although several approaches overcome the infinite dimensional nature of the CME through projections or other means, a common feature of proposed approaches is their susceptibility to the curse of dimensionality, i.e. the exponential growth in memory and computational requirements in the number of problem dimensions. We present a novel approach that has the potential to "lift" this curse of dimensionality. The approach is based on the use of the recently proposed Quantized Tensor Train (QTT) formatted numerical linear algebra for the low parametric, numerical representation of tensors. The QTT decomposition admits both algorithms for basic tensor arithmetic with complexity scaling linearly in the dimension (number of species) and sub-linearly in the mode size (maximum copy number), and a numerical tensor rounding procedure which is stable and quasi-optimal. We show how the CME can be represented in QTT format, then use the exponentially-converging hp-discontinuous Galerkin discretization in time to reduce the CME evolution problem to a set of QTT-structured linear equations to be solved at each time step using an algorithm based on Density Matrix Renormalization Group (DMRG) methods from quantum chemistry. Our method automatically adapts the "basis" of the solution at every time step guaranteeing that it is large enough to capture the dynamics of interest but no larger than necessary, as this would increase the computational complexity. Our approach is demonstrated by applying it to three different examples from systems biology: independent birth-death process, an example of enzymatic futile cycle, and a stochastic switch model. The numerical results on these examples demonstrate that the proposed QTT method achieves dramatic speedups and several orders of magnitude storage
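The quantization idea behind QTT can be sketched in a few lines: reshape a length-2^d vector into a d-way binary tensor, then compress it into tensor-train cores by successive SVDs (the standard TT-SVD algorithm, here written minimally rather than as the paper's solver). A geometric sequence, for example, collapses to all TT ranks equal to 1:

```python
import numpy as np

def tt_svd(tensor, tol=1e-12):
    """TT-SVD: decompose a d-way array into train cores by successive SVDs."""
    dims = tensor.shape
    cores, r = [], 1
    M = tensor.reshape(dims[0], -1)
    for k in range(len(dims) - 1):
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        rank = max(1, int(np.sum(s > tol * s[0])))
        cores.append(U[:, :rank].reshape(r, dims[k], rank))
        M = (s[:rank, None] * Vt[:rank]).reshape(rank * dims[k + 1], -1)
        r = rank
    cores.append(M.reshape(r, dims[-1], 1))
    return cores

def tt_full(cores):
    """Contract TT cores back into the full array."""
    out = cores[0]
    for c in cores[1:]:
        out = np.tensordot(out, c, axes=([-1], [0]))
    return out.squeeze(axis=(0, out.ndim - 1))

# Quantize a length-2^4 geometric sequence: every QTT rank collapses to 1
v = 2.0 ** np.arange(16)
cores = tt_svd(v.reshape(2, 2, 2, 2))
```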

  6. Physical decomposition of the gauge and gravitational fields

    SciTech Connect

    Chen Xiangsong; Zhu Benchao

    2011-04-15

    Physical decomposition of the non-Abelian gauge field has recently helped to achieve a meaningful gluon spin. Here we extend this approach to gravity and attempt a meaningful gravitational energy. The metric is unambiguously separated into a pure geometric term which contributes a null curvature tensor, and a physical term which represents the true gravitational effect and always vanishes in a flat space-time. By this decomposition the conventional pseudotensors of the gravitational stress-energy are easily rescued to produce a definite physical result. Our decomposition applies to any symmetric tensor, and has an interesting relation to the transverse-traceless decomposition discussed by Arnowitt, Deser and Misner, and by York.

  7. Structured data-sparse approximation to high order tensors arising from the deterministic Boltzmann equation

    NASA Astrophysics Data System (ADS)

    Khoromskij, Boris N.

    2007-09-01

    We develop efficient data-sparse representations of a class of high order tensors via a block many-fold Kronecker product decomposition. Such a decomposition is based on low separation-rank approximations of the corresponding multivariate generating function. We combine Sinc interpolation and a quadrature-based approximation with hierarchically organised block tensor-product formats. Different matrix and tensor operations in the generalised Kronecker tensor-product format, including the Hadamard-type product, can be implemented at low cost. An application to the collision integral from the deterministic Boltzmann equation leads to an asymptotic cost of O(n^4 log^β n) - O(n^5 log^β n) in the one-dimensional problem size n (depending on the model kernel function), which noticeably improves on the O(n^6 log^β n) complexity of the full matrix representation.
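The cost savings of Kronecker-format operations rest on identities such as (B ⊗ C) vec(X) = vec(B X C^T), which replaces an O(n^4) dense matrix-vector product with O(n^3) small matrix products. A minimal numpy illustration of this generic identity (row-major vec, toy random factors):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 30
B = rng.standard_normal((n, n))
C = rng.standard_normal((n, n))
X = rng.standard_normal((n, n))          # x = vec(X), row-major

# Naive: materialize the n^2 x n^2 Kronecker matrix, O(n^4) memory and time
y_full = np.kron(B, C) @ X.ravel()

# Structured: (B kron C) vec(X) = vec(B X C^T), O(n^3) time, O(n^2) memory
y_fast = (B @ X @ C.T).ravel()
```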

  8. Tensor-based detection of T wave alternans using ECG.

    PubMed

    Goovaerts, Griet; Vandenberk, Bert; Willems, Rik; Van Huffel, Sabine

    2015-08-01

    T wave alternans is defined as changes in the T wave amplitude in an ABABAB pattern. It can be found in ECG signals of patients with heart disease and is a possible indicator for predicting the risk of sudden cardiac death. Due to its low amplitude, robust automatic T wave alternans detection is a difficult task. We present a new method to detect T wave alternans in multichannel ECG signals. The use of tensors (multidimensional matrices) permits the combination of the information present in different channels, making detection more reliable. The possibility of decomposing incomplete tensors is exploited to deal with noisy ECG segments. Using a sliding window of 128 heartbeats, a tensor is constructed from the T waves of all channels. Canonical Polyadic Decomposition is applied to this tensor and the resulting loading vectors are examined for information about the T wave behavior in three dimensions. T wave alternans is detected using a sign-change counting method that is able to extract both the T wave alternans length and magnitude. When applying this novel method to a database of patients with multiple positive T wave alternans tests according to the clinically available spectral method, both the length and the magnitude of the detected T wave alternans are larger for these subjects than for subjects in a control group. PMID:26737901
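The sign-change counting step can be sketched on a plain per-beat amplitude series: an ABAB pattern makes the beat-to-beat difference alternate in sign, so the longest alternating run gives the alternans length and the typical difference its magnitude. In the paper this counting is applied to a CPD loading vector; the simplified stand-alone version below, with an invented amplitude series, only illustrates the idea:

```python
import numpy as np

def twa_length_and_magnitude(t_amp):
    """Longest ABAB run (in beats) and median beat-to-beat amplitude change."""
    d = np.diff(np.asarray(t_amp, dtype=float))
    signs = np.sign(d)
    best = run = 1
    for i in range(1, len(signs)):
        # extend the run only while the difference flips sign every beat
        run = run + 1 if signs[i] == -signs[i - 1] and signs[i] != 0 else 1
        best = max(best, run)
    return best + 1, float(np.median(np.abs(d)))

# Hypothetical T-wave amplitude series with a 0.1 mV ABAB alternans
beats = [1.0, 1.1] * 8
length, magnitude = twa_length_and_magnitude(beats)
```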

  9. Carbon decomposition process of the residual biomass in the paddy soil of a single-crop rice field

    NASA Astrophysics Data System (ADS)

    Okada, K.; Iwata, T.

    2014-12-01

    In cultivated fields, residual organic matter is plowed into the soil after harvest and decays during the fallow season. Greenhouse gases such as CO2 and CH4 are generated by the decomposition of this organic matter and released into the atmosphere. In some fields, open burning is carried out by tradition, in which case the carbon in residual matter is released into the atmosphere as CO2. However, the effect of burning on the carbon budget between croplands and the atmosphere has not been fully considered yet. In this study, coarse organic matter (COM) in the paddy soil of a single-crop rice field was sampled at regular intervals between January 2011 and August 2014. The amount of carbon released from residual matter was estimated by analyzing the variations in the carbon content of COM. The effects of soil temperature (Ts) and soil water content (SWC) at the paddy field on the rate of carbon decomposition were investigated. Though the rate of decrease of COM was much smaller in the winter season, it accelerated in the warming season between April and June every year. Decomposition slowed again in the following rice cultivation season despite the highest soil temperatures. In addition, the observational field was divided into two areas, and open burning experiments were conducted three times, in November of 2011, 2012, and 2013. In each year, three sampling surveys were done: of plants before harvest, and of residuals before and after the burning experiment. From these surveys, it is suggested that about 48±2% of the carbon content of the above-ground plants was removed as grain at harvest, and about 27±2% of the carbon was emitted as CO2 by burning. The carbon content of residuals plowed into the soil after harvest was estimated at 293±1 and 220±36 gC/m2 in the unburned and burned areas, respectively, based on the three-year average. It is estimated that 70% and 60% of the initial input of COM was decomposed after a year in the unburned and burned areas, respectively.
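    Under a simple first-order (negative exponential) decay model, the annual decay constant can be backed out from the reported fraction of COM remaining after one year; a minimal sketch with the hypothetical helper `decay_constant`:

```python
import math

def decay_constant(fraction_remaining, years=1.0):
    """First-order decay C(t) = C0 * exp(-k t): solve for k given the
    fraction of the initial pool remaining after `years` years."""
    return -math.log(fraction_remaining) / years

# 70% decomposed after one year leaves 30% remaining (unburned area).
k_noburn = decay_constant(0.30)
# 60% decomposed leaves 40% remaining (burned area).
k_burn = decay_constant(0.40)
print(round(k_noburn, 2), round(k_burn, 2))
```

    This gives roughly 1.2 yr^-1 for the unburned and 0.9 yr^-1 for the burned area, consistent with the record's statement that burned residuals decompose more slowly.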

  10. Tensor powers for non-simply laced Lie algebras B2-case

    NASA Astrophysics Data System (ADS)

    Kulish, P. P.; Lyakhovsky, V. D.; Postnova, O. V.

    2012-02-01

    We study the decomposition problem for tensor powers of B2-fundamental modules. To solve this problem, the singular weight technique and injection fan algorithms are applied. Properties of multiplicity coefficients are formulated in terms of multiplicity functions. These functions are constructed so as to show explicitly the dependence of the multiplicity coefficients on the highest weight coordinates and the tensor power parameter. It is thus possible to study general properties of multiplicity coefficients for powers of the fundamental B2-modules.

  11. Thermal decomposition of [Co(en)3][Fe(CN)6]∙ 2H2O: Topotactic dehydration process, valence and spin exchange mechanism elucidation

    PubMed Central

    2013-01-01

    Background The Prussian blue analogues represent a well-known and extensively studied group of coordination species with many remarkable applications due to their ion-exchange, electron transfer or magnetic properties. Among them, Co-Fe Prussian blue analogues have been extensively studied due to their photoinduced magnetization. Surprisingly, their suitability as precursors for the solid-state synthesis of magnetic nanoparticles is almost unexplored. In this paper, the mechanism of thermal decomposition of [Co(en)3][Fe(CN)6] ∙ 2H2O (1a) is elucidated, including the topotactic dehydration, suggested valence and spin exchange mechanisms, and the formation of a CoFe2O4-Co3O4 (3:1) mixture as the final product of thermal degradation. Results The course of thermal decomposition of 1a in an air atmosphere up to 600°C was monitored by TG/DSC techniques, 57Fe Mössbauer and IR spectroscopy. First, the topotactic dehydration of 1a to the hemihydrate [Co(en)3][Fe(CN)6] ∙ 1/2H2O (1b) occurred with preservation of the single-crystal character, as confirmed by X-ray diffraction analysis. The subsequent thermal decomposition proceeded in four further stages, including intermediates varying in the valence and spin states of both transition metal ions in their structures, i.e. [FeII(en)2(μ-NC)CoIII(CN)4], [FeIII(NH2CH2CH3)2(μ-NC)2CoII(CN)3] and FeIII[CoII(CN)5], which were suggested mainly from 57Fe Mössbauer, IR spectral and elemental analysis data. Thermal decomposition was completed at 400°C, when superparamagnetic phases of CoFe2O4 and Co3O4 in a molar ratio of 3:1 were formed. During further temperature increase (450 and 600°C), the ongoing crystallization process gave a new ferromagnetic phase attributed to CoFe2O4-Co3O4 nanocomposite particles. Their formation was confirmed by XRD and TEM analyses. The in-field (5 K / 5 T) Mössbauer spectrum revealed canting of the Fe(III) spin in the almost fully inverse spinel structure of CoFe2O4. Conclusions It has been found

  12. Synthesis of lead zirconate titanate nanofibres and the Fourier-transform infrared characterization of their metallo-organic decomposition process

    NASA Astrophysics Data System (ADS)

    Wang, Yu; Santiago-Avilés, Jorge J.

    2004-01-01

    We have synthesized Pb(Zr0.52Ti0.48)O3 fibres with diameters ranging from 500 nm to several microns using electrospinning and metallo-organic decomposition techniques (Wang et al 2002 Mater. Res. Soc. Symp. Proc. 702 359). By a refinement of our electrospinning technique, i.e. by increasing the viscosity of the precursor solution, and by adding a filter to the tip of the syringe, the diameter of the synthesized PZT fibres has been reduced to the neighbourhood of 100 nm. The complex thermal decomposition was characterized using Fourier-transform infrared (FTIR) spectroscopy and x-ray diffraction (XRD). It was found that alcohol evaporated during electrospinning and that most of the organic groups had pyrolysed before the intermediate pyrochlore phase was formed. There is a good correspondence between XRD and FTIR spectra. We also verify that a thin film of platinum coated on the silicon substrate catalyses the phase transformation of the pyrochlore into the perovskite phase.

  13. Killing and conformal Killing tensors

    NASA Astrophysics Data System (ADS)

    Heil, Konstantin; Moroianu, Andrei; Semmelmann, Uwe

    2016-08-01

    We introduce an appropriate formalism in order to study conformal Killing (symmetric) tensors on Riemannian manifolds. We reprove in a simple way some known results in the field and obtain several new results, such as the classification of conformal Killing 2-tensors on Riemannian products of compact manifolds and Weitzenböck formulas leading to non-existence results, and we construct various examples of manifolds with conformal Killing tensors.

  14. FaRe: A Mathematica package for tensor reduction of Feynman integrals

    NASA Astrophysics Data System (ADS)

    Re Fiorentin, Michele

    2016-08-01

    In this paper, we present FaRe, a package for Mathematica that implements the decomposition of a generic tensor Feynman integral, with arbitrary loop number, into scalar integrals in higher dimension. In order for FaRe to work, the package FeynCalc is needed, so that the tensor structure of the different contributions is preserved and the obtained scalar integrals are grouped accordingly. FaRe can prove particularly useful when it is preferable to handle Feynman integrals with free Lorentz indices and tensor reduction of high-order integrals is needed. This can then be achieved with several powerful existing tools.

  15. Projectors and seed conformal blocks for traceless mixed-symmetry tensors

    NASA Astrophysics Data System (ADS)

    Costa, Miguel S.; Hansen, Tobias; Penedones, João; Trevisani, Emilio

    2016-07-01

    In this paper we derive the projectors to all irreducible SO(d) representations (traceless mixed-symmetry tensors) that appear in the partial wave decomposition of a conformal correlator of four stress-tensors in d dimensions. These projectors are given in closed form for arbitrary length l1 of the first row of the Young diagram. The appearance of Gegenbauer polynomials leads directly to recursion relations in l1 for seed conformal blocks. Further results include a differential operator that generates the projectors to traceless mixed-symmetry tensors and the general normalization constant of the shadow operator.

  16. Notes on super Killing tensors

    NASA Astrophysics Data System (ADS)

    Howe, P. S.; Lindström, U.

    2016-03-01

    The notion of a Killing tensor is generalised to a superspace setting. Conserved quantities associated with these are defined for superparticles and Poisson brackets are used to define a supersymmetric version of the even Schouten-Nijenhuis bracket. Superconformal Killing tensors in flat superspaces are studied for spacetime dimensions 3,4,5,6 and 10. These tensors are also presented in analytic superspaces and super-twistor spaces for 3,4 and 6 dimensions. Algebraic structures associated with superconformal Killing tensors are also briefly discussed.

  17. Kinetics and mechanism of monomolecular heterolysis of framework compounds. V. Ionization-fragmentation process in decomposition of 1-adamantyl chloroformate

    SciTech Connect

    Ponomareva, E.A.; Yavorskaya, I.F.; Dvorko, G.V.

    1988-08-10

    The decomposition of 1-adamantyl chloroformate in acetonitrile, nitrobenzene, benzene, and isopropyl and tert-butyl alcohols in the presence of triphenylverdazyls as internal indicator was investigated preparatively and kinetically. In nitrobenzene small additions of water increase the reaction rate, and additions of tetraethylammonium halides reduce it. In isopropyl and tert-butyl alcohols and in nitrobenzene in the presence of tetraethylammonium halides the reaction rate depends on the nature of the substituent in the verdazyl. The reaction rate increases linearly with increase in the dielectric constant of the medium. It is assumed that an intimate ion pair is formed at the first stage of the reaction and undergoes fragmentation in the controlling stage to 1-adamantyl chloride or is converted into a solvent-separated ion pair. The latter reacts with the verdazyl or undergoes fragmentation to 1-adamantyl chloride.

  18. Relativistic Lagrangian displacement field and tensor perturbations

    NASA Astrophysics Data System (ADS)

    Rampf, Cornelius; Wiegand, Alexander

    2014-12-01

    We investigate the purely spatial Lagrangian coordinate transformation from the Lagrangian to the basic Eulerian frame. We demonstrate three techniques for extracting the relativistic displacement field from a given solution in the Lagrangian frame: (a) defining a local set of Eulerian coordinates embedded into the Lagrangian frame; (b) performing a specific gauge transformation; and (c) a fully nonperturbative approach based on the Arnowitt-Deser-Misner (ADM) split. The latter approach shows that this decomposition is not tied to a specific perturbative formulation for the solution of the Einstein equations. Rather, it can be defined at the level of the nonperturbative coordinate change from the Lagrangian to the Eulerian description. Studying such different techniques is useful because it allows us to compare and further develop the various approximation techniques available in the Lagrangian formulation. We find that one has to solve the gravitational wave equation in the relativistic analysis; otherwise the corresponding Newtonian limit will necessarily contain spurious nonpropagating tensor artifacts at second order in the Eulerian frame. We also derive the magnetic part of the Weyl tensor in the Lagrangian frame, and find that it is excited not only by gravitational waves but also by tensor perturbations induced through nonlinear frame dragging. We apply our findings to calculate for the first time the relativistic displacement field, up to second order, for a ΛCDM Universe in the presence of a local primordial non-Gaussian component. Finally, we also comment on recent claims about whether mass conservation in the Lagrangian frame is violated.

  19. On Endomorphisms of Quantum Tensor Space

    NASA Astrophysics Data System (ADS)

    Lehrer, Gustav Isaac; Zhang, Ruibin

    2008-12-01

    We give a presentation of the endomorphism algebra End_{U_q(sl_2)}(V^{⊗r}), where V is the three-dimensional irreducible module for quantum sl_2 over the function field C(q^{1/2}). This will be as a quotient of the Birman-Wenzl-Murakami algebra BMW_r(q) := BMW_r(q^{-4}, q^2 - q^{-2}) by an ideal generated by a single idempotent Φ_q. Our presentation is in analogy with the case where V is replaced by the two-dimensional irreducible U_q(sl_2)-module, the BMW algebra is replaced by the Hecke algebra H_r(q) of type A_{r-1}, Φ_q is replaced by the quantum alternator in H_3(q), and the endomorphism algebra is the classical realisation of the Temperley-Lieb algebra on tensor space. In particular, we show that all relations among the endomorphisms defined by the R-matrices on V^{⊗r} are consequences of relations among the three R-matrices acting on V^{⊗4}. The proof makes extensive use of the theory of cellular algebras. Potential applications include the decomposition of tensor powers when q is a root of unity.

  20. A self-documenting source-independent data format for computer processing of tensor time series. [for filing satellite geophysical data

    NASA Technical Reports Server (NTRS)

    Mcpherron, R. L.

    1976-01-01

    The UCLA Space Science Group has developed a fixed format intermediate data set called a block data set, which is designed to hold multiple segments of multicomponent sampled data series. The format is sufficiently general so that tensor functions of one or more independent variables can be stored in the form of virtual data. This makes it possible for the unit data records of the block data set to be arrays of a single dependent variable rather than discrete samples. The format is self-documenting with parameter, label and header records completely characterizing the contents of the file. The block data set has been applied to the filing of satellite data (of ATS-6 among others).

  1. Reducing tensor magnetic gradiometer data for unexploded ordnance detection

    USGS Publications Warehouse

    Bracken, Robert E.; Brown, Philip J.

    2005-01-01

    We performed a survey to demonstrate the effectiveness of a prototype tensor magnetic gradiometer system (TMGS) for detection of buried unexploded ordnance (UXO). In order to achieve a useful result, we designed a data-reduction procedure that resulted in a realistic magnetic gradient tensor and devised a simple way of viewing complicated tensor data, not only to assess the validity of the final resulting tensor, but also to preview the data at interim stages of processing. The final processed map of the surveyed area clearly shows a sharp anomaly that peaks almost directly over the target UXO. This map agrees well with a modeled map derived from dipolar sources near the known target locations. From this agreement, it can be deduced that the reduction process is valid, making the prototype TMGS a foundation for development of future systems and processes.

  2. Relationship between the Decomposition Process of Coarse Woody Debris and Fungal Community Structure as Detected by High-Throughput Sequencing in a Deciduous Broad-Leaved Forest in Japan

    PubMed Central

    Yamashita, Satoshi; Masuya, Hayato; Abe, Shin; Masaki, Takashi; Okabe, Kimiko

    2015-01-01

    We examined the relationship between the community structure of wood-decaying fungi, detected by high-throughput sequencing, and the decomposition rate using 13 years of data from a forest dynamics plot. For molecular analysis and wood density measurements, drill dust samples were collected from logs and stumps of Fagus and Quercus in the plot. Regression using a negative exponential model between wood density and time since death revealed that the decomposition rate of Fagus was greater than that of Quercus. The residual between the expected value obtained from the regression curve and the observed wood density was used as a decomposition rate index. Principal component analysis showed that the fungal community compositions of both Fagus and Quercus changed with time since death. Principal component analysis axis scores were used as an index of fungal community composition. A structural equation model for each wood genus was used to assess the effect of fungal community structure traits on the decomposition rate and how the fungal community structure was determined by the traits of coarse woody debris. Results of the structural equation model suggested that the decomposition rate of Fagus was affected by two fungal community composition components: one that was affected by time since death and another that was not affected by the traits of coarse woody debris. In contrast, the decomposition rate of Quercus was not affected by coarse woody debris traits or fungal community structure. These findings suggest that, in the case of Fagus coarse woody debris, the fungal community structure is related to the decomposition process of its host substrate. Because fungal community structure is affected partly by the decay stage and wood density of its substrate, these factors influence each other. Further research on interactive effects is needed to improve our understanding of the relationship between fungal community structure and the woody debris decomposition process.

  3. Thermal decomposition of tetramethyl orthosilicate in the gas phase: An experimental and theoretical study of the initiation process

    SciTech Connect

    Chu, J.C.S.; Soller, R.; Lin, M.C. ); Melius, C.F. )

    1995-01-12

    The thermal decomposition of Si(OCH3)4 (TMOS) has been studied by FTIR at temperatures between 858 and 968 K. The experiment was carried out in a static cell at a constant pressure of 700 Torr under highly diluted conditions. Additional experiments were performed using toluene as a radical scavenger. The species monitored included TMOS, CH2O, CH4, and CO. According to these measurements, the first-order global rate constants for the disappearance of TMOS without and with toluene can be given by k_g = 1.4 x 10^16 exp(-81200/RT) s^-1 and k_g = 2.0 x 10^14 exp(-74500/RT) s^-1, respectively. The noticeable difference between the two sets of Arrhenius parameters suggests that, in the absence of the inhibitor, the reactant was consumed to a significant extent by radical attack at higher temperatures. The experimental data were kinetically modeled with the aid of a quantum-chemical calculation using the BAC-MP4 method. The results of the kinetic modeling, using the mechanism constructed on the basis of the quantum-chemical data and the known C/H/O chemistry, identified two rate-controlling reactions whose first-order rate constants are given here. 22 refs., 15 figs., 3 tabs.
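    The reported Arrhenius fits can be evaluated directly; a minimal sketch assuming the activation energies are in cal/mol (so R = 1.987 cal mol^-1 K^-1) and using hypothetical names:

```python
import math

R_CAL = 1.987  # gas constant in cal mol^-1 K^-1 (Ea assumed in cal/mol)

def arrhenius(A, Ea, T):
    """First-order rate constant k = A * exp(-Ea / (R T)) in s^-1."""
    return A * math.exp(-Ea / (R_CAL * T))

# Global TMOS disappearance rates from the record, evaluated at 900 K:
k_no_toluene = arrhenius(1.4e16, 81200, 900.0)  # without scavenger
k_toluene = arrhenius(2.0e14, 74500, 900.0)     # with toluene present
# With the radical scavenger present the effective rate is lower,
# consistent with radical attack consuming extra TMOS in its absence.
print(k_no_toluene > k_toluene)
```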

  4. Mode decomposition evolution equations

    PubMed Central

    Wang, Yang; Wei, Guo-Wei; Yang, Siyang

    2011-01-01

    Partial differential equation (PDE) based methods have become some of the most powerful tools for exploring fundamental problems in signal processing, image processing, computer vision, machine vision and artificial intelligence over the past two decades. The advantages of PDE based approaches are that they can be made fully automatic and robust for the analysis of images, videos and high-dimensional data. A fundamental question is whether one can use PDEs to perform all the basic tasks in image processing. If one could devise PDEs to perform full-scale mode decomposition of signals and images, the modes thus generated would be very useful for secondary processing to meet the needs of various types of signal and image processing. Despite great progress in PDE based image analysis in the past two decades, the basic roles of PDEs in image/signal analysis have been limited to PDE based low-pass filters and their applications to noise removal, edge detection, segmentation, etc. At present, it is not clear how to construct PDE based methods for full-scale mode decomposition. This limitation of most current PDE based image/signal processing methods is addressed in the proposed work, in which we introduce a family of mode decomposition evolution equations (MoDEEs) for a vast variety of applications. The MoDEEs are constructed as an extension of a PDE based high-pass filter (Europhys. Lett., 59(6): 814, 2002) using the arbitrarily high order PDE based low-pass filters introduced by Wei (IEEE Signal Process. Lett., 6(7): 165, 1999). The use of arbitrarily high order PDEs is essential to the frequency localization in the mode decomposition. Similar to the wavelet transform, the present MoDEEs have controllable time-frequency localization and allow a perfect reconstruction of the original function. Therefore, the MoDEE operation is also called a PDE transform. However, modes generated by the present approach are in the spatial or time domain and can be

  5. Skyrme tensor force in heavy ion collisions

    NASA Astrophysics Data System (ADS)

    Stevenson, P. D.; Suckling, E. B.; Fracasso, S.; Barton, M. C.; Umar, A. S.

    2016-05-01

    Background: It is generally acknowledged that the time-dependent Hartree-Fock (TDHF) method provides a useful foundation for a fully microscopic many-body theory of low-energy heavy ion reactions. The TDHF method is also used in nuclear physics in the small-amplitude domain, where it provides a useful description of collective states, and is based on the mean-field formalism, which has been a relatively successful approximation to the nuclear many-body problem. Currently, the TDHF theory is being widely used in the study of fusion excitation functions, fission, and deep-inelastic scattering of heavy mass systems, while providing a natural foundation for many other studies. Purpose: With the advancement of computational power it is now possible to undertake TDHF calculations without any symmetry assumptions and incorporate the major strides made by the nuclear structure community in improving the energy density functionals used in these calculations. In particular, time-odd and tensor terms in these functionals are naturally present during the dynamical evolution, while being absent or minimally important for most static calculations. The parameters of these terms are determined by the requirement of Galilean invariance or local gauge invariance, but their significance for the reaction dynamics has not been fully studied. This work addresses this question with emphasis on the tensor force. Method: The full version of the Skyrme force, including terms arising only from the Skyrme tensor force, is applied to the study of collisions within a completely symmetry-unrestricted TDHF implementation. Results: We examine the effect on upper fusion thresholds with and without the tensor force terms and find an effect on the fusion threshold energy of the order of several MeV. Details of the distribution of the energy among terms in the energy density functional are also discussed. Conclusions: Terms in the energy density functional linked to the tensor force can play a non

  6. 3-D inversion of magnetotelluric Phase Tensor

    NASA Astrophysics Data System (ADS)

    Patro, Prasanta; Uyeshima, Makoto

    2010-05-01

    Three-dimensional (3-D) inversion of magnetotelluric (MT) data has become routine practice in the MT community due to progress in algorithms for 3-D inverse problems (e.g. Mackie and Madden, 1993; Siripunvaraporn et al., 2005). While the availability of such 3-D inversion codes has increased the resolving power of MT data and improved interpretation, galvanic effects still pose difficulties in the interpretation of resistivity structure obtained from MT data. In order to tackle the galvanic distortion of MT data, Caldwell et al. (2004) introduced the concept of the phase tensor. They demonstrated how the regional phase information can be retrieved from the observed impedance tensor without any assumptions about structural dimension, where both the near-surface inhomogeneity and the regional conductivity structures can be 3-D. We made an attempt to modify a 3-D inversion code (Siripunvaraporn et al., 2005) to directly invert the phase tensor elements. We present here the main modifications made in the sensitivity calculation and then show a few synthetic studies and an application to real data. The synthetic model study suggests that the prior model (m_0) setting is important for retrieving the true model. This is because the phase tensor inversion process lacks an estimate of the correct induction scale length. Comparison between results from conventional impedance inversion and the new phase tensor inversion suggests that, in spite of the presence of galvanic distortion (due to near-surface checkerboard anomalies in our case), the new inversion algorithm retrieves the regional conductivity structure reliably. We applied the new inversion to real data from the Indian subcontinent and compared the results with those from conventional impedance inversion.
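    The phase tensor of Caldwell et al. (2004) is Φ = X⁻¹Y for an impedance tensor Z = X + iY, and it is unaffected by a real galvanic distortion matrix C because Z → CZ rescales X and Y identically. A small numerical check of this invariance (illustrative only):

```python
import numpy as np

def phase_tensor(Z):
    """Caldwell et al. (2004) phase tensor: Phi = X^-1 Y for Z = X + iY."""
    return np.linalg.solve(Z.real, Z.imag)

rng = np.random.default_rng(0)
Z = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
C = rng.normal(size=(2, 2))          # real galvanic distortion matrix
# The phase tensor is unchanged by any invertible real distortion C:
print(np.allclose(phase_tensor(C @ Z), phase_tensor(Z)))
```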

  7. Extracting the diffusion tensor from molecular dynamics simulation with Milestoning

    SciTech Connect

    Mugnai, Mauro L.; Elber, Ron

    2015-01-07

    We propose an algorithm to extract the diffusion tensor from Molecular Dynamics simulations with Milestoning. A Kramers-Moyal expansion of a discrete master equation, which is the Markovian limit of the Milestoning theory, determines the diffusion tensor. To test the algorithm, we analyze overdamped Langevin trajectories and recover a multidimensional Fokker-Planck equation. The recovery process determines the flux through a mesh and estimates local kinetic parameters. Rate coefficients are converted to the derivatives of the potential of mean force and to a coordinate-dependent diffusion tensor. We illustrate the computation on simple models and on an atomically detailed system: the diffusion along the backbone torsions of a solvated alanine dipeptide.
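    The record's multidimensional Milestoning estimator is beyond a short sketch, but its one-dimensional, free-diffusion limit reduces to the textbook mean-squared-displacement estimate D ≈ ⟨Δx²⟩/(2Δt); a sketch with the hypothetical helper `estimate_diffusion`:

```python
import numpy as np

def estimate_diffusion(x, dt):
    """1-D diffusion coefficient from a free overdamped Langevin
    trajectory via the mean-squared single-step displacement:
    D ~ <dx^2> / (2 dt)."""
    dx = np.diff(x)
    return (dx * dx).mean() / (2.0 * dt)

# Synthetic free Brownian trajectory with known D = 0.5 and dt = 1e-3:
# each step is Gaussian with variance 2 D dt.
rng = np.random.default_rng(1)
D_true, dt, n = 0.5, 1e-3, 200_000
steps = rng.normal(0.0, np.sqrt(2.0 * D_true * dt), size=n)
x = np.cumsum(steps)
print(round(estimate_diffusion(x, dt), 2))
```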

  8. Link prediction on evolving graphs using matrix and tensor factorizations.

    SciTech Connect

    Dunlavy, Daniel M.; Acar, Evrim; Kolda, Tamara Gibson

    2010-06-01

    The data in many disciplines such as social networks, web analysis, etc. is link-based, and the link structure can be exploited for many different data mining tasks. In this paper, we consider the problem of temporal link prediction: Given link data for time periods 1 through T, can we predict the links in time period T + 1? Specifically, we look at bipartite graphs changing over time and consider matrix- and tensor-based methods for predicting links. We present a weight-based method for collapsing multi-year data into a single matrix. We show how the well-known Katz method for link prediction can be extended to bipartite graphs and, moreover, approximated in a scalable way using a truncated singular value decomposition. Using a CANDECOMP/PARAFAC tensor decomposition of the data, we illustrate the usefulness of exploiting the natural three-dimensional structure of temporal link data. Through several numerical experiments, we demonstrate that both matrix- and tensor-based techniques are effective for temporal link prediction despite the inherent difficulty of the problem.
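    One variant of the truncated-SVD Katz idea for bipartite graphs can be sketched as follows (an illustrative reading of the approach, not the paper's exact implementation): summing the odd-length walk counts Σ_k β^(2k+1) (AAᵀ)^k A and substituting a rank-r SVD A ≈ UΣVᵀ gives a closed form in the singular values, valid when β·σ_max < 1.

```python
import numpy as np

def katz_bipartite_svd(A, rank, beta):
    """Approximate bipartite Katz scores from a rank-r truncated SVD:
    on A ~ U S V^T, the odd-walk series sums to
    U diag(beta*s / (1 - beta^2 s^2)) V^T  (requires beta*s_max < 1)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    U, s, Vt = U[:, :rank], s[:rank], Vt[:rank]
    assert beta * s.max() < 1.0, "Katz series diverges"
    w = beta * s / (1.0 - beta**2 * s**2)
    return (U * w) @ Vt          # scales columns of U by w

# Toy author-by-venue matrix; the score matrix ranks unobserved pairs.
A = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])
S = katz_bipartite_svd(A, rank=2, beta=0.1)
print(S.shape)
```

    The rank truncation is what makes the method scalable: only r singular triplets are kept, never the dense power series.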

  9. Parallel Tensor Compression for Large-Scale Scientific Data.

    SciTech Connect

    Kolda, Tamara G.; Ballard, Grey; Austin, Woody Nathan

    2015-10-01

    As parallel computing trends toward the exascale, scientific data produced by high-fidelity simulations are growing increasingly massive. For instance, a simulation on a three-dimensional spatial grid with 512 points per dimension that tracks 64 variables per grid point for 128 time steps yields 8 TB of data. By viewing the data as a dense five-way tensor, we can compute a Tucker decomposition to find inherent low-dimensional multilinear structure, achieving compression ratios of up to 10000 on real-world data sets with negligible loss in accuracy. To operate on such massive data, we present the first distributed-memory parallel implementation of the Tucker decomposition, whose key computations correspond to parallel linear algebra operations, albeit with nonstandard data layouts. Our approach specifies a data distribution for tensors that avoids any tensor data redistribution, either locally or in parallel. We provide accompanying analysis of the computation and communication costs of the algorithms. To demonstrate the compression and accuracy of the method, we apply our approach to real-world data sets from combustion science simulations. We also provide detailed performance results, including parallel performance in both weak and strong scaling experiments.
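    A serial HOSVD sketch of the Tucker compression idea (the record's contribution is the distributed-memory implementation; the sketch below uses plain NumPy and hypothetical names):

```python
import numpy as np

def hosvd(X, ranks):
    """Truncated higher-order SVD: factor U_n comes from the leading
    left singular vectors of the mode-n unfolding of X; the core is
    G = X x_0 U_0^T x_1 U_1^T ... (mode-n products)."""
    factors, G = [], X
    for n, r in enumerate(ranks):
        Xn = np.moveaxis(X, n, 0).reshape(X.shape[n], -1)  # mode-n unfolding
        U = np.linalg.svd(Xn, full_matrices=False)[0][:, :r]
        factors.append(U)
        # contract U^T along mode n of the running core
        G = np.moveaxis(np.tensordot(U.T, G, axes=(1, n)), 0, n)
    return G, factors

# A 20x20x20 tensor of exact multilinear rank (3,3,3) compresses
# losslessly to a 3x3x3 core plus three 20x3 factor matrices.
rng = np.random.default_rng(2)
core = rng.normal(size=(3, 3, 3))
Us = [np.linalg.qr(rng.normal(size=(20, 3)))[0] for _ in range(3)]
X = np.einsum('abc,ia,jb,kc->ijk', core, *Us)
G, F = hosvd(X, (3, 3, 3))
Xhat = np.einsum('abc,ia,jb,kc->ijk', G, *F)
print(np.allclose(Xhat, X), X.size / (G.size + sum(U.size for U in F)))
```

    Here the compression ratio is already ~38x; for low multilinear-rank data on large grids, ratios like the quoted 10000x arise the same way.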

  10. Tensor Network Contractions for #SAT

    NASA Astrophysics Data System (ADS)

    Biamonte, Jacob D.; Morton, Jason; Turner, Jacob

    2015-09-01

    The computational cost of counting the number of solutions satisfying a Boolean formula, which is a problem instance of #SAT, has proven subtle to quantify. Even when finding individual satisfying solutions is computationally easy (e.g. 2-SAT, which is in P), determining the number of solutions can be #P-hard. Recently, computational methods for simulating quantum systems have experienced advancements due to the development of tensor network algorithms and associated quantum physics-inspired techniques. By these methods, we give an algorithm using an axiomatic tensor contraction language for n-variable #SAT instances, with complexity determined by c, the number of COPY-tensors, g, the number of gates, and d, the maximal degree of any COPY-tensor. Thus, n-variable counting problems can be solved efficiently when their tensor network expression has sufficiently few COPY-tensors and polynomial fan-out. This framework also admits an intuitive proof of a variant of the Tovey conjecture (the r,1-SAT instance of the Dubois-Tovey theorem). This study extends the theory, expressiveness and application of tensor-based algorithmic tools and provides an alternative insight on these problems, which have a long history in statistical physics and computer science.
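    A tiny concrete instance of counting by tensor contraction, where sharing one index per variable across all clause tensors plays the role of a COPY-tensor (illustrative only; the record's contraction language is more general):

```python
import numpy as np

def or_clause(neg):
    """Clause tensor for an OR of literals: entry 1 iff the clause is
    satisfied; `neg` flags which literals are negated. The single
    falsifying assignment sets every literal to false."""
    T = np.ones([2] * len(neg))
    T[tuple(1 if n else 0 for n in neg)] = 0.0
    return T

# #SAT for (x OR y) AND (NOT x OR y) AND (x OR NOT y): sharing an index
# across all occurrences of a variable acts as the COPY-tensor, and the
# full contraction sums the clause products over all assignments.
c1 = or_clause([False, False])   # x OR y
c2 = or_clause([True, False])    # NOT x OR y
c3 = or_clause([False, True])    # x OR NOT y
count = np.einsum('xy,xy,xy->', c1, c2, c3)
print(int(count))   # only x = y = 1 satisfies all three clauses
```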

  11. Tensor-network algorithm for nonequilibrium relaxation in the thermodynamic limit

    NASA Astrophysics Data System (ADS)

    Hotta, Yoshihito

    2016-06-01

    We propose a tensor-network algorithm for the discrete-time stochastic dynamics of a homogeneous system in the thermodynamic limit. We map a d-dimensional nonequilibrium Markov process to a (d+1)-dimensional infinite tensor network by using a higher-order singular-value decomposition. As an application of the algorithm, we compute the nonequilibrium relaxation from a fully magnetized state to equilibrium of the one- and two-dimensional Ising models with periodic boundary conditions. Utilizing the translational invariance of the systems, we analyze the behavior in the thermodynamic limit directly. We estimate the dynamical critical exponent z = 2.16(5) for the two-dimensional Ising model. Our approach fits well with the framework of the nonequilibrium-relaxation method. Our algorithm can compute the time evolution of the magnetization of a large system precisely for a relatively short period. In the nonequilibrium-relaxation method, one needs to simulate the dynamics of a large system for a short time. The combination of the two provides a different approach to the study of critical phenomena.

  12. MATLAB tensor classes for fast algorithm prototyping.

    SciTech Connect

    Bader, Brett William; Kolda, Tamara Gibson

    2004-10-01

    Tensors (also known as multidimensional arrays or N-way arrays) are used in a variety of applications ranging from chemometrics to psychometrics. We describe four MATLAB classes for tensor manipulations that can be used for fast algorithm prototyping. The tensor class extends the functionality of MATLAB's multidimensional arrays by supporting additional operations such as tensor multiplication. The tensor_as_matrix class supports the 'matricization' of a tensor, i.e., the conversion of a tensor to a matrix (and vice versa), a commonly used operation in many algorithms. Two additional classes represent tensors stored in decomposed formats: cp_tensor and tucker_tensor. We describe all of these classes and then demonstrate their use by showing how to implement several tensor algorithms that have appeared in the literature.
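A minimal Python analogue of the matricization supported by the tensor_as_matrix class (a hypothetical sketch, not the toolbox code) looks like:

```python
import numpy as np

def unfold(X, mode):
    """Mode-n matricization: move `mode` to the front, flatten the rest."""
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def fold(M, mode, shape):
    """Inverse of `unfold` for a tensor of the given `shape`."""
    rest = [s for i, s in enumerate(shape) if i != mode]
    return np.moveaxis(M.reshape([shape[mode]] + rest), 0, mode)

X = np.arange(24).reshape(2, 3, 4)
M = unfold(X, 1)          # a 3 x 8 matrix
```

The round trip `fold(unfold(X, mode), mode, X.shape)` recovers the original tensor, which is what many decomposition algorithms rely on.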

  13. Tensor classification of structure in smoothed particle hydrodynamics density fields

    NASA Astrophysics Data System (ADS)

    Forgan, Duncan; Bonnell, Ian; Lucas, William; Rice, Ken

    2016-04-01

    As hydrodynamic simulations increase in scale and resolution, identifying structures with non-trivial geometries or regions of general interest becomes increasingly challenging. There is a growing need for algorithms that identify a variety of different features in a simulation without requiring a `by eye' search. We present tensor classification as such a technique for smoothed particle hydrodynamics (SPH). These methods have already been used to great effect in N-body cosmological simulations, which require a smoothing scale as an input free parameter. We show that tensor classification successfully identifies a wide range of structures in SPH density fields using its native smoothing, removing a free parameter from the analysis and avoiding the tessellation of the density field required by some classification algorithms. As examples, we show that tensor classification using the tidal tensor and the velocity shear tensor successfully identifies filaments, shells and sheet structures in giant molecular cloud simulations, as well as spiral arms in discs. The relationship between structures identified using different tensors illustrates how different forces compete and co-operate to produce the observed density field. We therefore advocate the use of multiple tensors to classify structure in SPH simulations, to shed light on the interplay of multiple physical processes.
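The eigenvalue-counting idea behind tidal-tensor classification can be sketched as follows (a T-web-style toy classifier; the threshold and labels are illustrative assumptions, not the authors' exact scheme):

```python
import numpy as np

def tweb_class(tidal_tensor, threshold=0.0):
    """Classify structure by counting tidal-tensor eigenvalues above a
    threshold (0 -> void, 1 -> sheet, 2 -> filament, 3 -> cluster);
    the threshold value is a modelling choice, taken as 0 here."""
    n_pos = int(np.sum(np.linalg.eigvalsh(tidal_tensor) > threshold))
    return ('void', 'sheet', 'filament', 'cluster')[n_pos]
```

For example, a tensor with two positive and one negative eigenvalue (collapse along two axes, expansion along one) is classified as a filament.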

  14. Real-time framework for tensor-based image enhancement for object classification

    NASA Astrophysics Data System (ADS)

    Cyganek, Bogusław; Smołka, Bogdan

    2016-04-01

    In many practical situations visual pattern recognition is heavily burdened by the low quality of input images due to noise, geometrical distortions, and low-quality acquisition hardware. Although there are techniques for image quality improvement, such as nonlinear filtering, only a few attempts reported in the literature try to build these enhancement methods into a complete chain for multi-dimensional object recognition, such as for color video or hyperspectral images. In this work we propose a joint multilinear signal filtering and classification system built upon the multi-dimensional (tensor) approach. Tensor filtering is performed by projecting the multi-dimensional input signal onto the tensor subspace spanned by a best-rank tensor decomposition. Object classification, in turn, is done by constructing a tensor subspace from the higher-order singular value decomposition (HOSVD) applied to the prototype patterns. In the experiments we show that the proposed chain allows high object recognition accuracy in real time, even from poor-quality prototypes. More importantly, the proposed framework allows unified classification of signals of any dimensionality, such as color images or video sequences, which are exemplars of 3D and 4D tensors, respectively. The paper also discusses some practical issues related to the implementation of the key components of the proposed system.
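The HOSVD-based subspace projection underlying both the filtering and classification stages can be sketched in NumPy (an illustrative best-rank projection, not the authors' implementation):

```python
import numpy as np

def mode_product(X, M, mode):
    """Multiply tensor X by matrix M along the given mode."""
    return np.moveaxis(np.tensordot(M, X, axes=(1, mode)), 0, mode)

def hosvd_project(X, ranks):
    """Project X onto the leading left singular subspaces of its mode
    unfoldings -- a best-rank-(R1,...,RN)-style subspace filter."""
    Y = X
    for mode, r in enumerate(ranks):
        unf = np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)
        U = np.linalg.svd(unf, full_matrices=False)[0][:, :r]
        Y = mode_product(Y, U @ U.T, mode)    # orthogonal projection per mode
    return Y

# A tensor with exact multilinear rank (2, 2, 2) is reproduced exactly.
rng = np.random.default_rng(0)
A, B, C = (rng.standard_normal((n, 2)) for n in (5, 6, 7))
G = rng.standard_normal((2, 2, 2))
X = np.einsum('ia,jb,kc,abc->ijk', A, B, C, G)
Y = hosvd_project(X, (2, 2, 2))
```

In a denoising setting, the projection of a noisy signal onto such a subspace discards components outside the low multilinear-rank model.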

  15. Metallo-Organic Decomposition (MOD) film development

    NASA Technical Reports Server (NTRS)

    Parker, J.

    1986-01-01

    The processing techniques and problems encountered in formulating metallo-organic decomposition (MOD) films used in contact structures for thin solar cells are described. The use of thermogravimetric analysis (TGA) and differential scanning calorimetry (DSC) techniques performed at the Jet Propulsion Laboratory (JPL) to understand the decomposition reactions led to improvements in process procedures. The characteristics of the available MOD films are described in detail.

  16. Tensor-polarized structure functions: Tensor structure of deuteron in 2020's

    NASA Astrophysics Data System (ADS)

    Kumano, S.

    2014-10-01

    We explain spin structure for a spin-one hadron, in which there are new structure functions, in addition to the ones (F1, F2, g1, g2) which exist for the spin-1/2 nucleon, associated with its tensor structure. The new structure functions are b1, b2, b3, and b4 in deep inelastic scattering of a charged lepton from a spin-one hadron such as the deuteron. Among them, twist-two functions are related by the Callan-Gross type relation b2 = 2xb1 in the Bjorken scaling limit. First, these new structure functions are introduced, and useful formulae are derived for projection operators of b1-4 from a hadron tensor Wμν. Second, a sum rule is explained for b1, and possible tensor-polarized distributions are discussed by using HERMES data in order to propose future experimental measurements and to compare them with theoretical models. A proposal was approved to measure b1 at the Thomas Jefferson National Accelerator Facility (JLab), so that much progress is expected for b1 in the near future. Third, formalisms of polarized proton-deuteron Drell-Yan processes are explained for probing especially tensor-polarized antiquark distributions, which were suggested by the HERMES data. The studies of the tensor-polarized structure functions will open a new era in the 2020s for tensor-structure studies in terms of quark and gluon degrees of freedom, which are very different from ordinary descriptions in terms of nucleons and mesons.

  17. Attributing analysis on the model bias in surface temperature in the climate system model FGOALS-s2 through a process-based decomposition method

    NASA Astrophysics Data System (ADS)

    Yang, Yang; Ren, Rongcai; Cai, Ming; Rao, Jian

    2015-04-01

    This study uses the coupled atmosphere-surface climate feedback-response analysis method (CFRAM) to analyze the surface temperature biases in the Flexible Global Ocean-Atmosphere-Land System model, spectral version 2 (FGOALS-s2) in January and July. The process-based decomposition of the surface temperature biases, defined as the difference between the model and ERA-Interim during 1979-2005, enables us to attribute the model surface temperature biases to individual radiative processes including ozone, water vapor, cloud, and surface albedo; and non-radiative processes including surface sensible and latent heat fluxes, and dynamic processes at the surface and in the atmosphere. The results show that significant model surface temperature biases are almost globally present, are generally larger over land than over oceans, and are relatively larger in summer than in winter. Relative to the model biases in non-radiative processes, which tend to dominate the surface temperature biases in most parts of the world, biases in radiative processes are much smaller, except in the sub-polar Antarctic region where the cold biases from the much overestimated surface albedo are compensated for by the warm biases from nonradiative processes. The larger biases in non-radiative processes mainly lie in surface heat fluxes and in surface dynamics, which are twice as large in the Southern Hemisphere as in the Northern Hemisphere and always tend to compensate for each other. In particular, the upward/downward heat fluxes are systematically underestimated/overestimated in most parts of the world, and are mainly compensated for by surface dynamic processes including the increased heat storage in deep oceans across the globe.

  18. Hydrogen peroxide catalytic decomposition

    NASA Technical Reports Server (NTRS)

    Parrish, Clyde F. (Inventor)

    2010-01-01

    Nitric oxide in a gaseous stream is converted to nitrogen dioxide using oxidizing species generated through the use of concentrated hydrogen peroxide fed as a monopropellant into a catalyzed thruster assembly. The hydrogen peroxide is preferably stored at stable concentration levels, i.e., approximately 50%-70% by volume, and may be increased in concentration in a continuous process preceding decomposition in the thruster assembly. The exhaust of the thruster assembly, rich in hydroxyl and/or hydroperoxy radicals, may be fed into a stream containing oxidizable components, such as nitric oxide, to facilitate their oxidation.

  19. Benefits and Costs of Lexical Decomposition and Semantic Integration during the Processing of Transparent and Opaque English Compounds

    ERIC Educational Resources Information Center

    Ji, Hongbo; Gagne, Christina L.; Spalding, Thomas L.

    2011-01-01

    Six lexical decision experiments were conducted to examine the influence of complex structure on the processing speed of English compounds. All experiments revealed that semantically transparent compounds (e.g., "rosebud") were processed more quickly than matched monomorphemic words (e.g., "giraffe"). Opaque compounds (e.g., "hogwash") were also…

  20. Tensor visualizations in computational geomechanics

    NASA Astrophysics Data System (ADS)

    Jeremić, Boris; Scheuermann, Gerik; Frey, Jan; Yang, Zhaohui; Hamann, Bernd; Joy, Kenneth I.; Hagen, Hans

    2002-08-01

    We present a novel technique for visualizing tensors in three-dimensional (3D) space. Of particular interest is the visualization of stress tensors resulting from 3D numerical simulations in computational geomechanics. To this end we present three different approaches to visualizing tensors in 3D space, namely hedgehogs, hyperstreamlines and hyperstreamsurfaces. We also present a number of examples related to stress distributions in 3D solids subjected to single loads and load couples. In addition, we present stress visualizations resulting from single-pile and pile-group computations. The main objective of this work is to investigate various techniques for visualizing general Cartesian tensors of rank 2 and their application to geomechanics problems.

  1. Understanding the systematic air temperature biases in a coupled climate system model through a process-based decomposition method

    NASA Astrophysics Data System (ADS)

    Ren, R.-C.; Yang, Yang; Cai, Ming; Rao, Jian

    2015-10-01

    A quantitative attribution analysis is performed on the systematic atmospheric temperature biases in a coupled climate system model (flexible global ocean-atmosphere-land system model, spectral version 2) in reference to the European Center for Medium-Range Weather Forecasts, Re-analysis Interim data during 1979-2005. By adopting the coupled surface-atmosphere climate feedback response analysis method, the model temperature biases are related to model biases in representing the radiative processes including water vapor, ozone, clouds and surface albedo, and the non-radiative processes including surface heat fluxes and other dynamic processes. The results show that the temperature biases due to biases in radiative and non-radiative processes tend to compensate one another. In general, the radiative biases tend to dominate in the summer hemisphere, whereas the non-radiative biases dominate in the winter hemisphere. The temperature biases associated with radiative processes due to biases in ozone and water vapor content are the main contributors to the total temperature bias in the tropical and summer stratosphere. The overestimated surface albedo in both polar regions always results in significant cold biases in the atmosphere above in the summer season. Apart from these radiative biases, the zonal-mean patterns of the temperature biases in both boreal winter and summer are largely determined by model biases in non-radiative processes. In particular, the stronger non-radiative process biases in the northern winter hemisphere are responsible for the relatively larger `cold pole' bias in the northern winter polar stratosphere.

  2. Scalable tensor factorizations with incomplete data.

    SciTech Connect

    Morup, Morten; Dunlavy, Daniel M.; Acar, Evrim; Kolda, Tamara Gibson

    2010-07-01

    The problem of incomplete data - i.e., data with missing or unknown values - in multi-way arrays is ubiquitous in biomedical signal processing, network traffic analysis, bibliometrics, social network analysis, chemometrics, computer vision, communication networks, etc. We consider the problem of how to factorize data sets with missing values with the goal of capturing the underlying latent structure of the data and possibly reconstructing missing values (i.e., tensor completion). We focus on one of the most well-known tensor factorizations that captures multi-linear structure, CANDECOMP/PARAFAC (CP). In the presence of missing data, CP can be formulated as a weighted least squares problem that models only the known entries. We develop an algorithm called CP-WOPT (CP Weighted OPTimization) that uses a first-order optimization approach to solve the weighted least squares problem. Based on extensive numerical experiments, our algorithm is shown to successfully factorize tensors with noise and up to 99% missing data. A unique aspect of our approach is that it scales to sparse large-scale data, e.g., 1000 x 1000 x 1000 with five million known entries (0.5% dense). We further demonstrate the usefulness of CP-WOPT on two real-world applications: a novel EEG (electroencephalogram) application where missing data is frequently encountered due to disconnections of electrodes and the problem of modeling computer network traffic where data may be absent due to the expense of the data collection process.
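The weighted least-squares formulation behind CP-WOPT can be sketched with plain gradient descent in NumPy (a toy first-order version, not the authors' optimized implementation):

```python
import numpy as np

def masked_cp(X, W, rank, steps=500, lr=0.01, seed=0):
    """Gradient descent on f = 0.5 * || W * (X - [[A, B, C]]) ||^2,
    i.e. a CP fit that models only the observed (W == 1) entries."""
    rng = np.random.default_rng(seed)
    A, B, C = (0.1 * rng.standard_normal((n, rank)) for n in X.shape)
    losses = []
    for _ in range(steps):
        R = W * (np.einsum('ir,jr,kr->ijk', A, B, C) - X)   # masked residual
        losses.append(0.5 * np.sum(R ** 2))
        gA = np.einsum('ijk,jr,kr->ir', R, B, C)            # gradients w.r.t.
        gB = np.einsum('ijk,ir,kr->jr', R, A, C)            # each factor
        gC = np.einsum('ijk,ir,jr->kr', R, A, B)
        A, B, C = A - lr * gA, B - lr * gB, C - lr * gC
    return (A, B, C), losses

# Synthetic rank-2 tensor with about 30% of its entries missing.
rng = np.random.default_rng(1)
At, Bt, Ct = (rng.standard_normal((n, 2)) for n in (5, 6, 7))
X = np.einsum('ir,jr,kr->ijk', At, Bt, Ct)
W = (rng.random(X.shape) > 0.3).astype(float)
_, losses = masked_cp(X, W, rank=2)
```

The mask `W` zeroes the residual on missing entries, so those entries never influence the factors; CP-WOPT uses the same objective with a more sophisticated first-order optimizer.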

  3. Tensor integrand reduction via Laurent expansion

    NASA Astrophysics Data System (ADS)

    Hirschi, Valentin; Peraro, Tiziano

    2016-06-01

    We introduce a new method for the application of one-loop integrand reduction via the Laurent expansion algorithm, as implemented in the public C++ library Ninja. We show how the coefficients of the Laurent expansion can be computed by suitable contractions of the loop numerator tensor with cut-dependent projectors, making it possible to interface Ninja to any one-loop matrix element generator that can provide the components of this tensor. We implemented this technique in the Ninja library and interfaced it to MadLoop, which is part of the public MadGraph5_aMC@NLO framework. We performed a detailed performance study, comparing against other public reduction tools, namely CutTools, Samurai, IREGI, PJFry++ and Golem95. We find that Ninja outperforms traditional integrand reduction in both speed and numerical stability, the latter being on par with that of the tensor integral reduction tool Golem95, which is however more limited and slower than Ninja. We considered many benchmark multi-scale processes of increasing complexity, involving QCD and electro-weak corrections as well as effective non-renormalizable couplings, showing that Ninja's performance scales well with both the rank and multiplicity of the considered process.

  4. Tensor integrand reduction via Laurent expansion

    DOE PAGESBeta

    Hirschi, Valentin; Peraro, Tiziano

    2016-06-09

    We introduce a new method for the application of one-loop integrand reduction via the Laurent expansion algorithm, as implemented in the public C++ library Ninja. We show how the coefficients of the Laurent expansion can be computed by suitable contractions of the loop numerator tensor with cut-dependent projectors, making it possible to interface Ninja to any one-loop matrix element generator that can provide the components of this tensor. We implemented this technique in the Ninja library and interfaced it to MadLoop, which is part of the public MadGraph5_aMC@NLO framework. We performed a detailed performance study, comparing against other public reduction tools, namely CutTools, Samurai, IREGI, PJFry++ and Golem95. We find that Ninja outperforms traditional integrand reduction in both speed and numerical stability, the latter being on par with that of the tensor integral reduction tool Golem95, which is however more limited and slower than Ninja. Lastly, we considered many benchmark multi-scale processes of increasing complexity, involving QCD and electro-weak corrections as well as effective non-renormalizable couplings, showing that Ninja's performance scales well with both the rank and multiplicity of the considered process.

  5. A three-dimensional domain decomposition method for large-scale DFT electronic structure calculations

    NASA Astrophysics Data System (ADS)

    Duy, Truong Vinh Truong; Ozaki, Taisuke

    2014-03-01

    With tens of petaflops supercomputers already in operation and exaflops machines expected to appear within the next 10 years, efficient parallel computational methods are required to take advantage of such extreme-scale machines. In this paper, we present a three-dimensional domain decomposition scheme for enabling large-scale electronic structure calculations based on density functional theory (DFT) on massively parallel computers. It is composed of two methods: (i) the atom decomposition method and (ii) the grid decomposition method. In the former method, we develop a modified recursive bisection method based on the moment of inertia tensor to reorder the atoms along a principal axis so that atoms that are close in real space are also close on the axis to ensure data locality. The atoms are then divided into sub-domains depending on their projections onto the principal axis in a balanced way among the processes. In the latter method, we define four data structures for the partitioning of grid points that are carefully constructed to make data locality consistent with that of the clustered atoms for minimizing data communications between the processes. We also propose a decomposition method for solving the Poisson equation using the three-dimensional FFT in Hartree potential calculation, which is shown to be better in terms of communication efficiency than a previously proposed parallelization method based on a two-dimensional decomposition. For evaluation, we perform benchmark calculations with our open-source DFT code, OpenMX, paying particular attention to the O(N) Krylov subspace method. The results show that our scheme exhibits good strong and weak scaling properties, with the parallel efficiency at 131,072 cores being 67.7% compared to the baseline of 16,384 cores with 131,072 atoms of the diamond structure on the K computer.
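The inertia-tensor reordering in the atom decomposition method can be sketched as follows (a simplified one-level version; the OpenMX scheme is a modified recursive bisection, and this example only illustrates the principal-axis ordering idea):

```python
import numpy as np

def principal_axis_partition(pos, n_domains):
    """Order atoms along the principal axis of their moment-of-inertia tensor
    and split them into balanced contiguous sub-domains."""
    r = pos - pos.mean(axis=0)
    # moment of inertia tensor: I = sum_n (|r_n|^2 * delta - r_n r_n^T)
    inertia = np.einsum('ni,ni->', r, r) * np.eye(3) - r.T @ r
    _, vecs = np.linalg.eigh(inertia)
    axis = vecs[:, 0]                 # smallest inertia <-> largest spatial extent
    order = np.argsort(r @ axis)      # atoms close in space stay close in order
    return np.array_split(order, n_domains)

pos = np.random.default_rng(0).standard_normal((100, 3))
parts = principal_axis_partition(pos, 4)
```

Sorting by projection onto the longest axis keeps spatially adjacent atoms in the same sub-domain, which is what ensures data locality for the subsequent grid decomposition.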

  6. Visualizing second order tensor fields with hyperstreamlines

    NASA Technical Reports Server (NTRS)

    Delmarcelle, Thierry; Hesselink, Lambertus

    1993-01-01

    Hyperstreamlines are a generalization to second order tensor fields of the conventional streamlines used in vector field visualization. As opposed to point icons commonly used in visualizing tensor fields, hyperstreamlines form a continuous representation of the complete tensor information along a three-dimensional path. This technique is useful in visualizing both symmetric and unsymmetric three-dimensional tensor data. Several examples of tensor field visualization in solid materials and fluid flows are provided.
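The core of a hyperstreamline tracer, following the major eigenvector of the tensor field, can be sketched as follows (a minimal sketch that traces only the trajectory; the cross section encoding the transverse eigenvalues is omitted):

```python
import numpy as np

def hyperstreamline_path(tensor_field, x0, step=0.05, n_steps=100):
    """Integrate along the major eigenvector of a symmetric tensor field --
    the backbone of a hyperstreamline."""
    x = np.asarray(x0, dtype=float)
    d_prev = None
    path = [x.copy()]
    for _ in range(n_steps):
        _, vecs = np.linalg.eigh(tensor_field(x))
        d = vecs[:, -1]                       # major eigenvector
        if d_prev is not None and d @ d_prev < 0:
            d = -d                            # eigenvectors have no intrinsic sign
        x = x + step * d
        d_prev = d
        path.append(x.copy())
    return np.array(path)

# Constant field: the path is a straight segment along the major axis (z here).
path = hyperstreamline_path(lambda x: np.diag([1.0, 2.0, 5.0]), [0.0, 0.0, 0.0])
```

The sign fix is essential: eigenvector routines return an arbitrary orientation, and without it the trace can oscillate in place.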

  7. Adaptive Multilinear Tensor Product Wavelets.

    PubMed

    Weiss, Kenneth; Lindstrom, Peter

    2016-01-01

    Many foundational visualization techniques including isosurfacing, direct volume rendering and texture mapping rely on piecewise multilinear interpolation over the cells of a mesh. However, there has not been much focus within the visualization community on techniques that efficiently generate and encode globally continuous functions defined by the union of multilinear cells. Wavelets provide a rich context for analyzing and processing complicated datasets. In this paper, we exploit adaptive regular refinement as a means of representing and evaluating functions described by a subset of their nonzero wavelet coefficients. We analyze the dependencies involved in the wavelet transform and describe how to generate and represent the coarsest adaptive mesh with nodal function values such that the inverse wavelet transform is exactly reproduced via simple interpolation (subdivision) over the mesh elements. This allows for an adaptive, sparse representation of the function with on-demand evaluation at any point in the domain. We focus on the popular wavelets formed by tensor products of linear B-splines, resulting in an adaptive, nonconforming but crack-free quadtree (2D) or octree (3D) mesh that allows reproducing globally continuous functions via multilinear interpolation over its cells. PMID:26529742
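The tensor-product construction over a single multilinear cell can be sketched as follows (a generic N-dimensional interpolation routine, not the paper's adaptive wavelet code):

```python
import numpy as np

def multilinear(corner_values, t):
    """Interpolation over one cell of the unit N-cube as a tensor product of
    linear (hat) bases: `corner_values` has shape (2,)*N, `t` is in [0,1]^N."""
    v = np.asarray(corner_values, dtype=float)
    for ti in t:                      # contract one axis per parameter
        v = (1.0 - ti) * v[0] + ti * v[1]
    return float(v)

center = multilinear([[0.0, 1.0], [2.0, 3.0]], (0.5, 0.5))   # bilinear example
```

Evaluating such cells from nodal values is exactly the on-demand reconstruction step the adaptive mesh enables.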

  8. Conceptualizing and Estimating Process Speed in Studies Employing Ecological Momentary Assessment Designs: A Multilevel Variance Decomposition Approach

    ERIC Educational Resources Information Center

    Shiyko, Mariya P.; Ram, Nilam

    2011-01-01

    Researchers have been making use of ecological momentary assessment (EMA) and other study designs that sample feelings and behaviors in real time and in naturalistic settings to study temporal dynamics and contextual factors of a wide variety of psychological, physiological, and behavioral processes. As EMA designs become more widespread,…

  9. Tensor analysis methods for activity characterization in spatiotemporal data

    SciTech Connect

    Haass, Michael Joseph; Van Benthem, Mark Hilary; Ochoa, Edward M.

    2014-03-01

    Tensor (multiway array) factorization and decomposition offer unique advantages for activity characterization in spatio-temporal datasets because these methods are compatible with sparse matrices and maintain multiway structure that is otherwise lost in collapsing for regular matrix factorization. This report describes our research as part of the PANTHER LDRD Grand Challenge to develop a foundational basis of mathematical techniques and visualizations that enable unsophisticated users (e.g. users who are not steeped in the mathematical details of matrix algebra and multiway computations) to discover hidden patterns in large spatiotemporal data sets.

  10. Development of the Tensoral Computer Language

    NASA Technical Reports Server (NTRS)

    Ferziger, Joel; Dresselhaus, Eliot

    1996-01-01

    The research scientist or engineer wishing to perform large scale simulations or to extract useful information from existing databases is required to have expertise in the details of the particular database, the numerical methods and the computer architecture to be used. This poses a significant practical barrier to the use of simulation data. The goal of this research was to develop a high-level computer language called Tensoral, designed to remove this barrier. The Tensoral language provides a framework in which efficient generic data manipulations can be easily coded and implemented. First of all, Tensoral is general. The fundamental objects in Tensoral represent tensor fields and the operators that act on them. The numerical implementation of these tensors and operators is completely and flexibly programmable. New mathematical constructs and operators can be easily added to the Tensoral system. Tensoral is compatible with existing languages. Tensoral tensor operations co-exist in a natural way with a host language, which may be any sufficiently powerful computer language such as Fortran, C, or Vectoral. Tensoral is very-high-level. Tensor operations in Tensoral typically act on entire databases (i.e., arrays) at one time and may, therefore, correspond to many lines of code in a conventional language. Tensoral is efficient. Tensoral is a compiled language. Database manipulations are simplified, optimized, and scheduled by the compiler, eventually resulting in efficient machine code to implement them.

  11. Diffusion tensor image registration using tensor geometry and orientation features.

    PubMed

    Yang, Jinzhong; Shen, Dinggang; Davatzikos, Christos; Verma, Ragini

    2008-01-01

    This paper presents a method for deformable registration of diffusion tensor (DT) images that integrates geometry and orientation features into a hierarchical matching framework. The geometric feature is derived from the structural geometry of diffusion and characterizes the shape of the tensor in terms of prolateness, oblateness, and sphericity of the tensor. Local spatial distributions of the prolate, oblate, and spherical geometry are used to create an attribute vector of geometric feature for matching. The orientation feature improves the matching of the white matter (WM) fiber tracts by taking into account the statistical information of underlying fiber orientations. These features are incorporated into a hierarchical deformable registration framework to develop a diffusion tensor image registration algorithm. Extensive experiments on simulated and real brain DT data establish the superiority of this algorithm for deformable matching of diffusion tensors, thereby aiding in atlas creation. The robustness of the method makes it potentially useful for group-based analysis of DT images acquired in large studies to identify disease-induced and developmental changes. PMID:18982691
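The prolateness, oblateness and sphericity of a diffusion tensor can be computed from its eigenvalues; the sketch below uses one common trace normalization (the paper's exact definitions may differ):

```python
import numpy as np

def westin_measures(D):
    """Linear (cl), planar (cp) and spherical (cs) shape measures of a
    diffusion tensor from its sorted eigenvalues, normalized by the trace
    so that cl + cp + cs = 1."""
    lam = np.sort(np.linalg.eigvalsh(D))[::-1]        # l1 >= l2 >= l3
    s = lam.sum()
    return ((lam[0] - lam[1]) / s,                    # prolateness
            2.0 * (lam[1] - lam[2]) / s,              # oblateness
            3.0 * lam[2] / s)                         # sphericity

cl, cp, cs = westin_measures(np.diag([3.0, 1.0, 1.0]))  # a strongly prolate tensor
```

Because the three measures partition unity, local histograms of (cl, cp, cs) form a natural attribute vector for matching.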

  12. A tensor-based subspace approach for bistatic MIMO radar in spatial colored noise.

    PubMed

    Wang, Xianpeng; Wang, Wei; Li, Xin; Wang, Junxiang

    2014-01-01

    In this paper, a new tensor-based subspace approach is proposed to estimate the direction of departure (DOD) and the direction of arrival (DOA) for bistatic multiple-input multiple-output (MIMO) radar in the presence of spatial colored noise. Firstly, the received signals can be packed into a third-order measurement tensor by exploiting the inherent structure of the matched filter. Then, the measurement tensor can be divided into two sub-tensors, and a cross-covariance tensor is formulated to eliminate the spatial colored noise. Finally, the signal subspace is constructed by utilizing the higher-order singular value decomposition (HOSVD) of the cross-covariance tensor, and the DOD and DOA can be obtained through the estimation of signal parameters via rotational invariance technique (ESPRIT) algorithm, which are paired automatically. Since the multidimensional inherent structure and the cross-covariance tensor technique are used, the proposed method provides better angle estimation performance than Chen's method, the ESPRIT algorithm and the multi-SVD method. Simulation results confirm the effectiveness and the advantage of the proposed method. PMID:24573313

  13. Integrated calibration of magnetic gradient tensor system

    NASA Astrophysics Data System (ADS)

    Gang, Yin; Yingtang, Zhang; Hongbo, Fan; GuoQuan, Ren; Zhining, Li

    2015-01-01

    Measurement precision of a magnetic gradient tensor system depends not only on the imperfect performance of the magnetometers, such as bias, scale factor, non-orthogonality and misalignment errors, but also on the external soft-iron and hard-iron magnetic distortion fields when the system is used as a strapdown device. Therefore, an integrated scalar calibration method is proposed in this paper. In the first step, a mathematical model for scalar calibration of a single three-axis magnetometer is established, and a least-squares ellipsoid fitting algorithm is proposed to estimate the detailed error parameters. For the misalignment errors between different magnetometers caused by the installation process, together with the misalignment errors introduced by the ellipsoid fitting estimation, a calibration method for the combined misalignment errors is proposed in the second step to transform the outputs of the different magnetometers into an ideal reference orthogonal coordinate system. To verify the effectiveness of the proposed method, simulations and experiments with a cross-magnetic gradient tensor system are performed, and the results show that the proposed method estimates the error parameters and greatly improves the measurement accuracy of the magnetic gradient tensor.
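The least-squares ellipsoid fit of the first calibration step can be sketched for the axis-aligned case (the scale and bias values below are made-up illustration data, and the full method also handles non-orthogonality and soft-iron terms):

```python
import numpy as np

# Synthetic three-axis magnetometer data: unit field directions distorted by
# per-axis scale factors and a hard-iron bias.
rng = np.random.default_rng(1)
h = rng.standard_normal((200, 3))
h /= np.linalg.norm(h, axis=1, keepdims=True)
true_scale = np.array([1.2, 0.9, 1.1])
true_bias = np.array([0.1, -0.2, 0.05])
m = h * true_scale + true_bias

# Axis-aligned ellipsoid fit:  p1*x^2 + p2*y^2 + p3*z^2 + p4*x + p5*y + p6*z = 1
A = np.column_stack([m**2, m])
p, *_ = np.linalg.lstsq(A, np.ones(len(m)), rcond=None)

bias = -p[3:] / (2.0 * p[:3])                 # recovered hard-iron offset
k = 1.0 / (1.0 + np.sum(bias**2 * p[:3]))     # normalization of the quadric
scale = 1.0 / np.sqrt(p[:3] * k)              # recovered scale factors
```

Because the scalar constraint uses only the (constant) field magnitude, no attitude reference is needed, which is what makes scalar calibration attractive for strapdown systems.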

  14. A General Probabilistic Framework (GPF) for process-based models: blind validation, total error decomposition and uncertainty reduction.

    NASA Astrophysics Data System (ADS)

    Baroni, Gabriele; Jolley, Richard P.; Graeff, Thomas; Oswald, Sascha E.

    2014-05-01

    Process-based models are useful tools supporting research, policy analysis, and decision making. Ideally, they would only include input data and parameters having physical meaning and they could be applied in various conditions and scenario analysis. However, applicability of these models can be limited because they are affected by many sources of uncertainty, from scale issues to lack of knowledge. To overcome this limitation, a General Probabilistic Framework (GPF) for the application of process-based models is proposed. A first assessment of the performance of the model is conducted in a blind validation, assuming all the possible sources of uncertainty. The Sobol/Saltelli global sensitivity analysis is used to decompose the total uncertainty of the model output. Based on the results of the sensitivity analysis, improvements of the model application are considered in a goal-oriented approach, in which monitoring and modeling are related in a continuous learning process. This presentation describes the GPF and its application to two hydrological models. Firstly, the GPF is applied at field scale using a 1D physical-based hydrological model (SWAP). Secondly, the framework is applied at small catchment scale in combination with a spatially distributed hydrological model (SHETRAN). The models are evaluated considering different components of the water balance. The framework is conceptually simple, relatively easy to implement and it requires no modifications to existing source codes of simulation models. It can take into account all the various sources of uncertainty i.e. input data, parameters, model structures and observations. It can be extended to a wide variety of modelling applications, also when direct measurements of model output are not available. Further research will focus on the methods to account for correlation between the different sources of uncertainty.
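The Sobol/Saltelli decomposition used within the GPF can be illustrated with a brute-force first-order index estimate (a toy additive function stands in for the hydrological models):

```python
import numpy as np

def first_order_sobol(f, n=50000, seed=0):
    """Monte-Carlo estimate of the first-order Sobol index
    S1 = Var(E[f|x1]) / Var(f) for f(x1, x2) with independent
    uniform(0,1) inputs, via the correlation estimator."""
    rng = np.random.default_rng(seed)
    x1, x2, x2b = rng.random(n), rng.random(n), rng.random(n)
    y, yb = f(x1, x2), f(x1, x2b)      # same x1, resampled x2
    var = np.var(np.concatenate([y, yb]))
    return (np.mean(y * yb) - np.mean(y) * np.mean(yb)) / var

# Additive test function: analytically S1 = Var(x1) / Var(x1 + 0.5*x2) = 0.8.
s1 = first_order_sobol(lambda a, b: a + 0.5 * b)
```

The index attributes a fraction of the total output variance to each input, which is exactly how the framework apportions the blind-validation uncertainty among input data, parameters and structure.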

  15. Poly (ethylene terephthalate) decomposition process in oxygen plasma; emission spectroscopic and surface analysis for oxygen-plasma reaction

    NASA Astrophysics Data System (ADS)

    Kumagai, Hidetoshi; Hiroki, Denbo; Fujii, Nobuyuki; Kobayashi, Takaomi

    2004-01-01

    Emission spectroscopy was applied to observe the reaction process of poly(ethylene terephthalate) (PET) in an oxygen (O2) plasma generated by a microwave discharge. As the PET was exposed to the O2 plasma flow, light emitted from the PET surface was monitored. In the diagnostic measurement, several emission peaks assigned to the Hα atomic line at 656 nm, Hβ at 486 nm, the OH (2Σ-->2Π) transition near 244-343 nm and CO (b3Σ-->a3Σ) near 283-370 nm were observed and measured at various discharge times. These results indicated that after the plasma etching, the PET sample was decomposed by the oxygen-plasma reaction, followed by hydrogen-abstraction and carbon-oxidation processes. We also observed the time profile of atomic oxygen by monitoring the atom-emission intensity at 777 nm. When the Hβ atomic and OH molecular lines appeared in the presence of PET, the O-atom intensity was significantly reduced. Surface analysis by Fourier-transform infrared and X-ray photoelectron spectroscopy showed that, for the PET surface treated by the O2 plasma containing excited atomic oxygen species, ester bonds were broken and carbonization formed on the PET surface.

  16. Tensor Target Polarization at TRIUMF

    SciTech Connect

    Smith, G

    2014-10-27

The first measurements of tensor observables in $\pi \vec{d}$ scattering experiments were performed in the mid-1980s at TRIUMF, and later at SIN/PSI. The full suite of tensor observables accessible in $\pi \vec{d}$ elastic scattering was measured: $T_{20}$, $T_{21}$, and $T_{22}$. The vector analyzing power $iT_{11}$ was also measured. These results led to a better understanding of the three-body theory used to describe this reaction. Some measurements were also made in the absorption and breakup channels. A direct measurement of the target tensor polarization was also made, independent of the usual NMR techniques, by exploiting the (nearly) model-independent result for the tensor analyzing power at $90^\circ_{cm}$ in the $\pi \vec{d} \rightarrow 2p$ reaction. This method was also used to check efforts to enhance the tensor polarization by RF burning of the NMR spectrum. A brief description of the methods developed to measure and analyze these experiments is provided.

  17. Co-composting of rose oil processing waste with caged layer manure and straw or sawdust: effects of carbon source and C/N ratio on decomposition.

    PubMed

    Onursal, Emrah; Ekinci, Kamil

    2015-04-01

Rose oil is a specialty essential oil produced mainly for the cosmetics industry in a few selected locations around the world. Rose oil production is a water-distillation process applied to petals of Rosa damascena Mill. Since the oil content of the rose petals of this variety is between 0.03-0.04% (w/w), roughly 3000 to 4000 kg of rose petals are needed to produce 1 kg of rose oil. Rose oil production is a seasonal activity and takes place during the relatively short period when the roses are blooming. As a result, large quantities of solid waste are produced over a limited time interval. This research aims: (i) to determine the possibilities of aerobic co-composting as a waste management option for rose oil processing waste with caged layer manure; (ii) to identify the effects of different carbon sources (straw or sawdust) on co-composting of rose oil processing waste and caged layer manure, both readily available in Isparta, where significant rose oil production takes place; (iii) to determine the effects of different C/N ratios on co-composting in terms of organic matter decomposition and dry matter loss. Composting experiments were carried out in 12 identical laboratory-scale composting reactors (60 L) simultaneously. The best performance was obtained with a mixture consisting of 50% rose oil processing waste, 35% caged layer manure and 15% straw by wet weight, in terms of organic matter loss (66%) and dry matter loss (38%). PMID:25784689

  18. Highlighting earthworm contribution in uplifting biochemical response for organic matter decomposition during vermifiltration processing sewage sludge: Insights from proteomics.

    PubMed

    Xing, Meiyan; Wang, Yin; Xu, Ting; Yang, Jian

    2016-09-01

A vermifilter (VF) was steadily operated to explore the mechanism behind lower microbial biomass and higher enzymatic activities in the presence of earthworms, with a conventional biofilter (BF) as a control. 2-DE analysis indicated that 432 and 488 protein spots were clearly detected in the VF and BF biofilms, respectively. Furthermore, MALDI-TOF/TOF MS revealed that six differentially up-regulated proteins, namely Aldehyde Dehydrogenase, Molecular chaperone GroEL, ATP synthase subunit alpha, Flagellin, Chaperone protein HtpG and ATP synthase subunit beta, changed progressively. Based on Gene Ontology annotation, these differential proteins were mainly associated with ATP binding (71.38%) and response-to-stress (16.23%) functions. Taking the performance merits of the VF process into consideration, these results indicate that earthworm activities biochemically strengthened energy release in microbial metabolism in an uncoupled manner. PMID:27287202

  19. Nested Taylor decomposition in multivariate function decomposition

    NASA Astrophysics Data System (ADS)

    Baykara, N. A.; Gürvit, Ercan

    2014-12-01

The Fluctuationlessness approximation applied to the remainder term of a Taylor decomposition expressed in integral form has already been used in many articles. Some forms of multi-point Taylor expansion have also been considered. This work is essentially a combination of the two: the Taylor decomposition of a function is taken with the remainder expressed in integral form; the integrand is then decomposed into a Taylor series again, not necessarily around the same point as the first decomposition, and a second remainder is obtained. After the necessary change of variables and conversion of the integration limits to the universal [0,1] interval, a system of multiple integrals of a multivariate function is obtained. The Fluctuationlessness approximation is then applied to each of these integrals one by one, yielding better results than the single-node Taylor decomposition to which Fluctuationlessness is applied.

  20. Locally extracting scalar, vector and tensor modes in cosmological perturbation theory

    NASA Astrophysics Data System (ADS)

    Clarkson, Chris; Osano, Bob

    2011-11-01

Cosmological perturbation theory relies on the decomposition of perturbations into so-called scalar, vector and tensor modes. This decomposition is non-local and depends on unknowable boundary conditions. The non-locality is particularly important at second and higher order because perturbative modes are sourced by products of lower order modes, which must be integrated over all space in order to isolate each mode. However, given a trace-free rank-2 tensor, a locally defined scalar mode may be trivially derived by taking two divergences, which knocks out the vector and tensor degrees of freedom. A similar local differential operation will return a pure vector mode. This means that scalar and vector degrees of freedom have local descriptions. The corresponding local extraction of the tensor mode, however, has been unknown; we give it here. The operators we define are useful for defining gauge-invariant quantities at second order. We perform much of our analysis using an index-free ‘vector-calculus’ approach which makes manipulating tensor equations considerably simpler.
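The role of the divergences is transparent in Fourier space, where each divergence becomes a contraction with the wavevector. The following numpy sketch (our own single-mode toy, not the authors' covariant operators) checks that the double divergence annihilates a transverse-traceless tensor while picking up the scalar part.

```python
import numpy as np

rng = np.random.default_rng(1)
k = rng.normal(size=3)
khat = k / np.linalg.norm(k)
P = np.eye(3) - np.outer(khat, khat)      # projector orthogonal to khat

# Random symmetric rank-2 perturbation at one Fourier mode.
H = rng.normal(size=(3, 3))
H = 0.5 * (H + H.T)

# Transverse-traceless (tensor-mode) part of H.
H_tt = np.einsum('ia,jb,ab->ij', P, P, H) - 0.5 * P * np.einsum('ab,ab->', P, H)

# A pure scalar mode built from a potential phi.
phi = 1.7
S = (np.outer(khat, khat) - np.eye(3) / 3.0) * phi

div2_tt = khat @ H_tt @ khat       # double divergence of the tensor part
div2_s = khat @ S @ khat           # double divergence of the scalar part
```

`div2_tt` and the trace of `H_tt` vanish to machine precision, while `div2_s = 2*phi/3` survives: two divergences keep only the scalar degree of freedom.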

  1. Total Variation Regularized Tensor RPCA for Background Subtraction From Compressive Measurements.

    PubMed

    Cao, Wenfei; Wang, Yao; Sun, Jian; Meng, Deyu; Yang, Can; Cichocki, Andrzej; Xu, Zongben

    2016-09-01

Background subtraction has been a fundamental and widely studied task in video analysis, with a wide range of applications in video surveillance, teleconferencing, and 3D modeling. Recently, motivated by compressive imaging, background subtraction from compressive measurements (BSCM) is becoming an active research task in video surveillance. In this paper, we propose a novel tensor-based robust principal component analysis (TenRPCA) approach for BSCM by decomposing video frames into backgrounds with spatio-temporal correlations and foregrounds with spatio-temporal continuity in a tensor framework. In this approach, we use 3D total variation to enhance the spatio-temporal continuity of foregrounds, and Tucker decomposition to model the spatio-temporal correlations of video background. Based on this idea, we design a basic tensor RPCA model over the video frames, dubbed the holistic TenRPCA model. To characterize the correlations among the groups of similar 3D patches of video background, we further design a patch-group-based tensor RPCA model by joint tensor Tucker decompositions of 3D patch groups for modeling the video background. Efficient algorithms using the alternating direction method of multipliers are developed to solve the proposed models. Extensive experiments on simulated and real-world videos demonstrate the superiority of the proposed approaches over the existing state-of-the-art approaches. PMID:27305675
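Independently of the RPCA machinery, the Tucker model used here for the background can be illustrated with a truncated higher-order SVD: factor matrices from SVDs of the mode unfoldings, core by multilinear contraction. The sizes and ranks below are arbitrary illustrations, not the paper's settings.

```python
import numpy as np

def unfold(X, mode):
    """Mode-n unfolding of a 3-way array."""
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def hosvd(X, ranks):
    """Truncated HOSVD: X ~ core x1 U0 x2 U1 x3 U2."""
    U = [np.linalg.svd(unfold(X, m), full_matrices=False)[0][:, :r]
         for m, r in enumerate(ranks)]
    core = np.einsum('ijk,ia,jb,kc->abc', X, U[0], U[1], U[2])
    return core, U

def tucker_reconstruct(core, U):
    return np.einsum('abc,ia,jb,kc->ijk', core, U[0], U[1], U[2])

# A tensor of exact multilinear rank (2, 2, 2): HOSVD recovers it exactly.
rng = np.random.default_rng(0)
G = rng.normal(size=(2, 2, 2))
A, B, C = (rng.normal(size=(n, 2)) for n in (10, 11, 12))
X = np.einsum('abc,ia,jb,kc->ijk', G, A, B, C)
core, U = hosvd(X, (2, 2, 2))
rel_err = np.linalg.norm(tucker_reconstruct(core, U) - X) / np.linalg.norm(X)
```

In TenRPCA the analogous low-multilinear-rank term plays the role of the correlated video background.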

  2. Spacetimes with Semisymmetric Energy-Momentum Tensor

    NASA Astrophysics Data System (ADS)

    De, U. C.; Velimirović, Ljubica

    2015-06-01

The object of the present paper is to introduce spacetimes with semisymmetric energy-momentum tensor. At first we consider the relation R(X, Y)·T = 0, that is, the energy-momentum tensor T of type (0,2) is semisymmetric. It is shown that in a general relativistic spacetime, if the energy-momentum tensor is semisymmetric then the spacetime is also Ricci semisymmetric, and the converse is also true. Next we characterize perfect fluid spacetimes with semisymmetric energy-momentum tensor. Then we consider conformally flat spacetimes with semisymmetric energy-momentum tensor. Finally, we cite some examples of spacetimes admitting a semisymmetric energy-momentum tensor.

  3. Retrodictive determinism. [covariant and transformational behavior of tensor fields in hydrodynamics and thermodynamics

    NASA Technical Reports Server (NTRS)

    Kiehn, R. M.

    1976-01-01

    With respect to irreversible, non-homeomorphic maps, contravariant and covariant tensor fields have distinctly natural covariance and transformational behavior. For thermodynamic processes which are non-adiabatic, the fact that the process cannot be represented by a homeomorphic map emphasizes the logical arrow of time, an idea which encompasses a principle of retrodictive determinism for covariant tensor fields.

  4. Tensor numerical methods in quantum chemistry: from Hartree-Fock to excitation energies.

    PubMed

    Khoromskaia, Venera; Khoromskij, Boris N

    2015-12-21

We review the recent successes of the grid-based tensor numerical methods and discuss their prospects in real-space electronic structure calculations. These methods, based on the low-rank representation of multidimensional functions and integral operators, first appeared as an accurate tensor calculus for the 3D Hartree potential using 1D-complexity operations, and have evolved into an entirely grid-based tensor-structured 3D Hartree-Fock eigenvalue solver. It benefits from tensor calculation of the core Hamiltonian and two-electron integrals (TEI) in O(n log n) complexity using the rank-structured approximation of basis functions, electron densities and convolution integral operators, all represented on 3D n × n × n Cartesian grids. The algorithm for calculating the TEI tensor in the form of a Cholesky decomposition is based on multiple factorizations using an algebraic 1D "density fitting" scheme, which yields an almost irreducible number of product basis functions involved in the 3D convolution integrals, depending on a threshold ε > 0. The basis functions are not restricted to separable Gaussians, since the analytical integration is substituted by high-precision tensor-structured numerical quadratures. The tensor approaches to post-Hartree-Fock calculations for the MP2 energy correction and for the Bethe-Salpeter excitation energies, based on low-rank factorizations and the reduced basis method, were recently introduced. Another direction is towards a tensor-based Hartree-Fock numerical scheme for finite lattices, where one of the numerical challenges is the summation of electrostatic potentials of a large number of nuclei. The 3D grid-based tensor method for calculation of a potential sum on an L × L × L lattice requires computational work linear in L, O(L), instead of the usual O(L³ log L) scaling of Ewald-type approaches. PMID:26016539
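The Cholesky-based TEI factorization rests on a generic linear-algebra fact: a positive semidefinite matrix with rapidly decaying spectrum admits a low-rank pivoted Cholesky factorization truncated at a threshold. A generic numpy sketch of that fact (not the paper's density-fitting scheme; the test matrix is an arbitrary stand-in):

```python
import numpy as np

def pivoted_cholesky(A, tol=1e-8, max_rank=None):
    """Low-rank pivoted Cholesky: returns L with A ~ L @ L.T,
    stopping when the largest remaining diagonal falls below tol."""
    n = A.shape[0]
    max_rank = n if max_rank is None else max_rank
    d = np.array(np.diag(A), dtype=float)
    L = np.zeros((n, max_rank))
    for k in range(max_rank):
        p = int(np.argmax(d))          # pivot on largest residual diagonal
        if d[p] <= tol:
            return L[:, :k]
        L[:, k] = (A[:, p] - L[:, :k] @ L[p, :k]) / np.sqrt(d[p])
        d -= L[:, k] ** 2              # update residual diagonal
    return L

# A PSD matrix of exact rank 5: the factorization terminates at rank 5.
rng = np.random.default_rng(0)
W = rng.normal(size=(60, 5))
A = W @ W.T
L = pivoted_cholesky(A)
rel_err = np.linalg.norm(L @ L.T - A) / np.linalg.norm(A)
```

The threshold plays the role of the ε > 0 in the abstract: it trades rank (hence cost) against factorization accuracy.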

5. Elucidating effects of atmospheric deposition and peat decomposition processes on mercury accumulation rates in a northern Minnesota peatland over the last 10,000 cal years

    NASA Astrophysics Data System (ADS)

    Nater, E. A.; Furman, O.; Toner, B. M.; Sebestyen, S. D.; Tfaily, M. M.; Chanton, J.; Fissore, C.; McFarlane, K. J.; Hanson, P. J.; Iversen, C. M.; Kolka, R. K.

    2014-12-01

Climate change has the potential to affect mercury (Hg), sulfur (S) and carbon (C) stores and cycling in northern peatland ecosystems (NPEs). SPRUCE (Spruce and Peatland Responses Under Climate and Environmental change) is an interdisciplinary study of the effects of elevated temperature and CO2 enrichment on NPEs. Peat cores (0-3.0 m) were collected from 16 large plots located on the S1 peatland (an ombrotrophic bog treed with Picea mariana and Larix laricina) in August 2012 for baseline characterization before the experiment begins. Peat samples were analyzed at depth increments for total Hg, bulk density, humification indices, and elemental composition. Net Hg accumulation rates over the last 10,000 years were derived from Hg concentrations and peat accumulation rates based on peat depth chronology established using 14C and 13C dating of peat cores. Historic Hg deposition rates are being modeled from pre-industrial deposition rates in S1 scaled by regional lake sediment records. Effects of peatland processes and factors (hydrology, decomposition, redox chemistry, vegetative changes, microtopography) on the biogeochemistry of Hg, S, and other elements are being assessed by comparing observed elemental depth profiles with accumulation profiles predicted solely from atmospheric deposition. We are using principal component analyses and cluster analyses to elucidate relationships between humification indices, peat physical properties, and inorganic and organic geochemistry data to interpret the main processes controlling net Hg accumulation and elemental concentrations in surface and subsurface peat layers. These findings are critical to predicting how climate change will affect future accumulation of Hg as well as existing Hg stores in NPE, and for providing reference baselines for future SPRUCE investigations.

  6. Collaborative Research: Process-resolving Decomposition of the Global Temperature Response to Modes of Low Frequency Variability in a Changing Climate

    SciTech Connect

    Cai, Ming; Deng, Yi

    2015-02-06

    El Niño-Southern Oscillation (ENSO) and Annular Modes (AMs) represent respectively the most important modes of low frequency variability in the tropical and extratropical circulations. The future projection of the ENSO and AM variability, however, remains highly uncertain with the state-of-the-art coupled general circulation models. A comprehensive understanding of the factors responsible for the inter-model discrepancies in projecting future changes in the ENSO and AM variability, in terms of multiple feedback processes involved, has yet to be achieved. The proposed research aims to identify sources of such uncertainty and establish a set of process-resolving quantitative evaluations of the existing predictions of the future ENSO and AM variability. The proposed process-resolving evaluations are based on a feedback analysis method formulated in Lu and Cai (2009), which is capable of partitioning 3D temperature anomalies/perturbations into components linked to 1) radiation-related thermodynamic processes such as cloud and water vapor feedbacks, 2) local dynamical processes including convection and turbulent/diffusive energy transfer and 3) non-local dynamical processes such as the horizontal energy transport in the oceans and atmosphere. Taking advantage of the high-resolution, multi-model ensemble products from the Coupled Model Intercomparison Project Phase 5 (CMIP5) soon to be available at the Lawrence Livermore National Lab, we will conduct a process-resolving decomposition of the global three-dimensional (3D) temperature (including SST) response to the ENSO and AM variability in the preindustrial, historical and future climate simulated by these models. Specific research tasks include 1) identifying the model-observation discrepancies in the global temperature response to ENSO and AM variability and attributing such discrepancies to specific feedback processes, 2) delineating the influence of anthropogenic radiative forcing on the key feedback processes

  7. High-Field Electron Paramagnetic Resonance and Density Functional Theory Study of Stable Organic Radicals in Lignin: Influence of the Extraction Process, Botanical Origin, and Protonation Reactions on the Radical g Tensor.

    PubMed

    Bährle, Christian; Nick, Thomas U; Bennati, Marina; Jeschke, Gunnar; Vogel, Frédéric

    2015-06-18

    The radical concentrations and g factors of stable organic radicals in different lignin preparations were determined by X-band EPR at 9 GHz. We observed that the g factors of these radicals are largely determined by the extraction process and not by the botanical origin of the lignin. The parameter mostly influencing the g factor is the pH value during lignin extraction. This effect was studied in depth using high-field EPR spectroscopy at 263 GHz. We were able to determine the gxx, gyy, and gzz components of the g tensor of the stable organic radicals in lignin. With the enhanced resolution of high-field EPR, distinct radical species could be found in this complex polymer. The radical species are assigned to substituted o-semiquinone radicals and can exist in different protonation states SH3+, SH2, SH1-, and S2-. The proposed model structures are supported by DFT calculations. The g principal values of the proposed structure were all in reasonable agreement with the experiments. PMID:25978006

  8. Low-rank approximation based non-negative multi-way array decomposition on event-related potentials.

    PubMed

    Cong, Fengyu; Zhou, Guoxu; Astikainen, Piia; Zhao, Qibin; Wu, Qiang; Nandi, Asoke K; Hietanen, Jari K; Ristaniemi, Tapani; Cichocki, Andrzej

    2014-12-01

Non-negative tensor factorization (NTF) has been successfully applied to analyze event-related potentials (ERPs), and has shown superiority in terms of capturing multi-domain features. However, the time-frequency representation of ERPs by higher-order tensors is usually large-scale, which prevents the popularity of most tensor factorization algorithms. To overcome this issue, we introduce a non-negative canonical polyadic decomposition (NCPD) based on low-rank approximation (LRA) and hierarchical alternating least squares (HALS) techniques. We applied NCPD (LRAHALS and benchmark HALS) and CPD to extract multi-domain features of a visual ERP. The features and components extracted by LRAHALS NCPD and HALS NCPD were very similar, but LRAHALS NCPD was 70 times faster than HALS NCPD. Moreover, the desired multi-domain feature of the ERP by NCPD showed a significant group difference (control versus depressed participants) and a difference in emotion processing (fearful versus happy faces). This was more satisfactory than CPD, which revealed only the group difference. PMID:25164246
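The HALS variants in this work update factor columns one at a time; as a simpler stand-in that shows the same alternating non-negative CP structure, here is a multiplicative-update sketch (Lee-Seung-style updates; tensor sizes, rank and iteration count are assumptions for the demo, not the ERP setting).

```python
import numpy as np

def unfold(X, mode):
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def khatri_rao(A, B):
    """Column-wise Kronecker product (rows of A vary slowly)."""
    return np.einsum('ir,jr->ijr', A, B).reshape(-1, A.shape[1])

def nonneg_cp(X, rank, n_iter=300, seed=0, eps=1e-12):
    """Non-negative CP decomposition of a 3-way array via multiplicative updates."""
    rng = np.random.default_rng(seed)
    F = [rng.uniform(0.1, 1.0, size=(n, rank)) for n in X.shape]
    for _ in range(n_iter):
        for m in range(3):
            others = [F[i] for i in range(3) if i != m]
            KR = khatri_rao(others[0], others[1])
            num = unfold(X, m) @ KR
            den = F[m] @ (KR.T @ KR) + eps
            F[m] = F[m] * (num / den)   # ratio update keeps factors non-negative
    return F

# Recover an exactly non-negative rank-3 tensor.
rng = np.random.default_rng(42)
true = [rng.uniform(size=(n, 3)) for n in (8, 9, 10)]
X = np.einsum('ir,jr,kr->ijk', *true)
F = nonneg_cp(X, 3)
rel_err = np.linalg.norm(np.einsum('ir,jr,kr->ijk', *F) - X) / np.linalg.norm(X)
```

HALS replaces the ratio update with exact column-wise least-squares solves followed by clipping at zero, which is what makes the LRA acceleration in the paper pay off.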

  9. Collaborative Research: Process-Resolving Decomposition of the Global Temperature Response to Modes of Low Frequency Variability in a Changing Climate

    SciTech Connect

    Deng, Yi

    2014-11-24

DOE-GTRC-05596 11/24/2014 Collaborative Research: Process-Resolving Decomposition of the Global Temperature Response to Modes of Low Frequency Variability in a Changing Climate PI: Dr. Yi Deng, School of Earth and Atmospheric Sciences, Georgia Institute of Technology, 404-385-1821, yi.deng@eas.gatech.edu El Niño-Southern Oscillation (ENSO) and Annular Modes (AMs) represent respectively the most important modes of low frequency variability in the tropical and extratropical circulations. The projection of future changes in the ENSO and AM variability, however, remains highly uncertain with the state-of-the-art climate models. This project conducted a process-resolving, quantitative evaluation of the ENSO and AM variability in the modern reanalysis observations and in climate model simulations. The goal is to identify and understand the sources of uncertainty and biases in models' representation of ENSO and AM variability. Using a feedback analysis method originally formulated by one of the collaborative PIs, we partitioned the 3D atmospheric temperature anomalies and surface temperature anomalies associated with ENSO and AM variability into components linked to 1) radiation-related thermodynamic processes such as cloud and water vapor feedbacks, 2) local dynamical processes including convection and turbulent/diffusive energy transfer and 3) non-local dynamical processes such as the horizontal energy transport in the oceans and atmosphere. In the past 4 years, the research conducted at Georgia Tech under the support of this project has led to 15 peer-reviewed publications and 9 conference/workshop presentations. Two graduate students and one postdoctoral fellow also received research training through participating in the project activities. This final technical report summarizes the key scientific discoveries we made and provides a list of all publications and conference presentations resulting from research activities at Georgia Tech. The main findings include

  10. Cadaver decomposition in terrestrial ecosystems

    NASA Astrophysics Data System (ADS)

    Carter, David O.; Yellowlees, David; Tibbett, Mark

    2007-01-01

    A dead mammal (i.e. cadaver) is a high quality resource (narrow carbon:nitrogen ratio, high water content) that releases an intense, localised pulse of carbon and nutrients into the soil upon decomposition. Despite the fact that as much as 5,000 kg of cadaver can be introduced to a square kilometre of terrestrial ecosystem each year, cadaver decomposition remains a neglected microsere. Here we review the processes associated with the introduction of cadaver-derived carbon and nutrients into soil from forensic and ecological settings to show that cadaver decomposition can have a greater, albeit localised, effect on belowground ecology than plant and faecal resources. Cadaveric materials are rapidly introduced to belowground floral and faunal communities, which results in the formation of a highly concentrated island of fertility, or cadaver decomposition island (CDI). CDIs are associated with increased soil microbial biomass, microbial activity (C mineralisation) and nematode abundance. Each CDI is an ephemeral natural disturbance that, in addition to releasing energy and nutrients to the wider ecosystem, acts as a hub by receiving these materials in the form of dead insects, exuvia and puparia, faecal matter (from scavengers, grazers and predators) and feathers (from avian scavengers and predators). As such, CDIs contribute to landscape heterogeneity. Furthermore, CDIs are a specialised habitat for a number of flies, beetles and pioneer vegetation, which enhances biodiversity in terrestrial ecosystems.

  11. An analysis of scatter decomposition

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Saltz, Joel H.

    1990-01-01

A formal analysis of a powerful mapping technique known as scatter decomposition is presented. Scatter decomposition divides an irregular computational domain into a large number of equal-sized pieces, and distributes them modularly among processors. A probabilistic model of workload in one dimension is used to formally explain why and when scatter decomposition works. The first result is that if correlation in workload is a convex function of distance, then scattering a more finely decomposed domain yields a lower average processor workload variance. The second result shows that if the workload process is stationary Gaussian and the correlation function decreases linearly in distance until becoming zero and then remains zero, scattering a more finely decomposed domain yields a lower expected maximum processor workload. Finally it is shown that if the correlation function decreases linearly across the entire domain, then among all mappings that assign an equal number of domain pieces to each processor, scatter decomposition minimizes the average processor workload variance. The dependence of these results on the assumption of decreasing correlation is illustrated with situations where a coarser granularity actually achieves better load balance.
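The first result above is easy to probe with a Monte-Carlo toy (entirely our own construction): a stationary workload whose correlation decays linearly with distance, generated as a moving average of white noise, assigned to processors either in contiguous blocks or modularly (scatter).

```python
import numpy as np

rng = np.random.default_rng(0)
n_pieces, n_procs, window, n_trials = 256, 8, 64, 2000

block_loads, scatter_loads = [], []
for _ in range(n_trials):
    # Moving-average workload: correlation decays linearly to zero at lag `window`.
    noise = rng.normal(size=n_pieces + window - 1)
    w = np.convolve(noise, np.ones(window), mode='valid')
    # Block mapping: consecutive pieces go to the same processor.
    block_loads.append(w.reshape(n_procs, -1).sum(axis=1))
    # Scatter mapping: piece i goes to processor i mod n_procs.
    scatter_loads.append(w.reshape(-1, n_procs).sum(axis=0))

var_block = np.var(np.concatenate(block_loads))
var_scatter = np.var(np.concatenate(scatter_loads))
```

With these settings the block mapping's per-processor workload variance comes out several times larger than the scatter mapping's, matching the result that scattering lowers workload variance when correlation decreases with distance.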

  12. Scalable tensor factorizations with missing data.

    SciTech Connect

    Morup, Morten; Dunlavy, Daniel M.; Acar, Evrim; Kolda, Tamara Gibson

    2010-04-01

The problem of missing data is ubiquitous in domains such as biomedical signal processing, network traffic analysis, bibliometrics, social network analysis, chemometrics, computer vision, and communication networks: all domains in which data collection is subject to occasional errors. Moreover, these data sets can be quite large and have more than two axes of variation, e.g., sender, receiver, time. Many applications in those domains aim to capture the underlying latent structure of the data; in other words, they need to factorize data sets with missing entries. If we cannot address the problem of missing data, many important data sets will be discarded or improperly analyzed. Therefore, we need a robust and scalable approach for factorizing multi-way arrays (i.e., tensors) in the presence of missing data. We focus on one of the most well-known tensor factorizations, CANDECOMP/PARAFAC (CP), and formulate the CP model as a weighted least squares problem that models only the known entries. We develop an algorithm called CP-WOPT (CP Weighted OPTimization) using a first-order optimization approach to solve the weighted least squares problem. Based on extensive numerical experiments, our algorithm is shown to successfully factor tensors with noise and up to 70% missing data. Moreover, our approach is significantly faster than the leading alternative and scales to larger problems. To show the real-world usefulness of CP-WOPT, we illustrate its applicability on a novel EEG (electroencephalogram) application where missing data is frequently encountered due to disconnections of electrodes.
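CP-WOPT optimizes the weighted objective directly with gradient methods; a simpler baseline for the same masked CP model, sketched here only as an illustration, alternates between imputing the missing entries from the current model and ordinary CP-ALS sweeps (sizes, rank and missing fraction are assumptions).

```python
import numpy as np

def unfold(X, mode):
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def khatri_rao(A, B):
    return np.einsum('ir,jr->ijr', A, B).reshape(-1, A.shape[1])

def cp_als_missing(X, mask, rank, n_iter=300, seed=0):
    """EM-style CP-ALS: missing entries (mask == False) are imputed each sweep."""
    rng = np.random.default_rng(seed)
    F = [rng.normal(size=(n, rank)) for n in X.shape]
    for _ in range(n_iter):
        model = np.einsum('ir,jr,kr->ijk', *F)
        Xfill = np.where(mask, X, model)          # E-step: impute missing values
        for m in range(3):                        # M-step: ALS sweep over modes
            others = [F[i] for i in range(3) if i != m]
            KR = khatri_rao(others[0], others[1])
            G = (others[0].T @ others[0]) * (others[1].T @ others[1])
            F[m] = np.linalg.solve(G, (unfold(Xfill, m) @ KR).T).T
    return F

# Rank-2 tensor with roughly 30% of entries hidden.
rng = np.random.default_rng(3)
true = [rng.normal(size=(n, 2)) for n in (8, 9, 10)]
X = np.einsum('ir,jr,kr->ijk', *true)
mask = rng.uniform(size=X.shape) > 0.3
F = cp_als_missing(X, mask, 2)
model = np.einsum('ir,jr,kr->ijk', *F)
obs_err = np.linalg.norm((model - X)[mask]) / np.linalg.norm(X[mask])
```

CP-WOPT avoids forming the imputed tensor at all, which is what lets it scale to the large sparse problems described above.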

  13. Tensorially consistent microleveling of high resolution full tensor gradiometry data

    NASA Astrophysics Data System (ADS)

    Schiffler, M.; Queitsch, M.; Schneider, M.; Stolz, R.; Krech, W.; Meyer, H.; Kukowski, N.

    2013-12-01

Full Tensor Magnetic Gradiometry (FTMG) data obtained with Superconducting Quantum Interference Device (SQUID) sensors offer high resolution and low noise. In airborne operation, processing steps for leveling of flight lines using tie-lines and subsequent micro-leveling become important. Airborne SQUID-FTMG surveys show that in magnetically calm regions the overall measurement system noise level of ≈10 pT/m RMS is the main contribution to the magnetograms, and line-dependent artifacts become visible. Both tie-line and micro-leveling are used to remove these artifacts (corrugations). However, when these standard leveling routines, originally designed for total magnetic intensity measurements, are applied to the tensor components independently, the tracelessness and symmetry of the resulting corrected tensor are not preserved. We show that tie-line leveling for airborne SQUID-FTMG data can be surpassed using the presented micro-leveling algorithm, and discuss how it is designed to preserve the tensor properties. The micro-leveling process is performed via a moving median filter using a geometric median, which preserves the properties of the tensor, applied either to the entire tensor at once or to its structural part (eigenvalues) and rotational part (eigenvectors or idempotents) independently. We discuss the impact of the different micro-leveling methods on data quality. At each observation point, the median along the distance of the flight line is subtracted and the median in a specific footprint radius is added. For application of this filter to the rotational states, we use quaternions and quaternion interpolation. Examples of the new processing methods on data acquired with the FTMG system are presented.
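The tensor-preserving property of the geometric median comes for free: it is a convex combination of the samples, so any linear constraint (symmetry, zero trace) satisfied by all samples is satisfied by the median. A minimal Weiszfeld iteration over flattened tensor samples (our own sketch, not the authors' implementation) looks like this:

```python
import numpy as np

def geometric_median(points, n_iter=200, tol=1e-12):
    """Weiszfeld fixed-point iteration for the geometric median of row vectors."""
    y = points.mean(axis=0)
    for _ in range(n_iter):
        d = np.linalg.norm(points - y, axis=1)
        if np.any(d < tol):            # iterate coincides with a sample point
            return y
        w = 1.0 / d
        y_new = (w[:, None] * points).sum(axis=0) / w.sum()
        if np.linalg.norm(y_new - y) < tol:
            return y_new
        y = y_new
    return y

# Symmetric trace-free samples stay symmetric and trace-free after filtering.
rng = np.random.default_rng(0)
samples = []
for _ in range(5):
    M = rng.normal(size=(3, 3))
    M = 0.5 * (M + M.T)                    # symmetrize
    M -= np.trace(M) / 3.0 * np.eye(3)     # remove trace
    samples.append(M.ravel())
med = geometric_median(np.array(samples)).reshape(3, 3)
```

`med` is again symmetric with zero trace, which is exactly why a moving geometric-median filter can level each component without breaking the tensor structure.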

  14. Diffusion tensor MR microscopy of tissues with low diffusional anisotropy

    PubMed Central

    Bajd, Franci; Mattea, Carlos; Stapf, Siegfried

    2016-01-01

Abstract Background Diffusion tensor imaging exploits preferential diffusional motion of water molecules residing within tissue compartments for assessment of tissue structural anisotropy. However, instrumentation and post-processing errors play an important role in the determination of diffusion tensor elements. In this study, several experimental factors affecting the accuracy of diffusion tensor determination were analyzed. Materials and methods Effects of the signal-to-noise ratio and the configuration of the applied diffusion-sensitizing gradients on fractional anisotropy bias were analyzed by means of numerical simulations. In addition, diffusion tensor magnetic resonance microscopy experiments were performed on a tap water phantom and bovine articular cartilage-on-bone samples to verify the simulation results. Results In both the simulations and the experiments, the multivariate linear regression of the diffusion-tensor analysis yielded overestimated fractional anisotropy at low SNRs and with low numbers of applied diffusion-sensitizing gradients. Conclusions An increase of the apparent fractional anisotropy due to unfavorable experimental conditions can be overcome by applying a larger number of diffusion-sensitizing gradients with small values of the condition number of the transformation matrix. This is particularly relevant in magnetic resonance microscopy, where imaging gradients are high and the signal-to-noise ratio is low. PMID:27247550
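The regression underlying the tensor fit, and the condition number mentioned in the conclusions, can be sketched on noiseless synthetic data (gradient count, b-value and tensor values below are illustrative, not the paper's protocol):

```python
import numpy as np

rng = np.random.default_rng(0)
D_true = np.diag([1.5e-3, 0.4e-3, 0.4e-3])     # prolate tensor, mm^2/s
b = 1000.0                                     # s/mm^2

# 15 diffusion-sensitizing gradient directions on the unit sphere.
g = rng.normal(size=(15, 3))
g /= np.linalg.norm(g, axis=1, keepdims=True)

# Noiseless diffusion-weighted signals: S = S0 * exp(-b * g^T D g).
S0 = 1.0
S = S0 * np.exp(-b * np.einsum('ni,ij,nj->n', g, D_true, g))

# Design (transformation) matrix for [Dxx, Dyy, Dzz, Dxy, Dxz, Dyz].
Bmat = np.column_stack([g[:, 0]**2, g[:, 1]**2, g[:, 2]**2,
                        2*g[:, 0]*g[:, 1], 2*g[:, 0]*g[:, 2], 2*g[:, 1]*g[:, 2]])
cond = np.linalg.cond(Bmat)                    # gradient-scheme quality indicator

d = np.linalg.lstsq(-b * Bmat, np.log(S / S0), rcond=None)[0]
D_fit = np.array([[d[0], d[3], d[4]],
                  [d[3], d[1], d[5]],
                  [d[4], d[5], d[2]]])
```

With noiseless data the fit is exact; with noise, the fractional-anisotropy bias studied in the paper grows with `cond`, which is why well-spread gradient schemes with a small condition number are preferred.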

  15. A uniform parametrization of moment tensors

    NASA Astrophysics Data System (ADS)

    Tape, Walter; Tape, Carl

    2015-09-01

A moment tensor is a 3 × 3 symmetric matrix that expresses an earthquake source. We construct a parametrization of the 5-D space of all moment tensors of unit norm. The coordinates associated with the parametrization are closely related to moment tensor orientations and source types. The parametrization is uniform, in the sense that equal volumes in the coordinate domain of the parametrization correspond to equal volumes of moment tensors. Uniformly distributed points in the coordinate domain therefore give uniformly distributed moment tensors. A Cartesian grid in the coordinate domain can be used to search efficiently over moment tensors. We find that uniformly distributed moment tensors have uniformly distributed orientations (eigenframes), but that their source types (eigenvalue triples) are distributed so as to favour double couples.
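The paper's coordinates are constructed analytically, but the baseline they must reproduce is simple to state: normalizing an isotropic Gaussian in an orthonormal basis of symmetric matrices gives exactly the uniform distribution of unit-norm moment tensors. A small sketch of that baseline (our own, not the parametrization itself):

```python
import numpy as np

def random_unit_moment_tensor(rng):
    """Uniformly distributed unit-(Frobenius-)norm symmetric 3x3 matrix,
    via a normalized isotropic Gaussian in an orthonormal 6-D basis."""
    c = rng.normal(size=6)
    c /= np.linalg.norm(c)
    s = 1.0 / np.sqrt(2.0)   # off-diagonal basis elements (Eij + Eji)/sqrt(2)
    return np.array([[c[0],   s*c[3], s*c[4]],
                     [s*c[3], c[1],   s*c[5]],
                     [s*c[4], s*c[5], c[2]]])

rng = np.random.default_rng(0)
M = random_unit_moment_tensor(rng)
```

Samples drawn this way have uniformly distributed orientations while their eigenvalue triples favour double couples, consistent with the paper's observation; the parametrization's Cartesian grid reproduces the same distribution deterministically.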

  16. Competition between the tensor light shift and nonlinear Zeeman effect

    SciTech Connect

    Chalupczak, W.; Wojciechowski, A.; Pustelny, S.; Gawlik, W.

    2010-08-15

    Many precision measurements (e.g., in spectroscopy, atomic clocks, quantum-information processing, etc.) suffer from systematic errors introduced by the light shift. In our experimental configuration, however, the tensor light shift plays a positive role enabling the observation of spectral features otherwise masked by the cancellation of the transition amplitudes and creating resonances at a frequency unperturbed either by laser power or beam inhomogeneity. These phenomena occur thanks to the special relation between the nonlinear Zeeman and light shift effects. The interplay between these two perturbations is systematically studied and the cancellation of the nonlinear Zeeman effect by the tensor light shift is demonstrated.

  17. Extracting the diffusion tensor from molecular dynamics simulation with Milestoning.

    PubMed

    Mugnai, Mauro L; Elber, Ron

    2015-01-01

We propose an algorithm to extract the diffusion tensor from Molecular Dynamics simulations with Milestoning. A Kramers-Moyal expansion of a discrete master equation, which is the Markovian limit of the Milestoning theory, determines the diffusion tensor. To test the algorithm, we analyze overdamped Langevin trajectories and recover a multidimensional Fokker-Planck equation. The recovery process determines the flux through a mesh and estimates local kinetic parameters. Rate coefficients are converted to the derivatives of the potential of mean force and to a coordinate-dependent diffusion tensor. We illustrate the computation on simple models and on an atomically detailed system: the diffusion along the backbone torsions of a solvated alanine dipeptide. PMID:25573551
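As a baseline for what such a recovery should return, consider the simplest case of a constant diffusion coefficient in a flat potential: generate an overdamped Langevin trajectory and read D back from the variance of the increments (this is the textbook check, not the Milestoning estimator itself; D, dt and trajectory length are arbitrary).

```python
import numpy as np

rng = np.random.default_rng(0)
D, dt, n_steps = 0.5, 1e-3, 200_000

# Overdamped Langevin in a flat potential: dx = sqrt(2 D dt) * xi.
# A potential of mean force would add a drift term -grad(V) * dt / gamma.
steps = np.sqrt(2.0 * D * dt) * rng.normal(size=n_steps)
x = np.cumsum(steps)

# Second Kramers-Moyal moment of the increments recovers D.
D_hat = np.var(np.diff(x)) / (2.0 * dt)
```

The Milestoning approach generalizes this idea to position-dependent tensors by estimating fluxes and rate coefficients between milestones rather than raw increments.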

  18. Gravitational scalar-tensor theory

    NASA Astrophysics Data System (ADS)

    Naruko, Atsushi; Yoshida, Daisuke; Mukohyama, Shinji

    2016-05-01

    We consider a new form of gravity theories in which the action is written in terms of the Ricci scalar and its first and second derivatives. Despite the higher derivative nature of the action, the theory is ghost-free under an appropriate choice of the functional form of the Lagrangian. This model possesses 2 + 2 physical degrees of freedom, namely 2 scalar degrees and 2 tensor degrees. We exhaust all such theories with the Lagrangian of the form f(R, (∇R)², □R).

  19. Local virial and tensor theorems.

    PubMed

    Cohen, Leon

    2011-11-17

    We show that for any wave function and potential the local virial theorem, 2K(r) = r·∇V(r), can always be satisfied by choosing a particular expression for the local kinetic energy. In addition, we show that for each choice of local kinetic energy there are an infinite number of quasi-probability distributions which will generate the same expression. We also consider the local tensor virial theorem. PMID:21863837

  20. Generalised tensor fluctuations and inflation

    SciTech Connect

    Cannone, Dario; Tasinato, Gianmassimo; Wands, David E-mail: g.tasinato@swansea.ac.uk

    2015-01-01

    Using an effective field theory approach to inflation, we examine novel properties of the spectrum of inflationary tensor fluctuations, that arise when breaking some of the symmetries or requirements usually imposed on the dynamics of perturbations. During single-clock inflation, time-reparameterization invariance is broken by a time-dependent cosmological background. In order to explore more general scenarios, we consider the possibility that spatial diffeomorphism invariance is also broken by effective mass terms or by derivative operators for the metric fluctuations in the Lagrangian. We investigate the cosmological consequences of the breaking of spatial diffeomorphisms, focussing on operators that affect the power spectrum of fluctuations. We identify the operators for tensor fluctuations that can provide a blue spectrum without violating the null energy condition, and operators for scalar fluctuations that lead to non-conservation of the comoving curvature perturbation on superhorizon scales even in single-clock inflation. In the last part of our work, we also examine the consequences of operators containing more than two spatial derivatives, discussing how they affect the sound speed of tensor fluctuations, and showing that they can mimic some of the interesting effects of symmetry breaking operators, even in scenarios that preserve spatial diffeomorphism invariance.

  1. Random Tensors and Planted Cliques

    NASA Astrophysics Data System (ADS)

    Brubaker, S. Charles; Vempala, Santosh S.

    The r-parity tensor of a graph is a generalization of the adjacency matrix, where the tensor's entries denote the parity of the number of edges in subgraphs induced by r distinct vertices. For r = 2, it is the adjacency matrix with +1's for edges and -1's for nonedges. It is well-known that the 2-norm of the adjacency matrix of a random graph is O(√n). Here we show that the 2-norm of the r-parity tensor is at most f(r)·√n·log^{O(r)} n, answering a question of Frieze and Kannan [1] who proved this for r = 3. As a consequence, we get a tight connection between the planted clique problem and the problem of finding a vector that approximates the 2-norm of the r-parity tensor of a random graph. Our proof method is based on an inductive application of concentration of measure.
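
    For r = 2 the parity tensor is just the ±1 adjacency matrix, and its 2-norm for a random graph concentrates near 2√n. A small numpy sketch (illustrative, not from the paper):

```python
import numpy as np

def parity_matrix(n, rng):
    """r = 2 parity tensor of G(n, 1/2): +1 for edges, -1 for non-edges."""
    A = rng.choice([-1.0, 1.0], size=(n, n))
    A = np.triu(A, 1)
    return A + A.T               # symmetric, zero diagonal

rng = np.random.default_rng(2)
n = 400
M = parity_matrix(n, rng)
spectral_norm = np.linalg.norm(M, 2)
print(spectral_norm / np.sqrt(n))  # concentrates near 2 for large n
```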

  2. Sparse alignment for robust tensor learning.

    PubMed

    Lai, Zhihui; Wong, Wai Keung; Xu, Yong; Zhao, Cairong; Sun, Mingming

    2014-10-01

    Multilinear/tensor extensions of manifold learning based algorithms have been widely used in computer vision and pattern recognition. This paper first provides a systematic analysis of the multilinear extensions for the most popular methods by using alignment techniques, thereby obtaining a general tensor alignment framework. From this framework, it is easy to show that the manifold learning based tensor learning methods are intrinsically different from the alignment techniques. Based on the alignment framework, a robust tensor learning method called sparse tensor alignment (STA) is then proposed for unsupervised tensor feature extraction. Different from the existing tensor learning methods, L1- and L2-norms are introduced to enhance the robustness in the alignment step of the STA. The advantage of the proposed technique is that the difficulty in selecting the size of the local neighborhood can be avoided in the manifold learning based tensor feature extraction algorithms. Although STA is an unsupervised learning method, the sparsity encodes the discriminative information in the alignment step and provides the robustness of STA. Extensive experiments on the well-known image databases as well as action and hand gesture databases by encoding object images as tensors demonstrate that the proposed STA algorithm gives the most competitive performance when compared with the tensor-based unsupervised learning methods. PMID:25291733

  3. Tensor SOM and tensor GTM: Nonlinear tensor analysis by topographic mappings.

    PubMed

    Iwasaki, Tohru; Furukawa, Tetsuo

    2016-05-01

    In this paper, we propose nonlinear tensor analysis methods: the tensor self-organizing map (TSOM) and the tensor generative topographic mapping (TGTM). TSOM is a straightforward extension of the self-organizing map from high-dimensional data to tensorial data, and TGTM is an extension of the generative topographic map, which provides a theoretical background for TSOM using a probabilistic generative model. These methods are useful tools for analyzing and visualizing tensorial data, especially multimodal relational data. For given n-mode relational data, TSOM and TGTM can simultaneously organize a set of n topographic maps. Furthermore, they can be used to explore the tensorial data space by interactively visualizing the relationships between modes. We present the TSOM algorithm and a theoretical description from the viewpoint of TGTM. Various TSOM variations and visualization techniques are also described, along with some applications to real relational datasets. Additionally, we attempt to build a comprehensive description of the TSOM family by adapting various data structures. PMID:26991392

  4. Hardware Implementation of Singular Value Decomposition

    NASA Astrophysics Data System (ADS)

    Majumder, Swanirbhar; Shaw, Anil Kumar; Sarkar, Subir Kumar

    2016-06-01

    Singular value decomposition (SVD) is a useful decomposition technique that plays an important role in various engineering fields such as image compression, watermarking, signal processing, and numerous others. Unlike the most popular transforms, SVD does not involve a convolution operation, which makes it more suitable for hardware implementation. This paper reviews the various methods of hardware implementation for SVD computation and studies their time complexity and hardware complexity.
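
    A rotation-based, convolution-free scheme of the kind often mapped to systolic-array hardware is the one-sided (Hestenes) Jacobi method. The sketch below is a generic illustration of that method in numpy, not any specific implementation from the review:

```python
import numpy as np

def jacobi_singular_values(A, sweeps=30, tol=1e-12):
    """One-sided Jacobi SVD: rotate column pairs of A until they are
    mutually orthogonal; the singular values are the column norms."""
    A = A.astype(float).copy()
    n = A.shape[1]
    for _ in range(sweeps):
        off = 0.0
        for p in range(n - 1):
            for q in range(p + 1, n):
                ap, aq = A[:, p], A[:, q]
                alpha, beta, gamma = ap @ ap, aq @ aq, ap @ aq
                off = max(off, abs(gamma))
                if abs(gamma) < tol:
                    continue
                # rotation angle that zeroes the (p, q) inner product
                zeta = (beta - alpha) / (2.0 * gamma)
                t = np.sign(zeta) / (abs(zeta) + np.hypot(1.0, zeta))
                c = 1.0 / np.hypot(1.0, t)
                s = c * t
                A[:, p], A[:, q] = c * ap - s * aq, s * ap + c * aq
        if off < tol:
            break
    return np.sort(np.linalg.norm(A, axis=0))[::-1]

rng = np.random.default_rng(3)
A = rng.standard_normal((6, 4))
sv = jacobi_singular_values(A)
print(np.allclose(sv, np.linalg.svd(A, compute_uv=False)))
```

Each inner step touches only two columns, which is what makes the method attractive for parallel and hardware realizations.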

  5. Thermal Decomposition Mechanism of Butyraldehyde

    NASA Astrophysics Data System (ADS)

    Hatten, Courtney D.; Warner, Brian; Wright, Emily; Kaskey, Kevin; McCunn, Laura R.

    2013-06-01

    The thermal decomposition of butyraldehyde, CH_3CH_2CH_2C(O)H, has been studied in a resistively heated SiC tubular reactor. Products of pyrolysis were identified via matrix-isolation FTIR spectroscopy and photoionization mass spectrometry in separate experiments. Carbon monoxide, ethene, acetylene, water and ethylketene were among the products detected. To unravel the mechanism of decomposition, pyrolysis of a partially deuterated sample of butyraldehyde was studied. Also, the concentration of butyraldehyde in the carrier gas was varied in experiments to determine the presence of bimolecular reactions. The results of these experiments can be compared to the dissociation pathways observed in similar aldehydes and are relevant to the processing of biomass, foods, and tobacco.

  6. Thermal decomposition of isooctyl nitrate

    SciTech Connect

    Pritchard, H.O.

    1989-03-01

    The diesel ignition improver DII-3, made by Ethyl Corporation, also known as isooctyl nitrate, is a mixture whose principal constituent (about 95%) is 2-ethyl hexyl nitrate. This note describes an investigation of the thermal decomposition that is not exhaustive, but that is intended to provide sufficient information on the rate and the mechanism so as to make possible the educated guesses needed for modeling the effect of isooctyl nitrate on the diesel ignition process. As is the case with other alkyl nitrates, the decomposition of the neat material is a complex one giving a complicated pressure versus time curve, unsuitable for a quick derivation of the rate constant. However, in the presence of toluene, whose intended purpose is to trap reactive free radicals and thereby simplify the overall mechanism, the pressure rises approximately exponentially to a limit; thus, on the assumption that the reaction is homogeneous and of first order, the rate constants can be determined from the half-life.
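
    The last sentence amounts to simple first-order kinetics: the pressure rises as P(t) = P∞(1 − e^{−kt}), so the half-life of the rise gives k = ln 2 / t½. A sketch with purely illustrative numbers (not data from the note):

```python
import numpy as np

# First-order decomposition: P(t) = P_inf * (1 - exp(-k*t)),
# so the half-life of the pressure rise gives k = ln(2) / t_half.
k_true = 2.0e-3                        # s^-1, illustrative
t = np.linspace(0.0, 3000.0, 200)      # s
P_inf = 40.0                           # arbitrary pressure units
P = P_inf * (1.0 - np.exp(-k_true * t))

# recover k from the time at which P reaches half its limiting value
i_half = np.argmin(np.abs(P - P_inf / 2.0))
k_est = np.log(2.0) / t[i_half]
print(k_true, k_est)
```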

  7. Variance decomposition in stochastic simulators

    SciTech Connect

    Le Maître, O. P.; Knio, O. M.; Moraes, A.

    2015-06-28

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.
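
    The idea can be sketched on the birth-death model: drive each reaction channel with its own random stream (via the random-time-change representation), then estimate the first-order Sobol variance share of the birth channel by freezing its stream and averaging over the other. Rates, horizons, and sample sizes below are illustrative toy choices, not the paper's setup.

```python
import numpy as np

def birth_death(T, x0, lam, mu, rng_birth, rng_death):
    """Birth-death process via the random-time-change representation:
    each channel consumes unit-rate exponential jumps from its own stream."""
    t, x = 0.0, x0
    T1 = T2 = 0.0                         # internal times per channel
    P1 = rng_birth.exponential()          # next internal firing times
    P2 = rng_death.exponential()
    while True:
        a1, a2 = lam, mu * x              # propensities
        dt1 = (P1 - T1) / a1 if a1 > 0 else np.inf
        dt2 = (P2 - T2) / a2 if a2 > 0 else np.inf
        dt = min(dt1, dt2)
        if t + dt > T:
            return x
        t += dt
        T1 += a1 * dt
        T2 += a2 * dt
        if dt1 < dt2:
            x += 1
            P1 += rng_birth.exponential()
        else:
            x -= 1
            P2 += rng_death.exponential()

rng = np.random.default_rng(4)
lam, mu, x0, T = 10.0, 1.0, 0, 4.0
n_outer, n_inner = 100, 20

# S_birth ~ Var_s1( E[X_T | birth stream s1] ) / Var(X_T), by nested MC
cond_means, samples = [], []
for _ in range(n_outer):
    seed1 = int(rng.integers(1 << 30))
    vals = [birth_death(T, x0, lam, mu,
                        np.random.default_rng(seed1),               # frozen
                        np.random.default_rng(int(rng.integers(1 << 30))))
            for _ in range(n_inner)]
    cond_means.append(np.mean(vals))
    samples.extend(vals)
S_birth = np.var(cond_means) / np.var(samples)
print(f"first-order variance share of the birth channel ~ {S_birth:.2f}")
```

The nested estimator is crude (its inner-loop noise biases the conditional variance upward), but it shows how per-channel streams make channel-wise sensitivities well defined.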

  9. Gravitoelectromagnetic analogy based on tidal tensors

    SciTech Connect

    Costa, L. Filipe O.; Herdeiro, Carlos A. R.

    2008-07-15

    We propose a new approach to a physical analogy between general relativity and electromagnetism, based on tidal tensors of both theories. Using this approach we write a covariant form for the gravitational analogues of the Maxwell equations, which makes transparent both the similarities and key differences between the two interactions. The following realizations of the analogy are given. The first one matches linearized gravitational tidal tensors to exact electromagnetic tidal tensors in Minkowski spacetime. The second one matches exact magnetic gravitational tidal tensors for ultrastationary metrics to exact magnetic tidal tensors of electromagnetism in curved spaces. In the third we show that our approach leads to a two-step exact derivation of Papapetrou's equation describing the force exerted on a spinning test particle. Analogous scalar invariants built from tidal tensors of both theories are also discussed.

  10. Bayes method for low rank tensor estimation

    NASA Astrophysics Data System (ADS)

    Suzuki, Taiji; Kanagawa, Heishiro

    2016-03-01

    We investigate the statistical convergence rate of a Bayesian low-rank tensor estimator, and construct a Bayesian nonlinear tensor estimator. The problem setting is the regression problem where the regression coefficient forms a tensor structure. This problem setting occurs in many practical applications, such as collaborative filtering, multi-task learning, and spatio-temporal data analysis. The convergence rate of the Bayes tensor estimator is analyzed in terms of both in-sample and out-of-sample predictive accuracies. It is shown that a fast learning rate is achieved without any strong convexity of the observation. Moreover, we extend the tensor estimator to a nonlinear function estimator so that we estimate a function that is a tensor product of several functions.

  11. Tensor coupling effect on relativistic symmetries

    NASA Astrophysics Data System (ADS)

    Chen, ShouWan; Li, DongPeng; Guo, JianYou

    2016-08-01

    The similarity renormalization group is used to transform the Dirac Hamiltonian with tensor coupling into a diagonal form. The upper (lower) diagonal element becomes a Schrödinger-like operator with the tensor component separated from the original Hamiltonian. Based on the operator, the tensor effect of the relativistic symmetries is explored with a focus on the single-particle energy contributed by the tensor coupling. The results show that the tensor coupling destroying (improving) the spin (pseudospin) symmetry is mainly attributed to the coupling of the spin-orbit and the tensor term, which plays an opposite role in the single-particle energy for the (pseudo-) spin-aligned and spin-unaligned states and has an important influence on the shell structure and its evolution.

  12. Inflationary tensor perturbations after BICEP2.

    PubMed

    Caligiuri, Jerod; Kosowsky, Arthur

    2014-05-16

    The measurement of B-mode polarization of the cosmic microwave background at large angular scales by the BICEP experiment suggests a stochastic gravitational wave background from early-Universe inflation with a surprisingly large amplitude. The power spectrum of these tensor perturbations can be probed both with further measurements of the microwave background polarization at smaller scales and also directly via interferometry in space. We show that sufficiently sensitive high-resolution B-mode measurements will ultimately have the ability to test the inflationary consistency relation between the amplitude and spectrum of the tensor perturbations, confirming their inflationary origin. Additionally, a precise B-mode measurement of the tensor spectrum will predict the tensor amplitude on solar system scales to 20% accuracy for an exact power-law tensor spectrum, so a direct detection will then measure the running of the tensor spectral index to high precision. PMID:24877926

  13. The Invar tensor package: Differential invariants of Riemann

    NASA Astrophysics Data System (ADS)

    Martín-García, J. M.; Yllanes, D.; Portugal, R.

    2008-10-01

    The long standing problem of the relations among the scalar invariants of the Riemann tensor is computationally solved for all 6·10⁵ objects with up to 12 derivatives of the metric. This covers cases ranging from products of up to 6 undifferentiated Riemann tensors to cases with up to 10 covariant derivatives of a single Riemann. We extend our computer algebra system Invar to produce within seconds a canonical form for any of those objects in terms of a basis. The process is as follows: (1) an invariant is converted in real time into a canonical form with respect to the permutation symmetries of the Riemann tensor; (2) Invar reads a database of more than 6·10⁵ relations and applies those coming from the cyclic symmetry of the Riemann tensor; (3) it then applies the relations coming from the Bianchi identity, (4) the relations coming from commutations of covariant derivatives, (5) the dimensionally-dependent identities for dimension 4, and finally (6) it simplifies invariants that can be expressed as products of dual invariants. Invar runs on top of the tensor computer algebra systems xTensor (for Mathematica) and Canon (for Maple). Program summary: Program title: Invar Tensor Package v2.0 Catalogue identifier: ADZK_v2_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADZK_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 3 243 249 No. of bytes in distributed program, including test data, etc.: 939 Distribution format: tar.gz Programming language: Mathematica and Maple Computer: Any computer running Mathematica versions 5.0 to 6.0 or Maple versions 9 and 11 Operating system: Linux, Unix, Windows XP, MacOS RAM: 100 Mb Word size: 64 or 32 bits Supplementary material: The new database of relations is much larger than that for the previous version and therefore has not been included in

  14. Some classes of renormalizable tensor models

    NASA Astrophysics Data System (ADS)

    Geloun, Joseph Ben; Livine, Etera R.

    2013-08-01

    We identify new families of renormalizable tensor models from anterior renormalizable tensor models via a mapping capable of reducing or increasing the rank of the theory without having an effect on the renormalizability property. Mainly, a version of the rank 3 tensor model as defined by Ben Geloun and Samary [Ann. Henri Poincare 14, 1599 (2013); e-print arXiv:1201.0176 [hep-th]]

  15. The Topology of Symmetric Tensor Fields

    NASA Technical Reports Server (NTRS)

    Levin, Yingmei; Batra, Rajesh; Hesselink, Lambertus; Levy, Yuval

    1997-01-01

    Combinatorial topology, also known as "rubber sheet geometry", has extensive applications in geometry and analysis, many of which result from connections with the theory of differential equations. A link between topology and differential equations is vector fields. Recent developments in scientific visualization have shown that vector fields also play an important role in the analysis of second-order tensor fields. A second-order tensor field can be transformed into its eigensystem, namely, eigenvalues and their associated eigenvectors, without loss of information content. Eigenvectors behave in a similar fashion to ordinary vectors, with even simpler topological structures due to their sign indeterminacy. Incorporating information about eigenvectors and eigenvalues in a display technique known as hyperstreamlines reveals the structure of a tensor field. To simplify an often complex tensor field and to capture its important features, the tensor is decomposed into an isotropic tensor and a deviator. A tensor field and its deviator share the same set of eigenvectors, and therefore they have a similar topological structure. The deviator determines the properties of a tensor field, while the isotropic part provides a uniform bias. Degenerate points are basic constituents of tensor fields. In 2-D tensor fields, there are only two types of degenerate points; while in 3-D, the degenerate points can be characterized in a Q'-R' plane. Compressible and incompressible flows share similar topological features due to the similarity of their deviators. In the case of the deformation tensor, the singularities of its deviator represent the area of vortex core in the field. In turbulent flows, the similarities and differences of the topology of the deformation and the Reynolds stress tensors reveal that the basic eddy-viscosity assumptions have their validity in turbulence modeling under certain conditions.

  16. Curvature tensors and unified field equations on SEXn

    NASA Astrophysics Data System (ADS)

    Chung, Kyung Tae; Lee, Il Young

    1988-09-01

    We study the curvature tensors and field equations in the n-dimensional SE manifold SEXn. We obtain several basic properties of the vectors S_λ and U_λ and then of the SE curvature tensor and its contractions, such as a generalized Ricci identity, a generalized Bianchi identity, and two variations of the Bianchi identity satisfied by the SE Einstein tensor. Finally, a system of field equations is discussed in SEXn and one of its particular solutions is constructed and displayed.

  17. Denoising of hyperspectral images by best multilinear rank approximation of a tensor

    NASA Astrophysics Data System (ADS)

    Marin-McGee, Maider; Velez-Reyes, Miguel

    2010-04-01

    The hyperspectral image cube can be modeled as a three dimensional array. Tensors and the tools of multilinear algebra provide a natural framework to deal with this type of mathematical object. Singular value decomposition (SVD) and its variants have been used by the HSI community for denoising of hyperspectral imagery. Denoising of HSI using SVD is achieved by finding a low rank approximation of a matrix representation of the hyperspectral image cube. This paper investigates similar concepts in hyperspectral denoising by using a low multilinear rank approximation of the given HSI tensor representation. The best multilinear rank approximation (BMRA) problem for a given tensor A is to find a lower multilinear rank tensor B that is as close as possible to A in the Frobenius norm. Different numerical methods to compute the BMRA, using the alternating least squares (ALS) method and Newton-type methods over products of Grassmann manifolds, are presented. The effects of the multilinear rank, the numerical method used to compute the BMRA, and different parameter choices in those methods are studied. Results show that comparable results are achievable with both ALS and Newton-type methods. Also, classification results using the filtered tensor are better than those obtained with denoising using either SVD or MNF.
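
    A minimal numpy sketch of BMRA via the ALS-type higher-order orthogonal iteration (HOOI), applied to a synthetic noisy low-rank cube; the sizes, ranks, and noise level are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def unfold(X, mode):
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def hooi(X, ranks, n_iter=20):
    """Best multilinear rank approximation by higher-order orthogonal
    iteration: alternately update each factor as the leading left singular
    vectors of the partially projected, unfolded tensor."""
    # HOSVD initialization
    U = [np.linalg.svd(unfold(X, k))[0][:, :r] for k, r in enumerate(ranks)]
    for _ in range(n_iter):
        for k in range(X.ndim):
            Y = X
            for m in range(X.ndim):
                if m != k:                      # project all other modes
                    Y = np.moveaxis(np.tensordot(Y, U[m], axes=(m, 0)), -1, m)
            U[k] = np.linalg.svd(unfold(Y, k))[0][:, :ranks[k]]
    # core tensor and low-multilinear-rank reconstruction
    G = X
    for m in range(X.ndim):
        G = np.moveaxis(np.tensordot(G, U[m], axes=(m, 0)), -1, m)
    B = G
    for m in range(X.ndim):
        B = np.moveaxis(np.tensordot(B, U[m].T, axes=(m, 0)), -1, m)
    return B

rng = np.random.default_rng(5)
core = rng.standard_normal((3, 3, 2))
A = [rng.standard_normal((20, 3)), rng.standard_normal((20, 3)),
     rng.standard_normal((10, 2))]
clean = core
for m in range(3):
    clean = np.moveaxis(np.tensordot(clean, A[m].T, axes=(m, 0)), -1, m)
noisy = clean + 0.1 * rng.standard_normal(clean.shape)
denoised = hooi(noisy, (3, 3, 2))
err = lambda X: np.linalg.norm(X - clean) / np.linalg.norm(clean)
print(f"noisy error {err(noisy):.3f} -> denoised error {err(denoised):.3f}")
```

Truncating to the correct multilinear rank discards most of the noise subspace, so the reconstruction error drops relative to the noisy cube.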

  18. Modeling individual HRTF tensor using high-order partial least squares

    NASA Astrophysics Data System (ADS)

    Huang, Qinghua; Li, Lin

    2014-12-01

    A tensor is used to describe head-related transfer functions (HRTFs) depending on frequencies, sound directions, and anthropometric parameters. It keeps the multi-dimensional structure of measured HRTFs. To construct a multi-linear HRTF personalization model, an individual core tensor is extracted from the original HRTFs using high-order singular value decomposition (HOSVD). The individual core tensor in lower-dimensional space acts as the output of the multi-linear model. Some key anthropometric parameters, used as the inputs of the model, are selected by Laplacian scores and correlation analyses between all the measured parameters and the individual core tensor. Then, the multi-linear regression model is constructed by high-order partial least squares (HOPLS), seeking a joint subspace approximation for both the selected parameters and the individual core tensor. The numbers of latent variables and loadings control the complexity of the model and help prevent overfitting. Compared with the partial least squares regression (PLSR) method, objective simulations demonstrate better performance in predicting individual HRTFs, especially for sound directions ipsilateral to the concerned ear. The subjective listening tests show that the predicted individual HRTFs approximate the measured HRTFs well for sound localization.

  19. Covariant Conformal Decomposition of Einstein Equations

    NASA Astrophysics Data System (ADS)

    Gourgoulhon, E.; Novak, J.

    It has been shown [1,2] that the usual 3+1 form of Einstein's equations may be ill-posed. This result has been previously observed in numerical simulations [3,4]. We present a 3+1 type formalism inspired by these works to decompose Einstein's equations. This decomposition is motivated by the aim of stable numerical implementation and resolution of the equations. We introduce the conformal 3-"metric" (scaled by the determinant of the usual 3-metric), which is a tensor density of weight -2/3. The Einstein equations are then derived in terms of this "metric", of the conformal extrinsic curvature and in terms of the associated derivative. We also introduce a flat 3-metric (the asymptotic metric for isolated systems) and the associated derivative. Finally, the generalized Dirac gauge (introduced by Smarr and York [5]) is used in this formalism and some examples of formulation of Einstein's equations are shown.

  20. Anisotropic fractional diffusion tensor imaging

    PubMed Central

    Meerschaert, Mark M; Magin, Richard L; Ye, Allen Q

    2015-01-01

    Traditional diffusion tensor imaging (DTI) maps brain structure by fitting a diffusion model to the magnitude of the electrical signal acquired in magnetic resonance imaging (MRI). Fractional DTI employs anomalous diffusion models to obtain a better fit to real MRI data, which can exhibit anomalous diffusion in both time and space. In this paper, we describe the challenge of developing and employing anisotropic fractional diffusion models for DTI. Since anisotropy is clearly present in the three-dimensional MRI signal response, such models hold great promise for improving brain imaging. We then propose some candidate models, based on stochastic theory.

  1. Estimating source time function and moment tensor from moment tensor rate functions by constrained L1 norm minimization

    NASA Astrophysics Data System (ADS)

    Wéber, Zoltán

    2009-08-01

    Linear inversion of three-component waveform data for the time-varying moment tensor rate functions (MTRFs) is a powerful method for studying seismic sources. After finding the MTRFs, however, we should try to represent an earthquake by just one moment tensor and one source time function (STF), if possible. This approach is particularly justified when dealing with weak local events. Unfortunately, extraction of a moment tensor and STF from the MTRFs is essentially a non-linear inverse problem. In this paper, we introduce an iterative Lp norm minimization technique to retrieve the best moment tensor and STF from the MTRFs obtained by waveform inversion. To allow only forward slip during the rupture process, we impose a positivity constraint on the STF. The error analysis, carried out by using Monte Carlo simulation, allows us to estimate and display the uncertainties of the retrieved source parameters. On the basis of the resulting moment tensor uncertainties, the statistical significance of the double-couple, compensated linear vector dipole and volumetric parts of the solution can be readily assessed. Tests on synthetic data indicate that the proposed algorithm gives good results for both simple and complex sources. Confidence zones for the retrieved STFs are usually fairly large. The mechanisms, on the other hand, are mostly well resolved. The scalar seismic moments are also determined with acceptable accuracy. If the MTRFs cannot resolve the complex nature of a source, the method yields the average source mechanism. If the subevents are well separated in time, their mechanisms can be estimated by appropriately splitting the MTRFs into subintervals. The method has also been applied to two local earthquakes that occurred in Hungary. The isotropic component of the moment tensor solutions is insignificant, implying the tectonic nature of the investigated events. The principal axes of the source mechanisms agree well with the main stress pattern published for the
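
    The extraction step can be illustrated (in a much simplified form, not Wéber's iterative Lp algorithm) as a best rank-1 fit M(t) ≈ m·s(t) with s(t) ≥ 0, alternated in closed form. The function name and the synthetic Gaussian STF are illustrative assumptions.

```python
import numpy as np

def split_mtrf(M, n_iter=50):
    """Given MTRFs as a 6 x T array M, find a unit moment-tensor vector m
    and a nonnegative source time function s minimizing ||M - m s^T||_F
    by alternating least squares with a positivity clip on s."""
    U, S, Vt = np.linalg.svd(M, full_matrices=False)
    m, s = U[:, 0], S[0] * Vt[0]          # leading singular pair as init
    if s.sum() < 0:                        # fix the overall sign
        m, s = -m, -s
    for _ in range(n_iter):
        s = np.clip(M.T @ m, 0.0, None)    # LS update, projected to s >= 0
        m = M @ s / (s @ s + 1e-30)        # LS update for m (fit kept nonzero)
        m /= np.linalg.norm(m)
    return m, s

# synthetic test: rank-1 source with a Gaussian STF plus small noise
rng = np.random.default_rng(6)
m_true = rng.standard_normal(6)
m_true /= np.linalg.norm(m_true)
s_true = np.exp(-0.5 * ((np.arange(50) - 20) / 5.0) ** 2)
M = np.outer(m_true, s_true) + 0.01 * rng.standard_normal((6, 50))
m_est, s_est = split_mtrf(M)
print(abs(m_est @ m_true))   # close to 1: mechanism recovered up to sign
```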

  2. Importance of Force Decomposition for Local Stress Calculations in Biomembrane Molecular Simulations.

    PubMed

    Vanegas, Juan M; Torres-Sánchez, Alejandro; Arroyo, Marino

    2014-02-11

    Local stress fields are routinely computed from molecular dynamics trajectories to understand the structure and mechanical properties of lipid bilayers. These calculations can be systematically understood with the Irving-Kirkwood-Noll theory. In identifying the stress tensor, a crucial step is the decomposition of the forces on the particles into pairwise contributions. However, such a decomposition is not unique in general, leading to an ambiguity in the definition of the stress tensor, particularly for multibody potentials. Furthermore, a theoretical treatment of constraints in local stress calculations has been lacking. Here, we present a new implementation of local stress calculations that systematically treats constraints and considers a privileged decomposition, the central force decomposition, that leads to a symmetric stress tensor by construction. We focus on biomembranes, although the methodology presented here is widely applicable. Our results show that some unphysical behavior obtained with previous implementations (e.g. nonconstant normal stress profiles along an isotropic bilayer in equilibrium) is a consequence of an improper treatment of constraints. Furthermore, other valid force decompositions produce significantly different stress profiles, particularly in the presence of dihedral potentials. Our methodology reveals the striking effect of unsaturations on the bilayer mechanics, missed by previous stress calculation implementations. PMID:26580046
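
    A minimal sketch of why a central force decomposition yields a symmetric stress by construction: each pairwise force is parallel to the pair vector, so every dyadic contribution f_ij ⊗ r_ij is symmetric. The Lennard-Jones setup and all parameters are illustrative, not the paper's implementation.

```python
import numpy as np

def lj_force(rij, eps=1.0, sigma=1.0):
    """Force on particle i from j for a Lennard-Jones pair (central force,
    directed along the pair vector rij)."""
    r2 = rij @ rij
    sr6 = (sigma ** 2 / r2) ** 3
    return 24.0 * eps * (2.0 * sr6 ** 2 - sr6) / r2 * rij

def pair_virial_stress(pos, volume):
    """stress = -(1/V) * sum_{i<j} f_ij (outer) r_ij; symmetric because
    each f_ij is parallel to r_ij (central force decomposition)."""
    stress = np.zeros((3, 3))
    n = len(pos)
    for i in range(n):
        for j in range(i + 1, n):
            rij = pos[i] - pos[j]
            stress -= np.outer(lj_force(rij), rij)
    return stress / volume

rng = np.random.default_rng(7)
pos = rng.uniform(0.0, 4.0, size=(10, 3))
S = pair_virial_stress(pos, volume=64.0)
print(np.allclose(S, S.T))  # True: symmetric by construction
```

A non-central decomposition of the same net forces (e.g. from multibody potentials) would not enjoy this automatic symmetry, which is the point the abstract makes.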

  3. Fast stray field computation on tensor grids

    PubMed Central

    Exl, L.; Auzinger, W.; Bance, S.; Gusenbauer, M.; Reichel, F.; Schrefl, T.

    2012-01-01

    A direct integration algorithm is described to compute the magnetostatic field and energy for given magnetization distributions on not necessarily uniform tensor grids. We use an analytically-based tensor approximation approach for function-related tensors, which reduces calculations to multilinear algebra operations. The algorithm scales with N^(4/3) for N computational cells used and with N^(2/3) (sublinear) when magnetization is given in canonical tensor format. In the final section we confirm our theoretical results concerning computing times and accuracy by means of numerical examples. PMID:24910469

  4. Electroproduction of tensor mesons in QCD

    NASA Astrophysics Data System (ADS)

    Braun, V. M.; Kivel, N.; Strohmaier, M.; Vladimirov, A. A.

    2016-06-01

    Due to multiple possible polarizations hard exclusive production of tensor mesons by virtual photons or in heavy meson decays offers interesting possibilities to study the helicity structure of the underlying short-distance process. Motivated by the first measurement of the transition form factor γ*γ → f₂(1270) at large momentum transfers by the BELLE collaboration we present an improved QCD analysis of this reaction in the framework of collinear factorization including contributions of twist-three quark-antiquark-gluon operators and an estimate of soft end-point corrections using light-cone sum rules. The results appear to be in good agreement with the data, in particular the predicted scaling behavior is reproduced in all cases.

  5. Temporal dynamics of biotic and abiotic drivers of litter decomposition.

    PubMed

    García-Palacios, Pablo; Shaw, E Ashley; Wall, Diana H; Hättenschwiler, Stephan

    2016-05-01

    Climate, litter quality and decomposers drive litter decomposition. However, little is known about whether their relative contribution changes at different decomposition stages. To fill this gap, we evaluated the relative importance of leaf litter polyphenols, decomposer communities and soil moisture for litter C and N loss at different stages throughout the decomposition process. Although both microbial and nematode communities regulated litter C and N loss in the early decomposition stages, soil moisture and legacy effects of initial differences in litter quality played a major role in the late stages of the process. Our results provide strong evidence for substantial shifts in how biotic and abiotic factors control litter C and N dynamics during decomposition. Taking into account such temporal dynamics will increase the predictive power of decomposition models that are currently limited by a single-pool approach applying control variables uniformly to the entire decay process. PMID:26947573

  6. Thermal decomposition of allylbenzene ozonide

    SciTech Connect

    Ewing, J.C.; Church, D.F.; Pryor, W.A. )

    1989-07-19

    Thermal decomposition of allylbenzene ozonide (ABO) at 98 °C in the liquid phase yields toluene, bibenzyl, phenylacetaldehyde, formic acid, and (benzyloxy)methyl formate as major products; benzyl chloride is formed when chlorinated solvents are employed. These products, as well as benzyl formate, are formed when ABO is decomposed at 37 °C. When the decomposition of ABO is carried out in the presence of 1-butanethiol, the product distribution changes: yields of toluene increase, no bibenzyl is formed, and decreases in yields of (benzyloxy)methyl formate, phenylacetaldehyde, and benzyl chloride are observed. The decomposition of 1-octene ozonide (OTO) also was studied for comparison. The activation parameters for both ABO and OTO are similar (28.2 kcal/mol, log A = 13.6 and 26.6 kcal/mol, log A = 12.5, respectively); these data suggest that ozonides decompose by homolysis of the O-O bond, rather than by an alternative synchronous two-bond scission process. When ABO is decomposed at 37 °C in the presence of the spin traps 5,5-dimethyl-1-pyrroline N-oxide (DMPO) or 3,3,5,5-tetramethyl-1-pyrroline N-oxide (M4PO), ESR signals are observed that are consistent with the trapping of benzyl and other carbon- and oxygen-centered radicals. A mechanism for the thermal decomposition of ABO that involves peroxide bond homolysis and subsequent β-scission is proposed. Thus, Criegee ozonides decompose to give free radicals at quite modest temperatures.

  7. Hyperelastic Internal Balance by Multiplicative Decomposition of the Deformation Gradient

    NASA Astrophysics Data System (ADS)

    Demirkoparan, Hasan; Pence, Thomas J.; Tsai, Hungyu

    2014-07-01

    The multiplicative decomposition of the deformation gradient F = F̂F* is often used in finite deformation continuum mechanics as a basis for treating mechanical effects including plasticity, biological growth, material swelling, and notions of material morphogenesis. Evolution rules for the particular effect from this list are then posed for F*. The tensor F̂ is then invoked to describe a subsequent elastic accommodation, and a hyperelastic framework is put in place for its determination using an elastic energy density function, say W(F̂), as a constitutive specification. Here we explore the theory that emerges if both F* and F̂ are governed by hyperelastic criteria; thus we consider energy densities W(F̂, F*). The decomposition of F is itself determined by energy minimization, and the variation associated with the multiplicative decomposition gives a tensor relation that is interpreted as an internal balance requirement. Our initial development purposefully proceeds with minimal presumptions on the kinematic interpretation of the factors in the deformation gradient decomposition. Connections are then made to treatments that ascribe particular kinematic properties to the decomposition factors: the theory of structured deformations is especially significant in this regard. Such theories have broad utility in describing certain substructural reconfigurations in solids. To demonstrate in the context of the present variational treatment we consider a boundary value problem that involves an imposed twist. If the twist is small then the minimizer is classically smooth. At larger values of twist the energy minimizer exhibits a non-smooth deformation that localizes slip at a singular surface.
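    A minimal sketch of how an internal balance relation can arise from stationarity with respect to the decomposition at fixed F (an illustrative reconstruction under standard chain-rule conventions, not the paper's exact statement):

```latex
% With F fixed, F-hat = F (F*)^{-1}, so a variation of F* induces
\[
F = \hat{F}F^{*}, \qquad
\delta \hat{F} = -\hat{F}\,\delta F^{*}\,{F^{*}}^{-1} .
\]
% Stationarity of W(F-hat, F*) with respect to F* then gives the
% internal balance relation
\[
\frac{\partial W}{\partial F^{*}}
  = \hat{F}^{\mathsf T}\,
    \frac{\partial W}{\partial \hat{F}}\,
    {F^{*}}^{-\mathsf T} .
\]
```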

  8. Global decomposition experiment shows soil animal impacts on decomposition are climate-dependent

    PubMed Central

    WALL, DIANA H; BRADFORD, MARK A; ST JOHN, MARK G; TROFYMOW, JOHN A; BEHAN-PELLETIER, VALERIE; BIGNELL, DAVID E; DANGERFIELD, J MARK; PARTON, WILLIAM J; RUSEK, JOSEF; VOIGT, WINFRIED; WOLTERS, VOLKMAR; GARDEL, HOLLEY ZADEH; AYUKE, FRED O; BASHFORD, RICHARD; BELJAKOVA, OLGA I; BOHLEN, PATRICK J; BRAUMAN, ALAIN; FLEMMING, STEPHEN; HENSCHEL, JOH R; JOHNSON, DAN L; JONES, T HEFIN; KOVAROVA, MARCELA; KRANABETTER, J MARTY; KUTNY, LES; LIN, KUO-CHUAN; MARYATI, MOHAMED; MASSE, DOMINIQUE; POKARZHEVSKII, ANDREI; RAHMAN, HOMATHEVI; SABARÁ, MILLOR G; SALAMON, JOERG-ALFRED; SWIFT, MICHAEL J; VARELA, AMANDA; VASCONCELOS, HERALDO L; WHITE, DON; ZOU, XIAOMING

    2008-01-01

    Climate and litter quality are primary drivers of terrestrial decomposition and, based on evidence from multisite experiments at regional and global scales, are universally factored into global decomposition models. In contrast, soil animals are considered key regulators of decomposition at local scales but their role at larger scales is unresolved. Soil animals are consequently excluded from global models of organic mineralization processes. Incomplete assessment of the roles of soil animals stems from the difficulties of manipulating invertebrate animals experimentally across large geographic gradients. This is compounded by deficient or inconsistent taxonomy. We report a global decomposition experiment to assess the importance of soil animals in C mineralization, in which a common grass litter substrate was exposed to natural decomposition in either control or reduced animal treatments across 30 sites distributed from 43°S to 68°N on six continents. Animals in the mesofaunal size range were recovered from the litter by Tullgren extraction and identified to common specifications, mostly at the ordinal level. The design of the trials enabled faunal contribution to be evaluated against abiotic parameters between sites. Soil animals increase decomposition rates in temperate and wet tropical climates, but have neutral effects where temperature or moisture constrain biological activity. Our findings highlight that faunal influences on decomposition are dependent on prevailing climatic conditions. We conclude that (1) inclusion of soil animals will improve the predictive capabilities of region- or biome-scale decomposition models, (2) soil animal influences on decomposition are important at the regional scale when attempting to predict global change scenarios, and (3) the statistical relationship between decomposition rates and climate, at the global scale, is robust against changes in soil faunal abundance and diversity.

  9. Communication: Acceleration of coupled cluster singles and doubles via orbital-weighted least-squares tensor hypercontraction

    SciTech Connect

    Parrish, Robert M.; Sherrill, C. David; Hohenstein, Edward G.; Kokkila, Sara I. L.; Martínez, Todd J.

    2014-05-14

    We apply orbital-weighted least-squares tensor hypercontraction decomposition of the electron repulsion integrals to accelerate the coupled cluster singles and doubles (CCSD) method. Using accurate and flexible low-rank factorizations of the electron repulsion integral tensor, we are able to reduce the scaling of the most vexing particle-particle ladder term in CCSD from O(N^6) to O(N^5), with remarkably low error. Combined with a T1-transformed Hamiltonian, this leads to substantial practical accelerations against an optimized density-fitted CCSD implementation.

  10. Mechanistic insights into formation of SnO₂ nanotubes: asynchronous decomposition of poly(vinylpyrrolidone) in electrospun fibers during calcining process.

    PubMed

    Wu, Jinjin; Zeng, Dawen; Wang, Xiaoxia; Zeng, Lei; Huang, Qingwu; Tang, Gen; Xie, Changsheng

    2014-09-23

    The formation mechanism of SnO2 nanotubes (NTs) fabricated by generic electrospinning and calcining was revealed by systematically investigating the structural evolution of the calcined fibers, the product composition, and the released volatile byproducts. The structural evolution of the fibers proceeded sequentially from dense fiber to wire-in-tube to nanotube. This remarkable structural evolution indicated a disparate thermal decomposition of poly(vinylpyrrolidone) (PVP) in the interior and on the surface of the fibers. PVP on the surface of the outer fibers decomposed completely at a lower temperature (<340 °C), owing to exposure to oxygen, and SnO2 crystallized and formed a shell on the fiber. Interior PVP was prone to loss of its side substituents under the oxygen-deficient decomposition, leaving only the carbon main chain. The remaining Sn crystallized as pores formed through the aggregation of SnO2 nanocrystals in the shell. The residual carbon chain did not decompose completely at temperatures below 550 °C. We propose a PVP-assisted Ostwald ripening mechanism for the formation of SnO2 NTs. This work guides the fabrication of diverse nanostructured metal oxides by the generic electrospinning method. PMID:25162977

  11. Novel techniques for data decomposition and load balancing for parallel processing of vision systems: Implementation and evaluation using a motion estimation system

    NASA Technical Reports Server (NTRS)

    Choudhary, Alok Nidhi; Leung, Mun K.; Huang, Thomas S.; Patel, Janak H.

    1989-01-01

    Computer vision systems employ a sequence of vision algorithms in which the output of an algorithm is the input of the next algorithm in the sequence. Algorithms that constitute such systems exhibit vastly different computational characteristics, and therefore, require different data decomposition techniques and efficient load balancing techniques for parallel implementation. However, since the input data for a task is produced as the output data of the previous task, this information can be exploited to perform knowledge based data decomposition and load balancing. Presented here are algorithms for a motion estimation system. The motion estimation is based on the point correspondence between the involved images which are a sequence of stereo image pairs. Researchers propose algorithms to obtain point correspondences by matching feature points among stereo image pairs at any two consecutive time instants. Furthermore, the proposed algorithms employ non-iterative procedures, which results in saving considerable amounts of computation time. The system consists of the following steps: (1) extraction of features; (2) stereo match of images in one time instant; (3) time match of images from consecutive time instants; (4) stereo match to compute final unambiguous points; and (5) computation of motion parameters.

  12. Asymmetric tensor analysis for flow visualization.

    PubMed

    Zhang, Eugene; Yeh, Harry; Lin, Zhongzang; Laramee, Robert S

    2009-01-01

    The gradient of a velocity vector field is an asymmetric tensor field which can provide critical insight that is difficult to infer from traditional trajectory-based vector field visualization techniques. We describe the structures in the eigenvalue and eigenvector fields of the gradient tensor and how these structures can be used to infer the behaviors of the velocity field. To illustrate the structures in asymmetric tensor fields, we introduce the notions of eigenvalue and eigenvector manifolds. These concepts afford a number of theoretical results that clarify the connections between symmetric and antisymmetric components in tensor fields. In addition, these manifolds naturally lead to partitions of tensor fields, which we use to design effective visualization strategies. Both the eigenvalue manifold and the eigenvector manifold are supported by a tensor reparameterization with physical meaning. This allows us to relate our tensor analysis to physical quantities such as rotation, angular deformation, and dilation, which provide a physical interpretation of our tensor-driven vector field analysis in the context of fluid mechanics. To demonstrate the utility of our approach, we have applied our visualization techniques and interpretation to the study of the Sullivan Vortex as well as computational fluid dynamics simulation data. PMID:19008559
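    The split of the velocity gradient into symmetric and antisymmetric components, together with the rotation, angular deformation, and dilation quantities mentioned in the abstract, can be sketched in a few lines (the sample gradient values are hypothetical):

```python
import numpy as np

# Velocity gradient tensor J = grad(v) at one point (hypothetical values).
J = np.array([[ 0.2, 1.0,  0.0],
              [-0.5, 0.4,  0.3],
              [ 0.0, 0.2, -0.3]])

# Symmetric part: rate-of-strain tensor (angular deformation + dilation).
S = 0.5 * (J + J.T)

# Antisymmetric part: rotation (vorticity) tensor.
R = 0.5 * (J - J.T)

# Dilation (volumetric expansion rate) is the trace of J, carried by S.
dilation = np.trace(J)

# Deviatoric strain isolates angular deformation from dilation.
S_dev = S - (dilation / 3.0) * np.eye(3)
```

By construction S + R recovers J exactly, and the deviatoric strain is traceless.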

  13. Matrix Representation and Tensor Analysis of Courses

    ERIC Educational Resources Information Center

    Tait, W. H.

    1975-01-01

    A discussion of how a tensor theory can be adapted to provide a fully quantitative analysis of a social system. A social tensor is developed from the physical analogue and used to analyze the structure of an education course. (Author/HB)

  14. Decomposing Nekrasov decomposition

    NASA Astrophysics Data System (ADS)

    Morozov, A.; Zenkevich, Y.

    2016-02-01

    AGT relations imply that the four-point conformal block admits a decomposition into a sum over pairs of Young diagrams of essentially rational Nekrasov functions — this is immediately seen when conformal block is represented in the form of a matrix model. However, the q-deformation of the same block has a deeper decomposition — into a sum over a quadruple of Young diagrams of a product of four topological vertices. We analyze the interplay between these two decompositions, their properties and their generalization to multi-point conformal blocks. In the latter case we explain how Dotsenko-Fateev all-with-all (star) pair "interaction" is reduced to the quiver model nearest-neighbor (chain) one. We give new identities for q-Selberg averages of pairs of generalized Macdonald polynomials. We also translate the slicing invariance of refined topological strings into the language of conformal blocks and interpret it as abelianization of generalized Macdonald polynomials.

  15. CAST: Contraction Algorithm for Symmetric Tensors

    SciTech Connect

    Rajbhandari, Samyam; Nikam, Akshay; Lai, Pai-Wei; Stock, Kevin; Krishnamoorthy, Sriram; Sadayappan, Ponnuswamy

    2014-09-22

    Tensor contractions represent the most compute-intensive core kernels in ab initio computational quantum chemistry and nuclear physics. Symmetries in these tensor contractions make them difficult to load balance and scale to large distributed systems. In this paper, we develop an efficient and scalable algorithm to contract symmetric tensors. We introduce a novel approach that avoids data redistribution in contracting symmetric tensors while also avoiding redundant storage and maintaining load balance. We present experimental results on two parallel supercomputers for several symmetric contractions that appear in the CCSD quantum chemistry method. We also present a novel approach to tensor redistribution that can take advantage of parallel hyperplanes when the initial distribution has replicated dimensions, and use collective broadcast when the final distribution has replicated dimensions, making the algorithm very efficient.

  16. Inflation and alternatives with blue tensor spectra

    SciTech Connect

    Wang, Yi; Xue, Wei E-mail: wei.xue@sissa.it

    2014-10-01

    We study the tilt of the primordial gravitational wave spectrum. A hint of blue tilt emerges from analyzing the BICEP2 and POLARBEAR data. Motivated by this, we explore the possibilities of blue tensor spectra in very early universe cosmology models, including null energy condition violating inflation, inflation with general initial conditions, and string gas cosmology. For the simplest G-inflation, a blue tensor spectrum also implies a blue scalar spectrum. In general, inflation models with blue tensor spectra indicate large non-Gaussianities. On the other hand, string gas cosmology predicts a blue tensor spectrum with highly Gaussian fluctuations. If further experiments do confirm a blue tensor spectrum, non-Gaussianity becomes a distinguishing test between inflation and its alternatives.

  17. Tensor dissimilarity based adaptive seeding algorithm for DT-MRI visualization with streamtubes

    NASA Astrophysics Data System (ADS)

    Weldeselassie, Yonas T.; Hamarneh, Ghassan; Weiskopf, Daniel

    2007-03-01

    In this paper, we propose an adaptive seeding strategy for visualization of diffusion tensor magnetic resonance imaging (DT-MRI) data using streamtubes. DT-MRI is a medical imaging modality that captures unique water diffusion properties and fiber orientation information of the imaged tissues. Visualizing DT-MRI data using streamtubes has the advantage that not only the anisotropic nature of the diffusion is visualized but also the underlying anatomy of biological structures is revealed. This makes streamtubes significant for the analysis of fibrous tissues in medical images. In order to avoid rendering multiple similar streamtubes, an adaptive seeding strategy is employed which takes into account the similarity of tensors in a given region. The goal is to automate the process of generating seed points such that regions with dissimilar tensors are assigned more seed points than regions with similar tensors. The algorithm is based on tensor dissimilarity metrics that take into account both diffusion magnitudes and directions to optimize the seeding positions and density of streamtubes in order to reduce the visual clutter. Two recent advances in tensor calculus and tensor dissimilarity metrics are utilized: the Log-Euclidean metric and the J-divergence. Results show that adaptive seeding not only helps to cull unnecessary streamtubes that would obscure the visualization but also does so without having to compute the culled streamtubes, which makes the visualization process faster.
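    As an illustration of the kind of tensor dissimilarity metric used for seeding, here is a minimal Log-Euclidean distance between two symmetric positive-definite tensors. The example tensors are hypothetical and this is a sketch, not the paper's implementation:

```python
import numpy as np

def spd_log(D):
    """Matrix logarithm of a symmetric positive-definite tensor via eigendecomposition."""
    w, V = np.linalg.eigh(D)
    return V @ np.diag(np.log(w)) @ V.T

def log_euclidean_distance(D1, D2):
    """Log-Euclidean dissimilarity: Frobenius norm of the difference of matrix logs."""
    return np.linalg.norm(spd_log(D1) - spd_log(D2))

# Two hypothetical 3x3 diffusion tensors: similar tensors give a small
# distance (fewer seeds in that region), dissimilar ones a larger distance.
A = np.diag([1.0, 0.5, 0.5])
B = np.diag([1.2, 0.4, 0.5])
d = log_euclidean_distance(A, B)
```

Unlike the plain Euclidean difference, this metric accounts for the multiplicative geometry of diffusion tensors and is zero only for identical tensors.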

  18. NTO decomposition studies

    SciTech Connect

    Oxley, J.C.; Smith, J.L.; Yeager, K.E.; Rogers, E.; Dong, X.X.

    1996-07-01

    To examine the thermal decomposition of 5-nitro-2,4-dihydro-3H-1,2,4-triazol-3-one (NTO) in detail, isotopic labeling studies were undertaken. NTO samples labeled with ¹⁵N in three different locations [N(1) and N(2), N(4), and N(6)] were prepared. Upon thermolysis, the majority of the NTO condensed-phase product was a brown, insoluble residue, but small quantities of 2,4-dihydro-3H-1,2,4-triazol-3-one (TO) and triazole were detected. Gases comprised the remainder of the NTO decomposition products. The analysis of these gases is reported along with mechanistic implications of these observations.

  19. X-ray tensor tomography

    NASA Astrophysics Data System (ADS)

    Malecki, A.; Potdevin, G.; Biernath, T.; Eggl, E.; Willer, K.; Lasser, T.; Maisenbacher, J.; Gibmeier, J.; Wanner, A.; Pfeiffer, F.

    2014-02-01

    Here we introduce a new concept for x-ray computed tomography that yields information about the local micro-morphology and its orientation in each voxel of the reconstructed 3D tomogram. Contrary to conventional x-ray CT, which only reconstructs a single scalar value for each point in the 3D image, our approach provides a full scattering tensor with multiple independent structural parameters in each volume element. In the application example shown in this study, we highlight that our method can visualize sub-pixel fiber orientations in a carbon composite sample, hence demonstrating its value for non-destructive testing applications. Moreover, as the method is based on the use of a conventional x-ray tube, we believe that it will also have a great impact in the wider range of material science investigations and in future medical diagnostics.

  20. Depth inpainting by tensor voting.

    PubMed

    Kulkarni, Mandar; Rajagopalan, Ambasamudram N

    2013-06-01

    Depth maps captured by range scanning devices or by using optical cameras often suffer from missing regions due to occlusions, reflectivity, limited scanning area, sensor imperfections, etc. In this paper, we propose a fast and reliable algorithm for depth map inpainting using the tensor voting (TV) framework. For less complex missing regions, local edge and depth information is utilized for synthesizing missing values. The depth variations are modeled by local planes using 3D TV, and missing values are estimated using plane equations. For large and complex missing regions, we collect and evaluate depth estimates from self-similar (training) datasets. We align the depth maps of the training set with the target (defective) depth map and evaluate the goodness of depth estimates among candidate values using 3D TV. We demonstrate the effectiveness of the proposed approaches on real as well as synthetic data. PMID:24323102

  1. Tensor representation techniques for full configuration interaction: A Fock space approach using the canonical product format.

    PubMed

    Böhm, Karl-Heinz; Auer, Alexander A; Espig, Mike

    2016-06-28

    In this proof-of-principle study, we apply tensor decomposition techniques to the Full Configuration Interaction (FCI) wavefunction in order to approximate the wavefunction parameters efficiently and to reduce the overall computational effort. For this purpose, the wavefunction ansatz is formulated in an occupation number vector representation that ensures antisymmetry. If the canonical product format tensor decomposition is then applied, the Hamiltonian and the wavefunction can be cast into a multilinear product form. As a consequence, the number of wavefunction parameters does not scale to the power of the number of particles (or orbitals) but depends on the rank of the approximation and linearly on the number of particles. The degree of approximation can be controlled by a single threshold for the rank reduction procedure required in the algorithm. We demonstrate that using this approximation, the FCI Hamiltonian matrix can be stored with N^5 scaling. The error of the approximation that is introduced is below one millihartree for a threshold of ϵ = 10^-4 and no convergence problems are observed solving the FCI equations iteratively in the new format. While promising conceptually, all effort of the algorithm is shifted to the required rank reduction procedure after the contraction of the Hamiltonian with the coefficient tensor. At the current state, this crucial step is the bottleneck of our approach and even for an optimistic estimate, the algorithm scales beyond N^10, so future work has to be directed towards reduction-free algorithms. PMID:27369492

  2. Tensor representation techniques for full configuration interaction: A Fock space approach using the canonical product format

    NASA Astrophysics Data System (ADS)

    Böhm, Karl-Heinz; Auer, Alexander A.; Espig, Mike

    2016-06-01

    In this proof-of-principle study, we apply tensor decomposition techniques to the Full Configuration Interaction (FCI) wavefunction in order to approximate the wavefunction parameters efficiently and to reduce the overall computational effort. For this purpose, the wavefunction ansatz is formulated in an occupation number vector representation that ensures antisymmetry. If the canonical product format tensor decomposition is then applied, the Hamiltonian and the wavefunction can be cast into a multilinear product form. As a consequence, the number of wavefunction parameters does not scale to the power of the number of particles (or orbitals) but depends on the rank of the approximation and linearly on the number of particles. The degree of approximation can be controlled by a single threshold for the rank reduction procedure required in the algorithm. We demonstrate that using this approximation, the FCI Hamiltonian matrix can be stored with N^5 scaling. The error of the approximation that is introduced is below one millihartree for a threshold of ɛ = 10^-4 and no convergence problems are observed solving the FCI equations iteratively in the new format. While promising conceptually, all effort of the algorithm is shifted to the required rank reduction procedure after the contraction of the Hamiltonian with the coefficient tensor. At the current state, this crucial step is the bottleneck of our approach and even for an optimistic estimate, the algorithm scales beyond N^10, so future work has to be directed towards reduction-free algorithms.
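    The canonical product (CP) format referred to in the abstracts above represents a tensor as a sum of rank-one terms, which is why the parameter count grows linearly with the number of modes. A minimal 3-way sketch (illustrative only, not the FCI algorithm itself):

```python
import numpy as np

def cp_reconstruct(A, B, C):
    """Rebuild a 3-way tensor from canonical-product (CP) factor matrices:
    T[i, j, k] = sum_r A[i, r] * B[j, r] * C[k, r]."""
    return np.einsum("ir,jr,kr->ijk", A, B, C)

# Rank-2 example: the CP format stores R*(I+J+K) numbers instead of I*J*K,
# so storage grows linearly, not exponentially, with the number of modes.
rng = np.random.default_rng(0)
A, B, C = (rng.standard_normal((n, 2)) for n in (4, 5, 6))
T = cp_reconstruct(A, B, C)
```

The rank R plays the role of the approximation rank controlled by the threshold in the rank reduction procedure.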

  3. Calculation and Analysis of magnetic gradient tensor components of global magnetic models

    NASA Astrophysics Data System (ADS)

    Schiffler, Markus; Queitsch, Matthias; Schneider, Michael; Stolz, Ronny; Krech, Wolfram; Meyer, Hans-Georg; Kukowski, Nina

    2014-05-01

    Magnetic mapping missions like SWARM and its predecessors, e.g. the CHAMP and MAGSAT programs, offer high-resolution data on the Earth's magnetic field. These datasets are usually combined with magnetic observatory and survey data and subjected to harmonic analysis. The derived spherical harmonic coefficients enable magnetic field modelling using a potential series expansion. Recently, new instruments like the JeSSY STAR Full Tensor Magnetic Gradiometry system, equipped with very highly sensitive sensors, can directly measure the magnetic gradient tensor components. Fully understanding the quality of the measured data requires extending magnetic field models to the gradient tensor components. In this study, we extend the potential series derivation of the magnetic field to the magnetic gradient tensor components and apply the new theoretical framework to the International Geomagnetic Reference Field (IGRF) and the High Definition Geomagnetic Model (HDGM). The gradient tensor component maps for the entire Earth's surface produced for the IGRF show low values and smooth variations reflecting the core and mantle contributions, whereas those for the HDGM give a novel tool to unravel crustal structure and deep-situated ore bodies. For example, the Thor Suture and the Sorgenfrei-Tornquist Zone in Europe are delineated by a strong northward gradient. Derived from eigenvalue decomposition of the magnetic gradient tensor, the scaled magnetic moment, the normalized source strength (NSS), and the bearing of the lithospheric sources are presented. The NSS serves as a tool for estimating the lithosphere-asthenosphere boundary as well as the depth of plutons and ore bodies. Furthermore, changes in magnetization direction parallel to the mid-ocean ridges can be obtained from the scaled magnetic moment, and the normalized source strength discriminates the boundaries between the anomalies of major continental provinces like southern Africa or the Eastern European
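    A minimal sketch of the eigenvalue decomposition step for a gradient tensor. The NSS formula used here, μ = sqrt(−λ2² − λ1·λ3) for a traceless tensor with eigenvalues ordered λ1 ≥ λ2 ≥ λ3, is an assumption based on common usage in the gradiometry literature, and the sample tensor values are hypothetical:

```python
import numpy as np

# Hypothetical symmetric, traceless magnetic gradient tensor (e.g. nT/m).
G = np.array([[ 2.0,  0.3,  0.1],
              [ 0.3, -1.0,  0.2],
              [ 0.1,  0.2, -1.0]])

# Eigenvalue decomposition; sort descending so lam[0] >= lam[1] >= lam[2].
lam = np.sort(np.linalg.eigvalsh(G))[::-1]

# Normalized source strength (NSS) -- assumed definition for a traceless
# gradient tensor: mu = sqrt(-lam2^2 - lam1*lam3).
nss = np.sqrt(-lam[1]**2 - lam[0] * lam[2])
```

The NSS is rotationally invariant, which is what makes it useful as a depth-estimation tool independent of source orientation.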

  4. Ultrasound elastic tensor imaging: comparison with MR diffusion tensor imaging in the myocardium

    NASA Astrophysics Data System (ADS)

    Lee, Wei-Ning; Larrat, Benoît; Pernot, Mathieu; Tanter, Mickaël

    2012-08-01

    We have previously proven the feasibility of ultrasound-based shear wave imaging (SWI) to non-invasively characterize myocardial fiber orientation in both in vitro porcine and in vivo ovine hearts. The SWI-estimated results were in good correlation with histology. In this study, we proposed a new and robust fiber angle estimation method through a tensor-based approach for SWI, coined together as elastic tensor imaging (ETI), and compared it with magnetic resonance diffusion tensor imaging (DTI), a current gold standard and extensively reported non-invasive imaging technique for mapping fiber architecture. Fresh porcine (n = 5) and ovine (n = 5) myocardial samples (20 × 20 × 30 mm3) were studied. ETI was first performed to generate shear waves and to acquire the wave events at an ultrafast frame rate (8000 fps). A 2.8 MHz phased array probe (pitch = 0.28 mm), connected to a prototype ultrasound scanner, was mounted on a customized MRI-compatible rotation device, which allowed both the rotation of the probe from -90° to 90° at 5° increments and co-registration between the two imaging modalities. The transmural shear wave speed along all realized propagation directions was first estimated. The fiber angles were determined from the shear wave speed map using the least-squares method and eigen decomposition. The test myocardial sample together with the rotation device was then placed inside a 7T MRI scanner. Diffusion was encoded in six directions. A total of 270 diffusion-weighted images (b = 1000 s mm⁻², FOV = 30 mm, matrix size = 60 × 64, TR = 6 s, TE = 19 ms, 24 averages) and 45 B0 images were acquired in 14 h 30 min. The fiber structure was analyzed by the fiber-tracking module in the software MedINRIA. The fiber orientation in the overlapped myocardial region which both ETI and DTI accessed was therefore compared, thanks to the co-registered imaging system. Results from all ten samples showed good correlation (r2 = 0.81, p < 0.0001) and good agreement (3.05° bias

  5. Hydrazine decomposition and other reactions

    NASA Technical Reports Server (NTRS)

    Armstrong, Warren E. (Inventor); La France, Donald S. (Inventor); Voge, Hervey H. (Inventor)

    1978-01-01

    This invention relates to the catalytic decomposition of hydrazine, catalysts useful for this decomposition and other reactions, and to reactions in hydrogen atmospheres generally using carbon-containing catalysts.

  6. Spatial Mapping of Translational Diffusion Coefficients Using Diffusion Tensor Imaging: A Mathematical Description

    PubMed Central

    SHETTY, ANIL N.; CHIANG, SHARON; MALETIC-SAVATIC, MIRJANA; KASPRIAN, GREGOR; VANNUCCI, MARINA; LEE, WESLEY

    2016-01-01

    In this article, we discuss the theoretical background for diffusion weighted imaging and diffusion tensor imaging. Molecular diffusion is a random process involving thermal Brownian motion. In biological tissues, the underlying microstructures restrict the diffusion of water molecules, making diffusion directionally dependent. Water diffusion in tissue is mathematically characterized by the diffusion tensor, whose elements contain information about the magnitude and direction of diffusion and which is a function of the coordinate system. Thus, it is possible to generate contrast in tissue based primarily on diffusion effects. Expressing diffusion in terms of the measured diffusion coefficient (eigenvalue) in any one direction can lead to errors. Nowhere is this more evident than in white matter, due to the preferential orientation of myelin fibers. The directional dependency is removed by diagonalization of the diffusion tensor, which then yields a set of three eigenvalues and eigenvectors, representing the magnitude and direction of the three orthogonal axes of the diffusion ellipsoid, respectively. For example, the eigenvalue corresponding to the eigenvector along the long axis of the fiber corresponds qualitatively to diffusion with least restriction. Determination of the principal values of the diffusion tensor and various anisotropic indices provides structural information. We review the use of diffusion measurements using the modified Stejskal–Tanner diffusion equation. The anisotropy is analyzed by decomposing the diffusion tensor based on symmetrical properties describing the geometry of the diffusion tensor. We further describe diffusion tensor properties in visualizing fiber tract organization of the human brain.
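    The diagonalization and anisotropy analysis described above can be sketched as follows; the sample tensor values are hypothetical, and fractional anisotropy (FA) is used as the anisotropic index:

```python
import numpy as np

# Hypothetical diffusion tensor (units of 1e-3 mm^2/s), symmetric positive definite.
D = np.array([[1.7, 0.0, 0.0],
              [0.0, 0.3, 0.1],
              [0.0, 0.1, 0.3]])

# Diagonalize: eigenvalues are diffusivities along the ellipsoid's principal axes,
# eigenvectors give the directions of those axes.
evals, evecs = np.linalg.eigh(D)
l1, l2, l3 = np.sort(evals)[::-1]

# Mean diffusivity and fractional anisotropy (FA), a standard anisotropic index:
# FA = sqrt(3/2 * sum((l_i - MD)^2) / sum(l_i^2)), ranging from 0 (isotropic) to 1.
md = (l1 + l2 + l3) / 3.0
fa = np.sqrt(1.5 * ((l1 - md)**2 + (l2 - md)**2 + (l3 - md)**2)
             / (l1**2 + l2**2 + l3**2))
```

For this sample tensor the largest eigenvalue dominates, giving a high FA, as expected for a voxel with strongly oriented fibers.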

  7. Thermal Decomposition Characteristics of Orthorhombic Ammonium Perchlorate (o-AP)

    SciTech Connect

    Behrens, R.; Minier, L.

    1999-03-01

Preliminary STMBMS and SEM results of the thermal decomposition of AP in the orthorhombic phase are presented. The overall decomposition is shown to be complex and controlled by both physical and chemical processes. The data show that the physical and chemical processes can be probed and characterized utilizing SEM and STMBMS. The overall decomposition is characterized by three distinguishing features: an induction period, an acceleratory period, and a deceleratory period. The major decomposition event occurs in the subsurface of the AP particles and propagates towards the center of the particle with time. The amount of total decomposition is dependent upon particle size and increases from 23% for ~50 µm-diameter AP to 33% for ~200 µm-diameter AP. A conceptual model of the physical processes is presented. Insight into the chemical processes is provided by the gas formation rates that are measured for the gaseous products. To our knowledge, this is the first presentation of data showing that the chemical and physical decomposition processes can be distinguished from one another, probed, and characterized at the level that is required to better understand the thermal decomposition behavior of AP. Future work is planned with the goal of obtaining data that can be used to develop a mathematical description for the thermal decomposition of o-AP.

  8. Low uncertainty method for inertia tensor identification

    NASA Astrophysics Data System (ADS)

    Barreto, J. P.; Muñoz, L. E.

    2016-02-01

The uncertainty associated with the experimental identification of the inertia tensor can be reduced by implementing adequate rotational and translational motions in the experiment. This paper proposes a particular 3D trajectory that improves the experimental measurement of the inertia tensor of rigid bodies. Such a trajectory corresponds to a motion in which the object is rotated around a large number of instantaneous axes, while the center of gravity remains static. The uncertainty in the inertia tensor components obtained with this practice is reduced by 45% on average, compared with those calculated using simple rotations around three perpendicular axes (roll, pitch, yaw).
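For a rigid body rotating about its center of mass, the identification itself reduces to a linear least-squares problem via Euler's equation τ = Iω̇ + ω × (Iω), which is linear in the six independent tensor components. The sketch below (not the authors' code; the trajectory and numbers are invented) recovers those components from simulated angular velocity, angular acceleration, and torque samples.

```python
import numpy as np

def regressor(w, wdot):
    """3x6 matrix R with tau = R @ theta, theta = (Ixx, Iyy, Izz, Ixy, Ixz, Iyz)."""
    def A(v):
        x, y, z = v
        return np.array([[x, 0, 0, y, z, 0],
                         [0, y, 0, x, 0, z],
                         [0, 0, z, 0, x, y]])
    wx = np.array([[0.0, -w[2], w[1]],        # cross-product matrix [w]x
                   [w[2], 0.0, -w[0]],
                   [-w[1], w[0], 0.0]])
    return A(wdot) + wx @ A(w)                # tau = I wdot + w x (I w)

# Simulate torques from a known inertia tensor along a "rich" set of
# rotational states, then recover the tensor by stacked least squares.
rng = np.random.default_rng(0)
I_true = np.array([[2.0, 0.1, 0.0],
                   [0.1, 3.0, 0.2],
                   [0.0, 0.2, 4.0]])
theta_true = np.array([2.0, 3.0, 4.0, 0.1, 0.0, 0.2])
R_rows, taus = [], []
for _ in range(50):
    w, wdot = rng.normal(size=3), rng.normal(size=3)
    R_rows.append(regressor(w, wdot))
    taus.append(I_true @ wdot + np.cross(w, I_true @ w))
theta_hat = np.linalg.lstsq(np.vstack(R_rows), np.concatenate(taus), rcond=None)[0]
```

Exciting many instantaneous rotation axes, as the proposed trajectory does, keeps the stacked regressor well conditioned, which is what drives the uncertainty down.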

  9. Tensor methods for large, sparse unconstrained optimization

    SciTech Connect

    Bouaricha, A.

    1996-11-01

Tensor methods for unconstrained optimization were first introduced by Schnabel and Chow [SIAM J. Optimization, 1 (1991), pp. 293-315], who describe these methods for small to moderate size problems. This paper extends these methods to large, sparse unconstrained optimization problems. This requires an entirely new way of solving the tensor model that makes the methods suitable for solving large, sparse optimization problems efficiently. We present test results for sets of problems where the Hessian at the minimizer is nonsingular and where it is singular. These results show that tensor methods are significantly more efficient and more reliable than standard methods based on Newton's method.

  10. Incremental Discriminant Analysis in Tensor Space

    PubMed Central

    Chang, Liu; Weidong, Zhao; Tao, Yan; Qiang, Pu; Xiaodan, Du

    2015-01-01

To study incremental machine learning in tensor space, this paper proposes incremental tensor discriminant analysis. The algorithm employs a tensor representation to carry out discriminant analysis and combines it with incremental learning to alleviate the computational cost. This paper proves that the algorithm can be unified into the graph framework theoretically and analyzes the time and space complexity in detail. Experiments on facial image detection show that the algorithm not only achieves sound performance compared with other algorithms, but also markedly reduces the computational cost. PMID:26339229

  11. Killing tensors, warped products and the orthogonal separation of the Hamilton-Jacobi equation

    SciTech Connect

    Rajaratnam, Krishan McLenaghan, Raymond G.

    2014-01-15

    We study Killing tensors in the context of warped products and apply the results to the problem of orthogonal separation of the Hamilton-Jacobi equation. This work is motivated primarily by the case of spaces of constant curvature where warped products are abundant. We first characterize Killing tensors which have a natural algebraic decomposition in warped products. We then apply this result to show how one can obtain the Killing-Stäckel space (KS-space) for separable coordinate systems decomposable in warped products. This result in combination with Benenti's theory for constructing the KS-space of certain special separable coordinates can be used to obtain the KS-space for all orthogonal separable coordinates found by Kalnins and Miller in Riemannian spaces of constant curvature. Next we characterize when a natural Hamiltonian is separable in coordinates decomposable in a warped product by showing that the conditions originally given by Benenti can be reduced. Finally, we use this characterization and concircular tensors (a special type of torsionless conformal Killing tensor) to develop a general algorithm to determine when a natural Hamiltonian is separable in a special class of separable coordinates which include all orthogonal separable coordinates in spaces of constant curvature.

  12. Geoelectrical dimensionality analyses in Sumatran Fault (Aceh segment) using magnetotelluric phase tensor

    NASA Astrophysics Data System (ADS)

    Prihantoro, Rudy; Nurhasan, Sutarno, Doddy; Ogawa, Yasuo; Priahadena, Has; Fitriani, Dini

    2014-03-01

Geoelectrical conductivity may vary in any direction in a complex earth model. When conductivity varies in only one direction, such as depth, the model is considered a one-dimensional (1-D) structure; two-dimensional (2-D) and three-dimensional (3-D) structures have more degrees of conductivity variation. In magnetotelluric (MT) surveys, localized heterogeneities in conductivity near the Earth's surface distort the electromagnetic (EM) response produced by the underlying or 'regional' conductivity structure under investigation. Several attempts have been made to remove this distortion effect from measured MT transfer functions (impedance tensors) using a series of techniques and general conductivity models of increasing complexity. The most common techniques are Bahr's method and the Groom-Bailey decomposition, which are restricted by the assumption of a two-dimensional (2-D) regional conductivity structure. The MT phase tensor technique proposed by Caldwell et al. (2004) requires no assumption about the dimensionality of the regional conductivity structure and is applicable where both the heterogeneity and the regional conductivity structure are 3-D. Here, we apply dimensionality analysis using the MT phase tensor to data from the Sumatran Fault (SF) Aceh segment collected during July 2012. A small value of the phase tensor dimensionality indicator (β) was found along the profile, indicating a strongly two-dimensional regional conductivity structure for the SF Aceh segment.
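For reference, the phase tensor of Caldwell et al. (2004) is Φ = X⁻¹Y for an impedance tensor Z = X + iY, with skew angle β = ½·arctan[(Φ₁₂ − Φ₂₁)/(Φ₁₁ + Φ₂₂)]; β near zero is consistent with a (quasi-)2-D regional structure. A minimal sketch, using a made-up ideal 2-D impedance rather than the survey data:

```python
import numpy as np

def phase_tensor_beta(Z):
    """Phase tensor (Caldwell et al., 2004) and its skew angle beta (degrees)
    from a 2x2 complex impedance tensor Z = X + iY."""
    X, Y = Z.real, Z.imag
    Phi = np.linalg.inv(X) @ Y          # insensitive to galvanic distortion
    beta = 0.5 * np.degrees(np.arctan2(Phi[0, 1] - Phi[1, 0],
                                       Phi[0, 0] + Phi[1, 1]))
    return Phi, beta

# Ideal 2-D impedance (off-diagonal only): the skew angle vanishes.
Z2d = np.array([[0.0, 10.0 + 5.0j],
                [-8.0 - 4.0j, 0.0]])
Phi, beta = phase_tensor_beta(Z2d)
```

For this ideal case Φ comes out proportional to the identity and β = 0, matching the small-β signature reported along the profile.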

  13. Tensor-based classification of an auditory mobile BCI without a subject-specific calibration phase

    NASA Astrophysics Data System (ADS)

    Zink, Rob; Hunyadi, Borbála; Van Huffel, Sabine; De Vos, Maarten

    2016-04-01

    Objective. One of the major drawbacks in EEG brain-computer interfaces (BCI) is the need for subject-specific training of the classifier. By removing the need for a supervised calibration phase, new users could potentially explore a BCI faster. In this work we aim to remove this subject-specific calibration phase and allow direct classification. Approach. We explore canonical polyadic decompositions and block term decompositions of the EEG. These methods exploit structure in higher dimensional data arrays called tensors. The BCI tensors are constructed by concatenating ERP templates from other subjects to a target and non-target trial and the inherent structure guides a decomposition that allows accurate classification. We illustrate the new method on data from a three-class auditory oddball paradigm. Main results. The presented approach leads to a fast and intuitive classification with accuracies competitive with a supervised and cross-validated LDA approach. Significance. The described methods are a promising new way of classifying BCI data with a forthright link to the original P300 ERP signal over the conventional and widely used supervised approaches.
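A canonical polyadic decomposition of the kind exploited above can be sketched with plain NumPy alternating least squares. This is a generic illustration on random low-rank data (no ERP templates), not the authors' classification pipeline.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding of a 3-way tensor (row-major ordering)."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(A, B):
    """Column-wise Kronecker (Khatri-Rao) product of two factor matrices."""
    return (A[:, None, :] * B[None, :, :]).reshape(-1, A.shape[1])

def cp_als(T, rank, iters=500, seed=0):
    """Minimal rank-R canonical polyadic decomposition of a 3-way tensor by
    alternating least squares (illustrative; no convergence checks)."""
    rng = np.random.default_rng(seed)
    F = [rng.standard_normal((s, rank)) for s in T.shape]
    for _ in range(iters):
        for n in range(3):
            others = [F[m] for m in range(3) if m != n]
            KR = khatri_rao(others[0], others[1])   # matches unfolding order
            F[n] = np.linalg.lstsq(KR, unfold(T, n).T, rcond=None)[0].T
    return F

# Recover an exact rank-2 tensor built from known factors.
rng = np.random.default_rng(1)
A, B, C = (rng.standard_normal((s, 2)) for s in (4, 5, 6))
T = np.einsum('ir,jr,kr->ijk', A, B, C)
Fa, Fb, Fc = cp_als(T, rank=2)
T_hat = np.einsum('ir,jr,kr->ijk', Fa, Fb, Fc)
```

In the BCI setting the decomposed structure (rather than a trained classifier) is what separates target from non-target trials, which is why no subject-specific calibration is needed.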

  14. Characterizing dielectric tensors of anisotropic materials from a single measurement

    NASA Astrophysics Data System (ADS)

    Smith, Paula Kay

Ellipsometry techniques look at changes in polarization states to measure optical properties of thin film materials. A beam reflected from a substrate measures the real and imaginary parts of the index of the material, represented as n and k, respectively. Measuring the substrate at several angles gives additional information that can be used to measure multilayer thin film stacks. However, the outstanding problem in standard ellipsometry is that it uses a limited number of incident polarization states (s and p), which limits the technique to isotropic materials. The technique discussed in this paper extends the standard process to measure anisotropic materials by using a larger set of incident polarization states. By using a polarimeter to generate several incident polarization states and measure the polarization properties of the sample, ellipsometry can be performed on biaxial materials. Use of an optimization algorithm in conjunction with biaxial ellipsometry can more accurately determine the dielectric tensor of individual layers in multilayer structures. Biaxial ellipsometry is a technique that measures the dielectric tensors of a biaxial substrate, single-layer thin film, or multilayer structure. The dielectric tensor of a biaxial material consists of the real and imaginary parts of the three orthogonal principal indices (nx + ikx, ny + iky and nz + ikz) as well as three Euler angles (alpha, beta and gamma) to describe its orientation. The method utilized in this work measures an angle-of-incidence Mueller matrix from a Mueller matrix imaging polarimeter equipped with a pair of microscope objectives that have low polarization properties. To accurately determine the dielectric tensors for multilayer samples, the angle-of-incidence Mueller matrix images are collected for multiple wavelengths. This is done in either a transmission mode or a reflection mode, each incorporating an appropriate dispersion model.
Given approximate a priori knowledge of the dielectric

  15. Regularized Positive-Definite Fourth Order Tensor Field Estimation from DW-MRI★

    PubMed Central

    Barmpoutis, Angelos; Vemuri, Baba C.; Howland, Dena; Forder, John R.

    2009-01-01

    In Diffusion Weighted Magnetic Resonance Image (DW-MRI) processing, a 2nd order tensor has been commonly used to approximate the diffusivity function at each lattice point of the DW-MRI data. From this tensor approximation, one can compute useful scalar quantities (e.g. anisotropy, mean diffusivity) which have been clinically used for monitoring encephalopathy, sclerosis, ischemia and other brain disorders. It is now well known that this 2nd-order tensor approximation fails to capture complex local tissue structures, e.g. crossing fibers, and as a result, the scalar quantities derived from these tensors are grossly inaccurate at such locations. In this paper we employ a 4th order symmetric positive-definite (SPD) tensor approximation to represent the diffusivity function and present a novel technique to estimate these tensors from the DW-MRI data guaranteeing the SPD property. Several articles have been reported in literature on higher order tensor approximations of the diffusivity function but none of them guarantee the positivity of the estimates, which is a fundamental constraint since negative values of the diffusivity are not meaningful. In this paper we represent the 4th-order tensors as ternary quartics and then apply Hilbert’s theorem on ternary quartics along with the Iwasawa parametrization to guarantee an SPD 4th-order tensor approximation from the DW-MRI data. The performance of this model is depicted on synthetic data as well as real DW-MRIs from a set of excised control and injured rat spinal cords, showing accurate estimation of scalar quantities such as generalized anisotropy and trace as well as fiber orientations. PMID:19063978
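The positivity guarantee can be illustrated generically: any quartic written as v(g)ᵀGv(g) with a Gram matrix G = LLᵀ is a sum of squares of quadratics (the form Hilbert's theorem provides for nonnegative ternary quartics) and is therefore nonnegative in every direction. The toy parameterization below is an assumption for illustration, not the Iwasawa parameterization used in the paper.

```python
import numpy as np

def quartic_diffusivity(L, g):
    """Evaluate d(g) = v(g)^T (L L^T) v(g) for a direction g, where v(g) is
    the degree-2 monomial basis; nonnegative for any real 6x6 factor L."""
    x, y, z = g
    v = np.array([x * x, y * y, z * z, x * y, x * z, y * z])
    w = L.T @ v                    # d(g) = ||L^T v||^2 >= 0, a sum of squares
    return w @ w

rng = np.random.default_rng(0)
L = rng.standard_normal((6, 6))    # any factor L yields a PSD Gram matrix
dirs = rng.standard_normal((1000, 3))
vals = np.array([quartic_diffusivity(L, g) for g in dirs])
```

Fitting L (rather than the raw quartic coefficients) to DW-MRI data is what builds the positivity constraint into the estimation instead of checking it afterwards.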

  16. Catalytic Decomposition of PH3 on Heated Tungsten Wire Surfaces

    NASA Astrophysics Data System (ADS)

    Umemoto, Hironobu; Nishihara, Yushin; Ishikawa, Takuma; Yamamoto, Shingo

    2012-08-01

    The catalytic decomposition processes of PH3 on heated tungsten surfaces were studied to clarify the mechanisms governing phosphorus doping into silicon substrates. Mass spectrometric measurements show that PH3 can be decomposed by more than 50% over 2000 K. H, P, PH, and PH2 radicals were identified by laser spectroscopic techniques. Absolute density measurements of these radical species, as well as their PH3 flow rate dependence, show that the major products on the catalyst surfaces are P and H atoms, while PH and PH2 are produced in secondary processes in the gas phase. In other words, catalytic decomposition, unlike plasma decomposition processes, can be a clean source of P atoms, which can be the only major dopant precursors. In the presence of an excess amount of H2, the apparent decomposition efficiency is small. This can be explained by rapid cyclic reactions including decomposition, deposition, and etching to reproduce PH3.

  17. Interpretation of the Weyl tensor

    NASA Astrophysics Data System (ADS)

    Hofmann, Stefan; Niedermann, Florian; Schneider, Robert

    2013-09-01

    According to folklore in general relativity, the Weyl tensor can be decomposed into parts corresponding to Newton-like, incoming and outgoing wavelike field components. It is shown here that this one-to-one correspondence does not hold for space-time geometries with cylindrical isometries. This is done by investigating some well-known exact solutions of Einstein’s field equations with whole-cylindrical symmetry, for which the physical interpretation is very clear, but for which the standard Weyl interpretation would give contradictory results. For planar or spherical geometries, however, the standard interpretation works for both static and dynamical space-times. It is argued that one reason for the failure in the cylindrical case is that for waves spreading in two spatial dimensions there is no local criterion to distinguish incoming and outgoing waves already at the linear level. It turns out that Thorne’s local energy notion, subject to certain qualifications, provides an efficient diagnostic tool to extract the proper physical interpretation of the space-time geometry in the case of cylindrical configurations.

  18. Diffusion Tensor Imaging of Pedophilia.

    PubMed

    Cantor, James M; Lafaille, Sophie; Soh, Debra W; Moayedi, Massieh; Mikulis, David J; Girard, Todd A

    2015-11-01

Pedophilia is a principal motivator of child molestation, incurring great emotional and financial burdens on victims and society. Even among pedophiles who never commit any offense, the condition requires lifelong suppression and control. A previous comparison using voxel-based morphometry (VBM) of MR images from a large sample of pedophiles and controls revealed group differences in white matter. The present study therefore sought to verify and characterize white matter involvement using diffusion tensor imaging (DTI), which better captures the microstructure of white matter than does VBM. Pedophilic ex-offenders (n=24) were compared with healthy, age-matched controls with no criminal record and no indication of pedophilia (n=32). White matter microstructure was analyzed with Tract-Based Spatial Statistics, and the trajectories of implicated fiber bundles were identified by probabilistic tractography. Groups showed significant, highly focused differences in DTI parameters which related to participants' genital responses to sexual depictions of children, but not to measures of psychopathy or to childhood histories of physical abuse, sexual abuse, or neglect. Some previously reported gray matter differences were suggested under highly liberal statistical conditions (p(uncorrected)<.005), but did not survive ordinary statistical correction (whole brain per voxel false discovery rate of 5%). These results confirm that pedophilia is characterized by neuroanatomical differences in white matter microstructure, over and above any neural characteristics attributable to psychopathy and childhood adversity, which show neuroanatomic footprints of their own. Although some gray matter structures were implicated previously, only a few have emerged reliably. PMID:26494360

  19. Potentials for transverse trace-free tensors

    NASA Astrophysics Data System (ADS)

    Conboye, Rory; Murchadha, Niall Ó.

    2014-04-01

    In constructing and understanding initial conditions in the 3 + 1 formalism for numerical relativity, the transverse and trace-free (TT) part of the extrinsic curvature plays a key role. We know that TT tensors possess two degrees of freedom per space point. However, finding an expression for a TT tensor depending on only two scalar functions is a non-trivial task. Assuming either axial or translational symmetry, expressions depending on two scalar potentials alone are derived here for all TT tensors in flat 3-space. In a more general spatial slice, only one of these potentials is found, the same potential given in (Baker and Puzio 1999 Phys. Rev. D 59 044030) and (Dain 2001 Phys. Rev. D 64 124002), with the remaining equations reduced to a partial differential equation, depending on boundary conditions for a solution. As an exercise, we also derive the potentials which give the Bowen-York curvature tensor in flat space.

  20. Shifted power method for computing tensor eigenvalues.

    SciTech Connect

    Mayo, Jackson R.; Kolda, Tamara Gibson

    2010-07-01

Recent work on eigenvalues and eigenvectors for tensors of order m >= 3 has been motivated by applications in blind source separation, magnetic resonance imaging, molecular conformation, and more. In this paper, we consider methods for computing real symmetric-tensor eigenpairs of the form Ax^(m-1) = λx subject to ||x|| = 1, which is closely related to optimal rank-1 approximation of a symmetric tensor. Our contribution is a shifted symmetric higher-order power method (SS-HOPM), which we show is guaranteed to converge to a tensor eigenpair. SS-HOPM can be viewed as a generalization of the power iteration method for matrices or of the symmetric higher-order power method. Additionally, using fixed point analysis, we can characterize exactly which eigenpairs can and cannot be found by the method. Numerical examples are presented, including examples from an extension of the method to finding complex eigenpairs.
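A bare-bones version of the SS-HOPM iteration for an order-3 symmetric tensor is x ← normalize(Ax² + αx), with λ read off as the Rayleigh quotient Ax³. The sketch below uses the conservative shift guess α = 2‖A‖_F, an assumption for illustration rather than the adaptive shift analysis from the paper.

```python
import numpy as np

def ss_hopm(A, alpha, iters=10000, seed=0):
    """Shifted symmetric higher-order power method (sketch) for a symmetric
    order-3 tensor A; fixed points of the iteration are tensor eigenpairs."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(A.shape[0])
    x /= np.linalg.norm(x)
    for _ in range(iters):
        step = np.einsum('ijk,j,k->i', A, x, x) + alpha * x   # A x^2 + alpha x
        x = step / np.linalg.norm(step)
    lam = np.einsum('ijk,i,j,k->', A, x, x, x)                # Rayleigh quotient
    return lam, x

# Symmetrize a random 4x4x4 tensor, then find one real eigenpair.
rng = np.random.default_rng(3)
G = rng.standard_normal((4, 4, 4))
perms = [(0, 1, 2), (0, 2, 1), (1, 0, 2), (1, 2, 0), (2, 0, 1), (2, 1, 0)]
A = sum(np.transpose(G, p) for p in perms) / 6.0
lam, x = ss_hopm(A, alpha=2.0 * np.sqrt((A ** 2).sum()))
```

At convergence the residual ‖Ax² − λx‖ is near zero; a large positive shift makes the iteration monotone at the price of more iterations.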

  1. The Weyl tensor correlator in cosmological spacetimes

    SciTech Connect

    Fröb, Markus B.

    2014-12-01

    We give a general expression for the Weyl tensor two-point function in a general Friedmann-Lemaître-Robertson-Walker spacetime. We work in reduced phase space for the perturbations, i.e., quantize only the dynamical degrees of freedom without adding any gauge-fixing term. The general formula is illustrated by a calculation in slow-roll single-field inflation to first order in the slow-roll parameters ε and δ, and the result is shown to have the correct de Sitter limit as ε, δ → 0. Furthermore, it is seen that the Weyl tensor correlation function in slow-roll does not suffer from infrared divergences, unlike the two-point functions of the metric and scalar field perturbations. Lastly, we show how to recover the usual tensor power spectrum from the Weyl tensor correlation function.

  3. Kinetic-energy-momentum tensor in electrodynamics

    NASA Astrophysics Data System (ADS)

    Sheppard, Cheyenne J.; Kemp, Brandon A.

    2016-01-01

    We show that the Einstein-Laub formulation of electrodynamics is invalid since it yields a stress-energy-momentum (SEM) tensor that is not frame invariant. Two leading hypotheses for the kinetic formulation of electrodynamics (Chu and Einstein-Laub) are studied by use of the relativistic principle of virtual power, mathematical modeling, Lagrangian methods, and SEM transformations. The relativistic principle of virtual power is used to demonstrate the field dynamics associated with energy relations within a relativistic framework. Lorentz transformations of the respective SEM tensors demonstrate the relativistic frameworks for each studied formulation. Mathematical modeling of stationary and moving media is used to illustrate the differences and discrepancies of specific proposed kinetic formulations, where energy relations and conservation theorems are employed. Lagrangian methods are utilized to derive the field kinetic Maxwell's equations, which are studied with respect to SEM tensor transforms. Within each analysis, the Einstein-Laub formulation violates special relativity, which invalidates the Einstein-Laub SEM tensor.

  4. Shifted power method for computing tensor eigenpairs.

    SciTech Connect

    Mayo, Jackson R.; Kolda, Tamara Gibson

    2010-10-01

Recent work on eigenvalues and eigenvectors for tensors of order m >= 3 has been motivated by applications in blind source separation, magnetic resonance imaging, molecular conformation, and more. In this paper, we consider methods for computing real symmetric-tensor eigenpairs of the form Ax^(m-1) = λx subject to ||x|| = 1, which is closely related to optimal rank-1 approximation of a symmetric tensor. Our contribution is a novel shifted symmetric higher-order power method (SS-HOPM), which we show is guaranteed to converge to a tensor eigenpair. SS-HOPM can be viewed as a generalization of the power iteration method for matrices or of the symmetric higher-order power method. Additionally, using fixed point analysis, we can characterize exactly which eigenpairs can and cannot be found by the method. Numerical examples are presented, including examples from an extension of the method to finding complex eigenpairs.

  5. Quantum integrability of quadratic Killing tensors

    SciTech Connect

    Duval, C.; Valent, G.

    2005-05-01

    Quantum integrability of classical integrable systems given by quadratic Killing tensors on curved configuration spaces is investigated. It is proven that, using a 'minimal' quantization scheme, quantum integrability is ensured for a large class of classic examples.

  6. Application of modern tensor calculus to engineered domain structures. 1. Calculation of tensorial covariants.

    PubMed

    Kopský, Vojtech

    2006-03-01

    This article is a roadmap to a systematic calculation and tabulation of tensorial covariants for the point groups of material physics. The following are the essential steps in the described approach to tensor calculus. (i) An exact specification of the considered point groups by their embellished Hermann-Mauguin and Schoenflies symbols. (ii) Introduction of oriented Laue classes of magnetic point groups. (iii) An exact specification of matrix ireps (irreducible representations). (iv) Introduction of so-called typical (standard) bases and variables -- typical invariants, relative invariants or components of the typical covariants. (v) Introduction of Clebsch-Gordan products of the typical variables. (vi) Calculation of tensorial covariants of ascending ranks with consecutive use of tables of Clebsch-Gordan products. (vii) Opechowski's magic relations between tensorial decompositions. These steps are illustrated for groups of the tetragonal oriented Laue class D(4z) -- 4(z)2(x)2(xy) of magnetic point groups and for tensors up to fourth rank. PMID:16489242

  7. Adaptive registration of diffusion tensor images on lie groups

    NASA Astrophysics Data System (ADS)

    Liu, Wei; Chen, LeiTing; Cai, HongBin; Qiu, Hang; Fei, Nanxi

    2016-08-01

    With diffusion tensor imaging (DTI), more exquisite information on tissue microstructure is provided for medical image processing. In this paper, we present a locally adaptive topology preserving method for DTI registration on Lie groups. The method aims to obtain more plausible diffeomorphisms for spatial transformations via accurate approximation for the local tangent space on the Lie group manifold. In order to capture an exact geometric structure of the Lie group, the local linear approximation is efficiently optimized by using the adaptive selection of the local neighborhood sizes on the given set of data points. Furthermore, numerical comparative experiments are conducted on both synthetic data and real DTI data to demonstrate that the proposed method yields a higher degree of topology preservation on a dense deformation tensor field while improving the registration accuracy.

  9. Assessment of bias for MRI diffusion tensor imaging using SIMEX.

    PubMed

    Lauzon, Carolyn B; Asman, Andrew J; Crainiceanu, Ciprian; Caffo, Brian C; Landman, Bennett A

    2011-01-01

Diffusion Tensor Imaging (DTI) is a Magnetic Resonance Imaging method for measuring water diffusion in vivo. One powerful DTI contrast is fractional anisotropy (FA). FA reflects the strength of water's diffusion directional preference and is a primary metric for neuronal fiber tracking. As with other DTI contrasts, FA measurements are obscured by the well established presence of bias. DTI bias has been challenging to assess because it is a multivariable problem including SNR, six tensor parameters, and the DTI collection and processing method used. SIMEX is a modern statistical technique that estimates bias by tracking measurement error as a function of added noise. Here, we use SIMEX to assess bias in FA measurements and show the method provides: (i) accurate FA bias estimates; (ii) a representation of FA bias that is data-set specific and accessible to non-statisticians; and (iii) a first-time possibility for incorporating bias into DTI data analysis. PMID:21995019
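The SIMEX idea is easy to demonstrate outside of DTI; the textbook example is slope attenuation in regression with a noisy covariate. The sketch below (synthetic numbers and a quadratic extrapolant, not the authors' implementation) re-estimates the slope under extra noise at several levels ζ and extrapolates back to ζ = −1:

```python
import numpy as np

def simex_slope(x_obs, y, sigma_u, zetas=(0.5, 1.0, 1.5, 2.0),
                n_sim=200, seed=0):
    """SIMEX sketch: re-estimate a regression slope under increasing amounts
    of added measurement noise, fit a quadratic in the noise level zeta,
    and extrapolate to zeta = -1 (the no-noise limit)."""
    rng = np.random.default_rng(seed)
    def slope(w):
        return np.polyfit(w, y, 1)[0]
    levels = [0.0] + list(zetas)
    est = []
    for z in levels:
        if z == 0.0:
            est.append(slope(x_obs))          # the naive (biased) estimate
        else:
            sims = [slope(x_obs + rng.normal(0.0, np.sqrt(z) * sigma_u,
                                             size=x_obs.shape))
                    for _ in range(n_sim)]
            est.append(np.mean(sims))
    coefs = np.polyfit(levels, est, 2)        # quadratic extrapolant
    return np.polyval(coefs, -1.0)            # bias-corrected estimate

# True slope 1.0; observing the covariate with noise attenuates the slope.
rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, 4000)
y = 1.0 * x + rng.normal(0.0, 0.2, 4000)
w = x + rng.normal(0.0, 0.6, 4000)            # measurement noise, sigma_u = 0.6
naive = np.polyfit(w, y, 1)[0]                # attenuated toward zero
corrected = simex_slope(w, y, sigma_u=0.6)    # extrapolated back toward 1.0
```

The quadratic extrapolant removes most but not all of the attenuation, which mirrors the paper's point that SIMEX yields accurate, data-set-specific bias estimates rather than a perfect correction.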

  10. The energy-momentum tensor(s) in classical gauge theories

    DOE PAGES Beta

    Gieres, Francois; Blaschke, Daniel N.; Reboud, Meril; Schweda, Manfred

    2016-07-12

    We give an introduction to, and review of, the energy–momentum tensors in classical gauge field theories in Minkowski space, and to some extent also in curved space–time. For the canonical energy–momentum tensor of non-Abelian gauge fields and of matter fields coupled to such fields, we present a new and simple improvement procedure based on gauge invariance for constructing a gauge invariant, symmetric energy–momentum tensor. Here, the relationship with the Einstein–Hilbert tensor following from the coupling to a gravitational field is also discussed.

  11. Novel Physics with Tensor Polarized Deuteron Targets

    SciTech Connect

    Slifer, Karl J.; Long, Elena A.

    2013-09-01

Development of solid spin-1 polarized targets will open the study of tensor structure functions to precise measurement, and holds the promise to enable a new generation of polarized scattering experiments. In this talk we will discuss a measurement of the leading twist tensor structure function b1, along with prospects for future experiments with a solid tensor polarized target. The recently approved JLab experiment E12-13-011 will measure the leading twist tensor structure function b1, which provides a unique tool to study partonic effects, while also being sensitive to coherent nuclear properties in the simplest nuclear system. At low x, shadowing effects are expected to dominate b1, while at larger values, b1 provides a clean probe of exotic QCD effects, such as hidden color due to a 6-quark configuration. Since the deuteron wave function is relatively well known, any non-standard effects are expected to be readily observable. All available models predict a small or vanishing value of b1 at moderate x. However, the first pioneering measurement of b1 at HERMES revealed a crossover to an anomalously large negative value in the region 0.2 < x < 0.5, albeit with relatively large experimental uncertainty. E12-13-011 will perform an inclusive measurement of the deuteron tensor asymmetry in the region 0.16 < x < 0.49, for 0.8 < Q2 < 5.0 GeV2. The UVa solid polarized ND3 target will be used, along with the Hall C spectrometers, and an unpolarized 115 nA beam. This measurement will provide access to the tensor quark polarization, and allow a test of the Close-Kumano sum rule, which vanishes in the absence of tensor polarization in the quark sea. Until now, tensor structure has been largely unexplored, so the study of these quantities holds the potential of initiating a new field of spin physics at Jefferson Lab.

  12. Temperature-polarization correlations from tensor fluctuations

    SciTech Connect

    Crittenden, R.G.; Coulson, D.; Turok, N.G. |

    1995-11-15

    We study the polarization-temperature correlations on the cosmic microwave sky resulting from an initial scale-invariant spectrum of tensor (gravity wave) fluctuations, such as those which might arise during inflation. The correlation function has the opposite sign to that for scalar fluctuations on large scales, raising the possibility of a direct determination of whether the microwave anisotropies have a significant tensor component. We briefly discuss the important problem of estimating the expected foreground contamination.

  13. Morphological Decomposition in Reading Hebrew Homographs

    ERIC Educational Resources Information Center

    Miller, Paul; Liran-Hazan, Batel; Vaknin, Vered

    2016-01-01

    The present work investigates whether and how morphological decomposition processes bias the reading of Hebrew heterophonic homographs, i.e., unique orthographic patterns that are associated with two separate phonological and semantic entities depicted by means of two morphological structures (linear and nonlinear). In order to reveal the nature of…

  14. Calibration of SQUID vector magnetometers in full tensor gradiometry systems

    NASA Astrophysics Data System (ADS)

    Schiffler, M.; Queitsch, M.; Stolz, R.; Chwala, A.; Krech, W.; Meyer, H.-G.; Kukowski, N.

    2014-08-01

    Measurement of magnetic vector or tensor quantities, namely of field or field gradient, delivers more details of the underlying geological setting in geomagnetic prospection than a scalar measurement of a single component or of the scalar total magnetic intensity. Currently, highest measurement resolutions are achievable with superconducting quantum interference device (SQUID)-based systems. Due to technological limitations, it is necessary to suppress the parasitic magnetic field response from the SQUID gradiometer signals, which are a superposition of one tensor component and all three orthogonal magnetic field components. This in turn requires an accurate estimation of the local magnetic field. Such a measurement can itself be achieved via three additional orthogonal SQUID reference magnetometers. It is the calibration of such a SQUID reference vector magnetometer system that is the subject of this paper. A number of vector magnetometer calibration methods are described in the literature. We present two methods that we have implemented and compared for their suitability for rapid data processing and integration into a SQUID-based full tensor magnetic gradiometry system. We conclude that the calibration routines must necessarily model fabrication misalignments, field offset and scale factors, and include comparison with a reference magnetic field. In order to enable fast processing on site, the software must be able to function as a stand-alone toolbox.
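    The abstract notes that a workable calibration must model misalignments, offsets, and scale factors against a reference field. As a minimal sketch of that idea, the sensor can be modeled as m = A h + b (A absorbing scale factors and axis misalignments, b the field offset) and fitted by linear least squares, assuming reference field vectors h_i are known; all numbers below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "true" sensor model: reading m = A @ h + b, where A folds
# together scale factors and axis misalignments, and b is a field offset.
A_true = np.array([[1.02, 0.03, -0.01],
                   [0.00, 0.98,  0.02],
                   [0.01, -0.02, 1.05]])
b_true = np.array([0.5, -0.3, 0.1])

# Reference field vectors h_i (e.g. a known field applied in many orientations).
H = rng.normal(size=(200, 3)) * 50.0
M = H @ A_true.T + b_true + rng.normal(scale=1e-3, size=(200, 3))  # readings

# Least squares for [A | b]:  m_i = A h_i + b   <=>   M = [H 1] @ [A^T; b^T]
H_aug = np.hstack([H, np.ones((len(H), 1))])
P, *_ = np.linalg.lstsq(H_aug, M, rcond=None)
A_est, b_est = P[:3].T, P[3]

print(np.max(np.abs(A_est - A_true)) < 1e-3)  # misalignments recovered
```

A full calibration as described in the paper would also have to handle the harder case where only the reference field magnitude is known (ellipsoid fitting), but the linear fit above shows the parameter set involved.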

  15. Multicriteria approximation through decomposition

    SciTech Connect

    Burch, C. |; Krumke, S.; Marathe, M.; Phillips, C.; Sundberg, E. |

    1997-12-01

    The authors propose a general technique called solution decomposition to devise approximation algorithms with provable performance guarantees. The technique is applicable to a large class of combinatorial optimization problems that can be formulated as integer linear programs. Two key ingredients of the technique involve finding a decomposition of a fractional solution into a convex combination of feasible integral solutions and devising generic approximation algorithms based on calls to such decompositions as oracles. The technique is closely related to randomized rounding. The method yields as corollaries unified solutions to a number of well studied problems and it provides the first approximation algorithms with provable guarantees for a number of new problems. The particular results obtained in this paper include the following: (1) The authors demonstrate how the technique can be used to provide more understanding of previous results and new algorithms for classical problems such as Multicriteria Spanning Trees, and Suitcase Packing. (2) They show how the ideas can be extended to apply to multicriteria optimization problems, in which they wish to minimize a certain objective function subject to one or more budget constraints. As corollaries they obtain the first non-trivial multicriteria approximation algorithms for problems including the k-Hurdle and the Network Inhibition problems.

  16. Multicriteria approximation through decomposition

    SciTech Connect

    Burch, C.; Krumke, S.; Marathe, M.; Phillips, C.; Sundberg, E.

    1998-06-01

    The authors propose a general technique called solution decomposition to devise approximation algorithms with provable performance guarantees. The technique is applicable to a large class of combinatorial optimization problems that can be formulated as integer linear programs. Two key ingredients of their technique involve finding a decomposition of a fractional solution into a convex combination of feasible integral solutions and devising generic approximation algorithms based on calls to such decompositions as oracles. The technique is closely related to randomized rounding. Their method yields as corollaries unified solutions to a number of well studied problems and it provides the first approximation algorithms with provable guarantees for a number of new problems. The particular results obtained in this paper include the following: (1) the authors demonstrate how the technique can be used to provide more understanding of previous results and new algorithms for classical problems such as Multicriteria Spanning Trees, and Suitcase Packing; (2) they also show how the ideas can be extended to apply to multicriteria optimization problems, in which they wish to minimize a certain objective function subject to one or more budget constraints. As corollaries they obtain the first non-trivial multicriteria approximation algorithms for problems including the k-Hurdle and the Network Inhibition problems.

  17. Thermal decomposition hazard evaluation of hydroxylamine nitrate.

    PubMed

    Wei, Chunyang; Rogers, William J; Mannan, M Sam

    2006-03-17

    Hydroxylamine nitrate (HAN) is an important member of the hydroxylamine family and is a liquid propellant when combined with alkylammonium nitrate fuel in an aqueous solution. Low concentrations of HAN are used primarily in the nuclear industry as a reductant in nuclear material processing and for decontamination of equipment. Also, HAN has been involved in several incidents because of its instability and autocatalytic decomposition behavior. This paper presents calorimetric measurements of the thermal decomposition of 24 mass% HAN/water. The gas phase enthalpy of formation of HAN is calculated using both semi-empirical methods with MOPAC and high-level quantum chemical methods of Gaussian 03. CHETAH is used to estimate the energy release potential of HAN. A Reactive System Screening Tool (RSST) and an Automatic Pressure Tracking Adiabatic Calorimeter (APTAC) are used to characterize the thermal decomposition of HAN and to provide guidance about safe conditions for handling and storing HAN. PMID:16154263

  18. Visualization of 3-D tensor fields

    NASA Technical Reports Server (NTRS)

    Hesselink, L.

    1996-01-01

    Second-order tensor fields have applications in many different areas of physics, such as general relativity and fluid mechanics. The wealth of multivariate information in tensor fields makes them more complex and abstract than scalar and vector fields. Visualization is a good technique for scientists to gain new insights from them. Visualizing a 3-D continuous tensor field is equivalent to simultaneously visualizing its three eigenvector fields. In the past, research has been conducted in the area of two-dimensional tensor fields. It was shown that degenerate points, defined as points where eigenvalues are equal to each other, are the basic singularities underlying the topology of tensor fields. Moreover, it was shown that eigenvectors never cross each other except at degenerate points. Since we live in a three-dimensional world, it is important for us to understand the underlying physics of this world. In this report, we describe a new method for locating degenerate points along with the conditions for classifying them in three-dimensional space. Finally, we discuss some topological features of three-dimensional tensor fields, and interpret topological patterns in terms of physical properties.

  19. Visualization of tensor fields using superquadric glyphs.

    PubMed

    Ennis, Daniel B; Kindlman, Gordon; Rodriguez, Ignacio; Helm, Patrick A; McVeigh, Elliot R

    2005-01-01

    The spatially varying tensor fields that arise in magnetic resonance imaging are difficult to visualize due to the multivariate nature of the data. To improve the understanding of myocardial structure and function, a family of objects called glyphs, derived from superquadric parametric functions, are used to create informative and intuitive visualizations of the tensor fields. The superquadric glyphs are used to visualize both diffusion and strain tensors obtained in canine myocardium. The eigensystem of each tensor defines the glyph shape and orientation. Superquadric functions provide a continuum of shapes across four distinct eigensystems (λi, sorted eigenvalues): λ1 = λ2 = λ3 (spherical), λ1 < λ2 = λ3 (oblate), λ1 > λ2 = λ3 (prolate), and λ1 > λ2 > λ3 (cuboid). The superquadric glyphs are especially useful for identifying regions of anisotropic structure and function. Diffusion tensor renderings exhibit fiber angle trends and orthotropy (three distinct eigenvalues). Visualization of strain tensors with superquadric glyphs compactly exhibits radial thickening gradients, circumferential and longitudinal shortening, and torsion combined. The orthotropic nature of many biologic tissues and their DTMRI and strain data require visualization strategies that clearly exhibit the anisotropy of the data if it is to be interpreted properly. Superquadric glyphs improve the ability to distinguish fiber orientation and tissue orthotropy compared to ellipsoids. PMID:15690516
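    The glyph shape is driven entirely by the sorted eigenvalues. A common way to quantify where a tensor sits between the linear, planar, and spherical extremes is the Westin shape measures, which are often used to parameterize superquadric glyphs; this is a generic sketch, not code from the paper:

```python
import numpy as np

def westin_measures(eigvals):
    """Westin shape measures from eigenvalues sorted l1 >= l2 >= l3 >= 0.

    c_l + c_p + c_s == 1; superquadric glyph parameterizations are
    typically driven by the (c_l, c_p) pair.
    """
    l1, l2, l3 = sorted(eigvals, reverse=True)
    s = l1 + l2 + l3
    c_l = (l1 - l2) / s          # linear (prolate) measure
    c_p = 2.0 * (l2 - l3) / s    # planar (oblate) measure
    c_s = 3.0 * l3 / s           # spherical (isotropic) measure
    return c_l, c_p, c_s

print(westin_measures([1.0, 1.0, 1.0]))  # isotropic tensor: (0.0, 0.0, 1.0)
```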

  20. Particle creation from the quantum stress tensor

    NASA Astrophysics Data System (ADS)

    Firouzjaee, Javad T.; Ellis, George F. R.

    2015-05-01

    Among the different methods to derive particle creation, finding the quantum stress tensor expectation value gives a covariant quantity which can be used for examining the backreaction issue. However this tensor also includes vacuum polarization in a way that depends on the vacuum chosen. Here we review different aspects of particle creation by looking at energy conservation and at the quantum stress tensor. We show that in the case of general spherically symmetric black holes that have a dynamical horizon, as occurs in a cosmological context, one cannot have pair creation on the horizon because this violates energy conservation. This confirms the results obtained in other ways in a previous paper [J. T. Firouzjaee and G. F. R. Ellis, Gen. Relativ. Gravit. 47, 6 (2015)]. Looking at the expectation value of the quantum stress tensor with three different definitions of the vacuum state, we study the nature of particle creation and vacuum polarization in black hole and cosmological models, and the associated stress-energy tensors. We show that the thermal temperature that is calculated from the particle flux given by the quantum stress tensor is compatible with the temperature determined by the affine null parameter approach. Finally, we show that in the spherically symmetric dynamic case, we can neglect the backscattering term and only consider the s-wave term near the future apparent horizon.

  1. Variance decomposition in stochastic simulators.

    PubMed

    Le Maître, O P; Knio, O M; Moraes, A

    2015-06-28

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models. PMID:26133418
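    As a rough illustration of the idea, assigning each reaction channel its own random stream makes a pick-freeze (Sobol) estimate of channel-wise variance contributions possible. The sketch below uses a birth-death model with per-channel streams as a crude stand-in for the paper's Poisson-process reformulation; all rates, seeds, and the final-time observable are hypothetical choices:

```python
import numpy as np

def birth_death(seed_birth, seed_death, c1=1.0, c2=0.1, x0=0, T=10.0):
    """Gillespie-style birth-death simulation in which each reaction channel
    draws its waiting times from its own dedicated random stream."""
    rng = [np.random.default_rng(seed_birth), np.random.default_rng(seed_death)]
    t, x = 0.0, x0
    while True:
        rates = [c1, c2 * x]
        waits = [rng[k].exponential() / rates[k] if rates[k] > 0 else np.inf
                 for k in range(2)]
        k = int(np.argmin(waits))
        if t + waits[k] > T:
            return x
        t += waits[k]
        x += 1 if k == 0 else -1

# Pick-freeze estimate of first-order (Sobol) sensitivities of the final
# population with respect to each channel's noise stream.
master = np.random.default_rng(42)
N = 1000
s1, s2, s1p, s2p = (master.integers(0, 2**32, size=N) for _ in range(4))

yA = np.array([birth_death(s1[i], s2[i]) for i in range(N)], float)
yB = np.array([birth_death(s1[i], s2p[i]) for i in range(N)], float)  # freeze birth
yC = np.array([birth_death(s1p[i], s2[i]) for i in range(N)], float)  # freeze death

var = np.var(yA, ddof=1)
S_birth = np.cov(yA, yB)[0, 1] / var   # sensitivity to the birth channel
S_death = np.cov(yA, yC)[0, 1] / var   # sensitivity to the death channel
print(S_birth, S_death)
```

With channel interactions present, the first-order indices need not sum to one; the remainder is attributable to interaction terms in the Sobol-Hoeffding decomposition.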

  2. Factors influencing leaf litter decomposition: An intersite decomposition experiment across China

    USGS Publications Warehouse

    Zhou, G.; Guan, L.; Wei, X.; Tang, X.; Liu, S.; Liu, J.; Zhang, Dongxiao; Yan, J.

    2008-01-01

    The Long-Term Intersite Decomposition Experiment in China (hereafter referred to as LTIDE-China) was established in 2002 to study how substrate quality and macroclimate factors affect leaf litter decomposition. The LTIDE-China includes a wide variety of natural and managed ecosystems, consisting of 12 forest types (eight regional broadleaf forests, three needle-leaf plantations and one broadleaf plantation) at eight locations across China. Samples of mixed leaf litter from the south subtropical evergreen broadleaf forest in Dinghushan (referred to as the DHS sample) were translocated to all 12 forest types. The leaf litter from each of the other 11 forest types was placed in its original forest to enable comparison of decomposition rates of DHS and local litters. The experiment lasted for 30 months, involving collection of litterbags from each site every 3 months. Our results show that annual decomposition rate-constants, as represented by regression-fitted k-values, ranged from 0.169 to 1.454/year. Climatic factors control the decomposition rate, in which mean annual temperature and annual actual evapotranspiration are dominant and mean annual precipitation is subordinate. Initial C/N and N/P ratios were demonstrated to be important factors regulating litter decomposition rate. The decomposition process may be divided into two phases controlled by different factors. In our study, 0.75 years is believed to be the dividing line between the two phases. The fact that decomposition rates of DHS litters were slower than those of local litters may have resulted from the acclimation of local decomposer communities to extraneous substrate. © 2008 Springer Science+Business Media B.V.
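    The reported k-values come from fitting a single-exponential decay X(t) = X0 exp(-kt) to litterbag mass loss. A minimal sketch with synthetic data (the noise level and the chosen k are illustrative, not values from the study):

```python
import numpy as np

# Litterbag collections every 3 months for 30 months, in years.
t = np.arange(0.25, 2.75, 0.25)
k_true = 0.8                        # /year, within the reported 0.169-1.454 range
rng = np.random.default_rng(1)
mass_frac = np.exp(-k_true * t) * np.exp(rng.normal(scale=0.02, size=t.size))

# Single-exponential model X(t)/X0 = exp(-k t): log-linear least squares,
# so the slope of ln(mass fraction) vs. time gives -k.
slope, intercept = np.polyfit(t, np.log(mass_frac), 1)
k_fit = -slope
print(k_fit)
```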

  3. Moment tensors of a dislocation in a porous medium

    NASA Astrophysics Data System (ADS)

    Wang, Zhi; Hu, Hengshan

    2016-06-01

    A dislocation can be represented by a moment tensor for calculating seismic waves. However, the moment tensor expression was derived in an elastic medium and cannot completely describe a dislocation in a porous medium. In this paper, effective moment tensors of a dislocation in a porous medium are derived. It is found that the dislocation is equivalent to two independent moment tensors, i.e., the bulk moment tensor acting on the bulk of the porous medium and the isotropic fluid moment tensor acting on the pore fluid. Both of them are caused by the solid dislocation as well as the fluid-solid relative motion corresponding to fluid injection towards the surrounding rocks (or fluid outflow) through the fault plane. For a shear dislocation, the fluid moment tensor is zero, and the dislocation is equivalent to a double couple acting on the bulk; for an opening dislocation or fluid injection, the two moment tensors are needed to describe the source. The fluid moment tensor only affects the radiated compressional waves. By calculating the ratio of the radiation fields generated by unit fluid moment tensor and bulk moment tensor, it is found that the fast compressional wave radiated by the bulk moment tensor is much stronger than that radiated by the fluid moment tensor, while the slow compressional wave radiated by the fluid moment tensor is several times stronger than that radiated by the bulk moment tensor.
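    For the shear-dislocation case above, the equivalent double couple acting on the bulk can be written directly from the slip direction s and fault normal n. This is the standard elastic-medium expression, shown here as context for the poroelastic generalization; the material values are hypothetical:

```python
import numpy as np

mu, area, slip = 3.0e10, 1.0e6, 0.5    # Pa, m^2, m (illustrative values)
n = np.array([0.0, 0.0, 1.0])          # fault normal
s = np.array([1.0, 0.0, 0.0])          # unit slip direction, s . n = 0

# Double-couple moment tensor: M_ij = mu * A * u * (s_i n_j + s_j n_i)
M = mu * area * slip * (np.outer(s, n) + np.outer(n, s))

M0 = mu * area * slip                  # scalar moment
eigvals = np.sort(np.linalg.eigvalsh(M))
print(np.allclose(eigvals, [-M0, 0.0, M0]))  # eigenvalues (-M0, 0, M0), trace 0
```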

  4. Computation of streaming potential in porous media: Modified permeability tensor

    NASA Astrophysics Data System (ADS)

    Bandopadhyay, Aditya; DasGupta, Debabrata; Mitra, Sushanta K.; Chakraborty, Suman

    2015-11-01

    We quantify the pressure-driven electrokinetic transport of electrolytes in porous media through a matched asymptotic expansion based method to obtain a homogenized description of the upscaled transport. The pressure driven flow of aqueous electrolytes over charged surfaces leads to the generation of an induced electric potential, commonly termed the streaming potential. We derive an expression for the modified permeability tensor, K_eff, which is analogous to the Darcy permeability tensor with due accounting for the induced streaming potential. The porous media herein are modeled as spatially periodic. The modified permeability tensor is obtained for both topographically simple and complex domains by enforcing a zero net global current. Towards resolving the complicated details of the porous medium in a computationally efficient framework, the domain identification and reconstruction of the geometries are performed using adaptive quadtree (in 2D) and octree (in 3D) algorithms, which allow one to resolve the solid-liquid interface to the desired level of resolution. We discuss the influence of the induced streaming potential on the modification of the Darcy law in connection to transport processes through porous plugs, clays and soils by considering a case-study on Berea sandstone.
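    Schematically, the modified permeability arises because the zero-net-current condition ties the induced potential gradient to the pressure gradient. The Onsager-style sketch below is illustrative only; the coefficient matrices, units, and symbols are assumptions, not the paper's homogenized coefficients:

```python
import numpy as np

# Schematic coupled-flux form (all values hypothetical):
#   flow:    q = -K_h . grad(p) - C . grad(phi)
#   current: j = -C^T . grad(p) - S . grad(phi)
# Enforcing zero net current (j = 0) induces the streaming potential
# grad(phi) = -S^{-1} C^T grad(p), and substituting back gives
#   K_eff = K_h - C S^{-1} C^T   (an electroviscous reduction).
K_h = np.diag([2.0, 1.5, 1.0])           # hydraulic mobility tensor
S = np.diag([5.0, 5.0, 5.0])             # electrical conductivity tensor
C = 0.3 * np.eye(3)                      # electrokinetic coupling tensor

K_eff = K_h - C @ np.linalg.inv(S) @ C.T

print(np.allclose(K_eff, K_eff.T))       # modified tensor stays symmetric
print(np.all(np.linalg.eigvalsh(K_h - K_eff) >= -1e-12))  # permeability reduced
```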

  5. Optimizing Tensor Contraction Expressions for Hybrid CPU-GPU Execution

    SciTech Connect

    Ma, Wenjing; Krishnamoorthy, Sriram; Villa, Oreste; Kowalski, Karol; Agrawal, Gagan

    2013-03-01

    Tensor contractions are generalized multidimensional matrix multiplication operations that widely occur in quantum chemistry. Efficient execution of tensor contractions on Graphics Processing Units (GPUs) requires several challenges to be addressed, including index permutation and small dimension sizes that reduce thread-block utilization. Moreover, to apply the same optimizations to various expressions, we need a code generation tool. In this paper, we present our approach to automatically generate CUDA code to execute tensor contractions on GPUs, including management of data movement between CPU and GPU. To evaluate our tool, GPU-enabled code is generated for the most expensive contractions in CCSD(T), a key coupled cluster method, and incorporated into NWChem, a popular computational chemistry suite. For this method, we demonstrate a speedup of over a factor of 8.4 using one GPU (instead of one core per node) and over 2.6 when utilizing the entire system with a hybrid CPU+GPU solution with 2 GPUs and 5 cores (instead of 7 cores per node). Finally, we analyze the implementation behavior on future GPU systems.
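    The core fact such a code generator exploits is that a general tensor contraction reduces to index permutation plus a matrix multiply (GEMM). A NumPy sketch of that equivalence for a toy 4-index contraction (the dimensions and index pattern are hypothetical, not a specific CCSD(T) term):

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy 4-index contraction: C[a,b,i,j] = sum_{c,k} A[a,c,i,k] * B[c,b,k,j]
a = b = c = i = j = k = 6
A = rng.normal(size=(a, c, i, k))
B = rng.normal(size=(c, b, k, j))

C_ref = np.einsum('acik,cbkj->abij', A, B)

# Equivalent permute + reshape + GEMM pipeline, as a code generator would emit:
A2 = A.transpose(0, 2, 1, 3).reshape(a * i, c * k)   # rows (a,i), cols (c,k)
B2 = B.transpose(0, 2, 1, 3).reshape(c * k, b * j)   # rows (c,k), cols (b,j)
C2 = (A2 @ B2).reshape(a, i, b, j).transpose(0, 2, 1, 3)

print(np.allclose(C_ref, C2))  # True: contraction == permutation + GEMM
```

On a GPU the expensive parts are exactly these two stages: the transposes (index permutation) and the GEMM, which is why small dimension sizes that starve the GEMM of parallelism are a central concern.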

  6. Diffusion tensor smoothing through weighted Karcher means.

    PubMed

    Carmichael, Owen; Chen, Jun; Paul, Debashis; Peng, Jie

    2013-01-01

    Diffusion tensor magnetic resonance imaging (DTI) quantifies the spatial distribution of water diffusion at each voxel on a regular grid of locations in a biological specimen by diffusion tensors, 3 × 3 positive definite matrices. Removal of noise from DTI is an important problem due to the high scientific relevance of DTI and the relatively low signal to noise ratio it provides. Leading approaches to this problem amount to estimation of weighted Karcher means of diffusion tensors within spatial neighborhoods, under various metrics imposed on the space of tensors. However, it is unclear how the behavior of these estimators varies with the magnitude of DTI sensor noise (the noise resulting from the thermal effects of MRI scanning) as well as the geometric structure of the underlying diffusion tensor neighborhoods. In this paper, we combine theoretical analysis, empirical analysis of simulated DTI data, and empirical analysis of real DTI scans to compare the noise removal performance of three kernel-based DTI smoothers that are based on Euclidean, log-Euclidean, and affine-invariant metrics. The results suggest, contrary to conventional wisdom, that imposing a simplistic Euclidean metric may in fact provide comparable or superior noise removal, especially in relatively unstructured regions and/or in the presence of moderate to high levels of sensor noise. In contrast, log-Euclidean and affine-invariant metrics may lead to better noise removal in highly structured anatomical regions, especially when the sensor noise is of low magnitude. These findings emphasize the importance of considering the interplay of sensor noise magnitude and tensor field geometric structure when assessing diffusion tensor smoothing options. They also point to the necessity for continued development of smoothing methods that perform well across a large range of scenarios. PMID:25419264
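    Under the log-Euclidean metric, the weighted Karcher mean has the closed form exp(Σ w_i log D_i), which makes it the easiest of the three smoothers to sketch. The neighborhood tensors and kernel weights below are hypothetical:

```python
import numpy as np

def spd_log(A):
    """Matrix logarithm of a symmetric positive definite matrix."""
    w, V = np.linalg.eigh(A)
    return (V * np.log(w)) @ V.T

def spd_exp(A):
    """Matrix exponential of a symmetric matrix."""
    w, V = np.linalg.eigh(A)
    return (V * np.exp(w)) @ V.T

def log_euclidean_mean(tensors, weights):
    """Weighted log-Euclidean Karcher mean of SPD diffusion tensors:
    exp( sum_i w_i log(D_i) ), with the weights normalized to sum to 1."""
    weights = np.asarray(weights, float)
    weights = weights / weights.sum()
    acc = sum(w * spd_log(D) for w, D in zip(weights, tensors))
    return spd_exp(acc)

# Smoothing one voxel against its neighborhood (hypothetical kernel weights):
D_center = np.diag([3.0, 1.0, 1.0])
D_nbrs = [np.diag([2.5, 1.2, 0.9]), np.diag([3.2, 0.8, 1.1])]
D_smooth = log_euclidean_mean([D_center] + D_nbrs, [0.5, 0.25, 0.25])
print(np.all(np.linalg.eigvalsh(D_smooth) > 0))  # stays positive definite
```

The Euclidean smoother would simply average the matrices entrywise; the affine-invariant mean has no closed form and requires iteration, which is part of the computational trade-off the paper examines.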

  7. Generalized Tensor-Based Morphometry of HIV/AIDS Using Multivariate Statistics on Deformation Tensors

    PubMed Central

    Lepore, Natasha; Brun, Caroline; Chou, Yi-Yu; Chiang, Ming-Chang; Dutton, Rebecca A.; Hayashi, Kiralee M.; Luders, Eileen; Lopez, Oscar L.; Aizenstein, Howard J.; Toga, Arthur W.; Becker, James T.; Thompson, Paul M.

    2009-01-01

    This paper investigates the performance of a new multivariate method for tensor-based morphometry (TBM). Statistics on Riemannian manifolds are developed that exploit the full information in deformation tensor fields. In TBM, multiple brain images are warped to a common neuroanatomical template via 3-D nonlinear registration; the resulting deformation fields are analyzed statistically to identify group differences in anatomy. Rather than study the Jacobian determinant (volume expansion factor) of these deformations, as is common, we retain the full deformation tensors and apply a manifold version of Hotelling’s T^2 test to them, in a Log-Euclidean domain. In 2-D and 3-D magnetic resonance imaging (MRI) data from 26 HIV/AIDS patients and 14 matched healthy subjects, we compared multivariate tensor analysis versus univariate tests of simpler tensor-derived indices: the Jacobian determinant, the trace, geodesic anisotropy, and eigenvalues of the deformation tensor, and the angle of rotation of its eigenvectors. We detected consistent, but more extensive patterns of structural abnormalities, with multivariate tests on the full tensor manifold. Their improved power was established by analyzing cumulative p-value plots using false discovery rate (FDR) methods, appropriately controlling for false positives. This increased detection sensitivity may empower drug trials and large-scale studies of disease that use tensor-based morphometry. PMID:18270068

  8. Generalized tensor-based morphometry of HIV/AIDS using multivariate statistics on deformation tensors.

    PubMed

    Lepore, N; Brun, C; Chou, Y Y; Chiang, M C; Dutton, R A; Hayashi, K M; Luders, E; Lopez, O L; Aizenstein, H J; Toga, A W; Becker, J T; Thompson, P M

    2008-01-01

    This paper investigates the performance of a new multivariate method for tensor-based morphometry (TBM). Statistics on Riemannian manifolds are developed that exploit the full information in deformation tensor fields. In TBM, multiple brain images are warped to a common neuroanatomical template via 3-D nonlinear registration; the resulting deformation fields are analyzed statistically to identify group differences in anatomy. Rather than study the Jacobian determinant (volume expansion factor) of these deformations, as is common, we retain the full deformation tensors and apply a manifold version of Hotelling's T^2 test to them, in a Log-Euclidean domain. In 2-D and 3-D magnetic resonance imaging (MRI) data from 26 HIV/AIDS patients and 14 matched healthy subjects, we compared multivariate tensor analysis versus univariate tests of simpler tensor-derived indices: the Jacobian determinant, the trace, geodesic anisotropy, and eigenvalues of the deformation tensor, and the angle of rotation of its eigenvectors. We detected consistent, but more extensive patterns of structural abnormalities, with multivariate tests on the full tensor manifold. Their improved power was established by analyzing cumulative p-value plots using false discovery rate (FDR) methods, appropriately controlling for false positives. This increased detection sensitivity may empower drug trials and large-scale studies of disease that use tensor-based morphometry. PMID:18270068

  9. An Adaptive Spectrally Weighted Structure Tensor Applied to Tensor Anisotropic Nonlinear Diffusion for Hyperspectral Images

    ERIC Educational Resources Information Center

    Marin Quintero, Maider J.

    2013-01-01

    The structure tensor for vector valued images is most often defined as the average of the scalar structure tensors in each band. The problem with this definition is the assumption that all bands provide the same amount of edge information, giving them the same weights. As a result, non-edge pixels can be reinforced and edges can be weakened…

  10. Fulvenallene decomposition kinetics.

    PubMed

    Polino, Daniela; Cavallotti, Carlo

    2011-09-22

    While the decomposition kinetics of the benzyl radical has been studied in depth both from the experimental and the theoretical standpoint, much less is known about the reactivity of what is likely to be its main decomposition product, fulvenallene. In this work the high temperature reactivity of fulvenallene was investigated on a Potential Energy Surface (PES) consisting of 10 wells interconnected through 11 transition states using a one-dimensional master equation (ME). Rate constants were calculated using RRKM theory and the ME was integrated using a stochastic kinetic Monte Carlo code. It was found that two main decomposition channels are possible: the first is active on the singlet PES and leads to the formation of the fulvenallenyl radical and atomic hydrogen; the second requires intersystem crossing to the triplet PES and leads to acetylene and cyclopentadienylidene. ME simulations were performed calculating the microcanonical intersystem crossing frequency using Landau-Zener theory, convoluting the crossing probability with RRKM rates evaluated at the conical intersection. It was found that the reaction channel leading to the cyclopentadienylidene diradical is only slightly faster than that leading to the fulvenallenyl radical, so it can be concluded that both reactions are likely to be active in the investigated temperature (1500-2000 K) and pressure (0.05-50 bar) ranges. However, the simulations show that intersystem crossing is rate limiting for the first reaction channel, as the removal of this barrier leads to an increase of the rate constant by a factor of 2-3. Channel specific rate constants are reported as a function of temperature and pressure. PMID:21819060

  11. Spectral line polarization with angle-dependent partial frequency redistribution. I. A Stokes parameters decomposition for Rayleigh scattering

    NASA Astrophysics Data System (ADS)

    Frisch, H.

    2010-11-01

    Context. The linear polarization of strong resonance lines observed near the solar limb is created by a multiple-scattering process. Partial frequency redistribution (PRD) effects must be accounted for to explain the polarization profiles. The redistribution matrix describing the scattering process is a sum of terms, each containing a PRD function multiplied by a Rayleigh type phase matrix. A standard approximation made in calculating the polarization is to average the PRD functions over all the scattering angles, because the numerical work needed to take the angle-dependence of the PRD functions into account is large and not always needed for reasonable evaluations of the polarization. Aims: This paper describes a Stokes parameters decomposition method, applicable in plane-parallel cylindrically symmetrical media, which aims at simplifying the numerical work needed to overcome the angle-average approximation. Methods: The decomposition method relies on an azimuthal Fourier expansion of the PRD functions associated to a decomposition of the phase matrices in terms of the Landi Degl'Innocenti irreducible spherical tensors for polarimetry T^K_Q(i, Ω) (i the Stokes parameter index, Ω the ray direction). The terms that depend on the azimuth of the scattering angle are retained in the phase matrices. Results: It is shown that the Stokes parameters I and Q, which have the same cylindrical symmetry as the medium, can be expressed in terms of four cylindrically symmetrical components I_Q^K (K = Q = 0; K = 2, Q = 0, 1, 2). The components with Q = 1, 2 are created by the angular dependence of the PRD functions. They go to zero at disk center, ensuring that Stokes Q also goes to zero. Each component I_Q^K is a solution of a standard radiative transfer equation. The source terms S_Q^K are significantly simpler than the source terms corresponding to I and Q. They satisfy a set of integral equations that can be solved by an accelerated lambda iteration (ALI) method.
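    The ALI step mentioned at the end can be illustrated on a toy problem of the same algebraic form, S = (1 - ε) Λ S + ε B, preconditioning with the local (diagonal) part of Λ. The Λ below is a synthetic averaging operator, not a real radiative-transfer lambda operator:

```python
import numpy as np

# Toy accelerated lambda iteration for S = (1 - eps) * Lam @ S + eps * B.
n, eps = 60, 1e-3
x = np.arange(n)
Lam = np.exp(-np.abs(x[:, None] - x[None, :]) / 5.0)   # smoothing kernel
Lam /= Lam.sum(axis=1, keepdims=True) * 1.05           # row sums < 1
B = np.ones(n)

# Reference solution of the linear system (I - (1-eps) Lam) S = eps B.
S_exact = np.linalg.solve(np.eye(n) - (1 - eps) * Lam, eps * B)

# ALI: precondition the fixed-point iteration with the diagonal
# (local) approximate operator Lam*, which is cheap to invert.
Lam_star = np.diag(np.diag(Lam))
M = np.linalg.inv(np.eye(n) - (1 - eps) * Lam_star)
S = np.zeros(n)
for _ in range(2000):
    S = S + M @ (eps * B + (1 - eps) * (Lam @ S) - S)

print(np.max(np.abs(S - S_exact)) < 1e-8)
```

In a real solver Λ is never formed as a matrix; only its action (a formal solution of the transfer equation) and its diagonal are computed, which is exactly why the diagonal preconditioner is the standard choice.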

  12. Killing-Yano tensors, rank-2 Killing tensors, and conserved quantities in higher dimensions

    NASA Astrophysics Data System (ADS)

    Krtous, Pavel; Kubiznák, David; Page, Don N.; Frolov, Valeri P.

    2007-02-01

    From the metric and one Killing-Yano tensor of rank D-2 in any D-dimensional spacetime with such a principal Killing-Yano tensor, we show how to generate k = [(D+1)/2] Killing-Yano tensors, of rank D-2j for all 0 <= j <= k-1, and k rank-2 Killing tensors, giving k constants of geodesic motion that are in involution. For the example of the Kerr-NUT-AdS spacetime (hep-th/0604125) with its principal Killing-Yano tensor (gr-qc/0610144), these constants and the constants from the k Killing vectors give D independent constants in involution, making the geodesic motion completely integrable (hep-th/0611083). The constants of motion are also related to the constants recently obtained in the separation of the Hamilton-Jacobi and Klein-Gordon equations (hep-th/0611245).
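    The rank-2 case illustrates the "square" construction underlying the paper's towers of Killing tensors; the following is a sketch of that standard result, not the paper's general rank-(D-2j) formula:

```latex
% A rank-2 Killing-Yano tensor Y squares to a rank-2 Killing tensor K,
% whose contraction with the geodesic velocity is conserved.
K_{ab} = Y_{a}{}^{c}\,Y_{bc},
\qquad
\nabla_{(a} K_{bc)} = 0,
\qquad
C = K_{ab}\,\dot{x}^{a}\dot{x}^{b} = \text{const along geodesics}.
```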

  13. Tensor network algorithm by coarse-graining tensor renormalization on finite periodic lattices

    NASA Astrophysics Data System (ADS)

    Zhao, Hui-Hai; Xie, Zhi-Yuan; Xiang, Tao; Imada, Masatoshi

    2016-03-01

    We develop coarse-graining tensor renormalization group algorithms to compute physical properties of two-dimensional lattice models on finite periodic lattices. Two different coarse-graining strategies, one based on the tensor renormalization group and the other based on the higher-order tensor renormalization group, are introduced. In order to optimize the tensor network model globally, a sweeping scheme is proposed to account for the renormalization effect from the environment tensors under the framework of second renormalization group. We demonstrate the algorithms by the classical Ising model on the square lattice and the Kitaev model on the honeycomb lattice, and show that the finite-size algorithms achieve substantially more accurate results than the corresponding infinite-size ones.

  14. Low Temperature Decomposition Rates for Tetraphenylborate Ion

    SciTech Connect

    Walker, D.D.

    1998-11-18

    Previous studies indicated that palladium catalyzes rapid decomposition of alkaline tetraphenylborate slurries. Additional evidence suggests that Pd(II) reduces to Pd(0) during catalyst activation. Further use of tetraphenylborate ion in the decontamination of radioactive waste may require removal of the catalyst or cooling to temperatures at which the decomposition reaction proceeds slowly and does not adversely affect processing. Recent tests showed that tetraphenylborate did not react appreciably at 25 degrees Celsius over six months, suggesting the potential to avoid the decomposition at low temperatures. The lack of reaction at low temperature could reflect very slow kinetics at the lower temperature, or may indicate a catalyst "deactivation" process. Previous tests in the temperature range 35 to 70 degrees Celsius provided a low precision estimate of the activation energy of the reaction with which to predict the rate of reaction at 25 degrees Celsius. To understand the observations at 25 degrees Celsius, experiments must separate the catalyst activation step and the subsequent reaction with TPB. Tests described in this report represent an initial attempt to separate the two steps and determine the rate and activation energy of the reaction between active catalyst and TPB. The results of these tests indicate that the absence of reaction at 25 degrees Celsius was caused by failure to activate the catalyst or the presence of a deactivating mechanism. In the presence of activated catalyst, the decomposition reaction rate is significant.
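    The extrapolation from the 35-70 degrees Celsius data to 25 degrees Celsius rests on the two-point Arrhenius relation. A sketch with hypothetical rate constants (the true parameters for this system are not given in the abstract):

```python
import numpy as np

R = 8.314  # J/(mol K)

def arrhenius_k(A, Ea, T):
    """Arrhenius rate constant k = A * exp(-Ea / (R T))."""
    return A * np.exp(-Ea / (R * T))

# Hypothetical rates "measured" at 35 C and 70 C (values are illustrative).
T1, T2 = 308.15, 343.15
A_true, Ea_true = 1.0e9, 90e3          # pre-factor and Ea = 90 kJ/mol, assumed
k1, k2 = arrhenius_k(A_true, Ea_true, T1), arrhenius_k(A_true, Ea_true, T2)

# Two-point activation energy: Ea = R * ln(k2/k1) / (1/T1 - 1/T2)
Ea_est = R * np.log(k2 / k1) / (1.0 / T1 - 1.0 / T2)

# Extrapolated rate at 25 C, where the slurries showed no visible reaction.
k_25 = arrhenius_k(A_true, Ea_est, 298.15)
print(np.isclose(Ea_est, Ea_true), k_25 < k1)
```

The report's point is that such an extrapolation silently assumes the catalyst is in the same (activated) state at all temperatures, which the 25 degrees Celsius observations call into question.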

  15. Autocatalytic Decomposition Mechanisms in Energetic Molecular Crystals

    NASA Astrophysics Data System (ADS)

    Kuklja, Maija; Rashkeev, Sergey

    2009-06-01

    Atomic scale mechanisms of the initiation of chemical processes in energetic molecular crystals, which lead to decomposition and ultimately to an explosive chain reaction, are still far from being understood. In this work, we investigate the onset of the initiation processes in two high-explosive crystals, diamino-dinitroethylene (DADNE) and triamino-trinitrobenzene (TATB). We found that an autocatalytic decomposition mechanism is likely to take place in the DADNE crystal, which consists of corrugated, dashboard-shaped molecular layers. The presence of a dissociated NO2 group in the interstitial space between two layers induces a significant shear strain between these layers, which, in turn, facilitates the further dissociation of NO2 groups from surrounding molecules by lowering the C-NO2 decomposition barrier. In contrast, in TATB (which consists of flat, graphite-like molecular layers), an interstitial NO2 group positioned between two layers tends to produce a tensile stress (rather than a shear strain), which leads to local molecular disorder in these layers without any significant modification of the C-NO2 decomposition barrier. The observed differences between the two materials are discussed in terms of their structural, electronic, and chemical properties.

  16. A confidence parameter for seismic moment tensors

    NASA Astrophysics Data System (ADS)

    Tape, Walter; Tape, Carl

    2016-02-01

    Given a moment tensor m inferred from seismic data for an earthquake, we define P(V) to be the probability that the true moment tensor for the earthquake lies in the neighborhood of m that has fractional volume V. The average value of P(V) is then a measure of our confidence in m. The calculation of P(V) requires knowing both the probability P̂(ω) and the fractional volume V̂(ω) of the set of moment tensors within a given angular radius ω of m. We explain how to construct P̂(ω) from a misfit function derived from seismic data, and we show how to calculate V̂(ω), which depends on the set M of moment tensors under consideration. The two most important instances of M are where M is the set of all moment tensors of fixed norm, and where M is the set of all double couples of fixed norm.

  18. Non-standard symmetries and Killing tensors

    NASA Astrophysics Data System (ADS)

    Visinescu, Mihai

    2009-10-01

    Higher order symmetries corresponding to Killing tensors are investigated. The intimate relation between Killing-Yano tensors and non-standard supersymmetries is pointed out. The gravitational anomalies are absent if the hidden symmetry is associated with a Killing-Yano tensor. In the Dirac theory on curved spaces, Killing-Yano tensors generate Dirac-type operators involved in interesting algebraic structures such as dynamical algebras or even infinite dimensional algebras or superalgebras. The general results are applied to space-times which appear in modern studies. The 4-dimensional Euclidean Taub-NUT space and its generalizations introduced by Iwai and Katayama are analyzed from the point of view of hidden symmetries. We present the infinite dimensional superalgebra of Dirac-type operators on Taub-NUT space that can be seen as a twisted loop algebra. The axial anomaly, interpreted as the index of the Dirac operator, is computed for the generalized Taub-NUT metrics. The existence of conformal Killing-Yano tensors is investigated for some spaces with mixed Sasakian structures.

  19. Primordial tensor modes of the early Universe

    NASA Astrophysics Data System (ADS)

    Martínez, Florencia Benítez; Olmedo, Javier

    2016-06-01

    We study cosmological tensor perturbations on a quantized background within the hybrid quantization approach. In particular, we consider a flat, homogeneous and isotropic spacetime and small tensor inhomogeneities on it. We truncate the action to second order in the perturbations. The dynamics is ruled by a homogeneous scalar constraint. We carry out a canonical transformation in the system where the Hamiltonian for the tensor perturbations takes a canonical form. The new tensor modes now admit a standard Fock quantization with a unitary dynamics. We then combine this representation with a generic quantum scheme for the homogeneous sector. We adopt a Born-Oppenheimer ansatz for the solutions to the constraint operator, previously employed to study the dynamics of scalar inhomogeneities. We analyze the approximations that allow us to recover, on the one hand, a Schrödinger equation similar to the one emerging in the dressed metric approach and, on the other hand, the ones necessary for the effective evolution equations of these primordial tensor modes within the hybrid approach to be valid. Finally, we consider loop quantum cosmology as an example where these quantization techniques can be applied and compare with other approaches.

  20. Moment tensor inversion for moderate earthquakes and horizontal direction of tectonic stress in and around the South Korean Peninsula

    NASA Astrophysics Data System (ADS)

    Cho, ChangSoo

    2015-04-01

    Moment tensor inversion using waveforms is widely used not only to identify fault orientations of earthquakes but also to identify explosion events such as the North Korean nuclear tests. The open-source TDMT inversion code was used for 1-D focal mechanisms of moderate earthquakes, but it had some problems fitting the earthquake waveform data. The software was modified and improved by extracting the relevant bandwidth from the event data and by fitting waveforms at the maximum cross-correlation within a limited shifting time. The improved algorithm was applied to moderate earthquakes in and around the Korean Peninsula and produced good waveform fits in the derived focal mechanisms. CMT centroid locations were also calculated with this algorithm. Earthquakes occur rarely in the Korean Peninsula, and instrumental recording started only in the late 1990s, but the quality of the ground-motion measurements since then is very good. Sixty-one moderate earthquakes that occurred from 2000 to the present were analyzed. Most of the focal mechanisms show strike-slip or reverse faulting, as expected for intraplate earthquakes. The horizontal direction of tectonic stress in the Korean Peninsula is ENE-WSW, derived from the focal mechanisms calculated by 1-D moment tensor inversion using Zoback (1992)'s method for tectonic stress. A 3-D moment tensor inversion method was also developed with a 3-D viscoelastic finite-difference simulation code using ADE-PML (auxiliary differential equation, perfectly matched layer) absorbing boundaries, and was applied to the main moderate earthquakes. Forward modeling of 3-D seismic wave propagation for moment tensor inversion requires much time and computational cost. Forward simulation with domain decomposition, using only a thin model between source and receiver, could greatly reduce the time, memory, and computational cost of 3-D moment tensor inversion, even though this method was not more effective

  1. Hydrogen iodide decomposition

    DOEpatents

    O'Keefe, Dennis R.; Norman, John H.

    1983-01-01

    Liquid hydrogen iodide is decomposed to form hydrogen and iodine in the presence of water using a soluble catalyst. Decomposition is carried out at a temperature between about 350.degree. K. and about 525.degree. K. and at a corresponding pressure between about 25 and about 300 atmospheres in the presence of an aqueous solution which acts as a carrier for the homogeneous catalyst. Various halides of the platinum group metals, particularly Pd, Rh and Pt, are used, particularly the chlorides and iodides which exhibit good solubility. After separation of the H.sub.2, the stream from the decomposer is countercurrently extracted with nearly dry HI to remove I.sub.2. The wet phase contains most of the catalyst and is recycled directly to the decomposition step. The catalyst in the remaining almost dry HI-I.sub.2 phase is then extracted into a wet phase which is also recycled. The catalyst-free HI-I.sub.2 phase is finally distilled to separate the HI and I.sub.2. The HI is recycled to the reactor; the I.sub.2 is returned to a reactor operating in accordance with the Bunsen equation to create more HI.

  2. Coxeter decompositions of hyperbolic simplexes

    SciTech Connect

    Felikson, A A

    2002-12-31

    A Coxeter decomposition of a polyhedron in a hyperbolic space H{sup n} is a decomposition of it into finitely many Coxeter polyhedra such that any two tiles having a common facet are symmetric with respect to it. The classification of Coxeter decompositions is closely related to the problem of the classification of finite-index subgroups generated by reflections in discrete hyperbolic groups generated by reflections. All Coxeter decompositions of simplexes in the hyperbolic spaces H{sup n} with n>3 are described in this paper.

  3. LU and Cholesky decomposition on an optical systolic array processor

    NASA Technical Reports Server (NTRS)

    Casasent, D.; Ghosh, A.

    1983-01-01

    Direct solutions of matrix-vector equations on an optical systolic array processor are considered. The solutions are discussed and a parallel algorithm for LU matrix decomposition that is very attractive for an optical realization is formulated. It is noted that when direct techniques are used, it is preferable to realize the matrix decomposition on an optical system and to utilize a digital processor for the solution of the simplified resultant matrix-vector problem. One method of realizing LU matrix decomposition on a new frequency-multiplexed optical systolic array matrix-matrix processor is described. A simple method for extending the process of LU decomposition to Cholesky decomposition on the optical processor is discussed.
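Independently of the optical hardware, the underlying factorizations can be sketched in a few lines. The following is a textbook Doolittle LU factorization without pivoting (an assumption for illustration; the record's frequency-multiplexed systolic formulation is not reproduced), plus the standard reduction of Cholesky to LU for a symmetric positive definite matrix:

```python
import numpy as np

def lu_doolittle(A):
    """Doolittle LU factorization A = L @ U (no pivoting; assumes all
    leading principal minors are nonzero)."""
    n = A.shape[0]
    L = np.eye(n)
    U = A.astype(float).copy()
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]       # elimination multiplier
            U[i, k:] -= L[i, k] * U[k, k:]    # zero column k below the pivot
    return L, U

def cholesky_via_lu(A):
    """For symmetric positive definite A, A = L D L^T with D = diag(U),
    so the Cholesky factor is L @ sqrt(D)."""
    L, U = lu_doolittle(A)
    return L @ np.diag(np.sqrt(np.diag(U)))
```

This is the same two-phase split the record advocates: do the O(n^3) factorization on the fast (optical) processor and hand the simplified triangular solves to a digital one.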

  4. TensorPack: a Maple-based software package for the manipulation of algebraic expressions of tensors in general relativity

    NASA Astrophysics Data System (ADS)

    Huf, P. A.; Carminati, J.

    2015-09-01

    In this paper we: (1) introduce TensorPack, a software package for the algebraic manipulation of tensors in covariant index format in Maple; (2) briefly demonstrate the use of the package with an orthonormal tensor proof of the shearfree conjecture for dust. TensorPack is based on the Riemann and Canon tensor software packages and uses their functions to express tensors in an indexed covariant format. TensorPack uses a string representation as input and provides functions for output in index form. It extends the functionality to basic algebra of tensors, substitution, covariant differentiation, contraction, raising/lowering indices, symmetry functions and other accessory functions. The output can be merged with text in the Maple environment to create a full working document with embedded dynamic functionality. The package offers potential for manipulation of indexed algebraic tensor expressions in a flexible software environment.

  5. Role of the tensor interaction in He isotopes with a tensor-optimized shell model

    SciTech Connect

    Myo, Takayuki; Umeya, Atsushi; Toki, Hiroshi; Ikeda, Kiyomi

    2011-09-15

    We studied the role of the tensor interaction in He isotopes systematically on the basis of the tensor-optimized shell model (TOSM). We use a bare nucleon-nucleon interaction AV8{sup '} obtained from nucleon-nucleon scattering data. The short-range correlation is treated in the unitary correlation operator method (UCOM). Using the TOSM + UCOM approach, we investigate the role of tensor interaction on each spectrum in He isotopes. It is found that the tensor interaction enhances the LS splitting energy observed in {sup 5}He, in which the p{sub 1/2} and p{sub 3/2} orbits play different roles on the tensor correlation. In {sup 6,7,8}He, the low-lying states containing extra neutrons in the p{sub 3/2} orbit gain the tensor contribution. On the other hand, the excited states containing extra neutrons in the p{sub 1/2} orbit lose the tensor contribution due to the Pauli-blocking effect with the 2p2h states in the {sup 4}He core configuration.

  6. Blue running of the primordial tensor spectrum

    SciTech Connect

    Gong, Jinn-Ouk

    2014-07-01

    We examine the possibility of positive spectral index of the power spectrum of the primordial tensor perturbation produced during inflation in the light of the detection of the B-mode polarization by the BICEP2 collaboration. We find a blue tilt is in general possible when the slow-roll parameter decays rapidly. We present two known examples in which a positive spectral index for the tensor power spectrum can be obtained. We also briefly discuss other consistency tests for further studies on inflationary dynamics.

  7. Tensor distinction of domains in ferroic crystals

    NASA Astrophysics Data System (ADS)

    Litvin, D. B.

    2009-10-01

    Ferroic crystals contain two or more domains and may be distinguished by the values of components of tensorial physical properties of the domains. We have extended Aizu’s global tensor distinction by magnetization, polarization, and strain of all domains which arise in a ferroic phase transition to include distinction by toroidal moment, and from phases invariant under time reversal to domains which arise in transitions from all magnetic and non-magnetic phases. For determining possible switching of domains, a domain pair tensor distinction is also considered for all pairs of domains which arise in each ferroic phase transition.

  8. Tensor mesons produced in tau lepton decays

    SciTech Connect

    Lopez Castro, G.; Munoz, J. H.

    2011-05-01

    Light tensor mesons (T=a{sub 2}, f{sub 2} and K{sub 2}*) can be produced in decays of {tau} leptons. In this paper we compute the branching ratios of {tau}{yields}T{pi}{nu} decays by assuming the dominance of intermediate virtual states to model the form factors involved in the relevant hadronic matrix elements. The exclusive f{sub 2}(1270){pi}{sup -} decay mode turns out to have the largest branching ratio, of O(10{sup -4}). Our results indicate that the contribution of tensor meson intermediate states to the three-pseudoscalar channels of {tau} decays are rather small.

  9. Non-isothermal decomposition kinetics of diosgenin

    NASA Astrophysics Data System (ADS)

    Chen, Fei-xiong; Fu, Li; Feng, Lu; Liu, Chuo-chuo; Ren, Bao-zeng

    2013-10-01

    The thermal stability and kinetics of non-isothermal decomposition of diosgenin were studied by thermogravimetry (TG) and differential scanning calorimetry (DSC). The activation energy of the thermal decomposition process was determined from the analysis of TG curves by the methods of Flynn-Wall-Ozawa, Doyle, Šatava-Šesták and Kissinger, respectively. The mechanism of thermal decomposition was determined to be the Avrami-Erofeev equation (n = 1/3, where n is the reaction order) with integral form G(α) = [-ln(1 - α)]^(1/3) (α = 0.10-0.80). Ea and log A [s^-1] were determined to be 44.10 kJ mol^-1 and 3.12, respectively. Moreover, the thermodynamic properties ΔH≠, ΔS≠, and ΔG≠ of this reaction were 38.18 kJ mol^-1, -199.76 J mol^-1 K^-1, and 164.36 kJ mol^-1 in the stage of thermal decomposition.
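The activation parameters quoted above are linked by ΔG≠ = ΔH≠ - TΔS≠, so the three values together imply the temperature at which they were evaluated (about 632 K; this temperature is inferred here, not stated in the abstract). A small consistency sketch, including the Avrami-Erofeev integral form:

```python
import math

# Activation parameters as quoted in the abstract.
dH = 38.18        # kJ/mol
dS = -199.76e-3   # kJ/(mol K)
dG = 164.36       # kJ/mol

# dG = dH - T*dS  =>  T = (dH - dG) / dS, the temperature at which the
# quoted dG is consistent with dH and dS (roughly 632 K).
T = (dH - dG) / dS

# Avrami-Erofeev integral form for n = 1/3: G(alpha) = [-ln(1 - alpha)]^(1/3).
def g(alpha):
    return (-math.log(1.0 - alpha)) ** (1.0 / 3.0)
```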

  10. Erbium hydride decomposition kinetics.

    SciTech Connect

    Ferrizz, Robert Matthew

    2006-11-01

    Thermal desorption spectroscopy (TDS) is used to study the decomposition kinetics of erbium hydride thin films. The TDS results presented in this report are analyzed quantitatively using Redhead's method to yield kinetic parameters (E{sub A} {approx} 54.2 kcal/mol), which are then utilized to predict hydrogen outgassing in vacuum for a variety of thermal treatments. Interestingly, it was found that the activation energy for desorption can vary by more than 7 kcal/mol (0.30 eV) for seemingly similar samples. In addition, small amounts of less-stable hydrogen were observed for all erbium dihydride films. A detailed explanation of several approaches for analyzing thermal desorption spectra to obtain kinetic information is included as an appendix.
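Redhead's peak-maximum method used above estimates the desorption activation energy from a single TDS peak temperature via E = R·Tp·(ln(ν·Tp/β) - 3.64), valid for first-order desorption with an assumed attempt frequency. The numbers below (peak temperature, heating rate, ν = 1e13 1/s) are illustrative choices, not the report's data:

```python
import math

R = 1.987e-3  # gas constant, kcal/(mol K)

def redhead_energy(Tp, beta, nu=1e13):
    """First-order Redhead estimate of the desorption activation energy.

    Tp   : peak desorption temperature (K)
    beta : linear heating rate (K/s)
    nu   : assumed attempt frequency (1/s)
    """
    return R * Tp * (math.log(nu * Tp / beta) - 3.64)

# Illustrative: a desorption peak near 820 K at 1 K/s lands on roughly the
# ~54 kcal/mol scale reported for erbium hydride films.
E = redhead_energy(820.0, 1.0)
```

Note the report's observation that E varies by several kcal/mol between similar samples; in Redhead analysis that sensitivity also enters through the assumed ν.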

  11. Art of spin decomposition

    SciTech Connect

    Chen Xiangsong; Sun Weimin; Wang Fan; Goldman, T.

    2011-04-01

    We analyze the problem of spin decomposition for an interacting system from a natural perspective of constructing angular-momentum eigenstates. We split, from the total angular-momentum operator, a proper part which can be separately conserved for a stationary state. This part commutes with the total Hamiltonian and thus specifies the quantum angular momentum. We first show how this can be done in a gauge-dependent way, by seeking a specific gauge in which part of the total angular-momentum operator vanishes identically. We then construct a gauge-invariant operator with the desired property. Our analysis clarifies what is the most pertinent choice among the various proposals for decomposing the nucleon spin. A similar analysis is performed for extracting a proper part from the total Hamiltonian to construct energy eigenstates.

  12. Application of the base catalyzed decomposition process to treatment of PCB-contaminated insulation and other materials associated with US Navy vessels. Final report

    SciTech Connect

    Schmidt, A.J.; Zacher, A.H.; Gano, S.R.

    1996-09-01

    The BCD process was applied to dechlorination of two types of PCB-contaminated materials generated from Navy vessel decommissioning activities at Puget Sound Naval Shipyard: insulation of wool felt impregnated with PCB, and PCB-containing paint chips/debris from removal of paint from metal surfaces. The BCD process is a two-stage, low-temperature chemical dehalogenation process. In Stage 1, the materials are mixed with sodium bicarbonate and heated to 350 C. The volatilized halogenated contaminants (e.g., PCBs, dioxins, furans), which are collected in a small volume of particulates and granular activated carbon, are decomposed by the liquid-phase reaction (Stage 2) in a stirred-tank reactor, using a high-boiling-point hydrocarbon oil as the reaction medium, with addition of a hydrogen donor, a base (NaOH), and a catalyst. The tests showed that treating wool felt insulation and paint chip wastes with Stage 2 on a large scale is feasible, but compared with current disposal costs for PCB-contaminated materials, using Stage 2 would not be economical at this time. For paint chips generated from shot/sand blasting, the solid-phase BCD process (Stage 1) should be considered, if paint removal activities are accelerated in the future.

  13. Direct Sum Decomposition of Groups

    ERIC Educational Resources Information Center

    Thaheem, A. B.

    2005-01-01

    Direct sum decomposition of Abelian groups appears in almost all textbooks on algebra for undergraduate students. This concept plays an important role in group theory. One simple example of this decomposition is obtained by using the kernel and range of a projection map on an Abelian group. The aim in this pedagogical note is to establish a direct…

  14. Fast Density Inversion Solution for Full Tensor Gravity Gradiometry Data

    NASA Astrophysics Data System (ADS)

    Hou, Zhenlong; Wei, Xiaohui; Huang, Danian

    2016-02-01

    We modify the classical preconditioned conjugate gradient method for full tensor gravity gradiometry data. The resulting parallelized algorithm is implemented on a cluster to achieve rapid density inversions for various scenarios, overcoming the problems of computation time and memory requirements caused by too many iterations. The proposed approach is mainly based on parallel programming using the Message Passing Interface, supplemented by Open Multi-Processing. Our implementation is efficient and scalable, enabling its use with large-scale data. We consider two synthetic models and real survey data from Vinton Dome, US, and demonstrate that our solutions are reliable and feasible.
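The preconditioned conjugate gradient iteration at the core of this approach can be sketched serially; the record's MPI/Open Multi-Processing parallelization and any gravity-specific preconditioner are not reproduced, and a simple Jacobi (diagonal) preconditioner stands in as an assumption:

```python
import numpy as np

def pcg(A, b, M_inv_diag, tol=1e-10, max_iter=500):
    """Preconditioned conjugate gradient for a symmetric positive definite
    system A x = b, with diagonal preconditioner M = diag(A)."""
    x = np.zeros_like(b)
    r = b - A @ x                 # residual
    z = M_inv_diag * r            # preconditioned residual
    p = z.copy()                  # search direction
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)     # step length
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p # conjugate update of the direction
        rz = rz_new
    return x
```

A regularized density inversion reduces to a large SPD system of this form; the preconditioner and the parallel matrix-vector product are what control iteration count and wall-clock time at survey scale.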

  15. Separable Processes Before, During, and After the N400 Elicited by Previously Inferred and New Information: Evidence from Time-Frequency Decompositions

    PubMed Central

    Steele, Vaughn R.; Bernat, Edward M.; van den Broek, Paul; Collins, Paul F.; Patrick, Christopher J.; Marsolek, Chad J.

    2012-01-01

    Successful comprehension during reading often requires inferring information not explicitly presented. This information is readily accessible when subsequently encountered, and a neural correlate of this is an attenuation of the N400 event-related potential (ERP). We used ERPs and time-frequency (TF) analysis to investigate neural correlates of processing inferred information after a causal coherence inference had been generated during text comprehension. Participants read short texts, some of which promoted inference generation. After each text, they performed lexical decisions to target words that were unrelated or inference-related to the preceding text. Consistent with previous findings, inference-related words elicited an attenuated N400 relative to unrelated words. TF analyses revealed unique contributions to the N400 from activity occurring at 1–6 Hz (theta) and 0–2 Hz (delta), supporting the view that multiple, sequential processes underlie the N400. PMID:23165117

  16. Inversion of gravity gradient tensor data: does it provide better resolution?

    NASA Astrophysics Data System (ADS)

    Paoletti, V.; Fedi, M.; Italiano, F.; Florio, G.; Ialongo, S.

    2016-04-01

    The gravity gradient tensor (GGT) has been increasingly used in practical applications, but the advantages and the disadvantages of the analysis of GGT components versus the analysis of the vertical component of the gravity field are still debated. We analyse the performance of joint inversion of GGT components versus separate inversion of the gravity field alone, or of one tensor component. We perform our analysis by inspection of the Picard Plot, a Singular Value Decomposition tool, and analyse both synthetic data and gradiometer measurements carried out at the Vredefort structure, South Africa. We show that the main factors controlling the reliability of the inversion are algebraic ambiguity (the difference between the number of unknowns and the number of available data points) and signal-to-noise ratio. Provided that algebraic ambiguity is kept low and the noise level is small enough so that a sufficient number of SVD components can be included in the regularized solution, we find that: (i) the choice of tensor components involved in the inversion is not crucial to the overall reliability of the reconstructions; (ii) GGT inversion can yield the same resolution as inversion with a denser distribution of gravity data points, but with the advantage of using fewer measurement stations.
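The Picard plot inspected above compares the decay of the SVD coefficients |u_i^T b| with the singular values σ_i: a regularized solution is reliable only over the components where |u_i^T b| decays at least as fast as σ_i. A minimal sketch of the quantities involved, on a synthetic ill-conditioned example (not the Vredefort data):

```python
import numpy as np

def picard_quantities(A, b):
    """Return the singular values, the Fourier coefficients |u_i^T b|,
    and the SVD solution coefficients |u_i^T b| / sigma_i that a
    Picard plot displays."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    utb = np.abs(U.T @ b)
    return s, utb, utb / s

# Synthetic ill-posed example: a Hilbert matrix and a smooth model.
n = 12
A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)
x_true = np.ones(n)
b = A @ x_true

s, utb, coeffs = picard_quantities(A, b)
```

In an inversion one would truncate (or filter) the expansion where noise makes |u_i^T b| level off while σ_i keeps decaying, which is exactly the diagnostic the record applies to GGT versus gravity-only data.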

  17. Decomposition, lookup, and recombination: MEG evidence for the full decomposition model of complex visual word recognition.

    PubMed

    Fruchter, Joseph; Marantz, Alec

    2015-04-01

    There is much evidence that visual recognition of morphologically complex words (e.g., teacher) proceeds via a decompositional route, first involving recognition of their component morphemes (teach + -er). According to the Full Decomposition model, after the visual decomposition stage, followed by morpheme lookup, there is a final "recombination" stage, in which the decomposed morphemes are combined and the well-formedness of the complex form is evaluated. Here, we use MEG to provide evidence for the temporally-differentiated stages of this model. First, we demonstrate an early effect of derivational family entropy, corresponding to the stem lookup stage; this is followed by a surface frequency effect, corresponding to the later recombination stage. We also demonstrate a late effect of a novel statistical measure, semantic coherence, which quantifies the gradient semantic well-formedness of complex words. Our findings illustrate the usefulness of corpus measures in investigating the component processes within visual word recognition. PMID:25797098

  18. Hierarchical decomposition model for reconfigurable architecture

    NASA Astrophysics Data System (ADS)

    Erdogan, Simsek; Wahab, Abdul

    1996-10-01

    This paper introduces a systematic approach for abstract modeling of VLSI digital systems using a hierarchical decomposition process and HDL. In particular, the modeling of the back propagation neural network on a massively parallel reconfigurable hardware is used to illustrate the design process rather than toy examples. Based on the design specification of the algorithm, a functional model is developed through successive refinement and decomposition for execution on the reconfiguration machine. First, a top- level block diagram of the system is derived. Then, a schematic sheet of the corresponding structural model is developed to show the interconnections of the main functional building blocks. Next, the functional blocks are decomposed iteratively as required. Finally, the blocks are modeled using HDL and verified against the block specifications.

  19. Decentralized modal identification of structures using parallel factor decomposition and sparse blind source separation

    NASA Astrophysics Data System (ADS)

    Sadhu, A.; Hazra, B.; Narasimhan, S.

    2013-12-01

    In this paper, a novel decentralized modal identification method is proposed utilizing the concepts of sparse blind source separation (BSS) and parallel factor decomposition. Unlike popular ambient modal identification methods which require large arrays of simultaneous vibration measurements, the decentralized algorithm presented here operates on partial measurements, utilizing a sub-set of sensors at-a-time. Mathematically, this leads to an underdetermined source separation problem, which is addressed using sparsifying wavelet transforms. The proposed method builds on a previously presented concept by the authors, which utilizes the stationary wavelet packet transform (SWPT) to generate an over-complete dictionary of sparse bases. However, the redundant SWPT can be computationally intensive depending on the bandwidth of the signals and the sampling frequency of the vibration measurements. This issue of computational burden is alleviated through a new method proposed here, which is based on a multi-linear algebra tool called PARAllel FACtor (PARAFAC) decomposition. At the core of this method, the wavelet packet decomposition coefficients are used to form a covariance tensor, followed by PARAFAC tensor decomposition to separate the modal responses. The underdetermined source identifiability of PARAFAC enables source separation in wavelet packet coefficients with considerable mode mixing, thereby relaxing the conditions to generate over-complete bases, thus reducing the computational burden. The proposed method is validated using a series of numerical simulations followed by an implementation on recorded ambient vibration measurements obtained from the UCLA factor building.
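At the core of the method, PARAFAC expresses a three-way array as a sum of rank-one terms, X[i,j,k] ≈ Σ_r A[i,r] B[j,r] C[k,r]. A minimal alternating-least-squares sketch of generic CP/PARAFAC (the authors' wavelet-packet covariance pipeline and underdetermined BSS stage are not reproduced):

```python
import numpy as np

def khatri_rao(A, B):
    # Column-wise Kronecker product: row index (i, j) with j varying fastest.
    return np.einsum('ir,jr->ijr', A, B).reshape(-1, A.shape[1])

def cp_als(X, rank, n_iter=500, seed=0):
    """Rank-`rank` CP/PARAFAC decomposition of a 3-way array via
    alternating least squares on the three mode unfoldings."""
    I, J, K = X.shape
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((I, rank))
    B = rng.standard_normal((J, rank))
    C = rng.standard_normal((K, rank))
    X1 = X.reshape(I, J * K)                     # mode-1 unfolding
    X2 = np.moveaxis(X, 1, 0).reshape(J, I * K)  # mode-2 unfolding
    X3 = np.moveaxis(X, 2, 0).reshape(K, I * J)  # mode-3 unfolding
    for _ in range(n_iter):
        A = X1 @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = X2 @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = X3 @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C
```

In the modal-identification setting, X would be the covariance tensor built from wavelet packet coefficients, and the recovered factor matrices carry the separated modal responses.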

  20. Positivity of linear maps under tensor powers

    NASA Astrophysics Data System (ADS)

    Müller-Hermes, Alexander; Reeb, David; Wolf, Michael M.

    2016-01-01

    We investigate linear maps between matrix algebras that remain positive under tensor powers, i.e., under tensoring with n copies of themselves. Completely positive and completely co-positive maps are trivial examples of this kind. We show that for every n ∈ ℕ, there exist non-trivial maps with this property and that for two-dimensional Hilbert spaces there is no non-trivial map for which this holds for all n. For higher dimensions, we reduce the existence question of such non-trivial "tensor-stable positive maps" to a one-parameter family of maps and show that an affirmative answer would imply the existence of non-positive partial transpose bound entanglement. As an application, we show that any tensor-stable positive map that is not completely positive yields an upper bound on the quantum channel capacity, which for the transposition map gives the well-known cb-norm bound. We, furthermore, show that the latter is an upper bound even for the local operations and classical communications-assisted quantum capacity, and that moreover it is a strong converse rate for this task.
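The transposition map mentioned above is the canonical example of a positive but not completely positive map: its Choi matrix is the SWAP operator, which has a negative eigenvalue. A quick numerical check of this standard fact:

```python
import numpy as np

def choi_matrix(phi, d):
    """Choi matrix J(phi) = sum_{ij} E_ij (x) phi(E_ij) for a linear map
    phi acting on d x d matrices."""
    J = np.zeros((d * d, d * d), dtype=complex)
    for i in range(d):
        for j in range(d):
            E = np.zeros((d, d))
            E[i, j] = 1.0
            J += np.kron(E, phi(E))
    return J

d = 2
J_T = choi_matrix(lambda X: X.T, d)  # Choi matrix of the transpose map

# Transposition preserves eigenvalues of Hermitian inputs (so it is
# positive), but its Choi matrix -- here the SWAP operator -- has a
# negative eigenvalue, so the map is not completely positive.
eigs = np.linalg.eigvalsh(J_T)
```

Tensor-stability sharpens exactly this distinction: positivity of phi tensored with n copies of itself, rather than with arbitrary identity maps.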

  1. Radiation Forces and Torques without Stress (Tensors)

    ERIC Educational Resources Information Center

    Bohren, Craig F.

    2011-01-01

    To understand radiation forces and torques or to calculate them does not require invoking photon or electromagnetic field momentum transfer or stress tensors. According to continuum electromagnetic theory, forces and torques exerted by radiation are a consequence of electric and magnetic fields acting on charges and currents that the fields induce…

  2. Cosmic Ray Diffusion Tensor Throughout the Heliosphere

    NASA Astrophysics Data System (ADS)

    Pei, C.; Bieber, J. W.; Breech, B.; Burger, R. A.; Clem, J.; Matthaeus, W. H.

    2008-12-01

    We calculate the cosmic ray diffusion tensor based on a recently developed model of magnetohydrodynamic (MHD) turbulence in the expanding solar wind [Breech et al., 2008]. Parameters of this MHD model are tuned by using published observations from Helios, Voyager 2, and Ulysses. We present solutions for two turbulence parameter sets and derive the characteristics of the cosmic ray diffusion tensor for each. We determine the parallel diffusion coefficient of the cosmic ray following the method presented in Bieber et al. [1995]. We use the nonlinear guiding center (NLGC) theory to obtain the perpendicular diffusion coefficient of the cosmic ray [Matthaeus et al. 2003]. We find that (1) the radial mean free path decreases from 1 AU to 20 AU for both turbulence scenarios; (2) after 40 AU the radial mean free path is nearly constant; (3) the radial mean free path is dominated by the parallel component before 20 AU, after which the perpendicular component becomes important; (4) the rigidity P dependence of the parallel component of the diffusion tensor is proportional to P^0.404 for one turbulence scenario and P^0.374 for the other at 1 AU from 0.1 GV to 10 GV, but in the outer heliosphere its dependence becomes stronger above 4 GV; (5) the rigidity P dependence of the perpendicular component of the diffusion tensor is very weak. Supported by NASA Heliophysics Guest Investigator grant NNX07AH73G and by NASA Heliophysics Theory grant NNX08AI47G.

  3. Spacetime encodings. III. Second order Killing tensors

    SciTech Connect

    Brink, Jeandrew

    2010-01-15

    This paper explores the Petrov type D, stationary axisymmetric vacuum (SAV) spacetimes that were found by Carter to have separable Hamilton-Jacobi equations, and thus admit a second-order Killing tensor. The derivation of the spacetimes presented in this paper borrows from ideas about dynamical systems, and illustrates concepts that can be generalized to higher-order Killing tensors. The relationship between the components of the Killing equations and metric functions are given explicitly. The origin of the four separable coordinate systems found by Carter is explained and classified in terms of the analytic structure associated with the Killing equations. A geometric picture of what the orbital invariants may represent is built. Requiring that a SAV spacetime admits a second-order Killing tensor is very restrictive, selecting very few candidates from the group of all possible SAV spacetimes. This restriction arises due to the fact that the consistency conditions associated with the Killing equations require that the field variables obey a second-order differential equation, as opposed to a fourth-order differential equation that imposes the weaker condition that the spacetime be SAV. This paper introduces ideas that could lead to the explicit computation of more general orbital invariants in the form of higher-order Killing tensors.

  4. Nonlinear symmetries on spaces admitting Killing tensors

    NASA Astrophysics Data System (ADS)

    Visinescu, Mihai

    2010-04-01

    Nonlinear symmetries corresponding to Killing tensors are investigated. The intimate relation between Killing-Yano tensors and non-standard supersymmetries is pointed out. Gravitational anomalies are absent if the hidden symmetry is associated with a Killing-Yano tensor. In the case of nonlinear symmetries, the dynamical algebras of the Dirac-type operators are more involved and can be organized as infinite dimensional algebras or superalgebras. The general results are applied to some concrete spaces involved in theories of modern physics. As a first example, we consider the 4-dimensional Euclidean Taub-NUT space and its generalizations introduced by Iwai and Katayama. We present the infinite dimensional superalgebra of Dirac-type operators on Taub-NUT space, which can be seen as a graded loop superalgebra of the Kac-Moody type. The axial anomaly, interpreted as the index of the Dirac operator, is computed for the generalized Taub-NUT metrics. Finally, the existence of conformal Killing-Yano tensors is investigated for some spaces with mixed Sasakian structures.

  5. Efficient Anisotropic Filtering of Diffusion Tensor Images

    PubMed Central

    Xu, Qing; Anderson, Adam W.; Gore, John C.; Ding, Zhaohua

    2009-01-01

    To improve the accuracy of structural and architectural characterization of living tissue with diffusion tensor imaging, an efficient smoothing algorithm is presented for reducing noise in diffusion tensor images. The algorithm is based on anisotropic diffusion filtering, which allows both image detail preservation and noise reduction. However, traditional numerical schemes for anisotropic filtering are inefficient and inaccurate because of their poor stability and first-order time accuracy. To address this, an unconditionally stable semi-implicit Craig-Sneyd scheme with second-order time accuracy is adopted in our anisotropic filtering. By allowing a large step size, unconditional stability lets this scheme take far fewer iterations, and thus less computation time, than the explicit scheme to achieve a given degree of smoothing. Second-order time accuracy allows the algorithm to reduce noise more effectively than a first-order scheme with the same total iteration time. Both the efficiency and the effectiveness are quantitatively evaluated on synthetic and in vivo human brain diffusion tensor images, and these tests demonstrate that our algorithm is an efficient and effective tool for denoising diffusion tensor images. PMID:20061113
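    The step-size constraint that motivates the semi-implicit scheme is easy to see in a minimal explicit scheme. The sketch below is a generic explicit Perona-Malik anisotropic diffusion filter in NumPy (an illustration of the idea, not the Craig-Sneyd scheme of the paper): it smooths noise while the edge-stopping conductance preserves a step edge, but its explicit update is only stable for small time steps.

    ```python
    import numpy as np

    def perona_malik_step(u, dt=0.2, kappa=0.1):
        """One explicit anisotropic (Perona-Malik) diffusion step on a 2-D
        image. Stability of this explicit update requires a small dt
        (roughly dt <= 0.25 in 2-D), the limit a semi-implicit scheme removes."""
        # neighbour differences (periodic boundaries via np.roll)
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # edge-stopping conductance: small where local gradients are large
        g = lambda d: np.exp(-((d / kappa) ** 2))
        return u + dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)

    # denoise a noisy step edge
    rng = np.random.default_rng(0)
    clean = np.zeros((32, 32))
    clean[:, 16:] = 1.0
    noisy = clean + 0.05 * rng.standard_normal(clean.shape)
    u = noisy.copy()
    for _ in range(20):
        u = perona_malik_step(u)
    ```

    Twenty explicit iterations at dt = 0.2 reduce the noise while keeping the edge; a semi-implicit scheme could take the same total diffusion time in far fewer, larger steps.
    
    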

  6. Wood decomposition as influenced by invertebrates.

    PubMed

    Ulyshen, Michael D

    2016-02-01

    The diversity and habitat requirements of invertebrates associated with dead wood have been the subjects of hundreds of studies in recent years but we still know very little about the ecological or economic importance of these organisms. The purpose of this review is to examine whether, how and to what extent invertebrates affect wood decomposition in terrestrial ecosystems. Three broad conclusions can be reached from the available literature. First, wood decomposition is largely driven by microbial activity but invertebrates also play a significant role in both temperate and tropical environments. Primary mechanisms include enzymatic digestion (involving both endogenous enzymes and those produced by endo- and ectosymbionts), substrate alteration (tunnelling and fragmentation), biotic interactions and nitrogen fertilization (i.e. promoting nitrogen fixation by endosymbiotic and free-living bacteria). Second, the effects of individual invertebrate taxa or functional groups can be accelerative or inhibitory but the cumulative effect of the entire community is generally to accelerate wood decomposition, at least during the early stages of the process (most studies are limited to the first 2-3 years). Although methodological differences and design limitations preclude meta-analysis, studies aimed at quantifying the contributions of invertebrates to wood decomposition commonly attribute 10-20% of wood loss to these organisms. Finally, some taxa appear to be particularly influential with respect to promoting wood decomposition. These include large wood-boring beetles (Coleoptera) and termites (Termitoidae), especially fungus-farming macrotermitines. The presence or absence of these species may be more consequential than species richness and the influence of invertebrates is likely to vary biogeographically. PMID:25424353

  7. Decomposition of Amino Diazeniumdiolates (NONOates): Molecular Mechanisms

    SciTech Connect

    Shaikh, Nizamuddin; Valiev, Marat; Lymar, Sergei V.

    2014-08-23

    Although diazeniumdiolates (X[N(O)NO]-) are extensively used in biochemical, physiological, and pharmacological studies due to their ability to slowly release NO and/or its congeneric nitroxyl, the mechanisms of these processes remain obscure. In this work, we used a combination of spectroscopic, kinetic, and computational techniques to arrive at a qualitatively consistent molecular mechanism for decomposition of amino diazeniumdiolates (amino NONOates: R2N[N(O)NO]-, where R = -N(C2H5)2 (1), -N(C3H4NH2)2 (2), or -N(C2H4NH2)2 (3)). Decomposition of these NONOates is triggered by protonation of their [NN(O)NO]- group, with apparent pKa and decomposition rate constants of 4.6 and 1 s^-1 for 1-H, 3.5 and 83 × 10^-3 s^-1 for 2-H, and 3.8 and 3.3 × 10^-3 s^-1 for 3-H. Although protonation occurs mainly on the O atoms of the functional group, only the minor R2N(H)N(O)NO tautomer (population ~0.01% for 1) undergoes the N-N heterolytic bond cleavage (k ~10^2 s^-1 for 1) leading to amine and NO. Decompositions of protonated amino NONOates are strongly temperature-dependent; activation enthalpies are 20.4 and 19.4 kcal/mol for 1 and 2, respectively, which includes contributions from both the tautomerization and bond cleavage. The bond cleavage rates exhibit exceptional sensitivity to the nature of R substituents, which strongly modulate activation entropy. At pH < 2, decompositions of all these NONOates are subject to additional acid catalysis that occurs through di-protonation of the [NN(O)NO]- group.

  8. Decomposition of amino diazeniumdiolates (NONOates): Molecular mechanisms

    DOE PAGESBeta

    Shaikh, Nizamuddin; Valiev, Marat; Lymar, Sergei V.

    2014-08-23

    Although diazeniumdiolates (X[N(O)NO]-) are extensively used in biochemical, physiological, and pharmacological studies due to their ability to release NO and/or its congeneric nitroxyl, the mechanisms of these processes remain obscure. In this work, we used a combination of spectroscopic, kinetic, and computational techniques to arrive at a quantitatively consistent molecular mechanism for decomposition of amino diazeniumdiolates (amino NONOates: R2N[N(O)NO]-, where R = -N(C2H5)2 (1), -N(C3H4NH2)2 (2), or -N(C2H4NH2)2 (3)). Decomposition of these NONOates is triggered by protonation of their [NN(O)NO]- group, with apparent pKa and decomposition rate constants of 4.6 and 1 s^-1 for 1; 3.5 and 0.083 s^-1 for 2; and 3.8 and 0.0033 s^-1 for 3. Although protonation occurs mainly on the O atoms of the functional group, only the minor R2N(H)N(O)NO tautomer (population ~10^-7 for 1) undergoes the N-N heterolytic bond cleavage (kd ~10^7 s^-1 for 1) leading to amine and NO. Decompositions of protonated amino NONOates are strongly temperature-dependent; activation enthalpies are 20.4 and 19.4 kcal/mol for 1 and 2, respectively, which includes contributions from both the tautomerization and bond cleavage. Thus, the bond cleavage rates exhibit exceptional sensitivity to the nature of R substituents, which strongly modulate activation entropy. At pH < 2, decompositions of all three NONOates investigated are subject to additional acid catalysis that occurs through di-protonation of the [NN(O)NO]- group.

  9. Role of Tensor Force in Light Nuclei Based on the Tensor Optimized Shell Model

    SciTech Connect

    Myo, Takayuki; Umeya, Atsushi; Ikeda, Kiyomi; Valverde, Manuel; Toki, Hiroshi

    2011-10-21

    We propose a new theoretical approach to describe nuclei using the bare nuclear interaction, in which the tensor and short-range correlations are described with the tensor optimized shell model (TOSM) and the unitary correlation operator method (UCOM), respectively. We show results for the He isotopes obtained with TOSM+UCOM, such as the importance of the pn pair correlated by the tensor force, and the structure differences between the LS partners, the 3/2^- and 1/2^- states of ^5He. We also apply TOSM to the analysis of the two-neutron halo nucleus ^11Li, on the basis of a "core described in TOSM"+n+n model. The halo formation of ^11Li is naturally explained: the tensor correlation in the ^9Li core is Pauli-blocked by the p-wave neutrons in ^11Li, and the s-wave component of the halo structure is enhanced.

  10. Heuristic decomposition for non-hierarchic systems

    NASA Technical Reports Server (NTRS)

    Bloebaum, Christina L.; Hajela, P.

    1991-01-01

    Design and optimization are substantially more complex in multidisciplinary and large-scale engineering applications due to inherently coupled interactions. The paper introduces a quasi-procedural methodology for multidisciplinary optimization that is applicable to nonhierarchic systems. The necessary decision-making support for the design process is provided by means of an embedded expert systems capability. The method employs a decomposition approach whose modularity allows for implementation of specialized methods for analysis and optimization within disciplines.

  11. Hardening of aged duplex stainless steels by spinodal decomposition.

    PubMed

    Danoix, F; Auger, P; Blavette, D

    2004-06-01

    Mechanical properties, such as hardness and impact toughness, of ferrite-containing stainless steels are greatly affected by long-term aging at intermediate temperatures. It is known that the alpha-alpha' spinodal decomposition occurring in the iron-chromium-based ferrite is responsible for this aging susceptibility. This decomposition can be characterized unambiguously by atom probe analysis, allowing comparison both with the existing theories of spinodal decomposition and the evolution of some mechanical properties. It is then possible to predict the evolution of hardness of industrial components during service, based on the detailed knowledge of the involved aging process. PMID:15233853

  12. Thermal Decomposition of Copper (II) Calcium (II) Formate

    NASA Astrophysics Data System (ADS)

    Leyva, A. G.; Polla, G.; de Perazzo, P. K.; Lanza, H.; de Benyacar, M. A. R.

    1996-05-01

    The presence of different stages in the thermal decomposition process of CuCa(HCOO)4 has been established by means of TGA at different heating rates, X-ray powder diffraction of quenched samples, and DSC methods. During the first stage, decomposition of one of the two copper formate structural units contained in the unit cell takes place. The presence of CuCa2(HCOO)6 has been detected. Calcium formate structural units break down at higher temperatures; the last decomposition peak corresponds to the appearance of different calcium-copper oxides.

  13. Log-Euclidean metrics for fast and simple calculus on diffusion tensors.

    PubMed

    Arsigny, Vincent; Fillard, Pierre; Pennec, Xavier; Ayache, Nicholas

    2006-08-01

    Diffusion tensor imaging (DT-MRI or DTI) is an emerging imaging modality whose importance has been growing considerably. However, the processing of this type of data (i.e., symmetric positive-definite matrices), called "tensors" here, has proved difficult in recent years. Usual Euclidean operations on matrices suffer from many defects on tensors, which have led to the use of many ad hoc methods. Recently, affine-invariant Riemannian metrics have been proposed as a rigorous and general framework in which these defects are corrected. These metrics have excellent theoretical properties and provide powerful processing tools, but also lead in practice to complex and slow algorithms. To remedy this limitation, a new family of Riemannian metrics called Log-Euclidean is proposed in this article. They also have excellent theoretical properties and yield similar results in practice, but with much simpler and faster computations. This new approach is based on a novel vector space structure for tensors. In this framework, Riemannian computations can be converted into Euclidean ones once tensors have been transformed into their matrix logarithms. Theoretical aspects are presented and the Euclidean, affine-invariant, and Log-Euclidean frameworks are compared experimentally. The comparison is carried out on interpolation and regularization tasks on synthetic and clinical 3D DTI data. PMID:16788917
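    The Log-Euclidean recipe described above can be sketched in a few lines (assuming NumPy/SciPy): map each SPD tensor to its matrix logarithm, do ordinary Euclidean operations in that vector space, and map back with the matrix exponential.

    ```python
    import numpy as np
    from scipy.linalg import expm, logm

    def log_euclidean_mean(tensors):
        """Log-Euclidean mean of SPD matrices: average the matrix
        logarithms in the vector space, then map back with expm."""
        logs = [logm(t) for t in tensors]
        return expm(np.mean(logs, axis=0))

    # Two diagonal "diffusion tensors": the Log-Euclidean mean interpolates
    # eigenvalues geometrically (sqrt(1*4) = 2) rather than arithmetically
    # (2.5), avoiding the eigenvalue swelling of the plain Euclidean average.
    a = np.diag([1.0, 1.0, 1.0])
    b = np.diag([4.0, 1.0, 1.0])
    m = log_euclidean_mean([a, b])
    ```

    Interpolation, averaging, and regularization all reduce to Euclidean computations on the logarithms, which is the source of the speedup over the affine-invariant framework.
    
    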

  14. Three-dimensional manifolds with special Cotton tensor

    NASA Astrophysics Data System (ADS)

    Calviño-Louzao, E.; García-Río, E.; Seoane-Bascoy, J.; Vázquez-Lorenzo, R.

    2015-10-01

    The Cotton tensor of three-dimensional Walker manifolds is investigated. A complete description of all locally conformally flat Walker three-manifolds is given, as well as that of Walker manifolds whose Cotton tensor is either a Codazzi or a Killing tensor.

  15. CP decomposition approach to blind separation for DS-CDMA system using a new performance index

    NASA Astrophysics Data System (ADS)

    Rouijel, Awatif; Minaoui, Khalid; Comon, Pierre; Aboutajdine, Driss

    2014-12-01

    In this paper, we present a canonical polyadic (CP) tensor decomposition isolating the scaling matrix. This has two major implications: (i) the problem conditioning shows up explicitly and can be controlled through a constraint on the so-called coherences, and (ii) a performance criterion concerning the factor matrices can be exactly calculated and is more realistic than performance metrics used in the literature. Two new algorithms optimizing the CP decomposition based on gradient descent are proposed. This decomposition is illustrated by an application to direct-sequence code division multiple access (DS-CDMA) systems; computer simulations are provided and demonstrate the good behavior of these algorithms compared to others in the literature.
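    To make the CP model itself concrete, here is a minimal alternating-least-squares baseline in NumPy (a standard textbook approach, not the gradient-descent algorithms or coherence constraints proposed in the paper):

    ```python
    import numpy as np
    from scipy.linalg import khatri_rao

    def unfold(T, mode):
        """Mode-n unfolding of a 3-way tensor into a matrix."""
        return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

    def cp_als(T, rank, n_iter=300, seed=0):
        """Rank-R CP decomposition by alternating least squares:
        T[i,j,k] ~ sum_r A[i,r] * B[j,r] * C[k,r]."""
        rng = np.random.default_rng(seed)
        A, B, C = (rng.standard_normal((s, rank)) for s in T.shape)
        for _ in range(n_iter):
            # update each factor with the other two fixed (Khatri-Rao product)
            A = unfold(T, 0) @ np.linalg.pinv(khatri_rao(B, C)).T
            B = unfold(T, 1) @ np.linalg.pinv(khatri_rao(A, C)).T
            C = unfold(T, 2) @ np.linalg.pinv(khatri_rao(A, B)).T
        return A, B, C

    # recover an exactly rank-2 tensor
    rng = np.random.default_rng(1)
    A0, B0, C0 = (rng.standard_normal((s, 2)) for s in (4, 5, 6))
    T = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
    A, B, C = cp_als(T, rank=2)
    T_hat = np.einsum('ir,jr,kr->ijk', A, B, C)
    rel_err = np.linalg.norm(T_hat - T) / np.linalg.norm(T)
    ```

    On a noiseless rank-2 tensor the ALS iterations drive the relative reconstruction error toward zero; the paper's gradient algorithms address the conditioning issues that can slow exactly this kind of iteration.
    
    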

  16. Thermal Decomposition Kinetics of HMX

    SciTech Connect

    Burnham, A K; Weese, R K

    2004-05-05

    Nucleation-growth kinetic expressions are derived for thermal decomposition of HMX from a variety of types of data, including mass loss for isothermal and constant rate heating in an open pan, and heat flow for isothermal and constant rate heating in open and closed pans. Conditions are identified in which thermal runaway is small to nonexistent, which typically means temperatures less than 255 C and heating rates less than 1 C/min. Activation energies are typically in the 140 to 150 kJ/mol regime for open pan experiments and about 160 kJ/mol for sealed pan experiments. Our activation energies are about 10% lower than those derived from data supplied by the University of Utah, which we consider the best previous work. The reaction clearly displays more than one process, and most likely three processes, which are most clearly evident in open pan experiments. The reaction is accelerated for closed pan experiments, and one global reaction appears to fit the data well.

  17. Thermal Decomposition Kinetics of HMX

    SciTech Connect

    Burnham, A K; Weese, R K

    2005-03-17

    Nucleation-growth kinetic expressions are derived for thermal decomposition of HMX from a variety of types of data, including mass loss for isothermal and constant rate heating in an open pan, and heat flow for isothermal and constant rate heating in open and closed pans. Conditions are identified in which thermal runaway is small to nonexistent, which typically means temperatures less than 255 C and heating rates less than 1 C/min. Activation energies are typically in the 140 to 165 kJ/mol regime for open pan experiments and about 150-165 kJ/mol for sealed-pan experiments. The reaction clearly displays more than one process, and most likely three processes, which are most clearly evident in open pan experiments. The reaction is accelerated for closed pan experiments, and one global reaction fits the data fairly well. Our A-E values lie in the middle of the values given in a compensation-law plot by Brill et al. (1994). Comparison with additional open and closed low temperature pyrolysis experiments supports an activation energy of 165 kJ/mol at 10% conversion.

  18. Thermal Decomposition Kinetics of HMX

    SciTech Connect

    Burnham, A K; Weese, R K

    2004-11-18

    Nucleation-growth kinetic expressions are derived for thermal decomposition of HMX from a variety of thermal analysis data types, including mass loss for isothermal and constant rate heating in an open pan and heat flow for isothermal and constant rate heating in open and closed pans. Conditions are identified in which thermal runaway is small to nonexistent, which typically means temperatures less than 255 C and heating rates less than 1 C/min. Activation energies are typically in the 140 to 165 kJ/mol range for open pan experiments and about 150 to 165 kJ/mol for sealed pan experiments. Our activation energies tend to be slightly lower than those derived from data supplied by the University of Utah, which we consider the best previous thermal analysis work. The reaction clearly displays more than one process, and most likely three processes, which are most clearly evident in open pan experiments. The reaction is accelerated in closed pan experiments, and one global reaction appears to fit the data well. Comparison of our rate measurements with additional literature sources for open and closed low temperature pyrolysis from Sandia gives a likely activation energy of 165 kJ/mol at 10% conversion.
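    The reported activation energies translate into rate constants through the Arrhenius law k = A exp(-Ea/RT). The sketch below uses an assumed, purely illustrative pre-exponential factor (only the 140 and 165 kJ/mol values come from the abstracts) to show why a higher sealed-pan activation energy implies a steeper temperature dependence near the 255 C runaway threshold:

    ```python
    import math

    R = 8.314  # gas constant, J/(mol K)

    def arrhenius_k(A, Ea_kJ_per_mol, T_celsius):
        """Arrhenius rate constant k = A * exp(-Ea / (R*T))."""
        T = T_celsius + 273.15
        return A * math.exp(-Ea_kJ_per_mol * 1e3 / (R * T))

    A = 1.0e13  # 1/s, hypothetical pre-exponential factor (not from the report)
    # rate increase over a 10 C step, for the low and high activation energies
    ratio_140 = arrhenius_k(A, 140.0, 245.0) / arrhenius_k(A, 140.0, 235.0)
    ratio_165 = arrhenius_k(A, 165.0, 245.0) / arrhenius_k(A, 165.0, 235.0)
    # the 165 kJ/mol process accelerates faster with temperature
    ```

    Both ratios exceed 1 (roughly a factor of 2 per 10 C here), and the higher-Ea process grows faster, which is why small errors in Ea matter for extrapolating to runaway conditions.
    
    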

  19. Decomposition in northern Minnesota peatlands

    SciTech Connect

    Farrish, K.W.

    1985-01-01

    Decomposition in peatlands was investigated in northern Minnesota. Four sites, an ombrotrophic raised bog, an ombrotrophic perched bog, and two groundwater minerotrophic fens, were studied. Decomposition rates of peat and paper were estimated using mass-loss techniques. Environmental and substrate factors that were most likely to be responsible for limiting decomposition were monitored. Laboratory incubation experiments complemented the field work. Mass loss over one year in one of the bogs ranged from 11 percent in the upper 10 cm of hummocks to 1 percent at 60 to 100 cm depth in hollows. Regression analysis of the data for that bog predicted no mass loss below 87 cm. Decomposition estimates on an area basis were 2720 and 6460 kg/ha yr for the two bogs, and 17,000 and 5900 kg/ha yr for the two fens. Environmental factors found to limit decomposition in these peatlands were reducing/anaerobic conditions below the water table and cool peat temperatures. Substrate factors found to limit decomposition were low pH, high content of resistant organics such as lignin, and shortages of available N and K. Greater groundwater influence was found to favor decomposition by raising the pH and perhaps by introducing limited amounts of dissolved oxygen.

  20. A preliminary report on the development of MATLAB tensor classes for fast algorithm prototyping.

    SciTech Connect

    Bader, Brett William; Kolda, Tamara Gibson

    2004-07-01

    We describe three MATLAB classes for manipulating tensors in order to allow fast algorithm prototyping. A tensor is a multidimensional or N-way array. We present a tensor class for manipulating tensors which allows for tensor multiplication and 'matricization.' We have further added two classes for representing tensors in decomposed format: cp_tensor and tucker_tensor. We demonstrate the use of these classes by implementing several algorithms that have appeared in the literature.
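    For readers outside MATLAB, the matricization operation these classes support has a direct NumPy analogue. This is a sketch of the generic operation (the function names are illustrative, not the toolbox API):

    ```python
    import numpy as np

    def matricize(T, mode):
        """Mode-n matricization ('unfolding'): rearrange an N-way array
        into a matrix whose rows are indexed by the chosen mode."""
        return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

    def fold(M, mode, shape):
        """Inverse of matricize for a tensor of the given original shape."""
        perm_shape = [shape[mode]] + [s for i, s in enumerate(shape) if i != mode]
        return np.moveaxis(M.reshape(perm_shape), 0, mode)

    T = np.arange(24).reshape(2, 3, 4)   # a small 3-way tensor
    M = matricize(T, 1)                  # matrix of shape (3, 8)
    ```

    Matricization followed by the matching fold is lossless, which is what lets tensor algorithms be expressed as ordinary matrix computations on the unfoldings.
    
    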

  1. Identifying Multi-Dimensional Co-Clusters in Tensors Based on Hyperplane Detection in Singular Vector Spaces.

    PubMed

    Zhao, Hongya; Wang, Debby D; Chen, Long; Liu, Xinyu; Yan, Hong

    2016-01-01

    Co-clustering, often called biclustering for two-dimensional data, has found many applications, such as gene expression data analysis and text mining. Nowadays, a variety of multi-dimensional arrays (tensors) frequently occur in data analysis tasks, and co-clustering techniques play a key role in dealing with such datasets. Co-clusters represent coherent patterns and exhibit important properties along all the modes. Development of robust co-clustering techniques is important for the detection and analysis of these patterns. In this paper, a co-clustering method based on hyperplane detection in singular vector spaces (HDSVS) is proposed. Specifically, higher-order singular value decomposition (HOSVD) transforms a tensor into a core part and a singular vector matrix along each mode, whose row vectors can be clustered by a linear grouping algorithm (LGA). Meanwhile, hyperplanar patterns are extracted to support the identification of multi-dimensional co-clusters. To validate HDSVS, a number of synthetic and biological tensors were adopted. The synthetic tensors attested to the favorable performance of this algorithm on noisy or overlapping data. Experiments with gene expression data and lineage data of embryonic cells further verified the reliability of HDSVS on practical problems. Moreover, the detected co-clusters are well consistent with important genetic pathways and gene ontology annotations. Finally, a series of comparisons between HDSVS and state-of-the-art methods on synthetic tensors and a yeast gene expression tensor were implemented, verifying the robust and stable performance of our method. PMID:27598575
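    The HOSVD step at the heart of this pipeline can be sketched in a few lines of NumPy (assuming a dense 3-way tensor; this shows only the decomposition, not the LGA clustering or hyperplane detection):

    ```python
    import numpy as np

    def unfold(T, mode):
        """Mode-n unfolding of a tensor into a matrix."""
        return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

    def mode_multiply(T, M, mode):
        """Multiply tensor T by matrix M along the given mode."""
        return np.moveaxis(np.tensordot(M, T, axes=(1, mode)), 0, mode)

    def hosvd(T):
        """Higher-order SVD: one orthogonal singular-vector matrix per mode
        (the matrices whose rows HDSVS clusters) plus a core tensor."""
        U = [np.linalg.svd(unfold(T, n), full_matrices=False)[0]
             for n in range(T.ndim)]
        core = T
        for n, Un in enumerate(U):
            core = mode_multiply(core, Un.T, n)
        return core, U

    rng = np.random.default_rng(0)
    T = rng.standard_normal((3, 4, 5))
    core, U = hosvd(T)
    # multiplying the core back by each factor recovers the tensor exactly
    T_rec = core
    for n, Un in enumerate(U):
        T_rec = mode_multiply(T_rec, Un, n)
    ```

    Each U[n] has orthonormal columns, and the reconstruction is exact; truncating columns of the U[n] gives the usual low-multilinear-rank approximation.
    
    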

  2. Structural optimization by multilevel decomposition

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, J.; James, B.; Dovi, A.

    1983-01-01

    A method is described for decomposing an optimization problem into a set of subproblems and a coordination problem which preserves coupling between the subproblems. The method is introduced as a special case of multilevel, multidisciplinary system optimization and its algorithm is fully described for two level optimization for structures assembled of finite elements of arbitrary type. Numerical results are given for an example of a framework to show that the decomposition method converges and yields results comparable to those obtained without decomposition. It is pointed out that optimization by decomposition should reduce the design time by allowing groups of engineers, using different computers to work concurrently on the same large problem.

  3. Perfluoropolyalkylether decomposition on catalytic aluminas

    NASA Technical Reports Server (NTRS)

    Morales, Wilfredo

    1994-01-01

    The decomposition of Fomblin Z25, a commercial perfluoropolyalkylether liquid lubricant, was studied using the Penn State Micro-oxidation Test, and a thermal gravimetric/differential scanning calorimetry unit. The micro-oxidation test was conducted using 440C stainless steel and pure iron metal catalyst specimens, whereas the thermal gravimetric/differential scanning calorimetry tests were conducted using catalytic alumina pellets. Analysis of the thermal data, high pressure liquid chromatography data, and x-ray photoelectron spectroscopy data support evidence that there are two different decomposition mechanisms for Fomblin Z25, and that reductive sites on the catalytic surfaces are responsible for the decomposition of Fomblin Z25.

  4. Autonomous Gaussian Decomposition

    NASA Astrophysics Data System (ADS)

    Lindner, Robert R.; Vera-Ciro, Carlos; Murray, Claire E.; Stanimirović, Snežana; Babler, Brian; Heiles, Carl; Hennebelle, Patrick; Goss, W. M.; Dickey, John

    2015-04-01

    We present a new algorithm, named Autonomous Gaussian Decomposition (AGD), for automatically decomposing spectra into Gaussian components. AGD uses derivative spectroscopy and machine learning to provide optimized guesses for the number of Gaussian components in the data, and also their locations, widths, and amplitudes. We test AGD and find that it produces results comparable to human-derived solutions on 21 cm absorption spectra from the 21 cm SPectral line Observations of Neutral Gas with the EVLA (21-SPONGE) survey. We use AGD with Monte Carlo methods to derive the H i line completeness as a function of peak optical depth and velocity width for the 21-SPONGE data, and also show that the results of AGD are stable against varying observational noise intensity. The autonomy and computational efficiency of the method over traditional manual Gaussian fits allow for truly unbiased comparisons between observations and simulations, and for the ability to scale up and interpret the very large data volumes from the upcoming Square Kilometer Array and pathfinder telescopes.
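    The core idea of derivative-based guesses refined by least squares can be illustrated on a toy two-component spectrum (a hedged SciPy sketch of the general technique, not the AGD code; AGD's machine-learning step is omitted):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def two_gaussians(x, a1, m1, s1, a2, m2, s2):
        return (a1 * np.exp(-0.5 * ((x - m1) / s1) ** 2)
                + a2 * np.exp(-0.5 * ((x - m2) / s2) ** 2))

    # synthetic blended "spectrum" with two components plus noise
    rng = np.random.default_rng(0)
    x = np.linspace(-10.0, 10.0, 400)
    y = two_gaussians(x, 1.0, -2.0, 1.0, 0.6, 3.0, 1.5)
    y += 0.005 * rng.standard_normal(x.size)

    # derivative spectroscopy: component centres appear as the deepest
    # local minima of the smoothed second derivative
    kern = np.exp(-0.5 * (np.arange(-15, 16) / 5.0) ** 2)
    ys = np.convolve(y, kern / kern.sum(), mode="same")
    d2 = np.gradient(np.gradient(ys, x), x)
    is_min = (d2 < np.roll(d2, 1)) & (d2 < np.roll(d2, -1))
    idx = np.where(is_min)[0]
    idx = idx[np.argsort(d2[idx])[:2]]      # two deepest minima
    guesses = []
    for i in sorted(idx):
        guesses += [y[i], x[i], 1.0]        # amplitude, centre, width

    # refine the automated guesses by least squares
    popt, _ = curve_fit(two_gaussians, x, y, p0=guesses)
    centres = sorted([popt[1], popt[4]])
    ```

    The automated guesses land close enough to the true centres (-2 and 3) that the nonlinear fit converges without manual intervention, which is the property that makes this style of decomposition scalable to large surveys.
    
    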

  5. Superquadric glyphs for symmetric second-order tensors.

    PubMed

    Schultz, Thomas; Kindlmann, Gordon L

    2010-01-01

    Symmetric second-order tensor fields play a central role in scientific and biomedical studies as well as in image analysis and feature-extraction methods. The utility of displaying tensor field samples has driven the development of visualization techniques that encode the tensor shape and orientation into the geometry of a tensor glyph. With some exceptions, these methods work only for positive-definite tensors (i.e. having positive eigenvalues, such as diffusion tensors). We expand the scope of tensor glyphs to all symmetric second-order tensors in two and three dimensions, gracefully and unambiguously depicting any combination of positive and negative eigenvalues. We generalize a previous method of superquadric glyphs for positive-definite tensors by drawing upon a larger portion of the superquadric shape space, supplemented with a coloring that indicates the quadratic form (including eigenvalue sign). We show that encoding arbitrary eigenvalue magnitudes requires design choices that differ fundamentally from those in previous work on traceless tensors that arise in the study of liquid crystals. Our method starts with a design of 2-D tensor glyphs guided by principles of scale-preservation and symmetry, and creates 3-D glyphs that include the 2-D glyphs in their axis-aligned cross-sections. A key ingredient of our method is a novel way of mapping from the shape space of three-dimensional symmetric second-order tensors to the unit square. We apply our new glyphs to stress tensors from mechanics, geometry tensors and Hessians from image analysis, and rate-of-deformation tensors in computational fluid dynamics. PMID:20975202

  6. ADVANCED OXIDATION: OXALATE DECOMPOSITION TESTING WITH OZONE

    SciTech Connect

    Ketusky, E.; Subramanian, K.

    2012-02-29

    At the Savannah River Site (SRS), oxalic acid is currently considered the preferred agent for chemically cleaning the large underground Liquid Radioactive Waste Tanks. It is applied only in the final stages of emptying a tank, when generally less than 5,000 kg of waste solids remain and slurrying-based removal methods are no longer effective. The use of oxalic acid is preferred because of its combined dissolution and chelating properties, as well as the fact that corrosion of the carbon steel tank walls can be controlled. Although oxalic acid is the preferred agent, there are significant potential downstream impacts, including: (1) degraded evaporator operation; (2) resultant oxalate precipitates taking away critically needed operating volume; and (3) eventual creation of significant volumes of additional feed to salt processing. As an alternative to dealing with the downstream impacts, oxalate decomposition using variations of the ozone-based Advanced Oxidation Process (AOP) was investigated. In general, AOPs use ozone or peroxide and a catalyst to create hydroxyl radicals. Hydroxyl radicals have among the highest oxidation potentials and are commonly used to decompose organics. Although oxalate is considered among the most difficult organics to decompose, the ability of hydroxyl radicals to decompose oxalate is considered well demonstrated. In addition, as AOPs are considered 'green', their use enables any net chemical additions to the waste to be minimized. In order to test the ability to decompose the oxalate and determine the decomposition rates, a test rig was designed in which 10 vol% ozone would be educted into a spent oxalic acid decomposition loop, with the loop maintained at 70 C and recirculated at 40 L/min. Each of the spent oxalic acid streams would be created from three oxalic acid strikes of an F-area simulant (i.e., Purex = high Fe/Al concentration) and an H-area simulant (i.e., H-area modified Purex = high Al/Fe concentration) after nearing

  7. Global moment tensor computation at GFZ Potsdam

    NASA Astrophysics Data System (ADS)

    Saul, J.; Becker, J.; Hanka, W.

    2011-12-01

    As part of its earthquake information service, GFZ Potsdam has started to provide seismic moment tensor solutions for significant earthquakes world-wide. The software used to compute the moment tensors is a GFZ-Potsdam in-house development, which uses the framework of the software SeisComP 3 (Hanka et al., 2010). SeisComP 3 (SC3) is a software package for seismological data acquisition, archival, quality control and analysis. SC3 is developed by GFZ Potsdam with significant contributions from its user community. The moment tensor inversion technique uses a combination of several wave types, time windows and frequency bands depending on magnitude and station distance. Wave types include body, surface and mantle waves as well as the so-called 'W-Phase' (Kanamori and Rivera, 2008). The inversion is currently performed in the time domain only. An iterative centroid search can be performed independently both horizontally and in depth. Moment tensors are currently computed in a semi-automatic fashion. This involves inversions that are performed automatically in near-real time, followed by analyst review prior to publication. The automatic results are quite often good enough to be published without further improvements, sometimes in less than 30 minutes from origin time. In those cases where a manual interaction is still required, the automatic inversion usually does a good job at pre-selecting those traces that are the most relevant for the inversion, keeping the work required for the analyst at a minimum. Our published moment tensors are generally in good agreement with those published by the Global Centroid-Moment-Tensor (GCMT) project for earthquakes above a magnitude of about Mw 5. Additionally we provide solutions for smaller earthquakes above about Mw 4 in Europe, which are normally not analyzed by the GCMT project. We find that for earthquakes above Mw 6, the most robust automatic inversions can usually be obtained using the W-Phase time window. 
The GFZ earthquake

  8. Autocatalytic Decomposition at Shear-Strain Interfaces

    NASA Astrophysics Data System (ADS)

    Kuklja, M. M.; Rashkeev, Sergey N.

    2009-12-01

    Atomic scale mechanisms of the initiation of chemical processes in energetic molecular crystals, leading to decomposition and ultimately to an explosive chain reaction, are far from completely understood. We investigated the onset of the initiation processes in two energetic crystals: diamino-dinitroethylene (DADNE, C2H4N4O4) and triamino-trinitrobenzene (TATB, C6H6N6O6). We suggest that an autocatalytic decomposition mechanism is likely to take place in the DADNE crystal, which is built of corrugated, dashboard-shaped molecular layers, and that the level of the induced shear-strain perturbation between the layers strongly depends upon the presence of interstitial NO2 groups. Unlike this, in TATB, which consists of flat, graphite-like molecular layers, an interstitial NO2 group positioned between two layers produces a local molecular orientation disorder and barely affects the C-NO2 decomposition barrier. Split-off NO2 groups in the interstitial region exhibit a series of exothermic reactions. In DADNE, these reactions start at a lower concentration of interstitial nitro groups, which may be correlated with the higher sensitivity of this material to initiation as compared to TATB.

  9. Gamma-ray decomposition of PCBs

    SciTech Connect

    Mincher, B.J.; Meikrantz, D.H.; Arbon, R.E.; Murphy, R.J.

    1991-12-01

    This program is the Idaho National Engineering Laboratory (INEL) component of a joint collaborative effort with Lawrence Livermore National Laboratory (LLNL). The purpose of this effort is to demonstrate a viable process for breaking down hazardous halogenated organic wastes to simpler, non-hazardous wastes using high energy ionizing radiation. The INEL effort focuses on the use of spent reactor fuel gamma radiation sources to decompose complex wastes such as PCBs. At LLNL, halogenated solvents such as carbon tetrachloride and trichloroethylene are being studied using accelerator radiation sources. The INEL irradiation experiments concentrated on a single PCB congener so that a limited set of decomposition reactions could be studied. The congener 2,2',3,3',4,5',6,6'-octachlorobiphenyl was examined following exposure to various gamma doses at the Advanced Test Reactor (ATR) spent fuel pool. The decomposition rates and products in several solvents are discussed. 7 refs., 13 figs., 1 tab.

  11. Assessing the Uncertainties on Seismic Source Parameters: Towards Realistic Estimates of Moment Tensor Determinations

    NASA Astrophysics Data System (ADS)

    Magnoni, F.; Scognamiglio, L.; Tinti, E.; Casarotti, E.

    2014-12-01

    The seismic moment tensor is one of the most important source parameters, defining the earthquake size and the style of the activated fault. Moment tensor catalogues are routinely used by geoscientists; however, few attempts have been made to assess the possible impact of moment magnitude uncertainties on their analyses. The 2012 May 20 Emilia mainshock is a representative event, since its moment magnitude (Mw) reported in the literature spans between 5.63 and 6.12. An uncertainty of ~0.5 magnitude units leads to a controversial knowledge of the real size of the event. The uncertainty associated with this estimate can be critical for the inference of other seismological parameters, suggesting caution in seismic hazard assessment, Coulomb stress transfer determination and other analyses where self-consistency is important. In this work, we focus on the variability of the moment tensor solution, highlighting the effect of four different velocity models, of different types and ranges of filtering, and of two different methodologies. Using a larger dataset, to better quantify the source parameter uncertainty, we also analyze the variability of the moment tensor solutions with the number, epicentral distance and azimuth of the stations used. We stress that the estimate of seismic moment from moment tensor solutions, as well as the estimates of the other kinematic source parameters, cannot be considered absolute values and should be reported with their related uncertainties, in a reproducible framework characterized by disclosed assumptions and explicit processing workflows.

  12. Towards metal detection and identification for humanitarian demining using magnetic polarizability tensor spectroscopy

    NASA Astrophysics Data System (ADS)

    Dekdouk, B.; Ktistis, C.; Marsh, L. A.; Armitage, D. W.; Peyton, A. J.

    2015-11-01

    This paper presents an inversion procedure to estimate the location and magnetic polarizability tensor of metal targets from broadband electromagnetic induction (EMI) data. The solution of this inversion produces a spectral target signature, which may be used to distinguish metal targets in landmines from harmless clutter. In this process, the response of the metal target is modelled with a dipole moment and fitted to planar EMI data by solving a least-squares minimization problem. A computer simulation platform has been developed using a modelled EMI sensor to produce synthetic data for inversion. The reconstructed tensor is compared with an assumed true solution estimated using a modelled tri-axial Helmholtz coil array. Using test examples, including a sphere with a known analytical solution, the results show that the inversion routine produces tensors accurate to within 12% of the true tensor. A good convergence rate is also demonstrated even when the target location is mis-estimated by a few centimeters. Having verified the inversion routine using finite element modelling, a swept-frequency EMI experimental setup is used to compute tensors for a set of test samples representing examples of metallic landmine components and clutter over a broadband range of frequencies (kHz to tens of kHz). The results show that the reconstructed spectral target signatures are very distinctive and hence potentially offer an efficient physical approach to landmine identification. The accuracy of the evaluated spectra is similarly verified using a uniform-field-forming sensor.
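
    The linear core of a dipole-model fit like the one described above can be illustrated in a few lines. The following is a toy sketch, not the authors' code: the coupling vectors, measurement count, and tensor values are invented, and noise and position estimation are omitted. Each scalar measurement is modelled as s = aᵀ M b (receiver coupling a, primary field b), which is linear in the unknown tensor entries and so solvable by ordinary least squares:

```python
import numpy as np

rng = np.random.default_rng(3)
M_true = np.array([[2.0, 0.3, 0.1],
                   [0.3, 1.5, 0.2],
                   [0.1, 0.2, 1.0]])   # invented symmetric polarizability tensor

rows, s = [], []
for _ in range(40):
    b = rng.normal(size=3)               # primary (transmit) field at the target
    a = rng.normal(size=3)               # receiver coupling vector
    rows.append(np.outer(a, b).ravel())  # s = a.T @ M @ b = vec(a b^T) . vec(M)
    s.append(a @ M_true @ b)

# noise-free, over-determined linear system: least squares recovers M exactly
M_fit = np.linalg.lstsq(np.array(rows), np.array(s), rcond=None)[0].reshape(3, 3)
```

    With noisy data the same solve gives the least-squares estimate; the nonlinear part of the paper's inversion (the unknown target position) would wrap this linear step inside an outer minimization.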

  13. Modeling the evolution of lithium-ion particle contact distributions using a fabric tensor approach

    NASA Astrophysics Data System (ADS)

    Stershic, A. J.; Simunovic, S.; Nanda, J.

    2015-11-01

    Electrode microstructure and processing can strongly influence lithium-ion battery performance such as capacity retention, power, and rate. Battery electrodes are multi-phase composite structures wherein conductive diluents and binder bond active material to a current collector. The structure and response of this composite network during repeated electrochemical cycling directly affects battery performance characteristics. We propose the fabric tensor formalism for describing the structure and evolution of the electrode microstructure. Fabric tensors are directional measures of particulate assemblies based on inter-particle connectivity, relating to the structural and transport properties of the electrode. Fabric tensor analysis is applied to experimental datasets for a positive electrode made of lithium nickel manganese cobalt oxide, captured by X-ray tomography for several compositions and consolidation pressures. We show that fabric tensors capture the evolution of the inter-particle contact distribution and are therefore good measures for the internal state of, and electronic transport within, the electrode. The fabric tensor analysis is also applied to Discrete Element Method (DEM) simulations of electrode microstructures using spherical particles with size distributions from the tomography. These results do not follow the experimental trends, which indicates that the particle size distribution alone is not a sufficient measure for the electrode microstructures in DEM simulations.
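
    The abstract does not spell out which fabric tensor is used; a common second-order form is the mean outer product of the unit inter-particle contact normals, whose deviation from I/3 measures directional anisotropy of the contact network. A minimal sketch, assuming that standard definition (illustrative only):

```python
import numpy as np

def fabric_tensor(normals):
    # Phi_ij = <n_i n_j>: mean outer product of unit contact normals; trace(Phi) = 1
    n = np.asarray(normals, dtype=float)
    n = n / np.linalg.norm(n, axis=1, keepdims=True)
    return n.T @ n / len(n)

# sanity check: an isotropic contact distribution gives Phi close to I/3
rng = np.random.default_rng(0)
normals = rng.normal(size=(100_000, 3))   # random directions, uniform on the sphere
phi = fabric_tensor(normals)
```

    For a real assembly the normals would come from the segmented tomography or the DEM contact list, and the eigenvalues of phi quantify how strongly contacts cluster along particular directions.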

  14. Diffusion tensor imaging-based research on human white matter anatomy.

    PubMed

    Qiu, Ming-guo; Zhang, Jing-na; Zhang, Ye; Li, Qi-Yu; Xie, Bing; Wang, Jian

    2012-01-01

    The aim of this study is to investigate white matter anatomy using diffusion tensor imaging (DTI) and the Chinese Visible Human dataset, and to provide 3D anatomical data of the corticospinal tract for neurosurgical planning by studying the probabilistic maps and the reproducibility of the corticospinal tract. Diffusion tensor images and high-resolution T1-weighted images of 15 healthy volunteers were acquired; the DTI data were processed using DtiStudio and FSL software. The FA and color FA maps were compared with the sectional images of the Chinese Visible Human dataset. The probability maps of the corticospinal tract were generated as a quantitative measure of reproducibility for each voxel of the stereotaxic space. The fibers displayed by diffusion tensor imaging were consistent with the sectional images of the Chinese Visible Human dataset and with existing anatomical knowledge. The three-dimensional architecture of the white matter fibers could be clearly visualized on the diffusion tensor tractography. Diffusion tensor tractography can establish 3D probability maps of the corticospinal tract, in which the degree of intersubject reproducibility of the corticospinal tract is consistent with the previous architectonic report. DTI is a reliable method for studying fiber connectivity in the human brain, but it is difficult to identify the tiny fibers. The probability maps are useful for evaluating and identifying the corticospinal tract in DTI, providing anatomical information for preoperative planning and improving the accuracy of preoperative surgical risk assessments. PMID:23226983
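
    The FA maps mentioned above follow the standard fractional anisotropy formula, FA = sqrt(3/2) * ||λ − mean(λ)|| / ||λ||, computed from the eigenvalues of each voxel's diffusion tensor. A minimal sketch with synthetic tensors (not the study's data; the example diffusivities are typical textbook values):

```python
import numpy as np

def fractional_anisotropy(D):
    # FA = sqrt(3/2) * ||lambda - mean(lambda)|| / ||lambda||
    lam = np.linalg.eigvalsh(D)           # eigenvalues of the 3x3 diffusion tensor
    dev = lam - lam.mean()
    return np.sqrt(1.5) * np.sqrt((dev ** 2).sum() / (lam ** 2).sum())

iso = 1e-3 * np.eye(3)                    # isotropic, free-water-like voxel: FA ~ 0
stick = np.diag([1.7e-3, 1e-4, 1e-4])     # fiber-like voxel: FA close to 1
fa_iso = fractional_anisotropy(iso)
fa_stick = fractional_anisotropy(stick)
```

    Applied voxel-wise, this yields the FA map; coloring by the principal eigenvector direction gives the color FA map the abstract refers to.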

  16. Lignocellulose decomposition by microbial secretions

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Carbon storage in terrestrial ecosystems is contingent upon the natural resistance of plant cell wall polymers to rapid biological degradation. Nevertheless, certain microorganisms have evolved remarkable means to overcome this natural resistance. Lignocellulose decomposition by microorganisms com...

  17. Some nonlinear space decomposition algorithms

    SciTech Connect

    Tai, Xue-Cheng; Espedal, M.

    1996-12-31

    Convergence of a space decomposition method is proved for a general convex programming problem. The space decomposition refers to methods that decompose a space into sums of subspaces, which could be a domain decomposition or a multigrid method for partial differential equations. Two algorithms are proposed. Both can be used for linear as well as nonlinear elliptic problems and they reduce to the standard additive and multiplicative Schwarz methods for linear elliptic problems. Two "hybrid" algorithms are also presented. They converge faster than the additive one and have better parallelism than the multiplicative method. Numerical tests with a two level domain decomposition for linear, nonlinear and interface elliptic problems are presented for the proposed algorithms.
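
    For the linear case, the additive Schwarz method these algorithms reduce to can be sketched on a 1D Poisson problem with two overlapping subdomains. This is a toy illustration under assumed sizes (grid, overlap, damping factor, iteration count are all invented), not the paper's algorithm:

```python
import numpy as np

# 1D Poisson: -u'' = 1 on (0,1), u(0) = u(1) = 0, standard 3-point stencil
n = 99
h = 1.0 / (n + 1)
A = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
f = np.ones(n)

# two overlapping subdomains (index sets are illustrative choices)
domains = [np.arange(0, 60), np.arange(40, n)]

u = np.zeros(n)
for _ in range(400):
    r = f - A @ u
    du = np.zeros(n)
    for idx in domains:
        # solve the local problem restricted to the subdomain, extend by zero
        du[idx] += np.linalg.solve(A[np.ix_(idx, idx)], r[idx])
    u += 0.5 * du   # damped additive update; multiplicative Schwarz would refresh r per subdomain

u_exact = np.linalg.solve(A, f)
```

    The multiplicative variant recomputes the residual after each subdomain solve, which converges faster but serializes the subdomain work; the paper's hybrid algorithms sit between these two extremes.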

  18. Odd tensor modes from inflation

    NASA Astrophysics Data System (ADS)

    Sorbo, Lorenzo

    2016-07-01

    The existence of a primordial spectrum of gravitational waves is a generic prediction of inflation. Here, I will discuss under what conditions the coupling of a pseudoscalar inflaton to a U(1) gauge field can induce, in a two-step process, gravitational waves with unusual properties such as: (i) a net chirality, (ii) a blue spectrum, (iii) large non-Gaussianities even if the scalar perturbations are approximately Gaussian and (iv) being detectable in the (relatively) near future by ground-based gravitational interferometers.

  19. Tensor meson contribution to γp → K^+ Λ(Σ^0) at high energies

    SciTech Connect

    Yu, Byung Geel; Keun Choi, Tae; Kim, W.

    2011-10-21

    The role of the tensor meson K2*(1430) exchange is investigated in kaon photoproduction within the Regge framework. Inclusion of the K2* exchange with the meson-baryon coupling constants chosen from SU(3) symmetry reproduces the cross sections in good agreement with the experimental data. This shows the importance of the tensor meson exchange in the present process.

  20. Tensor hypercontraction density fitting. I. Quartic scaling second- and third-order Møller-Plesset perturbation theory

    NASA Astrophysics Data System (ADS)

    Hohenstein, Edward G.; Parrish, Robert M.; Martínez, Todd J.

    2012-07-01

    Many approximations have been developed to help deal with the O(N^4) growth of the electron repulsion integral (ERI) tensor, where N is the number of one-electron basis functions used to represent the electronic wavefunction. Of these, the density fitting (DF) approximation is currently the most widely used despite the fact that it is often incapable of altering the underlying scaling of computational effort with respect to molecular size. We present a method for exploiting sparsity in three-center overlap integrals through tensor decomposition to obtain a low-rank approximation to density fitting (tensor hypercontraction density fitting, or THC-DF). This new approximation reduces the fourth-order ERI tensor to a product of five matrices, simultaneously reducing the storage requirement as well as increasing the flexibility to regroup terms and reduce scaling behavior. As an example, we demonstrate such a scaling reduction for second- and third-order perturbation theory (MP2 and MP3), showing that both can be carried out in O(N^4) operations. This should be compared to the usual scaling behavior of O(N^5) and O(N^6) for MP2 and MP3, respectively. The THC-DF technique can also be applied to other methods in electronic structure theory, such as coupled-cluster and configuration interaction, promising significant gains in computational efficiency and storage reduction.
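
    The structural idea, a rank-4 tensor written as a product of small matrices, can be sketched with random data. This is only a shape-and-storage illustration under invented toy sizes, not the THC-DF algorithm itself (which fits the factors to real integrals): the ERI tensor (pq|rs) is represented as X[p,P] X[q,P] Z[P,Q] X[r,Q] X[s,Q], so storage drops from O(N^4) to O(NM + M^2) for grid dimension M:

```python
import numpy as np

rng = np.random.default_rng(1)
N, M = 8, 20                 # basis functions, THC grid points (toy sizes)
X = rng.normal(size=(N, M))  # "collocation" factor matrix
Z = rng.normal(size=(M, M))
Z = Z + Z.T                  # symmetric core so that (pq|rs) = (rs|pq)

# (pq|rs) ~ sum_{PQ} X[p,P] X[q,P] Z[P,Q] X[r,Q] X[s,Q]
eri = np.einsum('pP,qP,PQ,rQ,sQ->pqrs', X, X, Z, X, X)
```

    The factored form also lets contractions be regrouped matrix-by-matrix, which is where the MP2/MP3 scaling reduction in the abstract comes from.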

  1. Decomposition of Rare Earth Loaded Resin Particles

    SciTech Connect

    Voit, Stewart L; Rawn, Claudia J

    2010-09-01

    resin is made of sulfonic acid functional groups attached to a styrene-divinylbenzene copolymer lattice (a long-chain hydrocarbon). The metal cation binds to the sulfur group; during thermal decomposition in air, the hydrocarbons form gaseous species, leaving behind a spherical metal-oxide particle. Process development for resin applications with radioactive materials is typically performed using surrogates. For americium and curium, a trivalent metal like neodymium can be used. Thermal decomposition of Nd-loaded resin in air has been studied by Hale. Process conditions were established for resin decomposition and the formation of Nd2O3 particles. The intermediate product compounds were described using x-ray diffraction (XRD) and wet chemistry. Leskela and Niinisto studied the decomposition of rare earth (RE) elements and found results consistent with Hale. Picart et al. demonstrated the viability of using a resin loading process for the fabrication of uranium-actinide mixed oxide microspheres for transmutation of minor actinides in a fast reactor. For effective transmutation of actinides, it will be desirable to extend the in-reactor burnup and minimize the number of recycles of used actinide materials. Longer burn times increase the chance of fuel-clad chemical or mechanical interaction (FCCI, FCMI). Sulfur is suspected of contributing to irradiation-assisted stress corrosion cracking (IASCC), so it is necessary to maximize the removal of sulfur during decomposition of the resin. The present effort extends the previous work by quantifying the removal of sulfur during the decomposition process. Neodymium was selected as a surrogate for trivalent actinide metal cations. As described above, Nd was dissolved in nitric acid solution and then contacted with the AG-50W resin column. After washing the column, the Nd-resin particles are removed and dried.
The Nd-resin, seen in Figure 1 prior to decomposition, is ready to be converted to Nd oxide microspheres.

  2. Visualization of second order tensor fields and matrix data

    NASA Technical Reports Server (NTRS)

    Delmarcelle, Thierry; Hesselink, Lambertus

    1992-01-01

    We present a study of the visualization of 3-D second order tensor fields and matrix data. The general problem of visualizing unsymmetric real or complex Hermitian second order tensor fields can be reduced to the simultaneous visualization of a real and symmetric second order tensor field and a real vector field. As opposed to the discrete iconic techniques commonly used in multivariate data visualization, the emphasis is on exploiting the mathematical properties of tensor fields in order to facilitate their visualization and to produce a continuous representation of the data. We focus on interactively sensing and exploring real and symmetric second order tensor data by generalizing the vector notion of streamline to the tensor concept of hyperstreamline. We stress the importance of a structural analysis of the data field analogous to the techniques of vector field topology extraction in order to obtain a unique and objective representation of second order tensor fields.
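
    A hyperstreamline's trajectory follows the major eigenvector field of the tensor data. A minimal sketch of that integration (a toy constant 2-D field and naive Euler stepping, invented for illustration; the paper's visualization also encodes the transverse eigenvalues along the tube, which is omitted here):

```python
import numpy as np

def major_eigvec(T):
    # unit eigenvector of the largest eigenvalue (eigh sorts eigenvalues ascending)
    return np.linalg.eigh(T)[1][:, -1]

def hyperstreamline(tensor_at, x0, step=0.01, n_steps=500):
    xs = [np.asarray(x0, dtype=float)]
    d_prev = None
    for _ in range(n_steps):
        d = major_eigvec(tensor_at(xs[-1]))
        if d_prev is not None and d @ d_prev < 0:
            d = -d          # eigenvectors carry no sign; keep a consistent direction
        xs.append(xs[-1] + step * d)
        d_prev = d
    return np.array(xs)

# constant toy field whose major axis is everywhere along (1, 1)/sqrt(2)
def field(_p):
    v = np.array([1.0, 1.0]) / np.sqrt(2.0)
    return 2.0 * np.outer(v, v) + np.eye(2)

line = hyperstreamline(field, [0.0, 0.0])
```

    The sign fix-up is the one genuinely tensor-specific step: unlike a vector field, an eigenvector field only defines a direction up to sign, so each step must be oriented against the previous one.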

  3. Optimization by nonhierarchical asynchronous decomposition

    NASA Technical Reports Server (NTRS)

    Shankar, Jayashree; Ribbens, Calvin J.; Haftka, Raphael T.; Watson, Layne T.

    1992-01-01

    Large scale optimization problems are tractable only if they are somehow decomposed. Hierarchical decompositions are inappropriate for some types of problems and do not parallelize well. Sobieszczanski-Sobieski has proposed a nonhierarchical decomposition strategy for nonlinear constrained optimization that is naturally parallel. Despite some successes on engineering problems, the algorithm as originally proposed fails on simple two dimensional quadratic programs. The algorithm is carefully analyzed for quadratic programs, and a number of modifications are suggested to improve its robustness.

  4. Charmless hadronic B decays into a tensor meson

    SciTech Connect

    Cheng, Hai-Yang; Yang, Kwei-Chou

    2011-02-01

    Two-body charmless hadronic B decays involving a tensor meson in the final state are studied within the framework of QCD factorization (QCDF). Because of the G-parity of the tensor meson, both the chiral-even and chiral-odd two-parton light-cone distribution amplitudes of the tensor meson are antisymmetric under the interchange of momentum fractions of the quark and antiquark in the SU(3) limit. Our main results are: (i) In the naieve factorization approach, the decays such as B{sup -}{yields}K{sub 2}*{sup 0}{pi}{sup -} and B{sup 0}{yields}K{sub 2}*{sup -}{pi}{sup +} with a tensor meson emitted are prohibited because a tensor meson cannot be created from the local V-A or tensor current. Nevertheless, the decays receive nonfactorizable contributions in QCDF from vertex, penguin and hard spectator corrections. The experimental observation of B{sup -}{yields}K{sub 2}*{sup 0}{pi}{sup -} indicates the importance of nonfactorizable effects. (ii) For penguin-dominated B{yields}TP and TV decays, the predicted rates in naieve factorization are usually too small by 1 to 2 orders of magnitude. In QCDF, they are enhanced by power corrections from penguin annihilation and nonfactorizable contributions. (iii) The dominant penguin contributions to B{yields}K{sub 2}*{eta}{sup (')} arise from the processes: (a) b{yields}sss{yields}s{eta}{sub s} and (b) b{yields}sqq{yields}qK{sub 2}* with {eta}{sub q}=(uu+dd)/{radical}(2) and {eta}{sub s}=ss. The interference, constructive for K{sub 2}*{eta}{sup '} and destructive for K{sub 2}*{eta}, explains why {Gamma}(B{yields}K{sub 2}*{eta}{sup '})>>{Gamma}(B{yields}K{sub 2}*{eta}). (iv) We use the measured rates of B{yields}K{sub 2}*({omega},{phi}) to extract the penguin-annihilation parameters {rho}{sub A}{sup TV} and {rho}{sub A}{sup VT} and the observed longitudinal polarization fractions f{sub L}(K{sub 2}*{omega}) and f{sub L}(K{sub 2}*{phi}) to fix the phases {phi}{sub A}{sup VT} and {phi}{sub A}{sup TV}. 
(v) The experimental observation

  5. Saliency Mapping Enhanced by Structure Tensor

    PubMed Central

    He, Zhiyong; Chen, Xin; Sun, Lining

    2015-01-01

    We propose a novel, efficient algorithm for computing visual saliency, based on the computational architecture of the Itti model. As one of the well-known bottom-up visual saliency models, the Itti method evaluates three low-level features (color, intensity, and orientation) and then generates multiscale activation maps. Finally, a saliency map is aggregated with multiscale fusion. In our method, the orientation feature is replaced by edge and corner features extracted by a linear structure tensor. These features are then used to generate a contour activation map, and all activation maps are directly combined into a saliency map. Compared to the Itti method, our method is more computationally efficient, because the structure tensor is cheaper to compute than the Gabor filters used for the orientation feature, and our aggregation is a direct method instead of a multiscale operator. Experiments on Bruce's dataset show that our method is a strong contender for the state of the art. PMID:26788050
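
    The edge/corner extraction step can be sketched with the standard 2-D structure tensor: smoothed outer products of image gradients, whose per-pixel eigenvalues separate edges (one large eigenvalue) from corners (two large eigenvalues). This is a generic sketch, not the paper's implementation; the box-filter window and the toy step-edge image are invented stand-ins:

```python
import numpy as np

def smooth(a, k=3):
    # separable k x k box filter (a simple stand-in for a Gaussian window)
    ker = np.ones(k) / k
    a = np.apply_along_axis(lambda v: np.convolve(v, ker, mode='same'), 0, a)
    return np.apply_along_axis(lambda v: np.convolve(v, ker, mode='same'), 1, a)

def structure_tensor(img, k=3):
    Iy, Ix = np.gradient(img.astype(float))
    return smooth(Ix * Ix, k), smooth(Ix * Iy, k), smooth(Iy * Iy, k)

img = np.zeros((20, 20))
img[:, 10:] = 1.0                      # vertical step edge
Jxx, Jxy, Jyy = structure_tensor(img)

# per-pixel eigenvalues of [[Jxx, Jxy], [Jxy, Jyy]]
disc = np.sqrt((Jxx - Jyy) ** 2 / 4 + Jxy ** 2)
lam1 = (Jxx + Jyy) / 2 + disc
lam2 = (Jxx + Jyy) / 2 - disc
edge_strength = lam1 - lam2            # high along the step edge
corner_strength = lam2                 # ~0 here: a pure edge has one dominant direction
```

    Both maps come from one pass of gradients and three small smoothings, which is the efficiency advantage over a Gabor filter bank that the abstract cites.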

  6. Extended scalar-tensor theories of gravity

    NASA Astrophysics Data System (ADS)

    Crisostomi, Marco; Koyama, Kazuya; Tasinato, Gianmassimo

    2016-04-01

    We study new consistent scalar-tensor theories of gravity recently introduced by Langlois and Noui with potentially interesting cosmological applications. We derive the conditions for the existence of a primary constraint that prevents the propagation of an additional dangerous mode associated with higher order equations of motion. We then classify the most general, consistent scalar-tensor theories that are at most quadratic in the second derivatives of the scalar field. In addition, we investigate the possible connection between these theories and (beyond) Horndeski through conformal and disformal transformations. Finally, we point out that these theories can be associated with new operators in the effective field theory of dark energy, which might open up new possibilities to test dark energy models in future surveys.

  7. Tensor modes on the string theory landscape

    NASA Astrophysics Data System (ADS)

    Westphal, Alexander

    2013-04-01

    We attempt an estimate for the distribution of the tensor mode fraction r over the landscape of vacua in string theory. The dynamics of eternal inflation and quantum tunneling lead to a kind of democracy on the landscape, providing no bias towards large-field or small-field inflation regardless of the class of measure. The tensor mode fraction then follows the number frequency distributions of inflationary mechanisms of string theory over the landscape. We show that an estimate of the relative number frequencies for small-field vs large-field inflation, while unattainable on the whole landscape, may be within reach as a regional answer for warped Calabi-Yau flux compactifications of type IIB string theory.

  8. Tensors: A guide for undergraduate students

    NASA Astrophysics Data System (ADS)

    Battaglia, Franco; George, Thomas F.

    2013-07-01

    A guide on tensors is proposed for undergraduate students in physics or engineering that ties directly to vector calculus in orthonormal coordinate systems. We show that once orthonormality is relaxed, a dual basis, together with the contravariant and covariant components, naturally emerges. Manipulating these components requires some skill that can be acquired more easily and quickly once a new notation is adopted. This notation distinguishes multi-component quantities in different coordinate systems by a differentiating sign on the index labelling the component rather than on the label of the quantity itself. This tiny stratagem, together with simple rules openly stated at the beginning of this guide, allows an almost automatic, easy-to-pursue procedure for what is otherwise a cumbersome algebra. By the end of the paper, the reader will be skillful enough to tackle many applications involving tensors of any rank in any coordinate system, without index-manipulation obstacles standing in the way.
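
    The dual-basis construction the guide describes is easy to verify numerically. A minimal sketch (an invented 2-D non-orthonormal basis, chosen only for illustration): the dual basis comes from the matrix inverse, contravariant components are expansion coefficients, covariant components are projections, and the metric lowers indices between them:

```python
import numpy as np

# columns of E are the (non-orthonormal) basis vectors: e1 = (1, 0), e2 = (1, 1)
E = np.array([[1.0, 1.0],
              [0.0, 1.0]])
g = E.T @ E                    # metric g_ij = e_i . e_j
dual = np.linalg.inv(E)        # rows are the dual basis vectors e^i, with e^i . e_j = delta^i_j

v = np.array([2.0, 3.0])
v_contra = dual @ v            # contravariant components v^i: v = v^i e_i
v_cov = E.T @ v                # covariant components v_i = v . e_i
```

    For an orthonormal basis E is orthogonal, the metric is the identity, and the two kinds of components coincide, which is exactly why the distinction only emerges once orthonormality is relaxed.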

  9. Symmetry constraints on the elastoresistivity tensor

    NASA Astrophysics Data System (ADS)

    Shapiro, M. C.; Hlobil, Patrik; Hristov, A. T.; Maharaj, Akash V.; Fisher, I. R.

    2015-12-01

    The elastoresistivity tensor m_ij,kl characterizes changes in a material's resistivity due to strain. As a fourth-rank tensor, elastoresistivity can be a uniquely useful probe of the symmetries and character of the electronic state of a solid. We present a symmetry analysis of m_ij,kl (both in the presence and absence of a magnetic field) based on the crystalline point group, focusing for pedagogic purposes on the D_4h point group (of relevance to several materials of current interest). We also discuss the relation between m_ij,kl and various thermodynamic susceptibilities, particularly where they are sensitive to critical fluctuations proximate to a critical point at which a point-group symmetry is spontaneously broken.

  10. Conformal Killing tensors and covariant Hamiltonian dynamics

    SciTech Connect

    Cariglia, M.; Gibbons, G. W.; Holten, J.-W. van; Horvathy, P. A.; Zhang, P.-M.

    2014-12-15

    A covariant algorithm for deriving the conserved quantities for natural Hamiltonian systems is combined with the non-relativistic framework of Eisenhart, and of Duval, in which the classical trajectories arise as geodesics in a higher dimensional space-time, realized by Brinkmann manifolds. Conserved quantities which are polynomial in the momenta can be built using time-dependent conformal Killing tensors with flux. The latter are associated with terms proportional to the Hamiltonian in the lower dimensional theory and with spectrum generating algebras for higher dimensional quantities of order 1 and 2 in the momenta. Illustrations of the general theory include the Runge-Lenz vector for planetary motion with a time-dependent gravitational constant G(t), motion in a time-dependent electromagnetic field of a certain form, quantum dots, the Hénon-Heiles and Holt systems, respectively, providing us with Killing tensors of rank that ranges from one to six.

  11. Decomposition of indwelling EMG signals

    PubMed Central

    Nawab, S. Hamid; Wotiz, Robert P.; De Luca, Carlo J.

    2008-01-01

    Decomposition of indwelling electromyographic (EMG) signals is challenging in view of the complex and often unpredictable behaviors and interactions of the action potential trains of different motor units that constitute the indwelling EMG signal. These phenomena create a myriad of problem situations that a decomposition technique needs to address to attain completeness and accuracy levels required for various scientific and clinical applications. Starting with the maximum a posteriori probability classifier adapted from the original precision decomposition system (PD I) of LeFever and De Luca (25, 26), an artificial intelligence approach has been used to develop a multiclassifier system (PD II) for addressing some of the experimentally identified problem situations. On a database of indwelling EMG signals reflecting such conditions, the fully automatic PD II system is found to achieve a decomposition accuracy of 86.0% despite the fact that its results include low-amplitude action potential trains that are not decomposable at all via systems such as PD I. Accuracy was established by comparing the decompositions of indwelling EMG signals obtained from two sensors. At the end of the automatic PD II decomposition procedure, the accuracy may be enhanced to nearly 100% via an interactive editor, a particularly significant fact for the previously indecomposable trains. PMID:18483170

  12. Elliptic Relaxation of a Tensor Representation of the Pressure-Strain and Dissipation Rate

    NASA Technical Reports Server (NTRS)

    Carlson, John R.; Gatski, Thomas B.

    2002-01-01

    A formulation to include the effects of wall proximity in a second-moment closure model is presented that utilizes a tensor representation for the redistribution term in the Reynolds stress equations. The wall-proximity effects are modeled through an elliptic relaxation process of the tensor expansion coefficients that properly accounts for both correlation length and time scales as the wall is approached. DNS data and Reynolds stress solutions using a full differential approach at a channel Reynolds number of 590 are compared to the new model.

  13. Elliptic Relaxation of a Tensor Representation for the Redistribution Terms in a Reynolds Stress Turbulence Model

    NASA Technical Reports Server (NTRS)

    Carlson, J. R.; Gatski, T. B.

    2002-01-01

    A formulation to include the effects of wall proximity in a second-moment closure model that utilizes a tensor representation for the redistribution terms in the Reynolds stress equations is presented. The wall-proximity effects are modeled through an elliptic relaxation process of the tensor expansion coefficients that properly accounts for both correlation length and time scales as the wall is approached. Direct numerical simulation data and Reynolds stress solutions using a full differential approach are compared for the case of fully developed channel flow.

  14. Algebraic classification of the Weyl tensor in higher dimensions based on its 'superenergy' tensor

    NASA Astrophysics Data System (ADS)

    Senovilla, José M. M.

    2010-11-01

    The algebraic classification of the Weyl tensor in arbitrary dimension n is recovered by means of the principal directions of its 'superenergy' tensor. This point of view can be helpful in order to compute the Weyl aligned null directions explicitly, and permits one to obtain the algebraic type of the Weyl tensor by computing the principal eigenvalue of rank-2 symmetric future tensors. The algebraic types compatible with states of intrinsic gravitational radiation can then be explored. The underlying ideas are general, so that a classification of arbitrary tensors in the general dimension can be achieved.

  15. Newton-based optimization for Kullback-Leibler nonnegative tensor factorizations

    SciTech Connect

    Plantenga, Todd; Kolda, Tamara G.; Hansen, Samantha

    2015-04-30

    Tensor factorizations with nonnegativity constraints have found application in analysing data from cyber traffic, social networks, and other areas. We consider application data best described as being generated by a Poisson process (e.g. count data), which leads to sparse tensors that can be modelled by sparse factor matrices. In this paper, we investigate efficient techniques for computing an appropriate canonical polyadic tensor factorization based on the Kullback–Leibler divergence function. We propose novel subproblem solvers within the standard alternating block variable approach. Our new methods exploit structure and reformulate the optimization problem as small independent subproblems. We employ bound-constrained Newton and quasi-Newton methods. Finally, we compare our algorithms against other codes, demonstrating superior speed for high accuracy results and the ability to quickly find sparse solutions.
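The paper's Newton solvers operate on the Kullback–Leibler objective; as a simplified stand-in, the classic multiplicative updates for the two-way (matrix) case of KL nonnegative factorization show the objective and nonnegativity structure involved. The count data, rank, and iteration count below are illustrative, and this is not the authors' algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
V = rng.poisson(5.0, size=(30, 20)).astype(float) + 1e-9  # count-like data
r = 4                                                     # assumed rank
W = rng.random((30, r)) + 0.1
H = rng.random((r, 20)) + 0.1

def kl_div(V, WH):
    # Generalized KL divergence between data V and model WH
    return float(np.sum(V * np.log(V / WH) - V + WH))

kl0 = kl_div(V, W @ H)
for _ in range(200):
    # Lee-Seung multiplicative updates: monotonically non-increasing in KL
    WH = W @ H
    H *= (W.T @ (V / WH)) / W.sum(axis=0)[:, None]
    WH = W @ H
    W *= ((V / WH) @ H.T) / H.sum(axis=1)[None, :]
kl1 = kl_div(V, W @ H)
```

The multiplicative form keeps W and H nonnegative automatically; the paper's bound-constrained Newton methods pursue the same objective with faster high-accuracy convergence.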

  16. Advantages of horizontal directional Theta method to detect the edges of full tensor gravity gradient data

    NASA Astrophysics Data System (ADS)

    Yuan, Yuan; Gao, Jin-Yao; Chen, Ling-Na

    2016-07-01

    Full tensor gravity gradient data contain nine signal components. They include higher-frequency signals than traditional gravity data, which can extract the small-scale features of the sources. Edge detection has played an important role in the interpretation of potential-field data, and many methods have been proposed to detect and enhance the edges of geological bodies based on horizontal and vertical derivatives of potential-field data. To make full use of all the measured gradient components, we develop a new edge detector for full tensor gravity gradient data: we first define the directional Theta, and then use the horizontal directional Theta to define the new detector. The method was tested on synthetic and real full tensor gravity gradient data to validate its feasibility. Compared with other balanced detectors, the new detector effectively delineates the edges and does not produce additional false edges.
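For context, the classic (non-directional) Theta map from which such detectors derive normalizes the total horizontal derivative by the analytic-signal amplitude. A minimal sketch on a synthetic anomaly follows; the Gaussian field and the stand-in vertical derivative are purely illustrative, and this is not the directional detector proposed in the record:

```python
import numpy as np

# Synthetic gravity anomaly on a regular grid (illustrative)
x = np.linspace(-10.0, 10.0, 128)
X, Y = np.meshgrid(x, x)
g = np.exp(-(X**2 + Y**2) / 8.0)

# Horizontal derivatives by finite differences; gy is along axis 0 (rows)
gy, gx = np.gradient(g, x, x)
gz = g                              # stand-in for the vertical derivative

# Theta map: cos(theta) = THDR / |grad g|; values lie in [0, pi/2]
thdr = np.sqrt(gx**2 + gy**2)
theta = np.arccos(thdr / np.sqrt(thdr**2 + gz**2 + 1e-30))
```

Balanced detectors of this family are attractive because the normalization equalizes the response of shallow and deep sources; the record's directional variant extends the idea to all nine gradient components.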

  17. A sensitive, high resolution magic angle turning experiment for measuring chemical shift tensor principal values

    NASA Astrophysics Data System (ADS)

    Alderman, D. W.

    1998-12-01

    A sensitive, high-resolution 'FIREMAT' two-dimensional (2D) magic-angle-turning experiment is described that measures chemical shift tensor principal values in powdered solids. The spectra display spinning-sideband patterns separated by their isotropic shifts. The new method's sensitivity and high resolution in the isotropic-shift dimension result from combining the 5π magic-angle-turning pulse sequence, an extension of the pseudo-2D sideband-suppression data rearrangement, and the TIGER protocol for processing 2D data. TPPM decoupling is used to enhance resolution. The method requires precise synchronization of the pulses and sampling to the rotor position. It is shown that the technique obtains 35 natural-abundance 13C tensors from erythromycin in 19 hours, and high-quality natural-abundance 15N tensors from eight sites in potassium penicillin V in three days on a 400 MHz spectrometer.

  18. Newton-based optimization for Kullback-Leibler nonnegative tensor factorizations

    DOE PAGESBeta

    Plantenga, Todd; Kolda, Tamara G.; Hansen, Samantha

    2015-04-30

    Tensor factorizations with nonnegativity constraints have found application in analysing data from cyber traffic, social networks, and other areas. We consider application data best described as being generated by a Poisson process (e.g. count data), which leads to sparse tensors that can be modelled by sparse factor matrices. In this paper, we investigate efficient techniques for computing an appropriate canonical polyadic tensor factorization based on the Kullback–Leibler divergence function. We propose novel subproblem solvers within the standard alternating block variable approach. Our new methods exploit structure and reformulate the optimization problem as small independent subproblems. We employ bound-constrained Newton and quasi-Newton methods. Finally, we compare our algorithms against other codes, demonstrating superior speed for high accuracy results and the ability to quickly find sparse solutions.

  19. Inflation in anisotropic scalar-tensor theories

    NASA Technical Reports Server (NTRS)

    Pimentel, Luis O.; Stein-Schabes, Jaime

    1988-01-01

    The existence of an inflationary phase in anisotropic Scalar-Tensor Theories is investigated by means of a conformal transformation that allows us to rewrite these theories as gravity minimally coupled to a scalar field with a nontrivial potential. The explicit form of the potential is then used and the No Hair Theorem concludes that there is an inflationary phase in all open or flat anisotropic spacetimes in these theories. Several examples are constructed where the effect becomes manifest.

  20. Tensor Forces and the Structure of Nuclei

    SciTech Connect

    Rocco Schiavilla

    2009-12-01

    The two preeminent features of the nucleon-nucleon interaction are its short-range repulsion and intermediate- to long-range tensor character. In the present talk, I review how these features influence two-nucleon densities in configuration and momentum space. The predicted large differences between the np and pp momentum distributions have been confirmed in 12C(e,e[prime] np) and 12C(e,e[prime] pp) experiments at Jefferson Lab.

  1. Tensor Networks and Quantum Error Correction

    NASA Astrophysics Data System (ADS)

    Ferris, Andrew J.; Poulin, David

    2014-07-01

    We establish several relations between quantum error correction (QEC) and tensor network (TN) methods of quantum many-body physics. We exhibit correspondences between well-known families of QEC codes and TNs, and demonstrate a formal equivalence between decoding a QEC code and contracting a TN. We build on this equivalence to propose a new family of quantum codes and decoding algorithms that generalize and improve upon quantum polar codes and successive cancellation decoding in a natural way.

  2. Stress tensor correlators in three dimensional gravity

    NASA Astrophysics Data System (ADS)

    Bagchi, Arjun; Grumiller, Daniel; Merbis, Wout

    2016-03-01

    We calculate holographically arbitrary n -point correlators of the boundary stress tensor in three-dimensional Einstein gravity with negative or vanishing cosmological constant. We provide explicit expressions up to 5-point (connected) correlators and show consistency with the Galilean conformal field theory Ward identities and recursion relations of correlators, which we derive. This provides a novel check of flat space holography in three dimensions.

  3. Monte Carlo Volcano Seismic Moment Tensors

    NASA Astrophysics Data System (ADS)

    Waite, G. P.; Brill, K. A.; Lanza, F.

    2015-12-01

    Inverse modeling of volcano seismic sources can provide insight into the geometry and dynamics of volcanic conduits. But given the logistical challenges of working on an active volcano, seismic networks are typically deficient in spatial and temporal coverage; this potentially leads to large errors in source models. In addition, uncertainties in the centroid location and moment-tensor components, including volumetric components, are difficult to constrain from the linear inversion results, which leads to a poor understanding of the model space. In this study, we employ a nonlinear inversion using a Monte Carlo scheme with the objective of defining robustly resolved elements of model space. The model space is randomized by centroid location and moment tensor eigenvectors. Point sources densely sample the summit area and moment tensors are constrained to a randomly chosen geometry within the inversion; Green's functions for the random moment tensors are all calculated from modeled single forces, making the nonlinear inversion computationally reasonable. We apply this method to very-long-period (VLP) seismic events that accompany minor eruptions at Fuego volcano, Guatemala. The library of single force Green's functions is computed with a 3D finite-difference modeling algorithm through a homogeneous velocity-density model that includes topography, for a 3D grid of nodes, spaced 40 m apart, within the summit region. The homogeneous velocity and density model is justified by the long wavelengths of the VLP data. The nonlinear inversion reveals well resolved model features and informs the interpretation through a better understanding of the possible models. This approach can also be used to evaluate possible station geometries in order to optimize networks prior to deployment.
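The randomized model-space search can be caricatured with a toy linear forward problem: synthetic data are generated from a known moment-tensor vector, random candidate tensors are scored by residual misfit, and the best is retained. The Green's-function matrix and sampling ranges are invented for illustration and have no connection to the Fuego dataset:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical linear forward problem: data = G @ m, with m the six
# independent moment-tensor components and G a (channels x 6) matrix
# standing in for precomputed Green's functions.
G = rng.standard_normal((50, 6))
m_true = np.array([1.0, 1.0, 1.0, 0.2, -0.1, 0.3])  # volumetric-rich source
data = G @ m_true

# Monte Carlo search: start from the null model, keep the best random sample
best_m = np.zeros(6)
best_misfit = float((data - G @ best_m) @ (data - G @ best_m))
for _ in range(5000):
    m = rng.uniform(-2.0, 2.0, size=6)   # randomized moment-tensor candidate
    r = data - G @ m
    misfit = float(r @ r)
    if misfit < best_misfit:
        best_misfit, best_m = misfit, m
```

In practice one would record the full distribution of acceptable models rather than a single minimum, which is what makes the approach useful for assessing which model features are robustly resolved.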

  4. Application of decomposition techniques to the preliminary design of a transport aircraft

    NASA Technical Reports Server (NTRS)

    Rogan, J. E.; Kolb, M. A.

    1987-01-01

    A nonlinear constrained optimization problem describing the preliminary design process for a transport aircraft has been formulated. A multifaceted decomposition of the optimization problem has been made. Flight dynamics, flexible aircraft loads and deformations, and preliminary structural design subproblems appear prominently in the decomposition. The use of design process decomposition for scheduling design projects, a new system integration approach to configuration control, and the application of object-centered programming to a new generation of design tools are discussed.

  5. Irreducible Virasoro modules from tensor products

    NASA Astrophysics Data System (ADS)

    Tan, Haijun; Zhao, Kaiming

    2016-04-01

    In this paper, we obtain a class of irreducible Virasoro modules by taking tensor products of the irreducible Virasoro modules Ω(λ,b) with irreducible highest weight modules V(θ,h) or with irreducible Virasoro modules Ind_θ(N) defined in Mazorchuk and Zhao (Selecta Math. (N.S.) 20:839-854, 2014). We determine the necessary and sufficient conditions for two such irreducible tensor products to be isomorphic. Then we prove that the tensor product of Ω(λ,b) with a classical Whittaker module is isomorphic to the module Ind_{θ,λ}(C_m) defined in Mazorchuk and Weisner (Proc. Amer. Math. Soc. 142:3695-3703, 2014). As a by-product we obtain the necessary and sufficient conditions for the module Ind_{θ,λ}(C_m) to be irreducible. We also generalize the module Ind_{θ,λ}(C_m) to Ind_{θ,λ}(B_s^{(n)}) for any non-negative integer n and use the above results to completely determine when the modules Ind_{θ,λ}(B_s^{(n)}) are irreducible. The submodules of Ind_{θ,λ}(B_s^{(n)}) are studied and an open problem in Guo et al. (J. Algebra 387:68-86, 2013) is solved. Feigin-Fuchs' theorem on singular vectors of Verma modules over the Virasoro algebra is crucial to our proofs in this paper.

  6. Fast Tensor Image Morphing for Elastic Registration

    PubMed Central

    Yap, Pew-Thian; Wu, Guorong; Zhu, Hongtu; Lin, Weili; Shen, Dinggang

    2009-01-01

    We propose a novel algorithm, called Fast Tensor Image Morphing for Elastic Registration, or F-TIMER. F-TIMER leverages multiscale tensor regional distributions and local boundaries for hierarchically driving deformable matching of tensor image volumes. Registration is achieved by aligning a set of automatically determined structural landmarks via solving a soft correspondence problem. Based on the estimated correspondences, thin-plate splines are employed to generate a smooth, topology-preserving, and dense transformation, and to avoid arbitrary mapping of non-landmark voxels. To mitigate the problem of local minima, which is common in the estimation of high-dimensional transformations, we employ a hierarchical strategy in which a small subset of voxels with more distinctive attribute vectors is first deployed as landmarks to estimate a relatively robust low-degrees-of-freedom transformation. As the registration progresses, an increasing number of voxels are permitted to participate in refining the correspondence matching. Such a scheme allows a less conservative progression of the correspondence matching towards the optimal solution, and hence results in a faster matching speed. Results indicate that better accuracy can be achieved by F-TIMER than by other deformable registration algorithms [1, 2], with computation time reduced by a factor of 4-14. PMID:20426052

  7. Minkowski tensors of anisotropic spatial structure

    NASA Astrophysics Data System (ADS)

    Schröder-Turk, G. E.; Mickel, W.; Kapfer, S. C.; Schaller, F. M.; Breidenbach, B.; Hug, D.; Mecke, K.

    2013-08-01

    This paper describes the theoretical foundation of and explicit algorithms for a novel approach to morphology and anisotropy analysis of complex spatial structure using tensor-valued Minkowski functionals, the so-called Minkowski tensors. Minkowski tensors are generalizations of the well-known scalar Minkowski functionals and are explicitly sensitive to anisotropic aspects of morphology, relevant for example for elastic moduli or permeability of microstructured materials. Here we derive explicit linear-time algorithms to compute these tensorial measures for three-dimensional shapes. These apply to representations of any object that can be represented by a triangulation of its bounding surface; their application is illustrated for the polyhedral Voronoi cellular complexes of jammed sphere configurations and for triangulations of a biopolymer fibre network obtained by confocal microscopy. The paper further bridges the substantial notational and conceptual gap between the different but equivalent approaches to scalar or tensorial Minkowski functionals in mathematics and in physics, hence making the mathematical measure theoretic formalism more readily accessible for future application in the physical sciences.
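Far simpler than the Minkowski tensors themselves, but in the same spirit, the anisotropy of a spatial configuration can be quantified through the eigenvalue ratio of a rank-2 moment tensor; a value of 1 indicates isotropy and values near 0 indicate strong elongation. The point cloud below is synthetic and the measure is a generic stand-in, not one of the paper's algorithms:

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic anisotropic point cloud, stretched along the x axis
pts = rng.standard_normal((1000, 3)) * np.array([3.0, 1.0, 1.0])
pts -= pts.mean(axis=0)

# Rank-2 moment tensor of the configuration and its eigenvalue anisotropy
M = pts.T @ pts / len(pts)
w = np.linalg.eigvalsh(M)           # ascending eigenvalues
beta = w[0] / w[-1]                 # 1 = isotropic, -> 0 strongly anisotropic
```

Tensor-valued measures like this are what make anisotropy visible at all: any scalar functional of the same point set would be blind to the preferred direction.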

  8. Full Three-Dimensional Reconstruction of the Dyadic Green Tensor from Electron Energy Loss Spectroscopy of Plasmonic Nanoparticles

    PubMed Central

    2015-01-01

    Electron energy loss spectroscopy (EELS) has emerged as a powerful tool for the investigation of plasmonic nanoparticles, but the interpretation of EELS results in terms of optical quantities, such as the photonic local density of states, remains challenging. Recent work has demonstrated that, under restrictive assumptions, including the applicability of the quasistatic approximation and a plasmonic response governed by a single mode, one can rephrase EELS as a tomography scheme for the reconstruction of plasmonic eigenmodes. In this paper we lift these restrictions by formulating EELS as an inverse problem and show that the complete dyadic Green tensor can be reconstructed for plasmonic particles of arbitrary shape. The key steps underlying our approach are a generic singular value decomposition of the dyadic Green tensor and a compressed sensing optimization for the determination of the expansion coefficients. We demonstrate the applicability of our scheme for prototypical nanorod, bowtie, and cube geometries. PMID:26523284
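The compressed-sensing step for determining expansion coefficients can be sketched generically with iterative soft-thresholding (ISTA) for a sparse coefficient vector; the random measurement matrix, sparsity level, and regularization weight below are illustrative, not the paper's EELS operator:

```python
import numpy as np

rng = np.random.default_rng(3)
n, k, m = 100, 5, 40                 # ambient dim, sparsity, measurements
x_true = np.zeros(n)
idx = rng.choice(n, size=k, replace=False)
x_true[idx] = rng.standard_normal(k) # sparse ground-truth coefficients
A = rng.standard_normal((m, n)) / np.sqrt(m)  # stand-in sensing matrix
y = A @ x_true

# ISTA: minimize 0.5*||Ax - y||^2 + lam*||x||_1 by gradient step + shrinkage
lam = 0.001
step = 1.0 / np.linalg.norm(A, 2) ** 2
x = np.zeros(n)
for _ in range(2000):
    z = x - step * (A.T @ (A @ x - y))
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
```

With far fewer measurements than unknowns (40 versus 100 here), the l1 penalty is what makes the recovery well-posed, which mirrors the role compressed sensing plays in the reconstruction scheme described above.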

  9. A method for decomposition of hexachlorobenzene by gamma-alumina.

    PubMed

    Zhang, Lifei; Zheng, Minghui; Liu, Wenbin; Zhang, Bing; Su, Guijin

    2008-02-11

    A method of decomposing hexachlorobenzene (HCB) by gamma-alumina at a low temperature of 300 °C was investigated. HCB was found to decompose rather quickly under these conditions. Decomposition efficiency (DE) increases with the surface area of the gamma-alumina, and pretreated gamma-alumina performs better in the decomposition reaction. A high decomposition efficiency of 94.2% within a short reaction time of 60 min was obtained by preheating gamma-alumina with a surface area of 220 m(2) g(-1) at 450 °C for 2 h. High surface area and an appropriate pretreatment temperature probably provide more reactive sites, such as isolated OH groups and Al(3+) sites surrounded by O(2-) sites. These sites may induce the decomposition of HCB via a ring-cracking process. The present study holds promise for eliminating HCB-containing hazardous materials in industrial applications. PMID:18037236

  10. Scare Tactics: Evaluating Problem Decompositions Using Failure Scenarios

    NASA Technical Reports Server (NTRS)

    Helm, B. Robert; Fickas, Stephen

    1992-01-01

    Our interest is in the design of multi-agent problem-solving systems, which we refer to as composite systems. We have proposed an approach to composite system design by decomposition of problem statements. An automated assistant called Critter provides a library of reusable design transformations which allow a human analyst to search the space of decompositions for a problem. In this paper we describe a method for evaluating and critiquing problem decompositions generated by this search process. The method uses knowledge stored in the form of failure decompositions attached to design transformations. We suggest the benefits of our critiquing method by showing how it could re-derive steps of a published development example. We then identify several open issues for the method.

  11. Domain Decomposition By the Advancing-Partition Method

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar Z.

    2008-01-01

    A new method of domain decomposition has been developed for generating unstructured grids in subdomains either sequentially or using multiple computers in parallel. Domain decomposition is a crucial and challenging step for parallel grid generation. Prior methods are generally based on auxiliary, complex, and computationally intensive operations for defining partition interfaces and usually produce grids of lower quality than those generated in single domains. The new technique, referred to as "Advancing Partition," is based on the Advancing-Front method, which partitions a domain as part of the volume mesh generation in a consistent and "natural" way. The benefits of this approach are: 1) the process of domain decomposition is highly automated, 2) partitioning of domain does not compromise the quality of the generated grids, and 3) the computational overhead for domain decomposition is minimal. The new method has been implemented in NASA's unstructured grid generation code VGRID.

  12. Associational Patterns of Scavenger Beetles to Decomposition Stages.

    PubMed

    Zanetti, Noelia I; Visciarelli, Elena C; Centeno, Nestor D

    2015-07-01

    Beetles associated with carrion play an important role in recycling organic matter in an ecosystem. Four experiments on decomposition, one per season, were conducted in a semirural area in Bahía Blanca, Argentina. Melyridae are reported as being of forensic interest for the first time. Apart from adults and larvae of Scarabaeidae, thirteen species and two genera of other coleopteran families are new forensic records in Argentina. Diversity, abundance, and species composition of beetles showed differences between stages and seasons. Our results differed from those of other studies conducted in temperate regions. Four guilds and succession patterns were established in relation to decomposition stages and seasons. Dermestidae (necrophages) predominated in winter during the decomposition process; Staphylinidae (necrophiles) in the Fresh and Bloat stages during spring, summer, and autumn; and Histeridae (necrophiles) and Cleridae (omnivores) in the following stages during those seasons. Finally, coleopteran activity, diversity, and abundance, and the decomposition rate, change with biogeoclimatic characteristics, which is of significance in forensics. PMID:26174466

  13. Multidimensional tensor array analysis of multiphase flow during a hydrodynamic ram event

    NASA Astrophysics Data System (ADS)

    Lingenfelter, A.; Liu, D.

    2015-12-01

    Flow visualization is necessary to characterize the fluid flow properties during a hydrodynamic ram event. The multiphase flow during such an event can make traditional image processing techniques such as contrast feature detection and PIV difficult. By stacking the imagery to form a multidimensional tensor array, features can be detected and flow-field velocities visualized.
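The stacking idea can be sketched minimally: synthetic frames are stacked into a (time, y, x) tensor, and simple temporal differencing highlights a moving feature, standing in for the contrast/feature detection described above. The frames and the moving block are invented for illustration:

```python
import numpy as np

# Build 10 synthetic frames containing a bright block moving in x
frames = []
for t in range(10):
    img = np.zeros((64, 64))
    img[20:30, 5 + 5 * t : 15 + 5 * t] = 1.0
    frames.append(img)

tensor = np.stack(frames)                  # (time, y, x) tensor array
diff = np.abs(np.diff(tensor, axis=0))     # frame-to-frame change
# Column of strongest change per step tracks the feature's motion in x
motion_x = [d.sum(axis=0).argmax() for d in diff]
```

From the per-step displacement and the frame rate, a flow-field velocity estimate follows directly; real PIV replaces the crude differencing with windowed cross-correlation.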

  14. Higher-order Zeeman and spin terms in the electron paramagnetic resonance spin Hamiltonian; their description in irreducible form using Cartesian, tesseral spherical tensor and Stevens' operator expressions.

    PubMed

    McGavin, Dennis G; Tennant, W Craighead

    2009-06-17

    In setting up a spin Hamiltonian (SH) to study high-spin Zeeman and high-spin nuclear and/or electronic interactions in electron paramagnetic resonance (EPR) experiments, it is argued that a maximally reduced SH (MRSH) framed in tesseral combinations of spherical tensor operators is necessary. The SH then contains only those terms that are necessary and sufficient to describe the particular spin system. The paper proceeds to obtain interrelationships between the parameters of the MRSH and those of alternative SHs expressed in Cartesian tensor and Stevens operator-equivalent forms. The examples taken initially are Cartesian and Stevens' expressions for high-spin Zeeman terms of dimension BS(3) and BS(5). Starting from the well-known decomposition of the general Cartesian tensor of second rank into three irreducible tensors of ranks 0, 1 and 2, the decompositions of Cartesian tensors of ranks 4 and 6 are treated similarly. Next, following a generalization of the tesseral spherical tensor equations, the interrelationships amongst the parameters of the three kinds of expressions, as derived from equivalent SHs, are determined, and detailed tables, including all redundancy equations, are set out. In each of these cases the lowest symmetry, [Formula: see text] Laue class, is assumed, and examples of relationships for specific higher symmetries are derived therefrom. The validity of a spin Hamiltonian containing mixtures of terms from the three expressions is considered in some detail for several specific symmetries, including again the lowest symmetry. Finally, we address the application of some of the relationships derived here to seldom-observed low-symmetry effects in EPR spectra, when high-spin electronic and nuclear interactions are present. PMID:21693947
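The rank-2 decomposition the abstract starts from, splitting a general Cartesian tensor into irreducible parts of ranks 0 (isotropic), 1 (antisymmetric), and 2 (symmetric traceless), is easy to verify numerically:

```python
import numpy as np

T = np.arange(9, dtype=float).reshape(3, 3)   # arbitrary rank-2 tensor

iso = np.trace(T) / 3.0 * np.eye(3)           # rank 0: isotropic part
anti = 0.5 * (T - T.T)                        # rank 1: antisymmetric part
sym_dev = 0.5 * (T + T.T) - iso               # rank 2: symmetric traceless part
```

The three parts sum back to T and transform independently under rotations, which is exactly what makes the decomposition "irreducible"; the ranks 4 and 6 cases treated in the paper generalize this bookkeeping.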

  15. Nonequilibrium adiabatic molecular dynamics simulations of methane clathrate hydrate decomposition

    NASA Astrophysics Data System (ADS)

    Alavi, Saman; Ripmeester, J. A.

    2010-04-01

    Nonequilibrium, constant energy, constant volume (NVE) molecular dynamics simulations are used to study the decomposition of methane clathrate hydrate in contact with water. Under adiabatic conditions, the rate of methane clathrate decomposition is affected by heat and mass transfer arising from the breakup of the clathrate hydrate framework and release of the methane gas at the solid-liquid interface and diffusion of methane through water. We observe that temperature gradients are established between the clathrate and solution phases as a result of the endothermic clathrate decomposition process and this factor must be considered when modeling the decomposition process. Additionally we observe that clathrate decomposition does not occur gradually with breakup of individual cages, but rather in a concerted fashion with rows of structure I cages parallel to the interface decomposing simultaneously. Due to the concerted breakup of layers of the hydrate, large amounts of methane gas are released near the surface which can form bubbles that will greatly affect the rate of mass transfer near the surface of the clathrate phase. The effects of these phenomena on the rate of methane hydrate decomposition are determined and implications on hydrate dissociation in natural methane hydrate reservoirs are discussed.

  16. An observation of homogeneous and heterogeneous catalysis processes in the decomposition of H{sub 2}O{sub 2} over MnO{sub 2} and Mn(OH){sub 2}

    SciTech Connect

    Jiang, S.P.; Ashton, W.R.; Tseung, A.C.C. )

    1991-09-01

    The kinetics of peroxide decomposition by manganese dioxide (MnO{sub 2}) and manganese hydroxide (Mn(OH){sub 2}) have been studied in alkaline solutions. The activity for peroxide decomposition on Mn(OH){sub 2} was generally higher than MnO{sub 2} and the kinetics for the decomposition of H{sub 2}O{sub 2} were first-order in the case of MnO{sub 2} catalysts, but 1.3-order for Mn(OH){sub 2} catalysts. It is suggested that H{sub 2}O{sub 2} is mainly homogeneously decomposed by Mn{sup 2+} ions (in the form of HMnO{sub 2}{sup {minus}} ions in concentrated alkaline solutions) dissolved in the solution in the case of Mn(OH){sub 2}. Compared with the results reported for the decomposition of H{sub 2}O{sub 2} in the presence of 1 ppm Co{sup 2+} ions, it is concluded that the kinetics of the homogeneous decomposition of H{sub 2}O{sub 2} are directly influenced by the concentration of the active species in the solution.
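The contrast between first-order and 1.3-order kinetics can be sketched by integrating dC/dt = −kCⁿ for the two orders; the rate constant and initial concentration are illustrative, not fitted values from the study:

```python
import numpy as np

def integrate(order, k=0.05, C0=1.0, dt=0.1, steps=600):
    # Forward-Euler integration of dC/dt = -k * C**order (illustrative values)
    C = C0
    out = [C]
    for _ in range(steps):
        C = max(C - dt * k * C**order, 0.0)
        out.append(C)
    return np.array(out)

first_order = integrate(1.0)   # MnO2-like kinetics
order_1p3 = integrate(1.3)     # Mn(OH)2-like kinetics
```

Starting from the same concentration, the 1.3-order curve decays more slowly once C drops below 1, since C^1.3 < C there; distinguishing such curves is how reaction order is inferred from decomposition data.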

  17. Tensor Forces and the Ground-State Structure of Nuclei

    SciTech Connect

    Rocco Schiavilla

    2007-03-01

    Two-nucleon momentum distributions are calculated for the ground states of nuclei with mass number A {le} 8, using accurate variational Monte Carlo wave functions derived from a realistic Hamiltonian with two- and three-nucleon potentials. The momentum distribution of 'np' pairs is found to be much larger than that of 'pp' pairs for values of the relative momentum in the range (300--600) MeV/c and vanishing total momentum. This large difference, more than an order of magnitude, is seen in all nuclei considered, and has a universal character originating from the tensor components present in any realistic nucleon-nucleon potential. The correlations induced by the tensor force strongly influence the structure of 'np' pairs, which are known to be predominantly in deuteron-like states, while they are ineffective for 'pp' pairs, which are mostly in {sup 1}S{sub 0} states. These features should be easily observable in two-nucleon knock-out processes, for example in A(e,e{prime} np) and A(e,e{prime} pp) reactions.

  18. Tensor Forces and the Ground-State Structure of Nuclei

    SciTech Connect

    Schiavilla, R.; Wiringa, R. B.; Pieper, Steven C.; Carlson, J.

    2007-03-30

    Two-nucleon momentum distributions are calculated for the ground states of nuclei with mass number A{<=}8, using variational Monte Carlo wave functions derived from a realistic Hamiltonian with two- and three-nucleon potentials. The momentum distribution of np pairs is found to be much larger than that of pp pairs for values of the relative momentum in the range (300-600) MeV/c and vanishing total momentum. This order of magnitude difference is seen in all nuclei considered and has a universal character originating from the tensor components present in any realistic nucleon-nucleon potential. The correlations induced by the tensor force strongly influence the structure of np pairs, which are predominantly in deuteronlike states, while they are ineffective for pp pairs, which are mostly in {sup 1}S{sub 0} states. These features should be easily observable in two-nucleon knockout processes, such as A(e,e{sup '}np) and A(e,e{sup '}pp) reactions.

  19. High energy decomposition of halogenated hydrocarbons

    SciTech Connect

    Mincher, B.J.; Arbon, R.E.; Meikrantz, D.H.

    1992-09-01

    This program is the INEL component of a joint collaborative effort with Lawrence Livermore National Laboratory (LLNL). Its purpose is to demonstrate a viable process for breaking down hazardous halogenated organic wastes into simpler, nonhazardous wastes using high-energy ionizing radiation. The INEL effort focuses on the use of spent-reactor-fuel gamma radiation sources to decompose complex wastes such as PCBs. Work in FY92 expanded upon that reported for FY91. During FY91 it was reported that PCBs were susceptible to radiolytic decomposition in alcoholic solution, but that only a small percentage of decomposition products could be accounted for. It was shown that decomposition was more efficient in methanol than in isopropanol and that the presence of a copper-zinc couple catalyst did not affect the reaction rate. Major goals of the FY92 work were to determine the reaction mechanism, to identify further reaction products, and to select a more appropriate catalyst. Described in this report are results of mechanism-specific experiments, mass balance studies, transformer oil irradiations, the use of hydrogen peroxide as a potential catalyst, and the irradiation of pure PCB crystals in the absence of diluent.

  20. Vertebrate decomposition is accelerated by soil microbes.

    PubMed

    Lauber, Christian L; Metcalf, Jessica L; Keepers, Kyle; Ackermann, Gail; Carter, David O; Knight, Rob

    2014-08-01

    Carrion decomposition is an ecologically important natural phenomenon influenced by a complex set of factors, including temperature, moisture, and the activity of microorganisms, invertebrates, and scavengers. The role of soil microbes as decomposers in this process is essential but not well understood and represents a knowledge gap in carrion ecology. To better define the role and sources of microbes in carrion decomposition, lab-reared mice were decomposed on either (i) soil with an intact microbial community or (ii) soil that was sterilized. We characterized the microbial community (16S rRNA gene for bacteria and archaea, and the 18S rRNA gene for fungi and microbial eukaryotes) for three body sites along with the underlying soil (i.e., gravesoils) at time intervals coinciding with visible changes in carrion morphology. Our results indicate that mice placed on soil with intact microbial communities reach advanced stages of decomposition 2 to 3 times faster than those placed on sterile soil. Microbial communities associated with skin and gravesoils of carrion in stages of active and advanced decay were significantly different between soil types (sterile versus untreated), suggesting that substrates on which carrion decompose may partially determine the microbial decomposer community. However, the source of the decomposer community (soil- versus carcass-associated microbes) was not clear in our data set, suggesting that greater sequencing depth needs to be employed to identify the origin of the decomposer communities in carrion decomposition. Overall, our data show that soil microbial communities have a significant impact on the rate at which carrion decomposes and have important implications for understanding carrion ecology. PMID:24907317

  1. Vertebrate Decomposition Is Accelerated by Soil Microbes

    PubMed Central

    Lauber, Christian L.; Metcalf, Jessica L.; Keepers, Kyle; Ackermann, Gail; Carter, David O.

    2014-01-01

    Carrion decomposition is an ecologically important natural phenomenon influenced by a complex set of factors, including temperature, moisture, and the activity of microorganisms, invertebrates, and scavengers. The role of soil microbes as decomposers in this process is essential but not well understood and represents a knowledge gap in carrion ecology. To better define the role and sources of microbes in carrion decomposition, lab-reared mice were decomposed on either (i) soil with an intact microbial community or (ii) soil that was sterilized. We characterized the microbial community (16S rRNA gene for bacteria and archaea, and the 18S rRNA gene for fungi and microbial eukaryotes) for three body sites along with the underlying soil (i.e., gravesoils) at time intervals coinciding with visible changes in carrion morphology. Our results indicate that mice placed on soil with intact microbial communities reach advanced stages of decomposition 2 to 3 times faster than those placed on sterile soil. Microbial communities associated with skin and gravesoils of carrion in stages of active and advanced decay were significantly different between soil types (sterile versus untreated), suggesting that substrates on which carrion decompose may partially determine the microbial decomposer community. However, the source of the decomposer community (soil- versus carcass-associated microbes) was not clear in our data set, suggesting that greater sequencing depth needs to be employed to identify the origin of the decomposer communities in carrion decomposition. Overall, our data show that soil microbial communities have a significant impact on the rate at which carrion decomposes and have important implications for understanding carrion ecology. PMID:24907317

  2. Geometric derivation of the microscopic stress: A covariant central force decomposition

    NASA Astrophysics Data System (ADS)

    Torres-Sánchez, Alejandro; Vanegas, Juan M.; Arroyo, Marino

    2016-08-01

    We revisit the derivation of the microscopic stress, linking the statistical mechanics of particle systems and continuum mechanics. The starting point in our geometric derivation is the Doyle-Ericksen formula, which states that the Cauchy stress tensor is the derivative of the free energy with respect to the ambient metric tensor and which follows from a covariance argument. Thus, our approach to define the microscopic stress tensor does not rely on the statement of balance of linear momentum as in the classical Irving-Kirkwood-Noll approach. Nevertheless, the resulting stress tensor satisfies balance of linear and angular momentum. Furthermore, our approach removes the ambiguity in the definition of the microscopic stress in the presence of multibody interactions by naturally suggesting a canonical and physically motivated force decomposition into pairwise terms, a key ingredient in this theory. As a result, our approach provides objective expressions to compute a microscopic stress for a system in equilibrium and for force-fields expanded into multibody interactions of arbitrarily high order. We illustrate the proposed methodology with molecular dynamics simulations of a fibrous protein using a force-field involving up to 5-body interactions.
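
    For reference, the Doyle-Ericksen formula invoked here is commonly written as follows (a standard form from the continuum mechanics literature; the notation is generic, not the authors'):

```latex
\sigma^{ab} \;=\; 2\,\rho\,\frac{\partial \psi}{\partial g_{ab}}
```

    where \rho is the mass density, \psi the free energy per unit mass, and g_{ab} the ambient (spatial) metric tensor.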

  3. [Effects of aquatic plants during their decay and decomposition on water quality].

    PubMed

    Tang, Jin-Yan; Cao, Pei-Pei; Xu, Chi; Liu, Mao-Song

    2013-01-01

    A 64-day decomposition experiment with 6 aquatic plant species was conducted to study the temporal variation of nutrient concentrations in the water body as the plants decayed and decomposed. Decomposition rates differed considerably among the 6 species: floating-leaved plants decomposed fastest, followed by submerged plants and then emergent plants. The effects of the species on water quality during decomposition also differed and were related to plant biomass density. During the decomposition of Phragmites australis, the water body had the lowest concentrations of chemical oxygen demand, total nitrogen, and total phosphorus. In the late decomposition period of Zizania latifolia, the concentrations of chemical oxygen demand and total nitrogen increased, deteriorating water quality. During the decomposition of Nymphoides peltatum and Nelumbo nucifera, the concentrations of chemical oxygen demand and total nitrogen were higher than during the decomposition of the other test plants. In contrast, during the decomposition of Potamogeton crispus and Myriophyllum verticillatum, the water body had the highest concentrations of ammonium, nitrate, and total phosphorus. For a given plant species, the main water quality indices showed similar variation trends under different biomass densities. The results suggest that moderate amounts of plant residue can effectively promote nitrogen and phosphorus cycling in the water body, reduce its nitrate concentration to some extent, and decrease the water body's nitrogen load. PMID:23717994

  4. Automatic deformable diffusion tensor registration for fiber population analysis.

    PubMed

    Irfanoglu, M O; Machiraju, R; Sammet, S; Pierpaoli, C; Knopp, M V

    2008-01-01

    In this work, we propose a novel method for deformable tensor-to-tensor registration of diffusion tensor images. Our registration method models the distances between tensors with Geodesic-Loxodromes and employs a version of the Multi-Dimensional Scaling (MDS) algorithm to unfold the manifold described by this metric. The vector images obtained through MDS, which encode the same shape properties as the tensors, are fed into a multi-step vector-image registration scheme, and the resulting deformation fields are used to reorient the tensor fields. Results on brain DTI indicate that the proposed method is well suited to deformable fiber-to-fiber correspondence and DTI-atlas construction. PMID:18982704

  5. Turbulent fluid motion 2: Scalars, vectors, and tensors

    NASA Technical Reports Server (NTRS)

    Deissler, Robert G.

    1991-01-01

    The author shows that the sum or difference of two vectors is a vector. Similarly, the sum of any two tensors of the same order is a tensor of that order. No meaning is attached to the sum of tensors of different orders, say u_i + u_ij; that is not a tensor. In general, an equation containing tensors has meaning only if all the terms in the equation are tensors of the same order, and if the same unrepeated subscripts appear in all the terms. These facts will be used in obtaining appropriate equations for fluid turbulence. With the foregoing background, the derivation of appropriate continuum equations for turbulence should be straightforward.
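
    The claim that a sum of same-order tensors is itself a tensor follows from the linearity of the transformation law; a minimal NumPy check (an illustration, not from the report):

```python
import numpy as np

# Under a rotation R, a second-order tensor transforms as T'_ij = R_ik R_jl T_kl.
# The rule is linear in T, so the sum of two second-order tensors transforms
# the same way -- i.e. the sum is itself a second-order tensor.
rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3))
B = rng.normal(size=(3, 3))

# Random rotation matrix from the QR decomposition of a Gaussian matrix
R, _ = np.linalg.qr(rng.normal(size=(3, 3)))

def rotate(T):
    """Apply the second-order transformation law T'_ij = R_ik R_jl T_kl."""
    return np.einsum('ik,jl,kl->ij', R, R, T)

# Rotating the sum equals summing the rotated tensors
assert np.allclose(rotate(A + B), rotate(A) + rotate(B))
```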

  6. A Communication-Optimal Framework for Contracting Distributed Tensors

    SciTech Connect

    Rajbhandari, Samyam; Nikam, Akshay; Lai, Pai-Wei; Stock, Kevin; Krishnamoorthy, Sriram; Sadayappan, Ponnuswamy

    2014-11-16

    Tensor contractions are extremely compute-intensive generalized matrix multiplication operations encountered in many computational science fields, such as quantum chemistry and nuclear physics. Unlike distributed matrix multiplication, which has been extensively studied, limited work has been done in understanding distributed tensor contractions. In this paper, we characterize distributed tensor contraction algorithms on torus networks. We develop a framework with three fundamental communication operators to generate communication-efficient contraction algorithms for arbitrary tensor contractions. We show that for a given amount of memory per processor, our framework is communication optimal for all tensor contractions. We demonstrate performance and scalability of our framework on up to 262,144 cores of the BG/Q supercomputer using five tensor contraction examples.
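
    A tensor contraction of this kind reduces to a matrix multiplication once indices are grouped, which is the view distributed algorithms build on. A minimal NumPy sketch (illustrative only, not the paper's framework):

```python
import numpy as np

# A 4-index contraction of the kind found in quantum chemistry:
#   C[a,b,i,j] = sum_{c,d} V[a,b,c,d] * T[c,d,i,j]
rng = np.random.default_rng(1)
n_occ, n_virt = 3, 4
V = rng.normal(size=(n_virt, n_virt, n_virt, n_virt))
T = rng.normal(size=(n_virt, n_virt, n_occ, n_occ))
C = np.einsum('abcd,cdij->abij', V, T)

# The same contraction as one matrix multiply after grouping (a,b) and (i,j):
C2 = (V.reshape(n_virt**2, n_virt**2)
        @ T.reshape(n_virt**2, n_occ**2)).reshape(n_virt, n_virt, n_occ, n_occ)
assert np.allclose(C, C2)
```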

  7. Long-term litter decomposition controlled by manganese redox cycling

    PubMed Central

    Keiluweit, Marco; Nico, Peter; Harmon, Mark E.; Mao, Jingdong; Pett-Ridge, Jennifer; Kleber, Markus

    2015-01-01

    Litter decomposition is a keystone ecosystem process impacting nutrient cycling and productivity, soil properties, and the terrestrial carbon (C) balance, but the factors regulating decomposition rate are still poorly understood. Traditional models assume that the rate is controlled by litter quality, relying on parameters such as lignin content as predictors. However, a strong correlation has been observed between the manganese (Mn) content of litter and decomposition rates across a variety of forest ecosystems. Here, we show that long-term litter decomposition in forest ecosystems is tightly coupled to Mn redox cycling. Over 7 years of litter decomposition, microbial transformation of litter was paralleled by variations in Mn oxidation state and concentration. A detailed chemical imaging analysis of the litter revealed that fungi recruit and redistribute unreactive Mn2+ provided by fresh plant litter to produce oxidative Mn3+ species at sites of active decay, with Mn eventually accumulating as insoluble Mn3+/4+ oxides. Formation of reactive Mn3+ species coincided with the generation of aromatic oxidation products, providing direct proof of the previously posited role of Mn3+-based oxidizers in the breakdown of litter. Our results suggest that the litter-decomposing machinery at our coniferous forest site depends on the ability of plants and microbes to supply, accumulate, and regenerate short-lived Mn3+ species in the litter layer. This observation indicates that biogeochemical constraints on bioavailability, mobility, and reactivity of Mn in the plant–soil system may have a profound impact on litter decomposition rates. PMID:26372954

  8. Long-term litter decomposition controlled by manganese redox cycling.

    PubMed

    Keiluweit, Marco; Nico, Peter; Harmon, Mark E; Mao, Jingdong; Pett-Ridge, Jennifer; Kleber, Markus

    2015-09-22

    Litter decomposition is a keystone ecosystem process impacting nutrient cycling and productivity, soil properties, and the terrestrial carbon (C) balance, but the factors regulating decomposition rate are still poorly understood. Traditional models assume that the rate is controlled by litter quality, relying on parameters such as lignin content as predictors. However, a strong correlation has been observed between the manganese (Mn) content of litter and decomposition rates across a variety of forest ecosystems. Here, we show that long-term litter decomposition in forest ecosystems is tightly coupled to Mn redox cycling. Over 7 years of litter decomposition, microbial transformation of litter was paralleled by variations in Mn oxidation state and concentration. A detailed chemical imaging analysis of the litter revealed that fungi recruit and redistribute unreactive Mn(2+) provided by fresh plant litter to produce oxidative Mn(3+) species at sites of active decay, with Mn eventually accumulating as insoluble Mn(3+/4+) oxides. Formation of reactive Mn(3+) species coincided with the generation of aromatic oxidation products, providing direct proof of the previously posited role of Mn(3+)-based oxidizers in the breakdown of litter. Our results suggest that the litter-decomposing machinery at our coniferous forest site depends on the ability of plants and microbes to supply, accumulate, and regenerate short-lived Mn(3+) species in the litter layer. This observation indicates that biogeochemical constraints on bioavailability, mobility, and reactivity of Mn in the plant-soil system may have a profound impact on litter decomposition rates. PMID:26372954

  9. Nuclear driven water decomposition plant for hydrogen production

    NASA Technical Reports Server (NTRS)

    Parker, G. H.; Brecher, L. E.; Farbman, G. H.

    1976-01-01

    The conceptual design of a hydrogen production plant using a very-high-temperature nuclear reactor (VHTR) to energize a hybrid electrolytic-thermochemical system for water decomposition has been prepared. A graphite-moderated helium-cooled VHTR is used to produce 1850 F gas for electric power generation and 1600 F process heat for the water-decomposition process which uses sulfur compounds and promises performance superior to normal water electrolysis or other published thermochemical processes. The combined cycle operates at an overall thermal efficiency in excess of 45%, and the overall economics of hydrogen production by this plant have been evaluated predicated on a consistent set of economic ground rules. The conceptual design and evaluation efforts have indicated that development of this type of nuclear-driven water-decomposition plant will permit large-scale economic generation of hydrogen in the 1990s.

  10. Simultaneous Analysis and Quality Assurance for Diffusion Tensor Imaging

    PubMed Central

    Lauzon, Carolyn B.; Asman, Andrew J.; Esparza, Michael L.; Burns, Scott S.; Fan, Qiuyun; Gao, Yurui; Anderson, Adam W.; Davis, Nicole; Cutting, Laurie E.; Landman, Bennett A.

    2013-01-01

    Diffusion tensor imaging (DTI) enables non-invasive, cyto-architectural mapping of in vivo tissue microarchitecture through voxel-wise mathematical modeling of multiple magnetic resonance imaging (MRI) acquisitions, each differently sensitized to water diffusion. DTI computations are fundamentally estimation processes and are sensitive to noise and artifacts. Despite widespread adoption in the neuroimaging community, maintaining consistent DTI data quality remains challenging given the propensity for patient motion, artifacts associated with fast imaging techniques, and the possibility of hardware changes/failures. Furthermore, the quantity of data acquired per voxel, the non-linear estimation process, and numerous potential use cases complicate traditional visual data inspection approaches. Currently, quality inspection of DTI data has relied on visual inspection and individual processing in DTI analysis software programs (e.g. DTIPrep, DTI-studio). However, recent advances in applied statistical methods have yielded several different metrics to assess noise level, artifact propensity, quality of tensor fit, variance of estimated measures, and bias in estimated measures. To date, these metrics have been largely studied in isolation. Herein, we select complementary metrics for integration into an automatic DTI analysis and quality assurance pipeline. The pipeline completes in 24 hours, stores statistical outputs, and produces a graphical summary quality analysis (QA) report. We assess the utility of this streamlined approach for empirical quality assessment on 608 DTI datasets from pediatric neuroimaging studies. The efficiency and accuracy of quality analysis using the proposed pipeline is compared with quality analysis based on visual inspection. The unified pipeline is found to save a statistically significant amount of time (over 70%) while improving the consistency of QA between a DTI expert and a pool of research associates. Projection of QA metrics to a low
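
    The voxel-wise estimation step that makes DTI sensitive to noise can be illustrated with a toy log-linear tensor fit; the values below are invented and the code is a sketch, not the pipeline's implementation:

```python
import numpy as np

# Diffusion tensor model: S_i = S0 * exp(-b * g_i^T D g_i).
# Simulate noise-free signals for a known tensor, then recover D
# by linear least squares on the log-signals.
rng = np.random.default_rng(0)
D_true = np.diag([1.7e-3, 0.3e-3, 0.3e-3])  # mm^2/s, a prolate tensor
b, S0 = 1000.0, 1.0                          # s/mm^2
g = rng.normal(size=(30, 3))
g /= np.linalg.norm(g, axis=1, keepdims=True)  # unit gradient directions
S = S0 * np.exp(-b * np.einsum('ij,jk,ik->i', g, D_true, g))

# Design matrix for the 6 unique elements [Dxx, Dyy, Dzz, Dxy, Dxz, Dyz]
X = np.column_stack([g[:, 0]**2, g[:, 1]**2, g[:, 2]**2,
                     2*g[:, 0]*g[:, 1], 2*g[:, 0]*g[:, 2], 2*g[:, 1]*g[:, 2]])
d, *_ = np.linalg.lstsq(-b * X, np.log(S / S0), rcond=None)
D_est = np.array([[d[0], d[3], d[4]],
                  [d[3], d[1], d[5]],
                  [d[4], d[5], d[2]]])
assert np.allclose(D_est, D_true, atol=1e-8)
```

    With real, noisy data the residual of such a fit is one example of a per-voxel tensor-fit quality metric a pipeline can monitor.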

  11. Simultaneous analysis and quality assurance for diffusion tensor imaging.

    PubMed

    Lauzon, Carolyn B; Asman, Andrew J; Esparza, Michael L; Burns, Scott S; Fan, Qiuyun; Gao, Yurui; Anderson, Adam W; Davis, Nicole; Cutting, Laurie E; Landman, Bennett A

    2013-01-01

    Diffusion tensor imaging (DTI) enables non-invasive, cyto-architectural mapping of in vivo tissue microarchitecture through voxel-wise mathematical modeling of multiple magnetic resonance imaging (MRI) acquisitions, each differently sensitized to water diffusion. DTI computations are fundamentally estimation processes and are sensitive to noise and artifacts. Despite widespread adoption in the neuroimaging community, maintaining consistent DTI data quality remains challenging given the propensity for patient motion, artifacts associated with fast imaging techniques, and the possibility of hardware changes/failures. Furthermore, the quantity of data acquired per voxel, the non-linear estimation process, and numerous potential use cases complicate traditional visual data inspection approaches. Currently, quality inspection of DTI data has relied on visual inspection and individual processing in DTI analysis software programs (e.g. DTIPrep, DTI-studio). However, recent advances in applied statistical methods have yielded several different metrics to assess noise level, artifact propensity, quality of tensor fit, variance of estimated measures, and bias in estimated measures. To date, these metrics have been largely studied in isolation. Herein, we select complementary metrics for integration into an automatic DTI analysis and quality assurance pipeline. The pipeline completes in 24 hours, stores statistical outputs, and produces a graphical summary quality analysis (QA) report. We assess the utility of this streamlined approach for empirical quality assessment on 608 DTI datasets from pediatric neuroimaging studies. The efficiency and accuracy of quality analysis using the proposed pipeline is compared with quality analysis based on visual inspection. The unified pipeline is found to save a statistically significant amount of time (over 70%) while improving the consistency of QA between a DTI expert and a pool of research associates. Projection of QA metrics to a low

  12. Thermal decomposition products of butyraldehyde

    NASA Astrophysics Data System (ADS)

    Hatten, Courtney D.; Kaskey, Kevin R.; Warner, Brian J.; Wright, Emily M.; McCunn, Laura R.

    2013-12-01

    The thermal decomposition of gas-phase butyraldehyde, CH3CH2CH2CHO, was studied in the 1300-1600 K range with a hyperthermal nozzle. Products were identified via matrix-isolation Fourier transform infrared spectroscopy and photoionization mass spectrometry in separate experiments. There are at least six major initial reactions contributing to the decomposition of butyraldehyde: a radical decomposition channel leading to propyl radical + CO + H; molecular elimination to form H2 + ethylketene; a keto-enol tautomerism followed by elimination of H2O producing 1-butyne; an intramolecular hydrogen shift and elimination producing vinyl alcohol and ethylene; a β-C-C bond scission yielding ethyl and vinoxy radicals; and a γ-C-C bond scission yielding methyl and CH2CH2CHO radicals. The first three reactions are analogous to those observed in the thermal decomposition of acetaldehyde, but the latter three reactions are made possible by the longer alkyl chain structure of butyraldehyde. The products identified following thermal decomposition of butyraldehyde are CO, HCO, CH3CH2CH2, CH3CH2CH=C=O, H2O, CH3CH2C≡CH, CH2=CH2, CH2=CHOH, CH2CHO, CH3, HC≡CH, CH2CCH, CH3C≡CH, CH3CH=CH2, H2C=C=O, CH3CH2CH3, CH2=CHCHO, C4H2, C4H4, and C4H8. The first ten products listed are direct products of the six reactions listed above. The remaining products can be attributed to further decomposition reactions or bimolecular reactions in the nozzle.

  13. Exploring Multimodal Data Fusion Through Joint Decompositions with Flexible Couplings

    NASA Astrophysics Data System (ADS)

    Cabral Farias, Rodrigo; Cohen, Jeremy Emile; Comon, Pierre

    2016-09-01

    A Bayesian framework is proposed to define flexible coupling models for joint tensor decompositions of multiple data sets. Under this framework, a natural formulation of the data fusion problem is to cast it in terms of a joint maximum a posteriori (MAP) estimator. Data-driven scenarios of joint posterior distributions are provided, including general Gaussian priors and non-Gaussian coupling priors. We present and discuss implementation issues of algorithms used to obtain the joint MAP estimator. We also show how this framework can be adapted to tackle the problem of joint decompositions of large datasets. In the case of a conditional Gaussian coupling with a linear transformation, we give theoretical bounds on the data fusion performance using the Bayesian Cramér-Rao bound. Simulations are reported for hybrid coupling models ranging from simple additive Gaussian models, to Gamma-type models with positive variables and to the coupling of data sets which are inherently of different size due to different resolution of the measurement devices.
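
    The joint MAP estimator referred to above takes a schematic form along these lines (generic notation, not the authors'):

```latex
\hat{\theta}_{\mathrm{MAP}}
  \;=\; \arg\max_{\theta_1,\,\theta_2}
  \Big[ \log p(Y_1 \mid \theta_1) + \log p(Y_2 \mid \theta_2)
      + \log p(\theta_1, \theta_2) \Big]
```

    where Y_1, Y_2 are the observed data sets, \theta_1, \theta_2 collect the factors of their decompositions, and the coupling prior p(\theta_1, \theta_2) ties the shared factors together.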

  14. Irreducible Cartesian tensors of highest weight, for arbitrary order

    NASA Astrophysics Data System (ADS)

    Mane, S. R.

    2016-03-01

    A closed form expression is presented for the irreducible Cartesian tensor of highest weight, for arbitrary order. Two proofs are offered, one employing bookkeeping of indices and, after establishing the connection with the so-called natural tensors and their projection operators, the other one employing purely coordinate-free tensor manipulations. Some theorems and formulas in the published literature are generalized from SO(3) to SO(n), for dimensions n ≥ 3.
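
    For order 2 the highest-weight irreducible part is the familiar symmetric traceless projection, which commutes with rotations; a small NumPy check of that order-2 case only (an illustration, not the paper's general closed-form result):

```python
import numpy as np

n = 3
rng = np.random.default_rng(0)
T = rng.normal(size=(n, n))

def highest_weight(A):
    """Symmetric traceless (highest-weight) part of a second-order Cartesian tensor."""
    return 0.5 * (A + A.T) - np.trace(A) / n * np.eye(n)

T_hw = highest_weight(T)
assert np.isclose(np.trace(T_hw), 0.0)   # traceless
assert np.allclose(T_hw, T_hw.T)         # symmetric

# The projection commutes with rotations, as an irreducible part must
Q, _ = np.linalg.qr(rng.normal(size=(n, n)))
assert np.allclose(Q @ T_hw @ Q.T, highest_weight(Q @ T @ Q.T))
```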

  15. Redberry: a computer algebra system designed for tensor manipulation

    NASA Astrophysics Data System (ADS)

    Poslavsky, Stanislav; Bolotin, Dmitry

    2015-05-01

    In this paper we focus on the main aspects of computer-aided calculations with tensors and present a new computer algebra system Redberry which was specifically designed for algebraic tensor manipulation. We touch upon distinctive features of tensor software in comparison with pure scalar systems, discuss the main approaches used to handle tensorial expressions and present the comparison of Redberry performance with other relevant tools.

  16. Braided Tensor Categories and Extensions of Vertex Operator Algebras

    NASA Astrophysics Data System (ADS)

    Huang, Yi-Zhi; Kirillov, Alexander; Lepowsky, James

    2015-08-01

    Let V be a vertex operator algebra satisfying suitable conditions such that in particular its module category has a natural vertex tensor category structure, and consequently, a natural braided tensor category structure. We prove that the notions of extension (i.e., enlargement) of V and of commutative associative algebra, with uniqueness of unit and with trivial twist, in the braided tensor category of V-modules are equivalent.

  17. Plant diversity effects on root decomposition in grasslands

    NASA Astrophysics Data System (ADS)

    Chen, Hongmei; Mommer, Liesje; van Ruijven, Jasper; de Kroon, Hans; Gessler, Arthur; Scherer-Lorenzen, Michael; Wirth, Christian; Weigelt, Alexandra

    2016-04-01

    Loss of plant diversity impairs ecosystem functioning. Compared to other well-studied processes, we know little about whether and how plant diversity affects root decomposition, which is limiting our knowledge on biodiversity-carbon cycling relationships in the soil. Plant diversity potentially affects root decomposition via two non-exclusive mechanisms: by providing roots of different substrate quality and/or by altering the soil decomposition environment. To disentangle these two mechanisms, three decomposition experiments using a litter-bag approach were conducted on experimental grassland plots differing in plant species richness, functional group richness and functional group composition (e.g. presence/absence of grasses, legumes, small herbs and tall herbs, the Jena Experiment). We studied: 1) root substrate quality effects by decomposing roots collected from the different experimental plant communities in one common plot; 2) soil decomposition environment effects by decomposing standard roots in all experimental plots; and 3) the overall plant diversity effects by decomposing community roots in their 'home' plots. Litter bags were installed in April 2014 and retrieved after 1, 2 and 4 months to determine the mass loss. We found that mass loss decreased with increasing plant species richness, but not with functional group richness in the three experiments. However, functional group presence significantly affected mass loss with primarily negative effects of the presence of grasses and positive effects of the presence of legumes and small herbs. Our results thus provide clear evidence that species richness has a strong negative effect on root decomposition via effects on both root substrate quality and soil decomposition environment. This negative plant diversity-root decomposition relationship may partly account for the positive effect of plant diversity on soil C stocks by reducing C loss in addition to increasing primary root productivity. However, to fully
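
    Litter-bag mass-loss series like the one above are commonly summarized by a single-exponential decay constant k; a short sketch with invented numbers (not data from the Jena Experiment):

```python
import numpy as np

# m(t)/m0 = exp(-k t): estimate k from log mass-remaining, forcing the fit
# through 1 at t = 0. The fractions below are made up for illustration.
t = np.array([1.0, 2.0, 4.0])                  # months after installation
frac_remaining = np.array([0.90, 0.80, 0.65])

# Least squares through the origin on log(fraction remaining)
k = -np.sum(t * np.log(frac_remaining)) / np.sum(t**2)
half_life = np.log(2) / k   # months until half the root mass is gone
assert k > 0
```

    Comparing k (or half-life) across plots is one standard way to quantify the diversity effect on decomposition rate.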

  18. Ground state fidelity from tensor network representations.

    PubMed

    Zhou, Huan-Qiang; Orús, Roman; Vidal, Guifre

    2008-02-29

    For any D-dimensional quantum lattice system, the fidelity between two ground state many-body wave functions is mapped onto the partition function of a D-dimensional classical statistical vertex lattice model with the same lattice geometry. The fidelity per lattice site, analogous to the free energy per site, is well defined in the thermodynamic limit and can be used to characterize the phase diagram of the model. We explain how to compute the fidelity per site in the context of tensor network algorithms, and demonstrate the approach by analyzing the two-dimensional quantum Ising model with transverse and parallel magnetic fields. PMID:18352611
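
    The fidelity per lattice site described above can be written schematically as (generic notation, not taken verbatim from the paper):

```latex
\ln d(\psi_1, \psi_2) \;=\; \lim_{N \to \infty} \frac{1}{N} \ln F(\psi_1, \psi_2),
\qquad F(\psi_1, \psi_2) \;=\; \bigl|\langle \psi_1 \mid \psi_2 \rangle\bigr|
```

    where N is the number of lattice sites, so that \ln d plays the role of a free energy per site and remains finite in the thermodynamic limit.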

  19. Beam-plasma dielectric tensor with Mathematica

    NASA Astrophysics Data System (ADS)

    Bret, A.

    2007-03-01

    We present a Mathematica notebook allowing for the symbolic calculation of the 3×3 dielectric tensor of an electron-beam plasma system in the fluid approximation. Calculation is detailed for a cold relativistic electron beam entering a cold magnetized plasma, and for arbitrarily oriented wave vectors. We show how one can elaborate on this example to account for temperatures, arbitrarily oriented magnetic field or a different kind of plasma. Program summary: Title of program: Tensor. Catalog identifier: ADYT_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADYT_v1_0. Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland. Computers: any computer running Mathematica 4.1; tested on DELL Dimension 5100 and IBM ThinkPad T42. Installations: ETSI Industriales, Universidad Castilla la Mancha, Ciudad Real, Spain. Operating system under which the program has been tested: Windows XP Pro. Programming language used: Mathematica 4.1. Memory required to execute with typical data: 7.17 Mbytes. No. of bytes in distributed program, including test data, etc.: 33 439. No. of lines in distributed program, including test data, etc.: 3169. Distribution format: tar.gz. Nature of the physical problem: the dielectric tensor of a relativistic beam plasma system may be quite involved to calculate symbolically when considering a magnetized plasma, kinetic pressure, collisions between species, and so on. The present Mathematica notebook performs the symbolic computation in terms of some usual dimensionless variables. Method of solution: the linearized relativistic fluid equations are directly entered and solved by Mathematica to express the first-order expression of the current. This expression is then introduced into a combination of Faraday and Ampère-Maxwell's equations to give the dielectric tensor. Some additional manipulations are needed to express the result in terms of the

  20. Bounds on tensor wave and twisted inflation

    NASA Astrophysics Data System (ADS)

    Panda, Sudhakar; Sami, M.; Ward, John

    2010-11-01

    We study the bounds on tensor wave in a class of twisted inflation models, where D(4+2k)-branes are wrapped on cycles in the compact manifold and wrap the Kaluza-Klein direction in the corresponding effective field theory. While the lower bound is found to be analogous to that in type IIB models of brane inflation, the upper bound turns out to be significantly different. This is argued for a range of values for the parameter gsM satisfying the self-consistency relation and the WMAP data. Further, we observe that the wrapped D8-brane appears to be the most attractive from a cosmological perspective.