Science.gov

Sample records for processing tensor decomposition

  1. Orthogonal tensor decompositions

    SciTech Connect

    Tamara G. Kolda

    2000-03-01

    The authors explore the orthogonal decomposition of tensors (also known as multi-dimensional arrays or n-way arrays) using two different definitions of orthogonality. They present numerous examples to illustrate the difficulties in understanding such decompositions. They conclude with a counterexample to a tensor extension of the Eckart-Young SVD approximation theorem by Leibovici and Sabatier [Linear Algebra Appl. 269(1998):307--329].
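    For matrices, the Eckart-Young theorem referenced above states that the truncated SVD gives the best low-rank approximation in the Frobenius norm; the cited counterexample shows that a proposed tensor extension of this result fails. A minimal NumPy sketch of the matrix case (all values illustrative):

```python
import numpy as np

# Matrix Eckart-Young: the truncated SVD is the best rank-k approximation
# in the Frobenius norm. (The abstract's counterexample shows this does
# NOT carry over to the proposed tensor extension.)
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 5))

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
A_k = U[:, :k] * s[:k] @ Vt[:k, :]          # best rank-k approximation

err_svd = np.linalg.norm(A - A_k)
# Any other rank-k matrix does at least as badly:
B = rng.standard_normal((6, k)) @ rng.standard_normal((k, 5))
assert np.linalg.norm(A - B) >= err_svd
# The optimal error equals the tail of the singular values:
assert np.isclose(err_svd, np.sqrt((s[k:] ** 2).sum()))
```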

  2. Nontraditional tensor decompositions and applications.

    SciTech Connect

    Bader, Brett William

    2010-07-01

    This presentation will discuss two tensor decompositions that are not as well known as PARAFAC (parallel factors) and Tucker, but have proven useful in informatics applications. Three-way DEDICOM (decomposition into directional components) is an algebraic model for the analysis of 3-way arrays with nonsymmetric slices. PARAFAC2 is a related model that is less constrained than PARAFAC and allows for different objects in one mode. Applications of both models to informatics problems will be shown.

  3. An Iterative Reweighted Method for Tucker Decomposition of Incomplete Tensors

    NASA Astrophysics Data System (ADS)

    Yang, Linxiao; Fang, Jun; Li, Hongbin; Zeng, Bing

    2016-09-01

    We consider the problem of low-rank decomposition of incomplete multiway tensors. Since many real-world data lie on an intrinsically low-dimensional subspace, tensor low-rank decomposition with missing entries has applications in many data analysis problems such as recommender systems and image inpainting. In this paper, we focus on Tucker decomposition, which represents an Nth-order tensor in terms of N factor matrices and a core tensor via multilinear operations. To exploit the underlying multilinear low-rank structure in high-dimensional datasets, we propose a group-based log-sum penalty functional to place structural sparsity over the core tensor, which leads to a compact representation with the smallest core tensor. The method for Tucker decomposition is developed by iteratively minimizing a surrogate function that majorizes the original objective function, which results in an iterative reweighted process. In addition, to reduce the computational complexity, an over-relaxed monotone fast iterative shrinkage-thresholding technique is adapted and embedded in the iterative reweighted process. The proposed method is able to determine the model complexity (i.e., the multilinear rank) automatically. Simulation results show that the proposed algorithm offers competitive performance compared with other existing algorithms.
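    For context, the Tucker format referred to above factors an Nth-order tensor into a core tensor and N factor matrices. The paper's own method selects the multilinear rank automatically via the log-sum penalty; the sketch below is only the plain truncated higher-order SVD (HOSVD) baseline with the ranks given by hand, and all names are illustrative:

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: move `mode` to the front and flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T, ranks):
    """Truncated higher-order SVD: a simple, non-iterative Tucker fit."""
    factors = [np.linalg.svd(unfold(T, m), full_matrices=False)[0][:, :r]
               for m, r in enumerate(ranks)]
    core = T
    for mode, U in enumerate(factors):       # core = T x_1 U1^T x_2 U2^T ...
        core = np.moveaxis(np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1),
                           0, mode)
    return core, factors

# A tensor with exact multilinear rank (2, 2, 2) is recovered exactly.
rng = np.random.default_rng(1)
G = rng.standard_normal((2, 2, 2))
A, B, C = (rng.standard_normal((n, 2)) for n in (5, 6, 7))
T = np.einsum('abc,ia,jb,kc->ijk', G, A, B, C)

core, (Ua, Ub, Uc) = hosvd(T, (2, 2, 2))
T_hat = np.einsum('abc,ia,jb,kc->ijk', core, Ua, Ub, Uc)
assert np.allclose(T, T_hat)
```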

  4. Dynamic rotation and stretch tensors from a dynamic polar decomposition

    NASA Astrophysics Data System (ADS)

    Haller, George

    2016-01-01

    The local rigid-body component of continuum deformation is typically characterized by the rotation tensor, obtained from the polar decomposition of the deformation gradient. Beyond its well-known merits, the polar rotation tensor also has a lesser-known dynamical inconsistency: it does not satisfy the fundamental superposition principle of rigid-body rotations over adjacent time intervals. As a consequence, the polar rotation deviates from the observed mean material rotation of fibers in fluids, and introduces a purely kinematic memory effect into computed material rotation. Here we derive a generalized polar decomposition for linear processes that yields a unique, dynamically consistent rotation component, the dynamic rotation tensor, for the deformation gradient. The left dynamic stretch tensor is objective, and shares the principal strain values and axes with its classic polar counterpart. Unlike its classic polar counterpart, however, the dynamic stretch tensor evolves in time without spin. The dynamic rotation tensor further decomposes into a spatially constant mean rotation tensor and a dynamically consistent relative rotation tensor that is objective for planar deformations. We also obtain simple expressions for dynamic analogues of Cauchy's mean rotation angle that characterize a deforming body objectively.
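    The classic polar decomposition that this paper generalizes can be computed from an SVD of the deformation gradient. A minimal NumPy sketch of that classic case (the paper's dynamic rotation tensor is a different, trajectory-dependent object not shown here):

```python
import numpy as np

# Classic polar decomposition of a deformation gradient F = R @ U,
# with R a rotation and U symmetric positive-definite, via the
# SVD F = W diag(s) V^T:  R = W V^T,  U = V diag(s) V^T.
rng = np.random.default_rng(2)
F = rng.standard_normal((3, 3))
if np.linalg.det(F) < 0:            # ensure an orientation-preserving F
    F[:, 0] *= -1

W, s, Vt = np.linalg.svd(F)
R = W @ Vt                          # rotation tensor
U = Vt.T * s @ Vt                   # right stretch tensor

assert np.allclose(F, R @ U)
assert np.allclose(R.T @ R, np.eye(3))
assert np.isclose(np.linalg.det(R), 1.0)
```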

  5. Tensor decomposition of EEG signals: a brief review.

    PubMed

    Cong, Fengyu; Lin, Qiu-Hua; Kuang, Li-Dan; Gong, Xiao-Feng; Astikainen, Piia; Ristaniemi, Tapani

    2015-06-15

    Electroencephalography (EEG) is one fundamental tool for functional brain imaging. EEG signals tend to be represented by a vector or a matrix to facilitate data processing and analysis with generally understood methodologies like time-series analysis, spectral analysis and matrix decomposition. Indeed, EEG signals are often naturally born with more than two modes of time and space, and they can be denoted by a multi-way array called a tensor. This review summarizes the current progress of tensor decomposition of EEG signals from three aspects. The first is the existing modes and tensors of EEG signals. Second, two fundamental tensor decomposition models, canonical polyadic decomposition (CPD, also called parallel factor analysis, PARAFAC) and Tucker decomposition, are introduced and compared. Moreover, the applications of the two models to EEG signals are addressed. In particular, the determination of the number of components for each mode is discussed. Finally, the N-way partial least squares and higher-order partial least squares are described as a potential trend toward processing and analyzing brain signals of two modalities simultaneously. PMID:25840362
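    As an illustration of the CPD model mentioned above, a rank-R CPD writes a third-order array as a sum of R rank-one (outer-product) terms. Mode sizes and labels below are arbitrary:

```python
import numpy as np

# CPD of an order-3 tensor as a sum of R rank-one terms:
#   T[i, j, k] = sum_r A[i, r] * B[j, r] * C[k, r]
rng = np.random.default_rng(3)
R = 3
A = rng.standard_normal((8, R))    # e.g. spatial (channel) signatures
B = rng.standard_normal((5, R))    # e.g. spectral signatures
C = rng.standard_normal((20, R))   # e.g. temporal signatures

T = np.einsum('ir,jr,kr->ijk', A, B, C)

# The same tensor, assembled term by term from outer products:
T_check = sum(np.multiply.outer(np.outer(A[:, r], B[:, r]), C[:, r])
              for r in range(R))
assert np.allclose(T, T_check)
```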

  6. Robust Face Clustering Via Tensor Decomposition.

    PubMed

    Cao, Xiaochun; Wei, Xingxing; Han, Yahong; Lin, Dongdai

    2015-11-01

    Face clustering is a key component in both image management and video analysis. Wild human faces vary with pose, expression, and illumination changes. All kinds of noise, such as block occlusions, random pixel corruptions, and various disguises, may also destroy the consistency of faces referring to the same person. This motivates us to develop a robust face clustering algorithm that is less sensitive to these noises. To retain the underlying structured information within facial images, we use tensors to represent faces, and then accomplish the clustering task based on the tensor data. The proposed algorithm is called robust tensor clustering (RTC), which first finds a lower-rank approximation of the original tensor data using an L1-norm optimization function. Because the L1 norm does not exaggerate the effect of noise compared with the L2 norm, the minimization of the L1-norm approximation function makes RTC robust. Then, we compute the high-order singular value decomposition of this approximate tensor to obtain the final clustering results. Different from traditional algorithms that solve the approximation function with a greedy strategy, we utilize a nongreedy strategy to obtain a better solution. Experiments conducted on benchmark facial datasets and gait sequences demonstrate that RTC has better performance than state-of-the-art clustering algorithms and is more robust to noise. PMID:25546869

  7. An optimization approach for fitting canonical tensor decompositions.

    SciTech Connect

    Dunlavy, Daniel M.; Acar, Evrim; Kolda, Tamara Gibson

    2009-02-01

    Tensor decompositions are higher-order analogues of matrix decompositions and have proven to be powerful tools for data analysis. In particular, we are interested in the canonical tensor decomposition, otherwise known as the CANDECOMP/PARAFAC decomposition (CPD), which expresses a tensor as the sum of component rank-one tensors and is used in a multitude of applications such as chemometrics, signal processing, neuroscience, and web analysis. The task of computing the CPD, however, can be difficult. The typical approach is based on alternating least squares (ALS) optimization, which can be remarkably fast but is not very accurate. Previously, nonlinear least squares (NLS) methods have also been recommended; existing NLS methods are accurate but slow. In this paper, we propose the use of gradient-based optimization methods. We discuss the mathematical calculation of the derivatives and further show that they can be computed efficiently, at the same cost as one iteration of ALS. Computational experiments demonstrate that the gradient-based optimization methods are much more accurate than ALS and orders of magnitude faster than NLS.
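    For reference, the ALS baseline this paper compares against alternates exact least-squares updates over the three factor matrices. A compact NumPy sketch under our own naming (the paper's contribution is the gradient-based alternative, not shown):

```python
import numpy as np

def unfold(T, mode):
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(B, C):
    """Column-wise Kronecker product, shape (J*K, R)."""
    J, R = B.shape
    K = C.shape[0]
    return (B[:, None, :] * C[None, :, :]).reshape(J * K, R)

def cp_als(T, R, n_iter=500, seed=0):
    """Plain alternating least squares for a rank-R CP fit (a sketch)."""
    rng = np.random.default_rng(seed)
    A, B, C = (rng.standard_normal((n, R)) for n in T.shape)
    for _ in range(n_iter):
        A = np.linalg.lstsq(khatri_rao(B, C), unfold(T, 0).T, rcond=None)[0].T
        B = np.linalg.lstsq(khatri_rao(A, C), unfold(T, 1).T, rcond=None)[0].T
        C = np.linalg.lstsq(khatri_rao(A, B), unfold(T, 2).T, rcond=None)[0].T
    return A, B, C

# Exact rank-2 data: ALS should drive the fit error to (near) zero.
rng = np.random.default_rng(4)
A0, B0, C0 = (rng.standard_normal((n, 2)) for n in (4, 5, 6))
T = np.einsum('ir,jr,kr->ijk', A0, B0, C0)

A, B, C = cp_als(T, R=2)
T_hat = np.einsum('ir,jr,kr->ijk', A, B, C)
rel_err = np.linalg.norm(T - T_hat) / np.linalg.norm(T)
assert rel_err < 1e-4
```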

  8. Heterogeneous Tensor Decomposition for Clustering via Manifold Optimization.

    PubMed

    Sun, Yanfeng; Gao, Junbin; Hong, Xia; Mishra, Bamdev; Yin, Baocai

    2016-03-01

    Tensor clustering is an important tool that exploits intrinsically rich structures in real-world multiway or tensor datasets. In dealing with such datasets, the standard practice is subspace clustering based on vectorizing the multiway data. However, vectorization of tensorial data does not exploit the complete structural information. In this paper, we propose a subspace clustering algorithm without any vectorization process. Our approach is based on a novel heterogeneous Tucker decomposition model that takes cluster membership information into account. We propose a new clustering algorithm that alternates between the different modes of the proposed heterogeneous tensor model. All but the last mode have closed-form updates. Updating the last mode reduces to optimizing over the multinomial manifold, for which we investigate second-order Riemannian geometry and propose a trust-region algorithm. Numerical experiments show that our proposed algorithm competes effectively with state-of-the-art clustering algorithms that are based on tensor factorization. PMID:27046492

  9. Analysis of Social Networks by Tensor Decomposition

    NASA Astrophysics Data System (ADS)

    Sizov, Sergej; Staab, Steffen; Franz, Thomas

    The Social Web fosters novel applications targeting a more efficient and satisfying user guidance in modern social networks, e.g., for identifying thematically focused communities, or finding users with similar interests. The large scale and high diversity of users in social networks pose the challenging question of appropriate relevance/authority ranking, for producing fine-grained and rich descriptions of available partners, e.g., to guide the user along the most promising groups of interest. Existing methods for graph-based authority ranking lack support for fine-grained latent coherence between user relations and content (i.e., support for edge semantics in graph-based social network models). We present TweetRank, a novel approach for faceted authority ranking in the context of social networks. TweetRank captures the additional latent semantics of social networks by means of statistical methods in order to produce richer descriptions of user relations. We model the social network by a 3-dimensional tensor that enables the seamless representation of arbitrary semantic relations. For the analysis of that model, we apply the PARAFAC decomposition, which can be seen as a multi-modal counterpart to common Web authority ranking with HITS. The results are groupings of users and terms, characterized by authority and navigational (hub) scores with respect to the identified latent topics. Sample experiments with live data from the Twitter community demonstrate the ability of TweetRank to produce richer and more comprehensive contact recommendations than other existing methods for social authority ranking.

  10. Tensor network decompositions in the presence of a global symmetry

    SciTech Connect

    Singh, Sukhwinder; Pfeifer, Robert N. C.; Vidal, Guifre

    2010-11-15

    Tensor network decompositions offer an efficient description of certain many-body states of a lattice system and are the basis of a wealth of numerical simulation algorithms. We discuss how to incorporate a global symmetry, given by a compact, completely reducible group G, in tensor network decompositions and algorithms. This is achieved by considering tensors that are invariant under the action of the group G. Each symmetric tensor decomposes into two types of tensors: degeneracy tensors, containing all the degrees of freedom, and structural tensors, which only depend on the symmetry group. In numerical calculations, the use of symmetric tensors ensures the preservation of the symmetry, allows selection of a specific symmetry sector, and significantly reduces computational costs. On the other hand, the resulting tensor network can be interpreted as a superposition of exponentially many spin networks. Spin networks are used extensively in loop quantum gravity, where they represent states of quantum geometry. Our work highlights their importance in the context of tensor network algorithms as well, thus setting the stage for cross-fertilization between these two areas of research.

  11. Tensor product decomposition methods for plasma physics computations

    NASA Astrophysics Data System (ADS)

    Del-Castillo-Negrete, D.

    2012-03-01

    Tensor product decomposition (TPD) methods are a powerful linear algebra technique for the efficient representation of high dimensional data sets. In the simplest 2-dimensional case, TPD reduces to the singular value decomposition (SVD) of matrices. These methods, which are closely related to proper orthogonal decomposition techniques, have been extensively applied in signal and image processing, and to some fluid mechanics problems. However, their use in plasma physics computation is relatively new. Some recent applications include: data compression of 6-dimensional gyrokinetic plasma turbulence data sets [D. R. Hatch, D. del-Castillo-Negrete, and P. W. Terry, submitted to J. Comp. Phys. (2011)], noise reduction in particle methods [R. Nguyen, D. del-Castillo-Negrete, K. Schneider, M. Farge, and G. Chen, J. Comp. Phys. 229, 2821-2839 (2010)], and multiscale analysis of plasma turbulence [S. Futatani, S. Benkadda, and D. del-Castillo-Negrete, Phys. Plasmas 16, 042506 (2009)]. The goal of this presentation is to discuss a novel application of TPD methods to projective integration of particle-based collisional plasma transport computations.

  12. 3D tensor-based blind multispectral image decomposition for tumor demarcation

    NASA Astrophysics Data System (ADS)

    Kopriva, Ivica; Peršin, Antun

    2010-03-01

    Blind decomposition of a multi-spectral fluorescent image for tumor demarcation is formulated by exploiting the tensorial structure of the image. The first contribution of the paper is identification of the matrix of spectral responses and the 3D tensor of spatial distributions of the materials present in the image from the Tucker3 or PARAFAC models of the 3D image tensor. The second contribution is clustering-based estimation of the number of materials present in the image as well as the matrix of their spectral profiles. The 3D tensor of the spatial distributions of the materials is recovered through 3-mode multiplication of the multi-spectral image tensor and the inverse of the matrix of spectral profiles. The tensor representation of the multi-spectral image preserves local spatial structure that is lost, due to the vectorization process, when matrix factorization-based decomposition methods (such as non-negative matrix factorization and independent component analysis) are used. The superior performance of the tensor-based image decomposition over matrix factorization-based decompositions is demonstrated on an experimental red-green-blue (RGB) image with known ground truth, as well as on RGB fluorescent images of skin tumors (basal cell carcinoma).
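    The 3-mode multiplication step described above has a one-line NumPy expression. A toy sketch with a known spectral matrix (sizes are illustrative, and a square, invertible S is assumed):

```python
import numpy as np

# Recover spatial distributions from a multi-spectral image tensor:
# X (rows x cols x bands) = A x_3 S, with S the (bands x materials)
# matrix of spectral profiles, so A = X x_3 pinv(S).
rng = np.random.default_rng(5)
n_rows, n_cols, n_bands, n_mat = 4, 4, 3, 3
S = rng.random((n_bands, n_mat))            # spectral profiles (assumed known)
A = rng.random((n_rows, n_cols, n_mat))     # spatial distributions

X = np.einsum('ijm,bm->ijb', A, S)          # forward model: X = A x_3 S
A_rec = np.einsum('ijb,mb->ijm', X, np.linalg.pinv(S))

assert np.allclose(A, A_rec)
```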

  13. 3D extension of Tensorial Polar Decomposition. Application to (photo-)elasticity tensors

    NASA Astrophysics Data System (ADS)

    Desmorat, Rodrigue; Desmorat, Boris

    2016-06-01

    The orthogonalized harmonic decomposition of symmetric fourth-order tensors (i.e. having major and minor indicial symmetries, such as elasticity tensors) is completed by a representation of harmonic fourth-order tensors H by means of two second-order harmonic (symmetric deviatoric) tensors only. A similar decomposition is obtained for non-symmetric tensors (i.e. having minor indicial symmetry only, such as photo-elasticity tensors or elasto-plasticity tangent operators) introducing a fourth-order major antisymmetric traceless tensor Z. The tensor Z is represented by means of one harmonic second-order tensor and one antisymmetric second-order tensor only. Representations of totally symmetric (rari-constant), symmetric and major antisymmetric fourth-order tensors are simple particular cases of the proposed general representation. Closed-form expressions for tensor decomposition are given in the monoclinic case. Practical applications to elasticity and photo-elasticity monoclinic tensors are finally presented.

  14. A full variational calculation based on a tensor product decomposition

    NASA Astrophysics Data System (ADS)

    Senese, Frederick A.; Beattie, Christopher A.; Schug, John C.; Viers, Jimmy W.; Watson, Layne T.

    1989-08-01

    A new direct full variational approach exploits a tensor (Kronecker) product decomposition of the Hamiltonian. Explicit assembly and storage of the Hamiltonian matrix is avoided by using the Kronecker product structure to form matrix-vector products directly from the molecular integrals. Computation-intensive integral transformations and formula tapes are unnecessary. The wavefunction is expanded in terms of spin-free primitive kets rather than Slater determinants or configuration state functions, and the expansion is equivalent to a full configuration interaction expansion. The approach suggests compact storage schemes and algorithms which are naturally suited to parallel and pipelined machines.
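    The key trick described above, forming matrix-vector products directly from the Kronecker structure without assembling the full matrix, rests on the identity (A ⊗ B) vec(X) = vec(B X Aᵀ) with column-major vec. A NumPy check:

```python
import numpy as np

# Matrix-vector product with H = kron(A, B) without ever forming H:
# (A kron B) vec(X) = vec(B @ X @ A.T), using column-major (Fortran) vec.
rng = np.random.default_rng(6)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((4, 4))
X = rng.standard_normal((4, 3))
x = X.flatten(order='F')                    # vec(X)

direct = np.kron(A, B) @ x                  # builds the full 12x12 matrix
fast = (B @ X @ A.T).flatten(order='F')     # never builds kron(A, B)

assert np.allclose(direct, fast)
```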

  15. Uncertainty propagation in orbital mechanics via tensor decomposition

    NASA Astrophysics Data System (ADS)

    Sun, Yifei; Kumar, Mrinal

    2016-03-01

    Uncertainty forecasting in orbital mechanics is an essential but difficult task, primarily because the underlying Fokker-Planck equation (FPE) is defined on a relatively high dimensional (6-D) state-space and is driven by the nonlinear perturbed Keplerian dynamics. In addition, an enormously large solution domain is required for numerical solution of this FPE (e.g. encompassing the entire orbit in the x-y-z subspace), of which the state probability density function (pdf) occupies a tiny fraction at any given time. This coupling of large size, high dimensionality and nonlinearity makes for a formidable computational task, and has caused the FPE for orbital uncertainty propagation to remain an unsolved problem. To the best of the authors' knowledge, this paper presents the first successful direct solution of the FPE for perturbed Keplerian mechanics. To tackle the dimensionality issue, the time-varying state pdf is approximated in the CANDECOMP/PARAFAC decomposition tensor form where all six spatial dimensions as well as the time dimension are separated from one another. The pdf approximation for all times is obtained simultaneously via the alternating least squares algorithm. Chebyshev spectral differentiation is employed for discretization on account of its spectral ("super-fast") convergence rate. To facilitate the tensor decomposition and control the solution domain size, system dynamics is expressed using spherical coordinates in a noninertial reference frame. Numerical results obtained on a regular personal computer are compared with Monte Carlo simulations.
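    The Chebyshev spectral differentiation mentioned above applies a dense differentiation matrix on the points x_j = cos(jπ/N). A standard construction (cf. Trefethen's "Spectral Methods in MATLAB") in NumPy, exact for polynomials up to degree N:

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix D and grid x_j = cos(j*pi/N)."""
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    dX = x[:, None] - x[None, :]
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))             # diagonal: negative row sums
    return D, x

D, x = cheb(8)
# Spectral differentiation is exact on polynomials of degree <= N:
assert np.allclose(D @ x**3, 3 * x**2)
```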

  16. Databases post-processing in Tensoral

    NASA Technical Reports Server (NTRS)

    Dresselhaus, Eliot

    1994-01-01

    The Center for Turbulence Research (CTR) post-processing effort aims to make turbulence simulations and data more readily and usefully available to the research and industrial communities. The Tensoral language, introduced in this document and currently existing in prototype form, is the foundation of this effort. Tensoral provides a convenient and powerful protocol to connect users who wish to analyze fluids databases with the authors who generate them. In this document we introduce Tensoral and its prototype implementation in the form of a user's guide. This guide focuses on use of Tensoral for post-processing turbulence databases. The corresponding document - the Tensoral 'author's guide' - which focuses on how authors can make databases available to users via the Tensoral system - is currently unwritten. Section 1 of this user's guide defines Tensoral's basic notions: we explain the class of problems at hand and how Tensoral abstracts them. Section 2 defines Tensoral syntax for mathematical expressions. Section 3 shows how these expressions make up Tensoral statements. Section 4 shows how Tensoral statements and expressions are embedded into other computer languages (such as C or Vectoral) to make Tensoral programs. We conclude with a complete example program.

  17. TripleRank: Ranking Semantic Web Data by Tensor Decomposition

    NASA Astrophysics Data System (ADS)

    Franz, Thomas; Schultz, Antje; Sizov, Sergej; Staab, Steffen

    The Semantic Web fosters novel applications targeting a more efficient and satisfying exploitation of the data available on the web, e.g. faceted browsing of linked open data. Large amounts and high diversity of knowledge in the Semantic Web pose the challenging question of appropriate relevance ranking for producing fine-grained and rich descriptions of the available data, e.g. to guide the user along the most promising knowledge aspects. Existing methods for graph-based authority ranking lack support for fine-grained latent coherence between resources and predicates (i.e. support for link semantics in the linked data model). In this paper, we present TripleRank, a novel approach for faceted authority ranking in the context of RDF knowledge bases. TripleRank captures the additional latent semantics of Semantic Web data by means of statistical methods in order to produce richer descriptions of the available data. We model the Semantic Web by a 3-dimensional tensor that enables the seamless representation of arbitrary semantic links. For the analysis of that model, we apply the PARAFAC decomposition, which can be seen as a multi-modal counterpart to Web authority ranking with HITS. The results are groupings of resources and predicates that characterize their authority and navigational (hub) properties with respect to identified topics. We have applied TripleRank to multiple data sets from the linked open data community and gathered encouraging feedback in a user evaluation where TripleRank results have been exploited in a faceted browsing scenario.

  18. The Amplitude Phase Decomposition for the Magnetotelluric Impedance Tensor and Galvanic Electric Distortion

    NASA Astrophysics Data System (ADS)

    Neukirch, Maik; Rudolf, Daniel; Garcia, Xavier

    2016-04-01

    The introduction of the phase tensor marked a major breakthrough in understanding, analysing, and dealing with galvanic distortion of the electric field in the magnetotelluric method. The phase tensor itself can be used for (distortion-free) dimensionality analysis, for distortion analysis where applicable, and even to invert for subsurface models. However, impedance amplitude information is not stored in the phase tensor, so an impedance corrected by distortion analysis (or alternative remedies) may yield better results. We formulate an impedance tensor decomposition into the known phase tensor and an amplitude tensor that is shown to be complementary to and independent of the phase tensor. The rotationally invariant amplitude tensor contains galvanic and inductive amplitudes, of which the latter are physically related to the inductive phase information present in the phase tensor. We show that, for the special cases of 1D and 2D subsurfaces, the geometric amplitude tensor parameters (strike and skew) converge to the phase tensor parameters, and the singular values are the amplitudes of the impedance in the TE and TM modes. Further, the physical similarity between inductive phase and amplitude is used to approximate the galvanic amplitude for a general subsurface, which leads to the qualitative interpretation of 3D galvanic distortion: (i) the (purely) galvanic part of the subsurface (as sensed at a given period) may have a changing impact on the impedance (over a period range) and (ii) only the purely galvanic response of the lowest available period should be termed galvanic distortion. The approximation of the galvanic amplitude (and therewith galvanic distortion), though not accurate, offers a new perspective on galvanic distortion, which breaks with the general belief of the need to assume a 1D or 2D regional structure for the impedance. The amplitude tensor itself is complementary to the phase tensor, containing integrated (galvanic and inductive) subsurface information.
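    The distortion-free property of the phase tensor underlying this work is easy to verify numerically: with impedance Z = X + iY, the phase tensor Φ = X⁻¹Y (Caldwell et al., 2004) is unchanged when Z is left-multiplied by a real galvanic-distortion matrix C. A sketch with random matrices:

```python
import numpy as np

# Phase tensor Phi = X^{-1} Y of an impedance Z = X + iY is invariant
# under real left-multiplication Z -> C @ Z (galvanic distortion),
# since (C X)^{-1} (C Y) = X^{-1} Y.
rng = np.random.default_rng(7)
Z = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
C = rng.standard_normal((2, 2))              # real distortion matrix

phase_tensor = lambda Z: np.linalg.solve(Z.real, Z.imag)

assert np.allclose(phase_tensor(Z), phase_tensor(C @ Z))
```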

  19. Tensor decomposition techniques in the solution of vibrational coupled cluster response theory eigenvalue equations

    NASA Astrophysics Data System (ADS)

    Godtliebsen, Ian H.; Hansen, Mads Bøttger; Christiansen, Ove

    2015-01-01

    We show how the eigenvalue equations of vibrational coupled cluster response theory can be solved using a subspace projection method with Davidson update, where basis vectors are stacked tensors decomposed into canonical (CP, Candecomp/Parafac) form. In each update step, new vectors are first orthogonalized to old vectors, followed by a tensor decomposition to a prescribed threshold TCP. The algorithm can provide excitation energies and eigenvectors of similar accuracy as a full vector approach and with only a very modest increase in the number of vectors required for convergence. The algorithm is illustrated with sample calculations for formaldehyde, 1,2,5-thiadiazole, and water. Analysis of the formaldehyde and thiadiazole calculations illustrate a number of interesting features of the algorithm. For example, the tensor decomposition threshold is optimally put to rather loose values, such as TCP = 10-2. With such thresholds for the tensor decompositions, the original eigenvalue equations can still be solved accurately. It is thus possible to directly calculate vibrational wave functions in tensor decomposed format.

  20. Thermochemical water decomposition processes

    NASA Technical Reports Server (NTRS)

    Chao, R. E.

    1974-01-01

    Thermochemical processes which lead to the production of hydrogen and oxygen from water without the consumption of any other material have a number of advantages when compared to other processes such as water electrolysis. It is possible to operate a sequence of chemical steps with net work requirements equal to zero at temperatures well below the temperature required for water dissociation in a single step. Various types of procedures are discussed, giving attention to halide processes, reverse Deacon processes, iron oxide and carbon oxide processes, and metal and alkali metal processes. Economic questions are also considered.

  1. Nonlinear Beam Kinematics by Decomposition of the Rotation Tensor

    NASA Technical Reports Server (NTRS)

    Danielson, D. A.; Hodges, D. H.

    1987-01-01

    A simple matrix expression is obtained for the strain components of a beam in which the displacements and rotations are large. The only restrictions are on the magnitudes of the strain and of the local rotation, a newly-identified kinematical quantity. The local rotation is defined as the change of orientation of material elements relative to the change of orientation of the beam reference triad. The vectors and tensors in the theory are resolved along orthogonal triads of base vectors centered along the undeformed and deformed beam reference axes, so Cartesian tensor notation is used. Although a curvilinear coordinate system is natural to the beam problem, the complications usually associated with its use are circumvented. Local rotations appear explicitly in the resulting strain expressions, facilitating the treatment of beams with both open and closed cross sections in applications of the theory. The theory is used to obtain the kinematical relations for coupled bending, torsion, extension, shear deformation, and warping of an initially curved and twisted beam.

  2. Tensor decomposition in post-Hartree–Fock methods. II. CCD implementation

    SciTech Connect

    Benedikt, Udo; Böhm, Karl-Heinz; Auer, Alexander A.

    2013-12-14

    In a previous publication, we have discussed the usage of tensor decomposition in the canonical polyadic (CP) tensor format for electronic structure methods. There, we focused on two-electron integrals and second order Møller-Plesset perturbation theory (MP2). In this work, we discuss the CP format for Coupled Cluster (CC) theory and present a pilot implementation for the Coupled Cluster Doubles method. We discuss the iterative solution of the CC amplitude equations using tensors in CP representation and present a tensor contraction scheme that minimizes the effort necessary for the rank reductions during the iterations. Furthermore, several details concerning the reduction of complexity of the algorithm, convergence of the CC iterations, truncation errors, and the choice of threshold for chemical accuracy are discussed.

  3. Tensor decomposition in electronic structure calculations on 3D Cartesian grids

    SciTech Connect

    Khoromskij, B.N.; Khoromskaia, V.; Chinnamsetty, S.R.; Flad, H.-J.

    2009-09-01

    In this paper, we investigate a novel approach based on the combination of Tucker-type and canonical tensor decomposition techniques for the efficient numerical approximation of functions and operators in electronic structure calculations. In particular, we study applicability of tensor approximations for the numerical solution of Hartree-Fock and Kohn-Sham equations on 3D Cartesian grids. We show that the orthogonal Tucker-type tensor approximation of electron density and Hartree potential of simple molecules leads to low tensor rank representations. This enables an efficient tensor-product convolution scheme for the computation of the Hartree potential using a collocation-type approximation via piecewise constant basis functions on a uniform n×n×n grid. Combined with the Richardson extrapolation, our approach exhibits O(h³) convergence in the grid-size h = O(n⁻¹). Moreover, this requires O(3rn + r³) storage, where r denotes the Tucker rank of the electron density with r = O(log n), almost uniformly in n. For example, calculations of the Coulomb matrix and the Hartree-Fock energy for the CH₄ molecule, with a pseudopotential on the C atom, achieved accuracies of the order of 10⁻⁶ hartree with a grid-size n of several hundreds. Since the tensor-product convolution in 3D is performed via 1D convolution transforms, our scheme markedly outperforms the 3D-FFT in both the computing time and storage requirements.

  4. On the decomposition of stress and strain tensors into spherical and deviatoric parts.

    PubMed

    Augusti, G; Martin, J B; Prager, W

    1969-06-01

    It is well known that Hooke's law for a linearly elastic, isotropic solid may be written in the form of two relations that involve only the spherical or only the deviatoric parts of the tensors of stress and strain. The example of the linearly elastic, transversely isotropic solid is used to show that this decomposition is not, in general, feasible for linearly elastic, anisotropic solids. The discussion is extended to a large class of work-hardening rigid, plastic solids, and it is shown that the considered decomposition can only be achieved for the incompressible solids of this class. PMID:16591754
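    The spherical/deviatoric split discussed above is itself elementary; a NumPy sketch with a random symmetric stress tensor:

```python
import numpy as np

# Split a symmetric stress tensor into spherical and deviatoric parts:
#   sigma = sph + dev,  sph = (tr(sigma)/3) I,  tr(dev) = 0.
rng = np.random.default_rng(8)
M = rng.standard_normal((3, 3))
sigma = (M + M.T) / 2                       # symmetric stress tensor

sph = np.trace(sigma) / 3 * np.eye(3)       # spherical (hydrostatic) part
dev = sigma - sph                           # deviatoric part

assert np.isclose(np.trace(dev), 0.0)
assert np.allclose(sigma, sph + dev)
```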

  5. Partial-wave decomposition of the finite-range effective tensor interaction

    NASA Astrophysics Data System (ADS)

    Davesne, D.; Becker, P.; Pastore, A.; Navarro, J.

    2016-06-01

    We perform a detailed analysis of the properties of the finite-range tensor term associated with the Gogny and M3Y effective interactions. In particular, by using a partial-wave decomposition of the equation of state of symmetric nuclear matter, we show how we can extract their tensor parameters directly from microscopic results based on bare nucleon-nucleon interactions. Furthermore, we show that the zero-range limit of both finite-range interactions has the form of the next-to-next-to-next-to-leading-order (N3LO) Skyrme pseudopotential, which thus constitutes a reliable approximation in the density range relevant for finite nuclei. Finally, we use Brueckner-Hartree-Fock results to fix the tensor parameters for the three effective interactions.

  6. Multipole theory and the Hehl-Obukhov decomposition of the electromagnetic constitutive tensor

    NASA Astrophysics Data System (ADS)

    de Lange, O. L.; Raab, R. E.

    2015-05-01

    The Hehl-Obukhov decomposition expresses the 36 independent components of the electromagnetic constitutive tensor for a local linear anisotropic medium in a useful general form comprising seven macroscopic property tensors: four of second rank, two vectors, and a four-dimensional (pseudo)scalar. We consider homogeneous media and show that in semi-classical multipole theory, the first full realization of this formulation is obtained (in terms of molecular polarizability tensors) at third order (electric octopole-magnetic quadrupole order). The calculations are an extension of a direct method previously used at second order (electric quadrupole-magnetic dipole order). We consider in what sense this theory is independent of the choice of molecular coordinate origins relative to which polarizabilities are evaluated. The pseudoscalar (axion) observable is expressed relative to the crystallographic origin. The other six property tensors are invariant (with respect to an arbitrary choice of each molecular coordinate origin), or zero, at first and second orders. At third order, this invariance has to be imposed (by transformation of the response fields)—an aspect that is required by consideration of isotropic fluids and is consistent with the invariance of transmission phenomena in dielectrics. Alternative derivations of the property tensors are reviewed, with emphasis on the pseudoscalar, constraint-breaking, translational invariance, and uniqueness.

  7. Real-time object recognition in multidimensional images based on joined extended structural tensor and higher-order tensor decomposition methods

    NASA Astrophysics Data System (ADS)

    Cyganek, Boguslaw; Smolka, Bogdan

    2015-02-01

    In this paper a system for real-time recognition of objects in multidimensional video signals is proposed. Object recognition is done by projecting patterns into tensor subspaces obtained from the factorization of the signal tensors representing the input signal. However, instead of taking only the intensity signal, the novelty of this paper is to first build an Extended Structural Tensor representation from the intensity signal, which conveys information on signal intensities as well as on higher-order statistics of the input signals. In this way the higher-order input pattern tensors are built from the training samples. Then, the tensor subspaces are built based on the Higher-Order Singular Value Decomposition of the prototype pattern tensors. Finally, recognition relies on measuring the distance of a test pattern projected into the tensor subspaces obtained from the training tensors. Due to the high dimensionality of the input data, tensor-based methods require substantial memory and computational resources. However, recent advances in multi-core microprocessors and graphics cards allow real-time operation of the multidimensional methods, as is shown and analyzed in this paper based on real examples of object detection in digital images.

  8. Detection of crossing white matter fibers with high-order tensors and rank-k decompositions

    PubMed Central; PubMed

    Jiao, Fangxiang; Gur, Yaniv; Johnson, Chris R.; Joshi, Sarang

    2011-01-01

    Fundamental to high angular resolution diffusion imaging (HARDI) is the estimation of a positive-semidefinite orientation distribution function (ODF) and the extraction of diffusion properties (e.g., fiber directions). In this work we show that these two goals can be achieved efficiently by using homogeneous polynomials to represent the ODF in the spherical deconvolution approach, as was proposed in the Cartesian Tensor-ODF (CT-ODF) formulation. Based on this formulation we first suggest an estimation method for a positive-semidefinite ODF by solving a linear programming problem that does not require special parameterization of the ODF. We also propose a rank-k tensor decomposition, known as CP decomposition, to extract fiber information from the estimated ODF. We show that this decomposition is superior to fiber direction estimation via ODF maxima detection, as it enables one to reach the full fiber-separation resolution of the estimation technique. We assess the accuracy of this new framework by applying it to synthetic and experimentally obtained HARDI data. PMID:21761684

  10. Tensoral for post-processing users and simulation authors

    NASA Technical Reports Server (NTRS)

    Dresselhaus, Eliot

    1993-01-01

    The CTR post-processing effort aims to make turbulence simulations and data more readily and usefully available to the research and industrial communities. The Tensoral language, which provides the foundation for this effort, is introduced here in the form of a user's guide. The Tensoral user's guide is presented in two main sections. Section one acts as a general introduction and guides database users who wish to post-process simulation databases. Section two gives a brief description of how database authors and other advanced users can make simulation codes and/or the databases they generate available to the user community via Tensoral database back ends. The two-part structure of this document conforms to the two-level design structure of the Tensoral language. Tensoral has been designed to be a general computer language for performing tensor calculus and statistics on numerical data. Tensoral's generality allows it to be used for stand-alone native coding of high-level post-processing tasks (as described in section one of this guide). At the same time, Tensoral's specialization to a minute task (namely, to numerical tensor calculus and statistics) allows it to be easily embedded into applications written partly in Tensoral and partly in other computer languages (here, C and Vectoral). Embedded Tensoral, aimed at advanced users for more general coding (e.g. of efficient simulations, for interfacing with pre-existing software, for visualization, etc.), is described in section two of this guide.

  11. Representing Matrix Cracks Through Decomposition of the Deformation Gradient Tensor in Continuum Damage Mechanics Methods

    NASA Technical Reports Server (NTRS)

    Leone, Frank A., Jr.

    2015-01-01

    A method is presented to represent the large-deformation kinematics of intraply matrix cracks and delaminations in continuum damage mechanics (CDM) constitutive material models. The method involves the additive decomposition of the deformation gradient tensor into 'crack' and 'bulk material' components. The response of the intact bulk material is represented by a reduced deformation gradient tensor, and the opening of an embedded cohesive interface is represented by a normalized cohesive displacement-jump vector. The rotation of the embedded interface is tracked as the material deforms and as the crack opens. The distribution of the total local deformation between the bulk material and the cohesive interface components is determined by minimizing the difference between the cohesive stress and the bulk material stress projected onto the cohesive interface. The improvements to the accuracy of CDM models that incorporate the presented method over existing approaches are demonstrated for a single element subjected to simple shear deformation and for a finite element model of a unidirectional open-hole tension specimen. The material model is implemented as a VUMAT user subroutine for the Abaqus/Explicit finite element software. The presented deformation gradient decomposition method reduces the artificial load transfer across matrix cracks subjected to large shearing deformations, and avoids the spurious secondary failure modes that often occur in analyses based on conventional progressive damage models.

  12. Symmetric tensor decomposition-configuration interaction study of BeH2

    NASA Astrophysics Data System (ADS)

    Kasamatsu, Shusuke; Uemura, Wataru; Sugino, Osamu

    2014-03-01

    The configuration interaction (CI) is a straightforward approach to describing interacting fermions. However, its application is hampered by computational time and memory requirements that increase non-polynomially with the system size. To overcome this problem, we have been developing a variational method based on the canonical decomposition of the full-CI coefficients, which we call symmetric tensor decomposition CI (STD-CI). The applicability of STD-CI was previously tested on simple molecular systems, but here we test it on a stringent benchmark, the insertion of Be into H2. The Be + H2 system is known for strong configurational degeneracy along the insertion pathway and has been used for assessing a method's capability to treat correlated systems. We obtained errors of ~10 millihartree relative to full-CI results when using a rank-2 decomposition of the full-CI coefficients. This is a large improvement over Hartree-Fock results, which have errors of up to ~100 millihartree in the worst cases, although not as good as, e.g., CAS-CCSD with errors of less than 1 millihartree.

  13. Towards Interactive Construction of Topical Hierarchy: A Recursive Tensor Decomposition Approach

    PubMed Central

    Wang, Chi; Liu, Xueqing; Song, Yanglei; Han, Jiawei

    2015-01-01

    Automatic construction of user-desired topical hierarchies over large volumes of text data is a highly desirable but challenging task. This study proposes to give users freedom to construct topical hierarchies via interactive operations such as expanding a branch and merging several branches. Existing hierarchical topic modeling techniques are inadequate for this purpose because (1) they cannot consistently preserve the topics when the hierarchy structure is modified; and (2) the slow inference prevents swift response to user requests. In this study, we propose a novel method, called STROD, that allows efficient and consistent modification of topic hierarchies, based on a recursive generative model and a scalable tensor decomposition inference algorithm with theoretical performance guarantee. Empirical evaluation shows that STROD reduces the runtime of construction by several orders of magnitude, while generating consistent and quality hierarchies. PMID:26705505

  14. Tensor-multi-scalar theories: relativistic stars and 3 + 1 decomposition

    NASA Astrophysics Data System (ADS)

    Horbatsch, Michael; Silva, Hector O.; Gerosa, Davide; Pani, Paolo; Berti, Emanuele; Gualtieri, Leonardo; Sperhake, Ulrich

    2015-10-01

    Gravitational theories with multiple scalar fields coupled to the metric and each other—a natural extension of the well studied single-scalar-tensor theories—are interesting phenomenological frameworks to describe deviations from general relativity in the strong-field regime. In these theories, the N-tuple of scalar fields takes values in a coordinate patch of an N-dimensional Riemannian target-space manifold whose properties are poorly constrained by weak-field observations. Here we introduce for simplicity a non-trivial model with two scalar fields and a maximally symmetric target-space manifold. Within this model we present a preliminary investigation of spontaneous scalarization for relativistic, perfect fluid stellar models in spherical symmetry. We find that the scalarization threshold is determined by the eigenvalues of a symmetric scalar-matter coupling matrix, and that the properties of strongly scalarized stellar configurations additionally depend on the target-space curvature radius. In preparation for numerical relativity simulations, we also write down the 3 + 1 decomposition of the field equations for generic tensor-multi-scalar theories.

  15. Aridity and decomposition processes in complex landscapes

    NASA Astrophysics Data System (ADS)

    Ossola, Alessandro; Nyman, Petter

    2015-04-01

    Decomposition of organic matter is a key biogeochemical process contributing to nutrient cycles, carbon fluxes and soil development. The activity of decomposers depends on microclimate, with temperature and rainfall being major drivers. In complex terrain, fine-scale variation in microclimate (and hence water availability) as a result of slope orientation is caused by differences in incoming radiation and surface temperature. Aridity, measured as the long-term balance between net radiation and rainfall, is a metric that can be used to represent variations in water availability within the landscape. Since aridity metrics can be obtained at fine spatial scales, they could theoretically be used to investigate how decomposition processes vary across complex landscapes. In this study, four research sites were selected in tall open sclerophyll forest along an aridity gradient (Budyko dryness index ranging from 1.56 to 2.22), where microclimate, litter moisture and soil moisture were monitored continuously for one year. Litter bags were packed to estimate decomposition rates (k) using leaves of a tree species not present in the study area (Eucalyptus globulus) in order to avoid home-field advantage effects. Litter mass loss was measured to assess the activity of macro-decomposers (6 mm litter bag mesh size), meso-decomposers (1 mm mesh), microbes above-ground (0.2 mm mesh) and microbes below-ground (2 cm depth, 0.2 mm mesh). Four replicates for each set of bags were installed at each site, and bags were collected at 1, 2, 4, 7 and 12 months after installation. We first tested whether differences in microclimate due to slope orientation have significant effects on decomposition processes. The dryness index was then related to decomposition rates to evaluate whether small-scale variation in decomposition can be predicted using readily available information on rainfall and radiation. Decomposition rates (k), calculated by fitting single-pool negative exponential models, generally
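The single-pool negative exponential model mentioned at the end of the abstract, M(t) = M0·exp(-kt), is typically fitted by regressing log mass remaining on time. A minimal illustrative sketch with synthetic data (not the study's measurements; the rate value is invented for the example):

```python
import numpy as np

def fit_k(t_months, mass_fraction):
    """Estimate decay rate k from the single-pool model M(t)/M0 = exp(-k t)."""
    # linear regression of log(mass remaining) on time; slope = -k
    slope, _ = np.polyfit(t_months, np.log(mass_fraction), 1)
    return -slope

t = np.array([1, 2, 4, 7, 12], dtype=float)  # collection times (months), as in the study
true_k = 0.15                                # hypothetical rate for illustration
frac = np.exp(-true_k * t)                   # idealized noise-free litter-bag data
print(fit_k(t, frac))  # recovers 0.15
```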

  16. Enabling High-Dimensional Hierarchical Uncertainty Quantification by ANOVA and Tensor-Train Decomposition

    SciTech Connect

    Zhang, Zheng; Yang, Xiu; Oseledets, Ivan; Karniadakis, George E.; Daniel, Luca

    2015-01-31

    Hierarchical uncertainty quantification can reduce the computational cost of stochastic circuit simulation by employing spectral methods at different levels. This paper presents an efficient framework to hierarchically simulate some challenging stochastic circuits/systems that include high-dimensional subsystems. Due to the high parameter dimensionality, it is challenging both to extract surrogate models at the low level of the design hierarchy and to handle them in the high-level simulation. In this paper, we develop an efficient analysis-of-variance-based stochastic circuit/microelectromechanical systems simulator to efficiently extract the surrogate models at the low level. In order to avoid the curse of dimensionality, we employ tensor-train decomposition at the high level to construct the basis functions and Gauss quadrature points. As a demonstration, we verify our algorithm on a stochastic oscillator with four MEMS capacitors and 184 random parameters. This challenging example is efficiently simulated by our simulator at the cost of only 10 minutes in MATLAB on a regular personal computer.
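Tensor-train decomposition itself can be sketched with the standard TT-SVD algorithm (sequential truncated SVDs of reshaped unfoldings). The NumPy illustration below is a generic sketch of that algorithm, not the simulator described above:

```python
import numpy as np

def tt_svd(x, eps=1e-10):
    """Decompose a d-way array into tensor-train (TT) cores via sequential truncated SVDs."""
    shape = x.shape
    cores, r_prev = [], 1
    mat = x.reshape(shape[0], -1)
    for k in range(len(shape) - 1):
        u, s, vt = np.linalg.svd(mat, full_matrices=False)
        r = max(1, int(np.sum(s > eps * s[0])))  # numerical rank after truncation
        cores.append(u[:, :r].reshape(r_prev, shape[k], r))
        mat = (np.diag(s[:r]) @ vt[:r]).reshape(r * shape[k + 1], -1)
        r_prev = r
    cores.append(mat.reshape(r_prev, shape[-1], 1))
    return cores

def tt_reconstruct(cores):
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=([-1], [0]))  # chain the cores
    return out.reshape(out.shape[1:-1])

# a rank-1 test tensor is recovered exactly with all TT ranks equal to 1
rng = np.random.default_rng(0)
a, b, c = rng.random(4), rng.random(5), rng.random(6)
X = np.einsum('i,j,k->ijk', a, b, c)
cores = tt_svd(X)
print([core.shape for core in cores])  # [(1, 4, 1), (1, 5, 1), (1, 6, 1)]
```

Storage for a d-way tensor drops from n^d entries to roughly d·n·r^2 for TT ranks r, which is what makes the high-dimensional basis construction in the abstract tractable.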

  17. Optical Acquisition and Polar Decomposition of the Full-Field Deformation Gradient Tensor Within a Fracture Callus

    PubMed Central

    Kim, Wangdo; Kohles, Sean S.

    2009-01-01

    Tracking tissue deformation is often hampered by material inhomogeneity, so local measurements tend to be insufficient, necessitating full-field optical measurements. This study presents a novel approach to factoring heterogeneous deformation of soft and hard tissues in a fracture callus by introducing an anisotropic metric derived from the deformation gradient tensor (F). The deformation gradient tensor contains all the information available in a Green-Lagrange strain tensor, plus the rigid-body rotational components. A recent study [Bottlang et al., J. Biomech. 41(3), 2008] produced full-field strains within ovine fracture calluses acquired through the application of electronic speckle pattern interferometry (ESPI). That technique is based on the infinitesimal strain approximation (engineering strain), whose scheme is not independent of rigid-body rotation. In this work, to extract the rotation, the stretch and rotation tensors were separately determined from F by the polar decomposition theorem. Interfragmentary motions in a fracture gap were characterized by the two distinct mechanical factors (stretch and rotation) at each material point through full-field mapping. In the composite nature of bone and soft tissue, collagen arrangements are hypothesized such that fibers locally aligned with principal directions will stretch, and fibers not aligned with the principal direction will rotate and stretch. This approach has revealed the deformation gradient tensor as an appropriate quantification of strain within callus bony and fibrous tissue via optical measurements. PMID:19647826
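The polar decomposition F = RU used here can be computed from the SVD of F. A minimal NumPy sketch (illustrative, not the study's code; the shear and rotation values are invented for the example):

```python
import numpy as np

def polar_decompose(F):
    """F = R @ U: rotation R and symmetric right stretch tensor U, via the SVD of F."""
    W, s, Vt = np.linalg.svd(F)
    R = W @ Vt                    # orthogonal factor (a rotation when det F > 0)
    U = Vt.T @ np.diag(s) @ Vt    # symmetric positive-definite stretch
    return R, U

# illustrative deformation: simple shear followed by a rigid rotation
theta = 0.3
rot = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                [np.sin(theta),  np.cos(theta), 0.0],
                [0.0, 0.0, 1.0]])
shear = np.array([[1.0, 0.5, 0.0],
                  [0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0]])
F = rot @ shear
R, U = polar_decompose(F)
print(np.allclose(R @ U, F))  # True: rotation and stretch recovered separately
```

Because U is computed from F's singular values, it is unaffected by the rigid-body rotation, which is precisely the property the infinitesimal (engineering) strain lacks.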

  18. Tensor based geology preserving reservoir parameterization with Higher Order Singular Value Decomposition (HOSVD)

    NASA Astrophysics Data System (ADS)

    Afra, Sardar; Gildin, Eduardo

    2016-09-01

    Parameter estimation through robust parameterization techniques has been addressed in many works on history matching and inverse problems. Reservoir models are in general complex, nonlinear, and large-scale with respect to the large number of states and unknown parameters. Thus, a practical approach that replaces the original set of highly correlated unknown parameters with a non-correlated set of lower dimensionality, while capturing the most significant features of the original set, is of high importance. Furthermore, de-correlating the system's parameters while keeping the geological description intact is critical for controlling the ill-posed nature of such problems. We introduce the advantages of a new low-dimensional parameterization approach for reservoir characterization applications utilizing multilinear-algebra-based techniques such as higher order singular value decomposition (HOSVD). In tensor-based approaches like HOSVD, 2D permeability images are treated as they are, i.e., the data structure is kept intact, whereas in conventional dimensionality reduction algorithms like SVD the data have to be vectorized. Hence, compared to classical methods, higher redundancy reduction with less information loss can be achieved by decreasing the redundancies present in all dimensions. In other words, HOSVD approximation results in a more compact data representation, in the least-squares sense, and better geological consistency in comparison with classical algorithms. We examined the performance of the proposed parameterization technique against the SVD approach on the SPE10 benchmark reservoir model as well as synthetic channelized permeability maps to demonstrate the capability of the proposed method. Moreover, to acquire statistical consistency, we repeat all experiments for a set of 1000 unknown geological samples and provide comparison using RMSE analysis. Results prove that, for a fixed compression ratio, the performance of the proposed approach

  19. Diffusion tensors for processing sheared and rotated rectangles.

    PubMed

    Steidl, Gabriele; Teuber, Tanja

    2009-12-01

    Image restoration and simplification methods that respect important features such as edges play a fundamental role in digital image processing. However, known edge-preserving methods like common nonlinear diffusion methods tend to round vertices for large diffusion times. In this paper, we adapt the diffusion tensor for anisotropic diffusion to avoid these effects in images containing rotated or sheared rectangles. In this context, we propose a new method for estimating rotation angles and shear parameters based on the so-called structure tensor. Further, we show how the knowledge of appropriate diffusion tensors can be used in variational models. Numerical examples including orientation estimation, denoising and segmentation demonstrate the good performance of our methods. PMID:19651552
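The structure tensor underlying the proposed angle estimation averages outer products of image gradients. The NumPy sketch below is a simplified illustration (a single global average yielding one dominant orientation, unlike the paper's local estimates):

```python
import numpy as np

def dominant_orientation(img):
    """Estimate the dominant gradient orientation from the globally averaged structure tensor."""
    gy, gx = np.gradient(img)  # np.gradient returns derivatives along (rows, cols)
    # structure tensor entries J = mean of the gradient outer product
    Jxx, Jxy, Jyy = (gx * gx).mean(), (gx * gy).mean(), (gy * gy).mean()
    # closed-form principal-axis angle of a symmetric 2x2 matrix
    return 0.5 * np.arctan2(2.0 * Jxy, Jxx - Jyy)

# synthetic stripe image whose gradient points 30 degrees from the x-axis
y, x = np.mgrid[0:128, 0:128].astype(float)
theta = np.deg2rad(30.0)
img = np.sin(0.2 * (np.cos(theta) * x + np.sin(theta) * y))
print(np.rad2deg(dominant_orientation(img)))  # close to 30
```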

  20. Biogeochemistry of Decomposition and Detrital Processing

    NASA Astrophysics Data System (ADS)

    Sanderman, J.; Amundson, R.

    2003-12-01

    Decomposition is a key ecological process that roughly balances net primary production in terrestrial ecosystems and is an essential process in resupplying nutrients to the plant community. Decomposition consists of three concurrent processes: comminution or fragmentation, leaching of water-soluble compounds, and microbial catabolism. Decomposition can also be viewed as a sequential process, which Eijsackers and Zehnder (1990) compare to a Russian matryoshka doll. Soil macrofauna fragment and partially solubilize plant residues, facilitating establishment of a community of decomposer microorganisms. This decomposer community will gradually shift as the most easily degraded plant compounds are utilized and the more recalcitrant materials begin to accumulate. Given enough time and the proper environmental conditions, most naturally occurring compounds can be completely mineralized to inorganic forms. Simultaneously with mineralization, the process of humification acts to transform a fraction of the plant residues into stable soil organic matter (SOM) or humus. For reference, Schlesinger (1990) estimated that only ~0.7% of detritus eventually becomes stabilized into humus. Decomposition plays a key role in the cycling of most plant macro- and micronutrients and in the formation of humus. Figure 1 places the roles of detrital processing and mineralization within the context of the biogeochemical cycling of essential plant nutrients. Chapin (1991) found that while the atmosphere supplied 4% and mineral weathering supplied no nitrogen and <1% of phosphorus, internal nutrient recycling is the source for >95% of all the nitrogen and phosphorus uptake by tundra species in Barrow, Alaska. In a cool temperate forest, nutrient recycling accounted for 93%, 89%, 88%, and 65% of total sources for nitrogen, phosphorus, potassium, and calcium, respectively (Chapin, 1991). [Figure 1. A decomposition-centric biogeochemical model of nutrient cycling.]
    Although there is significant

  2. Decomposition of Variance for Spatial Cox Processes

    PubMed Central

    Jalilian, Abdollah; Guan, Yongtao; Waagepetersen, Rasmus

    2012-01-01

    Spatial Cox point processes provide a natural framework for quantifying the various sources of variation governing the spatial distribution of rain forest trees. We introduce a general criterion for variance decomposition for spatial Cox processes and apply it to specific Cox process models with additive or log-linear random intensity functions. We moreover consider a new and flexible class of pair correlation function models given in terms of normal variance mixture covariance functions. The proposed methodology is applied to point pattern data sets of locations of tropical rain forest trees. PMID:23599558

  3. Tensoral: A system for post-processing turbulence simulation data

    NASA Technical Reports Server (NTRS)

    Dresselhaus, Eliot

    1993-01-01

    Many computer simulations in engineering and science -- and especially in computational fluid dynamics (CFD) -- produce huge quantities of numerical data. These data are often so large as to make even relatively simple post-processing of this data unwieldy. The data, once computed and quality-assured, is most likely analyzed by only a few people. As a result, much useful numerical data is under-utilized. Since future state-of-the-art simulations will produce even larger datasets, will use more complex flow geometries, and will be performed on more complex supercomputers, data management issues will become increasingly cumbersome. My goal is to provide software which will automate the present and future task of managing and post-processing large turbulence datasets. My research has focused on the development of these software tools -- specifically, through the development of a very high-level language called 'Tensoral'. The ultimate goal of Tensoral is to convert high-level mathematical expressions (tensor algebra, calculus, and statistics) into efficient low-level programs which numerically calculate these expressions given simulation datasets. This approach to the database and post-processing problem has several advantages. Using Tensoral the numerical and data management details of a simulation are shielded from the concerns of the end user. This shielding is carried out without sacrificing post-processor efficiency and robustness. Another advantage of Tensoral is that its very high-level nature lends itself to portability across a wide variety of computing (and supercomputing) platforms. This is especially important considering the rapidity of changes in supercomputing hardware.

  4. Using empirical mode decomposition to process marine magnetotelluric data

    NASA Astrophysics Data System (ADS)

    Chen, Jin; Heincke, Bjoern; Jegen, Marion; Moorkamp, Max

    2012-07-01

    A major step in processing magnetotelluric (MT) data is the calculation of an impedance tensor as a function of frequency from recorded time-varying electromagnetic fields. Common signal processing techniques such as Fourier-transform-based procedures assume that the signals are stationary over the record length, which is not necessarily the case in MT, due to the possibility of sudden spatial and temporal variations in the naturally occurring source fields. In addition, noise in the recorded electric and magnetic field data may also be non-stationary. Many modern MT processing techniques can handle such non-stationarities through strategies such as windowing of the time series. However, it is not completely clear how extreme non-stationarity may affect the resulting impedances. As a possible alternative, we examine a heuristic method called empirical mode decomposition (EMD) that was developed to handle arbitrary non-stationary time series. EMD is a dynamic time series analysis method, in which complicated data sets can be decomposed into a finite number of simple intrinsic mode functions. In this paper, we use the EMD method on real and synthetic MT data. To determine impedance tensor estimates we first calculate instantaneous frequencies and spectra from the intrinsic mode functions and apply the impedance formula proposed by Berdichevsky to the instantaneous spectra. We first conduct synthetic tests in which we compare the results from our EMD method to analytically determined apparent resistivities and phases. Next, we compare our strategy to a simple Fourier-derived impedance formula and to the frequently used robust processing technique bounded-influence remote reference processing (BIRRP) for different levels of stochastic noise. 
All results show that apparent resistivities and phases which are calculated from EMD derived impedance tensors are generally more stable than those determined from simple Fourier analysis and only slightly worse than those from the robust
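For reference, the "simple Fourier-derived impedance" baseline that the abstract compares against can be sketched for the scalar case as a cross-spectral ratio. This NumPy illustration is not the EMD method itself, and the synthetic stationary data and impedance value are invented for the example:

```python
import numpy as np

def impedance_estimate(e, h, fs, nwin=256):
    """Scalar impedance Z(f) = E/H via segment-averaged cross-spectra."""
    nseg = len(e) // nwin
    Seh = np.zeros(nwin // 2 + 1, dtype=complex)  # cross-power <E H*>
    Shh = np.zeros(nwin // 2 + 1)                 # auto-power  <H H*>
    for i in range(nseg):
        E = np.fft.rfft(e[i * nwin:(i + 1) * nwin])
        H = np.fft.rfft(h[i * nwin:(i + 1) * nwin])
        Seh += E * np.conj(H)
        Shh += (H * np.conj(H)).real
    freqs = np.fft.rfftfreq(nwin, d=1.0 / fs)
    return freqs, Seh / Shh

# synthetic stationary test: E = 2.5 * H implies |Z| = 2.5 at every frequency
rng = np.random.default_rng(0)
h = rng.standard_normal(8192)
e = 2.5 * h
freqs, Z = impedance_estimate(e, h, fs=1.0)
print(np.abs(Z[1:5]))  # all close to 2.5
```

Segment averaging stabilizes the spectral ratio for noisy stationary data, but, as the abstract notes, it presumes stationarity within each window, which is the assumption EMD is meant to relax.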

  6. Tensor Algebra Library for NVidia Graphics Processing Units

    SciTech Connect

    Liakh, Dmitry

    2015-03-16

    This is a general purpose math library implementing basic tensor algebra operations on NVidia GPU accelerators. This software is a tensor algebra library that can perform basic tensor algebra operations, including tensor contractions, tensor products, tensor additions, etc., on NVidia GPU accelerators, asynchronously with respect to the CPU host. It supports a simultaneous use of multiple NVidia GPUs. Each asynchronous API function returns a handle which can later be used for querying the completion of the corresponding tensor algebra operation on a specific GPU. The tensors participating in a particular tensor operation are assumed to be stored in local RAM of a node or GPU RAM. The main research area where this library can be utilized is the quantum many-body theory (e.g., in electronic structure theory).
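    The library's API is not reproduced in this record, but the kind of operation it accelerates, a tensor contraction, can be illustrated on the CPU with NumPy. This is a hedged sketch: GPU tensor libraries typically map such contractions onto matrix-matrix multiplies (GEMM) over flattened modes, as the second variant below suggests.

```python
import numpy as np

# A tensor contraction of the kind common in quantum many-body theory:
# C[a,b] = sum_{i,j} A[a,i,j] * B[i,j,b].
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 6, 6))
B = rng.standard_normal((6, 6, 5))

C = np.einsum("aij,ijb->ab", A, B)

# The same contraction as a matrix product over flattened (i,j) modes,
# which is how contractions are usually mapped onto GEMM kernels.
C_gemm = A.reshape(4, 36) @ B.reshape(36, 5)
assert np.allclose(C, C_gemm)
```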

  7. Decomposition

    USGS Publications Warehouse

    Middleton, Beth A.

    2014-01-01

    A cornerstone of ecosystem ecology, decomposition was recognized as a fundamental process driving the exchange of energy in ecosystems by early ecologists such as Lindeman (1942) and Odum (1960). In the history of ecology, studies of decomposition were incorporated into the International Biological Program in the 1960s to compare the nature of organic matter breakdown in various ecosystem types. Such studies still play an important role in ecological research today. More recent refinements have brought debates on the relative roles of microbes, invertebrates, and environment in the breakdown and release of carbon into the atmosphere, as well as on how nutrient cycling, production, and other ecosystem processes regulated by decomposition may shift with climate change. Therefore, this bibliography examines the primary literature related to organic matter breakdown, but it also explores topics in which decomposition plays a key supporting role, including vegetation composition, latitudinal gradients, altered ecosystems, anthropogenic impacts, carbon storage, and climate change models. Knowledge of these topics is relevant both to the study of ecosystem ecology and to projections of future conditions for human societies.

  8. A non-statistical regularization approach and a tensor product decomposition method applied to complex flow data

    NASA Astrophysics Data System (ADS)

    von Larcher, Thomas; Blome, Therese; Klein, Rupert; Schneider, Reinhold; Wolf, Sebastian; Huber, Benjamin

    2016-04-01

    Handling high-dimensional data sets, such as those that occur in turbulent flows or in certain types of multiscale behaviour in the Geosciences, is one of the big challenges in numerical analysis and scientific computing. A suitable solution is to represent those large data sets in an appropriate compact form. In this context, tensor product decomposition methods are currently emerging as an important tool. One reason is that these methods often enable one to attack high-dimensional problems successfully; another is that they allow for very compact representations of large data sets. We follow the novel Tensor-Train (TT) decomposition method to support the development of improved understanding of the multiscale behavior and the development of compact storage schemes for solutions of such problems. One long-term goal of the project is the construction of a self-consistent closure for Large Eddy Simulations (LES) of turbulent flows that explicitly exploits the tensor product approach's capability of capturing self-similar structures. Secondly, we focus on a mixed deterministic-stochastic subgrid scale modelling strategy currently under development for application in Finite Volume Large Eddy Simulation (LES) codes. Advanced methods of time series analysis for the data-based construction of stochastic models with inherently non-stationary statistical properties, and concepts of information theory based on a modified Akaike information criterion and on the Bayesian information criterion for model discrimination, are used to construct surrogate models for the non-resolved flux fluctuations. Vector-valued auto-regressive models with external influences form the basis for the modelling approach [1], [2], [4]. Here, we present the reconstruction capabilities of the two modelling approaches tested against 3D turbulent channel flow data computed by direct numerical simulation (DNS) for an incompressible, isothermal fluid at Reynolds number Reτ = 590 (computed by [3]).
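    The Tensor-Train format mentioned above can be computed by sequential truncated SVDs (the TT-SVD scheme). The following is a minimal NumPy sketch of that construction, not the project's code:

```python
import numpy as np

def tt_svd(x, eps=1e-10):
    """Tensor-Train decomposition via sequential truncated SVDs.
    Returns 3-way cores G_k of shape (r_{k-1}, n_k, r_k)."""
    shape = x.shape
    d = len(shape)
    cores, r = [], 1
    c = x.copy()
    for k in range(d - 1):
        c = c.reshape(r * shape[k], -1)
        u, s, vt = np.linalg.svd(c, full_matrices=False)
        rk = max(1, int(np.sum(s > eps * s[0])))   # relative truncation
        cores.append(u[:, :rk].reshape(r, shape[k], rk))
        c = s[:rk, None] * vt[:rk]                 # carry the remainder
        r = rk
    cores.append(c.reshape(r, shape[-1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract the train of cores back into a full tensor."""
    out = cores[0]
    for g in cores[1:]:
        out = np.tensordot(out, g, axes=([out.ndim - 1], [0]))
    return out[0, ..., 0]

# Demo: a rank-2 4-way tensor compresses to TT ranks of at most 2.
rng = np.random.default_rng(0)
vecs = [rng.standard_normal((2, n)) for n in (5, 6, 7, 8)]
x = sum(np.einsum("i,j,k,l->ijkl", *(v[r] for v in vecs)) for r in range(2))
cores = tt_svd(x)
```

    For a data set with self-similar (low-rank) structure the cores are far smaller than the full array, which is the compact-storage property the abstract refers to.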

  9. An image-processing toolset for diffusion tensor tractography

    PubMed Central

    Mishra, Arabinda; Lu, Yonggang; Choe, Ann S.; Aldroubi, Akram; Gore, John C.; Anderson, Adam W.; Ding, Zhaohua

    2009-01-01

    Diffusion tensor imaging (DTI)-based fiber tractography holds great promise in delineating neuronal fiber tracts and, hence, providing connectivity maps of the neural networks in the human brain. An array of image-processing techniques has to be developed to turn DTI tractography into a practically useful tool. To this end, we have developed a suite of image-processing tools for fiber tractography with improved reliability. This article summarizes the main technical developments we have made to date, which include anisotropic smoothing, anisotropic interpolation, Bayesian fiber tracking and automatic fiber bundling. A primary focus of these techniques is the robustness to noise and partial volume averaging, the two major hurdles to reliable fiber tractography. Performance of these techniques has been comprehensively examined with simulated and in vivo DTI data, demonstrating improvements in the robustness and reliability of DTI tractography. PMID:17371726

  10. A tensor-based population value decomposition to explain rectal toxicity after prostate cancer radiotherapy

    PubMed Central

    Ospina, Juan David; Commandeur, Frédéric; Ríos, Richard; Dréan, Gaël; Correa, Juan Carlos; Simon, Antoine; Haigron, Pascal; De Crevoisier, Renaud; Acosta, Oscar

    2013-01-01

    In prostate cancer radiotherapy, the association between the dose distribution and the occurrence of undesirable side-effects has yet to be revealed. In this work a method to perform population analysis by comparing the dose distributions is proposed. The method is a tensor-based approach that generalises an existing method for 2D images and allows for the highlighting of over-irradiated zones correlated with rectal bleeding after prostate cancer radiotherapy. Thus, the aim is to contribute to the elucidation of the dose patterns correlated with rectal toxicity. The method was applied to a cohort of 63 patients, and it was able to build up a dose pattern characterizing the difference between patients presenting rectal bleeding after prostate cancer radiotherapy and those who did not. PMID:24579164

  11. Advanced Insights into Functional Brain Connectivity by Combining Tensor Decomposition and Partial Directed Coherence

    PubMed Central

    Leistritz, Lutz; Witte, Herbert; Schiecke, Karin

    2015-01-01

    Quantification of functional connectivity in physiological networks is frequently performed by means of time-variant partial directed coherence (tvPDC), based on time-variant multivariate autoregressive models. The principal advantage of tvPDC lies in its simultaneous combination of directionality, time variance, and frequency selectivity, offering a more differentiated view into complex brain networks. Yet the advantages specific to tvPDC also produce a large number of results, leading to serious problems of interpretability. To counter this issue, we propose the decomposition of multi-dimensional tvPDC results into a sum of rank-1 outer products. This leads to a data condensation which enables an advanced interpretation of the results. Furthermore, it is thereby possible to uncover inherent interaction patterns of induced neuronal subsystems by limiting the decomposition to several relevant channels, while retaining the global influence determined by the preceding multivariate AR estimation and tvPDC calculation of the entire scalp. Finally, comparison between several subjects is considerably easier, as individual tvPDC results are summarized within a comprehensive model equipped with subject-specific loading coefficients. A proof-of-principle of the approach is provided by means of simulated data; EEG data from an experiment concerning visual evoked potentials are used to demonstrate the applicability to real data. PMID:26046537
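    The proposed condensation expresses the multi-dimensional tvPDC array as a sum of rank-1 outer products (a CP/PARAFAC-style model). A toy NumPy sketch of that model, with made-up dimensions rather than the EEG pipeline itself:

```python
import numpy as np

rng = np.random.default_rng(1)
R = 2                               # number of rank-1 components
A = rng.standard_normal((R, 8))     # loadings, mode 1 (e.g. channels)
B = rng.standard_normal((R, 16))    # loadings, mode 2 (e.g. frequency)
C = rng.standard_normal((R, 32))    # loadings, mode 3 (e.g. time)

# Sum of R rank-1 outer products: T[i,j,k] = sum_r A[r,i] * B[r,j] * C[r,k]
T = np.einsum("ri,rj,rk->ijk", A, B, C)
```

    The condensation works in the other direction (fitting such loadings to a measured array), but the model structure, and why only R loading vectors per mode need be interpreted, is the same.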

  12. On feature extraction and classification in prostate cancer radiotherapy using tensor decompositions.

    PubMed

    Fargeas, Auréline; Albera, Laurent; Kachenoura, Amar; Dréan, Gaël; Ospina, Juan-David; Coloigner, Julie; Lafond, Caroline; Delobel, Jean-Bernard; De Crevoisier, Renaud; Acosta, Oscar

    2015-01-01

    External beam radiotherapy is commonly prescribed for prostate cancer. Although new radiation techniques allow high doses to be delivered to the target, the surrounding healthy organs (rectum and bladder) may suffer from irradiation, which might produce undesirable side-effects. Hence, understanding the complex toxicity dose-volume effect relationships is crucial to adapting the treatment and thereby decreasing the risk of toxicity. In this paper, we introduce a novel method to classify patients at risk of presenting rectal bleeding based on a Deterministic Multi-way Analysis (DMA) of three-dimensional planned dose distributions across a population. After a non-rigid spatial alignment of the anatomies applied to the dose distributions, the proposed method seeks two bases of vectors representing bleeding and non-bleeding patients by using the Canonical Polyadic (CP) decomposition of two fourth-order arrays of the planned doses. A patient is then classified according to its distance to the subspaces spanned by both bases. A total of 99 patients treated for prostate cancer were used to analyze and test the performance of the proposed approach, named CP-DMA, in a leave-one-out cross-validation scheme. Results were compared with supervised (linear discriminant analysis, support vector machine, K-means, K-nearest neighbor) and unsupervised (a recent principal component analysis-based algorithm, and a multidimensional classification method) approaches based on the registered dose distributions. Moreover, CP-DMA was also compared with the Normal Tissue Complication Probability (NTCP) model. The CP-DMA method allowed rectal bleeding patients to be classified with good specificity and sensitivity values, outperforming the classical approaches. PMID:25443534
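    The final classification step, assigning a patient to the class whose subspace is closer, can be sketched generically. This is illustrative NumPy with toy subspaces, not the CP-DMA code:

```python
import numpy as np

def subspace_distance(x, basis):
    """Euclidean distance from x to span(basis columns), via orthogonal
    projection; QR keeps the projection numerically stable."""
    q, _ = np.linalg.qr(basis)
    return np.linalg.norm(x - q @ (q.T @ x))

def classify(x, basis_bleeding, basis_non_bleeding):
    """Assign x to the class whose dose-pattern subspace is closer."""
    if subspace_distance(x, basis_bleeding) < subspace_distance(x, basis_non_bleeding):
        return "bleeding"
    return "non-bleeding"

# Demo with two orthogonal toy subspaces of a 6-D feature space.
eye = np.eye(6)
basis_b, basis_nb = eye[:, :2], eye[:, 2:4]
label = classify(np.array([1.0, 0.5, 0.0, 0.0, 0.0, 0.0]), basis_b, basis_nb)
```

    In the paper the bases come from the CP decomposition of the two fourth-order dose arrays; here they are hand-picked only to make the geometry visible.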

  13. Tracking Hierarchical Processing in Morphological Decomposition with Brain Potentials

    ERIC Educational Resources Information Center

    Lavric, Aureliu; Elchlepp, Heike; Rastle, Kathleen

    2012-01-01

    One important debate in psycholinguistics concerns the nature of morphological decomposition processes in visual word recognition (e.g., darkness = {dark} + {-ness}). One theory claims that these processes arise during orthographic analysis and prior to accessing meaning (Rastle & Davis, 2008), and another argues that these processes arise through…

  14. Decomposition: A Strategy for Query Processing.

    ERIC Educational Resources Information Center

    Wong, Eugene; Youssefi, Karel

    Multivariable queries can be processed in the data base management system INGRES. The general procedure is to decompose the query into a sequence of one-variable queries using two processes. One process is reduction which requires breaking off components of the query which are joined to it by a single variable. The other process,…

  15. Substrate heterogeneity and environmental variability in the decomposition process

    NASA Astrophysics Data System (ADS)

    Sierra, Carlos; Harmon, Mark; Perakis, Steven

    2010-05-01

    Soil organic matter is a complex mixture of material with heterogeneous biological, physical, and chemical properties. However, traditional analyses of organic matter decomposition assume that a single decomposition rate constant can represent the dynamics of this heterogeneous mix. Terrestrial decomposition models approach this heterogeneity by representing organic matter as a substrate with three to six pools with different susceptibilities to decomposition. Even though it is well recognized that this representation of organic matter in models is less than ideal, there is little work analyzing the effects of assuming substrate homogeneity or simple discrete representations on the mineralization of carbon and nutrients. Using concepts from the continuous quality theory developed by Göran I. Ågren and Ernesto Bosatta, we performed a systematic analysis to explore the consequences of ignoring substrate heterogeneity in modeling decomposition. We found that the compartmentalization of organic matter in a few pools introduces approximation error when both the distribution of carbon and the decomposition rate are continuous functions of quality. This error is generally large for models that use three or four pools. We also found that the pattern of carbon and nitrogen mineralization over time is highly dependent on differences in microbial growth and efficiency for different qualities. In the long-term, stabilization and destabilization processes operating simultaneously result in the accumulation of carbon in lower qualities, independent of the quality of the incoming litter. This large amount of carbon accumulated in lower qualities would produce a major response to temperature change even when its temperature sensitivity is low. The interaction of substrate heterogeneity and temperature variability produces behaviors of carbon accumulation that cannot be predicted by simple decomposition models. Responses of soil organic matter to temperature change would depend
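    The central point, that an aggregate of many first-order decays with different rate constants is not itself a single exponential, can be illustrated numerically. The rate spectrum below is assumed for illustration only:

```python
import numpy as np

# A heterogeneous substrate: many "qualities", each decaying first-order
# with its own rate constant k. The aggregate C(t) = sum_i c_i * exp(-k_i t)
# is NOT a single exponential, which is what a one-pool model assumes.
k = np.linspace(0.01, 1.0, 100)        # spectrum of decay rates (1/yr)
c0 = np.full_like(k, 1.0 / len(k))     # equal initial mass per quality

t = np.linspace(0.0, 20.0, 200)
C = (c0[None, :] * np.exp(-np.outer(t, k))).sum(axis=1)

# Fit a one-pool model  ln C = ln C0 - k_eff * t  and measure its error.
coeffs = np.polyfit(t, np.log(C), 1)
C_fit = np.exp(np.polyval(coeffs, t))
rel_err = np.max(np.abs(C - C_fit) / C)
```

    The single-pool fit misses the long slow tail produced by the low-quality fraction, which is the approximation error the continuous quality theory makes explicit.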

  16. Statistical Modeling of the Industrial Sodium Aluminate Solutions Decomposition Process

    NASA Astrophysics Data System (ADS)

    Živković, Živan; Mihajlović, Ivan; Djurić, Isidora; Štrbac, Nada

    2010-10-01

    This article presents the results of the statistical modeling of industrial sodium aluminate solution decomposition as part of the Bayer alumina production process. The aim of this study was to define the correlation dependence of the degree of aluminate solution decomposition on the following technological process parameters: concentration of Na2O (caustic), caustic ratio, crystallization ratio, starting temperature, final temperature, average diameter of the crystallization seed, and duration of the decomposition process. Multiple linear regression analysis (MLRA) and artificial neural networks (ANNs) were used as the tools for the mathematical analysis of the indicated problem. On the one hand, the attempt at process modeling using MLRA resulted in a linear model whose correlation coefficient was R^2 = 0.731. On the other hand, ANNs enabled somewhat better process modeling, with a correlation coefficient of R^2 = 0.895. Both models can be used for the efficient prediction of the degree of sodium aluminate solution decomposition as a function of the input parameters under industrial conditions of the Bayer alumina production process.
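    The MLRA part of such a study amounts to ordinary least squares with an R^2 goodness-of-fit measure. A generic sketch on synthetic data (the seven predictors and coefficients below merely stand in for the process parameters listed above):

```python
import numpy as np

# Synthetic stand-ins for the 7 process parameters and a noisy response.
rng = np.random.default_rng(7)
X = rng.standard_normal((60, 7))
beta = np.array([1.5, -0.8, 0.3, 0.0, 2.0, -1.0, 0.5])   # made-up effects
y = X @ beta + 0.5 * rng.standard_normal(60)

# Multiple linear regression via least squares, with an intercept column.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Coefficient of determination R^2 = 1 - SS_res / SS_tot.
resid = y - A @ coef
r2 = 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
```

    An ANN model would replace the linear map with a nonlinear one, which is how the study obtained the higher R^2.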

  17. The ergodic decomposition of stationary discrete random processes

    NASA Technical Reports Server (NTRS)

    Gray, R. M.; Davisson, L. D.

    1974-01-01

    The ergodic decomposition is discussed, and a version focusing on the structure of individual sample functions of stationary processes is proved for the special case of discrete-time random processes with discrete alphabets. The result is stronger in this case than the usual theorem, and the proof is both intuitive and simple. Estimation-theoretic and information-theoretic interpretations are developed and applied to prove existence theorems for universal source codes, both noiseless and with a fidelity criterion.

  18. Analysis of benzoquinone decomposition in solution plasma process

    NASA Astrophysics Data System (ADS)

    Bratescu, M. A.; Saito, N.

    2016-01-01

    The decomposition of p-benzoquinone (p-BQ) in Solution Plasma Processing (SPP) was analyzed by Coherent Anti-Stokes Raman Spectroscopy (CARS), by monitoring the change of the anti-Stokes signal intensity of the vibrational transitions of the molecule during and after SPP. At the beginning of the SPP treatment, the CARS signal intensities of the ring vibrational molecular transitions increased under the influence of the electric field of the plasma. The results show that the plasma influences the p-BQ molecules in two ways: (i) the plasma produces a polarization and an orientation of the molecules in its local electric field, and (ii) the gas-phase plasma supplies hydrogen and hydroxyl radicals to the liquid phase, which reduce or oxidize the molecules, respectively, generating different carboxylic acids. The decomposition of p-BQ after SPP was confirmed by UV-visible absorption spectroscopy and liquid chromatography.

  19. Singular value decomposition in magnetotelluric sounding data processing

    SciTech Connect

    Shengjie, S. )

    1991-01-01

    In this paper the singular value decomposition (SVD) method for magnetotelluric sounding data processing is described in detail, and its real-number operation process is derived. For analysis, this method decomposes the data matrix into a signal matrix and a noise matrix. It gives a least-squares estimate of the response function, performs a quantitative analysis of signal and noise to calculate the S/N ratio, and provides the estimated variance of the response function. Theoretical calculation shows that this method is reliable and effective in suppressing noise, estimating the response function, and analyzing noise and variance.
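    The core idea, splitting a data matrix into signal and noise parts with a truncated SVD, can be sketched generically. This uses a synthetic rank-1 signal in NumPy, not MT field data:

```python
import numpy as np

# Truncated SVD splits a data matrix into a low-rank "signal" part and
# a residual "noise" part, and gives a simple S/N estimate.
rng = np.random.default_rng(3)
signal = np.outer(rng.standard_normal(50), rng.standard_normal(40))  # rank 1
noise = 0.05 * rng.standard_normal((50, 40))
D = signal + noise

U, s, Vt = np.linalg.svd(D, full_matrices=False)
r = 1                                         # assumed signal rank
D_signal = (U[:, :r] * s[:r]) @ Vt[:r]        # low-rank reconstruction
D_noise = D - D_signal                        # everything left over

snr = np.linalg.norm(D_signal) / np.linalg.norm(D_noise)
```

    In the MT application the response function would then be estimated by least squares from the denoised signal matrix.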

  20. The impact of post-processing on spinal cord diffusion tensor imaging

    PubMed Central

    Mohammadi, Siawoosh; Freund, Patrick; Feiweier, Thorsten; Curt, Armin; Weiskopf, Nikolaus

    2013-01-01

    Diffusion tensor imaging (DTI) provides information about the microstructure in the brain and spinal cord. While new neuroimaging techniques have significantly advanced the accuracy and sensitivity of DTI of the brain, the quality of spinal cord DTI data has improved less. This is in part due to the small size of the spinal cord (ca. 1 cm diameter) and more severe instrumental (e.g. eddy current) and physiological (e.g. cardiac pulsation) artefacts present in spinal cord DTI. So far, the improvements in image quality and resolution have resulted from cardiac gating and new acquisition approaches (e.g. reduced field-of-view techniques). The use of retrospective correction methods is not well established for spinal cord DTI. The aim of this paper is to develop an improved post-processing pipeline tailored for DTI data of the spinal cord with increased quality. For this purpose, we compared two eddy current and motion correction approaches using three-dimensional affine (3D-affine) and slice-wise registrations. We also introduced a new robust-tensor-fitting method that controls for whole-volume outliers. Although in general 3D-affine registration improves data quality, occasionally it can lead to misregistrations and biassed tensor estimates. The proposed robust tensor fitting reduced misregistration-related bias and yielded more reliable tensor estimates. Overall, the combination of slice-wise motion correction, eddy current correction, and robust tensor fitting yielded the best results. It increased the contrast-to-noise ratio (CNR) in FA maps by about 30% and reduced intra-subject variation in fractional anisotropy (FA) maps by 18%. The higher quality of FA maps allows for a better distinction between grey and white matter without increasing scan time and is compatible with any multi-directional DTI acquisition scheme. PMID:23298752

  1. Matrix decomposition graphics processing unit solver for Poisson image editing

    NASA Astrophysics Data System (ADS)

    Lei, Zhao; Wei, Li

    2012-10-01

    In recent years, gradient-domain methods have been widely discussed in the image processing field, including seamless cloning and image stitching. These algorithms are commonly carried out by solving a large sparse linear system: the Poisson equation. However, solving the Poisson equation is a computationally and memory intensive task, which makes it unsuitable for real-time image editing. A new matrix decomposition graphics processing unit (GPU) solver (MDGS) is proposed to address this problem. A matrix decomposition method is used to distribute the work among GPU threads, so that MDGS takes full advantage of the computing power of current GPUs. Additionally, MDGS is a hybrid solver (combining both direct and iterative techniques) with a two-level architecture. These features enable MDGS to generate solutions identical to those of the common Poisson methods and to achieve a high convergence rate in most cases. This approach is advantageous in terms of parallelizability, low memory consumption, and breadth of applications, enabling real-time image processing.
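    The Poisson equation at the heart of gradient-domain editing can be solved on small grids with a plain Jacobi iteration; GPU solvers parallelize exactly this kind of stencil update across threads. A minimal NumPy sketch (unit grid spacing, Dirichlet boundary), not the MDGS algorithm itself:

```python
import numpy as np

def jacobi_poisson(f, u0, n_iter=2000):
    """Solve lap(u) = f on a 2-D grid with fixed (Dirichlet) boundary
    values taken from u0, by Jacobi iteration with unit grid spacing."""
    u = u0.copy()
    for _ in range(n_iter):
        # five-point stencil: u = (sum of 4 neighbours - f) / 4
        u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                                u[1:-1, :-2] + u[1:-1, 2:] - f[1:-1, 1:-1])
    return u

# Demo: with f = 0 and a linear boundary, the exact solution is linear.
n = 20
X = np.tile(np.linspace(0.0, 1.0, n), (n, 1))
u0 = X.copy()
u0[1:-1, 1:-1] = 0.0                 # unknown interior, known boundary
u = jacobi_poisson(np.zeros((n, n)), u0)
```

    In Poisson image editing, f is the divergence of the desired gradient field and u0 carries the target image's boundary pixels; direct and multigrid solvers replace the slow Jacobi sweep for real-time use.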

  2. Catalytic hydrothermal processing of microalgae: decomposition and upgrading of lipids.

    PubMed

    Biller, P; Riley, R; Ross, A B

    2011-04-01

    Hydrothermal processing of high-lipid feedstocks such as microalgae is an alternative method of oil extraction which has obvious benefits for biomass with a high moisture content. A range of microalgae and lipids extracted from terrestrial oil seeds have been processed at 350 °C and pressures of 150-200 bar in water. Hydrothermal liquefaction is shown to convert the triglycerides to fatty acids and alkanes in the presence of certain heterogeneous catalysts. This investigation has compared the composition of lipids and free fatty acids from solvent extraction to those from hydrothermal processing. The initial decomposition products include free fatty acids and glycerol, and the potential for de-oxygenation using heterogeneous catalysts has been investigated. The results indicate that the bio-crude yields from the liquefaction of microalgae increased only slightly with the use of heterogeneous catalysts, but the higher heating value (HHV) and the level of de-oxygenation increased by up to 10%. PMID:21295976

  3. A decomposition of irreversible diffusion processes without detailed balance

    NASA Astrophysics Data System (ADS)

    Qian, Hong

    2013-05-01

    As a generalization of deterministic, nonlinear conservative dynamical systems, a notion of canonical conservative dynamics with respect to a positive, differentiable stationary density ρ(x) is introduced: ẋ = j(x), in which ∇·(ρ(x)j(x)) = 0. Such systems have a conserved "generalized free energy function" F[u] = ∫ u(x, t) ln(u(x, t)/ρ(x)) dx in phase space, with a density flow u(x, t) satisfying ∂u/∂t = -∇·(ju). Any general stochastic diffusion process without detailed balance, in terms of its Fokker-Planck equation, can be decomposed into a reversible diffusion process with detailed balance and a canonical conservative dynamics. This decomposition can be rigorously established in a function space with inner product defined as ⟨ϕ, ψ⟩ = ∫ ρ⁻¹(x)ϕ(x)ψ(x) dx. Furthermore, a law for balancing F[u] can be obtained: dF[u(x, t)]/dt = E_in(t) - e_p(t) ≤ 0, where the "source" E_in(t) ≥ 0 and the "sink" e_p(t) ≥ 0 are known as the house-keeping heat and the entropy production, respectively. A reversible diffusion has E_in(t) = 0. For a linear (Ornstein-Uhlenbeck) diffusion process, our decomposition is equivalent to the previous approaches developed by Graham and Ao, as well as to the theory of large deviations. In terms of two different formulations of time reversal for the same stochastic process, the meanings of dissipative and conservative stationary dynamics are discussed.

  4. Canonical polyadic decomposition of third-order semi-nonnegative semi-symmetric tensors using LU and QR matrix factorizations

    NASA Astrophysics Data System (ADS)

    Wang, Lu; Albera, Laurent; Kachenoura, Amar; Shu, Huazhong; Senhadji, Lotfi

    2014-12-01

    Semi-symmetric three-way arrays are essential tools in blind source separation (BSS), particularly in independent component analysis (ICA). These arrays can be built by resorting to higher-order statistics of the data. The canonical polyadic (CP) decomposition of such semi-symmetric three-way arrays allows us to identify the so-called mixing matrix, which contains the information about the intensities of some latent source signals present in the observation channels. In addition, in many applications, such as magnetic resonance spectroscopy (MRS), the columns of the mixing matrix are viewed as relative concentrations of the spectra of the chemical components. Therefore, the two loading matrices of the three-way array, which are equal to the mixing matrix, are nonnegative. Most existing CP algorithms handle the symmetry and the nonnegativity separately. Up to now, very few of them consider both the semi-nonnegativity and the semi-symmetry structure of the three-way array. Nevertheless, like all methods based on line search, trust region strategies, and alternating optimization, they appear to be dependent on initialization, requiring in practice a multi-initialization procedure. In order to overcome this drawback, we propose two new methods, called [InlineEquation not available: see fulltext.] and [InlineEquation not available: see fulltext.], to solve the problem of CP decomposition of semi-nonnegative semi-symmetric three-way arrays. Firstly, we rewrite the constrained optimization problem as an unconstrained one: the nonnegativity constraint of the two symmetric modes is ensured by means of a square change of variable. Secondly, a Jacobi-like optimization procedure is adopted because of its good convergence properties. More precisely, the two new methods use LU and QR matrix factorizations, respectively, which reformulate the high-dimensional optimization problem as several sequential polynomial and rational subproblems. 
By using both LU

  5. ENVIRONMENTAL ASSESSMENT OF THE BASE CATALYZED DECOMPOSITION (BCD) PROCESS

    EPA Science Inventory

    This report summarizes laboratory-scale, pilot-scale, and field performance data on BCD (Base Catalyzed Decomposition) and technology, collected to date by various governmental, academic, and private organizations.

  6. CO2 decomposition using electrochemical process in molten salts

    NASA Astrophysics Data System (ADS)

    Otake, Koya; Kinoshita, Hiroshi; Kikuchi, Tatsuya; Suzuki, Ryosuke O.

    2012-08-01

    The electrochemical decomposition of CO2 gas to carbon and oxygen gas in LiCl-Li2O and CaCl2-CaO molten salts was studied. This process consists of the electrochemical reduction of Li2O and CaO, as well as the thermal reduction of CO2 gas by the respective metallic Li and Ca. Two kinds of ZrO2 solid electrolytes were tested as oxygen ion conductors, and the electrolytes removed oxygen ions from the molten salts to the outside of the reactor. After electrolysis in both salts, aggregations of nanometer-scale amorphous carbon and rod-like graphite crystals were observed by transmission electron microscopy. When a 9.7% CO2-Ar mixed gas was blown into the LiCl-Li2O and CaCl2-CaO molten salts, the current efficiency was evaluated to be 89.7% and 78.5%, respectively, from the exhaust gas analysis and the supplied charge. When a solid electrolyte with higher ionic conductivity was used, the current and carbon production became larger. It was found that the rate-determining step is the diffusion of oxygen ions into the ZrO2 solid electrolyte.

  7. Thermochemical processes for hydrogen production by water decomposition. Final report

    SciTech Connect

    Perlmutter, D.D.

    1980-08-01

    The principal contributions of the research are in the area of gas-solid reactions, ranging from models and data interpretation for fundamental kinetics and mixing of solids to simulations of engineering-scale reactors. Models were derived for simulating the heat and mass transfer processes inside the reactor and tested by experiments. The effects of surface renewal of solids on the mass transfer phenomena were studied and related to the solid mixing. Catalysis by selected additives was studied experimentally. The separate results were combined in a simulation study of industrial-scale rotary reactor performance. A study was made of the controlled decompositions of a series of inorganic sulfates and their common hydrates, carried out in a Thermogravimetric Analyzer (TGA), a Differential Scanning Calorimeter (DSC), and a Differential Thermal Analyzer (DTA). Various sample sizes, heating rates, and ambient atmospheres were used to demonstrate their influence on the results. The purposes of this study were to: (i) reveal intermediate compounds, (ii) determine the stable temperature range of each compound, and (iii) measure reaction kinetics. In addition, several solid additives (carbon, metal oxides, and sodium chloride) were demonstrated to have catalytic effects to varying degrees for the different salts.

  8. Using Empirical Mode Decomposition to process Marine Magnetotelluric Data

    NASA Astrophysics Data System (ADS)

    Chen, J.; Jegen, M. D.; Heincke, B. H.; Moorkamp, M.

    2014-12-01

    Magnetotelluric (MT) data always exhibit nonstationarities due to variations in source mechanisms, which cause MT variations on different temporal and spatial scales. An additional non-stationary component is introduced through noise, which is particularly pronounced in marine MT data in the form of motion-induced noise caused by time-varying wave motion and currents. We present a new heuristic method for dealing with the non-stationarity of MT time series based on Empirical Mode Decomposition (EMD). The EMD method is used in combination with the derived instantaneous spectra to determine impedance estimates. The procedure is tested on synthetic and field MT data. In synthetic tests the reliability of impedance estimates from the EMD-based method is compared to the synthetic responses of a 1D layered model. To examine how estimates are affected by noise, stochastic stationary and non-stationary noise are added to the time series. Comparisons reveal that estimates by the EMD-based method are generally more stable than those by simple Fourier analysis. Furthermore, the results are compared to those derived by a commonly used Fourier-based MT data processing software (BIRRP), which incorporates additional sophisticated robust estimation to deal with noise issues. It is revealed that the results from both methods are already comparable, even though no robust estimation procedures are implemented in the EMD approach at the present stage. The processing scheme is then applied to marine MT field data. Testing is performed on short, relatively quiet segments of several data sets, as well as on long segments of data with many non-stationary noise packages. Compared to BIRRP, the new method gives comparable or better impedance estimates; furthermore, the estimates are extended to lower frequencies, and less noise-biased estimates with smaller error bars are obtained at high frequencies. 
The new processing methodology represents an important step towards deriving a better resolved Earth model to

  9. Image Corruption Detection in Diffusion Tensor Imaging for Post-Processing and Real-Time Monitoring

    PubMed Central

    Li, Yue; Shea, Steven M.; Lorenz, Christine H.; Jiang, Hangyi; Chou, Ming-Chung; Mori, Susumu

    2013-01-01

    Due to the high sensitivity of diffusion tensor imaging (DTI) to physiological motion, clinical DTI scans often suffer a significant amount of artifacts. Tensor-fitting-based, post-processing outlier rejection is often used to reduce the influence of motion artifacts. Although it is an effective approach, when there are multiple corrupted data, this method may no longer correctly identify and reject the corrupted data. In this paper, we introduce a new criterion called “corrected Inter-Slice Intensity Discontinuity” (cISID) to detect motion-induced artifacts. We compared the performance of algorithms using cISID and other existing methods with regard to artifact detection. The experimental results show that the integration of cISID into fitting-based methods significantly improves the retrospective detection performance at post-processing analysis. The performance of the cISID criterion, if used alone, was inferior to the fitting-based methods, but cISID could effectively identify severely corrupted images with a rapid calculation time. In the second part of this paper, an outlier rejection scheme was implemented on a scanner for real-time monitoring of image quality and reacquisition of the corrupted data. The real-time monitoring, based on cISID and followed by post-processing, fitting-based outlier rejection, could provide a robust environment for routine DTI studies. PMID:24204551

  10. A stable elemental decomposition for dynamic process optimization

    NASA Astrophysics Data System (ADS)

    Cervantes, Arturo M.; Biegler, Lorenz T.

    2000-08-01

    In Cervantes and Biegler (A.I.Ch.E.J. 44 (1998) 1038), we presented a simultaneous nonlinear programming problem (NLP) formulation for the solution of DAE optimization problems. Here, by applying collocation on finite elements, the DAE system is transformed into a nonlinear system. The resulting optimization problem, in which the element placement is fixed, is solved using a reduced-space successive quadratic programming (rSQP) algorithm. The space is partitioned into range and null spaces. This partitioning is performed by choosing a pivot sequence for an LU factorization with partial pivoting, which allows us to detect unstable modes in the DAE system. The system is stabilized without imposing new boundary conditions. The decomposition of the range space can be performed in a single step by exploiting the overall sparsity of the collocation matrix but not its almost block diagonal structure. In order to solve larger problems, a new decomposition approach and a new method for constructing the quadratic programming (QP) subproblem are presented in this work. The decomposition of the collocation matrix is now performed element by element, thus reducing the storage requirements and the computational effort. Under this scheme, the unstable modes are considered in each element and a range-space move is constructed sequentially based on decomposition in each element. This new decomposition improves the efficiency of our previous approach and at the same time preserves its stability. The performance of the algorithm is tested on several examples. Finally, some future directions for research are discussed.

  11. Decomposition and hydrocarbon growth processes for hexadienes in nonpremixed flames

    SciTech Connect

    McEnally, Charles S.; Pfefferle, Lisa D.

    2008-03-15

    Alkadienes are formed during the decomposition of alkanes and play a key role in the formation of aromatics due to their degree of unsaturation. The experiments in this paper examined the decomposition and hydrocarbon growth mechanisms of a wide range of hexadiene isomers in soot-forming nonpremixed flames. Specifically, C3 to C12 hydrocarbon concentrations were measured on the centerlines of atmospheric-pressure methane/air coflowing nonpremixed flames doped with 2000 ppm of 1,3-, 1,4-, 1,5-, and 2,4-hexadiene and 2-methyl-1,3-, 3-methyl-1,3-, 2-methyl-1,4-, 3-methyl-1,4-pentadiene, and 2,3-dimethyl-1,3-butadiene. The hexadiene decomposition rates and hydrocarbon product concentrations showed that the primary decomposition mechanism was unimolecular fission of C-C single bonds, whose fission produced allyl and other resonantly stabilized products. The one isomer that does not contain any of these bonds, 2,4-hexadiene, isomerized by a six-center mechanism to 1,3-hexadiene. These decomposition pathways differ from those that have been observed previously for propadiene and 1,3-butadiene, and these differences affect aromatic hydrocarbon formation. 1,5-Hexadiene and 2,3-dimethyl-1,3-butadiene produced significantly more C3H4 and C4H4 than the other isomers, but less benzene, which suggests that benzene formation pathways other than the conventional C3 + C3 and C4 + C2 pathways were important in most of the hexadiene-doped flames. The most likely additional mechanism is cyclization of highly unsaturated C5 decomposition products, followed by methyl addition to cyclopentadienyl radicals.

  12. C++ tensor toolbox user manual.

    SciTech Connect

    Plantenga, Todd D.; Kolda, Tamara Gibson

    2012-04-01

    The C++ Tensor Toolbox is a software package for computing tensor decompositions. It is based on the Matlab Tensor Toolbox, and is particularly optimized for sparse data sets. This user manual briefly overviews tensor decomposition mathematics, software capabilities, and installation of the package. Tensors (also known as multidimensional arrays or N-way arrays) are used in a variety of applications ranging from chemometrics to network analysis. The Tensor Toolbox provides classes for manipulating dense, sparse, and structured tensors in C++. The Toolbox compiles into libraries and is intended for use with custom applications written by users.

  13. Developmental process of the arcuate fasciculus from infancy to adolescence: a diffusion tensor imaging study

    PubMed Central

    Tak, Hyeong Jun; Kim, Jin Hyun; Son, Su Min

    2016-01-01

    We investigated the radiologic developmental process of the arcuate fasciculus (AF) using subcomponent diffusion tensor imaging (DTI) analysis in typically developing volunteers. DTI data were acquired from 96 consecutive typically developing children, aged 0–14 years. AF subcomponents, including the posterior, anterior, and direct AF tracts were analyzed. Success rates of analysis (AR) and fractional anisotropy (FA) values of each subcomponent tract were measured and compared. AR of all subcomponent tracts, except the posterior, showed a significant increase with aging (P < 0.05). Subcomponent tracts had a specific developmental sequence: First, the posterior AF tract, second, the anterior AF tract, and last, the direct AF tract in identical hemispheres. FA values of all subcomponent tracts, except right direct AF tract, showed correlation with subject's age (P < 0.05). Increased AR and FA values were observed in female subjects in young age (0–2 years) group compared with males (P < 0.05). The direct AF tract showed leftward hemispheric asymmetry and this tendency showed greater consolidation in older age (3–14 years) groups (P < 0.05). These findings demonstrated the radiologic developmental patterns of the AF from infancy to adolescence using subcomponent DTI analysis. The AF showed a specific developmental sequence, sex difference in younger age, and hemispheric asymmetry in older age. PMID:27482222

  14. Developmental process of the arcuate fasciculus from infancy to adolescence: a diffusion tensor imaging study.

    PubMed

    Tak, Hyeong Jun; Kim, Jin Hyun; Son, Su Min

    2016-06-01

    We investigated the radiologic developmental process of the arcuate fasciculus (AF) using subcomponent diffusion tensor imaging (DTI) analysis in typically developing volunteers. DTI data were acquired from 96 consecutive typically developing children, aged 0-14 years. AF subcomponents, including the posterior, anterior, and direct AF tracts were analyzed. Success rates of analysis (AR) and fractional anisotropy (FA) values of each subcomponent tract were measured and compared. AR of all subcomponent tracts, except the posterior, showed a significant increase with aging (P < 0.05). Subcomponent tracts had a specific developmental sequence: First, the posterior AF tract, second, the anterior AF tract, and last, the direct AF tract in identical hemispheres. FA values of all subcomponent tracts, except right direct AF tract, showed correlation with subject's age (P < 0.05). Increased AR and FA values were observed in female subjects in young age (0-2 years) group compared with males (P < 0.05). The direct AF tract showed leftward hemispheric asymmetry and this tendency showed greater consolidation in older age (3-14 years) groups (P < 0.05). These findings demonstrated the radiologic developmental patterns of the AF from infancy to adolescence using subcomponent DTI analysis. The AF showed a specific developmental sequence, sex difference in younger age, and hemispheric asymmetry in older age. PMID:27482222

  15. The Dynamics of Cognition and Action: Mental Processes Inferred from Speed-Accuracy Decomposition.

    ERIC Educational Resources Information Center

    Meyer, David E.; And Others

    1988-01-01

    Theoretical/empirical foundations on which reaction times are measured and interpreted are discussed. Models of human information processing are reviewed. A hybrid procedure and analytical framework are introduced, using a speed-accuracy decomposition technique to analyze the intermediate products of rapid mental processes. Results invalidate many…

  16. Exothermic Behavior of Thermal Decomposition of Sodium Percarbonate: Kinetic Deconvolution of Successive Endothermic and Exothermic Processes.

    PubMed

    Nakano, Masayoshi; Wada, Takeshi; Koga, Nobuyoshi

    2015-09-24

    This study focused on the kinetic modeling of the thermal decomposition of sodium percarbonate (SPC, sodium carbonate-hydrogen peroxide (2/3)). The reaction is characterized by apparently different kinetic profiles of mass loss and exothermic behavior as recorded by thermogravimetry and differential scanning calorimetry, respectively. This phenomenon results from a combination of different kinetic features of the reaction, involving two overlapping mass-loss steps controlled by the physico-geometry of the reaction and successive endothermic and exothermic processes caused by the detachment and decomposition of H2O2(g). For kinetic modeling, the overall reaction was initially separated into endothermic and exothermic processes using kinetic deconvolution analysis. Then, both the endothermic and exothermic processes were further separated into two reaction steps, accounting for the physico-geometrically controlled reaction that occurs in two steps. Kinetic modeling through kinetic deconvolution analysis clearly illustrates that the appearance of the net exothermic effect is the result of a slight delay of the exothermic process relative to the endothermic process in each physico-geometrically controlled reaction step. This demonstrates that the kinetic modeling attempted in this study is useful for interpreting the exothermic behavior of solid-state reactions such as the oxidative decomposition of solids and the thermal decomposition of oxidizing agents. PMID:26371394
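Kinetic deconvolution separates one measured net signal into overlapping component processes by fitting a parametric sum. As a toy illustration only (Gaussian peak shapes and made-up parameters in place of the kinetic model functions used in the paper), an endothermic peak plus a slightly delayed exothermic peak can be recovered from their net sum:

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(t, A, t0, w):
    return A * np.exp(-((t - t0) / w) ** 2)

def net_heat_flow(t, Aen, ten, wen, Aex, tex, wex):
    # endothermic (negative) peak plus a slightly delayed exothermic (positive) peak
    return -gauss(t, Aen, ten, wen) + gauss(t, Aex, tex, wex)

t = np.linspace(0.0, 10.0, 400)
true_params = (1.0, 4.0, 0.8, 1.3, 4.6, 0.9)   # hypothetical values
y = net_heat_flow(t, *true_params)
y += 0.01 * np.random.default_rng(1).normal(size=t.size)

# Recover both components from the net curve.
popt, _ = curve_fit(net_heat_flow, t, y, p0=(1.0, 3.5, 1.0, 1.0, 5.0, 1.0))
```

The fitted peak centers reproduce the key qualitative feature described above: the exothermic component lags the endothermic one, so the net curve alone understates both underlying processes.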

  17. Decomposition of repetition priming processes in word translation.

    PubMed

    Francis, Wendy S; Durán, Gabriela; Augustini, Beatriz K; Luévano, Genoveva; Arzate, José C; Sáenz, Silvia P

    2011-01-01

    Translation in fluent bilinguals requires comprehension of a stimulus word and subsequent production, or retrieval and articulation, of the response word. Four repetition-priming experiments with Spanish–English bilinguals (N = 274) decomposed these processes using selective facilitation to evaluate their unique priming contributions and factorial combination to evaluate the degree of process overlap or dependence. In Experiment 1, symmetric priming between semantic classification and translation tasks indicated that bilinguals do not covertly translate words during semantic classification. In Experiments 2 and 3, semantic classification of words and word-cued picture drawing facilitated word-comprehension processes of translation, and picture naming facilitated word-production processes. These effects were independent, consistent with a sequential model and with the conclusion that neither semantic classification nor word-cued picture drawing elicits covert translation. Experiment 4 showed that 2 tasks involving word-retrieval processes--written word translation and picture naming--had subadditive effects on later translation. Incomplete transfer from written translation to spoken translation indicated that preparation for articulation also benefited from repetition in the less-fluent language. PMID:21058875

  18. The tensor hierarchy algebra

    NASA Astrophysics Data System (ADS)

    Palmkvist, Jakob

    2014-01-01

    We introduce an infinite-dimensional Lie superalgebra which is an extension of the U-duality Lie algebra of maximal supergravity in D dimensions, for 3 ⩽ D ⩽ 7. The level decomposition with respect to the U-duality Lie algebra gives exactly the tensor hierarchy of representations that arises in gauge deformations of the theory described by an embedding tensor, for all positive levels p. We prove that these representations are always contained in those coming from the associated Borcherds-Kac-Moody superalgebra, and we explain why some of the latter representations are not included in the tensor hierarchy. The most remarkable feature of our Lie superalgebra is that it does not admit a triangular decomposition like a (Borcherds-)Kac-Moody (super)algebra. Instead the Hodge duality relations between level p and D - 2 - p extend to negative p, relating the representations at the first two negative levels to the supersymmetry and closure constraints of the embedding tensor.

  19. The tensor hierarchy algebra

    SciTech Connect

    Palmkvist, Jakob

    2014-01-15

    We introduce an infinite-dimensional Lie superalgebra which is an extension of the U-duality Lie algebra of maximal supergravity in D dimensions, for 3 ⩽ D ⩽ 7. The level decomposition with respect to the U-duality Lie algebra gives exactly the tensor hierarchy of representations that arises in gauge deformations of the theory described by an embedding tensor, for all positive levels p. We prove that these representations are always contained in those coming from the associated Borcherds-Kac-Moody superalgebra, and we explain why some of the latter representations are not included in the tensor hierarchy. The most remarkable feature of our Lie superalgebra is that it does not admit a triangular decomposition like a (Borcherds-)Kac-Moody (super)algebra. Instead the Hodge duality relations between level p and D − 2 − p extend to negative p, relating the representations at the first two negative levels to the supersymmetry and closure constraints of the embedding tensor.

  20. Decomposition of Repetition Priming Processes in Word Translation

    ERIC Educational Resources Information Center

    Francis, Wendy S.; Duran, Gabriela; Augustini, Beatriz K.; Luevano, Genoveva; Arzate, Jose C.; Saenz, Silvia P.

    2011-01-01

    Translation in fluent bilinguals requires comprehension of a stimulus word and subsequent production, or retrieval and articulation, of the response word. Four repetition-priming experiments with Spanish-English bilinguals (N = 274) decomposed these processes using selective facilitation to evaluate their unique priming contributions and factorial…

  1. PROCESS OF COATING WITH NICKEL BY THE DECOMPOSITION OF NICKEL CARBONYL

    DOEpatents

    Hoover, T.B.

    1959-04-01

    An improved process is presented for the deposition of nickel coatings by the thermal decomposition of nickel carbonyl vapor. The improvement consists in incorporating a small amount of hydrogen sulfide gas in the nickel carbonyl plating gas. It is postulated that the hydrogen sulfide functions as a catalyst.

  2. Method for increasing steam decomposition in a coal gasification process

    DOEpatents

    Wilson, Marvin W.

    1988-01-01

    The gasification of coal in the presence of steam and oxygen is significantly enhanced by introducing a thermochemical water-splitting agent such as sulfuric acid, into the gasifier for decomposing the steam to provide additional oxygen and hydrogen usable in the gasification process for the combustion of the coal and enrichment of the gaseous gasification products. The addition of the water-splitting agent into the gasifier also allows for the operation of the reactor at a lower temperature.

  3. Method for increasing steam decomposition in a coal gasification process

    DOEpatents

    Wilson, M.W.

    1987-03-23

    The gasification of coal in the presence of steam and oxygen is significantly enhanced by introducing a thermochemical water-splitting agent such as sulfuric acid, into the gasifier for decomposing the steam to provide additional oxygen and hydrogen usable in the gasification process for the combustion of the coal and enrichment of the gaseous gasification products. The addition of the water-splitting agent into the gasifier also allows for the operation of the reactor at a lower temperature.

  4. A statistical approach based on accumulated degree-days to predict decomposition-related processes in forensic studies.

    PubMed

    Michaud, Jean-Philippe; Moreau, Gaétan

    2011-01-01

    Using pig carcasses exposed over 3 years in rural fields during spring, summer, and fall, we studied the relationship between decomposition stages and degree-day accumulation (i) to verify the predictability of the decomposition stages used in forensic entomology to document carcass decomposition and (ii) to build a degree-day accumulation model applicable to various decomposition-related processes. Results indicate that the decomposition stages can be predicted with accuracy from temperature records and that a reliable degree-day index can be developed to study decomposition-related processes. The development of degree-day indices opens new doors for researchers and allows for the application of inferential tools unaffected by climatic variability, as well as for the inclusion of statistics in a science that is primarily descriptive and in need of validation methods in courtroom proceedings. PMID:21198596
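An accumulated-degree-day (ADD) index is simply the running sum of daily mean temperature above a base threshold, which is what lets temperature records stand in for elapsed time across climatically different trials. A minimal sketch (the base temperature here is an arbitrary illustrative value, not the study's):

```python
def accumulated_degree_days(daily_mean_temps, base=0.0):
    """Accumulated degree-days: daily mean temperature above a base
    threshold, summed over the exposure period (units: degree-days)."""
    return sum(max(0.0, t - base) for t in daily_mean_temps)

temps = [12.0, 15.5, 9.0, -2.0, 20.0]            # daily means, degrees C
add = accumulated_degree_days(temps, base=4.0)   # 8 + 11.5 + 5 + 0 + 16 = 40.5
```

Decomposition stages can then be expressed as ADD intervals rather than calendar days, which is the climate-independent inference the authors describe.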

  5. The neural basis of novelty and appropriateness in processing of creative chunk decomposition.

    PubMed

    Huang, Furong; Fan, Jin; Luo, Jing

    2015-06-01

    Novelty and appropriateness have been recognized as the fundamental features of creative thinking. However, the brain mechanisms underlying these features remain largely unknown. In this study, we used event-related functional magnetic resonance imaging (fMRI) to dissociate these mechanisms in a revised creative chunk decomposition task in which participants were required to perform different types of chunk decomposition that systematically varied in novelty and appropriateness. We found that novelty processing involved functional areas for procedural memory (caudate), mental rewarding (substantia nigra, SN), and visual-spatial processing, whereas appropriateness processing was mediated by areas for declarative memory (hippocampus), emotional arousal (amygdala), and orthography recognition. These results indicate that non-declarative and declarative memory systems may jointly contribute to the two fundamental features of creative thinking. PMID:25797834

  6. Decomposition of gaseous organic contaminants by surface discharge induced plasma chemical processing -- SPCP

    SciTech Connect

    Oda, Tetsuji; Yamashita, Ryuichi; Haga, Ichiro; Takahashi, Tadashi; Masuda, Senichi

    1996-01-01

    The decomposition performance of surface discharge induced plasma chemical processing (SPCP) for chlorofluorocarbon (83 ppm CFC-113 in air), acetone, trichloroethylene, and isopropyl alcohol was experimentally examined. In every case, very high decomposition performance, with more than 90 or even 99% removal, is realized when the residence time is about 1 second and the input electric power for a 16 cm3 reactor is about 10 W. Acetone is the most stable compound and the alcohol is the most easily decomposed. Analysis of the decomposition products by gas chromatography-mass spectrometry (GC-MS) has just started, but only very poor results have been obtained so far. In fact, some portion of the isopropyl alcohol may be converted to acetone, which is worse than the alcohol itself. The energy necessary to decompose one mol of gas diluted in air is calculated from the experiments. The necessary energy level for acetone and trichloroethylene is about one-tenth to one-fiftieth of that for chlorofluorocarbon.

  7. Decomposition and Precipitation Process During Thermo-mechanical Fatigue of Duplex Stainless Steel

    NASA Astrophysics Data System (ADS)

    Weidner, Anja; Kolmorgen, Roman; Kubena, Ivo; Kulawinski, Dirk; Kruml, Tomas; Biermann, Horst

    2016-05-01

    The so-called 748 K (475 °C) embrittlement is one of the main drawbacks for the application of ferritic-austenitic duplex stainless steels (DSS) at higher temperatures, caused by a spinodal decomposition of the ferritic phase. Thermo-mechanical fatigue tests performed on a DSS in the temperature range between 623 K and 873 K (350 °C and 600 °C) revealed no negative influence on the fatigue lifetime. However, an intensive subgrain formation occurred in the ferritic phase, which was accompanied by formation of fine precipitates. In order to study the decomposition process of the ferritic grains due to TMF testing, detailed investigations using scanning and transmission electron microscopy are presented. The nature of the precipitates was determined as the face-centered cubic G-phase, which is characterized by an enrichment of Si, Mo, and Ni. Furthermore, the formation of secondary austenite within ferritic grains was observed.

  8. Controlled decomposition and oxidation: A treatment method for gaseous process effluents

    NASA Technical Reports Server (NTRS)

    Mckinley, Roger J. B., Sr.

    1990-01-01

    The safe disposal of effluent gases produced by the electronics industry deserves special attention. Due to the hazardous nature of many of the materials used, it is essential to control and treat the reactants and reactant by-products as they are exhausted from the process tool and prior to their release into the manufacturing facility's exhaust system and the atmosphere. Controlled decomposition and oxidation (CDO) is one method of treating effluent gases from thin film deposition processes. CDO equipment applications, field experience, and results of the use of CDO equipment and technological advances gained from the field experiences are discussed.

  9. Chlorine/UV Process for Decomposition and Detoxification of Microcystin-LR.

    PubMed

    Zhang, Xinran; Li, Jing; Yang, Jer-Yen; Wood, Karl V; Rothwell, Arlene P; Li, Weiguang; Blatchley III, Ernest R

    2016-07-19

    Microcystin-LR (MC-LR) is a potent hepatotoxin that is often associated with blooms of cyanobacteria. Experiments were conducted to evaluate the efficiency of the chlorine/UV process for MC-LR decomposition and detoxification. Chlorinated MC-LR was observed to be more photoactive than MC-LR. LC/MS analyses confirmed that the arginine moiety represented an important reaction site within the MC-LR molecule for conditions of chlorination below the chlorine demand of the molecule. Prechlorination activated MC-LR toward UV254 exposure by increasing the product of the molar absorption coefficient and the quantum yield of chloro-MC-LR, relative to the unchlorinated molecule. This mechanism of decay is fundamentally different than the conventional view of chlorine/UV as an advanced oxidation process. A toxicity assay based on human liver cells indicated MC-LR degradation byproducts in the chlorine/UV process possessed less cytotoxicity than those that resulted from chlorination or UV254 irradiation applied separately. MC-LR decomposition and detoxification in this combined process were more effective at pH 8.5 than at pH 7.5 or 6.5. These results suggest that the chlorine/UV process could represent an effective strategy for control of microcystins and their associated toxicity in drinking water supplies. PMID:27338715

  10. A quantitative acoustic emission study on fracture processes in ceramics based on wavelet packet decomposition

    SciTech Connect

    Ning, J. G.; Chu, L.; Ren, H. L.

    2014-08-28

    We present a quantitative acoustic emission (AE) study of fracture processes in alumina ceramics based on wavelet packet decomposition and AE source location. According to the frequency characteristics, as well as the energy and ringdown counts of the AE, the fracture process is divided into four stages: crack closure, nucleation, development, and critical failure. Each AE signal is decomposed by a 2-level wavelet packet decomposition into four frequency bands, ordered from low to high (AA2, AD2, DA2, and DD2). The energy eigenvalues P0, P1, P2, and P3 corresponding to these four frequency bands are calculated. By analyzing changes in P0 and P3 over the four stages, we determine the inverse relationship between AE frequency and the crack source size during ceramic fracture. AE signals associated with crack nucleation can be identified when P0 is less than 5 and P3 is more than 60, whereas AE signals associated with dangerous crack propagation can be identified when more than 92% of P0 values are greater than 4 and more than 95% of P3 values are less than 45. The Geiger location algorithm is used to locate AE sources and cracks in the sample. The results of this location algorithm are consistent with the positions of fractures in the sample when observed under a scanning electron microscope; thus fractures located with Geiger's method can reflect the fracture process. The stage division by location results is in good agreement with the division based on AE frequency characteristics. We find that both wavelet packet decomposition and Geiger's AE source location are suitable for identifying the evolutionary process of cracks in alumina ceramics.
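A 2-level wavelet packet decomposition splits a signal into four frequency bands whose relative energies play the role of the eigenvalues P0-P3 above. A minimal sketch using Haar filters (the abstract does not state which wavelet the authors used, so the filter choice here is an assumption):

```python
import numpy as np

def haar_step(x):
    """One level of a Haar analysis filter bank: approximation and detail."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return a, d

def band_energy_percent(x):
    """2-level wavelet packet split into four bands (AA2, AD2, DA2, DD2),
    returning each band's share of the total signal energy (percent)."""
    a1, d1 = haar_step(np.asarray(x, dtype=float))
    aa2, ad2 = haar_step(a1)
    da2, dd2 = haar_step(d1)
    e = np.array([np.sum(b ** 2) for b in (aa2, ad2, da2, dd2)])
    return 100.0 * e / e.sum()

# A low-frequency test tone should concentrate its energy in the AA2 band.
fs = 1000.0
t = np.arange(1024) / fs
P = band_energy_percent(np.sin(2 * np.pi * 30 * t))
```

Because the Haar filter bank is orthonormal, the four band energies sum to the total signal energy, so the percentages P0-P3 are directly comparable across AE events of different amplitude.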

  11. ERP and Adaptive Autoregressive identification with spectral power decomposition to study rapid auditory processing in infants.

    PubMed

    Piazza, C; Cantiani, C; Tacchino, G; Molteni, M; Reni, G; Bianchi, A M

    2014-01-01

    The ability to process rapidly-occurring auditory stimuli plays an important role in the mechanisms of language acquisition. For this reason, the research community has begun to investigate infant auditory processing, particularly using the Event Related Potentials (ERP) technique. In this paper we approach this issue by means of time domain and time-frequency domain analysis. For the latter, we propose the use of Adaptive Autoregressive (AAR) identification with spectral power decomposition. Results show EEG delta-theta oscillation enhancement related to the processing of acoustic frequency and duration changes, suggesting that, as expected, power modulation encodes rapid auditory processing (RAP) in infants and that the time-frequency analysis method proposed is able to identify this modulation. PMID:25571014
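AAR identification fits an autoregressive model whose coefficients imply a power spectrum. A stationary sketch of the two ingredients (a least-squares AR fit and the parametric AR spectrum) is given below; a true *adaptive* AR identification would instead update the coefficients recursively over time (e.g. with recursive least squares), which is omitted here:

```python
import numpy as np

def fit_ar(x, p):
    """Least-squares AR(p) fit: x[n] ~ sum_k a[k] * x[n-k] + e[n]."""
    N = len(x)
    A = np.column_stack([x[p - k:N - k] for k in range(1, p + 1)])
    y = x[p:]
    a, *_ = np.linalg.lstsq(A, y, rcond=None)
    sigma2 = np.mean((y - A @ a) ** 2)
    return a, sigma2

def ar_psd(a, sigma2, freqs, fs):
    """Parametric power spectrum implied by the AR coefficients."""
    w = 2 * np.pi * np.asarray(freqs) / fs
    k = np.arange(1, len(a) + 1)
    H = 1.0 - np.exp(-1j * np.outer(w, k)) @ a
    return sigma2 / np.abs(H) ** 2

# Simulate a resonant AR(2) process and recover its coefficients.
rng = np.random.default_rng(2)
N = 5000
x = np.zeros(N)
e = rng.normal(size=N)
for n in range(2, N):
    x[n] = 1.5 * x[n - 1] - 0.9 * x[n - 2] + e[n]
a, s2 = fit_ar(x, 2)
```

Tracking how the low-band (delta-theta) portion of such a spectrum changes around stimulus events is, in outline, the kind of spectral power decomposition the paper applies to infant EEG.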

  12. Contribution of free radicals to chlorophenols decomposition by several advanced oxidation processes.

    PubMed

    Benitez, F J; Beltran-Heredia, J; Acero, J L; Rubio, F J

    2000-10-01

    The chemical decomposition of aqueous solutions of various chlorophenols (4-chlorophenol (4-CP), 2,4-dichlorophenol (2,4-DCP), 2,4,6-trichlorophenol (2,4,6-TCP), and 2,3,4,6-tetrachlorophenol (2,3,4,6-TeCP)), which are environmental priority pollutants, is studied by means of single oxidants (hydrogen peroxide, UV radiation, Fenton's reagent, and ozone at pH 2 and 9) and by the Advanced Oxidation Processes (AOPs) constituted by combinations of these oxidants (UV/H2O2, UV/Fenton's reagent, and O3/UV). For all these reactions the degradation rates are evaluated by determining their first-order rate constants and half-life times. Ozone is more reactive with the more highly substituted CPs, while OH* radicals react faster with those chlorophenols having a lower number of chlorine atoms. The improvement in the decomposition levels reached by the combined processes relative to the single oxidants, due to the generation of the very reactive hydroxyl radicals, is clearly demonstrated and evaluated by kinetic modeling. PMID:10901258
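The first-order rate constants and half-lives mentioned above follow from a log-linear fit of concentration versus time, since ln C = ln C0 - k*t for first-order decay. A minimal sketch with made-up data:

```python
import numpy as np

def first_order_fit(t, C):
    """Fit ln C = ln C0 - k*t; return the rate constant k and the half-life."""
    slope = np.polyfit(t, np.log(C), 1)[0]
    k = -slope
    return k, np.log(2.0) / k

t = np.array([0.0, 5.0, 10.0, 20.0, 40.0])   # time, arbitrary units
C = 1.0 * np.exp(-0.12 * t)                  # exact first-order decay, k = 0.12
k, t_half = first_order_fit(t, C)
```

Comparing the fitted k across oxidant combinations is exactly how the relative efficiency of the single and combined processes is quantified.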

  13. A detailed kinetic model for the hydrothermal decomposition process of sewage sludge.

    PubMed

    Yin, Fengjun; Chen, Hongzhen; Xu, Guihua; Wang, Guangwei; Xu, Yuanjian

    2015-12-01

    A detailed kinetic model for the hydrothermal decomposition (HTD) of sewage sludge was developed based on an explicit reaction scheme considering exact intermediates including protein, saccharide, NH4(+)-N and acetic acid. The parameters were estimated by a series of kinetic data at a temperature range of 180-300°C. This modeling framework is capable of revealing stoichiometric relationships between different components by determining the conversion coefficients and identifying the reaction behaviors by determining rate constants and activation energies. The modeling work shows that protein and saccharide are the primary intermediates in the initial stage of HTD resulting from the fast reduction of biomass. The oxidation processes of macromolecular products to acetic acid are highly dependent on reaction temperature and dramatically restrained when temperature is below 220°C. Overall, this detailed model is meaningful for process simulation and kinetic analysis. PMID:26409104
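A kinetic model of this kind amounts to a small ODE system with Arrhenius rate constants. A deliberately simplified sketch (a two-step series scheme with hypothetical pre-exponential factors and activation energies, much coarser than the paper's model):

```python
import numpy as np
from scipy.integrate import solve_ivp

R = 8.314  # gas constant, J/(mol K)

def arrhenius(A, Ea, T):
    return A * np.exp(-Ea / (R * T))

def htd_rhs(t, y, k1, k2):
    """Series scheme: biomass -> intermediate (protein/saccharide) -> acetic acid."""
    biomass, intermediate, acetic = y
    return [-k1 * biomass,
            k1 * biomass - k2 * intermediate,
            k2 * intermediate]

T = 493.15                        # 220 degC, within the study's 180-300 degC range
k1 = arrhenius(1e8, 9.0e4, T)     # hypothetical parameters, for illustration only
k2 = arrhenius(1e6, 8.0e4, T)
sol = solve_ivp(htd_rhs, (0.0, 3600.0), [1.0, 0.0, 0.0], args=(k1, k2))
```

The stoichiometric bookkeeping the authors describe corresponds to the conversion coefficients linking each component's rate terms; in this simplified sketch they are all 1, so total mass is conserved along the trajectory.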

  14. Surface modification processes during methane decomposition on Cu-promoted Ni–ZrO2 catalysts

    PubMed Central

    Wolfbeisser, Astrid; Klötzer, Bernhard; Mayr, Lukas; Rameshan, Raffael; Zemlyanov, Dmitry; Bernardi, Johannes; Rupprechter, Günther

    2015-01-01

    The surface chemistry of methane on Ni–ZrO2 and bimetallic CuNi–ZrO2 catalysts and the stability of the CuNi alloy under reaction conditions of methane decomposition were investigated by combining reactivity measurements and in situ synchrotron-based near-ambient-pressure XPS. Cu was selected as an exemplary promoter for modifying the reactivity of Ni and enhancing the resistance against coke formation. We observed an activation process occurring in methane between 650 and 735 K, with the exact temperature depending on the composition, which resulted in an irreversible modification of the catalytic performance of the bimetallic catalysts towards a Ni-like behaviour. The sudden increase in catalytic activity could be explained by an increase in the concentration of reduced Ni atoms at the catalyst surface in the active state, likely as a consequence of the interaction with methane. Cu addition to Ni improved the desired resistance against carbon deposition by lowering the amount of coke formed. As a key conclusion, the CuNi alloy shows limited stability under relevant reaction conditions: this system is stable only in a limited temperature range, up to ~700 K in methane. Beyond this temperature, segregation of Ni species causes a fast increase in the methane decomposition rate. In view of the applicability of this system, a detailed understanding of the stability and surface composition of the bimetallic phases present, and of the influence of the Cu promoter on the surface chemistry under relevant reaction conditions, is essential. PMID:25815163

  15. MATLAB Tensor Toolbox

    Energy Science and Technology Software Center (ESTSC)

    2006-08-03

    This software provides a collection of MATLAB classes for tensor manipulations that can be used for fast algorithm prototyping. The tensor class extends the functionality of MATLAB's multidimensional arrays by supporting additional operations such as tensor multiplication. We have also added support for sparse tensors, tensors in Kruskal or Tucker format, and tensors stored as matrices (both dense and sparse).
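A tensor in Kruskal (CP) format is stored compactly as a weight vector plus one factor matrix per mode; the full tensor is the weighted sum of rank-one outer products. A Python sketch of the reconstruction (the Toolbox itself is MATLAB, so this is only an illustration of the format):

```python
import numpy as np

def cp_to_full(weights, factors):
    """Reconstruct a 3-way tensor from Kruskal (CP) format:
    X = sum_r weights[r] * outer(a_r, b_r, c_r)."""
    A, B, C = factors
    return np.einsum('r,ir,jr,kr->ijk', weights, A, B, C)

# A rank-2 Kruskal tensor of shape (4, 5, 3).
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 2))
B = rng.normal(size=(5, 2))
C = rng.normal(size=(3, 2))
X = cp_to_full(np.array([1.0, 0.5]), (A, B, C))
```

The storage saving is the point of the format: the rank-2 example above keeps 2 + 2*(4+5+3) = 26 numbers instead of the 60 entries of the dense array.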

  16. Linear friction weld process monitoring of fixture cassette deformations using empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Bakker, O. J.; Gibson, C.; Wilson, P.; Lohse, N.; Popov, A. A.

    2015-10-01

    Due to its inherent advantages, linear friction welding is a solid-state joining process of increasing importance to the aerospace, automotive, medical and power generation equipment industries. Tangential oscillations and forge stroke during the burn-off phase of the joining process introduce essential dynamic forces, which can also be detrimental to the welding process. Since burn-off is a critical phase in the manufacturing stage, process monitoring is fundamental for quality and stability control purposes. This study aims to improve workholding stability through the analysis of fixture cassette deformations. Methods and procedures for process monitoring are developed and implemented in a fail-or-pass assessment system for fixture cassette deformations during the burn-off phase. Additionally, the de-noised signals are compared to results from previous production runs. The observed deformations as a consequence of the forces acting on the fixture cassette are measured directly during the welding process. Data on the linear friction-welding machine are acquired and de-noised using empirical mode decomposition, before the burn-off phase is extracted. This approach enables a direct, objective comparison of the signal features with trends from previous successful welds. The capacity of the whole process monitoring system is validated and demonstrated through the analysis of a large number of signals obtained from welding experiments.

  17. Tensor Modeling Based for Airborne LiDAR Data Classification

    NASA Astrophysics Data System (ADS)

    Li, N.; Liu, C.; Pfeifer, N.; Yin, J. F.; Liao, Z. Y.; Zhou, Y.

    2016-06-01

    Feature selection and description are key factors in the classification of Earth observation data. In this paper a classification method based on tensor decomposition is proposed. First, multiple features are extracted from the raw LiDAR point cloud, and raster LiDAR images are derived by accumulating features or the "raw" data attributes. Then, the feature rasters of LiDAR data are stored as a tensor, and tensor decomposition is used to select component features. This tensor representation preserves the initial spatial structure and ensures that the neighborhood is taken into account. Based on a small number of component features, a k-nearest-neighbor classification is applied.
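    A minimal sketch of that pipeline, under assumptions not stated in the abstract: feature rasters are stacked into a (rows x cols x features) tensor, a truncated SVD of the feature-mode unfolding stands in for the tensor decomposition step, and a 1-nearest-neighbor rule classifies each pixel. All sizes, features, and labels below are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
rows, cols, n_feat = 20, 20, 6
feature_tensor = rng.random((rows, cols, n_feat))   # e.g. height, intensity, ...

# Feature-mode unfolding: one row per pixel.
unfolded = feature_tensor.reshape(-1, n_feat)
centered = unfolded - unfolded.mean(axis=0)

# Keep k "component features" (leading right singular directions).
k = 3
_, _, Vt = np.linalg.svd(centered, full_matrices=False)
components = centered @ Vt[:k].T                    # (n_pixels, k)

# 1-nearest-neighbor classification against a few labelled pixels.
train_idx = np.array([0, 150, 399])
train_lab = np.array([0, 1, 2])
d2 = ((components[:, None, :] - components[train_idx][None, :, :]) ** 2).sum(-1)
labels = train_lab[d2.argmin(axis=1)]
print(labels.shape)  # (400,)
```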

  18. Quantitative evaluation and visualization of cracking process in reinforced concrete by a moment tensor analysis of acoustic emission

    SciTech Connect

    Yuyama, Shigenori; Okamoto, Takahisa; Shigeishi, Mitsuhiro; Ohtsu, Masayasu

    1995-06-01

    Fracture tests are conducted on two types of reinforced concrete specimens under cyclic loading. The cracking process is quantitatively evaluated and visualized by applying a moment tensor analysis to the AE waveforms detected during fracture. First, bending tests are performed on reinforced concrete beams. It is found that both tensile and shear cracks are generated around the reinforcement in the low loading stages. However, shear cracks become dominant as the cracking process progresses. In the final stages, shear cracks are generated near the interface between the reinforcement and concrete even during unloading. A bond strength test, conducted second, shows that tensile cracks are produced around the reinforcement in the early stages. They spread from the reinforcement to wider areas in the later stages. An intense AE cluster due to shear cracks is observed along the interface between the reinforcement and concrete. A previous result from an engineering structure is also presented for comparison. All these results demonstrate the great promise of the analysis for quantitative evaluation and visualization of the cracking process in reinforced concrete. The relationship between the opening width of surface cracks and the Kaiser effect is studied intensively. It is shown that a breakdown of the Kaiser effect and high AE activity during unloading can be effective indices for estimating the level of deterioration in concrete structures.

  19. A low-rank approximation-based transductive support tensor machine for semisupervised classification.

    PubMed

    Liu, Xiaolan; Guo, Tengjiao; He, Lifang; Yang, Xiaowei

    2015-06-01

    In the fields of machine learning, pattern recognition, image processing, and computer vision, data are usually represented by tensors. For semisupervised tensor classification, the existing transductive support tensor machine (TSTM) must resort to an iterative technique, which is very time-consuming. To overcome this shortcoming, in this paper we extend the concave-convex procedure-based transductive support vector machine (CCCP-TSVM) to tensor patterns and propose a low-rank approximation-based TSTM, in which the tensor rank-one decomposition is used to compute the inner product of the tensors. Theoretically, the concave-convex procedure-based TSTM (CCCP-TSTM) is an extension of the linear CCCP-TSVM to tensor patterns. When the input patterns are vectors, CCCP-TSTM degenerates into the linear CCCP-TSVM. A set of experiments is conducted on 23 semisupervised classification tasks, generated from seven second-order face data sets, three third-order gait data sets, and two third-order image data sets, to illustrate the performance of CCCP-TSTM. The results show that, compared with CCCP-TSVM and TSTM, CCCP-TSTM provides significant performance gains in terms of test accuracy and training speed. PMID:25700447
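    The computational trick the abstract leans on is that the inner product of two tensors given in rank-one (CP) form never requires building dense tensors: it reduces to a Hadamard product of small Gram matrices. A NumPy sketch (the factor sizes are arbitrary):

```python
import numpy as np

def cp_inner(factors_a, factors_b):
    """<A, B> for A = sum_i a_i^(1) o ... o a_i^(N), and likewise B.

    factors_*: list of N matrices whose columns are the rank-one vectors.
    Cost depends on the ranks and mode sizes, not on the dense tensor size.
    """
    G = np.ones((factors_a[0].shape[1], factors_b[0].shape[1]))
    for Fa, Fb in zip(factors_a, factors_b):
        G *= Fa.T @ Fb      # Hadamard product of the per-mode Gram matrices
    return G.sum()

rng = np.random.default_rng(2)
fa = [rng.standard_normal((d, 3)) for d in (4, 5, 6)]
fb = [rng.standard_normal((d, 2)) for d in (4, 5, 6)]

# Check against the dense inner product.
dense_a = np.einsum('ir,jr,kr->ijk', *fa)
dense_b = np.einsum('ir,jr,kr->ijk', *fb)
assert np.allclose(cp_inner(fa, fb), (dense_a * dense_b).sum())
```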

  20. Highly entangled tensor networks

    NASA Astrophysics Data System (ADS)

    Gu, Yingfei; Bulmash, Daniel; Qi, Xiao-Liang

    Tensor network states are used to represent many-body quantum states, e.g., the ground state of a local Hamiltonian. In this talk, we will provide a systematic way to produce a family of highly entangled tensor network states. These states are entangled in a special way such that the entanglement entropy of a subsystem follows the Ryu-Takayanagi formula, i.e., the entropy is proportional to the area of the minimal geodesic surface bounding the boundary region. Our construction also provides an intuitive understanding of the Ryu-Takayanagi formula by relating it to a wave propagation process. We will present examples in various geometries.

  1. Demonstration of base catalyzed decomposition process, Navy Public Works Center, Guam, Mariana Islands

    SciTech Connect

    Schmidt, A.J.; Freeman, H.D.; Brown, M.D.; Zacher, A.H.; Neuenschwander, G.N.; Wilcox, W.A.; Gano, S.R.; Kim, B.C.; Gavaskar, A.R.

    1996-02-01

    Base Catalyzed Decomposition (BCD) is a chemical dehalogenation process designed for treating soils and other substrates contaminated with polychlorinated biphenyls (PCB), pesticides, dioxins, furans, and other hazardous organic substances. PCBs are heavy organic liquids once widely used in industry as lubricants, heat transfer oils, and transformer dielectric fluids. In 1976, production was banned when PCBs were recognized as carcinogenic substances. It was estimated that significant quantities (one billion tons) of U.S. soils, including areas on U.S. military bases outside the country, were contaminated by PCB leaks and spills, and cleanup activities began. The BCD technology was developed in response to these activities. This report details the evolution of the process, from inception to deployment in Guam, and describes the process and system components provided to the Navy to meet the remediation requirements. The report is divided into several sections to cover the range of development and demonstration activities. Section 2.0 gives an overview of the project history. Section 3.0 describes the process chemistry and remediation steps involved. Section 4.0 provides a detailed description of each component and specific development activities. Section 5.0 details the testing and deployment operations and provides the results of the individual demonstration campaigns. Section 6.0 gives an economic assessment of the process. Section 7.0 presents the conclusions and recommendations from this project. The appendices contain equipment and instrument lists, equipment drawings, and detailed run and analytical data.

  2. Decomposition of phenylarsonic acid by AOP processes: degradation rate constants and by-products.

    PubMed

    Jaworek, K; Czaplicka, M; Bratek, Ł

    2014-10-01

    The paper presents the results of studies of the photodegradation, photooxidation, and oxidation of phenylarsonic acid (PAA) in aqueous solution. Water solutions containing 2.7 g dm⁻³ phenylarsonic acid were subjected to advanced oxidation processes (AOPs) in UV, UV/H2O2, UV/O3, H2O2, and O3 systems under two pH conditions. Kinetic rate constants and half-lives of the phenylarsonic acid decomposition reaction are presented. The results indicate that at pH 2 and 7, PAA degradation follows pseudo-first-order kinetics. The highest rate constants (10.45 × 10⁻³ and 20.12 × 10⁻³) and degradation efficiencies at pH 2 and 7 were obtained with the UV/O3 process. In the solutions after the processes, benzene, phenol, acetophenone, o-hydroxybiphenyl, p-hydroxybiphenyl, benzoic acid, benzaldehyde, and biphenyl were identified. PMID:24824504
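    For pseudo-first-order kinetics, concentration decays as C(t) = C0·exp(-k·t), so the half-life follows directly from the rate constant as t½ = ln 2 / k. A short check using the two constants quoted above (the abstract states neither their units nor which belongs to which pH, so both are treated generically):

```python
import math

def half_life(k):
    """Half-life of a pseudo-first-order decay with rate constant k."""
    return math.log(2) / k

for k in (10.45e-3, 20.12e-3):
    # t1/2 is in the reciprocal units of k (e.g. minutes if k is in min^-1)
    print(f"k = {k:.2e} -> t1/2 = {half_life(k):.1f}")
```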

  3. Probabilistic Round Trip Contamination Analysis of a Mars Sample Acquisition and Handling Process Using Markovian Decompositions

    NASA Technical Reports Server (NTRS)

    Hudson, Nicolas; Lin, Ying; Barengoltz, Jack

    2010-01-01

    A method for evaluating the probability of a Viable Earth Microorganism (VEM) contaminating a sample during the sample acquisition and handling (SAH) process of a potential future Mars Sample Return mission is developed. A scenario where multiple core samples would be acquired using a rotary percussive coring tool, deployed from an arm on a MER-class rover, is analyzed. The analysis is conducted in a structured way by decomposing the sample acquisition and handling process into a series of discrete time steps, and breaking the physical system into a set of relevant components. At each discrete time step, two key functions are defined: the probability of a VEM being released from each component, and the transport matrix, which represents the probability of VEM transport from one component to another. By defining the expected number of VEMs on each component at the start of the sampling process, these decompositions allow the expected number of VEMs on each component at each sampling step to be represented as a Markov chain. This formalism provides a rigorous mathematical framework in which to analyze the probability of a VEM entering the sample chain, as well as making the analysis tractable by breaking the process down into small analyzable steps.
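    The Markov bookkeeping described above can be sketched in a few lines: the expected VEM counts per component form a vector that is repeatedly multiplied by the transport matrix. The component names, transport probabilities, and initial counts below are all invented for illustration; only the structure mirrors the abstract.

```python
import numpy as np

components = ["tool", "arm", "sample"]

# T[i, j]: probability a VEM on component i ends up on component j during
# one sampling step. Rows sum to 1 ("staying put" is included); the sample
# chain is modeled as absorbing.
T = np.array([
    [0.90, 0.08, 0.02],
    [0.05, 0.90, 0.05],
    [0.00, 0.00, 1.00],
])

n = np.array([100.0, 10.0, 0.0])   # expected VEMs per component at step 0
for _ in range(3):                 # propagate through three sampling steps
    n = n @ T
print(dict(zip(components, n.round(2))))
```

    Because each row of T sums to 1, the total expected count is conserved; what the analysis tracks is how much of it drifts into the sample chain.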

  4. A data-driven multidimensional signal-noise decomposition approach for GPR data processing

    NASA Astrophysics Data System (ADS)

    Chen, Chih-Sung; Jeng, Yih

    2015-12-01

    We demonstrate the possibility of applying a data-driven nonlinear filtering scheme in processing ground penetrating radar (GPR) data. The algorithm is based on the recently developed multidimensional ensemble empirical mode decomposition (MDEEMD) method, which provides a framework for developing a variety of data analysis approaches. GPR data processing is challenging due to the large data volume, special format, and geometrically sensitive attributes, which are easily affected by various types of noise. Approaches that work in other fields of data processing may not be equally applicable to GPR data. Therefore, the MDEEMD has to be modified to fit the special needs of GPR data processing. In this study, we first give a brief review of the MDEEMD, and then provide the detailed procedure for implementing a 2D GPR filter by exploiting the modified MDEEMD. A complete synthetic model study shows the details of the algorithm implementation. To assess the performance of the proposed approach, models with various signal-to-noise (S/N) ratios are discussed, and the results of a conventional filtering method are also provided for comparison. Two real GPR field examples and onsite excavations indicate that the proposed approach is feasible for practical use.

  5. Character Decomposition and Transposition Processes in Chinese Compound Words Modulates Attentional Blink

    PubMed Central

    Cao, Hongwen; Gao, Min; Yan, Hongmei

    2016-01-01

    The attentional blink (AB) is the phenomenon in which the identification of the second of two targets (T2) is attenuated if it is presented less than 500 ms after the first target (T1). Although the AB is eliminated in canonical word conditions, it remains unclear whether the character order in compound words affects the magnitude of the AB. Morpheme decomposition and transposition of Chinese two-character compound words can provide an effective means to examine AB priming and to assess combinations of the component representations inherent to visual word identification. In the present study, we examined the processing of consecutive targets in a rapid serial visual presentation (RSVP) paradigm using Chinese two-character compound words in which the two characters were transposed to form meaningful words or meaningless combinations (reversible, transposed, or canonical words). We found that when two Chinese characters that form a compound word, regardless of their order, are presented in an RSVP sequence, the likelihood of an AB for the second character is greatly reduced or eliminated compared to when the two characters constitute separate words rather than a compound word. Moreover, the order of the report for the two characters is more likely to be reversed when the normal order of the two characters in a compound word is reversed, especially when the interval between the presentation of the two characters is extremely short. These findings are more consistent with the cognitive strategy hypothesis than the resource-limited hypothesis during character decomposition and transposition of Chinese two-character compound words. These results suggest that compound characters are perceived as a unit, rather than two separate words. The data further suggest that readers could easily understand the text with character transpositions in compound words during Chinese reading. PMID:27379003

  7. Fundamental phenomena on fuel decomposition and boundary layer combustion processes with applications to hybrid rocket motors

    NASA Astrophysics Data System (ADS)

    Kuo, Kenneth K.; Lu, Y. C.; Chiaverini, Martin J.; Harting, George C.

    1994-11-01

    An experimental study on the fundamental processes involved in fuel decomposition and boundary layer combustion in hybrid rocket motors is being conducted at the High Pressure Combustion Laboratory of the Pennsylvania State University. This research should provide a useful engineering technology base in the development of hybrid rocket motors as well as a fundamental understanding of the complex processes involved in hybrid propulsion. A high pressure slab motor has been designed and manufactured for conducting experimental investigations. Oxidizer (LOX or GOX) supply and control systems have been designed and partly constructed for the head-end injection into the test chamber. Experiments using HTPB fuel, as well as fuels supplied by NASA designated industrial companies will be conducted. Design and construction of fuel casting molds and sample holders have been completed. The portion of these items for industrial company fuel casting will be sent to the McDonnell Douglas Aerospace Corporation in the near future. The study focuses on the following areas: observation of solid fuel burning processes with LOX or GOX, measurement and correlation of solid fuel regression rate with operating conditions, measurement of flame temperature and radical species concentrations, determination of the solid fuel subsurface temperature profile, and utilization of experimental data for validation of a companion theoretical study (Part 2) also being conducted at PSU.

  9. Thermal decomposition of potassium and sodium ethylxanthates and the influence of nitrobenzene on this process

    SciTech Connect

    Gorbatov, V.V.; Gerega, V.F.; Bordzilovskii, V.Ya.; Borovoi, A.A.; Dergunov, Yu.I.

    1988-02-10

    The thermal decomposition of the alkylxanthates was described by a first-order kinetic equation up to a degree of conversion of 50%. Thermal decomposition studies of potassium alkylxanthates indicated that the rate constants for the decomposition of ROCS₂K in isopentyl alcohol increased, and the activation energies decreased, as the group R changed along the series CH₃, C₂H₅, C₃H₇, C₄H₉, (CH₃)₃CCH₂, and iso-C₃H₇. In the study of the influence of nitrobenzene on the decomposition of potassium and sodium alkylxanthates, nitrobenzene additions were found to accelerate the thermal decomposition of ROCS₂M in isopentyl alcohol.

  10. Tensor representation techniques in post-Hartree-Fock methods: matrix product state tensor format

    NASA Astrophysics Data System (ADS)

    Benedikt, Udo; Auer, Henry; Espig, Mike; Hackbusch, Wolfgang; Auer, Alexander A.

    2013-09-01

    In this proof-of-principle study, we discuss the application of various tensor representation formats and their implications on memory requirements and computational effort for tensor manipulations as they occur in typical post-Hartree-Fock (post-HF) methods. A successive tensor decomposition/rank reduction scheme in the matrix product state (MPS) format for the two-electron integrals in the AO and MO bases and an estimate of the t₂ amplitudes as obtained from second-order many-body perturbation theory (MP2) are described. Furthermore, the AO-MO integral transformation, the calculation of the MP2 energy and the potential usage of tensors in low-rank MPS representation for the tensor contractions in coupled cluster theory are discussed in detail. We are able to show that the overall scaling of the memory requirements is reduced from the conventional N⁴ scaling to approximately N³, and the scaling of computational effort for tensor contractions in post-HF methods can be reduced to roughly N⁴, while the decomposition itself scales as N⁵. While efficient algorithms with low prefactor for the tensor decomposition have yet to be devised, this ansatz offers the possibility to find a robust approximation with low-scaling behaviour with system and basis-set size for post-HF ab initio methods.
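    A successive-SVD construction of an MPS (tensor train), in the spirit of the decomposition scheme described above, can be sketched in NumPy. No singular values are truncated here, so the reconstruction is exact up to round-off; the rank reduction the paper relies on would truncate S at each step.

```python
import numpy as np

def mps_decompose(T):
    """Split an N-way tensor into N cores of shape (r_prev, d_k, r_next)."""
    dims = T.shape
    cores, r_prev = [], 1
    A = np.asarray(T)
    for d in dims[:-1]:
        U, S, Vt = np.linalg.svd(A.reshape(r_prev * d, -1),
                                 full_matrices=False)
        r = S.size
        cores.append(U.reshape(r_prev, d, r))
        A = S[:, None] * Vt            # carry the remainder to the next mode
        r_prev = r
    cores.append(A.reshape(r_prev, dims[-1], 1))
    return cores

def mps_to_dense(cores):
    """Contract the cores back into a dense tensor."""
    res = cores[0].reshape(cores[0].shape[1], -1)
    for core in cores[1:]:
        r_prev, d, r = core.shape
        res = (res @ core.reshape(r_prev, d * r)).reshape(-1, r)
    return res.reshape([c.shape[1] for c in cores])

rng = np.random.default_rng(3)
T = rng.standard_normal((3, 4, 5, 2))
cores = mps_decompose(T)
assert np.allclose(mps_to_dense(cores), T)
```

    The memory saving quoted in the abstract comes from storing only the (small) three-way cores instead of the full N-way tensor once the ranks are truncated.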

  11. Empirical mode decomposition as a time-varying multirate signal processing system

    NASA Astrophysics Data System (ADS)

    Yang, Yanli

    2016-08-01

    Empirical mode decomposition (EMD) can adaptively split composite signals into narrow subbands termed intrinsic mode functions (IMFs). Although an analytical expression for the IMFs extracted by EMD was introduced in Yang et al. (2013) [1], it applies only to the case of uniformly spaced extrema. In this paper, the EMD algorithm is analyzed from a digital signal processing perspective for the case of nonuniformly spaced extrema. First, extrema extraction is represented by a time-varying extrema decimator, and the nonuniform extrema extraction is analyzed by modeling the time-varying extrema decimation at a fixed time point as a time-invariant decimation. Second, using the impulse/summation approach, spline interpolation for nonuniformly spaced knots is shown to consist of two basic operations: time-varying interpolation and filtering by a time-varying spline filter. Third, envelopes of signals are written as the output of the time-varying spline filter, and an expression for the envelopes in both the time and frequency domains is presented. The EMD algorithm is then described as a time-varying multirate signal processing system. Finally, an equation modeling the IMFs is derived using a matrix formulation in the time domain for the general case of nonuniformly spaced extrema.
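    One sifting step of EMD makes the two operations analyzed above concrete: extrema extraction (the "time-varying decimation") followed by interpolation back onto the full grid. In this sketch, linear interpolation (np.interp) stands in for the cubic-spline filter that real EMD implementations use, and the test signal is invented.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 400)
x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)

def local_extrema(y):
    """Indices of strict interior local maxima and minima."""
    idx = np.arange(1, y.size - 1)
    maxima = idx[(y[idx] > y[idx - 1]) & (y[idx] > y[idx + 1])]
    minima = idx[(y[idx] < y[idx - 1]) & (y[idx] < y[idx + 1])]
    return maxima, minima

maxima, minima = local_extrema(x)
upper = np.interp(t, t[maxima], x[maxima])   # upper envelope
lower = np.interp(t, t[minima], x[minima])   # lower envelope
candidate_imf = x - (upper + lower) / 2      # one sifting iteration
print(candidate_imf.shape)  # (400,)
```

    Iterating this step until the mean envelope is negligible yields the first IMF; EMD then repeats on the residue.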

  12. Fundamental phenomena on fuel decomposition and boundary layer combustion processes with applications to hybrid rocket motors

    NASA Technical Reports Server (NTRS)

    Kuo, Kenneth K.; Lu, Y. C.; Chiaverini, Martin J.; Harting, George C.

    1994-01-01

    An experimental study on the fundamental processes involved in fuel decomposition and boundary layer combustion in hybrid rocket motors is being conducted at the High Pressure Combustion Laboratory of the Pennsylvania State University. This research should provide an engineering technology base for development of large scale hybrid rocket motors as well as a fundamental understanding of the complex processes involved in hybrid propulsion. A high pressure slab motor has been designed for conducting experimental investigations. Oxidizer (LOX or GOX) is injected through the head-end over a solid fuel (HTPB) surface. Experiments using fuels supplied by NASA designated industrial companies will also be conducted. The study focuses on the following areas: measurement and observation of solid fuel burning with LOX or GOX, correlation of solid fuel regression rate with operating conditions, measurement of flame temperature and radical species concentrations, determination of the solid fuel subsurface temperature profile, and utilization of experimental data for validation of a companion theoretical study also being conducted at PSU.

  13. [Rates of decomposition processes in mountain soils of the Sudeten as a function of edaphic-climatic and biotic factors].

    PubMed

    Striganova, B R; Bienkowski, P

    2000-01-01

    The rate of grass litter decomposition was studied in soils of the Karkonosze Mountains of the Sudeten at different altitudes. Parallel structural-functional investigations of the soil animal population, on the example of the soil macrofauna, were carried out, and heavy metals were assayed in the soil at stationary plots to reveal the effects of both natural and anthropogenic factors on soil biological activity. The recent contamination of soil in the Sudeten by heavy metals and sulfur does not affect the spatial distribution and abundance of the soil-dwelling invertebrates or the decomposition rates. The latter correlated with a high level of soil saprotroph activity. The activity of the decomposition processes depends on the soil content of organic matter, the conditions of soil drainage, and the temperature of the upper soil horizon. PMID:11149317

  14. Multilinear operators for higher-order decompositions.

    SciTech Connect

    Kolda, Tamara Gibson

    2006-04-01

    We propose two new multilinear operators for expressing the matrix compositions that are needed in the Tucker and PARAFAC (CANDECOMP) decompositions. The first operator, which we call the Tucker operator, is shorthand for performing an n-mode matrix multiplication for every mode of a given tensor and can be employed to concisely express the Tucker decomposition. The second operator, which we call the Kruskal operator, is shorthand for the sum of the outer-products of the columns of N matrices and allows a divorce from a matricized representation and a very concise expression of the PARAFAC decomposition. We explore the properties of the Tucker and Kruskal operators independently of the related decompositions. Additionally, we provide a review of the matrix and tensor operations that are frequently used in the context of tensor decompositions.
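    The two operators defined above translate directly into NumPy for the 3-way case (the paper's setting is MATLAB; the einsum rendering and names here are an illustration, not the paper's notation).

```python
import numpy as np

def tucker_operator(G, matrices):
    """[[G; A, B, C]]: n-mode multiply the core G by a matrix in every mode."""
    A, B, C = matrices
    return np.einsum('abc,ia,jb,kc->ijk', G, A, B, C)

def kruskal_operator(matrices):
    """[[A, B, C]]: sum of outer products of the matched columns."""
    A, B, C = matrices
    return np.einsum('ir,jr,kr->ijk', A, B, C)

rng = np.random.default_rng(4)
A, B, C = (rng.standard_normal((d, 2)) for d in (3, 4, 5))
G = rng.standard_normal((2, 2, 2))

X = tucker_operator(G, (A, B, C))        # a (3, 4, 5) Tucker tensor

# The Kruskal operator is the Tucker operator with a superdiagonal core:
I = np.zeros((2, 2, 2))
I[range(2), range(2), range(2)] = 1.0
assert np.allclose(kruskal_operator((A, B, C)), tucker_operator(I, (A, B, C)))
```

    The final identity is the standard link between the two decompositions: PARAFAC is Tucker with an identity (superdiagonal) core.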

  15. Environmental assessment of the base catalyzed decomposition (BCD) process. Research report, June--July 1998

    SciTech Connect

    1998-08-01

    The report summarizes laboratory-scale, pilot-scale, and field performance data on BCD (Base Catalyzed Decomposition) technology, collected to date by various governmental, academic, and private organizations.

  16. Age-Related Modifications of Diffusion Tensor Imaging Parameters and White Matter Hyperintensities as Inter-Dependent Processes

    PubMed Central

    Pelletier, Amandine; Periot, Olivier; Dilharreguy, Bixente; Hiba, Bassem; Bordessoules, Martine; Chanraud, Sandra; Pérès, Karine; Amieva, Hélène; Dartigues, Jean-François; Allard, Michèle; Catheline, Gwénaëlle

    2016-01-01

    Microstructural changes of White Matter (WM) associated with aging have been widely described through Diffusion Tensor Imaging (DTI) parameters. In parallel, White Matter Hyperintensities (WMH), as observed on T2-weighted MRI, are extremely common in older individuals. However, few studies have investigated both phenomena conjointly. The present study investigates aging effects on DTI parameters in the absence and in the presence of WMH. Diffusion maps were constructed based on 21-direction DTI scans of young adults (n = 19, mean age = 33, SD = 7.4) and two age-matched groups of older adults, one presenting low-level-WMH (n = 20, mean age = 78, SD = 3.2) and one presenting high-level-WMH (n = 20, mean age = 79, SD = 5.4). Older subjects with low-level-WMH presented modifications of DTI parameters in comparison to younger subjects, fitting the DTI pattern classically described in aging, i.e., Fractional Anisotropy (FA) decrease/Radial Diffusivity (RD) increase. Furthermore, older subjects with high-level-WMH showed greater DTI modifications in Normal Appearing White Matter (NAWM) in comparison to those with low-level-WMH. Finally, in older subjects with high-level-WMH, FA and RD values of NAWM were associated with WMH burden. Therefore, our findings suggest that DTI modifications and the presence of WMH are two inter-dependent processes occurring within different temporal windows. DTI changes would reflect the early phase of white matter changes, and WMH would appear as a consequence of those changes. PMID:26834625

  17. Combined effects of leaf litter and soil microsite on decomposition process in arid rangelands.

    PubMed

    Carrera, Analía Lorena; Bertiller, Mónica Beatriz

    2013-01-15

    The objective of this study was to analyze the combined effects of leaf litter quality and soil properties on litter decomposition and soil nitrogen (N) mineralization at conserved (C) and sheep-grazing-disturbed (D) vegetation states in arid rangelands of the Patagonian Monte. It was hypothesized that spatial differences in soil inorganic-N levels have a larger impact on decomposition processes of non-recalcitrant than of recalcitrant leaf litter (low and high concentration of secondary compounds, respectively). Leaf litter and upper soil were extracted from modal-size plant patches (patch microsite) and the associated inter-patch area (inter-patch microsite) in C and D. Leaf litter was pooled per vegetation state, and soil was pooled combining vegetation state and microsite. Concentrations of N and secondary compounds in leaf litter and total and inorganic N in soil were assessed in each pooled sample. Leaf litter decay and soil N mineralization at microsites of C and D were estimated in 160 microcosms incubated at field capacity (16 months). C soils had higher total N than D soils (0.58 and 0.41 mg/g, respectively). Patch soil of C and inter-patch soil of D exhibited the highest values of inorganic N (8.8 and 8.4 μg/g, respectively). Leaf litter of C was less recalcitrant and decomposed faster than that of D. Non-recalcitrant leaf litter decay and induced soil N mineralization had larger variation among microsites (coefficients of variation = 25 and 41%, respectively) than recalcitrant leaf litter (coefficients of variation = 12 and 32%, respectively). Changes in the canopy structure induced by grazing disturbance increased leaf litter recalcitrance, and reduced litter decay and soil N mineralization, independently of soil N levels. This highlights the importance of the combined effects of soil and leaf litter properties on N cycling probably with consequences for vegetation reestablishment and dynamics, rangeland resistance and resilience with implications

  18. KOALA: A program for the processing and decomposition of transient spectra

    NASA Astrophysics Data System (ADS)

    Grubb, Michael P.; Orr-Ewing, Andrew J.; Ashfold, Michael N. R.

    2014-06-01

    Extracting meaningful kinetic traces from time-resolved absorption spectra is a non-trivial task, particularly for solution-phase spectra where solvent interactions can substantially broaden and shift the transition frequencies. Typically, each spectrum is composed of signal from a number of molecular species (e.g., excited states, intermediate complexes, product species) with overlapping spectral features. Additionally, the profiles of these spectral features may evolve in time (i.e., signal nonlinearity), further complicating the decomposition process. Here, we present a new program for decomposing mixed transient spectra into their individual component spectra and extracting the corresponding kinetic traces: KOALA (Kinetics Observed After Light Absorption). The software combines spectral target analysis with brute-force linear least squares fitting, which is computationally efficient because of the small nonlinear parameter space of most spectral features. We demonstrate the application of KOALA to two sets of experimental transient absorption spectra with multiple mixed spectral components. Although designed for decomposing solution-phase transient absorption data, KOALA may in principle be applied to any time-evolving spectra with multiple components.
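    The linear step of such a decomposition is easy to sketch: if the component spectra are known (or currently proposed by the target analysis), the concentration-vs-time traces follow from ordinary linear least squares. The Gaussian spectra and exponential kinetics below are synthetic and purely illustrative, not KOALA's algorithm in detail.

```python
import numpy as np

wavelengths = np.linspace(400.0, 700.0, 301)
gauss = lambda c, w: np.exp(-0.5 * ((wavelengths - c) / w) ** 2)

# Known/guessed component spectra, one column per species: (n_wl, n_species)
S = np.column_stack([gauss(480, 20), gauss(560, 30)])

# Synthetic "true" kinetics: species 1 converts into species 2.
times = np.linspace(0.0, 10.0, 50)
C_true = np.column_stack([np.exp(-times / 3), 1 - np.exp(-times / 3)])

# Mixed transient spectra, one column per delay time: (n_wl, n_t)
D = S @ C_true.T

# Recover the kinetic traces by linear least squares: solve S @ C = D.
C_fit, *_ = np.linalg.lstsq(S, D, rcond=None)
assert np.allclose(C_fit.T, C_true)
```

    In the full problem only the few nonlinear spectral parameters (band centers, widths) need iterative fitting; everything else is this cheap linear solve, which is the efficiency argument made in the abstract.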

  19. Design studies of the sulfur trioxide decomposition reactor for the sulfur-cycle hydrogen-production process

    SciTech Connect

    Lin, S.S.; Flaherty, R.

    1982-01-01

    The Sulfur Cycle is a two-step hybrid electrochemical/thermochemical process for decomposing water into hydrogen and oxygen. Integration of a complex chemical process with a solar heat source poses unique challenges with regard to process and equipment design. The conceptual design for a developmental test unit demonstrating the sulfur cycle was prepared in 1980. The test unit design is compatible with the power level of a large parabolic solar collector. One of the key components in the process is the sulfur trioxide decomposition reactor. The design studies of the sulfur trioxide decomposition reactor, encompassing thermodynamics, reaction kinetics, heat transfer, and mechanical considerations, are described, along with a brief description of the test unit.

  20. Growth of lanthanum manganate buffer layers for coated conductors via a metal-organic decomposition process

    NASA Astrophysics Data System (ADS)

    Venkataraman, Kartik

    LaMnO3 (LMO) was identified as a possible buffer material for YBa2Cu3O7-x conductors due to its diffusion barrier properties and close lattice match with YBa2Cu3O7-x. Growth of LMO films via a metal-organic decomposition (MOD) process on Ni, Ni-5at.%W (Ni-5W), and single crystal SrTiO3 substrates was investigated. Phase-pure LMO was grown via MOD on Ni and SrTiO3 substrates at temperatures and oxygen pressures within a thermodynamic "process window" wherein LMO, Ni, Ni-5W, and SrTiO3 are all stable components. LMO could not be grown on Ni-5W in the "process window" because tungsten diffused from the substrate into the overlying film, where it reacted to form La and Mn tungstates. The kinetics of tungstate formation and crystallization of phase-pure LMO from the La and Mn acetate precursors are competitive in the temperature range explored (850--1100°C). Temperatures <850°C might mitigate tungsten diffusion from the substrate to the film sufficiently to obviate tungstate formation, but LMO films deposited via MOD require temperatures ≥850°C for nucleation and grain growth. Using a Y2O3 seed layer on Ni-5W to block tungsten from diffusing into the LMO film was explored; however, Y2O3 reacts with tungsten in the "process window" at 850--1100°C. Tungsten diffusion into Y2O3 can be blocked if epitaxial, crack-free NiWO4 and NiO layers are formed at the interface between Ni-5W and Y2O3. NiWO4 only grows epitaxially if the overlying NiO and buffer layers are thick enough to mechanically suppress (011)-oriented NiWO4 grain growth. This is not the case when a bare 75 nm-thick Y2O3 film on Ni-5W is processed at 850°C. These studies show that the Ni-5W substrate must be at a low temperature to prevent tungsten diffusion, whereas the LMO precursor film must be at elevated temperature to crystallize. An excimer laser-assisted MOD process was used where a Y2O3-coated Ni-5W substrate was held at 500°C in air and the pulsed laser photo-thermally heated the Y2O3 and LMO

  1. Mathematical modeling of frontal process in thermal decomposition of a substance with allowance for the finite velocity of heat propagation

    SciTech Connect

    Shlenskii, O.F.; Murashov, G.G.

    1982-05-01

    In describing frontal processes of thermal decomposition of high-energy condensed substances, for example detonation, it is common practice to write the equation for the conservation of energy without any limitations on the heat propagation velocity (HPV). At the same time, it is known that in calculating fast processes of heat conduction, the assumption of an infinitely high HPV is not always justified. In order to evaluate the influence of the HPV on the results of calculations of the heat conduction process under conditions of short-term exothermic decomposition of a condensed substance, the solution of the problem of heating a semi-infinite, thermally unstable solid body with boundary conditions of the third kind on the surface has been examined.

  2. Empirical mode decomposition analysis of random processes in the solar atmosphere

    NASA Astrophysics Data System (ADS)

    Kolotkov, D. Y.; Anfinogentov, S. A.; Nakariakov, V. M.

    2016-08-01

    Context. Coloured noisy components with a power-law spectral energy distribution often appear in solar signals of various types. Such frequency-dependent noise may indicate the operation of various randomly distributed dynamical processes in the solar atmosphere. Aims: We develop a recipe for the correct usage of the empirical mode decomposition (EMD) technique in the presence of coloured noise, allowing clear discrimination between quasi-periodic oscillatory phenomena in the solar atmosphere and superimposed random background processes. For illustration, we statistically investigate extreme ultraviolet (EUV) emission intensity variations observed with SDO/AIA in the coronal (171 Å), chromospheric (304 Å), and upper photospheric (1600 Å) layers of the solar atmosphere, from a quiet-sun and a sunspot umbra region. Methods: EMD was used for the analysis because of its adaptive nature and its applicability to processing non-stationary and amplitude-modulated time series. For comparison with the results obtained with EMD, we use the Fourier transform technique as an etalon. Results: We empirically revealed the statistical properties of synthetic coloured noises in EMD, and suggest a scheme that allows for the detection of noisy components among the intrinsic modes obtained with EMD in real signals. Application of the method to the solar EUV signals showed that they indeed behave randomly and can be represented as a combination of different coloured noises characterised by specific values of the power-law indices in their spectral energy distributions. On the other hand, 3-min oscillations in the analysed sunspot were detected to have energies significantly above the corresponding noise level. Conclusions: Correct accounting for background frequency-dependent random processes is essential when using EMD for the analysis of oscillations in the solar atmosphere. For the quiet-sun region the power law index was found to increase

  3. Decomposition Process of Alane and Gallane Compounds in Metal-Organic Chemical Vapor Deposition Studied by Surface Photo-Absorption

    NASA Astrophysics Data System (ADS)

    Yamauchi, Yoshiharu; Kobayashi, Naoki

    1992-09-01

    We used surface photo-absorption (SPA) to study trimethylamine alane (TMAA) and dimethylamine gallane (DMAG) decomposition processes on a substrate surface in metal-organic chemical vapor deposition. The decomposition onset temperatures of these group III hydride sources correspond to the substrate temperature at which the SPA reflectivity starts to increase during the supply of the group III source onto a group V stabilized surface. It was found that TMAA and DMAG start to decompose at about 150°C on an As-stabilized surface, which is much lower than the decomposition onsets of trialkyl Al and Ga compounds. Low-temperature photoluminescence spectra exhibit dominant excitonic emissions for GaAs layers grown with DMAG at substrate temperatures above 400°C, indicating that carbon incorporation and the deterioration of crystal quality due to incomplete decomposition on the surface are much suppressed by using DMAG. A comparison of AlGaAs photoluminescence between layers grown with TMAA/triethylgallium and with triethylaluminum/triethylgallium shows that the band-to-carbon-acceptor transition is greatly reduced by using TMAA. TMAA and DMAG were verified to be promising group III sources for low-temperature, high-purity growth with low carbon incorporation.

  4. Hyperbolicity of scalar-tensor theories of gravity

    SciTech Connect

    Salgado, Marcelo; Martinez del Rio, David; Alcubierre, Miguel; Nunez, Dario

    2008-05-15

    Two first order strongly hyperbolic formulations of scalar-tensor theories of gravity allowing nonminimal couplings (Jordan frame) are presented along the lines of the 3+1 decomposition of spacetime. One is based on the Bona-Masso formulation, while the other one employs a conformal decomposition similar to that of Baumgarte-Shapiro-Shibata-Nakamura. A modified Bona-Masso slicing condition adapted to the scalar-tensor theory is proposed for the analysis. This study confirms that the scalar-tensor theory has a well-posed Cauchy problem even when formulated in the Jordan frame.

  5. Ab initio molecular-dynamics study of the EC decomposition process on Li2O2 surfaces

    NASA Astrophysics Data System (ADS)

    Ando, Yasunobu; Ikeshoji, Tamio; Otani, Minoru

    2015-03-01

    We have simulated electrochemical reactions of EC molecule decomposition on a Li2O2 substrate by ab initio molecular dynamics combined with the effective screening medium method. EC molecules adsorb onto the peroxide spontaneously. We find through analysis of the density of states that the adsorption state is stabilized by hybridization of the sp2 orbital with the surface states of the Li2O2. After adsorption, the EC ring opens, which leads to the decomposition of the peroxide and the formation of a carboxyl group. Alkyl carbonates of this kind formed on the Li2O2 substrate have actually been observed in experiments.

  6. Efficient MATLAB computations with sparse and factored tensors.

    SciTech Connect

    Bader, Brett William; Kolda, Tamara Gibson (Sandia National Lab, Livermore, CA)

    2006-12-01

    In this paper, the term tensor refers simply to a multidimensional or N-way array, and we consider how specially structured tensors allow for efficient storage and computation. First, we study sparse tensors, which have the property that the vast majority of the elements are zero. We propose storing sparse tensors using coordinate format and describe the computational efficiency of this scheme for various mathematical operations, including those typical to tensor decomposition algorithms. Second, we study factored tensors, which have the property that they can be assembled from more basic components. We consider two specific types: a Tucker tensor can be expressed as the product of a core tensor (which itself may be dense, sparse, or factored) and a matrix along each mode, and a Kruskal tensor can be expressed as the sum of rank-1 tensors. We are interested in the case where the storage of the components is less than the storage of the full tensor, and we demonstrate that many elementary operations can be computed using only the components. All of the efficiencies described in this paper are implemented in the Tensor Toolbox for MATLAB.
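As a plain-NumPy sketch of the Kruskal format described above (invented sizes; the abstract's actual implementation is the MATLAB Tensor Toolbox), a 3-way tensor is stored as three factor matrices, and individual elements can be computed without ever assembling the full array:

```python
import numpy as np

# Kruskal format sketch: a 3-way tensor stored as factor matrices A, B, C,
# with X[i, j, k] = sum_r A[i, r] * B[j, r] * C[k, r]. Sizes are invented.
rng = np.random.default_rng(1)
I, J, K, R = 4, 5, 6, 2                      # tensor dimensions and rank
A, B, C = (rng.standard_normal((n, R)) for n in (I, J, K))

# Full reconstruction for reference: einsum over the shared rank index.
full = np.einsum('ir,jr,kr->ijk', A, B, C)

# A single element is computed from the factors alone in O(R) time --
# the point of factored storage when R is small.
def kruskal_entry(A, B, C, i, j, k):
    return float(np.sum(A[i] * B[j] * C[k]))

assert np.isclose(kruskal_entry(A, B, C, 2, 3, 4), full[2, 3, 4])
```

The factored form stores I*R + J*R + K*R numbers instead of I*J*K, which is the storage saving the abstract refers to.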

  7. Effect of water vapor on the thermal decomposition process of zinc hydroxide chloride and crystal growth of zinc oxide

    SciTech Connect

    Kozawa, Takahiro; Onda, Ayumu; Yanagisawa, Kazumichi; Kishi, Akira; Masuda, Yasuaki

    2011-03-15

    The thermal decomposition process of zinc hydroxide chloride (ZHC), Zn5(OH)8Cl2·H2O, prepared by a hydrothermal slow-cooling method, has been investigated by simultaneous X-ray diffractometry and differential scanning calorimetry (XRD-DSC) and by thermogravimetric-differential thermal analysis (TG-DTA) in a humidity-controlled atmosphere. ZHC decomposed to ZnO through β-Zn(OH)Cl as the intermediate phase, leaving amorphous hydrated ZnCl2. In humid N2 with P(H2O) = 4.5 and 10 kPa, the hydrolysis of residual ZnCl2 was accelerated and the theoretical amount of ZnO was obtained at lower temperatures than in dry N2, whereas in dry N2 a significant weight loss was caused by vaporization of the residual ZnCl2. ZnO formed by calcination in a stagnant air atmosphere retained the morphology of the original ZHC crystals and consisted of c-axis oriented column-like particle arrays. On the other hand, the preferred orientation of ZnO was inhibited when calcination was performed in 100% water vapor. The detailed thermal decomposition process of ZHC and the effect of water vapor on the crystal growth of ZnO are discussed. Highlights: We examine the thermal decomposition of zinc hydroxide chloride in water vapor. Water vapor had no effect on the thermal decomposition up to 230 °C. Water vapor accelerated the decomposition of the residual ZnCl2 in ZnO. Without water vapor, a large amount of ZnCl2 evaporated to form c-axis oriented ZnO.

  8. Modelling regulation of decomposition and related root/mycorrhizal processes in arctic tundra soils. Final report

    SciTech Connect

    Linkins, A.E.

    1992-09-01

    Since this was the final year of the project, principal activities were directed towards either collecting the data needed to complete existing incomplete data sets or writing manuscripts. Data sets for the Imnaviat Creek watershed basin are functionally complete, and data were finalized on cellulose mineralization and the impact of dust on soil organic carbon and phosphorus decomposition. Seven manuscripts were prepared and are briefly outlined.

  9. Modelling regulation of decomposition and related root/mycorrhizal processes in arctic tundra soils

    SciTech Connect

    Linkins, A.E.

    1992-01-01

    Since this was the final year of the project, principal activities were directed towards either collecting the data needed to complete existing incomplete data sets or writing manuscripts. Data sets for the Imnaviat Creek watershed basin are functionally complete, and data were finalized on cellulose mineralization and the impact of dust on soil organic carbon and phosphorus decomposition. Seven manuscripts were prepared and are briefly outlined.

  10. Dynamics of crop residue composition-decomposition: Temporal modeling of multivariate carbon sources and processes [abstract

    Technology Transfer Automated Retrieval System (TEKTRAN)

    We examined multivariate relationships in structural carbohydrates plus lignin (STC) and non-structural (NSC) carbohydrates and their impact on C:N ratio and the dynamics of active (ka) and passive (kp) residue decomposition of alfalfa, corn, soybean, cuphea and switchgrass as candidates in diverse ...

  11. Multi-Zone Liquid Thrust Chamber Performance Code with Domain Decomposition for Parallel Processing

    NASA Technical Reports Server (NTRS)

    Navaz, Homayun K.

    2002-01-01

    -equation turbulence model, and two-phase flow. To overcome these limitations, the LTCP code is rewritten to include the multi-zone capability with domain decomposition that makes it suitable for parallel processing, i.e., enabling the code to run every zone or sub-domain on a separate processor. This can reduce the run time by a factor of 6 to 8, depending on the problem.

  12. Kinetic analysis of spinodal decomposition process in Fe-Cr alloys by small angle neutron scattering

    SciTech Connect

    Ujihara, T.; Osamura, K.

    2000-04-19

    The rate of spinodal decomposition depends on the spatial composition distribution. In order to estimate the time dependence of this rate experimentally, the structural change in Fe-30 at.% Cr and Fe-50 at.% Cr alloys aged at 748, 773, 798, and 823 K was investigated via small-angle neutron scattering, and a kinetic analysis of the experimental data was carried out using the Langer-Bar-on-Miller (LBM) theory. The theory contains a rate term with a physical meaning similar to that of the diffusion coefficient. As a result, it becomes clear that this rate term decreases as decomposition advances, a fact that can be explained by a modified LBM theory accounting for the composition-dependent mobility.

  13. Conceptualizing and Estimating Process Speed in Studies Employing Ecological Momentary Assessment Designs: A Multilevel Variance Decomposition Approach

    PubMed Central

    Shiyko, Mariya P.; Ram, Nilam

    2012-01-01

    Researchers have been making use of ecological momentary assessment (EMA) and other study designs that sample feelings and behaviors in real time and in naturalistic settings to study temporal dynamics and contextual factors of a wide variety of psychological, physiological, and behavioral processes. As EMA designs become more widespread, questions are arising about the frequency of data sampling, with direct implications for participants’ burden and researchers’ ability to capture and study dynamic processes. Traditionally, spectral analytic techniques are used for time series data to identify process speed. However, the nature of EMA data, often collected with fewer than 100 measurements per person, sampled at randomly spaced intervals, and replete with planned and unplanned missingness, precludes application of traditional spectral analytic techniques. Building on principles of variance partitioning used in the generalizability theory of measurement and spectral analysis, we illustrate the utility of multilevel variance decompositions for isolating process speed in EMA-type data. Simulation and empirical data from a smoking-cessation study are used to demonstrate the method and to evaluate the process speed of smoking urges and quitting self-efficacy. Results of the multilevel variance decomposition approach can inform process-oriented theory and future EMA study designs. PMID:22707796
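The basic building block of such a variance decomposition can be illustrated with a two-level simulation (a generic between/within-person partition in NumPy; the sizes and scales are invented, and this is not the authors' model):

```python
import numpy as np

# Generic between/within-person variance partition on simulated EMA-like
# data (invented scales; not the authors' multilevel model).
rng = np.random.default_rng(3)
n_people, n_obs = 50, 40

person_means = rng.normal(0.0, 2.0, size=n_people)   # between-person sd = 2
data = person_means[:, None] + rng.normal(0.0, 1.0, (n_people, n_obs))

# Within-person variance: average of each person's sample variance.
within_var = data.var(axis=1, ddof=1).mean()
# Between-person variance: variance of person means, debiased for the
# sampling noise of those means.
between_var = data.mean(axis=1).var(ddof=1) - within_var / n_obs
icc = between_var / (between_var + within_var)
# With these simulated scales the intraclass correlation should be
# near 4 / (4 + 1) = 0.8.
```

Multilevel variance decompositions of the kind described in the abstract build on exactly this partition, extended across additional timescales of sampling.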

  14. Mathematical modeling and investigations of the processes of heat conduction of ammonium perchlorate with phase transitions in thermal decomposition and gasification

    NASA Astrophysics Data System (ADS)

    Mikhailov, A. V.; Lagun, I. M.; Polyakov, E. P.

    2013-01-01

    Transient heat-conduction processes occurring in the period of thermal decomposition and gasification of a crystalline oxidant — ammonium perchlorate — have been investigated and analyzed on the basis of the developed mathematical model.

  15. Block term decomposition for modelling epileptic seizures

    NASA Astrophysics Data System (ADS)

    Hunyadi, Borbála; Camps, Daan; Sorber, Laurent; Paesschen, Wim Van; Vos, Maarten De; Huffel, Sabine Van; Lathauwer, Lieven De

    2014-12-01

    Recordings of neural activity, such as EEG, are an inherent mixture of different ongoing brain processes as well as artefacts, and are typically characterised by a low signal-to-noise ratio. Moreover, EEG datasets are often inherently multidimensional, comprising information in time, along different channels, subjects, trials, etc. Additional information may be conveyed by expanding the signal into even more dimensions, e.g., incorporating spectral features by applying a wavelet transform. The underlying sources might show differences in each of these modes. Therefore, tensor-based blind source separation techniques, which can extract the sources of interest from such multiway arrays while simultaneously exploiting the signal characteristics in all dimensions, have gained increasing interest. Canonical polyadic decomposition (CPD) has been successfully used to extract epileptic seizure activity from wavelet-transformed EEG data (Bioinformatics 23(13):i10-i18, 2007; NeuroImage 37:844-854, 2007), where each source is described by a rank-1 tensor, i.e. by the combination of one particular temporal, spectral and spatial signature. However, in certain scenarios, where the seizure pattern is nonstationary, such a trilinear signal model is insufficient. Here, we present the application of a recently introduced technique, called block term decomposition (BTD), to separate EEG tensors into rank-(Lr, Lr, 1) terms, allowing more variability in the data to be modelled than would be possible with CPD. In a simulation study, we investigate the robustness of BTD against noise and different choices of model parameters. Furthermore, we show various real EEG recordings where BTD outperforms CPD in capturing complex seizure characteristics.
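A toy construction of the rank-(Lr, Lr, 1) block terms (NumPy, invented sizes; not the authors' decomposition code) makes the structure concrete: the first two modes share a rank-L matrix factor, while the third mode contributes a single vector per term.

```python
import numpy as np

# Build a tensor as a sum of R block terms, each of multilinear rank
# (L, L, 1): term_r[i, j, k] = (A_r @ B_r.T)[i, j] * c_r[k].
# All sizes below are invented for the sketch.
rng = np.random.default_rng(2)
I, J, K, L, R = 6, 7, 5, 2, 3

X = np.zeros((I, J, K))
terms = []
for _ in range(R):
    Ar = rng.standard_normal((I, L))    # mode-1 factor, rank L
    Br = rng.standard_normal((J, L))    # mode-2 factor, rank L
    cr = rng.standard_normal(K)         # mode-3 signature, a single vector
    term = (Ar @ Br.T)[:, :, None] * cr[None, None, :]
    terms.append(term)
    X += term

# With L = 1 each block term collapses to a rank-1 (CPD) term, so BTD
# strictly generalises the trilinear CPD model.
```

The mode-3 unfolding of each term has rank 1 (one temporal/spectral pattern per vector signature), while the mode-1 unfolding has rank L, which is the extra flexibility the abstract describes.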

  16. Toluene decomposition performance and NOx by-product formation during a DBD-catalyst process.

    PubMed

    Guo, Yufang; Liao, Xiaobin; Fu, Mingli; Huang, Haibao; Ye, Daiqi

    2015-02-01

    Characteristics of toluene decomposition and formation of nitrogen oxide (NOx) by-products were investigated in a dielectric barrier discharge (DBD) reactor with/without catalyst at room temperature and atmospheric pressure. Four kinds of metal oxides, i.e., manganese oxide (MnOx), iron oxide (FeOx), cobalt oxide (CoOx) and copper oxide (CuO), supported on Al2O3/nickel foam, were used as catalysts. It was found that introducing catalysts could improve toluene removal efficiency, promote decomposition of by-product ozone and enhance CO2 selectivity. In addition, NOx was suppressed with the decrease of specific energy density (SED) and the increase of humidity, gas flow rate and toluene concentration, or catalyst introduction. Among the four kinds of catalysts, the CuO catalyst showed the best performance in NOx suppression. The MnOx catalyst exhibited the lowest concentration of O3 and highest CO2 selectivity but the highest concentration of NOx. A possible pathway for NOx production in DBD was discussed. The contributions of oxygen active species and hydroxyl radicals are dominant in NOx suppression. PMID:25662254

  17. Automatic pitch decomposition for improved process window when printing dense features at k1eff < 0.20

    NASA Astrophysics Data System (ADS)

    Huckabay, Judy; Staud, Wolf; Naber, Robert; Dusa, Mircea; Flagello, Donis; Socha, Robert

    2006-05-01

    In conventional IC processes, the smallest size of any feature that can be created on a wafer is severely limited by the pitch of the processing system. This approach is a key enabler of printing mask features on wafers without requiring new manufacturing equipment and with only minor changes to existing manufacturing processes. The approach also does not impose restrictions on the design of the chip. This paper will discuss the method and the full-chip decomposition tool used to determine the locations at which to split the layout. It will demonstrate examples of over-constrained layouts and how these configurations are mitigated. It will also show the reticle enhancement techniques used to process the split layouts and the Lithographic Checking used to verify the lithographic results.

  18. A uniform parameterization of moment tensors

    NASA Astrophysics Data System (ADS)

    Tape, C.; Tape, W.

    2015-12-01

    A moment tensor is a 3 x 3 symmetric matrix that expresses an earthquake source. We construct a parameterization of the five-dimensional space of all moment tensors of unit norm. The coordinates associated with the parameterization are closely related to moment tensor orientations and source types. The parameterization is uniform, in the sense that equal volumes in the coordinate domain of the parameterization correspond to equal volumes of moment tensors. Uniformly distributed points in the coordinate domain therefore give uniformly distributed moment tensors. A cartesian grid in the coordinate domain can be used to search efficiently over moment tensors. We find that uniformly distributed moment tensors have uniformly distributed orientations (eigenframes), but that their source types (eigenvalue triples) are distributed so as to favor double couples. An appropriate choice of a priori moment tensor probability is a prerequisite for parameter estimation. As a seemingly sensible choice, we consider the homogeneous probability, in which equal volumes of moment tensors are equally likely. We believe that it will lead to improved characterization of source processes.
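One ingredient of such a construction can be sketched directly (a generic recipe for this sketch, not the authors' parameterization): unit-Frobenius-norm moment tensors uniformly distributed on the 5-sphere can be drawn by sampling the six independent components as Gaussians, weighting the off-diagonals by 1/sqrt(2), and normalizing.

```python
import numpy as np

# Draw a random 3x3 symmetric matrix of unit Frobenius norm, uniformly on
# the 5-sphere of such matrices. The off-diagonal entries appear twice in
# the matrix, so they are scaled by 1/sqrt(2) to make the norm of the
# 6-vector of free components equal the Frobenius norm of the matrix.
rng = np.random.default_rng(4)

def random_unit_moment_tensor():
    v = rng.standard_normal(6)
    v /= np.linalg.norm(v)                      # uniform point on the 5-sphere
    m11, m22, m33, m12, m13, m23 = (v[:3].tolist()
                                    + (v[3:] / np.sqrt(2)).tolist())
    return np.array([[m11, m12, m13],
                     [m12, m22, m23],
                     [m13, m23, m33]])

M = random_unit_moment_tensor()
```

The abstract's observation then follows empirically: the eigenframes of such samples are uniformly distributed, while the eigenvalue triples cluster toward double couples.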

  19. Singular value decomposition for genome-wide expression data processing and modeling

    PubMed Central

    Alter, Orly; Brown, Patrick O.; Botstein, David

    2000-01-01

    We describe the use of singular value decomposition in transforming genome-wide expression data from genes × arrays space to reduced diagonalized “eigengenes” × “eigenarrays” space, where the eigengenes (or eigenarrays) are unique orthonormal superpositions of the genes (or arrays). Normalizing the data by filtering out the eigengenes (and eigenarrays) that are inferred to represent noise or experimental artifacts enables meaningful comparison of the expression of different genes across different arrays in different experiments. Sorting the data according to the eigengenes and eigenarrays gives a global picture of the dynamics of gene expression, in which individual genes and arrays appear to be classified into groups of similar regulation and function, or similar cellular state and biological phenotype, respectively. After normalization and sorting, the significant eigengenes and eigenarrays can be associated with observed genome-wide effects of regulators, or with measured samples, in which these regulators are overactive or underactive, respectively. PMID:10963673
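The transform itself is a direct application of the SVD. A minimal NumPy sketch with synthetic data (gene and array counts are arbitrary; real use would start from measured expression values):

```python
import numpy as np

# Eigengene/eigenarray transform of a genes x arrays matrix via SVD.
rng = np.random.default_rng(5)
n_genes, n_arrays = 100, 8
expression = rng.standard_normal((n_genes, n_arrays))

# Rows of Vt are the "eigengenes" (orthonormal superpositions of genes'
# expression across arrays); columns of U are the "eigenarrays".
U, s, Vt = np.linalg.svd(expression, full_matrices=False)

# Filtering: drop the weakest eigengene (here taken to represent noise,
# purely for illustration) and reconstruct a "normalized" data set.
k = n_arrays - 1
denoised = U[:, :k] @ np.diag(s[:k]) @ Vt[:k]
```

Sorting rows and columns by their projections onto the leading eigengenes and eigenarrays then yields the global grouping of co-regulated genes described in the abstract.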

  20. Tensor hypercontraction. II. Least-squares renormalization.

    PubMed

    Parrish, Robert M; Hohenstein, Edward G; Martínez, Todd J; Sherrill, C David

    2012-12-14

    The least-squares tensor hypercontraction (LS-THC) representation for the electron repulsion integral (ERI) tensor is presented. Recently, we developed the generic tensor hypercontraction (THC) ansatz, which represents the fourth-order ERI tensor as a product of five second-order tensors [E. G. Hohenstein, R. M. Parrish, and T. J. Martínez, J. Chem. Phys. 137, 044103 (2012)]. Our initial algorithm for the generation of the THC factors involved a two-sided invocation of overlap-metric density fitting, followed by a PARAFAC decomposition, and is denoted PARAFAC tensor hypercontraction (PF-THC). LS-THC supersedes PF-THC by producing the THC factors through a least-squares renormalization of a spatial quadrature over the otherwise singular 1/r12 operator. Remarkably, an analytical and simple formula for the LS-THC factors exists. Using this formula, the factors may be generated with O(N^5) effort if exact integrals are decomposed, or O(N^4) effort if the decomposition is applied to density-fitted integrals, using any choice of density fitting metric. The accuracy of LS-THC is explored for a range of systems using both conventional and density-fitted integrals in the context of MP2. The grid fitting error is found to be negligible even for extremely sparse spatial quadrature grids. For the case of density-fitted integrals, the additional error incurred by the grid fitting step is generally markedly smaller than the underlying Coulomb-metric density fitting error. The present results, coupled with our previously published factorizations of MP2 and MP3, provide an efficient, robust O(N^4) approach to both methods. Moreover, LS-THC is generally applicable to many other methods in quantum chemistry. PMID:23248986

  1. Tensor hypercontraction. II. Least-squares renormalization

    NASA Astrophysics Data System (ADS)

    Parrish, Robert M.; Hohenstein, Edward G.; Martínez, Todd J.; Sherrill, C. David

    2012-12-01

    The least-squares tensor hypercontraction (LS-THC) representation for the electron repulsion integral (ERI) tensor is presented. Recently, we developed the generic tensor hypercontraction (THC) ansatz, which represents the fourth-order ERI tensor as a product of five second-order tensors [E. G. Hohenstein, R. M. Parrish, and T. J. Martínez, J. Chem. Phys. 137, 044103 (2012)], 10.1063/1.4732310. Our initial algorithm for the generation of the THC factors involved a two-sided invocation of overlap-metric density fitting, followed by a PARAFAC decomposition, and is denoted PARAFAC tensor hypercontraction (PF-THC). LS-THC supersedes PF-THC by producing the THC factors through a least-squares renormalization of a spatial quadrature over the otherwise singular 1/r12 operator. Remarkably, an analytical and simple formula for the LS-THC factors exists. Using this formula, the factors may be generated with O(N^5) effort if exact integrals are decomposed, or O(N^4) effort if the decomposition is applied to density-fitted integrals, using any choice of density fitting metric. The accuracy of LS-THC is explored for a range of systems using both conventional and density-fitted integrals in the context of MP2. The grid fitting error is found to be negligible even for extremely sparse spatial quadrature grids. For the case of density-fitted integrals, the additional error incurred by the grid fitting step is generally markedly smaller than the underlying Coulomb-metric density fitting error. The present results, coupled with our previously published factorizations of MP2 and MP3, provide an efficient, robust O(N^4) approach to both methods. Moreover, LS-THC is generally applicable to many other methods in quantum chemistry.
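At the level of tensor shapes, the THC format is compact to sketch (synthetic NumPy arrays standing in for real electron repulsion integrals; all sizes are invented):

```python
import numpy as np

# THC format: the 4th-order tensor is represented by five 2nd-order
# factors, ERI[p,q,r,s] ~ sum_{P,Q} X[p,P] X[q,P] Z[P,Q] X[r,Q] X[s,Q].
# Here n is the number of orbitals and g the number of grid points;
# the arrays are random stand-ins, not physical integrals.
rng = np.random.default_rng(7)
n, g = 5, 12
X = rng.standard_normal((n, g))       # collocation-like factor
Z = rng.standard_normal((g, g))
Z = 0.5 * (Z + Z.T)                   # symmetric core factor

eri = np.einsum('pP,qP,PQ,rQ,sQ->pqrs', X, X, Z, X, X)
# Factored storage is O(n*g + g^2) numbers versus O(n^4) for the full
# 4th-order tensor.
```

The (pq) and (rs) index-pair symmetries of the full tensor follow automatically from the shared X factors and the symmetry of Z.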

  2. Randomized interpolative decomposition of separated representations

    NASA Astrophysics Data System (ADS)

    Biagioni, David J.; Beylkin, Daniel; Beylkin, Gregory

    2015-01-01

    We introduce an algorithm to compute tensor interpolative decomposition (dubbed CTD-ID) for the reduction of the separation rank of Canonical Tensor Decompositions (CTDs). Tensor ID selects, for a user-defined accuracy ɛ, a near optimal subset of terms of a CTD to represent the remaining terms via a linear combination of the selected terms. CTD-ID can be used as an alternative to or in combination with the Alternating Least Squares (ALS) algorithm. We present examples of its use within a convergent iteration to compute inverse operators in high dimensions. We also briefly discuss the spectral norm as a computational alternative to the Frobenius norm in estimating approximation errors of tensor ID. We reduce the problem of finding tensor IDs to that of constructing interpolative decompositions of certain matrices. These matrices are generated via randomized projection of the terms of the given tensor. We provide cost estimates and several examples of the new approach to the reduction of separation rank.
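The matrix-level building block, an interpolative decomposition, can be sketched in a few lines of NumPy (a generic construction on a synthetic exactly-rank-3 matrix; the paper's randomized projection step is omitted, and for this data the first three columns serve as a skeleton, where in practice a column-pivoted QR would choose them):

```python
import numpy as np

# Matrix interpolative decomposition (ID) sketch: approximate A by a
# subset of its own columns times interpolation coefficients,
# A ~ A[:, cols] @ T. Sizes and rank are invented for the sketch.
rng = np.random.default_rng(6)
A = rng.standard_normal((40, 3)) @ rng.standard_normal((3, 30))  # rank 3

cols = [0, 1, 2]                                    # skeleton column indices
T = np.linalg.lstsq(A[:, cols], A, rcond=None)[0]   # interpolation coefficients
err = np.linalg.norm(A - A[:, cols] @ T) / np.linalg.norm(A)
# err sits at machine-precision level because A has exact rank 3.
```

Tensor ID as described in the abstract applies this same selection to a matrix whose columns are generated from the terms of the CTD, so that a near-optimal subset of terms represents the rest.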

  3. Seismically Inferred Rupture Process of the 2011 Tohoku-Oki Earthquake by Using Data-Validated 3D and 2.5D Green's Tensor Waveforms

    NASA Astrophysics Data System (ADS)

    Okamoto, T.; Takenaka, H.; Hara, T.; Nakamura, T.; Aoki, T.

    2014-12-01

    We analyze the "seismic" rupture process of the March 11, 2011 Tohoku-Oki earthquake (GCMT Mw9.1) by using a non-linear multi-time-window waveform inversion method. We incorporate the effect of the near-source laterally heterogeneous structure on the synthetic Green's tensor waveforms; otherwise the analysis may result in erroneous solutions [1]. To increase the resolution we use teleseismic and strong-motion seismograms jointly, because the one-sided distribution of strong-motion stations may reduce resolution near the trench axis [2]. We use a 2.5D FDM [3] for teleseismic P-waves and a full 3D FDM that incorporates topography, the oceanic water layer, 3D heterogeneity, and attenuation for strong motions [4]. We apply multi-GPU acceleration by using the TSUBAME supercomputer at the Tokyo Institute of Technology [5]. We "validated" the Green's tensor waveforms with a point-source moment tensor inversion analysis for a small (Mw5.8) shallow event and confirmed that the observed waveforms are reproduced well by the synthetics. The slip distribution inferred using the 2.5D and 3D Green's functions has large slips (max. 37 m) near the hypocenter and small slips near the trench (figure). An isolated slip region is also identified close to Fukushima prefecture. These features are similar to those obtained in our preliminary study [4]. The landward large slips and trenchward small slips have also been reported by [2]. It is remarkable that we confirmed these features by using data-validated Green's functions. On the other hand, very large slips are inferred close to the trench when we apply "1D" Green's functions that do not incorporate the lateral heterogeneity. Our result suggests that the trenchward large deformation that caused the large tsunamis did not radiate strong seismic waves. Very slow slips (e.g., the tsunami earthquake), delayed slips, and anelastic deformation are among the candidate physical processes for this deformation. [1] Okamoto and Takenaka, EPS, 61, e17-e20, 2009

  4. A Domain Decomposition Approach for Large-Scale Simulations of Flow Processes in Hydrate-Bearing Geologic Media

    SciTech Connect

    Zhang, Keni; Moridis, G.J.; Wu, Y.-S.; Pruess, K.

    2008-07-01

Simulation of the system behavior of hydrate-bearing geologic media involves solving fully coupled mass- and heat-balance equations. In this study, we develop a domain decomposition approach for large-scale gas hydrate simulations with coarse-granularity parallel computation. This approach partitions a simulation domain into small subdomains. The full model domain, consisting of discrete subdomains, is still simulated simultaneously by using multiple processes/processors. Each processor is dedicated to the following tasks for its partitioned subdomain: updating thermophysical properties, assembling mass- and energy-balance equations, solving linear equation systems, and performing various other local computations. The linearized equation systems are solved in parallel with a parallel linear solver, using an efficient interprocess communication scheme. This new domain decomposition approach has been implemented into the TOUGH+HYDRATE code and has demonstrated excellent speedup and good scalability. In this paper, we demonstrate applications of the new approach in simulating field-scale models for gas production from gas-hydrate deposits.
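The scheme described above — partition the domain, let each process update its own subdomain, and exchange boundary values — can be sketched in serial form. The toy below is an illustration of that pattern (the grid, the Jacobi update, and all names are invented for this sketch), not the TOUGH+HYDRATE implementation:

```python
import numpy as np

def partition(n_cells, n_sub):
    """Split n_cells into n_sub contiguous subdomains (index ranges)."""
    bounds = np.linspace(0, n_cells, n_sub + 1, dtype=int)
    return [(bounds[i], bounds[i + 1]) for i in range(n_sub)]

def jacobi_step(u, subdomains):
    """One Jacobi sweep performed subdomain-by-subdomain.

    Each subdomain reads one neighbour value across its boundary,
    mimicking the interprocess exchange a parallel solver performs.
    """
    new = u.copy()
    for lo, hi in subdomains:
        for i in range(max(lo, 1), min(hi, len(u) - 1)):
            new[i] = 0.5 * (u[i - 1] + u[i + 1])
    return new

u = np.zeros(16)
u[0], u[-1] = 1.0, 0.0          # fixed boundary values
subs = partition(len(u), 4)      # four "processors"
for _ in range(200):
    u = jacobi_step(u, subs)
# u now approximates the linear steady-state profile between the boundaries
```

Because every subdomain reads the previous iterate, the decomposed sweep reproduces the undecomposed Jacobi update exactly; the real code replaces the inner loop with per-processor work and message passing.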

  5. When Policy Structures Technology: Balancing upfront decomposition and in-process coordination in Europe's decentralized space technology ecosystem

    NASA Astrophysics Data System (ADS)

    Vrolijk, Ademir; Szajnfarber, Zoe

    2015-01-01

    This paper examines the decentralization of European space technology research and development through the joint lenses of policy, systems architecture, and innovation contexts. It uses a detailed longitudinal case history of the development of a novel astrophysics instrument to explore the link between policy-imposed institutional decomposition and the architecture of the technical system. The analysis focuses on five instances of collaborative design decision-making and finds that matching between the technical and institutional architectures is a predictor of project success, consistent with the mirroring hypothesis in extant literature. Examined over time, the instances reveal stability in the loosely coupled nature of institutional arrangements and a trend towards more integral, or tightly coupled, technical systems. The stability of the institutional arrangements is explained as an artifact of the European Hultqvist policy and the trend towards integral technical systems is related to the increasing complexity of modern space systems. If these trends persist, the scale of the mismatch will continue to grow. As a first step towards mitigating this challenge, the paper develops a framework for balancing upfront decomposition and in-process coordination in collaborative development projects. The astrophysics instrument case history is used to illustrate how collaborations should be defined for a given inherent system complexity.

  6. Unraveling the Decomposition Process of Lead(II) Acetate: Anhydrous Polymorphs, Hydrates, and Byproducts and Room Temperature Phosphorescence.

    PubMed

    Martínez-Casado, Francisco J; Ramos-Riesco, Miguel; Rodríguez-Cheda, José A; Cucinotta, Fabio; Matesanz, Emilio; Miletto, Ivana; Gianotti, Enrica; Marchese, Leonardo; Matěj, Zdeněk

    2016-09-01

Lead(II) acetate [Pb(Ac)2, where Ac = the acetate group, CH3COO(-)] is a very common salt with many and varied uses throughout history. However, only lead(II) acetate trihydrate [Pb(Ac)2·3H2O] has been characterized to date. In this paper, two enantiotropic polymorphs of the anhydrous salt, a novel hydrate [lead(II) acetate hemihydrate: Pb(Ac)2·1/2H2O], and two decomposition products [corresponding to two different basic lead(II) acetates: Pb4O(Ac)6 and Pb2O(Ac)2] are reported, with their structures solved for the first time. The compounds present a variety of molecular arrangements, being 2D or 1D coordination polymers. A thorough thermal analysis, by differential scanning calorimetry (DSC) and thermogravimetric analysis (TGA), was also carried out to study the thermal behavior of the salt and its decomposition process, in inert and oxygenated atmospheres, identifying the phases and byproducts that appear. The complex thermal behavior of lead(II) acetate is now resolved, revealing the existence of another hydrate, two anhydrous enantiotropic polymorphs, and several byproducts. Moreover, some of them are phosphorescent at room temperature. The compounds were studied by TGA, DSC, X-ray diffraction, and UV-vis spectroscopy. PMID:27548299

  7. Process versus product in social learning: comparative diffusion tensor imaging of neural systems for action execution-observation matching in macaques, chimpanzees, and humans.

    PubMed

    Hecht, Erin E; Gutman, David A; Preuss, Todd M; Sanchez, Mar M; Parr, Lisa A; Rilling, James K

    2013-05-01

    Social learning varies among primate species. Macaques only copy the product of observed actions, or emulate, while humans and chimpanzees also copy the process, or imitate. In humans, imitation is linked to the mirror system. Here we compare mirror system connectivity across these species using diffusion tensor imaging. In macaques and chimpanzees, the preponderance of this circuitry consists of frontal-temporal connections via the extreme/external capsules. In contrast, humans have more substantial temporal-parietal and frontal-parietal connections via the middle/inferior longitudinal fasciculi and the third branch of the superior longitudinal fasciculus. In chimpanzees and humans, but not in macaques, this circuitry includes connections with inferior temporal cortex. In humans alone, connections with superior parietal cortex were also detected. We suggest a model linking species differences in mirror system connectivity and responsivity with species differences in behavior, including adaptations for imitation and social learning of tool use. PMID:22539611

  8. 3D reconstruction of tensors and vectors

    SciTech Connect

    Defrise, Michel; Gullberg, Grant T.

    2005-02-17

Here we have developed formulations for the reconstruction of 3D tensor fields from planar (Radon) and line-integral (X-ray) projections of 3D vector and tensor fields. Much of the motivation for this work is the potential application of MRI to perform diffusion tensor tomography. The goal is to develop a theory for the reconstruction of both Radon planar and X-ray or line-integral projections because of the flexibility of MRI to obtain both of these types of projections in 3D. The development presented here for the linear tensor tomography problem provides insight into the structure of the nonlinear MRI diffusion tensor inverse problem. A particular application of tensor imaging in MRI is the potential use of cardiac diffusion tensor tomography for determining in vivo cardiac fiber structure. One difficulty in the cardiac application is the motion of the heart. This presents a need for developing future theory for tensor tomography in a motion field, which means developing a better understanding of the MRI signal for diffusion processes in a deforming medium. The techniques developed may allow the application of MRI tensor tomography for the study of the structure of fiber tracts in the brain, atherosclerotic plaque, and spine in addition to fiber structure in the heart. However, the relations presented are also applicable to other fields in medical imaging such as diffraction tomography using ultrasound. The mathematics presented can also be extended to the exponential Radon transform of tensor fields and to other geometric acquisitions such as cone beam tomography of tensor fields.

  9. The Search for a Volatile Human Specific Marker in the Decomposition Process

    PubMed Central

    Rosier, E.; Loix, S.; Develter, W.; Van de Voorde, W.; Tytgat, J.; Cuypers, E.

    2015-01-01

In this study, a validated method using a thermal desorber combined with a gas chromatograph coupled to mass spectrometry was used to identify the volatile organic compounds released during decomposition of 6 human and 26 animal remains in a laboratory environment during a period of 6 months. 452 compounds were identified. Among them, a human-specific marker was sought using principal component analysis. We found a combination of 8 compounds (ethyl propionate, propyl propionate, propyl butyrate, ethyl pentanoate, pyridine, diethyl disulfide, methyl(methylthio)ethyl disulfide and 3-methylthio-1-propanol) that led to the distinction of human and pig remains from other animal remains. Furthermore, it was possible to separate the pig remains from human remains based on 5 esters (3-methylbutyl pentanoate, 3-methylbutyl 3-methylbutyrate, 3-methylbutyl 2-methylbutyrate, butyl pentanoate and propyl hexanoate). Further research in the field with full bodies is needed to corroborate these results and to search for one or more human-specific markers. Such markers would allow more efficient training of cadaver dogs, or the development of portable detection devices. PMID:26375029
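The dimensionality-reduction step the study relies on (principal component analysis over compound abundances) can be sketched with plain NumPy via the SVD of the centered data matrix. The data below are invented stand-ins for the VOC measurements, chosen only so that one group carries higher marker abundances:

```python
import numpy as np

# Toy stand-in for the study's VOC table: rows = remains, columns =
# relative abundances of 8 detected compounds (values are invented).
rng = np.random.default_rng(0)
human_like = rng.normal(loc=1.0, scale=0.1, size=(6, 8))
other      = rng.normal(loc=0.0, scale=0.1, size=(10, 8))
X = np.vstack([human_like, other])

# PCA: centre the data, then project onto the top right-singular vectors.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:2].T           # coordinates on the first two components

# Samples rich in the marker compounds separate from the rest on PC1.
pc1_human = scores[:6, 0]
pc1_other = scores[6:, 0]
```

Because the between-group difference dominates the variance here, the first component separates the two groups cleanly; in the real data the authors inspect which compounds load on the separating components.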

  10. Applying matching pursuit decomposition time-frequency processing to UGS footstep classification

    NASA Astrophysics Data System (ADS)

    Larsen, Brett W.; Chung, Hugh; Dominguez, Alfonso; Sciacca, Jacob; Kovvali, Narayan; Papandreou-Suppappola, Antonia; Allee, David R.

    2013-06-01

    The challenge of rapid footstep detection and classification in remote locations has long been an important area of study for defense technology and national security. Also, as the military seeks to create effective and disposable unattended ground sensors (UGS), computational complexity and power consumption have become essential considerations in the development of classification techniques. In response to these issues, a research project at the Flexible Display Center at Arizona State University (ASU) has experimented with footstep classification using the matching pursuit decomposition (MPD) time-frequency analysis method. The MPD provides a parsimonious signal representation by iteratively selecting matched signal components from a pre-determined dictionary. The resulting time-frequency representation of the decomposed signal provides distinctive features for different types of footsteps, including footsteps during walking or running activities. The MPD features were used in a Bayesian classification method to successfully distinguish between the different activities. The computational cost of the iterative MPD algorithm was reduced, without significant loss in performance, using a modified MPD with a dictionary consisting of signals matched to cadence temporal gait patterns obtained from real seismic measurements. The classification results were demonstrated with real data from footsteps under various conditions recorded using a low-cost seismic sensor.
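The iterative selection the abstract describes — repeatedly matching the residual against a fixed dictionary and subtracting the best match — is the core of matching pursuit. A minimal sketch with a toy sinusoid dictionary (the cadence-matched seismic dictionary of the paper is replaced by invented cosine atoms):

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_iter=10):
    """Greedy MPD sketch: at each step pick the unit-norm atom most
    correlated with the residual and subtract its contribution.
    Returns (atom index, coefficient) pairs and the final residual."""
    residual = signal.astype(float).copy()
    atoms = []
    for _ in range(n_iter):
        corr = dictionary.T @ residual          # inner products with all atoms
        k = int(np.argmax(np.abs(corr)))
        atoms.append((k, corr[k]))
        residual = residual - corr[k] * dictionary[:, k]
    return atoms, residual

# Toy dictionary: unit-norm cosines at different integer frequencies.
n = 128
t = np.arange(n)
freqs = np.arange(1, 20)
D = np.stack([np.cos(2 * np.pi * f * t / n) for f in freqs], axis=1)
D /= np.linalg.norm(D, axis=0)

x = 3.0 * D[:, 4] + 0.5 * D[:, 11]             # signal built from atoms 4 and 11
atoms, res = matching_pursuit(x, D, n_iter=2)
# atoms recovers the two planted (index, coefficient) pairs
```

The selected indices and coefficients form the parsimonious representation; in the classification setting those parameters serve as the time-frequency features fed to the Bayesian classifier.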

  11. Multiple seismogenic processes for high-frequency earthquakes at Katmai National Park, Alaska: Evidence from stress tensor inversions of fault-plane solutions

    USGS Publications Warehouse

    Moran, S.C.

    2003-01-01

The volcanological significance of seismicity within Katmai National Park has been debated since the first seismograph was installed in 1963, in part because Katmai seismicity consists almost entirely of high-frequency earthquakes that can be caused by a wide range of processes. I investigate this issue by determining 140 well-constrained first-motion fault-plane solutions for shallow (depth < 9 km) earthquakes occurring between 1995 and 2001 and inverting these solutions for the stress tensor in different regions within the park. Earthquakes removed by several kilometers from the volcanic axis occur in a stress field characterized by horizontally oriented σ1 and σ3 axes, with σ1 rotated slightly (12°) relative to the NUVEL-1A subduction vector, indicating that these earthquakes are occurring in response to regional tectonic forces. On the other hand, stress tensors for earthquake clusters beneath several Katmai cluster volcanoes have vertically oriented σ1 axes, indicating that these events are occurring in response to local, not regional, processes. At Martin-Mageik, vertically oriented σ1 is most consistent with failure under edifice loading conditions in conjunction with localized pore pressure increases associated with hydrothermal circulation cells. At Trident-Novarupta, it is consistent with a number of possible models, including occurrence along fractures formed during the 1912 eruption that now serve as horizontal conduits for migrating fluids and/or volatiles from nearby degassing and cooling magma bodies. At Mount Katmai, it is most consistent with continued seismicity along ring-fracture systems created in the 1912 eruption, perhaps enhanced by circulating hydrothermal fluids and/or seepage from the caldera-filling lake.

  12. Feasibility study: Application of the geopressured-geothermal resource to pyrolytic conversion or decomposition/detoxification processes

    SciTech Connect

    Propp, W.A.; Grey, A.E.; Negus-de Wys, J.; Plum, M.M.; Haefner, D.R.

    1991-09-01

This study presents a preliminary evaluation of the technical and economic feasibility of selected conceptual processes for pyrolytic conversion of organic feedstocks or the decomposition/detoxification of hazardous wastes by coupling the process to the geopressured-geothermal resource. The report presents a detailed discussion of the resource and of each process selected for evaluation, including the technical evaluation of each. A separate section presents the economic methodology used and the evaluation of the technically viable process. A final section presents conclusions and recommendations. Three separate processes were selected for evaluation. These are pyrolytic conversion of biomass to petroleum-like fluids, wet air oxidation (WAO) at subcritical conditions for destruction of hazardous waste, and supercritical water oxidation (SCWO), also for the destruction of hazardous waste. The scientific feasibility of all three processes has been previously established by various bench-scale and pilot-scale studies. For a variety of reasons detailed in the report, the SCWO process is the only one deemed to be technically feasible, although the effects of the high solids content of the geothermal brine need further study. This technology shows tremendous promise for contributing to solving the nation's energy and hazardous waste problems. However, the current economic analysis suggests that it is uneconomical at this time. 50 refs., 5 figs., 7 tabs.

  13. Peatland Microbial Communities and Decomposition Processes in the James Bay Lowlands, Canada

    PubMed Central

    Preston, Michael D.; Smemo, Kurt A.; McLaughlin, James W.; Basiliko, Nathan

    2012-01-01

Northern peatlands are a large repository of atmospheric carbon due to an imbalance between primary production by plants and microbial decomposition. The James Bay Lowlands (JBL) of northern Ontario are a large peatland complex but remain relatively unstudied. Climate change models predict the region will experience warmer and drier conditions, potentially altering plant community composition, and shifting the region from a long-term carbon sink to a source. We collected a peat core from two geographically separated (ca. 200 km) ombrotrophic peatlands (Victor and Kinoje Bogs) and one minerotrophic peatland (Victor Fen) located near Victor Bog within the JBL. We characterized (i) archaeal, bacterial, and fungal community structure with terminal restriction fragment length polymorphism of ribosomal DNA, (ii) estimated microbial activity using community level physiological profiling and extracellular enzyme activities, and (iii) the aeration and temperature dependence of carbon mineralization at three depths (0–10, 50–60, and 100–110 cm) from each site. Similar dominant microbial taxa were observed at all three peatlands despite differences in nutrient content and substrate quality. In contrast, we observed differences in basal respiration, enzyme activity, and the magnitude of substrate utilization, which were all generally higher at Victor Fen and similar between the two bogs. However, there was no preferential mineralization of carbon substrates between the bogs and fens. Microbial community composition did not correlate with measures of microbial activity, but pH was a strong predictor of activity across all sites and depths. Increased peat temperature and aeration stimulated CO2 production but this did not correlate with a change in enzyme activities. Potential microbial activity in the JBL appears to be influenced by the quality of the peat substrate and the presence of microbial inhibitors, which suggests the existing peat substrate will have a large

  14. Interaural cross correlation of event-related potentials and diffusion tensor imaging in the evaluation of auditory processing disorder: a case study.

    PubMed

    Jerger, James; Martin, Jeffrey; McColl, Roderick

    2004-01-01

In a previous publication (Jerger et al., 2002), we presented event-related potential (ERP) data on a pair of 10-year-old twin girls (Twins C and E), one of whom (Twin E) showed strong evidence of auditory processing disorder. For the present paper, we analyzed cross-correlation functions of ERP waveforms generated in response to the presentation of target stimuli to either the right or left ears in a dichotic paradigm. There were four conditions; three involved the processing of real words for either phonemic, semantic, or spectral targets; one involved the processing of a nonword acoustic signal. Marked differences in the cross-correlation functions were observed. In the case of Twin C, cross-correlation functions were uniformly normal across both hemispheres. The functions for Twin E, however, suggest poorly correlated neural activity over the left parietal region during the three word processing conditions, and over the right parietal area in the nonword acoustic condition. Differences between the twins' brains were evaluated using diffusion tensor magnetic resonance imaging (DTI). For Twin E, results showed reduced anisotropy over the length of the midline corpus callosum and adjacent lateral structures, implying reduced myelin integrity. Taken together, these findings suggest that failure to achieve appropriately temporally correlated bihemispheric brain activity in response to auditory stimulation, perhaps as a result of faulty interhemispheric communication via the corpus callosum, may be a factor in at least some children with auditory processing disorder. PMID:15030103
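The waveform cross-correlation underlying this analysis can be sketched with NumPy: normalize both traces, slide one against the other, and read off the peak correlation and its lag. The signals below are synthetic stand-ins (a sinusoid and a delayed copy), not the twins' ERP data:

```python
import numpy as np

def normalized_xcorr(a, b):
    """Normalized cross-correlation of two equal-length waveforms.
    Returns (lags, r); at zero lag r equals the Pearson-style
    correlation of the two traces."""
    a = (a - a.mean()) / (a.std() * len(a))
    b = (b - b.mean()) / b.std()
    r = np.correlate(a, b, mode="full")
    lags = np.arange(-len(a) + 1, len(a))
    return lags, r

t = np.linspace(0, 1, 500)                       # 1 s of "ERP" at 500 samples
left  = np.sin(2 * np.pi * 5 * t)
right = np.sin(2 * np.pi * 5 * (t - 0.02))       # same waveform, ~20 ms delay
lags, r = normalized_xcorr(left, right)
best = lags[np.argmax(r)]                        # lag of maximum correlation
```

A high, sharp peak near zero lag indicates well-correlated activity between the two recordings; a broad or low peak corresponds to the poorly correlated activity reported for Twin E.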

  15. Coupling experimental data and a prototype model to probe the physical and chemical processes of 2,4-dinitroimidazole solid-phase thermal decomposition

    SciTech Connect

    Behrens, R.; Minier, L.; Bulusu, S.

    1998-12-31

The time-dependent, solid-phase thermal decomposition behavior of 2,4-dinitroimidazole (2,4-DNI) has been measured utilizing simultaneous thermogravimetric modulated beam mass spectrometry (STMBMS) methods. The decomposition products consist of gaseous and non-volatile polymeric products. The temporal behavior of the gas formation rates of the identified products indicates that the overall thermal decomposition process is complex. In isothermal experiments with 2,4-DNI in the solid phase, four distinguishing features are observed: (1) elevated rates of gas formation are observed during the early stages of the decomposition, which appear to be correlated with the presence of exogenous water in the sample; (2) this is followed by a period of relatively constant rates of gas formation; (3) next, the rates of gas formation accelerate, characteristic of an autocatalytic reaction; (4) finally, the 2,4-DNI is depleted and gaseous decomposition products continue to evolve at a decreasing rate. A physicochemical and mathematical model of the decomposition of 2,4-DNI has been developed and applied to the experimental results. The first generation of this model is described in this paper. Differences between the first generation of the model and the experimental data collected under different conditions suggest refinements for the next generation of the model.
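The staged behavior above — slow initial reaction, autocatalytic acceleration, then depletion — can be illustrated with a generic first-order-plus-autocatalytic rate law. This is a sketch of that class of kinetic model with invented rate constants, not the authors' fitted STMBMS model:

```python
import numpy as np

# Illustrative rate law: dx/dt = k1*(1 - x) + k2*x*(1 - x),
# where x is the reacted fraction; k1 seeds the reaction and the
# k2 term is autocatalytic (rate grows with the amount reacted).
k1, k2 = 0.001, 0.2              # invented constants
dt, n_steps = 0.1, 5000
x = np.empty(n_steps)
x[0] = 0.0
for i in range(1, n_steps):      # forward-Euler integration
    xi = x[i - 1]
    x[i] = xi + dt * (k1 * (1 - xi) + k2 * xi * (1 - xi))

rate = np.gradient(x, dt)
# The rate first grows (autocatalysis, feature 3) and then decays as
# the reactant is depleted (feature 4).
```

Plotting `rate` against time reproduces the qualitative shape of the measured gas-formation-rate curves: an accelerating phase followed by a decaying tail.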

  16. Photocatalytic Decomposition of Methylene Blue Over MIL-53(Fe) Prepared Using Microwave-Assisted Process Under Visible Light Irradiation.

    PubMed

    Trinh, Nguyen Duy; Hong, Seong-Soo

    2015-07-01

Iron-based MIL-53 crystals of uniform size were successfully synthesized using a microwave-assisted solvothermal method and characterized by XRD, FE-SEM, and DRS. We also investigated the photocatalytic activity of MIL-53(Fe) for the decomposition of methylene blue using H2O2 as an electron acceptor. The XRD and SEM results show that fully crystallized MIL-53(Fe) materials were obtained regardless of the preparation method. The DRS results show that MIL-53(Fe) samples prepared using the microwave-assisted process absorb light up to the visible region, and accordingly they exhibited high photocatalytic activity under visible light irradiation. The MIL-53(Fe) catalyst prepared by applying microwave irradiation twice showed the highest activity. PMID:26373158

  17. EEG Classification for Hybrid Brain-Computer Interface Using a Tensor Based Multiclass Multimodal Analysis Scheme

    PubMed Central

    Ji, Hongfei; Li, Jie; Lu, Rongrong; Gu, Rong; Cao, Lei; Gong, Xiaoliang

    2016-01-01

Electroencephalogram- (EEG-) based brain-computer interface (BCI) systems usually utilize one type of change in the dynamics of brain oscillations for control, such as event-related desynchronization/synchronization (ERD/ERS), steady state visual evoked potential (SSVEP), and P300 evoked potentials. There is a recent trend to detect more than one of these signals in one system to create a hybrid BCI. However, in this case, EEG data have typically been divided into groups and analyzed by separate processing procedures. As a result, the interactive effects were ignored when different types of BCI tasks were executed simultaneously. In this work, we propose an improved tensor based multiclass multimodal scheme especially for hybrid BCI, in which EEG signals are denoted as multiway tensors, a nonredundant rank-one tensor decomposition model is proposed to obtain nonredundant tensor components, a weighted Fisher criterion is designed to select multimodal discriminative patterns without ignoring the interactive effects, and support vector machine (SVM) is extended to multiclass classification. Experimental results suggest that the proposed scheme can not only identify the different changes in the dynamics of brain oscillations induced by different types of tasks but also capture the interactive effects of simultaneous tasks properly. Therefore, it has great potential use for hybrid BCI. PMID:26880873
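The rank-one tensor components central to this scheme can be computed, one component at a time, by higher-order power iteration (alternating updates of each mode factor). A minimal sketch on a planted rank-one 3-way tensor — the dimensions and data are invented, and this is a generic method rather than the authors' nonredundant decomposition:

```python
import numpy as np

def rank_one_tensor(a, b, c):
    """Outer product a ∘ b ∘ c as a 3-way array."""
    return np.einsum("i,j,k->ijk", a, b, c)

def rank_one_fit(T, n_iter=20):
    """Best rank-one approximation of a 3-way tensor via
    higher-order power iteration (alternate over the three modes)."""
    b = np.ones(T.shape[1]); b /= np.linalg.norm(b)
    c = np.ones(T.shape[2]); c /= np.linalg.norm(c)
    for _ in range(n_iter):
        a = np.einsum("ijk,j,k->i", T, b, c); a /= np.linalg.norm(a)
        b = np.einsum("ijk,i,k->j", T, a, c); b /= np.linalg.norm(b)
        c = np.einsum("ijk,i,j->k", T, a, b); c /= np.linalg.norm(c)
    lam = np.einsum("ijk,i,j,k->", T, a, b, c)   # component weight
    return lam, a, b, c

# Plant a rank-one component (think channels x frequency x time).
u = np.array([1.0, 2.0, 3.0]); u /= np.linalg.norm(u)
v = np.array([0.0, 1.0, 1.0, 2.0]); v /= np.linalg.norm(v)
w = np.array([2.0, 1.0]); w /= np.linalg.norm(w)
T = 5.0 * rank_one_tensor(u, v, w)
lam, a, b, c = rank_one_fit(T)                   # recovers weight 5 and u, v, w
```

On real EEG tensors one would deflate (subtract `lam * a∘b∘c`) and repeat to extract further components, with the factor vectors serving as the discriminative patterns.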

  18. EEG Classification for Hybrid Brain-Computer Interface Using a Tensor Based Multiclass Multimodal Analysis Scheme.

    PubMed

    Ji, Hongfei; Li, Jie; Lu, Rongrong; Gu, Rong; Cao, Lei; Gong, Xiaoliang

    2016-01-01

Electroencephalogram- (EEG-) based brain-computer interface (BCI) systems usually utilize one type of change in the dynamics of brain oscillations for control, such as event-related desynchronization/synchronization (ERD/ERS), steady state visual evoked potential (SSVEP), and P300 evoked potentials. There is a recent trend to detect more than one of these signals in one system to create a hybrid BCI. However, in this case, EEG data have typically been divided into groups and analyzed by separate processing procedures. As a result, the interactive effects were ignored when different types of BCI tasks were executed simultaneously. In this work, we propose an improved tensor based multiclass multimodal scheme especially for hybrid BCI, in which EEG signals are denoted as multiway tensors, a nonredundant rank-one tensor decomposition model is proposed to obtain nonredundant tensor components, a weighted Fisher criterion is designed to select multimodal discriminative patterns without ignoring the interactive effects, and support vector machine (SVM) is extended to multiclass classification. Experimental results suggest that the proposed scheme can not only identify the different changes in the dynamics of brain oscillations induced by different types of tasks but also capture the interactive effects of simultaneous tasks properly. Therefore, it has great potential use for hybrid BCI. PMID:26880873

  19. Bowen York tensors

    NASA Astrophysics Data System (ADS)

    Beig, Robert; Krammer, Werner

    2004-02-01

For a conformally flat 3-space, we derive a family of linear second-order partial differential operators which sends vectors into trace-free, symmetric 2-tensors. These maps, which are parametrized by conformal Killing vectors on the 3-space, are such that the divergence of the resulting tensor field depends only on the divergence of the original vector field. In particular, these maps send source-free electric fields into TT tensors. Moreover, if the original vector field is the Coulomb field on R^3\{0}, the resulting tensor fields on R^3\{0} are nothing but the family of TT tensors originally written by Bowen and York.
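For context, the Bowen-York TT tensor referred to here has a well-known closed form in the numerical-relativity literature. As an illustration (notation assumed: flat conformal metric, unit radial vector n^i = x^i/r, and linear momentum P^i):

```latex
A^{ij}_{P} \;=\; \frac{3}{2r^{2}}\,\Bigl[\,P^{i}n^{j} + P^{j}n^{i}
  - \bigl(\delta^{ij} - n^{i}n^{j}\bigr)\,P_{k}n^{k}\,\Bigr]
```

This tensor is symmetric and trace-free by construction, and on R^3\{0} it is divergence-free with respect to the flat metric, which is what makes it a TT tensor in this setting.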

  20. A low-cost polysilicon process based on the synthesis and decomposition of dichlorosilane

    NASA Technical Reports Server (NTRS)

    Mccormick, J. R.; Plahutnik, F.; Sawyer, D.; Arvidson, A.; Goldfarb, S.

    1982-01-01

    Major process steps of a dichlorosilane based chemical vapor deposition (CVD) process for the production of polycrystalline silicon have been evaluated. While an economic analysis of the process indicates that it is not capable of meeting JPL/DOE price objectives ($14.00/kg in 1980 dollars), product price in the $19.00/kg to $25.00/kg range may be achieved. Product quality has been evaluated and ascertained to be comparable to semiconductor-grade polycrystalline silicon. Solar cells fabricated from the material are also equivalent to those fabricated from semiconductor-grade polycrystalline silicon.

  1. A low-cost polysilicon process based on the synthesis and decomposition of dichlorosilane

    SciTech Connect

    McCormick, J.R.; Arvidson, A.; Goldfarb, S.; Plahutnik, F.; Sayer, D.

    1982-09-01

    Major process steps of a dichlorosilane based chemical vapor deposition (CVD) process for the production of polycrystalline silicon have been evaluated. While an economic analysis of the process indicates that it is not capable of meeting JPL/DOE price objectives ($14.00/kg in 1980 dollars), product price in the $19.00/kg to $25.00/kg range may be achieved. Product quality has been evaluated and ascertained to be comparable to semiconductor-grade polycrystalline silicon. Solar cells fabricated from the material are also equivalent to those fabricated from semiconductor-grade polycrystalline silicon.

  2. Fundamental phenomena on fuel decomposition and boundary-layer combustion processes with applications to hybrid rocket motors

    NASA Astrophysics Data System (ADS)

    Kuo, Kenneth K.; Lu, Yeu-Cherng; Chiaverini, Martin J.; Harting, George C.; Johnson, David K.; Serin, Nadir

The experimental study on the fundamental processes involved in fuel decomposition and boundary-layer combustion in hybrid rocket motors is continuously being conducted at the High Pressure Combustion Laboratory of The Pennsylvania State University. This research will provide a useful engineering technology base in the development of hybrid rocket motors as well as a fundamental understanding of the complex processes involved in hybrid propulsion. A high-pressure, 2-D slab motor has been designed, manufactured, and utilized for conducting seven test firings using HTPB fuel processed at PSU. A total of 20 fuel slabs have been received from the McDonnell Douglas Aerospace Corporation. Ten of these fuel slabs contain an array of fine-wire thermocouples for measuring solid fuel surface and subsurface temperatures. Diagnostic instrumentation used in the tests includes high-frequency pressure transducers for measuring static and dynamic motor pressures and fine-wire thermocouples for measuring solid fuel surface and subsurface temperatures. The ultrasonic pulse-echo technique as well as a real-time x-ray radiography system have been used to obtain independent measurements of instantaneous solid fuel regression rates.

  3. Achieving Low Overpotential Li-O₂ Battery Operations by Li₂O₂ Decomposition through One-Electron Processes.

    PubMed

    Xie, Jin; Dong, Qi; Madden, Ian; Yao, Xiahui; Cheng, Qingmei; Dornath, Paul; Fan, Wei; Wang, Dunwei

    2015-12-01

As a promising high-capacity energy storage technology, Li-O2 batteries face two critical challenges, poor cycle lifetime and low round-trip efficiency, both of which are connected to the high overpotentials. The problem is particularly acute during recharge, where the reactions typically follow two-electron mechanisms that are inherently slow. Here we present a strategy that can significantly reduce recharge overpotentials. Our approach seeks to promote Li2O2 decomposition by one-electron processes, and the key is to stabilize the important intermediate of superoxide species. With the introduction of a highly polarizing electrolyte, we observe that recharge processes are successfully switched from a two-electron pathway to a single-electron one. While a similar one-electron route has been reported for the discharge processes, it has rarely been described for recharge except for the initial stage, due to the poor mobilities of surface-bound superoxide ions (O2(-)), a necessary intermediate for the mechanism. Key to our observation is the solvation of O2(-) by an ionic liquid electrolyte (PYR14TFSI). Recharge overpotentials as low as 0.19 V at 100 mA/g(carbon) are measured. PMID:26583874

  4. Fundamental phenomena on fuel decomposition and boundary-layer combustion processes with applications to hybrid rocket motors

    NASA Technical Reports Server (NTRS)

    Kuo, Kenneth K.; Lu, Yeu-Cherng; Chiaverini, Martin J.; Harting, George C.; Johnson, David K.; Serin, Nadir

    1995-01-01

The experimental study on the fundamental processes involved in fuel decomposition and boundary-layer combustion in hybrid rocket motors is continuously being conducted at the High Pressure Combustion Laboratory of The Pennsylvania State University. This research will provide a useful engineering technology base in the development of hybrid rocket motors as well as a fundamental understanding of the complex processes involved in hybrid propulsion. A high-pressure, 2-D slab motor has been designed, manufactured, and utilized for conducting seven test firings using HTPB fuel processed at PSU. A total of 20 fuel slabs have been received from the McDonnell Douglas Aerospace Corporation. Ten of these fuel slabs contain an array of fine-wire thermocouples for measuring solid fuel surface and subsurface temperatures. Diagnostic instrumentation used in the tests includes high-frequency pressure transducers for measuring static and dynamic motor pressures and fine-wire thermocouples for measuring solid fuel surface and subsurface temperatures. The ultrasonic pulse-echo technique as well as a real-time x-ray radiography system have been used to obtain independent measurements of instantaneous solid fuel regression rates.

  5. On the Decomposition of Martensite During Bake Hardening of Thermomechanically Processed TRIP Steels

    SciTech Connect

    Pereloma, E. V.; Miller, Michael K; Timokhina, I. B.

    2008-01-01

    Thermomechanically processed (TMP) CMnSi transformation-induced plasticity (TRIP) steels with and without additions of Nb, Mo, or Al were subjected to prestraining and bake hardening. Atom probe tomography (APT) revealed the presence of fine C-rich clusters in the martensite of all studied steels after the thermomechanical processing. After bake hardening, the formation of iron carbides, containing from 25 to 90 at. pct C, was observed. The evolution of iron carbide compositions was independent of steel composition and was a function of carbide size.

  6. Towards a physical understanding of stratospheric cooling under global warming through a process-based decomposition method

    NASA Astrophysics Data System (ADS)

    Yang, Yang; Ren, R.-C.; Cai, Ming

    2016-02-01

    The stratosphere has been cooling under global warming, the causes of which are not yet well understood. This study applied a process-based decomposition method (CFRAM; Coupled Surface-Atmosphere Climate Feedback Response Analysis Method) to the simulation results of a Coupled Model Intercomparison Project, phase 5 (CMIP5) model (CCSM4; Community Climate System Model, version 4), to identify the radiative and non-radiative processes responsible for the stratospheric cooling. By focusing on the long-term stratospheric temperature changes between the "historical run" and the 8.5 W m-2 Representative Concentration Pathway (RCP8.5) scenario, this study demonstrates that changes in radiation due to CO2, ozone and water vapor are the main drivers of stratospheric cooling in both winter and summer. They contribute to the cooling by reducing the net radiative energy (mainly downward radiation) received by the stratospheric layer. In terms of the global average, their contributions are around -5, -1.5, and -1 K, respectively. However, the observed stratospheric cooling is much weaker than the cooling by radiative processes alone, because changes in atmospheric dynamic processes act to strongly mitigate the radiative cooling, yielding roughly 4 K of warming in the global average. In particular, the much stronger/weaker dynamic warming in the northern/southern winter extratropics is associated with an increase in planetary-wave activity in the northern winter hemisphere, but a slight decrease in the southern winter hemisphere, under global warming. More importantly, although radiative processes dominate the stratospheric cooling, the spatial patterns are largely determined by the non-radiative effects of dynamic processes.
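    The temperature attribution above amounts to a simple budget: the radiative terms sum to a cooling that the dynamic term partially offsets. A minimal sketch of that closure, using only the approximate global-mean values quoted in the abstract:

```python
# Back-of-envelope closure of the global-mean stratospheric temperature
# budget described in the abstract. The partial contributions are the
# approximate values quoted there; the net change is simply their sum.
contributions_K = {
    "CO2": -5.0,          # radiative
    "ozone": -1.5,        # radiative
    "water_vapor": -1.0,  # radiative
    "dynamics": +4.0,     # non-radiative (mitigates the cooling)
}

radiative = sum(v for k, v in contributions_K.items() if k != "dynamics")
net = sum(contributions_K.values())

print(f"radiative-only cooling: {radiative} K")   # -7.5 K
print(f"net change with dynamics: {net} K")       # -3.5 K
```

    The point of the decomposition is visible in the two printed numbers: dynamics roughly halve the purely radiative cooling.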

  7. Decomposition of cyclohexanoic acid by the UV/H2O2 process under various conditions.

    PubMed

    Afzal, Atefeh; Drzewicz, Przemysław; Martin, Jonathan W; Gamal El-Din, Mohamed

    2012-06-01

    Naphthenic acids (NAs) are a broad range of alicyclic and aliphatic compounds that are persistent and contribute to the toxicity of oil sands process affected water (OSPW). In this investigation, cyclohexanoic acid (CHA) was selected as a model naphthenic acid, and its oxidation was investigated using advanced oxidation employing a low-pressure ultraviolet light in the presence of hydrogen peroxide (UV/H(2)O(2) process). The effects of two pHs and common OSPW constituents, such as chloride (Cl(-)) and carbonate (CO(3)(2-)), were investigated in ultrapure water. The optimal molar ratio of H(2)O(2) to CHA in the treatment process was also investigated. The pH had no significant effect on the degradation, nor on the formation and degradation of byproducts in ultrapure water. The presence of CO(3)(2-) or Cl(-) significantly decreased the CHA degradation rate. The presence of 700 mg/L CO(3)(2-) or 500 mg/L Cl(-), typical concentrations in OSPW, caused a 55% and 23% decrease in the pseudo-first order degradation rate constants for CHA, respectively. However, no change in byproducts, or in the degradation trend of byproducts, was observed in the presence of scavengers. A real OSPW matrix also had a significant impact by decreasing the CHA degradation rate, such that by spiking CHA into the OSPW, the degradation rate decreased up to 82% relative to that in ultrapure water. The results of this study show that the UV/H(2)O(2) AOP is capable of degrading CHA as a model NA in ultrapure water. However, in real applications, the effect of radical scavengers should be taken into account to achieve the best performance of the process. PMID:22521165
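    The scavenger effect reported above can be illustrated with pseudo-first-order kinetics, C(t) = C0·exp(-kt). In the sketch below, the baseline rate constant is an assumed illustrative value, not the measured one; only the 55% and 23% reductions come from the abstract:

```python
import math

def concentration(c0, k, t):
    """Pseudo-first-order decay C(t) = C0 * exp(-k * t)."""
    return c0 * math.exp(-k * t)

def half_life(k):
    """Half-life of a first-order process: t1/2 = ln(2) / k."""
    return math.log(2) / k

k_pure = 0.10                        # min^-1, hypothetical baseline in ultrapure water
k_carbonate = k_pure * (1 - 0.55)    # 55% lower with 700 mg/L CO3(2-)
k_chloride = k_pure * (1 - 0.23)     # 23% lower with 500 mg/L Cl-

for label, k in [("ultrapure", k_pure),
                 ("with CO3(2-)", k_carbonate),
                 ("with Cl-", k_chloride)]:
    print(f"{label}: k = {k:.3f} min^-1, t1/2 = {half_life(k):.1f} min")
```

    A slower rate constant translates directly into a longer half-life, which is why scavengers matter for sizing a real treatment process.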

  8. Ozone decomposition

    PubMed Central

    Batakliev, Todor; Georgiev, Vladimir; Anachkov, Metody; Rakovsky, Slavcho

    2014-01-01

    Catalytic ozone decomposition is of great significance because ozone is a toxic substance commonly found or generated in human environments (aircraft cabins, offices with photocopiers, laser printers, sterilizers). Considerable work has been done on ozone decomposition reported in the literature. This review provides a comprehensive summary of the literature, concentrating on analysis of the physico-chemical properties, synthesis and catalytic decomposition of ozone. This is supplemented by a review on kinetics and catalyst characterization which ties together the previously reported results. Noble metals and oxides of transition metals have been found to be the most active substances for ozone decomposition. The high price of precious metals stimulated the use of metal oxide catalysts and particularly the catalysts based on manganese oxide. It has been determined that the kinetics of ozone decomposition is first order. A mechanism of the reaction of catalytic ozone decomposition is discussed, based on detailed spectroscopic investigations of the catalytic surface, showing the existence of peroxide and superoxide surface intermediates. PMID:26109880

  9. Ozone decomposition.

    PubMed

    Batakliev, Todor; Georgiev, Vladimir; Anachkov, Metody; Rakovsky, Slavcho; Zaikov, Gennadi E

    2014-06-01

    Catalytic ozone decomposition is of great significance because ozone is a toxic substance commonly found or generated in human environments (aircraft cabins, offices with photocopiers, laser printers, sterilizers). Considerable work has been done on ozone decomposition reported in the literature. This review provides a comprehensive summary of the literature, concentrating on analysis of the physico-chemical properties, synthesis and catalytic decomposition of ozone. This is supplemented by a review on kinetics and catalyst characterization which ties together the previously reported results. Noble metals and oxides of transition metals have been found to be the most active substances for ozone decomposition. The high price of precious metals stimulated the use of metal oxide catalysts and particularly the catalysts based on manganese oxide. It has been determined that the kinetics of ozone decomposition is first order. A mechanism of the reaction of catalytic ozone decomposition is discussed, based on detailed spectroscopic investigations of the catalytic surface, showing the existence of peroxide and superoxide surface intermediates. PMID:26109880

  10. Physical and chemical processes of low-temperature plasma decomposition of liquids under ultrasonic treatment

    NASA Astrophysics Data System (ADS)

    Bulychev, N. A.; Kazaryan, M. A.

    2015-12-01

    In this work, a low-temperature plasma initiated in liquid media between electrodes has been shown to be able to decompose hydrogen-containing organic molecules, yielding gaseous products with a hydrogen volume fraction above 90% (according to gas chromatography data). Preliminary evaluations of the energy efficiency, calculated from the combustion energies of hydrogen and the initial liquids and from the electrical energy consumption, give values of about 60-70% depending on the composition of the initial liquids. Theoretical calculations of the voltage and current values for this process have also been performed and are in good agreement with the experimental data.

  11. General route for the decomposition of InAs quantum dots during the capping process

    NASA Astrophysics Data System (ADS)

    González, D.; Reyes, D. F.; Utrilla, A. D.; Ben, T.; Braza, V.; Guzman, A.; Hierro, A.; Ulloa, J. M.

    2016-03-01

    The effect of the capping process on the morphology of InAs/GaAs quantum dots (QDs) by using different GaAs-based capping layers (CLs), ranging from strain reduction layers to strain compensating layers, has been studied by transmission microscopic techniques. For this, we have measured simultaneously the height and diameter in buried and uncapped QDs covering populations of hundreds of QDs that are statistically reliable. First, the uncapped QD population evolves in all cases from a pyramidal shape into a more homogeneous distribution of buried QDs with a spherical-dome shape, despite the different mechanisms implicated in the QD capping. Second, the shape of the buried QDs depends only on the final QD size, where the radius of curvature is a function of the base diameter, independently of the CL composition and growth conditions. An asymmetric evolution of the QDs’ morphology takes place, in which the QD height and base diameter are modified in the amount required to adopt a similar stable shape characterized by an average aspect ratio of 0.21. Our results contradict the traditional model of QD material redistribution from the apex to the base and point to a different universal behavior of the overgrowth processes in self-organized InAs QDs.

  12. General route for the decomposition of InAs quantum dots during the capping process.

    PubMed

    González, D; Reyes, D F; Utrilla, A D; Ben, T; Braza, V; Guzman, A; Hierro, A; Ulloa, J M

    2016-03-29

    The effect of the capping process on the morphology of InAs/GaAs quantum dots (QDs) by using different GaAs-based capping layers (CLs), ranging from strain reduction layers to strain compensating layers, has been studied by transmission microscopic techniques. For this, we have measured simultaneously the height and diameter in buried and uncapped QDs covering populations of hundreds of QDs that are statistically reliable. First, the uncapped QD population evolves in all cases from a pyramidal shape into a more homogeneous distribution of buried QDs with a spherical-dome shape, despite the different mechanisms implicated in the QD capping. Second, the shape of the buried QDs depends only on the final QD size, where the radius of curvature is a function of the base diameter, independently of the CL composition and growth conditions. An asymmetric evolution of the QDs' morphology takes place, in which the QD height and base diameter are modified in the amount required to adopt a similar stable shape characterized by an average aspect ratio of 0.21. Our results contradict the traditional model of QD material redistribution from the apex to the base and point to a different universal behavior of the overgrowth processes in self-organized InAs QDs. PMID:26891164

  13. Temperature Adaptations in the Terminal Processes of Anaerobic Decomposition of Yellowstone National Park and Icelandic Hot Spring Microbial Mats

    PubMed Central

    Sandbeck, Kenneth A.; Ward, David M.

    1982-01-01

    The optimum temperatures for methanogenesis in microbial mats of four neutral to alkaline, low-sulfate hot springs in Yellowstone National Park were between 50 and 60°C, which was 13 to 23°C lower than the upper temperature for mat development. Significant methanogenesis at 65°C was only observed in one of the springs. Methane production in samples collected at a 51 or 62°C site in Octopus Spring was increased by incubation at higher temperatures and was maximal at 70°C. Strains of Methanobacterium thermoautotrophicum were isolated from 50, 55, 60, and 65°C sites in Octopus Spring at the temperatures of the collection sites. The optimum temperature for growth and methanogenesis of each isolate was 65°C. Similar results were found for the potential rate of sulfate reduction in an Icelandic hot spring microbial mat in which sulfate reduction dominated methane production as a terminal process in anaerobic decomposition. The potential rate of sulfate reduction along the thermal gradient of the mat was greatest at 50°C, but incubation at 60°C of the samples obtained at 50°C increased the rate. Adaptation to different mat temperatures, common among various microorganisms and processes in the mats, did not appear to occur in the processes and microorganisms which terminate the anaerobic food chain. Other factors must explain why the maximal rates of these processes are restricted to moderate temperatures of the mat ecosystem. PMID:16346109

  14. Microscopic Approaches to Decomposition and Burning Processes of a Micro Plastic Resin Particle under Abrupt Heating

    NASA Astrophysics Data System (ADS)

    Ohiwa, Norio; Ishino, Yojiro; Yamamoto, Atsunori; Yamakita, Ryuji

    To elucidate the feasibility and practical potential of thermal recycling of waste plastic resin from a basic, microscopic viewpoint, a series of abrupt heating processes of a spherical micro plastic particle having a diameter of about 200 μm is observed when it is abruptly exposed to hot oxidizing combustion gas. Three ingenious devices are introduced, and two typical plastic resins, polyethylene terephthalate and polyethylene, are used. In this paper, the dependence of the internal and external appearance of residual plastic embers on the heating time and the ingredients of the plastic resins is optically analyzed, along with the appearance of internal micro bubbling, multiple micro explosions and jets, and micro diffusion flames during abrupt heating. Based on temporal variations of the surface area of a micro plastic particle, the apparent burning rate constant is also evaluated and compared with those of well-known volatile liquid fuels.

  15. Decomposition of lignin from sugar cane bagasse during ozonation process monitored by optical and mass spectrometries.

    PubMed

    Souza-Corrêa, J A; Ridenti, M A; Oliveira, C; Araújo, S R; Amorim, J

    2013-03-21

    Mass spectrometry was used to monitor neutral chemical species from sugar cane bagasse that could volatilize during the bagasse ozonation process. Lignin fragments and some radicals liberated by direct ozone reaction with the biomass structure were detected. Ozone density was monitored during the ozonation by optical absorption spectroscopy. The optical results indicated that the ozone interaction with the bagasse material was better for bagasse particle sizes less than or equal to 0.5 mm. Both techniques have shown that the best condition for the ozone diffusion in the bagasse was at 50% of its moisture content. In addition, Fourier transform infrared spectroscopy (FTIR) and scanning electron microscopy (SEM) were employed to analyze the lignin bond disruptions and morphology changes of the bagasse surface that occurred due to the ozonolysis reactions as well. Appropriate chemical characterization of the lignin content in bagasse before and after its ozonation was also carried out. PMID:23441875

  16. Decomposition techniques

    USGS Publications Warehouse

    Chao, T.T.; Sanzolone, R.F.

    1992-01-01

    Sample decomposition is a fundamental and integral step in the procedure of geochemical analysis. It is often the limiting factor to sample throughput, especially with the recent application of fast, modern multi-element measurement instrumentation. The complexity of geological materials makes it necessary to choose the sample decomposition technique that is compatible with the specific objective of the analysis. When selecting a decomposition technique, consideration should be given to the chemical and mineralogical characteristics of the sample, elements to be determined, precision and accuracy requirements, sample throughput, technical capability of personnel, and time constraints. This paper addresses these concerns and discusses the attributes and limitations of many techniques of sample decomposition along with examples of their application to geochemical analysis. The chemical properties of reagents as to their function as decomposition agents are also reviewed. The section on acid dissolution techniques addresses the various inorganic acids that are used individually or in combination in both open and closed systems. Fluxes used in sample fusion are discussed. The promising microwave-oven technology and the emerging field of automation are also examined. A section on applications highlights the use of decomposition techniques for the determination of Au, platinum group elements (PGEs), Hg, U, hydride-forming elements, rare earth elements (REEs), and multi-elements in geological materials. Partial dissolution techniques used for geochemical exploration, which have been treated in detail elsewhere, are not discussed here; nor are fire-assaying for noble metals and decomposition techniques for X-ray fluorescence or nuclear methods discussed. © 1992.

  17. Kinetic Analysis of Isothermal Decomposition Process of Sodium Bicarbonate Using the Weibull Probability Function—Estimation of Density Distribution Functions of the Apparent Activation Energies

    NASA Astrophysics Data System (ADS)

    Janković, Bojan

    2009-10-01

    The decomposition process of sodium bicarbonate (NaHCO3) has been studied by thermogravimetry under isothermal conditions at four different operating temperatures (380 K, 400 K, 420 K, and 440 K). It was found that the experimental integral and differential conversion curves at the different operating temperatures can be successfully described by the isothermal Weibull distribution function with a unique value of the shape parameter (β = 1.07). It was also established that the Weibull distribution parameters (β and η) are independent of the operating temperature. Using the integral and differential (Friedman) isoconversional methods, in the conversion (α) range of 0.20 ≤ α ≤ 0.80, the apparent activation energy (Ea) value was approximately constant (Ea,int = 95.2 kJ mol-1 and Ea,diff = 96.6 kJ mol-1, respectively). The values of Ea calculated by both isoconversional methods are in good agreement with the value of Ea evaluated from the Arrhenius equation (94.3 kJ mol-1), which was expressed through the scale distribution parameter (η). The Málek isothermal procedure was used to estimate the kinetic model for the investigated decomposition process. It was found that the two-parameter Šesták-Berggren (SB) autocatalytic model best describes the NaHCO3 decomposition process, with the conversion function f(α) = α^0.18(1-α)^1.19. It was also concluded that the calculated density distribution functions of the apparent activation energies (ddf(Ea)'s) do not depend on the operating temperature and exhibit highly symmetrical behavior (shape factor = 1.00). The obtained isothermal decomposition results were compared with the corresponding results for the nonisothermal decomposition process of NaHCO3.
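    The two kinetic descriptions above can be sketched directly: the isothermal Weibull conversion function with the reported shape parameter β = 1.07, and the Šesták-Berggren model f(α) = α^0.18(1-α)^1.19. The scale parameter η below is an assumed illustrative value, not one fitted in the paper:

```python
import math

BETA = 1.07   # shape parameter reported in the abstract
ETA = 30.0    # min, hypothetical scale parameter for illustration

def weibull_conversion(t, beta=BETA, eta=ETA):
    """Isothermal Weibull conversion alpha(t) = 1 - exp(-(t/eta)^beta)."""
    return 1.0 - math.exp(-((t / eta) ** beta))

def sestak_berggren(alpha, m=0.18, n=1.19):
    """Two-parameter SB autocatalytic model f(alpha) = alpha^m * (1-alpha)^n."""
    return (alpha ** m) * ((1.0 - alpha) ** n)

# alpha(t) rises monotonically from 0 toward 1:
times = [0.0, 10.0, 30.0, 60.0, 120.0]
alphas = [weibull_conversion(t) for t in times]
print([round(a, 3) for a in alphas])

# f(alpha) vanishes at both ends and peaks in between, as an
# autocatalytic model should:
print(round(sestak_berggren(0.5), 4))
```

    Note that f(α) being zero at α = 0 and α = 1 is what distinguishes an autocatalytic (sigmoidal) model from simple reaction-order models.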

  18. Joint application of a statistical optimization process and Empirical Mode Decomposition to Magnetic Resonance Sounding Noise Cancelation

    NASA Astrophysics Data System (ADS)

    Ghanati, Reza; Fallahsafari, Mahdi; Hafizi, Mohammad Kazem

    2014-12-01

    The signal quality of Magnetic Resonance Sounding (MRS) measurements is a crucial criterion. The accuracy of the estimation of the signal parameters (i.e. E0 and T2*) strongly depends on the amplitude and conditions of ambient electromagnetic interference at the site of investigation. In this paper, to enhance performance in noisy environments, a two-step noise cancelation approach based on Empirical Mode Decomposition (EMD) and a statistical method is proposed. In the first stage, the noisy signal is adaptively decomposed into intrinsic oscillatory components called intrinsic mode functions (IMFs) by means of the EMD algorithm. The noisy IMFs are then detected by an automatic procedure, and the partly de-noised signal is reconstructed from the noise-free IMFs. In the second stage, the signal obtained in the first stage enters an optimization process to cancel the remnant noise and, consequently, to estimate the signal parameters. The strategy is tested on a synthetic MRS signal contaminated with Gaussian noise, spiky events and harmonic noise, and on real data. By successively applying the proposed steps, we can remove the noise from the signal to a large extent, and the performance indexes, particularly the signal-to-noise ratio, increase significantly.
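    As a rough illustration of the second (parameter-estimation) stage, the sketch below fits the mono-exponential MRS envelope e(t) = E0·exp(-t/T2*) to a noisy synthetic record by grid-searching T2* with a closed-form least-squares amplitude. All numerical values are assumptions for illustration, not parameters from the paper, and this stands in for whatever optimizer the authors actually use:

```python
import math
import random

random.seed(0)

# Synthetic "denoised" MRS envelope with residual Gaussian noise.
E0_TRUE, T2_TRUE = 100.0, 0.15   # nV and seconds, assumed values
times = [i * 0.005 for i in range(200)]
signal = [E0_TRUE * math.exp(-t / T2_TRUE) + random.gauss(0, 2.0) for t in times]

def fit_envelope(times, signal):
    """Grid-search T2*; for each candidate, E0 has a closed-form
    linear least-squares solution against the exponential basis."""
    best = None
    for k in range(1, 101):
        t2 = 0.01 * k
        basis = [math.exp(-t / t2) for t in times]
        e0 = sum(b * s for b, s in zip(basis, signal)) / sum(b * b for b in basis)
        resid = sum((s - e0 * b) ** 2 for b, s in zip(basis, signal))
        if best is None or resid < best[0]:
            best = (resid, e0, t2)
    return best[1], best[2]

e0_hat, t2_hat = fit_envelope(times, signal)
print(f"E0 ~ {e0_hat:.1f}, T2* ~ {t2_hat:.2f} s")
```

    Separating the linear parameter (E0) from the nonlinear one (T2*) keeps the search one-dimensional and robust, which is why this pattern is common for exponential fits.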

  19. Comparison of the thermal decomposition processes of several aminoalcohol-based ZnO inks with one containing ethanolamine

    NASA Astrophysics Data System (ADS)

    Gómez-Núñez, Alberto; Roura, Pere; López, Concepción; Vilà, Anna

    2016-09-01

    Four inks for the production of ZnO semiconducting films have been prepared with zinc acetate dihydrate as precursor salt and one among the following aminoalcohols: aminopropanol (APr), aminomethyl butanol (AMB), aminophenol (APh) and aminobenzyl alcohol (AB) as stabilizing agent. Their thermal decomposition process has been analyzed in situ by thermogravimetric analysis (TGA), differential scanning calorimetry (DSC) and evolved gas analysis (EGA), whereas the solid product has been analysed ex-situ by X-ray diffraction (XRD) and infrared spectroscopy (IR). Although, except for the APh ink, crystalline ZnO is already obtained at 300 °C, the films contain an organic residue that evolves at higher temperature in the form of a large variety of nitrogen-containing cyclic compounds. The results indicate that APr can be a better stabilizing agent than ethanolamine (EA). It gives larger ZnO crystal sizes with similar carbon content. However, a common drawback of all the amino stabilizers (EA included) is that nitrogen atoms have not been completely removed from the ZnO film at the highest temperature of our experiments (600 °C).

  20. Correlation of Fe/Cr phase decomposition process and age-hardening in Fe-15Cr ferritic alloys

    NASA Astrophysics Data System (ADS)

    Chen, Dongsheng; Kimura, Akihiko; Han, Wentuo

    2014-12-01

    The effects of thermal aging on the microstructure and mechanical properties of Fe-15Cr ferritic model alloys were investigated by TEM examinations, micro-hardness measurements and tensile tests. The materials used in this work were Fe-15Cr, Fe-15Cr-C and Fe-15Cr-X alloys, where X refers to Si, Mn and Ni to simulate a pressure vessel steel. Specimens were isothermally aged at 475 °C for up to 5000 h. Thermal aging causes a significant increase in hardness and strength. Almost twice as much hardening is required for embrittlement of Fe-15Cr-X as for Fe-15Cr. The age-hardening is mainly due to the formation of Cr-rich α′ precipitates, while the addition of minor elements has a small effect on the saturation level of age-hardening. The correlation of the phase decomposition process and age-hardening in the Fe-15Cr alloy was interpreted by dispersion strengthening models.

  1. Morphology and phase modifications of MoO3 obtained by metallo-organic decomposition processes

    SciTech Connect

    Barros Santos, Elias de; Martins de Souza e Silva, Juliana; Odone Mazali, Italo

    2010-11-15

    Molybdenum oxide samples were prepared using different temperatures and atmospheric conditions by metallo-organic decomposition processes and were characterized by XRD, SEM and DRS UV/Vis and Raman spectroscopies. Variation in the synthesis conditions resulted in solids with different morphologies and oxygen vacancy concentrations. Intense characteristic Raman bands of crystalline orthorhombic α-MoO3, occurring at 992 cm-1 and 820 cm-1, are observed, and their shifts can be related to the differences in the structure of the solids obtained. The sample obtained under nitrogen flow at 1073 K is a phase mixture of orthorhombic α-MoO3 and monoclinic β-MoO3. The characterization results suggest that the molybdenum oxide samples are non-stoichiometric and are described as MoOx with x < 2.94. Variations in the reaction conditions make it possible to tune the number of oxygen defects and the band gap of the final material.

  2. Phenol Decomposition Process by Pulsed-discharge Plasma above a Water Surface in Oxygen and Argon Atmosphere

    NASA Astrophysics Data System (ADS)

    Shiota, Haruki; Itabashi, Hideyuki; Satoh, Kohki; Itoh, Hidenori

    By-products formed from phenol by exposure to pulsed-discharge plasma above a phenol aqueous solution are investigated by gas chromatography-mass spectrometry, and the decomposition process of phenol is deduced. When Ar is used as the background gas, catechol, hydroquinone and 4-hydroxy-2-cyclohexen-1-one are produced, and no O3 is detected; therefore, active species such as OH, O, HO2 and H2O2, which are produced from H2O in the discharge, can convert phenol into those by-products. When O2 is used as the background gas, formic acid, maleic acid, succinic acid and 4,6-dihydroxy-2,4-hexadienoic acid are produced in addition to catechol and hydroquinone. O3 is produced in the discharge plasma, so that phenol is probably decomposed into 4,6-dihydroxy-2,4-hexadienoic acid by a 1,3-dipolar addition reaction with O3, and 4,6-dihydroxy-2,4-hexadienoic acid can then be decomposed into formic acid, maleic acid and succinic acid by further 1,3-dipolar addition reactions with O3.

  3. The structure of correlation tensors in homogeneous anisotropic turbulence

    NASA Technical Reports Server (NTRS)

    Matthaeus, W. H.; Smith, C.

    1980-01-01

    The study of turbulence with spatially homogeneous but anisotropic statistical properties has applications in space physics and laboratory plasma physics. The first step in the systematic study of such fluctuations is the elucidation of the kinematic properties of the relevant statistical objects, which are the correlation tensors. The theory of isotropic tensors, developed by Robertson, Chandrasekhar and others, is reviewed and extended to cover the general case of turbulence with a pseudo-vector preferred direction, without assuming mirror reflection invariance. Attention is focused on two point correlation functions and it is shown that the form of the decomposition into proper and pseudo-tensor contributions is restricted by the homogeneity requirement. It is also shown that the vector and pseudo-vector preferred direction cases yield different results. An explicit form of the two point correlation tensor is presented which is appropriate for analyzing interplanetary magnetic fluctuations. A procedure for determining the magnetic helicity from experimental data is presented.
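    A minimal numerical illustration of the central object above, the two-point correlation tensor R_ij(r) = <u_i(x) u_j(x + r)>, estimated for a synthetic statistically homogeneous periodic two-component field. All field parameters here are invented for the sketch and carry no physical meaning:

```python
import math
import random

random.seed(0)

# Synthetic homogeneous periodic field: two sinusoidal components with
# random phases plus weak Gaussian noise, sampled on a 1-D grid.
n = 4096
dx = 2 * math.pi / n
xs = [i * dx for i in range(n)]
phase1 = random.uniform(0, 2 * math.pi)
phase2 = random.uniform(0, 2 * math.pi)
u = [
    [math.cos(3 * x + phase1) + 0.1 * random.gauss(0, 1) for x in xs],
    [math.sin(3 * x + phase2) + 0.1 * random.gauss(0, 1) for x in xs],
]

def correlation_tensor(u, lag):
    """Estimate R_ij at a single lag (in grid points), using periodicity:
    R_ij(r) = <u_i(x) u_j(x + r)> averaged over all grid points x."""
    m = len(u[0])
    return [[sum(u[i][k] * u[j][(k + lag) % m] for k in range(m)) / m
             for j in range(2)] for i in range(2)]

R0 = correlation_tensor(u, 0)   # equal-point tensor: covariance of components
print([[round(v, 3) for v in row] for row in R0])
```

    The equal-point tensor R_ij(0) reduces to the component covariance; nonzero lags would probe the spatial structure that the kinematic theory constrains.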

  4. Fuel decomposition and boundary-layer combustion processes of hybrid rocket motors

    NASA Technical Reports Server (NTRS)

    Chiaverini, Martin J.; Harting, George C.; Lu, Yeu-Cherng; Kuo, Kenneth K.; Serin, Nadir; Johnson, David K.

    1995-01-01

    Using a high-pressure, two-dimensional hybrid motor, an experimental investigation was conducted on fundamental processes involved in hybrid rocket combustion. HTPB (Hydroxyl-terminated Polybutadiene) fuel cross-linked with diisocyanate was burned with GOX under various operating conditions. Large-amplitude pressure oscillations were encountered in earlier test runs. After identifying the source of instability and decoupling the GOX feed-line system and combustion chamber, the pressure oscillations were drastically reduced from +/-20% of the localized mean pressure to an acceptable range of +/-1.5%. Embedded fine-wire thermocouples indicated that the surface temperature of the burning fuel was around 1000 K depending upon axial locations and operating conditions. Also, except near the leading-edge region, the subsurface thermal wave profiles in the upstream locations are thicker than those in the downstream locations since the solid-fuel regression rate, in general, increases with distance along the fuel slab. The recovered solid fuel slabs in the laminar portion of the boundary layer exhibited smooth surfaces, indicating the existence of a liquid melt layer on the burning fuel surface in the upstream region. After the transition section, which displayed distinct transverse striations, the surface roughness pattern became quite random and very pronounced in the downstream turbulent boundary-layer region. Both real-time X-ray radiography and ultrasonic pulse-echo techniques were used to determine the instantaneous web thickness burned and instantaneous solid-fuel regression rates over certain portions of the fuel slabs. Globally averaged and axially dependent but time-averaged regression rates were also obtained and presented.

  5. Exotic species as modifiers of ecosystem processes: Litter decomposition in native and invaded secondary forests of NW Argentina

    NASA Astrophysics Data System (ADS)

    Aragón, Roxana; Montti, Lia; Ayup, María Marta; Fernández, Romina

    2014-01-01

    Invasions of exotic tree species can cause profound changes in community composition and structure, and may even cause legacy effects on nutrient cycling via litter production. In this study, we compared leaf litter decomposition of two invasive exotic trees (Ligustrum lucidum and Morus sp.) and two dominant native trees (Cinnamomum porphyria and Cupania vernalis) in native and invaded (Ligustrum-dominated) forest stands in NW Argentina. We measured leaf attributes and environmental characteristics in invaded and native stands to isolate the effects of litter quality and habitat characteristics. Species differed in their decomposition rates and, as predicted by their different colonization status (pioneer vs. late successional), exotic species decayed more rapidly than native ones. Invasion by L. lucidum modified environmental attributes by reducing soil humidity. Decomposition constants (k) tended to be slightly lower (-5%) for all species in invaded stands. High SLA, low tensile strength, and low C:N distinguish Morus sp. from the native species and explain its higher decomposition rate. Contrary to our expectations, L. lucidum leaf attributes were similar to those of native species. Decomposition rates also differed between the two exotic species (35% higher in Morus sp.), presumably due to leaf attributes and colonization status. Given the high decomposition rate of L. lucidum litter (more than 6 times that of natives) we expect an acceleration of nutrient circulation at the ecosystem level in Ligustrum-dominated stands. This may occur in spite of the modified environmental conditions that are associated with L. lucidum invasion.

  6. Investigation of thermal decomposition as the kinetic process that causes the loss of crystalline structure in sucrose using a chemical analysis approach (part II).

    PubMed

    Lee, Joo Won; Thomas, Leonard C; Jerrell, John; Feng, Hao; Cadwallader, Keith R; Schmidt, Shelly J

    2011-01-26

    High performance liquid chromatography (HPLC) on a calcium form cation exchange column with refractive index and photodiode array detection was used to investigate thermal decomposition as the cause of the loss of crystalline structure in sucrose. Crystalline sucrose structure was removed using a standard differential scanning calorimetry (SDSC) method (fast heating method) and a quasi-isothermal modulated differential scanning calorimetry (MDSC) method (slow heating method). In the fast heating method, initial decomposition components, glucose (0.365%) and 5-HMF (0.003%), were found in the sucrose sample coincident with the onset temperature of the first endothermic peak. In the slow heating method, glucose (0.411%) and 5-HMF (0.003%) were found in the sucrose sample coincident with the holding time (50 min) at which the reversing heat capacity began to increase. In both methods, even before the crystalline structure in sucrose was completely removed, unidentified thermal decomposition components were formed. These results prove not only that the loss of crystalline structure in sucrose is caused by thermal decomposition, but also that it is achieved via a time-temperature combination process. This knowledge is important for quality assurance purposes and for developing new sugar based food and pharmaceutical products. In addition, this research provides new insights into the caramelization process, showing that caramelization can occur under low temperature (significantly below the literature reported melting temperature), albeit longer time, conditions. PMID:21175200

  7. Prediction of apparent trabecular bone stiffness through fourth-order fabric tensors.

    PubMed

    Moreno, Rodrigo; Smedby, Örjan; Pahr, Dieter H

    2016-08-01

    The apparent stiffness tensor is an important mechanical parameter for characterizing trabecular bone. Previous studies have modeled this parameter as a function of mechanical properties of the tissue, bone density, and a second-order fabric tensor, which encodes both anisotropy and orientation of trabecular bone. Although these models yield strong correlations between observed and predicted stiffness tensors, there is still room for reducing prediction errors. In this paper, we propose a model that uses fourth-order instead of second-order fabric tensors. First, the totally symmetric part of the stiffness tensor is assumed proportional to the fourth-order fabric tensor on a logarithmic scale. Second, the asymmetric part of the stiffness tensor is derived from relationships among components of the harmonic tensor decomposition of the stiffness tensor. The mean intercept length (MIL), generalized MIL (GMIL), and fourth-order global structure tensor were computed from images acquired through microcomputed tomography of 264 specimens of the femur. The predicted tensors were compared to the stiffness tensors computed by using the micro-finite element method (µFE), which was considered the gold standard, yielding strong correlations (above 0.962). The GMIL tensor yielded the best results among the tested fabric tensors. The Frobenius error, geodesic error, and the error of the norm were reduced by applying the proposed model by 3.75, 0.07, and 3.16 %, respectively, compared to the model by Zysset and Curnier (Mech Mater 21(4):243-250, 1995) with the second-order MIL tensor. From the results, fourth-order fabric tensors are a good alternative to the more expensive µFE stiffness predictions. PMID:26341838

  8. Measuring Nematic Susceptibilities from the Elastoresistivity Tensor

    NASA Astrophysics Data System (ADS)

    Hristov, A. T.; Shapiro, M. C.; Hlobil, Patrick; Maharaj, Akash; Chu, Jiun-Haw; Fisher, Ian

    The elastoresistivity tensor mijkl relates changes in resistivity to the strain on a material. As a fourth-rank tensor, it contains considerably more information about the material than the simpler (second-rank) resistivity tensor; in particular, certain elastoresistivity coefficients can be related to thermodynamic susceptibilities and serve as a direct probe of symmetry breaking at a phase transition. The aim of this talk is twofold. First, we enumerate how symmetry both constrains the structure of the elastoresistivity tensor into an easy-to-understand form and connects tensor elements to thermodynamic susceptibilities. In the process, we generalize previous studies of elastoresistivity to include the effects of magnetic field. Second, we describe an approach to measuring quantities in the elastoresistivity tensor with a novel transverse measurement, which is immune to relative strain offsets. These techniques are then applied to BaFe2As2 in a proof of principle measurement. This work is supported by the Department of Energy, Office of Science, Basic Energy Sciences, Materials Sciences and Engineering Division, under Contract DE-AC02-76SF00515.
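
    The defining relation above, (Δρ/ρ)_ij = m_ijkl ε_kl, is a double contraction of a fourth-rank tensor with the strain. A minimal numerical sketch follows; the random m below is a placeholder for illustration only, not a measured BaFe2As2 tensor, and a real m is heavily constrained by crystal symmetry, which is exactly the structure the talk enumerates.

```python
import numpy as np

rng = np.random.default_rng(0)
# Placeholder fourth-rank elastoresistivity tensor m_ijkl (illustrative values)
m = rng.normal(size=(3, 3, 3, 3))

# A small symmetric strain tensor eps_kl
eps = 1e-3 * np.array([[1.0, 0.2, 0.0],
                       [0.2, -0.5, 0.0],
                       [0.0, 0.0, -0.5]])

# Fractional resistivity change: (d_rho / rho)_ij = m_ijkl eps_kl
drho_over_rho = np.einsum('ijkl,kl->ij', m, eps)
```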

  9. Singular value decomposition for photon-processing nuclear imaging systems and applications for reconstruction and computing null functions.

    PubMed

    Jha, Abhinav K; Barrett, Harrison H; Frey, Eric C; Clarkson, Eric; Caucci, Luca; Kupinski, Matthew A

    2015-09-21

    Recent advances in technology are enabling a new class of nuclear imaging systems consisting of detectors that use real-time maximum-likelihood (ML) methods to estimate the interaction position, deposited energy, and other attributes of each photon-interaction event and store these attributes in a list format. This class of systems, which we refer to as photon-processing (PP) nuclear imaging systems, can be described by a fundamentally different mathematical imaging operator that allows processing of the continuous-valued photon attributes on a per-photon basis. Unlike conventional photon-counting (PC) systems that bin the data into images, PP systems do not have any binning-related information loss. Mathematically, while PC systems have an infinite-dimensional null space due to dimensionality considerations, PP systems do not necessarily suffer from this issue. Therefore, PP systems have the potential to provide improved performance in comparison to PC systems. To study these advantages, we propose a framework to perform the singular-value decomposition (SVD) of the PP imaging operator. We use this framework to perform the SVD of operators that describe a general two-dimensional (2D) planar linear shift-invariant (LSIV) PP system and a hypothetical continuously rotating 2D single-photon emission computed tomography (SPECT) PP system. We then discuss two applications of the SVD framework. The first application is to decompose the object being imaged by the PP imaging system into measurement and null components. We compare these components to the measurement and null components obtained with PC systems. In the process, we also present a procedure to compute the null functions for a PC system. The second application is designing analytical reconstruction algorithms for PP systems. The proposed analytical approach exploits the fact that PP systems acquire data in a continuous domain to estimate a continuous object function. The approach is parallelizable and
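
    The measurement/null split described above can be illustrated on a toy discretized operator: the right singular vectors with nonzero singular values span the measurement space, and the remaining ones span the null space. The small random matrix below is only a stand-in for the paper's PC/PP operators, whose SVD is taken over continuous domains.

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy imaging operator with fewer measurements (4) than object dimensions (10),
# so a nontrivial null space exists
H = rng.normal(size=(4, 10))

U, s, Vt = np.linalg.svd(H, full_matrices=True)
r = int(np.sum(s > 1e-12))          # numerical rank

f = rng.normal(size=10)             # object vector
f_meas = Vt[:r].T @ (Vt[:r] @ f)    # component visible to the system
f_null = f - f_meas                 # invisible (null) component: H @ f_null = 0
```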

  11. Woodland Decomposition.

    ERIC Educational Resources Information Center

    Napier, J.

    1988-01-01

    Outlines the role of the main organisms involved in woodland decomposition and discusses some of the variables affecting the rate of nutrient cycling. Suggests practical work that may be of value to high school students either as standard practice or long-term projects. (CW)

  12. In-situ and self-distributed: A new understanding on catalyzed thermal decomposition process of ammonium perchlorate over Nd{sub 2}O{sub 3}

    SciTech Connect

    Zou, Min Wang, Xin Jiang, Xiaohong Lu, Lude

    2014-05-01

    The catalyzed thermal decomposition of ammonium perchlorate (AP) over neodymium oxide (Nd₂O₃) was investigated. The catalytic performances of nanometer-sized and micrometer-sized Nd₂O₃ were evaluated by differential scanning calorimetry (DSC). In contrast to common expectations, the catalysts of different sizes showed nearly identical catalytic activities. Based on the structural and morphological variation of the catalysts during the reaction, combined with mass spectrometric analyses and studies of the unmixed style, a new understanding of this catalytic process was proposed. We believe that the newly formed neodymium oxychloride (NdOCl) was the real catalytic species in the overall thermal decomposition of AP over Nd₂O₃. Meanwhile, the "self-distributed" process occurring within the reaction also contributed to the improvement of the overall catalytic activity. This work is of great value in understanding the roles of micrometer-sized catalysts used in heterogeneous reactions, especially solid-solid reactions that can generate a large quantity of gaseous species. - Graphical abstract: In-situ and self-distributed reaction process in the thermal decomposition of AP catalyzed by Nd₂O₃. - Highlights: • Micro- and nano-Nd₂O₃ for catalytic thermal decomposition of AP. • No essential differences in their catalytic performances. • Structural and morphological variation of the catalysts reveals the catalytic mechanism. • This catalytic process is an "in-situ and self-distributed" one.

  13. Effect of mountain climatic elevation gradient and litter origin on decomposition processes: long-term experiment with litter-bags

    NASA Astrophysics Data System (ADS)

    Klimek, Beata; Niklińska, Maria; Chodak, Marcin

    2013-04-01

    Temperature is one of the most important factors affecting soil organic matter decomposition. Mountain areas with vertical gradients of temperature and precipitation provide an opportunity to observe climatic differences similar to those observed across latitudes and may serve as an approximation for climatic changes. The aim of the study was to compare the effects of climatic conditions and initial litter properties on decomposition processes and the thermal sensitivity of forest litter. The litter was collected at three altitudes (600, 900, 1200 m a.s.l.) in the Beskidy Mts (southern Poland), put into litter-bags, and exposed in the field from autumn 2011. Litter collected at a given altitude was exposed both at the altitude from which it was taken and at the two other altitudes. The litter-bags were laid out on five mountains, treated as replicates. Starting in April 2012, single sets of litter-bags were collected every five weeks. The laboratory measurements included determination of dry mass loss and chemical composition (Corg, Nt, St, Mg, Ca, Na, K, Cu, Zn) of the litter. In additional litter-bag sets, taken in spring and autumn 2012, microbial properties were measured. To determine the effect of litter properties and the climatic conditions of the elevation sites on the thermal sensitivity of decomposing litter, the respiration rate of litter was measured at 5°C, 15°C and 25°C and expressed as Q10 L and Q10 H (ratios of respiration rate between 5°C and 15°C and between 15°C and 25°C, respectively). The functional diversity of soil microbes was measured with Biolog® ECO plates, and structural diversity with phospholipid fatty acids (PLFA). Litter mass lost during the first year of incubation was characterized by high variability, and mean mass loss ranged up to 30% of initial mass. After the autumn sampling, we found that the mean respiration rate of litter (dry mass) from the 600 m a.s.l. site exposed at 600 m a.s.l. was the highest at each tested temperature. In turn, the lowest mean
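
    The Q10 indices above are simply ratios of respiration rates measured 10 °C apart. A sketch with hypothetical rate values (the numbers are illustrative, not from the study):

```python
def q10(rate_low, rate_high):
    """Q10 over a 10 degree C interval: the ratio of the two respiration rates."""
    return rate_high / rate_low

# Hypothetical respiration rates (arbitrary units) at 5, 15 and 25 degrees C
r5, r15, r25 = 1.0, 2.1, 3.8
q10_L = q10(r5, r15)    # thermal sensitivity between 5 and 15 degrees C
q10_H = q10(r15, r25)   # thermal sensitivity between 15 and 25 degrees C
```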

  14. The non-uniqueness of the atomistic stress tensor and its relationship to the generalized Beltrami representation

    NASA Astrophysics Data System (ADS)

    Admal, Nikhil Chandra; Tadmor, E. B.

    2016-08-01

    The non-uniqueness of the atomistic stress tensor is a well-known issue when defining continuum fields for atomistic systems. In this paper, we study the non-uniqueness of the atomistic stress tensor stemming from the non-uniqueness of the potential energy representation. In particular, we show using rigidity theory that the distribution associated with the potential part of the atomistic stress tensor can be decomposed into an irrotational part that is independent of the potential energy representation, and a traction-free solenoidal part. Therefore, we have identified for the atomistic stress tensor a discrete analog of the continuum generalized Beltrami representation (a version of the vector Helmholtz decomposition for symmetric tensors). We demonstrate the validity of these analogies using a numerical test. A program for performing the decomposition of the atomistic stress tensor called MDStressLab is available online at
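
    The continuum analogy invoked above has a simple numerical counterpart for vector fields: the Helmholtz decomposition of a periodic field into an irrotational part and a solenoidal part via Fourier projection. The sketch below shows only this plain vector version, not the authors' generalized Beltrami construction for symmetric tensors.

```python
import numpy as np

def helmholtz_2d(u, v):
    """Split a periodic 2D vector field (u, v) into an irrotational part and a
    solenoidal part by projecting each Fourier mode onto its wavevector."""
    n0, n1 = u.shape
    KX, KY = np.meshgrid(np.fft.fftfreq(n0) * n0,
                         np.fft.fftfreq(n1) * n1, indexing='ij')
    k2 = KX**2 + KY**2
    k2[0, 0] = 1.0                       # the k = 0 mode carries no divergence
    uh, vh = np.fft.fft2(u), np.fft.fft2(v)
    div_h = KX * uh + KY * vh            # spectral divergence (up to a factor i)
    u_irr = np.fft.ifft2(KX * div_h / k2).real
    v_irr = np.fft.ifft2(KY * div_h / k2).real
    return (u_irr, v_irr), (u - u_irr, v - v_irr)

# Band-limited test field on a 16x16 periodic grid
n = 16
i, j = np.meshgrid(np.arange(n), np.arange(n), indexing='ij')
u = np.sin(2 * np.pi * i / n) * np.cos(2 * np.pi * j / n)
v = np.cos(2 * np.pi * i / n) + 0.3 * np.sin(2 * np.pi * (i + j) / n)
(ui, vi), (us, vs) = helmholtz_2d(u, v)
```

The irrotational part has zero spectral curl and the solenoidal part has zero spectral divergence, mirroring the decomposition of the potential part of the stress into curvature-free and traction-free pieces.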

  15. Re-examination of Chinese semantic processing and syntactic processing: evidence from conventional ERPs and reconstructed ERPs by residue iteration decomposition (RIDE).

    PubMed

    Wang, Fang; Ouyang, Guang; Zhou, Changsong; Wang, Suiping

    2015-01-01

    A number of studies have explored the time course of Chinese semantic and syntactic processing. However, whether syntactic processing occurs earlier than semantic processing during Chinese sentence reading is still under debate. To further explore this issue, an event-related potentials (ERPs) experiment was conducted on 21 native Chinese speakers who read individually presented Chinese simple sentences (NP1+VP+NP2) word-by-word for comprehension and made semantic plausibility judgments. The transitivity of the verbs was manipulated to form three types of stimuli: congruent sentences (CON), sentences with a semantically violated NP2 following a transitive verb (semantic violation, SEM), and sentences with a semantically violated NP2 following an intransitive verb (combined semantic and syntactic violation, SEM+SYN). The ERPs evoked by the target NP2 were analyzed using the Residue Iteration Decomposition (RIDE) method, to reconstruct the ERP waveform blurred by trial-to-trial variability, as well as by the conventional ERP method based on stimulus-locked averaging. The conventional ERP analysis showed that, compared with the critical words in CON, those in SEM and SEM+SYN elicited an N400-P600 biphasic pattern. The N400 effects in both violation conditions were of similar size and distribution, but the P600 in SEM+SYN was bigger than that in SEM. Compared with the conventional ERP analysis, RIDE analysis revealed a larger N400 effect and an earlier P600 effect (in the time window of 500-800 ms instead of 570-810 ms). Overall, the combination of conventional ERP analysis and the RIDE method for compensating for trial-to-trial variability confirmed the non-significant difference between SEM and SEM+SYN in the earlier N400 time window. Converging with previous findings on other Chinese structures, the current study provides further precise evidence that syntactic processing in Chinese does not occur earlier than semantic processing. PMID:25615600

  16. A LOW-COST PROCESS FOR THE SYNTHESIS OF NANOSIZE YTTRIA-STABILIZED ZIRCONIA (YSZ) BY MOLECULAR DECOMPOSITION

    SciTech Connect

    Anil V. Virkar

    2004-05-06

    This report summarizes the results of work done during the performance period on this project, between October 1, 2002 and December 31, 2003, with a three-month no-cost extension. The principal objective of this work was to develop a low-cost process for the synthesis of sinterable, fine powder of YSZ. The process is based on molecular decomposition (MD), wherein very fine particles of YSZ are formed by: (1) mixing raw materials in powder form, (2) synthesizing a compound containing YSZ and a fugitive constituent by a conventional process, and (3) selectively leaching (decomposing) the fugitive constituent, thus leaving behind insoluble YSZ of a very fine particle size. While there are many possible compounds that can be used as precursors, the one selected for the present work was Y-doped Na₂ZrO₃, where the fugitive constituent is Na₂O. It can be readily demonstrated that the potential cost of the MD process for the synthesis of very fine (or nanosize) YSZ is considerably lower than that of the commonly used processes, namely chemical co-precipitation and combustion synthesis. Based on the materials cost alone, for a 100 kg batch, the cost of YSZ made by chemical co-precipitation is >$50/kg, while that of the MD process should be <$10/kg. Significant progress was made during the performance period on this project. The highlights of the progress are given here in bullet form. (1) Of the two precursors listed in the Phase I proposal, namely Y-doped BaZrO₃ and Y-doped Na₂ZrO₃, Y-doped Na₂ZrO₃ was selected for the synthesis of nanosize (or fine) YSZ. This was based on the potential cost of the precursor, the need to use only water for leaching, and the short time required for the process. (2) For the synthesis of calcia-stabilized zirconia (CSZ), which has the potential for use in place of YSZ in the anode of SOFC, Ca-doped Na₂ZrO₃ was demonstrated as a suitable precursor. (3) Synthesis of Y

  17. Evaluation of Bayesian tensor estimation using tensor coherence

    NASA Astrophysics Data System (ADS)

    Kim, Dae-Jin; Kim, In-Young; Jeong, Seok-Oh; Park, Hae-Jeong

    2009-06-01

    Fiber tractography, a unique and non-invasive method to estimate axonal fibers within white matter, constructs putative streamlines from diffusion tensor MRI by interconnecting voxels according to the propagation direction defined by the diffusion tensor. This direction has uncertainties due to the properties of the underlying fiber bundles, neighboring structures, and image noise. Therefore, robust estimation of the diffusion direction is essential to reconstruct reliable fiber pathways. For this purpose, we propose a tensor estimation method using a Bayesian framework, which includes an a priori probability distribution based on tensor coherence indices, to utilize both the neighborhood direction information and the inertia moment as regularization terms. The reliability of the proposed tensor estimation was evaluated using Monte Carlo simulations in terms of accuracy and precision, with four synthetic tensor fields at various SNRs and in vivo human data of brain and calf muscle. The proposed Bayesian estimation demonstrated relative robustness to noise and higher reliability compared to simple tensor regression.

  18. Grid-based electronic structure calculations: The tensor decomposition approach

    NASA Astrophysics Data System (ADS)

    Rakhuba, M. V.; Oseledets, I. V.

    2016-05-01

    We present a fully grid-based approach for solving Hartree-Fock and all-electron Kohn-Sham equations based on a low-rank approximation of the three-dimensional electron orbitals. Due to the low-rank structure, the total complexity of the algorithm scales linearly with the one-dimensional grid size. Linear complexity allows for the usage of fine grids, e.g. 8192³, and thus a cheap extrapolation procedure. We test the proposed approach on closed-shell atoms up to argon, several molecules, and clusters of hydrogen atoms. All tests show systematic convergence to the required accuracy.
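
    The linear-in-n complexity claimed above rests on storing orbitals as low-rank factors instead of full n³ grids. A minimal sketch for an exactly separable (rank-1) function, kept deliberately simple (real solvers use higher ranks and iterative rank truncation):

```python
import numpy as np

n = 64
x = np.linspace(0.0, 1.0, n)
# A separable (rank-1) orbital-like function f(x,y,z) = e^-x * e^-y * e^-z,
# stored as three 1D factors (3n numbers) instead of a full n^3 grid
fx = np.exp(-x)
full = np.einsum('i,j,k->ijk', fx, fx, fx)   # the n^3 tensor, for comparison only

storage_full = full.size        # n**3 values
storage_lowrank = 3 * n         # 3n values: linear in the 1D grid size
assert np.allclose(full, fx[:, None, None] * fx[None, :, None] * fx[None, None, :])
```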

  19. Gogny interactions with tensor terms

    NASA Astrophysics Data System (ADS)

    Anguiano, M.; Lallena, A. M.; Co', G.; De Donno, V.; Grasso, M.; Bernard, R. N.

    2016-07-01

    We present a perturbative approach to include tensor terms in the Gogny interaction. We do not change the values of the usual parameterisations, with the sole exception of the spin-orbit term, and we add tensor terms whose only free parameters are the strengths of the interactions. We identify observables sensitive to the presence of the tensor force in Hartree-Fock, Hartree-Fock-Bogoliubov and random phase approximation calculations. We show the need to include at least two tensor contributions: a pure tensor term and a tensor-isospin term. We show results relevant to the inclusion of the tensor term for single-particle energies, charge-conserving magnetic excitations and Gamow-Teller excitations.

  20. The atomic strain tensor

    SciTech Connect

    Mott, P.H.; Argon, A.S. ); Suter, U.W. Massachusetts Institute of Technology, Cambridge, MA )

    1992-07-01

    A definition of the local atomic strain increments in three dimensions and an algorithm for computing them are presented. An arbitrary arrangement of atoms is tessellated into Delaunay tetrahedra, identifying interstices, and Voronoi polyhedra, identifying atomic domains. The deformation gradient increment tensor for interstitial space is obtained from the displacement increments of the corner atoms of the Delaunay tetrahedra. The atomic site strain increment tensor is then obtained by finding the intersection of the Delaunay tetrahedra with the Voronoi polyhedra, accumulating the individual deformation gradient contributions of the intersected Delaunay tetrahedra into the Voronoi polyhedra. An example application is discussed, showing how the atomic strain clarifies the relative local atomic movement for a polymeric glass treated at the atomic level. 6 refs. 10 figs.
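
    The core step described above, obtaining a deformation gradient from the displacements of a tetrahedron's four corner atoms, can be sketched as follows. This is a minimal single-tetrahedron illustration, not the authors' full Delaunay/Voronoi accumulation algorithm.

```python
import numpy as np

def deformation_gradient(X, x):
    """Deformation gradient F of a tetrahedron from the reference (X) and
    deformed (x) positions of its four corner atoms (one row per atom):
    F maps reference edge vectors to deformed edge vectors."""
    dX = (X[1:] - X[0]).T            # 3x3 matrix of reference edges (columns)
    dx = (x[1:] - x[0]).T            # corresponding deformed edges
    return dx @ np.linalg.inv(dX)

# Unit tetrahedron deformed by a known homogeneous deformation F_true
X = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
F_true = np.array([[1.02, 0.01, 0.00],
                   [0.00, 0.98, 0.00],
                   [0.00, 0.00, 1.01]])
x = X @ F_true.T
F = deformation_gradient(X, x)
strain = 0.5 * (F + F.T) - np.eye(3)   # small-strain tensor from F
```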

  1. An Alternative to Tensors

    NASA Astrophysics Data System (ADS)

    Brown, Eric

    2008-10-01

    Some of the most beautiful and complex theories in physics are formulated in the language of tensors. While powerful, these methods are sometimes daunting to the uninitiated. I will introduce the use of Clifford Algebra as a practical alternative to the use of tensors. Many physical quantities can be represented in an indexless form. The boundary between the classical and the quantum worlds becomes a little more transparent. I will review some key concepts, and then talk about some of the things that I am doing with this interesting and powerful tool. Of note to some will be the development of rigid body dynamics for a game engine. Others may be interested in expressing the connection on a spin bundle. My intent is to prove to the audience that there exists an accessible mathematical tool that can be employed to probe the most difficult of topics in physics.

  2. Superconducting tensor gravity gradiometer

    NASA Technical Reports Server (NTRS)

    Paik, H. J.

    1981-01-01

    The employment of superconductivity and other material properties at cryogenic temperatures to fabricate a sensitive, low-drift gravity gradiometer is described. The device yields a reduction of noise of four orders of magnitude over room-temperature gradiometers, and direct summation and subtraction of signals from accelerometers in varying orientations are possible with superconducting circuitry. Additional circuits permit determination of the linear and angular acceleration vectors independent of the measurement of the gravity gradient tensor. A dewar flask capable of maintaining helium in a liquid state for a year's duration is under development by NASA, and a superconducting tensor gravity gradiometer for the NASA Geodynamics Program is intended for a LEO polar trajectory to measure the harmonic expansion coefficients of the earth's gravity field up to order 300.

  3. Local anisotropy of fluids using Minkowski tensors

    NASA Astrophysics Data System (ADS)

    Kapfer, S. C.; Mickel, W.; Schaller, F. M.; Spanner, M.; Goll, C.; Nogawa, T.; Ito, N.; Mecke, K.; Schröder-Turk, G. E.

    2010-11-01

    Statistics of the free volume available to individual particles have previously been studied for simple and complex fluids, granular matter, amorphous solids, and structural glasses. Minkowski tensors provide a set of shape measures that are based on strong mathematical theorems and are easily computed for polygonal and polyhedral bodies such as free volume cells (Voronoi cells). They characterize the local structure beyond the two-point correlation function and are suitable to define indices 0 ≤ β_ν^{a,b} ≤ 1 of local anisotropy. Here, we analyze the statistics of Minkowski tensors for configurations of simple liquid models, including the ideal gas (Poisson point process), the hard disk and hard sphere ensembles, and the Lennard-Jones fluid. We show that Minkowski tensors provide a robust characterization of local anisotropy, which ranges from β_ν^{a,b} ≈ 0.3 for vapor phases to
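
    Anisotropy indices of the β ∈ [0, 1] kind above are ratios of extremal eigenvalues of a tensorial shape measure of a cell. The sketch below uses a plain second-moment tensor of a cell's vertices as a simple stand-in for a true Minkowski tensor: an isotropic cell gives β = 1, a stretched one gives β < 1.

```python
import numpy as np

def anisotropy_index(points):
    """beta = lambda_min / lambda_max of the second-moment tensor of a 2D
    point set; a stand-in for the Minkowski-tensor indices in [0, 1]."""
    c = points - points.mean(axis=0)
    M = c.T @ c / len(points)
    lam = np.linalg.eigvalsh(M)          # eigenvalues in ascending order
    return lam[0] / lam[-1]

# Vertices of a regular hexagon: isotropic, so beta = 1
theta = np.linspace(0.0, 2.0 * np.pi, 7)[:-1]
hexagon = np.column_stack([np.cos(theta), np.sin(theta)])
# The same hexagon stretched 2x along x: anisotropic, beta = 0.25
stretched = hexagon * np.array([2.0, 1.0])
```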

  4. Direct Solution of the Chemical Master Equation Using Quantized Tensor Trains

    PubMed Central

    Kazeev, Vladimir; Khammash, Mustafa; Nip, Michael; Schwab, Christoph

    2014-01-01

    The Chemical Master Equation (CME) is a cornerstone of stochastic analysis and simulation of models of biochemical reaction networks. Yet direct solutions of the CME have remained elusive. Although several approaches overcome the infinite dimensional nature of the CME through projections or other means, a common feature of proposed approaches is their susceptibility to the curse of dimensionality, i.e. the exponential growth in memory and computational requirements in the number of problem dimensions. We present a novel approach that has the potential to “lift” this curse of dimensionality. The approach is based on the use of the recently proposed Quantized Tensor Train (QTT) formatted numerical linear algebra for the low parametric, numerical representation of tensors. The QTT decomposition admits both, algorithms for basic tensor arithmetics with complexity scaling linearly in the dimension (number of species) and sub-linearly in the mode size (maximum copy number), and a numerical tensor rounding procedure which is stable and quasi-optimal. We show how the CME can be represented in QTT format, then use the exponentially-converging hp-discontinuous Galerkin discretization in time to reduce the CME evolution problem to a set of QTT-structured linear equations to be solved at each time step using an algorithm based on Density Matrix Renormalization Group (DMRG) methods from quantum chemistry. Our method automatically adapts the “basis” of the solution at every time step guaranteeing that it is large enough to capture the dynamics of interest but no larger than necessary, as this would increase the computational complexity. Our approach is demonstrated by applying it to three different examples from systems biology: independent birth-death process, an example of enzymatic futile cycle, and a stochastic switch model. The numerical results on these examples demonstrate that the proposed QTT method achieves dramatic speedups and several orders of magnitude

  5. Direct solution of the Chemical Master Equation using quantized tensor trains.

    PubMed

    Kazeev, Vladimir; Khammash, Mustafa; Nip, Michael; Schwab, Christoph

    2014-03-01

    The Chemical Master Equation (CME) is a cornerstone of stochastic analysis and simulation of models of biochemical reaction networks. Yet direct solutions of the CME have remained elusive. Although several approaches overcome the infinite dimensional nature of the CME through projections or other means, a common feature of proposed approaches is their susceptibility to the curse of dimensionality, i.e. the exponential growth in memory and computational requirements in the number of problem dimensions. We present a novel approach that has the potential to "lift" this curse of dimensionality. The approach is based on the use of the recently proposed Quantized Tensor Train (QTT) formatted numerical linear algebra for the low parametric, numerical representation of tensors. The QTT decomposition admits both, algorithms for basic tensor arithmetics with complexity scaling linearly in the dimension (number of species) and sub-linearly in the mode size (maximum copy number), and a numerical tensor rounding procedure which is stable and quasi-optimal. We show how the CME can be represented in QTT format, then use the exponentially-converging hp-discontinuous Galerkin discretization in time to reduce the CME evolution problem to a set of QTT-structured linear equations to be solved at each time step using an algorithm based on Density Matrix Renormalization Group (DMRG) methods from quantum chemistry. Our method automatically adapts the "basis" of the solution at every time step guaranteeing that it is large enough to capture the dynamics of interest but no larger than necessary, as this would increase the computational complexity. Our approach is demonstrated by applying it to three different examples from systems biology: independent birth-death process, an example of enzymatic futile cycle, and a stochastic switch model. The numerical results on these examples demonstrate that the proposed QTT method achieves dramatic speedups and several orders of magnitude storage
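
    The quantization idea behind QTT can be seen by reshaping a length-2^d vector into a d-way 2×2×…×2 tensor and inspecting the ranks of its sequential unfoldings: a smooth exponential vector factorizes over the bits of its index, so every unfolding has rank 1 and the vector compresses to O(d) = O(log n) parameters. This is only an illustration of the format, not the paper's DMRG-based solver.

```python
import numpy as np

d = 10
n = 2 ** d                      # vector length 1024
q = 0.99
x = q ** np.arange(n)           # x_i = q^i, a smooth "exponential" vector

# Quantize: view x as a d-way tensor with one binary index per bit of i
t = x.reshape([2] * d)

# QTT ranks are bounded by the ranks of the sequential unfoldings;
# q^i = prod_k q^(b_k 2^k) separates over the bits, so all ranks are 1
ranks = [np.linalg.matrix_rank(t.reshape(2 ** (k + 1), -1))
         for k in range(d - 1)]
```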

  6. Physical decomposition of the gauge and gravitational fields

    SciTech Connect

    Chen Xiangsong; Zhu Benchao

    2011-04-15

    Physical decomposition of the non-Abelian gauge field has recently helped to achieve a meaningful gluon spin. Here we extend this approach to gravity and attempt a meaningful gravitational energy. The metric is unambiguously separated into a pure geometric term which contributes a null curvature tensor, and a physical term which represents the true gravitational effect and always vanishes in a flat space-time. By this decomposition the conventional pseudotensors of the gravitational stress-energy are easily rescued to produce a definite physical result. Our decomposition applies to any symmetric tensor, and has an interesting relation to the transverse-traceless decomposition discussed by Arnowitt, Deser and Misner, and by York.

  7. Structured data-sparse approximation to high order tensors arising from the deterministic Boltzmann equation

    NASA Astrophysics Data System (ADS)

    Khoromskij, Boris N.

    2007-09-01

    We develop efficient data-sparse representations of a class of high order tensors via a block many-fold Kronecker product decomposition. Such a decomposition is based on low separation-rank approximations of the corresponding multivariate generating function. We combine Sinc interpolation and a quadrature-based approximation with hierarchically organised block tensor-product formats. Different matrix and tensor operations in the generalised Kronecker tensor-product format, including the Hadamard-type product, can be implemented at low cost. An application to the collision integral from the deterministic Boltzmann equation leads to an asymptotic cost of O(n^4 log^β n) - O(n^5 log^β n) in the one-dimensional problem size n (depending on the model kernel function), which noticeably improves on the O(n^6 log^β n) complexity of the full matrix representation.
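
    A one-term analogue of the Kronecker product decompositions above is the nearest-Kronecker-product approximation of Van Loan and Pitsianis: rearranging the matrix turns Kronecker structure into rank-1 structure, so the leading SVD term of the rearrangement gives the best A ≈ B ⊗ C. This sketch shows only that single-term building block, not the paper's hierarchical many-fold format.

```python
import numpy as np

def nearest_kronecker(A, m1, n1, m2, n2):
    """Best single-term Kronecker approximation A ~ B (x) C (Van Loan-Pitsianis):
    rearrange A so Kronecker structure becomes rank-1, take the top SVD term."""
    R = A.reshape(m1, m2, n1, n2).transpose(0, 2, 1, 3).reshape(m1 * n1, m2 * n2)
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    B = np.sqrt(s[0]) * U[:, 0].reshape(m1, n1)
    C = np.sqrt(s[0]) * Vt[0].reshape(m2, n2)
    return B, C

rng = np.random.default_rng(3)
B0, C0 = rng.normal(size=(3, 4)), rng.normal(size=(2, 5))
A = np.kron(B0, C0)                   # exactly Kronecker-structured input
B, C = nearest_kronecker(A, 3, 4, 2, 5)
```

For an exactly Kronecker-structured A the rearrangement R is exactly rank 1 and the reconstruction B ⊗ C recovers A; for general A the leading SVD term is the optimal Frobenius-norm approximation.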

  8. Tensor-based detection of T wave alternans using ECG.

    PubMed

    Goovaerts, Griet; Vandenberk, Bert; Willems, Rik; Van Huffel, Sabine

    2015-08-01

    T wave alternans is defined as changes in the T wave amplitude in an ABABAB pattern. It can be found in ECG signals of patients with heart disease and is a possible indicator for predicting the risk of sudden cardiac death. Due to its low amplitude, robust automatic T wave alternans detection is a difficult task. We present a new method to detect T wave alternans in multichannel ECG signals. The use of tensors (multidimensional matrices) permits the combination of the information present in different channels, making detection more reliable. The possibility of decomposing incomplete tensors is exploited to deal with noisy ECG segments. Using a sliding window of 128 heartbeats, a tensor is constructed from the T waves of all channels. Canonical Polyadic Decomposition is applied to this tensor and the resulting loading vectors are examined for information about the T wave behavior in three dimensions. T wave alternans is detected using a sign-change counting method that is able to extract both the T wave alternans length and magnitude. When applying this novel method to a database of patients with multiple positive T wave alternans tests obtained with the clinically available spectral method, both the length and the magnitude of the detected T wave alternans were larger for these subjects than for subjects in a control group. PMID:26737901
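
    The sign-change counting idea can be sketched on a single channel of per-beat T wave amplitudes: in a clean ABAB pattern, successive beat-to-beat differences flip sign at every step. This toy version with made-up amplitudes illustrates only that final counting step, not the tensor pipeline.

```python
import numpy as np

def detect_twa(t_amp):
    """Toy sign-change detector for T wave alternans: count sign flips between
    successive beat-to-beat amplitude differences (ABAB pattern) and return
    the flip count together with the mean absolute difference (magnitude)."""
    d = np.diff(t_amp)
    flips = np.sign(d[:-1]) * np.sign(d[1:]) < 0   # True where the sign flips
    n_alt = int(flips.sum())
    mag = float(np.abs(d).mean()) if n_alt else 0.0
    return n_alt, mag

# 8 beats with a clean 50 uV ABAB pattern on a 500 uV baseline (made-up data)
t_amp = np.array([500., 550., 500., 550., 500., 550., 500., 550.])
n_alt, mag = detect_twa(t_amp)
```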

  9. Carbon decomposition process of the residual biomass in the paddy soil of a single-crop rice field

    NASA Astrophysics Data System (ADS)

    Okada, K.; Iwata, T.

    2014-12-01

    In cultivated fields, residual organic matter is plowed into the soil after harvest and decays during the fallow season. Greenhouse gases such as CO2 and CH4 are generated by the decomposition of this residual organic matter and released into the atmosphere. In some fields, open burning is traditionally carried out, whereby the carbon in residual matter is released into the atmosphere as CO2. However, the effect of burning on the carbon budget between croplands and the atmosphere has not been fully evaluated. In this study, coarse organic matter (COM) in the paddy soil of a single-crop rice field was sampled at regular intervals between January 2011 and August 2014. The amount of carbon released from residual matter was estimated by analyzing the variations in the carbon content of COM. Effects of soil temperature (Ts) and soil water content (SWC) at the paddy field on the rate of carbon decomposition were investigated. Although the rate of COM decrease was much smaller in winter, it accelerated during the warming season between April and June of each year. Decomposition slowed during the subsequent rice cultivation season despite the highest soil temperatures. In addition, the observational field was divided into two areas, and open burning experiments were conducted three times, in November of 2011, 2012, and 2013. In each year, three sampling surveys were conducted: plants before harvest, and residuals before and after the burning experiment. From these surveys, it is suggested that about 48±2% of the carbon content of the above-ground plant was removed as grain at harvest, and about 27±2% of the carbon was emitted as CO2 by burning. The carbon content of residuals plowed into the soil after harvest was estimated at 293±1 and 220±36 gC/m2 in the unburned and burned areas, respectively, based on the three-year average. It is estimated that 70 and 60% of the initial input of COM was decomposed after a year in the unburned and burned areas, respectively.

  10. Tensor powers for non-simply laced Lie algebras B2-case

    NASA Astrophysics Data System (ADS)

    Kulish, P. P.; Lyakhovsky, V. D.; Postnova, O. V.

    2012-02-01

    We study the decomposition problem for tensor powers of B2 fundamental modules. To solve this problem, the singular weight technique and injection fan algorithms are applied. Properties of multiplicity coefficients are formulated in terms of multiplicity functions. These functions are constructed to show explicitly the dependence of the multiplicity coefficients on the highest weight coordinates and the tensor power parameter. It is thus possible to study general properties of multiplicity coefficients for powers of the fundamental B2-modules.

  11. Thermal decomposition of [Co(en)3][Fe(CN)6]∙2H2O: Topotactic dehydration process, valence and spin exchange mechanism elucidation

    PubMed Central

    2013-01-01

    Background The Prussian blue analogues represent a well-known and extensively studied group of coordination species with many remarkable applications due to their ion-exchange, electron transfer or magnetic properties. Among them, Co-Fe Prussian blue analogues have been extensively studied due to photoinduced magnetization. Surprisingly, their suitability as precursors for the solid-state synthesis of magnetic nanoparticles is almost unexplored. In this paper, the mechanism of thermal decomposition of [Co(en)3][Fe(CN)6]∙2H2O (1a) is elucidated, including the topotactic dehydration, suggested valence and spin exchange mechanisms, and the formation of a CoFe2O4-Co3O4 (3:1) mixture as the final product of thermal degradation. Results The course of thermal decomposition of 1a in an air atmosphere up to 600°C was monitored by TG/DSC techniques, 57Fe Mössbauer and IR spectroscopy. First, the topotactic dehydration of 1a to the hemihydrate [Co(en)3][Fe(CN)6]∙1/2H2O (1b) occurred with preservation of the single-crystal character, as confirmed by X-ray diffraction analysis. The subsequent thermal decomposition proceeded in four further stages, including intermediates varying in the valence and spin states of both transition metal ions, i.e. [FeII(en)2(μ-NC)CoIII(CN)4], [FeIII(NH2CH2CH3)2(μ-NC)2CoII(CN)3] and FeIII[CoII(CN)5], which were suggested mainly from 57Fe Mössbauer, IR spectral and elemental analysis data. Thermal decomposition was completed at 400°C, when superparamagnetic phases of CoFe2O4 and Co3O4 in a 3:1 molar ratio were formed. On further temperature increase (450 and 600°C), the ongoing crystallization process gave a new ferromagnetic phase attributed to CoFe2O4-Co3O4 nanocomposite particles. Their formation was confirmed by XRD and TEM analyses. The in-field (5 K / 5 T) Mössbauer spectrum revealed canting of the Fe(III) spins in the almost fully inverse spinel structure of CoFe2O4. Conclusions It has been found

  12. Synthesis of lead zirconate titanate nanofibres and the Fourier-transform infrared characterization of their metallo-organic decomposition process

    NASA Astrophysics Data System (ADS)

    Wang, Yu; Santiago-Avilés, Jorge J.

    2004-01-01

    We have synthesized Pb(Zr0.52Ti0.48)O3 fibres with diameters ranging from 500 nm to several microns using electrospinning and metallo-organic decomposition techniques (Wang et al 2002 Mater. Res. Soc. Symp. Proc. 702 359). By a refinement of our electrospinning technique, i.e. by increasing the viscosity of the precursor solution, and by adding a filter to the tip of the syringe, the diameter of the synthesized PZT fibres has been reduced to the neighbourhood of 100 nm. The complex thermal decomposition was characterized using Fourier-transform infrared (FTIR) spectroscopy and x-ray diffraction (XRD). It was found that alcohol evaporated during electrospinning and that most of the organic groups had pyrolysed before the intermediate pyrochlore phase was formed. There is a good correspondence between XRD and FTIR spectra. We also verify that a thin film of platinum coated on the silicon substrate catalyses the phase transformation of the pyrochlore into the perovskite phase.

  13. Killing and conformal Killing tensors

    NASA Astrophysics Data System (ADS)

    Heil, Konstantin; Moroianu, Andrei; Semmelmann, Uwe

    2016-08-01

    We introduce an appropriate formalism in order to study conformal Killing (symmetric) tensors on Riemannian manifolds. We reprove in a simple way some known results in the field and obtain several new results, such as the classification of conformal Killing 2-tensors on Riemannian products of compact manifolds and Weitzenböck formulas leading to non-existence results, and we construct various examples of manifolds with conformal Killing tensors.

  14. Projectors and seed conformal blocks for traceless mixed-symmetry tensors

    NASA Astrophysics Data System (ADS)

    Costa, Miguel S.; Hansen, Tobias; Penedones, João; Trevisani, Emilio

    2016-07-01

    In this paper we derive the projectors to all irreducible SO(d) representations (traceless mixed-symmetry tensors) that appear in the partial wave decomposition of a conformal correlator of four stress-tensors in d dimensions. These projectors are given in a closed form for arbitrary length l1 of the first row of the Young diagram. The appearance of Gegenbauer polynomials leads directly to recursion relations in l1 for seed conformal blocks. Further results include a differential operator that generates the projectors to traceless mixed-symmetry tensors and the general normalization constant of the shadow operator.

  15. FaRe: A Mathematica package for tensor reduction of Feynman integrals

    NASA Astrophysics Data System (ADS)

    Re Fiorentin, Michele

    2016-08-01

    In this paper, we present FaRe, a package for Mathematica that implements the decomposition of a generic tensor Feynman integral, with arbitrary loop number, into scalar integrals in higher dimension. In order for FaRe to work, the package FeynCalc is needed, so that the tensor structure of the different contributions is preserved and the obtained scalar integrals are grouped accordingly. FaRe can prove particularly useful when it is preferable to handle Feynman integrals with free Lorentz indices and tensor reduction of high-order integrals is needed. This can then be achieved with several powerful existing tools.

  16. Notes on super Killing tensors

    NASA Astrophysics Data System (ADS)

    Howe, P. S.; Lindström, U.

    2016-03-01

    The notion of a Killing tensor is generalised to a superspace setting. Conserved quantities associated with these are defined for superparticles and Poisson brackets are used to define a supersymmetric version of the even Schouten-Nijenhuis bracket. Superconformal Killing tensors in flat superspaces are studied for spacetime dimensions 3,4,5,6 and 10. These tensors are also presented in analytic superspaces and super-twistor spaces for 3,4 and 6 dimensions. Algebraic structures associated with superconformal Killing tensors are also briefly discussed.

  17. Kinetics and mechanism of monomolecular heterolysis of framework compounds. V. Ionization-fragmentation process in decomposition of 1-adamantyl chloroformate

    SciTech Connect

    Ponomareva, E.A.; Yavorskaya, I.F.; Dvorko, G.V.

    1988-08-10

    The decomposition of 1-adamantyl chloroformate in acetonitrile, nitrobenzene, benzene, and isopropyl and tert-butyl alcohols in the presence of triphenylverdazyls as internal indicator was investigated preparatively and kinetically. In nitrobenzene small additions of water increase the reaction rate, and additions of tetraethylammonium halides reduce it. In isopropyl and tert-butyl alcohols and in nitrobenzene in the presence of tetraethylammonium halides the reaction rate depends on the nature of the substituent in the verdazyl. The reaction rate increases linearly with increase in the dielectric constant of the medium. It is assumed that an intimate ion pair is formed at the first stage of the reaction and undergoes fragmentation in the controlling stage to 1-adamantyl chloride or is converted into a solvent-separated ion pair. The latter reacts with the verdazyl or undergoes fragmentation to 1-adamantyl chloride.

  18. Relativistic Lagrangian displacement field and tensor perturbations

    NASA Astrophysics Data System (ADS)

    Rampf, Cornelius; Wiegand, Alexander

    2014-12-01

    We investigate the purely spatial Lagrangian coordinate transformation from the Lagrangian to the basic Eulerian frame. We demonstrate three techniques for extracting the relativistic displacement field from a given solution in the Lagrangian frame. These techniques are (a) from defining a local set of Eulerian coordinates embedded into the Lagrangian frame; (b) from performing a specific gauge transformation; and (c) from a fully nonperturbative approach based on the Arnowitt-Deser-Misner (ADM) split. The latter approach shows that this decomposition is not tied to a specific perturbative formulation for the solution of the Einstein equations. Rather, it can be defined at the level of the nonperturbative coordinate change from the Lagrangian to the Eulerian description. Studying such different techniques is useful because it allows us to compare and develop further the various approximation techniques available in the Lagrangian formulation. We find that one has to solve the gravitational wave equation in the relativistic analysis, otherwise the corresponding Newtonian limit will necessarily contain spurious nonpropagating tensor artifacts at second order in the Eulerian frame. We also derive the magnetic part of the Weyl tensor in the Lagrangian frame, and find that it is not only excited by gravitational waves but also by tensor perturbations which are induced through the nonlinear frame dragging. We apply our findings to calculate for the first time the relativistic displacement field, up to second order, for a ΛCDM Universe in the presence of a local primordial non-Gaussian component. Finally, we also comment on recent claims about whether mass conservation in the Lagrangian frame is violated.

  19. On Endomorphisms of Quantum Tensor Space

    NASA Astrophysics Data System (ADS)

    Lehrer, Gustav Isaac; Zhang, Ruibin

    2008-12-01

    We give a presentation of the endomorphism algebra End_{U_q(sl_2)}(V^{⊗r}), where V is the three-dimensional irreducible module for quantum sl_2 over the function field C(q^{1/2}). This will be as a quotient of the Birman-Wenzl-Murakami algebra BMW_r(q) := BMW_r(q^{-4}, q^2 - q^{-2}) by an ideal generated by a single idempotent Φ_q. Our presentation is in analogy with the case where V is replaced by the two-dimensional irreducible U_q(sl_2)-module, the BMW algebra is replaced by the Hecke algebra H_r(q) of type A_{r-1}, Φ_q is replaced by the quantum alternator in H_3(q), and the endomorphism algebra is the classical realisation of the Temperley-Lieb algebra on tensor space. In particular, we show that all relations among the endomorphisms defined by the R-matrices on V^{⊗r} are consequences of relations among the three R-matrices acting on V^{⊗4}. The proof makes extensive use of the theory of cellular algebras. Potential applications include the decomposition of tensor powers when q is a root of unity.

  20. A self-documenting source-independent data format for computer processing of tensor time series. [for filing satellite geophysical data

    NASA Technical Reports Server (NTRS)

    Mcpherron, R. L.

    1976-01-01

    The UCLA Space Science Group has developed a fixed format intermediate data set called a block data set, which is designed to hold multiple segments of multicomponent sampled data series. The format is sufficiently general so that tensor functions of one or more independent variables can be stored in the form of virtual data. This makes it possible for the unit data records of the block data set to be arrays of a single dependent variable rather than discrete samples. The format is self-documenting with parameter, label and header records completely characterizing the contents of the file. The block data set has been applied to the filing of satellite data (of ATS-6 among others).
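    The idea of a self-documenting record, header metadata that fully characterizes the data arrays stored with it, can be sketched as follows; the field names and values are hypothetical, not the actual block data set layout:

```python
import io
import json
import numpy as np

# A minimal sketch (not the UCLA format itself): bundle a header that
# completely describes the unit data record alongside the array itself.
header = {
    "mission": "ATS-6",               # hypothetical example values
    "quantity": "magnetic_field",
    "components": ["Bx", "By", "Bz"],
    "units": "nT",
    "cadence_s": 1.0,
}
data = np.zeros((3, 100))             # one unit record: a 3-component series

buf = io.BytesIO()
np.savez(buf, header=json.dumps(header), data=data)
buf.seek(0)

# A reader needs nothing but the file: the header makes it self-describing.
restored = np.load(buf, allow_pickle=False)
meta = json.loads(restored["header"].item())
```

    The point mirrored here is that parameter and label information travels with the data, so unit records can be interpreted without external documentation.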

  1. Reducing tensor magnetic gradiometer data for unexploded ordnance detection

    USGS Publications Warehouse

    Bracken, Robert E.; Brown, Philip J.

    2005-01-01

    We performed a survey to demonstrate the effectiveness of a prototype tensor magnetic gradiometer system (TMGS) for detection of buried unexploded ordnance (UXO). In order to achieve a useful result, we designed a data-reduction procedure that resulted in a realistic magnetic gradient tensor and devised a simple way of viewing complicated tensor data, not only to assess the validity of the final resulting tensor, but also to preview the data at interim stages of processing. The final processed map of the surveyed area clearly shows a sharp anomaly that peaks almost directly over the target UXO. This map agrees well with a modeled map derived from dipolar sources near the known target locations. From this agreement, it can be deduced that the reduction process is valid, making the prototype TMGS a foundation for development of future systems and processes.

  2. Relationship between the Decomposition Process of Coarse Woody Debris and Fungal Community Structure as Detected by High-Throughput Sequencing in a Deciduous Broad-Leaved Forest in Japan

    PubMed Central

    Yamashita, Satoshi; Masuya, Hayato; Abe, Shin; Masaki, Takashi; Okabe, Kimiko

    2015-01-01

    We examined the relationship between the community structure of wood-decaying fungi, detected by high-throughput sequencing, and the decomposition rate using 13 years of data from a forest dynamics plot. For molecular analysis and wood density measurements, drill dust samples were collected from logs and stumps of Fagus and Quercus in the plot. Regression using a negative exponential model between wood density and time since death revealed that the decomposition rate of Fagus was greater than that of Quercus. The residual between the expected value obtained from the regression curve and the observed wood density was used as a decomposition rate index. Principal component analysis showed that the fungal community compositions of both Fagus and Quercus changed with time since death. Principal component analysis axis scores were used as an index of fungal community composition. A structural equation model for each wood genus was used to assess the effect of fungal community structure traits on the decomposition rate and how the fungal community structure was determined by the traits of coarse woody debris. Results of the structural equation model suggested that the decomposition rate of Fagus was affected by two fungal community composition components: one that was affected by time since death and another that was not affected by the traits of coarse woody debris. In contrast, the decomposition rate of Quercus was not affected by coarse woody debris traits or fungal community structure. These findings suggest that, in the case of Fagus coarse woody debris, the fungal community structure is related to the decomposition process of its host substrate. Because fungal community structure is affected partly by the decay stage and wood density of its substrate, these factors influence each other. Further research on interactive effects is needed to improve our understanding of the relationship between fungal community structure and the woody debris decomposition process
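    The negative exponential model used here, rho(t) = rho0 * exp(-k t), can be fitted by log-linear least squares, and the residual from the fitted curve is then usable as a decomposition rate index; the data below are synthetic, not the study's measurements:

```python
import numpy as np

# Fit rho(t) = rho0 * exp(-k t) by regressing log(rho) on t (made-up data).
t = np.array([0.0, 2.0, 5.0, 8.0, 13.0])   # years since death
rho = 0.60 * np.exp(-0.12 * t)              # wood density, g/cm^3

slope, intercept = np.polyfit(t, np.log(rho), 1)
k = -slope                                  # decomposition rate constant
rho0 = np.exp(intercept)

# Residual between observed and model-predicted (log) density: the sign and
# size of this quantity is the kind of "decomposition rate index" described.
residual = np.log(rho) - (intercept + slope * t)
```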

  3. Thermal decomposition of tetramethyl orthosilicate in the gas phase: An experimental and theoretical study of the initiation process

    SciTech Connect

    Chu, J.C.S.; Soller, R.; Lin, M.C.; Melius, C.F.

    1995-01-12

    The thermal decomposition of Si(OCH3)4 (TMOS) has been studied by FTIR at temperatures between 858 and 968 K. The experiment was carried out in a static cell at a constant pressure of 700 Torr under highly diluted conditions. Additional experiments were performed using toluene as a radical scavenger. The species monitored included TMOS, CH2O, CH4, and CO. According to these measurements, the first-order global rate constants for the disappearance of TMOS without and with toluene can be given by k_g = 1.4 x 10^16 exp(-81 200/RT) s^-1 and k_g = 2.0 x 10^14 exp(-74 500/RT) s^-1, respectively. The noticeable difference between the two sets of Arrhenius parameters suggests that, in the absence of the inhibitor, the reactant was consumed to a significant extent by radical attacks at higher temperatures. The experimental data were kinetically modeled with the aid of a quantum-chemical calculation using the BAC-MP4 method. The results of the kinetic modeling, using the mechanism constructed on the basis of the quantum-chemical data and the known C/H/O chemistry, identified two rate-controlling reactions whose first-order rate constants are given here. 22 refs., 15 figs., 3 tabs.
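    The reported global rate expressions are ordinary Arrhenius forms and can be evaluated directly; the sketch below assumes the activation energies are in cal/mol (R = 1.987 cal mol^-1 K^-1), which the abstract does not state explicitly:

```python
import math

# Evaluate the two reported global Arrhenius expressions at a mid-range
# temperature. Units of the activation energies are assumed to be cal/mol,
# an assumption on our part, not stated in the abstract.
R = 1.987            # cal mol^-1 K^-1
T = 900.0            # K, within the 858-968 K range studied

k_neat    = 1.4e16 * math.exp(-81200.0 / (R * T))   # no radical scavenger
k_toluene = 2.0e14 * math.exp(-74500.0 / (R * T))   # with toluene

# The ratio k_neat / k_toluene > 1 at high T reflects the extra consumption
# of TMOS by radical attack when the inhibitor is absent.
ratio = k_neat / k_toluene
```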

  4. Mode decomposition evolution equations

    PubMed Central

    Wang, Yang; Wei, Guo-Wei; Yang, Siyang

    2011-01-01

    Partial differential equation (PDE) based methods have become some of the most powerful tools for exploring fundamental problems in signal processing, image processing, computer vision, machine vision and artificial intelligence in the past two decades. The advantages of PDE based approaches are that they can be made fully automatic and robust for the analysis of images, videos and high dimensional data. A fundamental question is whether one can use PDEs to perform all the basic tasks in image processing. If one can devise PDEs to perform full-scale mode decomposition for signals and images, the modes thus generated would be very useful for secondary processing to meet the needs of various types of signal and image processing. Despite great progress in PDE based image analysis in the past two decades, the basic roles of PDEs in image/signal analysis have been limited to PDE based low-pass filters and their applications to noise removal, edge detection, segmentation, etc. At present, it is not clear how to construct PDE based methods for full-scale mode decomposition. The above-mentioned limitation of most current PDE based image/signal processing methods is addressed in the proposed work, in which we introduce a family of mode decomposition evolution equations (MoDEEs) for a vast variety of applications. The MoDEEs are constructed as an extension of a PDE based high-pass filter (Europhys. Lett., 59(6): 814, 2002) by using arbitrarily high order PDE based low-pass filters introduced by Wei (IEEE Signal Process. Lett., 6(7): 165, 1999). The use of arbitrarily high order PDEs is essential to the frequency localization in the mode decomposition. Similar to the wavelet transform, the present MoDEEs have controllable time-frequency localization and allow a perfect reconstruction of the original function. Therefore, the MoDEE operation is also called a PDE transform. However, modes generated from the present approach are in the spatial or time domain and can be

  5. 3-D inversion of magnetotelluric Phase Tensor

    NASA Astrophysics Data System (ADS)

    Patro, Prasanta; Uyeshima, Makoto

    2010-05-01

    Three-dimensional (3-D) inversion of magnetotelluric (MT) data has become routine practice in the MT community due to progress in algorithms for 3-D inverse problems (e.g. Mackie and Madden, 1993; Siripunvaraporn et al., 2005). While the availability of such 3-D inversion codes has increased the resolving power of MT data and improved interpretation, galvanic effects still pose difficulties in interpreting the resistivity structure obtained from MT data. To tackle the galvanic distortion of MT data, Caldwell et al. (2004) introduced the concept of the phase tensor. They demonstrated how the regional phase information can be retrieved from the observed impedance tensor without any assumptions about structural dimension, where both the near-surface inhomogeneity and the regional conductivity structures can be 3-D. We modified a 3-D inversion code (Siripunvaraporn et al., 2005) to directly invert the phase tensor elements. We present here the main modifications made in the sensitivity calculation and then show a few synthetic studies and an application to real data. The synthetic model study suggests that the prior model (m_0) setting is important for retrieving the true model, because the phase tensor inversion process lacks information on the correct induction scale length. Comparison between results from conventional impedance inversion and the new phase tensor inversion suggests that, despite the presence of galvanic distortion (due to near-surface checkerboard anomalies in our case), the new inversion algorithm retrieves the regional conductivity structure reliably. We applied the new inversion to real data from the Indian subcontinent and compared the results with those from conventional impedance inversion.
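    The phase tensor of Caldwell et al. (2004) is Φ = X⁻¹Y for an impedance tensor Z = X + iY, and a real galvanic distortion matrix C (acting as Z → CZ) cancels out of it, which is the property the inversion exploits. A small numerical check with synthetic matrices:

```python
import numpy as np

# Phase tensor: Z = X + iY, Phi = X^{-1} Y (Caldwell et al., 2004).
# Synthetic 2x2 real and imaginary parts, not field data.
rng = np.random.default_rng(1)
X = rng.standard_normal((2, 2))
Y = rng.standard_normal((2, 2))

phi = np.linalg.solve(X, Y)               # X^{-1} Y without explicit inverse

# Apply a (random, real) galvanic distortion C to the impedance: Z -> C Z.
C = rng.standard_normal((2, 2))
phi_distorted = np.linalg.solve(C @ X, C @ Y)

# (CX)^{-1}(CY) = X^{-1} C^{-1} C Y = X^{-1} Y: the distortion drops out.
assert np.allclose(phi, phi_distorted)
```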

  6. Skyrme tensor force in heavy ion collisions

    NASA Astrophysics Data System (ADS)

    Stevenson, P. D.; Suckling, E. B.; Fracasso, S.; Barton, M. C.; Umar, A. S.

    2016-05-01

    Background: It is generally acknowledged that the time-dependent Hartree-Fock (TDHF) method provides a useful foundation for a fully microscopic many-body theory of low-energy heavy ion reactions. The TDHF method is also known in nuclear physics in the small-amplitude domain, where it provides a useful description of collective states, and is based on the mean-field formalism, which has been a relatively successful approximation to the nuclear many-body problem. Currently, the TDHF theory is being widely used in the study of fusion excitation functions, fission, and deep-inelastic scattering of heavy mass systems, while providing a natural foundation for many other studies. Purpose: With the advancement of computational power it is now possible to undertake TDHF calculations without any symmetry assumptions and incorporate the major strides made by the nuclear structure community in improving the energy density functionals used in these calculations. In particular, time-odd and tensor terms in these functionals are naturally present during the dynamical evolution, while being absent or minimally important for most static calculations. The parameters of these terms are determined by the requirement of Galilean invariance or local gauge invariance, but their significance for the reaction dynamics has not been fully studied. This work addresses this question with emphasis on the tensor force. Method: The full version of the Skyrme force, including terms arising only from the Skyrme tensor force, is applied to the study of collisions within a completely symmetry-unrestricted TDHF implementation. Results: We examine the effect on upper fusion thresholds with and without the tensor force terms and find an effect on the fusion threshold energy of the order of several MeV. Details of the distribution of the energy within terms in the energy density functional are also discussed. Conclusions: Terms in the energy density functional linked to the tensor force can play a non

  7. Extracting the diffusion tensor from molecular dynamics simulation with Milestoning

    SciTech Connect

    Mugnai, Mauro L.; Elber, Ron

    2015-01-07

    We propose an algorithm to extract the diffusion tensor from Molecular Dynamics simulations with Milestoning. A Kramers-Moyal expansion of a discrete master equation, which is the Markovian limit of the Milestoning theory, determines the diffusion tensor. To test the algorithm, we analyze overdamped Langevin trajectories and recover a multidimensional Fokker-Planck equation. The recovery process determines the flux through a mesh and estimates local kinetic parameters. Rate coefficients are converted to the derivatives of the potential of mean force and to a coordinate-dependent diffusion tensor. We illustrate the computation on simple models and on an atomically detailed system: the diffusion along the backbone torsions of a solvated alanine dipeptide.
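    For a free overdamped Langevin trajectory, a scalar diffusion coefficient can be recovered from the short-time mean-squared displacement, D = <dx^2> / (2 dt); this toy estimator only sketches the idea of extracting kinetics from trajectory statistics, and is not the Milestoning procedure of the paper:

```python
import numpy as np

# Generate a free overdamped Langevin (Brownian) trajectory with known D,
# then recover D from the variance of the single-step increments.
rng = np.random.default_rng(2)
D_true, dt, n = 0.5, 1e-3, 200_000
steps = np.sqrt(2.0 * D_true * dt) * rng.standard_normal(n)
x = np.cumsum(steps)

dx = np.diff(x)
D_est = np.mean(dx**2) / (2.0 * dt)   # short-time MSD estimator
```

    With a potential of mean force present, the same increment statistics acquire a drift term, and position-dependent estimates yield a coordinate-dependent diffusion profile, which is the quantity the paper targets.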

  8. Link prediction on evolving graphs using matrix and tensor factorizations.

    SciTech Connect

    Dunlavy, Daniel M.; Acar, Evrim; Kolda, Tamara Gibson

    2010-06-01

    The data in many disciplines such as social networks, web analysis, etc. is link-based, and the link structure can be exploited for many different data mining tasks. In this paper, we consider the problem of temporal link prediction: Given link data for time periods 1 through T, can we predict the links in time period T + 1? Specifically, we look at bipartite graphs changing over time and consider matrix- and tensor-based methods for predicting links. We present a weight-based method for collapsing multi-year data into a single matrix. We show how the well-known Katz method for link prediction can be extended to bipartite graphs and, moreover, approximated in a scalable way using a truncated singular value decomposition. Using a CANDECOMP/PARAFAC tensor decomposition of the data, we illustrate the usefulness of exploiting the natural three-dimensional structure of temporal link data. Through several numerical experiments, we demonstrate that both matrix- and tensor-based techniques are effective for temporal link prediction despite the inherent difficulty of the problem.
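    One form of the bipartite Katz extension sums odd-length walks, S = sum_i beta^(2i+1) (A A^T)^i A, which the SVD A = U diag(s) V^T collapses to U diag(beta*s / (1 - beta^2 s^2)) V^T; truncating the SVD then gives a scalable approximation. This is a sketch consistent with the approach described, on a made-up adjacency matrix, not the paper's data or code:

```python
import numpy as np

# Toy bipartite adjacency (e.g. 3 users x 3 items).
A = np.array([[1, 0, 1],
              [1, 1, 0],
              [0, 1, 0]], dtype=float)
beta = 0.1                     # must satisfy beta * s_max < 1 for convergence

# Closed form via the SVD (a rank-k truncation of U, s, Vt scales this up).
U, s, Vt = np.linalg.svd(A, full_matrices=False)
S_svd = U @ np.diag(beta * s / (1.0 - beta**2 * s**2)) @ Vt

# Cross-check against a direct partial sum of the odd-walk series.
S_series = np.zeros_like(A)
term = beta * A
for _ in range(50):
    S_series += term
    term = beta**2 * (A @ A.T) @ term

assert np.allclose(S_svd, S_series)
```

    Large entries of S score user-item pairs as likely future links.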

  9. Parallel Tensor Compression for Large-Scale Scientific Data.

    SciTech Connect

    Kolda, Tamara G.; Ballard, Grey; Austin, Woody Nathan

    2015-10-01

    As parallel computing trends towards the exascale, scientific data produced by high-fidelity simulations are growing increasingly massive. For instance, a simulation on a three-dimensional spatial grid with 512 points per dimension that tracks 64 variables per grid point for 128 time steps yields 8 TB of data. By viewing the data as a dense five-way tensor, we can compute a Tucker decomposition to find inherent low-dimensional multilinear structure, achieving compression ratios of up to 10000 on real-world data sets with negligible loss in accuracy. So that we can operate on such massive data, we present the first-ever distributed memory parallel implementation for the Tucker decomposition, whose key computations correspond to parallel linear algebra operations, albeit with nonstandard data layouts. Our approach specifies a data distribution for tensors that avoids any tensor data redistribution, either locally or in parallel. We provide accompanying analysis of the computation and communication costs of the algorithms. To demonstrate the compression and accuracy of the method, we apply our approach to real-world data sets from combustion science simulations. We also provide detailed performance results, including parallel performance in both weak and strong scaling experiments.
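    A Tucker decomposition can be computed by the truncated higher-order SVD; the sequential sketch below (a made-up 8x8x8 tensor of multilinear rank (2,2,2), so compression is exact) illustrates the compression idea, whereas the paper's contribution is the distributed-memory parallel version:

```python
import numpy as np

def mode_product(T, U, mode):
    """Multiply tensor T along `mode` by matrix U (shape (r, T.shape[mode]))."""
    return np.moveaxis(np.tensordot(U, np.moveaxis(T, mode, 0), axes=1), 0, mode)

def hosvd(T, ranks):
    """Truncated higher-order SVD: a standard sequential way to compute a
    Tucker decomposition (a sketch, not the paper's parallel algorithm)."""
    factors = []
    for mode, r in enumerate(ranks):
        unfolding = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)
        U, _, _ = np.linalg.svd(unfolding, full_matrices=False)
        factors.append(U[:, :r])
    core = T
    for mode, U in enumerate(factors):
        core = mode_product(core, U.T, mode)
    return core, factors

# Build an 8x8x8 tensor with multilinear rank (2,2,2).
rng = np.random.default_rng(3)
G = rng.standard_normal((2, 2, 2))
P, Q, S = (rng.standard_normal((8, 2)) for _ in range(3))
T = np.einsum('abc,ia,jb,kc->ijk', G, P, Q, S)

core, factors = hosvd(T, (2, 2, 2))

# Reconstruct: 512 stored entries reduced to 8 (core) + 3 x 16 (factors).
T_hat = core
for mode, U in enumerate(factors):
    T_hat = mode_product(T_hat, U, mode)
```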

  10. Tensor Network Contractions for #SAT

    NASA Astrophysics Data System (ADS)

    Biamonte, Jacob D.; Morton, Jason; Turner, Jacob

    2015-09-01

    The computational cost of counting the number of solutions satisfying a Boolean formula, which is a problem instance of #SAT, has proven subtle to quantify. Even when finding individual satisfying solutions is computationally easy (e.g. 2-SAT, which is in P), determining the number of solutions can be #P-hard. Recently, computational methods simulating quantum systems have experienced advancements due to the development of tensor network algorithms and associated quantum physics-inspired techniques. By these methods, we give an algorithm using an axiomatic tensor contraction language for n-variable #SAT instances with complexity where c is the number of COPY-tensors, g is the number of gates, and d is the maximal degree of any COPY-tensor. Thus, n-variable counting problems can be solved efficiently when their tensor network expression has at most COPY-tensors and polynomial fan-out. This framework also admits an intuitive proof of a variant of the Tovey conjecture (the r,1-SAT instance of the Dubois-Tovey theorem). This study increases the theory, expressiveness and application of tensor based algorithmic tools and provides an alternative insight on these problems which have a long history in statistical physics and computer science.
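    The core idea, clauses become 0/1 tensors and a repeated variable becomes a COPY-tensor, can be illustrated with a two-clause formula contracted by einsum, where a shared index plays the role of the COPY-tensor (a toy illustration, not the paper's algorithm):

```python
import numpy as np

# Clause tensors: entry 1 iff the clause is satisfied by that assignment.
OR      = np.array([[0, 1], [1, 1]], dtype=np.int64)   # x OR y
NOTX_OR = np.array([[1, 1], [0, 1]], dtype=np.int64)   # (NOT x) OR y

# Model count of (x OR y) AND ((NOT x) OR y): contracting over the shared
# indices x and y multiplies clause truth values and sums over assignments.
count = np.einsum('xy,xy->', OR, NOTX_OR)
# count == 2: the satisfying assignments are (x, y) = (0, 1) and (1, 1)
```

    The contraction cost grows with the degree and number of such shared (COPY) indices, which is exactly the parameter the complexity bound above is phrased in.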

  11. Tensor-network algorithm for nonequilibrium relaxation in the thermodynamic limit

    NASA Astrophysics Data System (ADS)

    Hotta, Yoshihito

    2016-06-01

    We propose a tensor-network algorithm for discrete-time stochastic dynamics of a homogeneous system in the thermodynamic limit. We map a d-dimensional nonequilibrium Markov process to a (d+1)-dimensional infinite tensor network by using a higher-order singular-value decomposition. As an application of the algorithm, we compute the nonequilibrium relaxation from a fully magnetized state to equilibrium of the one- and two-dimensional Ising models with periodic boundary conditions. Utilizing the translational invariance of the systems, we analyze the behavior in the thermodynamic limit directly. We estimated the dynamical critical exponent z = 2.16(5) for the two-dimensional Ising model. Our approach fits well with the framework of the nonequilibrium-relaxation method. Our algorithm can compute the time evolution of the magnetization of a large system precisely for a relatively short period. In the nonequilibrium-relaxation method, one needs to simulate the dynamics of a large system for a short time. The combination of the two provides a different approach to the study of critical phenomena.

  12. MATLAB tensor classes for fast algorithm prototyping.

    SciTech Connect

    Bader, Brett William; Kolda, Tamara Gibson

    2004-10-01

    Tensors (also known as multidimensional arrays or N-way arrays) are used in a variety of applications ranging from chemometrics to psychometrics. We describe four MATLAB classes for tensor manipulations that can be used for fast algorithm prototyping. The tensor class extends the functionality of MATLAB's multidimensional arrays by supporting additional operations such as tensor multiplication. The tensor_as_matrix class supports the 'matricization' of a tensor, i.e., the conversion of a tensor to a matrix (and vice versa), a commonly used operation in many algorithms. Two additional classes represent tensors stored in decomposed formats: cp_tensor and tucker_tensor. We describe all of these classes and then demonstrate their use by showing how to implement several tensor algorithms that have appeared in the literature.
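    The matricization operation supported by such a class can be sketched in a few lines (a NumPy illustration with a hypothetical helper name; the MATLAB classes follow the Kolda-Bader conventions, whose column ordering may differ from this one):

    ```python
    import numpy as np

    def matricize(X, mode):
        """Mode-n unfolding: bring axis `mode` to the front, flatten the rest,
        so each column is a mode-n fiber of the tensor."""
        return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

    X = np.arange(24).reshape(2, 3, 4)
    print(matricize(X, 0).shape)  # (2, 12)
    print(matricize(X, 1).shape)  # (3, 8)
    print(matricize(X, 2).shape)  # (4, 6)
    ```

    The inverse operation is the corresponding reshape followed by moving the axis back, which is what makes matricization convenient for expressing tensor-times-matrix products as ordinary matrix multiplications.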

  13. Tensor classification of structure in smoothed particle hydrodynamics density fields

    NASA Astrophysics Data System (ADS)

    Forgan, Duncan; Bonnell, Ian; Lucas, William; Rice, Ken

    2016-04-01

    As hydrodynamic simulations increase in scale and resolution, identifying structures with non-trivial geometries or regions of general interest becomes increasingly challenging. There is a growing need for algorithms that identify a variety of different features in a simulation without requiring a `by eye' search. We present tensor classification as such a technique for smoothed particle hydrodynamics (SPH). These methods have already been used to great effect in N-body cosmological simulations, which require a smoothing scale to be defined as an input free parameter. We show that tensor classification successfully identifies a wide range of structures in SPH density fields using its native smoothing, removing a free parameter from the analysis and avoiding the tessellation of the density field required by some classification algorithms. As examples, we show that tensor classification using the tidal tensor and the velocity shear tensor successfully identifies filaments, shells and sheet structures in giant molecular cloud simulations, as well as spiral arms in discs. The relationship between structures identified using different tensors illustrates how different forces compete and co-operate to produce the observed density field. We therefore advocate the use of multiple tensors to classify structure in SPH simulations, to shed light on the interplay of multiple physical processes.
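    The classification step in such schemes typically reduces to counting the eigenvalues of a symmetric tensor above a threshold. A minimal sketch of this convention (hypothetical helper name; the usual tidal-tensor labels are knot/cluster, filament, sheet and void):

    ```python
    import numpy as np

    def classify_tensor(T, threshold=0.0):
        """Label a point by the number of eigenvalues of the symmetric
        tensor T above `threshold`: 3 -> cluster, 2 -> filament,
        1 -> sheet, 0 -> void."""
        eigvals = np.linalg.eigvalsh(T)          # T is symmetric
        n_pos = int(np.sum(eigvals > threshold))
        return ['void', 'sheet', 'filament', 'cluster'][n_pos]

    # A toy symmetric tensor with two positive and one negative eigenvalue:
    T = np.diag([1.5, 0.8, -0.4])
    print(classify_tensor(T))  # filament
    ```

    Different choices of tensor (tidal, velocity shear) plug into the same eigenvalue-counting rule, which is what allows the comparisons between tensors described above.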

  14. Real-time framework for tensor-based image enhancement for object classification

    NASA Astrophysics Data System (ADS)

    Cyganek, Bogusław; Smołka, Bogdan

    2016-04-01

    In many practical situations, visual pattern recognition is heavily burdened by the low quality of input images caused by noise, geometrical distortions, and limitations of the acquisition hardware. Although techniques for image quality improvement exist, such as nonlinear filtering, only a few attempts reported in the literature try to build these enhancement methods into a complete chain for multi-dimensional object recognition, such as for color video or hyperspectral images. In this work we propose a joint multilinear signal filtering and classification system built upon the multi-dimensional (tensor) approach. Tensor filtering is performed by projecting the multi-dimensional input signal into the tensor subspace spanned by the best-rank tensor decomposition method. Object classification, in turn, is performed in a tensor subspace constructed with the Higher-Order Singular Value Decomposition method applied to the prototype patterns. In the experiments we show that the proposed chain achieves high object-recognition accuracy in real time, even from poor-quality prototypes. More importantly, the proposed framework allows unified classification of signals of arbitrary dimension, such as color images or video sequences, which are exemplars of 3D and 4D tensors, respectively. The paper also discusses some practical issues related to the implementation of the key components of the proposed system.
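    The subspace construction via the Higher-Order Singular Value Decomposition can be sketched compactly (a NumPy illustration with hypothetical helper names, not the authors' implementation): each factor is taken from the leading left singular vectors of the corresponding unfolding, and the core is the tensor multiplied by each factor transpose.

    ```python
    import numpy as np

    def unfold(X, mode):
        return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

    def hosvd(X, ranks):
        """Truncated HOSVD: factor k = leading left singular vectors of the
        mode-k unfolding; core = X contracted with each factor transpose."""
        factors = []
        for mode, r in enumerate(ranks):
            U, _, _ = np.linalg.svd(unfold(X, mode), full_matrices=False)
            factors.append(U[:, :r])
        core = X
        for mode, U in enumerate(factors):
            core = np.moveaxis(
                np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
        return core, factors

    X = np.random.rand(6, 5, 4)
    core, factors = hosvd(X, (3, 3, 3))
    print(core.shape)  # (3, 3, 3)
    ```

    With full ranks the factors are orthogonal and the transform preserves the Frobenius norm; truncating the ranks gives the compact subspace used for filtering and classification.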

  15. Metallo-Organic Decomposition (MOD) film development

    NASA Technical Reports Server (NTRS)

    Parker, J.

    1986-01-01

    The processing techniques and problems encountered in formulating metallo-organic decomposition (MOD) films used in contacting structures for thin solar cells are described. The use of thermogravimetric analysis (TGA) and differential scanning calorimetry (DSC) techniques performed at the Jet Propulsion Laboratory (JPL) to understand the decomposition reactions led to improvements in process procedures. The characteristics of the available MOD films are described in detail.

  16. Tensor-polarized structure functions: Tensor structure of deuteron in 2020's

    NASA Astrophysics Data System (ADS)

    Kumano, S.

    2014-10-01

    We explain the spin structure of a spin-one hadron, which has new structure functions, associated with its tensor structure, in addition to the ones (F1, F2, g1, g2) that exist for the spin-1/2 nucleon. The new structure functions are b1, b2, b3, and b4 in deep inelastic scattering of a charged lepton from a spin-one hadron such as the deuteron. Among them, the twist-two functions are related by the Callan-Gross-type relation b2 = 2xb1 in the Bjorken scaling limit. First, these new structure functions are introduced, and useful formulae are derived for projection operators of b1-4 from a hadron tensor Wμν. Second, a sum rule is explained for b1, and possible tensor-polarized distributions are discussed by using HERMES data in order to propose future experimental measurements and to compare them with theoretical models. A proposal to measure b1 at the Thomas Jefferson National Accelerator Facility (JLab) was approved, so much progress is expected for b1 in the near future. Third, formalisms of polarized proton-deuteron Drell-Yan processes are explained, in particular for probing tensor-polarized antiquark distributions, which were suggested by the HERMES data. The studies of the tensor-polarized structure functions will open a new era in the 2020s for tensor-structure studies in terms of quark and gluon degrees of freedom, which are very different from the ordinary descriptions in terms of nucleons and mesons.

  17. Attributing analysis on the model bias in surface temperature in the climate system model FGOALS-s2 through a process-based decomposition method

    NASA Astrophysics Data System (ADS)

    Yang, Yang; Ren, Rongcai; Cai, Ming; Rao, Jian

    2015-04-01

    This study uses the coupled atmosphere-surface climate feedback-response analysis method (CFRAM) to analyze the surface temperature biases in the Flexible Global Ocean-Atmosphere-Land System model, spectral version 2 (FGOALS-s2) in January and July. The process-based decomposition of the surface temperature biases, defined as the difference between the model and ERA-Interim during 1979-2005, enables us to attribute the model surface temperature biases to individual radiative processes including ozone, water vapor, cloud, and surface albedo; and non-radiative processes including surface sensible and latent heat fluxes, and dynamic processes at the surface and in the atmosphere. The results show that significant model surface temperature biases are almost globally present, are generally larger over land than over oceans, and are relatively larger in summer than in winter. Relative to the model biases in non-radiative processes, which tend to dominate the surface temperature biases in most parts of the world, biases in radiative processes are much smaller, except in the sub-polar Antarctic region where the cold biases from the much overestimated surface albedo are compensated for by the warm biases from non-radiative processes. The larger biases in non-radiative processes mainly lie in surface heat fluxes and in surface dynamics, which are twice as large in the Southern Hemisphere as in the Northern Hemisphere and always tend to compensate for each other. In particular, the upward/downward heat fluxes are systematically underestimated/overestimated in most parts of the world, and are mainly compensated for by surface dynamic processes including the increased heat storage in deep oceans across the globe.

  18. Hydrogen peroxide catalytic decomposition

    NASA Technical Reports Server (NTRS)

    Parrish, Clyde F. (Inventor)

    2010-01-01

    Nitric oxide in a gaseous stream is converted to nitrogen dioxide using oxidizing species generated through the use of concentrated hydrogen peroxide fed as a monopropellant into a catalyzed thruster assembly. The hydrogen peroxide is preferably stored at stable concentration levels, i.e., approximately 50%-70% by volume, and may be increased in concentration in a continuous process preceding decomposition in the thruster assembly. The exhaust of the thruster assembly, rich in hydroxyl and/or hydroperoxy radicals, may be fed into a stream containing oxidizable components, such as nitric oxide, to facilitate their oxidation.

  19. Benefits and Costs of Lexical Decomposition and Semantic Integration during the Processing of Transparent and Opaque English Compounds

    ERIC Educational Resources Information Center

    Ji, Hongbo; Gagne, Christina L.; Spalding, Thomas L.

    2011-01-01

    Six lexical decision experiments were conducted to examine the influence of complex structure on the processing speed of English compounds. All experiments revealed that semantically transparent compounds (e.g., "rosebud") were processed more quickly than matched monomorphemic words (e.g., "giraffe"). Opaque compounds (e.g., "hogwash") were also…

  20. Tensor visualizations in computational geomechanics

    NASA Astrophysics Data System (ADS)

    Jeremić, Boris; Scheuermann, Gerik; Frey, Jan; Yang, Zhaohui; Hamann, Bernd; Joy, Kenneth I.; Hagen, Hans

    2002-08-01

    We present a novel technique for visualizing tensors in three-dimensional (3D) space. Of particular interest is the visualization of stress tensors resulting from 3D numerical simulations in computational geomechanics. To this end we present three different approaches to visualizing tensors in 3D space, namely hedgehogs, hyperstreamlines and hyperstreamsurfaces. We also present a number of examples related to stress distributions in 3D solids subjected to single loads and load couples. In addition, we present stress visualizations resulting from single-pile and pile-group computations. The main objective of this work is to investigate various techniques for visualizing general Cartesian tensors of rank 2 and their application to geomechanics problems.

  1. Understanding the systematic air temperature biases in a coupled climate system model through a process-based decomposition method

    NASA Astrophysics Data System (ADS)

    Ren, R.-C.; Yang, Yang; Cai, Ming; Rao, Jian

    2015-10-01

    A quantitative attribution analysis is performed on the systematic atmospheric temperature biases in a coupled climate system model (Flexible Global Ocean-Atmosphere-Land System model, spectral version 2) in reference to the European Centre for Medium-Range Weather Forecasts Re-analysis Interim data during 1979-2005. By adopting the coupled surface-atmosphere climate feedback-response analysis method, the model temperature biases are related to model biases in representing the radiative processes including water vapor, ozone, clouds and surface albedo, and the non-radiative processes including surface heat fluxes and other dynamic processes. The results show that the temperature biases due to biases in radiative and non-radiative processes tend to compensate one another. In general, the radiative biases tend to dominate in the summer hemisphere, whereas the non-radiative biases dominate in the winter hemisphere. The temperature biases associated with radiative processes due to biases in ozone and water vapor content are the main contributors to the total temperature bias in the tropical and summer stratosphere. The overestimated surface albedo in both polar regions always results in significant cold biases in the atmosphere above in the summer season. Apart from these radiative biases, the zonal-mean patterns of the temperature biases in both boreal winter and summer are largely determined by model biases in non-radiative processes. In particular, the stronger non-radiative process biases in the northern winter hemisphere are responsible for the relatively larger `cold pole' bias in the northern winter polar stratosphere.

  2. Scalable tensor factorizations with incomplete data.

    SciTech Connect

    Morup, Morten; Dunlavy, Daniel M.; Acar, Evrim; Kolda, Tamara Gibson

    2010-07-01

    The problem of incomplete data - i.e., data with missing or unknown values - in multi-way arrays is ubiquitous in biomedical signal processing, network traffic analysis, bibliometrics, social network analysis, chemometrics, computer vision, communication networks, etc. We consider the problem of how to factorize data sets with missing values with the goal of capturing the underlying latent structure of the data and possibly reconstructing missing values (i.e., tensor completion). We focus on one of the most well-known tensor factorizations that captures multi-linear structure, CANDECOMP/PARAFAC (CP). In the presence of missing data, CP can be formulated as a weighted least squares problem that models only the known entries. We develop an algorithm called CP-WOPT (CP Weighted OPTimization) that uses a first-order optimization approach to solve the weighted least squares problem. Based on extensive numerical experiments, our algorithm is shown to successfully factorize tensors with noise and up to 99% missing data. A unique aspect of our approach is that it scales to sparse large-scale data, e.g., 1000 x 1000 x 1000 with five million known entries (0.5% dense). We further demonstrate the usefulness of CP-WOPT on two real-world applications: a novel EEG (electroencephalogram) application where missing data is frequently encountered due to disconnections of electrodes and the problem of modeling computer network traffic where data may be absent due to the expense of the data collection process.
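    The weighted least-squares objective that models only the known entries can be sketched with plain gradient descent on the factor matrices (an illustration of the masked objective only; CP-WOPT itself is a more sophisticated first-order method, and all names here are hypothetical):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def cp_reconstruct(A, B, C):
        # X_hat[i,j,k] = sum_r A[i,r] * B[j,r] * C[k,r]
        return np.einsum('ir,jr,kr->ijk', A, B, C)

    # Ground-truth rank-2 tensor with roughly 40% of its entries missing.
    I, J, K, R = 8, 7, 6, 2
    X = cp_reconstruct(rng.standard_normal((I, R)),
                       rng.standard_normal((J, R)),
                       rng.standard_normal((K, R)))
    W = (rng.random((I, J, K)) > 0.4).astype(float)  # 1 = observed, 0 = missing

    def masked_loss(A, B, C):
        # Weighted least squares: only observed entries contribute.
        return 0.5 * np.sum((W * (cp_reconstruct(A, B, C) - X)) ** 2)

    A = 0.1 * rng.standard_normal((I, R))
    B = 0.1 * rng.standard_normal((J, R))
    C = 0.1 * rng.standard_normal((K, R))
    loss0 = masked_loss(A, B, C)

    lr = 0.01
    for _ in range(1000):
        E = W * (cp_reconstruct(A, B, C) - X)   # residual on known entries only
        Ga = np.einsum('ijk,jr,kr->ir', E, B, C)
        Gb = np.einsum('ijk,ir,kr->jr', E, A, C)
        Gc = np.einsum('ijk,ir,jr->kr', E, A, B)
        A, B, C = A - lr * Ga, B - lr * Gb, C - lr * Gc

    print(masked_loss(A, B, C) / loss0)  # fraction of initial loss remaining
    ```

    Because the mask W multiplies the residual, missing entries exert no force on the factors; if the recovered factors capture the latent structure, evaluating the model at the masked positions performs tensor completion.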

  3. Tensor integrand reduction via Laurent expansion

    DOE PAGESBeta

    Hirschi, Valentin; Peraro, Tiziano

    2016-06-09

    We introduce a new method for the application of one-loop integrand reduction via the Laurent expansion algorithm, as implemented in the public C++ library Ninja. We show how the coefficients of the Laurent expansion can be computed by suitable contractions of the loop numerator tensor with cut-dependent projectors, making it possible to interface Ninja to any one-loop matrix element generator that can provide the components of this tensor. We implemented this technique in the Ninja library and interfaced it to MadLoop, which is part of the public MadGraph5_aMC@NLO framework. We performed a detailed performance study, comparing against other public reduction tools, namely CutTools, Samurai, IREGI, PJFry++ and Golem95. We find that Ninja out-performs traditional integrand reduction in both speed and numerical stability, the latter being on par with that of the tensor integral reduction tool Golem95, which is however more limited and slower than Ninja. Lastly, we considered many benchmark multi-scale processes of increasing complexity, involving QCD and electro-weak corrections as well as effective non-renormalizable couplings, showing that Ninja's performance scales well with both the rank and multiplicity of the considered process.

  4. Tensor integrand reduction via Laurent expansion

    NASA Astrophysics Data System (ADS)

    Hirschi, Valentin; Peraro, Tiziano

    2016-06-01

    We introduce a new method for the application of one-loop integrand reduction via the Laurent expansion algorithm, as implemented in the public C++ library Ninja. We show how the coefficients of the Laurent expansion can be computed by suitable contractions of the loop numerator tensor with cut-dependent projectors, making it possible to interface Ninja to any one-loop matrix element generator that can provide the components of this tensor. We implemented this technique in the Ninja library and interfaced it to MadLoop, which is part of the public MadGraph5_aMC@NLO framework. We performed a detailed performance study, comparing against other public reduction tools, namely CutTools, Samurai, IREGI, PJFry++ and Golem95. We find that Ninja out-performs traditional integrand reduction in both speed and numerical stability, the latter being on par with that of the tensor integral reduction tool Golem95, which is however more limited and slower than Ninja. We considered many benchmark multi-scale processes of increasing complexity, involving QCD and electro-weak corrections as well as effective non-renormalizable couplings, showing that Ninja's performance scales well with both the rank and multiplicity of the considered process.

  5. A three-dimensional domain decomposition method for large-scale DFT electronic structure calculations

    NASA Astrophysics Data System (ADS)

    Duy, Truong Vinh Truong; Ozaki, Taisuke

    2014-03-01

    With tens of petaflops supercomputers already in operation and exaflops machines expected to appear within the next 10 years, efficient parallel computational methods are required to take advantage of such extreme-scale machines. In this paper, we present a three-dimensional domain decomposition scheme for enabling large-scale electronic structure calculations based on density functional theory (DFT) on massively parallel computers. It is composed of two methods: (i) the atom decomposition method and (ii) the grid decomposition method. In the former method, we develop a modified recursive bisection method based on the moment of inertia tensor to reorder the atoms along a principal axis so that atoms that are close in real space are also close on the axis to ensure data locality. The atoms are then divided into sub-domains depending on their projections onto the principal axis in a balanced way among the processes. In the latter method, we define four data structures for the partitioning of grid points that are carefully constructed to make data locality consistent with that of the clustered atoms for minimizing data communications between the processes. We also propose a decomposition method for solving the Poisson equation using the three-dimensional FFT in Hartree potential calculation, which is shown to be better in terms of communication efficiency than a previously proposed parallelization method based on a two-dimensional decomposition. For evaluation, we perform benchmark calculations with our open-source DFT code, OpenMX, paying particular attention to the O(N) Krylov subspace method. The results show that our scheme exhibits good strong and weak scaling properties, with the parallel efficiency at 131,072 cores being 67.7% compared to the baseline of 16,384 cores with 131,072 atoms of the diamond structure on the K computer.
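    The atom-reordering idea, projecting points onto the principal axis of a second-moment tensor so that spatial neighbors stay close in the one-dimensional ordering, can be sketched as follows (a simplified illustration with hypothetical names, not the paper's modified recursive bisection):

    ```python
    import numpy as np

    def principal_axis_order(positions, masses=None):
        """Order points by their projection onto the principal axis of the
        second-moment (inertia-like) tensor, so that points close in space
        tend to be close in the ordering."""
        P = np.asarray(positions, dtype=float)
        if masses is None:
            masses = np.ones(len(P))
        com = np.average(P, axis=0, weights=masses)
        Q = P - com
        # Second-moment tensor; its leading eigenvector is the principal axis.
        M = (masses[:, None] * Q).T @ Q
        _, vecs = np.linalg.eigh(M)        # eigenvalues in ascending order
        axis = vecs[:, -1]                 # largest-eigenvalue direction
        return np.argsort(Q @ axis)

    pts = np.array([[0., 0, 0], [10, 0, 0], [1, 0, 0], [9, 0, 0], [5, 0, 0]])
    print(principal_axis_order(pts))
    ```

    Splitting the resulting ordering into contiguous, equally weighted chunks then yields sub-domains whose atoms are spatially clustered, which is the data-locality property the decomposition relies on. Note that the eigenvector sign is arbitrary, so the ordering may come out reversed.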

  6. Visualizing second order tensor fields with hyperstreamlines

    NASA Technical Reports Server (NTRS)

    Delmarcelle, Thierry; Hesselink, Lambertus

    1993-01-01

    Hyperstreamlines are a generalization to second order tensor fields of the conventional streamlines used in vector field visualization. As opposed to point icons commonly used in visualizing tensor fields, hyperstreamlines form a continuous representation of the complete tensor information along a three-dimensional path. This technique is useful in visualizing both symmetric and unsymmetric three-dimensional tensor data. Several examples of tensor field visualization in solid materials and fluid flows are provided.

  7. Adaptive Multilinear Tensor Product Wavelets.

    PubMed

    Weiss, Kenneth; Lindstrom, Peter

    2016-01-01

    Many foundational visualization techniques including isosurfacing, direct volume rendering and texture mapping rely on piecewise multilinear interpolation over the cells of a mesh. However, there has not been much focus within the visualization community on techniques that efficiently generate and encode globally continuous functions defined by the union of multilinear cells. Wavelets provide a rich context for analyzing and processing complicated datasets. In this paper, we exploit adaptive regular refinement as a means of representing and evaluating functions described by a subset of their nonzero wavelet coefficients. We analyze the dependencies involved in the wavelet transform and describe how to generate and represent the coarsest adaptive mesh with nodal function values such that the inverse wavelet transform is exactly reproduced via simple interpolation (subdivision) over the mesh elements. This allows for an adaptive, sparse representation of the function with on-demand evaluation at any point in the domain. We focus on the popular wavelets formed by tensor products of linear B-splines, resulting in an adaptive, nonconforming but crack-free quadtree (2D) or octree (3D) mesh that allows reproducing globally continuous functions via multilinear interpolation over its cells. PMID:26529742
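    The predict step of a linear (interpolating) wavelet transform, the building block behind such tensor-product constructions, can be sketched via lifting in one dimension (an illustration with hypothetical names; the update/normalization steps of the full transform are omitted):

    ```python
    import numpy as np

    def forward_linear_lifting(x):
        """One level of the linear interpolating wavelet transform via lifting:
        detail = odd sample minus the average of its two even neighbors.
        Expects an odd number of samples so every odd index has neighbors."""
        even, odd = x[::2].copy(), x[1::2].copy()
        detail = odd - 0.5 * (even[:-1] + even[1:])   # predict step
        return even, detail

    def inverse_linear_lifting(even, detail):
        odd = detail + 0.5 * (even[:-1] + even[1:])   # undo the prediction
        x = np.empty(len(even) + len(detail))
        x[::2], x[1::2] = even, odd
        return x

    x = np.array([0., 1., 4., 9., 16., 25., 36., 49., 64.])
    even, detail = forward_linear_lifting(x)
    x_rec = inverse_linear_lifting(even, detail)
    print(np.allclose(x, x_rec))  # True
    ```

    Piecewise-linear signals produce exactly zero detail coefficients, which is why dropping small coefficients and interpolating multilinearly over the remaining adaptive mesh reproduces the inverse transform, as the paper exploits in higher dimensions via tensor products.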

  8. Conceptualizing and Estimating Process Speed in Studies Employing Ecological Momentary Assessment Designs: A Multilevel Variance Decomposition Approach

    ERIC Educational Resources Information Center

    Shiyko, Mariya P.; Ram, Nilam

    2011-01-01

    Researchers have been making use of ecological momentary assessment (EMA) and other study designs that sample feelings and behaviors in real time and in naturalistic settings to study temporal dynamics and contextual factors of a wide variety of psychological, physiological, and behavioral processes. As EMA designs become more widespread,…

  9. Tensor analysis methods for activity characterization in spatiotemporal data

    SciTech Connect

    Haass, Michael Joseph; Van Benthem, Mark Hilary; Ochoa, Edward M.

    2014-03-01

    Tensor (multiway array) factorization and decomposition offers unique advantages for activity characterization in spatio-temporal datasets because these methods are compatible with sparse matrices and maintain multiway structure that is otherwise lost in collapsing for regular matrix factorization. This report describes our research as part of the PANTHER LDRD Grand Challenge to develop a foundational basis of mathematical techniques and visualizations that enable unsophisticated users (e.g. users who are not steeped in the mathematical details of matrix algebra and multiway computations) to discover hidden patterns in large spatiotemporal data sets.

  10. Development of the Tensoral Computer Language

    NASA Technical Reports Server (NTRS)

    Ferziger, Joel; Dresselhaus, Eliot

    1996-01-01

    The research scientist or engineer wishing to perform large scale simulations or to extract useful information from existing databases is required to have expertise in the details of the particular database, the numerical methods and the computer architecture to be used. This poses a significant practical barrier to the use of simulation data. The goal of this research was to develop a high-level computer language called Tensoral, designed to remove this barrier. The Tensoral language provides a framework in which efficient generic data manipulations can be easily coded and implemented. First of all, Tensoral is general. The fundamental objects in Tensoral represent tensor fields and the operators that act on them. The numerical implementation of these tensors and operators is completely and flexibly programmable. New mathematical constructs and operators can be easily added to the Tensoral system. Tensoral is compatible with existing languages. Tensoral tensor operations co-exist in a natural way with a host language, which may be any sufficiently powerful computer language such as Fortran, C, or Vectoral. Tensoral is very-high-level. Tensor operations in Tensoral typically act on entire databases (i.e., arrays) at one time and may, therefore, correspond to many lines of code in a conventional language. Tensoral is efficient. Tensoral is a compiled language. Database manipulations are simplified, optimized, and scheduled by the compiler, eventually resulting in efficient machine code to implement them.