Science.gov

Sample records for processing tensor decomposition

  1. Tensor Decomposition for Signal Processing and Machine Learning

    NASA Astrophysics Data System (ADS)

    Sidiropoulos, Nicholas D.; De Lathauwer, Lieven; Fu, Xiao; Huang, Kejun; Papalexakis, Evangelos E.; Faloutsos, Christos

    2017-07-01

Tensors or multi-way arrays are functions of three or more indices (i, j, k, ...), similar to matrices (two-way arrays), which are functions of two indices (r, c) for (row, column). Tensors have a rich history, stretching over almost a century and touching upon numerous disciplines; but they have only recently become ubiquitous in signal and data analytics at the confluence of signal processing, statistics, data mining and machine learning. This overview article aims to provide a good starting point for researchers and practitioners interested in learning about and working with tensors. As such, it focuses on fundamentals and motivation (using various application examples), aiming to strike an appropriate balance of breadth and depth that will enable someone having taken first graduate courses in matrix algebra and probability to get started doing research and/or developing tensor algorithms and software. Some background in applied optimization is useful but not strictly required. The material covered includes tensor rank and rank decomposition; basic tensor factorization models and their relationships and properties (including fairly good coverage of identifiability); broad coverage of algorithms ranging from alternating optimization to stochastic gradient; statistical performance analysis; and applications ranging from source separation to collaborative filtering, mixture and topic modeling, classification, and multilinear subspace learning.
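The CP rank decomposition surveyed in this overview can be sketched in a few lines of NumPy: a rank-R third-order tensor is a sum of R outer products of columns of three factor matrices. The dimensions and names below are illustrative, not taken from the article.

```python
import numpy as np

def cp_to_tensor(A, B, C):
    """Assemble X[i,j,k] = sum_r A[i,r] * B[j,r] * C[k,r] (the CP model)."""
    return np.einsum('ir,jr,kr->ijk', A, B, C)

rng = np.random.default_rng(0)
I, J, K, R = 4, 5, 6, 2                     # illustrative sizes and rank
A = rng.normal(size=(I, R))
B = rng.normal(size=(J, R))
C = rng.normal(size=(K, R))
X = cp_to_tensor(A, B, C)                   # a tensor of rank at most R
print(X.shape)  # (4, 5, 6)
```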

  2. Generating functions for tensor product decomposition

    NASA Astrophysics Data System (ADS)

    Fuksa, Jan; Pošta, Severin

    2013-11-01

The paper deals with the tensor product decomposition problem. Tensor product decompositions are of great importance in quantum physics. A short outline of the state of the art for the case of semisimple Lie groups is given. The method of generating functions is used to solve tensor products; the corresponding generating function is rational. The feature of this technique lies in the fact that the decompositions of all tensor products of all irreducible representations are solved simultaneously. Obtaining the generating function is a difficult task in general. We propose some changes to an algorithm using Patera-Sharp character generators to find this generating function, which simplifies the whole problem to simple operations over rational functions.

  3. Tensor decomposition of EEG signals: a brief review.

    PubMed

    Cong, Fengyu; Lin, Qiu-Hua; Kuang, Li-Dan; Gong, Xiao-Feng; Astikainen, Piia; Ristaniemi, Tapani

    2015-06-15

Electroencephalography (EEG) is one fundamental tool for functional brain imaging. EEG signals tend to be represented by a vector or a matrix to facilitate data processing and analysis with generally understood methodologies like time-series analysis, spectral analysis and matrix decomposition. Indeed, EEG signals often naturally possess more than the two modes of time and space, and they can be denoted by a multi-way array called a tensor. This review summarizes the current progress of tensor decomposition of EEG signals in three aspects. The first concerns the existing modes and tensors of EEG signals. Second, two fundamental tensor decomposition models, canonical polyadic decomposition (CPD, also called parallel factor analysis, PARAFAC) and Tucker decomposition, are introduced and compared. Moreover, the applications of the two models to EEG signals are addressed. In particular, the determination of the number of components for each mode is discussed. Finally, the N-way partial least squares and higher-order partial least squares are described as a potential trend for processing and analyzing brain signals of two modalities simultaneously.

  4. An optimization approach for fitting canonical tensor decompositions.

    SciTech Connect

    Dunlavy, Daniel M.; Acar, Evrim; Kolda, Tamara Gibson

    2009-02-01

    Tensor decompositions are higher-order analogues of matrix decompositions and have proven to be powerful tools for data analysis. In particular, we are interested in the canonical tensor decomposition, otherwise known as the CANDECOMP/PARAFAC decomposition (CPD), which expresses a tensor as the sum of component rank-one tensors and is used in a multitude of applications such as chemometrics, signal processing, neuroscience, and web analysis. The task of computing the CPD, however, can be difficult. The typical approach is based on alternating least squares (ALS) optimization, which can be remarkably fast but is not very accurate. Previously, nonlinear least squares (NLS) methods have also been recommended; existing NLS methods are accurate but slow. In this paper, we propose the use of gradient-based optimization methods. We discuss the mathematical calculation of the derivatives and further show that they can be computed efficiently, at the same cost as one iteration of ALS. Computational experiments demonstrate that the gradient-based optimization methods are much more accurate than ALS and orders of magnitude faster than NLS.
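As a hedged illustration of the ALS baseline this abstract compares against (a toy sketch, not the authors' implementation), a minimal CP-ALS loop solves for one factor at a time via least squares on the three mode unfoldings:

```python
import numpy as np

def khatri_rao(B, C):
    """Column-wise Kronecker product of B (J x R) and C (K x R), shape (J*K, R)."""
    J, R = B.shape
    K = C.shape[0]
    return (B[:, None, :] * C[None, :, :]).reshape(J * K, R)

def cp_als(X, R, n_iter=300, seed=0):
    """Fit a rank-R CP model to a 3-way tensor X by alternating least squares."""
    rng = np.random.default_rng(seed)
    I, J, K = X.shape
    A = rng.normal(size=(I, R))
    B = rng.normal(size=(J, R))
    C = rng.normal(size=(K, R))
    X1 = X.reshape(I, J * K)                      # mode-1 unfolding
    X2 = np.moveaxis(X, 1, 0).reshape(J, I * K)   # mode-2 unfolding
    X3 = np.moveaxis(X, 2, 0).reshape(K, I * J)   # mode-3 unfolding
    for _ in range(n_iter):
        A = X1 @ np.linalg.pinv(khatri_rao(B, C)).T
        B = X2 @ np.linalg.pinv(khatri_rao(A, C)).T
        C = X3 @ np.linalg.pinv(khatri_rao(A, B)).T
    return A, B, C

# Recover an exact rank-2 tensor (illustrative sizes).
rng = np.random.default_rng(1)
At, Bt, Ct = rng.normal(size=(4, 2)), rng.normal(size=(5, 2)), rng.normal(size=(6, 2))
X = np.einsum('ir,jr,kr->ijk', At, Bt, Ct)
A, B, C = cp_als(X, R=2)
err = np.linalg.norm(X - np.einsum('ir,jr,kr->ijk', A, B, C)) / np.linalg.norm(X)
```

For a generic low-rank tensor like this one, the relative fitting error drops to near machine precision, which is the "remarkably fast" behavior the abstract attributes to ALS.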

  5. Tensor network decompositions in the presence of a global symmetry

    SciTech Connect

    Singh, Sukhwinder; Pfeifer, Robert N. C.; Vidal, Guifre

    2010-11-15

    Tensor network decompositions offer an efficient description of certain many-body states of a lattice system and are the basis of a wealth of numerical simulation algorithms. We discuss how to incorporate a global symmetry, given by a compact, completely reducible group G, in tensor network decompositions and algorithms. This is achieved by considering tensors that are invariant under the action of the group G. Each symmetric tensor decomposes into two types of tensors: degeneracy tensors, containing all the degrees of freedom, and structural tensors, which only depend on the symmetry group. In numerical calculations, the use of symmetric tensors ensures the preservation of the symmetry, allows selection of a specific symmetry sector, and significantly reduces computational costs. On the other hand, the resulting tensor network can be interpreted as a superposition of exponentially many spin networks. Spin networks are used extensively in loop quantum gravity, where they represent states of quantum geometry. Our work highlights their importance in the context of tensor network algorithms as well, thus setting the stage for cross-fertilization between these two areas of research.

  6. Identifying key nodes in multilayer networks based on tensor decomposition

    NASA Astrophysics Data System (ADS)

    Wang, Dingjie; Wang, Haitao; Zou, Xiufen

    2017-06-01

The identification of essential agents in multilayer networks characterized by different types of interactions is a crucial and challenging topic, one that is essential for understanding the topological structure and dynamic processes of multilayer networks. In this paper, we use a fourth-order tensor to represent multilayer networks and propose a novel method to identify essential nodes based on CANDECOMP/PARAFAC (CP) tensor decomposition, referred to as EDCPTD centrality. This method is based on the perspective of multilayer networked structures, which integrate the information of edges among nodes and links between different layers to quantify the importance of nodes in multilayer networks. Three real-world multilayer biological networks are used to evaluate the performance of EDCPTD centrality. The bar charts and ROC curves of these multilayer networks indicate that the proposed approach is a good alternative index for identifying genuinely important nodes. Meanwhile, by comparing the behavior of both the proposed method and the aggregated single-layer methods, we demonstrate that neglecting the multiple relationships between nodes may lead to incorrect identification of the most versatile nodes. Furthermore, Gene Ontology functional annotation demonstrates that the top nodes identified by the proposed approach play a significant role in many vital biological processes. Finally, we have implemented many centrality methods for multilayer networks (including our method and previously published methods) and created a MATLAB GUI-based visualization tool, called ENMNFinder, which can be used by other researchers.

  7. 3D tensor-based blind multispectral image decomposition for tumor demarcation

    NASA Astrophysics Data System (ADS)

    Kopriva, Ivica; Peršin, Antun

    2010-03-01

Blind decomposition of a multi-spectral fluorescent image for tumor demarcation is formulated by exploiting the tensorial structure of the image. The first contribution of the paper is the identification of the matrix of spectral responses and the 3D tensor of spatial distributions of the materials present in the image from the Tucker3 or PARAFAC models of the 3D image tensor. The second contribution is a clustering-based estimation of the number of materials present in the image as well as the matrix of their spectral profiles. The 3D tensor of the spatial distributions of the materials is recovered through 3-mode multiplication of the multi-spectral image tensor by the inverse of the matrix of spectral profiles. The tensor representation of the multi-spectral image preserves local spatial structure that is lost, due to the vectorization process, when matrix factorization-based decomposition methods (such as nonnegative matrix factorization and independent component analysis) are used. The superior performance of tensor-based image decomposition over matrix factorization-based decompositions is demonstrated on an experimental red-green-blue (RGB) image with known ground truth as well as on RGB fluorescent images of a skin tumor (basal cell carcinoma).
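The 3-mode multiplication used above to recover the spatial distributions can be sketched as follows; the image size, number of materials, and spectral-profile matrix are made-up illustrative values, not data from the paper:

```python
import numpy as np

def mode3_multiply(X, M):
    """Mode-3 product: Y[i,j,p] = sum_q M[p,q] * X[i,j,q]."""
    return np.einsum('pq,ijq->ijp', M, X)

rng = np.random.default_rng(0)
S = rng.random((3, 3)) + np.eye(3)            # invertible matrix of spectral profiles
D = rng.random((8, 8, 3))                     # spatial distributions of 3 materials
X = mode3_multiply(D, S)                      # synthesized multi-spectral image tensor
D_rec = mode3_multiply(X, np.linalg.inv(S))   # recover distributions via the inverse
print(np.allclose(D, D_rec))  # True
```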

  8. Tensor Decompositions for Learning Latent Variable Models

    DTIC Science & Technology

    2012-12-08


  9. 3D extension of Tensorial Polar Decomposition. Application to (photo-)elasticity tensors

    NASA Astrophysics Data System (ADS)

    Desmorat, Rodrigue; Desmorat, Boris

    2016-06-01

The orthogonalized harmonic decomposition of symmetric fourth-order tensors (i.e. having major and minor indicial symmetries, such as elasticity tensors) is completed by a representation of harmonic fourth-order tensors H by means of two second-order harmonic (symmetric deviatoric) tensors only. A similar decomposition is obtained for non-symmetric tensors (i.e. having minor indicial symmetry only, such as photo-elasticity tensors or elasto-plasticity tangent operators) introducing a fourth-order major antisymmetric traceless tensor Z. The tensor Z is represented by means of one harmonic second-order tensor and one antisymmetric second-order tensor only. Representations of totally symmetric (rari-constant), symmetric and major antisymmetric fourth-order tensors are simple particular cases of the proposed general representation. Closed-form expressions for tensor decomposition are given in the monoclinic case. Practical applications to elasticity and photo-elasticity monoclinic tensors are finally presented.

  10. Calculating vibrational spectra of molecules using tensor train decomposition

    NASA Astrophysics Data System (ADS)

    Rakhuba, Maxim; Oseledets, Ivan

    2016-09-01

We propose a new algorithm for the calculation of vibrational spectra of molecules using tensor train decomposition. Under the assumption that the eigenfunctions lie on a low-parametric manifold of low-rank tensors, we suggest using well-known iterative methods that utilize matrix inversion (the locally optimal block preconditioned conjugate gradient method, inverse iteration) and solving the corresponding linear systems inexactly along this manifold. As an application, we accurately compute the vibrational spectrum (84 states) of the acetonitrile molecule CH3CN on a laptop in one hour, using only 100 MB of memory to represent all computed eigenfunctions.
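A minimal sketch of the tensor-train format underlying this method, using the standard sequential-SVD (TT-SVD) construction rather than the authors' code; sizes and tolerance are illustrative:

```python
import numpy as np

def tt_svd(X, eps=1e-10):
    """Factor a d-way tensor into a chain of 3-way TT cores by sequential SVDs."""
    dims = X.shape
    cores, r_prev = [], 1
    M = X.reshape(r_prev * dims[0], -1)
    for k in range(len(dims) - 1):
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        r = max(1, int(np.sum(s > eps * s[0])))        # truncate negligible modes
        cores.append(U[:, :r].reshape(r_prev, dims[k], r))
        M = (s[:r, None] * Vt[:r]).reshape(r * dims[k + 1], -1)
        r_prev = r
    cores.append(M.reshape(r_prev, dims[-1], 1))
    return cores

def tt_to_tensor(cores):
    """Contract the TT cores back into a full tensor."""
    out = cores[0]
    for G in cores[1:]:
        out = np.tensordot(out, G, axes=([-1], [0]))
    return out.reshape([c.shape[1] for c in cores])    # drop boundary ranks of 1

X = np.random.default_rng(0).normal(size=(3, 4, 5))
cores = tt_svd(X)
print([G.shape for G in cores])   # each core is (rank_left, mode_size, rank_right)
```

Low-rank eigenfunctions correspond to cores whose ranks stay small, which is what makes the 100 MB footprint quoted above plausible.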

  11. Blind multispectral image decomposition by 3D nonnegative tensor factorization.

    PubMed

    Kopriva, Ivica; Cichocki, Andrzej

    2009-07-15

    Alpha-divergence-based nonnegative tensor factorization (NTF) is applied to blind multispectral image (MSI) decomposition. The matrix of spectral profiles and the matrix of spatial distributions of the materials resident in the image are identified from the factors in Tucker3 and PARAFAC models. NTF preserves local structure in the MSI that is lost as a result of vectorization of the image when nonnegative matrix factorization (NMF)- or independent component analysis (ICA)-based decompositions are used. Moreover, NTF based on the PARAFAC model is unique up to permutation and scale under mild conditions. To achieve this, NMF- and ICA-based factorizations, respectively, require enforcement of sparseness (orthogonality) and statistical independence constraints on the spatial distributions of the materials resident in the MSI, and these conditions do not hold. We demonstrate efficiency of the NTF-based factorization in relation to NMF- and ICA-based factorizations on blind decomposition of the experimental MSI with the known ground truth.

  12. Tensor decomposition and nonlocal means based spectral CT reconstruction

    NASA Astrophysics Data System (ADS)

    Zhang, Yanbo; Yu, Hengyong

    2016-10-01

As one of the state-of-the-art detectors, the photon counting detector is used in spectral CT to classify the received photons into several energy channels and generate multichannel projections simultaneously. However, the projections always contain severe noise due to the low counts in each energy channel. How to reconstruct high-quality images from photon counting detector based spectral CT is a challenging problem. It is widely accepted that there exists self-similarity over the spatial domain in a CT image. Moreover, because a multichannel CT image is obtained from the same object at different energies, images among channels are highly correlated. Motivated by these two characteristics of spectral CT, we employ tensor decomposition and nonlocal means methods for spectral CT iterative reconstruction. Our method includes three basic steps. First, each channel image is updated by using OS-SART. Second, small 3D volumetric patches (tensors) are extracted from the multichannel image, and higher-order singular value decomposition (HOSVD) is performed on each tensor, which helps to enhance the spatial sparsity and spectral correlation. Third, in order to exploit the self-similarity in CT images, similar patches are grouped to reduce noise using the nonlocal means method. These three steps are repeated alternately until the stopping criteria are met. The effectiveness of the developed algorithm is validated on both numerically simulated and realistic preclinical datasets. Our results show that the proposed method achieves promising performance in terms of noise reduction and fine structure preservation.
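The HOSVD step applied to each patch tensor can be sketched generically as follows (a textbook HOSVD with an illustrative patch size, not the authors' reconstruction code): one orthonormal factor per mode comes from the SVD of that mode's unfolding, and the core tensor is the projection onto those factors.

```python
import numpy as np

def hosvd(X):
    """Higher-order SVD: per-mode orthonormal factors plus a core tensor."""
    factors = []
    for mode in range(X.ndim):
        unfolding = np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)
        U, _, _ = np.linalg.svd(unfolding, full_matrices=False)
        factors.append(U)
    core = X
    for mode, U in enumerate(factors):
        # Multiply the current core by U.T along `mode`.
        core = np.moveaxis(np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return core, factors

X = np.random.default_rng(0).normal(size=(4, 4, 3))   # a small 3D patch tensor
core, factors = hosvd(X)
```

Thresholding the core's small entries (not shown) is one common way such a decomposition enhances sparsity before reconstruction.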

  13. Crossing Fibers Detection with an Analytical High Order Tensor Decomposition

    PubMed Central

    Megherbi, T.; Kachouane, M.; Oulebsir-Boumghar, F.; Deriche, R.

    2014-01-01

Diffusion magnetic resonance imaging (dMRI) is the only technique able to probe, in vivo and noninvasively, the fiber structure of human brain white matter. Detecting the crossing of neuronal fibers remains an exciting challenge with an important impact on tractography. In this work, we tackle this challenging problem and propose an original and efficient technique to extract all crossing fibers from diffusion signals. To this end, we start by estimating, from the dMRI signal, the so-called Cartesian tensor fiber orientation distribution (CT-FOD) function, whose maxima correspond exactly to the orientations of the fibers. The fourth-order symmetric positive definite tensor that represents the CT-FOD is then analytically decomposed via the application of a new theoretical approach, and this decomposition is used to accurately extract all the fiber orientations. Our proposed high-order tensor decomposition based approach is minimal and allows recovery of all crossing fibers without any a priori information on the total number of fibers. Various experiments performed on noisy synthetic data, on phantom diffusion data, and on human brain data validate our approach and clearly demonstrate that it is efficient, robust to noise and performs favorably in terms of angular resolution and accuracy when compared to some classical and state-of-the-art approaches. PMID:25246940

  14. Tensor product decomposition methods applied to complex flow data

    NASA Astrophysics Data System (ADS)

    von Larcher, Thomas; Klein, Rupert; Schneider, Reinhold; Wolf, Sebastian; Huber, Benjamin

    2017-04-01

Low-rank multilevel approximation methods are an important tool in numerical analysis and scientific computing. These methods are often well suited to attacking high-dimensional problems and allow very compact representations of large data sets. Specifically, hierarchical tensor product decomposition methods emerge as a promising approach for application to data concerned with cascade-of-scales problems as arise, e.g., in turbulent fluid dynamics. We focus on two particular objectives: representing turbulent data in an appropriately compact form and, secondly and as a long-term goal, finding self-similar vortex structures in multiscale problems. The question here is whether tensor product methods can support the development of improved understanding of multiscale behavior and whether they are an improved starting point for the development of compact storage schemes for solutions of such problems relative to linear ansatz spaces. We present the reconstruction capabilities of a tensor decomposition based modeling approach tested against 3D turbulent channel flow data.

  15. Uncertainty propagation in orbital mechanics via tensor decomposition

    NASA Astrophysics Data System (ADS)

    Sun, Yifei; Kumar, Mrinal

    2016-03-01

Uncertainty forecasting in orbital mechanics is an essential but difficult task, primarily because the underlying Fokker-Planck equation (FPE) is defined on a relatively high dimensional (6-D) state-space and is driven by the nonlinear perturbed Keplerian dynamics. In addition, an enormously large solution domain is required for numerical solution of this FPE (e.g. encompassing the entire orbit in the x-y-z subspace), of which the state probability density function (pdf) occupies a tiny fraction at any given time. This coupling of large size, high dimensionality and nonlinearity makes for a formidable computational task, and has caused the FPE for orbital uncertainty propagation to remain an unsolved problem. To the best of the authors' knowledge, this paper presents the first successful direct solution of the FPE for perturbed Keplerian mechanics. To tackle the dimensionality issue, the time-varying state pdf is approximated in the CANDECOMP/PARAFAC decomposition tensor form, where all six spatial dimensions as well as the time dimension are separated from one another. The pdf approximation for all times is obtained simultaneously via the alternating least squares algorithm. Chebyshev spectral differentiation is employed for discretization on account of its spectral ("super-fast") convergence rate. To facilitate the tensor decomposition and control the solution domain size, system dynamics is expressed using spherical coordinates in a noninertial reference frame. Numerical results obtained on a regular personal computer are compared with Monte Carlo simulations.
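The Chebyshev spectral differentiation employed for discretization can be illustrated with the standard (Trefethen-style) Chebyshev differentiation matrix; this is a textbook construction under illustrative parameters, not the authors' FPE solver:

```python
import numpy as np

def cheb(N):
    """Chebyshev points x and the (N+1)x(N+1) spectral differentiation matrix D."""
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)                  # Chebyshev points on [-1, 1]
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))           # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                               # diagonal via row sums
    return D, x

D, x = cheb(16)
# Differentiate f(x) = exp(x); spectral ("super-fast") convergence means the
# error is already near machine precision at this modest N.
err = np.max(np.abs(D @ np.exp(x) - np.exp(x)))
print(err < 1e-9)  # True
```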

  16. Reduction of Linear Combinations of Tensors by Ideal Decompositions

    NASA Astrophysics Data System (ADS)

    Fiedler, Bernd

    2001-04-01

Symmetry properties of r-times covariant tensors T can be described by certain linear subspaces W of the group ring K[S_r] of the symmetric group S_r. If such a W is known for a class of tensors T, the elements of the orthogonal subspace W⊥ of W within the dual space K[S_r]* of K[S_r] yield linear identities needed for a treatment of the term combination problem for the coordinates of the T. In earlier papers1,2 we gave the structure of these W for every situation that appears in symbolic tensor calculations by computer. Characterizing idempotents of such W and machinable linear equation systems for W⊥ can be determined on the basis of an ideal decomposition algorithm which works in every semisimple ring up to isomorphism. Furthermore, we use tools such as the Littlewood-Richardson rule, plethysms and discrete Fourier transforms for S_r to increase the efficiency of calculations. All described methods were implemented in a Mathematica package called PERMS.

  17. Databases post-processing in Tensoral

    NASA Technical Reports Server (NTRS)

    Dresselhaus, Eliot

    1994-01-01

    The Center for Turbulent Research (CTR) post-processing effort aims to make turbulence simulations and data more readily and usefully available to the research and industrial communities. The Tensoral language, introduced in this document and currently existing in prototype form, is the foundation of this effort. Tensoral provides a convenient and powerful protocol to connect users who wish to analyze fluids databases with the authors who generate them. In this document we introduce Tensoral and its prototype implementation in the form of a user's guide. This guide focuses on use of Tensoral for post-processing turbulence databases. The corresponding document - the Tensoral 'author's guide' - which focuses on how authors can make databases available to users via the Tensoral system - is currently unwritten. Section 1 of this user's guide defines Tensoral's basic notions: we explain the class of problems at hand and how Tensoral abstracts them. Section 2 defines Tensoral syntax for mathematical expressions. Section 3 shows how these expressions make up Tensoral statements. Section 4 shows how Tensoral statements and expressions are embedded into other computer languages (such as C or Vectoral) to make Tensoral programs. We conclude with a complete example program.

  18. Tensor decomposition for multi-tissue gene expression experiments

    PubMed Central

    Hore, Victoria; Viñuela, Ana; Buil, Alfonso; Knight, Julian; McCarthy, Mark I; Small, Kerrin; Marchini, Jonathan

    2016-01-01

    Genome wide association studies of gene expression traits and other cellular phenotypes have been successful in revealing links between genetic variation and biological processes. The majority of discoveries have uncovered cis eQTL effects via mass univariate testing of SNPs against gene expression in single tissues. We present a Bayesian method for multi-tissue experiments focusing on uncovering gene networks linked to genetic variation. Our method decomposes the 3D array (or tensor) of gene expression measurements into a set of latent components. We identify sparse gene networks, which can then be tested for association against genetic variation genome-wide. We apply our method to a dataset of 845 individuals from the TwinsUK cohort with gene expression measured via RNA sequencing in adipose, LCLs and skin. We uncover several gene networks with a genetic basis and clear biological and statistical significance. Extensions of this approach will allow integration of multi-omic, environmental and phenotypic datasets. PMID:27479908

  19. Tensor decomposition for multiple-tissue gene expression experiments.

    PubMed

    Hore, Victoria; Viñuela, Ana; Buil, Alfonso; Knight, Julian; McCarthy, Mark I; Small, Kerrin; Marchini, Jonathan

    2016-09-01

    Genome-wide association studies of gene expression traits and other cellular phenotypes have successfully identified links between genetic variation and biological processes. The majority of discoveries have uncovered cis-expression quantitative trait locus (eQTL) effects via mass univariate testing of SNPs against gene expression in single tissues. Here we present a Bayesian method for multiple-tissue experiments focusing on uncovering gene networks linked to genetic variation. Our method decomposes the 3D array (or tensor) of gene expression measurements into a set of latent components. We identify sparse gene networks that can then be tested for association against genetic variation across the genome. We apply our method to a data set of 845 individuals from the TwinsUK cohort with gene expression measured via RNA-seq analysis in adipose, lymphoblastoid cell lines (LCLs) and skin. We uncover several gene networks with a genetic basis and clear biological and statistical significance. Extensions of this approach will allow integration of different omics, environmental and phenotypic data sets.

  20. Symmetric Tensor Decomposition Description of Fermionic Many-Body Wave Functions

    NASA Astrophysics Data System (ADS)

    Uemura, Wataru; Sugino, Osamu

    2012-12-01

    The configuration interaction (CI) is a versatile wave function theory for interacting fermions, but it involves an extremely long CI series. Using a symmetric tensor decomposition method, we convert the CI series into a compact and numerically tractable form. The converted series encompasses the Hartree-Fock state in the first term and rapidly converges to the full-CI state, as numerically tested by using small molecules. Provided that the length of the symmetric tensor decomposition CI series grows only moderately with the increasing complexity of the system, the new method will serve as one of the alternative variational methods to achieve full CI with enhanced practicability.

  1. Thermochemical water decomposition processes

    NASA Technical Reports Server (NTRS)

    Chao, R. E.

    1974-01-01

Thermochemical processes which lead to the production of hydrogen and oxygen from water without the consumption of any other material have a number of advantages when compared to other processes such as water electrolysis. It is possible to operate a sequence of chemical steps with net work requirements equal to zero at temperatures well below the temperature required for water dissociation in a single step. Various types of procedures are discussed, giving attention to halide processes, reverse Deacon processes, iron oxide and carbon oxide processes, and metal and alkali metal processes. Economic questions are also considered.

  3. Higher order singular value decomposition of tensors for fusion of registered images

    NASA Astrophysics Data System (ADS)

    Thomason, Michael G.; Gregor, Jens

    2011-01-01

    This paper describes a computational method using tensor math for higher order singular value decomposition (HOSVD) of registered images. Tensor decomposition is a rigorous way to expose structure embedded in multidimensional datasets. Given a dataset of registered 2-D images, the dataset is represented in tensor format and HOSVD of the tensor is computed to obtain a set of 2-D basis images. The basis images constitute a linear decomposition of the original dataset. HOSVD is data-driven and does not require the user to select parameters or assign thresholds. A specific application uses the basis images for pixel-level fusion of registered images into a single image for visualization. The fusion is optimized with respect to a measure of mean squared error. HOSVD and image fusion are illustrated empirically with four real datasets: (1) visible and infrared data of a natural scene, (2) MRI and x ray CT brain images, and in nondestructive testing (3) x ray, ultrasound, and eddy current images, and (4) x ray, ultrasound, and shearography images.

  4. Performance of tensor decomposition-based modal identification under nonstationary vibration

    NASA Astrophysics Data System (ADS)

    Friesen, P.; Sadhu, A.

    2017-03-01

Health monitoring of civil engineering structures is of paramount importance when they are subjected to natural hazards or extreme climatic events such as earthquakes, strong wind gusts or man-made excitations. Most traditional modal identification methods rely on a stationarity assumption for the vibration response and pose difficulties when analyzing nonstationary vibration (e.g. earthquake or human-induced vibration). Recently, tensor decomposition based methods have emerged as powerful yet generic blind (i.e. not requiring knowledge of input characteristics) signal decomposition tools for structural modal identification. In this paper, a tensor decomposition based system identification method is further explored to estimate modal parameters using nonstationary vibration generated by either earthquake or pedestrian-induced excitation in a structure. The effects of lag parameters and sensor densities on tensor decomposition are studied with respect to the extent of nonstationarity of the responses, characterized by the stationary duration and peak ground acceleration of the earthquake. A suite of more than 1400 earthquakes is used to investigate the performance of the proposed method under a wide variety of ground motions, utilizing both complete and partial measurements of a high-rise building model. Apart from earthquakes, human-induced nonstationary vibration of a real-life pedestrian bridge is also used to verify the accuracy of the proposed method.

  5. Predicting the reference evapotranspiration based on tensor decomposition

    NASA Astrophysics Data System (ADS)

    Misaghian, Negin; Shamshirband, Shahaboddin; Petković, Dalibor; Gocic, Milan; Mohammadi, Kasra

    2016-09-01

Most available models for reference evapotranspiration (ET0) estimation are based upon only an empirical equation for ET0. Thus, one of the main issues in ET0 estimation is the appropriate integration of time information and different empirical ET0 equations to determine ET0 and boost the precision. The FAO-56 Penman-Monteith, adjusted Hargreaves, Blaney-Criddle, Priestley-Taylor, and Jensen-Haise equations were utilized in this study for estimating ET0 at the two stations of Belgrade and Nis in Serbia, using data collected for the period 1980 to 2010. A third-order tensor is used to capture three-way correlations among months, years, and ET0 information. Afterward, the latent correlations among ET0 parameters were found by multiway analysis to enhance the quality of the prediction. The suggested method is valuable as it takes into account simultaneous relations between elements, boosts the prediction precision, and determines latent associations. Models are compared with respect to the coefficient of determination (R²), mean absolute error (MAE), and root-mean-square error (RMSE). The proposed tensor approach has an R² value greater than 0.9 for all selected ET0 methods at both stations, which is acceptable for ET0 prediction. RMSE ranges between 0.247 and 0.485 mm day⁻¹ at Nis station and between 0.277 and 0.451 mm day⁻¹ at Belgrade station, while MAE is between 0.140 and 0.337 mm day⁻¹ at Nis and between 0.208 and 0.360 mm day⁻¹ at Belgrade. The best performances are achieved by the Priestley-Taylor model at Nis station (R² = 0.985, MAE = 0.140 mm day⁻¹, RMSE = 0.247 mm day⁻¹) and the FAO-56 Penman-Monteith model at Belgrade station (MAE = 0.208 mm day⁻¹, RMSE = 0.277 mm day⁻¹, R² = 0.975).

  6. Tensor decomposition in electronic structure calculations on 3D Cartesian grids

    SciTech Connect

    Khoromskij, B.N.; Khoromskaia, V.; Chinnamsetty, S.R.; Flad, H.-J.

    2009-09-01

    In this paper, we investigate a novel approach based on the combination of Tucker-type and canonical tensor decomposition techniques for the efficient numerical approximation of functions and operators in electronic structure calculations. In particular, we study the applicability of tensor approximations for the numerical solution of the Hartree-Fock and Kohn-Sham equations on 3D Cartesian grids. We show that the orthogonal Tucker-type tensor approximation of the electron density and Hartree potential of simple molecules leads to low-rank tensor representations. This enables an efficient tensor-product convolution scheme for the computation of the Hartree potential using a collocation-type approximation via piecewise constant basis functions on a uniform n×n×n grid. Combined with Richardson extrapolation, our approach exhibits O(h³) convergence in the grid size h = O(n⁻¹). Moreover, it requires O(3rn + r³) storage, where r denotes the Tucker rank of the electron density, with r = O(log n) almost uniformly in n. For example, calculations of the Coulomb matrix and the Hartree-Fock energy for the CH₄ molecule, with a pseudopotential on the C atom, achieved accuracies of the order of 10⁻⁶ hartree with a grid size n of several hundred. Since the tensor-product convolution in 3D is performed via 1D convolution transforms, our scheme markedly outperforms the 3D-FFT in both computing time and storage requirements.
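
    The low Tucker ranks reported above can be illustrated with a small sketch: a truncated HOSVD of a smooth, rapidly decaying function sampled on a uniform n×n×n grid. The Slater-type function, grid, and ranks below are illustrative stand-ins, not the paper's actual densities or solver:

```python
import numpy as np

n = 64
x = np.linspace(-5.0, 5.0, n)
X, Y, Z = np.meshgrid(x, x, x, indexing='ij')
F = np.exp(-np.sqrt(X**2 + Y**2 + Z**2))   # e^{-r}, a Slater-type stand-in

def hosvd_truncate(T, ranks):
    """Orthogonal Tucker approximation: truncated SVD of each mode unfolding."""
    Us = []
    for mode, r in enumerate(ranks):
        M = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)
        Us.append(np.linalg.svd(M, full_matrices=False)[0][:, :r])
    # Core = T contracted with the transposed factor matrices on each mode.
    core = np.einsum('ijk,ia,jb,kc->abc', T, *Us, optimize=True)
    return core, Us

core, (U0, U1, U2) = hosvd_truncate(F, (12, 12, 12))
F_hat = np.einsum('abc,ia,jb,kc->ijk', core, U0, U1, U2, optimize=True)

rel_err = np.linalg.norm(F - F_hat) / np.linalg.norm(F)
compression = (core.size + U0.size + U1.size + U2.size) / F.size
print(rel_err, compression)
```

    A multilinear rank of 12 per mode stores roughly 1-2% of the full grid while approximating the smooth function closely, which is the storage effect the O(3rn + r³) estimate above describes.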

  7. ON THE DECOMPOSITION OF STRESS AND STRAIN TENSORS INTO SPHERICAL AND DEVIATORIC PARTS

    PubMed Central

    Augusti, G.; Martin, J. B.; Prager, W.

    1969-01-01

    It is well known that Hooke's law for a linearly elastic, isotropic solid may be written in the form of two relations that involve only the spherical or only the deviatoric parts of the tensors of stress and strain. The example of the linearly elastic, transversely isotropic solid is used to show that this decomposition is not, in general, feasible for linearly elastic, anisotropic solids. The discussion is extended to a large class of work-hardening rigid, plastic solids, and it is shown that the considered decomposition can only be achieved for the incompressible solids of this class. PMID:16591754
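
    For the isotropic case, where the decomposition does apply, the split is elementary; a small numpy illustration with arbitrary stress values:

```python
import numpy as np

# Spherical/deviatoric split of a 3x3 stress tensor:
# sigma = p*I + s, with p = tr(sigma)/3 and tr(s) = 0. Values are arbitrary.
sigma = np.array([[10.0, 2.0, 0.0],
                  [ 2.0, 6.0, 1.0],
                  [ 0.0, 1.0, 2.0]])

p = np.trace(sigma) / 3.0          # mean (hydrostatic) stress
spherical = p * np.eye(3)
deviatoric = sigma - spherical

print(p)                           # 6.0
print(np.trace(deviatoric))        # 0.0 (the deviator is trace-free)
assert np.allclose(spherical + deviatoric, sigma)
```

    For an isotropic elastic solid, Hooke's law then decouples into one relation between the spherical parts and one between the deviatoric parts; the paper's point is that no such decoupling exists for a transversely isotropic solid.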

  8. Exploiting multi-lead electrocardiogram correlations using robust third-order tensor decomposition

    PubMed Central

    Dandapat, Samarendra

    2015-01-01

    In this Letter, a robust third-order tensor decomposition of multi-lead electrocardiogram (MECG) data comprising 12 leads is proposed to reduce the dimension of the stored data. An order-3 tensor structure is employed to represent the MECG data by rearranging the MECG information in three dimensions. The three dimensions of the formed tensor represent the number of leads, beats and samples of some fixed ECG duration. Dimension reduction of such an arrangement exploits the correlations present among successive beats (intra-beat and inter-beat) and across leads (inter-lead). The higher-order singular value decomposition is used to decompose the tensor data. In addition, multiscale analysis has been added for effective treatment of the ECG information. It grossly segments the ECG characteristic waves (P-wave, QRS-complex, ST-segment, T-wave, etc.) into different sub-bands. At the same time, it separates high-frequency noise components into lower-order sub-bands, which helps in removing noise from the original data. For evaluation purposes, we have used the publicly available PTB diagnostic database. The proposed method outperforms existing algorithms, whose compression ratios are under 10 for MECG data. Results show that the original MECG data volume can be reduced by more than 45 times with an acceptable diagnostic distortion level. PMID:26609416

  9. Exploiting multi-lead electrocardiogram correlations using robust third-order tensor decomposition.

    PubMed

    Padhy, Sibasankar; Dandapat, Samarendra

    2015-10-01

    In this Letter, a robust third-order tensor decomposition of multi-lead electrocardiogram (MECG) data comprising 12 leads is proposed to reduce the dimension of the stored data. An order-3 tensor structure is employed to represent the MECG data by rearranging the MECG information in three dimensions. The three dimensions of the formed tensor represent the number of leads, beats and samples of some fixed ECG duration. Dimension reduction of such an arrangement exploits the correlations present among successive beats (intra-beat and inter-beat) and across leads (inter-lead). The higher-order singular value decomposition is used to decompose the tensor data. In addition, multiscale analysis has been added for effective treatment of the ECG information. It grossly segments the ECG characteristic waves (P-wave, QRS-complex, ST-segment, T-wave, etc.) into different sub-bands. At the same time, it separates high-frequency noise components into lower-order sub-bands, which helps in removing noise from the original data. For evaluation purposes, we have used the publicly available PTB diagnostic database. The proposed method outperforms existing algorithms, whose compression ratios are under 10 for MECG data. Results show that the original MECG data volume can be reduced by more than 45 times with an acceptable diagnostic distortion level.
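
    The lead x beat x sample arrangement and HOSVD truncation can be sketched with synthetic low-rank data; the sizes, ranks, and toy signal below are illustrative, and the Letter's multiscale (sub-band) stage is omitted:

```python
import numpy as np

# Synthetic "MECG": strongly correlated across leads and beats, so it has
# low multilinear rank plus a small noise floor. Purely illustrative data.
rng = np.random.default_rng(0)
leads, beats, samples = 12, 40, 250
T = (np.einsum('ir,jr,kr->ijk',
               rng.standard_normal((leads, 3)),
               rng.standard_normal((beats, 3)),
               rng.standard_normal((samples, 3)))
     + 0.01 * rng.standard_normal((leads, beats, samples)))

# Truncated HOSVD: leading singular vectors of each mode unfolding.
ranks = (3, 3, 3)
Us = []
for mode, r in enumerate(ranks):
    M = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)
    Us.append(np.linalg.svd(M, full_matrices=False)[0][:, :r])
core = np.einsum('ijk,ia,jb,kc->abc', T, *Us, optimize=True)
T_hat = np.einsum('abc,ia,jb,kc->ijk', core, *Us, optimize=True)

stored = core.size + sum(U.size for U in Us)
cr = T.size / stored                       # compression ratio
rel_err = np.linalg.norm(T - T_hat) / np.linalg.norm(T)
print(cr, rel_err)
```

    Storing only the core and three small factor matrices gives a large compression ratio at near the noise-floor error, which is the mechanism behind the 45x reduction reported above.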

  10. A new moment-tensor decomposition for seismic events in anisotropic media

    NASA Astrophysics Data System (ADS)

    Chapman, C. H.; Leaney, W. S.

    2012-01-01

    Investigating the mechanisms of small seismic sources usually consists of three steps: determining the moment tensor of the source; decomposing the moment tensor into parameters that can be interpreted in terms of physical mechanisms and displaying those parameters. This paper concerns the second and third steps. Two existing methods—the Riedesel-Jordan and Hudson-Pearce-Rogers parameters and displays—are reviewed, compared and contrasted, and advantages and disadvantages of the two methods are discussed. One disadvantage is that neither method takes into consideration the effect of anisotropy on the interpretation. In microseisms, anisotropy can be important. A new procedure based on the biaxial decomposition of the potency tensor is introduced which explicitly allows for anisotropy and interprets the moment tensor in terms of an isotropic pressure change and a displacement discontinuity on a fault. It is shown that this interpretation is always possible for any moment tensor whatever the anisotropy. To compare the pressure change with the displacement discontinuity, it is useful to be able to determine the volume change from the pressure source in any medium. This depends on the embedded bulk modulus, which differs from the normal bulk modulus. The embedded modulus in isotropic media is well known and the equivalent anisotropic result is derived in this paper. Interpreting a seismic source in terms of the volume change due to a pressure change and a displacement discontinuity on a fault allows a simple 3-D graphical glyph to be used to display the interpretation.

  11. Real-time object recognition in multidimensional images based on joined extended structural tensor and higher-order tensor decomposition methods

    NASA Astrophysics Data System (ADS)

    Cyganek, Boguslaw; Smolka, Bogdan

    2015-02-01

    In this paper a system for real-time recognition of objects in multidimensional video signals is proposed. Object recognition is done by pattern projection into tensor subspaces obtained from the factorization of the signal tensors representing the input signal. However, instead of taking only the intensity signal, the novelty of this paper is to first build the Extended Structural Tensor representation from the intensity signal, which conveys information on signal intensities as well as on higher-order statistics of the input signals. In this way the higher-order input pattern tensors are built from the training samples. Then, the tensor subspaces are built based on the Higher-Order Singular Value Decomposition of the prototype pattern tensors. Finally, recognition relies on measuring the distance of a test pattern projected into the tensor subspaces obtained from the training tensors. Due to the high dimensionality of the input data, tensor based methods require substantial memory and computational resources. However, recent advances in multi-core microprocessors and graphics cards allow real-time operation of multidimensional methods, as shown and analyzed in this paper on real examples of object detection in digital images.

  12. Tensoral for post-processing users and simulation authors

    NASA Technical Reports Server (NTRS)

    Dresselhaus, Eliot

    1993-01-01

    The CTR post-processing effort aims to make turbulence simulations and data more readily and usefully available to the research and industrial communities. The Tensoral language, which provides the foundation for this effort, is introduced here in the form of a user's guide. The Tensoral user's guide is presented in two main sections. Section one acts as a general introduction and guides database users who wish to post-process simulation databases. Section two gives a brief description of how database authors and other advanced users can make simulation codes and/or the databases they generate available to the user community via Tensoral database back ends. The two-part structure of this document conforms to the two-level design structure of the Tensoral language. Tensoral has been designed to be a general computer language for performing tensor calculus and statistics on numerical data. Tensoral's generality allows it to be used for stand-alone native coding of high-level post-processing tasks (as described in section one of this guide). At the same time, Tensoral's specialization to a minute task (namely, to numerical tensor calculus and statistics) allows it to be easily embedded into applications written partly in Tensoral and partly in other computer languages (here, C and Vectoral). Embedded Tensoral, aimed at advanced users for more general coding (e.g. of efficient simulations, for interfacing with pre-existing software, for visualization, etc.), is described in section two of this guide.

  13. Highly Efficient and Scalable Compound Decomposition of Two-Electron Integral Tensor and Its Application in Coupled Cluster Calculations.

    PubMed

    Peng, Bo; Kowalski, Karol

    2017-09-12

    The representation and storage of two-electron integral tensors are vital in large-scale applications of accurate electronic structure methods. Low-rank representation and efficient storage strategies for integral tensors can significantly reduce the numerical overhead and consequently the time-to-solution of these methods. In this work, by combining pivoted incomplete Cholesky decomposition (CD) with a follow-up truncated singular value decomposition (SVD), we develop a decomposition strategy to approximately represent the two-electron integral tensor in terms of low-rank vectors. A systematic benchmark test on a series of 1-D, 2-D, and 3-D carbon-hydrogen systems demonstrates the high efficiency and scalability of the compound two-step decomposition of the two-electron integral tensor in our implementation. For the size of the atomic basis set, Nb, ranging from ∼100 up to ∼2,000, the observed numerical scaling of our implementation shows [Formula: see text] versus the [Formula: see text] cost of performing a single CD on the two-electron integral tensor in most other implementations. More importantly, this decomposition strategy can significantly reduce the storage requirement of the atomic orbital (AO) two-electron integral tensor from [Formula: see text] to [Formula: see text] with moderate decomposition thresholds. The accuracy tests have been performed using ground- and excited-state formulations of the coupled cluster formalism employing single and double excitations (CCSD) on several benchmark systems, including the C₆₀ molecule described by nearly 1,400 basis functions. The results show that the decomposition thresholds can generally be set to 10⁻⁴ to 10⁻³ to give an acceptable compromise between efficiency and accuracy.
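
    The two-step compound decomposition can be sketched on a generic symmetric positive semidefinite matrix standing in for the unfolded integral tensor; the pivoted Cholesky routine and thresholds below are a plain illustration, not the authors' implementation:

```python
import numpy as np

def pivoted_cholesky(A, tol=1e-10):
    """Incomplete pivoted Cholesky: returns L with A ~= L @ L.T, stopping
    when the largest remaining residual diagonal falls below tol."""
    n = A.shape[0]
    L = []
    d = np.diag(A).astype(float).copy()     # residual diagonal
    done = np.zeros(n, dtype=bool)
    for _ in range(n):
        i = np.argmax(np.where(done, -np.inf, d))
        if d[i] < tol:
            break
        col = (A[:, i] - sum(l * l[i] for l in L)) / np.sqrt(d[i])
        L.append(col)
        d = d - col**2
        done[i] = True
    return np.array(L).T                    # shape (n, rank)

# PSD test matrix of exact rank 25, standing in for the unfolded tensor.
rng = np.random.default_rng(0)
B = rng.standard_normal((200, 25))
A = B @ B.T

# Step 1: pivoted CD finds the numerical rank automatically.
L = pivoted_cholesky(A)
print(L.shape[1])                           # rank detected by CD

# Step 2: truncated SVD of the Cholesky factor compresses it further.
U, s, _ = np.linalg.svd(L, full_matrices=False)
k = int(np.sum(s > 1e-8 * s[0]))
Lk = U[:, :k] * s[:k]
assert np.allclose(Lk @ Lk.T, A, atol=1e-6)
```

    The CD step bounds the number of vectors by the numerical rank, and the follow-up SVD re-compresses them into an orthogonal low-rank basis, mirroring the two-step strategy described above.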

  14. Representing Matrix Cracks Through Decomposition of the Deformation Gradient Tensor in Continuum Damage Mechanics Methods

    NASA Technical Reports Server (NTRS)

    Leone, Frank A., Jr.

    2015-01-01

    A method is presented to represent the large-deformation kinematics of intraply matrix cracks and delaminations in continuum damage mechanics (CDM) constitutive material models. The method involves the additive decomposition of the deformation gradient tensor into 'crack' and 'bulk material' components. The response of the intact bulk material is represented by a reduced deformation gradient tensor, and the opening of an embedded cohesive interface is represented by a normalized cohesive displacement-jump vector. The rotation of the embedded interface is tracked as the material deforms and as the crack opens. The distribution of the total local deformation between the bulk material and the cohesive interface components is determined by minimizing the difference between the cohesive stress and the bulk material stress projected onto the cohesive interface. The improvements to the accuracy of CDM models that incorporate the presented method over existing approaches are demonstrated for a single element subjected to simple shear deformation and for a finite element model of a unidirectional open-hole tension specimen. The material model is implemented as a VUMAT user subroutine for the Abaqus/Explicit finite element software. The presented deformation gradient decomposition method reduces the artificial load transfer across matrix cracks subjected to large shearing deformations, and avoids the spurious secondary failure modes that often occur in analyses based on conventional progressive damage models.

  15. Towards Interactive Construction of Topical Hierarchy: A Recursive Tensor Decomposition Approach

    PubMed Central

    Wang, Chi; Liu, Xueqing; Song, Yanglei; Han, Jiawei

    2015-01-01

    Automatic construction of user-desired topical hierarchies over large volumes of text data is a highly desirable but challenging task. This study proposes to give users freedom to construct topical hierarchies via interactive operations such as expanding a branch and merging several branches. Existing hierarchical topic modeling techniques are inadequate for this purpose because (1) they cannot consistently preserve the topics when the hierarchy structure is modified; and (2) the slow inference prevents swift response to user requests. In this study, we propose a novel method, called STROD, that allows efficient and consistent modification of topic hierarchies, based on a recursive generative model and a scalable tensor decomposition inference algorithm with theoretical performance guarantee. Empirical evaluation shows that STROD reduces the runtime of construction by several orders of magnitude, while generating consistent and quality hierarchies. PMID:26705505

  16. Tensor-multi-scalar theories: relativistic stars and 3 + 1 decomposition

    NASA Astrophysics Data System (ADS)

    Horbatsch, Michael; Silva, Hector O.; Gerosa, Davide; Pani, Paolo; Berti, Emanuele; Gualtieri, Leonardo; Sperhake, Ulrich

    2015-10-01

    Gravitational theories with multiple scalar fields coupled to the metric and each other—a natural extension of the well studied single-scalar-tensor theories—are interesting phenomenological frameworks to describe deviations from general relativity in the strong-field regime. In these theories, the N-tuple of scalar fields takes values in a coordinate patch of an N-dimensional Riemannian target-space manifold whose properties are poorly constrained by weak-field observations. Here we introduce for simplicity a non-trivial model with two scalar fields and a maximally symmetric target-space manifold. Within this model we present a preliminary investigation of spontaneous scalarization for relativistic, perfect fluid stellar models in spherical symmetry. We find that the scalarization threshold is determined by the eigenvalues of a symmetric scalar-matter coupling matrix, and that the properties of strongly scalarized stellar configurations additionally depend on the target-space curvature radius. In preparation for numerical relativity simulations, we also write down the 3 + 1 decomposition of the field equations for generic tensor-multi-scalar theories.

  17. Enabling High-Dimensional Hierarchical Uncertainty Quantification by ANOVA and Tensor-Train Decomposition

    SciTech Connect

    Zhang, Zheng; Yang, Xiu; Oseledets, Ivan V.; Karniadakis, George E.; Daniel, Luca

    2015-01-01

    Hierarchical uncertainty quantification can reduce the computational cost of stochastic circuit simulation by employing spectral methods at different levels. This paper presents an efficient framework to hierarchically simulate some challenging stochastic circuits/systems that include high-dimensional subsystems. Due to the high parameter dimensionality, it is challenging both to extract surrogate models at the low level of the design hierarchy and to handle them in the high-level simulation. In this paper, we develop an efficient analysis-of-variance-based stochastic circuit/microelectromechanical systems simulator to extract the surrogate models at the low level. In order to avoid the curse of dimensionality, we employ tensor-train decomposition at the high level to construct the basis functions and Gauss quadrature points. As a demonstration, we verify our algorithm on a stochastic oscillator with four MEMS capacitors and 184 random parameters. This challenging example is efficiently simulated by our simulator at the cost of only 10 minutes in MATLAB on a regular personal computer.
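
    The tensor-train format itself can be sketched with a minimal TT-SVD: sequential SVDs of reshaped unfoldings yield the three-index cores. The rank-1 test tensor and fixed truncation rule below are illustrative, not the paper's basis construction:

```python
import numpy as np

def tt_svd(T, eps=1e-10):
    """Decompose T into TT cores G[k] of shape (r_{k-1}, n_k, r_k)."""
    shape = T.shape
    d = len(shape)
    cores = []
    r_prev = 1
    M = T.reshape(r_prev * shape[0], -1)
    for k in range(d - 1):
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        r = int(np.sum(s > eps * s[0]))          # truncate tiny singular values
        cores.append(U[:, :r].reshape(r_prev, shape[k], r))
        M = (s[:r, None] * Vt[:r]).reshape(r * shape[k + 1], -1)
        r_prev = r
    cores.append(M.reshape(r_prev, shape[-1], 1))
    return cores

# A separable (rank-1) 4-way tensor: every TT rank should come out as 1.
vs = [np.linspace(1.0, 2.0, n) for n in (5, 6, 7, 8)]
T = np.einsum('i,j,k,l->ijkl', *vs)
cores = tt_svd(T)
print([G.shape for G in cores])   # [(1, 5, 1), (1, 6, 1), (1, 7, 1), (1, 8, 1)]

# Reconstruct by chaining the cores and check.
R = cores[0]
for G in cores[1:]:
    R = np.tensordot(R, G, axes=([R.ndim - 1], [0]))
R = R.reshape(T.shape)
assert np.allclose(R, T)
```

    Storage grows linearly in the number of dimensions when the TT ranks stay small, which is how the framework above sidesteps the curse of dimensionality for the 184-parameter example.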

  18. Multidimensional Compressed Sensing MRI Using Tensor Decomposition-Based Sparsifying Transform

    PubMed Central

    Yu, Yeyang; Jin, Jin; Liu, Feng; Crozier, Stuart

    2014-01-01

    Compressed Sensing (CS) has been applied in dynamic Magnetic Resonance Imaging (MRI) to accelerate data acquisition without noticeably degrading the spatial-temporal resolution. A suitable sparsity basis is one of the key components of successful CS applications. Conventionally, a multidimensional dataset in dynamic MRI is treated as a series of two-dimensional matrices, and various matrix/vector transforms are then used to explore the image sparsity. Traditional methods typically sparsify the spatial and temporal information independently. In this work, we propose a novel concept of tensor sparsity for the application of CS in dynamic MRI, and present the Higher-order Singular Value Decomposition (HOSVD) as a practical example. Applications to three- and four-dimensional MRI data demonstrate that HOSVD simultaneously exploits the correlations within the spatial and temporal dimensions. Validations based on cardiac datasets indicate that the proposed method achieved reconstruction accuracy comparable with low-rank matrix recovery methods and outperformed conventional sparse recovery methods. PMID:24901331

  19. Aridity and decomposition processes in complex landscapes

    NASA Astrophysics Data System (ADS)

    Ossola, Alessandro; Nyman, Petter

    2015-04-01

    Decomposition of organic matter is a key biogeochemical process contributing to nutrient cycles, carbon fluxes and soil development. The activity of decomposers depends on microclimate, with temperature and rainfall being major drivers. In complex terrain the fine-scale variation in microclimate (and hence water availability) as a result of slope orientation is caused by differences in incoming radiation and surface temperature. Aridity, measured as the long-term balance between net radiation and rainfall, is a metric that can be used to represent variations in water availability within the landscape. Since aridity metrics can be obtained at fine spatial scales, they could theoretically be used to investigate how decomposition processes vary across complex landscapes. In this study, four research sites were selected in tall open sclerophyll forest along an aridity gradient (Budyko dryness index ranging from 1.56 to 2.22) where microclimate, litter moisture and soil moisture were monitored continuously for one year. Litter bags were packed to estimate decomposition rates (k) using leaves of a tree species not present in the study area (Eucalyptus globulus) in order to avoid home-field advantage effects. Litter mass loss was measured to assess the activity of macro-decomposers (6 mm litter bag mesh size), meso-decomposers (1 mm mesh), microbes above-ground (0.2 mm mesh) and microbes below-ground (2 cm depth, 0.2 mm mesh). Four replicates for each set of bags were installed at each site and bags were collected 1, 2, 4, 7 and 12 months after installation. We first tested whether differences in microclimate due to slope orientation have significant effects on decomposition processes. The dryness index was then related to decomposition rates to evaluate whether small-scale variation in decomposition can be predicted using readily available information on rainfall and radiation. Decomposition rates (k), calculated by fitting single-pool negative exponential models, generally

  20. Tensor based geology preserving reservoir parameterization with Higher Order Singular Value Decomposition (HOSVD)

    NASA Astrophysics Data System (ADS)

    Afra, Sardar; Gildin, Eduardo

    2016-09-01

    Parameter estimation through robust parameterization techniques has been addressed in many works associated with history matching and inverse problems. Reservoir models are in general complex, nonlinear, and large-scale with respect to the large number of states and unknown parameters. Thus, a practical approach that replaces the original set of highly correlated unknown parameters with a non-correlated set of lower dimensionality, one that captures the most significant features of the original set, is of high importance. Furthermore, de-correlating the system's parameters while keeping the geological description intact is critical to controlling the ill-posed nature of such problems. We introduce the advantages of a new low-dimensional parameterization approach for reservoir characterization applications utilizing multilinear-algebra-based techniques such as higher order singular value decomposition (HOSVD). In tensor-based approaches like HOSVD, 2D permeability images are treated as they are, i.e., the data structure is kept intact, whereas in conventional dimensionality reduction algorithms like SVD the data have to be vectorized. Hence, compared to classical methods, higher redundancy reduction with less information loss can be achieved by decreasing the redundancies present in all dimensions. In other words, HOSVD approximation results in a more compact data representation, in the least-squares sense, with better geological consistency than classical algorithms. We examined the performance of the proposed parameterization technique against the SVD approach on the SPE10 benchmark reservoir model as well as on synthetic channelized permeability maps to demonstrate the capability of the proposed method. Moreover, to acquire statistical consistency, we repeat all experiments for a set of 1000 unknown geological samples and provide a comparison using RMSE analysis. Results prove that, for a fixed compression ratio, the performance of the proposed approach

  1. Biogeochemistry of Decomposition and Detrital Processing

    NASA Astrophysics Data System (ADS)

    Sanderman, J.; Amundson, R.

    2003-12-01

    Decomposition is a key ecological process that roughly balances net primary production in terrestrial ecosystems and is an essential process in resupplying nutrients to the plant community. Decomposition consists of three concurrent processes: comminution or fragmentation, leaching of water-soluble compounds, and microbial catabolism. Decomposition can also be viewed as a sequential process, what Eijsackers and Zehnder (1990) compare to a Russian matryoshka doll. Soil macrofauna fragment and partially solubilize plant residues, facilitating establishment of a community of decomposer microorganisms. This decomposer community will gradually shift as the most easily degraded plant compounds are utilized and the more recalcitrant materials begin to accumulate. Given enough time and the proper environmental conditions, most naturally occurring compounds can be completely mineralized to inorganic forms. Simultaneously with mineralization, the process of humification acts to transform a fraction of the plant residues into stable soil organic matter (SOM) or humus. For reference, Schlesinger (1990) estimated that only ~0.7% of detritus eventually becomes stabilized into humus. Decomposition plays a key role in the cycling of most plant macro- and micronutrients and in the formation of humus. Figure 1 places the roles of detrital processing and mineralization within the context of the biogeochemical cycling of essential plant nutrients. Chapin (1991) found that, while the atmosphere supplied 4% of nitrogen and mineral weathering supplied no nitrogen and <1% of phosphorus, internal nutrient recycling is the source of >95% of all the nitrogen and phosphorus uptake by tundra species in Barrow, Alaska. In a cool temperate forest, nutrient recycling accounted for 93%, 89%, 88%, and 65% of total sources for nitrogen, phosphorus, potassium, and calcium, respectively (Chapin, 1991). Figure 1. A decomposition-centric biogeochemical model of nutrient cycling.
Although there is significant

  2. Tensoral: A system for post-processing turbulence simulation data

    NASA Technical Reports Server (NTRS)

    Dresselhaus, Eliot

    1993-01-01

    Many computer simulations in engineering and science -- and especially in computational fluid dynamics (CFD) -- produce huge quantities of numerical data. These data are often so large as to make even relatively simple post-processing of this data unwieldy. The data, once computed and quality-assured, is most likely analyzed by only a few people. As a result, much useful numerical data is under-utilized. Since future state-of-the-art simulations will produce even larger datasets, will use more complex flow geometries, and will be performed on more complex supercomputers, data management issues will become increasingly cumbersome. My goal is to provide software which will automate the present and future task of managing and post-processing large turbulence datasets. My research has focused on the development of these software tools -- specifically, through the development of a very high-level language called 'Tensoral'. The ultimate goal of Tensoral is to convert high-level mathematical expressions (tensor algebra, calculus, and statistics) into efficient low-level programs which numerically calculate these expressions given simulation datasets. This approach to the database and post-processing problem has several advantages. Using Tensoral the numerical and data management details of a simulation are shielded from the concerns of the end user. This shielding is carried out without sacrificing post-processor efficiency and robustness. Another advantage of Tensoral is that its very high-level nature lends itself to portability across a wide variety of computing (and supercomputing) platforms. This is especially important considering the rapidity of changes in supercomputing hardware.

  3. Tensor Algebra Library for NVidia Graphics Processing Units

    SciTech Connect

    Liakh, Dmitry

    2015-03-16

    This is a general purpose math library implementing basic tensor algebra operations on NVidia GPU accelerators. This software is a tensor algebra library that can perform basic tensor algebra operations, including tensor contractions, tensor products, tensor additions, etc., on NVidia GPU accelerators, asynchronously with respect to the CPU host. It supports a simultaneous use of multiple NVidia GPUs. Each asynchronous API function returns a handle which can later be used for querying the completion of the corresponding tensor algebra operation on a specific GPU. The tensors participating in a particular tensor operation are assumed to be stored in local RAM of a node or GPU RAM. The main research area where this library can be utilized is the quantum many-body theory (e.g., in electronic structure theory).
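
    The basic operations listed above (contractions, tensor products, additions) can be sketched on the CPU with numpy's einsum; the library performs the same algebra asynchronously on NVidia GPUs:

```python
import numpy as np

# CPU sketch of the core tensor algebra operations; shapes are arbitrary.
A = np.arange(24.0).reshape(2, 3, 4)
B = np.arange(60.0).reshape(3, 4, 5)

# Tensor contraction over the two shared indices j, k:
C = np.einsum('ijk,jkl->il', A, B)

# Tensor (outer) product with a vector, and tensor addition:
v = np.ones(5)
P = np.einsum('ijk,l->ijkl', A, v)
S = A + 2.0 * A

print(C.shape, P.shape, S.shape)   # (2, 5) (2, 3, 4, 5) (2, 3, 4)
```

    In the GPU library each such operation would be launched asynchronously and tracked via a completion handle; the synchronous numpy calls here only illustrate the underlying algebra.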

  5. Diffusion tensor fiber tracking on graphics processing units.

    PubMed

    Mittmann, Adiel; Comunello, Eros; von Wangenheim, Aldo

    2008-10-01

    Diffusion tensor magnetic resonance imaging has been successfully applied to the process of fiber tracking, which determines the location of fiber bundles within the human brain. This process, however, can be quite lengthy when run on a regular workstation. We present a means of executing this process by making use of the graphics processing units of computers' video cards, which provide a low-cost parallel execution environment that algorithms like fiber tracking can benefit from. With this method we have achieved performance gains varying from 14 to 40 times on common computers. Because of accuracy issues inherent to current graphics processing units, we define a variation index in order to assess how close the results obtained with our method are to those generated by programs running on the central processing units of computers. This index shows that results produced by our method are acceptable when compared to those of traditional programs.

  6. Transmit Array Interpolation for DOA Estimation via Tensor Decomposition in 2-D MIMO Radar

    NASA Astrophysics Data System (ADS)

    Cao, Ming-Yang; Vorobyov, Sergiy A.; Hassanien, Aboulnasr

    2017-10-01

    In this paper, we propose a two-dimensional (2D) joint transmit array interpolation and beamspace design for planar-array monostatic multiple-input multiple-output (MIMO) radar for direction-of-arrival (DOA) estimation via tensor modeling. Our underlying idea is to map the transmit array to a desired array and suppress the transmit power outside the spatial sector of interest. In doing so, the signal-to-noise ratio is improved at the receive array. Then, we fold the received data along each dimension into a tensorial structure and apply tensor-based methods to obtain DOA estimates. In addition, we derive a closed-form expression for the DOA estimation bias caused by interpolation errors and argue for using a specially designed look-up table to compensate for the bias. The corresponding Cramér-Rao bound (CRB) is also derived. Simulation results are provided to show the performance of the proposed method and to compare it to the CRB.

  7. Decomposition

    USGS Publications Warehouse

    Middleton, Beth A.

    2014-01-01

    A cornerstone of ecosystem ecology, decomposition was recognized as a fundamental process driving the exchange of energy in ecosystems by early ecologists such as Lindeman (1942) and Odum (1960). In the history of ecology, studies of decomposition were incorporated into the International Biological Program in the 1960s to compare the nature of organic matter breakdown in various ecosystem types. Such studies still have an important role in ecological studies today. More recent refinements have brought debates on the relative roles of microbes, invertebrates, and the environment in the breakdown and release of carbon into the atmosphere, as well as on how nutrient cycling, production, and other ecosystem processes regulated by decomposition may shift with climate change. Therefore, this bibliography examines the primary literature related to organic matter breakdown, but it also explores topics in which decomposition plays a key supporting role, including vegetation composition, latitudinal gradients, altered ecosystems, anthropogenic impacts, carbon storage, and climate change models. Knowledge of these topics is relevant both to the study of ecosystem ecology and to projections of future conditions for human societies.

  8. A non-statistical regularization approach and a tensor product decomposition method applied to complex flow data

    NASA Astrophysics Data System (ADS)

    von Larcher, Thomas; Blome, Therese; Klein, Rupert; Schneider, Reinhold; Wolf, Sebastian; Huber, Benjamin

    2016-04-01

    Handling high-dimensional data sets, such as those that occur in turbulent flows or in certain types of multiscale behaviour in the geosciences, is one of the big challenges in numerical analysis and scientific computing. A suitable solution is to represent those large data sets in an appropriate compact form. In this context, tensor product decomposition methods currently emerge as an important tool. One reason is that these methods often enable one to attack high-dimensional problems successfully; another is that they allow for very compact representations of large data sets. We follow the novel Tensor-Train (TT) decomposition method to support the development of improved understanding of the multiscale behavior and the development of compact storage schemes for solutions of such problems. One long-term goal of the project is the construction of a self-consistent closure for Large Eddy Simulations (LES) of turbulent flows that explicitly exploits the tensor product approach's capability of capturing self-similar structures. Secondly, we focus on a mixed deterministic-stochastic subgrid scale modelling strategy currently under development for application in Finite Volume Large Eddy Simulation (LES) codes. Advanced methods of time series analysis for the data-based construction of stochastic models with inherently non-stationary statistical properties, together with concepts of information theory based on a modified Akaike information criterion and on the Bayesian information criterion for model discrimination, are used to construct surrogate models for the non-resolved flux fluctuations. Vector-valued auto-regressive models with external influences form the basis for the modelling approach [1], [2], [4]. Here, we present the reconstruction capabilities of the two modeling approaches tested against 3D turbulent channel flow data computed by direct numerical simulation (DNS) for an incompressible, isothermal fluid at Reynolds number Reτ = 590 (computed by [3]). References [1] I
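
    The TT format mentioned above can be computed by sequential truncated SVDs (the TT-SVD scheme). Below is a minimal NumPy sketch of that idea, not the project's actual code; the tensor sizes and the rank cap are illustrative assumptions:

```python
import numpy as np

def tt_svd(tensor, rank):
    """Decompose a d-way array into Tensor-Train cores via sequential SVDs.

    At each step the remaining unfolding is split by a truncated SVD,
    with the TT-ranks capped at `rank` (no truncation if `rank` is large).
    """
    dims = tensor.shape
    cores = []
    r_prev = 1
    mat = tensor.reshape(r_prev * dims[0], -1)
    for k in range(len(dims) - 1):
        U, s, Vt = np.linalg.svd(mat, full_matrices=False)
        r = min(rank, len(s))
        cores.append(U[:, :r].reshape(r_prev, dims[k], r))
        mat = (s[:r, None] * Vt[:r]).reshape(r * dims[k + 1], -1)
        r_prev = r
    cores.append(mat.reshape(r_prev, dims[-1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract the TT cores back into the full array."""
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=([-1], [0]))
    return out.squeeze(axis=(0, -1))
```

    With no truncation the reconstruction is exact; choosing a smaller `rank` trades accuracy for the compact storage the abstract describes.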

  9. Advanced Insights into Functional Brain Connectivity by Combining Tensor Decomposition and Partial Directed Coherence

    PubMed Central

    Leistritz, Lutz; Witte, Herbert; Schiecke, Karin

    2015-01-01

    Quantification of functional connectivity in physiological networks is frequently performed by means of time-variant partial directed coherence (tvPDC), based on time-variant multivariate autoregressive models. The principal advantage of tvPDC lies in combining directionality, time variance, and frequency selectivity simultaneously, offering a more differentiated view into complex brain networks. Yet the advantages specific to tvPDC also produce a large number of results, leading to serious problems of interpretability. To counter this issue, we propose decomposing multi-dimensional tvPDC results into a sum of rank-1 outer products. This leads to a data condensation which enables an advanced interpretation of results. Furthermore, it is thereby possible to uncover inherent interaction patterns of induced neuronal subsystems by limiting the decomposition to several relevant channels, while retaining the global influence determined by the preceding multivariate AR estimation and tvPDC calculation of the entire scalp. Finally, comparison between subjects becomes considerably easier, as individual tvPDC results are summarized within a comprehensive model equipped with subject-specific loading coefficients. A proof-of-principle of the approach is provided by means of simulated data; EEG data from an experiment on visual evoked potentials are used to demonstrate the applicability to real data. PMID:26046537

  10. Adaptation of motor imagery EEG classification model based on tensor decomposition

    NASA Astrophysics Data System (ADS)

    Li, Xinyang; Guan, Cuntai; Zhang, Haihong; Keng Ang, Kai; Ong, Sim Heng

    2014-10-01

    Objective. Session-to-session nonstationarity is inherent in brain-computer interfaces based on electroencephalography. The objective of this paper is to quantify the mismatch between the training model and test data caused by nonstationarity and to adapt the model towards minimizing the mismatch. Approach. We employ a tensor model to estimate the mismatch in a semi-supervised manner, and the estimate is regularized in the discriminative objective function. Main results. The performance of the proposed adaptation method was evaluated on a dataset recorded from 16 subjects performing motor imagery tasks on different days. The classification results validated the advantage of the proposed method in comparison with other regularization-based or spatial filter adaptation approaches. Experimental results also showed that there is a significant correlation between the quantified mismatch and the classification accuracy. Significance. The proposed method approached the nonstationarity issue from the perspective of data-model mismatch, which is more direct than data variation measurement. The results also demonstrated that the proposed method is effective in enhancing the performance of the feature extraction model.

  11. A Continuum Damage Mechanics Model to Predict Kink-Band Propagation Using Deformation Gradient Tensor Decomposition

    NASA Technical Reports Server (NTRS)

    Bergan, Andrew C.; Leone, Frank A., Jr.

    2016-01-01

    A new model is proposed that represents the kinematics of kink-band formation and propagation within the framework of a mesoscale continuum damage mechanics (CDM) model. The model uses the recently proposed deformation gradient decomposition approach to represent a kink band as a displacement jump via a cohesive interface that is embedded in an elastic bulk material. The model is capable of representing the combination of matrix failure in the frame of a misaligned fiber and instability due to shear nonlinearity. In contrast to conventional linear or bilinear strain softening laws used in most mesoscale CDM models for longitudinal compression, the constitutive response of the proposed model includes features predicted by detailed micromechanical models. These features include: 1) the rotational kinematics of the kink band, 2) an instability when the peak load is reached, and 3) a nonzero plateau stress under large strains.

  12. Nested Vector-Sensor Array Processing via Tensor Modeling (Briefing Charts)

    DTIC Science & Technology

    2014-04-24

    The briefing presents the higher-order SVD (HOSVD) as a generalization of the matrix singular value decomposition (SVD) [3]. The HOSVD of tensor T can be written as T = K ×1 U1 ×2 U2 ×3 U3 ×4 U4, where U1, U3 ∈ C^(N̄×N̄) and U2, U4 ∈ C^(Nc×Nc) are orthonormal matrices provided by the SVD of the i-mode matricization of the tensor T: T(i) = Ui Λi Vi^H. K ∈ C^(N̄×Nc×N̄×Nc) is the core tensor.
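
    For concreteness, the HOSVD quoted above can be sketched for a tensor with any number of modes. This is a generic NumPy illustration of the decomposition, not the briefing's implementation:

```python
import numpy as np

def multi_mode_product(T, mats):
    """Multiply tensor T by a matrix along each mode in turn."""
    for i, M in enumerate(mats):
        T = np.moveaxis(np.tensordot(M, np.moveaxis(T, i, 0), axes=1), 0, i)
    return T

def hosvd(T):
    """Higher-order SVD: T = K x1 U1 x2 U2 ... xd Ud.

    Each Ui is taken from the SVD of the i-mode matricization of T,
    and the core K is T multiplied by the conjugate-transposed factors.
    """
    factors = []
    for i in range(T.ndim):
        unfolding = np.moveaxis(T, i, 0).reshape(T.shape[i], -1)
        U, _, _ = np.linalg.svd(unfolding, full_matrices=False)
        factors.append(U)
    core = multi_mode_product(T, [U.conj().T for U in factors])
    return core, factors
```

    Multiplying the core back by the factors reconstructs T exactly, since each Ui has orthonormal columns spanning the corresponding mode's fibers.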

  13. A patch-based tensor decomposition algorithm for M-FISH image classification.

    PubMed

    Wang, Min; Huang, Ting-Zhu; Li, Jingyao; Wang, Yu-Ping

    2017-06-01

    Multiplex-fluorescence in situ hybridization (M-FISH) is a chromosome imaging technique which can be used to detect chromosomal abnormalities such as translocations, deletions, duplications, and inversions. Chromosome classification from M-FISH imaging data is a key step in applying the technique. In the classified M-FISH image, each pixel in a chromosome is labeled with a class index and drawn with a pseudo-color so that geneticists can easily conduct diagnosis, for example, identifying chromosomal translocations by examining color changes between chromosomes. However, the information in a pixel's neighborhood is often overlooked by existing approaches. In this work, we assume that the pixels in a patch belong to the same class and use the patch to represent the center pixel's class information, which allows us to exploit the correlations of neighboring pixels and the structural information across different spectral channels for the classification. On the basis of this assumption, we propose a patch-based classification algorithm using higher-order singular value decomposition (HOSVD). The developed method has been tested on a comprehensive M-FISH database that we established, demonstrating improved performance. When compared with other pixel-wise M-FISH image classifiers such as fuzzy c-means clustering (FCM), adaptive fuzzy c-means clustering (AFCM), improved adaptive fuzzy c-means clustering (IAFCM), and sparse representation classification (SparseRC) methods, the proposed method gave the highest correct classification ratio (CCR), which can translate into improved diagnosis of genetic diseases and cancers. © 2016 International Society for Advancement of Cytometry.

  14. When are Overcomplete Topic Models Identifiable? Uniqueness of Tensor Tucker Decompositions with Structured Sparsity

    DTIC Science & Technology

    2013-08-14


  15. Tracking Hierarchical Processing in Morphological Decomposition with Brain Potentials

    ERIC Educational Resources Information Center

    Lavric, Aureliu; Elchlepp, Heike; Rastle, Kathleen

    2012-01-01

    One important debate in psycholinguistics concerns the nature of morphological decomposition processes in visual word recognition (e.g., darkness = {dark} + {-ness}). One theory claims that these processes arise during orthographic analysis and prior to accessing meaning (Rastle & Davis, 2008), and another argues that these processes arise through…

  17. Tensor Invariant Processing for Munitions/Clutter Classifications Interim Report on SNR and Background Leveling Requirements

    DTIC Science & Technology

    2012-12-01

    The study analyzed data collected at Camp Beale in 2011 and found no impact due to signal-to-noise ratio (SNR) and background leveling effects. However, the minimum polarizability…

  18. Low-rank canonical-tensor decomposition of potential energy surfaces: application to grid-based diagrammatic vibrational Green's function theory

    DOE PAGES

    Rai, Prashant; Sargsyan, Khachik; Najm, Habib; ...

    2017-03-07

    Here, a new method is proposed for a fast evaluation of high-dimensional integrals of potential energy surfaces (PES) that arise in many areas of quantum dynamics. It decomposes a PES into a canonical low-rank tensor format, reducing its integral into a relatively short sum of products of low-dimensional integrals. The decomposition is achieved by the alternating least squares (ALS) algorithm, requiring only a small number of single-point energy evaluations. Therefore, it eradicates a force-constant evaluation as the hotspot of many quantum dynamics simulations and also possibly lifts the curse of dimensionality. This general method is applied to the anharmonic vibrational zero-point and transition energy calculations of molecules using the second-order diagrammatic vibrational many-body Green's function (XVH2) theory with a harmonic-approximation reference. In this application, high dimensional PES and Green's functions are both subjected to a low-rank decomposition. Evaluating the molecular integrals over a low-rank PES and Green's functions as sums of low-dimensional integrals using the Gauss–Hermite quadrature, this canonical-tensor-decomposition-based XVH2 (CT-XVH2) achieves an accuracy of 0.1 cm-1 or higher and nearly an order of magnitude speedup as compared with the original algorithm using force constants for water and formaldehyde.
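
    The ALS-based canonical (CP) decomposition described above can be sketched for a third-order array. This is a generic NumPy illustration of CP-ALS, not the CT-XVH2 code; the sizes, rank, and iteration count are arbitrary assumptions:

```python
import numpy as np

def cp_als(T, rank, n_iter=500, seed=0):
    """Canonical (CP) decomposition of a 3-way array by alternating least squares.

    Returns factor matrices A, B, C with T ≈ sum_r A[:, r] ∘ B[:, r] ∘ C[:, r].
    Each sweep solves a linear least-squares problem for one factor while the
    other two are held fixed.
    """
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    A = rng.standard_normal((I, rank))
    B = rng.standard_normal((J, rank))
    C = rng.standard_normal((K, rank))
    # Mode unfoldings consistent with C-order flattening (last index fastest).
    T0 = T.reshape(I, -1)
    T1 = np.moveaxis(T, 1, 0).reshape(J, -1)
    T2 = np.moveaxis(T, 2, 0).reshape(K, -1)
    # Khatri-Rao (column-wise Kronecker) product.
    kr = lambda X, Y: (X[:, None, :] * Y[None, :, :]).reshape(-1, rank)
    for _ in range(n_iter):
        A = T0 @ np.linalg.pinv(kr(B, C).T)
        B = T1 @ np.linalg.pinv(kr(A, C).T)
        C = T2 @ np.linalg.pinv(kr(A, B).T)
    return A, B, C
```

    On an exactly low-rank tensor the residual typically drops to near machine precision; for a PES one would instead fit sampled single-point energies, as the abstract describes.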

  19. Decomposition: A Strategy for Query Processing.

    ERIC Educational Resources Information Center

    Wong, Eugene; Youssefi, Karel

    Multivariable queries can be processed in the data base management system INGRES. The general procedure is to decompose the query into a sequence of one-variable queries using two processes. One process is reduction which requires breaking off components of the query which are joined to it by a single variable. The other process,…

  20. Theoretical estimate on tensor-polarization asymmetry in proton-deuteron Drell-Yan process

    NASA Astrophysics Data System (ADS)

    Kumano, S.; Song, Qin-Tao

    2016-09-01

    Tensor-polarized parton distribution functions are new quantities in spin-1 hadrons such as the deuteron, and they could probe new quark-gluon dynamics in hadron and nuclear physics. In charged-lepton deep inelastic scattering, they are studied via the twist-2 structure functions b1 and b2. The HERMES Collaboration found unexpectedly large b1 values compared to a naive theoretical expectation based on the standard deuteron model. The situation should be significantly improved in the near future by an approved experiment to measure b1 at the Thomas Jefferson National Accelerator Facility (JLab). There is also an interesting indication in the HERMES result that a finite antiquark tensor polarization exists. It could play an important role in clarifying the mechanism of tensor structure at the quark-gluon level. The tensor-polarized antiquark distributions are not easily determined from charged-lepton deep inelastic scattering; however, they can be measured in a proton-deuteron Drell-Yan process with a tensor-polarized deuteron target. In this article, we estimate the tensor-polarization asymmetry for a possible Fermilab Main Injector experiment using optimum tensor-polarized parton distribution functions that explain the HERMES measurement. We find that the asymmetry is typically a few percent. If measured, it could probe new hadron physics, and such studies could create an interesting field of high-energy spin physics. In addition, we find that a significant tensor-polarized gluon distribution should exist due to Q2 evolution, even if it were zero at a low Q2 scale. The tensor-polarized gluon distribution has never been observed, so it is an interesting future project.

  1. Relativized hierarchical decomposition of Markov decision processes.

    PubMed

    Ravindran, B

    2013-01-01

    Reinforcement Learning (RL) is a popular paradigm for sequential decision making under uncertainty. A typical RL algorithm operates with only limited knowledge of the environment and with limited feedback on the quality of the decisions. To operate effectively in complex environments, learning agents require the ability to form useful abstractions, that is, the ability to selectively ignore irrelevant details. It is difficult to derive a single representation that is useful for a large problem setting. In this chapter, we describe a hierarchical RL framework that incorporates an algebraic framework for modeling task-specific abstraction. The basic notion that we will explore is that of a homomorphism of a Markov Decision Process (MDP). We mention various extensions of the basic MDP homomorphism framework in order to accommodate different commonly understood notions of abstraction, namely, aspects of selective attention. Parts of the work described in this chapter have been reported earlier in several papers (Narayanmurthy and Ravindran, 2007, 2008; Ravindran and Barto, 2002, 2003a,b; Ravindran et al., 2007).

  2. The ergodic decomposition of stationary discrete random processes

    NASA Technical Reports Server (NTRS)

    Gray, R. M.; Davisson, L. D.

    1974-01-01

    The ergodic decomposition is discussed, and a version focusing on the structure of individual sample functions of stationary processes is proved for the special case of discrete-time random processes with discrete alphabets. The result is stronger in this case than the usual theorem, and the proof is both intuitive and simple. Estimation-theoretic and information-theoretic interpretations are developed and applied to prove existence theorems for universal source codes, both noiseless and with a fidelity criterion.

  3. Density Functional Studies of Decomposition Processes of Energetic Molecules

    DTIC Science & Technology

    1994-11-03

    Peter Politzer, Jorge M. Seminario and M. Edward Grice

  4. Analysis of benzoquinone decomposition in solution plasma process

    NASA Astrophysics Data System (ADS)

    Bratescu, M. A.; Saito, N.

    2016-01-01

    The decomposition of p-benzoquinone (p-BQ) in Solution Plasma Processing (SPP) was analyzed by Coherent Anti-Stokes Raman Spectroscopy (CARS), monitoring the change in the anti-Stokes signal intensity of the molecule's vibrational transitions during and after SPP. At the very beginning of the SPP treatment, the CARS signal intensities of the ring vibrational molecular transitions increased under the influence of the electric field of the plasma. The results show that the plasma influences the p-BQ molecules in two ways: (i) it polarizes and orients the molecules in the local electric field of the plasma, and (ii) the gas-phase plasma supplies the liquid phase with hydrogen and hydroxyl radicals, which reduce or oxidize the molecules, respectively, generating different carboxylic acids. The decomposition of p-BQ after SPP was confirmed by UV-visible absorption spectroscopy and liquid chromatography.

  5. Matrix decomposition graphics processing unit solver for Poisson image editing

    NASA Astrophysics Data System (ADS)

    Lei, Zhao; Wei, Li

    2012-10-01

    In recent years, gradient-domain methods have been widely discussed in the image processing field, including seamless cloning and image stitching. These algorithms are commonly carried out by solving a large sparse linear system: the Poisson equation. However, solving the Poisson equation is a computationally and memory-intensive task, which makes it unsuitable for real-time image editing. A new matrix decomposition graphics processing unit (GPU) solver (MDGS) is proposed to address the problem. A matrix decomposition method is used to distribute the work among GPU threads, so that MDGS takes full advantage of the computing power of current GPUs. Additionally, MDGS is a hybrid solver (combining both direct and iterative techniques) and has a two-level architecture. These features enable MDGS to generate solutions identical to those of the common Poisson methods and to achieve a high convergence rate in most cases. This approach is advantageous in terms of parallelizability, enabling real-time image processing, low memory consumption, and broad applicability.
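
    For reference, the sparse Poisson system such GPU solvers accelerate can be assembled and solved directly on the CPU. A small sketch using SciPy's direct solver, as a baseline of the kind MDGS is compared against; the function name and the toy data are assumptions:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def poisson_edit(div_g, target, mask):
    """Solve the discrete Poisson equation Δf = div_g on the masked region,
    with f pinned to `target` outside the mask (Dirichlet conditions).

    This is the large sparse linear system described above, solved with a
    CPU sparse direct solver; it is not the MDGS implementation.
    """
    h, w = target.shape
    idx = np.arange(h * w).reshape(h, w)
    A = sp.lil_matrix((h * w, h * w))
    b = np.zeros(h * w)
    for i in range(h):
        for j in range(w):
            k = idx[i, j]
            if mask[i, j] and 0 < i < h - 1 and 0 < j < w - 1:
                # 5-point Laplacian row: f_N + f_S + f_E + f_W - 4 f = div_g
                A[k, k] = -4.0
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    A[k, idx[i + di, j + dj]] = 1.0
                b[k] = div_g[i, j]
            else:
                A[k, k] = 1.0          # pixel pinned to the target image
                b[k] = target[i, j]
    return spla.spsolve(A.tocsr(), b).reshape(h, w)
```

    With zero guidance divergence and a constant target, the interior fill is harmonic and reproduces the constant, which is a convenient sanity check.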

  6. A decomposition of irreversible diffusion processes without detailed balance

    NASA Astrophysics Data System (ADS)

    Qian, Hong

    2013-05-01

    As a generalization of deterministic, nonlinear conservative dynamical systems, a notion of canonical conservative dynamics with respect to a positive, differentiable stationary density ρ(x) is introduced: ẋ = j(x) in which ∇·(ρ(x)j(x)) = 0. Such systems have a conserved "generalized free energy function" F[u] = ∫ u(x, t) ln(u(x, t)/ρ(x)) dx in phase space, with a density flow u(x, t) satisfying ∂u/∂t = -∇·(ju). Any general stochastic diffusion process without detailed balance, in terms of its Fokker-Planck equation, can be decomposed into a reversible diffusion process with detailed balance and a canonical conservative dynamics. This decomposition can be rigorously established in a function space with inner product defined as ⟨ϕ, ψ⟩ = ∫ ρ⁻¹(x)ϕ(x)ψ(x) dx. Furthermore, a law for balancing F[u] can be obtained: the non-positive dF[u(x, t)]/dt = Ein(t) - ep(t), where the "source" Ein(t) ⩾ 0 and the "sink" ep(t) ⩾ 0 are known as house-keeping heat and entropy production, respectively. A reversible diffusion has Ein(t) = 0. For a linear (Ornstein-Uhlenbeck) diffusion process, our decomposition is equivalent to the previous approaches developed by Graham and Ao, as well as the theory of large deviations. In terms of two different formulations of time reversal for the same stochastic process, the meanings of dissipative and conservative stationary dynamics are discussed.
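
    The decomposition can be written compactly; the following is a hedged sketch assuming a constant diffusion matrix D (a notational assumption the abstract does not fix):

```latex
\[
  \partial_t u = \nabla\cdot\bigl(D\,\nabla u - b\,u\bigr), \qquad
  b(x) = \underbrace{D\,\nabla\ln\rho(x)}_{\text{reversible}}
       + \underbrace{j(x)}_{\text{conservative}},
\]
\[
  \nabla\cdot\bigl(D\nabla\rho - b\,\rho\bigr) = 0
  \;\Longrightarrow\;
  \nabla\cdot\bigl(\rho(x)\,j(x)\bigr)
  = \nabla\cdot\bigl(\rho\,b - D\nabla\rho\bigr) = 0 .
\]
```

    The gradient part alone satisfies detailed balance with respect to ρ, while j carries exactly the ρ-divergence-free conservative flow defined above.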

  7. Accelerated decomposition techniques for large discounted Markov decision processes

    NASA Astrophysics Data System (ADS)

    Larach, Abdelhadi; Chafik, S.; Daoui, C.

    2017-03-01

    Many hierarchical techniques for solving large Markov decision processes (MDPs) are based on partitioning the state space into strongly connected components (SCCs) that can be classified into levels. In each level, smaller problems named restricted MDPs are solved, and these partial solutions are then combined to obtain the global solution. In this paper, we first propose a novel algorithm, a variant of Tarjan's algorithm, that simultaneously finds the SCCs and the levels to which they belong. Second, a new definition of the restricted MDPs is presented to improve some hierarchical solutions in discounted MDPs using a value iteration (VI) algorithm based on a list of state-action successors. Finally, a robotic motion-planning example and the experimental results are presented to illustrate the benefit of the proposed decomposition algorithms.
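
    The classical Tarjan routine that the proposed variant extends can be sketched as follows. This is a textbook version returning only the SCCs (in reverse topological order); the level bookkeeping of the paper's variant is omitted:

```python
def tarjan_scc(graph):
    """Tarjan's algorithm: strongly connected components of a directed graph.

    `graph` maps each node to a list of its successors.  Each SCC is emitted
    when its root (the first node of the component reached by the DFS) is
    closed, so components come out in reverse topological order.
    """
    index, low, on_stack = {}, {}, set()
    stack, sccs, counter = [], [], [0]

    def strongconnect(v):
        index[v] = low[v] = counter[0]
        counter[0] += 1
        stack.append(v)
        on_stack.add(v)
        for w in graph.get(v, ()):
            if w not in index:
                strongconnect(w)
                low[v] = min(low[v], low[w])
            elif w in on_stack:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:          # v is the root of an SCC
            comp = []
            while True:
                w = stack.pop()
                on_stack.discard(w)
                comp.append(w)
                if w == v:
                    break
            sccs.append(comp)

    for v in graph:
        if v not in index:
            strongconnect(v)
    return sccs
```

    The reverse topological ordering is what makes the level-by-level solution of restricted MDPs possible: each component can be solved after all components it can reach.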

  8. Canonical polyadic decomposition of third-order semi-nonnegative semi-symmetric tensors using LU and QR matrix factorizations

    NASA Astrophysics Data System (ADS)

    Wang, Lu; Albera, Laurent; Kachenoura, Amar; Shu, Huazhong; Senhadji, Lotfi

    2014-12-01

    Semi-symmetric three-way arrays are essential tools in blind source separation (BSS), particularly in independent component analysis (ICA). These arrays can be built by resorting to higher order statistics of the data. The canonical polyadic (CP) decomposition of such semi-symmetric three-way arrays allows us to identify the so-called mixing matrix, which contains the information about the intensities of some latent source signals present in the observation channels. In addition, in many applications, such as magnetic resonance spectroscopy (MRS), the columns of the mixing matrix are viewed as relative concentrations of the spectra of the chemical components. Therefore, the two loading matrices of the three-way array, which are equal to the mixing matrix, are nonnegative. Most existing CP algorithms handle the symmetry and the nonnegativity separately. Up to now, very few of them consider both the semi-nonnegativity and the semi-symmetry structure of the three-way array. Nevertheless, like all the methods based on line search, trust region strategies, and alternating optimization, they appear to be dependent on initialization, requiring in practice a multi-initialization procedure. In order to overcome this drawback, we propose two new methods, called [InlineEquation not available: see fulltext.] and [InlineEquation not available: see fulltext.], to solve the problem of CP decomposition of semi-nonnegative semi-symmetric three-way arrays. Firstly, we rewrite the constrained optimization problem as an unconstrained one. In fact, the nonnegativity constraint of the two symmetric modes is ensured by means of a square change of variable. Secondly, a Jacobi-like optimization procedure is adopted because of its good convergence property. More precisely, the two new methods use LU and QR matrix factorizations, respectively, which consist of reformulating the high-dimensional optimization problem as several sequential polynomial and rational subproblems.

  9. Catalytic hydrothermal processing of microalgae: decomposition and upgrading of lipids.

    PubMed

    Biller, P; Riley, R; Ross, A B

    2011-04-01

    Hydrothermal processing of high-lipid feedstock such as microalgae is an alternative method of oil extraction which has obvious benefits for biomass with high moisture content. A range of microalgae and lipids extracted from terrestrial oil seeds have been processed at 350 °C, at pressures of 150-200 bar in water. Hydrothermal liquefaction is shown to convert the triglycerides to fatty acids and alkanes in the presence of certain heterogeneous catalysts. This investigation has compared the composition of lipids and free fatty acids from solvent extraction to those from hydrothermal processing. The initial decomposition products include free fatty acids and glycerol, and the potential for de-oxygenation using heterogeneous catalysts has been investigated. The results indicate that the bio-crude yields from the liquefaction of microalgae were increased slightly with the use of heterogeneous catalysts, while the higher heating value (HHV) and the level of de-oxygenation increased by up to 10%. Copyright © 2011 Elsevier Ltd. All rights reserved.

  10. ENVIRONMENTAL ASSESSMENT OF THE BASE CATALYZED DECOMPOSITION (BCD) PROCESS

    EPA Science Inventory

    This report summarizes laboratory-scale, pilot-scale, and field performance data on BCD (Base Catalyzed Decomposition) technology collected to date by various governmental, academic, and private organizations.

  12. CO2 decomposition using electrochemical process in molten salts

    NASA Astrophysics Data System (ADS)

    Otake, Koya; Kinoshita, Hiroshi; Kikuchi, Tatsuya; Suzuki, Ryosuke O.

    2012-08-01

    The electrochemical decomposition of CO2 gas to carbon and oxygen gas in LiCl-Li2O and CaCl2-CaO molten salts was studied. This process consists of the electrochemical reduction of Li2O and CaO, as well as the thermal reduction of CO2 gas by the respective metallic Li and Ca. Two kinds of ZrO2 solid electrolytes were tested as oxygen ion conductors, and the electrolytes removed oxygen ions from the molten salts to the outside of the reactor. After electrolysis in both salts, aggregations of nanometer-scale amorphous carbon and rod-like graphite crystals were observed by transmission electron microscopy. When 9.7% CO2-Ar mixed gas was blown into the LiCl-Li2O and CaCl2-CaO molten salts, the current efficiency was evaluated to be 89.7% and 78.5%, respectively, from the exhaust gas analysis and the supplied charge. When a solid electrolyte with higher ionic conductivity was used, the current and carbon production increased. It was found that the rate-determining step is the diffusion of oxygen ions into the ZrO2 solid electrolyte.

  13. Decomposition of two haloacetic acids in water using UV radiation, ozone and advanced oxidation processes.

    PubMed

    Wang, Kunping; Guo, Jinsong; Yang, Min; Junji, Hirotsuji; Deng, Rongsen

    2009-03-15

    The decomposition of two haloacetic acids (HAAs), dichloroacetic acid (DCAA) and trichloroacetic acid (TCAA), in water was studied by means of the single oxidants ozone and UV radiation, and by the advanced oxidation processes (AOPs) constituted by the combinations O3/UV, H2O2/UV, O3/H2O2, and O3/H2O2/UV. The concentrations of HAAs were analyzed at specified time intervals to elucidate the decomposition of HAAs. Single O3 or UV did not result in perceptible decomposition of HAAs within the applied reaction time. O3/UV proved the most suitable of the six oxidation methods for the decomposition of DCAA and TCAA in water. Decomposition of DCAA by AOPs was easier than that of TCAA. For O3/UV in the semi-continuous mode, the effective utilization rate of ozone for HAA decomposition decreased with ozone addition. The kinetics of HAA decomposition by O3/UV and the influence of coexistent humic acids and HCO3- on the decomposition process were investigated. The decomposition of the HAAs by O3/UV followed pseudo-first-order kinetics under constant initial dissolved O3 concentration and fixed UV radiation. The pseudo-first-order rate constant for the decomposition of DCAA was more than four times that for TCAA. Humic acids can cause H2O2 accumulation and a decrease in the rate constants of HAA decomposition in the O3/UV process. The rate constants for the decomposition of DCAA and TCAA decreased by 41.1% and 23.8%, respectively, when humic acids were added at a concentration of 1.2 mg TOC/L. The rate constants decreased by 43.5% and 25.9%, respectively, at an HCO3- concentration of 1.0 mmol/L.
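
    The pseudo-first-order analysis above amounts to fitting ln(C0/C) = kt. A small sketch of that fit with synthetic data; the rate constant k = 0.12 min⁻¹ and the time grid are invented for illustration, not values from the study:

```python
import numpy as np

def rate_constant(times, conc):
    """Pseudo-first-order fit: C(t) = C0 exp(-k t), i.e. ln(C0/C) = k t.

    Returns k as the least-squares slope through the origin of
    ln(C0/C) versus t.
    """
    t = np.asarray(times, dtype=float)
    c = np.asarray(conc, dtype=float)
    y = np.log(c[0] / c)
    return float(np.sum(t * y) / np.sum(t * t))

# Synthetic decay with an assumed k = 0.12 min^-1 (not a measured value):
t = np.linspace(0.0, 30.0, 7)
c = np.exp(-0.12 * t)
k_fit = rate_constant(t, c)
```

    With real data, comparing the fitted constants for DCAA and TCAA in this way yields the ratio (more than four) reported above.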

  14. Tensor-based Dictionary Learning for Spectral CT Reconstruction

    PubMed Central

    Zhang, Yanbo; Wang, Ge

    2016-01-01

    Spectral computed tomography (CT) produces an energy-discriminative attenuation map of an object, extending a conventional image volume with a spectral dimension. In spectral CT, an image can be sparsely represented in each of multiple energy channels, and the channel images are highly correlated with each other. Exploiting these characteristics, we propose a tensor-based dictionary learning method for spectral CT reconstruction. In our method, tensor patches are extracted from an image tensor, which is reconstructed using filtered backprojection (FBP), to form a training dataset. With the Candecomp/Parafac decomposition, a tensor-based dictionary is trained in which each atom is a rank-one tensor. The trained dictionary is then used to sparsely represent image tensor patches during an iterative reconstruction process, and an alternating minimization scheme is adapted for optimization. The effectiveness of the proposed method is validated with both numerically simulated and real preclinical mouse datasets. The results demonstrate that the proposed tensor-based method generally produces superior image quality and leads to more accurate material decomposition than the currently popular methods. PMID:27541628

  15. Capturing molecular multimode relaxation processes in excitable gases based on decomposition of acoustic relaxation spectra

    NASA Astrophysics Data System (ADS)

    Zhu, Ming; Liu, Tingting; Wang, Shu; Zhang, Kesheng

    2017-08-01

    Existing two-frequency reconstructive methods can only capture primary (single) molecular relaxation processes in excitable gases. In this paper, we present a reconstructive method based on the novel decomposition of frequency-dependent acoustic relaxation spectra to capture the entire molecular multimode relaxation process. This decomposition of acoustic relaxation spectra is developed from the frequency-dependent effective specific heat, indicating that a multi-relaxation process is the sum of the interior single-relaxation processes. Based on this decomposition, we can reconstruct the entire multi-relaxation process by capturing the relaxation times and relaxation strengths of N interior single-relaxation processes, using the measurements of acoustic absorption and sound speed at 2N frequencies. Experimental data for the gas mixtures CO2-N2 and CO2-O2 validate our decomposition and reconstruction approach.
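The central idea above is that a multi-relaxation absorption spectrum is the sum of interior single-relaxation curves, each characterized by a relaxation frequency and strength. A schematic sketch (Debye-type peak shapes with made-up frequencies and strengths, not fitted CO2-N2 data):

```python
import numpy as np

def single_relaxation(f, f_r, strength):
    """Dimensionless relaxational absorption (per wavelength) of one process.

    Peaks with value `strength` at the relaxation frequency f_r."""
    x = f / f_r
    return 2.0 * strength * x / (1.0 + x**2)

# A two-mode spectrum as the sum of its interior single-relaxation processes
# (frequencies/strengths are illustrative placeholders):
f = np.logspace(2, 7, 2000)                       # Hz
total = single_relaxation(f, 1e4, 0.3) + single_relaxation(f, 1e6, 0.1)

# Near its own relaxation frequency each interior process dominates, so its
# strength can be read off the summed spectrum almost directly:
print(total[np.argmin(np.abs(f - 1e4))])
```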

  16. Predictability of the Dynamic Mode Decomposition in Coastal Processes

    NASA Astrophysics Data System (ADS)

    Wang, Ruo-Qian; Herdman, Liv; Stacey, Mark; Barnard, Patrick

    2016-11-01

    Dynamic Mode Decomposition (DMD) is a model order reduction technique that helps reduce the complexity of computational models, and it is frequently easier to interpret physically than the Proper Orthogonal Decomposition. DMD also produces an eigenvalue for each mode that establishes the mode's rate of growth or decay, but the original DMD does not provide the contributing weights of the modes, which makes it challenging to select the important modes for building a reduced order model. DMD variants have been developed to estimate the weight of each mode. One popular method is Optimal Mode Decomposition (OMD), which decomposes the data matrix into a product of the DMD modes, a diagonal weight matrix, and a Vandermonde matrix. The weight matrix can be used to rank the importance of the mode contributions and ultimately leads to a reduced order model for prediction and control purposes. We are currently applying DMD to a numerical simulation of the San Francisco Bay, which features complicated coastal geometry, multiple frequency components, and high periodicity. Since DMD defines modes with specific frequencies, we expected DMD to produce a good approximation, but preliminary results show that the predictability of the DMD is poor if unimportant modes are dropped according to the OMD. We are currently testing other DMD variants and will report our findings in the presentation.
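The core of exact DMD is a projected linear operator whose eigenvalues give each mode's growth or decay rate, as mentioned above. A minimal sketch on a toy linear system (not the Bay simulation):

```python
import numpy as np

def dmd_eigs(snapshots, rank):
    """Eigenvalues of the exact DMD operator from a snapshot matrix.

    snapshots: array of shape (n_states, n_times)."""
    x, y = snapshots[:, :-1], snapshots[:, 1:]
    u, s, vt = np.linalg.svd(x, full_matrices=False)
    u, s, v = u[:, :rank], s[:rank], vt[:rank].conj().T
    a_tilde = u.conj().T @ y @ v / s   # projected operator U* Y V S^-1
    return np.linalg.eigvals(a_tilde)

# Snapshots of a known linear map x_{k+1} = A x_k; DMD recovers eig(A):
a = np.diag([0.9, 0.5, -0.3])
x0 = np.array([1.0, -2.0, 0.7])
snaps = np.column_stack([np.linalg.matrix_power(a, k) @ x0 for k in range(10)])
print(np.sort(dmd_eigs(snaps, rank=3).real))  # -> approx [-0.3, 0.5, 0.9]
```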

  17. Process characteristics and layout decomposition of self-aligned sextuple patterning

    NASA Astrophysics Data System (ADS)

    Kang, Weiling; Chen, Yijian

    2013-03-01

    Self-aligned sextuple patterning (SASP) is a promising technique for scaling the half pitch of IC features down to the sub-10 nm regime. In this paper, the process characteristics and decomposition methods of both positive-tone (pSASP) and negative-tone (nSASP) SASP techniques are discussed, and a variety of decomposition rules are studied. By using a node-grouping method, the nSASP layout conflict graph can be significantly simplified. A graph searching and coloring algorithm is developed for feature/color assignment. We demonstrate that by generating assisting mandrels, the nSASP layout decomposition can be reduced to an nSADP decomposition problem. The proposed decomposition algorithm is successfully verified with several commonly used 2-D layout examples.
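The feature/color assignment step above is a graph-coloring problem on the layout conflict graph. A generic greedy sketch (a toy stand-in, not the paper's node-grouping algorithm, with hypothetical features A-E):

```python
def greedy_color(conflicts):
    """Greedy assignment of mask colors to features of a conflict graph.

    conflicts: dict mapping feature -> set of conflicting features."""
    colors = {}
    for node in sorted(conflicts):      # deterministic visiting order
        used = {colors[n] for n in conflicts[node] if n in colors}
        colors[node] = next(c for c in range(len(conflicts)) if c not in used)
    return colors

# Toy conflict graph for a decomposed layout (hypothetical features):
g = {'A': {'B', 'C'}, 'B': {'A', 'C'}, 'C': {'A', 'B', 'D'},
     'D': {'C', 'E'}, 'E': {'D'}}
colors = greedy_color(g)
# A valid decomposition: no two conflicting features share a mask color.
assert all(colors[u] != colors[v] for u in g for v in g[u])
print(colors)
```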

  18. C++ Tensor Toolbox user manual.

    SciTech Connect

    Plantenga, Todd D.; Kolda, Tamara Gibson

    2012-04-01

    The C++ Tensor Toolbox is a software package for computing tensor decompositions. It is based on the Matlab Tensor Toolbox, and is particularly optimized for sparse data sets. This user manual briefly overviews tensor decomposition mathematics, software capabilities, and installation of the package. Tensors (also known as multidimensional arrays or N-way arrays) are used in a variety of applications ranging from chemometrics to network analysis. The Tensor Toolbox provides classes for manipulating dense, sparse, and structured tensors in C++. The Toolbox compiles into libraries and is intended for use with custom applications written by users.

  19. Azo dye Acid Red 27 decomposition kinetics during ozone oxidation and adsorption processes.

    PubMed

    Beak, Mi H; Ijagbemi, Christianah O; Kim, Dong S

    2009-05-01

    To elucidate the effects of ozone dosage, catalysts, and temperature on the azo dye decomposition rate in treatment processes, the decomposition kinetics of Acid Red 27 by ozone was investigated. Acid Red 27 decomposition followed first-order kinetics, with complete dye discoloration within 20 min of ozone reaction, and the dye decay rate increased with increasing ozone dosage. Among Mn, Zn and Ni used as transition metal catalysts during the ozone oxidation process, Mn displayed the greatest catalytic effect, with a significant increase in the rate of decomposition. The decomposition rate decreased with increasing temperature up to 40 degrees C; beyond 40 degrees C, the decomposition rate increased with temperature. The FT-IR spectra in the range of 1,000-1,800 cm(-1) revealed specific band variations after the ozone oxidation process, indicating structural changes traceable to cleavage of bonds in the benzene ring, the sulphite salt group, and the C-N bond located beside the -N=N- bond. In the (1)H-NMR spectra, the breakdown of the benzene ring was shown by the disappearance of the 10 H peaks at 7-8 ppm and the emergence of a new peak at 6.16 ppm. In a parallel batch test of azo dye Acid Red 27 adsorption onto activated carbon, a low adsorption capacity was observed when adsorption was carried out after three minutes of ozone injection, whereas the adsorption process without ozone injection yielded a high adsorption capacity.

  20. Complex variational mode decomposition for signal processing applications

    NASA Astrophysics Data System (ADS)

    Wang, Yanxue; Liu, Fuyun; Jiang, Zhansi; He, Shuilong; Mo, Qiuyun

    2017-03-01

    Complex-valued signals occur in many areas of science and engineering and are thus of fundamental interest. In this work, the complex variational mode decomposition (CVMD) is proposed as a natural and generic extension of the original VMD algorithm to the analysis of complex-valued data. Moreover, the equivalent filter bank structure of the CVMD in the presence of white noise and the effects of center-frequency initialization on the filter bank property are both investigated via numerical experiments. Benefiting from the advantages of the CVMD algorithm, its bi-directional Hilbert time-frequency spectrum is developed as well, in which the positive and negative frequency components are formulated on the positive and negative frequency planes separately. Several applications to real-world complex-valued signals support the analysis.
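The bi-directional spectrum above rests on the fact that a complex-valued signal carries independent positive- and negative-frequency content. A minimal FFT-masking sketch of that split (not the CVMD algorithm itself; keeping the DC bin with the positive part is an assumption here):

```python
import numpy as np

def split_pos_neg(z):
    """Split a complex signal into positive- and negative-frequency parts."""
    spec = np.fft.fft(z)
    freqs = np.fft.fftfreq(len(z))
    pos = np.where(freqs >= 0, spec, 0)   # DC kept on the positive plane
    neg = np.where(freqs < 0, spec, 0)
    return np.fft.ifft(pos), np.fft.ifft(neg)

# A complex tone at +5 cycles/record lives entirely on the positive plane:
n = 64
t = np.arange(n)
z = np.exp(2j * np.pi * 5 * t / n)
z_pos, z_neg = split_pos_neg(z)
print(np.allclose(z_pos, z), np.allclose(z_neg, 0))  # -> True True
```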

  1. Diffusion tensor imaging reveals white matter microstructure correlations with auditory processing ability.

    PubMed

    Schmithorst, Vincent J; Holland, Scott K; Plante, Elena

    2011-01-01

    Correlation of white matter microstructure with various cognitive processing tasks and with overall intelligence has been previously demonstrated. We investigate the correlation of white matter microstructure with various higher-order auditory processing tasks, including interpretation of speech-in-noise, recognition of low-pass frequency filtered words, and interpretation of time-compressed sentences at two different values of compression. These tests are typically used to diagnose auditory processing disorder (APD) in children. Our hypothesis is that correlations will be seen between white matter microstructure and task performance in tracts connecting the temporal, frontal, and parietal lobes, as well as in callosal pathways, since previous functional imaging studies have shown activation in temporal, frontal, and parietal regions during higher-order auditory processing tasks. In addition, we hypothesize that the regions displaying correlations will vary according to the task, because each task uses a different set of skills. Diffusion tensor imaging (DTI) data were acquired from a cohort of 17 normal-hearing children aged 9 to 11 years. Fractional anisotropy (FA), a measure of white matter fiber tract integrity and organization, was computed and correlated on a voxelwise basis with performance on the auditory processing tasks, controlling for age, sex, and full-scale IQ. Divergent correlations of white matter FA depending on the particular auditory processing task were found. Positive correlations were found between FA and speech-in-noise in white matter adjoining prefrontal areas, and between FA and filtered words in the corpus callosum. Regions exhibiting correlations with time-compressed sentences varied with the degree of compression: the greater degree of compression (with the greatest difficulty) resulted in correlations in white matter adjoining prefrontal (dorsal and ventral) areas, whereas the smaller degree of compression (with less difficulty) resulted in
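The voxelwise analysis above correlates FA with task scores while controlling for covariates. One standard way to do this (a generic residualization sketch with synthetic data, not the study's pipeline) is to regress the covariates out of both variables and correlate the residuals:

```python
import numpy as np

def partial_corr(x, y, covariates):
    """Correlate x and y after regressing covariates (plus intercept) out of both."""
    z = np.column_stack([np.ones(len(x)), covariates])
    rx = x - z @ np.linalg.lstsq(z, x, rcond=None)[0]
    ry = y - z @ np.linalg.lstsq(z, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

# Two measures driven only by a shared covariate (e.g. age): the raw
# correlation is large, the partial correlation is near zero.
rng = np.random.default_rng(0)
age = rng.standard_normal(2000)
fa = age + rng.standard_normal(2000)      # stand-in for a voxel's FA
score = age + rng.standard_normal(2000)   # stand-in for a task score
print(np.corrcoef(fa, score)[0, 1], partial_corr(fa, score, age))
```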

  2. Developmental process of the arcuate fasciculus from infancy to adolescence: a diffusion tensor imaging study

    PubMed Central

    Tak, Hyeong Jun; Kim, Jin Hyun; Son, Su Min

    2016-01-01

    We investigated the radiologic developmental process of the arcuate fasciculus (AF) using subcomponent diffusion tensor imaging (DTI) analysis in typically developing volunteers. DTI data were acquired from 96 consecutive typically developing children, aged 0–14 years. AF subcomponents, including the posterior, anterior, and direct AF tracts, were analyzed. Success rates of analysis (AR) and fractional anisotropy (FA) values of each subcomponent tract were measured and compared. The AR of all subcomponent tracts except the posterior showed a significant increase with age (P < 0.05). The subcomponent tracts had a specific developmental sequence: first the posterior AF tract, then the anterior AF tract, and last the direct AF tract in the same hemisphere. FA values of all subcomponent tracts except the right direct AF tract correlated with subjects' age (P < 0.05). Increased AR and FA values were observed in female subjects in the young age (0–2 years) group compared with males (P < 0.05). The direct AF tract showed leftward hemispheric asymmetry, and this tendency was more consolidated in the older age (3–14 years) group (P < 0.05). These findings demonstrate the radiologic developmental pattern of the AF from infancy to adolescence using subcomponent DTI analysis: the AF shows a specific developmental sequence, a sex difference at younger ages, and hemispheric asymmetry at older ages. PMID:27482222

  3. Nonlinear color-image decomposition for image processing of a digital color camera

    NASA Astrophysics Data System (ADS)

    Saito, Takahiro; Aizawa, Haruya; Yamada, Daisuke; Komatsu, Takashi

    2009-01-01

    This paper extends the BV (Bounded Variation) - G and/or the BV-L1 variational nonlinear image-decomposition approaches, which are considered to be useful for image processing of a digital color camera, to genuine color-image decomposition approaches. For utilizing inter-channel color cross-correlations, this paper first introduces TV (Total Variation) norms of color differences and TV norms of color sums into the BV-G and/or BV-L1 energy functionals, and then derives denoising-type decomposition-algorithms with an over-complete wavelet transform, through applying the Besov-norm approximation to the variational problems. Our methods decompose a noisy color image without producing undesirable low-frequency colored artifacts in its separated BV-component, and they achieve desirable high-quality color-image decomposition, which is very robust against colored random noise.

  4. The tensor hierarchy algebra

    SciTech Connect

    Palmkvist, Jakob

    2014-01-15

    We introduce an infinite-dimensional Lie superalgebra which is an extension of the U-duality Lie algebra of maximal supergravity in D dimensions, for 3 ⩽ D ⩽ 7. The level decomposition with respect to the U-duality Lie algebra gives exactly the tensor hierarchy of representations that arises in gauge deformations of the theory described by an embedding tensor, for all positive levels p. We prove that these representations are always contained in those coming from the associated Borcherds-Kac-Moody superalgebra, and we explain why some of the latter representations are not included in the tensor hierarchy. The most remarkable feature of our Lie superalgebra is that it does not admit a triangular decomposition like a (Borcherds-)Kac-Moody (super)algebra. Instead the Hodge duality relations between level p and D − 2 − p extend to negative p, relating the representations at the first two negative levels to the supersymmetry and closure constraints of the embedding tensor.

  5. Stage efficiency in the analysis of thermochemical water decomposition processes

    NASA Technical Reports Server (NTRS)

    Conger, W. L.; Funk, J. E.; Carty, R. H.; Soliman, M. A.; Cox, K. E.

    1976-01-01

    The procedure for analyzing thermochemical water-splitting processes using the figure of merit is expanded to include individual stage efficiencies and loss coefficients. The use of these quantities to establish the thermodynamic insufficiencies of each stage is shown. A number of processes are used to illustrate these concepts and procedures and to demonstrate the facility with which process steps contributing most to the cycle efficiency are found. The procedure allows attention to be directed to those steps of the process where the greatest increase in total cycle efficiency can be obtained.

  7. Analytical gradients for tensor hyper-contracted MP2 and SOS-MP2 on graphical processing units

    NASA Astrophysics Data System (ADS)

    Song, Chenchen; Martínez, Todd J.

    2017-10-01

    Analytic energy gradients for tensor hyper-contraction (THC) are derived and implemented for second-order Møller-Plesset perturbation theory (MP2), with and without the scaled-opposite-spin (SOS)-MP2 approximation. By exploiting the THC factorization, the formal scaling of MP2 and SOS-MP2 gradient calculations with respect to system size is reduced to quartic and cubic, respectively. An efficient implementation has been developed that utilizes both graphics processing units and sparse tensor techniques exploiting spatial sparsity of the atomic orbitals. THC-MP2 has been applied to both geometry optimization and ab initio molecular dynamics (AIMD) simulations. The resulting energy conservation in micro-canonical AIMD demonstrates that the implementation provides accurate nuclear gradients with respect to the THC-MP2 potential energy surfaces.
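The scaling reduction above comes from the THC factorization, which stores the 4-index integral tensor as small 2-index factors so contractions never materialize 4-index intermediates. A numpy sketch of the idea (random placeholder factors, not real electronic integrals):

```python
import numpy as np

# Tensor hyper-contraction writes the 4-index tensor as
#   (ia|jb) ~ sum_{PQ} X[i,P] X[a,P] Z[P,Q] X[j,Q] X[b,Q]
rng = np.random.default_rng(0)
n_occ, n_vir, n_grid = 4, 6, 12
xo = rng.standard_normal((n_occ, n_grid))   # occupied collocation factors
xv = rng.standard_normal((n_vir, n_grid))   # virtual collocation factors
z = rng.standard_normal((n_grid, n_grid))

# Naive assembly of the full (ia|jb) tensor: 4-index storage.
full = np.einsum('iP,aP,PQ,jQ,bQ->iajb', xo, xv, z, xo, xv)

# A contraction such as sum_{jb} (ia|jb) T[j,b], done in factored form,
# only ever touches 1- and 2-index intermediates; this is the source of
# the quartic/cubic scaling reduction mentioned above.
t_amp = rng.standard_normal((n_occ, n_vir))
w = np.einsum('jQ,bQ,jb->Q', xo, xv, t_amp)       # grid-space weights
factored = np.einsum('iP,aP,PQ,Q->ia', xo, xv, z, w)
print(np.allclose(factored, np.einsum('iajb,jb->ia', full, t_amp)))  # -> True
```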

  8. Exothermic Behavior of Thermal Decomposition of Sodium Percarbonate: Kinetic Deconvolution of Successive Endothermic and Exothermic Processes.

    PubMed

    Nakano, Masayoshi; Wada, Takeshi; Koga, Nobuyoshi

    2015-09-24

    This study focused on kinetic modeling of the thermal decomposition of sodium percarbonate (SPC, sodium carbonate-hydrogen peroxide (2/3)). The reaction is characterized by apparently different kinetic profiles of mass loss and exothermic behavior as recorded by thermogravimetry and differential scanning calorimetry, respectively. This phenomenon results from a combination of different kinetic features of the reaction: two overlapping mass-loss steps controlled by the physico-geometry of the reaction, and successive endothermic and exothermic processes caused by the detachment and decomposition of H2O2(g). For kinetic modeling, the overall reaction was first separated into endothermic and exothermic processes using kinetic deconvolution analysis. Both the endothermic and exothermic processes were then further separated into two reaction steps, accounting for the physico-geometrically controlled reaction that occurs in two steps. Kinetic modeling through kinetic deconvolution analysis clearly illustrates that the appearance of the net exothermic effect results from a slight delay of the exothermic process relative to the endothermic process in each physico-geometrically controlled reaction step. This demonstrates that the kinetic modeling attempted in this study is useful for interpreting the exothermic behavior of solid-state reactions such as the oxidative decomposition of solids and the thermal decomposition of oxidizing agents.
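The "slight delay" mechanism above can be illustrated with a toy model: an endothermic and an exothermic rate peak of equal magnitude, the exothermic one shifted slightly later, sum to a trace that is endothermic early and net exothermic late (illustrative Gaussian shapes, not fitted SPC kinetics):

```python
import numpy as np

t = np.linspace(0.0, 4.0, 400)

def peak(t0):
    # Gaussian stand-in for a reaction-step rate curve centered at t0
    return np.exp(-0.5 * ((t - t0) / 0.3) ** 2)

endo = -1.0 * peak(1.5)   # heat absorbed first (detachment)
exo = +1.0 * peak(1.8)    # heat released with a slight delay (decomposition)
net = endo + exo          # what the DSC trace would record

# Endothermic before the crossover, net exothermic after it:
print(net[t < 1.4].sum() < 0, net[t > 1.9].sum() > 0)
```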

  9. Tensor SVD and distributed control

    NASA Astrophysics Data System (ADS)

    Iyer, Ram V.

    2005-05-01

    The (approximate) diagonalization of symmetric matrices has been studied in the past in the context of distributed control of an array of collocated smart actuators and sensors. For distributed control using a two dimensional array of actuators and sensors, it is more natural to describe the system transfer function as a complex tensor rather than a complex matrix. In this paper, we study the problem of approximately diagonalizing a transfer function tensor via the tensor singular value decomposition (TSVD) for a locally spatially invariant system, and study its application along with the technique of recursive orthogonal transforms to achieve distributed control for a smart structure.
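The paper's TSVD targets transfer-function tensors; as a generic illustration of a tensor SVD, a higher-order SVD (HOSVD) of a 3-way array computes one orthonormal factor per mode plus a core tensor (this is a standard HOSVD sketch, not the paper's distributed-control construction):

```python
import numpy as np

def hosvd3(x):
    """Higher-order SVD of a 3-way array: per-mode orthonormal factors + core."""
    us = []
    for mode in range(3):
        # left singular vectors of the mode-n unfolding
        unfold = np.moveaxis(x, mode, 0).reshape(x.shape[mode], -1)
        us.append(np.linalg.svd(unfold, full_matrices=False)[0])
    u0, u1, u2 = us
    core = np.einsum('ia,jb,kc,ijk->abc', u0, u1, u2, x)   # X x_n U_n^T
    return core, us

# With full ranks the decomposition reconstructs the tensor exactly:
rng = np.random.default_rng(0)
x = rng.standard_normal((3, 4, 5))
core, (u0, u1, u2) = hosvd3(x)
x_hat = np.einsum('ia,jb,kc,abc->ijk', u0, u1, u2, core)
print(np.allclose(x_hat, x))  # -> True
```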

  10. Decomposition of repetition priming processes in word translation.

    PubMed

    Francis, Wendy S; Durán, Gabriela; Augustini, Beatriz K; Luévano, Genoveva; Arzate, José C; Sáenz, Silvia P

    2011-01-01

    Translation in fluent bilinguals requires comprehension of a stimulus word and subsequent production, or retrieval and articulation, of the response word. Four repetition-priming experiments with Spanish–English bilinguals (N = 274) decomposed these processes using selective facilitation to evaluate their unique priming contributions and factorial combination to evaluate the degree of process overlap or dependence. In Experiment 1, symmetric priming between semantic classification and translation tasks indicated that bilinguals do not covertly translate words during semantic classification. In Experiments 2 and 3, semantic classification of words and word-cued picture drawing facilitated word-comprehension processes of translation, and picture naming facilitated word-production processes. These effects were independent, consistent with a sequential model and with the conclusion that neither semantic classification nor word-cued picture drawing elicits covert translation. Experiment 4 showed that 2 tasks involving word-retrieval processes--written word translation and picture naming--had subadditive effects on later translation. Incomplete transfer from written translation to spoken translation indicated that preparation for articulation also benefited from repetition in the less-fluent language.

  11. Kinetic analysis of overlapping multistep thermal decomposition comprising exothermic and endothermic processes: thermolysis of ammonium dinitramide.

    PubMed

    Muravyev, Nikita V; Koga, Nobuyoshi; Meerov, Dmitry B; Pivkina, Alla N

    2017-01-25

    This study focused on kinetic modeling of a specific type of multistep heterogeneous reaction comprising exothermic and endothermic reaction steps, as exemplified by the practical kinetic analysis of the experimental kinetic curves for the thermal decomposition of molten ammonium dinitramide (ADN). The thermal decomposition of ADN is known to occur as a consecutive two-step mass-loss process comprising the decomposition of ADN and the subsequent evaporation/decomposition of in situ generated ammonium nitrate. These reaction steps provide exothermic and endothermic contributions, respectively, to the overall thermal effect. The overall reaction process was deconvoluted into the two reaction steps using simultaneously recorded thermogravimetry and differential scanning calorimetry (TG-DSC) curves, considering, through P value analysis, the different physical meanings of the kinetic data derived from TG and DSC. The kinetic data thus separated into exothermic and endothermic reaction steps were kinetically characterized using kinetic computation methods including the isoconversional method, combined kinetic analysis, and the master plot method. The overall kinetic behavior was reproduced as the sum of the kinetic equations for each reaction step, considering the contributions to the rate data derived from TG and DSC. In reproducing the kinetic behavior, the kinetic parameters and contributions of each reaction step were optimized using kinetic deconvolution analysis. As a result, the thermal decomposition of ADN was successfully modeled as partially overlapping exothermic and endothermic reaction steps. The logic of the kinetic modeling was critically examined, and the practical usefulness of phenomenological modeling for the thermal decomposition of ADN was illustrated to demonstrate the validity of the methodology and its applicability to similar complex reaction processes.
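The deconvolution step above treats the overall signal as a weighted sum of step contributions. In the simplest linear case, with the component rate curves known, the contributions fall out of a least-squares fit (illustrative shapes and weights, not fitted ADN data; the paper optimizes full nonlinear kinetic models):

```python
import numpy as np

# Overall heat flow modeled as a weighted sum of two reaction-step rate curves:
t = np.linspace(0.0, 10.0, 500)
rate1 = np.exp(-0.5 * ((t - 3.0) / 0.8) ** 2)   # ADN decomposition step
rate2 = np.exp(-0.5 * ((t - 6.0) / 1.2) ** 2)   # AN evaporation/decomposition
true_weights = np.array([2.0, -1.2])            # exothermic, endothermic
dsc = rate1 * true_weights[0] + rate2 * true_weights[1]

# Deconvolution as a linear least-squares fit of the step contributions:
design = np.column_stack([rate1, rate2])
weights, *_ = np.linalg.lstsq(design, dsc, rcond=None)
print(weights)  # -> approx [ 2.  -1.2]
```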

  12. Decomposition of Repetition Priming Processes in Word Translation

    ERIC Educational Resources Information Center

    Francis, Wendy S.; Duran, Gabriela; Augustini, Beatriz K.; Luevano, Genoveva; Arzate, Jose C.; Saenz, Silvia P.

    2011-01-01

    Translation in fluent bilinguals requires comprehension of a stimulus word and subsequent production, or retrieval and articulation, of the response word. Four repetition-priming experiments with Spanish-English bilinguals (N = 274) decomposed these processes using selective facilitation to evaluate their unique priming contributions and factorial…

  14. Iron oxalate decomposition process by means of Mössbauer spectroscopy and nuclear forward scattering

    NASA Astrophysics Data System (ADS)

    Smrčka, David; Procházka, Vít; Novák, Petr; Kašlík, Josef; Vrba, Vlastimil

    2016-10-01

    This study reports the transformation kinetics of the thermal decomposition of iron(II) oxalate dihydrate, studied in detail by two different techniques: transmission Mössbauer spectroscopy and nuclear forward scattering of synchrotron radiation. Both methods were applied to observe the three steps of the decomposition process in which the iron oxalate transforms to amorphous iron oxide. The hematite/maghemite ratio was determined from the transmission Mössbauer spectra using an evaluation procedure based on a subtraction of the two opposite sides of the spectra. The results obtained indicate that the amount of hematite increases with prolonged annealing time.

  15. PROCESS OF COATING WITH NICKEL BY THE DECOMPOSITION OF NICKEL CARBONYL

    DOEpatents

    Hoover, T.B.

    1959-04-01

    An improved process is presented for the deposition of nickel coatings by the thermal decomposition of nickel carbonyl vapor. The improvement consists in incorporating a small amount of hydrogen sulfide gas in the nickel carbonyl plating gas. It is postulated that the hydrogen sulfide functions as a catalyst.

  16. A study of the process of nonisothermal decomposition of phenolformaldehyde polymers by differential thermal analysis

    SciTech Connect

    Petrova, O.M.; Fedoseev, S.D.; Komarova, T.V.

    1984-01-01

    The activation energy of the thermal decomposition of phenol-formaldehyde polymers has been calculated. It has been established that, under nonisothermal conditions, the rate at which the process is carried out does not affect the effective activation energy calculated by means of Piloyan's equation.

  17. [Putrefaction in a mortuary cold room? Unusual progression of postmortem decomposition processes].

    PubMed

    Kunz, Sebastian N; Brandtner, Herwig; Meyer, Harald

    2013-01-01

    This article describes a rare case of rapid body decomposition within an uncommonly short postmortem interval. A clear discrepancy between early postmortem changes at the crime scene and advanced body decomposition at the time of autopsy was seen. Subsequent police investigation identified a failure in the cooling system of the morgue as the probable cause. However, given the postmortem status of the body, a moderate rise in temperature alone is not considered sufficient to have caused the full extent of postmortem changes; other factors must have been present that accelerated the postmortem decomposition processes. In our opinion, the most reasonable explanation for this phenomenon is a rather long resting time of the corpse in a non-refrigerated hearse on a hot summer day.

  18. Factors and processes causing accelerated decomposition in human cadavers - An overview.

    PubMed

    Zhou, Chong; Byard, Roger W

    2011-01-01

    Artefactually enhanced putrefactive and autolytic changes may be misinterpreted as indicating a prolonged postmortem interval and throw doubt on the veracity of witness statements. A review of files from Forensic Science SA and the literature revealed a number of external and internal factors that may be responsible for accelerating these processes. Exogenous factors included exposure to elevated environmental temperatures, both outdoors and indoors, exacerbated by increased humidity or fires. Indoor situations involved exposure to central heating, hot water, saunas and electric blankets. Deaths within motor vehicles were also characterized by enhanced decomposition. Failure to quickly or adequately refrigerate bodies may also lead to early decomposition. Endogenous factors included fever, infections, illicit and prescription drugs, obesity and insulin-dependent diabetes mellitus. When these factors or conditions are identified at autopsy, less significance should therefore be attached to changes of decomposition as markers of time since death. Copyright © 2010 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.

  19. Method for increasing steam decomposition in a coal gasification process

    DOEpatents

    Wilson, M.W.

    1987-03-23

    The gasification of coal in the presence of steam and oxygen is significantly enhanced by introducing a thermochemical water-splitting agent, such as sulfuric acid, into the gasifier for decomposing the steam to provide additional oxygen and hydrogen usable in the gasification process for the combustion of the coal and enrichment of the gaseous gasification products. The addition of the water-splitting agent into the gasifier also allows for operation of the reactor at a lower temperature.

  20. Method for increasing steam decomposition in a coal gasification process

    DOEpatents

    Wilson, Marvin W.

    1988-01-01

    The gasification of coal in the presence of steam and oxygen is significantly enhanced by introducing a thermochemical water-splitting agent, such as sulfuric acid, into the gasifier for decomposing the steam to provide additional oxygen and hydrogen usable in the gasification process for the combustion of the coal and enrichment of the gaseous gasification products. The addition of the water-splitting agent into the gasifier also allows for operation of the reactor at a lower temperature.

  1. Subensemble decomposition and Markov process analysis of Burgers turbulence.

    PubMed

    Zhang, Zhi-Xiong; She, Zhen-Su

    2011-08-01

    A numerical and statistical study is performed to describe the positive and negative local subgrid energy fluxes in one-dimensional random-force-driven Burgers turbulence (Burgulence). We use a subensemble method to decompose the field into shock wave and rarefaction wave subensembles by group velocity difference. We observe that the shock wave subensemble shows a strong intermittency which dominates the whole Burgulence field, while the rarefaction wave subensemble satisfies the Kolmogorov 1941 (K41) scaling law. We calculate the two subensemble probabilities and find that in the inertial range they maintain scale invariance, an important feature of turbulence self-similarity. We reveal that the interconversion of shock and rarefaction waves during the equation's evolution proceeds in accordance with a Markov process, which has a stationary transition probability matrix whose elements satisfy universal functions and which, when the time interval is much greater than the corresponding characteristic value, exhibits the scale-invariant property.
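The stationary transition probability matrix above can be estimated from a labeled state sequence by counting transitions. A minimal sketch with a simulated two-state chain (0 = shock wave, 1 = rarefaction wave; the probabilities are made up, not measured Burgulence statistics):

```python
import numpy as np

def transition_matrix(states, n_states):
    """Maximum-likelihood estimate of a stationary transition matrix."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(states[:-1], states[1:]):
        counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)

# Simulate a chain with a known transition matrix, then recover it:
p_true = np.array([[0.9, 0.1], [0.3, 0.7]])
rng = np.random.default_rng(0)
seq = [0]
for _ in range(20000):
    seq.append(rng.choice(2, p=p_true[seq[-1]]))
p_hat = transition_matrix(seq, 2)
print(np.round(p_hat, 2))
```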

  2. A statistical approach based on accumulated degree-days to predict decomposition-related processes in forensic studies.

    PubMed

    Michaud, Jean-Philippe; Moreau, Gaétan

    2011-01-01

    Using pig carcasses exposed over 3 years in rural fields during spring, summer, and fall, we studied the relationship between decomposition stages and degree-day accumulation (i) to verify the predictability of the decomposition stages used in forensic entomology to document carcass decomposition and (ii) to build a degree-day accumulation model applicable to various decomposition-related processes. Results indicate that the decomposition stages can be predicted with accuracy from temperature records and that a reliable degree-day index can be developed to study decomposition-related processes. The development of degree-day indices opens new doors for researchers and allows for the application of inferential tools unaffected by climatic variability, as well as for the inclusion of statistics in a science that is primarily descriptive and in need of validation methods in courtroom proceedings.
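The accumulated-degree-day index above is the running sum of daily thermal units above a base temperature. A minimal sketch (the temperatures and the 10 C base are illustrative choices, not the study's values):

```python
def accumulated_degree_days(daily_mean_temps, base_temp=0.0):
    """Sum of daily thermal units above a base temperature."""
    return sum(max(0.0, t - base_temp) for t in daily_mean_temps)

# Five days of mean temperatures (degrees C); with a base of 10 C only
# the excess above 10 accumulates: 2 + 0 + 5 + 8 + 0 = 15.
temps = [12.0, 9.0, 15.0, 18.0, 10.0]
print(accumulated_degree_days(temps, base_temp=10.0))  # -> 15.0
```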

  3. ChIP-PIT: Enhancing the Analysis of ChIP-Seq Data Using Convex-Relaxed Pair-Wise Interaction Tensor Decomposition.

    PubMed

    Zhu, Lin; Guo, Wei-Li; Deng, Su-Ping; Huang, De-Shuang

    2016-01-01

    In recent years, thanks to the efforts of individual scientists and research consortiums, a huge amount of chromatin immunoprecipitation followed by high-throughput sequencing (ChIP-seq) experimental data has been accumulated. Instead of investigating these data independently, several recent studies have convincingly demonstrated that a wealth of scientific insights can be gained by integrative analysis of ChIP-seq data. However, when used for the purpose of integrative analysis, a serious drawback of the current ChIP-seq technique is that it is still expensive and time-consuming to generate ChIP-seq datasets of high standard. Most researchers are therefore unable to obtain complete ChIP-seq data for several TFs in a wide variety of cell lines, which considerably limits the understanding of transcriptional regulation patterns. In this paper, we propose a novel method called ChIP-PIT to overcome the aforementioned limitation. In ChIP-PIT, ChIP-seq data corresponding to a diverse collection of cell types, TFs and genes are fused together using the three-mode pair-wise interaction tensor (PIT) model, and the prediction of unperformed ChIP-seq experimental results is formulated as a tensor completion problem. Computationally, we propose an efficient first-order method based on extensions of the coordinate descent method to learn the optimal solution of ChIP-PIT, which makes it particularly suitable for the analysis of massive-scale ChIP-seq data. Experimental evaluation on the ENCODE data illustrates the usefulness of the proposed model.
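The three-mode pair-wise interaction tensor model above scores each (cell type, TF, gene) entry through pairwise inner products of low-dimensional embeddings. A structural sketch of that forward model only (random placeholder embeddings; the paper learns them from observed entries by coordinate descent):

```python
import numpy as np

rng = np.random.default_rng(0)
n_cells, n_tfs, n_genes, d = 3, 4, 5, 2
u = rng.standard_normal((n_cells, d))   # cell-type embeddings
v = rng.standard_normal((n_tfs, d))     # TF embeddings
w = rng.standard_normal((n_genes, d))   # gene embeddings

# PIT prediction for entry (c, t, g): <u_c, v_t> + <u_c, w_g> + <v_t, w_g>,
# assembled for the whole tensor via broadcasting:
pred = (np.einsum('cd,td->ct', u, v)[:, :, None]
        + np.einsum('cd,gd->cg', u, w)[:, None, :]
        + np.einsum('td,gd->tg', v, w)[None, :, :])
print(pred.shape)  # -> (3, 4, 5)
```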

  4. The neural basis of novelty and appropriateness in processing of creative chunk decomposition.

    PubMed

    Huang, Furong; Fan, Jin; Luo, Jing

    2015-06-01

    Novelty and appropriateness have been recognized as the fundamental features of creative thinking. However, the brain mechanisms underlying these features remain largely unknown. In this study, we used event-related functional magnetic resonance imaging (fMRI) to dissociate these mechanisms in a revised creative chunk decomposition task in which participants were required to perform different types of chunk decomposition that systematically varied in novelty and appropriateness. We found that novelty processing involved functional areas for procedural memory (caudate), mental rewarding (substantia nigra, SN), and visual-spatial processing, whereas appropriateness processing was mediated by areas for declarative memory (hippocampus), emotional arousal (amygdala), and orthography recognition. These results indicate that non-declarative and declarative memory systems may jointly contribute to the two fundamental features of creative thinking.

  5. Chemical dehalogenation treatment: Base-catalyzed decomposition process (BCDP). Tech data sheet

    SciTech Connect

    Not Available

    1992-07-01

    The Base-Catalyzed Decomposition Process (BCDP) is an efficient, relatively inexpensive treatment process for polychlorinated biphenyls (PCBs). It is also effective on other halogenated contaminants such as insecticides, herbicides, pentachlorophenol (PCP), lindane, and chlorinated dibenzodioxins and furans. The heart of BCDP is the rotary reactor in which most of the decomposition takes place. The contaminated soil is first screened, processed with a crusher and pug mill, and stockpiled. Next, in the main treatment step, this stockpile is mixed with sodium bicarbonate (in the amount of 10% of the weight of the stockpile) and heated for about one hour at 630 F in the rotary reactor. Most (about 60% to 90%) of the PCBs in the soil are decomposed in this step. The remainder are volatilized, captured, and decomposed.

  6. Multidimensional seismic data reconstruction using tensor analysis

    NASA Astrophysics Data System (ADS)

    Kreimer, Nadia

    Exploration seismology utilizes the seismic wavefield for prospecting oil and gas. The seismic reflection experiment consists of deploying sources and receivers on the surface of an area of interest. When the sources are activated, the receivers measure the wavefield that is reflected from different subsurface interfaces and store the information as time series called traces or seismograms. The seismic data depend on two source coordinates, two receiver coordinates and time (a 5D volume). Obstacles in the field, as well as logistical and economic factors, constrain seismic data acquisition. Therefore, the wavefield sampling is incomplete in the four spatial dimensions. Seismic data undergo different processes. In particular, the reconstruction process is responsible for correcting sampling irregularities of the seismic wavefield. This thesis focuses on the development of new methodologies for the reconstruction of multidimensional seismic data. It examines techniques based on tensor algebra and proposes three methods that exploit the tensor nature of the seismic data. The fully sampled volume is low-rank in the frequency-space domain, and the rank increases when there are missing traces and/or noise. The proposed methods perform rank reduction on frequency slices of the 4D spatial volume. The first method employs the higher-order singular value decomposition (HOSVD) immersed in an iterative algorithm that reinserts weighted observations. The second method uses a sequential truncated SVD on the unfoldings of the tensor slices (SEQ-SVD). The third method formulates the rank reduction problem as a convex optimization problem; the measure of the rank is replaced by the nuclear norm of the tensor, and the alternating direction method of multipliers (ADMM) minimizes the cost function. All three methods have the interesting property that they are robust to curvature of the reflections, unlike many reconstruction methods. Finally, we present a comparison between the methods.
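
    The HOSVD-based rank reduction at the core of the first method can be sketched as follows (on a generic 3-way tensor rather than the thesis's 4D frequency slices):

```python
import numpy as np

def unfold(T, mode):
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def ttm(T, M, mode):
    """Tensor-times-matrix along one mode."""
    return np.moveaxis(np.tensordot(M, np.moveaxis(T, mode, 0), axes=1), 0, mode)

def hosvd_truncate(T, ranks):
    """Project T onto its leading multilinear subspaces (truncated HOSVD)."""
    factors = [np.linalg.svd(unfold(T, m), full_matrices=False)[0][:, :r]
               for m, r in enumerate(ranks)]
    core = T
    for m, U in enumerate(factors):
        core = ttm(core, U.T, m)     # compress each mode
    approx = core
    for m, U in enumerate(factors):
        approx = ttm(approx, U, m)   # expand back to original shape
    return approx

rng = np.random.default_rng(1)
# Exactly multilinear-rank-(2,2,2) data plus mild noise.
G = rng.normal(size=(2, 2, 2))
Us = [rng.normal(size=(n, 2)) for n in (8, 7, 6)]
T = np.einsum('abc,ia,jb,kc->ijk', G, *Us)
approx = hosvd_truncate(T + 0.01 * rng.normal(size=T.shape), ranks=(2, 2, 2))
```

    In the reconstruction setting, this truncation step is wrapped in an iteration that reinserts the observed traces after each rank reduction.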

  7. Moment tensors, state of stress and their relation to faulting processes in Gujarat, western India

    NASA Astrophysics Data System (ADS)

    Aggarwal, Sandeep Kumar; Khan, Prosanta Kumar; Mohanty, Sarada Prasad; Roumelioti, Zafeiria

    2016-10-01

    Time-domain moment tensor analysis of 145 earthquakes (Mw 3.2 to 5.1), occurring during the period 2006-2014 in the Gujarat region, has been performed. The events are mainly confined to the Kachchh area, demarcated by the Island Belt and Kachchh Mainland faults to its north and south, and two transverse faults to its east and west. Libraries of Green's functions were established using the 1D velocity models of Kachchh, Saurashtra and Mainland Gujarat. Green's functions and broadband displacement waveforms filtered at low frequency (0.5-0.8 Hz) were inverted to determine the moment tensor solutions. The estimated solutions were rigorously tested through a number of iterations at different source depths to find reliable source locations. The identified heterogeneous nature of the stress fields in the Kachchh area allowed us to divide it into four zones (1-4). The stress inversion results indicate that Zone 1 is dominated by radial compression, Zone 2 by strike-slip compression, and Zones 3 and 4 by strike-slip extension. The analysis further shows that the epicentral region of the 2001 Mw 7.7 Bhuj mainshock, located at the junction of Zones 2, 3 and 4, was associated with predominantly compressional stress and strike-slip motion along a ∼NNE-SSW-striking fault on the western margin of the Wagad uplift. Other tectonically active parts of Gujarat (e.g. Jamnagar, Talala and Mainland) show earthquake activity dominantly associated with strike-slip extension/compression faulting. Stress inversion analysis shows that the maximum compressive stress axes (σ1) are vertical for both the Jamnagar and Talala regions and horizontal for Mainland Gujarat. These stress regimes are distinctly different from those of the Kachchh region.
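
    As a minimal illustration of how principal stress directions are read off a tensor, the sketch below eigendecomposes a hypothetical symmetric tensor; the values are made up and the sign/orientation conventions of the actual inversion are not reproduced.

```python
import numpy as np

# Hypothetical symmetric moment/stress tensor (values made up).
M = np.array([[ 1.2,  0.3,  0.0],
              [ 0.3, -0.4,  0.1],
              [ 0.0,  0.1, -0.8]])

# eigh returns ascending eigenvalues with orthonormal eigenvectors;
# the extreme eigenvalues give the most compressive / most extensional
# principal directions (sign conventions vary between studies).
vals, vecs = np.linalg.eigh(M)
compressive_axis = vecs[:, 0]
extensional_axis = vecs[:, -1]
```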

  8. Decomposition of gaseous organic contaminants by surface discharge induced plasma chemical processing -- SPCP

    SciTech Connect

    Oda, Tetsuji; Yamashita, Ryuichi; Haga, Ichiro; Takahashi, Tadashi; Masuda, Senichi

    1996-01-01

    The decomposition performance of surface discharge induced plasma chemical processing (SPCP) for chlorofluorocarbon (83 ppm CFC-113 in air), acetone, trichloroethylene, and isopropyl alcohol was experimentally examined. In every case, very high decomposition performance, with removal rates above 90 or even 99%, is realized when the residence time is about 1 second and the input electric power for a 16 cm³ reactor is about 10 W. Acetone is the most stable compound and the alcohol is most easily decomposed. Analysis of the decomposition products by gas chromatography-mass spectrometry has just started, but so far only poor results have been obtained. In fact, some portion of the isopropyl alcohol may change to acetone, which is more stable than the alcohol. The energy necessary to decompose one mol of gas diluted in air is calculated from the experiments. The necessary energy level for acetone and trichloroethylene is about one-tenth to one-fiftieth of that for the chlorofluorocarbon.

  9. Controlled decomposition and oxidation: A treatment method for gaseous process effluents

    NASA Astrophysics Data System (ADS)

    McKinley, Roger J. B., Sr.

    1990-07-01

    The safe disposal of effluent gases produced by the electronics industry deserves special attention. Due to the hazardous nature of many of the materials used, it is essential to control and treat the reactants and reactant by-products as they are exhausted from the process tool and prior to their release into the manufacturing facility's exhaust system and the atmosphere. Controlled decomposition and oxidation (CDO) is one method of treating effluent gases from thin film deposition processes. CDO equipment applications, field experience, and results of the use of CDO equipment and technological advances gained from the field experiences are discussed.

  10. Controlled decomposition and oxidation: A treatment method for gaseous process effluents

    NASA Technical Reports Server (NTRS)

    Mckinley, Roger J. B., Sr.

    1990-01-01

    The safe disposal of effluent gases produced by the electronics industry deserves special attention. Due to the hazardous nature of many of the materials used, it is essential to control and treat the reactants and reactant by-products as they are exhausted from the process tool and prior to their release into the manufacturing facility's exhaust system and the atmosphere. Controlled decomposition and oxidation (CDO) is one method of treating effluent gases from thin film deposition processes. CDO equipment applications, field experience, and results of the use of CDO equipment and technological advances gained from the field experiences are discussed.

  11. Factors controlling decomposition in arctic tundra and related root mycorrhizal processes

    SciTech Connect

    Linkins, A.E.

    1990-01-01

    Work proposed for the final year of Phase 1 of the R&D Program will focus on three areas: (1) acquire soil and root-mycorrhizal process data, incorporating the baseline enzymatic and soil respiration data collected over the duration of the project into the manipulations initiated by Drs. Chapin and Schimmel. Additional enzymatic data on a broader range of organic nitrogen compound decomposition will be collected to better integrate the existing decomposition data and modeling structure with the expanded information to be collected on nitrogen dynamics in soils and plant compartments. This activity will principally be done in the new dust disturbance experiment the overall project has planned. (2) Finalize data sets on the complete mineralization of cellulose, cellulose-like plant structural material, and cellulose intermediate hydrolysis products into CO2 and CH4 in soils from water-track and non-water-track areas and from riparian sedge-moss meadow vegetation areas. Gas efflux from these soils will be measured in closed microcosms in which the soils will be manipulated to alter their redox state. (3) Continue developing and testing the GAS models of decomposition, plant growth and nutrient acquisition. The primary activity of this project will be on this latter task. 22 refs.

  12. Analysis of a Methanol Decomposition Process by a Nonthermal Plasma Flow

    NASA Astrophysics Data System (ADS)

    Sato, Takehiko; Kambe, Makoto; Nishiyama, Hideya

    In the present study, experimental and numerical analyses were used to clarify the key reactive species in methanol decomposition processes using a nonthermal plasma flow. The nonthermal plasma flow was generated by a dielectric barrier discharge (DBD) as a radical production source. The experimental conditions were as follows: the working gas was air at 1-10 Sl/min, and the peak-to-peak applied voltage was 16-20 kV with a sine wave of 1 Hz-7 kHz. The gas velocity, gas temperature, ozone concentration and methanol decomposition efficiency were measured. These characteristics were also numerically analyzed using the conservation equations of mass, chemical species, momentum and energy, together with the equation of state. The simulation model takes into account the reactive species that react chemically with methanol. The detailed reaction mechanism used in this model consists of 108 elementary reactions and 41 chemical species. Inlet conditions are partially given by experimental results. Finally, the effects of reactive species such as O, OH, H, NO, etc. on methanol decomposition characteristics are numerically analyzed. The results obtained in this study are summarized as follows. (1) The existence of excited atoms O and N and excited molecules OH, N2(B3Πg), N2(A3Σu+) and NO is implied in the discharge region. (2) Methanol below 50 ppm is decomposed completely using DBD at discharge conditions of V = 16 kVpp and f = 100 Hz. (3) The reactive species are the most important factor in decomposing methanol, as full decomposition is obtained for all injection positions. (4) The numerical analysis clarifies that OH is the important radical for decomposing methanol.

  13. Validating the performance of one-time decomposition for fMRI analysis using ICA with automatic target generation process.

    PubMed

    Yao, Shengnan; Zeng, Weiming; Wang, Nizhuan; Chen, Lei

    2013-07-01

    Independent component analysis (ICA) has been proven effective for functional magnetic resonance imaging (fMRI) data analysis. However, ICA decomposition requires iterative optimization of the unmixing matrix, whose initial values are generated randomly, and this randomness of initialization leads to different decomposition results. A single one-time decomposition for fMRI data analysis is therefore not usually reliable. Under this circumstance, several methods based on repeated decompositions with ICA (RDICA) were proposed to reveal the stability of ICA decomposition. Although RDICA has achieved satisfying results in validating the performance of ICA decomposition, it costs considerable computing time. To mitigate this problem, in this paper we propose a method, named ATGP-ICA, for fMRI data analysis. This method generates fixed initial values with an automatic target generation process (ATGP) instead of producing them randomly. We performed experimental tests on both hybrid data and fMRI data to show the effectiveness of the new method, and compared the performance of traditional one-time decomposition with ICA (ODICA), RDICA and ATGP-ICA. The proposed method not only eliminates the randomness of ICA decomposition, but also saves considerable computing time compared to RDICA. Furthermore, receiver operating characteristic (ROC) power analysis indicated better signal reconstruction performance for ATGP-ICA than for RDICA.

  14. Chlorine/UV Process for Decomposition and Detoxification of Microcystin-LR.

    PubMed

    Zhang, Xinran; Li, Jing; Yang, Jer-Yen; Wood, Karl V; Rothwell, Arlene P; Li, Weiguang; Blatchley III, Ernest R

    2016-07-19

    Microcystin-LR (MC-LR) is a potent hepatotoxin that is often associated with blooms of cyanobacteria. Experiments were conducted to evaluate the efficiency of the chlorine/UV process for MC-LR decomposition and detoxification. Chlorinated MC-LR was observed to be more photoactive than MC-LR. LC/MS analyses confirmed that the arginine moiety represented an important reaction site within the MC-LR molecule for conditions of chlorination below the chlorine demand of the molecule. Prechlorination activated MC-LR toward UV254 exposure by increasing the product of the molar absorption coefficient and the quantum yield of chloro-MC-LR, relative to the unchlorinated molecule. This mechanism of decay is fundamentally different from the conventional view of chlorine/UV as an advanced oxidation process. A toxicity assay based on human liver cells indicated that MC-LR degradation byproducts in the chlorine/UV process possessed less cytotoxicity than those that resulted from chlorination or UV254 irradiation applied separately. MC-LR decomposition and detoxification in this combined process were more effective at pH 8.5 than at pH 7.5 or 6.5. These results suggest that the chlorine/UV process could represent an effective strategy for control of microcystins and their associated toxicity in drinking water supplies.

  15. Discussion of stress tensor nonuniqueness with application to nonuniform, particulate systems

    SciTech Connect

    Aidun, J.B.

    1993-01-01

    The indeterminacy of the mechanical stress tensor has been noted in several developments of expressions for stress in a system of particles. It is generally agreed that physical quantities related to the stress tensor must be insensitive to this nonuniqueness, but there is no definitive prescription for ensuring it. Kroener's tensor decomposition theorem is applied to the mechanical stress tensor σ

  16. Input-decomposition balance of heterotrophic processes in a warm-temperate mixed forest in Japan

    NASA Astrophysics Data System (ADS)

    Jomura, M.; Kominami, Y.; Ataka, M.; Makita, N.; Dannoura, M.; Miyama, T.; Tamai, K.; Goto, Y.; Sakurai, S.

    2010-12-01

    Carbon accumulation in forest ecosystems has been evaluated using three approaches. The first is net ecosystem exchange (NEE) estimated by tower flux measurement. The second is net ecosystem production (NEP) estimated by biometric measurements. NEP can be expressed as the difference between net primary production and heterotrophic respiration, or as the annual increment in the plant biomass (ΔW) plus soil (ΔS) carbon pools: NEP = ΔW + ΔS. The third approach requires evaluating the annual carbon increment in the soil compartment. The soil carbon accumulation rate cannot be measured directly in the short term because of the small amount of annual accumulation, but it can be estimated by a model calculation. The Rothamsted carbon model (Roth-C) is a soil organic carbon turnover model and a useful tool for estimating the rate of soil carbon accumulation. However, the model has not sufficiently included variations in the decomposition processes of organic matter in forest ecosystems. Organic matter pools in forest ecosystems have different turnover rates, which creates temporal variation in the input-decomposition balance, and they also vary widely in spatial distribution. Thus, in order to estimate the rate of soil carbon accumulation, temporal and spatial variation in the input-decomposition balance of heterotrophic processes should be incorporated in the model. In this study, we estimated the input-decomposition balance and the rate of soil carbon accumulation using a modified Roth-C model. We measured the respiration rates of many types of organic matter, such as leaf litter, fine-root litter, twigs and coarse woody debris, using a chamber method, which allows us to relate respiration rate to the diameter of the organic matter. Leaf and fine-root litter have no diameter, so their diameter was assumed to be zero. Small-sized organic matter, such as leaf and fine-root litter, has a high decomposition respiration rate. It could be caused by the difference in

  17. A quantitative acoustic emission study on fracture processes in ceramics based on wavelet packet decomposition

    SciTech Connect

    Ning, J. G.; Chu, L.; Ren, H. L.

    2014-08-28

    We base a quantitative acoustic emission (AE) study of fracture processes in alumina ceramics on wavelet packet decomposition and AE source location. According to the frequency characteristics, as well as the energy and ringdown counts of AE, the fracture process is divided into four stages: crack closure, nucleation, development, and critical failure. Each AE signal is decomposed by a 2-level wavelet packet decomposition into four different (from low to high) frequency bands (AA2, AD2, DA2, and DD2). The energy eigenvalues P0, P1, P2, and P3 corresponding to these four frequency bands are calculated. By analyzing changes in P0 and P3 over the four stages, we determine the inverse relationship between AE frequency and crack source size during ceramic fracture. AE signals corresponding to crack nucleation can be identified when P0 is less than 5 and P3 more than 60, whereas AE signals corresponding to dangerous crack propagation can be identified when more than 92% of P0 values are greater than 4 and more than 95% of P3 values are less than 45. The Geiger location algorithm is used to locate AE sources and cracks in the sample. The results of this location algorithm are consistent with the positions of fractures in the sample observed under a scanning electron microscope; thus fracture locations obtained with Geiger's method reflect the fracture process. The stage division by location results is in good agreement with the division based on AE frequency characteristics. We find that both wavelet packet decomposition and Geiger's AE source location are suitable for identifying the evolution of cracks in alumina ceramics.
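
    A 2-level wavelet packet decomposition into four bands and the associated energy fractions can be sketched with Haar filters (the filter choice here is an assumption; the abstract does not specify the wavelet used):

```python
import numpy as np

def haar_step(x):
    a = (x[0::2] + x[1::2]) / np.sqrt(2)  # low-pass (approximation)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)  # high-pass (detail)
    return a, d

def packet_energies(x):
    """Energy fractions P0..P3 (percent) of the 2-level packet bands."""
    a1, d1 = haar_step(np.asarray(x, float))
    aa2, ad2 = haar_step(a1)              # AA2, AD2 (lowest bands)
    da2, dd2 = haar_step(d1)              # DA2, DD2 (highest bands)
    e = np.array([np.sum(b ** 2) for b in (aa2, ad2, da2, dd2)])
    return 100 * e / e.sum()

# A low-frequency test signal concentrates its energy in the lowest band.
P = packet_energies(np.sin(2 * np.pi * 3 * np.arange(64) / 64))
```

    Because the transform is orthonormal, the four band energies sum to the total signal energy, which is what makes the P0..P3 fractions comparable across signals.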

  18. Decomposition of 1,4-dioxane by advanced oxidation and biochemical process.

    PubMed

    Kim, Chang-Gyun; Seo, Hyung-Joon; Lee, Byung-Ryul

    2006-01-01

    This study was undertaken to determine the optimal decomposition conditions when 1,4-dioxane was degraded using either AOPs (advanced oxidation processes) or the BAC-TERRA microbial complex. The advanced oxidation was operated with H2O2, in the range 4.7 to 51 mM, under 254 nm (25 W lamp) illumination, while varying reaction parameters such as the air flow rate and reaction time. The greatest oxidation rate (96%) of 1,4-dioxane was achieved with an H2O2 concentration of 17 mM after a 2-hr reaction. As a result of this reaction, organic acid intermediates were formed, such as acetic, propionic and butyric acids. Furthermore, the study revealed that suspended particles, i.e., bio-flocs, kaolin and pozzolan, could affect the extent of 1,4-dioxane decomposition. The decomposition of 1,4-dioxane in the presence of bio-flocs declined significantly due to hindered UV penetration through the solution as a result of the consistent dispersion of bio-particles. In contrast, dosing with pozzolan decomposed up to 98.8% of the 1,4-dioxane after 2 hr of reaction. Two actual wastewaters from polyester manufacturing, containing 1,4-dioxane in the range 370 to 450 mg/L, could be oxidized by as much as 100% within 15 min with the introduction of 100:200 (mg/L) Fe(II):H2O2 under UV illumination. Aerobic biological decomposition, employing BAC-TERRA, removed up to 90% of the 1,4-dioxane after 15 days of incubation. The by-products generated (i.e., acetic, propionic and valeric acid) were similar to those formed during the AOP investigation. According to kinetic studies, both photo-decomposition and biodegradation of 1,4-dioxane followed pseudo first-order reaction kinetics, with k = 5 x 10(-4) s(-1) and 2.38 x 10(-6) s(-1), respectively. It was concluded that 1,4-dioxane could be readily degraded by both AOPs and BAC-TERRA, and that the actual polyester wastewater containing 1,4-dioxane could be successfully
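
    The reported pseudo first-order kinetics imply simple closed-form decay curves; a minimal sketch using the rate constants quoted above:

```python
import math

# Pseudo first-order decay C(t) = C0 * exp(-k t), with the abstract's
# rate constants: k = 5e-4 1/s (photo-decomposition) and
# k = 2.38e-6 1/s (biodegradation).

def remaining_fraction(k, t_seconds):
    return math.exp(-k * t_seconds)

def half_life(k):
    return math.log(2) / k

t_half_photo = half_life(5e-4)            # ~1386 s, about 23 minutes
frac_after_2h = remaining_fraction(5e-4, 2 * 3600)
```

    The two-orders-of-magnitude gap between the rate constants is why photo-decomposition finishes in hours while biodegradation takes days.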

  19. An observation on the decomposition process of gasoline-ingested monkey carcasses in a secondary forest in Malaysia.

    PubMed

    Rumiza, A R; Khairul, O; Zuha, R M; Heo, C C

    2010-12-01

    This study was designed to mimic homicide or suicide cases involving gasoline. Six adult long-tailed macaques (Macaca fascicularis), weighing between 2.5 and 4.0 kg, were equally divided into control and test groups. The control group was sacrificed by a lethal intracardiac dose of phenobarbital, while the test group was force-fed two LD50 doses of gasoline (37.7 ml/kg) after sedation with phenobarbital. All carcasses were then placed at a decomposition site to observe the decomposition and the invasion of cadaveric fauna on the carcasses. A total of five decomposition stages were recognized during this study, which was performed during July 2007. The fresh stage of the control and test carcasses occurred between 0 to 15 and 0 to 39 hours of exposure, respectively. The subsequent decomposition stages exhibited a similar pattern, whereby the control carcasses decomposed faster than the test carcasses. The first larvae were found on control carcasses 9 hours after death, while the test carcasses received their first blowfly eggs only after 15 hours of exposure. The blow flies Achoetandrus rufifacies and Chrysomya megacephala were the most dominant invaders of both sets of carcasses throughout the decaying process. Diptera collected from control carcasses also comprised the scuttle fly Megaselia scalaris and a flesh fly (Sarcophagidae). We concluded that the presence of gasoline and its odor on the carcass delayed the arrival of insects, thereby slowing down the decomposition process by about 6 hours.

  20. MATLAB Tensor Toolbox

    SciTech Connect

    Kolda, Tamara G.; Bader, Brett W.

    2006-08-03

    This software provides a collection of MATLAB classes for tensor manipulations that can be used for fast algorithm prototyping. The tensor class extends the functionality of MATLAB's multidimensional arrays by supporting additional operations such as tensor multiplication. We have also added support for sparse tensors, tensors in Kruskal or Tucker format, and tensors stored as matrices (both dense and sparse).
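
    The toolbox itself is MATLAB, but the kind of operation it supports, e.g. a mode-n tensor-times-matrix product, can be sketched in NumPy for illustration (the function name here is our own, not the toolbox API):

```python
import numpy as np

def ttm(T, M, mode):
    """Mode-`mode` tensor-times-matrix product (NumPy sketch)."""
    Tm = np.moveaxis(T, mode, 0)                     # bring the mode to front
    out = np.tensordot(M, Tm, axes=([1], [0]))       # contract over that mode
    return np.moveaxis(out, 0, mode)                 # restore axis order

T = np.arange(24.0).reshape(2, 3, 4)
M = np.ones((5, 3))
Y = ttm(T, M, mode=1)   # shape (2, 5, 4): mode-1 fibers mapped through M
```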

  1. Role of phyllosphere fungi of forest trees in the development of decomposer fungal communities and decomposition processes of leaf litter.

    PubMed

    Osono, T

    2006-08-01

    The ecology of endophytic and epiphytic phyllosphere fungi of forest trees is reviewed with special emphasis on the development of decomposer fungal communities and decomposition processes of leaf litter. A total of 41 genera of phyllosphere fungi have been reported to occur on leaf litter of tree species in 19 genera. The relative proportion of phyllosphere fungi in decomposer fungal communities ranges from 2% to 100%. Phyllosphere fungi generally disappear in the early stages of decomposition, although a few species persist until the late stages. Phyllosphere fungi have the ability to utilize various organic compounds as carbon sources, and the marked decomposing ability is associated with ligninolytic activity. The role of phyllosphere fungi in the decomposition of soluble components during the early stages is relatively small in spite of their frequent occurrence. Recently, the roles of phyllosphere fungi in the decomposition of structural components have been documented with reference to lignin and cellulose decomposition, nutrient dynamics, and accumulation and decomposition of soil organic matter. It is clear from this review that several of the common phyllosphere fungi of forest trees are primarily saprobic, being specifically adapted to colonize and utilize dead host tissue, and that some phyllosphere fungi with marked abilities to decompose litter components play important roles in decomposition of structural components, nutrient dynamics, and soil organic matter accumulation.

  2. Tensor Modeling Based for Airborne LiDAR Data Classification

    NASA Astrophysics Data System (ADS)

    Li, N.; Liu, C.; Pfeifer, N.; Yin, J. F.; Liao, Z. Y.; Zhou, Y.

    2016-06-01

    Feature selection and description is a key factor in the classification of Earth observation data. In this paper a classification method based on tensor decomposition is proposed. First, multiple features are extracted from the raw LiDAR point cloud, and raster LiDAR images are derived by accumulating features or the "raw" data attributes. Then, the feature rasters of the LiDAR data are stored as a tensor, and tensor decomposition is used to select component features. This tensor representation preserves the initial spatial structure and ensures that the neighborhood is taken into account. Based on a small number of component features, a k-nearest-neighbor classification is applied.

  3. A detailed kinetic model for the hydrothermal decomposition process of sewage sludge.

    PubMed

    Yin, Fengjun; Chen, Hongzhen; Xu, Guihua; Wang, Guangwei; Xu, Yuanjian

    2015-12-01

    A detailed kinetic model for the hydrothermal decomposition (HTD) of sewage sludge was developed based on an explicit reaction scheme considering exact intermediates including protein, saccharide, NH4(+)-N and acetic acid. The parameters were estimated by a series of kinetic data at a temperature range of 180-300°C. This modeling framework is capable of revealing stoichiometric relationships between different components by determining the conversion coefficients and identifying the reaction behaviors by determining rate constants and activation energies. The modeling work shows that protein and saccharide are the primary intermediates in the initial stage of HTD resulting from the fast reduction of biomass. The oxidation processes of macromolecular products to acetic acid are highly dependent on reaction temperature and dramatically restrained when temperature is below 220°C. Overall, this detailed model is meaningful for process simulation and kinetic analysis.
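
    A lumped first-order scheme in the spirit of such a model (biomass → intermediates → acetic acid) can be sketched with a simple Euler integration; the rate constants below are assumed, not the paper's fitted values.

```python
# Lumped first-order chain: biomass -> intermediates -> acetic acid.
# Rate constants are assumed for illustration, not the paper's fits.
k1, k2 = 0.05, 0.01       # 1/min
dt, steps = 0.1, 3000     # 300 minutes of simulated reaction
B, I, A = 1.0, 0.0, 0.0   # mass fractions: biomass, intermediates, acid
for _ in range(steps):
    r1, r2 = k1 * B, k2 * I
    B += dt * (-r1)
    I += dt * (r1 - r2)
    A += dt * r2
# The scheme conserves mass: B + I + A stays at 1 throughout.
```

    Fitting the conversion coefficients and rate constants of such a chain to concentration series at several temperatures is what yields the activation energies the abstract describes.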

  4. Denoising NMR time-domain signal by singular-value decomposition accelerated by graphics processing units.

    PubMed

    Man, Pascal P; Bonhomme, Christian; Babonneau, Florence

    2014-01-01

    We present a post-processing method that decreases NMR spectrum noise without line-shape distortion, thereby increasing the signal-to-noise (S/N) ratio of a spectrum. The method, called the Cadzow enhancement procedure, is based on the singular-value decomposition of the time-domain signal. We also provide software whose execution takes a few seconds for typical data when run on a modern graphics processing unit. We tested this procedure not only on the low-sensitivity nucleus (29)Si in hybrid materials but also on the low-gyromagnetic-ratio, quadrupolar nucleus (87)Sr in the reference sample Sr(NO3)2. Improving the spectrum S/N ratio facilitates the determination of the T/Q ratio of hybrid materials. The method is also applicable to simulated spectra, resulting in shorter simulation durations for powder averaging. An estimate of the number of singular values needed for denoising is also provided.
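
    A minimal Cadzow-style denoising sketch, assuming a single-component FID: embed the signal in a Hankel matrix, truncate its SVD, and average the anti-diagonals back into a time-domain signal.

```python
import numpy as np

def cadzow(signal, rank, n_rows=None):
    """One Cadzow pass: Hankel embed, rank-truncate, anti-diagonal average."""
    N = len(signal)
    L = n_rows or N // 2
    H = np.array([signal[i:i + N - L + 1] for i in range(L)])  # Hankel matrix
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    Hr = (U[:, :rank] * s[:rank]) @ Vt[:rank]                  # rank truncation
    out = np.zeros(N, dtype=complex)
    counts = np.zeros(N)
    for i in range(Hr.shape[0]):                               # re-Hankelize
        for j in range(Hr.shape[1]):
            out[i + j] += Hr[i, j]
            counts[i + j] += 1
    return out / counts

# One decaying oscillation (a toy FID) buried in complex white noise.
t = np.arange(256)
fid = np.exp((2j * np.pi * 0.05 - 0.01) * t)
rng = np.random.default_rng(2)
noisy = fid + 0.1 * (rng.normal(size=256) + 1j * rng.normal(size=256))
denoised = cadzow(noisy, rank=1)
```

    In practice the truncation rank is set to the number of spectral components, which is exactly the estimate the paper says it provides.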

  5. Noise-assisted data processing with empirical mode decomposition in biomedical signals.

    PubMed

    Karagiannis, Alexandros; Constantinou, Philip

    2011-01-01

    In this paper, a methodology is described for investigating the performance of empirical mode decomposition (EMD) in biomedical signals, especially in the case of the electrocardiogram (ECG). Synthetic ECG signals corrupted with white Gaussian noise are employed, and time series of various lengths are processed with EMD in order to extract the intrinsic mode functions (IMFs). A statistical significance test is implemented to identify IMFs with high-level noise components and exclude them from denoising procedures. Simulation campaign results reveal that a decrease in processing time is accomplished with the introduction of a preprocessing stage prior to the application of EMD to biomedical time series. Furthermore, the variation in the number of IMFs according to the type of preprocessing stage is studied as a function of SNR and time-series length. The application of the methodology to MIT-BIH ECG records is also presented in order to verify the findings on real ECG signals.

  6. An integrated condition-monitoring method for a milling process using reduced decomposition features

    NASA Astrophysics Data System (ADS)

    Liu, Jie; Wu, Bo; Wang, Yan; Hu, Youmin

    2017-08-01

    Complex and non-stationary cutting chatter affects productivity and quality in the milling process. Developing an effective condition-monitoring approach is critical to accurately identify cutting chatter. In this paper, an integrated condition-monitoring method is proposed, where reduced features are used to efficiently recognize and classify machine states in the milling process. In the proposed method, vibration signals are decomposed into multiple modes with variational mode decomposition, and Shannon power spectral entropy is calculated to extract features from the decomposed signals. Principal component analysis is adopted to reduce feature size and computational cost. With the extracted feature information, the probabilistic neural network model is used to recognize and classify the machine states, including stable, transition, and chatter states. Experimental studies are conducted, and results show that the proposed method can effectively detect cutting chatter during different milling operation conditions. This monitoring method is also efficient enough to satisfy fast machine state recognition and classification.
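
    Two stages of the proposed pipeline, Shannon power spectral entropy as a feature and PCA for feature reduction, can be sketched as follows (the band signals below stand in for VMD modes, which are not implemented here):

```python
import numpy as np

def spectral_entropy(x):
    """Shannon entropy of the normalized power spectrum."""
    p = np.abs(np.fft.rfft(x)) ** 2
    p = p / p.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

def pca_reduce(X, n_components):
    """Project centered rows of X onto the leading principal directions."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

rng = np.random.default_rng(3)
# 20 samples x 6 per-band entropy features (stand-ins for VMD modes).
X = np.array([[spectral_entropy(rng.normal(size=128)) for _ in range(6)]
              for _ in range(20)])
Z = pca_reduce(X, n_components=2)   # reduced features for the classifier
```

    The reduced features `Z` would then feed the probabilistic neural network that labels stable, transition, and chatter states.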

  7. Implementing the sine transform of fermionic modes as a tensor network

    NASA Astrophysics Data System (ADS)

    Epple, Hannes; Fries, Pascal; Hinrichsen, Haye

    2017-09-01

    Based on the algebraic theory of signal processing, we recursively decompose the discrete sine transform of the first kind (DST-I) into small orthogonal block operations. Using a diagrammatic language, we then second-quantize this decomposition to construct a tensor network implementing the DST-I for fermionic modes on a lattice. The complexity of the resulting network is shown to scale as (5/4) n log n (not counting swap gates), where n is the number of lattice sites. Our method provides a systematic approach to generalizing Ferris' spectral tensor network to nontrivial boundary conditions.
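
For reference, the DST-I being decomposed is, with symmetric normalization, an orthogonal and involutory (self-inverse) transform. A small numpy check of these properties (the O(n log n) recursive factorization itself is beyond this sketch):

```python
import numpy as np

def dst1_matrix(n):
    """DST-I with symmetric normalization:
    S[j, k] = sqrt(2/(n+1)) * sin(pi * (j+1) * (k+1) / (n+1))."""
    j = np.arange(1, n + 1)
    return np.sqrt(2.0 / (n + 1)) * np.sin(np.pi * np.outer(j, j) / (n + 1))

S = dst1_matrix(8)
# symmetric and its own inverse under this normalization: S @ S = I
```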

  8. Surface modification processes during methane decomposition on Cu-promoted Ni–ZrO2 catalysts

    PubMed Central

    Wolfbeisser, Astrid; Klötzer, Bernhard; Mayr, Lukas; Rameshan, Raffael; Zemlyanov, Dmitry; Bernardi, Johannes; Rupprechter, Günther

    2015-01-01

    The surface chemistry of methane on Ni–ZrO2 and bimetallic CuNi–ZrO2 catalysts and the stability of the CuNi alloy under reaction conditions of methane decomposition were investigated by combining reactivity measurements and in situ synchrotron-based near-ambient pressure XPS. Cu was selected as an exemplary promoter for modifying the reactivity of Ni and enhancing the resistance against coke formation. We observed an activation process in methane between 650 and 735 K, with the exact temperature depending on the composition, which resulted in an irreversible modification of the catalytic performance of the bimetallic catalysts towards a Ni-like behaviour. The sudden increase in catalytic activity could be explained by an increase in the concentration of reduced Ni atoms at the catalyst surface in the active state, likely as a consequence of the interaction with methane. Cu addition to Ni improved the desired resistance against carbon deposition by lowering the amount of coke formed. As a key conclusion, the CuNi alloy shows limited stability under relevant reaction conditions: it is stable only in a limited temperature range, up to ~700 K in methane. Beyond this temperature, segregation of Ni species causes a fast increase in the methane decomposition rate. In view of the applicability of this system, a detailed understanding of the stability and surface composition of the bimetallic phases present, and of the influence of the Cu promoter on the surface chemistry under relevant reaction conditions, is essential. PMID:25815163

  9. Decomposition strategies in the problems of simulation of additive laser technology processes

    NASA Astrophysics Data System (ADS)

    Khomenko, M. D.; Dubrov, A. V.; Mirzade, F. Kh.

    2016-11-01

    The development of additive technologies and their application in industry is associated with the possibility of predicting the final properties of a crystallized added material. This paper describes the problem characterized by a dynamic and spatially nonuniform computational complexity, which, in the case of uniform decomposition of a computational domain, leads to an unbalanced load on computational cores. The strategy of partitioning of the computational domain is used, which minimizes the CPU time losses in the serial computations of the additive technological process. The chosen strategy is optimal from the standpoint of a priori unknown dynamic computational load distribution. The scaling of the computational problem on the cluster of the Institute on Laser and Information Technologies (RAS) that uses the InfiniBand interconnect is determined. The use of the parallel code with optimal decomposition made it possible to significantly reduce the computational time (down to several hours), which is important in the context of development of the software package for support of engineering activity in the field of additive technology.
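
The load-balancing idea above can be illustrated with a greedy contiguous split of per-cell costs. This is a toy 1-D stand-in for the paper's domain-partitioning strategy; the cost model and cell counts are hypothetical:

```python
import numpy as np

def balanced_partition(costs, nparts):
    """Greedy contiguous split: cut whenever cumulative cost passes the ideal share."""
    target = costs.sum() / nparts
    bounds, acc, cuts = [0], 0.0, 1
    for i, c in enumerate(costs):
        acc += c
        if acc >= cuts * target and cuts < nparts:
            bounds.append(i + 1)
            cuts += 1
    bounds.append(len(costs))
    return bounds

# hypothetical nonuniform per-cell costs (e.g., cells near the melt pool cost more)
costs = np.array([1, 1, 1, 1, 5, 5, 5, 5, 1, 1, 1, 1], dtype=float)
parts = balanced_partition(costs, 3)
```

With uniform costs this reduces to equal-sized chunks; with nonuniform costs the cuts shift so that each core receives roughly equal work, which is the effect the paper exploits to avoid idle cores.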

  10. Nucleation versus spinodal decomposition in phase formation processes in multicomponent solutions

    NASA Astrophysics Data System (ADS)

    Schmelzer, Jürn W. P.; Abyzov, Alexander S.; Möller, Jörg

    2004-10-01

    In the present paper, some further results of application of the generalized Gibbs' approach [J. W. P. Schmelzer et al., J. Chem. Phys. 112, 3820 (2000); 114, 5180 (2001); 119, 6166 (2003)] to describing new-phase formation processes are outlined. The path of cluster evolution in size and composition space is determined taking into account both thermodynamic and kinetic factors. The basic features of these paths of evolution are discussed in detail for a simple model of a binary mixture. According to this analysis, size and composition of the clusters of the newly evolving phase change in an unexpected way which is qualitatively different as compared to the classical picture of nucleation-growth processes. As shown, nucleation (i.e., the first stage of cluster formation starting from metastable initial states) exhibits properties resembling spinodal decomposition (the size remains nearly constant while the composition changes), although the presence of an activation barrier distinguishes the nucleation process from true spinodal decomposition. In addition, it is shown that phase formation both in metastable and unstable initial states near the classical spinodal may proceed via passage over a ridge of the thermodynamic potential with a finite activation barrier, even though (for unstable initial states) the value of the work of critical cluster formation (corresponding to the saddle point of the thermodynamic potential) is zero. This way, it turns out that nucleation concepts—in a modified form as compared with the classical picture—may govern also phase formation processes starting from unstable initial states. In contrast to the classical Gibbs' approach, the generalized Gibbs' method provides a description of phase changes both in binodal and spinodal regions of the phase diagram and confirms the point of view assuming a continuity of the basic features of the phase transformation kinetics in the vicinity of the classical spinodal curve.

  11. Reduction of nitrous oxide emissions from biological nutrient removal processes by thermal decomposition.

    PubMed

    Pedros, Philip B; Askari, Omid; Metghalchi, Hameed

    2016-12-01

    During the last decade municipal wastewater treatment plants have been regulated with increasingly stringent nutrient removal requirements, including nitrogen. Typically, biological treatment processes are employed to meet these limits. Although the nitrogen in the wastewater stream is reduced, certain steps in the biological processes allow for the release of gaseous nitrous oxide (N2O), a greenhouse gas (GHG). A comprehensive study was conducted to investigate the potential to mitigate N2O emissions from biological nutrient removal (BNR) processes by means of thermal decomposition. The study examined using the off gases from the biological process, instead of ambient air, as the oxidant gas for the combustion of biomethane. A detailed analysis was done to examine the concentration of N2O and 58 other gases that exited the combustion process. The analysis was based on the assumption that the exhaust gases were in chemical equilibrium, since the residence time in the combustor is sufficiently longer than the characteristic chemical time scales. For all inlet N2O concentrations the outlet concentrations were close to zero. Additionally, the emission of hydrogen sulfide (H2S) and ten commonly occurring volatile organic compounds (VOCs) were also examined as a means of odor control for biological secondary treatment processes or as potential emissions from an anaerobic reactor of a BNR process. The sulfur released from the H2S formed sulfur dioxide (SO2), and eight of the ten VOCs were destroyed.

  12. Linear friction weld process monitoring of fixture cassette deformations using empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Bakker, O. J.; Gibson, C.; Wilson, P.; Lohse, N.; Popov, A. A.

    2015-10-01

    Due to its inherent advantages, linear friction welding is a solid-state joining process of increasing importance to the aerospace, automotive, medical and power generation equipment industries. Tangential oscillations and forge stroke during the burn-off phase of the joining process introduce essential dynamic forces, which can also be detrimental to the welding process. Since burn-off is a critical phase in the manufacturing stage, process monitoring is fundamental for quality and stability control purposes. This study aims to improve workholding stability through the analysis of fixture cassette deformations. Methods and procedures for process monitoring are developed and implemented in a fail-or-pass assessment system for fixture cassette deformations during the burn-off phase. Additionally, the de-noised signals are compared to results from previous production runs. The observed deformations as a consequence of the forces acting on the fixture cassette are measured directly during the welding process. Data on the linear friction-welding machine are acquired and de-noised using empirical mode decomposition, before the burn-off phase is extracted. This approach enables a direct, objective comparison of the signal features with trends from previous successful welds. The capacity of the whole process monitoring system is validated and demonstrated through the analysis of a large number of signals obtained from welding experiments.

  13. A New Generation of Brain-Computer Interfaces Driven by Discovery of Latent EEG-fMRI Linkages Using Tensor Decomposition

    PubMed Central

    Deshpande, Gopikrishna; Rangaprakash, D.; Oeding, Luke; Cichocki, Andrzej; Hu, Xiaoping P.

    2017-01-01

    A Brain-Computer Interface (BCI) is a setup permitting the control of external devices by decoding brain activity. Electroencephalography (EEG) has been extensively used for decoding brain activity since it is non-invasive, cheap, portable, and has high temporal resolution to allow real-time operation. Due to its poor spatial specificity, BCIs based on EEG can require extensive training and multiple trials to decode brain activity (consequently slowing down the operation of the BCI). On the other hand, BCIs based on functional magnetic resonance imaging (fMRI) are more accurate owing to its superior spatial resolution and sensitivity to underlying neuronal processes which are functionally localized. However, due to its relatively low temporal resolution, high cost, and lack of portability, fMRI is unlikely to be used for routine BCI. We propose a new approach for transferring the capabilities of fMRI to EEG, which includes simultaneous EEG/fMRI sessions for finding a mapping from EEG to fMRI, followed by a BCI run from only EEG data, but driven by fMRI-like features obtained from the mapping identified previously. Our novel data-driven method is likely to discover latent linkages between electrical and hemodynamic signatures of neural activity hitherto unexplored using model-driven methods, and is likely to serve as a template for a novel multi-modal strategy wherein cross-modal EEG-fMRI interactions are exploited for the operation of a unimodal EEG system, leading to a new generation of EEG-based BCIs. PMID:28638316
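
A minimal CP (PARAFAC) decomposition by alternating least squares, the kind of tensor factorization underlying such latent-linkage discovery. This is a numpy-only sketch of the generic algorithm; the actual EEG/fMRI coupling model and preprocessing of the paper are beyond this snippet:

```python
import numpy as np

def khatri_rao(A, B):
    """Column-wise Kronecker product: row (i, j) holds A[i, r] * B[j, r]."""
    return np.einsum('ir,jr->ijr', A, B).reshape(-1, A.shape[1])

def cp_als(T, rank, iters=500, seed=0):
    """Rank-`rank` CP decomposition of a 3-way tensor by alternating least squares."""
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    A = rng.standard_normal((I, rank))
    B = rng.standard_normal((J, rank))
    C = rng.standard_normal((K, rank))
    T1 = T.reshape(I, -1)                       # mode-1 unfolding
    T2 = T.transpose(1, 0, 2).reshape(J, -1)    # mode-2 unfolding
    T3 = T.transpose(2, 0, 1).reshape(K, -1)    # mode-3 unfolding
    for _ in range(iters):
        # each factor is updated by a linear least-squares solve against the others
        A = T1 @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = T2 @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = T3 @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C
```

For an EEG tensor organized as channels x frequencies x time, the three factor matrices give spatial, spectral, and temporal signatures of each latent component, which is what makes CP attractive for linking modalities.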

  14. A domain decomposition parallel processing algorithm for molecular dynamics simulations of polymers

    NASA Astrophysics Data System (ADS)

    Brown, David; Clarke, Julian H. R.; Okuda, Motoi; Yamazaki, Takao

    1994-10-01

    We describe in this paper a domain decomposition molecular dynamics algorithm for use on distributed memory parallel computers which is capable of handling systems containing rigid bond constraints and three- and four-body potentials as well as non-bonded potentials. The algorithm has been successfully implemented on the Fujitsu 1024 processor element AP1000 machine. The performance has been compared with and benchmarked against the alternative cloning method of parallel processing [D. Brown, J.H.R. Clarke, M. Okuda and T. Yamazaki, J. Chem. Phys., 100 (1994) 1684] and results obtained using other scalar and vector machines. Two parallel versions of the SHAKE algorithm, which solves the bond length constraints problem, have been compared with regard to optimising the performance of this procedure.
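
The bond-length constraint that SHAKE enforces can be sketched for a single bond between two equal-mass particles. This is a simplified iteration for illustration; real SHAKE handles many coupled constraints and uses the pre-step bond vector in the correction:

```python
import numpy as np

def shake_bond(r1, r2, d, tol=1e-10, max_iter=100):
    """Iteratively enforce |r1 - r2| = d for two equal-mass particles."""
    r1, r2 = np.asarray(r1, float).copy(), np.asarray(r2, float).copy()
    for _ in range(max_iter):
        dvec = r1 - r2
        diff = dvec @ dvec - d * d          # constraint violation
        if abs(diff) < tol:
            break
        g = diff / (4.0 * (dvec @ dvec))    # Lagrange-multiplier estimate
        r1 -= g * dvec                      # move both particles symmetrically,
        r2 += g * dvec                      # preserving the center of mass
    return r1, r2
```

Parallelizing this procedure is nontrivial precisely because constraints couple particles that may live on different processors' domains, which is why the paper compares two parallel SHAKE variants.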

  15. Automated multiscale morphometry of muscle disease from second harmonic generation microscopy using tensor-based image processing.

    PubMed

    Garbe, Christoph S; Buttgereit, Andreas; Schürmann, Sebastian; Friedrich, Oliver

    2012-01-01

    Practically all chronic diseases are characterized by tissue remodeling that alters organ and cellular function through changes to normal organ architecture. Some morphometric alterations become irreversible and account for disease progression even on cellular levels. Early diagnostics to categorize tissue alterations, as well as monitoring progression or remission of disturbed cytoarchitecture upon treatment in the same individual, are a new emerging field. They strongly challenge spatial resolution and require advanced imaging techniques and strategies for detecting morphological changes. We use a combined second harmonic generation (SHG) microscopy and automated image processing approach to quantify morphology in an animal model of inherited Duchenne muscular dystrophy (mdx mouse) with age. Multiphoton XYZ image stacks from tissue slices reveal vast morphological deviation in muscles from old mdx mice at different scales of cytoskeleton architecture: cell calibers are irregular, myofibrils within cells are twisted, and sarcomere lattice disruptions (detected as "verniers") are larger in number compared to samples from healthy mice. In young mdx mice, such alterations are only minor. The boundary-tensor approach, adapted and optimized for SHG data, is a suitable approach to allow quick quantitative morphometry in whole tissue slices. The overall detection performance of the automated algorithm compares very well with manual "by eye" detection, the latter being time consuming and prone to subjective errors. Our algorithm outperforms manual detection in speed with similar reliability. This approach will be an important prerequisite for the implementation of clinical image databases to diagnose and monitor specific morphological alterations in chronic (muscle) diseases.
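
A minimal flavor of tensor-based orientation analysis is the structure tensor, shown below as a global (whole-image) version. Note this is a simplified relative of the boundary tensor used in the paper, which additionally captures edge energy; it is included only to illustrate how a tensor summarizes local orientation and anisotropy:

```python
import numpy as np

def orientation_coherence(img):
    """Global structure tensor: dominant gradient orientation and anisotropy."""
    gy, gx = np.gradient(img.astype(float))   # gradients along rows, then columns
    J = np.array([[np.sum(gx * gx), np.sum(gx * gy)],
                  [np.sum(gx * gy), np.sum(gy * gy)]])
    evals, evecs = np.linalg.eigh(J)          # eigenvalues in ascending order
    coherence = (evals[1] - evals[0]) / (evals[1] + evals[0] + 1e-12)
    return evecs[:, 1], coherence             # dominant direction, anisotropy in [0, 1]
```

For well-ordered myofibrils the coherence approaches 1 with a consistent dominant direction; twisted or disrupted fiber architecture lowers it, which is the kind of cue an automated morphometry pipeline can threshold.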

  16. Demonstration of base catalyzed decomposition process, Navy Public Works Center, Guam, Mariana Islands

    SciTech Connect

    Schmidt, A.J.; Freeman, H.D.; Brown, M.D.; Zacher, A.H.; Neuenschwander, G.N.; Wilcox, W.A.; Gano, S.R.; Kim, B.C.; Gavaskar, A.R.

    1996-02-01

    Base Catalyzed Decomposition (BCD) is a chemical dehalogenation process designed for treating soils and other substrates contaminated with polychlorinated biphenyls (PCB), pesticides, dioxins, furans, and other hazardous organic substances. PCBs are heavy organic liquids once widely used in industry as lubricants, heat transfer oils, and transformer dielectric fluids. In 1976, production was banned when PCBs were recognized as carcinogenic substances. It was estimated that significant quantities (one billion tons) of U.S. soils, including areas on U.S. military bases outside the country, were contaminated by PCB leaks and spills, and cleanup activities began. The BCD technology was developed in response to these activities. This report details the evolution of the process, from inception to deployment in Guam, and describes the process and system components provided to the Navy to meet the remediation requirements. The report is divided into several sections to cover the range of development and demonstration activities. Section 2.0 gives an overview of the project history. Section 3.0 describes the process chemistry and remediation steps involved. Section 4.0 provides a detailed description of each component and specific development activities. Section 5.0 details the testing and deployment operations and provides the results of the individual demonstration campaigns. Section 6.0 gives an economic assessment of the process. Section 7.0 presents the conclusions and recommendations from this project. The appendices contain equipment and instrument lists, equipment drawings, and detailed run and analytical data.

  17. Decomposition of aniline in aqueous solution by UV/TiO2 process with applying bias potential.

    PubMed

    Ku, Young; Chiu, Ping-Chin; Chou, Yiang-Chen

    2010-11-15

    Application of bias potential to the photocatalytic decomposition of aniline in aqueous solution was studied under various solution pH values, bias potentials, and concentrations of potassium chloride. The decomposition of aniline by the UV/TiO2 process was found to be enhanced by the application of bias potentials of lower voltages; however, the electrolysis of aniline became more dominant as the applied bias potential exceeded 1.0 V. Based on the experimental results and calculated synergetic factors, the application of bias potential improved the decomposition of aniline more noticeably in acidic solutions than in alkaline solutions. Decomposition of aniline by the UV/bias/TiO2 process in alkaline solutions increased to a certain extent with the concentration of potassium chloride present in aqueous solution. Experimental results also indicated that the energy consumed by applying bias potential for aniline decomposition by the UV/bias/TiO2 process might be much lower than that consumed by increasing the light intensity for photocatalysis.

  18. Plasma-assisted decomposition of methanol and trichloroethylene in atmospheric pressure air streams by electrical discharge processing

    SciTech Connect

    Hsiao, M.C.; Merritt, B.T.; Penetrante, B.M.; Vogtlin, G.E.; Wallman, P.H.

    1995-09-01

    Experiments are presented on the plasma-assisted decomposition of dilute concentrations of methanol and trichloroethylene in atmospheric pressure air streams by electrical discharge processing. This investigation used two types of discharge reactors, a dielectric-barrier and a pulsed corona discharge reactor, to study the effects of gas temperature and electrical energy input on the decomposition chemistry and byproduct formation. Our experimental data on both methanol and trichloroethylene show that, under identical gas conditions, the type of electrical discharge reactor does not affect the energy requirements for decomposition or byproduct formation. Our experiments on methanol show that discharge processing converts methanol to COx with an energy yield that increases with temperature. In contrast to the results from methanol, COx is only a minor product in the decomposition of trichloroethylene. In addition, higher temperatures decrease the energy yield for trichloroethylene. This effect may be due to increased competition from decomposition of the byproducts dichloroacetyl chloride and phosgene. In all cases plasma processing using an electrical discharge device produces CO preferentially over CO2.

  20. Efficient photoreductive decomposition of N-nitrosodimethylamine by UV/iodide process.

    PubMed

    Sun, Zhuyu; Zhang, Chaojie; Zhao, Xiaoyun; Chen, Jing; Zhou, Qi

    2017-05-05

    N-nitrosodimethylamine (NDMA) has aroused extensive concern as a disinfection byproduct due to its high toxicity and elevated concentration levels in water sources. This study investigates the photoreductive decomposition of NDMA by the UV/iodide process. The results showed that this process is an effective strategy for the treatment of NDMA, with 99.2% of NDMA removed within 10 min. The depletion of NDMA by the UV/iodide process obeyed pseudo-first-order kinetics with a rate constant (k1) of 0.60 ± 0.03 min(-1). Hydrated electrons (eaq(-)) generated by the UV irradiation of iodide were proven to play a critical role. Dimethylamine (DMA) and nitrite (NO2(-)) were formed as the main intermediate products, which completely converted to formate (HCOO(-)), ammonium (NH4(+)) and nitrogen (N2). Therefore, not only the high efficiency of NDMA destruction but also the elimination of toxic intermediates makes the UV/iodide process advantageous. A photoreduction mechanism was proposed: NDMA initially absorbed photons to a photoexcited state, and underwent cleavage of the N-NO bond under the attack of eaq(-). The solution pH had little impact on NDMA removal. However, alkaline conditions were more favorable for the elimination of DMA and NO2(-), thus effectively reducing the secondary pollution.
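
The reported rate constant and removal figures are mutually consistent under pseudo-first-order kinetics, as a quick check shows (using the paper's k1 = 0.60 min(-1); the predicted removal comes out slightly above the observed 99.2%, as expected for a fitted constant):

```python
import math

k1 = 0.60                              # min^-1, reported pseudo-first-order constant
removal = 1.0 - math.exp(-k1 * 10.0)   # fraction of NDMA removed after 10 min
# ~0.998, close to the reported 99.2% removal within 10 min
```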

  1. Character Decomposition and Transposition Processes of Chinese Compound Words in Rapid Serial Visual Presentation

    PubMed Central

    Cao, Hong-Wen; Yang, Ke-Yu; Yan, Hong-Mei

    2017-01-01

    Character order information is encoded at the initial stage of Chinese word processing, however, its time course remains underspecified. In this study, we assess the exact time course of the character decomposition and transposition processes of two-character Chinese compound words (canonical, transposed, or reversible words) compared with pseudowords using dual-target rapid serial visual presentation (RSVP) of stimuli appearing at 30 ms per character with no inter-stimulus interval. The results indicate that Chinese readers can identify words with character transpositions in rapid succession; however, a transposition cost is involved in identifying transposed words compared to canonical words. In RSVP reading, character order of words is more likely to be reversed during the period from 30 to 180 ms for canonical and reversible words, but the period from 30 to 240 ms for transposed words. Taken together, the findings demonstrate that the holistic representation of the base word is activated, however, the order of the two constituent characters is not strictly processed during the very early stage of visual word processing. PMID:28408895

  2. Decomposition of phenylarsonic acid by AOP processes: degradation rate constants and by-products.

    PubMed

    Jaworek, K; Czaplicka, M; Bratek, Ł

    2014-10-01

    The paper presents results of studies of the photodegradation, photooxidation, and oxidation of phenylarsonic acid (PAA) in aqueous solution. Water solutions containing 2.7 g dm(-3) phenylarsonic acid were subjected to advanced oxidation processes (AOP) in UV, UV/H2O2, UV/O3, H2O2, and O3 systems under two pH conditions. Kinetic rate constants and half-lives of the phenylarsonic acid decomposition reaction are presented. The results from the study indicate that at pH 2 and 7, PAA degradation follows pseudo-first-order kinetics. The highest rate constants (10.45 × 10(-3) and 20.12 × 10(-3)) and degradation efficiencies at pH 2 and 7 were obtained in the UV/O3 process. In solution, after the processes, benzene, phenol, acetophenone, o-hydroxybiphenyl, p-hydroxybiphenyl, benzoic acid, benzaldehyde, and biphenyl were identified.

  3. Probabilistic Round Trip Contamination Analysis of a Mars Sample Acquisition and Handling Process Using Markovian Decompositions

    NASA Technical Reports Server (NTRS)

    Hudson, Nicolas; Lin, Ying; Barengoltz, Jack

    2010-01-01

    A method for evaluating the probability of a Viable Earth Microorganism (VEM) contaminating a sample during the sample acquisition and handling (SAH) process of a potential future Mars Sample Return mission is developed. A scenario where multiple core samples would be acquired using a rotary percussive coring tool, deployed from an arm on a MER class rover, is analyzed. The analysis is conducted in a structured way by decomposing the sample acquisition and handling process into a series of discrete time steps, and breaking the physical system into a set of relevant components. At each discrete time step, two key functions are defined: the probability of a VEM being released from each component, and the transport matrix, which represents the probability of VEM transport from one component to another. By defining the expected number of VEMs on each component at the start of the sampling process, these decompositions allow the expected number of VEMs on each component at each sampling step to be represented as a Markov chain. This formalism provides a rigorous mathematical framework in which to analyze the probability of a VEM entering the sample chain, as well as making the analysis tractable by breaking the process down into small analyzable steps.
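
The Markov-chain bookkeeping can be sketched with a toy three-component system. The components, initial counts, and transfer probabilities below are hypothetical illustrations, not mission values:

```python
import numpy as np

# hypothetical 3-component system: 0 = coring tool, 1 = rover arm, 2 = sample
v0 = np.array([100.0, 50.0, 0.0])     # expected VEMs per component at start
# M[i, j]: probability that a VEM on component i ends up on component j per step
M = np.array([[0.90, 0.05, 0.05],
              [0.10, 0.85, 0.05],
              [0.00, 0.00, 1.00]])    # the sample retains whatever reaches it
v = v0.copy()
for _ in range(3):                     # three sampling steps
    v = v @ M                          # propagate expected counts (Markov chain)
```

Because each row of M sums to 1, the total expected VEM count is conserved across steps, and the absorbing sample row accumulates the contamination probability of interest.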

  5. Interactive multiscale tensor reconstruction for multiresolution volume visualization.

    PubMed

    Suter, Susanne K; Guitián, José A Iglesias; Marton, Fabio; Agus, Marco; Elsener, Andreas; Zollikofer, Christoph P E; Gopi, M; Gobbetti, Enrico; Pajarola, Renato

    2011-12-01

    Large scale and structurally complex volume datasets from high-resolution 3D imaging devices or computational simulations pose a number of technical challenges for interactive visual analysis. In this paper, we present the first integration of a multiscale volume representation based on tensor approximation within a GPU-accelerated out-of-core multiresolution rendering framework. Specific contributions include (a) a hierarchical brick-tensor decomposition approach for pre-processing large volume data, (b) a GPU accelerated tensor reconstruction implementation exploiting CUDA capabilities, and (c) an effective tensor-specific quantization strategy for reducing data transfer bandwidth and out-of-core memory footprint. Our multiscale representation allows for the extraction, analysis and display of structural features at variable spatial scales, while adaptive level-of-detail rendering methods make it possible to interactively explore large datasets within a constrained memory footprint. The quality and performance of our prototype system is evaluated on large structurally complex datasets, including gigabyte-sized micro-tomographic volumes.

  6. Thermochemistry and kinetics of graphite oxide exothermic decomposition for safety in large-scale storage and processing.

    PubMed

    Qiu, Yang; Collin, Felten; Hurt, Robert H; Külaots, Indrek

    2016-01-01

    The success of graphene technologies will require the development of safe and cost-effective nano-manufacturing methods. Special safety issues arise for manufacturing routes based on graphite oxide (GO) as an intermediate due to its energetic behavior. This article presents a detailed thermochemical and kinetic study of GO exothermic decomposition designed to identify the conditions and material compositions that avoid explosive events during storage and processing at large scale. It is shown that GO becomes more reactive for thermal decomposition when it is pretreated with OH(-) in suspension, and the effect is reversible by back-titration to low pH. This OH(-) effect can lower the decomposition exotherm onset temperature by up to 50 degrees Celsius, causing overlap with common drying operations (100-120°C) and possible self-heating and thermal runaway during processing. Spectroscopic and modeling evidence suggest epoxide groups are primarily responsible for the energetic behavior, and epoxy ring opening/closing reactions are offered as an explanation for the reversible effects of pH on decomposition kinetics and enthalpies. A quantitative kinetic model is developed for GO thermal decomposition and used in a series of case studies to predict the storage conditions under which spontaneous self-heating, thermal runaway, and explosions can be avoided.
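
The practical consequence of a lowered exotherm onset can be illustrated with an Arrhenius rate estimate. The pre-exponential factor and activation energy below are hypothetical values chosen only to show the trend, not the paper's fitted kinetic parameters:

```python
import math

# hypothetical Arrhenius parameters for a first-order exothermic decomposition
A_pre, Ea, R = 1.0e12, 130e3, 8.314    # 1/s, J/mol, J/(mol K)

def rate(T):
    """First-order Arrhenius rate constant at temperature T (kelvin)."""
    return A_pre * math.exp(-Ea / (R * T))

# a ~20 K rise within the drying range (100 -> 120 C) multiplies the rate
# severalfold, which is why a 50 K drop in onset temperature matters for safety
ratio = rate(393.15) / rate(373.15)
```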

  7. Character Decomposition and Transposition Processes in Chinese Compound Words Modulates Attentional Blink.

    PubMed

    Cao, Hongwen; Gao, Min; Yan, Hongmei

    2016-01-01

    The attentional blink (AB) is the phenomenon in which the identification of the second of two targets (T2) is attenuated if it is presented less than 500 ms after the first target (T1). Although the AB is eliminated in canonical word conditions, it remains unclear whether the character order in compound words affects the magnitude of the AB. Morpheme decomposition and transposition of Chinese two-character compound words can provide an effective means to examine AB priming and to assess combinations of the component representations inherent to visual word identification. In the present study, we examined the processing of consecutive targets in a rapid serial visual presentation (RSVP) paradigm using Chinese two-character compound words in which the two characters were transposed to form meaningful words or meaningless combinations (reversible, transposed, or canonical words). We found that when two Chinese characters that form a compound word, regardless of their order, are presented in an RSVP sequence, the likelihood of an AB for the second character is greatly reduced or eliminated compared to when the two characters constitute separate words rather than a compound word. Moreover, the order of the report for the two characters is more likely to be reversed when the normal order of the two characters in a compound word is reversed, especially when the interval between the presentation of the two characters is extremely short. These findings are more consistent with the cognitive strategy hypothesis than the resource-limited hypothesis during character decomposition and transposition of Chinese two-character compound words. These results suggest that compound characters are perceived as a unit, rather than two separate words. The data further suggest that readers could easily understand the text with character transpositions in compound words during Chinese reading.

  8. Character Decomposition and Transposition Processes in Chinese Compound Words Modulates Attentional Blink

    PubMed Central

    Cao, Hongwen; Gao, Min; Yan, Hongmei

    2016-01-01

    The attentional blink (AB) is the phenomenon in which the identification of the second of two targets (T2) is attenuated if it is presented less than 500 ms after the first target (T1). Although the AB is eliminated in canonical word conditions, it remains unclear whether the character order in compound words affects the magnitude of the AB. Morpheme decomposition and transposition of Chinese two-character compound words can provide an effective means to examine AB priming and to assess combinations of the component representations inherent to visual word identification. In the present study, we examined the processing of consecutive targets in a rapid serial visual presentation (RSVP) paradigm using Chinese two-character compound words in which the two characters were transposed to form meaningful words or meaningless combinations (reversible, transposed, or canonical words). We found that when two Chinese characters that form a compound word, regardless of their order, are presented in an RSVP sequence, the likelihood of an AB for the second character is greatly reduced or eliminated compared to when the two characters constitute separate words rather than a compound word. Moreover, the order of the report for the two characters is more likely to be reversed when the normal order of the two characters in a compound word is reversed, especially when the interval between the presentation of the two characters is extremely short. These findings are more consistent with the cognitive strategy hypothesis than the resource-limited hypothesis during character decomposition and transposition of Chinese two-character compound words. These results suggest that compound characters are perceived as a unit, rather than two separate words. The data further suggest that readers could easily understand the text with character transpositions in compound words during Chinese reading. PMID:27379003

  9. Fundamental phenomena on fuel decomposition and boundary layer combustion processes with applications to hybrid rocket motors

    NASA Technical Reports Server (NTRS)

    Kuo, Kenneth K.; Lu, Y. C.; Chiaverini, Martin J.; Harting, George C.

    1994-01-01

    An experimental study on the fundamental processes involved in fuel decomposition and boundary layer combustion in hybrid rocket motors is being conducted at the High Pressure Combustion Laboratory of the Pennsylvania State University. This research should provide a useful engineering technology base in the development of hybrid rocket motors as well as a fundamental understanding of the complex processes involved in hybrid propulsion. A high pressure slab motor has been designed and manufactured for conducting experimental investigations. Oxidizer (LOX or GOX) supply and control systems have been designed and partly constructed for the head-end injection into the test chamber. Experiments using HTPB fuel, as well as fuels supplied by NASA designated industrial companies will be conducted. Design and construction of fuel casting molds and sample holders have been completed. The portion of these items for industrial company fuel casting will be sent to the McDonnell Douglas Aerospace Corporation in the near future. The study focuses on the following areas: observation of solid fuel burning processes with LOX or GOX, measurement and correlation of solid fuel regression rate with operating conditions, measurement of flame temperature and radical species concentrations, determination of the solid fuel subsurface temperature profile, and utilization of experimental data for validation of a companion theoretical study (Part 2) also being conducted at PSU.

  10. Fundamental phenomena on fuel decomposition and boundary layer combustion processes with applications to hybrid rocket motors

    NASA Astrophysics Data System (ADS)

    Kuo, Kenneth K.; Lu, Y. C.; Chiaverini, Martin J.; Harting, George C.

    1994-11-01

    An experimental study on the fundamental processes involved in fuel decomposition and boundary layer combustion in hybrid rocket motors is being conducted at the High Pressure Combustion Laboratory of the Pennsylvania State University. This research should provide a useful engineering technology base in the development of hybrid rocket motors as well as a fundamental understanding of the complex processes involved in hybrid propulsion. A high pressure slab motor has been designed and manufactured for conducting experimental investigations. Oxidizer (LOX or GOX) supply and control systems have been designed and partly constructed for the head-end injection into the test chamber. Experiments using HTPB fuel, as well as fuels supplied by NASA designated industrial companies will be conducted. Design and construction of fuel casting molds and sample holders have been completed. The portion of these items for industrial company fuel casting will be sent to the McDonnell Douglas Aerospace Corporation in the near future. The study focuses on the following areas: observation of solid fuel burning processes with LOX or GOX, measurement and correlation of solid fuel regression rate with operating conditions, measurement of flame temperature and radical species concentrations, determination of the solid fuel subsurface temperature profile, and utilization of experimental data for validation of a companion theoretical study (Part 2) also being conducted at PSU.

  11. Putting domain decomposition at the heart of a mesh-based simulation process

    NASA Astrophysics Data System (ADS)

    Chow, Peter; Addison, Clifford

    2002-12-01

    In computational mechanics analyses such as those in computational fluid dynamics and computational structural mechanics, some 60-90% of total modelling time is taken by specifying and creating the model of the geometry and mesh. The rest of the time is spent in the actual analyses and in interpreting the results. This is especially true for industries such as aerospace and electronics, where 3D geometrically complex models with multiple physical processes are common. Advances in computational hardware and software have tended to increase the proportion of time spent in model creation, partly because such advances have made it feasible to solve hard and geometrically complex problems in a timely fashion. This paper shows one way to exploit the advances in computation to reduce the model creation time and potentially the overall modelling time, namely the use of domain decomposition to define consistent and coherent global models based on existing component geometry and mesh models. In keeping with existing modelling processes, the re-engineering cost for the process is minimal.

  12. The classical model for moment tensors

    NASA Astrophysics Data System (ADS)

    Tape, W.; Tape, C.

    2013-12-01

    A seismic moment tensor is a description of an earthquake source, but the description is indirect. The moment tensor describes seismic radiation rather than the actual physical process that initiates the radiation. A moment tensor 'model' then ties the physical process to the moment tensor. The model is not unique, and the physical process is therefore not unique. In the classical moment tensor model (Aki and Richards, 1980), an earthquake arises from slip along a planar fault, but with the slip not necessarily in the plane of the fault. The model specifies the resulting moment tensor in terms of the slip vector, the fault normal vector, and the Lame elastic parameters, assuming isotropy. We review the classical model in the context of the fundamental lune. The lune is closely related to the space of moment tensors, and it provides a setting that is conceptually natural as well as pictorial. In addition to the classical model, we consider a crack plus double couple model (CDC model) in which a moment tensor is regarded as the sum of a crack tensor and a double couple. A compilation of full moment tensors from the literature reveals large deviations in Poisson's ratio as implied by the classical model. Either the classical model is inadequate or the published full moment tensors have very large uncertainties. We question the common interpretation of the isotropic component as a volume change in the source region.
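
The classical model described above admits a compact formula: up to a scalar factor of fault area times slip magnitude, M = λ(s·ν)I + μ(sν^T + νs^T) for unit slip direction s, unit fault normal ν, and Lamé parameters λ, μ. A minimal sketch with illustrative values (not the authors' code):

```python
import numpy as np

def classical_moment_tensor(slip, normal, lam, mu):
    """Classical (Aki & Richards) moment tensor for slip direction s on a fault
    with unit normal nu, isotropic elasticity:
    M = lam*(s.nu)*I + mu*(s nu^T + nu s^T), up to fault area x slip magnitude."""
    s, nu = np.asarray(slip, float), np.asarray(normal, float)
    return lam * np.dot(s, nu) * np.eye(3) + mu * (np.outer(s, nu) + np.outer(nu, s))

# Pure double couple: slip lies in the fault plane (s . nu = 0), so M is trace-free
M = classical_moment_tensor([1.0, 0.0, 0.0], [0.0, 0.0, 1.0], lam=30e9, mu=30e9)
```

When the slip has a component along the normal (an opening crack), the trace term proportional to λ(s·ν) appears, which is what ties the isotropic component to Poisson's ratio in the compilation discussed above.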

  13. Age-Related Modifications of Diffusion Tensor Imaging Parameters and White Matter Hyperintensities as Inter-Dependent Processes.

    PubMed

    Pelletier, Amandine; Periot, Olivier; Dilharreguy, Bixente; Hiba, Bassem; Bordessoules, Martine; Chanraud, Sandra; Pérès, Karine; Amieva, Hélène; Dartigues, Jean-François; Allard, Michèle; Catheline, Gwénaëlle

    2015-01-01

    Microstructural changes of White Matter (WM) associated with aging have been widely described through Diffusion Tensor Imaging (DTI) parameters. In parallel, White Matter Hyperintensities (WMH) as observed on a T2-weighted MRI are extremely common in older individuals. However, few studies have investigated both phenomena conjointly. The present study investigates aging effects on DTI parameters in the absence and in the presence of WMH. Diffusion maps were constructed based on 21-direction DTI scans of young adults (n = 19, mean age = 33, SD = 7.4) and two age-matched groups of older adults, one presenting low-level WMH (n = 20, mean age = 78, SD = 3.2) and one presenting high-level WMH (n = 20, mean age = 79, SD = 5.4). Older subjects with low-level WMH presented modifications of DTI parameters in comparison to younger subjects, fitting with the DTI pattern classically described in aging, i.e., Fractional Anisotropy (FA) decrease/Radial Diffusivity (RD) increase. Furthermore, older subjects with high-level WMH showed greater DTI modifications in Normal Appearing White Matter (NAWM) in comparison to those with low-level WMH. Finally, in older subjects with high-level WMH, FA and RD values of NAWM were associated with WMH burden. Therefore, our findings suggest that DTI modifications and the presence of WMH are two inter-dependent processes occurring within different temporal windows. DTI changes would reflect the early phase of white matter changes, and WMH would appear as a consequence of those changes.
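
The DTI parameters discussed above are simple functions of the diffusion tensor's eigenvalues. As a sketch of the standard definitions (not the authors' processing pipeline):

```python
import numpy as np

def dti_scalars(evals):
    """Fractional anisotropy (FA) and radial diffusivity (RD) from the three
    eigenvalues of a diffusion tensor, sorted so that l1 >= l2 >= l3."""
    l1, l2, l3 = sorted(evals, reverse=True)
    fa = np.sqrt(0.5 * ((l1 - l2)**2 + (l2 - l3)**2 + (l3 - l1)**2)
                 / (l1**2 + l2**2 + l3**2))
    rd = (l2 + l3) / 2.0   # diffusivity perpendicular to the principal axis
    return fa, rd

# Illustrative healthy white-matter eigenvalues (units: 1e-3 mm^2/s)
fa, rd = dti_scalars([1.7, 0.3, 0.3])
```

The aging pattern the study reports (FA decrease with RD increase) corresponds to the two perpendicular eigenvalues rising toward the principal one.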

  14. [Rates of decomposition processes in mountain soils of the Sudeten as a function of edaphic-climatic and biotic factors].

    PubMed

    Striganova, B R; Bienkowski, P

    2000-01-01

    The rate of grass litter decomposition was studied in soils of the Karkonosze Mountains of the Sudeten at different altitudes. Parallel structural-functional investigations of the soil animal population, using soil macrofauna as an example, were carried out, and heavy metals were assayed in the soil at stationary plots to reveal the effects of both natural and anthropogenic factors on soil biological activity. The recent contamination of soil in the Sudeten by heavy metals and sulfur does not affect the spatial distribution and abundance of the soil-dwelling invertebrates or the decomposition rates. The latter correlated with a high level of soil saprotroph activity. The activity of the decomposition processes depends on the soil content of organic matter, the conditions of soil drainage, and the temperature of the upper soil horizon.

  15. Feedback processes in cellulose thermal decomposition: implications for fire-retarding strategies and treatments

    NASA Astrophysics Data System (ADS)

    Ball, R.; McIntosh, A. C.; Brindley, J.

    2004-06-01

    A simple dynamical system that models the competitive thermokinetics and chemistry of cellulose decomposition is examined, with reference to evidence from experimental studies indicating that char formation is a low activation energy exothermal process and volatilization is a high activation energy endothermal process. The thermohydrolysis chemistry at the core of the primary competition is described. Essentially, the competition is between two nucleophiles, a molecule of water and an -OH group on C6 of an end glucosyl cation, to form either a reducing chain fragment with the propensity to undergo the bond-forming reactions that ultimately form char, or a levoglucosan end-fragment that depolymerizes to volatile products. The results of this analysis suggest that promotion of char formation under thermal stress can actually increase the production of flammable volatiles. Thus, we would like to convey an important safety message in this paper: in some situations where heat and mass transfer is restricted in cellulosic materials, such as furnishings, insulation, and stockpiles, the use of char-promoting treatments for fire retardation may have the effect of increasing the risk of flaming combustion.
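
The competition described above can be caricatured with two first-order Arrhenius channels: a low-activation-energy char pathway and a high-activation-energy volatilization pathway. The parameter values below are purely illustrative assumptions, not the paper's fitted kinetics, and the sketch omits the exothermic feedback that drives the authors' dynamical system; it only shows how the branching shifts toward volatiles as temperature rises.

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def branch_fraction_volatiles(T, A_char=1e6, E_char=8e4, A_vol=1e13, E_vol=2e5):
    """Fraction of cellulose consumed by the volatilization pathway when two
    first-order Arrhenius channels compete. All pre-exponential factors and
    activation energies are illustrative assumptions, not fitted kinetics."""
    k_char = A_char * np.exp(-E_char / (R * T))  # low activation energy, exothermal
    k_vol = A_vol * np.exp(-E_vol / (R * T))     # high activation energy, endothermal
    return k_vol / (k_char + k_vol)

low, high = branch_fraction_volatiles(550.0), branch_fraction_volatiles(900.0)
```

At the lower temperature char formation dominates; at the higher temperature the volatile channel takes over, which is the competitive structure underlying the paper's safety argument.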

  16. Fundamental phenomena on fuel decomposition and boundary layer combustion processes with applications to hybrid rocket motors

    NASA Technical Reports Server (NTRS)

    Kuo, Kenneth K.; Lu, Y. C.; Chiaverini, Martin J.; Harting, George C.

    1994-01-01

    An experimental study on the fundamental processes involved in fuel decomposition and boundary layer combustion in hybrid rocket motors is being conducted at the High Pressure Combustion Laboratory of the Pennsylvania State University. This research should provide an engineering technology base for development of large scale hybrid rocket motors as well as a fundamental understanding of the complex processes involved in hybrid propulsion. A high pressure slab motor has been designed for conducting experimental investigations. Oxidizer (LOX or GOX) is injected through the head-end over a solid fuel (HTPB) surface. Experiments using fuels supplied by NASA designated industrial companies will also be conducted. The study focuses on the following areas: measurement and observation of solid fuel burning with LOX or GOX, correlation of solid fuel regression rate with operating conditions, measurement of flame temperature and radical species concentrations, determination of the solid fuel subsurface temperature profile, and utilization of experimental data for validation of a companion theoretical study also being conducted at PSU.

  17. Mathematical simulation of thermal decomposition processes in coking polymers during intense heating

    SciTech Connect

    Shlenskii, O.F.; Polyakov, A.A.

    1994-12-01

    Description of nonstationary heat transfer in heat-shielding materials based on cross-linked polymers, mathematical simulation of chemical engineering processes of treating coking and fiery coals, and design calculations all require taking thermal destruction kinetics into account. The kinetics of chemical transformations affects the change in substance density depending on the temperature, the time, the heat-release function, and other properties of materials. The traditionally accepted description of the thermal destruction kinetics of coking materials is based on formulating a set of kinetic equations in which only chemical transformations are taken into account. However, such an approach does not necessarily agree with the experimental data obtained for the case of intense heating. The authors propose including parameters characterizing the decrease of intermolecular interaction in a comparatively narrow temperature interval (20-40 K) in the set of kinetic equations. In the neighborhood of a certain temperature T{sub 1}, which is called the limiting temperature of thermal decomposition, a decrease in intermolecular interaction causes an increase in the rates of chemical and phase transformations. The enhancement of destruction processes has been found experimentally by the contact thermal analysis method.

  18. Empirical mode decomposition as a time-varying multirate signal processing system

    NASA Astrophysics Data System (ADS)

    Yang, Yanli

    2016-08-01

    Empirical mode decomposition (EMD) can adaptively split composite signals into narrow subbands termed intrinsic mode functions (IMFs). Although an analytical expression of IMFs extracted by EMD from signals is introduced in Yang et al. (2013) [1], it is only used for the case of extrema spaced uniformly. In this paper, the EMD algorithm is analyzed from digital signal processing perspective for the case of extrema spaced nonuniformly. Firstly, the extrema extraction is represented by a time-varying extrema decimator. The nonuniform extrema extraction is analyzed through modeling the time-varying extrema decimation at a fixed time point as a time-invariant decimation. Secondly, by using the impulse/summation approach, spline interpolation for knots spaced nonuniformly is shown as two basic operations, time-varying interpolation and filtering by a time-varying spline filter. Thirdly, envelopes of signals are written as the output of the time-varying spline filter. An expression of envelopes of signals in both time and frequency domain is presented. The EMD algorithm is then described as a time-varying multirate signal processing system. Finally, an equation to model IMFs is derived by using a matrix formulation in time domain for the general case of extrema spaced nonuniformly.
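
The sifting procedure that the paper reinterprets as a time-varying multirate system can be sketched in a few lines: extract the extrema (the "extrema decimator"), spline the upper and lower envelopes through those nonuniformly spaced knots (the "time-varying spline filter"), and subtract the mean envelope. A minimal sketch using SciPy's cubic splines, not a faithful reproduction of the paper's formulation:

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema

def sift_imf(x, t, n_sift=10):
    """Extract the first IMF by repeated sifting: envelope the maxima and
    minima with cubic splines over nonuniform knots, subtract the mean."""
    h = x.copy()
    for _ in range(n_sift):
        imax = argrelextrema(h, np.greater)[0]   # nonuniform "decimation" points
        imin = argrelextrema(h, np.less)[0]
        if len(imax) < 4 or len(imin) < 4:
            break
        upper = CubicSpline(t[imax], h[imax])(t)
        lower = CubicSpline(t[imin], h[imin])(t)
        h = h - (upper + lower) / 2.0            # remove the local mean envelope
    return h

t = np.linspace(0.0, 1.0, 2000)
x = np.sin(2 * np.pi * 40 * t) + np.sin(2 * np.pi * 5 * t)
imf1 = sift_imf(x, t)  # should approximate the 40 Hz narrowband component
```

The extrema here are genuinely nonuniformly spaced, which is exactly the case the paper analyzes by modeling the decimation at a fixed time point as time-invariant.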

  19. Multilinear operators for higher-order decompositions.

    SciTech Connect

    Kolda, Tamara Gibson

    2006-04-01

    We propose two new multilinear operators for expressing the matrix compositions that are needed in the Tucker and PARAFAC (CANDECOMP) decompositions. The first operator, which we call the Tucker operator, is shorthand for performing an n-mode matrix multiplication for every mode of a given tensor and can be employed to concisely express the Tucker decomposition. The second operator, which we call the Kruskal operator, is shorthand for the sum of the outer-products of the columns of N matrices and allows a divorce from a matricized representation and a very concise expression of the PARAFAC decomposition. We explore the properties of the Tucker and Kruskal operators independently of the related decompositions. Additionally, we provide a review of the matrix and tensor operations that are frequently used in the context of tensor decompositions.
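
As a sketch of the two operators (my own minimal NumPy implementations, not the paper's notation or the Tensor Toolbox API):

```python
import numpy as np

def kruskal(*mats):
    """Kruskal operator: sum of outer products of corresponding columns of
    N matrices -- a concise expression of the PARAFAC (CANDECOMP) model."""
    X = np.zeros(tuple(m.shape[0] for m in mats))
    for j in range(mats[0].shape[1]):
        term = mats[0][:, j]
        for m in mats[1:]:
            term = np.multiply.outer(term, m[:, j])
        X += term
    return X

def tucker(G, *mats):
    """Tucker operator: n-mode matrix multiplication of a core tensor G by
    one matrix per mode -- a concise expression of the Tucker model."""
    X = G
    for n, U in enumerate(mats):
        X = np.moveaxis(np.tensordot(U, X, axes=(1, n)), 0, n)
    return X

rng = np.random.default_rng(0)
A, B, C = rng.random((4, 2)), rng.random((5, 2)), rng.random((3, 2))
X = kruskal(A, B, C)  # a 4 x 5 x 3 tensor of rank at most 2

# PARAFAC is the special case of Tucker with a superdiagonal core
Gd = np.zeros((2, 2, 2))
Gd[0, 0, 0] = Gd[1, 1, 1] = 1.0
```

The superdiagonal-core identity at the end is the relationship between the two decompositions that the operators make easy to state.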

  20. The Formation of Sodium Stannate from Mineral Cassiterite by the Alkaline Decomposition Process with Sodium Carbonate (Na2CO3)

    NASA Astrophysics Data System (ADS)

    Andriyah, L.; Lalasari, L. H.; Manaf, A.

    2017-02-01

    Extraction of cassiterite using alkaline decomposition with sodium carbonate (Na2CO3) has been studied. Cassiterite (SnO2) is a mineral ore that contains about 57.82 wt% tin (Sn) along with impurities such as quartz, ilmenite, monazite, rutile and zircon. The initial step of the process was to remove the impurities in the cassiterite through washing and separation by a high magnetic separator (HTS). The aim of this research is to increase the added value of cassiterite from local areas of Indonesia by using alkaline decomposition to form sodium stannate (Na2SnO3). The results show that Indonesian cassiterite can form sodium stannate (Na2SnO3), which is water-soluble in the leaching process. The longer the decomposition time, the more sodium stannate phase is formed. The optimum result was reached when the decomposition was carried out at 850 °C for 4 hours with a Na2CO3-to-cassiterite mole ratio of 3:2. High Score Plus (HSP) was used in this research to analyze the mass fraction of sodium stannate (Na2SnO3). HSP analysis showed that the mass fraction of sodium stannate (Na2SnO3) is 70.3 wt%.
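
Assuming the decomposition reaction is the one commonly written, SnO2 + Na2CO3 → Na2SnO3 + CO2 (the abstract does not state it explicitly), the reported 3:2 mole ratio of Na2CO3 to cassiterite translates into a charge mass ratio as follows:

```python
# Standard atomic weights, g/mol
M = {'Sn': 118.71, 'O': 16.00, 'Na': 22.99, 'C': 12.01}

m_SnO2 = M['Sn'] + 2 * M['O']                  # cassiterite, ~150.71 g/mol
m_Na2CO3 = 2 * M['Na'] + M['C'] + 3 * M['O']   # soda ash, ~105.99 g/mol

# Reported optimum: Na2CO3 : SnO2 mole ratio of 3:2
mass_ratio = (3 * m_Na2CO3) / (2 * m_SnO2)     # mass of Na2CO3 per mass of SnO2
```

So a 3:2 mole ratio corresponds to roughly equal masses of the two solids, slightly more soda ash than ore.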

  1. Decomposition of non-ionic surfactant Tergitol TMN-10 by the Fenton process in the presence of iron oxide nanoparticles.

    PubMed

    Kos, L; Michalska, K; Perkowski, J

    2014-11-01

    The aim of our studies was to determine the efficiency of decomposition of a non-ionic surfactant by the Fenton method in the presence of iron nanocompounds and to compare it with the classical Fenton method. The subject of study was aqueous solutions of the non-ionic detergent Tergitol TMN-10 used in the textile industry. Aqueous solutions of the surfactant were subjected to treatment by the classical Fenton method and to treatment in the presence of iron nanocompounds. In the samples of liquid solutions containing the surfactant, chemical oxygen demand (COD) and total organic carbon (TOC) were determined. The Fenton process was optimized based on studies of the effect of the compounds used in the treatment, the doses of iron and nanoiron, hydrogen peroxide, and the pH of the solution on surfactant decomposition. Iron oxide nanopowder catalyzed the process of detergent decomposition, increasing its efficiency and the degree of mineralization. It was found that the efficiency of surfactant decomposition in the process using iron nanocompounds was 10 to 30% higher than that of the classical method. The amounts of deposits formed were also several times smaller.

  2. Tensor-Factorized Neural Networks.

    PubMed

    Chien, Jen-Tzung; Bao, Yi-Ting

    2017-04-17

    Growing interest in multiway data analysis and deep learning has made tensor factorization (TF) and neural networks (NNs) crucial topics. Conventionally, an NN model is estimated from a set of one-way observations. Such a vectorized NN does not generalize to learning representations from multiway observations. The classification performance of a vectorized NN is constrained because the temporal or spatial information in neighboring ways is disregarded, and more parameters are required to learn the complicated data structure. This paper presents a new tensor-factorized NN (TFNN), which tightly integrates TF and NN for multiway feature extraction and classification under a unified discriminative objective. The TFNN can be seen as a generalized NN in which the affine transformation of an NN is replaced by multilinear, multiway factorization. The multiway information is preserved through layerwise factorization. Tucker decomposition and nonlinear activation are performed in each hidden layer. A tensor-factorized error backpropagation is developed to train the TFNN with limited parameter size and computation time. The TFNN can be further extended to a convolutional TFNN (CTFNN) by looking at small subtensors through factorized convolution. Experiments on real-world classification tasks demonstrate that the TFNN and CTFNN attain substantial improvements over an NN and a convolutional NN, respectively.
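
The parameter saving that motivates TFNN can be illustrated in the simplest (two-mode) case: replace a dense weight matrix with a Tucker-style factorization, so the affine transform becomes multilinear in small factors. This is my own toy layer under that assumption, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def factorized_layer(x, U, G, V):
    """Hidden layer whose m x n weight matrix is replaced by the factorization
    W = U @ G @ V.T with a small r1 x r2 core G: y = tanh(U G V^T x)."""
    return np.tanh(U @ (G @ (V.T @ x)))

m, n, r1, r2 = 64, 128, 8, 8
U = rng.normal(size=(m, r1))
G = rng.normal(size=(r1, r2))
V = rng.normal(size=(n, r2))
x = rng.normal(size=n)
y = factorized_layer(x, U, G, V)

params_full = m * n                         # dense affine layer
params_factored = m * r1 + r1 * r2 + n * r2  # factored layer, far fewer
```

With small core ranks the factored layer stores an order of magnitude fewer parameters than the dense one, which is the trade the paper generalizes to multiway inputs.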

  3. KOALA: A program for the processing and decomposition of transient spectra

    NASA Astrophysics Data System (ADS)

    Grubb, Michael P.; Orr-Ewing, Andrew J.; Ashfold, Michael N. R.

    2014-06-01

    Extracting meaningful kinetic traces from time-resolved absorption spectra is a non-trivial task, particularly for solution phase spectra where solvent interactions can substantially broaden and shift the transition frequencies. Typically, each spectrum is composed of signal from a number of molecular species (e.g., excited states, intermediate complexes, product species) with overlapping spectral features. Additionally, the profiles of these spectral features may evolve in time (i.e., signal nonlinearity), further complicating the decomposition process. Here, we present a new program for decomposing mixed transient spectra into their individual component spectra and extracting the corresponding kinetic traces: KOALA (Kinetics Observed After Light Absorption). The software combines spectral target analysis with brute-force linear least squares fitting, which is computationally efficient because of the small nonlinear parameter space of most spectral features. Within, we demonstrate the application of KOALA to two sets of experimental transient absorption spectra with multiple mixed spectral components. Although designed for decomposing solution-phase transient absorption data, KOALA may in principle be applied to any time-evolving spectra with multiple components.
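
The core of the spectral target analysis described above, fitting each transient spectrum as a linear combination of reference component spectra by linear least squares, can be sketched as follows (hypothetical Gaussian bands of my own invention; not KOALA's actual interface):

```python
import numpy as np

wavelength = np.linspace(400.0, 700.0, 301)  # nm

def band(center, width):
    """A hypothetical Gaussian component spectrum."""
    return np.exp(-0.5 * ((wavelength - center) / width) ** 2)

# Reference spectra of two assumed species, e.g. an excited state and a product
S = np.column_stack([band(480.0, 20.0), band(600.0, 30.0)])

# One measured transient spectrum: a mixture of the components plus noise
rng = np.random.default_rng(1)
mixed = 0.7 * S[:, 0] + 0.3 * S[:, 1] + 0.005 * rng.normal(size=wavelength.size)

# Linear least squares recovers the component amplitudes at this delay time;
# repeating over all delays yields the kinetic traces
amps, *_ = np.linalg.lstsq(S, mixed, rcond=None)
```

KOALA's brute-force strategy then wraps this fast linear step inside a search over the small set of nonlinear spectral parameters (e.g. band shifts and broadenings).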

  4. Spectral decomposition of P50 suppression in schizophrenia during concurrent visual processing.

    PubMed

    Moran, Zachary D; Williams, Terrance J; Bachman, Peter; Nuechterlein, Keith H; Subotnik, Kenneth L; Yee, Cindy M

    2012-09-01

    Reduced suppression of the auditory P50 event-related potential has long been associated with schizophrenia, but the mechanisms associated with the generation and suppression of the P50 are not well understood. Recent investigations have used spectral decomposition of the electroencephalograph (EEG) signal to gain additional insight into the ongoing electrophysiological activity that may be reflected by the P50 suppression deficit. The present investigation extended this line of study by examining how both a traditional measure of sensory gating and the ongoing EEG from which it is extracted might be modified by the presence of concurrent visual stimulation - perhaps better characterizing gating deficits as they occur in a real-world, complex sensory environment. The EEG was obtained from 18 patients with schizophrenia and 17 healthy control subjects during the P50 suppression paradigm and while identical auditory paired-stimuli were presented concurrently with affectively neutral pictures. Consistent with prior research, schizophrenia patients differed from healthy subjects in gating of power in the theta range; theta activity also was modulated by visual stimulation. In addition, schizophrenia patients showed intact gating but overall increased power in the gamma range, consistent with a model of NMDA receptor dysfunction in the disorder. These results are in line with a model of schizophrenia in which impairments in neural synchrony are related to sensory demands and the processing of multimodal information. Copyright © 2012 Elsevier B.V. All rights reserved.

  5. Trace Norm Regularized CANDECOMP/PARAFAC Decomposition With Missing Data.

    PubMed

    Liu, Yuanyuan; Shang, Fanhua; Jiao, Licheng; Cheng, James; Cheng, Hong

    2015-11-01

    In recent years, low-rank tensor completion (LRTC) problems have received a significant amount of attention in computer vision, data mining, and signal processing. The existing trace norm minimization algorithms for iteratively solving LRTC problems involve multiple singular value decompositions of very large matrices at each iteration. Therefore, they suffer from high computational cost. In this paper, we propose a novel trace norm regularized CANDECOMP/PARAFAC decomposition (TNCP) method for simultaneous tensor decomposition and completion. We first formulate a factor matrix rank minimization model by deducing the relation between the rank of each factor matrix and the mode-n rank of a tensor. Then, we introduce a tractable relaxation of our rank function and thereby obtain a convex combination problem of much smaller-scale matrix trace norm minimization. Finally, we develop an efficient algorithm based on the alternating direction method of multipliers to solve our problem. The promising experimental results on synthetic and real-world data validate the effectiveness of our TNCP method. Moreover, TNCP is significantly faster than the state-of-the-art methods and scales to larger problems.
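
The underlying completion task can be illustrated with a much simpler method than TNCP: fit CP factors to the observed entries only, here by plain gradient descent on the masked squared error. This is an illustration of completion-by-factorization, not the paper's trace-norm ADMM algorithm:

```python
import numpy as np

rng = np.random.default_rng(2)

def cp(A, B, C):
    """Assemble a third-order CP (PARAFAC) tensor from its factor matrices."""
    return np.einsum('ir,jr,kr->ijk', A, B, C)

# Ground-truth rank-2 tensor, of which only about half the entries are observed
A, B, C = (rng.normal(size=(8, 2)) for _ in range(3))
T = cp(A, B, C)
mask = rng.random(T.shape) < 0.5

# Fit CP factors to the observed entries only
Ah, Bh, Ch = (0.1 * rng.normal(size=(8, 2)) for _ in range(3))
loss = lambda: np.linalg.norm(mask * (cp(Ah, Bh, Ch) - T))
loss_start = loss()
lr = 0.01
for _ in range(3000):
    R = mask * (cp(Ah, Bh, Ch) - T)  # residual on observed entries only
    Ah = Ah - lr * np.einsum('ijk,jr,kr->ir', R, Bh, Ch)
    Bh = Bh - lr * np.einsum('ijk,ir,kr->jr', R, Ah, Ch)
    Ch = Ch - lr * np.einsum('ijk,ir,jr->kr', R, Ah, Bh)
loss_end = loss()
```

TNCP's contribution is to regularize exactly this kind of factor-matrix fit with trace norms on the (small) factor matrices, avoiding the repeated SVDs of very large unfoldings.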

  6. Extended vector-tensor theories

    NASA Astrophysics Data System (ADS)

    Kimura, Rampei; Naruko, Atsushi; Yoshida, Daisuke

    2017-01-01

    Recently, several extensions of massive vector theory in curved space-time have been proposed in many literatures. In this paper, we consider the most general vector-tensor theories that contain up to two derivatives with respect to metric and vector field. By imposing a degeneracy condition of the Lagrangian in the context of ADM decomposition of space-time to eliminate an unwanted mode, we construct a new class of massive vector theories where five degrees of freedom can propagate, corresponding to three for massive vector modes and two for massless tensor modes. We find that the generalized Proca and the beyond generalized Proca theories up to the quartic Lagrangian, which should be included in this formulation, are degenerate theories even in curved space-time. Finally, introducing new metric and vector field transformations, we investigate the properties of thus obtained theories under such transformations.

  7. Efficient MATLAB computations with sparse and factored tensors.

    SciTech Connect

    Bader, Brett William; Kolda, Tamara Gibson (Sandia National Lab, Livermore, CA)

    2006-12-01

    In this paper, the term tensor refers simply to a multidimensional or N-way array, and we consider how specially structured tensors allow for efficient storage and computation. First, we study sparse tensors, which have the property that the vast majority of the elements are zero. We propose storing sparse tensors using coordinate format and describe the computational efficiency of this scheme for various mathematical operations, including those typical to tensor decomposition algorithms. Second, we study factored tensors, which have the property that they can be assembled from more basic components. We consider two specific types: a Tucker tensor can be expressed as the product of a core tensor (which itself may be dense, sparse, or factored) and a matrix along each mode, and a Kruskal tensor can be expressed as the sum of rank-1 tensors. We are interested in the case where the storage of the components is less than the storage of the full tensor, and we demonstrate that many elementary operations can be computed using only the components. All of the efficiencies described in this paper are implemented in the Tensor Toolbox for MATLAB.
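
A taste of why factored storage pays off: the inner product of two Kruskal tensors can be computed entirely from their components, via the Hadamard product of the factor Gram matrices, without ever forming the full arrays. This is my own sketch, not the Tensor Toolbox API:

```python
import numpy as np

rng = np.random.default_rng(3)

# A Kruskal (CP) tensor is stored as one factor matrix per mode
shape, rank = (30, 40, 50), 5
A = [rng.normal(size=(s, rank)) for s in shape]
B = [rng.normal(size=(s, rank)) for s in shape]

def kruskal_inner(X, Y):
    """<X, Y> for Kruskal tensors, from components only:
    sum over the elementwise product of the per-mode Gram matrices Xn^T Yn."""
    G = np.ones((X[0].shape[1], Y[0].shape[1]))
    for Xn, Yn in zip(X, Y):
        G *= Xn.T @ Yn
    return G.sum()

def full(F):
    """Assemble the full array (only needed here to check the shortcut)."""
    return np.einsum('ir,jr,kr->ijk', *F)

val = kruskal_inner(A, B)
storage_factored = sum(s * rank for s in shape)  # 600 numbers
storage_full = int(np.prod(shape))               # 60,000 numbers
```

The factored computation touches 600 stored numbers per tensor instead of 60,000, which is the storage-versus-full-array gap the paper exploits systematically.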

  8. Unsupervised Tensor Mining for Big Data Practitioners.

    PubMed

    Papalexakis, Evangelos E; Faloutsos, Christos

    2016-09-01

    Multiaspect data are ubiquitous in modern Big Data applications. For instance, different aspects of a social network are the different types of communication between people, the time stamp of each interaction, and the location associated to each individual. How can we jointly model all those aspects and leverage the additional information that they introduce to our analysis? Tensors, which are multidimensional extensions of matrices, are a principled and mathematically sound way of modeling such multiaspect data. In this article, our goal is to popularize tensors and tensor decompositions to Big Data practitioners by demonstrating their effectiveness, outlining challenges that pertain to their application in Big Data scenarios, and presenting our recent work that tackles those challenges. We view this work as a step toward a fully automated, unsupervised tensor mining tool that can be easily and broadly adopted by practitioners in academia and industry.

  9. Mathematical modeling of frontal process in thermal decomposition of a substance with allowance for the finite velocity of heat propagation

    SciTech Connect

    Shlenskii, O.F.; Murashov, G.G.

    1982-05-01

    In describing frontal processes of thermal decomposition of high-energy condensed substances, for example detonation, it is common practice to write the equation for the conservation of energy without any limitations on the heat propagation velocity (HPV). At the same time, it is known that in calculating fast heat conduction processes, the assumption of an infinitely high HPV is not always justified. In order to evaluate the influence of the HPV on the results of calculations of the heat conduction process under conditions of short-term exothermic decomposition of a condensed substance, the solution of the problem of heating a semi-infinite, thermally unstable solid body with boundary conditions of the third kind on the surface has been examined.
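
The finite heat propagation velocity invoked above is commonly introduced by replacing Fourier's law with the Cattaneo-Vernotte relaxation law, which turns the parabolic heat equation into a hyperbolic one with a finite wave speed (standard form in my notation; the paper's exact formulation may differ):

```latex
\tau \frac{\partial \mathbf{q}}{\partial t} + \mathbf{q} = -k\,\nabla T
\quad\Longrightarrow\quad
\tau \frac{\partial^2 T}{\partial t^2} + \frac{\partial T}{\partial t}
  = \alpha\,\nabla^2 T ,
\qquad c = \sqrt{\alpha/\tau} ,
```

where q is the heat flux, τ the thermal relaxation time, and α the thermal diffusivity; Fourier's law, and with it an infinite HPV, is recovered in the limit τ → 0.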

  10. Empirical mode decomposition analysis of random processes in the solar atmosphere

    NASA Astrophysics Data System (ADS)

    Kolotkov, D. Y.; Anfinogentov, S. A.; Nakariakov, V. M.

    2016-08-01

    Context. Coloured noisy components with a power law spectral energy distribution are often shown to appear in solar signals of various types. Such a frequency-dependent noise may indicate the operation of various randomly distributed dynamical processes in the solar atmosphere. Aims: We develop a recipe for the correct usage of the empirical mode decomposition (EMD) technique in the presence of coloured noise, allowing for clear distinguishing between quasi-periodic oscillatory phenomena in the solar atmosphere and superimposed random background processes. For illustration, we statistically investigate extreme ultraviolet (EUV) emission intensity variations observed with SDO/AIA in the coronal (171 Å), chromospheric (304 Å), and upper photospheric (1600 Å) layers of the solar atmosphere, from a quiet sun and a sunspot umbrae region. Methods: EMD has been used for analysis because of its adaptive nature and essential applicability to the processing non-stationary and amplitude-modulated time series. For the comparison of the results obtained with EMD, we use the Fourier transform technique as an etalon. Results: We empirically revealed statistical properties of synthetic coloured noises in EMD, and suggested a scheme that allows for the detection of noisy components among the intrinsic modes obtained with EMD in real signals. Application of the method to the solar EUV signals showed that they indeed behave randomly and could be represented as a combination of different coloured noises characterised by a specific value of the power law indices in their spectral energy distributions. On the other hand, 3-min oscillations in the analysed sunspot were detected to have energies significantly above the corresponding noise level. Conclusions: The correct accounting for the background frequency-dependent random processes is essential when using EMD for analysis of oscillations in the solar atmosphere. For the quiet sun region the power law index was found to increase
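
Power-law ("coloured") noise of the kind the authors calibrate EMD against is easy to synthesize by shaping white noise in the Fourier domain, and its spectral index can be estimated back by a log-log fit. A minimal sketch, not the authors' procedure:

```python
import numpy as np

rng = np.random.default_rng(4)

def colored_noise(n, alpha):
    """Noise with power spectral density S(f) ~ f^(-alpha): alpha = 0 is white,
    1 is pink, 2 is red. Built by shaping white noise in the Fourier domain."""
    f = np.fft.rfftfreq(n, d=1.0)
    f[0] = f[1]  # avoid division by zero at DC
    spectrum = np.fft.rfft(rng.normal(size=n)) * f ** (-alpha / 2.0)
    x = np.fft.irfft(spectrum, n)
    return x / x.std()

def spectral_index(x):
    """Estimate alpha as minus the slope of log power vs. log frequency."""
    f = np.fft.rfftfreq(x.size, d=1.0)[1:]
    p = np.abs(np.fft.rfft(x))[1:] ** 2
    slope, _ = np.polyfit(np.log(f), np.log(p), 1)
    return -slope

alpha_hat = spectral_index(colored_noise(2 ** 14, 2.0))
```

Such synthetic signals with known α are what allow the intrinsic modes returned by EMD to be classified as noise or genuine oscillation, as done for the 3-min sunspot oscillations above.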

  11. Spinodal decomposition of tungsten-containing phases in functional coatings obtained via high-energy implantation processes

    NASA Astrophysics Data System (ADS)

    Davydov, S. V.; Petrov, E. V.

    2017-08-01

    We have studied structural and phase transformations in tungsten-containing functional coatings on carbon steels obtained during the high-energy implantation of tungsten carbide micropowders by complex pulsed electromechanical processing and of tungsten micropowders by directed explosion energy, based on the effect of superdeep penetration of solid particles (the Usherenko effect). It has been shown that, during the thermomechanical action, intensive austenitization of the steel occurs in the deformation zone, with dissolution of the tungsten carbide powder, carbidization of the tungsten powder, and the subsequent formation of composite gradient structures as a result of the decay of supercooled austenite supersaturated with tungsten, proceeding both by the diffusion mechanism and by spinodal decomposition. Separate zones of the tungsten-containing phases of the alloy pass through a liquid-phase state and also undergo spinodal decomposition with the formation of highly disperse carbide phases of globular morphology.

  12. [Decomposition of corpses--a microbial degradation process with special reference to mummification, formation of adipocere and incompletely putrefied corpses].

    PubMed

    Schoenen, Dirk

    2013-01-01

    Decomposition of the human body is a microbial process. It is influenced by the environmental situation and depends to a high degree on the exchange of substances between the corpse and the environment. Mummification occurs at low humidity or frost; adipocere arises from lack of oxygen; incompletely putrefied corpses develop when there is no exchange of air or water between the corpse and the environment.

  13. Effect of water vapor on the thermal decomposition process of zinc hydroxide chloride and crystal growth of zinc oxide

    SciTech Connect

    Kozawa, Takahiro; Onda, Ayumu; Yanagisawa, Kazumichi; Kishi, Akira; Masuda, Yasuaki

    2011-03-15

    The thermal decomposition process of zinc hydroxide chloride (ZHC), Zn₅(OH)₈Cl₂·H₂O, prepared by a hydrothermal slow-cooling method, has been investigated by simultaneous X-ray diffractometry and differential scanning calorimetry (XRD-DSC) and thermogravimetric-differential thermal analysis (TG-DTA) in a humidity-controlled atmosphere. ZHC decomposed to ZnO through β-Zn(OH)Cl as the intermediate phase, leaving amorphous hydrated ZnCl₂. In humid N₂ with P(H₂O) = 4.5 and 10 kPa, the hydrolysis of residual ZnCl₂ was accelerated and the theoretical amount of ZnO was obtained at lower temperatures than in dry N₂, whereas in dry N₂ significant weight loss was caused by vaporization of the residual ZnCl₂. ZnO formed by calcination in a stagnant air atmosphere had the same morphology as the original ZHC crystals and consisted of c-axis-oriented column-like particle arrays. On the other hand, preferred orientation of ZnO was inhibited when calcination took place in 100% water vapor. A detailed thermal decomposition process of ZHC and the effect of water vapor on the crystal growth of ZnO are discussed. Graphical abstract: The thermal decomposition of ZHC was investigated by novel thermal analyses at three different water vapor partial pressures; in a water vapor atmosphere, the formation of ZnO was completed at lower temperatures than under dry conditions. Highlights: We examine the thermal decomposition of zinc hydroxide chloride in water vapor. Water vapor had no effect on the thermal decomposition up to 230 °C. Water vapor accelerated the decomposition of the residual ZnCl₂ in ZnO. Without water vapor, a large amount of ZnCl₂ evaporated to form c-axis-oriented ZnO.

  14. Modelling regulation of decomposition and related root/mycorrhizal processes in arctic tundra soils

    SciTech Connect

    Linkins, A.E.

    1992-01-01

    Since this was the final year of this project, principal activities were directed towards either collecting the data needed to complete existing incomplete data sets or writing manuscripts. Data sets on the Imnaviat Creek watershed basin are functionally complete, and data were finalized on cellulose mineralization and the impact of dust on soil organic carbon and phosphorus decomposition. Seven manuscripts were prepared and are briefly outlined.

  15. Modelling regulation of decomposition and related root/mycorrhizal processes in arctic tundra soils. Final report

    SciTech Connect

    Linkins, A.E.

    1992-09-01

    Since this was the final year of this project, principal activities were directed towards either collecting the data needed to complete existing incomplete data sets or writing manuscripts. Data sets on the Imnaviat Creek watershed basin are functionally complete, and data were finalized on cellulose mineralization and the impact of dust on soil organic carbon and phosphorus decomposition. Seven manuscripts were prepared and are briefly outlined.

  16. The Spatial Variability of Organic Matter and Decomposition Processes at the Marsh Scale

    NASA Astrophysics Data System (ADS)

    Yousefi Lalimi, Fateme; Silvestri, Sonia; D'Alpaos, Andrea; Roner, Marcella; Marani, Marco

    2017-04-01

    Coastal salt marshes sequester carbon as they respond to the local Rate of Relative Sea Level Rise (RRSLR) and their accretion rate is governed by inorganic soil deposition, organic soil production, and soil organic matter (SOM) decomposition. It is generally recognized that SOM plays a central role in marsh vertical dynamics, but while existing limited observations and modelling results suggest that SOM varies widely at the marsh scale, we lack systematic observations aimed at understanding how SOM production is modulated spatially as a result of biomass productivity and decomposition rate. Marsh topography and distance to the creek can affect biomass and SOM production, while a higher topographic elevation increases drainage, evapotranspiration, and aeration, thereby likely inducing higher SOM decomposition rates. Data collected in salt marshes in the northern Venice Lagoon (Italy) show that, even though plant productivity decreases in the lower areas of a marsh located farther away from channel edges, the relative contribution of organic soil production to the overall vertical soil accretion tends to remain constant as the distance from the channel increases. These observations suggest that the competing effects between biomass production and aeration/decomposition determine a contribution of organic soil to total accretion which remains approximately constant with distance from the creek, in spite of the declining plant productivity. Here we test this hypothesis using new observations of SOM and decomposition rates from marshes in North Carolina. The objective is to fill the gap in our understanding of the spatial distribution, at the marsh scale, of the organic and inorganic contributions to marsh accretion in response to RRSLR.

  17. Bilayer linearized tensor renormalization group approach for thermal tensor networks

    NASA Astrophysics Data System (ADS)

    Dong, Yong-Liang; Chen, Lei; Liu, Yun-Jing; Li, Wei

    2017-04-01

    Thermal tensor networks (TTNs) constitute an efficient and versatile representation for quantum lattice models at finite temperatures. By Trotter-Suzuki decomposition, one obtains a (D+1)-dimensional TTN for the D-dimensional quantum system and then employs efficient renormalization group (RG) contractions to obtain the thermodynamic properties with high precision. The linearized tensor renormalization group (LTRG) method, which can be used to contract TTNs efficiently and calculate the thermodynamics, is briefly reviewed and then generalized to a bilayer form. We dub this bilayer algorithm LTRG++ and explore its performance in both finite- and infinite-size systems, finding the numerical accuracy significantly improved compared to the single-layer algorithm. Moreover, we show that the LTRG++ algorithm in an infinite-size system is in essence equivalent to the transfer-matrix renormalization group method, while reformulated in a tensor network language. As an application of LTRG++, we simulate an extended fermionic Hubbard model numerically, where the phase separation phenomenon, ground-state phase diagram, as well as quantum criticality-enhanced magnetocaloric effects, are investigated.
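
    The transfer-matrix picture that the abstract relates LTRG++ to can be illustrated on the simplest possible case, a periodic 1-D Ising chain, where the 2x2 transfer matrix yields the partition function exactly. This toy sketch shows only the transfer-matrix idea, not the LTRG++ algorithm; parameter values are arbitrary.

```python
import numpy as np

def ising_partition_function(n_sites, beta, J=1.0):
    """Exact Z of a periodic 1-D Ising chain from the 2x2 transfer matrix."""
    T = np.array([[np.exp(beta * J), np.exp(-beta * J)],
                  [np.exp(-beta * J), np.exp(beta * J)]])
    return np.trace(np.linalg.matrix_power(T, n_sites))

beta, n = 0.7, 32
Z = ising_partition_function(n, beta)
# Cross-check against the transfer-matrix eigenvalues e^{bJ} +/- e^{-bJ}.
lam_plus = np.exp(beta) + np.exp(-beta)
lam_minus = np.exp(beta) - np.exp(-beta)
print(np.isclose(Z, lam_plus ** n + lam_minus ** n))  # True
```

    In the thermodynamic limit the free energy per site is set by the largest eigenvalue alone, which is the quantity a transfer-matrix RG contraction targets.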

  18. The classical model for moment tensors

    NASA Astrophysics Data System (ADS)

    Tape, Walter; Tape, Carl

    2013-12-01

    A seismic moment tensor is a description of an earthquake source, but the description is indirect. The moment tensor describes seismic radiation rather than the actual physical process that initiates the radiation. A moment tensor `model' then ties the physical process to the moment tensor. The model is not unique, and the physical process is therefore not unique. In the classical moment tensor model, an earthquake arises from slip along a planar fault, but with the slip not necessarily in the plane of the fault. The model specifies the resulting moment tensor in terms of the slip vector, the fault normal vector and the Lamé elastic parameters, assuming isotropy. We review the classical model in the context of the fundamental lune. The lune is closely related to the space of moment tensors, and it provides a setting that is conceptually natural as well as pictorial. In addition to the classical model, we consider a crack plus double-couple model (CDC model) in which a moment tensor is regarded as the sum of a crack tensor and a double couple.
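
    The classical model's prescription, for an isotropic medium, is M_ij = lam*(n.d)*delta_ij + mu*(n_i d_j + n_j d_i) up to a scalar moment, with fault normal n and slip vector d. A minimal numpy sketch (unit Lamé parameters are illustrative values, not from the paper):

```python
import numpy as np

def classical_moment_tensor(n, d, lam=1.0, mu=1.0):
    """M_ij = lam*(n.d)*delta_ij + mu*(n_i*d_j + n_j*d_i), isotropic medium
    (unit-magnitude potency; scale by the scalar seismic moment as needed)."""
    n, d = np.asarray(n, float), np.asarray(d, float)
    return lam * n.dot(d) * np.eye(3) + mu * (np.outer(n, d) + np.outer(d, n))

# Slip within the fault plane (d perpendicular to n) gives a pure double
# couple, hence a traceless (purely deviatoric) moment tensor.
M = classical_moment_tensor(n=[0.0, 0.0, 1.0], d=[1.0, 0.0, 0.0])
print(np.trace(M))  # 0.0
```

    Letting d acquire a component along n (opening or closing of the fault) adds an isotropic part, which is what moves the mechanism off the double-couple point on the lune.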

  19. Human action recognition based on point context tensor shape descriptor

    NASA Astrophysics Data System (ADS)

    Li, Jianjun; Mao, Xia; Chen, Lijiang; Wang, Lan

    2017-07-01

    Motion trajectory recognition is one of the most important means to determine the identity of a moving object. A compact and discriminative feature representation method can improve the trajectory recognition accuracy. This paper presents an efficient framework for action recognition using a three-dimensional skeleton kinematic joint model. First, we put forward a rotation-scale-translation-invariant shape descriptor based on point context (PC) and the normal vector of hypersurface to jointly characterize local motion and shape information. Meanwhile, an algorithm for extracting the key trajectory based on the confidence coefficient is proposed to reduce the randomness and computational complexity. Second, to decrease the eigenvalue decomposition time complexity, a tensor shape descriptor (TSD) based on PC that can globally capture the spatial layout and temporal order to preserve the spatial information of each frame is proposed. Then, a multilinear projection process is achieved by tensor dynamic time warping to map the TSD to a low-dimensional tensor subspace of the same size. Experimental results show that the proposed shape descriptor is effective and feasible, and the proposed approach obtains considerable performance improvement over the state-of-the-art approaches with respect to accuracy on a public action dataset.

  20. Study on the mechanism of copper-ammonia complex decomposition in struvite formation process and enhanced ammonia and copper removal.

    PubMed

    Peng, Cong; Chai, Liyuan; Tang, Chongjian; Min, Xiaobo; Song, Yuxia; Duan, Chengshan; Yu, Cheng

    2017-01-01

    Heavy metals and ammonia are difficult to remove from wastewater, as they easily combine into refractory complexes. The struvite formation method (SFM) was applied for the complex decomposition and simultaneous removal of heavy metal and ammonia. The results indicated that ammonia deprivation by SFM was the key factor leading to the decomposition of the copper-ammonia complex ion. Ammonia was separated from solution as crystalline struvite, and the copper mainly co-precipitated as copper hydroxide together with struvite. Hydrogen bonding and electrostatic attraction were considered to be the main surface interactions between struvite and copper hydroxide. Hydrogen bonding was concluded to be the key factor leading to the co-precipitation. In addition, incorporation of copper ions into the struvite crystal also occurred during the treatment process. Copyright © 2016. Published by Elsevier B.V.

  1. Effect of cooking temperature on the percentage colour formation, nitrite decomposition and sarcoplasmic protein denaturation in processed meat products.

    PubMed

    Okayama, T; Fujii, M; Yamanoue, M

    1991-01-01

    The effect of cooking temperature and time on the percentage colour formation, nitrite decomposition and denaturation of sarcoplasmic proteins in processed meat products was investigated in detail. The colour forming percentage increased with a rise in temperature of heating, especially at 50-60°C (P < 0.05). The percentage nitrite decomposition was promoted by the retention time of cooking rather than by the cooking temperature (P < 0.05). The percentage of sarcoplasmic proteins denatured was enhanced by heating temperature in the range 50-80°C (especially at 50-60°C) (P < 0.05). The relationship between the percentage colour formation and the percentage of sarcoplasmic proteins denatured is discussed. The SDS-PAGE patterns of the heat-treated samples revealed the components of the sarcoplasmic proteins which had been denatured.

  2. Image processing using proper orthogonal and dynamic mode decompositions for the study of cavitation developing on a NACA0015 foil

    NASA Astrophysics Data System (ADS)

    Prothin, Sebastien; Billard, Jean-Yves; Djeridi, Henda

    2016-10-01

    The purpose of the present study is to get a better understanding of the hydrodynamic instabilities of sheet cavities which develop along solid walls. The main objective is to highlight the spatial and temporal behavior of such a cavity when it develops on a NACA0015 foil at high Reynolds number. Experimental results show quasi-steady, periodic, bifurcation-domain and aperiodic cavity behaviors corresponding to σ/2α values of 5.75, 5, 4.3 and 3.58. Robust mathematical methods of signal post-processing (proper orthogonal decomposition and dynamic mode decomposition) were applied in order to emphasize the spatio-temporal nature of the flow. These techniques highlight the 3D effects due to re-entrant jet instabilities or to a propagating shock-wave mechanism at the origin of the shedding process of the cavitation cloud.
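
    The proper orthogonal decomposition used above reduces, for a snapshot matrix, to a singular value decomposition. A minimal numpy sketch on synthetic data (the two analytic space-time modes and the noise level are invented for illustration, not taken from the experiment):

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 200)   # "space"
t = np.linspace(0.0, 10.0, 80)   # "time"
# Snapshot matrix: two coherent space-time modes plus weak measurement noise.
snapshots = (np.outer(np.sin(2 * np.pi * x), np.cos(t))
             + 0.3 * np.outer(np.sin(4 * np.pi * x), np.sin(3 * t))
             + 0.01 * rng.standard_normal((x.size, t.size)))

# POD: the left singular vectors are the energy-ranked spatial modes,
# and the squared singular values give each mode's energy content.
U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
energy = s ** 2 / np.sum(s ** 2)
n99 = np.argmax(np.cumsum(energy) > 0.99) + 1
print(n99)  # a handful of modes carry essentially all of the energy
```

    Dynamic mode decomposition builds on the same snapshot matrix but additionally fits a linear time-advance operator, which is what attaches a frequency to each mode.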

  3. Multi-Zone Liquid Thrust Chamber Performance Code with Domain Decomposition for Parallel Processing

    NASA Technical Reports Server (NTRS)

    Navaz, Homayun K.

    2002-01-01

    -equation turbulence model, and two-phase flow. To overcome these limitations, the LTCP code is rewritten to include the multi-zone capability with domain decomposition that makes it suitable for parallel processing, i.e., enabling the code to run every zone or sub-domain on a separate processor. This can reduce the run time by a factor of 6 to 8, depending on the problem.
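
    The domain-decomposition idea described above, splitting the computation into zones and running each zone on a separate processor, can be sketched generically. This toy example uses a one-dimensional Jacobi smoothing pass as a stand-in for the real per-zone solver and is not the LTCP code; zone count and grid size are arbitrary.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def relax_zone(zone):
    """Stand-in per-zone solver: one Jacobi smoothing pass, end points fixed."""
    out = zone.copy()
    out[1:-1] = 0.5 * (zone[:-2] + zone[2:])
    return out

def solve_decomposed(field, n_zones):
    """One sweep over a 1-D domain split into ghost-padded zones, one per worker."""
    n = field.size
    bounds = np.linspace(0, n, n_zones + 1, dtype=int)
    pads = [(max(a - 1, 0), min(b + 1, n)) for a, b in zip(bounds[:-1], bounds[1:])]
    with ProcessPoolExecutor() as pool:
        parts = list(pool.map(relax_zone, [field[lo:hi] for lo, hi in pads]))
    out = np.empty_like(field)
    for (a, b), (lo, _), part in zip(zip(bounds[:-1], bounds[1:]), pads, parts):
        out[a:b] = part[a - lo:a - lo + (b - a)]  # drop ghost cells, keep interior
    return out

if __name__ == "__main__":
    f = np.sin(np.linspace(0.0, np.pi, 64))
    serial = f.copy()
    serial[1:-1] = 0.5 * (f[:-2] + f[2:])
    print(np.allclose(solve_decomposed(f, 4), serial))  # True
```

    The one-cell ghost padding lets each zone update its interior independently; in an iterative solver the ghost values would be re-exchanged between neighbouring zones after every sweep.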

  5. Atomic-batched tensor decomposed two-electron repulsion integrals

    NASA Astrophysics Data System (ADS)

    Schmitz, Gunnar; Madsen, Niels Kristian; Christiansen, Ove

    2017-04-01

    We present a new integral format for 4-index electron repulsion integrals, in which several strategies like the Resolution-of-the-Identity (RI) approximation and other more general tensor-decomposition techniques are combined with an atomic batching scheme. The 3-index RI integral tensor is divided into sub-tensors defined by atom pairs, on which we perform an accelerated decomposition to the canonical product (CP) format. In a first step, the RI integrals are decomposed to a high-rank CP-like format by repeated singular value decompositions followed by a rank reduction, which uses a Tucker decomposition as an intermediate step to lower the prefactor of the algorithm. After decomposing the RI sub-tensors (within the Coulomb metric), they can be reassembled to the full decomposed tensor (RC approach) or the atomic batched format can be maintained (ABC approach). In the first case, the integrals are very similar to the well-known tensor hypercontraction integral format, which has gained some attention in recent years since it allows for quartic scaling implementations of MP2 and some coupled cluster methods. On the MP2 level, the RC and ABC approaches are compared in terms of efficiency and storage requirements. Furthermore, the overall accuracy of this approach is assessed. Initial test calculations show good accuracy and indicate that the approach is not limited to small systems.
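
    A generic fit to the canonical product (CP) format mentioned above can be sketched with plain alternating least squares. This is the textbook CP-ALS scheme, not the paper's accelerated SVD/Tucker-based decomposition; sizes, rank, and seeds are illustrative.

```python
import numpy as np

def unfold(T, mode):
    """Matricize tensor T along the given mode (C-ordering)."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(U, V):
    """Column-wise Kronecker product: (I*J, R) from (I, R) and (J, R)."""
    return np.einsum('ir,jr->ijr', U, V).reshape(-1, U.shape[1])

def cp_als(T, rank, n_iter=200, seed=0):
    """Fit T ~ sum_r a_r (x) b_r (x) c_r by alternating least squares."""
    rng = np.random.default_rng(seed)
    A, B, C = (rng.standard_normal((s, rank)) for s in T.shape)
    for _ in range(n_iter):
        A = np.linalg.lstsq(khatri_rao(B, C), unfold(T, 0).T, rcond=None)[0].T
        B = np.linalg.lstsq(khatri_rao(A, C), unfold(T, 1).T, rcond=None)[0].T
        C = np.linalg.lstsq(khatri_rao(A, B), unfold(T, 2).T, rcond=None)[0].T
    return A, B, C

# Recover an exactly rank-2 random tensor.
rng = np.random.default_rng(1)
A0, B0, C0 = (rng.standard_normal((s, 2)) for s in (4, 5, 6))
T = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
A, B, C = cp_als(T, rank=2)
err = np.linalg.norm(np.einsum('ir,jr,kr->ijk', A, B, C) - T) / np.linalg.norm(T)
print(f"relative error: {err:.1e}")
```

    Each subproblem here is linear in one factor with the other two fixed, which is why the SVD-based initialization the paper uses can substantially cut the number of such sweeps.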

  6. Invitation to Random Tensors

    NASA Astrophysics Data System (ADS)

    Gurau, Razvan

    2016-09-01

    This article is a preface to the SIGMA special issue "Tensor Models, Formalism and Applications", http://www.emis.de/journals/SIGMA/Tensor_Models.html. The issue is a collection of eight excellent, up-to-date reviews on random tensor models. The reviews combine pedagogical introductions meant for a general audience with presentations of the most recent developments in the field. This preface aims to give a condensed panoramic overview of random tensors as the natural generalization of random matrices to higher dimensions.

  7. The processing of rotor startup signals based on empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Gai, Guanghong

    2006-01-01

    In this paper, we applied the empirical mode decomposition method to analyse rotor startup signals, which are non-stationary and contain a lot of additional information beyond that found in stationary running signals. The methodology developed in this paper decomposes the original startup signals into intrinsic oscillation modes, or intrinsic mode functions (IMFs). Then, according to the characteristics of the rotor system, we obtained the rotating-frequency components for Bode diagram plotting from the corresponding IMFs. The method can obtain a precise critical speed without complex hardware support. The low-frequency components were extracted from these IMFs in the vertical and horizontal directions. Utilising these components, we constructed a drift locus of the rotor revolution centre, which provides significant information for fault diagnosis of rotating machinery. Also, we showed that the empirical mode decomposition method is more precise than a Fourier filter for the extraction of low-frequency components.

  8. Study of Fundamental Chemical Processes in Explosive Decomposition by Laser-Powered Homogeneous Pyrolysis.

    DTIC Science & Technology

    1981-11-12

    nitrotoluenes actually represent surface-catalyzed reactions. Preliminary qualitative results for pyrolysis of ortho-nitrotoluene in the absence of hot... quantitative validity. LPHP studies of azoisopropane decomposition, chosen as a radical-forming test reaction, show the accepted literature parameters to... systematic errors or by rate control exerted by secondary reactions. (2) Support from these VLPP studies for the conclusion that some previous kinetic

  9. A linearly approximated iterative Gaussian decomposition method for waveform LiDAR processing

    NASA Astrophysics Data System (ADS)

    Mountrakis, Giorgos; Li, Yuguang

    2017-07-01

    Full-waveform LiDAR (FWL) decomposition results often act as the basis for key LiDAR-derived products, for example, canopy height, biomass and carbon pool estimation, leaf area index calculation and under-canopy detection. To date, the prevailing method for FWL product creation is the Gaussian Decomposition (GD) based on a non-linear Levenberg-Marquardt (LM) optimization for Gaussian node parameter estimation. GD follows a "greedy" approach that may leave weak nodes undetected, merge multiple nodes into one or separate a noisy single node into multiple ones. In this manuscript, we propose an alternative decomposition method called Linearly Approximated Iterative Gaussian Decomposition (LAIGD). The novelty of the LAIGD method is that it follows a multi-step "slow-and-steady" iterative structure, where new Gaussian nodes are quickly discovered and adjusted using a linear fitting technique before they are forwarded for a non-linear optimization. Two experiments were conducted, one using real full-waveform data from NASA's land, vegetation, and ice sensor (LVIS) and another using synthetic data containing different numbers of nodes and degrees of overlap to assess performance at variable signal complexity. LVIS data revealed considerable improvements in RMSE (44.8% lower), RSE (56.3% lower) and rRMSE (74.3% lower) values compared to the benchmark GD method. These results were further confirmed with the synthetic data. Furthermore, the proposed multi-step method cuts execution times in half, an important consideration as there are plans for global coverage with the upcoming Global Ecosystem Dynamics Investigation LiDAR sensor on the International Space Station.
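
    The "linear fitting before non-linear optimization" idea is easy to illustrate for a single Gaussian node: taking the logarithm makes the model quadratic in time, so an ordinary polynomial fit yields closed-form starting parameters. This is only a generic sketch of that linearization on a noise-free synthetic waveform, not the LAIGD implementation.

```python
import numpy as np

def gaussian(t, amp, mu, sigma):
    return amp * np.exp(-0.5 * ((t - mu) / sigma) ** 2)

def linear_gaussian_estimate(t, y):
    """Gaussian parameters from a linear (polynomial) fit to log(y).

    log y = log(amp) - (t - mu)^2 / (2 sigma^2) is quadratic in t, so an
    ordinary least-squares polyfit yields closed-form starting values."""
    mask = y > 1e-12 * y.max()          # avoid log of near-zero samples
    c2, c1, c0 = np.polyfit(t[mask], np.log(y[mask]), 2)
    sigma = np.sqrt(-1.0 / (2.0 * c2))
    mu = c1 * sigma ** 2
    amp = np.exp(c0 + mu ** 2 / (2.0 * sigma ** 2))
    return amp, mu, sigma

t = np.linspace(0.0, 10.0, 200)
y = gaussian(t, amp=3.0, mu=4.2, sigma=0.8)
print([round(v, 2) for v in linear_gaussian_estimate(t, y)])  # [3.0, 4.2, 0.8]
```

    On real, noisy, multi-node waveforms such linear estimates only seed the subsequent non-linear (e.g. Levenberg-Marquardt) refinement, which is the division of labour the abstract describes.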

  10. In Vivo Generalized Diffusion Tensor Imaging (GDTI) Using Higher-Order Tensors (HOT)

    PubMed Central

    Liu, Chunlei; Mang, Sarah C.; Moseley, Michael E.

    2009-01-01

    Generalized diffusion tensor imaging (GDTI) using higher order tensor statistics (HOT) generalizes the technique of diffusion tensor imaging (DTI) by including the effect of non-Gaussian diffusion on the signal of magnetic resonance imaging (MRI). In GDTI-HOT, the effect of non-Gaussian diffusion is characterized by higher order tensor statistics (i.e. the cumulant tensors or the moment tensors) such as the covariance matrix (the second-order cumulant tensor), the skewness tensor (the third-order cumulant tensor) and the kurtosis tensor (the fourth-order cumulant tensor) etc. Previously, Monte Carlo simulations have been applied to verify the validity of this technique in reconstructing complicated fiber structures. However, no in vivo implementation of GDTI-HOT has been reported. The primary goal of this study is to establish GDTI-HOT as a feasible in vivo technique for imaging non-Gaussian diffusion. We show that probability distribution function (PDF) of the molecular diffusion process can be measured in vivo with GDTI-HOT and be visualized with 3D glyphs. By comparing GDTI-HOT to fiber structures that are revealed by the highest resolution DWI possible in vivo, we show that the GDTI-HOT can accurately predict multiple fiber orientations within one white matter voxel. Furthermore, through bootstrap analysis we demonstrate that in vivo measurement of HOT elements is reproducible with a small statistical variation that is similar to that of DTI. PMID:19953513

  11. Genotypic diversity of an invasive plant species promotes litter decomposition and associated processes.

    PubMed

    Wang, Xiao-Yan; Miao, Yuan; Yu, Shuo; Chen, Xiao-Yong; Schmid, Bernhard

    2014-03-01

    Following studies that showed negative effects of species loss on ecosystem functioning, newer studies have started to investigate if similar consequences could result from reductions of genetic diversity within species. We tested the influence of genotypic richness and dissimilarity (plots containing one, three, six or 12 genotypes) in stands of the invasive plant Solidago canadensis in China on the decomposition of its leaf litter and associated soil animals over five monthly time intervals. We found that the logarithm of genotypic richness was positively linearly related to mass loss of C, N and P from the litter and to richness and abundance of soil animals on the litter samples. The mixing proportion of litter from two sites, but not genotypic dissimilarity of mixtures, had additional effects on measured variables. The litter diversity effects on soil animals were particularly strong under the most stressful conditions of hot weather in July: at this time richness and abundance of soil animals were higher in 12-genotype litter mixtures than even in the highest corresponding one-genotype litter. The litter diversity effects on decomposition were in part mediated by soil animals: the abundance of Acarina, when used as covariate in the analysis, fully explained the litter diversity effects on mass loss of N and P. Overall, our study shows that high genotypic richness of S. canadensis leaf litter positively affects richness and abundance of soil animals, which in turn accelerate litter decomposition and P release from litter.

  12. Monograph On Tensor Notations

    NASA Technical Reports Server (NTRS)

    Sirlin, Samuel W.

    1993-01-01

    Eight-page report describes systems of notation used most commonly to represent tensors of various ranks, with emphasis on tensors in Cartesian coordinate systems. Serves as introductory or refresher text for scientists, engineers, and others familiar with basic concepts of coordinate systems, vectors, and partial derivatives. Indicial tensor, vector, dyadic, and matrix notations, and relationships among them described.

  13. Comparative analysis of allelopathic effects produced by four forestry species during decomposition process in their soils in Galicia (NW Spain).

    PubMed

    Souto, X C; Gonzales, L; Reigosa, M J

    1994-11-01

    The development of toxicity produced by the plant litter of four forest species (Quercus robur L., Pinus radiata D. Don., Eucalyptus globulus Labill. and Acacia melanoxylon R. Br.) was studied during the decomposition process in each of the soils where the species were found. The toxicity of the extracts was measured by the effects produced on germination and growth of Lactuca sativa L. var. Great Lakes seeds. The phenolic composition of the leaves of the four species was also studied using high-performance liquid chromatographic (HPLC) analysis. It was verified that toxicity was clearly reflected in the first stages of leaf decomposition in E. globulus and A. melanoxylon, due to phytotoxic compounds liberated by their litter. After half a year of decomposition, inhibition due to the plant material was no longer observed, but the soils associated with these two species appeared to be responsible for the toxic effects. On the other hand, the phenolic profiles are quite different among the four species, with greater complexity observed in the two toxic species (E. globulus and A. melanoxylon).

  14. Block term decomposition for modelling epileptic seizures

    NASA Astrophysics Data System (ADS)

    Hunyadi, Borbála; Camps, Daan; Sorber, Laurent; Paesschen, Wim Van; Vos, Maarten De; Huffel, Sabine Van; Lathauwer, Lieven De

    2014-12-01

    Recordings of neural activity, such as EEG, are an inherent mixture of different ongoing brain processes as well as artefacts and are typically characterised by low signal-to-noise ratio. Moreover, EEG datasets are often inherently multidimensional, comprising information in time, along different channels, subjects, trials, etc. Additional information may be conveyed by expanding the signal into even more dimensions, e.g. incorporating spectral features by applying a wavelet transform. The underlying sources might show differences in each of these modes. Therefore, tensor-based blind source separation techniques, which can extract the sources of interest from such multiway arrays while simultaneously exploiting the signal characteristics in all dimensions, have gained increasing interest. Canonical polyadic decomposition (CPD) has been successfully used to extract epileptic seizure activity from wavelet-transformed EEG data (Bioinformatics 23(13):i10-i18, 2007; NeuroImage 37:844-854, 2007), where each source is described by a rank-1 tensor, i.e. by the combination of one particular temporal, spectral and spatial signature. However, in certain scenarios, where the seizure pattern is nonstationary, such a trilinear signal model is insufficient. Here, we present the application of a recently introduced technique, called block term decomposition (BTD), to separate EEG tensors into rank-(L_r, L_r, 1) terms, allowing us to model more variability in the data than would be possible with CPD. In a simulation study, we investigate the robustness of BTD against noise and different choices of model parameters. Furthermore, we show various real EEG recordings where BTD outperforms CPD in capturing complex seizure characteristics.

  15. Geodesic-loxodromes for diffusion tensor interpolation and difference measurement.

    PubMed

    Kindlmann, Gordon; Estépar, Raúl San José; Niethammer, Marc; Haker, Steven; Westin, Carl-Fredrik

    2007-01-01

    In algorithms for processing diffusion tensor images, two common ingredients are interpolating tensors, and measuring the distance between them. We propose a new class of interpolation paths for tensors, termed geodesic-loxodromes, which explicitly preserve clinically important tensor attributes, such as mean diffusivity or fractional anisotropy, while using basic differential geometry to interpolate tensor orientation. This contrasts with previous Riemannian and Log-Euclidean methods that preserve the determinant. Path integrals of tangents of geodesic-loxodromes generate novel measures of over-all difference between two tensors, and of difference in shape and in orientation.
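
    Two of the clinically important tensor attributes named above, mean diffusivity and fractional anisotropy, have standard closed forms. A numpy sketch with illustrative tensor values (the diffusivities below are typical orders of magnitude, not data from the paper):

```python
import numpy as np

def mean_diffusivity(D):
    """MD: the average of the three eigenvalues, i.e. trace(D)/3."""
    return np.trace(D) / 3.0

def fractional_anisotropy(D):
    """FA = sqrt(3/2) * ||D - MD*I||_F / ||D||_F, in [0, 1]."""
    md = mean_diffusivity(D)
    return np.sqrt(1.5) * np.linalg.norm(D - md * np.eye(3)) / np.linalg.norm(D)

iso = np.eye(3) * 2e-3                  # isotropic tensor: FA = 0
stick = np.diag([1.7e-3, 2e-4, 2e-4])   # prolate ("stick-like") tensor
print(round(fractional_anisotropy(iso), 2),
      round(fractional_anisotropy(stick), 2))  # 0.0 0.87
```

    A geodesic-loxodrome path between two tensors is constructed so that scalar invariants like these vary monotonically (or stay fixed) along the interpolation, rather than being a by-product of it.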

  16. Moment tensor mechanisms from Iberia

    NASA Astrophysics Data System (ADS)

    Stich, D.; Morales, J.

    2003-12-01

    New moment tensor solutions are presented for small and moderate earthquakes in Spain, Portugal and the westernmost Mediterranean Sea for the period from 2002 to present. Moment tensor inversion, to estimate focal mechanism, depth and magnitude, is applied at the Instituto Andaluz de Geofísica (IAG) in a routine manner to regional earthquakes with local magnitude greater than or equal to 3.5. Recent improvements of broadband network coverage contribute to relatively high rates of success: since the beginning of 2002, we could obtain valuable solutions, in the sense that moment tensor synthetic waveforms fit adequately the main characteristics of the observed seismograms, for about 50% of all events of the initial selection. Results are available on-line at http://www.ugr.es/~iag/tensor/. To date, the IAG moment tensor catalogue contains 90 solutions since 1984 and gives a relatively detailed picture of seismotectonics in the Ibero-Maghrebian region, covering also low-seismicity areas like intraplate Iberia. Solutions are concentrated in southern Spain and the Alboran Sea along the diffuse African-Eurasian plate boundary. These solutions reveal characteristics of the transition between the reverse faulting regime in Algeria and predominantly normal faulting on the Iberian Peninsula. Further, we discuss the available mechanisms for intermediate-depth events, related to subcrustal tectonic processes at the plate contact.

  17. Decomposition and biodegradability enhancement of textile wastewater using a combination of electron beam irradiation and activated sludge process.

    PubMed

    Mohd Nasir, Norlirubayah; Teo Ming, Ting; Ahmadun, Fakhru'l-Razi; Sobri, Shafreeza

    2010-01-01

    This research studied the decomposition and biodegradability enhancement of textile wastewater using a combination of electron beam irradiation and an activated sludge process. The purposes of this research are to remove pollutants through decomposition and to enhance the biodegradability of textile wastewater. The wastewater is treated using electron beam irradiation as a pre-treatment before undergoing an activated sludge process. For non-irradiated wastewater, COD removal was between 70% and 79% after the activated sludge process. The COD removal efficiency increased to 94% after irradiation of the treated effluent at a dose of 50 kGy. Meanwhile, the BOD(5) removal efficiencies of non-irradiated and irradiated textile wastewater were between 80 and 87%, and 82 and 99.2%, respectively. The maximum BOD(5) removal efficiency, 99.2%, was achieved at day 1 (HRT 5 days) of the process for irradiated textile wastewater. The biodegradability ratio of non-irradiated wastewater was between 0.34 and 0.61, while that of irradiated wastewater increased to between 0.87 and 0.96. The biodegradability enhancement of textile wastewater increases with increasing dose. Therefore, electron beam irradiation holds great promise for removing pollutants and enhancing the biodegradability of textile wastewater.

  18. Decomposition of intact chicken feathers by a thermophile in combination with an acidulocomposting garbage-treatment process.

    PubMed

    Shigeri, Yasushi; Matsui, Tatsunobu; Watanabe, Kunihiko

    2009-11-01

    In order to develop a practical method for the decomposition of intact chicken feathers, a moderate thermophile strain, Meiothermus ruber H328, having strong keratinolytic activity, was used in a bio-type garbage-treatment machine working with an acidulocomposting process. The addition of strain H328 cells (15 g) combined with acidulocomposting in the garbage machine resulted in 70% degradation of intact chicken feathers (30 g) within 14 d. This degradation efficiency is comparable to a previous result employing the strain as a single bacterium in flask culture, and it indicates that strain H328 can promote intact feather degradation activity in a garbage machine currently on the market.

  19. Tensor hypercontraction. II. Least-squares renormalization

    NASA Astrophysics Data System (ADS)

    Parrish, Robert M.; Hohenstein, Edward G.; Martínez, Todd J.; Sherrill, C. David

    2012-12-01

    The least-squares tensor hypercontraction (LS-THC) representation for the electron repulsion integral (ERI) tensor is presented. Recently, we developed the generic tensor hypercontraction (THC) ansatz, which represents the fourth-order ERI tensor as a product of five second-order tensors [E. G. Hohenstein, R. M. Parrish, and T. J. Martínez, J. Chem. Phys. 137, 044103 (2012)], 10.1063/1.4732310. Our initial algorithm for the generation of the THC factors involved a two-sided invocation of overlap-metric density fitting, followed by a PARAFAC decomposition, and is denoted PARAFAC tensor hypercontraction (PF-THC). LS-THC supersedes PF-THC by producing the THC factors through a least-squares renormalization of a spatial quadrature over the otherwise singular 1/r12 operator. Remarkably, an analytical and simple formula for the LS-THC factors exists. Using this formula, the factors may be generated with O(N^5) effort if exact integrals are decomposed, or O(N^4) effort if the decomposition is applied to density-fitted integrals, using any choice of density fitting metric. The accuracy of LS-THC is explored for a range of systems using both conventional and density-fitted integrals in the context of MP2. The grid fitting error is found to be negligible even for extremely sparse spatial quadrature grids. For the case of density-fitted integrals, the additional error incurred by the grid fitting step is generally markedly smaller than the underlying Coulomb-metric density fitting error. The present results, coupled with our previously published factorizations of MP2 and MP3, provide an efficient, robust O(N^4) approach to both methods. Moreover, LS-THC is generally applicable to many other methods in quantum chemistry.
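    The closed-form least-squares step can be illustrated on a toy, exactly THC-representable tensor: build a pair-collocation matrix E whose rows are products of orbital values on grid points, then recover the core factor as Z = E⁺ V (E⁺)ᵀ. This is a minimal sketch with random data and made-up toy sizes, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
n, ng = 4, 10                      # orbitals, grid points (toy sizes)
X = rng.standard_normal((n, ng))   # collocation matrix: orbital values on the grid

# Pair collocation E[(p,q), g] = X[p,g] * X[q,g]
E = np.einsum('pg,qg->pqg', X, X).reshape(n * n, ng)

# A synthetic ERI-like matrix that is exactly THC-representable: V = E Z E^T
Z_true = rng.standard_normal((ng, ng))
Z_true = 0.5 * (Z_true + Z_true.T)   # symmetrize, like a physical (pq|rs)
V = E @ Z_true @ E.T                 # matricized two-electron integrals

# Least-squares renormalization in closed form: Z = E^+ V (E^+)^T
Epinv = np.linalg.pinv(E)
Z = Epinv @ V @ Epinv.T

rel_err = np.linalg.norm(E @ Z @ E.T - V) / np.linalg.norm(V)
```

    For an exactly representable V and a full-column-rank E, the fit is exact (rel_err at machine precision); for real integrals the same formula gives the least-squares optimum over the chosen grid.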

  20. Solar radiation influence on the decomposition process of diclofenac in surface waters.

    PubMed

    Bartels, Peter; von Tümpling, Wolf

    2007-03-01

    Diclofenac can be detected in the surface water of many rivers with human impacts worldwide. The observed decrease of the diclofenac concentration in waters and the formation of its photochemical transformation products under natural irradiation during one to 16 days are explained in this article. In semi-natural laboratory tests and in a field experiment it could be shown that sunlight stimulates the decomposition of diclofenac in surface waters. During one day of intensive solar radiation in a central European summer, up to 83% of the diclofenac in the surface layer of the water (0 to 5 cm) decomposed, as determined in laboratory exposure experiments. After two weeks in a field experiment, diclofenac was no longer detectable in the water surface layer (limit of quantification: 5 ng/L). At a water depth of 50 cm, 96% of the initial concentration was degraded within two weeks, while at 100 cm depth 2/3 of the initial diclofenac concentration remained. With the decomposition, stable and meta-stable photolysis products were formed and observed by UV detection. In addition, the chemical structures of these products were determined. Three transformation products not previously described in the literature were identified and quantified with GC-MS.

  1. Toluene decomposition performance and NOx by-product formation during a DBD-catalyst process.

    PubMed

    Guo, Yufang; Liao, Xiaobin; Fu, Mingli; Huang, Haibao; Ye, Daiqi

    2015-02-01

    Characteristics of toluene decomposition and formation of nitrogen oxide (NOx) by-products were investigated in a dielectric barrier discharge (DBD) reactor with/without catalyst at room temperature and atmospheric pressure. Four kinds of metal oxides, i.e., manganese oxide (MnOx), iron oxide (FeOx), cobalt oxide (CoOx) and copper oxide (CuO), supported on Al2O3/nickel foam, were used as catalysts. It was found that introducing catalysts could improve toluene removal efficiency, promote decomposition of by-product ozone and enhance CO2 selectivity. In addition, NOx formation was suppressed by decreasing the specific energy density (SED), by increasing the humidity, gas flow rate or toluene concentration, or by introducing a catalyst. Among the four kinds of catalysts, the CuO catalyst showed the best performance in NOx suppression. The MnOx catalyst exhibited the lowest concentration of O3 and the highest CO2 selectivity, but also the highest concentration of NOx. A possible pathway for NOx production in DBD is discussed. The contributions of oxygen active species and hydroxyl radicals are dominant in NOx suppression.

  2. Catalytic conversion of 1,2-dichlorobenzene using V2O5/TiO2 catalysts by a thermal decomposition process.

    PubMed

    Chin, Sungmin; Jurng, Jongsoo; Lee, Jae-Heon; Moon, Seung-Jae

    2009-05-01

    This study examined the catalytic oxidation of 1,2-dichlorobenzene on V(2)O(5)/TiO(2) nanoparticles. The V(2)O(5)/TiO(2) nanoparticles were synthesized by the thermal decomposition of vanadium oxytripropoxide and titanium tetraisopropoxide. The effects of the synthesis conditions, such as the synthesis temperature and precursor heating temperature, were investigated. The specific surface areas of the V(2)O(5)/TiO(2) nanoparticles increased with increasing synthesis temperature and decreasing precursor heating temperature. The catalytic oxidation rates of the V(2)O(5)/TiO(2) catalyst formed by the thermal decomposition process were 46% and 95% at catalytic reaction temperatures of 150 and 200 degrees C, respectively. It was concluded that V(2)O(5)/TiO(2) catalysts synthesized by a thermal decomposition process show good performance for 1,2-DCB decomposition at lower temperatures.

  3. 3D reconstruction of tensors and vectors

    SciTech Connect

    Defrise, Michel; Gullberg, Grant T.

    2005-02-17

    Here we have developed formulations for the reconstruction of 3D tensor fields from planar (Radon) and line-integral (X-ray) projections of 3D vector and tensor fields. Much of the motivation for this work is the potential application of MRI to perform diffusion tensor tomography. The goal is to develop a theory for the reconstruction from both Radon planar and X-ray or line-integral projections because of the flexibility of MRI to obtain both of these types of projections in 3D. The development presented here for the linear tensor tomography problem provides insight into the structure of the nonlinear MRI diffusion tensor inverse problem. A particular application of tensor imaging in MRI is cardiac diffusion tensor tomography for determining in vivo cardiac fiber structure. One difficulty in the cardiac application is the motion of the heart. This presents a need for developing future theory for tensor tomography in a motion field, which means developing a better understanding of the MRI signal for diffusion processes in deforming media. The techniques developed may allow the application of MRI tensor tomography to the study of the structure of fiber tracts in the brain, atherosclerotic plaque, and the spine, in addition to fiber structure in the heart. However, the relations presented are also applicable to other fields in medical imaging, such as diffraction tomography using ultrasound. The mathematics presented can also be extended to the exponential Radon transform of tensor fields and to other geometric acquisitions such as cone-beam tomography of tensor fields.

  4. Seismically Inferred Rupture Process of the 2011 Tohoku-Oki Earthquake by Using Data-Validated 3D and 2.5D Green's Tensor Waveforms

    NASA Astrophysics Data System (ADS)

    Okamoto, T.; Takenaka, H.; Hara, T.; Nakamura, T.; Aoki, T.

    2014-12-01

    We analyze the "seismic" rupture process of the March 11, 2011 Tohoku-Oki earthquake (GCMT Mw9.1) by using a non-linear multi-time-window waveform inversion method. We incorporate the effect of the near-source laterally heterogeneous structure on the synthetic Green's tensor waveforms; otherwise the analysis may result in erroneous solutions [1]. To increase the resolution we use teleseismic and strong-motion seismograms jointly, because the one-sided distribution of strong-motion stations may cause reduced resolution near the trench axis [2]. We use a 2.5D FDM [3] for teleseismic P-waves and a full 3D FDM that incorporates topography, the oceanic water layer, 3D heterogeneity and attenuation for strong motions [4]. We apply multi-GPU acceleration using the TSUBAME supercomputer at the Tokyo Institute of Technology [5]. We "validated" the Green's tensor waveforms with a point-source moment tensor inversion analysis for a small (Mw5.8) shallow event, confirming that the observed waveforms are well reproduced by the synthetics. The slip distribution inferred using the 2.5D and 3D Green's functions has large slips (max. 37 m) near the hypocenter and small slips near the trench (figure). An isolated slip region is also identified close to Fukushima prefecture. These features are similar to those obtained in our preliminary study [4]. The landward large slips and trenchward small slips have also been reported by [2]. It is remarkable that we confirmed these features by using data-validated Green's functions. On the other hand, very large slips are inferred close to the trench when we apply "1D" Green's functions that do not incorporate the lateral heterogeneity. Our result suggests that the trenchward large deformation that caused the large tsunamis did not radiate strong seismic waves. Very slow slips (e.g., the tsunami earthquake), delayed slips and anelastic deformation are among the candidate physical processes for this deformation.
    [1] Okamoto and Takenaka, EPS, 61, e17-e20, 2009

  5. A thready affair: linking fungal diversity and community dynamics to terrestrial decomposition processes.

    PubMed

    van der Wal, Annemieke; Geydan, Thomas D; Kuyper, Thomas W; de Boer, Wietse

    2013-07-01

    Filamentous fungi are critical to the decomposition of terrestrial organic matter and, consequently, to the global carbon cycle. In particular, their contribution to the degradation of recalcitrant lignocellulose complexes has been widely studied. In this review, we focus on the functioning of terrestrial fungal decomposers and examine the factors that affect their activities and community dynamics. In relation to this, impacts of global warming and increased N deposition are discussed. We also address the contribution of fungal decomposer studies to the development of general community ecological concepts such as diversity-functioning relationships, succession, priority effects and home-field advantage. Finally, we indicate several research directions that will lead to a more complete understanding of the ecological roles of terrestrial decomposer fungi, such as their importance in the turnover of rhizodeposits, the consequences of interactions with other organisms, and niche differentiation.

  6. Species-specific effects of elevated ozone on wetland plants and decomposition processes.

    PubMed

    Williamson, Jennifer; Mills, Gina; Freeman, Chris

    2010-05-01

    Seven species from two contrasting wetlands, an upland bog and a lowland rich fen in North Wales, UK, were exposed to elevated ozone (150 ppb for 5 days and 20 ppb for 2 days per week) or low ozone (20 ppb) for four weeks in solardomes. The rich fen species were: Molinia caerulea, Juncus subnodulosus, Potentilla erecta and Hydrocotyle vulgaris and the bog species were: Carex echinata, Potentilla erecta and Festuca rubra. Senescence significantly increased under elevated ozone in all seven species but only Molinia caerulea showed a reduction in biomass under elevated ozone. Decomposition rates of plants exposed to elevated ozone, as measured by carbon dioxide efflux from dried plant material inoculated with peat slurry, increased for Potentilla erecta with higher hydrolytic enzyme activities. In contrast, a decrease in enzyme activities and a non-significant decrease in carbon dioxide efflux occurred in the grasses, sedge and rush species.

  7. Process Versus Product in Social Learning: Comparative Diffusion Tensor Imaging of Neural Systems for Action Execution–Observation Matching in Macaques, Chimpanzees, and Humans

    PubMed Central

    Hecht, Erin E.; Gutman, David A.; Preuss, Todd M.; Sanchez, Mar M.; Parr, Lisa A.; Rilling, James K.

    2013-01-01

    Social learning varies among primate species. Macaques only copy the product of observed actions, or emulate, while humans and chimpanzees also copy the process, or imitate. In humans, imitation is linked to the mirror system. Here we compare mirror system connectivity across these species using diffusion tensor imaging. In macaques and chimpanzees, the preponderance of this circuitry consists of frontal–temporal connections via the extreme/external capsules. In contrast, humans have more substantial temporal–parietal and frontal–parietal connections via the middle/inferior longitudinal fasciculi and the third branch of the superior longitudinal fasciculus. In chimpanzees and humans, but not in macaques, this circuitry includes connections with inferior temporal cortex. In humans alone, connections with superior parietal cortex were also detected. We suggest a model linking species differences in mirror system connectivity and responsivity with species differences in behavior, including adaptations for imitation and social learning of tool use. PMID:22539611

  8. Randomized interpolative decomposition of separated representations

    NASA Astrophysics Data System (ADS)

    Biagioni, David J.; Beylkin, Daniel; Beylkin, Gregory

    2015-01-01

    We introduce an algorithm to compute tensor interpolative decomposition (dubbed CTD-ID) for the reduction of the separation rank of Canonical Tensor Decompositions (CTDs). Tensor ID selects, for a user-defined accuracy ɛ, a near optimal subset of terms of a CTD to represent the remaining terms via a linear combination of the selected terms. CTD-ID can be used as an alternative to or in combination with the Alternating Least Squares (ALS) algorithm. We present examples of its use within a convergent iteration to compute inverse operators in high dimensions. We also briefly discuss the spectral norm as a computational alternative to the Frobenius norm in estimating approximation errors of tensor ID. We reduce the problem of finding tensor IDs to that of constructing interpolative decompositions of certain matrices. These matrices are generated via randomized projection of the terms of the given tensor. We provide cost estimates and several examples of the new approach to the reduction of separation rank.
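    The randomized step can be sketched as follows, under illustrative assumptions (toy sizes, terms made redundant by scalar duplication; not the authors' code): flatten each rank-1 term of the CTD into a column, project the columns with a random Gaussian matrix, and run a pivoted QR on the projected matrix to select a skeleton subset of terms whose linear combinations reproduce the rest.

```python
import numpy as np
from scipy.linalg import qr

rng = np.random.default_rng(1)
dims = (6, 7, 8)
r_basis, r = 3, 12   # true separation rank vs. redundant number of terms

# Toy CTD: 12 rank-1 terms that are scalar multiples of only 3 distinct terms
A = rng.standard_normal((dims[0], r_basis))
B = rng.standard_normal((dims[1], r_basis))
C = rng.standard_normal((dims[2], r_basis))
terms = []
for l in range(r):
    j, s = l % r_basis, rng.standard_normal()
    terms.append(s * np.kron(A[:, j], np.kron(B[:, j], C[:, j])))
G = np.column_stack(terms)   # flattened rank-1 terms, one per column

# Randomized projection of the terms, then an interpolative decomposition
# of the small projected matrix via pivoted QR
Y = rng.standard_normal((2 * r_basis, G.shape[0])) @ G
_, R, piv = qr(Y, pivoting=True)
k = int(np.sum(np.abs(np.diag(R)) > 1e-10 * np.abs(R[0, 0])))  # numerical rank
sel = piv[:k]                                  # skeleton terms kept in the reduced CTD
T, *_ = np.linalg.lstsq(Y[:, sel], Y, rcond=None)

# The skeleton terms plus coefficients T reproduce the full set of terms
err = np.linalg.norm(G - G[:, sel] @ T) / np.linalg.norm(G)
```

    Because the projection only needs the small matrix Y, the selection never forms the full tensor; here the toy construction makes the reduction exact (err at machine precision), whereas in general the user-defined accuracy ɛ controls the truncation.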

  9. Multiple seismogenic processes for high-frequency earthquakes at Katmai National Park, Alaska: Evidence from stress tensor inversions of fault-plane solutions

    USGS Publications Warehouse

    Moran, S.C.

    2003-01-01

    The volcanological significance of seismicity within Katmai National Park has been debated since the first seismograph was installed in 1963, in part because Katmai seismicity consists almost entirely of high-frequency earthquakes that can be caused by a wide range of processes. I investigate this issue by determining 140 well-constrained first-motion fault-plane solutions for shallow (depth < 9 km) earthquakes occurring between 1995 and 2001 and inverting these solutions for the stress tensor in different regions within the park. Earthquakes removed by several kilometers from the volcanic axis occur in a stress field characterized by horizontally oriented σ1 and σ3 axes, with σ1 rotated slightly (12°) relative to the NUVEL-1A subduction vector, indicating that these earthquakes are occurring in response to regional tectonic forces. On the other hand, stress tensors for earthquake clusters beneath several Katmai volcanoes have vertically oriented σ1 axes, indicating that these events are occurring in response to local, not regional, processes. At Martin-Mageik, vertically oriented σ1 is most consistent with failure under edifice loading conditions in conjunction with localized pore-pressure increases associated with hydrothermal circulation cells. At Trident-Novarupta, it is consistent with a number of possible models, including occurrence along fractures formed during the 1912 eruption that now serve as horizontal conduits for migrating fluids and/or volatiles from nearby degassing and cooling magma bodies. At Mount Katmai, it is most consistent with continued seismicity along ring-fracture systems created in the 1912 eruption, perhaps enhanced by circulating hydrothermal fluids and/or seepage from the caldera-filling lake.

  10. Diffusion tensor imaging.

    PubMed

    Jones, Derek K; Leemans, Alexander

    2011-01-01

    Diffusion tensor MRI (DT-MRI) is the only non-invasive method for characterising the microstructural organization of tissue in vivo. Generating parametric maps that help to visualise different aspects of the tissue microstructure (mean diffusivity, tissue anisotropy and dominant fibre orientation) involves a number of steps from deciding on the optimal acquisition parameters on the scanner, collecting the data, pre-processing the data and fitting the model to generating final parametric maps for entry into statistical data analysis. Here, we describe an entire protocol that we have used on over 400 subjects with great success in our laboratory. In the 'Notes' section, we justify our choice of the various parameters/choices along the way so that the reader may adapt/modify the protocol to their own time/hardware constraints.

  11. Unraveling the Decomposition Process of Lead(II) Acetate: Anhydrous Polymorphs, Hydrates, and Byproducts and Room Temperature Phosphorescence.

    PubMed

    Martínez-Casado, Francisco J; Ramos-Riesco, Miguel; Rodríguez-Cheda, José A; Cucinotta, Fabio; Matesanz, Emilio; Miletto, Ivana; Gianotti, Enrica; Marchese, Leonardo; Matěj, Zdeněk

    2016-09-06

    Lead(II) acetate [Pb(Ac)2, where Ac = acetate, CH3COO(-)] is a very common salt with many and varied uses throughout history. However, only lead(II) acetate trihydrate [Pb(Ac)2·3H2O] has been characterized to date. In this paper, two enantiotropic polymorphs of the anhydrous salt, a novel hydrate [lead(II) acetate hemihydrate: Pb(Ac)2·1/2H2O], and two decomposition products [corresponding to two different basic lead(II) acetates: Pb4O(Ac)6 and Pb2O(Ac)2] are reported, with their structures being solved for the first time. The compounds present a variety of molecular arrangements, being 2D or 1D coordination polymers. A thorough thermal analysis, by differential scanning calorimetry (DSC) and thermogravimetric analysis (TGA), was also carried out to study the behavior and thermal data of the salt and its decomposition process, in inert and oxygenated atmospheres, identifying the phases and byproducts that appear. The complex thermal behavior of lead(II) acetate is now resolved, with the identification of another hydrate, two anhydrous enantiotropic polymorphs, and several byproducts. Moreover, some of them are phosphorescent at room temperature. The compounds were studied by TGA, DSC, X-ray diffraction, and UV-vis spectroscopy.

  12. Tensor distribution function

    NASA Astrophysics Data System (ADS)

    Leow, Alex D.; Zhu, Siwei

    2008-03-01

    Diffusion weighted MR imaging is a powerful tool that can be employed to study white matter microstructure by examining the 3D displacement profile of water molecules in brain tissue. By applying diffusion-sensitizing gradients along a minimum of 6 directions, second-order tensors (represented by 3-by-3 positive definite matrices) can be computed to model dominant diffusion processes. However, it has been shown that conventional DTI is not sufficient to resolve more complicated white matter configurations, e.g. crossing fiber tracts. More recently, High Angular Resolution Diffusion Imaging (HARDI) seeks to address this issue by employing more than 6 gradient directions. To account for fiber crossing when analyzing HARDI data, several methodologies have been introduced. For example, q-ball imaging was proposed to approximate the orientation distribution function (ODF). Similarly, the PAS method seeks to resolve the angular structure of displacement probability functions using the maximum entropy principle. Alternatively, deconvolution methods extract multiple fiber tracts by computing fiber orientations using a pre-specified single-fiber response function. In this study, we introduce the Tensor Distribution Function (TDF), a probability function defined on the space of symmetric and positive definite matrices. Using the calculus of variations, we solve for the TDF that optimally describes the observed data. Here, fiber crossing is modeled as an ensemble of Gaussian diffusion processes with weights specified by the TDF. Once this optimal TDF is determined, the ODF can easily be computed by analytical integration of the resulting displacement probability function. Moreover, principal fiber directions can also be directly derived from the TDF.
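    The ensemble-of-Gaussians idea can be illustrated by discretizing the tensor space into a dictionary of single-fiber tensors and solving for nonnegative mixture weights; note this replaces the paper's calculus-of-variations solver with a simple nonnegative least-squares fit, and all sizes, eigenvalues and directions below are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(3)
b, lam_par, lam_perp = 3000.0, 1.7e-3, 0.2e-3   # s/mm^2 and mm^2/s (illustrative)

def fiber_tensor(u):
    """Axially symmetric diffusion tensor with principal axis u."""
    u = np.asarray(u, float) / np.linalg.norm(u)
    return lam_perp * np.eye(3) + (lam_par - lam_perp) * np.outer(u, u)

# HARDI-style sampling: 60 gradient directions on the sphere
g = rng.standard_normal((60, 3))
g /= np.linalg.norm(g, axis=1, keepdims=True)

def attenuation(D):
    return np.exp(-b * np.einsum('ij,jk,ik->i', g, D, g))

# Signal from two crossing fiber populations (along x and y), equal weights
S = 0.5 * attenuation(fiber_tensor([1, 0, 0])) + 0.5 * attenuation(fiber_tensor([0, 1, 0]))

# Discretized "TDF": nonnegative weights over a dictionary of candidate tensors
dirs = [[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 0], [1, 0, 1], [0, 1, 1]]
A = np.column_stack([attenuation(fiber_tensor(u)) for u in dirs])
w, resid = nnls(A, S)   # weight mass concentrates on the x- and y-axis atoms
```

    The recovered weights identify the two crossing populations directly; the continuous TDF of the paper is the limit of this construction as the dictionary covers the whole space of symmetric positive definite matrices.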

  13. EEG Classification for Hybrid Brain-Computer Interface Using a Tensor Based Multiclass Multimodal Analysis Scheme.

    PubMed

    Ji, Hongfei; Li, Jie; Lu, Rongrong; Gu, Rong; Cao, Lei; Gong, Xiaoliang

    2016-01-01

    Electroencephalogram- (EEG-) based brain-computer interface (BCI) systems usually utilize one type of change in the dynamics of brain oscillations for control, such as event-related desynchronization/synchronization (ERD/ERS), steady-state visual evoked potentials (SSVEP), and P300 evoked potentials. There is a recent trend to detect more than one of these signals in one system to create a hybrid BCI. In this case, however, the EEG data have typically been divided into groups and analyzed by separate processing procedures, so the interactive effects are ignored when different types of BCI tasks are executed simultaneously. In this work, we propose an improved tensor-based multiclass multimodal scheme especially for hybrid BCI, in which EEG signals are represented as multiway tensors, a nonredundant rank-one tensor decomposition model is proposed to obtain nonredundant tensor components, a weighted Fisher criterion is designed to select multimodal discriminative patterns without ignoring the interactive effects, and a support vector machine (SVM) is extended to multiclass classification. Experimental results suggest that the proposed scheme can not only identify the different changes in the dynamics of brain oscillations induced by different types of tasks but also properly capture the interactive effects of simultaneous tasks. Therefore, it has great potential for use in hybrid BCI.
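    As a generic illustration of the rank-one decomposition ingredient (not the paper's nonredundant multimodal model), a best rank-1 CP approximation of a 3-way "EEG-like" tensor can be computed by alternating least squares; the array sizes and data below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(4)

def rank_one_als(T, iters=50):
    """Best rank-1 approximation lam * (a o b o c) of a 3-way tensor via ALS."""
    _, J, K = T.shape
    b = rng.standard_normal(J); b /= np.linalg.norm(b)
    c = rng.standard_normal(K); c /= np.linalg.norm(c)
    for _ in range(iters):
        a = np.einsum('ijk,j,k->i', T, b, c); a /= np.linalg.norm(a)
        b = np.einsum('ijk,i,k->j', T, a, c); b /= np.linalg.norm(b)
        c = np.einsum('ijk,i,j->k', T, a, b)
        lam = np.linalg.norm(c); c /= lam
    return lam, a, b, c

# Synthetic tensor: channels x frequencies x time, a single rank-1 component
a0 = rng.standard_normal(8)    # spatial signature
b0 = rng.standard_normal(12)   # spectral signature
c0 = rng.standard_normal(20)   # temporal signature
T = np.einsum('i,j,k->ijk', a0, b0, c0)

lam, a, b, c = rank_one_als(T)
err = np.linalg.norm(T - lam * np.einsum('i,j,k->ijk', a, b, c)) / np.linalg.norm(T)
```

    For an exactly rank-1 tensor the ALS sweeps recover the component to machine precision; the paper's scheme deflates such components while enforcing nonredundancy across them.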

  14. EEG Classification for Hybrid Brain-Computer Interface Using a Tensor Based Multiclass Multimodal Analysis Scheme

    PubMed Central

    Ji, Hongfei; Li, Jie; Lu, Rongrong; Gu, Rong; Cao, Lei; Gong, Xiaoliang

    2016-01-01

    Electroencephalogram- (EEG-) based brain-computer interface (BCI) systems usually utilize one type of change in the dynamics of brain oscillations for control, such as event-related desynchronization/synchronization (ERD/ERS), steady-state visual evoked potentials (SSVEP), and P300 evoked potentials. There is a recent trend to detect more than one of these signals in one system to create a hybrid BCI. In this case, however, the EEG data have typically been divided into groups and analyzed by separate processing procedures, so the interactive effects are ignored when different types of BCI tasks are executed simultaneously. In this work, we propose an improved tensor-based multiclass multimodal scheme especially for hybrid BCI, in which EEG signals are represented as multiway tensors, a nonredundant rank-one tensor decomposition model is proposed to obtain nonredundant tensor components, a weighted Fisher criterion is designed to select multimodal discriminative patterns without ignoring the interactive effects, and a support vector machine (SVM) is extended to multiclass classification. Experimental results suggest that the proposed scheme can not only identify the different changes in the dynamics of brain oscillations induced by different types of tasks but also properly capture the interactive effects of simultaneous tasks. Therefore, it has great potential for use in hybrid BCI. PMID:26880873

  15. Using Regional Moment Tensors to Constrain Earthquake Processes following the 2010 Darfield and 2011 Canterbury New Zealand Earthquake Sequences

    NASA Astrophysics Data System (ADS)

    Herman, M. W.; Furlong, K. P.; Herrmann, R. B.; Benz, H.

    2011-12-01

    We model regional broadband data from the South Island of New Zealand to determine regional moment tensor solutions for the mainshocks and selected aftershocks of the M7.0, 3 September 2010, M6.1, 21 February 2011, and M6.0, 13 June 2011 earthquakes that occurred near Christchurch, New Zealand. Arrival time picks from both the local and regional strong-motion and broadband data were used to determine preliminary earthquake locations using a previously published South Island velocity model. Rayleigh and Love surface wave dispersion measurements were then made from selected events to refine the velocity model in order to better match the predominantly large regional surface waves. RMT solutions were computed using the procedures of Herrmann et al. (2011). In total, we computed RMT solutions for 82 events in the magnitude range Mw3.5-7.0. Although the crustal faulting behavior in the region has been argued to reflect a complex interaction of strike-slip and thrust faulting, the dominant faulting style in the sequence is right-lateral strike-slip (75 events), with nodal planes striking west-east to southwest-northeast. There are only five purely reverse mechanisms, at the western end of the sequence, in the vicinity of the Harper Hills blind thrust. The main Mw 7.0 rupture shows both local small-scale stepovers and one larger (~5-10 km wide) right stepover near 172.40°E. Although we expect normal faulting associated with this larger stepover, during the first month after the mainshock we observe only two normal-fault mechanisms and 13 strike-slip (inferred E-W right-lateral) events in the stepover region, and since that time, the sense of faulting has been dominated by right-lateral strike-slip events, perhaps indicating a sequence of short E-W fault segments in the region. The February and June 2011 events occurred along the same trend at the eastern end of the sequence, and show similar strike-slip mechanisms to the majority of events to the west, but the

  16. The Search for a Volatile Human Specific Marker in the Decomposition Process.

    PubMed

    Rosier, E; Loix, S; Develter, W; Van de Voorde, W; Tytgat, J; Cuypers, E

    2015-01-01

    In this study, a validated method using a thermal desorber combined with a gas chromatograph coupled to mass spectrometry was used to identify the volatile organic compounds released during the decomposition of 6 human and 26 animal remains in a laboratory environment over a period of 6 months. 452 compounds were identified. Among them, a human-specific marker was sought using principal component analysis. We found a combination of 8 compounds (ethyl propionate, propyl propionate, propyl butyrate, ethyl pentanoate, pyridine, diethyl disulfide, methyl(methylthio)ethyl disulfide and 3-methylthio-1-propanol) that distinguished human and pig remains from other animal remains. Furthermore, it was possible to separate the pig remains from the human remains based on 5 esters (3-methylbutyl pentanoate, 3-methylbutyl 3-methylbutyrate, 3-methylbutyl 2-methylbutyrate, butyl pentanoate and propyl hexanoate). Further research in the field with full bodies is needed to corroborate these results and to search for one or more human-specific markers. Such markers would allow more efficient training of cadaver dogs, or portable detection devices could be developed.

  17. Photocatalytic decomposition of bromate ion by the UV/P25-Graphene processes.

    PubMed

    Huang, Xin; Wang, Longyong; Zhou, Jizhi; Gao, Naiyun

    2014-06-15

    The photocatalysis of bromate (BrO3(-)) attracts much attention because BrO3(-) is a carcinogenic and genotoxic contaminant in drinking water. In this work, a TiO2-graphene composite (P25-GR) photocatalyst for BrO3(-) reduction was prepared by a facile one-step hydrothermal method; it exhibited a higher capacity for BrO3(-) removal than either P25 or GR alone. The maximum removal of BrO3(-) was observed under the optimal conditions of 1% GR doping and pH 6.8. Compared with the case without UV, the greater decrease of BrO3(-) on the composite indicates that BrO3(-) decomposition was predominantly due to photo-reduction under UV rather than to adsorption. This hypothesis is supported by the decrease in [BrO3(-)] with a synchronous increase in [Br(-)] at a nearly constant amount of total bromine ([BrO3(-)] + [Br(-)]). Furthermore, improved BrO3(-) reduction on P25-GR was observed in the treatment of tap water. However, the efficiency of BrO3(-) removal was lower than that in deionized water, probably due to the consumption of photo-generated electrons and the adsorption of natural organic matter (NOM) on graphene.

  18. The Search for a Volatile Human Specific Marker in the Decomposition Process

    PubMed Central

    Rosier, E.; Loix, S.; Develter, W.; Van de Voorde, W.; Tytgat, J.; Cuypers, E.

    2015-01-01

    In this study, a validated method using a thermal desorber combined with a gas chromatograph coupled to mass spectrometry was used to identify the volatile organic compounds released during the decomposition of 6 human and 26 animal remains in a laboratory environment over a period of 6 months. 452 compounds were identified. Among them, a human-specific marker was sought using principal component analysis. We found a combination of 8 compounds (ethyl propionate, propyl propionate, propyl butyrate, ethyl pentanoate, pyridine, diethyl disulfide, methyl(methylthio)ethyl disulfide and 3-methylthio-1-propanol) that distinguished human and pig remains from other animal remains. Furthermore, it was possible to separate the pig remains from the human remains based on 5 esters (3-methylbutyl pentanoate, 3-methylbutyl 3-methylbutyrate, 3-methylbutyl 2-methylbutyrate, butyl pentanoate and propyl hexanoate). Further research in the field with full bodies is needed to corroborate these results and to search for one or more human-specific markers. Such markers would allow more efficient training of cadaver dogs, or portable detection devices could be developed. PMID:26375029

  19. Feasibility study: Application of the geopressured-geothermal resource to pyrolytic conversion or decomposition/detoxification processes

    SciTech Connect

    Propp, W.A.; Grey, A.E.; Negus-de Wys, J.; Plum, M.M.; Haefner, D.R.

    1991-09-01

    This study presents a preliminary evaluation of the technical and economic feasibility of selected conceptual processes for the pyrolytic conversion of organic feedstocks or the decomposition/detoxification of hazardous wastes by coupling the process to the geopressured-geothermal resource. The report presents a detailed discussion of the resource and of each process selected for evaluation, including a technical evaluation of each. A separate section presents the economic methodology used and the evaluation of the technically viable process. A final section presents conclusions and recommendations. Three separate processes were selected for evaluation: pyrolytic conversion of biomass to petroleum-like fluids, wet air oxidation (WAO) at subcritical conditions for the destruction of hazardous waste, and supercritical water oxidation (SCWO), also for the destruction of hazardous waste. The scientific feasibility of all three processes has been previously established by various bench-scale and pilot-scale studies. For a variety of reasons detailed in the report, the SCWO process is the only one deemed technically feasible, although the effects of the high solids content of the geothermal brine need further study. This technology shows great promise for contributing to the solution of the nation's energy and hazardous waste problems. However, the current economic analysis suggests that it is uneconomical at this time. 50 refs., 5 figs., 7 tabs.

  20. Invariant perfect tensors

    NASA Astrophysics Data System (ADS)

    Li, Youning; Han, Muxin; Grassl, Markus; Zeng, Bei

    2017-06-01

    Invariant tensors are states in the SU(2) tensor product representation that are invariant under the SU(2) action. They play an important role in the study of loop quantum gravity. Perfect tensors, on the other hand, are highly entangled many-body quantum states whose local density matrices are maximally mixed. Recently, the notion of perfect tensors has attracted a lot of attention in quantum information theory, condensed matter theory, and quantum gravity. In this work, we introduce the concept of an invariant perfect tensor (IPT), which is an n-valent tensor that is both invariant and perfect. We discuss the existence and construction of IPTs. For bivalent tensors, the IPT is the unique singlet state for each local dimension. The trivalent IPT also exists and is uniquely given by Wigner's 3j symbol. However, we show that, surprisingly, 4-valent IPTs do not exist for any identical local dimension d. Nevertheless, when the dimension is large, almost all invariant tensors are asymptotically perfect, a consequence of the concentration-of-measure phenomenon for multipartite quantum states.
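    The bivalent case can be checked directly with a few lines of numpy (an illustrative sketch, not code from the paper): for local dimension d = 2 the unique bivalent IPT is the spin singlet, and "perfect" means its one-site reduced density matrix is maximally mixed.

```python
import numpy as np

# Bivalent invariant perfect tensor for local dimension d = 2:
# the spin singlet (|01> - |10>)/sqrt(2). "Perfect" means the
# one-site reduced density matrix equals I/d (maximally mixed).
d = 2
psi = np.zeros((d, d))
psi[0, 1], psi[1, 0] = 1 / np.sqrt(2), -1 / np.sqrt(2)

# Reduced density matrix of the first site: rho = psi psi^dagger
rho = psi @ psi.conj().T

maximally_mixed = np.allclose(rho, np.eye(d) / d)
```

The same check applied to a generic (non-invariant) two-site state would fail, since its reduced density matrix is not proportional to the identity.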

  1. Generalization of the tensor renormalization group approach to 3-D or higher dimensions

    NASA Astrophysics Data System (ADS)

    Teng, Peiyuan

    2017-04-01

    In this paper, a way of generalizing the tensor renormalization group (TRG) is proposed. Mathematically, a connection between the contraction patterns of the tensor renormalization group and the concept of truncation sequences in polytope geometry is established, and a theoretical contraction framework is proposed on this basis. Furthermore, the canonical polyadic decomposition is introduced into tensor network theory. A numerical verification of this method on the 3-D Ising model is carried out.
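    The canonical polyadic decomposition mentioned above writes a tensor as a sum of rank-one terms, T[i,j,k] = sum_r A[i,r] B[j,r] C[k,r]. A minimal numpy sketch (illustrative only, not the author's TRG code) builds a rank-2 tensor from known factors and verifies the standard unfolding identity T_(1) = A (C kr B)^T, where "kr" is the Khatri-Rao (columnwise Kronecker) product:

```python
import numpy as np

rng = np.random.default_rng(0)
I, J, K, R = 4, 5, 6, 2  # tensor dimensions and CP rank

# Factor matrices of a rank-R canonical polyadic (CP) decomposition
A = rng.standard_normal((I, R))
B = rng.standard_normal((J, R))
C = rng.standard_normal((K, R))

# T[i,j,k] = sum_r A[i,r] * B[j,r] * C[k,r]
T = np.einsum('ir,jr,kr->ijk', A, B, C)

# Khatri-Rao product of C and B: row (k*J + j), column r = C[k,r]*B[j,r]
khatri_rao = np.einsum('kr,jr->kjr', C, B).reshape(K * J, R)

# Mode-1 unfolding with the matching column ordering (j fastest within k)
T1 = T.transpose(0, 2, 1).reshape(I, K * J)
identity_holds = np.allclose(T1, A @ khatri_rao.T)
```

The unfolding identity is what CP-fitting algorithms such as alternating least squares exploit: fixing two factors turns the fit for the third into an ordinary linear least-squares problem.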

  2. Coupling experimental data and a prototype model to probe the physical and chemical processes of 2,4-dinitroimidazole solid-phase thermal decomposition

    SciTech Connect

    Behrens, R.; Minier, L.; Bulusu, S.

    1998-12-31

    The time-dependent, solid-phase thermal decomposition behavior of 2,4-dinitroimidazole (2,4-DNI) has been measured using simultaneous thermogravimetric modulated beam mass spectrometry (STMBMS) methods. The decomposition products consist of gaseous and non-volatile polymeric products. The temporal behavior of the gas formation rates of the identified products indicates that the overall thermal decomposition process is complex. In isothermal experiments with 2,4-DNI in the solid phase, four distinguishing features are observed: (1) elevated rates of gas formation during the early stages of the decomposition, which appear to be correlated with the presence of exogenous water in the sample; (2) a subsequent period of relatively constant rates of gas formation; (3) an acceleration of the gas formation rates, characteristic of an autocatalytic reaction; and (4) finally, depletion of the 2,4-DNI, with gaseous decomposition products continuing to evolve at a decreasing rate. A physicochemical and mathematical model of the decomposition of 2,4-DNI has been developed and applied to the experimental results; the first generation of this model is described in this paper. Differences between the first generation of the model and the experimental data collected under different conditions suggest refinements for the next generation of the model.
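    The autocatalytic acceleration described in stage (3) is the signature of a rate law in which a decomposition product catalyzes further decomposition. A minimal generic sketch (illustrative rate constants, not the authors' STMBMS model) integrates dC/dt = -(k1 + k2*P)*C with P the accumulated product, and reproduces the rise-then-fall of the gas formation rate:

```python
import numpy as np

# Generic autocatalytic decomposition sketch (hypothetical k1, k2):
# C decomposes both directly (k1) and via catalysis by its own
# product P (k2), so the rate accelerates as P accumulates and
# then falls off as C is depleted.
k1, k2 = 0.01, 0.5
dt, n_steps = 0.01, 20000
C, P = 1.0, 0.0
rate_history = []
for _ in range(n_steps):
    rate = k1 * C + k2 * C * P   # instantaneous decomposition rate
    C -= rate * dt               # simple forward-Euler step
    P += rate * dt               # product accumulates
    rate_history.append(rate)

rate_history = np.array(rate_history)
peak = int(rate_history.argmax())  # interior maximum: rise, then decay
```

The rate starts low, accelerates once enough catalyst has formed, peaks, and then decays as the reactant is consumed, mirroring stages (2) through (4) above.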

  3. Peatland microbial communities and decomposition processes in the James Bay Lowlands, Canada.

    PubMed

    Preston, Michael D; Smemo, Kurt A; McLaughlin, James W; Basiliko, Nathan

    2012-01-01

    Northern peatlands are a large repository of atmospheric carbon due to an imbalance between primary production by plants and microbial decomposition. The James Bay Lowlands (JBL) of northern Ontario are a large peatland complex but remain relatively unstudied. Climate change models predict the region will experience warmer and drier conditions, potentially altering plant community composition and shifting the region from a long-term carbon sink to a source. We collected a peat core from two geographically separated (ca. 200 km) ombrotrophic peatlands (Victor and Kinoje Bogs) and one minerotrophic peatland (Victor Fen) located near Victor Bog within the JBL. We characterized (i) archaeal, bacterial, and fungal community structure with terminal restriction fragment length polymorphism of ribosomal DNA, (ii) microbial activity using community-level physiological profiling and extracellular enzyme activities, and (iii) the aeration and temperature dependence of carbon mineralization at three depths (0-10, 50-60, and 100-110 cm) from each site. Similar dominant microbial taxa were observed at all three peatlands despite differences in nutrient content and substrate quality. In contrast, we observed differences in basal respiration, enzyme activity, and the magnitude of substrate utilization, which were all generally higher at Victor Fen and similar between the two bogs. However, there was no preferential mineralization of carbon substrates between the bogs and the fen. Microbial community composition did not correlate with measures of microbial activity, but pH was a strong predictor of activity across all sites and depths. Increased peat temperature and aeration stimulated CO(2) production, but this did not correlate with a change in enzyme activities. Potential microbial activity in the JBL appears to be influenced by the quality of the peat substrate and the presence of microbial inhibitors, which suggests the existing peat substrate will have a large

  5. Regeneration of glass nanofluidic chips through a multiple-step sequential thermochemical decomposition process at high temperatures.

    PubMed

    Xu, Yan; Wu, Qian; Shimatani, Yuji; Yamaguchi, Koji

    2015-10-07

    Due to the lack of regeneration methods, the reusability of nanofluidic chips is a significant technical challenge impeding the efficient and economical advancement of both fundamental research and practical applications in nanofluidics. Herein, a simple method for the total regeneration of glass nanofluidic chips is described. The method consists of sequential thermal treatment in six well-designed steps, corresponding to four sequential thermal and thermochemical decomposition processes: dehydration, high-temperature redox chemical reaction, high-temperature gasification, and cooling. The method enabled the total regeneration of typical 'dead' glass nanofluidic chips by eliminating nanoparticles physically clogging the nanochannels, removing chemically reacted organic matter from the glass surface, and regenerating permanent functional surfaces of dissimilar materials localized in the nanochannels. The method provides a technical solution that significantly improves the reusability of glass nanofluidic chips and will be useful for the promotion and acceleration of research and applications in nanofluidics.

  6. Photocatalytic Decomposition of Methylene Blue Over MIL-53(Fe) Prepared Using Microwave-Assisted Process Under Visible Light Irradiation.

    PubMed

    Trinh, Nguyen Duy; Hong, Seong-Soo

    2015-07-01

    Iron-based MIL-53 crystals with uniform size were successfully synthesized using a microwave-assisted solvothermal method and characterized by XRD, FE-SEM and DRS. We also investigated the photocatalytic activity of MIL-53(Fe) for the decomposition of methylene blue using H2O2 as an electron acceptor. The XRD and SEM results show that fully crystallized MIL-53(Fe) materials were obtained regardless of the preparation method. The DRS results show that the MIL-53(Fe) samples prepared using the microwave-assisted process absorb light up to the visible region, and accordingly they showed high photocatalytic activity under visible light irradiation. The MIL-53(Fe) catalyst prepared with two microwave irradiation cycles showed the highest activity.

  7. Topological study of the late steps of the artemisinin decomposition process: modeling the outcome of the experimentally obtained products.

    PubMed

    Moles, Pamela; Oliva, Mónica; Safont, Vicent S

    2011-01-20

    Using 6,7,8-trioxabicyclo[3.2.2]nonane as the artemisinin model and dihydrated Fe(OH)(2) as the heme model, we report a theoretical study of the late steps of the artemisinin decomposition process. The study offers two viewpoints: first, the energetic and geometric parameters are obtained and analyzed, and different reaction paths are studied on this basis; second, the electron localization function (ELF) and the atoms-in-molecules (AIM) methodology are used to conduct a complete topological study of these steps. MO analysis together with the spin density description has also been used. The results agree well with the experimental data, and a new mechanistic proposal that explains the experimentally determined outcome of deoxyartemisinin is postulated.

  8. Decomposition of 3,5-dinitrobenzamide in aqueous solution during UV/H2O2 and UV/TiO2 oxidation processes.

    PubMed

    Yan, Yingjie; Liao, Qi-Nan; Ji, Feng; Wang, Wei; Yuan, Shoujun; Hu, Zhen-Hu

    2017-02-01

    3,5-Dinitrobenzamide has been widely used as a feed additive to control coccidiosis in poultry, and part of the added 3,5-dinitrobenzamide is excreted into wastewater and surface water. Its removal from wastewater and surface water has not been reported in previous studies. Highly reactive hydroxyl radicals generated in UV/hydrogen peroxide (H2O2) and UV/titanium dioxide (TiO2) advanced oxidation processes (AOPs) can decompose organic contaminants efficiently. In this study, the decomposition of 3,5-dinitrobenzamide in aqueous solution during the UV/H2O2 and UV/TiO2 oxidation processes was investigated. The decomposition of 3,5-dinitrobenzamide fits well with a fluence-based pseudo-first-order kinetics model. The decomposition in both oxidation processes was affected by solution pH and was inhibited under alkaline conditions. Inorganic anions such as NO3(-), Cl(-), SO4(2-), HCO3(-), and CO3(2-) inhibited the degradation of 3,5-dinitrobenzamide in both processes. After complete decomposition, approximately 50% of the 3,5-dinitrobenzamide was converted into organic intermediates, and the rest was mineralized to CO2, H2O, and other inorganic species. Ions such as NH4(+), NO3(-), and NO2(-) were released into the aqueous solution during the degradation. The primary decomposition products of 3,5-dinitrobenzamide were identified using time-of-flight mass spectrometry (LCMS-IT-TOF). Based on these products and the ions released, a possible decomposition pathway of 3,5-dinitrobenzamide in both the UV/H2O2 and UV/TiO2 processes was proposed.
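    The fluence-based pseudo-first-order model referred to above has the form C = C0*exp(-k*F), with F the UV fluence, so ln(C/C0) is linear in F and the rate constant follows from a linear fit. A minimal sketch on synthetic data (the rate constant and concentrations are hypothetical illustration values, not the paper's measurements):

```python
import numpy as np

# Fluence-based pseudo-first-order decay: C = C0 * exp(-k * F).
k_true = 0.012   # cm^2 mJ^-1, hypothetical rate constant
C0 = 10.0        # mg L^-1, hypothetical initial concentration
fluence = np.linspace(0, 300, 13)        # UV fluence, mJ cm^-2
C = C0 * np.exp(-k_true * fluence)       # synthetic concentration data

# Linear regression of ln(C/C0) against fluence recovers -k.
slope, intercept = np.polyfit(fluence, np.log(C / C0), 1)
k_fit = -slope
```

With real data the fit quality (R^2 of the log-linear regression) is what justifies calling the kinetics pseudo-first-order.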

  9. A low-cost polysilicon process based on the synthesis and decomposition of dichlorosilane

    NASA Technical Reports Server (NTRS)

    Mccormick, J. R.; Plahutnik, F.; Sawyer, D.; Arvidson, A.; Goldfarb, S.

    1982-01-01

    Major process steps of a dichlorosilane-based chemical vapor deposition (CVD) process for the production of polycrystalline silicon have been evaluated. While an economic analysis indicates that the process is not capable of meeting the JPL/DOE price objective ($14.00/kg in 1980 dollars), a product price in the $19.00/kg to $25.00/kg range may be achieved. Product quality has been evaluated and found to be comparable to semiconductor-grade polycrystalline silicon. Solar cells fabricated from the material are also equivalent to those fabricated from semiconductor-grade polycrystalline silicon.

  11. Multiple alignment tensors from a denatured protein.

    PubMed

    Gebel, Erika B; Ruan, Ke; Tolman, Joel R; Shortle, David

    2006-07-26

    The structural content of the denatured state has yet to be fully characterized. In recent years, large residual dipolar couplings (RDCs) from denatured proteins have been observed under alignment conditions produced by bicelles and strained polyacrylamide gels. In this report, we describe efforts to extend our picture of the residual structure in denatured nuclease by measuring RDCs with multiple alignment tensors. Backbone amide 15N-1H RDCs were collected in 4 M urea for a total of eight RDC datasets. The RDCs were analyzed by singular value decomposition (SVD) to determine the number of independent alignment tensors present in the data. On the basis of the resultant singular values and propagated error estimates, it is clear that there are at least three independent alignment tensors. These three independent RDC datasets can be reconstituted as orthogonal linear combination (OLC) RDC datasets of the eight actually recorded. The first, second, and third OLC-RDC datasets are highly robust to the removal of any single experimental RDC dataset, establishing the presence of three independent alignment tensors sampled well above the level of experimental uncertainty. The observation that the RDC data span three or more dimensions of the five-dimensional parameter space demonstrates that the ensemble-average structure of denatured nuclease must be asymmetric with respect to these three orthogonal principal axes, which is not inconsistent with earlier work demonstrating that it has a native-like topology.
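    The SVD analysis described above can be mimicked on synthetic data: stack the RDC datasets as columns of a matrix, take its singular values, and count how many stand clearly above the noise floor. A hedged numpy sketch (synthetic numbers and noise level, not the published RDCs):

```python
import numpy as np

rng = np.random.default_rng(1)
n_rdc, n_sets, n_indep = 120, 8, 3   # couplings per set, sets, independent tensors

# Each of the 8 synthetic RDC datasets is a linear combination of 3
# independent alignment components, plus small "experimental" noise.
basis = rng.standard_normal((n_rdc, n_indep))
coeffs = rng.standard_normal((n_indep, n_sets))
noise = 1e-3 * rng.standard_normal((n_rdc, n_sets))
D = basis @ coeffs + noise

# Singular values of the stacked data; those above a noise-scaled
# threshold correspond to independently sampled alignment tensors.
s = np.linalg.svd(D, compute_uv=False)
threshold = 10 * np.sqrt(n_rdc) * 1e-3
n_significant = int((s > threshold).sum())
```

The OLC datasets in the paper correspond to the left singular vectors of this stacked matrix, scaled by their singular values.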

  12. Fundamental phenomena on fuel decomposition and boundary-layer combustion processes with applications to hybrid rocket motors

    NASA Technical Reports Server (NTRS)

    Kuo, Kenneth K.; Lu, Yeu-Cherng; Chiaverini, Martin J.; Harting, George C.; Johnson, David K.; Serin, Nadir

    1995-01-01

    An experimental study of the fundamental processes involved in fuel decomposition and boundary-layer combustion in hybrid rocket motors is being conducted at the High Pressure Combustion Laboratory of The Pennsylvania State University. This research will provide a useful engineering technology base for the development of hybrid rocket motors, as well as a fundamental understanding of the complex processes involved in hybrid propulsion. A high-pressure, 2-D slab motor has been designed, manufactured, and used to conduct seven test firings using HTPB fuel processed at PSU. A total of 20 fuel slabs have been received from the McDonnell Douglas Aerospace Corporation; ten of these fuel slabs contain an array of fine-wire thermocouples for measuring solid fuel surface and subsurface temperatures. Diagnostic instrumentation used in the tests includes high-frequency pressure transducers for measuring static and dynamic motor pressures and fine-wire thermocouples for measuring solid fuel surface and subsurface temperatures. The ultrasonic pulse-echo technique, as well as a real-time x-ray radiography system, has been used to obtain independent measurements of instantaneous solid fuel regression rates.

  13. Towards a physical understanding of stratospheric cooling under global warming through a process-based decomposition method

    NASA Astrophysics Data System (ADS)

    Yang, Yang; Ren, R.-C.; Cai, Ming

    2016-12-01

    The stratosphere has been cooling under global warming, the causes of which are not yet well understood. This study applied a process-based decomposition method (CFRAM; Coupled Surface-Atmosphere Climate Feedback Response Analysis Method) to the simulation results of a Coupled Model Intercomparison Project, phase 5 (CMIP5) model (CCSM4; Community Climate System Model, version 4) to identify the radiative and non-radiative processes responsible for the stratospheric cooling. By focusing on the long-term stratospheric temperature changes between the "historical" run and the 8.5 W m-2 Representative Concentration Pathway (RCP8.5) scenario, this study demonstrates that changes in the radiative effects of CO2, ozone, and water vapor are the main drivers of stratospheric cooling in both winter and summer. They contribute to the cooling by reducing the net radiative energy (mainly downward radiation) received by the stratospheric layer; in terms of the global average, their contributions are around -5, -1.5, and -1 K, respectively. However, the observed stratospheric cooling is much weaker than the cooling attributable to radiative processes alone, because changes in atmospheric dynamic processes act to strongly mitigate the radiative cooling, yielding roughly 4 K of warming in the global average. In particular, the much stronger/weaker dynamic warming in the northern/southern winter extratropics is associated with an increase of planetary-wave activity in the northern winter hemisphere, but a slight decrease in the southern winter hemisphere, under global warming. More importantly, although radiative processes dominate the stratospheric cooling, the spatial patterns are largely determined by the non-radiative effects of dynamic processes.

  14. Combined TGA-MS kinetic analysis of multistep processes. Thermal decomposition and ceramification of polysilazane and polysiloxane preceramic polymers.

    PubMed

    García-Garrido, C; Sánchez-Jiménez, P E; Pérez-Maqueda, L A; Perejón, A; Criado, José M

    2016-10-26

    The polymer-to-ceramic transformation kinetics of two widely employed ceramic precursors, 1,3,5,7-tetramethyl-1,3,5,7-tetravinylcyclotetrasiloxane (TTCS) and polyureamethylvinylsilazane (CERASET), have been investigated using coupled thermogravimetry and mass spectrometry (TG-MS), Raman spectroscopy, XRD and FTIR. The thermally induced decomposition of the pre-ceramic polymer is the critical step in the synthesis of polymer-derived ceramics (PDCs), and accurate kinetic modeling is key to attaining a complete understanding of the underlying process and to attempting any behavior predictions. However, obtaining a precise kinetic description of processes of such complexity, consisting of several largely overlapping physico-chemical processes comprising the cleavage of the starting polymeric network and the release of organic moieties, is extremely difficult. Here, by using the evolved gases detected by MS as a guide, it has been possible to determine the number of steps that compose the overall process, which was subsequently resolved using a semiempirical deconvolution method based on the Fraser-Suzuki function. This function is more appropriate than the more usual Gaussian or Lorentzian functions since it takes into account the intrinsic asymmetry of kinetic curves. The kinetic parameters of each constituent step were then independently determined using both model-free and model-fitting procedures, and it was found that the processes mostly obey diffusion models, which can be attributed to the diffusion of the released gases through the solid matrix. The validity of the obtained kinetic parameters was tested not only by the successful reconstruction of the original experimental curves, but also by predicting the kinetic curves of the overall process under different thermal schedules and for a mixed TTCS-CERASET precursor.
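    The Fraser-Suzuki function used for the deconvolution is an asymmetric peak shape; as the asymmetry parameter tends to zero it reduces to a Gaussian. A minimal sketch of the function itself (the parameter values below are illustrative, not those fitted in the paper):

```python
import numpy as np

def fraser_suzuki(x, h, x0, w, s):
    """Asymmetric Fraser-Suzuki peak: height h, position x0,
    width w, asymmetry s (the s -> 0 limit is a Gaussian)."""
    arg = 1.0 + 2.0 * s * (x - x0) / w
    y = np.zeros_like(x, dtype=float)
    ok = arg > 0                 # the function is zero outside its support
    y[ok] = h * np.exp(-np.log(2.0) / s**2 * np.log(arg[ok])**2)
    return y

x = np.linspace(0, 10, 2001)
y = fraser_suzuki(x, h=1.0, x0=4.0, w=1.5, s=-0.3)  # illustrative parameters
```

Because the shape is asymmetric about x0, a sum of such peaks can track the skewed, overlapping mass-loss steps that Gaussian deconvolution distorts; in practice the peak parameters would be fitted to the TG-MS curves with a nonlinear least-squares routine.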

  15. Tensor Network Renormalization.

    PubMed

    Evenbly, G; Vidal, G

    2015-10-30

    We introduce a coarse-graining transformation for tensor networks that can be applied to study both the partition function of a classical statistical system and the Euclidean path integral of a quantum many-body system. The scheme is based upon the insertion of optimized unitary and isometric tensors (disentanglers and isometries) into the tensor network and has, as its key feature, the ability to remove short-range entanglement or correlations at each coarse-graining step. Removal of short-range entanglement results in scale invariance being explicitly recovered at criticality. In this way we obtain a proper renormalization group flow (in the space of tensors), one that in particular (i) is computationally sustainable, even for critical systems, and (ii) has the correct structure of fixed points, both at criticality and away from it. We demonstrate the proposed approach in the context of the 2D classical Ising model.

  16. Tensor coupling and pseudospin symmetry in nuclei

    SciTech Connect

    Alberto, P.; Castro, A.S. de; Lisboa, R.; Malheiro, M.

    2005-03-01

    In this work we study the contribution of the isoscalar tensor coupling to the realization of pseudospin symmetry in nuclei. Using realistic values for the tensor coupling strength, we show that this coupling noticeably reduces the pseudospin splittings, especially for single-particle levels near the Fermi surface. By using an energy decomposition of the pseudospin energy splittings, we show that the changes in these splittings come mainly through the changes induced in the lower radial wave function for the low-lying pseudospin partners and through changes in the expectation value of the pseudospin-orbit coupling term for surface partners. This allows us to confirm the conclusion already reached in previous studies, namely that the pseudospin symmetry in nuclei is of a dynamical nature.

  17. Oxidative decomposition of p-nitroaniline in water by solar photo-Fenton advanced oxidation process.

    PubMed

    Sun, Jian-Hui; Sun, Sheng-Peng; Fan, Mao-Hong; Guo, Hui-Qin; Lee, Yi-Fan; Sun, Rui-Xia

    2008-05-01

    The degradation of p-nitroaniline (PNA) in water by the solar photo-Fenton advanced oxidation process was investigated in this study. The effects of different reaction parameters, including the solution pH, the dosages of hydrogen peroxide and ferrous ion, the initial PNA concentration, and the temperature, on the degradation of PNA were studied. The optimum conditions for the degradation of PNA in water were: pH 3.0, 10 mmol L(-1) H(2)O(2), 0.05 mmol L(-1) Fe(2+), 0.072-0.217 mmol L(-1) PNA, and a temperature of 20 degrees C. Under these conditions, the degradation efficiencies of PNA were more than 98% within 30 min of reaction. The degradation characteristics of PNA showed that the conjugated pi systems of the aromatic ring in PNA molecules were effectively destroyed. The experimental results indicated that the solar photo-Fenton process has advantages over the classical Fenton process, such as higher oxidation power, a wider working pH range, and lower ferrous ion usage. Furthermore, the present study showed the potential of the solar photo-Fenton process for the treatment of PNA-containing wastewater.

  18. Square Deal: Lower Bounds and Improved Relaxations for Tensor Recovery

    DTIC Science & Technology

    2013-08-16

    Recovering a low-rank tensor from incomplete information is a recurring problem in signal processing and machine learning. The most popular convex relaxation of this problem minimizes the sum of the nuclear… Extending these results to low-rank tensors is not obvious. The numerical algebra of tensors is fraught with hardness results [HL09]. For example, even computing a
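    The sum-of-nuclear-norms relaxation referred to in this snippet penalizes the nuclear norms of the tensor's mode-wise unfoldings. A small numpy sketch (illustrative, not the paper's code) computes that objective; as a sanity check, for a rank-1 tensor every unfolding is a rank-1 matrix, so each nuclear norm equals the Frobenius norm of the tensor:

```python
import numpy as np

def nuclear_norm(M):
    # Sum of singular values of a matrix
    return np.linalg.svd(M, compute_uv=False).sum()

def unfold(T, mode):
    # Mode-n unfolding: bring axis `mode` to the front, flatten the rest
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

rng = np.random.default_rng(0)
a, b, c = rng.standard_normal(3), rng.standard_normal(4), rng.standard_normal(5)
T = np.einsum('i,j,k->ijk', a, b, c)   # rank-1 tensor a (x) b (x) c

snn = sum(nuclear_norm(unfold(T, m)) for m in range(3))  # relaxation objective
fro = np.linalg.norm(T)
# For a rank-1 tensor each unfolding has rank 1, so snn == 3 * fro.
```

For higher-rank tensors the three unfoldings generally have different singular value spectra, which is what makes the choice and weighting of the unfoldings matter in tensor recovery.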

  19. Fusing Functional MRI and Diffusion Tensor Imaging Measures of Brain Function and Structure to Predict Working Memory and Processing Speed Performance among Inter-episode Bipolar Patients.

    PubMed

    McKenna, Benjamin S; Theilmann, Rebecca J; Sutherland, Ashley N; Eyler, Lisa T

    2015-05-01

    Evidence for abnormal brain function as measured with diffusion tensor imaging (DTI) and functional magnetic resonance imaging (fMRI) and cognitive dysfunction have been observed in inter-episode bipolar disorder (BD) patients. We aimed to create a joint statistical model of white matter integrity and functional response measures in explaining differences in working memory and processing speed among BD patients. Medicated inter-episode BD (n=26; age=45.2±10.1 years) and healthy comparison (HC; n=36; age=46.3±11.5 years) participants completed 51-direction DTI and fMRI while performing a working memory task. Participants also completed a processing speed test. Tract-based spatial statistics identified common white matter tracts where fractional anisotropy was calculated from atlas-defined regions of interest. Brain responses within regions of interest activation clusters were also calculated. Least angle regression was used to fuse fMRI and DTI data to select the best joint neuroimaging predictors of cognitive performance for each group. While there was overlap between groups in which regions were most related to cognitive performance, some relationships differed between groups. For working memory accuracy, BD-specific predictors included bilateral dorsolateral prefrontal cortex from fMRI, splenium of the corpus callosum, left uncinate fasciculus, and bilateral superior longitudinal fasciculi from DTI. For processing speed, the genu and splenium of the corpus callosum and right superior longitudinal fasciculus from DTI were significant predictors of cognitive performance selectively for BD patients. BD patients demonstrated unique brain-cognition relationships compared to HC. These findings are a first step in discovering how interactions of structural and functional brain abnormalities contribute to cognitive impairments in BD.

  20. Decomposition of Time Scales in Linear Systems and Markovian Decision Processes.

    DTIC Science & Technology

    1980-11-01

    Inventory theory [17]; iii. Queuing theory [18]. Markovian decision processes can be traced back to Bellman's development of dynamic programming [19,20]…corresponds to the demand in terms of generating units needed. Markov models of this type are common in optimal resource scheduling problems [22,59]. The units…scale systems is a research area that will always remain rich in potential. More demanding performance leads to more complex models, necessitating the

  1. Diffusion-tensor imaging of major white matter tracts and their role in language processing in aphasia.

    PubMed

    Ivanova, Maria V; Isaev, Dmitry Yu; Dragoy, Olga V; Akinina, Yulia S; Petrushevskiy, Alexey G; Fedina, Oksana N; Shklovsky, Victor M; Dronkers, Nina F

    2016-12-01

    A growing literature points to the importance of white matter tracts in understanding the neural mechanisms of language processing, and in determining the nature of language deficits and recovery patterns in aphasia. Measurements extracted from diffusion-weighted (DW) images provide comprehensive in vivo measures of the local microstructural properties of fiber pathways. In the current study, we compared the microstructural properties of major white matter tracts implicated in language processing in each hemisphere (the arcuate fasciculus (AF), superior longitudinal fasciculus (SLF), inferior longitudinal fasciculus (ILF), inferior frontal-occipital fasciculus (IFOF), uncinate fasciculus (UF), and corpus callosum (CC), with the corticospinal tract (CST) included for control purposes) between individuals with aphasia and healthy controls, and investigated the relationship between these neural indices and language deficits. Thirty-seven individuals with aphasia due to left hemisphere stroke and eleven age-matched controls were scanned using DW imaging sequences. Fractional anisotropy (FA), mean diffusivity (MD), radial diffusivity (RD), and axial diffusivity (AD) values for each major white matter tract were extracted from the DW images using tract masks chosen from standardized atlases. Individuals with aphasia were also assessed with a standardized language test in Russian targeting comprehension and production at the word and sentence level. Individuals with aphasia had significantly lower FA values for left hemisphere tracts and significantly higher values of MD, RD and AD for both left and right hemisphere tracts compared to controls, all indicating profound impairment of tract integrity. Language comprehension was predominantly related to the integrity of the left IFOF and left ILF, while language production was mainly related to the integrity of the left AF. In addition, individual segments of these three tracts were differentially associated with language production and
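    The four diffusion measures reported in this record (FA, MD, RD, AD) are standard scalar functions of the diffusion tensor's eigenvalues l1 >= l2 >= l3. A minimal sketch of the definitions (generic formulas, not the study's processing pipeline):

```python
import numpy as np

def dti_scalars(l1, l2, l3):
    """Standard DTI scalars from sorted eigenvalues l1 >= l2 >= l3."""
    md = (l1 + l2 + l3) / 3.0        # mean diffusivity
    ad = l1                          # axial diffusivity (principal direction)
    rd = (l2 + l3) / 2.0             # radial diffusivity (perpendicular)
    num = (l1 - md) ** 2 + (l2 - md) ** 2 + (l3 - md) ** 2
    den = l1 ** 2 + l2 ** 2 + l3 ** 2
    fa = np.sqrt(1.5 * num / den)    # fractional anisotropy, in [0, 1]
    return fa, md, rd, ad

# Isotropic diffusion gives FA = 0; a single nonzero eigenvalue gives FA = 1.
fa_iso, *_ = dti_scalars(1.0, 1.0, 1.0)
fa_stick, *_ = dti_scalars(1.0, 0.0, 0.0)
```

This makes the pattern reported above concrete: demyelination-like damage tends to raise RD (and hence MD) while lowering FA, which is exactly the direction of the group differences observed in the aphasia cohort.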

  2. Design rule optimization for 65-nm-node (CMOS5) BEOL using process and layout decomposition methodology

    NASA Astrophysics Data System (ADS)

    Honda, K.; Peter, K.; Zhang, Y.; Yu, B.; Park, K.; Li, Xiaolei; Michaels, K.; Yamada, Shinichi; Noguchi, T.

    2004-05-01

    With the downscaling of dimensions, the challenges to layout printability increase significantly, and design rules can no longer be shrunk linearly. Historically, in the early development stage, simple test patterns such as snake/comb or border/borderless via chains were used to identify design and process issues electrically. However, it is unclear how well these patterns represent the patterns responsible for real critical failures, and the lack of such critical patterns can cause yield problems in volume production. In this paper, we show the results of evaluating a 65-nm BEOL process using test patterns that cover critical layout situations. In particular, we focused on the line-end via hole, which is believed to cause systematic yield degradation. The key steps in our process/design decomposition methodology are design attribute and process space analysis. By exploring the process space for a given design, the method makes it possible to find the patterns that are most challenging to print due to various process issues. The test patterns were generated from critical patterns extracted from a standard cell library, taking into account our preliminary OPC and mask design flow. Simulations of all test patterns were performed to ensure that the DOE range is sufficient to cover the entire process/design space. These patterns were generated from the 65-nm-node ground design rules, using 90 nm as the minimum metal width and space and a fixed via hole diameter of 100 nm. It was confirmed by simulation that all the test patterns represent the original design in each module's process/design space. All the test patterns were measured with a standard parametric e-test setup. The amount of line-end pull-back can be inferred from the via resistance, and the amount of line-end widening can be inferred from the leakage current between via chains and neighboring lines. Thus, meaningful information about the OPC and litho process can be obtained.

  3. General route for the decomposition of InAs quantum dots during the capping process.

    PubMed

    González, D; Reyes, D F; Utrilla, A D; Ben, T; Braza, V; Guzman, A; Hierro, A; Ulloa, J M

    2016-03-29

    The effect of the capping process on the morphology of InAs/GaAs quantum dots (QDs), using different GaAs-based capping layers (CLs) ranging from strain reduction layers to strain compensating layers, has been studied by transmission microscopy techniques. For this, we simultaneously measured the height and diameter of buried and uncapped QDs, covering populations of hundreds of QDs that are statistically reliable. First, the uncapped QD population evolves in all cases from a pyramidal shape into a more homogeneous distribution of buried QDs with a spherical-dome shape, despite the different mechanisms implicated in QD capping. Second, the shape of the buried QDs depends only on the final QD size: the radius of curvature is a function of the base diameter, independently of the CL composition and growth conditions. An asymmetric evolution of the QDs' morphology takes place, in which the QD height and base diameter are modified by the amount required to adopt a similar stable shape, characterized by an average aspect ratio of 0.21. Our results contradict the traditional model of QD material redistribution from the apex to the base and point to a different, universal behavior of the overgrowth processes in self-organized InAs QDs.

  4. General route for the decomposition of InAs quantum dots during the capping process

    NASA Astrophysics Data System (ADS)

    González, D.; Reyes, D. F.; Utrilla, A. D.; Ben, T.; Braza, V.; Guzman, A.; Hierro, A.; Ulloa, J. M.

    2016-03-01

    The effect of the capping process on the morphology of InAs/GaAs quantum dots (QDs), using different GaAs-based capping layers (CLs) ranging from strain reduction layers to strain compensating layers, has been studied by transmission microscopy techniques. For this, we simultaneously measured the height and diameter of buried and uncapped QDs, covering populations of hundreds of QDs that are statistically reliable. First, the uncapped QD population evolves in all cases from a pyramidal shape into a more homogeneous distribution of buried QDs with a spherical-dome shape, despite the different mechanisms implicated in QD capping. Second, the shape of the buried QDs depends only on the final QD size: the radius of curvature is a function of the base diameter, independently of the CL composition and growth conditions. An asymmetric evolution of the QDs' morphology takes place, in which the QD height and base diameter are modified by the amount required to adopt a similar stable shape, characterized by an average aspect ratio of 0.21. Our results contradict the traditional model of QD material redistribution from the apex to the base and point to a different, universal behavior of the overgrowth processes in self-organized InAs QDs.

  5. Decomposition of Iodinated Pharmaceuticals by UV-254 nm-assisted Advanced Oxidation Processes.

    PubMed

    Duan, Xiaodi; He, Xuexiang; Wang, Dong; Mezyk, Stephen P; Otto, Shauna C; Marfil-Vega, Ruth; Mills, Marc A; Dionysiou, Dionysios D

    2017-02-05

    Iodinated pharmaceuticals, thyroxine (a thyroid hormone) and diatrizoate (an iodinated X-ray contrast medium), are among the most prescribed active pharmaceutical ingredients. Both of them have been reported to potentially disrupt thyroid homeostasis even at very low concentrations. In this study, UV-254 nm-based photolysis and photochemical processes, i.e., UV only, UV/H2O2, and UV/S2O8(2-), were evaluated for the destruction of these two pharmaceuticals. Approximately 40% of 0.5 μM thyroxine or diatrizoate was degraded through direct photolysis at a UV fluence of 160 mJ cm(-2), probably resulting from the photosensitive cleavage of C-I bonds. While the addition of H2O2 accelerated the degradation only slightly, the destruction rates of both chemicals were significantly enhanced in the UV/S2O8(2-) system, suggesting the potential vulnerability of the iodinated chemicals toward UV/S2O8(2-) treatment. Such efficient destruction also occurred in the presence of radical scavengers when biologically treated wastewater samples were used as reaction matrices. The effects of initial oxidant concentrations, solution pH, as well as the presence of natural organic matter (humic acid or fulvic acid) and alkalinity were also investigated in this study. These results provide insights for the removal of iodinated pharmaceuticals in water and/or wastewater using UV-based photochemical processes.
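    Treating direct photolysis as pseudo-first-order in UV fluence, the quoted figures imply a fluence-based rate constant. A minimal sketch (the function name and the assumption of exponential decay in fluence are ours, not the paper's):

```python
import math

def fluence_rate_constant(frac_remaining: float, fluence_mj_cm2: float) -> float:
    """Fluence-based pseudo-first-order rate constant k' (cm^2 mJ^-1),
    assuming C/C0 = exp(-k' * H) for UV fluence H."""
    return -math.log(frac_remaining) / fluence_mj_cm2

# ~40% degraded (60% remaining) at a UV fluence of 160 mJ cm^-2
k_prime = fluence_rate_constant(0.60, 160.0)  # roughly 3.2e-3 cm^2/mJ
```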

  6. Temperature Adaptations in the Terminal Processes of Anaerobic Decomposition of Yellowstone National Park and Icelandic Hot Spring Microbial Mats

    PubMed Central

    Sandbeck, Kenneth A.; Ward, David M.

    1982-01-01

    The optimum temperatures for methanogenesis in microbial mats of four neutral to alkaline, low-sulfate hot springs in Yellowstone National Park were between 50 and 60°C, which was 13 to 23°C lower than the upper temperature for mat development. Significant methanogenesis at 65°C was only observed in one of the springs. Methane production in samples collected at a 51 or 62°C site in Octopus Spring was increased by incubation at higher temperatures and was maximal at 70°C. Strains of Methanobacterium thermoautotrophicum were isolated from 50, 55, 60, and 65°C sites in Octopus Spring at the temperatures of the collection sites. The optimum temperature for growth and methanogenesis of each isolate was 65°C. Similar results were found for the potential rate of sulfate reduction in an Icelandic hot spring microbial mat in which sulfate reduction dominated methane production as a terminal process in anaerobic decomposition. The potential rate of sulfate reduction along the thermal gradient of the mat was greatest at 50°C, but incubation at 60°C of the samples obtained at 50°C increased the rate. Adaptation to different mat temperatures, common among various microorganisms and processes in the mats, did not appear to occur in the processes and microorganisms which terminate the anaerobic food chain. Other factors must explain why the maximal rates of these processes are restricted to moderate temperatures of the mat ecosystem. PMID:16346109

  7. Ozone decomposition

    PubMed Central

    Batakliev, Todor; Georgiev, Vladimir; Anachkov, Metody; Rakovsky, Slavcho

    2014-01-01

    Catalytic ozone decomposition is of great significance because ozone is a toxic substance commonly found or generated in human environments (aircraft cabins, offices with photocopiers, laser printers, sterilizers). Considerable work on ozone decomposition has been reported in the literature. This review provides a comprehensive summary of that literature, concentrating on analysis of the physico-chemical properties, synthesis and catalytic decomposition of ozone. This is supplemented by a review of kinetics and catalyst characterization which ties together the previously reported results. Noble metals and oxides of transition metals have been found to be the most active substances for ozone decomposition. The high price of precious metals stimulated the use of metal oxide catalysts, particularly catalysts based on manganese oxide. It has been determined that the kinetics of ozone decomposition is first order. A mechanism of the reaction of catalytic ozone decomposition is discussed, based on detailed spectroscopic investigations of the catalytic surface, showing the existence of peroxide and superoxide surface intermediates. PMID:26109880

  8. Ozone decomposition.

    PubMed

    Batakliev, Todor; Georgiev, Vladimir; Anachkov, Metody; Rakovsky, Slavcho; Zaikov, Gennadi E

    2014-06-01

    Catalytic ozone decomposition is of great significance because ozone is a toxic substance commonly found or generated in human environments (aircraft cabins, offices with photocopiers, laser printers, sterilizers). Considerable work on ozone decomposition has been reported in the literature. This review provides a comprehensive summary of that literature, concentrating on analysis of the physico-chemical properties, synthesis and catalytic decomposition of ozone. This is supplemented by a review of kinetics and catalyst characterization which ties together the previously reported results. Noble metals and oxides of transition metals have been found to be the most active substances for ozone decomposition. The high price of precious metals stimulated the use of metal oxide catalysts, particularly catalysts based on manganese oxide. It has been determined that the kinetics of ozone decomposition is first order. A mechanism of the reaction of catalytic ozone decomposition is discussed, based on detailed spectroscopic investigations of the catalytic surface, showing the existence of peroxide and superoxide surface intermediates.

  9. Three-dimensional display of peripheral nerves in the wrist region based on MR diffusion tensor imaging and maximum intensity projection post-processing.

    PubMed

    Ding, Wen Quan; Zhou, Xue Jun; Tang, Jin Bo; Gu, Jian Hui; Jin, Dong Sheng

    2015-06-01

    To achieve 3-dimensional (3D) display of peripheral nerves in the wrist region by using maximum intensity projection (MIP) post-processing methods to reconstruct raw images acquired by a diffusion tensor imaging (DTI) scan, and to explore its clinical applications. We performed DTI scans in 6 (DTI6) and 25 (DTI25) diffusion directions on 20 wrists of 10 healthy young volunteers, 6 wrists of 5 patients with carpal tunnel syndrome, 6 wrists of 6 patients with nerve lacerations, and one patient with neurofibroma. The MIP post-processing methods employed 2 types of DTI raw images: (1) single-direction and (2) T2-weighted trace. The fractional anisotropy (FA) and apparent diffusion coefficient (ADC) values of the median and ulnar nerves were measured at multiple testing sites. Two radiologists used custom evaluation scales to assess the 3D nerve imaging quality independently. In both DTI6 and DTI25, nerves in the wrist region could be displayed clearly by the 2 MIP post-processing methods. The FA and ADC values were not significantly different between DTI6 and DTI25, except for the FA values of the ulnar nerves at the level of the pisiform bone (p=0.03). As to the imaging quality of each MIP post-processing method, there were no significant differences between DTI6 and DTI25 (p>0.05). The imaging quality of single-direction MIP post-processing was better than that from T2-weighted traces (p<0.05) because of the higher nerve signal intensity. Three-dimensional displays of peripheral nerves in the wrist region can be achieved by MIP post-processing of single-direction images and T2-weighted trace images for both DTI6 and DTI25. The FA and ADC values of the median nerves can be accurately measured by using DTI6 data. Adopting a 6-direction DTI scan and MIP post-processing is an efficient method for evaluating peripheral nerves. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
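    The scalar maps named above (FA, ADC/MD, plus axial and radial diffusivity) are standard functions of the diffusion-tensor eigenvalues. A minimal sketch of those definitions, with illustrative eigenvalues rather than values from this study:

```python
import numpy as np

def dti_scalars(eigvals):
    """FA, MD (ADC), AD, RD from the three diffusion-tensor eigenvalues
    (units mm^2/s); eigenvalues are sorted descending internally."""
    l = np.sort(np.asarray(eigvals, dtype=float))[::-1]
    md = l.mean()                                  # mean diffusivity (= ADC)
    fa = np.sqrt(1.5 * np.sum((l - md) ** 2) / np.sum(l ** 2))
    ad = l[0]                                      # axial diffusivity
    rd = l[1:].mean()                              # radial diffusivity
    return fa, md, ad, rd

# illustrative white-matter-like eigenvalues (not from this paper)
fa, md, ad, rd = dti_scalars([1.7e-3, 0.4e-3, 0.3e-3])
```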

  10. Decomposition approach of the nitrogen generation process: empirical study on the Shimabara Peninsula in Japan.

    PubMed

    Fujii, Hidemichi; Nakagawa, Kei; Kagabu, Makoto

    2016-11-01

    Groundwater nitrate pollution is one of the most prevalent water-related environmental problems worldwide. The objective of this study is to identify the determinants of nitrogen pollutant changes with a focus on the nitrogen generation process. The novelty of our research framework is to cost-effectively identify the factors involved in nitrogen pollutant generation using public data. This study focuses on three determinant factors: (1) nitrogen intensity changes, (2) structural changes, and (3) scale changes. This study empirically analyses three sectors, including crop production, farm animals, and the household, on the Shimabara Peninsula in Japan. Our results show that the nitrogen supply from crop production sectors has decreased because the production has been scaled down and shifted towards lower nitrogen intensive crops. In the farm animal sector, the nitrogen supply has also been successfully reduced due to scaling-down efforts. Households have decreased the nitrogen supply by diffusion of integrated septic tank and sewerage systems.
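    The abstract does not state the paper's exact index formula. One standard way to split a change in a total pollutant load into intensity, structure, and scale effects is an additive LMDI decomposition, sketched here with made-up factor values:

```python
import math

def logmean(a: float, b: float) -> float:
    """Logarithmic mean, the LMDI weighting function."""
    return a if a == b else (a - b) / (math.log(a) - math.log(b))

def lmdi_decompose(f0, fT):
    """Additive LMDI-I decomposition of a change in V = product of factors.
    f0, fT: factor values (e.g. intensity, structure, scale) at the start
    and end of the period. Returns per-factor contributions that sum
    exactly to V_T - V_0. Illustrative sketch only."""
    v0, vT = math.prod(f0), math.prod(fT)
    w = logmean(vT, v0)
    return [w * math.log(t / z) for z, t in zip(f0, fT)]

# e.g. intensity falls 20%, structure unchanged, scale grows 10%
effects = lmdi_decompose((1.0, 1.0, 1.0), (0.8, 1.0, 1.1))
```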

  11. Coherence analysis using canonical coordinate decomposition with applications to sparse processing and optimal array deployment

    NASA Astrophysics Data System (ADS)

    Azimi-Sadjadi, Mahmood R.; Pezeshki, Ali; Wade, Robert L.

    2004-09-01

    Sparse array processing methods are typically used to improve the spatial resolution of sensor arrays for the estimation of direction of arrival (DOA). The fundamental assumption behind these methods is that signals that are received by the sparse sensors (or a group of sensors) are coherent. However, coherence may vary significantly with changes in environmental, terrain, and operating conditions. In this paper, canonical correlation analysis is used to study the variations in coherence between pairs of sub-arrays in a sparse array problem. The data set for this study is a subset of an acoustic signature data set, acquired from the US Army TACOM-ARDEC, Picatinny Arsenal, NJ. This data set was collected using three wagon-wheel type arrays with five microphones. The results show that in nominal operating conditions, i.e. no extreme wind noise or masking effects by trees, buildings, etc., the signals collected at different sensor arrays are indeed coherent even at distant node separation.
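    Canonical correlations between two sub-array recordings can be computed from the SVD of the whitened cross-covariance. A hedged numpy sketch with a synthetic shared source standing in for the acoustic data (not the TACOM-ARDEC set):

```python
import numpy as np

def canonical_correlations(X, Y):
    """Canonical correlations between two data matrices (samples x channels),
    via QR whitening and SVD of the cross-product. Values near 1 indicate
    coherent signal subspaces between the two sub-arrays."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    qx, _ = np.linalg.qr(X)
    qy, _ = np.linalg.qr(Y)
    s = np.linalg.svd(qx.T @ qy, compute_uv=False)  # sorted descending
    return np.clip(s, 0.0, 1.0)

rng = np.random.default_rng(1)
common = rng.standard_normal((500, 1))            # shared (coherent) source
X = np.hstack([common, rng.standard_normal((500, 2))])
Y = np.hstack([common + 0.1 * rng.standard_normal((500, 1)),
               rng.standard_normal((500, 2))])
rho = canonical_correlations(X, Y)                # leading rho near 1
```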

  12. Decomposition of lignin from sugar cane bagasse during ozonation process monitored by optical and mass spectrometries.

    PubMed

    Souza-Corrêa, J A; Ridenti, M A; Oliveira, C; Araújo, S R; Amorim, J

    2013-03-21

    Mass spectrometry was used to monitor neutral chemical species from sugar cane bagasse that could volatilize during the bagasse ozonation process. Lignin fragments and some radicals liberated by direct ozone reaction with the biomass structure were detected. Ozone density was monitored during the ozonation by optical absorption spectroscopy. The optical results indicated that the ozone interaction with the bagasse material was better for bagasse particle sizes less than or equal to 0.5 mm. Both techniques have shown that the best condition for the ozone diffusion in the bagasse was at 50% of its moisture content. In addition, Fourier transform infrared spectroscopy (FTIR) and scanning electron microscopy (SEM) were employed to analyze the lignin bond disruptions and morphology changes of the bagasse surface that occurred due to the ozonolysis reactions as well. Appropriate chemical characterization of the lignin content in bagasse before and after its ozonation was also carried out.

  13. Microscopic Approaches to Decomposition and Burning Processes of a Micro Plastic Resin Particle under Abrupt Heating

    NASA Astrophysics Data System (ADS)

    Ohiwa, Norio; Ishino, Yojiro; Yamamoto, Atsunori; Yamakita, Ryuji

    To elucidate the possibility and availability of thermal recycling of waste plastic resin from a basic and microscopic viewpoint, a series of abrupt heating processes of a spherical micro plastic particle having a diameter of about 200 μm is observed, when it is abruptly exposed to hot oxidizing combustion gas. Three ingenious devices are introduced and two typical plastic resins of polyethylene terephthalate and polyethylene are used. In this paper the dependency of internal and external appearances of residual plastic embers on the heating time and the ingredients of plastic resins is optically analyzed, along with appearances of internal micro bubbling, multiple micro explosions and jets, and micro diffusion flames during abrupt heating. Based on temporal variations of the surface area of a micro plastic particle, the apparent burning rate constant is also evaluated and compared with those of well-known volatile liquid fuels.

  14. A process-based decomposition of decadal-scale surface temperature evolutions over East Asia

    NASA Astrophysics Data System (ADS)

    Chen, Junwen; Deng, Yi; Lin, Wenshi; Yang, Song

    2017-08-01

    This study partitions the observed decadal evolution of surface temperature, and surface temperature differences between two decades (early 2000s and early 1980s), over the East Asian continent into components associated with individual radiative and non-radiative (dynamical) processes in the context of the coupled atmosphere-surface climate feedback-response analysis method (CFRAM). Rapid warming in this region occurred in the late 1980s and early 2000s, with a transient pause of warming between the two periods. The rising CO2 concentration provides a sustained, region-wide warming contribution, and the surface albedo effect, largely related to snow cover change, is important for warming/cooling over high-latitude and high-elevation regions. Sensible heat flux and surface dynamics dominate the evolution of surface temperature, with latent heat flux and atmospheric dynamics working against them, mostly through large-scale and convective/turbulent heat transport. Clouds, via their shortwave effect, provide positive contributions to warming over southern Siberia and South China. The longwave effect associated with water vapor change contributes significant warming over northern India, the Tibetan Plateau, and central Siberia. Impacts of solar irradiance and ozone changes are relatively small. The strongest year-to-year temperature fluctuations occurred during a rapid warming (1987-1988) and a rapid cooling (1995-1996) period. The pattern of the rapid warming receives major positive contributions from sensible heat flux, with changes in atmospheric dynamics, water vapor, clouds, and albedo providing secondary positive contributions, while surface dynamics and latent heat flux provide negative contributions. The signs of the contributions from individual processes to the rapid cooling are almost opposite to those to the rapid warming.

  15. Decomposition of nitrotoluenes from trinitrotoluene manufacturing process by Electro-Fenton oxidation.

    PubMed

    Chen, Wen-Shing; Liang, Jing-Song

    2008-06-01

    Oxidative degradation of dinitrotoluene (DNT) isomers and 2,4,6-trinitrotoluene (TNT) in spent acid was conducted with Electro-Fenton's reagents. Electrolytic experiments were carried out to elucidate the influence of various operating parameters on the performance of mineralization of total organic compounds (TOC) in spent acid, including reaction temperature, dosage of oxygen, sulfuric acid concentration and dosage of ferrous ions. It is worth noting that organic compounds could be completely destroyed by the Electro-Fenton reagent with in situ electrogenerated hydrogen peroxide obtained from cathodic reduction of oxygen, which was mainly supplied by anodic oxidation of water. Based on spectra from gas chromatography/mass spectrometry, it is proposed that initial denitration of 2,4,6-TNT gives rise to the formation of 2,4-DNT and/or 2,6-DNT, which undergo cleavage of a nitro group to form o-mononitrotoluene, followed by denitration to toluene and subsequent oxidation of the methyl group. Owing to the simultaneous removal of TOC and part of the water, the electrolytic method established here could potentially be applied in practice to regenerate spent acid from toluene nitration processes.

  16. Application of Contois, Tessier, and first-order kinetics for modeling and simulation of a composting decomposition process.

    PubMed

    Wang, Yongjiang; Witarsa, Freddy

    2016-11-01

    An integrated model was developed by associating separate degradation kinetics with an array of degradations during the decomposition process, which is a novelty of this study. The raw composting material was divided into soluble matter, hemi-/cellulose, lignin, NBVS, ash, water, and free air-space. Considering their specific capabilities for expressing certain degradation phenomena, Contois, Tessier (an extension of the Monod kinetic), and first-order kinetics were employed to calculate the biochemical rates. It was found that the degradation of soluble substrate was relatively fast, reaching a maximum rate of about 0.4 per hour. The hydrolysis of lignin was rate-limiting, with a maximum rate of about 0.04 per hour. The dry-based peak concentrations of soluble, hemi-/cellulose, and lignin degraders were about 0.9, 0.2 and 0.3 kg m(-3), respectively. The model developed serves as a platform for simulating the degradation of composting material separated into the different components used in this study.
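    The three rate laws named above have standard forms. A sketch of each (parameter values in the example call are illustrative, not the paper's fitted values):

```python
import math

def contois(mu_max: float, S: float, X: float, Ks: float) -> float:
    """Contois kinetics: specific rate depends on substrate-to-biomass ratio."""
    return mu_max * S / (Ks * X + S)

def tessier(mu_max: float, S: float, Ks: float) -> float:
    """Tessier kinetics: exponential saturation in substrate concentration."""
    return mu_max * (1.0 - math.exp(-S / Ks))

def first_order(k: float, S: float) -> float:
    """First-order hydrolysis, e.g. for the rate-limiting lignin fraction."""
    return k * S

# illustrative: soluble-substrate rate capped near mu_max = 0.4 per hour
r = contois(0.4, S=10.0, X=1.0, Ks=0.5)
```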

  17. Morphology and phase modifications of MoO{sub 3} obtained by metallo-organic decomposition processes

    SciTech Connect

    Barros Santos, Elias de; Martins de Souza e Silva, Juliana; Odone Mazali, Italo

    2010-11-15

    Molybdenum oxide samples were prepared using different temperatures and atmospheric conditions by metallo-organic decomposition processes and were characterized by XRD, SEM and DRS UV/Vis and Raman spectroscopies. Variation in the synthesis conditions resulted in solids with different morphologies and oxygen vacancy concentrations. Intense characteristic Raman bands of crystalline orthorhombic {alpha}-MoO{sub 3}, occurring at 992 cm{sup -1} and 820 cm{sup -1}, are observed and their shifts can be related to the differences in the structure of the solids obtained. The sample obtained under nitrogen flow at 1073 K is a phase mixture of orthorhombic {alpha}-MoO{sub 3} and monoclinic {beta}-MoO{sub 3}. The characterization results suggest that the molybdenum oxide samples are non-stoichiometric and are described as MoO{sub x} with x < 2.94. Variations in the reaction conditions make it possible to tune the number of oxygen defects and the band gap of the final material.

  18. Modified detrended fluctuation analysis based on empirical mode decomposition for the characterization of anti-persistent processes

    NASA Astrophysics Data System (ADS)

    Qian, Xi-Yuan; Gu, Gao-Feng; Zhou, Wei-Xing

    2011-11-01

    Detrended fluctuation analysis (DFA) is a simple but very efficient method for investigating the power-law long-term correlations of non-stationary time series, in which a detrending step is necessary to obtain the local fluctuations at different timescales. We propose to determine the local trends through empirical mode decomposition (EMD) and perform the detrending operation by removing the EMD-based local trends, which gives an EMD-based DFA method. Similarly, we also propose a modified multifractal DFA algorithm, called an EMD-based MFDFA. The performance of the EMD-based DFA and MFDFA methods is assessed with extensive numerical experiments based on fractional Brownian motion and multiplicative cascading process. We find that the EMD-based DFA method performs better than the classic DFA method in the determination of the Hurst index when the time series is strongly anticorrelated and the EMD-based MFDFA method outperforms the traditional MFDFA method when the moment order q of the detrended fluctuations is positive. We apply the EMD-based MFDFA to the 1 min data of Shanghai Stock Exchange Composite index, and the presence of multifractality is confirmed. We also analyze the daily Austrian electricity prices and confirm its anti-persistence.
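    For reference, classic DFA with polynomial detrending can be sketched as follows; the EMD-based variant discussed above replaces the per-window polynomial fit with EMD-derived local trends (this sketch implements only the classic method):

```python
import numpy as np

def dfa(x, scales, order=1):
    """Classic DFA: build the profile, detrend it in windows of size n with a
    polynomial of the given order, and estimate the Hurst index from the
    log-log slope of the fluctuation function F(n)."""
    y = np.cumsum(x - np.mean(x))                  # profile
    F = []
    for n in scales:
        n_seg = len(y) // n
        msq = []
        for i in range(n_seg):
            seg = y[i * n:(i + 1) * n]
            t = np.arange(n)
            trend = np.polyval(np.polyfit(t, seg, order), t)
            msq.append(np.mean((seg - trend) ** 2))
        F.append(np.sqrt(np.mean(msq)))
    h, _ = np.polyfit(np.log(scales), np.log(F), 1)  # Hurst index = slope
    return h

rng = np.random.default_rng(0)
h = dfa(rng.standard_normal(4096), scales=[16, 32, 64, 128, 256])
# for white noise the estimate should be close to 0.5
```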

  19. Comparison of the thermal decomposition processes of several aminoalcohol-based ZnO inks with one containing ethanolamine

    NASA Astrophysics Data System (ADS)

    Gómez-Núñez, Alberto; Roura, Pere; López, Concepción; Vilà, Anna

    2016-09-01

    Four inks for the production of ZnO semiconducting films have been prepared with zinc acetate dihydrate as precursor salt and one among the following aminoalcohols: aminopropanol (APr), aminomethyl butanol (AMB), aminophenol (APh) and aminobenzyl alcohol (AB) as stabilizing agent. Their thermal decomposition process has been analyzed in situ by thermogravimetric analysis (TGA), differential scanning calorimetry (DSC) and evolved gas analysis (EGA), whereas the solid product has been analysed ex-situ by X-ray diffraction (XRD) and infrared spectroscopy (IR). Although, except for the APh ink, crystalline ZnO is already obtained at 300 °C, the films contain an organic residue that evolves at higher temperature in the form of a large variety of nitrogen-containing cyclic compounds. The results indicate that APr can be a better stabilizing agent than ethanolamine (EA). It gives larger ZnO crystal sizes with similar carbon content. However, a common drawback of all the amino stabilizers (EA included) is that nitrogen atoms have not been completely removed from the ZnO film at the highest temperature of our experiments (600 °C).

  20. Study of fundamental chemical processes in explosive decomposition by laser-powered homogeneous pyrolysis. Final report, 1 Jul 78-31 Aug 81

    SciTech Connect

    McMillen, D.F.; Golden, D.M.

    1981-11-12

    Very Low-Pressure Pyrolysis studies of 2,4-dinitrotoluene decomposition resulted in decomposition rates consistent with log (k/s) = 12.1 - 43.9/2.3 RT. These results support the conclusion that previously reported 'anomalously' low Arrhenius parameters for the homogeneous gas-phase decomposition of ortho-nitrotoluene actually represent surface-catalyzed reactions. Preliminary qualitative results for pyrolysis of ortho-nitrotoluene in the absence of hot reactor walls, using the Laser-Powered Homogeneous Pyrolysis (LPHP) technique, provide further support for this conclusion: only products resulting from Ph-NO2 bond scission were observed; no products indicating complex intramolecular oxidation-reduction or elimination processes could be detected. The LPHP technique was successfully modified to use a pulsed laser and a heated flow system, so that the technique becomes suitable for the study of surface-sensitive, low vapor pressure substrates such as TNT. The validity and accuracy of the technique were demonstrated by applying it to the decomposition of substances whose Arrhenius parameters for decomposition were already well known. IR-fluorescence measurements show that the temperature-space-time behavior under the present LPHP conditions is in agreement with expectations and with the requirements which must be met if the method is to have quantitative validity. LPHP studies of azoisopropane decomposition, chosen as a radical-forming test reaction, show the accepted literature parameters to be substantially in error and indicate that the correct values are in all probability much closer to those measured in this work: log (k/s) = 13.9 - 41.2/2.3 RT.
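    Arrhenius expressions in the form quoted above (log10 of k in s^-1, activation energy in kcal/mol) convert to a rate constant at a given temperature as follows; the evaluation temperature here is illustrative:

```python
# Rate constant from Arrhenius parameters of the form
# log10(k / s^-1) = A - Ea / (2.303 R T), with Ea in kcal/mol.
R_KCAL = 1.987e-3  # gas constant, kcal mol^-1 K^-1

def arrhenius_k(log10_A: float, Ea_kcal: float, T_kelvin: float) -> float:
    return 10.0 ** (log10_A - Ea_kcal / (2.303 * R_KCAL * T_kelvin))

# azoisopropane parameters from this report: log (k/s) = 13.9 - 41.2/2.3 RT,
# evaluated at an arbitrary 900 K
k_900K = arrhenius_k(13.9, 41.2, 900.0)
```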

  1. Exotic species as modifiers of ecosystem processes: Litter decomposition in native and invaded secondary forests of NW Argentina

    NASA Astrophysics Data System (ADS)

    Aragón, Roxana; Montti, Lia; Ayup, María Marta; Fernández, Romina

    2014-01-01

    Invasions of exotic tree species can cause profound changes in community composition and structure, and may even cause legacy effect on nutrient cycling via litter production. In this study, we compared leaf litter decomposition of two invasive exotic trees (Ligustrum lucidum and Morus sp.) and two dominant native trees (Cinnamomum porphyria and Cupania vernalis) in native and invaded (Ligustrum-dominated) forest stands in NW Argentina. We measured leaf attributes and environmental characteristics in invaded and native stands to isolate the effects of litter quality and habitat characteristics. Species differed in their decomposition rates and, as predicted by the different species colonization status (pioneer vs. late successional), exotic species decayed more rapidly than native ones. Invasion by L. lucidum modified environmental attributes by reducing soil humidity. Decomposition constants (k) tended to be slightly lower (-5%) for all species in invaded stands. High SLA, low tensile strength, and low C:N of Morus sp. distinguish this species from the native ones and explain its higher decomposition rate. Contrary to our expectations, L. lucidum leaf attributes were similar to those of native species. Decomposition rates also differed between the two exotic species (35% higher in Morus sp.), presumably due to leaf attributes and colonization status. Given the high decomposition rate of L. lucidum litter (more than 6 times that of natives) we expect an acceleration of nutrient circulation at ecosystem level in Ligustrum-dominated stands. This may occur in spite of the modified environmental conditions that are associated with L. lucidum invasion.
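    Decomposition constants (k) of the kind compared above are conventionally obtained from the single-exponential mass-loss (Olson) model; a minimal sketch with made-up numbers, not values from this study:

```python
import math

def decay_constant(frac_mass_remaining: float, t_years: float) -> float:
    """Litter decomposition constant k (yr^-1) from the single-exponential
    mass-loss model m(t) = m0 * exp(-k t)."""
    return -math.log(frac_mass_remaining) / t_years

# e.g. 50% of litter mass remaining after one year (illustrative)
k = decay_constant(0.5, 1.0)  # ~0.69 per year
```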

  2. Fuel decomposition and boundary-layer combustion processes of hybrid rocket motors

    NASA Technical Reports Server (NTRS)

    Chiaverini, Martin J.; Harting, George C.; Lu, Yeu-Cherng; Kuo, Kenneth K.; Serin, Nadir; Johnson, David K.

    1995-01-01

    Using a high-pressure, two-dimensional hybrid motor, an experimental investigation was conducted on fundamental processes involved in hybrid rocket combustion. HTPB (Hydroxyl-terminated Polybutadiene) fuel cross-linked with diisocyanate was burned with GOX under various operating conditions. Large-amplitude pressure oscillations were encountered in earlier test runs. After identifying the source of instability and decoupling the GOX feed-line system and combustion chamber, the pressure oscillations were drastically reduced from +/-20% of the localized mean pressure to an acceptable range of +/-1.5%. Embedded fine-wire thermocouples indicated that the surface temperature of the burning fuel was around 1000 K, depending upon axial location and operating conditions. Also, except near the leading-edge region, the subsurface thermal wave profiles in the upstream locations are thicker than those in the downstream locations, since the solid-fuel regression rate, in general, increases with distance along the fuel slab. The recovered solid fuel slabs in the laminar portion of the boundary layer exhibited smooth surfaces, indicating the existence of a liquid melt layer on the burning fuel surface in the upstream region. After the transition section, which displayed distinct transverse striations, the surface roughness pattern became quite random and very pronounced in the downstream turbulent boundary-layer region. Both real-time X-ray radiography and ultrasonic pulse-echo techniques were used to determine the instantaneous web thickness burned and instantaneous solid-fuel regression rates over certain portions of the fuel slabs. Globally averaged and axially dependent but time-averaged regression rates were also obtained and presented.

  3. Decomposition techniques

    USGS Publications Warehouse

    Chao, T.T.; Sanzolone, R.F.

    1992-01-01

    Sample decomposition is a fundamental and integral step in the procedure of geochemical analysis. It is often the limiting factor for sample throughput, especially with the recent application of fast, modern multi-element measurement instrumentation. The complexity of geological materials makes it necessary to choose a sample decomposition technique that is compatible with the specific objective of the analysis. When selecting a decomposition technique, consideration should be given to the chemical and mineralogical characteristics of the sample, the elements to be determined, precision and accuracy requirements, sample throughput, technical capability of personnel, and time constraints. This paper addresses these concerns and discusses the attributes and limitations of many techniques of sample decomposition, along with examples of their application to geochemical analysis. The chemical properties of reagents, as they relate to their function as decomposition agents, are also reviewed. The section on acid dissolution techniques addresses the various inorganic acids that are used individually or in combination in both open and closed systems. Fluxes used in sample fusion are discussed. The promising microwave-oven technology and the emerging field of automation are also examined. A section on applications highlights the use of decomposition techniques for the determination of Au, platinum group elements (PGEs), Hg, U, hydride-forming elements, rare earth elements (REEs), and multi-elements in geological materials. Partial dissolution techniques used for geochemical exploration, which have been treated in detail elsewhere, are not discussed here; nor are fire assaying for noble metals and decomposition techniques for X-ray fluorescence or nuclear methods. © 1992.

  4. Investigation of thermal decomposition as the kinetic process that causes the loss of crystalline structure in sucrose using a chemical analysis approach (part II).

    PubMed

    Lee, Joo Won; Thomas, Leonard C; Jerrell, John; Feng, Hao; Cadwallader, Keith R; Schmidt, Shelly J

    2011-01-26

    High performance liquid chromatography (HPLC) on a calcium-form cation exchange column with refractive index and photodiode array detection was used to investigate thermal decomposition as the cause of the loss of crystalline structure in sucrose. Crystalline sucrose structure was removed using a standard differential scanning calorimetry (SDSC) method (fast heating method) and a quasi-isothermal modulated differential scanning calorimetry (MDSC) method (slow heating method). In the fast heating method, initial decomposition components, glucose (0.365%) and 5-HMF (0.003%), were found in the sucrose sample coincident with the onset temperature of the first endothermic peak. In the slow heating method, glucose (0.411%) and 5-HMF (0.003%) were found in the sucrose sample coincident with the holding time (50 min) at which the reversing heat capacity began to increase. In both methods, even before the crystalline structure in sucrose was completely removed, unidentified thermal decomposition components were formed. These results prove not only that the loss of crystalline structure in sucrose is caused by thermal decomposition, but also that it is achieved via a time-temperature combination process. This knowledge is important for quality assurance purposes and for developing new sugar-based food and pharmaceutical products. In addition, this research provides new insights into the caramelization process, showing that caramelization can occur under low-temperature conditions (significantly below the melting temperature reported in the literature), albeit over longer times.

  5. Mother canonical tensor model

    NASA Astrophysics Data System (ADS)

    Narain, Gaurav; Sasakura, Naoki

    2017-07-01

    The canonical tensor model (CTM) is a tensor model formulated in the Hamilton formalism as a totally constrained system with first class constraints, the algebraic structure of which is very similar to that of the ADM formalism of general relativity. It has recently been shown that a formal continuum limit of the classical equation of motion of the CTM in a derivative expansion of the tensor up to the fourth derivatives agrees with that of a coupled system of general relativity and a scalar field in the Hamilton-Jacobi formalism. This suggests the existence of a ‘mother’ tensor model which derives the CTM through the Hamilton-Jacobi procedure, and we have successfully found such a ‘mother’ CTM (mCTM) in this paper. The quantization of the mCTM is as straightforward as that of the CTM. However, we have not been able to identify all the secondary constraints, and therefore the full structure of the model has been left for future study. Nonetheless, we have found some exact physical wave functions and classical phase spaces, which can be shown to solve the primary and all the (possibly infinite) secondary constraints in the quantum and classical cases, respectively, and have thereby proven the non-triviality of the model. It has also been shown that the mCTM has more interesting dynamics than the CTM from the perspective of randomly connected tensor networks.

  6. Rank-based decompositions of morphological templates.

    PubMed

    Sussner, P; Ritter, G X

    2000-01-01

    Methods for matrix decomposition have found numerous applications in image processing, in particular for the problem of template decomposition. Since existing matrix decomposition techniques are mainly concerned with the linear domain, we consider it timely to investigate matrix decomposition techniques in the nonlinear domain with applications in image processing. The mathematical basis for these investigations is the new theory of rank within minimax algebra. Thus far, only minimax decompositions of rank 1 and rank 2 matrices into outer product expansions are known to the image processing community. We derive a heuristic algorithm for the decomposition of matrices having arbitrary rank.
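    The minimax (max-plus) notion of rank invoked in this record can be illustrated with a small sketch. The template values and the rank-2 split below are hypothetical, chosen only to show the mechanics, and are not taken from the paper:

    ```python
    import numpy as np

    # In minimax (max-plus) algebra, the "outer product" of vectors r and c is
    # the matrix t[i, j] = r[i] + c[j]; such a matrix has minimax rank 1.
    r = np.array([0.0, 1.0, 2.0])
    c = np.array([0.0, 3.0, 1.0])
    t1 = r[:, None] + c[None, :]              # rank-1 template

    # A rank-2 decomposition reconstructs a template as the elementwise max
    # of two rank-1 terms: t[i, j] = max(r1[i] + c1[j], r2[i] + c2[j]).
    r2 = np.array([5.0, 0.0, 0.0])
    c2 = np.array([0.0, 0.0, 4.0])
    t2 = np.maximum(t1, r2[:, None] + c2[None, :])
    ```

    Decomposing a large template this way is what makes the technique useful in practice: a morphological dilation by the full template can be computed as successive dilations by the smaller rank-1 factors.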

  7. Tensor Galileons and gravity

    NASA Astrophysics Data System (ADS)

    Chatzistavrakidis, Athanasios; Khoo, Fech Scen; Roest, Diederik; Schupp, Peter

    2017-03-01

    The particular structure of Galileon interactions allows for higher-derivative terms while retaining second order field equations for scalar fields and Abelian p-forms. In this work we introduce an index-free formulation of these interactions in terms of two sets of Grassmannian variables. We employ this to construct Galileon interactions for mixed-symmetry tensor fields and coupled systems thereof. We argue that these tensors are the natural generalization of scalars with Galileon symmetry, similar to p-forms and scalars with a shift-symmetry. The simplest case corresponds to linearised gravity with Lovelock invariants, relating the Galileon symmetry to diffeomorphisms. Finally, we examine the coupling of a mixed-symmetry tensor to gravity, and demonstrate in an explicit example that the inclusion of appropriate counterterms retains second order field equations.

  8. Singular value decomposition for photon-processing nuclear imaging systems and applications for reconstruction and computing null functions

    PubMed Central

    Jha, Abhinav K.; Barrett, Harrison H.; Frey, Eric C.; Clarkson, Eric; Caucci, Luca; Kupinski, Matthew A.

    2015-01-01

    Recent advances in technology are enabling a new class of nuclear imaging systems consisting of detectors that use real-time maximum-likelihood (ML) methods to estimate the interaction position, deposited energy, and other attributes of each photon-interaction event and store these attributes in a list format. This class of systems, which we refer to as photon-processing (PP) nuclear imaging systems, can be described by a fundamentally different mathematical imaging operator that allows processing of the continuous-valued photon attributes on a per-photon basis. Unlike conventional photon-counting (PC) systems that bin the data into images, PP systems do not have any binning-related information loss. Mathematically, while PC systems have an infinite-dimensional null space due to dimensionality considerations, PP systems do not necessarily suffer from this issue. Therefore, PP systems have the potential to provide improved performance in comparison to PC systems. To study these advantages, in this paper, we propose a framework to perform the singular-value decomposition (SVD) of the PP imaging operator. We use this framework to perform the SVD of operators that describe a general two-dimensional (2D) planar linear shift-invariant (LSIV) PP system and a hypothetical continuously rotating 2D single-photon emission computed tomography (SPECT) PP system. We then discuss two applications of the SVD framework. The first application is to decompose the object being imaged by the PP imaging system into measurement and null components. We compare these components to the measurement and null components obtained with PC systems. In the process, we also present a procedure to compute the null functions for a PC system. The second application is designing analytical reconstruction algorithms for PP systems. The proposed analytical method exploits the fact that PP systems acquire data in a continuous domain to estimate a continuous object function. 
    The approach is parallelizable.
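    The measurement/null split described in this record can be sketched with the SVD of a small discretized operator (the photon-counting analogue); the matrix sizes and data below are illustrative assumptions, not the paper's imaging systems:

    ```python
    import numpy as np

    # Hypothetical small system: 4 measurements of a 10-voxel object, so the
    # operator H has a non-trivial null space.
    rng = np.random.default_rng(0)
    H = rng.standard_normal((4, 10))   # discretized imaging operator
    f = rng.standard_normal(10)        # object vector

    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    V = Vt.T                           # right singular vectors span the measurement space

    f_meas = V @ (V.T @ f)             # component visible to the system
    f_null = f - f_meas                # null function: H @ f_null is (numerically) zero
    ```

    The null component `f_null` is exactly the part of the object that the system cannot measure, which is what makes the comparison between PC and PP operators meaningful.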

  9. Singular value decomposition for photon-processing nuclear imaging systems and applications for reconstruction and computing null functions.

    PubMed

    Jha, Abhinav K; Barrett, Harrison H; Frey, Eric C; Clarkson, Eric; Caucci, Luca; Kupinski, Matthew A

    2015-09-21

    Recent advances in technology are enabling a new class of nuclear imaging systems consisting of detectors that use real-time maximum-likelihood (ML) methods to estimate the interaction position, deposited energy, and other attributes of each photon-interaction event and store these attributes in a list format. This class of systems, which we refer to as photon-processing (PP) nuclear imaging systems, can be described by a fundamentally different mathematical imaging operator that allows processing of the continuous-valued photon attributes on a per-photon basis. Unlike conventional photon-counting (PC) systems that bin the data into images, PP systems do not have any binning-related information loss. Mathematically, while PC systems have an infinite-dimensional null space due to dimensionality considerations, PP systems do not necessarily suffer from this issue. Therefore, PP systems have the potential to provide improved performance in comparison to PC systems. To study these advantages, we propose a framework to perform the singular-value decomposition (SVD) of the PP imaging operator. We use this framework to perform the SVD of operators that describe a general two-dimensional (2D) planar linear shift-invariant (LSIV) PP system and a hypothetical continuously rotating 2D single-photon emission computed tomography (SPECT) PP system. We then discuss two applications of the SVD framework. The first application is to decompose the object being imaged by the PP imaging system into measurement and null components. We compare these components to the measurement and null components obtained with PC systems. In the process, we also present a procedure to compute the null functions for a PC system. The second application is designing analytical reconstruction algorithms for PP systems. The proposed analytical approach exploits the fact that PP systems acquire data in a continuous domain to estimate a continuous object function. The approach is parallelizable and

  10. Singular value decomposition for photon-processing nuclear imaging systems and applications for reconstruction and computing null functions

    NASA Astrophysics Data System (ADS)

    Jha, Abhinav K.; Barrett, Harrison H.; Frey, Eric C.; Clarkson, Eric; Caucci, Luca; Kupinski, Matthew A.

    2015-09-01

    Recent advances in technology are enabling a new class of nuclear imaging systems consisting of detectors that use real-time maximum-likelihood (ML) methods to estimate the interaction position, deposited energy, and other attributes of each photon-interaction event and store these attributes in a list format. This class of systems, which we refer to as photon-processing (PP) nuclear imaging systems, can be described by a fundamentally different mathematical imaging operator that allows processing of the continuous-valued photon attributes on a per-photon basis. Unlike conventional photon-counting (PC) systems that bin the data into images, PP systems do not have any binning-related information loss. Mathematically, while PC systems have an infinite-dimensional null space due to dimensionality considerations, PP systems do not necessarily suffer from this issue. Therefore, PP systems have the potential to provide improved performance in comparison to PC systems. To study these advantages, we propose a framework to perform the singular-value decomposition (SVD) of the PP imaging operator. We use this framework to perform the SVD of operators that describe a general two-dimensional (2D) planar linear shift-invariant (LSIV) PP system and a hypothetical continuously rotating 2D single-photon emission computed tomography (SPECT) PP system. We then discuss two applications of the SVD framework. The first application is to decompose the object being imaged by the PP imaging system into measurement and null components. We compare these components to the measurement and null components obtained with PC systems. In the process, we also present a procedure to compute the null functions for a PC system. The second application is designing analytical reconstruction algorithms for PP systems. The proposed analytical approach exploits the fact that PP systems acquire data in a continuous domain to estimate a continuous object function. The approach is parallelizable and

  11. Characterization of a sucrose/starch matrix through positron annihilation lifetime spectroscopy: unravelling the decomposition and glass transition processes.

    PubMed

    Sharma, Sandeep Kumar; Roudaut, Gaëlle; Fabing, Isabelle; Duplâtre, Gilles

    2010-11-14

    The triplet state of positronium, o-Ps, is used as a probe to characterize a starch-20% w/w sucrose matrix as a function of temperature (T). A two-step decomposition (of sucrose, and then starch) starts at 440 K, as shown by a decrease in the o-Ps intensity (I(3)) and lifetime (τ(3)), the latter also disclosing the occurrence of a glass transition. Upon sucrose decomposition, the matrix acquires properties (reduced size and density of nanoholes) that are different from those of pure starch. A model is successfully established that describes the variations of both I(3) and τ(3) with T and yields a glass transition temperature, T(g) = (446 ± 2) K, in spite of the concomitant sucrose decomposition. Unexpectedly, the starch volume fraction (as probed through thermal gravimetry) decreases with T at a higher rate than the free volume fraction (as probed through PALS).

  12. Killing tensors on tori

    NASA Astrophysics Data System (ADS)

    Heil, Konstantin; Moroianu, Andrei; Semmelmann, Uwe

    2017-07-01

    We show that Killing tensors on conformally flat n-dimensional tori whose conformal factor depends only on one variable are polynomials in the metric and in the Killing vector fields. In other words, every first integral of the geodesic flow that is polynomial in the momenta on the sphere bundle of such a torus is linear in the momenta.

  13. Woodland Decomposition.

    ERIC Educational Resources Information Center

    Napier, J.

    1988-01-01

    Outlines the role of the main organisms involved in woodland decomposition and discusses some of the variables affecting the rate of nutrient cycling. Suggests practical work that may be of value to high school students either as standard practice or long-term projects. (CW)

  15. Theoretical investigations of elementary processes in the chemical vapor deposition of silicon from silane. Unimolecular decomposition of SiH4

    NASA Astrophysics Data System (ADS)

    Viswanathan, R.; Thompson, Donald L.; Raff, L. M.

    1984-05-01

    The rates and mechanism for the unimolecular decomposition of SiH4 have been investigated using quasiclassical trajectory methods to follow the dynamics and Metropolis sampling procedures to average over the initial SiH4 phase space. The semiempirical potential-energy surface has been fitted to scaled SCF calculations and to a variety of experimental data. It gives the correct SiH4 equilibrium structure, reaction endothermicities, and bond energies for SiH4, SiH3, and SiH2. All hydrogen atoms are treated in an equivalent fashion. Excellent first-order decay plots are obtained for the microcanonical rates for the total SiH4 decomposition as well as for the separate decomposition channels. The low-energy pathway is found to be a three-center elimination to form SiH2+H2. The decomposition channel forming SiH3+H becomes important only at internal SiH4 energies in excess of 5.0 eV. Comparison of computed falloff curves with RRKM calculations fitted to experimental results indicates that the critical threshold energy for the three-center reaction lies in the range 2.10 < E0 < … eV. Processes that resemble ``half-collisions'' of SiH3+H are found to be important decomposition pathways.
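    The first-order decay analysis mentioned in this record amounts to fitting ln N(t)/N(0) = -kt to the surviving-trajectory counts; the rate constant and data in this sketch are synthetic, not the paper's results:

    ```python
    import numpy as np

    # Synthetic survival data for a first-order (exponential) decay,
    # N(t) = N0 * exp(-k t). The microcanonical rate constant is recovered
    # from the slope of the first-order decay plot, ln(N/N0) versus t.
    k_true = 0.8                          # ps^-1 (assumed)
    t = np.linspace(0.0, 5.0, 6)          # ps
    N = 1000.0 * np.exp(-k_true * t)      # surviving (undecomposed) trajectories

    slope, intercept = np.polyfit(t, np.log(N / N[0]), 1)
    k_est = -slope
    ```

    In a real trajectory study the counts are noisy, so the straightness of this plot is itself a check that the decomposition is first order.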

  16. In-situ and self-distributed: A new understanding of the catalyzed thermal decomposition process of ammonium perchlorate over Nd2O3

    SciTech Connect

    Zou, Min; Wang, Xin; Jiang, Xiaohong; Lu, Lude

    2014-05-01

    The catalyzed thermal decomposition process of ammonium perchlorate (AP) over neodymium oxide (Nd2O3) was investigated. The catalytic performances of nanometer-sized Nd2O3 and micrometer-sized Nd2O3 were evaluated by differential scanning calorimetry (DSC). In contrast to the usual expectation, the catalysts of different sizes showed nearly identical catalytic activities. Based on the structural and morphological variation of the catalysts during the reaction, combined with mass-spectrum analyses and studies of the unmixed style, a new understanding of this catalytic process was proposed. We believe that the newly formed neodymium oxychloride (NdOCl) is the real catalytic species in the overall thermal decomposition of AP over Nd2O3. Meanwhile, the “self-distributed” procedure occurring within the reaction also contributes to the improvement of the overall catalytic activity. This work is of great value in understanding the roles of micrometer-sized catalysts in heterogeneous reactions, especially solid-solid reactions that can generate a large quantity of gaseous species. - Graphical abstract: In-situ and self-distributed reaction process in the thermal decomposition of AP catalyzed by Nd2O3. - Highlights: • Micro- and nano-Nd2O3 for catalytic thermal decomposition of AP. • No essential differences in their catalytic performances. • Structural and morphological variation of the catalysts reveals the catalytic mechanism. • This catalytic process is an “in-situ and self-distributed” one.

  17. Effect of mountain climatic elevation gradient and litter origin on decomposition processes: long-term experiment with litter-bags

    NASA Astrophysics Data System (ADS)

    Klimek, Beata; Niklińska, Maria; Chodak, Marcin

    2013-04-01

    Temperature is one of the most important factors affecting soil organic matter decomposition. Mountain areas, with their vertical gradients of temperature and precipitation, provide an opportunity to observe changes similar to those observed across latitudes and may serve as an approximation for climatic change. The aim of the study was to compare the effects of climatic conditions and initial litter properties on decomposition processes and the thermal sensitivity of forest litter. The litter was collected at three altitudes (600, 900, 1200 m a.s.l.) in the Beskidy Mts (southern Poland), put into litter-bags and exposed in the field starting in autumn 2011. The litter collected at each altitude was exposed both at the altitude from which it was taken and at the two other altitudes. The litter-bags were laid out on five mountains, treated as replicates. Starting in April 2012, single sets of litter-bags were collected every five weeks. The laboratory measurements included determination of dry-mass loss and chemical composition (Corg, Nt, St, Mg, Ca, Na, K, Cu, Zn) of the litter. In additional litter-bag sets, taken in spring and autumn 2012, microbial properties were measured. To determine the effect of litter properties and the climatic conditions of the elevation sites on the thermal sensitivity of decomposing litter, the respiration rate of the litter was measured at 5°C, 15°C and 25°C and expressed as Q10 L and Q10 H (ratios of respiration rate between 5°C and 15°C and between 15°C and 25°C, respectively). The functional diversity of soil microbes was measured with Biolog® ECO plates, and the structural diversity with phospholipid fatty acids (PLFA). Litter mass lost during the first year of incubation was characterized by high variability, with mean mass loss of up to 30% of the initial mass. After the autumn sampling we showed that the mean respiration rate of litter (dry mass) from the 600 m a.s.l. site exposed at 600 m a.s.l. was the highest at each tested temperature. In turn, the lowest mean
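    The Q10 L and Q10 H indices used in this record are simple ratios of respiration rates measured 10 °C apart; the rates in this sketch are hypothetical values, not the study's measurements:

    ```python
    def q10(rate_at_T, rate_at_T_plus_10):
        """Temperature sensitivity: Q10 = R(T + 10 °C) / R(T)."""
        return rate_at_T_plus_10 / rate_at_T

    # Hypothetical litter respiration rates at 5, 15 and 25 °C (arbitrary units).
    r5, r15, r25 = 0.8, 1.6, 2.9

    q10_L = q10(r5, r15)    # sensitivity between 5 °C and 15 °C
    q10_H = q10(r15, r25)   # sensitivity between 15 °C and 25 °C
    ```

    A Q10 of 2 means the respiration rate doubles over a 10 °C warming, which is why the two indices together indicate whether thermal sensitivity itself changes with temperature.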

  18. xTensor: a Free Fast Abstract Tensor Manipulator

    NASA Astrophysics Data System (ADS)

    Martín-García, José M.

    2008-09-01

    The package xTensor is introduced, a very fast and general manipulator of tensor expressions for Mathematica. Manifolds and vector bundles can be defined containing tensor fields with arbitrary symmetry, connections of any type, metrics and other objects. Based on the Penrose abstract-index notation, xTensor has a single canonicalizer which fully simplifies all expressions, using highly efficient techniques of computational group theory. A number of companion packages have been developed to address particular problems in General Relativity, like metric perturbation theory or the manipulation of the Riemann tensor.

  19. Diffusion Tensor Image Registration Using Hybrid Connectivity and Tensor Features

    PubMed Central

    Wang, Qian; Yap, Pew-Thian; Wu, Guorong; Shen, Dinggang

    2014-01-01

    Most existing diffusion tensor imaging (DTI) registration methods estimate structural correspondences based on voxelwise matching of tensors. The rich connectivity information that is given by DTI, however, is often neglected. In this article, we propose to integrate complementary information given by connectivity features and tensor features for improved registration accuracy. To utilize connectivity information, we place multiple anchors representing different brain anatomies in the image space, and define the connectivity features for each voxel as the geodesic distances from all anchors to the voxel under consideration. The geodesic distance, which is computed in relation to the tensor field, encapsulates information of brain connectivity. We also extract tensor features for every voxel to reflect the local statistics of tensors in its neighborhood. We then combine both connectivity features and tensor features for registration of tensor images. From the images, landmarks are selected automatically and their correspondences are determined based on their connectivity and tensor feature vectors. The deformation field that deforms one tensor image to the other is iteratively estimated and optimized according to the landmarks and their associated correspondences. Experimental results show that, by using connectivity features and tensor features simultaneously, registration accuracy is increased substantially compared with the cases using either type of features alone. PMID:24293159

  20. Evaluation of bayesian tensor estimation using tensor coherence.

    PubMed

    Kim, Dae-Jin; Kim, In-Young; Jeong, Seok-Oh; Park, Hae-Jeong

    2009-06-21

    Fiber tractography, a unique and non-invasive method to estimate axonal fibers within white matter, constructs the putative streamlines from diffusion tensor MRI by interconnecting voxels according to the propagation direction defined by the diffusion tensor. This direction has uncertainties due to the properties of underlying fiber bundles, neighboring structures and image noise. Therefore, robust estimation of the diffusion direction is essential to reconstruct reliable fiber pathways. For this purpose, we propose a tensor estimation method using a Bayesian framework, which includes an a priori probability distribution based on tensor coherence indices, to utilize both the neighborhood direction information and the inertia moment as regularization terms. The reliability of the proposed tensor estimation was evaluated using Monte Carlo simulations in terms of accuracy and precision with four synthetic tensor fields at various SNRs and in vivo human data of brain and calf muscle. Proposed Bayesian estimation demonstrated the relative robustness to noise and the higher reliability compared to the simple tensor regression.
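    The "simple tensor regression" baseline that this record's Bayesian method is compared against can be sketched as a log-linear least-squares fit of the diffusion tensor; the gradient scheme, b-value and tensor below are illustrative assumptions:

    ```python
    import numpy as np

    # Assumed six-direction gradient scheme and b-value (synthetic, noiseless).
    g = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1],
                  [1, 1, 0], [1, 0, 1], [0, 1, 1]], float)
    g /= np.linalg.norm(g, axis=1, keepdims=True)
    b = 1000.0                                     # s/mm^2

    D_true = np.diag([1.7e-3, 0.3e-3, 0.3e-3])     # hypothetical prolate tensor
    S0 = 1.0
    S = S0 * np.exp(-b * np.einsum('ij,jk,ik->i', g, D_true, g))

    # Design matrix for the six unique tensor elements; the signal model is
    # log(S/S0) = -b * g^T D g, linear in those elements.
    B = np.column_stack([g[:, 0]**2, g[:, 1]**2, g[:, 2]**2,
                         2*g[:, 0]*g[:, 1], 2*g[:, 0]*g[:, 2], 2*g[:, 1]*g[:, 2]])
    d, *_ = np.linalg.lstsq(-b * B, np.log(S / S0), rcond=None)
    D_est = np.array([[d[0], d[3], d[4]],
                      [d[3], d[1], d[5]],
                      [d[4], d[5], d[2]]])
    ```

    With noise added to `S`, this per-voxel fit becomes unstable, which is the motivation for regularizing it with a coherence-based prior as the record describes.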

  1. Re-Examination of Chinese Semantic Processing and Syntactic Processing: Evidence from Conventional ERPs and Reconstructed ERPs by Residue Iteration Decomposition (RIDE)

    PubMed Central

    Wang, Fang; Ouyang, Guang; Zhou, Changsong; Wang, Suiping

    2015-01-01

    A number of studies have explored the time course of Chinese semantic and syntactic processing. However, whether syntactic processing occurs earlier than semantics during Chinese sentence reading is still under debate. To further explore this issue, an event-related potentials (ERPs) experiment was conducted on 21 native Chinese speakers who read individually-presented Chinese simple sentences (NP1+VP+NP2) word-by-word for comprehension and made semantic plausibility judgments. The transitivity of the verbs was manipulated to form three types of stimuli: congruent sentences (CON), sentences with a semantically violated NP2 following a transitive verb (semantic violation, SEM), and sentences with a semantically violated NP2 following an intransitive verb (combined semantic and syntactic violation, SEM+SYN). The ERPs evoked from the target NP2 were analyzed by using the Residue Iteration Decomposition (RIDE) method to reconstruct the ERP waveform blurred by trial-to-trial variability, as well as by using the conventional ERP method based on stimulus-locked averaging. The conventional ERP analysis showed that, compared with the critical words in CON, those in SEM and SEM+SYN elicited an N400–P600 biphasic pattern. The N400 effects in both violation conditions were of similar size and distribution, but the P600 in SEM+SYN was larger than that in SEM. Compared with the conventional ERP analysis, RIDE analysis revealed a larger N400 effect and an earlier P600 effect (in the time window of 500–800 ms instead of 570–810 ms). Overall, the combination of conventional ERP analysis and the RIDE method for compensating for trial-to-trial variability confirmed the non-significant difference between SEM and SEM+SYN in the earlier N400 time window. Converging with previous findings on other Chinese structures, the current study provides further precise evidence that syntactic processing in Chinese does not occur earlier than semantic processing. PMID:25615600

  2. Endoscopic approach to tensor fold in patients with attic cholesteatoma.

    PubMed

    Marchioni, Daniele; Mattioli, Francesco; Alicandri-Ciufelli, Matteo; Presutti, Livio

    2009-09-01

    The endoscopic approach to attic cholesteatoma allows clear observation of the tensor fold area and consequently, excision of the tensor fold, modifying the epitympanic diaphragm. This permits good removal of cholesteatoma and direct ventilation of the upper unit, preventing the development of a retraction pocket or attic cholesteatoma recurrence, with good functional results. An isthmus block associated with a complete tensor fold is a necessary condition for creation and development of an attic cholesteatoma. During surgical treatment of attic cholesteatoma, tensor fold removal is required to restore ventilation of the attic region. Use of a microscope does not allow exposure of the tensor fold area and so removal of the tensor fold can be very difficult. In contrast, the endoscope permits better visualization of the tensor fold area, and this aids understanding of the anatomy of the tensor fold and its removal, restoring attic ventilation. In all, 21 patients with limited attic cholesteatoma underwent an endoscopic approach with complete removal of the disease. Patients with a wide external ear canal were operated through an exclusively endoscopic transcanal approach; patients with a narrow external ear canal or who were affected by external canal exostosis were operated through a traditional retroauricular incision and meatoplasty followed by the endoscopic transcanal approach. In 18/21 patients, the endoscope permitted the discovery of different anatomical morphologies of the tensor fold. Sixteen patients presented a complete tensor fold (one with an anomalous transversal orientation), one patient presented an incomplete tensor fold and one patient presented a bony ridge in the cochleariform region. In all 16 cases of complete tensor tympani fold, the fold was removed and anterior epitympanic ventilation was restored. The ridge bone over the cochleariform process was also removed with a microdrill.

  3. Entanglement, tensor networks and black hole horizons

    NASA Astrophysics Data System (ADS)

    Molina-Vilaplana, J.; Prior, J.

    2014-11-01

    We elaborate on a previous proposal by Hartman and Maldacena on a tensor network which accounts for the scaling of the entanglement entropy in a system at a finite temperature. In this construction, the ordinary entanglement renormalization flow given by the class of tensor networks known as the Multi Scale Entanglement Renormalization Ansatz (MERA) is supplemented by an additional entanglement structure at the length scale fixed by the temperature. The network comprises two copies of a MERA circuit with a fixed number of layers and a pure matrix product state which joins both copies by entangling the infrared degrees of freedom of both MERA networks. The entanglement distribution within this bridge state defines reduced density operators on both sides which cause effects analogous to the presence of a black hole horizon when computing the entanglement entropy at finite temperature in the AdS/CFT correspondence. The entanglement and correlations during the thermalization process of a system after a quantum quench are also analyzed. To this end, a full tensor network representation of the action of local unitary operations on the bridge state is proposed. This amounts to a tensor network which grows in size by adding successive layers of bridge states. Finally, we discuss the holographic interpretation of the tensor network through a notion of distance within the network which emerges from its entanglement distribution.

  4. A simple process for the preparation of copper (I) oxide nanoparticles by a thermal decomposition process with borane tert-butylamine complex.

    PubMed

    Kim, Na Rae; Jung, Inyu; Jo, Yun Hwan; Lee, Hyuck Mo

    2013-09-01

    To control the optical properties of Cu2O for a variety of applications, we synthesized Cu2O at the nanoscale without further treatments. Cu2O nanoparticles with an average size of 2.7 nm (σ ≤ 3.7%) were successfully synthesized in this study via a modified thermal decomposition process. Copper (II) acetylacetonate was used as the precursor, and oleylamine served as solvent, surfactant and reducing agent. The oleylamine-mediated synthesis allowed the preparation of Cu2O nanoparticles with a narrower size distribution, and the nanoparticles were synthesized in the presence of a borane tert-butylamine (BTB) complex, where BTB acted as a strong co-reducing agent together with oleylamine. UV-vis spectroscopy analysis suggests that the band gap energy of these Cu2O particles is enlarged from 2.1 eV in the bulk to 3.1 eV in the 2.7-nm nanoparticles, which is larger than most other reported values for Cu2O nanoparticles. Therefore, these nanoparticles could be used as a transparent material because of this transformed optical property.

  5. Bayesian inference and interpretation of centroid moment tensors of the 2016 Kumamoto earthquake sequence, Kyushu, Japan

    NASA Astrophysics Data System (ADS)

    Hallo, Miroslav; Asano, Kimiyuki; Gallovič, František

    2017-09-01

    On April 16, 2016, Kumamoto prefecture in the Kyushu region, Japan, was devastated by a shallow M JMA7.3 earthquake. The series of foreshocks started with an M JMA6.5 event 28 h before the mainshock. They originated in the Hinagu fault zone, which intersects the mainshock's Futagawa fault zone; hence, the tectonic background for this earthquake sequence is rather complex. Here we infer centroid moment tensors (CMTs) for 11 events with M JMA between 4.8 and 6.5, using strong motion records of the K-NET, KiK-net and F-net networks. We use the upgraded Bayesian full-waveform inversion code ISOLA-ObsPy, which takes into account the uncertainty of the velocity model. Such an approach allows us to reliably assess the uncertainty of the CMT parameters, including the centroid position. The solutions show significant systematic spatial and temporal variations throughout the sequence. Foreshocks are right-lateral, steeply dipping strike-slip events connected to the NE-SW shear zone. Those located close to the intersection of the Hinagu and Futagawa fault zones dip slightly to the ESE, while those in the southern area dip to the WNW. Contrarily, aftershocks are mostly normal dip-slip events related to the N-S extensional tectonic regime. Most of the deviatoric moment tensors contain only a minor CLVD component, which can be attributed to the velocity model uncertainty. Nevertheless, two of the CMTs involve a significant CLVD component, which may reflect a complex rupture process. Decomposition of those moment tensors into two pure shear moment tensors suggests combined right-lateral strike-slip and normal dip-slip mechanisms, consistent with the tectonic setting of the intersection of the Hinagu and Futagawa fault zones.
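
    The DC/CLVD decomposition referred to above can be sketched numerically. The following is a minimal illustration using a common deviatoric-eigenvalue measure of the CLVD fraction, not the authors' ISOLA-ObsPy code; the example tensors and the crude scalar-moment proxy are illustrative assumptions.

```python
import numpy as np

def decompose_mt(M):
    """Split a symmetric moment tensor into ISO, DC and CLVD percentages
    using the deviatoric eigenvalue ratio epsilon (one common convention;
    several alternative decompositions exist)."""
    iso = np.trace(M) / 3.0
    dev = M - iso * np.eye(3)
    lam = np.linalg.eigvalsh(dev)              # deviatoric eigenvalues
    lam_sorted = lam[np.argsort(np.abs(lam))]  # sort by absolute value
    eps = lam_sorted[0] / abs(lam_sorted[2]) if lam_sorted[2] != 0 else 0.0
    m0 = abs(iso) + max(abs(lam))              # crude scalar-moment proxy
    p_iso = 100.0 * abs(iso) / m0 if m0 else 0.0
    p_clvd = (100.0 - p_iso) * 2.0 * abs(eps)
    p_dc = 100.0 - p_iso - p_clvd
    return p_iso, p_dc, p_clvd

print(decompose_mt(np.diag([1.0, 0.0, -1.0])))   # pure double-couple: ~100% DC
print(decompose_mt(np.diag([2.0, -1.0, -1.0])))  # pure CLVD: ~0% DC
```

    A tensor with a significant CLVD percentage under such a measure is the kind of solution the abstract flags as possibly reflecting a complex rupture or velocity-model error.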

  6. Direct Solution of the Chemical Master Equation Using Quantized Tensor Trains

    PubMed Central

    Kazeev, Vladimir; Khammash, Mustafa; Nip, Michael; Schwab, Christoph

    2014-01-01

    The Chemical Master Equation (CME) is a cornerstone of stochastic analysis and simulation of models of biochemical reaction networks. Yet direct solutions of the CME have remained elusive. Although several approaches overcome the infinite dimensional nature of the CME through projections or other means, a common feature of proposed approaches is their susceptibility to the curse of dimensionality, i.e. the exponential growth in memory and computational requirements in the number of problem dimensions. We present a novel approach that has the potential to “lift” this curse of dimensionality. The approach is based on the use of the recently proposed Quantized Tensor Train (QTT) formatted numerical linear algebra for the low-parametric, numerical representation of tensors. The QTT decomposition admits both algorithms for basic tensor arithmetic with complexity scaling linearly in the dimension (number of species) and sub-linearly in the mode size (maximum copy number), and a numerical tensor rounding procedure which is stable and quasi-optimal. We show how the CME can be represented in QTT format, then use the exponentially-converging hp-discontinuous Galerkin discretization in time to reduce the CME evolution problem to a set of QTT-structured linear equations to be solved at each time step using an algorithm based on Density Matrix Renormalization Group (DMRG) methods from quantum chemistry. Our method automatically adapts the “basis” of the solution at every time step, guaranteeing that it is large enough to capture the dynamics of interest but no larger than necessary, as this would increase the computational complexity. Our approach is demonstrated by applying it to three different examples from systems biology: an independent birth-death process, an example of an enzymatic futile cycle, and a stochastic switch model. The numerical results on these examples demonstrate that the proposed QTT method achieves dramatic speedups and storage savings of several orders of magnitude.

  7. Direct solution of the Chemical Master Equation using quantized tensor trains.

    PubMed

    Kazeev, Vladimir; Khammash, Mustafa; Nip, Michael; Schwab, Christoph

    2014-03-01

    The Chemical Master Equation (CME) is a cornerstone of stochastic analysis and simulation of models of biochemical reaction networks. Yet direct solutions of the CME have remained elusive. Although several approaches overcome the infinite dimensional nature of the CME through projections or other means, a common feature of proposed approaches is their susceptibility to the curse of dimensionality, i.e. the exponential growth in memory and computational requirements in the number of problem dimensions. We present a novel approach that has the potential to "lift" this curse of dimensionality. The approach is based on the use of the recently proposed Quantized Tensor Train (QTT) formatted numerical linear algebra for the low-parametric, numerical representation of tensors. The QTT decomposition admits both algorithms for basic tensor arithmetic with complexity scaling linearly in the dimension (number of species) and sub-linearly in the mode size (maximum copy number), and a numerical tensor rounding procedure which is stable and quasi-optimal. We show how the CME can be represented in QTT format, then use the exponentially-converging hp-discontinuous Galerkin discretization in time to reduce the CME evolution problem to a set of QTT-structured linear equations to be solved at each time step using an algorithm based on Density Matrix Renormalization Group (DMRG) methods from quantum chemistry. Our method automatically adapts the "basis" of the solution at every time step, guaranteeing that it is large enough to capture the dynamics of interest but no larger than necessary, as this would increase the computational complexity. Our approach is demonstrated by applying it to three different examples from systems biology: an independent birth-death process, an example of an enzymatic futile cycle, and a stochastic switch model. The numerical results on these examples demonstrate that the proposed QTT method achieves dramatic speedups and storage savings of several orders of magnitude.
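
    The quantization idea behind the QTT format described above can be illustrated with a plain TT-SVD in NumPy: a length-2^d vector is reshaped into a d-dimensional 2×2×…×2 tensor and compressed by sequential truncated SVDs. This is a toy sketch of the format itself, not of the CME solver or the DMRG time-stepper; the ramp function and tolerance are arbitrary choices.

```python
import numpy as np

def tt_svd(t, tol=1e-10):
    """Decompose an array into tensor-train (TT) cores by sequential
    truncated SVDs (the basic TT-SVD algorithm)."""
    dims = t.shape
    cores, r = [], 1
    rem = t.reshape(r * dims[0], -1)
    for k in range(len(dims) - 1):
        u, s, vt = np.linalg.svd(rem, full_matrices=False)
        rank = max(1, int(np.sum(s > tol * s[0])))
        cores.append(u[:, :rank].reshape(r, dims[k], rank))
        rem = (np.diag(s[:rank]) @ vt[:rank]).reshape(rank * dims[k + 1], -1)
        r = rank
    cores.append(rem.reshape(r, dims[-1], 1))
    return cores

def tt_full(cores):
    """Contract TT cores back into the full tensor."""
    res = cores[0]
    for c in cores[1:]:
        res = np.tensordot(res, c, axes=([res.ndim - 1], [0]))
    return res.reshape(res.shape[1:-1])

# "quantize" a length-2^6 vector into a 2x2x2x2x2x2 tensor
v = np.arange(64, dtype=float).reshape((2,) * 6)
cores = tt_svd(v)
print([c.shape[2] for c in cores])           # a linear ramp has QTT ranks <= 2
print(np.max(np.abs(tt_full(cores) - v)))    # reconstruction error is tiny
```

    The point of the format is in the bond ranks: for smooth functions they stay small however fine the grid, which is what gives the sub-linear scaling in mode size claimed in the abstract.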

  8. Superconducting tensor gravity gradiometer

    NASA Technical Reports Server (NTRS)

    Paik, H. J.

    1981-01-01

    The employment of superconductivity and other material properties at cryogenic temperatures to fabricate a sensitive, low-drift gravity gradiometer is described. The device yields a noise reduction of four orders of magnitude over room-temperature gradiometers, and direct summation and subtraction of signals from accelerometers in varying orientations are possible with superconducting circuitry. Additional circuits permit determination of the linear and angular acceleration vectors independently of the measurement of the gravity gradient tensor. A dewar flask capable of maintaining helium in a liquid state for a year's duration is under development by NASA, and a superconducting tensor gravity gradiometer for the NASA Geodynamics Program is intended for a LEO polar trajectory to measure the harmonic expansion coefficients of the earth's gravity field up to order 300.

  9. Grid-based electronic structure calculations: The tensor decomposition approach

    NASA Astrophysics Data System (ADS)

    Rakhuba, M. V.; Oseledets, I. V.

    2016-05-01

    We present a fully grid-based approach for solving the Hartree-Fock and all-electron Kohn-Sham equations based on a low-rank approximation of three-dimensional electron orbitals. Due to the low-rank structure, the total complexity of the algorithm scales linearly with the one-dimensional grid size. Linear complexity allows for the use of fine grids, e.g. 8192^3, and thus a cheap extrapolation procedure. We test the proposed approach on closed-shell atoms up to argon, several molecules and clusters of hydrogen atoms. All tests show systematic convergence with the required accuracy.

  10. Grid-based electronic structure calculations: The tensor decomposition approach

    SciTech Connect

    Rakhuba, M.V.; Oseledets, I.V.

    2016-05-01

    We present a fully grid-based approach for solving the Hartree–Fock and all-electron Kohn–Sham equations based on a low-rank approximation of three-dimensional electron orbitals. Due to the low-rank structure, the total complexity of the algorithm scales linearly with the one-dimensional grid size. Linear complexity allows for the use of fine grids, e.g. 8192^3, and thus a cheap extrapolation procedure. We test the proposed approach on closed-shell atoms up to argon, several molecules and clusters of hydrogen atoms. All tests show systematic convergence with the required accuracy.
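
    The low-rank idea can be sketched with a truncated higher-order SVD (Tucker) compression of an exp(-r) function sampled on a modest 3-D grid, as a hypothetical stand-in for an orbital. This illustrative code is not the authors' decomposition machinery; the grid size and ranks are arbitrary.

```python
import numpy as np

def hosvd(t, rank):
    """Truncated higher-order SVD (Tucker format): one orthonormal factor
    per mode plus a small core tensor."""
    factors = []
    for mode in range(t.ndim):
        unf = np.moveaxis(t, mode, 0).reshape(t.shape[mode], -1)
        u, _, _ = np.linalg.svd(unf, full_matrices=False)
        factors.append(u[:, :rank])
    core = t
    for u in factors:
        # contracting axis 0 each time cycles the modes back into order
        core = np.tensordot(core, u, axes=([0], [0]))
    return core, factors

def tucker_full(core, factors):
    """Expand a Tucker core and factors back to the full tensor."""
    t = core
    for u in factors:
        t = np.tensordot(t, u, axes=([0], [1]))
    return t

# hypothetical stand-in for an orbital: exp(-r) on a 32^3 grid
n = 32
g = np.linspace(-8.0, 8.0, n)
X, Y, Z = np.meshgrid(g, g, g, indexing="ij")
orb = np.exp(-np.sqrt(X**2 + Y**2 + Z**2))

for rank in (4, 8, 16):
    core, factors = hosvd(orb, rank)
    err = np.linalg.norm(tucker_full(core, factors) - orb) / np.linalg.norm(orb)
    # storage: rank^3 + 3*n*rank numbers instead of n^3
    print(rank, err)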

  11. Refinement of Regional Distance Seismic Moment Tensor and Uncertainty Analysis for Source-Type Identification

    DTIC Science & Technology

    2014-09-02

    Following synthetic studies, we applied the moment tensor method to the HUMMING ALBATROSS quarry blast events, an excellent dataset for examining full moment tensor solutions and their decompositions as a function of source depth. Plots were made with GMT (Wessel and Smith, 1998).

  12. Residue decomposition submodel of WEPS

    USDA-ARS?s Scientific Manuscript database

    The Residue Decomposition submodel of the Wind Erosion Prediction System (WEPS) simulates the decrease in crop residue biomass due to microbial activity. The decomposition process is modeled as a first-order reaction with temperature and moisture as driving variables. Decomposition is a function of ...
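
    A first-order decay of this kind has the closed form M(t) = M0·exp(-k·t_eff), where the effective decomposition time accumulates daily temperature and moisture coefficients. The sketch below is a generic first-order model; the coefficient names, values, and the product form are illustrative assumptions, not the actual WEPS equations.

```python
import math

def residue_remaining(m0, days, k=0.016, f_temp=0.8, f_water=0.6):
    """Generic first-order residue decay. Dimensionless temperature and
    moisture coefficients in [0, 1] scale calendar days into effective
    'decomposition days' (illustrative forms, not the WEPS formulation)."""
    eff_days = days * f_temp * f_water
    return m0 * math.exp(-k * eff_days)

# 1000 kg/ha of residue after 30 mild, moist days: about 79% remains
print(residue_remaining(1000.0, 30))
```

    Cold or dry days drive the coefficients toward zero, freezing the residue mass, which is the behavior the submodel's driving variables are meant to capture.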

  13. E6Tensors: A Mathematica package for E6 Tensors

    NASA Astrophysics Data System (ADS)

    Deppisch, Thomas

    2017-04-01

    We present the Mathematica package E6Tensors, a tool for explicit tensor calculations in E6 gauge theories. In addition to matrix expressions for the group generators of E6, it provides structure constants, various higher rank tensors and expressions for the representations 27, 78, 351 and 351‧. This paper comes along with a short manual including physically relevant examples. I further give a complete list of gauge invariant, renormalisable terms for superpotentials and Lagrangians.

  14. FaRe: A Mathematica package for tensor reduction of Feynman integrals

    NASA Astrophysics Data System (ADS)

    Re Fiorentin, Michele

    2016-08-01

    In this paper, we present FaRe, a package for Mathematica that implements the decomposition of a generic tensor Feynman integral, with arbitrary loop number, into scalar integrals in higher dimension. In order for FaRe to work, the package FeynCalc is needed, so that the tensor structure of the different contributions is preserved and the obtained scalar integrals are grouped accordingly. FaRe can prove particularly useful when it is preferable to handle Feynman integrals with free Lorentz indices and tensor reduction of high-order integrals is needed. This can then be achieved with several powerful existing tools.

  15. Projectors and seed conformal blocks for traceless mixed-symmetry tensors

    NASA Astrophysics Data System (ADS)

    Costa, Miguel S.; Hansen, Tobias; Penedones, João; Trevisani, Emilio

    2016-07-01

    In this paper we derive the projectors to all irreducible SO( d) representations (traceless mixed-symmetry tensors) that appear in the partial wave decomposition of a conformal correlator of four stress-tensors in d dimensions. These projectors are given in a closed form for arbitrary length l 1 of the first row of the Young diagram. The appearance of Gegenbauer polynomials leads directly to recursion relations in l 1 for seed conformal blocks. Further results include a differential operator that generates the projectors to traceless mixed-symmetry tensors and the general normalization constant of the shadow operator.

  16. Discussion of stress tensor nonuniqueness with application to nonuniform, particulate systems

    SciTech Connect

    Aidun, J.B.

    1993-06-01

    The indeterminacy of the mechanical stress tensor has been noted in several developments of expressions for stress in a system of particles. It is generally agreed that physical quantities related to the stress tensor must be insensitive to this nonuniqueness, but there is no definitive prescription for ensuring it. Kroener's tensor decomposition theorem is applied to the mechanical stress tensor σ_ij to show that its complete determination requires specification of its "incompatibility," ε_ijk ε_lmn ∂_j ∂_m σ_kn, in addition to its divergence, which is obtained from the momentum conservation relation. For a particulate system, the stress tensor incompatibility is shown to vanish, recovering the correct expression for the macroscopically observable traction. This result removes concern about nonuniqueness without requiring equilibrium or arbitrarily defined force lines.
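
    The incompatibility operation above, inc(A)_il = ε_ijk ε_lmn ∂_j ∂_m A_kn, can be checked symbolically: for a symmetric tensor field derived as the symmetric gradient of a single-valued displacement field, the incompatibility vanishes identically (the Saint-Venant compatibility condition). A small SymPy sketch, with an arbitrary polynomial displacement chosen purely for illustration:

```python
import sympy as sp
from sympy import LeviCivita

x = sp.symbols('x0 x1 x2')
# arbitrary smooth (polynomial) displacement field -- illustrative choice
u = [x[0]**2 * x[1], x[1] * x[2]**2 + x[0]**3, x[0] * x[2] + x[1]**2]

# symmetric gradient e_ij = (d_i u_j + d_j u_i) / 2
e = [[sp.Rational(1, 2) * (sp.diff(u[i], x[j]) + sp.diff(u[j], x[i]))
      for j in range(3)] for i in range(3)]

def inc(a, i, l):
    """Incompatibility inc(a)_il = eps_ijk eps_lmn d_j d_m a_kn."""
    return sp.expand(sum(
        LeviCivita(i, j, k) * LeviCivita(l, m, n) * sp.diff(a[k][n], x[j], x[m])
        for j in range(3) for k in range(3)
        for m in range(3) for n in range(3)))

# the incompatibility of a compatible (displacement-derived) field is zero
assert all(inc(e, i, l) == 0 for i in range(3) for l in range(3))
```

    A generic symmetric tensor field, by contrast, has nonzero incompatibility, which is why specifying it (in addition to the divergence) is needed to pin the stress tensor down.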

  17. Catalyst for sodium chlorate decomposition

    NASA Technical Reports Server (NTRS)

    Wydeven, T.

    1972-01-01

    Production of oxygen by rapid decomposition of cobalt oxide and sodium chlorate mixture is discussed. Cobalt oxide serves as catalyst to accelerate reaction. Temperature conditions and chemical processes involved are described.

  18. Carbon decomposition process of the residual biomass in the paddy soil of a single-crop rice field

    NASA Astrophysics Data System (ADS)

    Okada, K.; Iwata, T.

    2014-12-01

    In cultivated fields, residual organic matter is plowed into the soil after harvest and decays during the fallow season. Greenhouse gases such as CO2 and CH4 are generated by the decomposition of this organic matter and released into the atmosphere. In some fields, open burning is carried out by tradition, whereby carbon in the residual matter is released into the atmosphere as CO2. However, the effect of burning on the carbon budget between croplands and the atmosphere has not yet been fully considered. In this study, coarse organic matter (COM) in the paddy soil of a single-crop rice field was sampled at regular intervals between January 2011 and August 2014. The amount of carbon released from residual matter was estimated by analyzing the variations in the carbon content of COM, and the effects of soil temperature (Ts) and soil water content (SWC) at the paddy field on the rate of carbon decomposition were investigated. Although the rate of COM decrease was much smaller in winter, it accelerated during the warming season between April and June every year. Decomposition then slowed during the following rice-cultivation season despite the soil temperature being highest. In addition, the observational field was divided into two areas, and open burning experiments were conducted three times, in November 2011, 2012, and 2013. In each year, three sampling surveys were done: plants before harvest, and residuals before and after the burning experiment. These surveys suggest that about 48±2% of the carbon content of the above-ground plants was removed as grain by harvest, and about 27±2% of the carbon was emitted as CO2 by burning. The carbon content of the residuals plowed into the soil after harvest was estimated at 293±1 and 220±36 gC/m2 in the unburned and burned areas, respectively, based on three-year averages. An estimated 70 and 60% of the initial input of COM was decomposed after one year in the unburned and burned areas, respectively.

  19. OPERATOR NORM INEQUALITIES BETWEEN TENSOR UNFOLDINGS ON THE PARTITION LATTICE.

    PubMed

    Wang, Miaoyan; Duc, Khanh Dao; Fischer, Jonathan; Song, Yun S

    2017-05-01

    Interest in higher-order tensors has recently surged in data-intensive fields, with a wide range of applications including image processing, blind source separation, community detection, and feature extraction. A common paradigm in tensor-related algorithms advocates unfolding (or flattening) the tensor into a matrix and applying classical methods developed for matrices. Despite the popularity of such techniques, how the functional properties of a tensor change upon unfolding is currently not well understood. In contrast to the body of existing work, which has focused almost exclusively on matricizations, we here consider all possible unfoldings of an order-k tensor, which are in one-to-one correspondence with the set of partitions of {1, …, k}. We derive general inequalities between the lp-norms of arbitrary unfoldings defined on the partition lattice. In particular, we demonstrate how the spectral norm (p = 2) of a tensor is bounded by that of its unfoldings, and obtain an improved upper bound on the ratio of the Frobenius norm to the spectral norm of an arbitrary tensor. For specially structured tensors satisfying a generalized definition of orthogonal decomposability, we prove that the spectral norm remains invariant under specific subsets of unfolding operations.
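
    Two of the basic relations can be checked directly in NumPy: every unfolding of a tensor has the same Frobenius norm (unfolding only permutes entries), and the spectral norm of any matrix unfolding is bounded by that Frobenius norm. The tensor spectral norm itself is NP-hard to compute and is not evaluated in this sketch; the random tensor and its shape are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.standard_normal((4, 5, 6))   # an arbitrary order-3 tensor

# the three single-mode matricizations T_(1), T_(2), T_(3)
unfoldings = [np.moveaxis(T, k, 0).reshape(T.shape[k], -1) for k in range(3)]

fro = np.linalg.norm(T)              # Frobenius norm of the tensor
for M in unfoldings:
    # unfolding permutes entries, so the Frobenius norm is invariant
    assert np.isclose(np.linalg.norm(M), fro)
    # the spectral norm of any unfolding is at most the Frobenius norm
    assert np.linalg.norm(M, 2) <= fro + 1e-12

# different unfoldings generally have different spectral norms
print([np.linalg.norm(M, 2) for M in unfoldings])
```

    The paper's contribution is to extend such comparisons beyond these single-mode matricizations to all unfoldings indexed by the partition lattice.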

  20. Thermal decomposition of [Co(en)3][Fe(CN)6]·2H2O: Topotactic dehydration process, valence and spin exchange mechanism elucidation

    PubMed Central

    2013-01-01

    Background: The Prussian blue analogues represent a well-known and extensively studied group of coordination species with many remarkable applications due to their ion-exchange, electron transfer or magnetic properties. Among them, Co-Fe Prussian blue analogues have been extensively studied due to their photoinduced magnetization. Surprisingly, their suitability as precursors for the solid-state synthesis of magnetic nanoparticles is almost unexplored. In this paper, the mechanism of thermal decomposition of [Co(en)3][Fe(CN)6]·2H2O (1a) is elucidated, including the topotactic dehydration, a suggested valence and spin exchange mechanism, and the formation of a mixture of CoFe2O4-Co3O4 (3:1) as the final product of thermal degradation. Results: The course of thermal decomposition of 1a in an air atmosphere up to 600°C was monitored by TG/DSC techniques, 57Fe Mössbauer and IR spectroscopy. First, the topotactic dehydration of 1a to the hemihydrate [Co(en)3][Fe(CN)6]·1/2H2O (1b) occurred, preserving the single-crystal character, as was confirmed by X-ray diffraction analysis. The consequent thermal decomposition proceeded in four further stages including intermediates varying in the valence and spin states of both transition metal ions in their structures, i.e. [FeII(en)2(μ-NC)CoIII(CN)4], [FeIII(NH2CH2CH3)2(μ-NC)2CoII(CN)3] and FeIII[CoII(CN)5], which were suggested mainly from 57Fe Mössbauer, IR spectral and elemental analyses data. Thermal decomposition was completed at 400°C, when superparamagnetic phases of CoFe2O4 and Co3O4 in a molar ratio of 3:1 were formed. During further temperature increase (450 and 600°C), the ongoing crystallization process gave a new ferromagnetic phase attributed to the CoFe2O4-Co3O4 nanocomposite particles. Their formation was confirmed by XRD and TEM analyses. The in-field (5 K / 5 T) Mössbauer spectrum revealed canting of the Fe(III) spin in the almost fully inverse spinel structure of CoFe2O4. Conclusions: It has been found

  1. Reducing tensor magnetic gradiometer data for unexploded ordnance detection

    USGS Publications Warehouse

    Bracken, Robert E.; Brown, Philip J.

    2005-01-01

    We performed a survey to demonstrate the effectiveness of a prototype tensor magnetic gradiometer system (TMGS) for detection of buried unexploded ordnance (UXO). In order to achieve a useful result, we designed a data-reduction procedure that resulted in a realistic magnetic gradient tensor and devised a simple way of viewing complicated tensor data, not only to assess the validity of the final resulting tensor, but also to preview the data at interim stages of processing. The final processed map of the surveyed area clearly shows a sharp anomaly that peaks almost directly over the target UXO. This map agrees well with a modeled map derived from dipolar sources near the known target locations. From this agreement, it can be deduced that the reduction process is valid, making the prototype TMGS a foundation for development of future systems and processes.

  2. Relativistic Lagrangian displacement field and tensor perturbations

    NASA Astrophysics Data System (ADS)

    Rampf, Cornelius; Wiegand, Alexander

    2014-12-01

    We investigate the purely spatial Lagrangian coordinate transformation from the Lagrangian to the basic Eulerian frame. We demonstrate three techniques for extracting the relativistic displacement field from a given solution in the Lagrangian frame. These techniques are (a) from defining a local set of Eulerian coordinates embedded into the Lagrangian frame; (b) from performing a specific gauge transformation; and (c) from a fully nonperturbative approach based on the Arnowitt-Deser-Misner (ADM) split. The latter approach shows that this decomposition is not tied to a specific perturbative formulation for the solution of the Einstein equations. Rather, it can be defined at the level of the nonperturbative coordinate change from the Lagrangian to the Eulerian description. Studying such different techniques is useful because it allows us to compare and develop further the various approximation techniques available in the Lagrangian formulation. We find that one has to solve the gravitational wave equation in the relativistic analysis, otherwise the corresponding Newtonian limit will necessarily contain spurious nonpropagating tensor artifacts at second order in the Eulerian frame. We also derive the magnetic part of the Weyl tensor in the Lagrangian frame, and find that it is not only excited by gravitational waves but also by tensor perturbations which are induced through the nonlinear frame dragging. We apply our findings to calculate for the first time the relativistic displacement field, up to second order, for a Λ CDM Universe in the presence of a local primordial non-Gaussian component. Finally, we also comment on recent claims about whether mass conservation in the Lagrangian frame is violated.

  3. On Endomorphisms of Quantum Tensor Space

    NASA Astrophysics Data System (ADS)

    Lehrer, Gustav Isaac; Zhang, Ruibin

    2008-12-01

    We give a presentation of the endomorphism algebra End_{U_q(sl_2)}(V^{⊗r}), where V is the three-dimensional irreducible module for quantum sl_2 over the function field C(q^{1/2}). This will be as a quotient of the Birman-Wenzl-Murakami algebra BMW_r(q) := BMW_r(q^{-4}, q^2 - q^{-2}) by an ideal generated by a single idempotent Φ_q. Our presentation is in analogy with the case where V is replaced by the two-dimensional irreducible U_q(sl_2)-module, the BMW algebra is replaced by the Hecke algebra H_r(q) of type A_{r-1}, Φ_q is replaced by the quantum alternator in H_3(q), and the endomorphism algebra is the classical realisation of the Temperley-Lieb algebra on tensor space. In particular, we show that all relations among the endomorphisms defined by the R-matrices on V^{⊗r} are consequences of relations among the three R-matrices acting on V^{⊗4}. The proof makes extensive use of the theory of cellular algebras. Potential applications include the decomposition of tensor powers when q is a root of unity.

  4. Full Moment Tensor Inversion as a Practical Tool in Case of Discrimination of Tectonic and Anthropogenic Seismicity in Poland

    NASA Astrophysics Data System (ADS)

    Lizurek, Grzegorz

    2017-01-01

    Tectonic seismicity in Poland is sparse. The largest event, of magnitude 5.6, occurred near Myślenice in the 17th century. On the other hand, the anthropogenic seismicity is among the highest in Europe, related, for example, to underground mining in the Upper Silesian Coal Basin (USCB) and the Legnica Głogów Copper District (LGCD), open pit mining in the "Bełchatów" brown coal mine, and the reservoir impoundment of the Czorsztyn artificial lake. The level of seismic activity in these areas varies from tens to thousands of events per year. Focal mechanism and full moment tensor (MT) decomposition allow for a deeper understanding of the seismogenic processes leading to tectonic, induced, and triggered seismic events. The non-DC components of moment tensors are considered an indicator of induced seismicity. In this work, MT inversion and decomposition is shown to be a robust tool for unveiling collapse-type events as well as other induced events in Polish underground mining areas. The robustness and limitations of the presented method are exemplified by synthetic tests and by analyzing weak tectonic earthquakes. The spurious non-DC components of full MT solutions due to noise and poor focal coverage are discussed. The results of the MT inversions of the human-related and tectonic earthquakes from Poland indicate that this method is a useful part of the tectonic and anthropogenic seismicity discrimination workflow.

  5. Local recovery of lithospheric stress tensor from GOCE gravitational tensor

    NASA Astrophysics Data System (ADS)

    Eshagh, Mehdi

    2017-04-01

    The sublithospheric stress due to mantle convection can be computed from gravity data and propagated through the lithosphere by solving the boundary-value problem of elasticity for the Earth's lithosphere. In this case, a full tensor of stress can be computed at any point inside this elastic layer. Here, we present mathematical foundations for recovering such a tensor from the gravitational tensor measured at satellite altitudes. The mathematical relations will be much simpler in this way than in the case of using gravity data, as no derivative of spherical harmonics (SHs) or Legendre polynomials is involved in the expressions. Here, new relations between the SH coefficients of the stress and gravitational tensor elements are presented. Thereafter, integral equations are established from them to recover the elements of the stress tensor from those of the gravitational tensor. The integrals have no closed-form kernels, but they are easy to invert and their spatial truncation errors are reducible. The integral equations are used to invert the real data of the Gravity field and steady-state Ocean Circulation Explorer (GOCE) mission, from November 2009, over the South American plate and its surroundings to recover the stress tensor at a depth of 35 km. The recovered stress fields are in good agreement with the tectonic and geological features of the area.

  6. Local recovery of lithospheric stress tensor from GOCE gravitational tensor

    NASA Astrophysics Data System (ADS)

    Eshagh, Mehdi

    2017-01-01

    SUMMARYThe sub-lithospheric stress due to mantle convection can be computed from gravity data and propagated through the lithosphere by solving the boundary-value problem of elasticity for the Earth's lithosphere. In this case, a full <span class="hlt">tensor</span> of stress can be computed at any point inside this elastic layer. Here, we present mathematical foundations for recovering such a <span class="hlt">tensor</span> from gravitational <span class="hlt">tensor</span> measured at satellite altitudes. The mathematical relations will be much simpler in this way than the case of using gravity data as no derivative of spherical harmonics or Legendre polynomials is involved in the expressions. Here, new relations between the spherical harmonic coefficients of the stress and gravitational <span class="hlt">tensor</span> elements are presented. Thereafter integral equations are established from them to recover the elements of stress <span class="hlt">tensor</span> from those of the gravitational <span class="hlt">tensor</span>. The integrals have no closed-form kernels, but they are easy to invert and their spatial truncation errors are reducible. The integral equations are used to invert the real data of the gravity field and steady-state ocean circulation explorer (GOCE) mission, in November 2009, over the South American plate and its surroundings to recover the stress <span class="hlt">tensor</span> at a depth of 35 km. 
The recovered stress fields are in good agreement with the tectonic and geological features of the area.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/15690523','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/15690523"><span>A rigorous framework for diffusion <span class="hlt">tensor</span> calculus.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Batchelor, P G; Moakher, M; Atkinson, D; Calamante, F; Connelly, A</p> <p>2005-01-01</p> <p>In biological tissue, all eigenvalues of the diffusion <span class="hlt">tensor</span> are assumed to be positive. Calculations in diffusion <span class="hlt">tensor</span> MRI generally do not take into account this positive definiteness property of the <span class="hlt">tensor</span>. Here, the space of positive definite <span class="hlt">tensors</span> is used to construct a framework for diffusion <span class="hlt">tensor</span> analysis. The method defines a distance function between a pair of <span class="hlt">tensors</span> and the associated shortest path (geodesic) joining them. From this distance a method for computing <span class="hlt">tensor</span> means, a new measure of anisotropy, and a method for <span class="hlt">tensor</span> interpolation are derived. The method is illustrated using simulated and in vivo data. 
Copyright 2004 Wiley-Liss, Inc.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015JChPh.142a4105M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015JChPh.142a4105M"><span>Extracting the diffusion <span class="hlt">tensor</span> from molecular dynamics simulation with Milestoning</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Mugnai, Mauro L.; Elber, Ron</p> <p>2015-01-01</p> <p>We propose an algorithm to extract the diffusion <span class="hlt">tensor</span> from Molecular Dynamics simulations with Milestoning. A Kramers-Moyal expansion of a discrete master equation, which is the Markovian limit of the Milestoning theory, determines the diffusion <span class="hlt">tensor</span>. To test the algorithm, we analyze overdamped Langevin trajectories and recover a multidimensional Fokker-Planck equation. The recovery <span class="hlt">process</span> determines the flux through a mesh and estimates local kinetic parameters. Rate coefficients are converted to the derivatives of the potential of mean force and to coordinate dependent diffusion <span class="hlt">tensor</span>. We illustrate the computation on simple models and on an atomically detailed system—the diffusion along the backbone torsions of a solvated alanine dipeptide.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016PhRvC..93e4617S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016PhRvC..93e4617S"><span>Skyrme <span class="hlt">tensor</span> force in heavy ion collisions</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Stevenson, P. D.; Suckling, E. B.; Fracasso, S.; Barton, M. C.; Umar, A. S.</p> <p>2016-05-01</p> <p>Background: It is generally acknowledged that the time-dependent Hartree-Fock (TDHF) method provides a useful foundation for a fully microscopic many-body theory of low-energy heavy ion reactions. The TDHF method is also known in nuclear physics in the small-amplitude domain, where it provides a useful description of collective states, and is based on the mean-field formalism, which has been a relatively successful approximation to the nuclear many-body problem. Currently, the TDHF theory is being widely used in the study of fusion excitation functions, fission, and deep-inelastic scattering of heavy mass systems, while providing a natural foundation for many other studies. Purpose: With the advancement of computational power it is now possible to undertake TDHF calculations without any symmetry assumptions and incorporate the major strides made by the nuclear structure community in improving the energy density functionals used in these calculations. In particular, time-odd and <span class="hlt">tensor</span> terms in these functionals are naturally present during the dynamical evolution, while being absent or minimally important for most static calculations.
The parameters of these terms are determined by the requirement of Galilean invariance or local gauge invariance, but their significance for the reaction dynamics has not been fully studied. This work addresses this question with emphasis on the <span class="hlt">tensor</span> force. Method: The full version of the Skyrme force, including terms arising only from the Skyrme <span class="hlt">tensor</span> force, is applied to the study of collisions within a completely symmetry-unrestricted TDHF implementation. Results: We examine fusion thresholds with and without the <span class="hlt">tensor</span> force terms and find an effect on the fusion threshold energy of the order of several MeV. Details of the distribution of the energy within terms in the energy density functional are also discussed. Conclusions: Terms in the energy density functional linked to the <span class="hlt">tensor</span> force can play a non-negligible role.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/21164682','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/21164682"><span>InGaN light emitting solar cells with a roughened N-face GaN surface through a laser <span class="hlt">decomposition</span> <span class="hlt">process</span>.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Chen, Kuei-Ting; Huang, Wan-Chun; Hsieh, Tsung-Han; Hsieh, Chang-Hua; Lin, Chia-Feng</p> <p>2010-10-25</p> <p>An InGaN-based light-emitting solar cell (LESC) structure with an inverted pyramidal structure at the GaN/sapphire interface was fabricated through a laser <span class="hlt">decomposition</span> <span class="hlt">process</span> and a wet crystallographic etching <span class="hlt">process</span>.
The laser-treated LESC structure with a 56% backside roughened-area ratio showed the highest light output power, a 75% enhancement over the conventional device at a 20 mA operating current. As the backside roughened area increased, the cutoff wavelength of the transmittance spectra and the wavelength of the peak photovoltaic efficiency showed a redshift, likely caused by increased light absorption in the InGaN active layer.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015IJE...102.1560K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015IJE...102.1560K"><span>Modified foreground segmentation for object tracking using wavelets in a <span class="hlt">tensor</span> framework</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Kapoor, Rajiv; Rohilla, Rajesh</p> <p>2015-09-01</p> <p>Subspace-based techniques have become important in behaviour analysis, appearance modelling and tracking. Various vector and <span class="hlt">tensor</span> subspace learning techniques are already known that perform their operations in either an offline or an online manner. In this work, we have improved upon <span class="hlt">tensor</span>-based subspace learning by using fourth-order <span class="hlt">decomposition</span> and wavelets, yielding an advanced adaptive algorithm for robust and efficient background modelling and tracking in coloured video sequences. The proposed algorithm, known as the fourth-order incremental <span class="hlt">tensor</span> subspace learning algorithm, uses spatio-colour-temporal information through adaptive online updates of the means and the eigenbasis for each unfolding matrix, applying <span class="hlt">tensor</span> <span class="hlt">decomposition</span> to fourth-order image <span class="hlt">tensors</span>.
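The incremental update above operates on the mode-n unfoldings (matricizations) of an image tensor. As a minimal pure-Python sketch of what an "unfolding matrix" is (the helper name `unfold` and the toy tensor are ours, not the paper's):

```python
from itertools import product

def unfold(entries, shape, mode):
    """Mode-n unfolding: map a tensor (stored as {index-tuple: value}) to a
    matrix whose rows follow the chosen mode and whose columns enumerate
    the remaining modes in row-major order."""
    other = [d for d in range(len(shape)) if d != mode]
    n_cols = 1
    for d in other:
        n_cols *= shape[d]
    M = [[0.0] * n_cols for _ in range(shape[mode])]
    for idx in product(*(range(s) for s in shape)):
        col = 0
        for d in other:
            col = col * shape[d] + idx[d]
        M[idx[mode]][col] = entries[idx]
    return M

# Small 2x3x2 example: each entry's value equals its row-major linear index.
shape = (2, 3, 2)
entries = {idx: float(i)
           for i, idx in enumerate(product(*(range(s) for s in shape)))}
M0 = unfold(entries, shape, 0)   # a 2 x 6 matrix
```

In the paper's setting the fourth-order tensor (rows x columns x colour x time) has one such unfolding per mode, and the subspace of each unfolding is updated online.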
The proposed method applies the wavelet transform to an optimal <span class="hlt">decomposition</span> level, reducing the computational complexity by working on an approximate counterpart of the original scene while also suppressing noise. Our tracking method is an unscented particle filter that utilises appearance knowledge and estimates the new state of the intended object. Various experiments demonstrate that the proposed method outperforms existing methods.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://www.osti.gov/scitech/servlets/purl/1226255','SCIGOV-STC'); return false;" href="http://www.osti.gov/scitech/servlets/purl/1226255"><span>Parallel <span class="hlt">Tensor</span> Compression for Large-Scale Scientific Data.</span></a></p> <p><a target="_blank" href="http://www.osti.gov/scitech">SciTech Connect</a></p> <p>Kolda, Tamara G.; Ballard, Grey; Austin, Woody Nathan</p> <p>2015-10-01</p> <p>As parallel computing trends towards the exascale, scientific data produced by high-fidelity simulations are growing increasingly massive. For instance, a simulation on a three-dimensional spatial grid with 512 points per dimension that tracks 64 variables per grid point for 128 time steps yields 8 TB of data. By viewing the data as a dense five-way <span class="hlt">tensor</span>, we can compute a Tucker <span class="hlt">decomposition</span> to find inherent low-dimensional multilinear structure, achieving compression ratios of up to 10000 on real-world data sets with negligible loss in accuracy. To operate on such massive data, we present the first-ever distributed-memory parallel implementation of the Tucker <span class="hlt">decomposition</span>, whose key computations correspond to parallel linear algebra operations, albeit with nonstandard data layouts.
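The storage arithmetic behind the figures quoted in this abstract is easy to reproduce: a five-way 512 x 512 x 512 x 64 x 128 tensor of doubles occupies 8 TiB, and a Tucker representation stores only a small core plus one factor matrix per mode. A sketch with illustrative truncation ranks (not the ranks used in the paper):

```python
def num_elements(shape):
    n = 1
    for s in shape:
        n *= s
    return n

def tucker_storage(shape, ranks):
    """Elements stored by a Tucker decomposition: a core of size
    prod(ranks) plus one I_n x R_n factor matrix per mode."""
    return num_elements(ranks) + sum(i * r for i, r in zip(shape, ranks))

shape = (512, 512, 512, 64, 128)       # the five-way tensor from the abstract
full = num_elements(shape) * 8         # bytes at double precision: 8 TiB
ranks = (64, 64, 64, 16, 16)           # illustrative truncation ranks
compressed = tucker_storage(shape, ranks) * 8
ratio = full / compressed              # compression ratio above 10^4
```

With these hypothetical ranks the ratio lands in the same 10^4 regime the abstract reports; the achievable ranks in practice are set by the accuracy tolerance.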
Our approach specifies a data distribution for <span class="hlt">tensors</span> that avoids any <span class="hlt">tensor</span> data redistribution, either locally or in parallel. We provide accompanying analysis of the computation and communication costs of the algorithms. To demonstrate the compression and accuracy of the method, we apply our approach to real-world data sets from combustion science simulations. We also provide detailed performance results, including parallel performance in both weak and strong scaling experiments.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.osti.gov/scitech/biblio/1021699','SCIGOV-STC'); return false;" href="https://www.osti.gov/scitech/biblio/1021699"><span>Link prediction on evolving graphs using matrix and <span class="hlt">tensor</span> factorizations.</span></a></p> <p><a target="_blank" href="http://www.osti.gov/scitech">SciTech Connect</a></p> <p>Dunlavy, Daniel M.; Acar, Evrim; Kolda, Tamara Gibson</p> <p>2010-06-01</p> <p>The data in many disciplines such as social networks, web analysis, etc. is link-based, and the link structure can be exploited for many different data mining tasks. In this paper, we consider the problem of temporal link prediction: Given link data for time periods 1 through T, can we predict the links in time period T + 1? Specifically, we look at bipartite graphs changing over time and consider matrix- and <span class="hlt">tensor</span>-based methods for predicting links. We present a weight-based method for collapsing multi-year data into a single matrix. We show how the well-known Katz method for link prediction can be extended to bipartite graphs and, moreover, approximated in a scalable way using a truncated singular value <span class="hlt">decomposition</span>. 
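The Katz index that the truncated SVD approximates is the damped sum of walk counts, S = sum_{k>=1} beta^k A^k. A direct truncated power-series sketch on a toy graph (pure Python; the paper's scalable SVD-based approximation is not shown):

```python
def matmul(A, B):
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

def katz_scores(A, beta, kmax):
    """Katz index S = sum_{k=1..kmax} beta^k A^k, truncated power series."""
    n = len(A)
    S = [[0.0] * n for _ in range(n)]
    P = [row[:] for row in A]          # current power A^k, starting at A^1
    b = beta
    for _ in range(kmax):
        for i in range(n):
            for j in range(n):
                S[i][j] += b * P[i][j]
        P = matmul(P, A)
        b *= beta
    return S

# Path graph 0-1-2: nodes 0 and 2 are connected only through node 1,
# so their Katz score is nonzero even though no direct edge exists.
A = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
S = katz_scores(A, beta=0.1, kmax=8)
```

The nonzero score S[0][2] is what makes Katz useful for predicting future links between nodes that are not yet adjacent.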
Using a CANDECOMP/PARAFAC <span class="hlt">tensor</span> <span class="hlt">decomposition</span> of the data, we illustrate the usefulness of exploiting the natural three-dimensional structure of temporal link data. Through several numerical experiments, we demonstrate that both matrix- and <span class="hlt">tensor</span>-based techniques are effective for temporal link prediction despite the inherent difficulty of the problem.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://www.osti.gov/scitech/servlets/purl/40652','SCIGOV-STC'); return false;" href="http://www.osti.gov/scitech/servlets/purl/40652"><span>The Hy-C <span class="hlt">process</span> (thermal <span class="hlt">decomposition</span> of natural gas): Potentially the lowest cost source of hydrogen with the least CO₂ emission</span></a></p> <p><a target="_blank" href="http://www.osti.gov/scitech">SciTech Connect</a></p> <p>Steinberg, M.</p> <p>1994-12-01</p> <p>The abundance of natural gas as a natural resource and its high hydrogen content make it a prime candidate for a low cost supply of hydrogen. The thermal <span class="hlt">decomposition</span> of natural gas by methane pyrolysis produces carbon and hydrogen. The <span class="hlt">process</span> energy required to produce one mol of hydrogen is only 5.3% of the higher heating value of methane. The thermal efficiency for hydrogen production as a fuel, without the use of carbon as a fuel, can be as high as 60%. Conventional steam reforming of methane requires 8.9% <span class="hlt">process</span> energy per mole of hydrogen, even though 4 moles of hydrogen can be produced per mole of methane, compared to 2 moles by methane pyrolysis.
When considering greenhouse gas warming, methane pyrolysis produces the least amount of CO₂ emissions per unit of hydrogen, and these emissions can be eliminated entirely when the carbon produced is either sequestered or sold as a materials commodity and hydrogen is used to fuel the <span class="hlt">process</span>. Conventional steam reforming of natural gas and CO shifting produces large amounts of CO₂ emissions. The energy requirement for non-fossil, solar, nuclear, and hydropower production of hydrogen, mainly through electrolysis, is much greater than that from natural gas. From the resource availability, energy, and environmental points of view, production of hydrogen by methane pyrolysis is most attractive. The by-product carbon black, when credited as a saleable material, makes hydrogen by thermal <span class="hlt">decomposition</span> of natural gas (the Hy-C <span class="hlt">process</span>) potentially the lowest cost source of large amounts of hydrogen.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4481346','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4481346"><span>Relationship between the <span class="hlt">Decomposition</span> <span class="hlt">Process</span> of Coarse Woody Debris and Fungal Community Structure as Detected by High-Throughput Sequencing in a Deciduous Broad-Leaved Forest in Japan</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Yamashita, Satoshi; Masuya, Hayato; Abe, Shin; Masaki, Takashi; Okabe, Kimiko</p> <p>2015-01-01</p> <p>We examined the relationship between the community structure of wood-decaying fungi, detected by high-throughput sequencing, and the <span class="hlt">decomposition</span> rate using 13 years of data from a forest dynamics plot.
For molecular analysis and wood density measurements, drill dust samples were collected from logs and stumps of Fagus and Quercus in the plot. Regression using a negative exponential model between wood density and time since death revealed that the <span class="hlt">decomposition</span> rate of Fagus was greater than that of Quercus. The residual between the expected value obtained from the regression curve and the observed wood density was used as a <span class="hlt">decomposition</span> rate index. Principal component analysis showed that the fungal community compositions of both Fagus and Quercus changed with time since death. Principal component analysis axis scores were used as an index of fungal community composition. A structural equation model for each wood genus was used to assess the effect of fungal community structure traits on the <span class="hlt">decomposition</span> rate and how the fungal community structure was determined by the traits of coarse woody debris. Results of the structural equation model suggested that the <span class="hlt">decomposition</span> rate of Fagus was affected by two fungal community composition components: one that was affected by time since death and another that was not affected by the traits of coarse woody debris. In contrast, the <span class="hlt">decomposition</span> rate of Quercus was not affected by coarse woody debris traits or fungal community structure. These findings suggest that, in the case of Fagus coarse woody debris, the fungal community structure is related to the <span class="hlt">decomposition</span> <span class="hlt">process</span> of its host substrate. Because fungal community structure is affected partly by the decay stage and wood density of its substrate, these factors influence each other. 
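The negative exponential regression mentioned above, density(t) = rho0 * exp(-k t), can be fitted by least squares on the logarithm of density. A sketch on synthetic data (the numbers are illustrative, not the study's):

```python
import math

def fit_decay_rate(times, densities):
    """Fit density = rho0 * exp(-k * t) by least squares on log(density).
    Returns (rho0, k); the residual from this curve is the kind of
    decomposition-rate index the study describes."""
    n = len(times)
    ys = [math.log(d) for d in densities]
    xbar = sum(times) / n
    ybar = sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(times, ys))
             / sum((x - xbar) ** 2 for x in times))
    return math.exp(ybar - slope * xbar), -slope

# Synthetic logs: initial density 0.60 g/cm^3 decaying at k = 0.05 per year,
# sampled over the 13-year window mentioned in the abstract.
times = [0, 2, 5, 8, 13]
densities = [0.60 * math.exp(-0.05 * t) for t in times]
rho0, k = fit_decay_rate(times, densities)
```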
Further research on interactive effects is needed to improve our understanding of the relationship between fungal community structure and the woody debris <span class="hlt">decomposition</span> <span class="hlt">process</span>.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016MNRAS.457.2501F','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016MNRAS.457.2501F"><span><span class="hlt">Tensor</span> classification of structure in smoothed particle hydrodynamics density fields</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Forgan, Duncan; Bonnell, Ian; Lucas, William; Rice, Ken</p> <p>2016-04-01</p> <p>As hydrodynamic simulations increase in scale and resolution, identifying structures with non-trivial geometries or regions of general interest becomes increasingly challenging.
There is a growing need for algorithms that identify a variety of different features in a simulation without requiring a 'by eye' search. We present <span class="hlt">tensor</span> classification as such a technique for smoothed particle hydrodynamics (SPH). These methods have already been used to great effect in N-body cosmological simulations, which require a smoothing scale supplied as a free input parameter. We show that <span class="hlt">tensor</span> classification successfully identifies a wide range of structures in SPH density fields using its native smoothing, removing a free parameter from the analysis and avoiding the need for tessellation of the density field, as required by some classification algorithms. As examples, we show that <span class="hlt">tensor</span> classification using the tidal <span class="hlt">tensor</span> and the velocity shear <span class="hlt">tensor</span> successfully identifies filaments, shells and sheet structures in giant molecular cloud simulations, as well as spiral arms in discs. The relationship between structures identified using different <span class="hlt">tensors</span> illustrates how different forces compete and co-operate to produce the observed density field.
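A common convention for such classification (used for the tidal tensor in the N-body literature these methods come from) is to count the eigenvalues of the symmetric 3x3 tensor that exceed a threshold. A minimal sketch, assuming the eigenvalues are already computed and using a zero threshold:

```python
def classify_structure(eigenvalues, threshold=0.0):
    """Classify a point by how many eigenvalues of its (symmetric 3x3)
    tidal or velocity-shear tensor exceed the threshold:
    3 -> cluster, 2 -> filament, 1 -> sheet, 0 -> void."""
    n_above = sum(1 for lam in eigenvalues if lam > threshold)
    return {3: "cluster", 2: "filament", 1: "sheet", 0: "void"}[n_above]

# Two eigenvalues above threshold: a filament-like point.
kind = classify_structure((1.8, 0.7, -0.2))
```

Comparing the labels produced by the tidal tensor against those from the velocity shear tensor, point by point, gives the kind of cross-tensor comparison the abstract describes.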
We therefore advocate the use of multiple <span class="hlt">tensors</span> to classify structure in SPH simulations, to shed light on the interplay of multiple physical <span class="hlt">processes</span>.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016MTDM..tmp...36S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016MTDM..tmp...36S"><span>A viscoelastic Unitary Crack-Opening strain <span class="hlt">tensor</span> for crack width assessment in fractured concrete structures</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Sciumè, Giuseppe; Benboudjema, Farid</p> <p>2016-09-01</p> <p>A post-<span class="hlt">processing</span> technique that allows computing the crack width in concrete is proposed for a viscoelastic damage model. Concrete creep is modeled by means of a Kelvin-Voigt cell, while the damage model is that of Mazars in its local form. Due to the local damage approach, the constitutive model is regularized with respect to the finite element mesh to avoid mesh dependency in the computed solution (regularization is based on fracture energy). The presented method is an extension to viscoelasticity of the approach proposed by Matallah et al. (Int. J. Numer. Anal. Methods Geomech. 34(15):1615-1633, 2010) for a purely elastic damage model. The viscoelastic Unitary Crack-Opening (UCO) strain <span class="hlt">tensor</span> is computed accounting for the time evolution of the surplus stress related to damage; this stress is obtained from <span class="hlt">decomposition</span> of the effective stress <span class="hlt">tensor</span>. From UCO the normal crack width is then derived, accounting for the finite element characteristic length in the direction orthogonal to the crack. This extension is quite natural and allows the impact of creep on the opening and closing of cracks to be accounted for in time-dependent problems.
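The Kelvin-Voigt cell used for creep above has a closed-form strain response under constant stress, eps(t) = (sigma/E)(1 - exp(-E t / eta)). A sketch with illustrative material values (not taken from the paper):

```python
import math

def kelvin_voigt_strain(sigma, E, eta, t):
    """Creep strain of a Kelvin-Voigt cell (spring E in parallel with a
    dashpot eta) under constant stress sigma:
    eps(t) = (sigma / E) * (1 - exp(-t / tau)), tau = eta / E."""
    tau = eta / E                      # retardation time
    return (sigma / E) * (1.0 - math.exp(-t / tau))

# Illustrative values (not from the paper): 2 MPa held on concrete with
# E = 30 GPa and a retardation time of about 30 days.
E, eta, sigma = 30e9, 30e9 * 30 * 86400.0, 2e6
eps_30d = kelvin_voigt_strain(sigma, E, eta, 30 * 86400.0)  # one tau elapsed
```

The strain creeps monotonically toward the asymptote sigma/E, which is why the surplus stress driving crack opening must be tracked in time rather than read off a purely elastic solution.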
A graphical interpretation of the viscoelastic UCO using Mohr's circles is proposed, and application cases together with a theoretical validation are presented to show the physical consistency of the computed viscoelastic UCO.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015JSP...160.1389B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015JSP...160.1389B"><span><span class="hlt">Tensor</span> Network Contractions for #SAT</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Biamonte, Jacob D.; Morton, Jason; Turner, Jacob</p> <p>2015-09-01</p> <p>The computational cost of counting the number of solutions satisfying a
Boolean formula, which is a problem instance of #SAT, has proven subtle to quantify. Even when finding individual satisfying solutions is computationally easy (e.g. 2-SAT, which is in P), determining the number of solutions can be #P-hard. Recently, computational methods simulating quantum systems experienced advancements due to the development of <span class="hlt">tensor</span> network algorithms and associated quantum physics-inspired techniques. By these methods, we give an algorithm using an axiomatic <span class="hlt">tensor</span> contraction language for n-variable #SAT instances whose complexity is exponential in c, the number of COPY-<span class="hlt">tensors</span>, and polynomial in g, the number of gates, and d, the maximal degree of any COPY-<span class="hlt">tensor</span>. Thus, n-variable counting problems can be solved efficiently when their <span class="hlt">tensor</span> network expression has at most O(log n) COPY-<span class="hlt">tensors</span> and polynomial fan-out. This framework also admits an intuitive proof of a variant of the Tovey conjecture (the r,1-SAT instance of the Dubois-Tovey theorem). This study increases the theory, expressiveness and application of <span class="hlt">tensor</span>-based algorithmic tools and provides an alternative insight into these problems, which have a long history in statistical physics and computer science.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://www.osti.gov/scitech/servlets/purl/974890','SCIGOV-STC'); return false;" href="http://www.osti.gov/scitech/servlets/purl/974890"><span>MATLAB <span class="hlt">tensor</span> classes for fast algorithm prototyping.</span></a></p> <p><a target="_blank" href="http://www.osti.gov/scitech">SciTech Connect</a></p> <p>Bader, Brett William; Kolda, Tamara Gibson</p> <p>2004-10-01</p> <p><span class="hlt">Tensors</span> (also known as multidimensional arrays or N-way arrays) are used in a variety of applications ranging from chemometrics to psychometrics.
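Contracting the clause tensors of a formula over its shared variable indices yields exactly the model count, which is the idea behind the #SAT tensor-network algorithm above. A brute-force sketch of the full contraction (exponential in the number of variables; the paper's point is that the cost can instead be tied to the number of COPY-tensors):

```python
from itertools import product

def model_count(n_vars, clauses):
    """#SAT as a tensor-network contraction: each clause is a 0/1 tensor
    over its literals' indices, each variable is a COPY wire shared by all
    clauses mentioning it, and the full contraction over {0,1} assignments
    equals the number of satisfying assignments."""
    total = 0
    for assignment in product((0, 1), repeat=n_vars):
        value = 1
        for clause in clauses:            # clause = list of (var, wanted) literals
            value *= int(any(assignment[v] == w for v, w in clause))
        total += value
    return total

# (x0 or x1) and (not x0 or x1): satisfied by (0,1) and (1,1).
count = model_count(2, [[(0, 1), (1, 1)], [(0, 0), (1, 1)]])
```

Note the example is a 2-SAT formula: finding one solution is easy, yet counting them in general is exactly the #P-hard regime the abstract discusses.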
We describe four MATLAB classes for <span class="hlt">tensor</span> manipulations that can be used for fast algorithm prototyping. The <span class="hlt">tensor</span> class extends the functionality of MATLAB's multidimensional arrays by supporting additional operations such as <span class="hlt">tensor</span> multiplication. The <span class="hlt">tensor</span> as matrix class supports the 'matricization' of a <span class="hlt">tensor</span>, i.e., the conversion of a <span class="hlt">tensor</span> to a matrix (and vice versa), a commonly used operation in many algorithms. Two additional classes represent <span class="hlt">tensors</span> stored in decomposed formats: cp <span class="hlt">tensor</span> and tucker <span class="hlt">tensor</span>. We describe all of these classes and then demonstrate their use by showing how to implement several <span class="hlt">tensor</span> algorithms that have appeared in the literature.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.osti.gov/scitech/biblio/6605215','SCIGOV-STC'); return false;" href="https://www.osti.gov/scitech/biblio/6605215"><span>Thermal <span class="hlt">decomposition</span> of tetramethyl orthosilicate in the gas phase: An experimental and theoretical study of the initiation <span class="hlt">process</span></span></a></p> <p><a target="_blank" href="http://www.osti.gov/scitech">SciTech Connect</a></p> <p>Chu, J.C.S.; Soller, R.; Lin, M.C.; Melius, C.F.</p> <p>1995-01-12</p> <p>The thermal <span class="hlt">decomposition</span> of Si(OCH₃)₄ (TMOS) has been studied by FTIR at temperatures between 858 and 968 K. The experiment was carried out in a static cell at a constant pressure of 700 Torr under highly diluted conditions. Additional experiments were performed by using toluene as a radical scavenger. The species monitored included TMOS, CH₂O, CH₄, and CO.
According to these measurements, the first-order global rate constants for the disappearance of TMOS without and with toluene can be given by k_g = 1.4 × 10¹⁶ exp(−81 200/RT) s⁻¹ and k_g = 2.0 × 10¹⁴ exp(−74 500/RT) s⁻¹, respectively. The noticeable difference between the two sets of Arrhenius parameters suggests that, in the absence of the inhibitor, the reactant was consumed to a significant extent by radical attacks at higher temperatures. The experimental data were kinetically modeled with the aid of a quantum-chemical calculation using the BAC-MP4 method. The results of the kinetic modeling, using the mechanism constructed on the basis of the quantum-chemical data and the known C/H/O chemistry, identified two rate-controlling reactions whose first-order rate constants are given here. 22 refs., 15 figs., 3 tabs.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016SPIE.9897E..0QC','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016SPIE.9897E..0QC"><span>Real-time framework for <span class="hlt">tensor</span>-based image enhancement for object classification</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Cyganek, Bogusław; Smołka, Bogdan</p> <p>2016-04-01</p> <p>In many practical situations visual pattern recognition is vastly burdened by the low quality of input images due to noise, geometrical distortions, and low quality of the acquisition hardware. However, although there are techniques for image quality improvement, such as nonlinear filtering, only a few attempts reported in the literature try to build these enhancement methods into a complete chain for multi-dimensional object recognition such as color video or hyperspectral images.
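The fitted Arrhenius expressions above can be evaluated directly. Here we assume, as is conventional in this literature, that the activation energies are in cal/mol with R = 1.987 cal/(mol K); the chosen temperature is simply a point inside the reported 858-968 K range:

```python
import math

R = 1.987  # cal/(mol K); assumes the fitted activation energies are in cal/mol

def arrhenius(A, Ea, T):
    """First-order rate constant k = A * exp(-Ea / (R * T)), in s^-1."""
    return A * math.exp(-Ea / (R * T))

# The two global rate constants from the abstract, evaluated at 900 K:
k_neat = arrhenius(1.4e16, 81200.0, 900.0)   # without toluene
k_scav = arrhenius(2.0e14, 74500.0, 900.0)   # with toluene scavenger
```

Consistent with the abstract's interpretation, k_neat comes out larger than k_scav: without the radical scavenger, chain reactions add to the disappearance rate of TMOS.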
In this work we propose a joint multilinear signal filtering and classification system built upon the multi-dimensional (<span class="hlt">tensor</span>) approach. <span class="hlt">Tensor</span> filtering is performed by projecting the multi-dimensional input signal into the <span class="hlt">tensor</span> subspace spanned by the best-rank <span class="hlt">tensor</span> <span class="hlt">decomposition</span> method. Object classification, in turn, is done by constructing a <span class="hlt">tensor</span> subspace based on the Higher-Order Singular Value <span class="hlt">Decomposition</span> method applied to the prototype patterns. In the experiments we show that the proposed chain allows high object recognition accuracy in real time, even from poor-quality prototypes. Even more importantly, the proposed framework allows unified classification of signals of any dimensions, such as color images or video sequences, which are exemplars of 3D and 4D <span class="hlt">tensors</span>, respectively. The paper also discusses some practical issues related to the implementation of the key components of the proposed system.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/1995PhRvD..52.2850G','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/1995PhRvD..52.2850G"><span><span class="hlt">Tensor</span> interactions and τ decays</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Godina Nava, J. J.; López Castro, G.</p> <p>1995-09-01</p> <p>We study the effects of charged <span class="hlt">tensor</span> weak currents on the strangeness-changing decays of the τ lepton. First, we use the available information on the K⁺e3 form factors to obtain B(τ⁻ → K⁻π⁰ντ) ~ 10⁻⁴ when the Kπ system is produced in an antisymmetric <span class="hlt">tensor</span> configuration.
Then we propose a mechanism for the direct production of the K*₂(1430) in τ decays. Using the current upper limit on this decay we set a bound on the symmetric <span class="hlt">tensor</span> interactions.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2014JPhCS.543a2001K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2014JPhCS.543a2001K"><span><span class="hlt">Tensor</span>-polarized structure functions: <span class="hlt">Tensor</span> structure of deuteron in 2020's</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Kumano, S.</p> <p>2014-10-01</p> <p>We explain spin structure for a spin-one hadron, in which there are new structure functions, in addition to the ones (F1, F2, g1, g2) which exist for the spin-1/2 nucleon, associated with its <span class="hlt">tensor</span> structure. The new structure functions are b1, b2, b3, and b4 in deep inelastic scattering of a charged lepton from a spin-one hadron such as the deuteron. Among them, the twist-two functions are related by the Callan-Gross type relation b2 = 2xb1 in the Bjorken scaling limit. First, these new structure functions are introduced, and useful formulae are derived for projection operators of b1-4 from a hadron <span class="hlt">tensor</span> Wμν. Second, a sum rule is explained for b1, and possible <span class="hlt">tensor</span>-polarized distributions are discussed by using HERMES data in order to propose future experimental measurements and to compare them with theoretical models. A proposal was approved to measure b1 at the Thomas Jefferson National Accelerator Facility (JLab), so that much progress is expected for b1 in the near future.
Third, formalisms of polarized proton-deuteron Drell-Yan <span class="hlt">processes</span> are explained, especially for probing <span class="hlt">tensor</span>-polarized antiquark distributions, which were suggested by the HERMES data. The studies of the <span class="hlt">tensor</span>-polarized structure functions will open a new era in the 2020s for <span class="hlt">tensor</span>-structure studies in terms of quark and gluon degrees of freedom, which are very different from ordinary descriptions in terms of nucleons and mesons.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3293488','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3293488"><span>Mode <span class="hlt">decomposition</span> evolution equations</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Wang, Yang; Wei, Guo-Wei; Yang, Siyang</p> <p>2011-01-01</p> <p>Partial differential equation (PDE) based methods have become some of the most powerful tools for exploring the fundamental problems in signal <span class="hlt">processing</span>, image <span class="hlt">processing</span>, computer vision, machine vision and artificial intelligence in the past two decades. The advantages of PDE based approaches are that they can be made fully automatic and robust for the analysis of images, videos and high-dimensional data. A fundamental question is whether one can use PDEs to perform all the basic tasks in image <span class="hlt">processing</span>. If one can devise PDEs to perform full-scale mode <span class="hlt">decomposition</span> for signals and images, the modes thus generated would be very useful for secondary <span class="hlt">processing</span> to meet the needs in various types of signal and image <span class="hlt">processing</span>. 
Despite great progress in PDE based image analysis in the past two decades, the basic roles of PDEs in image/signal analysis are limited to PDE based low-pass filters, and their applications to noise removal, edge detection, segmentation, etc. At present, it is not clear how to construct PDE based methods for full-scale mode <span class="hlt">decomposition</span>. The above-mentioned limitation of most current PDE based image/signal <span class="hlt">processing</span> methods is addressed in the proposed work, in which we introduce a family of mode <span class="hlt">decomposition</span> evolution equations (MoDEEs) for a vast variety of applications. The MoDEEs are constructed as an extension of a PDE based high-pass filter (Europhys. Lett., 59(6): 814, 2002) by using arbitrarily high order PDE based low-pass filters introduced by Wei (IEEE Signal <span class="hlt">Process</span>. Lett., 6(7): 165, 1999). The use of arbitrarily high order PDEs is essential to the frequency localization in the mode <span class="hlt">decomposition</span>. Similar to the wavelet transform, the present MoDEEs have a controllable time-frequency localization and allow a perfect reconstruction of the original function. Therefore, the MoDEE operation is also called a PDE transform. 
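The mechanics of such a transform can be illustrated with a short numpy sketch. This is a simplified Fourier-domain surrogate, not the authors' code: a high-order diffusion equation acts as a sharp low-pass filter, modes are differences of successively smoother outputs, and they sum exactly back to the input (the cutoff and order are illustrative assumptions).

```python
import numpy as np

def pde_lowpass(f, kc, m):
    # Evolving u_t = -(-Laplacian)^m u to unit time multiplies each Fourier
    # mode by exp(-(|k|/kc)^(2m)); large m gives a sharp frequency cutoff.
    k = np.fft.fftfreq(f.size) * f.size
    return np.fft.ifft(np.fft.fft(f) * np.exp(-(np.abs(k) / kc) ** (2 * m))).real

def pde_transform(f, cutoffs, m=8):
    # Modes are differences of successively smoother low-pass outputs,
    # so by construction they sum exactly back to the input signal.
    lows = [pde_lowpass(f, kc, m) for kc in sorted(cutoffs)] + [f]
    return [lows[0]] + [hi - lo for lo, hi in zip(lows, lows[1:])]

t = np.linspace(0, 1, 256, endpoint=False)
f = np.sin(2 * np.pi * 3 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)
modes = pde_transform(f, cutoffs=[10])   # split below/above 10 cycles
```

With order m = 8 the filter response is nearly an ideal brick wall, so the first mode isolates the 3-cycle component and the second the 40-cycle component, while their sum reconstructs the signal.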
However, modes generated from the present approach are in the spatial or time domain and can be</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/22408289','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/22408289"><span>Mode <span class="hlt">decomposition</span> evolution equations.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Wang, Yang; Wei, Guo-Wei; Yang, Siyang</p> <p>2012-03-01</p> <p>Partial differential equation (PDE) based methods have become some of the most powerful tools for exploring the fundamental problems in signal <span class="hlt">processing</span>, image <span class="hlt">processing</span>, computer vision, machine vision and artificial intelligence in the past two decades. The advantages of PDE based approaches are that they can be made fully automatic and robust for the analysis of images, videos and high-dimensional data. A fundamental question is whether one can use PDEs to perform all the basic tasks in image <span class="hlt">processing</span>. If one can devise PDEs to perform full-scale mode <span class="hlt">decomposition</span> for signals and images, the modes thus generated would be very useful for secondary <span class="hlt">processing</span> to meet the needs in various types of signal and image <span class="hlt">processing</span>. Despite great progress in PDE based image analysis in the past two decades, the basic roles of PDEs in image/signal analysis are limited to PDE based low-pass filters, and their applications to noise removal, edge detection, segmentation, etc. At present, it is not clear how to construct PDE based methods for full-scale mode <span class="hlt">decomposition</span>. 
The above-mentioned limitation of most current PDE based image/signal <span class="hlt">processing</span> methods is addressed in the proposed work, in which we introduce a family of mode <span class="hlt">decomposition</span> evolution equations (MoDEEs) for a vast variety of applications. The MoDEEs are constructed as an extension of a PDE based high-pass filter (Europhys. Lett., 59(6): 814, 2002) by using arbitrarily high order PDE based low-pass filters introduced by Wei (IEEE Signal <span class="hlt">Process</span>. Lett., 6(7): 165, 1999). The use of arbitrarily high order PDEs is essential to the frequency localization in the mode <span class="hlt">decomposition</span>. Similar to the wavelet transform, the present MoDEEs have a controllable time-frequency localization and allow a perfect reconstruction of the original function. Therefore, the MoDEE operation is also called a PDE transform. However, modes generated from the present approach are in the spatial or time domain and can be</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017MeScT..28c5403L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017MeScT..28c5403L"><span><span class="hlt">Tensor</span>-based dynamic reconstruction method for electrical capacitance tomography</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Lei, J.; Mu, H. P.; Liu, Q. B.; Li, Z. H.; Liu, S.; Wang, X. Y.</p> <p>2017-03-01</p> <p>Electrical capacitance tomography (ECT) is an attractive visualization measurement method, in which the acquisition of high-quality images is beneficial for the understanding of the underlying physical or chemical mechanisms of the dynamic behaviors of the measurement objects. 
In real-world measurement environments, imaging objects are often in a dynamic <span class="hlt">process</span>, and the exploitation of the spatial-temporal correlations related to the dynamic nature will contribute to improving the imaging quality. Different from existing imaging methods that are often used in ECT measurements, in this paper a dynamic image sequence is stacked into a third-order <span class="hlt">tensor</span> that consists of a low rank <span class="hlt">tensor</span> and a sparse <span class="hlt">tensor</span> within the framework of the multiple measurement vectors model and the multi-way data analysis method. The low rank <span class="hlt">tensor</span> models the similar spatial distribution information among frames, which is slowly changing over time, and the sparse <span class="hlt">tensor</span> captures the perturbations or differences introduced in each frame, which is rapidly changing over time. With the assistance of Tikhonov regularization theory and the <span class="hlt">tensor</span>-based multi-way data analysis method, a new cost function, with consideration of the multi-frame measurement data, the dynamic evolution information of a time-varying imaging object and the characteristics of the low rank <span class="hlt">tensor</span> and the sparse <span class="hlt">tensor</span>, is proposed to convert the imaging task in the ECT measurement into a reconstruction problem of a third-order image <span class="hlt">tensor</span>. An effective algorithm is developed to search for the optimal solution of the proposed cost function, and the images are reconstructed via a batching pattern. 
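The low-rank-plus-sparse split of a stacked frame sequence can be sketched, in a much-simplified form, as a robust-PCA-style alternation on the frame-by-pixel unfolding: a truncated SVD models the slowly varying background, and soft-thresholding of the residual captures the rapidly changing perturbations. The rank, threshold, and toy data below are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def soft(x, tau):
    # Entrywise soft-thresholding, the proximal operator of the L1 norm.
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def lowrank_plus_sparse(frames, rank=1, tau=0.5, iters=50):
    # Alternate a rank-truncated SVD (slowly varying part) with
    # soft-thresholding of the residual (rapidly changing part).
    X = frames.reshape(frames.shape[0], -1)   # frame-by-pixel unfolding
    S = np.zeros_like(X)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X - S, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        S = soft(X - L, tau)
    return L.reshape(frames.shape), S.reshape(frames.shape)

# Toy sequence: a static 8x8 background plus one bright moving pixel.
rng = np.random.default_rng(1)
bg = rng.normal(size=(8, 8))
frames = np.stack([bg] * 6)
for t in range(6):
    frames[t, t, t] += 5.0          # the per-frame "perturbation"
L, S = lowrank_plus_sparse(frames, rank=1, tau=0.5)
```

The recovered S isolates the moving bright pixel in each frame while L retains the shared background, mirroring the roles of the sparse and low rank tensors described above.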
The feasibility and effectiveness of the developed reconstruction method are numerically validated.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2006AGUFMIN31A1320C','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2006AGUFMIN31A1320C"><span>Visualization of Time-Varying Strain Green <span class="hlt">Tensors</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Callaghan, S. A.; Maechling, P.</p> <p>2006-12-01</p> <p>Geophysical <span class="hlt">tensor</span> data calculated by earthquake wave propagation simulation codes is used to investigate stresses and strains near the earth's surface. To assist scientists with the interpretation of <span class="hlt">tensor</span> data sets, we have developed distributed <span class="hlt">processing</span> and visualization techniques for visualizing time-varying, volumetric <span class="hlt">tensor</span> data. We have applied our techniques to strain Green <span class="hlt">tensor</span> data calculated for the SCEC/CME CyberShake project in order to explore basin effects in Southern California. One step in the CyberShake project workflow is to generate strain Green <span class="hlt">tensors</span> for a volume to allow physics-based seismic hazard analysis. These volumes are typically 400 x 400 x 40 km with grid points every 200 m, with 1800 timesteps, yielding multiple terabytes of <span class="hlt">tensor</span> data in many small files. To graphically display the six-component <span class="hlt">tensors</span>, we use ellipsoids with the major axes aligned with the three eigenvectors, scaled according to the normalized eigenvalues, and colored based on the magnitude of the eigenvalues. This allows for visualization of the <span class="hlt">tensor</span> magnitudes, which span a range of over 10⁵, while keeping the ellipsoids a constant size. 
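The glyph construction just described (eigenvector axes scaled by normalized eigenvalues, with the overall magnitude reserved for coloring) can be sketched as follows; the function and variable names are illustrative, not taken from the CyberShake tools.

```python
import numpy as np

def ellipsoid_glyph(T, eps=1e-12):
    # Symmetric 3x3 tensor -> glyph axes: eigenvectors scaled by eigenvalue
    # magnitudes normalized to the largest, plus the magnitude for coloring.
    w, V = np.linalg.eigh(T)            # eigenvalues in ascending order
    mags = np.abs(w)
    scale = mags / (mags.max() + eps)   # axis lengths in (0, 1]
    return V * scale, mags.max()        # columns are the scaled axes

T = np.diag([1.0, 2.0, 4.0])
axes, magnitude = ellipsoid_glyph(T)
# axis lengths 0.25, 0.5 and 1.0; magnitude 4.0 drives the color map
```

Because the axes are normalized per tensor, glyphs stay a constant size even when tensor magnitudes span several orders of magnitude; the scalar magnitude carries that information through the color map instead.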
This software was implemented in the C language using the Mesa implementation of OpenGL. In order to allow interactive visualization of the data, rendering is performed on a parallel computational cluster and real-time images are sent to the user via network sockets. To enable meaningful investigation of the data, a scale for the ellipsoid colors is included. Additionally, a georeferenced surface image is added to provide a point of reference for the user and allow analysis of <span class="hlt">tensor</span> behavior with other georeferenced data, enabling validation of the CyberShake software and examination of varying ground motions due to basin effects.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2002IJNAM..26..925J','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2002IJNAM..26..925J"><span><span class="hlt">Tensor</span> visualizations in computational geomechanics</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Jeremi, Boris; Scheuermann, Gerik; Frey, Jan; Yang, Zhaohui; Hamann, Bernd; Joy, Kenneth I.; Hagen, Hans</p> <p>2002-08-01</p> <p>We present a novel technique for visualizing <span class="hlt">tensors</span> in three-dimensional (3D) space. Of particular interest is the visualization of stress <span class="hlt">tensors</span> resulting from 3D numerical simulations in computational geomechanics. To this end we present three different approaches to visualizing <span class="hlt">tensors</span> in 3D space, namely hedgehogs, hyperstreamlines and hyperstreamsurfaces. We also present a number of examples related to stress distributions in 3D solids subjected to single loads and load couples. In addition, we present stress visualizations resulting from single-pile and pile-group computations. 
The main objective of this work is to investigate various techniques for visualizing general Cartesian <span class="hlt">tensors</span> of rank 2 and their application to geomechanics problems.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26357122','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26357122"><span>Derived Metric <span class="hlt">Tensors</span> for Flow Surface Visualization.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Obermaier, H; Joy, K I</p> <p>2012-12-01</p> <p>Integral flow surfaces constitute a widely used flow visualization tool due to their capability to convey important flow information such as fluid transport, mixing, and domain segmentation. Current flow surface rendering techniques limit their expressiveness, however, by focusing virtually exclusively on displacement visualization, visually neglecting the more complex notion of deformation such as shearing and stretching that is central to the field of continuum mechanics. To incorporate this information into the flow surface visualization and analysis <span class="hlt">process</span>, we derive a metric <span class="hlt">tensor</span> field that encodes local surface deformations as induced by the velocity gradient of the underlying flow field. We demonstrate how properties of the resulting metric <span class="hlt">tensor</span> field are capable of enhancing present surface visualization and generation methods and develop novel surface querying, sampling, and visualization techniques. 
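A minimal sketch of deriving such a metric tensor for a parametrized surface patch is shown below: the metric g = JᵀJ is built from a finite-difference Jacobian, and its eigenvalues measure local stretching. The cylinder example and function names are illustrative assumptions, not the paper's flow-surface pipeline.

```python
import numpy as np

def surface_metric(phi, u, v, h=1e-6):
    # Metric tensor g = J^T J of a surface phi(u, v) -> R^3; the Jacobian
    # is estimated by central differences. Eigenvalues of g give squared
    # local stretch factors along the parameter directions.
    J = np.column_stack([
        (phi(u + h, v) - phi(u - h, v)) / (2 * h),
        (phi(u, v + h) - phi(u, v - h)) / (2 * h),
    ])
    return J.T @ J

# Cylinder of radius 2: the u direction is stretched by a factor of 2.
cyl = lambda u, v: np.array([2 * np.cos(u), 2 * np.sin(u), v])
g = surface_metric(cyl, 0.3, 0.0)
# g is close to diag(4, 1): squared stretches 4 (around) and 1 (along axis)
```

For a surface advected by a flow, the same construction applied to the flow map encodes exactly the shearing and stretching information discussed above.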
The provided results show how this step towards unifying classic flow visualization and more advanced concepts from continuum mechanics enables more detailed and improved flow analysis.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016JHEP...06..060H','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016JHEP...06..060H"><span><span class="hlt">Tensor</span> integrand reduction via Laurent expansion</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Hirschi, Valentin; Peraro, Tiziano</p> <p>2016-06-01</p> <p>We introduce a new method for the application of one-loop integrand reduction via the Laurent expansion algorithm, as implemented in the public C++ library Ninja. We show how the coefficients of the Laurent expansion can be computed by suitable contractions of the loop numerator <span class="hlt">tensor</span> with cut-dependent projectors, making it possible to interface Ninja to any one-loop matrix element generator that can provide the components of this <span class="hlt">tensor</span>. We implemented this technique in the Ninja library and interfaced it to MadLoop, which is part of the public MadGraph5_aMC@NLO framework. We performed a detailed performance study, comparing against other public reduction tools, namely CutTools, Samurai, IREGI, PJFry++ and Golem95. We find that Ninja outperforms traditional integrand reduction in both speed and numerical stability, the latter being on par with that of the <span class="hlt">tensor</span> integral reduction tool Golem95, which is, however, more limited and slower than Ninja. 
We considered many benchmark multi-scale <span class="hlt">processes</span> of increasing complexity, involving QCD and electroweak corrections as well as effective non-renormalizable couplings, showing that Ninja's performance scales well with both the rank and multiplicity of the considered <span class="hlt">process</span>.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://www.osti.gov/scitech/servlets/purl/1281026','SCIGOV-STC'); return false;" href="http://www.osti.gov/scitech/servlets/purl/1281026"><span><span class="hlt">Tensor</span> integrand reduction via Laurent expansion</span></a></p> <p><a target="_blank" href="http://www.osti.gov/scitech">SciTech Connect</a></p> <p>Hirschi, Valentin; Peraro, Tiziano</p> <p>2016-06-09</p> <p>We introduce a new method for the application of one-loop integrand reduction via the Laurent expansion algorithm, as implemented in the public C++ library Ninja. We show how the coefficients of the Laurent expansion can be computed by suitable contractions of the loop numerator <span class="hlt">tensor</span> with cut-dependent projectors, making it possible to interface Ninja to any one-loop matrix element generator that can provide the components of this <span class="hlt">tensor</span>. We implemented this technique in the Ninja library and interfaced it to MadLoop, which is part of the public MadGraph5_aMC@NLO framework. We performed a detailed performance study, comparing against other public reduction tools, namely CutTools, Samurai, IREGI, PJFry++ and Golem95. We find that Ninja out-performs traditional integrand reduction in both speed and numerical stability, the latter being on par with that of the <span class="hlt">tensor</span> integral reduction tool Golem95 which is however more limited and slower than Ninja. 
Lastly, we considered many benchmark multi-scale <span class="hlt">processes</span> of increasing complexity, involving QCD and electro-weak corrections as well as effective non-renormalizable couplings, showing that Ninja’s performance scales well with both the rank and multiplicity of the considered <span class="hlt">process</span>.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.osti.gov/pages/biblio/1281026-tensor-integrand-reduction-via-laurent-expansion','SCIGOV-DOEP'); return false;" href="https://www.osti.gov/pages/biblio/1281026-tensor-integrand-reduction-via-laurent-expansion"><span><span class="hlt">Tensor</span> integrand reduction via Laurent expansion</span></a></p> <p><a target="_blank" href="http://www.osti.gov/pages">DOE PAGES</a></p> <p>Hirschi, Valentin; Peraro, Tiziano</p> <p>2016-06-09</p> <p>We introduce a new method for the application of one-loop integrand reduction via the Laurent expansion algorithm, as implemented in the public C++ library Ninja. We show how the coefficients of the Laurent expansion can be computed by suitable contractions of the loop numerator <span class="hlt">tensor</span> with cut-dependent projectors, making it possible to interface Ninja to any one-loop matrix element generator that can provide the components of this <span class="hlt">tensor</span>. We implemented this technique in the Ninja library and interfaced it to MadLoop, which is part of the public MadGraph5_aMC@NLO framework. We performed a detailed performance study, comparing against other public reduction tools, namely CutTools, Samurai, IREGI, PJFry++ and Golem95. We find that Ninja out-performs traditional integrand reduction in both speed and numerical stability, the latter being on par with that of the <span class="hlt">tensor</span> integral reduction tool Golem95 which is however more limited and slower than Ninja. 
Lastly, we considered many benchmark multi-scale <span class="hlt">processes</span> of increasing complexity, involving QCD and electro-weak corrections as well as effective non-renormalizable couplings, showing that Ninja’s performance scales well with both the rank and multiplicity of the considered <span class="hlt">process</span>.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.osti.gov/scitech/biblio/1021587','SCIGOV-STC'); return false;" href="https://www.osti.gov/scitech/biblio/1021587"><span>Scalable <span class="hlt">tensor</span> factorizations with incomplete data.</span></a></p> <p><a target="_blank" href="http://www.osti.gov/scitech">SciTech Connect</a></p> <p>Morup, Morten; Dunlavy, Daniel M.; Acar, Evrim; Kolda, Tamara Gibson</p> <p>2010-07-01</p> <p>The problem of incomplete data - i.e., data with missing or unknown values - in multi-way arrays is ubiquitous in biomedical signal <span class="hlt">processing</span>, network traffic analysis, bibliometrics, social network analysis, chemometrics, computer vision, communication networks, etc. We consider the problem of how to factorize data sets with missing values with the goal of capturing the underlying latent structure of the data and possibly reconstructing missing values (i.e., <span class="hlt">tensor</span> completion). We focus on one of the most well-known <span class="hlt">tensor</span> factorizations that captures multi-linear structure, CANDECOMP/PARAFAC (CP). In the presence of missing data, CP can be formulated as a weighted least squares problem that models only the known entries. We develop an algorithm called CP-WOPT (CP Weighted OPTimization) that uses a first-order optimization approach to solve the weighted least squares problem. Based on extensive numerical experiments, our algorithm is shown to successfully factorize <span class="hlt">tensors</span> with noise and up to 99% missing data. 
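The weighted least squares formulation behind CP-WOPT can be illustrated with plain gradient descent on a small masked completion problem. This is a first-order sketch under illustrative assumptions (tensor size, rank, step size, seed), not the CP-WOPT implementation, which uses a more sophisticated first-order solver.

```python
import numpy as np

def khatri_rao(B, C):
    # Column-wise Kronecker product of two factor matrices.
    return (B[:, None, :] * C[None, :, :]).reshape(-1, B.shape[1])

def cp_wls(X, W, rank, steps=3000, lr=0.01, seed=0):
    # Gradient descent on the weighted loss 0.5*||W * (X - [[A,B,C]])||^2,
    # where the 0/1 mask W restricts the fit to the known entries only.
    rng = np.random.default_rng(seed)
    A, B, C = (0.1 * rng.normal(size=(n, rank)) for n in X.shape)
    I, J, K = X.shape
    for _ in range(steps):
        R = W * (X - np.einsum('ir,jr,kr->ijk', A, B, C))
        A = A + lr * R.reshape(I, J * K) @ khatri_rao(B, C)
        B = B + lr * np.moveaxis(R, 1, 0).reshape(J, I * K) @ khatri_rao(A, C)
        C = C + lr * np.moveaxis(R, 2, 0).reshape(K, I * J) @ khatri_rao(A, B)
    return A, B, C

# Exact rank-one 4x4x4 tensor with ~70% of its entries observed.
rng = np.random.default_rng(2)
a, b, c = (rng.uniform(0.5, 1.0, size=4) for _ in range(3))
X = np.einsum('i,j,k->ijk', a, b, c)
W = (rng.random(X.shape) < 0.7).astype(float)
A, B, C = cp_wls(X, W, rank=1)
Xhat = np.einsum('ir,jr,kr->ijk', A, B, C)   # missing entries recovered too
```

Because only the masked entries enter the residual, the fit ignores missing values entirely, yet the recovered low-rank factors also reconstruct the unobserved entries (tensor completion).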
A unique aspect of our approach is that it scales to sparse large-scale data, e.g., 1000 x 1000 x 1000 with five million known entries (0.5% dense). We further demonstrate the usefulness of CP-WOPT on two real-world applications: a novel EEG (electroencephalogram) application where missing data is frequently encountered due to disconnections of electrodes and the problem of modeling computer network traffic where data may be absent due to the expense of the data collection <span class="hlt">process</span>.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017EGUGA..19..438G','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017EGUGA..19..438G"><span>Obtaining orthotropic elasticity <span class="hlt">tensor</span> using entries zeroing method.</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Gierlach, Bartosz; Danek, Tomasz</p> <p>2017-04-01</p> <p> rotation. Computations were parallelized with OpenMP to decrease computational time, which enables different <span class="hlt">tensors</span> to be <span class="hlt">processed</span> by different threads. As a result, the distributions of rotated <span class="hlt">tensor</span> entry values were obtained. For the entries to be zeroed, we observe nearly normal distributions with zero mean, or sums of two normal distributions with opposite means. Non-zero entries represent different distributions with two or three maxima. Analysis of the obtained results shows that the described method produces consistent values of the quaternions used to rotate <span class="hlt">tensors</span>. Despite a less complex target function in the optimization process compared with the common approach, the entries-zeroing method provides results that can be applied to obtain an orthotropic <span class="hlt">tensor</span> with good reliability. 
A modification of the method can also produce a tool for obtaining effective <span class="hlt">tensors</span> belonging to other symmetry classes. This research was supported by the Polish National Science Center under contract No. DEC-2013/11/B/ST10/0472.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/19870006987','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19870006987"><span>Metallo-Organic <span class="hlt">Decomposition</span> (MOD) film development</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Parker, J.</p> <p>1986-01-01</p> <p>The <span class="hlt">processing</span> techniques and problems encountered in formulating metallo-organic <span class="hlt">decomposition</span> (MOD) films used in contact structures for thin solar cells are described. The use of thermogravimetric analysis (TGA) and differential scanning calorimetry (DSC) techniques performed at Jet Propulsion Laboratory (JPL) in understanding the <span class="hlt">decomposition</span> reactions led to improvements in <span class="hlt">process</span> procedures. The characteristics of the available MOD films are described in detail.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017JCoPh.339..285E','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017JCoPh.339..285E"><span>A dynamical adaptive <span class="hlt">tensor</span> method for the Vlasov-Poisson system</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Ehrlacher, Virginie; Lombardi, Damiano</p> <p>2017-06-01</p> <p>A numerical method is proposed to solve the full-Eulerian time-dependent Vlasov-Poisson system. 
The algorithm relies on the construction of a <span class="hlt">tensor</span> <span class="hlt">decomposition</span> of the solution whose rank is adapted at each time step. This <span class="hlt">decomposition</span> is obtained through the use of an efficient modified Progressive Generalized <span class="hlt">Decomposition</span> (PGD) method, whose convergence is proved. We suggest in addition a symplectic time-discretization splitting scheme that preserves the Hamiltonian properties of the system. This scheme is naturally obtained by considering the <span class="hlt">tensor</span> structure of the approximation. The proposed approach is illustrated through time-dependent 1D-1D, 2D-2D and 3D-3D numerical examples.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015AdAtS..32..457Y','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015AdAtS..32..457Y"><span>Attributing analysis on the model bias in surface temperature in the climate system model FGOALS-s2 through a <span class="hlt">process</span>-based <span class="hlt">decomposition</span> method</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Yang, Yang; Ren, Rongcai; Cai, Ming; Rao, Jian</p> <p>2015-04-01</p> <p>This study uses the coupled atmosphere-surface climate feedback-response analysis method (CFRAM) to analyze the surface temperature biases in the Flexible Global Ocean-Atmosphere-Land System model, spectral version 2 (FGOALS-s2) in January and July. 
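The rank-adaptive tensor construction used by such PGD-type methods can be illustrated on a separated two-variable toy problem: each enrichment step computes one new rank-one pair by an alternating fixed-point iteration on the current residual, and the rank grows until the residual is small. This is a greedy sketch in the spirit of PGD, with illustrative tolerances and test functions, not the authors' Vlasov-Poisson solver.

```python
import numpy as np

def greedy_separated(F, tol=1e-6, max_rank=20, inner=50, seed=3):
    # Greedy separated representation F(x, v) ~ sum_k r_k(x) s_k(v):
    # alternating least-squares updates on the residual yield each new
    # rank-one pair; terms are added until the relative residual < tol.
    rng = np.random.default_rng(seed)
    R, terms = F.copy(), []
    for _ in range(max_rank):
        if np.linalg.norm(R) <= tol * np.linalg.norm(F):
            break
        s = rng.normal(size=F.shape[1])
        for _ in range(inner):
            r = R @ s / (s @ s)      # best r for fixed s (least squares)
            s = R.T @ r / (r @ r)    # best s for fixed r
        terms.append((r, s))
        R = R - np.outer(r, s)       # deflate: enrich the rank by one
    return terms

x = np.linspace(0, 1, 50)
v = np.linspace(-1, 1, 60)
F = (np.outer(np.exp(-5 * (x - 0.3) ** 2), np.exp(-3 * v ** 2))
     + 0.4 * np.outer(np.sin(np.pi * x), 0.5 + v ** 2))   # exact rank 2
terms = greedy_separated(F)
Frec = sum(np.outer(r, s) for r, s in terms)
```

On this exactly rank-2 example, the adaptive loop stops after a couple of enrichment steps, which is the discrete analogue of adapting the rank of the solution at each time step.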
The <span class="hlt">process</span>-based <span class="hlt">decomposition</span> of the surface temperature biases, defined as the difference between the model and ERA-Interim during 1979-2005, enables us to attribute the model surface temperature biases to individual radiative <span class="hlt">processes</span> including ozone, water vapor, cloud, and surface albedo; and non-radiative <span class="hlt">processes</span> including surface sensible and latent heat fluxes, and dynamic <span class="hlt">processes</span> at the surface and in the atmosphere. The results show that significant model surface temperature biases are almost globally present, are generally larger over land than over oceans, and are relatively larger in summer than in winter. Relative to the model biases in non-radiative <span class="hlt">processes</span>, which tend to dominate the surface temperature biases in most parts of the world, biases in radiative <span class="hlt">processes</span> are much smaller, except in the sub-polar Antarctic region where the cold biases from the much overestimated surface albedo are compensated for by the warm biases from nonradiative <span class="hlt">processes</span>. The larger biases in non-radiative <span class="hlt">processes</span> mainly lie in surface heat fluxes and in surface dynamics, which are twice as large in the Southern Hemisphere as in the Northern Hemisphere and always tend to compensate for each other. 
In particular, the upward/downward heat fluxes are systematically underestimated/overestimated in most parts of the world, and are mainly compensated for by surface dynamic <span class="hlt">processes</span> including the increased heat storage in deep oceans across the globe.</p> </li> </ol> <div class="pull-right"> <ul class="pagination"> <li><a href="#" onclick='return showDiv("page_1");'>«</a></li> <li><a href="#" onclick='return showDiv("page_15");'>15</a></li> <li><a href="#" onclick='return showDiv("page_16");'>16</a></li> <li class="active"><span>17</span></li> <li><a href="#" onclick='return showDiv("page_18");'>18</a></li> <li><a href="#" onclick='return showDiv("page_19");'>19</a></li> <li><a href="#" onclick='return showDiv("page_25");'>»</a></li> </ul> </div> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_17 --> <div id="page_18" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <div class="pull-right"> <ul class="pagination"> <li><a href="#" onclick='return showDiv("page_1");'>«</a></li> <li><a href="#" onclick='return showDiv("page_16");'>16</a></li> <li><a href="#" onclick='return showDiv("page_17");'>17</a></li> <li class="active"><span>18</span></li> <li><a href="#" onclick='return showDiv("page_19");'>19</a></li> <li><a href="#" onclick='return showDiv("page_20");'>20</a></li> <li><a href="#" onclick='return showDiv("page_25");'>»</a></li> </ul> </div> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="341"> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/19950012873','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19950012873"><span>Visualizing second order <span class="hlt">tensor</span> fields with hyperstreamlines</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Delmarcelle, Thierry; Hesselink, Lambertus</p> <p>1993-01-01</p> <p>Hyperstreamlines are a 
generalization to second order <span class="hlt">tensor</span> fields of the conventional streamlines used in vector field visualization. As opposed to point icons commonly used in visualizing <span class="hlt">tensor</span> fields, hyperstreamlines form a continuous representation of the complete <span class="hlt">tensor</span> information along a three-dimensional path. This technique is useful in visualizing both symmetric and unsymmetric three-dimensional <span class="hlt">tensor</span> data. Several examples of <span class="hlt">tensor</span> field visualization in solid materials and fluid flows are provided.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/19980003843','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19980003843"><span>Development of the <span class="hlt">Tensoral</span> Computer Language</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Ferziger, Joel; Dresselhaus, Eliot</p> <p>1996-01-01</p> <p>The research scientist or engineer wishing to perform large scale simulations or to extract useful information from existing databases is required to have expertise in the details of the particular database, the numerical methods and the computer architecture to be used. This poses a significant practical barrier to the use of simulation data. The goal of this research was to develop a high-level computer language called <span class="hlt">Tensoral</span>, designed to remove this barrier. The <span class="hlt">Tensoral</span> language provides a framework in which efficient generic data manipulations can be easily coded and implemented. First of all, <span class="hlt">Tensoral</span> is general. The fundamental objects in <span class="hlt">Tensoral</span> represent <span class="hlt">tensor</span> fields and the operators that act on them. 
The numerical implementation of these <span class="hlt">tensors</span> and operators is completely and flexibly programmable. New mathematical constructs and operators can be easily added to the <span class="hlt">Tensoral</span> system. <span class="hlt">Tensoral</span> is compatible with existing languages. <span class="hlt">Tensoral</span> <span class="hlt">tensor</span> operations co-exist in a natural way with a host language, which may be any sufficiently powerful computer language such as Fortran, C, or Vectoral. <span class="hlt">Tensoral</span> is very-high-level. <span class="hlt">Tensor</span> operations in <span class="hlt">Tensoral</span> typically act on entire databases (i.e., arrays) at one time and may, therefore, correspond to many lines of code in a conventional language. <span class="hlt">Tensoral</span> is efficient. <span class="hlt">Tensoral</span> is a compiled language. Database manipulations are simplified, optimized, and scheduled by the compiler, eventually resulting in efficient machine code to implement them.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28847441','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28847441"><span>[Face rejuvenation with <span class="hlt">tensor</span> threads].</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Cornette de Saint Cyr, B; Benouaiche, L</p> <p>2017-08-25</p> <p>Recent decades have seen priorities in the treatment of the flabby, ageing face shift towards minimally invasive aesthetic surgery, with the accompanying requirement that such interventions be performed with minimal health hazards and injury, without cuts and hence without resulting scars, and with a short postoperative period. We propose a new review of the <span class="hlt">tensor</span> threads.
After explaining the thread technology, we will discuss proper patient indication, the criteria that determine the choice of threads, and the methods suited to each type of patient. We will then present the many available techniques and discuss the results, the unsatisfactory outcomes and complications encountered, and how to improve the cosmetic outcomes obtained. To conclude, we will propose a strategy for the long-term treatment of the neck and face, forestalling surgical management of the aging <span class="hlt">process</span>. Copyright © 2017. Published by Elsevier Masson SAS.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://eric.ed.gov/?q=Giraffe&id=EJ937817','ERIC'); return false;" href="http://eric.ed.gov/?q=Giraffe&id=EJ937817"><span>Benefits and Costs of Lexical <span class="hlt">Decomposition</span> and Semantic Integration during the <span class="hlt">Processing</span> of Transparent and Opaque English Compounds</span></a></p> <p><a target="_blank" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Ji, Hongbo; Gagne, Christina L.; Spalding, Thomas L.</p> <p>2011-01-01</p> <p>Six lexical decision experiments were conducted to examine the influence of complex structure on the <span class="hlt">processing</span> speed of English compounds. All experiments revealed that semantically transparent compounds (e.g., "rosebud") were <span class="hlt">processed</span> more quickly than matched monomorphemic words (e.g., "giraffe").
Opaque compounds (e.g., "hogwash") were also…</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://www.osti.gov/scitech/servlets/purl/1200656','SCIGOV-STC'); return false;" href="http://www.osti.gov/scitech/servlets/purl/1200656"><span><span class="hlt">Tensor</span> analysis methods for activity characterization in spatiotemporal data</span></a></p> <p><a target="_blank" href="http://www.osti.gov/scitech">SciTech Connect</a></p> <p>Haass, Michael Joseph; Van Benthem, Mark Hilary; Ochoa, Edward M.</p> <p>2014-03-01</p> <p><span class="hlt">Tensor</span> (multiway array) factorization and <span class="hlt">decomposition</span> offers unique advantages for activity characterization in spatio-temporal datasets because these methods are compatible with sparse matrices and maintain multiway structure that is otherwise lost in collapsing for regular matrix factorization.
This report describes our research as part of the PANTHER LDRD Grand Challenge to develop a foundational basis of mathematical techniques and visualizations that enable unsophisticated users (e.g. users who are not steeped in the mathematical details of matrix algebra and multiway computations) to discover hidden patterns in large spatiotemporal data sets.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/15946876','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/15946876"><span>MathNMR: spin and spatial <span class="hlt">tensor</span> manipulations in Mathematica.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Jerschow, Alexej</p> <p>2005-09-01</p> <p>Spin and spatial <span class="hlt">tensor</span> manipulations are frequently required to describe the theory of NMR experiments. A Mathematica package is presented here, which provides some of the most common functions for these calculations. Examples are the calculation of matrix representations of operators, commutators, projections, rotations, Redfield matrix elements, matrix <span class="hlt">decomposition</span> into basis operators, change of basis, coherence filtering, and the manipulation of Hamiltonians. The calculations can be performed for any spin system, containing spins 1/2 and quadrupolar spins alike, subject to computational limitations.
The package will be available upon acceptance of the article.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/24573313','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/24573313"><span>A <span class="hlt">tensor</span>-based subspace approach for bistatic MIMO radar in spatial colored noise.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Wang, Xianpeng; Wang, Wei; Li, Xin; Wang, Junxiang</p> <p>2014-02-25</p> <p>In this paper, a new <span class="hlt">tensor</span>-based subspace approach is proposed to estimate the direction of departure (DOD) and the direction of arrival (DOA) for bistatic multiple-input multiple-output (MIMO) radar in the presence of spatial colored noise. Firstly, the received signals can be packed into a third-order measurement <span class="hlt">tensor</span> by exploiting the inherent structure of the matched filter. Then, the measurement <span class="hlt">tensor</span> can be divided into two sub-<span class="hlt">tensors</span>, and a cross-covariance <span class="hlt">tensor</span> is formulated to eliminate the spatial colored noise. Finally, the signal subspace is constructed by utilizing the higher-order singular value <span class="hlt">decomposition</span> (HOSVD) of the cross-covariance <span class="hlt">tensor</span>, and the DOD and DOA can be obtained through the estimation of signal parameters via rotational invariance technique (ESPRIT) algorithm, which are paired automatically. Since the multidimensional inherent structure and the cross-covariance <span class="hlt">tensor</span> technique are used, the proposed method provides better angle estimation performance than Chen's method, the ESPRIT algorithm and the multi-SVD method.
Simulation results confirm the effectiveness and the advantage of the proposed method.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/20100042321','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20100042321"><span>Hydrogen peroxide catalytic <span class="hlt">decomposition</span></span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Parrish, Clyde F. (Inventor)</p> <p>2010-01-01</p> <p>Nitric oxide in a gaseous stream is converted to nitrogen dioxide using oxidizing species generated through the use of concentrated hydrogen peroxide fed as a monopropellant into a catalyzed thruster assembly. The hydrogen peroxide is preferably stored at stable concentration levels, i.e., approximately 50%-70% by volume, and may be increased in concentration in a continuous <span class="hlt">process</span> preceding <span class="hlt">decomposition</span> in the thruster assembly.
The exhaust of the thruster assembly, rich in hydroxyl and/or hydroperoxy radicals, may be fed into a stream containing oxidizable components, such as nitric oxide, to facilitate their oxidation.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5336066','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5336066"><span>A Signal <span class="hlt">Processing</span> Approach with a Smooth Empirical Mode <span class="hlt">Decomposition</span> to Reveal Hidden Trace of Corrosion in Highly Contaminated Guided Wave Signals for Concrete-Covered Pipes</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Rostami, Javad; Chen, Jingming; Tse, Peter W.</p> <p>2017-01-01</p> <p>Ultrasonic guided waves have been extensively applied for non-destructive testing of plate-like structures, particularly pipes, over the past two decades. In this regard, if a structure has a simple geometry, the obtained guided wave signals are easy to interpret. However, any small degree of complexity in the geometry, such as contact with other materials, may considerably complicate the interpretation of guided wave signals. The problem deepens if defects have irregular shapes such as natural corrosion. Signal <span class="hlt">processing</span> techniques that have been proposed for guided wave signal analysis are generally good for simple signals obtained in a highly controlled experimental environment. In fact, guided wave signals in a real situation, such as the existence of natural corrosion in wall-covered pipes, are much more complicated. Considering pipes in residential buildings that pass through concrete walls, in this paper we introduce Smooth Empirical Mode <span class="hlt">Decomposition</span> (SEMD) to efficiently separate overlapped guided waves.
As empirical mode <span class="hlt">decomposition</span> (EMD), which is a good candidate for analyzing non-stationary signals, suffers from some shortcomings, the wavelet transform was adopted in the sifting stage of EMD to improve its outcome in SEMD. However, the selection of a mother wavelet that best suits our purpose plays an important role. Since, in guided wave inspection, the incident waves are well known and are usually tone-burst signals, we tailored a complex tone-burst signal to be used as our mother wavelet. In the sifting stage of EMD, wavelet de-noising was applied to eliminate unwanted frequency components from each IMF. SEMD greatly enhances the performance of EMD in guided wave analysis for highly contaminated signals. In our experiment on concrete-covered pipes with natural corrosion, this method not only separates the concrete wall indication clearly in the time-domain signal, a natural corrosion with complex geometry that was hidden and located inside the</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=1275686','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=1275686"><span>MR diffusion <span class="hlt">tensor</span> spectroscopy and imaging.</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Basser, P J; Mattiello, J; LeBihan, D</p> <p>1994-01-01</p> <p>This paper describes a new NMR imaging modality--MR diffusion <span class="hlt">tensor</span> imaging. It consists of estimating an effective diffusion <span class="hlt">tensor</span>, Deff, within a voxel, and then displaying useful quantities derived from it. We show how the phenomenon of anisotropic diffusion of water (or metabolites) in anisotropic tissues, measured noninvasively by these NMR methods, is exploited to determine fiber tract orientation and mean particle displacements. Once Deff is estimated from a series of NMR pulsed-gradient, spin-echo experiments, a tissue's three orthotropic axes can be determined. They coincide with the eigenvectors of Deff, while the effective diffusivities along these orthotropic directions are the eigenvalues of Deff. Diffusion ellipsoids, constructed in each voxel from Deff, depict both these orthotropic axes and the mean diffusion distances in these directions. Moreover, the three scalar invariants of Deff, which are independent of the tissue's orientation in the laboratory frame of reference, reveal useful information about molecular mobility reflective of local microstructure and anatomy.
Inherently, <span class="hlt">tensors</span> (like Deff) describing transport <span class="hlt">processes</span> in anisotropic media contain new information within a macroscopic voxel that scalars (such as the apparent diffusivity, proton density, T1, and T2) do not. PMID:8130344</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/17009699','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/17009699"><span>Uncertainty in diffusion <span class="hlt">tensor</span> based fibre tracking.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Hahn, H K; Klein, J; Nimsky, C; Rexilius, J; Peitgen, H O</p> <p>2006-01-01</p> <p>Diffusion <span class="hlt">tensor</span> imaging and related fibre tracking techniques have the potential to identify major white matter tracts afflicted by an individual pathology or tracts at risk for a given surgical approach. However, the reliability of these techniques is known to be limited by image distortions, image noise, low spatial resolution, and the problem of identifying crossing fibres. This paper intends to bridge the gap between the requirements of neurosurgical applications and basic research on fibre tracking uncertainty. We acquired echo planar diffusion <span class="hlt">tensor</span> data from both 1.5 T and 3.0 T scanners. For fibre tracking, an extended deflection-based algorithm is employed with enhanced robustness to impaired fibre integrity such as caused by diffuse or infiltrating pathological <span class="hlt">processes</span>. Moreover, we present a method to assess and visualize the uncertainty of fibre reconstructions based on variational complex Gaussian noise, which provides an alternative to the bootstrap method.
We compare fibre tracking results with and without variational noise as well as with artificially decreased image resolution and signal-to-noise. Using our fibre tracking technique, we found a high robustness to decreased image resolution and signal-to-noise. Still, the effects of image quality on the tracking result will depend on the employed fibre tracking algorithm and must be handled with care, especially when being used for neurosurgical planning or resection guidance. An advantage of the variational noise approach over the bootstrap technique is that it is applicable to any given set of diffusion <span class="hlt">tensor</span> images. We conclude that the presented approach allows for investigating the uncertainty of diffusion <span class="hlt">tensor</span> imaging based fibre tracking and might offer a perspective to overcome the problem of size underestimation observed by existing techniques.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.osti.gov/scitech/biblio/6636414','SCIGOV-STC'); return false;" href="https://www.osti.gov/scitech/biblio/6636414"><span>Diagenetic <span class="hlt">processes</span> near the sediment-water interface of Long Island Sound. I. 
<span class="hlt">Decomposition</span> and nutrient element geochemistry (S,N,P)</span></a></p> <p><a target="_blank" href="http://www.osti.gov/scitech">SciTech Connect</a></p> <p>Aller, R.C.</p> <p>1980-12-01</p> <p>Selected early diagenetic reactions associated with the <span class="hlt">decomposition</span> of organic matter in estuarine deposits of Long Island Sound are examined with particular emphasis on understanding the role of benthic macroorganisms together with the depositional environment in controlling the <span class="hlt">decomposition</span> of surface sediments and in determining the flux of solutes between sediment and overlying water.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2014GeoJI.196.1813C','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2014GeoJI.196.1813C"><span>Seismicity monitoring by cluster analysis of moment <span class="hlt">tensors</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Cesca, Simone; Şen, Ali Tolga; Dahm, Torsten</p> <p>2014-03-01</p> <p>We suggest a new clustering approach to classify focal mechanisms from large moment <span class="hlt">tensor</span> catalogues, with the purpose of automatically identifying families of earthquakes with similar source geometry, recognizing the orientation of the most active faults, and detecting temporal variations of the rupture <span class="hlt">processes</span>. The approach differs from waveform similarity methods in that clusters are detected even when they occur at large spatial distances. This approach is particularly helpful for analysing large moment <span class="hlt">tensor</span> catalogues, as in microseismicity applications, where a manual analysis and classification is not feasible. A flexible algorithm is here proposed: it can handle different metrics, norms, and focal mechanism representations.
In particular, the method can handle full moment <span class="hlt">tensor</span> or constrained source model catalogues, for which different metrics are suggested. The method can account for variable uncertainties of different moment <span class="hlt">tensor</span> components. We verify the method with synthetic catalogues. An application to real data from mining-induced seismicity illustrates possible applications of the method and demonstrates the cluster detection and event classification performance with different moment <span class="hlt">tensor</span> catalogues. Results prove that main earthquake source types occur on spatially separated faults, and that temporal changes in the number and characterization of focal mechanism clusters are detected. We suggest that moment <span class="hlt">tensor</span> clustering can help assess time-dependent hazard in mines.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2013ApGeo..10..241Y','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2013ApGeo..10..241Y"><span>Noise filtering of full-gravity gradient <span class="hlt">tensor</span> data</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Yuan, Yuan; Huang, Da-Nian; Yu, Qing-Lu; Geng, Mei-Xia</p> <p>2013-06-01</p> <p>In oil and mineral exploration, gravity gradient <span class="hlt">tensor</span> data include higher-frequency signals than gravity data, which can be used to delineate small-scale anomalies. However, full-<span class="hlt">tensor</span> gradiometry (FTG) data are contaminated by high-frequency random noise. The separation of noise from high-frequency signals is one of the most challenging tasks in the <span class="hlt">processing</span> of gravity gradient <span class="hlt">tensor</span> data.
We first derive the Cartesian equations of gravity gradient <span class="hlt">tensors</span> under the constraint of the Laplace equation and the expression for the gravitational potential, and then we use the Cartesian equations to fit the measured gradient <span class="hlt">tensor</span> data by using optimal linear inversion and remove the noise from the measured data. Based on model tests, we confirm that not only does this method remove the high-frequency random noise but it also enhances the weak anomaly signals masked by the noise. Compared with traditional low-pass filtering methods, this method avoids removing noise by sacrificing resolution. Finally, we apply our method to real gravity gradient <span class="hlt">tensor</span> data acquired by Bell Geospace for the Vinton Dome at the Texas-Louisiana border.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://www.osti.gov/scitech/servlets/purl/1163944','SCIGOV-STC'); return false;" href="http://www.osti.gov/scitech/servlets/purl/1163944"><span><span class="hlt">Tensor</span> Target Polarization at TRIUMF</span></a></p> <p><a target="_blank" href="http://www.osti.gov/scitech">SciTech Connect</a></p> <p>Smith, G</p> <p>2014-10-27</p> <p>The first measurements of <span class="hlt">tensor</span> observables in $\pi \vec{d}$ scattering experiments were performed in the mid-80's at TRIUMF, and later at SIN/PSI. The full suite of <span class="hlt">tensor</span> observables accessible in $\pi \vec{d}$ elastic scattering was measured: $T_{20}$, $T_{21}$, and $T_{22}$. The vector analyzing power $iT_{11}$ was also measured. These results led to a better understanding of the three-body theory used to describe this reaction. Some measurements were also made in the absorption and breakup channels.
A direct measurement of the target <span class="hlt">tensor</span> polarization was also made, independent of the usual NMR techniques, by exploiting the (nearly) model-independent result for the <span class="hlt">tensor</span> analyzing power at 90$^\circ_{cm}$ in the $\pi \vec{d} \rightarrow 2p$ reaction. This method was also used to check efforts to enhance the <span class="hlt">tensor</span> polarization by RF burning of the NMR spectrum. A brief description of the methods developed to measure and analyze these experiments is provided.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26249126','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26249126"><span><span class="hlt">Decomposition</span> Rate and Pattern in Hanging Pigs.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Lynch-Aird, Jeanne; Moffatt, Colin; Simmons, Tal</p> <p>2015-09-01</p> <p>Accurate prediction of the postmortem interval requires an understanding of the <span class="hlt">decomposition</span> <span class="hlt">process</span> and the factors acting upon it. A controlled experiment, over 60 days at an outdoor site in the northwest of England, used 20 freshly killed pigs (Sus scrofa) as human analogues to study <span class="hlt">decomposition</span> rate and pattern. Ten pigs were hung off the ground and ten placed on the surface. Observed differences in the <span class="hlt">decomposition</span> pattern required a new <span class="hlt">decomposition</span> scoring scale to be produced for the hanging pigs to enable comparisons with the surface pigs. The difference in the rate of <span class="hlt">decomposition</span> between hanging and surface pigs was statistically significant (p=0.001). Hanging pigs reached advanced <span class="hlt">decomposition</span> stages sooner, but lagged behind during the early stages.
This delay is believed to result from lower variety and quantity of insects, due to restricted beetle access to the aerial carcass, and/or writhing maggots falling from the carcass.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://eric.ed.gov/?q=decomposition&pg=7&id=EJ950126','ERIC'); return false;" href="http://eric.ed.gov/?q=decomposition&pg=7&id=EJ950126"><span>Conceptualizing and Estimating <span class="hlt">Process</span> Speed in Studies Employing Ecological Momentary Assessment Designs: A Multilevel Variance <span class="hlt">Decomposition</span> Approach</span></a></p> <p><a target="_blank" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Shiyko, Mariya P.; Ram, Nilam</p> <p>2011-01-01</p> <p>Researchers have been making use of ecological momentary assessment (EMA) and other study designs that sample feelings and behaviors in real time and in naturalistic settings to study temporal dynamics and contextual factors of a wide variety of psychological, physiological, and behavioral <span class="hlt">processes</span>. 
As EMA designs become more widespread,…</p> </li> </ol> <div class="pull-right"> <ul class="pagination"> <li><a href="#" onclick='return showDiv("page_1");'>«</a></li> <li><a href="#" onclick='return showDiv("page_16");'>16</a></li> <li><a href="#" onclick='return showDiv("page_17");'>17</a></li> <li class="active"><span>18</span></li> <li><a href="#" onclick='return showDiv("page_19");'>19</a></li> <li><a href="#" onclick='return showDiv("page_20");'>20</a></li> <li><a href="#" onclick='return showDiv("page_25");'>»</a></li> </ul> </div> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_18 --> <div id="page_19" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <div class="pull-right"> <ul class="pagination"> <li><a href="#" onclick='return showDiv("page_1");'>«</a></li> <li><a href="#" onclick='return showDiv("page_17");'>17</a></li> <li><a href="#" onclick='return showDiv("page_18");'>18</a></li> <li class="active"><span>19</span></li> <li><a href="#" onclick='return showDiv("page_20");'>20</a></li> <li><a href="#" onclick='return showDiv("page_21");'>21</a></li> <li><a href="#" onclick='return showDiv("page_25");'>»</a></li> </ul> </div> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="361"> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015AGUFM.A51P0335S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015AGUFM.A51P0335S"><span>An Evaluation of the Parallel Ensemble Empirical Mode <span class="hlt">Decomposition</span> Method in Revealing the Role of Downscaling <span class="hlt">Processes</span> Associated with African Easterly Waves in Tropical Cyclone Genesis</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Shen, B. W.; Wu, Y.</p> <p>2015-12-01</p> <p>In this study, we applied the parallel version of the Ensemble Empirical Mode <span class="hlt">Decomposition</span> (PEEMD) for an analysis of 10-year (2004-2013) ERA-Interim global reanalysis data in order to explore multiscale interaction of tropical cyclone genesis associated with African Easterly Waves (AEWs) in sheared flows. Our focus was aimed at understanding the downscaling <span class="hlt">process</span> in multiscale flows during storm intensification. To represent the various length scales of atmospheric systems, we extracted Intrinsic Function Modes (IMFs) from raw data using the PEEMD and found that the non-oscillatory trend mode can be used to represent large scale environmental flow and the third oscillatory mode (IMF3) is to represent AEW/TC scale systems.
Our results: 1) identified 42 developing cases from 272 AEWs, with 25 eventually developing into hurricanes; 2) indicated that maximum shear largely occurs over the ocean for the IMF3 mode and over land near the coast for the trend mode for developing cases, suggesting shear transfer between the trend mode and the IMF3; 3) displayed opposite wind shear tendencies for the trend mode and the IMF3 during storm intensification, signifying the downscaling <span class="hlt">process</span> in 13 hurricane cases along their tracks; 4) showed that among the 42 developing cases, only 13 of the 25 hurricanes were found with significant downscaling transfer features, so other <span class="hlt">processes</span> such as upscaling <span class="hlt">processes</span> may play an important role in the other developing cases, especially the remaining 12 hurricane cases. Investigating the upscaling <span class="hlt">process</span> between the convection scale and the AEW/TC requires data from the finer grid resolution and will be the subject of a future study.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016NatSR...636608H','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016NatSR...636608H"><span>Clean thermal <span class="hlt">decomposition</span> of tertiary-alkyl metal thiolates to metal sulfides: environmentally-benign, non-polar inks for solution-<span class="hlt">processed</span> chalcopyrite solar cells</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Heo, Jungwoo; Kim, Gi-Hwan; Jeong, Jaeki; Yoon, Yung Jin; Seo, Jung Hwa; Walker, Bright; Kim, Jin Young</p> <p>2016-11-01</p> <p>We report the preparation of Cu2S, In2S3, CuInS2 and Cu(In,Ga)S2 semiconducting films via the spin coating and annealing of soluble tertiary-alkyl thiolate complexes. 
The thiolate compounds are readily prepared via the reaction of metal bases and tertiary-alkyl thiols. The thiolate complexes are soluble in common organic solvents and can be solution <span class="hlt">processed</span> by spin coating to yield thin films. Upon thermal annealing in the range of 200–400 °C, the tertiary-alkyl thiolates decompose cleanly to yield volatile dialkyl sulfides and metal sulfide films which are free of organic residue. Analysis of the reaction byproducts strongly suggests that the <span class="hlt">decomposition</span> proceeds via an SN1 mechanism. The composition of the films can be controlled by adjusting the amount of each metal thiolate used in the precursor solution yielding bandgaps in the range of 1.2 to 3.3 eV. The films form functioning p-n junctions when deposited in contact with CdS films prepared by the same method. Functioning solar cells are observed when such p-n junctions are prepared on transparent conducting substrates and finished by depositing electrodes with appropriate work functions. 
This method enables the fabrication of metal chalcogenide films on a large scale via a simple and chemically clean <span class="hlt">process</span>.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016LMaPh.106.1531C','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016LMaPh.106.1531C"><span>O( N) Random <span class="hlt">Tensor</span> Models</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Carrozza, Sylvain; Tanasa, Adrian</p> <p>2016-11-01</p> <p>We define in this paper a class of three-index <span class="hlt">tensor</span> models, endowed with {O(N)^{⊗ 3}} invariance ( N being the size of the <span class="hlt">tensor</span>). This allows one to generate, via the usual QFT perturbative expansion, a class of Feynman <span class="hlt">tensor</span> graphs which is strictly larger than the class of Feynman graphs of both the multi-orientable model (and hence of the colored model) and the U( N) invariant models. We first exhibit the existence of a large N expansion for such a model with general interactions. We then focus on the quartic model and we identify the leading and next-to-leading order (NLO) graphs of the large N expansion. Finally, we prove the existence of a critical regime and we compute the critical exponents, both at leading order and at NLO.
This is achieved through the use of various analytic combinatorics techniques.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/1984CP.....91...89N','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/1984CP.....91...89N"><span><span class="hlt">Tensor</span> formalism in anharmonic calculations</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Nero, N.</p> <p>1984-11-01</p> <p>A new method is presented to compute cartesian <span class="hlt">tensors</span> in the expansion of curvilinear internal coordinates. Second- and higher-order coefficients are related to the metrics of the space of displacements. Components of the metric <span class="hlt">tensor</span> are taken from existing tables of inverse kinetic energy matrix elements or, when rotations are involved, derived from general invariance conditions of scalars within a molecule. This leads to a <span class="hlt">tensor</span> formalism particularly convenient in dealing with curvilinear coordinates in anharmonic calculations of vibrational frequencies. Formulae are given for elements of the potential energy matrix, related to quadratic and cubic force constants in terms of Christoffel symbols. 
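The anharmonic-calculation entry above ties potential-energy matrix elements to Christoffel symbols of the metric on displacement space. A generic numerical sketch of the first-kind symbols computed from a metric function (illustrative only; central finite differences stand in for analytic derivatives):

```python
import numpy as np

def christoffel_first_kind(metric, q, h=1e-6):
    """Christoffel symbols of the first kind from a metric g(q) -> (n, n) array:
    Gamma[i, j, k] = 1/2 (d_i g_jk + d_j g_ik - d_k g_ij),
    with partial derivatives approximated by central finite differences."""
    n = len(q)
    dg = np.zeros((n, n, n))          # dg[i, j, k] = d g_jk / d q_i
    for i in range(n):
        e = np.zeros(n)
        e[i] = h
        dg[i] = (metric(q + e) - metric(q - e)) / (2.0 * h)
    # dg.transpose(1, 0, 2)[i, j, k] = d_j g_ik ; dg.transpose(1, 2, 0)[i, j, k] = d_k g_ij
    return 0.5 * (dg + dg.transpose(1, 0, 2) - dg.transpose(1, 2, 0))

# Check against plane polar coordinates, q = (r, theta): g = diag(1, r^2),
# whose nonzero first-kind symbols are [r theta, theta] = r and [theta theta, r] = -r.
def polar_metric(q):
    return np.diag([1.0, q[0] ** 2])

gamma = christoffel_first_kind(polar_metric, np.array([2.0, 0.3]))
```

At r = 2 this gives `gamma[0, 1, 1] ≈ 2`, `gamma[1, 0, 1] ≈ 2`, and `gamma[1, 1, 0] ≈ -2`, matching the textbook values.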
The latter quantities are also used in the expansion of redundancy relations, with explicit coefficients given up to the third order.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/17626441','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/17626441"><span>Mechanistic studies of the photocatalytic degradation of methyl green: an investigation of products of the <span class="hlt">decomposition</span> <span class="hlt">processes</span>.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Chen, Chiing-Chang; Lu, Chung-Shin</p> <p>2007-06-15</p> <p>The methyl green (MG) dye dissolves into an alkaline solution when the pH value is too high (pH 9). The cationic MG dye molecules are converted into the colorless carbinol base (CB) and produce crystal violet (CV) dye and ethanol by hydroxide anion. Thirty-three intermediates of the <span class="hlt">process</span> were separated, identified, and characterized by HPLC-ESI-MS technique in this study and their evolution during the photocatalytic reaction is presented. Moreover, the other intermediates formed in the photocatalytic degradation MG <span class="hlt">processes</span> were separated and identified by HPLC-PDA technique. The results indicated that the N-de-methylated degradation of CV dye took place in a stepwise manner to yield N-de-methylated CV species, and the N-de-alkylated degradation of CB also took place in a stepwise manner to yield N-de-alkylated CB species generated during the <span class="hlt">processes</span>. 
Moreover, the oxidative degradation of the CV dye (or CB) occurs to yield 4-(N,N-dimethylamino)phenol (DAP), 4-(N,N-dimethylamino)-4'-(N',N'-dimethylamino)benzophenone (DDBP) and their N-de-methylated products [or to yield 4-(N-ethyl-N,N-dimethyl)aminophenol (EDAP), DDBP, 4-(N-ethyl-N,N-dimethylamino)-4'-(N',N'-dimethylamino)benzophenone (EDDBP), DAP, and their N-de-alkylated products], which were found for the first time. A proposed degradation pathway of CV and CB is presented, involving mainly the N-de-alkylation and oxidation reaction.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2012PhRvD..86l5013D','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2012PhRvD..86l5013D"><span>Total angular momentum waves for scalar, vector, and <span class="hlt">tensor</span> fields</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Dai, Liang; Kamionkowski, Marc; Jeong, Donghui</p> <p>2012-12-01</p> <p>Most calculations in cosmological perturbation theory, including those dealing with the inflationary generation of perturbations, their time evolution, and their observational consequences, decompose those perturbations into plane waves (Fourier modes). However, for some calculations, particularly those involving observations performed on a spherical sky, a <span class="hlt">decomposition</span> into waves of fixed total angular momentum (TAM) may be more appropriate. Here we introduce TAM waves—solutions of fixed total angular momentum to the Helmholtz equation—for three-dimensional scalar, vector, and <span class="hlt">tensor</span> fields. The vector TAM waves of given total angular momentum can be decomposed further into a set of three basis functions of fixed orbital angular momentum, a set of fixed helicity, or a basis consisting of a longitudinal (L) and two transverse (E and B) TAM waves. 
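In standard notation (a textbook decomposition the TAM-wave abstract builds on; the symbols here are generic, not the authors'), the longitudinal/transverse split behind the L, E, B basis reads:

```latex
% Helmholtz-type split of a vector field into a longitudinal (curl-free)
% and a transverse (divergence-free) part:
\mathbf{V}(\mathbf{x}) = \mathbf{V}^{L} + \mathbf{V}^{\perp},
\qquad
\nabla \times \mathbf{V}^{L} = \mathbf{0},
\qquad
\nabla \cdot \mathbf{V}^{\perp} = 0 .
```

The transverse part is then resolved, on spheres about the origin, into E and B components of opposite parity; the symmetric traceless rank-2 tensor case adds the VE/VB and TE/TB pairs listed in the abstract.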
The symmetric traceless rank-2 <span class="hlt">tensor</span> TAM waves can be similarly decomposed into a basis of fixed orbital angular momentum or fixed helicity, or a basis that consists of a longitudinal (L), two vector (VE and VB, of opposite parity), and two <span class="hlt">tensor</span> (TE and TB, of opposite parity) waves. We show how all of the vector and <span class="hlt">tensor</span> TAM waves can be obtained by applying derivative operators to scalar TAM waves. This operator approach then allows one to decompose a vector field into three covariant scalar fields for the L, E, and B components and symmetric-traceless-<span class="hlt">tensor</span> fields into five covariant scalar fields for the L, VE, VB, TE, and TB components. We provide projections of the vector and <span class="hlt">tensor</span> TAM waves onto vector and <span class="hlt">tensor</span> spherical harmonics. We provide calculational detail to facilitate the assimilation of this formalism into cosmological calculations. As an example, we calculate the power spectra of the deflection angle for gravitational lensing by density perturbations and by gravitational waves. We comment on an alternative approach to cosmic microwave background fluctuations based on TAM waves. An</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2013CQGra..30s5006K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2013CQGra..30s5006K"><span>Conformal <span class="hlt">tensors</span> via Lovelock gravity</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Kastor, David</p> <p>2013-10-01</p> <p>Constructs from conformal geometry are important in low dimensional gravity models, while in higher dimensions the higher curvature interactions of Lovelock gravity are similarly prominent. 
Considering conformal invariance in the context of Lovelock gravity leads to natural, higher curvature generalizations of the Weyl, Schouten, Cotton and Bach <span class="hlt">tensors</span>, with properties that straightforwardly extend those of their familiar counterparts. As a first application, we introduce a new set of conformally invariant gravity theories in D = 4k dimensions, based on the squares of the higher curvature Weyl <span class="hlt">tensors</span>.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2014EGUGA..1613769B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2014EGUGA..1613769B"><span>A General Probabilistic Framework (GPF) for <span class="hlt">process</span>-based models: blind validation, total error <span class="hlt">decomposition</span> and uncertainty reduction.</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Baroni, Gabriele; Jolley, Richard P.; Graeff, Thomas; Oswald, Sascha E.</p> <p>2014-05-01</p> <p><span class="hlt">Process</span>-based models are useful tools supporting research, policy analysis, and decision making. Ideally, they would only include input data and parameters having physical meaning and they could be applied in various conditions and scenario analysis. However, applicability of these models can be limited because they are affected by many sources of uncertainty, from scale issues to lack of knowledge. To overcome this limitation, a General Probabilistic Framework (GPF) for the application of <span class="hlt">process</span>-based models is proposed. A first assessment of the performance of the model is conducted in a blind validation, assuming all the possible sources of uncertainty. The Sobol/Saltelli global sensitivity analysis is used to decompose the total uncertainty of the model output. 
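The Sobol/Saltelli decomposition used in the GPF entry above attributes shares of the output variance to individual inputs. A minimal Monte-Carlo sketch (uniform inputs and the Saltelli-2010 first-order estimator are assumed; this is not the GPF code):

```python
import numpy as np

def sobol_first_order(model, n_dim, n_samples=16384, rng=None):
    """First-order Sobol indices S_i = V_i / V(Y) by Monte Carlo, using two
    independent sample matrices A, B and the Saltelli (2010) estimator.
    Inputs are assumed i.i.d. uniform on [0, 1]^n_dim; model maps
    an (N, n_dim) array to an (N,) output vector."""
    rng = rng or np.random.default_rng(0)
    A = rng.random((n_samples, n_dim))
    B = rng.random((n_samples, n_dim))
    yA, yB = model(A), model(B)
    var = np.concatenate([yA, yB]).var()
    S = np.empty(n_dim)
    for i in range(n_dim):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                       # resample only the i-th input
        S[i] = np.mean(yB * (model(ABi) - yA)) / var
    return S

# Additive test model Y = X1 + 2*X2: the exact indices are 0.2 and 0.8.
S = sobol_first_order(lambda X: X[:, 0] + 2.0 * X[:, 1], n_dim=2)
```

For a purely additive model the first-order indices sum to one; interactions would leave a gap that total-order indices pick up.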
Based on the results of the sensitivity analysis, improvements of the model application are considered in a goal-oriented approach, in which monitoring and modeling are related in a continuous learning <span class="hlt">process</span>. This presentation describes the GPF and its application to two hydrological models. Firstly, the GPF is applied at field scale using a 1D physical-based hydrological model (SWAP). Secondly, the framework is applied at small catchment scale in combination with a spatially distributed hydrological model (SHETRAN). The models are evaluated considering different components of the water balance. The framework is conceptually simple, relatively easy to implement and it requires no modifications to existing source codes of simulation models. It can take into account all the various sources of uncertainty i.e. input data, parameters, model structures and observations. It can be extended to a wide variety of modelling applications, also when direct measurements of model output are not available. Further research will focus on the methods to account for correlation between the different sources of uncertainty.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26016539','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26016539"><span><span class="hlt">Tensor</span> numerical methods in quantum chemistry: from Hartree-Fock to excitation energies.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Khoromskaia, Venera; Khoromskij, Boris N</p> <p>2015-12-21</p> <p>We resume the recent successes of the grid-based <span class="hlt">tensor</span> numerical methods and discuss their prospects in real-space electronic structure calculations. 
These methods, based on the low-rank representation of multidimensional functions and integral operators, first appeared as an accurate <span class="hlt">tensor</span> calculus for the 3D Hartree potential using 1D complexity operations, and have evolved into an entirely grid-based <span class="hlt">tensor</span>-structured 3D Hartree-Fock eigenvalue solver. It benefits from <span class="hlt">tensor</span> calculation of the core Hamiltonian and two-electron integrals (TEI) in O(n log n) complexity, using the rank-structured approximation of basis functions, electron densities and convolution integral operators, all represented on 3D n × n × n Cartesian grids. The algorithm for calculating the TEI <span class="hlt">tensor</span> in the form of a Cholesky <span class="hlt">decomposition</span> is based on multiple factorizations using an algebraic 1D "density fitting" scheme, which yields an almost irreducible number of product basis functions involved in the 3D convolution integrals, depending on a threshold ε > 0. The basis functions are not restricted to separable Gaussians, since the analytical integration is substituted by high-precision <span class="hlt">tensor</span>-structured numerical quadratures. The <span class="hlt">tensor</span> approaches to post-Hartree-Fock calculations for the MP2 energy correction and for the Bethe-Salpeter excitation energies, based on low-rank factorizations and the reduced basis method, were recently introduced. Another direction is towards the <span class="hlt">tensor</span>-based Hartree-Fock numerical scheme for finite lattices, where one of the numerical challenges is the summation of electrostatic potentials of a large number of nuclei.
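The threshold-controlled Cholesky factorization mentioned for the TEI tensor can be illustrated, in plain matrix form, by a pivoted Cholesky routine that stops once the residual diagonal drops below the ε threshold (a generic sketch, not the authors' implementation):

```python
import numpy as np

def pivoted_cholesky(A, tol=1e-8):
    """Low-rank pivoted Cholesky of a symmetric PSD matrix:
    returns L with A ≈ L @ L.T, stopping when the largest remaining
    diagonal element of the residual falls below `tol` (the ε threshold)."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    d = np.diag(A).copy()          # diagonal of the current residual A - L L^T
    L = np.zeros((n, n))
    k = 0
    while k < n and d.max() > tol:
        p = int(np.argmax(d))      # pivot on the largest residual diagonal
        L[:, k] = (A[:, p] - L[:, :k] @ L[p, :k]) / np.sqrt(d[p])
        d -= L[:, k] ** 2
        k += 1
    return L[:, :k]

# A rank-3 PSD matrix is reproduced exactly with only 3 columns.
rng = np.random.default_rng(1)
B = rng.normal(size=(6, 3))
A = B @ B.T
L = pivoted_cholesky(A)
```

The number of columns retained, and hence the cost of downstream contractions, is governed entirely by the tolerance, which mirrors how the ε threshold controls the number of product basis functions in the abstract.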
The 3D grid-based <span class="hlt">tensor</span> method for calculation of a potential sum on an L × L × L lattice requires computational work linear in L, i.e. O(L), instead of the usual O(L^3 log L) scaling of Ewald-type approaches.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5048430','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5048430"><span>Factors controlling bark <span class="hlt">decomposition</span> and its role in wood <span class="hlt">decomposition</span> in five tropical tree species</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Dossa, Gbadamassi G. O.; Paudel, Ekananda; Cao, Kunfang; Schaefer, Douglas; Harrison, Rhett D.</p> <p>2016-01-01</p> <p>Organic matter <span class="hlt">decomposition</span> represents a vital ecosystem <span class="hlt">process</span> by which nutrients are made available for plant uptake and is a major flux in the global carbon cycle. Previous studies have investigated <span class="hlt">decomposition</span> of different plant parts, but few considered bark <span class="hlt">decomposition</span> or its role in <span class="hlt">decomposition</span> of wood. However, bark can comprise a large fraction of tree biomass. We used a common litter-bed approach to investigate factors affecting bark <span class="hlt">decomposition</span> and its role in wood <span class="hlt">decomposition</span> for five tree species in a secondary seasonal tropical rain forest in SW China. For bark, we implemented a litter bag experiment over 12 mo, using different mesh sizes to investigate effects of litter meso- and macro-fauna. For wood, we compared the <span class="hlt">decomposition</span> of branches with and without bark over 24 mo.
Bark in coarse mesh bags decomposed 1.11–1.76 times faster than bark in fine mesh bags. For wood <span class="hlt">decomposition</span>, responses to bark removal were species dependent. Three species with slow wood <span class="hlt">decomposition</span> rates showed significant negative effects of bark-removal, but there was no significant effect in the other two species. Future research should also separately examine bark and wood <span class="hlt">decomposition</span>, and consider bark-removal experiments to better understand roles of bark in wood <span class="hlt">decomposition</span>. PMID:27698461</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/25784689','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/25784689"><span>Co-composting of rose oil <span class="hlt">processing</span> waste with caged layer manure and straw or sawdust: effects of carbon source and C/N ratio on <span class="hlt">decomposition</span>.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Onursal, Emrah; Ekinci, Kamil</p> <p>2015-04-01</p> <p>Rose oil is a specific essential oil that is produced mainly for the cosmetics industry in a few selected locations around the world. Rose oil production is a water distillation <span class="hlt">process</span> from petals of Rosa damascena Mill. Since the oil content of the rose petals of this variety is between 0.3-0.4% (w/w), almost 3000 to 4000 kg of rose petals are needed to produce 1 kg of rose oil. Rose oil production is a seasonal activity and takes place during the relatively short period where the roses are blooming.
As a result, large quantities of solid waste are produced over a limited time interval. This research aims: (i) to determine the possibilities of aerobic co-composting as a waste management option for rose oil <span class="hlt">processing</span> waste with caged layer manure; (ii) to identify effects of different carbon sources - straw or sawdust on co-composting of rose oil <span class="hlt">processing</span> waste and caged layer manure, which are both readily available in Isparta, where significant rose oil production also takes place; (iii) to determine the effects of different C/N ratios on co-composting by the means of organic matter <span class="hlt">decomposition</span> and dry matter loss. Composting experiments were carried out by 12 identical laboratory-scale composting reactors (60 L) simultaneously. The results of the study showed that the best results were obtained with a mixture consisting of 50% rose oil <span class="hlt">processing</span> waste, 64% caged layer manure and 15% straw wet weight in terms of organic matter loss (66%) and dry matter loss (38%).</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2013EJASP2013..124G','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2013EJASP2013..124G"><span>Generalized generating function with tucker <span class="hlt">decomposition</span> and alternating least squares for underdetermined blind identification</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Gu, Fanglin; Zhang, Hang; Wang, Wenwu; Zhu, Desheng</p> <p>2013-12-01</p> <p>Generating function (GF) has been used in blind identification for real-valued signals. In this paper, the definition of GF is first generalized for complex-valued random variables in order to exploit the statistical information carried on complex signals in a more effective way. 
Then an algebraic structure is proposed to identify the mixing matrix from underdetermined mixtures using the generalized generating function (GGF). Two methods, namely GGF-ALS and GGF-TALS, are developed for this purpose. In the GGF-ALS method, the mixing matrix is estimated by the <span class="hlt">decomposition</span> of the <span class="hlt">tensor</span> constructed from the Hessian matrices of the GGF of the observations, using an alternating least squares (ALS) algorithm. The GGF-TALS method is an improved version of the GGF-ALS algorithm based on Tucker <span class="hlt">decomposition</span>. More specifically, the original <span class="hlt">tensor</span>, as formed in GGF-ALS, is first converted to a lower-rank core <span class="hlt">tensor</span> using the Tucker <span class="hlt">decomposition</span>, where the factors are obtained by the left singular-value <span class="hlt">decomposition</span> of the original <span class="hlt">tensor</span>'s mode-3 matrix. Then the mixing matrix is estimated by decomposing the core <span class="hlt">tensor</span> with the ALS algorithm. 
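The Tucker step described above can be sketched in NumPy: factors come from the leading left singular vectors of each mode unfolding, and the core from projecting the tensor onto them. This is the common truncated-HOSVD variant, illustrative only; the paper's GGF-TALS additionally refines the result with ALS iterations on the smaller core.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: the fibers of `mode` become the rows of a matrix."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_multiply(T, M, mode):
    """n-mode product: multiply tensor T by matrix M along `mode`."""
    return np.moveaxis(np.tensordot(M, np.moveaxis(T, mode, 0), axes=1), 0, mode)

def tucker_hosvd(T, ranks):
    """Truncated HOSVD: factor matrices from the leading left singular
    vectors of each unfolding; core from projecting T onto them."""
    factors = [np.linalg.svd(unfold(T, m), full_matrices=False)[0][:, :r]
               for m, r in enumerate(ranks)]
    core = T
    for m, U in enumerate(factors):
        core = mode_multiply(core, U.T, m)
    return core, factors

def tucker_reconstruct(core, factors):
    T = core
    for m, U in enumerate(factors):
        T = mode_multiply(T, U, m)
    return T

# A tensor with exact multilinear rank (2, 3, 2) is recovered exactly.
rng = np.random.default_rng(2)
core_true = rng.normal(size=(2, 3, 2))
factors_true = [np.linalg.qr(rng.normal(size=(n, r)))[0]
                for n, r in [(5, 2), (6, 3), (4, 2)]]
T = tucker_reconstruct(core_true, factors_true)
core, factors = tucker_hosvd(T, (2, 3, 2))
err = np.abs(tucker_reconstruct(core, factors) - T).max()
```

Running ALS on the (2, 3, 2) core instead of the full (5, 6, 4) tensor is exactly the complexity saving the simulation results attribute to GGF-TALS.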
Simulation results show that (a) the proposed GGF-ALS and GGF-TALS approaches have almost the same performance in terms of the relative errors, whereas the GGF-TALS has much lower computational complexity, and (b) the proposed GGF algorithms have superior performance to the latest GF-based baseline approaches.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27287202','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27287202"><span>Highlighting earthworm contribution in uplifting biochemical response for organic matter <span class="hlt">decomposition</span> during vermifiltration <span class="hlt">processing</span> sewage sludge: Insights from proteomics.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Xing, Meiyan; Wang, Yin; Xu, Ting; Yang, Jian</p> <p>2016-09-01</p> <p>A vermifilter (VF) was steadily operated to explore the mechanism of lower microbial biomass and higher enzymatic activities due to the presence of earthworms, with a conventional biofilter (BF) as a control. The analysis of 2-DE indicated that 432 spots and 488 spots were clearly detected in the VF and BF biofilm. Furthermore, MALDI-TOF/TOF MS revealed that six differential up-regulated proteins, namely Aldehyde Dehydrogenase, Molecular chaperone GroEL, ATP synthase subunit alpha, Flagellin, Chaperone protein HtpG and ATP synthase subunit beta, changed progressively. Based on Gene Ontology annotation, these differential proteins mainly performed 71.38% ATP binding and 16.23% response to stress functions. 
Taking the performance merits of the VF <span class="hlt">process</span> into consideration, it was concluded that earthworm activities biochemically strengthened the energy release of microbial metabolism in an uncoupled manner.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/21995007','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/21995007"><span>Voxelwise multivariate statistics and brain-wide machine learning using the full diffusion <span class="hlt">tensor</span>.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Fouque, Anne-Laure; Fillard, Pierre; Bargiacchi, Anne; Cachia, Arnaud; Zilbovicius, Monica; Thyreau, Benjamin; Le Floch, Edith; Ciuciu, Philippe; Duchesnay, Edouard</p> <p>2011-01-01</p> <p>In this paper, we propose to use the full diffusion <span class="hlt">tensor</span> to perform brain-wide score prediction on diffusion <span class="hlt">tensor</span> imaging (DTI) using the log-Euclidean framework, rather than the commonly used fractional anisotropy (FA). Indeed, scalar values such as the FA do not capture all the information contained in the diffusion <span class="hlt">tensor</span>. Additionally, full <span class="hlt">tensor</span> information is included in every step of the pre-<span class="hlt">processing</span> pipeline: registration, smoothing and feature selection using voxelwise multivariate regression analysis. 
This approach was tested on data obtained from 30 children and adolescents with autism spectrum disorder and showed some improvement over the FA-only analysis.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/20389364','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/20389364"><span>Radiative corrections to the polarizability <span class="hlt">tensor</span> of an electrically small anisotropic dielectric particle.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Albaladejo, S; Gómez-Medina, R; Froufe-Pérez, L S; Marinchio, H; Carminati, R; Torrado, J F; Armelles, G; García-Martín, A; Sáenz, J J</p> <p>2010-02-15</p> <p>Radiative corrections to the polarizability <span class="hlt">tensor</span> of isotropic particles are fundamental to understand the energy balance between absorption and scattering <span class="hlt">processes</span>. Equivalent radiative corrections for anisotropic particles are not well known. Assuming that the polarization within the particle is uniform, we derived a closed-form expression for the polarizability <span class="hlt">tensor</span> which includes radiative corrections. In the absence of absorption, this expression of the polarizability <span class="hlt">tensor</span> is consistent with the optical theorem. An analogous result for infinitely long cylinders was also derived. 
Magneto-optical Kerr effects in non-absorbing nanoparticles with magneto-optical activity arise as a consequence of radiative corrections to the electrostatic polarizability <span class="hlt">tensor</span>.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/21180727','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/21180727"><span>New insight in the template <span class="hlt">decomposition</span> <span class="hlt">process</span> of large zeolite ZSM-5 crystals: an in situ UV-Vis/fluorescence micro-spectroscopy study.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Karwacki, Lukasz; Weckhuysen, Bert M</p> <p>2011-03-07</p> <p>A combination of in situ UV-Vis and confocal fluorescence micro-spectroscopy was used to study the template <span class="hlt">decomposition</span> <span class="hlt">process</span> in large zeolite ZSM-5 crystals. Correlation of polarized light dependent UV-Vis absorption spectra with confocal fluorescence emission spectra in the 400-750 nm region allowed the extraction of localized information on the nature and amount of chemical species formed upon detemplation at the single particle level. It has been found by means of polarized light dependent UV-Vis absorption measurements that the progressive growth of molecules follows the orientation of the straight channels of ZSM-5 crystals. Oligomerizing template derivatives lead to the subsequent build-up of methyl-substituted benzenium cations and more extended coke-like species, which are thermally stable up to ∼740 K. Complementary confocal fluorescence emission spectra showed nearly equal distribution of these molecules within the entire volume of the thermally treated zeolite crystals. 
The strongest emission bands were appearing in the orange/red part of the visible spectrum, confirming the presence of large polyaromatic molecules.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_19 --> <div id="page_20" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="381"> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28634367','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28634367"><span>Proteomic analysis reveals large amounts of <span class="hlt">decomposition</span> enzymes and major metabolic pathways involved in algicidal <span class="hlt">process</span> of Trametes versicolor F21a.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Gao, Xueyan; Wang, Congyan; Dai, Wei; Ren, Shenrong; Tao, Fang; He, Xingbing; Han, Guomin; Wang, Wei</p> 
<p>2017-06-20</p> <p>A recent algicidal mode indicates that fungal mycelia can wrap and eliminate almost all co-cultivated algal cells within a short time span. However, the underlying molecular mechanism remains poorly understood. We applied proteomic analysis to investigate the algicidal <span class="hlt">process</span> of Trametes versicolor F21a and identified 3,754 fungal proteins. Of these, 30 fungal enzymes with endo- or exoglycosidase activities such as β-1,3-glucanase, α-galactosidase, α-glucosidase, alginate lyase and chondroitin lyase were significantly up-regulated. These proteins belong to Glycoside Hydrolases, Auxiliary Activities, Carbohydrate Esterases and Polysaccharide Lyases, suggesting that these enzymes may degrade lipopolysaccharides, peptidoglycans and alginic acid of algal cells. Additionally, peptidase, exonuclease, manganese peroxidase and cytochrome c peroxidase, which decompose proteins and DNA or convert other small molecules of algal cells, could be other major <span class="hlt">decomposition</span> enzymes. Gene Ontology and KEGG pathway enrichment analysis demonstrated that pyruvate metabolism and tricarboxylic acid cycle pathways play a critical role in the response to an adverse environment by increasing energy production to synthesize lytic enzymes or take up molecules. 
Carbon metabolism, selenocompound metabolism, sulfur assimilation and metabolism, as well as several amino acid biosynthesis pathways could play vital roles in the synthesis of nutrients required by fungal mycelia.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2014AIPC.1637..134B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2014AIPC.1637..134B"><span>Nested Taylor <span class="hlt">decomposition</span> in multivariate function <span class="hlt">decomposition</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Baykara, N. A.; Gürvit, Ercan</p> <p>2014-12-01</p> <p>The Fluctuationlessness approximation applied to the remainder term of a Taylor <span class="hlt">decomposition</span> expressed in integral form has already been used in many articles, and some forms of multi-point Taylor expansion have also been considered. This work combines the two: the Taylor <span class="hlt">decomposition</span> of a function is taken with the remainder expressed in integral form. The integrand is then decomposed into a Taylor series again, not necessarily around the same point as the first <span class="hlt">decomposition</span>, and a second remainder is obtained. After the necessary change of variables, which converts the integration limits to the universal [0;1] interval, a system of multiple integrals of a multivariate function is obtained. 
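For reference, the single-node Taylor decomposition with integral remainder that this construction starts from, and the change of variables that maps each integral onto the universal [0;1] interval, can be written (in generic notation, not necessarily the authors') as:

```latex
f(x) = \sum_{k=0}^{n} \frac{f^{(k)}(a)}{k!}\,(x-a)^k
     + \frac{1}{n!}\int_{a}^{x} f^{(n+1)}(t)\,(x-t)^{n}\,dt,
\qquad
\int_{a}^{x} g(t)\,dt = (x-a)\int_{0}^{1} g\bigl(a+(x-a)u\bigr)\,du .
```

Nesting means expanding the integrand $f^{(n+1)}(t)$ in the same way around a possibly different point, which inserts a second integral remainder inside the first; repeated substitution of the form above produces the multiple integrals over [0;1] mentioned in the abstract.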
The intention is then to apply the Fluctuationlessness approximation to each of these integrals one by one, obtaining better results than with the single-node Taylor <span class="hlt">decomposition</span> to which the Fluctuationlessness approximation is applied.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/20170000338','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20170000338"><span>Synthesis, Structure, Characterization, and <span class="hlt">Decomposition</span> of Nickel Dithiocarbamates: Effect of Precursor Structure and <span class="hlt">Processing</span> Conditions on Solid-State Products</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Hepp, Aloysius F.; Kulis, Michael J.; McNatt, Jeremiah S.; Duffy, Norman V.; Hoops, Michael D.; Gorse, Elizabeth; Fanwick, Philip E.; Masnovi, John; Cowen, Jonathan E.; Dominey, Raymond N.</p> <p>2016-01-01</p> <p>Single-crystal X-ray structures of four nickel dithiocarbamate complexes, the homoleptic mixed-organic bis-dithiocarbamates Ni[S2CN(isopropyl)(benzyl)]2, Ni[S2CN(ethyl)(n-butyl)]2, and Ni[S2CN(phenyl)(benzyl)]2, as well as the heteroleptic mixed-ligand complex NiCl[P(phenyl)3][(S2CN(phenyl)(benzyl)], were determined. Synthetic, spectroscopic, structural, thermal, and sulfide materials studies are discussed in light of prior literature. The spectroscopic results are routine. A slightly distorted square-planar nickel coordination environment was observed for all four complexes. The organic residues adopt conformations to minimize steric interactions. Steric effects also may determine puckering, if any, about the nickel and nitrogen atoms, both of which are planar or nearly so. A trans-influence affects the Ni-S bond distances. Nitrogen atoms interact with the CS2 carbons with a bond order of about 1.5, and the other substituents on nitrogen display transoid conformations. 
There are no strong intermolecular interactions, consistent with prior observations of the volatility of nickel dithiocarbamate complexes. Thermogravimetric analysis of the homoleptic species under inert atmosphere is consistent with production of 1:1 nickel sulfide phases. Thermolysis of nickel dithiocarbamates under flowing nitrogen produced hexagonal or α-NiS as the major phase; thermolysis under flowing forming gas produced millerite (β-NiS) at 300 °C, godlevskite (Ni9S8) at 325 and 350 °C, and heazlewoodite (Ni3S2) at 400 and 450 °C. Failure to exclude oxygen results in production of nickel oxide. Nickel sulfide phases produced seem to be primarily influenced by <span class="hlt">processing</span> conditions, in agreement with prior literature. Nickel dithiocarbamate complexes demonstrate significant promise to serve as single-source precursors to nickel sulfides, a quite interesting family of materials with numerous potential applications.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2014NuPhB.886..436K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2014NuPhB.886..436K"><span>Anyon condensation and <span class="hlt">tensor</span> categories</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Kong, Liang</p> <p>2014-09-01</p> <p>Instead of studying anyon condensation in various concrete models, we take a bootstrap approach by considering an abstract situation, in which an anyon condensation happens in a 2-d topological phase with anyonic excitations given by a modular <span class="hlt">tensor</span> category C; and the anyons in the condensed phase are given by another modular <span class="hlt">tensor</span> category D. By a bootstrap analysis, we derive a relation between anyons in D-phase and anyons in C-phase from natural physical requirements. 
It turns out that the vacuum (or the <span class="hlt">tensor</span> unit) A in D-phase must be a connected commutative separable algebra in C, and the category D is equivalent to the category of local A-modules as modular <span class="hlt">tensor</span> categories. This condensation also produces a gapped domain wall with wall excitations given by the category of A-modules in C. A more general situation is also studied in this paper. We will also show how to determine such an algebra A from the initial and final data. Multi-condensations and 1-d condensations will also be briefly discussed. Examples will be given in the toric code model, Kitaev quantum double models, Levin-Wen types of lattice models and some chiral topological phases.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://www.dtic.mil/docs/citations/ADA603921','DTIC-ST'); return false;" href="http://www.dtic.mil/docs/citations/ADA603921"><span>Active <span class="hlt">Tensor</span> Magnetic Gradiometer System</span></a></p> <p><a target="_blank" href="http://www.dtic.mil/">DTIC Science & Technology</a></p> <p></p> <p>2007-11-01</p> <p>Modify Forward Computer Models ... Modify TMGS Simulator ... active magnetic gradient measurement system are based upon the existing <span class="hlt">tensor</span> magnetic gradiometer system (TMGS) developed under project MM-1328 ... Magnetic Gradiometer System (TMGS) for UXO Detection, Imaging, and Discrimination.” The TMGS developed under MM-1328 was successfully tested at the</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2009PhDT.......201L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2009PhDT.......201L"><span>Growth and barium zirconium oxide doping study on superconducting M-barium copper oxide (M = yttrium, samarium) films using a fluorine-free metal organic <span
class="hlt">decomposition</span> <span class="hlt">process</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Lu, Feng</p> <p></p> <p>We present a fluorine-free metal organic deposition (F-free MOD) <span class="hlt">process</span> - which is possibly a rapid and economic alternative to commercial trifluoroacetate metal organic deposition (TFA-MOD) and metal organic chemical vapor deposition (MOCVD) <span class="hlt">processes</span> - for the fabrication of high quality epitaxial high temperature superconducting YBa2Cu3O7-x (YBCO) films on both Rolling-Assisted Biaxially Textured Substrates (RABiTS) and single crystal substrates. We first studied the growth of YBCO and SmBCO films, and their resulting microstructure and superconducting properties. We produced epitaxial c-axis YBCO films with high critical current density (Jc) in excess of 10^6 A/cm2 at 77 K in self-field at a thickness of ~1 μm. Because industrial applications demand high quality YBCO films with very high Jc, we investigated introducing BaZrO3 (BZO) nano-pinning sites in HTS thin films by our F-free MOD technique to improve Jc and the global pinning force (Fp). BZO-doped YBCO films were fabricated by adding extra Ba and Zr in the precursor solutions, according to the molar formula 1 YBCO + x BZO. We found the BZO content affects the growth of YBCO films and determined the optimum BZO content which leads to the most effective pinning enhancement and the least YBCO degradation. We achieved the maximum pinning force of ~10 GN/m3 for x = 0.10 BZO-doped, 200 nm thick YBCO film on SrTiO3 single crystal substrates by modifying the pyrolysis from a one-step to a two-plateau <span class="hlt">decomposition</span> during the F-free MOD <span class="hlt">process</span>. 
For growing optimum BZO-doped YBCO films on RABiTS substrates, the F-free MOD <span class="hlt">process</span> was also optimized by adjusting the maximum growth temperature and growth time to achieve stronger pinning forces. Through-<span class="hlt">process</span> quenching studies indicate that BZO forms 10-25 nm nanoparticles at the early stage of the <span class="hlt">process</span> that are stable during the following YBCO growth, demonstrating that chemically doping YBCO films with BZO using the F-free MOD <span class="hlt">process</span> is a very effective approach.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/25978006','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/25978006"><span>High-Field Electron Paramagnetic Resonance and Density Functional Theory Study of Stable Organic Radicals in Lignin: Influence of the Extraction <span class="hlt">Process</span>, Botanical Origin, and Protonation Reactions on the Radical g <span class="hlt">Tensor</span>.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Bährle, Christian; Nick, Thomas U; Bennati, Marina; Jeschke, Gunnar; Vogel, Frédéric</p> <p>2015-06-18</p> <p>The radical concentrations and g factors of stable organic radicals in different lignin preparations were determined by X-band EPR at 9 GHz. We observed that the g factors of these radicals are largely determined by the extraction <span class="hlt">process</span> and not by the botanical origin of the lignin. The parameter mostly influencing the g factor is the pH value during lignin extraction. This effect was studied in depth using high-field EPR spectroscopy at 263 GHz. We were able to determine the gxx, gyy, and gzz components of the g <span class="hlt">tensor</span> of the stable organic radicals in lignin. 
With the enhanced resolution of high-field EPR, distinct radical species could be found in this complex polymer. The radical species are assigned to substituted o-semiquinone radicals and can exist in different protonation states SH3+, SH2, SH1-, and S2-. The proposed model structures are supported by DFT calculations. The g principal values of the proposed structure were all in reasonable agreement with the experiments.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/25164246','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/25164246"><span>Low-rank approximation based non-negative multi-way array <span class="hlt">decomposition</span> on event-related potentials.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Cong, Fengyu; Zhou, Guoxu; Astikainen, Piia; Zhao, Qibin; Wu, Qiang; Nandi, Asoke K; Hietanen, Jari K; Ristaniemi, Tapani; Cichocki, Andrzej</p> <p>2014-12-01</p> <p>Non-negative <span class="hlt">tensor</span> factorization (NTF) has been successfully applied to analyze event-related potentials (ERPs), and shown superiority in terms of capturing multi-domain features. However, the time-frequency representation of ERPs by higher-order <span class="hlt">tensors</span> are usually large-scale, which prevents the popularity of most <span class="hlt">tensor</span> factorization algorithms. To overcome this issue, we introduce a non-negative canonical polyadic <span class="hlt">decomposition</span> (NCPD) based on low-rank approximation (LRA) and hierarchical alternating least square (HALS) techniques. We applied NCPD (LRAHALS and benchmark HALS) and CPD to extract multi-domain features of a visual ERP. The features and components extracted by LRAHALS NCPD and HALS NCPD were very similar, but LRAHALS NCPD was 70 times faster than HALS NCPD. 
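The HALS-type updates mentioned above can be illustrated with a minimal nonnegative CP routine in NumPy. This is a generic sketch of column-wise HALS updates on a small synthetic array, not the paper's LRA-accelerated implementation; the sizes and iteration counts are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def kr(A, B):
    """Khatri-Rao (column-wise Kronecker) product."""
    return np.einsum('ir,jr->ijr', A, B).reshape(-1, A.shape[1])

def ntf_hals(X, R, n_iter=200, eps=1e-12):
    """Nonnegative CP of a 3-way array via HALS column updates."""
    I, J, K = X.shape
    factors = [rng.random((n, R)) for n in (I, J, K)]
    unfold = [X.reshape(I, -1),                      # columns j*K + k
              X.transpose(1, 0, 2).reshape(J, -1),   # columns i*K + k
              X.transpose(2, 0, 1).reshape(K, -1)]   # columns i*J + j
    for _ in range(n_iter):
        for m in range(3):
            A = factors[m]
            B, C = (factors[i] for i in range(3) if i != m)
            Kr = kr(B, C)
            G = Kr.T @ Kr          # R x R Gram matrix
            P = unfold[m] @ Kr
            for r in range(R):     # exact update of one column at a time
                num = P[:, r] - A @ G[:, r] + A[:, r] * G[r, r]
                A[:, r] = np.maximum(eps, num / G[r, r])
    return factors

# Exact nonnegative rank-3 data with known factors.
true = [rng.random((6, 3)) for _ in range(3)]
X = np.einsum('ir,jr,kr->ijk', *true)
factors_hat = ntf_hals(X, R=3)
Xhat = np.einsum('ir,jr,kr->ijk', *factors_hat)
rel_err = np.linalg.norm(X - Xhat) / np.linalg.norm(X)
```

Each column update solves a one-column nonnegative least-squares problem in closed form, which is what makes HALS cheap per sweep compared with full nonnegative ALS subproblems.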
Moreover, the desired multi-domain feature of the ERP by NCPD showed a significant group difference (control versus depressed participants) and a difference in emotion <span class="hlt">processing</span> (fearful versus happy faces). This was more satisfactory than that by CPD, which revealed only a group difference.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2014AGUFM.B43F0301N','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2014AGUFM.B43F0301N"><span>Elucidating effects of atmospheric deposition and peat <span class="hlt">decomposition</span> <span class="hlt">processes</span> on mercury accumulation rates in a northern Minnesota peatland over last 10,000 cal years</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Nater, E. A.; Furman, O.; Toner, B. M.; Sebestyen, S. D.; Tfaily, M. M.; Chanton, J.; Fissore, C.; McFarlane, K. J.; Hanson, P. J.; Iversen, C. M.; Kolka, R. K.</p> <p>2014-12-01</p> <p>Climate change has the potential to affect mercury (Hg), sulfur (S) and carbon (C) stores and cycling in northern peatland ecosystems (NPEs). SPRUCE (Spruce and Peatland Responses Under Climate and Environmental change) is an interdisciplinary study of the effects of elevated temperature and CO2 enrichment on NPEs. Peat cores (0-3.0 m) were collected from 16 large plots located on the S1 peatland (an ombrotrophic bog treed with Picea mariana and Larix laricina) in August, 2012 for baseline characterization before the experiment begins. Peat samples were analyzed at depth increments for total Hg, bulk density, humification indices, and elemental composition. Net Hg accumulation rates over the last 10,000 years were derived from Hg concentrations and peat accumulation rates based on peat depth chronology established using 14C and 13C dating of peat cores. 
Historic Hg deposition rates are being modeled from pre-industrial deposition rates in S1 scaled by regional lake sediment records. Effects of peatland <span class="hlt">processes</span> and factors (hydrology, <span class="hlt">decomposition</span>, redox chemistry, vegetative changes, microtopography) on the biogeochemistry of Hg, S, and other elements are being assessed by comparing observed elemental depth profiles with accumulation profiles predicted solely from atmospheric deposition. We are using principal component analyses and cluster analyses to elucidate relationships between humification indices, peat physical properties, and inorganic and organic geochemistry data to interpret the main <span class="hlt">processes</span> controlling net Hg accumulation and elemental concentrations in surface and subsurface peat layers. These findings are critical to predicting how climate change will affect future accumulation of Hg as well as existing Hg stores in NPE, and for providing reference baselines for SPRUCE future investigations.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/23169583','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/23169583"><span>Effects of <span class="hlt">tensor</span> tympani muscle contraction on the middle ear and markers of a contracted muscle.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Bance, Manohar; Makki, Fawaz M; Garland, Philip; Alian, Wael A; van Wijhe, Rene G; Savage, Julian</p> <p>2013-04-01</p> <p>Many otologic disorders have been attributed to dysfunction of the <span class="hlt">tensor</span> tympani muscle, including tinnitus, otalgia, Meniere's disease and sensorineural hearing loss. 
The objective of this study was to determine adequate stimuli for <span class="hlt">tensor</span> tympani contraction in humans and determine markers of the hypercontracted state that could be used to detect this <span class="hlt">process</span> in otologic disease. Multiple types of studies. Studies included 1) measuring middle ear impedance changes in response to orbital puffs of air, facial stroking, and self-vocalization; 2) measuring changes in stapes and eardrum vibrations and middle ear acoustic impedance in response to force loading of the <span class="hlt">tensor</span> tympani in fresh human cadaveric temporal bones; 3) measuring changes in acoustic impedance in two subjects who could voluntarily contract their <span class="hlt">tensor</span> tympani, and performing an audiogram with the muscle contracted in one of these subjects; and 4) developing a lumped parameter computer model of the middle ear while simulating various levels of <span class="hlt">tensor</span> tympani contraction. Orbital jets of air are the most effective stimuli for eliciting <span class="hlt">tensor</span> tympani contraction. As markers for <span class="hlt">tensor</span> tympani contraction, all investigations indicate that <span class="hlt">tensor</span> tympani hypercontraction should result in a low-frequency hearing loss, predominantly conductive, with a decrease in middle ear compliance. These markers should be searched for in otologic pathology states where the <span class="hlt">tensor</span> tympani is suspected of being hypercontracted. 
Copyright © 2012 The American Laryngological, Rhinological, and Otological Society, Inc.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.osti.gov/scitech/biblio/1169419','SCIGOV-STC'); return false;" href="https://www.osti.gov/scitech/biblio/1169419"><span>Collaborative Research: <span class="hlt">Process</span>-resolving <span class="hlt">Decomposition</span> of the Global Temperature Response to Modes of Low Frequency Variability in a Changing Climate</span></a></p> <p><a target="_blank" href="http://www.osti.gov/scitech">SciTech Connect</a></p> <p>Cai, Ming; Deng, Yi</p> <p>2015-02-06</p> <p>El Niño-Southern Oscillation (ENSO) and Annular Modes (AMs) represent respectively the most important modes of low frequency variability in the tropical and extratropical circulations. The future projection of the ENSO and AM variability, however, remains highly uncertain with the state-of-the-art coupled general circulation models. A comprehensive understanding of the factors responsible for the inter-model discrepancies in projecting future changes in the ENSO and AM variability, in terms of multiple feedback <span class="hlt">processes</span> involved, has yet to be achieved. The proposed research aims to identify sources of such uncertainty and establish a set of <span class="hlt">process</span>-resolving quantitative evaluations of the existing predictions of the future ENSO and AM variability. 
The proposed <span class="hlt">process</span>-resolving evaluations are based on a feedback analysis method formulated in Lu and Cai (2009), which is capable of partitioning 3D temperature anomalies/perturbations into components linked to 1) radiation-related thermodynamic <span class="hlt">processes</span> such as cloud and water vapor feedbacks, 2) local dynamical <span class="hlt">processes</span> including convection and turbulent/diffusive energy transfer and 3) non-local dynamical <span class="hlt">processes</span> such as the horizontal energy transport in the oceans and atmosphere. Taking advantage of the high-resolution, multi-model ensemble products from the Coupled Model Intercomparison Project Phase 5 (CMIP5) soon to be available at the Lawrence Livermore National Lab, we will conduct a <span class="hlt">process</span>-resolving <span class="hlt">decomposition</span> of the global three-dimensional (3D) temperature (including SST) response to the ENSO and AM variability in the preindustrial, historical and future climate simulated by these models. 
Specific research tasks include 1) identifying the model-observation discrepancies in the global temperature response to ENSO and AM variability and attributing such discrepancies to specific feedback <span class="hlt">processes</span>, 2) delineating the influence of anthropogenic radiative forcing on the key feedback <span class="hlt">processes</span></p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.osti.gov/scitech/biblio/1017019','SCIGOV-STC'); return false;" href="https://www.osti.gov/scitech/biblio/1017019"><span>Scalable <span class="hlt">tensor</span> factorizations with missing data.</span></a></p> <p><a target="_blank" href="http://www.osti.gov/scitech">SciTech Connect</a></p> <p>Morup, Morten; Dunlavy, Daniel M.; Acar, Evrim; Kolda, Tamara Gibson</p> <p>2010-04-01</p> <p>The problem of missing data is ubiquitous in domains such as biomedical signal <span class="hlt">processing</span>, network traffic analysis, bibliometrics, social network analysis, chemometrics, computer vision, and communication networks|all domains in which data collection is subject to occasional errors. Moreover, these data sets can be quite large and have more than two axes of variation, e.g., sender, receiver, time. Many applications in those domains aim to capture the underlying latent structure of the data; in other words, they need to factorize data sets with missing entries. If we cannot address the problem of missing data, many important data sets will be discarded or improperly analyzed. Therefore, we need a robust and scalable approach for factorizing multi-way arrays (i.e., <span class="hlt">tensors</span>) in the presence of missing data. We focus on one of the most well-known <span class="hlt">tensor</span> factorizations, CANDECOMP/PARAFAC (CP), and formulate the CP model as a weighted least squares problem that models only the known entries. 
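The weighted least-squares idea can be made concrete with a small NumPy sketch: mask the residual with a binary indicator tensor W and descend the gradient of the masked objective. Plain gradient descent here is an illustrative stand-in for CP-WOPT's more sophisticated first-order solver, and all sizes and step settings are assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Rank-2 ground truth with roughly 40% of entries marked missing.
R = 2
A0, B0, C0 = (rng.standard_normal((6, R)) for _ in range(3))
X = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
W = (rng.random(X.shape) > 0.4).astype(float)   # 1 = observed, 0 = missing

def loss(A, B, C):
    """f = 0.5 * || W * (X - [[A, B, C]]) ||_F^2, observed entries only."""
    E = W * (X - np.einsum('ir,jr,kr->ijk', A, B, C))
    return 0.5 * np.sum(E ** 2), E

# Plain gradient descent on the weighted objective (small step for stability).
A, B, C = (0.1 * rng.standard_normal((6, R)) for _ in range(3))
step = 0.003
f0, _ = loss(A, B, C)
for _ in range(5000):
    _, E = loss(A, B, C)
    A = A + step * np.einsum('ijk,jr,kr->ir', E, B, C)
    B = B + step * np.einsum('ijk,ir,kr->jr', E, A, C)
    C = C + step * np.einsum('ijk,ir,jr->kr', E, A, B)
f_final, _ = loss(A, B, C)
```

Because the residual E is zeroed at missing entries, the gradients only ever see observed data; the fitted factors then impute the missing entries through the low-rank model.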
We develop an algorithm called CP-WOPT (CP Weighted OPTimization) using a first-order optimization approach to solve the weighted least squares problem. Based on extensive numerical experiments, our algorithm is shown to successfully factor <span class="hlt">tensors</span> with noise and up to 70% missing data. Moreover, our approach is significantly faster than the leading alternative and scales to larger problems. To show the real-world usefulness of CP-WOPT, we illustrate its applicability on a novel EEG (electroencephalogram) application where missing data is frequently encountered due to disconnections of electrodes.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017JESS..126...68P','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017JESS..126...68P"><span>Magnetotelluric impedance <span class="hlt">tensor</span> analysis for identification of transverse tectonic feature in the Wagad uplift, Kachchh, northwest India</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Pavan Kumar, G.; Kumar, Virender; Nagar, Mehul; Singh, Dilip; Mahendar, E.; Patel, Pruthul; Mahesh, P.</p> <p>2017-07-01</p> <p>The 2001 Bhuj earthquake (Mw 7.7) occurred in northwestern region of Indian peninsula has reactivated a couple of transverse faults to its surroundings. Intermediate to moderate magnitude earthquakes are occurring along these faults which includes recent Dholavira earthquake (Mw 5.1, 2012) suggesting distinct tectonic scenario in the region. We present the results of magnetotelluric (MT) impedance <span class="hlt">tensors</span> analyses of 18 sites located along a profile cutting various faults in the uplifted Wagad block of the Kachchh basin. The MT time series of 4-5 days recording duration have been <span class="hlt">processed</span> and the earth response functions are estimated in broad frequency range (0.01-1000 s). 
The observed impedance <span class="hlt">tensors</span> are analyzed using three <span class="hlt">decomposition</span> techniques as well as by the phase <span class="hlt">tensor</span> method, constrained by the induction arrows. The analyses suggest a distinct tectonic feature within the block bounded by the South Wagad Fault (SWF) and the North Wagad Fault (NWF), particularly in the period band of 1-10 s. South of the NWF, the telluric vectors and the major axes of the phase ellipses are aligned in the NNW-SSE to NW-SE direction, whereas a dominant E-W strike is obtained on the northern side of the NWF. The transverse geo-electric strike coincides with the prominent clustering of seismicity after the Bhuj earthquake and with the trend of the Manfara transverse fault, which lies in the close vicinity of the study area. We therefore suggest that a NNW-SSE trending transverse structural feature in the Wagad uplift of the basin plays a significant role in the current seismicity of this active intraplate region.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4852972','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4852972"><span>Diffusion <span class="hlt">tensor</span> MR microscopy of tissues with low diffusional anisotropy</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Bajd, Franci; Mattea, Carlos; Stapf, Siegfried</p> <p>2016-01-01</p> <p>Abstract Background: Diffusion <span class="hlt">tensor</span> imaging exploits preferential diffusional motion of water molecules residing within tissue compartments for assessment of tissue structural anisotropy. However, instrumentation and post-<span class="hlt">processing</span> errors play an important role in the determination of diffusion <span class="hlt">tensor</span> elements.
In this study, several experimental factors affecting the accuracy of diffusion <span class="hlt">tensor</span> determination were analyzed. Materials and methods: Effects of the signal-to-noise ratio and of the configuration of the applied diffusion-sensitizing gradients on fractional anisotropy bias were analyzed by means of numerical simulations. In addition, diffusion <span class="hlt">tensor</span> magnetic resonance microscopy experiments were performed on a tap water phantom and bovine articular cartilage-on-bone samples to verify the simulation results. Results: In both the simulations and the experiments, the multivariate linear regression of the diffusion-<span class="hlt">tensor</span> analysis yielded overestimated fractional anisotropy with low SNRs and with low numbers of applied diffusion-sensitizing gradients. Conclusions: An increase in the apparent fractional anisotropy due to unfavorable experimental conditions can be overcome by applying a larger number of diffusion-sensitizing gradients with small values of the condition number of the transformation matrix. This is particularly relevant in magnetic resonance microscopy, where imaging gradients are high and the signal-to-noise ratio is low. PMID:27247550</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26340788','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26340788"><span>Decentralized Dimensionality Reduction for Distributed <span class="hlt">Tensor</span> Data Across Sensor Networks.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Liang, Junli; Yu, Guoyang; Chen, Badong; Zhao, Minghua</p> <p>2016-11-01</p> <p>This paper develops a novel decentralized dimensionality reduction algorithm for distributed <span class="hlt">tensor</span> data across sensor networks. The main contributions of this paper are as follows.
First, conventional centralized methods, which utilize the entire data set to simultaneously determine all the vectors of the projection matrix along each <span class="hlt">tensor</span> mode, are not suitable for the network environment. Here, we relax the simultaneous <span class="hlt">processing</span> manner into a one-vector-by-one-vector (OVBOV) manner, i.e., determining the projection vectors (PVs) related to each <span class="hlt">tensor</span> mode one by one. Second, we prove that in the OVBOV manner each PV can be determined without modifying any <span class="hlt">tensor</span> data, which simplifies the corresponding computations. Third, we cast the decentralized PV determination problem as a set of subproblems with consensus constraints, so that it can be solved in the network environment using only local computations and information exchange among neighboring nodes. Fourth, we introduce the null space and transform the PV determination problem with complex orthogonality constraints into an equivalent hidden convex one without any orthogonality constraint, which can be solved by the Lagrange multiplier method. Finally, experimental results are given to show that the proposed algorithm is an effective dimensionality reduction scheme for distributed <span class="hlt">tensor</span> data across sensor networks.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://www.dtic.mil/docs/citations/ADA522288','DTIC-ST'); return false;" href="http://www.dtic.mil/docs/citations/ADA522288"><span>Automatic Image <span class="hlt">Decomposition</span></span></a></p> <p><a target="_blank" href="http://www.dtic.mil/">DTIC Science & Technology</a></p> <p></p> <p>2004-02-01</p> <p>optimal selection. Keywords: Image <span class="hlt">decomposition</span>, structure, texture, bounded variation, parameter selection, inpainting. 1. INTRODUCTION Natural images
This <span class="hlt">decomposition</span> has been shown in [6] to be fundamental for image inpainting , the art of modifying an image in a non...tech- nique exploited in [6] for image inpainting (see also [1, 9, 12, 14] for other related <span class="hlt">decomposition</span> approaches). As we will see bel- low, there</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015JHEP...09..182C','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015JHEP...09..182C"><span>Killing(-Yano) <span class="hlt">tensors</span> in string theory</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Chervonyi, Yuri; Lunin, Oleg</p> <p>2015-09-01</p> <p>We construct the Killing(-Yano) <span class="hlt">tensors</span> for a large class of charged black holes in higher dimensions and study general properties of such <span class="hlt">tensors</span>, in particular, their behavior under string dualities. Killing(-Yano) <span class="hlt">tensors</span> encode the symmetries beyond isometries, which lead to insights into dynamics of particles and fields on a given geometry by providing a set of conserved quantities. By analyzing the eigenvalues of the Killing <span class="hlt">tensor</span>, we provide a prescription for constructing several conserved quantities starting from a single object, and we demonstrate that Killing <span class="hlt">tensors</span> in higher dimensions are always associated with ellipsoidal coordinates. We also determine the transformations of the Killing(-Yano) <span class="hlt">tensors</span> under string dualities, and find the unique modification of the Killing-Yano equation consistent with these symmetries. 
These results are used to construct the explicit form of the Killing(-Yano) <span class="hlt">tensors</span> for the Myers-Perry black hole in an arbitrary number of dimensions and for its charged version.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4706544','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4706544"><span>Antisymmetric <span class="hlt">tensor</span> generalizations of affine vector fields</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Morisawa, Yoshiyuki; Tomoda, Kentaro</p> <p>2016-01-01</p> <p><span class="hlt">Tensor</span> generalizations of affine vector fields called symmetric and antisymmetric affine <span class="hlt">tensor</span> fields are discussed as symmetries of spacetimes. We review the properties of the symmetric ones, which have been studied in earlier works, and investigate the properties of the antisymmetric ones, which are the main theme of this paper. It is shown that antisymmetric affine <span class="hlt">tensor</span> fields are closely related to one-lower-rank antisymmetric <span class="hlt">tensor</span> fields which are parallelly transported along geodesics. It is also shown that the number of linearly independent rank-p antisymmetric affine <span class="hlt">tensor</span> fields in n dimensions is bounded by (n + 1)!/p!(n − p)!. We also derive the integrability conditions for antisymmetric affine <span class="hlt">tensor</span> fields. Using the integrability conditions, we discuss the existence of antisymmetric affine <span class="hlt">tensor</span> fields on various spacetimes.
PMID:26858463</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017CPL...672...47P','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017CPL...672...47P"><span>Low-rank factorization of electron integral <span class="hlt">tensors</span> and its application in electronic structure theory</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Peng, Bo; Kowalski, Karol</p> <p>2017-03-01</p> <p>In this letter, we apply the reverse Cuthill-McKee (RCM) algorithm to transform two-electron integral <span class="hlt">tensors</span> to their block-diagonal forms. By further applying Cholesky <span class="hlt">decomposition</span> (CD) on each of the diagonal blocks, we are able to represent the high-dimensional two-electron integral <span class="hlt">tensors</span> in terms of permutation matrices and low-rank Cholesky vectors. This representation facilitates low-rank factorizations of high-dimensional <span class="hlt">tensor</span> contractions in post-Hartree-Fock calculations.
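The Cholesky-decomposition step described in the abstract above can be illustrated with a generic pivoted Cholesky routine that factors a positive semidefinite block into a few low-rank vectors. The RCM reordering and the actual integral blocks are not reproduced here; the random rank-4 matrix standing in for a diagonal block is an illustrative assumption:

```python
import numpy as np

def pivoted_cholesky(M, tol=1e-10, max_rank=None):
    # Greedy pivoted Cholesky of a symmetric PSD matrix: M ≈ L @ L.T.
    # Stops once the largest remaining diagonal entry drops below tol,
    # so the number of columns of L is the numerical rank of M.
    n = M.shape[0]
    max_rank = n if max_rank is None else max_rank
    d = np.diag(M).astype(float).copy()  # diagonal of the Schur complement
    L = np.zeros((n, max_rank))
    for k in range(max_rank):
        p = int(np.argmax(d))
        if d[p] <= tol:
            return L[:, :k]
        L[:, k] = (M[:, p] - L[:, :k] @ L[p, :k]) / np.sqrt(d[p])
        d -= L[:, k] ** 2
    return L

# Hypothetical stand-in for one diagonal block of the integral matrix:
rng = np.random.default_rng(1)
G = rng.normal(size=(20, 4))
M = G @ G.T                        # PSD with exact rank 4
L = pivoted_cholesky(M, tol=1e-8)  # four Cholesky vectors suffice
```

The stopping tolerance directly controls the rank of the factorization, which is what makes the representation compact when a block is numerically low-rank.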
Here, we discuss the second-order Møller-Plesset (MP2) method and the linear coupled-cluster model with doubles (L-CCD) as examples to demonstrate the efficiency of this technique in representing the two-electron integrals in a compact form.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.osti.gov/pages/biblio/1342758-low-rank-factorization-electron-integral-tensors-its-application-electronic-structure-theory','SCIGOV-DOEP'); return false;" href="https://www.osti.gov/pages/biblio/1342758-low-rank-factorization-electron-integral-tensors-its-application-electronic-structure-theory"><span>Low-rank factorization of electron integral <span class="hlt">tensors</span> and its application in electronic structure theory</span></a></p> <p><a target="_blank" href="http://www.osti.gov/pages">DOE PAGES</a></p> <p>Peng, Bo; Kowalski, Karol</p> <p>2017-01-25</p> <p>In this paper, we apply the reverse Cuthill-McKee (RCM) algorithm to transform two-electron integral <span class="hlt">tensors</span> to their block-diagonal forms. By further applying Cholesky <span class="hlt">decomposition</span> (CD) on each of the diagonal blocks, we are able to represent the high-dimensional two-electron integral <span class="hlt">tensors</span> in terms of permutation matrices and low-rank Cholesky vectors. This representation facilitates low-rank factorizations of high-dimensional <span class="hlt">tensor</span> contractions in post-Hartree-Fock calculations.
Finally, we discuss the second-order Møller-Plesset (MP2) method and the linear coupled-cluster model with doubles (L-CCD) as examples to demonstrate the efficiency of this technique in representing the two-electron integrals in a compact form.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_20 --> <div id="page_21" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="401"> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.osti.gov/scitech/biblio/21448534','SCIGOV-STC'); return false;" href="https://www.osti.gov/scitech/biblio/21448534"><span>Competition between the <span class="hlt">tensor</span> light shift and nonlinear Zeeman effect</span></a></p> <p><a target="_blank" href="http://www.osti.gov/scitech">SciTech Connect</a></p> <p>Chalupczak, W.; Wojciechowski, A.; Pustelny, S.; Gawlik, W.</p> <p>2010-08-15</p> <p>Many precision measurements (e.g., in spectroscopy,
atomic clocks, quantum-information <span class="hlt">processing</span>, etc.) suffer from systematic errors introduced by the light shift. In our experimental configuration, however, the <span class="hlt">tensor</span> light shift plays a positive role, enabling the observation of spectral features otherwise masked by the cancellation of the transition amplitudes and creating resonances at a frequency unperturbed by either laser power or beam inhomogeneity. These phenomena occur thanks to the special relation between the nonlinear Zeeman and light shift effects. The interplay between these two perturbations is systematically studied, and the cancellation of the nonlinear Zeeman effect by the <span class="hlt">tensor</span> light shift is demonstrated.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://www.osti.gov/scitech/servlets/purl/1164294','SCIGOV-STC'); return false;" href="http://www.osti.gov/scitech/servlets/purl/1164294"><span>Collaborative Research: <span class="hlt">Process</span>-Resolving <span class="hlt">Decomposition</span> of the Global Temperature Response to Modes of Low Frequency Variability in a Changing Climate</span></a></p> <p><a target="_blank" href="http://www.osti.gov/scitech">SciTech Connect</a></p> <p>Deng, Yi</p> <p>2014-11-24</p> <p>DOE-GTRC-05596 11/24/2014 Collaborative Research: <span class="hlt">Process</span>-Resolving <span class="hlt">Decomposition</span> of the Global Temperature Response to Modes of Low Frequency Variability in a Changing Climate PI: Dr. Yi Deng (PI) School of Earth and Atmospheric Sciences Georgia Institute of Technology 404-385-1821, yi.deng@eas.gatech.edu El Niño-Southern Oscillation (ENSO) and Annular Modes (AMs) represent, respectively, the most important modes of low frequency variability in the tropical and extratropical circulations. The projection of future changes in the ENSO and AM variability, however, remains highly uncertain with the state-of-the-science climate models.
This project conducted <span class="hlt">process</span>-resolving, quantitative evaluations of the ENSO and AM variability in the modern reanalysis observations and in climate model simulations. The goal is to identify and understand the sources of uncertainty and biases in models’ representation of ENSO and AM variability. Using a feedback analysis method originally formulated by one of the collaborative PIs, we partitioned the 3D atmospheric temperature anomalies and surface temperature anomalies associated with ENSO and AM variability into components linked to 1) radiation-related thermodynamic <span class="hlt">processes</span> such as cloud and water vapor feedbacks, 2) local dynamical <span class="hlt">processes</span> including convection and turbulent/diffusive energy transfer, and 3) non-local dynamical <span class="hlt">processes</span> such as the horizontal energy transport in the oceans and atmosphere. In the past 4 years, the research conducted at Georgia Tech under the support of this project has led to 15 peer-reviewed publications and 9 conference/workshop presentations. Two graduate students and one postdoctoral fellow also received research training through participating in the project activities. This final technical report summarizes the key scientific discoveries we made and also provides a list of all publications and conference presentations resulting from research activities at Georgia Tech.
The main findings include</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016APS..DPPG10140P','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016APS..DPPG10140P"><span>Agyrotropic pressure <span class="hlt">tensor</span> induced by the plasma velocity shear</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Pegoraro, Francesco; Del Sarto, Daniele; Califano, Francesco</p> <p>2016-10-01</p> <p>We show that the spatial inhomogeneity of a shear flow in a fluid plasma is transferred to a pressure anisotropy that has both a gyrotropic and a non-gyrotropic component. We investigate this <span class="hlt">process</span> both analytically and numerically by including the full pressure <span class="hlt">tensor</span> dynamics. We determine the time evolution of the pressure agyrotropy and, more generally, of the pressure <span class="hlt">tensor</span> anisotropization, which arise from the action of both the magnetic field and the flow strain <span class="hlt">tensor</span>.
This mechanism can affect the onset and development of shear-induced fluid instabilities in plasmas and is relevant to understanding the origin of some of the non-Maxwellian distribution functions with pressure agyrotropy evidenced both in Vlasov simulations and in space plasma measurements.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4288545','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4288545"><span>Extracting the diffusion <span class="hlt">tensor</span> from molecular dynamics simulation with Milestoning</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Mugnai, Mauro L.; Elber, Ron</p> <p>2015-01-01</p> <p>We propose an algorithm to extract the diffusion <span class="hlt">tensor</span> from Molecular Dynamics simulations with Milestoning. A Kramers-Moyal expansion of a discrete master equation, which is the Markovian limit of the Milestoning theory, determines the diffusion <span class="hlt">tensor</span>. To test the algorithm, we analyze overdamped Langevin trajectories and recover a multidimensional Fokker-Planck equation. The recovery <span class="hlt">process</span> determines the flux through a mesh and estimates local kinetic parameters. Rate coefficients are converted to the derivatives of the potential of mean force and to a coordinate-dependent diffusion <span class="hlt">tensor</span>. We illustrate the computation on simple models and on an atomically detailed system: the diffusion along the backbone torsions of a solvated alanine dipeptide.
PMID:25573551</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/20443089','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/20443089"><span>Dissolution enhancement of a drug exhibiting thermal and acidic <span class="hlt">decomposition</span> characteristics by fusion <span class="hlt">processing</span>: a comparative study of hot melt extrusion and KinetiSol dispersing.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Hughey, Justin R; DiNunzio, James C; Bennett, Ryan C; Brough, Chris; Miller, Dave A; Ma, Hua; Williams, Robert O; McGinity, James W</p> <p>2010-06-01</p> <p>In this study, hot melt extrusion (HME) and KinetiSol Dispersing (KSD) were utilized to prepare dissolution-enhanced solid dispersions of Roche Research Compound A (ROA), a BCS class II drug. Preformulation characterization studies showed that ROA was chemically unstable at elevated temperatures and acidic pH values. Eudragit L100-55 and AQOAT LF (HPMCAS) were evaluated as carrier polymers. Dispersions were characterized for ROA recovery, crystallinity, homogeneity, and non-sink dissolution. Eudragit L100-55 dispersions prepared by HME required the use of micronized ROA and reduced residence times in order to become substantially amorphous. Compositions containing HPMCAS were also prepared by HME, but an amorphous dispersion could not be obtained. All HME compositions contained ROA-related impurities. KSD was investigated as a method to reduce the <span class="hlt">decomposition</span> of ROA while rendering compositions amorphous. Substantially amorphous, plasticizer free compositions were <span class="hlt">processed</span> successfully by KSD with significantly higher ROA recovery values and amorphous character than those achieved by HME. A near-infrared chemical imaging analysis was conducted on the solid dispersions as a measure of homogeneity. 
A statistical analysis showed similar levels of homogeneity in compositions containing Eudragit L100-55, while differences were observed in those containing HPMCAS. Non-sink dissolution analysis of all compositions showed, after pH adjustment, rapid supersaturation to approximately two to three times the equilibrium solubility of ROA, which was maintained for at least 24 h. The results of the study demonstrated that KSD is an effective method of forming dissolution-enhanced amorphous solid solutions in cases where HME is not a feasible technique.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28484319','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28484319"><span>Geometric <span class="hlt">decompositions</span> of collective motion.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Mischiati, Matteo; Krishnaprasad, P S</p> <p>2017-04-01</p> <p>Collective motion in nature is a captivating phenomenon. Revealing the underlying mechanisms, which are of biological and theoretical interest, will require empirical data, modelling and analysis techniques. Here, we contribute a geometric viewpoint, yielding a novel method of analysing movement. Snapshots of collective motion are portrayed as tangent vectors on configuration space, with length determined by the total kinetic energy. Using the geometry of fibre bundles and connections, this portrait is split into orthogonal components each tangential to a lower dimensional manifold derived from configuration space. The resulting <span class="hlt">decomposition</span>, when interleaved with classical shape space construction, is categorized into a family of kinematic modes, including rigid translations, rigid rotations, inertia <span class="hlt">tensor</span> transformations, expansions and compressions.
Snapshots of empirical data from natural collectives can be allocated to these modes and weighted by fractions of total kinetic energy. Such quantitative measures can provide insight into the variation of the driving goals of a collective, as illustrated by applying these methods to a publicly available dataset of pigeon flocking. The geometric framework may also be profitably employed in the control of artificial systems of interacting agents such as robots.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017RSPSA.47360571M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017RSPSA.47360571M"><span>Geometric <span class="hlt">decompositions</span> of collective motion</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Mischiati, Matteo; Krishnaprasad, P. S.</p> <p>2017-04-01</p> <p>Collective motion in nature is a captivating phenomenon. Revealing the underlying mechanisms, which are of biological and theoretical interest, will require empirical data, modelling and analysis techniques. Here, we contribute a geometric viewpoint, yielding a novel method of analysing movement. Snapshots of collective motion are portrayed as tangent vectors on configuration space, with length determined by the total kinetic energy. Using the geometry of fibre bundles and connections, this portrait is split into orthogonal components each tangential to a lower dimensional manifold derived from configuration space. The resulting <span class="hlt">decomposition</span>, when interleaved with classical shape space construction, is categorized into a family of kinematic modes, including rigid translations, rigid rotations, inertia <span class="hlt">tensor</span> transformations, expansions and compressions. Snapshots of empirical data from natural collectives can be allocated to these modes and weighted by fractions of total kinetic energy.
Such quantitative measures can provide insight into the variation of the driving goals of a collective, as illustrated by applying these methods to a publicly available dataset of pigeon flocking. The geometric framework may also be profitably employed in the control of artificial systems of interacting agents such as robots.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/22271823','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/22271823"><span><span class="hlt">Tensor</span> completion for estimating missing values in visual data.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Liu, Ji; Musialski, Przemyslaw; Wonka, Peter; Ye, Jieping</p> <p>2013-01-01</p> <p>In this paper, we propose an algorithm to estimate missing values in <span class="hlt">tensors</span> of visual data. The values can be missing due to problems in the acquisition <span class="hlt">process</span> or because the user manually identified unwanted outliers. Our algorithm works even with a small number of samples, and it can propagate structure to fill larger missing regions. Our methodology is built on recent studies about matrix completion using the matrix trace norm. The contribution of our paper is to extend the matrix case to the <span class="hlt">tensor</span> case by proposing the first definition of the trace norm for <span class="hlt">tensors</span> and then by building a working algorithm. First, we propose a definition for the <span class="hlt">tensor</span> trace norm that generalizes the established definition of the matrix trace norm. Second, similarly to matrix completion, the <span class="hlt">tensor</span> completion is formulated as a convex optimization problem. Unfortunately, the straightforward problem extension is significantly harder to solve than the matrix case because of the dependency among multiple constraints.
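A tensor trace norm of the kind discussed in the abstract above is commonly written as a weighted sum of the nuclear norms of the mode-n matricizations; the uniform weights in this sketch are an illustrative choice, not necessarily the paper's exact definition:

```python
import numpy as np

def unfold(X, mode):
    # Mode-n matricization: rows index the chosen mode, columns the rest.
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def tensor_trace_norm(X, weights=None):
    # Weighted sum of the nuclear norms of all mode-n unfoldings.
    N = X.ndim
    weights = [1.0 / N] * N if weights is None else weights
    return sum(w * np.linalg.norm(unfold(X, n), 'nuc')
               for n, w in enumerate(weights))

# A rank-1 tensor with unit-norm factors: every unfolding is rank 1 with
# nuclear norm 1, so the uniform-weight trace norm is 1, matching the
# matrix intuition the definition generalizes.
a = np.full(2, 1 / np.sqrt(2))
b = np.full(3, 1 / np.sqrt(3))
c = np.full(4, 0.5)
X = np.einsum('i,j,k->ijk', a, b, c)
val = tensor_trace_norm(X)
```

Because each unfolding's nuclear norm is convex in X, this weighted sum is convex as well, which is what lets the completion problem be posed as convex optimization.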
To tackle this problem, we developed three algorithms: simple low rank <span class="hlt">tensor</span> completion (SiLRTC), fast low rank <span class="hlt">tensor</span> completion (FaLRTC), and high accuracy low rank <span class="hlt">tensor</span> completion (HaLRTC). The SiLRTC algorithm is simple to implement and employs a relaxation technique to separate the dependent relationships and uses the block coordinate descent (BCD) method to achieve a globally optimal solution; the FaLRTC algorithm utilizes a smoothing scheme to transform the original nonsmooth problem into a smooth one and can be used to solve a general <span class="hlt">tensor</span> trace norm minimization problem; the HaLRTC algorithm applies the alternating direction method of multipliers (ADMMs) to our problem. Our experiments show potential applications of our algorithms and the quantitative evaluation indicates that our methods are more accurate and robust than heuristic approaches. The efficiency comparison indicates that FaLRTC and HaLRTC are more efficient than SiLRTC and between FaLRTC an</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=52827','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=52827"><span><span class="hlt">Tensor</span> species and symmetric functions.</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Méndez, M</p> <p>1991-01-01</p> <p>An equivariant representation of the symmetric group Sn (equivariant representation from here on) is defined as a particular type of <span class="hlt">tensor</span> species. For any <span class="hlt">tensor</span> species R the characteristic generating function of R is defined in a way that generalizes the Frobenius characters of representations of the symmetric groups.
If R is an equivariant representation, then the characteristic is a homogeneous symmetric function. The combinatorial operations on equivariant representations correspond to formal operations on the respective characteristic functions. In particular, substitution of equivariant representations corresponds to plethysm of symmetric functions. Equivariant representations are constructed that have as characteristic the elementary, complete, and Schur functions. Bijective proofs are given for the formulas that connect them with the monomial symmetric functions. PMID:11607233</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017JCAP...04..007A','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017JCAP...04..007A"><span>Scalar-<span class="hlt">tensor</span> linear inflation</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Artymowski, Michał; Racioppi, Antonio</p> <p>2017-04-01</p> <p>We investigate two approaches to non-minimally coupled gravity theories which present linear inflation as an attractor solution: a) the scalar-<span class="hlt">tensor</span> theory approach, where we look for a scalar-<span class="hlt">tensor</span> theory that would restore the results of linear inflation in the strong coupling limit for a non-minimal coupling to gravity of the form f(φ)R/2; b) the particle physics approach, where we motivate the form of the Jordan frame potential by loop corrections to the inflaton field.
In both cases the Jordan frame potentials are modifications of the induced gravity inflationary scenario, but instead of the Starobinsky attractor they lead to linear inflation in the strong coupling limit.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017PhRvB..95d5117E','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017PhRvB..95d5117E"><span>Algorithms for <span class="hlt">tensor</span> network renormalization</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Evenbly, G.</p> <p>2017-01-01</p> <p>We discuss in detail algorithms for implementing <span class="hlt">tensor</span> network renormalization (TNR) for the study of classical statistical and quantum many-body systems. First, we recall established techniques for how the partition function of a 2D classical many-body system or the Euclidean path integral of a 1D quantum system can be represented as a network of <span class="hlt">tensors</span>, before describing how TNR can be implemented to efficiently contract the network via a sequence of coarse-graining transformations.
The efficacy of the TNR approach is then benchmarked for the 2D classical statistical and 1D quantum Ising models; in particular, the ability of TNR to maintain a high level of accuracy over sustained coarse-graining transformations, even at a critical point, is demonstrated.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017JGP...120..262P','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017JGP...120..262P"><span>Surgery in colored <span class="hlt">tensor</span> models</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Pérez-Sánchez, Carlos I.</p> <p>2017-10-01</p> <p>Rooted in group field theory and matrix models, random <span class="hlt">tensor</span> models are a recent background-invariant approach to quantum gravity in arbitrary dimensions. Colored <span class="hlt">tensor</span> models (CTM) generate random triangulated orientable (pseudo)-manifolds. We analyze, in low dimensions, which known spaces are triangulated by specific CTM interactions. As a tool, we develop the graph-encoded surgery that is compatible with the quantum-field-theory structure and use it to prove that a single model, the complex φ⁴-interaction in rank 2, generates all orientable 2-bordisms, thus, in particular, also all orientable, closed surfaces. We show that a certain quartic rank-3 CTM, the φ₃⁴-theory, has as boundary sector all closed, possibly disconnected, orientable surfaces.
Hence all closed orientable surfaces are cobordant via manifolds generated by the φ₃⁴-theory.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016CQGra..33iLT01N','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016CQGra..33iLT01N"><span>Gravitational scalar-<span class="hlt">tensor</span> theory</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Naruko, Atsushi; Yoshida, Daisuke; Mukohyama, Shinji</p> <p>2016-05-01</p> <p>We consider a new form of gravity theories in which the action is written in terms of the Ricci scalar and its first and second derivatives. Despite the higher derivative nature of the action, the theory is ghost-free under an appropriate choice of the functional form of the Lagrangian. This model possesses 2 + 2 physical degrees of freedom, namely 2 scalar degrees and 2 <span class="hlt">tensor</span> degrees. We exhaust all such theories with the Lagrangian of the form f(R, (∇R)², □R).</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017CMaPh.354..317G','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017CMaPh.354..317G"><span>Bulk Universality for Random Lozenge Tilings Near Straight Boundaries and for <span class="hlt">Tensor</span> Products</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Gorin, Vadim</p> <p>2017-08-01</p> <p>We prove that the asymptotic behavior of the bulk local statistics in models of random lozenge tilings is universal in the vicinity of straight boundaries of the tiled domains.
The result applies to uniformly random lozenge tilings of large polygonal domains on the triangular lattice and to the probability measures describing the <span class="hlt">decomposition</span> in Gelfand-Tsetlin bases of <span class="hlt">tensor</span> products of representations of unitary groups. In a weaker form, our theorem also applies to random domino tilings.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.osti.gov/scitech/biblio/22382020','SCIGOV-STC'); return false;" href="https://www.osti.gov/scitech/biblio/22382020"><span>Generalised <span class="hlt">tensor</span> fluctuations and inflation</span></a></p> <p><a target="_blank" href="http://www.osti.gov/scitech">SciTech Connect</a></p> <p>Cannone, Dario; Tasinato, Gianmassimo; Wands, David E-mail: g.tasinato@swansea.ac.uk</p> <p>2015-01-01</p> <p>Using an effective field theory approach to inflation, we examine novel properties of the spectrum of inflationary <span class="hlt">tensor</span> fluctuations that arise when breaking some of the symmetries or requirements usually imposed on the dynamics of perturbations. During single-clock inflation, time-reparameterization invariance is broken by a time-dependent cosmological background. In order to explore more general scenarios, we consider the possibility that spatial diffeomorphism invariance is also broken by effective mass terms or by derivative operators for the metric fluctuations in the Lagrangian. We investigate the cosmological consequences of the breaking of spatial diffeomorphisms, focussing on operators that affect the power spectrum of fluctuations. We identify the operators for <span class="hlt">tensor</span> fluctuations that can provide a blue spectrum without violating the null energy condition, and operators for scalar fluctuations that lead to non-conservation of the comoving curvature perturbation on superhorizon scales even in single-clock inflation.
In the last part of our work, we also examine the consequences of operators containing more than two spatial derivatives, discussing how they affect the sound speed of <span class="hlt">tensor</span> fluctuations, and showing that they can mimic some of the interesting effects of symmetry breaking operators, even in scenarios that preserve spatial diffeomorphism invariance.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/19900008080','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19900008080"><span>An analysis of scatter <span class="hlt">decomposition</span></span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Nicol, David M.; Saltz, Joel H.</p> <p>1990-01-01</p> <p>A formal analysis of a powerful mapping technique known as scatter <span class="hlt">decomposition</span> is presented. Scatter <span class="hlt">decomposition</span> divides an irregular computational domain into a large number of equal-sized pieces, and distributes them modularly among processors. A probabilistic model of workload in one dimension is used to formally explain why, and when, scatter <span class="hlt">decomposition</span> works. The first result is that if correlation in workload is a convex function of distance, then scattering a more finely decomposed domain yields a lower average processor workload variance. The second result shows that if the workload <span class="hlt">process</span> is stationary Gaussian and the correlation function decreases linearly in distance until becoming zero and then remains zero, scattering a more finely decomposed domain yields a lower expected maximum processor workload.
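The modular mapping analyzed in this abstract is easy to simulate. The sketch below is a minimal illustration, assuming a hypothetical 1-D domain with a smooth (spatially correlated) workload ramp; the piece and processor counts are arbitrary choices, not values from the paper:

```python
import numpy as np

# Hypothetical 1-D domain: 64 equal-sized pieces with spatially
# correlated workload (a smooth ramp), mapped onto 4 processors.
n_pieces, n_procs = 64, 4
workload = np.linspace(1.0, 10.0, n_pieces)  # nearby pieces have similar cost

# Scatter decomposition: piece i is assigned to processor i mod P.
scatter_loads = np.array([workload[p::n_procs].sum() for p in range(n_procs)])

# Contiguous block mapping for comparison: consecutive pieces per processor.
block_loads = workload.reshape(n_procs, -1).sum(axis=1)

# With correlated workload, scattering yields a far lower load variance.
print(scatter_loads.var(), block_loads.var())
```

Refining the decomposition (more pieces per processor) drives the scatter variance down further, in line with the first result quoted above.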
Finally it is shown that if the correlation function decreases linearly across the entire domain, then among all mappings that assign an equal number of domain pieces to each processor, scatter <span class="hlt">decomposition</span> minimizes the average processor workload variance. The dependence of these results on the assumption of decreasing correlation is illustrated with situations where a coarser granularity actually achieves better load balance.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2007NW.....94...12C','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2007NW.....94...12C"><span>Cadaver <span class="hlt">decomposition</span> in terrestrial ecosystems</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Carter, David O.; Yellowlees, David; Tibbett, Mark</p> <p>2007-01-01</p> <p>A dead mammal (i.e. cadaver) is a high quality resource (narrow carbon:nitrogen ratio, high water content) that releases an intense, localised pulse of carbon and nutrients into the soil upon <span class="hlt">decomposition</span>. Despite the fact that as much as 5,000 kg of cadaver can be introduced to a square kilometre of terrestrial ecosystem each year, cadaver <span class="hlt">decomposition</span> remains a neglected microsere. Here we review the <span class="hlt">processes</span> associated with the introduction of cadaver-derived carbon and nutrients into soil from forensic and ecological settings to show that cadaver <span class="hlt">decomposition</span> can have a greater, albeit localised, effect on belowground ecology than plant and faecal resources. Cadaveric materials are rapidly introduced to belowground floral and faunal communities, which results in the formation of a highly concentrated island of fertility, or cadaver <span class="hlt">decomposition</span> island (CDI). 
CDIs are associated with increased soil microbial biomass, microbial activity (C mineralisation) and nematode abundance. Each CDI is an ephemeral natural disturbance that, in addition to releasing energy and nutrients to the wider ecosystem, acts as a hub by receiving these materials in the form of dead insects, exuvia and puparia, faecal matter (from scavengers, grazers and predators) and feathers (from avian scavengers and predators). As such, CDIs contribute to landscape heterogeneity. Furthermore, CDIs are a specialised habitat for a number of flies, beetles and pioneer vegetation, which enhances biodiversity in terrestrial ecosystems.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5464189','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5464189"><span>A Type-2 Block-Component-<span class="hlt">Decomposition</span> Based 2D AOA Estimation Algorithm for an Electromagnetic Vector Sensor Array</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Gao, Yu-Fei; Gui, Guan; Xie, Wei; Zou, Yan-Bin; Yang, Yue; Wan, Qun</p> <p>2017-01-01</p> <p>This paper investigates a two-dimensional angle of arrival (2D AOA) estimation algorithm for the electromagnetic vector sensor (EMVS) array based on Type-2 block component <span class="hlt">decomposition</span> (BCD) <span class="hlt">tensor</span> modeling. Such a <span class="hlt">tensor</span> <span class="hlt">decomposition</span> method can take full advantage of the multidimensional structural information of electromagnetic signals to accomplish blind estimation for array parameters with higher resolution. 
However, existing <span class="hlt">tensor</span> <span class="hlt">decomposition</span> methods encounter many restrictions in applications of the EMVS array, such as the strict requirement for uniqueness conditions of <span class="hlt">decomposition</span>, the inability to handle partially-polarized signals, etc. To solve these problems, this paper investigates <span class="hlt">tensor</span> modeling for partially-polarized signals of an L-shaped EMVS array. The 2D AOA estimation algorithm based on rank-(L1,L2,·) BCD is developed, and the uniqueness condition of <span class="hlt">decomposition</span> is analyzed. By means of the estimated steering matrix, the proposed algorithm can automatically achieve angle pair-matching. Numerical experiments demonstrate that the present algorithm has the advantages of both accuracy and robustness of parameter estimation. Even under the conditions of lower SNR, small angular separation and limited snapshots, the proposed algorithm still possesses better performance than subspace methods and the canonical polyadic <span class="hlt">decomposition</span> (CPD) method. PMID:28448431</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28448431','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28448431"><span>A Type-2 Block-Component-<span class="hlt">Decomposition</span> Based 2D AOA Estimation Algorithm for an Electromagnetic Vector Sensor Array.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Gao, Yu-Fei; Gui, Guan; Xie, Wei; Zou, Yan-Bin; Yang, Yue; Wan, Qun</p> <p>2017-04-27</p> <p>This paper investigates a two-dimensional angle of arrival (2D AOA) estimation algorithm for the electromagnetic vector sensor (EMVS) array based on Type-2 block component <span class="hlt">decomposition</span> (BCD) <span class="hlt">tensor</span> modeling. 
Such a <span class="hlt">tensor</span> <span class="hlt">decomposition</span> method can take full advantage of the multidimensional structural information of electromagnetic signals to accomplish blind estimation for array parameters with higher resolution. However, existing <span class="hlt">tensor</span> <span class="hlt">decomposition</span> methods encounter many restrictions in applications of the EMVS array, such as the strict requirement for uniqueness conditions of <span class="hlt">decomposition</span>, the inability to handle partially-polarized signals, etc. To solve these problems, this paper investigates <span class="hlt">tensor</span> modeling for partially-polarized signals of an L-shaped EMVS array. The 2D AOA estimation algorithm based on rank-(L1, L2, ·) BCD is developed, and the uniqueness condition of <span class="hlt">decomposition</span> is analyzed. By means of the estimated steering matrix, the proposed algorithm can automatically achieve angle pair-matching. Numerical experiments demonstrate that the present algorithm has the advantages of both accuracy and robustness of parameter estimation. Even under the conditions of lower SNR, small angular separation and limited snapshots, the proposed algorithm still possesses better performance than subspace methods and the canonical polyadic <span class="hlt">decomposition</span> (CPD) method.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/1985PhDT.........6R','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/1985PhDT.........6R"><span>Seismic moment <span class="hlt">tensor</span> recovery at low frequencies</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Riedesel, M.
A.</p> <p></p> <p>A low-frequency, normal mode technique which provides estimates of the seismic moment <span class="hlt">tensor</span> in as many as ten separate 1 mHz bands is described. The basic data kernels are integrals of the complex spectra of the untapered seismograms with a bandwidth of 0.1 mHz, centered on the mode frequencies of the fundamental modes. The frequency-domain integration <span class="hlt">process</span> reduces the sensitivity of the solutions to attenuation and splitting. Adjustments in the phase of the integrals are computed to compensate for the effects of lateral heterogeneity, station timing errors, and centroid time shifts. Estimates of the covariance of the solutions are used to provide uncertainties for the source mechanism and the principal stress axes. A graphical method is developed which allows a rapid visual assessment of the significance of non-double-couple and isotropic components of the solutions. The method was applied to 57 earthquakes recorded on the IDA network between 1977 and 1984.
The moment rate <span class="hlt">tensor</span> and its uncertainty were investigated in 1 mHz bands over the 1 to 11 mHz frequency range.</p> </li> </ol> <div class="pull-right"> <ul class="pagination"> <li><a href="#" onclick='return showDiv("page_1");'>«</a></li> <li><a href="#" onclick='return showDiv("page_19");'>19</a></li> <li><a href="#" onclick='return showDiv("page_20");'>20</a></li> <li class="active"><span>21</span></li> <li><a href="#" onclick='return showDiv("page_22");'>22</a></li> <li><a href="#" onclick='return showDiv("page_23");'>23</a></li> <li><a href="#" onclick='return showDiv("page_25");'>»</a></li> </ul> </div> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_21 --> <div id="page_22" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <div class="pull-right"> <ul class="pagination"> <li><a href="#" onclick='return showDiv("page_1");'>«</a></li> <li><a href="#" onclick='return showDiv("page_20");'>20</a></li> <li><a href="#" onclick='return showDiv("page_21");'>21</a></li> <li class="active"><span>22</span></li> <li><a href="#" onclick='return showDiv("page_23");'>23</a></li> <li><a href="#" onclick='return showDiv("page_24");'>24</a></li> <li><a href="#" onclick='return showDiv("page_25");'>»</a></li> </ul> </div> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="421"> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017JCoPh.341..140G','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017JCoPh.341..140G"><span>Nearest-neighbor interaction systems in the <span class="hlt">tensor</span>-train format</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Gelß, Patrick; Klus, Stefan; Matera, Sebastian; Schütte, Christof</p> <p>2017-07-01</p> <p>Low-rank <span class="hlt">tensor</span> approximation approaches have become an important tool in the
scientific computing community. The aim is to enable the simulation and analysis of high-dimensional problems which can no longer be solved using conventional methods due to the so-called curse of dimensionality. This requires techniques to handle linear operators defined on extremely large state spaces and to solve the resulting systems of linear equations or eigenvalue problems. In this paper, we present a systematic <span class="hlt">tensor</span>-train <span class="hlt">decomposition</span> for nearest-neighbor interaction systems which is applicable to a host of different problems. With the aid of this <span class="hlt">decomposition</span>, it is possible to reduce the memory consumption as well as the computational costs significantly. Furthermore, it can be shown that in some cases the rank of the <span class="hlt">tensor</span> <span class="hlt">decomposition</span> does not depend on the network size. The format is thus feasible even for high-dimensional systems. We will illustrate the results with several guiding examples such as the Ising model, a system of coupled oscillators, and a CO oxidation model.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://www.osti.gov/scitech/servlets/purl/4971','SCIGOV-STC'); return false;" href="http://www.osti.gov/scitech/servlets/purl/4971"><span><span class="hlt">Decomposition</span> of Sodium Tetraphenylborate</span></a></p> <p><a target="_blank" href="http://www.osti.gov/scitech">SciTech Connect</a></p> <p>Barnes, M.J.</p> <p>1998-11-20</p> <p>The chemical <span class="hlt">decomposition</span> of aqueous alkaline solutions of sodium tetraphenylborate (NaTPB) has been investigated. The focus of the investigation is on the determination of additives and/or variables which influence NaTPB <span class="hlt">decomposition</span>.
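The tensor-train format used in the nearest-neighbor interaction work above can be built, for a generic dense tensor, by the standard TT-SVD sweep of truncated SVDs over unfoldings. The sketch below is a generic illustration with an assumed truncation threshold `eps`, not the paper's specialized nearest-neighbor construction:

```python
import numpy as np

def tt_svd(A, eps=1e-10):
    """Decompose a full tensor A into tensor-train (TT) cores via
    sequential truncated SVDs of its unfoldings."""
    shape, d = A.shape, A.ndim
    cores, r_prev = [], 1
    M = A.reshape(shape[0], -1)
    for k in range(d - 1):
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        r = max(1, int(np.sum(s > eps * s[0])))  # truncation rank for this bond
        cores.append(U[:, :r].reshape(r_prev, shape[k], r))
        M = (s[:r, None] * Vt[:r]).reshape(r * shape[k + 1], -1)
        r_prev = r
    cores.append(M.reshape(r_prev, shape[-1], 1))
    return cores

def tt_to_full(cores):
    """Contract the TT cores back into a full tensor."""
    G = cores[0]
    for core in cores[1:]:
        G = np.tensordot(G, core, axes=([G.ndim - 1], [0]))
    return G[0, ..., 0]
```

For a tensor whose TT ranks are genuinely small, the cores reproduce the full tensor up to the truncation tolerance while storing far fewer entries, which is the memory saving the abstract refers to.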
This document describes work aimed at providing a better understanding of the relationship of copper(II), solution temperature, and solution pH to NaTPB stability.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/25291733','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/25291733"><span>Sparse alignment for robust <span class="hlt">tensor</span> learning.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Lai, Zhihui; Wong, Wai Keung; Xu, Yong; Zhao, Cairong; Sun, Mingming</p> <p>2014-10-01</p> <p>Multilinear/<span class="hlt">tensor</span> extensions of manifold-learning-based algorithms have been widely used in computer vision and pattern recognition. This paper first provides a systematic analysis of the multilinear extensions for the most popular methods by using alignment techniques, thereby obtaining a general <span class="hlt">tensor</span> alignment framework. From this framework, it is easy to show that the manifold-learning-based <span class="hlt">tensor</span> learning methods are intrinsically different from the alignment techniques. Based on the alignment framework, a robust <span class="hlt">tensor</span> learning method called sparse <span class="hlt">tensor</span> alignment (STA) is then proposed for unsupervised <span class="hlt">tensor</span> feature extraction. Different from the existing <span class="hlt">tensor</span> learning methods, L1- and L2-norms are introduced to enhance the robustness in the alignment step of the STA. The advantage of the proposed technique is that the difficulty in selecting the size of the local neighborhood can be avoided in the manifold-learning-based <span class="hlt">tensor</span> feature extraction algorithms. Although STA is an unsupervised learning method, the sparsity encodes the discriminative information in the alignment step and provides the robustness of STA.
Extensive experiments on well-known image databases, as well as action and hand gesture databases, with object images encoded as <span class="hlt">tensors</span>, demonstrate that the proposed STA algorithm gives the most competitive performance when compared with other <span class="hlt">tensor</span>-based unsupervised learning methods.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=2700196','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=2700196"><span>TIMER: <span class="hlt">Tensor</span> Image Morphing for Elastic Registration</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Yap, Pew-Thian; Wu, Guorong; Zhu, Hongtu; Lin, Weili; Shen, Dinggang</p> <p>2009-01-01</p> <p>We propose a novel diffusion <span class="hlt">tensor</span> imaging (DTI) registration algorithm, called <span class="hlt">Tensor</span> Image Morphing for Elastic Registration (TIMER), which leverages the hierarchical guidance of regional distributions and local boundaries, both extracted directly from the <span class="hlt">tensors</span>. Currently available DTI registration methods generally extract <span class="hlt">tensor</span> scalar features from each <span class="hlt">tensor</span> to construct scalar maps. Subsequently, regional integration and other operations such as edge detection are performed to extract more features to guide the registration. However, there are two major limitations with these approaches. First, the computed regional features might not reflect the actual regional <span class="hlt">tensor</span> distributions. Second, by the same token, gradient maps calculated from the <span class="hlt">tensor</span>-derived scalar feature maps might not represent the actual tissue <span class="hlt">tensor</span> boundaries.
To overcome these limitations, we propose a new approach which extracts regional and edge information directly from a <span class="hlt">tensor</span> neighborhood. Regional <span class="hlt">tensor</span> distribution information, such as mean and variance, is computed in a multiscale fashion directly from the <span class="hlt">tensors</span> by taking into account voxel neighborhoods of different sizes, and hence capturing <span class="hlt">tensor</span> information at different scales, which in turn can be employed to hierarchically guide the registration. Such a multiscale scheme can help alleviate the problem of local minima and is also more robust to noise since one can better determine the statistical properties of each voxel by taking into account the properties of its surroundings. Also incorporated in our method is edge information extracted directly from the <span class="hlt">tensors</span>, which is crucial to facilitate registration of tissue boundaries. Experiments involving real subjects, simulated subjects, fiber tracking, and atrophy detection indicate that TIMER performs better than the other methods in the comparison (Yang et al., 2008a; Zhang et al., 2006).
PMID:19398022</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/19398022','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/19398022"><span>TIMER: <span class="hlt">tensor</span> image morphing for elastic registration.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Yap, Pew-Thian; Wu, Guorong; Zhu, Hongtu; Lin, Weili; Shen, Dinggang</p> <p>2009-08-15</p> <p>We propose a novel diffusion <span class="hlt">tensor</span> imaging (DTI) registration algorithm, called <span class="hlt">Tensor</span> Image Morphing for Elastic Registration (TIMER), which leverages the hierarchical guidance of regional distributions and local boundaries, both extracted directly from the <span class="hlt">tensors</span>. Currently available DTI registration methods generally extract <span class="hlt">tensor</span> scalar features from each <span class="hlt">tensor</span> to construct scalar maps. Subsequently, regional integration and other operations such as edge detection are performed to extract more features to guide the registration. However, there are two major limitations with these approaches. First, the computed regional features might not reflect the actual regional <span class="hlt">tensor</span> distributions. Second, by the same token, gradient maps calculated from the <span class="hlt">tensor</span>-derived scalar feature maps might not represent the actual tissue <span class="hlt">tensor</span> boundaries. To overcome these limitations, we propose a new approach which extracts regional and edge information directly from a <span class="hlt">tensor</span> neighborhood. 
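TIMER's regional statistics are computed from tensor neighborhoods at several scales. As a simplified stand-in, the sketch below computes per-voxel neighborhood mean and variance of a scalar feature map at two window sizes; the 2-D map and the window sizes are illustrative assumptions, not the paper's actual DTI data:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def neighborhood_stats(feature_map, window):
    """Per-voxel mean and variance over a window x ... x window
    neighborhood: one scale of a multiscale regional descriptor."""
    win = sliding_window_view(feature_map, (window,) * feature_map.ndim)
    axes = tuple(range(feature_map.ndim, 2 * feature_map.ndim))
    return win.mean(axis=axes), win.var(axis=axes)

# Illustrative 2-D scalar map standing in for a tensor-derived feature.
fmap = np.arange(64, dtype=float).reshape(8, 8)
scales = {w: neighborhood_stats(fmap, w) for w in (3, 5)}  # small multiscale pyramid
```

Larger windows smooth over more of each voxel's surroundings, which is what makes the coarser scales more robust to noise in the guidance maps.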
Regional <span class="hlt">tensor</span> distribution information, such as mean and variance, is computed in a multiscale fashion directly from the <span class="hlt">tensors</span> by taking into account voxel neighborhoods of different sizes, and hence capturing <span class="hlt">tensor</span> information at different scales, which in turn can be employed to hierarchically guide the registration. Such a multiscale scheme can help alleviate the problem of local minima and is also more robust to noise since one can better determine the statistical properties of each voxel by taking into account the properties of its surroundings. Also incorporated in our method is edge information extracted directly from the <span class="hlt">tensors</span>, which is crucial to facilitate registration of tissue boundaries. Experiments involving real subjects, simulated subjects, fiber tracking, and atrophy detection indicate that TIMER performs better than the other methods (Yang et al., 2008; Zhang et al., 2006).</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.osti.gov/scitech/biblio/21250371','SCIGOV-STC'); return false;" href="https://www.osti.gov/scitech/biblio/21250371"><span>Gravitoelectromagnetic analogy based on tidal <span class="hlt">tensors</span></span></a></p> <p><a target="_blank" href="http://www.osti.gov/scitech">SciTech Connect</a></p> <p>Costa, L. Filipe O.; Herdeiro, Carlos A. R.</p> <p>2008-07-15</p> <p>We propose a new approach to a physical analogy between general relativity and electromagnetism, based on tidal <span class="hlt">tensors</span> of both theories. Using this approach we write a covariant form for the gravitational analogues of the Maxwell equations, which makes transparent both the similarities and key differences between the two interactions. The following realizations of the analogy are given.
The first one matches linearized gravitational tidal <span class="hlt">tensors</span> to exact electromagnetic tidal <span class="hlt">tensors</span> in Minkowski spacetime. The second one matches exact magnetic gravitational tidal <span class="hlt">tensors</span> for ultrastationary metrics to exact magnetic tidal <span class="hlt">tensors</span> of electromagnetism in curved spaces. In the third we show that our approach leads to a two-step exact derivation of Papapetrou's equation describing the force exerted on a spinning test particle. Analogous scalar invariants built from tidal <span class="hlt">tensors</span> of both theories are also discussed.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016PhRvA..94d2324B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016PhRvA..94d2324B"><span><span class="hlt">Tensor</span> eigenvalues and entanglement of symmetric states</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Bohnet-Waldraff, F.; Braun, D.; Giraud, O.</p> <p>2016-10-01</p> <p><span class="hlt">Tensor</span> eigenvalues and eigenvectors have been introduced in the recent mathematical literature as a generalization of the usual matrix eigenvalues and eigenvectors. We apply this formalism to a <span class="hlt">tensor</span> that describes a multipartite symmetric state or a spin state, and we investigate to what extent the corresponding <span class="hlt">tensor</span> eigenvalues contain information about the multipartite entanglement (or, equivalently, the quantumness) of the state. This extends previous results connecting entanglement to spectral properties related to the state. We show that if the smallest <span class="hlt">tensor</span> eigenvalue is negative, the state is detected as entangled. 
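A standard way to compute a Z-eigenpair of a symmetric tensor, as used in eigenvalue-based entanglement tests like the one above, is the shifted symmetric higher-order power method (SS-HOPM). The sketch below is a minimal order-3 version; the shift, iteration count, and starting vector are illustrative assumptions, and it returns one eigenpair, not necessarily the smallest one that the entanglement criterion needs:

```python
import numpy as np

def ss_hopm(T, x0, alpha=1.0, iters=200):
    """Shifted symmetric higher-order power method for an order-3 symmetric
    tensor T: seeks a Z-eigenpair (lam, x) with T(x, x) = lam * x, |x| = 1."""
    x = np.asarray(x0, dtype=float)
    x /= np.linalg.norm(x)
    for _ in range(iters):
        y = np.einsum('ijk,j,k->i', T, x, x) + alpha * x  # shifted power step
        x = y / np.linalg.norm(y)
    lam = np.einsum('ijk,i,j,k->', T, x, x, x)  # generalized Rayleigh quotient
    return lam, x
```

For the rank-1 symmetric tensor T = 2·v⊗v⊗v, starting with positive overlap with v, the iteration recovers the eigenpair (2, v).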
While for spin-1 states the positivity of the smallest <span class="hlt">tensor</span> eigenvalue is equivalent to separability, we show that for higher values of the angular momentum there is a correlation between entanglement and the value of the smallest <span class="hlt">tensor</span> eigenvalue.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2010SPIE.7695E..0JM','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2010SPIE.7695E..0JM"><span>Denoising of hyperspectral images by best multilinear rank approximation of a <span class="hlt">tensor</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Marin-McGee, Maider; Velez-Reyes, Miguel</p> <p>2010-04-01</p> <p>The hyperspectral image cube can be modeled as a three-dimensional array. <span class="hlt">Tensors</span> and the tools of multilinear algebra provide a natural framework to deal with this type of mathematical object. Singular value <span class="hlt">decomposition</span> (SVD) and its variants have been used by the HSI community for denoising of hyperspectral imagery. Denoising of HSI using SVD is achieved by finding a low rank approximation of a matrix representation of the hyperspectral image cube. This paper investigates similar concepts in hyperspectral denoising by using a low multilinear rank approximation of the given HSI <span class="hlt">tensor</span> representation. The Best Multilinear Rank Approximation (BMRA) problem for a given <span class="hlt">tensor</span> A is to find a lower multilinear rank <span class="hlt">tensor</span> B that is as close as possible to A in the Frobenius norm. Different numerical methods to compute the BMRA using the Alternating Least Squares (ALS) method and Newton's method over a product of Grassmann manifolds are presented.
The effect of the multilinear rank, the numerical method used to compute the BMRA, and different parameter choices in those methods are studied. Results show that comparable quality is achievable with both ALS and Newton-type methods. Also, classification results using the filtered <span class="hlt">tensor</span> are better than those obtained with denoising using either SVD or MNF.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2014EJASP2014...58H','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2014EJASP2014...58H"><span>Modeling individual HRTF <span class="hlt">tensor</span> using high-order partial least squares</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Huang, Qinghua; Li, Lin</p> <p>2014-12-01</p> <p>A <span class="hlt">tensor</span> is used to describe head-related transfer functions (HRTFs) depending on frequencies, sound directions, and anthropometric parameters. It keeps the multi-dimensional structure of measured HRTFs. To construct a multi-linear HRTF personalization model, an individual core <span class="hlt">tensor</span> is extracted from the original HRTFs using high-order singular value <span class="hlt">decomposition</span> (HOSVD). The individual core <span class="hlt">tensor</span> in lower-dimensional space acts as the output of the multi-linear model. Some key anthropometric parameters, used as the inputs of the model, are selected by Laplacian scores and correlation analyses between all the measured parameters and the individual core <span class="hlt">tensor</span>. Then, the multi-linear regression model is constructed by high-order partial least squares (HOPLS), aiming to seek a joint subspace approximation for both the selected parameters and the individual core <span class="hlt">tensor</span>.
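The HOSVD step described above, which compresses a measured tensor into a smaller core, can be sketched generically as follows; the shapes and target multilinear ranks in the usage below are illustrative assumptions, not the HRTF data dimensions:

```python
import numpy as np

def mode_multiply(T, M, mode):
    """Multiply tensor T by matrix M along the given mode."""
    return np.moveaxis(np.tensordot(M, np.moveaxis(T, mode, 0), axes=1), 0, mode)

def hosvd(A, ranks):
    """Truncated higher-order SVD: the leading left singular vectors of each
    mode unfolding give the factors; projecting A onto them gives the core."""
    factors = []
    for mode, r in enumerate(ranks):
        unfolding = np.moveaxis(A, mode, 0).reshape(A.shape[mode], -1)
        U, _, _ = np.linalg.svd(unfolding, full_matrices=False)
        factors.append(U[:, :r])
    core = A
    for mode, U in enumerate(factors):
        core = mode_multiply(core, U.T, mode)
    return core, factors
```

Reapplying the factor matrices to the core reconstructs the tensor; when the chosen ranks match the true multilinear rank, the reconstruction is exact, and the low-dimensional core can serve as a compact regression target as in the HRTF model above.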
The numbers of latent variables and loadings are used to control the complexity of the model and to prevent overfitting. Compared with the partial least squares regression (PLSR) method, objective simulations demonstrate better performance in predicting individual HRTFs, especially for sound directions ipsilateral to the concerned ear. The subjective listening tests show that the predicted individual HRTFs approximate the measured HRTFs well for sound localization.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/24283465','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/24283465"><span>Four-component relativistic density functional theory calculations of NMR shielding <span class="hlt">tensors</span> for paramagnetic systems.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Komorovsky, Stanislav; Repisky, Michal; Ruud, Kenneth; Malkina, Olga L; Malkin, Vladimir G</p> <p>2013-12-27</p> <p>A four-component relativistic method for the calculation of NMR shielding constants of paramagnetic doublet systems has been developed and implemented in the ReSpect program package. The method uses a Kramers unrestricted noncollinear formulation of density functional theory (DFT), providing the best DFT framework for property calculations of open-shell species. The evaluation of paramagnetic nuclear magnetic resonance (pNMR) <span class="hlt">tensors</span> reduces to the calculation of electronic g <span class="hlt">tensors</span>, hyperfine coupling <span class="hlt">tensors</span>, and NMR shielding <span class="hlt">tensors</span>. For all properties, modern four-component formulations were adopted. The use of both restricted kinetically and magnetically balanced basis sets along with gauge-including atomic orbitals ensures rapid basis-set convergence.
These approaches are exact in the framework of the Dirac-Coulomb Hamiltonian, thus providing useful reference data for more approximate methods. Benchmark calculations on Ru(III) complexes demonstrate good performance of the method in reproducing experimental data and also its applicability to chemically relevant medium-sized systems. <span class="hlt">Decomposition</span> of the temperature-dependent part of the pNMR <span class="hlt">tensor</span> into the traditional contact and pseudocontact terms is proposed.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/1988IJTP...27.1083C','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/1988IJTP...27.1083C"><span>Curvature <span class="hlt">tensors</span> and unified field equations on SEXn</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Chung, Kyung Tae; Lee, Il Young</p> <p>1988-09-01</p> <p>We study the curvature <span class="hlt">tensors</span> and field equations in the n-dimensional SE manifold SEXn. We obtain several basic properties of the vectors Sλ and Uλ and then of the SE curvature <span class="hlt">tensor</span> and its contractions, such as a generalized Ricci identity, a generalized Bianchi identity, and two variations of the Bianchi identity satisfied by the SE Einstein <span class="hlt">tensor</span>.
Finally, a system of field equations is discussed in SEXn and one of its particular solutions is constructed and displayed.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2013JMP....54h2303G','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2013JMP....54h2303G"><span>Some classes of renormalizable <span class="hlt">tensor</span> models</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Geloun, Joseph Ben; Livine, Etera R.</p> <p>2013-08-01</p> <p>We identify new families of renormalizable <span class="hlt">tensor</span> models from anterior renormalizable <span class="hlt">tensor</span> models via a mapping capable of reducing or increasing the rank of the theory without having an effect on the renormalizability property. Mainly, a version of the rank 3 <span class="hlt">tensor</span> model as defined by Ben Geloun and Samary [Ann. Henri Poincare 14, 1599 (2013); e-print arXiv:1201.0176 [hep-th</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=20000012445&hterms=topology&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D40%26Ntt%3Dtopology','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=20000012445&hterms=topology&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D40%26Ntt%3Dtopology"><span>The Topology of Symmetric <span class="hlt">Tensor</span> Fields</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Levin, Yingmei; Batra, Rajesh; Hesselink, Lambertus; Levy, Yuval</p> <p>1997-01-01</p> <p>Combinatorial topology, also known as "rubber sheet geometry", has extensive applications in geometry and analysis, many of which result from connections with the theory of differential equations. A link between topology and differential equations is vector fields. 
Recent developments in scientific visualization have shown that vector fields also play an important role in the analysis of second-order <span class="hlt">tensor</span> fields. A second-order <span class="hlt">tensor</span> field can be transformed into its eigensystem, namely, eigenvalues and their associated eigenvectors, without loss of information content. Eigenvectors behave in a similar fashion to ordinary vectors, with even simpler topological structures due to their sign indeterminacy. Incorporating information about eigenvectors and eigenvalues in a display technique known as hyperstreamlines reveals the structure of a <span class="hlt">tensor</span> field. To simplify an often complex <span class="hlt">tensor</span> field and to capture its important features, the <span class="hlt">tensor</span> is decomposed into an isotropic <span class="hlt">tensor</span> and a deviator. A <span class="hlt">tensor</span> field and its deviator share the same set of eigenvectors, and therefore they have a similar topological structure. The deviator determines the properties of a <span class="hlt">tensor</span> field, while the isotropic part provides a uniform bias. Degenerate points are basic constituents of <span class="hlt">tensor</span> fields. In 2-D <span class="hlt">tensor</span> fields, there are only two types of degenerate points, while in 3-D the degenerate points can be characterized in a Q'-R' plane. Compressible and incompressible flows share similar topological features due to the similarity of their deviators. In the case of the deformation <span class="hlt">tensor</span>, the singularities of its deviator represent the area of vortex core in the field.
In turbulent flows, the similarities and differences of the topology of the deformation and the Reynolds stress <span class="hlt">tensors</span> reveal that the basic eddy-viscosity assumptions retain their validity in turbulence modeling under certain conditions.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2006MPLA...21.2599D','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2006MPLA...21.2599D"><span>Magnetic Branes from Generalized 't Hooft <span class="hlt">Tensor</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Duan, Yi-Shi; Wu, Shao-Feng</p> <p></p> <p>The 't Hooft-Polyakov magnetic monopole regularly realizes the Dirac magnetic monopole in terms of a rank-two <span class="hlt">tensor</span>, the so-called 't Hooft <span class="hlt">tensor</span>, in 3D space. Based on the Chern kernel method, we propose arbitrary-rank 't Hooft <span class="hlt">tensors</span>, which universally determine the quantized low-energy boundaries of generalized Georgi-Glashow models under asymptotic conditions. Furthermore, the dual magnetic branes theory is built up in terms of ϕ-mapping theory.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017EGUGA..1910616A','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017EGUGA..1910616A"><span>Estimation of full moment <span class="hlt">tensors</span>, including uncertainties, for earthquakes, volcanic events, and nuclear explosions</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Alvizuri, Celso; Silwal, Vipul; Krischer, Lion; Tape, Carl</p> <p>2017-04-01</p> <p>A seismic moment <span class="hlt">tensor</span> is a 3 × 3 symmetric matrix that provides a compact representation of seismic events within Earth's crust.
We develop an algorithm to estimate moment <span class="hlt">tensors</span> and their uncertainties from observed seismic data. For a given event, the algorithm performs a grid search over the six-dimensional space of moment <span class="hlt">tensors</span> by generating synthetic waveforms at each grid point and then evaluating a misfit function between the observed and synthetic waveforms. 'The' moment <span class="hlt">tensor</span> M for the event is then the moment <span class="hlt">tensor</span> with minimum misfit. To describe the uncertainty associated with M, we first convert the misfit function to a probability function. The uncertainty, or rather the confidence, is then given by the 'confidence curve' P(V ), where P(V ) is the probability that the true moment <span class="hlt">tensor</span> for the event lies within the neighborhood of M that has fractional volume V . The area under the confidence curve provides a single, abbreviated 'confidence parameter' for M. We apply the method to data from events in different regions and tectonic settings: small (Mw < 2.5) events at Uturuncu volcano in Bolivia, moderate (Mw > 4) earthquakes in the southern Alaska subduction zone, and natural and man-made events at the Nevada Test Site. Moment <span class="hlt">tensor</span> uncertainties allow us to better discriminate among moment <span class="hlt">tensor</span> source types and to assign physical <span class="hlt">processes</span> to the events.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016AGUFM.S31A2701A','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016AGUFM.S31A2701A"><span>Estimation of full moment <span class="hlt">tensors</span>, including uncertainties,for earthquakes, volcanic events, and nuclear tests</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Alvizuri, C. 
R.; Silwal, V.; Krischer, L.; Tape, C.</p> <p>2016-12-01</p> <p>A seismic moment <span class="hlt">tensor</span> is a 3 X 3 symmetric matrix that provides a compact representation of seismic events within Earth's crust. We develop an algorithm to estimate moment <span class="hlt">tensors</span> and their uncertainties from observed seismic data. For a given event, the algorithm performs a grid search over the six-dimensional space of moment <span class="hlt">tensors</span> by generating synthetic waveforms at each grid point and then evaluating a misfit function between the observed and synthetic waveforms. 'The' moment <span class="hlt">tensor</span> M for the event is then the moment <span class="hlt">tensor</span> with minimum misfit. To describe the uncertainty associated with M, we first convert the misfit function to a probability function. The uncertainty, or rather the confidence, is then given by the 'confidence curve' P(V), where P(V) is the probability that the true moment <span class="hlt">tensor</span> for the event lies within the neighborhood of M that has fractional volume V. The area under the confidence curve provides a single, abbreviated 'confidence parameter' for M. We apply the method to data from events in different regions and tectonic settings: small (Mw < 2.5) events at Uturuncu volcano in Bolivia, moderate (Mw > 4) earthquakes in the southern Alaska subduction zone, and natural and man-made events at the Nevada Test Site. 
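The misfit-to-probability conversion and the confidence curve P(V) described above can be sketched for a generic discretized model grid; the exponential likelihood map and the equal-volume grid cells below are simplifying assumptions for illustration, not the authors' exact choices.

```python
import numpy as np

def confidence_curve(misfit):
    """Confidence curve P(V) for a grid search: P(V) is the probability that
    the true solution lies in the neighborhood of the minimum-misfit grid
    point occupying fractional volume V.  Grid cells are assumed to have
    equal volume, and the neighborhood is grown by adding cells in order of
    increasing misfit."""
    misfit = np.asarray(misfit, dtype=float)
    p = np.exp(-(misfit - misfit.min()))   # misfit -> unnormalized probability
    p /= p.sum()
    order = np.argsort(misfit)             # best-fitting cells first
    P = np.cumsum(p[order])                # probability inside the neighborhood
    V = np.arange(1, p.size + 1) / p.size  # fractional volume of the neighborhood
    return V, P

def confidence_parameter(misfit):
    """Area under the confidence curve (trapezoidal rule): near 1 for a
    sharply peaked solution, near 0.5 for a flat, uninformative misfit."""
    V, P = confidence_curve(misfit)
    return float(np.sum(0.5 * (P[1:] + P[:-1]) * np.diff(V)))
```

A flat misfit surface gives P(V) ≈ V and an area near 0.5, while a misfit with one deep minimum concentrates probability in a small fractional volume and pushes the area toward 1, which is the discrimination behavior the abstracts describe.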
Moment <span class="hlt">tensor</span> uncertainties allow us to better discriminate among moment <span class="hlt">tensor</span> source types and to assign physical <span class="hlt">processes</span> to the events.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017JSMTE..09.3102A','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017JSMTE..09.3102A"><span>The <span class="hlt">tensor</span> network theory library</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Al-Assam, S.; Clark, S. R.; Jaksch, D.</p> <p>2017-09-01</p> <p>In this technical paper we introduce the <span class="hlt">tensor</span> network theory (TNT) library—an open-source software project aimed at providing a platform for rapidly developing robust, easy to use and highly optimised code for TNT calculations. The objectives of this paper are (i) to give an overview of the structure of the TNT library, and (ii) to help scientists decide whether to use the TNT library in their research. We show how to employ the TNT routines by giving examples of ground-state and dynamical calculations of a one-dimensional bosonic lattice system.
We also discuss different options for gaining access to the software available at www.tensornetworktheory.org.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017PhRvL.118k0504Y','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017PhRvL.118k0504Y"><span>Loop Optimization for <span class="hlt">Tensor</span> Network Renormalization</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Yang, Shuo; Gu, Zheng-Cheng; Wen, Xiao-Gang</p> <p>2017-03-01</p> <p>We introduce a <span class="hlt">tensor</span> renormalization group scheme for coarse graining a two-dimensional <span class="hlt">tensor</span> network that can be successfully applied to both classical and quantum systems on and off criticality. The key innovation in our scheme is to deform a 2D <span class="hlt">tensor</span> network into small loops and then optimize the <span class="hlt">tensors</span> on each loop. In this way, we remove short-range entanglement at each iteration step and significantly improve the accuracy and stability of the renormalization flow. We demonstrate our algorithm in the classical Ising model and a frustrated 2D quantum model.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28368642','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28368642"><span>Loop Optimization for <span class="hlt">Tensor</span> Network Renormalization.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Yang, Shuo; Gu, Zheng-Cheng; Wen, Xiao-Gang</p> <p>2017-03-17</p> <p>We introduce a <span class="hlt">tensor</span> renormalization group scheme for coarse graining a two-dimensional <span class="hlt">tensor</span> network that can be successfully applied to both classical and quantum systems on and off criticality. 
The key innovation in our scheme is to deform a 2D <span class="hlt">tensor</span> network into small loops and then optimize the <span class="hlt">tensors</span> on each loop. In this way, we remove short-range entanglement at each iteration step and significantly improve the accuracy and stability of the renormalization flow. We demonstrate our algorithm in the classical Ising model and a frustrated 2D quantum model.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2014RJPCA..88.2308Q','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2014RJPCA..88.2308Q"><span>Thermal <span class="hlt">decomposition</span> and non-isothermal <span class="hlt">decomposition</span> kinetics of carbamazepine</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Qi, Zhen-li; Zhang, Duan-feng; Chen, Fei-xiong; Miao, Jun-yan; Ren, Bao-zeng</p> <p>2014-12-01</p> <p>The thermal stability and kinetics of <span class="hlt">decomposition</span> of carbamazepine were studied under non-isothermal conditions by thermogravimetry (TGA) and differential scanning calorimetry (DSC) at three heating rates. Notably, a transformation of crystal forms occurs at 153.75°C. The activation energy of this thermal <span class="hlt">decomposition</span> <span class="hlt">process</span> was calculated from the analysis of TG curves by the Flynn-Wall-Ozawa, Doyle, distributed activation energy model, Šatava-Šesták and Kissinger methods. The thermal <span class="hlt">decomposition</span> <span class="hlt">process</span> had two distinct stages. For the first stage, E and log A [s-1] were determined to be 42.51 kJ mol-1 and 3.45, respectively. In the second stage, E and log A [s-1] were 47.75 kJ mol-1 and 3.80.
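Of the kinetic analyses listed above, the Kissinger method is the simplest to sketch: it fits ln(β/Tp²) against 1/Tp across the heating rates β, and the slope of that line is -E/R. The heating rates and peak temperatures in the check below are synthetic values chosen only to illustrate the fit, not the paper's data.

```python
import numpy as np

R = 8.314  # gas constant, J mol^-1 K^-1

def kissinger(betas, peak_temps):
    """Activation energy E (J/mol) and pre-exponential factor A (1/s) from
    the Kissinger relation ln(beta/Tp^2) = ln(A*R/E) - E/(R*Tp), fitted over
    heating rates beta (K/s) and DSC/TG peak temperatures Tp (K)."""
    Tp = np.asarray(peak_temps, dtype=float)
    y = np.log(np.asarray(betas, dtype=float) / Tp ** 2)
    slope, intercept = np.polyfit(1.0 / Tp, y, 1)
    E = -slope * R
    A = np.exp(intercept) * E / R
    return E, A
```

On data generated exactly from the Kissinger relation the fit recovers E and A to numerical precision; real TG/DSC data would scatter around the line, and the residuals indicate how well the single-step assumption holds.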
The mechanism of thermal <span class="hlt">decomposition</span> was Avrami-Erofeev (reaction order n = 1/3), with integral form G(α) = [-ln(1 - α)]^(1/3) (α ≈ 0.1-0.8) in the first stage, and Avrami-Erofeev (reaction order n = 1) with integral form G(α) = -ln(1 - α) (α ≈ 0.9-0.99) in the second stage. Moreover, the ΔH‡, ΔS‡, and ΔG‡ values were 37.84 kJ mol-1, -192.41 J mol-1 K-1, and 146.32 kJ mol-1 for the first stage and 42.68 kJ mol-1, -186.41 J mol-1 K-1, and 156.26 kJ mol-1 for the second stage, respectively.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_22 --> <div id="page_23" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="441"> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016JIEIB..97..227M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016JIEIB..97..227M"><span>Hardware Implementation of Singular Value <span
class="hlt">Decomposition</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Majumder, Swanirbhar; Shaw, Anil Kumar; Sarkar, Subir Kumar</p> <p>2016-06-01</p> <p>Singular value <span class="hlt">decomposition</span> (SVD) is a useful <span class="hlt">decomposition</span> technique that plays an important role in various engineering fields such as image compression, watermarking, signal <span class="hlt">processing</span>, and numerous others. Unlike the most popular transforms, SVD does not involve a convolution operation, which makes it more suitable for hardware implementation. This paper reviews the various methods of hardware implementation for SVD computation and studies their time complexity and hardware complexity.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/25764243','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/25764243"><span>Diffusion <span class="hlt">tensor</span> imaging of peripheral nerves.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Naraghi, Ali M; Awdeh, Haitham; Wadhwa, Vibhor; Andreisek, Gustav; Chhabra, Avneesh</p> <p>2015-04-01</p> <p>Diffusion <span class="hlt">tensor</span> imaging (DTI) is a powerful MR imaging technique that can be used to probe the microstructural environment of highly anisotropic tissues such as peripheral nerves. DTI has been used predominantly in the central nervous system, and its application in the peripheral nervous system does pose some challenges related to imaging artifacts, the small caliber of peripheral nerves, and low water proton density.
However, advances in MRI hardware and software have made it possible to use the technique in the peripheral nervous system and to obtain functional data relating to the effect of pathologic <span class="hlt">processes</span> on peripheral nerves. This article reviews the imaging principles behind DTI and examines the literature regarding its application in assessing peripheral nerves.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/20593175','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/20593175"><span>Diffusion <span class="hlt">tensor</span> imaging of peripheral nerves.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Jambawalikar, Sachin; Baum, Jeremy; Button, Terry; Li, Haifang; Geronimo, Veronica; Gould, Elaine S</p> <p>2010-11-01</p> <p>Magnetic resonance diffusion <span class="hlt">tensor</span> imaging (DTI) allows the directional dependence of water diffusion to be studied. Analysis of the resulting image data allows for the determination of fractional anisotropy (FA) and apparent diffusion coefficient (ADC), as well as three-dimensional visualization of the fiber tract (tractography). We visualized the ulnar nerve of ten healthy volunteers with DTI. We found FA to be 0.752 ± 0.067 and the ADC to be 0.96 ± 0.13 × 10^-3 mm^2/s.
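FA and mean diffusivity (the tensor-derived counterpart of the ADC values reported above) follow directly from the eigenvalues of the fitted 3 × 3 diffusion tensor; a minimal sketch of the standard formulas:

```python
import numpy as np

def fa_and_md(D):
    """Fractional anisotropy and mean diffusivity of a 3x3 diffusion tensor.
    FA = sqrt(3/2) * ||lambda - MD|| / ||lambda||, MD = mean eigenvalue."""
    lam = np.linalg.eigvalsh(D)   # principal diffusivities
    md = lam.mean()
    fa = np.sqrt(1.5 * ((lam - md) ** 2).sum() / (lam ** 2).sum())
    return fa, md
```

An isotropic tensor gives FA = 0, while a tensor with a single nonzero eigenvalue (diffusion confined to one direction, the idealized fiber case) gives FA = 1; measured nerve values such as the FA ≈ 0.75 above fall between these extremes.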
A nuts-and-bolts description of the physical aspects of DTI is provided as an educational <span class="hlt">process</span> for readers.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.osti.gov/scitech/biblio/22489544','SCIGOV-STC'); return false;" href="https://www.osti.gov/scitech/biblio/22489544"><span>Variance <span class="hlt">decomposition</span> in stochastic simulators</span></a></p> <p><a target="_blank" href="http://www.osti.gov/scitech">SciTech Connect</a></p> <p>Le Maître, O. P.; Knio, O. M.; Moraes, A.</p> <p>2015-06-28</p> <p>This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson <span class="hlt">processes</span>. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding <span class="hlt">decomposition</span>, the reformulation enables us to perform an orthogonal <span class="hlt">decomposition</span> of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. 
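The Sobol-Hoeffding decomposition underlying this variance analysis can be illustrated with a generic pick-freeze estimator of a first-order sensitivity index; this is a textbook construction on a toy linear model, not the authors' reformulation of the reaction-channel Poisson processes.

```python
import numpy as np

def first_order_sobol(f, n_inputs, i, n_samples=200_000, seed=0):
    """Pick-freeze estimate of the first-order Sobol index of input i:
    S_i = Cov(f(X), f(X')) / Var(f(X)), where X' is an independent sample
    that shares only input i with X."""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((n_samples, n_inputs))
    B = rng.standard_normal((n_samples, n_inputs))
    B[:, i] = A[:, i]                      # freeze input i across both samples
    yA, yB = f(A), f(B)
    cov = np.mean(yA * yB) - np.mean(yA) * np.mean(yB)
    return cov / np.var(yA)
```

For the additive model f(x) = 2 x0 + x1 with independent standard normal inputs, the exact indices are S0 = 4/5 and S1 = 1/5, so the estimator can be checked against closed-form values before being applied to a simulator where no closed form exists.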
Implementation of the algorithms is illustrated through simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017JSV...392...56D','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017JSV...392...56D"><span>Dominant modal <span class="hlt">decomposition</span> method</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Dombovari, Zoltan</p> <p>2017-03-01</p> <p>The paper deals with the automatic <span class="hlt">decomposition</span> of experimental frequency response functions (FRF's) of mechanical structures. The <span class="hlt">decomposition</span> of FRF's is based on the Green function representation of free vibratory systems. After the determination of the impulse dynamic subspace, the system matrix is formulated and the poles are calculated directly. By means of the corresponding eigenvectors, the contribution of each element of the impulse dynamic subspace is determined and the sufficient <span class="hlt">decomposition</span> of the corresponding FRF is carried out. With the presented dominant modal <span class="hlt">decomposition</span> (DMD) method, the mode shapes, the modal participation vectors and the modal scaling factors are identified using the decomposed FRF's.
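Once the poles have been extracted from the impulse dynamic subspace, the contribution of each mode to an FRF reduces to a linear least-squares problem for the modal residues; the sketch below illustrates that final step for a generic pole-residue model (the pole and residue values in the check are synthetic, not data from the paper).

```python
import numpy as np

def modal_residues(omega, H, poles):
    """Least-squares residues r_k of the modal FRF model
    H(w) = sum_k [ r_k/(jw - p_k) + conj(r_k)/(jw - conj(p_k)) ],
    given measured H at frequencies omega and known complex poles."""
    s = 1j * np.asarray(omega, dtype=float)
    cols = []
    for p in poles:
        cols.append(1.0 / (s - p) + 1.0 / (s - np.conj(p)))  # multiplies Re(r_k)
        cols.append(1j / (s - p) - 1j / (s - np.conj(p)))    # multiplies Im(r_k)
    Phi = np.column_stack(cols)
    A = np.vstack([Phi.real, Phi.imag])      # solve in real arithmetic
    b = np.concatenate([H.real, H.imag])
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return [complex(x[2 * k], x[2 * k + 1]) for k in range(len(poles))]
```

With the residues in hand, each mode's share of the total FRF can be synthesized separately, which is the per-element contribution the DMD method evaluates when deciding which modes are dominant.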
An analytical example is presented along with experimental case studies taken from the machine tool industry.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5096699','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5096699"><span>Using Matrix and <span class="hlt">Tensor</span> Factorizations for the Single-Trial Analysis of Population Spike Trains</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Onken, Arno; Liu, Jian K.; Karunasekara, P. P. Chamanthi R.; Delis, Ioannis; Gollisch, Tim; Panzeri, Stefano</p> <p>2016-01-01</p> <p>Advances in neuronal recording techniques are leading to ever larger numbers of simultaneously monitored neurons. This poses the important analytical challenge of how to capture compactly all sensory information that neural population codes carry in their spatial dimension (differences in stimulus tuning across neurons at different locations), in their temporal dimension (temporal neural response variations), or in their combination (temporally coordinated neural population firing). Here we investigate the utility of <span class="hlt">tensor</span> factorizations of population spike trains along space and time. These factorizations decompose a dataset of single-trial population spike trains into spatial firing patterns (combinations of neurons firing together), temporal firing patterns (temporal activation of these groups of neurons) and trial-dependent activation coefficients (strength of recruitment of such neural patterns on each trial). We validated various factorization methods on simulated data and on populations of ganglion cells simultaneously recorded in the salamander retina.
We found that single-trial <span class="hlt">tensor</span> space-by-time <span class="hlt">decompositions</span> provided low-dimensional data-robust representations of spike trains that capture efficiently both their spatial and temporal information about sensory stimuli. <span class="hlt">Tensor</span> <span class="hlt">decompositions</span> with orthogonality constraints were the most efficient in extracting sensory information, whereas non-negative <span class="hlt">tensor</span> <span class="hlt">decompositions</span> worked well even on non-independent and overlapping spike patterns, and retrieved informative firing patterns expressed by the same population in response to novel stimuli. Our method showed that populations of retinal ganglion cells carried information in their spike timing on the ten-milliseconds-scale about spatial details of natural images. This information could not be recovered from the spike counts of these cells. First-spike latencies carried the majority of information provided by the whole spike train about fine-scale image</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27814363','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27814363"><span>Using Matrix and <span class="hlt">Tensor</span> Factorizations for the Single-Trial Analysis of Population Spike Trains.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Onken, Arno; Liu, Jian K; Karunasekara, P P Chamanthi R; Delis, Ioannis; Gollisch, Tim; Panzeri, Stefano</p> <p>2016-11-01</p> <p>Advances in neuronal recording techniques are leading to ever larger numbers of simultaneously monitored neurons. 
This poses the important analytical challenge of how to capture compactly all sensory information that neural population codes carry in their spatial dimension (differences in stimulus tuning across neurons at different locations), in their temporal dimension (temporal neural response variations), or in their combination (temporally coordinated neural population firing). Here we investigate the utility of <span class="hlt">tensor</span> factorizations of population spike trains along space and time. These factorizations decompose a dataset of single-trial population spike trains into spatial firing patterns (combinations of neurons firing together), temporal firing patterns (temporal activation of these groups of neurons) and trial-dependent activation coefficients (strength of recruitment of such neural patterns on each trial). We validated various factorization methods on simulated data and on populations of ganglion cells simultaneously recorded in the salamander retina. We found that single-trial <span class="hlt">tensor</span> space-by-time <span class="hlt">decompositions</span> provided low-dimensional data-robust representations of spike trains that capture efficiently both their spatial and temporal information about sensory stimuli. <span class="hlt">Tensor</span> <span class="hlt">decompositions</span> with orthogonality constraints were the most efficient in extracting sensory information, whereas non-negative <span class="hlt">tensor</span> <span class="hlt">decompositions</span> worked well even on non-independent and overlapping spike patterns, and retrieved informative firing patterns expressed by the same population in response to novel stimuli. Our method showed that populations of retinal ganglion cells carried information in their spike timing on the ten-milliseconds-scale about spatial details of natural images. This information could not be recovered from the spike counts of these cells. 
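A non-negative CP (PARAFAC) factorization of a time × neuron × trial spike-count array, one of the decomposition families compared in these studies, can be sketched with standard multiplicative updates; this is a generic implementation with arbitrary array sizes, not the authors' code.

```python
import numpy as np

def khatri_rao(B, C):
    """Column-wise Khatri-Rao product of two factor matrices."""
    r = B.shape[1]
    return np.einsum('ir,jr->ijr', B, C).reshape(-1, r)

def nn_cp(T, rank, n_iter=500, eps=1e-12, seed=0):
    """Non-negative CP decomposition of a 3-way array via multiplicative
    updates: T[i,j,k] ~ sum_r A[i,r]*B[j,r]*C[k,r] with all factors >= 0."""
    rng = np.random.default_rng(seed)
    F = [rng.random((d, rank)) + 0.1 for d in T.shape]   # positive init
    unf = [np.moveaxis(T, m, 0).reshape(T.shape[m], -1) for m in range(3)]
    for _ in range(n_iter):
        for m in range(3):
            others = [F[k] for k in range(3) if k != m]
            KR = khatri_rao(others[0], others[1])
            gram = np.ones((rank, rank))
            for k in range(3):
                if k != m:
                    gram *= F[k].T @ F[k]
            # multiplicative update keeps the factor entrywise non-negative
            F[m] *= (unf[m] @ KR) / (F[m] @ gram + eps)
    return F

def cp_reconstruct(F):
    """Rebuild the 3-way array from its CP factors."""
    return np.einsum('ir,jr,kr->ijk', *F)
```

In the spike-train setting, the three factor matrices would play the roles of temporal firing patterns, spatial firing patterns, and trial-dependent activation coefficients; the non-negativity mirrors the constraint the abstracts found robust for overlapping, non-independent patterns.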
First-spike latencies carried the majority of information provided by the whole spike train about fine-scale image</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/15122674','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/15122674"><span>Characterizing non-Gaussian diffusion by using generalized diffusion <span class="hlt">tensors</span>.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Liu, Chunlei; Bammer, Roland; Acar, Burak; Moseley, Michael E</p> <p>2004-05-01</p> <p>Diffusion <span class="hlt">tensor</span> imaging (DTI) is known to have a limited capability of resolving multiple fiber orientations within one voxel. This is mainly because the probability density function (PDF) for random spin displacement is non-Gaussian in the confining environment of biological tissues and, thus, the modeling of self-diffusion by a second-order <span class="hlt">tensor</span> breaks down. The statistical property of a non-Gaussian diffusion <span class="hlt">process</span> is characterized via the higher-order <span class="hlt">tensor</span> (HOT) coefficients by reconstructing the PDF of the random spin displacement. Those HOT coefficients can be determined by combining a series of complex diffusion-weighted measurements. The signal equation for an MR diffusion experiment was investigated theoretically by generalizing Fick's law to a higher-order partial differential equation (PDE) obtained via Kramers-Moyal expansion. A relationship has been derived between the HOT coefficients of the PDE and the higher-order cumulants of the random spin displacement. Monte-Carlo simulations of diffusion in a restricted environment with different geometrical shapes were performed, and the strengths and weaknesses of both HOT and established diffusion analysis techniques were investigated. 
The generalized diffusion <span class="hlt">tensor</span> formalism is capable of accurately resolving the underlying spin displacement for complex geometrical structures, of which neither conventional DTI nor diffusion-weighted imaging at high angular resolution (HARD) is capable. The HOT method helps illuminate some of the restrictions that are characteristic of these other methods. Furthermore, a direct relationship between HOT and q-space is also established.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27568983','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27568983"><span>Adaptive Fourier <span class="hlt">decomposition</span> based ECG denoising.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Wang, Ze; Wan, Feng; Wong, Chi Man; Zhang, Liming</p> <p>2016-10-01</p> <p>A novel ECG denoising method is proposed based on the adaptive Fourier <span class="hlt">decomposition</span> (AFD). The AFD decomposes a signal according to its energy distribution, thereby making this algorithm suitable for separating pure ECG signal and noise with overlapping frequency ranges but different energy distributions. A stop criterion for the iterative <span class="hlt">decomposition</span> <span class="hlt">process</span> in the AFD is calculated on the basis of the estimated signal-to-noise ratio (SNR) of the noisy signal. The proposed AFD-based method is validated by the synthetic ECG signal using an ECG model and also real ECG signals from the MIT-BIH Arrhythmia Database both with additive Gaussian white noise. 
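The SNR-based stop criterion can be illustrated with an energy-ordered greedy reconstruction. Note this is a simple stand-in for the AFD, not the AFD itself: it keeps the strongest DFT components first and stops once the residual energy falls to the noise level implied by the estimated SNR, which mimics only the stopping rule:

```python
import numpy as np

def greedy_denoise(x, snr_db_est):
    """Stand-in for AFD: keep the largest-energy DFT components, stopping
    once the residual energy drops to the noise level implied by the
    estimated SNR (mimics the paper's stop criterion, not its algorithm)."""
    N = len(x)
    X = np.fft.fft(x)
    power = np.abs(X) ** 2 / N          # per-component energy (Parseval)
    noise_energy = (x @ x) / (1.0 + 10.0 ** (snr_db_est / 10.0))
    keep = np.zeros(N, dtype=complex)
    residual = power.sum()
    for k in np.argsort(power)[::-1]:   # components by decreasing energy
        if residual <= noise_energy:    # SNR-based stop criterion
            break
        keep[k] = X[k]
        residual -= power[k]
    return np.fft.ifft(keep).real
```

The AFD replaces the fixed Fourier basis with adaptively chosen rational components, which is what lets it separate signal and noise with overlapping frequency ranges.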
Simulation results show that the proposed method outperforms major ECG denoising schemes based on the wavelet transform, the Stockwell transform, the empirical mode <span class="hlt">decomposition</span>, and the ensemble empirical mode <span class="hlt">decomposition</span> in both denoising and QRS detection.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2000JaJAP..39.1378P','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2000JaJAP..39.1378P"><span>Thermal <span class="hlt">Decomposition</span> of Poly(methylphenylsilane)</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Pan, Lujun; Zhang, Mei; Nakayama, Yoshikazu</p> <p>2000-03-01</p> <p>The thermal <span class="hlt">decomposition</span> of poly(methylphenylsilane) was performed at constant heating rates and isothermal conditions. The evolved gases were studied by ionization-threshold mass spectroscopy. Pyrolysis under isothermal conditions reveals that the <span class="hlt">decomposition</span> of poly(methylphenylsilane) is a depolymerization that follows first-order kinetics. Kinetic analysis of the evolution spectra of CH3-Si-C6H5 radicals, phenyl and methyl substituents reveals the mechanism and activation energies of the <span class="hlt">decomposition</span> reactions in main chains and substituents. It is found that the <span class="hlt">decomposition</span> of main chains is a dominant reaction and results in the weight loss of approximately 90%. 
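The kinetic analysis described above amounts to fitting a first-order rate constant at each temperature and extracting an activation energy from an Arrhenius plot. A sketch with synthetic weight-loss curves (the rate prefactor, Ea and temperatures are invented numbers, not PMPS data):

```python
import numpy as np

# Isothermal first-order decomposition: w(t) = w_inf + (w0 - w_inf) * exp(-k t).
# Fit k at several temperatures, then get Ea from the Arrhenius relation
# k = A * exp(-Ea / (R T)).  All parameter values below are illustrative.
R = 8.314                                      # J/(mol K)
Ea_true, A = 180e3, 1e12                       # J/mol, 1/s (assumed)
temps = np.array([600.0, 620.0, 640.0, 660.0]) # K
k_true = A * np.exp(-Ea_true / (R * temps))

t = np.linspace(0.0, 5000.0, 200)
ks = []
for k in k_true:
    w = 0.10 + 0.90 * np.exp(-k * t)           # 90% weight loss at completion
    # recover k from the linearized decay: ln((w - w_inf)/(w0 - w_inf)) = -k t
    y = np.log((w - 0.10) / 0.90)
    ks.append(-np.polyfit(t, y, 1)[0])

# Arrhenius plot: ln k versus 1/T has slope -Ea/R
slope = np.polyfit(1.0 / temps, np.log(ks), 1)[0]
Ea_est = -slope * R
```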
The effusion of phenyl and methyl substituents occurs in the two <span class="hlt">processes</span> of rearrangement of main chains and the formation of stable Si-C containing residuals.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.osti.gov/scitech/biblio/22252841','SCIGOV-STC'); return false;" href="https://www.osti.gov/scitech/biblio/22252841"><span>Communication: Acceleration of coupled cluster singles and doubles via orbital-weighted least-squares <span class="hlt">tensor</span> hypercontraction</span></a></p> <p><a target="_blank" href="http://www.osti.gov/scitech">SciTech Connect</a></p> <p>Parrish, Robert M.; Sherrill, C. David; Hohenstein, Edward G.; Kokkila, Sara I. L.; Martínez, Todd J.</p> <p>2014-05-14</p> <p>We apply orbital-weighted least-squares <span class="hlt">tensor</span> hypercontraction <span class="hlt">decomposition</span> of the electron repulsion integrals to accelerate the coupled cluster singles and doubles (CCSD) method. Using accurate and flexible low-rank factorizations of the electron repulsion integral <span class="hlt">tensor</span>, we are able to reduce the scaling of the most vexing particle-particle ladder term in CCSD from O(N{sup 6}) to O(N{sup 5}), with remarkably low error. Combined with a T{sub 1}-transformed Hamiltonian, this leads to substantial practical accelerations against an optimized density-fitted CCSD implementation.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://www.osti.gov/scitech/servlets/purl/1340180','SCIGOV-STC'); return false;" href="http://www.osti.gov/scitech/servlets/purl/1340180"><span><span class="hlt">Tensor</span> Network Quantum Virtual Machine (TNQVM)</span></a></p> <p><a target="_blank" href="http://www.osti.gov/scitech">SciTech Connect</a></p> <p>McCaskey, Alexander J.</p> <p>2016-11-18</p> <p>There is a lack of state-of-the-art quantum computing simulation software that scales on heterogeneous systems like Titan. 
<span class="hlt">Tensor</span> Network Quantum Virtual Machine (TNQVM) provides a quantum simulator that leverages a distributed network of GPUs to simulate quantum circuits in a manner that leverages recent results from <span class="hlt">tensor</span> network theory.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/1996AIPC..359..413L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/1996AIPC..359..413L"><span>Effects of <span class="hlt">tensor</span> interactions in τ decays</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>López Castro, G.; Godina Nava, J. J.</p> <p>1996-02-01</p> <p>Recent claims for the observation of antisymmetric weak <span class="hlt">tensor</span> currents in π and K decays are considered for the case of τ→Kπν transitions. Assuming the existence of symmetric <span class="hlt">tensor</span> currents, a mechanism for the direct production of the K2*(1430) spin-2 meson in τ decays is proposed.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26947573','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26947573"><span>Temporal dynamics of biotic and abiotic drivers of litter <span class="hlt">decomposition</span>.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>García-Palacios, Pablo; Shaw, E Ashley; Wall, Diana H; Hättenschwiler, Stephan</p> <p>2016-05-01</p> <p>Climate, litter quality and decomposers drive litter <span class="hlt">decomposition</span>. However, little is known about whether their relative contribution changes at different <span class="hlt">decomposition</span> stages. 
To fill this gap, we evaluated the relative importance of leaf litter polyphenols, decomposer communities and soil moisture for litter C and N loss at different stages throughout the <span class="hlt">decomposition</span> <span class="hlt">process</span>. Although both microbial and nematode communities regulated litter C and N loss in the early <span class="hlt">decomposition</span> stages, soil moisture and legacy effects of initial differences in litter quality played a major role in the late stages of the <span class="hlt">process</span>. Our results provide strong evidence for substantial shifts in how biotic and abiotic factors control litter C and N dynamics during <span class="hlt">decomposition</span>. Taking into account such temporal dynamics will increase the predictive power of <span class="hlt">decomposition</span> models that are currently limited by a single-pool approach applying control variables uniformly to the entire decay <span class="hlt">process</span>.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.osti.gov/scitech/biblio/1178516','SCIGOV-STC'); return false;" href="https://www.osti.gov/scitech/biblio/1178516"><span>CAST: Contraction Algorithm for Symmetric <span class="hlt">Tensors</span></span></a></p> <p><a target="_blank" href="http://www.osti.gov/scitech">SciTech Connect</a></p> <p>Rajbhandari, Samyam; NIkam, Akshay; Lai, Pai-Wei; Stock, Kevin; Krishnamoorthy, Sriram; Sadayappan, Ponnuswamy</p> <p>2014-09-22</p> <p><span class="hlt">Tensor</span> contractions represent the most compute-intensive core kernels in ab initio computational quantum chemistry and nuclear physics. Symmetries in these <span class="hlt">tensor</span> contractions makes them difficult to load balance and scale to large distributed systems. In this paper, we develop an efficient and scalable algorithm to contract symmetric <span class="hlt">tensors</span>. 
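The storage and work saving that symmetry offers can be seen in the simplest case: a symmetric matrix-vector contraction that touches only the unique i ≤ j elements. This is just the single-node idea behind packed symmetric storage, not CAST's distributed algorithm:

```python
import numpy as np

def pack_symmetric(A):
    # store only the upper triangle (i <= j) of a symmetric matrix
    return A[np.triu_indices(A.shape[0])]

def symmetric_matvec(packed, v):
    """y_i = sum_j A_ij v_j from packed upper-triangular storage: each unique
    element A_ij (i < j) contributes to both y_i and y_j, halving the storage."""
    n = v.size
    y = np.zeros(n)
    k = 0
    for i in range(n):
        for j in range(i, n):
            a = packed[k]; k += 1
            y[i] += a * v[j]
            if i != j:
                y[j] += a * v[i]   # symmetric counterpart A_ji = A_ij
    return y
```

Higher-order symmetric tensors give larger savings (the unique-element count drops combinatorially), but distributing those irregular blocks is exactly the load-balancing problem the paper addresses.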
We introduce a novel approach that avoids data redistribution in contracting symmetric <span class="hlt">tensors</span> while also avoiding redundant storage and maintaining load balance. We present experimental results on two parallel supercomputers for several symmetric contractions that appear in the CCSD quantum chemistry method. We also present a novel approach to <span class="hlt">tensor</span> redistribution that can take advantage of parallel hyperplanes when the initial distribution has replicated dimensions, and use collective broadcast when the final distribution has replicated dimensions, making the algorithm very efficient.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2011JMP....52j3510S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2011JMP....52j3510S"><span><span class="hlt">Tensor</span> models and 3-ary algebras</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Sasakura, Naoki</p> <p>2011-10-01</p> <p><span class="hlt">Tensor</span> models are the generalization of matrix models, and are studied as models of quantum gravity in general dimensions. In this paper, I discuss the algebraic structure in the fuzzy space interpretation of the <span class="hlt">tensor</span> models which have a <span class="hlt">tensor</span> with three indices as its only dynamical variable. The algebraic structure is studied mainly from the perspective of 3-ary algebras. It is shown that the <span class="hlt">tensor</span> models have algebraic expressions, and that their symmetries are represented by 3-ary algebras. 
It is also shown that the 3-ary algebras of coordinates, which appear in the nonassociative fuzzy flat spacetimes corresponding to a certain class of configurations with Gaussian functions in the <span class="hlt">tensor</span> models, form Lie triple systems, and the associated Lie algebras are shown to agree with those of the Snyder's noncommutative spacetimes. The Poincare transformations of the coordinates on the fuzzy flat spacetimes are shown to be generated by 3-ary algebras.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016PhDT.......178A','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016PhDT.......178A"><span>Estimation of full moment <span class="hlt">tensors</span>, including uncertainties, for earthquakes, volcanic events, and nuclear explosions</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Alvizuri, Celso R.</p> <p></p> <p>We present a catalog of full seismic moment <span class="hlt">tensors</span> for 63 events from Uturuncu volcano in Bolivia. The events were recorded during 2011-2012 in the PLUTONS seismic array of 24 broadband stations. Most events had magnitudes between 0.5 and 2.0 and did not generate discernible surface waves; the largest event was Mw 2.8. For each event we computed the misfit between observed and synthetic waveforms, and we used first-motion polarity measurements to reduce the number of possible solutions. Each moment <span class="hlt">tensor</span> solution was obtained using a grid search over the six-dimensional space of moment <span class="hlt">tensors</span>. For each event we show the misfit function in eigenvalue space, represented by a lune. We identify three subsets of the catalog: (1) 6 isotropic events, (2) 5 tensional crack events, and (3) a swarm of 14 events southeast of the volcanic center that appear to be double couples. 
The occurrence of positively isotropic events is consistent with other published results from volcanic and geothermal regions. Several of these previous results, as well as our results, cannot be interpreted within the context of either an oblique opening crack or a crack-plus-double-couple model. Proper characterization of uncertainties for full moment <span class="hlt">tensors</span> is critical for distinguishing among physical models of source <span class="hlt">processes</span>. A seismic moment <span class="hlt">tensor</span> is a 3x3 symmetric matrix that provides a compact representation of a seismic source. We develop an algorithm to estimate moment <span class="hlt">tensors</span> and their uncertainties from observed seismic data. For a given event, the algorithm performs a grid search over the six-dimensional space of moment <span class="hlt">tensors</span> by generating synthetic waveforms for each moment <span class="hlt">tensor</span> and then evaluating a misfit function between the observed and synthetic waveforms. 'The' moment <span class="hlt">tensor</span> M0 for the event is then the moment <span class="hlt">tensor</span> with minimum misfit. To describe the uncertainty associated with M0, we first convert the misfit function to a probability function. 
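The grid-search-plus-probability workflow can be sketched in a toy form: synthesize data from a known source, scan a one-parameter family of moment tensors (a mix of deviatoric and isotropic parts, a crude stand-in for the lune), pick the minimum-misfit solution, and map misfit to probability. The Green's-function matrix and noise level are invented:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
G = rng.standard_normal((n, 6))    # stand-in Green's-function matrix (assumed)

# one-parameter family of sources in a 6-component (Mxx,Myy,Mzz,Mxy,Mxz,Myz)
# basis: a deviatoric ("double-couple-like") tensor mixed with an isotropic one
m_dc = np.array([0.0, 1.0, -1.0, 0.3, 0.0, 0.2]); m_dc /= np.linalg.norm(m_dc)
m_iso = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 0.0]) / np.sqrt(3.0)

alpha_true = 0.4                   # hypothetical source type
d = G @ (np.cos(alpha_true) * m_dc + np.sin(alpha_true) * m_iso)
d += 0.02 * rng.standard_normal(n)

# grid search: L2 waveform misfit for every candidate source type
alphas = np.linspace(-np.pi / 2, np.pi / 2, 721)
M = np.cos(alphas)[:, None] * m_dc + np.sin(alphas)[:, None] * m_iso
misfit = ((d - M @ G.T) ** 2).sum(axis=1)
alpha_hat = alphas[np.argmin(misfit)]

# convert misfit to a normalized probability over source types
sigma = 0.02
prob = np.exp(-(misfit - misfit.min()) / (2.0 * sigma ** 2))
prob /= prob.sum()
```

The full method searches the six-dimensional moment-tensor space and adds first-motion polarity constraints; the misfit-to-probability step is what supports the uncertainty statements.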
The uncertainty, or</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3444512','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3444512"><span>The Diffusion <span class="hlt">Tensor</span> Imaging Toolbox</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Alger, Jeffry R.</p> <p>2012-01-01</p> <p>During the past few years, the Journal of Neuroscience has published over 30 articles that describe investigations that used Diffusion <span class="hlt">Tensor</span> Imaging (DTI) and related techniques as a primary observation method. This illustrates a growing interest in DTI within the basic and clinical neuroscience communities. This article summarizes DTI methodology in terms that can be immediately understood by the neuroscientist who has little previous exposure to DTI. It describes the fundamentals of water molecular diffusion coefficient measurement in brain tissue and illustrates how these fundamentals can be used to form vivid and useful depictions of white matter macroscopic and microscopic anatomy. It also describes current research applications and the technique’s attributes and limitations. It is hoped that this article will help the readers of this Journal to more effectively evaluate neuroscience studies that use DTI. 
PMID:22649222</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/24323102','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/24323102"><span>Depth inpainting by <span class="hlt">tensor</span> voting.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Kulkarni, Mandar; Rajagopalan, Ambasamudram N</p> <p>2013-06-01</p> <p>Depth maps captured by range scanning devices or by using optical cameras often suffer from missing regions due to occlusions, reflectivity, limited scanning area, sensor imperfections, etc. In this paper, we propose a fast and reliable algorithm for depth map inpainting using the <span class="hlt">tensor</span> voting (TV) framework. For less complex missing regions, local edge and depth information is utilized for synthesizing missing values. The depth variations are modeled by local planes using 3D TV, and missing values are estimated using plane equations. For large and complex missing regions, we collect and evaluate depth estimates from self-similar (training) datasets. We align the depth maps of the training set with the target (defective) depth map and evaluate the goodness of depth estimates among candidate values using 3D TV. 
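The "less complex missing region" case reduces to modeling depth by local planes and evaluating the plane equation inside the hole. A minimal single-plane sketch (real tensor voting estimates the planes locally and votes among candidates):

```python
import numpy as np

def inpaint_with_plane(depth, mask):
    """Fill missing depth values (mask == True) by fitting one least-squares
    plane z = a*x + b*y + c to the known pixels and evaluating it in the hole.
    A global-plane simplification of the paper's local-plane idea."""
    ys, xs = np.nonzero(~mask)
    A = np.column_stack([xs, ys, np.ones_like(xs)])
    (a, b, c), *_ = np.linalg.lstsq(A, depth[~mask], rcond=None)
    out = depth.copy()
    my, mx = np.nonzero(mask)
    out[mask] = a * mx + b * my + c    # plane equation inside the hole
    return out
```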
We demonstrate the effectiveness of the proposed approaches on real as well as synthetic data.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2014EL....10538002M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2014EL....10538002M"><span>X-ray <span class="hlt">tensor</span> tomography</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Malecki, A.; Potdevin, G.; Biernath, T.; Eggl, E.; Willer, K.; Lasser, T.; Maisenbacher, J.; Gibmeier, J.; Wanner, A.; Pfeiffer, F.</p> <p>2014-02-01</p> <p>Here we introduce a new concept for x-ray computed tomography that yields information about the local micro-morphology and its orientation in each voxel of the reconstructed 3D tomogram. Contrary to conventional x-ray CT, which only reconstructs a single scalar value for each point in the 3D image, our approach provides a full scattering <span class="hlt">tensor</span> with multiple independent structural parameters in each volume element. In the application example shown in this study, we highlight that our method can visualize sub-pixel fiber orientations in a carbon composite sample, hence demonstrating its value for non-destructive testing applications. Moreover, as the method is based on the use of a conventional x-ray tube, we believe that it will also have a great impact in the wider range of material science investigations and in future medical diagnostics. 

</p> </li> </ol> <div class="pull-right"> <ul class="pagination"> <li><a href="#" onclick='return showDiv("page_1");'>«</a></li> <li><a href="#" onclick='return showDiv("page_21");'>21</a></li> <li><a href="#" onclick='return showDiv("page_22");'>22</a></li> <li class="active"><span>23</span></li> <li><a href="#" onclick='return showDiv("page_24");'>24</a></li> <li><a href="#" onclick='return showDiv("page_25");'>25</a></li> <li><a href="#" onclick='return showDiv("page_25");'>»</a></li> </ul> </div> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_23 --> <div id="page_24" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <div class="pull-right"> <ul class="pagination"> <li><a href="#" onclick='return showDiv("page_1");'>«</a></li> <li><a href="#" onclick='return showDiv("page_21");'>21</a></li> <li><a href="#" onclick='return showDiv("page_22");'>22</a></li> <li><a href="#" onclick='return showDiv("page_23");'>23</a></li> <li class="active"><span>24</span></li> <li><a href="#" onclick='return showDiv("page_25");'>25</a></li> <li><a href="#" onclick='return showDiv("page_25");'>»</a></li> </ul> </div> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="461"> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/25320804','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/25320804"><span>Complete set of invariants of a 4th order <span class="hlt">tensor</span>: the 12 tasks of HARDI from ternary quartics.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Papadopoulo, Théo; Ghosh, Aurobrata; Deriche, Rachid</p> <p>2014-01-01</p> <p>Invariants play a crucial role in Diffusion MRI. In DTI (2nd order <span class="hlt">tensors</span>), invariant scalars (FA, MD) have been successfully used in clinical applications. 
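For the 2nd-order case, the rotation-invariant scalars mentioned here are functions of the tensor's eigenvalues alone, which makes them easy to compute directly:

```python
import numpy as np

def dti_invariants(D):
    """Rotation-invariant scalars of a 3x3 diffusion tensor:
    mean diffusivity (MD) and fractional anisotropy (FA)."""
    lam = np.linalg.eigvalsh(D)
    md = lam.mean()
    fa = np.sqrt(1.5 * ((lam - md) ** 2).sum() / (lam ** 2).sum())
    return md, fa
```

The paper's contribution is the analogous, and far harder, construction for 4th-order tensors, where a complete set needs 12 invariants rather than the eigenvalue functions available at 2nd order.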
But DTI has limitations and HARDI models (e.g. 4th order <span class="hlt">tensors</span>) have been proposed instead. These, however, lack invariant features and computing them systematically is challenging. We present a simple and systematic method to compute a functionally complete set of invariants of a non-negative 3D 4th order <span class="hlt">tensor</span> with respect to SO3. Intuitively, this transforms the <span class="hlt">tensor</span>'s non-unique ternary quartic (TQ) <span class="hlt">decomposition</span> (from Hilbert's theorem) to a unique canonical representation independent of orientation - the invariants. The method consists of two steps. In the first, we reduce the 18 degrees-of-freedom (DOF) of a TQ representation by 3-DOFs via an orthogonal transformation. This transformation is designed to enhance a rotation-invariant property of choice of the 3D 4th order <span class="hlt">tensor</span>. In the second, we further reduce 3-DOFs via a 3D rotation transformation of coordinates to arrive at a canonical set of invariants to SO3 of the <span class="hlt">tensor</span>. The resulting invariants are, by construction, (i) functionally complete, (ii) functionally irreducible (if desired), (iii) computationally efficient and (iv) reversible (mappable to the TQ coefficients or shape); which is the novelty of our contribution in comparison to prior work. 
Results from synthetic and real data experiments validate the method and indicate its importance.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/25162977','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/25162977"><span>Mechanistic insights into formation of SnO₂ nanotubes: asynchronous <span class="hlt">decomposition</span> of poly(vinylpyrrolidone) in electrospun fibers during calcining <span class="hlt">process</span>.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Wu, Jinjin; Zeng, Dawen; Wang, Xiaoxia; Zeng, Lei; Huang, Qingwu; Tang, Gen; Xie, Changsheng</p> <p>2014-09-23</p> <p>The formation mechanism of SnO2 nanotubes (NTs) fabricated by generic electrospinning and calcining was revealed by systematically investigating the structural evolution of calcined fibers, product composition, and released volatile byproducts. The structural evolution of the fibers proceeded sequentially from dense fiber to wire-in-tube to nanotube. This remarkable structural evolution indicated a disparate thermal <span class="hlt">decomposition</span> of poly(vinylpyrrolidone) (PVP) in the interior and the surface of the fibers. PVP on the surface of the outer fibers decomposed completely at a lower temperature (<340 °C), due to exposure to oxygen, and SnO2 crystallized and formed a shell on the fiber. Interior PVP of the fiber was prone to loss of side substituents due to the oxygen-deficient <span class="hlt">decomposition</span>, leaving only the carbon main chain. The rest of the Sn crystallized when the pores formed resulting from the aggregation of SnO2 nanocrystals in the shell. The residual carbon chain did not decompose completely at temperatures less than 550 °C. We proposed a PVP-assisted Ostwald ripening mechanism for the formation of SnO2 NTs. 
This work guides the fabrication of diverse metal oxide nanostructures by the generic electrospinning method.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/19900002898','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19900002898"><span>Novel techniques for data <span class="hlt">decomposition</span> and load balancing for parallel <span class="hlt">processing</span> of vision systems: Implementation and evaluation using a motion estimation system</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Choudhary, Alok Nidhi; Leung, Mun K.; Huang, Thomas S.; Patel, Janak H.</p> <p>1989-01-01</p> <p>Computer vision systems employ a sequence of vision algorithms in which the output of an algorithm is the input of the next algorithm in the sequence. Algorithms that constitute such systems exhibit vastly different computational characteristics, and therefore, require different data <span class="hlt">decomposition</span> techniques and efficient load balancing techniques for parallel implementation. However, since the input data for a task is produced as the output data of the previous task, this information can be exploited to perform knowledge based data <span class="hlt">decomposition</span> and load balancing. Presented here are algorithms for a motion estimation system. The motion estimation is based on point correspondences between the involved images, which form a sequence of stereo image pairs. Researchers propose algorithms to obtain point correspondences by matching feature points among stereo image pairs at any two consecutive time instants. Furthermore, the proposed algorithms employ non-iterative procedures, which results in considerable savings in computation time. 
The system consists of the following steps: (1) extraction of features; (2) stereo match of images in one time instant; (3) time match of images from consecutive time instants; (4) stereo match to compute final unambiguous points; and (5) computation of motion parameters.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28055828','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28055828"><span>Anisotropic Conductivity <span class="hlt">Tensor</span> Imaging of In Vivo Canine Brain Using DT-MREIT.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Jeong, Woo Chul; Sajib, Saurav Z K; Katoch, Nitish; Kim, Hyung Joong; Kwon, Oh In; Woo, Eung Je</p> <p>2017-01-01</p> <p>We present in vivo images of anisotropic electrical conductivity <span class="hlt">tensor</span> distributions inside canine brains using diffusion <span class="hlt">tensor</span> magnetic resonance electrical impedance tomography (DT-MREIT). The conductivity <span class="hlt">tensor</span> is represented as a product of an ion mobility <span class="hlt">tensor</span> and a scale factor of ion concentrations. Incorporating directional mobility information from water diffusion <span class="hlt">tensors</span>, we developed a stable <span class="hlt">process</span> to reconstruct anisotropic conductivity <span class="hlt">tensor</span> images from measured magnetic flux density data using an MRI scanner. Devising a new image reconstruction algorithm, we reconstructed anisotropic conductivity <span class="hlt">tensor</span> images of two canine brains with a pixel size of 1.25 mm. Though the reconstructed conductivity values matched well in general with those measured by using invasive probing methods, there were some discrepancies as well. The degree of white matter anisotropy was 2 to 4.5, which is smaller than previous findings of 5 to 10. 
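The DT-MREIT model described above represents the conductivity tensor as a scale factor times the water diffusion (mobility) tensor, so the two share eigenvectors. A minimal sketch of that relation and of the anisotropy ratio the abstract reports; the one-scale linear form and the numbers are illustrative simplifications:

```python
import numpy as np

def conductivity_from_diffusion(D, eta):
    """DT-MREIT-style model: conductivity tensor C = eta * D, where D is the
    water diffusion tensor and eta a scale factor (ion-concentration term)
    estimated from magnetic flux density data. The single global scale is a
    simplifying assumption for illustration."""
    return eta * D

def anisotropy_ratio(C):
    # degree of anisotropy: largest over smallest eigenvalue
    lam = np.linalg.eigvalsh(C)
    return lam[-1] / lam[0]
```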
The reconstructed conductivity value of the cerebrospinal fluid was about 1.3 S/m, which is smaller than previous measurements of about 1.8 S/m. Future studies of in vivo imaging experiments with disease models should follow this initial trial to validate clinical significance of DT-MREIT as a new diagnostic imaging modality. Applications in modeling and simulation studies of bioelectromagnetic phenomena including source imaging and electrical stimulation are also promising.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4948124','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4948124"><span>Spatial Mapping of Translational Diffusion Coefficients Using Diffusion <span class="hlt">Tensor</span> Imaging: A Mathematical Description</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>SHETTY, ANIL N.; CHIANG, SHARON; MALETIC-SAVATIC, MIRJANA; KASPRIAN, GREGOR; VANNUCCI, MARINA; LEE, WESLEY</p> <p>2016-01-01</p> <p>In this article, we discuss the theoretical background for diffusion weighted imaging and diffusion <span class="hlt">tensor</span> imaging. Molecular diffusion is a random <span class="hlt">process</span> involving thermal Brownian motion. In biological tissues, the underlying microstructures restrict the diffusion of water molecules, making diffusion directionally dependent. Water diffusion in tissue is mathematically characterized by the diffusion <span class="hlt">tensor</span>, the elements of which contain information about the magnitude and direction of diffusion and is a function of the coordinate system. Thus, it is possible to generate contrast in tissue based primarily on diffusion effects. Expressing diffusion in terms of the measured diffusion coefficient (eigenvalue) in any one direction can lead to errors. 
Nowhere is this more evident than in white matter, due to the preferential orientation of myelin fibers. The directional dependency is removed by diagonalization of the diffusion <span class="hlt">tensor</span>, which then yields a set of three eigenvalues and eigenvectors, representing the magnitude and direction of the three orthogonal axes of the diffusion ellipsoid, respectively. For example, the eigenvalue corresponding to the eigenvector along the long axis of the fiber corresponds qualitatively to diffusion with least restriction. Determination of the principal values of the diffusion <span class="hlt">tensor</span> and various anisotropic indices provides structural information. We review the use of diffusion measurements using the modified Stejskal–Tanner diffusion equation. The anisotropy is analyzed by decomposing the diffusion <span class="hlt">tensor</span> based on symmetrical properties describing the geometry of diffusion <span class="hlt">tensor</span>. We further describe diffusion <span class="hlt">tensor</span> properties in visualizing fiber tract organization of the human brain. PMID:27441031</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27441031','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27441031"><span>Spatial Mapping of Translational Diffusion Coefficients Using Diffusion <span class="hlt">Tensor</span> Imaging: A Mathematical Description.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Shetty, Anil N; Chiang, Sharon; Maletic-Savatic, Mirjana; Kasprian, Gregor; Vannucci, Marina; Lee, Wesley</p> <p>2014-01-01</p> <p>In this article, we discuss the theoretical background for diffusion weighted imaging and diffusion <span class="hlt">tensor</span> imaging. Molecular diffusion is a random <span class="hlt">process</span> involving thermal Brownian motion. 
In biological tissues, the underlying microstructures restrict the diffusion of water molecules, making diffusion directionally dependent. Water diffusion in tissue is mathematically characterized by the diffusion <span class="hlt">tensor</span>, the elements of which contain information about the magnitude and direction of diffusion and is a function of the coordinate system. Thus, it is possible to generate contrast in tissue based primarily on diffusion effects. Expressing diffusion in terms of the measured diffusion coefficient (eigenvalue) in any one direction can lead to errors. Nowhere is this more evident than in white matter, due to the preferential orientation of myelin fibers. The directional dependency is removed by diagonalization of the diffusion <span class="hlt">tensor</span>, which then yields a set of three eigenvalues and eigenvectors, representing the magnitude and direction of the three orthogonal axes of the diffusion ellipsoid, respectively. For example, the eigenvalue corresponding to the eigenvector along the long axis of the fiber corresponds qualitatively to diffusion with least restriction. Determination of the principal values of the diffusion <span class="hlt">tensor</span> and various anisotropic indices provides structural information. We review the use of diffusion measurements using the modified Stejskal-Tanner diffusion equation. The anisotropy is analyzed by decomposing the diffusion <span class="hlt">tensor</span> based on symmetrical properties describing the geometry of diffusion <span class="hlt">tensor</span>. 
We further describe diffusion <span class="hlt">tensor</span> properties in visualizing fiber tract organization of the human brain.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016JHEP...02..098M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016JHEP...02..098M"><span>Decomposing Nekrasov <span class="hlt">decomposition</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Morozov, A.; Zenkevich, Y.</p> <p>2016-02-01</p> <p>AGT relations imply that the four-point conformal block admits a <span class="hlt">decomposition</span> into a sum over pairs of Young diagrams of essentially rational Nekrasov functions — this is immediately seen when the conformal block is represented in the form of a matrix model. However, the q-deformation of the same block has a deeper <span class="hlt">decomposition</span> — into a sum over a quadruple of Young diagrams of a product of four topological vertices. We analyze the interplay between these two <span class="hlt">decompositions</span>, their properties and their generalization to multi-point conformal blocks. In the latter case we explain how Dotsenko-Fateev all-with-all (star) pair "interaction" is reduced to the quiver model nearest-neighbor (chain) one. We give new identities for q-Selberg averages of pairs of generalized Macdonald polynomials. 
We also translate the slicing invariance of refined topological strings into the language of conformal blocks and interpret it as abelianization of generalized Macdonald polynomials.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015AGUFM.S53B2801C','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015AGUFM.S53B2801C"><span>Moment <span class="hlt">Tensor</span> Analysis of Shallow Sources</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Chiang, A.; Dreger, D. S.; Ford, S. R.; Walter, W. R.; Yoo, S. H.</p> <p>2015-12-01</p> <p>A potential issue for moment <span class="hlt">tensor</span> inversion of shallow seismic sources is that some moment <span class="hlt">tensor</span> components have vanishing amplitudes at the free surface, which can result in bias in the moment <span class="hlt">tensor</span> solution. The effects of the free surface on the stability of the moment <span class="hlt">tensor</span> method become important as we continue to investigate and improve the capabilities of regional full moment <span class="hlt">tensor</span> inversion for source-type identification and discrimination. It is important to understand these free surface effects on discriminating shallow explosive sources for nuclear monitoring purposes. It may also be important in natural systems that have shallow seismicity such as volcanoes and geothermal systems. In this study, we apply the moment <span class="hlt">tensor</span> based discrimination method to the HUMMING ALBATROSS quarry blasts. These shallow chemical explosions, detonated at approximately 10 m depth and recorded at distances of up to several kilometers, represent a rather severe source-station geometry in terms of vanishing tractions. 
We show that the method is capable of recovering a predominantly explosive source mechanism, and the combined waveform and first motion method enables the unique discrimination of these events. Recovering the correct yield using seismic moment estimates from moment <span class="hlt">tensor</span> inversion remains challenging but we can begin to put error bounds on our moment estimates using the NSS technique.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2012PMB....57.5075L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2012PMB....57.5075L"><span>Ultrasound elastic <span class="hlt">tensor</span> imaging: comparison with MR diffusion <span class="hlt">tensor</span> imaging in the myocardium</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Lee, Wei-Ning; Larrat, Benoît; Pernot, Mathieu; Tanter, Mickaël</p> <p>2012-08-01</p> <p>We have previously proven the feasibility of ultrasound-based shear wave imaging (SWI) to non-invasively characterize myocardial fiber orientation in both in vitro porcine and in vivo ovine hearts. The SWI-estimated results were in good correlation with histology. In this study, we proposed a new and robust fiber angle estimation method through a <span class="hlt">tensor</span>-based approach for SWI, coined together as elastic <span class="hlt">tensor</span> imaging (ETI), and compared it with magnetic resonance diffusion <span class="hlt">tensor</span> imaging (DTI), a current gold standard and extensively reported non-invasive imaging technique for mapping fiber architecture. Fresh porcine (n = 5) and ovine (n = 5) myocardial samples (20 × 20 × 30 mm3) were studied. ETI was firstly performed to generate shear waves and to acquire the wave events at ultrafast frame rate (8000 fps). 
A 2.8 MHz phased array probe (pitch = 0.28 mm), connected to a prototype ultrasound scanner, was mounted on a customized MRI-compatible rotation device, which allowed both the rotation of the probe from -90° to 90° at 5° increments and co-registration between two imaging modalities. Transmural shear wave speed was first estimated for all realized propagation directions. The fiber angles were determined from the shear wave speed map using the least-squares method and eigen <span class="hlt">decomposition</span>. The test myocardial sample together with the rotation device was then placed inside a 7T MRI scanner. Diffusion was encoded in six directions. A total of 270 diffusion-weighted images (b = 1000 s mm-2, FOV = 30 mm, matrix size = 60 × 64, TR = 6 s, TE = 19 ms, 24 averages) and 45 B0 images were acquired in 14 h 30 min. The fiber structure was analyzed by the fiber-tracking module in the software MedINRIA. The fiber orientation in the overlapping myocardial region accessed by both ETI and DTI could therefore be compared, thanks to the co-registered imaging system. Results from all ten samples showed good correlation (r2 = 0.81, p < 0.0001) and good agreement (3.05° bias</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2008MaCom..77.1037J','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2008MaCom..77.1037J"><span>The generalized triangular <span class="hlt">decomposition</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Jiang, Yi; Hager, William W.; Li, Jian</p> <p>2008-06-01</p> <p>Given a complex matrix $\mathbf{H}$, we consider the <span class="hlt">decomposition</span> $\mathbf{H} = \mathbf{Q}\mathbf{R}\mathbf{P}^*$, where $\mathbf{R}$ is upper triangular and $\mathbf{Q}$ and $\mathbf{P}$ have orthonormal columns. 
Special instances of this <span class="hlt">decomposition</span> include the singular value <span class="hlt">decomposition</span> (SVD) and the Schur <span class="hlt">decomposition</span>, where $\mathbf{R}$ is an upper triangular matrix with the eigenvalues of $\mathbf{H}$ on the diagonal. We show that any diagonal for $\mathbf{R}$ can be achieved that satisfies Weyl's multiplicative majorization conditions: $\prod_{i=1}^{k} \vert r_i \vert \le \prod_{i=1}^{k} \sigma_i, \;\; 1 \le k < K, \qquad \prod_{i=1}^{K} \vert r_i \vert = \prod_{i=1}^{K} \sigma_i$, where $K$ is the rank of $\mathbf{H}$, $\sigma_i$ is the $i$-th largest singular value of $\mathbf{H}$, and $r_i$ is the $i$-th largest (in magnitude) diagonal element of $\mathbf{R}$. Given a vector $\mathbf{r}$ which satisfies Weyl's conditions, we call the <span class="hlt">decomposition</span> $\mathbf{H} = \mathbf{Q}\mathbf{R}\mathbf{P}^*$, where $\mathbf{R}$ is upper triangular with prescribed diagonal $\mathbf{r}$, the generalized triangular <span class="hlt">decomposition</span> (GTD). A direct (nonrecursive) algorithm is developed for computing the GTD. This algorithm starts with the SVD and applies a series of permutations and Givens rotations to obtain the GTD. The numerical stability of the GTD update step is established. The GTD can be used to optimize the power utilization of a communication channel, while taking into account quality of service requirements for subchannels. 
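Weyl's conditions above are straightforward to verify numerically. The helper below is a sketch under our own naming (not code from the paper):

```python
import numpy as np

def satisfies_weyl(r, sigma):
    """Weyl's multiplicative majorization: every prefix product of the |r_i|
    (sorted descending by magnitude) is bounded by the corresponding prefix
    product of singular values, with equality for the full product."""
    r = np.sort(np.abs(np.asarray(r, dtype=float)))[::-1]
    s = np.sort(np.asarray(sigma, dtype=float))[::-1]
    pr, ps = np.cumprod(r), np.cumprod(s)
    return bool(np.all(pr[:-1] <= ps[:-1] * (1 + 1e-12)) and np.isclose(pr[-1], ps[-1]))

sigma = [3.0, 2.0, 1.0]                           # singular values, product 6
print(satisfies_weyl([3.0, 2.0, 1.0], sigma))     # the SVD diagonal itself
print(satisfies_weyl([6.0 ** (1/3)] * 3, sigma))  # equal-magnitude diagonal, same product
print(satisfies_weyl([4.0, 1.5, 1.0], sigma))     # 4 exceeds sigma_1, not achievable
```

The achievable diagonals thus interpolate between the SVD case ($r_i = \sigma_i$) and an equal-magnitude diagonal at the geometric mean, which is what makes the GTD useful for balancing subchannel gains.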
Another application of the GTD is to inverse eigenvalue problems where the goal is to construct matrices with prescribed eigenvalues and singular values.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/19950022338','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19950022338"><span>Optimal domain <span class="hlt">decomposition</span> strategies</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Yoon, Yonghyun; Soni, Bharat K.</p> <p>1995-01-01</p> <p>The primary interest of the authors is in the area of grid generation, in particular, optimal domain <span class="hlt">decomposition</span> about realistic configurations. A grid generation procedure with optimal blocking strategies has been developed to generate multi-block grids for a circular-to-rectangular transition duct. The focus of this study is the domain <span class="hlt">decomposition</span> which optimizes solution algorithm/block compatibility based on geometrical complexities as well as the physical characteristics of flow field. 
The progress realized in this study is summarized in this paper.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017PhRvL.119g0401V','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017PhRvL.119g0401V"><span>Bridging Perturbative Expansions with <span class="hlt">Tensor</span> Networks</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Vanderstraeten, Laurens; Mariën, Michaël; Haegeman, Jutho; Schuch, Norbert; Vidal, Julien; Verstraete, Frank</p> <p>2017-08-01</p> <p>We demonstrate that perturbative expansions for quantum many-body systems can be rephrased in terms of <span class="hlt">tensor</span> networks, thereby providing a natural framework for interpolating perturbative expansions across a quantum phase transition. This approach leads to classes of <span class="hlt">tensor</span>-network states parametrized by few parameters with a clear physical meaning, while still providing excellent variational energies. We also demonstrate how to construct perturbative expansions of the entanglement Hamiltonian, whose eigenvalues form the entanglement spectrum, and how the <span class="hlt">tensor</span>-network approach gives rise to order parameters for topological phase transitions.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016MSSP...68..207B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016MSSP...68..207B"><span>Low uncertainty method for inertia <span class="hlt">tensor</span> identification</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Barreto, J. P.; Muñoz, L. 
E.</p> <p>2016-02-01</p> <p>The uncertainty associated with the experimental identification of the inertia <span class="hlt">tensor</span> can be reduced by implementing adequate rotational and translational motions in the experiment. This paper proposes a particular 3D trajectory that improves the experimental measurement of the inertia <span class="hlt">tensor</span> of rigid bodies. Such a trajectory corresponds to a motion in which the object is rotated around a large number of instantaneous axes, while the center of gravity remains static. The uncertainty in the inertia <span class="hlt">tensor</span> components obtained with this practice is reduced by 45% on average, compared with that obtained using simple rotations around three perpendicular axes (Roll, Pitch, Yaw).</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4538590','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4538590"><span>Incremental Discriminant Analysis in <span class="hlt">Tensor</span> Space</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Chang, Liu; Weidong, Zhao; Tao, Yan; Qiang, Pu; Xiaodan, Du</p> <p>2015-01-01</p> <p>To study incremental machine learning in <span class="hlt">tensor</span> space, this paper proposes incremental <span class="hlt">tensor</span> discriminant analysis. The algorithm employs a <span class="hlt">tensor</span> representation to carry out discriminant analysis and combines incremental learning to alleviate the computational cost. This paper proves that the algorithm can be unified into the graph framework theoretically and analyzes the time and space complexity in detail. 
Experiments on facial image detection have shown that the algorithm not only achieves sound performance compared with other algorithms, but also markedly reduces the computational cost. PMID:26339229</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://www.osti.gov/scitech/servlets/purl/409872','SCIGOV-STC'); return false;" href="http://www.osti.gov/scitech/servlets/purl/409872"><span><span class="hlt">Tensor</span> methods for large, sparse unconstrained optimization</span></a></p> <p><a target="_blank" href="http://www.osti.gov/scitech">SciTech Connect</a></p> <p>Bouaricha, A.</p> <p>1996-11-01</p> <p><span class="hlt">Tensor</span> methods for unconstrained optimization were first introduced by Schnabel and Chow [SIAM J. Optimization, 1 (1991), pp. 293-315], who describe these methods for small to moderate size problems. This paper extends these methods to large, sparse unconstrained optimization problems. This requires an entirely new way of solving the <span class="hlt">tensor</span> model that makes the methods suitable for solving large, sparse optimization problems efficiently. We present test results for sets of problems where the Hessian at the minimizer is nonsingular and where it is singular. 
These results show that <span class="hlt">tensor</span> methods are significantly more efficient and more reliable than standard methods based on Newton's method.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.osti.gov/scitech/biblio/22251660','SCIGOV-STC'); return false;" href="https://www.osti.gov/scitech/biblio/22251660"><span>Killing <span class="hlt">tensors</span>, warped products and the orthogonal separation of the Hamilton-Jacobi equation</span></a></p> <p><a target="_blank" href="http://www.osti.gov/scitech">SciTech Connect</a></p> <p>Rajaratnam, Krishan; McLenaghan, Raymond G.</p> <p>2014-01-15</p> <p>We study Killing <span class="hlt">tensors</span> in the context of warped products and apply the results to the problem of orthogonal separation of the Hamilton-Jacobi equation. This work is motivated primarily by the case of spaces of constant curvature where warped products are abundant. We first characterize Killing <span class="hlt">tensors</span> which have a natural algebraic <span class="hlt">decomposition</span> in warped products. We then apply this result to show how one can obtain the Killing-Stäckel space (KS-space) for separable coordinate systems decomposable in warped products. This result in combination with Benenti's theory for constructing the KS-space of certain special separable coordinates can be used to obtain the KS-space for all orthogonal separable coordinates found by Kalnins and Miller in Riemannian spaces of constant curvature. Next we characterize when a natural Hamiltonian is separable in coordinates decomposable in a warped product by showing that the conditions originally given by Benenti can be reduced. 
Finally, we use this characterization and concircular <span class="hlt">tensors</span> (a special type of torsionless conformal Killing <span class="hlt">tensor</span>) to develop a general algorithm to determine when a natural Hamiltonian is separable in a special class of separable coordinates which include all orthogonal separable coordinates in spaces of constant curvature.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2005SPIE.5746..148K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2005SPIE.5746..148K"><span>Robust multi-component modeling of diffusion <span class="hlt">tensor</span> magnetic resonance imaging data</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Kadah, Yasser M.; Ma, Xiangyang; LaConte, Stephen; Yassine, Inas; Hu, Xiaoping</p> <p>2005-04-01</p> <p>In conventional diffusion <span class="hlt">tensor</span> imaging (DTI) based on magnetic resonance data, each voxel is assumed to contain a single component having diffusion properties that can be fully represented by a single <span class="hlt">tensor</span>. In spite of its apparent lack of generality, this assumption has been widely used in clinical and research purpose. This resulted in situations where correct interpretation of data was hampered by mixing of components and/or tractography. Even though this assumption can be valid in some cases, the general case involves mixing of components resulting in significant deviation from the single <span class="hlt">tensor</span> model. Hence, a strategy that allows the <span class="hlt">decomposition</span> of data based on a mixture model has the potential of enhancing the diagnostic value of DTI. This work aims at developing a stable solution for the most general problem of multi-component modeling of diffusion <span class="hlt">tensor</span> imaging data. 
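As a toy illustration of why a single-tensor fit can be inadequate, the following sketch simulates the diffusion-weighted signal of a two-component mixture (illustrative tensors and volume fraction; this is not the authors' projection-pursuit estimator):

```python
import numpy as np

# Two fiber populations crossing at 90 degrees; diffusivities in mm^2/s.
D1 = np.diag([1.7e-3, 0.2e-3, 0.2e-3])   # component 1: fiber along x
D2 = np.diag([0.2e-3, 1.7e-3, 0.2e-3])   # component 2: fiber along y
b, f = 1000.0, 0.5                        # b-value (s/mm^2) and volume fraction

def signal(g):
    """Normalized DW signal S(g)/S0 of the two-component mixture model."""
    g = np.asarray(g, dtype=float)
    g = g / np.linalg.norm(g)
    return f * np.exp(-b * g @ D1 @ g) + (1.0 - f) * np.exp(-b * g @ D2 @ g)

# By symmetry the mixture signal is identical along x and y, so a single
# tensor fitted to this voxel would spuriously average the two directions.
print(signal([1, 0, 0]), signal([0, 1, 0]))
```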
This model does not include any assumptions about the nature or volume ratio of any of the components and utilizes a projection pursuit based strategy whereby a combination of exhaustive search and least-squares estimation is used to estimate 1D projections of the solution. Then, such solutions are combined to compute the multidimensional components in a fast and robust manner. The new method is demonstrated by both computer simulations and real diffusion-weighted data. The preliminary results indicate the success of the new method and its potential to enhance the interpretation of DTI data sets.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2014JMP....55a3505R','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2014JMP....55a3505R"><span>Killing <span class="hlt">tensors</span>, warped products and the orthogonal separation of the Hamilton-Jacobi equation</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Rajaratnam, Krishan; McLenaghan, Raymond G.</p> <p>2014-01-01</p> <p>We study Killing <span class="hlt">tensors</span> in the context of warped products and apply the results to the problem of orthogonal separation of the Hamilton-Jacobi equation. This work is motivated primarily by the case of spaces of constant curvature where warped products are abundant. We first characterize Killing <span class="hlt">tensors</span> which have a natural algebraic <span class="hlt">decomposition</span> in warped products. We then apply this result to show how one can obtain the Killing-Stäckel space (KS-space) for separable coordinate systems decomposable in warped products. 
This result in combination with Benenti's theory for constructing the KS-space of certain special separable coordinates can be used to obtain the KS-space for all orthogonal separable coordinates found by Kalnins and Miller in Riemannian spaces of constant curvature. Next we characterize when a natural Hamiltonian is separable in coordinates decomposable in a warped product by showing that the conditions originally given by Benenti can be reduced. Finally, we use this characterization and concircular <span class="hlt">tensors</span> (a special type of torsionless conformal Killing <span class="hlt">tensor</span>) to develop a general algorithm to determine when a natural Hamiltonian is separable in a special class of separable coordinates which include all orthogonal separable coordinates in spaces of constant curvature.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/19063978','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/19063978"><span>Regularized positive-definite fourth order <span class="hlt">tensor</span> field estimation from DW-MRI.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Barmpoutis, Angelos; Hwang, Min Sig; Howland, Dena; Forder, John R; Vemuri, Baba C</p> <p>2009-03-01</p> <p>In Diffusion Weighted Magnetic Resonance Image (DW-MRI) <span class="hlt">processing</span>, a 2nd order <span class="hlt">tensor</span> has been commonly used to approximate the diffusivity function at each lattice point of the DW-MRI data. From this <span class="hlt">tensor</span> approximation, one can compute useful scalar quantities (e.g. anisotropy, mean diffusivity) which have been clinically used for monitoring encephalopathy, sclerosis, ischemia and other brain disorders. It is now well known that this 2nd-order <span class="hlt">tensor</span> approximation fails to capture complex local tissue structures, e.g. 
crossing fibers, and as a result, the scalar quantities derived from these <span class="hlt">tensors</span> are grossly inaccurate at such locations. In this paper we employ a 4th order symmetric positive-definite (SPD) <span class="hlt">tensor</span> approximation to represent the diffusivity function and present a novel technique to estimate these <span class="hlt">tensors</span> from the DW-MRI data guaranteeing the SPD property. Several higher order <span class="hlt">tensor</span> approximations of the diffusivity function have been reported in the literature, but none of them guarantees the positivity of the estimates, which is a fundamental constraint since negative values of the diffusivity are not meaningful. In this paper we represent the 4th-order <span class="hlt">tensors</span> as ternary quartics and then apply Hilbert's theorem on ternary quartics along with the Iwasawa parametrization to guarantee an SPD 4th-order <span class="hlt">tensor</span> approximation from the DW-MRI data. 
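The role of the positivity constraint can be illustrated with a simple sum-of-squares parametrization (a sketch of the general idea, not the paper's Hilbert/Iwasawa construction): writing the quartic diffusivity profile as d(g) = m(g)^T G m(g) over the six degree-2 monomials m(g), with a Gram matrix G = L L^T, makes d(g) nonnegative in every direction by construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Parameterize the Gram matrix as G = L L^T so that G is positive
# semidefinite whatever L is; L is the quantity one would actually fit
# to the DW-MRI data.
L = rng.standard_normal((6, 6))
G = L @ L.T

def diffusivity(g, G):
    """Quartic diffusivity d(g) = m(g)^T G m(g) for a unit direction g,
    with m(g) the six degree-2 monomials in the components of g."""
    g1, g2, g3 = g
    m = np.array([g1 * g1, g2 * g2, g3 * g3, g1 * g2, g1 * g3, g2 * g3])
    return float(m @ G @ m)

# Nonnegative in every direction, regardless of L: d(g) = ||L^T m(g)||^2.
g = rng.standard_normal(3)
g /= np.linalg.norm(g)
print(diffusivity(g, G) >= 0.0)
```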
The performance of this model is depicted on synthetic data as well as real DW-MRIs from a set of excised control and injured rat spinal cords, showing accurate estimation of scalar quantities such as generalized anisotropy and trace as well as fiber orientations.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=2727997','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=2727997"><span>Regularized Positive-Definite Fourth Order <span class="hlt">Tensor</span> Field Estimation from DW-MRI★</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Barmpoutis, Angelos; Vemuri, Baba C.; Howland, Dena; Forder, John R.</p> <p>2009-01-01</p> <p>In Diffusion Weighted Magnetic Resonance Image (DW-MRI) <span class="hlt">processing</span>, a 2nd order <span class="hlt">tensor</span> has been commonly used to approximate the diffusivity function at each lattice point of the DW-MRI data. From this <span class="hlt">tensor</span> approximation, one can compute useful scalar quantities (e.g. anisotropy, mean diffusivity) which have been clinically used for monitoring encephalopathy, sclerosis, ischemia and other brain disorders. It is now well known that this 2nd-order <span class="hlt">tensor</span> approximation fails to capture complex local tissue structures, e.g. crossing fibers, and as a result, the scalar quantities derived from these <span class="hlt">tensors</span> are grossly inaccurate at such locations. In this paper we employ a 4th order symmetric positive-definite (SPD) <span class="hlt">tensor</span> approximation to represent the diffusivity function and present a novel technique to estimate these <span class="hlt">tensors</span> from the DW-MRI data guaranteeing the SPD property. 
Several higher order <span class="hlt">tensor</span> approximations of the diffusivity function have been reported in the literature, but none of them guarantees the positivity of the estimates, which is a fundamental constraint since negative values of the diffusivity are not meaningful. In this paper we represent the 4th-order <span class="hlt">tensors</span> as ternary quartics and then apply Hilbert’s theorem on ternary quartics along with the Iwasawa parametrization to guarantee an SPD 4th-order <span class="hlt">tensor</span> approximation from the DW-MRI data. The performance of this model is depicted on synthetic data as well as real DW-MRIs from a set of excised control and injured rat spinal cords, showing accurate estimation of scalar quantities such as generalized anisotropy and trace as well as fiber orientations. PMID:19063978</p> </li> </ol> <div class="pull-right"> <ul class="pagination"> <li><a href="#" onclick='return showDiv("page_1");'>«</a></li> <li><a href="#" onclick='return showDiv("page_21");'>21</a></li> <li><a href="#" onclick='return showDiv("page_22");'>22</a></li> <li><a href="#" onclick='return showDiv("page_23");'>23</a></li> <li class="active"><span>24</span></li> <li><a href="#" onclick='return showDiv("page_25");'>25</a></li> <li><a href="#" onclick='return showDiv("page_25");'>»</a></li> </ul> </div> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_24 --> <div id="page_25" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <div class="pull-right"> <ul class="pagination"> <li><a href="#" onclick='return showDiv("page_1");'>«</a></li> <li><a href="#" onclick='return showDiv("page_21");'>21</a></li> <li><a href="#" onclick='return showDiv("page_22");'>22</a></li> <li><a href="#" onclick='return showDiv("page_23");'>23</a></li> <li><a href="#" onclick='return showDiv("page_24");'>24</a></li> <li class="active"><span>25</span></li> <li><a href="#" onclick='return showDiv("page_25");'>»</a></li> </ul> </div> 
</div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="481"> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2013PhDT........98S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2013PhDT........98S"><span>Characterizing dielectric <span class="hlt">tensors</span> of anisotropic materials from a single measurement</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Smith, Paula Kay</p> <p></p> <p>Ellipsometry techniques look at changes in polarization states to measure optical properties of thin film materials. A beam reflected from a substrate measures the real and imaginary parts of the index of the material represented as n and k, respectively. Measuring the substrate at several angles gives additional information that can be used to measure multilayer thin film stacks. However, the outstanding problem in standard ellipsometry is that it uses a limited number of incident polarization states (s and p). This limits the technique to isotropic materials. The technique discussed in this paper extends the standard <span class="hlt">process</span> to measure anisotropic materials by using a larger set of incident polarization states. By using a polarimeter to generate several incident polarization states and measure the polarization properties of the sample, ellipsometry can be performed on biaxial materials. Use of an optimization algorithm in conjunction with biaxial ellipsometry can more accurately determine the dielectric <span class="hlt">tensor</span> of individual layers in multilayer structures. Biaxial ellipsometry is a technique that measures the dielectric <span class="hlt">tensors</span> of a biaxial substrate, single-layer thin film, or multi-layer structure. 
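The connection between the Jones description of a sample and the Mueller matrix measured by such a polarimeter is the standard transformation M = A (J ⊗ J*) A⁻¹. The sketch below (an illustration, not the author's calibration pipeline) checks it for a horizontal linear polarizer:

```python
import numpy as np

# Fixed transformation matrix between the Jones and Stokes-Mueller pictures.
A = np.array([[1,  0,   0,  1],
              [1,  0,   0, -1],
              [0,  1,   1,  0],
              [0, 1j, -1j,  0]], dtype=complex)

def jones_to_mueller(J):
    """Mueller matrix M = A (J kron J*) A^-1 of a non-depolarizing element."""
    M = A @ np.kron(J, J.conj()) @ np.linalg.inv(A)
    return M.real  # the imaginary part vanishes for a physical Jones matrix

# Sanity check: a horizontal linear polarizer.
J_pol = np.array([[1, 0],
                  [0, 0]], dtype=complex)
M = jones_to_mueller(J_pol)
print(np.round(M, 3))
```

This recovers the familiar polarizer Mueller matrix 0.5·[[1,1,0,0],[1,1,0,0],[0,0,0,0],[0,0,0,0]]; in a fitting pipeline, the Jones matrix computed from a trial dielectric tensor at each angle of incidence and wavelength would be pushed through the same transformation for comparison with the measured Mueller images.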
The dielectric <span class="hlt">tensor</span> of a biaxial material consists of the real and imaginary parts of the three orthogonal principal indices (n_x + ik_x, n_y + ik_y and n_z + ik_z) as well as three Euler angles (α, β and γ) to describe its orientation. The method utilized in this work measures an angle-of-incidence Mueller matrix from a Mueller matrix imaging polarimeter equipped with a pair of microscope objectives that have low polarization properties. To accurately determine the dielectric <span class="hlt">tensors</span> for multilayer samples, the angle-of-incidence Mueller matrix images are collected for multiple wavelengths. This is done in either a transmission mode or a reflection mode, each incorporating an appropriate dispersion model. Given approximate a priori knowledge of the dielectric</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016JNEng..13b6005Z','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016JNEng..13b6005Z"><span><span class="hlt">Tensor</span>-based classification of an auditory mobile BCI without a subject-specific calibration phase</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Zink, Rob; Hunyadi, Borbála; Van Huffel, Sabine; De Vos, Maarten</p> <p>2016-04-01</p> <p>Objective. One of the major drawbacks in EEG brain-computer interfaces (BCI) is the need for subject-specific training of the classifier. By removing the need for a supervised calibration phase, new users could potentially explore a BCI faster. In this work we aim to remove this subject-specific calibration phase and allow direct classification. Approach. We explore canonical polyadic <span class="hlt">decompositions</span> and block term <span class="hlt">decompositions</span> of the EEG. These methods exploit structure in higher dimensional data arrays called <span class="hlt">tensors</span>. 
The BCI <span class="hlt">tensors</span> are constructed by concatenating ERP templates from other subjects to a target and non-target trial and the inherent structure guides a <span class="hlt">decomposition</span> that allows accurate classification. We illustrate the new method on data from a three-class auditory oddball paradigm. Main results. The presented approach leads to a fast and intuitive classification with accuracies competitive with a supervised and cross-validated LDA approach. Significance. The described methods are a promising new way of classifying BCI data with a forthright link to the original P300 ERP signal over the conventional and widely used supervised approaches.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.osti.gov/scitech/biblio/21020158','SCIGOV-STC'); return false;" href="https://www.osti.gov/scitech/biblio/21020158"><span>Evolution of <span class="hlt">tensor</span> perturbations in scalar-<span class="hlt">tensor</span> theories of gravity</span></a></p> <p><a target="_blank" href="http://www.osti.gov/scitech">SciTech Connect</a></p> <p>Carloni, Sante; Dunsby, Peter K. S.</p> <p>2007-03-15</p> <p>The evolution equations for <span class="hlt">tensor</span> perturbations in a generic scalar-<span class="hlt">tensor</span> theory of gravity are presented. Exact solutions are given for a specific class of theories and Friedmann-Lemaitre-Robertson-Walker backgrounds. 
In these cases it is shown that, although the evolution of <span class="hlt">tensor</span> modes depends on the choice of parameters of the theory, no amplification is possible if the gravitational interaction is attractive.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26494360','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26494360"><span>Diffusion <span class="hlt">Tensor</span> Imaging of Pedophilia.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Cantor, James M; Lafaille, Sophie; Soh, Debra W; Moayedi, Massieh; Mikulis, David J; Girard, Todd A</p> <p>2015-11-01</p> <p>Pedophilia is a principal motivator of child molestation, incurring great emotional and financial burdens on victims and society. Even among pedophiles who never commit any offense, the condition requires lifelong suppression and control. Previous comparison using voxel-based morphometry (VBM) of MR images from a large sample of pedophiles and controls revealed group differences in white matter. The present study therefore sought to verify and characterize white matter involvement using diffusion <span class="hlt">tensor</span> imaging (DTI), which better captures the microstructure of white matter than does VBM. Pedophilic ex-offenders (n=24) were compared with healthy, age-matched controls with no criminal record and no indication of pedophilia (n=32). White matter microstructure was analyzed with Tract-Based Spatial Statistics, and the trajectories of implicated fiber bundles were identified by probabilistic tractography. Groups showed significant, highly focused differences in DTI parameters which related to participants’ genital responses to sexual depictions of children, but not to measures of psychopathy or to childhood histories of physical abuse, sexual abuse, or neglect.
Some previously reported gray matter differences were suggested under highly liberal statistical conditions (p(uncorrected)<.005), but did not survive ordinary statistical correction (whole-brain per-voxel false discovery rate of 5%). These results confirm that pedophilia is characterized by neuroanatomical differences in white matter microstructure, over and above any neural characteristics attributable to psychopathy and childhood adversity, which show neuroanatomic footprints of their own. Although some gray matter structures were implicated previously, only a few have emerged reliably.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.osti.gov/scitech/biblio/22572195','SCIGOV-STC'); return false;" href="https://www.osti.gov/scitech/biblio/22572195"><span>Entangled scalar and <span class="hlt">tensor</span> fluctuations during inflation</span></a></p> <p><a target="_blank" href="http://www.osti.gov/scitech">SciTech Connect</a></p> <p>Collins, Hael; Vardanyan, Tereza</p> <p>2016-11-29</p> <p>We show how the choice of an inflationary state that entangles scalar and <span class="hlt">tensor</span> fluctuations affects the angular two-point correlation functions of the T, E, and B modes of the cosmic microwave background. The propagators for a state starting with some general quadratic entanglement are solved exactly, leading to predictions for the primordial scalar-scalar, <span class="hlt">tensor-tensor</span>, and scalar-<span class="hlt">tensor</span> power spectra. These power spectra are expressed in terms of general functions that describe the entangling structure of the initial state relative to the standard Bunch-Davies vacuum. We illustrate how such a state would modify the angular correlations in the CMB with a simple example where the initial state is a small perturbation away from the Bunch-Davies state.
Because the state breaks some of the rotational symmetries, the angular power spectra no longer need be strictly diagonal.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26124254','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26124254"><span>Quantum theory with bold operator <span class="hlt">tensors</span>.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Hardy, Lucien</p> <p>2015-08-06</p> <p>In this paper, we present a formulation of quantum theory in terms of bold operator <span class="hlt">tensors</span>. A circuit is built up of operations where an operation corresponds to a use of an apparatus. We associate collections of operator <span class="hlt">tensors</span> (which together comprise a bold operator) with these apparatus uses. We give rules for combining bold operator <span class="hlt">tensors</span> such that, for a circuit, they give a probability distribution over the possible outcomes. If we impose certain physicality constraints on the bold operator <span class="hlt">tensors</span>, then we get exactly the quantum formalism. We provide both symbolic and diagrammatic ways to represent these calculations. This approach is manifestly covariant in that it does not require us to foliate the circuit into time steps and then evolve a state. 
Thus, the approach forms a natural starting point for an operational approach to quantum field theory.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://www.osti.gov/scitech/servlets/purl/1005408','SCIGOV-STC'); return false;" href="http://www.osti.gov/scitech/servlets/purl/1005408"><span>Shifted power method for computing <span class="hlt">tensor</span> eigenpairs.</span></a></p> <p><a target="_blank" href="http://www.osti.gov/scitech">SciTech Connect</a></p> <p>Mayo, Jackson R.; Kolda, Tamara Gibson</p> <p>2010-10-01</p> <p>Recent work on eigenvalues and eigenvectors for <span class="hlt">tensors</span> of order m ≥ 3 has been motivated by applications in blind source separation, magnetic resonance imaging, molecular conformation, and more. In this paper, we consider methods for computing real symmetric-<span class="hlt">tensor</span> eigenpairs of the form Ax^(m-1) = λx subject to ‖x‖ = 1, which is closely related to optimal rank-1 approximation of a symmetric <span class="hlt">tensor</span>. Our contribution is a novel shifted symmetric higher-order power method (SS-HOPM), which we show is guaranteed to converge to a <span class="hlt">tensor</span> eigenpair. SS-HOPM can be viewed as a generalization of the power iteration method for matrices or of the symmetric higher-order power method. Additionally, using fixed point analysis, we can characterize exactly which eigenpairs can and cannot be found by the method.
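The shifted power iteration just described is easy to sketch for an order-3 symmetric tensor: apply the tensor to the current vector twice, add a shift, and renormalize. A minimal illustration (our own simplified code with a fixed positive shift, not the paper's adaptive shift selection):

```python
import numpy as np

def ss_hopm(A, alpha=2.0, n_iter=300, seed=0):
    """Sketch of a shifted symmetric higher-order power iteration for a
    symmetric order-3 tensor A: x <- normalize(A x x + alpha * x).
    Returns (lam, x) approximating an eigenpair A x x = lam * x."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    x = rng.standard_normal(n)
    x /= np.linalg.norm(x)
    for _ in range(n_iter):
        Axx = np.einsum('ijk,j,k->i', A, x, x)   # A x^{m-1} for m = 3
        x_new = Axx + alpha * x                  # positive shift stabilizes the iteration
        x = x_new / np.linalg.norm(x_new)
    lam = np.einsum('ijk,i,j,k->', A, x, x, x)   # generalized Rayleigh quotient
    return lam, x
```

For a symmetric rank-1 tensor built from a vector v, the iteration recovers v/‖v‖ with eigenvalue ‖v‖³, which gives a quick sanity check of the sketch.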
Numerical examples are presented, including examples from an extension of the method to finding complex eigenpairs.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.osti.gov/scitech/biblio/22382092','SCIGOV-STC'); return false;" href="https://www.osti.gov/scitech/biblio/22382092"><span>The Weyl <span class="hlt">tensor</span> correlator in cosmological spacetimes</span></a></p> <p><a target="_blank" href="http://www.osti.gov/scitech">SciTech Connect</a></p> <p>Fröb, Markus B.</p> <p>2014-12-01</p> <p>We give a general expression for the Weyl <span class="hlt">tensor</span> two-point function in a general Friedmann-Lemaître-Robertson-Walker spacetime. We work in reduced phase space for the perturbations, i.e., quantize only the dynamical degrees of freedom without adding any gauge-fixing term. The general formula is illustrated by a calculation in slow-roll single-field inflation to first order in the slow-roll parameters ε and δ, and the result is shown to have the correct de Sitter limit as ε, δ → 0. Furthermore, it is seen that the Weyl <span class="hlt">tensor</span> correlation function in slow-roll does not suffer from infrared divergences, unlike the two-point functions of the metric and scalar field perturbations.
Lastly, we show how to recover the usual <span class="hlt">tensor</span> power spectrum from the Weyl <span class="hlt">tensor</span> correlation function.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016JCAP...11..059C','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016JCAP...11..059C"><span>Entangled scalar and <span class="hlt">tensor</span> fluctuations during inflation</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Collins, Hael; Vardanyan, Tereza</p> <p>2016-11-01</p> <p>We show how the choice of an inflationary state that entangles scalar and <span class="hlt">tensor</span> fluctuations affects the angular two-point correlation functions of the T, E, and B modes of the cosmic microwave background. The propagators for a state starting with some general quadratic entanglement are solved exactly, leading to predictions for the primordial scalar-scalar, <span class="hlt">tensor-tensor</span>, and scalar-<span class="hlt">tensor</span> power spectra. These power spectra are expressed in terms of general functions that describe the entangling structure of the initial state relative to the standard Bunch-Davies vacuum. We illustrate how such a state would modify the angular correlations in the CMB with a simple example where the initial state is a small perturbation away from the Bunch-Davies state. 
Because the state breaks some of the rotational symmetries, the angular power spectra no longer need be strictly diagonal.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016PhRvA..93a3855S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016PhRvA..93a3855S"><span>Kinetic-energy-momentum <span class="hlt">tensor</span> in electrodynamics</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Sheppard, Cheyenne J.; Kemp, Brandon A.</p> <p>2016-01-01</p> <p>We show that the Einstein-Laub formulation of electrodynamics is invalid since it yields a stress-energy-momentum (SEM) <span class="hlt">tensor</span> that is not frame invariant. Two leading hypotheses for the kinetic formulation of electrodynamics (Chu and Einstein-Laub) are studied by use of the relativistic principle of virtual power, mathematical modeling, Lagrangian methods, and SEM transformations. The relativistic principle of virtual power is used to demonstrate the field dynamics associated with energy relations within a relativistic framework. Lorentz transformations of the respective SEM <span class="hlt">tensors</span> demonstrate the relativistic frameworks for each studied formulation. Mathematical modeling of stationary and moving media is used to illustrate the differences and discrepancies of specific proposed kinetic formulations, where energy relations and conservation theorems are employed. Lagrangian methods are utilized to derive the field kinetic Maxwell's equations, which are studied with respect to SEM <span class="hlt">tensor</span> transforms. 
Within each analysis, the Einstein-Laub formulation violates special relativity, which invalidates the Einstein-Laub SEM <span class="hlt">tensor</span>.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.osti.gov/scitech/biblio/22454505','SCIGOV-STC'); return false;" href="https://www.osti.gov/scitech/biblio/22454505"><span>The Weyl <span class="hlt">tensor</span> correlator in cosmological spacetimes</span></a></p> <p><a target="_blank" href="http://www.osti.gov/scitech">SciTech Connect</a></p> <p>Fröb, Markus B.</p> <p>2014-12-05</p> <p>We give a general expression for the Weyl <span class="hlt">tensor</span> two-point function in a general Friedmann-Lemaître-Robertson-Walker spacetime. We work in reduced phase space for the perturbations, i.e., quantize only the dynamical degrees of freedom without adding any gauge-fixing term. The general formula is illustrated by a calculation in slow-roll single-field inflation to first order in the slow-roll parameters ϵ and δ, and the result is shown to have the correct de Sitter limit as ϵ,δ→0. Furthermore, it is seen that the Weyl <span class="hlt">tensor</span> correlation function in slow-roll does not suffer from infrared divergences, unlike the two-point functions of the metric and scalar field perturbations. 
Lastly, we show how to recover the usual <span class="hlt">tensor</span> power spectrum from the Weyl <span class="hlt">tensor</span> correlation function.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017CRMec.345..399S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017CRMec.345..399S"><span>Stochastic modeling and generation of random fields of elasticity <span class="hlt">tensors</span>: A unified information-theoretic approach</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Staber, Brian; Guilleminot, Johann</p> <p>2017-06-01</p> <p>In this Note, we present a unified approach to the information-theoretic modeling and simulation of a class of elasticity random fields, for all physical symmetry classes. The new stochastic representation builds upon a Walpole <span class="hlt">tensor</span> <span class="hlt">decomposition</span>, which allows the maximum entropy constraints to be decoupled in accordance with the <span class="hlt">tensor</span> (sub)algebras associated with the class under consideration. In contrast to previous works where the construction was carried out on the scalar-valued Walpole coordinates, the proposed strategy involves both matrix-valued and scalar-valued random fields. This enables, in particular, the construction of a generation algorithm based on a memoryless transformation, hence improving the computational efficiency of the framework. Two applications involving weak symmetries and sampling over spherical and cylindrical geometries are subsequently provided. 
These numerical experiments are relevant to the modeling of elastic interphases in nanocomposites, as well as to the simulation of spatially dependent wood properties for instance.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4664199','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4664199"><span><span class="hlt">Tensor</span> Classification of N-point Correlation Function features for Histology Tissue Segmentation</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Mosaliganti, Kishore; Janoos, Firdaus; Irfanoglu, Okan; Ridgway, Randall; Machiraju, Raghu; Huang, Kun; Saltz, Joel; Leone, Gustavo; Ostrowski, Michael</p> <p>2015-01-01</p> <p>In this paper, we utilize the N-point correlation functions (N-pcfs) to construct an appropriate feature space for achieving tissue segmentation in histology-stained microscopic images. The N-pcfs estimate microstructural constituent packing densities and their spatial distribution in a tissue sample. We represent the multi-phase properties estimated by the N-pcfs in a <span class="hlt">tensor</span> structure. Using a variant of higher-order singular value <span class="hlt">decomposition</span> (HOSVD) algorithm, we realize a robust classifier that provides a multi-linear description of the <span class="hlt">tensor</span> feature space. Validated results of the segmentation are presented in a case-study that focuses on understanding the genetic phenotyping differences in mouse placentae. PMID:18762444</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/16489242','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/16489242"><span>Application of modern <span class="hlt">tensor</span> calculus to engineered domain structures. 1. 
Calculation of tensorial covariants.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Kopský, Vojtech</p> <p>2006-03-01</p> <p>This article is a roadmap to a systematic calculation and tabulation of tensorial covariants for the point groups of material physics. The following are the essential steps in the described approach to <span class="hlt">tensor</span> calculus. (i) An exact specification of the considered point groups by their embellished Hermann-Mauguin and Schoenflies symbols. (ii) Introduction of oriented Laue classes of magnetic point groups. (iii) An exact specification of matrix ireps (irreducible representations). (iv) Introduction of so-called typical (standard) bases and variables -- typical invariants, relative invariants or components of the typical covariants. (v) Introduction of Clebsch-Gordan products of the typical variables. (vi) Calculation of tensorial covariants of ascending ranks with consecutive use of tables of Clebsch-Gordan products. (vii) Opechowski's magic relations between tensorial <span class="hlt">decompositions</span>. These steps are illustrated for groups of the tetragonal oriented Laue class D(4z) -- 4(z)2(x)2(xy) of magnetic point groups and for <span class="hlt">tensors</span> up to fourth rank.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016APS..DFD.D8005A','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016APS..DFD.D8005A"><span>An effective diffusivity model based on Koopman mode <span class="hlt">decomposition</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Arbabi, Hassan; Mezic, Igor</p> <p>2016-11-01</p> <p>In the previous work, we had shown that the Koopman mode <span class="hlt">decomposition</span> (KMD) can be used to analyze mixing of passive tracers in time-dependent flows. 
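The Koopman mode decomposition mentioned above separates a field's time evolution into a mean component and oscillatory components via harmonic time-averaging. A minimal sketch (function and variable names are ours, and uniform sampling in time is assumed):

```python
import numpy as np

def koopman_harmonic_modes(snapshots, freqs, dt=1.0):
    """Harmonic-averaging sketch: given snapshots[t, ...] of a scalar field
    sampled every dt, return the time-mean field (the mean Koopman mode) and,
    for each angular frequency w in freqs, the harmonic average
        (1/N) * sum_n snapshots[n] * exp(-i * w * n * dt),
    which approximates the Koopman mode at that frequency."""
    N = snapshots.shape[0]
    t = np.arange(N) * dt
    mean_mode = snapshots.mean(axis=0)
    modes = {}
    for w in freqs:
        phases = np.exp(-1j * w * t)
        # Contract the phase factors against the time axis of the snapshots.
        modes[w] = np.tensordot(phases, snapshots, axes=(0, 0)) / N
    return mean_mode, modes
```

For a field oscillating as a + b·cos(w0 t), the harmonic average at w0 converges to b/2 while the mean recovers a, so the two components of the decomposition can be checked directly.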
In this talk, we discuss the extension of this type of analysis to the case of advection-diffusion transport for passive scalar fields. Application of KMD to flows with complex time-dependence yields a <span class="hlt">decomposition</span> of the flow into mean, periodic and chaotic components. We briefly discuss the computation of these components using a combination of harmonic averaging and Discrete Fourier Transform. We propose a new effective diffusivity model in which the advection is dominated by mean and periodic components whereas the effect of chaotic motion is absorbed into an effective diffusivity <span class="hlt">tensor</span>. The performance of this model is investigated in the case of lid-driven cavity flow.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015NatSR...5E9491S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015NatSR...5E9491S"><span>Anisotropy of Local Stress <span class="hlt">Tensor</span> Leads to Line Tension</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Shao, Mingzhe; Wang, Jianjun; Zhou, Xin</p> <p>2015-04-01</p> <p>Line tension of three-phase contact lines is an important physical quantity in understanding many physical <span class="hlt">processes</span> such as heterogeneous nucleation, soft lithography and behaviours in biomembrane, such as budding, fission and fusion. Although the concept of line tension was proposed as the excess free energy in three-phase coexistence regions a century ago, its microscopic origin is subtle and achieves long-term concerns. In this paper, we correlate line tension with anisotropy of diagonal components of stress <span class="hlt">tensor</span> and give a general formula of line tension. 
By performing molecular dynamics simulations, we illustrate the formula proposed in Lennard-Jones gas/liquid/liquid and gas/liquid/solid systems, and find that the spatial distribution of line tension can be well revealed when the local distribution of the stress <span class="hlt">tensor</span> is considered.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4696606','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4696606"><span>MULTISCALE <span class="hlt">TENSOR</span> ANISOTROPIC FILTERING OF FLUORESCENCE MICROSCOPY FOR DENOISING MICROVASCULATURE</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Prasath, V. B. S.; Pelapur, R.; Glinskii, O. V.; Glinsky, V. V.; Huxley, V. H.; Palaniappan, K.</p> <p>2015-01-01</p> <p>Fluorescence microscopy images are contaminated by noise and improving image quality without blurring vascular structures by filtering is an important step in automatic image analysis. The application of interest here is to automatically extract the structural components of the microvascular system with accuracy from images acquired by fluorescence microscopy. A robust denoising <span class="hlt">process</span> is necessary in order to extract accurate vascular morphology information. For this purpose, we propose a multiscale <span class="hlt">tensor</span> with anisotropic diffusion model which progressively and adaptively updates the amount of smoothing while preserving vessel boundaries accurately. Based on a coherency enhancing flow with planar confidence measure and fused 3D structure information, our method integrates multiple scales for microvasculature preservation and noise removal of membrane structures.
Experimental results on simulated synthetic images and epifluorescence images show the advantage of our improvement over other related diffusion filters. We further show that the proposed multiscale integration approach improves denoising accuracy of different <span class="hlt">tensor</span> diffusion methods to obtain better microvasculature segmentation. PMID:26730456</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26730456','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26730456"><span>MULTISCALE <span class="hlt">TENSOR</span> ANISOTROPIC FILTERING OF FLUORESCENCE MICROSCOPY FOR DENOISING MICROVASCULATURE.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Prasath, V B S; Pelapur, R; Glinskii, O V; Glinsky, V V; Huxley, V H; Palaniappan, K</p> <p>2015-04-01</p> <p>Fluorescence microscopy images are contaminated by noise and improving image quality without blurring vascular structures by filtering is an important step in automatic image analysis. The application of interest here is to automatically extract the structural components of the microvascular system with accuracy from images acquired by fluorescence microscopy. A robust denoising <span class="hlt">process</span> is necessary in order to extract accurate vascular morphology information. For this purpose, we propose a multiscale <span class="hlt">tensor</span> with anisotropic diffusion model which progressively and adaptively updates the amount of smoothing while preserving vessel boundaries accurately. Based on a coherency enhancing flow with planar confidence measure and fused 3D structure information, our method integrates multiple scales for microvasculature preservation and noise removal of membrane structures.
We further show that the proposed multiscale integration approach improves denoising accuracy of different <span class="hlt">tensor</span> diffusion methods to obtain better microvasculature segmentation.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/21995019','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/21995019"><span>Assessment of bias for MRI diffusion <span class="hlt">tensor</span> imaging using SIMEX.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Lauzon, Carolyn B; Asman, Andrew J; Crainiceanu, Ciprian; Caffo, Brian C; Landman, Bennett A</p> <p>2011-01-01</p> <p>Diffusion <span class="hlt">Tensor</span> Imaging (DTI) is a Magnetic Resonance Imaging method for measuring water diffusion in vivo. One powerful DTI contrast is fractional anisotropy (FA). FA reflects the strength of water's diffusion directional preference and is a primary metric for neuronal fiber tracking. As with other DTI contrasts, FA measurements are obscured by the well-established presence of bias. DTI bias has been challenging to assess because it is a multivariable problem including SNR, six <span class="hlt">tensor</span> parameters, and the DTI collection and <span class="hlt">processing</span> method used. SIMEX is a modern statistical technique that estimates bias by tracking measurement error as a function of added noise.
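The SIMEX idea just described can be sketched generically: re-estimate the quantity of interest after simulating extra measurement noise at several levels, then extrapolate the trend back to the zero-noise limit. A minimal illustration (our own generic code, not the authors' DTI-specific implementation):

```python
import numpy as np

def simex(estimator, data, noise_sd, zetas=(0.5, 1.0, 1.5, 2.0),
          n_sim=200, seed=0):
    """SIMEX sketch: for each extra-noise level zeta, average the estimator
    over n_sim noisy replicates (added noise has variance zeta * noise_sd^2),
    fit a quadratic in (1 + zeta), and extrapolate to zeta = -1, i.e. the
    no-measurement-error limit, to obtain a bias-corrected estimate."""
    rng = np.random.default_rng(seed)
    levels = [0.0] + list(zetas)
    means = []
    for z in levels:
        scale = noise_sd * np.sqrt(z)
        ests = [estimator(data + rng.normal(0.0, scale, size=data.shape))
                for _ in range(n_sim)]
        means.append(np.mean(ests))
    coef = np.polyfit([1.0 + z for z in levels], means, 2)
    return np.polyval(coef, 0.0)   # evaluate at 1 + zeta = 0, i.e. zeta = -1
```

A classic check: the sample variance of noisy data overestimates the variance of the underlying signal by the noise variance, and SIMEX extrapolation removes most of that bias.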
Here, we use SIMEX to assess bias in FA measurements and show the method provides: (i) accurate FA bias estimates, (ii) a representation of FA bias that is dataset-specific and accessible to non-statisticians, and (iii) a first-time possibility for incorporation of bias into DTI data analysis.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016OptRv..23..614L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016OptRv..23..614L"><span>Adaptive registration of diffusion <span class="hlt">tensor</span> images on Lie groups</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Liu, Wei; Chen, LeiTing; Cai, HongBin; Qiu, Hang; Fei, Nanxi</p> <p>2016-08-01</p> <p>With diffusion <span class="hlt">tensor</span> imaging (DTI), more exquisite information on tissue microstructure is provided for medical image <span class="hlt">processing</span>. In this paper, we present a locally adaptive topology preserving method for DTI registration on Lie groups. The method aims to obtain more plausible diffeomorphisms for spatial transformations via accurate approximation for the local tangent space on the Lie group manifold. In order to capture an exact geometric structure of the Lie group, the local linear approximation is efficiently optimized by using the adaptive selection of the local neighborhood sizes on the given set of data points.
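Tangent-space computations on the manifold of diffusion tensors, as discussed above, are commonly realized with matrix logarithm and exponential maps on symmetric positive-definite (SPD) matrices. A hedged sketch of the standard log-Euclidean construction (illustrative only; this is not the paper's locally adaptive method):

```python
import numpy as np

def spd_log(S):
    """Matrix logarithm of an SPD tensor via eigendecomposition:
    S = V diag(w) V^T  ->  log(S) = V diag(log w) V^T."""
    w, V = np.linalg.eigh(S)
    return (V * np.log(w)) @ V.T

def spd_exp(L):
    """Matrix exponential of a symmetric matrix (inverse of spd_log)."""
    w, V = np.linalg.eigh(L)
    return (V * np.exp(w)) @ V.T

def log_euclidean_mean(tensors):
    """Log-Euclidean mean of SPD diffusion tensors: average in the
    tangent (log) space, then map back with the matrix exponential."""
    mean_log = np.mean([spd_log(S) for S in tensors], axis=0)
    return spd_exp(mean_log)
```

Averaging in the log domain keeps the result SPD, which is why log-domain tangent spaces are a natural setting for interpolating and registering diffusion tensor fields.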
Furthermore, numerical comparative experiments are conducted on both synthetic data and real DTI data to demonstrate that the proposed method yields a higher degree of topology preservation on a dense deformation <span class="hlt">tensor</span> field while improving the registration accuracy.</p> </li> </ol> <div class="pull-right"> <ul class="pagination"> <li><a href="#" onclick='return showDiv("page_1");'>«</a></li> <li><a href="#" onclick='return showDiv("page_21");'>21</a></li> <li><a href="#" onclick='return showDiv("page_22");'>22</a></li> <li><a href="#" onclick='return showDiv("page_23");'>23</a></li> <li><a href="#" onclick='return showDiv("page_24");'>24</a></li> <li class="active"><span>25</span></li> <li><a href="#" onclick='return showDiv("page_25");'>»</a></li> </ul> </div> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_25 --> <center> <div class="footer-extlink text-muted"><small>Some links on this page may take you to non-federal websites. Their policies may differ from this site.</small> </div> </center> <div id="footer-wrapper"> <div class="footer-content"> <div id="footerOSTI" class=""> <div class="row"> <div class="col-md-4 text-center col-md-push-4 footer-content-center"><small><a href="http://www.science.gov/disclaimer.html">Privacy and Security</a></small> <div class="visible-sm visible-xs push_footer"></div> </div> <div class="col-md-4 text-center col-md-pull-4 footer-content-left"> <img src="https://www.osti.gov/images/DOE_SC31.png" alt="U.S. Department of Energy" usemap="#doe" height="31" width="177"><map style="display:none;" name="doe" id="doe"><area shape="rect" coords="1,3,107,30" href="http://www.energy.gov" alt="U.S. 
Deparment of Energy"><area shape="rect" coords="114,3,165,30" href="http://www.science.energy.gov" alt="Office of Science"></map> <a ref="http://www.osti.gov" style="margin-left: 15px;"><img src="https://www.osti.gov/images/footerimages/ostigov53.png" alt="Office of Scientific and Technical Information" height="31" width="53"></a> <div class="visible-sm visible-xs push_footer"></div> </div> <div class="col-md-4 text-center footer-content-right"> <a href="http://www.science.gov"><img src="https://www.osti.gov/images/footerimages/scigov77.png" alt="science.gov" height="31" width="98"></a> <a href="http://worldwidescience.org"><img src="https://www.osti.gov/images/footerimages/wws82.png" alt="WorldWideScience.org" height="31" width="90"></a> </div> </div> </div> </div> </div> <p><br></p> </div><!-- container --> <script type="text/javascript"><!-- // var lastDiv = ""; function showDiv(divName) { // hide last div if (lastDiv) { document.getElementById(lastDiv).className = "hiddenDiv"; } //if value of the box is not nothing and an object with that name exists, then change the class if (divName && document.getElementById(divName)) { document.getElementById(divName).className = "visibleDiv"; lastDiv = divName; } } //--> </script> <script> /** * Function that tracks a click on an outbound link in Google Analytics. * This function takes a valid URL string as an argument, and uses that URL string * as the event label. 
*/ var trackOutboundLink = function(url,collectionCode) { try { h = window.open(url); setTimeout(function() { ga('send', 'event', 'topic-page-click-through', collectionCode, url); }, 1000); } catch(err){} }; </script> <!-- Google Analytics --> <script> (function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){ (i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o), m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m) })(window,document,'script','//www.google-analytics.com/analytics.js','ga'); ga('create', 'UA-1122789-34', 'auto'); ga('send', 'pageview'); </script> <!-- End Google Analytics --> <script> showDiv('page_1') </script> </body> </html>