Tensor Decomposition for Signal Processing and Machine Learning
NASA Astrophysics Data System (ADS)
Sidiropoulos, Nicholas D.; De Lathauwer, Lieven; Fu, Xiao; Huang, Kejun; Papalexakis, Evangelos E.; Faloutsos, Christos
2017-07-01
Tensors or multi-way arrays are functions of three or more indices (i, j, k, ...) -- similar to matrices (two-way arrays), which are functions of two indices (r, c) for (row, column). Tensors have a rich history, stretching over almost a century, and touching upon numerous disciplines; but they have only recently become ubiquitous in signal and data analytics at the confluence of signal processing, statistics, data mining and machine learning. This overview article aims to provide a good starting point for researchers and practitioners interested in learning about and working with tensors. As such, it focuses on fundamentals and motivation (using various application examples), aiming to strike an appropriate balance of breadth and depth that will enable someone having taken first graduate courses in matrix algebra and probability to get started doing research and/or developing tensor algorithms and software. Some background in applied optimization is useful but not strictly required. The material covered includes tensor rank and rank decomposition; basic tensor factorization models and their relationships and properties (including fairly good coverage of identifiability); broad coverage of algorithms ranging from alternating optimization to stochastic gradient; statistical performance analysis; and applications ranging from source separation to collaborative filtering, mixture and topic modeling, classification, and multilinear subspace learning.
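As an illustration of the alternating-optimization algorithms surveyed in this article, the following is a minimal numpy sketch of CP decomposition via alternating least squares. The function names and the plain pinv-based updates are illustrative assumptions, not code from the article:

```python
import numpy as np

def unfold(T, mode):
    # Mode-n unfolding: move axis `mode` to the front, flatten the rest (C order).
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(A, B):
    # Column-wise Kronecker product of two factor matrices.
    r = A.shape[1]
    return (A[:, None, :] * B[None, :, :]).reshape(-1, r)

def cp_als(T, rank, n_iter=200, seed=0):
    """Fit a rank-`rank` CP model to a 3-way tensor by alternating least squares."""
    rng = np.random.default_rng(seed)
    A, B, C = (rng.standard_normal((s, rank)) for s in T.shape)
    for _ in range(n_iter):
        # Each factor solves a linear least-squares problem with the others fixed.
        A = unfold(T, 0) @ np.linalg.pinv(khatri_rao(B, C).T)
        B = unfold(T, 1) @ np.linalg.pinv(khatri_rao(A, C).T)
        C = unfold(T, 2) @ np.linalg.pinv(khatri_rao(A, B).T)
    return A, B, C

def cp_reconstruct(A, B, C):
    # Sum of rank-one outer products a_r (outer) b_r (outer) c_r.
    return np.einsum('ir,jr,kr->ijk', A, B, C)
```

On an exactly low-rank tensor this recovers the fit to high accuracy; production implementations add factor normalization, convergence checks and regularization.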
Generating functions for tensor product decomposition
NASA Astrophysics Data System (ADS)
Fuksa, Jan; Pošta, Severin
2013-11-01
The paper deals with the tensor product decomposition problem. Tensor product decompositions are of great importance in quantum physics. A short outline of the state of the art for semisimple Lie groups is given. Generating functions are used to solve tensor products in full generality; the corresponding generating function is rational. The strength of this technique lies in the fact that the decompositions of all tensor products of all irreducible representations are solved simultaneously. Obtaining the generating function is, however, a difficult task in general. We propose some changes to an algorithm using Patera-Sharp character generators to find this generating function, which reduces the whole problem to simple operations on rational functions.
Tensor decomposition of EEG signals: a brief review.
Cong, Fengyu; Lin, Qiu-Hua; Kuang, Li-Dan; Gong, Xiao-Feng; Astikainen, Piia; Ristaniemi, Tapani
2015-06-15
Electroencephalography (EEG) is a fundamental tool for functional brain imaging. EEG signals tend to be represented by a vector or a matrix to facilitate data processing and analysis with generally understood methodologies like time-series analysis, spectral analysis and matrix decomposition. However, EEG signals often naturally possess more than the two modes of time and space, and they can then be denoted by a multi-way array called a tensor. This review summarizes the current progress of tensor decomposition of EEG signals in three respects. The first concerns the existing modes and tensors of EEG signals. Second, two fundamental tensor decomposition models, canonical polyadic decomposition (CPD, also known as parallel factor analysis, PARAFAC) and Tucker decomposition, are introduced and compared, and the applications of the two models to EEG signals are addressed. In particular, the determination of the number of components for each mode is discussed. Finally, the N-way partial least squares and higher-order partial least squares are described as a potential trend for processing and analyzing brain signals of two modalities simultaneously.
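The contrast between the two models reviewed here, CPD and Tucker, can be made concrete in a few lines of numpy (an illustrative sketch; the function names are hypothetical). CPD couples all modes through a single shared rank, while Tucker uses a core tensor that mixes per-mode ranks; CPD is the special case of Tucker with a superdiagonal core:

```python
import numpy as np

def cpd_model(A, B, C):
    # CPD: T ~ sum_r a_r (outer) b_r (outer) c_r, one shared rank across modes.
    return np.einsum('ir,jr,kr->ijk', A, B, C)

def tucker_model(G, A, B, C):
    # Tucker: T ~ G x1 A x2 B x3 C, with core G coupling per-mode ranks.
    return np.einsum('pqr,ip,jq,kr->ijk', G, A, B, C)
```

Setting G to a superdiagonal (identity-like) core makes the Tucker model collapse to the CPD model with the same factors.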
An optimization approach for fitting canonical tensor decompositions.
Dunlavy, Daniel M.; Acar, Evrim; Kolda, Tamara Gibson
2009-02-01
Tensor decompositions are higher-order analogues of matrix decompositions and have proven to be powerful tools for data analysis. In particular, we are interested in the canonical tensor decomposition, otherwise known as the CANDECOMP/PARAFAC decomposition (CPD), which expresses a tensor as the sum of component rank-one tensors and is used in a multitude of applications such as chemometrics, signal processing, neuroscience, and web analysis. The task of computing the CPD, however, can be difficult. The typical approach is based on alternating least squares (ALS) optimization, which can be remarkably fast but is not very accurate. Previously, nonlinear least squares (NLS) methods have also been recommended; existing NLS methods are accurate but slow. In this paper, we propose the use of gradient-based optimization methods. We discuss the mathematical calculation of the derivatives and further show that they can be computed efficiently, at the same cost as one iteration of ALS. Computational experiments demonstrate that the gradient-based optimization methods are much more accurate than ALS and orders of magnitude faster than NLS.
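For intuition about the derivatives discussed above, the gradient of the CP fitting objective f = 0.5 * ||T - [[A, B, C]]||_F^2 has a closed form expressible with a few einsum calls. This generic numpy sketch is an illustration only, not the authors' implementation, which additionally exploits structure so that the cost matches one ALS iteration:

```python
import numpy as np

def cp_fit_grad(T, A, B, C):
    """Gradient of f = 0.5 * ||T - [[A, B, C]]||_F^2 w.r.t. each factor matrix."""
    E = np.einsum('ir,jr,kr->ijk', A, B, C) - T   # residual tensor
    gA = np.einsum('ijk,jr,kr->ir', E, B, C)      # contract residual against B, C
    gB = np.einsum('ijk,ir,kr->jr', E, A, C)
    gC = np.einsum('ijk,ir,jr->kr', E, A, B)
    return gA, gB, gC
```

A quick finite-difference check confirms the formula on a random instance.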
Tensor network decompositions in the presence of a global symmetry
Singh, Sukhwinder; Pfeifer, Robert N. C.; Vidal, Guifre
2010-11-15
Tensor network decompositions offer an efficient description of certain many-body states of a lattice system and are the basis of a wealth of numerical simulation algorithms. We discuss how to incorporate a global symmetry, given by a compact, completely reducible group G, in tensor network decompositions and algorithms. This is achieved by considering tensors that are invariant under the action of the group G. Each symmetric tensor decomposes into two types of tensors: degeneracy tensors, containing all the degrees of freedom, and structural tensors, which only depend on the symmetry group. In numerical calculations, the use of symmetric tensors ensures the preservation of the symmetry, allows selection of a specific symmetry sector, and significantly reduces computational costs. On the other hand, the resulting tensor network can be interpreted as a superposition of exponentially many spin networks. Spin networks are used extensively in loop quantum gravity, where they represent states of quantum geometry. Our work highlights their importance in the context of tensor network algorithms as well, thus setting the stage for cross-fertilization between these two areas of research.
Identifying key nodes in multilayer networks based on tensor decomposition
NASA Astrophysics Data System (ADS)
Wang, Dingjie; Wang, Haitao; Zou, Xiufen
2017-06-01
The identification of essential agents in multilayer networks characterized by different types of interactions is a crucial and challenging topic, essential for understanding the topological structure and dynamic processes of multilayer networks. In this paper, we use a fourth-order tensor to represent multilayer networks and propose a novel method to identify essential nodes based on CANDECOMP/PARAFAC (CP) tensor decomposition, referred to as EDCPTD centrality. This method takes the perspective of multilayer networked structures, integrating the information of edges among nodes and links between different layers to quantify the importance of nodes in multilayer networks. Three real-world multilayer biological networks are used to evaluate the performance of EDCPTD centrality. The bar charts and ROC curves for these multilayer networks indicate that the proposed approach is a good alternative index for identifying truly important nodes. Meanwhile, by comparing the behavior of the proposed method with that of aggregated single-layer methods, we demonstrate that neglecting the multiple relationships between nodes may lead to incorrect identification of the most versatile nodes. Furthermore, Gene Ontology functional annotation demonstrates that the top nodes identified by the proposed approach play a significant role in many vital biological processes. Finally, we have implemented many centrality methods for multilayer networks (including our method and published methods) and created visualization software based on the MATLAB GUI, called ENMNFinder, which can be used by other researchers.
3D tensor-based blind multispectral image decomposition for tumor demarcation
NASA Astrophysics Data System (ADS)
Kopriva, Ivica; Peršin, Antun
2010-03-01
Blind decomposition of a multi-spectral fluorescent image for tumor demarcation is formulated by exploiting the tensorial structure of the image. The first contribution of the paper is the identification of the matrix of spectral responses and the 3D tensor of spatial distributions of the materials present in the image from Tucker3 or PARAFAC models of the 3D image tensor. The second contribution is a clustering-based estimation of the number of materials present in the image, as well as of the matrix of their spectral profiles. The 3D tensor of the spatial distributions of the materials is recovered through 3-mode multiplication of the multi-spectral image tensor and the inverse of the matrix of spectral profiles. Tensor representation of the multi-spectral image preserves its local spatial structure, which is lost, due to the vectorization process, when matrix factorization-based decomposition methods (such as non-negative matrix factorization and independent component analysis) are used. Superior performance of the tensor-based image decomposition over matrix factorization-based decompositions is demonstrated on an experimental red-green-blue (RGB) image with known ground truth as well as on RGB fluorescent images of skin tumors (basal cell carcinoma).
Tensor Decompositions for Learning Latent Variable Models
2012-12-08
3D extension of Tensorial Polar Decomposition. Application to (photo-)elasticity tensors
NASA Astrophysics Data System (ADS)
Desmorat, Rodrigue; Desmorat, Boris
2016-06-01
The orthogonalized harmonic decomposition of symmetric fourth-order tensors (i.e. having major and minor indicial symmetries, such as elasticity tensors) is completed by a representation of harmonic fourth-order tensors H by means of two second-order harmonic (symmetric deviatoric) tensors only. A similar decomposition is obtained for non-symmetric tensors (i.e. having minor indicial symmetry only, such as photo-elasticity tensors or elasto-plasticity tangent operators) introducing a fourth-order major antisymmetric traceless tensor Z. The tensor Z is represented by means of one harmonic second-order tensor and one antisymmetric second-order tensor only. Representations of totally symmetric (rari-constant), symmetric and major antisymmetric fourth-order tensors are simple particular cases of the proposed general representation. Closed-form expressions for tensor decomposition are given in the monoclinic case. Practical applications to elasticity and photo-elasticity monoclinic tensors are finally presented.
Calculating vibrational spectra of molecules using tensor train decomposition
NASA Astrophysics Data System (ADS)
Rakhuba, Maxim; Oseledets, Ivan
2016-09-01
We propose a new algorithm for the calculation of vibrational spectra of molecules using tensor train decomposition. Under the assumption that the eigenfunctions lie on a low-parametric manifold of low-rank tensors, we suggest using well-known iterative methods that utilize matrix inversion (the locally optimal block preconditioned conjugate gradient method, inverse iteration) and solving the corresponding linear systems inexactly along this manifold. As an application, we accurately compute the vibrational spectrum (84 states) of the acetonitrile molecule CH3CN on a laptop in one hour, using only 100 MB of memory to represent all computed eigenfunctions.
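For readers unfamiliar with the format, a tensor train for a dense tensor can be built by the standard TT-SVD sweep of sequential truncated SVDs. The numpy sketch below is illustrative only; the paper instead keeps the iterates restricted to the low-rank TT manifold rather than forming dense tensors:

```python
import numpy as np

def tt_svd(T, eps=1e-10):
    """Decompose a dense tensor into TT cores via sequential truncated SVDs."""
    d = T.ndim
    shape = T.shape
    cores, r_prev = [], 1
    M = T.reshape(r_prev * shape[0], -1)
    for k in range(d - 1):
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        r = max(1, int(np.sum(s > eps * s[0])))   # relative singular-value cutoff
        cores.append(U[:, :r].reshape(r_prev, shape[k], r))
        M = (s[:r, None] * Vt[:r]).reshape(r * shape[k + 1], -1)
        r_prev = r
    cores.append(M.reshape(r_prev, shape[-1], 1))
    return cores

def tt_reconstruct(cores):
    # Chain-contract the cores over their shared rank indices.
    out = cores[0]
    for G in cores[1:]:
        out = np.tensordot(out, G, axes=([-1], [0]))
    return out[0, ..., 0]
```

Without truncation (tolerance below the smallest singular value) the reconstruction is exact; the point of the format is that low TT ranks give a drastically compressed representation.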
Blind multispectral image decomposition by 3D nonnegative tensor factorization.
Kopriva, Ivica; Cichocki, Andrzej
2009-07-15
Alpha-divergence-based nonnegative tensor factorization (NTF) is applied to blind multispectral image (MSI) decomposition. The matrix of spectral profiles and the matrix of spatial distributions of the materials resident in the image are identified from the factors in Tucker3 and PARAFAC models. NTF preserves local structure in the MSI that is lost, as a result of vectorization of the image, when nonnegative matrix factorization (NMF)- or independent component analysis (ICA)-based decompositions are used. Moreover, NTF based on the PARAFAC model is unique up to permutation and scale under mild conditions. To achieve the same, NMF- and ICA-based factorizations, respectively, require enforcement of sparseness (orthogonality) and statistical independence constraints on the spatial distributions of the materials resident in the MSI, and these conditions do not hold. We demonstrate the efficiency of the NTF-based factorization in relation to NMF- and ICA-based factorizations on blind decomposition of an experimental MSI with known ground truth.
Tensor decomposition and nonlocal means based spectral CT reconstruction
NASA Astrophysics Data System (ADS)
Zhang, Yanbo; Yu, Hengyong
2016-10-01
As one of the state-of-the-art detectors, the photon counting detector is used in spectral CT to classify the received photons into several energy channels and generate multichannel projections simultaneously. However, the projections always contain severe noise due to the low counts in each energy channel. How to reconstruct high-quality images from photon-counting-detector-based spectral CT is therefore a challenging problem. It is widely accepted that there exists self-similarity over the spatial domain of a CT image. Moreover, because a multichannel CT image is obtained from the same object at different energies, the images across channels are highly correlated. Motivated by these two characteristics of spectral CT, we employ tensor decomposition and nonlocal means methods for spectral CT iterative reconstruction. Our method includes three basic steps. First, each channel image is updated using OS-SART. Second, small 3D volumetric patches (tensors) are extracted from the multichannel image, and a higher-order singular value decomposition (HOSVD) is performed on each tensor, which helps to enhance the spatial sparsity and spectral correlation. Third, to exploit the self-similarity in CT images, similar patches are grouped to reduce noise using the nonlocal means method. These three steps are repeated alternately until the stopping criteria are met. The effectiveness of the developed algorithm is validated on both numerically simulated and realistic preclinical datasets. Our results show that the proposed method achieves promising performance in terms of noise reduction and preservation of fine structures.
Crossing Fibers Detection with an Analytical High Order Tensor Decomposition
Megherbi, T.; Kachouane, M.; Oulebsir-Boumghar, F.; Deriche, R.
2014-01-01
Diffusion magnetic resonance imaging (dMRI) is the only technique able to probe, in vivo and noninvasively, the fiber structure of human brain white matter. Detecting crossings of neuronal fibers remains an exciting challenge with an important impact on tractography. In this work, we tackle this challenging problem and propose an original and efficient technique to extract all crossing fibers from diffusion signals. To this end, we start by estimating, from the dMRI signal, the so-called Cartesian tensor fiber orientation distribution (CT-FOD) function, whose maxima correspond exactly to the orientations of the fibers. The fourth-order symmetric positive definite tensor that represents the CT-FOD is then analytically decomposed via the application of a new theoretical approach, and this decomposition is used to accurately extract all the fiber orientations. Our proposed high-order tensor decomposition based approach is minimal and allows recovering all crossing fibers without any a priori information on the total number of fibers. Various experiments performed on noisy synthetic data, on phantom diffusion data, and on human brain data validate our approach and clearly demonstrate that it is efficient, robust to noise, and performs favorably in terms of angular resolution and accuracy when compared to some classical and state-of-the-art approaches. PMID:25246940
Tensor product decomposition methods applied to complex flow data
NASA Astrophysics Data System (ADS)
von Larcher, Thomas; Klein, Rupert; Schneider, Reinhold; Wolf, Sebastian; Huber, Benjamin
2017-04-01
Low-rank multilevel approximation methods are an important tool in numerical analysis and in scientific computing. Such methods are often suited to attacking high-dimensional problems successfully, and they allow very compact representations of large data sets. Specifically, hierarchical tensor product decomposition methods emerge as a promising approach for application to data concerned with cascade-of-scales problems, e.g., in turbulent fluid dynamics. We focus on two particular objectives: first, representing turbulent data in an appropriately compact form; and second, as a long-term goal, finding self-similar vortex structures in multiscale problems. The question here is whether tensor product methods can support the development of improved understanding of multiscale behavior, and whether they are an improved starting point, relative to linear ansatz spaces, in the development of compact storage schemes for solutions of such problems. We present the reconstruction capabilities of a tensor decomposition based modeling approach tested against 3D turbulent channel flow data.
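The compact-representation idea rests on low-rank approximation of unfolded data. A minimal Eckart-Young sketch in numpy (illustrative only; the hierarchical tensor formats used in the paper generalize this single matrix SVD to a tree of such factorizations):

```python
import numpy as np

def truncated_svd_compress(X, r):
    """Best rank-r approximation of a data matrix (Eckart-Young theorem)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    # Keep only the r leading singular triplets.
    return (U[:, :r] * s[:r]) @ Vt[:r]
```

Storing the r leading factors costs O(r * (m + n)) numbers instead of m * n, which is the basic source of compression for data with fast singular-value decay.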
Uncertainty propagation in orbital mechanics via tensor decomposition
NASA Astrophysics Data System (ADS)
Sun, Yifei; Kumar, Mrinal
2016-03-01
Uncertainty forecasting in orbital mechanics is an essential but difficult task, primarily because the underlying Fokker-Planck equation (FPE) is defined on a relatively high dimensional (6-D) state-space and is driven by the nonlinear perturbed Keplerian dynamics. In addition, an enormously large solution domain is required for numerical solution of this FPE (e.g. encompassing the entire orbit in the x-y-z subspace), of which the state probability density function (pdf) occupies a tiny fraction at any given time. This coupling of large size, high dimensionality and nonlinearity makes for a formidable computational task, and has caused the FPE for orbital uncertainty propagation to remain an unsolved problem. To the best of the authors' knowledge, this paper presents the first successful direct solution of the FPE for perturbed Keplerian mechanics. To tackle the dimensionality issue, the time-varying state pdf is approximated in the CANDECOMP/PARAFAC tensor decomposition form, in which all six spatial dimensions as well as the time dimension are separated from one another. The pdf approximation for all times is obtained simultaneously via the alternating least squares algorithm. Chebyshev spectral differentiation is employed for discretization on account of its spectral ("super-fast") convergence rate. To facilitate the tensor decomposition and control the solution domain size, the system dynamics is expressed using spherical coordinates in a noninertial reference frame. Numerical results obtained on a regular personal computer are compared with Monte Carlo simulations.
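Chebyshev spectral differentiation, employed above for discretization, replaces each one-dimensional derivative with a dense differentiation matrix acting on function values at Chebyshev points. The construction below follows Trefethen's well-known recipe and is a generic sketch, not code from the paper:

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix D and grid x on N+1 Chebyshev points."""
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)          # Chebyshev extreme points
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))   # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                       # diagonal via row-sum trick
    return D, x
```

For smooth functions the error decays spectrally, which is the "super-fast" convergence the abstract refers to; e.g. D applied to exp(x) reproduces exp(x) to near machine precision already for modest N.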
Reduction of Linear Combinations of Tensors by Ideal Decompositions
NASA Astrophysics Data System (ADS)
Fiedler, Bernd
2001-04-01
Symmetry properties of r-times covariant tensors T can be described by certain linear subspaces W of the group ring K[S_r] of a symmetric group S_r. If such a W is known for a class of tensors T, the elements of the orthogonal subspace W^⊥ of W within the dual space K[S_r]* of K[S_r] yield the linear identities needed for a treatment of the term combination problem for the coordinates of the T. In earlier papers [1, 2] we gave the structure of these W for every situation that appears in symbolic tensor calculations by computer. Characterizing idempotents of such W and machinable linear equation systems for W^⊥ can be determined on the basis of an ideal decomposition algorithm which works in every semisimple ring up to an isomorphism. Furthermore, we use tools such as the Littlewood-Richardson rule, plethysms and discrete Fourier transforms for S_r to increase the efficiency of the calculations. All described methods were implemented in a Mathematica package called PERMS.
Databases post-processing in Tensoral
NASA Technical Reports Server (NTRS)
Dresselhaus, Eliot
1994-01-01
The Center for Turbulence Research (CTR) post-processing effort aims to make turbulence simulations and data more readily and usefully available to the research and industrial communities. The Tensoral language, introduced in this document and currently existing in prototype form, is the foundation of this effort. Tensoral provides a convenient and powerful protocol to connect users who wish to analyze fluids databases with the authors who generate them. In this document we introduce Tensoral and its prototype implementation in the form of a user's guide. This guide focuses on use of Tensoral for post-processing turbulence databases. The corresponding document - the Tensoral 'author's guide' - which focuses on how authors can make databases available to users via the Tensoral system - is currently unwritten. Section 1 of this user's guide defines Tensoral's basic notions: we explain the class of problems at hand and how Tensoral abstracts them. Section 2 defines Tensoral syntax for mathematical expressions. Section 3 shows how these expressions make up Tensoral statements. Section 4 shows how Tensoral statements and expressions are embedded into other computer languages (such as C or Vectoral) to make Tensoral programs. We conclude with a complete example program.
Tensor decomposition for multi-tissue gene expression experiments
Hore, Victoria; Viñuela, Ana; Buil, Alfonso; Knight, Julian; McCarthy, Mark I; Small, Kerrin; Marchini, Jonathan
2016-01-01
Genome wide association studies of gene expression traits and other cellular phenotypes have been successful in revealing links between genetic variation and biological processes. The majority of discoveries have uncovered cis eQTL effects via mass univariate testing of SNPs against gene expression in single tissues. We present a Bayesian method for multi-tissue experiments focusing on uncovering gene networks linked to genetic variation. Our method decomposes the 3D array (or tensor) of gene expression measurements into a set of latent components. We identify sparse gene networks, which can then be tested for association against genetic variation genome-wide. We apply our method to a dataset of 845 individuals from the TwinsUK cohort with gene expression measured via RNA sequencing in adipose, LCLs and skin. We uncover several gene networks with a genetic basis and clear biological and statistical significance. Extensions of this approach will allow integration of multi-omic, environmental and phenotypic datasets. PMID:27479908
Tensor decomposition for multiple-tissue gene expression experiments.
Hore, Victoria; Viñuela, Ana; Buil, Alfonso; Knight, Julian; McCarthy, Mark I; Small, Kerrin; Marchini, Jonathan
2016-09-01
Genome-wide association studies of gene expression traits and other cellular phenotypes have successfully identified links between genetic variation and biological processes. The majority of discoveries have uncovered cis-expression quantitative trait locus (eQTL) effects via mass univariate testing of SNPs against gene expression in single tissues. Here we present a Bayesian method for multiple-tissue experiments focusing on uncovering gene networks linked to genetic variation. Our method decomposes the 3D array (or tensor) of gene expression measurements into a set of latent components. We identify sparse gene networks that can then be tested for association against genetic variation across the genome. We apply our method to a data set of 845 individuals from the TwinsUK cohort with gene expression measured via RNA-seq analysis in adipose, lymphoblastoid cell lines (LCLs) and skin. We uncover several gene networks with a genetic basis and clear biological and statistical significance. Extensions of this approach will allow integration of different omics, environmental and phenotypic data sets.
Symmetric Tensor Decomposition Description of Fermionic Many-Body Wave Functions
NASA Astrophysics Data System (ADS)
Uemura, Wataru; Sugino, Osamu
2012-12-01
The configuration interaction (CI) is a versatile wave function theory for interacting fermions, but it involves an extremely long CI series. Using a symmetric tensor decomposition method, we convert the CI series into a compact and numerically tractable form. The converted series encompasses the Hartree-Fock state in the first term and rapidly converges to the full-CI state, as numerically tested by using small molecules. Provided that the length of the symmetric tensor decomposition CI series grows only moderately with the increasing complexity of the system, the new method will serve as one of the alternative variational methods to achieve full CI with enhanced practicability.
Thermochemical water decomposition processes
NASA Technical Reports Server (NTRS)
Chao, R. E.
1974-01-01
Thermochemical processes which lead to the production of hydrogen and oxygen from water without the consumption of any other material have a number of advantages when compared to other processes such as water electrolysis. It is possible to operate a sequence of chemical steps with net work requirements equal to zero at temperatures well below the temperature required for water dissociation in a single step. Various types of procedures are discussed, giving attention to halide processes, reverse Deacon processes, iron oxide and carbon oxide processes, and metal and alkali metal processes. Economical questions are also considered.
Higher order singular value decomposition of tensors for fusion of registered images
NASA Astrophysics Data System (ADS)
Thomason, Michael G.; Gregor, Jens
2011-01-01
This paper describes a computational method using tensor math for higher order singular value decomposition (HOSVD) of registered images. Tensor decomposition is a rigorous way to expose structure embedded in multidimensional datasets. Given a dataset of registered 2-D images, the dataset is represented in tensor format and the HOSVD of the tensor is computed to obtain a set of 2-D basis images. The basis images constitute a linear decomposition of the original dataset. HOSVD is data-driven and does not require the user to select parameters or assign thresholds. A specific application uses the basis images for pixel-level fusion of registered images into a single image for visualization. The fusion is optimized with respect to a measure of mean squared error. HOSVD and image fusion are illustrated empirically with four real datasets: (1) visible and infrared data of a natural scene, (2) MRI and X-ray CT brain images, and, in nondestructive testing, (3) X-ray, ultrasound, and eddy current images, and (4) X-ray, ultrasound, and shearography images.
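HOSVD itself is a short computation: one SVD per mode-wise unfolding, then projection of the tensor onto the resulting bases. A minimal, untruncated numpy sketch (the function names are illustrative assumptions, not code from the paper):

```python
import numpy as np

def unfold(T, mode):
    # Mode-n unfolding: move axis `mode` to the front, flatten the rest.
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_product(T, M, k):
    # Multiply tensor T by matrix M along mode k.
    return np.moveaxis(np.tensordot(M, np.moveaxis(T, k, 0), axes=1), 0, k)

def hosvd(T):
    """Higher-order SVD: one orthonormal factor per mode plus a core tensor."""
    U = [np.linalg.svd(unfold(T, k), full_matrices=False)[0] for k in range(T.ndim)]
    G = T
    for k, Uk in enumerate(U):
        G = mode_product(G, Uk.T, k)   # project onto the mode-k basis
    return G, U

def reconstruct(G, U):
    T = G
    for k, Uk in enumerate(U):
        T = mode_product(T, Uk, k)
    return T
```

Truncating each U[k] to its leading columns gives the low-multilinear-rank approximation used for compression and denoising; the untruncated version above reconstructs the input exactly.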
Performance of tensor decomposition-based modal identification under nonstationary vibration
NASA Astrophysics Data System (ADS)
Friesen, P.; Sadhu, A.
2017-03-01
Health monitoring of civil engineering structures is of paramount importance when they are subjected to natural hazards or extreme climatic events like earthquakes, strong wind gusts or man-made excitations. Most traditional modal identification methods rely on the stationarity assumption for the vibration response and pose difficulties when analyzing nonstationary vibration (e.g. earthquake or human-induced vibration). Recently, tensor decomposition based methods have emerged as powerful yet generic blind (i.e. not requiring knowledge of the input characteristics) signal decomposition tools for structural modal identification. In this paper, a tensor decomposition based system identification method is further explored to estimate modal parameters using nonstationary vibration generated by either earthquake or pedestrian-induced excitation in a structure. The effects of lag parameters and sensor densities on tensor decomposition are studied with respect to the extent of nonstationarity of the responses, characterized by the stationary duration and peak ground acceleration of the earthquake. A suite of more than 1400 earthquakes is used to investigate the performance of the proposed method under a wide variety of ground motions, utilizing both complete and partial measurements of a high-rise building model. Apart from earthquakes, human-induced nonstationary vibration of a real-life pedestrian bridge is also used to verify the accuracy of the proposed method.
Predicting the reference evapotranspiration based on tensor decomposition
NASA Astrophysics Data System (ADS)
Misaghian, Negin; Shamshirband, Shahaboddin; Petković, Dalibor; Gocic, Milan; Mohammadi, Kasra
2016-09-01
Most of the available models for reference evapotranspiration (ET0) estimation are based upon only a single empirical equation for ET0. Thus, one of the main issues in ET0 estimation is the appropriate integration of time information and different empirical ET0 equations to determine ET0 and boost the precision. The FAO-56 Penman-Monteith, adjusted Hargreaves, Blaney-Criddle, Priestley-Taylor, and Jensen-Haise equations were utilized in this study for estimating ET0 at the two stations of Belgrade and Nis in Serbia, using data collected for the period 1980 to 2010. A third-order tensor is used to capture three-way correlations among months, years, and ET0 information. Afterward, the latent correlations among ET0 parameters are found by multiway analysis to enhance the quality of the prediction. The suggested method is valuable in that it takes into account simultaneous relations between elements, boosts the prediction precision, and determines latent associations. Models are compared with respect to the coefficient of determination (R^2), mean absolute error (MAE), and root-mean-square error (RMSE). The proposed tensor approach has an R^2 value greater than 0.9 for all selected ET0 methods at both stations, which is acceptable for ET0 prediction. RMSE ranges between 0.247 and 0.485 mm day^-1 at the Nis station and between 0.277 and 0.451 mm day^-1 at the Belgrade station, while MAE is between 0.140 and 0.337 mm day^-1 at Nis and between 0.208 and 0.360 mm day^-1 at Belgrade. The best performances are achieved by the Priestley-Taylor model at the Nis station (R^2 = 0.985, MAE = 0.140 mm day^-1, RMSE = 0.247 mm day^-1) and the FAO-56 Penman-Monteith model at the Belgrade station (MAE = 0.208 mm day^-1, RMSE = 0.277 mm day^-1, R^2 = 0.975).
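The three error measures reported above have standard definitions, sketched here for reference (numpy, with hypothetical arrays y of observations and yhat of predictions):

```python
import numpy as np

def r2(y, yhat):
    # Coefficient of determination: 1 - residual SS / total SS.
    ss_res = np.sum((y - yhat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot

def mae(y, yhat):
    # Mean absolute error.
    return np.mean(np.abs(y - yhat))

def rmse(y, yhat):
    # Root-mean-square error; penalizes large residuals more than MAE.
    return np.sqrt(np.mean((y - yhat) ** 2))
```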
Tensor decomposition in electronic structure calculations on 3D Cartesian grids
Khoromskij, B. N.; Khoromskaia, V.; Chinnamsetty, S. R.; Flad, H.-J.
2009-09-01
In this paper, we investigate a novel approach based on the combination of Tucker-type and canonical tensor decomposition techniques for the efficient numerical approximation of functions and operators in electronic structure calculations. In particular, we study the applicability of tensor approximations for the numerical solution of the Hartree-Fock and Kohn-Sham equations on 3D Cartesian grids. We show that the orthogonal Tucker-type tensor approximation of the electron density and Hartree potential of simple molecules leads to low tensor rank representations. This enables an efficient tensor-product convolution scheme for the computation of the Hartree potential using a collocation-type approximation via piecewise constant basis functions on a uniform n×n×n grid. Combined with Richardson extrapolation, our approach exhibits O(h^3) convergence in the grid size h = O(n^-1). Moreover, it requires O(3rn + r^3) storage, where r denotes the Tucker rank of the electron density, with r = O(log n) almost uniformly in n. For example, calculations of the Coulomb matrix and the Hartree-Fock energy for the CH4 molecule, with a pseudopotential on the C atom, achieved accuracies of the order of 10^-6 hartree with a grid size n of several hundred. Since the tensor-product convolution in 3D is performed via 1D convolution transforms, our scheme markedly outperforms the 3D FFT in both computing time and storage requirements.
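As an illustration of the orthogonal Tucker-type approximation the abstract describes, here is a minimal NumPy sketch (not the paper's implementation; the Gaussian test function, grid size, and rank choice are illustrative assumptions) that compresses a separable "density" sampled on an n×n×n grid:

```python
import numpy as np

def hosvd_truncated(T, ranks):
    """Truncated HOSVD (orthogonal Tucker) of a 3-way array T."""
    U = []
    for mode, r in enumerate(ranks):
        # mode-n unfolding: bring `mode` to the front, flatten the rest
        M = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)
        u, _, _ = np.linalg.svd(M, full_matrices=False)
        U.append(u[:, :r])
    # core G = T x_1 U1^T x_2 U2^T x_3 U3^T
    G = np.einsum('ijk,ia,jb,kc->abc', T, U[0], U[1], U[2])
    return G, U

# smooth "electron-density-like" function on an n x n x n grid
n = 40
x = np.linspace(-3.0, 3.0, n)
X, Y, Z = np.meshgrid(x, x, x, indexing='ij')
rho = np.exp(-(X**2 + Y**2 + Z**2))   # separable, so Tucker rank 1 suffices

G, U = hosvd_truncated(rho, (1, 1, 1))
approx = np.einsum('abc,ia,jb,kc->ijk', G, U[0], U[1], U[2])
err = np.linalg.norm(approx - rho) / np.linalg.norm(rho)
# storage drops from n^3 values to roughly 3*r*n + r^3, matching the
# O(3rn + r^3) estimate quoted in the abstract
```

For this exactly separable function the rank-(1,1,1) Tucker approximation is accurate to machine precision; realistic densities need larger (but still small) ranks.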
ON THE DECOMPOSITION OF STRESS AND STRAIN TENSORS INTO SPHERICAL AND DEVIATORIC PARTS
Augusti, G.; Martin, J. B.; Prager, W.
1969-01-01
It is well known that Hooke's law for a linearly elastic, isotropic solid may be written in the form of two relations that involve only the spherical or only the deviatoric parts of the tensors of stress and strain. The example of the linearly elastic, transversely isotropic solid is used to show that this decomposition is not, in general, feasible for linearly elastic, anisotropic solids. The discussion is extended to a large class of work-hardening rigid, plastic solids, and it is shown that the considered decomposition can only be achieved for the incompressible solids of this class. PMID:16591754
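The spherical/deviatoric split discussed in this abstract is a one-line computation; a minimal NumPy sketch (the stress values are made up for illustration) is:

```python
import numpy as np

# hypothetical symmetric stress tensor (units arbitrary)
sigma = np.array([[10.0, 2.0, 0.0],
                  [ 2.0, 5.0, 1.0],
                  [ 0.0, 1.0, 3.0]])

p = np.trace(sigma) / 3.0        # mean (hydrostatic) stress
spherical = p * np.eye(3)        # spherical part
deviatoric = sigma - spherical   # trace-free deviatoric part

# For an isotropic linearly elastic solid, Hooke's law decouples into
#   p = K * tr(eps)   and   s_ij = 2G * e_ij,
# which is exactly the decomposition whose limits the paper examines
# for anisotropic solids.
```

The two parts sum back to the original tensor, and the deviatoric part has zero trace by construction.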
Exploiting multi-lead electrocardiogram correlations using robust third-order tensor decomposition
Padhy, Sibasankar; Dandapat, Samarendra
2015-01-01
In this Letter, a robust third-order tensor decomposition of multi-lead electrocardiogram (MECG) data comprising 12 leads is proposed to reduce the dimension of the stored data. An order-3 tensor structure is employed to represent the MECG data by rearranging the MECG information in three dimensions. The three dimensions of the formed tensor represent the number of leads, beats and samples of some fixed ECG duration. Dimension reduction of such an arrangement exploits correlations present among the successive beats (intra-beat and inter-beat) and across the leads (inter-lead). The higher-order singular value decomposition is used to decompose the tensor data. In addition, multiscale analysis has been added for effective handling of the ECG information. It grossly segments the ECG characteristic waves (P-wave, QRS-complex, ST-segment, T-wave, etc.) into different sub-bands. At the same time, it separates high-frequency noise components into lower-order sub-bands, which helps in removing noise from the original data. For evaluation purposes, we have used the publicly available PTB diagnostic database. The proposed method outperforms the existing algorithms, whose compression ratios are under 10 for MECG data. Results show that the original MECG data volume can be reduced by more than 45 times with an acceptable diagnostic distortion level. PMID:26609416
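The lead × beat × sample arrangement and HOSVD truncation can be sketched in a few lines of NumPy; the tensor below is a synthetic stand-in for MECG data (the sizes, rank, and noise level are illustrative assumptions, not the Letter's settings):

```python
import numpy as np

rng = np.random.default_rng(7)
leads, beats, samples = 12, 50, 400      # illustrative sizes
R = 3                                    # assumed number of shared waveforms

# toy MECG tensor: a few waveforms mixed across leads and beats, so the
# inter-lead / inter-beat correlations the Letter exploits are present
A = rng.standard_normal((leads, R))
B = rng.standard_normal((beats, R))
W = rng.standard_normal((R, samples))
T = np.einsum('lr,br,rs->lbs', A, B, W) \
    + 0.01 * rng.standard_normal((leads, beats, samples))

def hosvd(T, ranks):
    """Truncated HOSVD of a 3-way lead x beat x sample tensor."""
    U = [np.linalg.svd(np.moveaxis(T, m, 0).reshape(T.shape[m], -1),
                       full_matrices=False)[0][:, :r]
         for m, r in enumerate(ranks)]
    G = np.einsum('lbs,la,bc,sd->acd', T, *U)
    return G, U

G, U = hosvd(T, (R, R, R))
T_hat = np.einsum('acd,la,bc,sd->lbs', G, *U)
rel_err = np.linalg.norm(T_hat - T) / np.linalg.norm(T)

# compression ratio: full tensor vs core + factor matrices
stored = G.size + sum(u.size for u in U)
compression = T.size / stored
```

Because the toy data are genuinely low multilinear rank, the truncated core plus factors reproduce the tensor to within the added noise while storing far fewer values.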
A new moment-tensor decomposition for seismic events in anisotropic media
NASA Astrophysics Data System (ADS)
Chapman, C. H.; Leaney, W. S.
2012-01-01
Investigating the mechanisms of small seismic sources usually consists of three steps: determining the moment tensor of the source; decomposing the moment tensor into parameters that can be interpreted in terms of physical mechanisms and displaying those parameters. This paper concerns the second and third steps. Two existing methods—the Riedesel-Jordan and Hudson-Pearce-Rogers parameters and displays—are reviewed, compared and contrasted, and advantages and disadvantages of the two methods are discussed. One disadvantage is that neither method takes into consideration the effect of anisotropy on the interpretation. In microseisms, anisotropy can be important. A new procedure based on the biaxial decomposition of the potency tensor is introduced which explicitly allows for anisotropy and interprets the moment tensor in terms of an isotropic pressure change and a displacement discontinuity on a fault. It is shown that this interpretation is always possible for any moment tensor whatever the anisotropy. To compare the pressure change with the displacement discontinuity, it is useful to be able to determine the volume change from the pressure source in any medium. This depends on the embedded bulk modulus, which differs from the normal bulk modulus. The embedded modulus in isotropic media is well known and the equivalent anisotropic result is derived in this paper. Interpreting a seismic source in terms of the volume change due to a pressure change and a displacement discontinuity on a fault allows a simple 3-D graphical glyph to be used to display the interpretation.
NASA Astrophysics Data System (ADS)
Cyganek, Boguslaw; Smolka, Bogdan
2015-02-01
In this paper a system for real-time recognition of objects in multidimensional video signals is proposed. Object recognition is done by projecting patterns into the tensor subspaces obtained from the factorization of the signal tensors representing the input signal. However, instead of taking only the intensity signal, the novelty of this paper is to first build the Extended Structural Tensor representation from the intensity signal, which conveys information on signal intensities as well as on higher-order statistics of the input signals. In this way the higher-order input pattern tensors are built from the training samples. Then, the tensor subspaces are built based on the Higher-Order Singular Value Decomposition of the prototype pattern tensors. Finally, recognition relies on measuring the distance of a test pattern projected into the tensor subspaces obtained from the training tensors. Due to the high dimensionality of the input data, tensor-based methods require large memory and computational resources. However, recent achievements in the technology of multi-core microprocessors and graphics cards allow real-time operation of the multidimensional methods, as is shown and analyzed in this paper on real examples of object detection in digital images.
Tensoral for post-processing users and simulation authors
NASA Technical Reports Server (NTRS)
Dresselhaus, Eliot
1993-01-01
The CTR post-processing effort aims to make turbulence simulations and data more readily and usefully available to the research and industrial communities. The Tensoral language, which provides the foundation for this effort, is introduced here in the form of a user's guide. The Tensoral user's guide is presented in two main sections. Section one acts as a general introduction and guides database users who wish to post-process simulation databases. Section two gives a brief description of how database authors and other advanced users can make simulation codes and/or the databases they generate available to the user community via Tensoral database back ends. The two-part structure of this document conforms to the two-level design structure of the Tensoral language. Tensoral has been designed to be a general computer language for performing tensor calculus and statistics on numerical data. Tensoral's generality allows it to be used for stand-alone native coding of high-level post-processing tasks (as described in section one of this guide). At the same time, Tensoral's specialization to a minute task (namely, to numerical tensor calculus and statistics) allows it to be easily embedded into applications written partly in Tensoral and partly in other computer languages (here, C and Vectoral). Embedded Tensoral, aimed at advanced users for more general coding (e.g. of efficient simulations, for interfacing with pre-existing software, for visualization, etc.), is described in section two of this guide.
Peng, Bo; Kowalski, Karol
2017-09-12
The representation and storage of two-electron integral tensors are vital in large-scale applications of accurate electronic structure methods. Low-rank representation and efficient storage strategies for integral tensors can significantly reduce the numerical overhead and consequently the time-to-solution of these methods. In this work, by combining pivoted incomplete Cholesky decomposition (CD) with a follow-up truncated singular value decomposition (SVD), we develop a decomposition strategy to approximately represent the two-electron integral tensor in terms of low-rank vectors. A systematic benchmark test on a series of 1-D, 2-D, and 3-D carbon-hydrogen systems demonstrates the high efficiency and scalability of the compound two-step decomposition of the two-electron integral tensor in our implementation. For the size of the atomic basis set, Nb, ranging from ∼100 up to ∼2,000, the observed numerical scaling of our implementation shows [Formula: see text] versus [Formula: see text] cost of performing single CD on the two-electron integral tensor in most of the other implementations. More importantly, this decomposition strategy can significantly reduce the storage requirement of the atomic orbital (AO) two-electron integral tensor from [Formula: see text] to [Formula: see text] with moderate decomposition thresholds. The accuracy tests have been performed using ground- and excited-state formulations of the coupled cluster formalism employing single and double excitations (CCSD) on several benchmark systems, including the C60 molecule described by nearly 1,400 basis functions. The results show that the decomposition thresholds can generally be set to 10^-4 to 10^-3 to give an acceptable compromise between efficiency and accuracy.
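The two-step CD-then-SVD compression described above can be illustrated on a toy symmetric positive semidefinite matrix standing in for the reshaped integral tensor V_(pq),(rs); this NumPy sketch uses a generic textbook pivoted Cholesky, not the authors' code, and the sizes, rank, and thresholds are illustrative assumptions:

```python
import numpy as np

def pivoted_cholesky(A, tol=1e-10, max_rank=None):
    """Pivoted incomplete Cholesky: A ~ L @ L.T, stopping at tolerance tol."""
    n = A.shape[0]
    d = np.diag(A).astype(float).copy()   # residual diagonal
    m = max_rank or n
    L = np.zeros((n, m))
    for k in range(m):
        i = int(np.argmax(d))             # pivot: largest residual diagonal
        if d[i] <= tol:
            return L[:, :k]               # converged at rank k << n
        L[:, k] = (A[:, i] - L @ L[i, :]) / np.sqrt(d[i])
        d -= L[:, k] ** 2
    return L

# toy "ERI" matrix: the reshaped tensor is symmetric positive semidefinite,
# here built with low intrinsic rank so the compression is visible
rng = np.random.default_rng(1)
B = rng.standard_normal((100, 8))
V = B @ B.T

Lf = pivoted_cholesky(V)                  # step 1: pivoted incomplete CD
u, s, vt = np.linalg.svd(Lf, full_matrices=False)
r = int(np.sum(s > 1e-8 * s[0]))          # step 2: truncated SVD of the factor
err = np.linalg.norm(Lf @ Lf.T - V) / np.linalg.norm(V)
```

The Cholesky step already terminates at the intrinsic rank, and the follow-up SVD gives orthogonal low-rank vectors that can be truncated further with a threshold.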
NASA Technical Reports Server (NTRS)
Leone, Frank A., Jr.
2015-01-01
A method is presented to represent the large-deformation kinematics of intraply matrix cracks and delaminations in continuum damage mechanics (CDM) constitutive material models. The method involves the additive decomposition of the deformation gradient tensor into 'crack' and 'bulk material' components. The response of the intact bulk material is represented by a reduced deformation gradient tensor, and the opening of an embedded cohesive interface is represented by a normalized cohesive displacement-jump vector. The rotation of the embedded interface is tracked as the material deforms and as the crack opens. The distribution of the total local deformation between the bulk material and the cohesive interface components is determined by minimizing the difference between the cohesive stress and the bulk material stress projected onto the cohesive interface. The improvements to the accuracy of CDM models that incorporate the presented method over existing approaches are demonstrated for a single element subjected to simple shear deformation and for a finite element model of a unidirectional open-hole tension specimen. The material model is implemented as a VUMAT user subroutine for the Abaqus/Explicit finite element software. The presented deformation gradient decomposition method reduces the artificial load transfer across matrix cracks subjected to large shearing deformations, and avoids the spurious secondary failure modes that often occur in analyses based on conventional progressive damage models.
Towards Interactive Construction of Topical Hierarchy: A Recursive Tensor Decomposition Approach
Wang, Chi; Liu, Xueqing; Song, Yanglei; Han, Jiawei
2015-01-01
Automatic construction of user-desired topical hierarchies over large volumes of text data is a highly desirable but challenging task. This study proposes to give users freedom to construct topical hierarchies via interactive operations such as expanding a branch and merging several branches. Existing hierarchical topic modeling techniques are inadequate for this purpose because (1) they cannot consistently preserve the topics when the hierarchy structure is modified; and (2) the slow inference prevents swift response to user requests. In this study, we propose a novel method, called STROD, that allows efficient and consistent modification of topic hierarchies, based on a recursive generative model and a scalable tensor decomposition inference algorithm with theoretical performance guarantee. Empirical evaluation shows that STROD reduces the runtime of construction by several orders of magnitude, while generating consistent and quality hierarchies. PMID:26705505
Tensor-multi-scalar theories: relativistic stars and 3 + 1 decomposition
NASA Astrophysics Data System (ADS)
Horbatsch, Michael; Silva, Hector O.; Gerosa, Davide; Pani, Paolo; Berti, Emanuele; Gualtieri, Leonardo; Sperhake, Ulrich
2015-10-01
Gravitational theories with multiple scalar fields coupled to the metric and each other—a natural extension of the well studied single-scalar-tensor theories—are interesting phenomenological frameworks to describe deviations from general relativity in the strong-field regime. In these theories, the N-tuple of scalar fields takes values in a coordinate patch of an N-dimensional Riemannian target-space manifold whose properties are poorly constrained by weak-field observations. Here we introduce for simplicity a non-trivial model with two scalar fields and a maximally symmetric target-space manifold. Within this model we present a preliminary investigation of spontaneous scalarization for relativistic, perfect fluid stellar models in spherical symmetry. We find that the scalarization threshold is determined by the eigenvalues of a symmetric scalar-matter coupling matrix, and that the properties of strongly scalarized stellar configurations additionally depend on the target-space curvature radius. In preparation for numerical relativity simulations, we also write down the 3 + 1 decomposition of the field equations for generic tensor-multi-scalar theories.
Zhang, Zheng; Yang, Xiu; Oseledets, Ivan V.; Karniadakis, George E.; Daniel, Luca
2015-01-01
Hierarchical uncertainty quantification can reduce the computational cost of stochastic circuit simulation by employing spectral methods at different levels. This paper presents an efficient framework to hierarchically simulate some challenging stochastic circuits/systems that include high-dimensional subsystems. Due to the high parameter dimensionality, it is challenging both to extract surrogate models at the low level of the design hierarchy and to handle them in the high-level simulation. In this paper, we develop an efficient analysis-of-variance-based stochastic circuit/microelectromechanical-systems (MEMS) simulator to extract the surrogate models at the low level. In order to avoid the curse of dimensionality, we employ tensor-train decomposition at the high level to construct the basis functions and Gauss quadrature points. As a demonstration, we verify our algorithm on a stochastic oscillator with four MEMS capacitors and 184 random parameters. This challenging example is efficiently simulated by our simulator at a cost of only 10 min in MATLAB on a regular personal computer.
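The tensor-train decomposition the paper leans on can be computed with the standard TT-SVD sweep of truncated SVDs over successive unfoldings; the following NumPy sketch is a generic textbook TT-SVD (not the paper's simulator), applied to a 4-way array with known low TT ranks:

```python
import numpy as np

def tt_svd(T, eps=1e-10):
    """TT-SVD: build tensor-train cores by successive truncated SVDs."""
    shape, d = T.shape, T.ndim
    cores, r_prev = [], 1
    M = T.reshape(shape[0], -1)
    for k in range(d - 1):
        u, s, vt = np.linalg.svd(M, full_matrices=False)
        r = max(1, int(np.sum(s > eps * s[0])))        # truncation rank
        cores.append(u[:, :r].reshape(r_prev, shape[k], r))
        M = (s[:r, None] * vt[:r]).reshape(r * shape[k + 1], -1)
        r_prev = r
    cores.append(M.reshape(r_prev, shape[-1], 1))
    return cores

def tt_full(cores):
    """Contract the train of 3-way cores back into a full array."""
    out = cores[0]
    for c in cores[1:]:
        out = np.einsum('...a,aib->...ib', out, c)
    return out[0, ..., 0]

# sin(x1 + x2 + x3 + x4) has TT ranks of at most 2, so the train is tiny
n = 10
g = np.linspace(0.0, 1.0, n)
X = (g[:, None, None, None] + g[None, :, None, None]
     + g[None, None, :, None] + g[None, None, None, :])
T = np.sin(X)
cores = tt_svd(T)
ranks = [c.shape[2] for c in cores[:-1]]
rel_err = np.linalg.norm(tt_full(cores) - T) / np.linalg.norm(T)
```

Storage grows linearly in the number of dimensions for fixed TT ranks, which is what lets the paper sidestep the curse of dimensionality at the high level.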
Multidimensional Compressed Sensing MRI Using Tensor Decomposition-Based Sparsifying Transform
Yu, Yeyang; Jin, Jin; Liu, Feng; Crozier, Stuart
2014-01-01
Compressed Sensing (CS) has been applied in dynamic Magnetic Resonance Imaging (MRI) to accelerate the data acquisition without noticeably degrading the spatial-temporal resolution. A suitable sparsity basis is one of the key components of successful CS applications. Conventionally, a multidimensional dataset in dynamic MRI is treated as a series of two-dimensional matrices, and then various matrix/vector transforms are used to explore the image sparsity. Traditional methods typically sparsify the spatial and temporal information independently. In this work, we propose a novel concept of tensor sparsity for the application of CS in dynamic MRI, and present the Higher-order Singular Value Decomposition (HOSVD) as a practical example. Applications to three- and four-dimensional MRI data demonstrate that HOSVD simultaneously exploits the correlations within the spatial and temporal dimensions. Validations based on cardiac datasets indicate that the proposed method achieved reconstruction accuracy comparable to that of low-rank matrix recovery methods and outperformed conventional sparse recovery methods. PMID:24901331
Aridity and decomposition processes in complex landscapes
NASA Astrophysics Data System (ADS)
Ossola, Alessandro; Nyman, Petter
2015-04-01
Decomposition of organic matter is a key biogeochemical process contributing to nutrient cycles, carbon fluxes and soil development. The activity of decomposers depends on microclimate, with temperature and rainfall being major drivers. In complex terrain, fine-scale variation in microclimate (and hence water availability) as a result of slope orientation is caused by differences in incoming radiation and surface temperature. Aridity, measured as the long-term balance between net radiation and rainfall, is a metric that can be used to represent variations in water availability within the landscape. Since aridity metrics can be obtained at fine spatial scales, they could theoretically be used to investigate how decomposition processes vary across complex landscapes. In this study, four research sites were selected in tall open sclerophyll forest along an aridity gradient (Budyko dryness index ranging from 1.56 to 2.22) where microclimate, litter moisture and soil moisture were monitored continuously for one year. Litter bags were packed to estimate decomposition rates (k) using leaves of a tree species not present in the study area (Eucalyptus globulus) in order to avoid home-field advantage effects. Litter mass loss was measured to assess the activity of macro-decomposers (6 mm litter bag mesh size), meso-decomposers (1 mm mesh), microbes above-ground (0.2 mm mesh) and microbes below-ground (2 cm depth, 0.2 mm mesh). Four replicates of each set of bags were installed at each site, and bags were collected 1, 2, 4, 7 and 12 months after installation. We first tested whether differences in microclimate due to slope orientation have significant effects on decomposition processes. Then the dryness index was related to decomposition rates to evaluate whether small-scale variation in decomposition can be predicted using readily available information on rainfall and radiation. Decomposition rates (k), calculated by fitting single-pool negative exponential models, generally
NASA Astrophysics Data System (ADS)
Afra, Sardar; Gildin, Eduardo
2016-09-01
Parameter estimation through robust parameterization techniques has been addressed in many works associated with history matching and inverse problems. Reservoir models are in general complex, nonlinear, and large-scale with respect to the large number of states and unknown parameters. Thus, a practical approach that replaces the original set of highly correlated unknown parameters with a non-correlated set of lower dimensionality, while capturing the most significant features of the original set, is of high importance. Furthermore, de-correlating the system's parameters while keeping the geological description intact is critical to controlling the ill-posed nature of such problems. We introduce the advantages of a new low-dimensional parameterization approach for reservoir characterization applications utilizing multilinear-algebra-based techniques such as the higher-order singular value decomposition (HOSVD). In tensor-based approaches like HOSVD, 2D permeability images are treated as they are, i.e., the data structure is preserved, whereas in conventional dimensionality reduction algorithms like SVD the data have to be vectorized. Hence, compared to classical methods, greater redundancy reduction with less information loss can be achieved by decreasing the redundancies present in all dimensions. In other words, the HOSVD approximation yields a more compact data representation, in the least-squares sense and in terms of geological consistency, than classical algorithms. We examined the performance of the proposed parameterization technique against the SVD approach on the SPE10 benchmark reservoir model as well as on synthetic channelized permeability maps to demonstrate the capability of the proposed method. Moreover, to acquire statistical consistency, we repeat all experiments for a set of 1000 unknown geological samples and provide a comparison using RMSE analysis. Results show that, for a fixed compression ratio, the performance of the proposed approach
Biogeochemistry of Decomposition and Detrital Processing
NASA Astrophysics Data System (ADS)
Sanderman, J.; Amundson, R.
2003-12-01
Decomposition is a key ecological process that roughly balances net primary production in terrestrial ecosystems and is an essential process in resupplying nutrients to the plant community. Decomposition consists of three concurrent processes: comminution or fragmentation, leaching of water-soluble compounds, and microbial catabolism. Decomposition can also be viewed as a sequential process, which Eijsackers and Zehnder (1990) compare to a Russian matriochka doll. Soil macrofauna fragment and partially solubilize plant residues, facilitating establishment of a community of decomposer microorganisms. This decomposer community gradually shifts as the most easily degraded plant compounds are utilized and the more recalcitrant materials begin to accumulate. Given enough time and the proper environmental conditions, most naturally occurring compounds can be completely mineralized to inorganic forms. Simultaneously with mineralization, the process of humification acts to transform a fraction of the plant residues into stable soil organic matter (SOM) or humus. For reference, Schlesinger (1990) estimated that only ˜0.7% of detritus eventually becomes stabilized into humus. Decomposition plays a key role in the cycling of most plant macro- and micronutrients and in the formation of humus. Figure 1 places the roles of detrital processing and mineralization within the context of the biogeochemical cycling of essential plant nutrients. Chapin (1991) found that while the atmosphere supplied 4% of nitrogen and mineral weathering supplied no nitrogen and <1% of phosphorus, internal nutrient recycling is the source for >95% of all the nitrogen and phosphorus uptake by tundra species in Barrow, Alaska. In a cool temperate forest, nutrient recycling accounted for 93%, 89%, 88%, and 65% of total sources for nitrogen, phosphorus, potassium, and calcium, respectively (Chapin, 1991). Figure 1. A decomposition-centric biogeochemical model of nutrient cycling. Although there is significant
Tensoral: A system for post-processing turbulence simulation data
NASA Technical Reports Server (NTRS)
Dresselhaus, Eliot
1993-01-01
Many computer simulations in engineering and science -- and especially in computational fluid dynamics (CFD) -- produce huge quantities of numerical data. These data are often so large as to make even relatively simple post-processing of this data unwieldy. The data, once computed and quality-assured, is most likely analyzed by only a few people. As a result, much useful numerical data is under-utilized. Since future state-of-the-art simulations will produce even larger datasets, will use more complex flow geometries, and will be performed on more complex supercomputers, data management issues will become increasingly cumbersome. My goal is to provide software which will automate the present and future task of managing and post-processing large turbulence datasets. My research has focused on the development of these software tools -- specifically, through the development of a very high-level language called 'Tensoral'. The ultimate goal of Tensoral is to convert high-level mathematical expressions (tensor algebra, calculus, and statistics) into efficient low-level programs which numerically calculate these expressions given simulation datasets. This approach to the database and post-processing problem has several advantages. Using Tensoral the numerical and data management details of a simulation are shielded from the concerns of the end user. This shielding is carried out without sacrificing post-processor efficiency and robustness. Another advantage of Tensoral is that its very high-level nature lends itself to portability across a wide variety of computing (and supercomputing) platforms. This is especially important considering the rapidity of changes in supercomputing hardware.
Tensor Algebra Library for NVidia Graphics Processing Units
Liakh, Dmitry
2015-03-16
This is a general purpose math library implementing basic tensor algebra operations on NVidia GPU accelerators. This software is a tensor algebra library that can perform basic tensor algebra operations, including tensor contractions, tensor products, tensor additions, etc., on NVidia GPU accelerators, asynchronously with respect to the CPU host. It supports a simultaneous use of multiple NVidia GPUs. Each asynchronous API function returns a handle which can later be used for querying the completion of the corresponding tensor algebra operation on a specific GPU. The tensors participating in a particular tensor operation are assumed to be stored in local RAM of a node or GPU RAM. The main research area where this library can be utilized is the quantum many-body theory (e.g., in electronic structure theory).
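The library's own CUDA API is not reproduced here; as a hedged CPU analogue, a typical tensor contraction of the kind such a library performs can be written with numpy.einsum, which makes the index bookkeeping explicit:

```python
import numpy as np

rng = np.random.default_rng(0)

# contraction over two shared indices: C[a,b] = sum_{i,j} A[a,i,j] * B[i,j,b]
A = rng.standard_normal((4, 5, 6))
B = rng.standard_normal((5, 6, 3))
C = np.einsum('aij,ijb->ab', A, B)

# the same contraction as a matrix product over the flattened shared indices,
# which is how GPU tensor libraries typically map contractions onto GEMM
C_ref = A.reshape(4, 5 * 6) @ B.reshape(5 * 6, 3)
```

Both formulations agree; the reshape-to-GEMM view is what makes such contractions efficient on GPU hardware.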
Diffusion tensor fiber tracking on graphics processing units.
Mittmann, Adiel; Comunello, Eros; von Wangenheim, Aldo
2008-10-01
Diffusion tensor magnetic resonance imaging has been successfully applied to the process of fiber tracking, which determines the location of fiber bundles within the human brain. This process, however, can be quite lengthy when run on a regular workstation. We present a means of executing this process by making use of the graphics processing units of computers' video cards, which provide a low-cost parallel execution environment that algorithms like fiber tracking can benefit from. With this method we have achieved performance gains varying from 14 to 40 times on common computers. Because of accuracy issues inherent to current graphics processing units, we define a variation index in order to assess how close the results obtained with our method are to those generated by programs running on the central processing units of computers. This index shows that results produced by our method are acceptable when compared to those of traditional programs.
Transmit Array Interpolation for DOA Estimation via Tensor Decomposition in 2-D MIMO Radar
NASA Astrophysics Data System (ADS)
Cao, Ming-Yang; Vorobyov, Sergiy A.; Hassanien, Aboulnasr
2017-10-01
In this paper, we propose a two-dimensional (2D) joint transmit array interpolation and beamspace design for planar-array mono-static multiple-input multiple-output (MIMO) radar for direction-of-arrival (DOA) estimation via tensor modeling. Our underlying idea is to map the transmit array to a desired array and suppress the transmit power outside the spatial sector of interest. In doing so, the signal-to-noise ratio is improved at the receive array. Then, we fold the received data along each dimension into a tensorial structure and apply tensor-based methods to obtain DOA estimates. In addition, we derive a closed-form expression for the DOA estimation bias caused by interpolation errors and argue for using a specially designed look-up table to compensate for the bias. The corresponding Cramer-Rao bound (CRB) is also derived. Simulation results are provided to show the performance of the proposed method and to compare it to the CRB.
Middleton, Beth A.
2014-01-01
A cornerstone of ecosystem ecology, decomposition was recognized as a fundamental process driving the exchange of energy in ecosystems by early ecologists such as Lindeman (1942) and Odum (1960). In the history of ecology, studies of decomposition were incorporated into the International Biological Program in the 1960s to compare the nature of organic matter breakdown in various ecosystem types. Such studies still have an important role in ecological research today. More recent refinements have brought debates on the relative roles of microbes, invertebrates and the environment in the breakdown and release of carbon into the atmosphere, as well as on how nutrient cycling, production and other ecosystem processes regulated by decomposition may shift with climate change. Therefore, this bibliography examines the primary literature related to organic matter breakdown, but it also explores topics in which decomposition plays a key supporting role, including vegetation composition, latitudinal gradients, altered ecosystems, anthropogenic impacts, carbon storage, and climate change models. Knowledge of these topics is relevant both to the study of ecosystem ecology and to projections of future conditions for human societies.
NASA Astrophysics Data System (ADS)
von Larcher, Thomas; Blome, Therese; Klein, Rupert; Schneider, Reinhold; Wolf, Sebastian; Huber, Benjamin
2016-04-01
Handling high-dimensional data sets, such as those occurring in turbulent flows or in certain types of multiscale behaviour in the Geosciences, is one of the big challenges in numerical analysis and scientific computing. A suitable solution is to represent those large data sets in an appropriate compact form. In this context, tensor product decomposition methods currently emerge as an important tool. One reason is that these methods often enable one to attack high-dimensional problems successfully; another is that they allow for very compact representations of large data sets. We follow the novel Tensor-Train (TT) decomposition method to support the development of improved understanding of the multiscale behavior and of compact storage schemes for solutions of such problems. One long-term goal of the project is the construction of a self-consistent closure for Large Eddy Simulations (LES) of turbulent flows that explicitly exploits the tensor product approach's capability of capturing self-similar structures. Secondly, we focus on a mixed deterministic-stochastic subgrid scale modelling strategy currently under development for application in Finite Volume LES codes. Advanced methods of time series analysis for the data-based construction of stochastic models with inherently non-stationary statistical properties, together with concepts of information theory based on a modified Akaike information criterion and on the Bayesian information criterion for model discrimination, are used to construct surrogate models for the non-resolved flux fluctuations. Vector-valued auto-regressive models with external influences form the basis for the modelling approach [1], [2], [4]. Here, we present the reconstruction capabilities of the two modeling approaches tested against 3D turbulent channel flow data computed by direct numerical simulation (DNS) for an incompressible, isothermal fluid at Reynolds number Reτ = 590 (computed by [3]).
Leistritz, Lutz; Witte, Herbert; Schiecke, Karin
2015-01-01
Quantification of functional connectivity in physiological networks is frequently performed by means of time-variant partial directed coherence (tvPDC), based on time-variant multivariate autoregressive models. The principal advantage of tvPDC lies in combining directionality, time variance, and frequency selectivity simultaneously, offering a more differentiated view into complex brain networks. Yet the advantages specific to tvPDC also produce a large number of results, leading to serious problems in interpretability. To counter this issue, we propose the decomposition of multi-dimensional tvPDC results into a sum of rank-1 outer products. This leads to a data condensation which enables an advanced interpretation of results. Furthermore, it is thereby possible to uncover inherent interaction patterns of induced neuronal subsystems by limiting the decomposition to several relevant channels, while retaining the global influence determined by the preceding multivariate AR estimation and tvPDC calculation of the entire scalp. Finally, comparison between subjects is considerably easier, as individual tvPDC results are summarized within a comprehensive model equipped with subject-specific loading coefficients. A proof-of-principle of the approach is provided by means of simulated data; EEG data of an experiment concerning visual evoked potentials are used to demonstrate the applicability to real data. PMID:26046537
Adaptation of motor imagery EEG classification model based on tensor decomposition
NASA Astrophysics Data System (ADS)
Li, Xinyang; Guan, Cuntai; Zhang, Haihong; Keng Ang, Kai; Ong, Sim Heng
2014-10-01
Objective. Session-to-session nonstationarity is inherent in brain-computer interfaces based on electroencephalography. The objective of this paper is to quantify the mismatch between the training model and test data caused by nonstationarity and to adapt the model towards minimizing the mismatch. Approach. We employ a tensor model to estimate the mismatch in a semi-supervised manner, and the estimate is regularized in the discriminative objective function. Main results. The performance of the proposed adaptation method was evaluated on a dataset recorded from 16 subjects performing motor imagery tasks on different days. The classification results validated the advantage of the proposed method in comparison with other regularization-based or spatial filter adaptation approaches. Experimental results also showed that there is a significant correlation between the quantified mismatch and the classification accuracy. Significance. The proposed method approached the nonstationarity issue from the perspective of data-model mismatch, which is more direct than data variation measurement. The results also demonstrated that the proposed method is effective in enhancing the performance of the feature extraction model.
NASA Technical Reports Server (NTRS)
Bergan, Andrew C.; Leone, Frank A., Jr.
2016-01-01
A new model is proposed that represents the kinematics of kink-band formation and propagation within the framework of a mesoscale continuum damage mechanics (CDM) model. The model uses the recently proposed deformation gradient decomposition approach to represent a kink band as a displacement jump via a cohesive interface that is embedded in an elastic bulk material. The model is capable of representing the combination of matrix failure in the frame of a misaligned fiber and instability due to shear nonlinearity. In contrast to conventional linear or bilinear strain softening laws used in most mesoscale CDM models for longitudinal compression, the constitutive response of the proposed model includes features predicted by detailed micromechanical models. These features include: 1) the rotational kinematics of the kink band, 2) an instability when the peak load is reached, and 3) a nonzero plateau stress under large strains.
Nested Vector-Sensor Array Processing via Tensor Modeling (Briefing Charts)
2014-04-24
… the matrix singular value decomposition (SVD) [3]. The HOSVD of the tensor T can be written as T = K ×1 U1 ×2 U2 ×3 U3 ×4 U4, where U1, U3 ∈ C^(N̄×N̄) and U2, U4 ∈ C^(Nc×Nc) are orthonormal matrices provided by the SVD of the i-mode matricization of the tensor T: T(i) = Ui Λi Vi^H, and K ∈ C^(N̄×Nc×N̄×Nc) is the core tensor.
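The HOSVD construction above (a factor matrix from the SVD of each mode-i matricization, plus a core tensor K recombined by n-mode products) can be sketched in a few lines of NumPy. This is a generic illustration; the 4-way shape below is hypothetical, not the array dimensions from the briefing:

```python
import numpy as np

def unfold(T, mode):
    """Mode-i matricization: move axis `mode` to the front and flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def nmode_product(T, M, mode):
    """n-mode product T x_mode M: multiply M into axis `mode` of T."""
    return np.moveaxis(np.tensordot(M, np.moveaxis(T, mode, 0), axes=1), 0, mode)

def hosvd(T):
    """HOSVD: factor matrices Ui from the SVD of each unfolding, and the
    core tensor K with T = K x1 U1 x2 U2 ... (all-mode product)."""
    U = [np.linalg.svd(unfold(T, i), full_matrices=False)[0] for i in range(T.ndim)]
    K = T
    for i, Ui in enumerate(U):
        K = nmode_product(K, Ui.conj().T, i)   # project onto each mode's basis
    return K, U

rng = np.random.default_rng(0)
T = rng.standard_normal((4, 3, 4, 3))          # illustrative N x Nc x N x Nc array
K, U = hosvd(T)
R = K                                          # reconstruct T from the core
for i, Ui in enumerate(U):
    R = nmode_product(R, Ui, i)
assert np.allclose(R, T)
```

Since every Ui here is square and orthonormal, the reconstruction is exact; truncating the columns of the Ui instead yields the usual low-multilinear-rank approximation.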
A patch-based tensor decomposition algorithm for M-FISH image classification.
Wang, Min; Huang, Ting-Zhu; Li, Jingyao; Wang, Yu-Ping
2017-06-01
Multiplex-fluorescence in situ hybridization (M-FISH) is a chromosome imaging technique which can be used to detect chromosomal abnormalities such as translocations, deletions, duplications, and inversions. Chromosome classification from M-FISH imaging data is a key step to implement the technique. In the classified M-FISH image, each pixel in a chromosome is labeled with a class index and drawn with a pseudo-color so that geneticists can easily conduct diagnosis, for example, identifying chromosomal translocations by examining color changes between chromosomes. However, the information of pixels in a neighborhood is often overlooked by existing approaches. In this work, we assume that the pixels in a patch belong to the same class and use the patch to represent the center pixel's class information, by which we can exploit the correlations of neighboring pixels and the structural information across different spectral channels for the classification. On the basis of this assumption, we propose a patch-based classification algorithm using higher order singular value decomposition (HOSVD). The developed method has been tested on a comprehensive M-FISH database that we established, demonstrating improved performance. When compared with other pixel-wise M-FISH image classifiers such as fuzzy c-means clustering (FCM), adaptive fuzzy c-means clustering (AFCM), improved adaptive fuzzy c-means clustering (IAFCM), and sparse representation classification (SparseRC) methods, the proposed method gave the highest correct classification ratio (CCR), which can translate into improved diagnosis of genetic diseases and cancers. © 2016 International Society for Advancement of Cytometry.
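The patch assumption described above — each pixel represented by the k×k neighborhood around it, stacked across spectral channels into a small tensor — can be illustrated with a minimal NumPy sketch. The toy image and patch size are hypothetical; this is not the authors' implementation:

```python
import numpy as np

def patch_tensor(img, k=3):
    """For each interior pixel, collect the k x k neighborhood in every
    spectral channel, giving one (k, k, channels) patch tensor per pixel.
    `img` has shape (rows, cols, channels); border pixels are skipped
    for simplicity in this sketch."""
    r = k // 2
    H, W, C = img.shape
    patches = np.empty((H - 2 * r, W - 2 * r, k, k, C))
    for i in range(r, H - r):
        for j in range(r, W - r):
            patches[i - r, j - r] = img[i - r:i + r + 1, j - r:j + r + 1, :]
    return patches

img = np.arange(5 * 5 * 2, dtype=float).reshape(5, 5, 2)  # toy 5x5 image, 2 channels
P = patch_tensor(img, k=3)
# each interior pixel now carries a 3x3x2 patch tensor as its class evidence
```

In the paper's pipeline these per-pixel patch tensors would then be fed to an HOSVD-based classifier; here only the patch-extraction step is sketched.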
Tracking Hierarchical Processing in Morphological Decomposition with Brain Potentials
ERIC Educational Resources Information Center
Lavric, Aureliu; Elchlepp, Heike; Rastle, Kathleen
2012-01-01
One important debate in psycholinguistics concerns the nature of morphological decomposition processes in visual word recognition (e.g., darkness = {dark} + {-ness}). One theory claims that these processes arise during orthographic analysis and prior to accessing meaning (Rastle & Davis, 2008), and another argues that these processes arise through…
Tensor Invariant Processing for Munitions/Clutter Classifications: Interim Report on SNR and Background Leveling Requirements
2012-12-01
… Camp Beale in 2011 and found no impact due to signal-to-noise ratio (SNR) and background leveling effects. However, the minimum polarizability …
Rai, Prashant; Sargsyan, Khachik; Najm, Habib; ...
2017-03-07
Here, a new method is proposed for a fast evaluation of high-dimensional integrals of potential energy surfaces (PES) that arise in many areas of quantum dynamics. It decomposes a PES into a canonical low-rank tensor format, reducing its integral into a relatively short sum of products of low-dimensional integrals. The decomposition is achieved by the alternating least squares (ALS) algorithm, requiring only a small number of single-point energy evaluations. Therefore, it eliminates force-constant evaluation as the hotspot of many quantum dynamics simulations and also possibly lifts the curse of dimensionality. This general method is applied to the anharmonic vibrational zero-point and transition energy calculations of molecules using the second-order diagrammatic vibrational many-body Green's function (XVH2) theory with a harmonic-approximation reference. In this application, high-dimensional PES and Green's functions are both subjected to a low-rank decomposition. Evaluating the molecular integrals over a low-rank PES and Green's functions as sums of low-dimensional integrals using the Gauss-Hermite quadrature, this canonical-tensor-decomposition-based XVH2 (CT-XVH2) achieves an accuracy of 0.1 cm-1 or better and nearly an order of magnitude speedup as compared with the original algorithm using force constants for water and formaldehyde.
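The core operation above — decomposing a multiway array into a short sum of rank-one terms by alternating least squares — can be sketched generically in NumPy. This is a textbook CP-ALS illustration on a toy 3-way array, not the authors' PES code; the rank and dimensions are hypothetical:

```python
import numpy as np

def khatri_rao(A, B):
    """Column-wise Kronecker product of (I, R) and (J, R) -> (I*J, R)."""
    return np.einsum('ir,jr->ijr', A, B).reshape(-1, A.shape[1])

def cp_als(T, R, iters=200, seed=0):
    """Rank-R canonical polyadic decomposition of a 3-way array by
    alternating least squares: fix two factors, solve for the third."""
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    A, B, C = (rng.standard_normal((n, R)) for n in (I, J, K))
    T0 = T.reshape(I, -1)                      # mode-1 unfolding
    T1 = np.moveaxis(T, 1, 0).reshape(J, -1)   # mode-2 unfolding
    T2 = np.moveaxis(T, 2, 0).reshape(K, -1)   # mode-3 unfolding
    for _ in range(iters):
        A = T0 @ np.linalg.pinv(khatri_rao(B, C)).T
        B = T1 @ np.linalg.pinv(khatri_rao(A, C)).T
        C = T2 @ np.linalg.pinv(khatri_rao(A, B)).T
    return A, B, C

rng = np.random.default_rng(1)
A0, B0, C0 = (rng.standard_normal((n, 2)) for n in (4, 5, 6))
T = np.einsum('ir,jr,kr->ijk', A0, B0, C0)     # exact rank-2 toy tensor
A, B, C = cp_als(T, R=2)
err = np.linalg.norm(T - np.einsum('ir,jr,kr->ijk', A, B, C)) / np.linalg.norm(T)
```

Each ALS step is a linear least-squares solve, which is why only function evaluations (here, tensor entries) are needed rather than derivatives; the paper exploits exactly this property to avoid force-constant evaluations.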
Decomposition: A Strategy for Query Processing.
ERIC Educational Resources Information Center
Wong, Eugene; Youssefi, Karel
Multivariable queries can be processed in the data base management system INGRES. The general procedure is to decompose the query into a sequence of one-variable queries using two processes. One process is reduction which requires breaking off components of the query which are joined to it by a single variable. The other process,…
Theoretical estimate on tensor-polarization asymmetry in proton-deuteron Drell-Yan process
NASA Astrophysics Data System (ADS)
Kumano, S.; Song, Qin-Tao
2016-09-01
Tensor-polarized parton distribution functions are new quantities in spin-1 hadrons such as the deuteron, and they could probe new quark-gluon dynamics in hadron and nuclear physics. In charged-lepton deep inelastic scattering, they are studied via the twist-2 structure functions b1 and b2. The HERMES Collaboration found unexpectedly large b1 values compared to a naive theoretical expectation based on the standard deuteron model. The situation should be significantly improved in the near future by an approved experiment to measure b1 at the Thomas Jefferson National Accelerator Facility (JLab). There is also an interesting indication in the HERMES result that a finite antiquark tensor polarization exists. It could play an important role in clarifying the mechanism of tensor structure at the quark-gluon level. The tensor-polarized antiquark distributions are not easily determined from charged-lepton deep inelastic scattering; however, they can be measured in a proton-deuteron Drell-Yan process with a tensor-polarized deuteron target. In this article, we estimate the tensor-polarization asymmetry for a possible Fermilab Main-Injector experiment by using optimum tensor-polarized parton distribution functions to explain the HERMES measurement. We find that the asymmetry is typically a few percent. If it is measured, it could probe new hadron physics, and such studies could create an interesting field of high-energy spin physics. In addition, we find that a significant tensor-polarized gluon distribution should exist due to Q2 evolution, even if it were zero at a low Q2 scale. The tensor-polarized gluon distribution has never been observed, so it is an interesting future project.
Relativized hierarchical decomposition of Markov decision processes.
Ravindran, B
2013-01-01
Reinforcement Learning (RL) is a popular paradigm for sequential decision making under uncertainty. A typical RL algorithm operates with only limited knowledge of the environment and with limited feedback on the quality of the decisions. To operate effectively in complex environments, learning agents require the ability to form useful abstractions, that is, the ability to selectively ignore irrelevant details. It is difficult to derive a single representation that is useful for a large problem setting. In this chapter, we describe a hierarchical RL framework that incorporates an algebraic framework for modeling task-specific abstraction. The basic notion that we will explore is that of a homomorphism of a Markov Decision Process (MDP). We mention various extensions of the basic MDP homomorphism framework in order to accommodate different commonly understood notions of abstraction, namely, aspects of selective attention. Parts of the work described in this chapter have been reported earlier in several papers (Narayanmurthy and Ravindran, 2007, 2008; Ravindran and Barto, 2002, 2003a,b; Ravindran et al., 2007).
The ergodic decomposition of stationary discrete random processes
NASA Technical Reports Server (NTRS)
Gray, R. M.; Davisson, L. D.
1974-01-01
The ergodic decomposition is discussed, and a version focusing on the structure of individual sample functions of stationary processes is proved for the special case of discrete-time random processes with discrete alphabets. The result is stronger in this case than the usual theorem, and the proof is both intuitive and simple. Estimation-theoretic and information-theoretic interpretations are developed and applied to prove existence theorems for universal source codes, both noiseless and with a fidelity criterion.
Density Functional Studies of Decomposition Processes of Energetic Molecules
1994-11-03
Politzer, Peter; Seminario, Jorge M.; Grice, M. Edward
Analysis of benzoquinone decomposition in solution plasma process
NASA Astrophysics Data System (ADS)
Bratescu, M. A.; Saito, N.
2016-01-01
The decomposition of p-benzoquinone (p-BQ) in Solution Plasma Processing (SPP) was analyzed by Coherent Anti-Stokes Raman Spectroscopy (CARS) by monitoring the change of the anti-Stokes signal intensity of the vibrational transitions of the molecule, during and after SPP. At the very beginning of the SPP treatment, the CARS signal intensities of the ring vibrational molecular transitions increased under the influence of the electric field of the plasma. The results show that plasma influences the p-BQ molecules in two ways: (i) plasma polarizes and orients the molecules in its local electric field, and (ii) the gas-phase plasma supplies hydrogen and hydroxyl radicals to the liquid phase, which reduce or oxidize the molecules, respectively, generating different carboxylic acids. The decomposition of p-BQ after SPP was confirmed by UV-visible absorption spectroscopy and liquid chromatography.
Matrix decomposition graphics processing unit solver for Poisson image editing
NASA Astrophysics Data System (ADS)
Lei, Zhao; Wei, Li
2012-10-01
In recent years, gradient-domain methods have been widely discussed in the image processing field, including seamless cloning and image stitching. These algorithms are commonly carried out by solving a large sparse linear system: the Poisson equation. However, solving the Poisson equation is a computation- and memory-intensive task, which makes it unsuitable for real-time image editing. A new matrix decomposition graphics processing unit (GPU) solver (MDGS) is proposed to address the problem. A matrix decomposition method is used to distribute the work among GPU threads, so that MDGS takes full advantage of the computing power of current GPUs. Additionally, MDGS is a hybrid solver (combining both direct and iterative techniques) and has a two-level architecture. These enable MDGS to generate solutions identical to those of the common Poisson methods and to achieve a high convergence rate in most cases. This approach is advantageous in terms of parallelizability and memory footprint, enabling real-time image processing and a wide range of applications.
A decomposition of irreversible diffusion processes without detailed balance
NASA Astrophysics Data System (ADS)
Qian, Hong
2013-05-01
As a generalization of deterministic, nonlinear conservative dynamical systems, a notion of canonical conservative dynamics with respect to a positive, differentiable stationary density ρ(x) is introduced: ẋ = j(x), in which ∇·(ρ(x)j(x)) = 0. Such systems have a conserved "generalized free energy function" F[u] = ∫u(x, t) ln(u(x, t)/ρ(x)) dx in phase space, with a density flow u(x, t) satisfying ∂u/∂t = -∇·(ju). Any general stochastic diffusion process without detailed balance, in terms of its Fokker-Planck equation, can be decomposed into a reversible diffusion process with detailed balance and a canonical conservative dynamics. This decomposition can be rigorously established in a function space with inner product defined as ⟨ϕ, ψ⟩ = ∫ρ^(-1)(x)ϕ(x)ψ(x) dx. Furthermore, a law for balancing F[u] can be obtained: dF[u(x, t)]/dt = Ein(t) - ep(t) ≤ 0, where the "source" Ein(t) ⩾ 0 and the "sink" ep(t) ⩾ 0 are known as house-keeping heat and entropy production, respectively. A reversible diffusion has Ein(t) = 0. For a linear (Ornstein-Uhlenbeck) diffusion process, our decomposition is equivalent to the previous approaches developed by Graham and Ao, as well as to the theory of large deviations. In terms of two different formulations of time reversal for the same stochastic process, the meanings of dissipative and conservative stationary dynamics are discussed.
Accelerated decomposition techniques for large discounted Markov decision processes
NASA Astrophysics Data System (ADS)
Larach, Abdelhadi; Chafik, S.; Daoui, C.
2017-03-01
Many hierarchical techniques to solve large Markov decision processes (MDPs) are based on the partition of the state space into strongly connected components (SCCs) that can be classified into levels. In each level, smaller problems named restricted MDPs are solved, and then these partial solutions are combined to obtain the global solution. In this paper, we first propose a novel algorithm, a variant of Tarjan's algorithm, that simultaneously finds the SCCs and the levels they belong to. Second, a new definition of the restricted MDPs is presented to improve hierarchical solutions of discounted MDPs using the value iteration (VI) algorithm based on a list of state-action successors. Finally, a robotic motion-planning example and the experimental results are presented to illustrate the benefit of the proposed decomposition algorithms.
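The first step described above — finding the SCCs of the state graph and assigning each one a level in the condensation DAG — can be sketched in pure Python. This is a generic Tarjan-plus-levels illustration (sinks at level 0, each SCC one above the deepest SCC it reaches), not the paper's simultaneous variant; the toy graph is hypothetical:

```python
def tarjan_scc(graph):
    """Tarjan's algorithm: returns the strongly connected components
    (each a set of nodes) of a directed graph given as {node: [successors]}."""
    index, low, on_stack = {}, {}, set()
    stack, sccs, counter = [], [], [0]

    def visit(v):
        index[v] = low[v] = counter[0]; counter[0] += 1
        stack.append(v); on_stack.add(v)
        for w in graph.get(v, []):
            if w not in index:
                visit(w)
                low[v] = min(low[v], low[w])
            elif w in on_stack:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:           # v is the root of an SCC
            comp = set()
            while True:
                w = stack.pop(); on_stack.discard(w); comp.add(w)
                if w == v:
                    break
            sccs.append(comp)

    for v in graph:
        if v not in index:
            visit(v)
    return sccs

def scc_levels(graph, sccs):
    """Level of an SCC: 0 if it reaches no other SCC, else 1 + the max
    level among the SCCs it can reach in the condensation DAG."""
    comp_of = {v: i for i, c in enumerate(sccs) for v in c}
    succ = {i: set() for i in range(len(sccs))}
    for v, ws in graph.items():
        for w in ws:
            if comp_of[v] != comp_of[w]:
                succ[comp_of[v]].add(comp_of[w])
    level = {}
    def depth(i):
        if i not in level:
            level[i] = 1 + max((depth(j) for j in succ[i]), default=-1)
        return level[i]
    return [depth(i) for i in range(len(sccs))]

# Toy state graph: the SCC {0,1} can reach the sink SCC {2,3}
g = {0: [1], 1: [0, 2], 2: [3], 3: [2]}
sccs = tarjan_scc(g)
levels = scc_levels(g, sccs)
```

In a hierarchical MDP solver, the restricted MDPs at level 0 would be solved first, with their values propagated upward through the levels.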
NASA Astrophysics Data System (ADS)
Wang, Lu; Albera, Laurent; Kachenoura, Amar; Shu, Huazhong; Senhadji, Lotfi
2014-12-01
Semi-symmetric three-way arrays are essential tools in blind source separation (BSS), particularly in independent component analysis (ICA). These arrays can be built by resorting to higher order statistics of the data. The canonical polyadic (CP) decomposition of such semi-symmetric three-way arrays allows us to identify the so-called mixing matrix, which contains the information about the intensities of some latent source signals present in the observation channels. In addition, in many applications, such as magnetic resonance spectroscopy (MRS), the columns of the mixing matrix are viewed as relative concentrations of the spectra of the chemical components. Therefore, the two loading matrices of the three-way array, which are equal to the mixing matrix, are nonnegative. Most existing CP algorithms handle the symmetry and the nonnegativity separately. Up to now, very few of them consider both the semi-nonnegativity and the semi-symmetry structure of the three-way array. Nevertheless, like all methods based on line search, trust region strategies, and alternating optimization, they appear to be dependent on initialization, requiring in practice a multi-initialization procedure. In order to overcome this drawback, we propose two new methods to solve the problem of CP decomposition of semi-nonnegative semi-symmetric three-way arrays. Firstly, we rewrite the constrained optimization problem as an unconstrained one. In fact, the nonnegativity constraint of the two symmetric modes is ensured by means of a square change of variable. Secondly, a Jacobi-like optimization procedure is adopted because of its good convergence property. More precisely, the two new methods use LU and QR matrix factorizations, respectively, which consist in formulating high-dimensional optimization problems as several sequential polynomial and rational subproblems. By using both LU
Catalytic hydrothermal processing of microalgae: decomposition and upgrading of lipids.
Biller, P; Riley, R; Ross, A B
2011-04-01
Hydrothermal processing of high-lipid feedstocks such as microalgae is an alternative method of oil extraction which has obvious benefits for high-moisture biomass. A range of microalgae and lipids extracted from terrestrial oil seeds have been processed at 350 °C, at pressures of 150-200 bar in water. Hydrothermal liquefaction is shown to convert the triglycerides to fatty acids and alkanes in the presence of certain heterogeneous catalysts. This investigation has compared the composition of lipids and free fatty acids from solvent extraction to those from hydrothermal processing. The initial decomposition products include free fatty acids and glycerol, and the potential for de-oxygenation using heterogeneous catalysts has been investigated. The results indicate that the bio-crude yields from the liquefaction of microalgae were increased only slightly with the use of heterogeneous catalysts, but the higher heating value (HHV) and the level of de-oxygenation increased by up to 10%. Copyright © 2011 Elsevier Ltd. All rights reserved.
ENVIRONMENTAL ASSESSMENT OF THE BASE CATALYZED DECOMPOSITION (BCD) PROCESS
This report summarizes laboratory-scale, pilot-scale, and field performance data on BCD (Base Catalyzed Decomposition) technology, collected to date by various governmental, academic, and private organizations.
CO2 decomposition using electrochemical process in molten salts
NASA Astrophysics Data System (ADS)
Otake, Koya; Kinoshita, Hiroshi; Kikuchi, Tatsuya; Suzuki, Ryosuke O.
2012-08-01
The electrochemical decomposition of CO2 gas to carbon and oxygen gas in LiCl-Li2O and CaCl2-CaO molten salts was studied. This process consists of the electrochemical reduction of Li2O and CaO, as well as the thermal reduction of CO2 gas by the respective metallic Li and Ca. Two kinds of ZrO2 solid electrolytes were tested as an oxygen ion conductor, and the electrolytes removed oxygen ions from the molten salts to the outside of the reactor. After electrolysis in both salts, aggregations of nanometer-scale amorphous carbon and rod-like graphite crystals were observed by transmission electron microscopy. When 9.7% CO2-Ar mixed gas was blown into the LiCl-Li2O and CaCl2-CaO molten salts, the current efficiency was evaluated to be 89.7% and 78.5%, respectively, by exhaust gas analysis and the supplied charge. When a solid electrolyte with higher ionic conductivity was used, the current and carbon production became larger. It was found that the rate-determining step is the diffusion of oxygen ions into the ZrO2 solid electrolyte.
Wang, Kunping; Guo, Jinsong; Yang, Min; Junji, Hirotsuji; Deng, Rongsen
2009-03-15
The decomposition of two haloacetic acids (HAAs), dichloroacetic acid (DCAA) and trichloroacetic acid (TCAA), in water was studied by means of single oxidants (ozone, UV radiation) and by the advanced oxidation processes (AOPs) constituted by the combinations O3/UV, H2O2/UV, O3/H2O2, and O3/H2O2/UV. The concentrations of HAAs were analyzed at specified time intervals to elucidate their decomposition. Single O3 or UV did not result in perceptible decomposition of HAAs within the applied reaction time. O3/UV proved to be the most suitable of the six oxidation methods for the decomposition of DCAA and TCAA in water. DCAA was decomposed more easily than TCAA by AOPs. For O3/UV in the semi-continuous mode, the effective utilization rate of ozone for HAA decomposition decreased with ozone addition. The kinetics of HAA decomposition by O3/UV and the influence of coexistent humic acids and HCO3- on the decomposition process were investigated. The decomposition of the HAAs by O3/UV followed pseudo-first-order kinetics under constant initial dissolved O3 concentration and fixed UV radiation. The pseudo-first-order rate constant for the decomposition of DCAA was more than four times that for TCAA. Humic acids can cause H2O2 accumulation and a decrease in the rate constants of HAA decomposition in the O3/UV process. The rate constants for the decomposition of DCAA and TCAA decreased by 41.1% and 23.8%, respectively, when humic acids were added at a concentration of 1.2 mg TOC/L. The rate constants decreased by 43.5% and 25.9%, respectively, at an HCO3- concentration of 1.0 mmol/L.
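The pseudo-first-order model used above can be illustrated with a short sketch: for a decay C(t) = C0·exp(-kt), the rate constant is the negative slope of ln(C/C0) versus time. The concentration data and the value k = 0.12 min^-1 below are synthetic illustrations, not the paper's measurements:

```python
import math

def first_order_k(times, concs):
    """Least-squares slope of ln(C/C0) vs t for a pseudo-first-order
    decay C(t) = C0 * exp(-k t); returns the rate constant k (> 0)."""
    y = [math.log(c / concs[0]) for c in concs]
    n = len(times)
    tbar = sum(times) / n
    ybar = sum(y) / n
    slope = sum((t - tbar) * (yi - ybar) for t, yi in zip(times, y)) / \
            sum((t - tbar) ** 2 for t in times)
    return -slope

# Synthetic DCAA-like decay with a hypothetical k = 0.12 min^-1
k_true = 0.12
times = [0, 5, 10, 15, 20]                       # minutes
concs = [1.0 * math.exp(-k_true * t) for t in times]
k_est = first_order_k(times, concs)
```

Fitting the same model to DCAA and TCAA time series would give the ratio of rate constants reported in the abstract (DCAA more than four times TCAA).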
Tensor-based Dictionary Learning for Spectral CT Reconstruction
Zhang, Yanbo; Wang, Ge
2016-01-01
Spectral computed tomography (CT) produces an energy-discriminative attenuation map of an object, extending a conventional image volume with a spectral dimension. In spectral CT, an image can be sparsely represented in each of multiple energy channels, and images are highly correlated among energy channels. Based on these characteristics, we propose a tensor-based dictionary learning method for spectral CT reconstruction. In our method, tensor patches are extracted from an image tensor, which is reconstructed using filtered backprojection (FBP), to form a training dataset. With the Candecomp/Parafac decomposition, a tensor-based dictionary is trained, in which each atom is a rank-one tensor. Then, the trained dictionary is used to sparsely represent image tensor patches during an iterative reconstruction process, and the alternating minimization scheme is adapted for optimization. The effectiveness of our proposed method is validated with both numerically simulated and real preclinical mouse datasets. The results demonstrate that the proposed tensor-based method generally produces superior image quality, and leads to more accurate material decomposition than the currently popular methods. PMID:27541628
NASA Astrophysics Data System (ADS)
Zhu, Ming; Liu, Tingting; Wang, Shu; Zhang, Kesheng
2017-08-01
Existing two-frequency reconstructive methods can only capture primary (single) molecular relaxation processes in excitable gases. In this paper, we present a reconstructive method based on the novel decomposition of frequency-dependent acoustic relaxation spectra to capture the entire molecular multimode relaxation process. This decomposition of acoustic relaxation spectra is developed from the frequency-dependent effective specific heat, indicating that a multi-relaxation process is the sum of the interior single-relaxation processes. Based on this decomposition, we can reconstruct the entire multi-relaxation process by capturing the relaxation times and relaxation strengths of N interior single-relaxation processes, using the measurements of acoustic absorption and sound speed at 2N frequencies. Experimental data for the gas mixtures CO2-N2 and CO2-O2 validate our decomposition and reconstruction approach.
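The decomposition property the method relies on — a multimode relaxation absorption spectrum expressed as the sum of its interior single-relaxation spectra — can be sketched with the standard single-relaxation absorption form, which peaks at 2πfτ = 1 with height ε/2. The relaxation strengths and times below are hypothetical, not fitted CO2-N2 or CO2-O2 values:

```python
import math

def single_relaxation(f, eps, tau):
    """Dimensionless absorption (per wavelength) of one relaxation process
    with relaxation strength eps and relaxation time tau at frequency f."""
    x = 2 * math.pi * f * tau
    return eps * x / (1 + x * x)

def multi_relaxation(f, processes):
    """Decomposition used in the paper: a multi-relaxation spectrum is the
    sum of the interior single-relaxation spectra."""
    return sum(single_relaxation(f, eps, tau) for eps, tau in processes)

# Hypothetical two-mode gas: (strength, relaxation time) per interior process
modes = [(0.2, 1e-5), (0.05, 1e-7)]
f1 = 1 / (2 * math.pi * 1e-5)        # peak frequency of the slow mode
peak1 = multi_relaxation(f1, modes)
```

Reconstruction then amounts to recovering the N (eps, tau) pairs from absorption and sound-speed measurements at 2N frequencies, as the abstract describes.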
Predictability of the Dynamic Mode Decomposition in Coastal Processes
NASA Astrophysics Data System (ADS)
Wang, Ruo-Qian; Herdman, Liv; Stacey, Mark; Barnard, Patrick
2016-11-01
Dynamic Mode Decomposition (DMD) is a model order reduction technique that helps reduce the complexity of computational models, and it is frequently easier to interpret physically than the Proper Orthogonal Decomposition. DMD also produces an eigenvalue for each mode, establishing the mode's rate of growth or decay, but the original DMD cannot produce the contributing weights of the modes. The challenge is selecting the important modes to build a reduced order model. DMD variants have been developed to estimate the weights of each mode. One of the popular methods is called Optimal Mode Decomposition (OMD). This method decomposes the data matrix into a product of the DMD modes, the diagonal weight matrix, and the Vandermonde matrix. The weight matrix can be used to rank the importance of the mode contributions and ultimately leads to a reduced order model for prediction and control purposes. We are currently applying DMD to a numerical simulation of the San Francisco Bay, which features complicated coastal geometry, multiple frequency components, and high periodicity. Since DMD defines modes with specific frequencies, we expected DMD to produce a good approximation, but the preliminary results show that the predictability of the DMD is poor if unimportant modes are dropped according to the OMD. We are currently testing other DMD variants and will report our findings in the presentation.
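The basic (exact) DMD step — SVD of the snapshot matrix, a projected linear operator, and its eigendecomposition giving per-mode eigenvalues — can be sketched in NumPy on a toy linear system. This is generic DMD, not the OMD variant or the San Francisco Bay model; the rotation angle and observation matrix are hypothetical:

```python
import numpy as np

def dmd(X, Y, r):
    """Exact DMD: given snapshot matrices X = [x0..x_{m-1}], Y = [x1..x_m]
    with Y ~= A X, return the leading r DMD eigenvalues and modes."""
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r]
    Atilde = U.conj().T @ Y @ Vh.conj().T / s     # r x r projected operator
    evals, W = np.linalg.eig(Atilde)
    modes = Y @ Vh.conj().T / s @ W               # exact DMD modes
    return evals, modes

# Toy dynamics: a planar rotation by 0.3 rad, observed through a random lift
rng = np.random.default_rng(0)
theta = 0.3
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # eigenvalues exp(+-0.3i)
P = rng.standard_normal((10, 2))                  # 10-dim observation map
states = [np.array([1.0, 0.0])]
for _ in range(40):
    states.append(A @ states[-1])
data = P @ np.array(states).T                     # 10 x 41 snapshot matrix
evals, modes = dmd(data[:, :-1], data[:, 1:], r=2)
# purely oscillatory dynamics: recovered eigenvalues lie on the unit circle
```

The magnitude of each eigenvalue gives the mode's growth or decay rate and its angle the oscillation frequency; what exact DMD does not provide, as the abstract notes, is a weight ranking the contribution of each mode.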
Process characteristics and layout decomposition of self-aligned sextuple patterning
NASA Astrophysics Data System (ADS)
Kang, Weiling; Chen, Yijian
2013-03-01
Self-aligned sextuple patterning (SASP) is a promising technique for scaling the half-pitch of IC features down to the sub-10 nm regime. In this paper, the process characteristics and decomposition methods of both positive-tone (pSASP) and negative-tone (nSASP) techniques are discussed, and a variety of decomposition rules are studied. By using a node-grouping method, the nSASP layout-conflict graph can be significantly simplified. A graph searching and coloring algorithm is developed for feature/color assignment. We demonstrate that by generating assisting mandrels, the nSASP layout decomposition problem can be reduced to an nSADP decomposition problem. The proposed decomposition algorithm is successfully verified on several commonly used 2-D layout examples.
C++ Tensor Toolbox user manual.
Plantenga, Todd D.; Kolda, Tamara Gibson
2012-04-01
The C++ Tensor Toolbox is a software package for computing tensor decompositions. It is based on the Matlab Tensor Toolbox, and is particularly optimized for sparse data sets. This user manual briefly overviews tensor decomposition mathematics, software capabilities, and installation of the package. Tensors (also known as multidimensional arrays or N-way arrays) are used in a variety of applications ranging from chemometrics to network analysis. The Tensor Toolbox provides classes for manipulating dense, sparse, and structured tensors in C++. The Toolbox compiles into libraries and is intended for use with custom applications written by users.
Azo dye Acid Red 27 decomposition kinetics during ozone oxidation and adsorption processes.
Beak, Mi H; Ijagbemi, Christianah O; Kim, Dong S
2009-05-01
To elucidate the effects of ozone dosage, catalysts, and temperature on the azo dye decomposition rate in treatment processes, the decomposition kinetics of Acid Red 27 by ozone was investigated. Acid Red 27 decomposition followed first-order kinetics, with complete dye discoloration within 20 min of ozone treatment. The dye decay rate increased with increasing ozone dosage. Among Mn, Zn, and Ni used as transition metal catalysts during the ozone oxidation process, Mn displayed the greatest catalytic effect, with a significant increase in the rate of decomposition. The rate of decomposition decreased with increasing temperature up to 40 degrees C; beyond 40 degrees C, the decomposition rate increased with temperature. The FT-IR spectra in the range of 1,000-1,800 cm(-1) revealed specific band variations after the ozone oxidation process, indicating structural changes traceable to cleavage of bonds in the benzene ring, the sulphite salt group, and the C-N bond located beside the -N=N- bond. In the (1)H-NMR spectra, the breakdown of the benzene ring was evidenced by the disappearance of the 10 H peaks at 7-8 ppm and the emergence of a new peak at 6.16 ppm. In a parallel batch test of azo dye Acid Red 27 adsorption onto activated carbon, a low adsorption capacity was observed when adsorption was carried out after three minutes of ozone injection, whereas the adsorption process without ozone injection yielded a high adsorption capacity.
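The first-order kinetics reported here have a simple closed form that can be sketched directly; the concentrations and the "~99% removal in 20 min" reading below are illustrative assumptions, not the paper's measured data.

```python
import math

def first_order_conc(c0, k, t):
    """First-order kinetics: dC/dt = -k C  =>  C(t) = C0 * exp(-k t)."""
    return c0 * math.exp(-k * t)

def rate_constant(c0, ct, t):
    """Recover the rate constant k from one later concentration
    measurement via the ln-linear form of the first-order law."""
    return math.log(c0 / ct) / t

# illustrative numbers (not the paper's data): ~99% dye removal in
# 20 min of ozonation implies k = ln(100)/20 per minute
k = rate_constant(100.0, 1.0, 20.0)
```

Plotting ln C(t) against t for such data gives a straight line of slope -k, which is the usual check that a decay is genuinely first order.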
Complex variational mode decomposition for signal processing applications
NASA Astrophysics Data System (ADS)
Wang, Yanxue; Liu, Fuyun; Jiang, Zhansi; He, Shuilong; Mo, Qiuyun
2017-03-01
Complex-valued signals occur in many areas of science and engineering and are thus of fundamental interest. In this work, the complex variational mode decomposition (CVMD) is proposed as a natural and generic extension of the original VMD algorithm to the analysis of complex-valued data. Moreover, the equivalent filter bank structure of the CVMD in the presence of white noise and the effects of center-frequency initialization on the filter bank property are both investigated via numerical experiments. Benefiting from the advantages of the CVMD algorithm, its bi-directional Hilbert time-frequency spectrum is developed as well, in which the positive and negative frequency components are formulated on the positive and negative frequency planes separately. Several applications to real-world complex-valued signals support the analysis.
Schmithorst, Vincent J; Holland, Scott K; Plante, Elena
2011-01-01
Correlation of white matter microstructure with various cognitive processing tasks and with overall intelligence has previously been demonstrated. We investigate the correlation of white matter microstructure with various higher-order auditory processing tasks, including interpretation of speech-in-noise, recognition of low-pass frequency-filtered words, and interpretation of time-compressed sentences at two different values of compression. These tests are typically used to diagnose auditory processing disorder (APD) in children. Our hypothesis is that correlations will be seen between white matter microstructure in tracts connecting the temporal, frontal, and parietal lobes, as well as in callosal pathways. Previous functional imaging studies have shown activation in temporal, frontal, and parietal regions during higher-order auditory processing tasks. In addition, we hypothesize that the regions displaying correlations will vary according to the task, because each task uses a different set of skills. Diffusion tensor imaging (DTI) data were acquired from a cohort of 17 normal-hearing children aged 9 to 11 yrs. Fractional anisotropy (FA), a measure of white matter fiber tract integrity and organization, was computed and correlated on a voxelwise basis with performance on the auditory processing tasks, controlling for age, sex, and full-scale IQ. Divergent correlations of white matter FA depending on the particular auditory processing task were found. Positive correlations were found between FA and speech-in-noise in white matter adjoining prefrontal areas and between FA and filtered words in the corpus callosum. Regions exhibiting correlations with time-compressed sentences varied depending on the degree of compression: the greater degree of compression (with the greatest difficulty) resulted in correlations in white matter adjoining prefrontal areas (dorsal and ventral), whereas the smaller degree of compression (with less difficulty) resulted in
Tak, Hyeong Jun; Kim, Jin Hyun; Son, Su Min
2016-01-01
We investigated the radiologic developmental process of the arcuate fasciculus (AF) using subcomponent diffusion tensor imaging (DTI) analysis in typically developing volunteers. DTI data were acquired from 96 consecutive typically developing children, aged 0–14 years. AF subcomponents, including the posterior, anterior, and direct AF tracts, were analyzed. Success rates of analysis (AR) and fractional anisotropy (FA) values of each subcomponent tract were measured and compared. The AR of all subcomponent tracts except the posterior showed a significant increase with age (P < 0.05). Subcomponent tracts had a specific developmental sequence: first the posterior AF tract, then the anterior AF tract, and last the direct AF tract in identical hemispheres. FA values of all subcomponent tracts, except the right direct AF tract, correlated with subject age (P < 0.05). Increased AR and FA values were observed in female subjects in the young age (0–2 years) group compared with males (P < 0.05). The direct AF tract showed leftward hemispheric asymmetry, and this tendency showed greater consolidation in the older age (3–14 years) groups (P < 0.05). These findings demonstrate the radiologic developmental patterns of the AF from infancy to adolescence using subcomponent DTI analysis. The AF showed a specific developmental sequence, a sex difference at younger ages, and hemispheric asymmetry at older ages. PMID:27482222
Nonlinear color-image decomposition for image processing of a digital color camera
NASA Astrophysics Data System (ADS)
Saito, Takahiro; Aizawa, Haruya; Yamada, Daisuke; Komatsu, Takashi
2009-01-01
This paper extends the BV (bounded variation)-G and/or BV-L1 variational nonlinear image-decomposition approaches, which are considered useful for image processing in a digital color camera, to genuine color-image decomposition approaches. To utilize inter-channel color cross-correlations, this paper first introduces TV (total variation) norms of color differences and TV norms of color sums into the BV-G and/or BV-L1 energy functionals, and then derives denoising-type decomposition algorithms with an over-complete wavelet transform by applying the Besov-norm approximation to the variational problems. Our methods decompose a noisy color image without producing undesirable low-frequency colored artifacts in its separated BV component, and they achieve high-quality color-image decomposition that is very robust against colored random noise.
Palmkvist, Jakob
2014-01-15
We introduce an infinite-dimensional Lie superalgebra which is an extension of the U-duality Lie algebra of maximal supergravity in D dimensions, for 3 ⩽ D ⩽ 7. The level decomposition with respect to the U-duality Lie algebra gives exactly the tensor hierarchy of representations that arises in gauge deformations of the theory described by an embedding tensor, for all positive levels p. We prove that these representations are always contained in those coming from the associated Borcherds-Kac-Moody superalgebra, and we explain why some of the latter representations are not included in the tensor hierarchy. The most remarkable feature of our Lie superalgebra is that it does not admit a triangular decomposition like a (Borcherds-)Kac-Moody (super)algebra. Instead the Hodge duality relations between level p and D − 2 − p extend to negative p, relating the representations at the first two negative levels to the supersymmetry and closure constraints of the embedding tensor.
Stage efficiency in the analysis of thermochemical water decomposition processes
NASA Technical Reports Server (NTRS)
Conger, W. L.; Funk, J. E.; Carty, R. H.; Soliman, M. A.; Cox, K. E.
1976-01-01
The procedure for analyzing thermochemical water-splitting processes using the figure of merit is expanded to include individual stage efficiencies and loss coefficients. The use of these quantities to establish the thermodynamic insufficiencies of each stage is shown. A number of processes are used to illustrate these concepts and procedures and to demonstrate the facility with which process steps contributing most to the cycle efficiency are found. The procedure allows attention to be directed to those steps of the process where the greatest increase in total cycle efficiency can be obtained.
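The bookkeeping this abstract describes (per-stage losses feeding an overall cycle efficiency, then picking the stage where improvement pays off most) can be sketched as follows. The figure-of-merit definition and every number below are illustrative assumptions, not the paper's formulation.

```python
def cycle_efficiency(delta_h_water, stage_losses):
    """Toy figure-of-merit: useful output (heat of water decomposition)
    divided by total thermal input, where each stage contributes a loss
    term. The accounting is an illustrative assumption only."""
    total_input = delta_h_water + sum(stage_losses)
    return delta_h_water / total_input

def worst_stage(stage_losses):
    """Index of the stage with the largest thermodynamic insufficiency --
    the step where attention yields the greatest efficiency gain."""
    return max(range(len(stage_losses)), key=lambda i: stage_losses[i])

# hypothetical three-stage cycle, values in kJ/mol
eta = cycle_efficiency(286.0, [40.0, 120.0, 25.0])
```

Here the second stage dominates the losses, so it is where the analysis would direct attention first.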
Analytical gradients for tensor hyper-contracted MP2 and SOS-MP2 on graphical processing units
NASA Astrophysics Data System (ADS)
Song, Chenchen; Martínez, Todd J.
2017-10-01
Analytic energy gradients for tensor hyper-contraction (THC) are derived and implemented for second-order Møller-Plesset perturbation theory (MP2), with and without the scaled-opposite-spin (SOS)-MP2 approximation. By exploiting the THC factorization, the formal scaling of MP2 and SOS-MP2 gradient calculations with respect to system size is reduced to quartic and cubic, respectively. An efficient implementation has been developed that utilizes both graphics processing units and sparse tensor techniques exploiting spatial sparsity of the atomic orbitals. THC-MP2 has been applied to both geometry optimization and ab initio molecular dynamics (AIMD) simulations. The resulting energy conservation in micro-canonical AIMD demonstrates that the implementation provides accurate nuclear gradients with respect to the THC-MP2 potential energy surfaces.
Nakano, Masayoshi; Wada, Takeshi; Koga, Nobuyoshi
2015-09-24
This study focused on the kinetic modeling of the thermal decomposition of sodium percarbonate (SPC, sodium carbonate-hydrogen peroxide (2/3)). The reaction is characterized by apparently different kinetic profiles of mass loss and exothermic behavior as recorded by thermogravimetry and differential scanning calorimetry, respectively. This phenomenon results from a combination of different kinetic features of the reaction: two overlapping mass-loss steps controlled by the physico-geometry of the reaction, and successive endothermic and exothermic processes caused by the detachment and decomposition of H2O2(g). For kinetic modeling, the overall reaction was initially separated into endothermic and exothermic processes using kinetic deconvolution analysis. Both the endothermic and exothermic processes were then further separated into two reaction steps, accounting for the physico-geometrically controlled reaction that occurs in two steps. Kinetic modeling through kinetic deconvolution analysis clearly illustrates that the net exothermic effect results from a slight delay of the exothermic process relative to the endothermic process in each physico-geometrically controlled reaction step. This demonstrates that the kinetic modeling attempted in this study is useful for interpreting the exothermic behavior of solid-state reactions such as the oxidative decomposition of solids and the thermal decomposition of oxidizing agents.
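The key observation above (a net exothermic signal arising because an exothermic step slightly lags an endothermic one) is easy to illustrate numerically. The Gaussian peak shapes, areas, and the 6-unit delay below are illustrative assumptions standing in for the paper's kinetic functions.

```python
import numpy as np

def gaussian_peak(t, area, center, width):
    """Heat-flow contribution of one reaction step, modeled here as a
    Gaussian peak (a stand-in for the paper's kinetic rate equations)."""
    return area * np.exp(-0.5 * ((t - center) / width) ** 2) / (width * np.sqrt(2 * np.pi))

t = np.linspace(0, 100, 2001)                  # time, arbitrary units
endo = gaussian_peak(t, -50.0, 40.0, 8.0)      # endothermic step (negative area)
exo = gaussian_peak(t, 60.0, 46.0, 8.0)        # exothermic step, slightly delayed
dsc = endo + exo                               # measured signal = sum of steps
```

Because the exothermic step lags, the summed signal dips endothermic early and turns exothermic later, even though the total (integrated) effect is a single net number; deconvolution recovers the two hidden steps from the sum.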
Tensor SVD and distributed control
NASA Astrophysics Data System (ADS)
Iyer, Ram V.
2005-05-01
The (approximate) diagonalization of symmetric matrices has been studied in the past in the context of distributed control of an array of collocated smart actuators and sensors. For distributed control using a two dimensional array of actuators and sensors, it is more natural to describe the system transfer function as a complex tensor rather than a complex matrix. In this paper, we study the problem of approximately diagonalizing a transfer function tensor via the tensor singular value decomposition (TSVD) for a locally spatially invariant system, and study its application along with the technique of recursive orthogonal transforms to achieve distributed control for a smart structure.
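A standard concrete form of the tensor SVD named in this abstract is the Higher-Order SVD (HOSVD): one orthonormal factor per mode plus a core tensor, generalizing matrix diagonalization by orthogonal transforms. The sketch below is a generic HOSVD on random data, not the paper's transfer-function tensor.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: the fibers of the given mode become matrix rows."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_product(T, M, mode):
    """n-mode product: multiply matrix M onto the given mode of tensor T."""
    out = M @ unfold(T, mode)
    shape = (M.shape[0],) + tuple(np.delete(T.shape, mode))
    return np.moveaxis(out.reshape(shape), 0, mode)

def hosvd(T):
    """Higher-Order SVD: an orthonormal factor per mode plus a core tensor."""
    factors = [np.linalg.svd(unfold(T, n), full_matrices=False)[0]
               for n in range(T.ndim)]
    core = T
    for n, U in enumerate(factors):
        core = mode_product(core, U.conj().T, n)
    return core, factors

T = np.random.default_rng(0).standard_normal((4, 5, 6))
core, factors = hosvd(T)
recon = core
for n, U in enumerate(factors):
    recon = mode_product(recon, U, n)     # exact reconstruction
```

Unlike the matrix case, the core is generally not diagonal, which is why the abstract speaks of approximate diagonalization via the TSVD.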
Decomposition of repetition priming processes in word translation.
Francis, Wendy S; Durán, Gabriela; Augustini, Beatriz K; Luévano, Genoveva; Arzate, José C; Sáenz, Silvia P
2011-01-01
Translation in fluent bilinguals requires comprehension of a stimulus word and subsequent production, or retrieval and articulation, of the response word. Four repetition-priming experiments with Spanish–English bilinguals (N = 274) decomposed these processes using selective facilitation to evaluate their unique priming contributions and factorial combination to evaluate the degree of process overlap or dependence. In Experiment 1, symmetric priming between semantic classification and translation tasks indicated that bilinguals do not covertly translate words during semantic classification. In Experiments 2 and 3, semantic classification of words and word-cued picture drawing facilitated word-comprehension processes of translation, and picture naming facilitated word-production processes. These effects were independent, consistent with a sequential model and with the conclusion that neither semantic classification nor word-cued picture drawing elicits covert translation. Experiment 4 showed that 2 tasks involving word-retrieval processes--written word translation and picture naming--had subadditive effects on later translation. Incomplete transfer from written translation to spoken translation indicated that preparation for articulation also benefited from repetition in the less-fluent language.
Muravyev, Nikita V; Koga, Nobuyoshi; Meerov, Dmitry B; Pivkina, Alla N
2017-01-25
This study focused on kinetic modeling of a specific type of multistep heterogeneous reaction comprising exothermic and endothermic reaction steps, as exemplified by the practical kinetic analysis of experimental kinetic curves for the thermal decomposition of molten ammonium dinitramide (ADN). It is known that the thermal decomposition of ADN occurs as a consecutive two-step mass-loss process comprising the decomposition of ADN and subsequent evaporation/decomposition of in situ generated ammonium nitrate. These reaction steps provide exothermic and endothermic contributions, respectively, to the overall thermal effect. The overall reaction process was deconvoluted into two reaction steps using simultaneously recorded thermogravimetry and differential scanning calorimetry (TG-DSC) curves, considering the different physical meanings of the kinetic data derived from TG and DSC through P value analysis. The kinetic data thus separated into exothermic and endothermic reaction steps were kinetically characterized using kinetic computation methods including the isoconversional method, combined kinetic analysis, and the master plot method. The overall kinetic behavior was reproduced as the sum of the kinetic equations for each reaction step, considering the contributions to the rate data derived from TG and DSC. In reproducing the kinetic behavior, the kinetic parameters and contributions of each reaction step were optimized using kinetic deconvolution analysis. As a result, the thermal decomposition of ADN was successfully modeled as partially overlapping exothermic and endothermic reaction steps. The logic of the kinetic modeling was critically examined, and the practical usefulness of phenomenological modeling for the thermal decomposition of ADN was illustrated to demonstrate the validity of the methodology and its applicability to similar complex reaction processes.
Iron oxalate decomposition process by means of Mössbauer spectroscopy and nuclear forward scattering
NASA Astrophysics Data System (ADS)
Smrčka, David; Procházka, Vít; Novák, Petr; Kašlík, Josef; Vrba, Vlastimil
2016-10-01
This study reports the transformation kinetics of the thermal decomposition of iron(II) oxalate dihydrate, studied in detail by two different techniques: transmission Mössbauer spectroscopy and nuclear forward scattering of synchrotron radiation. Both methods were applied to observe the three steps of the decomposition process in which the iron oxalate transforms to amorphous iron oxide. The hematite/maghemite ratio was determined from the transmission Mössbauer spectra using an evaluation procedure based on subtraction of the two opposite sides of the spectra. The results indicate that the amount of hematite increases with prolonged annealing time.
PROCESS OF COATING WITH NICKEL BY THE DECOMPOSITION OF NICKEL CARBONYL
Hoover, T.B.
1959-04-01
An improved process is presented for the deposition of nickel coatings by the thermal decomposition of nickel carbonyl vapor. The improvement consists in incorporating a small amount of hydrogen sulfide gas into the nickel carbonyl plating gas. It is postulated that the hydrogen sulfide functions as a catalyst.
Petrova, O.M.; Fedoseev, S.D.; Komarova, T.V.
1984-01-01
A calculation has been made of the activation energy of the thermal decomposition of phenol-formaldehyde polymers. It has been established that under nonisothermal conditions, the rate at which the process is carried out does not affect the effective activation energy calculated by means of Piloyan's equation.
[Putrefaction in a mortuary cold room? Unusual progression of postmortem decomposition processes].
Kunz, Sebastian N; Brandtner, Herwig; Meyer, Harald
2013-01-01
This article illustrates a rare case of rapid body decomposition in an uncommonly short postmortem interval. A clear discrepancy between the early postmortem changes at the crime scene and the advanced body decomposition at the time of autopsy was seen. Subsequent police investigation identified a failure in the cooling system of the morgue as the probable cause. However, given the postmortem status of the body, a moderate rise in temperature alone is not considered sufficient to have caused the full extent of the postmortem changes. Therefore, other factors must have been present that accelerated the postmortem decomposition processes. In our opinion, the most reasonable explanation for this phenomenon is a rather long resting time of the corpse in a non-refrigerated hearse on a hot summer day.
Factors and processes causing accelerated decomposition in human cadavers - An overview.
Zhou, Chong; Byard, Roger W
2011-01-01
Artefactually enhanced putrefactive and autolytic changes may be misinterpreted as indicating a prolonged postmortem interval and throw doubt on the veracity of witness statements. Review of files from Forensic Science SA and the literature revealed a number of external and internal factors that may be responsible for accelerating these processes. Exogenous factors included exposure to elevated environmental temperatures, both outdoors and indoors, exacerbated by increased humidity or fires. Indoor situations involved exposure to central heating, hot water, saunas, and electric blankets. Deaths within motor vehicles were also characterized by enhanced decomposition. Failure to quickly or adequately refrigerate bodies may also lead to early decomposition. Endogenous factors included fever, infections, illicit and prescription drugs, obesity, and insulin-dependent diabetes mellitus. When these factors or conditions are identified at autopsy, less significance should therefore be attached to changes of decomposition as markers of time since death.
Method for increasing steam decomposition in a coal gasification process
Wilson, M.W.
1987-03-23
The gasification of coal in the presence of steam and oxygen is significantly enhanced by introducing a thermochemical water-splitting agent, such as sulfuric acid, into the gasifier for decomposing the steam to provide additional oxygen and hydrogen usable in the gasification process for the combustion of the coal and enrichment of the gaseous gasification products. The addition of the water-splitting agent into the gasifier also allows for operation of the reactor at a lower temperature.
Subensemble decomposition and Markov process analysis of Burgers turbulence.
Zhang, Zhi-Xiong; She, Zhen-Su
2011-08-01
A numerical and statistical study is performed to describe the positive and negative local subgrid energy fluxes in the one-dimensional random-force-driven Burgers turbulence (Burgulence). We use a subensemble method to decompose the field into shock wave and rarefaction wave subensembles by group velocity difference. We observe that the shock wave subensemble shows a strong intermittency that dominates the whole Burgulence field, while the rarefaction wave subensemble satisfies the Kolmogorov 1941 (K41) scaling law. We calculate the two subensemble probabilities and find that in the inertial range they maintain scale invariance, an important feature of turbulence self-similarity. We reveal that the interconversion of shock and rarefaction waves during the equation's evolution proceeds in accordance with a Markov process, which has a stationary transition probability matrix whose elements satisfy universal functions and which, when the time interval is much greater than the corresponding characteristic value, exhibits the scale-invariant property.
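A two-state Markov chain with a stationary transition matrix, as invoked for the shock/rarefaction interconversion above, can be sketched in a few lines; the transition probabilities below are hypothetical, not values measured in the paper.

```python
import numpy as np

def stationary_distribution(P):
    """Left eigenvector of the transition matrix for eigenvalue 1,
    normalized into a probability vector (the long-run state occupancy)."""
    vals, vecs = np.linalg.eig(P.T)
    v = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    return v / v.sum()

# hypothetical shock <-> rarefaction transition probabilities:
# row = current state, column = next state
P = np.array([[0.9, 0.1],    # shock stays shock / converts
              [0.3, 0.7]])   # rarefaction converts / stays
pi = stationary_distribution(P)
```

For this matrix the chain spends 75% of its time in the shock state, consistent with a shock subensemble that dominates the field.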
Michaud, Jean-Philippe; Moreau, Gaétan
2011-01-01
Using pig carcasses exposed over 3 years in rural fields during spring, summer, and fall, we studied the relationship between decomposition stages and degree-day accumulation (i) to verify the predictability of the decomposition stages used in forensic entomology to document carcass decomposition and (ii) to build a degree-day accumulation model applicable to various decomposition-related processes. Results indicate that the decomposition stages can be predicted with accuracy from temperature records and that a reliable degree-day index can be developed to study decomposition-related processes. The development of degree-day indices opens new doors for researchers and allows for the application of inferential tools unaffected by climatic variability, as well as for the inclusion of statistics in a science that is primarily descriptive and in need of validation methods in courtroom proceedings.
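The accumulated degree-day (ADD) bookkeeping behind such a model can be sketched as follows; the base temperature, daily means, and stage threshold below are hypothetical illustrations, not the study's values.

```python
def accumulated_degree_days(daily_mean_temps, base_temp=0.0):
    """Sum of daily heat units above a base temperature; days at or
    below the base contribute nothing (standard ADD convention)."""
    return sum(max(t - base_temp, 0.0) for t in daily_mean_temps)

def days_to_reach(daily_mean_temps, target_add, base_temp=0.0):
    """First day on which accumulated degree-days reach a (hypothetical)
    threshold associated with a decomposition stage; None if never."""
    total = 0.0
    for day, t in enumerate(daily_mean_temps, start=1):
        total += max(t - base_temp, 0.0)
        if total >= target_add:
            return day
    return None

temps = [18.0, 22.0, 25.0, 15.0, 9.0, 20.0]   # daily means, degrees C
add = accumulated_degree_days(temps)
```

Working in degree-days rather than calendar days is what makes the index robust to climatic variability: a warm week and a cool fortnight with the same heat accumulation predict the same stage.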
Zhu, Lin; Guo, Wei-Li; Deng, Su-Ping; Huang, De-Shuang
2016-01-01
In recent years, thanks to the efforts of individual scientists and research consortiums, a huge amount of chromatin immunoprecipitation followed by high-throughput sequencing (ChIP-seq) experimental data has been accumulated. Instead of investigating them independently, several recent studies have convincingly demonstrated that a wealth of scientific insights can be gained by integrative analysis of these ChIP-seq data. However, when used for the purpose of integrative analysis, a serious drawback of the current ChIP-seq technique is that it is still expensive and time-consuming to generate ChIP-seq datasets of high standard. Most researchers are therefore unable to obtain complete ChIP-seq data for several TFs in a wide variety of cell lines, which considerably limits the understanding of transcriptional regulation patterns. In this paper, we propose a novel method called ChIP-PIT to overcome the aforementioned limitation. In ChIP-PIT, ChIP-seq data corresponding to a diverse collection of cell types, TFs, and genes are fused together using the three-mode pairwise interaction tensor (PIT) model, and the prediction of unperformed ChIP-seq experiments is formulated as a tensor completion problem. Computationally, we propose an efficient first-order method based on extensions of the coordinate descent method to learn the optimal solution of ChIP-PIT, which makes it particularly suitable for the analysis of massive-scale ChIP-seq data. Experimental evaluation on the ENCODE data illustrates the usefulness of the proposed model.
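The tensor completion idea can be illustrated with a deliberately tiny stand-in: a rank-1 three-way tensor (think cell type x TF x gene) with a fraction of entries unobserved, completed by factor-wise least squares on the observed entries only. This is a toy sketch, not the ChIP-PIT model or its coordinate-descent solver.

```python
import numpy as np

# ground truth: a rank-1 three-way tensor with ~30% of entries missing
rng = np.random.default_rng(1)
a, b, c = rng.random(4) + 0.5, rng.random(5) + 0.5, rng.random(6) + 0.5
T = np.einsum('i,j,k->ijk', a, b, c)
mask = (rng.random(T.shape) < 0.7).astype(float)   # 1 = observed

# rank-1 completion: each factor update is an exact least-squares
# solve against the observed entries, holding the other factors fixed
u, v, w = np.ones(4), np.ones(5), np.ones(6)
for _ in range(200):
    u = np.einsum('ijk,ijk,j,k->i', mask, T, v, w) / np.einsum('ijk,j,k->i', mask, v**2, w**2)
    v = np.einsum('ijk,ijk,i,k->j', mask, T, u, w) / np.einsum('ijk,i,k->j', mask, u**2, w**2)
    w = np.einsum('ijk,ijk,i,j->k', mask, T, u, v) / np.einsum('ijk,i,j->k', mask, u**2, v**2)
recon = np.einsum('i,j,k->ijk', u, v, w)
```

Because the low-rank structure couples all entries, the fitted factors also fill in the entries that were never observed, which is exactly how unperformed experiments get predicted.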
The neural basis of novelty and appropriateness in processing of creative chunk decomposition.
Huang, Furong; Fan, Jin; Luo, Jing
2015-06-01
Novelty and appropriateness have been recognized as the fundamental features of creative thinking. However, the brain mechanisms underlying these features remain largely unknown. In this study, we used event-related functional magnetic resonance imaging (fMRI) to dissociate these mechanisms in a revised creative chunk decomposition task in which participants were required to perform different types of chunk decomposition that systematically varied in novelty and appropriateness. We found that novelty processing involved functional areas for procedural memory (caudate), mental rewarding (substantia nigra, SN), and visual-spatial processing, whereas appropriateness processing was mediated by areas for declarative memory (hippocampus), emotional arousal (amygdala), and orthography recognition. These results indicate that non-declarative and declarative memory systems may jointly contribute to the two fundamental features of creative thinking.
Chemical dehalogenation treatment: Base-catalyzed decomposition process (BCDP). Tech data sheet
Not Available
1992-07-01
The Base-Catalyzed Decomposition Process (BCDP) is an efficient, relatively inexpensive treatment process for polychlorinated biphenyls (PCBs). It is also effective on other halogenated contaminants such as insecticides, herbicides, pentachlorophenol (PCP), lindane, and chlorinated dibenzodioxins and furans. The heart of the BCDP is the rotary reactor in which most of the decomposition takes place. The contaminated soil is first screened, processed with a crusher and pug mill, and stockpiled. Next, in the main treatment step, this stockpile is mixed with sodium bicarbonate (in the amount of 10% of the weight of the stockpile) and heated for about one hour at 630 degrees F in the rotary reactor. Most (about 60% to 90%) of the PCBs in the soil are decomposed in this step. The remainder are volatilized, captured, and decomposed.
Multidimensional seismic data reconstruction using tensor analysis
NASA Astrophysics Data System (ADS)
Kreimer, Nadia
Exploration seismology utilizes the seismic wavefield for prospecting for oil and gas. The seismic reflection experiment consists of deploying sources and receivers on the surface of an area of interest. When the sources are activated, the receivers measure the wavefield reflected from different subsurface interfaces and store the information as time series called traces or seismograms. The seismic data depend on two source coordinates, two receiver coordinates, and time (a 5D volume). Obstacles in the field and logistical and economic factors constrain seismic data acquisition; therefore, the wavefield sampling is incomplete in the four spatial dimensions. Seismic data undergo several processing steps. In particular, the reconstruction process is responsible for correcting sampling irregularities of the seismic wavefield. This thesis focuses on the development of new methodologies for the reconstruction of multidimensional seismic data. It examines techniques based on tensor algebra and proposes three methods that exploit the tensor nature of the seismic data. The fully sampled volume is low-rank in the frequency-space domain, and the rank increases when there are missing traces and/or noise. The proposed methods perform rank reduction on frequency slices of the 4D spatial volume. The first method employs the Higher-Order Singular Value Decomposition (HOSVD) immersed in an iterative algorithm that reinserts weighted observations. The second method uses a sequential truncated SVD on the unfoldings of the tensor slices (SEQ-SVD). The third method formulates the rank reduction problem as a convex optimization problem, in which the measure of the rank is replaced by the nuclear norm of the tensor and the alternating direction method of multipliers (ADMM) minimizes the cost function. All three methods have the interesting property that they are robust to curvature of the reflections, unlike many reconstruction methods. Finally, we present a comparison between the methods.
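The first method's core loop (rank reduction alternated with weighted reinsertion of the observed data) can be sketched in matrix form on a single slice; the rank, sampling rate, weighting parameter, and synthetic data below are illustrative assumptions, not the thesis's HOSVD formulation.

```python
import numpy as np

def rank_reduce(M, r):
    """Truncated SVD: keep the r largest singular values of a matrix
    (standing in for one frequency slice of the spatial volume)."""
    U, s, Vh = np.linalg.svd(M, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vh[:r]

def reconstruct(observed, mask, r, a=0.5, iters=300):
    """Iterative rank reduction with weighted reinsertion of observations:
    blend the low-rank estimate with the data on sampled entries, and
    trust the low-rank estimate on the missing entries."""
    X = observed.copy()
    for _ in range(iters):
        L = rank_reduce(X, r)
        X = mask * (a * observed + (1 - a) * L) + (1 - mask) * L
    return X

rng = np.random.default_rng(0)
true = rng.standard_normal((20, 2)) @ rng.standard_normal((2, 15))   # rank 2
mask = (rng.random(true.shape) < 0.6).astype(float)                  # decimated
rec = reconstruct(mask * true, mask, r=2)
```

The weight a trades fidelity to noisy observations against the low-rank model; with noise-free data the iteration drives the missing entries toward the true low-rank completion.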
Moment tensors, state of stress and their relation to faulting processes in Gujarat, western India
NASA Astrophysics Data System (ADS)
Aggarwal, Sandeep Kumar; Khan, Prosanta Kumar; Mohanty, Sarada Prasad; Roumelioti, Zafeiria
2016-10-01
Time-domain moment tensor analysis of 145 earthquakes (Mw 3.2 to 5.1), occurring during the period 2006-2014 in the Gujarat region, has been performed. The events are mainly confined to the Kachchh area, demarcated by the Island Belt and Kachchh Mainland faults to its north and south, and by two transverse faults to its east and west. Libraries of Green's functions were established using the 1D velocity models of Kachchh, Saurashtra and Mainland Gujarat. Green's functions and broadband displacement waveforms filtered at low frequency (0.5-0.8 Hz) were inverted to determine the moment tensor solutions. The estimated solutions were rigorously tested through a number of iterations at different source depths to find reliable source locations. The identified heterogeneous nature of the stress fields in the Kachchh area allowed us to divide it into four zones (1-4). The stress inversion results indicate that Zone 1 is dominated by radial compression, Zone 2 by strike-slip compression, and Zones 3 and 4 by strike-slip extension. The analysis further shows that the epicentral region of the 2001 Mw 7.7 Bhuj mainshock, located at the junction of Zones 2, 3 and 4, was associated with predominantly compressional stress and strike-slip motion along a ∼NNE-SSW-striking fault on the western margin of the Wagad uplift. Other tectonically active parts of Gujarat (e.g. Jamnagar, Talala and the Mainland) show earthquake activity dominantly associated with strike-slip extension/compression faulting. Stress inversion analysis shows that the maximum compressive stress axes (σ1) are vertical for both the Jamnagar and Talala regions and horizontal for Mainland Gujarat. These stress regimes are distinctly different from those of the Kachchh region.
Oda, Tetsuji; Yamashita, Ryuichi; Haga, Ichiro; Takahashi, Tadashi; Masuda, Senichi
1996-01-01
The decomposition performance of surface induced plasma chemical processing (SPCP) for chlorofluorocarbon (83 ppm CFC-113 in air), acetone, trichloroethylene, and isopropyl alcohol was experimentally examined. In every case, very high decomposition performance, more than 90 or even 99% removal, is realized when the residence time is about 1 second and the input electric power for a 16 cm³ reactor is about 10 W. Acetone is the most stable compound and the alcohol is most easily decomposed. Analysis of the decomposition products by gas chromatography-mass spectrometry has just started, but only poor results have been obtained so far. In fact, some portion of the isopropyl alcohol may change to acetone, which is more difficult to decompose than the alcohol. The energy necessary to decompose one mole of gas diluted in air is calculated from the experiments. The necessary energy level for acetone and trichloroethylene is about one-tenth to one-fiftieth of that for the chlorofluorocarbon.
Controlled decomposition and oxidation: A treatment method for gaseous process effluents
NASA Astrophysics Data System (ADS)
McKinley, Roger J. B., Sr.
1990-07-01
The safe disposal of effluent gases produced by the electronics industry deserves special attention. Due to the hazardous nature of many of the materials used, it is essential to control and treat the reactants and reactant by-products as they are exhausted from the process tool and prior to their release into the manufacturing facility's exhaust system and the atmosphere. Controlled decomposition and oxidation (CDO) is one method of treating effluent gases from thin film deposition processes. CDO equipment applications, field experience, and results of the use of CDO equipment and technological advances gained from the field experiences are discussed.
Controlled decomposition and oxidation: A treatment method for gaseous process effluents
NASA Technical Reports Server (NTRS)
Mckinley, Roger J. B., Sr.
1990-01-01
The safe disposal of effluent gases produced by the electronics industry deserves special attention. Due to the hazardous nature of many of the materials used, it is essential to control and treat the reactants and reactant by-products as they are exhausted from the process tool and prior to their release into the manufacturing facility's exhaust system and the atmosphere. Controlled decomposition and oxidation (CDO) is one method of treating effluent gases from thin film deposition processes. CDO equipment applications, field experience, and results of the use of CDO equipment and technological advances gained from the field experiences are discussed.
Factors controlling decomposition in arctic tundra and related root mycorrhizal processes
Linkins, A.E.
1990-01-01
Work proposed for the final year of Phase 1 of the R&D Program will focus on three areas: (1) acquire soil and root-mycorrhizal process data, incorporating the baseline enzymatic and soil respiration data collected over the duration of the project into the manipulations initiated by Drs. Chapin and Schimmel. Additional enzymatic data on a broader range of organic nitrogen compound decomposition will be collected to better integrate existing decomposition data and modeling structure with the expanded information to be collected on nitrogen dynamics in soils and plant compartments. This activity will principally be done in the new dust disturbance experiment that the overall project has planned. (2) Finalize data sets on the complete mineralization into CO2 and CH4 of cellulose, cellulose-like plant structural material, and cellulose intermediate hydrolysis products, in soils from water-track and non-water-track areas and from riparian sedge moss meadow vegetation areas. Gas efflux from these soils will be measured in closed microcosms in which the soils will be manipulated to alter their redox state. (3) Continue developing and testing the GAS models of decomposition and of plant growth and nutrient acquisition. The primary activity of this project will be on this latter task. 22 refs.
Analysis of a Methanol Decomposition Process by a Nonthermal Plasma Flow
NASA Astrophysics Data System (ADS)
Sato, Takehiko; Kambe, Makoto; Nishiyama, Hideya
In the present study, experimental and numerical analyses were adopted to clarify the key reactive species in methanol decomposition processes using a nonthermal plasma flow. The nonthermal plasma flow was generated by a dielectric barrier discharge (DBD) as a radical production source. The experimental conditions were as follows: the working gas was air at 1-10 Sl/min, and the peak-to-peak applied voltage was 16-20 kV with a sine wave of 1 Hz-7 kHz. The gas velocity, gas temperature, ozone concentration and methanol decomposition efficiency were measured. Those characteristics were also numerically analyzed using the conservation equations of mass, chemical species, momentum and energy, and the equation of state. The simulation model takes into account the reactive species that react chemically with methanol. The detailed reaction mechanism used in this model consists of 108 elementary reactions and 41 chemical species. Inlet conditions are partially given by experimental results. Finally, the effects of reactive species such as O, OH, H, NO, etc. on the methanol decomposition characteristics are numerically analyzed. The results obtained in this study are summarized as follows. (1) The existence of excited O and N atoms and excited OH, N2(B3Πg), N2(A3Σu+) and NO molecules is implied in the discharge region. (2) Methanol below 50 ppm is decomposed completely by the DBD at discharge conditions of V = 16 kVpp and f = 100 Hz. (3) The reactive species are the most important factor in decomposing methanol, as full decomposition is obtained at all injection positions. (4) The numerical analysis clarifies that OH is the key radical for decomposing methanol.
Yao, Shengnan; Zeng, Weiming; Wang, Nizhuan; Chen, Lei
2013-07-01
Independent component analysis (ICA) has been proven effective for functional magnetic resonance imaging (fMRI) data analysis. However, ICA decomposition requires iterative optimization of the unmixing matrix, whose initial values are generated randomly. The randomness of the initialization thus leads to different ICA decomposition results, so a single one-time decomposition is not usually reliable for fMRI data analysis. Under this circumstance, several methods based on repeated decompositions with ICA (RDICA) were proposed to reveal the stability of ICA decomposition. Although RDICA has achieved satisfying results in validating the performance of ICA decomposition, it costs much computing time. To mitigate this problem, in this paper we propose a method, named ATGP-ICA, for fMRI data analysis. This method generates fixed initial values with the automatic target generation process (ATGP) instead of producing them randomly. We performed experimental tests on both hybrid data and fMRI data to show the effectiveness of the new method, and compared the performance of traditional one-time decomposition with ICA (ODICA), RDICA and ATGP-ICA. The proposed method not only eliminates the randomness of ICA decomposition, but also saves much computing time compared to RDICA. Furthermore, ROC (receiver operating characteristic) power analysis also indicated better signal reconstruction performance for ATGP-ICA than for RDICA.
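To make the role of the initialization concrete, here is a minimal, self-contained deflation FastICA in Python/NumPy (tanh nonlinearity). Passing the same fixed `W0`, in the spirit of ATGP-generated targets, makes the decomposition fully deterministic; this is a sketch of the general idea, not the ATGP-ICA algorithm itself.

```python
import numpy as np

def whiten(X):
    # Zero-mean the data and rotate/scale so its covariance is the identity.
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / len(Xc)
    d, E = np.linalg.eigh(cov)
    return Xc @ E @ np.diag(d ** -0.5) @ E.T

def fastica(Z, W0, n_iter=200):
    # Deflation FastICA (tanh nonlinearity); W0 fixes the initialization,
    # so repeated runs return identical unmixing matrices.
    W = np.zeros_like(W0)
    for i in range(len(W0)):
        w = W0[i] / np.linalg.norm(W0[i])
        for _ in range(n_iter):
            wx = Z @ w
            g, gp = np.tanh(wx), 1 - np.tanh(wx) ** 2
            w_new = (Z * g[:, None]).mean(axis=0) - gp.mean() * w
            w_new -= W[:i].T @ (W[:i] @ w_new)  # deflate against found components
            w = w_new / np.linalg.norm(w_new)
        W[i] = w
    return W
```

With a random `W0` each run, the recovered components can come back in a different order or with flipped signs, which is exactly the reproducibility problem the abstract describes.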
Chlorine/UV Process for Decomposition and Detoxification of Microcystin-LR.
Zhang, Xinran; Li, Jing; Yang, Jer-Yen; Wood, Karl V; Rothwell, Arlene P; Li, Weiguang; Blatchley III, Ernest R
2016-07-19
Microcystin-LR (MC-LR) is a potent hepatotoxin that is often associated with blooms of cyanobacteria. Experiments were conducted to evaluate the efficiency of the chlorine/UV process for MC-LR decomposition and detoxification. Chlorinated MC-LR was observed to be more photoactive than MC-LR. LC/MS analyses confirmed that the arginine moiety represented an important reaction site within the MC-LR molecule for conditions of chlorination below the chlorine demand of the molecule. Prechlorination activated MC-LR toward UV254 exposure by increasing the product of the molar absorption coefficient and the quantum yield of chloro-MC-LR, relative to the unchlorinated molecule. This mechanism of decay is fundamentally different than the conventional view of chlorine/UV as an advanced oxidation process. A toxicity assay based on human liver cells indicated MC-LR degradation byproducts in the chlorine/UV process possessed less cytotoxicity than those that resulted from chlorination or UV254 irradiation applied separately. MC-LR decomposition and detoxification in this combined process were more effective at pH 8.5 than at pH 7.5 or 6.5. These results suggest that the chlorine/UV process could represent an effective strategy for control of microcystins and their associated toxicity in drinking water supplies.
Discussion of stress tensor nonuniqueness with application to nonuniform, particulate systems
Aidun, J.B.
1993-01-01
The indeterminacy of the mechanical stress tensor has been noted in several developments of expressions for stress in a system of particles. It is generally agreed that physical quantities related to the stress tensor must be insensitive to this nonuniqueness, but there is no definitive prescription for ensuring it. Kroener's tensor decomposition theorem is applied to the mechanical stress tensor σ
Input-decomposition balance of heterotrophic processes in a warm-temperate mixed forest in Japan
NASA Astrophysics Data System (ADS)
Jomura, M.; Kominami, Y.; Ataka, M.; Makita, N.; Dannoura, M.; Miyama, T.; Tamai, K.; Goto, Y.; Sakurai, S.
2010-12-01
Carbon accumulation in forest ecosystems has been evaluated using three approaches. The first is net ecosystem exchange (NEE) estimated by tower flux measurement. The second is net ecosystem production (NEP) estimated by biometric measurements. NEP can be expressed as the difference between net primary production and heterotrophic respiration; it can also be expressed as the annual increment in the plant biomass (ΔW) plus soil (ΔS) carbon pools: NEP = ΔW + ΔS. The third approach requires evaluating the annual carbon increment in the soil compartment. The soil carbon accumulation rate cannot be measured directly over a short term because of the small amount of annual accumulation, but it can be estimated by a model calculation. The Rothamsted carbon model (Roth-C) is a soil organic carbon turnover model and a useful tool for estimating the rate of soil carbon accumulation. However, the model has not sufficiently included variations in the decomposition processes of organic matter in forest ecosystems. Organic matter pools in forest ecosystems have different turnover rates, which creates temporal variations in the input-decomposition balance, and they also vary greatly in spatial distribution. Thus, in order to estimate the rate of soil carbon accumulation, temporal and spatial variation in the input-decomposition balance of heterotrophic processes should be incorporated in the model. In this study, we estimated the input-decomposition balance and the rate of soil carbon accumulation using a modified Roth-C model. We measured the respiration rate of many types of organic matter, such as leaf litter, fine root litter, twigs and coarse woody debris, using a chamber method, which allows us to relate respiration rate to the diameter of the organic matter. Leaf and fine root litter have no diameter, so their diameter was assumed to be zero. Organic matter of small size, such as leaf and fine root litter, has high decomposition respiration. It could be caused by the difference in
Ning, J. G.; Chu, L.; Ren, H. L.
2014-08-28
We base a quantitative acoustic emission (AE) study of fracture processes in alumina ceramics on wavelet packet decomposition and AE source location. According to the frequency characteristics, as well as the energy and ringdown counts of the AE, the fracture process is divided into four stages: crack closure, nucleation, development, and critical failure. Each AE signal is decomposed by a 2-level wavelet packet decomposition into four (low-to-high) frequency bands (AA2, AD2, DA2, and DD2). The energy eigenvalues P0, P1, P2, and P3 corresponding to these four frequency bands are calculated. By analyzing changes in P0 and P3 in the four stages, we determine the inverse relationship between AE frequency and crack source size during ceramic fracture. AE signals associated with crack nucleation can be identified when P0 is less than 5 and P3 is more than 60, whereas AE signals associated with dangerous crack propagation can be identified when more than 92% of P0 values are greater than 4 and more than 95% of P3 values are less than 45. The Geiger location algorithm is used to locate AE sources and cracks in the sample. The results of this location algorithm are consistent with the positions of fractures in the sample observed under a scanning electron microscope; thus fracture locations obtained with Geiger's method reflect the fracture process. The stage division based on location results is in good agreement with the division based on AE frequency characteristics. We find that both wavelet packet decomposition and Geiger's AE source location are suitable for identifying the evolution of cracks in alumina ceramics.
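A 2-level wavelet packet split into four bands and their energy percentages can be sketched in Python; a Haar filter stands in for the (unspecified) wavelet used by the authors, and the band ordering AA2, AD2, DA2, DD2 follows the abstract.

```python
import numpy as np

def haar_step(x):
    # One level of the orthonormal Haar transform: approximation and detail.
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def wpd2_energies(x):
    # 2-level wavelet packet split into AA2, AD2, DA2, DD2 energy percentages
    # (analogous to the P0..P3 eigenvalues in the abstract).
    a, d = haar_step(x)
    bands = [*haar_step(a), *haar_step(d)]
    e = np.array([np.sum(b ** 2) for b in bands])
    return 100.0 * e / e.sum()
```

Because the transform is orthonormal, the four band energies sum to the signal energy, so the percentages are directly comparable across AE events.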
Decomposition of 1,4-dioxane by advanced oxidation and biochemical process.
Kim, Chang-Gyun; Seo, Hyung-Joon; Lee, Byung-Ryul
2006-01-01
This study was undertaken to determine the optimal decomposition conditions when 1,4-dioxane was degraded using either AOPs (advanced oxidation processes) or the BAC-TERRA microbial complex. The advanced oxidation was operated with H2O2 in the range 4.7 to 51 mM under 254 nm (25 W lamp) illumination, while varying reaction parameters such as the air flow rate and reaction time. The greatest oxidation rate (96%) of 1,4-dioxane was achieved with an H2O2 concentration of 17 mM after a 2-hr reaction. As a result of this reaction, organic acid intermediates were formed, such as acetic, propionic and butyric acids. Furthermore, the study revealed that suspended particles, i.e., bio-flocs, kaolin and pozzolan, were able to affect the extent of 1,4-dioxane decomposition. The decomposition of 1,4-dioxane in the presence of bio-flocs declined significantly due to hindered UV penetration through the solution as a result of the consistent dispersion of bio-particles. In contrast, dosing with pozzolan decomposed up to 98.8% of the 1,4-dioxane after 2 hr of reaction. Two actual wastewaters from polyester manufacturing, containing 1,4-dioxane in the range 370 to 450 mg/L, were oxidized by as much as 100% within 15 min with the introduction of 100:200 (mg/L) Fe(II):H2O2 under UV illumination. Aerobic biological decomposition employing BAC-TERRA was able to remove up to 90% of the 1,4-dioxane after 15 days of incubation. The by-products generated (i.e., acetic, propionic and valeric acids) were similar to those formed during the AOPs investigation. According to kinetic studies, both photo-decomposition and biodegradation of 1,4-dioxane followed pseudo first-order reaction kinetics, with k = 5 × 10⁻⁴ s⁻¹ and 2.38 × 10⁻⁶ s⁻¹, respectively. It was concluded that 1,4-dioxane could be readily degraded by both AOPs and BAC-TERRA, and that the actual polyester wastewater containing 1,4-dioxane could be successfully
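As a quick check of the reported pseudo first-order rate constants, the corresponding half-lives follow from t1/2 = ln 2 / k:

```python
import math

def half_life(k):
    # Pseudo first-order half-life: t_1/2 = ln 2 / k, with k in 1/s.
    return math.log(2) / k

t_photo = half_life(5e-4)     # photo-decomposition: ~1386 s, about 23 minutes
t_bio = half_life(2.38e-6)    # biodegradation: ~2.9e5 s, about 3.4 days
```

The two-orders-of-magnitude gap between the rate constants thus translates into minutes versus days, consistent with the abstract's 2-hr photochemical runs and 15-day incubations.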
Rumiza, A R; Khairul, O; Zuha, R M; Heo, C C
2010-12-01
This study was designed to mimic homicide or suicide cases involving gasoline. Six adult long-tailed macaques (Macaca fascicularis), weighing between 2.5 and 4.0 kg, were equally divided into control and test groups. The control group was sacrificed by an intracardiac lethal dose of phenobarbital, while the test group was force-fed two doses of gasoline LD50 (37.7 ml/kg) after sedation with phenobarbital. All carcasses were then placed in a decomposition site to observe the decomposition and the invasion of the carcasses by cadaveric fauna. A total of five decomposition stages were recognized during this study, which was performed during July 2007. The fresh stage of the control and test carcasses occurred between 0 to 15 and 0 to 39 hours of exposure, respectively. The subsequent decomposition stages exhibited a similar pattern, whereby decomposition of the control carcasses was faster than that of the test carcasses. The first larvae were found on the control carcasses 9 hours after death, while the test carcasses received their first blowfly eggs only after 15 hours of exposure. The blow flies Achoetandrus rufifacies and Chrysomya megacephala were the most dominant invaders of both sets of carcasses throughout the decay process. Diptera collected from the control carcasses also comprised the scuttle fly Megaselia scalaris and a flesh fly (Sarcophagidae). We concluded that the presence of gasoline and its odor on the carcass delayed the arrival of insects, thereby slowing down the decomposition process by about 6 hours.
Kolda, Tamara G.; Bader, Brett W.
2006-08-03
This software provides a collection of MATLAB classes for tensor manipulations that can be used for fast algorithm prototyping. The tensor class extends the functionality of MATLAB's multidimensional arrays by supporting additional operations such as tensor multiplication. We have also added support for sparse tensors, tensors in Kruskal or Tucker format, and tensors stored as matrices (both dense and sparse).
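The toolbox itself is MATLAB, but the idea behind the Kruskal (CP) format, storing a tensor as per-mode factor matrices rather than as a dense array, can be illustrated with an analogous NumPy sketch (third-order case only; the function name is mine, not the toolbox API):

```python
import numpy as np

def kruskal_to_full(factors, weights=None):
    # Expand a third-order Kruskal (CP) representation [A, B, C] into the
    # full tensor T[i,j,k] = sum_r w_r * A[i,r] * B[j,r] * C[k,r].
    A, B, C = factors
    w = np.ones(A.shape[1]) if weights is None else weights
    return np.einsum('r,ir,jr,kr->ijk', w, A, B, C)
```

For an I×J×K tensor of rank R, the Kruskal format stores only (I+J+K)R numbers plus R weights, which is why such structured classes matter for prototyping on large tensors.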
Osono, T
2006-08-01
The ecology of endophytic and epiphytic phyllosphere fungi of forest trees is reviewed with special emphasis on the development of decomposer fungal communities and decomposition processes of leaf litter. A total of 41 genera of phyllosphere fungi have been reported to occur on leaf litter of tree species in 19 genera. The relative proportion of phyllosphere fungi in decomposer fungal communities ranges from 2% to 100%. Phyllosphere fungi generally disappear in the early stages of decomposition, although a few species persist until the late stages. Phyllosphere fungi have the ability to utilize various organic compounds as carbon sources, and the marked decomposing ability is associated with ligninolytic activity. The role of phyllosphere fungi in the decomposition of soluble components during the early stages is relatively small in spite of their frequent occurrence. Recently, the roles of phyllosphere fungi in the decomposition of structural components have been documented with reference to lignin and cellulose decomposition, nutrient dynamics, and accumulation and decomposition of soil organic matter. It is clear from this review that several of the common phyllosphere fungi of forest trees are primarily saprobic, being specifically adapted to colonize and utilize dead host tissue, and that some phyllosphere fungi with marked abilities to decompose litter components play important roles in decomposition of structural components, nutrient dynamics, and soil organic matter accumulation.
Tensor Modeling Based for Airborne LiDAR Data Classification
NASA Astrophysics Data System (ADS)
Li, N.; Liu, C.; Pfeifer, N.; Yin, J. F.; Liao, Z. Y.; Zhou, Y.
2016-06-01
Feature selection and description is a key factor in the classification of Earth observation data. In this paper a classification method based on tensor decomposition is proposed. First, multiple features are extracted from the raw LiDAR point cloud, and raster LiDAR images are derived by accumulating features or the "raw" data attributes. Then, the feature rasters of the LiDAR data are stored as a tensor, and tensor decomposition is used to select component features. This tensor representation preserves the initial spatial structure and ensures that the neighborhood is taken into account. Based on a small number of component features, a k-nearest-neighbor classification is applied.
A detailed kinetic model for the hydrothermal decomposition process of sewage sludge.
Yin, Fengjun; Chen, Hongzhen; Xu, Guihua; Wang, Guangwei; Xu, Yuanjian
2015-12-01
A detailed kinetic model for the hydrothermal decomposition (HTD) of sewage sludge was developed based on an explicit reaction scheme considering exact intermediates including protein, saccharide, NH4(+)-N and acetic acid. The parameters were estimated from a series of kinetic data over the temperature range 180-300°C. This modeling framework is capable of revealing stoichiometric relationships between different components by determining the conversion coefficients, and of identifying the reaction behaviors by determining rate constants and activation energies. The modeling work shows that protein and saccharide are the primary intermediates in the initial stage of HTD, resulting from the fast reduction of biomass. The oxidation of macromolecular products to acetic acid is highly dependent on reaction temperature and is dramatically restrained when the temperature is below 220°C. Overall, this detailed model is useful for process simulation and kinetic analysis.
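The structure of such a model, first-order steps with Arrhenius rate constants linking biomass to intermediates and products, can be sketched as follows. The two-step scheme, pre-exponential factors, and activation energies below are illustrative assumptions, not the fitted values from the paper.

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def arrhenius(A, Ea, T):
    # Rate constant k = A * exp(-Ea / (R T)).
    return A * np.exp(-Ea / (R * T))

def simulate(T_kelvin, t_end=3600.0, dt=0.1):
    # Toy serial scheme, first order in each step:
    #   biomass -> intermediate (protein/saccharide) -> acetic acid
    # Pre-exponential factors and activation energies are illustrative only.
    k1 = arrhenius(1e6, 8.0e4, T_kelvin)
    k2 = arrhenius(1e4, 7.0e4, T_kelvin)
    b, m, p = 1.0, 0.0, 0.0
    for _ in range(int(t_end / dt)):  # explicit Euler integration
        db = -k1 * b
        dm = k1 * b - k2 * m
        dp = k2 * m
        b, m, p = b + db * dt, m + dm * dt, p + dp * dt
    return b, m, p
```

Even this toy version reproduces the qualitative behavior in the abstract: intermediates build up quickly, and conversion to the acid product is strongly suppressed at the low end of the temperature range.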
Man, Pascal P; Bonhomme, Christian; Babonneau, Florence
2014-01-01
We present a post-processing method that decreases NMR spectrum noise without line-shape distortion, thereby increasing the signal-to-noise (S/N) ratio of a spectrum. This method, the Cadzow enhancement procedure, is based on the singular-value decomposition of the time-domain signal. We also provide software whose execution takes a few seconds for typical data when run on a modern graphics processing unit. We tested this procedure not only on the low-sensitivity nucleus (29)Si in hybrid materials but also on the low-gyromagnetic-ratio quadrupolar nucleus (87)Sr in the reference sample Sr(NO3)2. Improving the spectrum S/N ratio facilitates the determination of the T/Q ratio of hybrid materials. The procedure is also applicable to simulated spectra, resulting in shorter simulation times for powder averaging. An estimate of the number of singular values needed for denoising is also provided.
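The core of the Cadzow procedure, embed the time-domain signal in a Hankel matrix, truncate its SVD, and average anti-diagonals back into a signal, can be sketched in Python/NumPy (a simplified single-channel version, not the authors' GPU software; the rank would correspond to the number of retained singular values):

```python
import numpy as np

def cadzow(signal, rank, n_iter=1):
    # Cadzow denoising: Hankel embedding -> SVD truncation -> anti-diagonal averaging.
    N = len(signal)
    L = N // 2 + 1
    s = np.asarray(signal, dtype=complex)
    for _ in range(n_iter):
        H = np.array([s[i:i + N - L + 1] for i in range(L)])  # L x (N-L+1) Hankel matrix
        U, sv, Vh = np.linalg.svd(H, full_matrices=False)
        Hr = (U[:, :rank] * sv[:rank]) @ Vh[:rank]
        out = np.zeros(N, dtype=complex)
        cnt = np.zeros(N)
        for i in range(L):  # average Hr along its anti-diagonals
            out[i:i + N - L + 1] += Hr[i]
            cnt[i:i + N - L + 1] += 1
        s = out / cnt
    return s
```

A free induction decay built from a few damped exponentials yields a low-rank Hankel matrix, which is why truncating to the number of resonances suppresses noise without distorting line shapes.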
Noise-assisted data processing with empirical mode decomposition in biomedical signals.
Karagiannis, Alexandros; Constantinou, Philip
2011-01-01
In this paper, a methodology is described for investigating the performance of empirical mode decomposition (EMD) in biomedical signals, especially in the case of the electrocardiogram (ECG). Synthetic ECG signals corrupted with white Gaussian noise are employed, and time series of various lengths are processed with EMD in order to extract the intrinsic mode functions (IMFs). A statistical significance test is implemented to identify IMFs with high-level noise components and exclude them from denoising procedures. Simulation campaign results reveal that a decrease in processing time is accomplished by introducing a preprocessing stage prior to the application of EMD to biomedical time series. Furthermore, the variation in the number of IMFs according to the type of preprocessing stage is studied as a function of SNR and time-series length. The application of the methodology to MIT-BIH ECG records is also presented in order to verify the findings on real ECG signals.
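For illustration, a compact EMD sifting loop in Python/NumPy is given below; it uses linear envelopes (np.interp) in place of the cubic splines of standard EMD, and fixed sifting counts instead of a stopping criterion, so it is a sketch of the decomposition rather than a faithful implementation.

```python
import numpy as np

def envelope_mean(h, t):
    # Mean of upper/lower envelopes through local extrema (linear interpolation).
    mx = np.where((h[1:-1] >= h[:-2]) & (h[1:-1] >= h[2:]))[0] + 1
    mn = np.where((h[1:-1] <= h[:-2]) & (h[1:-1] <= h[2:]))[0] + 1
    if len(mx) < 2 or len(mn) < 2:
        return None  # too few extrema: h is a residue, not an IMF
    upper = np.interp(t, t[mx], h[mx])
    lower = np.interp(t, t[mn], h[mn])
    return (upper + lower) / 2

def emd(x, t, max_imf=5, n_sift=12):
    # Decompose x into intrinsic mode functions (IMFs) plus a residue.
    imfs, r = [], x.copy()
    for _ in range(max_imf):
        h = r.copy()
        for _ in range(n_sift):  # sifting: repeatedly subtract the envelope mean
            m = envelope_mean(h, t)
            if m is None:
                break
            h = h - m
        if np.allclose(h, r):
            break
        imfs.append(h)
        r = r - h
    return imfs, r
```

By construction the IMFs plus the residue sum exactly to the input, which is what makes excluding noise-dominated IMFs a valid denoising step.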
An integrated condition-monitoring method for a milling process using reduced decomposition features
NASA Astrophysics Data System (ADS)
Liu, Jie; Wu, Bo; Wang, Yan; Hu, Youmin
2017-08-01
Complex and non-stationary cutting chatter affects productivity and quality in the milling process. Developing an effective condition-monitoring approach is critical to accurately identify cutting chatter. In this paper, an integrated condition-monitoring method is proposed, where reduced features are used to efficiently recognize and classify machine states in the milling process. In the proposed method, vibration signals are decomposed into multiple modes with variational mode decomposition, and Shannon power spectral entropy is calculated to extract features from the decomposed signals. Principal component analysis is adopted to reduce feature size and computational cost. With the extracted feature information, the probabilistic neural network model is used to recognize and classify the machine states, including stable, transition, and chatter states. Experimental studies are conducted, and results show that the proposed method can effectively detect cutting chatter during different milling operation conditions. This monitoring method is also efficient enough to satisfy fast machine state recognition and classification.
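Two building blocks of this pipeline, Shannon spectral entropy as a feature and PCA for feature-size reduction, can be sketched in Python/NumPy (variational mode decomposition and the probabilistic neural network classifier are omitted; function names are mine, not from the paper):

```python
import numpy as np

def spectral_entropy(x):
    # Shannon entropy of the normalized power spectrum of x:
    # low for narrowband (stable cutting), high for broadband (chatter-like) signals.
    p = np.abs(np.fft.rfft(x)) ** 2
    p = p / p.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

def pca_reduce(F, k):
    # Project the (samples x features) matrix F onto its k leading principal axes.
    Fc = F - F.mean(axis=0)
    _, V = np.linalg.eigh(np.cov(Fc, rowvar=False))
    return Fc @ V[:, ::-1][:, :k]  # eigh sorts ascending, so reverse columns
```

In the monitoring chain, each decomposed vibration mode would contribute one entropy value per time window, and PCA compresses the resulting feature vectors before classification.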
Implementing the sine transform of fermionic modes as a tensor network
NASA Astrophysics Data System (ADS)
Epple, Hannes; Fries, Pascal; Hinrichsen, Haye
2017-09-01
Based on the algebraic theory of signal processing, we recursively decompose the discrete sine transform of the first kind (DST-I) into small orthogonal block operations. Using a diagrammatic language, we then second-quantize this decomposition to construct a tensor network implementing the DST-I for fermionic modes on a lattice. The complexity of the resulting network is shown to scale as 5/4 n logn (not considering swap gates), where n is the number of lattice sites. Our method provides a systematic approach of generalizing Ferris' spectral tensor network for nontrivial boundary conditions.
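The DST-I that the network implements has a simple matrix form, and the normalized matrix is symmetric and orthogonal, hence its own inverse, which is the property the recursive block decomposition exploits. A NumPy sketch:

```python
import numpy as np

def dst1_matrix(n):
    # Orthonormal DST-I matrix: M[j,k] = sqrt(2/(n+1)) * sin(pi*(j+1)*(k+1)/(n+1)).
    j, k = np.meshgrid(np.arange(1, n + 1), np.arange(1, n + 1), indexing='ij')
    return np.sqrt(2.0 / (n + 1)) * np.sin(np.pi * j * k / (n + 1))
```

Since M is symmetric and orthogonal, M @ M is the identity: applying the DST-I twice returns the original vector.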
Surface modification processes during methane decomposition on Cu-promoted Ni–ZrO2 catalysts
Wolfbeisser, Astrid; Klötzer, Bernhard; Mayr, Lukas; Rameshan, Raffael; Zemlyanov, Dmitry; Bernardi, Johannes; Rupprechter, Günther
2015-01-01
The surface chemistry of methane on Ni–ZrO2 and bimetallic CuNi–ZrO2 catalysts and the stability of the CuNi alloy under reaction conditions of methane decomposition were investigated by combining reactivity measurements and in situ synchrotron-based near-ambient pressure XPS. Cu was selected as an exemplary promoter for modifying the reactivity of Ni and enhancing the resistance against coke formation. We observed an activation process occurring in methane between 650 and 735 K with the exact temperature depending on the composition which resulted in an irreversible modification of the catalytic performance of the bimetallic catalysts towards a Ni-like behaviour. The sudden increase in catalytic activity could be explained by an increase in the concentration of reduced Ni atoms at the catalyst surface in the active state, likely as a consequence of the interaction with methane. Cu addition to Ni improved the desired resistance against carbon deposition by lowering the amount of coke formed. As a key conclusion, the CuNi alloy shows limited stability under relevant reaction conditions. This system is stable only in a limited range of temperature up to ~700 K in methane. Beyond this temperature, segregation of Ni species causes a fast increase in methane decomposition rate. In view of the applicability of this system, a detailed understanding of the stability and surface composition of the bimetallic phases present and the influence of the Cu promoter on the surface chemistry under relevant reaction conditions are essential. PMID:25815163
Decomposition strategies in the problems of simulation of additive laser technology processes
NASA Astrophysics Data System (ADS)
Khomenko, M. D.; Dubrov, A. V.; Mirzade, F. Kh.
2016-11-01
The development of additive technologies and their application in industry is associated with the possibility of predicting the final properties of a crystallized added material. This paper addresses a problem characterized by a dynamic and spatially nonuniform computational complexity, which, in the case of uniform decomposition of the computational domain, leads to an unbalanced load on the computational cores. A partitioning strategy for the computational domain is used that minimizes the CPU time losses in the serial computations of the additive technological process. The chosen strategy is optimal from the standpoint of an a priori unknown dynamic computational load distribution. The scaling of the computational problem on the cluster of the Institute on Laser and Information Technologies (RAS), which uses the InfiniBand interconnect, is determined. The use of the parallel code with optimal decomposition made it possible to reduce the computational time significantly (down to several hours), which is important in the context of developing the software package for support of engineering activity in the field of additive technology.
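The load-balancing idea, cut the domain so each core receives roughly equal computational weight rather than equal volume, can be sketched in one dimension with prefix sums (a toy illustration, not the authors' partitioner):

```python
import numpy as np

def balanced_partition(weights, p):
    # Split the cells into p contiguous chunks of near-equal cumulative weight.
    # Each cut lands where the running weight sum first reaches i/p of the total.
    csum = np.cumsum(weights, dtype=float)
    targets = csum[-1] * np.arange(1, p) / p
    cuts = np.searchsorted(csum, targets)
    return np.split(np.arange(len(weights)), cuts)
```

With per-cell weights reflecting, e.g., mesh refinement near the melt pool, this yields chunk loads within one cell weight of the ideal total/p; uniform splitting would instead overload the cores holding the refined region.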
Nucleation versus spinodal decomposition in phase formation processes in multicomponent solutions
NASA Astrophysics Data System (ADS)
Schmelzer, Jürn W. P.; Abyzov, Alexander S.; Möller, Jörg
2004-10-01
In the present paper, some further results of application of the generalized Gibbs' approach [J. W. P. Schmelzer et al., J. Chem. Phys. 112, 3820 (2000); 114, 5180 (2001); 119, 6166 (2003)] to describing new-phase formation processes are outlined. The path of cluster evolution in size and composition space is determined taking into account both thermodynamic and kinetic factors. The basic features of these paths of evolution are discussed in detail for a simple model of a binary mixture. According to this analysis, size and composition of the clusters of the newly evolving phase change in an unexpected way which is qualitatively different as compared to the classical picture of nucleation-growth processes. As shown, nucleation (i.e., the first stage of cluster formation starting from metastable initial states) exhibits properties resembling spinodal decomposition (the size remains nearly constant while the composition changes) although the presence of an activation barrier distinguishes the nucleation process from true spinodal decomposition. In addition, it is shown that phase formation both in metastable and unstable initial states near the classical spinodal may proceed via a passage of a ridge of the thermodynamic potential with a finite work of the activation barrier even though (for unstable initial states) the value of the work of critical cluster formation (corresponding to the saddle point of the thermodynamic potential) is zero. This way, it turns out that nucleation concepts—in a modified form as compared with the classical picture—may govern also phase formation processes starting from unstable initial states. In contrast to the classical Gibbs' approach, the generalized Gibbs' method provides a description of phase changes both in binodal and spinodal regions of the phase diagram and confirms the point of view assuming a continuity of the basic features of the phase transformation kinetics in the vicinity of the classical spinodal curve.
Pedros, Philip B; Askari, Omid; Metghalchi, Hameed
2016-12-01
During the last decade, municipal wastewater treatment plants have been subject to increasingly stringent nutrient removal requirements, including for nitrogen. Typically, biological treatment processes are employed to meet these limits. Although the nitrogen in the wastewater stream is reduced, certain steps in the biological processes allow for the release of gaseous nitrous oxide (N2O), a greenhouse gas (GHG). A comprehensive study was conducted to investigate the potential to mitigate N2O emissions from biological nutrient removal (BNR) processes by means of thermal decomposition. The study examined using the off gases from the biological process, instead of ambient air, as the oxidant gas for the combustion of biomethane. A detailed analysis examined the concentrations of N2O and 58 other gases exiting the combustion process. The analysis was based on the assumption that the exhaust gases were in chemical equilibrium, since the residence time in the combustor is sufficiently longer than the characteristic chemical time scales. For all inlet N2O concentrations the outlet concentrations were close to zero. Additionally, the emissions of hydrogen sulfide (H2S) and ten commonly occurring volatile organic compounds (VOCs) were also examined as a means of odor control for biological secondary treatment processes or as potential emissions from an anaerobic reactor of a BNR process. The sulfur released from the H2S formed sulfur dioxide (SO2), and eight of the ten VOCs were destroyed.
NASA Astrophysics Data System (ADS)
Bakker, O. J.; Gibson, C.; Wilson, P.; Lohse, N.; Popov, A. A.
2015-10-01
Due to its inherent advantages, linear friction welding is a solid-state joining process of increasing importance to the aerospace, automotive, medical and power generation equipment industries. Tangential oscillations and forge stroke during the burn-off phase of the joining process introduce essential dynamic forces, which can also be detrimental to the welding process. Since burn-off is a critical phase in the manufacturing stage, process monitoring is fundamental for quality and stability control purposes. This study aims to improve workholding stability through the analysis of fixture cassette deformations. Methods and procedures for process monitoring are developed and implemented in a fail-or-pass assessment system for fixture cassette deformations during the burn-off phase. Additionally, the de-noised signals are compared to results from previous production runs. The observed deformations as a consequence of the forces acting on the fixture cassette are measured directly during the welding process. Data on the linear friction-welding machine are acquired and de-noised using empirical mode decomposition, before the burn-off phase is extracted. This approach enables a direct, objective comparison of the signal features with trends from previous successful welds. The capacity of the whole process monitoring system is validated and demonstrated through the analysis of a large number of signals obtained from welding experiments.
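The empirical mode decomposition step mentioned above can be sketched in a deliberately simplified form. The following single-IMF sift uses linear envelopes (production EMD implementations use cubic-spline envelopes and a convergence criterion); all function names and signal parameters are illustrative, not taken from the study.

```python
import numpy as np

def sift_once(x):
    """One EMD sifting pass: subtract the mean of the upper/lower envelopes."""
    t = np.arange(len(x))
    # interior local maxima and minima
    maxima = np.where((x[1:-1] > x[:-2]) & (x[1:-1] > x[2:]))[0] + 1
    minima = np.where((x[1:-1] < x[:-2]) & (x[1:-1] < x[2:]))[0] + 1
    if len(maxima) < 2 or len(minima) < 2:
        return x  # too few extrema to form envelopes
    upper = np.interp(t, maxima, x[maxima])   # linear envelope through maxima
    lower = np.interp(t, minima, x[minima])   # linear envelope through minima
    return x - 0.5 * (upper + lower)

def extract_imf(x, n_sifts=10):
    """Repeat sifting to isolate the fastest-oscillating component."""
    h = x.astype(float)
    for _ in range(n_sifts):
        h = sift_once(h)
    return h

# de-noise a synthetic signal by removing the first (highest-frequency) IMF
t = np.linspace(0.0, 1.0, 500)
signal = np.sin(2 * np.pi * 5 * t)
noisy = signal + 0.1 * np.random.default_rng(0).normal(size=t.size)
imf1 = extract_imf(noisy)       # mostly noise
denoised = noisy - imf1
```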
Deshpande, Gopikrishna; Rangaprakash, D.; Oeding, Luke; Cichocki, Andrzej; Hu, Xiaoping P.
2017-01-01
A Brain-Computer Interface (BCI) is a setup permitting the control of external devices by decoding brain activity. Electroencephalography (EEG) has been extensively used for decoding brain activity since it is non-invasive, cheap, portable, and has high temporal resolution to allow real-time operation. Due to its poor spatial specificity, BCIs based on EEG can require extensive training and multiple trials to decode brain activity (consequently slowing down the operation of the BCI). On the other hand, BCIs based on functional magnetic resonance imaging (fMRI) are more accurate owing to its superior spatial resolution and sensitivity to underlying neuronal processes which are functionally localized. However, due to its relatively low temporal resolution, high cost, and lack of portability, fMRI is unlikely to be used for routine BCI. We propose a new approach for transferring the capabilities of fMRI to EEG, which includes simultaneous EEG/fMRI sessions for finding a mapping from EEG to fMRI, followed by a BCI run from only EEG data, but driven by fMRI-like features obtained from the mapping identified previously. Our novel data-driven method is likely to discover latent linkages between electrical and hemodynamic signatures of neural activity hitherto unexplored using model-driven methods, and is likely to serve as a template for a novel multi-modal strategy wherein cross-modal EEG-fMRI interactions are exploited for the operation of a unimodal EEG system, leading to a new generation of EEG-based BCIs. PMID:28638316
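The core idea of learning a mapping from EEG to fMRI-like features during simultaneous sessions, then applying it at BCI run time, can be sketched with a simple ridge regression. Everything below (channel counts, feature counts, synthetic data) is an illustrative assumption, not the authors' actual data-driven method.

```python
import numpy as np

rng = np.random.default_rng(0)
# hypothetical simultaneous EEG/fMRI training data:
# 200 time points, 64 EEG channels, 10 fMRI ROI features
eeg = rng.normal(size=(200, 64))
true_W = rng.normal(size=(64, 10))
fmri = eeg @ true_W + 0.01 * rng.normal(size=(200, 10))

# ridge-regression mapping W: EEG -> fMRI-like features
lam = 1.0
W = np.linalg.solve(eeg.T @ eeg + lam * np.eye(64), eeg.T @ fmri)

# at BCI run time, fMRI-like features are predicted from EEG alone
features = eeg @ W
```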
A domain decomposition parallel processing algorithm for molecular dynamics simulations of polymers
NASA Astrophysics Data System (ADS)
Brown, David; Clarke, Julian H. R.; Okuda, Motoi; Yamazaki, Takao
1994-10-01
We describe in this paper a domain decomposition molecular dynamics algorithm for use on distributed memory parallel computers which is capable of handling systems containing rigid bond constraints and three- and four-body potentials as well as non-bonded potentials. The algorithm has been successfully implemented on the 1024-processing-element Fujitsu AP1000 machine. The performance has been compared with and benchmarked against the alternative cloning method of parallel processing [D. Brown, J.H.R. Clarke, M. Okuda and T. Yamazaki, J. Chem. Phys., 100 (1994) 1684] and against results obtained using other scalar and vector machines. Two parallel versions of the SHAKE algorithm, which solves the bond-length constraint problem, have been compared with regard to optimising the performance of this procedure.
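The basic bookkeeping of spatial domain decomposition, assigning each particle to the processor that owns its spatial cell, can be sketched as follows. This is a generic illustration, not the AP1000 implementation; the box size and processor grid are assumptions.

```python
import numpy as np

def assign_domains(positions, box, grid):
    """Map each particle to the rank owning its spatial cell.

    positions: (N, 3) coordinates in [0, box)
    box:       edge length of the cubic simulation box
    grid:      (px, py, pz) processor grid
    """
    grid = np.asarray(grid)
    cell = (positions / box * grid).astype(int)
    cell = np.clip(cell, 0, grid - 1)   # guard particles exactly on the boundary
    # flatten the (i, j, k) cell index to a single rank id
    return cell[:, 0] * grid[1] * grid[2] + cell[:, 1] * grid[2] + cell[:, 2]

rng = np.random.default_rng(1)
pos = rng.uniform(0.0, 10.0, size=(1000, 3))
ranks = assign_domains(pos, box=10.0, grid=(4, 4, 4))  # 64 domains
```

In a real distributed-memory code, each rank would then exchange boundary ("halo") particles with the ranks owning neighbouring cells before computing short-range forces.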
Garbe, Christoph S; Buttgereit, Andreas; Schürmann, Sebastian; Friedrich, Oliver
2012-01-01
Practically all chronic diseases are characterized by tissue remodeling that alters organ and cellular function through changes to normal organ architecture. Some morphometric alterations become irreversible and account for disease progression even at the cellular level. Early diagnostics to categorize tissue alterations, as well as monitoring of progression or remission of disturbed cytoarchitecture upon treatment in the same individual, are a new and emerging field. They strongly challenge spatial resolution and require advanced imaging techniques and strategies for detecting morphological changes. We use a combined second harmonic generation (SHG) microscopy and automated image processing approach to quantify morphology in an animal model of inherited Duchenne muscular dystrophy (mdx mouse) with age. Multiphoton XYZ image stacks from tissue slices reveal vast morphological deviation in muscles from old mdx mice at different scales of cytoskeletal architecture: cell calibers are irregular, myofibrils within cells are twisted, and sarcomere lattice disruptions (detected as "verniers") are larger in number compared to samples from healthy mice. In young mdx mice, such alterations are only minor. The boundary-tensor approach, adapted and optimized for SHG data, is a suitable means of quick quantitative morphometry in whole tissue slices. The overall detection performance of the automated algorithm compares very well with manual "by eye" detection, the latter being time consuming and prone to subjective errors. Our algorithm outperforms manual detection in speed with similar reliability. This approach will be an important prerequisite for the implementation of clinical image databases to diagnose and monitor specific morphological alterations in chronic (muscle) diseases.
Schmidt, A.J.; Freeman, H.D.; Brown, M.D.; Zacher, A.H.; Neuenschwander, G.N.; Wilcox, W.A.; Gano, S.R.; Kim, B.C.; Gavaskar, A.R.
1996-02-01
Base Catalyzed Decomposition (BCD) is a chemical dehalogenation process designed for treating soils and other substrates contaminated with polychlorinated biphenyls (PCB), pesticides, dioxins, furans, and other hazardous organic substances. PCBs are heavy organic liquids once widely used in industry as lubricants, heat transfer oils, and transformer dielectric fluids. In 1976, production was banned when PCBs were recognized as carcinogenic substances. It was estimated that significant quantities (one billion tons) of U.S. soils, including areas on U.S. military bases outside the country, were contaminated by PCB leaks and spills, and cleanup activities began. The BCD technology was developed in response to these activities. This report details the evolution of the process, from inception to deployment in Guam, and describes the process and system components provided to the Navy to meet the remediation requirements. The report is divided into several sections to cover the range of development and demonstration activities. Section 2.0 gives an overview of the project history. Section 3.0 describes the process chemistry and remediation steps involved. Section 4.0 provides a detailed description of each component and specific development activities. Section 5.0 details the testing and deployment operations and provides the results of the individual demonstration campaigns. Section 6.0 gives an economic assessment of the process. Section 7.0 presents the conclusions and recommendations from this project. The appendices contain equipment and instrument lists, equipment drawings, and detailed run and analytical data.
Decomposition of aniline in aqueous solution by UV/TiO2 process with applying bias potential.
Ku, Young; Chiu, Ping-Chin; Chou, Yiang-Chen
2010-11-15
Application of a bias potential to the photocatalytic decomposition of aniline in aqueous solution was studied under various solution pH values, bias potentials, and concentrations of potassium chloride. The decomposition of aniline by the UV/TiO2 process was found to be enhanced by the application of bias potentials of lower voltages; however, the electrolysis of aniline became more dominant as the applied bias potential exceeded 1.0 V. Based on the experimental results and calculated synergetic factors, the application of bias potential improved the decomposition of aniline more noticeably in acidic solutions than in alkaline solutions. Decomposition of aniline by the UV/bias/TiO2 process in alkaline solutions increased to a certain extent with the concentration of potassium chloride present in the aqueous solution. Experimental results also indicated that the energy consumed by applying a bias potential for aniline decomposition by the UV/bias/TiO2 process might be much lower than that consumed by increasing the light intensity for photocatalysis.
Hsiao, M.C.; Merritt, B.T.; Penetrante, B.M.; Vogtlin, G.E.; Wallman, P.H.
1995-09-01
Experiments are presented on the plasma-assisted decomposition of dilute concentrations of methanol and trichloroethylene in atmospheric pressure air streams by electrical discharge processing. This investigation used two types of discharge reactors, a dielectric-barrier and a pulsed corona discharge reactor, to study the effects of gas temperature and electrical energy input on the decomposition chemistry and byproduct formation. Our experimental data on both methanol and trichloroethylene show that, under identical gas conditions, the type of electrical discharge reactor does not affect the energy requirements for decomposition or byproduct formation. Our experiments on methanol show that discharge processing converts methanol to COx with an energy yield that increases with temperature. In contrast to the results from methanol, COx is only a minor product in the decomposition of trichloroethylene. In addition, higher temperatures decrease the energy yield for trichloroethylene. This effect may be due to increased competition from decomposition of the byproducts dichloroacetyl chloride and phosgene. In all cases plasma processing using an electrical discharge device produces CO preferentially over CO2.
Efficient photoreductive decomposition of N-nitrosodimethylamine by UV/iodide process.
Sun, Zhuyu; Zhang, Chaojie; Zhao, Xiaoyun; Chen, Jing; Zhou, Qi
2017-05-05
N-nitrosodimethylamine (NDMA) has aroused extensive concern as a disinfection byproduct due to its high toxicity and elevated concentration levels in water sources. This study investigates the photoreductive decomposition of NDMA by the UV/iodide process. The results showed that this process is an effective strategy for the treatment of NDMA, with 99.2% of NDMA removed within 10 min. The depletion of NDMA by the UV/iodide process obeyed pseudo-first-order kinetics with a rate constant (k1) of 0.60 ± 0.03 min^-1. Hydrated electrons (eaq^-) generated by the UV irradiation of iodide were proven to play a critical role. Dimethylamine (DMA) and nitrite (NO2^-) were formed as the main intermediate products, which completely converted to formate (HCOO^-), ammonium (NH4^+) and nitrogen (N2). Therefore, not only the high efficiency of NDMA destruction but also the elimination of toxic intermediates makes the UV/iodide process advantageous. A photoreduction mechanism was proposed: NDMA initially absorbs photons to reach a photoexcited state, and then undergoes cleavage of the N-NO bond under the attack of eaq^-. The solution pH had little impact on NDMA removal. However, alkaline conditions were more favorable for the elimination of DMA and NO2^-, thus effectively reducing secondary pollution.
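The reported kinetics are easy to sanity-check: with first-order decay C(t) = C0·exp(-k1·t), the quoted rate constant predicts a removal close to the measured 99.2% at 10 min.

```python
import math

k1 = 0.60          # reported pseudo-first-order rate constant, min^-1
t = 10.0           # irradiation time, min

remaining = math.exp(-k1 * t)   # C/C0 under first-order decay
removal = 1.0 - remaining       # predicted fraction of NDMA removed
# predicts ~99.8% removal, consistent with the reported 99.2%
```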
Cao, Hong-Wen; Yang, Ke-Yu; Yan, Hong-Mei
2017-01-01
Character order information is encoded at the initial stage of Chinese word processing; however, its time course remains underspecified. In this study, we assess the exact time course of the character decomposition and transposition processes of two-character Chinese compound words (canonical, transposed, or reversible words) compared with pseudowords, using dual-target rapid serial visual presentation (RSVP) of stimuli appearing at 30 ms per character with no inter-stimulus interval. The results indicate that Chinese readers can identify words with character transpositions in rapid succession; however, a transposition cost is involved in identifying transposed words compared to canonical words. In RSVP reading, the character order of words is more likely to be reversed during the period from 30 to 180 ms for canonical and reversible words, but from 30 to 240 ms for transposed words. Taken together, the findings demonstrate that the holistic representation of the base word is activated, but the order of the two constituent characters is not strictly processed during the very early stage of visual word processing. PMID:28408895
Decomposition of phenylarsonic acid by AOP processes: degradation rate constants and by-products.
Jaworek, K; Czaplicka, M; Bratek, Ł
2014-10-01
The paper presents results of studies of the photodegradation, photooxidation, and oxidation of phenylarsonic acid (PAA) in aqueous solution. Water solutions containing 2.7 g dm^-3 phenylarsonic acid were subjected to advanced oxidation processes (AOP) in UV, UV/H2O2, UV/O3, H2O2, and O3 systems under two pH conditions. Kinetic rate constants and half-lives of the phenylarsonic acid decomposition reaction are presented. The results from the study indicate that at pH 2 and 7, PAA degradation takes place in accordance with pseudo-first-order kinetics. The highest rate constants (10.45 × 10^-3 and 20.12 × 10^-3) and degradation efficiencies at pH 2 and 7 were obtained with the UV/O3 process. In solution, after the processes, benzene, phenol, acetophenone, o-hydroxybiphenyl, p-hydroxybiphenyl, benzoic acid, benzaldehyde, and biphenyl were identified.
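For pseudo-first-order decay, the half-life follows directly from each rate constant as t_1/2 = ln(2)/k. A minimal sketch using the quoted UV/O3 values (time units as in the paper, which does not state them here):

```python
import math

# highest reported rate constants, for the UV/O3 process
k_values = {"pH 2": 10.45e-3, "pH 7": 20.12e-3}

# half-life of a pseudo-first-order reaction: t_1/2 = ln(2) / k
half_lives = {cond: math.log(2) / k for cond, k in k_values.items()}
# the faster pH 7 reaction has roughly half the half-life of the pH 2 one
```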
NASA Technical Reports Server (NTRS)
Hudson, Nicolas; Lin, Ying; Barengoltz, Jack
2010-01-01
A method for evaluating the probability of a Viable Earth Microorganism (VEM) contaminating a sample during the sample acquisition and handling (SAH) process of a potential future Mars Sample Return mission is developed. A scenario where multiple core samples would be acquired using a rotary percussive coring tool, deployed from an arm on a MER-class rover, is analyzed. The analysis is conducted in a structured way by decomposing the sample acquisition and handling process into a series of discrete time steps and breaking the physical system into a set of relevant components. At each discrete time step, two key functions are defined: the probability of a VEM being released from each component, and the transport matrix, which represents the probability of VEM transport from one component to another. By defining the expected number of VEMs on each component at the start of the sampling process, these decompositions allow the expected number of VEMs on each component at each sampling step to be represented as a Markov chain. This formalism provides a rigorous mathematical framework in which to analyze the probability of a VEM entering the sample chain, as well as making the analysis tractable by breaking the process down into small analyzable steps.
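The Markov-chain bookkeeping described above can be sketched in a few lines. The component names, release probabilities, and transport matrix below are entirely hypothetical placeholders, not values from the analysis.

```python
import numpy as np

# component 0 = coring tool, 1 = rover arm, 2 = sample (illustrative labels)
n0 = np.array([100.0, 50.0, 0.0])   # expected VEMs on each component at start

# T[i, j]: probability that a released VEM on component i lands on component j
T = np.array([
    [0.90, 0.08, 0.02],
    [0.05, 0.94, 0.01],
    [0.00, 0.00, 1.00],   # the sample retains whatever reaches it
])
release = np.array([0.5, 0.2, 0.0])  # per-step release probability per component

n = n0.copy()
for _ in range(5):   # five sampling steps
    # released VEMs are redistributed by T; unreleased VEMs stay in place
    n = (release * n) @ T + (1.0 - release) * n
```

Because each row of T sums to one, the expected total VEM count is conserved while probability mass gradually accumulates on the sample component.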
Interactive multiscale tensor reconstruction for multiresolution volume visualization.
Suter, Susanne K; Guitián, José A Iglesias; Marton, Fabio; Agus, Marco; Elsener, Andreas; Zollikofer, Christoph P E; Gopi, M; Gobbetti, Enrico; Pajarola, Renato
2011-12-01
Large scale and structurally complex volume datasets from high-resolution 3D imaging devices or computational simulations pose a number of technical challenges for interactive visual analysis. In this paper, we present the first integration of a multiscale volume representation based on tensor approximation within a GPU-accelerated out-of-core multiresolution rendering framework. Specific contributions include (a) a hierarchical brick-tensor decomposition approach for pre-processing large volume data, (b) a GPU accelerated tensor reconstruction implementation exploiting CUDA capabilities, and (c) an effective tensor-specific quantization strategy for reducing data transfer bandwidth and out-of-core memory footprint. Our multiscale representation allows for the extraction, analysis and display of structural features at variable spatial scales, while adaptive level-of-detail rendering methods make it possible to interactively explore large datasets within a constrained memory footprint. The quality and performance of our prototype system is evaluated on large structurally complex datasets, including gigabyte-sized micro-tomographic volumes.
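The paper's brick-tensor pipeline is not reproduced here, but a truncated higher-order SVD gives the flavor of the Tucker-style approximation that such multiscale representations build on. This sketch assumes a small dense tensor and illustrative target ranks.

```python
import numpy as np

def hosvd(X, ranks):
    """Truncated higher-order SVD: a simple Tucker approximation of tensor X."""
    factors = []
    for mode, r in enumerate(ranks):
        # mode-n unfolding of the original tensor
        unfolding = np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)
        U, _, _ = np.linalg.svd(unfolding, full_matrices=False)
        factors.append(U[:, :r])
    core = X
    for mode, U in enumerate(factors):
        # project the core onto each mode's leading subspace
        core = np.moveaxis(np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return core, factors

def reconstruct(core, factors):
    out = core
    for mode, U in enumerate(factors):
        out = np.moveaxis(np.tensordot(U, np.moveaxis(out, mode, 0), axes=1), 0, mode)
    return out

X = np.random.default_rng(0).normal(size=(16, 16, 16))
core, factors = hosvd(X, ranks=(8, 8, 8))
Xhat = reconstruct(core, factors)   # lossy low-rank approximation
```

In a rendering setting, only the small core and factor matrices need to be stored and transferred; reconstruction (here on the CPU, on the GPU in the paper) regenerates an approximate brick on demand.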
Qiu, Yang; Collin, Felten; Hurt, Robert H; Külaots, Indrek
2016-01-01
The success of graphene technologies will require the development of safe and cost-effective nano-manufacturing methods. Special safety issues arise for manufacturing routes based on graphite oxide (GO) as an intermediate due to its energetic behavior. This article presents a detailed thermochemical and kinetic study of GO exothermic decomposition designed to identify the conditions and material compositions that avoid explosive events during storage and processing at large scale. It is shown that GO becomes more reactive toward thermal decomposition when it is pretreated with OH^- in suspension, and the effect is reversible by back-titration to low pH. This OH^- effect can lower the decomposition exotherm onset temperature by up to 50 °C, causing overlap with common drying operations (100-120 °C) and possible self-heating and thermal runaway during processing. Spectroscopic and modeling evidence suggests that epoxide groups are primarily responsible for the energetic behavior, and epoxy ring opening/closing reactions are offered as an explanation for the reversible effects of pH on decomposition kinetics and enthalpies. A quantitative kinetic model is developed for GO thermal decomposition and used in a series of case studies to predict the storage conditions under which spontaneous self-heating, thermal runaway, and explosions can be avoided.
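The paper's quantitative kinetic model is not reproduced here, but the self-heating scenario can be sketched with a generic zeroth-order Arrhenius energy balance: heat generation grows exponentially with temperature while losses grow only linearly, so runaway occurs above a critical starting temperature. Every parameter value below is an assumption for illustration, not a value from the study.

```python
import math

# illustrative (assumed) parameters for dT/dt = (Q*A*exp(-Ea/RT) - h*(T - T_amb)) / Cp
Ea = 1.2e5       # activation energy, J/mol
A = 1.0e12       # pre-exponential factor, 1/s
Q = 1.5e3        # decomposition exotherm, J/g
Cp = 1.0         # heat capacity, J/(g K)
h = 0.05         # lumped heat-loss coefficient, W/(g K)
R = 8.314        # gas constant, J/(mol K)
T_amb = 393.0    # surroundings held at a 120 C drying temperature

def simulate(T0, dt=0.1, steps=20000):
    """Explicit-Euler integration; returns (runaway?, final temperature)."""
    T = T0
    for _ in range(steps):
        gen = Q * A * math.exp(-Ea / (R * T))   # Arrhenius heat generation
        loss = h * (T - T_amb)                  # linear heat loss
        T += dt * (gen - loss) / Cp
        if T > 800.0:
            return True, T                      # thermal runaway
    return False, T

safe, _ = simulate(T0=393.0)      # settles at a mild steady state
runaway, _ = simulate(T0=450.0)   # crosses the ignition threshold
```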
Cao, Hongwen; Gao, Min; Yan, Hongmei
2016-01-01
The attentional blink (AB) is the phenomenon in which the identification of the second of two targets (T2) is attenuated if it is presented less than 500 ms after the first target (T1). Although the AB is eliminated in canonical word conditions, it remains unclear whether the character order in compound words affects the magnitude of the AB. Morpheme decomposition and transposition of Chinese two-character compound words can provide an effective means to examine AB priming and to assess combinations of the component representations inherent to visual word identification. In the present study, we examined the processing of consecutive targets in a rapid serial visual presentation (RSVP) paradigm using Chinese two-character compound words in which the two characters were transposed to form meaningful words or meaningless combinations (reversible, transposed, or canonical words). We found that when two Chinese characters that form a compound word, regardless of their order, are presented in an RSVP sequence, the likelihood of an AB for the second character is greatly reduced or eliminated compared to when the two characters constitute separate words rather than a compound word. Moreover, the order of the report for the two characters is more likely to be reversed when the normal order of the two characters in a compound word is reversed, especially when the interval between the presentation of the two characters is extremely short. These findings are more consistent with the cognitive strategy hypothesis than the resource-limited hypothesis during character decomposition and transposition of Chinese two-character compound words. These results suggest that compound characters are perceived as a unit, rather than two separate words. The data further suggest that readers could easily understand the text with character transpositions in compound words during Chinese reading.
NASA Technical Reports Server (NTRS)
Kuo, Kenneth K.; Lu, Y. C.; Chiaverini, Martin J.; Harting, George C.
1994-01-01
An experimental study on the fundamental processes involved in fuel decomposition and boundary layer combustion in hybrid rocket motors is being conducted at the High Pressure Combustion Laboratory of the Pennsylvania State University. This research should provide a useful engineering technology base in the development of hybrid rocket motors as well as a fundamental understanding of the complex processes involved in hybrid propulsion. A high pressure slab motor has been designed and manufactured for conducting experimental investigations. Oxidizer (LOX or GOX) supply and control systems have been designed and partly constructed for the head-end injection into the test chamber. Experiments using HTPB fuel, as well as fuels supplied by NASA designated industrial companies will be conducted. Design and construction of fuel casting molds and sample holders have been completed. The portion of these items for industrial company fuel casting will be sent to the McDonnell Douglas Aerospace Corporation in the near future. The study focuses on the following areas: observation of solid fuel burning processes with LOX or GOX, measurement and correlation of solid fuel regression rate with operating conditions, measurement of flame temperature and radical species concentrations, determination of the solid fuel subsurface temperature profile, and utilization of experimental data for validation of a companion theoretical study (Part 2) also being conducted at PSU.
Putting domain decomposition at the heart of a mesh-based simulation process
NASA Astrophysics Data System (ADS)
Chow, Peter; Addison, Clifford
2002-12-01
In computational mechanics analyses such as those in computational fluid dynamics and computational structural mechanics, some 60-90% of total modelling time is taken by specifying and creating the model of the geometry and mesh. The rest of the time is spent in actual analyses and in interpreting the results. This is especially true for industries such as aerospace and electronics, where 3D geometrically complex models with multiple physical processes are common. Advances in computational hardware and software have tended to increase the proportion of time spent in model creation, partly because such advances have made it feasible to solve hard and complex geometry problems in a timely fashion. This paper shows one way to exploit the advances in computation to reduce the model creation time and potentially the overall modelling time, namely the use of domain decomposition to define consistent and coherent global models based on existing component geometry and mesh models. In keeping with existing modelling processes, the re-engineering cost for the process is minimal.
The classical model for moment tensors
NASA Astrophysics Data System (ADS)
Tape, W.; Tape, C.
2013-12-01
A seismic moment tensor is a description of an earthquake source, but the description is indirect. The moment tensor describes seismic radiation rather than the actual physical process that initiates the radiation. A moment tensor 'model' then ties the physical process to the moment tensor. The model is not unique, and the physical process is therefore not unique. In the classical moment tensor model (Aki and Richards, 1980), an earthquake arises from slip along a planar fault, but with the slip not necessarily in the plane of the fault. The model specifies the resulting moment tensor in terms of the slip vector, the fault normal vector, and the Lamé elastic parameters, assuming isotropy. We review the classical model in the context of the fundamental lune. The lune is closely related to the space of moment tensors, and it provides a setting that is conceptually natural as well as pictorial. In addition to the classical model, we consider a crack plus double couple model (CDC model) in which a moment tensor is regarded as the sum of a crack tensor and a double couple. A compilation of full moment tensors from the literature reveals large deviations in Poisson's ratio as implied by the classical model. Either the classical model is inadequate or the published full moment tensors have very large uncertainties. We question the common interpretation of the isotropic component as a volume change in the source region.
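The classical model's moment tensor can be written down directly from the slip vector s, fault normal n, and Lamé parameters: M = lambda*(s.n)*I + mu*(s n^T + n s^T) (per unit fault area and slip magnitude). A short sketch with illustrative elastic constants:

```python
import numpy as np

def moment_tensor(slip, normal, lam, mu):
    """Classical-model moment tensor: M = lam*(s.n)*I + mu*(s n^T + n s^T)."""
    s = np.asarray(slip, dtype=float)
    n = np.asarray(normal, dtype=float)
    return lam * np.dot(s, n) * np.eye(3) + mu * (np.outer(s, n) + np.outer(n, s))

# pure double couple: slip lies in the fault plane, so s.n = 0 and the trace vanishes
M_dc = moment_tensor([1, 0, 0], [0, 0, 1], lam=30e9, mu=30e9)

# opening crack: slip along the normal produces an isotropic (volumetric) component
M_crack = moment_tensor([0, 0, 1], [0, 0, 1], lam=30e9, mu=30e9)
```

The trace of M_crack, lam*3 + mu*2 per unit slip, is what ties the isotropic component to Poisson's ratio in the compilation discussed above.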
Pelletier, Amandine; Periot, Olivier; Dilharreguy, Bixente; Hiba, Bassem; Bordessoules, Martine; Chanraud, Sandra; Pérès, Karine; Amieva, Hélène; Dartigues, Jean-François; Allard, Michèle; Catheline, Gwénaëlle
2015-01-01
Microstructural changes of White Matter (WM) associated with aging have been widely described through Diffusion Tensor Imaging (DTI) parameters. In parallel, White Matter Hyperintensities (WMH) as observed on T2-weighted MRI are extremely common in older individuals. However, few studies have investigated both phenomena conjointly. The present study investigates aging effects on DTI parameters in the absence and in the presence of WMH. Diffusion maps were constructed from 21-direction DTI scans of young adults (n = 19, mean age = 33, SD = 7.4) and two age-matched groups of older adults, one presenting low-level WMH (n = 20, mean age = 78, SD = 3.2) and one presenting high-level WMH (n = 20, mean age = 79, SD = 5.4). Older subjects with low-level WMH presented modifications of DTI parameters in comparison to younger subjects, fitting the DTI pattern classically described in aging, i.e., Fractional Anisotropy (FA) decrease/Radial Diffusivity (RD) increase. Furthermore, older subjects with high-level WMH showed more pronounced DTI modifications in Normal Appearing White Matter (NAWM) in comparison to those with low-level WMH. Finally, in older subjects with high-level WMH, FA and RD values of NAWM were associated with WMH burden. Our findings therefore suggest that DTI modifications and the presence of WMH may be two inter-dependent processes occurring within different temporal windows: DTI changes would reflect the early phase of white matter changes, and WMH would appear as a consequence of those changes.
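The DTI scalars discussed here (FA, RD) derive from the three eigenvalues of the fitted diffusion tensor. A short sketch of the standard formulas (an illustration, not code from the study; the example eigenvalues are hypothetical):

```python
import numpy as np

def dti_scalars(eigvals):
    """Standard DTI scalar maps from the diffusion-tensor eigenvalues
    (sorted l1 >= l2 >= l3): mean (MD), axial (AD), radial (RD)
    diffusivity and fractional anisotropy (FA)."""
    l1, l2, l3 = sorted(eigvals, reverse=True)
    md = (l1 + l2 + l3) / 3.0          # mean diffusivity
    ad = l1                            # axial diffusivity
    rd = (l2 + l3) / 2.0               # radial diffusivity
    num = (l1 - md) ** 2 + (l2 - md) ** 2 + (l3 - md) ** 2
    den = l1 ** 2 + l2 ** 2 + l3 ** 2
    fa = np.sqrt(1.5 * num / den) if den > 0 else 0.0
    return md, ad, rd, fa

# Isotropic diffusion gives FA near 0; a strongly anisotropic profile
# (e.g. intact white matter) gives FA closer to 1.
print(dti_scalars([1e-3, 1e-3, 1e-3])[3])        # ~0
print(dti_scalars([1.7e-3, 0.3e-3, 0.3e-3])[3])  # ~0.8
```

The aging pattern the abstract describes (FA decrease with RD increase) corresponds to l2 and l3 growing toward l1.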
Striganova, B R; Bienkowski, P
2000-01-01
The rate of grass litter decomposition was studied in soils of the Karkonosze Mountains of the Sudeten at different altitudes. Parallel structural-functional investigations of the soil animal population, using soil macrofauna as an example, were carried out, and heavy metals were assayed in the soil at stationary plots to reveal the effects of both natural and anthropogenic factors on soil biological activity. The recent contamination of Sudeten soils by heavy metals and sulfur does not affect the spatial distribution and abundance of soil-dwelling invertebrates or the decomposition rates. The latter correlated with a high level of soil saprotroph activity. The activity of the decomposition processes depends on the soil organic matter content, the conditions of soil drainage, and the temperature of the upper soil horizon.
NASA Astrophysics Data System (ADS)
Ball, R.; McIntosh, A. C.; Brindley, J.
2004-06-01
A simple dynamical system that models the competitive thermokinetics and chemistry of cellulose decomposition is examined, with reference to evidence from experimental studies indicating that char formation is a low activation energy exothermal process and volatilization is a high activation energy endothermal process. The thermohydrolysis chemistry at the core of the primary competition is described. Essentially, the competition is between two nucleophiles, a molecule of water and an -OH group on C6 of an end glucosyl cation, to form either a reducing chain fragment with the propensity to undergo the bond-forming reactions that ultimately form char, or a levoglucosan end-fragment that depolymerizes to volatile products. The results of this analysis suggest that promotion of char formation under thermal stress can actually increase the production of flammable volatiles. Thus, we would like to convey an important safety message in this paper: in some situations where heat and mass transfer is restricted in cellulosic materials, such as furnishings, insulation, and stockpiles, the use of char-promoting treatments for fire retardation may have the effect of increasing the risk of flaming combustion.
NASA Technical Reports Server (NTRS)
Kuo, Kenneth K.; Lu, Y. C.; Chiaverini, Martin J.; Harting, George C.
1994-01-01
An experimental study on the fundamental processes involved in fuel decomposition and boundary layer combustion in hybrid rocket motors is being conducted at the High Pressure Combustion Laboratory of the Pennsylvania State University. This research should provide an engineering technology base for development of large scale hybrid rocket motors as well as a fundamental understanding of the complex processes involved in hybrid propulsion. A high pressure slab motor has been designed for conducting experimental investigations. Oxidizer (LOX or GOX) is injected through the head-end over a solid fuel (HTPB) surface. Experiments using fuels supplied by NASA designated industrial companies will also be conducted. The study focuses on the following areas: measurement and observation of solid fuel burning with LOX or GOX, correlation of solid fuel regression rate with operating conditions, measurement of flame temperature and radical species concentrations, determination of the solid fuel subsurface temperature profile, and utilization of experimental data for validation of a companion theoretical study also being conducted at PSU.
Mathematical simulation of thermal decomposition processes in coking polymers during intense heating
Shlenskii, O.F.; Polyakov, A.A.
1994-12-01
The description of nonstationary heat transfer in heat-shielding materials based on cross-linked polymers, the mathematical simulation of chemical engineering processes for treating coking and fiery coals, and design calculations all require taking thermal destruction kinetics into account. The kinetics of chemical transformations determines the change in substance density as a function of temperature and time, the heat-release function, and other material properties. The traditionally accepted description of the thermal destruction kinetics of coking materials is based on formulating a set of kinetic equations in which only chemical transformations are taken into account. However, such an approach does not necessarily agree with experimental data obtained for the case of intense heating. The authors propose including in the set of kinetic equations parameters that characterize the decrease of intermolecular interaction within a comparatively narrow temperature interval (20-40 K). In the neighborhood of a certain temperature T₁, called the limiting temperature of thermal decomposition, a decrease in intermolecular interaction causes an increase in the rates of chemical and phase transformations. This enhancement of destruction processes has been found experimentally by the contact thermal analysis method.
Empirical mode decomposition as a time-varying multirate signal processing system
NASA Astrophysics Data System (ADS)
Yang, Yanli
2016-08-01
Empirical mode decomposition (EMD) can adaptively split composite signals into narrow subbands termed intrinsic mode functions (IMFs). Although an analytical expression for the IMFs extracted by EMD was introduced in Yang et al. (2013) [1], it applies only to the case of uniformly spaced extrema. In this paper, the EMD algorithm is analyzed from a digital signal processing perspective for the case of nonuniformly spaced extrema. Firstly, the extrema extraction is represented by a time-varying extrema decimator, and the nonuniform extrema extraction is analyzed by modeling the time-varying extrema decimation at a fixed time point as a time-invariant decimation. Secondly, by using the impulse/summation approach, spline interpolation for nonuniformly spaced knots is expressed as two basic operations: time-varying interpolation and filtering by a time-varying spline filter. Thirdly, the envelopes of signals are written as the output of the time-varying spline filter, and an expression for the envelopes in both the time and frequency domains is presented. The EMD algorithm is then described as a time-varying multirate signal processing system. Finally, an equation modeling the IMFs is derived using a matrix formulation in the time domain for the general case of nonuniformly spaced extrema.
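The envelope construction at the heart of EMD sifting — cubic splines through the nonuniformly spaced extrema, then their pointwise mean — can be sketched as follows (a simplified illustration with naive endpoint handling, not the paper's multirate formulation; the test signal is hypothetical):

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema

def mean_envelope(t, x):
    """Cubic-spline upper/lower envelopes through the (possibly nonuniformly
    spaced) extrema of x, and their pointwise mean -- the core operation of
    one EMD sifting step."""
    imax = argrelextrema(x, np.greater)[0]
    imin = argrelextrema(x, np.less)[0]
    # naive boundary handling: pin both envelopes to the record endpoints
    imax = np.r_[0, imax, len(x) - 1]
    imin = np.r_[0, imin, len(x) - 1]
    upper = CubicSpline(t[imax], x[imax])(t)
    lower = CubicSpline(t[imin], x[imin])(t)
    return (upper + lower) / 2.0

t = np.linspace(0.0, 1.0, 2000)
x = np.sin(2 * np.pi * 40 * t) + 0.5 * np.sin(2 * np.pi * 3 * t)
m = mean_envelope(t, x)   # approximates the slow component
h = x - m                 # one sifting step toward the fast IMF
```

Note that the spline knots (the extrema) are spaced nonuniformly whenever the signal is a genuine mixture, which is exactly the case the paper's time-varying filter model addresses.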
Multilinear operators for higher-order decompositions.
Kolda, Tamara Gibson
2006-04-01
We propose two new multilinear operators for expressing the matrix compositions that are needed in the Tucker and PARAFAC (CANDECOMP) decompositions. The first operator, which we call the Tucker operator, is shorthand for performing an n-mode matrix multiplication for every mode of a given tensor and can be employed to concisely express the Tucker decomposition. The second operator, which we call the Kruskal operator, is shorthand for the sum of the outer-products of the columns of N matrices and allows a divorce from a matricized representation and a very concise expression of the PARAFAC decomposition. We explore the properties of the Tucker and Kruskal operators independently of the related decompositions. Additionally, we provide a review of the matrix and tensor operations that are frequently used in the context of tensor decompositions.
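A minimal numpy rendering of the two operators may help fix ideas (an illustrative sketch, not the report's notation; the C-order unfolding used here is one of several conventions in use):

```python
import numpy as np

def n_mode_product(X, U, mode):
    """Multiply tensor X by matrix U along the given mode."""
    Xm = np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)   # unfold
    Ym = U @ Xm
    new_shape = (U.shape[0],) + tuple(np.delete(X.shape, mode))
    return np.moveaxis(Ym.reshape(new_shape), 0, mode)        # fold back

def tucker_operator(G, matrices):
    """Tucker operator: an n-mode product along every mode of the core G."""
    X = G
    for mode, U in enumerate(matrices):
        X = n_mode_product(X, U, mode)
    return X

def kruskal_operator(matrices):
    """Kruskal operator: sum of outer products of the matrices' columns."""
    R = matrices[0].shape[1]
    X = np.zeros(tuple(U.shape[0] for U in matrices))
    for r in range(R):
        v = matrices[0][:, r]
        for U in matrices[1:]:
            v = np.multiply.outer(v, U[:, r])
        X += v
    return X
```

A useful consistency check is that the Kruskal operator coincides with the Tucker operator applied to a superdiagonal core, mirroring the relationship between the PARAFAC and Tucker decompositions.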
NASA Astrophysics Data System (ADS)
Andriyah, L.; Lalasari, L. H.; Manaf, A.
2017-02-01
Extraction of cassiterite using alkaline decomposition with sodium carbonate (Na2CO3) has been studied. Cassiterite (SnO2) is a mineral ore that contains about 57.82 wt% tin (Sn) together with impurities such as quartz, ilmenite, monazite, rutile and zircon. The initial step of the process was to remove the impurities from the cassiterite by washing and separation with a high magnetic separator (HTS). The aim of this research is to increase the added value of cassiterite from a local area of Indonesia by using alkaline decomposition to form sodium stannate (Na2SnO3). The results show that cassiterite from Indonesia can form sodium stannate (Na2SnO3), which is water-soluble in the leaching process. The longer the decomposition time, the more sodium stannate is formed. The optimum result was reached when the decomposition was carried out at 850 °C for 4 hours with a 3:2 mole ratio of Na2CO3 to cassiterite. High Score Plus (HSP) was used in this research to analyze the mass fraction of sodium stannate (Na2SnO3); HSP analysis showed a sodium stannate (Na2SnO3) mass fraction of 70.3 wt%.
Kos, L; Michalska, K; Perkowski, J
2014-11-01
The aim of our studies was to determine the efficiency of decomposition of a non-ionic surfactant by the Fenton method in the presence of iron nanocompounds and to compare it with the classical Fenton method. The subject of the studies was aqueous solutions of the non-ionic detergent Tergitol TMN-10, used in the textile industry. Aqueous solutions of the surfactant were subjected to treatment by the classical Fenton method and to treatment in the presence of iron nanocompounds. In the samples of liquid solutions containing the surfactant, chemical oxygen demand (COD) and total organic carbon (TOC) were determined. The Fenton process was optimized based on studies of the effects of the compounds used in the treatment, the doses of iron and nanoiron and of hydrogen peroxide, and the pH of the solution on surfactant decomposition. Iron oxide nanopowder catalyzed the process of detergent decomposition, increasing its efficiency and the degree of mineralization. It was found that the efficiency of surfactant decomposition in the process using iron nanocompounds was 10 to 30% higher than in the classical method. The amounts of deposits formed were also several times smaller.
Tensor-Factorized Neural Networks.
Chien, Jen-Tzung; Bao, Yi-Ting
2017-04-17
The growing interest in multiway data analysis and deep learning has made tensor factorization (TF) and neural networks (NN) crucial topics. Conventionally, an NN model is estimated from a set of one-way observations. Such a vectorized NN does not generalize to learning representations from multiway observations. The classification performance of a vectorized NN is constrained, because the temporal or spatial information in neighboring ways is disregarded, and more parameters are required to learn the complicated data structure. This paper presents a new tensor-factorized NN (TFNN), which tightly integrates TF and NN for multiway feature extraction and classification under a unified discriminative objective. The TFNN can be seen as a generalized NN in which the affine transformation of an NN is replaced by multilinear and multiway factorization. The multiway information is preserved through layerwise factorization: Tucker decomposition and nonlinear activation are performed in each hidden layer. Tensor-factorized error backpropagation is developed to train the TFNN with limited parameter size and computation time. The TFNN can be further extended to a convolutional TFNN (CTFNN) by looking at small subtensors through factorized convolution. Experiments on real-world classification tasks demonstrate that the TFNN and CTFNN attain substantial improvement over an NN and a convolutional NN, respectively.
KOALA: A program for the processing and decomposition of transient spectra
NASA Astrophysics Data System (ADS)
Grubb, Michael P.; Orr-Ewing, Andrew J.; Ashfold, Michael N. R.
2014-06-01
Extracting meaningful kinetic traces from time-resolved absorption spectra is a non-trivial task, particularly for solution phase spectra where solvent interactions can substantially broaden and shift the transition frequencies. Typically, each spectrum is composed of signal from a number of molecular species (e.g., excited states, intermediate complexes, product species) with overlapping spectral features. Additionally, the profiles of these spectral features may evolve in time (i.e., signal nonlinearity), further complicating the decomposition process. Here, we present a new program for decomposing mixed transient spectra into their individual component spectra and extracting the corresponding kinetic traces: KOALA (Kinetics Observed After Light Absorption). The software combines spectral target analysis with brute-force linear least squares fitting, which is computationally efficient because of the small nonlinear parameter space of most spectral features. Within, we demonstrate the application of KOALA to two sets of experimental transient absorption spectra with multiple mixed spectral components. Although designed for decomposing solution-phase transient absorption data, KOALA may in principle be applied to any time-evolving spectra with multiple components.
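The brute-force linear least-squares step can be illustrated on synthetic data: given fixed component spectra, a single solve recovers every kinetic trace at once. The Gaussian basis spectra and decay constants below are hypothetical, not KOALA's data or API:

```python
import numpy as np

# Hypothetical basis: two Gaussian component spectra on a wavelength grid.
wl = np.linspace(400.0, 700.0, 301)

def gauss(center, width):
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

S = np.column_stack([gauss(480.0, 20.0), gauss(600.0, 30.0)])  # (n_wl, 2)

# Synthetic transient spectra: component amplitudes decaying in time.
t = np.linspace(0.0, 5.0, 50)
true_amps = np.column_stack([np.exp(-t / 1.0), np.exp(-t / 3.0)])  # (n_t, 2)
data = true_amps @ S.T                                             # (n_t, n_wl)

# Linear least squares: one solve yields the kinetic trace of each component.
amps, *_ = np.linalg.lstsq(S, data.T, rcond=None)
amps = amps.T                                                      # (n_t, 2)
```

In the noiseless, fixed-lineshape case this recovers the amplitudes exactly; KOALA's harder problem is precisely that real spectral profiles evolve in time, which pushes the nonlinear parameters into the brute-force search.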
Spectral decomposition of P50 suppression in schizophrenia during concurrent visual processing.
Moran, Zachary D; Williams, Terrance J; Bachman, Peter; Nuechterlein, Keith H; Subotnik, Kenneth L; Yee, Cindy M
2012-09-01
Reduced suppression of the auditory P50 event-related potential has long been associated with schizophrenia, but the mechanisms associated with the generation and suppression of the P50 are not well understood. Recent investigations have used spectral decomposition of the electroencephalograph (EEG) signal to gain additional insight into the ongoing electrophysiological activity that may be reflected by the P50 suppression deficit. The present investigation extended this line of study by examining how both a traditional measure of sensory gating and the ongoing EEG from which it is extracted might be modified by the presence of concurrent visual stimulation - perhaps better characterizing gating deficits as they occur in a real-world, complex sensory environment. The EEG was obtained from 18 patients with schizophrenia and 17 healthy control subjects during the P50 suppression paradigm and while identical auditory paired-stimuli were presented concurrently with affectively neutral pictures. Consistent with prior research, schizophrenia patients differed from healthy subjects in gating of power in the theta range; theta activity also was modulated by visual stimulation. In addition, schizophrenia patients showed intact gating but overall increased power in the gamma range, consistent with a model of NMDA receptor dysfunction in the disorder. These results are in line with a model of schizophrenia in which impairments in neural synchrony are related to sensory demands and the processing of multimodal information.
Trace Norm Regularized CANDECOMP/PARAFAC Decomposition With Missing Data.
Liu, Yuanyuan; Shang, Fanhua; Jiao, Licheng; Cheng, James; Cheng, Hong
2015-11-01
In recent years, low-rank tensor completion (LRTC) problems have received a significant amount of attention in computer vision, data mining, and signal processing. Existing trace norm minimization algorithms for iteratively solving LRTC problems involve multiple singular value decompositions of very large matrices at each iteration and therefore suffer from high computational cost. In this paper, we propose a novel trace norm regularized CANDECOMP/PARAFAC decomposition (TNCP) method for simultaneous tensor decomposition and completion. We first formulate a factor matrix rank minimization model by deducing the relation between the rank of each factor matrix and the mode-n rank of a tensor. We then introduce a tractable relaxation of our rank function and arrive at a convex problem of much smaller-scale matrix trace norm minimization. Finally, we develop an efficient algorithm based on the alternating direction method of multipliers to solve our problem. Promising experimental results on synthetic and real-world data validate the effectiveness of our TNCP method. Moreover, TNCP is significantly faster than state-of-the-art methods and scales to larger problems.
Extended vector-tensor theories
NASA Astrophysics Data System (ADS)
Kimura, Rampei; Naruko, Atsushi; Yoshida, Daisuke
2017-01-01
Recently, several extensions of massive vector theory in curved space-time have been proposed in the literature. In this paper, we consider the most general vector-tensor theories that contain up to two derivatives with respect to the metric and vector field. By imposing a degeneracy condition on the Lagrangian in the context of the ADM decomposition of space-time to eliminate an unwanted mode, we construct a new class of massive vector theories in which five degrees of freedom can propagate, corresponding to three for massive vector modes and two for massless tensor modes. We find that the generalized Proca and the beyond generalized Proca theories up to the quartic Lagrangian, which should be included in this formulation, are degenerate theories even in curved space-time. Finally, introducing new metric and vector field transformations, we investigate the properties of the theories thus obtained under such transformations.
Efficient MATLAB computations with sparse and factored tensors.
Bader, Brett William; Kolda, Tamara Gibson (Sandia National Lab, Livermore, CA)
2006-12-01
In this paper, the term tensor refers simply to a multidimensional or N-way array, and we consider how specially structured tensors allow for efficient storage and computation. First, we study sparse tensors, which have the property that the vast majority of the elements are zero. We propose storing sparse tensors using coordinate format and describe the computational efficiency of this scheme for various mathematical operations, including those typical to tensor decomposition algorithms. Second, we study factored tensors, which have the property that they can be assembled from more basic components. We consider two specific types: a Tucker tensor can be expressed as the product of a core tensor (which itself may be dense, sparse, or factored) and a matrix along each mode, and a Kruskal tensor can be expressed as the sum of rank-1 tensors. We are interested in the case where the storage of the components is less than the storage of the full tensor, and we demonstrate that many elementary operations can be computed using only the components. All of the efficiencies described in this paper are implemented in the Tensor Toolbox for MATLAB.
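One example of computing with the components alone: the Frobenius norm of a Kruskal tensor follows from the factors' Gram matrices, with no need to assemble the full array. A sketch in plain numpy (an illustration of the idea, not the Tensor Toolbox itself):

```python
import numpy as np

def kruskal_full(factors):
    """Assemble the dense tensor: sum of outer products of factor columns."""
    R = factors[0].shape[1]
    X = np.zeros(tuple(A.shape[0] for A in factors))
    for r in range(R):
        v = factors[0][:, r]
        for A in factors[1:]:
            v = np.multiply.outer(v, A[:, r])
        X += v
    return X

def kruskal_norm(factors):
    """Frobenius norm from the components only:
    ||X||^2 = sum of the entries of G_1 * G_2 * ... * G_N,
    where G_n = A_n^T A_n and * is the elementwise (Hadamard) product."""
    H = np.ones((factors[0].shape[1],) * 2)
    for A in factors:
        H *= A.T @ A          # R x R Gram matrix per mode
    return np.sqrt(H.sum())
```

The component-based route costs O(N R² max(dₙ)) instead of the product of all dimensions, which is the kind of saving the paper quantifies across many elementary operations.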
Unsupervised Tensor Mining for Big Data Practitioners.
Papalexakis, Evangelos E; Faloutsos, Christos
2016-09-01
Multiaspect data are ubiquitous in modern Big Data applications. For instance, different aspects of a social network are the different types of communication between people, the time stamp of each interaction, and the location associated to each individual. How can we jointly model all those aspects and leverage the additional information that they introduce to our analysis? Tensors, which are multidimensional extensions of matrices, are a principled and mathematically sound way of modeling such multiaspect data. In this article, our goal is to popularize tensors and tensor decompositions to Big Data practitioners by demonstrating their effectiveness, outlining challenges that pertain to their application in Big Data scenarios, and presenting our recent work that tackles those challenges. We view this work as a step toward a fully automated, unsupervised tensor mining tool that can be easily and broadly adopted by practitioners in academia and industry.
Shlenskii, O.F.; Murashov, G.G.
1982-05-01
In describing frontal processes of thermal decomposition of high-energy condensed substances, for example detonation, it is common practice to write the equation for the conservation of energy without any limitations on the heat propagation velocity (HPV). At the same time, it is known that in calculating fast heat-conduction processes, the assumption of an infinitely high HPV is not always justified. In order to evaluate the influence of the HPV on the results of heat-conduction calculations under conditions of short-term exothermic decomposition of a condensed substance, the solution of the problem of heating a semi-infinite, thermally unstable solid body with boundary conditions of the third kind on its surface has been examined.
Empirical mode decomposition analysis of random processes in the solar atmosphere
NASA Astrophysics Data System (ADS)
Kolotkov, D. Y.; Anfinogentov, S. A.; Nakariakov, V. M.
2016-08-01
Context. Coloured noisy components with a power law spectral energy distribution are often shown to appear in solar signals of various types. Such frequency-dependent noise may indicate the operation of various randomly distributed dynamical processes in the solar atmosphere. Aims: We develop a recipe for the correct usage of the empirical mode decomposition (EMD) technique in the presence of coloured noise, allowing one to distinguish clearly between quasi-periodic oscillatory phenomena in the solar atmosphere and superimposed random background processes. For illustration, we statistically investigate extreme ultraviolet (EUV) emission intensity variations observed with SDO/AIA in the coronal (171 Å), chromospheric (304 Å), and upper photospheric (1600 Å) layers of the solar atmosphere, from a quiet sun region and a sunspot umbra region. Methods: EMD has been used for the analysis because of its adaptive nature and its applicability to processing non-stationary and amplitude-modulated time series. For comparison with the results obtained with EMD, we use the Fourier transform technique as an etalon. Results: We empirically revealed the statistical properties of synthetic coloured noises in EMD, and suggested a scheme that allows for the detection of noisy components among the intrinsic modes obtained with EMD in real signals. Application of the method to the solar EUV signals showed that they indeed behave randomly and could be represented as a combination of different coloured noises characterised by specific values of the power law indices in their spectral energy distributions. On the other hand, 3-min oscillations in the analysed sunspot were detected to have energies significantly above the corresponding noise level. Conclusions: Correct accounting for the background frequency-dependent random processes is essential when using EMD for the analysis of oscillations in the solar atmosphere. For the quiet sun region the power law index was found to increase
NASA Astrophysics Data System (ADS)
Davydov, S. V.; Petrov, E. V.
2017-08-01
We have studied structural and phase transformations in tungsten-containing functional coatings of carbon steels obtained during the high-energy processes of implanting tungsten carbide micropowders by the method of complex pulse electromechanical processing and micropowders of tungsten by technology of directed energy of explosion based on the effect of superdeep penetration of solid particles (Usherenko effect). It has been shown that, during thermomechanical action, intensive steel austenization occurs in the deformation zone with the dissolution of tungsten carbide powder, the carbidization of tungsten powder, and the subsequent formation of composite gradient structures as a result of the decay of supercooled austenite supersaturated by tungsten according to the diffusion mechanism and the mechanism of spinodal decomposition. Separate zones of tungsten-containing phases of the alloy are in the liquid-phase state, as well as undergo spinodal decomposition with the formation of highly disperse carbide phases of globular morphology.
Schoenen, Dirk
2013-01-01
Decomposition of the human body is a microbial process. It is influenced by the environmental situation and depends to a high degree on the exchange of substances between the corpse and the environment. Mummification occurs at low humidity or frost; adipocere arises from lack of oxygen; incompletely putrefied corpses develop when there is no exchange of air or water between the corpse and the environment.
Kozawa, Takahiro; Onda, Ayumu; Yanagisawa, Kazumichi; Kishi, Akira; Masuda, Yasuaki
2011-03-15
The thermal decomposition process of zinc hydroxide chloride (ZHC), Zn₅(OH)₈Cl₂·H₂O, prepared by a hydrothermal slow-cooling method, has been investigated by simultaneous X-ray diffractometry and differential scanning calorimetry (XRD-DSC) and by thermogravimetric-differential thermal analysis (TG-DTA) in a humidity-controlled atmosphere. ZHC decomposed to ZnO through β-Zn(OH)Cl as the intermediate phase, leaving amorphous hydrated ZnCl₂. In humid N₂ with P(H₂O) = 4.5 and 10 kPa, the hydrolysis of residual ZnCl₂ was accelerated and the theoretical amount of ZnO was obtained at lower temperatures than in dry N₂, whereas in dry N₂ significant weight loss was caused by vaporization of residual ZnCl₂. ZnO formed by calcination in a stagnant air atmosphere had the same morphology as the original ZHC crystals and consisted of c-axis-oriented column-like particle arrays. On the other hand, preferred orientation of ZnO was inhibited in the case of calcination in 100% water vapor. A detailed thermal decomposition process of ZHC and the effect of water vapor on the crystal growth of ZnO are discussed. Highlights: the thermal decomposition of ZHC was examined at three different water vapor partial pressures; water vapor had no effect on the decomposition up to 230 °C; water vapor accelerated the decomposition of the residual ZnCl₂ in ZnO; without water vapor, a large amount of ZnCl₂ evaporated to form c-axis-oriented ZnO.
Modelling regulation of decomposition and related root/mycorrhizal processes in arctic tundra soils
Linkins, A.E.
1992-01-01
Since this was the final year of this project, principal activities were directed towards either collecting the data needed to complete existing incomplete data sets or writing manuscripts. Data sets on the Imnaviat Creek watershed basin are functionally complete, and data were finalized on cellulose mineralization and the impact of dust on soil organic carbon and phosphorus decomposition. Seven manuscripts were prepared and are briefly outlined.
The Spatial Variability of Organic Matter and Decomposition Processes at the Marsh Scale
NASA Astrophysics Data System (ADS)
Yousefi Lalimi, Fateme; Silvestri, Sonia; D'Alpaos, Andrea; Roner, Marcella; Marani, Marco
2017-04-01
Coastal salt marshes sequester carbon as they respond to the local Rate of Relative Sea Level Rise (RRSLR), and their accretion rate is governed by inorganic soil deposition, organic soil production, and soil organic matter (SOM) decomposition. It is generally recognized that SOM plays a central role in marsh vertical dynamics, but while the limited existing observations and modelling results suggest that SOM varies widely at the marsh scale, we lack systematic observations aimed at understanding how SOM production is modulated spatially as a result of biomass productivity and decomposition rate. Marsh topography and distance to the creek can affect biomass and SOM production, while a higher topographic elevation increases drainage, evapotranspiration, and aeration, thereby likely inducing higher SOM decomposition rates. Data collected in salt marshes in the northern Venice Lagoon (Italy) show that, even though plant productivity decreases in the lower areas of a marsh located farther away from channel edges, the relative contribution of organic soil production to the overall vertical soil accretion tends to remain constant as the distance from the channel increases. These observations suggest that the competing effects of biomass production and aeration/decomposition yield a contribution of organic soil to total accretion that remains approximately constant with distance from the creek, in spite of the declining plant productivity. Here we test this hypothesis using new observations of SOM and decomposition rates from marshes in North Carolina. The objective is to fill the gap in our understanding of the spatial distribution, at the marsh scale, of the organic and inorganic contributions to marsh accretion in response to RRSLR.
Bilayer linearized tensor renormalization group approach for thermal tensor networks
NASA Astrophysics Data System (ADS)
Dong, Yong-Liang; Chen, Lei; Liu, Yun-Jing; Li, Wei
2017-04-01
Thermal tensor networks (TTNs) constitute an efficient and versatile representation for quantum lattice models at finite temperatures. By Trotter-Suzuki decomposition, one obtains a (D+1)-dimensional TTN for a D-dimensional quantum system and then employs efficient renormalization group (RG) contractions to obtain the thermodynamic properties with high precision. The linearized tensor renormalization group (LTRG) method, which can be used to contract a TTN efficiently and calculate the thermodynamics, is briefly reviewed and then generalized to a bilayer form. We dub this bilayer algorithm LTRG++ and explore its performance on both finite- and infinite-size systems, finding the numerical accuracy significantly improved compared to the single-layer algorithm. Moreover, we show that the LTRG++ algorithm for an infinite-size system is in essence equivalent to the transfer-matrix renormalization group method, while reformulated in a tensor network language. As an application of LTRG++, we simulate an extended fermionic Hubbard model numerically, where the phase separation phenomenon, the ground-state phase diagram, as well as quantum criticality-enhanced magnetocaloric effects, are investigated.
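The Trotter-Suzuki step that turns a D-dimensional quantum model into a (D+1)-dimensional thermal tensor network trades the exact Boltzmann operator for a product of slice exponentials, with an error that shrinks as the number of slices grows. A toy two-site illustration (the Hamiltonian split below is hypothetical, chosen only so the two parts do not commute):

```python
import numpy as np
from scipy.linalg import expm

# Two-site toy Hamiltonian H = Hx + Hz with [Hx, Hz] != 0.
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.array([[1.0, 0.0], [0.0, -1.0]])
Hx = np.kron(sx, sx)
Hz = np.kron(sz, np.eye(2)) + np.kron(np.eye(2), sz)
H = Hx + Hz
beta = 1.0

def trotter(n):
    """First-order Trotter-Suzuki: (e^{-beta Hx/n} e^{-beta Hz/n})^n."""
    slc = expm(-beta * Hx / n) @ expm(-beta * Hz / n)
    return np.linalg.matrix_power(slc, n)

exact = expm(-beta * H)
errs = [np.linalg.norm(trotter(n) - exact) for n in (1, 10, 100)]
```

For the first-order splitting the error decays roughly as 1/n; in a TTN each slice becomes one layer of local tensors, so finer Trotter steps mean taller networks, which is where efficient RG contraction schemes such as LTRG earn their keep.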
The classical model for moment tensors
NASA Astrophysics Data System (ADS)
Tape, Walter; Tape, Carl
2013-12-01
A seismic moment tensor is a description of an earthquake source, but the description is indirect. The moment tensor describes seismic radiation rather than the actual physical process that initiates the radiation. A moment tensor `model' then ties the physical process to the moment tensor. The model is not unique, and the physical process is therefore not unique. In the classical moment tensor model, an earthquake arises from slip along a planar fault, but with the slip not necessarily in the plane of the fault. The model specifies the resulting moment tensor in terms of the slip vector, the fault normal vector and the Lamé elastic parameters, assuming isotropy. We review the classical model in the context of the fundamental lune. The lune is closely related to the space of moment tensors, and it provides a setting that is conceptually natural as well as pictorial. In addition to the classical model, we consider a crack plus double-couple model (CDC model) in which a moment tensor is regarded as the sum of a crack tensor and a double couple.
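The classical model's formula can be exercised directly: with unit fault normal n, slip vector d and isotropic Lamé parameters λ and μ, the moment tensor (up to a scalar moment) is M = λ(n·d)I + μ(n dᵀ + d nᵀ). A small numpy sketch with illustrative values, not tied to any particular event:

```python
import numpy as np

def classical_moment_tensor(n, d, lam, mu):
    """M = lam*(n.d)*I + mu*(n d^T + d n^T): slip d on a fault with unit
    normal n in an isotropic medium (Lame parameters lam, mu)."""
    n, d = np.asarray(n, float), np.asarray(d, float)
    return lam * np.dot(n, d) * np.eye(3) + mu * (np.outer(n, d) + np.outer(d, n))

lam, mu = 30e9, 30e9                      # Pa, illustrative crustal values
n = np.array([0.0, 0.0, 1.0])             # horizontal fault
d_inplane = np.array([1.0, 0.0, 0.0])     # slip in the fault plane

M_dc = classical_moment_tensor(n, d_inplane, lam, mu)
print(np.trace(M_dc))  # 0.0 -- in-plane slip (n.d = 0) gives a traceless double couple

# Slip with an opening component (n.d != 0) adds an isotropic part
d_open = np.array([1.0, 0.0, 0.5]) / np.linalg.norm([1.0, 0.0, 0.5])
M_open = classical_moment_tensor(n, d_open, lam, mu)
```

In-plane slip reproduces the familiar double couple; an opening component adds an isotropic part, which is the idea behind decompositions such as the crack plus double-couple model.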
Human action recognition based on point context tensor shape descriptor
NASA Astrophysics Data System (ADS)
Li, Jianjun; Mao, Xia; Chen, Lijiang; Wang, Lan
2017-07-01
Motion trajectory recognition is one of the most important means to determine the identity of a moving object. A compact and discriminative feature representation method can improve the trajectory recognition accuracy. This paper presents an efficient framework for action recognition using a three-dimensional skeleton kinematic joint model. First, we put forward a rotation-scale-translation-invariant shape descriptor based on point context (PC) and the normal vector of hypersurface to jointly characterize local motion and shape information. Meanwhile, an algorithm for extracting the key trajectory based on the confidence coefficient is proposed to reduce the randomness and computational complexity. Second, to decrease the eigenvalue decomposition time complexity, a tensor shape descriptor (TSD) based on PC that can globally capture the spatial layout and temporal order to preserve the spatial information of each frame is proposed. Then, a multilinear projection process is achieved by tensor dynamic time warping to map the TSD to a low-dimensional tensor subspace of the same size. Experimental results show that the proposed shape descriptor is effective and feasible, and the proposed approach obtains considerable performance improvement over the state-of-the-art approaches with respect to accuracy on a public action dataset.
Peng, Cong; Chai, Liyuan; Tang, Chongjian; Min, Xiaobo; Song, Yuxia; Duan, Chengshan; Yu, Cheng
2017-01-01
Heavy metals and ammonia are difficult to remove from wastewater, as they easily combine into refractory complexes. The struvite formation method (SFM) was applied for the complex decomposition and simultaneous removal of heavy metal and ammonia. The results indicated that ammonia deprivation by SFM was the key factor leading to the decomposition of the copper-ammonia complex ion. Ammonia was separated from solution as crystalline struvite, and the copper mainly co-precipitated as copper hydroxide together with struvite. Hydrogen bonding and electrostatic attraction were considered to be the main surface interactions between struvite and copper hydroxide. Hydrogen bonding was concluded to be the key factor leading to the co-precipitation. In addition, incorporation of copper ions into the struvite crystal also occurred during the treatment process.
Okayama, T; Fujii, M; Yamanoue, M
1991-01-01
The effect of cooking temperature and time on the percentage colour formation, nitrite decomposition and denaturation of sarcoplasmic proteins in processed meat products was investigated in detail. The colour forming percentage increased with a rise in temperature of heating, especially at 50-60°C (P < 0.05). The percentage nitrite decomposition was promoted by the retention time of cooking rather than by the cooking temperature (P < 0.05). The percentage of sarcoplasmic proteins denatured was enhanced by heating temperature in the range 50-80°C (especially at 50-60°C) (P < 0.05). The relationship between the percentage colour formation and the percentage of sarcoplasmic proteins denatured is discussed. The SDS-PAGE patterns of the heat-treated samples revealed the components of the sarcoplasmic proteins which had been denatured.
NASA Astrophysics Data System (ADS)
Prothin, Sebastien; Billard, Jean-Yves; Djeridi, Henda
2016-10-01
The purpose of the present study is to get a better understanding of the hydrodynamic instabilities of sheet cavities which develop along solid walls. The main objective is to highlight the spatial and temporal behavior of such a cavity when it develops on a NACA0015 foil at high Reynolds number. Experimental results show quasi-steady, periodic, bifurcation and aperiodic cavity behaviors corresponding to σ/2α values of 5.75, 5, 4.3 and 3.58. Robust mathematical methods of signal postprocessing (proper orthogonal decomposition and dynamic mode decomposition) were applied in order to emphasize the spatio-temporal nature of the flow. These techniques revealed the 3D effects due to re-entrant jet instabilities or to a propagating shock wave mechanism at the origin of the shedding process of the cavitation cloud.
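Of the two postprocessing methods mentioned, proper orthogonal decomposition is the simpler to sketch: POD modes are the left singular vectors of a snapshot matrix, ranked by energy. A toy numpy example on synthetic snapshot data (the planted structures stand in for measured velocity fields):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 2 * np.pi, 64)     # spatial grid
t = np.linspace(0, 10, 200)           # time instants

# Synthetic snapshots: two coherent spatial structures plus noise,
# one column per time instant (rows = spatial points).
snapshots = (np.outer(np.sin(x), np.cos(2 * t))
             + 0.3 * np.outer(np.sin(3 * x), np.sin(5 * t))
             + 0.01 * rng.standard_normal((64, 200)))

# POD modes = left singular vectors; singular values rank mode energy.
U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
energy = s**2 / np.sum(s**2)
# The two planted structures dominate: the first two modes carry ~all the energy
```

Dynamic mode decomposition adds a temporal linear-evolution model on top of the same snapshot data, which is what makes it suited to extracting oscillation frequencies of the shedding process.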
Multi-Zone Liquid Thrust Chamber Performance Code with Domain Decomposition for Parallel Processing
NASA Technical Reports Server (NTRS)
Navaz, Homayun K.
2002-01-01
-equation turbulence model, and two-phase flow. To overcome these limitations, the LTCP code is rewritten to include the multi-zone capability with domain decomposition that makes it suitable for parallel processing, i.e., enabling the code to run every zone or sub-domain on a separate processor. This can reduce the run time by a factor of 6 to 8, depending on the problem.
Atomic-batched tensor decomposed two-electron repulsion integrals
NASA Astrophysics Data System (ADS)
Schmitz, Gunnar; Madsen, Niels Kristian; Christiansen, Ove
2017-04-01
We present a new integral format for 4-index electron repulsion integrals, in which several strategies like the Resolution-of-the-Identity (RI) approximation and other more general tensor-decomposition techniques are combined with an atomic batching scheme. The 3-index RI integral tensor is divided into sub-tensors defined by atom pairs, on which we perform an accelerated decomposition to the canonical product (CP) format. In a first step, the RI integrals are decomposed to a high-rank CP-like format by repeated singular value decompositions, followed by a rank reduction which uses a Tucker decomposition as an intermediate step to lower the prefactor of the algorithm. After decomposing the RI sub-tensors (within the Coulomb metric), they can be reassembled to the full decomposed tensor (RC approach), or the atomic batched format can be maintained (ABC approach). In the first case, the integrals are very similar to the well-known tensor hypercontraction integral format, which has gained some attention in recent years since it allows for quartic-scaling implementations of MP2 and some coupled cluster methods. At the MP2 level, the RC and ABC approaches are compared concerning efficiency and storage requirements. Furthermore, the overall accuracy of this approach is assessed. Initial test calculations show good accuracy and that the method is not limited to small systems.
NASA Astrophysics Data System (ADS)
Gurau, Razvan
2016-09-01
This article is a preface to the SIGMA special issue ''Tensor Models, Formalism and Applications'', http://www.emis.de/journals/SIGMA/Tensor_Models.html. The issue is a collection of eight excellent, up-to-date reviews on random tensor models. The reviews combine pedagogical introductions meant for a general audience with presentations of the most recent developments in the field. This preface aims to give a condensed panoramic overview of random tensors as the natural generalization of random matrices to higher dimensions.
The processing of rotor startup signals based on empirical mode decomposition
NASA Astrophysics Data System (ADS)
Gai, Guanghong
2006-01-01
In this paper, we applied the empirical mode decomposition (EMD) method to analyse rotor startup signals, which are non-stationary and contain a lot of additional information beyond that of stationary running signals. The methodology developed in this paper decomposes the original startup signals into intrinsic oscillation modes, or intrinsic mode functions (IMFs). Then, we obtained the rotating-frequency components for Bode diagram plotting from the corresponding IMFs, according to the characteristics of the rotor system. The method can obtain a precise critical speed without complex hardware support. The low-frequency components were extracted from these IMFs in the vertical and horizontal directions. Utilising these components, we constructed a drift locus of the rotor revolution centre, which provides significant information for fault diagnosis of rotating machinery. Also, we showed that the EMD method is more precise than a Fourier filter for the extraction of the low-frequency component.
1981-11-12
nitrotoluenes actually represent surface-catalyzed reactions. Preliminary qualitative results for pyrolysis of ortho-nitrotoluene in the absence of hot...quantitative validity. LPHP studies of azoisopropane decomposition, chosen as a radical-forming test reaction, show the accepted literature parameters to...systematic errors or by rate control exerted by secondary reactions. (2) Support from these VLPP studies for the conclusion that some previous kinetic
A linearly approximated iterative Gaussian decomposition method for waveform LiDAR processing
NASA Astrophysics Data System (ADS)
Mountrakis, Giorgos; Li, Yuguang
2017-07-01
Full-waveform LiDAR (FWL) decomposition results often act as the basis for key LiDAR-derived products, for example canopy height, biomass and carbon pool estimation, leaf area index calculation and under-canopy detection. To date, the prevailing method for FWL product creation is Gaussian Decomposition (GD) based on a non-linear Levenberg-Marquardt (LM) optimization for Gaussian node parameter estimation. GD follows a "greedy" approach that may leave weak nodes undetected, merge multiple nodes into one or separate a noisy single node into multiple ones. In this manuscript, we propose an alternative decomposition method called Linearly Approximated Iterative Gaussian Decomposition (LAIGD). The novelty of the LAIGD method is that it follows a multi-step "slow-and-steady" iterative structure, where new Gaussian nodes are quickly discovered and adjusted using a linear fitting technique before they are forwarded for non-linear optimization. Two experiments were conducted, one using real full-waveform data from NASA's land, vegetation, and ice sensor (LVIS) and another using synthetic data containing different numbers of nodes and degrees of overlap to assess performance at variable signal complexity. LVIS data revealed considerable improvements in RMSE (44.8% lower), RSE (56.3% lower) and rRMSE (74.3% lower) values compared to the benchmark GD method. These results were further confirmed with the synthetic data. Furthermore, the proposed multi-step method cuts execution times in half, an important consideration as there are plans for global coverage with the upcoming Global Ecosystem Dynamics Investigation LiDAR sensor on the International Space Station.
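The "linear fit before non-linear refinement" idea can be illustrated with the classic log-parabola trick: the logarithm of a Gaussian node is quadratic in time, so amplitude, centre and width follow from an ordinary polynomial fit. A hedged numpy sketch on a clean synthetic return (an illustration of linear Gaussian fitting, not the paper's LAIGD implementation):

```python
import numpy as np

# Synthetic single-node waveform: amplitude 5, centre 42, width 6 (arbitrary units)
t = np.linspace(0, 100, 400)
amp, mu, sigma = 5.0, 42.0, 6.0
waveform = amp * np.exp(-0.5 * ((t - mu) / sigma) ** 2)

# Fit a parabola to log(waveform) where the signal is well above zero:
# log y = log A - (t - mu)^2 / (2 sigma^2) is quadratic in t.
mask = waveform > 0.1 * amp
c2, c1, c0 = np.polyfit(t[mask], np.log(waveform[mask]), 2)

# Recover the Gaussian parameters from the parabola coefficients
sigma_est = np.sqrt(-1.0 / (2.0 * c2))
mu_est = -c1 / (2.0 * c2)
amp_est = np.exp(c0 - c1**2 / (4.0 * c2))
```

On noisy, overlapping nodes such a linear estimate is only a starting point, which is why a non-linear (e.g. LM) refinement stage still follows.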
In Vivo Generalized Diffusion Tensor Imaging (GDTI) Using Higher-Order Tensors (HOT)
Liu, Chunlei; Mang, Sarah C.; Moseley, Michael E.
2009-01-01
Generalized diffusion tensor imaging (GDTI) using higher-order tensor statistics (HOT) generalizes the technique of diffusion tensor imaging (DTI) by including the effect of non-Gaussian diffusion on the signal of magnetic resonance imaging (MRI). In GDTI-HOT, the effect of non-Gaussian diffusion is characterized by higher-order tensor statistics (i.e. the cumulant tensors or the moment tensors), such as the covariance matrix (the second-order cumulant tensor), the skewness tensor (the third-order cumulant tensor) and the kurtosis tensor (the fourth-order cumulant tensor). Previously, Monte Carlo simulations have been applied to verify the validity of this technique in reconstructing complicated fiber structures. However, no in vivo implementation of GDTI-HOT has been reported. The primary goal of this study is to establish GDTI-HOT as a feasible in vivo technique for imaging non-Gaussian diffusion. We show that the probability distribution function (PDF) of the molecular diffusion process can be measured in vivo with GDTI-HOT and visualized with 3D glyphs. By comparing GDTI-HOT to fiber structures that are revealed by the highest-resolution DWI possible in vivo, we show that GDTI-HOT can accurately predict multiple fiber orientations within one white matter voxel. Furthermore, through bootstrap analysis we demonstrate that in vivo measurement of HOT elements is reproducible, with a small statistical variation that is similar to that of DTI. PMID:19953513
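The lower-order cumulant tensors named above have straightforward sample estimators: the second-order cumulant is the covariance matrix, and for centred data the third-order cumulant is the tensor of third central moments. A toy numpy sketch on synthetic Gaussian samples, for which cumulants beyond second order should vanish (the data are illustrative, not MRI signals):

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic 3D "displacement" samples drawn from a Gaussian, so the
# third-order cumulant tensor should be ~0 (diffusion would be Gaussian).
X = rng.multivariate_normal(mean=np.zeros(3),
                            cov=np.diag([3.0, 1.0, 0.5]),
                            size=200_000)

Xc = X - X.mean(axis=0)                                   # centre the samples
cov = Xc.T @ Xc / len(Xc)                                 # 2nd-order cumulant
skew = np.einsum('ni,nj,nk->ijk', Xc, Xc, Xc) / len(Xc)   # 3rd-order cumulant
```

A nonzero skewness or kurtosis tensor estimated this way is exactly the kind of non-Gaussian signature GDTI-HOT is designed to capture.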
Wang, Xiao-Yan; Miao, Yuan; Yu, Shuo; Chen, Xiao-Yong; Schmid, Bernhard
2014-03-01
Following studies that showed negative effects of species loss on ecosystem functioning, newer studies have started to investigate whether similar consequences could result from reductions of genetic diversity within species. We tested the influence of genotypic richness and dissimilarity (plots containing one, three, six or 12 genotypes) in stands of the invasive plant Solidago canadensis in China on the decomposition of its leaf litter and associated soil animals over five monthly time intervals. We found that the logarithm of genotypic richness was positively linearly related to mass loss of C, N and P from the litter and to richness and abundance of soil animals on the litter samples. The mixing proportion of litter from two sites, but not genotypic dissimilarity of mixtures, had additional effects on measured variables. The litter diversity effects on soil animals were particularly strong under the most stressful conditions of hot weather in July: at this time richness and abundance of soil animals were higher in 12-genotype litter mixtures than even in the highest corresponding one-genotype litter. The litter diversity effects on decomposition were in part mediated by soil animals: the abundance of Acarina, when used as covariate in the analysis, fully explained the litter diversity effects on mass loss of N and P. Overall, our study shows that high genotypic richness of S. canadensis leaf litter positively affects richness and abundance of soil animals, which in turn accelerate litter decomposition and P release from litter.
NASA Technical Reports Server (NTRS)
Sirlin, Samuel W.
1993-01-01
Eight-page report describes systems of notation used most commonly to represent tensors of various ranks, with emphasis on tensors in Cartesian coordinate systems. Serves as introductory or refresher text for scientists, engineers, and others familiar with basic concepts of coordinate systems, vectors, and partial derivatives. Indicial tensor, vector, dyadic, and matrix notations, and relationships among them described.
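The indicial notation described in the report maps one-to-one onto array code; a summation over a repeated index is exactly an einsum contraction. A brief illustration with arbitrary arrays:

```python
import numpy as np

A = np.arange(9.0).reshape(3, 3)
b = np.array([1.0, 2.0, 3.0])

# Indicial: c_i = A_ij b_j  (sum over the repeated index j)
c = np.einsum('ij,j->i', A, b)

# Dyadic: (b (x) b)_ij = b_i b_j  (no repeated index, so no summation)
dyad = np.einsum('i,j->ij', b, b)

# Matrix/vector notation computes the same contractions
c_matrix = A @ b
dyad_matrix = np.outer(b, b)
```

The same correspondence extends to higher-rank tensors, where matrix notation runs out of room but indicial notation (and einsum) does not.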
Souto, X C; Gonzales, L; Reigosa, M J
1994-11-01
The development of toxicity produced by vegetable litter of four forest species (Quercus robur L., Pinus radiata D.Don., Eucalyptus globulus Labill., and Acacia melanoxylon R.Br.) was studied during the decomposition process in each of the soils where the species were found. The toxicity of the extracts was measured by the effects produced on germination and growth of Lactuca sativa L. var. Great Lakes seeds. The phenolic composition of the leaves of the four species was also studied using high-performance liquid chromatographic analysis (HPLC). It was verified that toxicity was clearly reflected in the first stages of leaf decomposition in E. globulus and A. melanoxylon, due to phytotoxic compounds liberated by their litter. At the end of half a year of decomposition, inhibition due to the vegetable material was not observed, but the soils associated with these two species appeared to be responsible for the toxic effects. On the other hand, the phenolic profiles are quite different among the four species, and greater complexity in the two toxic species (E. globulus and A. melanoxylon) was observed.
Block term decomposition for modelling epileptic seizures
NASA Astrophysics Data System (ADS)
Hunyadi, Borbála; Camps, Daan; Sorber, Laurent; Paesschen, Wim Van; Vos, Maarten De; Huffel, Sabine Van; Lathauwer, Lieven De
2014-12-01
Recordings of neural activity, such as EEG, are an inherent mixture of different ongoing brain processes as well as artefacts and are typically characterised by low signal-to-noise ratio. Moreover, EEG datasets are often inherently multidimensional, comprising information in time, along different channels, subjects, trials, etc. Additional information may be conveyed by expanding the signal into even more dimensions, e.g. incorporating spectral features via a wavelet transform. The underlying sources might show differences in each of these modes. Therefore, tensor-based blind source separation techniques, which can extract the sources of interest from such multiway arrays while simultaneously exploiting the signal characteristics in all dimensions, have gained increasing interest. Canonical polyadic decomposition (CPD) has been successfully used to extract epileptic seizure activity from wavelet-transformed EEG data (Bioinformatics 23(13):i10-i18, 2007; NeuroImage 37:844-854, 2007), where each source is described by a rank-1 tensor, i.e. by the combination of one particular temporal, spectral and spatial signature. However, in certain scenarios, where the seizure pattern is nonstationary, such a trilinear signal model is insufficient. Here, we present the application of a recently introduced technique, called block term decomposition (BTD), to separate EEG tensors into rank-(Lr, Lr, 1) terms, allowing us to model more variability in the data than would be possible with CPD. In a simulation study, we investigate the robustness of BTD against noise and different choices of model parameters. Furthermore, we show various real EEG recordings where BTD outperforms CPD in capturing complex seizure characteristics.
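The trilinear CPD model contrasted with BTD here is easy to write down: the tensor is a sum of R rank-1 terms, each the outer product of one temporal, one spectral and one spatial signature. A minimal numpy construction with random placeholder signatures (in a BTD, each term would instead be rank-(Lr, Lr, 1)):

```python
import numpy as np

rng = np.random.default_rng(2)
n_time, n_freq, n_chan, R = 50, 20, 16, 3

# R sources, each with one temporal, spectral and spatial signature.
A = rng.standard_normal((n_time, R))   # temporal signatures
B = rng.standard_normal((n_freq, R))   # spectral signatures
C = rng.standard_normal((n_chan, R))   # spatial signatures

# CPD model: T[i,j,k] = sum_r A[i,r] * B[j,r] * C[k,r]
T = np.einsum('ir,jr,kr->ijk', A, B, C)

# Each term is rank 1: its mode-1 unfolding is an outer product.
term0 = np.einsum('i,j,k->ijk', A[:, 0], B[:, 0], C[:, 0])
rank_term0 = np.linalg.matrix_rank(term0.reshape(n_time, -1))
```

Fitting such a model to data (rather than building it from known factors) is the blind source separation step, typically done by alternating least squares.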
Geodesic-loxodromes for diffusion tensor interpolation and difference measurement.
Kindlmann, Gordon; Estépar, Raúl San José; Niethammer, Marc; Haker, Steven; Westin, Carl-Fredrik
2007-01-01
In algorithms for processing diffusion tensor images, two common ingredients are interpolating tensors, and measuring the distance between them. We propose a new class of interpolation paths for tensors, termed geodesic-loxodromes, which explicitly preserve clinically important tensor attributes, such as mean diffusivity or fractional anisotropy, while using basic differential geometry to interpolate tensor orientation. This contrasts with previous Riemannian and Log-Euclidean methods that preserve the determinant. Path integrals of tangents of geodesic-loxodromes generate novel measures of over-all difference between two tensors, and of difference in shape and in orientation.
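The determinant-interpolation property of Log-Euclidean methods that the authors contrast with can be checked numerically: along a Log-Euclidean path the determinant interpolates geometrically, since det(exp M) = exp(tr M). A sketch using eigendecomposition-based matrix log/exp for symmetric positive-definite tensors (the tensor values are illustrative):

```python
import numpy as np

def spd_log(S):
    """Matrix logarithm of a symmetric positive-definite matrix."""
    w, V = np.linalg.eigh(S)
    return (V * np.log(w)) @ V.T

def spd_exp(S):
    """Matrix exponential of a symmetric matrix."""
    w, V = np.linalg.eigh(S)
    return (V * np.exp(w)) @ V.T

# Two synthetic diffusion tensors (units arbitrary).
D1 = np.diag([3.0, 1.0, 1.0])
D2 = np.array([[2.0, 0.5, 0.0],
               [0.5, 1.5, 0.0],
               [0.0, 0.0, 1.0]])

# Log-Euclidean midpoint: exp of the average of the logs.
mid = spd_exp(0.5 * (spd_log(D1) + spd_log(D2)))

# The determinant interpolates geometrically along this path ...
geo_det = np.sqrt(np.linalg.det(D1) * np.linalg.det(D2))
# ... but scalars such as mean diffusivity (trace/3) are generally not preserved,
# which is the motivation for geodesic-loxodromes.
md_mid = np.trace(mid) / 3.0
```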
Moment tensor mechanisms from Iberia
NASA Astrophysics Data System (ADS)
Stich, D.; Morales, J.
2003-12-01
New moment tensor solutions are presented for small and moderate earthquakes in Spain, Portugal and the westernmost Mediterranean Sea for the period from 2002 to present. Moment tensor inversion, to estimate focal mechanism, depth and magnitude, is applied at the Instituto Andaluz de Geofísica (IAG) in a routine manner to regional earthquakes with local magnitude larger than or equal to 3.5. Recent improvements of broadband network coverage contribute to relatively high rates of success: since the beginning of 2002, we could obtain valuable solutions, in the sense that moment tensor synthetic waveforms fit adequately the main characteristics of the observed seismograms, for about 50% of all events of the initial selection. Results are available on-line at http://www.ugr.es/~iag/tensor/. To date, the IAG moment tensor catalogue contains 90 solutions since 1984 and gives a relatively detailed picture of seismotectonics in the Ibero-Maghrebian region, covering also low-seismicity areas like intraplate Iberia. Solutions are concentrated in southern Spain and the Alboran Sea along the diffuse African-Eurasian plate boundary. These solutions reveal characteristics of the transition between the reverse faulting regime in Algeria and predominantly normal faulting on the Iberian Peninsula. Further, we discuss the available mechanisms for intermediate-depth events, related to subcrustal tectonic processes at the plate contact.
Mohd Nasir, Norlirubayah; Teo Ming, Ting; Ahmadun, Fakhru'l-Razi; Sobri, Shafreeza
2010-01-01
This research studied decomposition and biodegradability enhancement of textile wastewater using a combination of electron beam irradiation and an activated sludge process. The purposes of this research are to remove pollutants through decomposition and to enhance the biodegradability of textile wastewater. The wastewater is treated with electron beam irradiation as a pre-treatment before undergoing an activated sludge process. For non-irradiated wastewater, the COD removal was found to be between 70% and 79% after the activated sludge process. The COD removal efficiency increased to 94% after irradiation of the treated effluent at a dose of 50 kGy. Meanwhile, the BOD(5) removal efficiencies of non-irradiated and irradiated textile wastewater were between 80 and 87%, and 82 and 99.2%, respectively. The maximum BOD(5) removal efficiency, 99.2%, was achieved at day 1 (HRT 5 days) of the process for irradiated textile wastewater. The biodegradability ratio of non-irradiated wastewater was between 0.34 and 0.61, while that of irradiated wastewater increased to between 0.87 and 0.96. The biodegradability enhancement of textile wastewater increased with increasing dose. Therefore, electron beam irradiation holds great promise for removing pollutants and enhancing the biodegradability of textile wastewater.
Shigeri, Yasushi; Matsui, Tatsunobu; Watanabe, Kunihiko
2009-11-01
In order to develop a practical method for the decomposition of intact chicken feathers, a moderate thermophile strain, Meiothermus ruber H328, having strong keratinolytic activity, was used in a bio-type garbage-treatment machine working with an acidulocomposting process. The addition of strain H328 cells (15 g) combined with acidulocomposting in the garbage machine resulted in 70% degradation of intact chicken feathers (30 g) within 14 d. This degradation efficiency is comparable to a previous result employing the strain as a single bacterium in flask culture, and it indicates that strain H328 can promote intact feather degradation activity in a garbage machine currently on the market.
Tensor hypercontraction. II. Least-squares renormalization
NASA Astrophysics Data System (ADS)
Parrish, Robert M.; Hohenstein, Edward G.; Martínez, Todd J.; Sherrill, C. David
2012-12-01
The least-squares tensor hypercontraction (LS-THC) representation for the electron repulsion integral (ERI) tensor is presented. Recently, we developed the generic tensor hypercontraction (THC) ansatz, which represents the fourth-order ERI tensor as a product of five second-order tensors [E. G. Hohenstein, R. M. Parrish, and T. J. Martínez, J. Chem. Phys. 137, 044103 (2012)], 10.1063/1.4732310. Our initial algorithm for the generation of the THC factors involved a two-sided invocation of overlap-metric density fitting, followed by a PARAFAC decomposition, and is denoted PARAFAC tensor hypercontraction (PF-THC). LS-THC supersedes PF-THC by producing the THC factors through a least-squares renormalization of a spatial quadrature over the otherwise singular 1/r12 operator. Remarkably, an analytical and simple formula for the LS-THC factors exists. Using this formula, the factors may be generated with O(N^5) effort if exact integrals are decomposed, or O(N^4) effort if the decomposition is applied to density-fitted integrals, using any choice of density fitting metric. The accuracy of LS-THC is explored for a range of systems using both conventional and density-fitted integrals in the context of MP2. The grid fitting error is found to be negligible even for extremely sparse spatial quadrature grids. For the case of density-fitted integrals, the additional error incurred by the grid fitting step is generally markedly smaller than the underlying Coulomb-metric density fitting error. The present results, coupled with our previously published factorizations of MP2 and MP3, provide an efficient, robust O(N^4) approach to both methods. Moreover, LS-THC is generally applicable to many other methods in quantum chemistry.
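The THC ansatz referenced here factors the fourth-order ERI tensor as (pq|rs) ≈ Σ_PQ X_pP X_qP Z_PQ X_rQ X_sQ, so contractions can proceed factor-by-factor without materializing the full tensor. A synthetic-data numpy sketch (random factors standing in for actual integrals and quadrature grids):

```python
import numpy as np

rng = np.random.default_rng(3)
n, P = 8, 30    # orbital and grid-point dimensions (synthetic)

X = rng.standard_normal((n, P))   # collocation factors
Z = rng.standard_normal((P, P))
Z = 0.5 * (Z + Z.T)               # symmetric core matrix

# Full tensor from its THC factors: (pq|rs) = sum_PQ X_pP X_qP Z_PQ X_rQ X_sQ
eri = np.einsum('pP,qP,PQ,rQ,sQ->pqrs', X, X, Z, X, X)

# The point of THC: contract factor-by-factor instead of touching the
# O(N^4)-sized tensor. E.g. the Coulomb-like contraction sum_rs (pq|rs) D_rs:
D = rng.standard_normal((n, n))
D = 0.5 * (D + D.T)
J_direct = np.einsum('pqrs,rs->pq', eri, D)
J_thc = np.einsum('pP,qP,PQ,rQ,sQ,rs->pq', X, X, Z, X, X, D, optimize=True)
```

With `optimize=True`, einsum finds a pairwise contraction order, which is what keeps the factored evaluation at low polynomial cost.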
Solar radiation influence on the decomposition process of diclofenac in surface waters.
Bartels, Peter; von Tümpling, Wolf
2007-03-01
Diclofenac can be detected in the surface water of many rivers with human impacts worldwide. The observed decrease of the diclofenac concentration in waters and the formation of its photochemical transformation products under the impact of natural irradiation during one to 16 days are explained in this article. In semi-natural laboratory tests and in a field experiment it was shown that sunlight stimulates the decomposition of diclofenac in surface waters. During one day of intensive solar radiation in a central European summer, up to 83% of the diclofenac in the surface layer of the water (0 to 5 cm) decomposed, as determined in laboratory exposure experiments. After two weeks in a field experiment, diclofenac was no longer detectable in the water surface layer (limit of quantification: 5 ng/L). At a water depth of 50 cm, 96% of the initial concentration was degraded within two weeks, while at 100 cm depth two-thirds of the initial diclofenac concentration remained. With the decomposition, stable and meta-stable photolysis products were formed and observed by UV detection. In addition, the chemical structures of these products were determined. Three transformation products not previously described in the literature were identified and quantified with GC-MS.
Toluene decomposition performance and NOx by-product formation during a DBD-catalyst process.
Guo, Yufang; Liao, Xiaobin; Fu, Mingli; Huang, Haibao; Ye, Daiqi
2015-02-01
Characteristics of toluene decomposition and formation of nitrogen oxide (NOx) by-products were investigated in a dielectric barrier discharge (DBD) reactor with/without catalyst at room temperature and atmospheric pressure. Four kinds of metal oxides, i.e., manganese oxide (MnOx), iron oxide (FeOx), cobalt oxide (CoOx) and copper oxide (CuO), supported on Al2O3/nickel foam, were used as catalysts. It was found that introducing catalysts could improve toluene removal efficiency, promote decomposition of by-product ozone and enhance CO2 selectivity. In addition, NOx was suppressed with the decrease of specific energy density (SED) and the increase of humidity, gas flow rate and toluene concentration, or catalyst introduction. Among the four kinds of catalysts, the CuO catalyst showed the best performance in NOx suppression. The MnOx catalyst exhibited the lowest concentration of O3 and highest CO2 selectivity but the highest concentration of NOx. A possible pathway for NOx production in DBD was discussed. The contributions of oxygen active species and hydroxyl radicals are dominant in NOx suppression.
Chin, Sungmin; Jurng, Jongsoo; Lee, Jae-Heon; Moon, Seung-Jae
2009-05-01
This study examined the catalytic oxidation of 1,2-dichlorobenzene (1,2-DCB) on V2O5/TiO2 nanoparticles. The V2O5/TiO2 nanoparticles were synthesized by the thermal decomposition of vanadium oxytripropoxide and titanium tetraisopropoxide. The effects of the synthesis conditions, such as the synthesis temperature and precursor heating temperature, were investigated. The specific surface areas of the V2O5/TiO2 nanoparticles increased with increasing synthesis temperature and decreasing precursor heating temperature. The catalytic oxidation rate of the V2O5/TiO2 catalyst formed by the thermal decomposition process at catalytic reaction temperatures of 150 and 200 degrees C was 46% and 95%, respectively. It was concluded that V2O5/TiO2 catalysts synthesized by a thermal decomposition process show good performance for 1,2-DCB decomposition at lower temperatures.
3D reconstruction of tensors and vectors
Defrise, Michel; Gullberg, Grant T.
2005-02-17
Here we have developed formulations for the reconstruction of 3D tensor fields from planar (Radon) and line-integral (X-ray) projections of 3D vector and tensor fields. Much of the motivation for this work is the potential application of MRI to perform diffusion tensor tomography. The goal is to develop a theory for reconstruction from both Radon planar and X-ray or line-integral projections because of the flexibility of MRI to obtain both of these types of projections in 3D. The development presented here for the linear tensor tomography problem provides insight into the structure of the nonlinear MRI diffusion tensor inverse problem. A particular application of tensor imaging in MRI is the potential use of cardiac diffusion tensor tomography for determining in vivo cardiac fiber structure. One difficulty in the cardiac application is the motion of the heart. This presents a need for developing future theory for tensor tomography in a motion field, which means developing a better understanding of the MRI signal for diffusion processes in deforming media. The techniques developed may allow the application of MRI tensor tomography to the study of the structure of fiber tracts in the brain, atherosclerotic plaque, and spine, in addition to fiber structure in the heart. However, the relations presented are also applicable to other fields in medical imaging, such as diffraction tomography using ultrasound. The mathematics presented can also be extended to the exponential Radon transform of tensor fields and to other geometric acquisitions such as cone beam tomography of tensor fields.
NASA Astrophysics Data System (ADS)
Okamoto, T.; Takenaka, H.; Hara, T.; Nakamura, T.; Aoki, T.
2014-12-01
We analyze the "seismic" rupture process of the March 11, 2011 Tohoku-Oki earthquake (GCMT Mw9.1) by using a non-linear multi-time-window waveform inversion method. We incorporate the effect of the near-source laterally heterogeneous structure on the synthetic Green's tensor waveforms; otherwise the analysis may result in erroneous solutions [1]. To increase the resolution we use teleseismic and strong-motion seismograms jointly, because the one-sided distribution of strong-motion stations may cause reduced resolution near the trench axis [2]. We use a 2.5D FDM [3] for teleseismic P-waves and a full 3D FDM that incorporates topography, the oceanic water layer, 3D heterogeneity and attenuation for strong motions [4]. We apply multi-GPU acceleration by using the TSUBAME supercomputer at the Tokyo Institute of Technology [5]. We "validated" the Green's tensor waveforms with a point-source moment tensor inversion analysis for a small (Mw5.8) shallow event: we confirm the observed waveforms are reproduced well by the synthetics. The inferred slip distribution using the 2.5D and 3D Green's functions has large slips (max. 37 m) near the hypocenter and small slips near the trench (figure). Also, an isolated slip region is identified close to Fukushima prefecture. These features are similar to those obtained by our preliminary study [4]. The landward large slips and trenchward small slips have also been reported by [2]. It is remarkable that we confirmed these features by using data-validated Green's functions. On the other hand, very large slips are inferred close to the trench when we apply "1D" Green's functions that do not incorporate the lateral heterogeneity. Our result suggests the trenchward large deformation that caused large tsunamis did not radiate strong seismic waves. Very slow slips (e.g., the tsunami earthquake), delayed slips and anelastic deformation are among the candidate physical processes for this deformation. [1] Okamoto and Takenaka, EPS, 61, e17-e20, 2009
van der Wal, Annemieke; Geydan, Thomas D; Kuyper, Thomas W; de Boer, Wietse
2013-07-01
Filamentous fungi are critical to the decomposition of terrestrial organic matter and, consequently, to the global carbon cycle. In particular, their contribution to the degradation of recalcitrant lignocellulose complexes has been widely studied. In this review, we focus on the functioning of terrestrial fungal decomposers and examine the factors that affect their activities and community dynamics. In relation to this, impacts of global warming and increased N deposition are discussed. We also address the contribution of fungal decomposer studies to the development of general community ecological concepts such as diversity-functioning relationships, succession, priority effects and home-field advantage. Finally, we indicate several research directions that will lead to a more complete understanding of the ecological roles of terrestrial decomposer fungi, such as their importance in turnover of rhizodeposits, the consequences of interactions with other organisms and niche differentiation.
Species-specific effects of elevated ozone on wetland plants and decomposition processes.
Williamson, Jennifer; Mills, Gina; Freeman, Chris
2010-05-01
Seven species from two contrasting wetlands, an upland bog and a lowland rich fen in North Wales, UK, were exposed to elevated ozone (150 ppb for 5 days and 20 ppb for 2 days per week) or low ozone (20 ppb) for four weeks in solardomes. The rich fen species were: Molinia caerulea, Juncus subnodulosus, Potentilla erecta and Hydrocotyle vulgaris, and the bog species were: Carex echinata, Potentilla erecta and Festuca rubra. Senescence significantly increased under elevated ozone in all seven species, but only Molinia caerulea showed a reduction in biomass under elevated ozone. Decomposition rates of plants exposed to elevated ozone, as measured by carbon dioxide efflux from dried plant material inoculated with peat slurry, increased for Potentilla erecta, together with higher hydrolytic enzyme activities. In contrast, a decrease in enzyme activities and a non-significant decrease in carbon dioxide efflux occurred in the grass, sedge and rush species.
Hecht, Erin E.; Gutman, David A.; Preuss, Todd M.; Sanchez, Mar M.; Parr, Lisa A.; Rilling, James K.
2013-01-01
Social learning varies among primate species. Macaques only copy the product of observed actions, or emulate, while humans and chimpanzees also copy the process, or imitate. In humans, imitation is linked to the mirror system. Here we compare mirror system connectivity across these species using diffusion tensor imaging. In macaques and chimpanzees, the preponderance of this circuitry consists of frontal–temporal connections via the extreme/external capsules. In contrast, humans have more substantial temporal–parietal and frontal–parietal connections via the middle/inferior longitudinal fasciculi and the third branch of the superior longitudinal fasciculus. In chimpanzees and humans, but not in macaques, this circuitry includes connections with inferior temporal cortex. In humans alone, connections with superior parietal cortex were also detected. We suggest a model linking species differences in mirror system connectivity and responsivity with species differences in behavior, including adaptations for imitation and social learning of tool use. PMID:22539611
Randomized interpolative decomposition of separated representations
NASA Astrophysics Data System (ADS)
Biagioni, David J.; Beylkin, Daniel; Beylkin, Gregory
2015-01-01
We introduce an algorithm to compute tensor interpolative decomposition (dubbed CTD-ID) for the reduction of the separation rank of Canonical Tensor Decompositions (CTDs). Tensor ID selects, for a user-defined accuracy ɛ, a near optimal subset of terms of a CTD to represent the remaining terms via a linear combination of the selected terms. CTD-ID can be used as an alternative to or in combination with the Alternating Least Squares (ALS) algorithm. We present examples of its use within a convergent iteration to compute inverse operators in high dimensions. We also briefly discuss the spectral norm as a computational alternative to the Frobenius norm in estimating approximation errors of tensor ID. We reduce the problem of finding tensor IDs to that of constructing interpolative decompositions of certain matrices. These matrices are generated via randomized projection of the terms of the given tensor. We provide cost estimates and several examples of the new approach to the reduction of separation rank.
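The reduction the abstract describes, from tensor ID to interpolative decomposition of matrices generated by randomized projection, can be illustrated in the matrix case. The following is a minimal sketch, not the authors' CTD-ID implementation: a Gaussian sketch compresses the rows, a greedy pivoted Gram-Schmidt pass on the small sketch selects k columns, and least squares expresses all columns in terms of the selected ones.

```python
import numpy as np

def pivoted_columns(Y, k):
    # Greedy pivoted Gram-Schmidt: repeatedly take the column with the
    # largest residual norm, then orthogonalize all columns against it.
    Y = Y.astype(float).copy()
    cols = []
    for _ in range(k):
        j = int(np.argmax(np.linalg.norm(Y, axis=0)))
        cols.append(j)
        q = Y[:, j] / np.linalg.norm(Y[:, j])
        Y -= np.outer(q, q @ Y)
    return sorted(cols)

def randomized_id(A, k, oversample=5, seed=0):
    # Sketch the rows with a Gaussian matrix, select k columns on the small
    # sketch, then fit coefficients P so that A ≈ A[:, cols] @ P.
    rng = np.random.default_rng(seed)
    G = rng.standard_normal((k + oversample, A.shape[0]))
    cols = pivoted_columns(G @ A, k)
    P, *_ = np.linalg.lstsq(A[:, cols], A, rcond=None)
    return cols, P

# Demo: an exactly rank-3 matrix is reproduced to round-off from 3 columns.
rng = np.random.default_rng(1)
A = rng.standard_normal((50, 3)) @ rng.standard_normal((3, 40))
cols, P = randomized_id(A, k=3)
err = np.linalg.norm(A - A[:, cols] @ P) / np.linalg.norm(A)
print(err < 1e-8)
```

In the CTD setting the columns play the role of CTD terms, so the selected subset represents the remaining terms by a linear combination, exactly as described above.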
Moran, S.C.
2003-01-01
The volcanological significance of seismicity within Katmai National Park has been debated since the first seismograph was installed in 1963, in part because Katmai seismicity consists almost entirely of high-frequency earthquakes that can be caused by a wide range of processes. I investigate this issue by determining 140 well-constrained first-motion fault-plane solutions for shallow (depth < 9 km) earthquakes occurring between 1995 and 2001 and inverting these solutions for the stress tensor in different regions within the park. Earthquakes removed by several kilometers from the volcanic axis occur in a stress field characterized by horizontally oriented σ1 and σ3 axes, with σ1 rotated slightly (12°) relative to the NUVEL-1A subduction vector, indicating that these earthquakes are occurring in response to regional tectonic forces. On the other hand, stress tensors for earthquake clusters beneath several Katmai cluster volcanoes have vertically oriented σ1 axes, indicating that these events are occurring in response to local, not regional, processes. At Martin-Mageik, vertically oriented σ1 is most consistent with failure under edifice loading conditions in conjunction with localized pore pressure increases associated with hydrothermal circulation cells. At Trident-Novarupta, it is consistent with a number of possible models, including occurrence along fractures formed during the 1912 eruption that now serve as horizontal conduits for migrating fluids and/or volatiles from nearby degassing and cooling magma bodies. At Mount Katmai, it is most consistent with continued seismicity along ring-fracture systems created in the 1912 eruption, perhaps enhanced by circulating hydrothermal fluids and/or seepage from the caldera-filling lake.
Jones, Derek K; Leemans, Alexander
2011-01-01
Diffusion tensor MRI (DT-MRI) is the only non-invasive method for characterising the microstructural organization of tissue in vivo. Generating parametric maps that help to visualise different aspects of the tissue microstructure (mean diffusivity, tissue anisotropy and dominant fibre orientation) involves a number of steps: deciding on the optimal acquisition parameters on the scanner, collecting the data, pre-processing the data, fitting the model, and generating the final parametric maps for entry into statistical data analysis. Here, we describe an entire protocol that we have used on over 400 subjects with great success in our laboratory. In the 'Notes' section, we justify our choice of the various parameters/choices along the way so that the reader may adapt/modify the protocol to their own time/hardware constraints.
Martínez-Casado, Francisco J; Ramos-Riesco, Miguel; Rodríguez-Cheda, José A; Cucinotta, Fabio; Matesanz, Emilio; Miletto, Ivana; Gianotti, Enrica; Marchese, Leonardo; Matěj, Zdeněk
2016-09-06
Lead(II) acetate [Pb(Ac)2, where Ac = acetate group (CH3-COO(-))2] is a very common salt with many and varied uses throughout history. However, only lead(II) acetate trihydrate [Pb(Ac)2·3H2O] has been characterized to date. In this paper, two enantiotropic polymorphs of the anhydrous salt, a novel hydrate [lead(II) acetate hemihydrate: Pb(Ac)2·(1)/2H2O], and two decomposition products [corresponding to two different basic lead(II) acetates: Pb4O(Ac)6 and Pb2O(Ac)2] are reported, with their structures being solved for the first time. The compounds present a variety of molecular arrangements, being 2D or 1D coordination polymers. A thorough thermal analysis, by differential scanning calorimetry (DSC) and thermogravimetric analysis (TGA), was also carried out to characterize the thermal behavior of the salt and its decomposition process, in inert and oxygenated atmospheres, identifying the phases and byproducts that appear. The complex thermal behavior of lead(II) acetate is now resolved, with the discovery of another hydrate, two anhydrous enantiotropic polymorphs, and several byproducts. Moreover, some of them are phosphorescent at room temperature. The compounds were studied by TGA, DSC, X-ray diffraction, and UV-vis spectroscopy.
NASA Astrophysics Data System (ADS)
Leow, Alex D.; Zhu, Siwei
2008-03-01
Diffusion weighted MR imaging is a powerful tool that can be employed to study white matter microstructure by examining the 3D displacement profile of water molecules in brain tissue. By applying diffusion-sensitizing gradients along a minimum of 6 directions, second-order tensors (represented by 3-by-3 positive definite matrices) can be computed to model dominant diffusion processes. However, it has been shown that conventional DTI is not sufficient to resolve more complicated white matter configurations, e.g. crossing fiber tracts. More recently, High Angular Resolution Diffusion Imaging (HARDI) seeks to address this issue by employing more than 6 gradient directions. To account for fiber crossing when analyzing HARDI data, several methodologies have been introduced. For example, q-ball imaging was proposed to approximate the Orientation Distribution Function (ODF). Similarly, the PAS method seeks to resolve the angular structure of displacement probability functions using the maximum entropy principle. Alternatively, deconvolution methods extract multiple fiber tracts by computing fiber orientations using a pre-specified single fiber response function. In this study, we introduce the Tensor Distribution Function (TDF), a probability function defined on the space of symmetric and positive definite matrices. Using calculus of variations, we solve for the TDF that optimally describes the observed data. Here, fiber crossing is modeled as an ensemble of Gaussian diffusion processes with weights specified by the TDF. Once this optimal TDF is determined, the ODF can easily be computed by analytical integration of the resulting displacement probability function. Moreover, principal fiber directions can also be directly derived from the TDF.
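For context, the conventional second-order tensor fit that the TDF generalizes reduces to a log-linear least-squares problem: taking logs of the Stejskal-Tanner signal equation makes the six unique tensor elements linear unknowns. A minimal sketch with synthetic data (the gradient directions and b-value are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def design_row(g):
    # g D g^T expanded over the six unique elements of a symmetric tensor:
    # Dxx gx^2 + Dyy gy^2 + Dzz gz^2 + 2Dxy gx gy + 2Dxz gx gz + 2Dyz gy gz
    gx, gy, gz = g
    return np.array([gx*gx, gy*gy, gz*gz, 2*gx*gy, 2*gx*gz, 2*gy*gz])

def fit_tensor(signals, S0, b, gdirs):
    # Log-linearize S = S0 * exp(-b * g D g^T) and solve by least squares.
    y = -np.log(signals / S0) / b
    B = np.array([design_row(g) for g in gdirs])
    d, *_ = np.linalg.lstsq(B, y, rcond=None)
    Dxx, Dyy, Dzz, Dxy, Dxz, Dyz = d
    return np.array([[Dxx, Dxy, Dxz], [Dxy, Dyy, Dyz], [Dxz, Dyz, Dzz]])

# Simulate an anisotropic tensor (fiber along x) and recover it exactly
# from the minimal set of 6 gradient directions.
D_true = np.diag([1.7e-3, 0.3e-3, 0.3e-3])   # illustrative, mm^2/s
gdirs = [np.array(g, float) / np.linalg.norm(g)
         for g in [(1,0,0), (0,1,0), (0,0,1), (1,1,0), (1,0,1), (0,1,1)]]
b, S0 = 1000.0, 1.0
signals = np.array([S0 * np.exp(-b * g @ D_true @ g) for g in gdirs])
D_fit = fit_tensor(signals, S0, b, gdirs)
print(np.allclose(D_fit, D_true))
```

With exactly 6 directions the system is square and determined; HARDI's extra directions turn it into an overdetermined fit, which is what the TDF approach then enriches with a distribution over tensors.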
Ji, Hongfei; Li, Jie; Lu, Rongrong; Gu, Rong; Cao, Lei; Gong, Xiaoliang
2016-01-01
Electroencephalogram- (EEG-) based brain-computer interface (BCI) systems usually utilize one type of change in the dynamics of brain oscillations for control, such as event-related desynchronization/synchronization (ERD/ERS), steady state visual evoked potential (SSVEP), and P300 evoked potentials. There is a recent trend to detect more than one of these signals in one system to create a hybrid BCI. However, in this case, EEG data were always divided into groups and analyzed by separate processing procedures. As a result, the interactive effects were ignored when different types of BCI tasks were executed simultaneously. In this work, we propose an improved tensor based multiclass multimodal scheme especially for hybrid BCI, in which EEG signals are denoted as multiway tensors, a nonredundant rank-one tensor decomposition model is proposed to obtain nonredundant tensor components, a weighted Fisher criterion is designed to select multimodal discriminative patterns without ignoring the interactive effects, and support vector machine (SVM) is extended to multiclass classification. Experiment results suggest that the proposed scheme can not only identify the different changes in the dynamics of brain oscillations induced by different types of tasks but also capture the interactive effects of simultaneous tasks properly. Therefore, it has great potential use for hybrid BCI.
NASA Astrophysics Data System (ADS)
Herman, M. W.; Furlong, K. P.; Herrmann, R. B.; Benz, H.
2011-12-01
We model regional broadband data from the South Island of New Zealand to determine regional moment tensor solutions for the mainshock and selected aftershocks of the M7.0 3 September 2010, M6.1 21 February 2011, and M6.0 13 June 2011 earthquakes that occurred near Christchurch, New Zealand. Arrival time picks from both the local and regional strong motion and broadband data were used to determine preliminary earthquake locations using a previously published South Island velocity model. Rayleigh and Love surface wave dispersion measurements were then made from selected events to refine the velocity model in order to better match the predominantly large regional surface waves. RMT solutions were computed using the procedures of Herrmann et al. (2011). In total, we computed RMT solutions for 82 events in the magnitude range of Mw3.5-7.0. Although the crustal faulting behavior in the region has been argued to reflect a complex interaction of strike slip and thrust faulting, the dominant faulting style in the sequence is right-lateral, strike-slip (75 events), with nodal planes striking west-east to southwest-northeast. There are only five purely reverse mechanisms, at the western end of the sequence, in the vicinity of the Harper Hills blind thrust. The main Mw 7.0 rupture shows both local small-scale stepovers and one larger (~ 5-10 km width) right stepover near 172.40°E. Although we expect normal faulting associated with this larger stepover, during the first month after the main shock we observe only two normal fault mechanisms and 13 strike slip (inferred E-W right-lateral) events in the stepover region, and since that time, the sense of faulting has been dominated by right-lateral, strike-slip events, perhaps indicating a sequence of short E-W fault segments in the region. The February and June 2011 events occurred along the same trend at the eastern end of the sequence, and show similar strike slip mechanisms to the majority of events to the west, but the
The Search for a Volatile Human Specific Marker in the Decomposition Process.
Rosier, E; Loix, S; Develter, W; Van de Voorde, W; Tytgat, J; Cuypers, E
2015-01-01
In this study, a validated method using a thermal desorber combined with a gas chromatograph coupled to mass spectrometry was used to identify the volatile organic compounds released during decomposition of 6 human and 26 animal remains in a laboratory environment over a period of 6 months. 452 compounds were identified. Among them, a human specific marker was sought using principal component analysis. We found a combination of 8 compounds (ethyl propionate, propyl propionate, propyl butyrate, ethyl pentanoate, pyridine, diethyl disulfide, methyl(methylthio)ethyl disulfide and 3-methylthio-1-propanol) that led to the distinction of human and pig remains from other animal remains. Furthermore, it was possible to separate the pig remains from human remains based on 5 esters (3-methylbutyl pentanoate, 3-methylbutyl 3-methylbutyrate, 3-methylbutyl 2-methylbutyrate, butyl pentanoate and propyl hexanoate). Further research in the field with full bodies has to corroborate these results and search for one or more human specific markers. These markers would allow more efficient training of cadaver dogs, or the development of portable detection devices.
Photocatalytic decomposition of bromate ion by the UV/P25-Graphene processes.
Huang, Xin; Wang, Longyong; Zhou, Jizhi; Gao, Naiyun
2014-06-15
The photocatalysis of bromate (BrO3(-)) attracts much attention because BrO3(-) is a carcinogenic and genotoxic contaminant in drinking water. In this work, a TiO2-graphene composite (P25-GR) photocatalyst for BrO3(-) reduction was prepared by a facile one-step hydrothermal method, and it exhibited a higher capacity for BrO3(-) removal than either P25 or GR alone. The maximum removal of BrO3(-) was observed under the optimal conditions of 1% GR doping and pH 6.8. Compared with that without UV, the greater decrease of BrO3(-) on the composite indicates that BrO3(-) decomposition was predominantly attributable to photo-reduction under UV rather than to adsorption. This hypothesis was supported by the decrease of [BrO3(-)] with a synchronous increase of [Br(-)] at a nearly constant amount of total bromine ([BrO3(-)] + [Br(-)]). Furthermore, improved BrO3(-) reduction on P25-GR was observed in the treatment of a tap water. However, the efficiency of BrO3(-) removal was less than that in deionized water, probably due to the consumption of photo-generated electrons and the adsorption of natural organic matter (NOM) on graphene.
Propp, W.A.; Grey, A.E.; Negus-de Wys, J.; Plum, M.M.; Haefner, D.R.
1991-09-01
This study presents a preliminary evaluation of the technical and economic feasibility of selected conceptual processes for pyrolytic conversion of organic feedstocks or the decomposition/detoxification of hazardous wastes by coupling the process to the geopressured-geothermal resource. The report presents a detailed discussion of the resource and of each process selected for evaluation, including the technical evaluation of each. A separate section presents the economic methodology used and the evaluation of the technically viable process. A final section presents conclusions and recommendations. Three separate processes were selected for evaluation. These are pyrolytic conversion of biomass to petroleum-like fluids, wet air oxidation (WAO) at subcritical conditions for destruction of hazardous waste, and supercritical water oxidation (SCWO), also for the destruction of hazardous waste. The scientific feasibility of all three processes has been previously established by various bench-scale and pilot-scale studies. For a variety of reasons detailed in the report, the SCWO process is the only one deemed to be technically feasible, although the effects of the high solids content of the geothermal brine need further study. This technology shows tremendous promise for contributing to solving the nation's energy and hazardous waste problems. However, the current economic analysis suggests that it is uneconomical at this time. 50 refs., 5 figs., 7 tabs.
NASA Astrophysics Data System (ADS)
Li, Youning; Han, Muxin; Grassl, Markus; Zeng, Bei
2017-06-01
Invariant tensors are states in the SU(2) tensor product representation that are invariant under SU(2) action. They play an important role in the study of loop quantum gravity. On the other hand, perfect tensors are highly entangled many-body quantum states whose local density matrices are maximally mixed. Recently, the notion of perfect tensors has attracted a lot of attention in the fields of quantum information theory, condensed matter theory, and quantum gravity. In this work, we introduce the concept of an invariant perfect tensor (IPT), which is an n-valent tensor that is both invariant and perfect. We discuss the existence and construction of IPTs. For bivalent tensors, the IPT is the unique singlet state for each local dimension. The trivalent IPT also exists and is uniquely given by Wigner's 3j symbol. However, we show that, surprisingly, 4-valent IPTs do not exist for any identical local dimension d. In contrast, when the dimension is large, almost all invariant tensors are asymptotically perfect, a consequence of the concentration-of-measure phenomenon for multipartite quantum states.
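The bivalent case can be checked numerically: the spin-j singlet on two d-dimensional sites (d = 2j+1) is the invariant state, and for two parties "perfect" means its one-site reduced density matrix equals the maximally mixed state I/d. A small sketch, using the standard (-1)^(j-m) sign convention (stated here as an assumption):

```python
import numpy as np

def singlet_matrix(d):
    # Coefficient matrix of the SU(2) singlet on C^d ⊗ C^d with d = 2j+1.
    # Row a corresponds to m = j - a, and M[a, b] ∝ (-1)^(j-m) δ_{n,-m},
    # so the nonzero entries sit on the antidiagonal with alternating signs.
    M = np.zeros((d, d))
    for a in range(d):
        M[a, d - 1 - a] = (-1) ** a   # (-1)^(j-m) with j - m = a
    return M / np.sqrt(d)             # normalized state

# Perfectness check for two parties: rho_A = M M† must equal I/d.
for d in (2, 3, 4):
    M = singlet_matrix(d)
    rho = M @ M.T                     # M is real, so M† = M^T
    print(np.allclose(rho, np.eye(d) / d))
```

For d = 2 this is the familiar Bell singlet (|01⟩ - |10⟩)/√2; the antidiagonal sign structure is exactly what makes the construction work for every local dimension.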
Generalization of the tensor renormalization group approach to 3-D or higher dimensions
NASA Astrophysics Data System (ADS)
Teng, Peiyuan
2017-04-01
In this paper, a way of generalizing the tensor renormalization group (TRG) is proposed. Mathematically, a connection between the coarse-graining patterns of the tensor renormalization group and the concept of a truncation sequence in polytope geometry is discovered. A theoretical contraction framework is therefore proposed. Furthermore, the canonical polyadic decomposition is introduced into tensor network theory. A numerical verification of this method on the 3-D Ising model is carried out.
Behrens, R.; Minier, L.; Bulusu, S.
1998-12-31
The time-dependent, solid-phase thermal decomposition behavior of 2,4-dinitroimidazole (2,4-DNI) has been measured utilizing simultaneous thermogravimetric modulated beam mass spectrometry (STMBMS) methods. The decomposition products consist of gaseous and non-volatile polymeric products. The temporal behavior of the gas formation rates of the identified products indicate that the overall thermal decomposition process is complex. In isothermal experiments with 2,4-DNI in the solid phase, four distinguishing features are observed: (1) elevated rates of gas formation are observed during the early stages of the decomposition, which appear to be correlated to the presence of exogenous water in the sample; (2) this is followed by a period of relatively constant rates of gas formation; (3) next, the rates of gas formation accelerate, characteristic of an autocatalytic reaction; (4) finally, the 2,4-DNI is depleted and gaseous decomposition products continue to evolve at a decreasing rate. A physicochemical and mathematical model of the decomposition of 2,4-DNI has been developed and applied to the experimental results. The first generation of this model is described in this paper. Differences between the first generation of the model and the experimental data collected under different conditions suggest refinements for the next generation of the model.
Peatland microbial communities and decomposition processes in the james bay lowlands, Canada.
Preston, Michael D; Smemo, Kurt A; McLaughlin, James W; Basiliko, Nathan
2012-01-01
Northern peatlands are a large repository of atmospheric carbon due to an imbalance between primary production by plants and microbial decomposition. The James Bay Lowlands (JBL) of northern Ontario are a large peatland-complex but remain relatively unstudied. Climate change models predict the region will experience warmer and drier conditions, potentially altering plant community composition, and shifting the region from a long-term carbon sink to a source. We collected a peat core from two geographically separated (ca. 200 km) ombrotrophic peatlands (Victor and Kinoje Bogs) and one minerotrophic peatland (Victor Fen) located near Victor Bog within the JBL. We characterized (i) archaeal, bacterial, and fungal community structure with terminal restriction fragment length polymorphism of ribosomal DNA, (ii) estimated microbial activity using community level physiological profiling and extracellular enzymes activities, and (iii) the aeration and temperature dependence of carbon mineralization at three depths (0-10, 50-60, and 100-110 cm) from each site. Similar dominant microbial taxa were observed at all three peatlands despite differences in nutrient content and substrate quality. In contrast, we observed differences in basal respiration, enzyme activity, and the magnitude of substrate utilization, which were all generally higher at Victor Fen and similar between the two bogs. However, there was no preferential mineralization of carbon substrates between the bogs and fens. Microbial community composition did not correlate with measures of microbial activity but pH was a strong predictor of activity across all sites and depths. Increased peat temperature and aeration stimulated CO(2) production but this did not correlate with a change in enzyme activities. Potential microbial activity in the JBL appears to be influenced by the quality of the peat substrate and the presence of microbial inhibitors, which suggests the existing peat substrate will have a large
Xu, Yan; Wu, Qian; Shimatani, Yuji; Yamaguchi, Koji
2015-10-07
Due to the lack of regeneration methods, the reusability of nanofluidic chips is a significant technical challenge impeding the efficient and economic promotion of both fundamental research and practical applications on nanofluidics. Herein, a simple method for the total regeneration of glass nanofluidic chips was described. The method consists of sequential thermal treatment with six well-designed steps, which correspond to four sequential thermal and thermochemical decomposition processes, namely, dehydration, high-temperature redox chemical reaction, high-temperature gasification, and cooling. The method enabled the total regeneration of typical 'dead' glass nanofluidic chips by eliminating physically clogged nanoparticles in the nanochannels, removing chemically reacted organic matter on the glass surface and regenerating permanent functional surfaces of dissimilar materials localized in the nanochannels. The method provides a technical solution to significantly improve the reusability of glass nanofluidic chips and will be useful for the promotion and acceleration of research and applications on nanofluidics.
Trinh, Nguyen Duy; Hong, Seong-Soo
2015-07-01
Iron-based MIL-53 crystals with uniform size were successfully synthesized using a microwave-assisted solvothermal method and characterized by XRD, FE-SEM and DRS. We also investigated the photocatalytic activity of MIL-53(Fe) for the decomposition of methylene blue using H2O2 as an electron acceptor. From the XRD and SEM results, fully crystallized MIL-53(Fe) materials were obtained regardless of the preparation method. From the DRS results, the MIL-53(Fe) samples prepared using the microwave-assisted process displayed an absorption spectrum extending into the visible region, and accordingly showed high photocatalytic activity under visible light irradiation. The MIL-53(Fe) catalyst prepared by two rounds of microwave irradiation showed the highest activity.
Moles, Pamela; Oliva, Mónica; Safont, Vicent S
2011-01-20
By using 6,7,8-trioxabicyclo[3.2.2]nonane as the artemisinin model and dihydrated Fe(OH)(2) as the heme model, we report a theoretical study of the late steps of the artemisinin decomposition process. The study offers two viewpoints: first, the energetic and geometric parameters are obtained and analyzed, and hence different reaction paths have been studied. The second point of view uses the electron localization function (ELF) and the atoms in molecules (AIM) methodology to conduct a complete topological study of such steps. The MO analysis together with the spin density description has also been used. The obtained results agree nicely with the experimental data, and a new mechanistic proposal that explains the experimentally determined outcome of deoxyartemisinin has been postulated.
Yan, Yingjie; Liao, Qi-Nan; Ji, Feng; Wang, Wei; Yuan, Shoujun; Hu, Zhen-Hu
2017-02-01
3,5-Dinitrobenzamide has been widely used as a feed additive to control coccidiosis in poultry, and part of the added 3,5-dinitrobenzamide is excreted into wastewater and surface water. The removal of 3,5-dinitrobenzamide from wastewater and surface water has not been reported in previous studies. Highly reactive hydroxyl radicals from UV/hydrogen peroxide (H2O2) and UV/titanium dioxide (TiO2) advanced oxidation processes (AOPs) can decompose organic contaminants efficiently. In this study, the decomposition of 3,5-dinitrobenzamide in aqueous solution during UV/H2O2 and UV/TiO2 oxidation processes was investigated. The decomposition of 3,5-dinitrobenzamide fits well with a fluence-based pseudo-first-order kinetics model. The decomposition in both oxidation processes was affected by solution pH, and was inhibited under alkaline conditions. Inorganic anions such as NO3(-), Cl(-), SO4(2-), HCO3(-), and CO3(2-) inhibited the degradation of 3,5-dinitrobenzamide during the UV/H2O2 and UV/TiO2 oxidation processes. After complete decomposition in both oxidation processes, approximately 50% of the 3,5-dinitrobenzamide was decomposed into organic intermediates, and the rest was mineralized to CO2, H2O, and other inorganic anions. Ions such as NH4(+), NO3(-), and NO2(-) were released into aqueous solution during the degradation. The primary decomposition products of 3,5-dinitrobenzamide were identified using time-of-flight mass spectrometry (LCMS-IT-TOF). Based on these products and the ions released, a possible decomposition pathway of 3,5-dinitrobenzamide in both UV/H2O2 and UV/TiO2 processes was proposed.
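A fluence-based pseudo-first-order model of the kind invoked above takes the form C(F) = C0·exp(-k′F), where F is the UV fluence, so ln(C0/C) is linear in F with slope k′. A minimal fitting sketch with synthetic data (the rate constant, fluence range, and units are illustrative assumptions, not values from the study):

```python
import numpy as np

def fit_fluence_rate_constant(fluence, conc):
    # Log-linearize C(F) = C0 * exp(-k * F): regress ln(C0/C) on F
    # through the origin, so the fitted slope is the rate constant k.
    y = np.log(conc[0] / conc)
    k, *_ = np.linalg.lstsq(fluence.reshape(-1, 1), y, rcond=None)
    return float(k[0])

k_true = 2.0e-3                        # assumed rate constant, cm^2/mJ
F = np.linspace(0.0, 1500.0, 16)       # UV fluence, mJ/cm^2
C = 10.0 * np.exp(-k_true * F)         # simulated concentration, mg/L
k_fit = fit_fluence_rate_constant(F, C)
print(abs(k_fit - k_true) < 1e-9)
```

With real data the same regression applies, and departures from linearity in the ln(C0/C) vs. F plot flag the pH and anion-scavenging effects the abstract describes.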
A low-cost polysilicon process based on the synthesis and decomposition of dichlorosilane
NASA Technical Reports Server (NTRS)
Mccormick, J. R.; Plahutnik, F.; Sawyer, D.; Arvidson, A.; Goldfarb, S.
1982-01-01
Major process steps of a dichlorosilane-based chemical vapor deposition (CVD) process for the production of polycrystalline silicon have been evaluated. While an economic analysis of the process indicates that it is not capable of meeting JPL/DOE price objectives ($14.00/kg in 1980 dollars), a product price in the $19.00/kg to $25.00/kg range may be achieved. Product quality has been evaluated and ascertained to be comparable to semiconductor-grade polycrystalline silicon. Solar cells fabricated from the material are also equivalent to those fabricated from semiconductor-grade polycrystalline silicon.
Multiple alignment tensors from a denatured protein.
Gebel, Erika B; Ruan, Ke; Tolman, Joel R; Shortle, David
2006-07-26
The structural content of the denatured state has yet to be fully characterized. In recent years, large residual dipolar couplings (RDCs) from denatured proteins have been observed under alignment conditions produced by bicelles and strained polyacrylamide gels. In this report, we describe efforts to extend our picture of the residual structure in denatured nuclease by measuring RDCs with multiple alignment tensors. Backbone amide 15N-1H RDCs were collected in 4 M urea, for a total of eight RDC data sets. The RDCs were analyzed by singular value decomposition (SVD) to determine the number of independent alignment tensors present in the data. On the basis of the resultant singular values and propagated error estimates, it is clear that there are at least three independent alignment tensors. These three independent RDC data sets can be reconstituted as orthogonal linear combination (OLC) RDC data sets of the eight actually recorded. The first, second, and third OLC-RDC data sets are highly robust to the removal of any single experimental RDC data set, establishing the presence of three independent alignment tensors sampled well above the level of experimental uncertainty. The observation that the RDC data span three or more dimensions of the five-dimensional parameter space demonstrates that the ensemble average structure of denatured nuclease must be asymmetric with respect to these three orthogonal principal axes, which is not inconsistent with earlier work demonstrating that it has a nativelike topology.
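The SVD rank analysis described here can be sketched in a few lines. The data below are synthetic stand-ins (three random "alignment modes" mixed into eight data sets plus small noise), not the experimental RDCs; counting singular values above the noise floor recovers the number of independent alignment tensors, and the scaled right singular vectors play the role of the OLC data sets:

```python
import numpy as np

rng = np.random.default_rng(0)
n_res = 60                              # residues with measured RDCs (hypothetical)
basis = rng.normal(size=(3, n_res))     # three independent alignment "modes"
mix = rng.normal(size=(8, 3))           # eight alignment media mix the modes
D = mix @ basis + 0.01 * rng.normal(size=(8, n_res))  # small experimental noise

# Singular values well above the noise floor count independent alignment tensors
U, s, Vt = np.linalg.svd(D, full_matrices=False)
n_tensors = int(np.sum(s > 10 * s[-1]))

# Orthogonal-linear-combination (OLC) data sets: rows of Vt scaled by s
olc = s[:, None] * Vt
```

In practice the threshold would come from propagated experimental error estimates rather than the smallest singular value, as the study describes.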
NASA Technical Reports Server (NTRS)
Kuo, Kenneth K.; Lu, Yeu-Cherng; Chiaverini, Martin J.; Harting, George C.; Johnson, David K.; Serin, Nadir
1995-01-01
An ongoing experimental study of the fundamental processes involved in fuel decomposition and boundary-layer combustion in hybrid rocket motors is being conducted at the High Pressure Combustion Laboratory of The Pennsylvania State University. This research will provide a useful engineering technology base for the development of hybrid rocket motors as well as a fundamental understanding of the complex processes involved in hybrid propulsion. A high-pressure, 2-D slab motor has been designed, manufactured, and utilized for conducting seven test firings using HTPB fuel processed at PSU. A total of 20 fuel slabs have been received from the McDonnell Douglas Aerospace Corporation; ten of these contain an array of fine-wire thermocouples for measuring solid fuel surface and subsurface temperatures. Diagnostic instrumentation used in the tests includes high-frequency pressure transducers for measuring static and dynamic motor pressures, in addition to the fine-wire thermocouples. The ultrasonic pulse-echo technique as well as a real-time x-ray radiography system have been used to obtain independent measurements of instantaneous solid fuel regression rates.
NASA Astrophysics Data System (ADS)
Yang, Yang; Ren, R.-C.; Cai, Ming
2016-12-01
The stratosphere has been cooling under global warming, the causes of which are not yet well understood. This study applied a process-based decomposition method (CFRAM; Coupled Surface-Atmosphere Climate Feedback Response Analysis Method) to the simulation results of a Coupled Model Intercomparison Project, phase 5 (CMIP5) model (CCSM4; Community Climate System Model, version 4), to identify the radiative and non-radiative processes responsible for the stratospheric cooling. By focusing on the long-term stratospheric temperature changes between the "historical run" and the 8.5 W m-2 Representative Concentration Pathway (RCP8.5) scenario, this study demonstrates that radiative changes due to CO2, ozone, and water vapor are the main drivers of stratospheric cooling in both winter and summer. They contribute to the cooling by reducing the net radiative energy (mainly downward radiation) received by the stratospheric layer. In terms of the global average, their contributions are around -5, -1.5, and -1 K, respectively. However, the observed stratospheric cooling is much weaker than the cooling produced by radiative processes, because changes in atmospheric dynamic processes act to strongly mitigate the radiative cooling, yielding roughly 4 K of warming on a global average basis. In particular, the much stronger/weaker dynamic warming in the northern/southern winter extratropics is associated with an increase of planetary-wave activity in the northern winter hemisphere, but a slight decrease in the southern winter hemisphere, under global warming. More importantly, although radiative processes dominate the stratospheric cooling, the spatial patterns are largely determined by the non-radiative effects of dynamic processes.
García-Garrido, C; Sánchez-Jiménez, P E; Pérez-Maqueda, L A; Perejón, A; Criado, José M
2016-10-26
The polymer-to-ceramic transformation kinetics of two widely employed ceramic precursors, 1,3,5,7-tetramethyl-1,3,5,7-tetravinylcyclotetrasiloxane (TTCS) and polyureamethylvinylsilazane (CERASET), have been investigated using coupled thermogravimetry and mass spectrometry (TG-MS), Raman, XRD, and FTIR. The thermally induced decomposition of the pre-ceramic polymer is the critical step in the synthesis of polymer derived ceramics (PDCs), and accurate kinetic modeling is key to attaining a complete understanding of the underlying process and to attempting any behavior predictions. However, obtaining a precise kinetic description of processes of such complexity, consisting of several largely overlapping physico-chemical processes comprising the cleavage of the starting polymeric network and the release of organic moieties, is extremely difficult. Here, by using the evolved gases detected by MS as a guide, it has been possible to determine the number of steps that compose the overall process, which was subsequently resolved using a semiempirical deconvolution method based on the Fraser-Suzuki function. Such a function is more appropriate than the more usual Gaussian or Lorentzian functions since it takes into account the intrinsic asymmetry of kinetic curves. Then, the kinetic parameters of each constituent step were independently determined using both model-free and model-fitting procedures, and it was found that the processes mostly obey diffusion models, which can be attributed to the diffusion of the released gases through the solid matrix. The validity of the obtained kinetic parameters was tested not only by the successful reconstruction of the original experimental curves, but also by predicting the kinetic curves of the overall process under different thermal schedules and for a mixed TTCS-CERASET precursor.
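For reference, the asymmetric Fraser-Suzuki profile used in such peak deconvolutions can be written down directly; it reduces to a Gaussian as the asymmetry parameter goes to zero and is defined only where the logarithm's argument is positive. The parameter values below are hypothetical illustrations:

```python
import numpy as np

def fraser_suzuki(x, amp, pos, width, asym):
    """Asymmetric Fraser-Suzuki peak: amp at x = pos, skewed by asym."""
    x = np.asarray(x, dtype=float)
    arg = 1.0 + 2.0 * asym * (x - pos) / width
    y = np.zeros_like(x)
    ok = arg > 0                      # the profile is zero where log() is undefined
    y[ok] = amp * np.exp(-np.log(2.0) / asym**2 * np.log(arg[ok]) ** 2)
    return y

# A hypothetical two-step DTG-like curve built from two overlapping peaks
T = np.linspace(400.0, 1100.0, 701)   # temperature grid in K
curve = (fraser_suzuki(T, 1.0, 650.0, 80.0, -0.3)
         + fraser_suzuki(T, 0.6, 850.0, 120.0, 0.2))
```

Fitting such a sum to an experimental rate curve (e.g. with a nonlinear least-squares routine) separates overlapping steps while respecting their intrinsic asymmetry, which symmetric Gaussian or Lorentzian components cannot.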
Tensor Network Renormalization.
Evenbly, G; Vidal, G
2015-10-30
We introduce a coarse-graining transformation for tensor networks that can be applied to study both the partition function of a classical statistical system and the Euclidean path integral of a quantum many-body system. The scheme is based upon the insertion of optimized unitary and isometric tensors (disentanglers and isometries) into the tensor network and has, as its key feature, the ability to remove short-range entanglement or correlations at each coarse-graining step. Removal of short-range entanglement results in scale invariance being explicitly recovered at criticality. In this way we obtain a proper renormalization group flow (in the space of tensors), one that in particular (i) is computationally sustainable, even for critical systems, and (ii) has the correct structure of fixed points, both at criticality and away from it. We demonstrate the proposed approach in the context of the 2D classical Ising model.
Tensor coupling and pseudospin symmetry in nuclei
Alberto, P.; Castro, A.S. de; Lisboa, R.; Malheiro, M.
2005-03-01
In this work we study the contribution of the isoscalar tensor coupling to the realization of pseudospin symmetry in nuclei. Using realistic values for the tensor coupling strength, we show that this coupling noticeably reduces the pseudospin splittings, especially for single-particle levels near the Fermi surface. By using an energy decomposition of the pseudospin energy splittings, we show that the changes in these splittings come mainly through the changes induced in the lower radial wave function for the low-lying pseudospin partners and through changes in the expectation value of the pseudospin-orbit coupling term for surface partners. This allows us to confirm the conclusion already reached in previous studies, namely that the pseudospin symmetry in nuclei is of a dynamical nature.
Oxidative decomposition of p-nitroaniline in water by solar photo-Fenton advanced oxidation process.
Sun, Jian-Hui; Sun, Sheng-Peng; Fan, Mao-Hong; Guo, Hui-Qin; Lee, Yi-Fan; Sun, Rui-Xia
2008-05-01
The degradation of p-nitroaniline (PNA) in water by the solar photo-Fenton advanced oxidation process was investigated in this study. The effects of different reaction parameters, including the solution pH, the dosages of hydrogen peroxide and ferrous ion, the initial PNA concentration, and the temperature, on the degradation of PNA were studied. The optimum conditions for the degradation of PNA in water were considered to be pH 3.0, 10 mmol L(-1) H(2)O(2), 0.05 mmol L(-1) Fe(2+), 0.072-0.217 mmol L(-1) PNA, and a temperature of 20 degrees C. Under the optimum conditions, the degradation efficiencies of PNA were more than 98% within 30 min of reaction. The degradation characteristics of PNA showed that the conjugated pi system of the aromatic ring in PNA molecules was effectively destroyed. The experimental results indicated that the solar photo-Fenton process has advantages over the classical Fenton process, such as higher oxidation power, a wider working pH range, and lower ferrous ion usage. Furthermore, the present study showed the potential of the solar photo-Fenton process for treating PNA-containing wastewater.
Square Deal: Lower Bounds and Improved Relaxations for Tensor Recovery
2013-08-16
Recovering a low-rank tensor from incomplete information is a recurring problem in signal processing and machine learning. The most popular convex relaxation of this problem minimizes the sum of the nuclear ... Extending these results to low-rank tensors is not obvious. The numerical algebra of tensors is fraught with hardness results [HL09]. For example, even computing a ...
McKenna, Benjamin S; Theilmann, Rebecca J; Sutherland, Ashley N; Eyler, Lisa T
2015-05-01
Evidence of abnormal brain structure and function, as measured with diffusion tensor imaging (DTI) and functional magnetic resonance imaging (fMRI), and of cognitive dysfunction has been observed in inter-episode bipolar disorder (BD) patients. We aimed to create a joint statistical model of white matter integrity and functional response measures to explain differences in working memory and processing speed among BD patients. Medicated inter-episode BD (n=26; age=45.2±10.1 years) and healthy comparison (HC; n=36; age=46.3±11.5 years) participants completed 51-direction DTI and fMRI while performing a working memory task. Participants also completed a processing speed test. Tract-based spatial statistics identified common white matter tracts, where fractional anisotropy was calculated from atlas-defined regions of interest. Brain responses within regions-of-interest activation clusters were also calculated. Least angle regression was used to fuse the fMRI and DTI data and select the best joint neuroimaging predictors of cognitive performance for each group. While there was overlap between groups in which regions were most related to cognitive performance, some relationships differed between groups. For working memory accuracy, BD-specific predictors included bilateral dorsolateral prefrontal cortex from fMRI, and the splenium of the corpus callosum, left uncinate fasciculus, and bilateral superior longitudinal fasciculi from DTI. For processing speed, the genu and splenium of the corpus callosum and the right superior longitudinal fasciculus from DTI were significant predictors of cognitive performance selectively for BD patients. BD patients demonstrated unique brain-cognition relationships compared to HC. These findings are a first step in discovering how interactions of structural and functional brain abnormalities contribute to cognitive impairments in BD.
Decomposition of Time Scales in Linear Systems and Markovian Decision Processes.
1980-11-01
... Inventory theory [17]. iii. Queuing theory [18]. Markovian decision processes can be traced back to Bellman's development of dynamic programming [19,20] ... corresponds to the demand in terms of generating units needed. Markov models of this type are common in optimal resource scheduling problems [22,59]. ... scale systems is a research area that will always remain rich in potential. More demanding performance leads to more complex models, necessitating the ...
Ivanova, Maria V; Isaev, Dmitry Yu; Dragoy, Olga V; Akinina, Yulia S; Petrushevskiy, Alexey G; Fedina, Oksana N; Shklovsky, Victor M; Dronkers, Nina F
2016-12-01
A growing literature is pointing towards the importance of white matter tracts in understanding the neural mechanisms of language processing, and in determining the nature of language deficits and recovery patterns in aphasia. Measurements extracted from diffusion-weighted (DW) images provide comprehensive in vivo measures of the local microstructural properties of fiber pathways. In the current study, we compared microstructural properties of the major white matter tracts implicated in language processing in each hemisphere (the arcuate fasciculus (AF), superior longitudinal fasciculus (SLF), inferior longitudinal fasciculus (ILF), inferior frontal-occipital fasciculus (IFOF), uncinate fasciculus (UF), and corpus callosum (CC), with the corticospinal tract (CST) included for control purposes) between individuals with aphasia and healthy controls, and investigated the relationship between these neural indices and language deficits. Thirty-seven individuals with aphasia due to left hemisphere stroke and eleven age-matched controls were scanned using DW imaging sequences. Fractional anisotropy (FA), mean diffusivity (MD), radial diffusivity (RD), and axial diffusivity (AD) values for each major white matter tract were extracted from the DW images using tract masks chosen from standardized atlases. Individuals with aphasia were also assessed with a standardized language test in Russian targeting comprehension and production at the word and sentence level. Individuals with aphasia had significantly lower FA values for left hemisphere tracts and significantly higher values of MD, RD, and AD for both left and right hemisphere tracts compared to controls, all indicating profound impairment in tract integrity. Language comprehension was predominantly related to the integrity of the left IFOF and left ILF, while language production was mainly related to the integrity of the left AF. In addition, individual segments of these three tracts were differentially associated with language production and
NASA Astrophysics Data System (ADS)
Honda, K.; Peter, K.; Zhang, Y.; Yu, B.; Park, K.; Li, Xiaolei; Michaels, K.; Yamada, Shinichi; Noguchi, T.
2004-05-01
With the downscaling of dimensions, essential challenges to layout printability increase significantly, and the design rules can no longer be shrunk linearly. Historically, in the early development stage, simple test patterns like snake/comb or border/borderless via chains were used for identifying design and process issues electrically. However, it is unclear how well these patterns represent the patterns sensitive to the real critical failures, and the lack of such critical patterns would always cause yield problems in volume production. In this paper, we show the results of evaluating a 65-nm BEOL process using test patterns that cover critical layout situations. In particular, the focus was on the line-end via hole, which is believed to cause systematic yield degradation. The key steps in our process/design decomposition methodology are design attribute and process space analysis. By exploring the process space for a given design, the method allows finding the patterns that are most challenging to print due to various process issues. The test patterns were generated from critical patterns extracted from a standard cell library, taking into account our preliminary OPC and mask design flow. Simulations of all test patterns were performed to ensure that the DOE range is sufficient to cover the entire process/design space. The patterns were generated from the 65-nm node ground design rules, using 90 nm as the minimum metal width and space and a fixed via hole diameter of 100 nm. It was confirmed by simulation that all the test patterns are representative of the original design in each module's process/design space. All the test patterns were measured with a standard parametric e-test setup. The amount of line-end pull-back can be inferred from the via resistance, and the amount of line-end widening can be inferred from the leakage current between via chains and neighboring lines. Thus, meaningful information about the OPC and litho process can be obtained.
General route for the decomposition of InAs quantum dots during the capping process.
González, D; Reyes, D F; Utrilla, A D; Ben, T; Braza, V; Guzman, A; Hierro, A; Ulloa, J M
2016-03-29
The effect of the capping process on the morphology of InAs/GaAs quantum dots (QDs) formed using different GaAs-based capping layers (CLs), ranging from strain reduction layers to strain compensating layers, has been studied by transmission microscopy techniques. For this, we measured simultaneously the height and diameter of buried and uncapped QDs over populations of hundreds of QDs, which are statistically reliable. First, the uncapped QD population evolves in all cases from a pyramidal shape into a more homogeneous distribution of buried QDs with a spherical-dome shape, despite the different mechanisms implicated in the QD capping. Second, the shape of the buried QDs depends only on the final QD size, where the radius of curvature is a function of the base diameter, independently of the CL composition and growth conditions. An asymmetric evolution of the QDs' morphology takes place, in which the QD height and base diameter are modified by the amount required to adopt a similar stable shape characterized by an average aspect ratio of 0.21. Our results contradict the traditional model of QD material redistribution from the apex to the base and point to a different, universal behavior of the overgrowth processes in self-organized InAs QDs.
Decomposition of Iodinated Pharmaceuticals by UV-254 nm-assisted Advanced Oxidation Processes.
Duan, Xiaodi; He, Xuexiang; Wang, Dong; Mezyk, Stephen P; Otto, Shauna C; Marfil-Vega, Ruth; Mills, Marc A; Dionysiou, Dionysios D
2017-02-05
Iodinated pharmaceuticals, thyroxine (a thyroid hormone) and diatrizoate (an iodinated X-ray contrast medium), are among the most prescribed active pharmaceutical ingredients. Both have been reported to potentially disrupt thyroid homeostasis even at very low concentrations. In this study, UV-254 nm-based photolysis and photochemical processes, i.e., UV only, UV/H2O2, and UV/S2O8(2-), were evaluated for the destruction of these two pharmaceuticals. Approximately 40% of 0.5 μM thyroxine or diatrizoate was degraded through direct photolysis at a UV fluence of 160 mJ cm(-2), probably resulting from the photosensitive cleavage of C-I bonds. While the addition of H2O2 accelerated the degradation only to a low degree, the destruction rates of both chemicals were significantly enhanced in the UV/S2O8(2-) system, suggesting the potential vulnerability of iodinated chemicals toward UV/S2O8(2-) treatment. Such efficient destruction also occurred in the presence of radical scavengers when biologically treated wastewater samples were used as reaction matrices. The effects of initial oxidant concentrations, solution pH, and the presence of natural organic matter (humic acid or fulvic acid) and alkalinity were also investigated. These results provide insights for the removal of iodinated pharmaceuticals in water and/or wastewater using UV-based photochemical processes.
Sandbeck, Kenneth A.; Ward, David M.
1982-01-01
The optimum temperatures for methanogenesis in microbial mats of four neutral to alkaline, low-sulfate hot springs in Yellowstone National Park were between 50 and 60°C, which was 13 to 23°C lower than the upper temperature for mat development. Significant methanogenesis at 65°C was only observed in one of the springs. Methane production in samples collected at a 51 or 62°C site in Octopus Spring was increased by incubation at higher temperatures and was maximal at 70°C. Strains of Methanobacterium thermoautotrophicum were isolated from 50, 55, 60, and 65°C sites in Octopus Spring at the temperatures of the collection sites. The optimum temperature for growth and methanogenesis of each isolate was 65°C. Similar results were found for the potential rate of sulfate reduction in an Icelandic hot spring microbial mat in which sulfate reduction dominated methane production as a terminal process in anaerobic decomposition. The potential rate of sulfate reduction along the thermal gradient of the mat was greatest at 50°C, but incubation at 60°C of the samples obtained at 50°C increased the rate. Adaptation to different mat temperatures, common among various microorganisms and processes in the mats, did not appear to occur in the processes and microorganisms which terminate the anaerobic food chain. Other factors must explain why the maximal rates of these processes are restricted to moderate temperatures of the mat ecosystem. PMID:16346109
Batakliev, Todor; Georgiev, Vladimir; Anachkov, Metody; Rakovsky, Slavcho
2014-01-01
Catalytic ozone decomposition is of great significance because ozone is a toxic substance commonly found or generated in human environments (aircraft cabins, offices with photocopiers, laser printers, sterilizers). Considerable work has been done on ozone decomposition reported in the literature. This review provides a comprehensive summary of the literature, concentrating on analysis of the physico-chemical properties, synthesis and catalytic decomposition of ozone. This is supplemented by a review on kinetics and catalyst characterization which ties together the previously reported results. Noble metals and oxides of transition metals have been found to be the most active substances for ozone decomposition. The high price of precious metals stimulated the use of metal oxide catalysts, particularly catalysts based on manganese oxide. It has been determined that the kinetics of ozone decomposition is first order. A mechanism of the reaction of catalytic ozone decomposition is discussed, based on detailed spectroscopic investigations of the catalytic surface, showing the existence of peroxide and superoxide surface intermediates. PMID:26109880
Ding, Wen Quan; Zhou, Xue Jun; Tang, Jin Bo; Gu, Jian Hui; Jin, Dong Sheng
2015-06-01
To achieve 3-dimensional (3D) display of peripheral nerves in the wrist region by using maximum intensity projection (MIP) post-processing methods to reconstruct raw images acquired by a diffusion tensor imaging (DTI) scan, and to explore its clinical applications. We performed DTI scans in 6 (DTI6) and 25 (DTI25) diffusion directions on 20 wrists of 10 healthy young volunteers, 6 wrists of 5 patients with carpal tunnel syndrome, 6 wrists of 6 patients with nerve lacerations, and one patient with a neurofibroma. The MIP post-processing employed 2 types of DTI raw images: (1) single-direction and (2) T2-weighted trace. The fractional anisotropy (FA) and apparent diffusion coefficient (ADC) values of the median and ulnar nerves were measured at multiple testing sites. Two radiologists used custom evaluation scales to assess the 3D nerve imaging quality independently. In both DTI6 and DTI25, nerves in the wrist region could be displayed clearly by the 2 MIP post-processing methods. The FA and ADC values were not significantly different between DTI6 and DTI25, except for the FA values of the ulnar nerves at the level of the pisiform bone (p=0.03). As to the imaging quality of each MIP post-processing method, there were no significant differences between DTI6 and DTI25 (p>0.05). The imaging quality of single-direction MIP post-processing was better than that from T2-weighted traces (p<0.05) because of the higher nerve signal intensity. Three-dimensional display of peripheral nerves in the wrist region can be achieved by MIP post-processing of single-direction images and T2-weighted trace images for both DTI6 and DTI25. The FA and ADC values of the median nerves can be accurately measured using DTI6 data. Adopting a 6-direction DTI scan with MIP post-processing is an efficient method for evaluating peripheral nerves.
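The FA and ADC indices measured throughout such studies are simple functions of the diffusion tensor's three eigenvalues: the ADC is their mean, and FA is a normalized standard deviation. A minimal sketch, with hypothetical eigenvalues:

```python
import numpy as np

def fa_md(eigvals):
    """Fractional anisotropy (FA) and mean diffusivity (the ADC) from the
    three eigenvalues of a diffusion tensor."""
    l = np.asarray(eigvals, dtype=float)
    md = l.mean()
    fa = np.sqrt(1.5 * np.sum((l - md) ** 2) / np.sum(l ** 2))
    return fa, md

# Hypothetical eigenvalues (x1e-3 mm^2/s), typical of a coherent nerve fiber voxel
fa_nerve, md_nerve = fa_md([1.6, 0.35, 0.35])
```

FA ranges from 0 (isotropic diffusion, all eigenvalues equal) to 1 (diffusion along a single axis), which is why it drops when a nerve's microstructure is disrupted.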
Fujii, Hidemichi; Nakagawa, Kei; Kagabu, Makoto
2016-11-01
Groundwater nitrate pollution is one of the most prevalent water-related environmental problems worldwide. The objective of this study is to identify the determinants of nitrogen pollutant changes with a focus on the nitrogen generation process. The novelty of our research framework is to cost-effectively identify the factors involved in nitrogen pollutant generation using public data. This study focuses on three determinant factors: (1) nitrogen intensity changes, (2) structural changes, and (3) scale changes. This study empirically analyses three sectors, including crop production, farm animals, and the household, on the Shimabara Peninsula in Japan. Our results show that the nitrogen supply from crop production sectors has decreased because the production has been scaled down and shifted towards lower nitrogen intensive crops. In the farm animal sector, the nitrogen supply has also been successfully reduced due to scaling-down efforts. Households have decreased the nitrogen supply by diffusion of integrated septic tank and sewerage systems.
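The three-factor split used here (intensity, structure, scale) is a standard index decomposition: writing each sub-sector's load as N_i = (N_i/A_i)·(A_i/A)·A for activity A, the change in total load can be attributed additively to the three factors, e.g. with the logarithmic mean divisia index (LMDI). A sketch with made-up sub-sector numbers (the paper's actual data and method details are not reproduced here):

```python
import numpy as np

def logmean(a, b):
    """Logarithmic mean, the LMDI weighting function."""
    return a if a == b else (a - b) / (np.log(a) - np.log(b))

def lmdi_decompose(A0, N0, A1, N1):
    """Additive LMDI: split the change in total load sum(N1) - sum(N0) into
    intensity (N_i/A_i), structure (A_i/A), and scale (A) effects.
    A*: activity by sub-sector (e.g. planted area); N*: load by sub-sector."""
    A0, N0, A1, N1 = (np.asarray(v, dtype=float) for v in (A0, N0, A1, N1))
    tot0, tot1 = A0.sum(), A1.sum()
    eff = {"intensity": 0.0, "structure": 0.0, "scale": 0.0}
    for a0, n0, a1, n1 in zip(A0, N0, A1, N1):
        w = logmean(n1, n0)
        eff["intensity"] += w * np.log((n1 / a1) / (n0 / a0))
        eff["structure"] += w * np.log((a1 / tot1) / (a0 / tot0))
        eff["scale"] += w * np.log(tot1 / tot0)
    return eff

# Hypothetical two-sub-sector example: activity and nitrogen load at times 0 and 1
eff = lmdi_decompose(A0=[10.0, 20.0], N0=[5.0, 8.0], A1=[8.0, 25.0], N1=[4.0, 10.0])
```

A useful property of LMDI is that the three effects sum exactly to the observed change in total load, leaving no unexplained residual.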
NASA Astrophysics Data System (ADS)
Azimi-Sadjadi, Mahmood R.; Pezeshki, Ali; Wade, Robert L.
2004-09-01
Sparse array processing methods are typically used to improve the spatial resolution of sensor arrays for the estimation of direction of arrival (DOA). The fundamental assumption behind these methods is that the signals received by the sparse sensors (or groups of sensors) are coherent. However, coherence may vary significantly with changes in environmental, terrain, and operating conditions. In this paper, canonical correlation analysis is used to study the variations in coherence between pairs of sub-arrays in a sparse array problem. The data set for this study is a subset of an acoustic signature data set acquired from the US Army TACOM-ARDEC, Picatinny Arsenal, NJ. This data set was collected using three wagon-wheel type arrays with five microphones. The results show that in nominal operating conditions, i.e., no extreme wind noise or masking effects by trees, buildings, etc., the signals collected at different sensor arrays are indeed coherent even at distant node separation.
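Canonical correlation analysis between two sub-arrays can be sketched via QR orthogonalization and an SVD: the singular values of the product of the two orthonormal bases are the canonical correlations, and a large leading value indicates a coherent source common to both sub-arrays. The snapshot data below are synthetic (one shared source plus independent sensor noise), not the TACOM-ARDEC recordings:

```python
import numpy as np

def canonical_correlations(X, Y):
    """Canonical correlations between two snapshot matrices (samples x channels),
    computed via reduced QR and SVD; returned in descending order."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    qx, _ = np.linalg.qr(X)
    qy, _ = np.linalg.qr(Y)
    s = np.linalg.svd(qx.T @ qy, compute_uv=False)
    return np.clip(s, 0.0, 1.0)

rng = np.random.default_rng(1)
common = rng.normal(size=(500, 1))  # one coherent source seen by both sub-arrays
X = common @ rng.normal(size=(1, 5)) + 0.1 * rng.normal(size=(500, 5))
Y = common @ rng.normal(size=(1, 5)) + 0.1 * rng.normal(size=(500, 5))
rho = canonical_correlations(X, Y)
```

Here the leading canonical correlation is close to 1 (the shared source), while the remaining ones stay near the chance level set by the independent noise.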
Souza-Corrêa, J A; Ridenti, M A; Oliveira, C; Araújo, S R; Amorim, J
2013-03-21
Mass spectrometry was used to monitor neutral chemical species from sugar cane bagasse that could volatilize during the bagasse ozonation process. Lignin fragments and some radicals liberated by direct ozone reaction with the biomass structure were detected. Ozone density was monitored during ozonation by optical absorption spectroscopy. The optical results indicated that the ozone interaction with the bagasse material was more effective for bagasse particle sizes less than or equal to 0.5 mm. Both techniques showed that the best condition for ozone diffusion in the bagasse was at 50% moisture content. In addition, Fourier transform infrared spectroscopy (FTIR) and scanning electron microscopy (SEM) were employed to analyze the lignin bond disruptions and the morphological changes of the bagasse surface caused by the ozonolysis reactions. Appropriate chemical characterization of the lignin content in bagasse before and after ozonation was also carried out.
NASA Astrophysics Data System (ADS)
Ohiwa, Norio; Ishino, Yojiro; Yamamoto, Atsunori; Yamakita, Ryuji
To elucidate the possibility and utility of thermal recycling of waste plastic resin from a basic and microscopic viewpoint, a series of abrupt heating processes of a spherical micro plastic particle about 200 μm in diameter is observed when it is abruptly exposed to hot oxidizing combustion gas. Three ingenious devices are introduced, and two typical plastic resins, polyethylene terephthalate and polyethylene, are used. In this paper the dependence of the internal and external appearance of residual plastic embers on the heating time and the ingredients of the plastic resins is optically analyzed, along with the appearance of internal micro bubbling, multiple micro explosions and jets, and micro diffusion flames during abrupt heating. Based on temporal variations of the surface area of a micro plastic particle, the apparent burning rate constant is also evaluated and compared with those of well-known volatile liquid fuels.
A process-based decomposition of decadal-scale surface temperature evolutions over East Asia
NASA Astrophysics Data System (ADS)
Chen, Junwen; Deng, Yi; Lin, Wenshi; Yang, Song
2017-08-01
This study partitions the observed decadal evolution of surface temperature, and the surface temperature differences between two decades (early 2000s and early 1980s), over the East Asian continent into components associated with individual radiative and non-radiative (dynamical) processes in the context of the coupled atmosphere-surface climate feedback-response analysis method (CFRAM). Rapid warming in this region occurred in the late 1980s and early 2000s, with a transient pause of warming between the two periods. The rising CO2 concentration provides a sustained, region-wide warming contribution, and the surface albedo effect, largely related to snow cover change, is important for warming/cooling over high-latitude and high-elevation regions. Sensible heat flux and surface dynamics dominate the evolution of surface temperature, with latent heat flux and atmospheric dynamics working against them mostly through large-scale and convective/turbulent heat transport. Cloud, via its shortwave effect, provides positive contributions to warming over southern Siberia and South China. The longwave effect associated with water vapor change contributes significant warming over northern India, the Tibetan Plateau, and central Siberia. The impacts of solar irradiance and ozone changes are relatively small. The strongest year-to-year temperature fluctuations occurred during a rapid warming (1987-1988) and a rapid cooling (1995-1996) period. The pattern of the rapid warming receives major positive contributions from sensible heat flux, with changes in atmospheric dynamics, water vapor, clouds, and albedo providing secondary positive contributions, while surface dynamics and latent heat flux provide negative contributions. The signs of the contributions of individual processes to the rapid cooling are almost opposite to those to the rapid warming.
Chen, Wen-Shing; Liang, Jing-Song
2008-06-01
Oxidative degradation of dinitrotoluene (DNT) isomers and 2,4,6-trinitrotoluene (TNT) in spent acid was conducted with Electro-Fenton reagents. Electrolytic experiments were carried out to elucidate the influence of various operating parameters on the mineralization of total organic compounds (TOC) in spent acid, including reaction temperature, dosage of oxygen, sulfuric acid concentration and dosage of ferrous ions. It is worth noting that the organic compounds could be completely destroyed by the Electro-Fenton reagent with in situ electrogenerated hydrogen peroxide obtained from cathodic reduction of oxygen, which was mainly supplied by anodic oxidation of water. Based on the spectra analyzed by gas chromatography/mass spectrometry, it is proposed that initial denitration of 2,4,6-TNT gives rise to the formation of 2,4-DNT and/or 2,6-DNT, which undergo cleavage of a nitro group into o-mononitrotoluene, followed by denitration to toluene and subsequent oxidation of the methyl group. Owing to the simultaneous removal of both TOC and part of the water, the electrolytic method established here could potentially be applied in practice to regenerate spent acid from toluene nitration processes.
Wang, Yongjiang; Witarsa, Freddy
2016-11-01
An integrated model was developed by associating separate degradation kinetics for an array of degradations during a decomposition process, which was considered a novelty of this study. The raw composting material was divided into soluble matter, hemi-/cellulose, lignin, NBVS, ash, water, and free air-space. Considering their specific capabilities for expressing certain degradation phenomena, Contois, Tessier (an extension of the Monod kinetic), and first-order kinetics were employed to calculate the biochemical rates. It was found that the degradation of the soluble substrate was relatively fast, reaching a maximum rate of about 0.4 per hour. The hydrolysis of lignin was rate-limiting, with a maximum rate of about 0.04 per hour. The dry-based peak concentrations of soluble, hemi-/cellulose and lignin degraders were about 0.9, 0.2 and 0.3 kg m⁻³, respectively. The developed model, as a platform, allows degradation simulation of composting material that can be separated into the different components used in this study.
Morphology and phase modifications of MoO₃ obtained by metallo-organic decomposition processes
Barros Santos, Elias de; Martins de Souza e Silva, Juliana; Odone Mazali, Italo
2010-11-15
Molybdenum oxide samples were prepared using different temperatures and atmospheric conditions by metallo-organic decomposition processes and were characterized by XRD, SEM and DRS UV/Vis and Raman spectroscopies. Variation in the synthesis conditions resulted in solids with different morphologies and oxygen vacancy concentrations. Intense characteristic Raman bands of crystalline orthorhombic α-MoO₃, occurring at 992 cm⁻¹ and 820 cm⁻¹, are observed and their shifts can be related to the differences in the structure of the solids obtained. The sample obtained under nitrogen flow at 1073 K is a phase mixture of orthorhombic α-MoO₃ and monoclinic β-MoO₃. The characterization results suggest that the molybdenum oxide samples are non-stoichiometric and are described as MoOₓ with x < 2.94. Variations in the reaction conditions make it possible to tune the number of oxygen defects and the band gap of the final material.
NASA Astrophysics Data System (ADS)
Qian, Xi-Yuan; Gu, Gao-Feng; Zhou, Wei-Xing
2011-11-01
Detrended fluctuation analysis (DFA) is a simple but very efficient method for investigating the power-law long-term correlations of non-stationary time series, in which a detrending step is necessary to obtain the local fluctuations at different timescales. We propose to determine the local trends through empirical mode decomposition (EMD) and perform the detrending operation by removing the EMD-based local trends, which gives an EMD-based DFA method. Similarly, we also propose a modified multifractal DFA algorithm, called an EMD-based MFDFA. The performance of the EMD-based DFA and MFDFA methods is assessed with extensive numerical experiments based on fractional Brownian motion and multiplicative cascading processes. We find that the EMD-based DFA method performs better than the classic DFA method in the determination of the Hurst index when the time series is strongly anticorrelated, and the EMD-based MFDFA method outperforms the traditional MFDFA method when the moment order q of the detrended fluctuations is positive. We apply the EMD-based MFDFA to the 1-min data of the Shanghai Stock Exchange Composite Index, and the presence of multifractality is confirmed. We also analyze daily Austrian electricity prices and confirm their anti-persistence.
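For orientation, the classic DFA procedure the abstract builds on can be sketched compactly (EMD itself requires a sifting routine and is omitted here; the window sizes and the order-1 polynomial detrending below are illustrative choices, and the step the paper replaces with EMD-based trends is the `polyfit` detrending):

```python
import numpy as np

def dfa(x, scales, order=1):
    """Classic DFA: scaling exponent alpha from the power law
    F(s) ~ s**alpha of detrended fluctuations F at window size s."""
    y = np.cumsum(x - np.mean(x))          # profile of the series
    F = []
    for s in scales:
        n = len(y) // s                    # number of full windows
        segs = y[:n * s].reshape(n, s)
        t = np.arange(s)
        # polynomial detrending in each window (the step EMD replaces)
        resid = [seg - np.polyval(np.polyfit(t, seg, order), t)
                 for seg in segs]
        F.append(np.sqrt(np.mean(np.concatenate(resid) ** 2)))
    # slope of log F(s) versus log s estimates the scaling exponent
    alpha, _ = np.polyfit(np.log(scales), np.log(F), 1)
    return alpha

rng = np.random.default_rng(0)
alpha = dfa(rng.standard_normal(20000), scales=[16, 32, 64, 128, 256])
print(round(alpha, 2))   # close to 0.5 for uncorrelated noise
```

For uncorrelated noise the exponent is near 0.5; persistent series give values above 0.5 and anti-persistent series below, which is the sense in which the abstract's electricity-price result is read.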
NASA Astrophysics Data System (ADS)
Gómez-Núñez, Alberto; Roura, Pere; López, Concepción; Vilà, Anna
2016-09-01
Four inks for the production of ZnO semiconducting films have been prepared with zinc acetate dihydrate as precursor salt and one among the following aminoalcohols: aminopropanol (APr), aminomethyl butanol (AMB), aminophenol (APh) and aminobenzyl alcohol (AB) as stabilizing agent. Their thermal decomposition process has been analyzed in situ by thermogravimetric analysis (TGA), differential scanning calorimetry (DSC) and evolved gas analysis (EGA), whereas the solid product has been analyzed ex situ by X-ray diffraction (XRD) and infrared spectroscopy (IR). Although, except for the APh ink, crystalline ZnO is already obtained at 300 °C, the films contain an organic residue that evolves at higher temperature in the form of a large variety of nitrogen-containing cyclic compounds. The results indicate that APr can be a better stabilizing agent than ethanolamine (EA): it gives larger ZnO crystal sizes with similar carbon content. However, a common drawback of all the amino stabilizers (EA included) is that nitrogen atoms have not been completely removed from the ZnO film at the highest temperature of our experiments (600 °C).
McMillen, D.F.; Golden, D.M.
1981-11-12
Very Low-Pressure Pyrolysis studies of 2,4-dinitrotoluene decomposition resulted in decomposition rates consistent with log (k/s) = 12.1 - 43.9/2.3 RT. These results support the conclusion that previously reported 'anomalously' low Arrhenius parameters for the homogeneous gas-phase decomposition of ortho-nitrotoluene actually represent surface-catalyzed reactions. Preliminary qualitative results for pyrolysis of ortho-nitrotoluene in the absence of hot reactor walls, using the Laser-Powered Homogeneous Pyrolysis technique (LPHP), provide further support for this conclusion: only products resulting from Ph-NO2 bond scission were observed; no products indicating complex intramolecular oxidation-reduction or elimination processes could be detected. The LPHP technique was successfully modified to use a pulsed laser and a heated flow system, so that the technique becomes suitable for study of surface-sensitive, low vapor pressure substrates such as TNT. The validity and accuracy of the technique was demonstrated by applying it to the decomposition of substances whose Arrhenius parameters for decomposition were already well known. IR-fluorescence measurements show that the temperature-space-time behavior under the present LPHP conditions is in agreement with expectations and with requirements which must be met if the method is to have quantitative validity. LPHP studies of azoisopropane decomposition, chosen as a radical-forming test reaction, show the accepted literature parameters to be substantially in error and indicate that the correct values are in all probability much closer to those measured in this work: log (k/s) = 13.9 - 41.2/2.3 RT.
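Arrhenius fits in this logarithmic form are straightforward to evaluate; a sketch assuming the activation energy is in kcal/mol, R = 1.987×10⁻³ kcal mol⁻¹ K⁻¹, and "2.3" stands for the ln→log10 conversion factor 2.303 (the 1000 K evaluation temperature is an arbitrary illustrative choice):

```python
R = 1.987e-3          # gas constant, kcal mol^-1 K^-1

def k_arrhenius(log_a, ea_kcal, temp_k):
    """Rate constant (s^-1) from log10 k = log A - Ea / (2.303 R T)."""
    return 10.0 ** (log_a - ea_kcal / (2.303 * R * temp_k))

# 2,4-dinitrotoluene fit from the abstract: log (k/s) = 12.1 - 43.9/2.3RT
k1000 = k_arrhenius(12.1, 43.9, 1000.0)
print(f"k = {k1000:.0f} s^-1 at 1000 K")  # a few hundred per second
```

The same helper applied to the azoisopropane fit (log A = 13.9, Ea = 41.2 kcal/mol) shows how a modest change in the pair (log A, Ea) shifts the rate by orders of magnitude, which is why the 'anomalously' low parameters discussed above mattered.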
NASA Astrophysics Data System (ADS)
Aragón, Roxana; Montti, Lia; Ayup, María Marta; Fernández, Romina
2014-01-01
Invasions of exotic tree species can cause profound changes in community composition and structure, and may even cause legacy effects on nutrient cycling via litter production. In this study, we compared leaf litter decomposition of two invasive exotic trees (Ligustrum lucidum and Morus sp.) and two dominant native trees (Cinnamomum porphyria and Cupania vernalis) in native and invaded (Ligustrum-dominated) forest stands in NW Argentina. We measured leaf attributes and environmental characteristics in invaded and native stands to isolate the effects of litter quality and habitat characteristics. Species differed in their decomposition rates and, as predicted by the different species colonization status (pioneer vs. late successional), exotic species decayed more rapidly than native ones. Invasion by L. lucidum modified environmental attributes by reducing soil humidity. Decomposition constants (k) tended to be slightly lower (-5%) for all species in invaded stands. High SLA, low tensile strength, and low C:N of Morus sp. distinguish this species from the native ones and explain its higher decomposition rate. Contrary to our expectations, L. lucidum leaf attributes were similar to those of native species. Decomposition rates also differed between the two exotic species (35% higher in Morus sp.), presumably due to leaf attributes and colonization status. Given the high decomposition rate of L. lucidum litter (more than 6 times that of natives) we expect an acceleration of nutrient circulation at the ecosystem level in Ligustrum-dominated stands. This may occur in spite of the modified environmental conditions that are associated with L. lucidum invasion.
Fuel decomposition and boundary-layer combustion processes of hybrid rocket motors
NASA Technical Reports Server (NTRS)
Chiaverini, Martin J.; Harting, George C.; Lu, Yeu-Cherng; Kuo, Kenneth K.; Serin, Nadir; Johnson, David K.
1995-01-01
Using a high-pressure, two-dimensional hybrid motor, an experimental investigation was conducted on fundamental processes involved in hybrid rocket combustion. HTPB (Hydroxyl-terminated Polybutadiene) fuel cross-linked with diisocyanate was burned with GOX (gaseous oxygen) under various operating conditions. Large-amplitude pressure oscillations were encountered in earlier test runs. After identifying the source of instability and decoupling the GOX feed-line system and combustion chamber, the pressure oscillations were drastically reduced from +/-20% of the localized mean pressure to an acceptable range of +/-1.5%. Embedded fine-wire thermocouples indicated that the surface temperature of the burning fuel was around 1000 K, depending upon axial location and operating conditions. Also, except near the leading-edge region, the subsurface thermal wave profiles in the upstream locations are thicker than those in the downstream locations, since the solid-fuel regression rate, in general, increases with distance along the fuel slab. The recovered solid fuel slabs in the laminar portion of the boundary layer exhibited smooth surfaces, indicating the existence of a liquid melt layer on the burning fuel surface in the upstream region. After the transition section, which displayed distinct transverse striations, the surface roughness pattern became quite random and very pronounced in the downstream turbulent boundary-layer region. Both real-time X-ray radiography and ultrasonic pulse-echo techniques were used to determine the instantaneous web thickness burned and instantaneous solid-fuel regression rates over certain portions of the fuel slabs. Globally averaged and axially dependent but time-averaged regression rates were also obtained and presented.
Chao, T.T.; Sanzolone, R.F.
1992-01-01
Sample decomposition is a fundamental and integral step in the procedure of geochemical analysis. It is often the limiting factor in sample throughput, especially with the recent application of fast, modern multi-element measurement instrumentation. The complexity of geological materials makes it necessary to choose a sample decomposition technique that is compatible with the specific objective of the analysis. When selecting a decomposition technique, consideration should be given to the chemical and mineralogical characteristics of the sample, the elements to be determined, precision and accuracy requirements, sample throughput, the technical capability of personnel, and time constraints. This paper addresses these concerns and discusses the attributes and limitations of many techniques of sample decomposition, along with examples of their application to geochemical analysis. The chemical properties of reagents in their function as decomposition agents are also reviewed. The section on acid dissolution techniques addresses the various inorganic acids that are used individually or in combination in both open and closed systems. Fluxes used in sample fusion are discussed. The promising microwave-oven technology and the emerging field of automation are also examined. A section on applications highlights the use of decomposition techniques for the determination of Au, platinum group elements (PGEs), Hg, U, hydride-forming elements, rare earth elements (REEs), and multi-elements in geological materials. Partial dissolution techniques used for geochemical exploration, which have been treated in detail elsewhere, are not discussed here; nor are fire-assaying for noble metals and decomposition techniques for X-ray fluorescence or nuclear methods discussed. © 1992.
Lee, Joo Won; Thomas, Leonard C; Jerrell, John; Feng, Hao; Cadwallader, Keith R; Schmidt, Shelly J
2011-01-26
High performance liquid chromatography (HPLC) on a calcium-form cation exchange column with refractive index and photodiode array detection was used to investigate thermal decomposition as the cause of the loss of crystalline structure in sucrose. Crystalline sucrose structure was removed using a standard differential scanning calorimetry (SDSC) method (fast heating method) and a quasi-isothermal modulated differential scanning calorimetry (MDSC) method (slow heating method). In the fast heating method, initial decomposition components, glucose (0.365%) and 5-HMF (0.003%), were found in the sucrose sample coincident with the onset temperature of the first endothermic peak. In the slow heating method, glucose (0.411%) and 5-HMF (0.003%) were found in the sucrose sample coincident with the holding time (50 min) at which the reversing heat capacity began to increase. In both methods, even before the crystalline structure in sucrose was completely removed, unidentified thermal decomposition components were formed. These results prove not only that the loss of crystalline structure in sucrose is caused by thermal decomposition, but also that it is achieved via a time-temperature combination process. This knowledge is important for quality assurance purposes and for developing new sugar-based food and pharmaceutical products. In addition, this research provides new insights into the caramelization process, showing that caramelization can occur under low-temperature (significantly below the literature-reported melting temperature), albeit longer-time, conditions.
NASA Astrophysics Data System (ADS)
Narain, Gaurav; Sasakura, Naoki
2017-07-01
The canonical tensor model (CTM) is a tensor model formulated in the Hamilton formalism as a totally constrained system with first class constraints, the algebraic structure of which is very similar to that of the ADM formalism of general relativity. It has recently been shown that a formal continuum limit of the classical equation of motion of CTM in a derivative expansion of the tensor up to the fourth derivatives agrees with that of a coupled system of general relativity and a scalar field in the Hamilton-Jacobi formalism. This suggests the existence of a ‘mother’ tensor model which derives CTM through the Hamilton-Jacobi procedure, and we have successfully found such a ‘mother’ CTM (mCTM) in this paper. The quantization of the mCTM is as straightforward as the CTM. However, we have not been able to identify all the secondary constraints, and therefore the full structure of the model has been left for future study. Nonetheless, we have found some exact physical wave functions and classical phase spaces, which can be shown to solve the primary and all the (possibly infinite) secondary constraints in the quantum and classical cases, respectively, and have thereby proven the non-triviality of the model. It has also been shown that the mCTM has more interesting dynamics than the CTM from the perspective of randomly connected tensor networks.
Rank-based decompositions of morphological templates.
Sussner, P; Ritter, G X
2000-01-01
Methods for matrix decomposition have found numerous applications in image processing, in particular for the problem of template decomposition. Since existing matrix decomposition techniques are mainly concerned with the linear domain, we consider it timely to investigate matrix decomposition techniques in the nonlinear domain with applications in image processing. The mathematical basis for these investigations is the new theory of rank within minimax algebra. Thus far, only minimax decompositions of rank 1 and rank 2 matrices into outer product expansions are known to the image processing community. We derive a heuristic algorithm for the decomposition of matrices having arbitrary rank.
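In minimax algebra the outer "product" is additive: a rank-1 template satisfies t(i, j) = r(i) + c(j), and grey-scale dilation by such a template factors into two one-dimensional dilations. A small self-contained check of this separability (plain loops, toy sizes; the template values are arbitrary illustrations, not from the paper):

```python
import numpy as np

def dilate(f, t):
    """Grey-scale dilation: out[x,y] = max over (i,j) of f[x-i, y-j] + t[i,j],
    treating out-of-range image values as -inf."""
    H, W = f.shape
    m, n = t.shape
    out = np.full((H, W), -np.inf)
    for x in range(H):
        for y in range(W):
            for i in range(m):
                for j in range(n):
                    if 0 <= x - i < H and 0 <= y - j < W:
                        out[x, y] = max(out[x, y], f[x - i, y - j] + t[i, j])
    return out

rng = np.random.default_rng(1)
f = rng.integers(0, 10, (6, 7)).astype(float)

r = np.array([0.0, 2.0, 1.0])      # column template, offsets (i, 0)
c = np.array([0.0, 3.0])           # row template, offsets (0, j)
t = r[:, None] + c[None, :]        # rank-1 minimax outer product

# Separability: dilating by t equals dilating by r then by c,
# because dilation is associative and t = r (+) c in minimax algebra.
sep_ok = np.array_equal(dilate(f, t),
                        dilate(dilate(f, r[:, None]), c[None, :]))
print("separable decomposition verified:", sep_ok)
```

Decomposing a large template this way replaces one (m × n)-tap dilation with an m-tap and an n-tap pass, which is the computational payoff the template decomposition literature is after.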
NASA Astrophysics Data System (ADS)
Chatzistavrakidis, Athanasios; Khoo, Fech Scen; Roest, Diederik; Schupp, Peter
2017-03-01
The particular structure of Galileon interactions allows for higher-derivative terms while retaining second order field equations for scalar fields and Abelian p-forms. In this work we introduce an index-free formulation of these interactions in terms of two sets of Grassmannian variables. We employ this to construct Galileon interactions for mixed-symmetry tensor fields and coupled systems thereof. We argue that these tensors are the natural generalization of scalars with Galileon symmetry, similar to p-forms and scalars with a shift-symmetry. The simplest case corresponds to linearised gravity with Lovelock invariants, relating the Galileon symmetry to diffeomorphisms. Finally, we examine the coupling of a mixed-symmetry tensor to gravity, and demonstrate in an explicit example that the inclusion of appropriate counterterms retains second order field equations.
Jha, Abhinav K.; Barrett, Harrison H.; Frey, Eric C.; Clarkson, Eric; Caucci, Luca; Kupinski, Matthew A.
2015-01-01
Recent advances in technology are enabling a new class of nuclear imaging systems consisting of detectors that use real-time maximum-likelihood (ML) methods to estimate the interaction position, deposited energy, and other attributes of each photon-interaction event and store these attributes in a list format. This class of systems, which we refer to as photon-processing (PP) nuclear imaging systems, can be described by a fundamentally different mathematical imaging operator that allows processing of the continuous-valued photon attributes on a per-photon basis. Unlike conventional photon-counting (PC) systems that bin the data into images, PP systems do not have any binning-related information loss. Mathematically, while PC systems have an infinite-dimensional null space due to dimensionality considerations, PP systems do not necessarily suffer from this issue. Therefore, PP systems have the potential to provide improved performance in comparison to PC systems. To study these advantages, in this paper, we propose a framework to perform the singular-value decomposition (SVD) of the PP imaging operator. We use this framework to perform the SVD of operators that describe a general two-dimensional (2D) planar linear shift-invariant (LSIV) PP system and a hypothetical continuously rotating 2D single-photon emission computed tomography (SPECT) PP system. We then discuss two applications of the SVD framework. The first application is to decompose the object being imaged by the PP imaging system into measurement and null components. We compare these components to the measurement and null components obtained with PC systems. In the process, we also present a procedure to compute the null functions for a PC system. The second application is designing analytical reconstruction algorithms for PP systems. The proposed analytical method exploits the fact that PP systems acquire data in a continuous domain to estimate a continuous object function. The approach is parallelizable
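The PP operator in the abstract is continuous-to-discrete, but the measurement/null split it describes can be illustrated with a finite matrix stand-in (a toy analogy only; sizes and values below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
H = rng.standard_normal((3, 8))   # toy imaging operator: 3 measurements, 8-voxel object
f = rng.standard_normal(8)        # toy object

U, s, Vt = np.linalg.svd(H, full_matrices=False)
r = int(np.sum(s > 1e-12 * s.max()))   # numerical rank
V = Vt[:r].T                           # basis of the measurement (row) space

f_meas = V @ (V.T @ f)            # component the system can "see"
f_null = f - f_meas               # invisible (null-space) component

# The null component produces no data: H acts on f only through f_meas.
print(np.max(np.abs(H @ f_null)))
```

Because the toy operator maps 8 unknowns to 3 measurements, its null space is 5-dimensional; the abstract's point is that PC systems are stuck with an infinite-dimensional analogue of this null space, while PP systems need not be.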
Sharma, Sandeep Kumar; Roudaut, Gaëlle; Fabing, Isabelle; Duplâtre, Gilles
2010-11-14
The triplet state of positronium, o-Ps, is used as a probe to characterize a starch-20% w/w sucrose matrix as a function of temperature (T). A two-step decomposition (of sucrose, and then starch) starts at 440 K, as shown by a decrease in the o-Ps intensity (I(3)) and lifetime (τ(3)), the latter also disclosing the occurrence of a glass transition. Upon sucrose decomposition, the matrix acquires properties (reduced size and density of nanoholes) that are different from those of pure starch. A model describing the variations of both I(3) and τ(3) with T is successfully established and yields a glass transition temperature, T(g) = (446 ± 2) K, in spite of the concomitant sucrose decomposition. Unexpectedly, the starch volume fraction (as probed through thermal gravimetry) decreases with T at a higher rate than the free volume fraction (as probed through PALS).
NASA Astrophysics Data System (ADS)
Heil, Konstantin; Moroianu, Andrei; Semmelmann, Uwe
2017-07-01
We show that Killing tensors on conformally flat n-dimensional tori whose conformal factor only depends on one variable are polynomials in the metric and in the Killing vector fields. In other words, every first integral of the geodesic flow polynomial in the momenta on the sphere bundle of such a torus is linear in the momenta.
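For context (standard facts about Killing tensors, not claims from the abstract): a symmetric tensor $K$ of degree $p$ satisfying the Killing equation gives a first integral of the geodesic flow that is homogeneous of degree $p$ in the momenta,

```latex
\nabla_{(\mu} K_{\nu_1 \cdots \nu_p)} = 0
\quad\Longrightarrow\quad
I \;=\; K_{\nu_1 \cdots \nu_p}\, p^{\nu_1} \cdots p^{\nu_p}
\ \text{is constant along geodesics.}
```

The theorem above says that on such tori every such integral is already generated by the metric (itself a degree-2 Killing tensor) and by Killing vector fields (degree 1), so no genuinely new higher-degree integrals exist there.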
ERIC Educational Resources Information Center
Napier, J.
1988-01-01
Outlines the role of the main organisms involved in woodland decomposition and discusses some of the variables affecting the rate of nutrient cycling. Suggests practical work that may be of value to high school students either as standard practice or long-term projects. (CW)
NASA Astrophysics Data System (ADS)
Viswanathan, R.; Thompson, Donald L.; Raff, L. M.
1984-05-01
The rates and mechanism for the unimolecular decomposition of SiH4 have been investigated using quasiclassical trajectory methods to follow the dynamics and Metropolis sampling procedures to average over the initial SiH4 phase space. The semiempirical potential-energy surface has been fitted to scaled SCF calculations and to a variety of experimental data. It gives the correct SiH4 equilibrium structure, reaction endothermicities, and bond energies for SiH4, SiH3, and SiH2. All hydrogen atoms are treated in an equivalent fashion. Excellent first-order decay plots are obtained for the microcanonical rates for the total SiH4 decomposition as well as for the separate decomposition channels. The low-energy pathway is found to be a three-center elimination to form SiH2+H2. The decomposition channel forming SiH3+H becomes important only at internal SiH4 energies in excess of 5.0 eV. Comparison of computed falloff curves with RRKM calculations fitted to experimental results indicates that the critical threshold energy for the three-center reaction lies in the range 2.10
Zou, Min Wang, Xin Jiang, Xiaohong Lu, Lude
2014-05-01
The catalyzed thermal decomposition of ammonium perchlorate (AP) over neodymium oxide (Nd2O3) was investigated. The catalytic performance of nanometer-sized and micrometer-sized Nd2O3 was evaluated by differential scanning calorimetry (DSC). Contrary to the usual expectation, catalysts of the two sizes showed nearly identical catalytic activities. Based on the structural and morphological changes of the catalysts during the reaction, combined with mass spectrometric analyses and studies of unmixed samples, a new understanding of this catalytic process was proposed. We believe that the newly formed neodymium oxychloride (NdOCl) was the actual catalytic species in the overall thermal decomposition of AP over Nd2O3, and that the “self-distributed” process occurring within the reaction also contributed to the improvement in overall catalytic activity. This work is of great value in understanding the role of micrometer-sized catalysts in heterogeneous reactions, especially solid-solid reactions that generate large quantities of gaseous species. - Graphical abstract: In-situ and self-distributed reaction process in the thermal decomposition of AP catalyzed by Nd2O3. - Highlights: • Micro- and nano-Nd2O3 for the catalytic thermal decomposition of AP. • No essential difference in their catalytic performance. • Structural and morphological changes of the catalysts reveal the catalytic mechanism. • The catalytic process is an “in-situ and self-distributed” one.
NASA Astrophysics Data System (ADS)
Klimek, Beata; Niklińska, Maria; Chodak, Marcin
2013-04-01
Temperature is one of the most important factors affecting soil organic matter decomposition. Mountain areas, with their vertical gradients of temperature and precipitation, allow climate differences similar to those observed across latitudes to be studied and may serve as an approximation of climatic change. The aim of the study was to compare the effects of climatic conditions and initial litter properties on decomposition processes and the thermal sensitivity of forest litter. Litter was collected at three altitudes (600, 900 and 1200 m a.s.l.) in the Beskidy Mts (southern Poland), placed in litter-bags and exposed in the field from autumn 2011. Litter collected at a given altitude was exposed both at the altitude from which it was taken and at the two other altitudes. The litter-bags were laid out on five mountains, treated as replicates. Starting in April 2012, single sets of litter-bags were collected every five weeks. Laboratory measurements included determination of dry mass loss and the chemical composition (Corg, Nt, St, Mg, Ca, Na, K, Cu, Zn) of the litter. In additional litter-bag sets, taken in spring and autumn 2012, microbial properties were measured. To determine the effect of litter properties and the climatic conditions of the elevation sites on the thermal sensitivity of decomposing litter, the respiration rate of the litter was measured at 5°C, 15°C and 25°C and expressed as Q10 L and Q10 H (the ratios of respiration rates between 5°C and 15°C and between 15°C and 25°C, respectively). The functional diversity of soil microbes was measured with Biolog® ECO plates and the structural diversity with phospholipid fatty acids (PLFA). Litter mass lost during the first year of incubation was highly variable, with mean mass loss of up to 30% of the initial mass. After the autumn sampling, the mean respiration rate of litter (per dry mass) from the 600 m a.s.l. site exposed at 600 m a.s.l. was the highest at each tested temperature. In turn, the lowest mean
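The Q10 measure used in this study is simply the factor by which the respiration rate increases over a 10°C interval; a minimal sketch (function name and the rate values are illustrative, not taken from the study):

```python
def q10(rate_low, rate_high, t_low, t_high):
    """Temperature sensitivity: factor by which the rate grows per 10 deg C."""
    return (rate_high / rate_low) ** (10.0 / (t_high - t_low))

# For the 10-degree intervals used here the exponent is 1, so Q10 reduces
# to a plain ratio of respiration rates (hypothetical values below).
q10_l = q10(rate_low=0.8, rate_high=1.6, t_low=5.0, t_high=15.0)
q10_h = q10(rate_low=1.6, rate_high=2.4, t_low=15.0, t_high=25.0)
print(q10_l, q10_h)
```

With these illustrative rates, Q10 L = 2.0 and Q10 H = 1.5, i.e. the litter respiration is more temperature-sensitive in the lower interval.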
xTensor: a Free Fast Abstract Tensor Manipulator
NASA Astrophysics Data System (ADS)
Martín-García, José M.
2008-09-01
The package xTensor is introduced, a very fast and general manipulator of tensor expressions for Mathematica. Manifolds and vector bundles can be defined containing tensor fields with arbitrary symmetry, connections of any type, metrics and other objects. Based on the Penrose abstract-index notation, xTensor has a single canonicalizer which fully simplifies all expressions, using highly efficient techniques of computational group theory. A number of companion packages have been developed to address particular problems in General Relativity, like metric perturbation theory or the manipulation of the Riemann tensor.
Diffusion Tensor Image Registration Using Hybrid Connectivity and Tensor Features
Wang, Qian; Yap, Pew-Thian; Wu, Guorong; Shen, Dinggang
2014-01-01
Most existing diffusion tensor imaging (DTI) registration methods estimate structural correspondences based on voxelwise matching of tensors. The rich connectivity information that is given by DTI, however, is often neglected. In this article, we propose to integrate complementary information given by connectivity features and tensor features for improved registration accuracy. To utilize connectivity information, we place multiple anchors representing different brain anatomies in the image space, and define the connectivity features for each voxel as the geodesic distances from all anchors to the voxel under consideration. The geodesic distance, which is computed in relation to the tensor field, encapsulates information of brain connectivity. We also extract tensor features for every voxel to reflect the local statistics of tensors in its neighborhood. We then combine both connectivity features and tensor features for registration of tensor images. From the images, landmarks are selected automatically and their correspondences are determined based on their connectivity and tensor feature vectors. The deformation field that deforms one tensor image to the other is iteratively estimated and optimized according to the landmarks and their associated correspondences. Experimental results show that, by using connectivity features and tensor features simultaneously, registration accuracy is increased substantially compared with the cases using either type of features alone. PMID:24293159
Evaluation of Bayesian tensor estimation using tensor coherence.
Kim, Dae-Jin; Kim, In-Young; Jeong, Seok-Oh; Park, Hae-Jeong
2009-06-21
Fiber tractography, a unique and non-invasive method to estimate axonal fibers within white matter, constructs the putative streamlines from diffusion tensor MRI by interconnecting voxels according to the propagation direction defined by the diffusion tensor. This direction has uncertainties due to the properties of underlying fiber bundles, neighboring structures and image noise. Therefore, robust estimation of the diffusion direction is essential to reconstruct reliable fiber pathways. For this purpose, we propose a tensor estimation method using a Bayesian framework, which includes an a priori probability distribution based on tensor coherence indices, to utilize both the neighborhood direction information and the inertia moment as regularization terms. The reliability of the proposed tensor estimation was evaluated using Monte Carlo simulations in terms of accuracy and precision with four synthetic tensor fields at various SNRs and in vivo human data of brain and calf muscle. The proposed Bayesian estimation demonstrated relative robustness to noise and higher reliability compared to simple tensor regression.
Wang, Fang; Ouyang, Guang; Zhou, Changsong; Wang, Suiping
2015-01-01
A number of studies have explored the time course of Chinese semantic and syntactic processing. However, whether syntactic processing occurs earlier than semantics during Chinese sentence reading is still under debate. To further explore this issue, an event-related potentials (ERPs) experiment was conducted on 21 native Chinese speakers who read individually-presented Chinese simple sentences (NP1+VP+NP2) word-by-word for comprehension and made semantic plausibility judgments. The transitivity of the verbs was manipulated to form three types of stimuli: congruent sentences (CON), sentences with a semantically violated NP2 following a transitive verb (semantic violation, SEM), and sentences with a semantically violated NP2 following an intransitive verb (combined semantic and syntactic violation, SEM+SYN). The ERPs evoked from the target NP2 were analyzed by using the Residue Iteration Decomposition (RIDE) method to reconstruct the ERP waveform blurred by trial-to-trial variability, as well as by using the conventional ERP method based on stimulus-locked averaging. The conventional ERP analysis showed that, compared with the critical words in CON, those in SEM and SEM+SYN elicited an N400–P600 biphasic pattern. The N400 effects in both violation conditions were of similar size and distribution, but the P600 in SEM+SYN was larger than that in SEM. Compared with the conventional ERP analysis, RIDE analysis revealed a larger N400 effect and an earlier P600 effect (in the time window of 500–800 ms instead of 570–810 ms). Overall, the combination of conventional ERP analysis and the RIDE method for compensating for trial-to-trial variability confirmed the non-significant difference between SEM and SEM+SYN in the earlier N400 time window. Converging with previous findings on other Chinese structures, the current study provides further precise evidence that syntactic processing in Chinese does not occur earlier than semantic processing. PMID:25615600
Endoscopic approach to tensor fold in patients with attic cholesteatoma.
Marchioni, Daniele; Mattioli, Francesco; Alicandri-Ciufelli, Matteo; Presutti, Livio
2009-09-01
The endoscopic approach to attic cholesteatoma allows clear observation of the tensor fold area and consequently, excision of the tensor fold, modifying the epitympanic diaphragm. This permits good removal of cholesteatoma and direct ventilation of the upper unit, preventing the development of a retraction pocket or attic cholesteatoma recurrence, with good functional results. An isthmus block associated with a complete tensor fold is a necessary condition for creation and development of an attic cholesteatoma. During surgical treatment of attic cholesteatoma, tensor fold removal is required to restore ventilation of the attic region. Use of a microscope does not allow exposure of the tensor fold area and so removal of the tensor fold can be very difficult. In contrast, the endoscope permits better visualization of the tensor fold area, and this aids understanding of the anatomy of the tensor fold and its removal, restoring attic ventilation. In all, 21 patients with limited attic cholesteatoma underwent an endoscopic approach with complete removal of the disease. Patients with a wide external ear canal were operated through an exclusively endoscopic transcanal approach; patients with a narrow external ear canal or who were affected by external canal exostosis were operated through a traditional retroauricular incision and meatoplasty followed by the endoscopic transcanal approach. In 18/21 patients, the endoscope permitted the discovery of different anatomical morphologies of the tensor fold. Sixteen patients presented a complete tensor fold (one with an anomalous transversal orientation), one patient presented an incomplete tensor fold and one patient presented a bony ridge in the cochleariform region. In all 16 cases of complete tensor tympani fold, the fold was removed and anterior epitympanic ventilation was restored. The ridge bone over the cochleariform process was also removed with a microdrill.
Entanglement, tensor networks and black hole horizons
NASA Astrophysics Data System (ADS)
Molina-Vilaplana, J.; Prior, J.
2014-11-01
We elaborate on a previous proposal by Hartman and Maldacena on a tensor network which accounts for the scaling of the entanglement entropy in a system at finite temperature. In this construction, the ordinary entanglement renormalization flow given by the class of tensor networks known as the Multi-Scale Entanglement Renormalization Ansatz (MERA) is supplemented by an additional entanglement structure at the length scale fixed by the temperature. The network comprises two copies of a MERA circuit with a fixed number of layers and a pure matrix product state which joins both copies by entangling the infrared degrees of freedom of both MERA networks. The entanglement distribution within this bridge state defines reduced density operators on both sides which cause effects analogous to the presence of a black hole horizon when computing the entanglement entropy at finite temperature in the AdS/CFT correspondence. The entanglement and correlations during the thermalization process of a system after a quantum quench are also analyzed. To this end, a full tensor network representation of the action of local unitary operations on the bridge state is proposed. This amounts to a tensor network which grows in size by adding successive layers of bridge states. Finally, we discuss the holographic interpretation of the tensor network through a notion of distance within the network which emerges from its entanglement distribution.
Kim, Na Rae; Jung, Inyu; Jo, Yun Hwan; Lee, Hyuck Mo
2013-09-01
To control the optical properties of Cu2O for a variety of applications, we synthesized nanoscale Cu2O without further treatments. Cu2O nanoparticles with an average size of 2.7 nm (sigma < or = 3.7%) were successfully synthesized in this study via a modified thermal decomposition process. Copper(II) acetylacetonate was used as the precursor, and oleylamine served as solvent, surfactant and reducing agent. The oleylamine-mediated synthesis allowed the preparation of Cu2O nanoparticles with a narrower size distribution, and the nanoparticles were synthesized in the presence of a borane tert-butylamine (BTB) complex, where BTB acted as a strong co-reducing agent together with oleylamine. UV-vis spectroscopy analysis suggests that the band gap energy of these Cu2O particles is enlarged from 2.1 eV in the bulk to 3.1 eV in the 2.7-nm nanoparticles, which is larger than most other reported values for Cu2O nanoparticles. These nanoparticles could therefore be used as a transparent material because of this modified optical property.
NASA Astrophysics Data System (ADS)
Hallo, Miroslav; Asano, Kimiyuki; Gallovič, František
2017-09-01
On April 16, 2016, Kumamoto prefecture in the Kyushu region, Japan, was devastated by a shallow M_JMA 7.3 earthquake. The series of foreshocks started with an M_JMA 6.5 foreshock 28 h before the mainshock. The foreshocks originated in the Hinagu fault zone, which intersects the Futagawa fault zone that hosted the mainshock; hence, the tectonic background of this earthquake sequence is rather complex. Here we infer centroid moment tensors (CMTs) for 11 events with M_JMA between 4.8 and 6.5, using strong motion records of the K-NET, KiK-net and F-net networks. We use the upgraded Bayesian full-waveform inversion code ISOLA-ObsPy, which takes into account the uncertainty of the velocity model. Such an approach allows us to reliably assess the uncertainty of the CMT parameters, including the centroid position. The solutions show significant systematic spatial and temporal variations throughout the sequence. Foreshocks are right-lateral, steeply dipping strike-slip events connected to the NE-SW shear zone. Those located close to the intersection of the Hinagu and Futagawa fault zones dip slightly to the ESE, while those in the southern area dip to the WNW. In contrast, aftershocks are mostly normal dip-slip events related to the N-S extensional tectonic regime. Most of the deviatoric moment tensors contain only a minor CLVD component, which can be attributed to the velocity model uncertainty. Nevertheless, two of the CMTs involve a significant CLVD component, which may reflect a complex rupture process. Decomposition of those moment tensors into two pure shear moment tensors suggests combined right-lateral strike-slip and normal dip-slip mechanisms, consistent with the tectonic setting of the intersection of the Hinagu and Futagawa fault zones.
Direct Solution of the Chemical Master Equation Using Quantized Tensor Trains
Kazeev, Vladimir; Khammash, Mustafa; Nip, Michael; Schwab, Christoph
2014-01-01
The Chemical Master Equation (CME) is a cornerstone of stochastic analysis and simulation of models of biochemical reaction networks. Yet direct solutions of the CME have remained elusive. Although several approaches overcome the infinite-dimensional nature of the CME through projections or other means, a common feature of proposed approaches is their susceptibility to the curse of dimensionality, i.e. the exponential growth in memory and computational requirements in the number of problem dimensions. We present a novel approach that has the potential to “lift” this curse of dimensionality. The approach is based on the use of the recently proposed Quantized Tensor Train (QTT) formatted numerical linear algebra for the low-parametric numerical representation of tensors. The QTT decomposition admits both algorithms for basic tensor arithmetic with complexity scaling linearly in the dimension (number of species) and sub-linearly in the mode size (maximum copy number), and a numerical tensor rounding procedure which is stable and quasi-optimal. We show how the CME can be represented in QTT format, then use the exponentially converging hp-discontinuous Galerkin discretization in time to reduce the CME evolution problem to a set of QTT-structured linear equations to be solved at each time step using an algorithm based on Density Matrix Renormalization Group (DMRG) methods from quantum chemistry. Our method automatically adapts the “basis” of the solution at every time step, guaranteeing that it is large enough to capture the dynamics of interest but no larger than necessary, as this would increase the computational complexity. Our approach is demonstrated by applying it to three different examples from systems biology: an independent birth-death process, an example of an enzymatic futile cycle, and a stochastic switch model. The numerical results on these examples demonstrate that the proposed QTT method achieves dramatic speedups and several orders of magnitude storage
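The tensor-train format underlying QTT can be illustrated with a generic sequential-SVD decomposition in NumPy; this is a minimal sketch of the TT idea, not the authors' CME solver:

```python
import numpy as np

def tt_decompose(x, dims, eps=1e-12):
    """Decompose a vector, reshaped to `dims`, into tensor-train cores via SVDs."""
    cores = []
    C = x.reshape(dims)
    r = 1
    for n in dims[:-1]:
        C = C.reshape(r * n, -1)
        U, s, Vt = np.linalg.svd(C, full_matrices=False)
        rank = max(1, int(np.sum(s > eps * s[0])))  # truncate negligible modes
        cores.append(U[:, :rank].reshape(r, n, rank))
        C = s[:rank, None] * Vt[:rank]
        r = rank
    cores.append(C.reshape(r, dims[-1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract the TT cores back into a full vector."""
    M = cores[0].reshape(-1, cores[0].shape[-1])
    for G in cores[1:]:
        M = (M @ G.reshape(G.shape[0], -1)).reshape(-1, G.shape[-1])
    return M.ravel()

# "Quantizing" a length-64 vector into six binary modes, as QTT does.
x = np.random.default_rng(1).standard_normal(64)
cores = tt_decompose(x, (2, 2, 2, 2, 2, 2))
```

For structured (e.g. smooth or separable) data the TT ranks stay small, which is the source of the linear-in-dimension complexity claimed above; a random vector, by contrast, needs full ranks but is still reconstructed exactly up to floating point.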
Superconducting tensor gravity gradiometer
NASA Technical Reports Server (NTRS)
Paik, H. J.
1981-01-01
The employment of superconductivity and other material properties at cryogenic temperatures to fabricate a sensitive, low-drift gravity gradiometer is described. The device yields a reduction of noise of four orders of magnitude over room-temperature gradiometers, and direct summation and subtraction of signals from accelerometers in varying orientations are possible with superconducting circuitry. Additional circuits permit determination of the linear and angular acceleration vectors independent of the measurement of the gravity gradient tensor. A dewar flask capable of maintaining helium in a liquid state for a year's duration is under development by NASA, and a superconducting tensor gravity gradiometer for the NASA Geodynamics Program is intended for a LEO polar trajectory to measure the harmonic expansion coefficients of the earth's gravity field up to order 300.
Grid-based electronic structure calculations: The tensor decomposition approach
NASA Astrophysics Data System (ADS)
Rakhuba, M. V.; Oseledets, I. V.
2016-05-01
We present a fully grid-based approach for solving Hartree-Fock and all-electron Kohn-Sham equations, based on a low-rank approximation of the three-dimensional electron orbitals. Due to the low-rank structure, the total complexity of the algorithm scales linearly with the one-dimensional grid size. Linear complexity allows for the use of fine grids, e.g. 8192^3, and thus a cheap extrapolation procedure. We test the proposed approach on closed-shell atoms up to argon, several molecules and clusters of hydrogen atoms. All tests show systematic convergence to the required accuracy.
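The low-rank structure exploited here can be illustrated with a truncated higher-order SVD (Tucker decomposition) in NumPy: a separable, orbital-like function such as a Gaussian is captured exactly by multilinear ranks (1, 1, 1). This is a generic sketch of low-rank 3-D approximation, not the paper's solver:

```python
import numpy as np

def hosvd_truncate(T, ranks):
    """Truncated higher-order SVD (Tucker decomposition) of a 3-way array."""
    factors = []
    for mode, r in enumerate(ranks):
        M = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)  # mode unfolding
        U, _, _ = np.linalg.svd(M, full_matrices=False)
        factors.append(U[:, :r])                                # leading singular vectors
    core = T
    for mode, U in enumerate(factors):                          # project onto factors
        core = np.moveaxis(np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return core, factors

def tucker_reconstruct(core, factors):
    T = core
    for mode, U in enumerate(factors):
        T = np.moveaxis(np.tensordot(U, np.moveaxis(T, mode, 0), axes=1), 0, mode)
    return T

# A separable function exp(-x^2)exp(-y^2)exp(-z^2) on a grid has rank 1.
x = np.linspace(-2.0, 2.0, 20)
g = np.exp(-x**2)
T = np.einsum('i,j,k->ijk', g, g, g)
core, factors = hosvd_truncate(T, (1, 1, 1))
err = np.linalg.norm(T - tucker_reconstruct(core, factors)) / np.linalg.norm(T)
```

Storing the rank-1 representation costs 3 x 20 numbers instead of 20^3, which is the kind of compression that makes 8192^3 grids tractable.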
2014-09-02
[Front-matter fragments] ... station locations of HUMMING ALBATROSS; Figure 17: full moment tensor solutions and decompositions as a function of source depth. Acknowledgment: ... Geophysical provided the HUMMING ALBATROSS waveform data; plots were made with GMT (Wessel and Smith, 1998). 1. SUMMARY: In this project ... synthetic studies, we applied the moment tensor method to the HUMMING ALBATROSS quarry blast events, which is an excellent dataset in terms of understanding
Residue Decomposition submodel of WEPS
USDA-ARS?s Scientific Manuscript database
The Residue Decomposition submodel of the Wind Erosion Prediction System (WEPS) simulates the decrease in crop residue biomass due to microbial activity. The decomposition process is modeled as a first-order reaction with temperature and moisture as driving variables. Decomposition is a function of ...
E6Tensors: A Mathematica package for E6 Tensors
NASA Astrophysics Data System (ADS)
Deppisch, Thomas
2017-04-01
We present the Mathematica package E6Tensors, a tool for explicit tensor calculations in E6 gauge theories. In addition to matrix expressions for the group generators of E6, it provides structure constants, various higher rank tensors and expressions for the representations 27, 78, 351 and 351′. This paper comes along with a short manual including physically relevant examples. I further give a complete list of gauge invariant, renormalisable terms for superpotentials and Lagrangians.
FaRe: A Mathematica package for tensor reduction of Feynman integrals
NASA Astrophysics Data System (ADS)
Re Fiorentin, Michele
2016-08-01
In this paper, we present FaRe, a package for Mathematica that implements the decomposition of a generic tensor Feynman integral, with arbitrary loop number, into scalar integrals in higher dimension. In order for FaRe to work, the package FeynCalc is needed, so that the tensor structure of the different contributions is preserved and the obtained scalar integrals are grouped accordingly. FaRe can prove particularly useful when it is preferable to handle Feynman integrals with free Lorentz indices and tensor reduction of high-order integrals is needed. This can then be achieved with several powerful existing tools.
Projectors and seed conformal blocks for traceless mixed-symmetry tensors
NASA Astrophysics Data System (ADS)
Costa, Miguel S.; Hansen, Tobias; Penedones, João; Trevisani, Emilio
2016-07-01
In this paper we derive the projectors to all irreducible SO(d) representations (traceless mixed-symmetry tensors) that appear in the partial wave decomposition of a conformal correlator of four stress-tensors in d dimensions. These projectors are given in a closed form for arbitrary length l_1 of the first row of the Young diagram. The appearance of Gegenbauer polynomials leads directly to recursion relations in l_1 for seed conformal blocks. Further results include a differential operator that generates the projectors to traceless mixed-symmetry tensors and the general normalization constant of the shadow operator.
Discussion of stress tensor nonuniqueness with application to nonuniform, particulate systems
Aidun, J.B.
1993-06-01
The indeterminacy of the mechanical stress tensor has been noted in several developments of expressions for stress in a system of particles. It is generally agreed that physical quantities related to the stress tensor must be insensitive to this nonuniqueness, but there is no definitive prescription for ensuring it. Kroener's tensor decomposition theorem is applied to the mechanical stress tensor σ_ij to show that its complete determination requires specification of its "incompatibility," ε_ijk ε_lmn ∂_j ∂_m σ_kn, in addition to its divergence, which is obtained from the momentum conservation relation. For a particulate system, the stress tensor incompatibility is shown to vanish, recovering the correct expression for macroscopically observable traction. This result removes concern about nonuniqueness without requiring equilibrium or arbitrarily defined force lines.
Catalyst for sodium chlorate decomposition
NASA Technical Reports Server (NTRS)
Wydeven, T.
1972-01-01
Production of oxygen by rapid decomposition of a cobalt oxide and sodium chlorate mixture is discussed. Cobalt oxide serves as a catalyst to accelerate the reaction. Temperature conditions and the chemical processes involved are described.
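The overall reaction is 2 NaClO3 → 2 NaCl + 3 O2, with the cobalt oxide catalyst left unconsumed; a quick stoichiometric check of the oxygen yield (molar masses are standard atomic-weight values, not figures from the abstract):

```python
# Molar masses in g/mol (standard atomic weights).
M_NA, M_CL, M_O = 22.990, 35.453, 15.999

m_naclo3 = M_NA + M_CL + 3 * M_O      # one mole of sodium chlorate
m_o2_per_mol = 1.5 * 2 * M_O          # 3 mol O2 released per 2 mol NaClO3

mass_fraction_o2 = m_o2_per_mol / m_naclo3
print(f"O2 yield: {mass_fraction_o2:.1%} of the chlorate mass")
```

Roughly 45% of the chlorate mass is released as oxygen, which is why chlorate "candles" of this kind are attractive as compact oxygen sources.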
Carbon decomposition process of the residual biomass in the paddy soil of a single-crop rice field
NASA Astrophysics Data System (ADS)
Okada, K.; Iwata, T.
2014-12-01
In cultivated fields, residual organic matter is plowed into the soil after harvest and decays during the fallow season. Greenhouse gases such as CO2 and CH4 are generated by the decomposition of this organic matter and released into the atmosphere. In some fields, open burning is carried out by tradition, in which case the carbon in the residual matter is released into the atmosphere as CO2. However, the effect of burning on the carbon budget between croplands and the atmosphere has not yet been fully considered. In this study, coarse organic matter (COM) in the paddy soil of a single-crop rice field was sampled at regular intervals between January 2011 and August 2014. The amount of carbon released from residual matter was estimated by analyzing the variations in the carbon content of the COM. The effects of soil temperature (Ts) and soil water content (SWC) at the paddy field on the rate of carbon decomposition were investigated. Although the rate of COM decrease was much smaller in winter, it accelerated in the warming season between April and June every year. Decomposition then slowed during the following rice cultivation season despite the highest soil temperatures. In addition, the observational field was divided into two areas, and open burning experiments were conducted three times, in November 2011, 2012 and 2013. In each year, three sampling surveys were carried out: plants before harvest, and residuals before and after the burning experiment. These surveys suggested that about 48±2% of the carbon content of the above-ground plants was removed as grain at harvest, and about 27±2% of the carbon was emitted as CO2 by burning. The carbon content of residuals plowed into the soil after harvest was estimated at 293±1 and 220±36 gC/m2 in the unburned and burned areas, respectively, based on three-year averages. It is estimated that 70 and 60% of the initial input of COM was decomposed after one year in the unburned and burned areas, respectively.
Operator norm inequalities between tensor unfoldings on the partition lattice.
Wang, Miaoyan; Duc, Khanh Dao; Fischer, Jonathan; Song, Yun S
2017-05-01
Interest in higher-order tensors has recently surged in data-intensive fields, with a wide range of applications including image processing, blind source separation, community detection, and feature extraction. A common paradigm in tensor-related algorithms advocates unfolding (or flattening) the tensor into a matrix and applying classical methods developed for matrices. Despite the popularity of such techniques, how the functional properties of a tensor change upon unfolding is currently not well understood. In contrast to the body of existing work, which has focused almost exclusively on matricizations, we here consider all possible unfoldings of an order-k tensor, which are in one-to-one correspondence with the set of partitions of {1, …, k}. We derive general inequalities between the l_p-norms of arbitrary unfoldings defined on the partition lattice. In particular, we demonstrate how the spectral norm (p = 2) of a tensor is bounded by that of its unfoldings, and obtain an improved upper bound on the ratio of the Frobenius norm to the spectral norm of an arbitrary tensor. For specially structured tensors satisfying a generalized definition of orthogonal decomposability, we prove that the spectral norm remains invariant under specific subsets of unfolding operations.
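A small NumPy illustration of unfoldings and two of the norm relations involved (a generic sketch covering matricizations only; the general partition-lattice unfoldings of the paper are not reproduced here):

```python
import numpy as np

def unfold(T, row_modes):
    """Matricize T: `row_modes` index the rows, the remaining modes the columns."""
    col_modes = [m for m in range(T.ndim) if m not in row_modes]
    M = np.transpose(T, list(row_modes) + col_modes)
    rows = int(np.prod([T.shape[m] for m in row_modes]))
    return M.reshape(rows, -1)

rng = np.random.default_rng(0)
T = rng.standard_normal((3, 4, 5))
frob = np.linalg.norm(T)                    # Frobenius norm of the tensor
for rows in ([0], [1], [2], [0, 1]):
    M = unfold(T, rows)
    # Unfolding only permutes entries, so the Frobenius norm is invariant...
    assert np.isclose(np.linalg.norm(M), frob)
    # ...while the matrix spectral norm is always bounded by the Frobenius norm.
    assert np.linalg.norm(M, 2) <= frob + 1e-12
```

The interesting inequalities in the paper run the other way, bounding the tensor spectral norm by those of its unfoldings; the sketch above only demonstrates the elementary direction.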
2013-01-01
Background: The Prussian blue analogues represent a well-known and extensively studied group of coordination species with many remarkable applications due to their ion-exchange, electron transfer or magnetic properties. Among them, Co-Fe Prussian blue analogues have been extensively studied due to their photoinduced magnetization. Surprisingly, their suitability as precursors for the solid-state synthesis of magnetic nanoparticles is almost unexplored. In this paper, the mechanism of thermal decomposition of [Co(en)3][Fe(CN)6] · 2H2O (1a) is elucidated, including the topotactic dehydration, suggested valence and spin exchange mechanisms, and the formation of a CoFe2O4-Co3O4 (3:1) mixture as the final product of thermal degradation. Results: The course of thermal decomposition of 1a in an air atmosphere up to 600°C was monitored by TG/DSC techniques, 57Fe Mössbauer and IR spectroscopy. First, the topotactic dehydration of 1a to the hemihydrate [Co(en)3][Fe(CN)6] · 1/2H2O (1b) occurred, preserving the single-crystal character, as confirmed by X-ray diffraction analysis. The subsequent thermal decomposition proceeded in four further stages, including intermediates varying in the valence and spin states of both transition metal ions, i.e. [FeII(en)2(μ-NC)CoIII(CN)4], [FeIII(NH2CH2CH3)2(μ-NC)2CoII(CN)3] and FeIII[CoII(CN)5], which were suggested mainly from 57Fe Mössbauer, IR spectral and elemental analysis data. Thermal decomposition was completed at 400°C, when superparamagnetic phases of CoFe2O4 and Co3O4 in a molar ratio of 3:1 were formed. During further temperature increase (450 and 600°C), the ongoing crystallization process gave a new ferromagnetic phase attributed to CoFe2O4-Co3O4 nanocomposite particles. Their formation was confirmed by XRD and TEM analyses. The in-field (5 K / 5 T) Mössbauer spectrum revealed canting of the Fe(III) spin in the almost fully inverse spinel structure of CoFe2O4. Conclusions: It has been found
Reducing tensor magnetic gradiometer data for unexploded ordnance detection
Bracken, Robert E.; Brown, Philip J.
2005-01-01
We performed a survey to demonstrate the effectiveness of a prototype tensor magnetic gradiometer system (TMGS) for detection of buried unexploded ordnance (UXO). In order to achieve a useful result, we designed a data-reduction procedure that resulted in a realistic magnetic gradient tensor and devised a simple way of viewing complicated tensor data, not only to assess the validity of the final resulting tensor, but also to preview the data at interim stages of processing. The final processed map of the surveyed area clearly shows a sharp anomaly that peaks almost directly over the target UXO. This map agrees well with a modeled map derived from dipolar sources near the known target locations. From this agreement, it can be deduced that the reduction process is valid, making the prototype TMGS a foundation for development of future systems and processes.
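The dipolar forward model used for comparison can be sketched directly: outside the source, the magnetic gradient tensor of a point dipole is symmetric (the field is curl-free) and traceless (the field is divergence-free), two properties that also serve as sanity checks on reduced tensor data. The snippet below is a generic illustration with an assumed observation point and dipole moment, not the survey's actual processing code.

```python
import numpy as np

MU0_4PI = 1e-7  # mu_0 / (4*pi) in SI units

def dipole_B(r, m):
    """Magnetic field of a point dipole with moment m, observed at offset r."""
    rn = np.linalg.norm(r)
    return MU0_4PI * (3.0 * r * np.dot(m, r) / rn**5 - m / rn**3)

def gradient_tensor(r, m, h=1e-4):
    """Central-difference estimate of the gradient tensor G_ij = dB_i/dx_j."""
    G = np.zeros((3, 3))
    for j in range(3):
        dr = np.zeros(3)
        dr[j] = h
        G[:, j] = (dipole_B(r + dr, m) - dipole_B(r - dr, m)) / (2.0 * h)
    return G

r_obs = np.array([2.0, 1.0, 3.0])  # hypothetical observation offset (m)
m_dip = np.array([0.0, 0.0, 1.0])  # hypothetical dipole moment (A*m^2)
G = gradient_tensor(r_obs, m_dip)
```

In source-free regions G should come out symmetric and traceless; systematic deviations in reduced survey data would flag calibration or processing errors.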
Relativistic Lagrangian displacement field and tensor perturbations
NASA Astrophysics Data System (ADS)
Rampf, Cornelius; Wiegand, Alexander
2014-12-01
We investigate the purely spatial Lagrangian coordinate transformation from the Lagrangian to the basic Eulerian frame. We demonstrate three techniques for extracting the relativistic displacement field from a given solution in the Lagrangian frame. These techniques are (a) from defining a local set of Eulerian coordinates embedded into the Lagrangian frame; (b) from performing a specific gauge transformation; and (c) from a fully nonperturbative approach based on the Arnowitt-Deser-Misner (ADM) split. The latter approach shows that this decomposition is not tied to a specific perturbative formulation for the solution of the Einstein equations. Rather, it can be defined at the level of the nonperturbative coordinate change from the Lagrangian to the Eulerian description. Studying such different techniques is useful because it allows us to compare and develop further the various approximation techniques available in the Lagrangian formulation. We find that one has to solve the gravitational wave equation in the relativistic analysis, otherwise the corresponding Newtonian limit will necessarily contain spurious nonpropagating tensor artifacts at second order in the Eulerian frame. We also derive the magnetic part of the Weyl tensor in the Lagrangian frame, and find that it is not only excited by gravitational waves but also by tensor perturbations which are induced through the nonlinear frame dragging. We apply our findings to calculate for the first time the relativistic displacement field, up to second order, for a ΛCDM Universe in the presence of a local primordial non-Gaussian component. Finally, we also comment on recent claims about whether mass conservation in the Lagrangian frame is violated.
On Endomorphisms of Quantum Tensor Space
NASA Astrophysics Data System (ADS)
Lehrer, Gustav Isaac; Zhang, Ruibin
2008-12-01
We give a presentation of the endomorphism algebra $\mathrm{End}_{\mathcal{U}_q(\mathfrak{sl}_2)}(V^{\otimes r})$, where $V$ is the three-dimensional irreducible module for quantum $\mathfrak{sl}_2$ over the function field $\mathbb{C}(q^{1/2})$. This will be as a quotient of the Birman-Wenzl-Murakami algebra $\mathrm{BMW}_r(q) := \mathrm{BMW}_r(q^{-4}, q^2 - q^{-2})$ by an ideal generated by a single idempotent $\Phi_q$. Our presentation is in analogy with the case where $V$ is replaced by the two-dimensional irreducible $\mathcal{U}_q(\mathfrak{sl}_2)$-module, the BMW algebra is replaced by the Hecke algebra $H_r(q)$ of type $A_{r-1}$, $\Phi_q$ is replaced by the quantum alternator in $H_3(q)$, and the endomorphism algebra is the classical realisation of the Temperley-Lieb algebra on tensor space. In particular, we show that all relations among the endomorphisms defined by the $R$-matrices on $V^{\otimes r}$ are consequences of relations among the three $R$-matrices acting on $V^{\otimes 4}$. The proof makes extensive use of the theory of cellular algebras. Potential applications include the decomposition of tensor powers when $q$ is a root of unity.
NASA Astrophysics Data System (ADS)
Lizurek, Grzegorz
2017-01-01
Tectonic seismicity in Poland is sparse. The largest event, of magnitude 5.6, was located near Myślenice in the 17th century. On the other hand, anthropogenic seismicity is among the highest in Europe, related, for example, to underground mining in the Upper Silesian Coal Basin (USCB) and the Legnica-Głogów Copper District (LGCD), open-pit mining in the "Bełchatów" brown-coal mine, and the reservoir impoundment of the Czorsztyn artificial lake. The level of seismic activity in these areas varies from tens to thousands of events per year. Focal mechanisms and full moment tensor (MT) decomposition allow for a deeper understanding of the seismogenic processes leading to tectonic, induced, and triggered seismic events. The non-DC components of moment tensors are considered an indicator of induced seismicity. In this work, MT inversion and decomposition are shown to be a robust tool for unveiling collapse-type events as well as the other induced events in Polish underground mining areas. The robustness and limitations of the presented method are exemplified by synthetic tests and by analyzing weak tectonic earthquakes. The spurious non-DC components of full MT solutions due to noise and poor focal coverage are discussed. The results of the MT inversions of the human-related and tectonic earthquakes from Poland indicate that this method is a useful part of a tectonic and anthropogenic seismicity discrimination workflow.
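The decomposition underlying the DC versus non-DC diagnostic can be sketched under one standard convention: split the moment tensor into an isotropic part and a deviatoric part, then measure the CLVD admixture of the deviatoric part by the epsilon parameter. This is a minimal illustration of that convention, not the specific inversion and decomposition scheme used in the paper.

```python
import numpy as np

def decompose_mt(M):
    """Split a symmetric 3x3 moment tensor into isotropic and deviatoric
    parts; epsilon measures the CLVD admixture of the deviatoric part
    (0 = pure double couple, +/-0.5 = pure CLVD)."""
    M = np.asarray(M, dtype=float)
    iso = np.trace(M) / 3.0 * np.eye(3)
    dev = M - iso
    lam = np.linalg.eigvalsh(dev)        # deviatoric eigenvalues, ascending
    lam = lam[np.argsort(np.abs(lam))]   # reorder by absolute size
    if abs(lam[2]) < 1e-12:              # purely isotropic source
        eps = 0.0
    else:
        eps = -lam[0] / abs(lam[2])
    return iso, dev, eps

# A pure double-couple source (e.g. strike-slip faulting): epsilon = 0
iso, dev, eps = decompose_mt(np.diag([1.0, -1.0, 0.0]))
```

A collapse-type or explosive event shows up as a large isotropic part, while noise and poor focal coverage can leak spurious CLVD into epsilon, which is the failure mode discussed in the abstract.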
Local recovery of lithospheric stress tensor from GOCE gravitational tensor
NASA Astrophysics Data System (ADS)
Eshagh, Mehdi
2017-04-01
The sublithospheric stress due to mantle convection can be computed from gravity data and propagated through the lithosphere by solving the boundary-value problem of elasticity for the Earth's lithosphere. In this case, a full stress tensor can be computed at any point inside this elastic layer. Here, we present mathematical foundations for recovering such a tensor from the gravitational tensor measured at satellite altitudes. The mathematical relations are much simpler in this way than when using gravity data, as no derivatives of spherical harmonics (SHs) or Legendre polynomials are involved in the expressions. Here, new relations between the SH coefficients of the stress and gravitational tensor elements are presented. Thereafter, integral equations are established from them to recover the elements of the stress tensor from those of the gravitational tensor. The integrals have no closed-form kernels, but they are easy to invert and their spatial truncation errors are reducible. The integral equations are used to invert real data from the Gravity field and steady-state Ocean Circulation Explorer (GOCE) mission, from November 2009, over the South American plate and its surroundings, to recover the stress tensor at a depth of 35 km. The recovered stress fields are in good agreement with the tectonic and geological features of the area.
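The kernels in the paper are specific to the stress/gravity relation, but the general pattern they rely on, discretizing a first-kind integral equation and stabilizing its inversion, can be sketched generically. Everything in the snippet below (the Gaussian kernel, the grid, the regularization parameter) is an assumption chosen for illustration, not the paper's formulation.

```python
import numpy as np

# Assumed smooth kernel on a 1-D grid -- illustrative only
n = 100
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
# Discretized Fredholm operator: g(x_i) = sum_j K(x_i, x_j) f(x_j) h
A = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * 0.1**2)) * h

f_true = np.sin(2 * np.pi * x)  # smooth "stress" signal to recover
g = A @ f_true                  # synthetic "gravitational" observations

# Tikhonov-regularized least squares: (A^T A + alpha I) f = A^T g
alpha = 1e-8
f_rec = np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ g)

rel_err = np.linalg.norm(f_rec - f_true) / np.linalg.norm(f_true)
```

The smoothing kernel makes the forward map strongly ill-conditioned, so a plain solve would amplify any perturbation; the regularization term damps the near-null-space components while leaving the smooth, well-observed part of the signal essentially untouched.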