Tensor decomposition of EEG signals: a brief review.
Cong, Fengyu; Lin, Qiu-Hua; Kuang, Li-Dan; Gong, Xiao-Feng; Astikainen, Piia; Ristaniemi, Tapani
2015-06-15
Electroencephalography (EEG) is one fundamental tool for functional brain imaging. EEG signals tend to be represented by a vector or a matrix to facilitate data processing and analysis with generally understood methodologies like time-series analysis, spectral analysis and matrix decomposition. Indeed, EEG signals often naturally possess more than two modes, such as time and space, and they can be denoted by a multi-way array called a tensor. This review summarizes the current progress of tensor decomposition of EEG signals in three aspects. The first is the existing modes and tensors of EEG signals. Second, two fundamental tensor decomposition models, canonical polyadic decomposition (CPD, also known as parallel factor analysis, PARAFAC) and Tucker decomposition, are introduced and compared. Moreover, the applications of the two models to EEG signals are addressed. In particular, the determination of the number of components for each mode is discussed. Finally, the N-way partial least squares and higher-order partial least squares are described as a potential trend for processing and analyzing brain signals of two modalities simultaneously.
An optimization approach for fitting canonical tensor decompositions.
Dunlavy, Daniel M.; Acar, Evrim; Kolda, Tamara Gibson
2009-02-01
Tensor decompositions are higher-order analogues of matrix decompositions and have proven to be powerful tools for data analysis. In particular, we are interested in the canonical tensor decomposition, otherwise known as the CANDECOMP/PARAFAC decomposition (CPD), which expresses a tensor as the sum of component rank-one tensors and is used in a multitude of applications such as chemometrics, signal processing, neuroscience, and web analysis. The task of computing the CPD, however, can be difficult. The typical approach is based on alternating least squares (ALS) optimization, which can be remarkably fast but is not very accurate. Previously, nonlinear least squares (NLS) methods have also been recommended; existing NLS methods are accurate but slow. In this paper, we propose the use of gradient-based optimization methods. We discuss the mathematical calculation of the derivatives and further show that they can be computed efficiently, at the same cost as one iteration of ALS. Computational experiments demonstrate that the gradient-based optimization methods are much more accurate than ALS and orders of magnitude faster than NLS.
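The ALS baseline this abstract compares against can be sketched in a few lines of NumPy. This is a minimal illustration of plain CP-ALS on a small synthetic tensor, not the authors' gradient-based implementation; the tensor shape, rank, and iteration count are arbitrary choices for the demo:

```python
import numpy as np

def unfold(T, mode):
    # Mode-n unfolding: bring `mode` to the front, flatten the rest (row-major).
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(U, V):
    # Column-wise Kronecker product, consistent with the unfolding above.
    r = U.shape[1]
    return np.einsum('ir,jr->ijr', U, V).reshape(-1, r)

def cp_als(T, rank, iters=300, seed=0):
    # Fit T ~ sum_r a_r o b_r o c_r for a 3-way tensor by alternating least
    # squares: each factor is updated by solving a linear least-squares problem
    # while the other two are held fixed.
    rng = np.random.default_rng(seed)
    A, B, C = (rng.standard_normal((n, rank)) for n in T.shape)
    for _ in range(iters):
        A = np.linalg.lstsq(khatri_rao(B, C), unfold(T, 0).T, rcond=None)[0].T
        B = np.linalg.lstsq(khatri_rao(A, C), unfold(T, 1).T, rcond=None)[0].T
        C = np.linalg.lstsq(khatri_rao(A, B), unfold(T, 2).T, rcond=None)[0].T
    return A, B, C

# Recover an exactly rank-2 synthetic tensor.
rng = np.random.default_rng(1)
A0, B0, C0 = (rng.standard_normal((n, 2)) for n in (4, 5, 6))
T = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
A, B, C = cp_als(T, rank=2)
That = np.einsum('ir,jr,kr->ijk', A, B, C)
print(np.linalg.norm(T - That) / np.linalg.norm(T))  # small relative error
```

On an exact low-rank tensor like this, ALS typically drives the residual near zero; the accuracy issues the paper discusses arise on noisy or ill-conditioned data.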
Tensor network decompositions in the presence of a global symmetry
Singh, Sukhwinder; Pfeifer, Robert N. C.; Vidal, Guifre
2010-11-15
Tensor network decompositions offer an efficient description of certain many-body states of a lattice system and are the basis of a wealth of numerical simulation algorithms. We discuss how to incorporate a global symmetry, given by a compact, completely reducible group G, in tensor network decompositions and algorithms. This is achieved by considering tensors that are invariant under the action of the group G. Each symmetric tensor decomposes into two types of tensors: degeneracy tensors, containing all the degrees of freedom, and structural tensors, which only depend on the symmetry group. In numerical calculations, the use of symmetric tensors ensures the preservation of the symmetry, allows selection of a specific symmetry sector, and significantly reduces computational costs. On the other hand, the resulting tensor network can be interpreted as a superposition of exponentially many spin networks. Spin networks are used extensively in loop quantum gravity, where they represent states of quantum geometry. Our work highlights their importance in the context of tensor network algorithms as well, thus setting the stage for cross-fertilization between these two areas of research.
3D tensor-based blind multispectral image decomposition for tumor demarcation
NASA Astrophysics Data System (ADS)
Kopriva, Ivica; Peršin, Antun
2010-03-01
Blind decomposition of a multi-spectral fluorescent image for tumor demarcation is formulated by exploiting the tensorial structure of the image. The first contribution of the paper is the identification of the matrix of spectral responses and the 3D tensor of spatial distributions of the materials present in the image from Tucker3 or PARAFAC models of the 3D image tensor. The second contribution is a clustering-based estimation of the number of materials present in the image as well as of the matrix of their spectral profiles. The 3D tensor of the spatial distributions of the materials is recovered through 3-mode multiplication of the multi-spectral image tensor and the inverse of the matrix of spectral profiles. Tensor representation of the multi-spectral image preserves its local spatial structure, which is lost, due to the vectorization process, when matrix factorization-based decomposition methods (such as non-negative matrix factorization and independent component analysis) are used. Superior performance of the tensor-based image decomposition over matrix factorization-based decompositions is demonstrated on an experimental red-green-blue (RGB) image with known ground truth as well as on RGB fluorescent images of skin tumors (basal cell carcinoma).
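The recovery step described above, 3-mode multiplication of the image tensor by the inverse of the spectral-profile matrix, can be illustrated with a toy NumPy sketch. The shapes and the synthetic "image" below are illustrative assumptions, not the paper's data:

```python
import numpy as np

def mode_n_product(T, M, mode):
    # T x_n M: contract T's `mode` axis with the columns of M and put the
    # resulting axis back in the same position.
    return np.moveaxis(np.tensordot(M, T, axes=(1, mode)), 0, mode)

# Toy version of the recovery step: a multi-spectral "image" X is a spatial
# abundance tensor S (height x width x materials) mixed along the third mode
# by a square, invertible matrix A of spectral profiles (channels x materials).
rng = np.random.default_rng(0)
S = rng.random((8, 8, 3))            # spatial distributions of 3 materials
A = rng.random((3, 3)) + np.eye(3)   # spectral-profile matrix (kept invertible)
X = mode_n_product(S, A, 2)          # mixed multi-spectral image
S_rec = mode_n_product(X, np.linalg.inv(A), 2)
print(np.allclose(S, S_rec))  # True
```

In the paper the spectral-profile matrix is itself estimated blindly (via clustering); here it is given, so the recovery is exact.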
Tensor Decompositions for Learning Latent Variable Models
2012-12-08
3D extension of Tensorial Polar Decomposition. Application to (photo-)elasticity tensors
NASA Astrophysics Data System (ADS)
Desmorat, Rodrigue; Desmorat, Boris
2016-06-01
The orthogonalized harmonic decomposition of symmetric fourth-order tensors (i.e. having major and minor indicial symmetries, such as elasticity tensors) is completed by a representation of harmonic fourth-order tensors H by means of two second-order harmonic (symmetric deviatoric) tensors only. A similar decomposition is obtained for non-symmetric tensors (i.e. having minor indicial symmetry only, such as photo-elasticity tensors or elasto-plasticity tangent operators) introducing a fourth-order major antisymmetric traceless tensor Z. The tensor Z is represented by means of one harmonic second-order tensor and one antisymmetric second-order tensor only. Representations of totally symmetric (rari-constant), symmetric and major antisymmetric fourth-order tensors are simple particular cases of the proposed general representation. Closed-form expressions for tensor decomposition are given in the monoclinic case. Practical applications to elasticity and photo-elasticity monoclinic tensors are finally presented.
Calculating vibrational spectra of molecules using tensor train decomposition
NASA Astrophysics Data System (ADS)
Rakhuba, Maxim; Oseledets, Ivan
2016-09-01
We propose a new algorithm for calculation of vibrational spectra of molecules using tensor train decomposition. Under the assumption that eigenfunctions lie on a low-parametric manifold of low-rank tensors we suggest using well-known iterative methods that utilize matrix inversion (locally optimal block preconditioned conjugate gradient method, inverse iteration) and solve corresponding linear systems inexactly along this manifold. As an application, we accurately compute vibrational spectra (84 states) of acetonitrile molecule CH3CN on a laptop in one hour using only 100 MB of memory to represent all computed eigenfunctions.
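A tensor-train representation of the kind used above can be computed by the standard TT-SVD sweep (sequential SVDs of reshaped unfoldings). The sketch below is a generic illustration of that sweep, not the authors' eigenvalue solver; the test tensors and truncation threshold are arbitrary:

```python
import numpy as np

def tt_svd(T, eps=1e-10):
    # Sequential-SVD (TT-SVD) sweep: peel off one mode at a time, truncating
    # singular values below eps * s_max at each step.
    shape, cores, rank = T.shape, [], 1
    M = T.reshape(shape[0], -1)
    for k in range(T.ndim - 1):
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        r_new = max(1, int((s > eps * s[0]).sum()))
        cores.append(U[:, :r_new].reshape(rank, shape[k], r_new))
        M = (s[:r_new, None] * Vt[:r_new]).reshape(r_new * shape[k + 1], -1)
        rank = r_new
    cores.append(M.reshape(rank, shape[-1], 1))
    return cores

def tt_reconstruct(cores):
    # Chain the cores back together by contracting neighbouring bond indices.
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=(out.ndim - 1, 0))
    return out.reshape([c.shape[1] for c in cores])

# Exactness on a generic 4-way tensor (no truncation occurs).
rng = np.random.default_rng(0)
T = rng.random((4, 5, 6, 3))
cores = tt_svd(T)
print(np.allclose(T, tt_reconstruct(cores)))  # True

# Compression on a tensor with separable structure: TT-ranks collapse to 2.
X = np.fromfunction(lambda i, j, k: np.sin(0.1 * (i + j + k)), (20, 20, 20))
ranks = [c.shape[2] for c in tt_svd(X, eps=1e-8)[:-1]]
print(ranks)  # [2, 2]
```

The second example shows the low-parametric-manifold idea the abstract relies on: when the data have separable structure, storage drops from n^d entries to a chain of small cores.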
Tensor decomposition and nonlocal means based spectral CT reconstruction
NASA Astrophysics Data System (ADS)
Zhang, Yanbo; Yu, Hengyong
2016-10-01
As one of the state-of-the-art detectors, the photon counting detector is used in spectral CT to classify the received photons into several energy channels and generate multichannel projections simultaneously. However, the projections always contain severe noise due to the low counts in each energy channel. How to reconstruct high-quality images from photon-counting-detector-based spectral CT is a challenging problem. It is widely accepted that self-similarity exists over the spatial domain in a CT image. Moreover, because a multichannel CT image is obtained from the same object at different energies, images among channels are highly correlated. Motivated by these two characteristics of spectral CT, we employ tensor decomposition and nonlocal means methods for spectral CT iterative reconstruction. Our method includes three basic steps. First, each channel image is updated using OS-SART. Second, small 3D volumetric patches (tensors) are extracted from the multichannel image, and higher-order singular value decomposition (HOSVD) is performed on each tensor, which helps enhance the spatial sparsity and spectral correlation. Third, in order to exploit the self-similarity in CT images, similar patches are grouped to reduce noise using the nonlocal means method. These three steps are repeated alternately until the stopping criteria are met. The effectiveness of the developed algorithm is validated on both numerically simulated and realistic preclinical datasets. Our results show that the proposed method achieves promising performance in terms of noise reduction and preservation of fine structures.
Uncertainty propagation in orbital mechanics via tensor decomposition
NASA Astrophysics Data System (ADS)
Sun, Yifei; Kumar, Mrinal
2016-03-01
Uncertainty forecasting in orbital mechanics is an essential but difficult task, primarily because the underlying Fokker-Planck equation (FPE) is defined on a relatively high dimensional (6-D) state-space and is driven by the nonlinear perturbed Keplerian dynamics. In addition, an enormously large solution domain is required for numerical solution of this FPE (e.g. encompassing the entire orbit in the x-y-z subspace), of which the state probability density function (pdf) occupies a tiny fraction at any given time. This coupling of large size, high dimensionality and nonlinearity makes for a formidable computational task, and has caused the FPE for orbital uncertainty propagation to remain an unsolved problem. To the best of the authors' knowledge, this paper presents the first successful direct solution of the FPE for perturbed Keplerian mechanics. To tackle the dimensionality issue, the time-varying state pdf is approximated in the CANDECOMP/PARAFAC decomposition tensor form where all the six spatial dimensions as well as the time dimension are separated from one another. The pdf approximation for all times is obtained simultaneously via the alternating least squares algorithm. Chebyshev spectral differentiation is employed for discretization on account of its spectral ("super-fast") convergence rate. To facilitate the tensor decomposition and control the solution domain size, system dynamics is expressed using spherical coordinates in a noninertial reference frame. Numerical results obtained on a regular personal computer are compared with Monte Carlo simulations.
Reduction of Linear Combinations of Tensors by Ideal Decompositions
NASA Astrophysics Data System (ADS)
Fiedler, Bernd
2001-04-01
Symmetry properties of r-times covariant tensors T can be described by certain linear subspaces W of the group ring K[S_r] of a symmetric group S_r. If such a W is known for a class of tensors T, the elements of the orthogonal subspace W⊥ of W within the dual space K[S_r]* of K[S_r] yield linear identities needed for a treatment of the term combination problem for the coordinates of the T. In earlier papers [1, 2] we gave the structure of these W for every situation which appears in symbolic tensor calculations by computer. Characterizing idempotents of such W and machinable linear equation systems for W⊥ can be determined on the basis of an ideal decomposition algorithm which works in every semisimple ring up to an isomorphism. Furthermore, we use tools such as the Littlewood-Richardson rule, plethysms and discrete Fourier transforms for S_r to increase the efficiency of calculations. All described methods were implemented in a Mathematica package called PERMS.
Databases post-processing in Tensoral
NASA Technical Reports Server (NTRS)
Dresselhaus, Eliot
1994-01-01
The Center for Turbulence Research (CTR) post-processing effort aims to make turbulence simulations and data more readily and usefully available to the research and industrial communities. The Tensoral language, introduced in this document and currently existing in prototype form, is the foundation of this effort. Tensoral provides a convenient and powerful protocol to connect users who wish to analyze fluids databases with the authors who generate them. In this document we introduce Tensoral and its prototype implementation in the form of a user's guide. This guide focuses on use of Tensoral for post-processing turbulence databases. The corresponding document - the Tensoral 'author's guide' - which focuses on how authors can make databases available to users via the Tensoral system - is currently unwritten. Section 1 of this user's guide defines Tensoral's basic notions: we explain the class of problems at hand and how Tensoral abstracts them. Section 2 defines Tensoral syntax for mathematical expressions. Section 3 shows how these expressions make up Tensoral statements. Section 4 shows how Tensoral statements and expressions are embedded into other computer languages (such as C or Vectoral) to make Tensoral programs. We conclude with a complete example program.
Tensor decomposition for multi-tissue gene expression experiments
Hore, Victoria; Viñuela, Ana; Buil, Alfonso; Knight, Julian; McCarthy, Mark I; Small, Kerrin; Marchini, Jonathan
2016-01-01
Genome-wide association studies of gene expression traits and other cellular phenotypes have been successful in revealing links between genetic variation and biological processes. The majority of discoveries have uncovered cis eQTL effects via mass univariate testing of SNPs against gene expression in single tissues. We present a Bayesian method for multi-tissue experiments focusing on uncovering gene networks linked to genetic variation. Our method decomposes the 3D array (or tensor) of gene expression measurements into a set of latent components. We identify sparse gene networks, which can then be tested for association against genetic variation genome-wide. We apply our method to a dataset of 845 individuals from the TwinsUK cohort with gene expression measured via RNA sequencing in adipose, LCLs and skin. We uncover several gene networks with a genetic basis and clear biological and statistical significance. Extensions of this approach will allow integration of multi-omic, environmental and phenotypic datasets. PMID:27479908
Thermochemical water decomposition processes
NASA Technical Reports Server (NTRS)
Chao, R. E.
1974-01-01
Thermochemical processes which lead to the production of hydrogen and oxygen from water without the consumption of any other material have a number of advantages when compared to other processes such as water electrolysis. It is possible to operate a sequence of chemical steps with net work requirements equal to zero at temperatures well below the temperature required for water dissociation in a single step. Various types of procedures are discussed, giving attention to halide processes, reverse Deacon processes, iron oxide and carbon oxide processes, and metal and alkali metal processes. Economic aspects are also considered.
Performance of tensor decomposition-based modal identification under nonstationary vibration
NASA Astrophysics Data System (ADS)
Friesen, P.; Sadhu, A.
2017-03-01
Health monitoring of civil engineering structures is of paramount importance when they are subjected to natural hazards or extreme climatic events like earthquakes, strong wind gusts or man-made excitations. Most of the traditional modal identification methods rely on a stationarity assumption for the vibration response and pose difficulties when analyzing nonstationary vibration (e.g. earthquake- or human-induced vibration). Recently, tensor decomposition based methods have emerged as powerful yet generic blind (i.e. not requiring knowledge of the input characteristics) signal decomposition tools for structural modal identification. In this paper, a tensor decomposition based system identification method is further explored to estimate modal parameters using nonstationary vibration generated by either earthquake or pedestrian-induced excitation in a structure. The effects of lag parameters and sensor densities on tensor decomposition are studied with respect to the extent of nonstationarity of the responses, characterized by the stationary duration and peak ground acceleration of the earthquake. A suite of more than 1400 earthquakes is used to investigate the performance of the proposed method under a wide variety of ground motions, utilizing both complete and partial measurements of a high-rise building model. Apart from the earthquake excitation, human-induced nonstationary vibration of a real-life pedestrian bridge is also used to verify the accuracy of the proposed method.
Predicting the reference evapotranspiration based on tensor decomposition
NASA Astrophysics Data System (ADS)
Misaghian, Negin; Shamshirband, Shahaboddin; Petković, Dalibor; Gocic, Milan; Mohammadi, Kasra
2016-09-01
Most of the available models for reference evapotranspiration (ET0) estimation are based upon only an empirical equation for ET0. Thus, one of the main issues in ET0 estimation is the appropriate integration of time information and different empirical ET0 equations to determine ET0 and boost the precision. The FAO-56 Penman-Monteith, adjusted Hargreaves, Blaney-Criddle, Priestley-Taylor, and Jensen-Haise equations were utilized in this study for estimating ET0 for the two stations of Belgrade and Nis in Serbia using data collected for the period 1980 to 2010. A third-order tensor is used to capture three-way correlations among months, years, and ET0 information. Afterward, the latent correlations among ET0 parameters were found by multiway analysis to enhance the quality of the prediction. The suggested method is valuable as it takes into account simultaneous relations between elements, boosts the prediction precision, and determines latent associations. Models are compared with respect to the coefficient of determination (R^2), mean absolute error (MAE), and root-mean-square error (RMSE). The proposed tensor approach has an R^2 value greater than 0.9 for all selected ET0 methods at both selected stations, which is acceptable for ET0 prediction. RMSE ranges between 0.247 and 0.485 mm day-1 at the Nis station and between 0.277 and 0.451 mm day-1 at the Belgrade station, while MAE is between 0.140 and 0.337 mm day-1 at Nis and between 0.208 and 0.360 mm day-1 at Belgrade. The best performances are achieved by the Priestley-Taylor model at the Nis station (R^2 = 0.985, MAE = 0.140 mm day-1, RMSE = 0.247 mm day-1) and the FAO-56 Penman-Monteith model at the Belgrade station (MAE = 0.208 mm day-1, RMSE = 0.277 mm day-1, R^2 = 0.975).
Exploiting multi-lead electrocardiogram correlations using robust third-order tensor decomposition
Dandapat, Samarendra
2015-01-01
In this Letter, a robust third-order tensor decomposition of multi-lead electrocardiogram (MECG) data comprising 12 leads is proposed to reduce the dimension of the storage data. An order-3 tensor structure is employed to represent the MECG data by rearranging the MECG information in three dimensions. The three dimensions of the formed tensor represent the number of leads, beats and samples of some fixed ECG duration. Dimension reduction of such an arrangement exploits correlations present among the successive beats (intra-beat and inter-beat) and across the leads (inter-lead). The higher-order singular value decomposition is used to decompose the tensor data. In addition, multiscale analysis has been added for effective handling of the ECG information. It grossly segments the ECG characteristic waves (P-wave, QRS-complex, ST-segment, T-wave, etc.) into different sub-bands. In the meantime, it separates high-frequency noise components into lower-order sub-bands, which helps in removing noise from the original data. For evaluation purposes, we have used the publicly available PTB diagnostic database. The proposed method outperforms the existing algorithms where the compression ratio is under 10 for MECG data. Results show that the original MECG data volume can be reduced by more than 45 times with an acceptable diagnostic distortion level. PMID:26609416
Tensoral for post-processing users and simulation authors
NASA Technical Reports Server (NTRS)
Dresselhaus, Eliot
1993-01-01
The CTR post-processing effort aims to make turbulence simulations and data more readily and usefully available to the research and industrial communities. The Tensoral language, which provides the foundation for this effort, is introduced here in the form of a user's guide. The Tensoral user's guide is presented in two main sections. Section one acts as a general introduction and guides database users who wish to post-process simulation databases. Section two gives a brief description of how database authors and other advanced users can make simulation codes and/or the databases they generate available to the user community via Tensoral database back ends. The two-part structure of this document conforms to the two-level design structure of the Tensoral language. Tensoral has been designed to be a general computer language for performing tensor calculus and statistics on numerical data. Tensoral's generality allows it to be used for stand-alone native coding of high-level post-processing tasks (as described in section one of this guide). At the same time, Tensoral's specialization to a minute task (namely, to numerical tensor calculus and statistics) allows it to be easily embedded into applications written partly in Tensoral and partly in other computer languages (here, C and Vectoral). Embedded Tensoral, aimed at advanced users for more general coding (e.g. of efficient simulations, for interfacing with pre-existing software, for visualization, etc.), is described in section two of this guide.
NASA Technical Reports Server (NTRS)
Leone, Frank A., Jr.
2015-01-01
A method is presented to represent the large-deformation kinematics of intraply matrix cracks and delaminations in continuum damage mechanics (CDM) constitutive material models. The method involves the additive decomposition of the deformation gradient tensor into 'crack' and 'bulk material' components. The response of the intact bulk material is represented by a reduced deformation gradient tensor, and the opening of an embedded cohesive interface is represented by a normalized cohesive displacement-jump vector. The rotation of the embedded interface is tracked as the material deforms and as the crack opens. The distribution of the total local deformation between the bulk material and the cohesive interface components is determined by minimizing the difference between the cohesive stress and the bulk material stress projected onto the cohesive interface. The improvements to the accuracy of CDM models that incorporate the presented method over existing approaches are demonstrated for a single element subjected to simple shear deformation and for a finite element model of a unidirectional open-hole tension specimen. The material model is implemented as a VUMAT user subroutine for the Abaqus/Explicit finite element software. The presented deformation gradient decomposition method reduces the artificial load transfer across matrix cracks subjected to large shearing deformations, and avoids the spurious secondary failure modes that often occur in analyses based on conventional progressive damage models.
Towards Interactive Construction of Topical Hierarchy: A Recursive Tensor Decomposition Approach
Wang, Chi; Liu, Xueqing; Song, Yanglei; Han, Jiawei
2015-01-01
Automatic construction of user-desired topical hierarchies over large volumes of text data is a highly desirable but challenging task. This study proposes to give users freedom to construct topical hierarchies via interactive operations such as expanding a branch and merging several branches. Existing hierarchical topic modeling techniques are inadequate for this purpose because (1) they cannot consistently preserve the topics when the hierarchy structure is modified; and (2) the slow inference prevents swift response to user requests. In this study, we propose a novel method, called STROD, that allows efficient and consistent modification of topic hierarchies, based on a recursive generative model and a scalable tensor decomposition inference algorithm with theoretical performance guarantee. Empirical evaluation shows that STROD reduces the runtime of construction by several orders of magnitude, while generating consistent and quality hierarchies. PMID:26705505
Tensor-multi-scalar theories: relativistic stars and 3 + 1 decomposition
NASA Astrophysics Data System (ADS)
Horbatsch, Michael; Silva, Hector O.; Gerosa, Davide; Pani, Paolo; Berti, Emanuele; Gualtieri, Leonardo; Sperhake, Ulrich
2015-10-01
Gravitational theories with multiple scalar fields coupled to the metric and each other—a natural extension of the well studied single-scalar-tensor theories—are interesting phenomenological frameworks to describe deviations from general relativity in the strong-field regime. In these theories, the N-tuple of scalar fields takes values in a coordinate patch of an N-dimensional Riemannian target-space manifold whose properties are poorly constrained by weak-field observations. Here we introduce, for simplicity, a non-trivial model with two scalar fields and a maximally symmetric target-space manifold. Within this model we present a preliminary investigation of spontaneous scalarization for relativistic, perfect fluid stellar models in spherical symmetry. We find that the scalarization threshold is determined by the eigenvalues of a symmetric scalar-matter coupling matrix, and that the properties of strongly scalarized stellar configurations additionally depend on the target-space curvature radius. In preparation for numerical relativity simulations, we also write down the 3 + 1 decomposition of the field equations for generic tensor-multi-scalar theories.
Zhang, Zheng; Yang, Xiu; Oseledets, Ivan V.; Karniadakis, George E.; Daniel, Luca
2015-01-01
Hierarchical uncertainty quantification can reduce the computational cost of stochastic circuit simulation by employing spectral methods at different levels. This paper presents an efficient framework to simulate hierarchically some challenging stochastic circuits/systems that include high-dimensional subsystems. Due to the high parameter dimensionality, it is challenging to both extract surrogate models at the low level of the design hierarchy and to handle them in the high-level simulation. In this paper, we develop an efficient analysis of variance-based stochastic circuit/microelectromechanical systems simulator to efficiently extract the surrogate models at the low level. In order to avoid the curse of dimensionality, we employ tensor-train decomposition at the high level to construct the basis functions and Gauss quadrature points. As a demonstration, we verify our algorithm on a stochastic oscillator with four MEMS capacitors and 184 random parameters. This challenging example is efficiently simulated by our simulator at the cost of only 10 minutes in MATLAB on a regular personal computer.
Multidimensional Compressed Sensing MRI Using Tensor Decomposition-Based Sparsifying Transform
Yu, Yeyang; Jin, Jin; Liu, Feng; Crozier, Stuart
2014-01-01
Compressed Sensing (CS) has been applied in dynamic Magnetic Resonance Imaging (MRI) to accelerate the data acquisition without noticeably degrading the spatial-temporal resolution. A suitable sparsity basis is one of the key components to successful CS applications. Conventionally, a multidimensional dataset in dynamic MRI is treated as a series of two-dimensional matrices, and then various matrix/vector transforms are used to explore the image sparsity. Traditional methods typically sparsify the spatial and temporal information independently. In this work, we propose a novel concept of tensor sparsity for the application of CS in dynamic MRI, and present the Higher-order Singular Value Decomposition (HOSVD) as a practical example. Applications to three- and four-dimensional MRI data demonstrate that HOSVD simultaneously exploits the correlations within spatial and temporal dimensions. Validations based on cardiac datasets indicate that the proposed method achieved comparable reconstruction accuracy with the low-rank matrix recovery methods and outperformed the conventional sparse recovery methods. PMID:24901331
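Several of the entries above (this one, the MECG compression Letter, and the spectral CT paper) rely on the higher-order singular value decomposition. A compact NumPy sketch of plain HOSVD, with arbitrary illustrative tensor sizes rather than MRI data, is:

```python
import numpy as np

def unfold(T, mode):
    # Mode-n unfolding: bring `mode` to the front, flatten the rest.
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_product(T, M, mode):
    # Multiply T by matrix M along `mode`.
    return np.moveaxis(np.tensordot(M, T, axes=(1, mode)), 0, mode)

def hosvd(T):
    # One orthogonal factor per mode (left singular vectors of each unfolding),
    # plus a core tensor obtained by projecting T onto those factors.
    U = [np.linalg.svd(unfold(T, k), full_matrices=False)[0]
         for k in range(T.ndim)]
    core = T
    for k, Uk in enumerate(U):
        core = mode_product(core, Uk.T, k)
    return core, U

def reconstruct(core, U):
    T = core
    for k, Uk in enumerate(U):
        T = mode_product(T, Uk, k)
    return T

rng = np.random.default_rng(0)
T = rng.random((6, 7, 8))
core, U = hosvd(T)
print(np.allclose(T, reconstruct(core, U)))  # True
```

Truncating each factor to its leading columns gives the truncated HOSVD, which is what the compression and denoising applications above actually exploit.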
NASA Astrophysics Data System (ADS)
Afra, Sardar; Gildin, Eduardo
2016-09-01
Parameter estimation through robust parameterization techniques has been addressed in many works associated with history matching and inverse problems. Reservoir models are in general complex, nonlinear, and large-scale with respect to the large number of states and unknown parameters. Thus, having a practical approach to replace the original set of highly correlated unknown parameters with a non-correlated set of lower dimensionality, one that captures the most significant features of the original set, is of high importance. Furthermore, de-correlating the system's parameters while keeping the geological description intact is critical to controlling the ill-posed nature of such problems. We introduce the advantages of a new low-dimensional parameterization approach for reservoir characterization applications utilizing multilinear-algebra-based techniques like the higher order singular value decomposition (HOSVD). In tensor-based approaches like HOSVD, 2D permeability images are treated as they are, i.e., the data structure is kept intact, whereas in conventional dimensionality reduction algorithms like SVD the data have to be vectorized. Hence, compared to classical methods, higher redundancy reduction with less information loss can be achieved by decreasing the redundancies present in all dimensions. In other words, HOSVD approximation results in a more compact data representation in the least-squares sense and better geological consistency in comparison with classical algorithms. We examined the performance of the proposed parameterization technique against the SVD approach on the SPE10 benchmark reservoir model as well as on synthetic channelized permeability maps to demonstrate the capability of the proposed method. Moreover, to acquire statistical consistency, we repeat all experiments for a set of 1000 unknown geological samples and provide comparison using RMSE analysis. Results prove that, for a fixed compression ratio, the performance of the proposed approach
Biogeochemistry of Decomposition and Detrital Processing
NASA Astrophysics Data System (ADS)
Sanderman, J.; Amundson, R.
2003-12-01
Decomposition is a key ecological process that roughly balances net primary production in terrestrial ecosystems and is an essential process in resupplying nutrients to the plant community. Decomposition consists of three concurrent processes: comminution or fragmentation, leaching of water-soluble compounds, and microbial catabolism. Decomposition can also be viewed as a sequential process, what Eijsackers and Zehnder (1990) compare to a Russian matriochka doll. Soil macrofauna fragment and partially solubilize plant residues, facilitating establishment of a community of decomposer microorganisms. This decomposer community will gradually shift as the most easily degraded plant compounds are utilized and the more recalcitrant materials begin to accumulate. Given enough time and the proper environmental conditions, most naturally occurring compounds can be completely mineralized to inorganic forms. Simultaneously with mineralization, the process of humification acts to transform a fraction of the plant residues into stable soil organic matter (SOM) or humus. For reference, Schlesinger (1990) estimated that only ~0.7% of detritus eventually becomes stabilized into humus. Decomposition plays a key role in the cycling of most plant macro- and micronutrients and in the formation of humus. Figure 1 places the roles of detrital processing and mineralization within the context of the biogeochemical cycling of essential plant nutrients. Chapin (1991) found that while the atmosphere supplied 4% and mineral weathering supplied no nitrogen and <1% of phosphorus, internal nutrient recycling is the source for >95% of all the nitrogen and phosphorus uptake by tundra species in Barrow, Alaska. In a cool temperate forest, nutrient recycling accounted for 93%, 89%, 88%, and 65% of total sources for nitrogen, phosphorus, potassium, and calcium, respectively (Chapin, 1991). Figure 1. A decomposition-centric biogeochemical model of nutrient cycling. Although there is significant
Tensor Algebra Library for NVidia Graphics Processing Units
Liakh, Dmitry
2015-03-16
This is a general-purpose math library implementing basic tensor algebra operations on NVidia GPU accelerators. The library can perform basic tensor algebra operations, including tensor contractions, tensor products, and tensor additions, on NVidia GPU accelerators, asynchronously with respect to the CPU host. It supports the simultaneous use of multiple NVidia GPUs. Each asynchronous API function returns a handle which can later be used to query the completion of the corresponding tensor algebra operation on a specific GPU. The tensors participating in a particular tensor operation are assumed to be stored in the local RAM of a node or in GPU RAM. The main research area where this library can be utilized is quantum many-body theory (e.g., electronic structure theory).
NASA Astrophysics Data System (ADS)
von Larcher, Thomas; Blome, Therese; Klein, Rupert; Schneider, Reinhold; Wolf, Sebastian; Huber, Benjamin
2016-04-01
Handling high-dimensional data sets such as those that occur, e.g., in turbulent flows or in certain types of multiscale behaviour in the Geosciences is one of the big challenges in numerical analysis and scientific computing. A suitable solution is to represent those large data sets in an appropriately compact form. In this context, tensor product decomposition methods are currently emerging as an important tool. One reason is that these methods often enable one to attack high-dimensional problems successfully; another is that they allow for very compact representations of large data sets. We follow the novel Tensor-Train (TT) decomposition method to support the development of improved understanding of multiscale behavior and of compact storage schemes for solutions of such problems. One long-term goal of the project is the construction of a self-consistent closure for Large Eddy Simulations (LES) of turbulent flows that explicitly exploits the tensor product approach's capability of capturing self-similar structures. Secondly, we focus on a mixed deterministic-stochastic subgrid-scale modelling strategy currently under development for application in Finite Volume LES codes. Advanced methods of time-series analysis for the data-based construction of stochastic models with inherently non-stationary statistical properties, together with concepts of information theory based on a modified Akaike information criterion and on the Bayesian information criterion for model discrimination, are used to construct surrogate models for the non-resolved flux fluctuations. Vector-valued auto-regressive models with external influences form the basis of the modelling approach [1], [2], [4]. Here, we present the reconstruction capabilities of the two modeling approaches tested against 3D turbulent channel flow data computed by direct numerical simulation (DNS) for an incompressible, isothermal fluid at Reynolds number Reτ = 590 (computed by [3]). References [1] I
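The TT format mentioned above can be sketched in a few lines of NumPy: successive SVDs of reshaped unfoldings yield a chain of third-order cores. The function names and the singular-value truncation rule below are illustrative assumptions (a generic textbook TT-SVD construction, not the project's code):

```python
import numpy as np

def tt_svd(tensor, rel_eps=1e-10):
    """Decompose a d-way array into Tensor-Train cores via successive truncated SVDs."""
    shape = tensor.shape
    d = len(shape)
    cores = []
    rank = 1
    mat = tensor.reshape(rank * shape[0], -1)
    for k in range(d - 1):
        U, s, Vt = np.linalg.svd(mat, full_matrices=False)
        # keep singular values above a relative threshold (compression vs. accuracy)
        r_new = max(1, int(np.sum(s > rel_eps * s[0])))
        cores.append(U[:, :r_new].reshape(rank, shape[k], r_new))
        mat = (s[:r_new, None] * Vt[:r_new]).reshape(r_new * shape[k + 1], -1)
        rank = r_new
    cores.append(mat.reshape(rank, shape[-1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract the TT cores back into the full array."""
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=([-1], [0]))
    return out.squeeze(axis=(0, -1))
```

Raising `rel_eps` truncates more aggressively, trading reconstruction accuracy for compression, which is exactly the storage/accuracy trade-off exploited for large flow data sets.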
Middleton, Beth A.
2014-01-01
A cornerstone of ecosystem ecology, decomposition was recognized as a fundamental process driving the exchange of energy in ecosystems by early ecologists such as Lindeman (1942) and Odum (1960). In the history of ecology, studies of decomposition were incorporated into the International Biological Program in the 1960s to compare the nature of organic matter breakdown in various ecosystem types. Such studies still have an important role in ecological research today. More recent refinements have brought debates on the relative roles of microbes, invertebrates, and the environment in the breakdown and release of carbon into the atmosphere, as well as on how nutrient cycling, production, and other ecosystem processes regulated by decomposition may shift with climate change. Therefore, this bibliography examines the primary literature related to organic matter breakdown, but it also explores topics in which decomposition plays a key supporting role, including vegetation composition, latitudinal gradients, altered ecosystems, anthropogenic impacts, carbon storage, and climate change models. Knowledge of these topics is relevant both to the study of ecosystem ecology and to projections of future conditions for human societies.
Leistritz, Lutz; Witte, Herbert; Schiecke, Karin
2015-01-01
Quantification of functional connectivity in physiological networks is frequently performed by means of time-variant partial directed coherence (tvPDC), based on time-variant multivariate autoregressive models. The principal advantage of tvPDC lies in its simultaneous combination of directionality, time variance, and frequency selectivity, offering a more differentiated view into complex brain networks. Yet the advantages specific to tvPDC also produce a large number of results, leading to serious problems of interpretability. To counter this issue, we propose the decomposition of multi-dimensional tvPDC results into a sum of rank-1 outer products. This leads to a data condensation which enables an advanced interpretation of the results. Furthermore, it is thereby possible to uncover inherent interaction patterns of induced neuronal subsystems by limiting the decomposition to several relevant channels, while retaining the global influence determined by the preceding multivariate AR estimation and tvPDC calculation over the entire scalp. Finally, comparison between several subjects is considerably easier, as individual tvPDC results are summarized within a comprehensive model equipped with subject-specific loading coefficients. A proof of principle of the approach is provided by means of simulated data; EEG data from an experiment on visual evoked potentials are used to demonstrate the applicability to real data. PMID:26046537
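For a three-way array, the rank-1 outer-product decomposition proposed above is the canonical polyadic (CP/PARAFAC) model. A minimal alternating-least-squares sketch (our own illustrative implementation, not the authors' code):

```python
import numpy as np

def khatri_rao(U, V):
    """Column-wise Kronecker product: row (i*V.shape[0] + j) holds U[i, :] * V[j, :]."""
    return (U[:, None, :] * V[None, :, :]).reshape(-1, U.shape[1])

def cp_als(X, rank, n_iter=200, seed=0):
    """Fit X (I x J x K) as a sum of `rank` rank-1 terms a_r (x) b_r (x) c_r via ALS."""
    rng = np.random.default_rng(seed)
    I, J, K = X.shape
    A = rng.standard_normal((I, rank))
    B = rng.standard_normal((J, rank))
    C = rng.standard_normal((K, rank))
    X0 = X.reshape(I, -1)                      # mode-1 unfolding
    X1 = np.moveaxis(X, 1, 0).reshape(J, -1)   # mode-2 unfolding
    X2 = np.moveaxis(X, 2, 0).reshape(K, -1)   # mode-3 unfolding
    for _ in range(n_iter):
        A = X0 @ np.linalg.pinv(khatri_rao(B, C)).T
        B = X1 @ np.linalg.pinv(khatri_rao(A, C)).T
        C = X2 @ np.linalg.pinv(khatri_rao(A, B)).T
    return A, B, C
```

Each column triple (a_r, b_r, c_r) is one rank-1 "interaction pattern"; for tvPDC-style arrays the modes would correspond to, e.g., channel pair, time, and frequency.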
Adaptation of motor imagery EEG classification model based on tensor decomposition
NASA Astrophysics Data System (ADS)
Li, Xinyang; Guan, Cuntai; Zhang, Haihong; Keng Ang, Kai; Ong, Sim Heng
2014-10-01
Objective. Session-to-session nonstationarity is inherent in brain-computer interfaces based on electroencephalography. The objective of this paper is to quantify the mismatch between the training model and test data caused by nonstationarity and to adapt the model towards minimizing the mismatch. Approach. We employ a tensor model to estimate the mismatch in a semi-supervised manner, and the estimate is regularized in the discriminative objective function. Main results. The performance of the proposed adaptation method was evaluated on a dataset recorded from 16 subjects performing motor imagery tasks on different days. The classification results validated the advantage of the proposed method in comparison with other regularization-based or spatial filter adaptation approaches. Experimental results also showed that there is a significant correlation between the quantified mismatch and the classification accuracy. Significance. The proposed method approached the nonstationarity issue from the perspective of data-model mismatch, which is more direct than data variation measurement. The results also demonstrated that the proposed method is effective in enhancing the performance of the feature extraction model.
NASA Technical Reports Server (NTRS)
Bergan, Andrew C.; Leone, Frank A., Jr.
2016-01-01
A new model is proposed that represents the kinematics of kink-band formation and propagation within the framework of a mesoscale continuum damage mechanics (CDM) model. The model uses the recently proposed deformation gradient decomposition approach to represent a kink band as a displacement jump via a cohesive interface that is embedded in an elastic bulk material. The model is capable of representing the combination of matrix failure in the frame of a misaligned fiber and instability due to shear nonlinearity. In contrast to conventional linear or bilinear strain softening laws used in most mesoscale CDM models for longitudinal compression, the constitutive response of the proposed model includes features predicted by detailed micromechanical models. These features include: 1) the rotational kinematics of the kink band, 2) an instability when the peak load is reached, and 3) a nonzero plateau stress under large strains.
Nested Vector-Sensor Array Processing via Tensor Modeling (Briefing Charts)
2014-04-24
the matrix singular value decomposition (SVD) [3]. • The HOSVD of a tensor T can be written as T = K ×1 U1 ×2 U2 ×3 U3 ×4 U4, (4) where U1, U3 ∈ C^(N̄×N̄) and U2, U4 ∈ C^(Nc×Nc) are orthonormal matrices provided by the SVD of the i-mode matricization of the tensor T: T_(i) = U_i Λ_i V_i^H. K ∈ C^(N̄×Nc×N̄×Nc) is
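The HOSVD of Eq. (4) can be sketched directly: each factor U_i is the left singular matrix of the mode-i matricization, and the core K is obtained by applying the conjugate transposes back to T (illustrative NumPy code, not from the briefing charts):

```python
import numpy as np

def mode_unfold(T, i):
    """Mode-i matricization: mode i becomes the rows."""
    return np.moveaxis(T, i, 0).reshape(T.shape[i], -1)

def mode_multiply(T, M, i):
    """i-mode product T x_i M (matrix M acts along mode i)."""
    return np.moveaxis(np.tensordot(M, np.moveaxis(T, i, 0), axes=1), 0, i)

def hosvd(T):
    """HOSVD: T = K x1 U1 x2 U2 ..., with Ui from the SVD of the mode-i unfolding."""
    Us = [np.linalg.svd(mode_unfold(T, i), full_matrices=False)[0]
          for i in range(T.ndim)]
    K = T
    for i, U in enumerate(Us):
        K = mode_multiply(K, U.conj().T, i)   # project onto the factor bases
    return K, Us
```

Multiplying the core back by the factors (instead of their conjugate transposes) reconstructs T, mirroring Eq. (4).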
A patch-based tensor decomposition algorithm for M-FISH image classification.
Wang, Min; Huang, Ting-Zhu; Li, Jingyao; Wang, Yu-Ping
2016-05-03
Multiplex-fluorescence in situ hybridization (M-FISH) is a chromosome imaging technique which can be used to detect chromosomal abnormalities such as translocations, deletions, duplications, and inversions. Chromosome classification from M-FISH imaging data is a key step in implementing the technique. In the classified M-FISH image, each pixel in a chromosome is labeled with a class index and drawn with a pseudo-color so that geneticists can easily conduct diagnosis, for example, identifying chromosomal translocations by examining color changes between chromosomes. However, the information of pixels in a neighborhood is often overlooked by existing approaches. In this work, we assume that the pixels in a patch belong to the same class and use the patch to represent the center pixel's class information, so that the correlations of neighboring pixels and the structural information across different spectral channels can be exploited for classification. On the basis of this assumption, we propose a patch-based classification algorithm using the higher-order singular value decomposition (HOSVD). The developed method has been tested on a comprehensive M-FISH database that we established, demonstrating improved performance. When compared with pixel-wise M-FISH image classifiers such as fuzzy c-means clustering (FCM), adaptive fuzzy c-means clustering (AFCM), improved adaptive fuzzy c-means clustering (IAFCM), and sparse representation classification (SparseRC) methods, the proposed method gave the highest correct classification ratio (CCR), which can translate into improved diagnosis of genetic diseases and cancers. © 2016 International Society for Advancement of Cytometry.
2012-12-01
INTERIM REPORT: Tensor Invariant Processing for Munitions/Clutter Classifications. Interim Report on SNR and Background Leveling Requirements. ... Camp Beale in 2011 and found no impact due to signal-to-noise ratio (SNR) and background leveling effects. However, the minimum polarizability
Tracking Hierarchical Processing in Morphological Decomposition with Brain Potentials
ERIC Educational Resources Information Center
Lavric, Aureliu; Elchlepp, Heike; Rastle, Kathleen
2012-01-01
One important debate in psycholinguistics concerns the nature of morphological decomposition processes in visual word recognition (e.g., darkness = {dark} + {-ness}). One theory claims that these processes arise during orthographic analysis and prior to accessing meaning (Rastle & Davis, 2008), and another argues that these processes arise through…
Theoretical estimate on tensor-polarization asymmetry in proton-deuteron Drell-Yan process
NASA Astrophysics Data System (ADS)
Kumano, S.; Song, Qin-Tao
2016-09-01
Tensor-polarized parton distribution functions are new quantities in spin-1 hadrons such as the deuteron, and they could probe new quark-gluon dynamics in hadron and nuclear physics. In charged-lepton deep inelastic scattering, they are studied via the twist-2 structure functions b1 and b2. The HERMES Collaboration found unexpectedly large b1 values compared to a naive theoretical expectation based on the standard deuteron model. The situation should be significantly improved in the near future by an approved experiment to measure b1 at the Thomas Jefferson National Accelerator Facility (JLab). There is also an interesting indication in the HERMES result that a finite antiquark tensor polarization exists. It could play an important role in clarifying the mechanism of tensor structure at the quark-gluon level. The tensor-polarized antiquark distributions are not easily determined from charged-lepton deep inelastic scattering; however, they can be measured in a proton-deuteron Drell-Yan process with a tensor-polarized deuteron target. In this article, we estimate the tensor-polarization asymmetry for a possible Fermilab Main Injector experiment by using optimum tensor-polarized parton distribution functions that explain the HERMES measurement. We find that the asymmetry is typically a few percent. If it is measured, it could probe new hadron physics, and such studies could create an interesting field of high-energy spin physics. In addition, we find that a significant tensor-polarized gluon distribution should exist due to Q2 evolution, even if it were zero at a low Q2 scale. The tensor-polarized gluon distribution has never been observed, so it is an interesting future project.
Decomposition: A Strategy for Query Processing.
ERIC Educational Resources Information Center
Wong, Eugene; Youssefi, Karel
Multivariable queries can be processed in the database management system INGRES. The general procedure is to decompose the query into a sequence of one-variable queries using two processes. One process is reduction, which requires breaking off components of the query that are joined to it by a single variable. The other process,…
Relativized hierarchical decomposition of Markov decision processes.
Ravindran, B
2013-01-01
Reinforcement Learning (RL) is a popular paradigm for sequential decision making under uncertainty. A typical RL algorithm operates with only limited knowledge of the environment and with limited feedback on the quality of the decisions. To operate effectively in complex environments, learning agents require the ability to form useful abstractions, that is, the ability to selectively ignore irrelevant details. It is difficult to derive a single representation that is useful for a large problem setting. In this chapter, we describe a hierarchical RL framework that incorporates an algebraic framework for modeling task-specific abstraction. The basic notion that we will explore is that of a homomorphism of a Markov Decision Process (MDP). We mention various extensions of the basic MDP homomorphism framework in order to accommodate different commonly understood notions of abstraction, namely, aspects of selective attention. Parts of the work described in this chapter have been reported earlier in several papers (Narayanmurthy and Ravindran, 2007, 2008; Ravindran and Barto, 2002, 2003a,b; Ravindran et al., 2007).
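The homomorphism condition mentioned above can be made concrete for the transition probabilities: states mapped to the same abstract state must have matching block-aggregated dynamics under corresponding actions. The checker below is a deliberately simplified toy formulation (it ignores the reward condition and the state-dependent action recoding of the full framework; all names are our own):

```python
def block_probs(P, f, s, a):
    """Aggregate next-state probabilities of (s, a) over abstract blocks under f."""
    out = {}
    for s2, p in P[s][a].items():
        out[f[s2]] = out.get(f[s2], 0.0) + p
    return out

def is_homomorphism(P, f, g):
    """P[s][a][s'] = transition prob; f: state -> abstract state; g: (s, a) -> abstract action.
    Simplified homomorphism condition: the block-aggregated transition probabilities
    must depend only on the abstract pair (f(s), g(s, a))."""
    seen = {}
    for s in P:
        for a in P[s]:
            key = (f[s], g[(s, a)])
            bp = block_probs(P, f, s, a)
            if key in seen and seen[key] != bp:
                return False
            seen.setdefault(key, bp)
    return True
```

States that pass this check can be collapsed into a single abstract state, which is the sense in which a homomorphism licenses "selectively ignoring irrelevant details."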
The ergodic decomposition of stationary discrete random processes
NASA Technical Reports Server (NTRS)
Gray, R. M.; Davisson, L. D.
1974-01-01
The ergodic decomposition is discussed, and a version focusing on the structure of individual sample functions of stationary processes is proved for the special case of discrete-time random processes with discrete alphabets. The result is stronger in this case than the usual theorem, and the proof is both intuitive and simple. Estimation-theoretic and information-theoretic interpretations are developed and applied to prove existence theorems for universal source codes, both noiseless and with a fidelity criterion.
Analysis of benzoquinone decomposition in solution plasma process
NASA Astrophysics Data System (ADS)
Bratescu, M. A.; Saito, N.
2016-01-01
The decomposition of p-benzoquinone (p-BQ) in Solution Plasma Processing (SPP) was analyzed by Coherent Anti-Stokes Raman Spectroscopy (CARS), monitoring the change in the anti-Stokes signal intensity of the vibrational transitions of the molecule during and after SPP. At the very beginning of the SPP treatment, the CARS signal intensities of the ring vibrational molecular transitions increased under the influence of the electric field of the plasma. The results show that the plasma influences the p-BQ molecules in two ways: (i) it produces a polarization and an orientation of the molecules in the local electric field of the plasma, and (ii) the gas-phase plasma supplies hydrogen and hydroxyl radicals to the liquid phase, which reduce or oxidize the molecules, respectively, generating different carboxylic acids. The decomposition of p-BQ after SPP was confirmed by UV-visible absorption spectroscopy and liquid chromatography.
Catalytic hydrothermal processing of microalgae: decomposition and upgrading of lipids.
Biller, P; Riley, R; Ross, A B
2011-04-01
Hydrothermal processing of high-lipid feedstock such as microalgae is an alternative method of oil extraction which has obvious benefits for biomass with high moisture content. A range of microalgae and lipids extracted from terrestrial oil seed have been processed at 350 °C and pressures of 150-200 bar in water. Hydrothermal liquefaction is shown to convert the triglycerides to fatty acids and alkanes in the presence of certain heterogeneous catalysts. This investigation compared the composition of lipids and free fatty acids obtained by solvent extraction with those obtained by hydrothermal processing. The initial decomposition products include free fatty acids and glycerol, and the potential for de-oxygenation using heterogeneous catalysts has been investigated. The results indicate that the bio-crude yields from the liquefaction of microalgae increased slightly with the use of heterogeneous catalysts, while the higher heating value (HHV) and the level of de-oxygenation increased by up to 10%.
A decomposition of irreversible diffusion processes without detailed balance
NASA Astrophysics Data System (ADS)
Qian, Hong
2013-05-01
As a generalization of deterministic, nonlinear conservative dynamical systems, a notion of canonical conservative dynamics with respect to a positive, differentiable stationary density ρ(x) is introduced: ẋ = j(x), in which ∇·(ρ(x)j(x)) = 0. Such systems have a conserved "generalized free energy function" F[u] = ∫ u(x,t) ln(u(x,t)/ρ(x)) dx in phase space, with a density flow u(x,t) satisfying ∂u/∂t = -∇·(ju). Any general stochastic diffusion process without detailed balance, in terms of its Fokker-Planck equation, can be decomposed into a reversible diffusion process with detailed balance and a canonical conservative dynamics. This decomposition can be rigorously established in a function space with inner product defined as ⟨ϕ, ψ⟩ = ∫ ρ⁻¹(x)ϕ(x)ψ(x) dx. Furthermore, a law for balancing F[u] can be obtained: the non-positive dF[u(x,t)]/dt = Ein(t) - ep(t), where the "source" Ein(t) ⩾ 0 and the "sink" ep(t) ⩾ 0 are known as house-keeping heat and entropy production, respectively. A reversible diffusion has Ein(t) = 0. For a linear (Ornstein-Uhlenbeck) diffusion process, our decomposition is equivalent to the previous approaches developed by Graham and Ao, as well as to the theory of large deviations. In terms of two different formulations of time reversal for the same stochastic process, the meanings of dissipative and conservative stationary dynamics are discussed.
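Concretely, the decomposition amounts to splitting the Fokker-Planck drift into a detailed-balance part and a divergence-free part. A sketch in our own notation (assuming, for simplicity, a constant diffusion matrix D):

```latex
\partial_t u = \nabla\cdot\bigl(D\,\nabla u - b(x)\,u\bigr), \qquad
\nabla\cdot\bigl(D\,\nabla\rho - b\,\rho\bigr) = 0 \quad \text{(stationarity of } \rho\text{)},

b(x) = \underbrace{D\,\nabla\ln\rho(x)}_{\text{reversible part (detailed balance)}} + \; j(x),
\qquad
\nabla\cdot\bigl(\rho(x)\,j(x)\bigr) = -\,\nabla\cdot\bigl(D\,\nabla\rho - b\,\rho\bigr) = 0,
```

so the residual drift j(x) carries the canonical conservative dynamics ẋ = j(x), while the gradient part alone would satisfy detailed balance with respect to ρ.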
ENVIRONMENTAL ASSESSMENT OF THE BASE CATALYZED DECOMPOSITION (BCD) PROCESS
This report summarizes laboratory-scale, pilot-scale, and field performance data on the BCD (Base Catalyzed Decomposition) technology, collected to date by various governmental, academic, and private organizations.
CO2 decomposition using electrochemical process in molten salts
NASA Astrophysics Data System (ADS)
Otake, Koya; Kinoshita, Hiroshi; Kikuchi, Tatsuya; Suzuki, Ryosuke O.
2012-08-01
The electrochemical decomposition of CO2 gas to carbon and oxygen gas in LiCl-Li2O and CaCl2-CaO molten salts was studied. This process consists of the electrochemical reduction of Li2O and CaO, as well as the thermal reduction of CO2 gas by the respective metallic Li and Ca. Two kinds of ZrO2 solid electrolytes were tested as oxygen ion conductors, and the electrolytes removed oxygen ions from the molten salts to the outside of the reactor. After electrolysis in both salts, aggregations of nanometer-scale amorphous carbon and rod-like graphite crystals were observed by transmission electron microscopy. When 9.7% CO2-Ar mixed gas was blown into the LiCl-Li2O and CaCl2-CaO molten salts, the current efficiency was evaluated to be 89.7% and 78.5%, respectively, from the exhaust gas analysis and the supplied charge. When a solid electrolyte with higher ionic conductivity was used, the current and the carbon production increased. The rate-determining step was found to be the diffusion of oxygen ions into the ZrO2 solid electrolyte.
Tensor-based Dictionary Learning for Spectral CT Reconstruction
Zhang, Yanbo; Wang, Ge
2016-01-01
Spectral computed tomography (CT) produces an energy-discriminative attenuation map of an object, extending a conventional image volume with a spectral dimension. In spectral CT, an image can be sparsely represented in each of multiple energy channels, and the channel images are highly correlated. Based on these characteristics, we propose a tensor-based dictionary learning method for spectral CT reconstruction. In our method, tensor patches are extracted from an image tensor, which is reconstructed using filtered backprojection (FBP), to form a training dataset. With the CANDECOMP/PARAFAC decomposition, a tensor-based dictionary is trained in which each atom is a rank-one tensor. The trained dictionary is then used to sparsely represent image tensor patches during an iterative reconstruction process, and an alternating minimization scheme is adapted for optimization. The effectiveness of our proposed method is validated with both numerically simulated and real preclinical mouse datasets. The results demonstrate that the proposed tensor-based method generally produces superior image quality and leads to more accurate material decomposition than currently popular methods. PMID:27541628
Wang, Kunping; Guo, Jinsong; Yang, Min; Junji, Hirotsuji; Deng, Rongsen
2009-03-15
The decomposition of two haloacetic acids (HAAs), dichloroacetic acid (DCAA) and trichloroacetic acid (TCAA), in water was studied by means of the single oxidants ozone and UV radiation, and by the advanced oxidation processes (AOPs) constituted by the combinations O3/UV, H2O2/UV, O3/H2O2, and O3/H2O2/UV. The concentrations of HAAs were analyzed at specified time intervals to elucidate their decomposition. Single O3 or UV did not result in perceptible decomposition of HAAs within the applied reaction time. O3/UV proved the most suitable of the six oxidation methods for the decomposition of DCAA and TCAA in water. Decomposition of DCAA was easier than that of TCAA by AOPs. For O3/UV in the semi-continuous mode, the effective utilization rate of ozone for HAA decomposition decreased with ozone addition. The kinetics of HAA decomposition by O3/UV and the influence of coexistent humic acids and HCO3- on the decomposition process were investigated. The decomposition of the HAAs by O3/UV followed pseudo-first-order kinetics under constant initial dissolved O3 concentration and fixed UV radiation. The pseudo-first-order rate constant for the decomposition of DCAA was more than four times that for TCAA. Humic acids caused H2O2 accumulation and a decrease in the rate constants of HAA decomposition in the O3/UV process. The rate constants for the decomposition of DCAA and TCAA decreased by 41.1% and 23.8%, respectively, when humic acids were added at a concentration of 1.2 mg TOC/L. The rate constants decreased by 43.5% and 25.9%, respectively, at an HCO3- concentration of 1.0 mmol/L.
Predictability of the Dynamic Mode Decomposition in Coastal Processes
NASA Astrophysics Data System (ADS)
Wang, Ruo-Qian; Herdman, Liv; Stacey, Mark; Barnard, Patrick
2016-11-01
Dynamic Mode Decomposition (DMD) is a model order reduction technique that helps reduce the complexity of computational models. DMD is frequently easier to interpret physically than the Proper Orthogonal Decomposition. DMD also produces an eigenvalue for each mode that establishes the mode's trend, its rate of growth or decay, but the original DMD cannot produce the contributing weights of the modes. The challenge is to select the important modes with which to build a reduced order model. DMD variants have been developed to estimate the weight of each mode. One popular method is Optimal Mode Decomposition (OMD), which decomposes the data matrix into a product of the DMD modes, a diagonal weight matrix, and a Vandermonde matrix. The weight matrix can be used to rank the importance of the mode contributions and ultimately leads to a reduced order model for prediction and control purposes. We are currently applying DMD to a numerical simulation of the San Francisco Bay, which features complicated coastal geometry, multiple frequency components, and high periodicity. Since DMD defines modes with specific frequencies, we expected DMD to produce a good approximation, but preliminary results show that the predictability of the DMD is poor if unimportant modes are dropped according to the OMD. We are currently testing other DMD variants and will report our findings in the presentation.
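For reference, the core of exact DMD fits a best-fit linear operator between successive snapshot matrices and reads off per-mode eigenvalues. A minimal NumPy sketch (our own generic illustration, unrelated to the Bay simulation):

```python
import numpy as np

def dmd(X, Xp, r):
    """Exact DMD: fit Xp ~ A X with A restricted to rank r.
    X, Xp hold snapshots as columns, Xp shifted one step ahead of X.
    Returns the mode eigenvalues and the DMD modes."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    U, s, Vt = U[:, :r], s[:r], Vt[:r]
    Atilde = (U.conj().T @ Xp @ Vt.conj().T) / s    # projected low-rank operator
    evals, W = np.linalg.eig(Atilde)
    modes = (Xp @ Vt.conj().T / s) @ W              # exact DMD modes
    return evals, modes
```

The eigenvalue magnitudes give each mode's growth (|λ| > 1) or decay (|λ| < 1) per time step; what exact DMD does not supply is a weight ranking, which is where OMD-style variants come in.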
Process characteristics and layout decomposition of self-aligned sextuple patterning
NASA Astrophysics Data System (ADS)
Kang, Weiling; Chen, Yijian
2013-03-01
Self-aligned sextuple patterning (SASP) is a promising technique for scaling the half pitch of IC features down to the sub-10 nm regime. In this paper, the process characteristics and decomposition methods of both positive-tone (pSASP) and negative-tone (nSASP) techniques are discussed, and a variety of decomposition rules are studied. By using a node-grouping method, the nSASP layout conflict graph can be significantly simplified. A graph searching and coloring algorithm is developed for feature/color assignment. We demonstrate that, by generating assisting mandrels, nSASP layout decomposition can be reduced to an nSADP decomposition problem. The proposed decomposition algorithm is successfully verified with several commonly used 2-D layout examples.
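The abstract does not reproduce the actual coloring algorithm; as a much-simplified stand-in, feature/color assignment on a conflict graph reduces, in the two-color case, to bipartiteness checking via BFS (illustrative code with a hypothetical adjacency-list input, not the paper's method):

```python
from collections import deque

def two_color(adj):
    """BFS 2-coloring of a conflict graph given as {node: [conflicting nodes]}.
    Returns {node: 0 or 1}, or None if an odd cycle makes 2-coloring impossible."""
    color = {}
    for start in adj:
        if start in color:
            continue
        color[start] = 0
        q = deque([start])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in color:
                    color[v] = 1 - color[u]   # conflicting features get different masks
                    q.append(v)
                elif color[v] == color[u]:
                    return None               # odd cycle: a coloring conflict
    return color
```

Node grouping, as described in the paper, shrinks this graph before coloring; assisting mandrels resolve the cases where a direct coloring fails.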
C++ tensor toolbox user manual.
Plantenga, Todd D.; Kolda, Tamara Gibson
2012-04-01
The C++ Tensor Toolbox is a software package for computing tensor decompositions. It is based on the Matlab Tensor Toolbox, and is particularly optimized for sparse data sets. This user manual briefly overviews tensor decomposition mathematics, software capabilities, and installation of the package. Tensors (also known as multidimensional arrays or N-way arrays) are used in a variety of applications ranging from chemometrics to network analysis. The Tensor Toolbox provides classes for manipulating dense, sparse, and structured tensors in C++. The Toolbox compiles into libraries and is intended for use with custom applications written by users.
Tak, Hyeong Jun; Kim, Jin Hyun; Son, Su Min
2016-01-01
We investigated the radiologic developmental process of the arcuate fasciculus (AF) using subcomponent diffusion tensor imaging (DTI) analysis in typically developing volunteers. DTI data were acquired from 96 consecutive typically developing children, aged 0–14 years. AF subcomponents, including the posterior, anterior, and direct AF tracts, were analyzed. Success rates of analysis (AR) and fractional anisotropy (FA) values of each subcomponent tract were measured and compared. The AR of all subcomponent tracts, except the posterior, showed a significant increase with age (P < 0.05). Subcomponent tracts had a specific developmental sequence: first the posterior AF tract, second the anterior AF tract, and last the direct AF tract in the same hemisphere. FA values of all subcomponent tracts, except the right direct AF tract, correlated with subject age (P < 0.05). Higher AR and FA values were observed in female subjects than in males in the young age (0–2 years) group (P < 0.05). The direct AF tract showed leftward hemispheric asymmetry, and this tendency was more consolidated in the older age (3–14 years) group (P < 0.05). These findings demonstrate the radiologic developmental patterns of the AF from infancy to adolescence using subcomponent DTI analysis. The AF showed a specific developmental sequence, a sex difference at younger ages, and hemispheric asymmetry at older ages. PMID:27482222
Azo dye Acid Red 27 decomposition kinetics during ozone oxidation and adsorption processes.
Beak, Mi H; Ijagbemi, Christianah O; Kim, Dong S
2009-05-01
To elucidate the effects of ozone dosage, catalysts, and temperature on the azo dye decomposition rate in treatment processes, the decomposition kinetics of Acid Red 27 by ozone was investigated. Acid Red 27 decomposition followed first-order kinetics, with complete dye discoloration within 20 min of ozone reaction. The dye decay rate increased as ozone dosage increased. Among Mn, Zn, and Ni tested as transition metal catalysts during the ozone oxidation process, Mn displayed the greatest catalytic effect, with a significant increase in the rate of decomposition. The rate of decomposition decreased with increasing temperature up to 40 degrees C; beyond 40 degrees C, the decomposition rate increased with temperature. The FT-IR spectra in the range of 1,000-1,800 cm(-1) revealed specific band variations after the ozone oxidation process, indicating structural changes traceable to cleavage of bonds in the benzene ring, the sulphite salt group, and the C-N bond located beside the -N=N- bond. In the 1H-NMR spectra, the breakdown of the benzene ring was indicated by the disappearance of the 10 H peaks at 7-8 ppm and the emergence of a new peak at 6.16 ppm. In a parallel batch test of azo dye Acid Red 27 adsorption onto activated carbon, a low adsorption capacity was observed when adsorption was carried out after three minutes of ozone injection, whereas the adsorption process without ozone injection yielded a high adsorption capacity.
Complex variational mode decomposition for signal processing applications
NASA Astrophysics Data System (ADS)
Wang, Yanxue; Liu, Fuyun; Jiang, Zhansi; He, Shuilong; Mo, Qiuyun
2017-03-01
Complex-valued signals occur in many areas of science and engineering and are thus of fundamental interest. In this work, the complex variational mode decomposition (CVMD) is proposed as a natural and generic extension of the original VMD algorithm to the analysis of complex-valued data. Moreover, the equivalent filter bank structure of the CVMD in the presence of white noise, and the effect of the center-frequency initialization on the filter bank property, are both investigated via numerical experiments. Benefiting from the advantages of the CVMD algorithm, its bi-directional Hilbert time-frequency spectrum is developed as well, in which the positive and negative frequency components are formulated on the positive and negative frequency planes separately. Several applications to real-world complex-valued signals support the analysis.
Nonlinear color-image decomposition for image processing of a digital color camera
NASA Astrophysics Data System (ADS)
Saito, Takahiro; Aizawa, Haruya; Yamada, Daisuke; Komatsu, Takashi
2009-01-01
This paper extends the BV (bounded variation)-G and/or BV-L1 variational nonlinear image-decomposition approaches, which are considered useful for the image processing of a digital color camera, to genuine color-image decomposition approaches. To utilize inter-channel color cross-correlations, the paper first introduces TV (total variation) norms of color differences and TV norms of color sums into the BV-G and/or BV-L1 energy functionals, and then derives denoising-type decomposition algorithms with an over-complete wavelet transform by applying the Besov-norm approximation to the variational problems. Our methods decompose a noisy color image without producing undesirable low-frequency colored artifacts in its separated BV component, and they achieve desirable high-quality color-image decomposition, which is very robust against colored random noise.
Tensor SVD and distributed control
NASA Astrophysics Data System (ADS)
Iyer, Ram V.
2005-05-01
The (approximate) diagonalization of symmetric matrices has been studied in the past in the context of distributed control of an array of collocated smart actuators and sensors. For distributed control using a two dimensional array of actuators and sensors, it is more natural to describe the system transfer function as a complex tensor rather than a complex matrix. In this paper, we study the problem of approximately diagonalizing a transfer function tensor via the tensor singular value decomposition (TSVD) for a locally spatially invariant system, and study its application along with the technique of recursive orthogonal transforms to achieve distributed control for a smart structure.
Stage efficiency in the analysis of thermochemical water decomposition processes
NASA Technical Reports Server (NTRS)
Conger, W. L.; Funk, J. E.; Carty, R. H.; Soliman, M. A.; Cox, K. E.
1976-01-01
The procedure for analyzing thermochemical water-splitting processes using the figure of merit is expanded to include individual stage efficiencies and loss coefficients. The use of these quantities to establish the thermodynamic insufficiencies of each stage is shown. A number of processes are used to illustrate these concepts and procedures and to demonstrate the facility with which process steps contributing most to the cycle efficiency are found. The procedure allows attention to be directed to those steps of the process where the greatest increase in total cycle efficiency can be obtained.
Muravyev, Nikita V; Koga, Nobuyoshi; Meerov, Dmitry B; Pivkina, Alla N
2017-01-25
This study focused on kinetic modeling of a specific type of multistep heterogeneous reaction comprising exothermic and endothermic reaction steps, as exemplified by the practical kinetic analysis of the experimental kinetic curves for the thermal decomposition of molten ammonium dinitramide (ADN). It is known that the thermal decomposition of ADN occurs as a consecutive two step mass-loss process comprising the decomposition of ADN and subsequent evaporation/decomposition of in situ generated ammonium nitrate. These reaction steps provide exothermic and endothermic contributions, respectively, to the overall thermal effect. The overall reaction process was deconvoluted into two reaction steps using simultaneously recorded thermogravimetry and differential scanning calorimetry (TG-DSC) curves by considering the different physical meanings of the kinetic data derived from TG and DSC by P value analysis. The kinetic data thus separated into exothermic and endothermic reaction steps were kinetically characterized using kinetic computation methods including isoconversional method, combined kinetic analysis, and master plot method. The overall kinetic behavior was reproduced as the sum of the kinetic equations for each reaction step considering the contributions to the rate data derived from TG and DSC. During reproduction of the kinetic behavior, the kinetic parameters and contributions of each reaction step were optimized using kinetic deconvolution analysis. As a result, the thermal decomposition of ADN was successfully modeled as partially overlapping exothermic and endothermic reaction steps. The logic of the kinetic modeling was critically examined, and the practical usefulness of phenomenological modeling for the thermal decomposition of ADN was illustrated to demonstrate the validity of the methodology and its applicability to similar complex reaction processes.
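The deconvolution idea above — modeling an overall kinetic curve as a weighted sum of overlapping reaction steps — can be sketched numerically. The following is a minimal illustration, not the authors' fitted ADN model: two first-order Arrhenius steps are integrated along a linear heating ramp and combined with fixed contribution weights. All parameter values (pre-exponential factors, activation energies, contributions `c1`/`c2`) are hypothetical.

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def step_rate(alpha, T, A, Ea):
    """First-order rate law: d(alpha)/dt = A * exp(-Ea/RT) * (1 - alpha)."""
    return A * np.exp(-Ea / (R * T)) * (1.0 - alpha)

beta = 5.0 / 60.0                 # heating rate, K/s (5 K/min)
T = np.linspace(400.0, 500.0, 2000)
dt = (T[1] - T[0]) / beta         # time step implied by linear heating

def integrate(A, Ea):
    """Explicit-Euler integration of one reaction step over the ramp."""
    alpha = np.zeros_like(T)
    for i in range(1, T.size):
        alpha[i] = min(1.0, alpha[i - 1] + step_rate(alpha[i - 1], T[i - 1], A, Ea) * dt)
    return alpha

a1 = integrate(A=1e12, Ea=120e3)  # step 1 (exothermic in the paper's scheme)
a2 = integrate(A=1e10, Ea=110e3)  # step 2 (endothermic)

c1, c2 = 0.6, 0.4                 # hypothetical mass-loss contributions
alpha_total = c1 * a1 + c2 * a2   # overall conversion as a weighted sum of steps
```

In an actual kinetic deconvolution analysis the contributions and Arrhenius parameters would be optimized against simultaneously recorded TG and DSC data rather than fixed a priori.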
Decomposition of Repetition Priming Processes in Word Translation
ERIC Educational Resources Information Center
Francis, Wendy S.; Duran, Gabriela; Augustini, Beatriz K.; Luevano, Genoveva; Arzate, Jose C.; Saenz, Silvia P.
2011-01-01
Translation in fluent bilinguals requires comprehension of a stimulus word and subsequent production, or retrieval and articulation, of the response word. Four repetition-priming experiments with Spanish-English bilinguals (N = 274) decomposed these processes using selective facilitation to evaluate their unique priming contributions and factorial…
Iron oxalate decomposition process by means of Mössbauer spectroscopy and nuclear forward scattering
NASA Astrophysics Data System (ADS)
Smrčka, David; Procházka, Vít; Novák, Petr; Kašlík, Josef; Vrba, Vlastimil
2016-10-01
This study reports the transformation kinetics of the thermal decomposition of iron(II) oxalate dihydrate, studied in detail by two different techniques: transmission Mössbauer spectroscopy and nuclear forward scattering of synchrotron radiation. Both methods were applied to observe the three steps of the decomposition process in which the iron oxalate transforms to amorphous iron oxide. The hematite/maghemite ratio was determined from the transmission Mössbauer spectra using an evaluation procedure based on a subtraction of the two opposite sides of the spectra. The results obtained indicate that the amount of hematite increases as the annealing time is prolonged.
PROCESS OF COATING WITH NICKEL BY THE DECOMPOSITION OF NICKEL CARBONYL
Hoover, T.B.
1959-04-01
An improved process is presented for the deposition of nickel coatings by the thermal decomposition of nickel carbonyl vapor. The improvement consists in incorporating a small amount of hydrogen sulfide gas in the nickel carbonyl plating gas. It is postulated that the hydrogen sulfide functions as a catalyst.
Petrova, O.M.; Fedoseev, S.D.; Komarova, T.V.
1984-01-01
A calculation has been made of the activation energy of the thermal decomposition of phenol-formaldehyde polymers. It has been established that, for nonisothermal conditions, the rate at which the process is carried out does not affect the effective activation energy calculated by means of Piloyan's equation.
Method for increasing steam decomposition in a coal gasification process
Wilson, M.W.
1987-03-23
The gasification of coal in the presence of steam and oxygen is significantly enhanced by introducing a thermochemical water- splitting agent such as sulfuric acid, into the gasifier for decomposing the steam to provide additional oxygen and hydrogen usable in the gasification process for the combustion of the coal and enrichment of the gaseous gasification products. The addition of the water-splitting agent into the gasifier also allows for the operation of the reactor at a lower temperature.
Method for increasing steam decomposition in a coal gasification process
Wilson, Marvin W.
1988-01-01
The gasification of coal in the presence of steam and oxygen is significantly enhanced by introducing a thermochemical water-splitting agent such as sulfuric acid, into the gasifier for decomposing the steam to provide additional oxygen and hydrogen usable in the gasification process for the combustion of the coal and enrichment of the gaseous gasification products. The addition of the water-splitting agent into the gasifier also allows for the operation of the reactor at a lower temperature.
[Putrefaction in a mortuary cold room? Unusual progression of postmortem decomposition processes].
Kunz, Sebastian N; Brandtner, Herwig; Meyer, Harald
2013-01-01
This article illustrates a rare case of rapid body decomposition in an uncommonly short postmortem interval. A clear discrepancy between early postmortem changes at the crime scene and advanced body decomposition at the time of autopsy was seen. Subsequent police investigation identified a failure in the cooling system of the morgue as the probable cause. However, given the postmortem status of the body, a moderate rise in temperature alone is not considered to have caused the full extent of the postmortem changes; other factors must therefore have been present that accelerated the postmortem decomposition processes. In our opinion, the most reasonable explanation for this phenomenon is a rather long resting time of the corpse in a non-refrigerated hearse on a hot summer day.
Subensemble decomposition and Markov process analysis of Burgers turbulence.
Zhang, Zhi-Xiong; She, Zhen-Su
2011-08-01
A numerical and statistical study is performed to describe the positive and negative local subgrid energy fluxes in one-dimensional random-force-driven Burgers turbulence (Burgulence). We use a subensemble method to decompose the field into shock-wave and rarefaction-wave subensembles by group velocity difference. We observe that the shock-wave subensemble shows a strong intermittency which dominates the whole Burgulence field, while the rarefaction-wave subensemble satisfies the Kolmogorov 1941 (K41) scaling law. We calculate the two subensemble probabilities and find that in the inertial range they maintain scale invariance, an important feature of turbulence self-similarity. We reveal that the interconversion of shock and rarefaction waves during the equation's evolution proceeds in accordance with a Markov process, whose stationary transition probability matrix has elements satisfying universal functions and, when the time interval is much greater than the corresponding characteristic value, exhibits the scale-invariant property.
Michaud, Jean-Philippe; Moreau, Gaétan
2011-01-01
Using pig carcasses exposed over 3 years in rural fields during spring, summer, and fall, we studied the relationship between decomposition stages and degree-day accumulation (i) to verify the predictability of the decomposition stages used in forensic entomology to document carcass decomposition and (ii) to build a degree-day accumulation model applicable to various decomposition-related processes. Results indicate that the decomposition stages can be predicted with accuracy from temperature records and that a reliable degree-day index can be developed to study decomposition-related processes. The development of degree-day indices opens new doors for researchers and allows for the application of inferential tools unaffected by climatic variability, as well as for the inclusion of statistics in a science that is primarily descriptive and in need of validation methods in courtroom proceedings.
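The degree-day index described above can be sketched in a few lines: accumulated degree-days (ADD) are the sum of daily mean temperatures above a base temperature, and decomposition stages are mapped onto ADD intervals. The base temperature and stage thresholds below are purely illustrative, not the values fitted by the study.

```python
def accumulated_degree_days(daily_mean_temps, base=0.0):
    """Sum the daily thermal contributions above a base temperature.

    Days at or below the base contribute zero (no biological activity assumed).
    """
    return sum(max(0.0, t - base) for t in daily_mean_temps)

def predict_stage(add, thresholds):
    """Map an ADD value onto ordered (stage_name, lower_ADD_bound) pairs."""
    stage = thresholds[0][0]
    for name, limit in thresholds:
        if add >= limit:
            stage = name
    return stage

# One week of hypothetical daily mean temperatures (deg C)
temps = [18.0, 21.5, 25.0, 23.0, 19.5, 22.0, 24.5]
add = accumulated_degree_days(temps, base=10.0)   # 83.5 degree-days
# Hypothetical stage thresholds in ADD units
stages = [("fresh", 0), ("bloat", 50), ("active decay", 150), ("dry", 400)]
print(add, predict_stage(add, stages))
```

Because the index integrates temperature rather than elapsed time, predictions made this way are insensitive to day-to-day climatic variability, which is the property the study exploits.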
Multidimensional seismic data reconstruction using tensor analysis
NASA Astrophysics Data System (ADS)
Kreimer, Nadia
Exploration seismology utilizes the seismic wavefield for prospecting oil and gas. The seismic reflection experiment consists of deploying sources and receivers on the surface of an area of interest. When the sources are activated, the receivers measure the wavefield reflected from different subsurface interfaces and store the information as time series called traces or seismograms. The seismic data depend on two source coordinates, two receiver coordinates and time (a 5D volume). Obstacles in the field, as well as logistical and economic factors, constrain seismic data acquisition; therefore, the wavefield sampling is incomplete in the four spatial dimensions. Seismic data undergo different processes. In particular, the reconstruction process is responsible for correcting sampling irregularities of the seismic wavefield. This thesis focuses on the development of new methodologies for the reconstruction of multidimensional seismic data. It examines techniques based on tensor algebra and proposes three methods that exploit the tensor nature of the seismic data. The fully sampled volume is low-rank in the frequency-space domain; the rank increases when traces are missing and/or noise is present. The proposed methods perform rank reduction on frequency slices of the 4D spatial volume. The first method employs the higher-order singular value decomposition (HOSVD) immersed in an iterative algorithm that reinserts weighted observations. The second method uses a sequential truncated SVD on the unfoldings of the tensor slices (SEQ-SVD). The third method formulates the rank reduction problem as a convex optimization problem: the measure of the rank is replaced by the nuclear norm of the tensor, and the alternating direction method of multipliers (ADMM) minimizes the cost function. All three methods have the interesting property of being robust to curvature of the reflections, unlike many reconstruction methods. Finally, we present a comparison between the methods
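The rank-reduction step at the core of these methods can be sketched with a truncated HOSVD: each mode unfolding of a 4D slice is projected onto its leading singular vectors. This is a minimal numpy sketch under assumed sizes and ranks, not the thesis's full iterative reinsertion algorithm.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: move axis `mode` to the front, then flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def fold(M, mode, shape):
    """Inverse of unfold for a tensor of the given shape."""
    rest = [s for i, s in enumerate(shape) if i != mode]
    return np.moveaxis(M.reshape([shape[mode]] + rest), 0, mode)

def hosvd_truncate(T, ranks):
    """Project each mode onto its leading ranks[n] left singular vectors."""
    out = T.copy()
    for mode, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(out, mode), full_matrices=False)
        Ur = U[:, :r]
        out = fold(Ur @ (Ur.T @ unfold(out, mode)), mode, out.shape)
    return out

rng = np.random.default_rng(0)
# Synthetic low-rank 4D "spatial slice" plus weak noise
core = rng.standard_normal((2, 2, 2, 2))
factors = [rng.standard_normal((8, 2)) for _ in range(4)]
low = np.einsum('abcd,ia,jb,kc,ld->ijkl', core, *factors)
noisy = low + 0.01 * rng.standard_normal(low.shape)
denoised = hosvd_truncate(noisy, ranks=(2, 2, 2, 2))
```

Because the clean slice is exactly multilinear rank-(2,2,2,2), truncating each unfolding at rank 2 retains the signal while discarding most of the noise energy, which is the mechanism the reconstruction methods rely on.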
Chemical dehalogenation treatment: Base-catalyzed decomposition process (BCDP). Tech data sheet
Not Available
1992-07-01
The Base-Catalyzed Decomposition Process (BCDP) is an efficient, relatively inexpensive treatment process for polychlorinated biphenyls (PCBs). It is also effective on other halogenated contaminants such as insecticides, herbicides, pentachlorophenol (PCP), lindane, and chlorinated dibenzodioxins and furans. The heart of BCDP is the rotary reactor in which most of the decomposition takes place. The contaminated soil is first screened, processed with a crusher and pug mill, and stockpiled. Next, in the main treatment step, this stockpile is mixed with sodium bicarbonate (in the amount of 10% of the weight of the stockpile) and heated for about one hour at 630 F in the rotary reactor. Most (about 60% to 90%) of the PCBs in the soil are decomposed in this step. The remainder are volatilized, captured, and decomposed.
The neural basis of novelty and appropriateness in processing of creative chunk decomposition.
Huang, Furong; Fan, Jin; Luo, Jing
2015-06-01
Novelty and appropriateness have been recognized as the fundamental features of creative thinking. However, the brain mechanisms underlying these features remain largely unknown. In this study, we used event-related functional magnetic resonance imaging (fMRI) to dissociate these mechanisms in a revised creative chunk decomposition task in which participants were required to perform different types of chunk decomposition that systematically varied in novelty and appropriateness. We found that novelty processing involved functional areas for procedural memory (caudate), mental rewarding (substantia nigra, SN), and visual-spatial processing, whereas appropriateness processing was mediated by areas for declarative memory (hippocampus), emotional arousal (amygdala), and orthography recognition. These results indicate that non-declarative and declarative memory systems may jointly contribute to the two fundamental features of creative thinking.
Moment tensors, state of stress and their relation to faulting processes in Gujarat, western India
NASA Astrophysics Data System (ADS)
Aggarwal, Sandeep Kumar; Khan, Prosanta Kumar; Mohanty, Sarada Prasad; Roumelioti, Zafeiria
2016-10-01
Time domain moment tensor analysis of 145 earthquakes (Mw 3.2 to 5.1), occurring during the period 2006-2014 in the Gujarat region, has been performed. The events are mainly confined to the Kachchh area, demarcated by the Island Belt and Kachchh Mainland faults to its north and south, and by two transverse faults to its east and west. Libraries of Green's functions were established using the 1D velocity model of Kachchh, Saurashtra and Mainland Gujarat. Green's functions and broadband displacement waveforms filtered at low frequency (0.5-0.8 Hz) were inverted to determine the moment tensor solutions. The estimated solutions were rigorously tested through a number of iterations at different source depths to find reliable source locations. The identified heterogeneous nature of the stress fields in the Kachchh area allowed us to divide it into four zones (Zones 1-4). The stress inversion results indicate that Zone 1 is dominated by radial compression, Zone 2 by strike-slip compression, and Zones 3 and 4 by strike-slip extension. The analysis further shows that the epicentral region of the 2001 MW 7.7 Bhuj mainshock, located at the junction of Zones 2, 3 and 4, was associated with predominantly compressional stress and strike-slip motion along a ∼NNE-SSW striking fault on the western margin of the Wagad uplift. Other tectonically active parts of Gujarat (e.g. Jamnagar, Talala and Mainland) show earthquake activity dominantly associated with strike-slip extension/compression faulting. Stress inversion analysis shows that the maximum compressive stress axes (σ1) are vertical for both the Jamnagar and Talala regions and horizontal for Mainland Gujarat. These stress regimes are distinctly different from those of the Kachchh region.
Oda, Tetsuji; Yamashita, Ryuichi; Haga, Ichiro; Takahashi, Tadashi; Masuda, Senichi
1996-01-01
The decomposition performance of surface induced plasma chemical processing (SPCP) for chlorofluorocarbon (83 ppm CFC-113 in air), acetone, trichloroethylene, and isopropyl alcohol was experimentally examined. In every case, very high decomposition performance, with more than 90 or 99% removal, is realized when the residence time is about 1 second and the input electric power for a 16 cm³ reactor is about 10 W. Acetone is the most stable compound and the alcohol is most easily decomposed. Analysis of the decomposition products by gas chromatography-mass spectrometry has just started, but so far only limited results have been obtained; in fact, some portion of the isopropyl alcohol may change to acetone, which is harder to decompose than the alcohol. The energy necessary to decompose one mole of gas diluted in air is calculated from the experiments; the necessary energy level for acetone and trichloroethylene is about one-tenth to one-fiftieth of that for the chlorofluorocarbon.
Discussion of stress tensor nonuniqueness with application to nonuniform, particulate systems
Aidun, J.B.
1993-01-01
The indeterminacy of the mechanical stress tensor has been noted in several developments of expressions for stress in a system of particles. It is generally agreed that physical quantities related to the stress tensor must be insensitive to this nonuniqueness, but there is no definitive prescription for ensuring it. Kroener's tensor decomposition theorem is applied to the mechanical stress tensor σ
Controlled decomposition and oxidation: A treatment method for gaseous process effluents
NASA Technical Reports Server (NTRS)
Mckinley, Roger J. B., Sr.
1990-01-01
The safe disposal of effluent gases produced by the electronics industry deserves special attention. Due to the hazardous nature of many of the materials used, it is essential to control and treat the reactants and reactant by-products as they are exhausted from the process tool and prior to their release into the manufacturing facility's exhaust system and the atmosphere. Controlled decomposition and oxidation (CDO) is one method of treating effluent gases from thin film deposition processes. CDO equipment applications, field experience, and results of the use of CDO equipment and technological advances gained from the field experiences are discussed.
Controlled decomposition and oxidation: A treatment method for gaseous process effluents
NASA Astrophysics Data System (ADS)
McKinley, Roger J. B., Sr.
1990-07-01
The safe disposal of effluent gases produced by the electronics industry deserves special attention. Due to the hazardous nature of many of the materials used, it is essential to control and treat the reactants and reactant by-products as they are exhausted from the process tool and prior to their release into the manufacturing facility's exhaust system and the atmosphere. Controlled decomposition and oxidation (CDO) is one method of treating effluent gases from thin film deposition processes. CDO equipment applications, field experience, and results of the use of CDO equipment and technological advances gained from the field experiences are discussed.
Factors controlling decomposition in arctic tundra and related root mycorrhizal processes
Linkins, A.E.
1990-01-01
Work proposed for the final year of Phase 1 of the R&D Program will focus on three areas: (1) acquire soil and root-mycorrhizal process data, incorporating the baseline enzymatic and soil respiration data collected over the duration of the project into the manipulations initiated by Drs. Chapin and Schimmel. Additional enzymatic data on a broader range of organic nitrogen compound decomposition will be collected to better integrate existing decomposition data and modeling structure with the expanded information to be collected on nitrogen dynamics in soils and plant compartments. This activity will principally be done in the new dust disturbance experiment the overall project has planned. (2) Finalize data sets on the complete mineralization into CO2 and CH4 of cellulose, cellulose-like plant structural material, and cellulose intermediate hydrolysis products in soils from water-track and non-water-track areas and from riparian sedge moss meadow vegetation areas. Gas efflux from these soils will be measured in closed microcosms in which the soils will be manipulated to alter their redox state. (3) Continue developing and testing the GAS models of decomposition, plant growth and nutrient acquisition; the primary activity of this project will be this latter task. 22 refs.
Analysis of a Methanol Decomposition Process by a Nonthermal Plasma Flow
NASA Astrophysics Data System (ADS)
Sato, Takehiko; Kambe, Makoto; Nishiyama, Hideya
In the present study, experimental and numerical analyses were used to clarify the key reactive species in methanol decomposition by a nonthermal plasma flow. The nonthermal plasma flow was generated by a dielectric barrier discharge (DBD) as a radical production source. The experimental conditions were as follows: the working gas was air at 1-10 Sl/min, and the peak-to-peak applied voltage was 16-20 kV with a sine wave of 1 Hz-7 kHz. The gas velocity, gas temperature, ozone concentration and methanol decomposition efficiency were measured. These characteristics were also numerically analyzed using conservation equations of mass, chemical components, momentum and energy, together with the equation of state. The simulation model takes into account the reactive species that react chemically with methanol; the detailed reaction mechanism used in this model consists of 108 elementary reactions and 41 chemical species. Inlet conditions are partially given by experimental results. Finally, the effects of reactive species such as O, OH, H and NO on the methanol decomposition characteristics are numerically analyzed. The results obtained in this study are summarized as follows. (1) The presence of excited atoms of O and N and of excited molecules of OH, N2(B3Πg), N2(A3Σu+) and NO is implied in the discharge region. (2) Methanol below 50 ppm is decomposed completely by the DBD at discharge conditions of V = 16 kVpp and f = 100 Hz. (3) The reactive species are the most important factor in decomposing methanol, as full decomposition is obtained for all injection positions. (4) The numerical analysis clarifies that OH is the key radical in decomposing methanol.
Chlorine/UV Process for Decomposition and Detoxification of Microcystin-LR.
Zhang, Xinran; Li, Jing; Yang, Jer-Yen; Wood, Karl V; Rothwell, Arlene P; Li, Weiguang; Blatchley III, Ernest R
2016-07-19
Microcystin-LR (MC-LR) is a potent hepatotoxin that is often associated with blooms of cyanobacteria. Experiments were conducted to evaluate the efficiency of the chlorine/UV process for MC-LR decomposition and detoxification. Chlorinated MC-LR was observed to be more photoactive than MC-LR. LC/MS analyses confirmed that the arginine moiety represented an important reaction site within the MC-LR molecule for conditions of chlorination below the chlorine demand of the molecule. Prechlorination activated MC-LR toward UV254 exposure by increasing the product of the molar absorption coefficient and the quantum yield of chloro-MC-LR, relative to the unchlorinated molecule. This mechanism of decay is fundamentally different from the conventional view of chlorine/UV as an advanced oxidation process. A toxicity assay based on human liver cells indicated that MC-LR degradation byproducts in the chlorine/UV process possessed less cytotoxicity than those that resulted from chlorination or UV254 irradiation applied separately. MC-LR decomposition and detoxification in this combined process were more effective at pH 8.5 than at pH 7.5 or 6.5. These results suggest that the chlorine/UV process could represent an effective strategy for control of microcystins and their associated toxicity in drinking water supplies.
Yao, Shengnan; Zeng, Weiming; Wang, Nizhuan; Chen, Lei
2013-07-01
Independent component analysis (ICA) has been proven effective for functional magnetic resonance imaging (fMRI) data analysis. However, ICA decomposition requires iterative optimization of the unmixing matrix, whose initial values are generated randomly, so the randomness of the initialization leads to different ICA decomposition results. Therefore, a single decomposition is not usually reliable for fMRI data analysis. Under this circumstance, several methods based on repeated decompositions with ICA (RDICA) were proposed to reveal the stability of ICA decomposition. Although RDICA has achieved satisfying results in validating the performance of ICA decomposition, it costs much computing time. To mitigate this problem, in this paper we propose a method, named ATGP-ICA, for fMRI data analysis. This method generates fixed initial values with an automatic target generation process (ATGP) instead of producing them randomly. We performed experimental tests on both hybrid data and fMRI data to show the effectiveness of the new method, and compared its performance against traditional one-time decomposition with ICA (ODICA) and RDICA. The proposed method not only eliminates the randomness of ICA decomposition, but also saves much computing time compared to RDICA. Furthermore, ROC (receiver operating characteristic) power analysis also indicated the better signal reconstruction performance of ATGP-ICA over RDICA.
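The key ingredient — a deterministic initialization via ATGP — can be sketched as follows: repeatedly select the observation with the largest norm after projecting out the subspace spanned by the targets already chosen. This is a generic numpy sketch of ATGP as commonly described, not the paper's exact pipeline; the resulting targets would then seed the ICA unmixing-matrix optimization.

```python
import numpy as np

def atgp_targets(X, n_targets):
    """Automatic target generation process (ATGP).

    X: (n_samples, n_features) observation matrix.
    Repeatedly picks the row with the largest norm after projecting onto
    the orthogonal complement of the targets already chosen, so the result
    is fully deterministic for a fixed X.
    """
    targets = []
    P = np.eye(X.shape[1])                     # orthogonal-complement projector
    for _ in range(n_targets):
        proj = X @ P
        idx = int(np.argmax(np.sum(proj ** 2, axis=1)))
        targets.append(X[idx])
        U = np.array(targets).T                # (n_features, k) selected targets
        # P = I - U (U^T U)^(-1) U^T projects onto the complement of span(targets)
        P = np.eye(X.shape[1]) - U @ np.linalg.pinv(U.T @ U) @ U.T
    return np.array(targets)

rng = np.random.default_rng(42)
X = rng.standard_normal((200, 5))
W0 = atgp_targets(X, n_targets=3)              # deterministic init, no randomness
W0_again = atgp_targets(X, n_targets=3)        # identical on every call
```

Because the selection involves no random draws, two runs on the same data produce the same initial values, which is what removes the run-to-run variability of the ICA decomposition.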
Ning, J. G.; Chu, L.; Ren, H. L.
2014-08-28
We base a quantitative acoustic emission (AE) study of fracture processes in alumina ceramics on wavelet packet decomposition and AE source location. According to the frequency characteristics, as well as the energy and ringdown counts of AE, the fracture process is divided into four stages: crack closure, nucleation, development, and critical failure. Each AE signal is decomposed by a 2-level wavelet packet decomposition into four (from low to high) frequency bands (AA₂, AD₂, DA₂, and DD₂). The energy eigenvalues P₀, P₁, P₂, and P₃ corresponding to these four frequency bands are calculated. By analyzing changes in P₀ and P₃ across the four stages, we determine the inverse relationship between AE frequency and crack source size during ceramic fracture. AE signals associated with crack nucleation can be identified when P₀ is less than 5 and P₃ is more than 60, whereas AE signals associated with dangerous crack propagation can be identified when more than 92% of P₀ values are greater than 4 and more than 95% of P₃ values are less than 45. The Geiger location algorithm is used to locate AE sources and cracks in the sample. The results of this location algorithm are consistent with the positions of fractures observed in the sample under a scanning electron microscope; thus the fracture locations obtained with Geiger's method can reflect the fracture process. The stage division by location results is in good agreement with the division based on AE frequency characteristics. We find that both wavelet packet decomposition and Geiger's AE source location are suitable for identifying the evolutionary process of cracks in alumina ceramics.
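The 2-level wavelet packet split can be sketched with a hand-rolled Haar filter bank: one analysis step halves the signal into approximation and detail, and applying it to each branch yields the four bands AA₂, AD₂, DA₂, DD₂. This sketch assumes (as one plausible reading of the abstract) that the P values are the percentage of total energy in each band; the study's actual wavelet and normalization may differ.

```python
import numpy as np

def haar_step(x):
    """One orthonormal Haar analysis step: (approximation, detail) at half length."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def wp2_energies(x):
    """2-level wavelet-packet split into AA2, AD2, DA2, DD2 and the
    percentage of total energy in each band (assumed meaning of P0..P3)."""
    a1, d1 = haar_step(np.asarray(x, dtype=float))
    bands = {}
    bands['AA2'], bands['AD2'] = haar_step(a1)   # low-frequency branch
    bands['DA2'], bands['DD2'] = haar_step(d1)   # high-frequency branch
    energies = {k: float(np.sum(v ** 2)) for k, v in bands.items()}
    total = sum(energies.values())
    return {k: 100.0 * e / total for k, e in energies.items()}

t = np.arange(1024) / 1024.0
low = np.sin(2 * np.pi * 4 * t)   # low-frequency tone: energy lands in AA2
P = wp2_energies(low)
```

Since the Haar steps are orthonormal, the four band energies sum to the signal energy, so the percentages always total 100; a large low-band share (high "P₀") corresponds to the low-frequency AE signatures the study associates with large crack sources.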
Input-decomposition balance of heterotrophic processes in a warm-temperate mixed forest in Japan
NASA Astrophysics Data System (ADS)
Jomura, M.; Kominami, Y.; Ataka, M.; Makita, N.; Dannoura, M.; Miyama, T.; Tamai, K.; Goto, Y.; Sakurai, S.
2010-12-01
Carbon accumulation in forest ecosystems has been evaluated using three approaches. The first is net ecosystem exchange (NEE) estimated by tower flux measurement. The second is net ecosystem production (NEP) estimated by biometric measurements. NEP can be expressed as the difference between net primary production and heterotrophic respiration; it can also be expressed as the annual increment in the plant biomass (ΔW) plus soil (ΔS) carbon pools, defined as follows: NEP = ΔW + ΔS. The third approach requires evaluating the annual carbon increment in the soil compartment. The soil carbon accumulation rate cannot be measured directly over a short term because the annual accumulation is small, but it can be estimated by model calculation. The Rothamsted carbon model is a soil organic carbon turnover model and a useful tool for estimating the rate of soil carbon accumulation. However, the model has not sufficiently included variations in the decomposition processes of organic matter in forest ecosystems. Organic matter pools in forest ecosystems have different turnover rates, which create temporal variations in the input-decomposition balance, and also show large variation in spatial distribution. Thus, in order to estimate the rate of soil carbon accumulation, temporal and spatial variation in the input-decomposition balance of heterotrophic processes should be incorporated in the model. In this study, we estimated the input-decomposition balance and the rate of soil carbon accumulation using the modified Roth-C model. We measured the respiration rates of many types of organic matter, such as leaf litter, fine root litter, twigs and coarse woody debris, using a chamber method, which allows us to relate respiration rate to the diameter of the organic matter. Leaf and fine root litter have no diameter, so their diameter is assumed to be zero. Organic matter of small size, such as leaf and fine root litter, shows high decomposition respiration. It could be caused by the difference in
Kolda, Tamara G.; Bader, Brett W.
2006-08-03
This software provides a collection of MATLAB classes for tensor manipulations that can be used for fast algorithm prototyping. The tensor class extends the functionality of MATLAB's multidimensional arrays by supporting additional operations such as tensor multiplication. We have also added support for sparse tensors, tensors in Kruskal or Tucker format, and tensors stored as matrices (both dense and sparse).
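The Kruskal format the toolbox supports stores a tensor as a weight vector plus one factor matrix per mode, representing a sum of rank-one outer products. The following numpy sketch (analogous to the toolbox's Kruskal tensor, not the toolbox itself) reconstructs the dense tensor from that format.

```python
import numpy as np

def kruskal_to_dense(weights, factors):
    """Reconstruct a dense 3-way tensor from Kruskal (CP) format:
    T = sum_r lambda_r * (a_r outer b_r outer c_r), where a_r, b_r, c_r
    are the r-th columns of the three factor matrices."""
    A, B, C = factors
    T = np.zeros((A.shape[0], B.shape[0], C.shape[0]))
    for r, lam in enumerate(weights):
        T += lam * np.einsum('i,j,k->ijk', A[:, r], B[:, r], C[:, r])
    return T

rng = np.random.default_rng(1)
A, B, C = (rng.standard_normal((4, 2)) for _ in range(3))
lam = np.array([2.0, 0.5])
T = kruskal_to_dense(lam, (A, B, C))
# Equivalent one-shot contraction over all components:
T2 = np.einsum('r,ir,jr,kr->ijk', lam, A, B, C)
```

Storing only the weights and factors is what makes the format economical: the dense tensor above has 64 entries, while its Kruskal representation needs only 2 weights and three 4x2 matrices.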
Tensor Modeling Based for Airborne LiDAR Data Classification
NASA Astrophysics Data System (ADS)
Li, N.; Liu, C.; Pfeifer, N.; Yin, J. F.; Liao, Z. Y.; Zhou, Y.
2016-06-01
Feature selection and description is a key factor in the classification of Earth observation data. In this paper a classification method based on tensor decomposition is proposed. First, multiple features are extracted from the raw LiDAR point cloud, and raster LiDAR images are derived by accumulating features or the "raw" data attributes. Then, the feature rasters of the LiDAR data are stored as a tensor, and tensor decomposition is used to select component features. This tensor representation keeps the initial spatial structure and ensures that the neighborhood is taken into account. Based on a small number of component features, a k-nearest-neighbor classification is applied.
Rumiza, A R; Khairul, O; Zuha, R M; Heo, C C
2010-12-01
This study was designed to mimic homicide or suicide cases involving gasoline. Six adult long-tailed macaques (Macaca fascicularis), weighing between 2.5 and 4.0 kg, were equally divided into control and test groups. The control group was sacrificed by a lethal intracardiac dose of phenobarbital, while the test group was force-fed two doses of gasoline LD50 (37.7 ml/kg) after sedation with phenobarbital. All carcasses were then placed at a decomposition site to observe the decomposition and the invasion process of cadaveric fauna on the carcasses. A total of five decomposition stages were recognized during this study, which was performed during July 2007. The fresh stage of the control and test carcasses occurred between 0 to 15 and 0 to 39 hours of exposure, respectively. The subsequent decomposition stages exhibited a similar pattern, whereby decomposition of the control carcasses was faster than that of the test carcasses. The first larvae were found on control carcasses 9 hours after death, while the test carcasses received their first blowfly eggs only after 15 hours of exposure. The blow flies Achoetandrus rufifacies and Chrysomya megacephala were the most dominant invaders of both groups of carcasses throughout the decay process. Diptera collected from control carcasses also comprised the scuttle fly Megaselia scalaris and a flesh fly (Sarcophagidae). We conclude that the presence of gasoline and its odor on the carcass delayed the arrival of insects, thereby slowing down the decomposition process by about 6 hours.
Osono, T
2006-08-01
The ecology of endophytic and epiphytic phyllosphere fungi of forest trees is reviewed with special emphasis on the development of decomposer fungal communities and the decomposition of leaf litter. A total of 41 genera of phyllosphere fungi have been reported to occur on the leaf litter of tree species in 19 genera. The relative proportion of phyllosphere fungi in decomposer fungal communities ranges from 2% to 100%. Phyllosphere fungi generally disappear in the early stages of decomposition, although a few species persist until the late stages. Phyllosphere fungi have the ability to utilize various organic compounds as carbon sources, and their marked decomposing ability is associated with ligninolytic activity. The role of phyllosphere fungi in the decomposition of soluble components during the early stages is relatively small in spite of their frequent occurrence. Recently, the roles of phyllosphere fungi in the decomposition of structural components have been documented with reference to lignin and cellulose decomposition, nutrient dynamics, and the accumulation and decomposition of soil organic matter. It is clear from this review that several of the common phyllosphere fungi of forest trees are primarily saprobic, being specifically adapted to colonize and utilize dead host tissue, and that some phyllosphere fungi with marked abilities to decompose litter components play important roles in the decomposition of structural components, nutrient dynamics, and soil organic matter accumulation.
Noise-assisted data processing with empirical mode decomposition in biomedical signals.
Karagiannis, Alexandros; Constantinou, Philip
2011-01-01
In this paper, a methodology is described for investigating the performance of empirical mode decomposition (EMD) in biomedical signals, especially electrocardiogram (ECG) signals. Synthetic ECG signals corrupted with white Gaussian noise are employed, and time series of various lengths are processed with EMD to extract the intrinsic mode functions (IMFs). A statistical significance test is implemented to identify IMFs with high-level noise components and exclude them from denoising procedures. Simulation results reveal that a decrease in processing time is achieved by introducing a preprocessing stage prior to the application of EMD to biomedical time series. Furthermore, the variation in the number of IMFs according to the type of preprocessing stage is studied as a function of SNR and time-series length. The methodology is also applied to MIT-BIH ECG records to verify the findings on real ECG signals.
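The core denoising idea (decompose, identify noise-dominated IMFs with a statistical criterion, reconstruct without them) can be shown in isolation. A full EMD sifting routine is beyond a short sketch, so the "IMFs" below are synthetic stand-ins, and the lag-1 autocorrelation criterion is an assumed noise test, not the paper's exact statistical significance test:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 1000)

# Stand-in "IMFs": in a real pipeline these come from EMD sifting of the
# noisy signal; here we fake one noise mode and two oscillatory modes so
# the exclusion-and-reconstruction step can be demonstrated on its own.
clean = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 12 * t)
imfs = [
    rng.normal(scale=0.4, size=t.size),   # white-noise mode
    0.5 * np.sin(2 * np.pi * 12 * t),
    np.sin(2 * np.pi * 5 * t),
]
noisy = clean + imfs[0]

def lag1_autocorr(x):
    # White noise has near-zero lag-1 autocorrelation; smooth IMFs are
    # close to 1 (an assumed criterion, not the paper's statistic).
    return np.corrcoef(x[:-1], x[1:])[0, 1]

def denoise(imfs, threshold=0.9):
    # Exclude noise-dominated IMFs and reconstruct from the rest.
    return sum(m for m in imfs if lag1_autocorr(m) > threshold)

recon = denoise(imfs)
err_noisy = np.mean((noisy - clean) ** 2)
err_recon = np.mean((recon - clean) ** 2)
```

Dropping the noise-dominated mode before summing is exactly the exclusion step the abstract describes, whatever significance statistic is used to pick the modes.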
Man, Pascal P; Bonhomme, Christian; Babonneau, Florence
2014-01-01
We present a post-processing method that decreases NMR spectrum noise without line-shape distortion, thereby increasing the signal-to-noise (S/N) ratio of a spectrum. This method, the Cadzow enhancement procedure, is based on the singular-value decomposition of the time-domain signal. We also provide software whose execution takes only a few seconds for typical data when run on a modern graphics-processing unit. We tested this procedure not only on the low-sensitivity nucleus (29)Si in hybrid materials but also on the low-gyromagnetic-ratio quadrupolar nucleus (87)Sr in the reference sample Sr(NO3)2. Improving the spectrum S/N ratio facilitates the determination of the T/Q ratio of hybrid materials. The procedure is also applicable to simulated spectra, resulting in shorter simulation times for powder averaging. An estimate of the number of singular values needed for denoising is also provided.
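One common form of the Cadzow procedure embeds the time-domain signal in a Hankel matrix, truncates its SVD to the assumed number of spectral components, and averages anti-diagonals back into a signal, iterating a few times. A minimal CPU sketch (the paper's GPU implementation and exact variant may differ):

```python
import numpy as np

rng = np.random.default_rng(2)

def cadzow_denoise(fid, rank, n_iter=10):
    """Iterative Hankel/SVD (Cadzow) denoising of a time-domain signal."""
    n = fid.size
    L = n // 2
    x = fid.astype(complex)
    for _ in range(n_iter):
        # Hankel embedding: H[i, j] = x[i + j]
        H = np.array([x[i:i + n - L + 1] for i in range(L)])
        U, s, Vt = np.linalg.svd(H, full_matrices=False)
        Hr = (U[:, :rank] * s[:rank]) @ Vt[:rank]       # rank truncation
        # Average each anti-diagonal back into a Hankel-consistent signal.
        acc = np.zeros(n, dtype=complex)
        cnt = np.zeros(n)
        for i in range(Hr.shape[0]):
            for j in range(Hr.shape[1]):
                acc[i + j] += Hr[i, j]
                cnt[i + j] += 1
        x = acc / cnt
    return x

# Synthetic FID: one decaying complex exponential plus noise, so rank 1.
t = np.linspace(0.0, 1.0, 128)
clean = np.exp(-t / 0.3) * np.exp(2j * np.pi * 5.0 * t)
noisy = clean + 0.2 * (rng.normal(size=t.size) + 1j * rng.normal(size=t.size))
denoised = cadzow_denoise(noisy, rank=1)
```

Choosing `rank` corresponds to the paper's question of how many singular values are needed for denoising; too few distorts the line shape, too many retains noise.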
A detailed kinetic model for the hydrothermal decomposition process of sewage sludge.
Yin, Fengjun; Chen, Hongzhen; Xu, Guihua; Wang, Guangwei; Xu, Yuanjian
2015-12-01
A detailed kinetic model for the hydrothermal decomposition (HTD) of sewage sludge was developed based on an explicit reaction scheme considering key intermediates, including protein, saccharide, NH4(+)-N and acetic acid. The parameters were estimated from a series of kinetic data over a temperature range of 180-300°C. This modeling framework is capable of revealing stoichiometric relationships between different components by determining the conversion coefficients, and of identifying the reaction behaviors by determining rate constants and activation energies. The modeling work shows that protein and saccharide are the primary intermediates in the initial stage of HTD, resulting from the fast reduction of biomass. The oxidation of macromolecular products to acetic acid is highly dependent on reaction temperature and is strongly suppressed when the temperature is below 220°C. Overall, this detailed model is useful for process simulation and kinetic analysis.
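Given rate constants estimated at several temperatures, the activation energy follows from a linear Arrhenius fit, ln k = ln A - Ea/(RT). A generic sketch with made-up rate constants (not the paper's fitted values):

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

# Hypothetical rate constants at several temperatures in the 180-300 °C
# range (illustrative values only, not the paper's data).
T = np.array([453.0, 493.0, 533.0, 573.0])   # K
k = np.array([0.002, 0.010, 0.040, 0.120])   # 1/min

# Arrhenius: ln k = ln A - Ea / (R T), which is linear in 1/T.
slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
Ea = -slope * R           # activation energy, J/mol
A = np.exp(intercept)     # pre-exponential factor, 1/min
```

The same two-parameter fit, repeated per reaction step, is how rate constants measured across a temperature series are condensed into the activation energies a detailed kinetic model reports.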
Surface modification processes during methane decomposition on Cu-promoted Ni–ZrO2 catalysts
Wolfbeisser, Astrid; Klötzer, Bernhard; Mayr, Lukas; Rameshan, Raffael; Zemlyanov, Dmitry; Bernardi, Johannes; Rupprechter, Günther
2015-01-01
The surface chemistry of methane on Ni–ZrO2 and bimetallic CuNi–ZrO2 catalysts and the stability of the CuNi alloy under the reaction conditions of methane decomposition were investigated by combining reactivity measurements and in situ synchrotron-based near-ambient-pressure XPS. Cu was selected as an exemplary promoter for modifying the reactivity of Ni and enhancing the resistance against coke formation. We observed an activation process occurring in methane between 650 and 735 K, with the exact temperature depending on the composition, which resulted in an irreversible modification of the catalytic performance of the bimetallic catalysts towards Ni-like behaviour. The sudden increase in catalytic activity can be explained by an increase in the concentration of reduced Ni atoms at the catalyst surface in the active state, likely as a consequence of the interaction with methane. Cu addition to Ni improved the desired resistance against carbon deposition by lowering the amount of coke formed. As a key conclusion, the CuNi alloy shows limited stability under relevant reaction conditions: the system is stable only in a limited temperature range, up to ~700 K in methane. Beyond this temperature, segregation of Ni species causes a fast increase in the methane decomposition rate. In view of the applicability of this system, a detailed understanding of the stability and surface composition of the bimetallic phases present, and of the influence of the Cu promoter on the surface chemistry under relevant reaction conditions, is essential. PMID:25815163
Decomposition strategies in the problems of simulation of additive laser technology processes
NASA Astrophysics Data System (ADS)
Khomenko, M. D.; Dubrov, A. V.; Mirzade, F. Kh.
2016-11-01
The development of additive technologies and their application in industry is associated with the possibility of predicting the final properties of a crystallized added material. This paper describes a problem characterized by a dynamic and spatially nonuniform computational complexity which, in the case of uniform decomposition of the computational domain, leads to an unbalanced load on the computational cores. A partitioning strategy for the computational domain is used that minimizes the CPU time lost to serial computation in simulating the additive technological process. The chosen strategy is optimal from the standpoint of an a priori unknown dynamic distribution of the computational load. The scaling of the computational problem is determined on the cluster of the Institute on Laser and Information Technologies (RAS), which uses an InfiniBand interconnect. The parallel code with optimal decomposition made it possible to reduce the computational time significantly (down to several hours), which is important for the development of a software package supporting engineering activity in the field of additive technology.
Garbe, Christoph S; Buttgereit, Andreas; Schürmann, Sebastian; Friedrich, Oliver
2012-01-01
Practically all chronic diseases are characterized by tissue remodeling that alters organ and cellular function through changes to normal organ architecture. Some morphometric alterations become irreversible and account for disease progression even at the cellular level. Early diagnostics to categorize tissue alterations, as well as monitoring progression or remission of disturbed cytoarchitecture upon treatment in the same individual, are a new emerging field. They strongly challenge spatial resolution and require advanced imaging techniques and strategies for detecting morphological changes. We use a combined second harmonic generation (SHG) microscopy and automated image processing approach to quantify morphology in an animal model of inherited Duchenne muscular dystrophy (mdx mouse) with age. Multiphoton XYZ image stacks from tissue slices reveal vast morphological deviation in muscles from old mdx mice at different scales of cytoskeleton architecture: cell calibers are irregular, myofibrils within cells are twisted, and sarcomere lattice disruptions (detected as "verniers") are larger in number compared to samples from healthy mice. In young mdx mice, such alterations are only minor. The boundary-tensor approach, adapted and optimized for SHG data, is a suitable approach for quick quantitative morphometry in whole tissue slices. The overall detection performance of the automated algorithm compares very well with manual "by eye" detection, the latter being time consuming and prone to subjective errors. Our algorithm outperforms manual detection in speed with similar reliability. This approach will be an important prerequisite for the implementation of clinical image databases to diagnose and monitor specific morphological alterations in chronic (muscle) diseases.
Pedros, Philip B; Askari, Omid; Metghalchi, Hameed
2016-12-01
During the last decade, municipal wastewater treatment plants have been regulated with increasingly stringent nutrient removal requirements, including nitrogen. Typically, biological treatment processes are employed to meet these limits. Although the nitrogen in the wastewater stream is reduced, certain steps in the biological processes allow for the release of gaseous nitrous oxide (N2O), a greenhouse gas (GHG). A comprehensive study was conducted to investigate the potential to mitigate N2O emissions from biological nutrient removal (BNR) processes by means of thermal decomposition. The study examined using the off-gases from the biological process, instead of ambient air, as the oxidant gas for the combustion of biomethane. A detailed analysis was done to examine the concentrations of N2O and 58 other gases that exited the combustion process. The analysis was based on the assumption that the exhaust gases were in chemical equilibrium, since the residence time in the combustor is sufficiently longer than the characteristic chemical time scales. For all inlet N2O concentrations, the outlet concentrations were close to zero. Additionally, the emissions of hydrogen sulfide (H2S) and ten commonly occurring volatile organic compounds (VOCs) were also examined as a means of odor control for biological secondary treatment processes or as potential emissions from an anaerobic reactor of a BNR process. The sulfur released from the H2S formed sulfur dioxide (SO2), and eight of the ten VOCs were destroyed.
NASA Astrophysics Data System (ADS)
Bakker, O. J.; Gibson, C.; Wilson, P.; Lohse, N.; Popov, A. A.
2015-10-01
Due to its inherent advantages, linear friction welding is a solid-state joining process of increasing importance to the aerospace, automotive, medical and power-generation equipment industries. Tangential oscillations and the forge stroke during the burn-off phase of the joining process introduce essential dynamic forces that can also be detrimental to the welding process. Since burn-off is a critical phase in the manufacturing stage, process monitoring is fundamental for quality and stability control purposes. This study aims to improve workholding stability through the analysis of fixture cassette deformations. Methods and procedures for process monitoring are developed and implemented in a fail-or-pass assessment system for fixture cassette deformations during the burn-off phase. Additionally, the de-noised signals are compared to results from previous production runs. The deformations caused by the forces acting on the fixture cassette are measured directly during the welding process. Data from the linear friction-welding machine are acquired and de-noised using empirical mode decomposition before the burn-off phase is extracted. This approach enables a direct, objective comparison of the signal features with trends from previous successful welds. The capability of the whole process-monitoring system is validated and demonstrated through the analysis of a large number of signals obtained from welding experiments.
Interactive multiscale tensor reconstruction for multiresolution volume visualization.
Suter, Susanne K; Guitián, José A Iglesias; Marton, Fabio; Agus, Marco; Elsener, Andreas; Zollikofer, Christoph P E; Gopi, M; Gobbetti, Enrico; Pajarola, Renato
2011-12-01
Large scale and structurally complex volume datasets from high-resolution 3D imaging devices or computational simulations pose a number of technical challenges for interactive visual analysis. In this paper, we present the first integration of a multiscale volume representation based on tensor approximation within a GPU-accelerated out-of-core multiresolution rendering framework. Specific contributions include (a) a hierarchical brick-tensor decomposition approach for pre-processing large volume data, (b) a GPU accelerated tensor reconstruction implementation exploiting CUDA capabilities, and (c) an effective tensor-specific quantization strategy for reducing data transfer bandwidth and out-of-core memory footprint. Our multiscale representation allows for the extraction, analysis and display of structural features at variable spatial scales, while adaptive level-of-detail rendering methods make it possible to interactively explore large datasets within a constrained memory footprint. The quality and performance of our prototype system is evaluated on large structurally complex datasets, including gigabyte-sized micro-tomographic volumes.
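The reconstruction step of such a tensor-approximation scheme is, in the Tucker form, a small core tensor multiplied along each mode by a factor matrix. A minimal sketch (the dimensions, ranks, and random factors are illustrative; the paper's brick-wise GPU reconstruction and quantization are not reproduced):

```python
import numpy as np

rng = np.random.default_rng(3)

# A tiny volume "brick" approximated in Tucker form.
I, J, K = 16, 16, 16
r1 = r2 = r3 = 4

core = rng.normal(size=(r1, r2, r3))   # core tensor
U1 = rng.normal(size=(I, r1))          # mode-1 factor matrix
U2 = rng.normal(size=(J, r2))          # mode-2 factor matrix
U3 = rng.normal(size=(K, r3))          # mode-3 factor matrix

# Reconstruction: core x_1 U1 x_2 U2 x_3 U3, written as one einsum.
volume = np.einsum('abc,ia,jb,kc->ijk', core, U1, U2, U3)
```

In this toy case storage drops from I*J*K = 4096 voxels to r1*r2*r3 + I*r1 + J*r2 + K*r3 = 256 coefficients, which is the kind of compression an out-of-core multiresolution renderer can exploit, reconstructing bricks on demand.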
Schmidt, A.J.; Freeman, H.D.; Brown, M.D.; Zacher, A.H.; Neuenschwander, G.N.; Wilcox, W.A.; Gano, S.R.; Kim, B.C.; Gavaskar, A.R.
1996-02-01
Base Catalyzed Decomposition (BCD) is a chemical dehalogenation process designed for treating soils and other substrates contaminated with polychlorinated biphenyls (PCBs), pesticides, dioxins, furans, and other hazardous organic substances. PCBs are heavy organic liquids once widely used in industry as lubricants, heat transfer oils, and transformer dielectric fluids. In 1976, production was banned when PCBs were recognized as carcinogenic substances. It was estimated that significant quantities (one billion tons) of U.S. soils, including areas on U.S. military bases outside the country, were contaminated by PCB leaks and spills, and cleanup activities began. The BCD technology was developed in response to these activities. This report details the evolution of the process, from inception to deployment in Guam, and describes the process and system components provided to the Navy to meet the remediation requirements. The report is divided into several sections to cover the range of development and demonstration activities. Section 2.0 gives an overview of the project history. Section 3.0 describes the process chemistry and remediation steps involved. Section 4.0 provides a detailed description of each component and specific development activities. Section 5.0 details the testing and deployment operations and provides the results of the individual demonstration campaigns. Section 6.0 gives an economic assessment of the process. Section 7.0 presents the conclusions and recommendations from this project. The appendices contain equipment and instrument lists, equipment drawings, and detailed run and analytical data.
Hsiao, M.C.; Merritt, B.T.; Penetrante, B.M.; Vogtlin, G.E.; Wallman, P.H.
1995-09-01
Experiments are presented on the plasma-assisted decomposition of dilute concentrations of methanol and trichloroethylene in atmospheric-pressure air streams by electrical discharge processing. This investigation used two types of discharge reactors, a dielectric-barrier and a pulsed corona discharge reactor, to study the effects of gas temperature and electrical energy input on the decomposition chemistry and byproduct formation. Our experimental data on both methanol and trichloroethylene show that, under identical gas conditions, the type of electrical discharge reactor does not affect the energy requirements for decomposition or byproduct formation. Our experiments on methanol show that discharge processing converts methanol to CO(x) with an energy yield that increases with temperature. In contrast to the results from methanol, CO(x) is only a minor product in the decomposition of trichloroethylene. In addition, higher temperatures decrease the energy yield for trichloroethylene. This effect may be due to increased competition from decomposition of the byproducts dichloroacetyl chloride and phosgene. In all cases plasma processing using an electrical discharge device produces CO preferentially over CO2.
Decomposition of aniline in aqueous solution by UV/TiO2 process with applying bias potential.
Ku, Young; Chiu, Ping-Chin; Chou, Yiang-Chen
2010-11-15
Application of a bias potential to the photocatalytic decomposition of aniline in aqueous solution was studied under various solution pH values, bias potentials and concentrations of potassium chloride. The decomposition of aniline by the UV/TiO(2) process was found to be enhanced by the application of a bias potential of lower voltage; however, electrolysis of aniline became dominant as the applied bias potential exceeded 1.0 V. Based on the experimental results and calculated synergetic factors, the application of a bias potential improved the decomposition of aniline more noticeably in acidic solutions than in alkaline solutions. Decomposition of aniline by the UV/bias/TiO(2) process in alkaline solutions increased to a certain extent with the concentration of potassium chloride present in the solution. Experimental results also indicated that the energy consumed by applying a bias potential for aniline decomposition by the UV/bias/TiO(2) process may be much lower than that consumed by increasing the light intensity for photocatalysis.
The classical model for moment tensors
NASA Astrophysics Data System (ADS)
Tape, W.; Tape, C.
2013-12-01
A seismic moment tensor is a description of an earthquake source, but the description is indirect. The moment tensor describes seismic radiation rather than the actual physical process that initiates the radiation. A moment tensor 'model' then ties the physical process to the moment tensor. The model is not unique, and the physical process is therefore not unique. In the classical moment tensor model (Aki and Richards, 1980), an earthquake arises from slip along a planar fault, but with the slip not necessarily in the plane of the fault. The model specifies the resulting moment tensor in terms of the slip vector, the fault normal vector, and the Lamé elastic parameters, assuming isotropy. We review the classical model in the context of the fundamental lune. The lune is closely related to the space of moment tensors, and it provides a setting that is conceptually natural as well as pictorial. In addition to the classical model, we consider a crack plus double couple model (CDC model) in which a moment tensor is regarded as the sum of a crack tensor and a double couple. A compilation of full moment tensors from the literature reveals large deviations in Poisson's ratio as implied by the classical model. Either the classical model is inadequate or the published full moment tensors have very large uncertainties. We question the common interpretation of the isotropic component as a volume change in the source region.
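The classical model's moment tensor can be written directly from the slip vector s, the fault normal n, and the Lamé parameters: M_ij = A[lambda (s.n) delta_ij + mu (s_i n_j + s_j n_i)]. A short numerical sketch, with unit fault area and illustrative elastic constants:

```python
import numpy as np

def classical_moment_tensor(slip, normal, lam, mu, area=1.0):
    """Moment tensor of the classical model (Aki and Richards):
    M_ij = A [ lam (s.n) delta_ij + mu (s_i n_j + s_j n_i) ],
    with the slip vector s not necessarily in the fault plane."""
    s = np.asarray(slip, dtype=float)
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)    # unit fault normal
    return area * (lam * np.dot(s, n) * np.eye(3)
                   + mu * (np.outer(s, n) + np.outer(n, s)))

# Slip in the fault plane (s.n = 0) gives a pure double couple:
M = classical_moment_tensor([1.0, 0.0, 0.0], [0.0, 0.0, 1.0],
                            lam=30e9, mu=30e9)
```

When the slip has a component along the normal (an opening crack), the lambda term contributes an isotropic part, which is exactly the component whose interpretation as a volume change the abstract questions.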
Decomposition of phenylarsonic acid by AOP processes: degradation rate constants and by-products.
Jaworek, K; Czaplicka, M; Bratek, Ł
2014-10-01
The paper presents results of studies on the photodegradation, photooxidation, and oxidation of phenylarsonic acid (PAA) in aqueous solution. Water solutions containing 2.7 g dm(-3) phenylarsonic acid were subjected to advanced oxidation processes (AOPs) in UV, UV/H2O2, UV/O3, H2O2, and O3 systems under two pH conditions. Kinetic rate constants and half-lives of the phenylarsonic acid decomposition reaction are presented. The results indicate that at pH 2 and 7, PAA degradation follows pseudo-first-order kinetics. The highest rate constants (10.45 × 10(-3) and 20.12 × 10(-3)) and degradation efficiencies at pH 2 and 7 were obtained with the UV/O3 process. After the processes, benzene, phenol, acetophenone, o-hydroxybiphenyl, p-hydroxybiphenyl, benzoic acid, benzaldehyde, and biphenyl were identified in solution.
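Extracting a pseudo-first-order rate constant and half-life from concentration data is a linear fit of ln(C0/C) against t. A sketch with hypothetical concentrations (not the paper's measurements):

```python
import numpy as np

# Pseudo-first-order decay: C(t) = C0 * exp(-k t), so ln(C0 / C) = k t.
# Illustrative concentration data (made-up numbers, not the paper's).
t = np.array([0.0, 10.0, 20.0, 40.0, 60.0])     # min
C = np.array([2.70, 2.21, 1.81, 1.21, 0.81])    # g dm^-3

# Linear least-squares fit: the slope is the rate constant k.
k_fit, _ = np.polyfit(t, np.log(C[0] / C), 1)
t_half = np.log(2.0) / k_fit    # half-life of the decay
```

The half-life follows from the rate constant alone, t_half = ln 2 / k, which is how rate constants like those quoted in the abstract translate into half-lives.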
NASA Technical Reports Server (NTRS)
Hudson, Nicolas; Lin, Ying; Barengoltz, Jack
2010-01-01
A method is developed for evaluating the probability of a Viable Earth Microorganism (VEM) contaminating a sample during the sample acquisition and handling (SAH) process of a potential future Mars Sample Return mission. A scenario is analyzed in which multiple core samples would be acquired using a rotary percussive coring tool deployed from an arm on a MER-class rover. The analysis is conducted in a structured way by decomposing the sample acquisition and handling process into a series of discrete time steps and breaking the physical system into a set of relevant components. At each discrete time step, two key functions are defined: the probability of a VEM being released from each component, and the transport matrix, which represents the probability of VEM transport from one component to another. Given the expected number of VEMs on each component at the start of the sampling process, this decomposition allows the expected number of VEMs on each component at each sampling step to be represented as a Markov chain. This formalism provides a rigorous mathematical framework in which to analyze the probability of a VEM entering the sample chain, and makes the analysis tractable by breaking the process down into small analyzable steps.
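The Markov-chain bookkeeping described above reduces to repeated multiplication of the expected-count vector by the transport matrix. A toy sketch with invented components and probabilities (purely illustrative, not the mission analysis' actual values):

```python
import numpy as np

# Hypothetical components of the sampling chain (invented for illustration):
# 0 = coring tool, 1 = rover arm, 2 = sample tube.
n0 = np.array([100.0, 50.0, 0.0])   # expected VEMs on each component at start

# Transport matrix: T[i, j] is the probability that a VEM on component j
# ends up on component i during one sampling step (invented numbers;
# column sums below 1 represent loss or die-off).
T = np.array([
    [0.90, 0.02, 0.00],
    [0.05, 0.93, 0.00],
    [0.01, 0.01, 1.00],   # the sample tube retains what it receives
])

n = n0.copy()
for _ in range(5):        # five discrete sampling steps
    n = T @ n             # expected counts evolve as a Markov chain
```

After the loop, `n[2]` is the expected number of VEMs that have entered the sample chain, which is the contamination figure the analysis is after.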
Efficient photoreductive decomposition of N-nitrosodimethylamine by UV/iodide process.
Sun, Zhuyu; Zhang, Chaojie; Zhao, Xiaoyun; Chen, Jing; Zhou, Qi
2017-05-05
N-nitrosodimethylamine (NDMA) has aroused extensive concern as a disinfection byproduct due to its high toxicity and elevated concentration levels in water sources. This study investigates the photoreductive decomposition of NDMA by the UV/iodide process. The results showed that this process is an effective strategy for the treatment of NDMA, with 99.2% of NDMA removed within 10 min. The depletion of NDMA by the UV/iodide process obeyed pseudo-first-order kinetics with a rate constant (k1) of 0.60 ± 0.03 min(-1). Hydrated electrons (eaq(-)) generated by the UV irradiation of iodide were proven to play a critical role. Dimethylamine (DMA) and nitrite (NO2(-)) were formed as the main intermediate products, which were completely converted to formate (HCOO(-)), ammonium (NH4(+)) and nitrogen (N2). Therefore, not only the high efficiency of NDMA destruction but also the elimination of toxic intermediates makes the UV/iodide process advantageous. A photoreduction mechanism was proposed: NDMA initially absorbs photons to reach a photoexcited state and undergoes cleavage of the N-NO bond under attack by eaq(-). The solution pH had little impact on NDMA removal; however, alkaline conditions were more favorable for the elimination of DMA and NO2(-), thus effectively reducing secondary pollution.
Cao, Hong-Wen; Yang, Ke-Yu; Yan, Hong-Mei
2017-01-01
Character order information is encoded at the initial stage of Chinese word processing; however, its time course remains underspecified. In this study, we assess the exact time course of the character decomposition and transposition processes of two-character Chinese compound words (canonical, transposed, or reversible words) compared with pseudowords, using dual-target rapid serial visual presentation (RSVP) of stimuli appearing at 30 ms per character with no inter-stimulus interval. The results indicate that Chinese readers can identify words with character transpositions in rapid succession; however, a transposition cost is involved in identifying transposed words compared to canonical words. In RSVP reading, the character order of words is more likely to be reversed during the period from 30 to 180 ms for canonical and reversible words, but from 30 to 240 ms for transposed words. Taken together, the findings demonstrate that the holistic representation of the base word is activated, while the order of the two constituent characters is not strictly processed during the very early stage of visual word processing.
Cao, Hongwen; Gao, Min; Yan, Hongmei
2016-01-01
The attentional blink (AB) is the phenomenon in which the identification of the second of two targets (T2) is attenuated if it is presented less than 500 ms after the first target (T1). Although the AB is eliminated in canonical word conditions, it remains unclear whether the character order in compound words affects the magnitude of the AB. Morpheme decomposition and transposition of Chinese two-character compound words can provide an effective means to examine AB priming and to assess combinations of the component representations inherent to visual word identification. In the present study, we examined the processing of consecutive targets in a rapid serial visual presentation (RSVP) paradigm using Chinese two-character compound words in which the two characters were transposed to form meaningful words or meaningless combinations (reversible, transposed, or canonical words). We found that when two Chinese characters that form a compound word, regardless of their order, are presented in an RSVP sequence, the likelihood of an AB for the second character is greatly reduced or eliminated compared to when the two characters constitute separate words rather than a compound word. Moreover, the order of the report for the two characters is more likely to be reversed when the normal order of the two characters in a compound word is reversed, especially when the interval between the presentation of the two characters is extremely short. These findings are more consistent with the cognitive strategy hypothesis than the resource-limited hypothesis during character decomposition and transposition of Chinese two-character compound words. These results suggest that compound characters are perceived as a unit, rather than two separate words. The data further suggest that readers could easily understand the text with character transpositions in compound words during Chinese reading. PMID:27379003
Qiu, Yang; Collin, Felten; Hurt, Robert H; Külaots, Indrek
2016-01-01
The success of graphene technologies will require the development of safe and cost-effective nano-manufacturing methods. Special safety issues arise for manufacturing routes based on graphite oxide (GO) as an intermediate due to its energetic behavior. This article presents a detailed thermochemical and kinetic study of GO exothermic decomposition designed to identify the conditions and material compositions that avoid explosive events during storage and processing at large scale. It is shown that GO becomes more reactive toward thermal decomposition when pretreated with OH(-) in suspension, and the effect is reversible by back-titration to low pH. This OH(-) effect can lower the onset temperature of the decomposition exotherm by up to 50 degrees Celsius, causing overlap with common drying operations (100-120°C) and possible self-heating and thermal runaway during processing. Spectroscopic and modeling evidence suggests that epoxide groups are primarily responsible for the energetic behavior, and epoxy ring opening/closing reactions are offered as an explanation for the reversible effects of pH on decomposition kinetics and enthalpies. A quantitative kinetic model is developed for GO thermal decomposition and used in a series of case studies to predict the storage conditions under which spontaneous self-heating, thermal runaway, and explosions can be avoided.
NASA Astrophysics Data System (ADS)
Kuo, Kenneth K.; Lu, Y. C.; Chiaverini, Martin J.; Harting, George C.
1994-11-01
An experimental study on the fundamental processes involved in fuel decomposition and boundary layer combustion in hybrid rocket motors is being conducted at the High Pressure Combustion Laboratory of the Pennsylvania State University. This research should provide a useful engineering technology base in the development of hybrid rocket motors as well as a fundamental understanding of the complex processes involved in hybrid propulsion. A high pressure slab motor has been designed and manufactured for conducting experimental investigations. Oxidizer (LOX or GOX) supply and control systems have been designed and partly constructed for the head-end injection into the test chamber. Experiments using HTPB fuel, as well as fuels supplied by NASA designated industrial companies will be conducted. Design and construction of fuel casting molds and sample holders have been completed. The portion of these items for industrial company fuel casting will be sent to the McDonnell Douglas Aerospace Corporation in the near future. The study focuses on the following areas: observation of solid fuel burning processes with LOX or GOX, measurement and correlation of solid fuel regression rate with operating conditions, measurement of flame temperature and radical species concentrations, determination of the solid fuel subsurface temperature profile, and utilization of experimental data for validation of a companion theoretical study (Part 2) also being conducted at PSU.
Putting domain decomposition at the heart of a mesh-based simulation process
NASA Astrophysics Data System (ADS)
Chow, Peter; Addison, Clifford
2002-12-01
In computational mechanics analyses such as those in computational fluid dynamics and computational structure mechanics, some 60-90% of total modelling time is taken by specifying and creating the model of the geometry and mesh. The rest of the time is spent in actual analyses and interpreting the results. This is especially true for industries such as aerospace and electronics, where 3D geometrically complex models with multiple physical processes are common. Advances in computational hardware and software have tended to increase the proportion of time spent in model creation, partly because such advances have made it feasible to solve hard and complex geometry problems in a timely fashion. This paper shows one way to exploit the advances in computation to reduce the model creation time and potentially the overall modelling time, namely the use of domain decomposition to define consistent and coherent global models based on existing component geometry and mesh models. In keeping with existing modelling processes the re-engineering cost for the process is minimal.
NASA Astrophysics Data System (ADS)
Ball, R.; McIntosh, A. C.; Brindley, J.
2004-06-01
A simple dynamical system that models the competitive thermokinetics and chemistry of cellulose decomposition is examined, with reference to evidence from experimental studies indicating that char formation is a low activation energy exothermal process and volatilization is a high activation energy endothermal process. The thermohydrolysis chemistry at the core of the primary competition is described. Essentially, the competition is between two nucleophiles, a molecule of water and an -OH group on C6 of an end glucosyl cation, to form either a reducing chain fragment with the propensity to undergo the bond-forming reactions that ultimately form char, or a levoglucosan end-fragment that depolymerizes to volatile products. The results of this analysis suggest that promotion of char formation under thermal stress can actually increase the production of flammable volatiles. Thus, we would like to convey an important safety message in this paper: in some situations where heat and mass transfer is restricted in cellulosic materials, such as furnishings, insulation, and stockpiles, the use of char-promoting treatments for fire retardation may have the effect of increasing the risk of flaming combustion.
NASA Technical Reports Server (NTRS)
Kuo, Kenneth K.; Lu, Y. C.; Chiaverini, Martin J.; Harting, George C.
1994-01-01
An experimental study on the fundamental processes involved in fuel decomposition and boundary layer combustion in hybrid rocket motors is being conducted at the High Pressure Combustion Laboratory of the Pennsylvania State University. This research should provide an engineering technology base for development of large scale hybrid rocket motors as well as a fundamental understanding of the complex processes involved in hybrid propulsion. A high pressure slab motor has been designed for conducting experimental investigations. Oxidizer (LOX or GOX) is injected through the head-end over a solid fuel (HTPB) surface. Experiments using fuels supplied by NASA designated industrial companies will also be conducted. The study focuses on the following areas: measurement and observation of solid fuel burning with LOX or GOX, correlation of solid fuel regression rate with operating conditions, measurement of flame temperature and radical species concentrations, determination of the solid fuel subsurface temperature profile, and utilization of experimental data for validation of a companion theoretical study also being conducted at PSU.
Mathematical simulation of thermal decomposition processes in coking polymers during intense heating
Shlenskii, O.F.; Polyakov, A.A.
1994-12-01
Description of nonstationary heat transfer in heat-shielding materials based on cross-linked polymers, mathematical simulation of chemical engineering processes for treating coking and fiery coals, and design calculations all require taking thermal destruction kinetics into account. The kinetics of chemical transformations affects the change in substance density depending on the temperature, the time, the heat-release function, and other properties of materials. The traditionally accepted description of the thermal destruction kinetics of coking materials is based on formulating a set of kinetic equations in which only chemical transformations are taken into account. However, such an approach does not necessarily agree with the experimental data obtained for the case of intense heating. The authors propose including in the set of kinetic equations parameters characterizing the decrease of intermolecular interaction in a comparatively narrow temperature interval (20-40 K). In the neighborhood of a certain temperature T{sub 1}, called the limiting temperature of thermal decomposition, a decrease in intermolecular interaction causes an increase in the rates of chemical and phase transformations. This enhancement of destruction processes has been found experimentally by the contact thermal analysis method.
Empirical mode decomposition as a time-varying multirate signal processing system
NASA Astrophysics Data System (ADS)
Yang, Yanli
2016-08-01
Empirical mode decomposition (EMD) can adaptively split composite signals into narrow subbands termed intrinsic mode functions (IMFs). Although an analytical expression for the IMFs extracted by EMD from signals is introduced in Yang et al. (2013) [1], it applies only to the case of uniformly spaced extrema. In this paper, the EMD algorithm is analyzed from a digital signal processing perspective for the case of nonuniformly spaced extrema. Firstly, the extrema extraction is represented by a time-varying extrema decimator. The nonuniform extrema extraction is analyzed by modeling the time-varying extrema decimation at a fixed time point as a time-invariant decimation. Secondly, by using the impulse/summation approach, spline interpolation for nonuniformly spaced knots is shown to consist of two basic operations: time-varying interpolation and filtering by a time-varying spline filter. Thirdly, envelopes of signals are written as the output of the time-varying spline filter. An expression for the envelopes of signals in both the time and frequency domains is presented. The EMD algorithm is then described as a time-varying multirate signal processing system. Finally, an equation to model the IMFs is derived using a matrix formulation in the time domain for the general case of nonuniformly spaced extrema.
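The core operations the paper analyses — extrema extraction, spline envelope construction, and envelope-mean subtraction — can be illustrated with a minimal sifting sketch in Python (a bare-bones illustration using NumPy/SciPy, not the time-varying multirate formulation developed in the paper; the function name and stopping rule are ours):

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema

def sift(x, t, n_iter=10):
    """One EMD sifting pass: repeatedly subtract the mean of the upper
    and lower cubic-spline envelopes to obtain a candidate IMF."""
    h = x.copy()
    for _ in range(n_iter):
        maxima = argrelextrema(h, np.greater)[0]
        minima = argrelextrema(h, np.less)[0]
        if len(maxima) < 2 or len(minima) < 2:
            break  # not enough extrema to build envelopes
        upper = CubicSpline(t[maxima], h[maxima])(t)
        lower = CubicSpline(t[minima], h[minima])(t)
        h = h - (upper + lower) / 2.0
    return h

# Two-tone test signal: the first IMF should capture the fast component
t = np.linspace(0.0, 1.0, 1000)
x = np.sin(2 * np.pi * 25 * t) + 0.5 * np.sin(2 * np.pi * 3 * t)
imf1 = sift(x, t)
residue = x - imf1
```

By construction the decomposition is lossless: the extracted IMF plus the residue reconstructs the input exactly, which is the property the multirate analysis in the paper formalizes.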
Striganova, B R; Bienkowski, P
2000-01-01
The rate of grass litter decomposition was studied in soils of the Karkonosze Mountains of the Sudeten at different altitudes. In parallel, structural-functional investigations of the soil animal population, exemplified by the soil macrofauna, were carried out, and heavy metals were assayed in the soil at stationary plots to reveal the effects of both natural and anthropogenic factors on soil biological activity. The recent contamination of Sudeten soils by heavy metals and sulfur does not affect the spatial distribution and abundance of soil-dwelling invertebrates or the decomposition rates. The latter correlated with a high level of soil saprotroph activity. The activity of the decomposition processes depends on the soil organic matter content, the conditions of soil drainage, and the temperature of the upper soil horizon.
Multilinear operators for higher-order decompositions.
Kolda, Tamara Gibson
2006-04-01
We propose two new multilinear operators for expressing the matrix compositions that are needed in the Tucker and PARAFAC (CANDECOMP) decompositions. The first operator, which we call the Tucker operator, is shorthand for performing an n-mode matrix multiplication for every mode of a given tensor and can be employed to concisely express the Tucker decomposition. The second operator, which we call the Kruskal operator, is shorthand for the sum of the outer-products of the columns of N matrices and allows a divorce from a matricized representation and a very concise expression of the PARAFAC decomposition. We explore the properties of the Tucker and Kruskal operators independently of the related decompositions. Additionally, we provide a review of the matrix and tensor operations that are frequently used in the context of tensor decompositions.
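The Kruskal operator described in this abstract — the sum of the outer products of corresponding columns of N factor matrices — can be sketched in a few lines of NumPy (an illustrative analogue; the function name is ours, not the paper's notation or the Tensor Toolbox API):

```python
import numpy as np

def kruskal(*factors):
    """Kruskal operator: sum over r of the outer product of the r-th
    columns of the factor matrices (the PARAFAC/CANDECOMP model)."""
    rank = factors[0].shape[1]
    assert all(f.shape[1] == rank for f in factors)
    shape = tuple(f.shape[0] for f in factors)
    tensor = np.zeros(shape)
    for r in range(rank):
        outer = factors[0][:, r]
        for f in factors[1:]:
            outer = np.multiply.outer(outer, f[:, r])
        tensor += outer
    return tensor

# Rank-1 example: entry (i, j, k) is a_i * b_j * c_k
a, b, c = np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])
T = kruskal(a[:, None], b[:, None], c[:, None])
```

Note how the operator builds the tensor directly from the factors, with no matricized intermediate — the "divorce from a matricized representation" the abstract mentions.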
Extended vector-tensor theories
NASA Astrophysics Data System (ADS)
Kimura, Rampei; Naruko, Atsushi; Yoshida, Daisuke
2017-01-01
Recently, several extensions of massive vector theory in curved space-time have been proposed in the literature. In this paper, we consider the most general vector-tensor theories that contain up to two derivatives with respect to the metric and vector field. By imposing a degeneracy condition on the Lagrangian in the context of the ADM decomposition of space-time to eliminate an unwanted mode, we construct a new class of massive vector theories in which five degrees of freedom can propagate, corresponding to three massive vector modes and two massless tensor modes. We find that the generalized Proca and the beyond-generalized-Proca theories up to the quartic Lagrangian, which should be included in this formulation, are degenerate theories even in curved space-time. Finally, introducing new metric and vector field transformations, we investigate the properties of the resulting theories under such transformations.
Efficient MATLAB computations with sparse and factored tensors.
Bader, Brett William; Kolda, Tamara Gibson (Sandia National Lab, Livermore, CA)
2006-12-01
In this paper, the term tensor refers simply to a multidimensional or N-way array, and we consider how specially structured tensors allow for efficient storage and computation. First, we study sparse tensors, which have the property that the vast majority of the elements are zero. We propose storing sparse tensors using coordinate format and describe the computational efficiency of this scheme for various mathematical operations, including those typical to tensor decomposition algorithms. Second, we study factored tensors, which have the property that they can be assembled from more basic components. We consider two specific types: a Tucker tensor can be expressed as the product of a core tensor (which itself may be dense, sparse, or factored) and a matrix along each mode, and a Kruskal tensor can be expressed as the sum of rank-1 tensors. We are interested in the case where the storage of the components is less than the storage of the full tensor, and we demonstrate that many elementary operations can be computed using only the components. All of the efficiencies described in this paper are implemented in the Tensor Toolbox for MATLAB.
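The coordinate (COO) storage scheme described here is implemented in the Tensor Toolbox for MATLAB; a minimal Python analogue of the idea — store only the nonzero subscripts and values, and compute a mode-n tensor-times-vector product directly from them — might look like this (class and method names are ours, not the Toolbox's):

```python
import numpy as np

class CooTensor:
    """Sparse tensor in coordinate (COO) format: only the nonzero
    entries and their N-dimensional subscripts are stored."""
    def __init__(self, subs, vals, shape):
        self.subs = np.asarray(subs)            # nnz x N subscripts
        self.vals = np.asarray(vals, float)     # nnz values
        self.shape = tuple(shape)

    def ttv(self, v, mode):
        """Tensor-times-vector along `mode`: each nonzero contributes
        val * v[subscript along mode] to the reduced-order result
        (returned dense here for simplicity of the sketch)."""
        out_shape = self.shape[:mode] + self.shape[mode + 1:]
        out = np.zeros(out_shape)
        keep = [i for i in range(len(self.shape)) if i != mode]
        for sub, val in zip(self.subs, self.vals):
            idx = tuple(sub[k] for k in keep)
            out[idx] += val * v[sub[mode]]
        return out

# A 2x2x2 tensor with two nonzero entries
T = CooTensor([(0, 0, 0), (1, 1, 1)], [2.0, 3.0], (2, 2, 2))
M = T.ttv(np.array([1.0, 10.0]), mode=2)
```

The work is proportional to the number of nonzeros, not to the full tensor size — the source of the efficiency the paper quantifies for such operations.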
Kos, L; Michalska, K; Perkowski, J
2014-11-01
The aim of our studies was to determine the efficiency of decomposition of a non-ionic surfactant by the Fenton method in the presence of iron nanocompounds and to compare it with the classical Fenton method. The subject of the studies was aqueous solutions of the non-ionic detergent Tergitol TMN-10 used in the textile industry. Aqueous solutions of the surfactant were subjected to treatment by the classical Fenton method and to treatment in the presence of iron nanocompounds. In the liquid samples containing the surfactant, chemical oxygen demand (COD) and total organic carbon (TOC) were determined. The Fenton process was optimized based on studies of the effect of the compounds used in the treatment, the doses of iron and nanoiron, the hydrogen peroxide dose, and the pH of the solution on surfactant decomposition. Iron oxide nanopowder catalyzed the detergent decomposition process, increasing its efficiency and the degree of mineralization. It was found that the efficiency of surfactant decomposition in the process using iron nanocompounds was 10 to 30% higher than in the classical method. The amounts of deposits formed were also several times smaller.
NASA Astrophysics Data System (ADS)
Andriyah, L.; Lalasari, L. H.; Manaf, A.
2017-02-01
Extraction of cassiterite by alkaline decomposition with sodium carbonate (Na2CO3) has been studied. Cassiterite (SnO2) is a mineral ore that contains about 57.82 wt% tin (Sn) and impurities such as quartz, ilmenite, monazite, rutile and zircon. The initial step of the process was to remove the impurities from the cassiterite through washing and separation by a high magnetic separator (HTS). The aim of this research is to increase the added value of cassiterite from a local area of Indonesia by using alkaline decomposition to form sodium stannate (Na2SnO3). The results show that cassiterite from Indonesia can form sodium stannate (Na2SnO3), which is soluble in water in the leaching process. The longer the decomposition time, the more sodium stannate phases are formed. The optimum result was reached when the decomposition was carried out at 850 °C for 4 hours with a mole ratio of Na2CO3 to cassiterite of 3:2. High Score Plus (HSP) was used in this research to analyze the mass fraction of sodium stannate (Na2SnO3). The HSP analysis showed that the mass fraction of sodium stannate (Na2SnO3) is 70.3 wt%.
Unsupervised Tensor Mining for Big Data Practitioners.
Papalexakis, Evangelos E; Faloutsos, Christos
2016-09-01
Multiaspect data are ubiquitous in modern Big Data applications. For instance, different aspects of a social network are the different types of communication between people, the time stamp of each interaction, and the location associated with each individual. How can we jointly model all those aspects and leverage the additional information that they introduce to our analysis? Tensors, which are multidimensional extensions of matrices, are a principled and mathematically sound way of modeling such multiaspect data. In this article, our goal is to popularize tensors and tensor decompositions to Big Data practitioners by demonstrating their effectiveness, outlining challenges that pertain to their application in Big Data scenarios, and presenting our recent work that tackles those challenges. We view this work as a step toward a fully automated, unsupervised tensor mining tool that can be easily and broadly adopted by practitioners in academia and industry.
Trace Norm Regularized CANDECOMP/PARAFAC Decomposition With Missing Data.
Liu, Yuanyuan; Shang, Fanhua; Jiao, Licheng; Cheng, James; Cheng, Hong
2015-11-01
In recent years, low-rank tensor completion (LRTC) problems have received a significant amount of attention in computer vision, data mining, and signal processing. The existing trace norm minimization algorithms for iteratively solving LRTC problems involve multiple singular value decompositions of very large matrices at each iteration and therefore suffer from high computational cost. In this paper, we propose a novel trace norm regularized CANDECOMP/PARAFAC decomposition (TNCP) method for simultaneous tensor decomposition and completion. We first formulate a factor matrix rank minimization model by deducing the relation between the rank of each factor matrix and the mode-n rank of a tensor. Then, we introduce a tractable relaxation of our rank function and obtain a convex combination problem of much smaller-scale matrix trace norm minimizations. Finally, we develop an efficient algorithm based on the alternating direction method of multipliers to solve our problem. Promising experimental results on synthetic and real-world data validate the effectiveness of our TNCP method. Moreover, TNCP is significantly faster than the state-of-the-art methods and scales to larger problems.
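For context, the CANDECOMP/PARAFAC model that TNCP regularizes is classically fitted by alternating least squares; a plain (unregularized, complete-data) CP-ALS sketch for a 3-way tensor in NumPy — not the TNCP algorithm itself, and with helper names of our own choosing — looks like this:

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding (C-order flattening of the remaining modes)."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(A, B):
    """Column-wise Kronecker (Khatri-Rao) product of A (I x R), B (J x R)."""
    return np.einsum('ir,jr->ijr', A, B).reshape(-1, A.shape[1])

def cp_als(T, rank, n_iter=100, seed=0):
    """Alternating least squares for a 3-way CP decomposition: each
    factor is updated in turn by solving a linear least-squares problem
    while the other two factors are held fixed."""
    rng = np.random.default_rng(seed)
    A, B, C = (rng.standard_normal((s, rank)) for s in T.shape)
    for _ in range(n_iter):
        A = unfold(T, 0) @ np.linalg.pinv(khatri_rao(B, C).T)
        B = unfold(T, 1) @ np.linalg.pinv(khatri_rao(A, C).T)
        C = unfold(T, 2) @ np.linalg.pinv(khatri_rao(A, B).T)
    return A, B, C

# A rank-1 tensor is recovered essentially exactly
T = np.einsum('i,j,k->ijk', np.array([1.0, 2.0, 3.0]),
              np.array([4.0, 5.0]), np.array([6.0, 7.0, 8.0, 9.0]))
A, B, C = cp_als(T, rank=1, n_iter=50)
T_hat = np.einsum('ir,jr,kr->ijk', A, B, C)
```

TNCP replaces these unregularized least-squares updates with trace-norm-penalized subproblems solved via ADMM, and additionally handles missing entries.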
KOALA: A program for the processing and decomposition of transient spectra
NASA Astrophysics Data System (ADS)
Grubb, Michael P.; Orr-Ewing, Andrew J.; Ashfold, Michael N. R.
2014-06-01
Extracting meaningful kinetic traces from time-resolved absorption spectra is a non-trivial task, particularly for solution phase spectra where solvent interactions can substantially broaden and shift the transition frequencies. Typically, each spectrum is composed of signal from a number of molecular species (e.g., excited states, intermediate complexes, product species) with overlapping spectral features. Additionally, the profiles of these spectral features may evolve in time (i.e., signal nonlinearity), further complicating the decomposition process. Here, we present a new program for decomposing mixed transient spectra into their individual component spectra and extracting the corresponding kinetic traces: KOALA (Kinetics Observed After Light Absorption). The software combines spectral target analysis with brute-force linear least squares fitting, which is computationally efficient because of the small nonlinear parameter space of most spectral features. Herein, we demonstrate the application of KOALA to two sets of experimental transient absorption spectra with multiple mixed spectral components. Although designed for decomposing solution-phase transient absorption data, KOALA may in principle be applied to any time-evolving spectra with multiple components.
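The linear least-squares step at the heart of this approach — fitting each transient spectrum as a linear combination of fixed component spectra to recover the kinetic traces — can be illustrated with synthetic data (the Gaussian band shapes and the exponential kinetics below are invented for the example; this is not KOALA's code):

```python
import numpy as np

# Hypothetical basis: two Gaussian component spectra on a wavelength grid
wl = np.linspace(400.0, 700.0, 301)
gauss = lambda center, width: np.exp(-0.5 * ((wl - center) / width) ** 2)
S = np.column_stack([gauss(480.0, 20.0), gauss(580.0, 30.0)])

# Simulated transient spectra: species A decays exponentially into B
t = np.linspace(0.0, 10.0, 50)
cA, cB = np.exp(-t), 1.0 - np.exp(-t)
Y = S @ np.vstack([cA, cB])          # mixed spectra, wavelengths x times

# Linear least squares recovers the concentration (kinetic) traces
coeffs, *_ = np.linalg.lstsq(S, Y, rcond=None)
```

Because the fit over the mixing coefficients is linear, only the few nonlinear band-shape parameters need iterative optimization — the efficiency argument made in the abstract.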
Shlenskii, O.F.; Murashov, G.G.
1982-05-01
In describing frontal processes of thermal decomposition of high-energy condensed substances, for example detonation, it is common practice to write the equation for the conservation of energy without any limitations on the heat propagation velocity (HPV). At the same time, it is known that in calculating fast processes of heat conduction, the assumption of an infinitely high HPV is not always justified. In order to evaluate the influence of the HPV on the results of calculations of the heat conduction process under conditions of short-term exothermic decomposition of a condensed substance, the solution of the problem of heating a semi-infinite, thermally unstable solid body with boundary conditions of the third kind on the surface has been examined.
Empirical mode decomposition analysis of random processes in the solar atmosphere
NASA Astrophysics Data System (ADS)
Kolotkov, D. Y.; Anfinogentov, S. A.; Nakariakov, V. M.
2016-08-01
Context. Coloured noisy components with a power-law spectral energy distribution are often shown to appear in solar signals of various types. Such frequency-dependent noise may indicate the operation of various randomly distributed dynamical processes in the solar atmosphere. Aims: We develop a recipe for the correct usage of the empirical mode decomposition (EMD) technique in the presence of coloured noise, allowing one to clearly distinguish between quasi-periodic oscillatory phenomena in the solar atmosphere and superimposed random background processes. For illustration, we statistically investigate extreme ultraviolet (EUV) emission intensity variations observed with SDO/AIA in the coronal (171 Å), chromospheric (304 Å), and upper photospheric (1600 Å) layers of the solar atmosphere, from a quiet-sun and a sunspot umbra region. Methods: EMD was used for the analysis because of its adaptive nature and its applicability to the processing of non-stationary and amplitude-modulated time series. For comparison with the results obtained with EMD, we use the Fourier transform technique as a benchmark. Results: We empirically revealed the statistical properties of synthetic coloured noises in EMD and suggest a scheme that allows for the detection of noisy components among the intrinsic modes obtained with EMD in real signals. Application of the method to the solar EUV signals showed that they indeed behave randomly and could be represented as a combination of different coloured noises characterised by specific values of the power-law indices of their spectral energy distributions. On the other hand, 3-min oscillations in the analysed sunspot were detected to have energies significantly above the corresponding noise level. Conclusions: The correct accounting for background frequency-dependent random processes is essential when using EMD for the analysis of oscillations in the solar atmosphere. For the quiet-sun region the power law index was found to increase
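Synthetic coloured noises of the kind used in this study, with power-law spectral energy distributions S(f) ∝ f^(-α), can be generated for testing by shaping white noise in the Fourier domain (a standard construction, not the authors' code; the function name is ours):

```python
import numpy as np

def colored_noise(n, alpha, seed=0):
    """Gaussian noise with a power-law spectral energy distribution
    S(f) ~ 1/f**alpha (alpha=0: white, 1: pink, 2: red/Brownian)."""
    rng = np.random.default_rng(seed)
    white = rng.standard_normal(n)
    f = np.fft.rfftfreq(n)
    scale = np.ones_like(f)
    scale[1:] = f[1:] ** (-alpha / 2.0)  # amplitude ~ f^(-alpha/2)
    scale[0] = 0.0                       # drop the DC component
    return np.fft.irfft(np.fft.rfft(white) * scale, n)

x = colored_noise(4096, alpha=1.0)
```

Feeding such synthetic series through EMD is how the statistical properties of the intrinsic modes under coloured noise can be calibrated before analysing real signals.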
The classical model for moment tensors
NASA Astrophysics Data System (ADS)
Tape, Walter; Tape, Carl
2013-12-01
A seismic moment tensor is a description of an earthquake source, but the description is indirect. The moment tensor describes seismic radiation rather than the actual physical process that initiates the radiation. A moment tensor `model' then ties the physical process to the moment tensor. The model is not unique, and the physical process is therefore not unique. In the classical moment tensor model, an earthquake arises from slip along a planar fault, but with the slip not necessarily in the plane of the fault. The model specifies the resulting moment tensor in terms of the slip vector, the fault normal vector and the Lamé elastic parameters, assuming isotropy. We review the classical model in the context of the fundamental lune. The lune is closely related to the space of moment tensors, and it provides a setting that is conceptually natural as well as pictorial. In addition to the classical model, we consider a crack plus double-couple model (CDC model) in which a moment tensor is regarded as the sum of a crack tensor and a double couple.
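The classical model's mapping from physical source to moment tensor — M = λ(s·n)I + μ(s nᵀ + n sᵀ) for slip vector s, unit fault normal n, and Lamé parameters λ, μ, up to a scaling by fault area and slip magnitude — is easy to write down directly (a sketch under the isotropy assumption stated in the abstract; the numerical Lamé values are arbitrary illustrative choices):

```python
import numpy as np

def moment_tensor(s, n, lam, mu):
    """Classical moment tensor for slip vector s on a fault with unit
    normal n in an isotropic medium with Lame parameters lam, mu."""
    s, n = np.asarray(s, float), np.asarray(n, float)
    return lam * np.dot(s, n) * np.eye(3) + mu * (np.outer(s, n) + np.outer(n, s))

# Slip in the fault plane (s perpendicular to n): a pure double couple
M_dc = moment_tensor([1, 0, 0], [0, 0, 1], lam=30e9, mu=30e9)
# Slip normal to the fault (an opening crack): nonzero isotropic part
M_crack = moment_tensor([0, 0, 1], [0, 0, 1], lam=30e9, mu=30e9)
```

The two cases bracket the CDC model mentioned in the abstract: in-plane slip gives a trace-free double couple, while any opening component contributes a crack tensor with nonzero trace.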
Schoenen, Dirk
2013-01-01
Decomposition of the human body is a microbial process. It is influenced by the environmental situation and depends to a high degree on the exchange of substances between the corpse and the environment. Mummification occurs at low humidity or frost. Adipocere arises from a lack of oxygen; incompletely putrefied corpses develop when there is no exchange of air or water between the corpse and the environment.
NASA Astrophysics Data System (ADS)
Gurau, Razvan
2016-09-01
This article is a preface to the SIGMA special issue ''Tensor Models, Formalism and Applications'', http://www.emis.de/journals/SIGMA/Tensor_Models.html. The issue is a collection of eight excellent, up-to-date reviews on random tensor models. The reviews combine pedagogical introductions meant for a general audience with presentations of the most recent developments in the field. This preface aims to give a condensed panoramic overview of random tensors as the natural generalization of random matrices to higher dimensions.
Modelling regulation of decomposition and related root/mycorrhizal processes in arctic tundra soils
Linkins, A.E.
1992-01-01
Since this was the final year of this project, principal activities were directed towards either collecting the data needed to complete existing incomplete data sets or writing manuscripts. Data sets on the Imnaviat Creek watershed basin are functionally complete, and data were finalized on cellulose mineralization and the impact of dust on soil organic carbon and phosphorus decomposition. Seven manuscripts were prepared and are briefly outlined.
In Vivo Generalized Diffusion Tensor Imaging (GDTI) Using Higher-Order Tensors (HOT)
Liu, Chunlei; Mang, Sarah C.; Moseley, Michael E.
2009-01-01
Generalized diffusion tensor imaging (GDTI) using higher-order tensor statistics (HOT) generalizes the technique of diffusion tensor imaging (DTI) by including the effect of non-Gaussian diffusion on the magnetic resonance imaging (MRI) signal. In GDTI-HOT, the effect of non-Gaussian diffusion is characterized by higher-order tensor statistics (i.e., the cumulant or moment tensors), such as the covariance matrix (the second-order cumulant tensor), the skewness tensor (the third-order cumulant tensor) and the kurtosis tensor (the fourth-order cumulant tensor). Previously, Monte Carlo simulations have been applied to verify the validity of this technique in reconstructing complicated fiber structures. However, no in vivo implementation of GDTI-HOT has been reported. The primary goal of this study is to establish GDTI-HOT as a feasible in vivo technique for imaging non-Gaussian diffusion. We show that the probability distribution function (PDF) of the molecular diffusion process can be measured in vivo with GDTI-HOT and visualized with 3D glyphs. By comparing GDTI-HOT to fiber structures revealed by the highest-resolution DWI possible in vivo, we show that GDTI-HOT can accurately predict multiple fiber orientations within one white matter voxel. Furthermore, through bootstrap analysis we demonstrate that in vivo measurement of HOT elements is reproducible, with a small statistical variation similar to that of DTI. PMID:19953513
NASA Astrophysics Data System (ADS)
Prothin, Sebastien; Billard, Jean-Yves; Djeridi, Henda
2016-10-01
The purpose of the present study is to gain a better understanding of the hydrodynamic instabilities of sheet cavities which develop along solid walls. The main objective is to highlight the spatial and temporal behavior of such a cavity when it develops on a NACA0015 foil at high Reynolds number. Experimental results show quasi-steady, periodic, bifurcation and aperiodic domains of cavity behavior, corresponding to σ/2α values of 5.75, 5, 4.3 and 3.58. Robust mathematical methods of signal post-processing (proper orthogonal decomposition and dynamic mode decomposition) were applied in order to emphasize the spatio-temporal nature of the flow. These techniques revealed the 3D effects due to the re-entrant jet instabilities or to a propagating shock wave mechanism at the origin of the shedding process of the cavitation cloud.
Okayama, T; Fujii, M; Yamanoue, M
1991-01-01
The effect of cooking temperature and time on the percentage colour formation, nitrite decomposition and denaturation of sarcoplasmic proteins in processed meat products was investigated in detail. The colour forming percentage increased with a rise in temperature of heating, especially at 50-60°C (P < 0.05). The percentage nitrite decomposition was promoted by the retention time of cooking rather than by the cooking temperature (P < 0.05). The percentage of sarcoplasmic proteins denatured was enhanced by heating temperature in the range 50-80°C (especially at 50-60°C) (P < 0.05). The relationship between the percentage colour formation and the percentage of sarcoplasmic proteins denatured is discussed. The SDS-PAGE patterns of the heat-treated samples revealed the components of the sarcoplasmic proteins which had been denatured.
Peng, Cong; Chai, Liyuan; Tang, Chongjian; Min, Xiaobo; Song, Yuxia; Duan, Chengshan; Yu, Cheng
2017-01-01
Heavy metals and ammonia are difficult to remove from wastewater, as they easily combine into refractory complexes. The struvite formation method (SFM) was applied for the complex decomposition and simultaneous removal of heavy metal and ammonia. The results indicated that ammonia deprivation by SFM was the key factor leading to the decomposition of the copper-ammonia complex ion. Ammonia was separated from solution as crystalline struvite, and the copper mainly co-precipitated as copper hydroxide together with struvite. Hydrogen bonding and electrostatic attraction were considered to be the main surface interactions between struvite and copper hydroxide. Hydrogen bonding was concluded to be the key factor leading to the co-precipitation. In addition, incorporation of copper ions into the struvite crystal also occurred during the treatment process.
Multi-Zone Liquid Thrust Chamber Performance Code with Domain Decomposition for Parallel Processing
NASA Technical Reports Server (NTRS)
Navaz, Homayun K.
2002-01-01
-equation turbulence model, and two-phase flow. To overcome these limitations, the LTCP code is rewritten to include the multi-zone capability with domain decomposition that makes it suitable for parallel processing, i.e., enabling the code to run every zone or sub-domain on a separate processor. This can reduce the run time by a factor of 6 to 8, depending on the problem.
Moment tensor mechanisms from Iberia
NASA Astrophysics Data System (ADS)
Stich, D.; Morales, J.
2003-12-01
New moment tensor solutions are presented for small and moderate earthquakes in Spain, Portugal and the westernmost Mediterranean Sea for the period from 2002 to present. Moment tensor inversion, to estimate focal mechanism, depth and magnitude, is applied at the Instituto Andaluz de Geofísica (IAG) in a routine manner to regional earthquakes with local magnitude larger than or equal to 3.5. Recent improvements in broadband network coverage contribute to relatively high rates of success: since the beginning of 2002, we could obtain valuable solutions, in the sense that the moment tensor synthetic waveforms adequately fit the main characteristics of the observed seismograms, for about 50% of all events of the initial selection. Results are available on-line at http://www.ugr.es/~iag/tensor/. To date, the IAG moment tensor catalogue contains 90 solutions since 1984 and gives a relatively detailed picture of the seismotectonics of the Ibero-Maghrebian region, covering also low-seismicity areas like intraplate Iberia. Solutions are concentrated in southern Spain and the Alboran Sea along the diffuse African-Eurasian plate boundary. These solutions reveal characteristics of the transition between the reverse faulting regime in Algeria and predominantly normal faulting on the Iberian Peninsula. Further, we discuss the available mechanisms for intermediate-depth events, related to subcrustal tectonic processes at the plate contact.
The processing of rotor startup signals based on empirical mode decomposition
NASA Astrophysics Data System (ADS)
Gai, Guanghong
2006-01-01
In this paper, we applied the empirical mode decomposition (EMD) method to analyse rotor startup signals, which are non-stationary and contain much additional information beyond that in stationary running signals. The methodology developed in this paper decomposes the original startup signals into intrinsic oscillation modes, or intrinsic mode functions (IMFs). Then, according to the characteristics of the rotor system, we obtained the rotating-frequency components for Bode diagram plotting from the corresponding IMFs. The method can obtain the precise critical speed without complex hardware support. The low-frequency components were extracted from these IMFs in the vertical and horizontal directions. Utilising these components, we constructed a drift locus of the rotor revolution centre, which provides significant information for the fault diagnosis of rotating machinery. We also show that the EMD method is more precise than a Fourier filter for the extraction of low-frequency components.
1981-11-12
nitrotoluenes actually represent surface-catalyzed reactions. Preliminary qualitative results for pyrolysis of ortho-nitrotoluene in the absence of hot...quantitative validity. LPHP studies of azoisopropane decomposition, chosen as a radical-forming test reaction, show the accepted literature parameters to...systematic errors or by rate control exerted by secondary reactions. (2) Support from these VLPP studies for the conclusion that some previous kinetic
Wang, Xiao-Yan; Miao, Yuan; Yu, Shuo; Chen, Xiao-Yong; Schmid, Bernhard
2014-03-01
Following studies that showed negative effects of species loss on ecosystem functioning, newer studies have started to investigate if similar consequences could result from reductions of genetic diversity within species. We tested the influence of genotypic richness and dissimilarity (plots containing one, three, six or 12 genotypes) in stands of the invasive plant Solidago canadensis in China on the decomposition of its leaf litter and associated soil animals over five monthly time intervals. We found that the logarithm of genotypic richness was positively linearly related to mass loss of C, N and P from the litter and to richness and abundance of soil animals on the litter samples. The mixing proportion of litter from two sites, but not genotypic dissimilarity of mixtures, had additional effects on measured variables. The litter diversity effects on soil animals were particularly strong under the most stressful conditions of hot weather in July: at this time richness and abundance of soil animals were higher in 12-genotype litter mixtures than even in the highest corresponding one-genotype litter. The litter diversity effects on decomposition were in part mediated by soil animals: the abundance of Acarina, when used as covariate in the analysis, fully explained the litter diversity effects on mass loss of N and P. Overall, our study shows that high genotypic richness of S. canadensis leaf litter positively affects richness and abundance of soil animals, which in turn accelerate litter decomposition and P release from litter.
Souto, X C; Gonzales, L; Reigosa, M J
1994-11-01
The development of toxicity produced by vegetable litter of four forest species (Quercus robur L., Pinus radiata D.Don., Eucalyptus globulus Labill., and Acacia melanoxylon R.Br.) was studied during the decomposition process in each of the soils where the species were found. The toxicity of the extracts was measured by the effects produced on germination and growth of Lactuca sativa L. var. Great Lakes seeds. The phenolic composition of the leaves of the four species was also studied using high-performance liquid chromatography (HPLC). It was verified that toxicity was clearly reflected in the first stages of leaf decomposition in E. globulus and A. melanoxylon, due to phytotoxic compounds liberated by their litter. After half a year of decomposition, inhibition due to the vegetable material was no longer observed, but the soils associated with these two species appeared to be responsible for the toxic effects. On the other hand, the phenolic profiles differ considerably among the four species, with greater complexity observed in the two toxic species (E. globulus and A. melanoxylon).
Mohd Nasir, Norlirubayah; Teo Ming, Ting; Ahmadun, Fakhru'l-Razi; Sobri, Shafreeza
2010-01-01
This research studied the decomposition and biodegradability enhancement of textile wastewater using a combination of electron beam irradiation and an activated sludge process. The purposes of this research are to remove pollutants through decomposition and to enhance the biodegradability of textile wastewater. The wastewater is treated using electron beam irradiation as a pre-treatment before undergoing an activated sludge process. For non-irradiated wastewater, the COD removal achieved after the activated sludge process was between 70% and 79%. The COD removal efficiency increased to 94% after irradiation of the effluent at a dose of 50 kGy. Meanwhile, the BOD(5) removal efficiencies of non-irradiated and irradiated textile wastewater were between 80% and 87%, and between 82% and 99.2%, respectively. The maximum BOD(5) removal efficiency of 99.2% was achieved at day 1 (HRT 5 days) of the process for irradiated textile wastewater. The biodegradability ratio of non-irradiated wastewater was between 0.34 and 0.61, while that of irradiated wastewater increased to between 0.87 and 0.96. The biodegradability enhancement of textile wastewater increases with increasing dose. Therefore, electron beam irradiation holds great promise for removing pollutants and for enhancing the biodegradability of textile wastewater.
3D reconstruction of tensors and vectors
Defrise, Michel; Gullberg, Grant T.
2005-02-17
Here we have developed formulations for the reconstruction of 3D tensor fields from planar (Radon) and line-integral (X-ray) projections of 3D vector and tensor fields. Much of the motivation for this work is the potential application of MRI to perform diffusion tensor tomography. The goal is to develop a theory for the reconstruction from both Radon planar and X-ray or line-integral projections because of the flexibility of MRI to obtain both of these types of projections in 3D. The development presented here for the linear tensor tomography problem provides insight into the structure of the nonlinear MRI diffusion tensor inverse problem. A particular application of tensor imaging in MRI is the potential use of cardiac diffusion tensor tomography for determining in vivo cardiac fiber structure. One difficulty in the cardiac application is the motion of the heart. This presents a need to develop future theory for tensor tomography in a motion field, which means developing a better understanding of the MRI signal for diffusion processes in a deforming medium. The techniques developed may allow the application of MRI tensor tomography to the study of the structure of fiber tracts in the brain, atherosclerotic plaque, and the spine, in addition to fiber structure in the heart. However, the relations presented are also applicable to other fields in medical imaging, such as diffraction tomography using ultrasound. The mathematics presented can also be extended to the exponential Radon transform of tensor fields and to other acquisition geometries such as cone-beam tomography of tensor fields.
NASA Astrophysics Data System (ADS)
Okamoto, T.; Takenaka, H.; Hara, T.; Nakamura, T.; Aoki, T.
2014-12-01
We analyze the "seismic" rupture process of the March 11, 2011 Tohoku-Oki earthquake (GCMT Mw 9.1) by using a non-linear multi-time-window waveform inversion method. We incorporate the effect of the near-source laterally heterogeneous structure on the synthetic Green's tensor waveforms; otherwise the analysis may result in erroneous solutions [1]. To increase the resolution we use teleseismic and strong-motion seismograms jointly, because the one-sided distribution of strong-motion stations may cause reduced resolution near the trench axis [2]. We use a 2.5D FDM [3] for teleseismic P-waves and a full 3D FDM that incorporates topography, the oceanic water layer, 3D heterogeneity and attenuation for strong motions [4]. We apply multi-GPU acceleration using the TSUBAME supercomputer at the Tokyo Institute of Technology [5]. We "validated" the Green's tensor waveforms with a point-source moment tensor inversion analysis for a small (Mw 5.8) shallow event: we confirm that the observed waveforms are reproduced well by the synthetics. The inferred slip distribution using the 2.5D and 3D Green's functions has large slips (max. 37 m) near the hypocenter and small slips near the trench (figure). Also, an isolated slip region is identified close to Fukushima prefecture. These features are similar to those obtained in our preliminary study [4]. Land-ward large slips and trench-ward small slips have also been reported by [2]. It is remarkable that we confirmed these features using data-validated Green's functions. On the other hand, very large slips are inferred close to the trench when we apply "1D" Green's functions that do not incorporate the lateral heterogeneity. Our result suggests that the trench-ward large deformation that caused the large tsunamis did not radiate strong seismic waves. Very slow slips (e.g., a tsunami earthquake), delayed slips and anelastic deformation are among the candidate physical processes for this deformation. [1] Okamoto and Takenaka, EPS, 61, e17-e20, 2009
Shigeri, Yasushi; Matsui, Tatsunobu; Watanabe, Kunihiko
2009-11-01
In order to develop a practical method for the decomposition of intact chicken feathers, a moderately thermophilic strain, Meiothermus ruber H328, which has strong keratinolytic activity, was used in a bio-type garbage-treatment machine working with an acidulocomposting process. The addition of strain H328 cells (15 g) combined with acidulocomposting in the garbage machine resulted in 70% degradation of intact chicken feathers (30 g) within 14 d. This degradation efficiency is comparable to a previous result employing the strain as a single bacterium in flask culture, and it indicates that strain H328 retains its intact-feather degradation activity in a garbage machine currently on the market.
Toluene decomposition performance and NOx by-product formation during a DBD-catalyst process.
Guo, Yufang; Liao, Xiaobin; Fu, Mingli; Huang, Haibao; Ye, Daiqi
2015-02-01
Characteristics of toluene decomposition and the formation of nitrogen oxide (NOx) by-products were investigated in a dielectric barrier discharge (DBD) reactor with and without a catalyst at room temperature and atmospheric pressure. Four metal oxides, i.e., manganese oxide (MnOx), iron oxide (FeOx), cobalt oxide (CoOx) and copper oxide (CuO), supported on Al2O3/nickel foam, were used as catalysts. It was found that introducing catalysts could improve the toluene removal efficiency, promote the decomposition of by-product ozone and enhance CO2 selectivity. In addition, NOx formation was suppressed by decreasing the specific energy density (SED), by increasing the humidity, gas flow rate or toluene concentration, or by introducing a catalyst. Among the four catalysts, CuO showed the best performance in NOx suppression. The MnOx catalyst exhibited the lowest concentration of O3 and the highest CO2 selectivity, but also the highest concentration of NOx. A possible pathway for NOx production in the DBD is discussed. The contributions of oxygen active species and hydroxyl radicals are dominant in NOx suppression.
Moran, S.C.
2003-01-01
The volcanological significance of seismicity within Katmai National Park has been debated since the first seismograph was installed in 1963, in part because Katmai seismicity consists almost entirely of high-frequency earthquakes that can be caused by a wide range of processes. I investigate this issue by determining 140 well-constrained first-motion fault-plane solutions for shallow (depth < 9 km) earthquakes occurring between 1995 and 2001 and inverting these solutions for the stress tensor in different regions within the park. Earthquakes removed by several kilometers from the volcanic axis occur in a stress field characterized by horizontally oriented σ1 and σ3 axes, with σ1 rotated slightly (12°) relative to the NUVEL-1A subduction vector, indicating that these earthquakes are occurring in response to regional tectonic forces. On the other hand, stress tensors for earthquake clusters beneath several Katmai volcanoes have vertically oriented σ1 axes, indicating that these events are occurring in response to local, not regional, processes. At Martin-Mageik, the vertically oriented σ1 is most consistent with failure under edifice loading conditions in conjunction with localized pore-pressure increases associated with hydrothermal circulation cells. At Trident-Novarupta, it is consistent with a number of possible models, including occurrence along fractures formed during the 1912 eruption that now serve as horizontal conduits for fluids and/or volatiles migrating from nearby degassing and cooling magma bodies. At Mount Katmai, it is most consistent with continued seismicity along ring-fracture systems created in the 1912 eruption, perhaps enhanced by circulating hydrothermal fluids and/or seepage from the caldera-filling lake.
Chin, Sungmin; Jurng, Jongsoo; Lee, Jae-Heon; Moon, Seung-Jae
2009-05-01
This study examined the catalytic oxidation of 1,2-dichlorobenzene (1,2-DCB) on V(2)O(5)/TiO(2) nanoparticles. The V(2)O(5)/TiO(2) nanoparticles were synthesized by the thermal decomposition of vanadium oxytripropoxide and titanium tetraisopropoxide. The effects of the synthesis conditions, such as the synthesis temperature and precursor heating temperature, were investigated. The specific surface areas of the V(2)O(5)/TiO(2) nanoparticles increased with increasing synthesis temperature and decreasing precursor heating temperature. The catalytic oxidation rate of the V(2)O(5)/TiO(2) catalyst formed by the thermal decomposition process was 46% and 95% at catalytic reaction temperatures of 150 and 200 degrees C, respectively. It was concluded that the V(2)O(5)/TiO(2) catalysts synthesized by a thermal decomposition process show good performance for 1,2-DCB decomposition at lower temperatures.
Randomized interpolative decomposition of separated representations
NASA Astrophysics Data System (ADS)
Biagioni, David J.; Beylkin, Daniel; Beylkin, Gregory
2015-01-01
We introduce an algorithm to compute tensor interpolative decomposition (dubbed CTD-ID) for the reduction of the separation rank of Canonical Tensor Decompositions (CTDs). Tensor ID selects, for a user-defined accuracy ɛ, a near optimal subset of terms of a CTD to represent the remaining terms via a linear combination of the selected terms. CTD-ID can be used as an alternative to or in combination with the Alternating Least Squares (ALS) algorithm. We present examples of its use within a convergent iteration to compute inverse operators in high dimensions. We also briefly discuss the spectral norm as a computational alternative to the Frobenius norm in estimating approximation errors of tensor ID. We reduce the problem of finding tensor IDs to that of constructing interpolative decompositions of certain matrices. These matrices are generated via randomized projection of the terms of the given tensor. We provide cost estimates and several examples of the new approach to the reduction of separation rank.
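The reduction described above, from tensor ID to the interpolative decomposition of a randomly projected matrix, can be sketched in a few lines. This is an illustrative toy version under simplifying assumptions, not the authors' CTD-ID code: the CTD of a 3-way tensor is given as factor matrices A, B, C with weights w, each rank-one term is sketched against random rank-one test tensors (so the full tensor is never formed), and the skeleton terms are chosen by pivoted QR.

```python
import numpy as np
from scipy.linalg import qr, lstsq

def ctd_id(A, B, C, w, k, n_proj=50, seed=0):
    """Select k of the R rank-one terms of a CTD (factors A, B, C,
    weights w) and express all terms as combinations of the selected
    ones, working only with random sketches of the terms."""
    rng = np.random.default_rng(seed)
    # G[i, r] = <u_i x v_i x w_i, term_r>, computed mode-by-mode
    # thanks to the separated (rank-one) structure of each term
    U = rng.standard_normal((n_proj, A.shape[0]))
    V = rng.standard_normal((n_proj, B.shape[0]))
    W = rng.standard_normal((n_proj, C.shape[0]))
    G = (U @ A) * (V @ B) * (W @ C) * w   # column r sketches term r
    _, _, piv = qr(G, pivoting=True, mode='economic')
    sel = np.sort(piv[:k])                # skeleton term indices
    coef, *_ = lstsq(G[:, sel], G)        # every term in the skeleton basis
    return sel, coef
```

As a sanity check, a CTD built with duplicated rank-one terms (separation rank 6, true rank 3) reduces to 3 terms with negligible reconstruction error; the reduced weights fold the least-squares coefficients back into the selected terms.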
Species-specific effects of elevated ozone on wetland plants and decomposition processes.
Williamson, Jennifer; Mills, Gina; Freeman, Chris
2010-05-01
Seven species from two contrasting wetlands in North Wales, UK, an upland bog and a lowland rich fen, were exposed to elevated ozone (150 ppb for 5 days and 20 ppb for 2 days per week) or low ozone (20 ppb) for four weeks in solardomes. The rich fen species were Molinia caerulea, Juncus subnodulosus, Potentilla erecta and Hydrocotyle vulgaris, and the bog species were Carex echinata, Potentilla erecta and Festuca rubra. Senescence increased significantly under elevated ozone in all seven species, but only Molinia caerulea showed a reduction in biomass under elevated ozone. Decomposition rates of plants exposed to elevated ozone, as measured by carbon dioxide efflux from dried plant material inoculated with a peat slurry, increased for Potentilla erecta, together with higher hydrolytic enzyme activities. In contrast, a decrease in enzyme activities and a non-significant decrease in carbon dioxide efflux occurred in the grass, sedge and rush species.
van der Wal, Annemieke; Geydan, Thomas D; Kuyper, Thomas W; de Boer, Wietse
2013-07-01
Filamentous fungi are critical to the decomposition of terrestrial organic matter and, consequently, in the global carbon cycle. In particular, their contribution to degradation of recalcitrant lignocellulose complexes has been widely studied. In this review, we focus on the functioning of terrestrial fungal decomposers and examine the factors that affect their activities and community dynamics. In relation to this, impacts of global warming and increased N deposition are discussed. We also address the contribution of fungal decomposer studies to the development of general community ecological concepts such as diversity-functioning relationships, succession, priority effects and home-field advantage. Finally, we indicate several research directions that will lead to a more complete understanding of the ecological roles of terrestrial decomposer fungi such as their importance in turnover of rhizodeposits, the consequences of interactions with other organisms and niche differentiation.
NASA Astrophysics Data System (ADS)
Herman, M. W.; Furlong, K. P.; Herrmann, R. B.; Benz, H.
2011-12-01
We model regional broadband data from the South Island of New Zealand to determine regional moment tensor (RMT) solutions for the mainshock and selected aftershocks of the Mw 7.0 (3 September 2010), Mw 6.1 (21 February 2011) and Mw 6.0 (13 June 2011) earthquakes that occurred near Christchurch, New Zealand. Arrival time picks from both the local and regional strong-motion and broadband data were used to determine preliminary earthquake locations using a previously published South Island velocity model. Rayleigh and Love surface wave dispersion measurements were then made from selected events to refine the velocity model in order to better match the predominantly large regional surface waves. RMT solutions were computed using the procedures of Herrmann et al. (2011). In total, we computed RMT solutions for 82 events in the magnitude range Mw 3.5-7.0. Although the crustal faulting behavior in the region has been argued to reflect a complex interaction of strike-slip and thrust faulting, the dominant faulting style in the sequence is right-lateral strike-slip (75 events), with nodal planes striking west-east to southwest-northeast. There are only five purely reverse mechanisms, at the western end of the sequence, in the vicinity of the Harper Hills blind thrust. The main Mw 7.0 rupture shows both local small-scale stepovers and one larger (~5-10 km width) right stepover near 172.40°E. Although we expect normal faulting associated with this larger stepover, during the first month after the mainshock we observe only two normal fault mechanisms and 13 strike-slip (inferred E-W right-lateral) events in the stepover region, and since that time, the sense of faulting has been dominated by right-lateral strike-slip events, perhaps indicating a sequence of short E-W fault segments in the region. The February and June 2011 events occurred along the same trend at the eastern end of the sequence, and show similar strike-slip mechanisms to the majority of events to the west, but the
Ji, Hongfei; Li, Jie; Lu, Rongrong; Gu, Rong; Cao, Lei; Gong, Xiaoliang
2016-01-01
Electroencephalogram- (EEG-) based brain-computer interface (BCI) systems usually utilize one type of change in the dynamics of brain oscillations for control, such as event-related desynchronization/synchronization (ERD/ERS), steady-state visual evoked potentials (SSVEP), and P300 evoked potentials. There is a recent trend to detect more than one of these signals in one system to create a hybrid BCI. However, in this case, the EEG data have always been divided into groups and analyzed by separate processing procedures. As a result, the interactive effects were ignored when different types of BCI tasks were executed simultaneously. In this work, we propose an improved tensor-based multiclass multimodal scheme especially for hybrid BCI, in which EEG signals are denoted as multiway tensors, a nonredundant rank-one tensor decomposition model is proposed to obtain nonredundant tensor components, a weighted Fisher criterion is designed to select multimodal discriminative patterns without ignoring the interactive effects, and a support vector machine (SVM) is extended to multiclass classification. Experimental results suggest that the proposed scheme can not only identify the different changes in the dynamics of brain oscillations induced by different types of tasks but also capture the interactive effects of simultaneous tasks properly. Therefore, it has great potential for use in hybrid BCI. PMID:26880873
Martínez-Casado, Francisco J; Ramos-Riesco, Miguel; Rodríguez-Cheda, José A; Cucinotta, Fabio; Matesanz, Emilio; Miletto, Ivana; Gianotti, Enrica; Marchese, Leonardo; Matěj, Zdeněk
2016-09-06
Lead(II) acetate [Pb(Ac)2, where Ac = acetate (CH3-COO(-))] is a very common salt with many and varied uses throughout history. However, only lead(II) acetate trihydrate [Pb(Ac)2·3H2O] has been characterized to date. In this paper, two enantiotropic polymorphs of the anhydrous salt, a novel hydrate [lead(II) acetate hemihydrate: Pb(Ac)2·1/2H2O], and two decomposition products [corresponding to two different basic lead(II) acetates: Pb4O(Ac)6 and Pb2O(Ac)2] are reported, with their structures solved for the first time. The compounds present a variety of molecular arrangements, being 2D or 1D coordination polymers. A thorough thermal analysis, by differential scanning calorimetry (DSC) and thermogravimetric analysis (TGA), was also carried out to study the behavior and thermal data of the salt and its decomposition process, in inert and oxygenated atmospheres, identifying the phases and byproducts that appear. The complex thermal behavior of lead(II) acetate is now resolved, with the finding of another hydrate, two anhydrous enantiotropic polymorphs, and several byproducts. Moreover, some of them are phosphorescent at room temperature. The compounds were studied by TGA, DSC, X-ray diffraction, and UV-vis spectroscopy.
Generalization of the tensor renormalization group approach to 3-D or higher dimensions
NASA Astrophysics Data System (ADS)
Teng, Peiyuan
2017-04-01
In this paper, a way of generalizing the tensor renormalization group (TRG) is proposed. Mathematically, a connection is established between the patterns of the tensor renormalization group and the concept of a truncation sequence in polytope geometry, and a theoretical contraction framework is proposed on this basis. Furthermore, the canonical polyadic decomposition is introduced into tensor network theory. A numerical verification of this method on the 3-D Ising model is carried out.
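The canonical polyadic decomposition that the paper brings into tensor network theory approximates a tensor by a sum of rank-one terms. A minimal alternating-least-squares fit for a 3-way tensor might look like the following sketch (a generic illustration of CPD, not the paper's TRG contraction code):

```python
import numpy as np

def cp_als(T, rank, n_iter=200, seed=0):
    """Fit the canonical polyadic model T ~ sum_r a_r o b_r o c_r
    by plain alternating least squares (unregularized sketch)."""
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    A, B, C = (rng.standard_normal((n, rank)) for n in (I, J, K))
    for _ in range(n_iter):
        # each factor update is a linear least-squares solve: the
        # einsum computes the MTTKRP, the pinv inverts the Gram matrix
        A = np.einsum('ijk,jr,kr->ir', T, B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = np.einsum('ijk,ir,kr->jr', T, A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = np.einsum('ijk,ir,jr->kr', T, A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    fit = np.einsum('ir,jr,kr->ijk', A, B, C)
    return A, B, C, fit
```

For a tensor built exactly from rank-2 factors, the fitted model recovers it to small relative error, which is the role the CPD plays as a truncation step inside a TRG-style contraction.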
Photocatalytic decomposition of bromate ion by the UV/P25-Graphene processes.
Huang, Xin; Wang, Longyong; Zhou, Jizhi; Gao, Naiyun
2014-06-15
The photocatalytic reduction of bromate (BrO3(-)) attracts much attention because BrO3(-) is a carcinogenic and genotoxic contaminant in drinking water. In this work, a TiO2-graphene composite (P25-GR) photocatalyst for BrO3(-) reduction was prepared by a facile one-step hydrothermal method; it exhibited a higher capacity for BrO3(-) removal than either P25 or GR alone. The maximum removal of BrO3(-) was observed under the optimal conditions of 1% GR doping and pH 6.8. Compared with the case without UV, the greater decrease of BrO3(-) on the composite indicates that BrO3(-) decomposition was predominantly due to photo-reduction under UV rather than adsorption. This hypothesis was supported by the decrease in [BrO3(-)] with a synchronous increase in [Br(-)] at a nearly constant amount of total bromine ([BrO3(-)] + [Br(-)]). Furthermore, improved BrO3(-) reduction on P25-GR was observed in the treatment of a tap water. However, the efficiency of BrO3(-) removal was lower than that in deionized water, probably due to the consumption of photo-generated electrons and the adsorption of natural organic matter (NOM) on graphene.
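The mass-balance argument used above, that photo-reduction converts BrO3(-) to Br(-) one-for-one so total dissolved bromine stays constant, whereas adsorption removes BrO3(-) without producing any Br(-), can be illustrated numerically. The rate constant and concentrations below are hypothetical, chosen only to show the bookkeeping:

```python
import numpy as np

# hypothetical first-order photo-reduction BrO3- -> Br-, k in 1/min
k = 0.05
t = np.linspace(0, 60, 61)            # irradiation time, min
bro3_0 = 10.0                         # initial bromate (as Br), ug/L
bro3 = bro3_0 * np.exp(-k * t)        # bromate decays
br = bro3_0 - bro3                    # bromide grows one-for-one
total = bro3 + br                     # total bromine in solution

# reduction: [BrO3-] + [Br-] is conserved at every time point;
# adsorption alone would also lower [BrO3-] but produce no Br-,
# so the total dissolved bromine would fall instead of staying flat
assert np.allclose(total, bro3_0)
```

Observing a flat [BrO3(-)] + [Br(-)] trace while [BrO3(-)] falls is thus the signature that distinguishes the photo-reduction pathway from simple uptake on the catalyst.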
The Search for a Volatile Human Specific Marker in the Decomposition Process.
Rosier, E; Loix, S; Develter, W; Van de Voorde, W; Tytgat, J; Cuypers, E
2015-01-01
In this study, a validated method using a thermal desorber combined with a gas chromatograph coupled to a mass spectrometer was used to identify the volatile organic compounds released during the decomposition of 6 human and 26 animal remains in a laboratory environment over a period of 6 months. In total, 452 compounds were identified. Among them, a human-specific marker was sought using principal component analysis. We found a combination of 8 compounds (ethyl propionate, propyl propionate, propyl butyrate, ethyl pentanoate, pyridine, diethyl disulfide, methyl(methylthio)ethyl disulfide and 3-methylthio-1-propanol) that distinguished human and pig remains from the other animal remains. Furthermore, it was possible to separate the pig remains from the human remains based on 5 esters (3-methylbutyl pentanoate, 3-methylbutyl 3-methylbutyrate, 3-methylbutyl 2-methylbutyrate, butyl pentanoate and propyl hexanoate). Further research in the field with full bodies is needed to corroborate these results and to search for one or more human-specific markers. Such markers would allow more efficient training of cadaver dogs, or portable detection devices could be developed.
Propp, W.A.; Grey, A.E.; Negus-de Wys, J.; Plum, M.M.; Haefner, D.R.
1991-09-01
This study presents a preliminary evaluation of the technical and economic feasibility of selected conceptual processes for the pyrolytic conversion of organic feedstocks or the decomposition/detoxification of hazardous wastes by coupling the process to the geopressured-geothermal resource. The report presents a detailed discussion of the resource and of each process selected for evaluation, including a technical evaluation of each. A separate section presents the economic methodology used and the evaluation of the technically viable process. A final section presents conclusions and recommendations. Three separate processes were selected for evaluation: pyrolytic conversion of biomass to petroleum-like fluids, wet air oxidation (WAO) at subcritical conditions for the destruction of hazardous waste, and supercritical water oxidation (SCWO), also for the destruction of hazardous waste. The scientific feasibility of all three processes has been previously established by various bench-scale and pilot-scale studies. For a variety of reasons detailed in the report, the SCWO process is the only one deemed to be technically feasible, although the effects of the high solids content of the geothermal brine need further study. This technology shows tremendous promise for contributing to solving the nation's energy and hazardous waste problems. However, the current economic analysis suggests that it is uneconomical at this time. 50 refs., 5 figs., 7 tabs.
Peatland microbial communities and decomposition processes in the James Bay Lowlands, Canada.
Preston, Michael D; Smemo, Kurt A; McLaughlin, James W; Basiliko, Nathan
2012-01-01
Northern peatlands are a large repository of atmospheric carbon due to an imbalance between primary production by plants and microbial decomposition. The James Bay Lowlands (JBL) of northern Ontario are a large peatland-complex but remain relatively unstudied. Climate change models predict the region will experience warmer and drier conditions, potentially altering plant community composition, and shifting the region from a long-term carbon sink to a source. We collected a peat core from two geographically separated (ca. 200 km) ombrotrophic peatlands (Victor and Kinoje Bogs) and one minerotrophic peatland (Victor Fen) located near Victor Bog within the JBL. We characterized (i) archaeal, bacterial, and fungal community structure with terminal restriction fragment length polymorphism of ribosomal DNA, (ii) estimated microbial activity using community level physiological profiling and extracellular enzymes activities, and (iii) the aeration and temperature dependence of carbon mineralization at three depths (0-10, 50-60, and 100-110 cm) from each site. Similar dominant microbial taxa were observed at all three peatlands despite differences in nutrient content and substrate quality. In contrast, we observed differences in basal respiration, enzyme activity, and the magnitude of substrate utilization, which were all generally higher at Victor Fen and similar between the two bogs. However, there was no preferential mineralization of carbon substrates between the bogs and fens. Microbial community composition did not correlate with measures of microbial activity but pH was a strong predictor of activity across all sites and depths. Increased peat temperature and aeration stimulated CO(2) production but this did not correlate with a change in enzyme activities. Potential microbial activity in the JBL appears to be influenced by the quality of the peat substrate and the presence of microbial inhibitors, which suggests the existing peat substrate will have a large
Xu, Yan; Wu, Qian; Shimatani, Yuji; Yamaguchi, Koji
2015-10-07
Due to the lack of regeneration methods, the reusability of nanofluidic chips is a significant technical challenge impeding the efficient and economic promotion of both fundamental research and practical applications on nanofluidics. Herein, a simple method for the total regeneration of glass nanofluidic chips was described. The method consists of sequential thermal treatment with six well-designed steps, which correspond to four sequential thermal and thermochemical decomposition processes, namely, dehydration, high-temperature redox chemical reaction, high-temperature gasification, and cooling. The method enabled the total regeneration of typical 'dead' glass nanofluidic chips by eliminating physically clogged nanoparticles in the nanochannels, removing chemically reacted organic matter on the glass surface and regenerating permanent functional surfaces of dissimilar materials localized in the nanochannels. The method provides a technical solution to significantly improve the reusability of glass nanofluidic chips and will be useful for the promotion and acceleration of research and applications on nanofluidics.
Trinh, Nguyen Duy; Hong, Seong-Soo
2015-07-01
Iron-based MIL-53 crystals with uniform size were successfully synthesized using a microwave-assisted solvothermal method and characterized by XRD, FE-SEM and DRS. We also investigated the photocatalytic activity of MIL-53(Fe) for the decomposition of methylene blue using H2O2 as an electron acceptor. XRD and SEM results showed that fully crystallized MIL-53(Fe) materials were obtained regardless of the preparation method. DRS results showed that the MIL-53(Fe) samples prepared by the microwave-assisted process absorbed light up to the visible region and accordingly exhibited high photocatalytic activity under visible-light irradiation. The MIL-53(Fe) catalyst prepared with two rounds of microwave irradiation showed the highest activity.
Yan, Yingjie; Liao, Qi-Nan; Ji, Feng; Wang, Wei; Yuan, Shoujun; Hu, Zhen-Hu
2017-02-01
3,5-Dinitrobenzamide has been widely used as a feed additive to control coccidiosis in poultry, and part of the added 3,5-dinitrobenzamide is excreted into wastewater and surface water. The removal of 3,5-dinitrobenzamide from wastewater and surface water has not been reported in previous studies. Highly reactive hydroxyl radicals from UV/hydrogen peroxide (H2O2) and UV/titanium dioxide (TiO2) advanced oxidation processes (AOPs) can decompose organic contaminants efficiently. In this study, the decomposition of 3,5-dinitrobenzamide in aqueous solution during the UV/H2O2 and UV/TiO2 oxidation processes was investigated. The decomposition of 3,5-dinitrobenzamide fits well with a fluence-based pseudo-first-order kinetics model. The decomposition in both oxidation processes was affected by solution pH, and was inhibited under alkaline conditions. Inorganic anions such as NO3(-), Cl(-), SO4(2-), HCO3(-), and CO3(2-) inhibited the degradation of 3,5-dinitrobenzamide during the UV/H2O2 and UV/TiO2 oxidation processes. After complete decomposition in both oxidation processes, approximately 50% of the 3,5-dinitrobenzamide was decomposed into organic intermediates, and the rest was mineralized to CO2, H2O, and other inorganic anions. Ions such as NH4(+), NO3(-), and NO2(-) were released into the aqueous solution during the degradation. The primary decomposition products of 3,5-dinitrobenzamide were identified using liquid chromatography-ion trap-time-of-flight mass spectrometry (LCMS-IT-TOF). Based on these products and the released ions, a possible decomposition pathway of 3,5-dinitrobenzamide in both the UV/H2O2 and UV/TiO2 processes was proposed.
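The fluence-based pseudo-first-order model referenced in the abstract above, C(F) = C0*exp(-k*F), can be sketched briefly; the rate constant and fluence values below are hypothetical, chosen only for illustration:

```python
import math

def pseudo_first_order(c0, k, fluence):
    """Fluence-based pseudo-first-order decay: C(F) = C0 * exp(-k * F),
    with F the UV fluence (mJ cm^-2) and k a fluence-based rate constant."""
    return c0 * math.exp(-k * fluence)

# Hypothetical values, for illustration only
c0 = 10.0   # initial concentration, arbitrary units
k = 0.012   # fluence-based rate constant, cm^2 mJ^-1
for fluence in (0, 50, 100, 200):
    print(fluence, round(pseudo_first_order(c0, k, fluence), 3))
```

On a semi-log plot, ln(C/C0) against fluence is a straight line of slope -k, which is how such rate constants are typically fitted.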
Multiple alignment tensors from a denatured protein.
Gebel, Erika B; Ruan, Ke; Tolman, Joel R; Shortle, David
2006-07-26
The structural content of the denatured state has yet to be fully characterized. In recent years, large residual dipolar couplings (RDCs) from denatured proteins have been observed under alignment conditions produced by bicelles and strained polyacrylamide gels. In this report, we describe efforts to extend our picture of the residual structure in denatured nuclease by measuring RDCs with multiple alignment tensors. Backbone amide 15N-1H RDCs were collected from denatured nuclease in 4 M urea for a total of eight RDC data sets. The RDCs were analyzed by singular value decomposition (SVD) to determine the number of independent alignment tensors present in the data. On the basis of the resultant singular values and propagated error estimates, it is clear that there are at least three independent alignment tensors. These three independent RDC data sets can be reconstituted as orthogonal linear combination (OLC) RDC data sets of the eight actually recorded. The first, second, and third OLC-RDC data sets are highly robust to the removal of any single experimental RDC data set, establishing the presence of three independent alignment tensors sampled well above the level of experimental uncertainty. The observation that the RDC data span three or more dimensions of the five-dimensional parameter space demonstrates that the ensemble average structure of denatured nuclease must be asymmetric with respect to these three orthogonal principal axes, which is not inconsistent with earlier work demonstrating that it has a nativelike topology.
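The SVD-based counting of independent alignment tensors described above can be illustrated on synthetic data; the rank-3 structure, matrix sizes, and noise level below are assumptions for illustration, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(0)
n_residues, n_datasets, true_rank = 60, 8, 3

# Synthetic RDC matrix: columns (data sets) are linear combinations of
# three independent alignment contributions, plus small measurement noise
basis = rng.standard_normal((n_residues, true_rank))
mixing = rng.standard_normal((true_rank, n_datasets))
rdc = basis @ mixing + 0.01 * rng.standard_normal((n_residues, n_datasets))

# Singular values above the noise floor count the independent tensors
# spanned by the eight data sets
s = np.linalg.svd(rdc, compute_uv=False)
n_independent = int(np.sum(s > 1.0))
print(n_independent)  # → 3
```

The left singular vectors scaled by their singular values play the role of the OLC-RDC data sets: orthogonal linear combinations of the recorded sets, ordered by how strongly they are sampled.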
Square Deal: Lower Bounds and Improved Relaxations for Tensor Recovery
2013-08-16
Recovering a low-rank tensor from incomplete information is a recurring problem in signal processing and machine learning. The most popular convex relaxation of this problem minimizes the sum of the nuclear … results to low-rank tensors is not obvious. The numerical algebra of tensors is fraught with hardness results [HL09]. For example, even computing a
A low-cost polysilicon process based on the synthesis and decomposition of dichlorosilane
NASA Technical Reports Server (NTRS)
Mccormick, J. R.; Plahutnik, F.; Sawyer, D.; Arvidson, A.; Goldfarb, S.
1982-01-01
Major process steps of a dichlorosilane based chemical vapor deposition (CVD) process for the production of polycrystalline silicon have been evaluated. While an economic analysis of the process indicates that it is not capable of meeting JPL/DOE price objectives ($14.00/kg in 1980 dollars), product price in the $19.00/kg to $25.00/kg range may be achieved. Product quality has been evaluated and ascertained to be comparable to semiconductor-grade polycrystalline silicon. Solar cells fabricated from the material are also equivalent to those fabricated from semiconductor-grade polycrystalline silicon.
Ivanova, Maria V; Isaev, Dmitry Yu; Dragoy, Olga V; Akinina, Yulia S; Petrushevskiy, Alexey G; Fedina, Oksana N; Shklovsky, Victor M; Dronkers, Nina F
2016-12-01
A growing literature is pointing towards the importance of white matter tracts in understanding the neural mechanisms of language processing, and determining the nature of language deficits and recovery patterns in aphasia. Measurements extracted from diffusion-weighted (DW) images provide comprehensive in vivo measures of local microstructural properties of fiber pathways. In the current study, we compared microstructural properties of major white matter tracts implicated in language processing in each hemisphere (these included arcuate fasciculus (AF), superior longitudinal fasciculus (SLF), inferior longitudinal fasciculus (ILF), inferior frontal-occipital fasciculus (IFOF), uncinate fasciculus (UF), and corpus callosum (CC), and corticospinal tract (CST) for control purposes) between individuals with aphasia and healthy controls and investigated the relationship between these neural indices and language deficits. Thirty-seven individuals with aphasia due to left hemisphere stroke and eleven age-matched controls were scanned using DW imaging sequences. Fractional anisotropy (FA), mean diffusivity (MD), radial diffusivity (RD), axial diffusivity (AD) values for each major white matter tract were extracted from DW images using tract masks chosen from standardized atlases. Individuals with aphasia were also assessed with a standardized language test in Russian targeting comprehension and production at the word and sentence level. Individuals with aphasia had significantly lower FA values for left hemisphere tracts and significantly higher values of MD, RD and AD for both left and right hemisphere tracts compared to controls, all indicating profound impairment in tract integrity. Language comprehension was predominantly related to integrity of the left IFOF and left ILF, while language production was mainly related to integrity of the left AF. In addition, individual segments of these three tracts were differentially associated with language production and
McKenna, Benjamin S; Theilmann, Rebecca J; Sutherland, Ashley N; Eyler, Lisa T
2015-05-01
Evidence for abnormal brain function as measured with diffusion tensor imaging (DTI) and functional magnetic resonance imaging (fMRI) and cognitive dysfunction have been observed in inter-episode bipolar disorder (BD) patients. We aimed to create a joint statistical model of white matter integrity and functional response measures in explaining differences in working memory and processing speed among BD patients. Medicated inter-episode BD (n=26; age=45.2±10.1 years) and healthy comparison (HC; n=36; age=46.3±11.5 years) participants completed 51-direction DTI and fMRI while performing a working memory task. Participants also completed a processing speed test. Tract-based spatial statistics identified common white matter tracts where fractional anisotropy was calculated from atlas-defined regions of interest. Brain responses within regions of interest activation clusters were also calculated. Least angle regression was used to fuse fMRI and DTI data to select the best joint neuroimaging predictors of cognitive performance for each group. While there was overlap between groups in which regions were most related to cognitive performance, some relationships differed between groups. For working memory accuracy, BD-specific predictors included bilateral dorsolateral prefrontal cortex from fMRI, splenium of the corpus callosum, left uncinate fasciculus, and bilateral superior longitudinal fasciculi from DTI. For processing speed, the genu and splenium of the corpus callosum and right superior longitudinal fasciculus from DTI were significant predictors of cognitive performance selectively for BD patients. BD patients demonstrated unique brain-cognition relationships compared to HC. These findings are a first step in discovering how interactions of structural and functional brain abnormalities contribute to cognitive impairments in BD.
NASA Technical Reports Server (NTRS)
Kuo, Kenneth K.; Lu, Yeu-Cherng; Chiaverini, Martin J.; Harting, George C.; Johnson, David K.; Serin, Nadir
1995-01-01
The experimental study on the fundamental processes involved in fuel decomposition and boundary-layer combustion in hybrid rocket motors is being conducted at the High Pressure Combustion Laboratory of The Pennsylvania State University. This research will provide a useful engineering technology base for the development of hybrid rocket motors as well as a fundamental understanding of the complex processes involved in hybrid propulsion. A high-pressure, 2-D slab motor has been designed, manufactured, and utilized for conducting seven test firings using HTPB fuel processed at PSU. A total of 20 fuel slabs have been received from the McDonnell Douglas Aerospace Corporation. Ten of these fuel slabs contain an array of fine-wire thermocouples for measuring solid-fuel surface and subsurface temperatures. Diagnostic instrumentation used in the tests includes high-frequency pressure transducers for measuring static and dynamic motor pressures and the embedded fine-wire thermocouples. The ultrasonic pulse-echo technique as well as a real-time X-ray radiography system have been used to obtain independent measurements of instantaneous solid-fuel regression rates.
García-Garrido, C; Sánchez-Jiménez, P E; Pérez-Maqueda, L A; Perejón, A; Criado, José M
2016-10-26
The polymer-to-ceramic transformation kinetics of two widely employed ceramic precursors, 1,3,5,7-tetramethyl-1,3,5,7-tetravinylcyclotetrasiloxane (TTCS) and polyureamethylvinylsilazane (CERASET), have been investigated using coupled thermogravimetry and mass spectrometry (TG-MS), Raman, XRD and FTIR. The thermally induced decomposition of the pre-ceramic polymer is the critical step in the synthesis of polymer-derived ceramics (PDCs), and accurate kinetic modeling is key to attaining a complete understanding of the underlying process and to attempting any behavior predictions. However, obtaining a precise kinetic description of processes of such complexity, consisting of several largely overlapping physico-chemical steps comprising the cleavage of the starting polymeric network and the release of organic moieties, is extremely difficult. Here, by using the evolved gases detected by MS as a guide, it has been possible to determine the number of steps that compose the overall process, which was subsequently resolved using a semiempirical deconvolution method based on the Fraser-Suzuki function. Such a function is more appropriate than the more usual Gaussian or Lorentzian functions since it takes into account the intrinsic asymmetry of kinetic curves. The kinetic parameters of each constituent step were then independently determined using both model-free and model-fitting procedures, and it was found that the processes mostly obey diffusion models, which can be attributed to the diffusion of the released gases through the solid matrix. The validity of the obtained kinetic parameters was tested not only by the successful reconstruction of the original experimental curves, but also by predicting the kinetic curves of the overall process yielded by different thermal schedules and by a mixed TTCS-CERASET precursor.
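The Fraser-Suzuki function used for the deconvolution above is an asymmetric peak shape, commonly written as y = h * exp[-(ln 2 / s^2) * ln^2(1 + 2 s (x - x0) / w)]; a minimal sketch with hypothetical parameter values:

```python
import math

def fraser_suzuki(x, h, x0, w, s):
    """Asymmetric Fraser-Suzuki peak: height h, position x0, width w,
    asymmetry s. Returns 0 where the logarithm's argument is non-positive."""
    arg = 1.0 + 2.0 * s * (x - x0) / w
    if arg <= 0.0:
        return 0.0
    return h * math.exp(-math.log(2.0) / s**2 * math.log(arg) ** 2)

# At the peak position the function returns its height
print(fraser_suzuki(500.0, 1.0, 500.0, 50.0, -0.3))  # → 1.0
# Negative asymmetry tails toward lower x, as DTG peaks typically do
print(round(fraser_suzuki(460.0, 1.0, 500.0, 50.0, -0.3), 3))
```

Deconvolution then fits a sum of such peaks to the overall rate curve, one peak per constituent step.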
NASA Astrophysics Data System (ADS)
Yang, Yang; Ren, R.-C.; Cai, Ming
2016-12-01
The stratosphere has been cooling under global warming, the causes of which are not yet well understood. This study applied a process-based decomposition method (CFRAM; Coupled Surface-Atmosphere Climate Feedback Response Analysis Method) to the simulation results of a Coupled Model Intercomparison Project, phase 5 (CMIP5) model (CCSM4; Community Climate System Model, version 4) to identify the radiative and non-radiative processes responsible for the stratospheric cooling. By focusing on the long-term stratospheric temperature changes between the "historical run" and the 8.5 W m-2 Representative Concentration Pathway (RCP8.5) scenario, this study demonstrates that changes in radiation due to CO2, ozone and water vapor are the main drivers of stratospheric cooling in both winter and summer. They contribute to the cooling by reducing the net radiative energy (mainly downward radiation) received by the stratospheric layer. In terms of the global average, their contributions are around -5, -1.5, and -1 K, respectively. However, the observed stratospheric cooling is much weaker than the cooling by radiative processes, because changes in atmospheric dynamic processes act to strongly mitigate the radiative cooling by yielding a roughly 4 K warming in the global average. In particular, the much stronger/weaker dynamic warming in the northern/southern winter extratropics is associated with an increase of planetary-wave activity in the northern winter hemisphere, but a slight decrease in the southern winter hemisphere, under global warming. More importantly, although radiative processes dominate the stratospheric cooling, the spatial patterns are largely determined by the non-radiative effects of dynamic processes.
Oxidative decomposition of p-nitroaniline in water by solar photo-Fenton advanced oxidation process.
Sun, Jian-Hui; Sun, Sheng-Peng; Fan, Mao-Hong; Guo, Hui-Qin; Lee, Yi-Fan; Sun, Rui-Xia
2008-05-01
The degradation of p-nitroaniline (PNA) in water by a solar photo-Fenton advanced oxidation process was investigated in this study. The effects of different reaction parameters, including the solution pH, the dosages of hydrogen peroxide and ferrous ion, the initial PNA concentration and the temperature, on the degradation of PNA were studied. The optimum conditions for the degradation of PNA in water were considered to be pH 3.0, 10 mmol L(-1) H2O2, 0.05 mmol L(-1) Fe(2+), 0.072-0.217 mmol L(-1) PNA and a temperature of 20 degrees C. Under the optimum conditions, the degradation efficiencies of PNA were more than 98% within 30 min of reaction. The degradation characteristics of PNA showed that the conjugated pi systems of the aromatic ring in PNA molecules were effectively destroyed. The experimental results indicated that the solar photo-Fenton process has advantages over the classical Fenton process, such as higher oxidation power, a wider working pH range and lower ferrous ion usage. Furthermore, the present study showed the potential use of the solar photo-Fenton process for the treatment of PNA-containing wastewater.
Decomposition of Iodinated Pharmaceuticals by UV-254 nm-assisted Advanced Oxidation Processes.
Duan, Xiaodi; He, Xuexiang; Wang, Dong; Mezyk, Stephen P; Otto, Shauna C; Marfil-Vega, Ruth; Mills, Marc A; Dionysiou, Dionysios D
2017-02-05
Iodinated pharmaceuticals, thyroxine (a thyroid hormone) and diatrizoate (an iodinated X-ray contrast medium), are among the most prescribed active pharmaceutical ingredients. Both of them have been reported to potentially disrupt thyroid homeostasis even at very low concentrations. In this study, UV-254 nm-based photolysis and photochemical processes, i.e., UV only, UV/H2O2, and UV/S2O8(2-), were evaluated for the destruction of these two pharmaceuticals. Approximately 40% of 0.5 μM thyroxine or diatrizoate was degraded through direct photolysis at a UV fluence of 160 mJ cm(-2), probably resulting from the photosensitive cleavage of C-I bonds. While the addition of H2O2 enhanced the degradation only slightly, the destruction rates of both chemicals were significantly enhanced in the UV/S2O8(2-) system, suggesting the potential vulnerability of the iodinated chemicals toward UV/S2O8(2-) treatment. Such efficient destruction also occurred in the presence of radical scavengers when biologically treated wastewater samples were used as reaction matrices. The effects of initial oxidant concentrations, solution pH, as well as the presence of natural organic matter (humic acid or fulvic acid) and alkalinity were also investigated in this study. These results provide insights for the removal of iodinated pharmaceuticals in water and/or wastewater using UV-based photochemical processes.
General route for the decomposition of InAs quantum dots during the capping process.
González, D; Reyes, D F; Utrilla, A D; Ben, T; Braza, V; Guzman, A; Hierro, A; Ulloa, J M
2016-03-29
The effect of the capping process on the morphology of InAs/GaAs quantum dots (QDs), using different GaAs-based capping layers (CLs) ranging from strain-reduction layers to strain-compensating layers, has been studied by transmission microscopy techniques. For this, we measured simultaneously the height and diameter of buried and uncapped QDs, covering populations of hundreds of QDs that are statistically reliable. First, the uncapped QD population evolves in all cases from a pyramidal shape into a more homogeneous distribution of buried QDs with a spherical-dome shape, despite the different mechanisms implicated in QD capping. Second, the shape of the buried QDs depends only on the final QD size, where the radius of curvature is a function of the base diameter, independently of the CL composition and growth conditions. An asymmetric evolution of the QDs' morphology takes place, in which the QD height and base diameter are modified by the amount required to adopt a similar stable shape characterized by an average aspect ratio of 0.21. Our results contradict the traditional model of QD material redistribution from the apex to the base and point to a different, universal behavior of the overgrowth processes in self-organized InAs QDs.
Batakliev, Todor; Georgiev, Vladimir; Anachkov, Metody; Rakovsky, Slavcho; Zaikov, Gennadi E
2014-06-01
Catalytic ozone decomposition is of great significance because ozone is a toxic substance commonly found or generated in human environments (aircraft cabins, offices with photocopiers, laser printers, sterilizers). Considerable work on ozone decomposition has been reported in the literature. This review provides a comprehensive summary of the literature, concentrating on analysis of the physico-chemical properties, synthesis and catalytic decomposition of ozone. This is supplemented by a review of kinetics and catalyst characterization which ties together the previously reported results. Noble metals and oxides of transition metals have been found to be the most active substances for ozone decomposition. The high price of precious metals has stimulated the use of metal oxide catalysts, particularly catalysts based on manganese oxide. It has been determined that the kinetics of ozone decomposition is first order. A mechanism of the reaction of catalytic ozone decomposition is discussed, based on detailed spectroscopic investigations of the catalytic surface showing the existence of peroxide and superoxide surface intermediates.
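First-order ozone decay, as concluded above, implies C(t) = C0 * e^(-kt) with a half-life of ln(2)/k; a brief sketch with a hypothetical rate constant:

```python
import math

def ozone_remaining(c0, k, t):
    """First-order decay: C(t) = C0 * exp(-k * t)."""
    return c0 * math.exp(-k * t)

def half_life(k):
    """Half-life of a first-order process: t_1/2 = ln(2) / k."""
    return math.log(2.0) / k

k = 0.05  # hypothetical first-order rate constant, s^-1
print(round(half_life(k), 2))                             # → 13.86
print(round(ozone_remaining(100.0, k, half_life(k)), 1))  # → 50.0
```

A practical consequence of first-order kinetics is that the half-life is independent of the initial ozone concentration.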
Souza-Corrêa, J A; Ridenti, M A; Oliveira, C; Araújo, S R; Amorim, J
2013-03-21
Mass spectrometry was used to monitor neutral chemical species from sugar cane bagasse that could volatilize during the bagasse ozonation process. Lignin fragments and some radicals liberated by direct ozone reaction with the biomass structure were detected. Ozone density was monitored during the ozonation by optical absorption spectroscopy. The optical results indicated that the ozone interaction with the bagasse material was better for bagasse particle sizes less than or equal to 0.5 mm. Both techniques have shown that the best condition for the ozone diffusion in the bagasse was at 50% of its moisture content. In addition, Fourier transform infrared spectroscopy (FTIR) and scanning electron microscopy (SEM) were employed to analyze the lignin bond disruptions and morphology changes of the bagasse surface that occurred due to the ozonolysis reactions as well. Appropriate chemical characterization of the lignin content in bagasse before and after its ozonation was also carried out.
Fujii, Hidemichi; Nakagawa, Kei; Kagabu, Makoto
2016-11-01
Groundwater nitrate pollution is one of the most prevalent water-related environmental problems worldwide. The objective of this study is to identify the determinants of nitrogen pollutant changes with a focus on the nitrogen generation process. The novelty of our research framework is to cost-effectively identify the factors involved in nitrogen pollutant generation using public data. This study focuses on three determinant factors: (1) nitrogen intensity changes, (2) structural changes, and (3) scale changes. This study empirically analyses three sectors, including crop production, farm animals, and the household, on the Shimabara Peninsula in Japan. Our results show that the nitrogen supply from crop production sectors has decreased because the production has been scaled down and shifted towards lower nitrogen intensive crops. In the farm animal sector, the nitrogen supply has also been successfully reduced due to scaling-down efforts. Households have decreased the nitrogen supply by diffusion of integrated septic tank and sewerage systems.
NASA Astrophysics Data System (ADS)
Azimi-Sadjadi, Mahmood R.; Pezeshki, Ali; Wade, Robert L.
2004-09-01
Sparse array processing methods are typically used to improve the spatial resolution of sensor arrays for the estimation of direction of arrival (DOA). The fundamental assumption behind these methods is that the signals received by the sparse sensors (or a group of sensors) are coherent. However, coherence may vary significantly with changes in environmental, terrain, and operating conditions. In this paper, canonical correlation analysis is used to study the variations in coherence between pairs of sub-arrays in a sparse array problem. The data set for this study is a subset of an acoustic signature data set acquired from the US Army TACOM-ARDEC, Picatinny Arsenal, NJ. This data set was collected using three wagon-wheel type arrays with five microphones. The results show that in nominal operating conditions, i.e., no extreme wind noise or masking effects by trees, buildings, etc., the signals collected at different sensor arrays are indeed coherent even at distant node separation.
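Canonical correlation analysis between two sub-arrays, as used above, can be sketched on synthetic data; the common-source model, array size, and noise level below are assumptions for illustration, not details of the TACOM-ARDEC data:

```python
import numpy as np

rng = np.random.default_rng(1)
n_samples = 2000

# A common acoustic source observed, with independent noise, at two
# hypothetical 5-microphone sub-arrays (channel gains drawn at random)
source = np.sin(0.1 * np.arange(n_samples))
sub_a = np.outer(source, rng.standard_normal(5)) + 0.5 * rng.standard_normal((n_samples, 5))
sub_b = np.outer(source, rng.standard_normal(5)) + 0.5 * rng.standard_normal((n_samples, 5))

def canonical_correlations(x, y):
    """Canonical correlations between the column spaces of x and y,
    computed via QR orthonormalization followed by an SVD."""
    qx, _ = np.linalg.qr(x - x.mean(axis=0))
    qy, _ = np.linalg.qr(y - y.mean(axis=0))
    return np.linalg.svd(qx.T @ qy, compute_uv=False)

rho = canonical_correlations(sub_a, sub_b)
# The leading canonical correlation is close to 1: the sub-arrays are coherent
print(round(float(rho[0]), 2))
```

When the sub-arrays share no common source, all canonical correlations stay near zero, which is how coherence loss under adverse conditions would show up.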
Morphology and phase modifications of MoO3 obtained by metallo-organic decomposition processes
Barros Santos, Elias de; Martins de Souza e Silva, Juliana; Odone Mazali, Italo
2010-11-15
Molybdenum oxide samples were prepared at different temperatures and under different atmospheric conditions by metallo-organic decomposition processes, and were characterized by XRD, SEM and DRS UV/Vis and Raman spectroscopies. Variation in the synthesis conditions resulted in solids with different morphologies and oxygen vacancy concentrations. Intense Raman bands characteristic of crystalline orthorhombic α-MoO3, occurring at 992 cm(-1) and 820 cm(-1), are observed, and their shifts can be related to the differences in the structure of the solids obtained. The sample obtained under nitrogen flow at 1073 K is a phase mixture of orthorhombic α-MoO3 and monoclinic β-MoO3. The characterization results suggest that the molybdenum oxide samples are non-stoichiometric and are described as MoOx with x < 2.94. Variations in the reaction conditions make it possible to tune the number of oxygen defects and the band gap of the final material.
NASA Astrophysics Data System (ADS)
Gómez-Núñez, Alberto; Roura, Pere; López, Concepción; Vilà, Anna
2016-09-01
Four inks for the production of ZnO semiconducting films have been prepared with zinc acetate dihydrate as the precursor salt and one of the following aminoalcohols as stabilizing agent: aminopropanol (APr), aminomethyl butanol (AMB), aminophenol (APh) and aminobenzyl alcohol (AB). Their thermal decomposition has been analyzed in situ by thermogravimetric analysis (TGA), differential scanning calorimetry (DSC) and evolved gas analysis (EGA), whereas the solid product has been analyzed ex situ by X-ray diffraction (XRD) and infrared spectroscopy (IR). Although, except for the APh ink, crystalline ZnO is already obtained at 300 °C, the films contain an organic residue that evolves at higher temperature in the form of a large variety of nitrogen-containing cyclic compounds. The results indicate that APr can be a better stabilizing agent than ethanolamine (EA): it gives larger ZnO crystal sizes with similar carbon content. However, a common drawback of all the amino stabilizers (EA included) is that nitrogen atoms are not completely removed from the ZnO film even at the highest temperature of our experiments (600 °C).
Wang, Yongjiang; Witarsa, Freddy
2016-11-01
An integrated model was developed by associating separate degradation kinetics with each of an array of degradations during a decomposition process, which is a novel aspect of this study. The raw composting material was divided into soluble, hemi-/cellulose, lignin, NBVS, ash, water, and free air-space fractions. Considering their specific capabilities for expressing certain degradation phenomena, Contois, Tessier (an extension of Monod kinetics), and first-order kinetics were employed to calculate the biochemical rates. It was found that the degradation of soluble substrate was relatively fast, reaching a maximum rate of about 0.4 per hour. The hydrolysis of lignin was rate-limiting, with a maximum rate of about 0.04 per hour. The dry-based peak concentrations of soluble, hemi-/cellulose and lignin degraders were about 0.9, 0.2 and 0.3 kg m(-3), respectively. The model, as a platform, allows degradation simulation of composting material that can be separated into the different components used in this study.
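The kinetic forms named above (first-order, Monod/Tessier-type saturation, and Contois) can be compared side by side; the parameter values below are hypothetical, chosen only for illustration:

```python
def first_order(k, s):
    """Rate proportional to the remaining substrate S."""
    return k * s

def monod(mu_max, ks, s):
    """Monod kinetics: the rate saturates with substrate concentration S."""
    return mu_max * s / (ks + s)

def contois(mu_max, kx, s, x):
    """Contois kinetics: the half-saturation term scales with biomass X."""
    return mu_max * s / (kx * x + s)

s, x = 10.0, 2.0  # substrate and biomass, arbitrary units
print(first_order(0.4, s))                 # → 4.0
print(round(monod(0.4, 5.0, s), 3))        # → 0.267
print(round(contois(0.4, 5.0, s, x), 2))   # → 0.2
```

The practical difference is that Contois rates fall as the degrader population grows relative to substrate, which suits crowded solid-phase systems such as composting.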
Fuel decomposition and boundary-layer combustion processes of hybrid rocket motors
NASA Technical Reports Server (NTRS)
Chiaverini, Martin J.; Harting, George C.; Lu, Yeu-Cherng; Kuo, Kenneth K.; Serin, Nadir; Johnson, David K.
1995-01-01
Using a high-pressure, two-dimensional hybrid motor, an experimental investigation was conducted on fundamental processes involved in hybrid rocket combustion. HTPB (Hydroxyl-terminated Polybutadiene) fuel cross-linked with diisocyanate was burned with GOX under various operating conditions. Large-amplitude pressure oscillations were encountered in earlier test runs. After identifying the source of instability and decoupling the GOX feed-line system and combustion chamber, the pressure oscillations were drastically reduced from +/-20% of the localized mean pressure to an acceptable range of +/-1.5%. Embedded fine-wire thermocouples indicated that the surface temperature of the burning fuel was around 1000 K, depending upon axial location and operating conditions. Also, except near the leading-edge region, the subsurface thermal wave profiles in the upstream locations are thicker than those in the downstream locations, since the solid-fuel regression rate, in general, increases with distance along the fuel slab. The recovered solid fuel slabs in the laminar portion of the boundary layer exhibited smooth surfaces, indicating the existence of a liquid melt layer on the burning fuel surface in the upstream region. After the transition section, which displayed distinct transverse striations, the surface roughness pattern became quite random and very pronounced in the downstream turbulent boundary-layer region. Both real-time X-ray radiography and ultrasonic pulse-echo techniques were used to determine the instantaneous web thickness burned and instantaneous solid-fuel regression rates over certain portions of the fuel slabs. Globally averaged and axially dependent but time-averaged regression rates were also obtained and presented.
NASA Astrophysics Data System (ADS)
Aragón, Roxana; Montti, Lia; Ayup, María Marta; Fernández, Romina
2014-01-01
Invasions of exotic tree species can cause profound changes in community composition and structure, and may even cause legacy effect on nutrient cycling via litter production. In this study, we compared leaf litter decomposition of two invasive exotic trees (Ligustrum lucidum and Morus sp.) and two dominant native trees (Cinnamomum porphyria and Cupania vernalis) in native and invaded (Ligustrum-dominated) forest stands in NW Argentina. We measured leaf attributes and environmental characteristics in invaded and native stands to isolate the effects of litter quality and habitat characteristics. Species differed in their decomposition rates and, as predicted by the different species colonization status (pioneer vs. late successional), exotic species decayed more rapidly than native ones. Invasion by L. lucidum modified environmental attributes by reducing soil humidity. Decomposition constants (k) tended to be slightly lower (-5%) for all species in invaded stands. High SLA, low tensile strength, and low C:N of Morus sp. distinguish this species from the native ones and explain its higher decomposition rate. Contrary to our expectations, L. lucidum leaf attributes were similar to those of native species. Decomposition rates also differed between the two exotic species (35% higher in Morus sp.), presumably due to leaf attributes and colonization status. Given the high decomposition rate of L. lucidum litter (more than 6 times that of natives) we expect an acceleration of nutrient circulation at ecosystem level in Ligustrum-dominated stands. This may occur in spite of the modified environmental conditions that are associated with L. lucidum invasion.
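The decomposition constants (k) compared above are conventionally obtained from the single-exponential (Olson) decay model, M(t) = M0 * e^(-kt); a minimal sketch with hypothetical litterbag data, not values from the study:

```python
import math

def mass_remaining(m0, k, t):
    """Single-exponential (Olson) decay model: M(t) = M0 * exp(-k * t)."""
    return m0 * math.exp(-k * t)

def fit_k(m0, m_t, t):
    """Recover k from a single mass-remaining observation."""
    return -math.log(m_t / m0) / t

# Hypothetical litterbag observation: 60% mass remaining after 1 year
k = fit_k(100.0, 60.0, 1.0)
print(round(k, 3))  # → 0.511
# A 5% lower k, as reported for invaded stands, leaves slightly more
# mass after one year
print(round(mass_remaining(100.0, 0.95 * k, 1.0), 1))
```

With multiple harvest dates, k is instead fitted by regressing ln(M/M0) on time, but the model is the same.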
McMillen, D.F.; Golden, D.M.
1981-11-12
Very Low-Pressure Pyrolysis studies of 2,4-dinitrotoluene decomposition resulted in decomposition rates consistent with log (k/s) = 12.1 - 43.9/2.3 RT. These results support the conclusion that previously reported 'anomalously' low Arrhenius parameters for the homogeneous gas-phase decomposition of ortho-nitrotoluene actually represent surface-catalyzed reactions. Preliminary qualitative results for pyrolysis of ortho-nitrotoluene in the absence of hot reactor walls, using the Laser-Powered Homogeneous Pyrolysis (LPHP) technique, provide further support for this conclusion: only products resulting from Ph-NO2 bond scission were observed; no products indicating complex intramolecular oxidation-reduction or elimination processes could be detected. The LPHP technique was successfully modified to use a pulsed laser and a heated flow system, so that the technique becomes suitable for the study of surface-sensitive, low-vapor-pressure substrates such as TNT. The validity and accuracy of the technique were demonstrated by applying it to the decomposition of substances whose Arrhenius parameters for decomposition were already well known. IR-fluorescence measurements show that the temperature-space-time behavior under the present LPHP conditions is in agreement with expectations and with the requirements which must be met if the method is to have quantitative validity. LPHP studies of azoisopropane decomposition, chosen as a radical-forming test reaction, show the accepted literature parameters to be substantially in error and indicate that the correct values are in all probability much closer to those measured in this work: log (k/s) = 13.9 - 41.2/2.3 RT.
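As a quick illustration of the reported Arrhenius parameters (log A = 12.1, Ea = 43.9 kcal/mol), the rate constant can be evaluated at any temperature; the 1000 K used below is an arbitrary illustrative choice, not a condition from the paper:

```python
import math

# Arrhenius form used in the abstract: log10(k / s^-1) = logA - Ea / (2.303 R T),
# with Ea in kcal/mol for 2,4-dinitrotoluene decomposition.
R = 1.987e-3  # gas constant, kcal mol^-1 K^-1

def rate_constant(T, logA=12.1, Ea=43.9):
    """First-order rate constant (s^-1) at temperature T (K)."""
    return 10 ** (logA - Ea / (2.303 * R * T))

# Illustrative evaluation temperature (not from the paper).
k_1000 = rate_constant(1000.0)
print(f"{k_1000:.0f} s^-1")  # on the order of a few hundred per second
```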
Chao, T.T.; Sanzolone, R.F.
1992-01-01
Sample decomposition is a fundamental and integral step in the procedure of geochemical analysis. It is often the limiting factor for sample throughput, especially with the recent application of fast, modern multi-element measurement instrumentation. The complexity of geological materials makes it necessary to choose a sample decomposition technique that is compatible with the specific objective of the analysis. When selecting a decomposition technique, consideration should be given to the chemical and mineralogical characteristics of the sample, the elements to be determined, precision and accuracy requirements, sample throughput, the technical capability of personnel, and time constraints. This paper addresses these concerns and discusses the attributes and limitations of many techniques of sample decomposition, along with examples of their application to geochemical analysis. The chemical properties of reagents in their function as decomposition agents are also reviewed. The section on acid dissolution techniques addresses the various inorganic acids that are used individually or in combination in both open and closed systems. Fluxes used in sample fusion are discussed. The promising microwave-oven technology and the emerging field of automation are also examined. A section on applications highlights the use of decomposition techniques for the determination of Au, platinum group elements (PGEs), Hg, U, hydride-forming elements, rare earth elements (REEs), and multi-elements in geological materials. Partial dissolution techniques used for geochemical exploration, which have been treated in detail elsewhere, are not discussed here; nor are fire-assaying for noble metals and decomposition techniques for X-ray fluorescence or nuclear methods.
Lee, Joo Won; Thomas, Leonard C; Jerrell, John; Feng, Hao; Cadwallader, Keith R; Schmidt, Shelly J
2011-01-26
High performance liquid chromatography (HPLC) on a calcium-form cation exchange column with refractive index and photodiode array detection was used to investigate thermal decomposition as the cause of the loss of crystalline structure in sucrose. Crystalline sucrose structure was removed using a standard differential scanning calorimetry (SDSC) method (fast heating method) and a quasi-isothermal modulated differential scanning calorimetry (MDSC) method (slow heating method). In the fast heating method, the initial decomposition components, glucose (0.365%) and 5-HMF (0.003%), were found in the sucrose sample coincident with the onset temperature of the first endothermic peak. In the slow heating method, glucose (0.411%) and 5-HMF (0.003%) were found in the sucrose sample coincident with the holding time (50 min) at which the reversing heat capacity began to increase. In both methods, even before the crystalline structure in sucrose was completely removed, unidentified thermal decomposition components were formed. These results prove not only that the loss of crystalline structure in sucrose is caused by thermal decomposition, but also that it is achieved via a time-temperature combination process. This knowledge is important for quality assurance purposes and for developing new sugar-based food and pharmaceutical products. In addition, this research provides new insights into the caramelization process, showing that caramelization can occur under low-temperature (significantly below the literature-reported melting temperature), albeit longer-time, conditions.
Dirac tensor with heavy photon
Bytev, V. V.; Kuraev, E. A.; Scherbakova, E. S.
2013-03-15
For the large-angle hard-photon emission by initial leptons in the process of high-energy annihilation of e{sup +}e{sup -} to hadrons, the Dirac tensor is obtained by taking the lowest-order radiative corrections into account. The case of large-angle emission of two hard photons by initial leptons is considered. In the final result, the kinematic case of collinear emission of hard photons and soft virtual and real photons is included; it can be used for the construction of Monte-Carlo generators.
Diffusion Tensor Image Registration Using Hybrid Connectivity and Tensor Features
Wang, Qian; Yap, Pew-Thian; Wu, Guorong; Shen, Dinggang
2014-01-01
Most existing diffusion tensor imaging (DTI) registration methods estimate structural correspondences based on voxelwise matching of tensors. The rich connectivity information that is given by DTI, however, is often neglected. In this article, we propose to integrate complementary information given by connectivity features and tensor features for improved registration accuracy. To utilize connectivity information, we place multiple anchors representing different brain anatomies in the image space, and define the connectivity features for each voxel as the geodesic distances from all anchors to the voxel under consideration. The geodesic distance, which is computed in relation to the tensor field, encapsulates information of brain connectivity. We also extract tensor features for every voxel to reflect the local statistics of tensors in its neighborhood. We then combine both connectivity features and tensor features for registration of tensor images. From the images, landmarks are selected automatically and their correspondences are determined based on their connectivity and tensor feature vectors. The deformation field that deforms one tensor image to the other is iteratively estimated and optimized according to the landmarks and their associated correspondences. Experimental results show that, by using connectivity features and tensor features simultaneously, registration accuracy is increased substantially compared with the cases using either type of features alone. PMID:24293159
Jha, Abhinav K; Barrett, Harrison H; Frey, Eric C; Clarkson, Eric; Caucci, Luca; Kupinski, Matthew A
2015-09-21
Recent advances in technology are enabling a new class of nuclear imaging systems consisting of detectors that use real-time maximum-likelihood (ML) methods to estimate the interaction position, deposited energy, and other attributes of each photon-interaction event and store these attributes in a list format. This class of systems, which we refer to as photon-processing (PP) nuclear imaging systems, can be described by a fundamentally different mathematical imaging operator that allows processing of the continuous-valued photon attributes on a per-photon basis. Unlike conventional photon-counting (PC) systems that bin the data into images, PP systems do not have any binning-related information loss. Mathematically, while PC systems have an infinite-dimensional null space due to dimensionality considerations, PP systems do not necessarily suffer from this issue. Therefore, PP systems have the potential to provide improved performance in comparison to PC systems. To study these advantages, we propose a framework to perform the singular-value decomposition (SVD) of the PP imaging operator. We use this framework to perform the SVD of operators that describe a general two-dimensional (2D) planar linear shift-invariant (LSIV) PP system and a hypothetical continuously rotating 2D single-photon emission computed tomography (SPECT) PP system. We then discuss two applications of the SVD framework. The first application is to decompose the object being imaged by the PP imaging system into measurement and null components. We compare these components to the measurement and null components obtained with PC systems. In the process, we also present a procedure to compute the null functions for a PC system. The second application is designing analytical reconstruction algorithms for PP systems. The proposed analytical approach exploits the fact that PP systems acquire data in a continuous domain to estimate a continuous object function. The approach is parallelizable and
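The measurement/null split described above can be sketched for a discretized operator via the SVD: the measurement component lives in the row space of the operator, and the null component maps to zero data. The random operator and object below are placeholders, not a model of a PP system:

```python
import numpy as np

# Sketch: split an object into measurement and null components using the SVD
# of a discretized imaging operator H. Random H and f are purely illustrative;
# a PP-system operator would act on continuous photon attributes.
rng = np.random.default_rng(0)
m, n = 20, 50                      # fewer measurements than object coefficients
H = rng.standard_normal((m, n))
f = rng.standard_normal(n)         # discretized object

U, s, Vt = np.linalg.svd(H, full_matrices=False)
f_meas = Vt.T @ (Vt @ f)           # projection onto the row space of H
f_null = f - f_meas                # component invisible to the system

assert np.allclose(H @ f_null, 0.0, atol=1e-10)    # null component gives no data
assert np.allclose(H @ f, H @ f_meas, atol=1e-10)  # data come from f_meas alone
```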
Sharma, Sandeep Kumar; Roudaut, Gaëlle; Fabing, Isabelle; Duplâtre, Gilles
2010-11-14
The triplet state of positronium, o-Ps, is used as a probe to characterize a starch-20% w/w sucrose matrix as a function of temperature (T). A two-step decomposition (of sucrose, and then starch) starts at 440 K as shown by a decrease in the o-Ps intensity (I(3)) and lifetime (τ(3)), the latter also disclosing the occurrence of a glass transition. Upon sucrose decomposition, the matrix acquires properties (reduced size and density of nanoholes) that are different from those of pure starch. A model is successfully established, describing the variations of both I(3) and τ(3) with T and yields a glass transition temperature, T(g) = (446 ± 2) K, in spite of the concomitant sucrose decomposition. Unexpectedly, the starch volume fraction (as probed through thermal gravimetry) decreases with T at a higher rate than the free volume fraction (as probed through PALS).
ERIC Educational Resources Information Center
Napier, J.
1988-01-01
Outlines the role of the main organisms involved in woodland decomposition and discusses some of the variables affecting the rate of nutrient cycling. Suggests practical work that may be of value to high school students either as standard practice or long-term projects. (CW)
NASA Astrophysics Data System (ADS)
Viswanathan, R.; Thompson, Donald L.; Raff, L. M.
1984-05-01
The rates and mechanism for the unimolecular decomposition of SiH4 have been investigated using quasiclassical trajectory methods to follow the dynamics and Metropolis sampling procedures to average over the initial SiH4 phase space. The semiempirical potential-energy surface has been fitted to scaled SCF calculations and to a variety of experimental data. It gives the correct SiH4 equilibrium structure, reaction endothermicities, and bond energies for SiH4, SiH3, and SiH2. All hydrogen atoms are treated in an equivalent fashion. Excellent first-order decay plots are obtained for the microcanonical rates for the total SiH4 decomposition as well as for the separate decomposition channels. The low-energy pathway is found to be a three-center elimination to form SiH2+H2. The decomposition channel forming SiH3+H becomes important only at internal SiH4 energies in excess of 5.0 eV. Comparison of computed falloff curves with RRKM calculations fitted to experimental results indicates that the critical threshold energy for the three-center reaction lies in the range 2.10
Zou, Min; Wang, Xin; Jiang, Xiaohong; Lu, Lude
2014-05-01
The catalyzed thermal decomposition process of ammonium perchlorate (AP) over neodymium oxide (Nd{sub 2}O{sub 3}) was investigated. The catalytic performances of nanometer-sized Nd{sub 2}O{sub 3} and micrometer-sized Nd{sub 2}O{sub 3} were evaluated by differential scanning calorimetry (DSC). Contrary to common expectation, catalysts of different sizes showed nearly identical catalytic activities. Based on the structural and morphological variation of the catalysts during the reaction, combined with mass spectrum analyses and studies of the unmixed style, a new understanding of this catalytic process was proposed. We believe that the newly formed neodymium oxychloride (NdOCl) was the real catalytic species in the overall thermal decomposition of AP over Nd{sub 2}O{sub 3}. Meanwhile, the “self-distributed” process occurring within the reaction also contributed to the improvement of the overall catalytic activity. This work is of great value in understanding the roles of micrometer-sized catalysts used in heterogeneous reactions, especially solid–solid reactions which can generate a large quantity of gaseous species. - Graphical abstract: In-situ and self-distributed reaction process in the thermal decomposition of AP catalyzed by Nd{sub 2}O{sub 3}. - Highlights: • Micro- and nano-Nd{sub 2}O{sub 3} for catalytic thermal decomposition of AP. • No essential differences in their catalytic performances. • Structural and morphological variation of the catalysts reveals the catalytic mechanism. • This catalytic process is an “in-situ and self-distributed” one.
Superconducting tensor gravity gradiometer
NASA Technical Reports Server (NTRS)
Paik, H. J.
1981-01-01
The employment of superconductivity and other material properties at cryogenic temperatures to fabricate a sensitive, low-drift gravity gradiometer is described. The device yields a reduction in noise of four orders of magnitude over room-temperature gradiometers, and direct summation and subtraction of signals from accelerometers in varying orientations are possible with superconducting circuitry. Additional circuits permit determination of the linear and angular acceleration vectors independent of the measurement of the gravity gradient tensor. A dewar flask capable of maintaining helium in a liquid state for a year's duration is under development by NASA, and a superconducting tensor gravity gradiometer for the NASA Geodynamics Program is intended for a LEO polar trajectory to measure the harmonic expansion coefficients of the earth's gravity field up to order 300.
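For intuition about the quantity such an instrument measures: outside the source, the gravity gradient tensor is symmetric and traceless. A small sketch for a point mass (the mass and geometry are roughly Earth-at-LEO numbers chosen for illustration, not instrument specifications):

```python
import numpy as np

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def gravity_gradient(M, x):
    """Gravity gradient tensor of a point mass M, at displacement x from it."""
    x = np.asarray(x, dtype=float)
    r = np.linalg.norm(x)
    # Gamma_ij = G M (3 x_i x_j - r^2 delta_ij) / r^5
    return G * M * (3.0 * np.outer(x, x) - r**2 * np.eye(3)) / r**5

T = gravity_gradient(5.97e24, [7.0e6, 0.0, 0.0])  # illustrative LEO-like geometry
assert np.allclose(T, T.T)        # symmetric
assert abs(np.trace(T)) < 1e-15   # traceless in source-free space (Laplace)
```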
Direct Solution of the Chemical Master Equation Using Quantized Tensor Trains
Kazeev, Vladimir; Khammash, Mustafa; Nip, Michael; Schwab, Christoph
2014-01-01
The Chemical Master Equation (CME) is a cornerstone of stochastic analysis and simulation of models of biochemical reaction networks. Yet direct solutions of the CME have remained elusive. Although several approaches overcome the infinite-dimensional nature of the CME through projections or other means, a common feature of proposed approaches is their susceptibility to the curse of dimensionality, i.e. the exponential growth in memory and computational requirements in the number of problem dimensions. We present a novel approach that has the potential to “lift” this curse of dimensionality. The approach is based on the use of the recently proposed Quantized Tensor Train (QTT) formatted numerical linear algebra for the low-parametric numerical representation of tensors. The QTT decomposition admits both algorithms for basic tensor arithmetic with complexity scaling linearly in the dimension (number of species) and sub-linearly in the mode size (maximum copy number), and a numerical tensor rounding procedure which is stable and quasi-optimal. We show how the CME can be represented in QTT format, then use the exponentially-converging hp-discontinuous Galerkin discretization in time to reduce the CME evolution problem to a set of QTT-structured linear equations to be solved at each time step using an algorithm based on Density Matrix Renormalization Group (DMRG) methods from quantum chemistry. Our method automatically adapts the “basis” of the solution at every time step, guaranteeing that it is large enough to capture the dynamics of interest but no larger than necessary, as this would increase the computational complexity. Our approach is demonstrated by applying it to three different examples from systems biology: an independent birth-death process, an example of an enzymatic futile cycle, and a stochastic switch model. The numerical results on these examples demonstrate that the proposed QTT method achieves dramatic speedups and several orders of magnitude storage
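The core QTT idea, reshaping a length-2^d vector into d binary modes and compressing by successive SVDs, can be sketched as a bare TT-SVD. This omits the rank rounding, structured arithmetic, and DMRG solver that real QTT codes rely on:

```python
import numpy as np

def tt_svd(v, d, eps=1e-12):
    """Factor a length-2^d vector into d binary-mode tensor-train (QTT) cores."""
    cores, r = [], 1
    c = v.reshape(1, -1)
    for _ in range(d - 1):
        c = c.reshape(r * 2, -1)
        U, s, Vt = np.linalg.svd(c, full_matrices=False)
        keep = max(1, int(np.sum(s > eps * s[0])))   # drop negligible ranks
        cores.append(U[:, :keep].reshape(r, 2, keep))
        c = s[:keep, None] * Vt[:keep]
        r = keep
    cores.append(c.reshape(r, 2, 1))
    return cores

def tt_to_vector(cores):
    """Contract the TT cores back into the full vector."""
    m = np.ones((1, 1))
    for c in cores:
        r, n, r2 = c.shape
        m = (m @ c.reshape(r, n * r2)).reshape(-1, r2)
    return m.ravel()

# A geometric sequence is exactly QTT-rank-1: every bond dimension collapses to 1,
# so 16 entries are stored in 4 tiny cores.
v = np.exp(0.3 * np.arange(16))
cores = tt_svd(v, 4)
assert all(c.shape[2] == 1 for c in cores)
assert np.allclose(tt_to_vector(cores), v)
```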
Grid-based electronic structure calculations: The tensor decomposition approach
Rakhuba, M.V.; Oseledets, I.V.
2016-05-01
We present a fully grid-based approach for solving Hartree–Fock and all-electron Kohn–Sham equations based on a low-rank approximation of the three-dimensional electron orbitals. Due to the low-rank structure, the total complexity of the algorithm depends linearly on the one-dimensional grid size. Linear complexity allows for the use of fine grids, e.g. 8192{sup 3}, and thus a cheap extrapolation procedure. We test the proposed approach on closed-shell atoms up to argon, several molecules, and clusters of hydrogen atoms. All tests show systematic convergence with the required accuracy.
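The low-rank premise, that a separable orbital-like function on an n^3 grid costs O(n) numbers rather than O(n^3), can be checked directly; the Gaussian test function below is an illustrative stand-in for an orbital:

```python
import numpy as np

# A separable function f(x,y,z) = g(x) g(y) g(z) on an n^3 grid needs only
# 3n numbers in factored (rank-1) form instead of n^3 in dense form.
n = 64
x = np.linspace(-5.0, 5.0, n)
g = np.exp(-x**2)                           # 1D factor of a Gaussian

full = np.einsum('i,j,k->ijk', g, g, g)     # dense n^3 tensor

# The mode-0 unfolding of a separable tensor is numerically rank one.
s = np.linalg.svd(full.reshape(n, n * n), compute_uv=False)
assert s[1] / s[0] < 1e-10

print(3 * n, n**3)   # factored vs dense storage: 192 262144
```

General orbitals are approximated by a short sum of such separable terms, which is what keeps the grid-based solver linear in n.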
Kim, Na Rae; Jung, Inyu; Jo, Yun Hwan; Lee, Hyuck Mo
2013-09-01
To control the optical properties of Cu2O for a variety of applications, we synthesized nanoscale Cu2O without further treatments. Cu2O nanoparticles with an average size of 2.7 nm (sigma < or = 3.7%) were successfully synthesized in this study via a modified thermal decomposition process. Copper (II) acetylacetonate was used as the precursor, and oleylamine was used as a solvent, a surfactant, and a reducing agent. The oleylamine-mediated synthesis allowed for the preparation of Cu2O nanoparticles with a narrower size distribution, and the nanoparticles were synthesized in the presence of a borane tert-butylamine (BTB) complex, where BTB acted as a strong co-reducing agent together with oleylamine. UV-vis spectroscopy analysis suggests that the band gap energy of these Cu2O particles is enlarged from 2.1 eV in the bulk to 3.1 eV in the 2.7-nm nanoparticles, which is larger than most other reported values for Cu2O nanoparticles. Therefore, these nanoparticles could be used as a transparent material because of this transformed optical property.
E6Tensors: A Mathematica package for E6 Tensors
NASA Astrophysics Data System (ADS)
Deppisch, Thomas
2017-04-01
We present the Mathematica package E6Tensors, a tool for explicit tensor calculations in E6 gauge theories. In addition to matrix expressions for the group generators of E6, it provides structure constants, various higher-rank tensors and expressions for the representations 27, 78, 351 and 351‧. This paper comes with a short manual including physically relevant examples. I further give a complete list of gauge-invariant, renormalisable terms for superpotentials and Lagrangians.
Discussion of stress tensor nonuniqueness with application to nonuniform, particulate systems
Aidun, J.B.
1993-06-01
The indeterminacy of the mechanical stress tensor has been noted in several developments of expressions for stress in a system of particles. It is generally agreed that physical quantities related to the stress tensor must be insensitive to this nonuniqueness, but there is no definitive prescription for ensuring it. Kroener's tensor decomposition theorem is applied to the mechanical stress tensor {sigma}{sub ij} to show that its complete determination requires specification of its "incompatibility," {epsilon}{sub ijk}{epsilon}{sub lmn}{partial_derivative}{sub j}{partial_derivative}{sub m}{sigma}{sub kn}, in addition to its divergence, which is obtained from the momentum conservation relation. For a particulate system, the stress tensor incompatibility is shown to vanish, recovering the correct expression for macroscopically observable traction. This result removes concern about nonuniqueness without requiring equilibrium or arbitrarily defined force lines.
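The incompatibility operator quoted above can be evaluated numerically. As a sanity check (Saint-Venant), a strain field derived from a displacement is compatible, so its incompatibility vanishes; the quadratic field below is illustrative and chosen so that central differences are exact:

```python
import numpy as np

# inc(sigma)_ij = eps_ikl eps_jmn d_k d_m sigma_ln, evaluated by finite
# differences. Test field: strain of u = (x^2 y, 0, 0), which is compatible.
n, h = 9, 0.25
c = h * (np.arange(n) - n // 2)
X, Y, Z = np.meshgrid(c, c, c, indexing='ij')

sig = np.zeros((3, 3, n, n, n))
sig[0, 0] = 2 * X * Y               # eps_xx
sig[0, 1] = sig[1, 0] = X**2 / 2    # eps_xy

levi = np.zeros((3, 3, 3))          # Levi-Civita symbol
for a, b, d in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    levi[a, b, d], levi[a, d, b] = 1.0, -1.0

d1 = np.array(np.gradient(sig, h, axis=(2, 3, 4), edge_order=2))  # k,l,n,grid
d2 = np.array(np.gradient(d1, h, axis=(3, 4, 5), edge_order=2))   # m,k,l,n,grid
inc = np.einsum('ikl,jmn,mklnxyz->ijxyz', levi, levi, d2)

assert np.allclose(inc, 0.0, atol=1e-9)  # compatible field: zero incompatibility
```

An incompatible stress contribution (the part Kroener's theorem isolates) would leave a nonzero `inc` under the same machinery.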
Reducing tensor magnetic gradiometer data for unexploded ordnance detection
Bracken, Robert E.; Brown, Philip J.
2005-01-01
We performed a survey to demonstrate the effectiveness of a prototype tensor magnetic gradiometer system (TMGS) for detection of buried unexploded ordnance (UXO). In order to achieve a useful result, we designed a data-reduction procedure that resulted in a realistic magnetic gradient tensor and devised a simple way of viewing complicated tensor data, not only to assess the validity of the final resulting tensor, but also to preview the data at interim stages of processing. The final processed map of the surveyed area clearly shows a sharp anomaly that peaks almost directly over the target UXO. This map agrees well with a modeled map derived from dipolar sources near the known target locations. From this agreement, it can be deduced that the reduction process is valid, making the prototype TMGS a foundation for development of future systems and processes.
Relativistic Lagrangian displacement field and tensor perturbations
NASA Astrophysics Data System (ADS)
Rampf, Cornelius; Wiegand, Alexander
2014-12-01
We investigate the purely spatial Lagrangian coordinate transformation from the Lagrangian to the basic Eulerian frame. We demonstrate three techniques for extracting the relativistic displacement field from a given solution in the Lagrangian frame. These techniques are (a) defining a local set of Eulerian coordinates embedded into the Lagrangian frame; (b) performing a specific gauge transformation; and (c) a fully nonperturbative approach based on the Arnowitt-Deser-Misner (ADM) split. The latter approach shows that this decomposition is not tied to a specific perturbative formulation for the solution of the Einstein equations. Rather, it can be defined at the level of the nonperturbative coordinate change from the Lagrangian to the Eulerian description. Studying such different techniques is useful because it allows us to compare and further develop the various approximation techniques available in the Lagrangian formulation. We find that one has to solve the gravitational wave equation in the relativistic analysis; otherwise, the corresponding Newtonian limit will necessarily contain spurious nonpropagating tensor artifacts at second order in the Eulerian frame. We also derive the magnetic part of the Weyl tensor in the Lagrangian frame, and find that it is excited not only by gravitational waves but also by tensor perturbations which are induced through the nonlinear frame dragging. We apply our findings to calculate, for the first time, the relativistic displacement field, up to second order, for a ΛCDM Universe in the presence of a local primordial non-Gaussian component. Finally, we also comment on recent claims about whether mass conservation in the Lagrangian frame is violated.
Carbon decomposition process of the residual biomass in the paddy soil of a single-crop rice field
NASA Astrophysics Data System (ADS)
Okada, K.; Iwata, T.
2014-12-01
In cultivated fields, residual organic matter is plowed into the soil after harvest and decays during the fallow season. Greenhouse gases such as CO2 and CH4 are generated by the decomposition of this organic matter and released into the atmosphere. In some fields, open burning is carried out by tradition, whereby the carbon in residual matter is released into the atmosphere as CO2. However, the effect of burning on the carbon budget between croplands and the atmosphere has not yet been fully considered. In this study, coarse organic matter (COM) in the paddy soil of a single-crop rice field was sampled at regular intervals between January 2011 and August 2014. The amount of carbon released from residual matter was estimated by analyzing the variations in the carbon content of COM. The effects of soil temperature (Ts) and soil water content (SWC) at the paddy field on the rate of carbon decomposition were investigated. Though the rate of COM decrease was much smaller in the winter season, it accelerated in the warming season between April and June every year. Decomposition slowed during the next rice cultivation season despite the soil temperature being at its highest. In addition, the observational field was divided into two areas, and open burning experiments were conducted three times, in November 2011, 2012, and 2013. In each year, three sampling surveys were done: of plants before harvest and of residuals before and after the burning experiment. From these surveys, it is suggested that about 48±2% of the carbon content of the above-ground plants was removed as grain by harvest, and about 27±2% of the carbon was emitted as CO2 by burning. The carbon content of residuals plowed into the soil after the harvest was estimated at 293±1 and 220±36 gC/m2 in the no-burned and burned areas, respectively, based on three-year averages. It is estimated that 70 and 60% of the initial input amount of COM was decomposed after a year in the no-burned and burned areas, respectively.
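If the one-year losses quoted above are read as first-order decay, the implied annual decay constants follow from k = -ln(1 - f). This back-of-envelope inference is ours, not the authors':

```python
import math

# Reported one-year decomposed fractions of COM, interpreted (our assumption)
# as first-order decay: remaining fraction exp(-k) = 1 - f.
ks = {}
for label, f in [("no-burned", 0.70), ("burned", 0.60)]:
    ks[label] = -math.log(1.0 - f)
    print(f"{label}: k = {ks[label]:.2f} per year")
```

This prints k = 1.20 per year for the no-burned area and 0.92 per year for the burned area.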
On Endomorphisms of Quantum Tensor Space
NASA Astrophysics Data System (ADS)
Lehrer, Gustav Isaac; Zhang, Ruibin
2008-12-01
We give a presentation of the endomorphism algebra End_{U_q(sl_2)}(V^{⊗r}), where V is the three-dimensional irreducible module for quantum sl_2 over the function field C(q^{1/2}). This will be as a quotient of the Birman-Wenzl-Murakami algebra BMW_r(q) := BMW_r(q^{-4}, q^2 - q^{-2}) by an ideal generated by a single idempotent Φ_q. Our presentation is in analogy with the case where V is replaced by the two-dimensional irreducible U_q(sl_2)-module, the BMW algebra is replaced by the Hecke algebra H_r(q) of type A_{r-1}, Φ_q is replaced by the quantum alternator in H_3(q), and the endomorphism algebra is the classical realisation of the Temperley-Lieb algebra on tensor space. In particular, we show that all relations among the endomorphisms defined by the R-matrices on V^{⊗r} are consequences of relations among the three R-matrices acting on V^{⊗4}. The proof makes extensive use of the theory of cellular algebras. Potential applications include the decomposition of tensor powers when q is a root of unity.
Catalyst for sodium chlorate decomposition
NASA Technical Reports Server (NTRS)
Wydeven, T.
1972-01-01
Production of oxygen by rapid decomposition of cobalt oxide and sodium chlorate mixture is discussed. Cobalt oxide serves as catalyst to accelerate reaction. Temperature conditions and chemical processes involved are described.
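The stoichiometry of the candle reaction, 2 NaClO3 -> 2 NaCl + 3 O2, fixes the oxygen yield; a quick check with standard atomic masses (the per-kilogram figure is our arithmetic, not a number from the report):

```python
# Oxygen yield of the catalyzed decomposition 2 NaClO3 -> 2 NaCl + 3 O2.
M_Na, M_Cl, M_O = 22.990, 35.453, 15.999
M_chlorate = M_Na + M_Cl + 3 * M_O      # molar mass of NaClO3, g/mol
o2_per_mol = 1.5 * (2 * M_O)            # 1.5 mol O2 released per mol NaClO3, g
frac = o2_per_mol / M_chlorate
print(f"O2 mass fraction: {frac:.3f}")  # about 0.45 kg O2 per kg NaClO3
```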
2013-01-01
Background The Prussian blue analogues represent a well-known and extensively studied group of coordination species which has many remarkable applications due to their ion-exchange, electron-transfer or magnetic properties. Among them, Co-Fe Prussian blue analogues have been extensively studied due to their photoinduced magnetization. Surprisingly, their suitability as precursors for the solid-state synthesis of magnetic nanoparticles is almost unexplored. In this paper, the mechanism of thermal decomposition of [Co(en)3][Fe(CN)6] · 2H2O (1a) is elucidated, including the topotactic dehydration, suggested valence and spin exchange mechanisms, and the formation of a mixture of CoFe2O4-Co3O4 (3:1) as the final products of thermal degradation. Results The course of thermal decomposition of 1a in an air atmosphere up to 600°C was monitored by TG/DSC techniques, 57Fe Mössbauer and IR spectroscopy. First, the topotactic dehydration of 1a to the hemihydrate [Co(en)3][Fe(CN)6] · 1/2H2O (1b) occurred with preservation of the single-crystal character, as confirmed by X-ray diffraction analysis. The subsequent thermal decomposition proceeded in four further stages, including intermediates varying in the valence and spin states of both transition metal ions in their structures, i.e. [FeII(en)2(μ-NC)CoIII(CN)4], [FeIII(NH2CH2CH3)2(μ-NC)2CoII(CN)3] and FeIII[CoII(CN)5], which were suggested mainly from 57Fe Mössbauer, IR spectral and elemental analyses data. Thermal decomposition was completed at 400°C, when superparamagnetic phases of CoFe2O4 and Co3O4 in a molar ratio of 3:1 were formed. During further temperature increase (450 and 600°C), the ongoing crystallization process gave a new ferromagnetic phase attributed to CoFe2O4-Co3O4 nanocomposite particles. Their formation was confirmed by XRD and TEM analyses. The in-field (5 K / 5 T) Mössbauer spectrum revealed canting of the Fe(III) spin in the almost fully inverse spinel structure of CoFe2O4. Conclusions It has been found
NASA Astrophysics Data System (ADS)
Lizurek, Grzegorz
2017-01-01
Tectonic seismicity in Poland is sparse. The biggest event, of magnitude 5.6, was located near Myślenice in the 17th century. On the other hand, the anthropogenic seismicity is among the highest in Europe, related, for example, to underground mining in the Upper Silesian Coal Basin (USCB) and the Legnica Głogów Copper District (LGCD), open-pit mining in the "Bełchatów" brown coal mine, and the reservoir impoundment of the Czorsztyn artificial lake. The level of seismic activity in these areas varies from tens to thousands of events per year. Focal mechanism and full moment tensor (MT) decomposition allow for a deeper understanding of the seismogenic processes leading to tectonic, induced, and triggered seismic events. The non-DC components of moment tensors are considered an indicator of induced seismicity. In this work, MT inversion and decomposition are shown to be a robust tool for unveiling collapse-type events as well as other induced events in Polish underground mining areas. The robustness and limitations of the presented method are exemplified by synthetic tests and by analyzing weak tectonic earthquakes. The spurious non-DC components of full MT solutions due to noise and poor focal coverage are discussed. The results of the MT inversions of the human-related and tectonic earthquakes from Poland indicate this method as a useful part of a tectonic and anthropogenic seismicity discrimination workflow.
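One common convention for the DC/non-DC split removes the isotropic part of the moment tensor and measures the CLVD content from the deviatoric eigenvalue of smallest magnitude; conventions vary across the literature, so the sketch below is one standard variant applied to a textbook pure double couple:

```python
import numpy as np

# ISO/DC/CLVD split of a moment tensor: subtract the isotropic part, then
# take eps = (smallest-magnitude deviatoric eigenvalue) / |largest-magnitude|;
# CLVD fraction = 2|eps|, DC fraction = 1 - 2|eps|. One common convention.
def decompose(M):
    iso = np.trace(M) / 3.0
    dev = M - iso * np.eye(3)
    e = sorted(np.linalg.eigvalsh(dev), key=abs)   # eigenvalues by magnitude
    eps = e[0] / abs(e[2]) if e[2] != 0 else 0.0
    clvd = 2.0 * abs(eps)
    return iso, 1.0 - clvd, clvd                   # iso part, DC frac, CLVD frac

# Pure double couple: eigenvalues (1, 0, -1), so no ISO and no CLVD content.
M_dc = np.array([[0.0, 1.0, 0.0],
                 [1.0, 0.0, 0.0],
                 [0.0, 0.0, 0.0]])
iso, dc, clvd = decompose(M_dc)
print(iso, round(dc, 3), round(clvd, 3))   # 0.0 1.0 0.0
```

A collapse-type mining event would instead show a large negative isotropic part, which is what makes full MT decomposition useful for discrimination.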
Local recovery of lithospheric stress tensor from GOCE gravitational tensor
NASA Astrophysics Data System (ADS)
Eshagh, Mehdi
2017-01-01