Rank-based decompositions of morphological templates.
Sussner, P; Ritter, G X
2000-01-01
Methods for matrix decomposition have found numerous applications in image processing, in particular for the problem of template decomposition. Since existing matrix decomposition techniques are mainly concerned with the linear domain, we consider it timely to investigate matrix decomposition techniques in the nonlinear domain with applications in image processing. The mathematical basis for these investigations is the new theory of rank within minimax algebra. Thus far, only minimax decompositions of rank 1 and rank 2 matrices into outer product expansions are known to the image processing community. We derive a heuristic algorithm for the decomposition of matrices having arbitrary rank.
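A minimal sketch of the rank-1 case mentioned above: a grayscale template has minimax rank 1 exactly when it is additively separable, T[i, j] = p[i] + q[j], so separability can be checked directly. The template values and tolerance below are illustrative assumptions, not taken from the paper.

```python
# Sketch: test whether a grayscale morphological template is additively
# separable (minimax rank 1), i.e. T[i, j] == p[i] + q[j] for some p, q.
import numpy as np

def rank1_minimax_factors(T):
    """Try to factor T into an outer 'sum' p[i] + q[j] (max-plus rank 1)."""
    p = T[:, 0]                      # fix q[0] = 0, so p is the first column
    q = T[0, :] - T[0, 0]            # offsets of the first row
    return p, q

T = np.array([[2., 3., 5.],
              [4., 5., 7.],
              [1., 2., 4.]])
p, q = rank1_minimax_factors(T)
separable = np.allclose(T, p[:, None] + q[None, :])
print(separable)   # True: T decomposes into a row and a column template
```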
Separable decompositions of bipartite mixed states
NASA Astrophysics Data System (ADS)
Li, Jun-Li; Qiao, Cong-Feng
2018-04-01
We present a practical scheme for the decomposition of a bipartite mixed state into a sum of direct products of local density matrices, using the technique developed in Li and Qiao (Sci. Rep. 8:1442, 2018). In the scheme, the correlation matrix which characterizes the bipartite entanglement is first decomposed into two matrices composed of the Bloch vectors of local states. Then, we show that the symmetries of the Bloch vectors are consistent with those of the correlation matrix, and that the magnitudes of the local Bloch vectors are lower bounded by the correlation matrix. Concrete examples of the separable decompositions of bipartite mixed states are presented for illustration.
Low-rank matrix decomposition and spatio-temporal sparse recovery for STAP radar
Sen, Satyabrata
2015-08-04
We develop space-time adaptive processing (STAP) methods by leveraging the advantages of sparse signal processing techniques in order to detect a slowly-moving target. We observe that the inherent sparse characteristics of a STAP problem can be formulated as the low-rankness of the clutter covariance matrix when compared to the total adaptive degrees-of-freedom, and also as the sparse interference spectrum on the spatio-temporal domain. By exploiting these sparse properties, we propose two approaches for estimating the interference covariance matrix. In the first approach, we consider a constrained matrix rank minimization problem (RMP) to decompose the sample covariance matrix into a low-rank positive semidefinite and a diagonal matrix. The solution of the RMP is obtained by applying the trace minimization technique and the singular value decomposition with a matrix shrinkage operator. Our second approach deals with the atomic norm minimization problem to recover the clutter response-vector that has a sparse support on the spatio-temporal plane. We use convex relaxation based standard sparse-recovery techniques to find the solutions. With extensive numerical examples, we demonstrate the performance of the proposed STAP approaches in both ideal and practical scenarios, involving Doppler-ambiguous clutter ridges and spatial and temporal decorrelation effects. As a result, the low-rank matrix decomposition based solution requires as many secondary measurements as twice the clutter rank to attain a near-ideal STAP performance, whereas the spatio-temporal sparsity based approach needs a considerably smaller number of secondary data.
A knowledge-based tool for multilevel decomposition of a complex design problem
NASA Technical Reports Server (NTRS)
Rogers, James L.
1989-01-01
Although much work has been done in applying artificial intelligence (AI) tools and techniques to problems in different engineering disciplines, only recently has the application of these tools begun to spread to the decomposition of complex design problems. A new tool based on AI techniques has been developed to implement a decomposition scheme suitable for multilevel optimization and display of data in an N x N matrix format.
NASA Technical Reports Server (NTRS)
Wade, T. O.
1984-01-01
Reduction techniques for traffic matrices are explored in some detail. These matrices arise in satellite switched time-division multiple access (SS/TDMA) techniques whereby switching of uplink and downlink beams is required to facilitate interconnectivity of beam zones. A traffic matrix is given to represent that traffic to be transmitted from n uplink beams to n downlink beams within a TDMA frame typically of 1 ms duration. The frame is divided into segments of time and during each segment a portion of the traffic is represented by a switching mode. This time slot assignment is characterized by a mode matrix in which there is not more than a single non-zero entry on each line (row or column) of the matrix. Investigation is confined to decomposition of an n x n traffic matrix by mode matrices with a requirement that the decomposition be 100 percent efficient or, equivalently, that the line(s) in the original traffic matrix whose sum is maximal (called critical line(s)) remain maximal as mode matrices are subtracted throughout the decomposition process. A method of decomposition of an n x n traffic matrix by mode matrices results in a number of steps that is bounded by n^2 - 2n + 2. It is shown that this upper bound exists for an n x n matrix wherein all the lines are maximal (called a quasi doubly stochastic (QDS) matrix) or for an n x n matrix that is completely arbitrary. That is, the fact that no method can exist with a lower upper bound is shown for both QDS and arbitrary matrices, in an elementary and straightforward manner.
A technique for plasma velocity-space cross-correlation
NASA Astrophysics Data System (ADS)
Mattingly, Sean; Skiff, Fred
2018-05-01
An advance in experimental plasma diagnostics is presented and used to make the first measurement of a plasma velocity-space cross-correlation matrix. The velocity space correlation function can detect collective fluctuations of plasmas through a localized measurement. An empirical decomposition, singular value decomposition, is applied to this Hermitian matrix in order to obtain the plasma fluctuation eigenmode structure on the ion distribution function. A basic theory is introduced and compared to the modes obtained by the experiment. A full characterization of these modes is left for future work, but an outline of this endeavor is provided. Finally, the requirements for this experimental technique in other plasma regimes are discussed.
NASA Astrophysics Data System (ADS)
Kumar, Ravi; Bhaduri, Basanta; Nishchal, Naveen K.
2018-01-01
In this study, we propose a quick response (QR) code based nonlinear optical image encryption technique using spiral phase transform (SPT), equal modulus decomposition (EMD) and singular value decomposition (SVD). First, the primary image is converted into a QR code and then multiplied with a spiral phase mask (SPM). Next, the product is spiral phase transformed with a particular spiral phase function, and further, the EMD is performed on the output of the SPT, which results in two complex images, Z1 and Z2. Among these, Z1 is further Fresnel propagated with distance d, and Z2 is reserved as a decryption key. Afterwards, SVD is performed on the Fresnel propagated output to get three decomposed matrices, i.e., one diagonal matrix and two unitary matrices. The two unitary matrices are modulated with two different SPMs and then the inverse SVD is performed using the diagonal matrix and modulated unitary matrices to get the final encrypted image. Numerical simulation results confirm the validity and effectiveness of the proposed technique. The proposed technique is robust against noise attack, specific attack, and brute-force attack. Simulation results are presented in support of the proposed idea.
Matrix decompositions of two-dimensional nuclear magnetic resonance spectra.
Havel, T F; Najfeld, I; Yang, J X
1994-08-16
Two-dimensional NMR spectra are rectangular arrays of real numbers, which are commonly regarded as digitized images to be analyzed visually. If one treats them instead as mathematical matrices, linear algebra techniques can also be used to extract valuable information from them. This matrix approach is greatly facilitated by means of a physically significant decomposition of these spectra into a product of matrices, namely S = PAP^T. Here, P denotes a matrix whose columns contain the digitized contours of each individual peak or multiplet in the one-dimensional spectrum, P^T is its transpose, and A is an interaction matrix specific to the experiment in question. The practical applications of this decomposition are considered in detail for two important types of two-dimensional NMR spectra, double quantum-filtered correlated spectroscopy and nuclear Overhauser effect spectroscopy, both in the weak-coupling approximation. The elements of A are the signed intensities of the cross-peaks in a double quantum-filtered correlated spectrum, or the integrated cross-peak intensities in the case of a nuclear Overhauser effect spectrum. This decomposition not only permits these spectra to be efficiently simulated but also permits the corresponding inverse problems to be given an elegant mathematical formulation to which standard numerical methods are applicable. Finally, the extension of this decomposition to the case of strong coupling is given. PMID:8058742
Performance of Scattering Matrix Decomposition and Color Spaces for Synthetic Aperture Radar Imagery
2010-03-01
[Table-of-contents and text fragments] Color Spaces and Synthetic Aperture Radar (SAR) Multicolor Imaging; Colorimetry; Decomposition Techniques on SAR Polarimetry; Colorimetry applied to SAR Imagery. Colorimetry is introduced, presenting the fundamentals of the RGB and CMY color spaces as defined for polarimetric SAR systems.
A fast new algorithm for a robot neurocontroller using inverse QR decomposition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morris, A.S.; Khemaissia, S.
2000-01-01
A new adaptive neural network controller for robots is presented. The controller is based on direct adaptive techniques. Unlike many neural network controllers in the literature, inverse dynamical model evaluation is not required. A numerically robust, computationally efficient processing scheme for neural network weight estimation is described, namely, the inverse QR decomposition (INVQR). The inverse QR decomposition and a weighted recursive least-squares (WRLS) method for neural network weight estimation are derived using Cholesky factorization of the data matrix. The algorithm that performs the efficient INVQR of the underlying space-time data matrix may be implemented in parallel on a triangular array. Furthermore, its systolic architecture is well suited for VLSI implementation. Another important benefit of the INVQR decomposition is that it solves directly for the time-recursive least-squares filter vector, while avoiding the sequential back-substitution step required by QR decomposition approaches.
Matrix decomposition graphics processing unit solver for Poisson image editing
NASA Astrophysics Data System (ADS)
Lei, Zhao; Wei, Li
2012-10-01
In recent years, gradient-domain methods have been widely discussed in the image processing field, including seamless cloning and image stitching. These algorithms are commonly carried out by solving a large sparse linear system: the Poisson equation. However, solving the Poisson equation is a computation- and memory-intensive task, which makes it unsuitable for real-time image editing. A new matrix decomposition graphics processing unit (GPU) solver (MDGS) is proposed to address this problem. A matrix decomposition method is used to distribute the work among GPU threads, so that MDGS takes full advantage of the computing power of current GPUs. Additionally, MDGS is a hybrid solver (combining both direct and iterative techniques) and has a two-level architecture. These properties enable MDGS to generate solutions identical to those of the common Poisson methods and to achieve a high convergence rate in most cases. This approach is advantageous in terms of parallelizability, enabling real-time image processing, low memory consumption, and broad applicability.
NASA Astrophysics Data System (ADS)
Elbeih, Ahmed; Abd-Elghany, Mohamed; Elshenawy, Tamer
2017-03-01
Vacuum stability test (VST) is mainly used to study the compatibility and stability of energetic materials. In this work, VST has been investigated to study the thermal decomposition kinetics of four cyclic nitramines, 1,3,5-trinitro-1,3,5-triazinane (RDX), 1,3,5,7-tetranitro-1,3,5,7-tetrazocane (HMX), cis-1,3,4,6-tetranitrooctahydroimidazo-[4,5-d]imidazole (BCHMX), and 2,4,6,8,10,12-hexanitro-2,4,6,8,10,12-hexaazaisowurtzitane (ε-HNIW, CL-20), bonded by a polyurethane matrix based on hydroxyl-terminated polybutadiene (HTPB). Model fitting and model-free (isoconversional) methods have been applied to determine the decomposition kinetics from the VST results. For comparison, the decomposition kinetics were determined isothermally by the ignition delay technique and non-isothermally by the Advanced Kinetics and Technology Solution (AKTS) software. The activation energies for thermolysis obtained by the isoconversional method based on the VST technique for RDX/HTPB, HMX/HTPB, BCHMX/HTPB and CL-20/HTPB were 157.1, 203.1, 190.0 and 176.8 kJ mol^-1, respectively. The model fitting method proved that the mechanism of thermal decomposition of BCHMX/HTPB is controlled by the nucleation model while all the other studied PBXs are controlled by diffusion models. A linear relationship between the ignition temperatures and the activation energies was observed. BCHMX/HTPB is an interesting new PBX at the research stage.
NASA Astrophysics Data System (ADS)
Zhang, Hongqin; Tian, Xiangjun
2018-04-01
Ensemble-based data assimilation methods often use the so-called localization scheme to improve the representation of the ensemble background error covariance (Be). Extensive research has been undertaken to reduce the computational cost of these methods by using the localized ensemble samples to localize Be by means of a direct decomposition of the local correlation matrix C. However, the computational costs of the direct decomposition of the local correlation matrix C are still extremely high due to its high dimension. In this paper, we propose an efficient local correlation matrix decomposition approach based on the concept of alternating directions. This approach is intended to avoid direct decomposition of the correlation matrix. Instead, we first decompose the correlation matrix into 1-D correlation matrices in the three coordinate directions, then construct their empirical orthogonal function decomposition at low resolution. This procedure is followed by a 1-D spline interpolation process to transform the above decompositions to the high-resolution grid. Finally, an efficient decomposition of the correlation matrix is achieved by computing the Kronecker product of these directional decompositions, which closely approximates the original matrix. We conducted a series of comparison experiments to illustrate the validity and accuracy of the proposed local correlation matrix decomposition approach. The effectiveness of the proposed correlation matrix decomposition approach and its efficient localization implementation of the nonlinear least-squares four-dimensional variational assimilation are further demonstrated by several groups of numerical experiments based on the Advanced Research Weather Research and Forecasting model.
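A minimal sketch of the Kronecker-product idea under simplified assumptions (unit-spaced grids, a Gaussian correlation model, and no EOF truncation or spline interpolation): the full 3-D correlation matrix and its Cholesky factor are assembled from three 1-D factors without ever decomposing the large matrix directly.

```python
# Sketch: approximate a 3-D correlation matrix as the Kronecker product of
# three 1-D correlation matrices; grid sizes and length scales are
# illustrative assumptions.
import numpy as np

def corr_1d(n, length_scale):
    """1-D Gaussian correlation matrix on a unit-spaced grid (with jitter)."""
    x = np.arange(n)
    C = np.exp(-0.5 * ((x[:, None] - x[None, :]) / length_scale) ** 2)
    return C + 1e-10 * np.eye(n)        # tiny jitter for numerical stability

Cx, Cy, Cz = corr_1d(8, 1.5), corr_1d(6, 1.2), corr_1d(4, 1.0)

# Full 3-D correlation matrix (192 x 192) as a Kronecker product.
C = np.kron(np.kron(Cx, Cy), Cz)

# Its Cholesky factor comes cheaply from the 1-D factors, since
# chol(A (x) B) = chol(A) (x) chol(B): no 192 x 192 factorization needed.
Lx, Ly, Lz = (np.linalg.cholesky(M) for M in (Cx, Cy, Cz))
L = np.kron(np.kron(Lx, Ly), Lz)
print(np.allclose(L @ L.T, C))   # True
```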
A study of the parallel algorithm for large-scale DC simulation of nonlinear systems
NASA Astrophysics Data System (ADS)
Cortés Udave, Diego Ernesto; Ogrodzki, Jan; Gutiérrez de Anda, Miguel Angel
Newton-Raphson DC analysis of large-scale nonlinear circuits may be an extremely time-consuming process even if sparse matrix techniques and bypassing of nonlinear model calculations are used. A modest decrease in the time required for this task can be achieved on multi-core, multithreaded computers if the calculation of the mathematical models for the nonlinear elements, as well as the stamp management of the sparse matrix entries, is handled by concurrent processes. This numerical complexity can be further reduced via circuit decomposition and parallel solution of blocks, taking as a departure point the BBD matrix structure. This block-parallel approach may yield a considerable gain, though it is strongly dependent on the system topology and, of course, on the processor type. This contribution presents an easily parallelizable decomposition-based algorithm for DC simulation and provides a detailed study of its effectiveness.
Using Strassen's algorithm to accelerate the solution of linear systems
NASA Technical Reports Server (NTRS)
Bailey, David H.; Lee, King; Simon, Horst D.
1990-01-01
Strassen's algorithm for fast matrix-matrix multiplication has been implemented for matrices of arbitrary shapes on the CRAY-2 and CRAY Y-MP supercomputers. Several techniques have been used to reduce the scratch space requirement for this algorithm while simultaneously preserving a high level of performance. When the resulting Strassen-based matrix multiply routine is combined with some routines from the new LAPACK library, LU decomposition can be performed with rates significantly higher than those achieved by conventional means. We succeeded in factoring a 2048 x 2048 matrix on the CRAY Y-MP at a rate equivalent to 325 MFLOPS.
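For illustration, a sketch of the seven-multiplication Strassen recursion in Python rather than the authors' CRAY implementation; the recursion cutoff and test size are illustrative assumptions, and padding for odd dimensions is omitted.

```python
# Sketch: one level of Strassen's algorithm for even-sized square matrices,
# falling back to the conventional product below a cutoff.
import numpy as np

def strassen(A, B, cutoff=64):
    n = A.shape[0]
    if n <= cutoff or n % 2:
        return A @ B
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    M1 = strassen(A11 + A22, B11 + B22, cutoff)   # seven products
    M2 = strassen(A21 + A22, B11, cutoff)         # instead of eight
    M3 = strassen(A11, B12 - B22, cutoff)
    M4 = strassen(A22, B21 - B11, cutoff)
    M5 = strassen(A11 + A12, B22, cutoff)
    M6 = strassen(A21 - A11, B11 + B12, cutoff)
    M7 = strassen(A12 - A22, B21 + B22, cutoff)
    C = np.empty((n, n))
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C

A, B = np.random.rand(256, 256), np.random.rand(256, 256)
print(np.allclose(strassen(A, B), A @ B))   # True
```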
Koopman Mode Decomposition Methods in Dynamic Stall: Reduced Order Modeling and Control
2015-11-10
[Text fragments] The flow phenomena are separated into individual modes; the technique of Proper Orthogonal Decomposition (POD), see [Holmes, 1998], is a popular approach. A Prony-type procedure is outlined for sampled values h(k), k = 0, ..., 2M-1, of an exponential sum: (1) solve a linear system for the coefficients of the Prony polynomial; (2) compute all zeros z_j in D, j = 1, ..., M, of the Prony polynomial, i.e., calculate all eigenvalues of the associated companion matrix, and form f_j = log z_j for j = 1, ..., M, where log is the principal branch of the complex logarithm.
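A sketch of the two-step Prony procedure outlined in the fragment above; the Hankel linear system and companion-matrix root-finding are standard, but the test signal and model order here are illustrative assumptions.

```python
# Sketch: recover the exponents f_j of h(k) = sum_j c_j * exp(f_j * k)
# from 2M samples via the classical Prony procedure.
import numpy as np

def prony_exponents(h, M):
    # Step 1: Hankel system for the Prony polynomial
    # p(z) = z^M + a_{M-1} z^{M-1} + ... + a_0.
    H = np.array([[h[i + j] for j in range(M)] for i in range(M)])
    a = np.linalg.solve(H, -h[M:2 * M])
    # Step 2: zeros of the Prony polynomial (eigenvalues of its companion
    # matrix, which is what np.roots computes), then f_j = log z_j.
    z = np.roots(np.concatenate(([1.0], a[::-1])))
    return np.log(z)                 # principal branch of the complex log

true_f = np.array([-0.1 + 2.0j, -0.05 - 1.0j])
k = np.arange(4)                                 # 2M = 4 samples
h = np.exp(np.outer(k, true_f)).sum(axis=1)      # unit amplitudes c_j = 1
print(np.sort_complex(prony_exponents(h, M=2)))  # ~ sorted true_f
```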
Optical systolic solutions of linear algebraic equations
NASA Technical Reports Server (NTRS)
Neuman, C. P.; Casasent, D.
1984-01-01
The philosophy and data encoding possibilities of the systolic array optical processor (SAOP) are reviewed. The multitude of linear algebraic operations achievable on this architecture is examined. These operations include such linear algebraic algorithms as matrix decomposition, direct and indirect solutions, implicit and explicit methods for partial differential equations, eigenvalue and eigenvector calculations, and singular value decomposition. This architecture can be utilized to realize general techniques for solving matrix linear and nonlinear algebraic equations, least mean square error solutions, FIR filters, and nested-loop algorithms for control engineering applications. The data flow and pipelining of operations, design of parallel algorithms and flexible architectures, application of these architectures to computationally intensive physical problems, error source modeling of optical processors, and matching of the computational needs of practical engineering problems to the capabilities of optical processors are emphasized.
Newmark-Beta-FDTD method for super-resolution analysis of time reversal waves
NASA Astrophysics Data System (ADS)
Shi, Sheng-Bing; Shao, Wei; Ma, Jing; Jin, Congjun; Wang, Xiao-Hua
2017-09-01
In this work, a new unconditionally stable finite-difference time-domain (FDTD) method with the split-field perfectly matched layer (PML) is proposed for the analysis of time reversal (TR) waves. The proposed method is very suitable for multiscale problems involving microstructures. The spatial and temporal derivatives in this method are discretized by the central difference technique and the Newmark-Beta algorithm, respectively, and the derivation results in the calculation of a banded-sparse matrix equation. Since the coefficient matrix remains unchanged during the whole simulation process, the lower-upper (LU) decomposition of the matrix needs to be performed only once, at the beginning of the calculation. Moreover, the reverse Cuthill-Mckee (RCM) technique, an effective preprocessing technique for bandwidth compression of sparse matrices, is used to improve computational efficiency. The super-resolution focusing of TR wave propagation in two- and three-dimensional spaces is included to validate the accuracy and efficiency of the proposed method.
3D tensor-based blind multispectral image decomposition for tumor demarcation
NASA Astrophysics Data System (ADS)
Kopriva, Ivica; Peršin, Antun
2010-03-01
Blind decomposition of a multi-spectral fluorescent image for tumor demarcation is formulated by exploiting the tensorial structure of the image. The first contribution of the paper is identification of the matrix of spectral responses and the 3D tensor of spatial distributions of the materials present in the image from Tucker3 or PARAFAC models of the 3D image tensor. The second contribution of the paper is clustering-based estimation of the number of materials present in the image as well as the matrix of their spectral profiles. The 3D tensor of the spatial distributions of the materials is recovered through 3-mode multiplication of the multi-spectral image tensor and the inverse of the matrix of spectral profiles. Tensor representation of the multi-spectral image preserves its local spatial structure, which is lost, due to the vectorization process, when matrix factorization-based decomposition methods (such as non-negative matrix factorization and independent component analysis) are used. Superior performance of the tensor-based image decomposition over matrix factorization-based decompositions is demonstrated on an experimental red-green-blue (RGB) image with known ground truth as well as on RGB fluorescent images of skin tumor (basal cell carcinoma).
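A minimal sketch of the recovery step described above, under the assumption of a square, invertible spectral-profile matrix: the spatial-distribution tensor is obtained by a 3-mode product of the image tensor with the inverse of the spectral matrix. Sizes and the mixing matrix are illustrative.

```python
# Sketch: 3-mode product unmixing of a multi-spectral image tensor
# X (rows x cols x bands) with spectral-profile matrix A (bands x materials).
import numpy as np

rows, cols = 32, 32
A = np.array([[0.9, 0.2, 0.1],       # illustrative, diagonally dominant,
              [0.1, 0.7, 0.2],       # hence invertible
              [0.0, 0.1, 0.7]])
S = np.random.rand(rows, cols, 3)    # ground-truth spatial distributions

# Forward model: X(i,j,b) = sum_m A[b,m] * S(i,j,m)  (a 3-mode product).
X = np.einsum('bm,ijm->ijb', A, S)

# Unmixing: 3-mode product with the inverse of the spectral matrix.
S_hat = np.einsum('mb,ijb->ijm', np.linalg.inv(A), X)
print(np.allclose(S_hat, S))   # True
```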
Fast polar decomposition of an arbitrary matrix
NASA Technical Reports Server (NTRS)
Higham, Nicholas J.; Schreiber, Robert S.
1988-01-01
The polar decomposition of an m x n matrix A of full rank, where m is greater than or equal to n, can be computed using a quadratically convergent algorithm. The algorithm is based on a Newton iteration involving a matrix inverse. With the use of a preliminary complete orthogonal decomposition, the algorithm can be extended to arbitrary A. How to use the algorithm to compute the positive semi-definite square root of a Hermitian positive semi-definite matrix is described. A hybrid algorithm which adaptively switches from the matrix-inversion-based iteration to a matrix-multiplication-based iteration due to Kovarik, and to Bjorck and Bowie, is formulated. The decision when to switch is made using a condition estimator. This matrix-multiplication-rich algorithm is shown to be more efficient on machines for which matrix multiplication can be executed 1.5 times faster than matrix inversion.
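A sketch of the matrix-inversion-based Newton iteration at the core of the algorithm, for square nonsingular A; the acceleration scaling, the rectangular case via a complete orthogonal decomposition, and the adaptive switch to the multiplication-rich iteration are omitted.

```python
# Sketch: Newton iteration X <- (X + X^{-H}) / 2 converges quadratically
# to the unitary polar factor U of a nonsingular A; then H = U^H A.
import numpy as np

def polar_newton(A, tol=1e-12, max_iter=50):
    X = A.astype(complex)
    for _ in range(max_iter):
        X_new = 0.5 * (X + np.linalg.inv(X).conj().T)
        done = np.linalg.norm(X_new - X, 'fro') < tol * np.linalg.norm(X, 'fro')
        X = X_new
        if done:
            break
    U = X
    H = U.conj().T @ A                  # Hermitian positive definite factor
    return U, 0.5 * (H + H.conj().T)    # symmetrize against round-off

A = np.random.rand(5, 5) + 1j * np.random.rand(5, 5)
U, H = polar_newton(A)
print(np.allclose(U @ H, A), np.allclose(U.conj().T @ U, np.eye(5)))
```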
Detection and identification of concealed weapons using matrix pencil
NASA Astrophysics Data System (ADS)
Adve, Raviraj S.; Thayaparan, Thayananthan
2011-06-01
The detection and identification of concealed weapons is an extremely hard problem due to the weak signature of the target buried within the much stronger signal from the human body. This paper furthers the automatic detection and identification of concealed weapons by proposing an effective approach to obtaining the resonant frequencies in a measurement. The technique, based on Matrix Pencil, a scheme for model-based parameter estimation, also provides amplitude information, hence providing a level of confidence in the results. Of specific interest is the fact that Matrix Pencil is based on a singular value decomposition, making the scheme robust against noise.
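A sketch of a standard SVD-based Matrix Pencil estimator of complex resonant poles, in the spirit of the approach described above; the test signal, model order M, and pencil parameter L are illustrative assumptions.

```python
# Sketch: estimate complex poles from samples of a sum of damped
# exponentials via the Matrix Pencil method with SVD truncation.
import numpy as np
from scipy.linalg import hankel, pinv

def matrix_pencil_poles(y, M, L):
    """y: N samples; M: number of poles; L: pencil parameter (M <= L < N-M)."""
    Y = hankel(y[: len(y) - L], y[len(y) - L - 1:])   # (N-L) x (L+1)
    # Rank-M truncation of the row space suppresses noise.
    U, s, Vh = np.linalg.svd(Y, full_matrices=False)
    W = Vh[:M, :].T                   # basis for the signal row space
    W1, W2 = W[:-1, :], W[1:, :]      # delete last / first row
    return np.linalg.eigvals(pinv(W1) @ W2)

fs = 100.0
t = np.arange(100) / fs
y = np.exp((-2 + 2j * np.pi * 10) * t) + 0.7 * np.exp((-5 + 2j * np.pi * 25) * t)
z = matrix_pencil_poles(y, M=2, L=40)
print(np.sort_complex(np.log(z) * fs))   # ~ {-2 + j2pi*10, -5 + j2pi*25}
```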
Domain decomposition methods in aerodynamics
NASA Technical Reports Server (NTRS)
Venkatakrishnan, V.; Saltz, Joel
1990-01-01
Compressible Euler equations are solved for two-dimensional problems by a preconditioned conjugate gradient-like technique. An approximate Riemann solver is used to compute the numerical fluxes to second order accuracy in space. Two ways to achieve parallelism are tested, one which makes use of the parallelism inherent in triangular solves and the other which employs domain decomposition techniques. The vectorization/parallelism in triangular solves is realized by the use of a reordering technique called wavefront ordering. This process involves the interpretation of the triangular matrix as a directed graph and the analysis of the data dependencies. It is noted that the factorization can also be done in parallel with the wavefront ordering. The performances of two ways of partitioning the domain, strips and slabs, are compared. Results on the Cray Y-MP are reported for an inviscid transonic test case. The performances of linear algebra kernels are also reported.
Image compression using singular value decomposition
NASA Astrophysics Data System (ADS)
Swathi, H. R.; Sohini, Shah; Surbhi; Gopichand, G.
2017-11-01
We often need to transmit and store images in many applications. The smaller the image, the lower the cost associated with transmission and storage. So we often need to apply data compression techniques to reduce the storage space consumed by the image. One approach is to apply Singular Value Decomposition (SVD) on the image matrix. In this method, the digital image is given to SVD, which refactors the image into three matrices. The singular values are used to refactor the image, and at the end of this process the image is represented with a smaller set of values, hence reducing the storage space required by the image. The goal here is to achieve image compression while preserving the important features which describe the original image. SVD can be applied to any arbitrary m × n matrix, square or rectangular, invertible or not. Compression ratio and mean square error are used as performance metrics.
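A minimal sketch of the rank-k SVD compression described above, reporting the two quoted metrics; the test image and rank are illustrative assumptions.

```python
# Sketch: keep only the k largest singular triplets of an image matrix and
# report compression ratio and mean square error.
import numpy as np

def svd_compress(img, k):
    U, s, Vt = np.linalg.svd(img, full_matrices=False)
    approx = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
    m, n = img.shape
    ratio = (m * n) / (k * (m + n + 1))   # original vs. stored values
    mse = np.mean((img - approx) ** 2)
    return approx, ratio, mse

img = np.random.rand(256, 256)            # stand-in for a real image
approx, ratio, mse = svd_compress(img, k=32)
print(f"compression ratio {ratio:.1f}:1, MSE {mse:.4f}")
```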
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kolda, Tamara Gibson
We propose two new multilinear operators for expressing the matrix compositions that are needed in the Tucker and PARAFAC (CANDECOMP) decompositions. The first operator, which we call the Tucker operator, is shorthand for performing an n-mode matrix multiplication for every mode of a given tensor and can be employed to concisely express the Tucker decomposition. The second operator, which we call the Kruskal operator, is shorthand for the sum of the outer-products of the columns of N matrices and allows a divorce from a matricized representation and a very concise expression of the PARAFAC decomposition. We explore the properties of the Tucker and Kruskal operators independently of the related decompositions. Additionally, we provide a review of the matrix and tensor operations that are frequently used in the context of tensor decompositions.
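A sketch of both operators for a third-order tensor, under illustrative sizes; the identity that the Kruskal operator equals the Tucker operator with a superdiagonal core follows directly from the definitions.

```python
# Sketch: Tucker and Kruskal operators for a third-order tensor.
import numpy as np

def tucker_operator(G, A, B, C):
    """[[G; A, B, C]]: n-mode multiply core G by A, B, C in modes 0, 1, 2."""
    return np.einsum('pqr,ip,jq,kr->ijk', G, A, B, C)

def kruskal_operator(A, B, C):
    """[[A, B, C]]: sum of outer products of the R corresponding columns."""
    return np.einsum('ir,jr,kr->ijk', A, B, C)

I, J, K, R = 4, 5, 6, 3
A, B, C = np.random.rand(I, R), np.random.rand(J, R), np.random.rand(K, R)

# PARAFAC/CANDECOMP is the Kruskal operator; it coincides with the Tucker
# operator applied to an R x R x R superdiagonal (identity) core.
G = np.zeros((R, R, R))
G[np.arange(R), np.arange(R), np.arange(R)] = 1.0
print(np.allclose(tucker_operator(G, A, B, C), kruskal_operator(A, B, C)))
```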
NASA Astrophysics Data System (ADS)
Ghoraani, Behnaz; Krishnan, Sridhar
2009-12-01
The number of people affected by speech problems is increasing as the modern world places increasing demands on the human voice via mobile telephones, voice recognition software, and interpersonal verbal communications. In this paper, we propose a novel methodology for automatic pattern classification of pathological voices. The main contribution of this paper is the extraction of meaningful and unique features using an adaptive time-frequency distribution (TFD) and nonnegative matrix factorization (NMF). We construct the adaptive TFD as an effective signal analysis domain to dynamically track the nonstationarity in the speech and utilize NMF as a matrix decomposition (MD) technique to quantify the constructed TFD. The proposed method extracts meaningful and unique features from the joint TFD of the speech, and automatically identifies and measures the abnormality of the signal. Depending on the abnormality measure of each signal, we classify the signal as normal or pathological. The proposed method is applied to the Massachusetts Eye and Ear Infirmary (MEEI) voice disorders database, which consists of 161 pathological and 51 normal speakers, and an overall classification accuracy of 98.6% was achieved.
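A minimal sketch of the NMF quantification step on a stand-in time-frequency matrix (the adaptive TFD construction itself is not reproduced); scikit-learn's NMF is assumed available, and the matrix size and component count are illustrative.

```python
# Sketch: factor a non-negative time-frequency matrix into spectral bases W
# and temporal activations H with NMF.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
tfd = rng.random((128, 400))        # stand-in TFD: 128 freq bins x 400 frames

model = NMF(n_components=4, init='nndsvda', max_iter=500, random_state=0)
W = model.fit_transform(tfd)        # 128 x 4 spectral base vectors
H = model.components_               # 4 x 400 activations over time
print(W.shape, H.shape, np.linalg.norm(tfd - W @ H) / np.linalg.norm(tfd))
```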
Parallel pivoting combined with parallel reduction
NASA Technical Reports Server (NTRS)
Alaghband, Gita
1987-01-01
Parallel algorithms for the triangularization of large, sparse, and unsymmetric matrices are presented. The method combines parallel reduction with a new parallel pivoting technique, control over the generation of fill-ins, and a check for numerical stability, all done in parallel with the work distributed over the active processes. The parallel technique uses the compatibility relation between pivots to identify parallel pivot candidates and uses the Markowitz number of pivots to minimize fill-in. This technique is not a preordering of the sparse matrix and is applied dynamically as the decomposition proceeds.
NASA Astrophysics Data System (ADS)
Ballard, S.; Hipp, J. R.; Encarnacao, A.; Young, C. J.; Begnaud, M. L.; Phillips, W. S.
2012-12-01
Seismic event locations can be made more accurate and precise by computing predictions of seismic travel time through high-fidelity 3D models of the wave speed in the Earth's interior. Given the variable data quality and uneven data sampling associated with this type of model, it is essential that there be a means to calculate high-quality estimates of the path-dependent variance and covariance associated with the predicted travel times of ray paths through the model. In this paper, we describe a methodology for accomplishing this by exploiting the full model covariance matrix and show examples of path-dependent travel time prediction uncertainty computed from SALSA3D, our global, seamless 3D tomographic P-velocity model. Typical global 3D models have on the order of 1/2 million nodes, so the challenge in calculating the covariance matrix is formidable: 0.9 TB storage for 1/2 of a symmetric matrix, necessitating an Out-Of-Core (OOC) blocked matrix solution technique. With our approach the tomography matrix (G, which includes Tikhonov regularization terms) is multiplied by its transpose (G^TG) and written in a blocked sub-matrix fashion. We employ a distributed parallel solution paradigm that solves for (G^TG)^-1 by assigning blocks to individual processing nodes for matrix decomposition, update, and scaling operations. We first find the Cholesky decomposition of G^TG, which is subsequently inverted. Next, we employ OOC matrix multiplication methods to calculate the model covariance matrix from (G^TG)^-1 and an assumed data covariance matrix. Given the model covariance matrix, we solve for the travel-time covariance associated with arbitrary ray paths by summing the model covariance along both ray paths. Setting the paths equal and taking the square root yields the travel-time prediction uncertainty for the single path.
Beyond Low Rank + Sparse: Multi-scale Low Rank Matrix Decomposition
Ong, Frank; Lustig, Michael
2016-01-01
We present a natural generalization of the recent low rank + sparse matrix decomposition and consider the decomposition of matrices into components of multiple scales. Such decomposition is well motivated in practice as data matrices often exhibit local correlations in multiple scales. Concretely, we propose a multi-scale low rank modeling that represents a data matrix as a sum of block-wise low rank matrices with increasing scales of block sizes. We then consider the inverse problem of decomposing the data matrix into its multi-scale low rank components and approach the problem via a convex formulation. Theoretically, we show that under various incoherence conditions, the convex program recovers the multi-scale low rank components either exactly or approximately. Practically, we provide guidance on selecting the regularization parameters and incorporate cycle spinning to reduce blocking artifacts. Experimentally, we show that the multi-scale low rank decomposition provides a more intuitive decomposition than conventional low rank methods and demonstrate its effectiveness in four applications, including illumination normalization for face images, motion separation for surveillance videos, multi-scale modeling of the dynamic contrast enhanced magnetic resonance imaging and collaborative filtering exploiting age information. PMID:28450978
Impurity characterization of magnesium diuranate using simultaneous TG-DTA-FTIR measurements
NASA Astrophysics Data System (ADS)
Raje, Naina; Ghonge, Darshana K.; Hemantha Rao, G. V. S.; Reddy, A. V. R.
2013-05-01
The current studies describe the application of simultaneous thermogravimetry-differential thermal analysis - evolved gas analysis techniques for the compositional characterization of magnesium diuranate (MDU) with respect to the impurities present in the matrix. The stoichiometric composition of MDU was identified as MgU2O7·3H2O. The presence of carbonate and sulphate as impurities in the matrix was confirmed through evolved gas analysis using Fourier transform infrared spectrometry detection. The carbon and magnesium hydroxide contents present as impurities in magnesium diuranate have been determined quantitatively using the TG and FTIR techniques, and the results are in good agreement. Powder X-ray diffraction analysis of magnesium diuranate suggests the presence of magnesium hydroxide as an impurity in the matrix. These studies also confirm the formation of magnesium uranate, uranium sesquioxide and uranium dioxide above 1000 °C, due to the decomposition of magnesium diuranate.
ARMA Cholesky Factor Models for the Covariance Matrix of Linear Models.
Lee, Keunbaik; Baek, Changryong; Daniels, Michael J
2017-11-01
In longitudinal studies, serial dependence of repeated outcomes must be taken into account to make correct inferences on covariate effects. As such, care must be taken in modeling the covariance matrix. However, estimation of the covariance matrix is challenging because there are many parameters in the matrix and the estimated covariance matrix should be positive definite. To overcome these limitations, two Cholesky decomposition approaches have been proposed: modified Cholesky decomposition for autoregressive (AR) structure and moving average Cholesky decomposition for moving average (MA) structure. However, the correlations of repeated outcomes are often not captured parsimoniously using either approach separately. In this paper, we propose a class of flexible, nonstationary, heteroscedastic models that exploits the structure allowed by combining the AR and MA modeling of the covariance matrix, which we denote as ARMACD. We analyze a recent lung cancer study to illustrate the power of our proposed methods.
Bian, Xihui; Li, Shujuan; Lin, Ligang; Tan, Xiaoyao; Fan, Qingjie; Li, Ming
2016-06-21
Accurate prediction by the model is fundamental to the successful analysis of complex samples. To utilize the abundant information embedded in the frequency and time domains, a novel regression model is presented for quantitative analysis of hydrocarbon contents in fuel oil samples. The proposed method, named high and low frequency unfolded PLSR (HLUPLSR), integrates empirical mode decomposition (EMD) and an unfolding strategy with partial least squares regression (PLSR). In the proposed method, the original signals are first decomposed into a finite number of intrinsic mode functions (IMFs) and a residue by EMD. Secondly, the former high frequency IMFs are summed as a high frequency matrix and the latter IMFs and residue are summed as a low frequency matrix. Finally, the two matrices are unfolded to an extended matrix in the variable dimension, and then the PLSR model is built between the extended matrix and the target values. Coupled with ultraviolet (UV) spectroscopy, HLUPLSR has been applied to determine hydrocarbon contents of light gas oil and diesel fuel samples. Compared with single PLSR and other signal processing techniques, the proposed method shows superiority in prediction ability and better model interpretation. Therefore, the HLUPLSR method provides a promising tool for quantitative analysis of complex samples.
Evaluation of constraint stabilization procedures for multibody dynamical systems
NASA Technical Reports Server (NTRS)
Park, K. C.; Chiou, J. C.
1987-01-01
Comparative numerical studies of four constraint treatment techniques for the simulation of general multibody dynamic systems are presented, and results are presented for the example of a classical crank mechanism and for a simplified version of the seven-link manipulator deployment problem. The staggered stabilization technique (Park, 1986) is found to yield improved accuracy and robustness over Baumgarte's (1972) technique, the singular decomposition technique (Walton and Steeves, 1969), and the penalty technique (Lotstedt, 1979). Furthermore, the staggered stabilization technique offers software modularity, and the only data each solution module needs to exchange with the other is a set of vectors plus a common module to generate the gradient matrix of the constraints, B.
A Taxonomy of Latent Structure Assumptions for Probability Matrix Decomposition Models.
ERIC Educational Resources Information Center
Meulders, Michel; De Boeck, Paul; Van Mechelen, Iven
2003-01-01
Proposed a taxonomy of latent structure assumptions for probability matrix decomposition (PMD) that includes the original PMD model and a three-way extension of the multiple classification latent class model. Simulation study results show the usefulness of the taxonomy. (SLD)
Near-lossless multichannel EEG compression based on matrix and tensor decompositions.
Dauwels, Justin; Srinivasan, K; Reddy, M Ramasubba; Cichocki, Andrzej
2013-05-01
A novel near-lossless compression algorithm for multichannel electroencephalogram (MC-EEG) is proposed based on matrix/tensor decomposition models. MC-EEG is represented in suitable multiway (multidimensional) forms to efficiently exploit temporal and spatial correlations simultaneously. Several matrix/tensor decomposition models are analyzed in view of efficient decorrelation of the multiway forms of MC-EEG. A compression algorithm is built based on the principle of “lossy plus residual coding,” consisting of a matrix/tensor decomposition-based coder in the lossy layer followed by arithmetic coding in the residual layer. This approach guarantees a specifiable maximum absolute error between original and reconstructed signals. The compression algorithm is applied to three different scalp EEG datasets and an intracranial EEG dataset, each with different sampling rate and resolution. The proposed algorithm achieves attractive compression ratios compared to compressing individual channels separately. For similar compression ratios, the proposed algorithm achieves nearly fivefold lower average error compared to a similar wavelet-based volumetric MC-EEG compression algorithm.
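A minimal sketch of the "lossy plus residual coding" principle under illustrative stand-ins for the signal and the lossy layer: uniformly quantizing the residual with step 2·emax bounds the absolute reconstruction error by emax.

```python
# Sketch: near-lossless coding with a guaranteed maximum absolute error.
import numpy as np

rng = np.random.default_rng(1)
x = np.cumsum(rng.standard_normal(1000))      # stand-in for an EEG channel
lossy = np.round(x)                           # stand-in for the lossy layer
emax = 0.05                                   # specified max absolute error

residual = x - lossy
q = np.round(residual / (2 * emax))           # integers for the entropy coder
x_hat = lossy + q * (2 * emax)                # decoder-side reconstruction
print(np.max(np.abs(x - x_hat)) <= emax)      # True: near-lossless guarantee
```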
DOE Office of Scientific and Technical Information (OSTI.GOV)
Niu, T; Dong, X; Petrongolo, M
Purpose: Dual energy CT (DECT) imaging plays an important role in advanced imaging applications due to its material decomposition capability. Direct decomposition via matrix inversion suffers from significant degradation of image signal-to-noise ratios, which reduces clinical value. Existing de-noising algorithms achieve suboptimal performance since they suppress image noise either before or after the decomposition and do not fully explore the noise statistical properties of the decomposition process. We propose an iterative image-domain decomposition method for noise suppression in DECT, using the full variance-covariance matrix of the decomposed images. Methods: The proposed algorithm is formulated in the form of least-square estimation with smoothness regularization. It includes the inverse of the estimated variance-covariance matrix of the decomposed images as the penalty weight in the least-square term. Performance is evaluated using an evaluation phantom (Catphan 600) and an anthropomorphic head phantom. Results are compared to those generated using direct matrix inversion with no noise suppression, a de-noising method applied on the decomposed images, and an existing algorithm with similar formulation but with an edge-preserving regularization term. Results: On the Catphan phantom, our method retains the same spatial resolution as the CT images before decomposition while reducing the noise standard deviation of decomposed images by over 98%. The other methods either degrade spatial resolution or achieve less low-contrast detectability. Also, our method yields lower electron density measurement error than direct matrix inversion and reduces error variation by over 97%. On the head phantom, it reduces the noise standard deviation of decomposed images by over 97% without blurring the sinus structures. Conclusion: We propose an iterative image-domain decomposition method for DECT. The method combines noise suppression and material decomposition into an iterative process and achieves both goals simultaneously. The proposed algorithm shows superior performance on noise suppression with high image spatial resolution and low-contrast detectability. This work is supported by a Varian MRA grant.
NASA Astrophysics Data System (ADS)
Riasati, Vahid R.
2016-05-01
In this work, the data covariance matrix is diagonalized to provide an orthogonal basis set using the eigenvectors of the data. The eigenvector decomposition of the data is transformed and filtered in the transform domain to truncate the data to robust features related to a specified set of targets. These truncated eigenfeatures are then combined and reconstructed for use in a composite filter and consequently utilized for the automatic target detection of the same class of targets. The results of testing the current technique are evaluated using the peak-correlation and peak-correlation-energy metrics and are presented in this work. The inverse-transformed eigenbases of the current technique may be thought of as an injected sparsity that minimizes the data needed to represent the skeletal data-structure information associated with the set of targets under consideration.
Glove-based approach to online signature verification.
Kamel, Nidal S; Sayeed, Shohel; Ellis, Grant A
2008-06-01
Utilizing the multiple degrees of freedom offered by the data glove for each finger and the hand, a novel online signature verification system using the Singular Value Decomposition (SVD) numerical tool for signature classification and verification is presented. The proposed technique is based on using the SVD to find the r singular vectors that sense the maximal energy of the glove data matrix A, called the principal subspace, so that the effective dimensionality of A can be reduced. Having modeled the data glove signature through its r-principal subspace, signature authentication is performed by finding the angles between the different subspaces. A demonstration of the data glove is presented as an effective high-bandwidth data entry device for signature verification. This SVD-based signature verification technique is tested and its performance is shown to be able to recognize forgery signatures with a false acceptance rate of less than 1.2%.
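A minimal sketch of the subspace modeling and angle-based comparison described above; the glove-data dimensions, subspace rank r, and test signals are illustrative assumptions.

```python
# Sketch: model each signature as the r-dimensional principal subspace of
# its glove-data matrix, then compare subspaces via principal angles.
import numpy as np
from scipy.linalg import subspace_angles

def principal_subspace(A, r):
    U, _, _ = np.linalg.svd(A, full_matrices=False)
    return U[:, :r]                 # r singular vectors with maximal energy

rng = np.random.default_rng(2)
enrolled = rng.standard_normal((22, 300))          # 22 glove channels x time
genuine = enrolled + 0.05 * rng.standard_normal((22, 300))
forgery = rng.standard_normal((22, 300))

S = principal_subspace(enrolled, r=3)
for test in (genuine, forgery):
    angles = subspace_angles(S, principal_subspace(test, r=3))
    print(np.degrees(angles).max())   # small for genuine, large for forgery
```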
Sparse Gaussian elimination with controlled fill-in on a shared memory multiprocessor
NASA Technical Reports Server (NTRS)
Alaghband, Gita; Jordan, Harry F.
1989-01-01
It is shown that in sparse matrices arising from electronic circuits, it is possible to do computations on many diagonal elements simultaneously. A technique for obtaining an ordered compatible set directly from the ordered incompatible table is given. The ordering is based on the Markowitz number of the pivot candidates. This technique generates a set of compatible pivots with the property of generating few fills. A novel heuristic algorithm is presented that combines the idea of an order-compatible set with a limited binary tree search to generate several sets of compatible pivots in linear time. An elimination set for reducing the matrix is generated and selected on the basis of a minimum Markowitz sum number. The parallel pivoting technique presented is a stepwise algorithm and can be applied to any submatrix of the original matrix. Thus, it is not a preordering of the sparse matrix and is applied dynamically as the decomposition proceeds. Parameters are suggested to obtain a balance between parallelism and fill-ins. Results of applying the proposed algorithms on several large application matrices using the HEP multiprocessor (Kowalik, 1985) are presented and analyzed.
Singular value decomposition utilizing parallel algorithms on graphical processors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kotas, Charlotte W; Barhen, Jacob
2011-01-01
One of the current challenges in underwater acoustic array signal processing is the detection of quiet targets in the presence of noise. In order to enable robust detection, one of the key processing steps requires data and replica whitening. This, in turn, involves the eigen-decomposition of the sample spectral matrix, C_x = (1/K) Σ_k X(k)X^H(k), where X(k) denotes a single frequency snapshot with an element for each element of the array. By employing the singular value decomposition (SVD) method, the eigenvectors and eigenvalues can be determined directly from the data without computing the sample covariance matrix, reducing the computational requirements for a given level of accuracy (van Trees, Optimum Array Processing). (Recall that the SVD of a complex matrix A involves determining V, Σ, and U such that A = UΣV^H, where U and V are orthonormal and Σ is a positive, real, diagonal matrix containing the singular values of A. U and V are the eigenvectors of AA^H and A^HA, respectively, while the singular values are the square roots of the eigenvalues of AA^H.) Because it is desirable to be able to compute these quantities in real time, an efficient technique for computing the SVD is vital. In addition, emerging multicore processors like graphical processing units (GPUs) are bringing parallel processing capabilities to an ever increasing number of users. Since the computational tasks involved in array signal processing are well suited for parallelization, it is expected that these computations will be implemented using GPUs as soon as users have the necessary computational tools available to them. Thus, it is important to have an SVD algorithm that is suitable for these processors. This work explores the effectiveness of two different parallel SVD implementations on an NVIDIA Tesla C2050 GPU (14 multiprocessors, 32 cores per multiprocessor, 1.15 GHz clock speed). The first algorithm is based on a two-step algorithm which bidiagonalizes the matrix using Householder transformations, and then diagonalizes the intermediate bidiagonal matrix through implicit QR shifts. This is similar to that implemented for real matrices by Lahabar and Narayanan ("Singular Value Decomposition on GPU using CUDA", IEEE International Parallel Distributed Processing Symposium 2009). The implementation is done in a hybrid manner, with the bidiagonalization stage done using the GPU while the diagonalization stage is done using the CPU, with the GPU used to update the U and V matrices. The second algorithm is based on a one-sided Jacobi scheme utilizing a sequence of pair-wise column orthogonalizations such that A is replaced by AV until the resulting matrix is sufficiently orthogonal (that is, equal to UΣ). V is obtained from the sequence of orthogonalizations, while Σ can be found from the square root of the diagonal elements of A^HA and, once Σ is known, U can be found by column scaling the resulting matrix. These implementations utilize CUDA Fortran and NVIDIA's CUBLAS library. The primary goal of this study is to quantify the comparative performance of these two techniques against themselves and other standard implementations (for example, MATLAB). Considering that there is significant overhead associated with transferring data to the GPU and with synchronization between the GPU and the host CPU, it is also important to understand when it is worthwhile to use the GPU in terms of the matrix size and number of concurrent SVDs to be calculated.
Planetary Gears Feature Extraction and Fault Diagnosis Method Based on VMD and CNN.
Liu, Chang; Cheng, Gang; Chen, Xihui; Pang, Yusong
2018-05-11
Given local weak feature information, a novel feature extraction and fault diagnosis method for planetary gears based on variational mode decomposition (VMD), singular value decomposition (SVD), and convolutional neural network (CNN) is proposed. VMD was used to decompose the original vibration signal into mode components. The mode matrix was partitioned into a number of submatrices, and the local feature information contained in each submatrix was extracted as a singular value vector using SVD. The singular value vector matrix corresponding to the current fault state was constructed according to the location of each submatrix. Finally, by training a CNN using singular value vector matrices as inputs, planetary gear fault state identification and classification was achieved. The experimental results confirm that the proposed method can successfully extract local weak feature information and accurately identify different faults. The singular value vector matrices of different fault states have a distinct difference in element size and waveform. The VMD-based partition extraction method is better than ensemble empirical mode decomposition (EEMD), resulting in a higher CNN total recognition rate of 100% with fewer training iterations (14). Further analysis demonstrated that the method can also be applied to the degradation recognition of planetary gears. Thus, the proposed method is an effective feature extraction and fault diagnosis technique for planetary gears. PMID:29751671
NASA Astrophysics Data System (ADS)
Hipp, J. R.; Encarnacao, A.; Ballard, S.; Young, C. J.; Phillips, W. S.; Begnaud, M. L.
2011-12-01
Recently our combined SNL-LANL research team has succeeded in developing a global, seamless 3D tomographic P-velocity model (SALSA3D) that provides superior first P travel time predictions at both regional and teleseismic distances. However, given the variable data quality and uneven data sampling associated with this type of model, it is essential that there be a means to calculate high-quality estimates of the path-dependent variance and covariance associated with the predicted travel times of ray paths through the model. In this paper, we show a methodology for accomplishing this by exploiting the full model covariance matrix. Our model has on the order of 1/2 million nodes, so the challenge in calculating the covariance matrix is formidable: 0.9 TB storage for 1/2 of a symmetric matrix, necessitating an Out-Of-Core (OOC) blocked matrix solution technique. With our approach the tomography matrix (G, which includes Tikhonov regularization terms) is multiplied by its transpose (G^TG) and written in a blocked sub-matrix fashion. We employ a distributed parallel solution paradigm that solves for (G^TG)^-1 by assigning blocks to individual processing nodes for matrix decomposition, update, and scaling operations. We first find the Cholesky decomposition of G^TG, which is subsequently inverted. Next, we employ OOC matrix multiplication methods to calculate the model covariance matrix from (G^TG)^-1 and an assumed data covariance matrix. Given the model covariance matrix, we solve for the travel-time covariance associated with arbitrary ray paths by integrating the model covariance along both ray paths. Setting the paths equal gives the variance for that path. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
NASA Astrophysics Data System (ADS)
Hipp, J. R.; Ballard, S.; Begnaud, M. L.; Encarnacao, A. V.; Young, C. J.; Phillips, W. S.
2015-12-01
Recently our combined SNL-LANL research team has succeeded in developing a global, seamless 3D tomographic P- and S-velocity model (SALSA3D) that provides superior first P and first S travel time predictions at both regional and teleseismic distances. However, given the variable data quality and uneven data sampling associated with this type of model, it is essential that there be a means to calculate high-quality estimates of the path-dependent variance and covariance associated with the predicted travel times of ray paths through the model. In this paper, we describe a methodology for accomplishing this by exploiting the full model covariance matrix and show examples of path-dependent travel time prediction uncertainty computed from our latest tomographic model. Typical global 3D SALSA3D models have on the order of 1/2 million nodes, so the challenge in calculating the covariance matrix is formidable: 0.9 TB storage for 1/2 of a symmetric matrix, necessitating an Out-Of-Core (OOC) blocked matrix solution technique. With our approach the tomography matrix (G, which includes a prior model covariance constraint) is multiplied by its transpose (G^TG) and written in a blocked sub-matrix fashion. We employ a distributed parallel solution paradigm that solves for (G^TG)^-1 by assigning blocks to individual processing nodes for matrix decomposition, update, and scaling operations. We first find the Cholesky decomposition of G^TG, which is subsequently inverted. Next, we employ OOC matrix multiplication methods to calculate the model covariance matrix from (G^TG)^-1 and an assumed data covariance matrix. Given the model covariance matrix, we solve for the travel-time covariance associated with arbitrary ray paths by summing the model covariance along both ray paths. Setting the paths equal and taking the square root yields the travel-time prediction uncertainty for the single path.
Cheng, Yih-Chun; Tsai, Pei-Yun; Huang, Ming-Hao
2016-05-19
Low-complexity compressed sensing (CS) techniques for monitoring electrocardiogram (ECG) signals in wireless body sensor networks (WBSNs) are presented. The prior probability of ECG sparsity in the wavelet domain is first exploited. Then, a variable orthogonal multi-matching pursuit (vOMMP) algorithm consisting of two phases is proposed. In the first phase, an orthogonal matching pursuit (OMP) algorithm is adopted to effectively augment the support set with reliable indices, and in the second phase, orthogonal multi-matching pursuit (OMMP) is employed to rescue the missing indices. The reconstruction performance is thus enhanced by the prior information and the vOMMP algorithm. Furthermore, the computation-intensive pseudo-inverse operation is simplified by a matrix-inversion-free (MIF) technique based on QR decomposition. The vOMMP-MIF CS decoder is then implemented in 90 nm CMOS technology. The QR decomposition is accomplished by two systolic arrays working in parallel. The implementation supports three settings for obtaining 40, 44, and 48 coefficients in the sparse vector. From the measurement results, the power consumption is 11.7 mW at 0.9 V and 12 MHz. Compared to prior chip implementations, our design shows good hardware efficiency and is suitable for low-energy applications.
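To make the pursuit-plus-QR idea concrete, here is a basic OMP sketch whose least-squares step is solved through a QR factorization rather than an explicit pseudo-inverse; it is a generic illustration, not the vOMMP-MIF hardware design:

```python
import numpy as np

def omp_qr(Phi, y, k):
    """Basic orthogonal matching pursuit; the least-squares step uses a thin
    QR factorization instead of an explicit pseudo-inverse, in the spirit of
    the matrix-inversion-free decoder described above (a sketch only)."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(Phi.T @ residual)))  # most correlated atom
        if j not in support:
            support.append(j)
        Q, R = np.linalg.qr(Phi[:, support])          # thin QR of active atoms
        coef = np.linalg.solve(R, Q.T @ y)            # back-substitution only
        residual = y - Phi[:, support] @ coef
    x = np.zeros(Phi.shape[1])
    x[support] = coef
    return x
```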
Navarro, Pedro J; Fernández-Isla, Carlos; Alcover, Pedro María; Suardíaz, Juan
2016-07-27
This paper presents a robust method for defect detection in textures: entropy-based automatic selection of the wavelet decomposition level (EADL), based on a wavelet reconstruction scheme, for detecting defects in a wide variety of structural and statistical textures. Two main features are presented. The first is an original use of the normalized absolute function value (NABS), calculated from the wavelet coefficients at different decomposition levels, to identify textures where the defect can be isolated by eliminating the texture pattern at the first decomposition level. The second is the use of Shannon's entropy, calculated over detail subimages, for automatic selection of the band for image reconstruction; unlike other techniques, such as those based on the co-occurrence matrix or on energy calculation, this yields a lower decomposition level, avoiding excessive degradation of the image and allowing more accurate defect segmentation. A metric analysis of the results of the proposed method with nine different thresholding algorithms determined that selecting the appropriate thresholding method is important to achieve optimum performance in defect detection. Consequently, different thresholding algorithms are proposed depending on the type of texture.
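A small sketch of entropy-driven level selection, assuming the PyWavelets package and a minimum-entropy criterion over the detail bands; the paper's exact EADL criterion may differ:

```python
import numpy as np
import pywt

def entropy_level(image, wavelet="db2", max_level=4):
    """Pick a wavelet decomposition level by Shannon entropy of the detail
    subimages, a sketch of the selection idea described above."""
    best_level, best_h = 1, np.inf
    coeffs = pywt.wavedec2(image, wavelet, level=max_level)
    for i, details in enumerate(coeffs[1:]):
        level = max_level - i                 # wavedec2 lists coarsest details first
        d = np.abs(np.concatenate([c.ravel() for c in details]))
        p = d / d.sum() if d.sum() > 0 else np.ones_like(d) / d.size
        h = -np.sum(p * np.log2(p + 1e-12))   # Shannon entropy of the band
        if h < best_h:
            best_h, best_level = h, level
    return best_level
```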
Kohn, Lucas; Tschirsich, Ferdinand; Keck, Maximilian; Plenio, Martin B; Tamascelli, Dario; Montangero, Simone
2018-01-01
We provide evidence that randomized low-rank factorization is a powerful tool for the determination of the ground-state properties of low-dimensional lattice Hamiltonians through tensor network techniques. In particular, we show that randomized matrix factorization outperforms truncated singular value decomposition based on state-of-the-art deterministic routines in time-evolving block decimation (TEBD)- and density matrix renormalization group (DMRG)-style simulations, even when the system under study gets close to a phase transition: We report linear speedups in the bond or local dimension of up to 24 times in quasi-two-dimensional cylindrical systems.
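For context, a generic Halko-style randomized truncated SVD of the kind benchmarked here can be sketched in a few lines; this is a textbook construction, not the authors' TEBD/DMRG code:

```python
import numpy as np

def randomized_svd(A, rank, n_oversample=10, n_iter=2):
    """Randomized truncated SVD: sketch the range of A with a random test
    matrix, then do a small deterministic SVD on the projected matrix."""
    rng = np.random.default_rng(0)
    Omega = rng.normal(size=(A.shape[1], rank + n_oversample))
    Y = A @ Omega
    for _ in range(n_iter):                  # power iterations sharpen the range
        Y = A @ (A.T @ Y)
    Q, _ = np.linalg.qr(Y)
    B = Q.T @ A                              # small projected matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :rank], s[:rank], Vt[:rank]
```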
Palmprint verification using Lagrangian decomposition and invariant interest points
NASA Astrophysics Data System (ADS)
Gupta, P.; Rattani, A.; Kisku, D. R.; Hwang, C. J.; Sing, J. K.
2011-06-01
This paper presents a palmprint-based verification system using SIFT features and a Lagrangian network graph technique. SIFT is employed for feature extraction from palmprint images, where the region of interest (ROI), extracted from the wide palm texture at the preprocessing stage, is used for invariant point extraction. Finally, identity is established by finding the permutation matrix for a pair of reference and probe palm graphs drawn on the extracted SIFT features; the permutation matrix is used to minimize the distance between the two graphs. The proposed system has been tested on the CASIA and IITK palmprint databases, and experimental results reveal the effectiveness and robustness of the system.
An invariant asymptotic formula for solutions of second-order linear ODE's
NASA Technical Reports Server (NTRS)
Gingold, H.
1988-01-01
An invariant-matrix technique for the approximate solution of second-order ordinary differential equations (ODEs) of the form y'' = φ(x)y is developed analytically and demonstrated. A set of linear transformations for the companion matrix differential system is proposed; the diagonalization procedure employed in the final stage of the asymptotic decomposition is explained; and a scalar formulation of solutions for the ODEs is obtained. Several typical ODEs are analyzed, and it is shown that the Liouville-Green or WKB approximation is a special case of the present formula, which provides an approximation valid for the entire interval (0, ∞).
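For reference, the classical Liouville-Green (WKB) approximation that the abstract identifies as a special case takes the familiar textbook form:

```latex
% Liouville-Green (WKB) approximation for y'' = \phi(x)\, y:
y(x) \approx \phi(x)^{-1/4} \exp\!\Big( \pm \int^{x} \sqrt{\phi(t)}\, dt \Big)
```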
Repeated decompositions reveal the stability of infomax decomposition of fMRI data
Duann, Jeng-Ren; Jung, Tzyy-Ping; Sejnowski, Terrence J.; Makeig, Scott
2010-01-01
In this study, we decomposed 12 fMRI data sets from six subjects each 101 times using the infomax algorithm. The first decomposition was taken as a reference decomposition; the others were used to form a component matrix of 100 by 100 components. Equivalence relations between components in this matrix, defined as maximum spatial correlations to the components of the reference decomposition, were found by the Hungarian sorting method and used to form 100 equivalence classes for each data set. We then tested the reproducibility of the matched components in the equivalence classes using uncertainty measures based on component distributions, time courses, and ROC curves. Infomax ICA rarely failed to derive nearly the same components in different decompositions. Very few components per data set were poorly reproduced, even using vector angle uncertainty measures stricter than correlation and detection theory measures. PMID:17281453
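The matching step described above can be sketched with the Hungarian method from SciPy, pairing components by maximum absolute spatial correlation; this is an illustration of the idea, not the study's exact pipeline:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_components(ref_maps, new_maps):
    """Pair components of a new decomposition with reference components by
    maximum spatial correlation via the Hungarian method. Rows of each
    array are component maps over the same voxels."""
    n_ref = len(ref_maps)
    r = np.corrcoef(ref_maps, new_maps)[:n_ref, n_ref:]
    rows, cols = linear_sum_assignment(-np.abs(r))  # maximize |correlation|
    return cols, np.abs(r[rows, cols])              # permutation and match scores
```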
Curtis, Tyler E; Roeder, Ryan K
2017-10-01
Advances in photon-counting detectors have enabled quantitative material decomposition using multi-energy or spectral computed tomography (CT). Supervised methods for material decomposition utilize an estimated attenuation for each material of interest at each photon energy level, which must be calibrated based upon calculated or measured values for known compositions. Measurements using a calibration phantom can advantageously account for system-specific noise, but the effect of calibration methods on the material basis matrix and subsequent quantitative material decomposition has not been experimentally investigated. Therefore, the objective of this study was to investigate the influence of the range and number of contrast agent concentrations within a modular calibration phantom on the accuracy of quantitative material decomposition in the image domain. Gadolinium was chosen as a model contrast agent in imaging phantoms, which also contained bone tissue and water as negative controls. The maximum gadolinium concentration (30, 60, and 90 mM) and total number of concentrations (2, 4, and 7) were independently varied to systematically investigate effects of the material basis matrix and scaling factor calibration on the quantitative (root mean squared error, RMSE) and spatial (sensitivity and specificity) accuracy of material decomposition. Images of calibration and sample phantoms were acquired using a commercially available photon-counting spectral micro-CT system with five energy bins selected to normalize photon counts and leverage the contrast agent k-edge. Material decomposition of gadolinium, calcium, and water was performed for each calibration method using a maximum a posteriori estimator. Both the quantitative and spatial accuracy of material decomposition were most improved by using an increased maximum gadolinium concentration (range) in the basis matrix calibration; the effects of using a greater number of concentrations were relatively small in magnitude by comparison. The material basis matrix calibration was more sensitive to changes in the calibration methods than the scaling factor calibration. The material basis matrix calibration significantly influenced both the quantitative and spatial accuracy of material decomposition, while the scaling factor calibration influenced quantitative but not spatial accuracy. Importantly, the median RMSE of material decomposition was as low as ~1.5 mM (~0.24 mg/mL gadolinium), which was similar in magnitude to that measured by optical spectroscopy on the same samples. The accuracy of quantitative material decomposition in photon-counting spectral CT was significantly influenced by calibration methods which must therefore be carefully considered for the intended diagnostic imaging application. © 2017 American Association of Physicists in Medicine.
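As a toy illustration of image-domain decomposition with a calibrated basis matrix, consider a single voxel and a plain least-squares solve in place of the paper's maximum a posteriori estimator; all numbers below are invented:

```python
import numpy as np

# Toy image-domain material decomposition: attenuation in each energy bin is
# modeled as M @ c, with M the material basis matrix calibrated from known
# concentrations (columns: gadolinium, calcium, water; rows: 5 energy bins).
M = np.array([[0.90, 0.25, 0.18],
              [0.70, 0.22, 0.17],
              [0.55, 0.20, 0.16],
              [0.80, 0.19, 0.15],
              [0.60, 0.17, 0.14]])
measured = M @ np.array([0.03, 0.1, 0.87]) + 1e-3   # one voxel, small perturbation
c, *_ = np.linalg.lstsq(M, measured, rcond=None)    # least-squares decomposition
```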
NASA Astrophysics Data System (ADS)
Kravvaritis, Christos; Mitrouli, Marilena
2009-02-01
This paper studies the possibility of efficiently calculating compounds of real matrices which have a special form or structure. The usefulness of such an effort lies in the fact that the computation of compound matrices, which is generally impractical due to its high complexity, is encountered in several applications. A new approach for computing the singular value decompositions (SVDs) of the compounds of a matrix is proposed by establishing the equality (up to a permutation) between the compounds of the SVD of a matrix and the SVDs of the compounds of the matrix. The superiority of the new idea over the standard method is demonstrated. Similar approaches, with some limitations, can be adopted for other matrix factorizations, too. Furthermore, formulas for the n - 1 compounds of Hadamard matrices are derived, which avoid the strenuous computation of the numerous large determinants involved. Finally, a combinatorial counting technique for finding the compounds of diagonal matrices is illustrated.
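The central identity, that the SVD of a compound equals (up to ordering) the compound of the SVD, can be checked numerically with a brute-force compound built from minors; this sketch is for small matrices only, since avoiding exactly this cost is the paper's point:

```python
import numpy as np
from itertools import combinations

def compound(A, k):
    """k-th compound matrix: determinants of all k-by-k minors, with rows
    and columns indexed by k-subsets in lexicographic order."""
    idx = list(combinations(range(A.shape[0]), k))
    return np.array([[np.linalg.det(A[np.ix_(r, c)]) for c in idx] for r in idx])

A = np.random.default_rng(1).normal(size=(4, 4))
s = np.linalg.svd(A, compute_uv=False)
s2 = np.linalg.svd(compound(A, 2), compute_uv=False)
prods = sorted((s[i] * s[j] for i, j in combinations(range(4), 2)), reverse=True)
assert np.allclose(s2, prods)   # SVD of the compound = compound of the SVD
```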
Polar decomposition for attitude determination from vector observations
NASA Technical Reports Server (NTRS)
Bar-Itzhack, Itzhack Y.
1993-01-01
This work treats the problem of weighted least squares fitting of a 3D Euclidean-coordinate transformation matrix to a set of unit vectors measured in the reference and transformed coordinates. A closed-form analytic solution to the problem is re-derived. The fact that the solution is the closest orthogonal matrix to some matrix defined on the measured vectors and their weights is clearly demonstrated. Several known algorithms for computing the analytic closed-form solution are considered. An algorithm is discussed which is based on the polar decomposition of matrices into the closest unitary matrix to the decomposed matrix and a Hermitian matrix. A somewhat longer improved algorithm is suggested too. A comparison of several algorithms is carried out using simulated data as well as real data from the Upper Atmosphere Research Satellite. The comparison is based on accuracy and time consumption. It is concluded that the algorithms based on polar decomposition yield a simple although somewhat less accurate solution. The precision of the latter algorithms increases with the number of measured vectors and with the accuracy of their measurement.
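A compact sketch of the polar-decomposition route using SciPy: the attitude profile matrix is the weighted sum of outer products of the measured vector pairs, and its closest orthogonal matrix is the orthogonal polar factor; handling of a possible reflection is omitted here:

```python
import numpy as np
from scipy.linalg import polar

def attitude_polar(b_vecs, r_vecs, weights):
    """Weighted least-squares attitude from vector observations: form the
    attitude profile matrix and take the orthogonal factor of its polar
    decomposition (a sketch of the approach discussed above)."""
    B = sum(w * np.outer(b, r) for w, b, r in zip(weights, b_vecs, r_vecs))
    U, _ = polar(B)     # B = U H with U orthogonal, H symmetric PSD
    return U            # estimated rotation (up to a possible reflection)
```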
Smallwood, D. O.
1996-01-01
It is shown that the usual method for estimating the coherence functions (ordinary, partial, and multiple) for a general multiple-input/multiple-output problem can be expressed as a modified form of the Cholesky decomposition of the cross-spectral density matrix of the input and output records. The results can be equivalently obtained using the singular value decomposition (SVD) of the cross-spectral density matrix. Using the SVD suggests a new form of fractional coherence. The formulation as an SVD problem also suggests a way to order the inputs when a natural physical order of the inputs is absent.
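For one output and several inputs at a single frequency, the multiple coherence that such a decomposition reproduces has the standard closed form below; this is a generic sketch, the paper's contribution being the Cholesky/SVD reformulation:

```python
import numpy as np

def multiple_coherence(Gxx, Gxy, Gyy):
    """Multiple coherence of output y on inputs x at one frequency, from
    blocks of the cross-spectral density matrix: Gxx is the input CSD
    matrix, Gxy the input-output cross-spectra, Gyy the output autospectrum."""
    return np.real(Gxy.conj().T @ np.linalg.solve(Gxx, Gxy) / Gyy).item()
```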
NASA Astrophysics Data System (ADS)
Daftardar-Gejji, Varsha; Jafari, Hossein
2005-01-01
The Adomian decomposition method has been employed to obtain solutions of a system of fractional differential equations. Convergence of the method has been discussed with some illustrative examples. In particular, for the initial value problem, where A = [a_ij] is a real square matrix, the solution is expressed in terms of the multivariate Mittag-Leffler function E_{(α_1,...,α_n),1}, defined for matrix arguments, and the matrices A_i having ith row [a_i1 ... a_in] and all other entries zero. Fractional oscillation and Bagley-Torvik equations are solved as illustrative examples.
Model-size reduction for the buckling and vibration analyses of anisotropic panels
NASA Technical Reports Server (NTRS)
Noor, A. K.; Whitworth, S. L.
1986-01-01
A computational procedure is presented for reducing the size of the model used in the buckling and vibration analyses of symmetric anisotropic panels to that of the corresponding orthotropic model. The key elements of the procedure are the application of an operator splitting technique through the decomposition of the material stiffness matrix of the panel into the sum of orthotropic and nonorthotropic (anisotropic) parts and the use of a reduction method through successive application of the finite element method and the classical Rayleigh-Ritz technique. The effectiveness of the procedure is demonstrated by numerical examples.
Bienvenu, François; Akçay, Erol; Legendre, Stéphane; McCandlish, David M
2017-06-01
Matrix projection models are a central tool in many areas of population biology. In most applications, one starts from the projection matrix to quantify the asymptotic growth rate of the population (the dominant eigenvalue), the stable stage distribution, and the reproductive values (the dominant right and left eigenvectors, respectively). Any primitive projection matrix also has an associated ergodic Markov chain that contains information about the genealogy of the population. In this paper, we show that these facts can be used to specify any matrix population model as a triple consisting of the ergodic Markov matrix, the dominant eigenvalue and one of the corresponding eigenvectors. This decomposition of the projection matrix separates properties associated with lineages from those associated with individuals. It also clarifies the relationships between many quantities commonly used to describe such models, including the relationship between eigenvalue sensitivities and elasticities. We illustrate the utility of such a decomposition by introducing a new method for aggregating classes in a matrix population model to produce a simpler model with a smaller number of classes. Unlike the standard method, our method has the advantage of preserving reproductive values and elasticities. It also has conceptually satisfying properties such as commuting with changes of units. Copyright © 2017 Elsevier Inc. All rights reserved.
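The three basic ingredients of the decomposition (growth rate, stable stage distribution, and reproductive values) can be extracted from a toy Leslie matrix by standard eigen-analysis, as sketched below; the paper's Markov-matrix construction itself is not reproduced here:

```python
import numpy as np

# Dominant eigenvalue and eigenvectors of a toy 3-stage Leslie matrix.
A = np.array([[0.0, 1.5, 2.0],
              [0.5, 0.0, 0.0],
              [0.0, 0.8, 0.0]])
vals, vecs = np.linalg.eig(A)
i = np.argmax(vals.real)
lam = vals[i].real                           # asymptotic growth rate
w = np.abs(vecs[:, i].real); w /= w.sum()    # stable stage distribution
vals_l, vecs_l = np.linalg.eig(A.T)
v = np.abs(vecs_l[:, np.argmax(vals_l.real)].real)
v /= v @ w                                   # reproductive values, scaled so v.w = 1
```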
Dong, Yang; Qi, Ji; He, Honghui; He, Chao; Liu, Shaoxiong; Wu, Jian; Elson, Daniel S; Ma, Hui
2017-08-01
Polarization imaging has been recognized as a potentially powerful technique for probing the microstructural information and optical properties of complex biological specimens. Recently, we have reported a Mueller matrix microscope by adding the polarization state generator and analyzer (PSG and PSA) to a commercial transmission-light microscope, and applied it to differentiate human liver and cervical cancerous tissues with fibrosis. In this paper, we apply the Mueller matrix microscope for quantitative detection of human breast ductal carcinoma samples at different stages. The Mueller matrix polar decomposition and transformation parameters of the breast ductal tissues in different regions and at different stages are calculated and analyzed. For more quantitative comparisons, several widely-used image texture feature parameters are also calculated to characterize the difference in the polarimetric images. The experimental results indicate that the Mueller matrix microscope and the polarization parameters can facilitate the quantitative detection of breast ductal carcinoma tissues at different stages.
NASA Astrophysics Data System (ADS)
Dehghan, Mehdi; Mohammadi, Vahid
2017-08-01
In this research, we investigate the numerical solution of nonlinear Schrödinger equations in two and three dimensions. The numerical meshless method used here is the RBF-FD technique. The main advantage of this method is the approximation of the required derivatives by a finite difference technique on each local support domain Ω_i. On each Ω_i, we solve a small linear system of algebraic equations with a conditionally positive definite interpolation matrix of order 1. This scheme is efficient, and its computational cost is the same as that of the moving least squares (MLS) approximation. A challenging issue is choosing a suitable shape parameter for the interpolation matrix. To overcome this, an algorithm established by Sarra (2012) is applied, which computes the condition number of the local interpolation matrix using the singular value decomposition (SVD) to obtain the smallest and largest singular values of that matrix. Moreover, an explicit method based on a fourth-order Runge-Kutta formula is applied to approximate the time variable; it reduces the computational cost at each time step, since no nonlinear system must be solved. To compare the RBF-FD method with another meshless technique, the moving kriging least squares (MKLS) approximation is also considered for the studied model. Our results demonstrate the ability of the present approach to solve the model investigated in this work.
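A sketch of the condition-number-driven shape-parameter search in the spirit of the Sarra (2012) algorithm cited above, assuming a Gaussian RBF and a simple multiplicative update; the original algorithm's details differ:

```python
import numpy as np

def tune_shape_parameter(nodes, kappa_target=1e12, eps=1.0, max_iter=60):
    """Adjust the RBF shape parameter until the local interpolation matrix
    has a condition number near a target, computed from the extreme
    singular values via the SVD (Gaussian RBF assumed)."""
    r2 = ((nodes[:, None, :] - nodes[None, :, :]) ** 2).sum(-1)
    for _ in range(max_iter):
        s = np.linalg.svd(np.exp(-(eps ** 2) * r2), compute_uv=False)
        kappa = s[0] / s[-1]
        if 0.1 * kappa_target < kappa < 10 * kappa_target:
            return eps
        eps *= 1.1 if kappa > kappa_target else 0.9   # sharper basis lowers kappa
    return eps
```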
Background recovery via motion-based robust principal component analysis with matrix factorization
NASA Astrophysics Data System (ADS)
Pan, Peng; Wang, Yongli; Zhou, Mingyuan; Sun, Zhipeng; He, Guoping
2018-03-01
Background recovery is a key technique in video analysis, but it still suffers from many challenges, such as camouflage, lighting changes, and diverse types of image noise. Robust principal component analysis (RPCA), which aims to recover a low-rank matrix and a sparse matrix, is a general framework for background recovery. The nuclear norm is widely used as a convex surrogate for the rank function in RPCA, which requires computing the singular value decomposition (SVD), a task that is increasingly costly as matrix sizes and ranks increase. However, matrix factorization greatly reduces the dimension of the matrix for which the SVD must be computed. Motion information has been shown to improve low-rank matrix recovery in RPCA, but this method still finds it difficult to handle original video data sets because of its batch-mode formulation and implementation. Hence, in this paper, we propose a motion-assisted RPCA model with matrix factorization (FM-RPCA) for background recovery. Moreover, an efficient linear alternating direction method of multipliers with a matrix factorization (FL-ADM) algorithm is designed for solving the proposed FM-RPCA model. Experimental results illustrate that the method provides stable results and is more efficient than the current state-of-the-art algorithms.
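As a baseline for the low-rank-plus-sparse model that FM-RPCA builds on, a plain principal component pursuit solver with singular-value and soft thresholding can be sketched as follows; this is generic RPCA, without the motion assistance or factorization proposed in the paper:

```python
import numpy as np

def rpca_pcp(D, lam=None, mu=None, n_iter=200):
    """Robust PCA by principal component pursuit with ADMM-style updates:
    alternate singular-value thresholding for the low-rank part and soft
    thresholding for the sparse part (a baseline sketch)."""
    m, n = D.shape
    lam = lam or 1.0 / np.sqrt(max(m, n))
    mu = mu or 0.25 * m * n / np.abs(D).sum()
    S = np.zeros_like(D); Y = np.zeros_like(D)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(D - S + Y / mu, full_matrices=False)
        L = U @ (np.maximum(s - 1 / mu, 0)[:, None] * Vt)     # SV thresholding
        T = D - L + Y / mu
        S = np.sign(T) * np.maximum(np.abs(T) - lam / mu, 0)  # soft threshold
        Y += mu * (D - L - S)                                 # dual update
    return L, S
```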
Visualization of x-ray computer tomography using computer-generated holography
NASA Astrophysics Data System (ADS)
Daibo, Masahiro; Tayama, Norio
1998-09-01
A theory for converting x-ray projection data directly into a hologram, obtained by combining computed tomography (CT) with the computer-generated hologram (CGH), is proposed. The purpose of this study is to offer a theory for realizing an all-electronic, high-speed, see-through 3D visualization system for application to medical diagnosis and non-destructive testing. First, the CT is expressed using the pseudo-inverse matrix obtained by the singular value decomposition. The CGH is expressed in matrix form. Next, the 'projection to hologram conversion' (PTHC) matrix is calculated as the product of the phase matrix of the CGH with the pseudo-inverse matrix of the CT. Finally, the projection vector is converted to the hologram vector directly, by multiplying the PTHC matrix with the projection vector. By incorporating holographic analog computation into CT reconstruction, the amount of calculation is drastically reduced. We demonstrate a CT cross section reconstituted by a He-Ne laser in 3D space from real x-ray projection data acquired with x-ray television equipment, using our direct conversion technique.
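The direct conversion can be demonstrated with toy matrices: the PTHC operator is the product of a CGH phase matrix with the SVD-based pseudo-inverse of the projection matrix. All shapes and matrices below are illustrative:

```python
import numpy as np

# Sketch of the projection-to-hologram conversion (PTHC) idea.
rng = np.random.default_rng(2)
n_proj, n_pix, n_holo = 120, 64, 256
C = rng.normal(size=(n_proj, n_pix))          # CT projection matrix
H = np.exp(1j * rng.uniform(0, 2 * np.pi, size=(n_holo, n_pix)))  # CGH phase matrix
PTHC = H @ np.linalg.pinv(C)                  # pseudo-inverse via the SVD
projections = C @ rng.uniform(size=n_pix)     # measured projection vector
hologram = PTHC @ projections                 # projections to hologram, directly
```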
Salient Object Detection via Structured Matrix Decomposition.
Peng, Houwen; Li, Bing; Ling, Haibin; Hu, Weiming; Xiong, Weihua; Maybank, Stephen J
2016-05-04
Low-rank recovery models have shown potential for salient object detection, where a matrix is decomposed into a low-rank matrix representing image background and a sparse matrix identifying salient objects. Two deficiencies, however, still exist. First, previous work typically assumes the elements in the sparse matrix are mutually independent, ignoring the spatial and pattern relations of image regions. Second, when the low-rank and sparse matrices are relatively coherent, e.g., when there are similarities between the salient objects and background or when the background is complicated, it is difficult for previous models to disentangle them. To address these problems, we propose a novel structured matrix decomposition model with two structural regularizations: (1) a tree-structured sparsity-inducing regularization that captures the image structure and enforces patches from the same object to have similar saliency values, and (2) a Laplacian regularization that enlarges the gaps between salient objects and the background in feature space. Furthermore, high-level priors are integrated to guide the matrix decomposition and boost the detection. We evaluate our model for salient object detection on five challenging datasets including single object, multiple objects and complex scene images, and show competitive results as compared with 24 state-of-the-art methods in terms of seven performance metrics.
NASA Astrophysics Data System (ADS)
Browne, E. C.; Abdelhamid, A.; Berry, J.; Alton, M.
2017-12-01
Organic compounds account for a significant portion of fine atmospheric aerosol. Current analytical techniques have provided insights on organic aerosol (OA) sources, composition, and chemical modification pathways. Despite this knowledge, large uncertainties remain and hinder our understanding of aerosol impacts on climate, air quality, and health. Measuring OA composition is challenging due to the complex chemical composition and the wide variation in the properties (e.g., vapor pressure, solubility, reactivity) of organic compounds. In many current measurement techniques, the ability to chemically resolve and quantify OA components is complicated by molecular decomposition, matrix effects, and/or preferential ionization mechanisms. Here, we utilize a novel desorption technique, laser induced acoustic desorption (LIAD), that generates fragment-free, neutral gas-phase molecules. We couple LIAD with a high-resolution chemical ionization mass spectrometer (CIMS) to provide molecular composition OA measurements. Through a series of laboratory experiments, we demonstrate the ability of this technique to measure large, thermally labile species without fragmentation/thermal decomposition. We discuss quantification and detection limits of this technique. We compare LIAD-CIMS measurements with thermal desorption-CIMS measurements using off-line measurements of ambient aerosol collected in Boulder, CO. Lastly, we discuss future development for on-line measurements of OA using LIAD-CIMS.
Decomposition odour profiling in the air and soil surrounding vertebrate carrion.
Forbes, Shari L; Perrault, Katelynn A
2014-01-01
Chemical profiling of decomposition odour is conducted in the environmental sciences to detect malodourous target sources in air, water or soil. More recently decomposition odour profiling has been employed in the forensic sciences to generate a profile of the volatile organic compounds (VOCs) produced by decomposed remains. The chemical profile of decomposition odour is still being debated with variations in the VOC profile attributed to the sample collection technique, method of chemical analysis, and environment in which decomposition occurred. To date, little consideration has been given to the partitioning of odour between different matrices and the impact this has on developing an accurate VOC profile. The purpose of this research was to investigate the decomposition odour profile surrounding vertebrate carrion to determine how VOCs partition between soil and air. Four pig carcasses (Sus scrofa domesticus L.) were placed on a soil surface to decompose naturally and their odour profile monitored over a period of two months. Corresponding control sites were also monitored to determine the VOC profile of the surrounding environment. Samples were collected from the soil below and the air (headspace) above the decomposed remains using sorbent tubes and analysed using gas chromatography-mass spectrometry. A total of 249 compounds were identified but only 58 compounds were common to both air and soil samples. This study has demonstrated that soil and air samples produce distinct subsets of VOCs that contribute to the overall decomposition odour. Sample collection from only one matrix will reduce the likelihood of detecting the complete spectrum of VOCs, which further confounds the issue of determining a complete and accurate decomposition odour profile. Confirmation of this profile will enhance the performance of cadaver-detection dogs that are tasked with detecting decomposition odour in both soil and air to locate victim remains.
NASA Astrophysics Data System (ADS)
Wu, Binlin; Smith, Jason; Zhang, Lin; Gao, Xin; Alfano, Robert R.
2018-02-01
Worldwide breast cancer incidence has increased by more than twenty percent in the past decade, and mortality due to the disease has increased by fourteen percent over the same period. Optical diagnostic techniques such as Raman spectroscopy have been explored to increase diagnostic accuracy in a more objective way while significantly decreasing diagnostic wait times. In this study, Raman spectroscopy with 532-nm excitation was used to incite resonance effects that enhance Stokes Raman scattering from unique biomolecular vibrational modes. Seventy-two Raman spectra (41 cancerous, 31 normal) were collected from nine breast tissue samples by averaging ten spectra, each with a 500-ms acquisition time, at every acquisition location. The raw spectral data were prepared for analysis with background correction and normalization. The spectral data in the Raman shift range of 750-2000 cm^-1 were used for analysis, since the detector has its highest sensitivity in this range. The matrix decomposition technique nonnegative matrix factorization (NMF) was then performed on the processed data. Leave-one-out cross-validation using two selective feature components resulted in sensitivity, specificity, and accuracy of 92.6%, 100%, and 96.0%, respectively. The performance of NMF was also compared to that of principal component analysis (PCA), and NMF was shown to be superior to PCA in this study. This study shows that coupling resonance Raman spectroscopy with a subsequent NMF decomposition has potential for high characterization accuracy in breast cancer detection.
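A minimal sketch of the NMF step on a nonnegative spectra matrix, assuming scikit-learn and synthetic data in place of the measured spectra:

```python
import numpy as np
from sklearn.decomposition import NMF

# NMF of a nonnegative spectra matrix (rows: spectra, columns: Raman-shift
# bins), keeping two components as in the study; the data here are synthetic.
rng = np.random.default_rng(3)
X = np.abs(rng.normal(size=(72, 500)))        # stand-in for 72 measured spectra
model = NMF(n_components=2, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(X)                    # per-spectrum component weights
H = model.components_                         # spectral signatures
# W (72 x 2) would then feed a leave-one-out classifier as described above.
```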
Decomposition of the Multistatic Response Matrix and Target Characterization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chambers, D H
2008-02-14
Decomposition of the time-reversal operator for an array, or equivalently the singular value decomposition of the multistatic response matrix, has been used to improve imaging and localization of targets in complicated media. Typically, each singular value is associated with one scatterer even though it has been shown in several cases that a single scatterer can generate several singular values. In this paper we review the analysis of the time-reversal operator (TRO), or equivalently the multistatic response matrix (MRM), of an array system and a small target. We begin with two-dimensional scattering from a small cylinder, then show the results for a small non-spherical target in three dimensions. We show that the number and magnitudes of the singular values contain information about target composition, shape, and orientation.
Pham, T. Anh; Nguyen, Huy -Viet; Rocca, Dario; ...
2013-04-26
In a recent paper we presented an approach to evaluate quasiparticle energies based on the spectral decomposition of the static dielectric matrix. This method does not require the calculation of unoccupied electronic states or the direct diagonalization of large dielectric matrices, and it avoids the use of plasmon-pole models. The numerical accuracy of the approach is controlled by a single parameter, i.e., the number of eigenvectors used in the spectral decomposition of the dielectric matrix. Here we present a comprehensive validation of the method, encompassing calculations of ionization potentials and electron affinities of various molecules and of band gaps for several crystalline and disordered semiconductors. Lastly, we demonstrate the efficiency of our approach by carrying out GW calculations for systems with several hundred valence electrons.
Yin, Xiao-Li; Gu, Hui-Wen; Liu, Xiao-Lu; Zhang, Shan-Hui; Wu, Hai-Long
2018-03-05
Multiway calibration in combination with spectroscopic technique is an attractive tool for online or real-time monitoring of target analyte(s) in complex samples. However, how to choose a suitable multiway calibration method for the resolution of spectroscopic-kinetic data is a troubling problem in practical application. In this work, for the first time, three-way and four-way fluorescence-kinetic data arrays were generated during the real-time monitoring of the hydrolysis of irinotecan (CPT-11) in human plasma by excitation-emission matrix fluorescence. Alternating normalization-weighted error (ANWE) and alternating penalty trilinear decomposition (APTLD) were used as three-way calibration for the decomposition of the three-way kinetic data array, whereas alternating weighted residual constraint quadrilinear decomposition (AWRCQLD) and alternating penalty quadrilinear decomposition (APQLD) were applied as four-way calibration to the four-way kinetic data array. The quantitative results of the two kinds of calibration models were fully compared from the perspective of predicted real-time concentrations, spiked recoveries of initial concentration, and analytical figures of merit. The comparison study demonstrated that both three-way and four-way calibration models could achieve real-time quantitative analysis of the hydrolysis of CPT-11 in human plasma under certain conditions. However, it was also found that both of them possess some critical advantages and shortcomings during the process of dynamic analysis. The conclusions obtained in this paper can provide some helpful guidance for the reasonable selection of multiway calibration models to achieve the real-time quantitative analysis of target analyte(s) in complex dynamic systems. Copyright © 2017 Elsevier B.V. All rights reserved.
Iterative image-domain decomposition for dual-energy CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Niu, Tianye; Dong, Xue; Petrongolo, Michael
2014-04-15
Purpose: Dual energy CT (DECT) imaging plays an important role in advanced imaging applications due to its capability of material decomposition. Direct decomposition via matrix inversion suffers from significant degradation of image signal-to-noise ratios, which reduces the clinical value of DECT. Existing denoising algorithms achieve suboptimal performance since they suppress image noise either before or after the decomposition and do not fully explore the noise statistical properties of the decomposition process. In this work, the authors propose an iterative image-domain decomposition method for noise suppression in DECT, using the full variance-covariance matrix of the decomposed images. Methods: The proposed algorithm is formulated in the form of least-square estimation with smoothness regularization. Based on the design principles of a best linear unbiased estimator, the authors include the inverse of the estimated variance-covariance matrix of the decomposed images as the penalty weight in the least-square term. The regularization term enforces the image smoothness by calculating the square sum of neighboring pixel value differences. To retain the boundary sharpness of the decomposed images, the authors detect the edges in the CT images before decomposition. These edge pixels have small weights in the calculation of the regularization term. Distinct from the existing denoising algorithms applied on the images before or after decomposition, the method has an iterative process for noise suppression, with decomposition performed in each iteration. The authors implement the proposed algorithm using a standard conjugate gradient algorithm. The method performance is evaluated using an evaluation phantom (Catphan©600) and an anthropomorphic head phantom. The results are compared with those generated using direct matrix inversion with no noise suppression, a denoising method applied on the decomposed images, and an existing algorithm with a similar formulation but with an edge-preserving regularization term. Results: On the Catphan phantom, the method maintains the same spatial resolution on the decomposed images as that of the CT images before decomposition (8 pairs/cm) while significantly reducing their noise standard deviation. Compared to that obtained by direct matrix inversion, the noise standard deviation in the images decomposed by the proposed algorithm is reduced by over 98%. Without considering the noise correlation properties in the formulation, the denoising scheme degrades the spatial resolution to 6 pairs/cm for the same level of noise suppression. Compared to the edge-preserving algorithm, the method achieves better low-contrast detectability. A quantitative study is performed on the contrast-rod slice of the Catphan phantom. The proposed method achieves lower electron density measurement error than direct matrix inversion and significantly reduces the error variation by over 97%. On the head phantom, the method reduces the noise standard deviation of decomposed images by over 97% without blurring the sinus structures. Conclusions: The authors propose an iterative image-domain decomposition method for DECT. The method combines noise suppression and material decomposition into an iterative process and achieves both goals simultaneously. By exploring the full variance-covariance properties of the decomposed images and utilizing edge predetection, the proposed algorithm shows superior performance in noise suppression with high image spatial resolution and low-contrast detectability.
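For contrast, the direct image-domain decomposition that the iterative method improves on is just a per-pixel matrix inversion, as in this toy two-material, two-energy example with invented numbers:

```python
import numpy as np

# Direct image-domain DECT decomposition by matrix inversion, the noisy
# baseline discussed above. A maps two basis-material densities to the
# high- and low-energy CT values of one pixel (toy numbers).
A = np.array([[1.2, 0.6],
              [0.9, 1.1]])
high, low = 0.8, 0.7
x = np.linalg.solve(A, np.array([high, low]))   # basis-material densities
```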
Parallel algorithm for computation of second-order sequential best rotations
NASA Astrophysics Data System (ADS)
Redif, Soydan; Kasap, Server
2013-12-01
Algorithms for computing an approximate polynomial matrix eigenvalue decomposition of para-Hermitian systems have emerged as a powerful, generic signal processing tool. A technique that has shown much success in this regard is the sequential best rotation (SBR2) algorithm. Proposed is a scheme for parallelising SBR2 with a view to exploiting the modern architectural features and inherent parallelism of field-programmable gate array (FPGA) technology. Experiments show that the proposed scheme can achieve low execution times while requiring minimal FPGA resources.
Recursive partitioned inversion of large (1500 x 1500) symmetric matrices
NASA Technical Reports Server (NTRS)
Putney, B. H.; Brownd, J. E.; Gomez, R. A.
1976-01-01
A recursive algorithm was designed to invert large, dense, symmetric, positive definite matrices using small amounts of computer core, i.e., a small fraction of the core needed to store the complete matrix. The described algorithm is a generalized Gaussian elimination technique. Other algorithms are also discussed for the Cholesky decomposition and step inversion techniques. The purpose of the inversion algorithm is to solve large linear systems of normal equations generated by working geodetic problems. The algorithm was incorporated into a computer program called SOLVE. In the past the SOLVE program has been used in obtaining solutions published as the Goddard earth models.
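The partitioned idea can be sketched in-core as a recursive 2x2 block inversion via Schur complements; the original algorithm additionally manages disk-resident blocks to stay within a small fraction of core:

```python
import numpy as np

def block_spd_inverse(M, blk=64):
    """Invert a symmetric positive definite matrix by recursive 2x2 block
    partitioning (Schur complements), in the spirit of the generalized
    Gaussian elimination described above (an in-core sketch)."""
    n = M.shape[0]
    if n <= blk:
        return np.linalg.inv(M)
    k = n // 2
    A, B, C = M[:k, :k], M[:k, k:], M[k:, k:]
    Ai = block_spd_inverse(A, blk)
    Si = block_spd_inverse(C - B.T @ Ai @ B, blk)   # Schur complement inverse
    TL = Ai + Ai @ B @ Si @ B.T @ Ai
    TR = -Ai @ B @ Si
    return np.block([[TL, TR], [TR.T, Si]])
```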
Trace Norm Regularized CANDECOMP/PARAFAC Decomposition With Missing Data.
Liu, Yuanyuan; Shang, Fanhua; Jiao, Licheng; Cheng, James; Cheng, Hong
2015-11-01
In recent years, low-rank tensor completion (LRTC) problems have received a significant amount of attention in computer vision, data mining, and signal processing. The existing trace norm minimization algorithms for iteratively solving LRTC problems involve multiple singular value decompositions of very large matrices at each iteration. Therefore, they suffer from high computational cost. In this paper, we propose a novel trace norm regularized CANDECOMP/PARAFAC decomposition (TNCP) method for simultaneous tensor decomposition and completion. We first formulate a factor matrix rank minimization model by deducing the relation between the rank of each factor matrix and the mode- n rank of a tensor. Then, we introduce a tractable relaxation of our rank function, and then achieve a convex combination problem of much smaller-scale matrix trace norm minimization. Finally, we develop an efficient algorithm based on alternating direction method of multipliers to solve our problem. The promising experimental results on synthetic and real-world data validate the effectiveness of our TNCP method. Moreover, TNCP is significantly faster than the state-of-the-art methods and scales to larger problems.
Randomized Dynamic Mode Decomposition
NASA Astrophysics Data System (ADS)
Erichson, N. Benjamin; Brunton, Steven L.; Kutz, J. Nathan
2017-11-01
The dynamic mode decomposition (DMD) is an equation-free, data-driven matrix decomposition that is capable of providing accurate reconstructions of spatio-temporal coherent structures arising in dynamical systems. We present randomized algorithms to compute the near-optimal low-rank dynamic mode decomposition for massive datasets. Randomized algorithms are simple, accurate and able to ease the computational challenges arising with `big data'. Moreover, randomized algorithms are amenable to modern parallel and distributed computing. The idea is to derive a smaller matrix from the high-dimensional input data matrix using randomness as a computational strategy. Then, the dynamic modes and eigenvalues are accurately learned from this smaller representation of the data, whereby the approximation quality can be controlled via oversampling and power iterations. Here, we present randomized DMD algorithms that are categorized by how many passes the algorithm takes through the data. Specifically, the single-pass randomized DMD does not require data to be stored for subsequent passes. Thus, it is possible to approximately decompose massive fluid flows (stored out of core memory, or not stored at all) using single-pass algorithms, which is infeasible with traditional DMD algorithms.
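A compression-based randomized DMD can be sketched as follows, assuming snapshot matrices X and Y with Y one step ahead of X; this is a generic construction in the spirit of the algorithms described, not the authors' exact single-pass variant:

```python
import numpy as np

def randomized_dmd(X, Y, rank, oversample=10):
    """Randomized DMD sketch: compress the snapshot pair with a random test
    matrix, run exact DMD on the reduced data, then lift the modes back."""
    rng = np.random.default_rng(0)
    Omega = rng.normal(size=(X.shape[1], rank + oversample))
    Q, _ = np.linalg.qr(X @ Omega)            # basis for the snapshot range
    Xr, Yr = Q.T @ X, Q.T @ Y                 # compressed snapshots
    U, s, Vt = np.linalg.svd(Xr, full_matrices=False)
    U, s, Vt = U[:, :rank], s[:rank], Vt[:rank]
    Atilde = U.T @ Yr @ Vt.T / s              # reduced linear operator
    evals, W = np.linalg.eig(Atilde)          # DMD eigenvalues
    modes = Q @ (Yr @ Vt.T / s) @ W           # DMD modes in the original space
    return evals, modes
```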
Microencapsulation of Flavors in Carnauba Wax
Milanovic, Jelena; Manojlovic, Verica; Levic, Steva; Rajic, Nevenka; Nedovic, Viktor; Bugarski, Branko
2010-01-01
The subject of this study is the development of flavor wax formulations aimed for food and feed products. The melt dispersion technique was applied for the encapsulation of ethyl vanillin in wax microcapsules. The surface morphology of microparticles was investigated using scanning electron microscope (SEM), while the loading content was determined by HPLC measurements. This study shows that the decomposition process under heating proceeds in several steps: vanilla evaporation occurs at around 200 °C, while matrix degradation starts at 250 °C and progresses with maxima at around 360, 440 and 520 °C. The results indicate that carnauba wax is an attractive material for use as a matrix for encapsulation of flavours in order to improve their functionality and stability in products. PMID:22315575
PEM-PCA: a parallel expectation-maximization PCA face recognition architecture.
Rujirakul, Kanokmon; So-In, Chakchai; Arnonkijpanich, Banchar
2014-01-01
Principal component analysis (PCA) has traditionally been used as one of the feature extraction techniques in face recognition systems, yielding high accuracy while requiring a small number of features. However, the covariance matrix and eigenvalue decomposition stages cause high computational complexity, especially for a large database. Thus, this research presents an alternative approach utilizing an expectation-maximization algorithm to reduce the matrix manipulation involved, resulting in reduced complexity of these stages. To improve the computational time, a novel parallel architecture was employed to exploit the parallelization of matrix computation during the feature extraction and classification stages, including parallel preprocessing and their combinations, in a so-called Parallel Expectation-Maximization PCA architecture. Compared to traditional PCA and its derivatives, the results indicate lower complexity with an insignificant difference in recognition precision, leading to high-speed face recognition systems with speed-ups of over nine and three times relative to PCA and parallel PCA, respectively.
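The covariance-free idea can be sketched with Roweis-style EM for PCA, which alternates two least-squares solves instead of forming and diagonalizing the covariance matrix; this is a serial sketch, the paper's contribution being the parallel architecture:

```python
import numpy as np

def em_pca(Y, k, n_iter=100):
    """EM algorithm for PCA (Roweis): alternate an E-step for the latent
    coordinates and an M-step for the subspace, avoiding the explicit
    covariance matrix and its eigenvalue decomposition."""
    Y = Y - Y.mean(axis=1, keepdims=True)       # center (d x n data matrix)
    W = np.random.default_rng(0).normal(size=(Y.shape[0], k))
    for _ in range(n_iter):
        X = np.linalg.solve(W.T @ W, W.T @ Y)   # E-step: latent coordinates
        W = Y @ X.T @ np.linalg.inv(X @ X.T)    # M-step: update subspace
    Q, _ = np.linalg.qr(W)                      # orthonormal principal subspace
    return Q
```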
NASA Astrophysics Data System (ADS)
Noble, J. H.; Lubasch, M.; Stevens, J.; Jentschura, U. D.
2017-12-01
We describe a matrix diagonalization algorithm for complex symmetric (not Hermitian) matrices, A = A^T, which is based on a two-step algorithm involving generalized Householder reflections built on the indefinite inner product ⟨u, v⟩_* = Σ_i u_i v_i. This inner product is linear in both arguments and avoids complex conjugation. The complex symmetric input matrix is transformed to tridiagonal form using generalized Householder transformations (first step). An iterative, generalized QL decomposition of the tridiagonal matrix employing an implicit shift converges toward diagonal form (second step). The QL algorithm employs iterative deflation techniques when a machine-precision zero is encountered "prematurely" on the super-/sub-diagonal. The algorithm allows for a reliable and computationally efficient computation of resonance and antiresonance energies which emerge from complex-scaled Hamiltonians, and for the numerical determination of the real energy eigenvalues of pseudo-Hermitian and PT-symmetric Hamilton matrices. Numerical reference values are provided.
Exploiting Symmetry on Parallel Architectures.
NASA Astrophysics Data System (ADS)
Stiller, Lewis Benjamin
1995-01-01
This thesis describes techniques for the design of parallel programs that solve well-structured problems with inherent symmetry. Part I demonstrates the reduction of such problems to generalized matrix multiplication by a group-equivariant matrix. Fast techniques for this multiplication are described, including factorization, orbit decomposition, and Fourier transforms over finite groups. Our algorithms entail interaction between two symmetry groups: one arising at the software level from the problem's symmetry and the other arising at the hardware level from the processors' communication network. Part II illustrates the applicability of our symmetry-exploitation techniques by presenting a series of case studies of the design and implementation of parallel programs. First, a parallel program that solves chess endgames by factorization of an associated dihedral group-equivariant matrix is described. This code runs faster than previous serial programs and discovered a number of new results. Second, parallel algorithms for Fourier transforms for finite groups are developed, and preliminary parallel implementations for group transforms of dihedral and of symmetric groups are described. Applications in learning, vision, pattern recognition, and statistics are proposed. Third, parallel implementations solving several computational science problems are described, including the direct n-body problem, convolutions arising from molecular biology, and some communication primitives such as broadcast and reduce. Some of our implementations ran orders of magnitude faster than previous techniques, and were used in the investigation of various physical phenomena.
An Optimal Orthogonal Decomposition Method for Kalman Filter-Based Turbofan Engine Thrust Estimation
NASA Technical Reports Server (NTRS)
Litt, Jonathan S.
2007-01-01
A new linear point design technique is presented for the determination of tuning parameters that enable the optimal estimation of unmeasured engine outputs, such as thrust. The engine's performance is affected by its level of degradation, generally described in terms of unmeasurable health parameters related to each major engine component. Accurate thrust reconstruction depends on knowledge of these health parameters, but there are usually too few sensors to be able to estimate their values. In this new technique, a set of tuning parameters is determined that accounts for degradation by representing the overall effect of the larger set of health parameters as closely as possible in a least squares sense. The technique takes advantage of the properties of the singular value decomposition of a matrix to generate a tuning parameter vector of low enough dimension that it can be estimated by a Kalman filter. A concise design procedure to generate a tuning vector that specifically takes into account the variables of interest is presented. An example demonstrates the tuning parameters' ability to facilitate matching of both measured and unmeasured engine outputs, as well as state variables. Additional properties of the formulation are shown to lend themselves well to diagnostics.
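A sketch of the SVD step under stated assumptions: H below is a hypothetical sensitivity matrix of outputs to health parameters, and the tuning directions are taken from its leading left singular vectors, the best low-dimensional representation in a least-squares sense:

```python
import numpy as np

# Hypothetical sensitivity of 12 outputs/states to 10 health parameters.
H = np.random.default_rng(4).normal(size=(12, 10))
U, s, Vt = np.linalg.svd(H, full_matrices=False)
k = 3                                        # tuner dimension much less than 10
V_star = U[:, :k]                            # tuning-parameter directions
approx = V_star @ np.diag(s[:k]) @ Vt[:k]    # best rank-k fit in least squares
```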
An Optimal Orthogonal Decomposition Method for Kalman Filter-Based Turbofan Engine Thrust Estimation
NASA Technical Reports Server (NTRS)
Litt, Jonathan S.
2005-01-01
A new linear point design technique is presented for the determination of tuning parameters that enable the optimal estimation of unmeasured engine outputs such as thrust. The engine's performance is affected by its level of degradation, generally described in terms of unmeasurable health parameters related to each major engine component. Accurate thrust reconstruction depends upon knowledge of these health parameters, but there are usually too few sensors to be able to estimate their values. In this new technique, a set of tuning parameters is determined which accounts for degradation by representing the overall effect of the larger set of health parameters as closely as possible in a least squares sense. The technique takes advantage of the properties of the singular value decomposition of a matrix to generate a tuning parameter vector of low enough dimension that it can be estimated by a Kalman filter. A concise design procedure to generate a tuning vector that specifically takes into account the variables of interest is presented. An example demonstrates the tuning parameters' ability to facilitate matching of both measured and unmeasured engine outputs, as well as state variables. Additional properties of the formulation are shown to lend themselves well to diagnostics.
Density-cluster NMA: A new protein decomposition technique for coarse-grained normal mode analysis.
Demerdash, Omar N A; Mitchell, Julie C
2012-07-01
Normal mode analysis has emerged as a useful technique for investigating protein motions on long time scales. This is largely due to the advent of coarse-graining techniques, particularly Hooke's Law-based potentials and the rotational-translational blocking (RTB) method for reducing the size of the force-constant matrix, the Hessian. Here we present a new method for domain decomposition for use in RTB that is based on hierarchical clustering of atomic density gradients, which we call Density-Cluster RTB (DCRTB). The method reduces the number of degrees of freedom by 85-90% compared with the standard blocking approaches. We compared the normal modes from DCRTB against standard RTB using 1-4 residues in sequence in a single block, with good agreement between the two methods. We also show that Density-Cluster RTB and standard RTB perform well in capturing the experimentally determined direction of conformational change. Significantly, we report superior correlation of DCRTB with B-factors compared with 1-4 residue per block RTB. Finally, we show significant reduction in computational cost for Density-Cluster RTB that is nearly 100-fold for many examples. Copyright © 2012 Wiley Periodicals, Inc.
Simultaneous tensor decomposition and completion using factor priors.
Chen, Yi-Lei; Hsu, Chiou-Ting; Liao, Hong-Yuan Mark
2014-03-01
The success of research on matrix completion is evident in a variety of real-world applications. Tensor completion, which is a high-order extension of matrix completion, has also generated a great deal of research interest in recent years. Given a tensor with incomplete entries, existing methods use either factorization or completion schemes to recover the missing parts. However, as the number of missing entries increases, factorization schemes may overfit the model because of incorrectly predefined ranks, while completion schemes may fail to interpret the model factors. In this paper, we introduce a novel concept: complete the missing entries and simultaneously capture the underlying model structure. To this end, we propose a method called simultaneous tensor decomposition and completion (STDC) that combines a rank minimization technique with Tucker model decomposition. Moreover, as the model structure is implicitly included in the Tucker model, we use factor priors, which are usually known a priori in real-world tensor objects, to characterize the underlying joint-manifold drawn from the model factors. By exploiting this auxiliary information, our method leverages two classic schemes and accurately estimates the model factors and missing entries. We conducted experiments to empirically verify the convergence of our algorithm on synthetic data and evaluate its effectiveness on various kinds of real-world data. The results demonstrate the efficacy of the proposed method and its potential usage in tensor-based applications. It also outperforms state-of-the-art methods on multilinear model analysis and visual data completion tasks.
An algorithm for separation of mixed sparse and Gaussian sources.
Akkalkotkar, Ameya; Brown, Kevin Scott
2017-01-01
Independent component analysis (ICA) is a ubiquitous method for decomposing complex signal mixtures into a small set of statistically independent source signals. However, in cases in which the signal mixture consists of both nongaussian and Gaussian sources, the Gaussian sources will not be recoverable by ICA and will pollute estimates of the nongaussian sources. Therefore, it is desirable to have methods for mixed ICA/PCA which can separate mixtures of Gaussian and nongaussian sources. For mixtures of purely Gaussian sources, principal component analysis (PCA) can provide a basis for the Gaussian subspace. We introduce a new method for mixed ICA/PCA which we call Mixed ICA/PCA via Reproducibility Stability (MIPReSt). Our method uses a repeated-estimation technique to rank sources by reproducibility, combined with decomposition of multiple subsamplings of the original data matrix. These multiple decompositions allow us to assess component stability as the size of the data matrix changes, which can be used to determine the dimension of the nongaussian subspace in a mixture. We demonstrate the utility of MIPReSt for signal mixtures consisting of simulated sources and real-world (speech) sources, as well as a mixture of unknown composition. PMID:28414814
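The repeated-estimation idea can be illustrated with off-the-shelf tools. The sketch below is an assumption-laden stand-in for MIPReSt, not the published implementation: it runs scikit-learn's FastICA on random half-subsamples of a three-channel mixture and scores each component by its worst-case correlation with the full-data decomposition, so the Gaussian source should score poorly. It assumes a recent scikit-learn (the whiten="unit-variance" option):

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(2)
n = 5000
# Two nongaussian sources (uniform, Laplacian) plus one Gaussian source
S = np.c_[rng.uniform(-2, 2, size=n), rng.laplace(size=n), rng.normal(size=n)]
X = S @ rng.normal(size=(3, 3)).T        # mixed into three channels

def components(data):
    ica = FastICA(n_components=3, whiten="unit-variance",
                  random_state=0, max_iter=1000)
    return ica.fit_transform(data)

ref = components(X)
# Reproducibility score: worst-case best-match correlation of each reference
# component across decompositions of random half-subsamples
scores = np.ones(3)
for _ in range(5):
    idx = rng.choice(n, size=n // 2, replace=False)
    sub = components(X[idx])
    corr = np.abs(np.corrcoef(ref[idx].T, sub.T)[:3, 3:])
    scores = np.minimum(scores, corr.max(axis=1))
print("reproducibility per component:", np.round(scores, 3))
```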
Nonlocal low-rank and sparse matrix decomposition for spectral CT reconstruction
NASA Astrophysics Data System (ADS)
Niu, Shanzhou; Yu, Gaohang; Ma, Jianhua; Wang, Jing
2018-02-01
Spectral computed tomography (CT) has been a promising technique in research and clinics because of its ability to produce improved energy resolution images with narrow energy bins. However, the narrow energy bin image is often affected by serious quantum noise because of the limited number of photons used in the corresponding energy bin. To address this problem, we present an iterative reconstruction method for spectral CT using nonlocal low-rank and sparse matrix decomposition (NLSMD), which exploits the self-similarity of patches that are collected in multi-energy images. Specifically, each set of patches can be decomposed into a low-rank component and a sparse component, and the low-rank component represents the stationary background over different energy bins, while the sparse component represents the rest of the different spectral features in individual energy bins. Subsequently, an effective alternating optimization algorithm was developed to minimize the associated objective function. To validate and evaluate the NLSMD method, qualitative and quantitative studies were conducted by using simulated and real spectral CT data. Experimental results show that the NLSMD method improves spectral CT images in terms of noise reduction, artifact suppression and resolution preservation.
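The low-rank-plus-sparse split at the core of NLSMD can be illustrated, away from the CT specifics, with a generic principal-component-pursuit iteration (inexact augmented Lagrangian with singular-value and soft thresholding). This is a standard textbook scheme, not the authors' patch-based NLSMD objective; the matrix sizes, rank and sparsity levels below are arbitrary:

```python
import numpy as np

def svt(M, tau):
    """Singular-value thresholding: proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0)) @ Vt

def soft(M, tau):
    """Entrywise soft thresholding: proximal operator of the l1 norm."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0)

def lowrank_sparse(D, lam=None, max_iter=300, tol=1e-7):
    """Split D into low-rank L plus sparse S (inexact ALM principal component pursuit)."""
    m, n = D.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    S = np.zeros_like(D)
    Y = np.zeros_like(D)
    mu = 1.25 / np.linalg.norm(D, 2)
    for _ in range(max_iter):
        L = svt(D - S + Y / mu, 1.0 / mu)
        S = soft(D - L + Y / mu, lam / mu)
        Z = D - L - S
        Y += mu * Z
        mu *= 1.5
        if np.linalg.norm(Z) <= tol * np.linalg.norm(D):
            break
    return L, S

rng = np.random.default_rng(3)
L0 = rng.normal(size=(60, 5)) @ rng.normal(size=(5, 80))                 # "background"
S0 = (rng.random((60, 80)) < 0.05) * rng.normal(scale=8, size=(60, 80))  # sparse outliers
L, S = lowrank_sparse(L0 + S0)
print("relative L error:", round(np.linalg.norm(L - L0) / np.linalg.norm(L0), 4),
      "| rank(L):", np.linalg.matrix_rank(L, tol=1e-6))
```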
On-matrix derivatization for dynamic headspace sampling of nonvolatile surface residues.
Harvey, Scott D; Wahl, Jon H
2012-09-21
The goal of this study is to extend sampling by the field and laboratory emission cell (FLEC) dynamic headspace technique to applications that target nonvolatile residues. On-matrix derivatization of residues to render analytes stable and more volatile is explored to achieve this goal. Results show that on-matrix derivatizations of nerve agent hydrolysis products (monoalkyl methylphosphonic acids and methylphosphonic acid [MPA]) with diazomethane were successful on glass and painted wallboard (at the 10-μg level). It also was successful on the more difficult concrete (at the 500-μg level) and carpet (at the 20-μg level), substrates that cannot be successfully sampled using swipe techniques. Analysis of additional chemical warfare (CW)-associated residues can be approached by on-matrix derivatization with trifluoroacetic anhydride (TFAA). For example, amines (used as stabilizers or present as decomposition products of the nerve agent VX) or thiodiglycol (hydrolysis product of sulfur mustard) could be sampled as their TFAA derivatives from glass, painted wallboard, and concrete (at the 40-μg level), as well as carpet (at the 80-μg level) surfaces. Although the amine and thiodiglycol are semi-volatile and could be sampled directly, derivatization improves the recovery and chromatographic behavior of these analytes. Copyright © 2012 Elsevier B.V. All rights reserved.
Cotrufo, M Francesca; Wallenstein, Matthew D; Boot, Claudia M; Denef, Karolien; Paul, Eldor
2013-04-01
The decomposition and transformation of above- and below-ground plant detritus (litter) is the main process by which soil organic matter (SOM) is formed. Yet, research on litter decay and SOM formation has been largely uncoupled, failing to provide an effective nexus between these two fundamental processes for carbon (C) and nitrogen (N) cycling and storage. We present the current understanding of the importance of microbial substrate use efficiency and C and N allocation in controlling the proportion of plant-derived C and N that is incorporated into SOM, and of soil matrix interactions in controlling SOM stabilization. We synthesize this understanding into the Microbial Efficiency-Matrix Stabilization (MEMS) framework. This framework leads to the hypothesis that labile plant constituents are the dominant source of microbial products, relative to input rates, because they are utilized more efficiently by microbes. These microbial products of decomposition would thus become the main precursors of stable SOM by promoting aggregation and through strong chemical bonding to the mineral soil matrix. © 2012 Blackwell Publishing Ltd.
NASA Technical Reports Server (NTRS)
Booth-Morrison, Christopher; Seidman, David N.; Noebe, Ronald D.
2009-01-01
The effects of a 2.0 at.% addition of Ta to a model Ni-10.0Al-8.5Cr (at.%) superalloy aged at 1073 K are assessed using scanning electron microscopy and atom-probe tomography. The gamma'(L1₂)-precipitate morphology that develops as a result of gamma (fcc) matrix phase decomposition is found to evolve from a bimodal distribution of spheroidal precipitates to {001}-faceted cuboids and parallelepipeds aligned along the elastically soft <001>-type directions. The phase compositions and the widths of the gamma'-precipitate/gamma-matrix heterophase interfaces evolve temporally as the Ni-Al-Cr-Ta alloy undergoes quasi-stationary-state coarsening after 1 h of aging. Tantalum is observed to partition preferentially to the gamma'-precipitate phase, and suppresses the mobility of Ni in the gamma matrix sufficiently to cause an accumulation of Ni on the gamma-matrix side of the gamma'/gamma interface. Additionally, computational modeling, employing Thermo-Calc, Dictra and PrecipiCalc, is employed to elucidate the kinetic pathways that lead to phase decomposition in this concentrated Ni-Al-Cr-Ta alloy.
Benhammouda, Brahim
2016-01-01
Since 1980, the Adomian decomposition method (ADM) has been extensively used as a simple, powerful tool that applies directly to solve different kinds of nonlinear equations, including functional, differential, integro-differential and algebraic equations. However, for differential-algebraic equations (DAEs) the ADM has been applied in only four earlier works, where the DAEs are first pre-processed by transformations such as index reduction before applying the ADM. The drawback of such transformations is that they can involve complex algorithms, can be computationally expensive and may lead to non-physical solutions. The purpose of this paper is to propose a novel technique that applies the ADM directly to solve a class of nonlinear higher-index Hessenberg DAE systems efficiently. The main advantage of this technique is twofold: first, it avoids complex transformations like index reductions and leads to a simple general algorithm; second, it reduces the computational work by solving only linear algebraic systems with a constant coefficient matrix at each iteration, except for the first iteration where the algebraic system is nonlinear (if the DAE is nonlinear with respect to the algebraic variable). To demonstrate the effectiveness of the proposed technique, we apply it to a nonlinear index-three Hessenberg DAE system with nonlinear algebraic constraints. This technique is straightforward and can be programmed in Maple or Mathematica to simulate real application problems.
Matrix with Prescribed Eigenvectors
ERIC Educational Resources Information Center
Ahmad, Faiz
2011-01-01
It is a routine matter for undergraduates to find eigenvalues and eigenvectors of a given matrix. But the converse problem of finding a matrix with prescribed eigenvalues and eigenvectors is rarely discussed in elementary texts on linear algebra. This problem is related to the "spectral" decomposition of a matrix and has important technical…
Suseela, Vidya; Tharayil, Nishanth
2018-04-01
Decomposition of plant litter is a fundamental ecosystem process that can act as a feedback to climate change by simultaneously influencing both the productivity of ecosystems and the flux of carbon dioxide from the soil. The influence of climate on decomposition from a postsenescence perspective is relatively well known; in particular, climate is known to regulate the rate of litter decomposition via its direct influence on the reaction kinetics and microbial physiology on processes downstream of tissue senescence. Climate can alter plant metabolism during the formative stage of tissues and could shape the final chemical composition of plant litter that is available for decomposition, and thus indirectly influence decomposition; however, these indirect effects are relatively poorly understood. Climatic stress disrupts cellular homeostasis in plants and results in the reprogramming of primary and secondary metabolic pathways, which leads to changes in the quantity, composition, and organization of small molecules and recalcitrant heteropolymers, including lignins, tannins, suberins, and cuticle within the plant tissue matrix. Furthermore, by regulating metabolism during tissue senescence, climate influences the resorption of nutrients from senescing tissues. Thus, the final chemical composition of plant litter that forms the substrate of decomposition is a combined product of presenescence physiological processes through the production and resorption of metabolites. The changes in quantity, composition, and localization of the molecular construct of the litter could enhance or hinder tissue decomposition and soil nutrient cycling by altering the recalcitrance of the lignocellulose matrix, the composition of microbial communities, and the activity of microbial exo-enzymes via various complexation reactions. Also, the climate-induced changes in the molecular composition of litter could differentially influence litter decomposition and soil nutrient cycling. Compared with temperate ecosystems, the indirect effects of climate on litter decomposition in the tropics are not well understood, which underscores the need to conduct additional studies in tropical biomes. We also emphasize the need to focus on how climatic stress affects the root chemistry as roots contribute significantly to biogeochemical cycling, and on utilizing more robust analytical approaches to capture the molecular composition of tissue matrix that fuel microbial metabolism. © 2017 John Wiley & Sons Ltd.
Beyond Principal Component Analysis: A Trilinear Decomposition Model and Least Squares Estimation.
ERIC Educational Resources Information Center
Pham, Tuan Dinh; Mocks, Joachim
1992-01-01
Sufficient conditions are derived for the consistency and asymptotic normality of the least squares estimator of a trilinear decomposition model for multiway data analysis. The limiting covariance matrix is computed. (Author/SLD)
NASA Astrophysics Data System (ADS)
Zou, Chunrong; Li, Bin; Zhang, Changrui; Wang, Siqing; Xie, Zhengfang; Shao, Changwei
2016-02-01
The structural evolution of a silicon oxynitride fiber reinforced boron nitride matrix (Si-N-O_f/BN) wave-transparent composite at high temperatures was investigated. When heat treated at 1600 °C, the composite retained a favorable bending strength of 55.3 MPa while partially crystallizing to Si2N2O and h-BN from the as-received amorphous structure. The Si-N-O fibers still performed as effective reinforcements despite the presence of small pores due to fiber decomposition. Upon heat treatment at 1800 °C, the Si-N-O fibers already lost their reinforcing function and a rough hollow microstructure formed within the fibers because of the accelerated decomposition. Further heating to 2000 °C led to the complete decomposition of the reinforcing fibers and only h-BN particles survived. The crystallization and decomposition behaviors of the composite at high temperatures are discussed.
Niederegger, Senta; Schermer, Julia; Höfig, Juliane; Mall, Gita
2015-01-01
Estimating the time of death of buried human bodies is a very difficult task. Casper's rule from 1860 is still widely used, which illustrates the lack of suitable methods. In this case study, excavations in an arbor revealed the crouching body of a human being, dressed only in boxer shorts and socks. Witnesses were not able to give a consistent answer as to when the person in question was last seen alive; the pieces of information opened a window of 2-6 weeks for the possible time of death. To determine the post mortem interval (PMI), an experiment using a pig carcass was conducted to set up a decomposition matrix. Fitting the autopsy findings of the victim into the decomposition matrix yielded a time-of-death estimation of 2-3 weeks. This time frame was later confirmed by a new witness. The authors feel confident that widespread construction of decomposition matrices using pig carcasses can lead to a great increase in experience and knowledge in PMI estimation of buried bodies and will eventually lead to applicable new methods. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Reduced-rank technique for joint channel estimation in TD-SCDMA systems
NASA Astrophysics Data System (ADS)
Kamil Marzook, Ali; Ismail, Alyani; Mohd Ali, Borhanuddin; Sali, Adawati; Khatun, Sabira
2013-02-01
In time division-synchronous code division multiple access systems, increasing the system capacity by packing the largest possible number of users into one time slot (TS) requires additional estimation processes to estimate the joint channel matrix for the whole system. The increase in the number of channel parameters due to the increase in the number of users in one TS directly affects the precision of the estimator's performance. This article presents a novel channel estimation method with low complexity, which relies on reducing the rank order of the total channel matrix H. The proposed method exploits the rank deficiency of H to reduce the number of parameters that characterise this matrix. The adopted reduced-rank technique is based on the truncated singular value decomposition algorithm. The algorithms for reduced-rank joint channel estimation (JCE) are derived and compared against traditional full-rank JCEs: least squares (LS, or Steiner) and enhanced (LS or MMSE) algorithms. Simulation results of the normalised mean square error showed the superiority of the reduced-rank estimators. In addition, the channel impulse responses found by the reduced-rank estimator for all active users offer considerable performance improvement over the conventional estimator along the channel window length.
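A minimal illustration of the truncated-SVD reduced-rank idea, with a toy rank-deficient channel in place of a real TD-SCDMA joint channel matrix (all dimensions and noise levels below are invented for the example):

```python
import numpy as np

rng = np.random.default_rng(4)
m, n, r = 32, 24, 6
H_true = rng.normal(size=(m, r)) @ rng.normal(size=(r, n))   # rank-deficient channel
X = rng.normal(size=(n, 200))                                # known training sequences
Y = H_true @ X + 0.5 * rng.normal(size=(m, 200))             # noisy observations

H_ls = Y @ np.linalg.pinv(X)                # full-rank least-squares estimate
U, s, Vt = np.linalg.svd(H_ls, full_matrices=False)
H_rr = (U[:, :r] * s[:r]) @ Vt[:r]          # truncated-SVD reduced-rank estimate

for name, H in [("full-rank LS ", H_ls), ("reduced-rank ", H_rr)]:
    nmse = np.linalg.norm(H - H_true) ** 2 / np.linalg.norm(H_true) ** 2
    print(name, "NMSE:", round(nmse, 4))   # truncation discards out-of-subspace noise
```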
NASA Astrophysics Data System (ADS)
Xu, Xiankun; Li, Peiwen
2017-11-01
Fixman's work in 1974 and the follow-up studies have developed a method that factorizes the inverse of the mass matrix into an arithmetic combination of three sparse matrices, one of which is positive definite and needs to be further factorized by using the Cholesky decomposition or similar methods. When the molecule under study has a serial chain structure, this method achieves O(n) time complexity. However, for molecules with long branches, Cholesky decomposition of the corresponding positive definite matrix will introduce massive fill-in due to its nonzero structure. Although several methods can be used to reduce the fill-in, none of them could strictly guarantee zero fill-in for all molecules according to our test, and thus O(n) time complexity cannot be obtained with these traditional methods. In this paper we present a new method that guarantees no fill-in in the Cholesky decomposition, developed based on the correlations between the mass matrix and the geometrical structure of molecules. As a result, the inversion of the mass matrix retains O(n) time complexity whether or not the molecule structure has long branches.
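The fill-in problem the paper addresses can be seen on a tiny example: a "hub" row coupling all other variables (a crude stand-in for a branch point) produces a dense Cholesky factor when eliminated first and no fill-in when eliminated last. The sketch below only demonstrates ordering-dependent fill-in; it does not reproduce the paper's mass-matrix construction:

```python
import numpy as np

def chol_nnz(A):
    """Number of nonzeros in the Cholesky factor of A."""
    L = np.linalg.cholesky(A)
    return np.count_nonzero(np.abs(L) > 1e-12)

n = 12
A = np.eye(n) * n
A[0, 1:] = A[1:, 0] = 1.0        # hub row/column coupling all other variables

# Hub first: eliminating row 0 couples every remaining pair -> dense fill-in
print("hub ordered first:", chol_nnz(A), "nonzeros")

# Hub last: permute so the dense row is eliminated at the end -> no fill-in
p = np.r_[1:n, 0]
print("hub ordered last :", chol_nnz(A[np.ix_(p, p)]), "nonzeros")
```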
Cao, Buwen; Deng, Shuguang; Qin, Hua; Ding, Pingjian; Chen, Shaopeng; Li, Guanghui
2018-06-15
High-throughput technology has generated large-scale protein interaction data, which are crucial to our understanding of biological organisms. Many complex-identification algorithms have been developed to determine protein complexes. However, these methods are only suitable for dense protein interaction networks, because their capabilities decrease rapidly when applied to sparse protein–protein interaction (PPI) networks. In this study, based on penalized matrix decomposition (PMD), a novel method of penalized matrix decomposition for the identification of protein complexes (i.e., PMDpc) was developed to detect protein complexes in the human protein interaction network. This method mainly consists of three steps. First, the adjacency matrix of the protein interaction network is normalized. Second, the normalized matrix is decomposed into three factor matrices. The PMDpc method can detect protein complexes in sparse PPI networks by imposing appropriate constraints on the factor matrices. Finally, the results of our method are compared with those of other methods on the human PPI network. Experimental results show that our method can not only outperform classical algorithms, such as CFinder, ClusterONE, RRW, HC-PIN, and PCE-FR, but can also achieve an ideal overall performance in terms of a composite score consisting of F-measure, accuracy (ACC), and the maximum matching ratio (MMR).
Exploiting symmetries in the modeling and analysis of tires
NASA Technical Reports Server (NTRS)
Noor, Ahmed K.; Andersen, C. M.; Tanner, John A.
1989-01-01
A computational procedure is presented for reducing the size of the analysis models of tires having unsymmetric material, geometry and/or loading. The two key elements of the procedure when applied to anisotropic tires are: (1) decomposition of the stiffness matrix into the sum of an orthotropic and a nonorthotropic part; and (2) successive application of the finite-element method and the classical Rayleigh-Ritz technique. The finite-element method is first used to generate a few global approximation vectors (or modes). Then the amplitudes of these modes are computed by using the Rayleigh-Ritz technique. The proposed technique has high potential for handling practical tire problems with anisotropic materials, unsymmetric imperfections and asymmetric loading. It is also particularly useful with three-dimensional finite-element models of tires.
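The two-step scheme (global approximation vectors, then Rayleigh-Ritz) can be sketched on a generic matrix eigenproblem. In the sketch below the "orthotropic part" is a random symmetric positive-definite matrix and the anisotropy a small symmetric perturbation, both invented for illustration; the modes of the simpler part serve as the Ritz basis for the perturbed problem:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(5)
n = 300
# "Orthotropic" SPD stiffness plus a small nonorthotropic perturbation
Q = np.linalg.qr(rng.normal(size=(n, n)))[0]
K0 = Q @ np.diag(np.linspace(1, 100, n)) @ Q.T
dK = rng.normal(size=(n, n))
K = K0 + 0.01 * (dK + dK.T)
M = np.eye(n)

# Global approximation vectors: lowest modes of the simpler orthotropic part
_, V = eigh(K0, M)
V = V[:, :12]

# Rayleigh-Ritz: project onto the 12 vectors and solve the small eigenproblem
w_rr = eigh(V.T @ K @ V, V.T @ M @ V, eigvals_only=True)
w_exact = eigh(K, M, eigvals_only=True)[:12]
print("exact lowest modes:", np.round(w_exact[:5], 3))
print("Ritz  lowest modes:", np.round(w_rr[:5], 3))
```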
The products of the thermal decomposition of CH₃CHO
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vasiliou, AnGayle; Piech, Krzysztof M.
2011-07-07
We have used a heated 2 cm x 1 mm SiC microtubular (μtubular) reactor to decompose acetaldehyde: CH₃CHO + Δ → products. Thermal decomposition is followed at pressures of 75-150 Torr and at temperatures up to 1675 K, conditions that correspond to residence times of roughly 50-100 μs in the μtubular reactor. The acetaldehyde decomposition products are identified by two independent techniques: vacuum ultraviolet photoionization mass spectroscopy (PIMS) and infrared (IR) absorption spectroscopy after isolation in a cryogenic matrix. Besides CH₃CHO, we have studied three isotopologues, CH₃CDO, CD₃CHO, and CD₃CDO. We have identified the thermal decomposition products CH₃ (PIMS), CO (IR, PIMS), H (PIMS), H₂ (PIMS), CH₂CO (IR, PIMS), CH₂=CHOH (IR, PIMS), H₂O (IR, PIMS), and HC≡CH (IR, PIMS). Plausible evidence has been found to support the idea that there are at least three different thermal decomposition pathways for CH₃CHO; namely, radical decomposition: CH₃CHO + Δ → CH₃ + [HCO] → CH₃ + H + CO; elimination: CH₃CHO + Δ → H₂ + CH₂=C=O; and isomerization/elimination: CH₃CHO + Δ → [CH₂=CH-OH] → HC≡CH + H₂O. An interesting result is that both PIMS and IR spectroscopy show compelling evidence for the participation of vinylidene, CH₂=C:, as an intermediate in the decomposition of vinyl alcohol: CH₂=CH-OH + Δ → [CH₂=C:] + H₂O → HC≡CH + H₂O.
NASA Astrophysics Data System (ADS)
Gao, Pengzhi; Wang, Meng; Chow, Joe H.; Ghiocel, Scott G.; Fardanesh, Bruce; Stefopoulos, George; Razanousky, Michael P.
2016-11-01
This paper presents a new framework of identifying a series of cyber data attacks on power system synchrophasor measurements. We focus on detecting "unobservable" cyber data attacks that cannot be detected by any existing method that purely relies on measurements received at one time instant. Leveraging the approximate low-rank property of phasor measurement unit (PMU) data, we formulate the identification problem of successive unobservable cyber attacks as a matrix decomposition problem of a low-rank matrix plus a transformed column-sparse matrix. We propose a convex-optimization-based method and provide its theoretical guarantee in the data identification. Numerical experiments on actual PMU data from the Central New York power system and synthetic data are conducted to verify the effectiveness of the proposed method.
Thermodynamic properties of water in confined environments: a Monte Carlo study
NASA Astrophysics Data System (ADS)
Gladovic, Martin; Bren, Urban; Urbic, Tomaž
2018-05-01
Monte Carlo simulations of Mercedes-Benz water in a crowded environment were performed. The simulated systems are representative of both composite, porous or sintered materials and living cells with typical matrix packings. We studied the influence of overall temperature as well as the density and size of matrix particles on water density, particle distributions, hydrogen bond formation and thermodynamic quantities. Interestingly, temperature and space occupancy of matrix exhibit a similar effect on water properties following the competition between the kinetic and the potential energy of the system, whereby temperature increases the kinetic and matrix packing decreases the potential contribution. A novel thermodynamic decomposition approach was applied to gain insight into individual contributions of different types of inter-particle interactions. This decomposition proved to be useful and in good agreement with the total thermodynamic quantities especially at higher temperatures and matrix packings, where higher-order potential-energy mixing terms lose their importance.
Data-driven sensor placement from coherent fluid structures
NASA Astrophysics Data System (ADS)
Manohar, Krithika; Kaiser, Eurika; Brunton, Bingni W.; Kutz, J. Nathan; Brunton, Steven L.
2017-11-01
Optimal sensor placement is a central challenge in the prediction, estimation and control of fluid flows. We reinterpret sensor placement as optimizing discrete samples of coherent fluid structures for full state reconstruction. This permits a drastic reduction in the number of sensors required for faithful reconstruction, since complex fluid interactions can often be described by a small number of coherent structures. Our work optimizes point sensors using the pivoted matrix QR factorization to sample coherent structures directly computed from flow data. We apply this sampling technique in conjunction with various data-driven modal identification methods, including the proper orthogonal decomposition (POD) and dynamic mode decomposition (DMD). In contrast to POD-based sensors, DMD demonstrably enables the optimization of sensors for prediction in systems exhibiting multiple scales of dynamics. Finally, reconstruction accuracy from pivot sensors is shown to be competitive with sensors obtained using traditional computationally prohibitive optimization methods.
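A compact version of the pivoted-QR sampling described above, on synthetic rank-r data standing in for flow snapshots (POD via SVD; the pivot indices of QR on the transposed mode matrix become the sensor locations; all sizes are invented):

```python
import numpy as np
from scipy.linalg import qr

rng = np.random.default_rng(6)
n, m, r = 500, 200, 10
# Synthetic data: r coherent spatial structures with random amplitudes
modes = np.linalg.qr(rng.normal(size=(n, r)))[0]
X = modes @ rng.normal(size=(r, m))

# POD modes from the data
U, s, Vt = np.linalg.svd(X, full_matrices=False)
Psi = U[:, :r]

# Pivoted QR on Psi^T: the first r column pivots are the sensor locations
_, _, piv = qr(Psi.T, pivoting=True)
sensors = piv[:r]

# Reconstruct full states from the r point measurements
y = X[sensors]
X_hat = Psi @ np.linalg.solve(Psi[sensors], y)
print("sensors:", np.sort(sensors))
print("reconstruction error:",
      round(np.linalg.norm(X_hat - X) / np.linalg.norm(X), 6))
```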
Electron energy-loss spectroscopy of single nanocrystals: mapping of tin allotropes.
Roesgaard, Søren; Ramasse, Quentin; Chevallier, Jacques; Fyhn, Mogens; Julsgaard, Brian
2018-05-25
Using monochromated electron energy-loss spectroscopy (EELS), we are able to map different allotropes in Sn-nanocrystals embedded in Si. It is demonstrated that α-Sn and β-Sn, as well as an interface-related plasmon, can be distinguished in embedded Sn-nanostructures. The EELS data are interpreted by standard non-negative matrix factorization followed by a manual Lorentzian decomposition. The decomposition allows for a more physical understanding of the EELS mapping without reducing the level of information. Extending the analysis from a reference system to smaller nanocrystals demonstrates that allotrope determination is possible in nanoscale systems below 5 nm. Such local information establishes monochromated EELS mapping as a powerful technique for studying nanoscale systems. This possibility enables investigation of small nanostructures that cannot be investigated through other means, allowing for a better understanding that can ultimately lead to nanomaterials with improved properties.
NASA Technical Reports Server (NTRS)
Siljak, D. D.; Weissenberger, S.; Cuk, S. M.
1973-01-01
This report presents the development and description of the decomposition aggregation approach to stability investigations of high dimension mathematical models of dynamic systems. The high dimension vector differential equation describing a large dynamic system is decomposed into a number of lower dimension vector differential equations which represent interconnected subsystems. Then a method is described by which the stability properties of each subsystem are aggregated into a single vector Liapunov function, representing the aggregate system model, consisting of subsystem Liapunov functions as components. A linear vector differential inequality is then formed in terms of the vector Liapunov function. The matrix of the model, which reflects the stability properties of the subsystems and the nature of their interconnections, is analyzed to conclude over-all system stability characteristics. The technique is applied in detail to investigate the stability characteristics of a dynamic model of a hypothetical spinning Skylab.
NASA Astrophysics Data System (ADS)
Sridhar, J.
2015-12-01
The focus of this work is to examine polarimetric decomposition techniques, primarily the Pauli decomposition and the Sphere Di-Plane Helix (SDH) decomposition, for forest resource assessment. The data processing steps adopted are pre-processing (geometric correction and radiometric calibration), speckle reduction, image decomposition and image classification. Initially, to classify forest regions, unsupervised classification was applied to determine different unknown classes. It was observed that the K-means clustering method gave better results than the ISO Data method. Using the algorithm developed for Radar Tools, the code for the decomposition and classification techniques was written in Interactive Data Language (IDL) and applied to a RISAT-1 image of the Mysore-Mandya region of Karnataka, India. This region was chosen for studying forest vegetation and consists of agricultural lands, water and hilly regions. Polarimetric SAR data possess a high potential for classification of the earth's surface. After applying the decomposition techniques, classification was done by selecting regions of interest; post-classification, the overall accuracy was observed to be higher for the SDH-decomposed image, as SDH decomposition operates on individual pixels on a coherent basis and utilises the complete intrinsic coherent nature of polarimetric SAR data, making it particularly suited for the analysis of high-resolution SAR data. The Pauli decomposition represents all the polarimetric information in a single SAR image; however, interpretation of the resulting image is difficult. The SDH decomposition technique appears to produce better results and easier interpretation than the Pauli decomposition, although more quantification and further analysis are being done in this area of research. The comparison of polarimetric decomposition techniques and evolutionary classification techniques will form the continuing scope of this work.
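For reference, the Pauli decomposition itself is a fixed linear change of basis on the scattering matrix, as in this sketch on synthetic single-look pixels (the RGB channel assignment is the common display convention, not something specific to the study above):

```python
import numpy as np

rng = np.random.default_rng(7)
# Synthetic single-look complex scattering matrices per pixel:
# S = [[S_hh, S_hv], [S_vh, S_vv]], with reciprocity S_hv = S_vh
shape = (64, 64)
S_hh = rng.normal(size=shape) + 1j * rng.normal(size=shape)
S_vv = rng.normal(size=shape) + 1j * rng.normal(size=shape)
S_hv = 0.3 * (rng.normal(size=shape) + 1j * rng.normal(size=shape))

# Pauli target vector k = (1/sqrt(2)) [S_hh + S_vv, S_hh - S_vv, 2 S_hv]
k1 = (S_hh + S_vv) / np.sqrt(2)   # odd-bounce (surface) scattering
k2 = (S_hh - S_vv) / np.sqrt(2)   # even-bounce (double-bounce) scattering
k3 = np.sqrt(2) * S_hv            # volume / 45-degree oriented scattering

# Common display composite: R = |k2|, G = |k3|, B = |k1|, each normalized
rgb = np.stack([np.abs(k2), np.abs(k3), np.abs(k1)], axis=-1)
rgb /= rgb.max(axis=(0, 1), keepdims=True)
print("Pauli power per channel:",
      [round(float(np.mean(np.abs(k) ** 2)), 3) for k in (k1, k2, k3)])
```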
Gao, Bo-Cai; Chen, Wei
2012-06-20
The visible/infrared imaging radiometer suite (VIIRS) is now onboard the first satellite platform managed by the Joint Polar Satellite System of the National Oceanic and Atmospheric Administration and NASA. It collects scientific data from an altitude of approximately 830 km in 22 narrow bands located in the 0.4-12.5 μm range. The seven visible and near-infrared (VisNIR) bands in the wavelength interval between 0.4-0.9 μm are known to suffer from out-of-band (OOB) responses--a small amount of radiance far away from the center of a given band that can pass through the filter and reach detectors in the focal plane. A proper treatment of the OOB effects is necessary in order to obtain calibrated at-sensor radiance data [referred to as the Sensor Data Records (SDRs)] from measurements with these bands and subsequently to derive higher-level data products [referred to as the Environmental Data Records (EDRs)]. We have recently developed a new technique, called multispectral decomposition transform (MDT), which can be used to correct/remove the OOB effects of VIIRS VisNIR bands and to recover the true narrow-band radiances from the measured radiances containing OOB effects. An MDT matrix is derived from the laboratory-measured filter transmittance functions. The recovery of the narrow-band signals is performed through a matrix multiplication--the product of the MDT matrix and a multispectral vector. Hyperspectral imaging data measured from high-altitude aircraft and satellite platforms, the complete VIIRS filter functions, and the VIIRS filter functions truncated to narrower spectral intervals are used to simulate the VIIRS data with and without OOB effects. Our experimental results using the proposed MDT method have demonstrated that the average errors after decomposition are reduced by more than one order of magnitude.
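The recovery step described above reduces to a linear correction. The toy sketch below uses an invented 7-band leakage matrix in place of the real VIIRS filter-transmittance-derived MDT matrix, but shows the same mechanics: measured radiances are a band-mixing matrix times the true ones, and multiplying by its inverse restores the narrow-band values:

```python
import numpy as np

rng = np.random.default_rng(8)
n_bands = 7
# Hypothetical band-coupling matrix: diagonal = in-band response,
# small off-diagonal terms = out-of-band leakage into neighboring bands
A = np.eye(n_bands) + 0.02 * rng.random((n_bands, n_bands))
np.fill_diagonal(A, 1.0)

true = rng.uniform(50, 300, size=(n_bands, 1000))   # true narrow-band radiances
measured = A @ true                                  # measurements with OOB leakage

T = np.linalg.inv(A)                                 # decomposition-transform matrix
recovered = T @ measured
print(f"mean error before: {np.abs(measured - true).mean():.3f}, "
      f"after: {np.abs(recovered - true).mean():.2e}")
```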
NASA Technical Reports Server (NTRS)
Rancourt, J. D.; Porta, G. M.; Moyer, E. S.; Madeleine, D. G.; Taylor, L. T.
1988-01-01
Polyimide-metal oxide (Co3O4 or CuO) composite films have been prepared via in situ thermal decomposition of cobalt (II) chloride or bis(trifluoroacetylacetonato)copper(II). A soluble polyimide (XU-218) and its corresponding prepolymer (polyamide acid) were individually employed as the reaction matrix. The resulting composites exhibited a greater metal oxide concentration at the air interface with polyamide acid as the reaction matrix. The water of imidization that is released during the concurrent polyamide acid cure and additive decomposition is believed to promote metal migration and oxide formation. In contrast, XU-218 doped with either HAuCl4.3H2O or AgNO3 yields surface gold or silver when thermolyzed (300 C).
Pyrolysis and Matrix-Isolation FTIR of Acetoin
NASA Astrophysics Data System (ADS)
Cole, Sarah; Ellis, Martha; Sowards, John; McCunn, Laura R.
2017-06-01
Acetoin, CH₃C(O)CH(OH)CH₃, is an additive used in foods and cigarettes as well as a common component of biomass pyrolysate during the production of biofuels, yet little is known about its thermal decomposition mechanism. In order to identify thermal decomposition products of acetoin, a gas-phase mixture of approximately 0.3% acetoin in argon was subject to pyrolysis in a resistively heated SiC microtubular reactor at 1100-1500 K. Matrix-isolation FTIR spectroscopy was used to identify pyrolysis products. Many products were observed in analysis of the spectra, including acetylene, propyne, ethylene, and vinyl alcohol. These results provide clues to the overall mechanism of thermal decomposition and are important for predicting emissions from many industrial and residential processes.
NASA Astrophysics Data System (ADS)
Tsukamoto, Shigeru; Ono, Tomoya; Hirose, Kikuji; Blügel, Stefan
2017-03-01
The self-energy term used in transport calculations, which describes the coupling between electrode and transition regions, is able to be evaluated only from a limited number of the propagating and evanescent waves of a bulk electrode. This obviously contributes toward the reduction of the computational expenses in transport calculations. In this paper, we present a mathematical formula for reducing the computational expenses further without using any approximation and without losing accuracy. So far, the self-energy term has been handled as a matrix with the same dimension as the Hamiltonian submatrix representing the interaction between an electrode and a transition region. In this work, through the singular-value decomposition of the submatrix, the self-energy matrix is handled as a smaller matrix, whose dimension is the rank number of the Hamiltonian submatrix. This procedure is practical in the case of using the pseudopotentials in a separable form, and the computational expenses for determining the self-energy matrix are reduced by 90% when employing a code based on the real-space finite-difference formalism and projector-augmented wave method. In addition, this technique is applicable to the transport calculations using atomic or localized basis sets. Adopting the self-energy matrices obtained from this procedure, we present the calculation of the electron transport properties of C20 molecular junctions. The application demonstrates that the electron transmissions are sensitive to the orientation of the molecule with respect to the electrode surface. In addition, channel decomposition of the scattering wave functions reveals that some unoccupied C20 molecular orbitals mainly contribute to the electron conduction through the molecular junction.
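The rank trick can be demonstrated in a few lines: factorize the coupling block by SVD, contract the Green's function in the r-dimensional subspace, and expand back. The sketch below uses random matrices of invented sizes, not a real electrode Hamiltonian, and simply verifies that the reduced computation reproduces the full self-energy:

```python
import numpy as np

rng = np.random.default_rng(9)
ne, nt, r = 200, 150, 12
# Rank-r electrode-transition coupling block of the Hamiltonian
B = rng.normal(size=(ne, r)) @ rng.normal(size=(r, nt))
G = rng.normal(size=(ne, ne)) + 1j * rng.normal(size=(ne, ne))  # stand-in Green's function

# Direct route: nt x nt self-energy from the full ne x nt coupling
sigma_full = B.conj().T @ G @ B

# SVD route: keep the rank-r part and contract G in the r-dimensional subspace
U, s, Vt = np.linalg.svd(B, full_matrices=False)
U, s, Vt = U[:, :r], s[:r], Vt[:r]
core = s[:, None] * (U.conj().T @ G @ U) * s[None, :]   # r x r inner matrix
sigma_svd = Vt.conj().T @ core @ Vt

print("max |difference|:", np.abs(sigma_full - sigma_svd).max())
```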
Computationally Efficient Adaptive Beamformer for Ultrasound Imaging Based on QR Decomposition.
Park, Jongin; Wi, Seok-Min; Lee, Jin S
2016-02-01
Adaptive beamforming methods for ultrasound imaging have been studied to improve image resolution and contrast. The most common approach is the minimum variance (MV) beamformer, which minimizes the power of the beamformed output while keeping the response from the direction of interest constant. The method achieves higher resolution and better contrast than the delay-and-sum (DAS) beamformer, but it suffers from high computational cost. This cost is mainly due to the computation of the spatial covariance matrix and its inverse, which requires O(L³) computations, where L denotes the subarray size. In this study, we propose a computationally efficient MV beamformer based on QR decomposition. The idea behind our approach is to transform the spatial covariance matrix into a scalar matrix σI, from which we obtain the apodization weights and the beamformed output without computing the matrix inverse. To do so, the QR decomposition algorithm is used, which can also be executed at low cost; the computational complexity is therefore reduced to O(L²). In addition, our approach is mathematically equivalent to the conventional MV beamformer, thereby showing equivalent performance. The simulation and experimental results support the validity of our approach.
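The flavor of the approach: with snapshot matrix X, the covariance R = XX^H/K factors through the QR decomposition of X^H, so the MV weight w ∝ R⁻¹a reduces to two triangular solves. The sketch below is a generic QR-based MV solver on invented array data, not the paper's exact scalar-matrix transformation, and it verifies agreement with the explicit-inverse route:

```python
import numpy as np
from scipy.linalg import qr, solve_triangular

rng = np.random.default_rng(10)
L, K = 16, 64                                          # subarray size, snapshots
a = np.exp(1j * np.pi * np.arange(L) * np.sin(0.3))    # steering vector, look direction
X = (0.5 * np.outer(a, rng.normal(size=K))             # signal from the look direction
     + rng.normal(size=(L, K)) + 1j * rng.normal(size=(L, K)))  # noise

# Classical route: form the covariance and solve with it directly
R = X @ X.conj().T / K
w_inv = np.linalg.solve(R, a)
w_inv /= a.conj() @ w_inv                              # distortionless constraint

# QR route: R = Rt^H Rt with Rt from the QR of X^H, two triangular solves
_, Rt = qr(X.conj().T / np.sqrt(K), mode="economic")
z = solve_triangular(Rt.conj().T, a, lower=True)
w_qr = solve_triangular(Rt, z, lower=False)
w_qr /= a.conj() @ w_qr

print("max weight difference:", np.abs(w_inv - w_qr).max())
```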
Removing non-stationary noise in spectrum sensing using matrix factorization
NASA Astrophysics Data System (ADS)
van Bloem, Jan-Willem; Schiphorst, Roel; Slump, Cornelis H.
2013-12-01
Spectrum sensing is key to many applications, such as dynamic spectrum access (DSA) systems, and to telecom regulators who need to measure the utilization of frequency bands. The International Telecommunication Union (ITU) recommends a 10 dB threshold above the noise to decide whether a channel is occupied or not. However, radio frequency (RF) receiver front-ends are non-ideal. This means that the obtained data are distorted with noise and imperfections from the analog front-end. As part of the front-end, the automatic gain control (AGC) circuitry mainly affects the sensing performance, as strong adjacent signals lift the noise level. To enhance the performance of spectrum sensing significantly, we focus in this article on techniques to remove the noise caused by the AGC from the sensing data. To do this, we have applied matrix factorization techniques, i.e., SVD (singular value decomposition) and NMF (non-negative matrix factorization), which enable signal-space analysis. In addition, we use live measurement results to verify the performance and to remove the effects of the AGC from the sensing data using the above-mentioned techniques applied to block-wise available spectrum data. It is shown that the occupancy in the industrial, scientific and medical (ISM) band, obtained by using energy detection with the ITU-recommended threshold, can overestimate spectrum usage by 60%.
Scalar, Axial, and Tensor Interactions of Light Nuclei from Lattice QCD
NASA Astrophysics Data System (ADS)
Chang, Emmanuel; Davoudi, Zohreh; Detmold, William; Gambhir, Arjun S.; Orginos, Kostas; Savage, Martin J.; Shanahan, Phiala E.; Wagman, Michael L.; Winter, Frank; Nplqcd Collaboration
2018-04-01
Complete flavor decompositions of the matrix elements of the scalar, axial, and tensor currents in the proton, deuteron, diproton, and ³He at SU(3)-symmetric values of the quark masses corresponding to a pion mass mπ ∼ 806 MeV are determined using lattice quantum chromodynamics. At the physical quark masses, the scalar interactions constrain mean-field models of nuclei and the low-energy interactions of nuclei with potential dark matter candidates. The axial and tensor interactions of nuclei constrain their spin content, integrated transversity, and the quark contributions to their electric dipole moments. External fields are used to directly access the quark-line connected matrix elements of quark bilinear operators, and a combination of stochastic estimation techniques is used to determine the disconnected sea-quark contributions. The calculated matrix elements differ from, and are typically smaller than, naive single-nucleon estimates. Given the particularly large, O(10%), size of nuclear effects in the scalar matrix elements, contributions from correlated multinucleon effects should be quantified in the analysis of dark matter direct-detection experiments using nuclear targets.
Estimation of near-surface shear-wave velocity by inversion of Rayleigh waves
Xia, J.; Miller, R.D.; Park, C.B.
1999-01-01
The shear-wave (S-wave) velocity of near-surface materials (soil, rocks, pavement) and its effect on seismic-wave propagation are of fundamental interest in many groundwater, engineering, and environmental studies. Rayleigh-wave phase velocity of a layered-earth model is a function of frequency and four groups of earth properties: P-wave velocity, S-wave velocity, density, and thickness of layers. Analysis of the Jacobian matrix provides a measure of dispersion-curve sensitivity to earth properties. S-wave velocities are the dominant influence on a dispersion curve in a high-frequency range (>5 Hz), followed by layer thickness. Iterative solutions to the weighted equation by the Levenberg-Marquardt and singular-value decomposition techniques are derived to estimate near-surface shear-wave velocity and proved very effective in the high-frequency range. Convergence of the weighted solution is guaranteed through selection of the damping factor using the Levenberg-Marquardt method. Synthetic and real examples demonstrate the calculation efficiency and stability of the inverse procedure; the inverse results of the real example are verified by borehole S-wave velocity measurements.
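The damped least-squares update used in such inversions has a compact SVD form, Δm = V diag(s/(s²+λ)) Uᵀ Δd. The sketch below applies it to a toy exponential forward model standing in for the dispersion-curve kernel; the model, data and damping factor are all invented for illustration:

```python
import numpy as np

def lm_step(J, residual, lam):
    """Damped least-squares (Levenberg-Marquardt) update via SVD:
    dm = V diag(s / (s^2 + lam)) U^T residual."""
    U, s, Vt = np.linalg.svd(J, full_matrices=False)
    return Vt.T @ ((s / (s ** 2 + lam)) * (U.T @ residual))

def forward(m, x):
    """Toy nonlinear forward problem standing in for the dispersion curve."""
    return m[0] * np.exp(-m[1] * x) + m[2] * x

rng = np.random.default_rng(11)
x = np.linspace(0, 2, 40)
m_true = np.array([2.0, 1.5, 0.8])
d = forward(m_true, x) + 0.01 * rng.normal(size=x.size)

m = np.array([1.0, 1.0, 0.0])        # starting model
lam = 1e-2                           # damping factor
for _ in range(20):
    r = d - forward(m, x)
    # Finite-difference Jacobian of the forward operator
    J = np.stack([(forward(m + h, x) - forward(m, x)) / 1e-6
                  for h in 1e-6 * np.eye(3)], axis=1)
    m = m + lm_step(J, r, lam)
print("estimated model:", np.round(m, 3), "| true:", m_true)
```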
QR-decomposition based SENSE reconstruction using parallel architecture.
Ullah, Irfan; Nisar, Habab; Raza, Haseeb; Qasim, Malik; Inam, Omair; Omer, Hammad
2018-04-01
Magnetic Resonance Imaging (MRI) is a powerful medical imaging technique that provides essential clinical information about the human body. One major limitation of MRI is its long scan time. Implementation of advance MRI algorithms on a parallel architecture (to exploit inherent parallelism) has a great potential to reduce the scan time. Sensitivity Encoding (SENSE) is a Parallel Magnetic Resonance Imaging (pMRI) algorithm that utilizes receiver coil sensitivities to reconstruct MR images from the acquired under-sampled k-space data. At the heart of SENSE lies inversion of a rectangular encoding matrix. This work presents a novel implementation of GPU based SENSE algorithm, which employs QR decomposition for the inversion of the rectangular encoding matrix. For a fair comparison, the performance of the proposed GPU based SENSE reconstruction is evaluated against single and multicore CPU using openMP. Several experiments against various acceleration factors (AFs) are performed using multichannel (8, 12 and 30) phantom and in-vivo human head and cardiac datasets. Experimental results show that GPU significantly reduces the computation time of SENSE reconstruction as compared to multi-core CPU (approximately 12x speedup) and single-core CPU (approximately 53x speedup) without any degradation in the quality of the reconstructed images. Copyright © 2018 Elsevier Ltd. All rights reserved.
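Per aliased pixel group, SENSE unfolding is a small tall least-squares problem, and QR decomposition solves it without forming (E^H E)⁻¹ explicitly. A minimal sketch with random complex coil sensitivities (real sensitivities would come from calibration data; sizes are invented):

```python
import numpy as np

rng = np.random.default_rng(12)
n_coils, af = 8, 4          # 8-channel coil, acceleration factor 4
# For one aliased pixel: af true pixel values fold into each coil sample
x_true = rng.normal(size=af) + 1j * rng.normal(size=af)
E = rng.normal(size=(n_coils, af)) + 1j * rng.normal(size=(n_coils, af))  # sensitivities
y = E @ x_true + 0.01 * (rng.normal(size=n_coils) + 1j * rng.normal(size=n_coils))

# SENSE unfolding = least-squares inversion of the tall encoding matrix,
# done here through QR instead of an explicit normal-equation inverse
Q, R = np.linalg.qr(E)                       # E = QR, Q: n_coils x af, R: af x af
x_hat = np.linalg.solve(R, Q.conj().T @ y)   # back-substitution on R

print("max |error|:", np.abs(x_hat - x_true).max())
```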
HiC-spector: a matrix library for spectral and reproducibility analysis of Hi-C contact maps.
Yan, Koon-Kiu; Yardimci, Galip Gürkan; Yan, Chengfei; Noble, William S; Gerstein, Mark
2017-07-15
Genome-wide proximity ligation based assays like Hi-C have opened a window to the 3D organization of the genome. In so doing, they present data structures that are different from conventional 1D signal tracks. To exploit the 2D nature of Hi-C contact maps, matrix techniques like spectral analysis are particularly useful. Here, we present HiC-spector, a collection of matrix-related functions for analyzing Hi-C contact maps. In particular, we introduce a novel reproducibility metric for quantifying the similarity between contact maps based on spectral decomposition. The metric successfully separates contact maps mapped from Hi-C data coming from biological replicates, pseudo-replicates and different cell types. Source code in Julia and Python, and detailed documentation is available at https://github.com/gersteinlab/HiC-spector . koonkiu.yan@gmail.com or mark@gersteinlab.org. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.
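A simplified variant of such a spectral reproducibility score is easy to write down: take the leading eigenvectors of the normalized Laplacian of each contact map and sum sign-aligned eigenvector distances. The sketch below uses synthetic distance-decay maps, not Hi-C data, and is not the exact HiC-spector metric:

```python
import numpy as np

def laplacian_eigvecs(W, k=10):
    """Leading (smallest-eigenvalue) eigenvectors of the normalized Laplacian."""
    d = W.sum(axis=1)
    d[d == 0] = 1.0
    Dm12 = np.diag(1.0 / np.sqrt(d))
    Lsym = np.eye(len(W)) - Dm12 @ W @ Dm12
    _, vecs = np.linalg.eigh(Lsym)
    return vecs[:, :k]

def spectral_distance(W1, W2, k=10):
    V1, V2 = laplacian_eigvecs(W1, k), laplacian_eigvecs(W2, k)
    # Eigenvector sign is arbitrary; take the smaller of the two distances
    return sum(min(np.linalg.norm(V1[:, i] - V2[:, i]),
                   np.linalg.norm(V1[:, i] + V2[:, i])) for i in range(k))

rng = np.random.default_rng(13)
n = 120
base = np.exp(-np.abs(np.subtract.outer(np.arange(n), np.arange(n))) / 15.0)
noise = lambda: np.abs(rng.normal(scale=0.05, size=(n, n)))
sym = lambda A: (A + A.T) / 2
rep1, rep2 = sym(base + noise()), sym(base + noise())     # "biological replicates"
other = sym(np.roll(base, n // 3, axis=0) + noise())      # a structurally different map

print("replicate distance    :", round(spectral_distance(rep1, rep2), 3))
print("different-map distance:", round(spectral_distance(rep1, other), 3))
```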
Synchronized flash photolysis and pulse deposition in matrix isolation experiments
NASA Technical Reports Server (NTRS)
Allamandola, Louis J.; Lucas, Donald; Pimentel, George C.
1978-01-01
An apparatus is described which permits flash photolysis of a pulse-deposited gas mixture in a matrix isolation experiment. This technique obviates the limitations of in situ photolysis imposed by the cage effect and by secondary photolysis. The matrix is deposited in pulses at 30-s intervals and photolyzed sequentially by four synchronized flashlamps approximately 1 ms before the pulse strikes the cold surface. Pulsed deposition maintains adequate isolation and causes line narrowing, which enhances spectral sensitivity. The efficacy of flash photolysis combined with pulsed deposition for producing and trapping transient species was demonstrated by infrared detection of CF3 (from photolysis of CF3I/Ar mixtures) and of ClCO (from photolysis of Cl2/CO/Ar mixtures). The apparatus was used to study the photolytic decomposition of gaseous tricarbonylironcyclobutadiene, C4H4Fe(CO)3. The results indicate that the primary photolytic step is not elimination of C4H4, as suggested earlier, but rather of CO.
Structure and decomposition of the silver formate Ag(HCO₂)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Puzan, Anna N., E-mail: anna_puzan@mail.ru; Baumer, Vyacheslav N.; Mateychenko, Pavel V.
Crystal structure of the silver formate Ag(HCO₂) has been determined (orthorhombic, sp. gr. Pccn, a=7.1199(5), b=10.3737(4), c=6.4701(3) Å, V=477.88(4) Å³, Z=8). The structure contains isolated formate ions and Ag₂²⁺ pairs which form layers in the (001) planes (the shortest Ag–Ag distance is 2.919 Å within the pair, and 3.421 and 3.716 Å between the nearest Ag atoms of adjacent pairs). Silver formate is an unstable compound which decomposes spontaneously over time. The decomposition was studied using Rietveld analysis of the powder diffraction patterns. It was concluded that the diffusion of Ag atoms leads to the formation of plate-like metal particles as nuclei in the (100) planes which settle parallel to the (001) planes of the silver formate matrix. - Highlights: • Silver formate Ag(HCO₂) was synthesized and characterized. • Layered packing of Ag-Ag pairs in the structure was found. • Decomposition of Ag(HCO₂) and formation of the metal phase were studied. • Space relationship between the matrix structure and the forming Ag phase was revealed by Rietveld-refined micro-structural characteristics during decomposition.
Multiresolution image gathering and restoration
NASA Technical Reports Server (NTRS)
Fales, Carl L.; Huck, Friedrich O.; Alter-Gartenberg, Rachel; Rahman, Zia-Ur
1992-01-01
In this paper we integrate multiresolution decomposition with image gathering and restoration. This integration leads to a Wiener-matrix filter that accounts for the aliasing, blurring, and noise in image gathering, together with the digital filtering and decimation in signal decomposition. Moreover, as implemented here, the Wiener-matrix filter completely suppresses the blurring and raster effects of the image-display device. We demonstrate that this filter can significantly improve the fidelity and visual quality produced by conventional image reconstruction. The extent of this improvement, in turn, depends on the design of the image-gathering device.
NASA Astrophysics Data System (ADS)
Tarai, Madhumita; Kumar, Keshav; Divya, O.; Bairi, Partha; Mishra, Kishor Kumar; Mishra, Ashok Kumar
2017-09-01
The present work compares the dissimilarity and covariance based unsupervised chemometric classification approaches by taking the total synchronous fluorescence spectroscopy data sets acquired for the cumin and non-cumin based herbal preparations. The conventional decomposition method involves eigenvalue-eigenvector analysis of the covariance of the data set and finds the factors that can explain the overall major sources of variation present in the data set. The conventional approach does this irrespective of the fact that the samples belong to intrinsically different groups and hence leads to poor class separation. The present work shows that classification of such samples can be optimized by performing the eigenvalue-eigenvector decomposition on the pair-wise dissimilarity matrix.
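The contrast between the two routes can be made concrete. In the sketch below (toy "spectra" with a dominant per-sample intensity offset, invented for illustration), the covariance route's first principal component chases the offset, while the first axis of a classical-MDS eigendecomposition of a (1 - correlation) dissimilarity matrix, which is blind to per-sample offsets, tracks the class difference:

```python
import numpy as np

rng = np.random.default_rng(14)
n, p = 40, 100
t = np.linspace(0, 6, p)
bump = 0.15 * np.exp(-(t - 3.0) ** 2 / 0.1)          # class-2 shape difference
offsets = 2.0 * rng.normal(size=(2 * n, 1))          # dominant nuisance: intensity offset
X = np.sin(t) + offsets + 0.02 * rng.normal(size=(2 * n, p))
X[n:] += bump                                        # second class
labels = np.r_[np.zeros(n), np.ones(n)]

# Covariance route: eigendecomposition of the feature covariance (PCA)
Xc = X - X.mean(axis=0)
_, vecs = np.linalg.eigh(Xc.T @ Xc / (2 * n - 1))
pc1 = Xc @ vecs[:, -1]                               # scores on the top eigenvector

# Dissimilarity route: classical MDS on the double-centered (1 - corr)^2 matrix
D = 1.0 - np.corrcoef(X)
J = np.eye(2 * n) - 1.0 / (2 * n)                    # centering matrix
B = -0.5 * J @ (D ** 2) @ J
bvals, bvecs = np.linalg.eigh(B)
mds1 = bvecs[:, -1] * np.sqrt(max(bvals[-1], 0.0))

for name, s in [("PCA axis 1", pc1), ("MDS axis 1", mds1)]:
    gap = abs(s[labels == 0].mean() - s[labels == 1].mean())
    print(f"{name}: class gap / spread = {gap / s.std():.2f}")
```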
Sotiriou, Georgios A.; Singh, Dilpreet; Zhang, Fang; Chalbot, Marie-Cecile G.; Spielman-Sun, Eleanor; Hoering, Lutz; Kavouras, Ilias G.; Lowry, Gregory V.; Wohlleben, Wendel; Demokritou, Philip
2015-01-01
Nano-enabled products (NEPs) are currently part of our everyday life, prompting detailed investigation of potential nano-release across their life cycle. Particularly interesting is their end-of-life thermal decomposition scenario. Here, we examine the thermal decomposition of a widely used NEP, namely thermoplastic nanocomposites, and assess the properties of the byproducts (released aerosol and residual ash) and possible environmental health and safety implications. We focus on establishing a fundamental understanding of the effect of thermal decomposition parameters, such as polymer matrix, nanofiller properties, and decomposition temperature, on the properties of the byproducts using a recently developed lab-based experimental integrated platform. Our results indicate that the thermoplastic polymer matrix strongly influences the size and morphology of the released aerosol, while there was minimal but detectable nano-release, especially when inorganic nanofillers were used. The chemical composition of the released aerosol was found not to be strongly influenced by the presence of nanofiller, at least for the low, industry-relevant loadings assessed here. Furthermore, the morphology and composition of the residual ash were found to be strongly influenced by the presence of nanofiller. The findings presented here on thermal decomposition/incineration of NEPs raise important questions and concerns regarding the potential fate and transport of released engineered nanomaterials in environmental media and potential environmental health and safety implications. PMID:26642449
Underdetermined blind separation of three-way fluorescence spectra of PAHs in water
NASA Astrophysics Data System (ADS)
Yang, Ruifang; Zhao, Nanjing; Xiao, Xue; Zhu, Wei; Chen, Yunan; Yin, Gaofang; Liu, Jianguo; Liu, Wenqing
2018-06-01
In this work, an underdetermined blind decomposition method is developed to recognize individual components from the three-way fluorescence spectra of their mixtures by using sparse component analysis (SCA). The mixing matrix is estimated from the mixtures using a fuzzy data clustering algorithm together with the scatters corresponding to local energy maxima in the time-frequency domain, and the spectra of the object components are recovered by a pseudo-inverse technique. As an example, the spectra of three and four pure components are blindly extracted from two mixture samples using this method, with similarities between resolved and reference spectra all above 0.80. This work opens a new and effective path to monitoring PAHs in water by the three-way fluorescence spectroscopy technique.
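A loose sketch of the two SCA stages under simplifying assumptions: KMeans stands in for the paper's fuzzy clustering, unit-norm scatter directions stand in for the time-frequency energy-maximum selection, and the recovery is the plain minimum-norm pseudo-inverse mentioned in the abstract.

```python
import numpy as np
from sklearn.cluster import KMeans

def sca_separate(X, n_sources):
    # X: (n_mixtures, n_points) mixture data, assumed sparse in the analysis domain
    pts = X / (np.linalg.norm(X, axis=0, keepdims=True) + 1e-12)  # scatter directions
    pts = pts * np.sign(pts[0] + 1e-12)     # fold antipodal directions together
    km = KMeans(n_clusters=n_sources, n_init=10).fit(pts.T)
    A = km.cluster_centers_.T               # estimated mixing matrix (columns = sources)
    return np.linalg.pinv(A) @ X            # minimum-norm recovery via pseudo-inverse
```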
Improved method of step length estimation based on inverted pendulum model.
Zhao, Qi; Zhang, Boxue; Wang, Jingjing; Feng, Wenquan; Jia, Wenyan; Sun, Mingui
2017-04-01
Step length estimation is an important issue in areas such as gait analysis, sport training, and pedestrian localization. In this article, we estimate the step length of walking using a waist-worn wearable computer named eButton. Motion sensors within this device are used to record body movement from the trunk instead of the extremities. Two signal-processing techniques are applied in our algorithm design. The direction cosine matrix transforms vertical acceleration from the device coordinates to the topocentric coordinates. The empirical mode decomposition is used to remove the zero- and first-order skew effects resulting from an integration process. Our experimental results show that our algorithm performs well in step length estimation, with the estimation error increasing from 1.69% to 3.56% as the walking speed increased.
Wang, Huaqing; Li, Ruitong; Tang, Gang; Yuan, Hongfang; Zhao, Qingliang; Cao, Xi
2014-01-01
A compound fault signal usually contains multiple characteristic signals and strong confusion noise, which makes it difficult to separate weak fault signals by conventional means such as FFT-based envelope detection, wavelet transform, or empirical mode decomposition used individually. In order to improve the compound fault diagnosis of rolling bearings via signal separation, the present paper proposes a new method to identify compound faults from measured mixed signals, based on the ensemble empirical mode decomposition (EEMD) method and the independent component analysis (ICA) technique. With this approach, a vibration signal is first decomposed into intrinsic mode functions (IMFs) by the EEMD method to obtain multichannel signals. Then, according to a cross-correlation criterion, the corresponding IMFs are selected as the input matrix for ICA. Finally, the compound faults can be separated effectively by executing the ICA method, which makes the fault features more easily extracted and more clearly identified. Experimental results validate the effectiveness of the proposed method in compound fault separation, which works not only for the outer race defect, but also for the roller defect and the unbalance fault of the experimental system. PMID:25289644
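A minimal sketch of the EEMD-plus-ICA pipeline, assuming the third-party PyEMD and scikit-learn packages; the correlation threshold is an illustrative stand-in for the paper's cross-correlation criterion, not a value from the paper.

```python
import numpy as np
from PyEMD import EEMD                      # third-party package, assumed installed
from sklearn.decomposition import FastICA

def separate_compound_faults(signal, n_sources=2, corr_thresh=0.1):
    imfs = EEMD().eemd(signal)              # EEMD turns one channel into multichannel IMFs
    # cross-correlation criterion: keep IMFs that correlate with the raw signal
    keep = [imf for imf in imfs if abs(np.corrcoef(imf, signal)[0, 1]) > corr_thresh]
    X = np.array(keep).T                    # (n_samples, n_selected_imfs) input matrix for ICA
    ica = FastICA(n_components=n_sources, random_state=0)
    return ica.fit_transform(X).T           # separated fault-related sources
```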
Chao, T.T.; Sanzolone, R.F.
1992-01-01
Sample decomposition is a fundamental and integral step in the procedure of geochemical analysis. It is often the limiting factor in sample throughput, especially with the recent application of fast, modern multi-element measurement instrumentation. The complexity of geological materials makes it necessary to choose a sample decomposition technique that is compatible with the specific objective of the analysis. When selecting a decomposition technique, consideration should be given to the chemical and mineralogical characteristics of the sample, the elements to be determined, precision and accuracy requirements, sample throughput, the technical capability of personnel, and time constraints. This paper addresses these concerns and discusses the attributes and limitations of many techniques of sample decomposition, along with examples of their application to geochemical analysis. The chemical properties of reagents in their function as decomposition agents are also reviewed. The section on acid dissolution techniques addresses the various inorganic acids that are used individually or in combination in both open and closed systems. Fluxes used in sample fusion are discussed. The promising microwave-oven technology and the emerging field of automation are also examined. A section on applications highlights the use of decomposition techniques for the determination of Au, platinum group elements (PGEs), Hg, U, hydride-forming elements, rare earth elements (REEs), and multi-elements in geological materials. Partial dissolution techniques used for geochemical exploration, which have been treated in detail elsewhere, are not discussed here; nor are fire assaying for noble metals and decomposition techniques for X-ray fluorescence or nuclear methods.
A direct method for unfolding the resolution function from measurements of neutron induced reactions
NASA Astrophysics Data System (ADS)
Žugec, P.; Colonna, N.; Sabate-Gilarte, M.; Vlachoudis, V.; Massimi, C.; Lerendegui-Marco, J.; Stamatopoulos, A.; Bacak, M.; Warren, S. G.; n TOF Collaboration
2017-12-01
The paper explores the numerical stability and the computational efficiency of a direct method for unfolding the resolution function from measurements of neutron induced reactions. A detailed resolution function formalism is laid out, followed by an overview of the challenges present in a practical implementation of the method. A special matrix storage scheme is developed in order to facilitate both the memory management of the resolution function matrix and the computational efficiency of the matrix multiplication and decomposition procedures. Due to its admirable computational properties, a Cholesky decomposition is at the heart of the unfolding procedure. With the smallest but necessary modification of the matrix to be decomposed, the method is successfully applied to a system of size 10⁵ × 10⁵. However, the amplification of uncertainties during the direct inversion procedure limits the applicability of the method to high-precision measurements of neutron induced reactions.
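The core linear-algebra step can be sketched as follows, assuming a symmetric resolution matrix R; the tiny diagonal ridge stands in for the "smallest but necessary modification" that makes the matrix positive definite, and scipy's Cholesky helpers do the factor-and-solve.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def unfold_resolution(R, y, eps=1e-10):
    # R: symmetric resolution-function matrix; y: measured spectrum
    c, low = cho_factor(R + eps * np.eye(R.shape[0]))  # ridge keeps it positive definite
    return cho_solve((c, low), y)                      # solve R x = y via Cholesky factors
```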
NASA Astrophysics Data System (ADS)
Qing, Zhou; Weili, Jiao; Tengfei, Long
2014-03-01
The Rational Function Model (RFM) is a new generalized sensor model. It does not need the physical parameters of sensors to achieve an accuracy comparable to that of rigorous sensor models. At present, the main method of solving for RPCs is least squares estimation. But when the number of coefficients is large or the distribution of the control points is uneven, the classical least squares method loses its superiority due to the ill-conditioning of the design matrix. The Condition Index and Variance Decomposition Proportion (CIVDP) method is a reliable way of diagnosing multicollinearity in the design matrix. It can not only detect the multicollinearity, but also locate the parameters involved and show the corresponding columns in the design matrix. In this paper, the CIVDP method is used to diagnose the ill-conditioning problem of the RFM and to find the multicollinearity in the normal matrix.
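A compact sketch of computing condition indices and variance-decomposition proportions from the SVD of a column-scaled design matrix; the thresholds quoted in the comment are conventional rules of thumb from the collinearity-diagnostics literature, not values taken from this paper.

```python
import numpy as np

def civdp(X):
    # X: design matrix; columns are scaled to unit length first (standard practice)
    Xs = X / np.linalg.norm(X, axis=0)
    _, s, Vt = np.linalg.svd(Xs, full_matrices=False)
    cond_idx = s.max() / s                       # condition indices, one per singular value
    phi = (Vt.T ** 2) / s ** 2                   # phi[j, k]: coefficient j, component k
    pi = phi / phi.sum(axis=1, keepdims=True)    # variance-decomposition proportions
    # rule of thumb: a condition index > 30 with two or more proportions > 0.5
    # in the same component flags the involved columns as near-collinear
    return cond_idx, pi
```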
The study of Thai stock market across the 2008 financial crisis
NASA Astrophysics Data System (ADS)
Kanjamapornkul, K.; Pinčák, Richard; Bartoš, Erik
2016-11-01
The cohomology theory for financial markets allows us to deform the Kolmogorov space of time series data over a time period, with an explicit definition of eight market states in a grand unified theory. The anti-de Sitter space induced from a coupling behavior field among traders in the case of a financial market crash acts like a gravitational field in financial market spacetime. Under this hybrid mathematical superstructure, we redefine a behavior matrix by using Pauli matrices and a modified Wilson loop for time series data. We use it to detect the 2008 financial market crash by using the degree of the cohomology group of the sphere over the tensor field in the correlation matrix over all possible dominated stocks underlying Thai SET50 Index Futures. The empirical analysis of the financial tensor network was performed with the help of empirical mode decomposition and intrinsic time-scale decomposition of the correlation matrix and the calculation of the closeness centrality of a planar graph.
On the parallel solution of parabolic equations
NASA Technical Reports Server (NTRS)
Gallopoulos, E.; Saad, Youcef
1989-01-01
Parallel algorithms for the solution of linear parabolic problems are proposed. The first of these methods is based on using polynomial approximation to the exponential. It does not require solving any linear systems and is highly parallelizable. The two other methods proposed are based on Padé and Chebyshev approximations to the matrix exponential. The parallelization of these methods is achieved by using partial fraction decomposition techniques to solve the resulting systems, and thus offers the potential for increased time parallelism in time-dependent problems. Experimental results from the Alliant FX/8 and the Cray Y-MP/832 vector multiprocessors are also presented.
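To make the partial-fraction idea concrete, here is a minimal sketch with the lowest-order (1,1) Padé approximant, whose partial-fraction form is exp(z) ≈ -1 - 4/(z - 2); higher-order Padé or Chebyshev approximants contribute one shifted solve per pole, and those solves are mutually independent, which is the source of the parallelism. This is a generic illustration, not the authors' implementation.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

# (1,1)-Pade: exp(z) ~ (1 + z/2)/(1 - z/2) = -1 - 4/(z - 2), hence
# exp(dt*A) b ~ -b - 4 (dt*A - 2I)^{-1} b
def expAb_pade11(A, b, dt):
    # A: sparse system matrix (e.g. a discretized Laplacian); b: vector
    M = (dt * A - 2.0 * sp.identity(A.shape[0])).tocsc()
    return -b - 4.0 * splu(M).solve(b)
```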
The Rigid Orthogonal Procrustes Rotation Problem
ERIC Educational Resources Information Center
ten Berge, Jos M. F.
2006-01-01
The problem of rotating a matrix orthogonally to a best least squares fit with another matrix of the same order has a closed-form solution based on a singular value decomposition. The optimal rotation matrix is not necessarily rigid, but may also involve a reflection. In some applications, only rigid rotations are permitted. Gower (1976) has…
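A minimal sketch of the rigid variant: compute the SVD-based orthogonal solution and, if it turns out to be a reflection, flip the direction associated with the smallest singular value. This is the standard textbook construction, shown here only to make the distinction between orthogonal and rigid solutions concrete.

```python
import numpy as np

def rigid_procrustes(A, B):
    # Best rotation Q with det(Q) = +1 minimizing ||A Q - B||_F
    U, _, Vt = np.linalg.svd(A.T @ B)
    d = np.sign(np.linalg.det(U @ Vt))           # -1 if the unconstrained optimum reflects
    D = np.diag(np.r_[np.ones(A.shape[1] - 1), d])
    return U @ D @ Vt                            # flip the weakest direction if needed
```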
Effect of metallic coating on the properties of copper-silicon carbide composites
NASA Astrophysics Data System (ADS)
Chmielewski, M.; Pietrzak, K.; Teodorczyk, M.; Nosewicz, S.; Jarząbek, D.; Zybała, R.; Bazarnik, P.; Lewandowska, M.; Strojny-Nędza, A.
2017-11-01
In the present paper, a coating of SiC particles with a metallic layer was used to prepare copper matrix composite materials. The role of the layer was to protect the silicon carbide from decomposition and dissolution of silicon in the copper matrix during the sintering process. The SiC particles were covered with chromium, tungsten and titanium using the plasma vapour deposition method. After powder mixing of the components, the final densification via the Spark Plasma Sintering (SPS) method at a temperature of 950 °C was performed. Almost fully dense materials were obtained (>97.5%). The microstructure of the obtained composites was studied using scanning electron microscopy as well as transmission electron microscopy. The microstructural analysis of the composites confirmed that, regardless of the type of deposited material, there is no evidence of a decomposition process of silicon carbide in copper. In order to measure the strength of the interface between the ceramic particles and the metal matrix, micro tensile tests were performed. Furthermore, the thermal diffusivity was measured with the use of the laser pulse technique. In the context of the performed studies, the tungsten coating seems to be the most promising solution for heat sink applications. Compared to plain composites without a metallic layer, Cu-SiC with a W coating exhibits higher tensile strength and thermal diffusivity, irrespective of the amount of SiC reinforcement. The improvement of the composite properties is related to the advantageous condition of the Cu-SiC interface, characterized by good homogeneity and low porosity, as well as the individual properties of the tungsten coating material.
A wavelet-based technique to predict treatment outcome for Major Depressive Disorder.
Mumtaz, Wajid; Xia, Likun; Mohd Yasin, Mohd Azhar; Azhar Ali, Syed Saad; Malik, Aamir Saeed
2017-01-01
Treatment management for Major Depressive Disorder (MDD) has been challenging. However, electroencephalogram (EEG)-based predictions of antidepressant treatment outcome may help during antidepressant selection and ultimately improve the quality of life for MDD patients. In this study, a machine learning (ML) method involving pretreatment EEG data was proposed to perform such predictions for Selective Serotonin Reuptake Inhibitors (SSRIs). For this purpose, the acquisition of experimental data involved 34 MDD patients and 30 healthy controls. A feature matrix was constructed involving time-frequency decomposition of the EEG data based on wavelet transform (WT) analysis, termed the EEG data matrix. However, the resultant EEG data matrix had high dimensionality. Therefore, dimension reduction was performed with a rank-based feature selection method according to a criterion, i.e., the receiver operating characteristic (ROC). As a result, the most significant features were identified and further utilized during the training and testing of a classification model, i.e., the logistic regression (LR) classifier. Finally, the LR model was validated with 100 iterations of 10-fold cross-validation (10-CV). The classification results were compared with short-time Fourier transform (STFT) analysis and empirical mode decomposition (EMD). The wavelet features extracted from frontal and temporal EEG data were found statistically significant. In comparison with other time-frequency approaches such as the STFT and EMD, the WT analysis showed the highest classification accuracy, i.e., accuracy = 87.5%, sensitivity = 95%, and specificity = 80%. In conclusion, significant wavelet coefficients extracted from frontal and temporal pre-treatment EEG data involving the delta and theta frequency bands may predict antidepressant treatment outcome for MDD patients.
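An illustrative sketch of the wavelet-feature-plus-LR pipeline, assuming the PyWavelets and scikit-learn packages; the epoch array `epochs`, labels `y`, the db4 wavelet, and the log-energy features are assumptions for the example, not the paper's exact settings.

```python
import numpy as np
import pywt                                   # PyWavelets, assumed installed
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def wavelet_features(eeg, wavelet='db4', level=4):
    # eeg: (n_channels, n_samples) pretreatment recording (hypothetical shape)
    feats = []
    for ch in eeg:
        coeffs = pywt.wavedec(ch, wavelet, level=level)            # multilevel DWT
        feats += [np.log(np.sum(c ** 2) + 1e-12) for c in coeffs]  # band log-energies
    return np.array(feats)

# epochs: iterable of recordings, y: outcome labels (both assumed given):
# X = np.array([wavelet_features(e) for e in epochs])
# scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=10)
```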
Wang, Jianfeng; Zheng, Wei; Lin, Kan; Huang, Zhiwei
2016-01-01
We report the development and implementation of a unique integrated Mueller-matrix (MM) near-infrared (NIR) imaging and Mueller-matrix point-wise diffuse reflectance (DR) spectroscopy technique for improving colonic cancer detection and diagnosis. Point-wise MM DR spectra can be acquired from any suspicious tissue areas indicated by MM imaging. A total of 30 paired colonic tissue specimens (normal vs. cancer) were measured using the integrated MM imaging and point-wise MM DR spectroscopy system. Polar decomposition algorithms are employed on the acquired images and spectra to derive three polarization metrics, including depolarization, diattenuation and retardance, for colonic tissue characterization. The decomposition results show that tissue depolarization and retardance are significantly decreased (p<0.001, paired 2-sided Student's t-test, n = 30), while the tissue diattenuation is significantly increased (p<0.001, paired 2-sided Student's t-test, n = 30) in association with colonic cancer. Further partial least squares discriminant analysis (PLS-DA) and leave-one-tissue-site-out cross-validation (LOSCV) show that the combination of the three polarization metrics provides the best diagnostic accuracy of 95.0% (sensitivity: 93.3%, specificity: 96.7%) compared to any one of the three polarization metrics (sensitivities of 93.3%, 83.3%, and 80.0%, and specificities of 90.0%, 96.7%, and 80.0%, respectively, for the depolarization, diattenuation and retardance metrics) for colonic cancer detection. This work suggests that integrated MM NIR imaging and point-wise MM NIR diffuse reflectance spectroscopy has the potential to improve the early detection and diagnosis of malignant lesions in the colon. PMID:27446640
Stage scoring of liver fibrosis using Mueller matrix microscope
NASA Astrophysics Data System (ADS)
Zhou, Jialing; He, Honghui; Wang, Ye; Ma, Hui
2016-10-01
Liver fibrosis is a common pathological process of various chronic liver diseases, including alcoholic hepatitis, viral hepatitis, and so on. Accurate evaluation of liver fibrosis is necessary for effective therapy, and a five-stage grading system has been developed. Currently, experienced pathologists use stained liver biopsies to assess the degree of liver fibrosis, but it is difficult to obtain highly reproducible results because of large discrepancies among observers. The polarization imaging technique has the potential for scoring liver fibrosis since it is capable of probing the structural and optical properties of samples. Considering that the Mueller matrix measurement can provide comprehensive microstructural information about the tissues, in this paper we apply the Mueller matrix microscope to human liver fibrosis slices in different fibrosis stages. We extract the valid regions and adopt the Mueller matrix polar decomposition (MMPD) and Mueller matrix transformation (MMT) parameters for quantitative analysis. We also use Monte Carlo simulation to analyze the relationship between the microscopic Mueller matrix parameters and the characteristic structural changes during the fibrosis process. The experimental and Monte Carlo simulated results show good consistency. We obtain a positive correlation between the parameters and the stage of liver fibrosis. The results presented in this paper indicate that the Mueller matrix microscope can provide additional information for the detection and fibrosis scoring of liver tissues and has great potential in liver fibrosis diagnosis.
NASA Astrophysics Data System (ADS)
Roehl, Jan Hendrik; Oberrath, Jens
2016-09-01
``Active plasma resonance spectroscopy'' (APRS) is a widely used diagnostic method to measure plasma parameters such as the electron density. Measurements with APRS probes in plasmas of a few Pa typically show a broadening of the spectrum due to kinetic effects. To analyze the broadening, a general kinetic model in electrostatic approximation based on functional analytic methods has been presented [1]. One of the main results is that the system response function Y(ω) is given in terms of the matrix elements of the resolvent of the dynamic operator evaluated for values on the imaginary axis. To determine the response function of a specific probe, the resolvent has to be approximated by a huge matrix with a banded block structure. Due to this structure, a block-based LU decomposition can be implemented. It leads to a solution for Y(ω) that is given only by products of matrices of the inner block size. This LU decomposition makes it possible to analyze the influence of kinetic effects on the broadening and saves memory and calculation time. Gratitude is expressed to the internal funding of Leuphana University.
Yang, Haixuan; Seoighe, Cathal
2016-01-01
Nonnegative Matrix Factorization (NMF) has proved to be an effective method for unsupervised clustering analysis of gene expression data. By the nonnegativity constraint, NMF provides a decomposition of the data matrix into two matrices that have been used for clustering analysis. However, the decomposition is not unique. This allows different clustering results to be obtained, resulting in different interpretations of the decomposition. To alleviate this problem, some existing methods directly enforce uniqueness to some extent by adding regularization terms in the NMF objective function. Alternatively, various normalization methods have been applied to the factor matrices; however, the effects of the choice of normalization have not been carefully investigated. Here we investigate the performance of NMF for the task of cancer class discovery, under a wide range of normalization choices. After extensive evaluations, we observe that the maximum norm showed the best performance, although the maximum norm has not previously been used for NMF. Matlab codes are freely available from: http://maths.nuigalway.ie/~haixuanyang/pNMF/pNMF.htm.
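A small Python sketch of the maximum-norm normalization examined here (the paper's own code is in Matlab): rescale the columns of W by their maxima and absorb the scaling into H, so the product W @ H is unchanged; the surrogate data, rank, and scikit-learn settings are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
X = rng.random((200, 50))                  # surrogate nonnegative expression matrix
model = NMF(n_components=3, init='nndsvda', max_iter=1000, random_state=0)
W = model.fit_transform(X)                 # rows: samples, columns: factors
H = model.components_
d = W.max(axis=0)                          # maximum norm of each factor column
W, H = W / d, H * d[:, None]               # rescaled factors; W @ H is unchanged
clusters = W.argmax(axis=1)                # assign each sample to its dominant factor
```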
Polar and singular value decomposition of 3×3 magic squares
NASA Astrophysics Data System (ADS)
Trenkler, Götz; Schmidt, Karsten; Trenkler, Dietrich
2013-07-01
In this note, we find polar as well as singular value decompositions of a 3×3 magic square, i.e. a 3×3 matrix M with real elements where each row, column and diagonal adds up to the magic sum s of the magic square.
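For a concrete instance, scipy computes both decompositions of the classical Lo Shu square directly; note that M·1 = s·1 and Mᵀ·1 = s·1, so the magic sum s = 15 appears among the singular values. This is a generic illustration, not the authors' derivation.

```python
import numpy as np
from scipy.linalg import polar, svd

M = np.array([[8., 1., 6.],
              [3., 5., 7.],
              [4., 9., 2.]])     # the classical Lo Shu magic square, magic sum s = 15

U, P = polar(M)                  # polar decomposition: M = U P, U orthogonal,
                                 # P symmetric positive semidefinite
_, sigma, _ = svd(M)             # singular values; s = 15 is among them
```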
Circular Mixture Modeling of Color Distribution for Blind Stain Separation in Pathology Images.
Li, Xingyu; Plataniotis, Konstantinos N
2017-01-01
In digital pathology, to address color variation and histological component colocalization in pathology images, stain decomposition is usually performed before spectral normalization and tissue component segmentation. This paper examines the problem of stain decomposition, which is naturally a nonnegative matrix factorization (NMF) problem in algebra, and introduces a systematic and analytical solution consisting of a circular color analysis module and an NMF-based computation module. Unlike the paradigm of existing stain decomposition algorithms, where stain proportions are computed from estimated stain spectra using a matrix inverse operation directly, the introduced solution estimates stain spectra and stain depths via probabilistic reasoning individually. Since the proposed method pays extra attention to achromatic pixels in color analysis and to stain co-occurrence in pixel clustering, it achieves consistent and reliable stain decomposition with minimal decomposition residue. In particular, aware of the periodic and angular nature of hue, we propose the use of a circular von Mises mixture model to analyze the hue distribution, and provide a complete color-based pixel soft-clustering solution to address the color mixing introduced by stain overlap. This innovation, combined with saturation-weighted computation, makes our study effective for weak stains and broad-spectrum stains. Extensive experimentation on multiple public pathology datasets suggests that our approach outperforms state-of-the-art blind stain separation methods in terms of decomposition effectiveness.
Dominant modal decomposition method
NASA Astrophysics Data System (ADS)
Dombovari, Zoltan
2017-03-01
The paper deals with the automatic decomposition of experimental frequency response functions (FRFs) of mechanical structures. The decomposition of FRFs is based on the Green function representation of free vibratory systems. After the determination of the impulse dynamic subspace, the system matrix is formulated and the poles are calculated directly. By means of the corresponding eigenvectors, the contribution of each element of the impulse dynamic subspace is determined and a sufficient decomposition of the corresponding FRF is carried out. With the presented dominant modal decomposition (DMD) method, the mode shapes, the modal participation vectors and the modal scaling factors are identified using the decomposed FRFs. An analytical example is presented along with experimental case studies taken from the machine tool industry.
Kannan, R; Ievlev, A V; Laanait, N; Ziatdinov, M A; Vasudevan, R K; Jesse, S; Kalinin, S V
2018-01-01
Many spectral responses in materials science, physics, and chemistry experiments can be characterized as resulting from the superposition of a number of more basic individual spectra. In this context, unmixing is defined as the problem of determining the individual spectra, given measurements of multiple spectra that are spatially resolved across samples, as well as the determination of the corresponding abundance maps indicating the local weighting of each individual spectrum. Matrix factorization is a popular linear unmixing technique that considers that the mixture model between the individual spectra and the spatial maps is linear. Here, we present a tutorial paper targeted at domain scientists to introduce linear unmixing techniques, to facilitate greater understanding of spectroscopic imaging data. We detail a matrix factorization framework that can incorporate different domain information through various parameters of the matrix factorization method. We demonstrate many domain-specific examples to explain the expressivity of the matrix factorization framework and show how the appropriate use of domain-specific constraints such as non-negativity and sum-to-one abundance result in physically meaningful spectral decompositions that are more readily interpretable. Our aim is not only to explain the off-the-shelf available tools, but to add additional constraints when ready-made algorithms are unavailable for the task. All examples use the scalable open source implementation from https://github.com/ramkikannan/nmflibrary that can run from small laptops to supercomputers, creating a user-wide platform for rapid dissemination and adoption across scientific disciplines.
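As one concrete, hedged example of the constraints discussed above, the classic trick of appending a scaled row of ones to the endmember matrix enforces non-negativity (via NNLS) and softly enforces sum-to-one abundances; this is generic unmixing practice, not code from the linked library.

```python
import numpy as np
from scipy.optimize import nnls

def unmix(E, y, delta=1e3):
    # E: (n_channels, n_endmembers) individual spectra; y: measured mixed spectrum.
    # The appended row of ones pushes the abundances toward summing to one,
    # while NNLS keeps them non-negative.
    E_aug = np.vstack([E, delta * np.ones(E.shape[1])])
    y_aug = np.append(y, delta)
    abundances, _ = nnls(E_aug, y_aug)
    return abundances
```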
Conceptual design optimization study
NASA Technical Reports Server (NTRS)
Hollowell, S. J.; Beeman, E. R., II; Hiyama, R. M.
1990-01-01
The feasibility of applying multilevel functional decomposition and optimization techniques to conceptual design of advanced fighter aircraft was investigated. Applying the functional decomposition techniques to the conceptual design phase appears to be feasible. The initial implementation of the modified design process will optimize wing design variables. A hybrid approach, combining functional decomposition techniques for generation of aerodynamic and mass properties linear sensitivity derivatives with existing techniques for sizing mission performance and optimization, is proposed.
Efficient GW calculations using eigenvalue-eigenvector decomposition of the dielectric matrix
NASA Astrophysics Data System (ADS)
Nguyen, Huy-Viet; Pham, T. Anh; Rocca, Dario; Galli, Giulia
2011-03-01
During the past 25 years, the GW method has been successfully used to compute electronic quasi-particle excitation spectra of a variety of materials. It is, however, a computationally intensive technique, as it involves summations over occupied and empty electronic states to evaluate both the Green function (G) and the dielectric matrix (DM) entering the expression of the screened Coulomb interaction (W). Recent developments have shown that eigenpotentials of DMs can be efficiently calculated without any explicit evaluation of empty states. In this work, we will present a computationally efficient approach to the calculation of GW spectra by combining a representation of DMs in terms of their eigenpotentials and a recently developed iterative algorithm. As a demonstration of the efficiency of the method, we will present calculations of the vertical ionization potentials of several systems. Work was funded by SciDAC-e DE-FC02-06ER25777.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yako, K.; Sasano, M.; Miki, K.
2009-07-03
The double-differential cross sections for the ⁴⁸Ca(p,n) and ⁴⁸Ti(n,p) reactions were measured at 300 MeV. A multipole decomposition technique was applied to the spectra to extract the Gamow-Teller (GT) components. The integrated GT strengths up to an excitation energy of 30 MeV in ⁴⁸Sc are 15.3±2.2 and 2.8±0.3 in the (p,n) and (n,p) spectra, respectively. In the (n,p) spectra, additional GT strengths were found above 8 MeV, where shell models within the fp shell-model space predict almost no GT strength, suggesting that the present shell-model description of the nuclear matrix element of the two-neutrino double-beta decay is incomplete.
Revathi, V M; Balasubramaniam, P
2016-04-01
In this paper, the H∞ filtering problem is treated for N coupled genetic oscillator networks with time-varying delays and extrinsic molecular noises. Each individual genetic oscillator is a complex dynamical network that represents the genetic oscillations in terms of complicated biological functions, with inner or outer couplings denoting the biochemical interactions of mRNAs, proteins and other small molecules. First, by constructing an appropriate delay-decomposition-dependent Lyapunov-Krasovskii functional combined with a reciprocal convex approach, improved delay-dependent sufficient conditions are obtained to ensure the asymptotic stability of the filtering error system with a prescribed H∞ performance. Second, based on the above analysis, the existence of the designed H∞ filters is established in terms of linear matrix inequalities with Kronecker products. Finally, numerical examples, including a coupled Goodwin oscillator model, are presented to illustrate the effectiveness and reduced conservatism of the proposed techniques.
Devi, B Pushpa; Singh, Kh Manglem; Roy, Sudipta
2016-01-01
This paper proposes a new watermarking algorithm based on the shuffled singular value decomposition and visual cryptography for copyright protection of digital images. It generates the ownership and identification shares of the image based on visual cryptography. It decomposes the image into low- and high-frequency sub-bands. The low-frequency sub-band is further divided into blocks of the same size after shuffling it, and the singular value decomposition is then applied to each randomly selected block. Shares are generated by comparing one of the elements in the first column of the left orthogonal matrix with its corresponding element in the right orthogonal matrix of the singular value decomposition of each block of the low-frequency sub-band. The experimental results show that the proposed scheme clearly verifies the copyright of the digital images and is robust against several image processing attacks. Comparison with other related visual cryptography-based algorithms reveals that the proposed method gives better performance. The proposed method is especially resilient against the rotation attack.
Groupwise registration of MR brain images with tumors.
Tang, Zhenyu; Wu, Yihong; Fan, Yong
2017-08-04
A novel groupwise image registration framework is developed for registering MR brain images with tumors. Our method iteratively estimates a normal-appearance counterpart for each tumor image to be registered and constructs a directed graph (digraph) of normal-appearance images to guide the groupwise image registration. In particular, our method maps each tumor image to its normal-appearance counterpart by identifying and inpainting brain tumor regions with intensity information estimated using a low-rank plus sparse matrix decomposition based image representation technique. The estimated normal-appearance images are registered groupwise to a group center image, guided by a digraph of images so that the total length of the 'image registration paths' is minimized, and the original tumor images are then warped to the group center image using the resulting deformation fields. We have evaluated our method on both simulated and real MR brain tumor images. The registration results were evaluated with overlap measures of corresponding brain regions and the average entropy of image intensity information, and Wilcoxon signed rank tests were adopted to compare different methods with respect to their regional overlap measures. Compared with a groupwise image registration method applied to normal-appearance images estimated using the traditional low-rank plus sparse matrix decomposition based image inpainting, our method achieved higher image registration accuracy with statistical significance (p = 7.02 × 10⁻⁹).
Deng, Xinyang; Jiang, Wen; Zhang, Jiandong
2017-01-01
The zero-sum matrix game is one of the most classic game models, and it is widely used in many scientific and engineering fields. In the real world, due to the complexity of the decision-making environment, the payoffs received by players may be inexact or uncertain, which requires that the matrix game model be able to represent and deal with imprecise payoffs. To meet this requirement, this paper develops a zero-sum matrix game model with Dempster–Shafer belief structure payoffs, which effectively represents the ambiguity involved in the payoffs of a game. Then, a decomposition method is proposed to calculate the value of such a game, which is also expressed with belief structures. Moreover, since the proposed decomposition method can be computation-intensive, a Monte Carlo simulation approach is presented as an alternative solution. Finally, the proposed zero-sum matrix game with payoffs of Dempster–Shafer belief structures is illustratively applied to sensor selection and intrusion detection in sensor networks, which shows its effectiveness and application process. PMID:28430156
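For the underlying crisp zero-sum game (a single exact payoff matrix), the value and an optimal mixed strategy follow from a standard linear program; this is textbook material, shown here only to make the decomposition target concrete, and it is not the paper's belief-structure method.

```python
import numpy as np
from scipy.optimize import linprog

def game_value(A):
    # A: payoff matrix for the row player (row maximizes, column minimizes)
    m, n = A.shape
    c = np.r_[np.zeros(m), -1.0]                  # variables [x, v]; minimize -v
    A_ub = np.c_[-A.T, np.ones(n)]                # v <= x^T A e_j for every column j
    A_eq = np.r_[np.ones(m), 0.0].reshape(1, -1)  # strategy probabilities sum to one
    bounds = [(0, None)] * m + [(None, None)]     # x >= 0, v free
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n),
                  A_eq=A_eq, b_eq=[1.0], bounds=bounds)
    return -res.fun, res.x[:m]                    # game value, optimal mixed strategy
```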
Fault Detection of a Roller-Bearing System through the EMD of a Wavelet Denoised Signal
Ahn, Jong-Hyo; Kwak, Dae-Ho; Koh, Bong-Hwan
2014-01-01
This paper investigates fault detection of a roller bearing system using a wavelet denoising scheme and the proper orthogonal values (POVs) of an intrinsic mode function (IMF) covariance matrix. The IMFs of the bearing vibration signal are obtained through empirical mode decomposition (EMD). The signal screening process in the wavelet domain eliminates noise-corrupted portions that may lead to inaccurate prognosis of bearing conditions. We segmented the denoised bearing signal into several intervals and decomposed each of them into IMFs. The first IMF of each segment is collected to form a covariance matrix for calculating the POVs. We show that covariance matrices from healthy and damaged bearings exhibit different POV profiles, which can serve as a damage-sensitive feature. We also illustrate the conventional feature-extraction approach of observing the kurtosis value of the measured signal, to compare with the functionality of the proposed technique. The study demonstrates the feasibility of wavelet-based denoising, and shows through laboratory experiments that tracking the proper orthogonal values of the covariance matrix of the IMFs can be an effective and reliable measure for monitoring bearing faults. PMID:25196008
Formulating face verification with semidefinite programming.
Yan, Shuicheng; Liu, Jianzhuang; Tang, Xiaoou; Huang, Thomas S
2007-11-01
This paper presents a unified solution to three unsolved problems existing in face verification with subspace learning techniques: selection of verification threshold, automatic determination of subspace dimension, and deducing feature fusing weights. In contrast to previous algorithms which search for the projection matrix directly, our new algorithm investigates a similarity metric matrix (SMM). With a certain verification threshold, this matrix is learned by a semidefinite programming approach, along with the constraints of the kindred pairs with similarity larger than the threshold, and inhomogeneous pairs with similarity smaller than the threshold. Then, the subspace dimension and the feature fusing weights are simultaneously inferred from the singular value decomposition of the derived SMM. In addition, the weighted and tensor extensions are proposed to further improve the algorithmic effectiveness and efficiency, respectively. Essentially, the verification is conducted within an affine subspace in this new algorithm and is, hence, called the affine subspace for verification (ASV). Extensive experiments show that the ASV can achieve encouraging face verification accuracy in comparison to other subspace algorithms, even without the need to explore any parameters.
NASA Astrophysics Data System (ADS)
Snakowska, Anna; Jurkiewicz, Jerzy; Gorazd, Łukasz
2017-05-01
The paper presents a derivation of the impedance matrix based on the rigorous solution of the wave equation obtained by the Wiener-Hopf technique for a semi-infinite unflanged cylindrical duct. The impedance matrix allows one, in turn, to calculate the acoustic impedance along the duct and, as a special case, the radiation impedance. The analysis is carried out for a multimode incident wave, accounting for mode coupling at the duct outlet not only qualitatively but also quantitatively for a selected source operating inside. The quantitative evaluation of the acoustic impedance requires the mode amplitudes, which were obtained by applying the mode decomposition method to far-field pressure radiation measurements and theoretical formulae for single-mode directivity characteristics of an unflanged duct. Calculation of the acoustic impedance for a non-uniform distribution of the sound pressure and the sound velocity on a duct cross section requires determination of the acoustic power transmitted along/radiated from the duct. In the paper, the impedance matrix, the power, and the acoustic impedance were derived as functions of the Helmholtz number and the distance from the outlet.
Definition of a parametric form of nonsingular Mueller matrices.
Devlaminck, Vincent; Terrier, Patrick
2008-11-01
The goal of this paper is to propose a mathematical framework to define and analyze a general parametric form of an arbitrary nonsingular Mueller matrix. Starting from previous results about nondepolarizing matrices, we generalize the method to any nonsingular Mueller matrix. We address this problem in a six-dimensional space in order to introduce a transformation group with the same number of degrees of freedom, and explain why subsets of O(5,1), the orthogonal group associated with six-dimensional Minkowski space, are a physically admissible solution to this question. Generators of this group are used to define possible expressions of an arbitrary nonsingular Mueller matrix. Ultimately, the problem of decomposition of these matrices is addressed, and we point out that the "reverse" and "forward" decomposition concepts recently introduced may be inferred from the formalism we propose.
Factor Analytic Approach to Transitive Text Mining using Medline Descriptors
NASA Astrophysics Data System (ADS)
Stegmann, J.; Grohmann, G.
Matrix decomposition methods were applied to examples of noninteractive literature sets sharing implicit relations. Document-by-term matrices were created from downloaded PubMed literature sets, the terms being the Medical Subject Headings (MeSH descriptors) assigned to the documents. The loadings of the factors derived from singular value or eigenvalue matrix decomposition were sorted according to absolute values and subsequently inspected for positions of terms relevant to the discovery of hidden connections. It was found that only a small number of factors had to be screened to find key terms in close neighbourhood, being separated by a small number of terms only.
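An illustrative sketch of the factor-loading inspection described above, assuming each document is represented as a string of space-separated MeSH tokens (the `docs` list is hypothetical input, and the component count is arbitrary):

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import TruncatedSVD

docs = ["migraine serotonin platelet", "serotonin receptor antagonist",
        "platelet aggregation receptor"]           # hypothetical MeSH-term documents
vec = CountVectorizer()
X = vec.fit_transform(docs)                        # document-by-term matrix
svd = TruncatedSVD(n_components=2).fit(X)          # singular value decomposition
terms = np.array(vec.get_feature_names_out())
for k, load in enumerate(svd.components_):
    top = terms[np.argsort(-np.abs(load))[:10]]    # sort loadings by absolute value
    print(f"factor {k}:", top)                     # inspect term neighbourhoods
```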
Dispersion toughened ceramic composites and method for making same
Stinton, David P.; Lackey, Walter J.; Lauf, Robert J.
1986-01-01
Ceramic composites exhibiting increased fracture toughness are produced by the simultaneous codeposition of silicon carbide and titanium disilicide by chemical vapor deposition. A mixture of hydrogen, methyltrichlorosilane and titanium tetrachloride is introduced into a furnace containing a substrate such as graphite or silicon carbide. The thermal decomposition of the methyltrichlorosilane provides a silicon carbide matrix phase and the decomposition of the titanium tetrachloride provides a uniformly dispersed second phase of the intermetallic titanium disilicide within the matrix phase. The fracture toughness of the ceramic composite is in the range of about 6.5 to 7.0 MPa√m, which represents a significant increase over that of silicon carbide.
Reconstruction of Complex Network based on the Noise via QR Decomposition and Compressed Sensing.
Li, Lixiang; Xu, Dafei; Peng, Haipeng; Kurths, Jürgen; Yang, Yixian
2017-11-08
It is generally known that the states of network nodes are stable and have strong correlations in a linear network system. We find that, without a control input, compressed sensing alone cannot reconstruct complex networks in which the states of the nodes are generated by the linear network system. However, noise can drive the dynamics between nodes to break the stability of the system state. Therefore, a new method integrating QR decomposition and compressed sensing is proposed to solve the reconstruction problem of complex networks with the assistance of input noise. The state matrix of the system is decomposed by QR decomposition. We construct the measurement matrix with the aid of Gaussian noise so that the sparse input matrix can be reconstructed by compressed sensing. We also discover that noise can build a bridge between the dynamics and the topological structure. Experiments show that the proposed method is more accurate and more efficient at reconstructing four model networks and six real networks than compressed sensing alone. In addition, the proposed method can reconstruct not only sparse complex networks, but also dense complex networks.
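A loose sketch of the idea, not the authors' exact pipeline: orthogonalize the noise-driven snapshots with a QR decomposition, run a compressed-sensing-style sparse regression (Lasso) in the rotated basis, and map the coefficients back; applying the l1 penalty in the rotated basis rather than the original one is a simplification made here for brevity.

```python
import numpy as np
from sklearn.linear_model import Lasso

def reconstruct_network(X, Y, alpha=0.01):
    # X: (T, n_nodes) noise-driven state snapshots; Y: (T, n_nodes) responses,
    # assumed to satisfy Y ~ X W for a sparse coupling matrix W.
    Q, R = np.linalg.qr(X)                   # orthogonalize the regressors: X = Q R
    Z = np.column_stack([
        Lasso(alpha=alpha, fit_intercept=False, max_iter=50000)
        .fit(Q, Y[:, j]).coef_
        for j in range(Y.shape[1])])         # sparse recovery, one node at a time
    return np.linalg.solve(R, Z)             # map coefficients back: W = R^{-1} Z
```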
Lorentz force electrical impedance tomography using magnetic field measurements.
Zengin, Reyhan; Gençer, Nevzat Güneri
2016-08-21
In this study, a magnetic field measurement technique is investigated to image the electrical conductivity properties of biological tissues using Lorentz forces. This technique is based on electrical current induction using ultrasound together with an applied static magnetic field. The magnetic field intensity generated by the induced currents is measured using two coil configurations, namely, a rectangular loop coil and a novel xy coil pair. A time-varying voltage is picked up and recorded while the acoustic wave propagates along its path. The forward problem of this imaging modality is defined as the calculation of the pick-up voltages due to a given acoustic excitation and known body properties. First, the feasibility of the proposed technique is investigated analytically. The basic field equations governing the behaviour of time-varying electromagnetic fields are presented. Second, the general formulation of the partial differential equations for the scalar and magnetic vector potentials is derived. To investigate the feasibility of this technique, numerical studies are conducted using finite element method based software. To sense the pick-up voltages, a novel coil configuration (xy coil pairs) is proposed. A two-dimensional numerical geometry with a 16-element linear phased array (LPA) ultrasonic transducer (1 MHz) and a conductive body (breast fat) with five tumorous tissues is modeled. The static magnetic field is assumed to be 4 Tesla. To understand the performance of the imaging system, the sensitivity matrix is analyzed. The sensitivity matrix is obtained for two different locations of the LPA transducer with eleven steering angles from -25° to 25° at intervals of 5°. The characteristics of the imaging system are shown with the singular value decomposition (SVD) of the sensitivity matrix. The images are reconstructed with the truncated SVD algorithm. The signal-to-noise ratio in the measurements is assumed to be 80 dB. Simulation studies based on the sensitivity matrix analysis reveal that perturbations of 5 mm × 5 mm size can be detected up to a depth of 3.5 cm.
NASA Astrophysics Data System (ADS)
Liao, Haitao; Wu, Wenwang; Fang, Daining
2018-07-01
A coupled approach combining the reduced-space Sequential Quadratic Programming (SQP) method with the harmonic balance condensation technique for finding the worst resonance response is developed. The nonlinear equality constraints of the optimization problem are imposed on the condensed harmonic balance equations. Making use of the null space decomposition technique, the original optimization formulation in the full space is mathematically simplified and solved in the reduced space by means of the reduced SQP method. The transformation matrix that maps the full space to the null space of the constrained optimization problem is constructed via the coordinate basis scheme. The removal of the nonlinear equality constraints is accomplished, resulting in a simple optimization problem subject to bound constraints. Moreover, a second-order correction technique is introduced to overcome the Maratos effect. The combined application of the reduced SQP method and the condensation technique permits a large reduction of the computational cost. Finally, the effectiveness and applicability of the proposed methodology are demonstrated by two numerical examples.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sakai, H.; Yako, K.
2009-08-26
Angular distributions of the double-differential cross sections for the ⁴⁸Ca(p,n) and ⁴⁸Ti(n,p) reactions were measured at 300 MeV. A multipole decomposition technique was applied to the spectra to extract the Gamow-Teller (GT) transition strengths. In the (n,p) spectrum, extra B(GT⁺) strengths which are not predicted by the shell model calculation were found beyond 8 MeV excitation energy. These extra B(GT⁺) strengths contribute significantly to the nuclear matrix element of the two-neutrino double-beta decay.
Simultaneous Tensor Decomposition and Completion Using Factor Priors.
Chen, Yi-Lei; Hsu, Chiou-Ting Candy; Liao, Hong-Yuan Mark
2013-08-27
Tensor completion, which is a high-order extension of matrix completion, has generated a great deal of research interest in recent years. Given a tensor with incomplete entries, existing methods use either factorization or completion schemes to recover the missing parts. However, as the number of missing entries increases, factorization schemes may overfit the model because of incorrectly predefined ranks, while completion schemes may fail to interpret the model factors. In this paper, we introduce a novel concept: complete the missing entries and simultaneously capture the underlying model structure. To this end, we propose a method called Simultaneous Tensor Decomposition and Completion (STDC) that combines a rank minimization technique with Tucker model decomposition. Moreover, as the model structure is implicitly included in the Tucker model, we use factor priors, which are usually known a priori in real-world tensor objects, to characterize the underlying joint-manifold drawn from the model factors. We conducted experiments to empirically verify the convergence of our algorithm on synthetic data, and evaluate its effectiveness on various kinds of real-world data. The results demonstrate the efficacy of the proposed method and its potential usage in tensor-based applications. It also outperforms state-of-the-art methods on multilinear model analysis and visual data completion tasks.
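A hedged numpy sketch of the general idea only: a truncated Tucker (HOSVD) fit alternated with re-imposing the observed entries, in the style of hard-impute completion. STDC's factor priors and rank-minimization terms are omitted, and the helper names `unfold`/`refold` are introduced here for the example.

```python
import numpy as np

def unfold(T, n):                  # mode-n unfolding of a tensor
    return np.moveaxis(T, n, 0).reshape(T.shape[n], -1)

def refold(M, n, shape):           # inverse of unfold
    rest = [s for i, s in enumerate(shape) if i != n]
    return np.moveaxis(M.reshape([shape[n]] + rest), 0, n)

def tucker_complete(X_obs, mask, ranks, n_iter=200):
    # X_obs: data tensor (arbitrary values at missing positions); mask: True where observed
    X = np.where(mask, X_obs, X_obs[mask].mean())
    for _ in range(n_iter):
        # truncated HOSVD: leading left singular vectors of each unfolding
        U = [np.linalg.svd(unfold(X, n), full_matrices=False)[0][:, :r]
             for n, r in enumerate(ranks)]
        G = X
        for n, Un in enumerate(U):         # core: G = X  x_n  Un^T
            G = refold(Un.T @ unfold(G, n), n,
                       G.shape[:n] + (Un.shape[1],) + G.shape[n + 1:])
        Xh = G
        for n, Un in enumerate(U):         # reconstruct from the Tucker model
            Xh = refold(Un @ unfold(Xh, n), n,
                        Xh.shape[:n] + (Un.shape[0],) + Xh.shape[n + 1:])
        X = np.where(mask, X_obs, Xh)      # keep observed entries fixed
    return X
```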
NASA Astrophysics Data System (ADS)
Poggi, Valerio; Ermert, Laura; Burjanek, Jan; Michel, Clotaire; Fäh, Donat
2015-01-01
Frequency domain decomposition (FDD) is a well-established spectral technique used in civil engineering to analyse and monitor the modal response of buildings and structures. The method is based on the singular value decomposition of the cross-power spectral density matrix from simultaneous array recordings of ambient vibrations. An advantage of this method is that it retrieves not only the resonance frequencies of the investigated structure, but also the corresponding modal shapes, without the need for an absolute reference. This is an important piece of information, which can be used to validate the consistency of numerical models and analytical solutions. We apply this approach using advanced signal processing to evaluate the resonance characteristics of 2-D Alpine sedimentary valleys. In this study, we present the results obtained at Martigny, in the Rhône valley (Switzerland). For the analysis, we use 2 hr of ambient vibration recordings from a linear seismic array deployed perpendicularly to the valley axis. Only the horizontal-axial direction (SH) of the ground motion is considered. Using the FDD method, six separate resonant frequencies are retrieved together with their corresponding modal shapes. We compare the mode shapes with results from classical standard spectral ratios and numerical simulations of ambient vibration recordings.
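A compact sketch of the FDD core, assuming simultaneous recordings in `data` with shape (n_channels, n_samples): build the cross-power spectral density matrix with scipy and take an SVD at every frequency line. Peaks of the first singular value indicate resonances, and the corresponding left singular vectors are the mode-shape estimates; the Welch segment length is an illustrative choice.

```python
import numpy as np
from scipy.signal import csd

def fdd(data, fs, nperseg=1024):
    # data: (n_channels, n_samples) simultaneous ambient-vibration recordings
    n_ch = data.shape[0]
    f, _ = csd(data[0], data[0], fs=fs, nperseg=nperseg)
    G = np.zeros((len(f), n_ch, n_ch), dtype=complex)   # cross-power spectral density
    for i in range(n_ch):
        for j in range(n_ch):
            G[:, i, j] = csd(data[i], data[j], fs=fs, nperseg=nperseg)[1]
    U, S, _ = np.linalg.svd(G)             # one SVD per frequency line (broadcast)
    return f, S[:, 0], U[:, :, 0]          # peaks of S[:, 0]: resonances; U: mode shapes
```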
A general solution strategy of modified power method for higher mode solutions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Peng; Lee, Hyunsuk; Lee, Deokjung, E-mail: deokjung@unist.ac.kr
2016-01-15
A general solution strategy of the modified power iteration method for calculating higher eigenmodes has been developed and applied in continuous energy Monte Carlo simulation. The new approach adopts four features: 1) the eigendecomposition of the transfer matrix, 2) weight cancellation for higher modes, 3) population control with higher mode weights, and 4) a stabilization technique for statistical fluctuations using multi-cycle accumulations. The numerical tests of neutron transport eigenvalue problems successfully demonstrate that the new strategy can significantly accelerate the fission source convergence with stable convergence behavior while obtaining multiple higher eigenmodes at the same time. The advantages of the new strategy can be summarized as 1) the replacement of the cumbersome solution step of high-order polynomial equations required by Booth's original method with a simple matrix eigendecomposition, 2) faster fission source convergence in inactive cycles, 3) more stable behavior in both inactive and active cycles, and 4) smaller variances in active cycles. Advantages 3 and 4 can be attributed to the lower sensitivity of the new strategy to statistical fluctuations due to the multi-cycle accumulations. The application of the modified power method to continuous energy Monte Carlo simulation and the higher eigenmodes up to 4th order are reported for the first time in this paper.
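As a deterministic stand-in for the Monte Carlo setting, the role of the transfer-matrix eigendecomposition can be illustrated with a block power (subspace) iteration that converges to the first k eigenmodes; the matrix `A` is a generic placeholder for a fission transfer matrix, and none of this reproduces the paper's weight-cancellation or population-control machinery.

```python
import numpy as np

def subspace_iteration(A, k, iters=500, seed=0):
    # A: transfer-matrix stand-in; returns the k dominant eigenpairs
    rng = np.random.default_rng(seed)
    Q, _ = np.linalg.qr(rng.standard_normal((A.shape[0], k)))
    for _ in range(iters):
        Q, _ = np.linalg.qr(A @ Q)           # block power step + re-orthogonalization
    T = Q.T @ A @ Q                          # Rayleigh-Ritz projection
    evals, W = np.linalg.eig(T)
    order = np.argsort(-np.abs(evals))
    return evals[order], Q @ W[:, order]     # eigenvalues and eigenmode estimates
```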
COMPADRE: an R and web resource for pathway activity analysis by component decompositions.
Ramos-Rodriguez, Roberto-Rafael; Cuevas-Diaz-Duran, Raquel; Falciani, Francesco; Tamez-Peña, Jose-Gerardo; Trevino, Victor
2012-10-15
The analysis of biological networks has become essential to study functional genomic data. Compadre is a tool to estimate pathway/gene-set activity indexes using sub-matrix decompositions for biological network analyses. The Compadre pipeline also includes a direct use of the activity indexes: the detection of altered gene sets. For this, the gene expression sub-matrix of a gene set is decomposed into components, which are used to test differences between groups of samples. This procedure is performed with and without differentially expressed genes to decrease false calls. During this process, Compadre also performs an over-representation test. Compadre already implements four decomposition methods [principal component analysis (PCA), Isomaps, independent component analysis (ICA) and non-negative matrix factorization (NMF)], six statistical tests (t- and f-test, SAM, Kruskal-Wallis, Welch and Brown-Forsythe), several gene sets (KEGG, BioCarta, Reactome, GO and MsigDB) and can be easily expanded. Our simulation results shown in Supplementary Information suggest that Compadre detects more pathways than over-representation tools like David, Babelomics and Webgestalt, and fewer false positives than PLAGE. The output is composed of results from decomposition and over-representation analyses, providing a more complete biological picture. Examples provided in Supplementary Information show the utility, versatility and simplicity of Compadre for analyses of biological networks. Compadre is freely available at http://bioinformatica.mty.itesm.mx:8080/compadre. The R package is also available at https://sourceforge.net/p/compadre.
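The core idea, an activity index from a decomposition of the gene-set sub-matrix, can be sketched in a few lines of Python (Compadre itself is an R/web tool; the names expr, pathway_rows, and groups below are hypothetical):

import numpy as np
from scipy.stats import ttest_ind
from sklearn.decomposition import PCA

# expr: genes x samples expression matrix; pathway_rows: row indices of one gene set
sub = expr[pathway_rows, :]                                 # gene-set sub-matrix
activity = PCA(n_components=1).fit_transform(sub.T)[:, 0]   # one index per sample

# Test whether the activity index differs between two sample groups.
t, p = ttest_ind(activity[groups == 0], activity[groups == 1])
print("p-value for altered gene set:", p)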
Tang, Xin; Feng, Guo-can; Li, Xiao-xin; Cai, Jia-xin
2015-01-01
Face recognition is challenging especially when the images from different persons are similar to each other due to variations in illumination, expression, and occlusion. If we have sufficient training images of each person which can span the facial variations of that person under testing conditions, sparse representation based classification (SRC) achieves very promising results. However, in many applications, face recognition often encounters the small sample size problem arising from the small number of available training images for each person. In this paper, we present a novel face recognition framework by utilizing low-rank and sparse error matrix decomposition, and sparse coding techniques (LRSE+SC). Firstly, the low-rank matrix recovery technique is applied to decompose the face images per class into a low-rank matrix and a sparse error matrix. The low-rank matrix of each individual is a class-specific dictionary and it captures the discriminative features of this individual. The sparse error matrix represents the intra-class variations, such as illumination and expression changes. Secondly, we combine the low-rank part (representative basis) of each person into a supervised dictionary and integrate all the sparse error matrices of each individual into a within-individual variant dictionary which can be applied to represent the possible variations between the testing and training images. Then these two dictionaries are used to code the query image. The within-individual variant dictionary can be shared by all the subjects and only contributes to explaining the lighting conditions, expressions, and occlusions of the query image rather than discrimination. Finally, a reconstruction-based scheme is adopted for face recognition. Since the within-individual dictionary is introduced, LRSE+SC can handle the problem of corrupted training data and the situation that not all subjects have enough samples for training. Experimental results show that our method achieves state-of-the-art results on the AR, FERET, FRGC and LFW databases. PMID:26571112
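The low-rank plus sparse-error split used here is the robust PCA model; a minimal sketch via the inexact augmented Lagrange multiplier scheme follows (a generic RPCA solver under assumed inputs, not the authors' exact implementation; D holds the vectorized training images of one class as columns):

import numpy as np

def svt(M, tau):
    # Singular value thresholding: shrink singular values by tau.
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def shrink(M, tau):
    # Entrywise soft thresholding.
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def rpca(D, n_iter=200):
    # Decompose D ~ L + E with L low-rank and E sparse (inexact ALM).
    m, n = D.shape
    lam = 1.0 / np.sqrt(max(m, n))
    mu = 0.25 * m * n / np.abs(D).sum()
    Y = np.zeros_like(D)
    E = np.zeros_like(D)
    for _ in range(n_iter):
        L = svt(D - E + Y / mu, 1.0 / mu)
        E = shrink(D - L + Y / mu, lam / mu)
        Y += mu * (D - L - E)
    return L, E   # class-specific dictionary part and intra-class variation part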
Fast modal decomposition for optical fibers using digital holography.
Lyu, Meng; Lin, Zhiquan; Li, Guowei; Situ, Guohai
2017-07-26
Eigenmode decomposition of the light field at the output end of optical fibers can provide fundamental insights into the nature of electromagnetic-wave propagation through the fibers. Here we present a fast and complete modal decomposition technique for step-index optical fibers. The proposed technique employs digital holography to measure the light field at the output end of the multimode optical fiber, and utilizes the modal orthonormal property of the basis modes to calculate the modal coefficients of each mode. Optical experiments were carried out to demonstrate the proposed decomposition technique, showing that this approach is fast, accurate and cost-effective.
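Given the orthonormality of the fiber modes, the modal coefficients follow from overlap integrals with the measured field; a minimal numpy sketch, assuming a holographically measured complex field E on a grid with spacings dx, dy and a list modes of precomputed orthonormal mode profiles (all names hypothetical):

import numpy as np

# Overlap integral of each basis mode with the measured output field.
dA = dx * dy
coeffs = np.array([np.sum(np.conj(psi) * E) * dA for psi in modes])

# Relative modal powers, and a field reconstruction as a sanity check.
power = np.abs(coeffs) ** 2 / np.sum(np.abs(coeffs) ** 2)
E_rec = sum(c * psi for c, psi in zip(coeffs, modes))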
The 3D modeling of high numerical aperture imaging in thin films
NASA Technical Reports Server (NTRS)
Flagello, D. G.; Milster, Tom
1992-01-01
A modelling technique is described which is used to explore three dimensional (3D) image irradiance distributions formed by high numerical aperture (NA is greater than 0.5) lenses in homogeneous, linear films. This work uses a 3D modelling approach that is based on a plane-wave decomposition in the exit pupil. Each plane wave component is weighted by factors due to polarization, aberration, and input amplitude and phase terms. This is combined with a modified thin-film matrix technique to derive the total field amplitude at each point in a film by a coherent vector sum over all plane waves. Then the total irradiance is calculated. The model is used to show how asymmetries present in the polarized image change with the influence of a thin film through varying degrees of focus.
Compressed Continuous Computation v. 12/20/2016
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gorodetsky, Alex
2017-02-17
A library for performing numerical computation with low-rank functions. The (C3) library enables performing continuous linear and multilinear algebra with multidimensional functions. Common tasks include taking "matrix" decompositions of vector- or matrix-valued functions, approximating multidimensional functions in low-rank format, adding or multiplying functions together, integrating multidimensional functions.
NASA Astrophysics Data System (ADS)
Forootan, Ehsan; Kusche, Jürgen
2016-04-01
Geodetic/geophysical observations, such as the time series of global terrestrial water storage change or sea level and temperature change, represent samples of physical processes and therefore contain information about complex physical interactions with many inherent time scales. Extracting relevant information from these samples, for example quantifying the seasonality of a physical process or its variability due to large-scale ocean-atmosphere interactions, is not possible by applying simple time series approaches. In the last decades, decomposition techniques have found increasing interest for extracting patterns from geophysical observations. Traditionally, principal component analysis (PCA) and more recently independent component analysis (ICA) are common techniques to extract statistical orthogonal (uncorrelated) and independent modes that represent the maximum variance of observations, respectively. PCA and ICA can be classified as stationary signal decomposition techniques since they are based on decomposing the auto-covariance matrix or diagonalizing higher (than two)-order statistical tensors from centered time series. However, the stationarity assumption is obviously not justifiable for many geophysical and climate variables even after removing cyclic components, e.g., the seasonal cycles. In this paper, we present a new decomposition method, the complex independent component analysis (CICA, Forootan, PhD-2014), which can be applied to extract non-stationary (changing in space and time) patterns from geophysical time series. Here, CICA is derived as an extension of real-valued ICA (Forootan and Kusche, JoG-2012), where we (i) define a new complex data set using a Hilbert transformation. The complex time series contain the observed values in their real part, and the temporal rate of variability in their imaginary part. (ii) An ICA algorithm based on diagonalization of fourth-order cumulants is then applied to decompose the complex data set defined in (i). (iii) Dominant non-stationary patterns are recognized as independent complex patterns that can be used to represent the space and time amplitude and phase propagations. We present the results of CICA on simulated and real cases, e.g., for quantifying the impact of large-scale ocean-atmosphere interaction on global mass changes. Forootan (PhD-2014) Statistical signal decomposition techniques for analyzing time-variable satellite gravimetry data, PhD Thesis, University of Bonn, http://hss.ulb.uni-bonn.de/2014/3766/3766.htm. Forootan and Kusche (JoG-2012) Separation of global time-variable gravity signals into maximally independent components, Journal of Geodesy 86(7), 477-497, doi:10.1007/s00190-011-0532-5.
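Step (i), building the complex data set, amounts to forming the analytic signal (the Hilbert transform supplies the 90-degree phase-shifted component that the paper interprets as the temporal rate of variability); a minimal Python sketch assuming a centered space-time data matrix X (grid points x epochs) and a retained component count k, both hypothetical. The complex ICA step based on fourth-order cumulants is beyond this sketch:

import numpy as np
from scipy.signal import hilbert

# (i) Complex data set: real part = observations, imaginary part = Hilbert transform.
Z = hilbert(X, axis=1)

# Whitening of the complex data, the usual preprocessing before an ICA step.
Zc = Z - Z.mean(axis=1, keepdims=True)
C = Zc @ Zc.conj().T / Zc.shape[1]      # complex covariance matrix
w, V = np.linalg.eigh(C)
W = V[:, -k:] / np.sqrt(w[-k:])         # retain the k dominant components
S = W.conj().T @ Zc                     # whitened series passed to complex ICA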
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dongarra, J.J.; Hewitt, T.
1985-08-01
This note describes some experiments on simple, dense linear algebra algorithms. These experiments show that the CRAY X-MP is capable of small-grain multitasking arising from standard implementations of LU and Cholesky decomposition. The implementation described here provides the "fastest" execution rate for LU decomposition, 718 MFLOPS for a matrix of order 1000.
On the computation and updating of the modified Cholesky decomposition of a covariance matrix
NASA Technical Reports Server (NTRS)
Vanrooy, D. L.
1976-01-01
Methods for obtaining and updating the modified Cholesky decomposition (MCD) for the particular case of a covariance matrix when one is given only the original data are described. These methods are the standard method of forming the covariance matrix K and then solving for the MCD, L and D (where K = LDLᵀ); a method based on Householder reflections; and lastly, a method employing the composite-t algorithm. For many cases in the analysis of remotely sensed data, the composite-t method is the superior method despite the fact that it is the slowest one, since (1) the relative amount of time computing MCDs is often quite small, (2) its stability properties are the best of the three, and (3) it affords an efficient and numerically stable procedure for updating the MCD. The properties of these methods are discussed and FORTRAN programs implementing these algorithms are listed.
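The standard route can be sketched in a few lines of numpy: form K from the data, factor it, and rescale the Cholesky factor into the unit-triangular L and diagonal D of K = LDLᵀ. The Householder route avoids forming K explicitly via a QR factorization of the centered data (variable names are illustrative):

import numpy as np

# X: n_obs x n_vars data matrix (hypothetical)
Xc = X - X.mean(axis=0)
K = Xc.T @ Xc / (X.shape[0] - 1)     # standard method: form the covariance matrix

C = np.linalg.cholesky(K)            # K = C C^T
d = np.diag(C)
L = C / d                            # unit lower-triangular factor
D = d ** 2                           # so that K = L diag(D) L^T

# Householder route: Xc = Q R, hence K = R^T R / (n_obs - 1) without forming K.
R = np.linalg.qr(Xc, mode="r")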
NASA Astrophysics Data System (ADS)
Gou, Ming-Jiang; Yang, Ming-Lin; Sheng, Xin-Qing
2016-10-01
Mature red blood cells (RBCs) do not contain large complex nuclei or organelles, so they can be approximately regarded as homogeneous medium particles. To compute the radiation pressure force (RPF) exerted by multiple laser beams on such arbitrarily shaped homogeneous nano-particles, a fast electromagnetic optics method is demonstrated. In general, based on the Maxwell equations, the matrix equation formed by the method of moments (MOM) has many right-hand sides (RHSs) corresponding to the different laser beams. In order to accelerate solving the matrix equation, the algorithm conducts a low-rank decomposition of the excitation matrix consisting of all RHSs to identify the so-called skeleton laser beams by interpolative decomposition (ID). After the solutions corresponding to the skeletons are obtained, the desired responses can be reconstructed efficiently. Numerical results are presented to validate the developed method.
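scipy exposes an interpolative decomposition that can play this role; a minimal sketch, assuming an excitation matrix B whose columns are the right-hand sides for the individual beams (the tolerance 1e-6 is an arbitrary choice):

import numpy as np
from scipy.linalg import interpolative as sli

# B: n_unknowns x n_beams excitation matrix, one column per laser beam (hypothetical)
k, idx, proj = sli.interp_decomp(B, 1e-6)          # numerical rank at this tolerance
skeleton = sli.reconstruct_skel_matrix(B, k, idx)  # the k "skeleton" beam excitations
P = sli.reconstruct_interp_matrix(idx, proj)       # B ~= skeleton @ P

# Solve the MoM system only for the skeleton columns, then recover all
# responses by linearity: X_all ~= X_skeleton @ P.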
Controlled nucleation and growth of CdS nanoparticles in a polymer matrix.
Di Luccio, Tiziana; Laera, Anna Maria; Tapfer, Leander; Kempter, Susanne; Kraus, Robert; Nickel, Bert
2006-06-29
In-situ synchrotron X-ray diffraction (XRD) was used to monitor the thermal decomposition (thermolysis) of Cd thiolate precursors embedded in a polymer matrix and the nucleation of CdS nanoparticles. A thiolate precursor/polymer solid foil was heated to 300 degrees C in the X-ray diffraction setup of beamline W1.1 at Hasylab, and the diffraction curves were recorded at 10 degree C intervals. At temperatures above 240 degrees C, the precursor decomposition is complete and CdS nanoparticles grow within the polymer matrix, forming a nanocomposite with interesting optical properties. The nanoparticle structural properties (size and crystal structure) depend on the annealing temperature. Transmission electron microscopy (TEM) and photoluminescence (PL) analyses were used to characterize the nanoparticles. A possible mechanism driving the structural transformation of the precursor is inferred from the diffraction features arising at the different temperatures.
Advanced Background Subtraction Applied to Aeroacoustic Wind Tunnel Testing
NASA Technical Reports Server (NTRS)
Bahr, Christopher J.; Horne, William C.
2015-01-01
An advanced form of background subtraction is presented and applied to aeroacoustic wind tunnel data. A variant of this method has seen use in other fields such as climatology and medical imaging. The technique, based on an eigenvalue decomposition of the background noise cross-spectral matrix, is robust against situations where isolated background auto-spectral levels are measured to be higher than levels of combined source and background signals. It also provides an alternate estimate of the cross-spectrum, which previously might have poor definition for low signal-to-noise ratio measurements. Simulated results indicate similar performance to conventional background subtraction when the subtracted spectra are weaker than the true contaminating background levels. Superior performance is observed when the subtracted spectra are stronger than the true contaminating background levels. Experimental results show limited success in recovering signal behavior for data where conventional background subtraction fails. They also demonstrate the new subtraction technique's ability to maintain a proper coherence relationship in the modified cross-spectral matrix. Beam-forming and de-convolution results indicate the method can successfully separate sources. Results also show a reduced need for the use of diagonal removal in phased array processing, at least for the limited data sets considered.
How to Compute the Partial Fraction Decomposition without Really Trying
ERIC Educational Resources Information Center
Brazier, Richard; Boman, Eugene
2007-01-01
For various reasons there has been a recent trend in college and high school calculus courses to de-emphasize teaching the Partial Fraction Decomposition (PFD) as an integration technique. This is regrettable because the Partial Fraction Decomposition is considerably more than an integration technique. It is, in fact, a general purpose tool which…
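A computer algebra system makes the point concrete; with sympy, one arbitrary example (the rational function is an assumption chosen for illustration):

from sympy import apart, symbols

x = symbols("x")
# Partial fraction decomposition of a rational function.
print(apart((3*x + 5) / (x**2 - 3*x + 2), x))   # -> -8/(x - 1) + 11/(x - 2)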
Analysis and Prediction of Sea Ice Evolution using Koopman Mode Decomposition Techniques
2018-04-30
Title: Analysis and Prediction of Sea Ice Evolution using Koopman Mode Decomposition Techniques. Subject: Monthly Progress Report. Abstract: The program goal is analysis of sea ice dynamical behavior using Koopman Mode Decomposition (KMD) techniques. The work in the program's first month consisted of improvements to data processing code and inclusion of additional arctic sea ice ...
Recursive inverse factorization.
Rubensson, Emanuel H; Bock, Nicolas; Holmström, Erik; Niklasson, Anders M N
2008-03-14
A recursive algorithm for the inverse factorization S⁻¹ = ZZ* of Hermitian positive definite matrices S is proposed. The inverse factorization is based on iterative refinement [A.M.N. Niklasson, Phys. Rev. B 70, 193102 (2004)] combined with a recursive decomposition of S. As the computational kernel is matrix-matrix multiplication, the algorithm can be parallelized and the computational effort increases linearly with system size for systems with sufficiently sparse matrices. Recent advances in network theory are used to find appropriate recursive decompositions. We show that optimization of the so-called network modularity results in an improved partitioning compared to other approaches, in particular when the recursive inverse factorization is applied to overlap matrices of irregularly structured three-dimensional molecules.
CP decomposition approach to blind separation for DS-CDMA system using a new performance index
NASA Astrophysics Data System (ADS)
Rouijel, Awatif; Minaoui, Khalid; Comon, Pierre; Aboutajdine, Driss
2014-12-01
In this paper, we present a canonical polyadic (CP) tensor decomposition isolating the scaling matrix. This has two major implications: (i) the problem conditioning shows up explicitly and could be controlled through a constraint on the so-called coherences and (ii) a performance criterion concerning the factor matrices can be exactly calculated and is more realistic than performance metrics used in the literature. Two new algorithms optimizing the CP decomposition based on gradient descent are proposed. This decomposition is illustrated by an application to direct-sequence code division multiple access (DS-CDMA) systems; computer simulations are provided and demonstrate the good behavior of these algorithms, compared to others in the literature.
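For orientation, a bare-bones CP decomposition of a third-order tensor by classical alternating least squares is sketched below; note this is the textbook ALS baseline, not the paper's coherence-constrained gradient algorithms:

import numpy as np

def unfold(T, mode):
    # Mode-n unfolding of a 3-way tensor.
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(A, B):
    # Column-wise Kronecker product.
    r = A.shape[1]
    return np.einsum("ir,jr->ijr", A, B).reshape(-1, r)

def cp_als(T, rank, n_iter=100, seed=0):
    # T ~= sum_r a_r (outer) b_r (outer) c_r, solved one factor at a time.
    rng = np.random.default_rng(seed)
    A, B, C = (rng.standard_normal((s, rank)) for s in T.shape)
    for _ in range(n_iter):
        A = unfold(T, 0) @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = unfold(T, 1) @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = unfold(T, 2) @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C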
NASA Astrophysics Data System (ADS)
Jaber, Abobaker M.
2014-12-01
Two nonparametric methods for prediction and modeling of financial time series signals are proposed. The proposed techniques are designed to handle non-stationary and non-linear behavior and to extract meaningful signals for reliable prediction. Using the Fourier transform (FT), the methods select the significant decomposed signals that are employed for signal prediction. The proposed techniques are developed by coupling the Holt-Winters method with empirical mode decomposition (EMD) and with its smoothed extension (SEMD). To show the performance of the proposed techniques, we analyze the daily closing price of the Kuala Lumpur stock market index.
NASA Astrophysics Data System (ADS)
Teal, Paul D.; Eccles, Craig
2015-04-01
The two most successful methods of estimating the distribution of nuclear magnetic resonance relaxation times from two-dimensional data are data compression followed by application of the Butler-Reeds-Dawson algorithm, and a primal-dual interior point method using preconditioned conjugate gradient. Both of these methods have previously been presented using a truncated singular value decomposition of matrices representing the exponential kernel. In this paper it is shown that other matrix factorizations are applicable to each of these algorithms, and that these illustrate the different fundamental principles behind the operation of the algorithms. These are the rank-revealing QR (RRQR) factorization and the LDL factorization with diagonal pivoting, also known as the Bunch-Kaufman-Parlett factorization. It is shown that both algorithms can be improved by adapting the truncation as the optimization process progresses, improving the accuracy as the optimal value is approached. A variation on the interior point method, viz. the use of a barrier function instead of the primal-dual approach, is found to offer considerable improvement in terms of speed and reliability. A third type of algorithm, related to the fast iterative shrinkage-thresholding algorithm (FISTA), is applied to the problem. This method can be efficiently formulated without the use of a matrix decomposition.
NASA Technical Reports Server (NTRS)
Chew, W. C.; Song, J. M.; Lu, C. C.; Weedon, W. H.
1995-01-01
In the first phase of our work, we have concentrated on laying the foundation to develop fast algorithms, including the use of recursive structure like the recursive aggregate interaction matrix algorithm (RAIMA), the nested equivalence principle algorithm (NEPAL), the ray-propagation fast multipole algorithm (RPFMA), and the multi-level fast multipole algorithm (MLFMA). We have also investigated the use of curvilinear patches to build a basic method of moments code where these acceleration techniques can be used later. In the second phase, which is mainly reported on here, we have concentrated on implementing three-dimensional NEPAL on a massively parallel machine, the Connection Machine CM-5, and have been able to obtain some 3D scattering results. In order to understand the parallelization of codes on the Connection Machine, we have also studied the parallelization of 3D finite-difference time-domain (FDTD) code with PML material absorbing boundary condition (ABC). We found that simple algorithms like the FDTD with material ABC can be parallelized very well allowing us to solve within a minute a problem of over a million nodes. In addition, we have studied the use of the fast multipole method and the ray-propagation fast multipole algorithm to expedite matrix-vector multiplication in a conjugate-gradient solution to integral equations of scattering. We find that these methods are faster than LU decomposition for one incident angle, but are slower than LU decomposition when many incident angles are needed as in the monostatic RCS calculations.
NASA Astrophysics Data System (ADS)
Pan, Xiao-Min; Wei, Jian-Gong; Peng, Zhen; Sheng, Xin-Qing
2012-02-01
The interpolative decomposition (ID) is combined with the multilevel fast multipole algorithm (MLFMA), denoted by ID-MLFMA, to handle multiscale problems. The ID-MLFMA first generates ID levels by recursively dividing the boxes at the finest MLFMA level into smaller boxes. It is specifically shown that near-field interactions with respect to the MLFMA, in the form of the matrix vector multiplication (MVM), are efficiently approximated at the ID levels. Meanwhile, computations on far-field interactions at the MLFMA levels remain unchanged. Only a small portion of matrix entries are required to approximate coupling among well-separated boxes at the ID levels, and these submatrices can be filled without computing the complete original coupling matrix. It follows that the matrix filling in the ID-MLFMA becomes much less expensive. The memory consumed is thus greatly reduced and the MVM is accelerated as well. Several factors that may influence the accuracy, efficiency and reliability of the proposed ID-MLFMA are investigated by numerical experiments. Complex targets are calculated to demonstrate the capability of the ID-MLFMA algorithm.
Emergent causality and the N-photon scattering matrix in waveguide QED
NASA Astrophysics Data System (ADS)
Sánchez-Burillo, E.; Cadarso, A.; Martín-Moreno, L.; García-Ripoll, J. J.; Zueco, D.
2018-01-01
In this work we discuss the emergence of approximate causality in a general setup from waveguide QED—i.e. a one-dimensional propagating field interacting with a scatterer. We prove that this emergent causality translates into a structure for the N-photon scattering matrix. Our work builds on the derivation of a Lieb-Robinson-type bound for continuous models and for all coupling strengths, as well as on several intermediate results, of which we highlight: (i) the asymptotic independence of space-like separated wave packets, (ii) the proper definition of input and output scattering states, and (iii) the characterization of the ground state and correlations in the model. We illustrate our formal results by analyzing the two-photon scattering from a quantum impurity in the ultrastrong coupling regime, verifying the cluster decomposition and ground-state nature. Besides, we generalize the cluster decomposition if inelastic or Raman scattering occurs, finding the structure of the S-matrix in momentum space for linear dispersion relations. In this case, we compute the decay of the fluorescence (photon-photon correlations) caused by this S-matrix.
NASA Astrophysics Data System (ADS)
Fang, Dong-Liang; Faessler, Amand; Šimkovic, Fedor
2018-04-01
In this paper, with restored isospin symmetry, we evaluated the neutrinoless double-β decay nuclear matrix elements for 76Ge, 82Se, 130Te, 136Xe, and 150Nd for both the light and heavy neutrino mass mechanisms using the deformed quasiparticle random-phase approximation approach with realistic forces. We give detailed decompositions of the nuclear matrix elements over different intermediate states and nucleon pairs, and discuss how these decompositions are affected by the model space truncations. Compared to the spherical calculations, our results show reductions from 30% to about 60% of the nuclear matrix elements for the calculated isotopes, mainly due to the presence of the BCS overlap factor between the initial and final ground states. The comparison between different nucleon-nucleon (NN) forces with corresponding short-range correlations shows that the choice of the NN force gives roughly 20% deviations for the light neutrino exchange mechanism and much larger deviations for the heavy neutrino exchange mechanism.
Multi-color incomplete Cholesky conjugate gradient methods for vector computers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Poole, E.L.
1986-01-01
This research is concerned with the solution on vector computers of linear systems of equations Ax = b, where A is a large, sparse symmetric positive definite matrix with non-zero elements lying only along a few diagonals of the matrix. The system is solved using the incomplete Cholesky conjugate gradient method (ICCG). Multi-color orderings of the unknowns in the linear system are used to obtain p-color matrices for which a no-fill block ICCG method is implemented on the CYBER 205 with O(N/p) length vector operations in both the decomposition of A and, more importantly, in the forward and back solves necessary at each iteration of the method (N is the number of unknowns and p is a small constant). A p-colored matrix is a matrix that can be partitioned into a p x p block matrix where the diagonal blocks are diagonal matrices. The matrix is stored by diagonals and matrix multiplication by diagonals is used to carry out the decomposition of A and the forward and back solves. Additionally, if the vectors across adjacent blocks line up, then some of the overhead associated with vector startups can be eliminated in the matrix vector multiplication necessary at each conjugate gradient iteration. Necessary and sufficient conditions are given to determine which multi-color orderings of the unknowns correspond to p-color matrices, and a process is indicated for choosing multi-color orderings.
New evidence favoring multilevel decomposition and optimization
NASA Technical Reports Server (NTRS)
Padula, Sharon L.; Polignone, Debra A.
1990-01-01
The issue of the utility of multilevel decomposition and optimization remains controversial. To date, only the structural optimization community has actively developed and promoted multilevel optimization techniques. However, even this community acknowledges that multilevel optimization is ideally suited for a rather limited set of problems. It is warned that decomposition typically requires eliminating local variables by using global variables and that this in turn causes ill-conditioning of the multilevel optimization by adding equality constraints. The purpose is to suggest a new multilevel optimization technique. This technique uses behavior variables, in addition to design variables and constraints, to decompose the problem. The new technique removes the need for equality constraints, simplifies the decomposition of the design problem, simplifies the programming task, and improves the convergence speed of multilevel optimization compared to conventional optimization.
Fast heap transform-based QR-decomposition of real and complex matrices: algorithms and codes
NASA Astrophysics Data System (ADS)
Grigoryan, Artyom M.
2015-03-01
In this paper, we describe a new look at the application of Givens rotations to the QR-decomposition problem, which is similar to the method of Householder transformations. We apply the concept of the discrete heap transform, or signal-induced unitary transforms, which had been introduced by Grigoryan (2006) and used in signal and image processing. Both cases of real and complex nonsingular matrices are considered and examples of performing QR-decomposition of square matrices are given. The proposed method of QR-decomposition for the complex matrix is novel and differs from the known method of complex Givens rotation and is based on analytical equations for the heap transforms. Many examples illustrating the proposed heap transform method of QR-decomposition are given, algorithms are described in detail, and MATLAB-based codes are included.
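For reference, the classical Givens-rotation QR-decomposition that the paper builds on can be written compactly in Python (a plain textbook version, not the heap-transform variant; the paper's own codes are in MATLAB):

import numpy as np

def givens_qr(A):
    # QR decomposition of a real matrix by zeroing subdiagonal entries
    # with 2x2 Givens rotations, working bottom-up in each column.
    m, n = A.shape
    R = A.astype(float).copy()
    Q = np.eye(m)
    for j in range(n):
        for i in range(m - 1, j, -1):
            a, b = R[i - 1, j], R[i, j]
            if b == 0.0:
                continue
            r = np.hypot(a, b)
            c, s = a / r, b / r
            G = np.array([[c, s], [-s, c]])        # rotates (a, b) onto (r, 0)
            R[[i - 1, i], j:] = G @ R[[i - 1, i], j:]
            Q[:, [i - 1, i]] = Q[:, [i - 1, i]] @ G.T
    return Q, R   # A = Q @ R with Q orthogonal, R upper triangular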
NASA Astrophysics Data System (ADS)
Kafka, Orion L.; Yu, Cheng; Shakoor, Modesar; Liu, Zeliang; Wagner, Gregory J.; Liu, Wing Kam
2018-04-01
A data-driven mechanistic modeling technique is applied to a system representative of a broken-up inclusion ("stringer") within drawn nickel-titanium wire or tube, e.g., as used for arterial stents. The approach uses a decomposition of the problem into a training stage and a prediction stage. It is applied to compute the fatigue crack incubation life of a microstructure of interest under high-cycle fatigue. A parametric study of a matrix-inclusion-void microstructure is conducted. The results indicate that, within the range studied, a larger void between halves of the inclusion increases fatigue life, while larger inclusion diameter reduces fatigue life.
Asymmetric latent semantic indexing for gene expression experiments visualization.
González, Javier; Muñoz, Alberto; Martos, Gabriel
2016-08-01
We propose a new method to visualize gene expression experiments inspired by the latent semantic indexing technique originally proposed in the textual analysis context. By using the correspondence word-gene, document-experiment, we define an asymmetric similarity measure of association for genes that accounts for potential hierarchies in the data, the key to obtaining meaningful gene mappings. We use the polar decomposition to obtain the sources of asymmetry of the similarity matrix, which are later combined with previous knowledge. Genetic classes of genes are identified by means of a mixture model applied in the genes' latent space. We describe the steps of the procedure and show its utility on the Human Cancer dataset.
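scipy provides the polar decomposition directly; a minimal sketch, with S standing for a hypothetical asymmetric gene-gene similarity matrix:

import numpy as np
from scipy.linalg import polar

# Polar decomposition S = U @ P, with U orthogonal (the "rotational",
# asymmetry-carrying factor) and P symmetric positive semi-definite.
U, P = polar(S)

# Sanity checks on the factors.
assert np.allclose(S, U @ P)
assert np.allclose(P, P.T)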
Optical character recognition with feature extraction and associative memory matrix
NASA Astrophysics Data System (ADS)
Sasaki, Osami; Shibahara, Akihito; Suzuki, Takamasa
1998-06-01
A method is proposed in which handwritten characters are recognized using feature extraction and an associative memory matrix. In feature extraction, simple processes such as shifting and superimposing patterns are executed. A memory matrix is generated with singular value decomposition and by modifying small singular values. The method is optically implemented with two liquid crystal displays. Experimental results for the recognition of 25 handwritten alphabet characters clearly show the effectiveness of the method.
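The memory matrix construction can be sketched with numpy; here X stacks training feature vectors as columns and Y the associated target patterns, and flooring the small singular values at eps is one plausible reading of "modifying small singular values" (all names and the floor value are assumptions):

import numpy as np

# X: features x patterns, Y: targets x patterns (hypothetical training data)
U, s, Vt = np.linalg.svd(X, full_matrices=False)
eps = 1e-3 * s[0]
s_mod = np.maximum(s, eps)          # floor the small singular values

# Memory matrix M such that M @ X ~= Y, via the (modified) pseudoinverse of X.
M = Y @ Vt.T @ np.diag(1.0 / s_mod) @ U.T
y_recalled = M @ x_query            # associative recall for a query feature vector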
Tensor Decompositions for Learning Latent Variable Models
2012-12-08
Excerpts: the computation of eigenvalues and eigenvectors of tensors is generally significantly more complicated than their matrix counterpart (both algebraically [Qi05, CS11, Lim05] and ...). The reduction: first, let W ∈ R^(d×k) be a linear transformation such that M2(W, W) = WᵀM2W = I, where I is the k × k identity matrix (i.e., W whitens M2). One can approximate the whitening matrix W ∈ R^(d×k) from the second-moment matrix M2 ∈ R^(d×d); to do this, one first multiplies M2 by a random matrix R ∈ R^(d×k′) for some k′ ≥ k.
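The whitening step quoted above has a standard implementation via an eigendecomposition of the second-moment matrix; a small numpy sketch (M2 and the target rank k are assumed inputs):

import numpy as np

# M2: d x d symmetric second-moment matrix, k: number of components (hypothetical)
w, V = np.linalg.eigh(M2)
W = V[:, -k:] / np.sqrt(w[-k:])     # top-k eigenpairs, scaled

# W whitens M2: W.T @ M2 @ W is the k x k identity.
assert np.allclose(W.T @ M2 @ W, np.eye(k))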
Matrix eigenvalue method for free-oscillations modelling of spherical elastic bodies
NASA Astrophysics Data System (ADS)
Zábranová, E.; Hanyk, L.; Matyska, C.
2017-11-01
Deformations and changes of the gravitational potential of pre-stressed self-gravitating elastic bodies caused by free oscillations are described by means of the momentum and Poisson equations and the constitutive relation. For spherically symmetric bodies, the equations and boundary conditions are transformed into ordinary differential equations of the second order by the spherical harmonic decomposition and further discretized by highly accurate pseudospectral difference schemes on Chebyshev grids; we pay special attention to the conditions at the centre of the models. We thus obtain a series of matrix eigenvalue problems for eigenfrequencies and eigenfunctions of the free oscillations. Accuracy of the presented numerical approach is tested by means of the Rayleigh quotients calculated for the eigenfrequencies up to 500 mHz. Both the modal frequencies and eigenfunctions are benchmarked against the output from the Mineos software package based on shooting methods. The presented technique is a promising alternative to widely used methods because it is stable and with a good capability up to high frequencies.
Mode detection in turbofan inlets from near field sensor arrays.
Castres, Fabrice O; Joseph, Phillip F
2007-02-01
Knowledge of the modal content of the sound field radiated from a turbofan inlet is important for source characterization and for helping to determine noise generation mechanisms in the engine. An inverse technique for determining the mode amplitudes at the duct outlet is proposed using pressure measurements made in the near field. The radiated sound pressure from a duct is modeled by directivity patterns of cut-on modes in the near field using a model based on the Kirchhoff approximation for flanged ducts with no flow. The resulting system of equations is ill posed and it is shown that the presence of modes with eigenvalues close to a cutoff frequency results in a poorly conditioned directivity matrix. An analysis of the conditioning of this directivity matrix is carried out to assess the inversion robustness and accuracy. A physical interpretation of the singular value decomposition is given and allows us to understand the issues of ill conditioning as well as the detection performance of the radiated sound field by a given sensor array.
Intrasystem Analysis Program (IAP) code summaries
NASA Astrophysics Data System (ADS)
Dobmeier, J. J.; Drozd, A. L. S.; Surace, J. A.
1983-05-01
This report contains detailed descriptions and capabilities of the codes that comprise the Intrasystem Analysis Program. The four codes are: Intrasystem Electromagnetic Compatibility Analysis Program (IEMCAP), General Electromagnetic Model for the Analysis of Complex Systems (GEMACS), Nonlinear Circuit Analysis Program (NCAP), and Wire Coupling Prediction Models (WIRE). IEMCAP is used for computer-aided evaluation of electromagnetic compatibility (EMC) at all stages of an Air Force system's life cycle, applicable to aircraft, space/missile, and ground-based systems. GEMACS utilizes a Method of Moments (MOM) formalism with the Electric Field Integral Equation (EFIE) for the solution of electromagnetic radiation and scattering problems. The code employs both full matrix decomposition and Banded Matrix Iteration solution techniques and is expressly designed for large problems. NCAP is a circuit analysis code which uses the Volterra approach to solve for the transfer functions and node voltages of weakly nonlinear circuits. The Wire Programs deal with the application of multiconductor transmission line theory to the prediction of cable coupling for specific classes of problems.
Multiprocessor sparse L/U decomposition with controlled fill-in
NASA Technical Reports Server (NTRS)
Alaghband, G.; Jordan, H. F.
1985-01-01
Generation of the maximal compatibles of pivot elements for a class of small sparse matrices is studied. The algorithm involves a binary tree search and has a complexity exponential in the order of the matrix. Different strategies for selection of a set of compatible pivots based on the Markowitz criterion are investigated. The competing issues of parallelism and fill-in generation are studied and results are provided. A technique for obtaining an ordered compatible set directly from the ordered incompatible table is given. This technique generates a set of compatible pivots with the property of generating few fills. A new heuristic algorithm is then proposed that combines the idea of an ordered compatible set with a limited binary tree search to generate several sets of compatible pivots in linear time. Finally, an elimination set to reduce the matrix is selected. Parameters are suggested to obtain a balance between parallelism and fill-ins. Results of applying the proposed algorithms on several large application matrices are presented and analyzed.
Compton, L A; Johnson, W C
1986-05-15
Inverse circular dichroism (CD) spectra are presented for each of the five major secondary structures of proteins: alpha-helix, antiparallel and parallel beta-sheet, beta-turn, and other (random) structures. The fraction of each secondary structure in a protein is predicted by forming the dot product of the corresponding inverse CD spectrum, expressed as a vector, with the CD spectrum of the protein digitized in the same way. We show how this method is based on the construction of the generalized inverse from the singular value decomposition of a set of CD spectra corresponding to proteins whose secondary structures are known from X-ray crystallography. These inverse spectra compute secondary structure directly from protein CD spectra without resorting to least-squares fitting and standard matrix inversion techniques. In addition, spectra corresponding to the individual secondary structures, analogous to the CD spectra of synthetic polypeptides, are generated from the five most significant CD eigenvectors.
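The generalized-inverse construction translates directly into numpy; a sketch assuming a reference matrix C of digitized CD spectra (wavelengths x proteins) and a matrix F of known secondary-structure fractions (5 x proteins), both hypothetical:

import numpy as np

U, s, Vt = np.linalg.svd(C, full_matrices=False)
k = 5                                                  # five most significant eigenvectors
X = F @ Vt[:k].T @ np.diag(1.0 / s[:k]) @ U[:, :k].T   # rows = inverse CD spectra

# Predicted fractions for a new protein: dot products with its digitized CD spectrum.
fractions = X @ cd_new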
Using dynamic mode decomposition for real-time background/foreground separation in video
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kutz, Jose Nathan; Grosek, Jacob; Brunton, Steven
The technique of dynamic mode decomposition (DMD) is disclosed herein for the purpose of robustly separating video frames into background (low-rank) and foreground (sparse) components in real-time. Foreground/background separation is achieved at the computational cost of just one singular value decomposition (SVD) and one linear equation solve, thus producing results orders of magnitude faster than robust principal component analysis (RPCA). Additional techniques, including techniques for analyzing the video for multi-resolution time-scale components, and techniques for reusing computations to allow processing of streaming video in real time, are also described herein.
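A compact numpy sketch of DMD-based separation follows: modes whose temporal eigenvalues sit near zero frequency form the background, and the residual is the foreground. This is a generic exact-DMD sketch under assumed inputs (video frames flattened into the columns of X), not the disclosed real-time streaming implementation:

import numpy as np

def dmd_separate(X, rank, dt=1.0, tol=1e-2):
    # Exact DMD on snapshot pairs (X1 -> X2).
    X1, X2 = X[:, :-1], X[:, 1:]
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    U, s, Vh = U[:, :rank], s[:rank], Vh[:rank]
    Atil = U.conj().T @ X2 @ Vh.conj().T / s          # low-rank propagator
    lam, W = np.linalg.eig(Atil)
    Phi = X2 @ Vh.conj().T @ (W / s[:, None])         # DMD modes
    omega = np.log(lam) / dt                          # continuous-time eigenvalues
    b = np.linalg.lstsq(Phi, X[:, 0], rcond=None)[0]  # mode amplitudes
    t = np.arange(X.shape[1]) * dt
    bg = np.abs(omega) < tol                          # near-zero modes = background
    background = (Phi[:, bg] * b[bg]) @ np.exp(np.outer(omega[bg], t))
    return background.real, X - background.real      # background, foreground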
Scott, Jill R.; Ham, Jason E.; Durham, Bill; ...
2004-01-01
Metal polypyridines are excellent candidates for gas-phase optical experiments where their intrinsic properties can be studied without complications due to the presence of solvent. The fluorescence lifetimes of [Ru(bpy)₃]¹⁺ trapped in an optical detection cell within a Fourier transform mass spectrometer were obtained using matrix-assisted laser desorption/ionization to generate the ions with either 2,5-dihydroxybenzoic acid (DHB) or sinapinic acid (SA) as matrix. All transients acquired, whether using DHB or SA for ion generation, were best described as approximately exponential decays. The rate constant for transients derived using DHB as matrix was 4×10⁷ s⁻¹, while the rate constant using SA was 1×10⁷ s⁻¹. Some suggestions of multiple exponential decay were evident although limited by the quality of the signals. Photodissociation experiments revealed that [Ru(bpy)₃]¹⁺ generated using DHB can decompose to [Ru(bpy)₂]¹⁺, whereas ions generated using SA showed no decomposition. Comparison of the mass spectra with the fluorescence lifetimes illustrates the promise of incorporating optical detection with trapped ion mass spectrometry techniques.
NASA Astrophysics Data System (ADS)
Neuer, Marcus J.
2013-11-01
A technique for the spectral identification of strontium-90 is shown, utilising a Maximum-Likelihood deconvolution. Different deconvolution approaches are discussed and summarised. Based on the intensity distribution of the beta emission and Geant4 simulations, a combined response matrix is derived, tailored to the β- detection process in sodium iodide detectors. It includes scattering effects and attenuation by applying a base material decomposition extracted from Geant4 simulations with a CAD model for a realistic detector system. Inversion results of measurements show the agreement between deconvolution and reconstruction. A detailed investigation with additional masking sources like 40K, 226Ra and 131I shows that a contamination of strontium can be found in the presence of these nuisance sources. Identification algorithms for strontium are presented based on the derived technique. For the implementation of blind identification, an exemplary masking ratio is calculated.
Research on the application of a decoupling algorithm for structure analysis
NASA Technical Reports Server (NTRS)
Denman, E. D.
1980-01-01
The mathematical theory for decoupling mth-order matrix differential equations is presented. It is shown that the decoupling procedure can be developed from the algebraic theory of matrix polynomials. The role of eigenprojectors and latent projectors in the decoupling process is discussed and the mathematical relationships between eigenvalues, eigenvectors, latent roots, and latent vectors are developed. It is shown that the eigenvectors of the companion form of a matrix contain the latent vectors as a subset. The spectral decomposition of a matrix and the application to differential equations is given.
Li, Yu-Hua; Cheng, Su-Wen; Yuan, Chung-Shin; Lai, Tzu-Fan; Hung, Chung-Hsuang
2018-06-05
Chinese cooking fume is one of the sources of volatile organic compounds (VOCs) in the air. An innovative control technology combining photocatalytic degradation and ozone oxidation (UV/TiO₂+O₃) was developed to decompose VOCs in the cooking fume. Fiberglass filter (FGF) coated with TiO₂ was prepared by an impregnation procedure. A continuous-flow reaction system was self-designed by combining photocatalysis with an advanced ozone oxidation technique. By passing the simulated cooking fume through the FGF, the VOC decomposition efficiency in the cooking fume could be increased by about 10%. The decomposition efficiency of VOCs in the cooking fume increased and then decreased with the inlet VOC concentration. A maximum VOC decomposition efficiency of 64% was obtained at 100 ppm. A similar trend was observed for reaction temperature, with the VOC decomposition efficiencies ranging from 64 to 68%. Moreover, inlet ozone concentration had a positive effect on the decomposition of VOCs in the cooking fume for inlet ozone ≤ 1000 ppm and leveled off for inlet ozone > 1000 ppm. 34% VOC decomposition efficiency was achieved solely by ozone oxidation with or without near-UV irradiation. A maximum of 75% and 94% VOC decomposition efficiency could be achieved by the O₃+UV/TiO₂ and UV/TiO₂+O₃ techniques, respectively. The maximum decomposition efficiency of VOCs decreased to 79% when using the UV/TiO₂+O₃ technique with water added to the oil fume. Comparing the chromatographic species of VOCs in the oil fume before and after decomposition by the UV/TiO₂+O₃ technique, we found that both TVOC and individual VOC species in the oil fume were effectively decomposed.
NASA Astrophysics Data System (ADS)
Aquilante, Francesco; Gagliardi, Laura; Pedersen, Thomas Bondo; Lindh, Roland
2009-04-01
Cholesky decomposition of the atomic two-electron integral matrix has recently been proposed as a procedure for automated generation of auxiliary basis sets for the density fitting approximation [F. Aquilante et al., J. Chem. Phys. 127, 114107 (2007)]. In order to increase computational performance while maintaining accuracy, we propose here to reduce the number of primitive Gaussian functions of the contracted auxiliary basis functions by means of a second Cholesky decomposition. Test calculations show that this procedure is most beneficial in conjunction with highly contracted atomic orbital basis sets such as atomic natural orbitals, and that the error resulting from the second decomposition is negligible. We also demonstrate theoretically as well as computationally that the locality of the fitting coefficients can be controlled by means of the decomposition threshold even with the long-ranged Coulomb metric. Cholesky decomposition-based auxiliary basis sets are thus ideally suited for local density fitting approximations.
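The core operation, a pivoted (incomplete) Cholesky decomposition that stops at a threshold, can be sketched in numpy; this is a generic textbook routine under an assumed symmetric positive (semi)definite input, not the quantum chemistry implementation used by the authors:

import numpy as np

def pivoted_cholesky(A, tol=1e-8):
    # Incomplete Cholesky with diagonal pivoting: A ~= L @ L.T, where the
    # number of columns of L is the numerical rank at threshold tol.
    d = np.diag(A).astype(float).copy()
    cols = []
    while d.max() > tol:
        p = int(np.argmax(d))                 # pivot on largest remaining diagonal
        c = A[:, p].astype(float).copy()
        for l in cols:
            c -= l[p] * l                     # subtract previous contributions
        c /= np.sqrt(c[p])
        cols.append(c)
        d -= c ** 2                           # update the remaining diagonal
    return np.column_stack(cols)

# The decomposition threshold tol controls both the accuracy and the locality
# (and number) of the resulting Cholesky vectors / auxiliary functions.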
Lossless and Sufficient - Invariant Decomposition of Deterministic Target
NASA Astrophysics Data System (ADS)
Paladini, Riccardo; Ferro Famil, Laurent; Pottier, Eric; Martorella, Marco; Berizzi, Fabrizio
2011-03-01
The symmetric radar scattering matrix of a reciprocal target is projected on the circular polarization basis and is decomposed into four orientation-invariant parameters, relative phase and relative orientation. The physical interpretation of these results is found in the wave-particle nature of radar scattering due to the circular polarization nature of elemental packets of energy. The proposed decomposition is based on a left orthogonal to left Special Unitary basis, providing the target description in terms of a unitary vector. A comparison between the proposed CTD and the Cameron, Kennaugh and Krogager decompositions is also pointed out. Validation using both anechoic chamber data and airborne EMISAR data from DTU shows the effectiveness of this decomposition for the analysis of coherent targets. In a second paper we will show the application of the rotation group U(3) for the decomposition of distributed targets into nine meaningful parameters.
Multi-Centrality Graph Spectral Decompositions and Their Application to Cyber Intrusion Detection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Pin-Yu; Choudhury, Sutanay; Hero, Alfred
Many modern datasets can be represented as graphs and hence spectral decompositions such as graph principal component analysis (PCA) can be useful. Distinct from previous graph decomposition approaches based on subspace projection of a single topological feature, e.g., the centered graph adjacency matrix (graph Laplacian), we propose spectral decomposition approaches to graph PCA and graph dictionary learning that integrate multiple features, including graph walk statistics, centrality measures and graph distances to reference nodes. In this paper we propose a new PCA method for single graph analysis, called multi-centrality graph PCA (MC-GPCA), and a new dictionary learning method for ensembles of graphs, called multi-centrality graph dictionary learning (MC-GDL), both based on spectral decomposition of multi-centrality matrices. As an application to cyber intrusion detection, MC-GPCA can be an effective indicator of anomalous connectivity pattern and MC-GDL can provide discriminative basis for attack classification.
NASA Astrophysics Data System (ADS)
Mleczko, M.
2014-12-01
Polarimetric SAR data is not widely used in practice, because it is not yet available operationally from satellites. Currently we can distinguish two approaches in POL-In-SAR technology: alternating polarization imaging (Alt-POL) and fully polarimetric (QuadPol) imaging. The first is a subset of the second and is more operational, while the second is experimental because classification of these data requires a polarimetric decomposition of the scattering matrix in the first stage. In the literature, the decomposition process is divided into two types: coherent and incoherent decomposition. In this paper the decomposition methods have been tested using data from the high resolution airborne F-SAR system. Results of classification have been interpreted in the context of land cover mapping capabilities.
Traffic Simulations on Parallel Computers Using Domain Decomposition Techniques
DOT National Transportation Integrated Search
1995-01-01
Large scale simulations of Intelligent Transportation Systems (ITS) can only be achieved by using the computing resources offered by parallel computing architectures. Domain decomposition techniques are proposed which allow the performance of traffic...
Identification and modification of dominant noise sources in diesel engines
NASA Astrophysics Data System (ADS)
Hayward, Michael D.
Determination of dominant noise sources in diesel engines is an integral step in the creation of quiet engines, but is a process which can involve an extensive series of expensive, time-consuming fired and motored tests. The goal of this research is to determine dominant noise source characteristics of a diesel engine in the near and far-fields with data from fewer tests than is currently required. Pre-conditioning and use of numerically robust methods to solve a set of cross-spectral density equations results in accurate calculation of the transfer paths between the near- and far-field measurement points. Application of singular value decomposition to an input cross-spectral matrix determines the spectral characteristics of a set of independent virtual sources, that, when scaled and added, result in the input cross spectral matrix. Each virtual source power spectral density is a singular value resulting from the decomposition performed over a range of frequencies. The complex relationship between virtual and physical sources is estimated through determination of virtual source contributions to each input measurement power spectral density. The method is made more user-friendly through use of a percentage contribution color plotting technique, where different normalizations can be used to help determine the presence of sources and the strengths of their contributions. Convolution of input measurements with the estimated path impulse responses results in a set of far-field components, to which the same singular value contribution plotting technique can be applied, thus allowing dominant noise source characteristics in the far-field to also be examined. Application of the methods presented results in determination of the spectral characteristics of dominant noise sources both in the near- and far-fields from one fired test, which significantly reduces the need for extensive fired and motored testing. Finally, it is shown that the far-field noise time history of a physically altered engine can be simulated through modification of singular values and recalculation of transfer paths between input and output measurements of previously recorded data.
Through-wall image enhancement using fuzzy and QR decomposition.
Riaz, Muhammad Mohsin; Ghafoor, Abdul
2014-01-01
A QR decomposition and fuzzy logic based scheme is proposed for through-wall image enhancement. QR decomposition is less complex compared to singular value decomposition. A fuzzy inference engine assigns weights to different overlapping subspaces. Quantitative measures and visual inspection are used to analyze existing and proposed techniques.
Electrochemical and Infrared Absorption Spectroscopy Detection of SF₆ Decomposition Products.
Dong, Ming; Zhang, Chongxing; Ren, Ming; Albarracín, Ricardo; Ye, Rixin
2017-11-15
Sulfur hexafluoride (SF₆) gas-insulated electrical equipment is widely used in high-voltage (HV) and extra-high-voltage (EHV) power systems. Partial discharge (PD) and local heating can occur in the electrical equipment because of insulation faults, which results in SF₆ decomposition and ultimately generates several types of decomposition products. These SF₆ decomposition products can be qualitatively and quantitatively detected with relevant detection methods, and such detection contributes to diagnosing the internal faults and evaluating the security risks of the equipment. At present, multiple detection methods exist for analyzing the SF₆ decomposition products, and electrochemical sensing (ES) and infrared (IR) spectroscopy are well suited for application in online detection. In this study, the combination of ES with IR spectroscopy is used to detect SF₆ gas decomposition. First, the characteristics of these two detection methods are studied, and the data analysis matrix is established. Then, a qualitative and quantitative analysis ES-IR model is established by adopting a two-step approach. A SF₆ decomposition detector is designed and manufactured by combining an electrochemical sensor and IR spectroscopy technology. The detector is used to detect SF₆ gas decomposition and is verified to reliably and accurately detect the gas components and concentrations.
NASA Technical Reports Server (NTRS)
Leone, Frank A., Jr.
2015-01-01
A method is presented to represent the large-deformation kinematics of intraply matrix cracks and delaminations in continuum damage mechanics (CDM) constitutive material models. The method involves the additive decomposition of the deformation gradient tensor into 'crack' and 'bulk material' components. The response of the intact bulk material is represented by a reduced deformation gradient tensor, and the opening of an embedded cohesive interface is represented by a normalized cohesive displacement-jump vector. The rotation of the embedded interface is tracked as the material deforms and as the crack opens. The distribution of the total local deformation between the bulk material and the cohesive interface components is determined by minimizing the difference between the cohesive stress and the bulk material stress projected onto the cohesive interface. The improvements to the accuracy of CDM models that incorporate the presented method over existing approaches are demonstrated for a single element subjected to simple shear deformation and for a finite element model of a unidirectional open-hole tension specimen. The material model is implemented as a VUMAT user subroutine for the Abaqus/Explicit finite element software. The presented deformation gradient decomposition method reduces the artificial load transfer across matrix cracks subjected to large shearing deformations, and avoids the spurious secondary failure modes that often occur in analyses based on conventional progressive damage models.
Singular value description of a digital radiographic detector: Theory and measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kyprianou, Iacovos S.; Badano, Aldo; Gallas, Brandon D.
The H operator represents the deterministic performance of any imaging system. For a linear, digital imaging system, this system operator can be written in terms of a matrix, H, that describes the deterministic response of the system to a set of point objects. A singular value decomposition of this matrix results in a set of orthogonal functions (singular vectors) that form the system basis. A linear combination of these vectors completely describes the transfer of objects through the linear system, where the respective singular value associated with each singular vector describes the magnitude with which that contribution to the object is transferred through the system. This paper is focused on the measurement, analysis, and interpretation of the H matrix for digital x-ray detectors. A key ingredient in the measurement of the H matrix is the detector response to a single x ray (or infinitesimal x-ray beam). The authors have developed a method to estimate the 2D detector shift-variant, asymmetric ray response function (RRF) from multiple measured line response functions (LRFs) using a modified edge technique. The RRF measurements cover a range of x-ray incident angles from 0 deg. (equivalent location at the detector center) to 30 deg. (equivalent location at the detector edge) for a standard radiographic or cone-beam CT geometric setup. To demonstrate the method, three beam qualities were tested using the inherent, Lu/Er, and Yb beam filtration. The authors show that measures using the LRF, derived from an edge measurement, underestimate the system's performance when compared with the H matrix derived using the RRF. Furthermore, the authors show that edge measurements must be performed at multiple directions in order to capture rotational asymmetries of the RRF. The authors interpret the results of the H matrix SVD and provide correlations with the familiar MTF methodology. The benefits of the H matrix technique with regard to signal detection theory and the characterization of shift-variant imaging systems are discussed.
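A minimal sketch of this H-matrix analysis, under toy assumptions (a one-dimensional detector, a shift-variant Gaussian stand-in for the measured RRF, invented widths): assemble the system matrix from simulated point-object responses and inspect its singular value spectrum.

```python
# Hedged sketch of building H from point responses and reading off its basis.
import numpy as np

n = 64                                     # detector pixels
x = np.arange(n)

def rrf(center, width):                    # stand-in ray response function
    return np.exp(-0.5 * ((x - center) / width) ** 2)

# Shift-variant system: the response broadens toward the detector edge
H = np.stack([rrf(c, 1.0 + 0.03 * abs(c - n / 2)) for c in x], axis=1)

U, s, Vt = np.linalg.svd(H)
effective_rank = int(np.sum(s > 1e-3 * s[0]))
print("leading singular values:", s[:5].round(3), "effective rank:", effective_rank)

# Any object is transferred as a weighted sum of singular vectors:
obj = np.exp(-0.5 * ((x - 20.0) / 3.0) ** 2)
img = H @ obj                              # equals U @ (s * (Vt @ obj))
```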
Liang, Bin; Li, Yongbao; Wei, Ran; Guo, Bin; Xu, Xuang; Liu, Bo; Li, Jiafeng; Wu, Qiuwen; Zhou, Fugen
2018-01-05
With robot-controlled linac positioning, robotic radiotherapy systems such as CyberKnife significantly increase the freedom of radiation beam placement, but also impose more challenges on treatment plan optimization. The resampling mechanism in the vendor-supplied treatment planning system (MultiPlan) cannot fully explore the increased beam direction search space. Besides, a sparse treatment plan (using fewer beams) is desired to improve treatment efficiency. This study proposes a singular value decomposition linear programming (SVDLP) optimization technique for circular collimator based robotic radiotherapy. The SVDLP approach initializes the input beams by simulating the process of covering the entire target volume with equivalent beam tapers. The requirements on the dosimetry distribution are modeled as hard and soft constraints, and the sparsity of the treatment plan is achieved by compressive sensing. The proposed linear programming (LP) model optimizes the beam weights by minimizing the deviation of the soft constraints subject to the hard constraints, with a constraint on the ℓ1 norm of the beam weight. A singular value decomposition (SVD) based acceleration technique was developed for the LP model. Based on the degeneracy of the influence matrix, the model is first compressed into a lower dimension for optimization, and then back-projected to reconstruct the beam weight. After beam weight optimization, the number of beams is reduced by removing the beams with low weight, and optimizing the weights of the remaining beams using the same model. This beam reduction technique is further validated by a mixed integer programming (MIP) model. The SVDLP approach was tested on a lung case. The results demonstrate that the SVD acceleration technique speeds up the optimization by a factor of 4.8. Furthermore, the beam reduction achieves a similar plan quality to the globally optimal plan obtained by the MIP model, but is one to two orders of magnitude faster. In addition, the SVDLP approach was tested and compared with MultiPlan on three clinical cases of varying complexities. In general, the plans generated by the SVDLP achieve a steeper dose gradient, better conformity and less damage to normal tissues. In conclusion, the SVDLP approach effectively improves the quality of the treatment plan due to the use of the complete beam search space. This challenging optimization problem with the complete beam search space is effectively handled by the proposed SVD acceleration.
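The compression step at the core of the SVD acceleration can be sketched as follows. This is a hedged toy version: plain least squares stands in for the full LP with hard/soft dose constraints and the ℓ1 sparsity term, and the influence matrix D, prescription d_presc, and thresholds are synthetic.

```python
# Hedged sketch: exploit the rank degeneracy of the influence matrix, solve a
# small reduced problem, and back-project to beam weights.
import numpy as np

rng = np.random.default_rng(1)
n_vox, n_beams, true_rank = 2000, 600, 40
D = rng.normal(size=(n_vox, true_rank)) @ rng.normal(size=(true_rank, n_beams))
d_presc = rng.uniform(1.8, 2.2, size=n_vox)    # toy prescribed dose

U, s, Vt = np.linalg.svd(D, full_matrices=False)
k = int(np.sum(s > 1e-8 * s[0]))               # numerical rank of D
Uk, sk, Vtk = U[:, :k], s[:k], Vt[:k]

y = (Uk.T @ d_presc) / sk                      # solve in k unknowns, not n_beams
w = Vtk.T @ y                                  # back-project to beam weights

# Beam-reduction step: drop low-weight beams, then one would re-optimize
keep = np.abs(w) > 0.05 * np.abs(w).max()
print(f"rank {k}; beams kept: {keep.sum()} / {n_beams}")
```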
NASA Astrophysics Data System (ADS)
Lanen, Theo A.; Watt, David W.
1995-10-01
Singular value decomposition has served as a diagnostic tool in optical computed tomography by using its capability to provide insight into the condition of ill-posed inverse problems. Various tomographic geometries are compared to one another through the singular value spectrum of their weight matrices. The number of significant singular values in the singular value spectrum of a weight matrix is a quantitative measure of the condition of the system of linear equations defined by a tomographic geometry. The analysis involves variation of the following five parameters characterizing a tomographic geometry: 1) the spatial resolution of the reconstruction domain, 2) the number of views, 3) the number of projection rays per view, 4) the total observation angle spanned by the views, and 5) the selected basis function. Five local basis functions are considered: the square pulse, the triangle, the cubic B-spline, the Hanning window, and the Gaussian distribution. The effects of noise in the views, the coding accuracy of the weight matrix, and the numerical accuracy of the singular value decomposition procedure itself are also assessed.
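A small sketch of this conditioning measure, under simplifying assumptions (parallel-ray geometry, a Gaussian basis function, invented grid and view counts): build a toy weight matrix and count its significant singular values.

```python
# Hedged sketch: significant singular values of a limited-angle weight matrix.
import numpy as np

nx = 16                                     # reconstruction grid: nx x nx pixels
views, rays = 8, 24                         # number of views / rays per view
angles = np.linspace(0, np.pi / 2, views)   # total observation angle: 90 deg

xs, ys = np.meshgrid(np.arange(nx) - nx / 2, np.arange(nx) - nx / 2)
rows = []
for th in angles:
    t = xs.ravel() * np.cos(th) + ys.ravel() * np.sin(th)  # signed ray distance
    for r in np.linspace(-nx / 2, nx / 2, rays):
        rows.append(np.exp(-0.5 * ((t - r) / 0.7) ** 2))   # Gaussian basis weight

W = np.array(rows)                          # weight matrix: (views*rays) x nx^2
s = np.linalg.svd(W, compute_uv=False)
significant = int(np.sum(s > 1e-3 * s[0]))
print(f"{significant} significant singular values out of {len(s)}")
```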
NASA Astrophysics Data System (ADS)
Bai, Xue-Mei; Liu, Tie; Liu, De-Long; Wei, Yong-Ju
2018-02-01
A chemometrics-assisted excitation-emission matrix (EEM) fluorescence method was proposed for simultaneous determination of α-asarone and β-asarone in Acorus tatarinowii. Using the strategy of combining EEM data with chemometrics methods, the simultaneous determination of α-asarone and β-asarone in the complex Traditional Chinese medicine system was achieved successfully, even in the presence of unexpected interferents. The physical or chemical separation step was avoided due to the use of "mathematical separation". Six second-order calibration methods were used, including parallel factor analysis (PARAFAC), alternating trilinear decomposition (ATLD), alternating penalty trilinear decomposition (APTLD), self-weighted alternating trilinear decomposition (SWATLD), and unfolded partial least-squares (U-PLS) and multidimensional partial least-squares (N-PLS) with residual bilinearization (RBL). In addition, an HPLC method was developed to further validate the presented strategy. For the validation samples, the analytical results obtained by the six second-order calibration methods were all essentially accurate, but for the Acorus tatarinowii samples, the results indicated a slightly better predictive ability of the N-PLS/RBL procedure over the other methods.
Precision PEP-II optics measurement with an SVD-enhanced Least-Square fitting
NASA Astrophysics Data System (ADS)
Yan, Y. T.; Cai, Y.
2006-03-01
A singular value decomposition (SVD)-enhanced Least-Square fitting technique is discussed. By automatically identifying, ordering, and selecting dominant SVD modes of the derivative matrix that responds to the variations of the variables, the convergence of the Least-Square fitting is significantly enhanced, making the fitting fast enough for a fairly large system. This technique has been successfully applied to precision PEP-II optics measurement, in which we determine all quadrupole strengths (both normal and skew components) and sextupole feed-downs as well as all BPM gains and BPM cross-plane couplings through Least-Square fitting of the phase advances and the local Green's functions as well as the coupling ellipses among BPMs. The local Green's functions are specified by 4 local transfer matrix components: R12, R34, R32, and R14. These measurable quantities (the Green's functions, the phase advances and the coupling ellipse tilt angles and axis ratios) are obtained by analyzing turn-by-turn Beam Position Monitor (BPM) data with a high-resolution model-independent analysis (MIA). Once all of the quadrupoles and sextupole feed-downs are determined, we obtain a computer virtual accelerator which matches the real accelerator in linear optics. Thus, beta functions, linear coupling parameters, and interaction point (IP) optics characteristics can be measured and displayed.
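The core idea can be sketched as a truncated-SVD least-squares step; this is a generic illustration, not the MIA/PEP-II code, and the keep_ratio threshold is an invented stand-in for the automatic mode selection.

```python
# Hedged sketch: least-squares update restricted to dominant SVD modes of the
# derivative (Jacobian) matrix, which stabilizes ill-conditioned fits.
import numpy as np

def svd_ls_step(J, residual, keep_ratio=1e-6):
    """Least-squares update using only the dominant SVD modes of J."""
    U, s, Vt = np.linalg.svd(J, full_matrices=False)
    k = int(np.sum(s > keep_ratio * s[0]))     # select dominant modes
    return Vt[:k].T @ ((U[:, :k].T @ residual) / s[:k])

rng = np.random.default_rng(2)
J = rng.normal(size=(300, 50))
J[:, -1] = J[:, 0] + 1e-10 * rng.normal(size=300)   # near-degenerate column
x_true = rng.normal(size=50)
b = J @ x_true

x_fit = svd_ls_step(J, b)
print("residual norm:", np.linalg.norm(J @ x_fit - b))  # small despite bad conditioning
```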
NASA Astrophysics Data System (ADS)
Özdemir, Gizem; Demiralp, Metin
2015-12-01
In this work, Enhanced Multivariance Products Representation (EMPR), an extension by Demiralp and his group of Sobol's High Dimensional Model Representation (HDMR), is used as the basic tool. Its discrete form has also been developed and used in practice by Demiralp and his group, in addition to some other authors, for the decomposition of arrays such as vectors, matrices, or multiway arrays. This work specifically focuses on the decomposition of infinite matrices involving denumerably infinitely many rows and columns. To this end, the target matrix is first decomposed into a sum of certain outer products, and then each outer product is treated by Tridiagonal Matrix Enhanced Multivariance Products Representation (TMEMPR), which was developed by Demiralp and his group. The result is a three-matrix-factor product whose kernel (the middle factor) is an arrowheaded matrix, while the pre and post factors are invertible matrices composed of the support vectors of TMEMPR. This new method is called Arrowheaded Enhanced Multivariance Products Representation for Matrices. The general purpose is the approximation of denumerably infinite matrices with the new method.
Low-Rank Correction Methods for Algebraic Domain Decomposition Preconditioners
Li, Ruipeng; Saad, Yousef
2017-08-01
This study presents a parallel preconditioning method for distributed sparse linear systems, based on an approximate inverse of the original matrix, that adopts a general framework of distributed sparse matrices and exploits domain decomposition (DD) and low-rank corrections. The DD approach decouples the matrix and, once inverted, a low-rank approximation is applied by exploiting the Sherman--Morrison--Woodbury formula, which yields two variants of the preconditioning methods. The low-rank expansion is computed by the Lanczos procedure with reorthogonalizations. Numerical experiments indicate that, when combined with Krylov subspace accelerators, this preconditioner can be efficient and robust for solving symmetric sparse linear systems. Comparisons with pARMS, a DD-based parallel incomplete LU (ILU) preconditioning method, are presented for solving Poisson's equation and linear elasticity problems.
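The Sherman–Morrison–Woodbury mechanism the preconditioner relies on can be sketched directly; here a diagonal A0 stands in for the decoupled DD part and U V^T for a rank-k correction, with all matrices synthetic.

```python
# Hedged sketch: apply (A0 + U V^T)^{-1} using only solves with A0 and a
# small k x k system (Sherman-Morrison-Woodbury).
import numpy as np

def smw_apply(solve_A0, U, V, b):
    """Return (A0 + U V^T)^{-1} b using only solves with A0."""
    z = solve_A0(b)                        # A0^{-1} b
    W = solve_A0(U)                        # A0^{-1} U, shape (n, k)
    small = np.eye(U.shape[1]) + V.T @ W   # k x k capacitance matrix
    return z - W @ np.linalg.solve(small, V.T @ z)

rng = np.random.default_rng(3)
n, k = 200, 5
d = rng.uniform(1.0, 2.0, size=n)          # diagonal stand-in for the DD blocks
U, V = rng.normal(size=(n, k)), rng.normal(size=(n, k))
b = rng.normal(size=n)

solve_A0 = lambda r: (r.T / d).T           # applies A0^{-1} to a vector or matrix
x = smw_apply(solve_A0, U, V, b)
print(np.allclose(x, np.linalg.solve(np.diag(d) + U @ V.T, b)))  # True
```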
Singular value decomposition for collaborative filtering on a GPU
NASA Astrophysics Data System (ADS)
Kato, Kimikazu; Hosino, Tikara
2010-06-01
Collaborative filtering predicts customers' unknown preferences from known preferences. In a collaborative filtering computation, a singular value decomposition (SVD) is needed to reduce the size of a large-scale matrix so that the burden of the next computation phase is decreased. In this application, SVD means a roughly approximated factorization of a given matrix into smaller matrices. Webb (a.k.a. Simon Funk) showed an effective algorithm to compute this SVD for the open competition called the "Netflix Prize". The algorithm uses an iterative method in which the approximation error improves at each step of the iteration. We give a GPU version of Webb's algorithm. Our algorithm is implemented in CUDA and is shown to be efficient by experiment.
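Webb's iterative method is essentially stochastic gradient descent over the known ratings; a compact CPU sketch is shown below (the paper's contribution is the CUDA port, which is not reproduced here), with invented sizes and hyperparameters.

```python
# Hedged sketch of Funk/Webb-style iterative SVD: R ≈ P Q^T fitted by SGD
# over the known (user, item, rating) entries only.
import numpy as np

rng = np.random.default_rng(4)
n_users, n_items, n_factors = 100, 80, 8
known = [(rng.integers(n_users), rng.integers(n_items), rng.uniform(1, 5))
         for _ in range(2000)]             # (user, item, rating) triples

P = 0.1 * rng.normal(size=(n_users, n_factors))   # user factors
Q = 0.1 * rng.normal(size=(n_items, n_factors))   # item factors
lr, reg = 0.01, 0.02                              # invented hyperparameters

for epoch in range(20):
    for u, i, r in known:
        err = r - P[u] @ Q[i]              # error on one known rating
        pu = P[u].copy()                   # cache before updating
        P[u] += lr * (err * Q[i] - reg * P[u])
        Q[i] += lr * (err * pu - reg * Q[i])
# P @ Q.T now approximates known entries; its other entries are predictions.
```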
Fourier decomposition of payoff matrix for symmetric three-strategy games.
Szabó, György; Bodó, Kinga S; Allen, Benjamin; Nowak, Martin A
2014-10-01
In spatial evolutionary games, payoff matrices are used to describe pair interactions among neighboring players located on a lattice. We introduce a way in which the payoff matrices can be built up as a sum of payoff components reflecting basic symmetries. For two-strategy games this decomposition reproduces interactions characteristic of the Ising model. For three-strategy symmetric games the Fourier components can be classified into four types, representing games with self-dependent and cross-dependent payoffs, variants of three-strategy coordinations, and the rock-scissors-paper (RSP) game. In the absence of the RSP component the game is a potential game. The resulting potential matrix has been evaluated. The general features of these systems are analyzed when the game is expressed by linear combinations of these components.
Unitary irreducible representations of SL(2,C) in discrete and continuous SU(1,1) bases
DOE Office of Scientific and Technical Information (OSTI.GOV)
Conrady, Florian; Hnybida, Jeff; Department of Physics, University of Waterloo, Waterloo, Ontario
2011-01-15
We derive the matrix elements of generators of unitary irreducible representations of SL(2,C) with respect to basis states arising from a decomposition into irreducible representations of SU(1,1). This is done with regard to a discrete basis diagonalized by J³ and a continuous basis diagonalized by K¹, and for both the discrete and continuous series of SU(1,1). For completeness, we also treat the more conventional SU(2) decomposition as a fifth case. The derivation proceeds in a functional/differential framework and exploits the fact that state functions and differential operators have a similar structure in all five cases. The states are defined explicitly and related to SU(1,1) and SU(2) matrix elements.
EvolQG - An R package for evolutionary quantitative genetics
Melo, Diogo; Garcia, Guilherme; Hubbe, Alex; Assis, Ana Paula; Marroig, Gabriel
2016-01-01
We present an open source package for performing evolutionary quantitative genetics analyses in the R environment for statistical computing. Evolutionary theory shows that evolution depends critically on the available variation in a given population. When dealing with many quantitative traits this variation is expressed in the form of a covariance matrix, particularly the additive genetic covariance matrix or sometimes the phenotypic matrix, when the genetic matrix is unavailable and there is evidence the phenotypic matrix is sufficiently similar to the genetic matrix. Given this mathematical representation of available variation, the EvolQG package provides functions for calculation of relevant evolutionary statistics; estimation of sampling error; corrections for this error; matrix comparison via correlations, distances and matrix decomposition; analysis of modularity patterns; and functions for testing evolutionary hypotheses on taxa diversification. PMID:27785352
FPGA-based coprocessor for matrix algorithms implementation
NASA Astrophysics Data System (ADS)
Amira, Abbes; Bensaali, Faycal
2003-03-01
Matrix algorithms are important in many types of applications, including image and signal processing. These areas require enormous computing power. A close examination of the algorithms used in these and related applications reveals that many of the fundamental actions involve matrix operations such as matrix multiplication, which has O(N^3) complexity on a sequential computer and O(N^3/p) complexity on a parallel system with p processors. This paper presents an investigation into the design and implementation of different matrix algorithms, such as matrix operations, matrix transforms and matrix decompositions, using an FPGA-based environment. Solutions for the problem of processing large matrices have been proposed. The proposed system architectures are scalable and modular, and require less area and time complexity, with reduced latency, when compared with existing structures.
Decomposition of Amino Acids in 100 K Ice by UV Photolysis: Implications for Survival on Europa
NASA Astrophysics Data System (ADS)
Goguen, Jay D.; Orzechowska, G.; Johnson, P.; Tsapin, A.; Kanik, I.; Smythe, W.
2006-09-01
We report the rate of decomposition by ultraviolet photolysis of 4 amino acids in a mm-thick crystalline water ice matrix at T=100 K to constrain the survivability of these important organic molecules within ice lying near the surfaces of outer solar system bodies. We freeze our ice samples from liquid solution, which results in mm-thick samples of crystalline phase hexagonal ice that appear "white" due to multiple scattering from internal microstructure. After irradiating an ice and amino acid mixture with an Argon mini-arc UV continuum light source, we used a derivatization technique based on a fluorescence reaction of amino acids to directly measure the remaining fraction of amino acid. We measured ice samples with 0.14, 0.28 and 1.6 mm thickness, prepared from 10^-4 M solutions of glycine, D,L-aspartic, D,L-glutamic, and D,L-phenylalanine irradiated from 10 to 1020 minutes. We find that the half-life for decomposition of the amino acid - ice samples is linearly proportional to their thickness, as is expected for a layer with strong multiple scattering. Glycine is the most resistant to destruction and phenylalanine is the most easily destroyed. For the 1.6 mm thick samples under lab conditions, the half-life of glycine was 57 hours, aspartic 21 hours, glutamic 23 hours, and phenylalanine 8 hours. These results can be expressed as a "penetration velocity", the depth to which half of the amino acids are destroyed in a year. We conclude that half of these amino acids in the upper meter of low latitude ice on Europa will be decomposed by solar UV on a 10 year timescale. Photons between 160 and 300 nm wavelength are responsible for this decomposition. Progress on identifying and quantifying the products of this decomposition, potential candidates for in-situ studies, will be discussed. This work was supported in part by JPL IR&TD funds.
Mlyniec, A; Ekiert, M; Morawska-Chochol, A; Uhl, T
2016-06-01
In this work, we investigate the influence of the surrounding environment and the initial density on the decomposition kinetics of polylactide (PLA). The decomposition of amorphous PLA was investigated by means of reactive molecular dynamics simulations. The computational model simulates the decomposition of the PLA polymer inside the bulk, due to the assumed lack of removal of reaction products from the polymer matrix. We tracked the temperature dependency of water and carbon monoxide production to extract the activation energy of the thermal decomposition of PLA. We found that an increased density reduces the activation energy of decomposition by about 50%. Moreover, initiation of decomposition of the amorphous PLA is followed by a rapid decline in activation energy caused by reaction products, which accelerate the hydrolysis of esters. The addition of water molecules decreases the initial activation energy as well as accelerating the decomposition process. Additionally, we have investigated the dependency of density on external loading. Comparison of the pressures needed to obtain the assumed densities shows that this relationship is bilinear, with the slope changing around a density of 1.3 g/cm³. The conducted analyses provide insight into the thermal decomposition process of the amorphous phase of PLA, the phase particularly susceptible to decomposition in amorphous and semi-crystalline PLA polymers.
A two-stage linear discriminant analysis via QR-decomposition.
Ye, Jieping; Li, Qi
2005-06-01
Linear Discriminant Analysis (LDA) is a well-known method for feature extraction and dimension reduction. It has been used widely in many applications involving high-dimensional data, such as image and text classification. An intrinsic limitation of classical LDA is the so-called singularity problems; that is, it fails when all scatter matrices are singular. Many LDA extensions were proposed in the past to overcome the singularity problems. Among these extensions, PCA+LDA, a two-stage method, received relatively more attention. In PCA+LDA, the LDA stage is preceded by an intermediate dimension reduction stage using Principal Component Analysis (PCA). Most previous LDA extensions are computationally expensive, and not scalable, due to the use of Singular Value Decomposition or Generalized Singular Value Decomposition. In this paper, we propose a two-stage LDA method, namely LDA/QR, which aims to overcome the singularity problems of classical LDA, while achieving efficiency and scalability simultaneously. The key difference between LDA/QR and PCA+LDA lies in the first stage, where LDA/QR applies QR decomposition to a small matrix involving the class centroids, while PCA+LDA applies PCA to the total scatter matrix involving all training data points. We further justify the proposed algorithm by showing the relationship among LDA/QR and previous LDA methods. Extensive experiments on face images and text documents are presented to show the effectiveness of the proposed algorithm.
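A hedged sketch of the two-stage idea (not the authors' reference implementation): stage one takes the economic QR of the class-centroid matrix, stage two solves a small LDA eigenproblem in that k-dimensional space; the ridge term added to the within-class scatter is an assumption for numerical safety.

```python
# Hedged sketch of LDA/QR-style two-stage dimension reduction.
import numpy as np

def lda_qr(X, y, ridge=1e-8):
    """Two-stage LDA via QR of the class-centroid matrix. X: (n, d), y: labels."""
    classes = np.unique(y)
    mu = X.mean(axis=0)
    C = np.stack([X[y == c].mean(axis=0) for c in classes], axis=1)  # d x k
    Q = np.linalg.qr(C)[0]                  # stage 1: orthonormal d x k basis

    Xq, muq, k = X @ Q, mu @ Q, len(classes)
    Sb, Sw = np.zeros((k, k)), np.zeros((k, k))
    for c in classes:                       # small scatter matrices in k dims
        Xc = Xq[y == c]
        mc = Xc.mean(axis=0)
        Sb += len(Xc) * np.outer(mc - muq, mc - muq)
        Sw += (Xc - mc).T @ (Xc - mc)
    evals, evecs = np.linalg.eig(np.linalg.solve(Sw + ridge * np.eye(k), Sb))
    order = np.argsort(evals.real)[::-1]
    return Q @ evecs[:, order].real         # stage 2: final d x k transform

# Toy usage where d >> n (classical LDA would hit singular scatter matrices)
rng = np.random.default_rng(5)
X = rng.normal(size=(60, 500))
y = np.repeat([0, 1, 2], 20)
X[y == 1] += 0.5
X[y == 2] -= 0.5
G = lda_qr(X, y)                            # project data with X @ G
```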
Molloi, Sabee; Ding, Huanjun; Feig, Stephen
2015-01-01
Purpose: The purpose of this study was to compare the precision of mammographic breast density measurement using radiologist reader assessment, histogram threshold segmentation, fuzzy C-mean segmentation and spectral material decomposition. Materials and Methods: Spectral mammography images from a total of 92 consecutive asymptomatic women (50–69 years old) who presented for annual screening mammography were retrospectively analyzed for this study. Breast density was estimated by 10 radiologist readers and by standard histogram thresholding, the fuzzy C-mean algorithm and spectral material decomposition. The breast density correlation between left and right breasts was used to assess the precision of these techniques to measure breast composition relative to dual-energy material decomposition. Results: In comparison to the other techniques, breast density measurements using dual-energy material decomposition showed the highest correlation. The relative standard error of estimate for breast density measurements from left and right breasts using radiologist reader assessment, standard histogram thresholding, the fuzzy C-mean algorithm and dual-energy material decomposition was calculated to be 1.95, 2.87, 2.07 and 1.00, respectively. Conclusion: The results indicate that the precision of dual-energy material decomposition was approximately a factor of two higher than that of the other techniques, with regard to better correlation of breast density measurements from right and left breasts. PMID:26031229
Steganography based on pixel intensity value decomposition
NASA Astrophysics Data System (ADS)
Abdulla, Alan Anwar; Sellahewa, Harin; Jassim, Sabah A.
2014-05-01
This paper focuses on steganography based on pixel intensity value decomposition. A number of existing schemes such as binary, Fibonacci, Prime, Natural, Lucas, and Catalan-Fibonacci (CF) are evaluated in terms of payload capacity and stego quality. A new technique based on a specific representation is proposed to decompose pixel intensity values into 16 (virtual) bit-planes suitable for embedding purposes. The proposed decomposition has a desirable property whereby the sum of all bit-planes does not exceed the maximum pixel intensity value, i.e. 255. Experimental results demonstrate that the proposed technique offers an effective compromise between payload capacity and stego quality of existing embedding techniques based on pixel intensity value decomposition. Its capacity is equal to that of binary and Lucas, while it offers a higher capacity than Fibonacci, Prime, Natural, and CF when the secret bits are embedded in 1st Least Significant Bit (LSB). When the secret bits are embedded in higher bit-planes, i.e., 2nd LSB to 8th Most Significant Bit (MSB), the proposed scheme has more capacity than Natural numbers based embedding. However, from the 6th bit-plane onwards, the proposed scheme offers better stego quality. In general, the proposed decomposition scheme has less effect in terms of quality on pixel value when compared to most existing pixel intensity value decomposition techniques when embedding messages in higher bit-planes.
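For concreteness, the sketch below shows two of the decompositions compared above: binary bit-planes and the Fibonacci (Zeckendorf) representation. The authors' 16-plane scheme itself is not reproduced, and embedding/validity checks are omitted.

```python
# Hedged sketch of pixel intensity value decompositions used in steganography.
def binary_planes(v, n=8):
    return [(v >> i) & 1 for i in range(n)]            # LSB first

FIBS = [1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233]   # covers 0..255

def fibonacci_planes(v):
    """Greedy Zeckendorf representation: no two adjacent 1s."""
    bits = [0] * len(FIBS)
    for i in range(len(FIBS) - 1, -1, -1):
        if FIBS[i] <= v:
            bits[i], v = 1, v - FIBS[i]
    return bits                                        # lowest plane first

v = 200
print(binary_planes(v))                                # [0, 0, 0, 1, 0, 0, 1, 1]
print(fibonacci_planes(v))                             # 1 + 55 + 144 = 200
assert sum(b * f for b, f in zip(fibonacci_planes(v), FIBS)) == v
```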
Algorithms for Spectral Decomposition with Applications to Optical Plume Anomaly Detection
NASA Technical Reports Server (NTRS)
Srivastava, Askok N.; Matthews, Bryan; Das, Santanu
2008-01-01
The analysis of spectral signals for features that represent physical phenomenon is ubiquitous in the science and engineering communities. There are two main approaches that can be taken to extract relevant features from these high-dimensional data streams. The first set of approaches relies on extracting features using a physics-based paradigm where the underlying physical mechanism that generates the spectra is used to infer the most important features in the data stream. We focus on a complementary methodology that uses a data-driven technique that is informed by the underlying physics but also has the ability to adapt to unmodeled system attributes and dynamics. We discuss the following four algorithms: Spectral Decomposition Algorithm (SDA), Non-Negative Matrix Factorization (NMF), Independent Component Analysis (ICA) and Principal Components Analysis (PCA) and compare their performance on a spectral emulator which we use to generate artificial data with known statistical properties. This spectral emulator mimics the real-world phenomena arising from the plume of the space shuttle main engine and can be used to validate the results that arise from various spectral decomposition algorithms and is very useful for situations where real-world systems have very low probabilities of fault or failure. Our results indicate that methods like SDA and NMF provide a straightforward way of incorporating prior physical knowledge while NMF with a tuning mechanism can give superior performance on some tests. We demonstrate these algorithms to detect potential system-health issues on data from a spectral emulator with tunable health parameters.
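A minimal sketch of the kind of NMF-versus-PCA comparison described, using scikit-learn on synthetic nonnegative spectra (peak positions, noise level, and component count are invented):

```python
# Hedged sketch: recover spectral components with NMF and PCA. NMF's
# nonnegativity often yields more physically interpretable components.
import numpy as np
from sklearn.decomposition import NMF, PCA

rng = np.random.default_rng(6)
wav = np.linspace(0, 1, 200)
peaks = np.stack([np.exp(-0.5 * ((wav - c) / 0.03) ** 2) for c in (0.3, 0.5, 0.7)])
abundances = rng.uniform(0, 1, size=(500, 3))         # known mixing weights
spectra = abundances @ peaks + 0.01 * rng.uniform(size=(500, 200))

nmf = NMF(n_components=3, init="nndsvda", max_iter=500)
W = nmf.fit_transform(spectra)            # estimated (nonnegative) abundances
H = nmf.components_                       # estimated spectral shapes

scores = PCA(n_components=3).fit_transform(spectra)   # signed, orthogonal comps
print("NMF reconstruction error:", round(nmf.reconstruction_err_, 3))
```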
NASA Astrophysics Data System (ADS)
Cheng, Boyang; Jin, Longxu; Li, Guoning
2018-06-01
The fusion of visible light and infrared images has been a significant subject in imaging science. As a new contribution to this field, a novel fusion framework for visible light and infrared images based on adaptive dual-channel unit-linking pulse coupled neural networks with singular value decomposition (ADS-PCNN) in the non-subsampled shearlet transform (NSST) domain is presented in this paper. First, the source images are decomposed into multi-direction and multi-scale sub-images by NSST. Furthermore, an improved novel sum modified-Laplacian (INSML) of the low-pass sub-image and an improved average gradient (IAVG) of the high-pass sub-images are input to stimulate the ADS-PCNN, respectively. To address the large spectral difference between infrared and visible light and the occurrence of black artifacts in fused images, a local structure information operator (LSI), which comes from local-area singular value decomposition in each source image, is regarded as the adaptive linking strength that enhances fusion accuracy. Compared with PCNN models in other studies, the proposed method simplifies certain peripheral parameters, and the time matrix is utilized to decide the iteration number adaptively. A series of images from diverse scenes are used in fusion experiments, and the fusion results are evaluated subjectively and objectively. The results of the subjective and objective evaluation show that our algorithm exhibits superior fusion performance and is more effective than existing typical fusion techniques.
Calibration methods influence quantitative material decomposition in photon-counting spectral CT
NASA Astrophysics Data System (ADS)
Curtis, Tyler E.; Roeder, Ryan K.
2017-03-01
Photon-counting detectors and nanoparticle contrast agents can potentially enable molecular imaging and material decomposition in computed tomography (CT). Material decomposition has been investigated using both simulated and acquired data sets. However, the effect of calibration methods on material decomposition has not been systematically investigated. Therefore, the objective of this study was to investigate the influence of the range and number of contrast agent concentrations within a modular calibration phantom on quantitative material decomposition. A commercially available photon-counting spectral micro-CT (MARS Bioimaging) was used to acquire images with five energy bins selected to normalize photon counts and leverage the contrast agent k-edge. Material basis matrix values were determined using multiple linear regression models and material decomposition was performed using a maximum a posteriori estimator. The accuracy of quantitative material decomposition was evaluated by the root mean squared error (RMSE), specificity, sensitivity, and area under the curve (AUC). An increased maximum concentration (range) in the calibration significantly improved RMSE, specificity and AUC. The effects of an increased number of concentrations in the calibration were not statistically significant for the conditions in this study. The overall results demonstrated that the accuracy of quantitative material decomposition in spectral CT is significantly influenced by calibration methods, which must therefore be carefully considered for the intended diagnostic imaging application.
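The calibration-then-decomposition workflow can be sketched as below, with multiple linear regression building the material basis matrix from known phantom concentrations; plain least squares stands in for the maximum a posteriori estimator, and all sensitivities and concentrations are toy numbers.

```python
# Hedged sketch: calibrate a material basis matrix, then decompose a voxel.
import numpy as np

rng = np.random.default_rng(7)
n_bins = 5
materials = ["contrast_agent", "water"]

# Hidden "truth" used only to synthesize calibration measurements
true_basis = rng.uniform(0.5, 2.0, size=(n_bins, 2))
calib_conc = np.array([[0, 1], [2, 1], [4, 1], [8, 1], [16, 1]], float)
calib_meas = calib_conc @ true_basis.T + 0.01 * rng.normal(size=(5, n_bins))

# Multiple linear regression per energy bin -> estimated basis matrix
basis = np.linalg.lstsq(calib_conc, calib_meas, rcond=None)[0].T   # (n_bins, 2)

# Decompose an unknown measurement into material concentrations
unknown = np.array([6.0, 1.0]) @ true_basis.T
est = np.linalg.lstsq(basis, unknown, rcond=None)[0]
print(dict(zip(materials, est.round(2))))  # close to {agent: 6.0, water: 1.0}
```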
Thermal Decomposition Mechanism of Butyraldehyde
NASA Astrophysics Data System (ADS)
Hatten, Courtney D.; Warner, Brian; Wright, Emily; Kaskey, Kevin; McCunn, Laura R.
2013-06-01
The thermal decomposition of butyraldehyde, CH_3CH_2CH_2C(O)H, has been studied in a resistively heated SiC tubular reactor. Products of pyrolysis were identified via matrix-isolation FTIR spectroscopy and photoionization mass spectrometry in separate experiments. Carbon monoxide, ethene, acetylene, water and ethylketene were among the products detected. To unravel the mechanism of decomposition, pyrolysis of a partially deuterated sample of butyraldehyde was studied. Also, the concentration of butyraldehyde in the carrier gas was varied in experiments to determine the presence of bimolecular reactions. The results of these experiments can be compared to the dissociation pathways observed in similar aldehydes and are relevant to the processing of biomass, foods, and tobacco.
NASA Astrophysics Data System (ADS)
Dyrdin, V. V.; Smirnov, V. G.; Kim, T. L.; Manakov, A. Yu.; Fofanov, A. A.; Kartopolova, I. S.
2017-06-01
The physical processes occurring in the coal-natural gas system under gas pressure release were studied experimentally. The possibility of gas hydrates being present in the inner space of natural coal was shown; the decomposition of these hydrates leads to an increase in the amount of gas passing into the free state. The decomposition of gas hydrates can be caused either by an increase in the seam temperature or by a decrease of pressure to below the gas hydrate equilibrium curve. The contribution of methane released during gas hydrate decomposition should be taken into account in the design of safe mining technologies for coal seams prone to gas-dynamic phenomena.
Analysis of network clustering behavior of the Chinese stock market
NASA Astrophysics Data System (ADS)
Chen, Huan; Mai, Yong; Li, Sai-Ping
2014-11-01
Random Matrix Theory (RMT) and the decomposition of the correlation matrix are employed to analyze the spatial structure of stock interactions and collective behavior in the Shanghai and Shenzhen stock markets in China. The results show that there exist prominent sector structures, with subsectors including the Real Estate (RE), Commercial Banks (CB), Pharmaceuticals (PH), Distillers & Vintners (DV) and Steel (ST) industries. Furthermore, the RE and CB subsectors are mostly anti-correlated. We further study the temporal behavior of the dataset and find that while the sector structures are relatively stable from 2007 through 2013, the correlation between the real estate and commercial bank stocks shows large variations. By employing the ensemble empirical mode decomposition (EEMD) method, we show that this anti-correlation behavior is closely related to the monetary and austerity policies of the Chinese government during the period of study.
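The RMT step can be sketched by comparing the correlation-matrix eigenvalues against the Marchenko-Pastur bounds; the data below are synthetic, with one planted "sector" of correlated stocks.

```python
# Hedged sketch: eigenvalues above the Marchenko-Pastur band carry structure.
import numpy as np

rng = np.random.default_rng(8)
N, T = 100, 1000                           # stocks, trading days
returns = rng.normal(size=(T, N))
returns[:, :20] += 0.5 * rng.normal(size=(T, 1))   # one planted correlated sector

R = (returns - returns.mean(0)) / returns.std(0)
C = (R.T @ R) / T                          # equal-time correlation matrix
evals = np.linalg.eigvalsh(C)

q = N / T
lam_min, lam_max = (1 - np.sqrt(q)) ** 2, (1 + np.sqrt(q)) ** 2  # MP band
signal = evals[evals > lam_max]            # eigenvalues carrying real structure
print(f"MP band [{lam_min:.2f}, {lam_max:.2f}]; {signal.size} signal mode(s)")
```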
NASA Astrophysics Data System (ADS)
Xie, Hang; Jiang, Feng; Tian, Heng; Zheng, Xiao; Kwok, Yanho; Chen, Shuguang; Yam, ChiYung; Yan, YiJing; Chen, Guanhua
2012-07-01
Based on our hierarchical equations of motion for time-dependent quantum transport [X. Zheng, G. H. Chen, Y. Mo, S. K. Koo, H. Tian, C. Y. Yam, and Y. J. Yan, J. Chem. Phys. 133, 114101 (2010), 10.1063/1.3475566], we develop an efficient and accurate numerical algorithm to solve the Liouville-von-Neumann equation. We solve the real-time evolution of the reduced single-electron density matrix at the tight-binding level. Calculations are carried out to simulate the transient current through a linear chain of atoms, with each represented by a single orbital. The self-energy matrix is expanded in terms of multiple Lorentzian functions, and the Fermi distribution function is evaluated via the Padé spectrum decomposition. This Lorentzian-Padé decomposition scheme is employed to simulate the transient current. With sufficient Lorentzian functions used to fit the self-energy matrices, we show that the lead spectral function and the dynamic response can be treated accurately. Compared to the conventional master equation approaches, our method is much more efficient, as the computational time scales cubically with the system size and linearly with the simulation time. As a result, simulations of the transient currents through systems containing up to one hundred atoms have been carried out. As density functional theory is also an effective one-particle theory, the Lorentzian-Padé decomposition scheme developed here can be generalized for first-principles simulation of realistic systems.
Patil, Nagaraj; Soni, Jalpa; Ghosh, Nirmalya; De, Priyadarsi
2012-11-29
Thermodynamically favored polymer-water interactions below the lower critical solution temperature (LCST) caused swelling-induced optical anisotropy (linear retardance) of thermoresponsive hydrogels based on poly(2-(2-methoxyethoxy)ethyl methacrylate). This was exploited to study the macroscopic deswelling kinetics quantitatively by a generalized polarimetry analysis method, based on measurement of the Mueller matrix and its subsequent inverse analysis via the polar decomposition approach. The derived medium polarization parameters, namely, linear retardance (δ), diattenuation (d), and depolarization coefficient (Δ), of the hydrogels showed interesting differences between the gels prepared by conventional free radical polymerization (FRP) and reversible addition-fragmentation chain transfer polymerization (RAFT) and also between dry and swollen state. The effect of temperature, cross-linking density, and polymerization technique employed to synthesize hydrogel on deswelling kinetics was systematically studied via conventional gravimetry and corroborated further with the corresponding Mueller matrix derived quantitative polarimetry characteristics (δ, d, and Δ). The RAFT gels exhibited higher swelling ratio and swelling-induced optical anisotropy compared to FRP gels and also deswelled faster at 30 °C. On the contrary, at 45 °C, deswelling was significantly retarded for the RAFT gels due to formation of a skin layer, which was confirmed and quantified via the enhanced diattenuation and depolarization parameters.
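Two of the medium polarization parameters mentioned above can be read off a measured Mueller matrix in a few lines; the full Lu-Chipman polar decomposition into depolarizing, retarding, and diattenuating factors involves further matrix steps not shown, and the matrix below is a toy example.

```python
# Hedged sketch: diattenuation and depolarization index from a Mueller matrix.
import numpy as np

def diattenuation(M):
    """Diattenuation from the first row of an m00-normalized Mueller matrix."""
    return np.linalg.norm(M[0, 1:]) / M[0, 0]

def depolarization_index(M):
    """1 for a non-depolarizing medium, 0 for an ideal depolarizer."""
    return np.sqrt((np.sum(M ** 2) - M[0, 0] ** 2) / (3 * M[0, 0] ** 2))

M = np.array([[1.00, 0.10, 0.05, 0.00],   # toy measured Mueller matrix
              [0.10, 0.80, 0.00, 0.00],
              [0.05, 0.00, 0.75, 0.00],
              [0.00, 0.00, 0.00, 0.60]])
print(diattenuation(M), depolarization_index(M))
```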
NASA Astrophysics Data System (ADS)
Kim, Jae Wook
2013-05-01
This paper proposes a novel systematic approach for the parallelization of pentadiagonal compact finite-difference schemes and filters based on domain decomposition. The proposed approach allows a pentadiagonal banded matrix system to be split into quasi-disjoint subsystems by using a linear-algebraic transformation technique. As a result the inversion of pentadiagonal matrices can be implemented within each subdomain in an independent manner subject to a conventional halo-exchange process. The proposed matrix transformation leads to new subdomain boundary (SB) compact schemes and filters that require three halo terms to exchange with neighboring subdomains. The internode communication overhead in the present approach is equivalent to that of standard explicit schemes and filters based on seven-point discretization stencils. The new SB compact schemes and filters demand additional arithmetic operations compared to the original serial ones. However, it is shown that the additional cost becomes sufficiently low by choosing optimal sizes of their discretization stencils. Compared to earlier published results, the proposed SB compact schemes and filters successfully reduce parallelization artifacts arising from subdomain boundaries to a level sufficiently negligible for sophisticated aeroacoustic simulations without degrading parallel efficiency. The overall performance and parallel efficiency of the proposed approach are demonstrated by stringent benchmark tests.
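For reference, the serial building block being parallelized is a pentadiagonal banded solve, sketched here with SciPy on an invented diagonally dominant system; the paper's contribution is the transformation that lets each subdomain run such a solve independently.

```python
# Hedged sketch: O(n) pentadiagonal solve in SciPy's banded storage format.
import numpy as np
from scipy.linalg import solve_banded

n = 12
ab = np.zeros((5, n))         # banded storage: 2 super-, main, 2 sub-diagonals
ab[0, 2:] = 0.05              # 2nd superdiagonal
ab[1, 1:] = 0.25              # 1st superdiagonal
ab[2, :] = 1.0                # main diagonal (diagonally dominant)
ab[3, :-1] = 0.25             # 1st subdiagonal
ab[4, :-2] = 0.05             # 2nd subdiagonal

x = solve_banded((2, 2), ab, np.ones(n))   # pentadiagonal solve
```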
NASA Astrophysics Data System (ADS)
Qarib, Hossein; Adeli, Hojjat
2015-12-01
In this paper the authors introduce a new adaptive signal processing technique for feature extraction and parameter estimation in noisy exponentially damped signals. The iterative three-stage method is based on the adroit integration of the strengths of parametric and nonparametric methods such as the multiple signal classification, matrix pencil, and empirical mode decomposition algorithms. The first stage is a new adaptive filtration or noise removal scheme. The second stage is a hybrid parametric-nonparametric signal parameter estimation technique based on an output-only system identification technique. The third stage is optimization of the estimated parameters using a combination of the primal-dual path-following interior point algorithm and a genetic algorithm. The methodology is evaluated using a synthetic signal and a signal obtained experimentally from transverse vibrations of a steel cantilever beam. The method is successful in estimating the frequencies accurately, and it also estimates the damping exponents. The proposed adaptive filtration method does not include any frequency domain manipulation. Consequently, the time domain signal is not affected as a result of frequency domain and inverse transformations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Couturier, Laurent, E-mail: laurent.couturier55@ho
The fine microstructure obtained by unmixing of a solid solution, either by classical precipitation or by spinodal decomposition, is often characterized either by small angle scattering or by atom probe tomography. This article shows that a common data analysis framework can be used to analyze data obtained from these two techniques. An example of the application of this common analysis is given for characterization of the unmixing of the Fe-Cr matrix of a 15-5 PH stainless steel during long-term ageing at 350 °C and 400 °C. A direct comparison of the Cr composition fluctuation amplitudes and characteristic lengths obtained with both techniques is made, showing a quantitative agreement for the fluctuation amplitudes. The origin of the remaining discrepancy for the characteristic lengths is discussed. - Highlights: •Common analysis framework for atom probe tomography and small angle scattering •Comparison of same microstructural characteristics obtained using both techniques •Good correlation of Cr composition fluctuation amplitudes from both techniques •Good correlation of Cr composition fluctuation amplitudes with classic V parameter.
Optical characterization of murine model's in-vivo skin using Mueller matrix polarimetric imaging
NASA Astrophysics Data System (ADS)
Mora-Núñez, Azael; Martinez-Ponce, Geminiano; Garcia-Torales, Guillermo
2015-12-01
Mueller matrix polarimetric imaging (MMPI) provides a complete characterization of an anisotropic optical medium. Subsequent singular value decomposition allows image interpretation in terms of basic optical anisotropies, such as depolarization, diattenuation, and retardance. In this work, healthy in-vivo skin at different anatomical locations of a biological model (Rattus norvegicus) was imaged by the MMPI technique using 532 nm coherent illumination. The body parts under study were the back, abdomen, tail, and calvaria. Because skin components are randomly distributed and skin thickness depends on its location, polarization measures arise from the average over a single detection element (pixel) and over the number of free optical paths, respectively. Optical anisotropies over the imaged skin indicate, mainly, the presence of components related to the physiology of the explored region. In addition, an MMPI-based comparison between a tumor on the back of one test subject and proximal healthy skin was made. The results show that the singular values of optical anisotropies can be helpful in distinguishing different areas of in-vivo skin and also lesions.
SU-G-JeP4-03: Anomaly Detection of Respiratory Motion by Use of Singular Spectrum Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kotoku, J; Kumagai, S; Nakabayashi, S
Purpose: The implementation and realization of automatic anomaly detection of respiratory motion is a very important technique to prevent accidental damage during radiation therapy. Here, we propose an automatic anomaly detection method using singular value decomposition analysis. Methods: The anomaly detection procedure consists of four parts: 1) measurement of normal respiratory motion data of a patient; 2) calculation of a trajectory matrix representing normal time-series features; 3) real-time monitoring and calculation of a trajectory matrix of the real-time data; 4) calculation of an anomaly score from the similarity of the two feature matrices. Patient motion was observed by a marker-less tracking system using a depth camera. Results: Two types of motion, e.g., coughing and sudden stopping of breathing, were successfully detected in our real-time application. Conclusion: Automatic anomaly detection of respiratory motion using singular spectrum analysis was successful for coughing and sudden stopping of breathing. This algorithm is promising for clinical use. This work was supported by JSPS KAKENHI Grant Number 15K08703.
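A hedged sketch of the trajectory-matrix construction and anomaly score follows; the window length, subspace rank, and synthetic breathing trace are all assumptions.

```python
# Hedged sketch: SSA-style anomaly score from the residual outside the
# subspace spanned by normal-breathing trajectory-matrix singular vectors.
import numpy as np

def trajectory_matrix(x, L):
    """Hankel (trajectory) matrix with L-sample windows as columns."""
    return np.stack([x[i:i + L] for i in range(len(x) - L + 1)], axis=1)

rng = np.random.default_rng(9)
t = np.arange(2000) * 0.05
normal = np.sin(2 * np.pi * 0.3 * t) + 0.05 * rng.normal(size=t.size)

L, r = 40, 4                               # window length and subspace rank
U = np.linalg.svd(trajectory_matrix(normal, L), full_matrices=False)[0]
Ur = U[:, :r]                              # "normal breathing" subspace

def anomaly_score(window):                 # window: the latest L samples
    resid = window - Ur @ (Ur.T @ window)
    return np.linalg.norm(resid) / np.linalg.norm(window)

print(anomaly_score(normal[:L]))           # low: matches training data
print(anomaly_score(np.full(L, 0.5)))      # higher: breath-hold-like plateau
```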
SAR matrices: automated extraction of information-rich SAR tables from large compound data sets.
Wassermann, Anne Mai; Haebel, Peter; Weskamp, Nils; Bajorath, Jürgen
2012-07-23
We introduce the SAR matrix data structure that is designed to elucidate SAR patterns produced by groups of structurally related active compounds, which are extracted from large data sets. SAR matrices are systematically generated and sorted on the basis of SAR information content. Matrix generation is computationally efficient and enables processing of large compound sets. The matrix format is reminiscent of SAR tables, and SAR patterns revealed by different categories of matrices are easily interpretable. The structural organization underlying matrix formation is more flexible than standard R-group decomposition schemes. Hence, the resulting matrices capture SAR information in a comprehensive manner.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Manthe, Uwe, E-mail: uwe.manthe@uni-bielefeld.de; Ellerbrock, Roman, E-mail: roman.ellerbrock@uni-bielefeld.de
2016-05-28
A new approach for the quantum-state resolved analysis of polyatomic reactions is introduced. Based on the singular value decomposition of the S-matrix, energy-dependent natural reaction channels and natural reaction probabilities are defined. It is shown that the natural reaction probabilities are equal to the eigenvalues of the reaction probability operator [U. Manthe and W. H. Miller, J. Chem. Phys. 99, 3411 (1993)]. Consequently, the natural reaction channels can be interpreted as uniquely defined pathways through the transition state of the reaction. The analysis can efficiently be combined with reactive scattering calculations based on the propagation of thermal flux eigenstates. In contrast to a decomposition based straightforwardly on thermal flux eigenstates, it does not depend on the choice of the dividing surface separating reactants from products. The new approach is illustrated by studying a prototypical example, the H + CH₄ → H₂ + CH₃ reaction. The natural reaction probabilities and the contributions of the different vibrational states of the methyl product to the natural reaction channels are calculated and discussed. The relation between the thermal flux eigenstates and the natural reaction channels is studied in detail.
Han, Te; Jiang, Dongxiang; Zhang, Xiaochen; Sun, Yankui
2017-03-27
Rotating machinery is widely used in industrial applications. With the trend towards more precise and more critical operating conditions, mechanical failures may easily occur. Condition monitoring and fault diagnosis (CMFD) technology is an effective tool to enhance the reliability and security of rotating machinery. In this paper, an intelligent fault diagnosis method based on dictionary learning and singular value decomposition (SVD) is proposed. First, the dictionary learning scheme is capable of generating an adaptive dictionary whose atoms reveal the underlying structure of raw signals; essentially, dictionary learning is employed as an adaptive feature extraction method that requires no prior knowledge. Second, the singular value sequence of the learned dictionary matrix serves as the feature vector. Since this vector is generally of high dimensionality, a simple and practical principal component analysis (PCA) is applied to reduce its dimensionality. Finally, the K-nearest neighbor (KNN) algorithm is adopted for automatic identification and classification of fault patterns. Two experimental case studies are investigated to corroborate the effectiveness of the proposed method in intelligent diagnosis of rotating machinery faults. The comparison analysis validates that the dictionary learning-based matrix construction approach outperforms the mode decomposition-based methods in terms of capacity and adaptability for feature extraction.
Algebraic multigrid domain and range decomposition (AMG-DD / AMG-RD)*
Bank, R.; Falgout, R. D.; Jones, T.; ...
2015-10-29
In modern large-scale supercomputing applications, algebraic multigrid (AMG) is a leading choice for solving matrix equations. However, the high cost of communication relative to that of computation is a concern for the scalability of traditional implementations of AMG on emerging architectures. This paper introduces two new algebraic multilevel algorithms, algebraic multigrid domain decomposition (AMG-DD) and algebraic multigrid range decomposition (AMG-RD), that replace traditional AMG V-cycles with a fully overlapping domain decomposition approach. While the methods introduced here are similar in spirit to the geometric methods developed by Brandt and Diskin [Multigrid solvers on decomposed domains, in Domain Decomposition Methods in Science and Engineering, Contemp. Math. 157, AMS, Providence, RI, 1994, pp. 135--155], Mitchell [Electron. Trans. Numer. Anal., 6 (1997), pp. 224--233], and Bank and Holst [SIAM J. Sci. Comput., 22 (2000), pp. 1411--1443], they differ primarily in that they are purely algebraic: AMG-RD and AMG-DD trade communication for computation by forming global composite "grids" based only on the matrix, not the geometry. (As is the usual AMG convention, "grids" here should be taken only in the algebraic sense, regardless of whether or not it corresponds to any geometry.) Another important distinguishing feature of AMG-RD and AMG-DD is their novel residual communication process that enables effective parallel computation on composite grids, avoiding the all-to-all communication costs of the geometric methods. The main purpose of this paper is to study the potential of these two algebraic methods as possible alternatives to existing AMG approaches for future parallel machines. As a result, this paper develops some theoretical properties of these methods and reports on serial numerical tests of their convergence properties over a spectrum of problem parameters.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dong, X; Petrongolo, M; Wang, T
Purpose: A general problem of dual-energy CT (DECT) is that the decomposition is sensitive to noise in the two sets of dual-energy projection data, resulting in severely degraded quality of the decomposed images. We have previously proposed an iterative denoising method for DECT. Using a linear decomposition function, the method does not gain the full benefits of DECT on beam-hardening correction. In this work, we expand the framework of our iterative method to include non-linear decomposition models for noise suppression in DECT. Methods: We first obtain decomposed projections, which are free of beam-hardening artifacts, using a lookup table pre-measured on a calibration phantom. First-pass material images with high noise are reconstructed from the decomposed projections using standard filtered-backprojection reconstruction. Noise on the decomposed images is then suppressed by an iterative method, which is formulated in the form of least-square estimation with smoothness regularization. Based on the design principles of a best linear unbiased estimator, we include the inverse of the estimated variance-covariance matrix of the decomposed images as the penalty weight in the least-square term. Analytical formulae are derived to compute the variance-covariance matrix from the measured decomposition lookup table. Results: We have evaluated the proposed method via phantom studies. Using non-linear decomposition, our method effectively suppresses the streaking artifacts of beam-hardening and obtains more uniform images than our previous approach based on a linear model. The proposed method reduces the average noise standard deviation of two basis materials by one order of magnitude without sacrificing the spatial resolution. Conclusion: We propose a general framework of iterative denoising for material decomposition of DECT. Preliminary phantom studies have shown that the proposed method improves image uniformity and reduces the noise level without resolution loss. In the future, we will perform more phantom studies to further validate the performance of the proposed method. This work is supported by a Varian MRA grant.
Modal identification of structures by a novel approach based on FDD-wavelet method
NASA Astrophysics Data System (ADS)
Tarinejad, Reza; Damadipour, Majid
2014-02-01
An important application of system identification in structural dynamics is the determination of natural frequencies, mode shapes and damping ratios during operation, which can then be used for calibrating numerical models. In this paper, the combination of two advanced methods of Operational Modal Analysis (OMA), Frequency Domain Decomposition (FDD) and the Continuous Wavelet Transform (CWT), based on a novel cyclic averaging of correlation functions (CACF) technique, is used for identification of dynamic properties. With this technique, the autocorrelation of averaged correlation functions is used instead of the original signals. The integration of the FDD and CWT methods overcomes their individual deficiencies and takes advantage of the unique capabilities of each. The FDD method is able to accurately estimate the natural frequencies and mode shapes of structures in the frequency domain. The CWT method, on the other hand, works in the time-frequency domain, decomposing a signal at different frequencies and determining the damping coefficients. In this paper, a new formulation applied to the wavelet transform of the averaged correlation function of an ambient response is proposed. This enables accurate estimation of damping ratios from weak (noise) or strong (earthquake) vibrations and from long- or short-duration records. For this purpose, the modified Morlet wavelet, which has two free parameters, is used. The optimum values of these two parameters are obtained by employing a technique that minimizes the entropy of the wavelet coefficient matrix. The capabilities of the novel FDD-Wavelet method in the system identification of various dynamic systems with regular or irregular distributions of mass and stiffness are illustrated. This combined approach is superior to classic methods and yields results that agree well with the exact solutions of the numerical models.
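The FDD half of the method can be sketched as follows: build the cross-spectral density matrix of the response channels at each frequency and peak-pick its first singular value. The two-channel signal, peak threshold, and segment length below are invented, and the CWT damping stage is omitted.

```python
# Hedged sketch of FDD: SVD of the cross-spectral density matrix per frequency.
import numpy as np
from scipy.signal import csd, find_peaks

rng = np.random.default_rng(10)
fs, n = 256, 2 ** 14
t = np.arange(n) / fs
# Two-channel toy response with modes near 5 Hz and 12 Hz plus noise
ch = [np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 12 * t)
      + 0.5 * rng.normal(size=n),
      0.8 * np.sin(2 * np.pi * 5 * t) - 0.6 * np.sin(2 * np.pi * 12 * t)
      + 0.5 * rng.normal(size=n)]

nper = 1024
f = csd(ch[0], ch[0], fs=fs, nperseg=nper)[0]
G = np.zeros((len(f), 2, 2), dtype=complex)    # CSD matrix at each frequency
for i in range(2):
    for j in range(2):
        G[:, i, j] = csd(ch[i], ch[j], fs=fs, nperseg=nper)[1]

# First singular value spectrum; its peaks sit at the natural frequencies
s1 = np.array([np.linalg.svd(Gk, compute_uv=False)[0] for Gk in G])
idx, _ = find_peaks(s1, height=0.2 * s1.max())
print("SV-spectrum peaks near", f[idx], "Hz")   # expect ~5 and ~12 Hz
```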
Aladko, E Ya; Dyadin, Yu A; Fenelonov, V B; Larionov, E G; Manakov, A Yu; Mel'gunov, M S; Zhurko, F V
2006-10-05
The experimental data on decomposition temperatures for the gas hydrates of ethane, propane, and carbon dioxide dispersed in silica gel mesopores are reported. The studies were performed at pressures up to 1 GPa. It is shown that the experimental dependence of the hydrate decomposition temperature on the size of the pores that limit the size of the hydrate particles can be described on the basis of the Gibbs-Thomson equation only if one takes into account changes in the shape coefficient present in the equation; in turn, the value of this coefficient depends on the method of mesopore size determination. A mechanism of hydrate formation in mesoporous media is proposed. Experimental data providing evidence of the possibility of the formation of hydrate compounds in hydrophobic matrices under high pressure are reported. The decomposition temperature of these hydrate compounds is higher than that of the bulk hydrates of the corresponding gases.
Blind source separation by sparse decomposition
NASA Astrophysics Data System (ADS)
Zibulevsky, Michael; Pearlmutter, Barak A.
2000-04-01
The blind source separation problem is to extract the underlying source signals from a set of their linear mixtures, where the mixing matrix is unknown. This situation is common, e.g., in acoustics, radio, and medical signal processing. We exploit the property of the sources having a sparse representation in a corresponding signal dictionary. Such a dictionary may consist of wavelets, wavelet packets, etc., or be obtained by learning from a given family of signals. Starting from the maximum a posteriori framework, which is applicable to the case of more sources than mixtures, we derive a few other categories of objective functions that provide faster and more robust computations when there are equal numbers of sources and mixtures. Our experiments with artificial signals and with musical sounds demonstrate significantly better separation than other known techniques.
Determination of trace metals in spirits by total reflection X-ray fluorescence spectrometry
NASA Astrophysics Data System (ADS)
Siviero, G.; Cinosi, A.; Monticelli, D.; Seralessandri, L.
2018-06-01
Eight spirit samples were analyzed for trace metal content with a Horizon Total Reflection X-Ray Fluorescence (TXRF) spectrometer. The expected amount of each metal is at the ng/g level in a mixed aqueous/organic matrix, thus requiring a sample preparation method capable of achieving suitable limits of detection. On-site enrichment and Atmospheric Pressure-Vapor Phase Decomposition allowed the detection of Cr, Mn, Fe, Co, Ni, Cu, Zn, As, Sr and Pb with detection limits ranging from 0.1 ng/g to 4.6 ng/g. These results highlight how the synergy between the instrument and the sample preparation strategy may foster the use of TXRF as a fast and reliable technique for the determination of trace elements in spirits, for either quality control or risk assessment purposes.
NASA Technical Reports Server (NTRS)
Fijany, Amir
1993-01-01
In this paper, parallel O(log n) algorithms for the computation of rigid multibody dynamics are developed. These parallel algorithms are derived by parallelization of new O(n) algorithms for the problem. The underlying feature of these O(n) algorithms is a drastically different strategy for the decomposition of interbody forces, which leads to a new factorization of the mass matrix M. Specifically, it is shown that a factorization of the inverse of the mass matrix in the form of a Schur complement is derived as M^{-1} = C - B^{*}A^{-1}B, wherein the matrices C, A, and B are block tridiagonal. The new O(n) algorithm is then derived as a recursive implementation of this factorization of M^{-1}. For closed-chain systems, similar factorizations and O(n) algorithms for the computation of the operational space mass matrix Λ and its inverse Λ^{-1} are also derived. It is shown that these O(n) algorithms are strictly parallel, that is, they are less efficient than other algorithms for serial computation of the problem. However, to our knowledge, they are the only known algorithms that can be parallelized to yield both time- and processor-optimal parallel algorithms for the problem, i.e., parallel O(log n) algorithms with O(n) processors. The developed parallel algorithms, in addition to their theoretical significance, are also practical from an implementation point of view due to their simple architectural requirements.
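The specific block-tridiagonal mass-matrix factorization is beyond a short example, but the underlying Schur-complement identity can be checked numerically; this hedged sketch verifies, for a generic symmetric positive-definite block matrix (not the paper's multibody matrices), that the bottom-right block of the inverse equals the inverse of the Schur complement S = C - B^T A^{-1} B:

    import numpy as np

    # Generic Schur-complement illustration: for K = [[A, B], [B^T, C]] with A
    # invertible, the bottom-right block of K^{-1} equals S^{-1}, where
    # S = C - B^T A^{-1} B is the Schur complement of A in K.
    rng = np.random.default_rng(0)
    n = 4
    M = rng.standard_normal((2 * n, 2 * n))
    K = M @ M.T + 2 * n * np.eye(2 * n)                 # symmetric positive definite
    A, B, C = K[:n, :n], K[:n, n:], K[n:, n:]

    S = C - B.T @ np.linalg.solve(A, B)                 # Schur complement of A in K
    bottom_right = np.linalg.inv(K)[n:, n:]
    print(np.allclose(bottom_right, np.linalg.inv(S)))  # True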
Sparse Regression as a Sparse Eigenvalue Problem
NASA Technical Reports Server (NTRS)
Moghaddam, Baback; Gruber, Amit; Weiss, Yair; Avidan, Shai
2008-01-01
We extend the l0-norm "subspectral" algorithms for sparse-LDA [5] and sparse-PCA [6] to general quadratic costs such as MSE in linear (kernel) regression. The resulting "Sparse Least Squares" (SLS) problem is also NP-hard, by way of its equivalence to a rank-1 sparse eigenvalue problem (e.g., binary sparse-LDA [7]). Specifically, for a general quadratic cost we use a highly efficient technique for direct eigenvalue computation using partitioned matrix inverses, which leads to dramatic 10^3-fold speed-ups over standard eigenvalue decomposition. This increased efficiency mitigates the O(n^4) scaling behaviour that up to now has limited the previous algorithms' utility for high-dimensional learning problems. Moreover, the new computation prioritizes the role of the less-myopic backward elimination stage, which becomes more efficient than forward selection. Similarly, branch-and-bound search for Exact Sparse Least Squares (ESLS) also benefits from partitioned matrix inverse techniques. Our Greedy Sparse Least Squares (GSLS) algorithm generalizes Natarajan's algorithm [9], also known as Order-Recursive Matching Pursuit (ORMP). Specifically, the forward half of GSLS is exactly equivalent to ORMP but more efficient, and by including the backward pass, which only doubles the computation, we can achieve lower MSE than ORMP. Experimental comparisons to the state-of-the-art LARS algorithm [3] show forward-GSLS is faster, more accurate and more flexible in terms of the choice of regularization.
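The following sketch implements only the plain forward-selection (ORMP-style) half of greedy sparse least squares, without the paper's fast partitioned-matrix-inverse updates or the backward pass; problem sizes and the sparse ground truth are arbitrary:

    import numpy as np

    # Greedy forward selection for sparse least squares: at each step, add the
    # column that most reduces the residual after a full least-squares refit.
    # A plain O(k * p * cost(lstsq)) sketch, not the fast partitioned-inverse
    # implementation described in the abstract.
    def greedy_sls(X, y, k):
        support = []
        for _ in range(k):
            best_j, best_err = None, np.inf
            for j in range(X.shape[1]):
                if j in support:
                    continue
                cols = support + [j]
                coef, *_ = np.linalg.lstsq(X[:, cols], y, rcond=None)
                err = np.linalg.norm(y - X[:, cols] @ coef)
                if err < best_err:
                    best_j, best_err = j, err
            support.append(best_j)
        coef, *_ = np.linalg.lstsq(X[:, support], y, rcond=None)
        return support, coef

    rng = np.random.default_rng(0)
    X = rng.standard_normal((100, 30))
    beta = np.zeros(30)
    beta[[3, 17, 22]] = [2.0, -1.5, 1.0]
    y = X @ beta + 0.05 * rng.standard_normal(100)
    print(greedy_sls(X, y, 3)[0])                       # expect {3, 17, 22} in some order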
Gao, Yu-Fei; Gui, Guan; Xie, Wei; Zou, Yan-Bin; Yang, Yue; Wan, Qun
2017-01-01
This paper investigates a two-dimensional angle of arrival (2D AOA) estimation algorithm for the electromagnetic vector sensor (EMVS) array based on Type-2 block component decomposition (BCD) tensor modeling. Such a tensor decomposition method can take full advantage of the multidimensional structural information of electromagnetic signals to accomplish blind estimation for array parameters with higher resolution. However, existing tensor decomposition methods encounter many restrictions in applications of the EMVS array, such as the strict requirement for uniqueness conditions of decomposition, the inability to handle partially-polarized signals, etc. To solve these problems, this paper investigates tensor modeling for partially-polarized signals of an L-shaped EMVS array. The 2D AOA estimation algorithm based on rank-(L1,L2,·) BCD is developed, and the uniqueness condition of decomposition is analyzed. By means of the estimated steering matrix, the proposed algorithm can automatically achieve angle pair-matching. Numerical experiments demonstrate that the present algorithm has the advantages of both accuracy and robustness of parameter estimation. Even under the conditions of lower SNR, small angular separation and limited snapshots, the proposed algorithm still possesses better performance than subspace methods and the canonical polyadic decomposition (CPD) method. PMID:28448431
Direct Iterative Nonlinear Inversion by Multi-frequency T-matrix Completion
NASA Astrophysics Data System (ADS)
Jakobsen, M.; Wu, R. S.
2016-12-01
Researchers in the mathematical physics community have recently proposed a conceptually new method for solving nonlinear inverse scattering problems (like FWI) which is inspired by the theory of nonlocality of physical interactions. The conceptually new method, which may be referred to as the T-matrix completion method, is very interesting since it is not based on linearization at any stage. Also, there are no gradient vectors or (inverse) Hessian matrices to calculate. However, the convergence radius of this promising T-matrix completion method is seriously restricted by its use of single-frequency scattering data only. In this study, we have developed a modified version of the T-matrix completion method which we believe is more suitable for applications to nonlinear inverse scattering problems in (exploration) seismology, because it makes use of multi-frequency data. Essentially, we have simplified the single-frequency T-matrix completion method of Levinson and Markel and combined it with the standard sequential frequency inversion (multi-scale regularization) method. For each frequency, we first estimate the experimental T-matrix by using the Moore-Penrose pseudoinverse concept. This experimental T-matrix is then used to initiate an iterative procedure for the successive estimation of the scattering potential and the T-matrix, using the Lippmann-Schwinger equation for the nonlinear relation between these two quantities. The main physical requirements in the basic iterative cycle are that the T-matrix should be data-compatible and the scattering potential operator should be dominantly local, although a non-local scattering potential operator is allowed in the intermediate iterations. In our simplified T-matrix completion strategy, we ensure that the T-matrix updates are always data-compatible simply by adding a suitable correction term in the real-space coordinate representation. The use of singular-value decomposition representations is not required in our formulation, since we have developed an efficient domain decomposition method. The results of several numerical experiments for the SEG/EAGE salt model illustrate the importance of using multi-frequency data when performing frequency-domain full waveform inversion in strongly scattering media via the new concept of T-matrix completion.
Absolute continuity for operator valued completely positive maps on C∗-algebras
NASA Astrophysics Data System (ADS)
Gheondea, Aurelian; Kavruk, Ali Şamil
2009-02-01
Motivated by applicability to quantum operations, quantum information, and quantum probability, we investigate the notion of absolute continuity for operator valued completely positive maps on C∗-algebras, previously introduced by Parthasarathy [in Athens Conference on Applied Probability and Time Series Analysis I (Springer-Verlag, Berlin, 1996), pp. 34-54]. We obtain an intrinsic definition of absolute continuity, we show that the Lebesgue decomposition defined by Parthasarathy is the maximal one among all other Lebesgue-type decompositions and that this maximal Lebesgue decomposition does not depend on the jointly dominating completely positive map, we obtain more flexible formulas for calculating the maximal Lebesgue decomposition, and we point out the nonuniqueness of the Lebesgue decomposition as well as a sufficient condition for uniqueness. In addition, we consider Radon-Nikodym derivatives for absolutely continuous completely positive maps that, in general, are unbounded positive self-adjoint operators affiliated to a certain von Neumann algebra, and we obtain a spectral approximation by bounded Radon-Nikodym derivatives. An application to the existence of the infimum of two completely positive maps is indicated, and formulas in terms of Choi's matrices for the Lebesgue decomposition of completely positive maps in matrix algebras are obtained.
Yao, Shengnan; Zeng, Weiming; Wang, Nizhuan; Chen, Lei
2013-07-01
Independent component analysis (ICA) has been proven to be effective for functional magnetic resonance imaging (fMRI) data analysis. However, ICA decomposition requires iterative optimization of the unmixing matrix, whose initial values are generated randomly; the randomness of the initialization thus leads to different ICA decomposition results, so a single decomposition is not usually reliable for fMRI data analysis. Under this circumstance, several methods based on repeated decompositions with ICA (RDICA) were proposed to reveal the stability of ICA decomposition. Although RDICA has achieved satisfying results in validating the performance of ICA decomposition, it costs substantial computing time. To mitigate this problem, we propose a method, named ATGP-ICA, for fMRI data analysis. This method generates fixed initial values with an automatic target generation process (ATGP) instead of producing them randomly. We performed experimental tests on both hybrid data and fMRI data to show the effectiveness of the new method, and compared the performance of traditional one-time decomposition with ICA (ODICA), RDICA and ATGP-ICA. The proposed method not only eliminates the randomness of ICA decomposition, but also saves much computing time compared to RDICA. Furthermore, ROC (Receiver Operating Characteristic) power analysis also indicated better signal reconstruction performance for ATGP-ICA than for RDICA. Copyright © 2013 Elsevier Inc. All rights reserved.
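The essential idea, that a fixed, deterministic initialization of the unmixing matrix makes the decomposition repeatable, can be sketched with scikit-learn's FastICA; here an identity initialization merely stands in for the ATGP-generated targets (an assumption for illustration, not the paper's procedure):

    import numpy as np
    from sklearn.decomposition import FastICA

    # A fixed w_init makes FastICA reproducible across runs, which is the core
    # idea behind replacing random initialization. The identity init below is a
    # stand-in for ATGP-generated targets (an assumption).
    t = np.linspace(0, 8, 2000)
    S = np.c_[np.sin(2 * t), np.sign(np.sin(3 * t))]    # two sources
    X = S @ np.array([[1.0, 0.5], [0.5, 1.0]])          # mixed observations

    w0 = np.eye(2)                                      # fixed initial unmixing matrix
    runs = [FastICA(n_components=2, w_init=w0,
                    whiten="unit-variance").fit_transform(X) for _ in range(2)]
    print(np.allclose(runs[0], runs[1]))                # True: decomposition is repeatable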
Electrochemical Test Method for Evaluating Long-Term Propellant-Material Compatibility
1978-12-01
matrix of test conditions is illustrated in Fig. 13. A statistically designed test matrix (Graeco-Latin Cube) could not be used because of passivation...years simulated time results in a final decomposition level of 0.753 mg/cm. The data were examined using statistical techniques to evaluate the relative...metals. The compatibility of all nine metals was evaluated in hydrazine containing water and chloride. The results of the statistical analysis
Time series decomposition methods were applied to meteorological and air quality data and their numerical model estimates. Decomposition techniques express a time series as the sum of a small number of independent modes which hypothetically represent identifiable forcings, thereb...
Lu, Yan; Li, Gang; Liu, Wei; Yuan, Hongyan; Xiao, Dan
2018-08-15
Refractory ores are fundamental to the national economy and are widely used in various fields; however, the complexity of their chemical composition and the diversity of crystallinity in the mineral phases mean that sample pre-treatment of refractory ore remains a challenge. In this work, complete decomposition of a refractory ore sample is achieved simply by exposing the solid fusion agent and the ore sample to microwave irradiation for a few minutes, induced by a drop of water. A digestion time of 15 min for 3.0 g of a solid fusion agent mixture of sodium peroxide/sodium carbonate (Na2O2/Na2CO3) in a corundum crucible under microwave heating is sufficient to decompose 0.1 g of refractory ore. An excellent solid agent for microwave digestion should meet the following conditions: a good decomposition ability, an outstanding ability to absorb microwave energy and convert it into heat quickly, and a melting point higher than the decomposition temperature of the ore sample. The induction effect of water plays an important role in the microwave digestion: the energy released by the reaction of water with the solid fusion agent (Na2O2) is the key to decomposing refractory ore samples, replenishing the total energy required and allowing the digestion to complete successfully. The technique has good reproducibility and precision; the RSDs for Mo, Fe, Ti, Cr and W in the refractory ore samples were all better than 6%, except for Be (about 8%) owing to matrix effects. Meanwhile, the analysis results for the elements in the refractory ore samples obtained by the microwave digestion technique agreed well with those obtained by the traditional fusion method, except for Cr in the mixture ore samples. The non-linear dependence of the electromagnetic and thermal properties of the solid fusion agent on temperature under microwave irradiation and the selective heating of microwaves are fully exploited in this simple technique. Compared to the traditional fusion decomposition method, this microwave digestion technique is a simple, economical, fast and energy-saving sample pre-treatment technique. Copyright © 2018 Elsevier B.V. All rights reserved.
Decomposition of Multi-player Games
NASA Astrophysics Data System (ADS)
Zhao, Dengji; Schiffel, Stephan; Thielscher, Michael
Research in General Game Playing aims at building systems that learn to play unknown games without human intervention. We contribute to this endeavour by generalising the established technique of decomposition from AI Planning to multi-player games. To this end, we present a method for the automatic decomposition of previously unknown games into independent subgames, and we show how a general game player can exploit a successful decomposition for game tree search.
General linear codes for fault-tolerant matrix operations on processor arrays
NASA Technical Reports Server (NTRS)
Nair, V. S. S.; Abraham, J. A.
1988-01-01
Various checksum codes have been suggested for fault-tolerant matrix computations on processor arrays. Use of these codes is limited due to potential roundoff and overflow errors; numerical errors may also be misconstrued as errors due to physical faults in the system. In this paper, a set of linear codes is identified which can be used for fault-tolerant matrix operations such as matrix addition, multiplication, transposition, and LU decomposition, with minimum numerical error. Encoding schemes are given for some example codes that fall under the general set of codes. With the help of experiments, a rule of thumb for the selection of a particular code for a given application is derived.
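A classical instance of such codes is the row/column checksum scheme for matrix multiplication (Huang-Abraham style), which the general linear codes above extend; this sketch encodes, injects a single fault, and localizes it (all data synthetic):

    import numpy as np

    # Row/column checksum scheme for fault-tolerant matrix multiplication:
    # append a column-checksum row to A and a row-checksum column to B; the
    # product then carries checksums that localize a single erroneous entry.
    rng = np.random.default_rng(0)
    A, B = rng.standard_normal((4, 5)), rng.standard_normal((5, 3))
    e_r, e_c = np.ones((1, 4)), np.ones((3, 1))

    Af = np.vstack([A, e_r @ A])                        # column-checksum encoded A
    Bf = np.hstack([B, B @ e_c])                        # row-checksum encoded B
    Cf = Af @ Bf                                        # full-checksum product

    Cf[1, 2] += 0.5                                     # inject a fault
    col_mismatch = np.abs(Cf[:-1, :-1].sum(axis=0) - Cf[-1, :-1])
    row_mismatch = np.abs(Cf[:-1, :-1].sum(axis=1) - Cf[:-1, -1])
    print("faulty entry:", np.argmax(row_mismatch), np.argmax(col_mismatch))  # (1, 2)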
1,2-diketones promoted degradation of poly(epsilon-caprolactone)
NASA Astrophysics Data System (ADS)
Danko, Martin; Borska, Katarina; Ragab, Sherif Shaban; Janigova, Ivica; Mosnacek, Jaroslav
2012-07-01
Photochemical reactions of benzil and camphorquinone were used for the modification of poly(ɛ-caprolactone) polymer films. The photochemistry of the dopants was followed by infrared spectroscopy; changes in the polymer chains of the matrix were followed by gel permeation chromatography. Benzoyl peroxide was efficiently photochemically generated from benzil in the solid polymer matrix in the presence of air. Subsequent decomposition of the benzoyl peroxide led to degradation of the matrix. Photochemical transformation of benzil in vacuum led to hydrogen abstraction from the polymer chains to a greater extent, which resulted in chain recombination and the formation of gel. Photochemical transformation of camphorquinone to the corresponding camphoric peroxide was not observed; only a decrease in the molecular weight of the polymer matrix doped with camphorquinone was observed during irradiation.
Users manual for the Variable dimension Automatic Synthesis Program (VASP)
NASA Technical Reports Server (NTRS)
White, J. S.; Lee, H. Q.
1971-01-01
A dictionary and some example problems for the Variable dimension Automatic Synthesis Program (VASP) are presented. The dictionary contains a description of each subroutine and instructions on its use. The example problems give the user a better perspective on the use of VASP for solving problems in modern control theory; they include dynamic response, optimal control gain, solution of the sampled-data matrix Riccati equation, matrix decomposition, and the pseudoinverse of a matrix. Listings of all subroutines are also included. The VASP program has been adapted to run in conversational mode on the Ames 360/67 computer.
A general framework of noise suppression in material decomposition for dual-energy CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Petrongolo, Michael; Dong, Xue; Zhu, Lei, E-mail: leizhu@gatech.edu
Purpose: As a general problem of dual-energy CT (DECT), noise amplification in material decomposition severely reduces the signal-to-noise ratio on the decomposed images compared to that on the original CT images. In this work, the authors propose a general framework of noise suppression in material decomposition for DECT. The method is based on an iterative algorithm recently developed in their group for image-domain decomposition of DECT, with an extension to include nonlinear decomposition models. The generalized framework of iterative DECT decomposition enables beam-hardening correction with simultaneous noise suppression, which improves the clinical benefits of DECT. Methods: The authors propose to suppress noise on the decomposed images of DECT using convex optimization, which is formulated in the form of least-squares estimation with smoothness regularization. Based on the design principles of a best linear unbiased estimator, the authors include the inverse of the estimated variance-covariance matrix of the decomposed images as the penalty weight in the least-squares term. Analytical formulas are derived to compute the variance-covariance matrix for decomposed images with general-form numerical or analytical decomposition. As a demonstration, the authors implement the proposed algorithm on phantom data using an empirical polynomial function of decomposition measured on a calibration scan. The polynomial coefficients are determined from the projection data acquired on a wedge phantom, and the signal decomposition is performed in the projection domain. Results: On the Catphan®600 phantom, the proposed noise suppression method reduces the average noise standard deviation of basis material images by one to two orders of magnitude, with superior performance on spatial resolution as shown in comparisons of line-pair images and modulation transfer function measurements. On the synthesized monoenergetic CT images, the noise standard deviation is reduced by a factor of 2-3. By using nonlinear decomposition on projections, the authors' method effectively suppresses the streaking artifacts of beam hardening and obtains more uniform images than their previous approach based on a linear model. Similar noise suppression performance is observed in the results of an anthropomorphic head phantom and a pediatric chest phantom generated by the proposed method. With beam-hardening correction enabled by their approach, the image spatial nonuniformity on the head phantom is reduced from around 10% on the original CT images to 4.9% on the synthesized monoenergetic CT image. On the pediatric chest phantom, their method suppresses the image noise standard deviation by a factor of around 7.5 and, compared with linear decomposition, reduces the estimation error of electron densities from 33.3% to 8.6%. Conclusions: The authors propose a general framework of noise suppression in material decomposition for DECT. Phantom studies have shown that the proposed method improves image uniformity and the accuracy of electron density measurements by effective beam-hardening correction, and reduces the noise level without noticeable resolution loss.
Duemichen, E; Braun, U; Senz, R; Fabian, G; Sturm, H
2014-08-08
For the analysis of gaseous thermal decomposition products of polymers, the common techniques are thermogravimetry combined with Fourier transform infrared spectroscopy (TGA-FTIR) and with mass spectrometry (TGA-MS). These methods offer a simple approach to the decomposition mechanism, especially for small decomposition molecules. Complex spectra of gaseous mixtures are often hard to identify because of overlapping signals. In this paper a new method is described in which the decomposition products are adsorbed, under controlled conditions in TGA, on solid-phase extraction (SPE) material (twisters). Subsequently the twisters were analysed by thermal desorption gas chromatography mass spectrometry (TDS-GC-MS), which allows the decomposition products to be separated and identified using an MS library. The thermoplastics polyamide 66 (PA 66) and polybutylene terephthalate (PBT) were used as example polymers. The influence of the sample mass and of the purge gas flow during the decomposition process was investigated in TGA. The advantages and limitations of the method are presented in comparison to the common analysis techniques, TGA-FTIR and TGA-MS. Copyright © 2014 Elsevier B.V. All rights reserved.
An optimization approach for fitting canonical tensor decompositions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dunlavy, Daniel M.; Acar, Evrim; Kolda, Tamara Gibson
Tensor decompositions are higher-order analogues of matrix decompositions and have proven to be powerful tools for data analysis. In particular, we are interested in the canonical tensor decomposition, otherwise known as the CANDECOMP/PARAFAC decomposition (CPD), which expresses a tensor as the sum of component rank-one tensors and is used in a multitude of applications such as chemometrics, signal processing, neuroscience, and web analysis. The task of computing the CPD, however, can be difficult. The typical approach is based on alternating least squares (ALS) optimization, which can be remarkably fast but is not very accurate. Previously, nonlinear least squares (NLS) methods have also been recommended; existing NLS methods are accurate but slow. In this paper, we propose the use of gradient-based optimization methods. We discuss the mathematical calculation of the derivatives and further show that they can be computed efficiently, at the same cost as one iteration of ALS. Computational experiments demonstrate that the gradient-based optimization methods are much more accurate than ALS and orders of magnitude faster than NLS.
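For reference, the ALS baseline that the gradient-based methods are compared against can be written compactly; this sketch (C-order unfolding and Khatri-Rao conventions assumed, sizes arbitrary) fits an exact rank-2 tensor:

    import numpy as np

    # Baseline CP decomposition via alternating least squares (ALS). The paper
    # argues gradient-based methods are more accurate; this minimal ALS sketch
    # just shows the model being fit.
    def khatri_rao(A, B):
        # column-wise Kronecker product
        return np.einsum('ir,jr->ijr', A, B).reshape(-1, A.shape[1])

    def unfold(T, mode):
        return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

    def cp_als(T, rank, iters=100, seed=0):
        rng = np.random.default_rng(seed)
        A, B, C = (rng.standard_normal((s, rank)) for s in T.shape)
        for _ in range(iters):
            A = unfold(T, 0) @ np.linalg.pinv(khatri_rao(B, C).T)
            B = unfold(T, 1) @ np.linalg.pinv(khatri_rao(A, C).T)
            C = unfold(T, 2) @ np.linalg.pinv(khatri_rao(A, B).T)
        return A, B, C

    rng = np.random.default_rng(1)
    A0, B0, C0 = (rng.standard_normal((s, 2)) for s in (5, 6, 7))
    T = np.einsum('ir,jr,kr->ijk', A0, B0, C0)          # exact rank-2 tensor
    A, B, C = cp_als(T, rank=2)
    print(np.linalg.norm(T - np.einsum('ir,jr,kr->ijk', A, B, C)) / np.linalg.norm(T))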
The predictive power of singular value decomposition entropy for stock market dynamics
NASA Astrophysics Data System (ADS)
Caraiani, Petre
2014-01-01
We use a correlation-based approach to analyze financial data from the US stock market, both daily and monthly observations from the Dow Jones. We compute the entropy based on the singular value decomposition of the correlation matrix for the components of the Dow Jones Industrial Index. Based on a moving window, we derive time varying measures of entropy for both daily and monthly data. We find that the entropy has a predictive ability with respect to stock market dynamics as indicated by the Granger causality tests.
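A minimal sketch of the entropy computation on synthetic returns follows; normalizing the singular values to sum to one before taking the Shannon entropy is one common convention and an assumption here, not necessarily the paper's exact definition:

    import numpy as np

    # SVD entropy of a moving-window correlation matrix, on synthetic returns.
    def svd_entropy(window_returns):
        corr = np.corrcoef(window_returns, rowvar=False)
        s = np.linalg.svd(corr, compute_uv=False)
        p = s / s.sum()                                 # normalized singular values
        return -np.sum(p * np.log(p))                   # Shannon entropy

    rng = np.random.default_rng(0)
    returns = 0.01 * rng.standard_normal((1000, 30))    # 1000 days x 30 stocks
    win = 250
    entropy = np.array([svd_entropy(returns[t - win:t])
                        for t in range(win, returns.shape[0])])
    print(entropy[:5])                                  # time-varying entropy series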
Monitoring hydraulic stimulation using telluric sounding
NASA Astrophysics Data System (ADS)
Rees, Nigel; Heinson, Graham; Conway, Dennis
2018-01-01
The telluric sounding (TS) method is introduced as a potential tool for monitoring hydraulic fracturing at depth. The advantage of this technique is that it requires only the measurement of electric fields, which is cheap and easy compared with magnetotelluric measurements. Additionally, the transfer function between electric fields at two locations is essentially the identity matrix for a 1D Earth, no matter what the vertical structure. Therefore, changes in the Earth resulting from the introduction of conductive bodies underneath one of these sites can be associated with deviations away from the identity matrix, with static shift appearing as a galvanic multiplier at all periods. Singular value decomposition and eigenvalue analysis can reduce the complexity of the resulting telluric distortion matrix to simpler parameters that can be visualised in the form of Mohr circles. This technique would be useful in constraining the lateral extent of resistivity changes. We test the viability of the TS method for monitoring on both a synthetic dataset and a case study of hydraulic stimulation of an enhanced geothermal system conducted in Paralana, South Australia. The synthetic data example shows small but consistent changes in the transfer functions associated with hydraulic stimulation, with grids of Mohr circles introduced as a useful diagnostic tool for visualising the extent of fluid movement. The Paralana electric field data were relatively noisy and affected by the dead band, making the analysis of transfer functions difficult; however, changes on the order of 5% were observed from 5 s to longer periods. We conclude that deep monitoring using the TS method is marginal at depths on the order of 4 km and that, in order to obtain meaningful interpretations, electric field data need to be of high quality with low levels of site noise.
Kumar, Ranjeet; Kumar, A; Singh, G K
2016-06-01
In the biomedical field, it is necessary to reduce data volume owing to the limited storage of real-time ambulatory and telemedicine systems, and research into efficient and simple compression techniques has long been underway for their long-term benefits. This paper presents an algorithm based on singular value decomposition (SVD) and embedded zerotree wavelet (EZW) techniques for ECG signal compression, which deals with the huge data volumes of ambulatory systems. The proposed method uses a low-rank matrix for initial compression of a two-dimensional (2-D) ECG data array via SVD, and EZW is then applied for final compression. The construction of the 2-D array is a key issue for the proposed technique in pre-processing; here, three different beat segmentation approaches are exploited for 2-D array construction using segmented beat alignment that exploits beat correlation. The proposed algorithm has been tested on the MIT-BIH arrhythmia records, and it was found to be very efficient in compressing different types of ECG signal with low signal distortion under different fidelity assessments. The evaluation results show that the proposed algorithm achieves a compression ratio of 24.25:1 with excellent reconstruction quality, a percentage root-mean-square difference (PRD) of 1.89% for ECG record 100, consuming only 162 bps instead of 3960 bps for the uncompressed data. The proposed method is efficient and flexible for compressing different types of ECG signal and controls the quality of reconstruction. Simulation results clearly illustrate that the proposed method can play a significant role in saving memory space in health data centres as well as bandwidth in telemedicine-based healthcare systems. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
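The initial SVD stage can be sketched on a synthetic 2-D beat array as follows (the EZW coding stage and the MIT-BIH data are omitted; the beat model and retained rank are illustrative assumptions):

    import numpy as np

    # Stack aligned beats into a 2-D array, keep a low-rank SVD approximation,
    # and report PRD. Beats here are synthetic Gaussian-bump "QRS" shapes.
    rng = np.random.default_rng(0)
    n_beats, beat_len = 64, 256
    t = np.linspace(0, 1, beat_len)
    template = np.exp(-((t - 0.3) ** 2) / 0.001) - 0.3 * np.exp(-((t - 0.5) ** 2) / 0.01)
    X = np.vstack([template * (1 + 0.05 * rng.standard_normal()) +
                   0.01 * rng.standard_normal(beat_len) for _ in range(n_beats)])

    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    r = 3                                               # retained rank
    Xr = (U[:, :r] * s[:r]) @ Vt[:r]                    # low-rank reconstruction

    prd = 100 * np.linalg.norm(X - Xr) / np.linalg.norm(X)  # percentage RMS difference
    print(f"rank {r}: PRD = {prd:.2f}%")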
Order reduction, identification and localization studies of dynamical systems
NASA Astrophysics Data System (ADS)
Ma, Xianghong
In this thesis, methods are developed for performing order reduction, system identification and induction of nonlinear localization in complex mechanical dynamic systems. General techniques are proposed for constructing low-order models of linear and nonlinear mechanical systems; in addition, novel mechanical designs are considered for inducing nonlinear localization phenomena for the purpose of enhancing dynamical performance. The thesis is in three major parts. In the first part, the transient dynamics of an impulsively loaded multi-bay truss are numerically computed by employing the Direct Global Matrix (DGM) approach, which is applicable to large-scale flexible structures with periodicity. Karhunen-Loève (K-L) decomposition is used to discretize the dynamics of the truss and to create low-order models of it. The leading-order K-L modes are recovered by an experiment, which shows the feasibility of the K-L based order reduction technique. In the second part of the thesis, nonlinear localization in dynamical systems is studied through two applications. In the seismic base isolation study, it is shown that the dynamics are sensitive to the presence of nonlinear elements and that passive motion confinement can be induced under proper design. In the coupled rod system, numerical simulation of the transient dynamics shows that a nonlinear backlash spring can induce either nonlinear localization or delocalization in the form of beat phenomena. K-L decomposition and Poincaré maps are utilized to study the nonlinear effects. The study shows that nonlinear localization can be induced in complex structures through backlash. In the third and final part of the thesis, a new technique based on Green's function methods is proposed to identify the dynamics of practical bolted joints. By modeling the difference between the dynamics of the bolted structure and the corresponding unbolted one, a nonparametric model for the joint dynamics is constructed. Two applications are given, a bolted beam and a truss joint, to show the applicability of the technique.
Improvements in sparse matrix operations of NASTRAN
NASA Technical Reports Server (NTRS)
Harano, S.
1980-01-01
A "nontransmit" packing routine was added to NASTRAN to allow matrix data to be refered to directly from the input/output buffer. Use of the packing routine permits various routines for matrix handling to perform a direct reference to the input/output buffer if data addresses have once been received. The packing routine offers a buffer by buffer backspace feature for efficient backspacing in sequential access. Unlike a conventional backspacing that needs twice back record for a single read of one record (one column), this feature omits overlapping of READ operation and back record. It eliminates the necessity of writing, in decomposition of a symmetric matrix, of a portion of the matrix to its upper triangular matrix from the last to the first columns of the symmetric matrix, thus saving time for generating the upper triangular matrix. Only a lower triangular matrix must be written onto the secondary storage device, bringing 10 to 30% reduction in use of the disk space of the storage device.
NASA Astrophysics Data System (ADS)
Fortenberry, Claire F.; Walker, Michael J.; Zhang, Yaping; Mitroo, Dhruv; Brune, William H.; Williams, Brent J.
2018-02-01
The chemical complexity of biomass burning organic aerosol (BBOA) greatly increases with photochemical aging in the atmosphere, necessitating controlled laboratory studies to inform field observations. In these experiments, BBOA from American white oak (Quercus alba) leaf and heartwood samples was generated in a custom-built emissions and combustion chamber and photochemically aged in a potential aerosol mass (PAM) flow reactor. A thermal desorption aerosol gas chromatograph (TAG) was used in parallel with a high-resolution time-of-flight aerosol mass spectrometer (AMS) to analyze BBOA chemical composition at different levels of photochemical aging. Individual compounds were identified and integrated to obtain relative decay rates for key molecules. A recently developed chromatogram binning positive matrix factorization (PMF) technique was used to obtain mass spectral profiles for factors in TAG BBOA chromatograms, improving analysis efficiency and providing a more complete determination of unresolved complex mixture (UCM) components. Additionally, the recently characterized TAG decomposition window was used to track molecular fragments created by the decomposition of thermally labile BBOA during sample desorption. We demonstrate that although most primary (freshly emitted) BBOA compounds deplete with photochemical aging, certain components eluting within the TAG thermal decomposition window are instead enhanced. Specifically, the increasing trend in the decomposition m/z 44 signal (CO2+) indicates formation of secondary organic aerosol (SOA) in the PAM reactor. Sources of m/z 60 (C2H4O2+), typically attributed to freshly emitted BBOA in AMS field measurements, were also investigated. From the TAG chemical speciation and decomposition window data, we observed a decrease in m/z 60 with photochemical aging due to the decay of anhydrosugars (including levoglucosan) and other compounds, as well as an increase in m/z 60 due to the formation of thermally labile organic acids within the PAM reactor, which decompose during TAG sample desorption. When aging both types of BBOA (leaf and heartwood), the AMS data exhibit a combination of these two contributing effects, causing limited change to the overall m/z 60 signal. Our observations demonstrate the importance of chemically speciated data in fully understanding bulk aerosol measurements provided by the AMS in both laboratory and field studies.
Decomposition of Metrosideros polymorpha leaf litter along elevational gradients in Hawaii
Paul G. Scowcroft; Douglas R. Turner; Peter M. Vitousek
2000-01-01
We examined interactions between temperature, soil development, and decomposition on three elevational gradients, the upper and lower ends of each being situated on a common lava flow or ash deposit. We used the reciprocal transplant technique to estimate decomposition rates of Metrosideros polymorpha leaf litter during a three-year period at warm...
Mechanism of thermal decomposition of K2FeO4 and BaFeO4: A review
NASA Astrophysics Data System (ADS)
Sharma, Virender K.; Machala, Libor
2016-12-01
This paper presents the thermal decomposition of potassium ferrate(VI) (K2FeO4) and barium ferrate(VI) (BaFeO4) in air and nitrogen atmospheres. Mössbauer spectroscopy and nuclear forward scattering (NFS) of synchrotron radiation approaches are reviewed to advance understanding of the electron-transfer processes involved in the reduction of ferrate(VI) to Fe(III) phases. Direct evidence of Fe(V) and Fe(IV) as intermediate iron species, obtained using the applied techniques, is given. Thermal decomposition of K2FeO4 involved Fe(V), Fe(IV), and K3FeO3 as intermediate species, while BaFeO3 (i.e. Fe(IV)) was the only intermediate species during the decomposition of BaFeO4. The nature of the ferrite species formed as final Fe(III) products of the thermal decomposition of K2FeO4 and BaFeO4 under different conditions is evaluated. The steps of the mechanisms of thermal decomposition of ferrate(VI), which reasonably explain the experimental observations of the applied approaches in conjunction with thermal and surface techniques, are summarized.
Effects of Solute Concentrations on Kinetic Pathways in Ni-Al-Cr Alloys
NASA Technical Reports Server (NTRS)
Booth-Morrison, Christopher; Weninger, Jessica; Sudbrack, Chantal K.; Mao, Zugang; Seidman, David N.; Noebe, Ronald D.
2008-01-01
The kinetic pathways resulting from the formation of coherent gamma'-precipitates from the gamma-matrix are studied for two Ni-Al-Cr alloys with similar gamma'-precipitate volume fractions at 873 K. The details of the phase decompositions of Ni-7.5Al-8.5Cr at.% and Ni-5.2Al-14.2Cr at.% for aging times from 1/6 to 1024 h are investigated by atom-probe tomography, and are found to differ significantly from a mean-field description of coarsening. The morphologies of the gamma'-precipitates of the alloys are similar, though the degrees of gamma'-precipitate coagulation and coalescence differ. Quantification within the framework of classical nucleation theory reveals that differences in the chemical driving forces for phase decomposition result in differences in the nucleation behavior of the two alloys. The temporal evolution of the gamma'-precipitate average radii and the gamma-matrix supersaturations follow the predictions of classical coarsening models. The compositional trajectories of the gamma-matrix phases of the alloys are found to follow approximately the equilibrium tie-lines, while the trajectories of the gamma'-precipitates do not, resulting in significant differences in the partitioning ratios of the solute elements.
Matrix approach to uncertainty assessment and reduction for modeling terrestrial carbon cycle
NASA Astrophysics Data System (ADS)
Luo, Y.; Xia, J.; Ahlström, A.; Zhou, S.; Huang, Y.; Shi, Z.; Wang, Y.; Du, Z.; Lu, X.
2017-12-01
Terrestrial ecosystems absorb approximately 30% of the anthropogenic carbon dioxide emissions. This estimate has been deduced indirectly: combining analyses of atmospheric carbon dioxide concentrations with ocean observations to infer the net terrestrial carbon flux. In contrast, when knowledge about the terrestrial carbon cycle is integrated into different terrestrial carbon models they make widely different predictions. To improve the terrestrial carbon models, we have recently developed a matrix approach to uncertainty assessment and reduction. Specifically, the terrestrial carbon cycle has been commonly represented by a series of carbon balance equations to track carbon influxes into and effluxes out of individual pools in earth system models. This representation matches our understanding of carbon cycle processes well and can be reorganized into one matrix equation without changing any modeled carbon cycle processes and mechanisms. We have developed matrix equations of several global land C cycle models, including CLM3.5, 4.0 and 4.5, CABLE, LPJ-GUESS, and ORCHIDEE. Indeed, the matrix equation is generic and can be applied to other land carbon models. This matrix approach offers a suite of new diagnostic tools, such as the 3-dimensional (3-D) parameter space, traceability analysis, and variance decomposition, for uncertainty analysis. For example, predictions of carbon dynamics with complex land models can be placed in a 3-D parameter space (carbon input, residence time, and storage potential) as a common metric to measure how much model predictions are different. The latter can be traced to its source components by decomposing model predictions to a hierarchy of traceable components. Then, variance decomposition can help attribute the spread in predictions among multiple models to precisely identify sources of uncertainty. The highly uncertain components can be constrained by data as the matrix equation makes data assimilation computationally possible. We will illustrate various applications of this matrix approach to uncertainty assessment and reduction for terrestrial carbon cycle models.
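A toy version of the matrix form dX/dt = B u(t) - A K X can be written in a few lines; the three-pool structure and all parameter values below are hypothetical, chosen only to show the residence-time algebra:

    import numpy as np

    # Toy matrix form of a pool-based carbon model, dX/dt = B*u - A*K*X, with
    # three pools (leaf, litter, soil). All parameter values are hypothetical.
    u = 10.0                                            # carbon input, e.g. gC/m2/yr
    B = np.array([1.0, 0.0, 0.0])                       # input allocation to pools
    K = np.diag([1.0, 0.5, 0.02])                       # turnover rates, 1/yr
    A = np.array([[ 1.0,  0.0, 0.0],                    # transfers: leaf -> litter -> soil
                  [-0.8,  1.0, 0.0],
                  [ 0.0, -0.4, 1.0]])

    X = np.zeros(3)
    dt = 0.1
    for _ in range(int(500 / dt)):                      # forward Euler to steady state
        X = X + dt * (B * u - A @ K @ X)

    X_ss = np.linalg.solve(A @ K, B * u)                # analytic steady state (A K)^-1 B u
    print(X, X_ss)                                      # carbon storage = input x residence time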
Defect inspection using a time-domain mode decomposition technique
NASA Astrophysics Data System (ADS)
Zhu, Jinlong; Goddard, Lynford L.
2018-03-01
In this paper, we propose a technique called time-varying frequency scanning (TVFS) to meet the challenges of killer defect inspection. The proposed technique enables the dynamic monitoring of defects by checking for hopping in the instantaneous frequency data, and the classification of defect types by comparing differences in frequency. The TVFS technique utilizes the bidimensional empirical mode decomposition (BEMD) method to separate the defect information from the sea of system errors. This significantly improves the signal-to-noise ratio (SNR) and, moreover, potentially enables reference-free defect inspection.
Pseudoinverse Decoding Process in Delay-Encoded Synthetic Transmit Aperture Imaging.
Gong, Ping; Kolios, Michael C; Xu, Yuan
2016-09-01
Recently, we proposed a new method to improve the signal-to-noise ratio of the prebeamformed radio-frequency data in synthetic transmit aperture (STA) imaging: delay-encoded STA (DE-STA) imaging. In the decoding process of DE-STA, the equivalent STA data were obtained by directly inverting the coding matrix. This is usually regarded as an ill-posed problem, especially under high noise levels, and the pseudoinverse (PI) is usually used instead to seek a more stable inversion process. In this paper, we apply singular value decomposition to the coding matrix to conduct the PI. Our numerical studies demonstrate that the singular values of the coding matrix have a special distribution: all the values are the same except for the first and last ones. We compare the PI in two cases: the complete PI (CPI), where all the singular values are kept, and the truncated PI (TPI), where the last and smallest singular value is ignored. The PI (both CPI and TPI) DE-STA processes are tested against noise with both numerical simulations and experiments. The CPI and TPI can restore the signals stably, and the noise mainly affects the prebeamformed signals corresponding to the first transmit channel. The difference in overall envelope beamformed image quality between the CPI and TPI is negligible, demonstrating that DE-STA is a relatively stable encoding and decoding technique. Also, according to the special distribution of the singular values of the coding matrix, we propose a new efficient decoding formula based on the conjugate transpose of the coding matrix, and we compare the computational complexity of the direct inverse and the new formula.
Adaptive bearing estimation and tracking of multiple targets in a realistic passive sonar scenario
NASA Astrophysics Data System (ADS)
Rajagopal, R.; Challa, Subhash; Faruqi, Farhan A.; Rao, P. R.
1997-06-01
In a realistic passive sonar environment, the received signal consists of multipath arrivals from closely separated moving targets, and the signals are contaminated by spatially correlated noise. Differential MUSIC has been proposed to estimate the DOAs in such a scenario; this method estimates the 'noise subspace' in order to estimate the DOAs. However, the noise subspace estimate has to be updated as and when new data become available. In order to save computational cost, a new adaptive noise subspace estimation algorithm is proposed in this paper. The salient features of the proposed algorithm are: (1) noise subspace estimation is done by QR decomposition of the difference matrix formed from the data covariance matrix, so that, compared to standard eigendecomposition-based methods which require O(N^3) computations, the proposed method requires only O(N^2) computations; (2) the noise subspace is updated by updating the QR decomposition; (3) the proposed algorithm works in a realistic sonar environment. In the second part of the paper, the estimated bearing values are used to track multiple targets. To achieve this, the proposed extended Kalman filter for a nonlinear system with linear measurements is applied. Computer simulation results are also presented to support the theory.
Harmonic analysis of electric locomotive and traction power system based on wavelet singular entropy
NASA Astrophysics Data System (ADS)
Dun, Xiaohong
2018-05-01
With the rapid development of high-speed rail and heavy-haul transport, locomotives and the traction power system have become the main harmonic source in China's power grid. In response, the system's power quality issues need timely monitoring, assessment and governance. Wavelet singular entropy is an organic combination of the wavelet transform, singular value decomposition and information entropy theory, combining the unique advantages of the three in signal processing: the time-frequency local characteristics of the wavelet transform, the exploration of the basic modal characteristics of the data by singular value decomposition, and the quantification of the feature data by information entropy. Based on the theory of singular value decomposition, the wavelet coefficient matrix obtained after the wavelet transform is decomposed into a series of singular values that reflect the basic characteristics of the original coefficient matrix. The statistical properties of information entropy are then used to analyze the uncertainty of the singular value set, so as to give a definite measurement of the complexity of the original signal. Wavelet singular entropy therefore has good application prospects in fault detection, classification and protection. MATLAB simulations show that the use of wavelet singular entropy for harmonic analysis of locomotives and the traction power system is effective.
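One plausible construction of wavelet singular entropy (the exact coefficient-matrix construction here is an assumption, not taken from the paper) uses a stationary wavelet transform to build a level-by-coefficient matrix, then takes the Shannon entropy of the normalized singular values; a harmonic-distorted signal should score higher than a clean fundamental:

    import numpy as np
    import pywt

    # Wavelet singular entropy sketch: SWT detail coefficients form the matrix,
    # SVD extracts its modal structure, entropy quantifies the spread.
    def wavelet_singular_entropy(x, wavelet="db4", level=4):
        coeffs = pywt.swt(x, wavelet, level=level)      # list of (cA, cD) pairs
        W = np.vstack([cD for _, cD in coeffs])         # detail-coefficient matrix
        s = np.linalg.svd(W, compute_uv=False)
        p = s / s.sum()
        p = p[p > 0]
        return -np.sum(p * np.log(p))

    fs, n = 3200, 1024                                  # n divisible by 2**level (SWT requirement)
    t = np.arange(n) / fs
    clean = np.sin(2 * np.pi * 50 * t)                  # 50 Hz fundamental
    distorted = clean + 0.3 * np.sin(2 * np.pi * 250 * t) + 0.2 * np.sin(2 * np.pi * 350 * t)
    print(wavelet_singular_entropy(clean), wavelet_singular_entropy(distorted))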
Han, Te; Jiang, Dongxiang; Zhang, Xiaochen; Sun, Yankui
2017-01-01
Rotating machinery is widely used in industrial applications. With the trend towards more precise and more critical operating conditions, mechanical failures may easily occur. Condition monitoring and fault diagnosis (CMFD) technology is an effective tool to enhance the reliability and security of rotating machinery. In this paper, an intelligent fault diagnosis method based on dictionary learning and singular value decomposition (SVD) is proposed. First, the dictionary learning scheme generates an adaptive dictionary whose atoms reveal the underlying structure of the raw signals; essentially, dictionary learning is employed as an adaptive feature extraction method requiring no prior knowledge. Second, the singular value sequence of the learned dictionary matrix serves as the feature vector. Since this vector is generally of high dimensionality, simple and practical principal component analysis (PCA) is applied to reduce the dimensionality. Finally, the K-nearest neighbor (KNN) algorithm is adopted for automatic identification and classification of fault patterns. Two experimental case studies are investigated to corroborate the effectiveness of the proposed method in intelligent diagnosis of rotating machinery faults. The comparative analysis validates that the dictionary-learning-based matrix construction approach outperforms mode-decomposition-based methods in terms of capacity and adaptability for feature extraction. PMID:28346385
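A compressed sketch of the pipeline on synthetic vibration-like signals follows; the patching scheme, dictionary size, and all hyperparameters are assumptions, not the paper's settings:

    import numpy as np
    from sklearn.decomposition import DictionaryLearning, PCA
    from sklearn.neighbors import KNeighborsClassifier

    # Per-signal dictionary learning on sliding patches, singular values of the
    # learned dictionary as the feature vector, PCA for reduction, KNN to classify.
    def feature(x, n_atoms=6, patch=32):
        patches = np.lib.stride_tricks.sliding_window_view(x, patch)[::patch]
        D = DictionaryLearning(n_components=n_atoms, max_iter=5, random_state=0).fit(patches)
        return np.linalg.svd(D.components_, compute_uv=False)

    rng = np.random.default_rng(0)
    t = np.arange(512)
    signals, labels = [], []
    for k, f0 in enumerate([0.05, 0.12]):               # two synthetic "fault" classes
        for _ in range(10):
            signals.append(np.sin(2 * np.pi * f0 * t) + 0.3 * rng.standard_normal(t.size))
            labels.append(k)

    F = np.array([feature(x) for x in signals])         # singular-value features
    F2 = PCA(n_components=2).fit_transform(F)
    y = np.array(labels)
    clf = KNeighborsClassifier(n_neighbors=3).fit(F2[::2], y[::2])
    print("accuracy:", clf.score(F2[1::2], y[1::2]))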
A leakage-free resonance sparse decomposition technique for bearing fault detection in gearboxes
NASA Astrophysics Data System (ADS)
Osman, Shazali; Wang, Wilson
2018-03-01
Most rotating machinery deficiencies are related to defects in rolling element bearings. Reliable bearing fault detection remains a challenging task, especially for bearings in gearboxes, as bearing-defect-related features are nonstationary and modulated by gear mesh vibration. A new leakage-free resonance sparse decomposition (LRSD) technique is proposed in this paper for early bearing fault detection in gearboxes. In the proposed LRSD technique, a leakage-free filter is suggested to remove strong gear mesh and shaft running signatures, and a kurtosis and cosine distance measure is suggested to select an appropriate redundancy r and quality factor Q. The signal residual is processed by signal sparse decomposition for highpass and lowpass resonance analysis to extract representative features for bearing fault detection. The effectiveness of the proposed technique is verified by a succession of experimental tests corresponding to different gearbox and bearing conditions.
Mohn, Joachim; Gutjahr, Wilhelm; Toyoda, Sakae; Harris, Eliza; Ibraim, Erkan; Geilmann, Heike; Schleppi, Patrick; Kuhn, Thomas; Lehmann, Moritz F; Decock, Charlotte; Werner, Roland A; Yoshida, Naohiro; Brand, Willi A
2016-09-08
In the last few years, the study of the N2O site-specific nitrogen isotope composition has been established as a powerful technique to disentangle N2O emission pathways. This trend has been accelerated by significant analytical progress in the field of isotope-ratio mass spectrometry (IRMS) and, more recently, quantum cascade laser absorption spectroscopy (QCLAS). Methods: The ammonium nitrate (NH4NO3) decomposition technique provides a strategy to scale the 15N site-specific (SP ≡ δ15Nα − δ15Nβ) and bulk (δ15Nbulk = (δ15Nα + δ15Nβ)/2) isotopic composition of N2O against the international standard for the 15N/14N isotope ratio (AIR-N2). Within the current project, 15N fractionation effects during thermal decomposition of NH4NO3 on the N2O site preference were studied using static and dynamic decomposition techniques. The validity of the NH4NO3 decomposition technique to link NH4+ and NO3- moiety-specific δ15N analysis by IRMS to the site-specific nitrogen isotopic composition of N2O was confirmed. However, the accuracy of this approach for the calibration of δ15Nα and δ15Nβ values was found to be limited by non-quantitative NH4NO3 decomposition in combination with substantially different isotope enrichment factors for the conversion of the NO3- or NH4+ nitrogen atom into the α or β position of the N2O molecule. The study reveals that the completeness and reproducibility of the NH4NO3 decomposition reaction currently confine the anchoring of the N2O site-specific isotopic composition to the international isotope ratio scale AIR-N2. The authors suggest establishing a set of N2O isotope reference materials with appropriate site-specific isotopic composition, as community standards, to improve inter-laboratory compatibility. This article is protected by copyright. All rights reserved.
Application of decomposition techniques to the preliminary design of a transport aircraft
NASA Technical Reports Server (NTRS)
Rogan, J. E.; Mcelveen, R. P.; Kolb, M. A.
1986-01-01
A multifaceted decomposition of a nonlinear constrained optimization problem describing the preliminary design process for a transport aircraft has been made. Flight dynamics, flexible aircraft loads and deformations, and preliminary structural design subproblems appear prominently in the decomposition. The use of design process decomposition for scheduling design projects, a new system integration approach to configuration control, and the application of object-centered programming to a new generation of design tools are discussed.
1983-07-01
the decomposition reaction (Leider, 1981; Kageyama, 1973; Wolfrom, 1956), 2) Hydrolysis of linkages between glucose units (Urbanski, 1964), 3... dehydration), 2) Acceleration period (to 50 percent decomposition), 3) First-order reaction rate period. The products of thermal decomposition of...simple mechanism to clean an entire building at once. o Depending on the contaminant, thermal decomposition and/or hydrolysis may occur. o May be
The High-Resolution Wave-Propagation Method Applied to Meso- and Micro-Scale Flows
NASA Technical Reports Server (NTRS)
Ahmad, Nashat N.; Proctor, Fred H.
2012-01-01
The high-resolution wave-propagation method for computing nonhydrostatic atmospheric flows on the meso- and micro-scales is described. The design and implementation of the Riemann solver used for computing the Godunov fluxes is discussed in detail. The method uses a flux-based wave decomposition in which the flux differences are written directly as a linear combination of the right eigenvectors of the hyperbolic system. The two advantages of the technique are: 1) the need for an explicit definition of the Roe matrix is eliminated, and 2) the inclusion of the source term due to gravity does not result in discretization errors. The resulting flow solver is conservative and able to resolve regions of large gradients without introducing dispersion errors. The methodology is validated against exact analytical solutions and benchmark cases for non-hydrostatic atmospheric flows.
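A minimal sketch of flux-based wave decomposition for 1-D linear acoustics (material constants arbitrary, not the atmospheric system above) splits the flux difference across an interface directly into eigenvector components, so no explicit Roe matrix is formed:

    import numpy as np

    # f-wave split for 1-D linear acoustics, q = (pressure p, velocity u),
    # flux f(q) = (K*u, p/rho). The flux difference is written directly as a
    # linear combination of the right eigenvectors of the hyperbolic system.
    rho, K = 1.0, 2.0
    c = np.sqrt(K / rho)                                # sound speed
    R = np.array([[-rho * c, rho * c],                  # right eigenvectors (columns)
                  [1.0,      1.0]])                     # for eigenvalues -c and +c

    def fwave_split(q_l, q_r):
        f = lambda q: np.array([K * q[1], q[0] / rho])
        beta = np.linalg.solve(R, f(q_r) - f(q_l))      # f(q_r) - f(q_l) = sum_p beta_p r_p
        Z = R * beta                                    # columns are the f-waves
        return Z[:, 0], Z[:, 1]                         # left- and right-going fluctuations

    q_l, q_r = np.array([1.0, 0.0]), np.array([0.2, 0.0])
    amdq, apdq = fwave_split(q_l, q_r)
    print(amdq + apdq, "should equal", np.array([0.0, (0.2 - 1.0) / rho]))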
ERIC Educational Resources Information Center
Man, Yiu-Kwong
2012-01-01
Partial fraction decomposition is a useful technique often taught at senior secondary or undergraduate levels to handle integrations, inverse Laplace transforms or linear ordinary differential equations, etc. In recent years, an improved Heaviside's approach to partial fraction decomposition was introduced and developed by the author. An important…
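The improved Heaviside approach itself is not reproduced here; as a reference point, SymPy's apart() produces the kind of decomposition the technique targets, and the classical cover-up evaluation recovers a simple-pole coefficient:

    import sympy as sp

    # Partial fraction decomposition of a rational function, plus the classical
    # cover-up rule for the simple pole at x = 1.
    x = sp.symbols('x')
    expr = (3 * x + 5) / ((x - 1) * (x + 2) ** 2)
    print(sp.apart(expr, x))                            # full decomposition

    # Cover-up: multiply by (x - 1), then evaluate at x = 1.
    A = sp.limit(expr * (x - 1), x, 1)
    print(A)                                            # coefficient of 1/(x - 1) -> 8/9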
Clustering Tree-structured Data on Manifold
Lu, Na; Miao, Hongyu
2016-01-01
Tree-structured data usually contain both topological and geometrical information, and are necessarily considered on a manifold rather than in Euclidean space for appropriate data parameterization and analysis. In this study, we propose a novel tree-structured data parameterization, called the Topology-Attribute matrix (T-A matrix), so that the data clustering task can be conducted on a matrix manifold. We incorporate the structure constraints embedded in the data into the non-negative matrix factorization method to determine meta-trees from the T-A matrix, and the signature vector of each single tree can then be extracted by meta-tree decomposition. The meta-tree space turns out to be a cone space, in which we explore the distance metric and implement the clustering algorithm based on concepts such as the Fréchet mean. Finally, the T-A matrix based clustering (TAMBAC) framework is evaluated and compared using both simulated data and real retinal images to illustrate its efficiency and accuracy. PMID:26660696
Acoustooptic linear algebra processors - Architectures, algorithms, and applications
NASA Technical Reports Server (NTRS)
Casasent, D.
1984-01-01
Architectures, algorithms, and applications for systolic processors are described, with attention to the realization of parallel algorithms on various optical systolic array processors. Systolic processors for matrices with special and general structure, and architectures realizing matrix-vector, matrix-matrix, and triple-matrix products, are described. Parallel algorithms for direct and indirect solutions to systems of linear algebraic equations and their implementation on optical systolic processors are detailed, with attention to the pipelining and flow of data and operations. Parallel algorithms and their optical realization for LU and QR matrix decomposition are specifically detailed. These represent the fundamental operations necessary in the implementation of least squares, eigenvalue, and SVD solutions. Specific applications (e.g., the solution of partial differential equations, adaptive noise cancellation, and optimal control) are described to typify the use of matrix processors in modern advanced signal processing.
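For reference, a conventional numerical sketch of the two factorizations the optical processors realize (scipy calls only; nothing here models the optical hardware):

```python
import numpy as np
from scipy.linalg import lu, qr

# LU and QR are the fundamental decompositions behind least-squares,
# eigenvalue, and SVD solutions; a small numerical example.
A = np.array([[4.0, 3.0], [6.0, 3.0]])

P, L, U = lu(A)                # A = P @ L @ U
Q, R = qr(A)                   # A = Q @ R

b = np.array([10.0, 12.0])
x = np.linalg.solve(A, b)      # a linear solve built on such factorizations
print(np.allclose(P @ L @ U, A), np.allclose(Q @ R, A), x)
```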
Fast and Accurate Simulation Technique for Large Irregular Arrays
NASA Astrophysics Data System (ADS)
Bui-Van, Ha; Abraham, Jens; Arts, Michel; Gueuning, Quentin; Raucy, Christopher; Gonzalez-Ovejero, David; de Lera Acedo, Eloy; Craeye, Christophe
2018-04-01
A fast full-wave simulation technique is presented for the analysis of large irregular planar arrays of identical 3-D metallic antennas. The solution method relies on the Macro Basis Functions (MBF) approach and an interpolatory technique to compute the interactions between MBFs. The Harmonic-polynomial (HARP) model is established for the near-field interactions in a modified system of coordinates. For extremely large arrays made of complex antennas, two approaches assuming a limited radius of influence for mutual coupling are considered: one is based on a sparse-matrix LU decomposition and the other one on a tessellation of the array in the form of overlapping sub-arrays. The computation of all embedded element patterns is sped up with the help of the non-uniform FFT algorithm. Extensive validations are shown for arrays of log-periodic antennas envisaged for the low-frequency SKA (Square Kilometer Array) radio-telescope. The analysis of SKA stations with such a large number of elements has not been treated yet in the literature. Validations include comparison with results obtained with commercial software and with experiments. The proposed method is particularly well suited to array synthesis, in which several orders of magnitude can be saved in terms of computation time.
1,2-diketones promoted degradation of poly(epsilon-caprolactone)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Danko, Martin; Borska, Katarina; Ragab, Sherif Shaban
2012-07-11
Photochemical reactions of benzil and camphorquinone were used for modification of poly(ε-caprolactone) polymer films. The photochemistry of the dopants was followed by infrared spectroscopy; changes in the polymer chains of the matrix were followed by gel permeation chromatography. Benzoyl peroxide was efficiently photochemically generated from benzil in the solid polymer matrix in the presence of air. Subsequent decomposition of the benzoyl peroxide led to degradation of the matrix. Photochemical transformation of benzil in vacuum led to hydrogen abstraction from the polymer chains to a greater extent, which resulted in chain recombination and formation of gel. Photochemical transformation of camphorquinone to the corresponding camphoric peroxide was not observed. Only a decrease of the molecular weight of the polymer matrix doped with camphorquinone was observed during the irradiation.
Zhou, Rong; Basile, Franco
2017-09-05
A method based on plasmon surface resonance absorption and heating was developed to perform a rapid on-surface protein thermal decomposition and digestion suitable for imaging mass spectrometry (MS) and/or profiling. This photothermal process or plasmonic thermal decomposition/digestion (plasmonic-TDD) method incorporates a continuous wave (CW) laser excitation and gold nanoparticles (Au-NPs) to induce known thermal decomposition reactions that cleave peptides and proteins specifically at the C-terminus of aspartic acid and at the N-terminus of cysteine. These thermal decomposition reactions are induced by heating a solid protein sample to temperatures between 200 and 270 °C for a short period of time (10-50 s per 200 μm segment) and are reagentless and solventless, and thus are devoid of sample product delocalization. In the plasmonic-TDD setup the sample is coated with Au-NPs and irradiated with 532 nm laser radiation to induce thermoplasmonic heating and bring about site-specific thermal decomposition on solid peptide/protein samples. In this manner the Au-NPs act as nanoheaters that result in a highly localized thermal decomposition and digestion of the protein sample that is independent of the absorption properties of the protein, making the method universally applicable to all types of proteinaceous samples (e.g., tissues or protein arrays). Several experimental variables were optimized to maximize product yield, and they include heating time, laser intensity, size of Au-NPs, and surface coverage of Au-NPs. Using optimized parameters, proof-of-principle experiments confirmed the ability of the plasmonic-TDD method to induce both C-cleavage and D-cleavage on several peptide standards and the protein lysozyme by detecting their thermal decomposition products with matrix-assisted laser desorption/ionization mass spectrometry (MALDI-MS). The high spatial specificity of the plasmonic-TDD method was demonstrated by using a mask to digest designated sections of the sample surface with the heating laser and MALDI-MS imaging to map the resulting products. The solventless nature of the plasmonic-TDD method enabled the nonenzymatic on-surface digestion of proteins to proceed with undetectable delocalization of the resulting products from their precursor protein location. The advantages of this novel plasmonic-TDD method include short reaction times (<30 s/200 μm), compatibility with MALDI, universal sample compatibility, high spatial specificity, and localization of the digestion products. These advantages point to potential applications of this method for on-tissue protein digestion and MS-imaging/profiling for the identification of proteins, high-fidelity MS imaging of high molecular weight (>30 kDa) proteins, and the rapid analysis of formalin-fixed paraffin-embedded (FFPE) tissue samples.
Optical diagnosis of dengue virus infected human blood using Mueller matrix polarimetry
NASA Astrophysics Data System (ADS)
Anwar, Shahzad; Firdous, Shamaraz
2016-08-01
Current dengue fever diagnosis methods include capture ELISAs, immunofluorescence tests, and hemagglutination assays. In this study, optical diagnosis of dengue virus infection in whole blood using Mueller matrix polarimetry is presented. Mueller matrices of about 50 dengue-infected and 25 non-dengue healthy blood samples were recorded using a light source from 500 to 700 nm with a scanning step of 10 nm. Polar decomposition of the Mueller matrices for all the blood samples was performed, yielding polarization properties including depolarization, diattenuation, degree of polarization, retardance, and optical activity, out of which the depolarization index clusters the diseased and healthy samples into separate groups. The average depolarized light for dengue infection in whole blood at 500 nm is 18%, whereas for the healthy blood samples it is 13.5%. Comparing the depolarization index at wavelengths of 500, 510, 520, 530, and 540 nm, we find higher values for dengue-infected samples than for normal ones. This technique can effectively be used for the characterization of dengue virus infection at an early stage of the disease.
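As a small illustration, the sketch below computes the Gil-Bernabeu depolarization index directly from a 4x4 Mueller matrix; the matrices are synthetic, and this closed-form index is one common definition that may differ in detail from the decomposition-derived quantity used in the study:

```python
import numpy as np

# Gil-Bernabeu depolarization index: 1 for a non-depolarizing matrix,
# 0 for an ideal depolarizer. M below is synthetic, not measured data.
def depolarization_index(M):
    M = np.asarray(M, dtype=float)
    return np.sqrt((np.sum(M**2) - M[0, 0]**2) / (3.0 * M[0, 0]**2))

M_ideal = np.eye(4)                       # non-depolarizing: index = 1
M_depol = np.diag([1.0, 0.4, 0.4, 0.3])   # partially depolarizing sample
print(depolarization_index(M_ideal), depolarization_index(M_depol))
```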
Low temperature catalytic oxidative aging of LDPE films in response to heat excitation.
Luo, Xuegang; Zhang, Sizhao; Ding, Feng; Lin, Xiaoyan
2015-09-14
The waste treatment of polymer materials is often conducted using the photocatalytic technique; however, complete decomposition is frequently inhibited owing to several shortcomings such as low quantum yield and the requirement of ultraviolet irradiation. Herein, we report a strategy to implement moderate management of polymeric films via thermocatalytic oxidative route, which is responsive to heat stimulus. Diverse LDPE-matrix films together with as-prepared thermal catalysts (TCs) or initiators were synthesized to further investigate heat-dependent-catalytic degradation effects. After artificial ageing, structural textures of the as-synthesized films could be chemically deteriorated, followed by a huge increase in surface roughness values, and appreciable loss was also found in the average molecular weights and mechanical parameters. We found an emergent phenomenon in which crystallization closely resembled two-dimensional (2D) growth, which displayed rod-like or disc-type crystal shapes. New chemical groups generated on film surfaces were monitored, and led to a higher limiting oxygen index because of strong catalytic oxidation, thus demonstrating the success of catalytic oxidative ageing by heat actuation. The underlying mechanism responsible for thermocatalytic oxidative pattern is also discussed. Accordingly, these findings may have important implications for better understanding the development of polymeric-matrix waste disposal.
Zhang, Zhilin; Jung, Tzyy-Ping; Makeig, Scott; Rao, Bhaskar D
2013-02-01
Fetal ECG (FECG) telemonitoring is an important branch of telemedicine. The design of a telemonitoring system via a wireless body area network with low energy consumption for ambulatory use is highly desirable. As an emerging technique, compressed sensing (CS) shows great promise in compressing/reconstructing data with low energy consumption. However, due to some specific characteristics of raw FECG recordings, such as nonsparsity and strong noise contamination, current CS algorithms generally fail in this application. This paper proposes to use the block sparse Bayesian learning framework to compress/reconstruct nonsparse raw FECG recordings. Experimental results show that the framework can reconstruct the raw recordings with high quality. Notably, the reconstruction does not destroy the interdependence relations among the multichannel recordings. This ensures that the independent component analysis decomposition of the reconstructed recordings has high fidelity. Furthermore, the framework allows the use of a sparse binary sensing matrix with much fewer nonzero entries to compress recordings. In particular, each column of the matrix can contain only two nonzero entries. This shows that the framework, compared to other algorithms such as current CS algorithms and wavelet algorithms, can greatly reduce CPU execution in the data compression stage.
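A minimal sketch of the sensing-matrix construction described above (two nonzero entries per column); the dimensions are arbitrary and the Bayesian reconstruction step is outside this sketch:

```python
import numpy as np

# Sparse binary sensing matrix: exactly two nonzero entries per column,
# so compression y = Phi @ x needs only additions.
rng = np.random.default_rng(1)

def sparse_binary_matrix(m, n, nnz_per_col=2):
    Phi = np.zeros((m, n))
    for j in range(n):
        rows = rng.choice(m, size=nnz_per_col, replace=False)
        Phi[rows, j] = 1.0
    return Phi

Phi = sparse_binary_matrix(64, 256)       # compress 256 samples to 64
x = rng.standard_normal(256)              # stand-in for one FECG channel
y = Phi @ x                               # cheap compression stage
print(Phi.sum(axis=0)[:8], y.shape)
```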
NASA Astrophysics Data System (ADS)
Komissarova, T. A.; Wang, P.; Paturi, P.; Wang, X.; Ivanov, S. V.
2017-11-01
The influence of molecular beam epitaxy (MBE) growth conditions on the electrical properties of InN epilayers was studied, with the aim of minimizing the effect of spontaneously formed In nanoparticles. A three-step growth sequence was used, including direct MBE growth of an InN nucleation layer, migration enhanced epitaxy (MEE) of an InN buffer layer, and In-rich MBE growth of the main InN layer, utilizing the droplet elimination by radical-beam irradiation (DERI) technique. The three-step growth regime was found to decrease the relative amount of In nanoparticles to 4.8% and 3.8% under In-rich and near-stoichiometric conditions, respectively, whereas the transport properties are better for In-rich growth. Further reduction of the metallic indium inclusions in the InN films, while simultaneously keeping satisfactory transport parameters, is hardly possible due to the fundamental processes of InN thermal decomposition and formation of nitrogen vacancy conglomerates in the InN matrix. The In inclusions are shown to dominate the electrical conductivity of the InN films even at their minimum amount.
Diffusion MRI noise mapping using random matrix theory
Veraart, Jelle; Fieremans, Els; Novikov, Dmitry S.
2016-01-01
Purpose: To estimate the spatially varying noise map using a redundant magnitude MR series. Methods: We exploit redundancy in non-Gaussian multi-directional diffusion MRI data by identifying its noise-only principal components, based on the theory of noisy covariance matrices. The bulk of PCA eigenvalues, arising due to noise, is described by the universal Marchenko-Pastur distribution, parameterized by the noise level. This allows us to estimate the noise level in a local neighborhood based on the singular value decomposition of a matrix combining neighborhood voxels and diffusion directions. Results: We present a model-independent local noise mapping method capable of estimating the noise level down to about 1% error. In contrast to current state-of-the-art techniques, the resultant noise maps do not show artifactual anatomical features that often reflect physiological noise, the presence of sharp edges, or a lack of adequate a priori knowledge of the expected form of the MR signal. Conclusions: Simulations and experiments show that typical diffusion MRI data exhibit sufficient redundancy to enable accurate, precise, and robust estimation of the local noise level by interpreting the PCA eigenspectrum in terms of the Marchenko-Pastur distribution. PMID:26599599
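The following stripped-down sketch conveys the flavor of this estimator: scan candidate signal ranks and accept the first eigenvalue tail whose spread is consistent with the Marchenko-Pastur support width. It is a simplified heuristic on synthetic data, not the published algorithm:

```python
import numpy as np

# Simplified MP-PCA-style noise estimate: for each candidate signal rank p,
# treat the remaining eigenvalues as noise and check their spread against
# the Marchenko-Pastur support width 4 * sigma^2 * sqrt(gamma).
def mp_noise_level(X):
    m, n = X.shape                         # m voxels in a neighborhood, n directions
    vals = np.sort(np.linalg.svd(X, compute_uv=False)**2 / m)[::-1]
    for p in range(n):
        tail = vals[p:]
        sigma2 = tail.mean()               # MP distribution has mean sigma^2
        gamma = (n - p) / m
        if tail[0] - tail[-1] < 4.0 * sigma2 * np.sqrt(gamma):
            return np.sqrt(sigma2)
    return np.sqrt(vals[-1])

rng = np.random.default_rng(2)
signal = np.outer(rng.random(200), rng.random(50))    # rank-1 "anatomy"
X = signal + 0.05 * rng.standard_normal((200, 50))
print(mp_noise_level(X))                   # expected to be close to 0.05
```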
Thermal behaviour properties and corrosion resistance of organoclay/polyurethane film
NASA Astrophysics Data System (ADS)
Kurniawan, O.; Soegijono, B.
2018-03-01
An organoclay/polyurethane film composite was prepared by adding organoclay at different contents (1, 3, and 5 wt.%) to a polyurethane matrix. TGA and DSC showed that the decomposition temperature shifted lower as the organoclay content changed. FT-IR spectra showed chemical bonding between the organoclay and the polyurethane matrix, meaning that bonding between filler and matrix occurred and the composite was stronger, although less bonding occurred in the composite with 5 wt.% organoclay. The corrosion resistance generally increased with increasing organoclay content. The composite with 5 wt.% organoclay showed greater thermal stability and corrosion resistance, probably due to exfoliation of the organoclay.
NASA Astrophysics Data System (ADS)
Akhbardeh, Alireza; Junnila, Sakari; Koivuluoma, Mikko; Koivistoinen, Teemu; Värri, Alpo
2006-12-01
As is known, singular value decomposition (SVD) is designed for computing the singular values (SVs) of a matrix. If it is used to find the SVs of an n-by-1 or 1-by-n array whose elements represent samples of a signal, it returns only one singular value, which is not enough to express the whole signal. To overcome this problem, we designed a new feature extraction method which we call "time-frequency moments singular value decomposition (TFM-SVD)." In this new method, we use statistical features of the time series as well as of the frequency series (the Fourier transform of the signal). This information is extracted into a matrix with a fixed structure, and the SVs of that matrix are sought. This transform can be used as a preprocessing stage in pattern clustering methods. The results of using it indicate that the performance of a combined system including this transform and classifiers is comparable with that of other feature extraction methods such as wavelet transforms. To evaluate TFM-SVD, we applied this new method and artificial neural networks (ANNs) to ballistocardiogram (BCG) data clustering to look for probable heart disease in six test subjects. BCG from the test subjects was recorded using a chair-like ballistocardiograph developed in our project. This kind of device, combined with automated recording and analysis, would be suitable for use in many places, such as homes and offices. The results show that the method has high performance and is almost insensitive to BCG waveform latency or nonlinear disturbance.
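A loose sketch of the TFM-SVD idea follows: pack a few statistical moments of the signal and of its magnitude spectrum into a small fixed-structure matrix and take its singular values as features. The exact matrix layout used in the paper may differ; this 2x4 arrangement is an assumption for illustration.

```python
import numpy as np

# Moments of the time series and of its spectrum form a fixed 2x4 matrix;
# its singular values serve as a compact feature vector.
def tfm_svd_features(x):
    def moments(v):
        c = v - v.mean()
        return [v.mean(), v.std(), (c**3).mean(), (c**4).mean()]
    spectrum = np.abs(np.fft.rfft(x))
    M = np.array([moments(x), moments(spectrum)])   # fixed-structure matrix
    return np.linalg.svd(M, compute_uv=False)

rng = np.random.default_rng(3)
t = np.arange(0.0, 10.0, 0.01)
sig = np.sin(2 * np.pi * 1.2 * t) + 0.1 * rng.standard_normal(t.size)
print(tfm_svd_features(sig))               # two singular values as features
```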
Three geographic decomposition approaches in transportation network analysis
DOT National Transportation Integrated Search
1980-03-01
This document describes the results of research into the application of geographic decomposition techniques to practical transportation network problems. Three approaches are described for the solution of the traffic assignment problem. One approach ...
Application of Decomposition to Transportation Network Analysis
DOT National Transportation Integrated Search
1976-10-01
This document reports preliminary results of five potential applications of the decomposition techniques from mathematical programming to transportation network problems. The five application areas are (1) the traffic assignment problem with fixed de...
Transportation Network Analysis and Decomposition Methods
DOT National Transportation Integrated Search
1978-03-01
The report outlines research in transportation network analysis using decomposition techniques as a basis for problem solutions. Two transportation network problems were considered in detail: a freight network flow problem and a scheduling problem fo...
Interface conditions for domain decomposition with radical grid refinement
NASA Technical Reports Server (NTRS)
Scroggs, Jeffrey S.
1991-01-01
Interface conditions for coupling the domains in a physically motivated domain decomposition method are discussed. The domain decomposition is based on an asymptotic-induced method for the numerical solution of hyperbolic conservation laws with small viscosity. The method consists of multiple stages. The first stage is to obtain a first approximation using a first-order method, such as the Godunov scheme. Subsequent stages of the method involve solving internal-layer problems via domain decomposition. The method is derived and justified via singular perturbation techniques.
Application of decomposition techniques to the preliminary design of a transport aircraft
NASA Technical Reports Server (NTRS)
Rogan, J. E.; Kolb, M. A.
1987-01-01
A nonlinear constrained optimization problem describing the preliminary design process for a transport aircraft has been formulated. A multifaceted decomposition of the optimization problem has been made. Flight dynamics, flexible aircraft loads and deformations, and preliminary structural design subproblems appear prominently in the decomposition. The use of design process decomposition for scheduling design projects, a new system integration approach to configuration control, and the application of object-centered programming to a new generation of design tools are discussed.
On the decomposition of synchronous state machines using sequence invariant state machines
NASA Technical Reports Server (NTRS)
Hebbalalu, K.; Whitaker, S.; Cameron, K.
1992-01-01
This paper presents a few techniques for the decomposition of synchronous state machines of medium to large size into smaller component machines. The methods are based on the nature of the transitions and sequences of states in the machine and on the number and variety of inputs to the machine. Decomposing in this way, and using the Sequence Invariant State Machine (SISM) design technique to generate the component machines, greatly simplifies and speeds up the design and implementation processes. Furthermore, there is increased flexibility in making modifications to the original design, leading to negligible re-design time.
Mueller matrix imaging and analysis of cancerous cells
NASA Astrophysics Data System (ADS)
Fernández, A.; Fernández-Luna, J. L.; Moreno, F.; Saiz, J. M.
2017-08-01
Imaging polarimetry is a focus of increasing interest in diagnostic medicine because of its non-invasive nature and its potential for recognizing abnormal tissues. However, handling polarimetric images is not an easy task, and different intermediate steps have been proposed to introduce physical parameters that may be helpful to interpret results. In this work, transmission Mueller matrices (MM) corresponding to cancer cell samples have been experimentally obtained, and three different transformations have been applied: MM-Polar Decomposition, MM-Transformation and MM-Differential Decomposition. Special attention has been paid to diattenuation as a sensitive parameter to identify apoptosis processes induced by cisplatin and etoposide.
Application of modified Martinez-Silva algorithm in determination of net cover
NASA Astrophysics Data System (ADS)
Stefanowicz, Łukasz; Grobelna, Iwona
2016-12-01
In this article we present modifications of the Martinez-Silva algorithm that allow for determination of the place invariants (p-invariants) of a Petri net. Their generation time is important in the parallel decomposition of discrete systems described by Petri nets. The decomposition process is essential from the point of view of discrete system design, as it allows for separation of smaller sequential parts. The proposed modifications of the Martinez-Silva method concern the net cover by p-invariants and are focused on two important issues: cyclic reduction of the invariant matrix and cyclic checking of the net cover.
Muravyev, Nikita V; Monogarov, Konstantin A; Asachenko, Andrey F; Nechaev, Mikhail S; Ananyev, Ivan V; Fomenkov, Igor V; Kiselev, Vitaly G; Pivkina, Alla N
2016-12-21
Thermal decomposition of a novel promising high-performance explosive dihydroxylammonium 5,5'-bistetrazole-1,1'-diolate (TKX-50) was studied using a number of thermal analysis techniques (thermogravimetry, differential scanning calorimetry, and accelerating rate calorimetry, ARC). To obtain more comprehensive insight into the kinetics and mechanism of TKX-50 decomposition, a variety of complementary thermoanalytical experiments were performed under various conditions. Non-isothermal and isothermal kinetics were obtained at both atmospheric and low (up to 0.3 Torr) pressures. The gas products of thermolysis were detected in situ using IR spectroscopy, and the structure of solid-state decomposition products was determined by X-ray diffraction and scanning electron microscopy. Diammonium 5,5'-bistetrazole-1,1'-diolate (ABTOX) was directly identified to be the most important intermediate of the decomposition process. The important role of bistetrazole diol (BTO) in the mechanism of TKX-50 decomposition was also rationalized by thermolysis experiments with mixtures of TKX-50 and BTO. Several widely used thermoanalytical data processing techniques (Kissinger, isoconversional, formal kinetic approaches, etc.) were independently benchmarked against the ARC data, which are more germane to the real storage and application conditions of energetic materials. Our study revealed that none of the Arrhenius parameters reported before can properly describe the complex two-stage decomposition process of TKX-50. In contrast, we showed the superior performance of the isoconversional methods combined with isothermal measurements, which yielded the most reliable kinetic parameters of TKX-50 thermolysis. In contrast with the existing reports, the thermal stability of TKX-50 was determined in the ARC experiments to be lower than that of hexogen, but close to that of hexanitrohexaazaisowurtzitane (CL-20).
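As a side note on one of the data-processing techniques benchmarked above, the sketch below fits the Kissinger relation to synthetic peak-temperature data; the numerical values are placeholders for illustration, not TKX-50 measurements:

```python
import numpy as np

# Kissinger method: activation energy from DSC peak temperatures Tp at
# several heating rates beta, via ln(beta / Tp^2) = const - Ea / (R * Tp).
R = 8.314                                   # gas constant, J/(mol K)
beta = np.array([2.0, 5.0, 10.0, 20.0])     # heating rates, K/min
Tp = np.array([490.0, 500.0, 508.0, 517.0]) # synthetic peak temperatures, K

slope, intercept = np.polyfit(1.0 / Tp, np.log(beta / Tp**2), 1)
Ea = -slope * R                             # slope = -Ea / R
print(f"Ea ~ {Ea / 1e3:.0f} kJ/mol")
```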
ERIC Educational Resources Information Center
Schizas, Dimitrios; Katrana, Evagelia; Stamou, George
2013-01-01
In the present study we used the technique of word association tests to assess students' cognitive structures during the learning period. In particular, we tried to investigate what students living near a protected area in Greece (Dadia forest) knew about the phenomenon of decomposition. Decomposition was chosen as a stimulus word because it…
Multidimensional Compressed Sensing MRI Using Tensor Decomposition-Based Sparsifying Transform
Yu, Yeyang; Jin, Jin; Liu, Feng; Crozier, Stuart
2014-01-01
Compressed Sensing (CS) has been applied in dynamic Magnetic Resonance Imaging (MRI) to accelerate the data acquisition without noticeably degrading the spatial-temporal resolution. A suitable sparsity basis is one of the key components to successful CS applications. Conventionally, a multidimensional dataset in dynamic MRI is treated as a series of two-dimensional matrices, and then various matrix/vector transforms are used to explore the image sparsity. Traditional methods typically sparsify the spatial and temporal information independently. In this work, we propose a novel concept of tensor sparsity for the application of CS in dynamic MRI, and present the Higher-order Singular Value Decomposition (HOSVD) as a practical example. Applications presented in the three- and four-dimensional MRI data demonstrate that HOSVD simultaneously exploited the correlations within spatial and temporal dimensions. Validations based on cardiac datasets indicate that the proposed method achieved comparable reconstruction accuracy with the low-rank matrix recovery methods and, outperformed the conventional sparse recovery methods. PMID:24901331
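A compact sketch of an HOSVD on a small synthetic 3-D array follows; truncating the factor matrices would give the sparsifying-transform role described above. The array shape and helper names are assumptions for illustration.

```python
import numpy as np

# HOSVD: factor matrices from the SVD of each mode unfolding, plus the
# core tensor; the full (untruncated) decomposition is exact.
def unfold(T, mode):
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_mult(T, M, mode):
    """Mode-k product: multiply the mode-k fibers of T by matrix M."""
    return np.moveaxis(np.tensordot(M, np.moveaxis(T, mode, 0), axes=1), 0, mode)

def hosvd(T):
    U = [np.linalg.svd(unfold(T, k), full_matrices=False)[0] for k in range(T.ndim)]
    core = T
    for k, Uk in enumerate(U):
        core = mode_mult(core, Uk.T, k)
    return core, U

T = np.random.default_rng(4).random((8, 8, 5))      # e.g. x, y, time
core, U = hosvd(T)

R = core                                   # reconstruct to verify exactness
for k, Uk in enumerate(U):
    R = mode_mult(R, Uk, k)
print(np.allclose(R, T))                   # True
```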
Supercritical CO2/Co-solvents Extraction of Porogen and Surfactant to Obtain
NASA Astrophysics Data System (ADS)
Lubguban, Jorge
2005-03-01
A method of pore generation by supercritical CO2 (SCCO2)/co-solvents extraction for the preparation of nanoporous organosilicate thin films for ultralow dielectric constant materials is investigated. A nanohybrid film was prepared from poly(propylene glycol) (PPG) and poly(methylsilsesquioxane) (PMSSQ), whereby the PPG porogen is entrapped within the crosslinked PMSSQ matrix. Another set of thin films was produced by liquid crystal templating, whereby a non-ionic surfactant (polyoxyethylene 10 stearyl ether, Brij76) and an ionic surfactant (cetyltrimethylammonium bromide, CTAB) were used as sacrificial templates in a tetraethoxysilane (TEOS) and methyltrimethoxysilane (MTMS) based matrix. These two types of films were treated with SCCO2/co-solvents to remove the porogen and surfactant templates. For comparison, porous structures generated by thermal decomposition were also evaluated. It is found that SCCO2/co-solvents treatment produces results closely comparable with thermal decomposition. The results were evident from Fourier transform infrared (FT-IR) spectroscopy and optical constants obtained from variable angle spectroscopic ellipsometry (VASE).
Hong, Xia
2006-07-01
In this letter, a Box-Cox transformation-based radial basis function (RBF) neural network is introduced, using the RBF neural network to represent the transformed system output. Initially, a fixed and moderate-sized RBF model base is derived based on a rank-revealing orthogonal matrix triangularization (QR decomposition). Then a new fast identification algorithm is introduced, using the Gauss-Newton algorithm to derive the required Box-Cox transformation based on a maximum likelihood estimator. The main contribution of this letter is to exploit the special structure of the proposed RBF neural network for computational efficiency by utilizing the block matrix inversion lemma. Finally, the Box-Cox transformation-based RBF neural network, with good generalization and sparsity, is identified based on the derived optimal Box-Cox transformation and a D-optimality-based orthogonal forward regression algorithm. The proposed algorithm and its efficacy are demonstrated with an illustrative example, in comparison with support vector machine regression.
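A sketch of just the Box-Cox step, using scipy's maximum-likelihood estimate of the transform parameter on synthetic skewed data (the RBF network and Gauss-Newton machinery above are not reproduced):

```python
import numpy as np
from scipy import stats

# Box-Cox transform of a positive-valued output with a maximum-likelihood
# estimate of lambda, as scipy implements it.
rng = np.random.default_rng(5)
y = np.exp(rng.standard_normal(500) * 0.5 + 2.0)   # skewed positive "system output"

y_bc, lam = stats.boxcox(y)               # transformed data and ML lambda
print(f"lambda = {lam:.3f}")              # near 0: log-like transform for lognormal data
```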
Kong, Xiang-Zhen; Liu, Jin-Xing; Zheng, Chun-Hou; Hou, Mi-Xiao; Wang, Juan
2017-07-01
High dimensionality has become a typical feature of biomolecular data. In this paper, a novel dimension reduction method named p-norm singular value decomposition (PSVD) is proposed to seek the low-rank approximation matrix of biomolecular data. To enhance robustness to outliers, the Lp-norm is taken as the error function and the Schatten p-norm is used as the regularization function in the optimization model. To evaluate the performance of PSVD, the K-means clustering method is then employed for tumor clustering based on the low-rank approximation matrix. Extensive experiments are carried out on five gene expression data sets, including two benchmark data sets and three higher dimensional data sets from The Cancer Genome Atlas. The experimental results demonstrate that the PSVD-based method outperforms many existing methods. In particular, it is experimentally shown that the proposed method is more efficient for processing higher dimensional data, with good robustness, stability, and superior time performance.
NASA Astrophysics Data System (ADS)
Vyletel, G. M.; van Aken, D. C.; Allison, J. E.
1995-12-01
The 150 °C cyclic response of peak-aged and overaged 2219/TiC/15p and 2219 Al was examined using fully reversed plastic strain-controlled testing. The cyclic response of the peak-aged and overaged particle-reinforced materials showed extensive cyclic softening. This softening began at the commencement of cycling and continued until failure. At a plastic strain below 5 × 10⁻³, the unreinforced materials did not show evidence of cyclic softening until approximately 30 pct of the life was consumed. In addition, the degree of cyclic softening (Δσ) was significantly lower in the unreinforced microstructures. The cyclic softening in both reinforced and unreinforced materials was attributed to the decomposition of the θ' strengthening precipitates. The extent of the precipitate decomposition was much greater in the composite materials due to the increased levels of local plastic strain in the matrix caused by constrained deformation near the TiC particles.
Application of higher order SVD to vibration-based system identification and damage detection
NASA Astrophysics Data System (ADS)
Chao, Shu-Hsien; Loh, Chin-Hsiung; Weng, Jian-Huang
2012-04-01
Singular value decomposition (SVD) is a powerful linear algebra tool. It is widely used in many different signal processing methods, such as principal component analysis (PCA), singular spectrum analysis (SSA), frequency domain decomposition (FDD), and subspace and stochastic subspace identification (SI and SSI). In each case, the data are arranged appropriately in matrix form and SVD is used to extract the features of the data set. In this study three different algorithms for signal processing and system identification are considered: SSA, SSI-COV, and SSI-DATA. Based on the subspace and null-space extracted from the SVD of the data matrix, damage detection algorithms can be developed. The proposed algorithm is used to process shaking table test data from a 6-story steel frame. Features contained in the vibration data are extracted by the proposed method. Damage can then be investigated from the test data of the frame structure through subspace-based and null-space-based damage indices.
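A minimal sketch of the shared SVD step, shown here for SSA on a synthetic signal (the window length and signal are arbitrary choices; SSI builds block-Hankel matrices similarly):

```python
import numpy as np

# SSA-style embedding: stack lagged copies of the signal into a Hankel
# trajectory matrix and inspect the singular value spectrum.
def trajectory_matrix(x, window):
    n = len(x) - window + 1
    return np.column_stack([x[i:i + window] for i in range(n)])

rng = np.random.default_rng(6)
t = np.arange(0.0, 20.0, 0.02)
x = np.sin(2 * np.pi * 0.8 * t) + 0.2 * rng.standard_normal(t.size)

H = trajectory_matrix(x, window=100)
s = np.linalg.svd(H, compute_uv=False)
print(s[:4] / s.sum())                    # a sinusoid shows up as a dominant pair
```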
Three dimensional empirical mode decomposition analysis apparatus, method and article of manufacture
NASA Technical Reports Server (NTRS)
Gloersen, Per (Inventor)
2004-01-01
An apparatus and method of analysis for three-dimensional (3D) physical phenomena. The physical phenomena may include any varying 3D phenomena, such as time-varying polar ice flows. A representation of the 3D phenomena is passed through a Hilbert transform to convert the data into complex form. A spatial variable is separated from the complex representation by producing a time-based covariance matrix. The temporal parts of the principal components are produced by applying singular value decomposition (SVD). Based on the rapidity with which the eigenvalues decay, the first 3-10 complex principal components (CPC) are selected for empirical mode decomposition into intrinsic modes. The intrinsic modes produced are filtered in order to reconstruct the spatial part of the CPC. Finally, a filtered time series may be reconstructed from the first 3-10 filtered complex principal components.
Mueller matrix imaging study to detect the dental demineralization
NASA Astrophysics Data System (ADS)
Chen, Qingguang; Shen, Huanbo; Wang, Binqiang
2018-01-01
The Mueller matrix is an optical quantity that can non-invasively reveal structural information about anisotropic materials. Dental tissue has ordered structure, including enamel prisms and dentinal tubules, and this ordered structure of the tooth surface is destroyed by demineralization; the structural information therefore has the potential to reflect dental demineralization. In this paper, an experimental setup was built to obtain Mueller matrix images based on the dual-wave-plate rotation method. Two linear polarizers and two quarter-wave plates were rotated by electrically controlled rotation stages to capture 16 images at different combinations of polarization states, from which the Mueller matrix image can be calculated. On this basis, the depolarization index, diattenuation index, and retardance index of the Mueller matrix were analyzed by the Lu-Chipman polarization decomposition method. Mueller matrix images of artificially demineralized enamel at different stages were analyzed, and the results show the possibility of detecting dental demineralization using Mueller matrix imaging.
Efficient morse decompositions of vector fields.
Chen, Guoning; Mischaikow, Konstantin; Laramee, Robert S; Zhang, Eugene
2008-01-01
Existing topology-based vector field analysis techniques rely on the ability to extract individual trajectories such as fixed points, periodic orbits, and separatrices, which are sensitive to noise and to errors introduced by simulation and interpolation. This can make such vector field analysis unsuitable for rigorous interpretation. We advocate the use of Morse decompositions, which are robust with respect to perturbations, to encode the topological structure of a vector field in the form of a directed graph, called a Morse connection graph (MCG). While an MCG exists for every vector field, it need not be unique. Previous techniques for computing MCGs, while fast, are overly conservative and usually result in MCGs that are too coarse to be useful in applications. To address this issue, we present a new technique for performing Morse decomposition based on the concept of tau-maps, which typically provides finer MCGs than existing techniques. Furthermore, the choice of tau provides a natural tradeoff between the fineness of the MCGs and the computational cost. We provide efficient implementations of Morse decomposition based on tau-maps, which include the use of forward and backward mapping techniques and an adaptive approach to constructing better approximations of the images of the triangles in the meshes used for simulation. Furthermore, we propose the use of spatial tau-maps in addition to the original temporal tau-maps. These techniques provide additional trade-offs between the quality of the MCGs and the speed of computation. We demonstrate the utility of our technique with various examples in the plane and on surfaces, including engine simulation data sets.
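To make the graph step concrete, here is a toy sketch (assuming networkx, with hand-made edges standing in for the tau-map images between mesh cells): recurrent strongly connected components play the role of Morse sets, and the condensation of the graph gives an MCG-like directed acyclic graph.

```python
import networkx as nx

# Nodes are mesh cells; a directed edge u -> v means the (tau-)map of the
# flow carries part of cell u into cell v. Edges here are illustrative.
G = nx.DiGraph()
G.add_edges_from([
    (0, 1), (1, 0),          # a recurrent pair (e.g. a periodic orbit)
    (1, 2), (2, 3), (3, 4),
    (4, 4),                  # a fixed-point-like self-recurrent cell
])

morse_sets = [c for c in nx.strongly_connected_components(G)
              if len(c) > 1 or any(G.has_edge(v, v) for v in c)]
mcg = nx.condensation(G)     # DAG over the strongly connected components
print(morse_sets, list(mcg.edges))
```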
Spatial, temporal, and hybrid decompositions for large-scale vehicle routing with time windows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bent, Russell W
This paper studies the use of decomposition techniques to quickly find high-quality solutions to large-scale vehicle routing problems with time windows. It considers an adaptive decomposition scheme which iteratively decouples a routing problem based on the current solution. Earlier work considered vehicle-based decompositions that partition the vehicles across the subproblems. The subproblems can then be optimized independently and merged easily. This paper argues that vehicle-based decompositions, although very effective on various problem classes, also have limitations. In particular, they do not accommodate temporal decompositions and may produce spatial decompositions that are not focused enough. This paper then proposes customer-based decompositions, which generalize vehicle-based decouplings and allow for focused spatial and temporal decompositions. Experimental results on class R2 of the extended Solomon benchmarks demonstrate the benefits of the customer-based adaptive decomposition scheme and its spatial, temporal, and hybrid instantiations. In particular, they show that customer-based decompositions bring significant benefits over large neighborhood search, in contrast to vehicle-based decompositions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ndong, Mamadou; Lauvergnat, David; Nauts, André
2013-11-28
We present new techniques for the automatic computation of the kinetic energy operator in analytical form. These techniques are based on the use of the polyspherical approach and are extended to take into account Cartesian coordinates as well. An automatic procedure is developed where analytical expressions are obtained by symbolic calculations. This procedure is a full generalization of the one presented in Ndong et al. [J. Chem. Phys. 136, 034107 (2012)]. The correctness of the new implementation is analyzed by comparison with results obtained from the TNUM program. We give several illustrations that could be useful for users of the code. In particular, we discuss some cyclic compounds which are important in photochemistry. Among others, we show that choosing a well-adapted parameterization and decomposition into subsystems can allow one to avoid singularities in the kinetic energy operator. We also discuss a relation between polyspherical and Z-matrix coordinates: this comparison could be helpful for building an interface between the new code and a quantum chemistry package.
An improved pulse sequence and inversion algorithm of T2 spectrum
NASA Astrophysics Data System (ADS)
Ge, Xinmin; Chen, Hua; Fan, Yiren; Liu, Juntao; Cai, Jianchao; Liu, Jianyu
2017-03-01
The nuclear magnetic resonance transverse relaxation time is widely applied in geological prospecting, in both laboratory and downhole environments. However, current methods for data acquisition and inversion need to be reformed to characterize geological samples with complicated relaxation components and pore size distributions, such as samples of tight oil, gas shale, and carbonate. We present an improved pulse sequence, based on the CPMG (Carr, Purcell, Meiboom, and Gill) sequence, to collect transverse relaxation signals. The echo spacing is not constant but varies across windows, depending on prior knowledge or customer requirements. We use an entropy-based truncated singular value decomposition (TSVD) to compress the ill-posed matrix and discard the small singular values that cause inversion instability. A hybrid algorithm combining the iterative TSVD and a simultaneous iterative reconstruction technique is implemented to reach global convergence and stability of the inversion. Numerical simulations indicate that the improved pulse sequence leads to the same result as CPMG, but with fewer echoes and less computational time. The proposed method is a promising technique for geophysical prospecting and other related fields.
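The sketch below shows a plain TSVD inversion of synthetic multi-exponential decay data, the ill-posed step that the hybrid algorithm above stabilizes; the kernel discretization and truncation threshold are arbitrary choices for this example.

```python
import numpy as np

# Multi-exponential kernel: y(t) = sum_j f(T2_j) * exp(-t / T2_j).
# TSVD drops small singular values to regularize the inversion.
t = np.linspace(1e-3, 1.0, 200)                     # echo times (s)
T2 = np.logspace(-3, 0, 50)                         # candidate relaxation times (s)
K = np.exp(-t[:, None] / T2[None, :])               # discretized kernel matrix

f_true = np.exp(-0.5 * ((np.log10(T2) + 1.5) / 0.2)**2)   # synthetic T2 spectrum
y = K @ f_true + 1e-3 * np.random.default_rng(7).standard_normal(t.size)

U, s, Vt = np.linalg.svd(K, full_matrices=False)
k = np.sum(s > 1e-2 * s[0])                          # truncation level (assumed)
f_est = Vt[:k].T @ ((U[:, :k].T @ y) / s[:k])        # TSVD solution
print(k, np.linalg.norm(f_est - f_true) / np.linalg.norm(f_true))
```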
Zhang, Lisha; Zhang, Songhe; Lv, Xiaoyang; Qiu, Zheng; Zhang, Ziqiu; Yan, Liying
2018-08-15
This study investigated the alterations in biomass, nutrient, and dissolved organic matter concentrations in overlying water, and determined the bacterial 16S rRNA genes in biofilms attached to plant residue, during the decomposition of Myriophyllum verticillatum. The 55-day decomposition experiment showed that the plant decay process is well described by an exponential model, with an average decomposition rate of 0.037 d⁻¹. Total organic carbon, total nitrogen, and organic nitrogen concentrations increased significantly in overlying water during decomposition compared to the control within 35 d. Results from excitation-emission matrix parallel factor analysis showed that humic acid-like and tyrosine acid-like substances might originate from plant degradation processes. Tyrosine acid-like substances showed a clear correlation with organic nitrogen and total nitrogen (p<0.01). Decomposition rates were positively related to pH, total organic carbon, oxidation-reduction potential, and dissolved oxygen, but negatively related to temperature in overlying water. Microbe densities attached to plant residues increased as decomposition progressed. The most dominant phylum was Bacteroidetes (>46%) at 7 d, Chlorobi (20%-44%) or Proteobacteria (25%-34%) at 21 d, and Chlorobi (>40%) at 55 d. Among microbes attached to plant residues, sugar- and polysaccharide-degrading genera including Bacteroides, Blvii28, Fibrobacter, and Treponema dominated at 7 d, while Chlorobaculum, Rhodobacter, Methanobacterium, Thiobaca, Methanospirillum, and Methanosarcina dominated at 21 d and 55 d. These results give insight into dissolved organic matter release and bacterial community shifts during submerged macrophyte decomposition. Copyright © 2018 Elsevier B.V. All rights reserved.
[Chlorine coatings on skin surfaces. II. Parameters influencing the coating strength].
Gottardi, W; Karl, A
1991-05-01
Although active chlorine compounds have been used as skin disinfectants for more than 140 years (Semmelweis, 1848), the phenomenon of "chlorine covers" was first described only in 1988 (Hyg. + Med. 13 (1988) 157). It consists of a chemical alteration of the uppermost skin layer which becomes apparent in an oxidizing action against aqueous iodide. Its origin is chlorine covalently bound, in the form of N-Cl functions, to the protein matrix of the horny skin. Since the chlorine covers exhibit a persistent disinfecting activity which might be important in practice, the factors influencing their strength have been established. The most important are: the kind of chlorine system, the concentration (oxidation capacity), pH, temperature and volume of the solution used, the time of action, the application technique, and the state of the skin. Variations of the latter can be observed at different skin areas of one and the same person as well as at the same areas of different persons, and result in differences in cover strength of up to 100%. The stability on dry skin is very good, with a decomposition rate of approximately 1.2% per hour. However, on skin surfaces moistened by sweat (e.g., hands under surgeons' gloves) the chlorine cover disintegrates much faster (decomposition rate: 40-50% per hour). Washing with soap as well as the action of alcohols causes virtually no decrease in cover strength, while wetting with solutions of reducing agents (e.g., thiosulfate, cysteine, iodide) provokes a fast decomposition suitable for removing the chlorine covers. (ABSTRACT TRUNCATED AT 250 WORDS)
Sparse Tensor Decomposition for Haplotype Assembly of Diploids and Polyploids.
Hashemi, Abolfazl; Zhu, Banghua; Vikalo, Haris
2018-03-21
Haplotype assembly is the task of reconstructing the haplotypes of an individual from a mixture of sequenced chromosome fragments. Haplotype information enables studies of the effects of genetic variations on an organism's phenotype. Most mathematical formulations of haplotype assembly are known to be NP-hard, and haplotype assembly becomes even more challenging as sequencing technology advances and the length of paired-end reads and inserts increases. Assembly of haplotypes of polyploid organisms is considerably more difficult than in the case of diploids. Hence, scalable and accurate schemes with provable performance are desired for haplotype assembly of both diploid and polyploid organisms. We propose a framework that formulates haplotype assembly from sequencing data as a sparse tensor decomposition. We cast the problem as that of decomposing a tensor, having special structural constraints and missing a large fraction of its entries, into a product of two factors, U and V; the tensor V reveals haplotype information, while U is a sparse matrix encoding the origin of erroneous sequencing reads. An algorithm, AltHap, which reconstructs haplotypes of either diploid or polyploid organisms by iteratively solving this decomposition problem, is proposed. The performance and convergence properties of AltHap are theoretically analyzed and, in doing so, guarantees on the achievable minimum error correction scores and correct phasing rate are established. The developed framework is applicable to diploid, biallelic, and polyallelic polyploid species. The code for AltHap is freely available from https://github.com/realabolfazl/AltHap . AltHap was tested in a number of different scenarios, was shown to compare favorably to state-of-the-art methods in applications to haplotype assembly of diploids, and significantly outperforms existing techniques when applied to haplotype assembly of polyploids.
Origins of Magnetite Nanocrystals in Martian Meteorite ALH84001
NASA Technical Reports Server (NTRS)
Thomas-Keprta, Kathie L.; Clemett, Simon J.; Mckay, David S.; Gibson, Everett K.; Wentworth, Susan J.
2009-01-01
The Martian meteorite ALH84001 preserves evidence of interaction with aqueous fluids while on Mars in the form of microscopic carbonate disks. These carbonate disks are believed to have precipitated 3.9 Ga ago, at the beginning of the Noachian epoch on Mars, during which the oldest extant Martian surfaces were formed, and perhaps the earliest global oceans. Intimately associated within and throughout these carbonate disks are nanocrystal magnetites (Fe3O4) with unusual chemical and physical properties, whose origins have become the source of considerable debate. One group of hypotheses argues that these magnetites are the product of partial thermal decomposition of the host carbonate. Alternatively, the origins of magnetite and carbonate may be unrelated; that is, from the perspective of the carbonate, the magnetite is allochthonous. For example, the magnetites might have already been present in the aqueous fluids from which the carbonates are believed to have been deposited. We have sought to distinguish between these hypotheses through detailed characterization of the compositional and structural relationships of the carbonate disks and associated magnetites with the orthopyroxene matrix in which they are embedded. Extensive use of focused ion beam milling techniques was made for sample preparation. We then compared our observations with those from experimental thermal decomposition studies of sideritic carbonates under a range of plausible geological heating scenarios. We conclude that the vast majority of the nanocrystal magnetites present in the carbonate disks could not have formed by any of the currently proposed thermal decomposition scenarios. Instead, we find considerable evidence in support of an alternative allochthonous origin for the magnetite, unrelated to any shock or thermal processing of the carbonates.
Characteristic-eddy decomposition of turbulence in a channel
NASA Technical Reports Server (NTRS)
Moin, Parviz; Moser, Robert D.
1989-01-01
Lumley's proper orthogonal decomposition technique is applied to the turbulent flow in a channel. Coherent structures are extracted by decomposing the velocity field into characteristic eddies with random coefficients. A generalization of the shot-noise expansion is used to determine the characteristic eddies in homogeneous spatial directions. Three different techniques are used to determine the phases of the Fourier coefficients in the expansion: (1) one based on the bispectrum, (2) a spatial compactness requirement, and (3) a functional continuity argument. Similar results are found from each of these techniques.
UV-laser photochemistry of isoxazole isolated in a low-temperature matrix.
Nunes, Cláudio M; Reva, Igor; Pinho e Melo, Teresa M V D; Fausto, Rui
2012-10-05
The photochemistry of matrix-isolated isoxazole, induced by narrowband tunable UV-light, was investigated by infrared spectroscopy, with the aid of MP2/6-311++G(d,p) calculations. The isoxazole photoreaction starts to occur upon irradiation at λ = 240 nm, with the dominant pathway involving decomposition to ketene and hydrogen cyanide. However, upon irradiation at λ = 221 nm, in addition to this decomposition, isoxazole was also found to isomerize into several products: 2-formyl-2H-azirine, 3-formylketenimine, 3-hydroxypropenenitrile, imidoylketene, and 3-oxopropanenitrile. The structural and spectroscopic assignment of the different photoisomerization products was achieved by additional irradiation of the λ = 221 nm photolyzed matrix, using UV-light with λ ≥ 240 nm: (i) irradiation in the 330 ≤ λ ≤ 340 nm range induced direct transformation of 2-formyl-2H-azirine into 3-formylketenimine; (ii) irradiation with 310 ≤ λ ≤ 318 nm light induced the hitherto unobserved transformation of 3-formylketenimine into 3-hydroxypropenenitrile and imidoylketene; (iii) irradiation with λ = 280 nm light permits direct identification of 3-oxopropanenitrile; (iv) under λ = 240 nm irradiation, tautomerization of 3-hydroxypropenenitrile to 3-oxopropanenitrile is observed. On the basis of these findings, a detailed mechanistic proposal for isoxazole photochemistry is presented.
Characteristic eddy decomposition of turbulence in a channel
NASA Technical Reports Server (NTRS)
Moin, Parviz; Moser, Robert D.
1991-01-01
The proper orthogonal decomposition technique (Lumley's decomposition) is applied to the turbulent flow in a channel to extract coherent structures by decomposing the velocity field into characteristic eddies with random coefficients. In the homogeneous spatial directions, a generalization of the shot-noise expansion is used to determine the characteristic eddies. In this expansion, the Fourier coefficients of the characteristic eddy cannot be obtained from the second-order statistics. Three different techniques are used to determine the phases of these coefficients. They are based on: (1) the bispectrum, (2) a spatial compactness requirement, and (3) a functional continuity argument. Results from these three techniques are found to be similar in most respects. The implications of these techniques and the shot-noise expansion are discussed. The dominant eddy is found to contribute as much as 76 percent of the turbulent kinetic energy. In both 2D and 3D, the characteristic eddies consist of an ejection region straddled by streamwise vortices that leave the wall within a very short streamwise distance of about 100 wall units.
Decomposition-Based Failure Mode Identification Method for Risk-Free Design of Large Systems
NASA Technical Reports Server (NTRS)
Tumer, Irem Y.; Stone, Robert B.; Roberts, Rory A.; Clancy, Daniel (Technical Monitor)
2002-01-01
When designing products, it is crucial to assure failure- and risk-free operation in the intended operating environment. Failures are typically studied and eliminated as much as possible during the early stages of design. The few failures that go undetected result in unacceptable damage and losses in high-risk applications where public safety is of concern. Published NASA and NTSB accident reports point to a variety of components identified as sources of failures in the reported cases. In previous work, data from these reports were processed and placed in matrix form for all the system components and failure modes encountered, and then manipulated using matrix methods to determine similarities between the different components and failure modes. In this paper, these matrices are represented as a linear combination of failure modes, mathematically formed using Principal Components Analysis (PCA) decomposition. The PCA decomposition results in a low-dimensional representation of all failure modes and components of interest, expressed in a transformed coordinate system. Such a representation opens the way for efficient pattern analysis and prediction of the failure modes posing the highest potential risk to the final product, rather than making decisions based on the large space of component and failure mode data. The mathematics of the proposed method is explained first using a simple example problem. The method is then applied to component failure data gathered from helicopter accident reports to demonstrate its potential.
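A minimal sketch of the core idea, assuming a synthetic component-by-failure-mode count matrix in place of the accident-report data:

```python
import numpy as np
from sklearn.decomposition import PCA

# Rows = components, columns = failure modes, entries = co-occurrence
# counts (synthetic here); PCA gives a low-dimensional coordinate system.
rng = np.random.default_rng(8)
X = rng.poisson(1.0, size=(40, 12)).astype(float)   # 40 components, 12 failure modes

pca = PCA(n_components=3)
scores = pca.fit_transform(X)            # each component in 3-D "failure space"
print(pca.explained_variance_ratio_, scores.shape)
```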
NASA Astrophysics Data System (ADS)
Gharekhan, Anita H.; Biswal, Nrusingh C.; Gupta, Sharad; Pradhan, Asima; Sureshkumar, M. B.; Panigrahi, Prasanta K.
2008-02-01
The statistical and characteristic features of the polarized fluorescence spectra from cancerous, normal, and benign human breast tissues are studied through wavelet transform and singular value decomposition. The discrete wavelets enabled isolation of high- and low-frequency spectral fluctuations, which revealed substantial randomization in the cancerous tissues not present in the normal cases. In particular, the fluctuations fitted well with a Gaussian distribution for the cancerous tissues in the perpendicular component, whereas the spectral variations of normal and benign tissues show non-Gaussian behavior. The study of the difference of intensities in the parallel and perpendicular channels, which is free from the diffusive component, revealed weak fluorescence activity in the 630 nm domain for the cancerous tissues. This may be ascribable to porphyrin emission. The role of both scatterers and fluorophores in the observed minor intensity peak for the cancer case is experimentally confirmed through tissue-phantom experiments. The continuous Morlet wavelet also highlighted this domain for the cancerous tissue fluorescence spectra. Correlation in the spectral fluctuations is further studied in different tissue types through singular value decomposition. Apart from identifying different domains of spectral activity for diseased and non-diseased tissues, we found random matrix support for the spectral fluctuations. The small eigenvalues of the perpendicular polarized fluorescence spectra of cancerous tissues fitted remarkably well with the random matrix prediction for Gaussian random variables, confirming our observations about spectral fluctuations in the wavelet domain.
A compositional approach to building applications in a computational environment
NASA Astrophysics Data System (ADS)
Roslovtsev, V. V.; Shumsky, L. D.; Wolfengagen, V. E.
2014-04-01
The paper presents an approach to creating an applicative computational environment featuring decomposition of computational processes and data, and a compositional approach to application building. The approach is based on the notion of a combinator, both in systems with variable binding (such as λ-calculi) and in those allowing programming without variables (combinatory logic style). We present a computation decomposition technique based on objects' structural decomposition, with the focus on decomposing the computation itself.
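As a toy illustration of the combinatory-logic style the approach builds on (not code from the paper), the following defines the classic S and K combinators and a composition combinator in Python:

```python
# S and K suffice to express any lambda term; composition (the B combinator)
# builds applications without naming intermediate variables.
S = lambda f: lambda g: lambda x: f(x)(g(x))
K = lambda x: lambda y: x
I = S(K)(K)                               # identity, derived: S K K = I

compose = lambda f: lambda g: lambda x: f(g(x))   # B combinator
inc = lambda n: n + 1
dbl = lambda n: 2 * n
print(I(42), compose(inc)(dbl)(10))       # 42, 21
```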
An Optimal Strategy for Accurate Bulge-to-disk Decomposition of Disk Galaxies
NASA Astrophysics Data System (ADS)
Gao, Hua; Ho, Luis C.
2017-08-01
The development of two-dimensional (2D) bulge-to-disk decomposition techniques has shown their advantages over traditional one-dimensional (1D) techniques, especially for galaxies with non-axisymmetric features. However, the full potential of 2D techniques has yet to be fully exploited. Secondary morphological features in nearby disk galaxies, such as bars, lenses, rings, disk breaks, and spiral arms, are seldom accounted for in 2D image decompositions, even though some image-fitting codes, such as GALFIT, are capable of handling them. We present detailed, 2D multi-model and multi-component decomposition of high-quality R-band images of a representative sample of nearby disk galaxies selected from the Carnegie-Irvine Galaxy Survey, using the latest version of GALFIT. The sample consists of five barred and five unbarred galaxies, spanning Hubble types from S0 to Sc. Traditional 1D decomposition is also presented for comparison. In detailed case studies of the 10 galaxies, we successfully model the secondary morphological features. Through a comparison of best-fit parameters obtained from different input surface brightness models, we identify morphological features that significantly impact bulge measurements. We show that nuclear and inner lenses/rings and disk breaks must be properly taken into account to obtain accurate bulge parameters, whereas outer lenses/rings and spiral arms have a negligible effect. We provide an optimal strategy to measure bulge parameters of typical disk galaxies, as well as prescriptions to estimate realistic uncertainties of them, which will benefit subsequent decomposition of a larger galaxy sample.
Characteristic analysis on UAV-MIMO channel based on normalized correlation matrix.
Gao, Xi jun; Chen, Zi li; Hu, Yong Jiang
2014-01-01
Based on the three-dimensional GBSBCM (geometrically based double-bounce cylinder model) channel model of MIMO for unmanned aerial vehicles (UAV), a simple form of the UAV space-time-frequency channel correlation function which includes the LOS, SPE, and DIF components is presented. By the methods of channel matrix decomposition and coefficient normalization, the analytic formula of the UAV-MIMO normalized correlation matrix is deduced. This formula can be used directly to analyze the condition number of the UAV-MIMO channel matrix, the channel capacity, and other characteristic parameters. The simulation results show that this channel correlation matrix can comprehensively describe the changes of UAV-MIMO channel characteristics under different parameter settings. This analysis method provides a theoretical basis for improving the transmission performance of the UAV-MIMO channel, and MIMO technology shows practical value in the field of UAV communication.
Blumthaler, Ingrid; Oberst, Ulrich
2012-03-01
Control design belongs to the most important and difficult tasks of control engineering and has therefore been treated by many prominent researchers and in many textbooks, the systems being generally described by their transfer matrices or by Rosenbrock equations and, more recently, also as behaviors. Our approach to controller design uses, in addition to the ideas of our predecessors on coprime factorizations of transfer matrices and on the parametrization of stabilizing compensators, a new mathematical technique which enables simpler design and also new theorems, notwithstanding the many outstanding results in the literature: (1) We use an injective cogenerator signal module ℱ over the polynomial algebra [Formula: see text] (F an infinite field), a saturated multiplicatively closed set T of stable polynomials and its quotient ring [Formula: see text] of stable rational functions. This enables the simultaneous treatment of continuous and discrete systems and of all notions of stability, called T-stability. We investigate stabilizing control design by output feedback of input/output (IO) behaviors and study the full feedback IO behavior, especially its autonomous part and not only its transfer matrix. (2) The new technique is characterized by the permanent application of the injective cogenerator quotient signal module [Formula: see text] and of quotient behaviors [Formula: see text] of [Formula: see text]-behaviors B. (3) For the control tasks of tracking, disturbance rejection, model matching, and decoupling, and for not necessarily proper plants, we derive necessary and sufficient conditions for the existence of proper stabilizing compensators with proper and stable closed-loop behaviors, parametrize all such compensators as IO behaviors and not only their transfer matrices, and give new algorithms for their construction. Moreover, we solve the problem of pole placement or spectral assignability for the complete feedback behavior. The properness of the full feedback behavior ensures the absence of impulsive solutions in the continuous case, and that of the compensator enables its realization by Kalman state space equations or elementary building blocks. We note that every behavior admits an IO decomposition with proper transfer matrix, but that most IO decompositions do not have this property; therefore we do not assume the properness of the plant. (4) The new technique can also be applied to more general control interconnections in the sense of Willems, in particular to two-parameter feedback compensators and to the recent tracking framework of Fiaz/Takaba/Trentelman. In contrast to these authors, however, we pay special attention to the properness of all constructed transfer matrices, which requires more subtle algorithms.
NASA Astrophysics Data System (ADS)
Hu, Yijia; Zhong, Zhong; Zhu, Yimin; Ha, Yao
2018-04-01
In this paper, a statistical forecast model using a time-scale decomposition method is established for the seasonal prediction of the rainfall during the flood period (FPR) over the middle and lower reaches of the Yangtze River Valley (MLYRV). This method decomposes the rainfall over the MLYRV into three time-scale components, namely, the interannual component with periods shorter than 8 years, the interdecadal component with periods from 8 to 30 years, and the component with periods longer than 30 years. Then, predictors are selected for the three time-scale components of FPR through correlation analysis. Finally, a statistical forecast model is established using the multiple linear regression technique to predict the three time-scale components of the FPR, respectively. The results show that this forecast model can capture the interannual and interdecadal variations of FPR. The hindcast of FPR for the 14 years from 2001 to 2014 shows that the FPR can be predicted successfully in 11 out of the 14 years. This forecast model performs better than a model using the traditional scheme without time-scale decomposition. Therefore, the statistical forecast model using the time-scale decomposition technique has good skill and application value in the operational prediction of FPR over the MLYRV.
Identification of particle-laden flow features from wavelet decomposition
NASA Astrophysics Data System (ADS)
Jackson, A.; Turnbull, B.
2017-12-01
A wavelet decomposition based technique is applied to air pressure data obtained from laboratory-scale powder snow avalanches. This technique is shown to be a powerful tool for identifying both repeatable and chaotic features at any frequency within the signal. Additionally, this technique is demonstrated to be a robust method for the removal of noise from the signal as well as being capable of removing other contaminants from the signal. Whilst powder snow avalanches are the focus of the experiments analysed here, the features identified can provide insight to other particle-laden gravity currents and the technique described is applicable to a wide variety of experimental signals.
Ghasali, Ehsan; Fazili, Ali; Alizadeh, Masoud; Shirvanimoghaddam, Kamyar; Ebadzadeh, Touradj
2017-01-01
In this research, the mechanical properties and microstructure of Al-15 wt % TiC composite samples prepared by spark plasma, microwave, and conventional sintering were investigated. The sintering process was performed by the spark plasma sintering (SPS) technique, and in microwave and conventional furnaces, at 400 °C, 600 °C, and 700 °C, respectively. The results showed that the samples sintered by SPS have the highest relative density (99% of theoretical density), bending strength (291 ± 12 MPa), and hardness (253 ± 23 HV). X-ray diffraction (XRD) investigations showed the formation of TiO2 from the surface-layer decomposition of TiC particles. Scanning electron microscopy (SEM) micrographs demonstrated uniform distribution of the reinforcement particles in all sintered samples. The SEM/EDS analysis revealed the formation of TiO2 around the porous TiC particles. PMID:29088114
Techniques for Reaeration of Hydropower Releases.
1983-02-01
Fragmentary excerpt: describes aeration techniques for hydropower releases, including restoring peak production via air induction through the baffle ring and, as the other aeration technique at Norris, modifications to the vacuum-breaker system; notes that dissolved oxygen is consumed by the decomposition of bottom mud and by oxidation of the decomposition products stirred up out of it.
Tomographic reconstruction of tokamak plasma light emission using wavelet-vaguelette decomposition
NASA Astrophysics Data System (ADS)
Schneider, Kai; Nguyen van Yen, Romain; Fedorczak, Nicolas; Brochard, Frederic; Bonhomme, Gerard; Farge, Marie; Monier-Garbet, Pascale
2012-10-01
Images acquired by cameras installed in tokamaks are difficult to interpret because the three-dimensional structure of the plasma is flattened in a non-trivial way. Nevertheless, taking advantage of the slow variation of the fluctuations along magnetic field lines, the optical transformation may be approximated by a generalized Abel transform, for which we proposed in Nguyen van yen et al., Nucl. Fus., 52 (2012) 013005, an inversion technique based on the wavelet-vaguelette decomposition. After validation of the new method using an academic test case and numerical data obtained with the Tokam 2D code, we present an application to an experimental movie obtained in the tokamak Tore Supra. A comparison with a classical regularization technique for ill-posed inverse problems, the singular value decomposition, allows us to assess the efficiency. The superiority of the wavelet-vaguelette technique is reflected in preserving local features, such as blobs and fronts, in the denoised emissivity map.
NASA Astrophysics Data System (ADS)
Nguyen van yen, R.; Fedorczak, N.; Brochard, F.; Bonhomme, G.; Schneider, K.; Farge, M.; Monier-Garbet, P.
2012-01-01
Images acquired by cameras installed in tokamaks are difficult to interpret because the three-dimensional structure of the plasma is flattened in a non-trivial way. Nevertheless, taking advantage of the slow variation of the fluctuations along magnetic field lines, the optical transformation may be approximated by a generalized Abel transform, for which we propose an inversion technique based on the wavelet-vaguelette decomposition. After validation of the new method using an academic test case and numerical data obtained with the Tokam 2D code, we present an application to an experimental movie obtained in the tokamak Tore Supra. A comparison with a classical regularization technique for ill-posed inverse problems, the singular value decomposition, allows us to assess the efficiency. The superiority of the wavelet-vaguelette technique is reflected in preserving local features, such as blobs and fronts, in the denoised emissivity map.
NASA Astrophysics Data System (ADS)
Le, Thien-Phu
2017-10-01
The frequency-scale domain decomposition technique has recently been proposed for operational modal analysis. The technique is based on the Cauchy mother wavelet. In this paper, the approach is extended to the Morlet mother wavelet, which is very popular in signal processing due to its superior time-frequency localization. Based on the regressive form and an appropriate norm of the Morlet mother wavelet, the continuous wavelet transform of the power spectral density of ambient responses enables modes in the frequency-scale domain to be highlighted. Analytical developments first demonstrate the link between modal parameters and the local maxima of the continuous wavelet transform modulus. The link formula is then used as the foundation of the proposed modal identification method. Its practical procedure, combined with the singular value decomposition algorithm, is presented step by step. The proposed method is finally verified using numerical examples and a laboratory test.
Nanocomposite film prepared by depositing xylan on cellulose nanowhiskers matrix
Qining Sun; Anurag Mandalika; Thomas Elder; Sandeep S. Nair; Xianzhi Meng; Fang Huang; Art J. Ragauskas
2014-01-01
Novel bionanocomposite films have been prepared by depositing xylan onto cellulose nanowhiskers through a pH adjustment. Analysis of strength properties, water vapour transmission, transparency, surface morphology and thermal decomposition showed the enhancement of film performance. This provides a new green route to the utilization of biomass for sustainable...
Analysis of Self-Excited Combustion Instabilities Using Decomposition Techniques
2016-07-05
Proper orthogonal decomposition and dynamic mode decomposition are evaluated for the study of self-excited longitudinal combustion instabilities in laboratory-scale single-element gas turbine and rocket combustors. In addition, we evaluate the capabilities of the methods to deal with data sets of different spatial extents and temporal resolutions. (DOI: 10.2514/1.J054557)
Domain decomposition for aerodynamic and aeroacoustic analyses, and optimization
NASA Technical Reports Server (NTRS)
Baysal, Oktay
1995-01-01
The overarching theme was domain decomposition, which was intended to improve the numerical solution technique for the partial differential equations at hand; in the present study, those that governed either the fluid flow, the aeroacoustic wave propagation, or the sensitivity analysis for a gradient-based optimization. The role of the domain decomposition extended beyond the original impetus of discretizing geometrically complex regions or writing modular software for distributed-hardware computers. It induced function-space decompositions and operator decompositions that offered the valuable property of near independence of operator evaluation tasks. The objectives centered on the extension and implementation of methodologies either previously developed or being developed concurrently: (1) aerodynamic sensitivity analysis with domain decomposition (SADD); (2) computational aeroacoustics of cavities; and (3) dynamic, multibody computational fluid dynamics using unstructured meshes.
A Novel Multilevel-SVD Method to Improve Multistep Ahead Forecasting in Traffic Accidents Domain.
Barba, Lida; Rodríguez, Nibaldo
2017-01-01
A novel method is proposed for decomposing a nonstationary time series into components of low and high frequency. The method is based on Multilevel Singular Value Decomposition (MSVD) of a Hankel matrix. The decomposition is used to improve the forecasting accuracy of Multiple Input Multiple Output (MIMO) linear and nonlinear models. Three time series coming from the traffic accidents domain are used. They represent the number of persons with injuries in traffic accidents in Santiago, Chile. The data were continuously collected by the Chilean Police and were weekly sampled from 2000:1 to 2014:12. The performance of MSVD is compared with the decomposition into components of low and high frequency of a commonly accepted method based on the Stationary Wavelet Transform (SWT). SWT in conjunction with an Autoregressive model (SWT + MIMO-AR) and SWT in conjunction with an Autoregressive Neural Network (SWT + MIMO-ANN) were evaluated. The empirical results have shown that the best accuracy was achieved by the forecasting model based on the proposed decomposition method MSVD, in comparison with the forecasting models based on SWT.
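A minimal sketch of the Hankel-SVD idea underlying MSVD, assuming an illustrative window length and rank split rather than the paper's parameters: the series is embedded in a Hankel matrix, the dominant singular triplets give the low-frequency component, and diagonal averaging maps the result back to a series.

```python
# Sketch: Hankel embedding + SVD to split a weekly series into low- and
# high-frequency components. Window length L and rank r are illustrative.
import numpy as np

def hankel_svd_split(x, L=20, r=3):
    n = len(x)
    K = n - L + 1
    H = np.column_stack([x[i:i + L] for i in range(K)])   # L x K Hankel matrix
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    H_low = (U[:, :r] * s[:r]) @ Vt[:r]                   # dominant triplets
    # Anti-diagonal averaging maps the rank-r matrix back to a series.
    low = np.array([H_low[::-1].diagonal(k).mean() for k in range(-(L - 1), K)])
    return low, x - low

rng = np.random.default_rng(1)
t = np.arange(780)                                  # ~15 years of weekly samples
x = 50 + 10 * np.sin(2 * np.pi * t / 52) + rng.normal(scale=3, size=t.size)
low, high = hankel_svd_split(x)
print(low.shape, round(high.std(), 3))
```

The two components can then feed separate MIMO predictors, mirroring the role SWT plays in the compared models.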
NASA Astrophysics Data System (ADS)
Vasilev, A. A.; Dzidziguri, E. L.; Muratov, D. G.; Zhilyaeva, N. A.; Efimov, M. N.; Karpacheva, G. P.
2018-04-01
Metal-carbon nanocomposites consisting of FeCo alloy nanoparticles dispersed in a carbon matrix were synthesized by thermal decomposition of a precursor based on polyvinyl alcohol and metal salts. The synthesized powders were investigated by X-ray diffraction (XRD), X-ray fluorescence spectrometry (XRFS), transmission electron microscopy (TEM), and scanning electron microscopy (SEM). Surface characteristics of the materials were measured by the BET method. The morphology and dispersity of the metal nanoparticles were studied as a function of the metals ratio in the composite.
Solving periodic block tridiagonal systems using the Sherman-Morrison-Woodbury formula
NASA Technical Reports Server (NTRS)
Yarrow, Maurice
1989-01-01
Many algorithms for solving the Navier-Stokes equations require the solution of periodic block tridiagonal systems of equations. By applying a splitting to the matrix representing such a system, it may first be reduced to a block tridiagonal matrix plus an outer product of two block vectors. The Sherman-Morrison-Woodbury formula is then applied. The algorithm thus reduces a periodic banded system to a non-periodic banded system with additional right-hand sides, and is more efficient than standard Thomas-algorithm/LU decompositions.
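A minimal scalar (block size 1) sketch of this splitting, written against the classic cyclic-Thomas presentation rather than the paper's block implementation: the periodic matrix is split into a plain tridiagonal matrix plus an outer product of two vectors, and the Sherman-Morrison formula patches the corner entries.

```python
# Sketch: solve a periodic (cyclic) tridiagonal system via splitting plus
# the Sherman-Morrison formula. Scalar entries stand in for blocks.
import numpy as np
from scipy.linalg import solve_banded

def solve_cyclic_tridiag(a, b, c, d):
    """Solve A x = d, A tridiagonal with sub/main/super diagonals a, b, c
    and periodic corner entries A[0, -1] = a[0], A[-1, 0] = c[-1]."""
    n = len(b)
    gamma = -b[0]
    bb = b.copy()
    bb[0] -= gamma                       # A = B + u v^T with B tridiagonal
    bb[-1] -= a[0] * c[-1] / gamma

    def tridiag_solve(rhs):
        ab = np.zeros((3, n))
        ab[0, 1:] = c[:-1]               # super-diagonal
        ab[1] = bb                       # main diagonal
        ab[2, :-1] = a[1:]               # sub-diagonal
        return solve_banded((1, 1), ab, rhs)

    u = np.zeros(n); u[0] = gamma; u[-1] = c[-1]
    v = np.zeros(n); v[0] = 1.0;   v[-1] = a[0] / gamma
    y = tridiag_solve(d)                 # banded solve, extra right-hand side
    z = tridiag_solve(u)
    return y - z * (v @ y) / (1.0 + v @ z)   # Sherman-Morrison correction

# Quick check against a dense solve.
n = 8
rng = np.random.default_rng(2)
a = rng.normal(size=n); b = rng.normal(size=n) + 4.0; c = rng.normal(size=n)
A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
A[0, -1] = a[0]; A[-1, 0] = c[-1]
d = rng.normal(size=n)
print(np.allclose(solve_cyclic_tridiag(a, b, c, d), np.linalg.solve(A, d)))
```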
Structure of local interactions in complex financial dynamics
Jiang, X. F.; Chen, T. T.; Zheng, B.
2014-01-01
Using network methods and random matrix theory, we investigate the interaction structure of communities in financial markets. In particular, based on the random matrix decomposition, we clarify that the local interactions between business sectors (subsectors) are mainly contained in the sector mode. In the sector mode, the average correlation inside sectors is positive, while that between sectors is negative. Further, we explore the time evolution of the interaction structure of the business sectors and observe that the local interaction structure changes dramatically during a financial bubble or crisis. PMID:24936906
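The random-matrix decomposition this entry relies on can be sketched as follows: eigenvalues of an empirical correlation matrix are compared with the Marchenko-Pastur edge, and the sector modes (the large eigenvalues other than the market mode) reproduce the positive intra-sector/negative inter-sector pattern. The two-sector synthetic returns are an illustrative assumption, not market data.

```python
# Sketch: separate market, sector, and random modes of a correlation matrix.
import numpy as np

T, N = 2000, 60
rng = np.random.default_rng(3)
market = rng.normal(size=(T, 1))
sector = np.repeat(rng.normal(size=(T, 2)), 30, axis=1)   # two 30-stock sectors
returns = 0.4 * market + 0.3 * sector + rng.normal(size=(T, N))

C = np.corrcoef(returns, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(C)                      # ascending order

q = N / T
lam_max = (1 + np.sqrt(q)) ** 2                           # Marchenko-Pastur edge
signal = eigvals > lam_max
print("eigenvalues above the noise bulk:", np.round(eigvals[signal], 2))

# Correlation restricted to the sector modes (signal modes minus the
# market mode, which is the global maximum).
idx = np.where(signal)[0][:-1]
C_sector = (eigvecs[:, idx] * eigvals[idx]) @ eigvecs[:, idx].T
print("mean intra-sector element:", round(C_sector[:30, :30].mean(), 3))
print("mean inter-sector element:", round(C_sector[:30, 30:].mean(), 3))
```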
NASA Astrophysics Data System (ADS)
Wang, RuLin; Zheng, Xiao; Kwok, YanHo; Xie, Hang; Chen, GuanHua; Yam, ChiYung
2015-04-01
Understanding electronic dynamics on material surfaces is fundamentally important for applications including nanoelectronics, inhomogeneous catalysis, and photovoltaics. Practical approaches based on time-dependent density functional theory for open systems have been developed to characterize the dissipative dynamics of electrons in bulk materials. The accuracy and reliability of such approaches depend critically on how the electronic structure and memory effects of surrounding material environment are accounted for. In this work, we develop a novel squared-Lorentzian decomposition scheme, which preserves the positive semi-definiteness of the environment spectral matrix. The resulting electronic dynamics is guaranteed to be both accurate and convergent even in the long-time limit. The long-time stability of electronic dynamics simulation is thus greatly improved within the current decomposition scheme. The validity and usefulness of our new approach are exemplified via two prototypical model systems: quasi-one-dimensional atomic chains and two-dimensional bilayer graphene.
Analysis on Vertical Scattering Signatures in Forestry with PolInSAR
NASA Astrophysics Data System (ADS)
Guo, Shenglong; Li, Yang; Zhang, Jingjing; Hong, Wen
2014-11-01
We apply an accurate topographic phase to the Freeman-Durden decomposition of polarimetric SAR interferometry (PolInSAR) data. The cross-correlation matrix obtained from PolInSAR observations can be decomposed into three scattering-mechanism matrices accounting for odd-bounce, double-bounce, and volume scattering. We estimate the topographic phase based on the Random Volume over Ground (RVoG) model and use it as the initial input parameter of the numerical method that solves for the decomposition parameters. In addition, the modified volume scattering model introduced by Y. Yamaguchi, rather than the pure random volume scattering proposed by Freeman-Durden, is applied to the PolInSAR target decomposition in forest areas to best fit the actual measured data. This method can accurately retrieve the magnitude associated with each mechanism and its location along the vertical dimension. We test the algorithms with L- and P-band simulated data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, W; Niu, T; Xing, L
2015-06-15
Purpose: To significantly improve dual-energy CT (DECT) imaging by establishing a new theoretical framework of image-domain material decomposition with incorporation of edge-preserving techniques. Methods: The proposed algorithm, HYPR-NLM, combines the edge-preserving non-local mean filter (NLM) with the HYPR-LR (Local HighlY constrained backPRojection Reconstruction) framework. Image denoising using the HYPR-LR framework depends on the noise level of the composite image, which is the average of the different energy images. For DECT, the composite image is the average of the high- and low-energy images. To further reduce noise, one may want to increase the window size of the HYPR-LR filter, leading to resolution degradation. By incorporating NLM filtering into the HYPR-LR framework, HYPR-NLM reduces the boosted material-decomposition noise using energy information redundancies as well as the non-local mean. We demonstrate the noise reduction and resolution preservation of the algorithm with both an iodine concentration numerical phantom and clinical patient data by comparing the HYPR-NLM algorithm to direct matrix inversion, HYPR-LR, and iterative image-domain material decomposition (Iter-DECT). Results: The results show that the iterative material decomposition method reduces noise to the lowest level and provides improved DECT images. HYPR-NLM significantly reduces noise while preserving the accuracy of quantitative measurement and resolution. For the iodine concentration numerical phantom, the averaged noise levels are about 2.0, 0.7, 0.2, and 0.4 for direct inversion, HYPR-LR, Iter-DECT, and HYPR-NLM, respectively. For the patient data, the noise levels of the water images are about 0.36, 0.16, 0.12, and 0.13 for direct inversion, HYPR-LR, Iter-DECT, and HYPR-NLM, respectively. Difference images of both HYPR-LR and Iter-DECT show an edge effect, while no significant edge effect is shown for HYPR-NLM, suggesting spatial resolution is well preserved by HYPR-NLM. Conclusion: HYPR-NLM provides an effective way to reduce the generic magnified noise of dual-energy material decomposition while preserving resolution. This work is supported in part by NIH grants 7R01HL111141 and 1R01-EB016777. This work is also supported by the Natural Science Foundation of China (NSFC Grant No. 81201091), Fundamental Research Funds for the Central Universities in China, and the Fund Project for Excellent Abroad Scholar Personnel in Science and Technology.
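For reference, the "direct matrix inversion" baseline compared above reduces, per pixel, to inverting a small material composition matrix. The following sketch shows that step on synthetic data; the matrix entries and phantom are made-up illustrations, not calibrated values, and the print statement makes the characteristic noise amplification visible.

```python
# Sketch: image-domain two-material decomposition by per-pixel inversion.
import numpy as np

# Rows: energy channel (low, high); columns: material (water, iodine).
# Entries are illustrative, not calibrated attenuation values.
M = np.array([[1.00, 25.0],
              [0.95, 12.0]])
M_inv = np.linalg.inv(M)

rng = np.random.default_rng(4)
density = np.zeros((64, 64, 2))
density[16:48, 16:48, 0] = 1.0          # water background
density[28:36, 28:36, 1] = 0.01         # iodine insert

# Forward model plus noise, then direct per-pixel inversion.
projections = density @ M.T + 0.02 * rng.normal(size=density.shape)
decomposed = projections @ M_inv.T

print("recovered iodine in insert: %.4f" % decomposed[30, 30, 1])
print("iodine-image noise outside insert: %.4f" % decomposed[..., 1][:16].std())
```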
Face recognition using tridiagonal matrix enhanced multivariance products representation
NASA Astrophysics Data System (ADS)
Özay, Evrim Korkmaz
2017-01-01
This study aims to retrieve face images from a database according to a target face image. For this purpose, Tridiagonal Matrix Enhanced Multivariance Products Representation (TMEMPR) is taken into consideration. TMEMPR is a recursive algorithm based on Enhanced Multivariance Products Representation (EMPR). TMEMPR decomposes a matrix into three components: a matrix of left support terms, a tridiagonal matrix of weight parameters for each recursion, and a matrix of right support terms. In this sense, there is an analogy between Singular Value Decomposition (SVD) and TMEMPR. However, TMEMPR is a more flexible algorithm since its initial support terms (or vectors) can be chosen as desired. Low computational complexity is another advantage of TMEMPR, because the algorithm has been constructed with recursions of certain arithmetic operations, without requiring any iteration. The algorithm has been trained and tested with the ORL face image database, comprising 400 grayscale images of 40 different people. TMEMPR's performance is compared with that of SVD.
Coherent mode decomposition using mixed Wigner functions of Hermite-Gaussian beams.
Tanaka, Takashi
2017-04-15
A new method of coherent mode decomposition (CMD) is proposed that is based on a Wigner-function representation of Hermite-Gaussian beams. In contrast to the well-known method using the cross spectral density (CSD), it directly determines the mode functions and their weights without solving the eigenvalue problem. This facilitates the CMD of partially coherent light whose Wigner functions (and thus CSDs) are not separable, in which case the conventional CMD requires solving an eigenvalue problem with a large matrix and thus is numerically formidable. An example is shown regarding the CMD of synchrotron radiation, one of the most important applications of the proposed method.
Caicedo, Alexander; Varon, Carolina; Hunyadi, Borbala; Papademetriou, Maria; Tachtsidis, Ilias; Van Huffel, Sabine
2016-01-01
Clinical data comprise a large number of synchronously collected biomedical signals that are measured at different locations. Deciphering the interrelationships of these signals can yield important information about their dependence, providing some useful clinical diagnostic data. For instance, by computing the coupling between near-infrared spectroscopy (NIRS) signals and systemic variables, the status of the hemodynamic regulation mechanisms can be assessed. In this paper we introduce an algorithm for the decomposition of NIRS signals into additive components. The algorithm, SIgnal DEcomposition based on Oblique Subspace Projections (SIDE-ObSP), assumes that the measured NIRS signal is a linear combination of the systemic measurements, following the linear regression model y = Ax + ϵ. SIDE-ObSP decomposes the output such that each component in the decomposition represents the sole linear influence of one corresponding regressor variable. This decomposition scheme aims at providing a better understanding of the relation between NIRS and systemic variables, and at providing a framework for the clinical interpretation of regression algorithms, thereby facilitating their introduction into clinical practice. SIDE-ObSP combines oblique subspace projections (ObSP) with the structure of a mean average system in order to define adequate signal subspaces. To guarantee smoothness in the estimated regression parameters, as observed in normal physiological processes, we impose a Tikhonov regularization using a matrix differential operator. We evaluate the performance of SIDE-ObSP using a synthetic dataset, and present two case studies in the field of cerebral hemodynamics monitoring using NIRS. In addition, we compare the performance of this method with other system identification techniques. The first case study used data from 20 neonates during the first 3 days of life; here SIDE-ObSP decoupled the influence of changes in arterial oxygen saturation from the NIRS measurements, facilitating the use of NIRS as a surrogate measure for cerebral blood flow (CBF). The second case study used data from a 3-year-old infant under extra-corporeal membrane oxygenation (ECMO); here SIDE-ObSP decomposed cerebral/peripheral tissue oxygenation as a sum of the partial contributions from different systemic variables, facilitating the comparison between the effects of each systemic variable on the cerebral/peripheral hemodynamics.
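A simplified sketch of the decomposition idea, not the authors' oblique-projection implementation: the signal is modeled as y = Ax + ϵ with lagged regressors, a second-difference Tikhonov penalty keeps each estimated impulse response smooth, and each additive component is the sole contribution of one systemic variable. All signals and parameters below are synthetic stand-ins.

```python
# Sketch: per-regressor additive decomposition via smoothed least squares.
import numpy as np
from scipy.linalg import block_diag

def lagged(x, L):
    """T x L matrix whose columns are x delayed by 0..L-1 samples."""
    return np.column_stack([np.r_[np.zeros(k), x[:len(x) - k]] for k in range(L)])

rng = np.random.default_rng(5)
T = 600
t = np.linspace(0, 60, T)
sat = np.sin(0.3 * t)                       # stand-in for O2 saturation
abp = np.cos(0.11 * t)                      # stand-in for blood pressure
y = 0.8 * sat + 0.3 * abp + 0.05 * rng.normal(size=T)

L, lam = 8, 10.0
blocks = [lagged(sat, L), lagged(abp, L)]
A = np.hstack(blocks)
D1 = np.diff(np.eye(L), n=2, axis=0)        # second-difference operator
D = block_diag(D1, D1)                      # smooth each impulse response
h = np.linalg.solve(A.T @ A + lam * D.T @ D, A.T @ y)

comp_sat = blocks[0] @ h[:L]                # sole contribution of saturation
comp_abp = blocks[1] @ h[L:]
residual = y - comp_sat - comp_abp
print([round(float(np.std(c)), 3) for c in (comp_sat, comp_abp, residual)])
```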
Attractive electron-electron interactions within robust local fitting approximations.
Merlot, Patrick; Kjærgaard, Thomas; Helgaker, Trygve; Lindh, Roland; Aquilante, Francesco; Reine, Simen; Pedersen, Thomas Bondo
2013-06-30
An analysis of Dunlap's robust fitting approach reveals that the resulting two-electron integral matrix is not manifestly positive semidefinite when local fitting domains or non-Coulomb fitting metrics are used. We present a highly local approximate method for evaluating four-center two-electron integrals based on the resolution-of-the-identity (RI) approximation and apply it to the construction of the Coulomb and exchange contributions to the Fock matrix. In this pair-atomic resolution-of-the-identity (PARI) approach, atomic-orbital (AO) products are expanded in auxiliary functions centered on the two atoms associated with each product. Numerical tests indicate that in 1% or less of all Hartree-Fock and Kohn-Sham calculations, the indefinite integral matrix causes nonconvergence in the self-consistent-field iterations. In these cases, the two-electron contribution to the total energy becomes negative, meaning that the electronic interaction is effectively attractive, and the total energy is dramatically lower than that obtained with exact integrals. In the vast majority of our test cases, however, the indefiniteness does not interfere with convergence. The total energy accuracy is comparable to that of the standard Coulomb-metric RI method. The speed-up compared with conventional algorithms is similar to the RI method for Coulomb contributions; exchange contributions are accelerated by a factor of up to eight with a triple-zeta quality basis set. A positive semidefinite integral matrix is recovered within PARI by introducing local auxiliary basis functions spanning the full AO product space, as may be achieved by using Cholesky-decomposition techniques. Local completion, however, slows down the algorithm to a level comparable with or below conventional calculations. Copyright © 2013 Wiley Periodicals, Inc.
A Matrix-Free Algorithm for Multidisciplinary Design Optimization
NASA Astrophysics Data System (ADS)
Lambe, Andrew Borean
Multidisciplinary design optimization (MDO) is an approach to engineering design that exploits the coupling between components or knowledge disciplines in a complex system to improve the final product. In aircraft design, MDO methods can be used to simultaneously design the outer shape of the aircraft and the internal structure, taking into account the complex interaction between the aerodynamic forces and the structural flexibility. Efficient strategies are needed to solve such design optimization problems and guarantee convergence to an optimal design. This work begins with a comprehensive review of MDO problem formulations and solution algorithms. First, a fundamental MDO problem formulation is defined from which other formulations may be obtained through simple transformations. Using these fundamental problem formulations, decomposition methods from the literature are reviewed and classified. All MDO methods are presented in a unified mathematical notation to facilitate greater understanding. In addition, a novel set of diagrams, called extended design structure matrices, are used to simultaneously visualize both data communication and process flow between the many software components of each method. For aerostructural design optimization, modern decomposition-based MDO methods cannot efficiently handle the tight coupling between the aerodynamic and structural states. This fact motivates the exploration of methods that can reduce the computational cost. A particular structure in the direct and adjoint methods for gradient computation motivates the idea of a matrix-free optimization method. A simple matrix-free optimizer is developed based on the augmented Lagrangian algorithm. This new matrix-free optimizer is tested on two structural optimization problems and one aerostructural optimization problem. The results indicate that the matrix-free optimizer is able to efficiently solve structural and multidisciplinary design problems with thousands of variables and constraints. On the aerostructural test problem formulated with thousands of constraints, the matrix-free optimizer is estimated to reduce the total computational time by up to 90% compared to conventional optimizers.
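The matrix-free idea can be illustrated independently of the optimizer in this thesis: the solver only ever applies operator-vector products, such as the Hessian-vector products delivered by direct/adjoint methods, inside a Krylov iteration, so no large matrix is ever formed. The quadratic stand-in below is an assumption, not the aerostructural system.

```python
# Sketch: a Newton-type step computed matrix-free with SciPy's
# LinearOperator and conjugate gradients.
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

n = 10_000
diag = np.linspace(1.0, 100.0, n)

def hessvec(v):
    # Stand-in for an adjoint-based Hessian-vector product: O(n) work,
    # and the n x n matrix is never stored.
    return diag * v + 0.1 * np.roll(v, 1) + 0.1 * np.roll(v, -1)

H = LinearOperator((n, n), matvec=hessvec)
g = np.ones(n)                              # gradient of the objective
step, info = cg(H, -g)                      # Newton step without a matrix
print("CG converged:", info == 0, " step norm:", np.linalg.norm(step))
```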
Partial information decomposition as a spatiotemporal filter.
Flecker, Benjamin; Alford, Wesley; Beggs, John M; Williams, Paul L; Beer, Randall D
2011-09-01
Understanding the mechanisms of distributed computation in cellular automata requires techniques for characterizing the emergent structures that underlie information processing in such systems. Recently, techniques from information theory have been brought to bear on this problem. Building on this work, we utilize the new technique of partial information decomposition to show that previous information-theoretic measures can confound distinct sources of information. We then propose a new set of filters and demonstrate that they more cleanly separate out the background domains, particles, and collisions that are typically associated with information storage, transfer, and modification in cellular automata.
Spectral Diffusion: An Algorithm for Robust Material Decomposition of Spectral CT Data
Clark, Darin P.; Badea, Cristian T.
2014-01-01
Clinical successes with dual energy CT, aggressive development of energy discriminating x-ray detectors, and novel, target-specific, nanoparticle contrast agents promise to establish spectral CT as a powerful functional imaging modality. Common to all of these applications is the need for a material decomposition algorithm which is robust in the presence of noise. Here, we develop such an algorithm which uses spectrally joint, piece-wise constant kernel regression and the split Bregman method to iteratively solve for a material decomposition which is gradient sparse, quantitatively accurate, and minimally biased. We call this algorithm spectral diffusion because it integrates structural information from multiple spectral channels and their corresponding material decompositions within the framework of diffusion-like denoising algorithms (e.g. anisotropic diffusion, total variation, bilateral filtration). Using a 3D, digital bar phantom and a material sensitivity matrix calibrated for use with a polychromatic x-ray source, we quantify the limits of detectability (CNR = 5) afforded by spectral diffusion in the triple-energy material decomposition of iodine (3.1 mg/mL), gold (0.9 mg/mL), and gadolinium (2.9 mg/mL) concentrations. We then apply spectral diffusion to the in vivo separation of these three materials in the mouse kidneys, liver, and spleen. PMID:25296173
Bailey, E A; Dutton, A W; Mattingly, M; Devasia, S; Roemer, R B
1998-01-01
Reduced-order modelling techniques can make important contributions in the control and state estimation of large systems. In hyperthermia, reduced-order modelling can provide a useful tool by which a large thermal model can be reduced to the most significant subset of its full-order modes, making real-time control and estimation possible. Two such reduction methods, one based on modal decomposition and the other on balanced realization, are compared in the context of simulated hyperthermia heat transfer problems. The results show that the modal decomposition reduction method has three significant advantages over that of balanced realization. First, modal decomposition reduced models result in less error, when compared to the full-order model, than balanced realization reduced models of similar order in problems with low or moderate advective heat transfer. Second, because the balanced realization based methods require a priori knowledge of the sensor and actuator placements, the reduced-order model is not robust to changes in sensor or actuator locations, a limitation not present in modal decomposition. Third, the modal decomposition transformation is less demanding computationally. On the other hand, in thermal problems dominated by advective heat transfer, numerical instabilities make modal decomposition based reduction problematic. Modal decomposition methods are therefore recommended for reduction of models in which advection is not dominant and research continues into methods to render balanced realization based reduction more suitable for real-time clinical hyperthermia control and estimation.
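A minimal sketch of the modal-decomposition reduction compared above: the system matrix is diagonalized, only the r slowest (most significant) modes are retained, and the response is computed in the reduced modal basis. A 1D diffusion chain stands in for the full hyperthermia thermal model, and the retained order r is an illustrative choice.

```python
# Sketch: modal truncation of a linear thermal model x' = A x + b u.
import numpy as np

n, r = 100, 6
A = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)   # discrete Laplacian
b = np.zeros(n); b[n // 2] = 1.0                          # localized heat input

w, V = np.linalg.eigh(A)                                  # full modal decomposition
idx = np.argsort(w)[::-1][:r]                             # r slowest-decaying modes
wr, Vr = w[idx], V[:, idx]

# Steady-state temperature rise: full model vs reduced modal model.
x_full = -np.linalg.solve(A, b)
x_red = Vr @ (-(Vr.T @ b) / wr)
err = np.linalg.norm(x_red - x_full) / np.linalg.norm(x_full)
print(f"relative error with {r} of {n} modes: {err:.3%}")
```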
The use of the modified Cholesky decomposition in divergence and classification calculations
NASA Technical Reports Server (NTRS)
Van Rooy, D. L.; Lynn, M. S.; Snyder, C. H.
1973-01-01
This report analyzes the use of the modified Cholesky decomposition technique as applied to the feature selection and classification algorithms used in the analysis of remote sensing data (e.g., as in LARSYS). This technique is approximately 30% faster in classification and a factor of 2-3 faster in divergence, as compared with LARSYS. Also numerical stability and accuracy are slightly improved. Other methods necessary to deal with numerical stability problems are briefly discussed.
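The reason a Cholesky factorization speeds up Gaussian maximum-likelihood classification of the kind used in such remote-sensing systems can be shown in a few lines: both the log-determinant and the Mahalanobis distance in the per-class discriminant come from one triangular factorization, with no explicit matrix inverse. The dimensions and data below are illustrative, not LARSYS quantities.

```python
# Sketch: Gaussian ML discriminant via Cholesky factorization.
import numpy as np
from scipy.linalg import cho_factor, cho_solve

rng = np.random.default_rng(7)
d = 12                                       # spectral bands (illustrative)
X = rng.normal(size=(500, d))                # training pixels of one class
mu = X.mean(axis=0)
S = np.cov(X, rowvar=False)

factor = cho_factor(S)                       # one O(d^3/3) factorization
logdet = 2.0 * np.sum(np.log(np.diag(factor[0])))   # log|S| from the factor

def discriminant(x):
    # Gaussian log-likelihood up to a constant; no inverse is formed.
    resid = x - mu
    maha = resid @ cho_solve(factor, resid)  # triangular solves only
    return -0.5 * (logdet + maha)

print(round(float(discriminant(rng.normal(size=d))), 3))
```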
Zhou, Xuhui; Xu, Xia; Zhou, Guiyao; Luo, Yiqi
2018-02-01
Temperature sensitivity of soil organic carbon (SOC) decomposition is one of the major uncertainties in predicting climate-carbon (C) cycle feedback. Results from previous studies are highly contradictory, with old soil C decomposition being more, similarly, or less sensitive to temperature than decomposition of young fractions. The contradictory results partly stem from difficulties in distinguishing old from young SOC and their changes over time in experiments with or without isotopic techniques. In this study, we conducted a long-term field incubation experiment with deep soil collars (PVC tubes, 0-70 cm in depth, 10 cm in diameter) that exclude root C input to examine the apparent temperature sensitivity of SOC decomposition under ambient and warming treatments from 2002 to 2008. The data from the experiment were assimilated into a multi-pool soil C model to estimate the intrinsic temperature sensitivity of SOC decomposition and the C residence times of three SOC fractions (i.e., active, slow, and passive) using a data assimilation (DA) technique. As active SOC with a short C residence time was progressively depleted in the deep soil collars under both ambient and warming treatments, the residence times of the whole SOC became longer over time. Concomitantly, the estimated apparent and intrinsic temperature sensitivities of SOC decomposition also became gradually higher over time as more than 50% of the active SOC was depleted. Thus, the temperature sensitivity of soil C decomposition in the deep soil collars was positively correlated with the mean C residence times. However, the regression slope of the temperature sensitivity against the residence time was lower under the warming treatment than under ambient temperature, indicating that other processes also regulate the temperature sensitivity of SOC decomposition. These results indicate that old SOC decomposition is more sensitive to temperature than that of young components, making the old C more vulnerable to a future warmer climate. © 2017 John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Megherbi, Dalila B.; Lodhi, S. M.; Boulenouar, A. J.
2001-03-01
This work, in the field of automated document processing, addresses the problem of representation and recognition of Urdu characters using a Fourier representation and a Neural Network architecture. In particular, we show that a two-stage Neural Network scheme classifies 36 Urdu characters into seven sub-classes, each characterized by one of seven proposed fuzzy features specifically related to Urdu characters. We show that Fourier descriptors and the Neural Network provide a remarkably simple way to draw definite conclusions from vague, ambiguous, noisy, or imprecise information. In particular, we illustrate the concept of interest regions and describe a framing method that makes the proposed Urdu character recognition technique robust and invariant to scaling and translation. We also show that character rotation is dealt with by using the Hotelling transform. This transform is based upon the eigenvalue decomposition of the covariance matrix of an image, providing a method of determining the orientation of the major axis of an object within an image. Finally, experimental results are presented to show the power and robustness of the proposed two-stage Neural Network based technique for Urdu character recognition, its fault tolerance, and its high recognition accuracy.
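The Hotelling-transform step described here amounts to an eigendecomposition of the covariance of the character's pixel coordinates. A sketch follows, with a synthetic elongated point cloud standing in for a scanned character; the 30-degree orientation and blob shape are illustrative assumptions.

```python
# Sketch: estimate and normalize object orientation via the Hotelling
# transform (eigendecomposition of the coordinate covariance matrix).
import numpy as np

rng = np.random.default_rng(8)
# Elongated cloud of "ink" pixel coordinates, rotated by ~30 degrees.
pts = rng.normal(size=(400, 2)) * [8.0, 2.0]
theta = np.deg2rad(30)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
pts = pts @ R.T

cov = np.cov(pts, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
major = eigvecs[:, np.argmax(eigvals)]        # direction of the major axis
angle = np.degrees(np.arctan2(major[1], major[0]))
print("estimated orientation: %.1f degrees" % angle)

# Express coordinates in the eigenbasis (major axis first) so that the
# subsequent recognition stage sees a rotation-normalized character.
aligned = pts @ eigvecs[:, ::-1]
print("variances after alignment:", np.var(aligned, axis=0).round(1))
```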
A Parallel Multiclassification Algorithm for Big Data Using an Extreme Learning Machine.
Duan, Mingxing; Li, Kenli; Liao, Xiangke; Li, Keqin
2018-06-01
As data sets become larger and more complicated, an extreme learning machine (ELM) running in a traditional serial environment cannot realize its potential to be fast and effective. Although a parallel ELM (PELM) based on MapReduce processes large-scale data with greater learning speed than the same ELM algorithms in a serial environment, some operations, such as storing intermediate results on disk and keeping multiple copies for each task, are indispensable; these operations create a large amount of extra overhead and degrade the learning speed and efficiency of PELMs. In this paper, an efficient ELM based on the Spark framework (SELM), which includes three parallel subalgorithms, is proposed for big data classification. By partitioning the corresponding data sets reasonably, the hidden-layer output matrix calculation algorithm and the matrix decomposition algorithms perform most of the computations locally. At the same time, they retain the intermediate results in distributed memory and cache the diagonal matrix as a broadcast variable instead of making several copies for each task, which greatly reduces costs and strengthens the learning ability of the SELM. Finally, we implement our SELM algorithm to classify large data sets. Extensive experiments have been conducted to validate the effectiveness of the proposed algorithms. As shown, our SELM achieves steadily increasing speedups as the cluster grows from 10 to 35 nodes.
Strategies for reducing large fMRI data sets for independent component analysis.
Wang, Ze; Wang, Jiongjiong; Calhoun, Vince; Rao, Hengyi; Detre, John A; Childress, Anna R
2006-06-01
In independent component analysis (ICA), principal component analysis (PCA) is generally used to reduce the raw data to a few principal components (PCs) through eigenvector decomposition (EVD) on the data covariance matrix. Although this works for spatial ICA (sICA) on moderately sized fMRI data, it is intractable for temporal ICA (tICA), since typical fMRI data have a high spatial dimension, resulting in an unmanageable data covariance matrix. To solve this problem, two practical data reduction methods are presented in this paper. The first solution is to calculate the PCs of tICA from the PCs of sICA. This approach works well for moderately sized fMRI data; however, it is highly computationally intensive, even intractable, when the number of scans increases. The second solution proposed is to perform PCA decomposition via a cascade recursive least squared (CRLS) network, which provides a uniform data reduction solution for both sICA and tICA. Without the need to calculate the covariance matrix, CRLS extracts PCs directly from the raw data, and the PC extraction can be terminated after computing an arbitrary number of PCs without the need to estimate the whole set of PCs. Moreover, when the whole data set becomes too large to be loaded into the machine memory, CRLS-PCA can save data retrieval time by reading the data once, while the conventional PCA requires numerous data retrieval steps for both covariance matrix calculation and PC extractions. Real fMRI data were used to evaluate the PC extraction precision, computational expense, and memory usage of the presented methods.
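The EVD-based reduction this entry starts from can be sketched with the small Gram-matrix route, which keeps temporal PCA tractable when the spatial dimension is large; the dimensions below are toy values, and real fMRI data would motivate the paper's CRLS network instead.

```python
# Sketch: PCA reduction via eigendecomposition before ICA. For tICA the
# V x V spatial covariance is intractable, so the small T x T Gram matrix
# is used; it shares its nonzero eigenvalues with the covariance.
import numpy as np

rng = np.random.default_rng(9)
T, V, k = 200, 5000, 10                 # time points, voxels, retained PCs
X = rng.normal(size=(T, V))             # stand-in for centered fMRI data

G = X @ X.T / V                         # T x T Gram matrix
w, U = np.linalg.eigh(G)
idx = np.argsort(w)[::-1][:k]
pcs = U[:, idx].T @ X                   # k spatial principal components
print(pcs.shape)                        # reduced data handed to ICA
```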
NASA Astrophysics Data System (ADS)
Bui-Thanh, T.; Girolami, M.
2014-11-01
We consider the Riemann manifold Hamiltonian Monte Carlo (RMHMC) method for solving statistical inverse problems governed by partial differential equations (PDEs). The Bayesian framework is employed to cast the inverse problem into the task of statistical inference whose solution is the posterior distribution in infinite dimensional parameter space conditional upon observation data and Gaussian prior measure. We discretize both the likelihood and the prior using the H1-conforming finite element method together with a matrix transfer technique. The power of the RMHMC method is that it exploits the geometric structure induced by the PDE constraints of the underlying inverse problem. Consequently, each RMHMC posterior sample is almost uncorrelated/independent from the others providing statistically efficient Markov chain simulation. However this statistical efficiency comes at a computational cost. This motivates us to consider computationally more efficient strategies for RMHMC. At the heart of our construction is the fact that for Gaussian error structures the Fisher information matrix coincides with the Gauss-Newton Hessian. We exploit this fact in considering a computationally simplified RMHMC method combining state-of-the-art adjoint techniques and the superiority of the RMHMC method. Specifically, we first form the Gauss-Newton Hessian at the maximum a posteriori point and then use it as a fixed constant metric tensor throughout RMHMC simulation. This eliminates the need for the computationally costly differential geometric Christoffel symbols, which in turn greatly reduces computational effort at a corresponding loss of sampling efficiency. We further reduce the cost of forming the Fisher information matrix by using a low rank approximation via a randomized singular value decomposition technique. This is efficient since a small number of Hessian-vector products are required. The Hessian-vector product in turn requires only two extra PDE solves using the adjoint technique. Various numerical results up to 1025 parameters are presented to demonstrate the ability of the RMHMC method in exploring the geometric structure of the problem to propose (almost) uncorrelated/independent samples that are far away from each other, and yet the acceptance rate is almost unity. The results also suggest that for the PDE models considered the proposed fixed metric RMHMC can attain almost as high a quality performance as the original RMHMC, i.e. generating (almost) uncorrelated/independent samples, while being two orders of magnitude less computationally expensive.
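The low-rank step can be sketched as a randomized eigendecomposition that touches the operator only through matrix-vector products (each of which, in the setting above, costs two extra PDE solves via the adjoint technique). The explicit test matrix below is a stand-in for those products; the rank and oversampling values are illustrative.

```python
# Sketch: randomized low-rank eigendecomposition from matvecs only.
import numpy as np

def randomized_eig(matvec, n, r, p=10, seed=0):
    """Low-rank eigendecomposition of a symmetric PSD operator."""
    rng = np.random.default_rng(seed)
    Omega = rng.normal(size=(n, r + p))                  # Gaussian test matrix
    Y = np.column_stack([matvec(Omega[:, j]) for j in range(r + p)])
    Q, _ = np.linalg.qr(Y)                               # orthonormal range basis
    B = np.column_stack([Q.T @ matvec(Q[:, j]) for j in range(Q.shape[1])])
    w, V = np.linalg.eigh((B + B.T) / 2)                 # small dense problem
    idx = np.argsort(w)[::-1][:r]
    return w[idx], Q @ V[:, idx]

# Stand-in "Hessian" with a rapidly decaying spectrum.
n = 300
rng = np.random.default_rng(1)
Qr, _ = np.linalg.qr(rng.normal(size=(n, n)))
H = Qr @ np.diag(1.0 / (np.arange(n) + 1.0) ** 2) @ Qr.T

w, V = randomized_eig(lambda v: H @ v, n, r=20)
print("top eigenvalues:", np.round(w[:5], 4))            # ~1, 1/4, 1/9, ...
```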
INDDGO: Integrated Network Decomposition & Dynamic programming for Graph Optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Groer, Christopher S; Sullivan, Blair D; Weerapurage, Dinesh P
2012-10-01
It is well known that dynamic programming algorithms can utilize tree decompositions to solve some NP-hard problems on graphs, where the complexity is polynomial in the number of nodes and edges in the graph but exponential in the width of the underlying tree decomposition. However, there has been relatively little computational work done to determine the practical utility of such dynamic programming algorithms. We have developed software to construct tree decompositions using various heuristics and have created a fast, memory-efficient dynamic programming implementation for solving maximum weighted independent set. We describe our software and the algorithms we have implemented, focusing on memory-saving techniques for the dynamic programming. We compare the running time and memory usage of our implementation with other techniques for solving maximum weighted independent set, including a commercial integer programming solver and a semi-definite programming solver. Our results indicate that it is possible to solve some instances where the underlying decomposition has width much larger than suggested by the literature. For certain types of problems, our dynamic programming code runs several times faster than these other methods.
Solventless synthesis, morphology, structure and magnetic properties of iron oxide nanoparticles
NASA Astrophysics Data System (ADS)
Das, Bratati; Kusz, Joachim; Reddy, V. Raghavendra; Zubko, Maciej; Bhattacharjee, Ashis
2017-12-01
In this study we report the solventless synthesis of iron oxide through thermal decomposition of acetylferrocene, as well as of its mixtures with maleic anhydride, and the characterization of the synthesized product by various comprehensive physical techniques. The morphology, size, and structure of the reaction products were investigated by scanning electron microscopy, transmission electron microscopy, and X-ray powder diffraction, respectively. Physical characterization techniques such as FT-IR spectroscopy, dc magnetization measurements, and 57Fe Mössbauer spectroscopy were employed to characterize the magnetic properties of the product. The results observed from these studies unequivocally established that the synthesized materials are hematite. The thermal decomposition has been studied with the help of thermogravimetry. A reaction pathway for the synthesis of hematite has been proposed. It is noted that maleic anhydride in the solid reaction environment, as well as the gaseous reaction atmosphere, strongly affects the reaction yield as well as the particle size. In general, a method of preparing hematite nanoparticles through the solventless thermal decomposition technique using organometallic compounds, and the possible use of a reaction promoter, have been discussed in detail.
Investigation of Large Scale Cortical Models on Clustered Multi-Core Processors
2013-02-01
Excerpt: the report describes an RBF network implementation on a GPU platform, in which the first-layer weights are denoted W (with the bias-node weight w), and a Cholesky decomposition algorithm is used to invert the matrix product G^T G arising in the weight computation.
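The excerpt above is fragmentary, but the technique it names is standard: the RBF output weights solve the normal equations (G^T G)w = G^T y, which a Cholesky factorization handles without forming an explicit inverse. A hedged sketch under assumed Gaussian basis functions (all names and parameters are illustrative):

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

# Hypothetical RBF design matrix G (n samples x m centers) and targets y;
# the output weights solve the normal equations (G^T G) w = G^T y.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (200, 1))
centers = np.linspace(-1, 1, 10)[:, None]
G = np.exp(-np.square(X - centers.T) / (2 * 0.2**2))   # Gaussian RBF features
y = np.sin(3 * X[:, 0])

A = G.T @ G + 1e-8 * np.eye(G.shape[1])   # small ridge for numerical safety
c, low = cho_factor(A)                    # Cholesky factorization A = L L^T
w = cho_solve((c, low), G.T @ y)          # two triangular solves, no explicit inverse
```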
Petroselli, Gabriela; Mandal, Mridul Kanti; Chen, Lee Chuin; Ruiz, Gustavo T; Wolcan, Ezequiel; Hiraoka, Kenzo; Nonami, Hiroshi; Erra-Balsells, Rosa
2012-03-01
A group of rhenium(I) complexes including in their structure ligands such as CF(3)SO(3)-, CH(3)CO(2)-, CO, 2,2'-bipyridine, dipyrido[3,2-a:2',3'-c]phenazine, naphthalene-2-carboxylate, anthracene-9-carboxylate, pyrene-1-carboxylate and 1,10-phenanthroline have been studied for the first time by mass spectrometry. Probe electrospray ionization (PESI) is a technique based on electrospray ionization (ESI) that generates electrospray from the tip of a solid metal needle. In this work, mass spectra for organometallic complexes obtained by PESI were compared with those obtained by classical ESI and by high-flow-rate electrospray ionization assisted by corona discharge (HF-ESI-CD), an ideal method for avoiding decomposition of the complexes and inducing their oxidation to yield intact molecular cation radicals in the gas state [M](+·), or their reduction to yield the gas-phase species [M](-·). It was found that both techniques showed, in general, the intact molecular ions of the organometallics studied and provided additional structure-diagnostic fragments. As the rhenium complexes studied in the present work show strong absorption in the UV-visible region, particularly at 355 nm, laser desorption ionization (LDI) mass spectrometry experiments could be conducted. Although intact molecular ions could be detected in only a few cases, LDI mass spectra showed diagnostic fragments for characterization of the complexes' structures. Furthermore, matrix-assisted laser desorption ionization (MALDI) mass spectra were obtained. Nor-harmane, a compound of basic character, was used as the matrix, and the intact molecular ions were detected in two examples, in negative ion mode as the [M](-·) species. Results obtained with 2-[(2E)-3-(4-tert-butylphenyl)-2-methylprop-2-enylidene]malononitrile (DCTB) as the matrix are also described. The LDI experiments provided more information about the rhenium complex structures than did the MALDI ones. Copyright © 2012 John Wiley & Sons, Ltd.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sofronov, I.D.; Voronin, B.L.; Butnev, O.I.
1997-12-31
The aim of the work performed is to develop a 3D parallel program for the numerical calculation of gas dynamics problems with heat conductivity on distributed-memory computational systems (CS), satisfying the condition that the numerical results be independent of the number of processors involved. Two fundamentally different approaches to the structure of massively parallel computations have been developed. The first approach uses a 3D data matrix decomposition reconstructed at each temporal cycle and is a development of parallelization algorithms for multiprocessor CS with shared memory. The second approach is based on using a 3D data matrix decomposition that is not reconstructed during a temporal cycle. The program was developed on the 8-processor CS MP-3 made in VNIIEF and was adapted to the massively parallel CS Meiko-2 at LLNL by joint efforts of the VNIIEF and LLNL staffs. A large number of numerical experiments has been carried out with different numbers of processors, up to 256, and the efficiency of parallelization has been evaluated as a function of the number of processors and their parameters.
Characterization of polymer decomposition products by laser desorption mass spectrometry
NASA Technical Reports Server (NTRS)
Pallix, Joan B.; Lincoln, Kenneth A.; Miglionico, Charles J.; Roybal, Robert E.; Stein, Charles; Shively, Jon H.
1993-01-01
Laser desorption mass spectrometry has been used to characterize the ash-like substances formed on the surfaces of polymer matrix composites (PMCs) during exposure on LDEF. In an effort to minimize fragmentation, material was removed from the sample surfaces by laser desorption and the desorbed neutrals were ionized by electron impact. Ions were detected in a time-of-flight mass analyzer, which allows the entire mass spectrum to be collected for each laser shot. The method is ideal for these studies because only a small amount of ash is available for analysis. Three sets of samples were studied, including C/polysulfone, C/polyimide and C/phenolic. Each set contains leading- and trailing-edge LDEF samples and their respective controls. In each case, the mass spectrum of the ash shows a number of high-mass peaks which can be assigned to fragments of the associated polymer. These high-mass peaks are not observed in the spectra of the control samples. In general, the results indicate that the ash is formed from decomposition of the polymer matrix.
Universal programmable quantum circuit schemes to emulate an operator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Daskin, Anmer; Grama, Ananth; Kollias, Giorgos
Unlike fixed designs, programmable circuit designs support an infinite number of operators. The functionality of a programmable circuit can be altered by simply changing the angle values of the rotation gates in the circuit. Here, we present a new quantum circuit design technique resulting in two general programmable circuit schemes. The circuit schemes can be used to simulate any given operator by setting the angle values in the circuit. This provides a fixed circuit design whose angles are determined from the elements of the given matrix, which can be non-unitary, in an efficient way. We also give both the classical and quantum complexity analysis for these circuits and show that the circuits require only a few classical computations. For electronic structure simulation on a quantum computer, one has to perform the following steps: prepare the initial wave function of the system; present the evolution operator U = e^{-iHt} for a given atomic and molecular Hamiltonian H in terms of a quantum gate array; and apply the phase estimation algorithm to find the energy eigenvalues. Thus, in the circuit model of quantum computing for quantum chemistry, a crucial step is presenting the evolution operator for the atomic and molecular Hamiltonians in terms of quantum gate arrays. Since the presented circuit designs are independent of the matrix decomposition techniques and the global optimization processes used to find quantum circuits for a given operator, high-accuracy simulations can be done for the unitary propagators of molecular Hamiltonians on quantum computers. As an example, we show how to build the circuit design for the hydrogen molecule.
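A minimal illustration of the angle-programmability idea (not the authors' circuit schemes): the matrix realized by a single Y-rotation gate is a function of its angle parameter alone, so a fixed gate layout implements different operators as the angles change.

```python
import numpy as np

def ry(theta):
    """Matrix of a single-qubit Y-rotation gate; the 'program' of a
    programmable circuit lives entirely in angle parameters like theta."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

# The same fixed layout realizes different operators for different angles.
print(ry(0.0))      # identity
print(ry(np.pi))    # maps |0> to |1>, and |1> to -|0>
```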
Parallel solution of the symmetric tridiagonal eigenproblem. Research report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jessup, E.R.
1989-10-01
This thesis discusses methods for computing all eigenvalues and eigenvectors of a symmetric tridiagonal matrix on a distributed-memory Multiple Instruction, Multiple Data (MIMD) multiprocessor. Only those techniques having the potential for both high numerical accuracy and significant large-grained parallelism are investigated. These include the QL method or Cuppen's divide and conquer method based on rank-one updating to compute both eigenvalues and eigenvectors, bisection to determine eigenvalues, and inverse iteration to compute eigenvectors. To begin, the methods are compared with respect to computation time, communication time, parallel speedup, and accuracy. Experiments on an iPSC hypercube multiprocessor reveal that Cuppen's method is the most accurate approach, but bisection with inverse iteration is the fastest and most parallel. Because the accuracy of the latter combination is determined by the quality of the computed eigenvectors, the factors influencing the accuracy of inverse iteration are examined. This includes, in part, statistical analysis of the effect of a starting vector with random components. These results are used to develop an implementation of inverse iteration producing eigenvectors with lower residual error and better orthogonality than those generated by the EISPACK routine TINVIT. The thesis concludes with adaptations of methods for the symmetric tridiagonal eigenproblem to the related problem of computing the singular value decomposition (SVD) of a bidiagonal matrix.
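A present-day counterpart of the thesis' preferred combination is available through LAPACK wrappers; the sketch below (illustrative, not the thesis code) asks SciPy for a partial spectrum, which routes the computation through bisection for eigenvalues and inverse iteration for eigenvectors.

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

# Discrete 1D Laplacian as a symmetric tridiagonal test matrix.
n = 1000
d = 2.0 * np.ones(n)          # diagonal
e = -1.0 * np.ones(n - 1)     # off-diagonal

# Requesting an index range makes SciPy use LAPACK's bisection (?STEBZ) for
# eigenvalues and inverse iteration (?STEIN) for eigenvectors -- the
# combination the thesis found fastest and most parallel.
w, v = eigh_tridiagonal(d, e, select='i', select_range=(0, 9))

# Analytic check: eigenvalues are 2 - 2*cos(k*pi/(n+1)), k = 1..n.
k = np.arange(1, 11)
assert np.allclose(w, 2 - 2 * np.cos(k * np.pi / (n + 1)))
```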
Photochemical and radiation-chemical aspects of matrix acidity effects on some organic systems
NASA Astrophysics Data System (ADS)
Ambroz, H. B.; Przybytniak, G. K.; Wronska, T.; Kemp, T. J.
The role of matrix effects in radiolysis and photolysis is illustrated using two systems: organosulphur compounds and benzenediazonium salts. Their intermediates as detected by low temperature ESR and optical spectroscopy or FAB-MS give evidence that the main reaction pathways depend strongly on these effects. Changes in matrix acidity can control the formation of neutral radical, ion-radical or ionic species which are crucial to the character of the final products of irradiation of organosulphur compounds, which are of great importance in medicine, biology, ecology and industry. Microenvironmental influences determine whether the triplet aryl cation or radical species are detected as the principal or sole intermediates in the decomposition of diazonium salts, a process leading to different stable products with industrial application.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Singh, Supriya; Srivastava, Pratibha; Singh, Gurdip, E-mail: gsingh4us@yahoo.com
2013-02-15
Graphical abstract: The prepared nanoferrites were characterized by FE-SEM and bright-field TEM micrographs. The catalytic effect of these nanoferrites was evaluated on the thermal decomposition of ammonium perchlorate using TG and TG-DSC techniques. The kinetics of the thermal decomposition of AP was evaluated using isothermal TG data by model-fitting as well as isoconversional methods. Highlights: ► Synthesis of ferrite nanostructures (∼20.0 nm) by a wet-chemical method under different synthetic conditions. ► Characterization using XRD, FE-SEM, EDS, TEM, HRTEM and SAED patterns. ► Catalytic activity of ferrite nanostructures on AP thermal decomposition by thermal techniques. ► Burning rate measurements of CSPs with ferrite nanostructures. ► Kinetics of thermal decomposition of AP + nanoferrites. -- Abstract: In this paper, nanoferrites of Mn, Co and Ni were synthesized by a wet chemical method and characterized by X-ray diffraction (XRD), field emission scanning electron microscopy (FE-SEM), energy dispersive X-ray spectra (EDS), transmission electron microscopy (TEM) and high resolution transmission electron microscopy (HR-TEM). Their catalytic activity was investigated on the thermal decomposition of ammonium perchlorate (AP) and composite solid propellants (CSPs) using thermogravimetry (TG), TG coupled with differential scanning calorimetry (TG-DSC) and ignition delay measurements. The kinetics of the thermal decomposition of AP + nanoferrites has also been investigated using isoconversional and model-fitting approaches applied to isothermal TG decomposition data. The burning rate of CSPs was considerably enhanced by these nanoferrites. Addition of nanoferrites to AP shifted the high-temperature decomposition peak toward lower temperature. All these studies reveal that ferrite nanorods show the best catalytic activity, superior to that of nanospheres and nanocubes.
NASA Astrophysics Data System (ADS)
Fan, Hong-Yi; Fan, Yue
2002-01-01
By virtue of the technique of integration within an ordered product of operators and the Schmidt decomposition of the entangled state |η〉, we reduce the general projection calculation in the theory of quantum teleportation to as simple a form as possible and present a general formalism for teleporting quantum states of continuous variables. The project was supported by the National Natural Science Foundation of China and the Educational Ministry Foundation of China.
Ludeña-Choez, Jimmy; Quispe-Soncco, Raisa; Gallardo-Antolín, Ascensión
2017-01-01
Feature extraction for Acoustic Bird Species Classification (ABSC) tasks has traditionally been based on parametric representations that were specifically developed for speech signals, such as Mel Frequency Cepstral Coefficients (MFCC). However, the discrimination capabilities of these features for ABSC could be enhanced by accounting for the vocal production mechanisms of birds, and, in particular, the spectro-temporal structure of bird sounds. In this paper, a new front-end for ABSC is proposed that incorporates this specific information through the non-negative decomposition of bird sound spectrograms. It consists of the following two different stages: short-time feature extraction and temporal feature integration. In the first stage, which aims at providing a better spectral representation of bird sounds on a frame-by-frame basis, two methods are evaluated. In the first method, cepstral-like features (NMF_CC) are extracted by using a filter bank that is automatically learned by means of the application of Non-Negative Matrix Factorization (NMF) on bird audio spectrograms. In the second method, the features are directly derived from the activation coefficients of the spectrogram decomposition as performed through NMF (H_CC). The second stage summarizes the most relevant information contained in the short-time features by computing several statistical measures over long segments. The experiments show that the use of NMF_CC and H_CC in conjunction with temporal integration significantly improves the performance of a Support Vector Machine (SVM)-based ABSC system with respect to conventional MFCC.
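A minimal sketch of the spectrogram factorization underlying the NMF_CC and H_CC features, using scikit-learn's NMF on a random stand-in spectrogram (the paper's actual front-end, audio data and parameter choices are not reproduced here):

```python
import numpy as np
from sklearn.decomposition import NMF

# Toy magnitude spectrogram S (freq_bins x frames); in the paper S would come
# from bird-sound audio. NMF factorizes S ~ W @ H with W >= 0 and H >= 0:
# columns of W act as a learned filter bank, rows of H hold activations.
rng = np.random.default_rng(0)
S = np.abs(rng.standard_normal((257, 400)))

model = NMF(n_components=16, init='nndsvda', max_iter=500, random_state=0)
W = model.fit_transform(S)       # spectral basis (the learned "filter bank")
H = model.components_            # activation coefficients (basis of H_CC)
```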
On the physical significance of the Effective Independence method for sensor placement
NASA Astrophysics Data System (ADS)
Jiang, Yaoguang; Li, Dongsheng; Song, Gangbing
2017-05-01
Optimally deploying sparse sensors for damage identification and structural health monitoring is always a challenging task. The Effective Independence (EI) method is one of the most influential sensor placement methods and is discussed in this paper. Specifically, the effect of different weighting coefficients on the maximization of the Fisher information matrix (FIM) and the physical significance of the re-orthogonalization of modal shapes through QR decomposition in the EI method are addressed. By analyzing the widely used EI method, we found that the absolute identification space put forward along with the EI method is preferable for ensuring the maximization of the FIM, rather than the original EI coefficient, which was post-multiplied by a weighting matrix. That is, deleting the row with the minimum EI coefficient cannot achieve the objective of maximizing the trace of the FIM as initially conceived. Furthermore, we observed that in the computation of the EI method, the sum of each retained row in the absolute identification space is a constant in each iteration. This property is revealed distinctively by the product of the target mode and its transpose, and its form is similar to an alternative formula of the EI method through orthogonal-triangular (QR) decomposition previously proposed by the authors. With it, the physical significance of the re-orthogonalization of modal shapes through QR decomposition in the computation of the EI method can be manifested from a new perspective. Finally, two simple examples are provided to demonstrate the above two observations.
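The backward-elimination form of the EI method is compact enough to sketch; the following is an illustrative implementation on random stand-in mode shapes, not the authors' code:

```python
import numpy as np

def effective_independence(Phi, n_sensors):
    """Backward-elimination EI: repeatedly delete the candidate location that
    contributes least to the Fisher information matrix Phi^T Phi, until
    n_sensors locations remain."""
    keep = np.arange(Phi.shape[0])
    while keep.size > n_sensors:
        P = Phi[keep]
        # EI coefficients: diagonal of the projection P (P^T P)^{-1} P^T
        Ed = np.einsum('ij,ij->i', P @ np.linalg.inv(P.T @ P), P)
        keep = np.delete(keep, np.argmin(Ed))
    return keep

# 50 candidate locations, 4 target modes (random stand-in for FE mode shapes).
Phi = np.random.default_rng(1).standard_normal((50, 4))
print(effective_independence(Phi, 8))   # indices of the 8 retained sensors
```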
Thermal effect on structure organizations in cobalt-fullerene nanocomposition.
Lavrentiev, Vasily; Vacik, Jiri; Naramoto, Hiroshi; Sakai, Seiji
2010-04-01
The effect of deposition temperature (Ts) on the structure of Co-C60 nanocomposite (NC) films prepared by simultaneous deposition of cobalt and fullerene on sapphire is presented. The NC structure variations with Ts increasing from room temperature (RT) to 400 degrees C have been analyzed using scanning electron microscopy (SEM), atomic force microscopy (AFM) and Raman spectroscopy. AFM and SEM show a granule-like structure of the Co-C60 film. The mixture film deposited at RT includes hills on the surface, suggesting accumulation of internal stress during phase separation. Raman spectra show a 25 cm(-1) downshift of the Ag(2) C60 peak, suggesting -Co-C60- polymerization in the C60-based matrix of the NC film. Analysis of the Raman spectra has revealed the existence of amorphous carbon (a-C) in the NC matrix, which indicates C60 decomposition. Increasing Ts to 200 degrees C smooths the surface hills. In parallel, the downshift of the Ag(2) peak decreases to 16 cm(-1), implying more pronounced phase separation and lower -Co-C60- polymerization efficiency. The a-C content also slightly increases. Further increasing Ts to 400 degrees C changes the NC structure dramatically. AFM shows an evident enlargement of the granules. According to the Raman spectra, deposition at high Ts yields pronounced C60 decomposition, increasing the a-C content. Features of the a-C Raman peak imply nucleation of graphitic islands at the NC interfaces. The abundant decomposition of C60 in the mixture film deposited at 400 degrees C is attributed to a cobalt catalytic effect.
A simple suboptimal least-squares algorithm for attitude determination with multiple sensors
NASA Technical Reports Server (NTRS)
Brozenec, Thomas F.; Bender, Douglas J.
1994-01-01
Three-axis attitude determination is equivalent to finding a coordinate transformation matrix which transforms a set of reference vectors fixed in inertial space to a set of measurement vectors fixed in the spacecraft. The attitude determination problem can be expressed as a constrained optimization problem. The constraint is that a coordinate transformation matrix must be proper, real, and orthogonal. A transformation matrix can be thought of as optimal in the least-squares sense if it maps the measurement vectors to the reference vectors with minimal 2-norm errors and meets the above constraint. This constrained optimization problem is known as Wahba's problem. Several algorithms which solve Wahba's problem exactly have been developed and used. These algorithms, while steadily improving, are all rather complicated. Furthermore, they involve such numerically unstable or sensitive operations as the matrix determinant, the matrix adjoint, and Newton-Raphson iterations. This paper describes an algorithm which minimizes Wahba's loss function, but without the constraint. When the constraint is ignored, the problem can be solved by a straightforward, numerically stable least-squares algorithm such as QR decomposition. Even though the algorithm does not explicitly take the constraint into account, it still yields a nearly orthogonal matrix for most practical cases; orthogonality only becomes corrupted when the sensor measurements are very noisy, on the same order of magnitude as the attitude rotations. The algorithm can be simplified if the attitude rotations are small enough that the approximation sin(theta) approximately equals theta holds. We then compare the computational requirements of several well-known algorithms. For the general large-angle case, the QR least-squares algorithm is competitive with all other known algorithms and faster than most. If attitude rotations are small, the least-squares algorithm can be modified to run faster, and this modified algorithm is faster than all but a similarly specialized version of the QUEST algorithm. We also introduce a novel measurement-averaging technique which reduces the n-measurement case to the two-measurement case for our particular application, a star tracker and earth sensor mounted on an earth-pointed geosynchronous communications satellite. Using this technique, many n-measurement problems reduce to three or fewer measurements; this reduces the amount of required calculation without significant degradation in accuracy. Finally, we present the results of some tests comparing the least-squares algorithm with the QUEST and FOAM algorithms in the two-measurement case. For our example case, all three algorithms performed with similar accuracy.
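A hedged sketch of the unconstrained least-squares idea (numpy's lstsq is SVD-based, whereas the paper uses QR; both are numerically stable unconstrained solvers), showing that the result is nearly orthogonal when measurement noise is small:

```python
import numpy as np

def attitude_lstsq(R, B):
    """Unconstrained least-squares attitude: find A minimizing ||A R - B||_F
    with no orthogonality constraint, solved column-wise via R^T A^T = B^T."""
    A_T, *_ = np.linalg.lstsq(R.T, B.T, rcond=None)
    return A_T.T

# Toy case: a small rotation about z, with noisy unit-vector measurements.
th = 0.05
A_true = np.array([[np.cos(th), -np.sin(th), 0.0],
                   [np.sin(th),  np.cos(th), 0.0],
                   [0.0, 0.0, 1.0]])
R = np.random.default_rng(2).standard_normal((3, 6))
R /= np.linalg.norm(R, axis=0)                         # reference unit vectors
B = A_true @ R + 1e-3 * np.random.default_rng(3).standard_normal((3, 6))
A = attitude_lstsq(R, B)                               # nearly orthogonal here
print(np.linalg.norm(A.T @ A - np.eye(3)))             # small orthogonality error
```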
NASA Astrophysics Data System (ADS)
Ma, Yehao; Li, Xian; Huang, Pingjie; Hou, Dibo; Wang, Qiang; Zhang, Guangxin
2017-04-01
In many situations the THz spectroscopic data observed from complex samples represent the integrated result of several interrelated variables or feature components acting together. The actual information contained in the original data might be overlapping and there is a necessity to investigate various approaches for model reduction and data unmixing. The development and use of low-rank approximate nonnegative matrix factorization (NMF) and smooth constraint NMF (CNMF) algorithms for feature components extraction and identification in the fields of terahertz time domain spectroscopy (THz-TDS) data analysis are presented. The evolution and convergence properties of NMF and CNMF methods based on sparseness, independence and smoothness constraints for the resulting nonnegative matrix factors are discussed. For general NMF, its cost function is nonconvex and the result is usually susceptible to initialization and noise corruption, and may fall into local minima and lead to unstable decomposition. To reduce these drawbacks, smoothness constraint is introduced to enhance the performance of NMF. The proposed algorithms are evaluated by several THz-TDS data decomposition experiments including a binary system and a ternary system simulating some applications such as medicine tablet inspection. Results show that CNMF is more capable of finding optimal solutions and more robust for random initialization in contrast to NMF. The investigated method is promising for THz data resolution contributing to unknown mixture identification.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Antipova, Olga; Orgel, Joseph P.R.O.
Rheumatoid arthritis (RA) is a systemic autoimmune inflammatory and destructive joint disorder that affects tens of millions of people worldwide. Normal healthy joints maintain a balance between the synthesis of extracellular matrix (ECM) molecules and the proteolytic degradation of damaged ones. In the case of RA, this balance is shifted toward matrix destruction due to increased production of cleavage enzymes and the presence of (autoimmune) immunoglobulins resulting from an inflammation-induced immune response. Herein we demonstrate that a polyclonal antibody against the proteoglycan biglycan (BG) causes tissue destruction that may be analogous to that of RA-affected tissues. The effect of the antibody is more potent than harsh chemical and/or enzymatic treatments designed to mimic arthritis-like fibril de-polymerization. In RA cases, the immune response to inflammation causes synovial fibroblasts, monocytes and macrophages to produce cytokines and secrete matrix remodeling enzymes, whereas B cells are stimulated to produce immunoglobulins. The specific antigen that causes the RA immune response has not yet been identified, although possible candidates have been proposed, including collagen types I and II, and proteoglycans (PGs) such as biglycan. We speculate that the initiation of RA-associated tissue destruction in vivo may involve a similar non-enzymatic decomposition of collagen fibrils via the immunoglobulins themselves that we observe here ex vivo.
Analysis of the effectiveness of steam retorting of oil shale
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jacobs, H.R.; Pensel, R.W.; Udell, K.S.
A numerical model is developed to describe the retorting of oil shale using superheated steam. The model describes not only the temperature history of the shale but also predicts the evolution of shale oil from kerogen decomposition and the breakdown of the carbonates existing in the shale matrix. The heat transfer coefficients between the water and the shale are determined from experiments, using the model to reduce the data. Similarly, the model is used with thermogravimetric analysis experiments to develop an improved kinetics expression for kerogen decomposition in a steam environment. Numerical results are presented which indicate the effect of oil shale particle size and steam temperature on oil production.
In vitro decomposition of Sphagnum by some microfungi resembles white rot of wood.
Rice, Adrianne V; Tsuneda, Akihiko; Currah, Randolph S
2006-06-01
The abilities of some ascomycetes (Myxotrichaceae) from a Sphagnum bog in Alberta to degrade cellulose, phenolics, and Sphagnum tissue were compared with those of two basidiomycetes. Most Myxotrichaceae degraded cellulose and tannic acid, and removed cell-wall components simultaneously from Sphagnum tissues, whereas the basidiomycetes degraded cellulose and insoluble phenolics, and preferentially removed the polyphenolic matrix from Sphagnum cell walls. Mass losses from Sphagnum varied from up to 50% for some ascomycetes to a maximum of 35% for the basidiomycetes. The decomposition of Sphagnum by the Myxotrichaceae was analogous to the white rot of wood and indicates that these fungi have the potential to cause significant mineralization of carbon in bogs.
Nature of Driving Force for Protein Folding: A Result From Analyzing the Statistical Potential
NASA Astrophysics Data System (ADS)
Li, Hao; Tang, Chao; Wingreen, Ned S.
1997-07-01
In a statistical approach to protein structure analysis, Miyazawa and Jernigan derived a 20×20 matrix of inter-residue contact energies between different types of amino acids. Using the method of eigenvalue decomposition, we find that the Miyazawa-Jernigan matrix can be accurately reconstructed from its first two principal component vectors as M_ij = C_0 + C_1(q_i + q_j) + C_2 q_i q_j, with constant C's and 20 q values associated with the 20 amino acids. This regularity is due to hydrophobic interactions and a force of demixing, the latter obeying Hildebrand's solubility theory of simple liquids.
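Because M_ij = C_0 + C_1(q_i + q_j) + C_2 q_i q_j is spanned by the vectors 1 and q, such a matrix has rank at most two, so two eigencomponents suffice. An illustrative reconstruction on a synthetic matrix of this form (not the Miyazawa-Jernigan data):

```python
import numpy as np

# Synthetic symmetric interaction matrix of the stated form, plus small noise.
rng = np.random.default_rng(0)
q = rng.uniform(-1, 1, 20)                       # stand-in "hydrophobicity" values
M = 0.3 + 0.5 * (q[:, None] + q[None, :]) + 0.8 * np.outer(q, q)
M += 0.01 * rng.standard_normal((20, 20))
M = 0.5 * (M + M.T)                              # symmetrize the noise

lam, V = np.linalg.eigh(M)
idx = np.argsort(np.abs(lam))[::-1][:2]          # two dominant eigencomponents
M2 = (V[:, idx] * lam[idx]) @ V[:, idx].T        # rank-2 reconstruction
print(np.linalg.norm(M - M2) / np.linalg.norm(M))  # small relative error
```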
On the Convergence of an Implicitly Restarted Arnoldi Method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lehoucq, Richard B.
We show that Sorensen's [35] implicitly restarted Arnoldi method (including its block extension) is simultaneous iteration with an implicit projection step to accelerate convergence to the invariant subspace of interest. By using the geometric convergence theory for simultaneous iteration due to Watkins and Elsner [43], we prove that an implicitly restarted Arnoldi method can achieve a super-linear rate of convergence to the dominant invariant subspace of a matrix. Moreover, we show how an IRAM computes a nested sequence of approximations for the partial Schur decomposition associated with the dominant invariant subspace of a matrix.
NASA Astrophysics Data System (ADS)
Forootan, Ehsan; Kusche, Jürgen; Talpe, Matthieu; Shum, C. K.; Schmidt, Michael
2017-12-01
In recent decades, decomposition techniques have enabled increasingly more applications for dimension reduction, as well as extraction of additional information from geophysical time series. Traditionally, the principal component analysis (PCA)/empirical orthogonal function (EOF) method and, more recently, the independent component analysis (ICA) have been applied to extract statistically orthogonal (uncorrelated) and independent modes that represent the maximum variance of time series, respectively. PCA and ICA can be classified as stationary signal decomposition techniques, since they are based on decomposing the autocovariance matrix and diagonalizing higher (than two) order statistical tensors from centered time series, respectively. However, the stationarity assumption in these techniques is not justified for many geophysical and climate variables even after removing cyclic components, e.g., the commonly removed dominant seasonal cycles. In this paper, we present a novel decomposition method, the complex independent component analysis (CICA), which can be applied to extract non-stationary (changing in space and time) patterns from geophysical time series. Here, CICA is derived as an extension of real-valued ICA, where (a) we first define a new complex dataset that contains the observed time series in its real part, and their Hilbert-transformed series as its imaginary part, (b) an ICA algorithm based on diagonalization of fourth-order cumulants is then applied to decompose the new complex dataset in (a), and finally, (c) the dominant independent complex modes are extracted and used to represent the dominant space and time amplitudes and associated phase propagation patterns. The performance of CICA is examined by analyzing synthetic data constructed from multiple physically meaningful modes in a simulation framework with known truth. Next, global terrestrial water storage (TWS) data from the Gravity Recovery And Climate Experiment (GRACE) gravimetry mission (2003-2016), and satellite radiometric sea surface temperature (SST) data (1982-2016) over the Atlantic and Pacific Oceans, are used with the aim of demonstrating signal separations of the North Atlantic Oscillation (NAO) from the Atlantic Multi-decadal Oscillation (AMO), and the El Niño Southern Oscillation (ENSO) from the Pacific Decadal Oscillation (PDO). CICA results indicate that ENSO-related patterns can be extracted from GRACE TWS with an accuracy of 0.5-1 cm in terms of equivalent water height (EWH). The magnitude of the errors in extracting NAO or AMO from SST data using the complex EOF (CEOF) approach reaches up to 50% of the signal itself, while it is reduced to 16% when applying CICA. Larger errors, with magnitudes of 100% and 30% of the signal itself, are found when separating ENSO from PDO using CEOF and CICA, respectively. We thus conclude that CICA is more effective than CEOF in separating non-stationary patterns.
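Step (a) is easy to illustrate; as a simplified stand-in for the cumulant-based complex ICA of step (b), which standard libraries do not provide, the sketch below extracts a complex EOF from the Hermitian covariance of the Hilbert-augmented data (all data are synthetic):

```python
import numpy as np
from scipy.signal import hilbert

# Step (a) of CICA: augment each centered time series with its Hilbert
# transform to form a complex (analytic-signal) dataset.
rng = np.random.default_rng(0)
t = np.arange(600)
X = np.stack([np.sin(0.05 * t + p) + 0.1 * rng.standard_normal(t.size)
              for p in np.linspace(0, np.pi, 12)])          # 12 series x 600 samples
Xc = hilbert(X - X.mean(axis=1, keepdims=True), axis=1)     # complex analytic signal

# Simplified stand-in for step (b): dominant complex EOF of the Hermitian
# covariance, yielding spatial amplitude and phase-propagation patterns.
C = Xc @ Xc.conj().T / Xc.shape[1]
lam, V = np.linalg.eigh(C)
mode = V[:, -1]                                  # dominant complex mode
amplitude, phase = np.abs(mode), np.angle(mode)  # amplitude + phase pattern
```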
Statistical analysis of effective singular values in matrix rank determination
NASA Technical Reports Server (NTRS)
Konstantinides, Konstantinos; Yao, Kung
1988-01-01
A major problem in using SVD (singular value decomposition) as a tool for determining the effective rank of a perturbed matrix is that of distinguishing between significantly small and significantly large singular values. To this end, confidence regions are derived for the perturbed singular values of matrices with noisy observation data. The analysis is based on the theories of perturbation of singular values and statistical significance tests. Threshold bounds for perturbation due to finite-precision and i.i.d. random models are evaluated. In random models, the threshold bounds depend on the dimension of the matrix, the noise variance, and a predefined statistical level of significance. The results are applied to the problem of determining the effective order of a linear autoregressive system from the approximate rank of a sample autocorrelation matrix. Various numerical examples illustrating the usefulness of these bounds and comparisons to other previously known approaches are given.
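An illustrative version of the thresholding problem (the specific confidence-region bounds of the paper are not reproduced; the threshold below is a simple assumed perturbation bound):

```python
import numpy as np

def effective_rank(A, noise_std, alpha=3.0):
    """Count singular values above a noise-dependent threshold; alpha plays
    the role of a (hypothetical) statistical significance level."""
    s = np.linalg.svd(A, compute_uv=False)
    tau = alpha * noise_std * np.sqrt(max(A.shape))   # simple perturbation bound
    return int(np.sum(s > tau)), s, tau

# Rank-3 matrix observed through additive i.i.d. noise.
rng = np.random.default_rng(0)
A0 = rng.standard_normal((60, 3)) @ rng.standard_normal((3, 40))
A = A0 + 0.05 * rng.standard_normal((60, 40))
r, s, tau = effective_rank(A, noise_std=0.05)
print(r)   # -> 3 at this noise level
```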
ERIC Educational Resources Information Center
Koga, Nobuyoshi; Goshi, Yuri; Yoshikawa, Masahiro; Tatsuoka, Tomoyuki
2014-01-01
An undergraduate kinetic experiment of the thermal decomposition of solids by microscopic observation and thermal analysis was developed by investigating a suitable reaction, applicable techniques of thermal analysis and microscopic observation, and a reliable kinetic calculation method. The thermal decomposition of sodium hydrogen carbonate is…
Non-invasive quantitative pulmonary V/Q imaging using Fourier decomposition MRI at 1.5T.
Kjørstad, Åsmund; Corteville, Dominique M R; Henzler, Thomas; Schmid-Bindert, Gerald; Zöllner, Frank G; Schad, Lothar R
2015-12-01
Techniques for quantitative pulmonary perfusion and ventilation using the Fourier Decomposition method were recently demonstrated. We combine these two techniques and show that ventilation-perfusion (V/Q) imaging is possible using only a single MR acquisition of less than thirty seconds. The Fourier Decomposition method is used in combination with two quantification techniques, which extract baselines from within the images themselves and thus allows quantification. For the perfusion, a region assumed to consist of 100% blood is utilized, while for the ventilation the zero-frequency component is used. V/Q-imaging is then done by dividing the quantified ventilation map with the quantified perfusion map. The techniques were used on ten healthy volunteers and fifteen patients diagnosed with lung cancer. A mean V/Q-ratio of 1.15 ± 0.22 was found for the healthy volunteers and a mean V/Q-ratio of 1.93 ± 0.83 for the non-afflicted lung in the patients. Mean V/Q-ratio in the afflicted (tumor-bearing) lung was found to be 1.61 ± 1.06. Functional defects were clearly visible in many of the patient images, but 5 of 15 patient images had to be excluded due to artifacts or low SNR, indicating a lack of robustness. Non-invasive, quantitative V/Q-imaging is possible using Fourier Decomposition MRI. The method requires only a single acquisition of less than 30 seconds, but robustness in patients remains an issue. Copyright © 2015. Published by Elsevier GmbH.
Learning inverse kinematics: reduced sampling through decomposition into virtual robots.
de Angulo, Vicente Ruiz; Torras, Carme
2008-12-01
We propose a technique to speed up the learning of the inverse kinematics of a robot manipulator by decomposing it into two or more virtual robot arms. Unlike previous decomposition approaches, this one does not place any requirement on the robot architecture and is thus completely general. Parametrized self-organizing maps are particularly adequate for this type of learning and permit comparing results obtained directly and through the decomposition. Experimentation shows that time reductions of up to two orders of magnitude are easily attained.
A Dual Super-Element Domain Decomposition Approach for Parallel Nonlinear Finite Element Analysis
NASA Astrophysics Data System (ADS)
Jokhio, G. A.; Izzuddin, B. A.
2015-05-01
This article presents a new domain decomposition method for nonlinear finite element analysis introducing the concept of dual partition super-elements. The method extends ideas from the displacement frame method and is ideally suited for parallel nonlinear static/dynamic analysis of structural systems. In the new method, domain decomposition is realized by replacing one or more subdomains in a "parent system," each with a placeholder super-element, where the subdomains are processed separately as "child partitions," each wrapped by a dual super-element along the partition boundary. The analysis of the overall system, including the satisfaction of equilibrium and compatibility at all partition boundaries, is realized through direct communication between all pairs of placeholder and dual super-elements. The proposed method has particular advantages for matrix solution methods based on the frontal scheme, and can be readily implemented for existing finite element analysis programs to achieve parallelization on distributed memory systems with minimal intervention, thus overcoming memory bottlenecks typically faced in the analysis of large-scale problems. Several examples are presented in this article which demonstrate the computational benefits of the proposed parallel domain decomposition approach and its applicability to the nonlinear structural analysis of realistic structural systems.
NASA Technical Reports Server (NTRS)
McKay, D.S.; Gibson, E.K.; Thomas-Keprta, K.L.; Clemett, S.J.; Wentworth, S.J.
2009-01-01
The question of the origin of nanophase magnetite in Martian meteorite ALH84001 has been widely debated for nearly a decade. Golden et al. have reported producing nearly chemically pure magnetite from the thermal decomposition of chemically impure siderite [(Fe,Mg,Mn)CO3]. This claim is significant for three reasons: first, it has been argued that the chemically pure magnetite present in the carbonate disks in Martian meteorite ALH84001 could have formed by the thermal decomposition of the impure carbonate matrix in which they are embedded; second, the chemical purity of magnetite has previously been used to identify biogenic magnetite; and, third, previous studies of the thermal decomposition of impure (Mg,Ca,Mn)-siderites, which have been investigated under a wide variety of conditions by numerous researchers, invariably yield a mixed metal oxide phase as the product and not chemically pure magnetite. The explanation for this observation is that these siderites all possess the same crystallographic structure (calcite type; space group R-3c), so solid solutions between these carbonates are readily formed and can be viewed on an atomic scale as two chemically different but structurally similar lattices.
Biomass pyrolysis: Thermal decomposition mechanisms of furfural and benzaldehyde
NASA Astrophysics Data System (ADS)
Vasiliou, AnGayle K.; Kim, Jong Hyun; Ormond, Thomas K.; Piech, Krzysztof M.; Urness, Kimberly N.; Scheer, Adam M.; Robichaud, David J.; Mukarakate, Calvin; Nimlos, Mark R.; Daily, John W.; Guan, Qi; Carstensen, Hans-Heinrich; Ellison, G. Barney
2013-09-01
The thermal decompositions of furfural and benzaldehyde have been studied in a heated microtubular flow reactor. The pyrolysis experiments were carried out by passing a dilute mixture of the aromatic aldehydes (roughly 0.1%-1%) entrained in a stream of buffer gas (either He or Ar) through a pulsed, heated SiC reactor that is 2-3 cm long and 1 mm in diameter. Typical pressures in the reactor are 75-150 Torr with the SiC tube wall temperature in the range of 1200-1800 K. Characteristic residence times in the reactor are 100-200 μsec after which the gas mixture emerges as a skimmed molecular beam at a pressure of approximately 10 μTorr. Products were detected using matrix infrared absorption spectroscopy, 118.2 nm (10.487 eV) photoionization mass spectroscopy and resonance enhanced multiphoton ionization. The initial steps in the thermal decomposition of furfural and benzaldehyde have been identified. Furfural undergoes unimolecular decomposition to furan + CO: C4H3O-CHO (+ M) → CO + C4H4O. Sequential decomposition of furan leads to the production of HC≡CH, CH2CO, CH3C≡CH, CO, HCCCH2, and H atoms. In contrast, benzaldehyde resists decomposition until higher temperatures when it fragments to phenyl radical plus H atoms and CO: C6H5CHO (+ M) → C6H5CO + H → C6H5 + CO + H. The H atoms trigger a chain reaction by attacking C6H5CHO: H + C6H5CHO → [C6H6CHO]* → C6H6 + CO + H. The net result is the decomposition of benzaldehyde to produce benzene and CO.
Renjith, Arokia; Manjula, P; Mohan Kumar, P
2015-01-01
Brain tumour is one of the main causes of increased mortality among children and adults. This paper proposes an improved method based on Magnetic Resonance Imaging (MRI) brain image classification and image segmentation. Automated classification is encouraged by the need for high accuracy when dealing with a human life. The detection of brain tumours is a challenging problem, due to the high diversity in tumour appearance and ambiguous tumour boundaries. MRI images are chosen for the detection of brain tumours, as they are well suited to soft tissue determination. First, image pre-processing is used to enhance the image quality. Second, dual-tree complex wavelet transform multi-scale decomposition is used to analyse the texture of an image. Feature extraction then extracts features from the image using the gray-level co-occurrence matrix (GLCM). Next, the Neuro-Fuzzy technique is used to classify the stages of brain tumour as benign, malignant or normal based on the texture features. Finally, the tumour location is detected using Otsu thresholding. The classifier performance is evaluated based on classification accuracies. The simulated results show that the proposed classifier provides better accuracy than previous methods.
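The GLCM feature-extraction step can be sketched with scikit-image (graycomatrix/graycoprops; spelled greycomatrix in older releases). The image, offsets and statistics below are illustrative stand-ins, not the paper's settings:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

# Toy 8-bit image standing in for a preprocessed MRI slice.
img = np.random.default_rng(0).integers(0, 256, (64, 64)).astype(np.uint8)

# GLCM at a one-pixel offset and four orientations, then Haralick-style
# statistics of the kind commonly fed to a texture classifier.
glcm = graycomatrix(img, distances=[1],
                    angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                    levels=256, symmetric=True, normed=True)
features = {p: graycoprops(glcm, p).mean()
            for p in ('contrast', 'homogeneity', 'energy', 'correlation')}
print(features)
```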
Preserving Lagrangian Structure in Nonlinear Model Reduction with Application to Structural Dynamics
Carlberg, Kevin; Tuminaro, Ray; Boggs, Paul
2015-03-11
Our work proposes a model-reduction methodology that preserves Lagrangian structure and achieves computational efficiency in the presence of high-order nonlinearities and arbitrary parameter dependence. As such, the resulting reduced-order model retains key properties such as energy conservation and symplectic time-evolution maps. We focus on parameterized simple mechanical systems subjected to Rayleigh damping and external forces, and consider an application to nonlinear structural dynamics. To preserve structure, the method first approximates the system's "Lagrangian ingredients" - the Riemannian metric, the potential-energy function, the dissipation function, and the external force - and subsequently derives reduced-order equations of motion by applying the (forced) Euler-Lagrange equation with these quantities. Moreover, from the algebraic perspective, key contributions include two efficient techniques for approximating parameterized reduced matrices while preserving symmetry and positive definiteness: matrix gappy proper orthogonal decomposition and reduced-basis sparsification. Our results for a parameterized truss-structure problem demonstrate the practical importance of preserving Lagrangian structure and illustrate the proposed method's merits: it reduces computation time while maintaining high accuracy and stability, in contrast to existing nonlinear model-reduction techniques that do not preserve structure.
Analysis of Nonlinear Dynamics by Square Matrix Method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, Li Hua
The nonlinear dynamics of a system with periodic structure can be analyzed using a square matrix. In this paper, we show that because of the special properties of the square matrix constructed for nonlinear dynamics, we can reduce the dimension of the matrix from the original large dimension required for high-order calculation to a low dimension in the first step of the analysis. A stable Jordan decomposition is then obtained with much lower dimension. The transformation to Jordan form provides an excellent action-angle approximation to the solution of the nonlinear dynamics, in good agreement with trajectories and tunes obtained from tracking. More importantly, the deviation from constancy of the new action-angle variable provides a measure of the stability of the phase space trajectories and their tunes. Thus the square matrix provides a novel method to optimize the nonlinear dynamic system. The method is illustrated by many examples of comparison between theory and numerical simulation. Finally, in particular, we show that the square matrix method can be used for optimization to reduce the nonlinearity of a system.
Evaluation of a Nonlinear Finite Element Program - ABAQUS.
1983-03-15
…anisotropic properties.
* MATEXP - Linearly elastic thermal expansions with isotropic, orthotropic and anisotropic properties.
* MATELG - Linearly elastic materials for general sections (options available for beam and shell elements).
* MATEXG - Linearly elastic thermal expansions for general sections.
Utility subroutines include decomposition of a matrix, the Q-R algorithm, vector normalization, etc. Obviously, by consolidating all the utility subroutines in a library, ABAQUS has…
Wang, Deyun; Wei, Shuai; Luo, Hongyuan; Yue, Chenqiang; Grunder, Olivier
2017-02-15
The randomness, non-stationarity and irregularity of air quality index (AQI) series make AQI forecasting difficult. To enhance forecast accuracy, a novel hybrid forecasting model combining a two-phase decomposition technique and an extreme learning machine (ELM) optimized by the differential evolution (DE) algorithm is developed for AQI forecasting in this paper. In phase I, complementary ensemble empirical mode decomposition (CEEMD) is utilized to decompose the AQI series into a set of intrinsic mode functions (IMFs) with different frequencies; in phase II, in order to further handle the high-frequency IMFs, which would otherwise increase the forecast difficulty, variational mode decomposition (VMD) is employed to decompose the high-frequency IMFs into a number of variational modes (VMs). Then, the ELM model optimized by the DE algorithm is applied to forecast all the IMFs and VMs. Finally, the forecast value of each high-frequency IMF is obtained by adding up the forecast results of all corresponding VMs, and the forecast series of the AQI is obtained by aggregating the forecast results of all IMFs. To verify and validate the proposed model, two daily AQI series from July 1, 2014 to June 30, 2016, collected from Beijing and Shanghai in China, are taken as test cases for the empirical study. The experimental results show that the proposed hybrid model based on the two-phase decomposition technique is remarkably superior to all other considered models in terms of forecast accuracy. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Crockett, R. G. M.; Perrier, F.; Richon, P.
2009-04-01
Building on independent investigations by research groups at IPGP, France, and the University of Northampton, UK, hourly sampled radon time series of durations exceeding one year have been investigated for periodic and anomalous phenomena using a variety of established and novel techniques. These time series were recorded in locations having no routine human activity and are thus effectively free of significant anthropogenic influences. With regard to periodic components, the long durations of these time series allow, in principle, very high frequency resolutions for established spectral-measurement techniques such as Fourier and maximum-entropy methods. However, as has been widely observed, the stochastic nature of radon emissions from rocks and soils, coupled with sensitivity to a wide variety of influences such as temperature, wind speed and soil moisture content, has made interpretation of the results obtained by such techniques very difficult and, in many cases, uncertain. We report here developments in the investigation of radon time series for periodic and anomalous phenomena using spectral-decomposition techniques. These techniques, in variously separating 'high', 'middle' and 'low' frequency components, effectively 'de-noise' the data by allowing components of interest to be isolated from others which might serve to obscure weaker information-containing components. Once isolated, these components can be investigated using a variety of techniques. While this is very much work in the early stages of development, spectral decomposition methods have been used successfully to indicate the presence of diurnal and sub-diurnal cycles in radon concentration, which we provisionally attribute to tidal influences. These methods have also been used to enhance the identification of short-duration anomalies attributable to a variety of causes including, for example, earthquakes and rapid large-magnitude changes in weather conditions. Keywords: radon; earthquakes; tidal influences; anomalies; time series; spectral decomposition.
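One simple realization of the band-separation idea is zero-phase Butterworth filtering; the cutoffs below are illustrative choices that isolate the band containing the diurnal (1/24 per hour) cycle, not the authors' actual processing chain:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def split_bands(x, fs, low_cut, high_cut, order=4):
    """Decompose a series into low / middle / high frequency components with
    zero-phase Butterworth filters -- a simple form of 'de-noising by band
    separation'."""
    b_lo, a_lo = butter(order, low_cut, btype='lowpass', fs=fs)
    b_hi, a_hi = butter(order, high_cut, btype='highpass', fs=fs)
    low = filtfilt(b_lo, a_lo, x)
    high = filtfilt(b_hi, a_hi, x)
    mid = x - low - high          # the band of interest (diurnal cycles here)
    return low, mid, high

# Synthetic hourly radon-like series: diurnal cycle + slow drift + noise.
fs = 1.0                          # one sample per hour
t = np.arange(24 * 365)
x = (np.sin(2 * np.pi * t / 24)
     + 0.3 * t / t.size
     + 0.5 * np.random.default_rng(0).standard_normal(t.size))
low, mid, high = split_bands(x, fs, low_cut=1 / 72, high_cut=1 / 6)
```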
NASA Astrophysics Data System (ADS)
Otsuki, Soichi
2018-04-01
Polarimetric imaging of absorbing, strongly scattering, or birefringent inclusions in a negligibly absorbing, moderately scattering, and isotropic slab medium is investigated. It was proved that the reduced effective scattering Mueller matrix can be calculated exactly from experimental or simulated raw matrices even if the medium is anisotropic and/or heterogeneous, or the outgoing light beam exits obliquely to the normal of the slab surface. The calculation also gives a reasonable approximation of the reduced matrix when a light beam with a finite diameter is used for illumination. The reduced matrix was calculated using a Monte Carlo simulation and was factorized in two dimensions by the Lu-Chipman polar decomposition. The intensity of backscattered light shows clear and modestly clear differences for absorbing and strongly scattering inclusions, respectively, whereas it shows no difference for birefringent inclusions. Conversely, some polarization parameters, for example the selective depolarization coefficients, exhibit only a slight difference for the absorbing inclusions, whereas they show a clear difference for the strongly scattering or birefringent inclusions. Moreover, these quantities become larger as the difference in the optical properties of the inclusions relative to the surrounding medium increases. However, it is difficult to recognize inclusions buried deeper than 3 mm below the surface. Thus, the present technique can detect the approximate shape and size of these inclusions and, considering the depth at which inclusions lie, estimate their optical properties. This study demonstrates the possibility of polarization-sensitive imaging of turbid inhomogeneous media using a pencil beam for illumination.
Influence of polyols on the formation of nanocrystalline nickel ferrite inside silica matrices
NASA Astrophysics Data System (ADS)
Stoia, Marcela; Barvinschi, Paul; Barbu-Tudoran, Lucian; Bunoiu, Mădălin
2017-01-01
We have synthesized nickel ferrite/silica nanocomposites using a modified sol-gel method that combines sol-gel processing with the thermal decomposition of metal-organic precursors, leading to a homogeneous dispersion of ferrite nanoparticles within the silica matrix and a narrow size distribution. We used as starting materials tetraethyl orthosilicate (TEOS) as the source of silica, Fe(III) and Ni(II) nitrates as sources of metal cations, and polyols as reducing agents (polyvinyl alcohol, 1,4-butanediol and their mixture). The coupled TG/DTA technique evidenced the redox interaction between the polyol and the mixture of metal nitrates during heating of the gel, with formation of nickel ferrite precursors in the pores of the silica gels. FT-IR spectroscopy confirmed the formation of metal carboxylates inside the silica gels and the interaction of the polyols with the Si-OH groups of the polysiloxane network. X-ray diffractometry showed that in the case of nanocomposites obtained using a single polyol, nickel ferrite forms as a single crystalline phase inside the amorphous silica matrix, while when a mixture of polyols is used, nickel oxide appears as a secondary phase. TEM microscopy and elemental mapping evidenced the fine nature of the obtained nickel ferrite nanoparticles, which are homogeneously dispersed within the silica matrix. The obtained nanocomposites exhibit magnetic behavior very close to superparamagnetism, depending slightly on the presence and nature of the organic compounds used in the synthesis; the magnetization reached at a 5 kOe magnetic field was 7 emu/g for all composites.
A speech multi-feature fusion perceptual hash algorithm based on tensor decomposition
NASA Astrophysics Data System (ADS)
Huang, Y. B.; Fan, M. H.; Zhang, Q. Y.
2018-03-01
With constant progress in modern speech communication technologies, speech data are prone to noise attacks or malicious tampering. To give the speech perceptual hash algorithm strong robustness and high efficiency, this paper proposes a speech perceptual hash algorithm based on tensor decomposition and multiple features. The algorithm applies wavelet packet decomposition to obtain the speech components, and the LPCC, LSP, and ISP features of each component are extracted to constitute the speech feature tensor. Speech authentication is done by generating the hash values through quantification of the feature matrix against its mid-value. Experimental results show that the proposed algorithm is robust to content-preserving operations compared with similar algorithms and is able to resist attacks by common background noise. The algorithm is also computationally efficient, so it can meet the real-time requirements of speech communication and complete speech authentication quickly.
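As a rough illustration of the quantification step, here is a minimal sketch assuming NumPy and a generic feature matrix; the function names and hashing details are illustrative, not the authors' implementation. Each entry is quantized against the matrix mid-value (median) into one bit, and two hashes are compared by normalized Hamming distance for authentication.

```python
import numpy as np

def perceptual_hash(feature_matrix):
    # Quantize every entry against the mid-value (median) to one bit.
    mid = np.median(feature_matrix)
    bits = (feature_matrix > mid).astype(np.uint8).ravel()
    return np.packbits(bits)

def hash_distance(h1, h2):
    # Normalized Hamming distance; small values indicate matching content.
    diff = np.unpackbits(h1) ^ np.unpackbits(h2)
    return diff.mean()
```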
Spinodal Decomposition in Multilayered Fe-Cr System: Kinetic Stasis and Wave Instability
NASA Astrophysics Data System (ADS)
Maugis, Philippe; Colignon, Yann; Mangelinck, Dominique; Hoummada, Khalid
2015-08-01
Used as fuel cladding in Gen IV fission reactors, ODS steels would be held at temperatures in the range of 350°C to 600°C for several months. Under these conditions, spinodal decomposition is likely to occur in the matrix, resulting in increased material brittleness. In this study, thin films with a composition modulated in Fe and Cr along a given direction were fabricated. The time evolution of the composition profiles during aging at 500°C was characterized by atom probe tomography, indicating an apparent kinetic stasis of the initial microstructure. A computer model was developed on the basis of the Cahn-Hilliard theory of spinodal decomposition, combined with the mobility form proposed by Martin (1990). We assume that the initial profile is very close to the amplitude-dependent critical wavelength. Our calculations show that the thin film is unstable with respect to wavelength modulations, resulting in the observed kinetic stasis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alexandrov, Boian S.; Lliev, Filip L.; Stanev, Valentin G.
This code is a toy (short) version of CODE-2016-83. From a general perspective, the code implements an unsupervised adaptive machine learning algorithm that allows efficient, high-performance de-mixing and feature extraction of a multitude of non-negative signals mixed and recorded by a network of uncorrelated sensor arrays. The code identifies the number of mixed original signals and their locations. Further, the code also allows deciphering of signals that have been delayed with respect to the mixing process at each sensor. The code is highly customizable and can be used efficiently for fast macro-analyses of data. It is applicable to a plethora of distinct problems: chemical decomposition, pressure transient decomposition, unknown source/signal allocation, and EM signal decomposition. An additional procedure for allocating the unknown sources is incorporated in the code.
First-principles study on the initial decomposition process of CH3NH3PbI3
NASA Astrophysics Data System (ADS)
Xue, Yuanbin; Shan, Yueyue; Xu, Hu
2017-09-01
Hybrid perovskites are promising materials for high-performance photovoltaics. Unfortunately, hybrid perovskites readily decompose, particularly under humid conditions, and the mechanisms of this phenomenon are not yet fully understood. In this work, we systematically studied the possible mechanisms and the structural properties during the initial decomposition process of MAPbI3 (MA = CH3NH3+) using first-principles calculations. The theoretical results show that it is energetically favorable for PbI2 to nucleate and crystallize from the MAPbI3 matrix ahead of other decomposition products. Additionally, the structural instability is an intrinsic property of MAPbI3, regardless of whether the system is exposed to humidity. We find that H2O could facilitate the desorption of gaseous components, acting as a catalyst to transfer the H+ ion. These results provide insight into the cause of the instability of MAPbI3 and may improve our understanding of the properties of hybrid perovskites.
Eigenvector decomposition of full-spectrum x-ray computed tomography.
Gonzales, Brian J; Lalush, David S
2012-03-07
Energy-discriminated x-ray computed tomography (CT) data were projected onto a set of basis functions to suppress the noise in filtered back-projection (FBP) reconstructions. The x-ray CT data were acquired using a novel x-ray system which incorporated a single-pixel photon-counting x-ray detector to measure the x-ray spectrum for each projection ray. A matrix of the spectral response of different materials was decomposed using eigenvalue decomposition to form the basis functions. Projection of the FBP reconstructions onto the basis functions created a de facto image segmentation of multiple contrast agents. Final reconstructions showed significant noise suppression while preserving important energy-axis data. The noise suppression was demonstrated by a marked improvement in the signal-to-noise ratio (SNR) along the energy axis for multiple regions of interest in the reconstructed images. Basis functions used on a more coarsely sampled energy axis still showed an improved SNR. We conclude that the noise-resolution trade-off along the energy axis was significantly improved using the eigenvalue decomposition basis functions.
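A minimal sketch of the basis-function idea, assuming NumPy and hypothetical input files: the spectral-response matrix is eigendecomposed, and the energy axis of the FBP stack is projected onto the leading eigenvectors to suppress noise.

```python
import numpy as np

S = np.load("spectral_responses.npy")    # hypothetical: materials x energy bins
evals, evecs = np.linalg.eigh(S.T @ S)   # eigenvalue decomposition of the response matrix
basis = evecs[:, np.argsort(evals)[::-1][:4]]   # leading energy-axis basis functions

fbp = np.load("fbp_stack.npy")           # hypothetical: energy bins x ny x nx FBP images
coeffs = np.tensordot(basis.T, fbp, axes=1)     # project the energy axis onto the basis
denoised = np.tensordot(basis, coeffs, axes=1)  # noise-suppressed reconstruction stack
```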
NASA Technical Reports Server (NTRS)
Chavez, Patrick F.
1987-01-01
The effort at Sandia National Labs. on the methodologies and techniques being used to generate strictly hexahedral finite element meshes from a solid model is described. The functionality of the modeler is used to decompose the solid into a set of nonintersecting meshable finite element primitives. The description of the decomposition is exported, via a Boundary Representation format, to the meshing program, which uses the information for complete finite element model specification. Particular features of the program are discussed in some detail, along with future plans for development, which include automation of the decomposition using artificial intelligence techniques.
Jaber, Abobaker M; Ismail, Mohd Tahir; Altaher, Alsaidi M
2014-01-01
This paper forecasts the daily closing prices of stock markets. We propose a two-stage technique that combines empirical mode decomposition (EMD) with the nonparametric method of local linear quantile (LLQ) regression. We use the proposed technique, EMD-LLQ, to forecast two stock index time series. Detailed experiments are implemented for the proposed method, in which the EMD-LLQ, EMD, and Holt-Winters methods are compared. The proposed EMD-LLQ model is determined to be superior to the EMD and Holt-Winters methods in predicting the stock closing prices.
Fast Sampling Gas Chromatography (GC) System for Speciation in a Shock Tube
2016-10-31
GC sampling system validation experiments on ethylene: results were compared for cold shock experiments, and both techniques capture similar ethylene decomposition rates for temperature-dependent shock experiments.
Decomposition and particle release of a carbon nanotube/epoxy nanocomposite at elevated temperatures
NASA Astrophysics Data System (ADS)
Schlagenhauf, Lukas; Kuo, Yu-Ying; Bahk, Yeon Kyoung; Nüesch, Frank; Wang, Jing
2015-11-01
Carbon nanotubes (CNTs) as fillers in nanocomposites have attracted significant attention, and one of their applications is as flame retardants. For such nanocomposites, the possible release of CNTs at elevated temperatures after decomposition of the polymer matrix poses potential health threats. We investigated the airborne particle release from a decomposing multi-walled carbon nanotube (MWCNT)/epoxy nanocomposite in order to measure a possible release of MWCNTs. An experimental set-up was established that allows decomposing the samples in a furnace by exposure to increasing temperatures at a constant heating rate and under ambient air or nitrogen atmosphere. The particle analysis was performed with aerosol measurement devices and by transmission electron microscopy (TEM) of collected particles. Further, applying a thermal denuder made it possible to measure only non-volatile particles. The tested samples were characterized and the decomposition kinetics determined using thermogravimetric analysis (TGA). Particle release was investigated for different samples: a neat epoxy, nanocomposites with 0.1 and 1 wt% MWCNTs, and nanocomposites with functionalized MWCNTs. The results showed that the added MWCNTs had little effect on the decomposition kinetics of the investigated samples, but the weight of the remaining residues after decomposition was influenced significantly. The measurements with decomposition in different atmospheres showed a release of a higher number of particles at temperatures below 300 °C when air was used. Analysis of collected particles by TEM revealed that no detectable amount of MWCNTs was released, but micrometer-sized fibrous particles were collected.
Thermodynamic changes in mechanochemically synthesized magnesium hydride nanoparticles.
Paskevicius, Mark; Sheppard, Drew A; Buckley, Craig E
2010-04-14
The thermodynamic properties of magnesium hydride nanoparticles have been investigated by hydrogen decomposition pressure measurements using the Sieverts technique. A mechanochemical method was used to synthesize MgH2 nanoparticles (down to approximately 7 nm in size) embedded in a LiCl salt matrix. In comparison to bulk MgH2, the mechanochemically produced MgH2 with the smallest particle size showed a small but measurable decrease in the decomposition reaction enthalpy (ΔH decrease of 2.84 kJ/mol H2, from ΔH(bulk) = 74.06 ± 0.42 kJ/mol H2 to ΔH(nano) = 71.22 ± 0.49 kJ/mol H2). The reduction in ΔH matches theoretical predictions and was also coupled with a similar reduction in reaction entropy (ΔS decrease of 3.8 J/mol H2/K, from ΔS(bulk) = 133.4 ± 0.7 J/mol H2/K to ΔS(nano) = 129.6 ± 0.8 J/mol H2/K). The thermodynamic changes in the MgH2 nanoparticle system correspond to a drop in the 1 bar hydrogen equilibrium temperature T(1 bar) of approximately 6 °C, to 276.2 ± 2.4 °C, in contrast to 281.8 ± 2.2 °C for the bulk MgH2 system. The reduction in the desorption temperature is less than that expected from theoretical studies because the decrease in ΔS acts to partially counteract the effect of the change in ΔH.
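The reported equilibrium temperatures follow from the van't Hoff relation T(1 bar) = ΔH/ΔS; a quick arithmetic check of the abstract's numbers:

```python
# T(1 bar) = dH / dS, converted from kelvin to degrees Celsius.
dH_bulk, dS_bulk = 74.06e3, 133.4   # J/mol H2, J/(mol H2 K)
dH_nano, dS_nano = 71.22e3, 129.6
print(dH_bulk / dS_bulk - 273.15)   # ~282.1 C, consistent with 281.8 +/- 2.2 C
print(dH_nano / dS_nano - 273.15)   # ~276.4 C, consistent with 276.2 +/- 2.4 C
```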
De-biasing the dynamic mode decomposition for applied Koopman spectral analysis of noisy datasets
NASA Astrophysics Data System (ADS)
Hemati, Maziar S.; Rowley, Clarence W.; Deem, Eric A.; Cattafesta, Louis N.
2017-08-01
The dynamic mode decomposition (DMD)—a popular method for performing data-driven Koopman spectral analysis—has gained increased popularity for extracting dynamically meaningful spatiotemporal descriptions of fluid flows from snapshot measurements. Often, DMD descriptions can be used for predictive purposes as well, which enables informed decision-making based on DMD model forecasts. Despite its widespread use and utility, DMD can fail to yield accurate dynamical descriptions when the measured snapshot data are imprecise due to, e.g., sensor noise. Here, we express DMD as a two-stage algorithm in order to isolate a source of systematic error. We show that DMD's first stage, a subspace projection step, systematically introduces bias errors by processing snapshots asymmetrically. To remove this systematic error, we propose utilizing an augmented snapshot matrix in a subspace projection step, as in problems of total least-squares, in order to account for the error present in all snapshots. The resulting unbiased and noise-aware total DMD (TDMD) formulation reduces to standard DMD in the absence of snapshot errors, while the two-stage perspective generalizes the de-biasing framework to other related methods as well. TDMD's performance is demonstrated in numerical and experimental fluids examples. In particular, in the analysis of time-resolved particle image velocimetry data for a separated flow, TDMD outperforms standard DMD by providing dynamical interpretations that are consistent with alternative analysis techniques. Further, TDMD extracts modes that reveal detailed spatial structures missed by standard DMD.
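A compact sketch of the two-stage, de-biased formulation, assuming NumPy and snapshot matrices X (states) and Y (time-shifted states); this follows the general TDMD recipe described above rather than the authors' exact code.

```python
import numpy as np

def tdmd(X, Y, r):
    # Stage 1: project onto the leading right-singular subspace of the
    # augmented snapshot matrix, treating X and Y symmetrically (de-biasing).
    Z = np.vstack([X, Y])
    _, _, Vh = np.linalg.svd(Z, full_matrices=False)
    P = Vh[:r].conj().T @ Vh[:r]
    Xb, Yb = X @ P, Y @ P
    # Stage 2: standard rank-r DMD on the de-biased snapshot pair.
    U, s, Wh = np.linalg.svd(Xb, full_matrices=False)
    U, s, W = U[:, :r], s[:r], Wh[:r].conj().T
    Atilde = (U.conj().T @ Yb @ W) / s      # projected linear operator
    evals, evecs = np.linalg.eig(Atilde)    # DMD eigenvalues
    modes = (Yb @ W / s) @ evecs            # DMD modes
    return evals, modes
```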
Fast divide-and-conquer algorithm for evaluating polarization in classical force fields
NASA Astrophysics Data System (ADS)
Nocito, Dominique; Beran, Gregory J. O.
2017-03-01
Evaluation of the self-consistent polarization energy forms a major computational bottleneck in polarizable force fields. In large systems, the linear polarization equations are typically solved iteratively with techniques based on Jacobi iterations (JI) or preconditioned conjugate gradients (PCG). Two new variants of JI are proposed here that exploit domain decomposition to accelerate the convergence of the induced dipoles. The first, divide-and-conquer JI (DC-JI), is a block Jacobi algorithm which solves the polarization equations within non-overlapping sub-clusters of atoms directly via Cholesky decomposition, and iterates to capture interactions between sub-clusters. The second, fuzzy DC-JI, achieves further acceleration by employing overlapping blocks. Fuzzy DC-JI is analogous to an additive Schwarz method, but with distance-based weighting when averaging the fuzzy dipoles from different blocks. Key to the success of these algorithms is the use of K-means clustering to identify natural atomic sub-clusters automatically for both algorithms and to determine the appropriate weights in fuzzy DC-JI. The algorithm employs knowledge of the 3-D spatial interactions to group important elements in the 2-D polarization matrix. When coupled with direct inversion in the iterative subspace (DIIS) extrapolation, fuzzy DC-JI/DIIS in particular converges in a comparable number of iterations as PCG, but with lower computational cost per iteration. In the end, the new algorithms demonstrated here accelerate the evaluation of the polarization energy by 2-3 fold compared to existing implementations of PCG or JI/DIIS.
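A serial sketch of the DC-JI idea under stated assumptions (dense NumPy matrices, scikit-learn's KMeans for the clustering step, no DIIS extrapolation): K-means defines the atom blocks, each diagonal block is solved directly through its Cholesky factor, and the coupling between blocks is iterated.

```python
import numpy as np
from sklearn.cluster import KMeans

def dc_jacobi(A, b, coords, n_blocks=8, tol=1e-8, max_iter=200):
    # Cluster sites in 3-D space; each cluster becomes one Jacobi block.
    labels = KMeans(n_clusters=n_blocks, n_init=10).fit(coords).labels_
    blocks = [np.where(labels == k)[0] for k in range(n_blocks)]
    # Factor each diagonal block once for direct (Cholesky) solves.
    chol = [np.linalg.cholesky(A[np.ix_(i, i)]) for i in blocks]
    x = np.zeros_like(b)
    for _ in range(max_iter):
        x_new = x.copy()
        for idx, L in zip(blocks, chol):
            # Right-hand side seen by this block from all other blocks.
            r = b[idx] - A[idx] @ x + A[np.ix_(idx, idx)] @ x[idx]
            x_new[idx] = np.linalg.solve(L.T, np.linalg.solve(L, r))
        if np.linalg.norm(x_new - x) <= tol * np.linalg.norm(b):
            return x_new
        x = x_new
    return x
```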
Comparison of Techniques for Sampling Adult Necrophilous Insects From Pig Carcasses.
Cruise, Angela; Hatano, Eduardo; Watson, David W; Schal, Coby
2018-02-06
Studies of the pre-colonization interval and mechanisms driving necrophilous insect ecological succession depend on effective sampling of adult insects and knowledge of their diel and successional activity patterns. The number of insects trapped, their diversity, and diel periodicity were compared with four sampling methods on neonate pigs. Sampling method, time of day and decomposition age of the pigs significantly affected the number of insects sampled from pigs. We also found significant interactions of sampling method and decomposition day, time of sampling and decomposition day. No single method was superior to the other methods during all three decomposition days. Sampling times after noon yielded the largest samples during the first 2 d of decomposition. On day 3 of decomposition however, all sampling times were equally effective. Therefore, to maximize insect collections from neonate pigs, the method used to sample must vary by decomposition day. The suction trap collected the most species-rich samples, but sticky trap samples were the most diverse, when both species richness and evenness were factored into a Shannon diversity index. Repeated sampling during the noon to 18:00 hours period was most effective to obtain the maximum diversity of trapped insects. The integration of multiple sampling techniques would most effectively sample the necrophilous insect community. However, because all four tested methods were deficient at sampling beetle species, future work should focus on optimizing the most promising methods, alone or in combinations, and incorporate hand-collections of beetles.
NASA Astrophysics Data System (ADS)
Li, Yongfu; Chen, Na; Harmon, Mark E.; Li, Yuan; Cao, Xiaoyan; Chappell, Mark A.; Mao, Jingdong
2015-10-01
A feedback between decomposition and litter chemical composition occurs with decomposition altering composition that in turn influences the decomposition rate. Elucidating the temporal pattern of chemical composition is vital to understand this feedback, but the effects of plant species and climate on chemical changes remain poorly understood, especially over multiple years. In a 10-year decomposition experiment with litter of four species (Acer saccharum, Drypetes glauca, Pinus resinosa, and Thuja plicata) from four sites that range from the arctic to tropics, we determined the abundance of 11 litter chemical constituents that were grouped into waxes, carbohydrates, lignin/tannins, and proteins/peptides using advanced 13C solid-state NMR techniques. Decomposition generally led to an enrichment of waxes and a depletion of carbohydrates, whereas the changes of other chemical constituents were inconsistent. Inconsistent convergence in chemical compositions during decomposition was observed among different litter species across a range of site conditions, whereas one litter species converged under different climate conditions. Our data clearly demonstrate that plant species rather than climate greatly alters the temporal pattern of litter chemical composition, suggesting the decomposition-chemistry feedback varies among different plant species.
Cerebrospinal fluid PCR analysis and biochemistry in bodies with severe decomposition.
Palmiere, Cristian; Vanhaebost, Jessica; Ventura, Francesco; Bonsignore, Alessandro; Bonetti, Luca Reggiani
2015-02-01
The aim of this study was to assess whether Neisseria meningitidis, Listeria monocytogenes, Streptococcus pneumoniae and Haemophilus influenzae can be identified using the polymerase chain reaction technique in the cerebrospinal fluid of severely decomposed bodies with known, noninfectious causes of death or whether postmortem changes can lead to false positive results and thus erroneous diagnostic information. Biochemical investigations, postmortem bacteriology and real-time polymerase chain reaction analysis in cerebrospinal fluid were performed in a series of medico-legal autopsies that included noninfectious causes of death with decomposition, bacterial meningitis without decomposition, bacterial meningitis with decomposition, low respiratory tract infections with decomposition and abdominal infections with decomposition. In noninfectious causes of death with decomposition, postmortem investigations failed to reveal results consistent with generalized inflammation or bacterial infections at the time of death. Real-time polymerase chain reaction analysis in cerebrospinal fluid did not identify the studied bacteria in any of these cases. The results of this study highlight the usefulness of molecular approaches in bacteriology as well as the use of alternative biological samples in postmortem biochemistry in order to obtain suitable information even in corpses with severe decompositional changes.
Buchanan, Piers; Soper, Alan K; Thompson, Helen; Westacott, Robin E; Creek, Jefferson L; Hobson, Greg; Koh, Carolyn A
2005-10-22
Neutron diffraction with HD isotope substitution has been used to study the formation and decomposition of the methane clathrate hydrate. Using this atomistic technique coupled with simultaneous gas consumption measurements, we have successfully tracked the formation of the sI methane hydrate from a water/gas mixture and then the subsequent decomposition of the hydrate from initiation to completion. These studies demonstrate that the application of neutron diffraction with simultaneous gas consumption measurements provides a powerful method for studying the clathrate hydrate crystal growth and decomposition. We have also used neutron diffraction to examine the water structure before the hydrate growth and after the hydrate decomposition. From the neutron-scattering curves and the empirical potential structure refinement analysis of the data, we find that there is no significant difference between the structure of water before the hydrate formation and the structure of water after the hydrate decomposition. Nor is there any significant change to the methane hydration shell. These results are discussed in the context of widely held views on the existence of memory effects after the hydrate decomposition.
Shatokhina, Iuliia; Obereder, Andreas; Rosensteiner, Matthias; Ramlau, Ronny
2013-04-20
We present a fast method for the wavefront reconstruction from pyramid wavefront sensor (P-WFS) measurements. The method is based on an analytical relation between pyramid and Shack-Hartmann sensor (SH-WFS) data. The algorithm consists of two steps: a transformation of the P-WFS data to SH data, followed by the application of the cumulative reconstructor with domain decomposition, a wavefront reconstructor from SH-WFS measurements. The closed loop simulations confirm that our method provides the same quality as the standard matrix-vector multiplication method. A complexity analysis as well as speed tests confirm that the method is very fast. Thus, the method can be used on extremely large telescopes, e.g., for eXtreme adaptive optics systems.
Polystyrene Foam EOS as a Function of Porosity and Fill Gas
NASA Astrophysics Data System (ADS)
Mulford, Roberta; Swift, Damian
2009-06-01
An accurate EOS for polystyrene foam is necessary for analysis of numerous experiments in shock compression, inertial confinement fusion, and astrophysics. Plastic-to-gas ratios vary between samples of foam, according to the density and cell size of the foam. A matrix of compositions has been investigated, allowing prediction of foam response as a function of the plastic-to-air ratio. The EOS code CHEETAH allows participation of the air in the decomposition reaction of the foam. Differences between air-filled, nitrogen-blown, and CO2-blown foams are investigated to estimate the importance of allowing air to react with plastic products during decomposition. Results differ somewhat from the conventional EOS, which are generated from values for plastic extrapolated to low densities.
Analysis of ZDDP Content and Thermal Decomposition in Motor Oils Using NAA and NMR
NASA Astrophysics Data System (ADS)
Ferguson, S.; Johnson, J.; Gonzales, D.; Hobbs, C.; Allen, C.; Williams, S.
Zinc dialkyldithiophosphates (ZDDPs) are one of the most common anti-wear additives present in commercially-available motor oils. The ZDDP concentrations of motor oils are most commonly determined using inductively coupled plasma atomic emission spectroscopy (ICP-AES). As part of an undergraduate research project, we have determined the Zn concentrations of eight commercially-available motor oils and one oil additive using neutron activation analysis (NAA), which has potential for greater accuracy and less sensitivity to matrix effects as compared to ICP-AES. The 31P nuclear magnetic resonance (31P-NMR) spectra were also obtained for several oil additive samples which have been heated to various temperatures in order to study the thermal decomposition of ZDDPs.
Decomposition of aquatic plants in lakes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Godshalk, G.L.
1977-01-01
This study was carried out to systematically determine the effects of temperature and oxygen concentration, two environmental parameters crucial to lake metabolism in general, on decomposition of five species of aquatic vascular plants of three growth forms in a Michigan lake. Samples of dried plant material were decomposed in flasks in the laboratory under three different oxygen regimes, aerobic-to-anaerobic, strict anaerobic, and aerated, each at 10°C and 25°C. In addition, in situ decomposition of the same species was monitored using the litter bag technique under four conditions.
Finding Imaging Patterns of Structural Covariance via Non-Negative Matrix Factorization
Sotiras, Aristeidis; Resnick, Susan M.; Davatzikos, Christos
2015-01-01
In this paper, we investigate the use of Non-Negative Matrix Factorization (NNMF) for the analysis of structural neuroimaging data. The goal is to identify the brain regions that co-vary across individuals in a consistent way, hence potentially being part of underlying brain networks or otherwise influenced by underlying common mechanisms such as genetics and pathologies. NNMF offers a directly data-driven way of extracting relatively localized co-varying structural regions, thereby transcending limitations of Principal Component Analysis (PCA), Independent Component Analysis (ICA) and other related methods that tend to produce dispersed components of positive and negative loadings. In particular, leveraging the well-known ability of NNMF to produce parts-based representations of image data, we derive decompositions that partition the brain into regions that vary in consistent ways across individuals. Importantly, these decompositions achieve dimensionality reduction in highly interpretable ways and generalize well to new data, as shown via split-sample experiments. We empirically validate NNMF in two data sets: i) a Diffusion Tensor (DT) mouse brain development study, and ii) a structural Magnetic Resonance (sMR) study of human brain aging. We demonstrate the ability of NNMF to produce sparse parts-based representations of the data at various resolutions. These representations seem to follow what we know about the underlying functional organization of the brain and also capture some pathological processes. Moreover, we show that these low-dimensional representations compare favorably to descriptions obtained with more commonly used matrix factorization methods like PCA and ICA. PMID:25497684
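The parts-based decomposition itself is standard non-negative matrix factorization; a minimal sketch with synthetic stand-in data (scikit-learn's NMF is an assumption here, not the authors' implementation):

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
X = rng.gamma(2.0, size=(80, 5000))   # stand-in: 80 subjects x 5000 non-negative voxel values

nmf = NMF(n_components=10, init="nndsvd", max_iter=500)
W = nmf.fit_transform(X)   # subject-specific loadings on each component
H = nmf.components_        # localized, non-negative spatial components
```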
NASA Astrophysics Data System (ADS)
Kang, Shouqiang; Ma, Danyang; Wang, Yujing; Lan, Chaofeng; Chen, Qingguo; Mikulovich, V. I.
2017-03-01
To effectively assess different fault locations and different degrees of performance degradation of a rolling bearing with a unified assessment index, a novel state assessment method based on the relative compensation distance of multiple-domain features and locally linear embedding is proposed. First, for a single-sample signal, time-domain and frequency-domain indexes are calculated for the original vibration signal and for each sensitive intrinsic mode function obtained by improved ensemble empirical mode decomposition, and the singular values of the sensitive intrinsic mode function matrix are extracted by singular value decomposition to construct a high-dimensional hybrid-domain feature vector. Second, a feature matrix is constructed by arranging the feature vectors of multiple samples; the dimensions of each row vector of the feature matrix are reduced by the locally linear embedding algorithm, and the compensation distance of each fault state of the rolling bearing is calculated using a support vector machine. Finally, the relative distance between each fault location or degree of performance degradation and the normal-state optimal classification surface is compensated, and on the basis of the proposed relative compensation distance, an assessment model is constructed and an assessment curve drawn. Experimental results show that the proposed method can effectively assess different fault locations and different degrees of performance degradation of the rolling bearing under certain conditions.
Using Matrix and Tensor Factorizations for the Single-Trial Analysis of Population Spike Trains.
Onken, Arno; Liu, Jian K; Karunasekara, P P Chamanthi R; Delis, Ioannis; Gollisch, Tim; Panzeri, Stefano
2016-11-01
Advances in neuronal recording techniques are leading to ever larger numbers of simultaneously monitored neurons. This poses the important analytical challenge of how to capture compactly all sensory information that neural population codes carry in their spatial dimension (differences in stimulus tuning across neurons at different locations), in their temporal dimension (temporal neural response variations), or in their combination (temporally coordinated neural population firing). Here we investigate the utility of tensor factorizations of population spike trains along space and time. These factorizations decompose a dataset of single-trial population spike trains into spatial firing patterns (combinations of neurons firing together), temporal firing patterns (temporal activation of these groups of neurons) and trial-dependent activation coefficients (strength of recruitment of such neural patterns on each trial). We validated various factorization methods on simulated data and on populations of ganglion cells simultaneously recorded in the salamander retina. We found that single-trial tensor space-by-time decompositions provided low-dimensional data-robust representations of spike trains that capture efficiently both their spatial and temporal information about sensory stimuli. Tensor decompositions with orthogonality constraints were the most efficient in extracting sensory information, whereas non-negative tensor decompositions worked well even on non-independent and overlapping spike patterns, and retrieved informative firing patterns expressed by the same population in response to novel stimuli. Our method showed that populations of retinal ganglion cells carried information in their spike timing on the ten-millisecond scale about spatial details of natural images. This information could not be recovered from the spike counts of these cells. First-spike latencies carried the majority of information provided by the whole spike train about fine-scale image features, and supplied almost as much information about coarse natural image features as firing rates. Together, these results highlight the importance of spike timing, and particularly of first-spike latencies, in retinal coding.
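A space-by-time non-negative tensor factorization of this kind can be sketched with the TensorLy library (an assumption, with synthetic stand-in data) on a trials x neurons x time array:

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import non_negative_parafac

rng = np.random.default_rng(1)
spikes = rng.poisson(1.0, size=(60, 40, 100)).astype(float)  # trials x neurons x time bins

cp = non_negative_parafac(tl.tensor(spikes), rank=5, n_iter_max=200)
trial_coeffs, spatial_patterns, temporal_patterns = cp.factors
# spatial_patterns: groups of neurons firing together; temporal_patterns: their
# activation in time; trial_coeffs: strength of recruitment of each pattern per trial.
```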
NASA Astrophysics Data System (ADS)
Deraemaeker, A.; Worden, K.
2018-05-01
This paper discusses the possibility of using the Mahalanobis squared-distance to perform robust novelty detection in the presence of important environmental variability in a multivariate feature vector. By performing an eigenvalue decomposition of the covariance matrix used to compute that distance, it is shown that the Mahalanobis squared-distance can be written as the sum of independent terms which result from a transformation from the feature vector space to a space of independent variables. In general, especially when the size of the feature vector is large, there are dominant eigenvalues and eigenvectors associated with the covariance matrix, so that a set of principal components can be defined. Because the associated eigenvalues are high, their contribution to the Mahalanobis squared-distance is low, while the contribution of the other components is high due to the low value of the associated eigenvalues. This analysis shows that the Mahalanobis distance naturally filters out the variability in the training data. This property can be used to remove the effect of the environment in damage detection, in much the same way as two other established techniques, principal component analysis and factor analysis. The three techniques are compared here using real experimental data from a wooden bridge, for which the feature vector consists of eigenfrequencies and mode shapes collected under changing environmental conditions, as well as damaged conditions simulated with an added mass. The results confirm the similarity between the three techniques and their ability to filter out environmental effects, while keeping a high sensitivity to structural changes. The results also show that even after filtering out the environmental effects, the normality assumption cannot be made for the residual feature vector. An alternative based on extreme value statistics is demonstrated here, which yields a much better threshold that avoids false positives in the training data while allowing detection of all damaged cases.
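The eigen-decomposition view of the distance can be written in a few lines; a minimal NumPy sketch, assuming rows of X_train are training feature vectors acquired under varying environmental conditions:

```python
import numpy as np

def mahalanobis_sq(X_train, x):
    # Transform to the independent coordinates defined by the eigenvectors
    # of the training covariance; the squared distance is a sum over them.
    mu = X_train.mean(axis=0)
    evals, evecs = np.linalg.eigh(np.cov(X_train, rowvar=False))
    z = evecs.T @ (x - mu)
    # Directions with large eigenvalues (environmental variability) are
    # down-weighted, so the distance naturally filters them out.
    return float(np.sum(z**2 / evals))
```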
NASA Technical Reports Server (NTRS)
White, Jeffery A.; Baurle, Robert A.; Passe, Bradley J.; Spiegel, Seth C.; Nishikawa, Hiroaki
2017-01-01
The ability to solve the equations governing the hypersonic turbulent flow of a real gas on unstructured grids using a spatially-elliptic, 2nd-order accurate, cell-centered, finite-volume method has been recently implemented in the VULCAN-CFD code. This paper describes the key numerical methods and techniques that were found to be required to robustly obtain accurate solutions to hypersonic flows on non-hex-dominant unstructured grids. The methods and techniques described include: an augmented stencil, weighted linear least squares, cell-average gradient method, a robust multidimensional cell-average gradient-limiter process that is consistent with the augmented stencil of the cell-average gradient method and a cell-face gradient method that contains a cell skewness sensitive damping term derived using hyperbolic diffusion based concepts. A data-parallel matrix-based symmetric Gauss-Seidel point-implicit scheme, used to solve the governing equations, is described and shown to be more robust and efficient than a matrix-free alternative. In addition, a y+ adaptive turbulent wall boundary condition methodology is presented. This boundary condition methodology is designed to automatically switch between a solve-to-the-wall and a wall-matching-function boundary condition based on the local y+ of the 1st cell center off the wall. The aforementioned methods and techniques are then applied to a series of hypersonic and supersonic turbulent flat plate unit tests to examine the efficiency, robustness and convergence behavior of the implicit scheme and to determine the ability of the solve-to-the-wall and y+ adaptive turbulent wall boundary conditions to reproduce the turbulent law-of-the-wall. Finally, the thermally perfect, chemically frozen, Mach 7.8 turbulent flow of air through a scramjet flow-path is computed and compared with experimental data to demonstrate the robustness, accuracy and convergence behavior of the unstructured-grid solver for a realistic 3-D geometry on a non-hex-dominant grid.
Forecasting hotspots in East Kutai, Kutai Kartanegara, and West Kutai as early warning information
NASA Astrophysics Data System (ADS)
Wahyuningsih, S.; Goejantoro, R.; Rizki, N. A.
2018-04-01
The aims of this research are to model hotspots and to forecast hotspots for 2017 in East Kutai, Kutai Kartanegara, and West Kutai. The methods used in this research were Holt's exponential smoothing, Holt's additive damped trend method, Holt-Winters' additive method, the additive decomposition method, the multiplicative decomposition method, the Loess decomposition method, and the Box-Jenkins method. Among the smoothing techniques, additive decomposition performed better than Holt's exponential smoothing. The hotspot models obtained using the Box-Jenkins method were ARIMA(1,1,0), ARIMA(0,2,1), and ARIMA(0,1,0). Comparing the results from all methods used in this research on the basis of root mean squared error (RMSE) shows that the Loess decomposition method is the best time series model, because it has the smallest RMSE. The Loess decomposition model was therefore used to forecast the number of hotspots. The forecasting results indicate that hotspots tend to increase at the end of 2017 in Kutai Kartanegara and West Kutai, but remain stationary in East Kutai.
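A Loess-based decomposition of a hotspot series can be sketched with statsmodels' STL (the file name and monthly seasonality are assumptions):

```python
import pandas as pd
from statsmodels.tsa.seasonal import STL

# Hypothetical file of monthly hotspot counts for one regency.
series = pd.read_csv("hotspots.csv", index_col=0, parse_dates=True).squeeze()

result = STL(series, period=12, robust=True).fit()   # Loess-based decomposition
trend, seasonal, resid = result.trend, result.seasonal, result.resid
# The components can be modeled and extrapolated separately to produce the forecast.
```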
Mössbauer study of the thermal decomposition of alkali tris(oxalato)ferrates(III)
NASA Astrophysics Data System (ADS)
Brar, A. S.; Randhawa, B. S.
1985-07-01
The thermal decomposition of alkali (Li, Na, K, Cs, NH4) tris(oxalato)ferrates(III) has been studied at different temperatures up to 700°C using Mössbauer, infrared spectroscopy, and thermogravimetric techniques. The formation of different intermediates has been observed during thermal decomposition. The decomposition in these complexes starts at different temperatures: at 200°C in the case of lithium, cesium, and ammonium ferrate(III), 250°C in the case of sodium, and 270°C in the case of potassium tris(oxalato)ferrate(III). The intermediates Fe(II)C2O4, K6Fe(II)2(ox)5, and Cs2Fe(II)(ox)2(H2O)2 are formed during thermal decomposition of the lithium, potassium, and cesium tris(oxalato)ferrates(III), respectively. In the case of sodium and ammonium tris(oxalato)ferrates(III), the decomposition occurs without reduction to the iron(II) state and leads directly to α-Fe2O3.
NASA Astrophysics Data System (ADS)
Ushijima, T.; Yeh, W.
2013-12-01
An optimal experimental design algorithm is developed to select locations for a network of observation wells that provides the maximum information about unknown hydraulic conductivity in a confined, anisotropic aquifer. The design employs a maximal information criterion that chooses, among competing designs, the design that maximizes the sum of squared sensitivities while conforming to specified design constraints. Because the formulated problem is non-convex and contains integer variables (necessitating a combinatorial search), for a realistically-scaled model the problem may be difficult, if not impossible, to solve through traditional mathematical programming techniques. Genetic Algorithms (GAs) are designed to search out the global optimum; however, because a GA requires a large number of calls to the groundwater model, the formulated optimization problem may still be infeasible to solve. To overcome this, Proper Orthogonal Decomposition (POD) is applied to the groundwater model to reduce its dimension. The information matrix in the full model space can then be searched without solving the full model.
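A generic POD reduction step of the kind described, assuming snapshots of the groundwater model are available as columns of a matrix (the file name is hypothetical):

```python
import numpy as np

S = np.load("snapshots.npy")           # hypothetical: n_nodes x n_snapshots state matrix
U, s, _ = np.linalg.svd(S, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.999)) + 1   # modes capturing 99.9% of the energy
Phi = U[:, :r]                          # POD basis
# For a linear model with operator A, the reduced operator is Phi.T @ A @ Phi,
# so candidate well networks can be evaluated in the r-dimensional space.
```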
Influence of Resin Composition on the Defect Formation in Alumina Manufactured by Stereolithography
Johansson, Emil; Lidström, Oscar; Johansson, Jan; Lyckfeldt, Ola; Adolfsson, Erik
2017-01-01
Stereolithography (SL) is a technique allowing additive manufacturing of complex ceramic parts by selective photopolymerization of a photocurable suspension containing photocurable monomer, photoinitiator, and a ceramic powder. The manufactured three-dimensional object is cleaned and converted into a dense ceramic part by thermal debinding of the polymer network and subsequent sintering. The debinding is the most critical and time-consuming step, and often the source of cracks. In this study, photocurable alumina suspensions have been developed, and the influence of resin composition on defect formation has been investigated. The suspensions were characterized in terms of rheology and curing behaviour, and cross-sections of sintered specimens manufactured by SL were evaluated by SEM. It was found that the addition of a non-reactive component to the photocurable resin reduced polymerization shrinkage and altered the thermal decomposition of the polymer matrix, which led to a reduction in both delamination and intra-laminar cracks. Using a non-reactive component that decomposed rather than evaporated led to less residual porosity. PMID:28772496
A new look at an old mass relation
NASA Astrophysics Data System (ADS)
Gérard, J.-M.; Goffinet, F.; Herquet, M.
2006-02-01
New data from neutrino oscillation experiments motivate us to extend a successful mass relation for the charged leptons to the other fundamental fermions. This new universal relation requires a Dirac mass around 3 × 10^-2 eV for the lightest neutrino and rules out a maximal atmospheric mixing. It also suggests a specific decomposition of the CKM mixing matrix.
NASA Astrophysics Data System (ADS)
Zhang, Dashan; Guo, Jie; Jin, Yi; Zhu, Chang'an
2017-09-01
High-speed cameras provide full-field measurement of structure motions and have been applied in nondestructive testing and noncontact structure monitoring. Recently, a phase-based method was proposed to extract sound-induced vibrations from phase variations in videos, providing insights into remote sound surveillance and material analysis. Here, an efficient singular value decomposition (SVD)-based approach is introduced to detect sound-induced subtle motions from pixel intensities in silent high-speed videos. A high-speed camera first captures a video of the objects vibrating under sound fluctuations. Subimages collected from a small region of the captured video are then reshaped into vectors and assembled into a matrix. Orthonormal image bases (OIBs) are obtained from the SVD of this matrix; the vibration signal can then be obtained by projecting subsequent subimages onto specific OIBs. A simulation test validates the effectiveness and efficiency of the proposed method, and two experiments demonstrate its potential applications in sound recovery and material analysis. Results show that the proposed method efficiently detects subtle motions from the video.
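A minimal NumPy sketch of the OIB construction under stated assumptions (frames is a hypothetical n_frames x h x w array from a small region of the video): mean-removed subimages are stacked as columns, the SVD supplies the bases, and projections give candidate vibration signals.

```python
import numpy as np

def oib_signals(frames, n_basis=4):
    # Pixels-by-frames matrix of mean-removed subimages.
    M = frames.reshape(frames.shape[0], -1).T
    Mc = M - M.mean(axis=1, keepdims=True)
    U, s, Vh = np.linalg.svd(Mc, full_matrices=False)
    # Projecting every subimage onto one orthonormal image basis (OIB)
    # yields a scalar per frame, i.e. a candidate vibration signal.
    return U[:, :n_basis].T @ Mc   # shape: (n_basis, n_frames)
```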
Efficient model reduction of parametrized systems by matrix discrete empirical interpolation
NASA Astrophysics Data System (ADS)
Negri, Federico; Manzoni, Andrea; Amsallem, David
2015-12-01
In this work, we apply a Matrix version of the so-called Discrete Empirical Interpolation (MDEIM) for the efficient reduction of nonaffine parametrized systems arising from the discretization of linear partial differential equations. Dealing with affinely parametrized operators is crucial in order to enhance the online solution of reduced-order models (ROMs). However, in many cases such an affine decomposition is not readily available, and must be recovered through (often) intrusive procedures, such as the empirical interpolation method (EIM) and its discrete variant DEIM. In this paper we show that MDEIM represents a very efficient approach to deal with complex physical and geometrical parametrizations in a non-intrusive, efficient and purely algebraic way. We propose different strategies to combine MDEIM with a state approximation resulting either from a reduced basis greedy approach or Proper Orthogonal Decomposition. A posteriori error estimates accounting for the MDEIM error are also developed in the case of parametrized elliptic and parabolic equations. Finally, the capability of MDEIM to generate accurate and efficient ROMs is demonstrated on the solution of two computationally-intensive classes of problems occurring in engineering contexts, namely PDE-constrained shape optimization and parametrized coupled problems.
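The discrete variant underlying MDEIM selects interpolation indices greedily from a basis of operator snapshots; a standard DEIM sketch (not the paper's code), assuming U holds POD modes as columns:

```python
import numpy as np

def deim_indices(U):
    # Greedy selection: each new index is where the residual of interpolating
    # the next basis vector at the current indices is largest in magnitude.
    n, m = U.shape
    p = [int(np.argmax(np.abs(U[:, 0])))]
    for j in range(1, m):
        c = np.linalg.solve(U[np.ix_(p, range(j))], U[p, j])
        r = U[:, j] - U[:, :j] @ c
        p.append(int(np.argmax(np.abs(r))))
    return np.array(p)
```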
Chang, Chi-Ying; Chang, Chia-Chi; Hsiao, Tzu-Chien
2013-01-01
Excitation-emission matrix (EEM) fluorescence spectroscopy is a noninvasive method for tissue diagnosis and has become important in clinical use. However, the intrinsic characterization of EEM fluorescence remains unclear. Photobleaching and the complexity of the chemical compounds make it difficult to distinguish individual compounds due to overlapping features. Conventional studies use principal component analysis (PCA) for EEM fluorescence analysis, and the relationship between the EEM features extracted by PCA and diseases has been examined. The spectral features of different tissue constituents are not fully separable or clearly defined. Recently, a non-stationary method called multi-dimensional ensemble empirical mode decomposition (MEEMD) was introduced; this method can extract the intrinsic oscillations on multiple spatial scales without loss of information. The aim of this study was to propose a fluorescence spectroscopy system for EEM measurements and to describe a method for extracting the intrinsic characteristics of EEM by MEEMD. The results indicate that, although PCA provides the principal factor for the spectral features associated with chemical compounds, MEEMD can provide additional intrinsic features with more reliable mapping of the chemical compounds. MEEMD has the potential to extract intrinsic fluorescence features and improve the detection of biochemical changes. PMID:24240806
Wen, Zaidao; Hou, Zaidao; Jiao, Licheng
2017-11-01
The discriminative dictionary learning (DDL) framework has been widely used in image classification; it aims to learn class-specific feature vectors, as well as a representative dictionary, from a set of labeled training samples. However, interclass similarities and intraclass variances among input samples and learned features generally weaken the representability of the dictionary and the discrimination of the feature vectors, degrading classification performance. How to represent them explicitly therefore becomes an important issue. In this paper, we present a novel DDL framework with a two-level low-rank and group-sparse decomposition model. In the first level, we learn a class-shared dictionary and several class-specific dictionaries, where a low-rank and a group-sparse regularization are, respectively, imposed on the corresponding feature matrices. In the second level, the class-specific feature matrix is further decomposed into a low-rank and a sparse matrix so that intraclass variances can be separated to concentrate the corresponding feature vectors. Extensive experimental results demonstrate the effectiveness of our model. Compared with other state-of-the-art methods on several popular image databases, our model achieves competitive or better classification accuracy.
Estimation of the axis of a screw motion from noisy data--a new method based on Plücker lines.
Kiat Teu, Koon; Kim, Wangdo
2006-01-01
The problems of estimating the motion and orientation parameters of a body segment from two n point-set patterns are analyzed using the Plücker coordinates of a line (Plücker lines). The aim is to find algorithms less complex than those in conventional use, and thus facilitating more accurate computation of the unknown parameters. All conventional techniques use point transformation to calculate the screw axis. In this paper, we present a novel technique that directly estimates the axis of a screw motion as a Plücker line. The Plücker line can be transformed via the dual-number coordinate transformation matrix. This method is compared with Schwartz and Rozumalski [2005. A new method for estimating joint parameters from motion data. Journal of Biomechanics 38, 107-116] in simulations of random measurement errors and systematic skin movements. Simulation results indicate that the methods based on Plücker lines (Plücker line method) are superior in terms of extremely good results in the determination of the screw axis direction and position as well as a concise derivation of mathematical statements. This investigation yielded practical results, which can be used to locate the axis of a screw motion in a noisy environment. Developing the dual transformation matrix (DTM) from noisy data and determining the screw axis from a given DTM is done in a manner analogous to that for handling simple rotations. A more robust approach to solve for the dual vector associated with DTM is also addressed by using the eigenvector and the singular value decomposition.
Nature of Driving Force for Protein Folding-- A Result From Analyzing the Statistical Potential
NASA Astrophysics Data System (ADS)
Li, Hao; Tang, Chao; Wingreen, Ned S.
1998-03-01
In a statistical approach to protein structure analysis, Miyazawa and Jernigan (MJ) derived a 20 × 20 matrix of inter-residue contact energies between different types of amino acids. Using the method of eigenvalue decomposition, we find that the MJ matrix can be accurately reconstructed from its first two principal component vectors as M_ij = C_0 + C_1(q_i + q_j) + C_2 q_i q_j, with constant C's and 20 q values associated with the 20 amino acids. This regularity is due to hydrophobic interactions and a force of demixing, the latter obeying Hildebrand's solubility theory of simple liquids.
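The rank-2 reconstruction can be reproduced mechanically; a sketch assuming the MJ matrix is available as a 20 × 20 NumPy array (the file name is hypothetical):

```python
import numpy as np

M = np.load("mj_matrix.npy")             # hypothetical: 20 x 20 symmetric MJ matrix
evals, evecs = np.linalg.eigh(M)
order = np.argsort(np.abs(evals))[::-1]  # dominant eigenpairs first
l1, l2 = evals[order[0]], evals[order[1]]
u1, u2 = evecs[:, order[0]], evecs[:, order[1]]
M2 = l1 * np.outer(u1, u1) + l2 * np.outer(u2, u2)  # two-component reconstruction
# If u1 and u2 are both (approximately) affine in a single vector q, M2 reduces
# to the quoted form C_0 + C_1 (q_i + q_j) + C_2 q_i q_j.
```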
Localized motion in random matrix decomposition of complex financial systems
NASA Astrophysics Data System (ADS)
Jiang, Xiong-Fei; Zheng, Bo; Ren, Fei; Qiu, Tian
2017-04-01
Using random matrix theory, we decompose the multi-dimensional time series of complex financial systems into a set of orthogonal eigenmode functions, which are classified into the market mode, sector modes, and random modes. In particular, the localized motion generated by the business sectors plays an important role in financial systems. Both the business sectors and their impact on the stock market are identified from the localized motion. We clarify that the localized motion induces different characteristics of the time correlations for the stock-market index and individual stocks. With a variation of a two-factor model, we reproduce the return-volatility correlations of the eigenmodes.
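Separating the market and sector modes from the random bulk is conventionally done by comparing the correlation-matrix eigenvalues with the Marchenko-Pastur edge; a sketch on synthetic returns (the classification rule is the standard RMT one, assumed rather than quoted from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)
T, N = 2000, 100                      # trading days, stocks (synthetic data)
returns = rng.normal(size=(T, N))

C = np.corrcoef(returns, rowvar=False)
evals, evecs = np.linalg.eigh(C)

lam_max = (1 + np.sqrt(N / T))**2     # Marchenko-Pastur upper edge
nonrandom = evals > lam_max           # market/sector eigenmodes lie above the bulk
```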
NASA Technical Reports Server (NTRS)
Juang, Jer-Nan; Kim, Hye-Young; Junkins, John L.
2003-01-01
A new star pattern recognition method is developed using singular value decomposition of a measured unit column vector matrix in a measurement frame and the corresponding cataloged vector matrix in a reference frame. It is shown that singular values and right singular vectors are invariant with respect to coordinate transformation and robust under uncertainty. One advantage of singular value comparison is that a pairing process for individual measured and cataloged stars is not necessary, and the attitude estimation and pattern recognition process are not separated. An associated method for mission catalog design is introduced and simulation results are presented.
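The invariance is immediate: if the measured matrix is B = A R for an unknown attitude rotation A, then B and R share singular values, so catalog candidates can be compared without pairing individual stars. A sketch with hypothetical 3 x n unit-vector matrices:

```python
import numpy as np

def singular_value_mismatch(B_meas, R_cat):
    # Rotations leave singular values unchanged, so these spectra agree
    # for the correct catalog pattern regardless of attitude.
    s_meas = np.linalg.svd(B_meas, compute_uv=False)
    s_cat = np.linalg.svd(R_cat, compute_uv=False)
    return float(np.linalg.norm(s_meas - s_cat))   # small -> candidate match
```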
A Flexible CUDA LU-based Solver for Small, Batched Linear Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tumeo, Antonino; Gawande, Nitin A.; Villa, Oreste
This chapter presents the implementation of a batched CUDA solver based on LU factorization for small linear systems. This solver may be used in applications such as reactive flow transport models, which apply the Newton-Raphson technique to linearize and iteratively solve the sets of nonlinear equations that represent the reactions for tens of thousands to millions of physical locations. The implementation exploits somewhat counterintuitive GPGPU programming techniques: it assigns the solution of a matrix (representing a system) to a single CUDA thread, does not exploit shared memory and employs dynamic memory allocation on the GPUs. These techniques enable our implementation to simultaneously solve sets of systems with over 100 equations and to employ LU decomposition with complete pivoting, providing the higher numerical accuracy required by certain applications. Other currently available solutions for batched linear solvers are limited by size and only support partial pivoting, although they may be faster in certain conditions. We discuss the code of our implementation and present a comparison with the other implementations, discussing the various tradeoffs in terms of performance and flexibility. This work will enable developers that need batched linear solvers to choose whichever implementation is more appropriate to the features and requirements of their applications, and even to implement dynamic switching approaches that can choose the best implementation depending on the input data.
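For reference, LU with complete pivoting (the higher-accuracy variant that the batched solver assigns to a single CUDA thread per system) can be sketched serially; this is a generic textbook version in NumPy, not the chapter's CUDA code:

```python
import numpy as np

def lu_complete_pivoting(A):
    # Factor P A Q = L U, choosing the largest remaining entry as pivot.
    A = A.astype(float).copy()
    n = A.shape[0]
    prow, pcol = np.arange(n), np.arange(n)
    for k in range(n - 1):
        sub = np.abs(A[k:, k:])
        i, j = np.unravel_index(int(np.argmax(sub)), sub.shape)
        i, j = i + k, j + k
        A[[k, i], :] = A[[i, k], :]; prow[[k, i]] = prow[[i, k]]
        A[:, [k, j]] = A[:, [j, k]]; pcol[[k, j]] = pcol[[j, k]]
        A[k + 1:, k] /= A[k, k]
        A[k + 1:, k + 1:] -= np.outer(A[k + 1:, k], A[k, k + 1:])
    return A, prow, pcol   # unit-lower L and U packed in A; prow/pcol encode P, Q
```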
NASA Astrophysics Data System (ADS)
Cafiero, M.; Lloberas-Valls, O.; Cante, J.; Oliver, J.
2016-04-01
A domain decomposition technique is proposed which is capable of properly connecting arbitrary non-conforming interfaces. The strategy essentially consists of considering a fictitious zero-width interface between the non-matching meshes, which is discretized using a Delaunay triangulation. Continuity is satisfied across domains through normal and tangential stresses provided by the discretized interface and inserted into the formulation in the form of Lagrange multipliers. The final structure of the global system of equations resembles the dual assembly of substructures, where the Lagrange multipliers are employed to nullify the gap between domains. A new approach to handle floating subdomains is outlined which can be implemented without significantly altering the structure of standard industrial finite element codes. The effectiveness of the developed algorithm is demonstrated through a patch test example and a number of tests that highlight the accuracy of the methodology and the independence of the results with respect to the framework parameters. Considering its high degree of flexibility and non-intrusive character, the proposed domain decomposition framework is regarded as an attractive alternative to other established techniques such as the mortar approach.
ERIC Educational Resources Information Center
Wiederholt, Erwin
1983-01-01
DTA is a technique in which the temperature difference between a sample and a reference is measured as a function of temperature while both are subjected to a controlled temperature program. Use of a simple DTA apparatus in demonstrating the catalytic effects of manganese dioxide and aluminum oxide on the decomposition temperature of potassium chlorate is…
ERIC Educational Resources Information Center
Feng, Mingyu; Beck, Joseph E.; Heffernan, Neil T.
2009-01-01
A basic question about instructional interventions is how effective they are at promoting student learning. This paper presents a study to determine the relative efficacy of different instructional strategies by applying an educational data mining technique, learning decomposition. We use logistic regression to determine how much learning is caused by…
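A minimal sketch of the learning-decomposition idea follows, under the usual formulation in this literature: performance is modeled as a logistic function of a weighted count of prior practice opportunities, and the weight beta measures the efficacy of one practice type relative to another. The data, parameter names, and starting values below are illustrative assumptions, not the paper's:

```python
import numpy as np
from scipy.optimize import minimize

# Model: P(correct) = sigmoid(a + b * (n_A + beta * n_B)), where n_A and
# n_B count prior practice opportunities of two instructional types and
# beta is the efficacy of type B relative to type A.
rng = np.random.default_rng(1)
n_A = rng.integers(0, 10, size=500)
n_B = rng.integers(0, 10, size=500)
true = (-1.0, 0.3, 1.5)                           # a, b, beta used to simulate
p = 1 / (1 + np.exp(-(true[0] + true[1] * (n_A + true[2] * n_B))))
y = rng.random(500) < p                           # simulated correctness

def nll(theta):
    a, b, beta = theta
    z = a + b * (n_A + beta * n_B)
    # negative log-likelihood; log(1 + exp(z)) written stably via logaddexp
    return np.sum(np.logaddexp(0, z) - y * z)

fit = minimize(nll, x0=[0.0, 0.1, 1.0], method="Nelder-Mead")
print(fit.x)   # beta > 1 means practice type B yields more learning per trial
```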
DOE Office of Scientific and Technical Information (OSTI.GOV)
Darbar, Devendrasinh; Department of Mechanical Engineering, National University of Singapore, 117576; Department of Physics, National University of Singapore, 117542
2016-01-15
Highlights: • MgCo{sub 2}O{sub 4} was prepared by an oxalate decomposition method and by an electrospinning technique. • Electrospun MgCo{sub 2}O{sub 4} shows a reversible capacity of 795 mAh g{sup −1} after 50 cycles, versus 227 mAh g{sup −1} for the oxalate-decomposition MgCo{sub 2}O{sub 4}. • Electrospun MgCo{sub 2}O{sub 4} shows good cycling stability and electrochemical performance. - Abstract: Magnesium cobalt oxide, MgCo{sub 2}O{sub 4}, was synthesized by an oxalate decomposition method and by an electrospinning technique. The electrochemical performance, structure, phase formation and morphology of the MgCo{sub 2}O{sub 4} synthesized by the two methods are compared. Scanning electron microscope (SEM) studies show spherical and fiber-type morphologies for the oxalate decomposition and electrospinning methods, respectively. The electrospun MgCo{sub 2}O{sub 4} nanofibers calcined at 650 °C showed a very good reversible capacity of 795 mAh g{sup −1} after 50 cycles, compared to the bulk material capacity of 227 mAh g{sup −1}, at a current rate of 60 mA g{sup −1}. The MgCo{sub 2}O{sub 4} nanofibers showed a reversible capacity of 411 mAh g{sup −1} at a current density of 240 mA g{sup −1}. The improved performance is attributed to the improved conductivity of MgO, which may act as a buffer layer leading to improved cycling stability. Cyclic voltammetry studies at a scan rate of 0.058 mV/s show a main cathodic peak at around 1.0 V and an anodic peak at 2.1 V vs. Li.
NASA Astrophysics Data System (ADS)
Murni; Bustamam, A.; Ernastuti; Handhika, T.; Kerami, D.
2017-07-01
Calculation of the matrix-vector multiplication in the real-world problems often involves large matrix with arbitrary size. Therefore, parallelization is needed to speed up the calculation process that usually takes a long time. Graph partitioning techniques that have been discussed in the previous studies cannot be used to complete the parallelized calculation of matrix-vector multiplication with arbitrary size. This is due to the assumption of graph partitioning techniques that can only solve the square and symmetric matrix. Hypergraph partitioning techniques will overcome the shortcomings of the graph partitioning technique. This paper addresses the efficient parallelization of matrix-vector multiplication through hypergraph partitioning techniques using CUDA GPU-based parallel computing. CUDA (compute unified device architecture) is a parallel computing platform and programming model that was created by NVIDIA and implemented by the GPU (graphics processing unit).
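The parallel pattern itself can be sketched without a real hypergraph partitioner: split the rows of an arbitrary (non-square, non-symmetric) sparse matrix into blocks, compute each block's partial product independently, and recombine. In practice the hypergraph partitioner (e.g. PaToH or hMETIS) chooses the blocks to minimize communication volume; the contiguous split below is only a stand-in for that step:

```python
import numpy as np
from scipy import sparse

# Row-partitioned SpMV for an arbitrary rectangular sparse matrix; each
# loop iteration mimics the independent work of one CUDA block/device.
rng = np.random.default_rng(2)
A = sparse.random(1000, 700, density=0.01, random_state=2, format="csr")
x = rng.normal(size=700)

n_parts = 4
bounds = np.linspace(0, A.shape[0], n_parts + 1, dtype=int)
y = np.empty(A.shape[0])
for p in range(n_parts):                     # each iteration = one "device"
    lo, hi = bounds[p], bounds[p + 1]
    y[lo:hi] = A[lo:hi] @ x                  # local SpMV on the row block

print(np.allclose(y, A @ x))                 # True: partitions recombine
```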
Tissue artifact removal from respiratory signals based on empirical mode decomposition.
Liu, Shaopeng; Gao, Robert X; John, Dinesh; Staudenmayer, John; Freedson, Patty
2013-05-01
On-line measurement of respiration plays an important role in monitoring human physical activities. Such measurement commonly employs sensing belts secured around the rib cage and abdomen of the test subject. Affected by the movement of body tissues, respiratory signals typically have a low signal-to-noise ratio. Removing tissue artifacts is therefore critical to ensuring effective respiration analysis. This paper presents a signal decomposition technique for tissue artifact removal from respiratory signals based on empirical mode decomposition (EMD). An algorithm based on mutual information and power criteria was devised to automatically select the appropriate intrinsic mode functions for tissue artifact removal and respiratory signal reconstruction. Performance of the EMD algorithm was evaluated through simulations and real-life experiments (N = 105). Comparison with the conventionally applied low-pass filtering confirmed the effectiveness of the technique in tissue artifact removal.
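The IMF-selection step can be sketched as follows. The snippet assumes the IMFs have already been computed by some EMD implementation (for example the PyEMD package) and uses a simple band-power criterion as a simplified stand-in for the paper's mutual-information and power criteria; the band edges and threshold are illustrative assumptions:

```python
import numpy as np

def select_imfs(imfs, fs, band=(0.1, 0.6), min_frac=0.5):
    """Keep IMFs whose spectral power is mostly inside a plausible
    respiration band (simplified stand-in for the paper's criteria)."""
    keep = []
    freqs = np.fft.rfftfreq(imfs.shape[1], d=1.0 / fs)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    for k, imf in enumerate(imfs):
        power = np.abs(np.fft.rfft(imf)) ** 2
        if power[in_band].sum() >= min_frac * power.sum():
            keep.append(k)
    return keep

# Usage: given `imfs` from any EMD implementation (rows = IMFs), rebuild
# the respiration signal from the retained modes only.
# imfs = EMD().emd(signal)           # e.g. PyEMD, if installed
# clean = imfs[select_imfs(imfs, fs=50.0)].sum(axis=0)
```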
NASA Technical Reports Server (NTRS)
McDowell, Mark
2004-01-01
An integrated algorithm for decomposing overlapping particle images (multi-particle objects) and determining each object's constituent particle centroid(s) has been developed using image analysis techniques. The centroid-finding algorithm uses a modified eight-direction search method for finding the perimeter of any enclosed object. The centroid is calculated using the intensity-weighted center of mass of the object. The overlap decomposition algorithm further analyzes the object data and breaks it down into its constituent particle centroid(s). This is accomplished with an artificial neural network, feature-based technique and provides an efficient way of decomposing overlapping particles. Combining the centroid-finding and overlap decomposition routines into a single algorithm allows us to accurately predict the error associated with finding the centroid(s) of particles in our experiments. The algorithm has been tested using real, simulated, and synthetic data, and the results are presented and discussed.
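The intensity-weighted center of mass used for the centroid is a one-liner in practice; a minimal sketch (not NASA's code) is:

```python
import numpy as np

def intensity_centroid(image, mask):
    """Intensity-weighted center of mass of one segmented object:
    the centroid definition described in the text."""
    rows, cols = np.nonzero(mask)
    w = image[rows, cols].astype(float)
    return (rows @ w / w.sum(), cols @ w / w.sum())

img = np.zeros((5, 5))
img[1:4, 1:4] = [[1, 2, 1], [2, 8, 2], [1, 2, 1]]
print(intensity_centroid(img, img > 0))   # (2.0, 2.0) for this symmetric blob
```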
Ma, Hong-Wu; Zhao, Xue-Ming; Yuan, Ying-Jin; Zeng, An-Ping
2004-08-12
Metabolic networks are organized in a modular, hierarchical manner. Methods for a rational decomposition of the metabolic network into relatively independent functional subsets are essential to better understand the modularity and organization principles of a large-scale, genome-wide network. Network decomposition is also necessary for functional analysis of metabolism by pathway analysis methods, which are often hampered by the problem of combinatorial explosion due to the complexity of the metabolic network. Decomposition methods proposed in the literature are mainly based on the connection degree of metabolites. To obtain a more reasonable decomposition, the global connectivity structure of metabolic networks should be taken into account. In this work, we use a reaction graph representation of a metabolic network for the identification of its global connectivity structure and for decomposition. A bow-tie connectivity structure similar to that previously discovered for the metabolite graph is found to exist in the reaction graph as well. Based on this bow-tie structure, a new decomposition method is proposed which uses a distance definition derived from the path length between two reactions. A hierarchical classification tree is first constructed from the distance matrix among the reactions in the giant strong component of the bow-tie structure. These reactions are then grouped into different subsets based on the hierarchical tree. Reactions in the IN and OUT subsets of the bow-tie structure are subsequently placed in the corresponding subsets according to a 'majority rule'. Compared with the decomposition methods proposed in the literature, ours is based on the combined properties of the global network structure and local reaction connectivity rather than, primarily, on the connection degree of metabolites. The method is applied to decompose the metabolic network of Escherichia coli, yielding eleven subsets. More detailed investigation of the subsets shows that reactions in the same subset are indeed functionally related. The rational decomposition of metabolic networks, and subsequent studies of the subsets, make it easier to understand the inherent organization and functionality of metabolic networks at the modular level. http://genome.gbf.de/bioinformatics/
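The decomposition pipeline for the giant strong component can be sketched end to end with standard tools: pairwise path lengths in a reaction graph give the distance matrix, average-linkage clustering builds the hierarchical tree, and a cut of the tree yields the subsets. The five-reaction graph and the cut level below are illustrative only, not the E. coli network:

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

adj = np.array([[0, 1, 0, 0, 0],      # reaction graph: R_i -> R_j if a
                [0, 0, 1, 0, 0],      # product of R_i feeds R_j
                [1, 0, 0, 1, 0],
                [0, 0, 0, 0, 1],
                [0, 0, 0, 1, 0]], dtype=float)

# Symmetrized shortest-path length as the reaction-reaction distance.
d = shortest_path(adj, method="FW", directed=True)
d = np.minimum(d, d.T)

tree = linkage(squareform(d, checks=False), method="average")
subsets = fcluster(tree, t=2, criterion="maxclust")
print(subsets)                         # subset label for each reaction
```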
Domain Decomposition By the Advancing-Partition Method for Parallel Unstructured Grid Generation
NASA Technical Reports Server (NTRS)
Pirzadeh, Shahyar Z.; Zagaris, George
2009-01-01
A new method of domain decomposition has been developed for generating unstructured grids in subdomains either sequentially or using multiple computers in parallel. Domain decomposition is a crucial and challenging step for parallel grid generation. Prior methods are generally based on auxiliary, complex, and computationally intensive operations for defining partition interfaces and usually produce grids of lower quality than those generated in single domains. The new technique, referred to as "Advancing Partition," is based on the Advancing-Front method, which partitions a domain as part of the volume mesh generation in a consistent and "natural" way. The benefits of this approach are: 1) the process of domain decomposition is highly automated, 2) partitioning of domain does not compromise the quality of the generated grids, and 3) the computational overhead for domain decomposition is minimal. The new method has been implemented in NASA's unstructured grid generation code VGRID.
Optimal cost design of water distribution networks using a decomposition approach
NASA Astrophysics Data System (ADS)
Lee, Ho Min; Yoo, Do Guen; Sadollah, Ali; Kim, Joong Hoon
2016-12-01
Water distribution network decomposition, an engineering approach, is adopted to increase the efficiency of obtaining the optimal cost design of a water distribution network using an optimization algorithm. This study applied the source tracing tool in EPANET, a hydraulic and water quality analysis model, to the decomposition of a network in order to improve the efficiency of the optimal design process. The proposed approach was tested by carrying out the optimal cost design of two water distribution networks, and the results were compared with optimal cost designs derived from previously proposed optimization algorithms. The proposed decomposition approach using the source tracing technique enables the efficient decomposition of an actual large-scale network, and the results can be combined with the optimal cost design process using an optimization algorithm. These comparisons indicate that the final design obtained in this study is better than those obtained with the other optimization algorithms.
Domain Decomposition By the Advancing-Partition Method
NASA Technical Reports Server (NTRS)
Pirzadeh, Shahyar Z.
2008-01-01
A new method of domain decomposition has been developed for generating unstructured grids in subdomains either sequentially or using multiple computers in parallel. Domain decomposition is a crucial and challenging step for parallel grid generation. Prior methods are generally based on auxiliary, complex, and computationally intensive operations for defining partition interfaces and usually produce grids of lower quality than those generated in single domains. The new technique, referred to as "Advancing Partition," is based on the Advancing-Front method, which partitions a domain as part of the volume mesh generation in a consistent and "natural" way. The benefits of this approach are: 1) the process of domain decomposition is highly automated, 2) partitioning of domain does not compromise the quality of the generated grids, and 3) the computational overhead for domain decomposition is minimal. The new method has been implemented in NASA's unstructured grid generation code VGRID.
Parallel processing for pitch splitting decomposition
NASA Astrophysics Data System (ADS)
Barnes, Levi; Li, Yong; Wadkins, David; Biederman, Steve; Miloslavsky, Alex; Cork, Chris
2009-10-01
Decomposition of an input pattern in preparation for a double patterning process is an inherently global problem in which the influence of a local decomposition decision can be felt across an entire pattern. In spite of this, a large portion of the work can be massively distributed. Here, we discuss the advantages of geometric distribution for polygon operations with limited range of influence. Further, we have found that even the naturally global "coloring" step can, in large part, be handled in a geometrically local manner. In some practical cases, up to 70% of the work can be distributed geometrically. We also describe the methods for partitioning the problem into local pieces and present scaling data up to 100 CPUs. These techniques reduce DPT decomposition runtime by orders of magnitude.
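The "coloring" step is, at its core, a two-coloring of a conflict graph whose edges join features that are too close to share a mask; a component containing an odd cycle is uncolorable and must be flagged. The sketch below shows the idea as a plain BFS two-coloring, not the authors' production code; note that each connected component can be colored independently, which is exactly what makes geometric distribution of the work possible:

```python
from collections import deque

def two_color(n, conflicts):
    """Assign each of n polygons one of two masks so that no conflicting
    (too-close) pair shares a mask; None if an odd cycle blocks coloring."""
    adj = [[] for _ in range(n)]
    for a, b in conflicts:
        adj[a].append(b)
        adj[b].append(a)
    color = [-1] * n
    for start in range(n):             # each connected component is
        if color[start] != -1:         # colored independently
            continue
        color[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if color[v] == -1:
                    color[v] = 1 - color[u]
                    queue.append(v)
                elif color[v] == color[u]:
                    return None        # odd conflict cycle: DPT violation
    return color

print(two_color(4, [(0, 1), (1, 2), (2, 3)]))   # [0, 1, 0, 1]
```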
Díez-Pascual, Ana M; Díez-Vicente, Angel L
2014-06-17
Poly(3-hydroxybutyrate) (PHB)-based bionanocomposites incorporating different contents of ZnO nanoparticles were prepared via solution casting technique. The nanoparticles were dispersed within the biopolymer without the need for surfactants or coupling agents. The morphology, thermal, mechanical, barrier, migration and antibacterial properties of the nanocomposites were investigated. The nanoparticles acted as nucleating agents, increasing the crystallization temperature and the degree of crystallinity of the matrix, and as mass transport barriers, hindering the diffusion of volatiles generated during the decomposition process, leading to higher thermal stability. The Young's modulus, tensile and impact strength of the biopolymer were enhanced by up to 43%, 32% and 26%, respectively, due to the strong matrix-nanofiller interfacial adhesion attained via hydrogen bonding interactions, as revealed by the FT-IR spectra. Moreover, the nanocomposites exhibited reduced water uptake and superior gas and vapour barrier properties compared to neat PHB. They also showed antibacterial activity against both Gram-positive and Gram-negative bacteria, which was progressively improved upon increasing ZnO concentration. The migration levels of PHB/ZnO composites in both non-polar and polar simulants decreased with increasing nanoparticle content, and were well below the current legislative limits for food packaging materials. These biodegradable nanocomposites show great potential as an alternative to synthetic plastic packaging materials especially for use in food and beverage containers and disposable applications.
Sladkevich, Sergey; Dupont, Anne-Laurence; Sablier, Michel; Seghouane, Dalila; Cole, Richard B
2016-11-01
Cellulose paper degradation products forming in the "tideline" area at the wet-dry interface of pure cellulose paper were analyzed using gas chromatography-electron ionization-mass spectrometry (GC-EI-MS) and high-resolution electrospray ionization-mass spectrometry (ESI-MS, LTQ Orbitrap) techniques. Different extraction protocols were employed in order to solubilize the products of oxidative cellulose decomposition, i.e., a direct solvent extraction or a more laborious chromophore release and identification (CRI) technique aiming to reveal products responsible for paper discoloration in the tideline area. Several groups of low molecular weight compounds were identified, suggesting a complex pathway of cellulose decomposition in the tidelines formed at the cellulose-water-oxygen interface. Our findings, namely the appearance of a wide range of linear saturated carboxylic acids (from formic to nonanoic), support the oxidative autocatalytic mechanism of decomposition. In addition, the identification of several furanic compounds (which can be, in part, responsible for paper discoloration) plus anhydro carbohydrate derivatives sheds more light on the pathways of cellulose decomposition. Most notably, the mechanisms of tideline formation in the presence of molecular oxygen appear surprisingly similar to pathways of pyrolytic cellulose degradation. More complex chromophore compounds were not detected in this study, thereby revealing a difference between this short-term tideline experiment and longer-term cellulose aging.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gropp, W.D.; Keyes, D.E.
1988-03-01
The authors discuss the parallel implementation of preconditioned conjugate gradient (PCG)-based domain decomposition techniques for self-adjoint elliptic partial differential equations in two dimensions on several architectures. The complexity of these methods is described for a variety of message-passing parallel computers as a function of the problem size, the number of processors, and the relative communication speeds of the processors. They show that communication startup costs are very important, and that even the small amount of global communication in these methods can significantly reduce the performance of many message-passing architectures.
Le, Huy Q.; Molloi, Sabee
2011-01-01
Purpose: Energy resolving detectors provide more than one spectral measurement in one image acquisition. The purpose of this study is to investigate, with simulation, the ability to decompose four materials using energy discriminating detectors and least squares minimization techniques. Methods: Three least squares parameter estimation decomposition techniques were investigated for four-material breast imaging tasks in the image domain. The first technique treats the voxel as if it consisted of fractions of all the materials. The second method assumes that a voxel primarily contains one material and divides the decomposition process into segmentation and quantification tasks. The third is similar to the second method, but a calibration was used. The simulated computed tomography (CT) system consisted of an 80 kVp spectrum and a CdZnTe (CZT) detector that could resolve the x-ray spectrum into five energy bins. A postmortem breast specimen was imaged with flat panel CT to provide a model for the digital phantoms. Hydroxyapatite (HA) (50, 150, 250, 350, 450, and 550 mg/ml) and iodine (4, 12, 20, 28, 36, and 44 mg/ml) contrast elements were embedded into the glandular region of the phantoms. Calibration phantoms consisted of a 30/70 glandular-to-adipose tissue ratio with embedded HA (100, 200, 300, 400, and 500 mg/ml) and iodine (5, 15, 25, 35, and 45 mg/ml). The x-ray transport process was simulated where the Beer-Lambert law, Poisson process, and CZT absorption efficiency were applied. Qualitative and quantitative evaluations of the decomposition techniques were performed and compared. The effect of breast size was also investigated. Results: The first technique decomposed iodine adequately but failed for other materials. The second method separated the materials but was unable to quantify them. With the addition of a calibration, the third technique provided good separation and quantification of hydroxyapatite, iodine, glandular, and adipose tissues. Quantification with this technique was accurate, with errors of 9.83% and 6.61% for HA and iodine, respectively. Calibration at one point (one breast size) showed increased errors as the mismatch in breast diameters between calibration and measurement increased. A four-point calibration successfully decomposed breast diameters spanning the entire range from 8 to 20 cm. For a 14 cm breast, errors were reduced from 5.44% to 1.75% and from 6.17% to 3.27% with the multipoint calibration for HA and iodine, respectively. Conclusions: The results of the simulation study showed that a CT system based on CZT detectors in conjunction with a least squares minimization technique can be used to decompose four materials. The calibrated least squares parameter estimation decomposition technique performed the best, separating and accurately quantifying the concentrations of hydroxyapatite and iodine.
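The first (volume-fraction) technique reduces, per voxel, to a small linear least-squares problem: the measured attenuation in five energy bins equals a 5 x 4 basis-material matrix times the four material fractions. A sketch with made-up attenuation values follows; in a real system the columns would come from measured or calibrated attenuation curves of HA, iodine, glandular, and adipose tissue:

```python
import numpy as np
from scipy.optimize import nnls

# Illustrative 5x4 matrix: attenuation of four basis materials (columns:
# HA, iodine, gland, adipose) in five energy bins (rows). Made-up values.
A = np.array([[4.0, 9.0, 0.40, 0.36],
              [3.0, 7.5, 0.35, 0.28],
              [2.2, 5.0, 0.31, 0.22],
              [1.6, 3.2, 0.28, 0.18],
              [1.2, 2.1, 0.26, 0.15]])

true_frac = np.array([0.05, 0.02, 0.60, 0.33])    # voxel composition
voxel = A @ true_frac + 0.001 * np.random.default_rng(3).normal(size=5)

frac, residual = nnls(A, voxel)                   # non-negative fractions
print(frac.round(3))                              # approximately true_frac
```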
Factorization-based texture segmentation
Yuan, Jiangye; Wang, Deliang; Cheriyadat, Anil M.
2015-06-17
This study introduces a factorization-based approach that efficiently segments textured images. We use local spectral histograms as features and construct an M × N feature matrix using M-dimensional feature vectors in an N-pixel image. Based on the observation that each feature can be approximated by a linear combination of several representative features, we factor the feature matrix into two matrices: one consisting of the representative features and the other containing the weights of the representative features at each pixel, used for the linear combination. The factorization method is based on singular value decomposition and nonnegative matrix factorization. The method uses local spectral histograms to discriminate region appearances in a computationally efficient way and at the same time accurately localizes region boundaries. Finally, experiments conducted on public segmentation data sets show the promise of this simple yet powerful approach.
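A compact sketch of the factorization follows; toy features stand in for local spectral histograms, and the singular-value threshold used to pick the rank is an illustrative assumption:

```python
import numpy as np
from sklearn.decomposition import NMF

# Y (M features x N pixels) ~ Z @ W, where Z holds representative features
# and W the per-pixel combination weights; the pixel's label is its
# dominant weight.
rng = np.random.default_rng(4)
Z_true = rng.random((20, 3))                  # 3 representative features
labels = rng.integers(0, 3, size=500)         # ground-truth region per pixel
Y = Z_true[:, labels] + 0.01 * rng.random((20, 500))

rank = np.sum(np.linalg.svd(Y, compute_uv=False) > 1.0)   # crude rank pick
model = NMF(n_components=rank, init="nndsvd", max_iter=500)
Z = model.fit_transform(Y)                    # representative features
W = model.components_                         # weights at each pixel
segmentation = W.argmax(axis=0)               # region index per pixel,
                                              # up to label permutation
```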
Intelligent transportation systems data compression using wavelet decomposition technique.
DOT National Transportation Integrated Search
2009-12-01
Intelligent Transportation Systems (ITS) generate massive amounts of traffic data, which poses challenges for data storage, transmission and retrieval. Data compression and reconstruction techniques play an important role in ITS data processing…
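A typical wavelet compression scheme of this kind transforms the traffic series, discards small detail coefficients, and reconstructs from the rest. The sketch below uses PyWavelets; the wavelet, decomposition level, and 95th-percentile threshold are illustrative assumptions, not values from the report:

```python
import numpy as np
import pywt  # PyWavelets

t = np.arange(2880)                             # e.g. 30-s counts over a day
flow = (300 + 200 * np.sin(2 * np.pi * t / 2880)
        + np.random.default_rng(5).normal(0, 10, t.size))

coeffs = pywt.wavedec(flow, "db4", level=6)
flat = np.concatenate(coeffs)
thresh = np.percentile(np.abs(flat), 95)        # keep the largest ~5%
compressed = [pywt.threshold(c, thresh, mode="hard") for c in coeffs]

recon = pywt.waverec(compressed, "db4")[: flow.size]
print(np.sqrt(np.mean((flow - recon) ** 2)))    # reconstruction RMSE
```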
Matched field localization based on CS-MUSIC algorithm
NASA Astrophysics Data System (ADS)
Guo, Shuangle; Tang, Ruichun; Peng, Linhui; Ji, Xiaopeng
2016-04-01
The problems caused by having too few or too many snapshots and by coherent sources in underwater acoustic positioning are considered. A matched field localization algorithm based on CS-MUSIC (Compressive Sensing Multiple Signal Classification) is proposed, built on the sparse mathematical model of underwater positioning. The signal matrix is calculated through the SVD (Singular Value Decomposition) of the observation matrix. The observation matrix in the sparse mathematical model is replaced by the signal matrix, and a new, more concise sparse mathematical model is obtained, which reduces both the scale of the localization problem and the noise level; this new sparse model is then solved by the CS-MUSIC algorithm, a combination of the CS (Compressive Sensing) and MUSIC (Multiple Signal Classification) methods. The proposed algorithm can effectively overcome the difficulties caused by correlated sources and a shortage of snapshots, and, when the number of snapshots is large, it can also reduce the time complexity and noise level of the localization problem by using the SVD of the observation matrix, as proved in this paper.
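The SVD step that shrinks the problem can be sketched on a toy array-processing example: the leading left singular vectors of the observation matrix span the signal subspace, and the remaining ones give the noise subspace used by the MUSIC pseudospectrum. The uniform line array below is a simplified stand-in for the matched-field steering vectors:

```python
import numpy as np

rng = np.random.default_rng(6)
n_sensors, n_snap, n_src = 12, 200, 2

def steer(theta):
    # Toy plane-wave steering vector (stand-in for matched-field replicas).
    return np.exp(1j * np.pi * np.arange(n_sensors) * np.sin(theta))

A = np.column_stack([steer(0.3), steer(-0.5)])       # two sources
X = A @ rng.normal(size=(n_src, n_snap)) + 0.1 * (
    rng.normal(size=(n_sensors, n_snap))
    + 1j * rng.normal(size=(n_sensors, n_snap)))

# SVD of the observation matrix: leading singular vectors span the signal
# subspace; the remaining columns form the noise subspace used by MUSIC.
U, s, _ = np.linalg.svd(X, full_matrices=False)
En = U[:, n_src:]

grid = np.linspace(-np.pi / 2, np.pi / 2, 361)
p = np.array([1 / np.linalg.norm(En.conj().T @ steer(th)) ** 2 for th in grid])
peaks = [k for k in range(1, len(p) - 1) if p[k] > p[k - 1] and p[k] > p[k + 1]]
top = sorted(peaks, key=lambda k: p[k])[-2:]
print(np.sort(grid[top]))        # close to the true angles -0.5 and 0.3 rad
```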
NASA Technical Reports Server (NTRS)
Cowen, Jonathan E.; Hepp, Aloysius F.; Duffy, Norman V.; Jose, Melanie J.; Choi, D. B.; Brothers, Scott M.; Baird, Michael F.; Tomsik, Thomas M.; Duraj, Stan A.; Williams, Jennifer N.;
2009-01-01
We describe several related studies in which simple iron, nickel, and cobalt complexes were prepared, decomposed, and characterized for aeronautics (Fischer-Tropsch catalysts) and space (high-fidelity lunar regolith simulant additives) applications. We describe the synthesis and decomposition of several new nickel dithiocarbamate complexes. Decomposition resulted in a somewhat complicated product mix, with NiS predominating. The thermogravimetric analysis of fifteen tris(diorganodithiocarbamato)iron(III) complexes has been investigated. Each undergoes substantial mass loss upon pyrolysis in a nitrogen atmosphere between 195 and 370 C, with major mass losses occurring between 279 and 324 C. Steric repulsion between organic substituents generally decreased the decomposition temperature. The product of the pyrolysis was not well defined, but was usually consistent with being either FeS or Fe2S3 or a combination of these. Iron nanoparticles were grown in a silica matrix with a long-term goal of introducing native iron into a commercial lunar dust simulant in order to more closely simulate actual lunar regolith; this was also one goal of the iron and nickel sulfide studies. Finally, cobalt nanoparticle synthesis is being studied in order to develop alternatives to the crude processing of cobalt salts with ceramic supports for Fischer-Tropsch synthesis.
Coarse-to-fine markerless gait analysis based on PCA and Gauss-Laguerre decomposition
NASA Astrophysics Data System (ADS)
Goffredo, Michela; Schmid, Maurizio; Conforto, Silvia; Carli, Marco; Neri, Alessandro; D'Alessio, Tommaso
2005-04-01
Human movement analysis is generally performed through the use of marker-based systems, which allow reconstructing, with high levels of accuracy, the trajectories of markers placed on specific points of the human body. Marker-based systems, however, have some drawbacks that can be overcome by the use of video systems applying markerless techniques. In this paper, a specifically designed computer vision technique for the detection and tracking of relevant body points is presented. It is based on the Gauss-Laguerre decomposition, and a principal component analysis (PCA) technique is used to circumscribe the region of interest. Results obtained on both synthetic and experimental tests show a significant reduction of the computational costs, with no significant reduction of the tracking accuracy.
Perturbative approach to covariance matrix of the matter power spectrum
NASA Astrophysics Data System (ADS)
Mohammed, Irshad; Seljak, Uroš; Vlah, Zvonimir
2017-04-01
We evaluate the covariance matrix of the matter power spectrum using perturbation theory up to the dominant terms at 1-loop order and compare it to numerical simulations. We decompose the covariance matrix into the disconnected (Gaussian) part, the trispectrum from the modes outside the survey (supersample variance) and the trispectrum from the modes inside the survey, and show how the different components contribute to the overall covariance matrix. We find the agreement with the simulations is at a 10 per cent level up to k ~ 1 h Mpc⁻¹. We show that all the connected components are dominated by the large-scale modes (k < 0.1 h Mpc⁻¹), regardless of the values of the wave vectors k, k′ of the covariance matrix, suggesting that one must be careful in applying the jackknife or bootstrap methods to the covariance matrix. We perform an eigenmode decomposition of the connected part of the covariance matrix, showing that at higher k it is dominated by a single eigenmode. The full covariance matrix can be approximated as the disconnected part only, with the connected part treated as an external nuisance parameter with a known scale dependence and a known prior on its variance for a given survey volume. Finally, we provide a prescription for how to evaluate the covariance matrix from small-box simulations without the need to simulate large volumes.
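The eigenmode statement can be illustrated directly: subtract the disconnected (diagonal) part, eigendecompose the remainder, and check how much a single eigenmode captures. The toy matrix below merely mimics the disconnected-plus-connected structure; it is not power-spectrum data:

```python
import numpy as np

rng = np.random.default_rng(7)
nk = 40
gauss = np.diag(rng.random(nk) + 0.5)            # disconnected (diagonal) part
v = np.linspace(1.0, 2.0, nk)
connected = 0.05 * np.outer(v, v) + 0.001 * np.ones((nk, nk))
cov = gauss + connected                          # full covariance matrix

lam, vecs = np.linalg.eigh(cov - gauss)          # connected part only
rank1 = lam[-1] * np.outer(vecs[:, -1], vecs[:, -1])
frac = np.linalg.norm(connected - rank1) / np.linalg.norm(connected)
print(f"residual after one eigenmode: {frac:.1%}")
```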
Yang, Xi; Han, Guoqiang; Cai, Hongmin; Song, Yan
2017-03-31
Revealing data with intrinsically diagonal block structures is particularly useful for analyzing groups of highly correlated variables. Earlier research based on non-negative matrix factorization (NMF) has been shown to be effective in representing such data by decomposing the observed data into two factors, where one factor is considered to be the feature and the other the expansion loading, from a linear algebra perspective. If the data are sampled from multiple independent subspaces, the loading factor would possess a diagonal structure under an ideal matrix decomposition. However, the standard NMF method and its variants have not been reported to exploit this type of data via direct estimation. To address this issue, a non-negative matrix factorization model with multiple constraints is proposed in this paper. The constraints include a sparsity norm on the feature matrix and a total variation norm on each column of the loading matrix. The proposed model is shown to be capable of efficiently recovering diagonal block structures hidden in observed samples. An efficient numerical algorithm based on the alternating direction method of multipliers is proposed for optimizing the new model. Compared with several benchmark models, the proposed method performs robustly and effectively for simulated and real biological data.
Nanostructure and giant magnetoresistive properties of granular systems.
Kooi, B J; Vystavel, T; De Hosson, J T
2001-03-01
This article aims to make a connection between the microstructures of various nanostructured alloys and their giant magnetoresistive (GMR) properties. The GMR behavior of nanoclusters embedded in a nonmagnetic matrix differs considerably from that of an alloy with the content of a magnetic phase above the percolation threshold; that is to say, the GMR effect increases upon going from 300 to 10 K for the former and decreases for the latter. The following materials systems were examined with high-resolution transmission electron microscopy and magnetoelectrical resistance measurements: magnetic Co and CoFe nanoclusters in a Au matrix, NiFe clusters in a Cu matrix, and NiFe/Cu spinodal decomposition waves with interconnection of the magnetic phase. After annealing (≥ 300 °C), Co particles in Au become semi- or incoherent, whereas under other conditions and in all other systems the interfaces remain coherent. This state of coherency at the interface between magnetic particles and the nonmagnetic matrix turned out to have a detectable influence on the GMR behavior.
Unlu, Ilyas; Spencer, Julie A; Johnson, Kelsea R; Thorman, Rachel M; Ingólfsson, Oddur; McElwee-White, Lisa; Fairbrother, D Howard
2018-03-14
Electron-induced surface reactions of (η5-C5H5)Fe(CO)2Mn(CO)5 were explored in situ under ultra-high vacuum conditions using X-ray photoelectron spectroscopy and mass spectrometry. The initial step involves electron-stimulated decomposition of adsorbed (η5-C5H5)Fe(CO)2Mn(CO)5 molecules, accompanied by the desorption of an average of five CO ligands. A comparison with recent gas phase studies suggests that this precursor decomposition step occurs by a dissociative ionization (DI) process. Further electron irradiation decomposes the residual CO groups and the (η5-C5H5, Cp) ligand, in the absence of any ligand desorption. The decomposition of CO ligands leads to Mn oxidation, while electron-stimulated Cp decomposition causes all of the associated carbon atoms to be retained in the deposit. The lack of any Fe oxidation is ascribed either to the presence of a protective carbonaceous matrix around the Fe atoms created by the decomposition of the Cp ligand, or to the desorption of both CO ligands bound to Fe in the initial decomposition step. The selective oxidation of Mn in the absence of any Fe oxidation suggests that the fate of metal atoms in mixed-metal precursors for focused electron beam induced deposition (FEBID) will be sensitive to the nature and number of ligands in the immediate coordination sphere. In related studies, the composition of deposits created from (η5-C5H5)Fe(CO)2Mn(CO)5 under steady-state deposition conditions, representative of those used to create nanostructures in electron microscopes, was measured and found to be qualitatively consistent with predictions from the UHV surface science studies.
NASA Astrophysics Data System (ADS)
Debnath, M.; Santoni, C.; Leonardi, S.; Iungo, G. V.
2017-03-01
The dynamics of the velocity field resulting from the interaction between the atmospheric boundary layer and a wind turbine array can significantly affect the performance of a wind power plant and the durability of wind turbines. In this work, dynamics in wind turbine wakes and instabilities of helicoidal tip vortices are detected and characterized through modal decomposition techniques. The dataset under examination consists of snapshots of the velocity field obtained from large-eddy simulations (LES) of an isolated wind turbine, in which the aerodynamic forcing exerted by the turbine blades on the atmospheric boundary layer is mimicked through the actuator line model. Particular attention is paid to the interaction between the downstream evolution of the helicoidal tip vortices and the alternate vortex shedding from the turbine tower. The LES dataset is interrogated through different modal decomposition techniques, such as proper orthogonal decomposition and dynamic mode decomposition. The dominant wake dynamics are selected for the formulation of a reduced-order model, which consists of a linear time-marching algorithm in which the temporal evolution of the flow dynamics is obtained from the previous temporal realization multiplied by a time-invariant operator. This article is part of the themed issue 'Wind energy in complex terrains'.
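The snapshot-based workflow behind both decompositions and the reduced-order model can be sketched in a few lines: POD modes come from an SVD of the snapshot matrix, and the time-invariant operator of the ROM is fitted so that each reduced state maps to the next one (a DMD-style least-squares fit). The synthetic oscillatory snapshots below are a placeholder for the LES fields:

```python
import numpy as np

rng = np.random.default_rng(8)
n_pts, n_snap, r = 500, 120, 4

# Synthetic snapshots: four oscillatory temporal signals mixed by random
# spatial modes, plus noise (stand-in for LES velocity snapshots).
t = np.linspace(0, 8 * np.pi, n_snap)
signals = np.vstack([np.sin(t), np.cos(t), np.sin(1.7 * t), np.cos(1.7 * t)])
spatial = rng.normal(size=(n_pts, 4))
X = spatial @ signals + 0.01 * rng.normal(size=(n_pts, n_snap))

U, _, _ = np.linalg.svd(X, full_matrices=False)
Phi = U[:, :r]                                   # POD modes (dominant dynamics)
a = Phi.T @ X                                    # reduced coordinates in time

# Fit the time-invariant operator A so that a[:, k+1] ~ A @ a[:, k].
A = a[:, 1:] @ np.linalg.pinv(a[:, :-1])

a_pred = A @ a[:, :-1]                           # one-step ROM predictions
err = np.linalg.norm(a_pred - a[:, 1:]) / np.linalg.norm(a[:, 1:])
print(f"one-step relative error: {err:.3f}")
```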
Ge, Ni-Na; Wei, Yong-Kai; Zhao, Feng; Chen, Xiang-Rong; Ji, Guang-Fu
2014-07-01
The electronic structure and initial decomposition of the high explosive HMX under shock loading are examined. The simulation is performed using quantum molecular dynamics in conjunction with the multi-scale shock technique (MSST). A self-consistent charge density-functional tight-binding (SCC-DFTB) method is adopted. The results show that the N-N-C angle changes drastically under shock-wave compression along lattice vector b at a shock velocity of 11 km/s, which is the main reason for an insulator-to-metal transition in the HMX system. The metallization pressure (about 130 GPa) of condensed-phase HMX is predicted here for the first time. We also detect the formation of several key products of condensed-phase HMX decomposition, such as NO2, NO, N2, N2O, H2O, CO, and CO2, all of which have been observed in previous experimental studies. Moreover, the initial decomposition products include H2, due to C-H bond breaking as a primary reaction pathway at extreme conditions, which presents a new insight into the initial decomposition mechanism of HMX under shock loading at the atomistic level.
Addressing the United States Navy Need for Software Engineering Education
1999-09-01
taught in MA1996 (5-0). Precalculus review, complex numbers and algebra, the complex plane, DeMoivre's theorem, matrix algebra, LU decomposition… This course was designed for the METOC and Combat Systems curricula. PREREQUISITE: Precalculus mathematics. MA1996 MATHEMATICS FOR SCIENTISTS AND… description for MA1995 (5-0). This course was designed for the METOC and Combat Systems curricula. PREREQUISITE: Precalculus mathematics. PHYSICS/SYSTEMS
Detection of entanglement with few local measurements
NASA Astrophysics Data System (ADS)
Gühne, O.; Hyllus, P.; Bruß, D.; Ekert, A.; Lewenstein, M.; Macchiavello, C.; Sanpera, A.
2002-12-01
We introduce a general method for the experimental detection of entanglement by performing only a few local measurements, assuming some prior knowledge of the density matrix. The idea is based on the minimal decomposition of witness operators into a pseudomixture of local operators. We discuss an experimentally relevant case of two qubits, and show an example of how bound entanglement can be detected with few local measurements.
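The decomposition into local operators can be made concrete for two qubits: expanding a witness W in the Pauli product basis, W = Σᵢⱼ cᵢⱼ σᵢ ⊗ σⱼ with cᵢⱼ = Tr[W(σᵢ ⊗ σⱼ)]/4, each nonzero coefficient corresponds to one local measurement setting. A sketch for the standard singlet-based witness (a textbook example, not necessarily the operator used in the paper) is:

```python
import numpy as np

# Pauli basis for one qubit.
I = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [I, sx, sy, sz]

# Witness W = 1/2 - |psi-><psi-| detecting the singlet state.
psi_minus = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)
W = 0.5 * np.eye(4) - np.outer(psi_minus, psi_minus.conj())

for i, si in enumerate(paulis):
    for j, sj in enumerate(paulis):
        c = np.trace(W @ np.kron(si, sj)).real / 4
        if abs(c) > 1e-12:
            # Only II, XX, YY, ZZ survive: three genuine local measurement
            # settings plus the trivial identity term.
            print(i, j, c)
```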
NASA Technical Reports Server (NTRS)
Huff, Timothy L.
2002-01-01
Thermogravimetric analysis (TGA) is widely employed in the thermal characterization of non-metallic materials, yielding valuable information on the decomposition characteristics of a sample over a wide temperature range. However, a potential wealth of chemical information is lost during the process, with the evolving gases generated during thermal decomposition escaping through the exhaust line. Fourier transform infrared spectroscopy (FT-IR) is a powerful analytical technique for identifying many chemical constituents in any material state, in this application the gas phase. By linking the two techniques, evolving gases generated during the TGA process are directed into an appropriately equipped infrared spectrometer for chemical speciation. Consequently, both the thermal decomposition and the chemical characterization of a material may be obtained in a single sample run. In practice, a heated transfer line connects the two instruments while a purge gas stream directs the evolving gases into the FT-IR. The purge gas can be either high-purity air or an inert gas such as nitrogen, allowing oxidative and pyrolytic processes, respectively, to be examined. The FT-IR data are collected in real time, allowing continuous monitoring of chemical compositional changes over the course of thermal decomposition. Using this coupled technique, an array of diverse materials has been examined, including composites, plastics, rubber, fiberglass epoxy resins, polycarbonates, silicones, lubricants and fluorocarbon materials. The benefit of combining these two methodologies is of particular importance in the aerospace community, where newly developed materials have little available reference data. By providing both thermal and chemical data simultaneously, a more definitive and comprehensive characterization of the material is possible. Additionally, this procedure has been found to be a viable screening technique for certain materials, with the generated data useful in selecting other appropriate analytical procedures for further material characterization.
In Situ Observations of Phase Transitions in Metastable Nickel (Carbide)/Carbon Nanocomposites
2016-01-01
Nanocomposite thin films comprised of metastable metal carbides in a carbon matrix have a wide variety of applications ranging from hard coatings to magnetics and energy storage and conversion. While their deposition using nonequilibrium techniques is established, the understanding of the dynamic evolution of such metastable nanocomposites under thermal equilibrium conditions at elevated temperatures during processing and during device operation remains limited. Here, we investigate sputter-deposited nanocomposites of metastable nickel carbide (Ni3C) nanocrystals in an amorphous carbon (a-C) matrix during thermal postdeposition processing via complementary in situ X-ray diffractometry, in situ Raman spectroscopy, and in situ X-ray photoelectron spectroscopy. At low annealing temperatures (300 °C) we observe isothermal Ni3C decomposition into face-centered-cubic Ni and amorphous carbon, however, without changes to the initial finely structured nanocomposite morphology. Only for higher temperatures (400–800 °C) Ni-catalyzed isothermal graphitization of the amorphous carbon matrix sets in, which we link to bulk-diffusion-mediated phase separation of the nanocomposite into coarser Ni and graphite grains. Upon natural cooling, only minimal precipitation of additional carbon from the Ni is observed, showing that even for highly carbon saturated systems precipitation upon cooling can be kinetically quenched. Our findings demonstrate that phase transformations of the filler and morphology modifications of the nanocomposite can be decoupled, which is advantageous from a manufacturing perspective. Our in situ study also identifies the high carbon content of the Ni filler crystallites at all stages of processing as the key hallmark feature of such metal–carbon nanocomposites that governs their entire thermal evolution. In a wider context, we also discuss our findings with regard to the much debated potential role of metastable Ni3C as a catalyst phase in graphene and carbon nanotube growth.